text (string, 57 to 2.88k characters) | labels (sequence of length 6) |
---|---|
Title: The Riemannian Geometry of Deep Generative Models,
Abstract: Deep generative models learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these
models parameterize nonlinear manifolds in the data space. In this paper, we
investigate the Riemannian geometry of these generated manifolds. First, we
develop efficient algorithms for computing geodesic curves, which provide an
intrinsic notion of distance between points on the manifold. Second, we develop
an algorithm for parallel translation of a tangent vector along a path on the
manifold. We show how parallel translation can be used to generate analogies,
i.e., to transport a change in one data point into a semantically similar
change of another data point. Our experiments on real image data show that the
manifolds learned by deep generative models, while nonlinear, are surprisingly
close to zero curvature. The practical implication is that linear paths in the
latent space closely approximate geodesics on the generated manifold. However,
further investigation into this phenomenon is warranted, to identify whether there
are other architectures or datasets where curvature plays a more prominent
role. We believe that exploring the Riemannian geometry of deep generative
models, using the tools developed in this paper, will be an important step in
understanding the high-dimensional, nonlinear spaces these models learn. | [
1,
0,
0,
1,
0,
0
] |
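An illustrative aside on the abstract above: the geodesic computation it describes can be approximated by minimizing the discrete curve energy of the decoded latent path. The sketch below is a minimal reading of that idea, with a toy decoder standing in for a trained generator; it is not the authors' implementation.

```python
# Minimal sketch: approximate a geodesic on the generated manifold by
# minimizing the discrete curve energy of the decoded latent path.
# The decoder below is a hypothetical stand-in for a trained generator.
import torch

torch.manual_seed(0)
decoder = torch.nn.Sequential(          # g: R^2 -> R^10 (toy stand-in)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 10))

z_start, z_end = torch.randn(2), torch.randn(2)
T = 20
# Interior points of the latent path are the optimization variables,
# initialized on the straight latent-space line.
zs = torch.stack([z_start + (t / T) * (z_end - z_start) for t in range(1, T)])
zs.requires_grad_(True)
opt = torch.optim.Adam([zs], lr=1e-2)

for _ in range(500):
    path = torch.cat([z_start[None], zs, z_end[None]])
    x = decoder(path)                        # decoded curve in data space
    energy = ((x[1:] - x[:-1]) ** 2).sum()   # discrete curve energy
    opt.zero_grad()
    energy.backward()
    opt.step()
```

If the abstract's near-zero-curvature finding holds, the optimized path should stay close to its straight-line initialization.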
Title: Portable, high-performance containers for HPC,
Abstract: Building and deploying software on high-end computing systems is a
challenging task. High performance applications have to reliably run across
multiple platforms and environments, and make use of site-specific resources
while resolving complicated software-stack dependencies. Containers are a type of lightweight virtualization technology that attempts to solve this problem by packaging applications and their environments into standard units of software that are portable and easy to build and deploy, and that have a small footprint and low runtime overhead. In this work we present an extension to the container runtime
of Shifter that provides containerized applications with a mechanism to access
GPU accelerators and specialized networking from the host system, effectively
enabling performance portability of containers across HPC resources. The
presented extension makes it possible to rapidly deploy high-performance software on supercomputers from containerized applications that have been developed, built, and tested on non-HPC commodity hardware, e.g. the laptop or workstation
of a researcher. | [
1,
0,
0,
0,
0,
0
] |
Title: Quantum dynamics of a hydrogen-like atom in a time-dependent box: non-adiabatic regime,
Abstract: We consider a hydrogen atom confined in a time-dependent trap created by a spherical impenetrable box with time-dependent radius. For such a model we study the behavior of the atomic electron under the (non-adiabatic) dynamical confinement caused by the rapidly moving wall of the box. The expectation values of the total and kinetic energy, average force, pressure and coordinate are analyzed as functions of time for linearly expanding, contracting and harmonically breathing boxes. It is shown that a linearly expanding box leads to de-excitation of the atom, while a rapidly contracting box creates very high pressure on the atom and drives the atomic electron into an unbound state. In a harmonically breathing box, diffusive excitation of the atomic electron may occur, in analogy with that of an atom in a microwave field. | [
0,
1,
0,
0,
0,
0
] |
Title: Complexity Results for MCMC derived from Quantitative Bounds,
Abstract: This paper considers how to obtain quantitative MCMC convergence bounds which can be translated into tight complexity bounds in the high-dimensional setting. We propose a modified drift-and-minorization approach, which establishes a generalized drift condition defined on a subset of the state space. The subset is called the "large set", and is chosen to rule out some "bad" states which have poor drift properties when the dimension gets large. Using the "large set"
together with a "centered" drift function, a quantitative bound can be obtained
which can be translated into a tight complexity bound. As a demonstration, we
analyze a certain realistic Gibbs sampler algorithm and obtain a complexity
upper bound for the mixing time, which shows that the number of iterations
required for the Gibbs sampler to converge is constant. It is our hope that
this modified drift-and-minorization approach can be employed in many other
specific examples to obtain complexity bounds for high-dimensional Markov
chains. | [
0,
0,
0,
1,
0,
0
] |
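A small aside on the drift conditions underlying the abstract above: the snippet below numerically checks a geometric drift condition E[V(X')|X=x] <= lambda*V(x) + b on a toy AR(1) chain. The chain, the drift function, and all constants are our illustrative choices, not the paper's Gibbs sampler.

```python
# Toy check of a geometric drift condition for the AR(1) chain X' = a*X + eps
# with V(x) = 1 + x^2. Analytically E[V(X')|x] = a^2*V(x) + (1 - a^2 + sigma^2),
# so the condition holds with lam = a^2 and b = 1 - a^2 + sigma^2.
import numpy as np

a, sigma = 0.5, 1.0
V = lambda x: 1.0 + x ** 2
lam, b = a ** 2, 1.0 - a ** 2 + sigma ** 2

rng = np.random.default_rng(0)
for x0 in (0.0, 2.0, 10.0):
    emp = np.mean(V(a * x0 + sigma * rng.standard_normal(200_000)))
    print(f"x={x0:5.1f}  E[V(X')|x] ~ {emp:7.3f}  bound = {lam * V(x0) + b:7.3f}")
```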
Title: Symmetries and regularity for holomorphic maps between balls,
Abstract: Let $f:{\mathbb B}^n \to {\mathbb B}^N$ be a holomorphic map. We study
subgroups $\Gamma_f \subseteq {\rm Aut}({\mathbb B}^n)$ and $T_f \subseteq {\rm
Aut}({\mathbb B}^N)$. When $f$ is proper, we show both these groups are Lie
subgroups. When $\Gamma_f$ contains the center of ${\bf U}(n)$, we show that
$f$ is spherically equivalent to a polynomial. When $f$ is minimal we show that
there is a homomorphism $\Phi:\Gamma_f \to T_f$ such that $f$ is equivariant
with respect to $\Phi$. To do so, we characterize minimality via the triviality
of a third group $H_f$. We relate properties of ${\rm Ker}(\Phi)$ to older
results on invariant proper maps between balls. When $f$ is proper but
completely non-rational, we show that either both $\Gamma_f$ and $T_f$ are
finite or both are noncompact. | [
0,
0,
1,
0,
0,
0
] |
Title: On the treatment of $\ell$-changing proton-hydrogen Rydberg atom collisions,
Abstract: Energy-conserving, angular momentum-changing collisions between protons and
highly excited Rydberg hydrogen atoms are important for precise understanding
of atomic recombination at the photon decoupling era, and the elemental
abundance after primordial nucleosynthesis. Early approaches to $\ell$-changing
collisions used perturbation theory for only dipole-allowed ($\Delta \ell=\pm
1$) transitions. An exact non-perturbative quantum mechanical treatment is
possible, but it comes at computational cost for highly excited Rydberg states.
In this note we show how to obtain a semi-classical limit that is accurate and
simple, and develop further physical insights afforded by the non-perturbative
quantum mechanical treatment. | [
0,
1,
0,
0,
0,
0
] |
Title: Complex Economic Activities Concentrate in Large Cities,
Abstract: Why do some economic activities agglomerate more than others? And, why does
the agglomeration of some economic activities continue to increase despite
recent developments in communication and transportation technologies? In this
paper, we present evidence that complex economic activities concentrate more in
large cities. We find this to be true for technologies, scientific
publications, industries, and occupations. Using historical patent data, we
show that the urban concentration of complex economic activities has been
continuously increasing since 1850. These findings suggest that the increasing
urban concentration of jobs and innovation might be a consequence of the
growing complexity of the economy. | [
1,
0,
0,
0,
0,
0
] |
Title: Reynolds number dependence of the structure functions in homogeneous turbulence,
Abstract: We compare the predictions of stochastic closure theory (SCT) with
experimental measurements of homogeneous turbulence made in the Variable
Density Turbulence Tunnel (VDTT) at the Max Planck Institute for Dynamics and
Self-Organization in Göttingen. While the general form of SCT contains
infinitely many free parameters, the data permit us to reduce the number to
seven, only three of which are active over the entire inertial range. Of these
three, one parameter characterizes the variance of the mean field noise in SCT
and another characterizes the rate in the large deviations of the mean. The
third parameter is the decay exponent of the Fourier variables in the Fourier
expansion of the noise, which characterizes the smoothness of the turbulent
velocity. SCT compares favorably with velocity structure functions measured in
the experiment. We considered even-order structure functions ranging in order
from two to eight as well as the third-order structure functions at five
Taylor-Reynolds numbers ($R_\lambda$) between 110 and 1450. The comparisons highlight
several advantages of the SCT, which include explicit predictions for the
structure functions at any scale and for any Reynolds number. We observed that
finite-$R_\lambda$ corrections, for instance, are important even at the highest Reynolds
numbers produced in the experiments. SCT gives us the correct basis function to
express all the moments of the velocity differences in turbulence in Fourier
space. The SCT produces the coefficients of the series and so determines the
statistical quantities that characterize the small scales in turbulence. It
also characterizes the random force acting on the fluid in the stochastic
Navier-Stokes equation, as described in the paper. | [
0,
1,
0,
0,
0,
0
] |
Title: Power and Energy-efficiency Roofline Model for GPUs,
Abstract: Energy consumption has been a major concern in recent years, and developers need to take energy efficiency into account when they design algorithms. Their designs need to be energy-efficient and low-power while still achieving the attainable performance provided by the underlying hardware. However, different optimization techniques have different effects on power and energy efficiency, and a visual model would assist in the selection process. In this paper, we extend the roofline model and provide a visual representation of optimization strategies for power consumption. Our model is composed of various ceilings, one for each strategy we included. We introduce one roofline model for computational performance and one for memory performance. We assembled our models based on several optimization strategies for two widespread GPUs from NVIDIA: the GeForce GTX 970 and the Tesla K80. | [
1,
0,
0,
0,
0,
0
] |
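For readers unfamiliar with the roofline model this abstract extends, the classic formula is a one-liner: attainable performance is the minimum of the compute ceiling and the memory ceiling at a given arithmetic intensity. A minimal sketch, with illustrative (not measured) ceilings:

```python
# Hedged sketch of the classic roofline formula the paper extends; the
# peak and bandwidth numbers below are illustrative placeholders.
def attainable_gflops(intensity_flops_per_byte, peak_gflops, peak_gbps):
    """Attainable performance = min(compute ceiling, memory ceiling)."""
    return min(peak_gflops, peak_gbps * intensity_flops_per_byte)

# A kernel with intensity 2 FLOP/byte on a GPU with ~3900 GFLOP/s peak
# compute and ~224 GB/s peak bandwidth is memory-bound:
print(attainable_gflops(2.0, peak_gflops=3900.0, peak_gbps=224.0))  # 448.0
```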
Title: Equivalence of estimates on domain and its boundary,
Abstract: Let $\Omega$ be a pseudoconvex domain in $\mathbb C^n$ with smooth boundary
$b\Omega$. We define general estimates $(f\text{-}\mathcal M)^k_{\Omega}$ and
$(f\text{-}\mathcal M)^k_{b\Omega}$ on $k$-forms for the complex Laplacian
$\Box$ on $\Omega$ and the Kohn-Laplacian $\Box_b$ on $b\Omega$. For $1\le k\le
n-2$, we show that $(f\text{-}\mathcal M)^k_{b\Omega}$ holds if and only if
$(f\text{-}\mathcal M)^k_{\Omega}$ and $(f\text{-}\mathcal M)^{n-k-1}_{\Omega}$
hold. Our proof relies on Kohn's method in [Ann. of Math. (2), 156(1):213--248,
2002]. | [
0,
0,
1,
0,
0,
0
] |
Title: The igus Humanoid Open Platform: A Child-sized 3D Printed Open-Source Robot for Research,
Abstract: The use of standard robotic platforms can accelerate research and lower the
entry barrier for new research groups. There exist many affordable humanoid
standard platforms in the lower size ranges of up to 60cm, but larger humanoid
robots quickly become less affordable and more difficult to operate, maintain
and modify. The igus Humanoid Open Platform is a new and affordable, fully
open-source humanoid platform. At 92cm in height, the robot is capable of
interacting in an environment meant for humans, and is equipped with enough
sensors, actuators and computing power to support researchers in many fields.
The structure of the robot is entirely 3D printed, leading to a lightweight and
visually appealing design. The main features of the platform are described in
this article. | [
1,
0,
0,
0,
0,
0
] |
Title: Chimera states in complex networks: interplay of fractal topology and delay,
Abstract: Chimera states are an example of intriguing partial synchronization patterns
emerging in networks of identical oscillators. They consist of spatially
coexisting domains of coherent (synchronized) and incoherent (desynchronized)
dynamics. We analyze chimera states in networks of Van der Pol oscillators with
hierarchical connectivities, and elaborate on the role of time delay introduced in
the coupling term. In the parameter plane of coupling strength and delay time
we find tongue-like regions of existence of chimera states alternating with
regions of existence of coherent travelling waves. We demonstrate that by
varying the time delay one can deliberately stabilize desired spatio-temporal
patterns in the system. | [
0,
1,
0,
0,
0,
0
] |
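As a rough illustration of the setup in the abstract above: delayed, nonlocally coupled Van der Pol oscillators can be integrated with a simple Euler scheme and a circular history buffer. The ring topology and all parameter values below are illustrative choices; the paper itself studies hierarchical (fractal) connectivities.

```python
# Toy sketch: ring of Van der Pol oscillators with delayed nonlocal coupling,
# integrated by explicit Euler. Not the paper's hierarchical topology.
import numpy as np

N, R = 50, 10            # oscillators; coupling range (R neighbours per side)
eps, sigma = 0.2, 0.1    # Van der Pol nonlinearity; coupling strength
dt, tau = 0.01, 1.0      # time step; coupling delay
d = int(tau / dt)

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
hist = np.tile(x, (d + 1, 1))                  # constant initial history
offsets = np.r_[-R:0, 1:R + 1]
neigh = (np.arange(N)[:, None] + offsets) % N  # nonlocal neighbourhoods

for step in range(20000):
    x_delayed = hist[step % (d + 1)]           # ~ x(t - tau), circular buffer
    hist[step % (d + 1)] = x                   # recycle slot with current state
    coupling = sigma * (x_delayed[neigh].mean(axis=1) - x)
    dx, dy = y, eps * (1.0 - x ** 2) * y - x + coupling
    x, y = x + dt * dx, y + dt * dy
```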
Title: Inferactive data analysis,
Abstract: We describe inferactive data analysis, so-named to denote an interactive
approach to data analysis with an emphasis on inference after data analysis.
Our approach is a compromise between Tukey's exploratory (roughly speaking
"model free") and confirmatory data analysis (roughly speaking classical and
"model based"), also allowing for Bayesian data analysis. We view this approach
as close in spirit to current practice of applied statisticians and data
scientists while allowing frequentist guarantees for results to be reported in
the scientific literature, or Bayesian results where the data scientist may
choose the statistical model (and hence the prior) after some initial
exploratory analysis. While this approach to data analysis does not cover every
scenario, and every possible algorithm data scientists may use, we see this as
a useful step in providing concrete tools (with frequentist statistical
guarantees) for current data scientists. The basis of inference we use is
selective inference [Lee et al., 2016, Fithian et al., 2014], in particular its
randomized form [Tian and Taylor, 2015a]. The randomized framework, besides
providing additional power and shorter confidence intervals, also provides
explicit forms for relevant reference distributions (up to normalization)
through the {\em selective sampler} of Tian et al. [2016]. The reference
distributions are constructed from a particular conditional distribution formed
from what we call a DAG-DAG -- a Data Analysis Generative DAG. As sampling
conditional distributions in DAGs is generally complex, the selective sampler
is crucial to any practical implementation of inferactive data analysis. Our
principal goal is to review the recent developments in selective inference as well as to describe the general philosophy of selective inference. | [
0,
0,
1,
1,
0,
0
] |
Title: Promising Accurate Prefix Boosting for sequence-to-sequence ASR,
Abstract: In this paper, we present promising accurate prefix boosting (PAPB), a
discriminative training technique for attention based sequence-to-sequence
(seq2seq) ASR. PAPB is devised to unify the training and testing scheme in an
effective manner. The training procedure involves maximizing the score of each correct partial sequence (prefix) obtained during beam search relative to other hypotheses. The training objective also includes minimization of the token
(character) error rate. PAPB shows its efficacy by achieving 10.8\% and 3.8\%
WER with and without RNNLM respectively on Wall Street Journal dataset. | [
1,
0,
0,
0,
0,
0
] |
Title: Tunable GMM Kernels,
Abstract: The recently proposed "generalized min-max" (GMM) kernel can be efficiently
linearized, with direct applications in large-scale statistical learning and
fast near neighbor search. The linearized GMM kernel was extensively compared
with the linearized radial basis function (RBF) kernel. On a large number of
classification tasks, the tuning-free GMM kernel performs (surprisingly) well
compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect
that the GMM kernel ought to be further improved if we introduce tuning
parameters.
In this paper, we study three simple constructions of tunable GMM kernels:
(i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM)
kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel
can still be efficiently linearized by modifying the original hashing procedure
for the GMM kernel. On about 60 publicly available classification datasets, we
verify that the proposed tunable GMM kernels typically improve over the
original GMM kernel. On some datasets, the improvements can be astonishingly
significant.
For example, on 11 popular datasets which were used for testing deep learning
algorithms and tree methods, our experiments show that the proposed tunable GMM
kernels are strong competitors to trees and deep nets. The previous studies
developed tree methods including "abc-robust-logitboost" and demonstrated the
excellent performance on those 11 datasets (and other datasets), by
establishing the second-order tree-split formula and new derivatives for
multi-class logistic loss. Compared to tree methods like
"abc-robust-logitboost" (which are slow and need substantial model sizes), the
tunable GMM kernels produce largely comparable results. | [
1,
0,
0,
1,
0,
0
] |
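A compact illustration of the kernel family discussed above, following the abstract's description; the exact pGMM parameterization here is our reading and should be treated as a sketch, not the authors' reference implementation.

```python
# GMM kernel with an optional power parameter p (pGMM-style tuning).
import numpy as np

def gmm_kernel(u, v, p=1.0):
    # Fold signed vectors into nonnegative ones: u -> (max(u, 0), max(-u, 0)).
    uu = np.concatenate([np.maximum(u, 0), np.maximum(-u, 0)]) ** p
    vv = np.concatenate([np.maximum(v, 0), np.maximum(-v, 0)]) ** p
    return np.minimum(uu, vv).sum() / np.maximum(uu, vv).sum()

x, y = np.array([1.0, -2.0, 0.5]), np.array([0.5, -1.0, 1.5])
print(gmm_kernel(x, y), gmm_kernel(x, y, p=0.5))  # p=1 recovers the plain GMM
```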
Title: The linear nature of pseudowords,
Abstract: Given a pseudoword over suitable pseudovarieties, we associate to it a
labeled linear order determined by the factorizations of the pseudoword. We
show that, in the case of the pseudovariety of aperiodic finite semigroups, the
pseudoword can be recovered from the labeled linear order. | [
0,
0,
1,
0,
0,
0
] |
Title: Molecules cooled below the Doppler limit,
Abstract: The ability to cool atoms below the Doppler limit -- the minimum temperature
reachable by Doppler cooling -- has been essential to most experiments with
quantum degenerate gases, optical lattices and atomic fountains, among many
other applications. A broad set of new applications await ultracold molecules,
and the extension of laser cooling to molecules has begun. A molecular
magneto-optical trap has been demonstrated, where molecules approached the
Doppler limit. However, the sub-Doppler temperatures required for most
applications have not yet been reached. Here we cool molecules to 50 uK, well
below the Doppler limit, using a three-dimensional optical molasses. These
ultracold molecules could be loaded into optical tweezers to trap arbitrary
arrays for quantum simulation, launched into a molecular fountain for testing
fundamental physics, and used to study ultracold collisions and ultracold
chemistry. | [
0,
1,
0,
0,
0,
0
] |
Title: On purely generated $α$-smashing weight structures and weight-exact localizations,
Abstract: This paper is dedicated to new methods of constructing weight structures and
weight-exact localizations; our arguments generalize their bounded versions
considered in previous papers of the authors. We start from a class of objects
$P$ of triangulated category $C$ that satisfies a certain negativity condition
(there are no $C$-extensions of positive degrees between elements of $P$; we
actually need a somewhat stronger condition of this sort) to obtain a weight
structure both "halves" of which are closed either with respect to
$C$-coproducts of less than $\alpha$ objects (for $\alpha$ being a fixed
regular cardinal) or with respect to all coproducts (provided that $C$ is
closed with respect to coproducts of this sort). This construction gives all
"reasonable" weight structures satisfying the latter condition. In particular,
we obtain certain weight structures on spectra (in $SH$) consisting of less
than $\alpha$ cells and on certain localizations of $SH$; these results are
new. | [
0,
0,
1,
0,
0,
0
] |
Title: Bayesian fairness,
Abstract: We consider the problem of how decision making can be fair when the
underlying probabilistic model of the world is not known with certainty. We
argue that recent notions of fairness in machine learning need to explicitly
incorporate parameter uncertainty, hence we introduce the notion of {\em
Bayesian fairness} as a suitable candidate for fair decision rules. Using
balance, a definition of fairness introduced by Kleinberg et al. (2016), we show
how a Bayesian perspective can lead to well-performing, fair decision rules
even under high uncertainty. | [
1,
0,
0,
1,
0,
0
] |
Title: Noether's Problem on Semidirect Product Groups,
Abstract: Let $K$ be a field, $G$ a finite group. Let $G$ act on the function field $L
= K(x_{\sigma} : \sigma \in G)$ by $\tau \cdot x_{\sigma} = x_{\tau\sigma}$ for
any $\sigma, \tau \in G$. Denote the fixed field of the action by $K(G) = L^{G}
= \left\{ \frac{f}{g} \in L : \sigma(\frac{f}{g}) = \frac{f}{g}, \forall \sigma
\in G \right\}$. Noether's problem asks whether $K(G)$ is rational (purely
transcendental) over $K$. It is known that if $G = C_m \rtimes C_n$ is a
semidirect product of cyclic groups $C_m$ and $C_n$ with $\mathbb{Z}[\zeta_n]$
a unique factorization domain, and $K$ contains an $e$th primitive root of
unity, where $e$ is the exponent of $G$, then $K(G)$ is rational over $K$. In
this paper, we give another criterion to determine whether $K(C_m \rtimes C_n)$
is rational over $K$. In particular, if $p, q$ are prime numbers and there
exists $x \in \mathbb{Z}[\zeta_q]$ such that the norm
$N_{\mathbb{Q}(\zeta_q)/\mathbb{Q}}(x) = p$, then $\mathbb{C}(C_{p} \rtimes
C_{q})$ is rational over $\mathbb{C}$. | [
0,
0,
1,
0,
0,
0
] |
Title: A note on the uniqueness of models in social abstract argumentation,
Abstract: Social abstract argumentation is a principled way to assign values to
conflicting (weighted) arguments. In this note we discuss the important
property of the uniqueness of the model. | [
1,
0,
0,
0,
0,
0
] |
Title: Bounds on the Size and Asymptotic Rate of Subblock-Constrained Codes,
Abstract: The study of subblock-constrained codes has recently gained attention due to
their application in diverse fields. We present bounds on the size and
asymptotic rate for two classes of subblock-constrained codes. The first class
is binary constant subblock-composition codes (CSCCs), where each codeword is
partitioned into equal sized subblocks, and every subblock has the same fixed
weight. The second class is binary subblock energy-constrained codes (SECCs),
where the weight of every subblock exceeds a given threshold. We present novel
upper and lower bounds on the code sizes and asymptotic rates for binary CSCCs
and SECCs. For a fixed subblock length and small relative distance, we show
that the asymptotic rate for CSCCs (resp. SECCs) is strictly lower than the
corresponding rate for constant weight codes (CWCs) (resp. heavy weight codes
(HWCs)). Further, for codes with high weight and low relative distance, we show that the asymptotic rate for CSCCs is strictly lower than that for SECCs, in contrast to the asymptotic rate for CWCs, which equals that for HWCs. We also
provide a correction to an earlier result by Chee et al. (2014) on the
asymptotic CSCC rate. Additionally, we present several numerical examples
comparing the rates for CSCCs and SECCs with those for constant weight codes
and heavy weight codes. | [
1,
0,
0,
0,
0,
0
] |
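A small counting aside for the code classes defined above: a binary constant subblock-composition word with m subblocks of length L and per-subblock weight w lives in a space of size C(L, w)^m, and a CSCC with a distance constraint is a subset of that space. The parameters below are arbitrary toy values.

```python
# Size of the ambient space of constant subblock-composition words.
from math import comb

L, w, m = 4, 2, 3
print(comb(L, w) ** m)  # 6**3 = 216 candidate words before imposing distance
```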
Title: Computer Self-efficacy and Its Relationship with Web Portal Usage: Evidence from the University of the East,
Abstract: The University of the East Web Portal is an academic, web-based system that provides educational electronic materials and e-learning services. To fully optimize its usage, it is imperative to determine the factors that relate to its use. Thus, this study was conceived to determine the computer self-efficacy of the faculty members of the University of the East and its relationship with their web portal usage. Using a validated questionnaire, the profile of the respondents, their computer self-efficacy, and their web portal usage were gathered. Data showed that the respondents were relatively young (M = 40 years old), the majority held a master's degree (f = 85, 72%), most had been using the web portal for four semesters (f = 60, 51%), and most were intermediate web portal users (f = 69, 59%). They were highly skilled in using the computer
(M = 4.29) and skilled in using the Internet (M = 4.28). E-learning services (M
= 3.29) and online library resources (M = 3.12) were only used occasionally.
Pearson correlation revealed that age was positively correlated with online
library resources (r = 0.267, p < 0.05) and a negative relationship existed
between perceived skill level in using the portal and online library resources
usage (r = -0.206, p < 0.05). A 2x2 chi square revealed that the highest
educational attainment had a significant relationship with online library
resources (chi square = 5.489, df = 1, p < 0.05). Basic computer (r = 0.196, p
< 0.05) and Internet skills (r = 0.303, p < 0.05) were significantly and
positively related with e-learning services usage but not with online library
resources usage. Other individual factors such as attitudes towards the web
portal and anxiety towards using the web portal can be investigated. | [
1,
0,
0,
0,
0,
0
] |
Title: A Liouville theorem for indefinite fractional diffusion equations and its application to existence of solutions,
Abstract: In this work we obtain a Liouville theorem for positive, bounded solutions of
the equation $$ (-\Delta)^s u= h(x_N)f(u) \quad \hbox{in }\mathbb{R}^{N} $$
where $(-\Delta)^s$ stands for the fractional Laplacian with $s\in (0,1)$, and
the functions $h$ and $f$ are nondecreasing. The main feature is that the
function $h$ changes sign in $\mathbb{R}$, therefore the problem is sometimes
termed as indefinite. As an application we obtain a priori bounds for positive
solutions of some boundary value problems, which give existence of such
solutions by means of bifurcation methods. | [
0,
0,
1,
0,
0,
0
] |
Title: Robustness of Quantum-Enhanced Adaptive Phase Estimation,
Abstract: As all physical adaptive quantum-enhanced metrology (AQEM) schemes operate under noisy conditions with only partially understood noise characteristics, a practical control policy must be robust even to unknown noise. We aim to devise a test to evaluate the robustness of AQEM policies and to assess the resources used by the policies. The robustness test is performed on quantum-enhanced adaptive phase estimation (QEAPE) by simulating the scheme under four phase-noise models corresponding to normal-distribution noise, random-telegraph noise, skew-normal-distribution noise, and log-normal-distribution noise. Control policies are devised either by an evolutionary algorithm under the same noisy conditions, albeit ignorant of its properties, or by a Bayesian-based feedback method that assumes no noise. Our robustness test and resource-comparison method can be used to determine the efficacy of control policies and to select a suitable one. | [
1,
0,
0,
1,
0,
0
] |
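For concreteness, the four phase-noise models named in the abstract above can be sampled as follows; the scale and shape parameters are arbitrary illustrative choices, not those used in the paper.

```python
# Sampling the four noise families: normal, random-telegraph, skew-normal,
# and log-normal. Parameters are illustrative only.
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)
n = 10_000
noise = {
    "normal": rng.normal(0.0, 0.1, n),
    "random-telegraph": rng.choice([-0.1, 0.1], size=n),
    "skew-normal": skewnorm.rvs(a=4.0, scale=0.1, size=n, random_state=0),
    "log-normal": rng.lognormal(mean=-2.3, sigma=0.5, size=n),
}
for name, samples in noise.items():
    print(f"{name:17s} mean={samples.mean():+.3f} std={samples.std():.3f}")
```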
Title: How Complex is your classification problem? A survey on measuring classification complexity,
Abstract: Extracting characteristics from the training datasets of classification
problems has proven effective in a number of meta-analyses. Among them,
measures of classification complexity can estimate the difficulty in separating
the data points into their expected classes. Descriptors of the spatial
distribution of the data and estimates of the shape and size of the decision
boundary are among the existent measures for this characterization. This
information can support the formulation of new data-driven pre-processing and
pattern recognition techniques, which can in turn be focused on challenging
characteristics of the problems. This paper surveys and analyzes measures which
can be extracted from the training datasets in order to characterize the
complexity of the respective classification problems. Their use in recent
literature is also reviewed and discussed, allowing us to identify opportunities for future work in the area. Finally, a description is given of an R package
named Extended Complexity Library (ECoL) that implements a set of complexity
measures and is made publicly available. | [
0,
0,
0,
1,
0,
0
] |
Title: Constraining Dark Energy Dynamics in Extended Parameter Space,
Abstract: Dynamical dark energy has been recently suggested as a promising and physical
way to solve the 3.4 sigma tension on the value of the Hubble constant $H_0$
between the direct measurement of Riess et al. (2016) (R16, hereafter) and the
indirect constraint from Cosmic Microwave Anisotropies obtained by the Planck
satellite under the assumption of a $\Lambda$CDM model. In this paper, by
parameterizing dark energy evolution using the $w_0$-$w_a$ approach, and
considering a $12$ parameter extended scenario, we find that: a) the tension on
the Hubble constant can indeed be solved with dynamical dark energy, b) a
cosmological constant is ruled out at more than $95 \%$ c.l. by the Planck+R16
dataset, and c) all of the standard quintessence and half of the "downward
going" dark energy model space (characterized by an equation of state that
decreases with time) is also excluded at more than $95 \%$ c.l. These results
are further confirmed when cosmic shear, CMB lensing, or SN~Ia luminosity
distance data are also included. However, tension remains with the BAO dataset.
A cosmological constant and small portion of the freezing quintessence models
are still in agreement with the Planck+R16+BAO dataset at between 68\% and 95\%
c.l. Conversely, for Planck plus a phenomenological $H_0$ prior, both thawing
and freezing quintessence models prefer a Hubble constant of less than 70
km/s/Mpc. The general conclusions hold also when considering models with
non-zero spatial curvature. | [
0,
1,
0,
0,
0,
0
] |
Title: Evaluation of matrix factorisation approaches for muscle synergy extraction,
Abstract: The muscle synergy concept provides a widely-accepted paradigm to break down
the complexity of motor control. In order to identify the synergies, different
matrix factorisation techniques have been used in a repertoire of fields such
as prosthesis control and biomechanical and clinical studies. However, the
relevance of these matrix factorisation techniques is still open for discussion
since there is no ground truth for the underlying synergies. Here, we evaluate
factorisation techniques and investigate the factors that affect the quality of
estimated synergies. We compared commonly used matrix factorisation methods:
Principal component analysis (PCA), Independent component analysis (ICA),
Non-negative matrix factorization (NMF) and second-order blind identification
(SOBI). Publicly available real data were used to assess the synergies
extracted by each factorisation method in the classification of wrist
movements. Synthetic datasets were utilised to explore the effect of muscle
synergy sparsity, level of noise and number of channels on the extracted
synergies. Results suggest that the sparse synergy model and a higher number of
channels would result in better-estimated synergies. Without dimensionality
reduction, SOBI showed better results than other factorisation methods. This
suggests that SOBI would be an alternative when a limited number of electrodes
is available but its performance was still poor in that case. Otherwise, NMF
had the best performance when the number of channels was higher than the number
of synergies. Therefore, NMF would be the best method for muscle synergy
extraction. | [
0,
0,
0,
0,
1,
0
] |
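As an aside on the factorisation step discussed above, the sketch below extracts synergies from synthetic EMG-like data with scikit-learn's NMF; the channel count, synergy count, and noise level are arbitrary choices for illustration.

```python
# NMF-based synergy extraction on synthetic nonnegative EMG-like data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_synergies, n_channels, n_samples = 3, 12, 1000
W_true = rng.random((n_channels, n_synergies))                   # synergy weights
H_true = np.abs(rng.standard_normal((n_synergies, n_samples)))   # activations
emg = W_true @ H_true + 0.05 * rng.random((n_channels, n_samples))

model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
W_est = model.fit_transform(emg)   # (channels x synergies) estimated weights
H_est = model.components_          # (synergies x samples) estimated activations
print(W_est.shape, H_est.shape)
```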
Title: Comparison of two classifications of a class of ODE's in the case of general position,
Abstract: Two classifications of second order ODE's cubic with respect to the first
order derivative are compared in the case of general position, which is common
for both classifications. The correspondence of vectorial, pseudovectorial,
scalar, and pseudoscalar invariants is established. | [
0,
0,
1,
0,
0,
0
] |
Title: Point-hyperplane frameworks, slider joints, and rigidity preserving transformations,
Abstract: A one-to-one correspondence between the infinitesimal motions of bar-joint
frameworks in $\mathbb{R}^d$ and those in $\mathbb{S}^d$ is a classical
observation by Pogorelov, and further connections among different rigidity
models in various different spaces have been extensively studied. In this
paper, we shall extend this line of research to include the infinitesimal
rigidity of frameworks consisting of points and hyperplanes. This enables us to
understand correspondences between point-hyperplane rigidity, classical
bar-joint rigidity, and scene analysis.
Among other results, we derive a combinatorial characterization of graphs
that can be realized as infinitesimally rigid frameworks in the plane with a
given set of points collinear. This extends a result by Jackson and Jordán,
which deals with the case when three points are collinear. | [
0,
0,
1,
0,
0,
0
] |
Title: Genetic Algorithms for Evolving Computer Chess Programs,
Abstract: This paper demonstrates the use of genetic algorithms for evolving: 1) a
grandmaster-level evaluation function, and 2) a search mechanism for a chess
program, the parameter values of which are initialized randomly. The evaluation
function of the program is evolved by learning from databases of (human)
grandmaster games. At first, the organisms are evolved to mimic the behavior of
human grandmasters, and then these organisms are further improved upon by means
of coevolution. The search mechanism is evolved by learning from tactical test
suites. Our results show that the evolved program outperforms a two-time world
computer chess champion and is on par with the other leading computer chess
programs. | [
1,
0,
0,
1,
0,
0
] |
Title: Low-cost Autonomous Navigation System Based on Optical Flow Classification,
Abstract: This work presents a low-cost robot, controlled by a Raspberry Pi, whose
navigation system is based on vision. The strategy used consisted of
identifying obstacles via optical flow pattern recognition. Its estimation was
done using the Lucas-Kanade algorithm, which can be executed by the Raspberry
Pi without harming its performance. Finally, an SVM-based classifier was used
to identify patterns of this signal associated with obstacle movement. The
developed system was evaluated considering its execution over an optical flow
pattern dataset extracted from a real navigation environment. In the end, it
was verified that the acquisition cost of the system was inferior to that
presented by most of the cited works, while its performance was similar to
theirs. | [
1,
0,
0,
0,
0,
0
] |
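A hedged sketch of the pipeline described above: sparse Lucas-Kanade flow via OpenCV feeding a classifier. The feature construction (mean and spread of per-feature displacements) is our simplification, not the paper's exact descriptor, and the snippet assumes textured frames so that corners are found.

```python
# Lucas-Kanade optical flow features for an obstacle classifier.
import cv2
import numpy as np

def flow_features(prev_gray, curr_gray, max_corners=100):
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    vec = (p1[good] - p0[good]).reshape(-1, 2)       # per-feature displacement
    return np.r_[vec.mean(axis=0), vec.std(axis=0)]  # crude 4-D descriptor

# With a previously fitted sklearn SVM `svm` (hypothetical), classification is:
# label = svm.predict(flow_features(prev, curr)[None, :])
```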
Title: Effects of the Mach number on the evolution of vortex-surface fields in compressible Taylor--Green flows,
Abstract: We investigate the evolution of vortex-surface fields (VSFs) in compressible
Taylor--Green flows at Mach numbers ($Ma$) ranging from 0.5 to 2.0 using direct
numerical simulation. The formulation of VSFs in incompressible flows is
extended to compressible flows, and a mass-based renormalization of VSFs is
used to facilitate characterizing the evolution of a particular vortex surface.
The effects of the Mach number on the VSF evolution are different in three
stages. In the early stage, the jumps of the compressive velocity component
near shocklets generate sinks to contract surrounding vortex surfaces, which
shrink vortex volume and distort vortex surfaces. The subsequent reconnection
of vortex surfaces, quantified by the minimal distance between approaching
vortex surfaces and the exchange of vorticity fluxes, occurs earlier and has a
higher reconnection degree for larger $Ma$ owing to the dilatational
dissipation and shocklet-induced reconnection of vortex lines. In the late
stage, the positive dissipation rate and negative pressure work accelerate the
loss of kinetic energy and suppress vortex twisting with increasing $Ma$. | [
0,
1,
0,
0,
0,
0
] |
Title: Detailed, accurate, human shape estimation from clothed 3D scan sequences,
Abstract: We address the problem of estimating human pose and body shape from 3D scans
over time. Reliable estimation of 3D body shape is necessary for many
applications including virtual try-on, health monitoring, and avatar creation
for virtual reality. Scanning bodies in minimal clothing, however, presents a
practical barrier to these applications. We address this problem by estimating
body shape under clothing from a sequence of 3D scans. Previous methods that
have exploited body models produce smooth shapes lacking personalized details.
We contribute a new approach to recover a personalized shape of the person. The
estimated shape deviates from a parametric model to fit the 3D scans. We
demonstrate the method using high quality 4D data as well as sequences of
visual hulls extracted from multi-view images. We also make available BUFF, a
new 4D dataset that enables quantitative evaluation
(this http URL). Our method outperforms the state of the art in
both pose estimation and shape estimation, qualitatively and quantitatively. | [
1,
0,
0,
0,
0,
0
] |
Title: The Picard groups for unital inclusions of unital $C^*$-algebras,
Abstract: We introduce the notion of the Picard group for an inclusion of $C^*$-algebras, study its basic properties, and examine the relation between the Picard group for an inclusion of $C^*$-algebras and the ordinary Picard group. Furthermore, we give some examples of Picard groups for unital inclusions of unital $C^*$-algebras. | [
0,
0,
1,
0,
0,
0
] |
Title: Multi-Layer Generalized Linear Estimation,
Abstract: We consider the problem of reconstructing a signal from multi-layered
(possibly) non-linear measurements. Using non-rigorous but standard methods
from statistical physics we present the Multi-Layer Approximate Message Passing
(ML-AMP) algorithm for computing marginal probabilities of the corresponding
estimation problem and derive the associated state evolution equations to
analyze its performance. We also give the expression of the asymptotic free
energy and the minimal information-theoretically achievable reconstruction
error. Finally, we present some applications of this measurement model for
compressed sensing and perceptron learning with structured matrices/patterns,
and for a simple model of estimation of latent variables in an auto-encoder. | [
1,
1,
0,
1,
0,
0
] |
Title: The Trace and the Mass of subcritical GJMS Operators,
Abstract: Let $L_g$ be the subcritical GJMS operator on an even-dimensional compact
manifold $(X, g)$ and consider the zeta-regularized trace
$\mathrm{Tr}_\zeta(L_g^{-1})$ of its inverse. We show that if $\ker L_g = 0$,
then the supremum of this quantity, taken over all metrics $g$ of fixed volume
in the conformal class, is always greater than or equal to the corresponding
quantity on the standard sphere. Moreover, we show that in the case that it is
strictly larger, the supremum is attained by a metric of constant mass. Using
positive mass theorems, we give some geometric conditions for this to happen. | [
0,
0,
1,
0,
0,
0
] |
Title: Two-photon imaging assisted by a dynamic random medium,
Abstract: Random scattering is usually viewed as a serious nuisance in optical imaging,
and needs to be prevented in the conventional imaging scheme based on
single-photon interference. Here we propose a two-photon imaging scheme in which the widely used lens is replaced by a dynamic random medium. Rather than destroying the imaging process, the dynamic random medium in our scheme works as a crucial imaging element that brings about constructive interference, allowing us to image an object from the light field scattered by this dynamic random medium. On the one hand, our imaging scheme with incoherent two-photon illumination enables super-resolution imaging with the resolution reaching the Heisenberg limit. On the other hand, with coherent two-photon illumination, the image of a pure-phase object can be obtained in our imaging scheme. These results point to new possibilities for overcoming the bottleneck of widely used single-photon imaging by developing imaging methods based on multi-photon interference. | [
0,
1,
0,
0,
0,
0
] |
Title: Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm,
Abstract: Deployment of deep neural networks (DNNs) in safety- or security-critical
systems requires provable guarantees on their correct behaviour. A common
requirement is robustness to adversarial perturbations in a neighbourhood
around an input. In this paper we focus on the $L_0$ norm and aim to compute,
for a trained DNN and an input, the maximal radius of a safe norm ball around
the input within which there are no adversarial examples. Then we define global
robustness as an expectation of the maximal safe radius over a test data set.
We first show that the problem is NP-hard, and then propose an approximate
approach to iteratively compute lower and upper bounds on the network's
robustness. The approach is \emph{anytime}, i.e., it returns intermediate
bounds and robustness estimates that are gradually, but strictly, improved as
the computation proceeds; \emph{tensor-based}, i.e., the computation is
conducted over a set of inputs simultaneously, instead of one by one, to enable
efficient GPU computation; and has \emph{provable guarantees}, i.e., both the
bounds and the robustness estimates can converge to their optimal values.
Finally, we demonstrate the utility of the proposed approach in practice to
compute tight bounds by applying and adapting the anytime algorithm to a set of
challenging problems, including global robustness evaluation, competitive $L_0$
attacks, test case generation for DNNs, and local robustness evaluation on
large-scale ImageNet DNNs. We release the code of all case studies via GitHub. | [
0,
0,
0,
1,
0,
0
] |
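A one-glance illustration of the L0 notion used in the abstract above: the distance between an input and a perturbed input is simply the number of coordinates (e.g. pixels) that differ.

```python
# L0 distance between an input and an adversarial candidate.
import numpy as np

x = np.array([0.1, 0.5, 0.9, 0.3])
x_adv = np.array([0.1, 0.0, 0.9, 1.0])
print(int(np.count_nonzero(x != x_adv)))  # 2 coordinates changed
```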
Title: Improving Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms and Its Applications,
Abstract: We study combinatorial multi-armed bandit with probabilistically triggered
arms (CMAB-T) and semi-bandit feedback. We resolve a serious issue in the prior
CMAB-T studies where the regret bounds contain a possibly exponentially large
factor of $1/p^*$, where $p^*$ is the minimum positive probability that an arm
is triggered by any action. We address this issue by introducing a triggering
probability modulated (TPM) bounded smoothness condition into the general
CMAB-T framework, and show that many applications such as influence
maximization bandit and combinatorial cascading bandit satisfy this TPM
condition. As a result, we completely remove the factor of $1/p^*$ from the
regret bounds, achieving significantly better regret bounds for influence
maximization and cascading bandits than before. Finally, we provide lower bound
results showing that the factor $1/p^*$ is unavoidable for general CMAB-T
problems, suggesting that the TPM condition is crucial in removing this factor. | [
1,
0,
0,
1,
0,
0
] |
Title: A duality principle for the multi-block entanglement entropy of free fermion systems,
Abstract: The analysis of the entanglement entropy of a subsystem of a one-dimensional
quantum system is a powerful tool for unravelling its critical nature. For
instance, the scaling behaviour of the entanglement entropy determines the
central charge of the associated Virasoro algebra. For a free fermion system,
the entanglement entropy depends essentially on two sets, namely the set $A$ of
sites of the subsystem considered and the set $K$ of excited momentum modes. In
this work we make use of a general duality principle establishing the
invariance of the entanglement entropy under exchange of the sets $A$ and $K$
to tackle complex problems by studying their dual counterparts. The duality
principle is also a key ingredient in the formulation of a novel conjecture for
the asymptotic behavior of the entanglement entropy of a free fermion system in
the general case in which both sets $A$ and $K$ consist of an arbitrary number
of blocks. We have verified that this conjecture reproduces the numerical
results with excellent precision for all the configurations analyzed. We have
also applied the conjecture to deduce several asymptotic formulas for the
mutual and $r$-partite information generalizing the known ones for the single
block case. | [
0,
1,
1,
0,
0,
0
] |
Title: Game Efficiency through Linear Programming Duality,
Abstract: The efficiency of a game is typically quantified by the price of anarchy (PoA), defined as the worst-case ratio between the objective function value of an equilibrium (a solution of the game) and that of an optimal outcome. Given
the tremendous impact of tools from mathematical programming in the design of
algorithms and the similarity of the price of anarchy and different measures
such as the approximation and competitive ratios, it is intriguing to develop a
duality-based method to characterize the efficiency of games.
In the paper, we present an approach based on linear programming duality to
study the efficiency of games. We show that the approach provides a general
recipe to analyze the efficiency of games and also to derive concepts leading
to improvements. The approach is particularly appropriate to bound the PoA.
Specifically, in our approach the dual programs naturally lead to competitive
PoA bounds that are (almost) optimal for several classes of games. The approach
indeed captures the smoothness framework and also some current non-smooth
techniques/concepts. We show the applicability of the approach to a wide variety of games and
environments, from congestion games to Bayesian welfare, from full-information
settings to incomplete-information ones. | [
1,
0,
0,
0,
0,
0
] |
Title: On reductions of the discrete Kadomtsev--Petviashvili-type equations,
Abstract: The reduction by restricting the spectral parameters $k$ and $k'$ on a
generic algebraic curve of degree $\mathcal{N}$ is performed for the discrete
AKP, BKP and CKP equations, respectively. A variety of two-dimensional discrete
integrable systems possessing a more general solution structure arise from the
reduction, and in each case a unified formula for generic positive integer
$\mathcal{N}\geq 2$ is given to express the corresponding reduced integrable
lattice equations. The obtained extended two-dimensional lattice models give
rise to many important integrable partial difference equations as special
degenerations. Some new integrable lattice models such as the discrete
Sawada--Kotera, Kaup--Kupershmidt and Hirota--Satsuma equations in extended
form are given as examples within the framework. | [
0,
1,
0,
0,
0,
0
] |
Title: Convergence diagnostics for stochastic gradient descent with constant step size,
Abstract: Many iterative procedures in stochastic optimization exhibit a transient
phase followed by a stationary phase. During the transient phase the procedure
converges towards a region of interest, and during the stationary phase the
procedure oscillates in that region, commonly around a single point. In this
paper, we develop a statistical diagnostic test to detect such phase transition
in the context of stochastic gradient descent with constant learning rate. We
present theory and experiments suggesting that the region where the proposed
diagnostic is activated coincides with the convergence region. For a class of
loss functions, we derive a closed-form solution describing such region.
Finally, we suggest an application to speed up convergence of stochastic
gradient descent by halving the learning rate each time stationarity is
detected. This leads to a new variant of stochastic gradient descent, which in
many settings is comparable to the state of the art. | [
1,
0,
1,
1,
0,
0
] |
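In the spirit of the diagnostic described above (our sketch, not the paper's exact procedure): stationarity is flagged when the running sum of inner products of successive stochastic gradients turns negative, at which point the learning rate is halved, shown here on a toy one-dimensional quadratic.

```python
# Convergence diagnostic for constant-step SGD on f(th) = (th - 1)^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
theta, lr = 10.0, 0.5
grad = lambda th: (th - 1.0) + rng.normal(scale=1.0)  # noisy gradient

g_prev, running = grad(theta), 0.0
for _ in range(10_000):
    theta -= lr * g_prev
    g = grad(theta)
    running += g_prev * g        # inner product of successive gradients
    g_prev = g
    if running < 0:              # oscillation dominates drift: stationarity
        lr /= 2.0                # halve the step size, as the abstract suggests
        running = 0.0
```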
Title: Strong Khovanov-Floer Theories and Functoriality,
Abstract: We provide a unified framework for proving Reidemeister-invariance and
functoriality for a wide range of link homology theories. These include Lee
homology, Heegaard Floer homology of branched double covers, singular instanton
homology, and Szabó's geometric link homology theory. We follow Baldwin,
Hedden, and Lobb (arXiv:1509.04691) in leveraging the relationships between
these theories and Khovanov homology. We obtain stronger functoriality results
by avoiding spectral sequences and instead showing that each theory factors
through Bar-Natan's cobordism-theoretic link homology theory. | [
0,
0,
1,
0,
0,
0
] |
Title: Formal Verification of Neural Network Controlled Autonomous Systems,
Abstract: In this paper, we consider the problem of formally verifying the safety of an
autonomous robot equipped with a Neural Network (NN) controller that processes
LiDAR images to produce control actions. Given a workspace that is
characterized by a set of polytopic obstacles, our objective is to compute the
set of safe initial conditions such that a robot trajectory starting from these
initial conditions is guaranteed to avoid the obstacles. Our approach is to
construct a finite state abstraction of the system and use standard
reachability analysis over the finite state abstraction to compute the set of
the safe initial states. The first technical problem in computing the finite
state abstraction is to mathematically model the imaging function that maps the
robot position to the LiDAR image. To that end, we introduce the notion of
imaging-adapted sets as partitions of the workspace in which the imaging
function is guaranteed to be affine. We develop a polynomial-time algorithm to
partition the workspace into imaging-adapted sets along with computing the
corresponding affine imaging functions. Given this workspace partitioning, a
discrete-time linear dynamics of the robot, and a pre-trained NN controller
with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge
is to analyze the behavior of the neural network. To that end, we utilize a
Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible
segments of different ReLUs. SMC solvers then use a Boolean satisfiability
solver and a convex programming solver and decompose the problem into smaller
subproblems. To accelerate this process, we develop a pre-processing algorithm
that can rapidly prune the space of feasible ReLU segments. Finally, we
demonstrate the efficiency of the proposed algorithms using numerical
simulations with increasing complexity of the neural network controller. | [
1,
0,
0,
0,
0,
0
] |
Title: Measurements of the depth of maximum of air-shower profiles at the Pierre Auger Observatory and their composition implications,
Abstract: Air showers measured by the Pierre Auger Observatory were analyzed in order to extract the depth of maximum (Xmax). The results allow the analysis of the Xmax distributions as a function of energy ($> 10^{17.8}$ eV). The Xmax distributions, their mean and standard deviation are analyzed with the help of shower simulations with the aim of interpreting the mass composition. The mean and standard deviation were used to derive <ln A> and its variance as a function of energy. The fractions of four components (p, He, N and Fe) were fitted to the Xmax distributions. Regardless of the hadronic model used, the data are better described by a mix of light, intermediate and heavy primaries. Also, independent of the hadronic models, a decrease of the proton flux with energy is observed. No significant contribution of iron nuclei is found in the
entire energy range studied. | [
0,
1,
0,
0,
0,
0
] |
Title: Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML,
Abstract: Reward augmented maximum likelihood (RAML), a simple and effective learning
framework to directly optimize towards the reward function in structured
prediction tasks, has led to a number of impressive empirical successes. RAML
incorporates task-specific reward by performing maximum-likelihood updates on
candidate outputs sampled according to an exponentiated payoff distribution,
which gives higher probabilities to candidates that are close to the reference
output. While RAML is notable for its simplicity, efficiency, and its
impressive empirical successes, the theoretical properties of RAML, especially
the behavior of the exponentiated payoff distribution, have not been examined
thoroughly. In this work, we introduce softmax Q-distribution estimation, a
novel theoretical interpretation of RAML, which reveals the relation between
RAML and Bayesian decision theory. The softmax Q-distribution can be regarded
as a smooth approximation of the Bayes decision boundary, and the Bayes
decision rule is achieved by decoding with this Q-distribution. We further show
that RAML is equivalent to approximately estimating the softmax Q-distribution,
with the temperature $\tau$ controlling approximation error. We perform two
experiments, one on synthetic data of multi-class classification and one on
real data of image captioning, to demonstrate the relationship between RAML and
the proposed softmax Q-distribution estimation method, verifying our
theoretical analysis. Additional experiments on three structured prediction
tasks with rewards defined on sequential (named entity recognition), tree-based
(dependency parsing) and irregular (machine translation) structures show
notable improvements over maximum likelihood baselines. | [
1,
0,
0,
1,
0,
0
] |
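For reference, the exponentiated payoff distribution at the heart of the abstract above assigns each candidate probability proportional to exp(r(y, y*)/tau); the rewards below are arbitrary toy values.

```python
# Exponentiated payoff distribution over candidate outputs.
import numpy as np

def payoff_distribution(rewards, tau):
    z = np.exp(np.asarray(rewards, dtype=float) / tau)
    return z / z.sum()

# Rewards of three candidates vs. the reference (e.g. negative edit distance).
print(payoff_distribution([0.0, -1.0, -3.0], tau=0.5))
```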
Title: On general $(α, β)$-metrics of weak Landsberg type,
Abstract: In this paper, we study general $(\alpha,\beta)$-metrics in which $\alpha$ is a Riemannian metric and $\beta$ is a one-form. We prove that every weak Landsberg general $(\alpha,\beta)$-metric is a Berwald metric, where $\beta$ is a closed and conformal one-form. This shows that no generalized unicorn metric exists in this class of general $(\alpha,\beta)$-metrics. Further, we show that $F$ is a Landsberg general $(\alpha,\beta)$-metric if and only if it is a weak Landsberg general $(\alpha,\beta)$-metric, where $\beta$ is a closed and conformal one-form. | [
0,
0,
1,
0,
0,
0
] |
Title: An Intersectional Definition of Fairness,
Abstract: We introduce a measure of fairness for algorithms and data with regard to
multiple protected attributes. Our proposed definition, differential fairness,
is informed by the framework of intersectionality, which analyzes how
interlocking systems of power and oppression affect individuals along
overlapping dimensions including race, gender, sexual orientation, class, and
disability. We show that our criterion behaves sensibly for any subset of the
set of protected attributes, and we illustrate links to differential privacy. A
case study on census data demonstrates the utility of our approach. | [
0,
0,
0,
1,
0,
0
] |
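A sketch of a differential-fairness style check following the abstract's idea: positive-outcome rates across protected groups should differ by at most a factor of e^epsilon for every pair of groups. The paper's exact definition may differ in details (e.g. smoothing of estimated rates).

```python
# Smallest epsilon such that all pairwise rate ratios are within e**epsilon.
import numpy as np

def df_epsilon(rates):
    logr = np.log(np.asarray(rates, dtype=float))
    return float(logr.max() - logr.min())

print(df_epsilon([0.30, 0.28, 0.35]))  # ~0.223
```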
Title: Technological Parasitism,
Abstract: Technological parasitism is a new theory to explain the evolution of
technology in society. In this context, this study proposes a model to analyze
the interaction between a host technology (system) and a parasitic technology
(subsystem) to explain evolutionary pathways of technologies as complex
systems. The coefficient of evolutionary growth of the model here indicates the
typology of evolution of parasitic technology in relation to host technology:
i.e., underdevelopment, growth and development. This approach is illustrated
with realistic examples using empirical data of product and process
technologies. Overall, then, the theory of technological parasitism can be
useful for bringing a new perspective to explain and generalize the evolution
of technology and predict which innovations are likely to evolve rapidly in
society. | [
0,
0,
0,
0,
0,
1
] |
Title: On the sample mean after a group sequential trial,
Abstract: A popular setting in medical statistics is a group sequential trial with
independent and identically distributed normal outcomes, in which interim
analyses of the sum of the outcomes are performed. Based on a prescribed
stopping rule, one decides after each interim analysis whether the trial is
stopped or continued. Consequently, the actual length of the study is a random
variable. It is reported in the literature that the interim analyses may cause
bias if one uses the ordinary sample mean to estimate the location parameter.
For a generic stopping rule, which contains many classical stopping rules as a
special case, explicit formulas for the expected length of the trial, the bias,
and the mean squared error (MSE) are provided. It is deduced that, for a fixed
number of interim analyses, the bias and the MSE converge to zero if the first
interim analysis is performed not too early. In addition, optimal rates for
this convergence are provided. Furthermore, under a regularity condition,
asymptotic normality in total variation distance for the sample mean is
established. A conclusion for naive confidence intervals based on the sample
mean is derived. It is also shown how the developed theory naturally fits in
the broader framework of likelihood theory in a group sequential trial setting.
A simulation study underpins the theoretical findings. | [
0,
0,
1,
1,
0,
0
] |
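A simulation aside for the abstract above: the bias of the naive sample mean appears already under a single interim analysis with a simple stop-if-large rule. The rule and constants below are illustrative, not the paper's generic stopping rule.

```python
# Bias of the ordinary sample mean under one interim analysis.
import numpy as np

rng = np.random.default_rng(0)
mu, n1, n2, reps = 0.0, 20, 20, 100_000
means = np.empty(reps)
for r in range(reps):
    stage1 = rng.normal(mu, 1.0, n1)
    if stage1.sum() > 5.0:                 # stop early at the interim analysis
        means[r] = stage1.mean()
    else:                                  # otherwise continue to full sample
        means[r] = np.r_[stage1, rng.normal(mu, 1.0, n2)].mean()
print("bias of the sample mean:", means.mean() - mu)  # positive bias
```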
Title: A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings,
Abstract: A reliable wireless connection between the operator and the teleoperated
Unmanned Ground Vehicle (UGV) is critical in many Urban Search and Rescue
(USAR) missions. Unfortunately, as was seen in e.g. the Fukushima disaster, the
networks available in areas where USAR missions take place are often severely
limited in range and coverage. Therefore, during mission execution, the
operator needs to keep track of not only the physical parts of the mission,
such as navigating through an area or searching for victims, but also the
variations in network connectivity across the environment. In this paper, we
propose and evaluate a new teleoperation User Interface (UI) that includes a
way of estimating the Direction of Arrival (DoA) of the Radio Signal Strength
(RSS) and integrating the DoA information in the interface. The evaluation
shows that using the interface results in more objects found, and fewer aborted
missions due to connectivity problems, as compared to a standard interface. The
proposed interface is an extension to an existing interface centered around the
video stream captured by the UGV. But instead of just showing the network
signal strength in terms of percent and a set of bars, the additional
information of DoA is added in terms of a color bar surrounding the video feed.
With this information, the operator knows what movement directions are safe,
even when moving in regions close to the connectivity threshold. | [
1,
0,
0,
0,
0,
0
] |
Title: Practical Integer-to-Binary Mapping for Quantum Annealers,
Abstract: Recent advancements in quantum annealing hardware and numerous studies in
this area suggest that quantum annealers have the potential to be effective in
solving unconstrained binary quadratic programming problems. Naturally, one may
desire to expand the application domain of these machines to problems with
general discrete variables. In this paper, we explore the possibility of
employing quantum annealers to solve unconstrained quadratic programming
problems over a bounded integer domain. We present an approach for encoding
integer variables into binary ones, thereby representing unconstrained integer
quadratic programming problems as unconstrained binary quadratic programming
problems. To respect some of the limitations of the currently developed quantum
annealers, we propose an integer encoding, named bounded-coefficient encoding,
in which we limit the size of the coefficients that appear in the encoding.
Furthermore, we propose an algorithm for finding the upper bound on the
coefficients of the encoding using the precision of the machine and the
coefficients of the original integer problem. Finally, we experimentally show
that this approach is far more resilient to the noise of the quantum annealers
compared to traditional approaches for the encoding of integers in base two. | [
1,
0,
1,
0,
0,
0
] |
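As an illustration of the idea, the sketch below gives one greedy way to build an encoding whose coefficients never exceed a cap; it is consistent with the abstract's description but is not claimed to be the paper's exact construction, and the example values of `b` and `mu` are hypothetical.

```python
def bounded_coefficient_encoding(b: int, mu: int) -> list[int]:
    """Coefficients c_i <= mu such that {sum_i c_i y_i : y_i in {0,1}} = {0,...,b}."""
    coeffs, covered = [], 0                    # encodable range so far: 0..covered
    while covered < b:
        # keep the range gap-free: a new coefficient may not exceed covered + 1
        c = min(covered + 1, mu, b - covered)
        coeffs.append(c)
        covered += c
    return coeffs

# Each integer variable x in {0,...,20} becomes sum(c[i] * y[i]) over binaries y.
print(bounded_coefficient_encoding(b=20, mu=4))  # [1, 2, 4, 4, 4, 4, 1]
```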
Title: Nonequilibrium entropic bounds for Darwinian replicators,
Abstract: Life evolved on our planet by means of a combination of Darwinian selection
and innovations leading to higher levels of complexity. The emergence and
selection of replicating entities is a central problem in prebiotic evolution.
Theoretical models have shown how populations of different types of replicating
entities exclude or coexist with other classes of replicators. Models are
typically kinetic, based on standard replicator equations. On the other hand,
the presence of thermodynamic constraints for these systems remains an open
question. This is largely due to the lack of a general theory of statistical
methods for systems far from equilibrium. Nonetheless, a first
approach to this problem has been put forward in a series of novel
developments in non-equilibrium physics, under the rubric of the extended
second law of thermodynamics. The work presented here is twofold: firstly, we
review this theoretical framework and provide a brief description of the three
fundamental replicator types in prebiotic evolution: parabolic, Malthusian and
hyperbolic. Finally, we employ these previously mentioned techniques to explore
how replicators are constrained by thermodynamics. | [
0,
1,
0,
0,
0,
0
] |
Title: Possible Quasi-Periodic modulation in the z = 1.1 $γ$-ray blazar PKS 0426-380,
Abstract: We search for $\gamma$-ray and optical periodic modulations in a distant flat
spectrum radio quasar (FSRQ) PKS 0426-380 (the redshift $z=1.1$). Using two
techniques (i.e., the maximum likelihood optimization and the exposure-weighted
aperture photometry), we obtain $\gamma$-ray light curves from \emph{Fermi}-LAT
Pass 8 data covering from 2008 August to 2016 December. We then analyze the
light curves with the Lomb-Scargle Periodogram (LSP) and the Weighted Wavelet
Z-transform (WWZ). A $\gamma$-ray quasi-periodicity with a period of 3.35 $\pm$
0.68 years is found at a significance level of $\simeq3.6\ \sigma$. The
optical-UV flux covering from 2005 August to 2013 April provided by ASI SCIENCE
DATA CENTER is also analyzed, but no significant quasi-periodicity is found. It
should be pointed out that the optical-UV result could be tentative
because of the incompleteness of the data. Further long-term multiwavelength
monitoring of this FSRQ is needed to confirm its quasi-periodicity. | [
0,
1,
0,
0,
0,
0
] |
Title: Study of charged hadron multiplicities in charged-current neutrino-lead interactions in the OPERA detector,
Abstract: The OPERA experiment was designed to search for $\nu_{\mu} \rightarrow
\nu_{\tau}$ oscillations in appearance mode through the direct observation of
tau neutrinos in the CNGS neutrino beam. In this paper, we report a study of
the multiplicity of charged particles produced in charged-current neutrino
interactions in lead. We present charged hadron average multiplicities, their
dispersion and investigate the KNO scaling in different kinematical regions.
The results are presented in detail in the form of tables that can be used in
the validation of Monte Carlo generators of neutrino-lead interactions. | [
0,
1,
0,
0,
0,
0
] |
Title: Graphene and its elemental analogue: A molecular dynamics view of fracture phenomenon,
Abstract: Graphene and some graphene-like two-dimensional materials, such as hexagonal
boron nitride (hBN) and silicene, have unique mechanical properties that severely
limit the suitability of conventional theories used for common brittle and
ductile materials to predict the fracture response of these materials. This
study reveals the fracture response of graphene, hBN and silicene nanosheets
under different tiny crack lengths by molecular dynamics (MD) simulations using
LAMMPS. The useful strength of these large-area two-dimensional materials is
determined by their fracture toughness. Our study presents a comparative
analysis of mechanical properties among the elemental analogues of graphene and
suggests that hBN can be a good substitute for graphene in terms of mechanical
properties. We have also found that the pre-cracked sheets fail in a brittle
manner and that their failure is governed by the strength of the atomic bonds at
the crack tip. The MD prediction of fracture toughness differs significantly
from the fracture toughness determined by Griffith's theory of brittle failure,
which restricts the applicability of Griffith's criterion for these materials in
the case of nano-cracks. Moreover, the strengths measured in the armchair and
zigzag directions of the nanosheets imply that the bonds in the armchair
direction have a stronger capability to resist crack propagation than those in
the zigzag direction. | [
0,
1,
0,
0,
0,
0
] |
Title: Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning,
Abstract: Skilled robotic manipulation benefits from complex synergies between
non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing
can help rearrange cluttered objects to make space for arms and fingers;
likewise, grasping can help displace objects to make pushing movements more
precise and collision-free. In this work, we demonstrate that it is possible to
discover and learn these synergies from scratch through model-free deep
reinforcement learning. Our method involves training two fully convolutional
networks that map from visual observations to actions: one infers the utility
of pushes for a dense pixel-wise sampling of end effector orientations and
locations, while the other does the same for grasping. Both networks are
trained jointly in a Q-learning framework and are entirely self-supervised by
trial and error, where rewards are provided from successful grasps. In this
way, our policy learns pushing motions that enable future grasps, while
learning grasps that can leverage past pushes. During picking experiments in
both simulation and real-world scenarios, we find that our system quickly
learns complex behaviors amid challenging cases of clutter, and achieves better
grasping success rates and picking efficiencies than baseline alternatives
after only a few hours of training. We further demonstrate that our method is
capable of generalizing to novel objects. Qualitative results (videos), code,
pre-trained models, and simulation environments are available at
this http URL | [
1,
0,
0,
1,
0,
0
] |
Title: Decoupled Access-Execute on ARM big.LITTLE,
Abstract: Energy-efficiency plays a significant role given the battery lifetime
constraints in embedded systems and hand-held devices. In this work we target
the ARM big.LITTLE, a heterogeneous platform that is dominant in the mobile and
embedded market, which allows code to run transparently on different
microarchitectures with individual energy and performance characteristics. It
allows the use of more energy-efficient cores to conserve power during simple tasks
and idle times and switch over to faster, more power hungry cores when
performance is needed. This proposal explores the power-savings and the
performance gains that can be achieved by utilizing the ARM big.LITTLE core in
combination with Decoupled Access-Execute (DAE). DAE is a compiler technique
that splits code regions into two distinct phases: a memory-bound Access phase
and a compute-bound Execute phase. By scheduling the memory-bound phase on the
LITTLE core, and the compute-bound phase on the big core, we conserve energy
while caching data from main memory and perform computations at maximum
performance. Our preliminary findings show that applying DAE on ARM big.LITTLE
has potential. By prefetching data in Access we can achieve an IPC improvement
of up to 37% in the Execute phase, and manage to shift more than half of the
program runtime to the LITTLE core. We also provide insight into advantages and
disadvantages of our approach, present preliminary results and discuss
potential solutions to overcome locking overhead. | [
1,
0,
0,
0,
0,
0
] |
Title: Handling Adversarial Concept Drift in Streaming Data,
Abstract: Classifiers operating in a dynamic, real world environment, are vulnerable to
adversarial activity, which causes the data distribution to change over time.
These changes are traditionally referred to as concept drift, and several
approaches have been developed in literature to deal with the problem of drift
handling and detection. However, most concept drift handling techniques
approach it as a domain-independent task, to make them applicable to a wide
gamut of reactive systems. These techniques were developed from an
adversary-agnostic perspective, where they naively assume that drift is a benign
change, which can be fixed by updating the model. However, this is not the case
when an active adversary is trying to evade the deployed classification system.
In such an environment, the properties of concept drift are unique, as the
drift is intended to degrade the system and at the same time designed to avoid
detection by traditional concept drift detection techniques. This special
category of drift is termed as adversarial drift, and this paper analyzes its
characteristics and impact, in a streaming environment. A novel framework for
dealing with adversarial concept drift is proposed, called the Predict-Detect
streaming framework. Experimental evaluation of the framework, on generated
adversarial drifting data streams, demonstrates that this framework is able to
provide reliable unsupervised indication of drift, and is able to recover from
drifts swiftly. While traditional partially labeled concept drift detection
methodologies fail to detect adversarial drifts, the proposed framework is able
to detect such drifts and operates with <6% labeled data, on average. Also, the
framework provides benefits for active learning over imbalanced data streams,
by innately providing for feature space honeypots, where minority class
adversarial samples may be captured. | [
0,
0,
0,
1,
0,
0
] |
Title: Real intersection homology,
Abstract: We present a definition of intersection homology for real algebraic varieties
that is analogous to Goresky and MacPherson's original definition of
intersection homology for complex varieties. | [
0,
0,
1,
0,
0,
0
] |
Title: High Contrast Observations of Bright Stars with a Starshade,
Abstract: Starshades are a leading technology to enable the direct detection and
spectroscopic characterization of Earth-like exoplanets. In an effort to
advance starshade technology through system level demonstrations, the
McMath-Pierce Solar Telescope was adapted to enable the suppression of
astronomical sources with a starshade. The long baselines achievable with the
heliostat provide measurements of starshade performance at a flight-like
Fresnel number and resolution, aspects critical to the validation of optical
models. The heliostat has provided the opportunity to perform the first
astronomical observations with a starshade and has made science accessible in a
unique parameter space, high contrast at moderate inner working angles. On-sky
images are valuable for developing the experience and tools needed to extract
science results from future starshade observations. We report on high contrast
observations of nearby stars provided by a starshade. We achieve 5.6e-7
contrast at 30 arcseconds inner working angle on the star Vega and provide new
photometric constraints on background stars near Vega. | [
0,
1,
0,
0,
0,
0
] |
Title: Arbitrary order 2D virtual elements for polygonal meshes: Part II, inelastic problem,
Abstract: The present paper is the second part of a twofold work, whose first part is
reported in [3], concerning a newly developed Virtual Element Method (VEM) for
2D continuum problems. The first part of the work proposed a study for the linear
elastic problem. The aim of this part is to explore the features of the VEM
formulation when material nonlinearity is considered, showing that the accuracy
and ease of implementation observed in the analysis of the first
part of the work are still retained. Three different nonlinear constitutive
laws are considered in the VEM formulation. In particular, the generalized
viscoplastic model, the classical Mises plasticity with isotropic/kinematic
hardening and a shape memory alloy (SMA) constitutive law are implemented. The
versatility with respect to all the considered nonlinear material constitutive
laws is demonstrated through several numerical examples, also remarking that
the proposed 2D VEM formulation can be straightforwardly implemented as in a
standard nonlinear structural finite element method (FEM) framework. | [
0,
0,
1,
0,
0,
0
] |
Title: Reconstruction of Hidden Representation for Robust Feature Extraction,
Abstract: This paper aims to develop a new and robust approach to feature
representation. Motivated by the success of Auto-Encoders, we first theoretically
summarize the general properties of all algorithms that are based on
traditional Auto-Encoders: 1) The reconstruction error of the input cannot be
lower than a lower bound, which can be viewed as a guiding principle for
reconstructing the input. Additionally, when the input is corrupted with
noise, the reconstruction error of the corrupted input also cannot be lower
than a lower bound. 2) The reconstruction of a hidden representation achieving
its ideal situation is the necessary condition for the reconstruction of the
input to reach the ideal state. 3) Minimizing the Frobenius norm of the
Jacobian matrix of the hidden representation has a deficiency and may result in
a much worse local optimum value. We believe that minimizing the reconstruction
error of the hidden representation is more robust than minimizing the Frobenius
norm of the Jacobian matrix of the hidden representation. Based on the above
analysis, we propose a new model termed Double Denoising Auto-Encoders (DDAEs),
which uses corruption and reconstruction on both the input and the hidden
representation. We demonstrate that the proposed model is highly flexible and
extensible and has a potentially better capability to learn invariant and
robust feature representations. We also show that our model is more robust than
Denoising Auto-Encoders (DAEs) for dealing with noise or inessential features.
Furthermore, we detail how to train DDAEs with two different pre-training
methods by optimizing the objective function in a combined and separate manner,
respectively. Comparative experiments illustrate that the proposed model is
significantly better for representation learning than the state-of-the-art
models. | [
1,
0,
0,
1,
0,
0
] |
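A minimal sketch of the double-denoising idea summarized above, assuming a PyTorch implementation: both the input and the hidden representation are corrupted and reconstructed, and the two reconstruction errors are combined. Layer sizes, noise level, and the unweighted sum of losses are hypothetical choices.

```python
import torch
import torch.nn as nn

class DDAE(nn.Module):
    def __init__(self, d_in=784, d_hid=128, noise=0.1):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.dec = nn.Linear(d_hid, d_in)       # reconstructs the input
        self.hid_dec = nn.Linear(d_hid, d_hid)  # reconstructs the hidden code
        self.noise = noise

    def forward(self, x):
        h_clean = self.enc(x)                   # target hidden representation
        h_noisy = self.enc(x + self.noise * torch.randn_like(x))
        x_rec = self.dec(h_noisy)
        h_rec = self.hid_dec(h_noisy + self.noise * torch.randn_like(h_noisy))
        return x_rec, h_rec, h_clean.detach()

model = DDAE()
x = torch.rand(32, 784)
x_rec, h_rec, h_tgt = model(x)
loss = nn.functional.mse_loss(x_rec, x) + nn.functional.mse_loss(h_rec, h_tgt)
loss.backward()
```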
Title: Cosmological Evolution and Exact Solutions in a Fourth-order Theory of Gravity,
Abstract: A fourth-order theory of gravity is considered which in terms of dynamics has
the same degrees of freedom and number of constraints as those of scalar-tensor
theories. In addition it admits a canonical point-like Lagrangian description.
We study the critical points of the theory and we show that it can describe the
matter epoch of the universe and that two accelerated phases can be recovered
one of which describes a de Sitter universe. Finally for some models exact
solutions are presented. | [
0,
1,
1,
0,
0,
0
] |
Title: A numerical study of the F-model with domain-wall boundaries,
Abstract: We perform a numerical study of the F-model with domain-wall boundary
conditions. Various exact results are known for this particular case of the
six-vertex model, including closed expressions for the partition function for
any system size as well as its asymptotics and leading finite-size corrections.
To complement this picture we use a full lattice multi-cluster algorithm to
study equilibrium properties of this model for systems of moderate size, up to
L=512. We compare the energy to its exactly known large-L asymptotics. We
investigate the model's infinite-order phase transition by means of finite-size
scaling for an observable derived from the staggered polarization in order to
test the method put forward in our recent joint work with Duine and Barkema. In
addition we analyse local properties of the model. Our data are perfectly
consistent with analytical expressions for the arctic curves. We investigate
the structure inside the temperate region of the lattice, confirming the
oscillations in vertex densities that were first observed by Syljuåsen and
Zvonarev, and recently studied by Lyberg et al. We point out
'(anti)ferroelectric' oscillations close to the corresponding frozen regions as
well as 'higher-order' oscillations forming an intricate pattern with
saddle-point-like features. | [
0,
1,
0,
0,
0,
0
] |
Title: Visualizing the Loss Landscape of Neural Nets,
Abstract: Neural network training relies on our ability to find "good" minimizers of
highly non-convex loss functions. It is well-known that certain network
architecture designs (e.g., skip connections) produce loss functions that train
easier, and well-chosen training parameters (batch size, learning rate,
optimizer) produce minimizers that generalize better. However, the reasons for
these differences, and their effects on the underlying loss landscape, are not
well understood. In this paper, we explore the structure of neural loss
functions, and the effect of loss landscapes on generalization, using a range
of visualization methods. First, we introduce a simple "filter normalization"
method that helps us visualize loss function curvature and make meaningful
side-by-side comparisons between loss functions. Then, using a variety of
visualizations, we explore how network architecture affects the loss landscape,
and how training parameters affect the shape of minimizers. | [
1,
0,
0,
1,
0,
0
] |
Title: A Calculus of Truly Concurrent Mobile Processes,
Abstract: We combine Milner's $\pi$-calculus with our previous work on truly
concurrent process algebra into a calculus called $\pi_{tc}$. We introduce the syntax and
semantics of $\pi_{tc}$ and study its properties based on strongly truly concurrent
bisimilarities. We also include an axiomatization of $\pi_{tc}$. $\pi_{tc}$
can be used as a formal tool in verifying mobile systems in a truly concurrent
flavor. | [
1,
0,
0,
0,
0,
0
] |
Title: Gigahertz optomechanical modulation by split-ring-resonator nanophotonic meta-atom arrays,
Abstract: Using polarization-resolved transient reflection spectroscopy, we investigate
the ultrafast modulation of light interacting with a metasurface consisting of
coherently vibrating nanophotonic meta-atoms in the form of U-shaped split-ring
resonators, that exhibit co-localized optical and mechanical resonances. With a
two-dimensional square-lattice array of these resonators formed of gold on a
glass substrate, we monitor the visible-pump-pulse induced gigahertz
oscillations in intensity of reflected linearly-polarized infrared probe light
pulses, modulated by the resonators effectively acting as miniature tuning
forks. A multimodal vibrational response involving the opening and closing
motion of the split rings is detected in this way. Numerical simulations of the
associated transient deformations and strain fields elucidate the complex
nanomechanical dynamics contributing to the ultrafast optical modulation, and
point to the role of acousto-plasmonic interactions through the opening and
closing motion of the SRR gaps as the dominant effect. Applications include
ultrafast acoustooptic modulator design and sensing. | [
0,
1,
0,
0,
0,
0
] |
Title: Guaranteed Fault Detection and Isolation for Switched Affine Models,
Abstract: This paper considers the problem of fault detection and isolation (FDI) for
switched affine models. We first study the model invalidation problem and its
application to guaranteed fault detection. Novel and intuitive
optimization-based formulations are proposed for model invalidation and
T-distinguishability problems, which we demonstrate to be computationally more
efficient than an earlier formulation that required a complicated change of
variables. Moreover, we introduce a distinguishability index as a measure of
separation between the system and fault models, which offers a practical method
for finding the smallest receding time horizon that is required for fault
detection, and for finding potential design recommendations for ensuring
T-distinguishability. Then, we extend our fault detection guarantees to the
problem of fault isolation with multiple fault models, i.e., the identification
of the type and location of faults, by introducing the concept of
I-isolability. An efficient way to implement the FDI scheme is also proposed,
whose run-time does not grow with the number of fault models that are
considered. Moreover, we derive bounds on detection and isolation delays and
present an adaptive scheme for reducing isolation delays. Finally, the
effectiveness of the proposed method is illustrated using several examples,
including an HVAC system model with multiple faults. | [
1,
0,
1,
0,
0,
0
] |
Title: An Introduction to Animal Movement Modeling with Hidden Markov Models using Stan for Bayesian Inference,
Abstract: Hidden Markov models (HMMs) are popular time series models in many fields
including ecology, economics and genetics. HMMs can be defined over discrete or
continuous time, though here we only cover the former. In the field of movement
ecology in particular, HMMs have become a popular tool for the analysis of
movement data because of their ability to connect observed movement data to an
underlying latent process, generally interpreted as the animal's unobserved
behavior. Further, we model the tendency to persist in a given behavior over
time. Notation presented here will generally follow the format of Zucchini et
al. (2016) and cover HMMs applied in an unsupervised case to animal movement
data, specifically positional data. We provide Stan code to analyze movement
data of the wild haggis as presented first in Michelot et al. (2016). | [
0,
0,
0,
1,
1,
0
] |
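The Stan models referenced above rest on the standard HMM likelihood; as a language-neutral illustration, this sketch evaluates that likelihood with the scaled forward algorithm for a two-behaviour movement model. The Gaussian step-length emissions and all parameter values are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import norm

steps = np.array([0.1, 0.3, 2.5, 3.0, 0.2])   # observed step lengths
delta = np.array([0.5, 0.5])                  # initial behaviour distribution
Gamma = np.array([[0.9, 0.1],                 # behavioural persistence over time
                  [0.1, 0.9]])
mus = np.array([0.2, 2.5])                    # "resting" vs "travelling" means
sds = np.array([0.2, 0.8])

alpha = delta * norm.pdf(steps[0], mus, sds)  # forward variables at t = 1
log_lik = np.log(alpha.sum())
alpha = alpha / alpha.sum()
for t in range(1, len(steps)):
    alpha = (alpha @ Gamma) * norm.pdf(steps[t], mus, sds)
    log_lik += np.log(alpha.sum())            # rescale each step: no underflow
    alpha = alpha / alpha.sum()
print("log-likelihood:", log_lik)
```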
Title: Sampling a Network to Find Nodes of Interest,
Abstract: The focus of the current research is to identify people of interest in social
networks. We are especially interested in studying dark networks, which
represent illegal or covert activity. In such networks, people are unlikely to
disclose accurate information when queried. We present REDLEARN, an algorithm
for sampling dark networks with the goal of identifying as many nodes of
interest as possible. We consider two realistic lying scenarios, which describe
how individuals in a dark network may attempt to conceal their connections. We
test and present our results on several real-world multilayered networks, and
show that REDLEARN achieves up to a 340% improvement over the next best
strategy. | [
1,
1,
0,
0,
0,
0
] |
Title: Representation Mixing for TTS Synthesis,
Abstract: Recent character and phoneme-based parametric TTS systems using deep learning
have shown strong performance in natural speech generation. However, the choice
between character or phoneme input can create serious limitations for practical
deployment, as direct control of pronunciation is crucial in certain cases. We
demonstrate a simple method for combining multiple types of linguistic
information in a single encoder, named representation mixing, enabling flexible
choice between character, phoneme, or mixed representations during inference.
Experiments and user studies on a public audiobook corpus show the efficacy of
our approach. | [
1,
0,
0,
0,
0,
0
] |
Title: Projection Based Weight Normalization for Deep Neural Networks,
Abstract: Optimizing deep neural networks (DNNs) often suffers from the ill-conditioned
problem. We observe that the scaling-based weight space symmetry property in
rectified nonlinear networks causes this negative effect. Therefore, we
propose to constrain the incoming weights of each neuron to be unit-norm, which
is formulated as an optimization problem over Oblique manifold. A simple yet
efficient method referred to as projection based weight normalization (PBWN) is
also developed to solve this problem. PBWN executes standard gradient updates,
followed by projecting the updated weight back to Oblique manifold. This
proposed method has the property of regularization and collaborates well with
the commonly used batch normalization technique. We conduct comprehensive
experiments on several widely-used image datasets including CIFAR-10,
CIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art
convolutional neural networks, such as Inception, VGG and residual networks.
The results show that our method is able to improve the performance of DNNs
with different architectures consistently. We also apply our method to Ladder
network for semi-supervised learning on permutation invariant MNIST dataset,
and our method outperforms the state-of-the-art methods: we obtain test errors
as 2.52%, 1.06%, and 0.91% with only 20, 50, and 100 labeled samples,
respectively. | [
1,
0,
0,
0,
0,
0
] |
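A minimal sketch of the projection step described above: take an ordinary gradient update, then project each neuron's incoming weight vector back to unit norm, i.e. back onto the Oblique manifold. The shapes and learning rate are hypothetical.

```python
import numpy as np

def pbwn_step(W, grad, lr=0.1, eps=1e-12):
    """W: (n_neurons, n_inputs); each row is one neuron's incoming weights."""
    W = W - lr * grad                                   # standard gradient update
    norms = np.linalg.norm(W, axis=1, keepdims=True)    # per-neuron row norms
    return W / np.maximum(norms, eps)                   # project rows to unit norm

W = np.random.randn(4, 8)
W /= np.linalg.norm(W, axis=1, keepdims=True)           # start on the manifold
W = pbwn_step(W, grad=np.random.randn(4, 8))
print(np.linalg.norm(W, axis=1))                        # all ones, up to rounding
```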
Title: Mapping stable direct and retrograde orbits around the triple system of asteroids (45) Eugenia,
Abstract: It is well accepted that knowing the composition and the orbital evolution of
asteroids may help us to understand the process of formation of the Solar
System. It is also known that asteroids can represent a threat to our planet.
Such important role made space missions to asteroids a very popular topic in
the current astrodynamics and astronomy studies. By taking into account the
increasingly interest in space missions to asteroids, especially to multiple
systems, we present a study aimed to characterize the stable and unstable
regions around the triple system of asteroids (45) Eugenia. The goal is to
characterize unstable and stable regions of this system and compare with the
system 2001 SN263 - the target of the ASTER mission. Besides, Prado (2014) used
a new concept for mapping orbits considering the disturbance received by the
spacecraft from all the perturbing forces individually. This method was also
applied to (45) Eugenia. We present the stable and unstable regions for
particles with relative inclination between 0 and 180 degrees. We found that
(45) Eugenia presents larger stable regions for both, prograde and retrograde
cases. This is mainly because the satellites of this system are small when
compared to the primary body, and because they are not so close to each other.
We also present a comparison between those two triple systems, and a discussion
on how these results may guide us in the planning of future missions. | [
0,
1,
0,
0,
0,
0
] |
Title: Zinc oxide induces the stringent response and major reorientations in the central metabolism of Bacillus subtilis,
Abstract: Microorganisms, such as bacteria, are one of the first targets of
nanoparticles in the environment. In this study, we tested the effect of two
nanoparticles, ZnO and TiO2, with the salt ZnSO4 as the control, on the
Gram-positive bacterium Bacillus subtilis by 2D gel electrophoresis-based
proteomics. Despite a significant effect on viability (LD50), TiO2 NPs had no
detectable effect on the proteomic pattern, while ZnO NPs and ZnSO4
significantly modified B. subtilis metabolism. These results allowed us to
conclude that the effects of ZnO observed in this work were mainly attributable
to Zn dissolution in the culture media. Proteomic analysis highlighted twelve
modulated proteins related to central metabolism: MetE and MccB (cysteine
metabolism), OdhA, AspB, IolD, AnsB, PdhB and YtsJ (Krebs cycle) and XylA,
YqjI, Drm and Tal (pentose phosphate pathway). Biochemical assays, such as free
sulfhydryl, CoA-SH and malate dehydrogenase assays corroborated the observed
central metabolism reorientation and showed that Zn stress induced oxidative
stress, probably as a consequence of thiol chelation stress by Zn ions. The
other patterns affected by ZnO and ZnSO4 were the stringent response and the
general stress response. Nine proteins involved in or controlled by the
stringent response showed a modified expression profile in the presence of ZnO
NPs or ZnSO4: YwaC, SigH, YtxH, YtzB, TufA, RplJ, RpsB, PdhB and Mbl. An
increase in the ppGpp concentration confirmed the involvement of the stringent
response during a Zn stress. All these metabolic reorientations in response to
Zn stress were probably the result of complex regulatory mechanisms including
at least the stringent response via YwaC. | [
0,
0,
0,
0,
1,
0
] |
Title: Learning what matters - Sampling interesting patterns,
Abstract: In the field of exploratory data mining, local structure in data can be
described by patterns and discovered by mining algorithms. Although many
solutions have been proposed to address the redundancy problems in pattern
mining, most of them either provide succinct pattern sets or take the interests
of the user into account, but not both. Consequently, the analyst has to invest
substantial effort in identifying those patterns that are relevant to her
specific interests and goals. To address this problem, we propose a novel
approach that combines pattern sampling with interactive data mining. In
particular, we introduce the LetSIP algorithm, which builds upon recent
advances in 1) weighted sampling in SAT and 2) learning to rank in interactive
pattern mining. Specifically, it exploits user feedback to directly learn the
parameters of the sampling distribution that represents the user's interests.
We compare the performance of the proposed algorithm to the state-of-the-art in
interactive pattern mining by emulating the interests of a user. The resulting
system allows efficient and interleaved learning and sampling, thus
user-specific anytime data exploration. Finally, LetSIP demonstrates favourable
trade-offs concerning both quality-diversity and exploitation-exploration when
compared to existing methods. | [
1,
0,
0,
1,
0,
0
] |
Title: Orthogonal involutions and totally singular quadratic forms in characteristic two,
Abstract: We associate to every central simple algebra with involution of orthogonal
type in characteristic two a totally singular quadratic form which reflects
certain anisotropy properties of the involution. It is shown that this
quadratic form can be used to classify totally decomposable algebras with
orthogonal involution. Also, using this form, a criterion is obtained for an
orthogonal involution on a split algebra to be conjugated to the transpose
involution. | [
0,
0,
1,
0,
0,
0
] |
Title: De-blending Deep Herschel Surveys: A Multi-wavelength Approach,
Abstract: Cosmological surveys in the far infrared are known to suffer from confusion.
The Bayesian de-blending tool, XID+, currently provides one of the best ways to
de-confuse deep Herschel SPIRE images, using a flat flux density prior. This
work demonstrates that existing multi-wavelength data sets can be
exploited to improve XID+ by providing an informed prior, resulting in more
accurate and precise extracted flux densities. Photometric data for galaxies in
the COSMOS field were used to constrain spectral energy distributions (SEDs)
using the fitting tool CIGALE. These SEDs were used to create Gaussian prior
estimates in the SPIRE bands for XID+. The multi-wavelength photometry and the
extracted SPIRE flux densities were run through CIGALE again to allow us to
compare the performance of the two priors. Inferred ALMA flux densities
(F$^i$), at 870$\mu$m and 1250$\mu$m, from the best fitting SEDs from the
second CIGALE run were compared with measured ALMA flux densities (F$^m$) as an
independent performance validation. Similar validations were conducted with the
SED modelling and fitting tool MAGPHYS and modified black body functions to
test for model dependency. We demonstrate a clear improvement in agreement
between the flux densities extracted with XID+ and existing data at other
wavelengths when using the new informed Gaussian prior over the original
uninformed prior. The residuals between F$^m$ and F$^i$ were calculated. For
the Gaussian prior, these residuals, expressed as a multiple of the ALMA error
($\sigma$), have a smaller standard deviation, 7.95$\sigma$ for the Gaussian
prior compared to 12.21$\sigma$ for the flat prior, reduced mean, 1.83$\sigma$
compared to 3.44$\sigma$, and have reduced skew to positive values, 7.97
compared to 11.50. These results were determined to not be significantly model
dependent. This results in statistically more reliable SPIRE flux densities. | [
0,
1,
0,
0,
0,
0
] |
Title: The use of Charts, Pivot Tables, and Array Formulas in two Popular Spreadsheet Corpora,
Abstract: The use of spreadsheets in industry is widespread. Companies base decisions
on information coming from spreadsheets. Unfortunately, spreadsheets are
error-prone and this increases the risk that companies base their decisions on
inaccurate information, which can lead to incorrect decisions and loss of
money. In general, spreadsheet research aims to reduce the error-proneness
of spreadsheets. Most research is concentrated on the use of formulas. However,
there are other constructions in spreadsheets, like charts, pivot tables, and
array formulas, that are also used to present decision support information to
the user. There is almost no research about how these constructions are used.
To improve spreadsheet quality it is important to understand how spreadsheets
are used and to obtain a complete understanding, the use of charts, pivot
tables, and array formulas should be included in research. In this paper, we
analyze two popular spreadsheet corpora: Enron and EUSES on the use of the
aforementioned constructions. | [
1,
0,
0,
0,
0,
0
] |
Title: Disordered statistical physics in low dimensions: extremes, glass transition, and localization,
Abstract: This thesis presents original results in two domains of disordered
statistical physics: logarithmic correlated Random Energy Models (logREMs), and
localization transitions in long-range random matrices.
In the first part devoted to logREMs, we show how to characterise their
common properties and model--specific data. Then we develop their replica
symmetry breaking treatment, which leads to the freezing scenario of their free
energy distribution and the general description of their minima process, in
terms of decorated Poisson point process. We also report a series of new
applications of the Jack polynomials in the exact predictions of some
observables in the circular model and its variants. Finally, we present the
recent progress on the exact connection between logREMs and the Liouville
conformal field theory.
The goal of the second part is to introduce and study a new class of banded
random matrices, the broadly distributed class, which is characterized by an
effective sparseness. We will first study a specific model of the class, the
Beta Banded random matrices, inspired by an exact mapping to a recently studied
statistical model of long--range first--passage percolation/epidemics dynamics.
Using analytical arguments based on the mapping and numerics, we show the
existence of localization transitions with mobility edges in the
"stretch--exponential" parameter--regime of the statistical models. Then, using
a block--diagonalization renormalization approach, we argue that such
localization transitions occur generically in the broadly distributed class. | [
0,
1,
0,
0,
0,
0
] |
Title: Character Distributions of Classical Chinese Literary Texts: Zipf's Law, Genres, and Epochs,
Abstract: We collect 14 representative corpora for major periods in Chinese history in
this study. These corpora include poetic works produced in several dynasties,
novels of the Ming and Qing dynasties, and essays and news reports written in
modern Chinese. The time span of these corpora ranges between 1046 BCE and 2007
CE. We analyze their character and word distributions from the viewpoint of the
Zipf's law, and look for factors that affect the deviations and similarities
between their Zipfian curves. Genres and epochs demonstrated their influences
in our analyses. Specifically, the character distributions of poetic works
produced between 618 CE and 1644 CE exhibit striking similarity. In addition, although
texts of the same dynasty may tend to use the same set of characters, their
character distributions still deviate from each other. | [
1,
0,
0,
0,
0,
0
] |
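For readers who want to reproduce this kind of analysis, the hedged sketch below fits a Zipf exponent to a rank-frequency curve by least squares on log-log axes; the toy corpus and word-level tokenization stand in for the character counts used in the study.

```python
import numpy as np
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"   # stand-in corpus
counts = sorted(Counter(text.split()).values(), reverse=True)  # rank-frequency
counts = np.asarray(counts, dtype=float)
ranks = np.arange(1, len(counts) + 1)

# Zipf's law predicts log f(r) = c - s * log r; estimate s by least squares.
s, c = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"estimated Zipf exponent: {-s:.2f}")
```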
Title: StackInsights: Cognitive Learning for Hybrid Cloud Readiness,
Abstract: Hybrid cloud is an integrated cloud computing environment utilizing a mix of
public cloud, private cloud, and on-premise traditional IT infrastructures.
Workload awareness, defined as a detailed, full-range understanding of each
individual workload, is essential in implementing the hybrid cloud. While it is
critical to perform an accurate analysis to determine which workloads are
appropriate for on-premise deployment versus which workloads can be migrated to
a cloud off-premise, the assessment is mainly performed by rule or policy based
approaches. In this paper, we introduce StackInsights, a novel cognitive system
to automatically analyze and predict the cloud readiness of workloads for an
enterprise. Our system harnesses the critical metrics across the entire stack:
1) infrastructure metrics, 2) data relevance metrics, and 3) application
taxonomy, to identify workloads that have characteristics of a) low sensitivity
with respect to business security, criticality and compliance, and b) low
response time requirements and access patterns. Since the capture of the data
relevance metrics involves an intrusive and in-depth scanning of the content of
storage objects, a machine learning model is applied to perform the business
relevance classification by learning from the meta level metrics harnessed
across stack. In contrast to traditional methods, StackInsights significantly
reduces the total time for hybrid cloud readiness assessment by orders of
magnitude. | [
1,
0,
0,
0,
0,
0
] |
Title: Risk-averse model predictive control,
Abstract: Risk-averse model predictive control (MPC) offers a control framework that
allows one to account for ambiguity in the knowledge of the underlying
probability distribution and unifies stochastic and worst-case MPC. In this
paper we study risk-averse MPC problems for constrained nonlinear Markovian
switching systems using generic cost functions, and derive Lyapunov-type
risk-averse stability conditions by leveraging the properties of risk-averse
dynamic programming operators. We propose a controller design procedure to
design risk-averse stabilizing terminal conditions for constrained nonlinear
Markovian switching systems. Lastly, we cast the resulting risk-averse optimal
control problem in a favorable form which can be solved efficiently and thus
deems risk-averse MPC suitable for applications. | [
0,
0,
1,
0,
0,
0
] |
Title: Monte Carlo Tree Search for Asymmetric Trees,
Abstract: We present an extension of Monte Carlo Tree Search (MCTS) that strongly
increases its efficiency for trees with asymmetry and/or loops. Asymmetric
termination of search trees introduces a type of uncertainty for which the
standard upper confidence bound (UCB) formula does not account. Our first
algorithm (MCTS-T), which assumes a non-stochastic environment, backs up tree
structure uncertainty and leverages it for exploration in a modified UCB
formula. Results show vastly improved efficiency in a well-known asymmetric
domain in which MCTS performs arbitrarily badly. Next, we connect the ideas about
asymmetric termination to the presence of loops in the tree, where the same
state appears multiple times in a single trace. An extension to our algorithm
(MCTS-T+), which in addition to non-stochasticity assumes full state
observability, further increases search efficiency for domains with loops as
well. Benchmark testing on a set of OpenAI Gym and Atari 2600 games indicates
that our algorithms always perform better than or at least equivalent to
standard MCTS, and could be first-choice tree search algorithms for
non-stochastic, fully-observable environments. | [
0,
0,
0,
1,
0,
0
] |
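For context, this sketch shows the standard UCB selection rule that, per the abstract, fails to account for asymmetric termination; the tree-structure-uncertainty correction of MCTS-T itself is not reproduced here, only the baseline formula it modifies. The exploration constant is a conventional choice.

```python
import math

def ucb_select(children, c=1.414):
    """children: list of dicts with visit count 'n' and total return 'w'."""
    N = sum(ch["n"] for ch in children)          # parent visit count

    def ucb(ch):
        if ch["n"] == 0:
            return float("inf")                  # visit unvisited children first
        return ch["w"] / ch["n"] + c * math.sqrt(math.log(N) / ch["n"])

    return max(children, key=ucb)

print(ucb_select([{"n": 10, "w": 6.0}, {"n": 3, "w": 2.5}]))
```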
Title: On the difference-to-sum power ratio of speech and wind noise based on the Corcos model,
Abstract: The difference-to-sum power ratio was proposed and used to suppress wind
noise under specific acoustic conditions. In this contribution, a general
formulation of the difference-to-sum power ratio associated with a mixture of
speech and wind noise is proposed and analyzed. In particular, it is assumed
that the complex coherence of convective turbulence can be modelled by the
Corcos model. In contrast to the work in which the power ratio was first
presented, the employed Corcos model holds for every possible air stream
direction and takes into account the lateral coherence decay rate. The obtained
expression is subsequently validated with real data for a dual microphone
set-up. Finally, the difference-to-sum power ratio is exploited as a spatial
feature to indicate the frame-wise presence of wind noise, obtaining improved
detection performance when compared to an existing multi-channel wind noise
detection approach. | [
1,
0,
0,
0,
0,
0
] |
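A hedged numerical illustration of the feature in question: coherent speech largely cancels in the difference channel of a two-microphone frame while incoherent wind turbulence does not, so the difference-to-sum power ratio separates the two regimes. The signal models and levels are hypothetical, and no Corcos-model coherence is simulated.

```python
import numpy as np

def diff_to_sum_ratio(x1, x2, eps=1e-12):
    num = np.mean((x1 - x2) ** 2)   # power of the difference signal
    den = np.mean((x1 + x2) ** 2)   # power of the sum signal
    return num / (den + eps)

rng = np.random.default_rng(1)
t = np.arange(1024) / 16_000
speech = np.sin(2 * np.pi * 200 * t)                            # coherent at both mics
wind1, wind2 = rng.normal(0, 1, 1024), rng.normal(0, 1, 1024)   # incoherent turbulence

print("speech-only frame:", diff_to_sum_ratio(speech, speech))          # ~ 0
print("windy frame:      ", diff_to_sum_ratio(speech + wind1, speech + wind2))
```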
Title: A Re-weighted Joint Spatial-Radon Domain CT Image Reconstruction Model for Metal Artifact Reduction,
Abstract: High density implants such as metals often lead to serious artifacts in the
reconstructed CT images, which hamper the accuracy of image-based diagnosis and
treatment planning. In this paper, we propose a novel wavelet frame based CT
image reconstruction model to reduce metal artifacts. This model is built on a
joint spatial and Radon (projection) domain (JSR) image reconstruction
framework with a built-in weighting and re-weighting mechanism in Radon domain
to repair degraded projection data. The new weighting strategy used in the
proposed model not only makes the regularization in Radon domain by wavelet
frame transform more effective, but also makes the commonly assumed linear
model for CT imaging a more accurate approximation of the nonlinear physical
problem. The proposed model, which will be referred to as the re-weighted JSR
model, combines the ideas of the recently proposed wavelet frame based JSR
model \cite{Dong2013} and the normalized metal artifact reduction model
\cite{meyer2010normalized}, and manages to achieve noticeably better CT
reconstruction quality than both methods. To solve the proposed re-weighted JSR
model, an efficient alternative iteration algorithm is proposed with guaranteed
convergence. Numerical experiments on both simulated and real CT image data
demonstrate the effectiveness of the re-weighted JSR model and its advantage
over some of the state-of-the-art methods. | [
0,
1,
1,
0,
0,
0
] |
Title: A Design Based on Stair-case Band Alignment of Electron Transport Layer for Improving Performance and Stability in Planar Perovskite Solar Cells,
Abstract: Among the n-type metal oxide materials used in the planar perovskite solar
cells, zinc oxide (ZnO) is a promising candidate to replace titanium dioxide
(TiO2) due to its relatively high electron mobility, high transparency, and
versatile nanostructures. Here, we present the application of low temperature
solution processed ZnO/Al-doped ZnO (AZO) bilayer thin film as electron
transport layers (ETLs) in the inverted perovskite solar cells, which provide a
stair-case band profile. Experimental results revealed that the power
conversion efficiency (PCE) of perovskite solar cells was significantly
increased from 12.25 to 16.07% by employing the AZO thin film as the buffer
layer. Meanwhile, the short-circuit current density (Jsc), open-circuit voltage
(Voc), and fill factor (FF) were improved to 20.58 mA/cm2, 1.09V, and 71.6%,
respectively. The enhancement in performance is attributed to the modified
interface in ETL with stair-case band alignment of ZnO/AZO/CH3NH3PbI3, which
allows more efficient extraction of photogenerated electrons in the CH3NH3PbI3
active layer. Thus, it is demonstrated that the ZnO/AZO bilayer ETLs would
benefit the electron extraction and contribute in enhancing the performance of
perovskite solar cells. | [
0,
1,
0,
0,
0,
0
] |
Title: Statistics on functional data and covariance operators in linear inverse problems,
Abstract: We introduce a framework for the statistical analysis of functional data in a
setting where these objects cannot be fully observed, but only indirect and
noisy measurements are available, namely an inverse problem setting. The
proposed methodology can be applied either to the analysis of indirectly
observed functional data or to the associated covariance operators,
representing second-order information, and thus lying on a non-Euclidean space.
To deal with the ill-posedness of the inverse problem, we exploit the spatial
structure of the sample data by introducing a flexible regularizing term
embedded in the model. Thanks to its efficiency, the proposed model is applied
to MEG data, leading to a novel statistical approach to the investigation of
functional connectivity. | [
0,
0,
0,
1,
0,
0
] |
Title: Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results,
Abstract: As part of the 2016 public evaluation challenge on Detection and
Classification of Acoustic Scenes and Events (DCASE 2016), the second task
focused on evaluating sound event detection systems using synthetic mixtures of
office sounds. This task, which follows the `Event Detection - Office
Synthetic' task of DCASE 2013, studies the behaviour of tested algorithms when
facing controlled levels of audio complexity with respect to background noise
and polyphony/density, with the added benefit of a very accurate ground truth.
This paper presents the task formulation, evaluation metrics, submitted
systems, and provides a statistical analysis of the results achieved, with
respect to various aspects of the evaluation dataset. | [
1,
0,
0,
1,
0,
0
] |
Title: Pressure-induced Superconductivity in the Three-component Fermion Topological Semimetal Molybdenum Phosphide,
Abstract: Topological semimetal, a novel state of quantum matter hosting exotic
emergent quantum phenomena dictated by the non-trivial band topology, has
emerged as a new frontier in condensed-matter physics. Very recently, a
coexistence of triply degenerate points of band crossing and Weyl points near
the Fermi level was theoretically predicted and immediately experimentally
verified in single crystalline molybdenum phosphide (MoP). Here we show in this
material the high-pressure electronic transport and synchrotron X-ray
diffraction (XRD) measurements, combined with density functional theory (DFT)
calculations. We report the emergence of pressure-induced superconductivity in
MoP with a critical temperature Tc of about 2 K at 27.6 GPa, rising to 3.7 K at
the highest pressure of 95.0 GPa studied. No structural phase transition is
detected up to 60.6 GPa from the XRD. Meanwhile, the Weyl points and triply
degenerate points topologically protected by the crystal symmetry are retained
at high pressure as revealed by our DFT calculations. The coexistence of
three-component fermion and superconductivity in heavily pressurized MoP offers
an excellent platform to study the interplay between topological phase of
matter and superconductivity. | [
0,
1,
0,
0,
0,
0
] |
Title: Collective excitations and supersolid behavior of bosonic atoms inside two crossed optical cavities,
Abstract: We discuss the nature of symmetry breaking and the associated collective
excitations for a system of bosons coupled to the electromagnetic field of two
optical cavities. For the specific configuration realized in a recent
experiment at ETH, we show that, in absence of direct intercavity scattering
and for parameters chosen such that the atoms couple symmetrically to both
cavities, the system possesses an approximate $U(1)$ symmetry which holds
asymptotically for vanishing cavity field intensity. It corresponds to the
invariance with respect to redistributing the total intensity $I=I_1+I_2$
between the two cavities. The spontaneous breaking of this symmetry gives rise
to a broken continuous translation-invariance for the atoms, creating a
supersolid-like order in the presence of a Bose-Einstein condensate. In
particular, we show that atom-mediated scattering between the two cavities,
which favors the state with equal light intensities $I_1=I_2$ and reduces the
symmetry to $\mathbf{Z}_2\otimes \mathbf{Z}_2$, gives rise to a finite value
$\sim \sqrt{I}$ of the effective Goldstone mass. For strong atom driving, this
low energy mode is clearly separated from an effective Higgs excitation
associated with changes of the total intensity $I$. In addition, we compute the
spectral distribution of the cavity light field and show that both the Higgs
and Goldstone mode acquire a finite lifetime due to Landau damping at non-zero
temperature. | [
0,
1,
0,
0,
0,
0
] |
Title: Generalized Value Iteration Networks: Life Beyond Lattices,
Abstract: In this paper, we introduce a generalized value iteration network (GVIN),
which is an end-to-end neural network planning module. GVIN emulates the value
iteration algorithm by using a novel graph convolution operator, which enables
GVIN to learn and plan on irregular spatial graphs. We propose three novel
differentiable kernels as graph convolution operators and show that the
embedding based kernel achieves the best performance. We further propose
episodic Q-learning, an improvement upon traditional n-step Q-learning that
stabilizes training for networks that contain a planning module. Lastly, we
evaluate GVIN on planning problems in 2D mazes, irregular graphs, and
real-world street networks, showing that GVIN generalizes well for both
arbitrary graphs and unseen graphs of larger scale and outperforms a naive
generalization of VIN (discretizing a spatial graph into a 2D image). | [
1,
0,
0,
0,
0,
0
] |
Title: Switch Functions,
Abstract: We define a switch function to be a function from an interval to $\{1,-1\}$
with a finite number of sign changes. (Special cases are the Walsh functions.)
By a topological argument, we prove that, given $n$ real-valued functions,
$f_1, \dots, f_n$, in $L^1[0,1]$, there exists a switch function, $\sigma$,
with at most $n$ sign changes that is simultaneously orthogonal to all of them
in the sense that $\int_0^1 \sigma(t)f_i(t)dt=0$, for all $i = 1, \dots , n$.
Moreover, we prove that, for each $\lambda \in (-1,1)$, there exists a unique
switch function, $\sigma$, with $n$ switches such that $\int_0^1 \sigma(t) p(t)
dt = \lambda \int_0^1 p(t)dt$ for every real polynomial $p$ of degree at most
$n-1$. We also prove the same statement holds for every real even polynomial of
degree at most $2n-2$. Furthermore, for each of these latter results, we write
down, in terms of $\lambda$ and $n$, a degree $n$ polynomial whose roots are
the switch points of $\sigma$; we are thereby able to compute these switch
functions. | [
0,
0,
1,
0,
0,
0
] |
Title: First international comparison of fountain primary frequency standards via a long distance optical fiber link,
Abstract: We report on the first comparison of distant caesium fountain primary
frequency standards (PFSs) via an optical fiber link. The 1415 km long optical
link connects two PFSs at LNE-SYRTE (Laboratoire National de métrologie et
d'Essais - SYstème de Références Temps-Espace) in Paris (France)
with two at PTB (Physikalisch-Technische Bundesanstalt) in Braunschweig
(Germany). For a long time, these PFSs have been major contributors to the accuracy
of the International Atomic Time (TAI), with stated accuracies of around
$3\times 10^{-16}$. They have also been the references for a number of absolute
measurements of clock transition frequencies in various optical frequency
standards in view of a future redefinition of the second. The phase coherent
optical frequency transfer via a stabilized telecom fiber link enables far
better resolution than any other means of frequency transfer based on satellite
links. The agreement for each pair of distant fountains compared is well within
the combined uncertainty of a few 10$^{-16}$ for all the comparisons, which
fully supports the stated PFSs' uncertainties. The comparison also includes a
rubidium fountain frequency standard participating in the steering of TAI and
enables a new absolute determination of the $^{87}$Rb ground state hyperfine
transition frequency with an uncertainty of $3.1\times 10^{-16}$.
This paper is dedicated to the memory of André Clairon, who passed away
on the 24$^{th}$ of December 2015, for his pioneering and long-lasting efforts
in atomic fountains. He also pioneered optical links from as early as 1997. | [
0,
1,
0,
0,
0,
0
] |
Title: Hardy inequalities, Rellich inequalities and local Dirichlet forms,
Abstract: First the Hardy and Rellich inequalities are defined for the submarkovian
operator associated with a local Dirichlet form. Secondly, two general
conditions are derived which are sufficient to deduce the Rellich inequality
from the Hardy inequality. In addition the Rellich constant is calculated from
the Hardy constant. Thirdly, we establish that the criteria for the Rellich
inequality are verified for a large class of weighted second-order operators on
a domain $\Omega\subseteq \mathbb{R}^d$. The weighting near the boundary $\partial
\Omega$ can be different from the weighting at infinity. Finally these results
are applied to weighted second-order operators on $\mathbb{R}^d\backslash\{0\}$ and to
a general class of operators of Grushin type. | [
0,
0,
1,
0,
0,
0
] |
Title: On the generation of drift flows in wall-bounded flows transiting to turbulence,
Abstract: Despite recent progress, laminar-turbulent coexistence in transitional planar
wall-bounded shear flows is still not well understood. Contrasting with the
processes by which chaotic flow inside turbulent patches is sustained at the
local (minimal flow unit) scale, the mechanisms controlling the obliqueness of
laminar-turbulent interfaces typically observed all along the coexistence range
are still mysterious. An extension of Waleffe's approach [Phys. Fluids 9 (1997)
883--900] is used to show that, already at the local scale, drift flows
breaking the problem's spanwise symmetry are generated just by slightly
detuning the modes involved in the self-sustainment process. This opens
perspectives for theorizing the formation of laminar-turbulent patterns. | [
0,
1,
0,
0,
0,
0
] |
Title: Goldbach's Function Approximation Using Deep Learning,
Abstract: The Goldbach conjecture is one of the most famous open mathematical problems. It
states that every even number greater than two can be represented as a sum of two
prime numbers. In this work we present a deep learning based model that
predicts the number of Goldbach partitions for a given even number.
Surprisingly, our model outperforms all state-of-the-art analytically derived
estimations for the number of couples, while not requiring prime factorization
of the given number. We believe that building a model that can accurately
predict the number of couples brings us one step closer to solving one of the
world's most famous open problems. To the best of our knowledge, this is the
first attempt to consider machine learning based data-driven methods to
approximate open mathematical problems in the field of number theory, and hope
that this work will encourage such attempts. | [
0,
0,
0,
1,
0,
0
] |
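For concreteness, the target quantity the model learns, the number of Goldbach partitions R(n), can be computed exactly for small n by brute force, as in this sketch (sympy's primality test is used for brevity):

```python
from sympy import isprime

def goldbach_partitions(n: int) -> int:
    """Count unordered pairs of primes (p, q), p <= q, with p + q = n."""
    assert n > 2 and n % 2 == 0
    return sum(1 for p in range(2, n // 2 + 1) if isprime(p) and isprime(n - p))

print(goldbach_partitions(100))  # 6: (3,97), (11,89), (17,83), (29,71), (41,59), (47,53)
```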
Title: Estimation of a Continuous Distribution on a Real Line by Discretization Methods -- Complete Version--,
Abstract: For an unknown continuous distribution on a real line, we consider the
approximate estimation by discretization. There are two methods for the
discretization. The first method divides the real line into several intervals
before taking samples ("fixed interval method"). The second method divides the
real line using the estimated percentiles after taking samples ("moving
interval method"). Either way, we reduce the problem to the estimation of a
multinomial distribution. We use (symmetrized) $f$-divergence in order to
measure the discrepancy of the true distribution and the estimated one. Our
main result is the asymptotic expansion of the risk (i.e. expected divergence)
up to the second-order term in the sample size. We prove theoretically that the
moving interval method is asymptotically superior to the fixed interval method.
We also observe how the presupposed intervals (fixed interval method) or
percentiles (moving interval method) affect the asymptotic risk. | [
0,
0,
1,
1,
0,
0
] |
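As a toy illustration of the comparison made in this abstract, the sketch below discretizes standard normal samples with both schemes and scores each by a Kullback-Leibler divergence between true and estimated cell probabilities; the cell count, sample size, and the use of plain KL rather than a general symmetrized $f$-divergence are simplifying, hypothetical choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, k, reps = 200, 8, 2000                         # sample size, cells, replications

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

fixed_edges = np.linspace(-3, 3, k - 1)           # k cells over the real line
p_fixed = np.diff(np.concatenate(([0.0], norm.cdf(fixed_edges), [1.0])))

risk_fixed = risk_moving = 0.0
for _ in range(reps):
    x = rng.normal(size=n)
    # fixed interval method: empirical cell frequencies vs. true probabilities
    q = np.bincount(np.digitize(x, fixed_edges), minlength=k) / n
    risk_fixed += kl(p_fixed, q)
    # moving interval method: cells from sample percentiles, each with mass 1/k
    m_edges = np.quantile(x, np.arange(1, k) / k)
    p_m = np.diff(np.concatenate(([0.0], norm.cdf(m_edges), [1.0])))
    risk_moving += kl(p_m, np.full(k, 1.0 / k))

print(f"fixed: {risk_fixed / reps:.4f}   moving: {risk_moving / reps:.4f}")
```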