text (string, lengths 57 to 2.88k) | labels (sequence of length 6) |
---|---|
Title: No Need for a Lexicon? Evaluating the Value of the Pronunciation Lexica in End-to-End Models,
Abstract: For decades, context-dependent phonemes have been the dominant sub-word unit
for conventional acoustic modeling systems. This status quo has begun to be
challenged recently by end-to-end models which seek to combine acoustic,
pronunciation, and language model components into a single neural network. Such
systems, which typically predict graphemes or words, simplify the recognition
process since they remove the need for a separate expert-curated pronunciation
lexicon to map from phoneme-based units to words. However, there has been
little previous work comparing phoneme-based versus grapheme-based sub-word
units in the end-to-end modeling framework, to determine whether the gains from
such approaches are primarily due to the new probabilistic model, or from the
joint learning of the various components with grapheme-based units.
In this work, we conduct detailed experiments which are aimed at quantifying
the value of phoneme-based pronunciation lexica in the context of end-to-end
models. We examine phoneme-based end-to-end models, which are contrasted
against grapheme-based ones on a large vocabulary English Voice-search task,
where we find that graphemes do indeed outperform phonemes. We also compare
grapheme and phoneme-based approaches on a multi-dialect English task, which
once again confirms the superiority of graphemes, greatly simplifying the system
for recognizing multiple dialects. | [1, 0, 0, 1, 0, 0] |
Title: Experimental Two-dimensional Quantum Walk on a Photonic Chip,
Abstract: Quantum walks, by virtue of coherent superposition and quantum
interference, possess exponential superiority over their classical counterparts in
applications of quantum searching and quantum simulation. The quantum-enhanced
power is closely related to the state space of quantum walks, which can be
expanded by enlarging the photon number and/or the dimensions of the evolution
network; however, the former is considerably challenging due to the probabilistic
generation of single photons and multiplicative loss. Here we demonstrate a
two-dimensional continuous-time quantum walk by using the external geometry of
photonic waveguide arrays, rather than the inner degrees of freedom of photons.
Using femtosecond laser direct writing, we construct a large-scale
three-dimensional structure which forms a two-dimensional lattice with up to
49x49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum
walks using heralded single photons and single-photon-level imaging. We analyze
the quantum transport properties via observing the ballistic evolution pattern
and the variance profile, which agree well with simulation results. We further
reveal the transient nature that is a unique feature of quantum walks beyond one
dimension. An architecture that allows a walk to freely evolve in all directions
and at a large scale, combined with defect and disorder control, may bring about
powerful and versatile quantum walk machines for classically
intractable problems. | [0, 1, 0, 0, 0, 0] |
Title: Reaction-Diffusion Models for Glioma Tumor Growth,
Abstract: Mathematical modelling of tumor growth is one of the most useful and
inexpensive approaches to determine and predict the stage, size and progression
of tumors in realistic geometries. Moreover, these models have been used to gain
insight into cancer growth and invasion and in the analysis of tumor size
and geometry for applications in cancer treatment and surgical planning. The
present review attempts to present a general perspective on the use of models
based on reaction-diffusion equations, not only for the description of tumor
growth in gliomas, addressing processes such as tumor heterogeneity,
hypoxia, dormancy and necrosis, but also for their potential use as a tool in
designing optimized and patient-specific therapies. | [0, 1, 0, 0, 0, 0] |
Title: Teaching computer code at school,
Abstract: In today's education systems, there is deep concern about the importance of
teaching code and computer programming in schools. Moving digital learning from
the simple use of tools to an understanding of the internal functioning of these
tools is an old/new debate that originated with the digital laboratories of the
1960s. Today, it is emerging again under the impulse of large-scale digitalization
of the public sphere and the new constructivist education theories. Teachers and
educators discuss not only the viability of teaching code in the classroom, but
also its intellectual and cognitive advantages for students. The debate thus takes
several orientations and draws on an entanglement of arguments and interpretations
of every order: technical, educational, cultural, cognitive and psychological.
However, this phenomenon, which undoubtedly augurs a profound transformation in
future models of learning and teaching, predicts a new and almost congenital
digital humanism. | [1, 0, 0, 0, 0, 0] |
Title: Nonconvex penalties with analytical solutions for one-bit compressive sensing,
Abstract: One-bit measurements widely exist in the real world, and they can be used to
recover sparse signals. This task is known as the problem of learning
halfspaces in learning theory and one-bit compressive sensing (1bit-CS) in
signal processing. In this paper, we propose novel algorithms based on both
convex and nonconvex sparsity-inducing penalties for robust 1bit-CS. We provide
a sufficient condition to verify whether a solution is globally optimal or not.
Then we show that the globally optimal solution for positive homogeneous
penalties can be obtained in two steps: a proximal operator and a normalization
step. For several nonconvex penalties, including minimax concave penalty (MCP),
$\ell_0$ norm, and sorted $\ell_1$ penalty, we provide fast algorithms for
finding the analytical solutions by solving the dual problem. Specifically, our
algorithm is more than $200$ times faster than the existing algorithm for MCP.
Its efficiency is comparable to the algorithm for the $\ell_1$ penalty in time,
while its performance is much better. Among these penalties, the sorted
$\ell_1$ penalty is most robust to noise in different settings. | [1, 0, 0, 1, 0, 0] |
Title: Benchmarking Automatic Machine Learning Frameworks,
Abstract: AutoML serves as the bridge between varying levels of expertise when
designing machine learning systems and expedites the data science process. A
wide range of techniques has been developed to address this; however, there is no
objective comparison of these techniques. We present a benchmark of current
open source AutoML solutions using open source datasets. We test auto-sklearn,
TPOT, auto_ml, and H2O's AutoML solution against a compiled set of regression
and classification datasets sourced from OpenML and find that auto-sklearn
performs the best across classification datasets and TPOT performs the best
across regression datasets. | [0, 0, 0, 1, 0, 0] |
Title: Multiscale mixing patterns in networks,
Abstract: Assortative mixing in networks is the tendency for nodes with the same
attributes, or metadata, to link to each other. It is a property often found in
social networks manifesting as a higher tendency of links occurring between
people with the same age, race, or political belief. Quantifying the level of
assortativity or disassortativity (the preference of linking to nodes with
different attributes) can shed light on the factors involved in the formation
of links and contagion processes in complex networks. It is common practice to
measure the level of assortativity according to the assortativity coefficient,
or modularity in the case of discrete-valued metadata. This global value is the
average level of assortativity across the network and may not be a
representative statistic when mixing patterns are heterogeneous. For example, a
social network spanning the globe may exhibit local differences in mixing
patterns as a consequence of differences in cultural norms. Here, we introduce
an approach to localise this global measure so that we can describe the
assortativity, across multiple scales, at the node level. Consequently we are
able to capture and qualitatively evaluate the distribution of mixing patterns
in the network. We find that for many real-world networks the distribution of
assortativity is skewed, overdispersed and multimodal. Our method provides a
clearer lens through which we can more closely examine mixing patterns in
networks. | [1, 0, 0, 0, 0, 0] |
Title: Simulation of Parabolic Flow on an Eye-Shaped Domain with Moving Boundary,
Abstract: During the upstroke of a normal eye blink, the upper lid moves and paints a
thin tear film over the exposed corneal and conjunctival surfaces. This thin
tear film may be modeled by a nonlinear fourth-order PDE derived from
lubrication theory. A challenge in the numerical simulation of this model is to
include both the geometry of the eye and the movement of the eyelid. A pair of
orthogonal and conformal maps transform a square into an approximate
representation of the exposed ocular surface of a human eye. A spectral
collocation method on the square produces relatively efficient solutions on the
eye-shaped domain via these maps. The method is demonstrated on linear and
nonlinear second-order diffusion equations and shown to have excellent accuracy
as measured pointwise or by conservation checks. Future work will use the
method for thin-film equations on the same type of domain. | [0, 0, 1, 0, 0, 0] |
Title: Faster algorithms for 1-mappability of a sequence,
Abstract: In the k-mappability problem, we are given a string x of length n and
integers m and k, and we are asked to count, for each length-m factor y of x,
the number of other factors of length m of x that are at Hamming distance at
most k from y. We focus here on the version of the problem where k = 1. The
fastest known algorithm for k = 1 requires time O(mn log n/ log log n) and
space O(n). We present two algorithms that require worst-case time O(mn) and
O(n log^2 n), respectively, and space O(n), thus greatly improving the state of
the art. Moreover, we present an algorithm that requires average-case time and
space O(n) for integer alphabets if $m = \Omega(\log n / \log \sigma)$, where
$\sigma$ is the alphabet size. | [1, 0, 0, 0, 0, 0] |
Title: Simulation optimization: A review of algorithms and applications,
Abstract: Simulation Optimization (SO) refers to the optimization of an objective
function subject to constraints, both of which can be evaluated through a
stochastic simulation. To address specific features of a particular
simulation---discrete or continuous decisions, expensive or cheap simulations,
single or multiple outputs, homogeneous or heterogeneous noise---various
algorithms have been proposed in the literature. As one can imagine, there
exist several competing algorithms for each of these classes of problems. This
document emphasizes the difficulties in simulation optimization as compared to
mathematical programming, makes reference to state-of-the-art algorithms in the
field, examines and contrasts the different approaches used, reviews some of
the diverse applications that have been tackled by these methods, and
speculates on future directions in the field. | [1, 0, 1, 0, 0, 0] |
Title: Computational Aspects of Optimal Strategic Network Diffusion,
Abstract: The diffusion of information has been widely modeled as stochastic diffusion
processes on networks. Alshamsi et al. (2018) proposed a model of strategic
diffusion in networks of related activities. In this work we investigate the
computational aspects of finding the optimal strategy of strategic diffusion.
We prove that finding an optimal solution to the problem is NP-complete in a
general case. To overcome this computational difficulty, we present an
algorithm to compute an optimal solution based on a dynamic programming
technique. We also show that the problem is fixed-parameter tractable when
parametrized by the product of the treewidth and maximum degree. We analyze the
possibility of developing an efficient approximation algorithm and show that
two heuristic algorithms proposed so far cannot have better than a logarithmic
approximation guarantee. Finally, we prove that the problem does not admit
better than a logarithmic approximation, unless P=NP. | [1, 0, 0, 0, 0, 0] |
Title: Replication issues in syntax-based aspect extraction for opinion mining,
Abstract: Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these to be important
threats to the validity of research in the field, especially when compared to
other problems in NLP where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
think has a key role in helping researchers to understand the meaning of the
state-of-the-art better and to generate continuous advances. | [1, 0, 0, 0, 0, 0] |
Title: Manifold Adversarial Learning,
Abstract: The recently proposed adversarial training methods show robustness to
both adversarial and original examples and achieve state-of-the-art results in
supervised and semi-supervised learning. All the existing adversarial training
methods consider only how the worst perturbed examples (i.e., adversarial
examples) could affect the model output. Despite their success, we argue that
such a setting may lack generalization, since the output space (or label
space) is apparently less informative. In this paper, we propose a novel
method, called Manifold Adversarial Training (MAT). MAT manages to build an
adversarial framework based on how the worst perturbation could affect the
distributional manifold rather than the output space. Particularly, a latent
data space with a Gaussian Mixture Model (GMM) will first be derived. On one
hand, MAT tries to perturb the input samples in the way that would roughen the
distributional manifold the worst. On the other hand, the deep learning model
is trained to promote manifold smoothness in the latent space, measured by the
variation of Gaussian mixtures (given the local perturbation around the data
point). Importantly, since the latent space is more informative than the output
space, the proposed MAT can better learn a robust and compact data
representation, leading to further performance improvement. The proposed MAT is
important in that it can be considered a superset of one recently proposed
discriminative feature learning approach called center loss. We conducted a
series of experiments in both supervised and semi-supervised learning on three
benchmark data sets, showing that the proposed MAT can achieve remarkable
performance, much better than those of the state-of-the-art adversarial
approaches. | [0, 0, 0, 1, 0, 0] |
Title: A direct measure of free electron gas via the Kinematic Sunyaev-Zel'dovich effect in Fourier-space analysis,
Abstract: We present the measurement of the kinematic Sunyaev-Zel'dovich (kSZ) effect
in Fourier space, rather than in real space. We measure the density-weighted
pairwise kSZ power spectrum, the first use of this promising approach, by
cross-correlating a cleaned Cosmic Microwave Background (CMB) temperature map,
which jointly uses both Planck Release 2 and Wilkinson Microwave Anisotropy
Probe nine-year data, with the two galaxy samples, CMASS and LOWZ, derived from
the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12. With the
current data, we constrain the average optical depth $\tau$ multiplied by the
ratio of the Hubble parameter at redshift $z$ and the present day, $E=H/H_0$;
we find $\tau E = (3.95\pm1.62)\times10^{-5}$ for LOWZ and $\tau E =
(1.25\pm1.06)\times10^{-5}$ for CMASS, with the optimal angular radius of an
aperture photometry filter to estimate the CMB temperature distortion
associated with each galaxy. By repeating the pairwise kSZ power analysis for
various aperture radii, we measure the optical depth as a function of aperture
radii. While this analysis results in the kSZ signals with only evidence for a
detection, ${\rm S/N}=2.54$ for LOWZ and $1.24$ for CMASS, the combination of
future CMB and spectroscopic galaxy surveys should enable precision
measurements. We estimate that the combination of CMB-S4 and data from DESI
should yield detections of the kSZ signal with ${\rm S/N}=70-100$, depending on
the resolution of CMB-S4. | [0, 1, 0, 0, 0, 0] |
Title: Regulating Access to System Sensors in Cooperating Programs,
Abstract: Modern operating systems such as Android, iOS, Windows Phone, and Chrome OS
support a cooperating program abstraction. Instead of placing all functionality
into a single program, programs cooperate to complete tasks requested by users.
However, untrusted programs may exploit interactions with other programs to
obtain unauthorized access to system sensors either directly or through
privileged services. Researchers have proposed that programs should only be
authorized to access system sensors on a user-approved input event, but these
methods do not account for possible delegation done by the program receiving
the user input event. Furthermore, proposed delegation methods do not enable
users to control the use of their input events accurately. In this paper, we
propose ENTRUST, a system that enables users to authorize sensor operations
that follow their input events, even if the sensor operation is performed by a
program different from the program receiving the input event. ENTRUST tracks
user input as well as delegation events and restricts the execution of such
events to compute unambiguous delegation paths to enable accurate and reusable
authorization of sensor operations. To demonstrate this approach, we implement
the ENTRUST authorization system for Android. We find, via a laboratory user
study, that attacks can be prevented at a much higher rate (54-64%
improvement); and via a field user study, that ENTRUST requires no more than
three additional authorizations per program with respect to the first-use
approach, while incurring modest performance (<1%) and memory overheads (5.5 KB
per program). | [1, 0, 0, 0, 0, 0] |
Title: Identification of Treatment Effects under Conditional Partial Independence,
Abstract: Conditional independence of treatment assignment from potential outcomes is a
commonly used but nonrefutable assumption. We derive identified sets for
various treatment effect parameters under nonparametric deviations from this
conditional independence assumption. These deviations are defined via a
conditional treatment assignment probability, which makes it straightforward to
interpret. Our results can be used to assess the robustness of empirical
conclusions obtained under the baseline conditional independence assumption. | [0, 0, 0, 1, 0, 0] |
Title: First demonstration of emulsion multi-stage shifter for accelerator neutrino experiment in J-PARC T60,
Abstract: We describe the first ever implementation of an emulsion multi-stage shifter
in an accelerator neutrino experiment. The system was installed in the neutrino
monitor building in J-PARC as a part of a test experiment T60 and stable
operation was maintained for a total of 126.6 days. By applying time
information to emulsion films, various results were obtained. Time resolutions
of 5.3 to 14.7 s were evaluated in an operation spanning 46.9 days (time
resolved numbers of 3.8--1.4$\times10^{5}$). By using timing and spatial
information, a reconstruction of coincident events that consisted of high
multiplicity events and vertex events, including neutrino events was performed.
Emulsion events were matched to events observed by INGRID, one of the near
detectors of the T2K experiment, with high reliability (98.5\%), and hybrid
analysis was established via use of the multi-stage shifter. The results
demonstrate that the multi-stage shifter is feasible for use in neutrino
experiments. | [0, 1, 0, 0, 0, 0] |
Title: Deep Generative Adversarial Networks for Compressed Sensing Automates MRI,
Abstract: Magnetic resonance image (MRI) reconstruction is a severely ill-posed linear
inverse task demanding time and resource intensive computations that can
substantially trade off {\it accuracy} for {\it speed} in real-time imaging. In
addition, state-of-the-art compressed sensing (CS) analytics are not cognizant
of the image {\it diagnostic quality}. To cope with these challenges we put
forth a novel CS framework that permeates benefits from generative adversarial
networks (GAN) to train a (low-dimensional) manifold of diagnostic-quality MR
images from historical patients. Leveraging a mixture of least-squares (LS)
GANs and pixel-wise $\ell_1$ cost, a deep residual network with skip
connections is trained as the generator that learns to remove the {\it
aliasing} artifacts by projecting onto the manifold. LSGAN learns the texture
details, while $\ell_1$ controls the high-frequency noise. A multilayer
convolutional neural network is then jointly trained based on diagnostic
quality images to discriminate the projection quality. The test phase performs
feed-forward propagation over the generator network that demands a very low
computational overhead. Extensive evaluations are performed on a large
contrast-enhanced MR dataset of pediatric patients. In particular, images rated
by expert radiologists corroborate that GANCS retrieves high-contrast images
with detailed texture relative to conventional CS and pixel-wise schemes. In
addition, it offers reconstruction in a few milliseconds, two orders of
magnitude faster than state-of-the-art CS-MRI schemes. | [1, 0, 0, 1, 0, 0] |
Title: On multiplicative independence of rational function iterates,
Abstract: We give lower bounds for the degree of multiplicative combinations of
iterates of rational functions (with certain exceptions) over a general field,
establishing the multiplicative independence of said iterates. This leads to a
generalisation of Gao's method for constructing elements in the finite field
$\mathbb{F}_{q^n}$ whose orders are larger than any polynomial in $n$ when $n$
becomes large. Additionally, we discuss the finiteness of polynomials which
translate a given finite set of polynomials to become multiplicatively
dependent. | [0, 0, 1, 0, 0, 0] |
Title: Waist size for cusps in hyperbolic 3-manifolds II,
Abstract: The waist size of a cusp in an orientable hyperbolic 3-manifold is the length
of the shortest nontrivial curve generated by a parabolic isometry in the
maximal cusp boundary. Previously, it was shown that the smallest possible
waist size, which is 1, is realized only by the cusp in the figure-eight knot
complement. In this paper, it is proved that the next two smallest waist sizes
are realized uniquely for the cusps in the $5_2$ knot complement and the
manifold obtained by (2,1)-surgery on the Whitehead link. One application is an
improvement on the universal upper bound for the length of an unknotting tunnel
in a 2-cusped hyperbolic 3-manifold. | [0, 0, 1, 0, 0, 0] |
Title: Modular Labelled Sequent Calculi for Abstract Separation Logics,
Abstract: Abstract separation logics are a family of extensions of Hoare logic for
reasoning about programs that manipulate resources such as memory locations.
These logics are "abstract" because they are independent of any particular
concrete resource model. Their assertion languages, called propositional
abstract separation logics (PASLs), extend the logic of (Boolean) Bunched
Implications (BBI) in various ways. In particular, these logics contain the
connectives $*$ and $-\!*$, denoting the composition and extension of resources
respectively.
This added expressive power comes at a price since the resulting logics are
all undecidable. Given their wide applicability, even a semi-decision procedure
for these logics is desirable. Although several PASLs and their relationships
with BBI are discussed in the literature, the proof theory and automated
reasoning for these logics were open problems solved by the conference version
of this paper, which developed a modular proof theory for various PASLs using
cut-free labelled sequent calculi. This paper non-trivially improves upon this
previous work by giving a general framework of calculi in which any new axiom
in the logic satisfying a certain form corresponds to an inference rule in our
framework, and the completeness proof is generalised to consider such axioms.
Our base calculus handles Calcagno et al.'s original logic of separation
algebras by adding sound rules for partial-determinism and cancellativity,
while preserving cut-elimination. We then show that many important properties
in separation logic, such as indivisible unit, disjointness, splittability, and
cross-split, can be expressed in our general axiom form. Thus our framework
offers inference rules and completeness for these properties for free. Finally,
we show how our calculi reduce to calculi with global label substitutions,
enabling more efficient implementation. | [1, 0, 0, 0, 0, 0] |
Title: Superfluidity and relaxation dynamics of a laser-stirred 2D Bose gas,
Abstract: We investigate the superfluid behavior of a two-dimensional (2D) Bose gas of
$^{87}$Rb atoms using classical field dynamics. In the experiment by R.
Desbuquois \textit{et al.}, Nat. Phys. \textbf{8}, 645 (2012), a 2D
quasicondensate in a trap is stirred by a blue-detuned laser beam along a
circular path around the trap center. Here, we study this experiment from a
theoretical perspective. The heating induced by stirring increases rapidly
above a velocity $v_c$, which we define as the critical velocity. We identify
the superfluid, the crossover, and the thermal regime by a finite, a sharply
decreasing, and a vanishing critical velocity, respectively. We demonstrate
that the onset of heating occurs due to the creation of vortex-antivortex
pairs. A direct comparison of our numerical results to the experimental ones
shows good agreement, if a systematic shift of the critical phase-space density
is included. We relate this shift to the absence of thermal equilibrium between
the condensate and the thermal wings, which were used in the experiment to
extract the temperature. We expand on this observation by studying the full
relaxation dynamics between the condensate and the thermal cloud. | [0, 1, 0, 0, 0, 0] |
Title: A Study of Energy Trading in a Low-Voltage Network: Centralised and Distributed Approaches,
Abstract: Over the past years, distributed energy resources (DER) have been the object
of many studies, which recognise and establish their emerging role in the
future of power systems. However, the implementation of many scenarios and
mechanisms remains challenging. This paper provides an overview of a local
energy market and explores the approaches in which consumers and prosumers take
part in this market. Therefore, the purpose of this paper is to review the
benefits of local markets for users. This study assesses the performance of
distributed and centralised trading mechanisms, comparing scenarios where the
objective of the exchange may be based on individual or social welfare.
Simulation results show the advantages of local markets and demonstrate the
importance of advancing the understanding of local markets. | [1, 0, 0, 0, 0, 0] |
Title: Review of methods for assessing the causal effect of binary interventions from aggregate time-series observational data,
Abstract: Researchers are often interested in assessing the impact of an intervention
on an outcome of interest in situations where the intervention is
non-randomised, information is available at an aggregate level, the
intervention is only applied to one or few units, the intervention is binary,
and there are outcome measurements at multiple time points. In this paper, we
review existing methods for causal inference in the setup just outlined. We
detail the assumptions underlying each method, emphasise connections between
the different approaches and provide guidelines regarding their practical
implementation. Several open problems are identified thus highlighting the need
for future research. | [0, 0, 0, 1, 0, 0] |
Title: On the Difference Between Closest, Furthest, and Orthogonal Pairs: Nearly-Linear vs Barely-Subquadratic Complexity in Computational Geometry,
Abstract: Point location problems for $n$ points in $d$-dimensional Euclidean space
(and $\ell_p$ spaces more generally) have typically had two kinds of
running-time solutions:
* (Nearly-Linear) less than $d^{poly(d)} \cdot n \log^{O(d)} n$ time, or
* (Barely-Subquadratic) $f(d) \cdot n^{2-1/\Theta(d)}$ time, for various $f$.
For small $d$ and large $n$, "nearly-linear" running times are generally
feasible, while "barely-subquadratic" times are generally infeasible. For
example, in the Euclidean metric, finding a Closest Pair among $n$ points in
${\mathbb R}^d$ is nearly-linear, solvable in $2^{O(d)} \cdot n \log^{O(1)} n$
time, while known algorithms for Furthest Pair (the diameter of the point set)
are only barely-subquadratic, requiring $\Omega(n^{2-1/\Theta(d)})$ time. Why
do these proximity problems have such different time complexities? Is there a
barrier to obtaining nearly-linear algorithms for problems which are currently
only barely-subquadratic?
We give a novel exact and deterministic self-reduction for the Orthogonal
Vectors problem on $n$ vectors in $\{0,1\}^d$ to $n$ vectors in ${\mathbb
Z}^{\omega(\log d)}$ that runs in $2^{o(d)}$ time. As a consequence,
barely-subquadratic problems such as Euclidean diameter, Euclidean bichromatic
closest pair, ray shooting, and incidence detection do not have
$O(n^{2-\epsilon})$ time algorithms (in Turing models of computation) for
dimensionality $d = \omega(\log \log n)^2$, unless the popular Orthogonal
Vectors Conjecture and the Strong Exponential Time Hypothesis are false. That
is, while poly-log-log-dimensional Closest Pair is in $n^{1+o(1)}$ time, the
analogous case of Furthest Pair can encode larger-dimensional problems
conjectured to require $n^{2-o(1)}$ time. We also show that the All-Nearest
Neighbors problem in $\omega(\log n)$ dimensions requires $n^{2-o(1)}$ time to
solve, assuming either of the above conjectures. | [1, 0, 0, 0, 0, 0] |
Title: Efficient Nonparametric Bayesian Inference For X-Ray Transforms,
Abstract: We consider the statistical inverse problem of recovering a function $f: M
\to \mathbb R$, where $M$ is a smooth compact Riemannian manifold with
boundary, from measurements of general $X$-ray transforms $I_a(f)$ of $f$,
corrupted by additive Gaussian noise. For $M$ equal to the unit disk with
`flat' geometry and $a=0$ this reduces to the standard Radon transform, but our
general setting allows for anisotropic media $M$ and can further model local
`attenuation' effects -- both highly relevant in practical imaging problems
such as SPECT tomography. We propose a nonparametric Bayesian inference
approach based on standard Gaussian process priors for $f$. The posterior
reconstruction of $f$ corresponds to a Tikhonov regulariser with a reproducing
kernel Hilbert space norm penalty that does not require the calculation of the
singular value decomposition of the forward operator $I_a$. We prove
Bernstein-von Mises theorems that entail that posterior-based inferences such
as credible sets are valid and optimal from a frequentist point of view for a
large family of semi-parametric aspects of $f$. In particular we derive the
asymptotic distribution of smooth linear functionals of the Tikhonov
regulariser, which is shown to attain the semi-parametric Cramér-Rao
information bound. The proofs rely on an invertibility result for the `Fisher
information' operator $I_a^*I_a$ between suitable function spaces, a result of
independent interest that relies on techniques from microlocal analysis. We
illustrate the performance of the proposed method via simulations in various
settings. | [0, 0, 1, 1, 0, 0] |
Title: Glass-Box Program Synthesis: A Machine Learning Approach,
Abstract: Recently proposed models which learn to write computer programs from data use
either input/output examples or rich execution traces. Instead, we argue that a
novel alternative is to use a glass-box loss function, given as a program
itself that can be directly inspected. Glass-box optimization covers a wide
range of problems, from computing the greatest common divisor of two integers,
to learning-to-learn problems.
In this paper, we present an intelligent search system which learns, given
the partial program and the glass-box problem, the probabilities over the space
of programs. We empirically demonstrate that our informed search procedure
leads to significant improvements compared to brute-force program search, both
in terms of accuracy and time. For our experiments we use rich context free
grammars inspired by number theory, text processing, and algebra. Our results
show that (i) performing 4 rounds of our framework typically solves about 70%
of the target problems, (ii) our framework can improve itself even in domain
agnostic scenarios, and (iii) it can solve problems that would be otherwise too
slow to solve with brute-force search. | [1, 0, 0, 1, 0, 0] |
Title: Ramsey Classes with Closure Operations (Selected Combinatorial Applications),
Abstract: We state the Ramsey property of classes of ordered structures with closures
and given local properties. This generalises many old and new results: the
Nešetřil-Rödl Theorem, the author's Ramsey lift of bowtie-free
graphs as well as the Ramsey Theorem for Finite Models (i.e. structures with
both functions and relations), thus providing the ultimate generalisation of the
Structural Ramsey Theorem. We give here a more concise reformulation of the
author's recent paper "All those Ramsey classes (Ramsey classes with closures and
forbidden homomorphisms)", and the main purpose of this paper is to show several
applications. In particular, we prove the Ramsey property of ordered sets with
equivalences on the power set, a Ramsey theorem for Steiner systems, a Ramsey
theorem for resolvable designs, and partial Ramsey-type results for
$H$-factorizable graphs. All of these results are natural and easy to state, yet
the proofs involve most of the theory developed. | [1, 0, 1, 0, 0, 0] |
Title: On Optimal Spectrum Access of Cognitive Relay With Finite Packet Buffer,
Abstract: We investigate a cognitive radio system where secondary user (SU) relays
primary user (PU) packets using two-phase relaying. The SU transmits its own
packets with some access probability in the relaying phase using time sharing.
The PU and SU have queues of finite capacity, which results in packet loss when
the queues are full. Utilizing knowledge of the relay queue state, the SU aims to
maximize its packet throughput while keeping the packet loss probability of the
PU below a threshold. By exploiting the structure of the problem, we formulate it
as a linear program and find the optimal access policy of the SU. We also propose
low-complexity sub-optimal access policies, namely constant probability
transmission and step transmission. Numerical results are presented to compare
the performance of the proposed methods and to study the effect of queue sizes on
packet throughput. | [1, 0, 0, 0, 0, 0] |
Title: Visibility of minorities in social networks,
Abstract: Homophily can put minority groups at a disadvantage by restricting their
ability to establish links with people from a majority group. This can limit
the overall visibility of minorities in the network. Building on a
Barabási-Albert model variation with groups and homophily, we show how the
visibility of minority groups in social networks is a function of (i) their
relative group size and (ii) the presence or absence of homophilic behavior. We
provide an analytical solution for this problem and demonstrate the existence
of asymmetric behavior. Finally, we study the visibility of minority groups in
examples of real-world social networks: sexual contacts, scientific
collaboration, and scientific citation. Our work presents a foundation for
assessing the visibility of minority groups in social networks in which
homophilic or heterophilic behaviour is present. | [1, 1, 0, 0, 0, 0] |
Title: Solvability and microlocal analysis of the fractional Eringen wave equation,
Abstract: We discuss unique existence and microlocal regularity properties of Sobolev
space solutions to the fractional Eringen wave equation, initially given in the
form of a system of equations in which the classical non-local Eringen
constitutive equation is generalized by employing space-fractional derivatives.
Numerical examples illustrate the shape of the solutions in dependence on the
order of the space-fractional derivative. | [0, 0, 1, 0, 0, 0] |
Title: Confidence Interval Estimators for MOS Values,
Abstract: For the quantification of QoE, subjects often provide individual rating
scores on certain rating scales which are then aggregated into Mean Opinion
Scores (MOS). From the observed sample data, the expected value is to be
estimated. While the sample average only provides a point estimator, confidence
intervals (CI) are an interval estimate which contains the desired expected
value with a given confidence level. In subjective studies, the number of
subjects performing the test is typically small, especially in lab
environments. The used rating scales are bounded and often discrete like the
5-point ACR rating scale. Therefore, we review statistical approaches in the
literature for their applicability in the QoE domain for MOS interval
estimation (instead of having only a point estimator, which is the MOS). We
provide a conservative estimator based on the SOS hypothesis and binomial
distributions and compare its performance (CI width, outlier ratio of CI
violating the rating scale bounds) and coverage probability with well known CI
estimators. We show that the provided CI estimator works very well in practice
for MOS interval estimators, while the commonly used studentized CIs suffer
from a positive outlier ratio, i.e., CIs beyond the bounds of the rating scale.
As an alternative, bootstrapping, i.e., random sampling of the subjective
ratings with replacement, is an efficient CI estimator leading to typically
smaller CIs, but lower coverage than the proposed estimator. | [1, 0, 0, 0, 0, 0] |
Title: Removal of Batch Effects using Generative Adversarial Networks,
Abstract: Many biological data analysis processes like Cytometry or Next Generation
Sequencing (NGS) produce massive amounts of data which need to be processed in
batches for down-stream analysis. Such datasets are prone to technical
variations due to differences in handling the batches, possibly at different
times, by different experimenters or under other different conditions. This
adds variation to the batches coming from the same source sample. These
variations are known as Batch Effects. It is possible that these variations and
natural variations due to biology confound, but such situations can be avoided
by performing experiments in a carefully planned manner. Batch effects can
hamper down-stream analysis and may also cause results to be inconclusive.
Thus, it is essential to correct for these effects. Some recent methods propose
deep learning based solutions to solve this problem. We demonstrate that this
can be solved using a novel Generative Adversarial Networks (GANs) based
framework. The advantage of using this framework over other prior approaches is
that here we do not need to choose a reproducing kernel and define its
parameters. We demonstrate results of our framework on a Mass Cytometry
dataset. | [1, 0, 0, 1, 0, 0] |
Title: The Current-Phase Relation of Ferromagnetic Josephson Junction Between Triplet Superconductors,
Abstract: We study the Josephson effect of a $\rm{T_1 F T_2}$ junction, consisting of
spin-triplet superconductors (T), a weak ferromagnetic metal (F), and
ferromagnetic insulating interfaces. Two types of the triplet order parameters
are considered; $(k_x +ik_y)\hat{z}$ and $k_x \hat{x}+k_y\hat{y}$. We compute
the current density in the ballistic limit by using the generalized
quasiclassical formalism developed to take into account the interference effect
of the multilayered ferromagnetic junction. We discuss in detail how the
current-phase relation is affected by orientations of the d-vectors of
superconductor and the magnetizations of the ferromagnetic tunneling barrier.
A general condition for the anomalous Josephson effect is also derived. | [0, 1, 0, 0, 0, 0] |
Title: Superregular grammars do not provide additional explanatory power but allow for a compact analysis of animal song,
Abstract: A pervasive belief with regard to the differences between human language and
animal vocal sequences (song) is that they belong to different classes of
computational complexity, with animal song belonging to regular languages,
whereas human language is superregular. This argument, however, lacks empirical
evidence since superregular analyses of animal song are understudied. The goal
of this paper is to perform a superregular analysis of animal song, using data
from gibbons as a case study, and demonstrate that a superregular analysis can
be effectively used with non-human data. A key finding is that a superregular
analysis does not increase explanatory power but rather provides for compact
analysis. For instance, fewer grammatical rules are necessary once
superregularity is allowed. This pattern is analogous to a previous
computational analysis of human language, and accordingly, the null hypothesis,
that human language and animal song are governed by the same type of
grammatical systems, cannot be rejected. | [0, 0, 0, 0, 1, 0] |
Title: Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce,
Abstract: In this paper, we present a unified end-to-end approach to build a large
scale Visual Search and Recommendation system for e-commerce. Previous works
have targeted these problems in isolation. We believe a more effective and
elegant solution could be obtained by tackling them together. We propose a
unified Deep Convolutional Neural Network architecture, called VisNet, to learn
embeddings to capture the notion of visual similarity, across several semantic
granularities. We demonstrate the superiority of our approach for the task of
image retrieval, by comparing against the state-of-the-art on the Exact
Street2Shop dataset. We then share the design decisions and trade-offs made
while deploying the model to power Visual Recommendations across a catalog of
50M products, supporting 2K queries a second at Flipkart, India's largest
e-commerce company. The deployment of our solution has yielded a significant
business impact, as measured by the conversion-rate. | [1, 0, 0, 0, 0, 0] |
Title: The effect of inhomogeneous phase on the critical temperature of smart meta-superconductor MgB2,
Abstract: The critical temperature ($T_C$) of MgB$_2$, one of the key factors limiting
its application, is highly desirable to improve. On the basis of the
meta-material structure, we prepared a smart meta-superconductor structure
consisting of MgB$_2$ micro-particles and inhomogeneous phases by an ex situ
process. The effect of the inhomogeneous phase on the $T_C$ of the smart
meta-superconductor MgB$_2$ was investigated. Results showed that the onset
temperature ($T_C^{on}$) of the doped samples was lower than that of pure MgB$_2$.
However, the offset temperature ($T_C^{off}$) of the sample doped with
Y$_2$O$_3$:Eu$^{3+}$ nanosheets with a thickness of 2-3 nm, which is much less
than the coherence length of MgB$_2$, is 1.2 K higher than that of pure MgB$_2$.
The effect of the applied electric field on the $T_C$ of the samples was also
studied. Results indicated that with increasing current, $T_C^{on}$ is slightly
increased in the samples doped with different inhomogeneous phases. When
increasing the current, the $T_C^{off}$ of the samples doped with nonluminous
inhomogeneous phases decreased. However, the $T_C^{off}$ of the samples doped
with luminescent inhomogeneous phases increased and then decreased with
increasing current. | [0, 1, 0, 0, 0, 0] |
Title: Syllable-aware Neural Language Models: A Failure to Beat Character-aware Ones,
Abstract: Syllabification does not seem to improve word-level RNN language modeling
quality when compared to character-based segmentation. However, our best
syllable-aware language model, achieving performance comparable to the
competitive character-aware model, has 18%-33% fewer parameters and is trained
1.2-2.2 times faster. | [1, 0, 0, 1, 0, 0] |
Title: Persistent Entropy for Separating Topological Features from Noise in Vietoris-Rips Complexes,
Abstract: Persistent homology studies the evolution of k-dimensional holes along a
nested sequence of simplicial complexes (called a filtration). The set of bars
(i.e. intervals) representing birth and death times of k-dimensional holes
along such sequence is called the persistence barcode. k-Dimensional holes with
short lifetimes are informally considered to be "topological noise", and those
with long lifetimes are considered to be "topological features" associated to
the filtration. Persistent entropy is defined as the Shannon entropy of the
persistence barcode of a given filtration. In this paper we present new
important properties of persistent entropy of Čech and Vietoris-Rips
filtrations. Among these properties, we focus on the stability theorem, which
allows persistent entropy to be used for comparing persistence barcodes. Later,
we derive a simple method for separating topological noise from features in
Vietoris-Rips filtrations. | [1, 0, 0, 0, 0, 0] |
Title: Intrinsic alignment of redMaPPer clusters: cluster shape - matter density correlation,
Abstract: We measure the alignment of the shapes of galaxy clusters, as traced by their
satellite distributions, with the matter density field using the public
redMaPPer catalogue based on SDSS-DR8, which contains 26 111 clusters up to
z~0.6. The clusters are split into nine redshift and richness samples; in each
of them we detect a positive alignment, showing that clusters point towards
density peaks. We interpret the measurements within the tidal alignment
paradigm, allowing for a richness and redshift dependence. The intrinsic
alignment (IA) amplitude at the pivot redshift $z=0.3$ and pivot richness
$\lambda=30$ is $A_{IA}^{gen}=12.6_{-1.2}^{+1.5}$. We obtain tentative evidence
that the signal increases towards higher richness and lower redshift. Our
measurements agree well with results of maxBCG clusters and with
dark-matter-only simulations. Comparing our results to IA measurements of
luminous red galaxies, we find that the IA amplitude of galaxy clusters forms a
smooth extension towards higher mass. This suggests that these systems share a
common alignment mechanism, which can be exploited to improve our physical
understanding of IA. | [0, 1, 0, 0, 0, 0] |
Title: The deterioration of materials from air pollution as derived from satellite and ground based observations,
Abstract: Dose-Response Functions (DRFs) are widely used in estimating corrosion and/or
soiling levels of materials used in constructions and cultural monuments. These
functions quantify the effects of air pollution and environmental parameters on
different materials through ground based measurements of specific air
pollutants and climatic parameters. Here, we propose a new approach where
available satellite observations are used instead of ground-based data. Through
this approach, the usage of DRFs is expanded in cases/areas where there is no
availability of in situ measurements, introducing also a totally new field
where satellite data can be shown to be very helpful. In the present work
satellite observations made by MODIS (MODerate resolution Imaging
Spectroradiometer) on board Terra and Aqua, OMI (Ozone Monitoring Instrument)
on board Aura and AIRS (Atmospheric Infrared Sounder) on board Aqua have been
used. | [0, 1, 0, 0, 0, 0] |
Title: Persistent Monitoring of Stochastic Spatio-temporal Phenomena with a Small Team of Robots,
Abstract: This paper presents a solution for persistent monitoring of real-world
stochastic phenomena, where the underlying covariance structure changes sharply
across time, using a small number of mobile robot sensors. We propose an
adaptive solution for the problem where stochastic real-world dynamics are
modeled as a Gaussian Process (GP). The belief on the underlying covariance
structure is learned from recently observed dynamics as a Gaussian Mixture (GM)
in the low-dimensional hyper-parameters space of the GP and adapted across time
using Sequential Monte Carlo methods. Each robot samples a belief point from
the GM and locally optimizes a set of informative regions by greedy
maximization of the submodular entropy function. The key contributions of this
paper are threefold: adapting the belief on the covariance using Markov Chain
Monte Carlo (MCMC) sampling such that particles survive even under sharp
covariance changes across time; exploiting the belief to transform the problem
of entropy maximization into a decentralized one; and developing an
approximation algorithm to maximize entropy on a set of informative regions in
the continuous space. We illustrate the application of the proposed solution
through extensive simulations using an artificial dataset and multiple real
datasets from fixed sensor deployments, and compare it to three competing
state-of-the-art approaches. | [1, 0, 0, 1, 0, 0] |
Title: Unified Gas-kinetic Scheme with Multigrid Convergence for Rarefied Flow Study,
Abstract: The unified gas kinetic scheme (UGKS) is a direct modeling method based on
the gas dynamical model on the mesh size and time step scales. With the
implementation of particle transport and collision in a time-dependent flux
function, the UGKS can recover multiple flow physics from the kinetic particle
transport to the hydrodynamic wave propagation. In comparison with direct
simulation Monte Carlo (DSMC), the equations-based UGKS can use the implicit
techniques in the updates of macroscopic conservative variables and microscopic
distribution function. The implicit UGKS significantly increases the
convergence speed for steady flow computations, especially in the highly
rarefied and near continuum regime. In order to further improve the
computational efficiency, for the first time a geometric multigrid technique is
introduced into the implicit UGKS, where the prediction step for the
equilibrium state and the evolution step for the distribution function are both
treated with multigrid acceleration. The multigrid implicit UGKS (MIUGKS) is
used in the non-equilibrium flow study, which includes microflow, such as
lid-driven cavity flow and the flow passing through a finite-length flat plate,
and high-speed flow, such as supersonic flow over a square cylinder. The MIUGKS
shows 5 to 9 times efficiency increase over the previous implicit scheme. For
the low speed microflow, the efficiency of MIUGKS is several orders of
magnitude higher than the DSMC. Even for the hypersonic flow at Mach number 5
and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the
DSMC method for a convergent steady state solution. | [0, 1, 0, 0, 0, 0] |
Title: Learning agile and dynamic motor skills for legged robots,
Abstract: Legged robots pose one of the greatest challenges in robotics. Dynamic and
agile maneuvers of animals cannot be imitated by existing methods that are
crafted by humans. A compelling alternative is reinforcement learning, which
requires minimal craftsmanship and promotes the natural evolution of a control
policy. However, so far, reinforcement learning research for legged robots is
mainly limited to simulation, and only few and comparably simple examples have
been deployed on real systems. The primary reason is that training with real
robots, particularly with dynamically balancing systems, is complicated and
expensive. In the present work, we introduce a method for training a neural
network policy in simulation and transferring it to a state-of-the-art legged
system, thereby leveraging fast, automated, and cost-effective data generation
schemes. The approach is applied to the ANYmal robot, a sophisticated
medium-dog-sized quadrupedal system. Using policies trained in simulation, the
quadrupedal machine achieves locomotion skills that go beyond what had been
achieved with prior methods: ANYmal is capable of precisely and
energy-efficiently following high-level body velocity commands, running faster
than before, and recovering from falling even in complex configurations. | [1, 0, 0, 1, 0, 0] |
Title: Optimal partition problems for the fractional laplacian,
Abstract: In this work, we prove an existence result for an optimal partition problem
of the form $$\min \{F_s(A_1,\dots,A_m)\colon A_i \in \mathcal{A}_s, \, A_i\cap
A_j =\emptyset \mbox{ for } i\neq j\},$$ where $F_s$ is a cost functional with
suitable assumptions of monotonicity and lower semicontinuity, $\mathcal{A}_s$
is the class of admissible domains and the condition $A_i\cap A_j =\emptyset$
is understood in the sense of the Gagliardo $s$-capacity, where $0<s<1$.
Examples of this type of problem are related to the fractional eigenvalues. In
addition, we prove some type of convergence of the $s$-minimizers to the
minimizer of the problem with $s=1$, studied in \cite{Bucur-Buttazzo-Henrot}. | [0, 0, 1, 0, 0, 0] |
Title: New results on sum-product type growth over fields,
Abstract: We prove a range of new sum-product type growth estimates over a general
field $\mathbb{F}$, in particular the special case $\mathbb{F}=\mathbb{F}_p$.
They are unified by the theme of "breaking the $3/2$ threshold", epitomising
the previous state of the art. These estimates stem from specially suited
applications of incidence bounds over $\mathbb{F}$, which apply to higher
moments of representation functions.
We establish the estimate $|R[A]| \gtrsim |A|^{8/5}$ for cardinality of the
set $R[A]$ of distinct cross-ratios defined by triples of elements of a
(sufficiently small if $\mathbb{F}$ has positive characteristic, similarly for
the rest of the estimates) set $A\subset \mathbb{F}$, pinned at infinity. The
cross-ratio naturally arises in various sum-product type questions of
projective nature and is the unifying concept underlying most of our results.
It enables one to take advantage of its symmetry properties as an onset of
growth of, for instance, products of difference sets. The geometric nature of
the cross-ratio enables us to break the version of the above threshold for the
minimum number of distinct triangle areas $Ouu'$, defined by points $u,u'$ of a
non-collinear point set $P\subset \mathbb{F}^2$.
Another instance of breaking the threshold is showing that if $A$ is
sufficiently small and has additive doubling constant $M$, then $|AA|\gtrsim
M^{-2}|A|^{14/9}$. This result has a second moment version, which allows for
new upper bounds for the number of collinear point triples in the set $A\times
A\subset \mathbb{F}^2$, the quantity often arising in applications of geometric
incidence estimates. | [0, 0, 1, 0, 0, 0] |
Title: Structural and electronic properties of germanene on MoS$_2$,
Abstract: To date, germanene has only been synthesized on metallic substrates. A
metallic substrate is usually detrimental for the two-dimensional Dirac nature
of germanene because the important electronic states near the Fermi level of
germanene can hybridize with the electronic states of the metallic substrate.
Here we report the successful synthesis of germanene on molybdenum disulfide
(MoS$_2$), a band gap material. Pre-existing defects in the MoS$_2$ surface act
as preferential nucleation sites for the germanene islands. The lattice
constant of the germanene layer (3.8 $\pm$ 0.2 \AA) is about 20\% larger than
the lattice constant of the MoS$_2$ substrate (3.16 \AA). Scanning tunneling
spectroscopy measurements and density functional theory calculations reveal
that there are, besides the linearly dispersing bands at the $K$ points, two
parabolic bands that cross the Fermi level at the $\Gamma$ point. | [0, 1, 0, 0, 0, 0] |
Title: Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT,
Abstract: We investigate pivot-based translation between related languages in a low
resource, phrase-based SMT setting. We show that a subword-level pivot-based
SMT model using a related pivot language is substantially better than word and
morpheme-level pivot models. It is also highly competitive with the best direct
translation model, which is encouraging as no direct source-target training
corpus is used. We also show that combining multiple related language pivot
models can rival a direct translation model. Thus, the use of subwords as
translation units coupled with multiple related pivot languages can compensate
for the lack of a direct parallel corpus. | [
1,
0,
0,
0,
0,
0
] |
Title: Multiband Electronic Structure of Magnetic Quantum Dots: Numerical Studies,
Abstract: Semiconductor quantum dots (QDs) doped with magnetic impurities have been a
focus of continuous research for a couple of decades. A significant effort has
been devoted to studies of magnetic polarons (MP) in these nanostructures.
These collective states arise through exchange interaction between a carrier
confined in a QD and localized spins of the magnetic impurities (typically:
Mn). We discuss our theoretical description of various MP properties in
self-assembled QDs. We present a self-consistent, temperature-dependent
approach to MPs formed by a valence band hole. We use the Luttinger-Kohn k.p
Hamiltonian to account for the important effects of spin-orbit interaction. | [
0,
1,
0,
0,
0,
0
] |
Title: Scattered Sentences have Few Separable Randomizations,
Abstract: In the paper "Randomizations of Scattered Sentences", Keisler showed that if
Martin's axiom for aleph one holds, then every scattered sentence has few
separable randomizations, and asked whether the conclusion could be proved in
ZFC alone. We show here that the answer is "yes". It follows that the absolute
Vaught conjecture holds if and only if every $L_{\omega_1\omega}$-sentence with
few separable randomizations has countably many countable models. | [
0,
0,
1,
0,
0,
0
] |
Title: G2 instantons and the Seiberg-Witten monopoles,
Abstract: I describe a relation (mostly conjectural) between the Seiberg-Witten
monopoles, Fueter sections, and G2 instantons. In the last part of this article
I gathered some open questions connected with this relation. | [
0,
0,
1,
0,
0,
0
] |
Title: Optimization by a quantum reinforcement algorithm,
Abstract: A reinforcement algorithm solves a classical optimization problem by
introducing a feedback to the system which slowly changes the energy landscape
and converges the algorithm to an optimal solution in the configuration space.
Here, we use this strategy to concentrate (localize) preferentially the wave
function of a quantum particle, which explores the configuration space of the
problem, on an optimal configuration. We examine the method by solving
numerically the equations governing the evolution of the system, which are
similar to the nonlinear Schrödinger equations, for small problem sizes. In
particular, we observe that reinforcement increases the minimal energy gap of
the system in a quantum annealing algorithm. Our numerical simulations and the
latter observation show that such quantum feedback might be helpful in
solving a computationally hard optimization problem by a quantum reinforcement
algorithm. | [
1,
1,
0,
0,
0,
0
] |
Title: Fast Image Processing with Fully-Convolutional Networks,
Abstract: We present an approach to accelerating a wide variety of image processing
operators. Our approach uses a fully-convolutional network that is trained on
input-output pairs that demonstrate the operator's action. After training, the
original operator need not be run at all. The trained network operates at full
resolution and runs in constant time. We investigate the effect of network
architecture on approximation accuracy, runtime, and memory footprint, and
identify a specific architecture that balances these considerations. We
evaluate the presented approach on ten advanced image processing operators,
including multiple variational models, multiscale tone and detail manipulation,
photographic style transfer, nonlocal dehazing, and nonphotorealistic
stylization. All operators are approximated by the same model. Experiments
demonstrate that the presented approach is significantly more accurate than
prior approximation schemes. It increases approximation accuracy as measured by
PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from
27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to
the most accurate prior approximation scheme, while being the fastest. We show
that our models generalize across datasets and across resolutions, and
investigate a number of extensions of the presented approach. The results are
shown in the supplementary video at this https URL | [
1,
0,
0,
0,
0,
0
] |
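The abstract above describes approximating image processing operators with a fully-convolutional network trained on input-output pairs and run at full resolution. As a rough illustration only, the sketch below builds a small dilated fully-convolutional stack in PyTorch; the layer widths, depth, optimizer, and loss are assumptions for illustration and do not reproduce the specific architecture identified in the paper.

```python
# Hypothetical sketch: a small dilated fully-convolutional approximator trained on
# (input, operator(input)) image pairs. Not the paper's architecture.
import torch
import torch.nn as nn

class DilatedFCN(nn.Module):
    def __init__(self, channels=24, depth=5):
        super().__init__()
        layers, in_ch = [], 3
        for d in [2 ** i for i in range(depth)]:          # growing dilation widens context
            layers += [nn.Conv2d(in_ch, channels, 3, padding=d, dilation=d),
                       nn.LeakyReLU(0.2)]
            in_ch = channels
        layers += [nn.Conv2d(in_ch, 3, 1)]                # map back to an RGB image
        self.net = nn.Sequential(*layers)

    def forward(self, x):                                 # full-resolution in and out
        return self.net(x)

model = DilatedFCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(2, 3, 64, 64)          # stand-in input patches
y = x.clamp(0.2, 0.8)                 # stand-in "operator" output to be imitated
loss = ((model(x) - y) ** 2).mean()   # MSE between network output and operator output
loss.backward(); opt.step()
```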
Title: Revealing patterns in HIV viral load data and classifying patients via a novel machine learning cluster summarization method,
Abstract: HIV RNA viral load (VL) is an important outcome variable in studies of HIV
infected persons. There exists only a handful of methods which classify
patients by viral load patterns. Most methods place limits on the use of viral
load measurements, are often specific to a particular study design, and do not
account for complex, temporal variation. To address this issue, we propose a
set of four unambiguous computable characteristics (features) of time-varying
HIV viral load patterns, along with a novel centroid-based classification
algorithm, which we use to classify a population of 1,576 HIV positive clinic
patients into one of five different viral load patterns (clusters) often found
in the literature: durably suppressed viral load (DSVL), sustained low viral
load (SLVL), sustained high viral load (SHVL), high viral load suppression
(HVLS), and rebounding viral load (RVL). The centroid algorithm summarizes
these clusters in terms of their centroids and radii. We show that this allows
new viral load patterns to be assigned pattern membership based on the distance
from the centroid relative to its radius, which we term radial normalization
classification. This method has the benefit of providing an objective and
quantitative method to assign viral load pattern membership with a concise and
interpretable model that aids clinical decision making. This method also
facilitates meta-analyses by providing computably distinct HIV categories.
Finally we propose that this novel centroid algorithm could also be useful in
the areas of cluster comparison for outcomes research and data reduction in
machine learning. | [
0,
0,
0,
1,
1,
0
] |
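The abstract above describes radial normalization classification: each viral load cluster is summarized by a centroid and a radius, and a new pattern is assigned to the cluster whose centroid is closest relative to that radius. A minimal sketch, with the feature vectors and the centroid/radius summaries assumed to be given rather than computed from the clinic data:

```python
import numpy as np

def radial_normalization_classify(features, centroids, radii):
    """Assign a feature vector to the cluster with the smallest
    distance-to-centroid divided by that cluster's radius."""
    scores = [np.linalg.norm(features - c) / r for c, r in zip(centroids, radii)]
    return int(np.argmin(scores))

# Hypothetical 4-feature summaries of viral load patterns (assumed, for illustration).
centroids = [np.array([0.1, 0.0, 0.2, 0.1]),   # e.g. durably suppressed (DSVL)
             np.array([0.8, 0.9, 0.7, 0.6])]   # e.g. sustained high (SHVL)
radii = [0.3, 0.5]
print(radial_normalization_classify(np.array([0.2, 0.1, 0.1, 0.2]), centroids, radii))
```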
Title: Distributed Stochastic Model Predictive Control for Large-Scale Linear Systems with Private and Common Uncertainty Sources,
Abstract: This paper presents a distributed stochastic model predictive control (SMPC)
approach for large-scale linear systems with private and common uncertainties
in a plug-and-play framework. Using the so-called scenario approach, the
centralized SMPC involves formulating a large-scale finite-horizon scenario
optimization problem at each sampling time, which is in general computationally
demanding, due to the large number of required scenarios. We present two novel
ideas in this paper to address this issue. We first develop a technique to
decompose the large-scale scenario program into distributed scenario programs
that exchange a certain number of scenarios with each other in order to compute
local decisions using the alternating direction method of multipliers (ADMM).
We show the exactness of the decomposition with a-priori probabilistic
guarantees for the desired level of constraint fulfillment for both uncertainty
sources. As our second contribution, we develop an inter-agent soft
communication scheme based on a set parametrization technique together with the
notion of probabilistically reliable sets to reduce the required communication
between the subproblems. We show how to incorporate the probabilistic
reliability notion into existing results and provide new guarantees for the
desired level of constraint violations. Two different simulation studies of two
types of systems interactions, dynamically coupled and coupling constraints,
are presented to illustrate the advantages of the proposed framework. | [
1,
0,
1,
0,
0,
0
] |
Title: The Velocity of the Decoding Wave for Spatially Coupled Codes on BMS Channels,
Abstract: We consider the dynamics of belief propagation decoding of spatially coupled
Low-Density Parity-Check codes. It has been conjectured that after a short
transient phase, the profile of "error probabilities" along the spatial
direction of a spatially coupled code develops a uniquely-shaped wave-like
solution that propagates with constant velocity v. Under this assumption, and
for transmission over general Binary Memoryless Symmetric channels, we derive a
formula for v. We also propose approximations that are simpler to compute and
support our findings using numerical data. | [
1,
0,
1,
0,
0,
0
] |
Title: In Search of an Entity Resolution OASIS: Optimal Asymptotic Sequential Importance Sampling,
Abstract: Entity resolution (ER) presents unique challenges for evaluation methodology.
While crowdsourcing platforms acquire ground truth, sound approaches to
sampling must drive labelling efforts. In ER, extreme class imbalance between
matching and non-matching records can lead to enormous labelling requirements
when seeking statistically consistent estimates for rigorous evaluation. This
paper addresses this important challenge with the OASIS algorithm: a sampler
and F-measure estimator for ER evaluation. OASIS draws samples from a (biased)
instrumental distribution, chosen to ensure estimators with optimal asymptotic
variance. As new labels are collected OASIS updates this instrumental
distribution via a Bayesian latent variable model of the annotator oracle, to
quickly focus on unlabelled items providing more information. We prove that
resulting estimates of F-measure, precision, and recall converge to the true
population values. Thorough comparisons of sampling methods on a variety of ER
datasets demonstrate significant labelling reductions of up to 83% without loss
to estimate accuracy. | [
1,
0,
0,
1,
0,
0
] |
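The abstract above estimates the F-measure from labels collected under a biased instrumental distribution, reweighting each labelled item by an importance weight. The sketch below shows only the generic importance-weighted F-measure estimate; the adaptive choice of the instrumental distribution and the Bayesian oracle model described in the abstract are not reproduced, and the weights here are placeholders.

```python
import numpy as np

def weighted_f_measure(preds, labels, weights):
    """Importance-weighted F1: preds/labels are 0/1 arrays for the sampled items,
    weights are importance weights w_i = p(i) / q(i) for target p and sampler q."""
    tp = np.sum(weights * preds * labels)
    fp = np.sum(weights * preds * (1 - labels))
    fn = np.sum(weights * (1 - preds) * labels)
    return 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=100)
labels = rng.integers(0, 2, size=100)
weights = rng.uniform(0.5, 2.0, size=100)     # stand-in importance weights
print(weighted_f_measure(preds, labels, weights))
```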
Title: Vestigial nematic order and superconductivity in the doped topological insulator Cu$_{x}$Bi$_{2}$Se$_{3}$,
Abstract: If the topological insulator Bi$_{2}$Se$_{3}$ is doped with electrons,
superconductivity with $T_{\rm c}\approx3-4\:{\rm K}$ emerges for a low
density of carriers ($n\approx10^{20}{\rm cm}^{-3}$) and with a small ratio of
the superconducting coherence length and Fermi wave length:
$\xi/\lambda_{F}\approx2\cdots4$. These values make fluctuations of the
superconducting order parameter increasingly important, to the extent that the
$T_{c}$-value is surprisingly large. Strong spin-orbit interaction led to the
proposal of an odd-parity pairing state. This begs the question of the nature
of the transition in an unconventional superconductor with strong pairing
fluctuations. We show that for a multi-component order parameter, these
fluctuations give rise to a nematic phase at $T_{\rm nem}>T_{c}$. Below
$T_{c}$ several experiments demonstrated a rotational symmetry breaking where
the Cooper pair wave function is locked to the lattice. Our theory shows that
this rotational symmetry breaking, as a vestige of the superconducting state,
already occurs above $T_{c}$. The nematic phase is characterized by vanishing
off-diagonal long range order, yet with anisotropic superconducting
fluctuations. It can be identified through direction-dependent
para-conductivity, lattice softening, and an enhanced Raman response in the
$E_{g}$ symmetry channel. In addition, nematic order partially avoids the usual
fluctuation suppression of $T_{c}$. | [
0,
1,
0,
0,
0,
0
] |
Title: On the semi-continuity problem of normalized volumes of singularities,
Abstract: We show that in any $\mathbb{Q}$-Gorenstein flat family of klt singularities,
normalized volumes can only jump down at countably many subvarieties. A quick
consequence is that smooth points have the largest normalized volume among all
klt singularities. Using an alternative characterization of K-semistability
developed by Li, Xu and the author, we show that K-semistability is a very
generic or empty condition in any $\mathbb{Q}$-Gorenstein flat family of log
Fano pairs. | [
0,
0,
1,
0,
0,
0
] |
Title: A Generalization of Convolutional Neural Networks to Graph-Structured Data,
Abstract: This paper introduces a generalization of Convolutional Neural Networks
(CNNs) from low-dimensional grid data, such as images, to graph-structured
data. We propose a novel spatial convolution utilizing a random walk to uncover
the relations within the input, analogous to the way the standard convolution
uses the spatial neighborhood of a pixel on the grid. The convolution has an
intuitive interpretation, is efficient and scalable and can also be used on
data with varying graph structure. Furthermore, this generalization can be
applied to many standard regression or classification problems, by learning
the underlying graph. We empirically demonstrate the performance of the
proposed CNN on MNIST, and challenge the state-of-the-art on Merck molecular
activity data set. | [
1,
0,
0,
1,
0,
0
] |
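The convolution in the abstract above uses a random walk on the graph to define each node's neighborhood, analogous to the pixel neighborhood of a standard convolution. The sketch below shows only the standard ingredient, the random-walk transition matrix and its powers, which give the multi-step visit probabilities used to rank neighbors; the actual convolution and weight-sharing scheme of the paper are not reproduced here.

```python
import numpy as np

def random_walk_powers(adjacency, steps=3):
    """Return [P, P^2, ..., P^steps] where P = D^{-1} A is the
    random-walk transition matrix of the graph."""
    deg = adjacency.sum(axis=1, keepdims=True)
    P = adjacency / np.maximum(deg, 1e-12)
    powers, Pk = [], np.eye(adjacency.shape[0])
    for _ in range(steps):
        Pk = Pk @ P
        powers.append(Pk)
    return powers

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
for k, Pk in enumerate(random_walk_powers(A), start=1):
    print(f"{k}-step visit probabilities from node 0:", np.round(Pk[0], 3))
```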
Title: Robust Adversarial Reinforcement Learning,
Abstract: Deep neural networks coupled with fast simulation and improved computation
have led to recent successes in the field of reinforcement learning (RL).
However, most current RL-based approaches fail to generalize since: (a) the gap
between simulation and real world is so large that policy-learning approaches
fail to transfer; (b) even if policy learning is done in real world, the data
scarcity leads to failed generalization from training to test scenarios (e.g.,
due to different friction or object masses). Inspired by H-infinity control
methods, we note that both modeling errors and differences in training and test
scenarios can be viewed as extra forces/disturbances in the system. This paper
proposes the idea of robust adversarial reinforcement learning (RARL), where we
train an agent to operate in the presence of a destabilizing adversary that
applies disturbance forces to the system. The jointly trained adversary is
reinforced -- that is, it learns an optimal destabilization policy. We
formulate the policy learning as a zero-sum, minimax objective function.
Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah,
Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a)
improves training stability; (b) is robust to differences in training/test
conditions; and (c) outperforms the baseline even in the absence of the
adversary. | [
1,
0,
0,
0,
0,
0
] |
Title: Dynamic behaviour of Multilamellar Vesicles under Poiseuille flow,
Abstract: Surfactant solutions exhibit multilamellar surfactant vesicles (MLVs) under
flow conditions and in concentration ranges which are found in a large number
of industrial applications. MLVs are typically formed from a lamellar phase and
play an important role in determining the rheological properties of surfactant
solutions. Despite the wide literature on the collective dynamics of flowing
MLVs, investigations on the flow behavior of single MLVs are scarce. In this
work, we investigate a concentrated aqueous solution of linear alkylbenzene
sulfonic acid (HLAS), characterized by MLVs dispersed in an isotropic micellar
phase. Rheological tests show that the HLAS solution is a shear-thinning fluid
with a power law index dependent on the shear rate. Pressure-driven shear flow
of the HLAS solution in glass capillaries is investigated by high-speed video
microscopy and image analysis. The so obtained velocity profiles provide
evidence of a power-law fluid behaviour of the HLAS solution and images show a
flow-focusing effect of the lamellar phase in the central core of the
capillary. The flow behavior of individual MLVs shows analogies with that of
unilamellar vesicles and emulsion droplets. Deformed MLVs exhibit typical
shapes of unilamellar vesicles, such as parachute and bullet-like. Furthermore,
MLV velocity follows the classical Hetsroni theory for droplets provided that
the power law shear dependent viscosity of the HLAS solution is taken into
account. The results of this work are relevant for the processing of
surfactant-based systems in which the final properties depend on flow-induced
morphology, such as cosmetic formulations and food products. | [
0,
1,
0,
0,
0,
0
] |
Title: Approximate fixed points and B-amenable groups,
Abstract: A topological group $G$ is B-amenable if and only if every continuous affine
action of $G$ on a bounded convex subset of a locally convex space has an
approximate fixed point. Similar results hold more generally for slightly
uniformly continuous semigroup actions. | [
0,
0,
1,
0,
0,
0
] |
Title: Simulating Cosmic Microwave Background anisotropy measurements for Microwave Kinetic Inductance Devices,
Abstract: Microwave Kinetic Inductance Devices (MKIDs) are poised to allow for
massively and natively multiplexed photon detectors arrays and are a natural
choice for the next-generation CMB-Stage 4 experiment which will require $10^5$
detectors. In this proceeding we discuss what the noise performance of present
generation MKIDs implies for CMB measurements. We consider MKID noise spectra
and simulate a telescope scan strategy which projects the detector noise onto
the CMB sky. We then analyze the simulated CMB + MKID noise to understand how
particularly low-frequency noise affects the various features of the CMB, and
thus set up a framework connecting MKID characteristics and scan strategies
to the type of CMB signals we may probe with such detectors. | [
0,
1,
0,
0,
0,
0
] |
Title: Truth-Telling Mechanism for Secure Two-Way Relay Communications with Energy-Harvesting Revenue,
Abstract: This paper brings the novel idea of paying the utility to the winning agents
in terms of some physical entity in cooperative communications. Our setting is
a secret two-way communication channel where two transmitters exchange
information in the presence of an eavesdropper. The relays are selected from a
set of interested parties such that the secrecy sum rate is maximized. In
return, the selected relay nodes' energy harvesting requirements will be
fulfilled up to a certain threshold through their own payoff so that they have
the natural incentive to be selected and involved in the communication.
However, relays may exaggerate their private information in order to improve
their chance to be selected. Our objective is to develop a mechanism for relay
selection that enforces them to reveal the truth since otherwise they may be
penalized. We also propose a joint cooperative relay beamforming and transmit
power optimization scheme based on an alternating optimization approach. Note
that the problem is highly non-convex since the objective function appears as a
product of three correlated Rayleigh quotients. While a common practice in the
existing literature is to optimize the relay beamforming vector for given
transmit power via rank relaxation, we propose a second-order cone programming
(SOCP)-based approach in this paper which requires a significantly lower
computational task. The performance of the incentive control mechanism and the
optimization algorithm has been evaluated through numerical simulations. | [
1,
0,
0,
0,
0,
0
] |
Title: Prediction of Individual Outcomes for Asthma Sufferers,
Abstract: We consider the problem of individual-specific medication level
recommendation (initiation, removal, increase, or decrease) for asthma
sufferers. Asthma is one of the most common chronic diseases in both adults and
children, affecting 8% of the US population and costing $37-63 billion/year in
the US. Asthma is a complex disease, whose symptoms may wax and wane, making it
difficult for clinicians to predict outcomes and prognosis. Improved ability to
predict prognosis can inform decision making and may promote conversations
between clinician and provider around optimizing medication therapy. Data from
the US Medical Expenditure Panel Survey (MEPS) years 2000-2010 were used to fit
a longitudinal model for a multivariate response of adverse events (Emergency
Department or In-patient visits, excessive rescue inhaler use, and oral steroid
use). To reduce bias in the estimation of medication effects, medication level
was treated as a latent process which was restricted to be consistent with
prescription refill data. This approach is demonstrated to be effective in the
MEPS cohort via predictions on a validation hold out set and a synthetic data
simulation study. This framework can be easily generalized to medication
decisions for other conditions as well. | [
0,
0,
0,
1,
0,
0
] |
Title: Bayesian Probabilistic Numerical Methods,
Abstract: The emergent field of probabilistic numerics has thus far lacked clear
statistical principles. This paper establishes Bayesian probabilistic numerical
methods as those which can be cast as solutions to certain inverse problems
within the Bayesian framework. This allows us to establish general conditions
under which Bayesian probabilistic numerical methods are well-defined,
encompassing both non-linear and non-Gaussian models. For general computation,
a numerical approximation scheme is proposed and its asymptotic convergence
established. The theoretical development is then extended to pipelines of
computation, wherein probabilistic numerical methods are composed to solve more
challenging numerical tasks. The contribution highlights an important research
frontier at the interface of numerical analysis and uncertainty quantification,
with a challenging industrial application presented. | [
1,
0,
1,
1,
0,
0
] |
Title: Strong Local Nondeterminism of Spherical Fractional Brownian Motion,
Abstract: Let $B = \left\{ B\left( x\right),\, x\in \mathbb{S}^{2}\right\} $ be the
fractional Brownian motion indexed by the unit sphere $\mathbb{S}^{2}$ with
index $0<H\leq \frac{1}{2}$, introduced by Istas \cite{IstasECP05}. We
establish optimal estimates for its angular power spectrum $\{d_\ell, \ell = 0,
1, 2, \ldots\}$, and then exploit its high-frequency behavior to establish the
property of strong local nondeterminism of $B$.
0,
0,
1,
1,
0,
0
] |
Title: Distant Supervision for Topic Classification of Tweets in Curated Streams,
Abstract: We tackle the challenge of topic classification of tweets in the context of
analyzing a large collection of curated streams by news outlets and other
organizations to deliver relevant content to users. Our approach is novel in
applying distant supervision based on semi-automatically identifying curated
streams that are topically focused (for example, on politics, entertainment, or
sports). These streams provide a source of labeled data to train topic
classifiers that can then be applied to categorize tweets from more
topically-diffuse streams. Experiments on both noisy labels and human
ground-truth judgments demonstrate that our approach yields good topic
classifiers essentially "for free", and that topic classifiers trained in this
manner are able to dynamically adjust for topic drift as news on Twitter
evolves. | [
1,
0,
0,
0,
0,
0
] |
Title: On the Optimization Landscape of Tensor Decompositions,
Abstract: Non-convex optimization with local search heuristics has been widely used in
machine learning, achieving many state-of-art results. It becomes increasingly
important to understand why they can work for these NP-hard problems on typical
data. The landscape of many objective functions in learning has been
conjectured to have the geometric property that "all local optima are
(approximately) global optima", and thus they can be solved efficiently by
local search algorithms. However, establishing such property can be very
difficult.
In this paper, we analyze the optimization landscape of the random
over-complete tensor decomposition problem, which has many applications in
unsupervised learning, especially in learning latent variable models. In
practice, it can be efficiently solved by gradient ascent on a non-convex
objective. We show that for any small constant $\epsilon > 0$, among the set of
points with function values $(1+\epsilon)$-factor larger than the expectation
of the function, all the local maxima are approximate global maxima.
Previously, the best-known result only characterizes the geometry in small
neighborhoods around the true components. Our result implies that even with an
initialization that is barely better than the random guess, the gradient ascent
algorithm is guaranteed to solve this problem.
Our main technique uses the Kac-Rice formula and random matrix theory. To the
best of our knowledge, this is the first time the Kac-Rice formula has been successfully
applied to counting the number of local minima of a highly-structured random
polynomial with dependent coefficients. | [
1,
0,
1,
1,
0,
0
] |
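The abstract above analyses the landscape of an objective that in practice is maximized by gradient ascent; for an over-complete decomposition with unit-norm components $a_i$, a natural surrogate is maximizing $\sum_i \langle a_i, x\rangle^4$ over the unit sphere. The sketch below runs a generic projected gradient ascent on that objective, with random components standing in for the unknown ones; it illustrates the kind of local search the abstract studies, not the paper's guarantees or proof technique, and the step size and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 60                                   # dimension and number of components
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)   # unit-norm components a_i

def objective(x):
    return np.sum((A @ x) ** 4)                 # f(x) = sum_i <a_i, x>^4

x = rng.normal(size=d)
x /= np.linalg.norm(x)
for _ in range(200):                            # projected gradient ascent on the sphere
    grad = 4 * A.T @ ((A @ x) ** 3)
    x = x + 0.05 * grad
    x /= np.linalg.norm(x)

# A large correlation with some a_i suggests x landed near an approximate component.
print("best |<a_i, x>|:", np.max(np.abs(A @ x)))
```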
Title: Estimate of Joule Heating in a Flat Dechirper,
Abstract: We have performed Joule power loss calculations for a flat dechirper. We have
considered the configurations of the beam on-axis between the two plates---for
chirp control---and for the beam especially close to one plate---for use as a
fast kicker. Our calculations use a surface impedance approach, one that is
valid when corrugation parameters are small compared to aperture (the
perturbative parameter regime). In our model we ignore effects of field
reflections at the sides of the dechirper plates, and thus expect the results
to underestimate the Joule losses. The analytical results were also tested by
numerical, time-domain simulations. We find that most of the wake power lost by
the beam is radiated out to the sides of the plates. For the case of the beam
passing by a single plate, we derive an analytical expression for the
broad-band impedance, and---in Appendix B---numerically confirm recently
developed, analytical formulas for the short-range wakes. While our theory can
be applied to the LCLS-II dechirper with large gaps, for the nominal apertures
we are not in the perturbative regime and the reflection contribution to Joule
losses is not negligible. With input from computer simulations, we estimate the
Joule power loss (assuming bunch charge of 300 pC, repetition rate of 100 kHz)
is 21~W/m for the case of two plates, and 24 W/m for the case of a single
plate. | [
0,
1,
0,
0,
0,
0
] |
Title: Accurate Kernel Learning for Linear Gaussian Markov Processes using a Scalable Likelihood Computation,
Abstract: We report an exact likelihood computation for Linear Gaussian Markov
processes that is more scalable than existing algorithms for complex models and
sparsely sampled signals. Better scaling is achieved through elimination of
repeated computations in the Kalman likelihood, and by using the diagonalized
form of the state transition equation. Using this efficient computation, we
study the accuracy of kernel learning using maximum likelihood and the
posterior mean in a simulation experiment. The posterior mean with a reference
prior is more accurate for complex models and sparse sampling. Because of its
lower computation load, the maximum likelihood estimator is an attractive
option for more densely sampled signals and lower order models. We confirm
estimator behavior in experimental data through their application to speleothem
data. | [
0,
0,
0,
1,
0,
0
] |
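For reference, the exact likelihood of a linear Gaussian state-space model is usually computed as a product of one-step predictive densities from the Kalman filter; the abstract's contribution is making that computation scale better for complex models and sparsely sampled signals. A plain, unoptimized Kalman log-likelihood (no diagonalization or reuse of repeated computations) is sketched below as a baseline.

```python
import numpy as np

def kalman_loglik(y, F, Q, H, R, m0, P0):
    """Log-likelihood of observations y under x_t = F x_{t-1} + w, y_t = H x_t + v,
    with w ~ N(0, Q) and v ~ N(0, R). Plain O(T) filter, no special structure used."""
    m, P, ll = m0, P0, 0.0
    for yt in y:
        m, P = F @ m, F @ P @ F.T + Q                    # predict
        S = H @ P @ H.T + R                              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
        e = yt - H @ m                                   # innovation
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + e @ np.linalg.solve(S, e))
        m, P = m + K @ e, (np.eye(len(m)) - K @ H) @ P   # update
    return ll

F = np.array([[0.9]]); Q = np.array([[0.1]])
H = np.array([[1.0]]); R = np.array([[0.2]])
y = [np.array([0.3]), np.array([0.1]), np.array([-0.2])]
print(kalman_loglik(y, F, Q, H, R, m0=np.array([0.0]), P0=np.array([[1.0]])))
```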
Title: Positive semi-definite embedding for dimensionality reduction and out-of-sample extensions,
Abstract: In machine learning or statistics, it is often desirable to reduce the
dimensionality of high dimensional data. We propose to obtain the low
dimensional embedding coordinates as the eigenvectors of a positive
semi-definite kernel matrix. This kernel matrix is the solution of a
semi-definite program promoting a low rank solution and defined with the help
of a diffusion kernel. Besides, we also discuss an infinite dimensional
analogue of the same semi-definite program. From a practical perspective, a
main feature of our approach is the existence of a non-linear out-of-sample
extension formula of the embedding coordinates that we call a projected
Nyström approximation. This extension formula yields an extension of the
kernel matrix to a data-dependent Mercer kernel function. Although the
semi-definite program may be solved directly, we propose another strategy based
on a rank constrained formulation solved thanks to a projected power method
algorithm followed by a singular value decomposition. This strategy allows for
a reduced computational time. | [
1,
0,
0,
1,
0,
0
] |
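The out-of-sample extension described above is a projected variant of the Nyström approximation: embedding coordinates of training points are eigenvectors of a learned kernel matrix, and a new point is mapped through its kernel values against the training set. The sketch below shows only the generic Nyström extension step, assuming a kernel matrix and kernel function are already available (an RBF kernel is used as a stand-in); the semi-definite program and the projection step proposed in the paper are not reproduced.

```python
import numpy as np

def nystrom_extension(k_new, K, n_components=2):
    """Map a new point into the spectral embedding of kernel matrix K.
    k_new[i] = k(x_new, x_i) against the n training points."""
    w, V = np.linalg.eigh(K)                      # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]      # keep the top components
    w, V = w[idx], V[:, idx]
    return (k_new @ V) / w                        # Nystrom formula: phi(x) = K(x,.) V / lambda

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
gamma = 0.5
K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # stand-in RBF kernel
x_new = rng.normal(size=5)
k_new = np.exp(-gamma * ((X - x_new) ** 2).sum(-1))
print(nystrom_extension(k_new, K, n_components=2))
```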
Title: Bounded Projective Functions and Hyperbolic Metrics with Isolated Singularities,
Abstract: We establish a correspondence on a Riemann surface between hyperbolic metrics
with isolated singularities and bounded projective functions whose Schwarzian
derivatives have at most double poles and whose monodromies lie in ${\rm
PSU}(1,\,1)$. As an application, we construct explicitly a new class of
hyperbolic metrics with countably many singularities on the unit disc. | [
0,
0,
1,
0,
0,
0
] |
Title: Active Inductive Logic Programming for Code Search,
Abstract: Modern search techniques either cannot efficiently incorporate human feedback
to refine search results or to express structural or semantic properties of
desired code. The key insight of our interactive code search technique ALICE is
that user feedback could be actively incorporated to allow users to easily
express and refine search queries. We design a query language to model the
structure and semantics of code as logic facts. Given a code example with user
annotations, ALICE automatically extracts a logic query from features that are
tagged as important. Users can refine the search query by labeling one or more
examples as desired (positive) or irrelevant (negative). ALICE then infers a
new logic query that separates the positives from negative examples via active
inductive logic programming. Our comprehensive and systematic simulation
experiment shows that ALICE removes a large number of false positives quickly
by actively incorporating user feedback. Its search algorithm is also robust to
noise and user labeling mistakes. Our choice of leveraging both positive and
negative examples and the nested containment structure of selected code is
effective in refining search queries. Compared with an existing technique,
Critics, ALICE does not require a user to manually construct a search pattern
and yet achieves comparable precision and recall with fewer search iterations
on average. A case study with users shows that ALICE is easy to use and helps
express complex code patterns. | [
1,
0,
0,
0,
0,
0
] |
Title: Continued fraction algorithms and Lagrange's theorem in ${\mathbb Q}_p$,
Abstract: We present several continued fraction algorithms, each of which gives an
eventually periodic expansion for every quadratic element of ${\mathbb Q}_p$
over ${\mathbb Q}$ and gives a finite expansion for every rational number. We
also give, for each of our algorithms, the complete characterization of
elements having purely periodic expansions. | [
0,
0,
1,
0,
0,
0
] |
Title: Scaling and bias codes for modeling speaker-adaptive DNN-based speech synthesis systems,
Abstract: Most neural-network based speaker-adaptive acoustic models for speech
synthesis can be categorized into either layer-based or input-code approaches.
Although both approaches have their own pros and cons, most existing works on
speaker adaptation focus on improving one or the other. In this paper, after we
first systematically overview the common principles of neural-network based
speaker-adaptive models, we show that these approaches can be represented in a
unified framework and can be generalized further. More specifically, we
introduce the use of scaling and bias codes as generalized means for
speaker-adaptive transformation. By utilizing these codes, we can create a more
efficient factorized speaker-adaptive model and capture advantages of both
approaches while reducing their disadvantages. The experiments show that the
proposed method can improve the performance of speaker adaptation compared with
speaker adaptation based on the conventional input code. | [
1,
0,
0,
1,
0,
0
] |
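The unified view in the abstract above amounts to letting a small speaker code produce a per-layer scaling vector and a bias vector that modulate a hidden layer. A minimal sketch of such a speaker-adaptive layer follows; the layer sizes, the nonlinearity, and the way the codes are generated are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

class ScaleBiasAdaptiveLayer:
    """h = scale(code) * (W x + b) + bias(code): a hidden layer whose activation
    is modulated by speaker-dependent scaling and bias codes."""
    def __init__(self, in_dim, hid_dim, code_dim, rng):
        self.W = rng.normal(scale=0.1, size=(hid_dim, in_dim))
        self.b = np.zeros(hid_dim)
        self.Ws = rng.normal(scale=0.1, size=(hid_dim, code_dim))  # code -> scaling
        self.Wb = rng.normal(scale=0.1, size=(hid_dim, code_dim))  # code -> bias

    def forward(self, x, speaker_code):
        scale = 1.0 + self.Ws @ speaker_code      # speaker-dependent scaling vector
        bias = self.Wb @ speaker_code             # speaker-dependent bias vector
        return np.tanh(scale * (self.W @ x + self.b) + bias)

rng = np.random.default_rng(0)
layer = ScaleBiasAdaptiveLayer(in_dim=40, hid_dim=64, code_dim=8, rng=rng)
print(layer.forward(rng.normal(size=40), rng.normal(size=8)).shape)
```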
Title: CELLO-3D: Estimating the Covariance of ICP in the Real World,
Abstract: The fusion of Iterative Closest Point (ICP) registrations in existing state
estimation frameworks relies on an accurate estimation of their uncertainty. In
this paper, we study the estimation of this uncertainty in the form of a
covariance. First, we scrutinize the limitations of existing closed-form
covariance estimation algorithms over 3D datasets. Then, we set out to estimate
the covariance of ICP registrations through a data-driven approach, with over 5
100 000 registrations on 1020 pairs from real 3D point clouds. We assess our
solution upon a wide spectrum of environments, ranging from structured to
unstructured and indoor to outdoor. The capacity of our algorithm to predict
covariances is accurately assessed, as well as the usefulness of these
estimations for uncertainty estimation over trajectories. The proposed method
estimates covariances better than existing closed-form solutions, and makes
predictions that are consistent with observed trajectories. | [
1,
0,
0,
0,
0,
0
] |
Title: Projection Theorems Using Effective Dimension,
Abstract: In this paper we use the theory of computing to study fractal dimensions of
projections in Euclidean spaces. A fundamental result in fractal geometry is
Marstrand's projection theorem, which shows that for every analytic set E, for
almost every line L, the Hausdorff dimension of the orthogonal projection of E
onto L is maximal. We use Kolmogorov complexity to give two new results on the
Hausdorff and packing dimensions of orthogonal projections onto lines. The
first shows that the conclusion of Marstrand's theorem holds whenever the
Hausdorff and packing dimensions agree on the set E, even if E is not analytic.
Our second result gives a lower bound on the packing dimension of projections
of arbitrary sets. Finally, we give a new proof of Marstrand's theorem using
the theory of computing. | [
1,
0,
1,
0,
0,
0
] |
Title: A DIRT-T Approach to Unsupervised Domain Adaptation,
Abstract: Domain adaptation refers to the problem of leveraging labeled data in a
source domain to learn an accurate model in a target domain where labels are
scarce or unavailable. A recent approach for finding a common representation of
the two domains is via domain adversarial training (Ganin & Lempitsky, 2015),
which attempts to induce a feature extractor that matches the source and target
feature distributions in some feature space. However, domain adversarial
training faces two critical limitations: 1) if the feature extraction function
has high-capacity, then feature distribution matching is a weak constraint, 2)
in non-conservative domain adaptation (where no single classifier can perform
well in both the source and target domains), training the model to do well on
the source domain hurts performance on the target domain. In this paper, we
address these issues through the lens of the cluster assumption, i.e., decision
boundaries should not cross high-density data regions. We propose two novel and
related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model,
which combines domain adversarial training with a penalty term that punishes
the violation of the cluster assumption; 2) the Decision-boundary Iterative
Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model
as initialization and employs natural gradient steps to further minimize the
cluster assumption violation. Extensive empirical results demonstrate that the
combination of these two models significantly improve the state-of-the-art
performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation
benchmarks. | [
0,
0,
0,
1,
0,
0
] |
Title: Robust valley polarization of helium ion modified atomically thin MoS$_{2}$,
Abstract: Atomically thin semiconductors have dimensions that are commensurate with
critical feature sizes of future optoelectronic devices defined using
electron/ion beam lithography. Robustness of their emergent optical and
valleytronic properties is essential for typical exposure doses used during
fabrication. Here, we explore how focused helium ion bombardment affects the
intrinsic vibrational, luminescence and valleytronic properties of atomically
thin MoS$_{2}$. By probing the disorder dependent vibrational response we
deduce the interdefect distance by applying a phonon confinement model. We show
that the increasing interdefect distance correlates with disorder-related
luminescence arising 180 meV below the neutral exciton emission. We perform
ab-initio density functional theory of a variety of defect related
morphologies, which yield first indications on the origin of the observed
additional luminescence. Remarkably, no significant reduction of free exciton
valley polarization is observed until the interdefect distance approaches a few
nanometers, namely the size of the free exciton Bohr radius. Our findings pave
the way for direct writing of sub-10 nm nanoscale valleytronic devices and
circuits using focused helium ions. | [
0,
1,
0,
0,
0,
0
] |
Title: Sustained sensorimotor control as intermittent decisions about prediction errors: Computational framework and application to ground vehicle steering,
Abstract: A conceptual and computational framework is proposed for modelling of human
sensorimotor control, and is exemplified for the sensorimotor task of steering
a car. The framework emphasises control intermittency, and extends on existing
models by suggesting that the nervous system implements intermittent control
using a combination of (1) motor primitives, (2) prediction of sensory outcomes
of motor actions, and (3) evidence accumulation of prediction errors. It is
shown that approximate but useful sensory predictions in the intermittent
control context can be constructed without detailed forward models, as a
superposition of simple prediction primitives, resembling neurobiologically
observed corollary discharges. The proposed mathematical framework allows
straightforward extension to intermittent behaviour from existing
one-dimensional continuous models in the linear control and ecological
psychology traditions. Empirical observations from a driving simulator provide
support for some of the framework assumptions: It is shown that human steering
control, in routine lane-keeping and in a demanding near-limit task, is better
described as a sequence of discrete stepwise steering adjustments, than as
continuous control. Furthermore, the amplitudes of individual steering
adjustments are well predicted by a compound visual cue signalling steering
error, and even better so if also adjusting for predictions of how the same cue
is affected by previous control. Finally, evidence accumulation is shown to
explain observed covariability between inter-adjustment durations and
adjustment amplitudes, seemingly better so than the type of threshold
mechanisms that are typically assumed in existing models of intermittent
control. | [
1,
0,
0,
0,
0,
0
] |
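The control scheme in the abstract above can be caricatured as: accumulate a perceptual prediction error over time, and when the accumulated evidence crosses a threshold, issue a discrete stepwise steering adjustment whose amplitude scales with the current error, then reset the accumulator. The sketch below implements only that caricature; the gain, threshold, and noise level are arbitrary assumptions, not parameters fitted to the driving-simulator data.

```python
import numpy as np

def intermittent_controller(error_signal, gain=1.0, threshold=1.0, dt=0.01, noise=0.05):
    """Evidence accumulation of prediction errors triggering discrete adjustments."""
    rng = np.random.default_rng(0)
    evidence, adjustments = 0.0, []
    for t, err in enumerate(error_signal):
        evidence += (abs(err) + noise * rng.normal()) * dt     # accumulate noisy evidence
        if evidence >= threshold:
            adjustments.append((t * dt, gain * err))           # stepwise adjustment ~ error
            evidence = 0.0                                     # reset the accumulator
    return adjustments

t = np.arange(0, 10, 0.01)
error = 0.5 * np.sin(0.4 * t)        # stand-in steering error cue
print(intermittent_controller(error)[:5])
```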
Title: Generalized Similarity U: A Non-parametric Test of Association Based on Similarity,
Abstract: Second generation sequencing technologies are being increasingly used for
genetic association studies, where the main research interest is to identify
sets of genetic variants that contribute to various phenotype. The phenotype
can be univariate disease status, multivariate responses and even
high-dimensional outcomes. Considering the genotype and phenotype as two
complex objects, this also poses a general statistical problem of testing
association between complex objects. We here proposed a similarity-based test,
generalized similarity U (GSU), that can test the association between complex
objects. We first studied the theoretical properties of the test in a general
setting and then focused on the application of the test to sequencing
association studies. Based on theoretical analysis, we proposed to use
Laplacian kernel based similarity for GSU to boost power and enhance
robustness. Through simulation, we found that GSU did have advantages over
existing methods in terms of power and robustness. We further performed a whole
genome sequencing (WGS) scan for Alzheimer Disease Neuroimaging Initiative
(ADNI) data, identifying three genes, APOE, APOC1 and TOMM40, associated with
imaging phenotype. We developed a C++ package for analysis of whole genome
sequencing data using GSU. The source codes can be downloaded at
this https URL. | [
0,
0,
0,
1,
1,
0
] |
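Similarity-based association tests of this kind are typically built from a U-statistic that sums, over pairs of individuals, the product of a phenotype similarity and a genotype similarity. The sketch below shows that generic construction with a Laplacian-kernel phenotype similarity, as a rough illustration only; the exact weighting, normalization, and asymptotic theory used by GSU are in the paper, not here, and the simulated genotypes and phenotype are purely synthetic.

```python
import numpy as np

def similarity_u(geno_sim, pheno, sigma=1.0):
    """Generic similarity-based U statistic: average over i != j of
    genotype similarity * Laplacian-kernel phenotype similarity."""
    d = np.abs(pheno[:, None] - pheno[None, :])    # pairwise phenotype distances (1-d case)
    pheno_sim = np.exp(-d / sigma)                 # Laplacian kernel similarity
    np.fill_diagonal(pheno_sim, 0.0)               # exclude i == j pairs
    n = len(pheno)
    return np.sum(geno_sim * pheno_sim) / (n * (n - 1))

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(50, 10))              # 50 subjects, 10 variants (0/1/2 genotypes)
geno_sim = 1.0 - np.abs(G[:, None, :] - G[None, :, :]).mean(-1) / 2.0
y = G[:, 0] * 0.5 + rng.normal(size=50)            # phenotype weakly driven by variant 0
print(similarity_u(geno_sim, y))
```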
Title: Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration,
Abstract: Since the matrix formed by nonlocal similar patches in a natural image is of
low rank, the nuclear norm minimization (NNM) has been widely used in various
image processing studies. Nonetheless, nuclear norm based convex surrogate of
the rank function usually over-shrinks the rank components and makes different
components equally, and thus may produce a result far from the optimum. To
alleviate the above-mentioned limitations of the nuclear norm, in this paper we
propose a new method for image restoration via the non-convex weighted Lp
nuclear norm minimization (NCW-NNM), which is able to more accurately enforce
the image structural sparsity and self-similarity simultaneously. To make the
proposed model tractable and robust, the alternating direction method of
multipliers (ADMM) is adopted to solve the associated non-convex minimization
problem. Experimental results on various types of image restoration problems,
including image deblurring, image inpainting and image compressive sensing (CS)
recovery, demonstrate that the proposed method outperforms many current
state-of-the-art methods in both the objective and the perceptual qualities. | [
1,
0,
0,
0,
0,
0
] |
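Inside an ADMM iteration for a model like this, the key subproblem is a proximal step that shrinks singular values under the non-convex weighted Lp penalty. The sketch below uses a simple generalized soft-thresholding fixed-point iteration on each singular value as a stand-in for that step; the exact weights, the value of p, and the solver used in the paper are not reproduced, and the inputs are synthetic.

```python
import numpy as np

def weighted_lp_singular_shrink(Y, weights, p=0.7, tau=1.0, inner_iters=10):
    """Approximate prox of tau * sum_i w_i * sigma_i^p at Y: shrink each singular
    value by fixed-point iteration of s <- max(y - tau*w*p*s^(p-1), 0)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    shrunk = s.copy()
    for _ in range(inner_iters):
        shrunk = np.maximum(s - tau * weights * p * np.maximum(shrunk, 1e-12) ** (p - 1), 0.0)
    return U @ np.diag(shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))
noisy = low_rank + 0.3 * rng.normal(size=(30, 30))
weights = np.ones(30)                      # equal weights here; the method uses adaptive ones
denoised = weighted_lp_singular_shrink(noisy, weights, p=0.7, tau=0.5)
print("sum of singular values before/after:",
      np.linalg.svd(noisy, compute_uv=False).sum().round(2),
      np.linalg.svd(denoised, compute_uv=False).sum().round(2))
```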
Title: Localization of ions within one-, two- and three-dimensional Coulomb crystals by a standing wave optical potential,
Abstract: We demonstrate light-induced localization of Coulomb-interacting particles in
multi-dimensional structures. Subwavelength localization of ions within small
multi-dimensional Coulomb crystals by an intracavity optical standing wave
field is evidenced by measuring the difference in scattering inside
symmetrically red- and blue-detuned optical lattices and is observed even for
ions undergoing substantial radial micromotion. These results are promising
steps towards the structural control of ion Coulomb crystals by optical fields
as well as for complex many-body simulations with ion crystals or for the
investigation of heat transfer at the nanoscale, and have potential
applications for ion-based cavity quantum electrodynamics, cavity optomechanics
and ultracold ion chemistry. | [
0,
1,
0,
0,
0,
0
] |
Title: Joint Modeling of Event Sequence and Time Series with Attentional Twin Recurrent Neural Networks,
Abstract: A variety of real-world processes (over networks) produce sequences of data
whose complex temporal dynamics need to be studied. More especially, the event
timestamps can carry important information about the underlying network
dynamics, which otherwise are not available from the time-series evenly sampled
from continuous signals. Moreover, in most complex processes, event sequences
and evenly-sampled times series data can interact with each other, which
renders joint modeling of those two sources of data necessary. To tackle the
above problems, in this paper, we utilize the rich framework of (temporal)
point processes to model event data and timely update its intensity function by
the synergic twin Recurrent Neural Networks (RNNs). In the proposed
architecture, the intensity function is synergistically modulated by one RNN
with asynchronous events as input and another RNN with time series as input.
Furthermore, to enhance the interpretability of the model, the attention
mechanism for the neural point process is introduced. The whole model with
event type and timestamp prediction output layers can be trained end-to-end and
allows a black-box treatment for modeling the intensity. We substantiate the
superiority of our model in synthetic data and three real-world benchmark
datasets. | [
1,
0,
0,
0,
0,
0
] |
Title: Room temperature line lists for CO$_2$ symmetric isotopologues with \textit{ab initio} computed intensities,
Abstract: Remote sensing experiments require high-accuracy, preferably sub-percent,
line intensities and in response to this need we present computed room
temperature line lists for six symmetric isotopologues of carbon dioxide:
$^{13}$C$^{16}$O$_2$, $^{14}$C$^{16}$O$_2$, $^{12}$C$^{17}$O$_2$,
$^{12}$C$^{18}$O$_2$, $^{13}$C$^{17}$O$_2$ and $^{13}$C$^{18}$O$_2$, covering
the range 0-8000 cm$^{-1}$. Our calculation scheme is based on variational nuclear
motion calculations and on a reliability analysis of the generated line
intensities. Rotation-vibration wavefunctions and energy levels are computed
using the DVR3D software suite and a high quality semi-empirical potential
energy surface (PES), followed by computation of intensities using an
\abinitio\ dipole moment surface (DMS). Four line lists are computed for each
isotopologue to quantify sensitivity to minor distortions of the PES/DMS.
Reliable lines are benchmarked against recent state-of-the-art measurements and
against the HITRAN2012 database, supporting the claim that the majority of line
intensities for strong bands are predicted with sub-percent accuracy. Accurate
line positions are generated using an effective Hamiltonian. We recommend the
use of these line lists for future remote sensing studies and their inclusion
in databases. | [
0,
1,
0,
0,
0,
0
] |
Title: The GIT moduli of semistable pairs consisting of a cubic curve and a line on ${\mathbb P}^{2}$,
Abstract: We discuss the GIT moduli of semistable pairs consisting of a cubic curve and
a line on the projective plane. We study in some detail this moduli and compare
it with another moduli suggested by Alexeev. It is the moduli of pairs (with no
specified semi-abelian action) consisting of a cubic curve with at worst nodal
singularities and a line which does not pass through singular points of the
cubic curve. Meanwhile, we make a comparison between Nakamura's
compactification of the moduli of level three elliptic curves and these two
moduli spaces. | [
0,
0,
1,
0,
0,
0
] |
Title: The unexpected resurgence of Weyl geometry in late 20-th century physics,
Abstract: Weyl's original scale geometry of 1918 ("purely infinitesimal geometry") was
withdrawn by its author from physical theorizing in the early 1920s. It had a
comeback in the last third of the 20th century in different contexts: scalar
tensor theories of gravity, foundations of gravity, foundations of quantum
mechanics, elementary particle physics, and cosmology. It seems that Weyl
geometry continues to offer an open research potential for the foundations of
physics even after the turn to the new millennium. | [
0,
1,
1,
0,
0,
0
] |
Title: Discrete Dynamic Causal Modeling and Its Relationship with Directed Information,
Abstract: This paper explores the discrete Dynamic Causal Modeling (DDCM) and its
relationship with Directed Information (DI). We prove the conditional
equivalence between DDCM and DI in characterizing the causal relationship
between two brain regions. The theoretical results are demonstrated using fMRI
data obtained under both resting-state and stimulus-based conditions. Our numerical
analysis is consistent with that reported in a previous study. | [
0,
0,
0,
1,
0,
0
] |
Title: Two-step approach to scheduling quantum circuits,
Abstract: As the effort to scale up existing quantum hardware proceeds, it becomes
necessary to schedule quantum gates in a way that minimizes the number of
operations. There are three constraints that have to be satisfied: the order or
dependency of the quantum gates in the specific algorithm, the fact that any
qubit may be involved in at most one gate at a time, and the restriction that
two-qubit gates are implementable only between connected qubits. The last
aspect implies that the compilation depends not only on the algorithm, but also
on hardware properties like connectivity. Here we suggest a two-step approach
in which logical gates are initially scheduled neglecting connectivity
considerations, while routing operations are added at a later step in a way
that minimizes their overhead. We rephrase the subtasks of gate scheduling in
terms of graph problems like edge-coloring and maximum subgraph isomorphism.
While this approach is general, we specialize to a one dimensional array of
qubits to propose a routing scheme that is minimal in the number of exchange
operations. As a practical application, we schedule the Quantum Approximate
Optimization Algorithm in a linear geometry and quantify the reduction in the
number of gates and circuit depth that results from increasing the efficacy of
the scheduling strategies. | [
1,
0,
0,
0,
0,
0
] |
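The first step described above — scheduling logical gates while ignoring connectivity — is essentially greedy layering subject to gate dependencies and the constraint that a qubit participates in at most one gate per time step (an edge-coloring-type problem). A small greedy sketch of that step only, ignoring routing; the gate-list format (tuples of qubit indices in dependency order) is an assumption for illustration.

```python
def schedule_gates(gates):
    """Greedy layering of gates (each a tuple of qubit indices), respecting input
    order as the dependency order and one gate per qubit per layer."""
    layers = []
    busy_until = {}                          # qubit -> index of the last layer using it
    for gate in gates:
        earliest = max((busy_until.get(q, -1) for q in gate), default=-1) + 1
        if earliest == len(layers):
            layers.append([])
        layers[earliest].append(gate)
        for q in gate:
            busy_until[q] = earliest
    return layers

# Hypothetical circuit: two-qubit and single-qubit gates given as qubit tuples.
circuit = [(0, 1), (2, 3), (1, 2), (0,), (3,), (0, 3)]
for t, layer in enumerate(schedule_gates(circuit)):
    print(f"step {t}: {layer}")
```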
Title: A Study of the Allan Variance for Constant-Mean Non-Stationary Processes,
Abstract: The Allan Variance (AV) is a widely used quantity in areas focusing on error
measurement as well as in the general analysis of variance for autocorrelated
processes in domains such as engineering and, more specifically, metrology. The
form of this quantity is widely used to detect noise patterns and indications
of stability within signals. However, the properties of this quantity are not
known for commonly occurring processes whose covariance structure is
non-stationary and, in these cases, an erroneous interpretation of the AV could
lead to misleading conclusions. This paper generalizes the theoretical form of
the AV to some non-stationary processes while at the same time being valid also
for weakly stationary processes. Some simulation examples show how this new
form can help to understand the processes for which the AV is able to
distinguish these from the stationary cases and hence allow for a better
interpretation of this quantity in applied cases. | [
0,
0,
1,
1,
0,
0
] |
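For reference, the standard (stationary-case) Allan variance at cluster size m is half the mean squared difference of consecutive non-overlapping bin averages; the abstract's contribution is a generalized theoretical form that remains valid for some non-stationary processes. A plain empirical estimator of the classical quantity is sketched below, applied to white noise, for which the Allan variance should fall off roughly as 1/m.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of signal x at cluster size m:
    AVAR(m) = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ) over consecutive bin means."""
    n_bins = len(x) // m
    ybar = x[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

rng = np.random.default_rng(0)
white = rng.normal(size=100_000)
for m in (1, 10, 100, 1000):
    print(m, round(allan_variance(white, m), 6))
```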
Title: Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization,
Abstract: We consider the stochastic composition optimization problem proposed in
\cite{wang2017stochastic}, which has applications ranging from estimation to
statistical and machine learning. We propose the first ADMM-based algorithm
named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly
convex and Lipschitz smooth objectives, and has a convergence rate of $O( \log
S/S)$, which improves upon the $O(S^{-4/9})$ rate in
\cite{wang2016accelerating} when the objective is convex and Lipschitz smooth.
Moreover, com-SVR-ADMM possesses a rate of $O(1/\sqrt{S})$ when the objective
is convex but without Lipschitz smoothness. We also conduct experiments and
show that it outperforms existing algorithms. | [
1,
0,
0,
1,
0,
0
] |
Title: Hierarchical Behavioral Repertoires with Unsupervised Descriptors,
Abstract: Enabling artificial agents to automatically learn complex, versatile and
high-performing behaviors is a long-lasting challenge. This paper presents a
step in this direction with hierarchical behavioral repertoires that stack
several behavioral repertoires to generate sophisticated behaviors. Each
repertoire of this architecture uses the lower repertoires to create complex
behaviors as sequences of simpler ones, while only the lowest repertoire
directly controls the agent's movements. This paper also introduces a novel
approach to automatically define behavioral descriptors thanks to an
unsupervised neural network that organizes the produced high-level behaviors.
The experiments show that the proposed architecture enables a robot to learn
how to draw digits in an unsupervised manner after having learned to draw lines
and arcs. Compared to traditional behavioral repertoires, the proposed
architecture reduces the dimensionality of the optimization problems by orders
of magnitude and provides behaviors with a twice better fitness. More
importantly, it enables the transfer of knowledge between robots: a
hierarchical repertoire evolved for a robotic arm to draw digits can be
transferred to a humanoid robot by simply changing the lowest layer of the
hierarchy. This enables the humanoid to draw digits although it has never been
trained for this task. | [
1,
0,
0,
0,
0,
0
] |
Title: Indirect Image Registration with Large Diffeomorphic Deformations,
Abstract: The paper adapts the large deformation diffeomorphic metric mapping framework
for image registration to the indirect setting where a template is registered
against a target that is given through indirect noisy observations. The
registration uses diffeomorphisms that transform the template through a (group)
action. These diffeomorphisms are generated by solving a flow equation that is
defined by a velocity field with certain regularity. The theoretical analysis
includes a proof that indirect image registration has solutions (existence)
that are stable and that converge as the data error tends to zero, so it
becomes a well-defined regularization method. The paper concludes with examples
of indirect image registration in 2D tomography with very sparse and/or highly
noisy data. | [
1,
0,
1,
0,
0,
0
] |
Title: Morphological estimators on Sunyaev--Zel'dovich maps of MUSIC clusters of galaxies,
Abstract: The determination of the morphology of galaxy clusters has important
repercussion on their cosmological and astrophysical studies. In this paper we
address the morphological characterisation of synthetic maps of the
Sunyaev--Zel'dovich (SZ) effect produced for a sample of 258 massive clusters
($M_{vir}>5\times10^{14}h^{-1}$M$_\odot$ at $z=0$), extracted from the MUSIC
hydrodynamical simulations. Specifically, we apply five known morphological
parameters, already used in X-ray, two newly introduced ones, and we combine
them together in a single parameter. We analyse two sets of simulations
obtained with different prescriptions of the gas physics (non radiative and
with cooling, star formation and stellar feedback) at four redshifts between
0.43 and 0.82. For each parameter we test its stability and efficiency to
discriminate the true cluster dynamical state, measured by theoretical
indicators. The combined parameter discriminates more efficiently relaxed and
disturbed clusters. This parameter had a mild correlation with the hydrostatic
mass ($\sim 0.3$) and a strong correlation ($\sim 0.8$) with the offset between
the SZ centroid and the cluster centre of mass. The latter quantity results as
the most accessible and efficient indicator of the dynamical state for SZ
studies. | [
0,
1,
0,
0,
0,
0
] |
Title: Pulsar braking and the P-Pdot diagram,
Abstract: The location of radio pulsars in the period-period derivative (P-Pdot) plane
has been a key diagnostic tool since the early days of pulsar astronomy. Of
particular importance is how pulsars evolve through the P-Pdot diagram with
time. Here we show that the decay of the inclination angle (alpha-dot) between
the magnetic and rotation axes plays a critical role. In particular, alpha-dot
strongly impacts on the braking torque, an effect which has been largely
ignored in previous work. We carry out simulations which include a negative
alpha-dot term, and show that it is possible to reproduce the observational
P-Pdot diagram without the need for either pulsars with long birth periods or
magnetic field decay. Our best model indicates a birth rate of 1 radio pulsar
per century and a total Galactic population of ~20000 pulsars beaming towards
Earth. | [
0,
1,
0,
0,
0,
0
] |
Title: Uniqueness of stable capillary hypersurfaces in a ball,
Abstract: In this paper we prove that any immersed stable capillary hypersurfaces in a
ball in space forms are totally umbilical. This solves completely a
long-standing open problem. In the proof one of crucial ingredients is a new
Minkowski type formula. We also prove a Heintze-Karcher-Ros type inequality for
hypersurfaces in a ball, which, together with the new Minkowski formula, yields
a new proof of Alexandrov's Theorem for embedded CMC hypersurfaces in a ball
with free boundary. | [
0,
0,
1,
0,
0,
0
] |
Title: Archetypes for Representing Data about the Brazilian Public Hospital Information System and Outpatient High Complexity Procedures System,
Abstract: The Brazilian Ministry of Health has selected the openEHR model as a standard
for electronic health record systems. This paper presents a set of archetypes
to represent the main data from the Brazilian Public Hospital Information
System and the High Complexity Procedures Module of the Brazilian public
Outpatient Health Information System. The archetypes from the public openEHR
Clinical Knowledge Manager (CKM), were examined in order to select archetypes
that could be used to represent the data of the above mentioned systems. For
several concepts, it was necessary to specialize the CKM archetypes, or design
new ones. A total of 22 archetypes were used: 8 new, 5 specialized and 9 reused
from CKM. This set of archetypes can be used not only for information exchange,
but also for generating a big anonymized dataset for testing openEHR-based
systems. | [
1,
0,
0,
0,
0,
0
] |
Title: Asymptotic behavior of memristive circuits and combinatorial optimization,
Abstract: The interest in memristors has risen due to their possible application both
as memory units and as computational devices in combination with CMOS. This is
in part due to their nonlinear dynamics and a strong dependence on the circuit
topology. We provide evidence that also purely memristive circuits can be
employed for computational purposes. We show that a Lyapunov function,
polynomial in the internal memory parameters, exists for the case of DC
controlled memristors. Such a Lyapunov function can be asymptotically mapped to
quadratic combinatorial optimization problems. This shows a direct parallel
between memristive circuits and the Hopfield-Little model. In the case of
Erdos-Renyi random circuits, we provide numerical evidence that the
distribution of the matrix elements of the couplings can be roughly
approximated by a Gaussian distribution, and that they scale with the inverse
square root of the number of elements. This provides an approximated but direct
connection to the physics of disordered system and, in particular, of mean
field spin glasses. Using this and the fact that the interaction is controlled
by a projector operator on the loop space of the circuit, we estimate the
number of stationary points of the Lyapunov function, and provide a scaling
formula as an upper bound in terms of the circuit topology only. In order to
put these ideas into practice, we provide an instance of optimization of the
Nikkei 225 dataset in the Markowitz framework, and show that it is competitive
compared to exponential annealing. | [
1,
1,
0,
0,
0,
0
] |