title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
Large-sample approximations for variance-covariance matrices of high-dimensional time series | Distributional approximations of (bi-)linear functions of sample
variance-covariance matrices play a critical role in the analysis of vector
time series, as they are needed for various purposes, especially to draw inference
on the dependence structure in terms of second moments and to analyze
projections onto lower dimensional spaces as those generated by principal
components. This particularly applies to the high-dimensional case, where the
dimension $d$ is allowed to grow with the sample size $n$ and may even be
larger than $n$. We establish large-sample approximations for such bilinear
forms related to the sample variance-covariance matrix of a high-dimensional
vector time series in terms of strong approximations by Brownian motions. The
results cover weakly dependent as well as many long-range dependent linear
processes and are valid for uniformly $ \ell_1 $-bounded projection vectors,
which arise, either naturally or by construction, in many statistical problems
extensively studied for high-dimensional series. Among those problems are
sparse financial portfolio selection, sparse principal components, the LASSO,
shrinkage estimation and change-point analysis for high-dimensional time
series, which matter for the analysis of big data and are discussed in greater
detail.
| 0 | 0 | 1 | 1 | 0 | 0 |
Resilience: A Criterion for Learning in the Presence of Arbitrary Outliers | We introduce a criterion, resilience, which allows properties of a dataset
(such as its mean or best low rank approximation) to be robustly computed, even
in the presence of a large fraction of arbitrary additional data. Resilience is
a weaker condition than most other properties considered so far in the
literature, and yet enables robust estimation in a broader variety of settings.
We provide new information-theoretic results on robust distribution learning,
robust estimation of stochastic block models, and robust mean estimation under
bounded $k$th moments. We also provide new algorithmic results on robust
distribution learning, as well as robust mean estimation in $\ell_p$-norms.
Among our proof techniques is a method for pruning a high-dimensional
distribution with bounded $1$st moments to a stable "core" with bounded $2$nd
moments, which may be of independent interest.
| 1 | 0 | 0 | 1 | 0 | 0 |
Non-Euclidean geometry, nontrivial topology and quantum vacuum effects | The space outside a topological defect of the Abrikosov-Nielsen-Olesen vortex
type is locally flat but non-Euclidean. If a spinor field is quantized in such a
space, then a variety of quantum effects is induced in the vacuum. Based on
the continuum model for long-wavelength electronic excitations, originating in
the tight-binding approximation for the nearest neighbor interaction of atoms
in the crystal lattice, we consider quantum ground state effects in monolayer
structures warped into nanocones by a disclination; the nonzero size of the
disclination is taken into account, and a boundary condition at the edge of the
disclination is chosen to ensure self-adjointness of the Dirac-Weyl Hamiltonian
operator. In the case of carbon nanocones, we find circumstances when the
quantum ground state effects are independent of the boundary parameter and the
disclination size.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spatial risk measures and rate of spatial diversification | An accurate assessment of the risk of extreme environmental events is of
great importance for populations, authorities and the banking/insurance
industry. Koch (2017) introduced a notion of spatial risk measure and a
corresponding set of axioms which are well suited to analyze the risk due to
events having a spatial extent, such as environmental phenomena. The
axiom of asymptotic spatial homogeneity is of particular interest since it
allows one to quantify the rate of spatial diversification when the region
under consideration becomes large. In this paper, we first investigate the
general concepts of spatial risk measures and corresponding axioms further. We
also explain the usefulness of this theory for actuarial practice. Second,
in the case of a general cost field, we give sufficient conditions
such that spatial risk measures associated with expectation, variance,
Value-at-Risk as well as expected shortfall and induced by this cost field
satisfy the axioms of asymptotic spatial homogeneity of order 0, -2, -1 and -1,
respectively. Last but not least, in the case where the cost field is a
function of a max-stable random field, we mainly provide conditions on both the
function and the max-stable field ensuring the latter properties. Max-stable
random fields are relevant when assessing the risk of extreme events since they
appear as a natural extension of multivariate extreme-value theory to the level
of random fields. Overall, this paper improves our understanding of spatial
risk measures as well as of their properties with respect to the space variable
and generalizes many results obtained in Koch (2017).
| 0 | 0 | 0 | 0 | 0 | 1 |
Double Homotopy (Co)Limits for Relative Categories | We answer the question of to what extent homotopy (co)limits in categories with
weak equivalences allow for a Fubini-type interchange law. The main obstacle is
that we do not assume our categories with weak equivalences to come equipped
with a calculus for homotopy (co)limits, such as a derivator.
| 0 | 0 | 1 | 0 | 0 | 0 |
Theory of ground states for classical Heisenberg spin systems I | We formulate part I of a rigorous theory of ground states for classical,
finite, Heisenberg spin systems. The main result is that all ground states can
be constructed from the eigenvectors of a real, symmetric matrix with entries
comprising the coupling constants of the spin system as well as certain
Lagrange parameters. The eigenvectors correspond to the unique maximum of the
minimal eigenvalue considered as a function of the Lagrange parameters.
However, there are rare cases where all ground states obtained in this way have
unphysical dimensions $M>3$ and the theory would have to be extended. Further
results concern the degree of additional degeneracy, additional to the trivial
degeneracy of ground states due to rotations or reflections. The theory is
illustrated by a couple of elementary examples.
| 0 | 1 | 1 | 0 | 0 | 0 |
A Pliable Index Coding Approach to Data Shuffling | A promising research area that has recently emerged is how to use index
coding to improve the communication efficiency in distributed computing
systems, especially for data shuffling in iterative computations. In this
paper, we posit that pliable index coding can offer a more efficient framework
for data shuffling, as it can better leverage the many possible shuffling
choices to reduce the number of transmissions. We theoretically analyze pliable
index coding under data shuffling constraints, and design a hierarchical
data-shuffling scheme that uses pliable coding as a component. We find benefits
up to $O(ns/m)$ over index coding, where $ns/m$ is the average number of
workers caching a message, and $m$, $n$, and $s$ are the numbers of messages,
workers, and cache size, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
The statistical challenge of constraining the low-mass IMF in Local Group dwarf galaxies | We use Monte Carlo simulations to explore the statistical challenges of
constraining the characteristic mass ($m_c$) and width ($\sigma$) of a
lognormal sub-solar initial mass function (IMF) in Local Group dwarf galaxies
using direct star counts. For a typical Milky Way (MW) satellite ($M_{V} =
-8$), jointly constraining $m_c$ and $\sigma$ to a precision of $\lesssim 20\%$
requires that observations be complete to $\lesssim 0.2 M_{\odot}$, if the IMF
is similar to the MW IMF. A similar statistical precision can be obtained if
observations are only complete down to $0.4M_{\odot}$, but this requires
measurement of nearly 100$\times$ more stars, and thus, a significantly more
massive satellite ($M_{V} \sim -12$). In the absence of sufficiently deep data
to constrain the low-mass turnover, it is common practice to fit a
single-sloped power law to the low-mass IMF, or to fit $m_c$ for a lognormal
while holding $\sigma$ fixed. We show that the former approximation leads to
best-fit power law slopes that vary with the mass range observed and can
largely explain existing claims of low-mass IMF variations in MW satellites,
even if satellite galaxies have the same IMF as the MW. In addition, fixing
$\sigma$ during fitting leads to substantially underestimated uncertainties in
the recovered value of $m_c$ (by a factor of $\sim 4$ for typical
observations). If the IMFs of nearby dwarf galaxies are lognormal and do vary,
observations must reach down to $\sim m_c$ in order to robustly detect these
variations. The high-sensitivity, near-infrared capabilities of JWST and WFIRST
have the potential to dramatically improve constraints on the low-mass IMF. We
present an efficient observational strategy for using these facilities to
measure the IMFs of Local Group dwarf galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spatial analysis of airborne laser scanning point clouds for predicting forest variables | With recent developments in remote sensing technologies, plot-level forest
resources can be predicted utilizing airborne laser scanning (ALS). The
prediction is often assisted by mostly vertical summaries of the ALS point
clouds. We present a spatial analysis of the point cloud by studying the
horizontal distribution of the pulse returns through canopy height models
thresholded at different height levels. The resulting patterns of patches of
vegetation and gaps on each layer are summarized into spatial ALS features. We
propose new features based on the Euler number, which is the number of patches
minus the number of gaps, and the empty-space function, which is a spatial
summary function of the gap space. The empty-space function is also used to
describe differences in the gap structure between two different layers. We
illustrate the usefulness of the proposed spatial features for predicting
different forest variables that summarize the spatial structure of forests or
their breast height diameter distribution. We employ the proposed spatial
features, in addition to commonly used features from the literature, in the
well-known k-nn estimation method to predict the forest variables. We present
the methodology on the example of a study site in Central Finland.
| 0 | 0 | 0 | 1 | 1 | 0 |
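A minimal sketch of the Euler-number feature described above, assuming the canopy layer is given as a thresholded 2D NumPy canopy height model; patch and gap counts come from `scipy.ndimage.label`, and the choice to exclude gap components touching the image border is an illustrative assumption, not the paper's exact definition:

```python
import numpy as np
from scipy import ndimage

def euler_number_feature(chm: np.ndarray, height_threshold: float) -> int:
    """Euler number of a canopy layer: number of vegetation patches
    minus number of gaps, for a canopy height model (CHM) thresholded
    at `height_threshold` (illustrative implementation)."""
    vegetation = chm >= height_threshold              # binary canopy layer
    _, n_patches = ndimage.label(vegetation)          # connected patches
    gaps, n_gaps = ndimage.label(~vegetation)         # connected gaps
    # assumption: gap components touching the border are background, not gaps
    border = np.unique(np.concatenate(
        [gaps[0, :], gaps[-1, :], gaps[:, 0], gaps[:, -1]]))
    n_gaps -= np.count_nonzero(border)
    return n_patches - n_gaps

# toy usage: random CHM, layers thresholded at 5 m and 15 m
chm = np.random.default_rng(0).gamma(shape=2.0, scale=5.0, size=(100, 100))
print([euler_number_feature(chm, h) for h in (5.0, 15.0)])
```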
Analytic and arithmetic properties of the $(Γ,χ)$-automorphic reproducing kernel function | We consider the reproducing kernel function of the theta Bargmann-Fock
Hilbert space associated to a given full-rank lattice and pseudo-character,
and we deal with some of its analytic and arithmetic properties. Specifically,
the distribution and discreteness of its zeros are examined, and the analytic
sets inside a product of fundamental cells are characterized and shown to be
finite, of cardinality less than or equal to the dimension of the theta
Bargmann-Fock Hilbert space. Moreover, we obtain some remarkable lattice sums
by evaluating the so-called complex Hermite-Taylor coefficients. Some of them
generalize arithmetic identities established by Perelomov in the framework of
coherent states for the specific case of the von Neumann lattice. Such complex
Hermite-Taylor coefficients are nontrivial examples of the so-called lattice
functions in Serre's terminology. Careful use of the basic
properties of the complex Hermite polynomials is crucial in this framework.
| 0 | 0 | 1 | 0 | 0 | 0 |
Concentration of $1$-Lipschitz functions on manifolds with boundary with Dirichlet boundary condition | In this paper, we consider a concentration of measure problem on Riemannian
manifolds with boundary. We study concentration phenomena of non-negative
$1$-Lipschitz functions with Dirichlet boundary condition around zero, which is
called boundary concentration phenomena. We first examine the relation between
boundary concentration phenomena and large spectral gap phenomena of Dirichlet
eigenvalues of the Laplacian. We obtain analogues of the Gromov-V. D. Milman
theorem and the Funano-Shioya theorem for closed manifolds. Furthermore, to
capture boundary concentration phenomena, we introduce a new invariant called
the observable inscribed radius. We formulate comparison theorems for this
invariant under a lower Ricci curvature bound, and a lower mean curvature bound
for the boundary. Based on such comparison theorems, we investigate various
boundary concentration phenomena of sequences of manifolds with boundary.
| 0 | 0 | 1 | 0 | 0 | 0 |
Simulated Annealing for JPEG Quantization | JPEG is one of the most widely used image formats, but in some ways remains
surprisingly unoptimized, perhaps because some natural optimizations would go
outside the standard that defines JPEG. We show how to improve JPEG compression
in a standard-compliant, backward-compatible manner, by finding improved
default quantization tables. We describe a simulated annealing technique that
has allowed us to find several quantization tables that perform better than the
industry standard, in terms of both compressed size and image fidelity.
Specifically, we derive tables that reduce the FSIM error by over 10% while
improving compression by over 20% at quality level 95 in our tests; we also
provide similar results for other quality levels. While we acknowledge our
approach can in some images lead to visible artifacts under large
magnification, we believe use of these quantization tables, or additional
tables that could be found using our methodology, would significantly reduce
JPEG file sizes with improved overall image quality.
| 1 | 0 | 0 | 0 | 0 | 0 |
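A minimal sketch of the simulated-annealing search described above, assuming the caller supplies a `cost(table)` function combining compressed size and a fidelity penalty (e.g. 1 - FSIM) for a candidate 8x8 quantization table; the step size, cooling schedule, and toy cost below are illustrative, not the paper's settings:

```python
import numpy as np

def anneal_quant_table(cost, initial_table, n_steps=20_000,
                       t_start=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over 8x8 JPEG quantization tables;
    `cost` maps a table to a scalar, lower is better."""
    rng = np.random.default_rng(seed)
    table = np.array(initial_table, dtype=np.int32)
    cur_cost = cost(table)
    best, best_cost = table.copy(), cur_cost
    for step in range(n_steps):
        temp = t_start * (t_end / t_start) ** (step / n_steps)  # geometric cooling
        cand = table.copy()
        i, j = rng.integers(0, 8, size=2)                       # perturb one entry
        cand[i, j] = np.clip(cand[i, j] + rng.integers(-4, 5), 1, 255)
        cand_cost = cost(cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if cand_cost < cur_cost or rng.random() < np.exp((cur_cost - cand_cost) / temp):
            table, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = table.copy(), cur_cost
    return best, best_cost

# toy usage with a stand-in cost (distance to an arbitrary target table)
target = np.full((8, 8), 16)
best, c = anneal_quant_table(lambda t: float(np.abs(t - target).sum()),
                             np.full((8, 8), 50), n_steps=5_000)
print(c)
```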
Greedy Strategy Works for Clustering with Outliers and Coresets Construction | We study the problems of clustering with outliers in high dimension. Though a
number of methods have been developed in the past decades, it is still quite
challenging to design quality-guaranteed algorithms with low complexity for
the problems. Our idea is inspired by the greedy method, Gonzalez's algorithm,
for solving the problem of ordinary $k$-center clustering. Based on some novel
observations, we show that this greedy strategy actually can handle
$k$-center/median/means clustering with outliers efficiently, in terms of
qualities and complexities. We further show that the greedy approach yields a
small coreset for the problem in doubling metrics, which reduces the time
complexity significantly. Moreover, as a by-product, the coreset construction
can be applied to speed up the popular density-based clustering
approach DBSCAN.
| 1 | 0 | 0 | 0 | 0 | 0 |
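For reference, a minimal sketch of Gonzalez's greedy algorithm for ordinary k-center, the starting point the abstract refers to; this is the textbook 2-approximation, not the authors' outlier-robust extension:

```python
import numpy as np

def gonzalez_k_center(points: np.ndarray, k: int, seed: int = 0):
    """Greedy 2-approximation for k-center: repeatedly pick the point
    farthest from the current set of centers."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(points)))]        # arbitrary first center
    dist = np.linalg.norm(points - points[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                    # current farthest point
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return centers, float(dist.max())                 # center indices and radius

# toy usage
pts = np.random.default_rng(1).normal(size=(500, 2))
print(gonzalez_k_center(pts, k=5))
```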
All-but-the-Top: Simple and Effective Postprocessing for Word Representations | Real-valued word representations have transformed NLP applications; popular
examples are word2vec and GloVe, recognized for their ability to capture
linguistic regularities. In this paper, we demonstrate a {\em very simple}, and
yet counter-intuitive, postprocessing technique -- eliminate the common mean
vector and a few top dominating directions from the word vectors -- that
renders off-the-shelf representations {\em even stronger}. The postprocessing
is empirically validated on a variety of lexical-level intrinsic tasks (word
similarity, concept categorization, word analogy) and sentence-level tasks
(semantic textual similarity and text classification) on multiple datasets
and with a variety of representation methods and hyperparameter choices in
multiple languages; in each case, the processed representations are
consistently better than the original ones.
| 1 | 0 | 0 | 1 | 0 | 0 |
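The postprocessing itself takes only a few lines; a sketch in NumPy, where `X` holds one word vector per row and `D` is the number of dominating directions to remove (the paper suggests roughly d/100):

```python
import numpy as np

def all_but_the_top(X: np.ndarray, D: int) -> np.ndarray:
    """Remove the common mean vector and the top-D dominating
    directions from word vectors X (one word per row)."""
    Xc = X - X.mean(axis=0)                  # eliminate the common mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U = Vt[:D]                               # (D, d) top principal directions
    return Xc - Xc @ U.T @ U                 # project them out

# toy usage: 1000 random "word vectors" of dimension 300, D ~ d/100
X = np.random.default_rng(0).normal(size=(1000, 300))
X_post = all_but_the_top(X, D=3)
```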
Detecting Near Duplicates in Software Documentation | Contemporary software documentation is as complicated as the software itself.
During its lifecycle, the documentation accumulates a lot of near duplicate
fragments, i.e. chunks of text that were copied from a single source and were
later modified in different ways. Such near duplicates decrease documentation
quality and thus hamper its further utilization. At the same time, they are
hard to detect manually due to their fuzzy nature. In this paper we give a
formal definition of near duplicates and present an algorithm for their
detection in software documents. This algorithm is based on the exact software
clone detection approach: the software clone detection tool Clone Miner was
adapted to detect exact duplicates in documents. Then, our algorithm uses these
exact duplicates to construct near ones. We evaluate the proposed algorithm
using the documentation of 19 open source and commercial projects. Our
evaluation is very comprehensive - it covers various documentation types:
design and requirement specifications, programming guides and API
documentation, user manuals. Overall, the evaluation shows that all kinds of
software documentation contain a significant number of both exact and near
duplicates. Next, we report on the performed manual analysis of the detected
near duplicates for the Linux Kernel Documentation. We present both quantitative
and qualitative results of this analysis, demonstrate algorithm strengths and
weaknesses, and discuss the benefits of duplicate management in software
documents.
| 1 | 0 | 0 | 0 | 0 | 0 |
L-Graphs and Monotone L-Graphs | In an $\mathsf{L}$-embedding of a graph, each vertex is represented by an
$\mathsf{L}$-segment, and two segments intersect each other if and only if the
corresponding vertices are adjacent in the graph. If the corner of each
$\mathsf{L}$-segment in an $\mathsf{L}$-embedding lies on a straight line, we
call it a monotone $\mathsf{L}$-embedding. In this paper we give a full
characterization of monotone $\mathsf{L}$-embeddings by introducing a new class
of graphs which we call "non-jumping" graphs. We show that a graph admits a
monotone $\mathsf{L}$-embedding if and only if the graph is a non-jumping
graph. Further, we show that outerplanar graphs, convex bipartite graphs,
interval graphs, 3-leaf power graphs, and complete graphs are subclasses of
non-jumping graphs. Finally, we show that distance-hereditary graphs and
$k$-leaf power graphs ($k\le 4$) admit $\mathsf{L}$-embeddings.
| 1 | 0 | 0 | 0 | 0 | 0 |
ZigZag: A new approach to adaptive online learning | We develop a novel family of algorithms for the online learning setting with
regret against any data sequence bounded by the empirical Rademacher complexity
of that sequence. To develop a general theory of when this type of adaptive
regret bound is achievable we establish a connection to the theory of
decoupling inequalities for martingales in Banach spaces. When the hypothesis
class is a set of linear functions bounded in some norm, such a regret bound is
achievable if and only if the norm satisfies certain decoupling inequalities
for martingales. Donald Burkholder's celebrated geometric characterization of
decoupling inequalities (1984) states that such an inequality holds if and only
if there exists a special function called a Burkholder function satisfying
certain restricted concavity properties. Our online learning algorithms are
efficient in terms of queries to this function.
We realize our general theory by giving novel efficient algorithms for
classes including $\ell_p$ norms, Schatten $p$-norms, group norms, and reproducing
kernel Hilbert spaces. The empirical Rademacher complexity regret bound implies
--- when used in the i.i.d. setting --- a data-dependent complexity bound for
excess risk after online-to-batch conversion. To showcase the power of the
empirical Rademacher complexity regret bound, we derive improved rates for a
supervised learning generalization of the online learning with low rank experts
task and for the online matrix prediction task.
In addition to obtaining tight data-dependent regret bounds, our algorithms
enjoy improved efficiency over previous techniques based on Rademacher
complexity, automatically work in the infinite horizon setting, and are
scale-free. To obtain such adaptive methods, we introduce novel machinery, and
the resulting algorithms are not based on the standard tools of online convex
optimization.
| 1 | 0 | 1 | 1 | 0 | 0 |
Comparing Rule-Based and Deep Learning Models for Patient Phenotyping | Objective: We investigate whether deep learning techniques for natural
language processing (NLP) can be used efficiently for patient phenotyping.
Patient phenotyping is a classification task for determining whether a patient
has a medical condition, and is a crucial part of secondary analysis of
healthcare data. We assess the performance of deep learning algorithms and
compare them with classical NLP approaches.
Materials and Methods: We compare convolutional neural networks (CNNs),
n-gram models, and approaches based on cTAKES that extract pre-defined medical
concepts from clinical notes and use them to predict patient phenotypes. The
performance is tested on 10 different phenotyping tasks using 1,610 discharge
summaries extracted from the MIMIC-III database.
Results: CNNs outperform other phenotyping algorithms in all 10 tasks. The
average F1-score of our model is 76 (PPV of 83, sensitivity of 71), with our
model having an F1-score up to 37 points higher than alternative approaches. We
additionally assess the interpretability of our model by presenting a method
that extracts the most salient phrases for a particular prediction.
Conclusion: We show that NLP methods based on deep learning improve the
performance of patient phenotyping. Our CNN-based algorithm automatically
learns the phrases associated with each patient phenotype. As such, it reduces
the annotation complexity for clinical domain experts, who are normally
required to develop task-specific annotation rules and identify relevant
phrases. Our method performs well in terms of both performance and
interpretability, which indicates that deep learning is an effective approach
to patient phenotyping based on clinicians' notes.
| 1 | 0 | 0 | 1 | 0 | 0 |
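As a point of reference for the classical approaches mentioned above, an n-gram baseline of the kind the CNNs are compared against can be sketched with scikit-learn; the toy notes and labels below are illustrative stand-ins, since the MIMIC-III discharge summaries used in the paper require credentialed access:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy stand-ins for discharge summaries and a binary phenotype label
notes = ["patient reports chronic back pain",
         "no history of substance abuse",
         "chronic pain managed with opioids",
         "routine follow-up, no complaints"]
labels = [1, 0, 1, 0]                        # e.g. "chronic pain" phenotype

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),   # uni- and bi-grams
    LogisticRegression(max_iter=1000),
)
baseline.fit(notes, labels)
print(baseline.predict(["follow-up for chronic back pain"]))
```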
Jackknife multiplier bootstrap: finite sample approximations to the $U$-process supremum with applications | This paper is concerned with finite sample approximations to the supremum of
a non-degenerate $U$-process of a general order indexed by a function class. We
are primarily interested in situations where the function class as well as the
underlying distribution change with the sample size, and the $U$-process itself
is not weakly convergent as a process. Such situations arise in a variety of
modern statistical problems. We first consider Gaussian approximations, namely,
approximate the $U$-process supremum by the supremum of a Gaussian process, and
derive coupling and Kolmogorov distance bounds. Such Gaussian approximations
are, however, not often directly applicable in statistical problems since the
covariance function of the approximating Gaussian process is unknown. This
motivates us to study bootstrap-type approximations to the $U$-process
supremum. We propose a novel jackknife multiplier bootstrap (JMB) tailored to
the $U$-process, and derive coupling and Kolmogorov distance bounds for the
proposed JMB method. All these results are non-asymptotic, and established
under fairly general conditions on function classes and underlying
distributions. Key technical tools in the proofs are new local maximal
inequalities for $U$-processes, which may be useful in other problems. We also
discuss applications of the general approximation results to testing for
qualitative features of nonparametric functions based on generalized local
$U$-processes.
| 0 | 0 | 1 | 1 | 0 | 0 |
On the universality of anomalous scaling exponents of structure functions in turbulent flows | All previous experiments in open turbulent flows (e.g. downstream of grids,
jet and atmospheric boundary layer) have produced quantitatively consistent
values for the scaling exponents of velocity structure functions. The only
measurement in a closed turbulent flow (von Kármán swirling flow) using the
Taylor hypothesis, however, produced scaling exponents that are significantly
smaller, suggesting that the universality of these exponents is broken with
respect to changes in the large-scale geometry of the flow. Here, we report
measurements of longitudinal structure functions of velocity in a von
Kármán setup without the use of the Taylor hypothesis. The measurements are
made using Stereo Particle Image Velocimetry at 4 different ranges of spatial
scales, in order to observe a combined inertial subrange spanning roughly one
and a half orders of magnitude. We find scaling exponents (up to 9th order)
that are consistent with values from open turbulent flows, suggesting that they
might in fact be universal.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Heat Kernel and Weyl Anomaly of Schrödinger invariant theory | We propose a method inspired from discrete light cone quantization (DLCQ) to
determine the heat kernel for a Schrödinger field theory (Galilean boost
invariant with $z=2$ anisotropic scaling symmetry) living in $d+1$ dimensions,
coupled to a curved Newton-Cartan background starting from a heat kernel of a
relativistic conformal field theory ($z=1$) living in $d+2$ dimensions. We use
this method to show that the Schrödinger field theory of a complex scalar field
cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly
$\mathcal{A}^{G}_{d+1}$ for Schrödinger theory is related to the Weyl anomaly
of a free relativistic scalar CFT $\mathcal{A}^{R}_{d+2}$ via
$\mathcal{A}^{G}_{d+1}= 2\pi \delta (m) \mathcal{A}^{R}_{d+2}$ where $m$ is the
charge of the scalar field under particle number symmetry. We provide further
evidence of vanishing anomaly by evaluating Feynman diagrams in all orders of
perturbation theory. We present an explicit calculation of the anomaly using a
regulated Schrödinger operator, without using the null cone reduction
technique. We generalise our method to show that a similar result holds for
one-time-derivative theories with even $z>2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tree-based networks: characterisations, metrics, and support trees | Phylogenetic networks generalise phylogenetic trees and allow for the
accurate representation of the evolutionary history of a set of present-day
species whose past includes reticulate events such as hybridisation and lateral
gene transfer. One way to obtain such a network is by starting with a (rooted)
phylogenetic tree $T$, called a base tree, and adding arcs between arcs of $T$.
The class of phylogenetic networks that can be obtained in this way is called
tree-based networks and includes the prominent classes of tree-child and
reticulation-visible networks. Initially defined for binary phylogenetic
networks, tree-based networks naturally extend to arbitrary phylogenetic
networks. In this paper, we generalise recent tree-based characterisations and
associated proximity measures for binary phylogenetic networks to arbitrary
phylogenetic networks. These characterisations are in terms of matchings in
bipartite graphs, path partitions, and antichains. Some of the generalisations
are straightforward to establish using the original approach, while others
require a very different approach. Furthermore, for an arbitrary tree-based
network $N$, we characterise the support trees of $N$, that is, the tree-based
embeddings of $N$. We use this characterisation to give an explicit formula for
the number of support trees of $N$ when $N$ is binary. This formula is written
in terms of the components of a bipartite graph.
| 1 | 0 | 0 | 0 | 0 | 0 |
Comparing People with Bibliometrics | Bibliometric indicators, citation counts and/or download counts are
increasingly being used to inform personnel decisions such as hiring or
promotions. These statistics are very often misused. Here we provide a guide to
the factors which should be considered when using these so-called quantitative
measures to evaluate people. Rules of thumb are given for when to begin using
bibliometric measures when comparing otherwise similar candidates.
| 1 | 1 | 0 | 0 | 0 | 0 |
Urban Dreams of Migrants: A Case Study of Migrant Integration in Shanghai | Unprecedented human mobility has driven the rapid urbanization around the
world. In China, the fraction of population dwelling in cities increased from
17.9% to 52.6% between 1978 and 2012. Such large-scale migration poses
challenges for policymakers and important questions for researchers. To
investigate the process of migrant integration, we employ a one-month complete
dataset of telecommunication metadata in Shanghai with 54 million users and 698
million call logs. We find systematic differences between locals and migrants
in their mobile communication networks and geographical locations. For
instance, migrants have more diverse contacts and move around the city with a
larger radius than locals after they settle down. By distinguishing new
migrants (who recently moved to Shanghai) from settled migrants (who have been
in Shanghai for a while), we demonstrate the integration process of new
migrants in their first three weeks. Moreover, we formulate classification
problems to predict whether a person is a migrant. Our classifier is able to
achieve an F1-score of 0.82 when distinguishing settled migrants from locals,
but it remains challenging to identify new migrants because of class imbalance.
This classification setup holds promise for identifying new migrants who will
successfully integrate with locals (new migrants who are misclassified as locals).
| 1 | 1 | 0 | 0 | 0 | 0 |
A computational method for estimating Burr XII parameters with complete and multiple censored data | The flexibility in shape and scale of the Burr XII distribution allows it to
closely approximate numerous well-known probability density functions. Owing to
this capability, the Burr XII distribution is used in risk
analysis, lifetime data analysis and process capability estimation. In this
paper the Cross-Entropy (CE) method is further developed in terms of Maximum
Likelihood Estimation (MLE) to estimate the parameters of Burr XII distribution
for the complete data or in the presence of multiple censoring. A simulation
study is conducted to evaluate the performance of the MLE by means of CE method
for different parameter settings and sample sizes. The results are compared to
other existing methods in both uncensored and censored situations.
| 0 | 0 | 0 | 1 | 0 | 0 |
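For the complete-data case, a quick maximum-likelihood baseline for the two Burr XII shape parameters is available in SciPy (this is plain numerical MLE, not the paper's cross-entropy method, which is aimed especially at the multiply censored setting):

```python
import numpy as np
from scipy import stats

c_true, k_true = 2.0, 3.0                    # Burr XII shape parameters
data = stats.burr12.rvs(c_true, k_true, size=2000, random_state=42)

# MLE of the shape parameters, holding location at 0 and scale at 1
c_hat, k_hat, loc, scale = stats.burr12.fit(data, floc=0, fscale=1)
print(f"c = {c_hat:.3f} (true {c_true}), k = {k_hat:.3f} (true {k_true})")
```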
Locally stationary spatio-temporal interpolation of Argo profiling float data | Argo floats measure seawater temperature and salinity in the upper 2,000 m of
the global ocean. Statistical analysis of the resulting spatio-temporal dataset
is challenging due to its nonstationary structure and large size. We propose
mapping these data using locally stationary Gaussian process regression where
covariance parameter estimation and spatio-temporal prediction are carried out
in a moving-window fashion. This yields computationally tractable nonstationary
anomaly fields without the need to explicitly model the nonstationary
covariance structure. We also investigate Student-$t$ distributed fine-scale
variation as a means to account for non-Gaussian heavy tails in ocean
temperature data. Cross-validation studies comparing the proposed approach with
the existing state-of-the-art demonstrate clear improvements in point
predictions and show that accounting for the nonstationarity and
non-Gaussianity is crucial for obtaining well-calibrated uncertainties. This
approach also provides data-driven local estimates of the spatial and temporal
dependence scales for the global ocean which are of scientific interest in
their own right.
| 0 | 1 | 0 | 1 | 0 | 0 |
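A minimal sketch of the moving-window idea using scikit-learn's GaussianProcessRegressor as the local model: a stationary kernel is refit inside each window, which is what makes the global field effectively nonstationary. The window radius, kernel, and synthetic data are illustrative, and the actual method also handles the temporal dimension and Student-$t$ fine-scale variation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def moving_window_gp(coords, y, pred_points, radius=5.0):
    """Locally stationary GP: for each prediction point, fit a GP
    only to observations within `radius` and predict there."""
    preds = np.full(len(pred_points), np.nan)
    for i, p in enumerate(pred_points):
        mask = np.linalg.norm(coords - p, axis=1) <= radius
        if mask.sum() < 10:                          # too little local data
            continue
        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
            normalize_y=True)
        gp.fit(coords[mask], y[mask])                # local hyperparameters
        preds[i] = gp.predict(p[None, :])[0]
    return preds

# toy usage on a synthetic 2-D field
rng = np.random.default_rng(0)
coords = rng.uniform(0, 20, size=(400, 2))
y = np.sin(coords[:, 0] / 3) + 0.1 * rng.normal(size=400)
print(moving_window_gp(coords, y, rng.uniform(0, 20, size=(5, 2))))
```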
Knowledge Reuse for Customization: Metamodels in an Open Design Community for 3d Printing | Theories of knowledge reuse posit two distinct processes: reuse for
replication and reuse for innovation. We identify another distinct process,
reuse for customization. Reuse for customization is a process in which
designers manipulate the parameters of metamodels to produce models that
fulfill their personal needs. We test hypotheses about reuse for customization
in Thingiverse, a community of designers that shares files for
three-dimensional printing. 3D metamodels are reused more often than the 3D
models they generate. The reuse of metamodels is amplified when the metamodels
are created by designers with greater community experience. Metamodels make the
community's design knowledge available for reuse for customization, or for
further extension of the metamodels, a kind of reuse for innovation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dynamic Rank Maximal Matchings | We consider the problem of matching applicants to posts where applicants have
preferences over posts. Thus the input to our problem is a bipartite graph $G =
(A \cup P, E)$, where $A$ denotes a set of applicants, $P$ is a set of posts, and there
are ranks on edges which denote the preferences of applicants over posts. A
matching M in G is called rank-maximal if it matches the maximum number of
applicants to their rank 1 posts, subject to this the maximum number of
applicants to their rank 2 posts, and so on.
We consider this problem in a dynamic setting, where vertices and edges can
be added and deleted at any point. Let $n$ and $m$ be the number of vertices and
edges in an instance $G$, and $r$ be the maximum rank used by any rank-maximal
matching in $G$. We give a simple $O(r(m+n))$-time algorithm to update an existing
rank-maximal matching under each of these changes. When $r = o(n)$, this is
faster than recomputing a rank-maximal matching completely using a known
algorithm like that of Irving et al., which takes time $O(\min(r + n,
r\sqrt{n})\,m)$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Early identification of important patents through network centrality | One of the most challenging problems in technological forecasting is to
identify as early as possible those technologies that have the potential to
lead to radical changes in our society. In this paper, we use the US patent
citation network (1926-2010) to test our ability to early identify a list of
historically significant patents through citation network analysis. We show
that in order to effectively uncover these patents shortly after they are
issued, we need to go beyond raw citation counts and take into account both the
citation network topology and temporal information. In particular, an
age-normalized measure of patent centrality, called rescaled PageRank, allows
us to identify the significant patents earlier than citation count and PageRank
score. In addition, we find that while high-impact patents tend to rely on
other high-impact patents in a similar way as scientific papers, the patents'
citation dynamics is significantly slower than that of papers, which makes the
early identification of significant patents more challenging than that of
significant papers.
| 1 | 0 | 0 | 0 | 0 | 0 |
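A sketch of an age-rescaled PageRank of the kind described above: each patent's PageRank score is z-scored against the patents closest to it in issue order, so that old and young patents become comparable. The window size, damping factor, and toy citation graph are illustrative:

```python
import numpy as np
import networkx as nx

def rescaled_pagerank(G, issue_order, window=100, alpha=0.5):
    """Age-normalized PageRank: z-score each node's PageRank against
    the ~`window` nodes nearest to it in issue order."""
    pr = nx.pagerank(G, alpha=alpha)
    scores = np.array([pr[n] for n in issue_order])
    out = {}
    for i, n in enumerate(issue_order):
        lo = max(0, i - window // 2)
        peers = scores[lo:lo + window]
        sd = peers.std()
        out[n] = (scores[i] - peers.mean()) / sd if sd > 0 else 0.0
    return out

# toy usage: random citation graph where patents cite earlier patents
rng = np.random.default_rng(0)
order = list(range(1000))
G = nx.DiGraph()
G.add_nodes_from(order)
for j in order[1:]:
    for i in rng.integers(0, j, size=min(j, 3)):
        G.add_edge(j, int(i))                # newer patent cites older one
rp = rescaled_pagerank(G, order)
print(max(rp, key=rp.get))                   # top patent in the toy graph
```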
Central limit theorem for the variable bandwidth kernel density estimators | In this paper we study the ideal variable bandwidth kernel density estimator
introduced by McKay (1993) and Jones, McKay and Hu (1994) and the plug-in
practical version of the variable bandwidth kernel estimator with two sequences
of bandwidths as in Giné and Sang (2013). Based on the bias and variance
analysis of the ideal and true variable bandwidth kernel density estimators, we
study the central limit theorems for each of them.
| 0 | 0 | 1 | 1 | 0 | 0 |
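A sketch of the square-root-law plug-in estimator studied in this line of work: a fixed-bandwidth pilot estimate is computed first, and the final estimate uses sample-point bandwidths $h_2 / \tilde f(X_i)^{1/2}$. The bandwidth values below are illustrative, not the paper's choices:

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def fixed_kde(x, data, h):
    """Standard fixed-bandwidth kernel density estimate at points x."""
    return gauss((x[:, None] - data[None, :]) / h).mean(axis=1) / h

def variable_kde(x, data, h1, h2):
    """Plug-in variable bandwidth KDE: pilot estimate with bandwidth h1,
    then square-root-law bandwidths h2 / sqrt(pilot(X_i))."""
    s = np.sqrt(fixed_kde(data, data, h1))   # local bandwidth factors
    u = s[None, :] * (x[:, None] - data[None, :]) / h2
    return (s[None, :] * gauss(u)).mean(axis=1) / h2

# toy usage on a standard normal sample
data = np.random.default_rng(0).normal(size=1000)
grid = np.linspace(-3, 3, 7)
print(variable_kde(grid, data, h1=0.4, h2=0.4).round(3))
```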
Distance-to-Mean Continuous Conditional Random Fields to Enhance Prediction Problem in Traffic Flow Data | The increase in the number of vehicles on highways, as on ordinary roads, may
cause traffic congestion. Predicting the traffic flow on highways, in
particular, is needed to solve this congestion problem. Predictions on time-series
multivariate data, such as in the traffic flow dataset, have been largely
accomplished through various approaches. The approach with conventional
prediction algorithms, such as with Support Vector Machine (SVM), is only
capable of accommodating predictions that are independent in each time unit.
Hence, the sequential relationships in this time series data is hardly
explored. Continuous Conditional Random Field (CCRF) is one of Probabilistic
Graphical Model (PGM) algorithms which can accommodate this problem. The
neighboring aspects of sequential data such as in the time series data can be
expressed by CCRF so that its predictions are more reliable. In this article, a
novel approach called DM-CCRF is adopted by modifying the CCRF prediction
algorithm to strengthen the probability of the predictions made by the baseline
regressor. The results show that DM-CCRF is superior in performance to CCRF:
it reduces the baseline error by up to 9%, roughly twice the improvement of
standard CCRF, which decreases the baseline error by at most 4.582%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Submap-based Pose-graph Visual SLAM: A Robust Visual Exploration and Localization System | For VSLAM (Visual Simultaneous Localization and Mapping), localization is a
challenging task, especially in difficult situations such as textureless
frames and motion blur. To build a robust exploration and localization
system in a given space or environment, a submap-based VSLAM system is proposed
in this paper. Our system uses a submap back-end and a visual front-end. The
main advantage of our system is its robustness with respect to tracking
failure, a common problem in current VSLAM algorithms. The robustness of our
system is compared with the state-of-the-art in terms of average tracking
percentage. The precision of our system is also evaluated in terms of ATE
(absolute trajectory error) RMSE (root mean square error) compared with the
state-of-the-art. The ability of our system to solve the `kidnapped' problem
is demonstrated. Our system can improve the robustness of visual localization
in challenging situations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Partial and Total Dielectronic Recombination Rate Coefficients for W$^{55+}$ to W$^{38+}$ | Dielectronic recombination (DR) is the dominant mode of recombination in
magnetically confined fusion plasmas for intermediate to low-charged ions of W.
Complete, final-state resolved partial isonuclear W DR rate coefficient data is
required for detailed collisional-radiative modelling for such plasmas in
preparation for the upcoming fusion experiment ITER. To realize this
requirement, we continue {\it The Tungsten Project} by presenting our
calculations for tungsten ions W$^{55+}$ to W$^{38+}$. As per our prior
calculations for W$^{73+}$ to W$^{56+}$, we use the collision package {\sc
autostructure} to calculate partial and total DR rate coefficients for all
relevant core-excitations in intermediate coupling (IC) and configuration
average (CA) using $\kappa$-averaged relativistic wavefunctions. Radiative
recombination (RR) rate coefficients are also calculated for the purpose of
evaluating ionization fractions. Comparison of our DR rate coefficients for
W$^{46+}$ with other authors yields agreement to within 7-19\% at peak
abundance verifying the reliability of our method. Comparison of partial DR
rate coefficients calculated in IC and CA yield differences of a factor
$\sim{2}$ at peak abundance temperature, highlighting the importance of
relativistic configuration mixing. Large differences are observed between
ionization fractions calculated using our recombination rate coefficient data
and that of Pütterich et al. [Plasma Phys. and Control. Fusion 50, 085016
(2008)]. These differences are attributed to deficiencies in the average-atom
method used by the former to calculate their data.
| 0 | 1 | 0 | 0 | 0 | 0 |
Congenial Causal Inference with Binary Structural Nested Mean Models | Structural nested mean models (SNMMs) are among the fundamental tools for
inferring causal effects of time-dependent exposures from longitudinal studies.
With binary outcomes, however, current methods for estimating multiplicative
and additive SNMM parameters suffer from variation dependence between the
causal SNMM parameters and the non-causal nuisance parameters. Estimating
methods for logistic SNMMs do not suffer from this dependence. Unfortunately,
in contrast with the multiplicative and additive models, unbiased estimation of
the causal parameters of a logistic SNMM relies on additional modeling
assumptions even when the treatment probabilities are known. These difficulties
have hindered the uptake of SNMMs in epidemiological practice, where binary
outcomes are common. We solve the variation dependence problem for the binary
multiplicative SNMM by a reparametrization of the non-causal nuisance
parameters. Our novel nuisance parameters are variation independent of the
causal parameters, and hence allow the fitting of a multiplicative SNMM by
unconstrained maximum likelihood. They also allow one to construct true (i.e.
congenial) doubly robust estimators of the causal parameters. Along the way, we
prove that an additive SNMM with binary outcomes does not admit a variation
independent parametrization, thus explaining why we restrict ourselves to the
multiplicative SNMM.
| 0 | 0 | 0 | 1 | 0 | 0 |
Improving Search through A3C Reinforcement Learning based Conversational Agent | We develop a reinforcement learning based search assistant which can assist
users through a set of actions and sequence of interactions to enable them
realize their intent. Our approach caters to subjective search, where the user
is seeking digital assets such as images, which is fundamentally different from
tasks that have objective and limited search modalities. Labeled
conversational data is generally not available in such search tasks and
training the agent through human interactions can be time consuming. We propose
a stochastic virtual user which impersonates a real user and can be used to
sample user behavior efficiently to train the agent which accelerates the
bootstrapping of the agent. We develop an A3C-based context-preserving
architecture which enables the agent to provide contextual assistance to the
user. We compare the A3C agent with Q-learning and evaluate its performance on
average rewards and state values it obtains with the virtual user in validation
episodes. Our experiments show that the agent learns to achieve higher rewards
and better states.
| 1 | 0 | 0 | 0 | 0 | 0 |
Eco-Routing based on a Data Driven Fuel Consumption Model | A nonparametric fuel consumption model is developed and used for eco-routing
algorithm development in this paper. Six months of driving information from the
city of Ann Arbor is collected from 2,000 vehicles. The road grade information
from more than 1,100 km of road network is modeled and the software Autonomie
is used to calculate fuel consumption for all trips on the road network. Four
different routing strategies including shortest distance, shortest time,
eco-routing, and travel-time-constrained eco-routing are compared. The results
show that eco-routing can reduce fuel consumption, but may increase travel
time. A travel-time-constrained eco-routing algorithm is developed to keep most
of the fuel-saving benefit while incurring very little increase in travel time.
| 0 | 0 | 0 | 1 | 0 | 0 |
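At its core, eco-routing is shortest-path search with fuel consumption as the edge weight; a sketch with `networkx`, where the toy edge attributes are illustrative (the travel-time-constrained variant additionally bounds the total time of the chosen path):

```python
import networkx as nx

# toy road network: edges carry travel time (s) and fuel use (mL)
G = nx.DiGraph()
G.add_edge("A", "B", time=60, fuel=50)
G.add_edge("B", "D", time=60, fuel=55)
G.add_edge("A", "C", time=90, fuel=35)   # slower but more economical road
G.add_edge("C", "D", time=90, fuel=30)

fastest = nx.shortest_path(G, "A", "D", weight="time")
eco = nx.shortest_path(G, "A", "D", weight="fuel")     # eco-route

def path_cost(path, attr):
    return sum(G[u][v][attr] for u, v in zip(path, path[1:]))

print("fastest:", fastest, path_cost(fastest, "fuel"), "mL")
print("eco:    ", eco, path_cost(eco, "time"), "s")
```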
SuperSpike: Supervised learning in multi-layer spiking neural networks | A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
| 1 | 0 | 0 | 1 | 0 | 0 |
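The heart of the surrogate gradient approach is replacing the ill-defined derivative of the spike nonlinearity with a smooth pseudo-derivative; SuperSpike uses the derivative of a fast sigmoid. A heavily simplified single-neuron sketch (the paper's full rule includes eligibility traces and error feedback across layers):

```python
import numpy as np

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Fast-sigmoid pseudo-derivative of the spike function,
    evaluated at the membrane potential v."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

rng = np.random.default_rng(0)
T, n_in = 200, 50
inputs = (rng.random((T, n_in)) < 0.05).astype(float)  # random input spikes
target = (rng.random(T) < 0.02).astype(float)          # desired output spikes
w = 0.05 * rng.normal(size=n_in)

v, lr = 0.0, 0.01
for t in range(T):
    v = 0.9 * v + inputs[t] @ w              # leaky membrane integration
    spike = float(v >= 1.0)
    err = target[t] - spike                  # simplified output error
    # three-factor-style update: error x pseudo-derivative x presynaptic input
    w += lr * err * surrogate_grad(v) * inputs[t]
    if spike:
        v = 0.0                              # reset after a spike
```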
Distributions-oriented wind forecast verification by a hidden Markov model for multivariate circular-linear data | Winds from the North-West quadrant and lack of precipitation are known to
lead to an increase of PM10 concentrations over a residential neighborhood in
the city of Taranto (Italy). In 2012 the local government prescribed a
reduction of industrial emissions by 10% every time such meteorological
conditions are forecasted 72 hours in advance. Wind forecasting is addressed
using the Weather Research and Forecasting (WRF) atmospheric simulation system
by the Regional Environmental Protection Agency. In the context of
distributions-oriented forecast verification, we propose a comprehensive
model-based inferential approach to investigate the ability of the WRF system
to forecast the local wind speed and direction allowing different performances
for unknown weather regimes. Ground-observed and WRF-forecasted wind speed and
direction at a relevant location are jointly modeled as a 4-dimensional time
series with an unknown finite number of states characterized by homogeneous
distributional behavior. The proposed model relies on a mixture of joint
projected and skew normal distributions with time-dependent states, where the
temporal evolution of the state membership follows a first order Markov
process. Parameter estimates, including the number of states, are obtained by a
Bayesian MCMC-based method. Results provide useful insights on the performance
of WRF forecasts in relation to different combinations of wind speed and
direction.
| 0 | 0 | 0 | 1 | 0 | 0 |
Mean Field Residual Networks: On the Edge of Chaos | We study randomly initialized residual networks using mean field theory and
the theory of difference equations. Classical feedforward neural networks, such
as those with tanh activations, exhibit exponential behavior on the average
when propagating inputs forward or gradients backward. The exponential forward
dynamics causes rapid collapsing of the input space geometry, while the
exponential backward dynamics causes drastic vanishing or exploding gradients.
We show, in contrast, that by adding skip connections, the network will,
depending on the nonlinearity, adopt subexponential forward and backward
dynamics, and in many cases in fact polynomial. The exponents of these
polynomials are obtained through analytic methods and proved and verified
empirically to be correct. In terms of the "edge of chaos" hypothesis, these
subexponential and polynomial laws allow residual networks to "hover over the
boundary between stability and chaos," thus preserving the geometry of the
input space and the gradient information flow. In our experiments, for each
activation function we study here, we initialize residual networks with
different hyperparameters and train them on MNIST. Remarkably, our
initialization time theory can accurately predict test time performance of
these networks, by tracking either the expected amount of gradient explosion or
the expected squared distance between the images of two input vectors.
Importantly, we show, theoretically as well as empirically, that common
initializations such as the Xavier or the He schemes are not optimal for
residual networks, because the optimal initialization variances depend on the
depth. Finally, we have made mathematical contributions by deriving several new
identities for the kernels of powers of ReLU functions by relating them to the
zeroth Bessel function of the second kind.
| 1 | 1 | 0 | 0 | 0 | 0 |
Automorphisms and deformations of conformally Kähler, Einstein-Maxwell metrics | We obtain a structure theorem for the group of holomorphic automorphisms of a
conformally Kähler, Einstein-Maxwell metric, extending the classical results
of Matsushima, Lichnerowicz and Calabi in the Kähler-Einstein, cscK, and
extremal Kähler cases. Combined with previous results of LeBrun,
Apostolov-Maschler and Futaki-Ono, this completes the classification of the
conformally Kähler, Einstein--Maxwell metrics on $\mathbb{CP}^1 \times
\mathbb{CP}^1$. We also use our result in order to introduce a (relative)
Mabuchi energy in the more general context of $(K, q, a)$-extremal Kähler
metrics in a given Kähler class, and show that the existence of $(K, q,
a)$-extremal Kähler metrics is stable under small deformation of the Kähler
class, the Killing vector field $K$ and the normalization constant $a$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Human-Level Intelligence or Animal-Like Abilities? | The vision systems of the eagle and the snake outperform everything that we
can make in the laboratory, but snakes and eagles cannot build an eyeglass or a
telescope or a microscope. (Judea Pearl)
| 1 | 0 | 0 | 1 | 0 | 0 |
Wembedder: Wikidata entity embedding web service | I present a web service for querying an embedding of entities in the Wikidata
knowledge graph. The embedding is trained on the Wikidata dump using Gensim's
Word2Vec implementation and a simple graph walk. A REST API is implemented.
Together with the Wikidata API the web service exposes a multilingual resource
for over 600,000 Wikidata items and properties.
| 1 | 0 | 0 | 1 | 0 | 0 |
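A minimal sketch of the training recipe: generate simple random walks over a (toy) knowledge graph and feed them to Gensim's Word2Vec. The toy adjacency lists below are illustrative; Wembedder itself trains on the full Wikidata dump:

```python
import random
from gensim.models import Word2Vec

# toy knowledge graph over Wikidata-style identifiers (illustrative edges)
graph = {
    "Q42":    ["Q5", "Q36180"],
    "Q5":     ["Q42", "Q937"],
    "Q937":   ["Q5", "Q36180"],
    "Q36180": ["Q42", "Q937"],
}

def random_walks(graph, walks_per_node=20, walk_length=8, seed=0):
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_length - 1):
                node = rng.choice(graph[node])
                walk.append(node)
            walks.append(walk)
    return walks

model = Word2Vec(sentences=random_walks(graph), vector_size=16,
                 window=3, min_count=1, sg=1, epochs=10)
print(model.wv.most_similar("Q42", topn=2))
```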
Meta-Learning for Contextual Bandit Exploration | We describe MELEE, a meta-learning algorithm for learning a good exploration
policy in the interactive contextual bandit setting. Here, an algorithm must
take actions based on contexts, and learn based only on a reward signal from
the action taken, thereby generating an exploration/exploitation trade-off.
MELEE addresses this trade-off by learning a good exploration strategy for
offline tasks based on synthetic data, on which it can simulate the contextual
bandit setting. Based on these simulations, MELEE uses an imitation learning
strategy to learn a good exploration policy that can then be applied to true
contextual bandit tasks at test time. We compare MELEE to seven strong baseline
contextual bandit algorithms on a set of three hundred real-world datasets, on
which it outperforms alternatives in most settings, especially when differences
in rewards are large. Finally, we demonstrate the importance of having a rich
feature representation for learning how to explore.
| 1 | 0 | 0 | 1 | 0 | 0 |
Population polarization dynamics and next-generation social media algorithms | We present a many-body theory that explains and reproduces recent
observations of population polarization dynamics, is supported by controlled
human experiments, and addresses the controversy surrounding the Internet's
impact. It predicts that whether and how a population becomes polarized is
dictated by the nature of the underlying competition, rather than the validity
of the information that individuals receive or their online bubbles. Building
on this framework, we show that next-generation social media algorithms aimed
at pulling people together will instead likely lead to an explosive
percolation process that generates new pockets of extremes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ride Sharing and Dynamic Networks Analysis | The potential of an efficient ride-sharing scheme to significantly reduce
traffic congestion, lower emission levels, and facilitate the introduction of
smart cities has been widely demonstrated. This positive thrust
however is faced with several delaying factors, one of which is the volatility
and unpredictability of the potential benefit (or utilization) of ride-sharing
at different times, and in different places. In this work the following
research questions are posed: (a) Is ride-sharing utilization stable over time
or does it undergo significant changes? (b) If ride-sharing utilization is
dynamic, can it be correlated with some traceable features of the traffic? and
(c) If ride-sharing utilization is dynamic, can it be predicted ahead of time?
We analyze a dataset of over 14 million taxi trips taken in New York City. We
propose a dynamic travel network approach for modeling and forecasting the
potential ride-sharing utilization over time, showing it to be highly volatile.
In order to model the utilization's dynamics we propose a network-centric
approach, projecting the aggregated traffic taken from continuous time periods
into a feature space comprised of topological features of the network implied
by this traffic. This feature space is then used to model the dynamics of
ride-sharing utilization over time. The results of our analysis demonstrate the
significant volatility of ride-sharing utilization over time, indicating that
any policy, design or plan that disregards this aspect and chooses a static
paradigm would undoubtedly be either highly inefficient or provide insufficient
resources. We show that using our suggested approach it is possible to model
the potential utilization of ride sharing based on the topological properties
of the rides network. We also show that using this method the potential
utilization can be forecast a few hours ahead of time.
| 1 | 1 | 0 | 0 | 0 | 0 |
Gas Adsorption and Dynamics in Pillared Graphene Frameworks | Pillared Graphene Frameworks are a novel class of microporous materials made
by graphene sheets separated by organic spacers. One of their main features is
that the pillar type and density can be chosen to tune the material properties.
In this work, we present a computer simulation study of adsorption and dynamics
of H$_{2}$, CH$_{4}$, CO$_{2}$, N$_{2}$ and O$_{2}$ and binary mixtures
thereof, in Pillared Graphene Frameworks with nitrogen-containing organic
spacers. In general, we find that pillar density plays the most important role
in determining gas adsorption. In the low-pressure regime (< 10 bar) the amount
of gas adsorbed is an increasing function of pillar density. At higher
pressures the opposite trend is observed. Diffusion coefficients were computed
for representative structures taking into account the framework flexibility
that is essential in assessing the dynamical properties of the adsorbed gases.
Good performance for the gas separation in CH$_{4}$/H$_{2}$, CO$_{2}$/H$_{2}$
and CO$_{2}$/N$_{2}$ mixtures was found with values comparable to those of
metal-organic frameworks and zeolites.
| 0 | 1 | 0 | 0 | 0 | 0 |
DepthSynth: Real-Time Realistic Synthetic Data Generation from CAD Models for 2.5D Recognition | Recent progress in computer vision has been dominated by deep neural networks
trained over large amounts of labeled data. Collecting such datasets is however
a tedious, often impossible task; hence a surge in approaches relying solely on
synthetic data for their training. For depth images however, discrepancies with
real scans still noticeably affect the end performance. We thus propose an
end-to-end framework which simulates the whole mechanism of these devices,
generating realistic depth data from 3D models by comprehensively modeling
vital factors e.g. sensor noise, material reflectance, surface geometry. Not
only does our solution cover a wider range of sensors and achieve more
realistic results than previous methods, assessed through extended evaluation,
but we go further by measuring the impact on the training of neural networks
for various recognition tasks; demonstrating how our pipeline seamlessly
integrates such architectures and consistently enhances their performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Short Laws for Finite Groups and Residual Finiteness Growth | We prove that for every $n \in \mathbb{N}$ and $\delta>0$ there exists a word
$w_n \in F_2$ of length $n^{2/3} \log(n)^{3+\delta}$ which is a law for every
finite group of order at most $n$. This improves upon the main result of [A.
Thom, About the length of laws for finite groups, Isr. J. Math.]. As an
application we prove a new lower bound on the residual finiteness growth of
non-abelian free groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Majorana Spin Liquids, Topology and Superconductivity in Ladders | We theoretically address spin chain analogs of the Kitaev quantum spin model
on the honeycomb lattice. The emergent quantum spin liquid phases or Anderson
resonating valence bond (RVB) states can be understood, as an effective model,
in terms of p-wave superconductivity and Majorana fermions. We derive a
generalized phase diagram for the two-leg ladder system with tunable
interaction strengths between chains allowing us to vary the shape of the
lattice (from square to honeycomb ribbon or brickwall ladder). We evaluate the
winding number associated with possible emergent (topological) gapless modes at
the edges. In the $A_z$ phase, as a result of the emergent $\mathbb{Z}_2$ gauge fields and
$\pi$-flux ground state, one may build spin-1/2 (loop) qubit operators by analogy
to the toric code. In addition, we show how the intermediate gapless B phase
evolves in the generalized ladder model. For the brickwall ladder, the $B$
phase is reduced to one line, which is analyzed through perturbation theory in
a rung tensor product states representation and bosonization. Finally, we show
that doping with a few holes can result in the formation of hole pairs and
leads to a mapping with the Su-Schrieffer-Heeger model in polyacetylene; a
superconducting-insulating quantum phase transition for these hole pairs is
accessible, as well as related topological properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
Conditional Mean and Quantile Dependence Testing in High Dimension | Motivated by applications in biological science, we propose a novel test to
assess the conditional mean dependence of a response variable on a large number
of covariates. Our procedure is built on the martingale difference divergence
recently proposed in Shao and Zhang (2014), and it is able to detect a certain
type of departure from the null hypothesis of conditional mean independence
without making any specific model assumptions. Theoretically, we establish the
asymptotic normality of the proposed test statistic under suitable assumptions
on the eigenvalues of a Hermitian operator, which is constructed based on the
characteristic function of the covariates. These conditions can be simplified
under a banded dependence structure on the covariates or a Gaussian design. To
account for heterogeneity within the data, we further develop a testing
procedure for conditional quantile independence at a given quantile level and
provide an asymptotic justification. Empirically, our test of conditional mean
independence delivers comparable results to the competitor, which was
constructed under the linear model framework, when the underlying model is
linear. It significantly outperforms the competitor when the conditional mean
admits a nonlinear form.
| 0 | 0 | 1 | 1 | 0 | 0 |
Finding low-tension communities | Motivated by applications that arise in online social media and collaboration
networks, there has been a lot of work on community-search and team-formation
problems. In the former class of problems, the goal is to find a subgraph that
satisfies a certain connectivity requirement and contains a given collection of
seed nodes. In the latter class of problems, on the other hand, the goal is to
find individuals who collectively have the skills required for a task and form
a connected subgraph with certain properties.
In this paper, we extend both the community-search and the team-formation
problems by associating each individual with a profile. The profile is a
numeric score that quantifies the position of an individual with respect to a
topic. We adopt a model where each individual starts with a latent profile and
arrives at a conformed profile through a dynamic conformation process, which
takes into account the individual's social interaction and the tendency to
conform with one's social environment. In this framework, social tension arises
from the differences between the conformed profiles of neighboring individuals
as well as from differences between individuals' conformed and latent profiles.
Given a network of individuals, their latent profiles and this conformation
process, we extend the community-search and the team-formation problems by
requiring the output subgraphs to have low social tension. From the technical
point of view, we study the complexity of these problems and propose algorithms
for solving them effectively. Our experimental evaluation in a number of social
networks reveals the efficacy and efficiency of our methods.
| 1 | 1 | 0 | 0 | 0 | 0 |
Cohomology of the flag variety under PBW degenerations | PBW degenerations are a particularly nice family of flat degenerations of
type A flag varieties. We show that the cohomology of any PBW degeneration of
the flag variety surjects onto the cohomology of the original flag variety, and
that this holds in an equivariant setting too. We also prove that the same is
true in the symplectic setting when considering Feigin's linear degeneration of
the symplectic flag variety.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exact MAP inference in general higher-order graphical models using linear programming | This paper is concerned with the problem of exact MAP inference in general
higher-order graphical models by means of a traditional linear programming
relaxation approach. The proof we develop is a rather simple algebraic one,
made straightforward above all by the introduction of two novel algebraic
tools. On the one hand, we introduce the notion of a delta-distribution, which
is simply the difference of two arbitrary probability distributions and mainly
serves to remove the sign constraint inherent in a traditional probability
distribution. On the other hand, we develop an approximation framework for
general discrete functions by means of an orthogonal projection expressed in
terms of linear combinations of function margins with respect to a given
collection of point subsets; we exploit this approach chiefly to model locally
consistent sets of discrete functions from a global perspective. We then
proceed in two steps: first, we develop from scratch the expectation
optimization framework, which is nothing other than a stochastic reformulation
of the convex-hull approach; second, we develop the traditional LP relaxation
of this expectation optimization approach and show that it solves the MAP
inference problem in graphical models under rather general assumptions. Last
but not least, we describe an algorithm that computes an exact MAP solution
from a possibly fractional optimal (probability) solution of the proposed LP
relaxation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dissolution of topological Fermi arcs in a dirty Weyl semimetal | Weyl semimetals (WSMs) have recently attracted a great deal of attention as
they provide condensed matter realization of chiral anomaly, feature
topologically protected Fermi arc surface states and sustain sharp chiral Weyl
quasiparticles up to a critical disorder at which a continuous quantum phase
transition (QPT) drives the system into a metallic phase. We here numerically
demonstrate that with increasing strength of disorder the Fermi arc gradually
loses its sharpness, and close to the WSM-metal QPT it completely dissolves
into the metallic bath of the bulk. Predicted topological nature of the
WSM-metal QPT and the resulting bulk-boundary correspondence across this
transition can directly be observed in
angle-resolved photoemission spectroscopy (ARPES) and Fourier-transformed
scanning tunneling microscopy (STM) measurements by following the continuous
deformation of the Fermi arcs with increasing disorder in recently discovered
Weyl materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Performance Improvement in Noisy Linear Consensus Networks with Time-Delay | We analyze performance of a class of time-delay first-order consensus
networks from a graph topological perspective and present methods to improve
it. The performance is measured by the square of the network's $\mathcal{H}_2$ norm,
which is shown to be a convex function of the Laplacian eigenvalues and the coupling weights
of the underlying graph of the network. First, we propose a tight convex, but
simple, approximation of the performance measure in order to achieve lower
complexity in our design problems by eliminating the need for
eigen-decomposition. The effect of time-delay manifests itself in the form
of non-monotonicity, which results in nonintuitive behavior of the performance
as a function of graph topology. Next, we present three methods to improve the
performance by growing, re-weighting, or sparsifying the underlying graph of
the network. It is shown that our suggested algorithms provide near-optimal
solutions with lower complexity than existing methods in the literature.
| 1 | 0 | 0 | 0 | 0 | 0 |
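For orientation, in the delay-free case the squared $\mathcal{H}_2$ norm of a noisy first-order consensus network reduces to the standard spectral sum below; the delayed performance measure discussed above is likewise a function of the Laplacian eigenvalues but, due to the delay, is no longer monotone in them (this is the textbook delay-free identity, not the paper's delayed formula):

```latex
\[
  \|G\|_{\mathcal H_2}^{2} \;=\; \sum_{i=2}^{n} \frac{1}{2\lambda_i},
  \qquad 0 = \lambda_1 < \lambda_2 \le \dots \le \lambda_n,
\]
```

where the $\lambda_i$ are the eigenvalues of the weighted graph Laplacian.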
Use of First and Third Person Views for Deep Intersection Classification | We explore the problem of intersection classification using monocular
on-board passive vision, with the goal of classifying traffic scenes with
respect to road topology. We divide the existing approaches into two broad
categories according to the type of input data: (a) first person vision (FPV)
approaches, which use an egocentric view sequence as the intersection is
passed; and (b) third person vision (TPV) approaches, which use a single view
immediately before entering the intersection. The FPV and TPV approaches each
have advantages and disadvantages. Therefore, we aim to combine them into a
unified deep learning framework. Experimental results show that the proposed
FPV-TPV scheme outperforms previous methods and only requires minimal FPV/TPV
measurements.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Liouville heat kernel for k-coarse MBRW and nonuniversality | We study the Liouville heat kernel (in the $L^2$ phase) associated with a
class of logarithmically correlated Gaussian fields on the two dimensional
torus. We show that for each $\varepsilon>0$ there exists such a field, whose
covariance is a bounded perturbation of that of the two dimensional Gaussian
free field, and such that the associated Liouville heat kernel satisfies the
short time estimates, $$ \exp \left( - t^{ - \frac 1 { 1 + \frac 1 2 \gamma^2 }
- \varepsilon } \right) \le p_t^\gamma (x, y) \le \exp \left( - t^{- \frac 1 {
1 + \frac 1 2 \gamma^2 } + \varepsilon } \right) , $$ for $\gamma<1/2$. In
particular, these are different from predictions, due to Watabiki, concerning
the Liouville heat kernel for the two dimensional Gaussian free field.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bounding the convergence time of local probabilistic evolution | Isoperimetric inequalities form a very intuitive yet powerful
characterization of the connectedness of a state space, that has proven
successful in obtaining convergence bounds. Since the seventies they form an
essential tool in differential geometry, graph theory and Markov chain
analysis. In this paper we use isoperimetric inequalities to construct a bound
on the convergence time of any local probabilistic evolution that leaves its
limit distribution invariant. We illustrate how this general result leads to
new bounds on convergence times beyond the explicit Markovian setting, among
others on quantum dynamics.
| 1 | 0 | 0 | 0 | 0 | 0 |
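The classical Markov-chain instance of such an isoperimetric convergence bound is the Cheeger-type estimate below, stated for a reversible, lazy chain with conductance $\Phi$, spectral gap $\lambda$ and stationary distribution $\pi$; the paper's contribution is a bound of this flavor beyond the explicitly Markovian setting:

```latex
\[
  \lambda \;\ge\; \frac{\Phi^{2}}{2},
  \qquad
  t_{\mathrm{mix}}(\varepsilon) \;\le\; \frac{1}{\lambda}
  \log\!\Big(\frac{1}{\varepsilon\,\pi_{\min}}\Big)
  \;\le\; \frac{2}{\Phi^{2}}
  \log\!\Big(\frac{1}{\varepsilon\,\pi_{\min}}\Big).
\]
```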
Excitable behaviors | This chapter revisits the concept of excitability, a basic system property of
neurons. The focus is on excitable systems regarded as behaviors rather than
dynamical systems. By this we mean open systems modulated by specific
interconnection properties rather than closed systems classified by their
parameter ranges. Modeling, analysis, and synthesis questions can be formulated
in the classical language of circuit theory. The input-output characterization
of excitability is in terms of the local sensitivity of the current-voltage
relationship. It suggests the formulation of novel questions for non-linear
system theory, inspired by questions from experimental neurophysiology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Laplacian solitons: questions and homogeneous examples | We give the first examples of closed Laplacian solitons which are shrinking,
and in particular produce closed Laplacian flow solutions with a finite-time
singularity. Extremally Ricci pinched G2-structures (introduced by Bryant)
which are steady Laplacian solitons have also been found. All the examples are
left-invariant G2-structures on solvable Lie groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Markoff Group of Transformations in Prime and Composite Moduli | The Markoff group of transformations is a group $\Gamma$ of affine integral
morphisms, which is known to act transitively on the set of all positive
integer solutions to the equation $x^{2}+y^{2}+z^{2}=xyz$. The fundamental
strong approximation conjecture for the Markoff equation states that for every
prime $p$, the group $\Gamma$ acts transitively on the set
$X^{*}\left(p\right)$ of non-zero solutions to the same equation over
$\mathbb{Z}/p\mathbb{Z}$. Recently, Bourgain, Gamburd and Sarnak proved this
conjecture for all primes outside a small exceptional set.
In the current paper, we study a group of permutations obtained by the action
of $\Gamma$ on $X^{*}\left(p\right)$, and show that for most primes, it is the
full symmetric or alternating group. We use this result to deduce that $\Gamma$
acts transitively also on the set of non-zero solutions in a big class of
composite moduli.
Our result is also related to a well-known theorem of Gilman, stating that
for any finite non-abelian simple group $G$ and $r\ge3$, the group
$\mathrm{Aut}\left(F_{r}\right)$ acts on at least one $T_{r}$-system of $G$ as
the alternating or symmetric group. In this language, our main result
translates to the statement that for most primes $p$, the group
$\mathrm{Aut}\left(F_{2}\right)$ acts on a particular $T_{2}$-system of
$\mathrm{PSL}\left(2,p\right)$ as the alternating or symmetric group.
| 0 | 0 | 1 | 0 | 0 | 0 |
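Transitivity of $\Gamma$ on $X^{*}(p)$ can be checked directly for small primes: the action is generated (up to sign conventions) by coordinate permutations and the Vieta involutions, which replace one coordinate by the other root of the quadratic it satisfies. A brute-force sketch, illustrative only since the $O(p^{3})$ enumeration is feasible only for small $p$:

```python
from itertools import product

def markoff_orbits(p):
    """Orbits of the Markoff-group action on non-zero solutions of
    x^2 + y^2 + z^2 = xyz over Z/pZ, generated here by the three Vieta
    moves together with coordinate swaps."""
    X = {(x, y, z) for x, y, z in product(range(p), repeat=3)
         if (x*x + y*y + z*z - x*y*z) % p == 0 and (x, y, z) != (0, 0, 0)}
    orbits, seen = [], set()
    for start in X:
        if start in seen:
            continue
        orbit, stack = {start}, [start]
        while stack:
            x, y, z = stack.pop()
            for nxt in (((y*z - x) % p, y, z),   # Vieta move on x
                        (x, (x*z - y) % p, z),   # Vieta move on y
                        (x, y, (x*y - z) % p),   # Vieta move on z
                        (y, x, z), (x, z, y)):   # transpositions
                if nxt not in orbit:
                    orbit.add(nxt)
                    stack.append(nxt)
        seen |= orbit
        orbits.append(orbit)
    return orbits

# Transitivity on X*(p) is equivalent to a single orbit:
# len(markoff_orbits(p)) == 1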
The equivariant index of twisted dirac operators and semi-classical limits | Consider a spin manifold M, equipped with a line bundle L and an action of a
compact Lie group G. We can attach to this data a family $\Theta(k)$ of
distributions on the dual of the Lie algebra of G. The aim of this paper is to
study the asymptotic behaviour of $\Theta(k)$ when k is large and M is possibly
non-compact, and to explore a functorial consequence of the resulting formula for reduced
spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Proportionally Representative Participatory Budgeting: Axioms and Algorithms | Participatory budgeting is one of the exciting developments in deliberative
grassroots democracy. We concentrate on approval elections and propose
proportional representation axioms in participatory budgeting, by generalizing
relevant axioms for approval-based multi-winner elections. We observe a rich
landscape with respect to the computational complexity of identifying and
computing proportional budgets, and present budgeting methods that satisfy
these axioms by identifying budgets that are representative of the demands of
large segments of the voters.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Arctic Ocean seasonal cycles of heat and freshwater fluxes: observation-based inverse estimates | This paper presents the first estimate of the seasonal cycle of ocean and sea
ice net heat and freshwater (FW) fluxes around the boundary of the Arctic
Ocean. The ocean transports are estimated primarily using 138 moored
instruments deployed from September 2005 to August 2006 across the four main
Arctic gateways: Davis, Fram and Bering Straits, and the Barents Sea Opening
(BSO). Sea ice transports are estimated from a sea ice assimilation product.
Monthly velocity fields are calculated with a box inverse model that enforces
volume and salinity conservation. The resulting net ocean and sea ice heat and
FW fluxes (annual mean $\pm$ 1 standard deviation) are 175 $\pm$48 TW and 204
$\pm$85 mSv (respectively; 1 Sv = 10$^{6}$ m$^{3}$ s$^{-1}$). These boundary fluxes
accurately represent the annual means of the relevant surface fluxes. Oceanic
net heat transport variability is driven by temperature variability in the upper
part of the water column and by volume transport variability in the Atlantic
Water layer. Oceanic net FW transport variability is dominated by Bering Strait
velocity variability. The net water mass transformation in the Arctic entails a
freshening and cooling of inflowing waters by 0.62$\pm$0.23 in salinity and
3.74$\pm$0.76$^{\circ}$C in temperature, respectively, and a reduction in density by
0.23$\pm$0.20 kg m$^{-3}$. The volume transport into the Arctic of waters
associated with this water mass transformation is 11.3$\pm$1.2 Sv, and the
export is -11.4$\pm$1.1 Sv. The boundary heat and FW fluxes provide a benchmark
data set for the validation of numerical models and atmospheric reanalysis
products.
| 0 | 1 | 0 | 0 | 0 | 0 |
Standard Zero-Free Regions for Rankin--Selberg L-Functions via Sieve Theory | We give a simple proof of a standard zero-free region in the $t$-aspect for
the Rankin--Selberg $L$-function $L(s,\pi \times \widetilde{\pi})$ for any
unitary cuspidal automorphic representation $\pi$ of
$\mathrm{GL}_n(\mathbb{A}_F)$ that is tempered at every nonarchimedean place
outside a set of Dirichlet density zero.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mass transfer in asymptotic-giant-branch binary systems | Binary stars can interact via mass transfer when one member (the primary)
ascends onto a giant branch. The amount of gas ejected by the binary and the
amount of gas accreted by the secondary over the lifetime of the primary
influence the subsequent binary phenomenology. Some of the gas ejected by the
binary will remain gravitationally bound and its distribution will be closely
related to the formation of planetary nebulae. We investigate the nature of
mass transfer in binary systems containing an AGB star by adding radiative
transfer to the AstroBEAR AMR Hydro/MHD code.
| 0 | 1 | 0 | 0 | 0 | 0 |
On polar relative normalizations of ruled surfaces | This paper deals with skew ruled surfaces in the Euclidean space
$\mathbb{E}^{3}$ which are equipped with polar normalizations, that is,
relative normalizations such that the relative normal at each point of the
ruled surface lies on the corresponding polar plane. We determine the
invariants of such a normalized ruled surface and study some properties of
the Tchebychev vector field and the support vector field of a polar
normalization. Furthermore, we study a special polar normalization, the
relative image of which degenerates into a curve.
| 0 | 0 | 1 | 0 | 0 | 0 |
Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles | Ultrafast X-ray imaging provides high resolution information on individual
fragile specimens such as aerosols, metastable particles, superfluid quantum
systems and live biospecimens, which is inaccessible with conventional imaging
techniques. Coherent X-ray diffractive imaging, however, suffers from intrinsic
loss of phase, and therefore structure recovery is often complicated and not
always uniquely defined. Here, we introduce the method of in-flight holography,
where we use nanoclusters as reference X-ray scatterers in order to encode
relative phase information into diffraction patterns of a virus. The resulting
hologram contains an unambiguous three-dimensional map of a virus and two
nanoclusters with the highest lateral resolution so far achieved via single
shot X-ray holography. Our approach unlocks the benefits of holography for
ultrafast X-ray imaging of nanoscale, non-periodic systems and paves the way to
direct observation of complex electron dynamics down to the attosecond time
scale.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Rank Effect | We decompose returns for portfolios of bottom-ranked, lower-priced assets
relative to the market into rank crossovers and changes in the relative price
of those bottom-ranked assets. This decomposition is general and consistent
with virtually any asset pricing model. Crossovers measure changes in rank and
are smoothly increasing over time, while return fluctuations are driven by
volatile relative price changes. Our results imply that in a closed,
dividend-free market in which the relative price of bottom-ranked assets is
approximately constant, a portfolio of those bottom-ranked assets will
outperform the market portfolio over time. We show that bottom-ranked relative
commodity futures prices have increased only slightly, and confirm the
existence of substantial excess returns predicted by our theory. If these
excess returns did not exist, then top-ranked relative prices would have had to
be much higher in 2018 than those actually observed -- this would imply a
radically different commodity price distribution.
| 0 | 0 | 0 | 0 | 0 | 1 |
The Generalized Label Correcting Method for Optimal Kinodynamic Motion Planning | Nearly all autonomous robotic systems use some form of motion planning to
compute reference motions through their environment. An increasing use of
autonomous robots in a broad range of applications creates a need for
efficient, general purpose motion planning algorithms that are applicable in
any of these new application domains.
This thesis presents a resolution complete optimal kinodynamic motion
planning algorithm based on a direct forward search of the set of admissible
input signals to a dynamical model. The advantage of this generalized label
correcting method is that it does not require a local planning subroutine as in
the case of related methods.
Preliminary material focuses on new topological properties of the canonical
problem formulation that are used to show continuity of the performance
objective. These observations are used to derive a generalization of Bellman's
principle of optimality in the context of kinodynamic motion planning. A
generalized label correcting algorithm is then proposed which leverages these
results to prune candidate input signals from the search when their cost is
greater than related signals.
The second part of this thesis addresses admissible heuristics for
kinodynamic motion planning. An admissibility condition is derived that can be
used to verify the admissibility of candidate heuristics for a particular
problem. This condition also characterizes a convex set of admissible
heuristics.
A linear program is formulated to obtain a heuristic which is as close to the
optimal cost-to-go as possible while remaining admissible. This optimization is
justified by showing its solution coincides with the solution to the
Hamilton-Jacobi-Bellman equation. Lastly, a sum-of-squares relaxation of this
infinite-dimensional linear program is proposed for obtaining provably
admissible approximate solutions.
| 1 | 0 | 0 | 0 | 0 | 0 |
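A minimal reading of the generalized label correcting idea is a best-first search over input-signal prefixes in which a candidate is pruned whenever another signal reaching the same region of a state-space grid has lower cost. The sketch below is schematic: `step`, `cost`, `goal` and `grid` are assumed problem-supplied callables, not part of the thesis' code.

```python
import heapq
from itertools import count

def glc_search(x0, inputs, step, cost, goal, grid, max_depth):
    """Best-first search over input sequences with label-correcting pruning:
    grid(x) returns a hashable label for the region containing state x."""
    tie = count()                        # tie-breaker so the heap never
    best = {}                            # has to compare raw states
    frontier = [(0.0, next(tie), x0, ())]
    while frontier:
        c, _, x, seq = heapq.heappop(frontier)
        if goal(x):
            return seq, c
        if len(seq) >= max_depth:
            continue
        for u in inputs:
            x2, c2 = step(x, u), c + cost(x, u)
            label = grid(x2)
            if c2 < best.get(label, float("inf")):  # prune dominated signals
                best[label] = c2
                heapq.heappush(frontier, (c2, next(tie), x2, seq + (u,)))
    return None, float("inf")
```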
Learning to Compose Task-Specific Tree Structures | For years, recursive neural networks (RvNNs) have been shown to be suitable
for representing text as fixed-length vectors and have achieved good performance
on several natural language processing tasks. However, the main drawback of
RvNNs is that they require structured input, which makes data preparation and
model implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel
tree-structured long short-term memory architecture that learns how to compose
task-specific tree structures only from plain text data efficiently. Our model
uses the Straight-Through Gumbel-Softmax estimator to decide the parent node among
candidates dynamically and to calculate gradients of the discrete decision. We
evaluate the proposed model on natural language inference and sentiment
analysis, and show that our model outperforms or is at least comparable to
previous models. We also find that our model converges significantly faster
than other models.
| 1 | 0 | 0 | 0 | 0 | 0 |
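The Straight-Through Gumbel-Softmax estimator named above uses a discrete one-hot sample in the forward pass while letting gradients flow through the soft relaxation. A self-contained sketch (PyTorch also ships this as `torch.nn.functional.gumbel_softmax(..., hard=True)`):

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Forward: one-hot argmax of Gumbel-perturbed logits.
    Backward: gradient of the softmax relaxation (straight-through)."""
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    return y_hard - y_soft.detach() + y_soft  # y_hard forward, y_soft grads
```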
Navier-Stokes flow past a rigid body: attainability of steady solutions as limits of unsteady weak solutions, starting and landing cases | Consider the Navier-Stokes flow in 3-dimensional exterior domains, where a
rigid body is translating with prescribed translational velocity
$-h(t)u_\infty$ with constant vector $u_\infty\in \mathbb R^3\setminus\{0\}$.
Finn raised the question of whether his steady solutions are attainable as limits
for $t\to\infty$ of unsteady solutions starting from a motionless state when
$h(t)=1$ after some finite time and $h(0)=0$ (starting problem). This was
affirmatively solved by Galdi, Heywood and Shibata for small $u_\infty$. We
study a generalized situation in which unsteady solutions start from large
motions in $L^3$. We then conclude that the steady solutions for small
$u_\infty$ are still attainable as limits of evolution of those fluid motions
which are found as a sort of weak solutions. The opposite situation, in which
$h(t)=0$ after some finite time and $h(0)=1$ (landing problem), is also
discussed. In this latter case, the rest state is attainable no matter how
large $u_\infty$ is.
| 0 | 0 | 1 | 0 | 0 | 0 |
Flux cost functions and the choice of metabolic fluxes | Metabolic fluxes in cells are governed by physical, biochemical,
physiological, and economic principles. Cells may show "economical" behaviour,
trading metabolic performance against the costly side-effects of high enzyme or
metabolite concentrations. Some constraint-based flux prediction methods score
fluxes by heuristic flux costs as proxies of enzyme investments. However,
linear cost functions ignore enzyme kinetics and the tight coupling between
fluxes, metabolite levels and enzyme levels. To derive more realistic cost
functions, I define an apparent "enzymatic flux cost" as the minimal enzyme
cost at which the fluxes can be realised in a given kinetic model, and a
"kinetic flux cost", which includes metabolite cost. I discuss the mathematical
properties of such flux cost functions, their usage for flux prediction, and
their importance for cells' metabolic strategies. The enzymatic flux cost
scales linearly with the fluxes and is a concave function on the flux polytope.
The costs of two flows are usually not additive, due to an additional
"compromise cost". Between flux polytopes, where fluxes change their
directions, the enzymatic cost shows a jump. With strictly concave flux cost
functions, cells can reduce their enzymatic cost by running different fluxes in
different cell compartments or at different moments in time. The enzymatic
flux cost can be translated into an approximated cell growth rate, a convex
function on the flux polytope. Growth-maximising metabolic states can be
predicted by Flux Cost Minimisation (FCM), a variant of FBA based on general
flux cost functions. The solutions are flux distributions in corners of the
flux polytope, i.e. typically elementary flux modes. Enzymatic flux costs can
be linearly or nonlinearly approximated, providing model parameters for linear
FBA based on kinetic parameters and extracellular concentrations, and justified
by a kinetic model.
| 0 | 0 | 0 | 0 | 1 | 0 |
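The linearly approximated variant of Flux Cost Minimisation described above has the familiar LP shape sketched below (the stoichiometric matrix, cost vector and pinned exchange fluxes are assumed inputs; the genuinely concave enzymatic cost requires more than this LP):

```python
import numpy as np
from scipy.optimize import linprog

def linearised_fcm(S, costs, bounds, pinned):
    """Minimise a linear flux cost costs @ v subject to steady state
    S @ v = 0, flux bounds, and selected fluxes pinned to given values."""
    bnds = list(bounds)
    for i, val in pinned.items():
        bnds[i] = (val, val)                   # fix e.g. exchange fluxes
    res = linprog(costs, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bnds, method="highs")
    return res.x, res.fun
```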
Finding Archetypal Spaces for Data Using Neural Networks | Archetypal analysis is a type of factor analysis where data is fit by a
convex polytope whose corners are "archetypes" of the data, with the data
represented as a convex combination of these archetypal points. While
archetypal analysis has been used on biological data, it has not achieved
widespread adoption because most data are not well fit by a convex polytope in
either the ambient space or after standard data transformations. We propose a
new approach to archetypal analysis. Instead of fitting a convex polytope
directly on data or after a specific data transformation, we train a neural
network (AAnet) to learn a transformation under which the data can best fit
into a polytope. We validate this approach on synthetic data where we add
nonlinearity. Here, AAnet is the only method that correctly identifies the
archetypes. We also demonstrate AAnet on two biological datasets. In a T cell
dataset measured with single cell RNA-sequencing, AAnet identifies several
archetypal states corresponding to naive, memory, and cytotoxic T cells. In a
dataset of gut microbiome profiles, AAnet recovers both previously described
microbiome states and identifies novel extrema in the data. Finally, we show
that AAnet has generative properties allowing us to uniformly sample from the
data geometry even when the input data is not uniformly distributed.
| 1 | 0 | 0 | 1 | 0 | 0 |
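The defining constraint of archetypal analysis is that each data point is decoded as a convex combination of archetypes. A minimal numpy sketch of such a decoder (a softmax keeps the mixture weights on the simplex; the names and the softmax choice are illustrative, not the AAnet implementation):

```python
import numpy as np

def archetypal_decode(logits, archetypes):
    """Map latent mixture logits to data space as convex combinations
    of the rows of `archetypes` (shape: n_archetypes x n_features)."""
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # nonnegative, rows sum to one
    return w @ archetypes                # points inside the polytope
```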
Formally continuous functions on Baire space | A function from Baire space to the natural numbers is called formally
continuous if it is induced by a morphism between the corresponding formal
spaces. We compare formal continuity to two other notions of continuity on
Baire space working in Bishop constructive mathematics: one is a function
induced by a Brouwer-operation (i.e. inductively defined neighbourhood
function); the other is a function uniformly continuous near every compact
image. We show that formal continuity is equivalent to the former while it is
strictly stronger than the latter.
| 1 | 0 | 1 | 0 | 0 | 0 |
Universal Joint Image Clustering and Registration using Partition Information | We consider the problem of universal joint clustering and registration of
images and define algorithms using multivariate information functionals. We
first study registering two images using maximum mutual information and prove
its asymptotic optimality. We then show the shortcomings of pairwise
registration in multi-image registration, and design an asymptotically optimal
algorithm based on multi-information. Further, we define a novel multivariate
information functional to perform joint clustering and registration of images,
and prove consistency of the algorithm. Finally, we consider registration and
clustering of numerous limited-resolution images, defining algorithms that are
order-optimal in the scaling of the number of pixels in each image with the number of
images.
| 1 | 0 | 1 | 1 | 0 | 0 |
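The maximum-mutual-information registration step studied here can be illustrated with a brute-force search over integer shifts; the MI estimate below uses a simple joint histogram (a toy stand-in for the paper's procedure and its optimality analysis):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in MI estimate from a joint histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def register_by_shift(ref, moving, max_shift=8):
    """Return the circular (dx, dy) shift maximizing MI with `ref`."""
    best, best_mi = (0, 0), -np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            mi = mutual_information(ref, np.roll(moving, (dx, dy), axis=(0, 1)))
            if mi > best_mi:
                best_mi, best = mi, (dx, dy)
    return best, best_mi
```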
Quantitative statistical stability and speed of convergence to equilibrium for partially hyperbolic skew products | We consider a general relation between fixed point stability of suitably
perturbed transfer operators and convergence to equilibrium (a notion which is
strictly related to decay of correlations). We apply this relation to
deterministic perturbations of a class of (piecewise) partially hyperbolic skew
products whose behavior on the preserved fibration is dominated by the
expansion of the base map. In particular we apply the results to power law
mixing toral extensions. It turns out that in this case, the dependence of the
physical measure on small deterministic perturbations, in a suitable
anisotropic metric, is at least Hölder continuous, with an exponent which is
explicitly estimated depending on the arithmetical properties of the system. We
show explicit examples of toral extensions that actually have Hölder stability and
non-differentiable dependence of the physical measure on perturbations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Minimax Rényi Redundancy | The redundancy for universal lossless compression of discrete memoryless
sources in Campbell's setting is characterized as a minimax Rényi divergence,
which is shown to be equal to the maximal $\alpha$-mutual information via a
generalized redundancy-capacity theorem. Special attention is placed on the
analysis of the asymptotics of minimax Rényi divergence, which is determined
up to a term vanishing in blocklength.
| 1 | 0 | 1 | 0 | 0 | 0 |
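For reference, the Rényi divergence of order $\alpha$ and the (Sibson) $\alpha$-mutual information appearing above are, in one standard formulation:

```latex
\[
  D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha},
  \qquad
  I_{\alpha}(X; Y) = \min_{Q_Y} D_{\alpha}\big(P_{XY} \,\big\|\, P_X \times Q_Y\big).
\]
```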
DeepArchitect: Automatically Designing and Training Deep Architectures | In deep learning, performance is strongly affected by the choice of
architecture and hyperparameters. While there has been extensive work on
automatic hyperparameter optimization for simple spaces, complex spaces such as
the space of deep architectures remain largely unexplored. As a result, the
choice of architecture is done manually by the human expert through a slow
trial and error process guided mainly by intuition. In this paper we describe a
framework for automatically designing and training deep models. We propose an
extensible and modular language that allows the human expert to compactly
represent complex search spaces over architectures and their hyperparameters.
The resulting search spaces are tree-structured and therefore easy to traverse.
Models can be automatically compiled to computational graphs once values for
all hyperparameters have been chosen. We can leverage the structure of the
search space to introduce different model search algorithms, such as random
search, Monte Carlo tree search (MCTS), and sequential model-based optimization
(SMBO). We present experiments comparing the different algorithms on CIFAR-10
and show that MCTS and SMBO outperform random search. In addition, these
experiments show that our framework can be used effectively for model
discovery, as it is possible to describe expressive search spaces and discover
competitive models without much effort from the human expert. Code for our
framework and experiments has been made publicly available.
| 0 | 0 | 0 | 1 | 0 | 0 |
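One way to picture a tree-structured search space of the kind the paper's language describes is a nested structure of choice nodes; random search then just resolves each choice independently. A toy encoding as a nested dict of lists (an illustration, not the paper's language):

```python
import random

def sample_architecture(space):
    """Resolve every choice node (a list) in a nested search space."""
    if isinstance(space, dict):
        return {k: sample_architecture(v) for k, v in space.items()}
    if isinstance(space, list):
        return sample_architecture(random.choice(space))
    return space

space = {"n_conv_blocks": [1, 2, 3],
         "filters": [32, 64, 128],
         "classifier": [{"type": "mlp", "hidden": [128, 256]},
                        {"type": "linear"}]}
print(sample_architecture(space))   # one fully specified model
```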
Dynamical control of atoms with polarized bichromatic weak field | We propose ultranarrow dynamical control of population oscillation (PO)
between ground states through the polarization content of an input bichromatic
field. Appropriate engineering of classical interference between optical fields
results in PO arising exclusively from optical pumping. Contrary to the
expected broad spectral response associated with optical pumping, we obtain
subnatural linewidth in complete absence of quantum interference. The
ellipticity of the light polarizations can be used for temporal shaping of the
PO leading to generation of multiple sidebands even at low light level.
| 0 | 1 | 0 | 0 | 0 | 0 |
Cycle packings of the complete multigraph | Bryant, Horsley, Maenhaut and Smith recently gave necessary and sufficient
conditions for when the complete multigraph can be decomposed into cycles of
specified lengths $m_1,m_2,\ldots,m_\tau$. In this paper we characterise
exactly when there exists a packing of the complete multigraph with cycles of
specified lengths $m_1,m_2,\ldots,m_\tau$. While cycle decompositions can give
rise to packings by removing cycles from the decomposition, in general it is
not known when there exists a packing of the complete multigraph with cycles of
various specified lengths.
| 0 | 0 | 1 | 0 | 0 | 0 |
Statistical Inference for the Population Landscape via Moment Adjusted Stochastic Gradients | Modern statistical inference tasks often require iterative optimization
methods to approximate the solution. Convergence analysis from optimization
only tells us how well we are approximating the solution deterministically, but
overlooks the sampling nature of the data. However, due to the randomness in
the data, statisticians are keen to provide uncertainty quantification, or
confidence, for the answer obtained after certain steps of optimization.
Therefore, it is important yet challenging to understand the sampling
distribution of the iterative optimization methods.
This paper makes some progress along this direction by introducing a new
stochastic optimization method for statistical inference, the moment adjusted
stochastic gradient descent. We establish non-asymptotic theory that
characterizes the statistical distribution of the iterative methods, with good
optimization guarantee. On the statistical front, the theory allows for model
misspecification, with very mild conditions on the data. For optimization, the
theory is flexible for both the convex and non-convex cases. Remarkably, the
moment adjusting idea motivated from "error standardization" in statistics
achieves similar effect as Nesterov's acceleration in optimization, for certain
convex problems as in fitting generalized linear models. We also demonstrate
this acceleration effect in the non-convex setting through experiments.
| 0 | 0 | 1 | 1 | 0 | 0 |
Super cavity solitons and the coexistence of multiple nonlinear states in a tristable passive Kerr resonator | Passive Kerr cavities driven by coherent laser fields display a rich
landscape of nonlinear physics, including bistability, pattern formation, and
localised dissipative structures (solitons). Their conceptual simplicity has
for several decades offered an unprecedented window into nonlinear cavity
dynamics, providing insights into numerous systems and applications ranging
from all-optical memory devices to microresonator frequency combs. Yet despite
the decades of study, a recent theoretical study has surprisingly alluded to an
entirely new and unexplored paradigm in the regime where nonlinearly tilted
cavity resonances overlap with one another [T. Hansson and S. Wabnitz, J. Opt.
Soc. Am. B 32, 1259 (2015)]. We have used synchronously driven fiber ring
resonators to experimentally access this regime, and observed the rise of new
nonlinear dissipative states. Specifically, we have observed, for the first
time to the best of our knowledge, the stable coexistence of dissipative
(cavity) solitons and extended modulation instability (Turing) patterns, and
performed real time measurements that unveil the dynamics of the ensuing
nonlinear structures. When operating in the regime of continuous wave
tristability, we have further observed the coexistence of two distinct cavity
soliton states, one of which can be identified as a "super" cavity soliton as
predicted by Hansson and Wabnitz. Our experimental findings are in excellent
agreement with theoretical analyses and numerical simulations of the
infinite-dimensional Ikeda map that governs the cavity dynamics. The results
from our work reveal that experimental systems can support complex combinations
of distinct nonlinear states, and they could have practical implications to
future microresonator-based frequency comb sources.
| 0 | 1 | 0 | 0 | 0 | 0 |
Twisting and Mixing | We present a framework that connects three interesting classes of groups: the
twisted groups (also known as Suzuki-Ree groups), the mixed groups and the
exotic pseudo-reductive groups.
For a given characteristic p, we construct categories of twisted and mixed
schemes. Ordinary schemes are a full subcategory of the mixed schemes. Mixed
schemes arise from a twisted scheme by base change, although not every mixed
scheme arises this way. The group objects in these categories are called
twisted and mixed group schemes.
Our main theorems state: (1) The twisted Chevalley groups ${}^2\mathsf B_2$,
${}^2\mathsf G_2$ and ${}^2\mathsf F_4$ arise as rational points of twisted
group schemes. (2) The mixed groups in the sense of Tits arise as rational
points of mixed group schemes over mixed fields. (3) The exotic
pseudo-reductive groups of Conrad, Gabber and Prasad are Weil restrictions of
mixed group schemes.
| 0 | 0 | 1 | 0 | 0 | 0 |
KeyVec: Key-semantics Preserving Document Representations | Previous studies have demonstrated the empirical success of word embeddings
in various applications. In this paper, we investigate the problem of learning
distributed representations for text documents which many machine learning
algorithms take as input for a number of NLP tasks.
We propose a neural network model, KeyVec, which learns document
representations with the goal of preserving key semantics of the input text. It
enables the learned low-dimensional vectors to retain the topics and important
information from the documents that will flow to downstream tasks. Our
empirical evaluations show the superior quality of KeyVec representations in
two different document understanding tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Integer Factorization with a Neuromorphic Sieve | The bound to factor large integers is dominated by the computational effort
to discover numbers that are smooth, typically performed by sieving a
polynomial sequence. On a von Neumann architecture, sieving has log-log
amortized time complexity to check each value for smoothness. This work
presents a neuromorphic sieve that achieves a constant time check for
smoothness by exploiting two characteristic properties of neuromorphic
architectures: constant time synaptic integration and massively parallel
computation. The approach is validated by modifying msieve, one of the fastest
publicly available integer factorization implementations, to use the IBM
Neurosynaptic System (NS1e) as a coprocessor for the sieving stage.
| 1 | 0 | 0 | 0 | 0 | 0 |
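The smoothness check being accelerated can be stated in a few lines. A trial-division sketch of sieving a block of polynomial values over a factor base (real sievers add logarithmic approximations and many optimizations; this shows only the idea):

```python
def smooth_indices(values, factor_base):
    """Indices of (nonzero) values smooth over `factor_base`: divide out
    factor-base primes and keep entries whose residue collapses to 1."""
    residues = [abs(v) for v in values]
    for p in factor_base:
        for i in range(len(residues)):
            while residues[i] > 1 and residues[i] % p == 0:
                residues[i] //= p
    return [i for i, r in enumerate(residues) if r == 1]
```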
Probabilistic Assessment of PV-Battery System Impacts on LV Distribution Networks | The increasing uptake of residential batteries has led to suggestions that
the prevalence of batteries on LV networks will serendipitously mitigate the
technical problems induced by PV installations. However, in general, the
effects of PV-battery systems on LV networks have not been well studied. Given
this background, in this paper, we test the assertion that the uncoordinated
operation of batteries improves network performance. In order to carry out this
assessment, we develop a methodology for incorporating home energy management
(HEM) operational decisions within a Monte Carlo (MC) power flow analysis
comprising three parts. First, due to the unavailability of the large number of
load and PV traces required for MC analysis, we use a maximum a-posteriori
Dirichlet process to generate statistically representative synthetic profiles.
Second, a policy function approximation (PFA) that emulates the outputs of the
HEM solver is implemented to provide battery scheduling policies for a pool of
customers, making simulation of optimization-based HEM feasible within MC
studies. Third, the resulting net loads are used in a MC power flow time series
study. The efficacy of our method is shown on three typical LV feeders. Our
assessment finds that uncoordinated PV-battery systems have little beneficial
impact on LV networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simulating Cellular Communications in Vehicular Networks: Making SimuLTE Interoperable with Veins | The evolution of cellular technologies toward 5G progressively enables
efficient and ubiquitous communications in an increasing number of fields.
Among these, vehicular networks are being considered as one of the most
promising and challenging applications, requiring support for communications in
high-speed mobility and delay-constrained information exchange in proximity. In
this context, simulation frameworks under the OMNeT++ umbrella are already
available: SimuLTE and Veins for cellular and vehicular systems, respectively.
In this paper, we describe the modifications that make SimuLTE interoperable
with Veins and INET, which leverage the OMNeT++ paradigm, and allow us to
achieve our goal without any modification to either of the latter two. We
discuss the limitations of the previous solution, namely VeinsLTE, which
integrates all three in a single framework, thus preventing independent
evolution and upgrades of each building block.
| 1 | 0 | 0 | 0 | 0 | 0 |
AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks | Training deep neural networks with Stochastic Gradient Descent, or its
variants, requires careful choice of both learning rate and batch size. While
smaller batch sizes generally converge in fewer training epochs, larger batch
sizes offer more parallelism and hence better computational efficiency. We have
developed a new training approach that, rather than statically choosing a
single batch size for all epochs, adaptively increases the batch size during
the training process. Our method delivers the convergence rate of small batch
sizes while achieving performance similar to large batch sizes. We analyse our
approach using the standard AlexNet, ResNet, and VGG networks operating on the
popular CIFAR-10, CIFAR-100, and ImageNet datasets. Our results demonstrate
that learning with adaptive batch sizes can improve performance by factors of
up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1%
relative to training with fixed batch sizes.
| 1 | 0 | 0 | 1 | 0 | 0 |
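The core of such an approach can be as simple as a schedule that enlarges the batch at fixed epoch intervals while the learning rate is adjusted accordingly; a minimal sketch (the doubling interval, cap and base size are illustrative, not the paper's tuned values):

```python
def adaptive_batch_size(epoch, base=128, growth=2, interval=30, cap=8192):
    """Batch size that doubles every `interval` epochs, capped at `cap`."""
    return min(base * growth ** (epoch // interval), cap)

# e.g. epochs 0-29 -> 128, epochs 30-59 -> 256, epochs 60-89 -> 512, ...
```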
Extremely broadband ultralight thermally emissive metasurfaces | We report the design, fabrication and characterization of ultralight highly
emissive metaphotonic structures with record-low mass/area that emit thermal
radiation efficiently over a broad spectral (2 to 35 microns) and angular (0-60
degrees) range. The structures comprise one to three pairs of alternating
nanometer-scale metallic and dielectric layers, and have measured effective 300
K hemispherical emissivities of 0.7 to 0.9. To our knowledge, these structures,
which are all subwavelength in thickness are the lightest reported metasurfaces
with comparable infrared emissivity. The superior optical properties, together
with their mechanical flexibility, low outgassing, and low areal mass, suggest
that these metasurfaces are candidates for thermal management in applications
demanding of ultralight flexible structures, including aerospace applications,
ultralight photovoltaics, lightweight flexible electronics, and textiles for
thermal insulation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wright-Fisher diffusions for evolutionary games with death-birth updating | We investigate spatial evolutionary games with death-birth updating in large
finite populations. Within growing spatial structures subject to appropriate
conditions, the density processes of a fixed type are proven to converge to the
Wright-Fisher diffusions with drift. In addition, convergence in the
Wasserstein distance of the laws of their occupation measures holds. The proofs
of these results develop along an equivalence between the laws of the
evolutionary games and certain voter models and rely on the analogous results
of voter models on large finite sets by convergences of the Radon-Nikodym
derivative processes. As another application of this equivalence of laws, we
show that in a general, large population of size $N$, for which the stationary
probabilities of the corresponding voting kernel are comparable to uniform
probabilities, a first-derivative test among the major methods for these
evolutionary games is applicable at least up to weak selection strengths in the
usual biological sense (that is, selection strengths of the order $\mathcal
O(1/N)$).
| 0 | 0 | 1 | 0 | 0 | 0 |
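The limiting object can be simulated directly: a Wright-Fisher diffusion with selection drift, here integrated with a clipped Euler-Maruyama step (the quadratic drift is the generic selection form; the paper's drift depends on the game's payoff parameters):

```python
import numpy as np

def wright_fisher_path(x0, s, T, dt=1e-4, seed=0):
    """Euler-Maruyama for dX = s X(1-X) dt + sqrt(X(1-X)) dW on [0, 1]."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        xk = min(max(x[k], 0.0), 1.0)
        drift = s * xk * (1.0 - xk)
        diff = np.sqrt(max(xk * (1.0 - xk), 0.0) * dt)
        x[k + 1] = xk + drift * dt + diff * rng.standard_normal()
    return np.clip(x, 0.0, 1.0)
```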
Quantized Compressed Sensing for Partial Random Circulant Matrices | We provide the first analysis of a non-trivial quantization scheme for
compressed sensing measurements arising from structured measurements.
Specifically, our analysis studies compressed sensing matrices consisting of
rows selected at random, without replacement, from a circulant matrix generated
by a random subgaussian vector. We quantize the measurements using stable,
possibly one-bit, Sigma-Delta schemes, and use a reconstruction method based on
convex optimization. We show that the part of the reconstruction error due to
quantization decays polynomially in the number of measurements. This is in line
with analogous results on Sigma-Delta quantization associated with random
Gaussian or subgaussian matrices, and significantly better than results
associated with the widely assumed memoryless scalar quantization. Moreover, we
prove that our approach is stable and robust; i.e., the reconstruction error
degrades gracefully in the presence of non-quantization noise and when the
underlying signal is not strictly sparse. The analysis relies on results
concerning subgaussian chaos processes as well as a variation of McDiarmid's
inequality.
| 1 | 0 | 1 | 0 | 0 | 0 |
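A first-order one-bit Sigma-Delta quantizer of the kind analyzed here keeps a bounded internal state and noise-shapes the quantization error; in a few lines (stability of the state requires |y_i| <= 1, a standard normalization):

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta: q_i = sign(u_{i-1} + y_i),
    u_i = u_{i-1} + y_i - q_i (state u stays bounded for |y_i| <= 1)."""
    u, q = 0.0, np.empty_like(np.asarray(y, dtype=float))
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0
        u += yi - q[i]
    return q
```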
Optimal Scheduling of Multi-Energy Systems with Flexible Electrical and Thermal Loads | This paper proposes a detailed optimal scheduling model of an exemplar
multi-energy system comprising combined cycle power plants (CCPPs), battery
energy storage systems, renewable energy sources, boilers, thermal energy
storage systems, electric loads and thermal loads. The proposed model considers
the detailed start-up and shutdown power trajectories of the gas turbines,
steam turbines and boilers. Furthermore, a practical, multi-energy load
management scheme is proposed within the framework of the optimal scheduling
problem. The proposed load management scheme utilizes the flexibility offered
by system components such as flexible electrical pump loads, electrical
interruptible loads and a flexible thermal load to reduce the overall energy
cost of the system. The efficacy of the proposed model in reducing the energy
cost of the system is demonstrated in the context of a day-ahead scheduling
problem using four illustrative scenarios.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the sharpness and the injective property of basic justification models | Justification Awareness Models, JAMs, were proposed by S.~Artemov as a tool
for modelling epistemic scenarios like Russel's Prime Minister example. It was
demonstrated that the sharpness and the injective property of a model play
essential role in the epistemic usage of JAMs. The problem to axiomatize these
properties using the propositional justification language was left opened. We
propose the solution and define a decidable justification logic Jref that is
sound and complete with respect to the class of all sharp injective
justification models.
| 1 | 0 | 0 | 0 | 0 | 0 |
AFT*: Integrating Active Learning and Transfer Learning to Reduce Annotation Efforts | The splendid success of convolutional neural networks (CNNs) in computer
vision is largely attributed to the availability of large annotated datasets,
such as ImageNet and Places. However, in biomedical imaging, it is very
challenging to create such large annotated datasets, as annotating biomedical
images is not only tedious, laborious, and time consuming, but also demanding
of costly, specialty-oriented skills, which are not easily accessible. To
dramatically reduce annotation cost, this paper presents a novel method to
naturally integrate active learning and transfer learning (fine-tuning) into a
single framework, called AFT*, which starts directly with a pre-trained CNN to
seek "worthy" samples for annotation and gradually enhance the (fine-tuned) CNN
via continuous fine-tuning. We have evaluated our method in three distinct
biomedical imaging applications, demonstrating that it can cut the annotation
cost by at least half, in comparison with the state-of-the-art method. This
performance is attributed to the several advantages derived from the advanced
active, continuous learning capability of our method. Although AFT* was
initially conceived in the context of computer-aided diagnosis in biomedical
imaging, it is generic and applicable to many tasks in computer vision and
image analysis; we illustrate the key ideas behind AFT* with the Places
database for scene interpretation in natural images.
| 0 | 0 | 0 | 1 | 0 | 0 |
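The overall procedure can be summarized as a loop alternating sample selection and continuous fine-tuning; in outline (all callables here are assumed interfaces of this sketch, not the authors' API):

```python
def active_fine_tuning(model, unlabeled, oracle, select, fine_tune,
                       rounds=10, batch=100):
    """Repeatedly pick 'worthy' samples from the unlabeled pool, query
    their labels, and keep fine-tuning the same pre-trained network."""
    labeled = []
    for _ in range(rounds):
        picked = select(model, unlabeled, batch)   # e.g. uncertainty-based
        for x in picked:
            unlabeled.remove(x)
            labeled.append((x, oracle(x)))         # human annotation
        model = fine_tune(model, labeled)          # continuous fine-tuning
    return model
```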
Do triangle-free planar graphs have exponentially many 3-colorings? | Thomassen conjectured that triangle-free planar graphs have an exponential
number of $3$-colorings. We show this conjecture to be equivalent to the
following statement: there exists a positive real $\alpha$ such that whenever
$G$ is a planar graph and $A$ is a subset of its edges whose deletion makes $G$
triangle-free, there exists a subset $A'$ of $A$ of size at least $\alpha|A|$
such that $G-(A\setminus A')$ is $3$-colorable. This equivalence allows us to
study restricted situations, where we can prove the statement to be true.
| 0 | 0 | 1 | 0 | 0 | 0 |
Smart TWAP trading in continuous-time equilibria | This paper presents a continuous-time equilibrium model of TWAP trading and
liquidity provision in a market with multiple strategic investors with
heterogeneous intraday trading targets. We solve the model in closed-form and
show there are infinitely many equilibria. We compare the competitive
equilibrium with different non-price-taking equilibria. In addition, we show
intraday TWAP benchmarking reduces market liquidity relative to just terminal
trading targets alone. The model is computationally tractable, and we provide a
number of numerical illustrations. An extension to stochastic VWAP targets is
also provided.
| 0 | 0 | 0 | 0 | 0 | 1 |
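The TWAP benchmark underlying these intraday targets is simply an even split of a parent order across the trading window; for concreteness (integer-share version, illustrative only):

```python
def twap_schedule(total_shares, n_slices):
    """Split `total_shares` as evenly as possible over `n_slices` intervals."""
    base, rem = divmod(total_shares, n_slices)
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

# twap_schedule(1000, 6) -> [167, 167, 167, 167, 166, 166]
```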
Self-sustained activity in balanced networks with low firing-rate | The brain can display self-sustained activity (SSA), which is the persistent
firing of neurons in the absence of external stimuli. This spontaneous activity
shows low neuronal firing rates and is observed in diverse in vitro and in vivo
situations. In this work, we study the influence of excitatory/inhibitory
balance, connection density, and network size on the self-sustained activity of
a neuronal network model. We build a random network of adaptive exponential
integrate-and-fire (AdEx) neuron models connected through inhibitory and
excitatory chemical synapses. The AdEx model mimics several behaviours of
biological neurons, such as spike initiation, adaptation, and bursting
patterns. In an excitation/inhibition balanced state, if the mean connection
degree (K) is fixed, the firing rate does not depend on the network size (N),
whereas for fixed N, the firing rate decreases when K increases. However, for
large K, SSA states can appear only for large N. We show the existence of SSA
states with similar behaviours to those observed in experimental recordings,
such as very low and irregular neuronal firing rates, and spike-train power
spectra with slow fluctuations, only for balanced networks of large size.
| 0 | 0 | 0 | 0 | 1 | 0 |
Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data | Several important applications, such as streaming PCA and semidefinite
programming, involve a large-scale positive-semidefinite (psd) matrix that is
presented as a sequence of linear updates. Because of storage limitations, it
may only be possible to retain a sketch of the psd matrix. This paper develops
a new algorithm for fixed-rank psd approximation from a sketch. The approach
combines the Nyström approximation with a novel mechanism for rank truncation.
Theoretical analysis establishes that the proposed method can achieve any
prescribed relative error in the Schatten 1-norm and that it exploits the
spectral decay of the input matrix. Computer experiments show that the proposed
method dominates alternative techniques for fixed-rank psd matrix approximation
across a wide range of examples.
| 1 | 0 | 0 | 1 | 0 | 0 |
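A simplified sketch of reconstructing a rank-r psd approximation from a sketch Y = A @ Omega in the Nyström style (the published algorithm's shift selection and numerical safeguards go beyond this; treat it as the idea, not the method):

```python
import numpy as np

def fixed_rank_psd_approx(Y, Omega, r):
    """Rank-r psd approximation of A from the sketch Y = A @ Omega,
    with Omega an n x k random test matrix (k > r)."""
    n = Y.shape[0]
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)  # small shift
    Y_nu = Y + nu * Omega                     # sketch of A + nu*I
    B = Omega.T @ Y_nu
    C = np.linalg.cholesky((B + B.T) / 2)     # B = C @ C.T, C lower triangular
    E = np.linalg.solve(C, Y_nu.T).T          # E @ E.T = Y_nu @ inv(B) @ Y_nu.T
    U, s, _ = np.linalg.svd(E, full_matrices=False)
    lam = np.maximum(s[:r] ** 2 - nu, 0.0)    # undo the shift, clip at zero
    return (U[:, :r] * lam) @ U[:, :r].T
```

Only Y and Omega are stored; a streaming linear update A <- A + H translates to Y <- Y + H @ Omega, which is what makes the single-pass setting workable.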
A New Perspective on Robust $M$-Estimation: Finite Sample Theory and Applications to Dependence-Adjusted Multiple Testing | Heavy-tailed errors impair the accuracy of the least squares estimate, which
can be spoiled by a single grossly outlying observation. As argued in the
seminal work of Peter Huber in 1973 [{\it Ann. Statist.} {\bf 1} (1973)
799--821], robust alternatives to the method of least squares are sorely
needed. To achieve robustness against heavy-tailed sampling distributions, we
revisit the Huber estimator from a new perspective by letting the tuning
parameter involved diverge with the sample size. In this paper, we develop
nonasymptotic concentration results for such an adaptive Huber estimator,
namely, the Huber estimator with the tuning parameter adapted to sample size,
dimension, and the variance of the noise. Specifically, we obtain a
sub-Gaussian-type deviation inequality and a nonasymptotic Bahadur
representation when noise variables only have finite second moments. The
nonasymptotic results further yield two conventional normal approximation
results that are of independent interest, the Berry-Esseen inequality and
Cramér-type moderate deviation. As an important application to large-scale
simultaneous inference, we apply these robust normal approximation results to
analyze a dependence-adjusted multiple testing procedure for moderately
heavy-tailed data. It is shown that the robust dependence-adjusted procedure
asymptotically controls the overall false discovery proportion at the nominal
level under mild moment conditions. Thorough numerical results on both
simulated and real datasets are also provided to back up our theory.
| 0 | 0 | 1 | 1 | 0 | 0 |
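For the simplest instance, mean estimation, letting the Huber tuning parameter grow like $\sigma\sqrt{n/\log n}$ balances bias against robustness; a sketch via iteratively reweighted averaging (the constant and the plug-in scale are illustrative choices, not the paper's calibration):

```python
import numpy as np

def adaptive_huber_mean(x, c=1.0, n_iter=50):
    """Huber mean with tau ~ c * sigma_hat * sqrt(n / log n), solved by
    iterative reweighting (weights shrink only the large residuals)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    tau = c * np.std(x) * np.sqrt(n / np.log(n))
    mu = np.median(x)                        # robust initializer
    for _ in range(n_iter):
        r = np.abs(x - mu)
        w = np.minimum(1.0, tau / np.maximum(r, 1e-12))  # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu
```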