title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
An Invitation to Polynomiography via Exponential Series | The subject of Polynomiography deals with the algorithmic visualization of polynomial equations and has many applications in STEM and art; see [Kal04]. Here we consider the polynomiography of the partial sums of the exponential series. While the exponential function is taught in standard calculus courses, it is unlikely that the properties of the zeros of its partial sums are considered in such courses, let alone their visualization as science or art. The Monthly article by Zemyan discusses some mathematical properties of these zeros. Here we exhibit some fractal and non-fractal polynomiographs of the partial sums while also presenting a brief introduction to the underlying concepts. Polynomiography establishes a different kind of appreciation of the significance of polynomials in STEM, as well as in art. It helps in the teaching of various topics at diverse levels. It also leads to new discoveries on polynomials and inspires new applications. We also present a link where educators can access a demo version of the polynomiography software, together with a module that helps teach basic topics to middle and high school students, as well as undergraduates.
| 1 | 0 | 1 | 0 | 0 | 0 |
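The abstract above does not fix a particular iteration function, so as a minimal sketch, here is a Newton-iteration polynomiograph for the partial sums $e_n(z)=\sum_{k=0}^n z^k/k!$; the grid bounds, iteration budget, and coloring by iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import factorial

def newton_polynomiograph(n=8, size=400, iters=40, tol=1e-8):
    """Newton-iteration escape map for e_n(z) = sum_{k=0}^{n} z^k / k!."""
    p = np.poly1d([1.0 / factorial(k) for k in range(n, -1, -1)])  # highest power first
    dp = p.deriv()
    x = np.linspace(-n, n, size)                     # grid bounds: illustrative
    z = x[None, :] + 1j * x[:, None]
    steps = np.zeros(z.shape, dtype=int)
    for _ in range(iters):
        dz = dp(z)
        safe = np.abs(dz) > tol                      # avoid division near critical points
        z = np.where(safe, z - p(z) / np.where(safe, dz, 1.0), z)
        steps += np.abs(p(z)) > tol                  # count not-yet-converged pixels
    return steps
```

Rendering `steps` with any colormap exposes the basins and fractal basin boundaries around the zeros of the partial sum.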
Towards Comfortable Cycling: A Practical Approach to Monitor the Conditions in Cycling Paths | This is a no-brainer. Using bicycles to commute is among the most sustainable forms of transport: it is inexpensive and pollution-free. Towns and cities have to be made bicycle-friendly to encourage their wide usage. Therefore, cycling paths should be more convenient, comfortable, and safe to ride. This paper investigates a smartphone application that passively monitors road conditions during a cyclist's ride. To overcome the problems of monitoring roads, we present novel algorithms that sense rough cycling paths and locate road bumps. Each event is detected in real time to improve the user-friendliness of the application. Cyclists may keep their smartphones at any random orientation and placement. Moreover, different smartphones sense the same incident dissimilarly and hence report discrepant sensor values. We further address these difficulties, which limit such crowd-sourcing applications. We evaluate our sensing application on cycling paths in Singapore and show that it can successfully detect such bad road conditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
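As a rough sketch of the kind of passive sensing the abstract describes: an orientation-invariant bump detector built on the accelerometer magnitude. The sampling rate, window length, and k-sigma threshold below are assumed defaults, not the paper's algorithms.

```python
import numpy as np

def detect_bumps(accel_xyz, fs=50.0, win_s=0.5, k=3.0):
    """Flag candidate road bumps from raw 3-axis accelerometer samples.

    accel_xyz: array of shape (num_samples, 3). Using the vector magnitude
    makes the detector invariant to phone orientation and placement.
    """
    mag = np.linalg.norm(np.asarray(accel_xyz, dtype=float), axis=1)
    mag -= mag.mean()                        # remove the gravity/bias offset
    win = int(win_s * fs)
    thresh = k * mag.std()                   # k-sigma spike threshold (assumed)
    return [start / fs                       # event times in seconds
            for start in range(0, len(mag) - win, win)
            if np.abs(mag[start:start + win]).max() > thresh]
```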
Comparison moduli spaces of Riemann surfaces | We define a kind of moduli space of nested surfaces and mappings, which we
call a comparison moduli space. We review examples of such spaces in geometric
function theory and modern Teichmueller theory, and illustrate how a wide range
of phenomena in complex analysis are captured by this notion of moduli space.
The paper includes a list of open problems in classical and modern function
theory and Teichmueller theory ranging from general theoretical questions to
specific technical problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data | Manually labeled corpora are expensive to create and often not available for low-resource languages or domains. Automatic labeling approaches are an alternative way to obtain labeled data more quickly and cheaply. However, these labels often contain more errors, which can deteriorate a classifier's performance when it is trained on this data. We propose a noise layer that is added to a neural network architecture. This allows modeling the noise and training on a combination of clean and noisy data. We show that in a low-resource NER task we can improve performance by up to 35% by using additional, noisy data and handling the noise.
| 0 | 0 | 0 | 1 | 0 | 0 |
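The abstract describes adding a noise layer to the network; below is a minimal sketch of one common realization (a learned label-confusion matrix in the style of noise-adaptation layers). The paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Map clean-label probabilities to noisy-label probabilities through a
    learned, row-stochastic label-confusion matrix."""

    def __init__(self, num_labels):
        super().__init__()
        # start near the identity: noisy labels mostly agree with clean ones
        self.theta = nn.Parameter(2.0 * torch.eye(num_labels))

    def forward(self, clean_log_probs):
        confusion = torch.softmax(self.theta, dim=1)    # rows sum to one
        clean_probs = clean_log_probs.exp()             # (batch, num_labels)
        return (clean_probs @ confusion + 1e-12).log()  # noisy log-probs
```

During training, noisy examples would be scored through this layer while clean examples bypass it; at test time the layer is dropped.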
Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models | Spectral topic modeling algorithms operate on matrices/tensors of word co-occurrence statistics to learn topic-specific word distributions. This approach removes the dependence on the original documents and produces substantial gains in efficiency and provable topic inference, but at a cost: the model can no longer provide information about the topic composition of individual documents. Recently, the Thresholded Linear Inverse (TLI) method was proposed to map the observed words of each document back to its topic composition. However, its linear characteristics limit the inference quality, as it does not consider important prior information over topics. In this paper, we evaluate the Simple Probabilistic Inverse (SPI) method and the novel Prior-aware Dual Decomposition (PADD) method, which is capable of learning document-specific topic compositions in parallel. Experiments show that PADD successfully leverages topic correlations as a prior, notably outperforming TLI and learning topic compositions comparable in quality to Gibbs sampling on various datasets.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spectra of Earth-like Planets Through Geological Evolution Around FGKM Stars | Future observations of terrestrial exoplanet atmospheres will occur for
planets at different stages of geological evolution. We expect to observe a
wide variety of atmospheres and planets with alternative evolutionary paths,
with some planets resembling Earth at different epochs. For an Earth-like
atmospheric time trajectory, we simulate planets from prebiotic to current
atmosphere based on geological data. We use a stellar grid F0V to M8V
($T_\mathrm{eff}$ = 7000$\mskip3mu$K to 2400$\mskip3mu$K) to model four
geological epochs of Earth's history corresponding to a prebiotic world
(3.9$\mskip3mu$Ga), the rise of oxygen at 2.0$\mskip3mu$Ga and at
0.8$\mskip3mu$Ga, and the modern Earth. We show the VIS-IR spectral features, with a focus on biosignatures through geological time, for this grid of Sun-like host stars, together with the effect of clouds on their spectra.
We find that the observability of biosignature gases decreases with increasing cloud cover and increases with planetary age. The observability of the visible O$_2$ feature at lower concentrations will partly depend on clouds, which, while slightly reducing the feature, increase the overall reflectivity and thus the detectable flux of a planet. The depth of the IR ozone feature contributes substantially to the opacity at lower oxygen concentrations, especially for the
high near-UV stellar environments around F stars. Our results are a grid of
model spectra for atmospheres representative of Earth's geological history to
inform future observations and instrument design and are publicly available
online.
| 0 | 1 | 0 | 0 | 0 | 0 |
Empathy in Bimatrix Games | Although the definition of what empathetic preferences exactly are is still
evolving, there is a general consensus in the psychology, science and
engineering communities that the evolution toward players' behaviors in
interactive decision-making problems will be accompanied by the exploitation of
their empathy, sympathy, compassion, antipathy, spitefulness, selfishness,
altruism, and self-abnegating states in the payoffs. In this article, we study
one-shot bimatrix games from a psychological game theory viewpoint. A new
empathetic payoff model is calculated to fit empirical observations and both
pure and mixed equilibria are investigated. For a realized empathy structure, the bimatrix game is categorized into one of four generic classes of games. A number of interesting results are derived. A notable level of involvement can be observed in the empathetic one-shot game compared to the non-empathetic one, and this holds even for games with dominated strategies. Partial altruism can help in breaking symmetry, in reducing payoff inequality, and in selecting social welfare and more efficient outcomes. By contrast, partial spite and self-abnegation may worsen payoff equity. Empathetic evolutionary game dynamics are introduced to capture the resulting empathetic evolutionarily stable strategies under a wide range of revision protocols, including Brown-von Neumann-Nash, Smith, imitation, replicator, and hybrid dynamics. Finally, mutual support and the Berge solution are investigated, and their connection with empathetic preferences is established. We show that pure altruism is logically inconsistent; only by balancing it with some partial selfishness does it create a consistent psychology.
| 1 | 0 | 0 | 0 | 0 | 0 |
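As a toy illustration of an empathetic payoff transform in a bimatrix game, the sketch below mixes own and opponent material payoffs through a convex combination and enumerates pure equilibria; the mixing form and the $\lambda$ values are illustrative assumptions, not the paper's model.

```python
import numpy as np
from itertools import product

def empathetic_pure_equilibria(A, B, lam1=0.5, lam2=0.5):
    """Pure Nash equilibria after an empathy transform of a bimatrix game.

    Felt payoffs mix own and opponent material payoffs (assumed form):
    U1 = lam1*A + (1-lam1)*B,  U2 = lam2*B + (1-lam2)*A.
    """
    U1 = lam1 * A + (1 - lam1) * B
    U2 = lam2 * B + (1 - lam2) * A
    return [(i, j)
            for i, j in product(range(A.shape[0]), range(A.shape[1]))
            if U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max()]

# Prisoner's dilemma: selfish players (lam = 1) yield only mutual defection (1, 1);
# half-altruistic players make mutual cooperation (0, 0) the surviving equilibrium.
A = np.array([[3., 0.], [5., 1.]])
B = A.T
print(empathetic_pure_equilibria(A, B))          # -> [(0, 0)]
print(empathetic_pure_equilibria(A, B, 1., 1.))  # -> [(1, 1)]
```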
Free transport for convex potentials | We construct non-commutative analogs of transport maps among free Gibbs states satisfying a certain convexity condition. Unlike previous constructions, our
approach is non-perturbative in nature and thus can be used to construct
transport maps between free Gibbs states associated to potentials which are far
from quadratic, i.e., states which are far from the semicircle law. An
essential technical ingredient in our approach is the extension of free
stochastic analysis to non-commutative spaces of functions based on the
Haagerup tensor product.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Trace Criterion for Kernel Bandwidth Selection for Support Vector Data Description | Support vector data description (SVDD) is a popular anomaly detection
technique. The SVDD classifier partitions the whole data space into an inlier region, which consists of the region near the training data, and an outlier region, which consists of points away from the training data. The computation of the SVDD classifier
requires a kernel function, for which the Gaussian kernel is a common choice.
The Gaussian kernel has a bandwidth parameter, and it is important to set the
value of this parameter correctly for good results. A small bandwidth leads to
overfitting such that the resulting SVDD classifier overestimates the number of
anomalies, whereas a large bandwidth leads to underfitting and an inability to
detect many anomalies. In this paper, we present a new unsupervised method for
selecting the Gaussian kernel bandwidth. Our method, which exploits the
low-rank representation of the kernel matrix to suggest a kernel bandwidth
value, is competitive with existing bandwidth selection methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
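The abstract says the method exploits a low-rank representation of the kernel matrix; as a generic diagnostic in that spirit (not the paper's actual trace criterion), one can scan bandwidths and track the numerical rank of the Gaussian kernel matrix:

```python
import numpy as np

def kernel_rank_profile(X, bandwidths, energy=0.95):
    """Numerical rank of the Gaussian kernel matrix across bandwidths.

    For each bandwidth s, returns the number of eigenvalues needed to
    capture the given fraction of the spectrum's mass (an illustrative
    low-rank summary, not the paper's selection rule).
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    profile = {}
    for s in bandwidths:
        K = np.exp(-sq / (2.0 * s ** 2))
        ev = np.linalg.eigvalsh(K)[::-1]          # descending, all >= 0
        cum = np.cumsum(ev) / ev.sum()
        profile[s] = int(np.searchsorted(cum, energy)) + 1
    return profile
```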
Chemical exfoliation of MoS2 leads to semiconducting 1T' phase and not the metallic 1T phase | A trigonal phase, existing only as small patches on chemically exfoliated few-layer, thermodynamically stable 1H-phase MoS2, is believed to critically influence the properties of MoS2-based devices. This phase has most often been attributed to the metallic 1T phase. We investigate the electronic structure of chemically exfoliated few-layer MoS2 using spatially resolved (better than 120 nm resolution) photoemission spectroscopy and Raman spectroscopy in
conjunction with state-of-the-art electronic structure calculations. On the
basis of these results, we establish that the ground state of this phase is a
small gap (~90 meV) semiconductor in contrast to most claims in the literature;
we also identify the specific trigonal (1T') structure it has among many
suggested ones.
| 0 | 1 | 0 | 0 | 0 | 0 |
A note on the approximate admissibility of regularized estimators in the Gaussian sequence model | We study the problem of estimating an unknown vector $\theta$ from an
observation $X$ drawn according to the normal distribution with mean $\theta$
and identity covariance matrix under the knowledge that $\theta$ belongs to a
known closed convex set $\Theta$. In this general setting, Chatterjee (2014)
proved that the natural constrained least squares estimator is "approximately
admissible" for every $\Theta$. We extend this result by proving that the same
property holds for all convex penalized estimators as well. Moreover, we
simplify and shorten the original proof considerably. We also provide explicit
upper and lower bounds for the universal constant underlying the notion of
approximate admissibility.
| 0 | 0 | 1 | 1 | 0 | 0 |
Simple labeled graph $C^*$-algebras are associated to disagreeable labeled spaces | By a labeled graph $C^*$-algebra we mean a $C^*$-algebra associated to a
labeled space $(E,\mathcal L,\mathcal E)$ consisting of a labeled graph
$(E,\mathcal L)$ and the smallest normal accommodating set $\mathcal E$ of
vertex subsets. Every graph $C^*$-algebra $C^*(E)$ is a labeled graph
$C^*$-algebra and it is well known that $C^*(E)$ is simple if and only if the
graph $E$ is cofinal and satisfies Condition (L). Bates and Pask extend these
conditions of graphs $E$ to labeled spaces, and show that if a set-finite and
receiver set-finite labeled space $(E,\mathcal L, \mathcal E)$ is cofinal and
disagreeable, then its $C^*$-algebra $C^*(E,\mathcal L, \mathcal E)$ is simple.
In this paper, we show that the converse is also true.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extracting Epistatic Interactions in Type 2 Diabetes Genome-Wide Data Using Stacked Autoencoder | Type 2 diabetes is a leading worldwide public health concern, and its increasing prevalence has significant health and economic importance in all nations. The condition is a multifactorial disorder with a complex aetiology. The genetic determinants remain largely elusive, with only a handful of identified candidate genes. Genome-wide association studies (GWAS) promised to significantly enhance our understanding of the genetic determinants of common complex diseases. To date, 83 single nucleotide polymorphisms (SNPs) for type 2 diabetes have been identified using GWAS. Standard statistical tests for single- and multi-locus analysis, such as logistic regression, have demonstrated little power in elucidating the genetic architecture of complex human diseases. Logistic regression captures linear interactions but neglects the non-linear epistatic interactions present within genetic data. There is an
urgent need to detect epistatic interactions in complex diseases as this may
explain the remaining missing heritability in such diseases. In this paper, we
present a novel framework based on deep learning algorithms that deal with
non-linear epistatic interactions that exist in genome wide association data.
Logistic association analysis under an additive genetic model, adjusted for
genomic control inflation factor, is conducted to remove statistically
improbable SNPs to minimize computational overheads.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Bootstrap Method for Error Estimation in Randomized Matrix Multiplication | In recent years, randomized methods for numerical linear algebra have
received growing interest as a general approach to large-scale problems.
Typically, the essential ingredient of these methods is some form of randomized
dimension reduction, which accelerates computations, but also creates random
approximation error. In this way, the dimension reduction step encodes a
tradeoff between cost and accuracy. However, the exact numerical relationship
between cost and accuracy is typically unknown, and consequently, it may be
difficult for the user to precisely know (1) how accurate a given solution is,
or (2) how much computation is needed to achieve a given level of accuracy. In
the current paper, we study randomized matrix multiplication (sketching) as a
prototype setting for addressing these general problems. As a solution, we
develop a bootstrap method for directly estimating the accuracy as a function
of the reduced dimension (as opposed to deriving worst-case bounds on the
accuracy in terms of the reduced dimension). From a computational standpoint,
the proposed method does not substantially increase the cost of standard
sketching methods, and this is made possible by an "extrapolation" technique.
In addition, we provide both theoretical and empirical results to demonstrate
the effectiveness of the proposed method.
| 1 | 0 | 0 | 1 | 0 | 0 |
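A minimal sketch of the idea of bootstrapping a sketched matrix product: sample rows to form the sketch, then resample the sampled rows to estimate an error quantile. Uniform row sampling and the Frobenius error norm are illustrative choices; the paper's estimator and its extrapolation technique are not reproduced here.

```python
import numpy as np

def sketched_product_with_error(A, B, t=200, n_boot=30, q=0.95, seed=0):
    """Sketch A.T @ B by uniform row sampling, then bootstrap the error.

    Returns the sketched product and a bootstrap estimate of the q-quantile
    of its Frobenius-norm error.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.integers(0, n, size=t)                 # sampled row indices
    est = (n / t) * A[idx].T @ B[idx]                # unbiased sketch of A.T @ B
    errs = np.empty(n_boot)
    for b in range(n_boot):
        r = rng.integers(0, t, size=t)               # resample the sketch rows
        boot = (n / t) * A[idx][r].T @ B[idx][r]
        errs[b] = np.linalg.norm(boot - est, 'fro')  # fluctuation as error proxy
    return est, float(np.quantile(errs, q))
```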
Results of measurements of the flux of albedo muons with NEVOD-DECOR experimental complex | Results of investigations of near-horizontal muons in the range of zenith angles of 85-95 degrees are presented. In this range, so-called "albedo" muons (atmospheric muons scattered in the ground into the upper hemisphere) are detected. Albedo muons are one of the main sources of background in neutrino experiments. Experimental data from two series of measurements conducted at the NEVOD-DECOR experimental complex, with a total duration of about 30 thousand hours of "live" time, are analyzed. The measured muon flux intensity is compared with Monte Carlo simulation results based on two multiple Coulomb scattering models: a model of point-like nuclei and a model taking into account the finite size of nuclei.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nanoscale superconducting memory based on the kinetic inductance of asymmetric nanowire loops | The demand for low-dissipation nanoscale memory devices is as strong as ever.
As Moore's law falters and the demand for low-power-consuming supercomputers is high, making information-processing circuits out of superconductors is one of the central goals of modern technology and physics. So far, digital superconducting circuits have not been able to demonstrate their
immense potential. One important reason for this is that a dense
superconducting memory technology is not yet available. Miniaturization of
traditional superconducting quantum interference devices is difficult below a
few micrometers because their operation relies on the geometric inductance of
the superconducting loop. Magnetic memories do allow nanometer-scale
miniaturization, but they are not purely superconducting (Baek et al 2014 Nat.
Commun. 5 3888). Our approach is to make nanometer scale memory cells based on
the kinetic inductance (and not geometric inductance) of superconducting
nanowire loops, which have already shown many fascinating properties (Aprili
2006 Nat. Nanotechnol. 1 15; Hopkins et al 2005 Science 308 1762). This allows
much smaller devices and naturally eliminates magnetic-field cross-talk. We
demonstrate that the vorticity, i.e., the winding number of the order
parameter, of a closed superconducting loop can be used for realizing a
nanoscale nonvolatile memory device. We demonstrate how to alter the vorticity
in a controlled fashion by applying calibrated current pulses. A reliable
read-out of the memory is also demonstrated. We present arguments that such
memory can be developed to operate without energy dissipation.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Autonomic Architecture of the Licas System | Licas (lightweight internet-based communication for autonomic services) is a
distributed framework for building service-based systems. The framework
provides a p2p server and more intelligent processing of information through
its AI algorithms. Distributed communication includes XML-RPC, REST, HTTP and
Web Services. It can now provide a robust platform for building different types
of system, where Microservices or SOA would be possible. However, the system
may be equally suited for the IoT, as it provides classes to connect with
external sources and has an optional Autonomic Manager with a MAPE control loop
integrated into the communication process. The system is also mobile-compatible
with Android. This paper focuses in particular on the autonomic setup and how
that might be used. A novel linking mechanism, described previously [5], can be used to dynamically link sources; this is also considered here as part of the autonomic framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some Characterizations on the Normalized Lommel, Struve and Bessel Functions of the First Kind | In this paper, we introduce a new technique for determining necessary and sufficient conditions for the normalized Bessel functions $j_{\nu}$, the normalized Struve functions $h_{\nu}$ and the normalized Lommel functions $s_{\mu,\nu}$ of the first kind to belong to the subclasses of starlike and convex functions of order $\alpha$ and type $\beta$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Applied Evaluative Informetrics: Part 1 | This manuscript is a preprint version of Part 1 (General Introduction and
Synopsis) of the book Applied Evaluative Informetrics, to be published by
Springer in the summer of 2017. This book presents an introduction to the field
of applied evaluative informetrics, and is written for interested scholars and
students from all domains of science and scholarship. It sketches the field's
history, recent achievements, and its potential and limits. It explains the
notion of multi-dimensional research performance, and discusses the pros and
cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In
addition, it presents quantitative research assessment as an evaluation
science, and focuses on the role of extra-informetric factors in the
development of indicators, and on the policy context of their application. It
also discusses the way forward, both for users and for developers of
informetric tools.
| 1 | 0 | 0 | 0 | 0 | 0 |
General multilevel Monte Carlo methods for pricing discretely monitored Asian options | We describe general multilevel Monte Carlo methods that estimate the price of
an Asian option monitored at $m$ fixed dates. Our approach yields unbiased
estimators with standard deviation $O(\epsilon)$ in $O(m + (1/\epsilon)^{2})$
expected time for a variety of processes including the Black-Scholes model,
Merton's jump-diffusion model, the Square-Root diffusion model, Kou's double
exponential jump-diffusion model, the variance gamma and NIG exponential Lévy processes and, via the Milstein scheme, processes driven by scalar stochastic
differential equations. Using the Euler scheme, our approach estimates the
Asian option price with root mean square error $O(\epsilon)$ in
$O(m+(\ln(\epsilon)/\epsilon)^{2})$ expected time for processes driven by
multidimensional stochastic differential equations. Numerical experiments
confirm that our approach outperforms the conventional Monte Carlo method by a
factor of order $m$.
| 0 | 0 | 0 | 0 | 0 | 1 |
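For orientation, here is a generic Giles-style multilevel Monte Carlo sketch for a discretely monitored arithmetic Asian call under Black-Scholes, where level $l$ monitors $2^l$ dates and shares Brownian increments with the coarser level; the fixed per-level sample sizes are illustrative, and this is not the paper's unbiased estimator.

```python
import numpy as np

def mlmc_asian_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, L=6, N0=200000):
    """Telescoping MLMC estimate of a discretely monitored Asian call price.

    Level l monitors the path at 2**l equally spaced dates; fine and coarse
    payoffs share Brownian increments, so their difference has small variance.
    """
    rng = np.random.default_rng(0)
    total = 0.0
    for l in range(L + 1):
        n = max(N0 >> l, 1000)                   # fewer samples on finer levels
        m = 2 ** l
        dt = T / m
        dW = rng.standard_normal((n, m)) * np.sqrt(dt)
        logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
        fine = np.maximum(np.exp(logS).mean(axis=1) - K, 0.0)
        if l == 0:
            total += fine.mean()
        else:
            dWc = dW.reshape(n, m // 2, 2).sum(axis=2)   # coupled coarse increments
            logSc = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * 2 * dt
                                           + sigma * dWc, axis=1)
            coarse = np.maximum(np.exp(logSc).mean(axis=1) - K, 0.0)
            total += (fine - coarse).mean()              # level-l correction
    return np.exp(-r * T) * total
```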
Medical applications of diamond magnetometry: commercial viability | The sensing of magnetic fields has important applications in medicine,
particularly to the sensing of signals in the heart and brain. The fields
associated with biomagnetism are exceptionally weak, being many orders of
magnitude smaller than the Earth's magnetic field. Measuring them requires the most sensitive detection techniques; however, to be commercially viable, this must be done at an affordable cost. The current state of the art uses costly SQUID magnetometers, although these will likely be superseded by less costly, but otherwise limited, alkali vapour magnetometers.
Here, we discuss the application of diamond magnetometers to medical
applications. Diamond magnetometers are robust, solid state devices that work
in a broad range of environments, with the potential for sensitivity comparable
to the leading technologies.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the origin of the hydraulic jump in a thin liquid film | For more than a century, it has been believed that all hydraulic jumps are
created due to gravity. However, we found that thin-film hydraulic jumps are
not induced by gravity. This study explores the initiation of thin-film
hydraulic jumps. For circular jumps produced by the normal impingement of a jet
onto a solid surface, we found that the jump is formed when surface tension and
viscous forces balance the momentum in the film and gravity plays no
significant role. Experiments show no dependence on the orientation of the
surface and a scaling relation balancing viscous forces and surface tension
collapses the experimental data. Experiments on thin-film planar jumps in a channel also show that the predominant balance is with surface tension, although for the film thicknesses we studied, gravity also played a role in the jump formation. A theoretical analysis shows that the downstream
transport of surface tension energy is the previously neglected, critical
ingredient in these flows and that capillary waves play the role of gravity
waves in a traditional jump in demarcating the transition from the
supercritical to subcritical flow associated with these jumps.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discovery of Complex Anomalous Patterns of Sexual Violence in El Salvador | When sexual violence is a product of organized crime or social imaginary, the
links between sexual violence episodes can be understood as a latent structure.
With this assumption in place, we can use data science to uncover complex
patterns. In this paper we focus on the use of data mining techniques to unveil
complex anomalous spatiotemporal patterns of sexual violence. We illustrate
their use by analyzing all reported rapes in El Salvador over a period of nine
years. Through our analysis, we are able to provide evidence of phenomena that,
to the best of our knowledge, have not been previously reported in literature.
We devote special attention to a pattern we discover in the East, where
underage victims report their boyfriends as perpetrators at anomalously high
rates. Finally, we explain how such analyses could be conducted in real time,
enabling early detection of emerging patterns to allow law enforcement agencies
and policy makers to react accordingly.
| 0 | 0 | 0 | 1 | 0 | 0 |
Using Social Network Information in Bayesian Truth Discovery | We investigate the problem of truth discovery based on opinions from multiple
agents who may be unreliable or biased. We consider the case where agents'
reliabilities or biases are correlated if they belong to the same community,
which defines a group of agents with similar opinions regarding a particular
event. An agent can belong to different communities for different events, and
these communities are unknown a priori. We incorporate knowledge of the agents'
social network in our truth discovery framework and develop Laplace variational
inference methods to estimate agents' reliabilities, communities, and the event
states. We also develop a stochastic variational inference method to scale our
model to large social networks. Simulations and experiments on real data
suggest that when observations are sparse, our proposed methods perform better
than several other inference methods, including majority voting, TruthFinder,
AccuSim, the Confidence-Aware Truth Discovery method, the Bayesian Classifier
Combination (BCC) method, and the Community BCC method.
| 1 | 0 | 0 | 1 | 0 | 0 |
IVOA Recommendation: HiPS - Hierarchical Progressive Survey | This document presents HiPS, a hierarchical scheme for the description,
storage and access of sky survey data. The system is based on hierarchical
tiling of sky regions at finer and finer spatial resolution which facilitates a
progressive view of a survey, and supports multi-resolution zooming and
panning. HiPS uses the HEALPix tessellation of the sky as the basis for the
scheme and is implemented as a simple file structure with a direct indexing
scheme that leads to practical implementations.
| 0 | 1 | 0 | 0 | 0 | 0 |
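As a small usage sketch (assuming the healpy package is installed), the snippet below maps a sky position to a HiPS tile path at a given order, using NESTED HEALPix indexing and the HiPS convention of grouping tiles in directories of 10000:

```python
import numpy as np
import healpy as hp

def hips_tile_path(ra_deg, dec_deg, order):
    """HiPS tile path for a sky position at a given HEALPix order.

    Tiles are named by the NESTED pixel index at nside = 2**order and grouped
    in directories Dir = (Npix // 10000) * 10000.
    """
    nside = 2 ** order
    theta = np.radians(90.0 - dec_deg)            # colatitude in radians
    phi = np.radians(ra_deg)
    ipix = hp.ang2pix(nside, theta, phi, nest=True)
    return f"Norder{order}/Dir{(ipix // 10000) * 10000}/Npix{ipix}"

print(hips_tile_path(266.4, -29.0, 3))            # tile toward the Galactic centre
```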
Mammography Assessment using Multi-Scale Deep Classifiers | Applying deep learning methods to mammography assessment has remained a challenging topic. Dense noise with sparse expressions, mega-pixel raw data resolution, and a lack of diverse examples have all been factors affecting performance. The lack of pixel-level ground truths has especially limited segmentation methods in pushing beyond approximate bounding regions. We
propose a classification approach grounded in high performance tissue
assessment as an alternative to all-in-one localization and assessment models
that is also capable of pinpointing the causal pixels. First, the objective of
the mammography assessment task is formalized in the context of local tissue
classifiers. Then, the accuracy of a convolutional neural net is evaluated on
classifying patches of tissue with suspicious findings at varying scales, where
the highest obtained AUC is above $0.9$. The local evaluations of one such expert tissue classifier are used to augment the results of a heatmap regression model and additionally to recover the exact causal regions at high resolution as a
saliency image suitable for clinical settings.
| 0 | 0 | 0 | 1 | 0 | 0 |
Studies to Understand and Optimize the Performance of Scintillation Counters for the Mu2e Cosmic Ray Veto System | In order to optimize the performance of the Mu2e Cosmic Ray Veto (CRV) system, reflection studies and aging studies were conducted.
| 0 | 1 | 0 | 0 | 0 | 0 |
Linear Parameter Varying Representation of a class of MIMO Nonlinear Systems | Linear parameter-varying (LPV) models form a powerful model class to analyze
and control a (nonlinear) system of interest. Identifying an LPV model of a
nonlinear system can be challenging due to the difficulty of selecting the
scheduling variable(s) a priori, especially if a first principles based
understanding of the system is unavailable. Converting a nonlinear model to an
LPV form is also non-trivial and requires systematic methods to automate the
process.
Inspired by these challenges, a systematic LPV embedding approach starting
from multiple-input multiple-output (MIMO) linear fractional representations
with a nonlinear feedback block (NLFR) is proposed. This NLFR model class is
embedded into the LPV model class by an automated factorization of the
(possibly MIMO) static nonlinear block present in the model. As a result of the
factorization, an LPV-LFR or an LPV state-space model with affine dependency on
the scheduling is obtained. This approach facilitates the selection of the
scheduling variable and the connected mapping of system variables. Such a conversion method enables the use of nonlinear identification tools to estimate LPV models.
The potential of the proposed approach is illustrated on a 2-DOF nonlinear
mass-spring-damper example.
| 1 | 0 | 0 | 0 | 0 | 0 |
Kinetic and radiative power from optically thin accretion flows | We perform a set of general relativistic, radiative, magneto-hydrodynamical
simulations (GR-RMHD) to study the transition from radiatively inefficient to
efficient state of accretion on a non-rotating black hole. We study ion to
electron temperature ratios ranging from $T_{\rm i}/T_{\rm e}=10$ to $100$, and
simulate flows corresponding to accretion rates as low as $10^{-6}\dot M_{\rm
Edd}$, and as high as $10^{-2}\dot M_{\rm Edd}$. We have found that the
radiative output of accretion flows increases with accretion rate, and that the
transition occurs earlier for hotter electrons (lower $T_{\rm i}/T_{\rm e}$
ratio). At the same time, the mechanical efficiency hardly changes and accounts for ${\approx}\,3\%$ of the accreted rest-mass energy flux, even at the highest
simulated accretion rates. This is particularly important for the mechanical
AGN feedback regulating massive galaxies, groups, and clusters. Comparison with
recent observations of radiative and mechanical AGN luminosities suggests that
the ion to electron temperature ratio in the inner, collisionless accretion
flow should fall within $10<T_{\rm i}/T_{\rm e}<30$, i.e., the electron
temperature should be several percent of the ion temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
First-Principles Many-Body Investigation of Correlated Oxide Heterostructures: Few-Layer-Doped SmTiO$_3$ | Correlated oxide heterostructures pose a challenging problem in condensed
matter research due to their structural complexity interweaved with demanding
electron states beyond the effective single-particle picture. By exploring the
correlated electronic structure of SmTiO$_3$ doped with few layers of SrO, we
provide an insight into the complexity of such systems. Furthermore, it is
shown how the advanced combination of band theory on the level of Kohn-Sham
density functional theory with explicit many-body theory on the level of
dynamical mean-field theory provides an adequate tool to cope with the problem.
Coexistence of band-insulating, metallic and Mott-critical electronic regions
is revealed in individual heterostructures with multi-orbital manifolds.
Intriguing orbital polarizations that vary qualitatively between the metallic and the Mott layers are also encountered.
| 0 | 1 | 0 | 0 | 0 | 0 |
Faster Boosting with Smaller Memory | The two state-of-the-art implementations of boosted trees, XGBoost and LightGBM, can process large training sets extremely fast. However, this performance requires that the memory size is sufficient to hold a 2-3 multiple of the training set size. This paper presents an alternative approach to implementing boosted trees, which achieves a significant speedup over XGBoost
and LightGBM, especially when memory size is small. This is achieved using a
combination of two techniques: early stopping and stratified sampling, which
are explained and analyzed in the paper. We describe our implementation and
present experimental results to support our claims.
| 1 | 0 | 0 | 1 | 0 | 0 |
Deep Structured Learning for Facial Action Unit Intensity Estimation | We consider the task of automated estimation of facial expression intensity.
This involves estimation of multiple output variables (facial action units ---
AUs) that are structurally dependent. Their structure arises from statistically
induced co-occurrence patterns of AU intensity levels. Modeling this structure
is critical for improving the estimation performance; however, this performance
is bounded by the quality of the input features extracted from face images. The
goal of this paper is to model these structures and estimate complex feature
representations simultaneously by combining conditional random field (CRF)
encoded AU dependencies with deep learning. To this end, we propose a novel
Copula CNN deep learning approach for modeling multivariate ordinal variables.
Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF.
These are jointly optimized with deep CNN feature encoding layers using a newly
introduced balanced batch iterative training algorithm. We demonstrate the
effectiveness of our approach on the task of AU intensity estimation on two
benchmark datasets. We show that joint learning of the deep features and the
target output structure results in significant performance gains compared to
existing deep structured models for analysis of facial expressions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Placing the spotted T Tauri star LkCa 4 on an HR diagram | Ages and masses of young stars are often estimated by comparing their
luminosities and effective temperatures to pre-main sequence stellar evolution
tracks, but magnetic fields and starspots complicate both the observations and
evolution. To understand their influence, we study the heavily-spotted
weak-lined T-Tauri star LkCa 4 by searching for spectral signatures of
radiation originating from the starspot or starspot groups. We introduce a new
methodology for constraining both the starspot filling factor and the spot
temperature by fitting two-temperature stellar atmosphere models constructed
from Phoenix synthetic spectra to a high-resolution near-IR IGRINS spectrum.
Clearly discernible spectral features arise from both a hot photospheric component at $T_{\mathrm{hot}} \sim 4100$ K and a cool component at $T_{\mathrm{cool}} \sim 2700-3000$ K, which covers $\sim80\%$ of the visible
surface. This mix of hot and cool emission is supported by analyses of the
spectral energy distribution, rotational modulation of colors and of TiO band
strengths, and features in low-resolution optical/near-IR spectroscopy.
Although the revised effective temperature and luminosity make LkCa 4 appear
much younger and lower mass than previous estimates from unspotted stellar
evolution models, appropriate estimates will require the production and
adoption of spotted evolutionary models. Biases from starspots likely afflict
most fully convective young stars and contribute to uncertainties in ages and
age spreads of open clusters. In some spectral regions starspots act as a
featureless veiling continuum owing to high rotational broadening and heavy
line-blanketing in cool star spectra. Some evidence is also found for an
anti-correlation between the velocities of the warm and cool components.
| 0 | 1 | 0 | 0 | 0 | 0 |
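A minimal sketch of the two-temperature idea: a composite spectrum mixing a cool spot component with a hot photosphere by a filling factor. Blackbodies stand in here for the Phoenix synthetic spectra used in the paper, and the default parameter values simply echo the abstract.

```python
import numpy as np
from scipy.constants import h, c, k

def spotted_spectrum(wave_um, t_hot=4100.0, t_cool=2850.0, f_spot=0.8):
    """Composite spectrum: f_spot * B(T_cool) + (1 - f_spot) * B(T_hot)."""
    lam = np.asarray(wave_um) * 1e-6              # micrometres -> metres
    def planck(T):                                # spectral radiance B_lambda(T)
        return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))
    return f_spot * planck(t_cool) + (1.0 - f_spot) * planck(t_hot)
```

Fitting `f_spot` and the two temperatures to an observed spectrum is the essence of the two-temperature constraint the abstract describes.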
Coexistence of pressure-induced structural phases in bulk black phosphorus: a combined x-ray diffraction and Raman study up to 18 GPa | We report a study of the structural phase transitions induced by pressure in
bulk black phosphorus by using both synchrotron x-ray diffraction for pressures
up to 12.2 GPa and Raman spectroscopy up to 18.2 GPa. Very recently black
phosphorus has attracted wide attention because of the unique properties of few-layer samples (phosphorene), but some basic questions are still open in the case of the bulk system. Concerning the presence of a Raman spectrum above 10 GPa, which should not be observed in an elemental simple cubic system, we propose a new explanation by attributing a key role to the non-hydrostatic conditions occurring in Raman experiments. Finally, a combined analysis of the Raman and XRD data allowed us to obtain quantitative information on the presence and extent of coexistence between different structural phases from ~5 up to
~15 GPa. This information can have an important role in theoretical studies on
pressure-induced structural and electronic phase transitions in black
phosphorus.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Complexity of the Weighted Fused Lasso | The solution path of the 1D fused lasso for an $n$-dimensional input is
piecewise linear with $\mathcal{O}(n)$ segments (Hoefling et al. 2010; Tibshirani et al. 2011). However, existing proofs of this bound do not hold for
the weighted fused lasso. At the same time, results for the generalized lasso,
of which the weighted fused lasso is a special case, allow $\Omega(3^n)$
segments (Mairal et al. 2012). In this paper, we prove that the number of
segments in the solution path of the weighted fused lasso is
$\mathcal{O}(n^2)$, and that, for some instances, it is $\Omega(n^2)$. We also
give a new, very simple, proof of the $\mathcal{O}(n)$ bound for the fused
lasso.
| 0 | 0 | 0 | 1 | 0 | 0 |
Prediction of Stable Ground-State Lithium Polyhydrides under High Pressures | Hydrogen-rich compounds are important for understanding the dissociation of
dense molecular hydrogen, as well as searching for room temperature
Bardeen-Cooper-Schrieffer (BCS) superconductors. A recent high pressure
experiment reported the successful synthesis of novel insulating lithium polyhydrides at pressures above 130 GPa. However, the results are in sharp contrast to previous theoretical predictions with the PBE functional, according to which all lithium polyhydrides (LiHn, n = 2-8) should be metallic around this pressure range. In order to address this discrepancy, we perform an unbiased structure search with first-principles calculations, including the van der Waals interaction that was ignored in the previous predictions, to determine the stable high-pressure structures of LiHn (n = 2-11, 13) up to 200 GPa. We reproduce the previously predicted
structures, and further find novel compositions that adopt more stable
structures. The van der Waals functional (vdW-DF) significantly alters the
relative stability of lithium polyhydrides, and predicts that the stable
stoichiometries for the ground-state should be LiH2 and LiH9 at 130-170 GPa,
and LiH2, LiH8 and LiH10 at 180-200 GPa. Accurate electronic structure calculations with the GW approximation indicate that LiH, LiH2, LiH7, and LiH9 remain insulating up to at least 208 GPa, while all other lithium polyhydrides are
metallic. The calculated vibron frequencies of these insulating phases are also
in accordance with the experimental infrared (IR) data. This reconciliation
with the experimental observation suggests that LiH2, LiH7, and LiH9 are the
possible candidates for lithium polyhydrides synthesized in that experiment.
Our results reinstate the credibility of density functional theory in describing H-rich compounds and demonstrate the importance of considering the van der Waals interaction in this class of materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Analytical methods for vacuum simulations in high energy accelerators for future machines based on the LHC performance | The Future Circular Collider (FCC), currently in the design phase, will
address many outstanding questions in particle physics. The technology to
succeed in this 100 km circumference collider goes beyond present limits.
Ultra-high vacuum conditions in the beam pipe are one essential requirement for smooth operation. Different physical phenomena, such as photon-, ion- and electron-induced desorption and thermal outgassing of the chamber walls,
challenge this requirement. This paper presents an analytical model and a
computer code PyVASCO that supports the design of a stable vacuum system by
providing an overview of all the gas dynamics happening inside the beam pipes.
A mass balance equation system describes the density distribution of the four
dominating gas species $\text{H}_2, \text{CH}_4$, $\text{CO}$ and
$\text{CO}_2$. An appropriate solving algorithm is discussed in detail and a
validation of the model including a comparison of the output to the readings of
LHC gauges is presented. This enables the evaluation of different designs for
the FCC.
| 0 | 1 | 0 | 0 | 0 | 0 |
Global solutions to reaction-diffusion equations with super-linear drift and multiplicative noise | Let $\xi(t\,,x)$ denote space-time white noise and consider a reaction-diffusion equation of the form \[ \dot{u}(t\,,x)=\tfrac12 u''(t\,,x) + b(u(t\,,x)) + \sigma(u(t\,,x))\, \xi(t\,,x), \] on $\mathbb{R}_+\times[0\,,1]$, with homogeneous Dirichlet
boundary conditions and suitable initial data, in the case that there exists
$\varepsilon>0$ such that $\vert b(z)\vert \ge|z|(\log|z|)^{1+\varepsilon}$ for
all sufficiently-large values of $|z|$. When $\sigma\equiv 0$, it is well known
that such PDEs frequently have non-trivial stationary solutions. By contrast,
Bonder and Groisman (2009) have recently shown that there is finite-time blowup
when $\sigma$ is a non-zero constant. In this paper, we prove that the
Bonder--Groisman condition is unimprovable by showing that the
reaction-diffusion equation with noise is "typically" well posed when $\vert
b(z) \vert =O(|z|\log_+|z|)$ as $|z|\to\infty$. We interpret the word
"typically" in two essentially-different ways without altering the conclusions
of our assertions.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the semigroup rank of a group | For an arbitrary group $G$, it is shown that either the semigroup rank ${\rm rk}_S(G)$ equals the group rank ${\rm rk}_G(G)$, or ${\rm rk}_S(G) = {\rm rk}_G(G)+1$. This is the starting point for the rest of the article, where the semigroup rank for diverse kinds of groups is analysed. The semigroup rank of relatively free groups, for any variety of groups, is computed. For a finitely generated abelian group $G$, it is proven that ${\rm rk}_S(G) = {\rm rk}_G(G)+1$ if and only if $G$ is torsion-free. In general, this is not true. Partial results are obtained in the nilpotent case. It is also proven that if $M$ is a connected closed surface, then ${\rm rk}_S(\pi_1(M)) = {\rm rk}_G(\pi_1(M))+1$ if and only if $M$ is orientable.
| 0 | 0 | 1 | 0 | 0 | 0 |
A time series distance measure for efficient clustering of input output signals by their underlying dynamics | Starting from a dataset with input/output time series generated by multiple
deterministic linear dynamical systems, this paper tackles the problem of
automatically clustering these time series. We propose an extension of the so-called Martin cepstral distance that allows these time series to be clustered efficiently, and apply it to simulated electrical circuit data. Traditionally,
two ways of handling the problem are used. The first class of methods employs a
distance measure on time series (e.g. Euclidean, Dynamic Time Warping) and a
clustering technique (e.g. k-means, k-medoids, hierarchical clustering) to find
natural groups in the dataset. It is, however, often not clear whether these
distance measures effectively take into account the specific temporal
correlations in these time series. The second class of methods uses the
input/output data to identify a dynamic system using an identification scheme,
and then applies a model norm-based distance (e.g. H2, H-infinity) to find out
which systems are similar. This, however, can be very time consuming for large
amounts of long time series data. We show that the new distance measure presented in this paper performs as well as when every input/output pair is modelled explicitly, while remaining computationally much less complex. The complexity of calculating this distance between two time series of length N is O(N log N).
| 1 | 0 | 0 | 1 | 0 | 0 |
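For concreteness, a nonparametric sketch of a Martin-type weighted cepstral distance, $d^2=\sum_k k\,(c_k - c'_k)^2$, estimated directly from output signals; the paper's extension to input/output data adds structure this sketch omits.

```python
import numpy as np

def power_cepstrum(x, n_coef=64, nfft=1024):
    """Cepstrum of the log power spectrum; c_0 (the gain term) is dropped."""
    X = np.fft.rfft(np.asarray(x, dtype=float), nfft)
    log_power = np.log(np.abs(X) ** 2 + 1e-12)
    return np.fft.irfft(log_power, nfft)[1:n_coef + 1]

def martin_distance(x, y, n_coef=64):
    """d^2 = sum_k k * (c_k(x) - c_k(y))^2 over cepstrum coefficients."""
    cx, cy = power_cepstrum(x, n_coef), power_cepstrum(y, n_coef)
    weights = np.arange(1, n_coef + 1)
    return float(np.sqrt(np.sum(weights * (cx - cy) ** 2)))
```

A pairwise matrix of these distances can then be fed to any off-the-shelf clustering technique (k-medoids, hierarchical clustering) as the abstract describes.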
Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application | In this paper we propose a 'knee-like' approximation of the lateral
distribution of the Cherenkov light from extensive air showers in the energy
range 30-3000 TeV and study the possibility of its practical application in high
energy ground-based gamma-ray astronomy experiments (in particular, in
TAIGA-HiSCORE). The approximation has a very good accuracy for individual
showers and can be easily simplified for practical application in the HiSCORE
wide angle timing array in the condition of a limited number of triggered
stations.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the primary spacing and microsegregation of cellular dendrites in laser deposited Ni-Nb alloys | In this study, an alloy phase-field model is used to simulate solidification
microstructures at different locations within a solidified molten pool. The
temperature gradient $G$ and the solidification velocity $V$ are obtained from
a macroscopic heat transfer finite element simulation and provided as input to
the phase-field model. The effects of laser beam speed and the location within
the melt pool on the primary arm spacing and on the extent of Nb partitioning
at the cell tips are investigated. Simulated steady-state primary spacings are
compared with power law and geometrical models. Cell tip compositions are
compared to a dendrite growth model. The extent of non-equilibrium interface
partitioning of the phase-field model is investigated. Although the phase-field
model has an anti-trapping solute flux term meant to maintain local interface
equilibrium, we have found that during simulations it was insufficient at
maintaining equilibrium. This is due to the fact that the additive
manufacturing solidification conditions fall well outside the allowed limits of
this flux term.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Alternative Approach to Functional Linear Partial Quantile Regression | We have previously proposed the partial quantile regression (PQR) prediction procedure for the functional linear model by using partial quantile covariance
techniques and developed the simple partial quantile regression (SIMPQR)
algorithm to efficiently extract PQR basis for estimating functional
coefficients. However, although the PQR approach is considered as an attractive
alternative to projections onto the principal component basis, there are
certain limitations to uncovering the corresponding asymptotic properties
mainly because of its iterative nature and the non-differentiability of the
quantile loss function. In this article, we propose and implement an alternative formulation of partial quantile regression (APQR) for the functional linear model, using the block relaxation method and finite smoothing techniques.
The proposed reformulation leads to insightful results and motivates new
theory, demonstrating consistency and establishing convergence rates by
applying advanced techniques from empirical process theory. Two simulation studies and two real datasets, from the ADHD-200 sample and ADNI, are investigated to show the superiority of our proposed methods.
| 0 | 0 | 1 | 1 | 0 | 0 |
Parallel mining of time-faded heavy hitters | We present PFDCMSS, a novel message-passing based parallel algorithm for
mining time-faded heavy hitters. The algorithm is a parallel version of the
recently published FDCMSS sequential algorithm. We formally prove its
correctness by showing that the underlying data structure, a sketch augmented
with a Space Saving stream summary holding exactly two counters, is mergeable.
Whilst the mergeability of traditional sketches derives immediately from theory, we show that merging our augmented sketch is non-trivial. Nonetheless, the
resulting parallel algorithm is fast and simple to implement. To the best of
our knowledge, PFDCMSS is the first parallel algorithm solving the problem of
mining time-faded heavy hitters on message-passing parallel architectures.
Extensive experimental results confirm that PFDCMSS retains the extreme
accuracy and error bound provided by FDCMSS whilst providing excellent parallel
scalability.
| 1 | 0 | 0 | 0 | 0 | 0 |
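For context, the Space Saving summary that the augmented sketch holds (with exactly two counters) updates as in the minimal textbook sketch below; the merge logic whose correctness the paper proves is not reproduced here.

```python
def space_saving_update(counters, item, k=2):
    """One Space Saving update with at most k monitored items (k=2 here,
    matching the two counters held in the augmented sketch's summaries)."""
    if item in counters:
        counters[item] += 1
    elif len(counters) < k:
        counters[item] = 1
    else:
        victim = min(counters, key=counters.get)   # least-counted monitored item
        counters[item] = counters.pop(victim) + 1  # evict it, inherit its count
    return counters
```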
On a Minkowski-like inequality for asymptotically flat static manifolds | The Minkowski inequality is a classical inequality in differential geometry, giving a lower bound on the total mean curvature of a convex surface in Euclidean space in terms of its area. Recently there has been interest in proving versions of this inequality for manifolds other than $\mathbb{R}^n$; for example,
such an inequality holds for surfaces in spatial Schwarzschild and
AdS-Schwarzschild manifolds. In this note, we adapt a recent analysis of Y. Wei
to prove a Minkowski-like inequality for general static asymptotically flat
manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Instantaneous Arbitrage and the CAPM | This paper studies the concept of instantaneous arbitrage in continuous time
and its relation to the instantaneous CAPM. Absence of instantaneous arbitrage
is equivalent to the existence of a trading strategy which satisfies the CAPM
beta pricing relation in place of the market. Thus the difference between the
arbitrage argument and the CAPM argument in Black and Scholes (1973) is this:
the arbitrage argument assumes that there exists some portfolio satisfying the CAPM equation, whereas the CAPM argument assumes, in addition, that this
portfolio is the market portfolio.
| 0 | 0 | 0 | 0 | 0 | 1 |
On the Complexity of Approximating Wasserstein Barycenter | We study the complexity of approximating the Wasserstein barycenter of $m$ discrete measures, or histograms of size $n$, by contrasting two alternative
approaches, both using entropic regularization. The first approach is based on
the Iterative Bregman Projections (IBP) algorithm for which our novel analysis
gives a complexity bound proportional to $\frac{mn^2}{\varepsilon^2}$ to
approximate the original non-regularized barycenter.
Using an alternative accelerated-gradient-descent-based approach, we obtain a
complexity proportional to $\frac{mn^{2.5}}{\varepsilon} $. As a byproduct, we
show that the regularization parameter in both approaches has to be
proportional to $\varepsilon$, which causes instability of both algorithms when
the desired accuracy is high. To overcome this issue, we propose a novel
proximal-IBP algorithm, which can be seen as a proximal gradient method, which
uses IBP on each iteration to make a proximal step. We also consider the
question of scalability of these algorithms using approaches from distributed
optimization and show that the first algorithm can be implemented in a
centralized distributed setting (master/slave), while the second one is
amenable to a more general decentralized distributed setting with an arbitrary
network topology.
| 1 | 0 | 0 | 0 | 0 | 0 |
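A minimal dense-kernel sketch of the Iterative Bregman Projections baseline the abstract analyzes (Benamou et al. style); the regularization value and iteration count are illustrative, and the paper's proximal-IBP variant is not shown.

```python
import numpy as np

def ibp_barycenter(ps, C, gamma=0.05, iters=500, weights=None):
    """Entropic Wasserstein barycenter of histograms via IBP.

    ps: (m, n) histograms (rows sum to 1); C: (n, n) symmetric ground costs.
    """
    m, n = ps.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    K = np.exp(-C / gamma)                      # Gibbs kernel
    u = np.ones((m, n))
    v = np.ones((m, n))
    for _ in range(iters):
        u = ps / (v @ K.T)                      # project onto the fixed marginals
        Ktu = u @ K                             # barycenter-side marginals / v
        q = np.exp(w @ np.log(np.maximum(v * Ktu, 1e-300)))  # weighted geometric mean
        v = q[None, :] / Ktu                    # project onto the common marginal
    return q                                    # the regularized barycenter
```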
Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces | Modern implicit generative models such as generative adversarial networks
(GANs) are generally known to suffer from instability and lack of
interpretability as it is difficult to diagnose what aspects of the target
distribution are missed by the generative model. In this work, we propose a
theoretically grounded solution to these issues by augmenting the GAN's loss
function with a kernel-based regularization term that magnifies local
discrepancy between the distributions of generated and real samples. The
proposed method relies on so-called witness points in the data space which are
jointly trained with the generator and provide an interpretable indication of
where the two distributions locally differ during the training procedure. In
addition, the proposed algorithm is scaled to higher dimensions by learning the
witness locations in a latent space of an autoencoder. We theoretically
investigate the dynamics of the training procedure, proving that a desirable equilibrium point exists and that the dynamical system is locally stable around this equilibrium. Finally, we demonstrate different aspects of the proposed
algorithm by numerical simulations of analytical solutions and empirical
results for low and high-dimensional datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Heterogeneous Transfer Learning: An Unsupervised Approach | Transfer learning leverages the knowledge in one domain, the source domain,
to improve learning efficiency in another domain, the target domain. Existing
transfer learning research is relatively well-progressed, but only in
situations where the feature spaces of the domains are homogeneous and the
target domain contains at least a few labeled instances. However, transfer
learning has not been well-studied in heterogeneous settings with an unlabeled
target domain. To contribute to the research in this emerging field, this paper
presents: (1) an unsupervised knowledge transfer theorem that prevents negative
transfer; and (2) a principal angle-based metric to measure the distance
between two pairs of domains. The metric shows the extent to which homogeneous
representations have preserved the information in original source and target
domains. The unsupervised knowledge transfer theorem sets out the transfer
conditions necessary to prevent negative transfer. Linear monotonic maps meet
the transfer conditions of the theorem and, hence, are used to construct
homogeneous representations of the heterogeneous domains, which in principle
prevents negative transfer. The metric and the theorem have been implemented in
an innovative transfer model, called a Grassmann-LMM-geodesic flow kernel
(GLG), that is specifically designed for knowledge transfer across
heterogeneous domains. The GLG model learns homogeneous representations of
heterogeneous domains by minimizing the proposed metric. Knowledge is
transferred through these learned representations via a geodesic flow kernel.
Notably, the theorem presented in this paper provides the sufficient transfer conditions needed to guarantee that knowledge is correctly transferred from a source domain to an unlabeled target domain.
| 0 | 0 | 0 | 1 | 0 | 0 |
Diffusion transformations, Black-Scholes equation and optimal stopping | We develop a new class of path transformations for one-dimensional diffusions
that are tailored to alter their long-run behaviour from transient to recurrent
or vice versa. This immediately leads to a formula for the distribution of the first exit times of diffusions, which was recently characterised by Karatzas and Ruf \cite{KR} as the minimal solution of an appropriate Cauchy problem under more stringent conditions. A particular limit of these transformations also turns out to be instrumental in characterising the stochastic solutions of
Cauchy problems defined by the generators of strict local martingales, which
are well-known for not having unique solutions even when one restricts
solutions to have linear growth. Using an appropriate diffusion transformation
we show that the aforementioned stochastic solution can be written in terms of
the unique classical solution of an {\em alternative} Cauchy problem with
suitable boundary conditions. This in particular resolves the long-standing
issue of non-uniqueness with the Black-Scholes equations in derivative pricing
in the presence of {\em bubbles}. Finally, we use these path transformations to
propose a unified framework for solving explicitly the optimal stopping problem
for one-dimensional diffusions with discounting, which in particular is
relevant for the pricing and the computation of optimal exercise boundaries of
perpetual American options.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generation of concept-representative symbols | The visual representation of concepts or ideas through simple shapes has been explored throughout the history of humanity, and it is believed to be at the origin of writing. We focus on the computational generation of visual
symbols to represent concepts. We aim to develop a system that uses background
knowledge about the world to find connections among concepts, with the goal of
generating symbols for a given concept. We are also interested in exploring the
system as an approach to visual dissociation and visual conceptual blending.
This has great potential in the area of Graphic Design as a tool to both
stimulate creativity and aid in brainstorming in projects such as logo,
pictogram or signage design.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantitative results using variants of Schmidt's game: Dimension bounds, arithmetic progressions, and more | Schmidt's game is generally used to deduce qualitative information about the
Hausdorff dimensions of fractal sets and their intersections. However, one can
also ask about quantitative versions of the properties of winning sets. In this
paper we show that such quantitative information has applications to various
questions including:
* What is the maximal length of an arithmetic progression on the "middle
$\epsilon$" Cantor set?
* What is the smallest $n$ such that there is some element of the ternary
Cantor set whose continued fraction partial quotients are all $\leq n$?
* What is the Hausdorff dimension of the set of $\epsilon$-badly approximable
numbers on the Cantor set?
We show that a variant of Schmidt's game known as the $potential$ $game$ is
capable of providing better bounds on the answers to these questions than the
classical Schmidt's game. We also use the potential game to provide a new proof
of an important lemma in the classical proof of the existence of Hall's Ray.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sublogarithmic Distributed Algorithms for Lovász Local Lemma, and the Complexity Hierarchy | Locally Checkable Labeling (LCL) problems include essentially all the classic
problems of $\mathsf{LOCAL}$ distributed algorithms. In a recent enlightening
revelation, Chang and Pettie [arXiv 1704.06297] showed that any LCL (on bounded
degree graphs) that has an $o(\log n)$-round randomized algorithm can be solved
in $T_{LLL}(n)$ rounds, which is the randomized complexity of solving (a
relaxed variant of) the Lovász Local Lemma (LLL) on bounded degree $n$-node
graphs. Currently, the best known upper bound on $T_{LLL}(n)$ is $O(\log n)$,
by Chung, Pettie, and Su [PODC'14], while the best known lower bound is
$\Omega(\log\log n)$, by Brandt et al. [STOC'16]. Chang and Pettie conjectured
that there should be an $O(\log\log n)$-round algorithm.
Making the first step of progress towards this conjecture, and providing a
significant improvement on the algorithm of Chung et al. [PODC'14], we prove
that $T_{LLL}(n)= 2^{O(\sqrt{\log\log n})}$. Thus, any $o(\log n)$-round
randomized distributed algorithm for any LCL problem on bounded degree graphs
can be automatically sped up to run in $2^{O(\sqrt{\log\log n})}$ rounds.
Using this improvement and a number of other ideas, we also improve the
complexity of a number of graph coloring problems (in arbitrary degree graphs)
from the $O(\log n)$-round results of Chung, Pettie and Su [PODC'14] to
$2^{O(\sqrt{\log\log n})}$. These problems include defective coloring, frugal
coloring, and list vertex-coloring.
| 1 | 0 | 0 | 0 | 0 | 0 |
Continuous User Authentication via Unlabeled Phone Movement Patterns | In this paper, we propose a novel continuous authentication system for
smartphone users. The proposed system entirely relies on unlabeled phone
movement patterns collected through the smartphone accelerometer. The data was
collected in a completely unconstrained environment over five to twelve days.
The contexts of phone usage were identified using k-means clustering. Multiple
profiles, one for each context, were created for every user. Five machine
learning algorithms were employed to classify genuine users and impostors.
The performance of the system was evaluated over a diverse population of 57
users. The mean equal error rates achieved by Logistic Regression, Neural
Network, kNN, SVM, and Random Forest were 13.7%, 13.5%, 12.1%, 10.7%, and 5.6%
respectively. A series of statistical tests were conducted to compare the
performance of the classifiers. The suitability of the proposed system for
different types of users was also investigated using the failure to enroll
policy.
| 1 | 0 | 0 | 0 | 0 | 0 |
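The pipeline in the abstract above (k-means to identify usage contexts, one per-context profile classifier, equal error rate as the metric) can be sketched with scikit-learn. Everything below is synthetic and illustrative: the feature dimensions, cluster count, choice of Random Forest, and the in-sample evaluation are assumptions, not the paper's exact protocol.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Hypothetical feature windows from the accelerometer stream: one genuine
# user against a pool of impostors (all values synthetic).
X_genuine = rng.normal(0.0, 1.0, size=(500, 12))
X_impostor = rng.normal(0.5, 1.2, size=(500, 12))
X = np.vstack([X_genuine, X_impostor])
y = np.r_[np.ones(500), np.zeros(500)]          # 1 = genuine, 0 = impostor

# Identify contexts of phone usage with k-means, then build one profile
# (classifier) per context, as the abstract describes.
contexts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for c in range(3):
    mask = contexts == c
    if len(np.unique(y[mask])) < 2:
        continue                                 # cluster lacks both classes
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = clf.fit(X[mask], y[mask]).predict_proba(X[mask])[:, 1]
    fpr, tpr, _ = roc_curve(y[mask], scores)     # in-sample, illustration only
    eer = fpr[np.argmin(np.abs(fpr - (1 - tpr)))]  # point where FAR ~ FRR
    print(f"context {c}: equal error rate ~ {eer:.3f}")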
Testing isomorphism of lattices over CM-orders | A CM-order is a reduced order equipped with an involution that mimics complex
conjugation. The Witt-Picard group of such an order is a certain group of ideal
classes that is closely related to the "minus part" of the class group. We
present a deterministic polynomial-time algorithm for the following problem,
which may be viewed as a special case of the principal ideal testing problem:
given a CM-order, decide whether two given elements of its Witt-Picard group
are equal. In order to prevent coefficient blow-up, the algorithm operates with
lattices rather than with ideals. An important ingredient is a technique
introduced by Gentry and Szydlo in a cryptographic context. Our application of
it to lattices over CM-orders hinges upon a novel existence theorem for
auxiliary ideals, which we deduce from a result of Konyagin and Pomerance in
elementary number theory.
| 1 | 0 | 1 | 0 | 0 | 0 |
How strong are correlations in strongly recurrent neuronal networks? | Cross-correlations in the activity in neural networks are commonly used to
characterize their dynamical states and their anatomical and functional
organizations. Yet, how these latter network features affect the spatiotemporal
structure of the correlations in recurrent networks is not fully understood.
Here, we develop a general theory for the emergence of correlated neuronal
activity from the dynamics in strongly recurrent networks consisting of several
populations of binary neurons. We apply this theory to the case in which the
connectivity depends on the anatomical or functional distance between the
neurons. We establish the architectural conditions under which the system
settles into a dynamical state where correlations are strong, highly robust and
spatially modulated. We show that such strong correlations arise if the network
exhibits an effective feedforward structure. We establish how this feedforward
structure determines the way correlations scale with the network size and the
degree of the connectivity. In networks lacking an effective feedforward
structure correlations are extremely small and only weakly depend on the number
of connections per neuron. Our work shows how strong correlations can be
consistent with highly irregular activity in recurrent networks, two key
features of neuronal dynamics in the central nervous system.
| 0 | 0 | 0 | 0 | 1 | 0 |
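A toy simulation makes the object of study above concrete: a recurrent network of binary neurons whose pairwise cross-correlations we estimate empirically. The coupling statistics, threshold, and Glauber-style update rule below are generic stand-ins, not the specific population architectures the paper analyzes.

import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 20000
# Random recurrent couplings with the "strong" 1/sqrt(N) scaling; toy choice.
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

s = rng.integers(0, 2, size=N).astype(float)
samples = np.empty((T, N))
for t in range(T):
    i = rng.integers(N)                    # asynchronous single-unit update
    h = J[i] @ s - 0.5                     # local field minus a threshold
    s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-4.0 * h)))
    samples[t] = s

# Equal-time pairwise cross-correlations of the binary activity.
C = np.corrcoef(samples[T // 2:].T)        # discard the transient half
off = C[~np.eye(N, dtype=bool)]
print("mean |pairwise correlation|:", np.nanmean(np.abs(off)))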
JSON: data model, query languages and schema specification | Despite the fact that JSON is currently one of the most popular formats for
exchanging data on the Web, there are very few studies on this topic and there
is no agreed-upon theoretical framework for dealing with JSON. Therefore, in
this paper we propose a formal data model for JSON documents and, based on
the common features present in available systems using JSON, we define a
lightweight query language allowing us to navigate through JSON documents. We
also introduce a logic capturing the schema proposal for JSON and study the
complexity of basic computational tasks associated with these two formalisms.
| 1 | 0 | 0 | 0 | 0 | 0 |
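The navigational flavor of the query language described above can be illustrated with a tiny evaluator over JSON-like values. The path syntax here (a list of dictionary keys, array indices, and a wildcard) is an illustrative stand-in, not the paper's actual formalism.

# A minimal evaluator for navigation paths over JSON-like values.
def navigate(doc, path):
    """Follow a sequence of keys/indices through nested dicts and lists."""
    current = [doc]
    for step in path:
        nxt = []
        for node in current:
            if isinstance(node, dict) and step in node:
                nxt.append(node[step])
            elif isinstance(node, list) and isinstance(step, int) and step < len(node):
                nxt.append(node[step])
            elif isinstance(node, list) and step == "*":   # wildcard over arrays
                nxt.extend(node)
        current = nxt
    return current

doc = {"paper": {"authors": [{"name": "A"}, {"name": "B"}], "year": 2017}}
print(navigate(doc, ["paper", "authors", "*", "name"]))  # ['A', 'B']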
A six-factor asset pricing model | The present study introduces the human capital component to the Fama and
French five-factor model, proposing an equilibrium six-factor asset pricing
model. The study employs an aggregate of four sets of portfolios mimicking size
and industry with varying dimensions. The first set consists of three sets of
six portfolios each, sorted on size to B/M, size to investment, and size to
momentum. The second set comprises five index portfolios; the third, four sets
of twenty-five portfolios each, sorted on size to B/M, size to investment, size
to profitability, and size to momentum; and the final set constitutes thirty
industry portfolios. To estimate the parameters of the six-factor asset pricing
model for the four sets of variant portfolios, we use OLS and the Generalized
Method of Moments based robust instrumental variables technique (IVGMM). The
results obtained from the relevance, endogeneity, overidentifying restrictions,
and Hausman specification tests indicate that the parameter estimates of the
six-factor model using IVGMM are robust and perform better than the OLS
approach. The human capital component shares the predictive power equally
alongside the other factors in the framework in explaining the variation in
portfolio returns. Furthermore, we assess the t-ratio of the human capital
component for each IVGMM estimate of the six-factor asset pricing model across
the four sets of variant portfolios. The t-ratios of the human capital
component in the eighty-three IVGMM estimates all exceed 3.00 with reference to
the standard proposed by Harvey et al. (2016). This indicates the empirical
success of the six-factor asset-pricing model in explaining the variation in
asset returns.
| 0 | 0 | 0 | 0 | 0 | 1 |
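The regression at the heart of the abstract above is a time-series factor model. A minimal OLS sketch on synthetic data follows (the IVGMM estimation the paper emphasizes is omitted); the factor values, true betas, and the labeling of the sixth factor as labor-income growth are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
T = 600  # months of synthetic data
# Five Fama-French-style factors plus a hypothetical human-capital factor
# (e.g., aggregate labor income growth), all simulated here.
factors = rng.normal(0.0, 0.04, size=(T, 6))
betas_true = np.array([1.0, 0.4, -0.2, 0.1, 0.05, 0.3])
excess_ret = factors @ betas_true + rng.normal(0.0, 0.02, size=T)

X = np.column_stack([np.ones(T), factors])          # intercept = alpha
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
resid = excess_ret - X @ coef
# Conventional OLS standard errors and t-ratios.
sigma2 = resid @ resid / (T - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("alpha, betas:", np.round(coef, 3))
print("t-ratios:", np.round(coef / se, 2))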
Director Field Analysis (DFA): Exploring Local White Matter Geometric Structure in diffusion MRI | In Diffusion Tensor Imaging (DTI) or High Angular Resolution Diffusion
Imaging (HARDI), a tensor field or a spherical function field (e.g., an
orientation distribution function field), can be estimated from measured
diffusion weighted images. In this paper, inspired by the microscopic
theoretical treatment of phases in liquid crystals, we introduce a novel
mathematical framework, called Director Field Analysis (DFA), to study local
geometric structural information of white matter based on the reconstructed
tensor field or spherical function field: 1) We propose a set of mathematical
tools to process general director data, which consists of dyadic tensors that
have orientations but no direction. 2) We propose Orientational Order (OO) and
Orientational Dispersion (OD) indices to describe the degree of alignment and
dispersion of a spherical function in a single voxel or in a region,
respectively; 3) We also show how to construct a local orthogonal coordinate
frame in each voxel exhibiting anisotropic diffusion; 4) Finally, we define
three indices to describe three types of orientational distortion (splay, bend,
and twist) in a local spatial neighborhood, and a total distortion index to
describe distortions of all three types. To our knowledge, this is the first
work to quantitatively describe orientational distortion (splay, bend, and
twist) in general spherical function fields from DTI or HARDI data. The
proposed DFA and its related mathematical tools can be used to process not only
diffusion MRI data but also general director field data, and the proposed
scalar indices are useful for detecting local geometric changes of white matter
for voxel-based or tract-based analysis in both DTI and HARDI acquisitions. The
related codes and a tutorial for DFA will be released in DMRITool.
| 1 | 1 | 0 | 0 | 0 | 0 |
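Because the abstract above borrows its Orientational Order index from the liquid-crystal treatment of directors, the classic Q-tensor order parameter gives a concrete feel for it. The sketch below computes that standard order parameter from sampled directors; the paper's exact OO/OD normalizations may differ.

import numpy as np

def orientational_order(directions):
    """Scalar order parameter from directors (orientation but no direction).

    Builds the mean dyadic (Q-) tensor <n n^T> - I/3; since n and -n give
    the same dyadic, the sign ambiguity of directors is handled naturally.
    The largest eigenvalue scaled by 3/2 is the classic liquid-crystal order
    parameter: 1 for perfect alignment, near 0 for an isotropic sample.
    """
    n = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    Q = np.einsum('ki,kj->ij', n, n) / len(n) - np.eye(3) / 3.0
    return 1.5 * np.linalg.eigvalsh(Q)[-1]

rng = np.random.default_rng(4)
aligned = np.array([0, 0, 1.0]) + 0.1 * rng.normal(size=(1000, 3))
isotropic = rng.normal(size=(1000, 3))
print(orientational_order(aligned))    # close to 1: low dispersion
print(orientational_order(isotropic))  # close to 0: high dispersion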
A Multi-Objective Deep Reinforcement Learning Framework | This paper presents a new multi-objective deep reinforcement learning (MODRL)
framework based on deep Q-networks. We propose the use of linear and non-linear
methods to develop the MODRL framework that includes both single-policy and
multi-policy strategies. The experimental results on two benchmark problems
including the two-objective deep sea treasure environment and the
three-objective mountain car problem indicate that the proposed framework is
able to converge to the optimal Pareto solutions effectively. The proposed
framework is generic, which allows implementation of different deep
reinforcement learning algorithms in different complex environments. This
therefore overcomes many difficulties involved with standard multi-objective
reinforcement learning (MORL) methods existing in the current literature. The
framework creates a platform as a testbed environment to develop methods for
solving various problems associated with current MORL methods. Details of the
framework implementation are available at
this http URL.
| 0 | 0 | 0 | 1 | 0 | 0 |
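The "linear method" mentioned above typically means scalarizing a vector-valued value function with preference weights. A tiny tabular sketch of that idea follows (the paper works with deep Q-networks, not tables); the environment, weights, and learning constants are all hypothetical.

import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, n_objectives = 10, 4, 2
Q = np.zeros((n_states, n_actions, n_objectives))   # vector-valued Q-table
w = np.array([0.7, 0.3])                            # preference weights
alpha, gamma = 0.1, 0.95

def step(s, a):
    """Hypothetical environment: random next state and two reward signals
    (pure noise here; a real environment would depend on s and a)."""
    return rng.integers(n_states), rng.normal(size=n_objectives)

s = 0
for _ in range(5000):
    # Epsilon-greedy action under the linearly scalarized Q-values.
    a = int(np.argmax(Q[s] @ w)) if rng.random() > 0.1 else int(rng.integers(n_actions))
    s_next, r = step(s, a)
    a_next = int(np.argmax(Q[s_next] @ w))
    # Component-wise TD update toward the scalarization-greedy successor.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
    s = s_next
print("scalarized values of state 0:", Q[0] @ w)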
Advantages of versatile neural-network decoding for topological codes | Finding optimal correction of errors in generic stabilizer codes is a
computationally hard problem, even for simple noise models. While this task can
be simplified for codes with some structure, such as topological stabilizer
codes, developing good and efficient decoders still remains a challenge. In our
work, we systematically study a very versatile class of decoders based on
feedforward neural networks. To demonstrate adaptability, we apply neural
decoders to the triangular color and toric codes under various noise models
with realistic features, such as spatially-correlated errors. We report that
neural decoders provide significant improvement over leading efficient decoders
in terms of the error-correction threshold. Using neural networks simplifies
the process of designing well-performing decoders, and does not require prior
knowledge of the underlying noise model.
| 0 | 0 | 0 | 1 | 0 | 0 |
A retrieval-based dialogue system utilizing utterance and context embeddings | Finding semantically rich and computer-understandable representations for
textual dialogues, utterances and words is crucial for dialogue systems (or
conversational agents), as their performance mostly depends on understanding
the context of conversations. Recent research aims at finding distributed
vector representations (embeddings) for words, such that semantically similar
words are relatively close within the vector-space. Encoding the "meaning" of
text into vectors is a current trend, and text can range from words, phrases
and documents to actual human-to-human conversations. In recent research
approaches, responses have been generated utilizing a decoder architecture,
given the vector representation of the current conversation. In this paper, the
utilization of embeddings for answer retrieval is explored by using
Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor
(ANN) model, to find similar conversations in a corpus and rank possible
candidates. Experimental results on the well-known Ubuntu Corpus (in English)
and a customer service chat dataset (in Dutch) show that, in combination with a
candidate selection method, retrieval-based approaches outperform generative
ones and reveal promising future research directions towards the usability of
such a system.
| 1 | 0 | 0 | 0 | 0 | 0 |
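The retrieval step described above is nearest-neighbor search over utterance embeddings. A minimal sketch follows; note the paper uses LSH Forest, but scikit-learn removed LSHForest in version 0.21, so exact cosine nearest neighbors stand in for the approximate index here. The embeddings and corpus are synthetic placeholders.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
# Hypothetical utterance embeddings for a corpus of past conversations
# (in practice: averaged word vectors or a learned context encoder).
corpus_embeddings = rng.normal(size=(10000, 100))
corpus_responses = [f"response_{i}" for i in range(10000)]

# Exact cosine neighbors as a drop-in stand-in for the LSH Forest ANN index.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(corpus_embeddings)

query = rng.normal(size=(1, 100))       # embedding of the current context
dist, idx = index.kneighbors(query)
candidates = [corpus_responses[i] for i in idx[0]]
print(candidates)  # ranked candidates for the selection/re-ranking step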
Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network | For a safe, natural and effective human-robot social interaction, it is
essential to develop a system that allows a robot to demonstrate the
perceivable responsive behaviors to complex human behaviors. We introduce the
Multimodal Deep Attention Recurrent Q-Network (MDARQN), with which the robot
exhibits human-like social interaction skills after 14 days of interacting with
people in an uncontrolled real-world setting. On each of the 14 days, the
system gathered robot-interaction experiences with people through a
trial-and-error method and then trained the MDARQN on these experiences using
an end-to-end reinforcement learning approach. The results of interaction-based
learning indicate that the robot has learned to respond to complex human
behaviors in a perceivable and socially acceptable manner.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sparse Gaussian Processes for Continuous-Time Trajectory Estimation on Matrix Lie Groups | Continuous-time trajectory representations are a powerful tool that can be
used to address several issues in many practical simultaneous localization and
mapping (SLAM) scenarios, such as continuously collected measurements distorted
by robot motion or asynchronous sensor measurements. Sparse Gaussian
processes (GP) allow for a probabilistic non-parametric trajectory
representation that enables fast trajectory estimation by sparse GP regression.
However, previous approaches are limited to dealing with vector space
representations of state only. In this technical report we extend the work by
Barfoot et al. [1] to general matrix Lie groups, by applying constant-velocity
prior, and defining locally linear GP. This enables using sparse GP approach in
a large space of practical SLAM settings. In this report we give the theory and
leave the experimental evaluation to future publications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Testing homogeneity of proportions from sparse binomial data with a large number of groups | In this paper, we consider testing the homogeneity for proportions in
independent binomial distributions, especially when data are sparse for a large
number of groups. We cover broad aspects of our proposed tests, including
theoretical studies, simulations, and a real data application. We present the
asymptotic null distributions and asymptotic powers for our proposed tests and
compare their performance with existing tests. Our simulation studies show that
none of the tests dominates the others; however, our proposed test and a few
others are expected to control the given sizes and obtain significant power. We
also present a real example regarding safety concerns associated with Avandia
(rosiglitazone) from Nissen and Wolski (2007).
| 0 | 0 | 1 | 1 | 0 | 0 |
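The classical baseline for the problem above is the chi-square test of homogeneity, whose asymptotics are exactly what become unreliable in the sparse, many-groups regime. A quick illustration of that setting follows (this is the baseline, not the paper's proposed statistic); group sizes and the common proportion are made up.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)
k = 200                                  # large number of groups
n = rng.integers(3, 8, size=k)           # sparse: few observations per group
p_true = 0.05                            # common proportion under the null
successes = rng.binomial(n, p_true)

# Classical chi-square homogeneity test on the k x 2 table; with expected
# cell counts this small its chi-square approximation is questionable,
# which is the regime the abstract targets.
table = np.column_stack([successes, n - successes])
stat, pval, dof, _ = chi2_contingency(table)
print(f"chi2 = {stat:.1f}, df = {dof}, p = {pval:.3f}")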
Henkin measures for the Drury-Arveson space | We exhibit Borel probability measures on the unit sphere in $\mathbb C^d$ for
$d \ge 2$ which are Henkin for the multiplier algebra of the Drury-Arveson
space, but not Henkin in the classical sense. This provides a negative answer
to a conjecture of Clouâtre and Davidson.
| 0 | 0 | 1 | 0 | 0 | 0 |
Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery | PCA is one of the most widely used dimension reduction techniques. A related
easier problem is "subspace learning" or "subspace estimation". Given
relatively clean data, both are easily solved via singular value decomposition
(SVD). The problem of subspace learning or PCA in the presence of outliers is
called robust subspace learning or robust PCA (RPCA). For long data sequences,
if one tries to use a single lower dimensional subspace to represent the data,
the required subspace dimension may end up being quite large. For such data, a
better model is to assume that it lies in a low-dimensional subspace that can
change over time, albeit gradually. The problem of tracking such data (and the
subspaces) while being robust to outliers is called robust subspace tracking
(RST). This article provides a magazine-style overview of the entire field of
robust subspace learning and tracking. In particular, solutions for three
problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition
(S+LR), RST via S+LR, and "robust subspace recovery (RSR)". RSR assumes that an
entire data vector is either an outlier or an inlier. The S+LR formulation
instead assumes that outliers occur on only a few data vector indices and hence
are well modeled as sparse corruptions.
| 0 | 0 | 0 | 1 | 0 | 0 |
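The S+LR formulation above can be made concrete with a bare-bones alternating heuristic: a rank-r SVD step recovers L, and the largest-magnitude residuals are kept as the sparse term S. This is only a sketch; the algorithms the article surveys (e.g. principal component pursuit) solve convex programs with recovery guarantees that this loop does not have.

import numpy as np

def sl_decompose(M, rank=2, n_outliers=150, n_iter=50):
    """Minimal alternating sparse+low-rank sketch for M = L + S."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Best rank-r approximation of the non-sparse residual.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Keep the largest-magnitude residuals as the sparse outliers.
        R = M - L
        S = np.zeros_like(M)
        idx = np.argsort(np.abs(R), axis=None)[-n_outliers:]
        S.flat[idx] = R.flat[idx]
    return L, S

rng = np.random.default_rng(8)
L0 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 60))      # low rank
mask = rng.random((50, 60)) < 0.05                            # sparse support
S0 = np.zeros((50, 60)); S0[mask] = rng.normal(0, 10, mask.sum())
L, S = sl_decompose(L0 + S0, rank=2, n_outliers=int(mask.sum()))
print("relative low-rank error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))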
Discriminative Bimodal Networks for Visual Localization and Detection with Natural Language Queries | Associating image regions with text queries has been recently explored as a
new way to bridge visual and linguistic representations. A few pioneering
approaches have been proposed based on recurrent neural language models trained
generatively (e.g., generating captions), but achieving somewhat limited
localization accuracy. To better address natural-language-based visual entity
localization, we propose a discriminative approach. We formulate a
discriminative bimodal neural network (DBNet), which can be trained by a
classifier with extensive use of negative samples. Our training objective
encourages better localization on single images, incorporates text phrases in a
broad range, and properly pairs image regions with text phrases into positive
and negative examples. Experiments on the Visual Genome dataset demonstrate the
proposed DBNet significantly outperforms previous state-of-the-art methods both
for localization on single images and for detection on multiple images. We
also establish an evaluation protocol for natural-language visual detection.
| 1 | 0 | 0 | 1 | 0 | 0 |
The failure of rational dilation on the symmetrized $n$-disk for any $n\geq 3$ | The open and closed \textit{symmetrized polydisc}, or \textit{symmetrized
$n$-disc}, for $n\geq 2$ are the following subsets of $\mathbb C^n$:
\begin{align*}
\mathbb G_n &=\left\{ \left(\sum_{1\leq i\leq n} z_i,\sum_{1\leq i<j\leq n}z_iz_j,\dots, \prod_{i=1}^n z_i \right): \,|z_i|< 1, \; i=1,\dots,n \right\}, \\
\Gamma_n &=\left\{ \left(\sum_{1\leq i\leq n} z_i,\sum_{1\leq i<j\leq n}z_iz_j,\dots, \prod_{i=1}^n z_i \right): \,|z_i|\leq 1, \; i=1,\dots,n \right\}.
\end{align*}
A tuple of $n$ commuting operators $(S_1,\dots,S_{n-1},P)$ defined on a Hilbert
space $\mathcal H$ for which $\Gamma_n$ is a spectral set is called a
$\Gamma_n$-contraction. In this article, we show by a counterexample that
rational dilation fails on the symmetrized $n$-disc for any $n\geq 3$. We find
new characterizations for the points in $\mathbb G_n$ and $\Gamma_n$. We also
present a few new characterizations for the $\Gamma_n$-unitaries and
$\Gamma_n$-isometries.
| 0 | 0 | 1 | 0 | 0 | 0 |
Particle-without-Particle: a practical pseudospectral collocation method for linear partial differential equations with distributional sources | Partial differential equations with distributional sources---in particular,
involving (derivatives of) delta distributions---have become increasingly
ubiquitous in numerous areas of physics and applied mathematics. It is often of
considerable interest to obtain numerical solutions for such equations, but any
singular ("particle"-like) source modeling invariably introduces nontrivial
computational obstacles. A common method to circumvent these is through some
form of delta function approximation procedure on the computational grid;
however, this often carries significant limitations on the efficiency of the
numerical convergence rates, or sometimes even the resolvability of the problem
at all.
In this paper, we present an alternative technique for tackling such
equations which avoids the singular behavior entirely: the
"Particle-without-Particle" method. Previously introduced in the context of the
self-force problem in gravitational physics, the idea is to discretize the
computational domain into two (or more) disjoint pseudospectral
(Chebyshev-Lobatto) grids such that the "particle" is always at the interface
between them; thus, one only needs to solve homogeneous equations in each
domain, with the source effectively replaced by jump (boundary) conditions
thereon. We prove here that this method yields solutions to any linear PDE the
source of which is any linear combination of delta distributions and
derivatives thereof supported on a one-dimensional subspace of the problem
domain. We then implement it to numerically solve a variety of relevant PDEs:
hyperbolic (with applications to neuroscience and acoustics), parabolic (with
applications to finance), and elliptic. We generically obtain improved
convergence rates relative to typical past implementations relying on delta
function approximations.
| 0 | 0 | 0 | 0 | 1 | 1 |
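The interface idea above is easy to exercise on a one-dimensional elliptic toy problem: u'' = delta(x) on [-1, 1] with u(-1) = u(1) = 0, whose exact solution is u(x) = (|x| - 1)/2. Two Chebyshev-Lobatto subdomains meet at the particle; each carries the homogeneous equation, and the delta source becomes the jump condition [u'] = 1. The problem choice and resolution below are illustrative, not taken from the paper.

import numpy as np

def cheb(n):
    """Chebyshev-Lobatto points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.r_[2.0, np.ones(n - 1), 2.0] * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    return D - np.diag(D.sum(axis=1)), x

n = 16; m = n + 1
D, xr = cheb(n)
Dl = 2.0 * D; xl = (xr - 1.0) / 2.0     # left subdomain  [-1, 0]
Dr = 2.0 * D; xright = (xr + 1.0) / 2.0 # right subdomain [0, 1]

A = np.zeros((2 * m, 2 * m)); b = np.zeros(2 * m)
A[:m, :m] = Dl @ Dl                     # u'' = 0 collocation rows (left)
A[m:, m:] = Dr @ Dr                     # u'' = 0 collocation rows (right)
# Node 0 is the right end of each subdomain, node n its left end.
A[0] = 0.0;     A[0, 0] = 1.0; A[0, 2 * m - 1] = -1.0   # continuity at x = 0
A[m - 1] = 0.0; A[m - 1, m - 1] = 1.0                    # u(-1) = 0
A[m] = 0.0;     A[m, m] = 1.0                            # u(+1) = 0
A[2 * m - 1] = 0.0                                       # jump condition:
A[2 * m - 1, m:] = Dr[n]; A[2 * m - 1, :m] = -Dl[0]      # u'_R(0) - u'_L(0) = 1
b[2 * m - 1] = 1.0

u = np.linalg.solve(A, b)
exact = (np.abs(np.r_[xl, xright]) - 1.0) / 2.0
print("max error:", np.abs(u - exact).max())   # ~ machine precision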
X-Cube Fracton Model on Generic Lattices: Phases and Geometric Order | Fracton order is a new kind of quantum order characterized by topological
excitations that exhibit remarkable mobility restrictions and a robust ground
state degeneracy (GSD) which can increase exponentially with system size. In
this paper, we present a generic lattice construction (in three dimensions) for
a generalized X-cube model of fracton order, where the mobility restrictions of
the subdimensional particles inherit the geometry of the lattice. This helps
explain a previous result that lattice curvature can produce a robust GSD, even
on a manifold with trivial topology. We provide explicit examples to show that
the (zero temperature) phase of matter is sensitive to the lattice geometry. In
one example, the lattice geometry confines the dimension-1 particles to small
loops, which allows the fractons to be fully mobile charges, and the resulting
phase is equivalent to (3+1)-dimensional toric code. However, the phase is
sensitive to more than just lattice curvature; different lattices without
curvature (e.g. cubic or stacked kagome lattices) also result in different
phases of matter, which are separated by phase transitions. Counterintuitively,
however, according to a previous definition of phase [Chen, Gu, Wen 2010], even
just a rotated or rescaled cubic lattice results in different phases of matter,
which motivates us to propose a new and coarser definition of phase for gapped
ground states and fracton order. The new equivalence relation between ground
states is given by the composition of a local unitary transformation and a
quasi-isometry (which can rotate and rescale the lattice); equivalently, ground
states are in the same phase if they can be adiabatically connected by varying
both the Hamiltonian and the positions of the degrees of freedom (via a
quasi-isometry). In light of the importance of geometry, we further propose
that fracton orders should be regarded as a geometric order.
| 0 | 1 | 0 | 0 | 0 | 0 |
Genetic Algorithm for Epidemic Mitigation by Removing Relationships | Min-SEIS-Cluster is an optimization problem which aims at minimizing the
infection spreading in networks. In this problem, nodes can be susceptible to
an infection, exposed to an infection, or infectious. One of the main features
of this problem is the fact that nodes have different dynamics when interacting
with other nodes from the same community. Thus, the problem is characterized by
distinct probabilities of infecting nodes from both the same and from different
communities. This paper presents a new genetic algorithm that solves the
Min-SEIS-Cluster problem. This genetic algorithm significantly outperformed the
previous best heuristic for the problem, reducing the number of infected nodes
during the simulation of the epidemics. The results therefore suggest that our
new genetic algorithm is the state-of-the-art heuristic to solve this problem.
| 1 | 0 | 1 | 0 | 0 | 0 |
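A stripped-down genetic algorithm for the edge-removal idea above is sketched here. The fitness is a deterministic SI flood from a single seed, a stand-in for the paper's SEIS simulation with distinct intra- and inter-community infection probabilities; the graph, budget, and operator choices are all illustrative.

import numpy as np

rng = np.random.default_rng(9)
n_nodes, n_edges, budget = 60, 180, 20
edges = set()
while len(edges) < n_edges:                      # random toy contact graph
    u, v = sorted(rng.integers(n_nodes, size=2))
    if u != v:
        edges.add((u, v))
edges = list(edges)

def infections(ind):
    """Deterministic SI flood from node 0 after removing selected edges."""
    adj = [[] for _ in range(n_nodes)]
    for i, (u, v) in enumerate(edges):
        if not ind[i]:
            adj[u].append(v); adj[v].append(u)
    seen, frontier = {0}, [0]
    while frontier:
        frontier = [w for v in frontier for w in adj[v] if w not in seen]
        seen.update(frontier)
    return len(seen)

def random_individual():
    ind = np.zeros(n_edges, dtype=int)           # 1 = remove this edge
    ind[rng.choice(n_edges, budget, replace=False)] = 1
    return ind

pop = [random_individual() for _ in range(30)]
for gen in range(40):
    pop.sort(key=infections)                     # fewer infections = fitter
    parents, children = pop[:10], []
    while len(children) < 30:
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(n_edges) < 0.5, a, b)  # uniform crossover
        ones, zeros = np.flatnonzero(child), np.flatnonzero(child == 0)
        if len(ones) > budget:                   # repair to the edge budget
            child[rng.choice(ones, len(ones) - budget, replace=False)] = 0
        elif len(ones) < budget:
            child[rng.choice(zeros, budget - len(ones), replace=False)] = 1
        j, k = rng.integers(n_edges, size=2)     # mutation: swap two genes
        child[j], child[k] = child[k], child[j]
        children.append(child)
    pop = children
print("infected nodes under best removal:", infections(min(pop, key=infections)))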
Angpow: a software for the fast computation of accurate tomographic power spectra | The statistical distribution of galaxies is a powerful probe to constrain
cosmological models and gravity. In particular the matter power spectrum $P(k)$
brings information about the cosmological distance evolution and the galaxy
clustering together. However the building of $P(k)$ from galaxy catalogues
needs a cosmological model to convert angles on the sky and redshifts into
distances, which leads to difficulties when comparing data with predicted
$P(k)$ from other cosmological models, and for photometric surveys like LSST.
The angular power spectrum $C_\ell(z_1,z_2)$ between two bins located at
redshift $z_1$ and $z_2$ contains the same information as the matter power
spectrum and is free from any cosmological assumption, but the prediction of
$C_\ell(z_1,z_2)$ from $P(k)$ is a costly computation when performed exactly.
The Angpow software aims at computing quickly and accurately the auto
($z_1=z_2$) and cross ($z_1 \neq z_2$) angular power spectra between redshift
bins. We describe the developed algorithm, based on developments on the
Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. We
validate the results with other codes, and benchmark the performance. Angpow is
flexible and can handle any user defined power spectra, transfer functions, and
redshift selection windows. The code is fast enough to be embedded inside
programs exploring large cosmological parameter spaces through the
$C_\ell(z_1,z_2)$ comparison with data. We emphasize that the Limber's
approximation, often used to speed up the computation, gives wrong $C_\ell$
values for cross-correlations.
| 0 | 1 | 0 | 0 | 0 | 0 |
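The Clenshaw-Curtis quadrature named above is a standard ingredient that is easy to exhibit on its own. The sketch below ports the classic weight construction (Trefethen's clencurt) and checks it on a smooth integrand; it illustrates the quadrature rule only, not Angpow's full C_ell pipeline.

import numpy as np

def clenshaw_curtis(n):
    """Clenshaw-Curtis nodes and weights on [-1, 1]."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n ** 2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k ** 2 - 1)
        v -= np.cos(n * theta[1:-1]) / (n ** 2 - 1)
    else:
        w[0] = w[n] = 1.0 / n ** 2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k ** 2 - 1)
    w[1:-1] = 2.0 * v / n
    return x, w

x, w = clenshaw_curtis(32)
print(w @ np.exp(x), np.e - 1.0 / np.e)   # integral of e^x over [-1, 1]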
Robust and Flexible Estimation of Stochastic Mediation Effects: A Proposed Method and Example in a Randomized Trial Setting | Causal mediation analysis can improve understanding of the mechanisms
underlying epidemiologic associations. However, the utility of natural direct
and indirect effect estimation has been limited by the assumption of no
confounder of the mediator-outcome relationship that is affected by prior
exposure---an assumption frequently violated in practice. We build on recent
work that identified alternative estimands that do not require this assumption
and propose a flexible and doubly robust semiparametric targeted minimum
loss-based estimator for data-dependent stochastic direct and indirect effects.
The proposed method treats the intermediate confounder affected by prior
exposure as a time-varying confounder and intervenes stochastically on the
mediator using a distribution which conditions on baseline covariates and
marginalizes over the intermediate confounder. In addition, we assume the
stochastic intervention is given, conditional on observed data, which results
in a simpler estimator and weaker identification assumptions. We demonstrate
the estimator's finite sample and robustness properties in a simple simulation
study. We apply the method to an example from the Moving to Opportunity
experiment. In this application, randomization to receive a housing voucher is
the treatment/instrument that influenced moving to a low-poverty neighborhood,
which is the intermediate confounder. We estimate the data-dependent stochastic
direct effect of randomization to the voucher group on adolescent marijuana use
not mediated by change in school district and the stochastic indirect effect
mediated by change in school district. We find no evidence of mediation. Our
estimator is easy to implement in standard statistical software, and we provide
annotated R code to further lower implementation barriers.
| 0 | 0 | 0 | 1 | 0 | 0 |
Radial anisotropy in omega Cen limiting the room for an intermediate-mass black hole | Finding an intermediate-mass black hole (IMBH) in a globular cluster (or
proving its absence) would provide valuable insights into our understanding of
galaxy formation and evolution. However, it is challenging to identify a unique
signature of an IMBH that cannot be accounted for by other processes.
Observational claims of IMBH detection are indeed often based on analyses of
the kinematics of stars in the cluster core, the most common signature being a
rise in the velocity dispersion profile towards the centre of the system.
Unfortunately, this IMBH signal is degenerate with the presence of
radially-biased pressure anisotropy in the globular cluster. To explore the
role of anisotropy in shaping the observational kinematics of clusters, we
analyse the case of omega Cen by comparing the observed profiles to those
calculated from the family of LIMEPY models, which account for the presence of
anisotropy in the system in a physically motivated way. The best-fit radially
anisotropic models reproduce the observational profiles well, and describe the
central kinematics as derived from Hubble Space Telescope proper motions
without the need for an IMBH.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? | A successful grasp requires careful balancing of the contact forces. Deducing
whether a particular grasp will be successful from indirect measurements, such
as vision, is therefore quite challenging, and direct sensing of contacts
through touch sensing provides an appealing avenue toward more successful and
consistent robotic grasping. However, in order to fully evaluate the value of
touch sensing for grasp outcome prediction, we must understand how touch
sensing can influence outcome prediction accuracy when combined with other
modalities. Doing so using conventional model-based techniques is exceptionally
difficult. In this work, we investigate the question of whether touch sensing
aids in predicting grasp outcomes within a multimodal sensing framework that
combines vision and touch. To that end, we collected more than 9,000 grasping
trials using a two-finger gripper equipped with GelSight high-resolution
tactile sensors on each finger, and evaluated visuo-tactile deep neural network
models to directly predict grasp outcomes from either modality individually,
and from both modalities together. Our experimental results indicate that
incorporating tactile readings substantially improves grasping performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
On the essential spectrum of elliptic differential operators | Let $\mathcal{A}$ be a $C^*$-algebra of bounded uniformly continuous
functions on $X=\mathbb{R}^d$ such that $\mathcal{A}$ is stable under
translations and contains the continuous functions that have a limit at
infinity. Denote by $\mathcal{A}^\dagger$ the boundary of $X$ in the character
space of $\mathcal{A}$. Then the crossed product
$\mathscr{A}=\mathcal{A}\rtimes X$ of $\mathcal{A}$ by the natural action of
$X$ on $\mathcal{A}$ is a well defined $C^*$-algebra and to each operator
$A\in\mathscr{A}$ one may naturally associate a family of bounded operators
$A_\varkappa$ on $L^2(X)$ indexed by the characters
$\varkappa\in\mathcal{A}^\dagger$. We show that the essential spectrum of $A$
is the union of the spectra of the operators $A_\varkappa$. The applications
cover very general classes of singular elliptic operators.
| 0 | 0 | 1 | 0 | 0 | 0 |
Shrinking Horizon Model Predictive Control with Signal Temporal Logic Constraints under Stochastic Disturbances | We present Shrinking Horizon Model Predictive Control (SHMPC) for
discrete-time linear systems with Signal Temporal Logic (STL) specification
constraints under stochastic disturbances. The control objective is to maximize
an optimization function under the restriction that a given STL specification
is satisfied with high probability against stochastic uncertainties. We
formulate a general solution, which does not require precise knowledge of the
probability distributions of the (possibly dependent) stochastic disturbances;
only the bounded support intervals of the density functions and moment
intervals are used. For the specific case of disturbances that are independent
and normally distributed, we optimize the controllers further by utilizing
knowledge of the disturbance probability distributions. We show that in both
cases, the control law can be obtained by solving optimization problems with
linear constraints at each step. We experimentally demonstrate the effectiveness
this approach by synthesizing a controller for an HVAC system.
| 1 | 0 | 1 | 0 | 0 | 0 |
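The shrinking-horizon loop with per-step linear programs described above can be sketched on a scalar system. The specification below is a plain hard state bound tightened by the worst-case accumulated disturbance, a crude deterministic surrogate for the paper's probabilistic STL constraints; the dynamics and numbers are made up.

import numpy as np
from scipy.optimize import linprog

# Scalar system x_{t+1} = a x_t + b u_t + w_t with |w_t| <= w_bound.
a, b, x_max, w_bound, N = 1.0, 0.5, 5.0, 0.1, 10
rng = np.random.default_rng(10)

x = 0.0
for t in range(N):
    H = N - t                                   # horizon shrinks every step
    A_ub, b_ub = [], []
    for k in range(1, H + 1):                   # constrain nominal x_{t+k}
        A_ub.append([a ** (k - 1 - j) * b if j < k else 0.0 for j in range(H)])
        margin = w_bound * sum(a ** i for i in range(k))   # tightening
        b_ub.append(x_max - margin - a ** k * x)
    c = -np.array(A_ub[-1])                     # maximize nominal final state
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(-1, 1)] * H)
    x = a * x + b * res.x[0] + rng.uniform(-w_bound, w_bound)
print(f"final state {x:.2f} (bound {x_max})")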
Novel processes and metrics for a scientific evaluation rooted in the principles of science - Version 1 | Scientific evaluation is a determinant of how scientists, institutions and
funders behave, and as such is a key element in the making of science. In this
article, we propose an alternative to the current norm of evaluating research
with journal rank. Following a well-defined notion of scientific value, we
introduce qualitative processes that can also be quantified and give rise to
meaningful and easy-to-use article-level metrics. In our approach, the goal of
a scientist is transformed from convincing an editorial board through a
vertical process to convincing peers through a horizontal one. We argue that
such an evaluation system naturally provides the incentives and logic needed to
constantly promote quality, reproducibility, openness and collaboration in
science. The system is legally and technically feasible and can gradually lead
to the self-organized reappropriation of the scientific process by the
scholarly community and its institutions. We propose an implementation of our
evaluation system with the platform "the Self-Journals of Science"
(www.sjscience.org).
| 1 | 0 | 0 | 0 | 0 | 0 |
Topological quantization of energy transport in micro- and nano-mechanical lattices | Topological effects typically discussed in the context of quantum physics are
emerging as one of the central paradigms of physics. Here, we demonstrate the
role of topology in energy transport through dimerized micro- and
nano-mechanical lattices in the classical regime, i.e., essentially "masses and
springs". We show that the thermal conductance factorizes into topological and
non-topological components. The former takes on three discrete values and
arises due to the appearance of edge modes that prevent good contact between
the heat reservoirs and the bulk, giving a length-independent reduction of the
conductance. In essence, energy input at the boundary mostly stays there, an
effect robust against disorder and nonlinearity. These results bridge two
seemingly disconnected disciplines of physics, namely topology and thermal
transport, and suggest ways to engineer thermal contacts, opening a direction
to explore the ramifications of topological properties on nanoscale technology.
| 0 | 1 | 0 | 0 | 0 | 0 |
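The "masses and springs" picture above can be reproduced numerically for a dimerized chain: the stiffness-matrix spectrum has a band gap, and with the pattern terminated on the weak bonds, SSH-like modes appear inside the gap, localized at the edges. This is a toy illustration of the edge modes the abstract invokes, not the paper's transport calculation; all parameters are made up.

import numpy as np

# Unit masses between fixed walls, with n+1 alternating springs
# k2, k1, k2, ... (k2 > k1), so the inter-mass bonds start and end weak.
n, k1, k2 = 60, 1.0, 2.0
springs = np.array([k2 if i % 2 == 0 else k1 for i in range(n + 1)])
K = np.diag(springs[:-1] + springs[1:])        # each mass touches two springs
for i in range(n - 1):
    K[i, i + 1] = K[i + 1, i] = -springs[i + 1]

w2, modes = np.linalg.eigh(K)                  # omega^2 and mode shapes
ipr = (modes ** 4).sum(axis=0)                 # flags spatially localized modes
for val, p in zip(w2, ipr):
    # Bulk bands end at omega^2 = 2*k1 and 2*k2; look inside the gap.
    if 2 * k1 < val < 2 * k2 and p > 0.1:
        print(f"omega^2 = {val:.3f}, IPR = {p:.2f}  (edge-localized mode)")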
Environmental impact assessment for climate change policy with the simulation-based integrated assessment model E3ME-FTT-GENIE | A high degree of consensus exists in the climate sciences over the role that
human interference with the atmosphere is playing in changing the climate.
Following the Paris Agreement, a similar consensus exists in the policy
community over the urgency of policy solutions to the climate problem. The
context for climate policy is thus moving from agenda setting, which has now
been mostly established, to impact assessment, in which we identify policy
pathways to implement the Paris Agreement. Most integrated assessment models
currently used to address the economic and technical feasibility of avoiding
climate change are based on engineering perspectives with a normative systems
optimisation philosophy, suitable for agenda setting, but unsuitable to assess
the socio-economic impacts of realistic baskets of climate policies. Here, we
introduce a fully descriptive, simulation-based integrated assessment model
designed specifically to assess policies, formed by the combination of (1) a
highly disaggregated macro-econometric simulation of the global economy based
on time series regressions (E3ME), (2) a family of bottom-up evolutionary
simulations of technology diffusion based on cross-sectional discrete choice
models (FTT), and (3) a carbon cycle and atmosphere circulation model of
intermediate complexity (GENIE-1). We use this combined model to create a
detailed global and sectoral policy map and scenario that sets the economy on a
pathway that achieves the goals of the Paris Agreement with >66% probability of
not exceeding 2$^\circ$C of global warming. We propose a blueprint for a new
role for integrated assessment models in this upcoming policy assessment
context.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nested Convex Bodies are Chaseable | In the Convex Body Chasing problem, we are given an initial point $v_0$ in
$R^d$ and an online sequence of $n$ convex bodies $F_1, ..., F_n$. When we
receive $F_i$, we are required to move inside $F_i$. Our goal is to minimize
the total distance travelled. This fundamental online problem was first studied
by Friedman and Linial (DCG 1993). They proved an $\Omega(\sqrt{d})$ lower
bound on the competitive ratio, and conjectured that a competitive ratio
depending only on d is possible. However, despite much interest in the problem,
the conjecture remains wide open.
We consider the setting in which the convex bodies are nested: $F_1 \supset
... \supset F_n$. The nested setting is closely related to extending the online
LP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints.
Moreover, this setting retains much of the difficulty of the general setting
and captures an essential obstacle in resolving Friedman and Linial's
conjecture. In this work, we give the first $f(d)$-competitive algorithm for
chasing nested convex bodies in $R^d$.
| 1 | 0 | 0 | 0 | 0 | 0 |
Analytic properties of approximate lattices | We introduce a notion of cocycle-induction for strong uniform approximate
lattices in locally compact second countable groups and use it to relate
(relative) Kazhdan- and Haagerup-type of approximate lattices to the
corresponding properties of the ambient locally compact groups. Our approach
applies to large classes of uniform approximate lattices (though not all of
them) and is flexible enough to cover the $L^p$-versions of Property (FH) and
a-(FH)-menability as well as quasified versions thereof à la Burger--Monod and
Ozawa.
| 0 | 0 | 1 | 0 | 0 | 0 |
Auxiliary Variables in TLA+ | Auxiliary variables are often needed for verifying that an implementation is
correct with respect to a higher-level specification. They augment the formal
description of the implementation without changing its semantics--that is, the
set of behaviors that it describes. This paper explains rules for adding
history, prophecy, and stuttering variables to TLA+ specifications, ensuring
that the augmented specification is equivalent to the original one. The rules
are explained with toy examples, and they are used to verify the correctness of
a simplified version of a snapshot algorithm due to Afek et al.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Homological model for the coloured Jones polynomials | In this paper we will present a homological model for Coloured Jones
Polynomials. For each color $N \in \mathbb{N}$, we will describe the invariant
$J_N(L,q)$ as a graded intersection pairing of certain homological classes in a
covering of the configuration space on the punctured disk. This construction is
based on the Lawrence representation and a result due to Kohno that relates
quantum representations and homological representations of the braid groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extremes of threshold-dependent Gaussian processes | In this contribution we are concerned with the asymptotic behaviour as $u\to
\infty$ of $\mathbb{P}\{\sup_{t\in [0,T]} X_u(t)> u\}$, where $X_u(t),t\in
[0,T],u>0$ is a family of centered Gaussian processes with continuous
trajectories. A key application of our findings concerns
$\mathbb{P}\{\sup_{t\in [0,T]} (X(t)+ g(t))> u\}$ as $u\to\infty$, for $X$ a
centered Gaussian process and $g$ some measurable trend function. Further
applications include the approximation of both the ruin time and the ruin
probability of the Brownian motion risk model with constant force of interest.
| 0 | 0 | 1 | 1 | 0 | 0 |
The null hypothesis of common jumps in case of irregular and asynchronous observations | This paper proposes novel tests for the absence of jumps in a univariate
semimartingale and for the absence of common jumps in a bivariate
semimartingale. Our methods rely on ratio statistics of power variations based
on irregular observations, sampled at different frequencies. We develop central
limit theorems for the statistics under the respective null hypotheses and
apply bootstrap procedures to assess the limiting distributions. Further we
define corrected statistics to improve the finite sample performance.
Simulations show that the test based on our corrected statistic yields good
results and even outperforms existing tests in the case of regular
observations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Property Safety Stock Policy for Correlated Commodities Based on Probability Inequality | Deriving the optimal safety stock quantity with which to meet customer
satisfaction is one of the most important topics in stock management. However,
it is difficult to control the stock management of correlated marketable
merchandise when using an inventory control method that was developed under the
assumption that the demands are not correlated. For this, we propose a
deterministic approach that uses a probability inequality to derive a
reasonable safety stock for the case in which we know the correlation between
various commodities. Moreover, over a given lead time, the relation between the
appropriate safety stock and the allowable stockout rate is analytically
derived, and the potential of our proposed procedure is validated by numerical
experiments.
| 0 | 0 | 1 | 1 | 0 | 0 |
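One standard way to get a distribution-free safety stock from a probability inequality, in the spirit of the abstract above, is the one-sided Chebyshev (Cantelli) bound: P(D >= mu + s) <= sigma^2/(sigma^2 + s^2), so setting the bound equal to the allowable stockout rate alpha gives s = sigma * sqrt((1 - alpha)/alpha). The paper's specific inequality may differ; the numbers below are illustrative.

import numpy as np

mu = np.array([100.0, 80.0, 60.0])        # mean lead-time demand per item
Sigma = np.array([[400., 120.,  60.],     # demand covariance across the
                  [120., 250.,  40.],     # correlated commodities
                  [ 60.,  40., 150.]])
alpha = 0.05                              # allowable stockout rate

# Aggregate (e.g., shared warehouse) demand: correlation enters through
# the quadratic form w' Sigma w.
w = np.ones(3)
sigma_agg = np.sqrt(w @ Sigma @ w)
safety_stock = sigma_agg * np.sqrt((1 - alpha) / alpha)   # Cantelli bound
print(f"aggregate safety stock: {safety_stock:.1f} units above {w @ mu:.0f}")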
Carleman estimates for the time-fractional advection-diffusion equations and applications | In this article, we prove Carleman estimates for the generalized
time-fractional advection-diffusion equations by considering the fractional
derivative as perturbation for the first order time-derivative. As a direct
application of the Carleman estimates, we show a conditional stability of a
lateral Cauchy problem for the time-fractional advection-diffusion equation,
and we also investigate the stability of an inverse source problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generating and Aligning from Data Geometries with Generative Adversarial Networks | Unsupervised domain mapping has attracted substantial attention in recent
years due to the success of models based on the cycle-consistency assumption.
These models map between two domains by fooling a probabilistic discriminator,
thereby matching the probability distributions of the real and generated data.
Instead of this probabilistic approach, we cast the problem in terms of
aligning the geometry of the manifolds of the two domains. We introduce the
Manifold Geometry Matching Generative Adversarial Network (MGM GAN), which adds
two novel mechanisms to facilitate GANs sampling from the geometry of the
manifold rather than the density and then aligning two manifold geometries: (1)
an importance sampling technique that reweights points based on their density
on the manifold, making the discriminator only able to discern geometry and (2)
a penalty adapted from traditional manifold alignment literature that
explicitly enforces the geometry to be preserved. The MGM GAN leverages the
manifolds arising from a pre-trained autoencoder to bridge the gap between
the formal manifold alignment literature and existing GAN work, and demonstrates
the advantages of modeling the manifold geometry rather than its density.
| 1 | 0 | 0 | 1 | 0 | 0 |
Wolf-Rayet spin at low metallicity and its implication for Black Hole formation channels | The spin of Wolf-Rayet (WR) stars at low metallicity (Z) is most relevant for
our understanding of gravitational wave sources such as GW 150914, as well as
the incidence of long-duration gamma-ray bursts (GRBs). Two scenarios have been
suggested for both phenomena: one of them involves rapid rotation and
quasi-chemical homogeneous evolution (CHE), the other invokes classical
evolution through mass loss in single and binary systems. WR spin rates might
enable us to test these two scenarios. In order to obtain empirical constraints
on black hole progenitor spin, we infer wind asymmetries in all 12 known WR
stars in the Small Magellanic Cloud (SMC) at Z = 1/5 Zsun, as well as within a
significantly enlarged sample of single and binary WR stars in the Large
Magellanic Cloud (LMC at Z = 1/2 Zsun), tripling the sample of Vink (2007).
This brings the total LMC sample to 39, making it appropriate for comparison to
the Galactic sample. We measure WR wind asymmetries with VLT-FORS linear
spectropolarimetry. We report the detection of new line effects in the LMC WN
star BAT99-43 and the WC star BAT99-70, as well as the famous WR/LBV HD 5980 in
the SMC, which might be evolving chemically homogeneously. With the previous
reported line effects in the late-type WNL (Ofpe/WN9) objects BAT99-22 and
BAT99-33, this brings the total LMC WR sample to 4, i.e. a frequency of ~10%.
Perhaps surprisingly, the incidence of line effects amongst low-Z WR stars is
not found to be any higher than amongst the Galactic WR sample, challenging the
rotationally-induced CHE model. As WR mass loss is likely Z-dependent, our
Magellanic Cloud line-effect WR stars may maintain their surface rotation and
fulfill the basic conditions for producing long GRBs, both via the classical
post-red supergiant (RSG) or luminous blue variable (LBV) channel, as well as
resulting from CHE due to physics specific to very massive stars (VMS).
| 0 | 1 | 0 | 0 | 0 | 0 |
What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016 | There is surprisingly little known about agenda setting for international
development in the United Nations (UN) despite it having a significant
influence on the process and outcomes of development efforts. This paper
addresses this shortcoming using a novel approach that applies natural language
processing techniques to countries' annual statements in the UN General Debate.
Every year UN member states deliver statements during the General Debate on
their governments' perspective on major issues in world politics. These
speeches provide invaluable information on state preferences on a wide range of
issues, including international development, but have largely been overlooked
in the study of global politics. This paper identifies the main international
development topics that states raise in these speeches between 1970 and 2016,
and examines the country-specific drivers of international development rhetoric.
| 1 | 0 | 0 | 0 | 0 | 0 |
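The topic-identification step above is the kind of task a standard topic model handles. A minimal scikit-learn sketch follows; the five toy "speeches" stand in for the real 1970-2016 corpus, and LDA with two topics is an illustrative choice rather than the paper's exact pipeline. (get_feature_names_out requires scikit-learn >= 1.0.)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

speeches = [
    "development aid poverty education health investment growth",
    "security council conflict peacekeeping terrorism borders",
    "climate change emissions sustainable development energy",
    "trade debt development financing infrastructure growth",
    "human rights law justice conflict peace",
]
vec = CountVectorizer()
X = vec.fit_transform(speeches)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))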
Randomizing growing networks with a time-respecting null model | Complex networks are often used to represent systems that are not static but
grow with time: people make new friendships, new papers are published and refer
to the existing ones, and so forth. To assess the statistical significance of
measurements made on such networks, we propose a randomization methodology---a
time-respecting null model---that preserves both the network's degree sequence
and the time evolution of individual nodes' degree values. By preserving the
temporal linking patterns of the analyzed system, the proposed model is able to
factor out the effect of the system's temporal patterns on its structure. We
apply the model to the citation network of Physical Review scholarly papers and
the citation network of US movies. The model reveals that the two datasets are
strikingly different with respect to their degree-degree correlations, and we
discuss the important implications of this finding on the information provided
by paradigmatic node centrality metrics such as indegree and Google's PageRank.
The randomization methodology proposed here can be used to assess the
significance of any structural property in growing networks, which could bring
new insights into the problems where null models play a critical role, such as
the detection of communities and network motifs.
| 1 | 1 | 0 | 0 | 0 | 0 |
High-dimensional posterior consistency for hierarchical non-local priors in regression | The choice of tuning parameter in Bayesian variable selection is a critical
problem in modern statistics. In particular, in related work on non-local priors
in the regression setting, the scale parameter reflects the dispersion of the
non-local prior density around zero, and implicitly determines the size of the
regression coefficients that will be shrunk to zero. In this paper, we
introduce a fully Bayesian approach with the pMOM nonlocal prior where we place
an appropriate Inverse-Gamma prior on the tuning parameter to analyze a more
robust model that is comparatively immune to misspecification of scale
parameter. Under standard regularity assumptions, we extend the previous work
where $p$ is bounded by the number of observations $n$ and establish strong
model selection consistency when $p$ is allowed to increase at a polynomial
rate with $n$. Through simulation studies, we demonstrate that our model
selection procedure outperforms commonly used penalized likelihood methods in a
range of simulation settings.
| 0 | 0 | 1 | 1 | 0 | 0 |
A metric of mutual energy and unlikely intersections for dynamical systems | We introduce a metric of mutual energy for adelic measures associated to the
Arakelov-Zhang pairing. Using this metric and potential theoretic techniques
involving discrete approximations to energy integrals, we prove an effective
bound on a problem of Baker and DeMarco on unlikely intersections of dynamical
systems, specifically, for the set of complex parameters $c$ for which $z=0$
and $1$ are both preperiodic under iteration of $f_c(z)=z^2 + c$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Volvox barberi flocks, forming near-optimal, two-dimensional, polydisperse lattice packings | Volvox barberi is a multicellular green alga forming spherical colonies of
10000-50000 differentiated somatic and germ cells. Here, I show that these
colonies actively self-organize over minutes into "flocks" that can contain
more than 100 colonies moving and rotating collectively for hours. The colonies
in flocks form two-dimensional, irregular, "active crystals", with lattice
angles and colony diameters both following log-normal distributions. Comparison
with a dynamical simulation of soft spheres, with diameters matched to the
Volvox samples and a weak long-range attractive force, shows that the Volvox
flocks achieve optimal random close-packing. A dye tracer in the Volvox medium
revealed large hydrodynamic vortices generated by colony and flock rotations,
providing a likely source of the forces leading to flocking and optimal
packing.
| 0 | 0 | 0 | 0 | 1 | 0 |
Learning to Identify Ambiguous and Misleading News Headlines | Accuracy is one of the basic principles of journalism. However, it is
increasingly hard to manage due to the diversity of news media. Some editors of
online news tend to use catchy headlines which trick readers into clicking.
These headlines are either ambiguous or misleading, degrading the reading
experience of the audience. Thus, identifying inaccurate news headlines is a
task worth studying. Previous work names these headlines "clickbaits" and
mainly focuses on features extracted from the headlines, which limits the
performance since the consistency between headlines and news bodies is
underappreciated. In this paper, we clearly redefine the problem and identify
ambiguous and misleading headlines separately. We utilize class sequential
rules to exploit structure information when detecting ambiguous headlines. For
the identification of misleading headlines, we extract features based on the
congruence between headlines and bodies. To make use of the large unlabeled
data set, we apply a co-training method and gain an increase in performance.
The experiment results show the effectiveness of our methods. Then we use our
classifiers to detect inaccurate headlines crawled from different sources and
conduct a data analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
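The co-training step mentioned above can be sketched with two feature views and confident pseudo-labeling. The two views below (headline-only features vs. headline-body congruence features) are synthetic stand-ins for the paper's feature sets, and the loop is a simplified variant of canonical co-training.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)

def make(n):
    labels = rng.integers(0, 2, size=n)
    return (rng.normal(labels[:, None], 1.0, (n, 10)),   # view A features
            rng.normal(labels[:, None], 1.0, (n, 8)),    # view B features
            labels)

XA, XB, y = make(100)                        # small labeled seed set
XA_u, XB_u, _ = make(2000)                   # large unlabeled pool

for _ in range(5):                           # simplified co-training rounds
    clf_a = LogisticRegression(max_iter=1000).fit(XA, y)
    clf_b = LogisticRegression(max_iter=1000).fit(XB, y)
    pa = clf_a.predict_proba(XA_u).max(axis=1)
    pb = clf_b.predict_proba(XB_u).max(axis=1)
    pick = np.argsort(np.maximum(pa, pb))[-50:]          # most confident
    pseudo = np.where(pa[pick] > pb[pick],
                      clf_a.predict(XA_u[pick]), clf_b.predict(XB_u[pick]))
    XA = np.vstack([XA, XA_u[pick]]); XB = np.vstack([XB, XB_u[pick]])
    y = np.r_[y, pseudo]
    keep = np.ones(len(XA_u), bool); keep[pick] = False
    XA_u, XB_u = XA_u[keep], XB_u[keep]
print("labeled set grew from 100 to", len(y), "examples")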
Evidence of Complex Contagion of Information in Social Media: An Experiment Using Twitter Bots | It has recently become possible to study the dynamics of information
diffusion in techno-social systems at scale, due to the emergence of online
platforms, such as Twitter, with millions of users. One question that
systematically recurs is whether information spreads according to simple or
complex dynamics: does each exposure to a piece of information have an
independent probability of a user adopting it (simple contagion), or does this
probability depend instead on the number of sources of exposure, increasing
above some threshold (complex contagion)? Most studies to date are
observational and, therefore, unable to disentangle the effects of confounding
factors such as social reinforcement, homophily, limited attention, or network
community structure. Here we describe a novel controlled experiment that we
performed on Twitter using `social bots' deployed to carry out coordinated
attempts at spreading information. We propose two Bayesian statistical models
describing simple and complex contagion dynamics, and test the competing
hypotheses. We provide experimental evidence that the complex contagion model
describes the observed information diffusion behavior more accurately than
simple contagion. Future applications of our results include more effective
defenses against malicious propaganda campaigns on social media, improved
marketing and advertisement strategies, and design of effective network
intervention techniques.
| 1 | 1 | 0 | 0 | 0 | 0 |
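The simple-versus-complex distinction above comes down to whether adoption depends on one exposure or on the number of adopting neighbors. A toy simulation makes the contrast visible; the graph, seed placement, and adoption probability are illustrative, and this is not the paper's Bayesian model.

import numpy as np

rng = np.random.default_rng(12)
n, p_edge = 500, 0.02
A = np.triu(rng.random((n, n)) < p_edge, 1).astype(int)
A = A + A.T                                     # undirected random graph

def spread(threshold, p_adopt=0.3, steps=30):
    """threshold = 1: simple contagion (any exposure can convert);
    threshold = 2: complex contagion (needs multiple adopting neighbors)."""
    adopted = np.zeros(n, dtype=bool)
    adopted[rng.choice(n, 5, replace=False)] = True        # seed adopters
    for _ in range(steps):
        exposure = A @ adopted.astype(int)                 # adopting neighbors
        candidates = (~adopted) & (exposure >= threshold)
        adopted |= candidates & (rng.random(n) < p_adopt)
    return int(adopted.sum())

# With scattered seeds, complex contagion typically stalls (few nodes see two
# adopting neighbors at once) while simple contagion saturates the network.
print("simple contagion adopters: ", spread(threshold=1))
print("complex contagion adopters:", spread(threshold=2))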
Birecurrent sets | A set is called recurrent if its minimal automaton is strongly connected and
birecurrent if it is recurrent as well as its reversal. We prove a series of
results concerning birecurrent sets. It is already known that any birecurrent
set is completely reducible (that is, such that the minimal representation of
its characteristic series is completely reducible). The main result of this
paper characterizes completely reducible sets as linear combinations of
birecurrent sets.
| 1 | 0 | 1 | 0 | 0 | 0 |
Mobile Robotic Fabrication at 1:1 scale: the In situ Fabricator | This paper presents the concept of an In situ Fabricator, a mobile robot
intended for on-site manufacturing, assembly and digital fabrication. We
present an overview of a prototype system, its capabilities, and highlight the
importance of high-performance control, estimation and planning algorithms for
achieving desired construction goals. Next, we detail two architectural
application scenarios: first, building a full-size undulating brick wall, which
required a number of repositioning and autonomous localisation manoeuvres.
Second, the Mesh Mould concrete process, which shows that an In situ Fabricator
in combination with an innovative digital fabrication tool can be used to
enable completely novel building technologies. Subsequently, important
limitations and disadvantages of our approach are discussed. Based on that, we
identify the need for a new type of robotic actuator, which facilitates the
design of novel full-scale construction robots. We provide brief insight into
the development of this actuator and conclude the paper with an outlook on the
next-generation In situ Fabricator, which is currently under development.
| 1 | 0 | 0 | 0 | 0 | 0 |