title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Riemann-Langevin Particle Filtering in Track-Before-Detect | Track-before-detect (TBD) is a powerful approach that consists in providing
the tracker with sensor measurements directly without pre-detection. Due to the
measurement model non-linearities, online state estimation in TBD is most
commonly solved via particle filtering. Existing particle filters for TBD do
not incorporate measurement information in their proposal distribution. The
Langevin Monte Carlo (LMC) is a sampling method whose proposal is able to
exploit all available knowledge of the posterior (that is, both prior and
measurement information). This letter synthesizes recent advances in LMC-based
filtering to describe the Riemann-Langevin particle filter and introduces its
novel application to TBD. The benefits of our approach are illustrated in a
challenging low-noise scenario.
| 0 | 0 | 0 | 1 | 0 | 0 |
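A minimal 1-D sketch of the Langevin ingredient this abstract refers to, a gradient-informed (MALA-style) proposal, assuming a toy Gaussian posterior. This is generic Langevin Monte Carlo under my own assumptions, not the Riemann-Langevin particle filter itself.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(x):                       # toy unnormalized log-posterior: N(2, 0.5^2)
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def grad_log_post(x):
    return -(x - 2.0) / 0.25

def log_q(a, b, eps):                  # log density of the Langevin proposal q(a | b)
    mean = b + 0.5 * eps ** 2 * grad_log_post(b)
    return -0.5 * ((a - mean) / eps) ** 2

x, eps, samples = 0.0, 0.3, []
for _ in range(20_000):
    # Langevin proposal: drift along the gradient of the log-posterior, then diffuse.
    prop = x + 0.5 * eps ** 2 * grad_log_post(x) + eps * rng.normal()
    # Metropolis correction accounting for the asymmetric proposal.
    log_alpha = log_post(prop) + log_q(x, prop, eps) - log_post(x) - log_q(prop, x, eps)
    if np.log(rng.random()) < log_alpha:
        x = prop
    samples.append(x)
print(np.mean(samples[2000:]), np.std(samples[2000:]))   # approx. (2.0, 0.5)
```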
The universal connection for principal bundles over homogeneous spaces and twistor space of coadjoint orbits | Given a holomorphic principal bundle $Q\, \longrightarrow\, X$, the universal
space of holomorphic connections is a torsor $C_1(Q)$ for $\text{ad} Q \otimes
T^*X$ such that the pullback of $Q$ to $C_1(Q)$ has a tautological holomorphic
connection. When $X\,=\, G/P$, where $P$ is a parabolic subgroup of a complex
simple group $G$, and $Q$ is the frame bundle of an ample line bundle, we show
that $C_1(Q)$ may be identified with $G/L$, where $L\, \subset\, P$ is a Levi
factor. We use this identification to construct the twistor space associated to
a natural hyper-Kähler metric on $T^*(G/P)$, recovering Biquard's description
of this twistor space, but employing only finite-dimensional, Lie-theoretic
means.
| 0 | 0 | 1 | 0 | 0 | 0 |
Equivariance Through Parameter-Sharing | We propose to study equivariance in deep neural networks through parameter
symmetries. In particular, given a group $\mathcal{G}$ that acts discretely on
the input and output of a standard neural network layer $\phi_{W}: \Re^{M} \to
\Re^{N}$, we show that $\phi_{W}$ is equivariant with respect to
$\mathcal{G}$-action iff $\mathcal{G}$ explains the symmetries of the network
parameters $W$. Inspired by this observation, we then propose two
parameter-sharing schemes to induce the desirable symmetry on $W$. Our
procedures for tying the parameters achieve $\mathcal{G}$-equivariance and,
under some conditions on the action of $\mathcal{G}$, they guarantee
sensitivity to all other permutation groups outside $\mathcal{G}$.
| 1 | 0 | 0 | 1 | 0 | 0 |
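A toy numeric illustration of the parameter-sharing idea (my own construction, not the authors' general schemes), assuming the cyclic group acting on indices by shifts: tying a linear layer's parameters into a circulant matrix makes the layer commute with the group action.

```python
import numpy as np

# Toy illustration: parameter sharing via a circulant weight matrix makes a
# linear layer equivariant to cyclic shifts (the group Z_n acting on indices).
rng = np.random.default_rng(0)
n = 6
w = rng.normal(size=n)                                 # n shared free parameters
W = np.stack([np.roll(w, i) for i in range(n)])        # W[i, j] = w[(j - i) mod n]

shift = lambda v: np.roll(v, 1)                        # group action: cyclic shift
phi = lambda v: W @ v                                  # the tied linear layer

x = rng.normal(size=n)
print(np.allclose(phi(shift(x)), shift(phi(x))))       # equivariance: True
```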
Quantum light in curved low dimensional hexagonal boron nitride systems | Low-dimensional wide bandgap semiconductors open a new playing field in
quantum optics using sub-bandgap excitation. In this field, hexagonal boron
nitride (h-BN) has been reported to host single quantum emitters (QEs), linking
QE density to perimeters. Furthermore, curvature/perimeters in transition metal
dichalcogenides (TMDCs) have been shown to play a key role in QE formation. We
investigate a curvature-abundant BN system - quasi one-dimensional BN nanotubes
(BNNTs) fabricated via a catalyst-free method. We find that non-treated BNNT is
an abundant source of stable QEs and analyze their emission features down to
single nanotubes, comparing dispersed and suspended material. Combining this
with the high spatial resolution of a scanning electron microscope, we categorize
and pinpoint the emission origin to a scale of less than 20 nm, giving us a one-to-one
validation of emission source with dimensions smaller than the laser excitation
wavelength, elucidating nano-antenna effects. Two emission origins emerge:
hybrid/entwined BNNT. By artificially curving h-BN flakes, similar QE spectral
features are observed. The impact on emission of solvents used in commercial
products and curved regions is also demonstrated. The 'out of the box'
availability of QEs in BNNT, lacking processing contamination, is a milestone
for unraveling their atomic features. These findings open possibilities for
precision engineering of QEs, put h-BN under a similar 'umbrella' as TMDC
QEs, and provide a model explaining QE spatial localization/formation using
electron/ion irradiation and chemical etching.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation | While recent developments in autonomous vehicle (AV) technology highlight
substantial progress, we lack tools for rigorous and scalable testing.
Real-world testing, the $\textit{de facto}$ evaluation environment, places the
public in danger, and, due to the rare nature of accidents, will require
billions of miles in order to statistically validate performance claims. We
implement a simulation framework that can test an entire modern autonomous
driving system, including, in particular, systems that employ deep-learning
perception and control algorithms. Using adaptive importance-sampling methods
to accelerate rare-event probability evaluation, we estimate the probability of
an accident under a base distribution governing standard traffic behavior. We
demonstrate our framework on a highway scenario, accelerating system evaluation
by $2$-$20$ times over naive Monte Carlo sampling methods and $10$-$300
\mathsf{P}$ times (where $\mathsf{P}$ is the number of processors) over
real-world testing.
| 1 | 0 | 0 | 0 | 0 | 0 |
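A toy, static version of the importance-sampling idea (illustrative, not the paper's AV simulator or its adaptive scheme): estimate a small tail probability by sampling from a shifted proposal and reweighting by the likelihood ratio. The Gaussian example and proposal are my assumptions.

```python
import numpy as np

# Estimate the rare-event probability p = P(X > 4) for X ~ N(0, 1) by sampling
# from a mean-shifted proposal N(4, 1) and reweighting.
rng = np.random.default_rng(1)
n, a = 100_000, 4.0

x = rng.normal(0.0, 1.0, n)
p_naive = np.mean(x > a)                       # naive Monte Carlo: almost no hits

y = rng.normal(a, 1.0, n)                      # importance-sampling proposal
w = np.exp(0.5 * a ** 2 - a * y)               # likelihood ratio N(0,1)/N(a,1)
p_is = np.mean((y > a) * w)

print(p_naive, p_is)                           # true value is about 3.17e-5
```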
Exact time-dependent exchange-correlation potential in electron scattering processes | We identify peak and valley structures in the exact exchange-correlation
potential of time-dependent density functional theory that are crucial for
time-resolved electron scattering in a model one-dimensional system. These
structures are completely missed by adiabatic approximations which consequently
significantly underestimate the scattering probability. A recently-proposed
non-adiabatic approximation is shown to correctly capture the approach of the
electron to the target when the initial Kohn-Sham state is chosen judiciously,
and is more accurate than standard adiabatic functionals, but it ultimately
fails to accurately capture reflection. These results may explain the
underestimate of scattering probabilities in some recent studies on molecules
and surfaces.
| 0 | 1 | 0 | 0 | 0 | 0 |
Polynomial-Time Algorithms for Sliding Tokens on Cactus Graphs and Block Graphs | Given two independent sets $I, J$ of a graph $G$, imagine that a token
(coin) is placed at each vertex of $I$. The Sliding Token problem asks whether one
could transform $I$ to $J$ via a sequence of elementary steps, where each step
requires sliding a token from one vertex to one of its neighbors so that the
resulting set of vertices where tokens are placed remains independent. This
problem is $\mathsf{PSPACE}$-complete even for planar graphs of maximum degree
$3$ and bounded treewidth. In this paper, we show that Sliding Token can be
solved efficiently for cactus graphs and block graphs, and give upper bounds on
the length of a transformation sequence between any two independent sets of
these graph classes. Our algorithms are designed based on two main
observations. First, all structures that forbid the existence of a sequence of
token slidings between $I$ and $J$, if they exist, can be found in polynomial time.
A sufficient condition for determining no-instances can be easily derived using
this characterization. Second, without such forbidden structures, a sequence of
token slidings between $I$ and $J$ does exist. In this case, one can indeed
transform $I$ to $J$ (and vice versa) using a polynomial number of
token-slides.
| 1 | 0 | 0 | 0 | 0 | 0 |
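For contrast with the polynomial-time algorithms described above, a brute-force reference for the reconfiguration question itself: an exponential-time BFS over token configurations, usable only on tiny graphs. The adjacency-dict representation and the example path graph are my own illustrative choices, not the paper's construction.

```python
from collections import deque

def sliding_token_reachable(adj, I, J):
    """Brute-force BFS over token configurations (exponential; tiny graphs only).
    adj: dict vertex -> set of neighbours; I, J: independent sets of vertices."""
    def independent(s):
        return all(v not in adj[u] for u in s for v in s if u != v)
    start, goal = frozenset(I), frozenset(J)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for u in cur:
            for v in adj[u]:                      # slide the token on u to a neighbour v
                nxt = (cur - {u}) | {v}
                if v not in cur and independent(nxt) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Path graph 1-2-3-4-5: move tokens from {1, 3} to {3, 5}.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(sliding_token_reachable(adj, {1, 3}, {3, 5}))   # True
```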
Summarized Network Behavior Prediction | This work studies the entity-wise topical behavior from massive network logs.
Both the temporal and the spatial relationships of the behavior are explored
with learning architectures combining the recurrent neural network (RNN) and
the convolutional neural network (CNN). To make the behavioral data appropriate
for the spatial learning in CNN, several reduction steps are taken to form the
topical metrics and place them homogeneously like pixels in the images. The
experimental result shows both temporal and spatial gains when
compared to a multilayer perceptron (MLP) network. A new learning framework
called spatially connected convolutional networks (SCCN) is introduced to more
efficiently predict the behavior.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stabiliser states are efficiently PAC-learnable | The exponential scaling of the wave function is a fundamental property of
quantum systems with far-reaching implications for our ability to process
quantum information. A problem where these are particularly relevant is quantum
state tomography. State tomography, whose objective is to obtain a full
description of a quantum system, can be analysed in the framework of
computational learning theory. In this model, quantum states have been shown to
be Probably Approximately Correct (PAC)-learnable with sample complexity linear
in the number of qubits. However, it is conjectured that in general quantum
states require an exponential amount of computation to be learned. Here, using
results from the literature on the efficient classical simulation of quantum
systems, we show that stabiliser states are efficiently PAC-learnable. Our
results solve an open problem formulated by Aaronson [Proc. R. Soc. A, 2088,
(2007)] and propose learning theory as a tool for exploring the power of
quantum computation.
| 1 | 0 | 0 | 0 | 0 | 0 |
A bound for the shortest reset words for semisimple synchronizing automata via the packing number | We show that if a semisimple synchronizing automaton with $n$ states has a
minimal reachable non-unary subset of cardinality $r\ge 2$, then there is a
reset word of length at most $(n-1)D(2,r,n)$, where $D(2,r,n)$ is the
$2$-packing number for families of $r$-subsets of $[1,n]$.
| 1 | 0 | 1 | 0 | 0 | 0 |
HyperENTM: Evolving Scalable Neural Turing Machines through HyperNEAT | Recent developments within memory-augmented neural networks have solved
sequential problems requiring long-term memory, which are intractable for
traditional neural networks. However, current approaches still struggle to
scale to large memory sizes and sequence lengths. In this paper we show how
access to memory can be encoded geometrically through a HyperNEAT-based Neural
Turing Machine (HyperENTM). We demonstrate that using the indirect HyperNEAT
encoding allows for training on small memory vectors in a bit-vector copy task
and then applying the knowledge gained from such training to speed up training
on larger size memory vectors. Additionally, we demonstrate that in some
instances, networks trained to copy bit-vectors of size 9 can be scaled to
sizes of 1,000 without further training. While the task in this paper is
simple, these results could extend the set of problems amenable to networks with
external memories to problems with larger memory vectors and theoretically
unbounded memory sizes.
| 1 | 0 | 0 | 0 | 0 | 0 |
X-Ray and Gamma-Ray Emission from Middle-aged Supernova Remnants in Cavities. I. Spherical Symmetry | We present analytical and numerical studies of models of supernova-remnant
(SNR) blast waves expanding into uniform media and interacting with a denser
cavity wall, in one spatial dimension. We predict the nonthermal emission from
such blast waves: synchrotron emission at radio and X-ray energies, and
bremsstrahlung, inverse-Compton emission (from cosmic-microwave-background seed
photons, ICCMB), and emission from the decay of $\pi^0$ mesons produced in
inelastic collisions between accelerated ions and thermal gas, at GeV and TeV
energies. Accelerated particle spectra are assumed to be power-laws with
exponential cutoffs at energies limited by the remnant age or (for electrons,
if lower) by radiative losses. We compare the results with those from
homogeneous ("one-zone") models. Such models give fair representations of the
1-D results for uniform media, but cavity-wall interactions produce effects for
which one-zone models are inadequate. We study the evolution of SNR
morphology and emission with time. Strong morphological differences exist
between ICCMB and $\pi^0$-decay emission; at some stages, the TeV emission can
be dominated by the former and the GeV by the latter, resulting in strong
energy-dependence of morphology. Integrated gamma-ray spectra show apparent
power-laws with slopes that vary with time, but do not indicate the energy
distribution of a single population of particles. As observational capabilities
at GeV and TeV energies improve, spatial inhomogeneity in SNRs will need to be
accounted for.
| 0 | 1 | 0 | 0 | 0 | 0 |
An adverse selection approach to power pricing | We study the optimal design of electricity contracts among a population of
consumers with different needs. This question is tackled within the framework
of Principal-Agent problems in the presence of adverse selection. The particular
features of electricity induce an unusual structure on the production cost,
with no decreasing returns to scale. We are nevertheless able to provide an
explicit solution for the problem at hand. The optimal contracts are either
linear or polynomial with respect to the consumption. Whenever the outside
options offered by competitors are not uniform among the different types of
consumers, we exhibit situations where the electricity provider should contract
with consumers with either low or high appetite for electricity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains | We consider the minimization of an objective function given access to
unbiased estimates of its gradient through stochastic gradient descent (SGD)
with constant step-size. While the detailed analysis was only performed for
quadratic functions, we provide an explicit asymptotic expansion of the moments
of the averaged SGD iterates that outlines the dependence on initial
conditions, the effect of noise and the step-size, as well as the lack of
convergence in the general (non-quadratic) case. For this analysis, we bring
tools from Markov chain theory into the analysis of stochastic gradient. We
then show that Richardson-Romberg extrapolation may be used to get closer to
the global optimum and we show empirical improvements of the new extrapolation
scheme.
| 0 | 0 | 1 | 1 | 0 | 0 |
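A hedged 1-D toy of the extrapolation idea, with parameters and objective of my choosing rather than the paper's general analysis: on a non-quadratic objective, averaged constant-step SGD carries an $O(\gamma)$ bias, and the Richardson-Romberg combination $2\bar{x}_{\gamma/2} - \bar{x}_{\gamma}$ cancels the leading bias term.

```python
import numpy as np

# Averaged constant-step SGD on f(x) = exp(x) - x (minimizer 0) with additive
# gradient noise; compare the plain averages with the Richardson-Romberg combination.
rng = np.random.default_rng(2)
n_steps, sigma = 500_000, 1.0

def averaged_sgd(gamma):
    x, avg = 0.0, 0.0
    for t in range(1, n_steps + 1):
        grad = np.exp(x) - 1.0 + sigma * rng.normal()   # unbiased stochastic gradient
        x -= gamma * grad
        avg += (x - avg) / t                            # Polyak-Ruppert running average
    return avg

gamma = 0.2
xg, xg2 = averaged_sgd(gamma), averaged_sgd(gamma / 2)
print("avg(gamma)  :", xg)            # biased below 0 by roughly C * gamma
print("avg(gamma/2):", xg2)           # bias roughly halved
print("extrapolated:", 2 * xg2 - xg)  # leading bias term cancelled
```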
Learning Overcomplete HMMs | We study the problem of learning overcomplete HMMs---those that have many
hidden states but a small output alphabet. Despite having significant practical
importance, such HMMs are poorly understood with no known positive or negative
results for efficient learning. In this paper, we present several new
results---both positive and negative---which help define the boundaries between
the tractable and intractable settings. Specifically, we show positive results
for a large subclass of HMMs whose transition matrices are sparse,
well-conditioned, and have small probability mass on short cycles. On the other
hand, we show that learning is impossible given only a polynomial number of
samples for HMMs with a small output alphabet and whose transition matrices are
random regular graphs with large degree. We also discuss these results in the
context of learning HMMs which can capture long-term dependencies.
| 1 | 0 | 0 | 1 | 0 | 0 |
Migration barriers for surface diffusion on a rigid lattice: challenges and solutions | Atomistic rigid lattice Kinetic Monte Carlo is an efficient method for
simulating nano-objects and surfaces at timescales much longer than those
accessible by molecular dynamics. A laborious part of constructing any Kinetic
Monte Carlo model is, however, to calculate all migration barriers that are
needed to give the probabilities for any atom jump event to occur in the
simulations. One of the common methods of barrier calculations is Nudged
Elastic Band. The number of barriers needed to fully describe simulated systems
is typically between hundreds of thousands and millions. Calculating such a
large number of barriers for various processes is far from trivial. In this
paper, we discuss the challenges arising during barrier calculations on a
surface and present a systematic and reliable tethering force approach to
construct a rigid lattice barrier parameterization of face-centred and
body-centred cubic metal lattices. We have produced several different barrier
sets for Cu and for Fe that can be used for KMC simulations of processes on
arbitrarily rough surfaces. The sets are published as Data in Brief articles
and are available for use.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Quest for an Acyclic Graph | The paper aims at finding acyclic graphs under a given set of constraints.
More specifically, given a propositional formula {\phi} over edges of a
fixed-size graph, the objective is to find a model of {\phi} that corresponds
to a graph that is acyclic. The paper proposes several encodings of the problem
and compares them in an experimental evaluation using state-of-the-art SAT
solvers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Robust Matrix Elastic Net based Canonical Correlation Analysis: An Effective Algorithm for Multi-View Unsupervised Learning | This paper presents a robust matrix elastic net based canonical correlation
analysis (RMEN-CCA) for multi-view unsupervised learning problems, which
emphasizes the combination of CCA and the robust matrix elastic net (RMEN) used
as coupled feature selection. The RMEN-CCA leverages the strength of the RMEN
to distill naturally meaningful features without any prior assumption and to
effectively measure correlations between different 'views'. We can further
directly employ the kernel trick to extend the RMEN-CCA to the kernel scenario
with theoretical guarantees, which takes advantage of the kernel trick for
highly complicated nonlinear feature learning. Rather than simply incorporating
existing regularization minimization terms into CCA, this paper provides a new
learning paradigm for CCA and is the first to derive a coupled feature
selection based CCA algorithm that guarantees convergence. More significantly,
for CCA, the newly-derived RMEN-CCA bridges the gap between measurement of
relevance and coupled feature selection. Moreover, it is nontrivial to tackle
the RMEN-CCA directly with previous optimization approaches derived from its
sophisticated model architecture. Therefore, this paper further offers a bridge
between a new optimization problem and an existing efficient iterative
approach. As a consequence, the RMEN-CCA can overcome the limitation of CCA and
address large-scale and streaming data problems. Experimental results on four
popular competing datasets illustrate that the RMEN-CCA performs more
effectively and efficiently than do state-of-the-art approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset | Visual Question Answering (VQA) has received a lot of attention over the past
couple of years. A number of deep learning models have been proposed for this
task. However, it has been shown that these models are heavily driven by
superficial correlations in the training data and lack compositionality -- the
ability to answer questions about unseen compositions of seen concepts. This
compositionality is desirable and central to intelligence. In this paper, we
propose a new setting for Visual Question Answering where the test
question-answer pairs are compositionally novel compared to training
question-answer pairs. To facilitate developing models under this setting, we
present a new compositional split of the VQA v1.0 dataset, which we call
Compositional VQA (C-VQA). We analyze the distribution of questions and answers
in the C-VQA splits. Finally, we evaluate several existing VQA models under
this new setting and show that the performances of these models degrade by a
significant amount compared to the original VQA setting.
| 1 | 0 | 0 | 0 | 0 | 0 |
Spherical Planetary Robot for Rugged Terrain Traversal | Wheeled planetary rovers such as the Mars Exploration Rovers (MERs) and Mars
Science Laboratory (MSL) have provided unprecedented, detailed images of the
Mars surface. However, these rovers are large and costly, as they need
to carry sophisticated instruments and science laboratories. We propose the
development of low-cost planetary rovers that are the size and shape of
cantaloupes and that can be deployed from a larger rover. The rover named
SphereX is 2 kg in mass, is spherical, holonomic and contains a hopping
mechanism to jump over rugged terrain. A small low-cost rover complements a
larger rover, particularly to traverse rugged terrain or roll down a canyon,
cliff or crater to obtain images and science data. While it may be a one-way
journey for these small robots, they could be used tactically to obtain
high-reward science data. The robot is equipped with a pair of stereo cameras
to perform visual navigation and has room for a science payload. In this paper,
we analyze the design and development of a laboratory prototype. The results
show a promising pathway towards development of a field system.
| 1 | 1 | 0 | 0 | 0 | 0 |
Active Learning for Accurate Estimation of Linear Models | We explore the sequential decision making problem where the goal is to
estimate uniformly well a number of linear models, given a shared budget of
random contexts independently sampled from a known distribution. The decision
maker must query one of the linear models for each incoming context, and
receives an observation corrupted by noise levels that are unknown, and depend
on the model instance. We present Trace-UCB, an adaptive allocation algorithm
that learns the noise levels while balancing contexts accordingly across the
different linear functions, and derive guarantees for simple regret in both
expectation and high-probability. Finally, we extend the algorithm and its
guarantees to high dimensional settings, where the number of linear models
times the dimension of the contextual space is higher than the total budget of
samples. Simulations with real data suggest that Trace-UCB is remarkably
robust, outperforming a number of baselines even when its assumptions are
violated.
| 1 | 0 | 0 | 1 | 0 | 0 |
Revisiting Frequency Reuse towards Supporting Ultra-Reliable Ubiquitous-Rate Communication | One of the goals of 5G wireless systems stated by the NGMN alliance is to
provide moderate rates (50+ Mbps) everywhere and with very high reliability. We
term this service Ultra-Reliable Ubiquitous-Rate Communication (UR2C). This
paper investigates the role of frequency reuse in supporting UR2C in the
downlink. To this end, two frequency reuse schemes are considered:
user-specific frequency reuse (FRu) and BS-specific frequency reuse (FRb). For
a given unit frequency channel, FRu reduces the number of serving user
equipments (UEs), whereas FRb directly decreases the number of interfering base
stations (BSs). This increases the distance to the interfering BSs so that the
signal-to-interference ratio (SIR) attains ultra-reliability, e.g. 99% SIR
coverage at a randomly picked UE. The ultra-reliability is, however, achieved
at the cost of a reduced frequency allocation, which may degrade the overall
downlink rate. To fairly capture this reliability-rate tradeoff, we propose
ubiquitous rate defined as the maximum downlink rate whose required SIR can be
achieved with ultra-reliability. By using stochastic geometry, we derive
closed-form ubiquitous rate as well as the optimal frequency reuse rules for
UR2C.
| 1 | 0 | 0 | 0 | 0 | 0 |
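A hedged Monte Carlo sketch of the SIR-reliability quantity discussed above, under assumed parameter values: coverage probability of the typical user in a Poisson network with Rayleigh fading, nearest-BS association, and all other BSs interfering. This is a generic stochastic-geometry simulation, not the paper's closed-form analysis or its frequency-reuse schemes.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, alpha, radius, trials = 1e-5, 4.0, 5_000.0, 10_000   # BS density [1/m^2], path-loss exponent
theta = 1.0                                               # SIR threshold (0 dB)

covered = 0
for _ in range(trials):
    n_bs = rng.poisson(lam * np.pi * radius ** 2)         # number of BSs in the disk
    if n_bs < 2:
        continue
    r = radius * np.sqrt(rng.random(n_bs))                # uniform BS distances to the user
    power = rng.exponential(1.0, n_bs) * r ** (-alpha)    # Rayleigh fading x path loss
    i_serv = np.argmin(r)                                 # nearest BS serves
    sir = power[i_serv] / (power.sum() - power[i_serv])
    covered += sir >= theta
print("P(SIR >= 0 dB) ~", covered / trials)               # about 0.56 for alpha = 4
```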
Local isometric immersions of pseudo-spherical surfaces and k-th order evolution equations | We consider the class of evolution equations that describe pseudo-spherical
surfaces of the form $u_t = F(u, \partial u/\partial x, \ldots, \partial^k u/\partial x^k)$, $k \ge 2$,
classified by Chern-Tenenblat. This class of
equations is characterized by the property that to each solution of a
differential equation within this class, there corresponds a 2-dimensional
Riemannian metric of curvature $-1$. We investigate the following problem: given
such a metric, is there a local isometric immersion in $\mathbb{R}^3$ such that the
coefficients of the second fundamental form of the surface depend on a jet of
finite order of $u$? By extending our previous result for second-order evolution
equations to $k$-th order equations, we prove that there is only one type of
equations that admit such an isometric immersion. We prove that the
coefficients of the second fundamental forms of the local isometric immersion
determined by the solutions u are universal, i.e., they are independent of u.
Moreover, we show that there exists a foliation of the domain of the parameters
of the surface by straight lines with the property that the mean curvature of
the surface is constant along the images of these straight lines under the
isometric immersion.
| 0 | 0 | 1 | 0 | 0 | 0 |
Integrating Flexible Normalization into Mid-Level Representations of Deep Convolutional Neural Networks | Deep convolutional neural networks (CNNs) are becoming increasingly popular
models to predict neural responses in visual cortex. However, contextual
effects, which are prevalent in neural processing and in perception, are not
explicitly handled by current CNNs, including those used for neural prediction.
In primary visual cortex, neural responses are modulated by stimuli spatially
surrounding the classical receptive field in rich ways. These effects have been
modeled with divisive normalization approaches, including flexible models,
where spatial normalization is recruited only to the degree responses from
center and surround locations are deemed statistically dependent. We propose a
flexible normalization model applied to mid-level representations of deep CNNs
as a tractable way to study contextual normalization mechanisms in mid-level
cortical areas. This approach captures non-trivial spatial dependencies among
mid-level features in CNNs, such as those present in textures and other visual
stimuli, that arise from geometrically tiling high-order features. We expect
that the proposed approach can make predictions about when spatial
normalization might be recruited in mid-level cortical areas. We also expect
this approach to be useful as part of the CNN toolkit, therefore going beyond
more restrictive fixed forms of normalization.
| 0 | 0 | 0 | 0 | 1 | 0 |
Moments and non-vanishing of Hecke $L$-functions with quadratic characters in $\mathbb{Q}(i)$ at the central point | In this paper, we study the moments of central values of Hecke $L$-functions
associated with quadratic characters in $\mathbb{Q}(i)$, and establish a quantitative
non-vanishing result for the $L$-values.
| 0 | 0 | 1 | 0 | 0 | 0 |
Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph Representation Learning | Graphs are a commonly used construct for representing relationships between
elements in complex high-dimensional datasets. Many real-world phenomena are
dynamic in nature, meaning that any graph used to represent them is inherently
temporal. However, many of the machine learning models designed to capture
knowledge about the structure of these graphs ignore this rich temporal
information when creating representations of the graph. This results in models
which do not perform well when used to make predictions about the future state
of the graph -- especially when the delta between time stamps is not small. In
this work, we explore a novel training procedure and an associated unsupervised
model which creates graph representations optimised to predict the future state
of the graph. We make use of graph convolutional neural networks to encode the
graph into a latent representation, which we then use to train our temporal
offset reconstruction method, inspired by auto-encoders, to predict a later
time point -- multiple time steps into the future. Using our method, we
demonstrate superior performance for the task of future link prediction
compared with none-temporal state-of-the-art baselines. We show our approach to
be capable of outperforming non-temporal baselines by 38% on a real world
dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
Decentralized Tube-based Model Predictive Control of Uncertain Nonlinear Multi-Agent Systems | This paper addresses the problem of decentralized tube-based nonlinear Model
Predictive Control (NMPC) for a class of uncertain nonlinear continuous-time
multi-agent systems with additive and bounded disturbance. In particular, the
problem of robust navigation of a multi-agent system to predefined states of
the workspace while using only local information is addressed, under certain
distance and control input constraints. We propose a decentralized feedback
control protocol that consists of two terms: a nominal control input, which is
computed online and is the outcome of a Decentralized Finite Horizon Optimal
Control Problem (DFHOCP) that each agent solves at every sampling time, for its
nominal system dynamics; and an additive state feedback law which is computed
offline and guarantees that the real trajectories of each agent will belong to
a hyper-tube centered along the nominal trajectory, for all times. The volume
of the hyper-tube depends on the upper bound of the disturbances as well as the
bounds of the derivatives of the dynamics. In addition, by introducing certain
distance constraints, the proposed scheme guarantees that the initially
connected agents remain connected for all times. Under standard assumptions
that arise in nominal NMPC schemes, controllability assumptions as well as
communication capabilities between the agents, we guarantee that the
multi-agent system is ISS (Input to State Stable) with respect to the
disturbances, for all initial conditions satisfying the state constraints.
Simulation results verify the correctness of the proposed framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cognition of the circle in ancient India | We discuss the understanding of geometry of the circle in ancient India, in
terms of the enunciation of various principles, constructions, applications, etc.,
during various phases of history and cultural contexts.
| 0 | 0 | 1 | 0 | 0 | 0 |
Estimates for $π(x)$ for large values of $x$ and Ramanujan's prime counting inequality | In this paper we use refined approximations for Chebyshev's
$\vartheta$-function to establish new explicit estimates for the prime counting
function $\pi(x)$, which improve the current best estimates for large values of
$x$. As an application we find an upper bound for the number $H_0$ which is
defined to be the smallest positive integer so that Ramanujan's prime counting
inequality holds for every $x \geq H_0$.
| 0 | 0 | 1 | 0 | 0 | 0 |
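For concreteness, Ramanujan's inequality reads $\pi(x)^2 < \frac{e\,x}{\log x}\,\pi(x/e)$. The sketch below checks it numerically at a few small $x$ with a simple sieve; the sample values and helper are illustrative only, since the regime the paper bounds ($x \ge H_0$) is far beyond direct computation, and the inequality is known to fail on some ranges below that threshold.

```python
import math

def primepi_table(limit):
    """Sieve of Eratosthenes; returns a table pi[k] = number of primes <= k."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    pi, count = [0] * (limit + 1), 0
    for k in range(limit + 1):
        count += sieve[k]
        pi[k] = count
    return pi

# Check Ramanujan's inequality  pi(x)^2 < (e*x / log x) * pi(x/e)  at a few x.
pi = primepi_table(10 ** 6)
for x in [10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6]:
    lhs = pi[x] ** 2
    rhs = math.e * x / math.log(x) * pi[int(x / math.e)]
    print(x, lhs, round(rhs), lhs < rhs)
```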
Unveiled electric profiles within hydrogen bonds suggest DNA base pairs with similar bond strengths | Electrical forces are the background of all the interactions occurring in
biochemical systems. Starting from this premise and using a combination of ab initio and
ad hoc models, we introduce the first description of electric field profiles
with intrabond resolution to support a characterization of single-bond forces
attending to their electrical origin. This fundamental issue has eluded a
physical description so far. Our method is applied to describe hydrogen bonds
(HB) in DNA base pairs. Numerical results reveal that base pairs in DNA could
be equivalent considering HB strength contributions, which challenges previous
interpretations of thermodynamic properties of DNA based on the assumption that
Adenine/Thymine pairs are weaker than Guanine/Cytosine pairs due to the sole
difference in the number of HB. Thus, our methodology provides solid
foundations to support the development of extended models intended to go deeper
into the molecular mechanisms of DNA functioning.
| 0 | 0 | 0 | 0 | 1 | 0 |
Mimetization of the elastic properties of cancellous bone via a parameterized cellular material | Bone tissue mechanical properties and trabecular microarchitecture are the
main factors that determine the biomechanical properties of cancellous bone.
Artificial cancellous microstructures, typically described by a reduced number
of geometrical parameters, can be designed to obtain a mechanical behavior
mimicking that of natural bone. In this work, we assess the ability of the
parameterized microstructure introduced by Kowalczyk (2006) to mimic the
elastic response of cancellous bone. Artificial microstructures are compared
with actual bone samples in terms of elasticity matrices and their symmetry
classes. The capability of the parameterized microstructure to combine the
dominant isotropic, hexagonal, tetragonal and orthorhombic symmetry classes in
the proportions present in the cancellous bone is shown. Based on this finding,
two optimization approaches are devised to find the geometrical parameters of
the artificial microstructure that better mimics the elastic response of a
target natural bone specimen: a Sequential Quadratic Programming algorithm that
minimizes the norm of the difference between the elasticity matrices, and a
Pattern Search algorithm that minimizes the difference between the symmetry
class decompositions. The pattern search approach is found to produce the best
results. The performance of the method is demonstrated via analyses for 146
bone samples.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extended Vertical Lists for Temporal Pattern Mining from Multivariate Time Series | Temporal Pattern Mining (TPM) is the problem of mining predictive complex
temporal patterns from multivariate time series in a supervised setting. We
develop a new method called the Fast Temporal Pattern Mining with Extended
Vertical Lists. This method utilizes an extension of the Apriori property which
requires a more complex pattern to appear within records only at places where
all of its subpatterns are detected as well. The approach is based on a novel
data structure called the Extended Vertical List that tracks positions of the
first state of the pattern inside records. Extensive computational results
indicate that the new method performs significantly faster than the previous
version of the algorithm for TPM. However, the speed-up comes at the expense of
memory usage.
| 0 | 0 | 0 | 1 | 0 | 0 |
PowerAlert: An Integrity Checker using Power Measurement | We propose PowerAlert, an efficient external integrity checker for untrusted
hosts. Current attestation systems suffer from shortcomings in requiring
a complete checksum of the code segment, being static, using timing information
sourced from the untrusted machine, or using timing information with high
error (network round-trip time). We address those shortcomings by (1) using
power measurements from the host to ensure that the checking code is executed
and (2) checking a subset of the kernel space over a long period of time. We
compare the power measurement against a learned power model of the execution of
the machine and validate that the execution was not tampered with. Finally, PowerAlert
diversifies the integrity checking program to prevent the attacker from
adapting. We implement a prototype of PowerAlert using a Raspberry Pi and
evaluate the performance of the integrity checking program generation. We model
the interaction between PowerAlert and an attacker as a game. We study the
effectiveness of the random initiation strategy in deterring the attacker. The
study shows that PowerAlert forces the attacker to trade off stealthiness for the
risk of detection, while still maintaining an acceptable probability of
detection given the long lifespan of stealthy attacks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Extension complexities of Cartesian products involving a pyramid | It is an open question whether the linear extension complexity of the
Cartesian product of two polytopes P, Q is the sum of the extension
complexities of P and Q. We give an affirmative answer to this question for the
case that one of the two polytopes is a pyramid.
| 1 | 0 | 1 | 0 | 0 | 0 |
Speech recognition for medical conversations | In this work we explored building automatic speech recognition models for
transcribing doctor-patient conversations. We collected a large-scale dataset of
clinical conversations ($14,000$ hr), designed the task to represent the
real-world scenario, and explored several alignment approaches to iteratively improve
data quality. We explored both CTC and LAS systems for building speech
recognition models. The LAS was more resilient to noisy data, while CTC required
more data cleanup. A detailed analysis is provided for understanding the
performance for clinical tasks. Our analysis showed the speech recognition
models performed well on important medical utterances, while errors occurred in
casual conversations. Overall we believe the resulting models can provide
reasonable quality in practice.
| 1 | 0 | 0 | 1 | 0 | 0 |
Changing users' security behaviour towards security questions: A game based learning approach | Fallback authentication is used to retrieve forgotten passwords. Security
questions are one of the main techniques used to conduct fallback
authentication. In this paper, we propose a serious game design that uses
system-generated security questions with the aim of improving the usability of
fallback authentication. For this purpose, we adopted the popular picture-based
"4 Pics 1 word" mobile game. This game was selected because of its use of
pictures and cues, which previous psychology research found to be crucial to
aid memorability. This game asks users to pick the word that relates to the
given pictures. We then customized this game by adding features which help
maximize the following memory retrieval skills: (a) verbal cues - by providing
hints with verbal descriptions, (b) spatial cues - by maintaining the same
order of pictures, (c) graphical cues - by showing 4 images for each challenge,
(d) interactivity/engaging nature of the game.
| 1 | 0 | 0 | 0 | 0 | 0 |
Understanding Web Archiving Services and Their (Mis)Use on Social Media | Web archiving services play an increasingly important role in today's
information ecosystem, by ensuring the continuing availability of information,
or by deliberately caching content that might get deleted or removed. Among
these, the Wayback Machine has been proactively archiving, since 2001, versions
of a large number of Web pages, while newer services like archive.is allow
users to create on-demand snapshots of specific Web pages, which serve as time
capsules that can be shared across the Web. In this paper, we present a
large-scale analysis of Web archiving services and their use on social media,
shedding light on the actors involved in this ecosystem, the content that gets
archived, and how it is shared. We crawl and study: 1) 21M URLs from
archive.is, spanning almost two years, and 2) 356K archive.is plus 391K Wayback
Machine URLs that were shared on four social networks: Reddit, Twitter, Gab,
and 4chan's Politically Incorrect board (/pol/) over 14 months. We observe that
news and social media posts are the most common types of content archived,
likely due to their perceived ephemeral and/or controversial nature. Moreover,
URLs of archiving services are extensively shared on "fringe" communities
within Reddit and 4chan to preserve possibly contentious content. Lastly, we
find evidence of moderators nudging or even forcing users to use archives,
instead of direct links, for news sources with opposing ideologies, potentially
depriving them of ad revenue.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mining a Sub-Matrix of Maximal Sum | Biclustering techniques have been widely used to identify homogeneous
subgroups within large data matrices, such as subsets of genes similarly
expressed across subsets of patients. Mining a max-sum sub-matrix is a related
but distinct problem in which one looks for a (not necessarily contiguous)
rectangular sub-matrix with a maximal sum of its entries. Le Van et al. (Ranked
Tiling, 2014) already illustrated its applicability to gene expression analysis
and addressed it with a constraint programming (CP) approach combined with
large neighborhood search (CP-LNS). In this work, we exhibit some key
properties of this NP-hard problem and define a bounding function such that
larger problems can be solved in reasonable time. Two different algorithms are
proposed in order to exploit the highlighted characteristics of the problem: a
CP approach with a global constraint (CPGC) and mixed integer linear
programming (MILP). Practical experiments conducted both on synthetic and real
gene expression data exhibit the characteristics of these approaches and their
relative benefits over the original CP-LNS method. Overall, the CPGC approach
tends to be the fastest to produce a good solution. Yet, the MILP formulation
is arguably the easiest to formulate and can also be competitive.
| 1 | 0 | 0 | 1 | 0 | 0 |
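A brute-force reference for the max-sum sub-matrix problem (not the paper's CP or MILP formulations), built on the structural fact that once the column subset is fixed, the optimal rows are exactly those whose restricted sum is positive. Function and parameter names are illustrative.

```python
import itertools
import numpy as np

def max_sum_submatrix(M):
    """Enumerate column subsets; for fixed columns, pick rows with positive restricted sum.
    Exponential in the number of columns -- only for tiny instances / sanity checks."""
    n_rows, n_cols = M.shape
    best = (0.0, (), ())                       # empty sub-matrix has sum 0
    for k in range(1, n_cols + 1):
        for cols in itertools.combinations(range(n_cols), k):
            row_sums = M[:, cols].sum(axis=1)
            rows = tuple(np.flatnonzero(row_sums > 0))
            total = row_sums[list(rows)].sum()
            if total > best[0]:
                best = (total, rows, cols)
    return best

rng = np.random.default_rng(4)
M = rng.normal(size=(8, 6))
print(max_sum_submatrix(M))   # (best sum, selected rows, selected columns)
```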
Training Deep Networks without Learning Rates Through Coin Betting | Deep learning methods achieve state-of-the-art performance in many
application scenarios. Yet, these methods require a significant amount of
hyperparameter tuning in order to achieve the best results. In particular,
tuning the learning rates in the stochastic optimization process is still one
of the main bottlenecks. In this paper, we propose a new stochastic gradient
descent procedure for deep networks that does not require any learning rate
setting. Contrary to previous methods, we do not adapt the learning rates, nor
do we make use of the assumed curvature of the objective function. Instead, we
reduce the optimization process to a game of betting on a coin and propose a
learning-rate-free optimal algorithm for this scenario. Theoretical convergence
is proven for convex and quasi-convex functions and empirical evidence shows
the advantage of our algorithm over popular stochastic gradient algorithms.
| 1 | 0 | 1 | 1 | 0 | 0 |
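A hedged 1-D sketch of the coin-betting view of optimization, using a KT-style bettor in the spirit of the paper but not its exact deep-network algorithm: the negative subgradient, assumed bounded in $[-1, 1]$, plays the role of the coin outcome, the iterate is the bet, and no learning rate appears anywhere.

```python
import numpy as np

def kt_coin_betting(subgrad, n_rounds=100_000, eps=1.0):
    wealth, sum_c, avg = eps, 0.0, 0.0
    for t in range(1, n_rounds + 1):
        x = (sum_c / t) * wealth      # bet a signed fraction of the current wealth
        c = -subgrad(x)               # coin outcome: negative subgradient, |c| <= 1
        wealth += c * x               # payoff of the bet
        sum_c += c
        avg += (x - avg) / t          # the averaged iterate carries the guarantee
    return avg

# Minimize f(x) = |x - 3|, whose subgradient sign(x - 3) is bounded in [-1, 1].
print(kt_coin_betting(lambda x: np.sign(x - 3.0)))   # close to 3
```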
Joint Beamforming and Antenna Selection for Sum Rate Maximization in Cognitive Radio Networks | This letter studies joint transmit beamforming and antenna selection at a
secondary base station (BS) with multiple primary users (PUs) in an underlay
cognitive radio multiple-input single-output broadcast channel. The objective
is to maximize the sum rate subject to the secondary BS transmit power, minimum
required rates for secondary users, and PUs' interference power constraints.
The utility function of interest is nonconcave and the involved constraints are
nonconvex, so this problem is hard to solve. Nevertheless, we propose a new
iterative algorithm that finds local optima at the least. We use an inner
approximation method to construct and solve a simple convex quadratic program
of moderate dimension at each iteration of the proposed algorithm. Simulation
results indicate that the proposed algorithm converges quickly and outperforms
existing approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-sequence segmentation via score and higher-criticism tests | We propose local segmentation of multiple sequences sharing a common time- or
location-index, building upon the single sequence local segmentation methods of
Niu and Zhang (2012) and Fang, Li and Siegmund (2016). We also propose reverse
segmentation of multiple sequences that is new even in the single sequence
context. We show that local segmentation estimates change-points consistently
for both single and multiple sequences, and that both methods proposed here
detect signals well, with the reverse segmentation method outperforming a large
number of known segmentation methods on a commonly used single sequence test
scenario. We show that on a recent allele-specific copy number study involving
multiple cancer patients, the simultaneous segmentations of the DNA sequences
of all the patients provide information beyond that obtained by segmentation of
the sequences one at a time.
| 0 | 0 | 1 | 1 | 0 | 0 |
Evidence for structural transition in crystalline tantalum pentoxide films grown by RF magnetron sputtering | We investigate the effect of annealing temperature on the crystalline
structure and physical properties of tantalum-pentoxide films grown by radio
frequency magnetron sputtering. For this purpose, several tantalum films were
deposited and the Ta$_2$O$_5$ crystalline phase was induced by exposing the
samples to heat treatments in air in the temperature range from (575 to
1000)$^\circ$C. Coating characterization was performed using X-ray diffraction,
scanning electron microscopy, Raman spectroscopy and UV-VIS spectroscopy. By
X-ray diffraction analysis we found that a hexagonal Ta$_2$O$_5$ phase
forms at temperatures above $675^\circ$C. As the annealing temperature
rises, we observe peak sharpening and new peaks in the corresponding
diffraction patterns indicating a possible structural transition from hexagonal
to orthorhombic. The microstructure of the films starts with flake-like
structures formed on the surface and evolves, as the temperature is further
increased, to round grains. We found that, according to the features
exhibited in the corresponding spectra, Raman spectroscopy can be sensitive
enough to discriminate between the orthorhombic and hexagonal phases of
Ta$_2$O$_5$. Finally, as the films crystallize the magnitude of the optical
band gap increases from 2.4 eV to the typical reported value of 3.8 eV.
| 0 | 1 | 0 | 0 | 0 | 0 |
Emergence and Reductionism: an awkward Baconian alliance | This article discusses the relationship between emergence and reductionism
from the perspective of a condensed matter physicist. Reductionism and
emergence play an intertwined role in the everyday life of the physicist, yet
we rarely stop to contemplate their relationship: indeed, the two are often
regarded as conflicting world-views of science. I argue that in practice, they
complement one another, forming an awkward alliance in a fashion envisioned by
the Renaissance scientist, Francis Bacon. Looking at the historical record in
classical and quantum physics, I discuss how emergence fits into a reductionist
view of nature. Often, a deep understanding of reductionist physics depends on
the understanding of its emergent consequences. Thus the concept of energy was
unknown to Newton, Leibnitz, Lagrange or Hamilton, because they did not
understand heat. Similarly, the understanding of the weak force awaited an
understanding of the Meissner effect in superconductivity. Emergence can thus
be likened to an encrypted consequence of reductionism. Taking examples from
current research, including topological insulators and strange metals, I show
that the convection between emergence and reductionism continues to provide a
powerful driver for frontier scientific research, linking the lab with the
cosmos.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pipelined Parallel FFT Architecture | In this paper, an optimized efficient VLSI architecture of a pipeline Fast
Fourier transform (FFT) processor capable of producing the reverse output order
sequence is presented. The paper presents a Radix-2 multipath delay architecture for
FFT calculation. Implementing the FFT in hardware is critical because the
computation requires a large number of butterfly operations, i.e. a large number of
multipliers, which increases the amount of hardware and therefore its cost.
Multiplier operations are also slow, which limits the operating speed of the
architecture. An optimized VLSI implementation of the FFT algorithm is presented
in this paper. The architecture is pipelined to increase the speed of
operation, and two-level parallel processing is used to increase it further.
| 1 | 0 | 1 | 0 | 0 | 0 |
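A software reference (in Python, not HDL) for the radix-2 decimation-in-time butterfly structure that a pipelined architecture maps into $\log_2 N$ hardware stages; this is a functional sketch of the algorithm, not the VLSI design itself.

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT. Input length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor times odd branch
        out[k] = even[k] + tw                             # butterfly: upper output
        out[k + n // 2] = even[k] - tw                    # butterfly: lower output
    return out

print([round(abs(v), 3) for v in fft_radix2([1, 1, 1, 1, 0, 0, 0, 0])])
```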
Global smoothing of a subanalytic set | We give rather simple answers to two long-standing questions in real-analytic
geometry, on global smoothing of a subanalytic set, and on transformation of a
proper real-analytic mapping to a mapping with equidimensional fibres by global
blowings-up of the target. These questions are related: a positive answer to
the second can be used to reduce the first to the simpler semianalytic case. We
show that the second question has a negative answer, in general, and that the
first problem nevertheless has a positive solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
Coupled Compound Poisson Factorization | We present a general framework, the coupled compound Poisson factorization
(CCPF), to capture the missing-data mechanism in extremely sparse data sets by
coupling a hierarchical Poisson factorization with an arbitrary data-generating
model. We derive a stochastic variational inference algorithm for the resulting
model and, as examples of our framework, implement three different
data-generating models---a mixture model, linear regression, and factor
analysis---to robustly model non-random missing data in the context of
clustering, prediction, and matrix factorization. In all three cases, we test
our framework against models that ignore the missing-data mechanism on large
scale studies with non-random missing data, and we show that explicitly
modeling the missing-data mechanism substantially improves the quality of the
results, as measured using data log likelihood on a held-out test set.
| 1 | 0 | 0 | 1 | 0 | 0 |
A model bridging chimera state and explosive synchronization | Global and partial synchronization are the two distinctive forms of
synchronization in coupled oscillators and have been well studied in the past
decades. Recent attention on synchronization is focused on the chimera state
(CS) and explosive synchronization (ES), but little attention has been paid to
their relationship. We here study this topic by presenting a model to bridge
these two phenomena, which consists of two groups of coupled oscillators whose
coupling strength is adaptively controlled by a local order parameter. We
find that this model displays either CS or ES in two limits. In between the two
limits, this model exhibits both CS and ES, where CS can be observed for a
fixed coupling strength and ES appears when the coupling is increased
adiabatically. Moreover, we show both theoretically and numerically that there
are a variety of CS basin patterns for the case of identical oscillators,
depending on the distributions of both the initial order parameters and the
initial average phases. This model suggests a way to easily observe CS, in
contrast to other models having some (weak or strong) dependence on initial
conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spatial modeling of shot conversion in soccer to single out goalscoring ability | Goals are the results of pin-point shots, and when, how and where to shoot is a
pivotal decision in soccer. The main contribution of this study is two-fold.
At first, after showing that there exists high spatial correlation in the data
of shots across games, we introduce a spatial process in the error structure to
model the probability of conversion from a shot depending on positional and
situational covariates. The model is developed using a full Bayesian framework.
Secondly, based on the proposed model, we define two new measures that can
appropriately quantify the impact of an individual in soccer, by evaluating the
positioning senses and shooting abilities of the players. As a practical
application, the method is implemented on Major League Soccer data from 2016/17
season.
| 0 | 0 | 0 | 1 | 0 | 0 |
Metrics for Formal Structures, with an Application to Kripke Models and their Dynamics | This report introduces and investigates a family of metrics on sets of
pointed Kripke models. The metrics are generalizations of the Hamming distance
applicable to countably infinite binary strings and, by extension, logical
theories or semantic structures. We first study the topological properties of
the resulting metric spaces. A key result provides sufficient conditions for
spaces having the Stone property, i.e., being compact, totally disconnected and
Hausdorff. Second, we turn to mappings, where it is shown that a widely used
type of model transformations, product updates, give rise to continuous maps in
the induced topology.
| 0 | 0 | 1 | 0 | 0 | 0 |
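A sketch of the basic idea named above, a Hamming-distance generalization for countably infinite binary strings, using the simplest convergent weighting $2^{-(n+1)}$ at position $n$. The report's family of metrics is more general, and the encoding of pointed Kripke models or theories into such strings is assumed rather than shown here.

```python
from itertools import islice

def weighted_hamming(s1, s2, depth=64):
    """Weighted Hamming distance on (prefixes of) infinite binary strings:
    a disagreement at position n contributes 2^-(n+1), so the series converges."""
    return sum(2.0 ** -(n + 1)
               for n, (a, b) in enumerate(islice(zip(s1, s2), depth))
               if a != b)

def ones():            # the constant-1 infinite string
    while True:
        yield 1

def alternating():     # 1, 0, 1, 0, ...
    while True:
        yield 1
        yield 0

print(weighted_hamming(ones(), alternating()))   # disagreements at odd positions: 1/4 + 1/16 + ... = 1/3
```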
Hyperbolic Dispersion Dominant Regime Identified through Spontaneous Emission Variations near Metamaterial Interfaces | Surface plasmon polariton, hyperbolic dispersion of energy and momentum, and
emission interference provide opportunities to control photoluminescence
properties. However, the interplays between these regimes need to be understood
to take advantage of them in optoelectronic applications. Here, we investigate
broadband variations induced by hyperbolic metamaterial (HMM) multilayer
nanostructures on the spontaneous emission of selected organic chromophores.
Experimental and calculated spontaneous emission lifetimes are shown to vary
non-monotonically near HMM interfaces. With the number of HMM pairs used as the
analysis parameter, the lifetime is shown to be independent of the number of
pairs in the surface plasmon polariton and emission interference dominant
regimes, while it decreases in the hyperbolic dispersion dominant regime. We also show that the
spontaneous emission lifetime is similarly affected by transverse positive and
transverse negative HMMs. This work has broad implications on the rational
design of functional photonic surfaces to control the luminescence of
semiconductor chromophores.
| 0 | 1 | 0 | 0 | 0 | 0 |
Legendrian Satellites and Decomposable Concordances | We investigate the ramifications of the Legendrian satellite construction on
the relation of Lagrangian cobordism between Legendrian knots. Under a simple
hypothesis, we construct a Lagrangian concordance between two Legendrian
satellites by stacking up a sequence of elementary cobordisms. This
construction narrows the search for "non-decomposable" Lagrangian cobordisms
and yields new families of decomposable Lagrangian slice knots. Finally, we
show that the maximum Thurston-Bennequin number of a smoothly slice knot
provides an obstruction to any Legendrian satellite of that knot being
Lagrangian slice.
| 0 | 0 | 1 | 0 | 0 | 0 |
Plausible Deniability for Privacy-Preserving Data Synthesis | Releasing full data records is one of the most challenging problems in data
privacy. On the one hand, many of the popular techniques such as data
de-identification are problematic because of their dependence on the background
knowledge of adversaries. On the other hand, rigorous methods such as the
exponential mechanism for differential privacy are often computationally
impractical to use for releasing high dimensional data or cannot preserve high
utility of original data due to their extensive data perturbation.
This paper presents a criterion called plausible deniability that provides a
formal privacy guarantee, notably for releasing sensitive datasets: an output
record can be released only if a certain amount of input records are
indistinguishable, up to a privacy parameter. This notion does not depend on
the background knowledge of an adversary. Also, it can efficiently be checked
by privacy tests. We present mechanisms to generate synthetic datasets with
similar statistical properties to the input data and the same format. We study
this technique both theoretically and experimentally. A key theoretical result
shows that, with proper randomization, the plausible deniability mechanism
generates differentially private synthetic data. We demonstrate the efficiency
of this generative technique on a large dataset; it is shown to preserve the
utility of original data with respect to various statistical analysis and
machine learning measures.
| 1 | 0 | 0 | 1 | 0 | 0 |
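A hedged sketch of the release test described above: a candidate synthetic record passes only if enough input records could plausibly have generated it, within a multiplicative probability factor of the actual seed. Names, thresholds, and the toy generative model are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def plausibly_deniable(y, data, seed_idx, log_p, k=10, gamma=2.0):
    """Release y only if at least k input records could have generated it with
    probability within a factor gamma of the probability under the actual seed."""
    lp_seed = log_p(y, data[seed_idx])
    close = sum(1 for x in data if abs(log_p(y, x) - lp_seed) <= np.log(gamma))
    return close >= k

# Toy generative model: the synthetic record is the seed record plus Gaussian noise.
rng = np.random.default_rng(5)
data = rng.normal(size=(500, 3))
log_p = lambda y, x: -0.5 * np.sum((y - x) ** 2)   # log N(y; x, I) up to a constant
seed = 7
y = data[seed] + rng.normal(size=3)
print(plausibly_deniable(y, data, seed, log_p))    # True when enough plausible seeds exist
```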
The Co-Evolution of Test Maintenance and Code Maintenance through the lens of Fine-Grained Semantic Changes | Automatic testing is a widely adopted technique for improving software
quality. Software developers add, remove and update test methods and test
classes as part of the software development process as well as during the
evolution phase, following the initial release. In this work we conduct a large
scale study of 61 popular open source projects and report the relationships we
have established between test maintenance, production code maintenance, and
semantic changes (e.g., statement added, method removed) performed in
developers' commits.
We build predictive models, and show that the number of tests in a software
project can be well predicted by employing code maintenance profiles (i.e., how
many commits were performed in each of the maintenance activities: corrective,
perfective, adaptive). Our findings also reveal that more often than not,
developers perform code fixes without performing complementary test maintenance
in the same commit (e.g., update an existing test or add a new one). When
developers do perform test maintenance, it is likely to be affected by the
semantic changes they perform as part of their commit.
Our work is based on studying 61 popular open source projects, comprising
over 240,000 commits consisting of over 16,000,000 semantic change type
instances, performed by over 4,000 software engineers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning | Autonomous vehicles (AVs) are on the road. To safely and efficiently interact
with other road participants, AVs have to accurately predict the behavior of
surrounding vehicles and plan accordingly. Such prediction should be
probabilistic, to address the uncertainties in human behavior. Such prediction
should also be interactive, since the distribution over all possible
trajectories of the predicted vehicle depends not only on historical
information, but also on future plans of other vehicles that interact with it.
To achieve such interaction-aware predictions, we propose a probabilistic
prediction approach based on hierarchical inverse reinforcement learning (IRL).
First, we explicitly consider the hierarchical trajectory-generation process of
human drivers involving both discrete and continuous driving decisions. Based
on this, the distribution over all future trajectories of the predicted vehicle
is formulated as a mixture of distributions partitioned by the discrete
decisions. Then we apply IRL hierarchically to learn the distributions from
real human demonstrations. A case study for the ramp-merging driving scenario
is provided. The quantitative results show that the proposed approach can
accurately predict both the discrete driving decisions, such as yield or pass,
and the continuous trajectories.
| 1 | 0 | 0 | 1 | 0 | 0 |
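The mixture structure described in the abstract above can be written compactly as follows (notation chosen here for illustration, not taken from the paper): with $\xi$ a future trajectory of the predicted vehicle, $\mathbf{h}$ the historical and interaction information, and $\mathcal{D}$ the set of discrete decisions (e.g., yield or pass),
\[ p(\xi \mid \mathbf{h}) \;=\; \sum_{d \in \mathcal{D}} p(d \mid \mathbf{h})\, p(\xi \mid d, \mathbf{h}), \]
where both the discrete decision distribution $p(d \mid \mathbf{h})$ and the continuous trajectory distributions $p(\xi \mid d, \mathbf{h})$ are learned hierarchically via IRL from human demonstrations.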
Fast-slow asymptotics for a Markov chain model of fast sodium current | We explore the feasibility of using fast-slow asymptotics to eliminate the
computational stiffness of the discrete-state, continuous-time deterministic
Markov chain models of ionic channels underlying cardiac excitability. We focus
on a Markov chain model of the fast sodium current, and investigate its
asymptotic behaviour with respect to small parameters identified in different
ways.
| 0 | 1 | 0 | 0 | 0 | 0 |
Beating the bookies with their own numbers - and how the online sports betting market is rigged | The online sports gambling industry employs teams of data analysts to build
forecast models that turn the odds at sports games in their favour. While
several betting strategies have been proposed to beat bookmakers, from expert
prediction models and arbitrage strategies to odds bias exploitation, their
returns have been inconsistent and it remains to be shown that a betting
strategy can outperform the online sports betting market. We designed a
strategy to beat football bookmakers with their own numbers. Instead of
building a forecasting model to compete with bookmakers' predictions, we
exploited the probability information implicit in the odds publicly available
in the marketplace to find bets with mispriced odds. Our strategy proved
profitable in a 10-year historical simulation using closing odds, a 6-month
historical simulation using minute-to-minute odds, and a 5-month period during
which we staked real money with the bookmakers (we made code, data and models
publicly available). Our results demonstrate that the football betting market
is inefficient - bookmakers can be consistently beaten across thousands of
games in both simulated environments and real-life betting. We provide a
detailed description of our betting experience to illustrate how the sports
gambling industry compensates these market inefficiencies with discriminatory
practices against successful clients.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fixation probabilities for the Moran process in evolutionary games with two strategies: graph shapes and large population asymptotics | This paper is based on the complete classification of evolutionary scenarios
for the Moran process with two strategies given by Taylor et al. (B. Math.
Biol. 66(6): 1621--1644, 2004). Their classification is based on whether each
strategy is a Nash equilibrium and whether the fixation probability for a
single individual of each strategy is larger or smaller than its value for
neutral evolution. We improve on this analysis by showing that each
evolutionary scenario is characterized by a definite graph shape for the
fixation probability function. A second class of results deals with the
behavior of the fixation probability when the population size tends to
infinity. We develop asymptotic formulae that approximate the fixation
probability in this limit and conclude that some of the evolutionary scenarios
cannot exist when the population size is large.
| 0 | 0 | 0 | 0 | 1 | 0 |
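For context, the fixation probability underlying the classification above is, in the standard birth-death (Moran-type) formulation (textbook form, not necessarily the paper's notation),
\[ \rho_A \;=\; \left( 1 + \sum_{k=1}^{N-1} \prod_{j=1}^{k} \frac{T_j^{-}}{T_j^{+}} \right)^{-1}, \]
where $N$ is the population size and $T_j^{\pm}$ are the probabilities that the number of $A$-individuals increases or decreases from $j$ in one step; the neutral benchmark is $\rho = 1/N$, against which each strategy's fixation probability is compared.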
FPGA Design Techniques for Stable Cryogenic Operation | In this paper we show how a deep-submicron FPGA can be modified to operate at
extremely low temperatures through modifications in the supporting hardware and
in the firmware programming it. Though FPGAs are not designed to operate at a
few Kelvin, it is possible to do so by virtue of the extremely high doping
levels found in deep-submicron CMOS technology nodes. First, any PCB component
that does not conform to this requirement is removed. Most decoupling
capacitor types, as well as voltage regulators, are not well behaved at
cryogenic temperatures, calling for an ad-hoc solution to stabilize the FPGA
supply voltage, especially for sensitive applications. Therefore, we have
designed a firmware that enforces a constant power consumption, so as to
stabilize the supply voltage in the interior of the FPGA chip. The FPGA is
powered with a supply at several meters distance, causing significant IR drop
and thus fluctuations on the local supply voltage. To achieve the
stabilization, the variation in digital logic speed, which directly corresponds
to changes in supply voltage, is constantly measured and corrected for through
a tunable oscillator farm, implemented on the FPGA. The method is versatile and
robust, enabling seamless porting to other FPGA families and configurations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Curvature-aided Incremental Aggregated Gradient Method | We propose a new algorithm for finite sum optimization which we call the
curvature-aided incremental aggregated gradient (CIAG) method. Motivated by the
problem of training a classifier for a $d$-dimensional problem, where the number
of training data is $m$ and $m \gg d \gg 1$, the CIAG method seeks to
accelerate incremental aggregated gradient (IAG) methods using aids from the
curvature (or Hessian) information, while avoiding the evaluation of matrix
inverses required by the incremental Newton (IN) method. Specifically, our idea
is to exploit the incrementally aggregated Hessian matrix to trace the full
gradient vector at every incremental step, therefore achieving an improved
linear convergence rate over the state-of-the-art IAG methods. For strongly
convex problems, the fast linear convergence rate requires the objective
function to be close to quadratic, or the initial point to be close to optimal
solution. Importantly, we show that running one iteration of the CIAG method
yields the same improvement to the optimality gap as running one iteration of
the full gradient method, while the complexity is $O(d^2)$ for CIAG and $O(md)$
for the full gradient. Overall, the CIAG method strikes a balance between the
high computational complexity of incremental Newton-type methods and the slow
convergence of the IAG method. Our numerical results support the theoretical
findings and show that the CIAG method often converges in far fewer iterations
than IAG, and
requires much shorter running time than IN when the problem dimension is high.
| 1 | 0 | 0 | 1 | 0 | 0 |
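A rough sketch of the curvature-aided idea described above is given below: the stored component gradients and Hessians are reused with first-order corrections to track the full gradient. The exact CIAG update rule, component-selection order, and step-size conditions are in the paper, so this is only an illustrative approximation.

```python
import numpy as np

def ciag(grads, hessians, x0, step, n_iters):
    """Rough sketch of a curvature-aided incremental aggregated gradient loop.

    grads[i](x) and hessians[i](x) evaluate the gradient / Hessian of the i-th
    component function at x.  One component is refreshed per iteration; the
    stored gradients and Hessians of the other components are reused, with
    Taylor corrections approximating the full gradient at the current iterate."""
    m = len(grads)
    x = x0.copy()
    xs = np.tile(x0, (m, 1))                            # last evaluation point per component
    g = np.array([grads[i](x0) for i in range(m)])      # stored component gradients
    H = np.array([hessians[i](x0) for i in range(m)])   # stored component Hessians
    for k in range(n_iters):
        i = k % m                                       # cyclic component selection
        xs[i], g[i], H[i] = x.copy(), grads[i](x), hessians[i](x)
        # curvature-corrected estimate of the full gradient at the current x
        full_grad = g.sum(axis=0) + np.einsum('ijk,ik->j', H, x - xs)
        x = x - step * full_grad
    return x
```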
The composition of Solar system asteroids and Earth/Mars moons, and the Earth-Moon composition similarity | [abridged] In the typical giant-impact scenario for the Moon formation most
of the Moon's material originates from the impactor. Any Earth-impactor
composition difference should, therefore, correspond to a comparable Earth-Moon
composition difference. Analysis of Moon rocks shows a close Earth-Moon
composition similarity, posing a challenge for the giant-impact scenario, given
that impactors were thought to significantly differ in composition from the
planets they impact. Here we use a large set of 140 simulations to show that
the composition of impactors could be very similar to that of the planets they
impact; in $4.9\%$-$18.2\%$ ($1.9\%$-$6.7\%$) of the cases the resulting
composition of the Moon is consistent with the observations of
$\Delta^{17}O<15$ ($\Delta^{17}O<6$ ppm). These findings suggest that the
Earth-Moon composition similarity could be explained as arising from the
primordial Earth-impactor composition similarity. Note that although we find
the likelihood for the suggested competing model of very high mass-ratio
impacts (producing significant Earth-impactor composition mixing) is comparable
($<6.7\%$), this scenario also imposes the additional fine-tuned requirement of a
very fast-spinning Earth. Using the same simulations we also explore the
composition of giant-impact formed Mars-moons as well as Vesta-like asteroids.
We find that the Mars-moon composition difference should be large, but smaller
than expected if the moons are captured asteroids. Finally, we find that the
left-over planetesimals ('asteroids') in our simulations are frequently
scattered far away from their initial positions, thus potentially explaining
the mismatch between the current position and composition of the Vesta
asteroid.
| 0 | 1 | 0 | 0 | 0 | 0 |
Maximum Margin Interval Trees | Learning a regression function using censored or interval-valued output data
is an important problem in fields such as genomics and medicine. The goal is to
learn a real-valued prediction function, and the training output labels
indicate an interval of possible values. Whereas most existing algorithms for
this task are linear models, in this paper we investigate learning nonlinear
tree models. We propose to learn a tree by minimizing a margin-based
discriminative objective function, and we provide a dynamic programming
algorithm for computing the optimal solution in log-linear time. We show
empirically that this algorithm achieves state-of-the-art speed and prediction
accuracy in a benchmark of several data sets.
| 1 | 0 | 0 | 1 | 0 | 0 |
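One natural margin-based loss for an interval-valued target, of the flavor the abstract describes, is sketched below; whether it matches the paper's exact objective (including the hinge power and margin handling) has not been verified here.

```python
def interval_hinge_loss(pred, y_low, y_up, margin=1.0, power=2):
    """Illustrative margin-based loss for an interval target [y_low, y_up]
    (either bound may be infinite for censored outputs): zero when the
    prediction lies at least `margin` inside the interval, and a (squared)
    hinge penalty otherwise."""
    lower_violation = max(0.0, (y_low + margin) - pred)   # prediction too small
    upper_violation = max(0.0, pred - (y_up - margin))    # prediction too large
    return lower_violation ** power + upper_violation ** power

# Example: an upper-censored label (-inf, 3.0] with margin 1 penalizes predictions above 2.
print(interval_hinge_loss(2.5, float("-inf"), 3.0))   # 0.25
```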
Provable and practical approximations for the degree distribution using sublinear graph samples | The degree distribution is one of the most fundamental properties used in the
analysis of massive graphs. There is a large literature on graph sampling,
where the goal is to estimate properties (especially the degree distribution)
of a large graph through a small, random sample. The degree distribution
estimation poses a significant challenge, due to its heavy-tailed nature and
the large variance in degrees.
We design a new algorithm, SADDLES, for this problem, using recent
mathematical techniques from the field of sublinear algorithms. The SADDLES
algorithm gives provably accurate outputs for all values of the degree
distribution. For the analysis, we define two fatness measures of the degree
distribution, called the $h$-index and the $z$-index. We prove that SADDLES is
sublinear in the graph size when these indices are large. A corollary of this
result is a provably sublinear algorithm for any degree distribution bounded
below by a power law.
We deploy our new algorithm on a variety of real datasets and demonstrate its
excellent empirical behavior. In all instances, we get extremely accurate
approximations for all values in the degree distribution by observing at most
$1\%$ of the vertices. This is a major improvement over the state-of-the-art
sampling algorithms, which typically sample more than $10\%$ of the vertices to
give comparable results. We also observe that the $h$ and $z$-indices of real
graphs are large, validating our theoretical analysis.
| 1 | 0 | 1 | 0 | 0 | 0 |
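The $h$-index fatness measure mentioned above is presumably the graph analogue of the citation $h$-index; a small sketch of that reading is below (the paper's formal definitions of the $h$- and $z$-indices should be taken as authoritative).

```python
def degree_h_index(degrees):
    """h-index of a degree sequence: the largest h such that at least h
    vertices have degree >= h (graph analogue of the citation h-index)."""
    degs = sorted(degrees, reverse=True)
    h = 0
    for i, d in enumerate(degs, start=1):
        if d >= i:
            h = i
        else:
            break
    return h

# Example: a star on 6 vertices has degree sequence [5, 1, 1, 1, 1, 1] -> h = 1
print(degree_h_index([5, 1, 1, 1, 1, 1]))
```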
Efficient and Robust Polylinear Analysis of Noisy Time Series | A method is proposed to generate an optimal fit of a number of connected
linear trend segments onto time-series data. To be able to efficiently handle
many lines, the method employs a stochastic search procedure to determine
optimal transition point locations. Traditional methods use exhaustive grid
searches, which severely limit the scale of the problems for which they can be
utilized. The proposed approach is tried against time series with severe noise
to demonstrate its robustness, and then it is applied to real medical data as
an illustrative example.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the optimal design of wall-to-wall heat transport | We consider the problem of optimizing heat transport through an
incompressible fluid layer. Modeling passive scalar transport by
advection-diffusion, we maximize the mean rate of total transport by a
divergence-free velocity field. Subject to various boundary conditions and
intensity constraints, we prove that the maximal rate of transport scales
linearly in the r.m.s. kinetic energy and, up to possible logarithmic
corrections, as the $1/3$rd power of the mean enstrophy in the advective
regime. This makes rigorous a previous prediction on the near optimality of
convection rolls for energy-constrained transport. Optimal designs for
enstrophy-constrained transport are significantly more difficult to describe:
we introduce a "branching" flow design with an unbounded number of degrees of
freedom and prove it achieves nearly optimal transport. The main technical tool
behind these results is a variational principle for evaluating the transport of
candidate designs. The principle admits dual formulations for bounding
transport from above and below. While the upper bound is closely related to the
"background method", the lower bound reveals a connection between the optimal
design problems considered herein and other apparently related model problems
from mathematical materials science. These connections serve to motivate
designs.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the decay rate for the wave equation with viscoelastic boundary damping | We consider the wave equation with a boundary condition of memory type. Under
natural conditions on the acoustic impedance $\hat{k}$ of the boundary one can
define a corresponding semigroup of contractions (Desch, Fasangova, Milota,
Probst 2010). With the help of Tauberian theorems we establish energy decay
rates via resolvent estimates on the generator $-\mathcal{A}$ of the semigroup.
We reduce the problem of estimating the resolvent of $-\mathcal{A}$ to the
problem of estimating the resolvent of the corresponding stationary problem.
Under not too strict additional assumptions on $\hat{k}$ we establish an upper
bound on the resolvent. For the wave equation on the interval or the disk we
prove our estimates to be sharp.
| 0 | 0 | 1 | 0 | 0 | 0 |
Code Reuse With Transformation Objects | We present an approach to lightweight datatype-generic programming in the
Objective Caml programming language, aimed at better code reuse. We show that a
large class of transformations usually expressed via recursive functions with
pattern matching can be implemented using a single per-type traversal
function and the set of object-encoded transformations, which we call
transformation objects. Object encoding allows transformations to be modified,
inherited and extended in a conventional object-oriented manner. However, the
data representation is kept untouched which preserves the ability to construct
and pattern-match it in the usual way. Our approach equally works for regular
and polymorphic variant types which makes it possible to combine data types and
their transformations from statically typed and separately compiled components.
We also present an implementation which allows us to automatically derive most
functionality from slightly augmented type descriptions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fixed effects testing in high-dimensional linear mixed models | Many scientific and engineering challenges -- ranging from pharmacokinetic
drug dosage allocation and personalized medicine to marketing mix (4Ps)
recommendations -- require an understanding of the unobserved heterogeneity in
order to develop the best decision-making processes. In this paper, we develop
a hypothesis test and the corresponding p-value for testing for the
significance of the homogeneous structure in linear mixed models. A robust
matching moment construction is used for creating a test that adapts to the
size of the model sparsity. When unobserved heterogeneity at a cluster level is
constant, we show that our test is both consistent and unbiased even when the
dimension of the model is extremely high. Our theoretical results rely on a new
family of adaptive sparse estimators of the fixed effects that do not require
consistent estimation of the random effects. Moreover, our inference results do
not require consistent model selection. We showcase that moment matching can be
extended to nonlinear mixed effects models and to generalized linear mixed
effects models. In numerical and real data experiments, we find that the
developed method is extremely accurate, that it adapts to the size of the
underlying model and is decidedly powerful in the presence of irrelevant
covariates.
| 1 | 0 | 1 | 1 | 0 | 0 |
Ubiquitous quasi-Fuchsian surfaces in cusped hyperbolic 3-manifolds | This paper proves that every finite volume hyperbolic 3-manifold M contains a
ubiquitous collection of closed, immersed, quasi-Fuchsian surfaces. These
surfaces are ubiquitous in the sense that their preimages in the universal
cover separate any pair of disjoint, non-asymptotic geodesic planes. The proof
relies in a crucial way on the corresponding theorem of Kahn and Markovic for
closed 3-manifolds. As a corollary of this result and a companion statement
about surfaces with cusps, we recover Wise's theorem that the fundamental group
of M acts freely and cocompactly on a CAT(0) cube complex.
| 0 | 0 | 1 | 0 | 0 | 0 |
Generic Cospark of a Matrix Can Be Computed in Polynomial Time | The cospark of a matrix is the cardinality of the sparsest vector in the
column space of the matrix. Computing the cospark of a matrix is well known to
be an NP hard problem. Given the sparsity pattern (i.e., the locations of the
non-zero entries) of a matrix, if the non-zero entries are drawn from
independently distributed continuous probability distributions, we prove that
the cospark of the matrix equals, with probability one, a particular number
termed the generic cospark of the matrix. The generic cospark also equals
the maximum cospark of matrices consistent with the given sparsity pattern. We
prove that the generic cospark of a matrix can be computed in polynomial time,
and offer an algorithm that achieves this.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generative Adversarial Network based Speaker Adaptation for High Fidelity WaveNet Vocoder | Neural network based vocoders, typically the WaveNet, have achieved
spectacular performance for text-to-speech (TTS) in recent years. Although the
state-of-the-art parallel WaveNet has addressed the issue of real-time waveform
generation, problems remain. Firstly, due to the noisy input signal of
the model, there is still a gap between the quality of generated and natural
waveforms. Secondly, a parallel WaveNet is trained under a distilled training
framework, which makes it tedious to adapt a well-trained model to a new
speaker. To address these two problems, this paper proposes an end-to-end
adaptation method based on the generative adversarial network (GAN), which can
reduce the computational cost for the training of new speaker adaptation. Our
subjective experiments show that the proposed training method can further
reduce the quality gap between generated and natural waveforms.
| 1 | 0 | 0 | 0 | 0 | 0 |
CitizenGrid: An Online Middleware for Crowdsourcing Scientific Research | In the last few years, contributions of the general public to scientific
projects have increased due to the advancement of communication and computing
technologies. The Internet has played an important role in connecting scientists and
volunteers who are interested in participating in their scientific projects.
However, despite potential benefits, only a limited number of crowdsourcing
based large-scale science (citizen science) projects have been deployed due to
the complexity involved in setting them up and running them. In this paper, we
present CitizenGrid - an online middleware platform which addresses security
and deployment complexity issues by making use of cloud computing and
virtualisation technologies. CitizenGrid incentivises scientists to make their
small-to-medium scale applications available as citizen science projects by: 1)
providing a directory of projects through a web-based portal that makes
applications easy to discover; 2) providing flexibility to participate in,
monitor, and control multiple citizen science projects from a common interface;
3) supporting diverse categories of citizen science projects. The paper
describes the design, development and evaluation of CitizenGrid and its use
cases.
| 1 | 0 | 0 | 0 | 0 | 0 |
Singular sensitivity in a Keller-Segel-fluid system | In bounded smooth domains $\Omega\subset\mathbb{R}^N$, $N\in\{2,3\}$,
considering the chemotaxis--fluid system
\[ \begin{cases} n_t + u\cdot \nabla n = \Delta n - \chi \nabla \cdot\left(\frac{n}{c}\nabla c\right), \\
c_t + u\cdot \nabla c = \Delta c - c + n, \\
u_t + \kappa (u\cdot \nabla) u = \Delta u + \nabla P + n\nabla \Phi
\end{cases} \] with singular sensitivity, we prove global existence
of classical solutions for given $\Phi\in C^2(\bar{\Omega})$, for $\kappa=0$
(Stokes-fluid) if $N=3$ and $\kappa\in\{0,1\}$ (Stokes- or Navier--Stokes
fluid) if $N=2$ and under the condition that \[
0<\chi<\sqrt{\frac{2}{N}}. \]
| 0 | 0 | 1 | 0 | 0 | 0 |
Asymptotic theory of multiple-set linear canonical analysis | This paper deals with asymptotics for multiple-set linear canonical analysis
(MSLCA). A definition of this analysis, that adapts the classical one to the
context of Euclidean random variables, is given and properties of the related
canonical coefficients are derived. Then, estimators of the MSLCA's elements,
based on empirical covariance operators, are proposed and asymptotics for these
estimators are obtained. More precisely, we prove their consistency and we
obtain asymptotic normality for the estimator of the operator that gives MSLCA,
and also for the estimator of the vector of canonical coefficients. These
results are then used to obtain a test for mutual non-correlation between the
involved Euclidean random variables.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Learning to Improve Breast Cancer Early Detection on Screening Mammography | The rapid development of deep learning, a family of machine learning
techniques, has spurred much interest in its application to medical imaging
problems. Here, we develop a deep learning algorithm that can accurately detect
breast cancer on screening mammograms using an "end-to-end" training approach
that efficiently leverages training datasets with either complete clinical
annotation or only the cancer status (label) of the whole image. In this
approach, lesion annotations are required only in the initial training stage,
and subsequent stages require only image-level labels, eliminating the reliance
on rarely available lesion annotations. Our all-convolutional network method
for classifying screening mammograms attained excellent performance in
comparison with previous methods. On an independent test set of digitized film
mammograms from Digital Database for Screening Mammography (DDSM), the best
single model achieved a per-image AUC of 0.88, and four-model averaging
improved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On a
validation set of full-field digital mammography (FFDM) images from the
INbreast database, the best single model achieved a per-image AUC of 0.95, and
four-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity:
96.1%). We also demonstrate that a whole image classifier trained using our
end-to-end approach on the DDSM digitized film mammograms can be transferred to
INbreast FFDM images using only a subset of the INbreast data for fine-tuning
and without further reliance on the availability of lesion annotations. These
findings show that automatic deep learning methods can be readily trained to
attain high accuracy on heterogeneous mammography platforms, and hold
tremendous promise for improving clinical tools to reduce false positive and
false negative screening mammography results.
| 1 | 0 | 0 | 1 | 0 | 0 |
Nviz - A General Purpose Visualization Tool for Wireless Sensor Networks | In a Wireless Sensor Network (WSN), data manipulation and representation are a
crucial part and can take a lot of time to develop from scratch. Although
various visualization tools have been created for certain projects so far,
these tools can only be used in certain scenarios, due to their hard-coded
packet formats and network properties. To speed up the development process, a
visualization tool which can adapt to any kind of WSN is essential.
For this purpose, a general-purpose visualization tool - NViz, which can
represent and visualize data for any kind of WSN, is proposed. NViz allows
users to set their network's properties and packet formats through XML files.
Based on the defined properties, users can choose their meaning and let NViz
represent the data accordingly. Furthermore, a better Replay mechanism, which
lets researchers and developers debug their WSN easily, is also integrated into
this tool. NViz is designed based on a layered architecture which allows for
clear and well-defined interrelationships and interfaces between each
component.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transient photon echoes from donor-bound excitons in ZnO epitaxial layers | The coherent optical response from 140~nm and 65~nm thick ZnO epitaxial
layers is studied using transient four-wave-mixing spectroscopy with picosecond
temporal resolution. Resonant excitation of neutral donor-bound excitons
results in two-pulse and three-pulse photon echoes. For the donor-bound A
exciton (D$^0$X$_\text{A}$) at a temperature of 1.8~K we evaluate optical
coherence times $T_2=33-50$~ps corresponding to homogeneous linewidths of
$13-19~\mu$eV, about two orders of magnitude smaller than the
inhomogeneous broadening of the optical transitions. The coherent dynamics is
determined mainly by the population decay with time $T_1=30-40$~ps, while pure
dephasing is negligible in the studied high quality samples even for strong
optical excitation. Temperature increase leads to a significant shortening of
$T_2$ due to interaction with acoustic phonons. In contrast, the loss of
coherence of the donor-bound B exciton (D$^0$X$_\text{B}$) is significantly
faster ($T_2=3.6$~ps) and governed by pure dephasing processes.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Quasar Luminosity Function at Redshift 4 with Hyper Suprime-Cam Wide Survey | We present the luminosity function of z=4 quasars based on the Hyper
Suprime-Cam Subaru Strategic Program Wide layer imaging data in the g, r, i, z,
and y bands covering 339.8 deg^2. From stellar objects, 1666 z~4 quasar
candidates are selected by the g-dropout selection down to i=24.0 mag. Their
photometric redshifts cover the redshift range between 3.6 and 4.3 with an
average of 3.9. In combination with the quasar sample from the Sloan Digital
Sky Survey in the same redshift range, the quasar luminosity function covering
the wide luminosity range of M1450=-22 to -29 mag is constructed. It is well
described by a double power-law model with a knee at M1450=-25.36+-0.13 mag and
a flat faint-end slope with a power-law index of -1.30+-0.05. The knee and
faint-end slope show no clear evidence of redshift evolution from those at z~2.
The flat slope implies that the UV luminosity density of the quasar population
is dominated by the quasars around the knee, and does not support the steeper
faint-end slope at higher redshifts reported at z>5. If we convert the M1450
luminosity function to the hard X-ray 2-10keV luminosity function using the
relation between UV and X-ray luminosity of quasars and its scatter, the number
density of UV-selected quasars matches well with that of the X-ray-selected
AGNs above the knee of the luminosity function. Below the knee, the UV-selected
quasars show a deficiency compared to the hard X-ray luminosity function. The
deficiency can be explained by the lack of obscured AGNs among the UV-selected
quasars.
| 0 | 1 | 0 | 0 | 0 | 0 |
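The double power-law form referred to above is the standard parameterization used in quasar luminosity function studies,
\[ \Phi(M_{1450}) \;=\; \frac{\Phi^{*}}{10^{\,0.4(\alpha+1)(M_{1450}-M^{*})} + 10^{\,0.4(\beta+1)(M_{1450}-M^{*})}}, \]
with $M^{*}$ the knee (here $-25.36\pm0.13$ mag), $\alpha$ the faint-end slope (here $-1.30\pm0.05$), and $\beta$ the bright-end slope; the remaining best-fit parameters are reported in the paper and not reproduced here.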
Towards Industry 4.0: Gap Analysis between Current Automotive MES and Industry Standards using Model-Based Requirement Engineering | The dawn of the fourth industrial revolution, Industry 4.0 has created great
enthusiasm among companies and researchers by giving them an opportunity to
pave the path towards the vision of a connected smart factory ecosystem.
However, in context of automotive industry there is an evident gap between the
requirements supported by the current automotive manufacturing execution
systems (MES) and the requirements proposed by industrial standards from the
International Society of Automation (ISA), such as ISA-95 and ISA-88, on which
Industry 4.0 is being built. In this paper, we bridge this gap by
following a model-based requirements engineering approach along with a gap
analysis process. Our work is mainly divided into three phases: (i) the automotive
MES tool selection phase, (ii) the requirements modeling phase, and (iii) the gap
analysis phase based on the modeled requirements. During the MES tool selection
phase, we used known reliable sources, such as MES product survey reports and
white papers that provide in-depth and comprehensive information about various
comparison criteria and tool vendors list for the current MES landscape. During
the requirement modeling phase, we specified requirements derived from the
needs of ISA-95 and ISA-88 industrial standards using the general purpose
Systems Modeling Language (SysML). During the gap analysis phase, we find the
misalignment between standard requirements and the compliance of the existing
software tools to those standards.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Constrained Conditional Likelihood Approach for Estimating the Means of Selected Populations | Given p independent normal populations, we consider the problem of estimating
the means of those populations that, based on the observed data, give the
strongest signals. We explicitly condition on the ranking of the sample means,
and consider a constrained conditional maximum likelihood (CCMLE) approach,
avoiding the use of any priors and of any sparsity requirement between the
population means. Our results show that if the observed means are too close
together, we should in fact use the grand mean to estimate the mean of the
population with the larger sample mean. If they are separated by more than a
certain threshold, we should shrink the observed means towards each other. As
intuition suggests, it is only if the observed means are far apart that we
should conclude that the magnitude of separation and consequent ranking are not
due to chance. Unlike other methods, our approach does not need to pre-specify
the number of selected populations and the proposed CCMLE is able to perform
simultaneous inference. Our method, which is conceptually straightforward, can
be easily adapted to incorporate other selection criteria.
Keywords: Selected populations, Maximum likelihood, Constrained MLE, Post-selection
inference.
| 0 | 0 | 0 | 1 | 0 | 0 |
Phase transition in the spiked random tensor with Rademacher prior | We consider the problem of detecting a deformation from a symmetric Gaussian
random $p$-tensor $(p\geq 3)$ with a rank-one spike sampled from the Rademacher
prior. Recently in Lesieur et al. (2017), it was proved that there exists a
critical threshold $\beta_p$ so that when the signal-to-noise ratio exceeds
$\beta_p$, one can distinguish the spiked and unspiked tensors and weakly
recover the prior via the minimal mean-square-error method. On the other hand,
Perry, Wein, and Bandeira (2017) proved that there exists a $\beta_p'<\beta_p$
such that no statistical hypothesis test can distinguish these two
tensors, in the sense that their total variation distance asymptotically
vanishes, when the signal-to-noise ratio is less than $\beta_p'$. In this work,
we show that $\beta_p$ is indeed the critical threshold that strictly separates
the distinguishability and indistinguishability between the two tensors under
the total variation distance. Our approach is based on a subtle analysis of the
high temperature behavior of the pure $p$-spin model with Ising spin, arising
initially from the field of spin glasses. In particular, we identify the
signal-to-noise criticality $\beta_p$ as the critical temperature,
distinguishing the high and low temperature behavior, of the Ising pure
$p$-spin mean-field spin glass model.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extended depth-range profilometry using the phase-difference and phase-sum of two close-sensitivity projected fringes | We propose a high signal-to-noise extended depth-range three-dimensional (3D)
profilometer that projects two linear fringes with close phase-sensitivities. We use
temporal phase-shifting algorithms (PSAs) to phase-demodulate the two close
sensitivity phases. Then we calculate their phase-difference and their
phase-sum. If the sensitivity between the two phases is close enough, their
phase-difference is not wrapped. Using the non-wrapped phase-difference for
extended-range profilometry is well known and has been widely used. However, as
this paper shows, the closeness between the two demodulated phases makes their
difference quite noisy. On the other hand, as we show, their phase-sum has a
much higher phase-sensitivity and signal-to-noise ratio but it is highly
wrapped. Spatial unwrapping of the phase-sum is precluded for separate or
highly discontinuous objects. However, it is possible to unwrap the phase-sum by
using the phase-difference as a first approximation and our previously published
2-step temporal phase-unwrapping. Therefore, the proposed profilometry technique
allows unwrapping the higher-sensitivity phase-sum using the noisier
phase-difference as a stepping stone. Due to the non-linear nature of the
extended 2-step temporal unwrapper, the harmonics and noise errors in the
phase-difference do not propagate into the unwrapped phase-sum. To the best
of our knowledge this is the highest signal-to-noise ratio, extended
depth-range, 3D digital profilometry technique reported to this date.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Language for Probabilistically Oblivious Computation | An oblivious computation is one that is free of direct and indirect
information leaks, e.g., due to observable differences in timing and memory
access patterns. This paper presents Lobliv, a core language whose type system
enforces obliviousness. Prior work on type-enforced oblivious computation has
focused on deterministic programs. Lobliv is new in its consideration of
programs that implement probabilistic algorithms, such as those involved in
cryptography. Lobliv employs a substructural type system and a novel notion of
probability region to ensure that information is not leaked via the
distribution of visible events. The use of regions was motivated by a source of
unsoundness that we discovered in the type system of ObliVM, a language for
implementing state-of-the-art oblivious algorithms and data structures. We
prove that Lobliv's type system enforces obliviousness and show that it is
nevertheless powerful enough to check state-of-the-art, efficient oblivious
data structures, such as stacks and queues, and even tree-based oblivious RAMs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single shot, double differential spectral measurements of inverse Compton scattering in linear and nonlinear regimes | Inverse Compton scattering (ICS) is a unique mechanism for producing fast
pulses - picosecond and below - of bright X- to gamma-rays. These nominally
narrow spectral bandwidth electromagnetic radiation pulses are efficiently
produced in the interaction between intense, well-focused electron and laser
beams. The spectral characteristics of such sources are affected by many
experimental parameters, such as the bandwidth of the laser, and the angles of
both the electrons and laser photons at collision. The laser field amplitude
induces harmonic generation and, importantly for the present work, nonlinear
red shifting, both of which dilute the spectral brightness of the radiation. As
the applications enabled by this source often depend sensitively on its
spectra, it is critical to resolve the details of the wavelength and angular
distribution obtained from ICS collisions. With this motivation, we present
here an experimental study that greatly improves on previous spectral
measurement methods based on X-ray K-edge filters, by implementing a
multi-layer bent-crystal X-ray spectrometer. In tandem with a collimating slit,
this method reveals a projection of the double-differential angular-wavelength
spectrum of the ICS radiation in a single shot. The measurements enabled by
this diagnostic illustrate the combined off-axis and nonlinear-field-induced
red shifting in the ICS emission process. They reveal in detail the strength of
the normalized laser vector potential, and provide a non-destructive measure of
the temporal and spatial electron-laser beam overlap.
| 0 | 1 | 0 | 0 | 0 | 0 |
Yarkovsky Drift Detections for 159 Near-Earth Asteroids | The Yarkovsky effect is a thermal process acting upon the orbits of small
celestial bodies, which can cause these orbits to slowly expand or contract
with time. The effect is subtle -- typical drift rates lie near $10^{-4}$ au/My
for a $\sim$1 km diameter object -- and is thus generally difficult to measure.
However, objects with long observation intervals, as well as objects with radar
detections, serve as excellent candidates for the observation of this effect.
We analyzed both optical and radar astrometry for all numbered Near-Earth
Asteroids (NEAs), as well as several un-numbered NEAs, for the purpose of
detecting and quantifying the Yarkovsky effect. We present 159 objects with
measured drift rates. Our Yarkovsky sample is the largest published set of such
detections, and presents an opportunity to examine the physical properties of
these NEAs and the Yarkovsky effect in a statistical manner. In particular, we
confirm the Yarkovsky effect's theoretical size dependence of 1/$D$, where $D$
is diameter. We also examine the efficiency with which this effect acts on our
sample objects and find typical efficiencies of around 12%. We interpret this
efficiency with respect to the typical spin and thermal properties of objects
in our sample. We report the ratio of negative to positive drift rates in our
sample as $N_R/N_P = 2.9 \pm 0.7$ and interpret this ratio in terms of
retrograde/prograde rotators and main belt escape routes. The observed ratio
has a probability of 1 in 46 million of occurring by chance, which confirms the
presence of a non-gravitational influence. We examine how the presence of radar
data affects the strength and precision of our detections. We find that, on
average, the precision of radar+optical detections improves by a factor of
approximately 1.6 for each additional apparition with ranging data compared to
that of optical-only solutions.
| 0 | 1 | 0 | 0 | 0 | 0 |
A short note on Godbersen's Conjecture | In this short note we improve the best bound to date in Godbersen's
conjecture, and show some implications for unbalanced difference bodies.
| 0 | 0 | 1 | 0 | 0 | 0 |
The derivative NLS equation: global existence with solitons | We extend the global existence result for the derivative NLS equation to the
case when the initial datum includes a finite number of solitons. This is
achieved by an application of the Bäcklund transformation that removes a
finite number of zeros of the scattering coefficient. By means of this
transformation, the Riemann--Hilbert problem for meromorphic functions can be
formulated as the one for analytic functions, the solvability of which was
obtained recently.
| 0 | 1 | 1 | 0 | 0 | 0 |
Missing dust signature in the cosmic microwave background | I examine a possible spectral distortion of the Cosmic Microwave Background
(CMB) due to its absorption by galactic and intergalactic dust. I show that
even subtle intergalactic opacity of $1 \times 10^{-7}\, \mathrm{mag}\, h\,
\mathrm{Gpc}^{-1}$ at the CMB wavelengths in the local Universe causes
non-negligible CMB absorption and decline of the CMB intensity because the
opacity steeply increases with redshift. The CMB should be distorted even
during the epoch of the Universe defined by redshifts $z < 10$. For this epoch,
the maximum spectral distortion of the CMB is at least $20 \times 10^{-22}
\,\mathrm{Wm}^{-2}\, \mathrm{Hz}^{-1}\, \mathrm{sr}^{-1}$ at 300 GHz being well
above the sensitivity of the COBE/FIRAS, WMAP or Planck flux measurements. If
dust mass is considered to be redshift dependent with noticeable dust abundance
at redshifts 2-4, the predicted CMB distortion is even higher. The CMB would be
distorted also in a perfectly transparent universe due to dust in galaxies, but
this effect is lower by one order of magnitude than that due to intergalactic opacity. The
fact that the distortion of the CMB by dust is not observed is intriguing and
questions either opacity and extinction law measurements or validity of the
current model of the Universe.
| 0 | 1 | 0 | 0 | 0 | 0 |
Option market (in)efficiency and implied volatility dynamics after return jumps | In informationally efficient financial markets, option prices and thus
implied volatility should immediately be adjusted to new information that
arrives along with a jump in the underlying's return, whereas gradual changes in
implied volatility would indicate market inefficiency. Using minute-by-minute
data on S&P 500 index options, we provide evidence regarding delayed and
gradual movements in implied volatility after the arrival of return jumps.
These movements are directed and persistent, especially in the case of negative
return jumps. Our results are significant when the implied volatilities are
extracted from at-the-money options and out-of-the-money puts, while the
implied volatility obtained from out-of-the-money calls converges to its new
level immediately rather than gradually. Thus, our analysis reveals that the
implied volatility smile is adjusted to jumps in the underlying's return
asymmetrically. Finally, it would be possible to have statistical arbitrage in
zero-transaction-cost option markets, but under actual option price spreads,
our results do not imply abnormal option returns.
| 0 | 0 | 0 | 0 | 0 | 1 |
Combining Alchemical Transformation with Physical Pathway to Accurately Compute Absolute Binding Free Energy | We present a new method that combines alchemical transformation with physical
pathway to accurately and efficiently compute the absolute binding free energy
of a receptor-ligand complex. Currently, the double decoupling method (DDM) and
the potential of mean force (PMF) approach are widely used to compute
the absolute binding free energy of biomolecules. The DDM relies on
alchemically decoupling the ligand from its environments, which can be
computationally challenging for large ligands and charged ligands because of
the large magnitude of the decoupling free energies involved. On the other
hand, the PMF approach uses a physical pathway to extract the ligand out of the
binding site, thus avoiding the alchemical decoupling of the ligand. However, the
PMF method has its own drawback because of the reliance on a ligand
binding/unbinding pathway free of steric obstruction from the receptor atoms.
Therefore, in the presence of deeply buried ligand functional groups the
convergence of the PMF calculation can be very slow leading to large errors in
the computed binding free energy. Here we develop a new method called AlchemPMF
by combining alchemical transformation with physical pathway to overcome the
major drawback in the PMF method. We have tested the new approach on the
binding of a charged ligand to an allosteric site on HIV-1 Integrase. After 20
ns of simulation per umbrella sampling window, the new method yields absolute
binding free energies within ~1 kcal/mol of the experimental result, whereas
the standard PMF approach and the DDM calculations result in errors of ~5
kcal/mol and > 2 kcal/mol, respectively. Furthermore, the binding free energy
computed using the new method is associated with smaller statistical error
compared with those obtained from the existing methods.
| 0 | 0 | 0 | 0 | 1 | 0 |
Embedding Feature Selection for Large-scale Hierarchical Classification | Large-scale Hierarchical Classification (HC) involves datasets consisting of
thousands of classes and millions of training instances with high-dimensional
features posing several big data challenges. Feature selection that aims to
select the subset of discriminant features is an effective strategy to deal
with large-scale HC problem. It speeds up the training process, reduces the
prediction time and minimizes the memory requirements by compressing the total
size of learned model weight vectors. The majority of studies have also shown
feature selection to be competent and successful in improving the
classification accuracy by removing irrelevant features. In this work, we
investigate various filter-based feature selection methods for dimensionality
reduction to solve the large-scale HC problem. Our experimental evaluation on
text and image datasets with varying distribution of features, classes and
instances shows up to a 3x speed-up on massive datasets and up to 45% less
memory requirements for storing the weight vectors of learned model without any
significant loss (improvement for some datasets) in the classification
accuracy. Source Code: this https URL.
| 1 | 0 | 0 | 1 | 0 | 0 |
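A minimal sketch of the filter-based feature selection workflow described above, using chi-squared scoring from scikit-learn as one example criterion; the paper compares several filter methods on much larger hierarchical datasets, and the dataset and classifier below are placeholders.

```python
# Filter-based feature selection before training a classifier: score each
# feature independently, keep only the top-k, then fit the model on the
# reduced representation.  Chi-squared scoring is one example criterion.
from sklearn.datasets import fetch_20newsgroups_vectorized
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

data = fetch_20newsgroups_vectorized(subset="train")
X, y = data.data, data.target

selector = SelectKBest(chi2, k=5000)        # keep the 5000 highest-scoring features
X_reduced = selector.fit_transform(X, y)    # also shrinks the stored weight vectors

clf = LogisticRegression(max_iter=1000).fit(X_reduced, y)
```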
Orientably-regular maps on twisted linear fractional groups | We present an enumeration of orientably-regular maps with automorphism group
isomorphic to the twisted linear fractional group $M(q^2)$ for any odd prime
power $q$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Deep Mean Functions for Meta-Learning in Gaussian Processes | Fitting machine learning models in the low-data limit is challenging. The
main challenge is to obtain suitable prior knowledge and encode it into the
model, for instance in the form of a Gaussian process prior. Recent advances in
meta-learning offer powerful methods for extracting such prior knowledge from
data acquired in related tasks. When it comes to meta-learning in Gaussian
process models, approaches in this setting have mostly focused on learning the
kernel function of the prior, but not on learning its mean function. In this
work, we propose to parameterize the mean function of a Gaussian process with a
deep neural network and train it with a meta-learning procedure. We present
analytical and empirical evidence that mean function learning can be superior
to kernel learning alone, particularly if data is scarce.
| 1 | 0 | 0 | 1 | 0 | 0 |
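To make the mean-function idea concrete, the sketch below shows GP posterior prediction with a non-zero, parameterized prior mean. In the paper's setting this mean would be a deep network trained by meta-learning across related tasks; here any callable stands in for it, and the kernel, data, and mean are toy placeholders.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(X, y, X_star, mean_fn, noise=1e-2):
    """GP posterior predictive mean with a non-zero prior mean function:
        m(x*) + K(x*, X) [K(X, X) + sigma^2 I]^{-1} (y - m(X)).
    `mean_fn` plays the role of the meta-learned deep mean."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_star = rbf_kernel(X_star, X)
    alpha = np.linalg.solve(K, y - mean_fn(X))
    return mean_fn(X_star) + K_star @ alpha

# Toy usage with a hand-picked constant mean standing in for the deep mean.
X = np.linspace(0, 1, 8)[:, None]
y = np.sin(6 * X[:, 0]) + 0.5
prior_mean = lambda Z: np.full(len(Z), 0.5)   # assumption: a crude "learned" mean
print(gp_posterior_mean(X, y, np.array([[0.25], [0.75]]), prior_mean))
```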
Projectors separating spectra for $L^2$ on symmetric spaces $GL(n,\mathbb{C})/GL(n,\mathbb{R})$ | The Plancherel decomposition of $L^2$ on a pseudo-Riemannian symmetric space
$GL(n,\mathbb{C})/GL(n,\mathbb{R})$ has a spectrum of $[n/2]$ types. We explicitly write down orthogonal
projectors separating the spectrum into uniform pieces.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the continued fraction expansion of absolutely normal numbers | We construct an absolutely normal number whose continued fraction expansion
is normal in the sense that it contains all finite patterns of partial
quotients with the expected asymptotic frequency as given by the Gauss-Kuzmin
measure. The construction is based on ideas of Sierpinski and uses a large
deviations theorem for sums of mixing random variables.
| 0 | 0 | 1 | 0 | 0 | 0 |
Talking Open Data | Enticing users into exploring Open Data remains an important challenge for
the whole Open Data paradigm. Standard stock interfaces often used by Open Data
portals are anything but inspiring even for tech-savvy users, let alone those
without an articulated interest in data science. To address a broader range of
citizens, we designed an open data search interface supporting natural language
interactions via popular platforms like Facebook and Skype. Our data-aware
chatbot answers search requests and suggests relevant open datasets, bringing
fun factor and a potential of viral dissemination into Open Data exploration.
The current system prototype is available for Facebook
(this https URL) and Skype
(this https URL) users.
| 1 | 0 | 0 | 0 | 0 | 0 |
Static non-reciprocity in mechanical metamaterials | Reciprocity is a fundamental principle governing various physical systems,
which ensures that the transfer function between any two points in space is
identical, regardless of geometrical or material asymmetries. Breaking this
transmission symmetry offers enhanced control over signal transport, isolation
and source protection. So far, devices that break reciprocity have been mostly
considered in dynamic systems, for electromagnetic, acoustic and mechanical
wave propagation associated with spatio-temporal variations. Here we show that
it is possible to strongly break reciprocity in static systems, realizing
mechanical metamaterials that, by combining large nonlinearities with suitable
geometrical asymmetries, and possibly topological features, exhibit vastly
different output displacements under excitation from different sides, as well
as one-way displacement amplification. In addition to extending non-reciprocity
and isolation to statics, our work sheds new light on the understanding of
energy propagation in non-linear materials with asymmetric crystalline
structures and topological properties, opening avenues for energy absorption,
conversion and harvesting, soft robotics, prosthetics and optomechanics.
| 0 | 1 | 0 | 0 | 0 | 0 |
A forward-adjoint operator pair based on the elastic wave equation for use in transcranial photoacoustic tomography | Photoacoustic computed tomography (PACT) is an emerging imaging modality that
exploits optical contrast and ultrasonic detection principles to form images of
the photoacoustically induced initial pressure distribution within tissue. The
PACT reconstruction problem corresponds to an inverse source problem in which
the initial pressure distribution is recovered from measurements of the
radiated wavefield.
A major challenge in transcranial PACT brain imaging is compensation for
aberrations in the measured data due to the presence of the skull. Ultrasonic
waves undergo absorption, scattering and longitudinal-to-shear wave mode
conversion as they propagate through the skull. To properly account for these
effects, a wave-equation-based inversion method should be employed that can
model the heterogeneous elastic properties of the skull. In this work, a
forward model based on a finite-difference time-domain discretization of the
three-dimensional elastic wave equation is established and a procedure for
computing the corresponding adjoint of the forward operator is presented.
Massively parallel implementations of these operators employing multiple
graphics processing units (GPUs) are also developed. The developed numerical
framework is validated and investigated in computer-simulation and experimental
phantom studies whose designs are motivated by transcranial PACT applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sewing Riemannian Manifolds with Positive Scalar Curvature | We explore to what extent one may hope to preserve geometric properties of
three dimensional manifolds with lower scalar curvature bounds under
Gromov-Hausdorff and Intrinsic Flat limits. We introduce a new construction,
called sewing, of three dimensional manifolds that preserves positive scalar
curvature. We then use sewing to produce sequences of such manifolds which
converge to spaces that fail to have nonnegative scalar curvature in a standard
generalized sense. Since the notion of nonnegative scalar curvature is not
strong enough to persist alone, we propose that one pair a lower scalar
curvature bound with a lower bound on the area of a closed minimal surface when
taking sequences as this will exclude the possibility of sewing of manifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Concentration of quadratic forms under a Bernstein moment assumption | A concentration result for quadratic forms of independent subgaussian random
variables is derived. If the moments of the random variables satisfy a
"Bernstein condition", then the variance term of the Hanson-Wright inequality
can be improved. The Bernstein condition is satisfied, for instance, by all
log-concave subgaussian distributions.
| 0 | 0 | 1 | 1 | 0 | 0 |
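For reference, the Hanson-Wright inequality mentioned above states that for a random vector $X$ with independent, mean-zero components satisfying $\|X_i\|_{\psi_2} \le K$ and a fixed matrix $A$,
\[ \mathbb{P}\bigl( |X^{\top} A X - \mathbb{E}\, X^{\top} A X| > t \bigr) \;\le\; 2 \exp\left[ -c \min\left( \frac{t^2}{K^4 \|A\|_F^2}, \; \frac{t}{K^2 \|A\|} \right) \right]; \]
the term $K^4 \|A\|_F^2$ is the variance term that, according to the abstract, can be improved under the Bernstein moment condition. (This is the standard statement of the inequality, not a restatement of the paper's refined bound.)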
Short DNA persistence length in a mesoscopic helical model | The flexibility of short DNA chains is investigated via computation of the
average correlation function between dimers, which defines the persistence
length. Path integration techniques have been applied to confine the phase
space available to base pair fluctuations and derive the partition function.
The apparent persistence lengths of a set of short chains have been computed as
a function of the twist conformation both in the over-twisted and the untwisted
regimes, whereby the equilibrium twist is selected by free energy minimization.
The obtained values are significantly lower than those generally attributed to
kilo-base long DNA. This points to an intrinsic helix flexibility at short
length scales, arising from large fluctuational effects and local bending, in
line with recent experimental indications. The interplay between helical
untwisting and persistence length has been discussed for a heterogeneous
fragment by weighing the effects of the sequence specificities through the
non-linear stacking potential.
| 0 | 0 | 0 | 0 | 1 | 0 |
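For reference, the persistence length extracted from the dimer correlation function above follows the standard worm-like-chain convention,
\[ \langle \hat{t}(0) \cdot \hat{t}(s) \rangle \;=\; \exp(-s / l_p), \]
where $\hat{t}(s)$ is the local orientation at contour distance $s$ and $l_p$ is the (apparent) persistence length; the mesoscopic helical model and path-integral weighting used to compute this average are specific to the paper and not reproduced here.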