| title (string, 7–239 chars) | abstract (string, 7–2.76k chars) | cs (int64, 0–1) | phy (int64, 0–1) | math (int64, 0–1) | stat (int64, 0–1) | quantitative biology (int64, 0–1) | quantitative finance (int64, 0–1) |
|---|---|---|---|---|---|---|---|
Composing Differential Privacy and Secure Computation: A case study on scaling private record linkage | Private record linkage (PRL) is the problem of identifying pairs of records
that are similar as per an input matching rule from databases held by two
parties that do not trust one another. We identify three key desiderata that a
PRL solution must ensure: 1) perfect precision and high recall of matching
pairs, 2) a proof of end-to-end privacy, and 3) communication and computational
costs that scale subquadratically in the number of input records. We show that
all of the existing solutions for PRL - including secure 2-party computation
(S2PC), and their variants that use non-private or differentially private (DP)
blocking to ensure subquadratic cost - violate at least one of the three
desiderata. In particular, S2PC techniques guarantee end-to-end privacy but
have either low recall or quadratic cost. In contrast, no end-to-end privacy
guarantee has been formalized for solutions that achieve subquadratic cost.
This is true even for solutions that compose DP and S2PC: DP does not permit
the release of any exact information about the databases, while S2PC algorithms
for PRL allow the release of matching records.
In light of this deficiency, we propose a novel privacy model, called output
constrained differential privacy, that shares the strong privacy protection of
DP, but allows for the truthful release of the output of a certain function
applied to the data. We apply this to PRL, and show that protocols satisfying
this privacy model permit the disclosure of the true matching records, but
their execution is insensitive to the presence or absence of a single
non-matching record. We find that prior work combining DP and S2PC
techniques fails to satisfy even this end-to-end privacy model. Hence, we
develop novel protocols that provably achieve this end-to-end privacy
guarantee, together with the other two desiderata of PRL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction | Despite the recent popularity of deep generative state space models, few
comparisons have been made between network architectures and the inference
steps of the Bayesian filtering framework -- with most models simultaneously
approximating both state transition and update steps with a single recurrent
neural network (RNN). In this paper, we introduce the Recurrent Neural Filter
(RNF), a novel recurrent variational autoencoder architecture that learns
distinct representations for each Bayesian filtering step, captured by a series
of encoders and decoders. Testing this on three real-world time series
datasets, we demonstrate that decoupling representations not only improves the
accuracy of one-step-ahead forecasts while providing realistic uncertainty
estimates, but also facilitates multistep prediction through the separation of
encoder stages.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Submodularity-Based Approach for Multi-Agent Optimal Coverage Problems | We consider the optimal coverage problem where a multi-agent network is
deployed in an environment with obstacles to maximize a joint event detection
probability. The objective function of this problem is non-convex and no global
optimum is guaranteed by gradient-based algorithms developed to date. We first
show that the objective function is monotone submodular, a class of functions
for which a simple greedy algorithm is guaranteed to achieve at least a
$(1-1/e) \approx 0.63$ fraction of the optimal solution. We then derive two tighter lower bounds by exploiting the curvature
information (total curvature and elemental curvature) of the objective
function. We further show that the tightness of these lower bounds is
complementary with respect to the sensing capabilities of the agents. The
greedy algorithm solution can be subsequently used as an initial point for a
gradient-based algorithm to obtain solutions even closer to the global optimum.
Simulation results show that this approach leads to significantly better
performance relative to previously used algorithms.
| 1 | 0 | 1 | 0 | 0 | 0 |
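The greedy guarantee invoked in the abstract above is the classical $(1-1/e) \approx 0.63$ bound for monotone submodular maximization, and can be illustrated with a minimal sketch. The sensor cover sets, the set-coverage objective, and all names below are illustrative assumptions, not the paper's joint detection-probability model:

```python
def coverage(selected, cover_sets):
    # Objective: number of distinct targets covered by the chosen sensors.
    covered = set()
    for s in selected:
        covered |= cover_sets[s]
    return len(covered)

def greedy_max_coverage(cover_sets, k):
    """Greedy maximization of a monotone submodular set function:
    repeatedly add the sensor with the largest marginal coverage gain."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for s in range(len(cover_sets)):
            if s in chosen:
                continue
            gain = coverage(chosen + [s], cover_sets) - coverage(chosen, cover_sets)
            if gain > best_gain:
                best, best_gain = s, gain
        chosen.append(best)
    return chosen

# Toy instance: four candidate sensor placements covering six targets.
cover_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy_max_coverage(cover_sets, k=2)
```

For monotone submodular objectives the greedy value is at least a $(1-1/e)$ fraction of the optimum; the curvature-based bounds in the paper tighten this guarantee further.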
A GPU-based Multi-level Algorithm for Boundary Value Problems | A novel and scalable geometric multi-level algorithm is presented for the
numerical solution of elliptic partial differential equations, specifically
designed to run with high occupancy of streaming processors inside Graphics
Processing Units (GPUs). The algorithm consists of iterative, superposed
operations on a single grid, and it is composed of two simple full-grid
routines: a restriction and a coarsened interpolation-relaxation. The
restriction is used to collect sources using recursive coarsened averages, and
the interpolation-relaxation simultaneously applies coarsened finite-difference
operators and interpolations. The routines are scheduled in a saw-like refining
cycle. Convergence to machine precision is achieved by repeating the full cycle
using accumulated residuals and successively collecting the solution. The total
number of operations scales linearly with the number of nodes. The method provides
an attractive fast solver for Boundary Value Problems (BVPs), especially for
simulations running entirely in the GPU. Applications shown in this work
include the deformation of two-dimensional grids, the computation of
three-dimensional streamlines for a singular trifoil-knot vortex and the
calculation of three-dimensional electric potentials in heterogeneous
dielectric media.
| 1 | 1 | 0 | 0 | 0 | 0 |
Counterexample-Guided k-Induction Verification for Fast Bug Detection | Recently, the k-induction algorithm has proven to be a successful approach
for both finding bugs and proving correctness. However, since the algorithm is
an incremental approach, it might waste resources trying to prove incorrect
programs. In this paper, we propose to extend the k-induction algorithm in
order to reduce the number of steps required to find a property violation. We
convert the algorithm into a meet-in-the-middle bidirectional search algorithm,
using the counterexample produced from over-approximating the program. The
preliminary results show that the number of steps required to find a property
violation is reduced to $\lfloor\frac{k}{2} + 1\rfloor$ and the verification
time for programs with large state space is reduced considerably.
| 1 | 0 | 0 | 0 | 0 | 0 |
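The meet-in-the-middle idea above can be conveyed by a schematic reachability search; this toy transition system and all names are illustrative assumptions, not the paper's k-induction encoding:

```python
def forward_depth(init, bad, succ):
    """Plain forward unrolling until the violating state is reached."""
    frontier, depth = {init}, 0
    while bad not in frontier:
        depth += 1
        frontier |= {t for s in frontier for t in succ(s)}
    return depth

def bidirectional_depth(init, bad, succ, pred):
    """Expand from the initial state and backwards from the violation
    until the two frontiers intersect."""
    fwd, bwd, depth = {init}, {bad}, 0
    while not (fwd & bwd):
        depth += 1
        fwd |= {t for s in fwd for t in succ(s)}
        bwd |= {t for s in bwd for t in pred(s)}
    return depth

# Toy system: a saturating counter on 0..10 with a property violation at state 8.
succ = lambda s: {min(s + 1, 10)}
pred = lambda s: {max(s - 1, 0)}
```

Here forward search needs 8 unrolling steps while the bidirectional variant meets after 4, roughly halving the depth in the spirit of the $\lfloor\frac{k}{2} + 1\rfloor$ bound reported above.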
Cautious Model Predictive Control using Gaussian Process Regression | Gaussian process (GP) regression has been widely used in supervised machine
learning due to its flexibility and inherent ability to describe uncertainty in
function estimation. In the context of control, it is seeing increasing use for
modeling of nonlinear dynamical systems from data, as it allows the direct
assessment of residual model uncertainty. We present a model predictive control
(MPC) approach that integrates a nominal system with an additive nonlinear part
of the dynamics modeled as a GP. Approximation techniques for propagating the
state distribution are reviewed and we describe a principled way of formulating
the chance constrained MPC problem, which takes into account residual
uncertainties provided by the GP model to enable cautious control. Using
additional approximations for efficient computation, we finally demonstrate the
approach in a simulation example, as well as in a hardware implementation for
autonomous racing of remote controlled race cars, highlighting improvements
with regard to both performance and safety over a nominal controller.
| 1 | 0 | 1 | 0 | 0 | 0 |
Probabilistic Trajectory Segmentation by Means of Hierarchical Dirichlet Process Switching Linear Dynamical Systems | Using movement primitive libraries is an effective means to enable robots to
solve more complex tasks. In order to build these movement libraries, current
algorithms require a prior segmentation of the demonstration trajectories. A
promising approach is to model the trajectory as being generated by a set of
Switching Linear Dynamical Systems and inferring a meaningful segmentation by
inspecting the transition points characterized by the switching dynamics. For
the learning, we employ a nonparametric Bayesian approach based on a Gibbs
sampler.
| 1 | 0 | 0 | 1 | 0 | 0 |
Radio Frequency Interference Mitigation | Radio astronomy observational facilities are under constant upgradation and
development to achieve better capabilities including increasing the time and
frequency resolutions of the recorded data, and increasing the receiving and
recording bandwidth. As only a limited spectrum resource has been allocated to
radio astronomy by the International Telecommunication Union, this results in
the radio observational instrumentation being inevitably exposed to undesirable
radio frequency interference (RFI) signals which originate mainly from
terrestrial human activity and are becoming stronger with time. RFIs degrade
the quality of astronomical data and even lead to data loss. The impact of RFIs
on scientific outcome is becoming progressively difficult to manage. In this
article, we motivate the requirement for RFI mitigation, and review the RFI
characteristics, mitigation techniques and strategies. Mitigation strategies
adopted at some representative observatories, telescopes and arrays are also
introduced. We also discuss and present advantages and shortcomings of the four
classes of RFI mitigation strategies, applicable at the connected causal
stages: preventive, pre-detection, pre-correlation and post-correlation. The
proper identification and flagging of RFI is key to the reduction of data loss
and improvement in data quality, and is also the ultimate goal of developing
RFI mitigation techniques. This can be achieved through a strategy involving a
combination of the discussed techniques in stages. Recent advances in high
speed digital signal processing and high performance computing allow for
performing RFI excision of large data volumes generated from large telescopes
or arrays in both real time and offline modes, aiding the proposed strategy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Online Calibration of Phasor Measurement Unit Using Density-Based Spatial Clustering | Data quality of Phasor Measurement Unit (PMU) is receiving increasing
attention as it has been identified as one of the limiting factors that affect
many wide-area measurement system (WAMS) based applications. In general,
existing PMU calibration methods include offline testing and model based
approaches. However, in practice, the effectiveness of both is limited due to
the very strong assumptions employed. This paper presents a novel framework for
online bias error detection and calibration of PMU measurements using
density-based spatial clustering of applications with noise (DBSCAN) based on
much relaxed assumptions. With a new problem formulation, the proposed data
mining based methodology is applicable across a wide spectrum of practical
conditions and one side-product of it is more accurate transmission line
parameters for EMS database and protective relay settings. Case studies
demonstrate the effectiveness of the proposed approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
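A minimal pure-Python DBSCAN over scalar measurement residuals conveys the density-based idea behind the calibration above; the `eps`/`min_pts` values and the notion of per-sample residuals are illustrative assumptions, not the paper's formulation:

```python
def dbscan(points, eps, min_pts):
    """Minimal 1-D DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)          # None = unvisited
    def region(i):
        return [j for j in range(len(points)) if abs(points[i] - points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # noise (may later become a border point)
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = region(j)
            if len(nbrs) >= min_pts:       # core point: keep expanding the cluster
                queue.extend(nbrs)
        cluster += 1
    return labels

# Toy PMU residuals: four well-calibrated samples and one biased outlier.
errs = [0.010, 0.012, 0.009, 0.011, 0.5]
labels = dbscan(errs, eps=0.05, min_pts=3)
```

Samples inside the dense cluster are treated as well calibrated; the isolated point is flagged as a candidate bias error.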
Some basic properties of bounded solutions of parabolic equations with p-Laplacian diffusion | We provide a detailed (and fully rigorous) derivation of several fundamental
properties of bounded weak solutions to initial-value problems for general
conservative 2nd-order parabolic equations with p-Laplacian diffusion and
(arbitrary) bounded and integrable initial data.
| 0 | 0 | 1 | 0 | 0 | 0 |
Andreev Reflection without Fermi surface alignment in High T$_{c}$-Topological heterostructures | We address the controversy over the proximity effect between topological
materials and high T$_{c}$ superconductors. Junctions are produced between
Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ and materials with different Fermi
surfaces (Bi$_{2}$Te$_{3}$ \& graphite). Both cases reveal tunneling spectra
consistent with Andreev reflection. This is confirmed by an applied magnetic
field that shifts spectral features via the Doppler effect. This is modeled with a single parameter
that accounts for tunneling into a screening supercurrent. Thus the tunneling
involves Cooper pairs crossing the heterostructure, showing that the Fermi
surface mismatch does not hinder the ability to form transparent interfaces, which is
accounted for by the extended Brillouin zone and different lattice symmetries.
| 0 | 1 | 0 | 0 | 0 | 0 |
Structural Data Recognition with Graph Model Boosting | This paper presents a novel method for structural data recognition using a
large number of graph models. In general, prevalent methods for structural data
recognition have two shortcomings: 1) Only a single model is used to capture
structural variation. 2) Naive recognition methods are used, such as the
nearest neighbor method. In this paper, we propose strengthening the
recognition performance of these models as well as their ability to capture
structural variation. The proposed method constructs a large number of graph
models and trains decision trees using the models. This paper makes two main
contributions. The first is a novel graph model that can quickly perform
calculations, which allows us to construct several models in a feasible amount
of time. The second contribution is a novel approach to structural data
recognition: graph model boosting. Comprehensive structural variations can be
captured with a large number of graph models constructed in a boosting
framework, and a sophisticated classifier can be formed by aggregating the
decision trees. Consequently, we can carry out structural data recognition with
powerful recognition capability in the face of comprehensive structural
variation. The experiments show that the proposed method achieves impressive
results and outperforms existing methods on datasets from the IAM graph
database repository.
| 1 | 0 | 0 | 1 | 0 | 0 |
Exceptional points in two simple textbook examples | We propose to introduce the concept of exceptional points in intermediate
courses on mathematics and classical mechanics by means of simple textbook
examples. The first one is an ordinary second-order differential equation with
constant coefficients. The second one is the well known damped harmonic
oscillator. They enable one to connect the occurrence of linearly dependent
exponential solutions with a defective matrix that cannot be diagonalized but
can be transformed into a Jordan canonical form.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bootstrap of residual processes in regression: to smooth or not to smooth ? | In this paper we consider a location model of the form $Y = m(X) +
\varepsilon$, where $m(\cdot)$ is the unknown regression function, the error
$\varepsilon$ is independent of the $p$-dimensional covariate $X$ and
$E(\varepsilon)=0$. Given i.i.d. data $(X_1,Y_1),\ldots,(X_n,Y_n)$ and given an
estimator $\hat m(\cdot)$ of the function $m(\cdot)$ (which can be parametric
or nonparametric in nature), we estimate the distribution of the error term
$\varepsilon$ by the empirical distribution of the residuals $Y_i-\hat m(X_i)$,
$i=1,\ldots,n$. To approximate the distribution of this estimator, Koul and
Lahiri (1994) and Neumeyer (2008, 2009) proposed bootstrap procedures, based on
smoothing the residuals either before or after drawing bootstrap samples. So
far it has been an open question whether a classical non-smooth residual
bootstrap is asymptotically valid in this context. In this paper we solve this
open problem, and show that the non-smooth residual bootstrap is consistent. We
illustrate this theoretical result by means of simulations, that show the
accuracy of this bootstrap procedure for various models, testing procedures and
sample sizes.
| 0 | 0 | 1 | 1 | 0 | 0 |
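The classical non-smooth residual bootstrap whose consistency is established above can be sketched in a few lines; the linear model, sample size, and seed are illustrative assumptions:

```python
import random

def ols(x, y):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

def residual_bootstrap(x, y, B=200, seed=1):
    """Resample centred residuals (no smoothing) and refit the slope."""
    rng = random.Random(seed)
    a, b = ols(x, y)
    res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    rbar = sum(res) / len(res)
    res = [r - rbar for r in res]          # centre so the resampled errors have mean 0
    slopes = []
    for _ in range(B):
        ystar = [a + b * xi + rng.choice(res) for xi in x]
        slopes.append(ols(x, ystar)[1])
    return b, slopes

# Toy data from y = 1 + 2x plus a small deterministic perturbation.
x = list(range(10))
y = [1 + 2 * xi + 0.1 * (-1) ** xi for xi in x]
b_hat, slopes = residual_bootstrap(x, y)
```

The empirical distribution of `slopes` approximates the sampling distribution of the slope estimator, which is the kind of approximation the consistency result above justifies without smoothing the residuals.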
Polynomiality for the Poisson centre of truncated maximal parabolic subalgebras | We show that the Poisson centre of truncated maximal parabolic subalgebras of
a simple Lie algebra of type B, D and E_6 is a polynomial algebra.
In roughly half of the cases the polynomiality of the Poisson centre was
already known by a completely different method.
For the rest of the cases, our approach is to construct an algebraic slice in
the sense of Kostant given by an adapted pair and the computation of an
improved upper bound for the Poisson centre.
| 0 | 0 | 1 | 0 | 0 | 0 |
Row-Centric Lossless Compression of Markov Images | Motivated by the question of whether the recently introduced Reduced Cutset
Coding (RCC) offers rate-complexity performance benefits over conventional
context-based conditional coding for sources with two-dimensional Markov
structure, this paper compares several row-centric coding strategies that vary
in the amount of conditioning as well as whether a model or an empirical table
is used in the encoding of blocks of rows. The conclusion is that, at least for
sources exhibiting low-order correlations, 1-sided model-based conditional
coding is superior to the method of RCC for a given constraint on complexity,
and conventional context-based conditional coding is nearly as good as the
1-sided model-based coding.
| 1 | 0 | 0 | 0 | 0 | 0 |
Planetesimal formation by the streaming instability in a photoevaporating disk | Recent years have seen growing interest in the streaming instability as a
candidate mechanism to produce planetesimals. However, these investigations
have been limited to small-scale simulations. We now present the results of a
global protoplanetary disk evolution model that incorporates planetesimal
formation by the streaming instability, along with viscous accretion,
photoevaporation by EUV, FUV, and X-ray photons, dust evolution, the water ice
line, and stratified turbulence. Our simulations produce massive (60-130
$M_\oplus$) planetesimal belts beyond 100 au and up to $\sim 20 M_\oplus$ of
planetesimals in the middle regions (3-100 au). Our most comprehensive model
forms 8 $M_\oplus$ of planetesimals inside 3 au, where they can give rise to
terrestrial planets. The planetesimal mass formed in the inner disk depends
critically on the timing of the formation of an inner cavity in the disk by
high-energy photons. Our results show that the combination of photoevaporation
and the streaming instability is efficient at converting the solid component
of protoplanetary disks into planetesimals. Our model, however, does not form
enough early planetesimals in the inner and middle regions of the disk to give
rise to giant planets and super-Earths with gaseous envelopes. Additional
processes such as particle pileups and mass loss driven by MHD winds may be
needed to drive the formation of early planetesimal generations in the planet
forming regions of protoplanetary disks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fault Tolerant Thermal Control of Steam Turbine Shell Deflections | The metal-to-metal clearances of a steam turbine during full or part load
operation are among the main drivers of efficiency. The requirement to add
clearances is driven by a number of factors including the relative movements of
the steam turbine shell and rotor during transient conditions such as startup
and shutdown. This paper describes a control algorithm that manages external
heating blankets for the thermal control of the shell
deflections during turbine shutdown. The proposed method is tolerant of changes
in the heat loss characteristics of the system as well as simultaneous
component failures.
| 1 | 0 | 0 | 0 | 0 | 0 |
Causal Mediation Analysis Leveraging Multiple Types of Summary Statistics Data | Summary statistics of genome-wide association studies (GWAS) teach causal
relationship between millions of genetic markers and tens and thousands of
phenotypes. However, underlying biological mechanisms are yet to be elucidated.
We can achieve necessary interpretation of GWAS in a causal mediation
framework, looking to establish a sparse set of mediators between genetic and
downstream variables, but there are several challenges. Unlike existing methods
rely on strong and unrealistic assumptions, we tackle practical challenges
within a principled summary-based causal inference framework. We analyzed the
proposed methods in extensive simulations generated from real-world genetic
data. We demonstrated only our approach can accurately redeem causal genes,
even without knowing actual individual-level data, despite the presence of
competing non-causal trails.
| 1 | 0 | 0 | 1 | 1 | 0 |
Causal Queries from Observational Data in Biological Systems via Bayesian Networks: An Empirical Study in Small Networks | Biological networks are a very convenient modelling and visualisation tool to
discover knowledge from modern high-throughput genomics and postgenomics data
sets. Indeed, biological entities are not isolated, but are components of
complex multi-level systems. We go one step further and advocate for the
consideration of causal representations of the interactions in living
systems. We present the causal formalism and bring it out in the context of
biological networks, when the data is observational. We discuss its ability to
decipher the causal information flow as observed in gene expression, and we
illustrate our exploration by experiments on small simulated networks
as well as on a real biological data set.
| 0 | 0 | 0 | 1 | 1 | 0 |
Hierarchical Bloom Filter Trees for Approximate Matching | Bytewise approximate matching algorithms have in recent years shown
significant promise in de- tecting files that are similar at the byte level.
This is very useful for digital forensic investigators, who are regularly faced
with the problem of searching through a seized device for pertinent data. A
common scenario is where an investigator is in possession of a collection of
"known-illegal" files (e.g. a collection of child abuse material) and wishes to
find whether copies of these are stored on the seized device. Approximate
matching addresses shortcomings in traditional hashing, which can only find
identical files, by also being able to deal with cases of merged files,
embedded files, partial files, or if a file has been changed in any way.
Most approximate matching algorithms work by comparing pairs of files, which
is not a scalable approach when faced with large corpora. This paper
demonstrates the effectiveness of using a "Hierarchical Bloom Filter Tree"
(HBFT) data structure to reduce the running time of
collection-against-collection matching, with a specific focus on the MRSH-v2
algorithm. Three experiments are discussed, which explore the effects of
different configurations of HBFTs. The proposed approach dramatically reduces
the number of pairwise comparisons required, and demonstrates substantial speed
gains, while maintaining effectiveness.
| 1 | 0 | 0 | 0 | 0 | 0 |
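The HBFT is assembled from plain Bloom filters; a minimal sketch of that underlying membership primitive follows (the sizes `m` and `k` are illustrative assumptions, and this flat filter omits the hierarchical tree layer):

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives, rare false positives."""
    def __init__(self, m=10_000, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # k pseudo-independent bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Insert fingerprints of ten "known" files.
bf = BloomFilter()
for i in range(10):
    bf.add(f"known-file-{i}")
```

Arranging such filters in a tree lets a whole subtree of the corpus be ruled out with a single query, which is where the dramatic reduction in pairwise comparisons comes from.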
GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids | GANDALF is a new hydrodynamics and N-body dynamics code designed for
investigating planet formation, star formation and star cluster problems.
GANDALF is written in C++, parallelised with both OpenMP and MPI and contains a
python library for analysis and visualisation. The code has been written with a
fully object-oriented approach to easily allow user-defined implementations of
physics modules or other algorithms. The code currently contains
implementations of Smoothed Particle Hydrodynamics, Meshless Finite-Volume and
collisional N-body schemes, but can easily be adapted to include additional
particle schemes. We present in this paper the details of its implementation,
results from the test suite, serial and parallel performance results and
discuss the planned future development. The code is freely available as an open
source project on the code-hosting website github at
this https URL and is available under the GPLv2
license.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pre-freezing transition in Boltzmann-Gibbs measures associated with log-correlated fields | We consider Boltzmann-Gibbs measures associated with log-correlated Gaussian
fields as potentials and study their multifractal properties which exhibit
phase transitions. In particular, the pre-freezing and freezing phenomena of
the annealed exponent, predicted by Fyodorov using a modified
replica-symmetry-breaking ansatz, are generalised to arbitrary dimension and
verified using results from Gaussian multiplicative chaos theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Combinatorial Optimization Algorithms over Graphs | The design of good heuristics or approximation algorithms for NP-hard
combinatorial optimization problems often requires significant specialized
knowledge and trial-and-error. Can we automate this challenging, tedious
process, and learn the algorithms instead? In many real-world applications, it
is typically the case that the same optimization problem is solved again and
again on a regular basis, maintaining the same problem structure but differing
in the data. This provides an opportunity for learning heuristic algorithms
that exploit the structure of such recurring problems. In this paper, we
propose a unique combination of reinforcement learning and graph embedding to
address this challenge. The learned greedy policy behaves like a meta-algorithm
that incrementally constructs a solution, and the action is determined by the
output of a graph embedding network capturing the current state of the
solution. We show that our framework can be applied to a diverse range of
optimization problems over graphs, and learns effective algorithms for the
Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Optimal Oil Production and Taxation in Presence of Global Disruptions | This paper studies the optimal extraction policy of an oil field as well as
the efficient taxation of the revenues generated. Taking into account the fact
that the oil price in worldwide commodity markets fluctuates randomly following
global and seasonal macroeconomic parameters, we model the evolution of the oil
price as a mean reverting regime-switching jump diffusion process. Given that
oil producing countries rely on oil sale revenues as well as taxes levied on
oil companies for a good portion of the revenue side of their budgets, we
formulate this problem as a differential game where the two players are the
mining company whose aim is to maximize the revenues generated from its
extracting activities and the government agency in charge of regulating and
taxing natural resources. We prove the existence of a Nash equilibrium and the
convergence of an approximating scheme for the value functions. Furthermore,
optimal extraction and fiscal policies that should be applied when the
equilibrium is reached are derived. A numerical example is presented to
illustrate these results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Critical well-posedness and scattering results for fractional Hartree-type equations | Scattering for the mass-critical fractional Schrödinger equation with a
cubic Hartree-type nonlinearity for initial data in a small ball in the
scale-invariant space of three-dimensional radial and square-integrable initial
data is established. For this, we prove a bilinear estimate for free solutions
and extend it to perturbations of bounded quadratic variation. This result is
shown to be sharp by proving the unboundedness of a third order derivative of
the flow map in the super-critical range.
| 0 | 0 | 1 | 0 | 0 | 0 |
Lightweight Multilingual Software Analysis | Developer preferences, language capabilities and the persistence of older
languages contribute to the trend that large software codebases are often
multilingual, that is, written in more than one computer language. While
developers can leverage monolingual software development tools to build
software components, companies are faced with the problem of managing the
resultant large, multilingual codebases to address issues with security,
efficiency, and quality metrics. The key challenge is to address the opaque
nature of the language interoperability interface: one language calling
procedures in a second (which may call a third, or even back to the first),
resulting in a potentially tangled, inefficient and insecure codebase. An
architecture is proposed for lightweight static analysis of large multilingual
codebases: the MLSA architecture. Its modular and table-oriented structure
addresses the open-ended nature of multiple languages and language
interoperability APIs. We focus here as an application on the construction of
call-graphs that capture both inter-language and intra-language calls. The
algorithms for extracting multilingual call-graphs from codebases are
presented, and several examples of multilingual software engineering analysis
are discussed. The state of the implementation and testing of MLSA is
presented, and the implications for future work are discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Room-temperature 1.54 $μ$m photoluminescence of Er:O$_x$ centers at extremely low concentration in silicon | The demand for single photon sources at $\lambda~=~1.54~\mu$m, which follows
from the consistent development of quantum networks based on commercial optical
fibers, makes Er:O$_x$ centers in Si still a viable resource thanks to the
optical transition of $Er^{3+}~:~^4I_{13/2}~\rightarrow~^4I_{15/2}$. Yet, to
date, the implementation of such a system remains hindered by its extremely low
emission rate. In this Letter, we explore the room-temperature
photoluminescence (PL) at the telecom wavelength of very low implantation
doses of $Er:O_x$ in $Si$. The emitted photons, excited by a $\lambda~=~792~nm$
laser in both large areas and confined dots of diameter down to $5~\mu$m, are
collected by an inverted confocal microscope. The lower-bound number of
detectable emission centers within our diffraction-limited illumination spot is
estimated to be down to about 10$^4$, corresponding to an emission rate per
individual ion of about $4~\times~10^{3}$ photons/s.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sparse Algorithm for Robust LSSVM in Primal Space | As enjoying the closed form solution, least squares support vector machine
(LSSVM) has been widely used for classification and regression problems having
the comparable performance with other types of SVMs. However, LSSVM has two
drawbacks: sensitivity to outliers and lack of sparseness. Robust LSSVM (R-LSSVM)
partly overcomes the first via a nonconvex truncated loss function, but the
current algorithms for R-LSSVM with the dense solution are faced with the
second drawback and are inefficient for training large-scale problems. In this
paper, we interpret the robustness of R-LSSVM from a re-weighted viewpoint and
give a primal R-LSSVM by the representer theorem. The new model may have sparse
solution if the corresponding kernel matrix has low rank. Then approximating
the kernel matrix by a low-rank matrix and smoothing the loss function by
entropy penalty function, we propose a convergent sparse R-LSSVM (SR-LSSVM)
algorithm to achieve the sparse solution of primal R-LSSVM, which overcomes two
drawbacks of LSSVM simultaneously. The proposed algorithm has lower complexity
than the existing algorithms and is very efficient for training large-scale
problems. Many experimental results illustrate that SR-LSSVM can achieve better
or comparable performance with less training time than related algorithms,
especially for training large-scale problems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning | We consider the networked multi-agent reinforcement learning (MARL) problem
in a fully decentralized setting, where agents learn to coordinate to achieve
joint success. This problem is widely encountered in many areas including
traffic control, distributed control, and smart grids. We assume that the
reward function for each agent can be different and observed only locally by
the agent itself. Furthermore, each agent is located at a node of a
communication network and can exchange information only with its neighbors.
Using softmax temporal consistency and a decentralized optimization method, we
obtain a principled and data-efficient iterative algorithm. In the first step
of each iteration, an agent computes its local policy and value gradients and
then updates only policy parameters. In the second step, the agent propagates
to its neighbors the messages based on its value function and then updates its
own value function. Hence we name the algorithm value propagation. We prove a
non-asymptotic convergence rate of 1/T with nonlinear function approximation.
To the best of our knowledge, it is the first MARL algorithm with a
convergence guarantee in the control, off-policy, and nonlinear function
approximation
setting. We empirically demonstrate the effectiveness of our approach in
experiments.
| 1 | 0 | 0 | 1 | 0 | 0 |
Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible | Non-interactive Local Differential Privacy (LDP) requires data analysts to
collect data from users through a noisy channel in a single round. In this
paper, we extend
the frontiers of Non-interactive LDP learning and estimation from several
aspects. For learning with smooth generalized linear losses, we propose an
approximate stochastic gradient oracle estimated from non-interactive LDP
channel, using Chebyshev expansion. Combined with inexact gradient methods, we
obtain an efficient algorithm with a quasi-polynomial sample complexity bound.
In the high-dimensional setting, we discover that under an $\ell_2$-norm
assumption on data points, high-dimensional sparse linear regression and mean
estimation
can be achieved with logarithmic dependence on dimension, using random
projection and approximate recovery. We also extend our methods to Kernel Ridge
Regression. Our work is the first to make learning and estimation possible for
a broad range of learning tasks under the non-interactive LDP model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Independent Causal Mechanisms | Statistical learning relies upon data sampled from a distribution, and we
usually do not care what actually generated it in the first place. From the
point of view of causal modeling, the structure of each distribution is induced
by physical mechanisms that give rise to dependences between observables.
Mechanisms, however, can be meaningful autonomous modules of generative models
that make sense beyond a particular entailed data distribution, lending
themselves to transfer between problems. We develop an algorithm to recover a
set of independent (inverse) mechanisms from a set of transformed data points.
The approach is unsupervised and based on a set of experts that compete for
data generated by the mechanisms, driving specialization. We analyze the
proposed method in a series of experiments on image data. Each expert learns to
map a subset of the transformed data back to a reference distribution. The
learned mechanisms generalize to novel domains. We discuss implications for
transfer learning and links to recent trends in generative modeling.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Bayesian Model for False Information Belief Impact, Optimal Design, and Fake News Containment | This work is a technical approach to modeling the nature, design, belief
impact, and containment of false information in multi-agent networks. We
present a
Bayesian mathematical model for source information and viewer's belief, and how
the former impacts the latter in a media (network) of broadcasters and viewers.
Given the proposed model, we study how a particular information (true or false)
can be optimally designed into a report, so that on average it conveys the most
amount of the original intended information to the viewers of the network.
Consequently, the model allows us to study the susceptibility of a particular
group of viewers to false information as a function of statistical metrics of
their prior beliefs (e.g., bias, hesitation, open-mindedness, credibility
assessment). In addition, based on the same model, we can study false
information "containment" strategies imposed by network administrators.
Specifically, we study a credibility assessment strategy, where every
disseminated report must be within a certain distance of the truth. We study
the trade-off between false and true information-belief convergence under this
scheme, which leads to ways of optimally deciding how truth-sensitive an
information dissemination network should be.
| 1 | 0 | 0 | 0 | 0 | 0 |
Topological dynamics of gyroscopic and Floquet lattices from Newton's laws | Despite intense interest in realizing topological phases across a variety of
electronic, photonic and mechanical platforms, the detailed microscopic origin
of topological behavior often remains elusive. To bridge this conceptual gap,
we show how hallmarks of topological modes - boundary localization and
chirality - emerge from Newton's laws in mechanical topological systems. We
first construct a gyroscopic lattice with analytically solvable edge modes, and
show how the Lorentz and spring restoring forces conspire to support very
robust "dangling bond" boundary modes. The chirality and locality of these
modes intuitively emerges from microscopic balancing of restoring forces and
cyclotron tendencies. Next, we introduce the highlight of this work, a very
experimentally realistic mechanical non-equilibrium (Floquet) Chern lattice
driven by AC electromagnets. Through appropriate synchronization of the AC
driving protocol, the Floquet lattice is "pushed around" by a rotating
potential analogous to an object washed ashore by water waves. Besides hosting
"dangling bond" chiral modes analogous to the gyroscopic boundary modes, our
Floquet Chern lattice also supports peculiar half-period chiral modes with no
static analog. With key parameters controlled electronically, our setup has the
advantage of being dynamically tunable for applications involving arbitrary
Floquet modulations. The physical intuition gleaned from our two prototypical
topological systems is applicable not just to arbitrarily complicated
mechanical systems, but also to photonic and electrical topological setups.
| 0 | 1 | 1 | 0 | 0 | 0 |
Stability of axisymmetric chiral skyrmions | We examine topological solitons in a minimal variational model for a chiral
magnet, so-called chiral skyrmions. In the regime of large background fields,
we prove linear stability of axisymmetric chiral skyrmions under arbitrary
perturbations in the energy space, a long-standing open question in the
physics literature. Moreover, we show strict local minimality of axisymmetric
chiral skyrmions and the nearby existence of a moving soliton solution for the
Landau-Lifshitz-Gilbert equation driven by a small spin transfer torque.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficiency versus instability in plasma accelerators | Plasma wake-field acceleration is one of the main technologies being
developed for future high-energy colliders. Potentially, it can create a
cost-effective path to the highest possible energies for e+e- or
{\gamma}-{\gamma} colliders and produce a profound effect on the developments
for high-energy physics. Acceleration in a blowout regime, where all plasma
electrons are swept away from the axis, is presently considered to be the
primary choice for beam acceleration. In this paper, we derive a universal
efficiency-instability relation between the power efficiency and the key
instability parameter of the trailing bunch for beam acceleration in the
blowout regime. We also show that the suppression of instability in the
trailing bunch can be achieved through BNS damping by the introduction of a
beam energy variation along the bunch. Unfortunately, in the high efficiency
regime, the required energy variation is quite high, and is not presently
compatible with collider-quality beams. We would like to stress that the
development of the instability imposes a fundamental limitation on the
acceleration efficiency, and it is unclear how it could be overcome for
high-luminosity linear colliders. With minor modifications, the considered
limitation on the power efficiency is applicable to other types of
acceleration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Resistivity bound for hydrodynamic bad metals | We obtain a rigorous upper bound on the resistivity $\rho$ of an electron
fluid whose electronic mean free path is short compared to the scale of spatial
inhomogeneities. When such a hydrodynamic electron fluid supports a non-thermal
diffusion process -- such as an imbalance mode between different bands -- we
show that the resistivity bound becomes $\rho \lesssim A \, \Gamma$. The
coefficient $A$ is independent of temperature and inhomogeneity lengthscale,
and $\Gamma$ is a microscopic momentum-preserving scattering rate. In this way
we obtain a unified and novel mechanism -- without umklapp -- for $\rho \sim
T^2$ in a Fermi liquid and the crossover to $\rho \sim T$ in quantum critical
regimes. This behavior is widely observed in transition metal oxides, organic
metals, pnictides and heavy fermion compounds and has presented a longstanding
challenge to transport theory. Our hydrodynamic bound allows phonon
contributions to diffusion constants, including thermal diffusion, to directly
affect the electrical resistivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Minimal Exploration in Structured Stochastic Bandits | This paper introduces and addresses a wide class of stochastic bandit
problems where the function mapping the arm to the corresponding reward
exhibits some known structural properties. Most existing structures (e.g.
linear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our
framework. We derive an asymptotic instance-specific regret lower bound for
these problems, and develop OSSB, an algorithm whose regret matches this
fundamental limit. OSSB is not based on the classical principle of "optimism in
the face of uncertainty" or on Thompson sampling, and rather aims at matching
the minimal exploration rates of sub-optimal arms as characterized in the
derivation of the regret lower bound. We illustrate the efficiency of OSSB
using numerical experiments in the case of the linear bandit problem and show
that OSSB outperforms existing algorithms, including Thompson sampling.
| 1 | 0 | 0 | 1 | 0 | 0 |
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains | While modern day web applications aim to create impact at the civilization
level, they have become vulnerable to adversarial activity, where the next
cyber-attack can take any shape and can originate from anywhere. The
increasing scale and sophistication of attacks has prompted the need for a
data-driven solution, with machine learning forming the core of many
cybersecurity systems.
Machine learning was not designed with security in mind, and the essential
assumption of stationarity, requiring that the training and testing data follow
similar distributions, is violated in an adversarial domain. In this paper, an
adversary's viewpoint of a classification-based system is presented. Based on
a formal adversarial model, the Seed-Explore-Exploit framework is presented
for simulating the generation of data-driven and reverse engineering attacks
on classifiers. Experimental evaluation, on 10 real-world datasets and using
the
Google Cloud Prediction Platform, demonstrates the innate vulnerability of
classifiers and the ease with which evasion can be carried out, without any
explicit information about the classifier type, the training data or the
application domain. The proposed framework, algorithms, and empirical
evaluation serve as a white-hat analysis of the vulnerabilities and aim to
foster the development of secure machine learning frameworks.
| 1 | 0 | 0 | 1 | 0 | 0 |
On Optimistic versus Randomized Exploration in Reinforcement Learning | We discuss the relative merits of optimistic and randomized approaches to
exploration in reinforcement learning. Optimistic approaches presented in the
literature apply an optimistic boost to the value estimate at each state-action
pair and select actions that are greedy with respect to the resulting
optimistic value function. Randomized approaches sample from among
statistically plausible value functions and select actions that are greedy with
respect to the random sample. Prior computational experience suggests that
randomized approaches can lead to far more statistically efficient learning. We
present two simple analytic examples that elucidate why this is the case. In
principle, there should be optimistic approaches that fare well relative to
randomized approaches, but that would require intractable computation.
Optimistic approaches that have been proposed in the literature sacrifice
statistical efficiency for the sake of computational efficiency. Randomized
approaches, on the other hand, may enable simultaneous statistical and
computational efficiency.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations | Size-, weight-, and power-constrained platforms impose limits on
computational resources that introduce unique challenges in implementing
localization algorithms. We present a framework to perform fast localization on
such platforms enabled by the compressive capabilities of Gaussian Mixture
Model representations of point cloud data. Given raw structural data from a
depth sensor and pitch and roll estimates from an on-board attitude reference
system, a multi-hypothesis particle filter localizes the vehicle by exploiting
the likelihood of the data originating from the mixture model. We analyze this
likelihood in the vicinity of the ground-truth pose, detail its use in a
particle filter-based vehicle localization strategy, and present results of
real-time implementations on a desktop system and an off-the-shelf embedded
platform that outperform a state-of-the-art algorithm run in the same
environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generalized two-field $α$-attractor models from geometrically finite hyperbolic surfaces | We consider four-dimensional gravity coupled to a non-linear sigma model
whose scalar manifold is a non-compact geometrically finite surface $\Sigma$
endowed with a Riemannian metric of constant negative curvature. When the
space-time is an FLRW universe, such theories produce a very wide
generalization of two-field $\alpha$-attractor models, being parameterized by a
positive constant $\alpha$, by the choice of a finitely-generated surface group
$\Gamma\subset \mathrm{PSL}(2,\mathbb{R})$ (which is isomorphic with the
fundamental group of $\Sigma$) and by the choice of a scalar potential defined
on $\Sigma$. The traditional two-field $\alpha$-attractor models arise when
$\Gamma$ is the trivial group, in which case $\Sigma$ is the Poincaré disk.
We give a general prescription for the study of such models through
uniformization in the so-called "non-elementary" case and discuss some of their
qualitative features in the gradient flow approximation, which we relate to
Morse theory. We also discuss some aspects of the SRST approximation in these
models, showing that it is generally not well-suited for studying dynamics near
cusp ends. When $\Sigma$ is non-compact and the scalar potential is
"well-behaved" at the ends, we show that, in the {\em naive} local one-field
truncation, our generalized models have the same universal behavior as ordinary
one-field $\alpha$-attractors if inflation happens near any of the ends of
$\Sigma$ where the extended potential has a local maximum, for trajectories
which are well approximated by non-canonically parameterized geodesics near the
ends. We also discuss spiral trajectories near the ends.
| 0 | 1 | 1 | 0 | 0 | 0 |
The Geodetic Hull Number is Hard for Chordal Graphs | We show the hardness of the geodetic hull number for chordal graphs.
| 1 | 0 | 0 | 0 | 0 | 0 |
$\overline{M}_{1,n}$ is usually not uniruled in characteristic $p$ | Using étale cohomology, we define a birational invariant for varieties in
characteristic $p$ that serves as an obstruction to uniruledness - a variant on
an obstruction to unirationality due to Ekedahl. We apply this to
$\overline{M}_{1,n}$ and show that $\overline{M}_{1,n}$ is not uniruled in
characteristic $p$ as long as $n \geq p \geq 11$. To do this, we use Deligne's
description of the étale cohomology of $\overline{M}_{1,n}$ and apply the
theory of congruences between modular forms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Active Community Detection: A Maximum Likelihood Approach | We propose novel semi-supervised and active learning algorithms for the
problem of community detection on networks. The algorithms are based on
optimizing the likelihood function of the community assignments given a graph
and an estimate of the statistical model that generated it. The optimization
framework is inspired by prior work on the unsupervised community detection
problem in Stochastic Block Models (SBM) using Semi-Definite Programming (SDP).
In this paper, we provide the next steps in the evolution of learning
communities in this context, which involve a constrained semi-definite
programming algorithm and a newly presented active learning algorithm. The
active learner intelligently queries nodes that are expected to maximize the
change in the model likelihood. Experimental results show that this active
learning algorithm outperforms the random-selection semi-supervised version of
the same algorithm as well as other state-of-the-art active learning
algorithms. The significantly improved performance of our algorithms is
demonstrated on both real-world and SBM-generated networks, even when the SBM
has a signal-to-noise ratio (SNR) below the known unsupervised detectability
threshold.
| 1 | 0 | 0 | 1 | 0 | 0 |
Continuum Limit of Posteriors in Graph Bayesian Inverse Problems | We consider the problem of recovering a function input of a differential
equation formulated on an unknown domain $M$. We assume access to a discrete
domain $M_n=\{x_1, \dots, x_n\} \subset M$, and to noisy measurements
of the output solution at $p\le n$ of those points. We introduce a graph-based
Bayesian inverse problem, and show that the graph-posterior measures over
functions in $M_n$ converge, in the large $n$ limit, to a posterior over
functions in $M$ that solves a Bayesian inverse problem with known domain.
The proofs rely on the variational formulation of the Bayesian update, and on
a new topology for the study of convergence of measures over functions on point
clouds to a measure over functions on the continuum. Our framework, techniques,
and results may serve to lay the foundations of robust uncertainty
quantification of graph-based tasks in machine learning. The ideas are
presented in the concrete setting of recovering the initial condition of the
heat equation on an unknown manifold.
| 0 | 0 | 1 | 1 | 0 | 0 |
Automatic Conflict Detection in Police Body-Worn Audio | Automatic conflict detection has grown in relevance with the advent of
body-worn technology, but existing metrics such as turn-taking and overlap are
poor indicators of conflict in police-public interactions. Moreover, standard
techniques to compute them fall short when applied to such diversified and
noisy contexts. We develop a pipeline catered to this task combining adaptive
noise removal, non-speech filtering and new measures of conflict based on the
repetition and intensity of phrases in speech. We demonstrate the effectiveness
of our approach on body-worn audio data collected by the Los Angeles Police
Department.
| 1 | 0 | 0 | 1 | 0 | 0 |
The cobordism hypothesis | Assuming a conjecture about factorization homology with adjoints, we prove
the cobordism hypothesis, after Baez-Dolan, Costello, Hopkins-Lurie, and Lurie.
| 0 | 0 | 1 | 0 | 0 | 0 |
LAMOST telescope reveals that Neptunian cousins of hot Jupiters are mostly single offspring of stars that are rich in heavy elements | We discover a population of short-period, Neptune-size planets sharing key
similarities with hot Jupiters: both populations are preferentially hosted by
metal-rich stars, and both are preferentially found in Kepler systems with
single transiting planets. We use accurate LAMOST DR4 stellar parameters for
main-sequence stars to study the distributions of short-period 1d < P < 10d
Kepler planets as a function of host star metallicity. The radius distribution
of planets around metal-rich stars is more "puffed up" as compared to that
around metal-poor hosts. In two period-radius regimes, planets preferentially
reside around metal-rich stars, while there are hardly any planets around
metal-poor stars. One is the well-known hot Jupiters, and the other is a
population of Neptune-size planets (2 R_Earth <~ R_p <~ 6 R_Earth), dubbed
"Hoptunes". Also like hot Jupiters, Hoptunes occur more frequently in systems
with single transiting planets, though the fraction of Hoptunes occurring in
multiples is larger than that of hot Jupiters. About 1% of solar-type stars
host "Hoptunes", and the frequencies of Hoptunes and hot Jupiters increase with
consistent trends as a function of [Fe/H]. In the planet radius distribution,
hot Jupiters and Hoptunes are separated by a "valley" at approximately Saturn
size (in the range of 6 R_Earth <~ R_p <~ 10 R_Earth), and this "hot-Saturn
valley" represents approximately an order-of-magnitude decrease in planet
frequency compared to hot Jupiters and Hoptunes. The empirical "kinship"
between Hoptunes and hot Jupiters suggests likely common processes (migration
and/or formation) responsible for their existence.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Latent Variable Model for Two-Dimensional Canonical Correlation Analysis and its Variational Inference | Describing dimension reduction (DR) techniques by means of probabilistic
models has recently received special attention. Probabilistic models, in
addition to offering better interpretability of DR methods, provide a
framework for further extensions of such algorithms. One of the newer
approaches to probabilistic DR methods is to preserve the internal structure
of the data, meaning that the data need not first be converted from matrix or
tensor format to vector format in the process of dimensionality reduction. In
this paper, a latent variable model for matrix-variate data for canonical
correlation analysis (CCA) is proposed. Since in general there is no
analytical maximum likelihood solution for this model, we present two
approaches for learning the parameters. The proposed methods are evaluated on
synthetic data in terms of convergence and quality of mappings. A real data
set is also employed to assess the proposed methods against several
probabilistic and non-probabilistic CCA-based approaches. The results confirm
the superiority of the proposed methods with respect to the competing
algorithms. Moreover, this model can be considered as a framework for further
extensions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Model enumeration in propositional circumscription via unsatisfiable core analysis | Many practical problems are characterized by a preference relation over
admissible solutions, where preferred solutions are minimal in some sense. For
example, a preferred diagnosis usually comprises a minimal set of reasons that
is sufficient to cause the observed anomaly. Alternatively, a minimal
correction subset comprises a minimal set of reasons whose deletion is
sufficient to eliminate the observed anomaly. Circumscription formalizes such
preference relations by associating propositional theories with minimal models.
The resulting enumeration problem is addressed here by means of a new algorithm
taking advantage of unsatisfiable core analysis. Empirical evidence of the
efficiency of the algorithm is given by comparing the performance of the
resulting solver, CIRCUMSCRIPTINO, with HCLASP, CAMUS MCS, LBX and MCSLS on the
enumeration of minimal models for problems originating from practical
applications.
This paper is under consideration for acceptance in TPLP.
| 1 | 0 | 0 | 0 | 0 | 0 |
Structured Neural Summarization | Summarization of long sequences into a concise statement is a core problem in
natural language processing, requiring non-trivial understanding of the input.
Based on the promising results of graph neural networks on highly structured
data, we develop a framework to extend existing sequence encoders with a graph
component that can reason about long-distance relationships in weakly
structured data such as text. In an extensive evaluation, we show that the
resulting hybrid sequence-graph models outperform both pure sequence models as
well as pure graph models on a range of summarization tasks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Variations on a Visserian Theme | A first order theory T is said to be "tight" if for any two deductively
closed extensions U and V of T (both of which are formulated in the language of
T), U and V are bi-interpretable iff U = V. By a theorem of Visser, PA (Peano
Arithmetic) is tight. Here we show that Z_2 (second order arithmetic), ZF
(Zermelo-Fraenkel set theory), and KM (Kelley-Morse theory of classes) are also
tight theories.
| 0 | 0 | 1 | 0 | 0 | 0 |
Galerkin Least-Squares Stabilization in Ice Sheet Modeling - Accuracy, Robustness, and Comparison to other Techniques | We investigate the accuracy and robustness of one of the most common methods
used in glaciology for the discretization of the $\mathfrak{p}$-Stokes
equations: equal order finite elements with Galerkin Least-Squares (GLS)
stabilization. Furthermore we compare the results to other stabilized methods.
We find that the vertical velocity component is more sensitive to the choice of
GLS stabilization parameter than horizontal velocity. Additionally, the
accuracy of the vertical velocity component is especially important since
errors in this component can cause ice surface instabilities and propagate into
future ice volume predictions. If the element cell size is set to the minimum
edge length and the stabilization parameter is allowed to vary non-linearly
with viscosity, the GLS stabilization parameter found in the literature is a
good choice on simple domains. However, near ice margins the standard parameter
choice may result in significant oscillations in the vertical component of the
surface velocity. For these cases, other stabilization techniques, such as the
interior penalty method, result in better accuracy and are less sensitive to
the choice of the stabilization parameter. During this work we also discovered
that the manufactured solutions often used to evaluate errors in glaciology are
not reliable due to high artificial surface forces at singularities. We perform
our numerical experiments in both FEniCS and Elmer/Ice.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improved Query Reformulation for Concept Location using CodeRank and Document Structures | During software maintenance, developers usually deal with a significant
number of software change requests. As a part of this, they often formulate an
initial query from the request texts, and then attempt to map the concepts
discussed in the request to relevant source code locations in the software
system (a.k.a., concept location). Unfortunately, studies suggest that they
often perform poorly in choosing the right search terms for a change task. In
this paper, we propose a novel technique --ACER-- that takes an initial query,
identifies appropriate search terms from the source code using a novel term
weight --CodeRank, and then suggests effective reformulation to the initial
query by exploiting the source document structures, query quality analysis and
machine learning. Experiments with 1,675 baseline queries from eight subject
systems report that our technique can improve 71% of the baseline queries,
which is highly promising. Comparison with five closely related existing
techniques
in query reformulation not only validates our empirical findings but also
demonstrates the superiority of our technique.
| 1 | 0 | 0 | 0 | 0 | 0 |
High-performance parallel computing in the classroom using the public goods game as an example | The use of computers in statistical physics is common because the sheer
number of equations that describe the behavior of an entire system particle by
particle often makes it impossible to solve them exactly. Monte Carlo methods
form a particularly important class of numerical methods for solving problems
in statistical physics. Although these methods are simple in principle, their
proper use requires a good command of statistical mechanics, as well as
considerable computational resources. The aim of this paper is to demonstrate
how the usage of widely accessible graphics cards on personal computers can
elevate the computing power in Monte Carlo simulations by orders of magnitude,
thus allowing live classroom demonstration of phenomena that would otherwise be
out of reach. As an example, we use the public goods game on a square lattice
where two strategies compete for common resources in a social dilemma
situation. We show that the second-order phase transition to an absorbing phase
in the system belongs to the directed percolation universality class, and we
compare the time needed to arrive at this result by means of the main processor
and by means of a suitable graphics card. Parallel computing on graphics
processing units has been developed actively during the last decade, to the
point where today the learning curve for entry is anything but steep for those
familiar with programming. The subject is thus ripe for inclusion in graduate
and advanced undergraduate curricula, and we hope that this paper will
facilitate this process in the realm of physics education. To that end, we
provide a documented source code for an easy reproduction of presented results
and for further development of Monte Carlo simulations of similar systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Coupled spin-charge dynamics in helical Fermi liquids beyond the random phase approximation | We consider a helical system of fermions with a generic spin (or pseudospin)
orbit coupling. Using the equation of motion approach for the single-particle
distribution functions, and a mean-field decoupling of the higher order
distribution functions, we find a closed form for the charge and spin density
fluctuations in terms of the charge and spin density linear response functions.
Approximating the nonlocal exchange term with a Hubbard-like local-field
factor, we obtain a coupled spin- and charge-density response matrix beyond
the random phase approximation, whose poles give the dispersion of four
collective
spin-charge modes. We apply our generic technique to the well-explored
two-dimensional system with Rashba spin-orbit coupling and illustrate how it
gives results for the collective modes, Drude weight, and spin-Hall
conductivity which are in very good agreement with the results obtained from
other more sophisticated approaches.
| 0 | 1 | 0 | 0 | 0 | 0 |
Correlation decay in fermionic lattice systems with power-law interactions at non-zero temperature | We study correlations in fermionic lattice systems with long-range
interactions in thermal equilibrium. We prove a bound on the correlation decay
between anti-commuting operators and generalize a long-range Lieb-Robinson type
bound. Our results show that in these systems of spatial dimension $D$ with,
not necessarily translation invariant, two-site interactions decaying
algebraically with the distance with an exponent $\alpha \geq 2\,D$,
correlations between such operators decay at least algebraically with an
exponent arbitrarily close to $\alpha$ at any non-zero temperature. Our bound
is asymptotically tight, which we demonstrate by a high temperature expansion
and by numerically analyzing density-density correlations in the 1D quadratic
(free, exactly solvable) Kitaev chain with long-range pairing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Integrated Microsimulation Framework for Dynamic Pedestrian Movement Estimation in Mobility Hub | We present an integrated microsimulation framework to estimate the pedestrian
movement over time and space with limited data on directional counts. Using the
activity-based approach, simulation can compute the overall demand and
trajectory of each agent, which are in accordance with the available partial
observations and are in response to the initial and evolving supply conditions
and schedules. This simulation contains a chain of processes including:
activity generation, decision point choices, and assignment. They are
considered in an iteratively updating loop so that the simulation can
dynamically correct its estimates of demand. A Markov chain is constructed for
this loop. These considerations transform the problem into a convergence
problem. A Metropolis-Hastings algorithm is then adapted to identify the
optimal solution. This framework can be used to compensate for missing data or to
model the reactions of demand to exogenous changes in the scenario. Finally, we
present a case study on Montreal Central Station, on which we tested the
developed framework and calibrated the models. We then applied it to a possible
future scenario for the same station.
| 0 | 1 | 1 | 0 | 0 | 0 |
Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization | Stochastic optimization naturally arises in machine learning. Efficient
algorithms with provable guarantees, however, are still largely missing, when
the objective function is nonconvex and the data points are dependent. This
paper studies this fundamental challenge through a streaming PCA problem for
stationary time series data. Specifically, our goal is to estimate the
principal component of time series data with respect to the covariance matrix
of the stationary distribution. Computationally, we propose a variant of Oja's
algorithm combined with downsampling to control the bias of the stochastic
gradient caused by the data dependency. Theoretically, we quantify the
uncertainty of our proposed stochastic algorithm based on diffusion
approximations. This allows us to prove the asymptotic rate of convergence and
further implies near optimal asymptotic sample complexity. Numerical
experiments are provided to support our analysis.
| 0 | 0 | 0 | 1 | 0 | 0 |
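The Oja-with-downsampling idea summarised in the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the step size, the downsampling rate, and the synthetic AR(1) stream are all invented here.

```python
import numpy as np

def oja_downsampled(stream, dim, eta=0.005, downsample=5, seed=0):
    """Estimate the top principal component of a data stream with Oja's
    update, keeping only every `downsample`-th point to weaken the
    temporal dependence between consecutive samples."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for t, x in enumerate(stream):
        if t % downsample:               # drop dependent neighbours
            continue
        w += eta * x * (x @ w)           # stochastic (Oja) gradient step
        w /= np.linalg.norm(w)           # stay on the unit sphere
    return w

# Synthetic stationary AR(1) series whose covariance has a dominant axis.
rng = np.random.default_rng(1)
true_dir = np.array([3.0, 1.0]) / np.hypot(3.0, 1.0)
x, data = np.zeros(2), []
for _ in range(20000):
    x = 0.5 * x + true_dir * rng.standard_normal() + 0.1 * rng.standard_normal(2)
    data.append(x.copy())
w = oja_downsampled(data, dim=2)
```

With the strong eigengap of this toy covariance, `w` aligns closely with `true_dir`; the paper's contribution is quantifying such convergence via diffusion approximations, which this sketch does not attempt.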
Efficient tracking of a growing number of experts | We consider a variation on the problem of prediction with expert advice,
where new forecasters that were unknown until then may appear at each round. As
often in prediction with expert advice, designing an algorithm that achieves
near-optimal regret guarantees is straightforward, using aggregation of
experts. However, when the comparison class is sufficiently rich, for instance
when the best expert and the set of experts itself changes over time, such
strategies naively require maintaining a prohibitive number of weights
(typically exponential with the time horizon). By contrast, designing
strategies that both achieve a near-optimal regret and maintain a reasonable
number of weights is highly non-trivial. We consider three increasingly
challenging objectives (simple regret, shifting regret and sparse shifting
regret) that extend existing notions defined for a fixed expert ensemble; in
each case, we design strategies that achieve tight regret bounds, adaptive to
the parameters of the comparison class, while being computationally
inexpensive. Moreover, our algorithms are anytime, agnostic to the number of
incoming experts and completely parameter-free. Such remarkable results are
made possible thanks to two simple but highly effective recipes: first, the
"abstention trick", which comes from the specialist framework and enables
handling of the least challenging notions of regret but is limited when addressing
more sophisticated objectives. Second, the "muting trick" that we introduce to
give more flexibility. We show how to combine these two tricks in order to
handle the most challenging class of comparison strategies.
| 1 | 0 | 0 | 1 | 0 | 0 |
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data | Subsequence clustering of multivariate time series is a useful tool for
discovering repeated patterns in temporal data. Once these patterns have been
discovered, seemingly complicated datasets can be interpreted as a temporal
sequence of only a small number of states, or clusters. For example, raw sensor
data from a fitness-tracking application can be expressed as a timeline of a
select few actions (e.g., walking, sitting, running). However, discovering
these patterns is challenging because it requires simultaneous segmentation and
clustering of the time series. Furthermore, interpreting the resulting clusters
is difficult, especially when the data is high-dimensional. Here we propose a
new method of model-based clustering, which we call Toeplitz Inverse
Covariance-based Clustering (TICC). Each cluster in the TICC method is defined
by a correlation network, or Markov random field (MRF), characterizing the
interdependencies between different observations in a typical subsequence of
that cluster. Based on this graphical representation, TICC simultaneously
segments and clusters the time series data. We solve the TICC problem through
alternating minimization, using a variation of the expectation maximization
(EM) algorithm. We derive closed-form solutions to efficiently solve the two
resulting subproblems in a scalable way, through dynamic programming and the
alternating direction method of multipliers (ADMM), respectively. We validate
our approach by comparing TICC to several state-of-the-art baselines in a
series of synthetic experiments, and we then demonstrate on an automobile
sensor dataset how TICC can be used to learn interpretable clusters in
real-world scenarios.
| 1 | 0 | 1 | 0 | 0 | 0 |
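A heavily simplified sketch of the alternating structure described above (an assignment step, then a per-cluster model refit) may help fix ideas. It deliberately drops everything that makes TICC distinctive — the block-Toeplitz constraint, the sparsity penalty solved by ADMM, and the temporal-consistency term solved by dynamic programming — and the data and parameters are invented:

```python
import numpy as np

def simple_alternating_clustering(windows, k=2, iters=10, reg=1e-3, seed=0):
    """Toy alternating minimization in the spirit of TICC: assign each
    window to the cluster under which its Gaussian log-likelihood is
    highest, then refit each cluster's mean and covariance."""
    rng = np.random.default_rng(seed)
    n, p = windows.shape
    seeds = windows[rng.choice(n, size=k, replace=False)]
    labels = np.argmin(((windows[:, None, :] - seeds[None]) ** 2).sum(-1), axis=1)
    for _ in range(iters):
        ll = np.empty((n, k))
        for c in range(k):
            pts = windows[labels == c]
            if len(pts) < p + 1:          # guard near-empty clusters
                pts = windows
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + reg * np.eye(p)
            d = windows - mu
            prec = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            # per-window Gaussian log-likelihood (up to a constant)
            ll[:, c] = -0.5 * (np.einsum('ij,jk,ik->i', d, prec, d) + logdet)
        labels = ll.argmax(axis=1)
    return labels

# Two synthetic regimes with well-separated means.
rng = np.random.default_rng(1)
a = rng.standard_normal((200, 2))
b = rng.standard_normal((200, 2)) + 5.0
labels = simple_alternating_clustering(np.vstack([a, b]))
```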
The ellipse law: Kirchhoff meets dislocations | In this paper we consider a nonlocal energy $I_\alpha$ whose kernel is
obtained by adding to the Coulomb potential an anisotropic term weighted by a
parameter $\alpha\in \mathbb{R}$. The case $\alpha=0$ corresponds to purely logarithmic
interactions, minimised by the celebrated circle law for a quadratic
confinement; $\alpha=1$ corresponds to the energy of interacting dislocations,
minimised by the semi-circle law. We show that for $\alpha\in (0,1)$ the
minimiser can be computed explicitly and is the normalised characteristic
function of the domain enclosed by an \emph{ellipse}. To prove our result we
borrow techniques from fluid dynamics, in particular those related to
Kirchhoff's celebrated result that domains enclosed by ellipses are rotating
vortex patches, called \emph{Kirchhoff ellipses}. Therefore we show a
surprising connection between vortices and dislocations.
| 0 | 0 | 1 | 0 | 0 | 0 |
SAML-QC: a Stochastic Assessment and Machine Learning based QC technique for Industrial Printing | Recently, the advancement in industrial automation and high-speed printing
has raised numerous challenges related to the printing quality inspection of
final products. This paper proposes a machine vision based technique to assess
the printing quality of text on industrial objects. The assessment is based on
three quality defects: text misalignment, varying printing shades, and
misprinted text. The proposed scheme performs the quality inspection through a
stochastic assessment technique based on the second-order statistics of
printing. First: the text-containing area on printed product is identified
through image processing techniques. Second: the alignment testing of the
identified text-containing area is performed. Third: optical character
recognition is performed to divide the text into different small boxes and only
the intensity value of each text-containing box is taken as a random variable
and second-order statistics are estimated to determine the varying printing
defects in the text under one, two and three sigma thresholds. Fourth: the
K-Nearest Neighbors based supervised machine learning is performed to provide
the stochastic process for misprinted text detection. Finally, the technique is
deployed on an industrial image for the printing quality assessment with
varying values of n and m. The results have shown that the proposed SAML-QC
technique can perform real-time automated inspection for industrial printing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Probing the gravitational redshift with an Earth-orbiting satellite | We present an approach to testing the gravitational redshift effect using the
RadioAstron satellite. The experiment is based on a modification of the Gravity
Probe A scheme of nonrelativistic Doppler compensation and benefits from the
highly eccentric orbit and ultra-stable atomic hydrogen maser frequency
standard of the RadioAstron satellite. Using the presented techniques we expect
to reach an accuracy of the gravitational redshift test of order $10^{-5}$, an
order of magnitude better than that of Gravity Probe A. Data processing is
ongoing; our preliminary results are consistent with the Einstein Equivalence
Principle.
| 0 | 1 | 0 | 0 | 0 | 0 |
A stencil scaling approach for accelerating matrix-free finite element implementations | We present a novel approach to fast on-the-fly low order finite element
assembly for scalar elliptic partial differential equations of Darcy type with
variable coefficients optimized for matrix-free implementations. Our approach
introduces a new operator that is obtained by appropriately scaling the
reference stiffness matrix from the constant coefficient case. Assuming
sufficient regularity, an a priori analysis shows that solutions obtained by
this approach are unique and have asymptotically optimal order convergence in
the $H^1$- and the $L^2$-norm on hierarchical hybrid grids. For the
pre-asymptotic regime, we present a local modification that guarantees uniform
ellipticity of the operator. Cost considerations show that our novel approach
requires roughly one third of the floating-point operations compared to a
classical finite element assembly scheme employing nodal integration. Our
theoretical considerations are illustrated by numerical tests that confirm the
expectations with respect to accuracy and run-time. A large scale application
with more than a hundred billion ($1.6\cdot10^{11}$) degrees of freedom
executed on 14,310 compute cores demonstrates the efficiency of the new scaling
approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cramér-Rao Lower Bounds for Positioning with Large Intelligent Surfaces | We consider the potential for positioning with a system where antenna arrays
are deployed as a large intelligent surface (LIS). We derive the
Fisher information and Cramér-Rao lower bounds (CRLBs) in closed form for
terminals along the central perpendicular line (CPL) of the LIS for all three
Cartesian dimensions. For terminals at positions other than the CPL,
closed-form expressions for the Fisher information and CRLBs seem out of
reach, and we alternatively provide approximations (in closed-form) which are
shown to be very accurate. We also show that under mild conditions, the CRLBs
in general decrease quadratically in the surface-area for both the $x$ and $y$
dimensions. For the $z$-dimension (distance from the LIS), the CRLB decreases
linearly in the surface-area when terminals are along the CPL. However, when
terminals move away from the CPL, the CRLB is dramatically increased and then
also decreases quadratically in the surface-area. We also extensively discuss
the impact of different deployments (centralized and distributed) of the LIS.
| 1 | 0 | 0 | 0 | 0 | 0 |
Asymptotic behaviour methods for the Heat Equation. Convergence to the Gaussian | In this expository work we discuss the asymptotic behaviour of the solutions
of the classical heat equation posed in the whole Euclidean space.
After an introductory review of the main facts on the existence and
properties of solutions, we proceed with the proofs of convergence to the
Gaussian fundamental solution, a result that holds for all integrable
solutions, and represents in the PDE setting the Central Limit Theorem of
probability. We present several methods of proof: first, the scaling method.
Then several versions of the representation method. This is followed by the
functional analysis approach that leads to the famous related equations,
Fokker-Planck and Ornstein-Uhlenbeck. The analysis of this connection is also
given in rather complete form here. Finally, we present the Boltzmann entropy
method, coming from kinetic equations.
The different methods are interesting because of the possible extension to
prove the asymptotic behaviour or stabilization analysis for more general
equations, linear or nonlinear. It all depends a lot on the particular
features, and only one or some of the methods work in each case. Other settings
of the Heat Equation are briefly discussed in Section 9, and a longer mention of
results for different equations is given in Section 10.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magnetization dynamics of weakly interacting sub-100 nm square artificial spin ices | Artificial Spin Ice (ASI), consisting of a two dimensional array of nanoscale
magnetic elements, provides a fascinating opportunity to observe the physics of
out of equilibrium systems. Initial studies concentrated on the static, frozen
state, whilst more recent studies have accessed the out-of-equilibrium dynamic,
fluctuating state. This opens up exciting possibilities such as the observation
of systems exploring their energy landscape through monopole quasiparticle
creation, potentially leading to ASI magnetricity, and to directly observe
unconventional phase transitions. In this work we have measured and analysed
the magnetic relaxation of thermally active ASI systems by means of SQUID
magnetometry. We have investigated the effect of the interaction strength on
the magnetization dynamics at different temperatures in the range where the
nanomagnets are thermally active and have observed that they follow an
Arrhenius-type Néel-Brown behaviour. An unexpected negative correlation of
the average blocking temperature with the interaction strength is also
observed, which is supported by Monte Carlo simulations. The magnetization
relaxation measurements show faster relaxation for more strongly coupled
nanoelements with similar dimensions. The analysis of the stretching exponents
obtained from the measurements suggests 1-D chain-like magnetization dynamics.
This indicates that the nature of the interactions between nanoelements lowers
the dimensionality of the ASI from 2-D to 1-D. Finally, we present a way to
quantify the effective interaction energy of a square ASI system, and compare
it to the interaction energy calculated from a simple dipole model and also to
the magnetostatic energy computed with micromagnetic simulations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Filtering Tweets for Social Unrest | Since the events of the Arab Spring, there has been increased interest in
using social media to anticipate social unrest. While efforts have been made
toward automated unrest prediction, we focus on filtering the vast volume of
tweets to identify tweets relevant to unrest, which can be provided to
downstream users for further analysis. We train a supervised classifier that is
able to label Arabic language tweets as relevant to unrest with high
reliability. We examine the relationship between training data size and
performance and investigate ways to optimize the model building process while
minimizing cost. We also explore how confidence thresholds can be set to
achieve desired levels of performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
Structured Connectivity Augmentation | We initiate the algorithmic study of the following "structured augmentation"
question: is it possible to increase the connectivity of a given graph G by
superposing it with another given graph H? More precisely, graph F is the
superposition of G and H with respect to injective mapping \phi: V(H)->V(G) if
every edge uv of F is either an edge of G, or \phi^{-1}(u)\phi^{-1}(v) is an
edge of H. We consider the following optimization problem. Given graphs G,H,
and a weight function \omega assigning non-negative weights to pairs of
vertices of V(G), the task is to find \phi of minimum weight
\omega(\phi)=\sum_{xy\in E(H)}\omega(\phi(x)\phi(y)) such that the edge
connectivity of the superposition F of G and H with respect to \phi is higher
than the edge connectivity of G. Our main result is the following "dichotomy"
complexity classification. We say that a class of graphs C has bounded
vertex-cover number, if there is a constant t depending on C only such that the
vertex-cover number of every graph from C does not exceed t. We show that for
every class of graphs C with bounded vertex-cover number, the problems of
superposing into a connected graph F and to 2-edge connected graph F, are
solvable in polynomial time when H\in C. On the other hand, for any hereditary
class C with unbounded vertex-cover number, both problems are NP-hard when H\in
C. For the unweighted variants of structured augmentation problems, i.e. the
problems where the task is to identify whether there is a superposition of
graphs of required connectivity, we provide necessary and sufficient
combinatorial conditions on the existence of such superpositions. These
conditions imply polynomial time algorithms solving the unweighted variants of
the problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Transition probability of Brownian motion in the octant and its application to default modeling | We derive a semi-analytic formula for the transition probability of
three-dimensional Brownian motion in the positive octant with absorption at the
boundaries. Separation of variables in spherical coordinates leads to an
eigenvalue problem for the resulting boundary value problem in the two angular
components. The main theoretical result is a solution to the original problem
expressed as an expansion into special functions and an eigenvalue which has to
be chosen to allow a matching of the boundary condition. We discuss and test
several computational methods to solve a finite-dimensional approximation to
this nonlinear eigenvalue problem. Finally, we apply our results to the
computation of default probabilities and credit valuation adjustments in a
structural credit model with mutual liabilities.
| 0 | 0 | 0 | 0 | 0 | 1 |
Block-Sparse Recurrent Neural Networks | Recurrent Neural Networks (RNNs) are used in state-of-the-art models in
domains such as speech recognition, machine translation, and language
modelling. Sparsity is a technique to reduce compute and memory requirements of
deep learning models. Sparse RNNs are easier to deploy on devices and high-end
server processors. Even though sparse operations need less compute and memory
relative to their dense counterparts, the speed-up observed by using sparse
operations is less than expected on different hardware platforms. In order to
address this issue, we investigate two different approaches to induce block
sparsity in RNNs: pruning blocks of weights in a layer and using group lasso
regularization to create blocks of weights with zeros. Using these techniques,
we demonstrate that we can create block-sparse RNNs with sparsity ranging from
80% to 90% with small loss in accuracy. This allows us to reduce the model size
by roughly 10x. Additionally, we can prune a larger dense network to recover
this loss in accuracy while maintaining high block sparsity and reducing the
overall parameter count. Our technique works with a variety of block sizes up
to 32x32. Block-sparse RNNs eliminate overheads related to data storage and
irregular memory accesses while increasing hardware efficiency compared to
unstructured sparsity.
| 1 | 0 | 0 | 1 | 0 | 0 |
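The first of the two approaches above, pruning whole blocks of weights by magnitude, can be illustrated on a plain NumPy matrix. This is a generic sketch, not the paper's pruning schedule: the block size and sparsity target are arbitrary here.

```python
import numpy as np

def block_prune(W, block=4, sparsity=0.9):
    """Zero out entire `block` x `block` tiles of W, keeping the tiles
    with the largest L2 norms (magnitude-based block pruning)."""
    r, c = W.shape
    assert r % block == 0 and c % block == 0
    tiles = W.reshape(r // block, block, c // block, block)
    norms = np.sqrt((tiles ** 2).sum(axis=(1, 3)))   # one norm per tile
    k = int(norms.size * sparsity)                   # number of tiles to drop
    cut = np.sort(norms, axis=None)[k - 1] if k else -np.inf
    keep = (norms > cut)[:, None, :, None]           # mask of surviving tiles
    return (tiles * keep).reshape(r, c)

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))
Wp = block_prune(W)
```

Because whole tiles are zeroed, the surviving structure maps onto dense block operations in hardware, which is the efficiency argument the abstract makes.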
Equitable neighbour-sum-distinguishing edge and total colourings | With any (not necessarily proper) edge $k$-colouring
$\gamma:E(G)\longrightarrow\{1,\dots,k\}$ of a graph $G$, one can associate a
vertex colouring $\sigma\_{\gamma}$ given by $\sigma\_{\gamma}(v)=\sum\_{e\ni
v}\gamma(e)$. A neighbour-sum-distinguishing edge $k$-colouring is an edge
colouring whose associated vertex colouring is proper. The
neighbour-sum-distinguishing index of a graph $G$ is then the smallest $k$ for
which $G$ admits a neighbour-sum-distinguishing edge $k$-colouring. These
notions naturally extend to total colourings of graphs that assign colours to
both vertices and edges. We study in this paper equitable
neighbour-sum-distinguishing edge colourings and total colourings, that is,
colourings $\gamma$ for which the numbers of elements in any two colour classes
of $\gamma$ differ by at most one. We determine the equitable
neighbour-sum-distinguishing index of complete graphs, complete bipartite
graphs and forests, and the equitable neighbour-sum-distinguishing total
chromatic number of complete graphs and bipartite graphs.
| 1 | 0 | 1 | 0 | 0 | 0 |
An Oracle Property of The Nadaraya-Watson Kernel Estimator for High Dimensional Nonparametric Regression | The celebrated Nadaraya-Watson kernel estimator is among the most studied
methods for nonparametric regression. A classical result is that its rate of
convergence depends on the number of covariates and deteriorates quickly as the
dimension grows, which underscores the "curse of dimensionality" and has
limited its use in high dimensional settings. In this article, we show that
when the true regression function is single or multi-index, the effects of the
curse of dimensionality may be mitigated for the Nadaraya-Watson kernel
estimator. Specifically, we prove that with $K$-fold cross-validation, the
Nadaraya-Watson kernel estimator indexed by a positive semidefinite bandwidth
matrix has an oracle property that its rate of convergence depends on the
number of indices of the regression function rather than the number of
covariates. Intuitively, this oracle property is a consequence of allowing the
bandwidths to diverge to infinity as opposed to restricting them all to
converge to zero at certain rates as done in previous theoretical studies. Our
result provides a theoretical perspective for the use of kernel estimation in
high dimensional nonparametric regression and other applications such as metric
learning when a low rank structure is anticipated. Numerical illustrations are
given through simulations and real data examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
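The mechanism behind the oracle property sketched above — a bandwidth matrix that is tight along the index direction but effectively infinite along irrelevant directions — can be illustrated with a Gaussian-kernel Nadaraya-Watson estimator. The single-index data, the bandwidth values, and the function names are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def nadaraya_watson(X, y, x0, H):
    """NW estimate at x0 with a Gaussian kernel and PSD bandwidth matrix H."""
    d = X - x0                                   # (n, p) differences
    Hinv = np.linalg.inv(H)
    quad = np.einsum('ij,jk,ik->i', d, Hinv, d)  # Mahalanobis-type distances
    w = np.exp(-0.5 * quad)
    return (w @ y) / w.sum()

# Single-index truth: y depends on x only through its first coordinate.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(2000)

# Tight bandwidth along the index, huge bandwidths in irrelevant directions
# (mimicking the point that some bandwidths may diverge to infinity).
H = np.diag([0.01, 100.0, 100.0])
est = nadaraya_watson(X, y, np.array([0.3, 0.0, 0.0]), H)
```

The large bandwidths in the second and third coordinates make those directions effectively invisible to the kernel, so the estimate behaves like a one-dimensional smoother of sin(2x) near x = 0.3.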
Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe | We consider the problem of bandit optimization, inspired by stochastic
optimization and online learning problems with bandit feedback. In this
problem, the objective is to minimize a global loss function of all the
actions, not necessarily a cumulative loss. This framework allows us to study a
very general class of problems, with applications in statistics, machine
learning, and other fields. To solve this problem, we analyze the
Upper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and
convex optimization. We give theoretical guarantees for the performance of this
algorithm over various classes of functions, and discuss the optimality of
these results.
| 0 | 0 | 1 | 1 | 0 | 0 |
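As background for the Upper-Confidence variant analysed in the paper, the classical Frank-Wolfe step over the probability simplex (where the linear subproblem is solved by picking a single vertex) looks as follows. The objective and step-size schedule are standard textbook choices, not taken from the paper:

```python
import numpy as np

def frank_wolfe_simplex(grad, p0, iters=500):
    """Classical Frank-Wolfe on the probability simplex: the linear
    minimizer over the simplex is a coordinate vector, so each step
    mixes the current iterate with one vertex."""
    p = p0.copy()
    for t in range(iters):
        g = grad(p)
        v = np.zeros_like(p)
        v[np.argmin(g)] = 1.0            # best vertex for the linearised loss
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        p = (1 - gamma) * p + gamma * v  # stay inside the simplex
    return p

# Minimise ||p - q||^2 over the simplex, with q itself a distribution.
q = np.array([0.2, 0.3, 0.5])
p = frank_wolfe_simplex(lambda p: 2 * (p - q), np.array([1.0, 0.0, 0.0]))
```

With exact gradients this converges at the usual O(1/t) rate; the algorithm in the paper replaces the gradient with an upper-confidence estimate built from bandit feedback on the chosen actions.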
Highly sensitive atomic based MW interferometry | We theoretically study a scheme to develop an atomic based MW interferometry
using the Rydberg states in Rb. Unlike the traditional MW interferometry, this
scheme is not based upon the electrical circuits, hence the sensitivity of the
phase and the amplitude/strength of the MW field is not limited by the Nyquist
thermal noise. Further, this system has a great advantage due to its very high
bandwidth, ranging from radio frequency (RF) and microwave (MW) to the terahertz
regime. In addition, this is \textbf{orders of magnitude} more sensitive to
field strength as compared to the prior demonstrations on the MW electrometry
using the Rydberg atomic states. However, previously studied atomic systems are
only sensitive to the field strength, not to the phase, and hence this scheme
provides a great opportunity to characterize the MW completely including the
propagation direction and the wavefront. This study opens up a new dimension in
the Radar technology such as in synthetic aperture radar interferometry. The MW
interferometry is based upon a six-level loopy ladder system involving the
Rydberg states in which two sub-systems interfere constructively or
destructively depending upon the phase between the MW electric fields closing
the loop.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Wisdom of a Kalman Crowd | The Kalman Filter has been called one of the greatest inventions in
statistics during the 20th century. Its purpose is to measure the state of a
system by processing the noisy data received from different electronic sensors.
In comparison, a useful resource for managers in their effort to make the right
decisions is the wisdom of crowds. This phenomenon allows managers to combine
judgments by different employees to get estimates that are often more accurate
and reliable than estimates managers produce alone. Since harnessing the
collective intelligence of employees, and filtering signals from multiple noisy
sensors appear related, we looked at the possibility of using the Kalman Filter
on estimates by people. Our predictions suggest, and our findings based on the
Survey of Professional Forecasters reveal, that the Kalman Filter can help
managers solve their decision-making problems by giving them stronger signals
before they choose. Indeed, when used on a subset of forecasters identified by
the Contribution Weighted Model, the Kalman Filter beat that rule clearly,
across all the forecasting horizons in the survey.
| 0 | 0 | 0 | 0 | 0 | 1 |
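The analogy the abstract draws — forecasters as noisy sensors — can be made concrete with the scalar Kalman measurement update. The numbers below are invented; with equal measurement variances the update reduces to a running average of the judgments:

```python
def kalman_update(mean, var, measurement, meas_var):
    """One scalar Kalman update: blend the prior belief with a noisy measurement."""
    k = var / (var + meas_var)            # Kalman gain
    new_mean = mean + k * (measurement - mean)
    new_var = (1 - k) * var
    return new_mean, new_var

# Treat each forecaster as a noisy "sensor" of the same unknown quantity.
forecasts = [2.1, 1.8, 2.4, 2.0]
mean, var = forecasts[0], 1.0             # initialise from the first judge
for f in forecasts[1:]:
    mean, var = kalman_update(mean, var, f, meas_var=1.0)
```

Weighting forecasters by individually estimated noise variances is what would make the filter depart from the plain crowd average.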
Noisy independent component analysis of auto-correlated components | We present a new method for the separation of superimposed, independent,
auto-correlated components from noisy multi-channel measurement. The presented
method simultaneously reconstructs and separates the components, taking all
channels into account and thereby increases the effective signal-to-noise ratio
considerably, allowing separations even in the high noise regime.
Characteristics of the measurement instruments can be included, allowing for
application in complex measurement situations. Independent posterior samples
can be provided, permitting error estimates on all desired quantities. Using
the concept of information field theory, the algorithm is not restricted to any
dimensionality of the underlying space or discretization scheme thereof.
| 0 | 1 | 0 | 1 | 0 | 0 |
Ages and structural and dynamical parameters of two globular clusters in the M81 group | GC-1 and GC-2 are two globular clusters (GCs) in the remote halo of M81 and
M82 in the M81 group discovered by Jang et al. using the {\it Hubble Space
Telescope} ({\it HST}) images. These two GCs were observed as part of the
Beijing--Arizona--Taiwan--Connecticut (BATC) Multicolor Sky Survey, using 14
intermediate-band filters covering a wavelength range of 4000--10000 \AA. We
accurately determine these two clusters' ages and masses by comparing their
spectral energy distributions (from 2267 to 20000~{\AA}, comprising photometric
data in the near-ultraviolet of the {\it Galaxy Evolution Explorer}, 14 BATC
intermediate-band, and Two Micron All Sky Survey near-infrared $JHK_{\rm s}$
filters) with theoretical stellar population-synthesis models, resulting in
ages of $15.50\pm3.20$ Gyr for GC-1 and $15.10\pm2.70$ Gyr for GC-2. The masses of
GC-1 and GC-2 obtained here are $1.77-2.04\times 10^6$ and $5.20-7.11\times
10^6 \rm~M_\odot$, respectively. In addition, the deep observations with the
Advanced Camera for Surveys and Wide Field Camera 3 on the {\it HST} are used
to provide the surface brightness profiles of GC-1 and GC-2. The structural and
dynamical parameters are derived from fitting the profiles to three different
models; in particular, the internal velocity dispersions of GC-1 and GC-2 are
derived, which can be compared with ones obtained based on spectral
observations in the future. For the first time, in this paper, the $r_h$ versus
$M_V$ diagram shows that GC-2 is an ultra-compact dwarf in the M81 group.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bayesian Renewables Scenario Generation via Deep Generative Networks | We present a method to generate renewable scenarios using Bayesian
probabilities by implementing the Bayesian generative adversarial
network~(Bayesian GAN), which is a variant of generative adversarial networks
based on two interconnected deep neural networks. By using a Bayesian
formulation, generators can be constructed and trained to produce scenarios
that capture different salient modes in the data, allowing for better diversity
and more accurate representation of the underlying physical process. Compared
to conventional statistical models that are often hard to scale or sample from,
this method is model-free and can generate samples extremely efficiently. For
validation, we use wind and solar time-series data from NREL integration data
sets to train the Bayesian GAN. We demonstrate that the proposed method is able to
generate clusters of wind scenarios with different variance and mean value, and
is able to distinguish and generate wind and solar scenarios simultaneously
even if the historical data are intentionally mixed.
| 0 | 0 | 0 | 1 | 0 | 0 |
Graphons: A Nonparametric Method to Model, Estimate, and Design Algorithms for Massive Networks | Many social and economic systems are naturally represented as networks, from
off-line and on-line social networks, to bipartite networks, like Netflix and
Amazon, between consumers and products. Graphons, developed as limits of
graphs, form a natural, nonparametric method to describe and estimate large
networks like Facebook and LinkedIn. Here we describe the development of the
theory of graphons, for both dense and sparse networks, over the last decade.
We also review theorems showing that we can consistently estimate graphons from
massive networks in a wide variety of models. Finally, we show how to use
graphons to estimate missing links in a sparse network, which has applications
from estimating social and information networks in development economics, to
rigorously and efficiently doing collaborative filtering with applications to
movie recommendations in Netflix and product suggestions in Amazon.
| 1 | 1 | 0 | 0 | 0 | 0 |
Hopf Parametric Adjoint Objects through a 2-adjunction of the type Adj-Mnd | In this article Hopf parametric adjunctions are defined and analysed within
the context of the 2-adjunction of the type $\mathbf{Adj}$-$\mathbf{Mnd}$. In
order to do so, the definition of adjoint objects in the 2-category of
adjunctions and in the 2-category of monads for $Cat$ are revised and
characterized. The article concludes with the application of the obtained
results to the current categorical characterization of Hopf monads.
| 0 | 0 | 1 | 0 | 0 | 0 |
Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning | Solving symmetric positive definite linear problems is a fundamental
computational task in machine learning. The exact solution, famously, is
cubically expensive in the size of the matrix. To alleviate this problem, several
linear-time approximations, such as spectral and inducing-point methods, have
been suggested and are now in wide use. These are low-rank approximations that
choose the low-rank space a priori and do not refine it over time. While this
allows linear cost in the data-set size, it also causes a finite, uncorrected
approximation error. Authors from numerical linear algebra have explored ways
to iteratively refine such low-rank approximations, at a cost of a small number
of matrix-vector multiplications. This idea is particularly interesting in the
many situations in machine learning where one has to solve a sequence of
related symmetric positive definite linear problems. From the machine learning
perspective, such deflation methods can be interpreted as transfer learning of
a low-rank approximation across a time-series of numerical tasks. We study the
use of such methods for our field. Our empirical results show that, on
regression and classification problems of intermediate size, this approach can
interpolate between low computational cost and numerical precision.
| 1 | 0 | 0 | 1 | 0 | 0 |
Towards a Physical Oracle for the Partition Problem using Analogue Computing | Despite remarkable achievements in its practical tractability, the notorious
class of NP-complete problems has escaped all attempts to find a
worst-case polynomial-time solution algorithm for any of them. The vast
majority of work relies on Turing machines or equivalent models, all of which
relate to digital computing. This raises the question of whether a computer
that is (partly) non-digital could offer a new door towards an efficient
solution. And indeed, the partition problem, which is another NP-complete
sibling of the famous Boolean satisfiability problem SAT, might be open to
efficient solutions using analogue computing. We investigate this hypothesis
here, providing experimental evidence that Partition, and in turn also SAT, may
become tractable on a combined digital and analogue computing machine. This
work is mostly theoretical and based on simulations, and as such does not
exhibit a polynomial time algorithm to solve NP-complete problems. Instead, it
is intended as a pointer to new directions of research on special-purpose
computing architectures that may help handling the class NP efficiently.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Methods in Cosmology | These notes aim at presenting an overview of Bayesian statistics, the
underlying concepts and application methodology that will be useful to
astronomers seeking to analyse and interpret a wide variety of data about the
Universe. The level starts from elementary notions, without assuming any
previous knowledge of statistical methods, and then progresses to more
advanced, research-level topics. After an introduction to the importance of
statistical inference for the physical sciences, elementary notions of
probability theory and inference are introduced and explained. Bayesian methods
are then presented, starting from the meaning of Bayes' theorem and its use as
an inferential engine, including a discussion on priors and posterior
distributions. Numerical methods for generating samples from arbitrary
posteriors (including Markov Chain Monte Carlo and Nested Sampling) are then
covered. The last section deals with the topic of Bayesian model selection and
how it is used to assess the performance of models, and contrasts it with the
classical p-value approach. A series of exercises of various levels of
difficulty are designed to further the understanding of the theoretical
material, including fully worked out solutions for most of them.
| 0 | 1 | 0 | 1 | 0 | 0 |
Information Extraction in Illicit Domains | Extracting useful entities and attribute values from illicit domains such as
human trafficking is a challenging problem with the potential for widespread
social impact. Such domains employ atypical language models, have `long tails'
and suffer from the problem of concept drift. In this paper, we propose a
lightweight, feature-agnostic Information Extraction (IE) paradigm specifically
designed for such domains. Our approach uses raw, unlabeled text from an
initial corpus, and a few (12-120) seed annotations per domain-specific
attribute, to learn robust IE models for unobserved pages and websites.
Empirically, we demonstrate that our approach can outperform feature-centric
Conditional Random Field baselines by over 18\% F-Measure on five annotated
sets of real-world human trafficking datasets in both low-supervision and
high-supervision settings. We also show that our approach is demonstrably
robust to concept drift, and can be efficiently bootstrapped even in a serial
computing environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Tutorial on Kernel Density Estimation and Recent Advances | This tutorial provides a gentle introduction to kernel density estimation
(KDE) and recent advances regarding confidence bands and geometric/topological
features. We begin with a discussion of basic properties of KDE: the
convergence rate under various metrics, density derivative estimation, and
bandwidth selection. Then, we introduce common approaches to the construction
of confidence intervals/bands, and we discuss how to handle bias. Next, we talk
about recent advances in the inference of geometric and topological features of
a density function using KDE. Finally, we illustrate how one can use KDE to
estimate a cumulative distribution function and a receiver operating
characteristic curve. We provide R implementations related to this tutorial at
the end.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimizing expected word error rate via sampling for speech recognition | State-level minimum Bayes risk (sMBR) training has become the de facto
standard for sequence-level training of speech recognition acoustic models. It
has an elegant formulation using the expectation semiring, and gives large
improvements in word error rate (WER) over models trained solely using
cross-entropy (CE) or connectionist temporal classification (CTC). sMBR
training optimizes the expected number of frames at which the reference and
hypothesized acoustic states differ. It may be preferable to optimize the
expected WER, but WER does not interact well with the expectation semiring, and
previous approaches based on computing expected WER exactly involve expanding
the lattices used during training. In this paper we show how to perform
optimization of the expected WER by sampling paths from the lattices used
during conventional sMBR training. The gradient of the expected WER is itself
an expectation, and so may be approximated using Monte Carlo sampling. We show
experimentally that optimizing WER during acoustic model training gives a 5%
relative improvement in WER over a well-tuned sMBR baseline on a 2-channel
query recognition task (Google Home).
| 1 | 0 | 0 | 1 | 0 | 0 |
Real-Time Illegal Parking Detection System Based on Deep Learning | Illegal parking has become an increasingly serious problem. Current
methods of detecting illegally parked vehicles are based on background
segmentation. However, such methods are weakly robust and sensitive to the
environment. Benefiting from deep learning, this paper proposes a novel
illegal vehicle parking detection system. Illegally parked vehicles captured by a camera
are first located and classified by the well-known Single Shot MultiBox Detector
(SSD) algorithm. To improve the performance, we propose to optimize SSD by
adjusting the aspect ratios of the default boxes to better fit our dataset.
After that, tracking and movement analysis are used to identify
illegally parked vehicles in the region of interest (ROI). Experiments show that the
system can achieve a 99% accuracy and real-time (25FPS) detection with strong
robustness in complex environments.
| 1 | 0 | 0 | 1 | 0 | 0 |
On a representation of fractional Brownian motion and the limit distributions of statistics arising in cusp statistical models | We discuss some extensions of results from the recent paper by Chernoyarov et
al. (Ann. Inst. Stat. Math., October 2016) concerning limit distributions of
Bayesian and maximum likelihood estimators in the model "signal plus white
noise" with irregular cusp-type signals. Using a new representation of
fractional Brownian motion (fBm) in terms of cusp functions we show that as the
noise intensity tends to zero, the limit distributions are expressed in terms
of fBm for the full range of asymmetric cusp-type signals, with the
corresponding Hurst parameter H, 0 < H < 1. Simulation results for the densities and
variances of the limit distributions of Bayesian and maximum likelihood
estimators are also provided.
| 0 | 0 | 1 | 1 | 0 | 0 |
Stochastic Canonical Correlation Analysis | We tightly analyze the sample complexity of CCA, provide a learning algorithm
that achieves optimal statistical performance in time linear in the required
number of samples (up to log factors), as well as a streaming algorithm with
similar guarantees.
| 1 | 0 | 0 | 1 | 0 | 0 |
Segmentation of Instances by Hashing | We propose a novel approach to address the Simultaneous Detection and
Segmentation problem. Using hierarchical structures, we employ an efficient and
accurate procedure that exploits hierarchical feature information via
Locality Sensitive Hashing. We build on recent work that utilizes convolutional
neural networks to detect bounding boxes in an image, and then select the most
similar hierarchical region that best fits each bounding box after hashing; we
call this approach CZ Segmentation. We then refine our final segmentation
results by automatic hierarchy pruning. CZ Segmentation introduces a train-free
alternative to Hypercolumns. We conduct extensive experiments on PASCAL VOC
2012 segmentation dataset, showing that CZ gives competitive state-of-the-art
object segmentations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Grafting for Combinatorial Boolean Model using Frequent Itemset Mining | This paper introduces the combinatorial Boolean model (CBM), which is defined
as the class of linear combinations of conjunctions of Boolean attributes. This
paper addresses the issue of learning CBM from labeled data. CBM is of high
knowledge interpretability but naïve learning of it requires exponentially
large computation time with respect to data dimension and sample size. To
overcome this computational difficulty, we propose an algorithm GRAB (GRAfting
for Boolean datasets), which efficiently learns CBM within the
$L_1$-regularized loss minimization framework. The key idea of GRAB is to
reduce the loss minimization problem to weighted frequent itemset mining,
in which frequent patterns are efficiently computable. We employ benchmark
datasets to empirically demonstrate that GRAB is effective in terms of
computational efficiency, prediction accuracy and knowledge discovery.
| 1 | 0 | 0 | 1 | 0 | 0 |
Rapid Assessment of Damaged Homes in the Florida Keys after Hurricane Irma | On September 10, 2017, Hurricane Irma made landfall in the Florida Keys and
caused significant damage. Informed by hydrodynamic storm surge and wave
modeling and post-storm satellite imagery, a rapid damage survey was soon
conducted for 1600+ residential buildings in Big Pine Key and Marathon. Damage
categorizations and statistical analysis reveal distinct factors governing
damage at these two locations. The distance from the coast is significant for
the damage in Big Pine Key, as severely damaged buildings were located near
narrow waterways connected to the ocean. Building type and size are critical in
Marathon, highlighted by the near-complete destruction of trailer communities
there. These observations raise issues of affordability and equity that need
consideration in damage recovery and rebuilding for resilience.
| 0 | 0 | 0 | 1 | 0 | 0 |
Status maximization as a source of fairness in a networked dictator game | Human behavioural patterns exhibit selfish or competitive, as well as
selfless or altruistic tendencies, both of which have demonstrable effects on
human social and economic activity. In behavioural economics, such effects have
traditionally been illustrated experimentally via simple games like the
dictator and ultimatum games. Experiments with these games suggest that, beyond
rational economic thinking, human decision-making processes are influenced by
social preferences, such as an inclination to fairness. In this study we
suggest that the apparent gap between competitive and altruistic human
tendencies can be bridged by assuming that people are primarily maximising
their status, i.e., a utility function different from simple profit
maximisation. To this end we analyse a simple agent-based model, where
individuals play the repeated dictator game in a social network they can
modify. As model parameters we consider the living costs and the rate at which
agents forget infractions by others. We find that individual strategies used in
the game vary greatly, from selfish to selfless, and that both of the above
parameters determine when individuals form complex and cohesive social
networks.
| 1 | 0 | 0 | 0 | 0 | 1 |
On Dziobek Special Central Configurations | We study the special central configurations of the curved N-body problem in
S^3. We show that there are special central configurations formed by N masses
for any N > 2. We then extend the concept of special central configurations to
S^n, n > 0, and study one interesting class of special central configurations in
S^n, the Dziobek special central configurations. We obtain a criterion for them
and reduce it to two sets of equations. Then we apply these equations to
special central configurations of 3 bodies on S^1, 4 bodies on S^2, and 5
bodies in S^3.
| 0 | 0 | 1 | 0 | 0 | 0 |
Laser Interferometer Space Antenna | Following the selection of The Gravitational Universe by ESA, and the
successful flight of LISA Pathfinder, the LISA Consortium now proposes a 4-year
mission in response to ESA's call for missions for L3. The observatory will be
based on three arms with six active laser links, between three identical
spacecraft in a triangular formation separated by 2.5 million km.
LISA is an all-sky monitor and will offer a wide view of a dynamic cosmos
using Gravitational Waves as new and unique messengers to unveil The
Gravitational Universe. It provides the closest ever view of the infant
Universe at TeV energy scales, has known sources in the form of verification
binaries in the Milky Way, and can probe the entire Universe, from its smallest
scales near the horizons of black holes, all the way to cosmological scales.
The LISA mission will scan the entire sky as it follows behind the Earth in its
orbit, obtaining both polarisations of the Gravitational Waves simultaneously,
and will measure source parameters with astrophysically relevant sensitivity in
a band from below $10^{-4}\,$Hz to above $10^{-1}\,$Hz.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning from a lot: Empirical Bayes in high-dimensional prediction settings | Empirical Bayes is a versatile approach to `learn from a lot' in two ways:
first, from a large number of variables and second, from a potentially large
amount of prior information, e.g. stored in public repositories. We review
applications of a variety of empirical Bayes methods to several well-known
model-based prediction methods including penalized regression, linear
discriminant analysis, and Bayesian models with sparse or dense priors. We
discuss `formal' empirical Bayes methods which maximize the marginal
likelihood, but also more informal approaches based on other data summaries. We
contrast empirical Bayes to cross-validation and full Bayes, and discuss hybrid
approaches. To study the relation between the quality of an empirical Bayes
estimator and $p$, the number of variables, we consider a simple empirical
Bayes estimator in a linear model setting.
We argue that empirical Bayes is particularly useful when the prior contains
multiple parameters which model a priori information on variables, termed
`co-data'. In particular, we present two novel examples that allow for co-data.
First, a Bayesian spike-and-slab setting that facilitates inclusion of multiple
co-data sources and types; second, a hybrid empirical Bayes-full Bayes ridge
regression approach for estimation of the posterior predictive interval.
| 0 | 0 | 0 | 1 | 0 | 0 |
Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs | Techniques for reducing the variance of gradient estimates used in stochastic
programming algorithms for convex finite-sum problems have received a great
deal of attention in recent years. By leveraging dissipativity theory from
control, we provide a new perspective on two important variance-reduction
algorithms: SVRG and its direct accelerated variant Katyusha. Our perspective
provides a physically intuitive understanding of the behavior of SVRG-like
methods via a principle of energy conservation. The tools discussed here allow
us to automate the convergence analysis of SVRG-like methods by capturing their
essential properties in small semidefinite programs amenable to standard
analysis and computational techniques. Our approach recovers existing
convergence results for SVRG and Katyusha and generalizes the theory to
alternative parameter choices. We also discuss how our approach complements the
linear coupling technique. Our combination of perspectives leads to a better
understanding of accelerated variance-reduced stochastic methods for finite-sum
problems.
| 0 | 0 | 0 | 1 | 0 | 0 |