title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Predictive and Prescriptive Analytics for Location Selection of Add-on Retail Products | In this paper, we study an analytical approach to selecting expansion
locations for retailers selling add-on products whose demand is derived from
the demand of another base product. Demand for the add-on product is realized
only as a supplement to the demand of the base product. In our context, either
of the two products could be subject to spatial autocorrelation where demand at
a given location is impacted by demand at other locations. Using data from an
industrial partner selling add-on products, we build predictive models for
understanding the derived demand of the add-on product and establish an
optimization framework for automating expansion decisions to maximize expected
sales. Interestingly, spatial autocorrelation and the complexity of the
predictive model impact the complexity and the structure of the prescriptive
optimization model. Our results indicate that the models formulated are highly
effective in predicting add-on product sales, and that using the optimization
framework built on the predictive model can result in substantial increases in
expected sales over baseline policies.
| 0 | 0 | 0 | 1 | 0 | 0 |
Algebraic characterization of regular fractions under level permutations | In this paper we study the behavior of the fractions of a factorial design
under permutations of the factor levels. We focus on the notion of regular
fraction and we introduce methods to check whether a given symmetric orthogonal
array can or cannot be transformed into a regular fraction by means of
suitable permutations of the factor levels. The proposed techniques take
advantage of the complex coding of the factor levels and of some tools from
polynomial algebra. Several examples are described, mainly involving factors
with five levels.
| 0 | 0 | 1 | 1 | 0 | 0 |
Biomedical Event Trigger Identification Using Bidirectional Recurrent Neural Network Based Models | Biomedical events describe complex interactions between various biomedical
entities. An event trigger is a word or phrase that typically signifies the
occurrence of an event. Event trigger identification is an important first step
in all event extraction methods. However, many of the current approaches either
rely on complex hand-crafted features or consider features only within a
window. In this paper we propose a method that takes advantage of a recurrent
neural network (RNN) to extract higher-level features present across the
sentence. Thus, the hidden state representation of the RNN, along with word and
entity type embeddings as features, avoids reliance on complex hand-crafted
features generated using various NLP toolkits. Our experiments achieve a
state-of-the-art F1-score on the Multi Level Event Extraction (MLEE) corpus. We
also perform a category-wise analysis of the results and discuss the
importance of various features in the trigger identification task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modern-day Universities and Regional Development | Nowadays it is quite evident that a knowledge-based society necessarily
involves the revaluation of human and intangible assets, as the advancement of
local economies significantly depends on the qualitative and quantitative
characteristics of human capital [Lundvall, 2004]. As universities can readily
be linked to the creation of a highly qualified labour force, their role
increases in parallel with the aforementioned developments. Universities are
the general institutions of education; however, with the need to adapt to
present local needs, their activities have broadened in the past decades
[Wright et al., 2008; Etzkowitz, 2002]. Most universities experienced a
transition period in which, next to their classic activities of education and
research, so-called third mission activities also started to count, thus
serving many purposes of the economy and society.
| 1 | 0 | 0 | 0 | 0 | 0 |
Method of Reduction of Variables for Bilinear Matrix Inequality Problems in System and Control Designs | Bilinear matrix inequality (BMI) problems in system and control designs are
investigated in this paper. A solution method of reduction of variables (MRVs)
is proposed. This method consists of a principle of variable classification, a
procedure for problem transformation, and a hybrid algorithm that combines
deterministic and stochastic search engines. The classification principle is
used to classify the decision variables of a BMI problem into two categories:
1) external and 2) internal variables. Theoretical analysis is performed to
show that when the classification principle is applicable, a BMI problem can be
transformed into an unconstrained optimization problem that has fewer decision
variables. Stochastic search and deterministic search are then applied to
determine the decision variables of the unconstrained problem externally and
explore the internal problem structure, respectively. The proposed method can
address feasibility, single-objective, and multiobjective problems constrained
by BMIs in a unified manner. A number of numerical examples in system and
control designs are provided to validate the proposed methodology. Simulations
show that the MRVs can outperform existing BMI solution methods in most
benchmark problems and achieve similar levels of performance in the remaining
problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonlinear transport associated with spin-density-wave dynamics in Ca$_3$Co$_{4}$O$_9$ | We have carried out transient nonlinear transport measurements on the
layered cobalt oxide Ca$_3$Co$_{4}$O$_9$, in which a spin density wave (SDW)
transition is proposed at $T_{\rm SDW} \simeq 30$ K. We find that, below
$T_{\rm SDW}$, the electrical conductivity systematically varies with both the
applied current and the time, indicating a close relationship between the
observed nonlinear conduction and the SDW order in this material. The time
dependence of the conductivity is well analyzed by considering the dynamics of
the SDW, which involves a low-field deformation and a sliding motion above a
threshold field. We also measure the transport properties of the isovalent
Sr-substituted systems to examine an impurity effect on the nonlinear response,
and discuss the obtained threshold fields in terms of thermal fluctuations of
the SDW order parameter.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bayesian Compression for Deep Learning | Compression and computational efficiency in deep learning have become a
problem of great significance. In this work, we argue that the most principled
and effective way to attack this problem is by adopting a Bayesian point of
view, where through sparsity inducing priors we prune large parts of the
network. We introduce two novelties in this paper: 1) we use hierarchical
priors to prune nodes instead of individual weights, and 2) we use the
posterior uncertainties to determine the optimal fixed point precision to
encode the weights. Both factors significantly contribute to achieving the
state of the art in terms of compression rates, while still staying competitive
with methods designed to optimize for speed or energy efficiency.
| 1 | 0 | 0 | 1 | 0 | 0 |
The Observability Concept in a Class of Hybrid Control systems | In the discrete modeling approach for hybrid control systems, the continuous
plant is reduced to a discrete event approximation, called the DES-plant, that
is governed by a discrete event system, representing the controller. The
observability of the DES-plant model is crucial for the synthesis of the
controller and for the proper closed loop evolution of the hybrid control
system. Based on a version of the framework for hybrid control systems proposed
by Antsaklis, the paper analyses the relation between the properties of the
cellular space of the continuous plant and a mechanism of plant-symbols
generation, on one side, and the observability of the DES-plant automaton on
the other side. Finally, an observable discrete event abstraction of the
continuous double integrator is presented.
| 1 | 0 | 1 | 0 | 0 | 0 |
Towards a More Reliable Privacy-preserving Recommender System | This paper proposes a privacy-preserving distributed recommendation
framework, Secure Distributed Collaborative Filtering (SDCF), to preserve the
privacy of value, model and existence altogether. That is, not only the
ratings from the users to the items, but also the existence of the ratings as
well as the learned recommendation model are kept private in our framework. Our
solution relies on a distributed client-server architecture and a two-stage
Randomized Response algorithm, along with an implementation on the popular
recommendation model, Matrix Factorization (MF). We further prove SDCF to meet
the guarantee of Differential Privacy so that clients are allowed to specify
arbitrary privacy levels. Experiments conducted on numerical rating prediction
and one-class rating action prediction exhibit that SDCF does not sacrifice too
much accuracy for privacy.
| 1 | 0 | 0 | 0 | 0 | 0 |
A study of posture judgement on vehicles using wearable acceleration sensor | We study methods to estimate drivers' posture in vehicles using acceleration
data from a wearable sensor and conduct field tests. To prevent fatal
accidents, demand for safety management of buses and taxis is high. However,
the acceleration of the vehicle is superimposed on the wearable sensor readings.
Therefore, we study methods to estimate driving posture using acceleration data
acquired from the shirt-type wearable sensor hitoe and conduct field tests.
| 1 | 0 | 0 | 0 | 0 | 0 |
Smoothed nonparametric two-sample tests | We propose new smoothed versions of the median test and Wilcoxon's rank sum test. As is
pointed out by Maesono et al. (2016), some nonparametric discrete tests have a
problem with their significance probability. Because of this problem, the
selection of the median test and Wilcoxon's test can be biased too; however, we
show that the new smoothed tests are free from this problem. Significance probabilities
and local asymptotic powers of the new tests are studied, and we show that they
inherit good properties of the discrete tests.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Complexity of Graph-Based Reductions for Reachability in Markov Decision Processes | We study the never-worse relation (NWR) for Markov decision processes with an
infinite-horizon reachability objective. A state q is never worse than a state
p if the maximal probability of reaching the target set of states from p is at
most the same value from q, regardless of the probabilities labelling the
transitions. Extremal-probability states, end components, and essential states
are all special cases of the equivalence relation induced by the NWR. Using the
NWR, states in the same equivalence class can be collapsed. Then, actions
leading to sub-optimal states can be removed. We show that the natural decision
problem associated with computing the NWR is coNP-complete. Finally, we extend
a previously known incomplete polynomial-time iterative algorithm to
under-approximate the NWR.
| 1 | 0 | 0 | 0 | 0 | 0 |
A stack-vector routing protocol for automatic tunneling | In a network, a tunnel is a part of a path where a protocol is encapsulated
in another one. A tunnel starts with an encapsulation and ends with the
corresponding decapsulation. Several tunnels can be nested at some stage,
forming a protocol stack. Tunneling is very important nowadays and it is
involved in several tasks: IPv4/IPv6 transition, VPNs, security (IPsec, onion
routing), etc. However, tunnel establishment is mainly performed manually or by
script, which presents obvious scalability issues. Some works attempt to
automate a part of the process (e.g., TSP, ISATAP, etc.). However, the
determination of the tunnel(s) endpoints is not fully automated, especially in
the case of an arbitrary number of nested tunnels. The lack of routing
protocols performing automatic tunneling is due to the unavailability of path
computation algorithms taking into account encapsulations and decapsulations.
There is a polynomial centralized algorithm to perform the task. However, to
the best of our knowledge, no fully distributed path computation algorithm is
known. Here, we propose the first fully distributed algorithm for path
computation with automatic tunneling, i.e., taking into account encapsulation,
decapsulation and conversion of protocols. Our algorithm is a generalization of
the distributed Bellman-Ford algorithm, where the distance vector is replaced
by a protocol stack vector. This makes it possible to determine how to route a
packet with a given protocol stack. We prove that the message size of our
algorithm is polynomial,
even if the shortest path can be of exponential length. We also prove that the
algorithm converges after a polynomial number of steps in a synchronized
setting. We adapt our algorithm into a proto-protocol for routing with
automatic tunneling and we show its efficiency through simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Using of heterogeneous corpora for training of an ASR system | The paper summarizes the development of the LVCSR system built as a part of
the Pashto speech-translation system at the SCALE (Summer Camp for Applied
Language Exploration) 2015 workshop on "Speech-to-text-translation for
low-resource languages". The Pashto language was chosen as a good "proxy"
low-resource language, exhibiting multiple phenomena which make the
speech-recognition and speech-to-text-translation systems development hard.
Even when the amount of data is seemingly sufficient, given the fact that the
data originates from multiple sources, the preliminary experiments reveal that
there is little to no benefit in merging (concatenating) the corpora and more
elaborate ways of making use of all of the data must be worked out.
This paper concentrates only on the LVCSR part and presents a range of
different techniques that were found to be useful in order to benefit from
multiple different corpora.
| 1 | 0 | 0 | 0 | 0 | 0 |
Inferring Narrative Causality between Event Pairs in Films | To understand narrative, humans draw inferences about the underlying
relations between narrative events. Cognitive theories of narrative
understanding define these inferences as four different types of causality,
ranging from pairs of events A, B where A physically causes B (X drop, X
break) to pairs of events where A causes emotional state B (Y saw X, Y felt
fear). Previous work on learning narrative relations from text has either
focused on "strict" physical causality, or has been vague about what relation
is being learned. This paper learns pairs of causal events from a corpus of
film scene descriptions which are action rich and tend to be told in
chronological order. We show that event pairs induced using our methods are of
high quality and are judged to have a stronger causal relation than event pairs
from Rel-grams.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Hom-Gerstenhaber algebras and Hom-Lie algebroids | We define the notion of hom-Batalin-Vilkovisky algebras and strong
differential hom-Gerstenhaber algebras as a special class of hom-Gerstenhaber
algebras and provide canonical examples associated to some well-known
hom-structures. Representations of a hom-Lie algebroid on a hom-bundle are
defined and a cohomology of a regular hom-Lie algebroid with coefficients in a
representation is studied. We discuss the relationship between these classes
of hom-Gerstenhaber algebras and geometric structures on a vector bundle. As an
application, we associate a homology to a regular hom-Lie algebroid and then
define a hom-Poisson homology associated to a hom-Poisson manifold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Global existence in the 1D quasilinear parabolic-elliptic chemotaxis system with critical nonlinearity | The paper should be viewed as a complement to an earlier result in [8]. In the
paper just mentioned it is shown that the 1d case of a quasilinear
parabolic-elliptic Keller-Segel system is very special. Namely, unlike in
higher dimensions, there is no critical nonlinearity. Indeed, for nonlinear
diffusion of the form 1/u all the solutions, independently of the magnitude of
the initial mass, stay bounded. However, the argument presented in [8] deals
with the Jager-Luckhaus type system and is very sensitive to this restriction.
Namely, the change of variables introduced in [8], being a main step of the
method, works only for the Jager-Luckhaus modification. It does not seem to be
applicable in the usual version of the parabolic-elliptic Keller-Segel system.
The present paper fills this gap and deals with the case of the usual
parabolic-elliptic version. To handle it we establish a new Lyapunov-like
functional (it is related to what was done in [8]), which leads to global
existence of the initial-boundary value problem for any initial mass.
| 0 | 0 | 1 | 0 | 0 | 0 |
Supercongruences between truncated ${}_3F_2$ hypergeometric series | We establish four supercongruences between truncated ${}_3F_2$ hypergeometric
series involving $p$-adic Gamma functions, which extend some of the
Rodriguez-Villegas supercongruences.
| 0 | 0 | 1 | 0 | 0 | 0 |
Indoor Localization Using Visible Light Via Fusion Of Multiple Classifiers | A multiple classifiers fusion localization technique using received signal
strengths (RSSs) of visible light is proposed, in which the system transmits
different intensity-modulated sinusoidal signals via LEDs and the signals are
received by a Photo Diode (PD) placed at various grid points. First, we
obtain some {\emph{approximate}} RSS fingerprints
by capturing the peaks of power spectral density (PSD) of the received signals
at each given grid point. Unlike the existing RSSs based algorithms, several
representative machine learning approaches are adopted to train multiple
classifiers based on these RSSs fingerprints. The multiple classifiers
localization estimators outperform the classical RSS-based LED localization
approaches in accuracy and robustness. To further improve the localization
performance, two robust fusion localization algorithms, namely, grid
independent least square (GI-LS) and grid dependent least square (GD-LS), are
proposed to combine the outputs of these classifiers. We also use a singular
value decomposition (SVD) based LS (LS-SVD) method to mitigate the numerical
stability problem when the prediction matrix is singular. Experiments conducted
on intensity modulated direct detection (IM/DD) systems have demonstrated the
effectiveness of the proposed algorithms. The experimental results show that
the probability of having mean square positioning error (MSPE) of less than 5cm
achieved by GD-LS is improved by 93.03\% and 93.15\%, respectively, as compared
to those by the RSS ratio (RSSR) and RSS matching methods with the FFT length
of 2000.
| 1 | 0 | 0 | 1 | 0 | 0 |
Node Centralities and Classification Performance for Characterizing Node Embedding Algorithms | Embedding graph nodes into a vector space can allow the use of machine
learning to e.g. predict node classes, but the study of node embedding
algorithms is immature compared to the natural language processing field
because of the diverse nature of graphs. We examine the performance of node
embedding algorithms with respect to graph centrality measures that
characterize diverse graphs, through systematic experiments with four node
embedding algorithms, four or five graph centralities, and six datasets.
Experimental results give insights into the properties of node embedding
algorithms, which can be a basis for further research on this topic.
| 1 | 0 | 0 | 1 | 0 | 0 |
Data Fusion Reconstruction of Spatially Embedded Complex Networks | We introduce a kernel Lasso (kLasso) optimization that simultaneously
accounts for spatial regularity and network sparsity to reconstruct spatial
complex networks from data. Through a kernel function, the proposed approach
exploits spatial embedding distances to penalize overabundance of spatially
long-distance connections. Examples of both synthetic and real-world spatial
networks show that the proposed method improves significantly upon existing
network reconstruction techniques that mainly concern sparsity but not spatial
regularity. Our results highlight the promise of data fusion in the
reconstruction of complex networks, by utilizing both microscopic node-level
dynamics (e.g., time series data) and macroscopic network-level information
(metadata).
| 0 | 1 | 0 | 1 | 0 | 0 |
Reconstruction from Periodic Nonlinearities, With Applications to HDR Imaging | We consider the problem of reconstructing signals and images from periodic
nonlinearities. For such problems, we design a measurement scheme that supports
efficient reconstruction; moreover, our method can be adapted to extend to
compressive sensing-based signal and image acquisition systems. Our techniques
can be potentially useful for reducing the measurement complexity of high
dynamic range (HDR) imaging systems, with little loss in reconstruction
quality. Several numerical experiments on real data demonstrate the
effectiveness of our approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Multirole Logic (Extended Abstract) | We identify multirole logic as a new form of logic in which
conjunction/disjunction is interpreted as an ultrafilter on the power set of
some underlying set (of roles) and the notion of negation is generalized to
endomorphisms on this underlying set. We formalize both multirole logic (MRL)
and linear multirole logic (LMRL) as natural generalizations of classical logic
(CL) and classical linear logic (CLL), respectively, and also present a
filter-based interpretation for intuitionism in multirole logic. Among various
meta-properties established for MRL and LMRL, we obtain one named multiparty
cut-elimination stating that every cut involving one or more sequents (as a
generalization of a (binary) cut involving exactly two sequents) can be
eliminated, thus extending the celebrated result of cut-elimination by Gentzen.
| 1 | 0 | 1 | 0 | 0 | 0 |
Interpreting Classifiers through Attribute Interactions in Datasets | In this work we present the novel ASTRID method for investigating which
attribute interactions classifiers exploit when making predictions. Attribute
interactions in classification tasks mean that two or more attributes together
provide stronger evidence for a particular class label. Knowledge of such
interactions makes models more interpretable by revealing associations between
attributes. This has applications, e.g., in pharmacovigilance to identify
interactions between drugs or in bioinformatics to investigate associations
between single nucleotide polymorphisms. We also show how the found attribute
partitioning is related to a factorisation of the data generating distribution
and empirically demonstrate the utility of the proposed method.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Modified Levy Jump-Diffusion Model Based on Market Sentiment Memory for Online Jump Prediction | In this paper, we propose a modified Levy jump diffusion model with market
sentiment memory for stock prices, where the market sentiment comes from data
mining implementation using Tweets on Twitter. We take the market sentiment
process, which has memory, as the signal of Levy jumps in the stock price. An
online learning and optimization algorithm with the Unscented Kalman filter
(UKF) is then proposed to learn the memory and to predict possible price jumps.
Experiments show that the algorithm provides a relatively good performance in
identifying asset return trends.
| 1 | 0 | 0 | 0 | 0 | 0 |
Testing approximate predictions of displacements of cosmological dark matter halos | We present a test to quantify how well some approximate methods, designed to
reproduce the mildly non-linear evolution of perturbations, are able to
reproduce the clustering of DM halos once the grouping of particles into halos
is defined and kept fixed. The following methods have been considered:
Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT,
Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by
applying a friends-of-friends (FoF) halo finder to the output of an N-body
simulation. The approximate methods are then applied to the same initial
conditions of the simulation, producing for all particles displacements from
their starting position and velocities. The position and velocity of each halo
are computed by averaging over the particles that belong to that halo,
according to the FoF halo finder. This procedure allows us to perform a
well-posed test of how clustering of the matter density and halo density fields
are recovered, without requiring the approximate method to provide an accurate
reconstruction of halos. We have considered the results at $z=0,0.5,1$, and we
have analysed the power spectrum in real and redshift space, object-by-object
difference in position and velocity, density Probability Distribution Function
(PDF) and its moments, phase difference of Fourier modes. We find that higher
LPT orders are generally able to better reproduce the clustering of halos,
while little or no improvement is found for the matter density field when going
to 2LPT and 3LPT. Augmentation provides some improvement when coupled with
2LPT, while its effect is limited when coupled with 3LPT. Little improvement is
brought by MUSCLE with respect to Augmentation. The more expensive
particle-mesh code COLA outperforms all LPT methods [abridged]
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient and Secure Routing Protocol for WSN-A Thesis | Advances in Wireless Sensor Network (WSN) have provided the availability of
small and low-cost sensors with the capability of sensing various types of
physical and environmental conditions, data processing, and wireless
communication. Since WSN protocols are application specific, the focus has been
given to the routing protocols that might differ depending on the application
and network architecture. In this work, a novel cluster-based secure routing
protocol, named the Efficient and Secure Routing Protocol (ESRP), has been
proposed for WSN. The goal of ESRP is to provide an
energy efficient routing solution with dynamic security features for clustered
WSN. During the network formation, a node which is connected to a Personal
Computer (PC) is selected as the sink node. Once the sensor nodes are
deployed, the sink node logically segregates the other nodes into a cluster
structure and subsequently creates a WSN. This centralized cluster formation
method is used to reduce the node level processing burden and avoid multiple
communications. In order to ensure reliable data delivery, various security
features have been incorporated in the proposed protocol such as Modified
Zero-Knowledge Protocol (MZKP), Promiscuous hearing method, Trapping of
adversaries and Mine detection. One of the unique features of this ESRP is that
it can dynamically decide about the selection of these security methods, based
on the residual energy of nodes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Jackknife variance estimation for common mean estimators under ordered variances and general two-sample statistics | Samples with a common mean but possibly different, ordered variances arise in
various fields such as interlaboratory experiments, field studies or the
analysis of sensor data. Estimators for the common mean under ordered variances
typically employ random weights, which depend on the sample means and the
unbiased variance estimators. They take different forms when the sample
estimators are in agreement with the order constraints or not, which
complicates even basic analyses such as estimating their variance. We propose
to use the jackknife, whose consistency is established for general smooth
two--sample statistics induced by continuously Gâteaux or Fréchet
differentiable functionals, and, more generally, asymptotically linear
two--sample statistics, allowing us to study a large class of common mean
estimators. Further, it is shown that the common mean estimators under
consideration satisfy a central limit theorem (CLT). We investigate the
accuracy of the resulting confidence intervals by simulations and illustrate
the approach by analyzing several data sets.
| 0 | 0 | 1 | 1 | 0 | 0 |
ISM properties of a Massive Dusty Star-Forming Galaxy discovered at z ~ 7 | We report the discovery and constrain the physical conditions of the
interstellar medium of the highest-redshift millimeter-selected dusty
star-forming galaxy (DSFG) to date, SPT-S J031132-5823.4 (hereafter
SPT0311-58), at $z=6.900 \pm 0.002$. SPT0311-58 was discovered via its 1.4mm
thermal dust continuum emission in the South Pole Telescope (SPT)-SZ survey.
The spectroscopic redshift was determined through an ALMA 3mm frequency scan
that detected CO(6-5), CO(7-6) and [CI](2-1), and subsequently confirmed by
detections of CO(3-2) with ATCA and [CII] with APEX. We constrain the
properties of the ISM in SPT0311-58 with a radiative transfer analysis of the
dust continuum photometry and the CO and [CI] line emission. This allows us to
determine the gas content without ad hoc assumptions about gas mass scaling
factors. SPT0311-58 is extremely massive, with an intrinsic gas mass of $M_{\rm
gas} = 3.3 \pm 1.9 \times10^{11}\,M_{\odot}$. Its large mass and intense star
formation are very rare for a source well into the Epoch of Reionization.
| 0 | 1 | 0 | 0 | 0 | 0 |
A convex formulation of traffic dynamics on transportation networks | This article proposes a numerical scheme for computing the evolution of
vehicular traffic on a road network over a finite time horizon. The traffic
dynamics on each link is modeled by the Hamilton-Jacobi (HJ) partial
differential equation (PDE), which is an equivalent form of the
Lighthill-Whitham-Richards PDE. The main contribution of this article is the
construction of a single convex optimization program which computes the traffic
flow at a junction over a finite time horizon and decouples the PDEs on
connecting links. Compared to discretization schemes which require the
computation of all traffic states on a time-space grid, the proposed convex
optimization approach computes the boundary flows at the junction using only
the initial condition on links and the boundary conditions of the network. The
computed boundary flows at the junction specify the boundary condition for the
HJ PDE on connecting links, which then can be separately solved using an
existing semi-explicit scheme for single link HJ PDE. As demonstrated in a
numerical example of ramp metering control, the proposed convex optimization
approach also provides a natural framework for optimal traffic control
applications.
| 0 | 1 | 1 | 0 | 0 | 0 |
Computational and informatics advances for reproducible data analysis in neuroimaging | The reproducibility of scientific research has become a point of critical
concern. We argue that openness and transparency are critical for
reproducibility, and we outline an ecosystem for open and transparent science
that has emerged within the human neuroimaging community. We discuss the range
of open data sharing resources that have been developed for neuroimaging data,
and the role of data standards (particularly the Brain Imaging Data Structure)
in enabling the automated sharing, processing, and reuse of large neuroimaging
datasets. We outline how the open-source Python language has provided the basis
for a data science platform that enables reproducible data analysis and
visualization. We also discuss how new advances in software engineering, such
as containerization, provide the basis for greater reproducibility in data
analysis. The emergence of this new ecosystem provides an example for many
areas of science that are currently struggling with reproducibility.
| 0 | 0 | 0 | 0 | 1 | 0 |
HPD-invariance of the Tate, Beilinson and Parshin conjectures | We prove that the Tate, Beilinson and Parshin conjectures are invariant under
Homological Projective Duality (=HPD). As an application, we obtain a proof of
these celebrated conjectures (as well as of the strong form of the Tate
conjecture) in the new cases of linear sections of determinantal varieties and
complete intersections of quadrics. Furthermore, we extend the original
conjectures of Tate, Beilinson and Parshin from schemes to stacks and prove
these extended conjectures for certain low-dimensional global orbifolds.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multi-dueling Bandits with Dependent Arms | The dueling bandits problem is an online learning framework for learning from
pairwise preference feedback, and is particularly well-suited for modeling
settings that elicit subjective or implicit human feedback. In this paper, we
study the problem of multi-dueling bandits with dependent arms, which extends
the original dueling bandits setting by simultaneously dueling multiple arms as
well as modeling dependencies between arms. These extensions capture key
characteristics found in many real-world applications, and allow for the
opportunity to develop significantly more efficient algorithms than were
possible in the original setting. We propose the \selfsparring algorithm, which
reduces the multi-dueling bandits problem to a conventional bandit setting that
can be solved using a stochastic bandit algorithm such as Thompson Sampling,
and can naturally model dependencies using a Gaussian process prior. We present
a no-regret analysis for the multi-dueling setting, and demonstrate the
effectiveness of our algorithm empirically on a wide range of simulation
settings.
| 1 | 0 | 0 | 0 | 0 | 0 |
New constraints on the millimetre emission of six debris disks | The presence of dusty debris around main sequence stars denotes the existence
of planetary systems. Such debris disks are often identified by the presence of
excess continuum emission at infrared and (sub-)millimetre wavelengths, with
measurements at longer wavelengths tracing larger and cooler dust grains. The
exponent of the slope of the disk emission at sub-millimetre wavelengths, `q',
defines the size distribution of dust grains in the disk. This size
distribution is a function of the rigid strength of the dust producing parent
planetesimals. As part of the survey `PLAnetesimals around TYpical Pre-main
seqUence Stars' (PLATYPUS) we observed six debris disks at 9-mm using the
Australian Telescope Compact Array. We obtain marginal (~3-\sigma) detections
of three targets: HD 105, HD 61005, and HD 131835. Upper limits for the three
remaining disks, HD20807, HD109573, and HD109085, provide further constraints on
the (sub-)millimetre slope of their spectral energy distributions. The values
of q (or their limits) derived from our observations are all smaller than the
oft-assumed steady state collisional cascade model (q = 3.5), but lie well
within the theoretically expected range for debris disks q ~ 3 to 4. The
measured q values for our targets are all < 3.3, consistent with both
collisional modelling results and theoretical predictions for parent
planetesimal bodies being `rubble piles' held together loosely by their
self-gravity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bosonic integer quantum Hall effect as topological pumping | Based on a quasi-one-dimensional limit of quantum Hall states on a thin
torus, we construct a model of interaction-induced topological pumping which
mimics the Hall response of the bosonic integer quantum Hall (BIQH) state. The
quasi-one-dimensional counterpart of the BIQH state is identified as the
Haldane phase composed of two-component bosons which form effective spin-$1$
degrees of freedom. An adiabatic change between the Haldane phase and trivial
Mott insulators constitutes {\it off-diagonal} topological pumping in which the
translation of the lattice potential for one component induces a current in the
other. The mechanism of this pumping is interpreted in terms of changes in
polarizations between symmetry-protected quantized values.
| 0 | 1 | 0 | 0 | 0 | 0 |
Connected Vehicular Transportation: Data Analytics and Traffic-dependent Networking | With onboard operating systems becoming increasingly common in vehicles, the
real-time broadband infotainment and Intelligent Transportation System (ITS)
service applications in fast-motion vehicles become ever demanding, which are
highly expected to significantly improve the efficiency and safety of our daily
on-road lives. The emerging ITS and vehicular applications, e.g., trip
planning, however, require substantial efforts on the real-time pervasive
information collection and big data processing so as to provide quick decision
making and feedback to the fast-moving vehicles, which thus impose
significant challenges on the development of an efficient vehicular
communication platform. In this article, we present TrasoNET, an integrated
network framework to provide realtime intelligent transportation services to
connected vehicles by exploring the data analytics and networking techniques.
TrasoNET is built upon two key components. The first one guides vehicles to the
appropriate access networks by exploring the information of realtime traffic
status, specific user preferences, service applications and network conditions.
The second component mainly involves a distributed automatic access engine,
which enables individual vehicles to make distributed access decisions based on
access recommender, local observation and historic information. We showcase the
application of TrasoNET in a case study on real-time traffic sensing based on
real traces of taxis.
| 1 | 0 | 0 | 0 | 0 | 0 |
Strongly ergodic equivalence relations: spectral gap and type III invariants | We obtain a spectral gap characterization of strongly ergodic equivalence
relations on standard measure spaces. We use our spectral gap criterion to
prove that a large class of skew-product equivalence relations arising from
measurable $1$-cocycles with values into locally compact abelian groups are
strongly ergodic. By analogy with the work of Connes on full factors, we
introduce the Sd and $\tau$ invariants for type ${\rm III}$ strongly ergodic
equivalence relations. As a corollary to our main results, we show that for any
type ${\rm III_1}$ ergodic equivalence relation $\mathcal R$, the Maharam
extension $\mathord{\text {c}}(\mathcal R)$ is strongly ergodic if and only if
$\mathcal R$ is strongly ergodic and the invariant $\tau(\mathcal R)$ is the
usual topology on $\mathbf R$. We also obtain a structure theorem for almost
periodic strongly ergodic equivalence relations analogous to Connes' structure
theorem for almost periodic full factors. Finally, we prove that for arbitrary
strongly ergodic free actions of bi-exact groups (e.g. hyperbolic groups), the
Sd and $\tau$ invariants of the orbit equivalence relation and of the
associated group measure space von Neumann factor coincide.
| 0 | 0 | 1 | 0 | 0 | 0 |
On-the-fly Operation Batching in Dynamic Computation Graphs | Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer
more flexibility for implementing models that cope with data of varying
dimensions and structure, relative to toolkits that operate on statically
declared computations (e.g., TensorFlow, CNTK, and Theano). However, existing
toolkits - both static and dynamic - require that the developer organize the
computations into the batches necessary for exploiting high-performance
algorithms and hardware. This batching task is generally difficult, but it
becomes a major hurdle as architectures become complex. In this paper, we
present an algorithm, and its implementation in the DyNet toolkit, for
automatically batching operations. Developers simply write minibatch
computations as aggregations of single instance computations, and the batching
algorithm seamlessly executes them, on the fly, using computationally efficient
batched operations. On a variety of tasks, we obtain throughput similar to that
obtained with manual batches, as well as comparable speedups over
single-instance learning on architectures that are impractical to batch
manually.
| 1 | 0 | 0 | 1 | 0 | 0 |
Mixtures of Skewed Matrix Variate Bilinear Factor Analyzers | Clustering is the process of finding and analyzing underlying group structure
in data. In recent years, data has become increasingly higher dimensional and,
therefore, an increased need has arisen for dimension reduction techniques for
clustering. Although such techniques are firmly established in the literature
for multivariate data, there is a relative paucity in the area of matrix
variate or three-way data. Furthermore, the few methods that are available all
assume matrix variate normality, which is not always sensible if cluster
skewness or excess kurtosis is present. Mixtures of bilinear factor analyzers
models using skewed matrix variate distributions are proposed. In all, four
such mixture models are presented, based on matrix variate skew-t, generalized
hyperbolic, variance gamma and normal inverse Gaussian distributions,
respectively.
| 0 | 0 | 0 | 1 | 0 | 0 |
Transfer Learning to Learn with Multitask Neural Model Search | Deep learning models require extensive architecture design exploration and
hyperparameter optimization to perform well on a given task. The exploration of
the model design space is often made by a human expert, and optimized using a
combination of grid search and search heuristics over a large space of possible
choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach
that has been proposed to automate architecture design. NAS has been
successfully applied to generate Neural Networks that rival the best
human-designed architectures. However, NAS requires sampling, constructing, and
training hundreds to thousands of models to achieve well-performing
architectures. This procedure needs to be executed from scratch for each new
task. The application of NAS to a wide set of tasks currently lacks a way to
transfer generalizable knowledge across tasks. In this paper, we present the
Multitask Neural Model Search (MNMS) controller. Our goal is to learn a
generalizable framework that can condition model construction on successful
model searches for previously seen tasks, thus significantly speeding up the
search for new tasks. We demonstrate that MNMS can conduct an automated
architecture search for multiple tasks simultaneously while still learning
well-performing, specialized models for each task. We then show that
pre-trained MNMS controllers can transfer learning to new tasks. By leveraging
knowledge from previous searches, we find that pre-trained MNMS models start
from a better location in the search space and reduce search time on unseen
tasks, while still discovering models that outperform published human-designed
models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Hierarchical Game-Theoretic Planning for Autonomous Vehicles | The actions of an autonomous vehicle on the road affect and are affected by
those of other drivers, whether overtaking, negotiating a merge, or avoiding an
accident. This mutual dependence, best captured by dynamic game theory, creates
a strong coupling between the vehicle's planning and its predictions of other
drivers' behavior, and constitutes an open problem with direct implications on
the safety and viability of autonomous driving technology. Unfortunately,
dynamic games are too computationally demanding to meet the real-time
constraints of autonomous driving in its continuous state and action space. In
this paper, we introduce a novel game-theoretic trajectory planning algorithm
for autonomous driving that enables real-time performance by hierarchically
decomposing the underlying dynamic game into a long-horizon "strategic" game
with simplified dynamics and full information structure, and a short-horizon
"tactical" game with full dynamics and a simplified information structure. The
value of the strategic game is used to guide the tactical planning, implicitly
extending the planning horizon, pushing the local trajectory optimization
closer to global solutions, and, most importantly, quantitatively accounting
for the autonomous vehicle and the human driver's ability and incentives to
influence each other. In addition, our approach admits non-deterministic models
of human decision-making, rather than relying on perfectly rational
predictions. Our results showcase richer, safer, and more effective autonomous
behavior in comparison to existing techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
Observable dictionary learning for high-dimensional statistical inference | This paper introduces a method for efficiently inferring a high-dimensional
distributed quantity from a few observations. The quantity of interest (QoI) is
approximated in a basis (dictionary) learned from a training set. The
coefficients associated with the approximation of the QoI in the basis are
determined by minimizing the misfit with the observations. To obtain a
probabilistic estimate of the quantity of interest, a Bayesian approach is
employed. The QoI is treated as a random field endowed with a hierarchical
prior distribution so that closed-form expressions can be obtained for the
posterior distribution. The main contribution of the present work lies in the
derivation of \emph{a representation basis consistent with the observation
chain} used to infer the associated coefficients. The resulting dictionary is
then tailored to be both observable by the sensors and accurate in
approximating the posterior mean. An algorithm for deriving such an observable
dictionary is presented. The method is illustrated with the estimation of the
velocity field of an open cavity flow from a handful of wall-mounted point
sensors. Comparison with standard estimation approaches relying on Principal
Component Analysis and K-SVD dictionaries is provided and illustrates the
superior performance of the present approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
Counterintuitive Reconstruction of the Polar O-Terminated ZnO Surface With Zinc Vacancies and Hydrogen | Understanding the structure of ZnO surface reconstructions and their
resultant properties is crucial to the rational design of ZnO-containing
devices ranging from optoelectronics to catalysts. Here, we are motivated by
recent experimental work which showed a new surface reconstruction containing
Zn vacancies ordered in a Zn(3x3) pattern in the subsurface of (0001)-O
terminated ZnO. A reconstruction with Zn vacancies on (0001)-O is surprising
and counterintuitive because Zn vacancies enhance the surface dipole rather
than reduce it. In this work, we show using Density Functional Theory (DFT)
that subsurface Zn vacancies can form on (0001)-O when coupled with adsorption
of surface H and are in fact stable under a wide range of common conditions. We
also show these vacancies have a significant ordering tendency and that
Sb-doping-created subsurface inversion domain boundaries (IDBs) enhance
driving force of Zn vacancy alignment into large domains of the Zn(3x3)
reconstruction.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Finite-Tame-Wild Trichotomy Theorem for Tensor Diagrams | In this paper, we consider the problem of determining when two tensor
networks are equivalent under a heterogeneous change of basis. In particular,
to a string diagram in a certain monoidal category (which we call tensor
diagrams), we formulate an associated abelian category of representations. Each
representation corresponds to a tensor network on that diagram. We then
classify which tensor diagrams give rise to categories that are finite, tame,
or wild in the traditional sense of representation theory. For those tensor
diagrams of finite and tame type, we classify the indecomposable
representations. Our main result is that a tensor diagram is wild if and only
if it contains a vertex of degree at least three. Otherwise, it is of tame or
finite type.
| 0 | 0 | 1 | 0 | 0 | 0 |
Decomposing the Quantile Ratio Index with applications to Australian income and wealth data | The quantile ratio index introduced by Prendergast and Staudte (2017) is a
simple and effective measure of relative inequality for income data that is
resistant to outliers. It measures the average relative distance of a randomly
chosen income from its symmetric quantile. Another useful property of this
index is investigated here: given a partition of the income distribution into a
union of sets of symmetric quantiles, one can find the conditional inequality
for each set as measured by the quantile ratio index and readily combine them
in a weighted average to obtain the index for the entire population. When
applied to data for various years, one can track how these contributions to
inequality vary over time, as illustrated here for Australian Bureau of
Statistics income and wealth data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Metamorphic Moving Horizon Estimation | This paper considers a practical scenario where a classical estimation method
might have already been implemented on a certain platform when one tries to
apply more advanced techniques such as moving horizon estimation (MHE). We are
interested in utilizing MHE to upgrade, rather than completely discard, the
existing estimation technique. This immediately raises the question of how one can
improve the estimation performance gradually based on the pre-estimator. To
this end, we propose a general methodology which incorporates the pre-estimator
with a tuning parameter {\lambda} between 0 and 1 into the quadratic cost
functions that are usually adopted in MHE. We examine the above idea in two
standard MHE frameworks that have been proposed in the existing literature. For
both frameworks, when {\lambda} = 0, the proposed strategy exactly matches the
existing classical estimator; when the value of {\lambda} is increased, the
proposed strategy exhibits a more aggressive normalized forgetting effect
towards the old data, thereby increasing the estimation performance gradually.
| 1 | 0 | 0 | 0 | 0 | 0 |
Erosion distance for generalized persistence modules | The persistence diagram of Cohen-Steiner, Edelsbrunner, and Harer was
recently generalized by Patel to the case of constructible persistence modules
with values in a symmetric monoidal category with images. Patel also introduced
a distance for persistence diagrams, the erosion distance. Motivated by this
work, we extend the erosion distance to a distance of rank invariants of
generalized persistence modules by using the generalization of the interleaving
distance of Bubenik, de Silva, and Scott as a guideline. This extension of the
erosion distance also gives, as a special case, a distance for multidimensional
persistent homology groups with torsion introduced by Frosini. We show that the
erosion distance is stable with respect to the interleaving distance, and that
it gives a lower bound for the natural pseudo-distance in the case of sublevel
set persistent homology of continuous functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Adjoint Computation for Wavelet and Convolution Operators | First-order optimization algorithms, often preferred for large problems,
require the gradient of the differentiable terms in the objective function.
These gradients often involve linear operators and their adjoints, which must
be applied rapidly. We consider two example problems and derive methods for
quickly evaluating the required adjoint operator. The first example is an image
deblurring problem, where we must compute efficiently the adjoint of
multi-stage wavelet reconstruction. Our formulation of the adjoint works for a
variety of boundary conditions, which allows the formulation to generalize to a
larger class of problems. The second example is a blind channel estimation
problem taken from the optimization literature where we must compute the
adjoint of the convolution of two signals. In each example, we show how the
adjoint operator can be applied efficiently while leveraging existing software.
| 0 | 0 | 1 | 0 | 0 | 0 |
RCD: Rapid Close to Deadline Scheduling for Datacenter Networks | Datacenter-based Cloud Computing services provide a flexible, scalable and
yet economical infrastructure to host online services such as multimedia
streaming, email and bulk storage. Many such services perform geo-replication
to provide necessary quality of service and reliability to users resulting in
frequent large inter- datacenter transfers. In order to meet tenant service
level agreements (SLAs), these transfers have to be completed prior to a
deadline. In addition, WAN resources are quite scarce and costly, meaning they
should be fully utilized. Several recently proposed schemes, such as B4,
TEMPUS, and SWAN have focused on improving the utilization of inter-datacenter
transfers through centralized scheduling, however, they fail to provide a
mechanism to guarantee that admitted requests meet their deadlines. Also, in a
recent study, authors propose Amoeba, a system that allows tenants to define
deadlines and guarantees that the specified deadlines are met, however, to
admit new traffic, the proposed system has to modify the allocation of already
admitted transfers. In this paper, we propose Rapid Close to Deadline
Scheduling (RCD), a close to deadline traffic allocation technique that is fast
and efficient. Through simulations, we show that RCD is up to 15 times faster
than Amoeba, provides high link utilization along with deadline guarantees, and
is able to make quick decisions on whether a new request can be fully satisfied
before its deadline.
| 1 | 0 | 0 | 0 | 0 | 0 |
Real representations of finite symplectic groups over fields of characteristic two | We prove that when $q$ is a power of $2$, every complex irreducible
representation of $\mathrm{Sp}(2n, \mathbb{F}_q)$ may be defined over the real
numbers, that is, all Frobenius-Schur indicators are 1. We also obtain a
generating function for the sum of the degrees of the unipotent characters of
$\mathrm{Sp}(2n, \mathbb{F}_q)$, or of $\mathrm{SO}(2n+1, \mathbb{F}_q)$, for
any prime power $q$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Risk measure estimation for $β$-mixing time series and applications | In this paper, we discuss the application of extreme value theory in the
context of stationary $\beta$-mixing sequences that belong to the Fréchet
domain of attraction. In particular, we propose a methodology to construct
bias-corrected tail estimators. Our approach is based on the combination of two
estimators for the extreme value index to cancel the bias. The resulting
estimator is used to estimate an extreme quantile. In a simulation study, we
outline the performance of our proposals that we compare to alternative
estimators recently introduced in the literature. Also, we compute the
asymptotic variance in specific examples when possible. Our methodology is
applied to two datasets on finance and environment.
| 0 | 0 | 1 | 1 | 0 | 0 |
Transfer entropy between communities in complex networks | With the help of transfer entropy, we analyze information flows between
communities of complex networks. We show that the transfer entropy provides a
coherent description of interactions between communities, including non-linear
interactions. To put some flesh on the bare bones, we analyze transfer
entropies between communities of the five largest financial markets, represented as
networks of interacting stocks. Additionally, we discuss information transfer
of rare events, which is analyzed by Rényi transfer entropy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Disentangled VAE Representations for Multi-Aspect and Missing Data | Many problems in machine learning and related application areas are
fundamentally variants of conditional modeling and sampling across multi-aspect
data, either multi-view, multi-modal, or simply multi-group. For example,
sampling from the distribution of English sentences conditioned on a given
French sentence or sampling audio waveforms conditioned on a given piece of
text. Central to many of these problems is the issue of missing data: we can
observe many English, French, or German sentences individually but only
occasionally do we have data for a sentence pair. Motivated by these
applications and inspired by recent progress in variational autoencoders for
grouped data, we develop factVAE, a deep generative model capable of handling
multi-aspect data, robust to missing observations, and with a prior that
encourages disentanglement between the groups and the latent dimensions. The
effectiveness of factVAE is demonstrated on a variety of rich real-world
datasets, including motion capture poses and pictures of faces captured from
varying poses and perspectives.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the spectral geometry of manifolds with conic singularities | In the previous article we derived a detailed asymptotic expansion of the
heat trace for the Laplace-Beltrami operator on functions on manifolds with
conic singularities. In this article we investigate how the terms in the
expansion reflect the geometry of the manifold. Since the general expansion
contains a logarithmic term, its vanishing is a necessary condition for
smoothness of the manifold. In the two-dimensional case this implies that the
constant term of the expansion contains a non-local term that determines the
length of the (circular) cross section and vanishes precisely if this length
equals $2\pi$, that is, in the smooth case. We proceed to the study of higher
dimensions. In the four-dimensional case, the logarithmic term in the expansion
vanishes precisely when the cross section is a spherical space form, and we
expect that the vanishing of a further singular term will imply again
smoothness, but this is not yet clear beyond the case of cyclic space forms. In
higher dimensions the situation is naturally more difficult. We illustrate this
in the case of cross sections with constant curvature. Then the logarithmic
term becomes a polynomial in the curvature with roots that are different from
1, which necessitates the vanishing of further terms that have not yet been isolated.
| 0 | 0 | 1 | 0 | 0 | 0 |
Neural Task Programming: Learning to Generalize Across Hierarchical Tasks | In this work, we propose a novel robot learning framework called Neural Task
Programming (NTP), which bridges the idea of few-shot learning from
demonstration and neural program induction. NTP takes as input a task
specification (e.g., video demonstration of a task) and recursively decomposes
it into finer sub-task specifications. These specifications are fed to a
hierarchical neural program, where bottom-level programs are callable
subroutines that interact with the environment. We validate our method in three
robot manipulation tasks. NTP achieves strong generalization across sequential
tasks that exhibit hierarchical and compositional structures. The experimental
results show that NTP learns to generalize well towards unseen tasks with
increasing lengths, variable topologies, and changing objectives.
| 1 | 0 | 0 | 0 | 0 | 0 |
Discovery of potential collaboration networks from open knowledge sources | Scientific publishing conveys the outputs of an academic or research
activity; in this sense, it also reflects the efforts and issues in which
people engage. To identify potential collaborative networks, one of the simplest
approaches is to leverage co-authorship relations. In this approach,
semantic and hierarchical relationships defined by a Knowledge Organization
System are used to improve the system's ability to recommend potential
networks beyond a lexical or syntactic analysis of the topics or concepts
that are of interest to academics.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards Planning and Control of Hybrid Systems with Limit Cycle using LQR Trees | We present a multi-query recovery policy for a hybrid system with a goal limit
cycle. The sample trajectories and the hybrid limit cycle of the dynamical
system are stabilized using locally valid Time Varying LQR controller policies
which probabilistically cover a bounded region of state space. The original LQR
Tree algorithm builds such trees for non-linear static and non-hybrid systems
like a pendulum or a cart-pole. We leverage the idea of LQR trees to plan with
a continuous control set, unlike methods that rely on discretization like
dynamic programming to plan for hybrid dynamical systems where it is hard to
capture the exact event of discrete transition. We test the algorithm on a
compass gait model by stabilizing a dynamic walking hybrid limit cycle with
point foot contact from random initial conditions. We show simulation results
where the system returns to stable behavior under initial position or velocity
perturbations and noise.
| 1 | 0 | 0 | 0 | 0 | 0 |
Clustering with t-SNE, provably | t-distributed Stochastic Neighborhood Embedding (t-SNE), a clustering and
visualization method proposed by van der Maaten & Hinton in 2008, has rapidly
become a standard tool in a number of natural sciences. Despite its
overwhelming success, there is a distinct lack of mathematical foundations and
the inner workings of the algorithm are not well understood. The purpose of
this paper is to prove that t-SNE is able to recover well-separated clusters;
more precisely, we prove that t-SNE in the `early exaggeration' phase, an
optimization technique proposed by van der Maaten & Hinton (2008) and van der
Maaten (2014), can be rigorously analyzed. As a byproduct, the proof suggests
novel ways for setting the exaggeration parameter $\alpha$ and step size $h$.
Numerical examples illustrate the effectiveness of these rules: in particular,
the quality of embedding of topological structures (e.g. the swiss roll)
improves. We also discuss a connection to spectral clustering methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
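A minimal sketch of experimenting with the early-exaggeration phase discussed in the abstract above, using scikit-learn's TSNE on synthetic well-separated clusters; the `early_exaggeration` and `learning_rate` values below are ordinary illustrative choices, not the rules for the exaggeration parameter and step size derived in the paper.

```python
# Sketch: t-SNE's early-exaggeration phase on well-separated clusters.
# Parameter values are illustrative, not the paper's derived rules.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_per_cluster, dim = 200, 50
centers = rng.normal(scale=10.0, size=(3, dim))          # three far-apart centers
X = np.vstack([c + rng.normal(size=(n_per_cluster, dim)) for c in centers])
labels = np.repeat(np.arange(3), n_per_cluster)

# early_exaggeration plays the role of the exaggeration parameter,
# learning_rate the role of the gradient step size.
emb = TSNE(n_components=2, perplexity=30, early_exaggeration=12.0,
           learning_rate=200.0, init="random", random_state=0).fit_transform(X)

# Crude separation check: within-cluster spread vs. gap between cluster means.
means = np.array([emb[labels == k].mean(axis=0) for k in range(3)])
spread = max(np.linalg.norm(emb[labels == k] - means[k], axis=1).mean() for k in range(3))
gap = min(np.linalg.norm(means[i] - means[j]) for i in range(3) for j in range(i + 1, 3))
print(f"mean within-cluster spread: {spread:.2f}, min between-cluster gap: {gap:.2f}")
```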
The Observable Properties of Cool Winds from Galaxies, AGN, and Star Clusters. I. Theoretical Framework | Winds arising from galaxies, star clusters, and active galactic nuclei are
crucial players in star and galaxy formation, but it has proven remarkably
difficult to use observations of them to determine physical properties of
interest, particularly mass fluxes. Much of the difficulty stems from a lack of
a theory that links a physically-realistic model for winds' density, velocity,
and covering factors to calculations of light emission and absorption. In this
paper we provide such a model. We consider a wind launched from a turbulent
region with a range of column densities, derive the differential acceleration
of gas as a function of column density, and use this result to compute winds'
absorption profiles, emission profiles, and emission intensity maps in both
optically thin and optically thick species. The model is sufficiently simple
that all required computations can be done analytically up to straightforward
numerical integrals, rendering it suitable for the problem of deriving physical
parameters by fitting models to observed data. We show that our model produces
realistic absorption and emission profiles for some example cases, and argue
that the most promising methods of deducing mass fluxes are based on
combinations of absorption lines of different optical depths, or on combining
absorption with measurements of molecular line emission. In the second paper in
this series, we expand on these ideas by introducing a set of observational
diagnostics that are significantly more robust than those commonly in use, and
that can be used to obtain improved estimates of wind properties.
| 0 | 1 | 0 | 0 | 0 | 0 |
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition | Neural models enjoy widespread use across a variety of tasks and have grown
to become crucial components of many industrial systems. Despite their
effectiveness and extensive popularity, they are not without their exploitable
flaws. Initially applied to computer vision systems, the generation of
adversarial examples is a process in which seemingly imperceptible
perturbations are made to an image, with the purpose of inducing a deep
learning based classifier to misclassify the image. Due to recent trends in
speech processing, this has become a noticeable issue in speech recognition
models. In late 2017, an attack was shown to be quite effective against the
Speech Commands classification model. Limited-vocabulary speech classifiers,
such as the Speech Commands model, are used quite frequently in a variety of
applications, particularly in managing automated attendants in telephony
contexts. As such, adversarial examples produced by this attack could have
real-world consequences. While previous work in defending against these
adversarial examples has investigated using audio preprocessing to reduce or
distort adversarial noise, this work explores the idea of flooding particular
frequency bands of an audio signal with random noise in order to detect
adversarial examples. This technique of flooding, which does not require
retraining or modifying the model, is inspired by work done in computer vision
and builds on the idea that speech classifiers are relatively robust to natural
noise. A combined defense incorporating 5 different frequency bands for
flooding the signal with noise outperformed other existing defenses in the
audio space, detecting adversarial examples with 91.8% precision and 93.5%
recall.
| 1 | 0 | 0 | 0 | 0 | 0 |
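A rough sketch of the band-flooding idea described above: add random noise confined to chosen frequency bands and flag an input as adversarial if the model's prediction changes. The `classify` callable, band edges, and noise level are illustrative placeholders, not the exact configuration evaluated in the paper.

```python
# Sketch: flood selected frequency bands with noise and compare predictions.
import numpy as np

def flood_band(waveform, sr, band=(2000.0, 4000.0), noise_db=-30.0, rng=None):
    """Add white noise restricted to one frequency band (via FFT masking)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=waveform.shape)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sr)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0     # keep only the chosen band
    band_noise = np.fft.irfft(spec, n=len(waveform))
    scale = np.sqrt(np.mean(waveform ** 2)) * 10 ** (noise_db / 20.0)
    band_noise *= scale / (np.sqrt(np.mean(band_noise ** 2)) + 1e-12)
    return waveform + band_noise

def looks_adversarial(waveform, sr, classify, bands, rng=None):
    """Flag the input if flooding any band changes the predicted label."""
    clean_label = classify(waveform)
    return any(classify(flood_band(waveform, sr, b, rng=rng)) != clean_label
               for b in bands)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    wav = np.sin(2 * np.pi * 440 * t)                     # stand-in "utterance"
    dummy = lambda w: int(np.mean(w ** 2) > 0.4)          # toy stand-in classifier
    print(looks_adversarial(wav, sr, dummy, bands=[(1000, 2000), (3000, 4000)]))
```

Here `classify` stands in for the limited-vocabulary speech model; checking several bands mirrors the combined multi-band defense described above.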
Representation Learning and Pairwise Ranking for Implicit Feedback in Recommendation Systems | In this paper, we propose a novel ranking framework for collaborative
filtering with the overall aim of learning user preferences over items by
minimizing a pairwise ranking loss. We show the minimization problem involves
dependent random variables and provide a theoretical analysis by proving the
consistency of the empirical risk minimization in the worst case where all
users choose a minimal number of positive and negative items. We further derive
a Neural-Network model that jointly learns a new representation of users and
items in an embedded space as well as the preference relation of users over the
pairs of items. The learning objective is based on three scenarios of ranking
losses that control the ability of the model to maintain the ordering over the
items induced from the users' preferences, as well as the capacity of the
dot-product defined in the learned embedded space to produce the ordering. The
proposed model is by nature suitable for implicit feedback and involves the
estimation of only very few parameters. Through extensive experiments on
several real-world benchmarks on implicit data, we show the benefit of
learning the preferences and the embedding simultaneously, compared to
learning them separately. We also demonstrate that our approach is very
competitive with the best state-of-the-art collaborative filtering techniques
proposed for implicit feedback.
| 1 | 0 | 0 | 1 | 0 | 0 |
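A compact sketch of a pairwise ranking objective over jointly learned user and item embeddings scored by a dot product (a BPR-style logistic surrogate); this is a generic illustration of the kind of loss described above, not the authors' exact three-scenario objective, and all sizes and hyperparameters are placeholders.

```python
# Sketch: pairwise ranking over dot-product scores of learned embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16
U = 0.1 * rng.normal(size=(n_users, dim))      # user embeddings
V = 0.1 * rng.normal(size=(n_items, dim))      # item embeddings

def sgd_step(u, i_pos, i_neg, lr=0.05, reg=1e-4):
    """One pairwise update: push score(u, i_pos) above score(u, i_neg)."""
    u_vec, p_vec, n_vec = U[u].copy(), V[i_pos].copy(), V[i_neg].copy()
    x = u_vec @ (p_vec - n_vec)                 # score difference
    g = -1.0 / (1.0 + np.exp(x))                # d/dx of -log sigmoid(x)
    U[u]     -= lr * (g * (p_vec - n_vec) + reg * u_vec)
    V[i_pos] -= lr * (g * u_vec + reg * p_vec)
    V[i_neg] -= lr * (-g * u_vec + reg * n_vec)

# Toy implicit feedback: each user "clicked" a random set of items.
clicks = {u: set(rng.choice(n_items, size=20, replace=False)) for u in range(n_users)}
for _ in range(20000):
    u = rng.integers(n_users)
    i_pos = rng.choice(list(clicks[u]))
    i_neg = rng.integers(n_items)
    while i_neg in clicks[u]:
        i_neg = rng.integers(n_items)
    sgd_step(u, i_pos, i_neg)

# Sanity check: positives should now tend to score higher than random negatives.
u = 0
pos = list(clicks[u])[:5]
neg = list(rng.integers(n_items, size=5))
print("pos scores:", np.round(U[u] @ V[pos].T, 2), "neg scores:", np.round(U[u] @ V[neg].T, 2))
```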
On the Sublinear Regret of Distributed Primal-Dual Algorithms for Online Constrained Optimization | This paper introduces consensus-based primal-dual methods for distributed
online optimization where the time-varying system objective function
$f_t(\mathbf{x})$ is given as the sum of local agents' objective functions,
i.e., $f_t(\mathbf{x}) = \sum_i f_{i,t}(\mathbf{x}_i)$, and the system
constraint function $\mathbf{g}(\mathbf{x})$ is given as the sum of local
agents' constraint functions, i.e., $\mathbf{g}(\mathbf{x}) = \sum_i
\mathbf{g}_i (\mathbf{x}_i) \preceq \mathbf{0}$. At each stage, each agent
commits to an adaptive decision pertaining only to the past and locally
available information, and incurs a new cost function reflecting the change in
the environment. Our algorithm uses weighted averaging of the iterates for each
agent to keep local estimates of the global constraints and dual variables. We
show that the algorithm achieves a regret of order $O(\sqrt{T})$ with the time
horizon $T$, in scenarios when the underlying communication topology is
time-varying and jointly-connected. The regret is measured with respect to the
cost function value as well as the constraint violation. Numerical results for
online routing in wireless multi-hop networks with uncertain channel rates are
provided to illustrate the performance of the proposed algorithm.
| 0 | 0 | 1 | 0 | 0 | 0 |
Reliability study of proportional odds family of discrete distributions | The proportional odds model gives a method of generating a new family of
distributions by adding a parameter, called the tilt parameter, to expand an
existing family of distributions. The new family of distributions so obtained
is known as Marshall-Olkin family of distributions or Marshall-Olkin extended
distributions. In this paper, we consider Marshall-Olkin family of
distributions in discrete case with fixed tilt parameter. We study different
ageing properties, as well as different stochastic orderings of this family of
distributions. All the results of this paper are supported by several examples.
| 0 | 0 | 1 | 1 | 0 | 0 |
Global Orientifolded Quivers with Inflation | We describe global embeddings of fractional D3 branes at orientifolded
singularities in type IIB flux compactifications. We present an explicit
Calabi-Yau example where the chiral visible sector lives on a local
orientifolded quiver while non-perturbative effects, $\alpha'$ corrections and
a T-brane hidden sector lead to full closed string moduli stabilisation in a de
Sitter vacuum. The same model can also successfully give rise to inflation
driven by a del Pezzo divisor. Our model represents the first explicit
Calabi-Yau example featuring both an inflationary and a chiral visible sector.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discretization of Springer fibers | Consider a nilpotent element e in a simple complex Lie algebra. The Springer
fibre corresponding to e admits a discretization (discrete analogue) introduced
by the author in 1999. In this paper we propose a conjectural description of
that discretization which is more amenable to computation.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Underapproximation of Reach Sets of Abstract Continuous-Time Systems | We consider the problem of proving that each point in a given set of states
("target set") can indeed be reached by a given nondeterministic
continuous-time dynamical system from some initial state. We consider this
problem for abstract continuous-time models that can be concretized as various
kinds of continuous and hybrid dynamical systems.
The approach to this problem proposed in this paper is based on finding a
suitable superset S of the target set which has the property that each partial
trajectory of the system which lies entirely in S either is defined at the
initial time moment, or can be locally extended backward in time, or can be
locally modified in such a way that the resulting trajectory can be locally
extended backward in time.
This reformulation of the problem has a relatively simple logical expression
and is convenient for applying various local existence theorems and local
dynamics analysis methods to proving reachability, which makes it suitable for
reasoning about the behavior of continuous and hybrid dynamical systems in
proof assistants such as Mizar, Isabelle, etc.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Bayesian nonparametric approach to log-concave density estimation | The estimation of a log-concave density on $\mathbb{R}$ is a canonical
problem in the area of shape-constrained nonparametric inference. We present a
Bayesian nonparametric approach to this problem based on an exponentiated
Dirichlet process mixture prior and show that the posterior distribution
converges to the log-concave truth at the (near-) minimax rate in Hellinger
distance. Our proof proceeds by establishing a general contraction result based
on the log-concave maximum likelihood estimator that avoids the need for
further metric entropy calculations. We also present two computationally more
feasible approximations and a more practical empirical Bayes approach, which
are illustrated numerically via simulations.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Complete Characterization of the 1-Dimensional Intrinsic Cech Persistence Diagrams for Metric Graphs | Metric graphs are special types of metric spaces used to model and represent
simple, ubiquitous, geometric relations in data such as biological networks,
social networks, and road networks. We are interested in giving a qualitative
description of metric graphs using topological summaries. In particular, we
provide a complete characterization of the 1-dimensional intrinsic Cech
persistence diagrams for metric graphs using persistent homology. Together with
complementary results by Adamaszek et al., which imply results on intrinsic
Cech persistence diagrams in all dimensions for a single cycle, our results
constitute important steps toward characterizing intrinsic Cech persistence
diagrams for arbitrary metric graphs across all dimensions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Critical exponent $ω$ in the Gross-Neveu-Yukawa model at $O(1/N)$ | The critical exponent $\omega$ is evaluated at $O(1/N)$ in $d$-dimensions in
the Gross-Neveu model using the large $N$ critical point formalism. It is shown
to be in agreement with the recently determined three loop $\beta$-functions of
the Gross-Neveu-Yukawa model in four dimensions. The same exponent is computed
for the chiral Gross-Neveu and non-abelian Nambu-Jona-Lasinio universality
classes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Path Planning for Multiple Heterogeneous Unmanned Vehicles with Uncertain Service Times | This article presents a framework and develops a formulation to solve a path
planning problem for multiple heterogeneous Unmanned Vehicles (UVs) with
uncertain service times for each vehicle--target pair. The vehicles incur a
penalty proportional to the duration of their total service time in excess of a
preset constant. The vehicles differ in their motion constraints and are
located at distinct depots at the start of the mission. The vehicles may also
be equipped with disparate sensors. The objective is to find a tour for each
vehicle that starts and ends at its respective depot such that every target is
visited and serviced by some vehicle while minimizing the sum of the total
travel distance and the expected penalty incurred by all the vehicles. We
formulate the problem as a two-stage stochastic program with recourse, present
the theoretical properties of the formulation and advantages of using such a
formulation, as opposed to a deterministic expected value formulation, to solve
the problem. Extensive numerical simulations also corroborate the effectiveness
of the proposed approach.
| 1 | 0 | 1 | 0 | 0 | 0 |
Dropout-based Active Learning for Regression | Active learning is relevant and challenging for high-dimensional regression
models when the annotation of the samples is expensive. Yet most of the
existing sampling methods cannot be applied to large-scale problems, consuming
too much time for data processing. In this paper, we propose a fast active
learning algorithm for regression, tailored for neural network models. It is
based on uncertainty estimation from stochastic dropout output of the network.
Experiments on both synthetic and real-world datasets show comparable or better
performance (depending on the accuracy metric) compared to the baselines.
This approach can be generalized to other deep learning architectures. It can
be used to systematically improve a machine-learning model as it offers a
computationally efficient way of sampling additional data.
| 0 | 0 | 0 | 1 | 0 | 0 |
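A minimal sketch of the selection step, assuming uncertainty is taken as the spread of predictions across stochastic forward passes with dropout kept active at prediction time; the toy one-hidden-layer regressor, dropout rate, and query size are illustrative placeholders rather than the architecture used above.

```python
# Sketch: pick pool samples whose MC-dropout predictions vary the most.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regressor with dropout kept active at prediction time.
W1, b1 = rng.normal(size=(1, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def predict_with_dropout(x, p_drop=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)                      # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop                   # stochastic dropout mask
    h = h * mask / (1.0 - p_drop)
    return (h @ W2 + b2).ravel()

def select_for_labeling(X_pool, n_query=10, n_forward=30):
    """Pick the pool points whose MC-dropout predictions vary the most."""
    preds = np.stack([predict_with_dropout(X_pool) for _ in range(n_forward)])
    uncertainty = preds.std(axis=0)                       # per-sample std over passes
    return np.argsort(-uncertainty)[:n_query]

X_pool = rng.uniform(-3, 3, size=(1000, 1))
print("indices to annotate next:", select_for_labeling(X_pool))
```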
Modeling Human Categorization of Natural Images Using Deep Feature Representations | Over the last few decades, psychologists have developed sophisticated formal
models of human categorization using simple artificial stimuli. In this paper,
we use modern machine learning methods to extend this work into the realm of
naturalistic stimuli, enabling human categorization to be studied over the
complex visual domain in which it evolved and developed. We show that
representations derived from a convolutional neural network can be used to
model behavior over a database of >300,000 human natural image classifications,
and find that a group of models based on these representations perform well,
near the reliability of human judgments. Interestingly, this group includes
both exemplar and prototype models, contrasting with the dominance of exemplar
models in previous work. We are able to improve the performance of the
remaining models by preprocessing neural network representations to more
closely capture human similarity judgments.
| 1 | 0 | 0 | 1 | 0 | 0 |
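To make the exemplar/prototype contrast above concrete, here is a small sketch over generic feature vectors (standing in for CNN representations); the exponential similarity kernel and its scale are illustrative choices, not parameters fitted to the behavioral data described in the abstract.

```python
# Sketch: prototype vs. exemplar categorization over feature vectors.
import numpy as np

def prototype_probs(x, features, labels, c=1.0):
    """Prototype model: compare x to each category's mean feature vector."""
    classes = np.unique(labels)
    protos = np.array([features[labels == k].mean(axis=0) for k in classes])
    sims = np.exp(-c * np.linalg.norm(protos - x, axis=1))
    return classes, sims / sims.sum()

def exemplar_probs(x, features, labels, c=1.0):
    """Exemplar (GCM-style) model: sum similarities to every stored exemplar."""
    classes = np.unique(labels)
    sims = np.exp(-c * np.linalg.norm(features - x, axis=1))
    class_sims = np.array([sims[labels == k].sum() for k in classes])
    return classes, class_sims / class_sims.sum()

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(m, 1.0, size=(50, 8)) for m in (0.0, 2.0)])  # two toy categories
labs = np.repeat([0, 1], 50)
query = rng.normal(1.0, 1.0, size=8)
print(prototype_probs(query, feats, labs)[1], exemplar_probs(query, feats, labs)[1])
```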
BARCHAN: Blob Alignment for Robust CHromatographic ANalysis | Comprehensive two-dimensional gas chromatography (GCxGC) plays a central role
in the elucidation of complex samples. The automation of the identification
of peak areas is of prime interest to obtain a fast and repeatable analysis of
chromatograms. To determine the concentration of compounds or pseudo-compounds,
templates of blobs are defined and superimposed on a reference chromatogram.
The templates then need to be modified when different chromatograms are
recorded. In this study, we present a chromatogram and template alignment
method based on peak registration called BARCHAN. Peaks are identified using a
robust mathematical morphology tool. The alignment is performed by a
probabilistic estimation of a rigid transformation along the first dimension,
and a non-rigid transformation in the second dimension, taking into account
noise, outliers and missing peaks in a fully automated way. Resulting aligned
chromatograms and masks are presented on two datasets. The proposed algorithm
proves to be fast and reliable. It significantly reduces the time to results
for GCxGC analysis.
| 1 | 1 | 0 | 0 | 0 | 0 |
Homogeneity Pursuit in Single Index Models based Panel Data Analysis | Panel data analysis is an important topic in statistics and econometrics.
Traditionally, in panel data analysis, all individuals are assumed to share the
same unknown parameters, e.g. the same coefficients of covariates when
linear models are used, and the differences between the individuals are
accounted for by cluster effects. This kind of modelling only makes sense if
our main interest is in the global trend, because it would not be able
to tell us anything about the individual attributes, which are sometimes very
important. In this paper, we propose a model based on single index
models embedded with homogeneity for panel data analysis, which builds the
individual attributes into the model and is parsimonious at the same time. We
develop a data driven approach to identify the structure of homogeneity, and
estimate the unknown parameters and functions based on the identified
structure. Asymptotic properties of the resulting estimators are established.
Intensive simulation studies conducted in this paper also show that the resulting
estimators work very well when the sample size is finite. Finally, the proposed
model is applied to a public financial dataset and a UK climate dataset;
the results reveal some interesting findings.
| 0 | 0 | 1 | 1 | 0 | 0 |
Feeding vs. Falling: The growth and collapse of molecular clouds in a turbulent interstellar medium | In order to understand the origin of observed molecular cloud properties, it
is critical to understand how clouds interact with their environments during
their formation, growth, and collapse. It has been suggested that
accretion-driven turbulence can maintain clouds in a highly turbulent state,
preventing runaway collapse, and explaining the observed non-thermal velocity
dispersions. We present 3D, AMR, MHD simulations of a kiloparsec-scale,
stratified, supernova-driven, self-gravitating, interstellar medium, including
diffuse heating and radiative cooling. These simulations model the formation
and evolution of a molecular cloud population in the turbulent interstellar
medium. We use zoom-in techniques to focus on the dynamics of the mass
accretion and its history for individual molecular clouds. We find that mass
accretion onto molecular clouds proceeds as a combination of turbulent and near
free-fall accretion of a gravitationally bound envelope. Nearby supernova
explosions have a dual role, compressing the envelope, boosting accreted mass,
but also disrupting parts of the envelope and eroding mass from the cloud's
surface. It appears that the inflow rate of kinetic energy onto clouds from
supernova explosions is insufficient to explain the net rate of change of the
cloud kinetic energy. In the absence of self-consistent star formation,
conversion of gravitational potential into kinetic energy during contraction
seems to be the main driver of non-thermal motions within clouds. We conclude
that although clouds interact strongly with their environments, bound clouds
are always in a state of gravitational contraction, close to runaway, and their
properties are a natural result of this collapse.
| 0 | 1 | 0 | 0 | 0 | 0 |
Complex waveguide based on a magneto-optic layer and a dielectric photonic crystal | We theoretically investigate the dispersion and polarization properties of
the electromagnetic waves in a multi-layered structure composed of a
magneto-optic waveguide on dielectric substrate covered by one-dimensional
dielectric photonic crystal. The numerical analysis of such a complex structure
shows polarization filtration of TE- and TM-modes depending on geometrical
parameters of the waveguide and photonic crystal. We consider different regimes
of mode propagation inside such a structure: when guiding modes propagate
inside the magnetic film and decay in the photonic crystal, and when they propagate
in both the magnetic film and the photonic crystal.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discriminants of complete intersection space curves | In this paper, we develop a new approach to the discriminant of a complete
intersection curve in the 3-dimensional projective space. By relying on
resultant theory, we first prove a new formula that allows us to define this
discriminant without ambiguity and over any commutative ring, in particular in
any characteristic. This formula also provides a new method for evaluating and
computing this discriminant efficiently, without the need to introduce new
variables as with the well-known Cayley trick. Then, we obtain new properties
and computational rules such as the covariance and the invariance formulas.
Finally, we show that our definition of the discriminant satisfies the
expected geometric property and hence yields an effective smoothness criterion
for complete intersection space curves. Actually, we show that in the generic
setting, it is the defining equation of the discriminant scheme if the ground
ring is assumed to be a unique factorization domain.
| 1 | 0 | 1 | 0 | 0 | 0 |
On the Characteristic and Permanent Polynomials of a Matrix | There is a digraph corresponding to every square matrix over $\mathbb{C}$. We
generate a recurrence relation using the Laplace expansion to calculate the
characteristic and permanent polynomials of a square matrix. Solving this
recurrence relation, we find that the characteristic and permanent
polynomials can be calculated in terms of the characteristic and permanent
polynomials of some specific induced subdigraphs of blocks in the digraph,
respectively. Interestingly, these induced subdigraphs are vertex-disjoint and
they partition the digraph. Similarly to the characteristic and permanent
polynomials, the determinant and permanent can also be calculated. Therefore,
this article provides a combinatorial meaning of these useful quantities of
matrix theory. We conclude this article with a number of open problems which
may be attempted for further research in this direction.
| 1 | 0 | 0 | 0 | 0 | 0 |
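For concreteness, a brute-force reference sketch of the two quantities discussed above: the characteristic polynomial via NumPy and the permanent polynomial per(xI - A) by summing over permutations and interpolating; this is only a small-matrix baseline, not the block-based recurrence developed in the paper.

```python
# Sketch: brute-force characteristic and permanent polynomials of a small matrix.
import itertools
import numpy as np

def permanent(M):
    """Brute-force permanent: sum over all permutations of products of entries."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def permanent_polynomial(A):
    """Coefficients (highest degree first) of per(x I - A), by interpolation."""
    n = A.shape[0]
    xs = np.arange(n + 1, dtype=float)
    ys = [permanent(x * np.eye(n) - A) for x in xs]
    return np.polyfit(xs, ys, n)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])          # adjacency matrix of a path on 3 vertices
print("characteristic poly coeffs:", np.round(np.poly(A), 3))          # det(x I - A)
print("permanent      poly coeffs:", np.round(permanent_polynomial(A), 3))
```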
A bulk-boundary correspondence for dynamical phase transitions in one-dimensional topological insulators and superconductors | We study the Loschmidt echo for quenches in open one-dimensional lattice
models with symmetry protected topological phases. For quenches where dynamical
quantum phase transitions do occur we find that cusps in the bulk return rate
at critical times tc are associated with sudden changes in the boundary
contribution. For our main example, the Su-Schrieffer-Heeger model, we show
that these sudden changes are related to the periodical appearance of two
eigenvalues close to zero in the dynamical Loschmidt matrix. We demonstrate,
furthermore, that the structure of the Loschmidt spectrum is linked to the
periodic creation of long-range entanglement between the edges of the system.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Consciousness Prior | A new prior is proposed for representation learning, which can be combined
with other priors in order to help disentangling abstract factors from each
other. It is inspired by the phenomenon of consciousness seen as the formation
of a low-dimensional combination of a few concepts constituting a conscious
thought, i.e., consciousness as awareness at a particular time instant. This
provides a powerful constraint on the representation in that such
low-dimensional thought vectors can correspond to statements about reality
which are true, highly probable, or very useful for taking decisions. The fact
that a few elements of the current state can be combined into such a predictive
or useful statement is a strong constraint and deviates considerably from the
maximum likelihood approaches to modelling data and how states unfold in the
future based on an agent's actions. Instead of making predictions in the
sensory (e.g. pixel) space, the consciousness prior allows the agent to make
predictions in the abstract space, with only a few dimensions of that space
being involved in each of these predictions. The consciousness prior also makes
it natural to map conscious states to natural language utterances or to express
classical AI knowledge in the form of facts and rules, although the conscious
states may be richer than what can be expressed easily in the form of a
sentence, a fact or a rule.
| 1 | 0 | 0 | 1 | 0 | 0 |
Multi-Scale Pipeline for the Search of String-Induced CMB Anisotropies | We propose a multi-scale edge-detection algorithm to search for the
Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the Cosmic
Microwave Background (CMB) anisotropies. Curvelet decomposition and extended
Canny algorithm are used to enhance the string detectability. Various
statistical tools are then applied to quantify the deviation of CMB maps having
a cosmic string contribution with respect to pure Gaussian anisotropies of
inflationary origin. These statistical measures include the one-point
probability density function, the weighted two-point correlation function
(TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the
up-crossing map, as well as their cross-correlation. We use this algorithm on a
hundred of simulated Nambu-Goto CMB flat sky maps, covering approximately
$10\%$ of the sky, and for different string tensions $G\mu$. On noiseless sky
maps with an angular resolution of $0.9'$, we show that our pipeline detects
CSs with $G\mu$ as low as $G\mu\gtrsim 4.3\times 10^{-10}$. At the same
resolution, but with a noise level typical of a CMB-S4 phase II experiment, the
detection threshold would rise to $G\mu\gtrsim 1.2 \times 10^{-7}$.
| 0 | 1 | 0 | 1 | 0 | 0 |
A simultaneous generalization of the theorems of Chevalley-Warning and Morlaye | Inspired by recent work of I. Baoulina, we give a simultaneous generalization
of the theorems of Chevalley-Warning and Morlaye.
| 0 | 0 | 1 | 0 | 0 | 0 |
Some Time-changed fractional Poisson processes | In this paper, we study the fractional Poisson process (FPP) time-changed by
an independent Lévy subordinator and the inverse of the Lévy subordinator,
which we call TCFPP-I and TCFPP-II, respectively. Various distributional
properties of these processes are established. We show that, under certain
conditions, the TCFPP-I has the long-range dependence property, and its law
of the iterated logarithm is proved. It is shown that the TCFPP-II is a renewal
process and its waiting time distribution is identified. Its bivariate
distributions and also the governing difference-differential equation are
derived. Some specific examples for both processes are discussed. Finally,
we present simulations of the sample paths of these processes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast algorithm of adaptive Fourier series | Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was
originated for the goal of positive frequency representations of signals. It
achieved the goal and at the same time offered fast decompositions of signals.
There then arose several types of AFDs. AFD merged with the greedy algorithm
idea, and in particular, motivated the so-called pre-orthogonal greedy
algorithm (Pre-OGA) that was proven to be the most efficient greedy algorithm.
The cost of the advantages of the AFD-type decompositions is, however, the high
computational complexity due to the involvement of maximal selections of the
dictionary parameters. The present paper offers one formulation of the 1-D AFD
algorithm by building the FFT algorithm into it. Accordingly, the algorithm
complexity is reduced, from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M
N\log_2 N)$, where $N$ denotes the number of the discretization points on the
unit circle and $M$ denotes the number of points in $[0,1)$. This greatly
enhances the applicability of AFD. Experiments are carried out to show the high
efficiency of the proposed algorithm.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hybrid Indexes to Expedite Spatial-Visual Search | Due to the growth of geo-tagged images, recent web and mobile applications
provide search capabilities for images that are similar to a given query image
and simultaneously within a given geographical area. In this paper, we focus on
designing index structures to expedite these spatial-visual searches. We start
with baseline indexes that are straightforward extensions of the currently popular
spatial (R*-tree) and visual (LSH) index structures. Subsequently, we propose
hybrid index structures that evaluate both spatial and visual features in
tandem. The unique challenge of this type of query is that there are
inaccuracies in both spatial and visual features. Therefore, different
traversals of the index structures may produce different images as output, some
of which are more relevant to the query than others. We compare our hybrid
structures with a set of baseline indexes in both performance and result
accuracy using three real world datasets from Flickr, Google Street View, and
GeoUGV.
| 1 | 0 | 0 | 0 | 0 | 0 |
Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography | Electron Cryo-Tomography (ECT) enables 3D visualization of macromolecule
structure inside single cells. Macromolecule classification approaches based on
convolutional neural networks (CNN) were developed to separate millions of
macromolecules captured from ECT systematically. However, given the fast
accumulation of ECT data, it will soon become necessary to use CNN models to
efficiently and accurately separate substantially more macromolecules at the
prediction stage, which requires additional computational costs. To speed up
the prediction, we compress classification models into compact neural networks
with little loss in accuracy for deployment. Specifically, we propose to perform
model compression through knowledge distillation. Firstly, a complex teacher
network is trained to generate soft labels with better classification
feasibility, followed by training customized student networks with simple
architectures on the soft labels to compress model complexity. Our tests
demonstrate that our compressed models significantly reduce the number of
parameters and time cost while maintaining similar classification accuracy.
| 0 | 0 | 0 | 1 | 1 | 0 |
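A minimal sketch of the soft-label distillation loss (temperature-scaled teacher probabilities plus a hard-label cross-entropy term); the temperature, mixing weight, and random logits below are illustrative placeholders, and the actual teacher and student would be the CNN classifiers over subtomograms described above.

```python
# Sketch: knowledge-distillation loss mixing soft teacher labels and hard labels.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels, T=4.0, alpha=0.7):
    """alpha * T^2 * KL(teacher_T || student_T) + (1 - alpha) * cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)) * T * T
    p_hard = softmax(student_logits)
    n = len(hard_labels)
    hard = -np.mean(np.log(p_hard[np.arange(n), hard_labels] + 1e-12))
    return alpha * soft + (1.0 - alpha) * hard

rng = np.random.default_rng(0)
teacher = rng.normal(size=(32, 10)) * 3.0     # pretend teacher logits (batch of 32, 10 classes)
student = rng.normal(size=(32, 10))           # pretend student logits
labels = rng.integers(0, 10, size=32)
print("distillation loss:", round(distillation_loss(student, teacher, labels), 3))
```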
Low quasiparticle coherence temperature in the one band-Hubbard model: A slave-boson approach | We use the Kotliar-Ruckenstein slave-boson formalism to study the temperature
dependence of paramagnetic phases of the one-band Hubbard model for a variety
of band structures. We calculate the Fermi liquid quasiparticle spectral weight
$Z$ and identify the temperature at which it decreases significantly, signalling a
crossover to a bad-metal region. Near the Mott metal-insulator transition, this
coherence temperature $T_\textrm{coh}$ is much lower than the Fermi temperature
of the uncorrelated Fermi gas, as is observed in a broad range of strongly
correlated electron materials. After a proper rescaling of temperature and
interaction, we find a universal behavior that is independent of the band
structure of the system. We obtain the temperature-interaction phase diagram as
a function of doping, and we compare the temperature dependence of the double
occupancy, entropy, and charge compressibility with previous results obtained
with Dynamical Mean-Field Theory. We analyse the stability of the method by
calculating the charge compressibility.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Note on Iterated Consistency and Infinite Proofs | Schmerl and Beklemishev's work on iterated reflection achieves two aims: It
introduces the important notion of $\Pi^0_1$-ordinal, characterizing the
$\Pi^0_1$-theorems of a theory in terms of transfinite iterations of
consistency; and it provides an innovative calculus to compute the
$\Pi^0_1$-ordinals for a range of theories. The present note demonstrates that
these achievements are independent: We read off $\Pi^0_1$-ordinals from a
Schütte-style ordinal analysis via infinite proofs, in a direct and
transparent way.
| 0 | 0 | 1 | 0 | 0 | 0 |
Turning Internet of Things(IoT) into Internet of Vulnerabilities (IoV) : IoT Botnets | Internet of Things (IoT) is the next big evolutionary step in the world of
internet. The main intention behind the IoT is to enable safer living and risk
mitigation on different levels of life. With the advent of IoT botnets, the
view of IoT devices has changed from an enabler of enhanced living into an
Internet of vulnerabilities for cyber criminals. IoT botnets have exposed two
glaring issues: 1) a large number of IoT devices are accessible over the
public Internet, and 2) security (if considered at all) is often an afterthought in
the architecture of many widespread IoT devices. In this article, we briefly
outline the anatomy of the IoT botnets and their basic mode of operations. Some
of the major DDoS incidents using IoT botnets in recent times along with the
corresponding exploited vulnerabilities will be discussed. We also provide
remedies and recommendations to mitigate IoT related cyber risks and briefly
illustrate the importance of cyber insurance in the modern connected world.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cwikel estimates revisited | In this paper, we propose a new approach to Cwikel estimates both for the
Euclidean space and for the noncommutative Euclidean space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Smooth Pinball Neural Network for Probabilistic Forecasting of Wind Power | Uncertainty analysis in the form of probabilistic forecasting can
significantly improve decision making processes in the smart power grid for
better integrating renewable energy sources such as wind. Whereas point
forecasting provides a single expected value, probabilistic forecasts provide
more information in the form of quantiles, prediction intervals, or full
predictive densities. This paper analyzes the effectiveness of a novel approach
for nonparametric probabilistic forecasting of wind power that combines a
smooth approximation of the pinball loss function with a neural network
architecture and a weight initialization scheme to prevent the quantile
crossover problem. A numerical case study is conducted using publicly
available wind data from the Global Energy Forecasting Competition 2014.
Multiple quantiles are estimated to form 10% to 90% prediction intervals, which
are evaluated using a quantile score and reliability measures. Benchmark models
such as the persistence and climatology distributions, multiple quantile
regression, and support vector quantile regression are used for comparison;
the results demonstrate that the proposed approach leads to improved performance
while preventing the problem of overlapping quantile estimates.
| 0 | 0 | 0 | 1 | 0 | 0 |
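A sketch of one common softplus-based smooth approximation of the pinball loss, which recovers the exact quantile loss as the smoothing parameter goes to zero; the exact smoothing used in the paper may differ, and `alpha` below is an illustrative value.

```python
# Sketch: exact pinball loss vs. a softplus-smoothed approximation.
import numpy as np

def pinball(u, tau):
    """Exact pinball (quantile) loss for residual u = y - q."""
    return np.maximum(tau * u, (tau - 1.0) * u)

def smooth_pinball(u, tau, alpha=0.01):
    """Softplus-smoothed pinball: tau*u + alpha*log(1 + exp(-u/alpha))."""
    return tau * u + alpha * np.logaddexp(0.0, -u / alpha)

u = np.linspace(-1, 1, 5)
for tau in (0.1, 0.5, 0.9):
    print(tau, np.round(pinball(u, tau), 3), np.round(smooth_pinball(u, tau), 3))
```

Because the smoothed loss is differentiable everywhere, it can be minimized directly with gradient-based training of the quantile-output network.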
Asynchronous Coordinate Descent under More Realistic Assumptions | Asynchronous-parallel algorithms have the potential to vastly speed up
computation by eliminating costly synchronization. However, our understanding of
these algorithms is limited because current convergence results for asynchronous
(block) coordinate descent algorithms are based on somewhat unrealistic
assumptions. In particular, the age of the shared optimization variables being
used to update a block is assumed to be independent of the block being updated.
Also, it is assumed that the updates are applied to randomly chosen blocks. In
this paper, we argue that these assumptions either fail to hold or will imply
less efficient implementations. We then prove the convergence of
asynchronous-parallel block coordinate descent under more realistic
assumptions, in particular, always without the independence assumption. The
analysis permits both the deterministic (essentially) cyclic and random rules
for block choices. Because a bound on the asynchronous delays may or may not be
available, we establish convergence for both bounded delays and unbounded
delays. The analysis also covers nonconvex, weakly convex, and strongly convex
functions. We construct Lyapunov functions that directly model both objective
progress and delays, so delays are not treated as errors or noise. A
continuous-time ODE is provided to explain the construction at a high level.
| 0 | 0 | 1 | 0 | 0 | 0 |
Zero-temperature magnetic response of small fullerene molecules at the classical and full quantum limit | The ground-state magnetic response of fullerene molecules with up to 36
vertices is calculated when classical spins or spins with magnitude $s=\frac{1}{2}$
are located on their vertices and interact according to the nearest-neighbor
antiferromagnetic Heisenberg model. The frustrated topology, which originates
in the pentagons of the fullerenes and is enhanced by their close proximity,
leads to a significant number of classical magnetization and susceptibility
discontinuities, something not expected for a model lacking magnetic
anisotropy. This establishes the classical discontinuities as a generic feature
of fullerene molecules irrespective of their symmetry. The largest numbers of
discontinuities are found for the molecule with 26 sites, which has four magnetization and
two susceptibility discontinuities, and for an isomer with 34 sites, which has three of each.
In addition, for several of the fullerenes the classical zero-field lowest
energy configuration has finite magnetization, which is unexpected for
antiferromagnetic interactions between an even number of spins and with each
spin having the same number of nearest-neighbors. The molecules come in
different symmetries and topologies and there are only a few patterns of
magnetic behavior that can be detected from such a small sample of relatively
small fullerenes. Contrary to the classical case, in the full quantum limit
$s=\frac{1}{2}$ there are no discontinuities for a subset of the molecules that
was considered. This leaves the icosahedral symmetry fullerenes as the only
ones known to support ground-state magnetization discontinuities for
$s=\frac{1}{2}$. It is also found that a molecule with 34 sites has a
doubly-degenerate ground state when $s=\frac{1}{2}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stochastic Chemical Reaction Networks for Robustly Approximating Arbitrary Probability Distributions | We show that discrete distributions on the $d$-dimensional non-negative
integer lattice can be approximated arbitrarily well via the marginals of
stationary distributions for various classes of stochastic chemical reaction
networks. We begin by providing a class of detailed balanced networks and prove
that they can approximate any discrete distribution to any desired accuracy.
However, these detailed balanced constructions rely on the ability to
initialize a system precisely, and are therefore susceptible to perturbations
in the initial conditions. We therefore provide another construction based on
the ability to approximate point mass distributions and prove that this
construction is capable of approximating arbitrary discrete distributions for
any choice of initial condition. In particular, the developed models are
ergodic, so their limit distributions are robust to a finite number of
perturbations over time in the counts of molecules.
| 0 | 0 | 0 | 0 | 1 | 0 |
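As a concrete, textbook-scale illustration of a reaction network with a prescribed stationary law (not one of the constructions from the paper), the sketch below runs a Gillespie simulation of the birth-death network 0 -> X at rate lam and X -> 0 at rate mu per molecule of X, whose stationary distribution is Poisson with mean lam/mu.

```python
# Sketch: Gillespie simulation of a birth-death network with Poisson stationary law.
import numpy as np

def gillespie_birth_death(lam=5.0, mu=1.0, t_end=2000.0, x0=0, rng=None):
    """Simulate 0 -> X at rate lam and X -> 0 at rate mu*x; return time-averaged occupancy."""
    rng = rng or np.random.default_rng(0)
    t, x = 0.0, x0
    occupancy = {}                            # total time spent in each state
    while t < t_end:
        rates = np.array([lam, mu * x])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        occupancy[x] = occupancy.get(x, 0.0) + dt
        x += 1 if rng.random() < rates[0] / total else -1
        t += dt
    z = sum(occupancy.values())
    return {k: v / z for k, v in sorted(occupancy.items())}

dist = gillespie_birth_death()
mean = sum(k * p for k, p in dist.items())
print("empirical mean:", round(mean, 2), "(target lam/mu = 5.0)")
```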
Deep Reinforcement Learning for Event-Driven Multi-Agent Decision Processes | The incorporation of macro-actions (temporally extended actions) into
multi-agent decision problems has the potential to address the curse of
dimensionality associated with such decision problems. Since macro-actions last
for stochastic durations, multiple agents executing decentralized policies in
cooperative environments must act asynchronously. We present an algorithm that
modifies Generalized Advantage Estimation for temporally extended actions,
allowing a state-of-the-art policy optimization algorithm to optimize policies
in Dec-POMDPs in which agents act asynchronously. We show that our algorithm is
capable of learning optimal policies in two cooperative domains, one involving
real-time bus holding control and one involving wildfire fighting with unmanned
aircraft. Our algorithm works by framing problems as "event-driven decision
processes," which are scenarios where the sequence and timing of actions and
events are random and governed by an underlying stochastic process. In addition
to optimizing policies with continuous state and action spaces, our algorithm
also facilitates the use of event-driven simulators, which do not require time
to be discretized into time-steps. We demonstrate the benefit of using
event-driven simulation in the context of multiple agents taking asynchronous
actions. We show that fixed time-step simulation risks obfuscating the sequence
in which closely-separated events occur, adversely affecting the policies
learned. Additionally, we show that arbitrarily shrinking the time-step scales
poorly with the number of agents.
| 1 | 0 | 0 | 0 | 0 | 0 |
Early Salient Region Selection Does Not Drive Rapid Visual Categorization | The current dominant visual processing paradigm in both human and machine
research is the feedforward, layered hierarchy of neural-like processing
elements. Within this paradigm, visual saliency is seen by many to have a
specific role, namely that of early selection. Early selection is thought to
enable very fast visual performance by limiting processing to only the most
relevant candidate portions of an image. Though this strategy has indeed led to
improved processing time efficiency in machine algorithms, at least one set of
critical tests of this idea has never been performed with respect to the role
of early selection in human vision. How would the best of the current saliency
models perform on the stimuli used by experimentalists who first provided
evidence for this visual processing paradigm? Would the algorithms really
provide correct candidate sub-images to enable fast categorization on those
same images? Here, we report on a new series of tests of these questions whose
results suggest that it is quite unlikely that such an early selection process
has any role in human rapid visual categorization.
| 0 | 0 | 0 | 0 | 1 | 0 |
Bonsai: Synthesis-Based Reasoning for Type Systems | We describe algorithms for symbolic reasoning about executable models of type
systems, supporting three queries intended for designers of type systems.
First, we check for type soundness bugs and synthesize a counterexample program
if such a bug is found. Second, we compare two versions of a type system,
synthesizing a program accepted by one but rejected by the other. Third, we
minimize the size of synthesized counterexample programs.
These algorithms symbolically evaluate typecheckers and interpreters,
producing formulas that characterize the set of programs that fail or succeed
in the typechecker and the interpreter. However, symbolically evaluating
interpreters poses efficiency challenges, which are caused by having to merge
execution paths of the various possible input programs. Our main contribution
is the Bonsai tree, a novel symbolic representation of programs and program
states which addresses these challenges. Bonsai trees encode complex syntactic
information in terms of logical constraints, enabling more efficient merging.
We implement these algorithms in the Bonsai tool, an assistant for type
system designers. We perform case studies on how Bonsai helps test and explore
a variety of type systems. Bonsai efficiently synthesizes counterexamples for
soundness bugs that have been inaccessible to automatic tools, and is the first
automated tool to find a counterexample for the recently discovered Scala
soundness bug SI-9633.
| 1 | 0 | 0 | 0 | 0 | 0 |
Preference-based performance measures for Time-Domain Global Similarity method | For the Time-Domain Global Similarity (TDGS) method, which transforms the data
cleaning problem into a binary classification problem about the physical
similarity between channels, directly adopting common performance measures
can only guarantee performance with respect to physical similarity. Nevertheless,
practical data cleaning tasks have preferences regarding the correctness of the original
data sequences. To obtain general expressions for performance measures based
on the preferences of such tasks, this paper uses probability theory to investigate the mapping
relations between the performance of the TDGS method on physical similarity and the correctness
of data sequences. Performance measures for the TDGS
method in several common data cleaning tasks are specified. Cases in which these
preference-based performance measures can be simplified are also presented.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the Prospects for Detecting a Net Photon Circular Polarization Produced by Decaying Dark Matter | If dark matter interactions with Standard Model particles are $CP$-violating,
then dark matter annihilation/decay can produce photons with a net circular
polarization. We consider the prospects for experimentally detecting evidence
for such a circular polarization. We identify optimal models for dark matter
interactions with the Standard Model, from the point of view of detectability
of the net polarization, for the case of either symmetric or asymmetric dark
matter. We find that, for symmetric dark matter, evidence for net polarization
could be found by a search of the Galactic Center by an instrument sensitive to
circular polarization with an efficiency-weighted exposure of at least
$50000~\text{cm}^2~\text{yr}$, provided the systematic detector uncertainties
are constrained at the $1\%$ level. Better sensitivity can be obtained in the
case of asymmetric dark matter. We discuss the prospects for achieving the
needed level of performance using possible detector technologies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Are Bitcoin Bubbles Predictable? Combining a Generalized Metcalfe's Law and the LPPLS Model | We develop a strong diagnostic for bubbles and crashes in bitcoin, by
analyzing the coincidence (and its absence) of fundamental and technical
indicators. Using a generalized Metcalfe's law based on network properties, a
fundamental value is quantified and shown to be heavily exceeded, on at least
four occasions, by bubbles that grow and burst. In these bubbles, we detect a
universal super-exponential unsustainable growth. We model this universal
pattern with the Log-Periodic Power Law Singularity (LPPLS) model, which
parsimoniously captures diverse positive feedback phenomena, such as herding
and imitation. The LPPLS model is shown to provide an ex-ante warning of market
instabilities, quantifying a high crash hazard and probabilistic bracket of the
crash time consistent with the actual corrections; although, as always, the
precise time and trigger (which straw breaks the camel's back) remain exogenous
and unpredictable. Looking forward, our analysis identifies a substantial but
not unprecedented overvaluation in the price of bitcoin, suggesting many months
of volatile sideways bitcoin prices ahead (from the time of writing, March
2018).
| 0 | 0 | 0 | 0 | 0 | 1 |
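For reference, a sketch of the standard LPPLS log-price form used in this line of work, ln p(t) = A + B(tc - t)^m + C(tc - t)^m cos(omega ln(tc - t) - phi), implemented as a plain function; the parameter values in the demo are arbitrary placeholders, not values fitted to bitcoin.

```python
# Sketch: the LPPLS log-price form with illustrative (not fitted) parameters.
import numpy as np

def lppls_log_price(t, tc, m, omega, A, B, C, phi):
    """ln p(t) = A + B*(tc-t)^m + C*(tc-t)^m * cos(omega*ln(tc-t) - phi)."""
    dt = tc - np.asarray(t, dtype=float)
    if np.any(dt <= 0):
        raise ValueError("LPPLS is only defined for t < tc")
    return A + dt ** m * (B + C * np.cos(omega * np.log(dt) - phi))

# Super-exponential growth decorated with log-periodic oscillations,
# accelerating towards the critical time tc.
t = np.linspace(0.0, 0.95, 200)
log_p = lppls_log_price(t, tc=1.0, m=0.5, omega=8.0, A=5.0, B=-1.0, C=0.1, phi=0.0)
print("log-price range:", round(log_p.min(), 3), "to", round(log_p.max(), 3))
```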