title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---
Robust 3D Distributed Formation Control with Application to Quadrotors | We present a distributed control strategy for a team of quadrotors to
autonomously achieve a desired 3D formation. Our approach is based on local
relative position measurements and does not require global position information
or inter-vehicle communication. We assume that quadrotors have a common sense
of direction, which is chosen as the direction of gravitational force measured
by their onboard IMU sensors. However, this assumption is not crucial, and our
approach is robust to inaccuracies and effects of acceleration on gravitational
measurements. In particular, convergence to the desired formation is unaffected
if each quadrotor has a velocity vector that projects positively onto the
desired velocity vector provided by the formation control strategy. We
demonstrate the validity of the proposed approach in an experimental setup and
show that a team of quadrotors achieves a desired 3D formation.
| 1 | 0 | 0 | 0 | 0 | 0 |
Discrete Extremes | Our contribution is to widen the scope of extreme value analysis applied to
discrete-valued data. Extreme values of a random variable $X$ are commonly
modeled using the generalized Pareto distribution, a method that often gives
good results in practice. When $X$ is discrete, we propose two other methods
using a discrete generalized Pareto and a generalized Zipf distribution
respectively. Both are theoretically motivated and we show that they perform
well in estimating rare events in several simulated and real data cases such as
word frequency, tornado outbreaks and multiple births.
| 0 | 0 | 1 | 1 | 0 | 0 |
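The continuous baseline the abstract refers to, modeling threshold exceedances with the generalized Pareto distribution, can be sketched with SciPy (the paper's discrete GPD and Zipf variants are not in SciPy; the threshold choice and simulated data here are illustrative assumptions):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Heavy-tailed synthetic data (Lomax tail, tail index 2.5, so the GPD
# shape parameter should come out near 1/2.5 = 0.4).
data = rng.pareto(2.5, size=20_000)

threshold = np.quantile(data, 0.95)          # peaks-over-threshold
excesses = data[data > threshold] - threshold

# Fit the GPD to the excesses (location fixed at 0).
shape, loc, scale = genpareto.fit(excesses, floc=0)

# Estimate a rare-event probability P(X > q) for a high quantile q:
# P(X > threshold) is 0.05 by construction of the 95% quantile.
q = np.quantile(data, 0.999)
p_tail = 0.05 * genpareto.sf(q - threshold, shape, loc=0, scale=scale)
print(f"shape={shape:.3f}  P(X > q) ~ {p_tail:.2e}")
```

By construction the true P(X > q) is 0.001, so the model-based estimate should land near it.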
Rapid Adaptation with Conditionally Shifted Neurons | We describe a mechanism by which artificial neural networks can learn rapid
adaptation - the ability to adapt on the fly, with little data, to new tasks -
that we call conditionally shifted neurons. We apply this mechanism in the
framework of metalearning, where the aim is to replicate some of the
flexibility of human learning in machines. Conditionally shifted neurons modify
their activation values with task-specific shifts retrieved from a memory
module, which is populated rapidly based on limited task experience. On
metalearning benchmarks from the vision and language domains, models augmented
with conditionally shifted neurons achieve state-of-the-art results.
| 1 | 0 | 0 | 1 | 0 | 0 |
JamBot: Music Theory Aware Chord Based Generation of Polyphonic Music with LSTMs | We propose a novel approach for the generation of polyphonic music based on
LSTMs. We generate music in two steps. First, a chord LSTM predicts a chord
progression based on a chord embedding. A second LSTM then generates polyphonic
music from the predicted chord progression. The generated music sounds pleasing
and harmonic, with only a few dissonant notes. It has a clear long-term structure
that is similar to what a musician would play during a jam session. We show
that our approach is sensible from a music theory perspective by evaluating the
learned chord embeddings. Surprisingly, our simple model managed to extract the
circle of fifths, an important tool in music theory, from the dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Inflationary Primordial Black Holes as All Dark Matter | Following a new microlensing constraint on primordial black holes (PBHs) with
$\sim10^{20}$--$10^{28}\,\mathrm{g}$~[1], we revisit the idea of PBH as all
Dark Matter (DM). We show that the updated observational constraints suggest
that a viable mass function for PBHs as all DM must peak at $\simeq
10^{20}\,\mathrm{g}$ with a small width $\sigma \lesssim 0.1$, once
observational constraints are imposed on an extended mass function in a proper
way. We also provide an inflation model that successfully generates PBHs as all
DM fulfilling this requirement.
| 0 | 1 | 0 | 0 | 0 | 0 |
Condition number and matrices | The concept of the condition number $\kappa(A) =
\|A\|\|A^{-1}\|$, where $A$ is an $n \times n$ real or complex matrix and the
norm used is the spectral norm, is well known. Although it is very common to
think of $\kappa(A)$ as "the" condition number of $A$, the truth is that
condition numbers are associated with problems, not just with instances of
problems. Our goal is to clarify this difference. We will introduce the general
concept of a condition number and apply it to the particular case of real or
complex matrices. After this, we will introduce the classic condition number
$\kappa(A)$ of a matrix and show some known results.
| 0 | 0 | 1 | 0 | 0 | 0 |
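For the spectral norm, $\kappa(A)$ equals the ratio of the largest to the smallest singular value of $A$; a quick NumPy sketch (the example matrices are arbitrary):

```python
import numpy as np

# Spectral-norm condition number: kappa(A) = ||A|| * ||A^{-1}||
# = sigma_max(A) / sigma_min(A).
def spectral_condition_number(A):
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
kappa = spectral_condition_number(A)

# Agrees with NumPy's built-in 2-norm condition number.
assert np.isclose(kappa, np.linalg.cond(A, 2))

# A perfectly conditioned instance: any orthogonal matrix has kappa = 1.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(5, 5)))
print(spectral_condition_number(Q))  # ~ 1.0 for any orthogonal matrix
```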
Computational Tools in Weighted Persistent Homology | In this paper, we study further properties and applications of weighted
homology and persistent homology. We introduce the Mayer-Vietoris sequence and
generalized Bockstein spectral sequence for weighted homology. For
applications, we show an algorithm to construct a filtration of weighted
simplicial complexes from a weighted network. We also prove a theorem that
allows us to calculate the mod $p^2$ weighted persistent homology given some
information on the mod $p$ weighted persistent homology.
| 0 | 0 | 1 | 0 | 0 | 0 |
Dirac fermions in borophene | Honeycomb structures of group IV elements can host massless Dirac fermions
with non-trivial Berry phases. Their potential for electronic applications has
attracted great interest and spurred a broad search for new Dirac materials
especially in monolayer structures. We present a detailed investigation of the
$\beta_{12}$ boron sheet, which is a borophene structure that can form
spontaneously on a Ag(111) surface. Our tight-binding analysis revealed that
the lattice of the $\beta_{12}$-sheet could be decomposed into two triangular
sublattices in a way similar to that for a honeycomb lattice, thereby hosting
Dirac cones. Furthermore, each Dirac cone could be split by introducing
periodic perturbations representing overlayer-substrate interactions. These
unusual electronic structures were confirmed by angle-resolved photoemission
spectroscopy and validated by first-principles calculations. Our results
suggest monolayer boron as a new platform for realizing novel high-speed
low-dissipation devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Resolving the notorious case of conical intersections for coupled cluster dynamics | The motion of electrons and nuclei in photochemical events often involves
conical intersections, degeneracies between electronic states. They serve as
funnels for nuclear relaxation - on the femtosecond scale - in processes where
the electrons and nuclei couple nonadiabatically. Accurate ab initio quantum
chemical models are essential for interpreting experimental measurements of
such phenomena. In this paper we resolve a long-standing problem in coupled
cluster theory, presenting the first formulation of the theory that correctly
describes conical intersections between excited electronic states of the same
symmetry. This new development demonstrates that the highly accurate coupled
cluster theory can be applied to describe dynamics on excited electronic states
involving conical intersections.
| 0 | 1 | 0 | 0 | 0 | 0 |
Benchmarking gate-based quantum computers | With the advent of public access to small gate-based quantum processors, it
becomes necessary to develop a benchmarking methodology such that independent
researchers can validate the operation of these processors. We explore the
usefulness of a number of simple quantum circuits as benchmarks for gate-based
quantum computing devices and show that circuits performing identity operations
are very simple, scalable and sensitive to gate errors and are therefore very
well suited for this task. We illustrate the procedure by presenting benchmark
results for the IBM Quantum Experience, a cloud-based platform for gate-based
quantum computing.
| 0 | 1 | 0 | 0 | 0 | 0 |
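The idea of an identity-circuit benchmark, whose outcome is known in advance and which degrades visibly under gate errors, can be sketched with plain matrix algebra (this is not the IBM Quantum Experience API; the coherent over-rotation error model is an assumption for illustration):

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about X as a 2x2 unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_benchmark(n_pairs, over_rotation=0.0):
    """Run n_pairs of RX(pi)·RX(pi) = I circuits; each physical gate
    over-rotates by `over_rotation`. Returns survival probability of |0>."""
    U = np.eye(2, dtype=complex)
    for _ in range(2 * n_pairs):
        U = rx(np.pi + over_rotation) @ U
    psi = U @ np.array([1.0, 0.0], dtype=complex)  # start in |0>
    return abs(psi[0]) ** 2                        # P(measure 0)

print(identity_benchmark(10, over_rotation=0.0))   # ideal gates: survival ~ 1
print(identity_benchmark(10, over_rotation=0.02))  # coherent error lowers it
```

Because the error accumulates with circuit depth, deeper identity circuits are more sensitive to the same per-gate error, matching the abstract's point about scalability and sensitivity.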
Determinant structure for tau-function of holonomic deformation of linear differential equations | In our previous works, a relationship between Hermite's two approximation
problems and Schlesinger transformations of linear differential equations has
been clarified. In this paper, we study tau-functions associated with holonomic
deformations of linear differential equations by using Hermite's two
approximation problems. As a result, we present a determinant formula for the
ratio of tau-functions (tau-quotient).
| 0 | 1 | 1 | 0 | 0 | 0 |
Transfer Operator Based Approach for Optimal Stabilization of Stochastic System | In this paper we develop a linear transfer Perron-Frobenius operator-based
approach for the optimal stabilization of stochastic nonlinear systems. One of
the main highlights of the proposed transfer operator based approach is that
both the theory and the computational framework developed for the optimal
stabilization of deterministic dynamical systems in [1] carry over to the
stochastic case with little change. The optimal stabilization problem is formulated as an
infinite dimensional linear program. Set oriented numerical methods are
proposed for the finite dimensional approximation of the transfer operator and
the controller. Simulation results are presented to verify the developed
framework.
| 1 | 0 | 1 | 0 | 0 | 0 |
CryptoDL: Deep Neural Networks over Encrypted Data | Machine learning algorithms based on deep neural networks have achieved
remarkable results and are being extensively used in different domains.
However, these algorithms require access to raw data, which is
often privacy sensitive. To address this issue, we develop new techniques to
provide solutions for running deep neural networks over encrypted data. In this
paper, we develop new techniques to adopt deep neural networks within the
practical limitation of current homomorphic encryption schemes. More
specifically, we focus on classification with well-known convolutional neural
networks (CNNs). First, we design methods to approximate the activation
functions commonly used in CNNs (i.e., ReLU, Sigmoid, and Tanh) with low-degree
polynomials, which is essential for efficient homomorphic encryption schemes.
Then, we train convolutional neural networks with the approximation polynomials
instead of original activation functions and analyze the performance of the
models. Finally, we implement convolutional neural networks over encrypted data
and measure performance of the models. Our experimental results validate the
soundness of our approach with several convolutional neural networks with
varying number of layers and structures. When applied to the MNIST optical
character recognition tasks, our approach achieves 99.52\% accuracy which
significantly outperforms the state-of-the-art solutions and is very close to
the accuracy of the best non-private version, 99.77\%. Also, it can make close
to 164000 predictions per hour. We also applied our approach to CIFAR-10, which
is much more complex compared to MNIST, and were able to achieve 91.5\%
accuracy with approximation polynomials used as activation functions. These
results show that CryptoDL provides efficient, accurate and scalable
privacy-preserving predictions.
| 1 | 0 | 0 | 0 | 0 | 0 |
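The core trick of substituting HE-friendly low-degree polynomials for activations can be sketched with a least-squares fit to ReLU (an illustrative baseline; the paper's actual approximation method, interval, and degree choices may differ):

```python
import numpy as np

# Fit a low-degree polynomial to ReLU on [-1, 1]: homomorphic encryption
# schemes evaluate additions and multiplications cheaply, so a degree-3
# polynomial can stand in for the non-polynomial activation.
x = np.linspace(-1, 1, 1001)
relu = np.maximum(x, 0.0)

degree = 3
coeffs = np.polyfit(x, relu, degree)   # least-squares coefficients
poly = np.polyval(coeffs, x)

max_err = np.max(np.abs(poly - relu))
print(f"degree-{degree} max error on [-1, 1]: {max_err:.4f}")
```

The network is then trained with the polynomial in place of ReLU, so the learned weights compensate for the (small, bounded) approximation error.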
Data-driven modeling of collaboration networks: A cross-domain analysis | We analyze large-scale data sets about collaborations from two different
domains: economics, specifically 22,000 R&D alliances between 14,500 firms, and
science, specifically 300,000 co-authorship relations between 95,000
scientists. Considering the different domains of the data sets, we address two
questions: (a) to what extent do the collaboration networks reconstructed from
the data share common structural features, and (b) can their structure be
reproduced by the same agent-based model? In our data-driven modeling approach
we use aggregated network data to calibrate the probabilities at which agents
establish collaborations with either newcomers or established agents. The model
is then validated by its ability to reproduce network features not used for
calibration, including distributions of degrees, path lengths, local clustering
coefficients and sizes of disconnected components. Emphasis is put on comparing
domains, but also sub-domains (economic sectors, scientific specializations).
Interpreting the link probabilities as strategies for link formation, we find
that in R&D collaborations newcomers prefer links with established agents,
while in co-authorship relations newcomers prefer links with other newcomers.
Our results shed new light on the long-standing question about the role of
endogenous and exogenous factors (i.e., different information available to the
initiator of a collaboration) in network formation.
| 1 | 1 | 0 | 0 | 0 | 0 |
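The calibration step, estimating from aggregate data how often new links involve newcomers versus established agents, reduces to simple counting over a time-ordered edge list (the tiny collaboration list below is made up purely for illustration):

```python
# Time-ordered collaborations; an agent is a "newcomer" on its first appearance.
collaborations = [
    ("a", "b"), ("a", "c"), ("b", "c"), ("d", "a"),
    ("e", "f"), ("b", "d"), ("g", "a"), ("c", "d"),
]

seen = set()
newcomer_links = established_links = 0
for u, v in collaborations:
    # A link counts as a newcomer link if at least one endpoint is new.
    if u in seen and v in seen:
        established_links += 1
    else:
        newcomer_links += 1
    seen.update((u, v))

p_newcomer = newcomer_links / len(collaborations)
print(f"newcomer links: {newcomer_links}, established-established: {established_links}")
print(f"estimated newcomer-link probability: {p_newcomer:.3f}")
```

In the paper's setup, probabilities like this one are then fed into the agent-based model, which is validated against network features not used for calibration.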
Tunnelling in Dante's Inferno | We study quantum tunnelling in Dante's Inferno model of large field
inflation. Such a tunnelling process, which will terminate inflation, becomes
problematic if the tunnelling rate is rapid compared to the Hubble time scale
at the time of inflation. Consequently, we constrain the parameter space of
Dante's Inferno model by demanding a suppressed tunnelling rate during
inflation. The constraints are derived and explicit numerical bounds are
provided for representative examples. Our considerations are at the level of an
effective field theory; hence, the presented constraints have to hold
regardless of any UV completion.
| 0 | 1 | 0 | 0 | 0 | 0 |
Regulating Highly Automated Robot Ecologies: Insights from Three User Studies | Highly automated robot ecologies (HARE), or societies of independent
autonomous robots or agents, are rapidly becoming an important part of much of
the world's critical infrastructure. As with human societies, regulation,
wherein a governing body designs rules and processes for the society, plays an
important role in ensuring that HARE meet societal objectives. However, to
date, a careful study of interactions between a regulator and HARE is lacking.
In this paper, we report on three user studies which give insights into how to
design systems that allow people, acting as the regulatory authority, to
effectively interact with HARE. As in the study of political systems in which
governments regulate human societies, our studies analyze how interactions
between HARE and regulators are impacted by regulatory power and individual
(robot or agent) autonomy. Our results show that regulator power, decision
support, and adaptive autonomy can each diminish the social welfare of HARE,
and hint at how these seemingly desirable mechanisms can be designed so that
they become part of successful HARE.
| 1 | 0 | 0 | 0 | 0 | 0 |
Some theoretical results on tensor elliptical distribution | The multilinear normal distribution is a widely used tool in tensor analysis
of magnetic resonance imaging (MRI). Diffusion tensor MRI provides a
statistical estimate of a symmetric 2nd-order diffusion tensor for each voxel
within an imaging volume. In this article, tensor elliptical (TE) distribution
is introduced as an extension to the multilinear normal (MLN) distribution.
Some properties including the characteristic function and distribution of
affine transformations are given. An integral representation connecting
densities of TE and MLN distributions is exhibited that is used in deriving the
expectation of any measurable function of a TE variate.
| 0 | 0 | 1 | 1 | 0 | 0 |
Applying Text Mining to Protest Stories as Voice against Media Censorship | Data-driven activism attempts to collect, analyze and visualize data to
foster social change. However, under media censorship it is often impossible
to collect such data. Here we demonstrate that data from personal stories can
also help us gain insights about protests and activism, and can serve as a
voice for the activists. We analyze protest story data by extracting a location
network from the stories and performing emotion mining to gain insight into the
protests.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cost-Effective Training of Deep CNNs with Active Model Adaptation | Deep convolutional neural networks have achieved great success in various
applications. However, training an effective DNN model for a specific task is
rather challenging because it requires prior knowledge or experience to design
the network architecture, a repeated trial-and-error process to tune the
parameters, and a large set of labeled data to train the model. In this paper,
we propose to overcome these challenges by actively adapting a pre-trained
model to a new task with fewer labeled examples. Specifically, the pre-trained
model is iteratively fine-tuned on the most useful examples. The examples
are actively selected based on a novel criterion, which jointly estimates the
potential contribution of an instance on optimizing the feature representation
as well as improving the classification model for the target task. On one hand,
the pre-trained model brings plentiful information from its original task,
avoiding redesign of the network architecture or training from scratch; and on
the other hand, the labeling cost can be significantly reduced by active label
querying. Experiments on multiple datasets and different pre-trained models
demonstrate that the proposed approach can achieve cost-effective training of
DNNs.
| 0 | 0 | 0 | 1 | 0 | 0 |
Flipping growth orientation of nanographitic structures by plasma enhanced chemical vapor deposition | Nanographitic structures (NGSs) with a multitude of morphological features are
grown on SiO2/Si substrates by electron cyclotron resonance - plasma enhanced
chemical vapor deposition (ECR-PECVD). CH4 is used as source gas with Ar and H2
as dilutants. Field emission scanning electron microscopy, high resolution
transmission electron microscopy (HRTEM) and Raman spectroscopy are used to
study the structural and morphological features of the grown films. Herein, we
demonstrate how the morphology can be tuned from planar to vertical structures
using a single control parameter, namely the dilution of CH4 with Ar and/or H2. Our
results show that the competitive growth and etching processes dictate the
morphology of the NGSs. While Ar-rich composition favors vertically oriented
graphene nanosheets, H2-rich composition aids growth of planar films. Raman
analysis reveals that dilution of CH4 with either Ar or H2, or both in
combination, helps to improve the structural quality of the films. Line shape analysis of the Raman 2D
band shows nearly symmetric Lorentzian profile which confirms the turbostratic
nature of the grown NGSs. Further, this aspect is elucidated by HRTEM studies
through the observation of elliptical diffraction patterns. Based on these experiments, a
comprehensive understanding is obtained on the growth and structural properties
of NGSs grown over a wide range of feedstock compositions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topological thermal Hall effect due to Weyl magnons | We present the first theoretical evidence of zero magnetic field topological
(anomalous) thermal Hall effect due to Weyl magnons. Here, we consider Weyl
magnons in stacked noncoplanar frustrated kagomé antiferromagnets recently
proposed by Owerre, [arXiv:1708.04240]. The Weyl magnons in this system result
from macroscopically broken time-reversal symmetry by the scalar spin chirality
of noncoplanar chiral spin textures. Most importantly, they come from the
lowest excitation, therefore they can be easily observed experimentally at low
temperatures due to the population effect. Similar to electronic Weyl nodes
close to the Fermi energy, Weyl magnon nodes in the lowest excitation are the
most important. Indeed, we show that the topological (anomalous) thermal Hall
effect in this system arises from nonvanishing Berry curvature due to Weyl
magnon nodes in the lowest excitation, and it depends on their distribution
(distance) in momentum space. The present result paves the way to directly
probe low excitation Weyl magnons and macroscopically broken time-reversal
symmetry in three-dimensional frustrated magnets with the anomalous thermal
Hall effect.
| 0 | 1 | 0 | 0 | 0 | 0 |
Numerical prediction of the piezoelectric transducer response in the acoustic nearfield using a one-dimensional electromechanical finite difference approach | We present a simple electromechanical finite difference model to study the
response of a piezoelectric polyvinylidene fluoride (PVDF) transducer to
optoacoustic (OA) pressure waves in the acoustic nearfield prior to thermal
relaxation of the OA source volume. The assumption of nearfield conditions,
i.e. the absence of acoustic diffraction, allows us to treat the problem using a
one-dimensional numerical approach. Therein, the computational domain is
modeled as an inhomogeneous elastic medium, characterized by its local wave
velocities and densities, allowing us to explore the effect of stepwise impedance
changes on the stress wave propagation. The transducer is modeled as a thin
piezoelectric sensing layer and the electromechanical coupling is accomplished
by means of the respective linear constitutive equations. Considering a
low-pass characteristic of the full experimental setup, we obtain the resulting
transducer signal. Complementing transducer signals measured in a controlled
laboratory experiment with numerical simulations that result from a model of
the experimental setup, we find that, bearing in mind the apparent limitations
of the one-dimensional approach, the simulated transducer signals can be used
very well to predict and interpret the experimental findings.
| 0 | 1 | 0 | 0 | 0 | 0 |
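A minimal one-dimensional finite-difference sketch of the acoustic part of such a model, a leapfrog scheme for $u_{tt} = c(x)^2 u_{xx}$ with a step in the wave speed (the piezoelectric sensing layer, densities, and low-pass filtering of the setup are omitted; all parameters here are arbitrary):

```python
import numpy as np

nx, nt = 400, 300
dx = 1.0
c = np.full(nx, 1.0)
c[nx // 2:] = 0.5            # stepwise impedance change halfway along the domain
dt = 0.5 * dx / c.max()      # CFL-stable time step

# Gaussian pressure pulse at x = 100 with zero initial velocity:
# it splits into left- and right-going halves (d'Alembert).
u_prev = np.exp(-0.01 * (np.arange(nx) - 100.0) ** 2)
u = u_prev.copy()

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # discrete second derivative
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next

# The right-going pulse crosses the impedance step, slows down, and is
# partially reflected; its transmitted peak ends up near x = 225.
print("peak location moved from 100 to", int(np.argmax(u)))
```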
A free energy landscape of the capture of CO2 by frustrated Lewis pairs | Frustrated Lewis pairs (FLPs) are known for their ability to capture CO2.
Although many FLPs have been reported experimentally and several theoretical
studies have been carried out to address the reaction mechanism, the individual
roles of the Lewis acid and base of an FLP in the capture of CO2 are still
unclear. In this study, we employed density functional theory (DFT) based
metadynamics simulations to investigate the complete path for the capture of
CO2 by the tBu3P/B(C6F5)3 pair, and to understand the role of the Lewis acid
and base. Interestingly, we have found that the Lewis acid plays a more
important role than the Lewis base. Specifically, the Lewis acid is crucial for
the catalytic properties and is responsible for both kinetic and thermodynamic
control. The Lewis base, however, has less impact on the catalytic performance
and is mainly responsible for the formation of the FLP system. Based on these
findings, we propose a rule of thumb for the future synthesis of FLP-based
catalysts for the utilization of CO2.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improved Point Source Detection in Crowded Fields using Probabilistic Cataloging | Cataloging is challenging in crowded fields because sources are extremely
covariant with their neighbors and blending makes even the number of sources
ambiguous. We present the first optical probabilistic catalog, cataloging a
crowded (~0.1 sources per pixel brighter than 22nd magnitude in F606W) Sloan
Digital Sky Survey r band image from M2. Probabilistic cataloging returns an
ensemble of catalogs inferred from the image and thus can capture source-source
covariance and deblending ambiguities. By comparing to a traditional catalog of
the same image and a Hubble Space Telescope catalog of the same region, we show
that our catalog ensemble better recovers sources from the image. It goes more
than a magnitude deeper than the traditional catalog while having a lower false
discovery rate brighter than 20th magnitude. We also present an algorithm for
reducing this catalog ensemble to a condensed catalog that is similar to a
traditional catalog, except it explicitly marginalizes over source-source
covariances and nuisance parameters. We show that this condensed catalog has a
similar completeness and false discovery rate to the catalog ensemble. Future
telescopes will be more sensitive, and thus more of their images will be
crowded. Probabilistic cataloging performs better than existing software in
crowded fields and so should be considered when creating photometric pipelines
in the Large Synoptic Survey Telescope era.
| 0 | 1 | 0 | 0 | 0 | 0 |
Local Convergence of Proximal Splitting Methods for Rank Constrained Problems | We analyze the local convergence of proximal splitting algorithms to solve
optimization problems that are convex besides a rank constraint. For this, we
show conditions under which the proximal operator of a function involving the
rank constraint is locally identical to the proximal operator of its convex
envelope, hence implying local convergence. The conditions imply that the
non-convex algorithms locally converge to a solution whenever a convex
relaxation involving the convex envelope can be expected to solve the
non-convex problem.
| 1 | 0 | 0 | 1 | 0 | 0 |
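The nonconvex proximal operator at the center of this analysis, Euclidean projection onto the rank constraint $\mathrm{rank}(X) \le r$, is simply a truncated SVD (Eckart-Young); a quick sketch on arbitrary data:

```python
import numpy as np

def project_rank(X, r):
    """Euclidean projection of X onto {X : rank(X) <= r}: keep the r
    largest singular values and zero out the rest (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt   # U * s scales the columns of U, i.e. U @ diag(s)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
Y = project_rank(X, 2)

print("rank after projection:", np.linalg.matrix_rank(Y))
```

A matrix that already satisfies the constraint is a fixed point of this projection, which is the kind of local behavior the convex-envelope comparison in the paper hinges on.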
On the boundary between qualitative and quantitative measures of causal effects | Causal relationships among variables are commonly represented via directed
acyclic graphs. There are many methods in the literature to quantify the
strength of arrows in a causal acyclic graph. These methods, however, have
undesirable properties when the causal system represented by a directed acyclic
graph is degenerate. In this paper, we characterize a degenerate causal system
using multiplicity of Markov boundaries, and show that in this case, it is
impossible to quantify causal effects in a reasonable fashion. We then propose
algorithms to identify such degenerate scenarios from observed data.
Performance of our algorithms is investigated through synthetic data analysis.
| 0 | 0 | 1 | 1 | 0 | 0 |
How Could Polyhedral Theory Harness Deep Learning? | The holy grail of deep learning is to come up with an automatic method to
design optimal architectures for different applications. In other words, how
can we effectively dimension and organize neurons along the network layers
based on the computational resources, input size, and amount of training data?
We outline promising research directions based on polyhedral theory and
mixed-integer representability that may offer an analytical approach to this
question, in contrast to the empirical techniques often employed.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal top dag compression | It is shown that for a given ordered node-labelled tree of size $n$ and with
$s$ many different node labels, one can construct in linear time a top dag of
height $O(\log n)$ and size $O(n / \log_\sigma n) \cap O(d \cdot \log n)$,
where $\sigma = \max\{ 2, s\}$ and $d$ is the size of the minimal dag. The size
bound $O(n / \log_\sigma n)$ is optimal and improves on previous bounds.
| 1 | 0 | 0 | 0 | 0 | 0 |
Synthesizing SystemC Code from Delay Hybrid CSP | Delay is omnipresent in modern control systems, which can prompt oscillations
and may cause deterioration of control performance, invalidate both stability
and safety properties. This implies that safety or stability certificates
obtained on idealized, delay-free models of systems prone to delayed coupling
may be erratic, and further the incorrectness of the executable code generated
from these models. However, automated methods for system verification and code
generation that ought to address models of system dynamics reflecting delays
have not been paid enough attention yet in the computer science community. In
our previous work, on one hand, we investigated the verification of delay
dynamical and hybrid systems; on the other hand, we also addressed how to
synthesize SystemC code from a verified hybrid system modelled by Hybrid CSP
(HCSP) without delay. In this paper, we give a first attempt to synthesize
SystemC code from a verified delay hybrid system modelled by Delay HCSP
(dHCSP), which is an extension of HCSP by replacing ordinary differential
equations (ODEs) with delay differential equations (DDEs). We implement a tool
to support the automatic translation from dHCSP to SystemC.
| 1 | 0 | 0 | 0 | 0 | 0 |
Analysis of Annual Cyclone Frequencies over Bay of Bengal: Effect of 2004 Indian Ocean Tsunami | This paper discusses the time series trend and variability of the cyclone
frequencies over Bay of Bengal, particularly in order to conclude if there is
any significant difference in the pattern visible before and after the
disastrous 2004 Indian ocean tsunami based on the observed annual cyclone
frequency data obtained by India Meteorological Department over the years
1891-2015. Three different categories of cyclones: depression (<34 knots),
cyclonic storm (34-47 knots) and severe cyclonic storm (>47 knots) have been
analyzed separately using a non-homogeneous Poisson process approach. The
estimated intensity functions of the Poisson processes along with their first
two derivatives are discussed and all three categories show decreasing trend of
the intensity functions after the tsunami. Using an exact change-point
analysis, we show that the drops in mean intensity functions are significant
for all three categories. To the best of our knowledge, no study so far has
discussed the relation between cyclones and tsunamis. The Bay of Bengal is
surrounded by some of the most densely populated areas of the world, and any
significant change in tropical cyclone patterns has a large impact in various
ways, for example on disaster management planning; our study is immensely
important from that perspective.
| 0 | 0 | 0 | 1 | 0 | 0 |
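The core of a Poisson change-point comparison can be sketched with synthetic annual counts (the IMD record and the paper's exact test are not reproduced here; the rates 6 and 3 and the 2005 split are made-up assumptions): for a homogeneous Poisson model on each side of a candidate change year, the MLE of the intensity is the sample mean, and a log-likelihood ratio compares "one rate" against "two rates".

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1991, 2016)
# Synthetic annual counts with a drop in rate from 6 to 3 at 2005.
counts = np.where(years < 2005,
                  rng.poisson(6.0, years.size),
                  rng.poisson(3.0, years.size))

def poisson_loglik(x, lam):
    # Poisson log-likelihood up to the x!-terms, which cancel in the ratio.
    return np.sum(x * np.log(lam) - lam)

before, after = counts[years < 2005], counts[years >= 2005]
llr = (poisson_loglik(before, before.mean())
       + poisson_loglik(after, after.mean())
       - poisson_loglik(counts, counts.mean()))

print(f"mean rate before 2005: {before.mean():.2f}, after: {after.mean():.2f}")
print(f"log-likelihood ratio for a 2005 change point: {llr:.2f}")
```

A large ratio favors the two-rate model; an exact change-point test, as used in the paper, would calibrate this statistic rather than rely on an asymptotic cutoff.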
Load Thresholds for Cuckoo Hashing with Overlapping Blocks | Dietzfelbinger and Weidling [DW07] proposed a natural variation of cuckoo
hashing where each of $cn$ objects is assigned $k = 2$ intervals of size $\ell$
in a linear (or cyclic) hash table of size $n$ and both start points are chosen
independently and uniformly at random. Each object must be placed into a table
cell within its intervals, but each cell can only hold one object. Experiments
suggested that this scheme outperforms the variant with blocks in which
intervals are aligned at multiples of $\ell$. In particular, the load
threshold, i.e. the load $c$ that can be achieved with high probability, is higher. For
instance, Lehman and Panigrahy [LP09] empirically observed the threshold for
$\ell = 2$ to be around $96.5\%$ as compared to roughly $89.7\%$ using blocks.
They managed to pin down the asymptotics of the thresholds for large $\ell$,
but the precise values resisted rigorous analysis.
We establish a method to determine these load thresholds for all $\ell \geq
2$, and, in fact, for general $k \geq 2$. For instance, for $k = \ell = 2$ we
get $\approx 96.4995\%$. The key tool we employ is an insightful and general
theorem due to Leconte, Lelarge, and Massoulié [LLM13], which adapts methods
from statistical physics to the world of hypergraph orientability. In effect,
the orientability thresholds for our graph families are determined by belief
propagation equations for certain graph limits. As a side note we provide
experimental evidence suggesting that placements can be constructed in linear
time with loads close to the threshold using an adapted version of an algorithm
by Khosla [Kho13].
| 1 | 0 | 0 | 0 | 0 | 0 |
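A toy random-walk insertion for this scheme (not the paper's analysis or the adapted algorithm of [Kho13]; the table size, load, kick limit, and eviction policy are illustrative assumptions) shows that loads comfortably below the roughly 96.5% threshold for $\ell = 2$ place easily:

```python
import random

def insert_all(n, m, l=2, k=2, max_kicks=2000, seed=0):
    """Place m objects into a cyclic table of n cells; each object may occupy
    any cell of k uniformly random intervals of length l. Returns the table,
    or None if some insertion exceeds max_kicks evictions."""
    rng = random.Random(seed)
    cells = {obj: [(s + j) % n
                   for s in (rng.randrange(n) for _ in range(k))
                   for j in range(l)]
             for obj in range(m)}
    table = [None] * n
    for obj in range(m):
        cur, kicks = obj, 0
        while kicks <= max_kicks:
            free = [c for c in cells[cur] if table[c] is None]
            if free:
                table[rng.choice(free)] = cur
                break
            victim_cell = rng.choice(cells[cur])   # random-walk eviction
            cur, table[victim_cell] = table[victim_cell], cur
            kicks += 1
        else:
            return None
    return table

table = insert_all(n=1000, m=800)   # load 0.80, well below the threshold
print("all placed:", table is not None)
```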
Evaluating the hot hand phenomenon using predictive memory selection for multistep Markov Chains: LeBron James' error correcting free throws | Consider the problem of modeling memory for discrete-state random walks using
higher-order Markov chains. This Letter introduces a general Bayesian framework
under the principle of minimizing prediction error to select, from data, the
number of prior states of recent history upon which a trajectory is
statistically dependent. In this framework, I provide closed-form expressions
for several alternative model selection criteria that approximate model
prediction error for future data. Using simulations, I evaluate the statistical
power of these criteria. These methods, when applied to data from the
2016--2017 NBA season, demonstrate evidence of statistical dependencies in
LeBron James' free throw shooting. In particular, a model depending on the
previous shot (single-step Markovian) is approximately as predictive as a model
with independent outcomes. A hybrid jagged model of two parameters, where James
shoots a higher percentage after a missed free throw than otherwise, is more
predictive than either model.
| 0 | 1 | 0 | 1 | 0 | 0 |
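The order-selection principle in the abstract above can be sketched with a toy computation: fit binary Markov chains of increasing order by transition counting and keep the order with the best penalized predictive fit. An AIC-style criterion stands in here for the Letter's closed-form Bayesian criteria, and the simulated shooter's make probabilities are invented for illustration.

```python
import numpy as np

def fit_markov(seq, k):
    """Count k-th order transitions on a binary sequence, with Laplace smoothing."""
    counts = {}
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        c = counts.setdefault(ctx, [1, 1])  # [count of 0, count of 1]
        c[seq[i]] += 1
    return counts

def log_lik(seq, k, counts):
    """Log-likelihood of the sequence under the fitted k-th order model."""
    ll = 0.0
    for i in range(k, len(seq)):
        c = counts.get(tuple(seq[i - k:i]), [1, 1])
        ll += np.log(c[seq[i]] / sum(c))
    return ll

def aic(seq, k):
    """AIC-style penalized fit: 2 * (number of contexts) - 2 * log-likelihood."""
    return 2 * (2 ** k) - 2 * log_lik(seq, k, fit_markov(seq, k))

rng = np.random.default_rng(0)
# Simulated shooter with one step of memory (invented numbers):
# P(make | previous miss) = 0.9, P(make | previous make) = 0.7.
seq = [1]
for _ in range(5000):
    p = 0.7 if seq[-1] == 1 else 0.9
    seq.append(int(rng.random() < p))

best_order = min(range(4), key=lambda k: aic(seq, k))
print("selected order:", best_order)
```

With this strong one-step dependence, the memoryless (order-0) model is clearly rejected, mirroring the "error correcting" pattern described in the abstract.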
Closed almost-Kähler 4-manifolds of constant non-negative Hermitian holomorphic sectional curvature are Kähler | We show that a closed almost Kähler 4-manifold of globally constant
holomorphic sectional curvature $k\geq 0$ with respect to the canonical
Hermitian connection is automatically Kähler. The same result holds for $k<0$
if we require in addition that the Ricci curvature is J-invariant. The proofs
are based on the observation that such manifolds are self-dual, so that
Chern-Weil theory implies useful integral formulas, which are then combined
with results from Seiberg--Witten theory.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pretest and Stein-Type Estimations in Quantile Regression Model | In this study, we consider preliminary test and shrinkage estimation
strategies for quantile regression models. In classical Least Squares
Estimation (LSE) method, the relationship between the explanatory and explained
variables in the coordinate plane is estimated with a mean regression line. In
order to use LSE, three main assumptions on the error terms of the regression
model, known as the Gauss-Markov assumptions, must be met so that the errors
form a white noise process: (1) the error terms have zero mean, (2) the variance
of the error terms is constant, and (3) the covariance between the errors is
zero, i.e., there is no autocorrelation. However, data in many areas, including
econometrics, survival analysis and ecology, do not satisfy these
assumptions. First introduced by Koenker, quantile regression has been used to
complement this deficiency of classical regression analysis and to improve the
least square estimation. The aim of this study is to improve the performance of
quantile regression estimators by using pre-test and shrinkage strategies. A
Monte Carlo simulation study, including a comparison with quantile $L_1$-type
estimators such as Lasso, Ridge and Elastic Net, is designed to evaluate the
performances of the estimators. Two real data examples are given for
illustrative purposes. Finally, we obtain the asymptotic results of the
suggested estimators.
| 0 | 0 | 1 | 1 | 0 | 0 |
Outcrop fracture characterization on suppositional planes cutting through digital outcrop models (DOMs) | Conventional fracture data collection methods are usually implemented on
planar surfaces or assuming they are planar; these methods may introduce
sampling errors on uneven outcrop surfaces. Consequently, data collected on
limited types of outcrop surfaces (mainly bedding surfaces) may not be a
sufficient representation of fracture network characteristics in outcrops.
Recent development of techniques that obtain DOMs from outcrops and extract the
full extent of individual fractures offers the opportunity to address the
problem of performing the conventional sampling methods on uneven outcrop
surfaces. In this study, we propose a new method that performs outcrop fracture
characterization on suppositional planes cutting through DOMs. The
suppositional plane is the best fit plane of the outcrop surface, and the
fracture trace map is extracted on the suppositional plane so that the fracture
network can be further characterized. The amount of sampling error introduced
by the conventional methods and avoided by the new method on 16 uneven outcrop
surfaces with different roughnesses is estimated. The results show that the
conventional sampling methods do not apply to outcrops other than bedding
surfaces or outcrops whose roughness exceeds 0.04 m, and that the proposed method can
greatly extend the types of outcrop surfaces for outcrop fracture
characterization with the suppositional plane cutting through DOMs.
| 1 | 1 | 0 | 0 | 0 | 0 |
Exploiting routinely collected severe case data to monitor and predict influenza outbreaks | Influenza remains a significant burden on health systems. Effective responses
rely on the timely understanding of the magnitude and the evolution of an
outbreak. For monitoring purposes, data on severe cases of influenza in England
are reported weekly to Public Health England. These data are both readily
available and have the potential to provide valuable information to estimate
and predict the key transmission features of seasonal and pandemic influenza.
We propose an epidemic model that links the underlying unobserved influenza
transmission process to data on severe influenza cases. Within a Bayesian
framework, we infer retrospectively the parameters of the epidemic model for
each seasonal outbreak from 2012 to 2015, including: the effective reproduction
number; the initial susceptibility; the probability of admission to intensive
care given infection; and the effect of school closure on transmission. The
model is also implemented in real time to assess whether early forecasting of
the number of admissions to intensive care is possible. Our model of admissions
data allows reconstruction of the underlying transmission dynamics revealing:
increased transmission during the 2013/14 season and a noticeable effect of the
Christmas school holidays on disease spread during the 2012/13 and 2014/15 seasons.
When information on the initial immunity of the population is available,
forecasts of the number of admissions to intensive care can be substantially
improved. Readily available severe case data can be effectively used to
estimate epidemiological characteristics and to predict the evolution of an
epidemic, crucially allowing real-time monitoring of the transmission and
severity of the outbreak.
| 0 | 1 | 0 | 1 | 0 | 0 |
Consistent Inter-Model Specification for Time-Homogeneous SPX Stochastic Volatility and VIX Market Models | This paper shows how to recover stochastic volatility models (SVMs) from
market models for the VIX futures term structure. Market models have more
flexibility for fitting curves than do SVMs, and therefore they are
better-suited for pricing VIX futures and derivatives. But the VIX itself is a
derivative of the S&P500 (SPX) and it is common practice to price SPX
derivatives using an SVM. Hence, a consistent model for both SPX and VIX
derivatives would be one where the SVM is obtained by inverting the market
model. This paper's main result is a method for the recovery of a stochastic
volatility function as the output of an inverse problem, with the inputs given
by a VIX futures market model. Analysis will show that some conditions need to
be met in order for there to be no inter-model arbitrage or mis-priced
derivatives. Given these conditions the inverse problem can be solved. Several
models are analyzed and explored numerically to gain a better understanding of
the theory and its limitations.
| 0 | 0 | 0 | 0 | 0 | 1 |
Optimal Task Scheduling in Communication-Constrained Mobile Edge Computing Systems for Wireless Virtual Reality | Mobile edge computing (MEC) is expected to be an effective solution to
deliver 360-degree virtual reality (VR) videos over wireless networks. In
contrast to the previous computation-constrained MEC framework, which reduces the
computation-resource consumption at the mobile VR device by increasing the
communication-resource consumption, we develop a communication-constrained MEC
framework to reduce communication-resource consumption by increasing the
computation-resource consumption and exploiting the caching resources at the
mobile VR device in this paper. Specifically, according to the task
modularization, the MEC server can only deliver the components which have not
been stored in the VR device, and then the VR device uses the received
components and the corresponding cached components to construct the task,
resulting in low communication-resource consumption but high delay. The MEC
server can also compute the task by itself to reduce the delay; however, this
consumes more communication resources due to the delivery of the entire task.
Therefore, we propose a task scheduling strategy to decide which
computation model the MEC server should operate, in order to minimize the
communication-resource consumption under the delay constraint. Finally, we
discuss the tradeoffs between communications, computing, and caching in the
proposed system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Infinitary generalizations of Deligne's completeness theorem | Given a regular cardinal $\kappa$ such that $\kappa^{<\kappa}=\kappa$, we
study a class of toposes with enough points, the $\kappa$-separable toposes.
These are equivalent to sheaf toposes over a site with $\kappa$-small limits
that has at most $\kappa$ many objects and morphisms, the (basis for the)
topology being generated by at most $\kappa$ many covering families, and that
satisfy a further exactness property $T$. We prove that these toposes have
enough $\kappa$-points, that is, points whose inverse images preserve all
$\kappa$-small limits. This generalizes the separable toposes of Makkai and
Reyes, which arise as the particular case $\kappa=\omega$, where property $T$ is
trivially satisfied. This result is essentially a completeness theorem for a
certain infinitary logic that we call $\kappa$-geometric, where conjunctions of
less than $\kappa$ formulas and existential quantification on less than
$\kappa$ many variables are allowed. We prove that $\kappa$-geometric theories
have a $\kappa$-classifying topos having property $T$, the universal property
being that models of the theory in a Grothendieck topos with property $T$
correspond to $\kappa$-geometric morphisms (geometric morphisms the inverse
image of which preserves all $\kappa$-small limits) into that topos. Moreover,
we prove that $\kappa$-separable toposes occur as the $\kappa$-classifying
toposes of $\kappa$-geometric theories of at most $\kappa$ many axioms in
canonical form, and that every such $\kappa$-classifying topos is
$\kappa$-separable. Finally, we consider the case when $\kappa$ is weakly
compact and study the $\kappa$-classifying topos of a $\kappa$-coherent theory
(with at most $\kappa$ many axioms), that is, a theory where only disjunctions
of less than $\kappa$ formulas are allowed, obtaining a version of Deligne's
theorem for $\kappa$-coherent toposes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Saxion Cosmology for Thermalized Gravitino Dark Matter | In all supersymmetric theories, gravitinos, with mass suppressed by the
Planck scale, are an obvious candidate for dark matter; but if gravitinos ever
reached thermal equilibrium, such dark matter is apparently either too abundant
or too hot, and is excluded. However, in theories with an axion, a saxion
condensate is generated during an early era of cosmological history and its
late decay dilutes dark matter. We show that such dilution allows previously
thermalized gravitinos to account for the observed dark matter over very wide
ranges of gravitino mass, keV < $m_{3/2}$ < TeV, axion decay constant, $10^9$
GeV < $f_a$ < $10^{16}$ GeV, and saxion mass, 10 MeV < $m_s$ < 100 TeV.
Constraints on this parameter space are studied from BBN, supersymmetry
breaking, gravitino and axino production from freeze-in and saxion decay, and
from axion production from both misalignment and parametric resonance
mechanisms. Large allowed regions of $(m_{3/2}, f_a, m_s)$ remain, but differ
for DFSZ and KSVZ theories. Superpartner production at colliders may lead to
events with displaced vertices and kinks, and may contain saxions decaying to
$(WW,ZZ,hh), gg, \gamma \gamma$ or a pair of Standard Model fermions. Freeze-in
may lead to a sub-dominant warm component of gravitino dark matter, and saxion
decay to axions may lead to dark radiation.
| 0 | 1 | 0 | 0 | 0 | 0 |
One-step and Two-step Classification for Abusive Language Detection on Twitter | Automatic abusive language detection is a difficult but important task for
online social media. Our research explores a two-step approach that first
performs classification of abusive language and then classifies it into
specific types, and compares it with a one-step approach of a single
multi-class classification for detecting sexist and racist language. On a
public English Twitter corpus of 20 thousand tweets annotated for sexism and
racism, our approach shows a promising performance of 0.827 F-measure by using
HybridCNN in one step and 0.824 F-measure by using logistic regression in two
steps.
| 1 | 0 | 0 | 0 | 0 | 0 |
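The two-step pipeline above can be illustrated schematically. Hypothetical keyword rules stand in for the paper's trained HybridCNN and logistic-regression classifiers, so only the control flow (first detect abusiveness, then classify its type) is shown; all word lists are made up.

```python
import re

# Hypothetical keyword rules stand in for the paper's trained classifiers;
# only the two-step control flow is illustrated here.
ABUSIVE_WORDS = {"stupid", "idiot", "disgusting"}   # step-1 vocabulary (made up)
SEXISM_WORDS = {"kitchen", "hysterical"}            # step-2 vocabulary (made up)

def tokens(tweet):
    """Lowercase word tokens with punctuation stripped."""
    return re.findall(r"[a-z']+", tweet.lower())

def step1_is_abusive(tweet):
    """Step 1: binary decision -- is the tweet abusive at all?"""
    return any(w in ABUSIVE_WORDS for w in tokens(tweet))

def step2_abuse_type(tweet):
    """Step 2: classify an abusive tweet into a specific type."""
    return "sexism" if any(w in SEXISM_WORDS for w in tokens(tweet)) else "racism"

def two_step_classify(tweet):
    """Detect abusiveness first, then hand off to the type classifier."""
    return step2_abuse_type(tweet) if step1_is_abusive(tweet) else "none"

print(two_step_classify("what a lovely day"))                       # none
print(two_step_classify("you are stupid, go back to the kitchen"))  # sexism
```

A one-step alternative would replace both functions with a single three-way classifier over {none, sexism, racism}, which is the comparison the abstract reports.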
Massive data compression for parameter-dependent covariance matrices | We show how the massive data compression algorithm MOPED can be used to
reduce, by orders of magnitude, the number of simulated datasets that are
required to estimate the covariance matrix required for the analysis of
Gaussian-distributed data. This is relevant when the covariance matrix cannot
be calculated directly. The compression is especially valuable when the
covariance matrix varies with the model parameters. In this case, it may be
prohibitively expensive to run enough simulations to estimate the full
covariance matrix throughout the parameter space. This compression may be
particularly valuable for the next generation of weak lensing surveys, such as
proposed for Euclid and LSST, for which the number of summary data (such as
band power or shear correlation estimates) is very large, $\sim 10^4$, due to
the large number of tomographic redshift bins that the data will be divided
into. In the pessimistic case where the covariance matrix is estimated
separately for all points in an MCMC analysis, this may require an unfeasible
$10^9$ simulations. We show here that MOPED can reduce this number by a factor
of 1000, or a factor of $\sim 10^6$ if some regularity in the covariance matrix
is assumed, reducing the number of simulations required to a manageable $10^3$,
making an otherwise intractable analysis feasible.
| 0 | 1 | 0 | 1 | 0 | 0 |
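A MOPED-style compression step can be sketched in a few lines: each parameter contributes one weighting vector proportional to $C^{-1}\,\partial\mu/\partial\theta_i$, Gram-Schmidt orthogonalized so the compressed statistics are uncorrelated with unit variance. The toy covariance and mean derivatives below are synthetic placeholders, not survey data.

```python
import numpy as np

def moped_vectors(dmu, cov):
    """Compression vectors b_i ~ C^{-1} dmu/dtheta_i, orthogonalized so the
    compressed statistics y_i = b_i . x are uncorrelated with unit variance."""
    cinv = np.linalg.inv(cov)
    bs = []
    for g in dmu:                        # one mean-derivative vector per parameter
        b = cinv @ g
        for prev in bs:                  # Gram-Schmidt w.r.t. the C inner product
            b = b - (prev @ cov @ b) * prev
        b = b / np.sqrt(b @ cov @ b)     # unit variance under C
        bs.append(b)
    return np.array(bs)

rng = np.random.default_rng(1)
n = 200                                  # raw data dimension (synthetic)
cov = np.diag(rng.uniform(0.5, 2.0, n))  # toy data covariance
dmu = rng.normal(size=(2, n))            # d(mean)/d(parameter) for 2 parameters
B = moped_vectors(dmu, cov)
x = rng.multivariate_normal(np.zeros(n), cov)
y = B @ x                                # n numbers compressed to 2
print(y.shape)
```

The point of the abstract is that covariance estimation then only needs to be accurate for these few compressed statistics rather than for the full $n \times n$ matrix.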
A Novel Data-Driven Framework for Risk Characterization and Prediction from Electronic Medical Records: A Case Study of Renal Failure | Electronic medical records (EMR) contain longitudinal information about
patients that can be used to analyze outcomes. Typically, studies on EMR data
have worked with established variables that have already been acknowledged to
be associated with certain outcomes. However, EMR data may also contain
hitherto unrecognized factors for risk association and prediction of outcomes
for a disease. In this paper, we present a scalable data-driven framework to
analyze EMR data corpus in a disease agnostic way that systematically uncovers
important factors influencing outcomes in patients, as supported by data and
without expert guidance. We validate the importance of such factors by using
the framework to predict for the relevant outcomes. Specifically, we analyze
EMR data covering approximately 47 million unique patients to characterize
renal failure (RF) among type 2 diabetic (T2DM) patients. We propose a
specialized L1 regularized Cox Proportional Hazards (CoxPH) survival model to
identify the important factors from those available from patient encounter
history. To validate the identified factors, we use a specialized generalized
linear model (GLM) to predict the probability of renal failure for individual
patients within a specified time window. Our experiments indicate that the
factors identified via our data-driven method overlap with the patient
characteristics recognized by experts. Our approach allows for scalable,
repeatable and efficient utilization of data available in EMRs, confirms prior
medical knowledge and can generate new hypotheses without expert supervision.
| 1 | 0 | 0 | 1 | 0 | 0 |
Knotted solutions for linear and nonlinear theories: electromagnetism and fluid dynamics | We examine knotted solutions, the most simple of which is the "Hopfion", from
the point of view of relations between electromagnetism and ideal fluid
dynamics. A map between fluid dynamics and electromagnetism works for initial
conditions or for linear perturbations, allowing us to find new knotted fluid
solutions. Knotted solutions are also found to be solutions of nonlinear
generalizations of electromagnetism, and of quantum-corrected actions for
electromagnetism coupled to other modes. For null configurations,
electromagnetism can be described as a null pressureless fluid, for which we
can find solutions from the knotted solutions of electromagnetism. We also map
them to solutions of Euler's equations, obtained from a type of nonrelativistic
reduction of the relativistic fluid equations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Implicit Media Tagging and Affect Prediction from video of spontaneous facial expressions, recorded with depth camera | We present a method that automatically evaluates emotional response from
spontaneous facial activity recorded by a depth camera. The automatic
evaluation of emotional response, or affect, is a fascinating challenge with
many applications, including human-computer interaction, media tagging and
human affect prediction. Our approach in addressing this problem is based on
the inferred activity of facial muscles over time, as captured by a depth
camera recording an individual's facial activity. Our contribution is two-fold:
First, we constructed a database of publicly available short video clips, which
elicit a strong emotional response in a consistent manner across different
individuals. Each video was tagged by its characteristic emotional response
along 4 scales: \emph{Valence, Arousal, Likability} and \emph{Rewatch} (the
desire to watch again). The second contribution is a two-step prediction
method, based on learning, which was trained and tested using this database of
tagged video clips. Our method was able to successfully predict the
aforementioned 4 dimensional representation of affect, as well as to identify
the period of strongest emotional response in the viewing recordings, in a
method that is blind to the video clip being watched, revealing a significantly
high agreement between the recordings of independent viewers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving Legal Information Retrieval by Distributional Composition with Term Order Probabilities | Legal professionals worldwide are currently trying to get up-to-pace with the
explosive growth in legal document availability through digital means. This
drives a need for high efficiency Legal Information Retrieval (IR) and Question
Answering (QA) methods. The IR task in particular has a set of unique
challenges that invite the use of semantically motivated NLP techniques. In this
work, a two-stage method for Legal Information Retrieval is proposed, combining
lexical statistics and distributional sentence representations in the context
of Competition on Legal Information Extraction/Entailment (COLIEE). The
combination is done with the use of disambiguation rules, applied over the
rankings obtained through n-gram statistics. After the ranking is done, its
results are evaluated for ambiguity, and disambiguation is done if a result is
decided to be unreliable for a given query. Competition and experimental
results indicate small gains in overall retrieval performance using the
proposed approach. Additionally, an analysis of error and improvement cases is
presented for a better understanding of the contributions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modeling Study of Laser Beam Scattering by Defects on Semiconductor Wafers | Accurate modeling of light scattering from nanometer scale defects on Silicon
wafers is critical for enabling increasingly shrinking semiconductor technology
nodes of the future. Yet, such modeling of defect scattering remains unsolved
since existing modeling techniques fail to account for complex defect and wafer
geometries. Here, we present results of laser beam scattering from spherical
and ellipsoidal particles located on the surface of a silicon wafer. A
commercially available electromagnetic field solver (HFSS) was deployed on a
multiprocessor cluster to obtain results with previously unknown accuracy down
to light scattering intensity of -170 dB. We compute three dimensional
scattering patterns of silicon nanospheres located on a semiconductor wafer for
both perpendicular and parallel polarization and show the effect of sphere size
on scattering. We further compute scattering patterns of nanometer-scale
ellipsoidal particles having different orientation angles and unveil the
effects of ellipsoidal orientation on scattering.
| 0 | 1 | 0 | 0 | 0 | 0 |
Combinatorial and Asymptotical Results on the Neighborhood Grid | In 2009, Joselli et al introduced the Neighborhood Grid data structure for
fast computation of neighborhood estimates in point clouds. Even though the
data structure has been used in several applications and shown to be
practically relevant, it is theoretically not yet well understood. The purpose
of this paper is to present a polynomial-time algorithm to build the data
structure. Furthermore, it is investigated whether the presented algorithm is
optimal. This investigation leads to several combinatorial questions for which
partial results are given.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the structure of join tensors with applications to tensor eigenvalue problems | We investigate the structure of join tensors, which may be regarded as the
multivariable extension of lattice-theoretic join matrices. Explicit formulae
for a polyadic decomposition (i.e., a linear combination of rank-1 tensors) and
a tensor-train decomposition of join tensors are derived on general join
semilattices. We discuss conditions under which the obtained decompositions are
optimal in rank, and examine numerically the storage complexity of the obtained
decompositions for a class of LCM tensors as a special case of join tensors. In
addition, we investigate numerically the sharpness of a theoretical upper bound
on the tensor eigenvalues of LCM tensors.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adversarial Variational Optimization of Non-Differentiable Simulators | Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model incorporating ideas from
generative adversarial networks, variational optimization and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the JS divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Learning Light Transport the Reinforced Way | We show that the equations of reinforcement learning and light transport
simulation are related integral equations. Based on this correspondence, a
scheme to learn importance while sampling path space is derived. The new
approach is demonstrated in a consistent light transport simulation algorithm
that uses reinforcement learning to progressively learn where light comes from.
As using this information for importance sampling includes information about
visibility, too, the number of light transport paths with zero contribution is
dramatically reduced, resulting in much less noisy images within a fixed time
budget.
| 1 | 0 | 0 | 0 | 0 | 0 |
Posterior Asymptotic Normality for an Individual Coordinate in High-dimensional Linear Regression | We consider the sparse high-dimensional linear regression model
$Y=Xb+\epsilon$ where $b$ is a sparse vector. For the Bayesian approach to this
problem, many authors have considered the behavior of the posterior
distribution when, in truth, $Y=X\beta+\epsilon$ for some given $\beta$. There
have been numerous results about the rate at which the posterior distribution
concentrates around $\beta$, but few results about the shape of that posterior
distribution. We propose a prior distribution for $b$ such that the marginal
posterior distribution of an individual coordinate $b_i$ is asymptotically
normal centered around an asymptotically efficient estimator, under the truth.
Such a result gives Bayesian credible intervals that match with the confidence
intervals obtained from an asymptotically efficient estimator for $b_i$. We
also discuss ways of obtaining such asymptotically efficient estimators on
individual coordinates. We compare the two-step procedure proposed by Zhang and
Zhang (2014) and a one-step modified penalization method.
| 0 | 0 | 1 | 1 | 0 | 0 |
Subdeterminant Maximization via Nonconvex Relaxations and Anti-concentration | Several fundamental problems that arise in optimization and computer science
can be cast as follows: Given vectors $v_1,\ldots,v_m \in \mathbb{R}^d$ and a
constraint family ${\cal B}\subseteq 2^{[m]}$, find a set $S \in \cal{B}$ that
maximizes the squared volume of the simplex spanned by the vectors in $S$. A
motivating example is the data-summarization problem in machine learning where
one is given a collection of vectors that represent data such as documents or
images. The volume of a set of vectors is used as a measure of their diversity,
and partition or matroid constraints over $[m]$ are imposed in order to ensure
resource or fairness constraints. Recently, Nikolov and Singh presented a
convex program and showed how it can be used to estimate the value of the most
diverse set when ${\cal B}$ corresponds to a partition matroid. This result was
recently extended to regular matroids in works of Straszak and Vishnoi, and
Anari and Oveis Gharan. The question of whether these estimation algorithms can
be converted into the more useful approximation algorithms -- that also output
a set -- remained open.
The main contribution of this paper is to give the first approximation
algorithms for both partition and regular matroids. We present novel
formulations for the subdeterminant maximization problem for these matroids;
this reduces them to the problem of finding a point that maximizes the absolute
value of a nonconvex function over a Cartesian product of probability
simplices. The technical core of our results is a new anti-concentration
inequality for dependent random variables that allows us to relate the optimal
value of these nonconvex functions to their value at a random point. Unlike
prior work on the constrained subdeterminant maximization problem, our proofs
do not rely on real-stability or convexity and could be of independent interest
both in algorithms and complexity.
| 1 | 0 | 1 | 1 | 0 | 0 |
On the Uplink Achievable Rate of Massive MIMO System With Low-Resolution ADC and RF Impairments | This paper considers channel estimation and uplink achievable rate of the
coarsely quantized massive multiple-input multiple-output (MIMO) system with
radio frequency (RF) impairments. We utilize additive quantization noise model
(AQNM) and extended error vector magnitude (EEVM) model to analyze the impacts
of low-resolution analog-to-digital converters (ADCs) and RF impairments
respectively. We show that hardware impairments cause a nonzero floor on the
channel estimation error, in contrast to the conventional case with ideal
hardware. The maximal-ratio combining (MRC) technique is then used at the
receiver, and an approximate tractable expression for the uplink achievable
rate is derived. The simulation results illustrate the appreciable
trade-offs between ADC resolution and RF impairments. The proposed studies
support the feasibility of equipping practical massive MIMO systems with
economical coarse ADCs and imperfect RF components.
| 1 | 0 | 0 | 0 | 0 | 0 |
On The Complexity of Enumeration | We investigate the relationship between several enumeration complexity
classes and focus in particular on problems having enumeration algorithms with
incremental and polynomial delay (IncP and DelayP respectively). We show that,
for some algorithms, we can turn an average delay into a worst case delay
without increasing the space complexity, suggesting that IncP_1 = DelayP even
with polynomially bounded space. We use the Exponential Time Hypothesis to
exhibit a strict hierarchy inside IncP which gives the first separation of
DelayP and IncP. Finally we relate the uniform generation of solutions to
probabilistic enumeration algorithms with polynomial delay and polynomial
space.
| 1 | 0 | 0 | 0 | 0 | 0 |
On $p$-degree of elliptic curves | In this note we investigate the $p$-degree function of elliptic curves over
the field $\mathbb{Q}_p$ of $p$-adic numbers. The $p$-degree measures the least
complexity of a non-zero $p$-torsion point on an elliptic curve. We prove some
properties of this function and compute it explicitly in some special cases.
| 0 | 0 | 1 | 0 | 0 | 0 |
Combinatorial formulas for Kazhdan-Lusztig polynomials with respect to W-graph ideals | In \cite{y1} Yin generalized the definition of $W$-graph ideal $E_J$ in
weighted Coxeter groups and introduced the weighted Kazhdan-Lusztig polynomials
$ \left \{ P_{x,y} \mid x,y\in E_J\right \}$, where $J$ is a subset of simple
generators $S$. In this paper, we study the combinatorial formulas for those
polynomials, which extend the results of Deodhar \cite{v3} and Tagawa
\cite{h1}.
| 0 | 0 | 1 | 0 | 0 | 0 |
Arithmetic statistics of modular symbols | Mazur, Rubin, and Stein have recently formulated a series of conjectures
about statistical properties of modular symbols in order to understand central
values of twists of elliptic curve $L$-functions. Two of these conjectures
relate to the asymptotic growth of the first and second moments of the modular
symbols. We prove these on average by using analytic properties of Eisenstein
series twisted by modular symbols. Another of their conjectures predicts the
Gaussian distribution of normalized modular symbols ordered according to the
size of the denominator of the cusps. We prove this conjecture in a refined
version that also allows restrictions on the location of the cusps.
| 0 | 0 | 1 | 0 | 0 | 0 |
Approximation Techniques for Stochastic Analysis of Biological Systems | There has been an increasing demand for formal methods in the design process
of safety-critical synthetic genetic circuits. Probabilistic model checking
techniques have demonstrated significant potential in analyzing the intrinsic
probabilistic behaviors of complex genetic circuit designs. However, its
inability to scale limits its applicability in practice. This chapter addresses
the scalability problem by presenting a state-space approximation method to
remove unlikely states resulting in a reduced, finite state representation of
the infinite-state continuous-time Markov chain that is amenable to
probabilistic model checking. The proposed method is evaluated on a design of a
genetic toggle switch. Comparisons with another state-of-the-art tool demonstrate
both the accuracy and the efficiency of the presented method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Characterizing time-irreversibility in disordered fermionic systems by the effect of local perturbations | We study the effects of local perturbations on the dynamics of disordered
fermionic systems in order to characterize time-irreversibility. We focus on
three different systems, the non-interacting Anderson and Aubry-André-Harper
(AAH) models, and the interacting spinless disordered t-V chain. First, we
consider the effect on the full many-body wave-functions by measuring the
Loschmidt echo (LE). We show that in the extended/ergodic phase the LE decays
exponentially fast with time, while in the localized phase the decay is
algebraic. We demonstrate that the exponent of the decay of the LE in the
localized phase diverges proportionally to the single-particle localization
length as we approach the metal-insulator transition in the AAH model. Second,
we probe different phases of disordered systems by studying the time
expectation value of local observables evolved with two Hamiltonians that
differ by a spatially local perturbation. Remarkably, we find that many-body
localized systems could lose memory of the initial state in the long-time
limit, in contrast to the non-interacting localized phase where some memory is
always preserved.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Lower Bound for the Number of Central Configurations on H^2 | We study the indices of the geodesic central configurations on $\H^2$. We
then show that central configurations are bounded away from the singularity
set. With Morse's inequality, we get a lower bound for the number of central
configurations on $\H^2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Scattering dominated high-temperature phase of 1T-TiSe2: an optical conductivity study | The controversy regarding the precise nature of the high-temperature phase of
1T-TiSe2 lasts for decades. It has intensified in recent times when new
evidence for the excitonic origin of the low-temperature charge-density wave
state started to unveil. Here we address the problem of the high-temperature
phase through precise measurements and detailed analysis of the optical
response of 1T-TiSe2 single crystals. The separate responses of electron and
hole subsystems are identified and followed in temperature. We show that
neither semiconductor nor semimetal pictures can be applied in their generic
forms as the scattering for both types of carriers is in the vicinity of the
Ioffe-Regel limit with decay rates being comparable to or larger than the
offsets of band extrema. The nonmetallic temperature dependence of transport
properties comes from the anomalous temperature dependence of scattering rates.
Near the transition temperature the heavy electrons and the light holes
contribute equally to the conductivity. This surprising coincidence is regarded
as the consequence of dominant intervalley scattering that precedes the
transition. The low-frequency peak in the optical spectra is identified and
attributed to the critical softening of the L-point collective mode.
| 0 | 1 | 0 | 0 | 0 | 0 |
Motional Ground State Cooling Outside the Lamb-Dicke Regime | We report Raman sideband cooling of a single sodium atom to its
three-dimensional motional ground state in an optical tweezer. Despite a large
Lamb-Dicke parameter, high initial temperature, and large differential light
shifts between the excited state and the ground state, we achieve a ground
state population of $93.5(7)$% after $53$ ms of cooling. Our technique includes
addressing high-order sidebands, where several motional quanta are removed by a
single laser pulse, and fast modulation of the optical tweezer intensity. We
demonstrate that Raman sideband cooling to the 3D motional ground state is
possible, even without tight confinement and low initial temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantifying the uncertainties in an ensemble of decadal climate predictions | Meaningful climate predictions must be accompanied by their corresponding
range of uncertainty. Quantifying the uncertainties is non-trivial, and
different methods have been suggested and used in the past. Here, we propose a
method that does not rely on any assumptions regarding the distribution of the
ensemble member predictions. The method is tested using the CMIP5 1981-2010
decadal predictions and is shown to perform better than two other methods
considered here. The improved estimate of the uncertainties is of great
importance for both practical use and for better assessing the significance of
the effects seen in theoretical studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks | Situational awareness in vehicular networks could be substantially improved
utilizing reliable trajectory prediction methods. More precise situational
awareness, in turn, results in notably better performance of critical safety
applications, such as Forward Collision Warning (FCW), as well as comfort
applications like Cooperative Adaptive Cruise Control (CACC). Therefore,
the vehicle trajectory prediction problem needs to be investigated in depth in
order to develop an end-to-end framework with the precision required by the
safety applications' controllers. This problem has been tackled in the
literature using different methods. However, machine learning, which is a
promising and emerging field with remarkable potential for time series
prediction, has not been explored enough for this purpose. In this paper, a
two-layer neural network-based system is developed which predicts the future
values of vehicle parameters, such as velocity, acceleration, and yaw rate, in
the first layer and then predicts the two-dimensional, i.e. longitudinal and
lateral, trajectory points based on the first layer's outputs. The performance
of the proposed framework has been evaluated in realistic cut-in scenarios from
Safety Pilot Model Deployment (SPMD) dataset and the results show a noticeable
improvement in the prediction accuracy in comparison with the kinematics model
which is the dominant model employed by the automotive industry. Both ideal and
non-ideal communication circumstances have been investigated in our system
evaluation. For the non-ideal case, an estimation step is included in the framework
before the parameter prediction block to handle the drawbacks of packet drops
or sensor failures and reconstruct the time series of vehicle parameters at a
desirable frequency.
| 1 | 0 | 0 | 1 | 0 | 0 |
Learning from Multiple Cities: A Meta-Learning Approach for Spatial-Temporal Prediction | Spatial-temporal prediction is a fundamental problem for constructing smart
city, which is useful for tasks such as traffic control, taxi dispatching, and
environmental policy making. Due to data collection mechanism, it is common to
see data collection with unbalanced spatial distributions. For example, some
cities may release taxi data for multiple years while others only release a few
days of data; some regions may have constant water quality data monitored by
sensors whereas some regions only have a small collection of water samples. In
this paper, we tackle the problem of spatial-temporal prediction for the cities
with only a short period of data collection. We aim to utilize the long-period
data from other cities via transfer learning. Different from previous studies
that transfer knowledge from one single source city to a target city, we are
the first to leverage information from multiple cities to increase the
stability of transfer. Specifically, our proposed model is designed as a
spatial-temporal network with a meta-learning paradigm. The meta-learning
paradigm learns a well-generalized initialization of the spatial-temporal
network, which can be effectively adapted to target cities. In addition, a
pattern-based spatial-temporal memory is designed to distill long-term temporal
information (i.e., periodicity). We conduct extensive experiments on two tasks:
traffic (taxi and bike) prediction and water quality prediction. The
experiments demonstrate the effectiveness of our proposed model over several
competitive baseline models.
| 1 | 0 | 0 | 1 | 0 | 0 |
Universal locally univalent functions and universal conformal metrics with constant curvature | We prove Runge-type theorems and universality results for locally univalent
holomorphic and meromorphic functions. Refining a result of M. Heins, we also
show that there is a universal bounded locally univalent function on the unit
disk. These results are used to prove that on any hyperbolic simply connected
plane domain there exist universal conformal metrics with prescribed constant
curvature.
| 0 | 0 | 1 | 0 | 0 | 0 |
Towards a New Interpretation of Separable Convolutions | In recent times, the use of separable convolutions in deep convolutional
neural network architectures has been explored. Several researchers, most
notably (Chollet, 2016) and (Ghosh, 2017) have used separable convolutions in
their deep architectures and have demonstrated state-of-the-art or close to
state-of-the-art performance. However, the underlying mechanism of action of
separable convolutions is still not fully understood. Although their
mathematical definition is well understood as a depthwise convolution followed
by a pointwise convolution, deeper interpretations such as the extreme
Inception hypothesis (Chollet, 2016) have failed to provide a thorough
explanation of their efficacy. In this paper, we propose a hybrid
interpretation that we believe is a better model for explaining the efficacy of
separable convolutions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Numerical Simulations of Regolith Sampling Processes | We present recent improvements in the simulation of regolith sampling
processes in microgravity using the numerical particle method smoothed particle
hydrodynamics (SPH). We use an elastic-plastic soil constitutive model for
large deformation and failure flows to capture the dynamical behaviour of regolith. In the
context of projected small body (asteroid or small moons) sample return
missions, we investigate the efficiency and feasibility of a particular
material sampling method: Brushes sweep material from the asteroid's surface
into a collecting tray. We analyze the influence of different material
parameters of regolith such as cohesion and angle of internal friction on the
sampling rate. Furthermore, we study the sampling process in two environments
by varying the surface gravity (Earth's and Phobos') and we apply different
rotation rates for the brushes. We find good agreement of our sampling
simulations on Earth with experiments and provide estimations for the influence
of the material properties on the collecting rate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Lexical analysis of automated accounts on Twitter | In recent years, social bots have been using increasingly more sophisticated,
challenging detection strategies. While many approaches and features have been
proposed, social bots evade detection and interact much like humans making it
difficult to distinguish real human accounts from bot accounts. For detection
systems, various features under the broader categories of account profile,
tweet content, network and temporal pattern have been utilised. The use of
tweet content features is limited to analysis of basic terms such as URLs,
hashtags, named entities and sentiment. Given a set of tweet contents with no
obvious pattern, can we distinguish contents produced by social bots from those
of humans? We aim to answer this question by analysing the lexical richness of
tweets produced by the respective accounts using large collections of different
datasets. Our results show a clear margin between the two classes in lexical
diversity, lexical sophistication and distribution of emoticons. We found that
the proposed lexical features significantly improve the performance of
classifying both account types. These features are useful for training a
standard machine learning classifier for effective detection of social bot
accounts. A new dataset is made freely available for further exploration.
| 1 | 0 | 0 | 0 | 0 | 0 |
The effect of the virial state of molecular clouds on the influence of feedback from massive stars | A set of Smoothed Particle Hydrodynamics simulations of the influence of
photoionising radiation and stellar winds on a series of 10$^{4}$M$_{\odot}$
turbulent molecular clouds with initial virial ratios of 0.7, 1.1, 1.5, 1.9 and
2.3 and initial mean densities of 136, 1135 and 9096\,cm$^{-3}$ are presented.
Reductions in star formation efficiency rates are found to be modest, in the
range $30\%-50\%$, and not to vary greatly across the parameter space. In no
case was star formation entirely terminated over the $\approx3$\,Myr duration
of the simulations. The fractions of material unbound by feedback are in the
range $20-60\%$, clouds with the lowest escape velocities being the most
strongly affected. Leakage of ionised gas leads to the HII regions rapidly
becoming underpressured. The destructive effects of ionisation are thus largely
not due to thermally--driven expansion of the HII regions, but to momentum
transfer by photoevaporation of fresh material. Our simulations have similar
global ionisation rates and we show that the effects of feedback upon them can
be adequately modelled as a steady injection of momentum, resembling a
momentum--conserving wind.
| 0 | 1 | 0 | 0 | 0 | 0 |
A contract-based method to specify stimulus-response requirements | A number of formal methods exist for capturing stimulus-response requirements
in a declarative form. Someone still needs to translate the resulting declarative
statements into imperative programs. The present article describes a method for
specification and verification of stimulus-response requirements in the form of
imperative program routines with conditionals and assertions. A program prover
then checks a candidate program directly against the stated requirements. The
article illustrates the approach by applying it to an ASM model of the Landing
Gear System, a widely used realistic example proposed for evaluating
specification and verification techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimation of the covariance structure of heavy-tailed distributions | We propose and analyze a new estimator of the covariance matrix that admits
strong theoretical guarantees under weak assumptions on the underlying
distribution, such as existence of moments of only low order. While estimation
of covariance matrices corresponding to sub-Gaussian distributions is
well-understood, much less is known in the case of heavy-tailed data. As K.
Balasubramanian and M. Yuan write, "data from real-world experiments oftentimes
tend to be corrupted with outliers and/or exhibit heavy tails. In such cases,
it is not clear that those covariance matrix estimators .. remain optimal" and
"..what are the other possible strategies to deal with heavy tailed
distributions warrant further studies." We take a step towards answering this
question and prove tight deviation inequalities for the proposed estimator that
depend only on the parameters controlling the "intrinsic dimension" associated
to the covariance matrix (as opposed to the dimension of the ambient space); in
particular, our results are applicable in the case of high-dimensional
observations.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Reinforcement Learning for De-Novo Drug Design | We propose a novel computational strategy for de novo design of molecules
with desired properties termed ReLeaSE (Reinforcement Learning for Structural
Evolution). Based on deep and reinforcement learning approaches, ReLeaSE
integrates two deep neural networks - generative and predictive - that are
trained separately but employed jointly to generate novel targeted chemical
libraries. ReLeaSE employs a simple representation of molecules by their SMILES
strings only. Generative models are trained with a stack-augmented memory network
to produce chemically feasible SMILES strings, and predictive models are
derived to forecast the desired properties of the de novo generated compounds.
In the first phase of the method, generative and predictive models are trained
separately with a supervised learning algorithm. In the second phase, both
models are trained jointly with the reinforcement learning approach to bias the
generation of new chemical structures towards those with the desired physical
and/or biological properties. In the proof-of-concept study, we have employed
the ReLeaSE method to design chemical libraries with a bias toward structural
complexity or biased toward compounds with either maximal, minimal, or specific
range of physical properties such as melting point or hydrophobicity, as well
as to develop novel putative inhibitors of JAK2. The approach proposed herein
can find a general use for generating targeted chemical libraries of novel
compounds optimized for either a single desired property or multiple
properties.
| 1 | 0 | 0 | 1 | 0 | 0 |
Batch Data Processing and Gaussian Two-Armed Bandit | We consider the two-armed bandit problem as applied to data processing if
there are two alternative processing methods available with different a priori
unknown efficiencies. One should determine the most effective method and
provide its predominant application. Gaussian two-armed bandit describes the
batch, and possibly parallel, processing when the same methods are applied to
sufficiently large packets of data and accumulated incomes are used for the
control. If the number of packets is large enough then such control does not
deteriorate the control performance, i.e. does not increase the minimax risk.
For example, in the case of 50 packets the minimax risk is about 2% larger than
the one corresponding to one-by-one optimal processing. However, this is
completely true only for methods with close efficiencies because otherwise
there may be significant expected losses at the initial stage of control when
both actions are applied turn-by-turn. To avoid significant losses at the
initial stage of control one should take initial packets of data having smaller
sizes.
| 0 | 0 | 1 | 1 | 0 | 0 |
Estimating Under Five Mortality in Space and Time in a Developing World Context | Accurate estimates of the under-5 mortality rate (U5MR) in a developing world
context are a key barometer of the health of a nation. This paper describes new
models to analyze survey data on mortality in this context. We are interested
in both spatial and temporal description; that is, we wish to estimate the U5MR
across regions and years, and to investigate the association between the U5MR
and spatially-varying covariate surfaces. We illustrate the methodology by
producing yearly estimates for subnational areas in Kenya over the period 1980
- 2014 using data from demographic health surveys (DHS). We use a binomial
likelihood with fixed effects for the urban/rural stratification to account for
the complex survey design. We carry out smoothing using Bayesian hierarchical
models with continuous spatial and temporally discrete components. A key
component of the model is an offset to adjust for bias due to the effects of
HIV epidemics. Substantively, there has been a sharp decline in U5MR in the
period 1980 - 2014, but large variability in estimated subnational rates
remains. A priority for future research is understanding this variability.
Temperature, precipitation and a measure of malaria infection prevalence were
candidates for inclusion in the covariate model.
| 0 | 0 | 0 | 1 | 0 | 0 |
Cyclotomic Construction of Strong External Difference Families in Finite Fields | Strong external difference family (SEDF) and its generalizations GSEDF,
BGSEDF in a finite abelian group $G$ are combinatorial designs introduced by
Paterson and Stinson [7] in 2016 and have applications in communication theory
to construct optimal strong algebraic manipulation detection codes. In this
paper we first present some general constructions of these combinatorial
designs by using difference sets and partial difference sets in $G$. Then, as
applications of the general constructions, we construct series of SEDF, GSEDF
and BGSEDF in finite fields by using cyclotomic classes.
| 1 | 0 | 1 | 0 | 0 | 0 |
Acoustic double negativity induced by position correlations within a disordered set of monopolar resonators | Using a Multiple Scattering Theory algorithm, we investigate numerically the
transmission of ultrasonic waves through a disordered locally resonant
metamaterial containing only monopolar resonators. By comparing the cases of a
perfectly random medium with its pair correlated counterpart, we show that the
introduction of short range correlation can substantially impact the effective
parameters of the sample. We report, notably, the opening of an acoustic
transparency window in the region of the hybridization band gap. Interestingly,
the transparency window is found to be associated with negative values of both
effective compressibility and density. Despite this feature being unexpected
for a disordered medium of monopolar resonators, we show that it can be fully
described analytically and that it gives rise to negative refraction of waves.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interoceptive robustness through environment-mediated morphological development | Typically, AI researchers and roboticists try to realize intelligent behavior
in machines by tuning parameters of a predefined structure (body plan and/or
neural network architecture) using evolutionary or learning algorithms. Another,
not unrelated, longstanding property of these systems is their brittleness
to slight aberrations, as highlighted by the growing deep learning literature
on adversarial examples. Here we show robustness can be achieved by evolving
the geometry of soft robots, their control systems, and how their material
properties develop in response to one particular interoceptive stimulus
(engineering stress) during their lifetimes. By doing so we realized robots
that were equally fit but more robust to extreme material defects (such as
might occur during fabrication or by damage thereafter) than robots that did
not develop during their lifetimes, or developed in response to a different
interoceptive stimulus (pressure). This suggests that the interplay between
changes in the containing systems of agents (body plan and/or neural
architecture) at different temporal scales (evolutionary and developmental)
along different modalities (geometry, material properties, synaptic weights)
and in response to different signals (interoceptive and external perception)
all dictate those agents' abilities to evolve or learn capable and robust
strategies.
| 1 | 0 | 0 | 0 | 0 | 0 |
Strichartz estimates for non-degenerate Schrödinger equations | We consider Schrödinger equation with a non-degenerate metric on the
Euclidean space. We study local in time Strichartz estimates for the
Schrödinger equation without loss of derivatives including the endpoint case.
In contrast to the Riemannian metric case, we need additional assumptions
for the well-posedness of our Schrödinger equation and for proving Strichartz
estimates without loss.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Diophantine equation $\sum_{j=1}^kjF_j^p=F_n^q$ | Let $F_n$ denote the $n^{th}$ term of the Fibonacci sequence. In this paper,
we investigate the Diophantine equation $F_1^p+2F_2^p+\cdots+kF_{k}^p=F_{n}^q$
in the positive integers $k$ and $n$, where $p$ and $q$ are given positive
integers. A complete solution is given if the exponents are included in the set
$\{1,2\}$. Based on the specific cases we could solve, and a computer search
with $p,q,k\le100$ we conjecture that beside the trivial solutions only
$F_8=F_1+2F_2+3F_3+4F_4$, $F_4^2=F_1+2F_2+3F_3$, and
$F_4^3=F_1^3+2F_2^3+3F_3^3$ satisfy the title equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Uncertainty measurement with belief entropy on interference effect in Quantum-Like Bayesian Networks | Social dilemmas have been regarded as the essence of evolution game theory,
in which the prisoner's dilemma game is the most famous metaphor for the
problem of cooperation. Recent findings revealed people's behavior violated the
Sure Thing Principle in such games. Classic probability methodologies have
difficulty explaining the underlying mechanisms of people's behavior. In this
paper, a novel quantum-like Bayesian Network was proposed to accommodate the
paradoxical phenomenon. The special network can take interference into
consideration, which is likely to be an efficient way to describe the
underlying mechanism. With the assistance of belief entropy, known as Deng
entropy, the paper proposes a Belief Distance to render the model practical.
Tested with empirical data, the proposed model is shown to be predictive and
effective.
| 1 | 0 | 0 | 0 | 0 | 0 |
The diffusion equation with nonlocal data | We study the diffusion (or heat) equation on a finite 1-dimensional spatial
domain, but we replace one of the boundary conditions with a "nonlocal
condition", through which we specify a weighted average of the solution over
the spatial interval. We provide conditions on the regularity of both the data
and weight for the problem to admit a unique solution, and also provide a
solution representation in terms of contour integrals. The solution and
well-posedness results rely upon an extension of the Fokas (or unified)
transform method to initial-nonlocal value problems for linear equations; the
necessary extensions are described in detail. Despite arising naturally from
the Fokas transform method, the uniqueness argument appears to be novel even
for initial-boundary value problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exploring Neural Transducers for End-to-End Speech Recognition | In this work, we perform an empirical comparison among the CTC,
RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech
recognition. We show that, without any language model, Seq2Seq and
RNN-Transducer models both outperform the best reported CTC models with a
language model, on the popular Hub5'00 benchmark. On our internal diverse
dataset, these trends continue - RNN-Transducer models rescored with a language
model after beam search outperform our best CTC models. These results simplify
the speech recognition pipeline so that decoding can now be expressed purely as
neural network operations. We also study how the choice of encoder architecture
affects the performance of the three models - when all encoder layers are
forward only, and when encoders downsample the input representation
aggressively.
| 1 | 0 | 0 | 0 | 0 | 0 |
Information Theory of Data Privacy | By combining Shannon's cryptography model with an assumption to the lower
bound of adversaries' uncertainty to the queried dataset, we develop a secure
Bayesian inference-based privacy model and then in some extent answer Dwork et
al.'s question [1]: "why Bayesian risk factors are the right measure for
privacy loss".
This model ensures an adversary can obtain only little information about each
individual from the model's output if the adversary's uncertainty about the
queried dataset is larger than the lower bound. Importantly, the assumption on
the lower bound almost always holds, especially for big datasets. Furthermore,
this model is flexible enough to balance privacy and utility: by using four
parameters to characterize the assumption, there are many approaches to balance
privacy and utility and to discuss the group privacy and the composition
privacy properties of this model.
| 1 | 0 | 0 | 0 | 0 | 0 |
A combined entropy and utility based generative model for large scale multiple discrete-continuous travel behaviour data | Generative models, either by simple clustering algorithms or deep neural
network architecture, have been developed as a probabilistic estimation method
for dimension reduction or to model the underlying properties of data
structures. Although their apparent use has largely been limited to image
recognition and classification, generative machine learning algorithms can be a
powerful tool for travel behaviour research. In this paper, we examine the
generative machine learning approach for analyzing multiple discrete-continuous
(MDC) travel behaviour data to understand the underlying heterogeneity and
correlation, increasing the representational power of such travel behaviour
models. We show that generative models are conceptually similar to choice
selection behaviour process through information entropy and variational
Bayesian inference. Specifically, we consider a restricted Boltzmann machine
(RBM) based algorithm with multiple discrete-continuous layers, formulated as a
variational Bayesian inference optimization problem. We systematically describe
the proposed machine learning algorithm and develop a process of analyzing
travel behaviour data from a generative learning perspective. We show parameter
stability from model analysis and simulation tests on an open dataset with
multiple discrete-continuous dimensions and a size of 293,330 observations. For
interpretability, we derive analytical methods for conditional probabilities as
well as elasticities. Our results indicate that latent variables in generative
models can accurately represent the joint distribution consistently w.r.t. multiple
discrete-continuous variables. Lastly, we show that our model can generate
statistically similar data distributions for travel forecasting and prediction.
| 1 | 0 | 0 | 1 | 0 | 0 |
Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation | We study a spectral initialization method that serves a key role in recent
work on estimating signals in nonconvex settings. Previous analysis of this
method focuses on the phase retrieval problem and provides only performance
bounds. In this paper, we consider arbitrary generalized linear sensing models
and present a precise asymptotic characterization of the performance of the
method in the high-dimensional limit. Our analysis also reveals a phase
transition phenomenon that depends on the ratio between the number of samples
and the signal dimension. When the ratio is below a minimum threshold, the
estimates given by the spectral method are no better than random guesses drawn
from a uniform distribution on the hypersphere, thus carrying no information;
above a maximum threshold, the estimates become increasingly aligned with the
target signal. The computational complexity of the method, as measured by the
spectral gap, is also markedly different in the two phases. Worked examples and
numerical results are provided to illustrate and verify the analytical
predictions. In particular, simulations show that our asymptotic formulas
provide accurate predictions for the actual performance of the spectral method
even at moderate signal dimensions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Understanding GANs: the LQG Setting | Generative Adversarial Networks (GANs) have become a popular method to learn
a probability model from data. In this paper, we aim to provide an
understanding of some of the basic issues surrounding GANs including their
formulation, generalization and stability on a simple benchmark where the data
has a high-dimensional Gaussian distribution. Even in this simple benchmark,
the GAN problem has not been well-understood as we observe that existing
state-of-the-art GAN architectures may fail to learn a proper generative
distribution owing to (1) stability issues (i.e., convergence to bad local
solutions or not converging at all), (2) approximation issues (i.e., having
improper global GAN optimizers caused by inappropriate GAN's loss functions),
and (3) generalizability issues (i.e., requiring a large number of samples for
training). In this setup, we propose a GAN architecture which recovers the
maximum-likelihood solution and demonstrates fast generalization. Moreover, we
analyze global stability of different computational approaches for the proposed
GAN optimization and highlight their pros and cons. Finally, we outline an
extension of our model-based approach to design GANs in more complex setups
than the considered Gaussian benchmark.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Rosenau-type approach to the approximation of the linear Fokker--Planck equation | The numerical approximation of the solution of the Fokker--Planck equation
is a challenging problem that has been extensively investigated starting from
the pioneering paper of Chang and Cooper in 1970. We revisit this problem in
the light of the approximation of the solution to the heat equation proposed by
Rosenau in 1992. Further, by means of the same idea, we address the problem of
a consistent approximation to higher-order linear diffusion equations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Analysis of Footnote Chasing and Citation Searching in an Academic Search Engine | In interactive information retrieval, researchers consider the user behavior
towards systems and search tasks in order to adapt search results by analyzing
their past interactions. In this paper, we analyze the user behavior towards
Marcia Bates' search stratagems such as 'footnote chasing' and 'citation
search' in an academic search engine. We performed a preliminary analysis of
their frequency and stage of use in the social sciences search engine sowiport.
In addition, we explored the impact of these stratagems on the whole search
process performance. We can conclude that the use of these two search
stratagems in real retrieval sessions leads to an improvement in precision, in
terms of positive interactions, of 16% when using footnote chasing and 17% for
the citation search stratagem.
| 1 | 0 | 0 | 0 | 0 | 0 |
Direct Mapping Hidden Excited State Interaction Patterns from ab initio Dynamics and Its Implications on Force Field Development | The excited states of polyatomic systems are rather complex, and often
exhibit meta-stable dynamical behaviors. Static analysis of reaction pathway
often fails to sufficiently characterize excited state motions due to their
highly non-equilibrium nature. Here, we propose a time series guided
clustering algorithm to generate the most relevant meta-stable patterns directly
from ab initio dynamic trajectories. Based on the knowledge of these
meta-stable patterns, we suggest an interpolation scheme with only a concrete
and finite set of known patterns to accurately predict the ground and excited
state properties of the entire dynamics trajectories. As illustrated with the
example of sinapic acids, the estimation errors for the ground and excited
states are very close, which indicates that one could predict ground- and
excited-state molecular properties with similar accuracy. These results may
provide insights for constructing an excited-state force field with energy
terms compatible with traditional ones.
| 0 | 1 | 0 | 1 | 0 | 0 |
You Are How You Walk: Uncooperative MoCap Gait Identification for Video Surveillance with Incomplete and Noisy Data | This work offers a design of a video surveillance system based on a soft
biometric -- gait identification from MoCap data. The main focus is on two
substantial issues of the video surveillance scenario: (1) the walkers do not
cooperate in providing learning data to establish their identities and (2) the
data are often noisy or incomplete. We show that only a few examples of human
gait cycles are required to learn a projection of raw MoCap data onto a
low-dimensional sub-space where the identities are well separable. Latent
features learned by Maximum Margin Criterion (MMC) method discriminate better
than any collection of geometric features. The MMC method is also highly robust
to noisy data and works properly even with only a fraction of joints tracked.
The overall workflow of the design is directly applicable for a day-to-day
operation based on the available MoCap technology and algorithms for gait
analysis. In the concept we introduce, a walker's identity is represented by a
cluster of gait data collected at their incidents within the surveillance
system: They are how they walk.
| 1 | 0 | 0 | 0 | 0 | 0 |
Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network | Recent work by Cohen \emph{et al.} has achieved state-of-the-art results for
learning spherical images in a rotation invariant way by using ideas from group
representation theory and noncommutative harmonic analysis. In this paper we
propose a generalization of this work that generally exhibits improved
performance, but from an implementation point of view is actually simpler. An
unusual feature of the proposed architecture is that it uses the
Clebsch--Gordan transform as its only source of nonlinearity, thus avoiding
repeated forward and backward Fourier transforms. The underlying ideas of the
paper generalize to constructing neural networks that are invariant to the
action of other compact groups.
| 0 | 0 | 0 | 1 | 0 | 0 |
Characterization of catastrophic instabilities: Market crashes as paradigm | Catastrophic events, though rare, do occur and when they occur, they have
devastating effects. It is, therefore, of utmost importance to understand the
complexity of the underlying dynamics and signatures of catastrophic events,
such as market crashes. For deeper understanding, we choose the US and Japanese
markets from 1985 onward, and study the evolution of the cross-correlation
structures of stock return matrices and their eigenspectra over different short
time-intervals or "epochs". A slight non-linear distortion is applied to the
correlation matrix computed for any epoch, leading to the emerging spectrum of
eigenvalues. The statistical properties of the emerging spectrum show that: (i)
the shape of the emerging spectrum reflects the market instability, (ii) the
smallest eigenvalue may be able to statistically distinguish the nature of a
market turbulence or crisis -- internal instability or external shock, and
(iii) the time-lagged smallest eigenvalue has a statistically significant
correlation with the mean market cross-correlation. The smallest eigenvalue
seems to indicate that the financial market has become more turbulent in a
similar way as the mean does. Yet we show features of the smallest eigenvalue
of the emerging spectrum that distinguish different types of market
instabilities related to internal or external causes. Based on the paradigmatic
character of financial time series for other complex systems, the capacity of
the emerging spectrum to understand the nature of instability may be a new
feature, which can be broadly applied.
| 0 | 0 | 0 | 0 | 0 | 1 |
Discovery of Intrinsic Quantum Anomalous Hall Effect in Organic Mn-DCA Lattice | The quantum anomalous Hall (QAH) phase is a novel topological state of matter
characterized by a nonzero quantized Hall conductivity without an external
magnetic field. Realizations of the QAH effect, however, are experimentally
challenging. Based on ab initio calculations, here we propose an intrinsic
QAH phase in the DCA Kagome lattice. The nontrivial topology of the Kagome bands is
confirmed by the nonzero Chern number, quantized Hall conductivity, and gapless
chiral edge states of the Mn-DCA lattice. A tight-binding (TB) model is further
constructed to clarify the origin of the QAH effect. Furthermore, its Curie
temperature, estimated to be ~253 K using Monte Carlo simulation, is
comparable with room temperature and higher than that of most two-dimensional
ferromagnetic thin films. Our findings present a reliable material platform for
the observation of QAH effect in covalent-organic frameworks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Temporal Convolution Networks for Real-Time Abdominal Fetal Aorta Analysis with Ultrasound | The automatic analysis of ultrasound sequences can substantially improve the
efficiency of clinical diagnosis. In this work we present our attempt to
automate the challenging task of measuring the vascular diameter of the fetal
abdominal aorta from ultrasound images. We propose a neural network
architecture consisting of three blocks: a convolutional layer for the
extraction of imaging features, a Convolution Gated Recurrent Unit (C-GRU) for
enforcing the temporal coherence across video frames and exploiting the
temporal redundancy of a signal, and a regularized loss function, called
\textit{CyclicLoss}, to impose our prior knowledge about the periodicity of the
observed signal. We present experimental evidence suggesting that the proposed
architecture can reach an accuracy substantially superior to previously
proposed methods, providing an average reduction of the mean squared error from
$0.31 mm^2$ (state-of-the-art) to $0.09 mm^2$, and a relative error reduction from
$8.1\%$ to $5.3\%$. The mean execution speed of the proposed approach, 289
frames per second, makes it suitable for real-time clinical use.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Role of Data Analysis in the Development of Intelligent Energy Networks | Data analysis plays an important role in the development of intelligent
energy networks (IENs). This article reviews and discusses the application of
data analysis methods for energy big data. The installation of smart energy
meters has provided a huge volume of data at different time resolutions,
suggesting data analysis is required for clustering, demand forecasting, energy
generation optimization, energy pricing, monitoring and diagnostics. The
currently adopted data analysis technologies for IENs include pattern
recognition, machine learning, data mining, statistical methods, etc. However,
existing methods for data analysis cannot fully meet the requirements for
processing the big data produced by the IENs and, therefore, more comprehensive
data analysis methods are needed to handle the increasing amount of data and to
mine more valuable information.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantification of the memory effect of steady-state currents from interaction-induced transport in quantum systems | Dynamics of a system in general depends on its initial state and how the
system is driven, but in many-body systems the memory is usually averaged out
during evolution. Here, interacting quantum systems without external
relaxations are shown to retain long-time memory effects in steady states. To
identify memory effects, we first show quasi-steady state currents form in
finite, isolated Bose and Fermi Hubbard models driven by interaction imbalance
and they become steady-state currents in the thermodynamic limit. By comparing
the steady state currents from different initial states or ramping rates of the
imbalance, long-time memory effects can be quantified. While the memory effects
of initial states are more ubiquitous, the memory effects of switching
protocols are mostly visible in interaction-induced transport in lattices. Our
simulations suggest the systems enter a regime governed by a generalized Fick's
law and memory effects lead to initial-state dependent diffusion coefficients.
We also identify conditions for enhancing memory effects and discuss possible
experimental implications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Geometrical optimization approach to isomerization: Models and limitations | We study laser-driven isomerization reactions through an excited electronic
state using the recently developed Geometrical Optimization procedure [J. Phys.
Chem. Lett. 6, 1724 (2015)]. The goal is to analyze whether an initial wave
packet in the ground state, with optimized amplitudes and phases, can be used
to enhance the yield of the reaction at faster rates, exploring how the
geometrical restrictions induced by the symmetry of the system impose
limitations in the optimization procedure. As an example we model the
isomerization in an oriented 2,2'-dimethyl biphenyl molecule with a simple
quartic potential. Using long (picosecond) pulses we find that the
isomerization can be achieved driven by a single pulse. The phase of the
initial superposition state does not affect the yield. However, using short
(femtosecond) pulses, one always needs a pair of pulses to force the reaction.
High yields can only be obtained by optimizing both the initial state, and the
wave packet prepared in the excited state, implying the well known pump-dump
mechanism.
| 0 | 1 | 0 | 0 | 0 | 0 |