title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
A general method to describe intersystem crossing dynamics in trajectory surface hopping | Intersystem crossing is a radiationless process that can take place in a
molecule irradiated by UV-Vis light, thereby playing an important role in many
environmental, biological and technological processes. This paper reviews
different methods to describe intersystem crossing dynamics, paying attention
to semiclassical trajectory theories, which are especially interesting because
they can be applied to large systems with many degrees of freedom. In
particular, a general trajectory surface hopping methodology recently developed
by the authors, which is able to include non-adiabatic and spin-orbit couplings
in excited-state dynamics simulations, is explained in detail. This method,
termed SHARC, can in principle include any arbitrary coupling, which makes it
generally applicable to photophysical and photochemical problems, including
those with explicit laser fields. A step-by-step derivation of the main
equations of motion employed in surface hopping based on the fewest-switches
method of Tully, adapted for the inclusion of spin-orbit interactions, is
provided. Special emphasis is put on describing the different possible choices
of the electronic bases in which spin-orbit coupling can be included in surface hopping,
highlighting the advantages and inconsistencies of the different approaches.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generating Synthetic Data for Real World Detection of DoS Attacks in the IoT | Denial of service attacks are especially pertinent to the internet of things
as devices have less computing power, memory and security mechanisms to defend
against them. The task of mitigating these attacks must therefore be redirected
from the device onto a network monitor. Network intrusion detection systems can
be used as an effective and efficient technique in internet of things systems
to offload computation from the devices and detect denial of service attacks
before they can cause harm. However, the solution of implementing a network
intrusion detection system for internet of things networks is not without
challenges due to the variability of these systems and specifically the
difficulty in collecting data. We propose a model-hybrid approach to model the
scale of the internet of things system and effectively train network intrusion
detection systems. Through bespoke datasets generated by the model, the IDS is
able to predict a wide spectrum of real-world attacks and, as demonstrated by
an experiment, construct more predictive datasets in a fraction of the time
required by other, more standard techniques.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Data-driven Model for Interaction-aware Pedestrian Motion Prediction in Object Cluttered Environments | This paper reports on a data-driven, interaction-aware motion prediction
approach for pedestrians in environments cluttered with static obstacles. When
navigating in such workspaces shared with humans, robots need accurate motion
predictions of the surrounding pedestrians. Human navigation behavior is mostly
influenced by their surrounding pedestrians and by the static obstacles in
their vicinity. In this paper we introduce a new model based on Long Short-Term
Memory (LSTM) neural networks, which is able to learn human motion behavior
from demonstrated data. To the best of our knowledge, this is the first
approach using LSTMs that incorporates both static obstacles and surrounding
pedestrians for trajectory forecasting. As part of the model, we introduce a
new way of encoding surrounding pedestrians based on a 1d-grid in polar angle
space. We evaluate the benefit of interaction-aware motion prediction and the
added value of incorporating static obstacles on both simulation and real-world
datasets by comparing with state-of-the-art approaches. The results show that
our new approach outperforms the other approaches while being very
computationally efficient and that taking into account static obstacles for
motion predictions significantly improves the prediction accuracy, especially
in cluttered environments.
| 1 | 0 | 0 | 0 | 0 | 0 |
Performance study of SKIROC2 and SKIROC2A with BGA testboard | SKIROC2 is an ASIC to read out the silicon pad detectors for the
electromagnetic calorimeter in the International Linear Collider.
Characteristics of SKIROC2 and the new version of SKIROC2A, packaged with BGA,
are measured with testboards and charge injection. The results on the
signal-to-noise ratio of both trigger and ADC output, threshold tuning
capability and timing resolution are presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Priority effects between annual and perennial plants | Dominance by annual plants has traditionally been considered a brief early
stage of ecological succession preceding inevitable dominance by competitive
perennials. A more recent, alternative view suggests that interactions between
annuals and perennials can result in priority effects, causing annual dominance
to persist if they are initially more common. Such priority effects would
complicate restoration of native perennial grasslands that have been invaded by
exotic annuals. However, the conditions under which these priority effects
occur remain unknown. Using a simple simulation model, we show that long-term
(500 years) priority effects are possible as long as the plants have low
fecundity and show an establishment-longevity tradeoff, with annuals having
competitive advantage over perennial seedlings. We also show that short-term
(up to 50 years) priority effects arise solely due to low fitness difference in
cases where perennials dominate in the long term. These results provide a
theoretical basis for predicting when restoration of annual-invaded grasslands
requires active removal of annuals and timely reintroduction of perennials.
| 0 | 0 | 0 | 0 | 1 | 0 |
Joining and decomposing reaction networks | In systems and synthetic biology, much research has focused on the behavior
and design of single pathways, while, more recently, experimental efforts have
focused on how cross-talk (coupling two or more pathways) or inhibiting
molecular function (isolating one part of the pathway) affects systems-level
behavior. However, the theory for tackling these larger systems in general has
lagged behind. Here, we analyze how joining networks (e.g., cross-talk) or
decomposing networks (e.g., inhibition or knock-outs) affects three properties
that reaction networks may possess---identifiability (recoverability of
parameter values from data), steady-state invariants (relationships among
species concentrations at steady state, used in model selection), and
multistationarity (capacity for multiple steady states, which correspond to
multiple cell decisions). Specifically, we prove results that clarify, for a
network obtained by joining two smaller networks, how properties of the smaller
networks can be inferred from or can imply similar properties of the original
network. Our proofs use techniques from computational algebraic geometry,
including elimination theory and differential algebra.
| 0 | 0 | 0 | 0 | 1 | 0 |
Evidence for structural damping in a high-stress silicon nitride nanobeam and its implications for quantum optomechanics | We resolve the thermal motion of a high-stress silicon nitride nanobeam at
frequencies far below its fundamental flexural resonance (3.4 MHz) using
cavity-enhanced optical interferometry. Over two decades, the displacement
spectrum is well-modeled by that of a damped harmonic oscillator driven by a
$1/f$ thermal force, suggesting that the loss angle of the beam material is
frequency-independent. The inferred loss angle at 3.4 MHz, $\phi = 4.5\cdot
10^{-6}$, agrees well with the quality factor ($Q$) of the fundamental beam
mode ($\phi = Q^{-1}$). In conjunction with $Q$ measurements made on higher
order flexural modes, and accounting for the mode dependence of stress-induced
loss dilution, we find that the intrinsic (undiluted) loss angle of the beam
changes by less than a factor of 2 between 50 kHz and 50 MHz. We discuss the
impact of such "structural damping" on experiments in quantum optomechanics, in
which the thermal force acting on a mechanical oscillator coupled to an optical
cavity is overwhelmed by radiation pressure shot noise. As an illustration, we
show that structural damping reduces the bandwidth of ponderomotive squeezing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient, sparse representation of manifold distance matrices for classical scaling | Geodesic distance matrices can reveal shape properties that are largely
invariant to non-rigid deformations, and thus are often used to analyze and
represent 3-D shapes. However, these matrices grow quadratically with the
number of points. Thus for large point sets it is common to use a low-rank
approximation to the distance matrix, which fits in memory and can be
efficiently analyzed using methods such as multidimensional scaling (MDS). In
this paper we present a novel sparse method for efficiently representing
geodesic distance matrices using biharmonic interpolation. This method exploits
knowledge of the data manifold to learn a sparse interpolation operator that
approximates distances using a subset of points. We show that our method is 2x
faster and uses 20x less memory than current leading methods for solving MDS on
large point sets, with similar quality. This enables analyses of large point
sets that were previously infeasible.
| 1 | 0 | 0 | 1 | 0 | 0 |
A First Principle Study on Iron Substituted LiNi(BO3) to use as Cathode Material for Li-ion Batteries | In this work, the structural stability and the electronic properties of
LiNiBO$_3$ and LiFe$_x$Ni$_{1-x}$BO$_3$ are studied using first-principles
calculations based on density functional theory. The calculated structural
parameters are in good agreement with the available theoretical data. The most
stable phases of the Fe substituted systems are predicted from the formation
energy hull generated using the cluster expansion method. Fe substitution of 66%
at the Ni site gives the most stable structure among all the Fe
substituted systems. The bonding mechanisms of the considered systems are
discussed based on the density of states (DOS) and charge density plot. The
detailed analysis of the stability, electronic structure, and the bonding
mechanisms suggests that these systems are promising cathode materials for Li
ion battery applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Noncommutative Knörrer type equivalences via noncommutative resolutions of singularities | We construct Knörrer type equivalences outside of the hypersurface case,
namely, between singularity categories of cyclic quotient surface singularities
and certain finite dimensional local algebras. This generalises Knörrer's
equivalence for singularities of Dynkin type A (between Krull dimensions $2$
and $0$) and yields many new equivalences between singularity categories of
finite dimensional algebras.
Our construction uses noncommutative resolutions of singularities, relative
singularity categories, and an idea of Hille & Ploog yielding strongly
quasi-hereditary algebras which we describe explicitly by building on Wemyss's
work on reconstruction algebras. Moreover, K-theory gives obstructions to
generalisations of our main result.
| 0 | 0 | 1 | 0 | 0 | 0 |
Torsion and K-theory for some free wreath products | We classify torsion actions of free wreath products of arbitrary compact
quantum groups and use this to prove that if $\mathbb{G}$ is a torsion-free
compact quantum group satisfying the strong Baum-Connes property, then
$\mathbb{G}\wr_{\ast}S_{N}^{+}$ also satisfies the strong Baum-Connes property.
We then compute the K-theory of free wreath products of classical and quantum
free groups by $SO_{q}(3)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
High Dimensional Inference in Partially Linear Models | We propose two semiparametric versions of the debiased Lasso procedure for
the model $Y_i = X_i\beta_0 + g_0(Z_i) + \epsilon_i$, where $\beta_0$ is high
dimensional but sparse (exactly or approximately). Both versions are shown to
have the same asymptotic normal distribution and do not require the minimal
signal condition for statistical inference of any component in $\beta_0$. Our
method also works when $Z_i$ is high dimensional, provided that the function
classes to which $E(X_{ij}|Z_i)$ and $E(Y_i|Z_i)$ belong exhibit certain sparsity
features, e.g., a sparse additive decomposition structure. We further develop a
simultaneous hypothesis testing procedure based on multiplier bootstrap. Our
testing method automatically takes into account the dependence structure
within the debiased estimates, and allows the number of tested components to be
exponentially high.
| 0 | 0 | 1 | 1 | 0 | 0 |
The Impact of Small-Cell Bandwidth Requirements on Strategic Operators | Small-cell deployment in licensed and unlicensed spectrum is considered to be
one of the key approaches to cope with the ongoing wireless data demand
explosion. Compared to traditional cellular base stations with large
transmission power, small-cells typically have relatively low transmission
power, which makes them attractive for some spectrum bands that have strict
power regulations, for example, the 3.5GHz band [1]. In this paper we consider
a heterogeneous wireless network consisting of one or more service providers
(SPs). Each SP operates in both macro-cells and small-cells, and provides
service to two types of users: mobile and fixed. Mobile users can only
associate with macro-cells whereas fixed users can connect to either macro- or
small-cells. The SP charges a price per unit rate for each type of service.
Each SP is given a fixed amount of bandwidth and splits it between macro- and
small-cells. Motivated by bandwidth regulations, such as those for the 3.5GHz
band, we assume a minimum amount of bandwidth has to be set aside for
small-cells. We study the optimal pricing and bandwidth allocation strategies
in both monopoly and competitive scenarios. In the monopoly scenario the
strategy is unique. In the competitive scenario there exists a unique Nash
equilibrium, which depends on the regulatory constraints. We also analyze the
social welfare achieved, and compare it to that without the small-cell
bandwidth constraints. Finally, we discuss implications of our results on the
effectiveness of the minimum bandwidth constraint on influencing small-cell
deployments.
| 1 | 0 | 1 | 0 | 0 | 0 |
Optimisation approach for the Monge-Ampere equation | This paper studies the numerical approximation of solution of the Dirichlet
problem for the fully nonlinear Monge-Ampere equation. In this approach, we
take advantage of reformulating the Monge-Ampere problem as an optimization
problem, to which we associate a well-defined functional whose minimum provides
us with the solution to the Monge-Ampere problem after resolving a Poisson
problem by the finite element Galerkin method. We present some numerical
examples, for which a good approximation is obtained in 68 iterations.
| 0 | 0 | 1 | 0 | 0 | 0 |
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver | We study the effect of adaptive mesh refinement on a parallel domain
decomposition solver of a linear system of algebraic equations. These concepts
need to be combined within a parallel adaptive finite element software. A
prototype implementation is presented for this purpose. It uses adaptive mesh
refinement with one level of hanging nodes. Two and three-level versions of the
Balancing Domain Decomposition based on Constraints (BDDC) method are used to
solve the arising system of algebraic equations. The basic concepts are
recalled and components necessary for the combination are studied in detail. Of
particular interest is the effect of disconnected subdomains, a typical output
of the employed mesh partitioning based on space-filling curves, on the
convergence and solution time of the BDDC method. It is demonstrated using a
large set of experiments that while both refined meshes and disconnected
subdomains have a negative effect on the convergence of BDDC, the number of
iterations remains acceptable. In addition, scalability of the three-level BDDC
solver remains good on up to a few thousands of processor cores. The largest
presented problem using adaptive mesh refinement has over 10^9 unknowns and is
solved on 2048 cores.
| 1 | 0 | 1 | 0 | 0 | 0 |
The Pragmatics of Indirect Commands in Collaborative Discourse | Today's artificial assistants are typically prompted to perform tasks through
direct, imperative commands such as \emph{Set a timer} or \emph{Pick up the
box}. However, to progress toward more natural exchanges between humans and
these assistants, it is important to understand the way non-imperative
utterances can indirectly elicit action of an addressee. In this paper, we
investigate command types in the setting of a grounded, collaborative game. We
focus on a less understood family of utterances for eliciting agent action,
locatives like \emph{The chair is in the other room}, and demonstrate how these
utterances indirectly command in specific game state contexts. Our work shows
that models with domain-specific grounding can effectively realize the
pragmatic reasoning that is necessary for more robust natural language
interaction.
| 1 | 0 | 0 | 0 | 0 | 0 |
The Dawn of the Post-Naturalness Era | In an imaginary conversation with Guido Altarelli, I express my views on the
status of particle physics beyond the Standard Model and its future prospects.
| 0 | 1 | 0 | 0 | 0 | 0 |
An effective likelihood-free approximate computing method with statistical inferential guarantees | Approximate Bayesian computing is a powerful likelihood-free method that has
grown increasingly popular since early applications in population genetics.
However, complications arise in the theoretical justification for Bayesian
inference conducted from this method with a non-sufficient summary statistic.
In this paper, we seek to re-frame approximate Bayesian computing within a
frequentist context and justify its performance by standards set on the
frequency coverage rate. In doing so, we develop a new computational technique
called approximate confidence distribution computing, yielding theoretical
support for the use of non-sufficient summary statistics in likelihood-free
methods. Furthermore, we demonstrate that approximate confidence distribution
computing extends the scope of approximate Bayesian computing to include
data-dependent priors without damaging the inferential integrity. This
data-dependent prior can be viewed as an initial `distribution estimate' of the
target parameter which is updated with the results of the approximate
confidence distribution computing method. A general strategy for constructing
an appropriate data-dependent prior is also discussed and is shown to often
increase the computing speed while maintaining statistical inferential
guarantees. We supplement the theory with simulation studies illustrating the
benefits of the proposed method, namely the potential for broader applications
and the increased computing speed compared to the standard approximate Bayesian
computing methods.
| 0 | 0 | 1 | 1 | 0 | 0 |
The word and conjugacy problems in lacunary hyperbolic groups | We study the word and conjugacy problems in lacunary hyperbolic groups
(briefly, LHG). In particular, we describe a necessary and sufficient condition
for decidability of the word problem in LHG. Then, based on the graded
small-cancellation theory of Olshanskii, we develop a general framework which
allows us to construct lacunary hyperbolic groups with word and conjugacy
problems highly controllable and flexible both in terms of computability and
computational complexity.
As an application, we show that for any recursively enumerable subset
$\mathcal{L} \subseteq \mathcal{A}^*$, where $\mathcal{A}^*$ is the set of
words over arbitrarily chosen non-empty finite alphabet $\mathcal{A}$, there
exists a lacunary hyperbolic group $G_{\mathcal{L}}$ such that the membership
problem for $ \mathcal{L}$ is `almost' linear time equivalent to the conjugacy
problem in $G_{\mathcal{L}}$. Moreover, for the mentioned group the word and
individual conjugacy problems are decidable in `almost' linear time.
Another application is the construction of a lacunary hyperbolic group with
`almost' linear time word problem and with all the individual conjugacy
problems being undecidable except the word problem.
As yet another application of the developed framework, we construct infinite
verbally complete groups and torsion free Tarski monsters, i.e. infinite
torsion-free groups all of whose proper subgroups are cyclic, with `almost'
linear time word and polynomial time conjugacy problems. These groups are
constructed as quotients of arbitrarily given non-elementary torsion-free
hyperbolic groups and are lacunary hyperbolic.
Finally, as a consequence of the main results, we answer a few open
questions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Prediction of ultra-narrow Higgs resonance in magnon Bose-condensates | Higgs resonance modes in condensed matter systems are generally broad,
meaning large decay widths or short relaxation times. This common feature has
obscured and limited their observation to a select few systems. Contrary to
this, the present work predicts that Higgs resonances in magnetic field
induced, three-dimensional magnon Bose-condensates have vanishingly small decay
widths. Specifically for parameters relating to TlCuCl$_3$, we find an energy
($\Delta_H$) to width ($\Gamma_H$) ratio $\Delta_H/\Gamma_H\sim500$, making
this the narrowest predicted Higgs mode in a condensed matter system, some two
orders of magnitude `narrower' than the sharpest condensed matter Higgs
observed so far.
| 0 | 1 | 0 | 0 | 0 | 0 |
Better Text Understanding Through Image-To-Text Transfer | Generic text embeddings are successfully used in a variety of tasks. However,
they are often learnt by capturing the co-occurrence structure from pure text
corpora, resulting in limitations of their ability to generalize. In this
paper, we explore models that incorporate visual information into the text
representation. Based on comprehensive ablation studies, we propose a
conceptually simple, yet well performing architecture. It outperforms previous
multimodal approaches on a set of well established benchmarks. We also improve
the state-of-the-art results for image-related text datasets, using orders of
magnitude less data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Two-sample instrumental variable analyses using heterogeneous samples | Instrumental variable analysis is a widely used method to estimate causal
effects in the presence of unmeasured confounding. When the instruments,
exposure and outcome are not measured in the same sample, Angrist and Krueger
(1992) suggested using two-sample instrumental variable (TSIV) estimators that
use sample moments from an instrument-exposure sample and an instrument-outcome
sample. However, this method is biased if the two samples are from
heterogeneous populations so that the distributions of the instruments are
different. In linear structural equation models, we derive a new class of TSIV
estimators that are robust to heterogeneous samples under the key assumption
that the structural relations in the two samples are the same. The widely used
two-sample two-stage least squares estimator belongs to this class. It is
generally not asymptotically efficient, although we find that it performs
similarly to the optimal TSIV estimator in most practical situations. We then
attempt to relax the linearity assumption. We find that, unlike one-sample
analyses, the TSIV estimator is not robust to a misspecified exposure model.
Additionally, to nonparametrically identify the magnitude of the causal effect,
the noise in the exposure must have the same distributions in the two samples.
However, this assumption is in general untestable because the exposure is not
observed in one sample. Nonetheless, we may still identify the sign of the
causal effect in the absence of homogeneity of the noise.
| 0 | 0 | 1 | 1 | 0 | 0 |
Combinatorial models for Schubert polynomials | Schubert polynomials are a basis for the polynomial ring that represent
Schubert classes for the flag manifold. In this paper, we introduce and develop
several new combinatorial models for Schubert polynomials that relate them to
other known bases including key polynomials and fundamental slide polynomials.
We unify these and existing models by giving simple bijections between the
combinatorial objects indexing each. In particular, we give a simple bijective
proof that the balanced tableaux of Edelman and Greene enumerate reduced
expressions and a direct combinatorial proof of Kohnert's algorithm for
computing Schubert polynomials. Further, we generalize the insertion algorithm
of Edelman and Greene to give a bijection between reduced expressions and pairs
of tableaux of the same key diagram shape and use this to give a simple
formula, directly in terms of reduced expressions, for the key polynomial
expansion of a Schubert polynomial.
| 0 | 0 | 1 | 0 | 0 | 0 |
Phase-tunable Josephson thermal router | Since the first studies of thermodynamics, heat transport has been a
crucial element for the understanding of any thermal system. Quantum mechanics
has introduced new appealing ingredients for the manipulation of heat currents,
such as the long-range coherence of the superconducting condensate. The latter
has been exploited by phase-coherent caloritronics, a young field of
nanoscience, to realize Josephson heat interferometers, which can control
electronic thermal currents as a function of the external magnetic flux. So
far, only one output temperature has been modulated, while multi-terminal
devices that allow the heat flux to be distributed among different reservoirs are
still missing. Here, we report the experimental realization of a phase-tunable
thermal router able to control the heat transferred between two terminals
residing at different temperatures. Thanks to the Josephson effect, our
structure allows us to regulate the thermal gradient between the output electrodes
until reaching its inversion. Together with interferometers, heat diodes and
thermal memories, the thermal router represents a fundamental step towards the
thermal conversion of non-linear electronic devices, and the realization of
caloritronic logic components.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topology Analysis of International Networks Based on Debates in the United Nations | In complex, high dimensional and unstructured data it is often difficult to
extract meaningful patterns. This is especially the case when dealing with
textual data. Recent studies in machine learning, information theory and
network science have developed several novel instruments to extract the
semantics of unstructured data, and harness it to build a network of relations.
Such approaches serve as an efficient tool for dimensionality reduction and
pattern detection. This paper applies semantic network science to extract
ideological proximity in the international arena, by focusing on the data from
General Debates in the UN General Assembly on topics of high salience to the
international community. The UN General Debate corpus (UNGDC) covers all high-level
debates in the UN General Assembly from 1970 to 2014, covering all UN member
states. The research proceeds in three main steps. First, Latent Dirichlet
Allocation (LDA) is used to extract the topics of the UN speeches, and
therefore semantic information. Each country is then assigned a vector
specifying the exposure to each of the topics identified. This intermediate
output is then used to construct a network of countries based on information
theoretical metrics where the links capture similar vectorial patterns in the
topic distributions. The topology of the networks is then analyzed through network
properties like density, path length and clustering. Finally, we identify
specific topological features of our networks using the map equation framework
to detect communities in our networks of countries.
| 1 | 0 | 1 | 1 | 0 | 0 |
Solving the incompressible surface Navier-Stokes equation by surface finite elements | We consider a numerical approach for the incompressible surface Navier-Stokes
equation on surfaces with arbitrary genus $g(\mathcal{S})$. The approach is
based on a reformulation of the equation in Cartesian coordinates of the
embedding $\mathbb{R}^3$, penalization of the normal component, a Chorin
projection method and discretization in space by surface finite elements for
each component. The approach thus requires only standard ingredients which most
finite element implementations can offer. We compare computational results with
discrete exterior calculus (DEC) simulations on a torus and demonstrate the
interplay of the flow field with the topology by showing realizations of the
Poincaré-Hopf theorem on $n$-tori.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exploring a potential energy surface by machine learning for characterizing atomic transport | We propose a machine-learning method for evaluating the potential barrier
governing atomic transport based on the preferential selection of dominant
points for the atomic transport. The proposed method generates numerous random
samples of the entire potential energy surface (PES) from a probabilistic
Gaussian process model of the PES, which enables defining the likelihood of the
dominant points. The robustness and efficiency of the method are demonstrated
on a dozen model cases for proton diffusion in oxides, in comparison with a
conventional nudged elastic band method.
| 0 | 1 | 0 | 0 | 0 | 0 |
Products of random walks on finite groups with moderate growth | In this article, we consider products of random walks on finite groups with
moderate growth and discuss their cutoffs in the total variation. Based on
several comparison techniques, we are able to identify the total variation
cutoff of discrete time lazy random walks with the Hellinger distance cutoff of
continuous time random walks. Along with the cutoff criterion for Laplace
transforms, we derive a series of equivalent conditions on the existence of
cutoffs, including the existence of pre-cutoffs, Peres' product condition and a
formula generated by the graph diameters. For illustration, we consider
products of Heisenberg groups and randomized products of finite cycles.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Theoretical Analysis of Sparse Recovery Stability of Dantzig Selector and LASSO | Dantzig selector (DS) and LASSO problems have attracted plenty of attention
in statistical learning, sparse data recovery and mathematical optimization. In
this paper, we provide a theoretical analysis of the sparse recovery stability
of these optimization problems in more general settings and from a new
perspective. We establish recovery error bounds for these optimization problems
under a mild assumption called weak range space property of a transposed design
matrix. This assumption is less restrictive than the well known sparse recovery
conditions such as restricted isometry property (RIP), null space property
(NSP) or mutual coherence. In fact, our analysis indicates that this assumption
is tight and cannot be relaxed for the standard DS problems in order to
maintain their sparse recovery stability. As a result, a series of new
stability results for DS and LASSO have been established under various matrix
properties, including the RIP with constant $\delta_{2k}< 1/\sqrt{2}$ and the
(constant-free) standard NSP of order $k.$ We prove that these matrix
properties can yield an identical recovery error bound for DS and LASSO with
stability coefficients being measured by the so-called Robinson's constant,
instead of the conventional RIP or NSP constant. To our knowledge, this is the
first time that the stability results with such a unified feature are
established for DS and LASSO problems. Different from the standard analysis in
this area of research, our analysis is carried out deterministically, and the
key analytic tools used in our analysis include the error bound of linear
systems due to Hoffman and Robinson and polytope approximation of symmetric
convex bodies due to Barvinok.
| 0 | 0 | 1 | 1 | 0 | 0 |
Input Perturbations for Adaptive Regulation and Learning | Design of adaptive algorithms for simultaneous regulation and estimation of
MIMO linear dynamical systems is a canonical reinforcement learning problem.
Efficient policies whose regret (i.e. increase in the cost due to uncertainty)
scales at a square-root rate of time have been studied extensively in the
recent literature. Nevertheless, existing strategies are computationally
intractable and require a priori knowledge of key system parameters. The only
exception is a randomized Greedy regulator, for which asymptotic regret bounds
have been recently established. However, randomized Greedy leads to probable
fluctuations in the trajectory of the system, which renders its finite time
regret suboptimal.
This work addresses the above issues by designing policies that utilize input
signal perturbations. We show that perturbed Greedy guarantees non-asymptotic
regret bounds of (nearly) square-root magnitude w.r.t. time. More generally, we
establish high probability bounds on both the regret and the learning accuracy
under arbitrary input perturbations. The settings where Greedy attains the
information theoretic lower bound of logarithmic regret are also discussed. To
obtain the results, state-of-the-art tools from martingale theory together with
the recently introduced method of policy decomposition are leveraged. Besides
adaptive regulators, analysis of input perturbations captures key applications
including remote sensing and distributed control.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nearly circular domains which are integrable close to the boundary are ellipses | The Birkhoff conjecture says that the boundary of a strictly convex
integrable billiard table is necessarily an ellipse. In this article, we
consider a stronger notion of integrability, namely integrability close to the
boundary, and prove a local version of this conjecture: a small perturbation of
an ellipse of small eccentricity which preserves integrability near the
boundary, is itself an ellipse. This extends the result in [1], where
integrability was assumed on a larger set. In particular, it shows that (local)
integrability near the boundary implies global integrability. One of the
crucial ideas in the proof consists in analyzing Taylor expansion of the
corresponding action-angle coordinates with respect to the eccentricity
parameter, deriving and studying higher order conditions for the preservation
of integrable rational caustics.
| 0 | 0 | 1 | 0 | 0 | 0 |
Empirical study on social groups in pedestrian evacuation dynamics | Pedestrian crowds often include social groups, i.e. pedestrians that walk
together because of social relationships. They show characteristic
configurations and influence the dynamics of the entire crowd. In order to
investigate the impact of social groups on evacuations we performed an
empirical study with pupils. Several evacuation runs with groups of different
sizes and different interactions were performed. New group parameters are
introduced which allow us to describe the dynamics of the groups and the
configuration of the group members quantitatively. The analysis shows a
possible decrease of evacuation times for large groups due to self-ordering
effects. Social groups can be approximated as ellipses that orientate along
their direction of motion. Furthermore, explicitly cooperative behaviour among
group members leads to a stronger aggregation of group members and an
intermittent way of evacuation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Capacity of the Aperture-Constrained AWGN Free-Space Communication Channel | In this paper, we derive upper and lower bounds as well as a simple
closed-form approximation for the capacity of the continuous-time, bandlimited,
additive white Gaussian noise channel in a three-dimensional free-space
electromagnetic propagation environment subject to constraints on the total
effective antenna aperture area of the link and a total transmitter power
constraint. We assume that the communication range is much larger than the
radius of the sphere containing the antennas at both ends of the link, and we
show that, in general, the capacity can only be achieved by transmitting
multiple spatially-multiplexed data streams simultaneously over the channel.
Furthermore, the lower bound on capacity can be approached asymptotically by
transmitting the data streams between a pair of physically-realizable
distributed antenna arrays at either end of the link. A consequence of this
result is that, in general, communication at close to the maximum achievable
data rate on a deep-space communication link can be achieved in practice if and
only if the communication system utilizes spatial multiplexing over a
distributed MIMO antenna array. Such an approach to deep-space communication
does not appear to be envisioned currently by any of the international space
agencies or any commercial space companies. A second consequence is that the
capacity of a long-range free-space communication link, if properly utilized,
grows asymptotically as a function of the square root of the received SNR
rather than only logarithmically in the received SNR.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quasiparticle entropy in superconductor/normal metal/superconductor proximity junctions in the diffusive limit | We discuss the quasiparticle entropy and heat capacity of a dirty
superconductor-normal metal-superconductor junction. In the case of short
junctions, the inverse proximity effect extending in the superconducting banks
plays a crucial role in determining the thermodynamic quantities. In this case,
commonly used approximations can violate thermodynamic relations between
supercurrent and quasiparticle entropy. We provide analytical and numerical
results as a function of different geometrical parameters. Quantitative
estimates for the heat capacity can be relevant for the design of caloritronic
devices or radiation sensor applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Kepler Study of Starspot Lifetimes with Respect to Light Curve Amplitude and Spectral Type | Wide-field high precision photometric surveys such as Kepler have produced
reams of data suitable for investigating stellar magnetic activity of cooler
stars. Starspot activity produces quasi-sinusoidal light curves whose phase and
amplitude vary as active regions grow and decay over time. Here we investigate,
firstly, whether there is a correlation between the size of starspots - assumed
to be related to the amplitude of the sinusoid - and their decay timescale and,
secondly, whether any such correlation depends on the stellar effective
temperature. To determine this, we computed the autocorrelation functions of
the light curves of samples of stars from Kepler and fitted them with apodised
periodic functions. The light curve amplitudes, representing spot size were
measured from the root-mean-squared scatter of the normalised light curves. We
used a Markov chain Monte Carlo method to measure the periods and decay timescales of
the light curves. The results show a correlation between the decay time of
starspots and their inferred size. The decay time also depends strongly on the
temperature of the star. Cooler stars have spots that last much longer, in
particular for stars with longer rotational periods. This is consistent with
current theories of diffusive mechanisms causing starspot decay. We also find
that the Sun is not unusually quiet for its spectral type - stars with
solar-type rotation periods and temperatures tend to have (comparatively)
smaller starspots than stars with mid-G or later spectral types.
| 0 | 1 | 0 | 0 | 0 | 0 |
Plasmon-Driven Acceleration in a Photo-Excited Nanotube | A plasmon-assisted channeling acceleration can be realized with a large
channel, possibly at the nanometer scale. Carbon nanotubes (CNTs) are the most
typical example of nano-channels that can confine a large number of channeled
particles in a photon-plasmon coupling condition. This paper presents a
theoretical and numerical study on the concept of high-field charge
acceleration driven by photo-excited Luttinger-liquid plasmons (LLP) in a
nanotube. An analytic description of the plasmon-assisted laser acceleration is
detailed with practical acceleration parameters, in particular with
specifications of a typical tabletop femtosecond laser system. The maximally
achievable acceleration gradients and energy gains within dephasing lengths and
CNT lengths are discussed with respect to laser-incident angles and CNT-filling
ratios.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards Evaluating Size Reduction Techniques for Software Model Checking | Formal verification techniques are widely used for detecting design flaws in
software systems. Formal verification can be done by transforming an already
implemented source code to a formal model and attempting to prove certain
properties of the model (e.g. that no erroneous state can occur during
execution). Unfortunately, transformations from source code to a formal model
often yield large and complex models, making the verification process
inefficient and costly. In order to reduce the size of the resulting model,
optimization transformations can be used. Such optimizations include common
algorithms known from compiler design and different program slicing techniques.
Our paper describes a framework for transforming C programs to a formal model,
enhanced by various optimizations for size reduction. We evaluate and compare
several optimization algorithms regarding their effect on the size of the model
and the efficiency of the verification. Results show that different
optimizations are more suitable for certain models, justifying the need for a
framework that includes several algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quantum-continuum simulation of underpotential deposition at electrified metal-solution interfaces | The underpotential deposition of transition metal ions is a critical step in
many electrosynthetic approaches. While underpotential deposition has been
intensively studied at the atomic level, first-principles calculations in
vacuum can strongly underestimate the stability of underpotentially deposited
metals. It has been shown recently that the consideration of co-adsorbed anions
can deliver more reliable descriptions of underpotential deposition reactions;
however, the influence of additional key environmental factors such as the
electrification of the interface under applied voltage and the activities of
the ions in solution have yet to be investigated. In this work, copper
underpotential deposition on gold is studied under realistic electrochemical
conditions using a quantum-continuum model of the electrochemical interface. We
report here on the influence of surface electrification, concentration effects,
and anion co-adsorption on the stability of the copper underpotential
deposition layer on the gold (100) surface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Neural Expectation Maximization | Many real world tasks such as reasoning and physical interaction require
identification and manipulation of conceptual entities. A first step towards
solving these tasks is the automated discovery of distributed symbol-like
representations. In this paper, we explicitly formalize this problem as
inference in a spatial mixture model where each component is parametrized by a
neural network. Based on the Expectation Maximization framework we then derive
a differentiable clustering method that simultaneously learns how to group and
represent individual entities. We evaluate our method on the (sequential)
perceptual grouping task and find that it is able to accurately recover the
constituent objects. We demonstrate that the learned representations are useful
for next-step prediction.
| 1 | 0 | 0 | 1 | 0 | 0 |
Gold Standard Online Debates Summaries and First Experiments Towards Automatic Summarization of Online Debate Data | Usage of online textual media is steadily increasing. Daily, more and more
news stories, blog posts and scientific articles are added to the online
volumes. These are all freely accessible and have been employed extensively in
multiple research areas, e.g. automatic text summarization, information
retrieval, information extraction, etc. Meanwhile, online debate forums have
recently become popular, but have remained largely unexplored. For this reason,
there are no sufficient resources of annotated debate data available for
conducting research in this genre. In this paper, we collected and annotated
debate data for an automatic summarization task. Similar to extractive gold
standard summary generation, our data contains sentences worthy of inclusion in
a summary. Five human annotators performed this task. Inter-annotator
agreement, based on semantic similarity, is 36% for Cohen's kappa and 48% for
Krippendorff's alpha. Moreover, we also implement an extractive summarization
system for online debates and discuss prominent features for the task of
summarizing online debate data automatically.
| 1 | 0 | 0 | 0 | 0 | 0 |
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors | We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning
algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior)
classifier, i.e., a randomized classifier obtained by a risk-sensitive
perturbation of the weights of a learned classifier. Entropy-SGD works by
optimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem
that the prior is chosen independently of the data. Indeed, available
implementations of Entropy-SGD rapidly obtain zero training error on random
labels and the same holds of the Gibbs posterior. In order to obtain a valid
generalization bound, we rely on a result showing that data-dependent priors
obtained by stochastic gradient Langevin dynamics (SGLD) yield valid PAC-Bayes
bounds provided the target distribution of SGLD is $\epsilon$-differentially
private. We observe that test error on MNIST and CIFAR10 falls within the
(empirically nonvacuous) risk bounds computed under the assumption that SGLD
reaches stationarity. In particular, Entropy-SGLD can be configured to yield
relatively tight generalization bounds and still fit real labels, although
these same settings do not obtain state-of-the-art performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction | Learning sophisticated feature interactions behind user behaviors is critical
in maximizing CTR for recommender systems. Despite great progress, existing
methods seem to have a strong bias towards low- or high-order interactions, or
require expert feature engineering. In this paper, we show that it is
possible to derive an end-to-end learning model that emphasizes both low- and
high-order feature interactions. The proposed model, DeepFM, combines the power
of factorization machines for recommendation and deep learning for feature
learning in a new neural network architecture. Compared to the latest Wide \&
Deep model from Google, DeepFM has a shared input to its "wide" and "deep"
parts, with no need of feature engineering besides raw features. Comprehensive
experiments are conducted to demonstrate the effectiveness and efficiency of
DeepFM over the existing models for CTR prediction, on both benchmark data and
commercial data.
| 1 | 0 | 0 | 0 | 0 | 0 |
How a small quantum bath can thermalize long localized chains | We investigate the stability of the many-body localized (MBL) phase for a
system in contact with a single ergodic grain, modelling a Griffiths region
with low disorder. Our numerical analysis provides evidence that even a small
ergodic grain consisting of only 3 qubits can delocalize a localized chain, as
soon as the localization length exceeds a critical value separating localized
and extended regimes of the whole system. We present a simple theory,
consistent with the arguments in [Phys. Rev. B 95, 155129 (2017)], that assumes
a system to be locally ergodic unless the local relaxation time, determined by
Fermi's Golden Rule, is larger than the inverse level spacing. This theory
predicts a critical value for the localization length that is perfectly
consistent with our numerical calculations. We analyze in detail the behavior
of local operators inside and outside the ergodic grain, and find excellent
agreement of numerics and theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Transfer Learning for Brain-Computer Interfaces: An Euclidean Space Data Alignment Approach | Almost all EEG-based brain-computer interfaces (BCIs) need some labeled
subject-specific data to calibrate a new subject, as neural responses are
different across subjects to even the same stimulus. So, a major challenge in
developing high-performance and user-friendly BCIs is to cope with such
individual differences so that the calibration can be reduced or even
completely eliminated. This paper focuses on the latter. More specifically, we
consider an offline application scenario, in which we have unlabeled EEG trials
from a new subject, and would like to accurately label them by leveraging
auxiliary labeled EEG trials from other subjects in the same task. To
accommodate the individual differences, we propose a novel unsupervised
approach to align the EEG trials from different subjects in the Euclidean space
to make them more consistent. It has three desirable properties: 1) the aligned
trials lie in the Euclidean space and can be used by any Euclidean space
signal processing and machine learning approach; 2) it can be computed very
efficiently; and, 3) it does not need any labeled trials from the new subject.
Experiments on motor imagery and event-related potentials demonstrated the
effectiveness and efficiency of our approach.
| 0 | 0 | 0 | 1 | 1 | 0 |
Linear and bilinear restriction to certain rotationally symmetric hypersurfaces | Conditional on Fourier restriction estimates for elliptic hypersurfaces, we
prove optimal restriction estimates for polynomial hypersurfaces of revolution
for which the defining polynomial has non-negative coefficients. In particular,
we obtain uniform--depending only on the dimension and polynomial
degree--estimates for restriction with affine surface measure, slightly beyond
the bilinear range. The main step in the proof of our linear result is an
(unconditional) bilinear adjoint restriction estimate for pieces at different
scales.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mixed one-bit compressive sensing with applications to overexposure correction for CT reconstruction | When a measurement falls outside the quantization or measurable range, it
becomes saturated and cannot be used in classical reconstruction methods. For
example, in C-arm angiography systems, which provide projection radiography,
fluoroscopy, digital subtraction angiography, and are widely used for medical
diagnoses and interventions, the limited dynamic range of C-arm flat detectors
leads to overexposure in some projections during an acquisition, such as
imaging relatively thin body parts (e.g., the knee). Aiming at overexposure
correction for computed tomography (CT) reconstruction, we in this paper
propose a mixed one-bit compressive sensing (M1bit-CS) to acquire information
from both regular and saturated measurements. This method is inspired by the
recent progress on one-bit compressive sensing, which deals with only sign
observations. Its successful applications imply that information carried by
saturated measurements is useful to improve recovery quality. For the proposed
M1bit-CS model, an alternating direction method of multipliers is developed and
an iterative saturation detection scheme is established. Then we evaluate
M1bit-CS on one-dimensional signal recovery tasks. In some experiments, the
performance of the proposed algorithms on mixed measurements is almost the same
as recovery on unsaturated ones with the same amount of measurements. Finally,
we apply the proposed method to overexposure correction for CT reconstruction
on a phantom and a simulated clinical image. The results are promising, as the
typical streaking artifacts and capping artifacts introduced by saturated
projection data are effectively reduced, yielding significant error reduction
compared with existing algorithms based on extrapolation.
| 1 | 0 | 1 | 0 | 0 | 0 |
Stochastic Gradient Descent on Highly-Parallel Architectures | There is an increased interest in building data analytics frameworks with
advanced algebraic capabilities both in industry and academia. Many of these
frameworks, e.g., TensorFlow and BIDMach, implement their compute-intensive
primitives in two flavors---as multi-thread routines for multi-core CPUs and as
highly-parallel kernels executed on GPU. Stochastic gradient descent (SGD) is
the most popular optimization method for model training implemented extensively
on modern data analytics platforms. While the data-intensive properties of SGD
are well-known, there is an intense debate on which of the many SGD variants is
better in practice. In this paper, we perform a comprehensive study of parallel
SGD for training generalized linear models. We consider the impact of three
factors -- computing architecture (multi-core CPU or GPU), synchronous or
asynchronous model updates, and data sparsity -- on three measures---hardware
efficiency, statistical efficiency, and time to convergence. In the process, we
design an optimized asynchronous SGD algorithm for GPU that leverages warp
shuffling and cache coalescing for data and model access. We draw several
interesting findings from our extensive experiments with logistic regression
(LR) and support vector machines (SVM) on five real datasets. For synchronous
SGD, GPU always outperforms parallel CPU---they both outperform a sequential
CPU solution by more than 400X. For asynchronous SGD, parallel CPU is the
safest choice while GPU with data replication is better in certain situations.
The choice between synchronous GPU and asynchronous CPU depends on the task and
the characteristics of the data. As a reference, our best implementation
outperforms TensorFlow and BIDMach consistently. We hope that our insights
provide a useful guide for applying parallel SGD to generalized linear models.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mixed Precision Training | Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increase. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating point numbers have a limited numerical range compared to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models including convolution neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units.
| 1 | 0 | 0 | 1 | 0 | 0 |
An elementary representation of the higher-order Jacobi-type differential equation | We investigate the differential equation for the Jacobi-type polynomials
which are orthogonal on the interval $[-1,1]$ with respect to the classical
Jacobi measure and an additional point mass at one endpoint. This scale of
higher-order equations was introduced by J. and R. Koekoek in 1999 essentially
by using special function methods. In this paper, a completely elementary
representation of the Jacobi-type differential operator of any even order is
given. This enables us to trace the orthogonality relation of the Jacobi-type
polynomials back to their differential equation. Moreover, we establish a new
factorization of the Jacobi-type operator which gives rise to a recurrence
relation with respect to the order of the equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Theory of $L$-edge spectroscopy of strongly correlated systems | X-ray absorption spectroscopy measured at the $L$-edge of transition metals
(TMs) is a powerful element-selective tool providing direct information about
the correlation effects in the $3d$ states. The theoretical modeling of the
$2p\rightarrow3d$ excitation processes remains challenging for
contemporary \textit{ab initio} electronic structure techniques, due to strong
core-hole and multiplet effects influencing the spectra. In this work we
present a realization of the method combining the density-functional theory
with multiplet ligand field theory, proposed in Haverkort et al.
(this https URL), Phys. Rev. B 85, 165113
(2012). In this approach a single-impurity Anderson model (SIAM) is
constructed, with almost all parameters obtained from first principles, and
then solved to obtain the spectra. In our implementation we adopt the language
of the dynamical mean-field theory and utilize the local density of states and
the hybridization function, projected onto TM $3d$ states, in order to
construct the SIAM. The developed computational scheme is applied to calculate
the $L$-edge spectra for several TM monoxides. A very good agreement between
the theory and experiment is found for all studied systems. The effect of
core-hole relaxation, hybridization discretization, possible extensions of the
method as well as its limitations are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pseudo-spin Skyrmions in the Phase Diagram of Cuprate Superconductors | Topological states of matter are at the root of some of the most fascinating
phenomena in condensed matter physics. Here we argue that skyrmions in the
pseudo-spin space related to an emerging SU(2) symmetry illuminate many
mysterious properties of the pseudogap phase in under-doped cuprates. We detail
the role of the SU(2) symmetry in controlling the phase diagram of the
cuprates, in particular how a cascade of phase transitions explains the emergence
of the pseudogap, superconducting and charge modulation phases seen at low
temperature. We specify the structure of the charge modulations inside the
vortex core below $T_{c}$, as well as in a wide temperature region above
$T_{c}$, which is a signature of the skyrmion topological structure. We argue
that the underlying SU(2) symmetry is the main structure controlling the
emergent complexity of excitations at the pseudogap scale $T^{*}$. The theory
yields a gapping of a large part of the anti-nodal region of the Brillouin
zone, along with $q=0$ phase transitions, of both nematic and loop currents
characters.
| 0 | 1 | 0 | 0 | 0 | 0 |
Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method | We consider large scale empirical risk minimization (ERM) problems, where
both the number of samples and the problem dimension are large. In these cases, most
second order methods are infeasible due to the high cost in both computing the
Hessian over all samples and computing its inverse in high dimensions. In this
paper, we propose a novel adaptive sample size second-order method, which
reduces the cost of computing the Hessian by solving a sequence of ERM problems
corresponding to a subset of samples and lowers the cost of computing the
Hessian inverse using a truncated eigenvalue decomposition. We show that while
we geometrically increase the size of the training set at each stage, a single
iteration of the truncated Newton method is sufficient to solve the new ERM
within its statistical accuracy. Moreover, for a large number of samples we are
allowed to double the size of the training set at each stage, and the proposed
method subsequently reaches the statistical accuracy of the full training set
approximately after two effective passes. In addition to this theoretical
result, we show empirically on a number of well known data sets that the
proposed truncated adaptive sample size algorithm outperforms stochastic
alternatives for solving ERM problems.
| 1 | 0 | 1 | 1 | 0 | 0 |
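A simplified sketch, under stated assumptions, of the two ingredients described above for L2-regularized logistic regression: the training subset is doubled at each stage, and each stage takes one Newton step whose Hessian inverse is replaced by a truncated eigendecomposition. The rank, regularization, and initial subset size are illustrative, and this is not the paper's exact algorithm or analysis.

```python
import numpy as np

def truncated_newton_step(w, X, y, reg=1e-3, rank=10):
    """One Newton step for L2-regularized logistic loss in which the Hessian
    inverse is replaced by a rank-`rank` truncated eigendecomposition."""
    n = len(y)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n + reg * w
    H = (X.T * (p * (1 - p))) @ X / n + reg * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(H)
    top = np.argsort(vals)[-rank:]                      # keep only the largest eigenvalues
    step = vecs[:, top] @ ((vecs[:, top].T @ grad) / vals[top])
    return w - step

def adaptive_sample_size_newton(X, y, n0=128, rank=10):
    """Double the training subset at each stage and take one truncated Newton step."""
    n, d = X.shape
    w, m = np.zeros(d), n0
    while True:
        m_eff = min(m, n)
        w = truncated_newton_step(w, X[:m_eff], y[:m_eff], rank=rank)
        if m_eff == n:
            return w
        m *= 2

rng = np.random.default_rng(0)
X = rng.normal(size=(4096, 50))
y = (X @ rng.normal(size=50) > 0).astype(float)
w_hat = adaptive_sample_size_newton(X, y, rank=20)
```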
Large covers and sharp resonances of hyperbolic surfaces | Let $\Gamma$ be a convex co-compact discrete group of isometries of the
hyperbolic plane $\mathbb{H}^2$, and $X=\Gamma\backslash \mathbb{H}^2$ the
associated surface. In this paper we investigate the behaviour of resonances of
the Laplacian for large degree covers of $X$ given by a finite index normal
subgroup of $\Gamma$. Using various techniques of thermodynamical formalism and
representation theory, we prove two new existence results of "sharp non-trivial
resonances" close to $\Re(s)=\delta_\Gamma$, both in the large degree limit,
for abelian covers and also infinite index congruence subgroups of
$SL2(\mathbb{Z})$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computational Models of Tutor Feedback in Language Acquisition | This paper investigates the role of tutor feedback in language learning using
computational models. We compare two dominant paradigms in language learning:
interactive learning and cross-situational learning, which differ primarily in
the role of social feedback such as gaze or pointing. We analyze the
relationship between these two paradigms and propose a new mixed paradigm that
combines the two paradigms and allows algorithms to be tested in experiments that
combine no feedback and social feedback. To deal with mixed feedback
experiments, we develop new algorithms and show how they perform with respect
to traditional k-nearest-neighbor (kNN) and prototype approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-Saturated Throughput Analysis of Coexistence of Wi-Fi and Cellular With Listen-Before-Talk in Unlicensed Spectrum | This paper analyzes the coexistence performance of Wi-Fi and cellular
networks conditioned on non-saturated traffic in the unlicensed spectrum. Under
this condition, the time-domain behavior of a cellular small-cell base station
(SCBS) with a listen-before-talk (LBT) procedure is modeled as a Markov chain,
and it is combined with a Markov chain which describes the time-domain behavior
of a Wi-Fi access point. Using the proposed model, this study finds the optimal
contention window size of cellular SCBSs in which total throughput of both
networks is maximized while satisfying the required throughput of each network,
under the given traffic densities of both networks. This will serve as a
guideline for cellular operators on performing LBT at cellular SCBSs as the
traffic volumes of both networks change over time.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Dynamic Model for Traffic Flow Prediction Using Improved DRN | Real-time traffic flow prediction not only provides travelers with
reliable traffic information that saves them time, but also helps traffic
management agencies manage the traffic system, greatly improving
the efficiency of the transportation system. Traditional traffic flow
prediction approaches usually need a large amount of data but still give poor
performance. With the development of deep learning, researchers have begun to pay
attention to artificial neural networks (ANNs) such as RNNs and LSTMs. However,
these ANNs are very time-consuming. In our research, we improve the Deep
Residual Network and build a dynamic model which previous researchers have rarely
used. We first integrate the input and output of the $i^{th}$ layer into the
input of the $(i+1)^{th}$ layer and prove that each layer will fit a simpler
function so that the error rate will be much smaller. Then, we use the concept
of online learning in our model to update pre-trained model during prediction.
Our result shows that our model has higher accuracy than some state-of-the-art
models. In addition, our dynamic model can perform better in practical
applications.
| 0 | 0 | 0 | 1 | 0 | 0 |
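One possible reading of the layer-wise connection described above ("integrate the input and output of the i-th layer into the input of the (i+1)-th layer") is a dense, concatenation-based forward pass. The sketch below shows that pattern in NumPy with illustrative layer sizes and a ReLU activation; it is not the paper's full dynamic model or its online-learning update.

```python
import numpy as np

def dense_concat_forward(x, weights):
    """Forward pass in which each layer receives the previous layer's input
    concatenated with its output (one reading of 'integrate the input and
    output of the i-th layer into the input of the (i+1)-th layer')."""
    h = x
    for W, b in weights:
        out = np.maximum(0.0, h @ W + b)       # illustrative ReLU layer
        h = np.concatenate([h, out], axis=-1)  # pass both input and output onwards
    return h

# Illustrative shapes: input dimension 4, each layer adds 8 features
rng = np.random.default_rng(0)
in_dim, width, weights = 4, 8, []
d = in_dim
for _ in range(3):
    weights.append((rng.normal(scale=0.1, size=(d, width)), np.zeros(width)))
    d += width
features = dense_concat_forward(rng.normal(size=(5, in_dim)), weights)  # shape (5, 4 + 3*8)
```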
Think globally, fit locally under the Manifold Setup: Asymptotic Analysis of Locally Linear Embedding | Since its introduction in 2000, the locally linear embedding (LLE) has been
widely applied in data science. We provide an asymptotic analysis of the LLE
under the manifold setup. We show that for the general manifold, asymptotically
we may not obtain the Laplace-Beltrami operator, and the result may depend on
the non-uniform sampling, unless a correct regularization is chosen. We also
derive the corresponding kernel function, which indicates that the LLE is not a
Markov process. A comparison with the other commonly applied nonlinear
algorithms, particularly the diffusion map, is provided, and its relationship
with the locally linear regression is also discussed.
| 0 | 0 | 1 | 1 | 0 | 0 |
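For reference, a short NumPy sketch of the barycentric-weight step of standard LLE, including the local regularization term whose choice the analysis above identifies as critical; the neighbourhood size and regularization constant are illustrative.

```python
import numpy as np

def lle_weights(X, k=10, reg=1e-3):
    """Barycentric reconstruction weights of standard LLE, including the local
    regularization term whose choice the analysis above identifies as critical."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        nbrs = np.argsort(d2)[1:k + 1]              # k nearest neighbours, excluding the point itself
        Z = X[nbrs] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)          # regularized local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                    # weights sum to one
    return W

rng = np.random.default_rng(0)
W = lle_weights(rng.normal(size=(200, 3)), k=10, reg=1e-3)
print(np.allclose(W.sum(axis=1), 1.0))              # rows sum to one
```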
Relative energetics of acetyl-histidine protomers with and without Zn2+ and a benchmark of energy methods | We studied acetylhistidine (AcH), bare or microsolvated with a zinc cation by
simulations in isolation. First, a global search for minima of the potential
energy surface, combining empirical and first-principles methods, is
performed individually for each of the five possible protonation states.
Comparing the most stable structures between tautomeric forms of negatively
charged AcH shows a clear preference for conformers with the neutral imidazole
ring protonated at the N-epsilon-2 atom. When adding a zinc cation to the
system, the situation is reversed and N-delta-1-protonated structures are
energetically more favorable. The obtained minimum structures then served as the basis
for a benchmark study to examine the accuracy of commonly applied levels of
theory, i.e. force fields, semi-empirical methods, density-functional
approximations (DFA), and wavefunction-based methods with respect to high-level
coupled-cluster calculations, i.e. the DLPNO-CCSD(T) method. All tested force
fields and semi-empirical methods show a poor performance in reproducing the
energy hierarchies of conformers, in particular of systems involving the zinc
cation. Meta-GGA, hybrid, double hybrid DFAs, and the MP2 method are able to
describe the energetics of the reference method within chemical accuracy, i.e.
with a mean absolute error of less than 1 kcal/mol. The best performance is found
for the double hybrid DFA B3LYP+XYG3 with a mean absolute error of 0.7 kcal/mol
and a maximum error of 1.8 kcal/mol. While MP2 performs similarly to
B3LYP+XYG3, its computational cost, i.e. timing, is increased by a factor of 4
due to the large basis sets required for accurate results.
| 0 | 0 | 0 | 0 | 1 | 0 |
"Space is blue and birds fly through it" | Quantum mechanics is not about 'quantum states': it is about values of
physical variables. I give a short fresh presentation and update on the
$relational$ perspective on the theory, and a comment on its philosophical
implications.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the Geometry of Nash and Correlated Equilibria with Cumulative Prospect Theoretic Preferences | It is known that the set of all correlated equilibria of an n-player
non-cooperative game is a convex polytope and includes all the Nash equilibria.
Further, the Nash equilibria all lie on the boundary of this polytope. We study
the geometry of both these equilibrium notions when the players have cumulative
prospect theoretic (CPT) preferences. The set of CPT correlated equilibria
includes all the CPT Nash equilibria but it need not be a convex polytope. We
show that it can, in fact, be disconnected. However, all the CPT Nash
equilibria continue to lie on its boundary. We also characterize the sets of
CPT correlated equilibria and CPT Nash equilibria for all 2x2 games.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multipoint Radiation Induced Ignition of Dust Explosions: Turbulent Clustering of Particles and Increased Transparency | It is known that unconfined dust explosions consist of a relatively weak
primary (turbulent) deflagration followed by a devastating secondary
explosion. The secondary explosion may propagate with a speed of up to 1000 m/s
producing overpressures of over 8-10 atm. Since detonation is the only
established theory that allows a rapid burning producing a high pressure that
can be sustained in open areas, the generally accepted view was that the
mechanism explaining the high rate of combustion in dust explosions is
deflagration to detonation transition. In the present work we propose a
theoretical substantiation of the alternative propagation mechanism explaining
origin of the secondary explosion producing the high speeds of combustion and
high overpressures in unconfined dust explosions. We show that clustering of
dust particles in a turbulent flow gives rise to a significant increase of the
thermal radiation absorption length ahead of the advancing flame front. This
effect ensures that clusters of dust particles are exposed to and heated by the
radiation from hot combustion products of large gaseous explosions for a
sufficiently long time to become multi-point ignition kernels in a large volume
ahead of the advancing flame front. The ignition time of the fuel-air mixture by
the radiatively heated clusters of particles is considerably reduced compared to
the ignition time by an isolated particle. The radiation-induced multi-point
ignitions of a large volume of fuel-air ahead of the primary flame efficiently
increase the total flame area, giving rise to the secondary explosion, which
results in high rates of combustion and overpressures required to account for
the observed level of overpressures and damages in unconfined dust explosions,
such as e.g. the 2005 Buncefield explosion and several vapor cloud explosions
of severity similar to that of the Buncefield incident.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling of the Latent Embedding of Music using Deep Neural Network | While both the data volume and heterogeneity of digital music content are
huge, it has become increasingly important and convenient to build a
recommendation or search system to facilitate surfacing this content to the
user or consumer community. Most recommendation models fall into two
primary categories: collaborative filtering based and content based approaches.
Variants of the collaborative filtering approach suffer from the
common issues of the so-called "cold start" and "long tail" problems, where there is
not much user interaction data to reveal user opinions or affinities on the
content, and also from a distortion towards popular content. Content-based
approaches are sometimes limited by the richness of the available content data,
resulting in a heavily biased and coarse recommendation result. In recent
years, deep neural networks have enjoyed great success in large-scale image
and video recognition. In this paper, we propose and experiment with using a deep
convolutional neural network to imitate how the human brain processes hierarchical
structures in the auditory signals, such as music, speech, etc., at various
timescales. This approach can be used to discover the latent factor models of
the music based upon acoustic hyper-images that are extracted from the raw
audio waves of music. These latent embeddings can be used either as features to
feed to subsequent models, such as collaborative filtering, or to build
similarity metrics between songs, or to classify music based on the labels for
training such as genre, mood, sentiment, etc.
| 1 | 0 | 0 | 0 | 0 | 0 |
Intense keV isolated attosecond pulse generation by orthogonally polarized multicycle midinfrared two-color laser field | We theoretically investigate the generation of intense keV attosecond pulses
in an orthogonally polarized multicycle midinfrared two-color laser field. It
is demonstrated that multiple continuum-like humps, which have a spectral width
of about twenty orders of harmonics and an intensity of about one order higher
than adjacent normal harmonic peaks, are generated under proper two-color
delays, owing to the reduction of the number of electron-ion recollisions and
suppression of inter-half-cycle interference effect of multiple electron
trajectories when the long wavelength midinfrared driving field is used. Using
the semiclassical trajectory model, we have revealed the two-dimensional
manipulation of the electron-ion recollision process, which agrees well with
the time frequency analysis. By filtering these humps, intense isolated
attosecond pulses are directly generated without any phase compensation. Our
proposal provides a simple technique to generate intense isolated attosecond
pulses with various central photon energies covering the multi-keV spectral
regime by using multicycle driving pulses with high pump energy in experiment.
| 0 | 1 | 0 | 0 | 0 | 0 |
Games with Costs and Delays | We demonstrate the usefulness of adding delay to infinite games with
quantitative winning conditions. In a delay game, one of the players may delay
her moves to obtain a lookahead on her opponent's moves. We show that
determining the winner of delay games with winning conditions given by parity
automata with costs is EXPTIME-complete and that exponentially bounded lookahead
is both sufficient and in general necessary. Thus, although the parity
condition with costs is a quantitative extension of the parity condition, our
results show that adding costs does not increase the complexity of delay games
with parity conditions.
Furthermore, we study a new phenomenon that appears in quantitative delay
games: lookahead can be traded for the quality of winning strategies and vice
versa. We determine the extent of this tradeoff. In particular, even the
smallest lookahead allows the quality of an optimal strategy to be improved from
the worst possible value to almost the smallest possible one. Thus, the benefit
of introducing lookahead is twofold: not only does it allow the delaying player
to win games she would lose without it, but lookahead also allows her to improve
the quality of her winning strategies in games she wins even without lookahead.
| 1 | 0 | 0 | 0 | 0 | 0 |
Compile-Time Extensions to Hybrid ODEs | Reachability analysis for hybrid systems is an active area of development and
has resulted in many promising prototype tools. Most of these tools allow users
to express hybrid systems as automata with a set of ordinary differential
equations (ODEs) associated with each state, as well as rules for transitions
between states. Significant effort goes into developing, verifying, and
correctly implementing those tools. As such, it is desirable to expand the
scope of applicability of such tools as far as possible. With this goal, we
show how compile-time transformations can be used to extend the basic hybrid
ODE formalism traditionally supported in hybrid reachability tools such as
SpaceEx or Flow*. The extension supports certain types of partial derivatives
and equational constraints. These extensions allow users to express, among
other things, the Euler-Lagrange equation, and to capture practically
relevant constraints that arise naturally in mechanical systems. Achieving this
level of expressiveness requires using a binding-time analysis (BTA), program
differentiation, symbolic Gaussian elimination, and abstract interpretation
using interval analysis. Except for BTA, the other components are either
readily available or can be easily added to most reachability tools. The paper
therefore focuses on presenting both the declarative and algorithmic
specifications for the BTA phase, and establishes the soundness of the
algorithmic specifications with respect to the declarative one.
| 1 | 0 | 0 | 0 | 0 | 0 |
Sorted Concave Penalized Regression | The Lasso is biased. Concave penalized least squares estimation (PLSE) takes
advantage of signal strength to reduce this bias, leading to sharper error
bounds in prediction, coefficient estimation and variable selection. For
prediction and estimation, the bias of the Lasso can also be reduced by taking
a smaller penalty level than what selection consistency requires, but such
smaller penalty level depends on the sparsity of the true coefficient vector.
The sorted L1 penalized estimation (Slope) was proposed for adaptation to such
smaller penalty levels. However, the advantages of concave PLSE and Slope do
not subsume each other. We propose sorted concave penalized estimation to
combine the advantages of concave and sorted penalizations. We prove that
sorted concave penalties adaptively choose the smaller penalty level and at the
same time benefit from signal strength, especially when a significant
proportion of signals are stronger than the corresponding adaptively selected
penalty levels. A local convex approximation, which extends the local linear
and quadratic approximations to sorted concave penalties, is developed to
facilitate the computation of sorted concave PLSE and proven to possess desired
prediction and estimation error bounds. We carry out a unified treatment of
penalty functions in a general optimization setting, including the penalty
levels and concavity of the above mentioned sorted penalties and mixed
penalties motivated by Bayesian considerations. Our analysis of prediction and
estimation errors requires the restricted eigenvalue condition on the design,
not beyond, and provides selection consistency under a required minimum signal
strength condition in addition. Thus, our results also sharpen existing
results on concave PLSE by removing the upper sparse eigenvalue component of
the sparse Riesz condition.
| 0 | 0 | 1 | 0 | 0 | 0 |
A New Fully Polynomial Time Approximation Scheme for the Interval Subset Sum Problem | The interval subset sum problem (ISSP) is a generalization of the well-known
subset sum problem. Given a set of intervals
$\left\{[a_{i,1},a_{i,2}]\right\}_{i=1}^n$ and a target integer $T,$ the ISSP
is to find a set of integers, at most one from each interval, such that their
sum best approximates the target $T$ but cannot exceed it. In this paper, we
first study the computational complexity of the ISSP. We show that the ISSP is
relatively easy to solve compared to the 0-1 Knapsack problem (KP). We also
identify several subclasses of the ISSP which are polynomial time solvable
(with high probability), albeit the problem is generally NP-hard. Then, we
propose a new fully polynomial time approximation scheme (FPTAS) for solving
the general ISSP problem. The time and space complexities of the proposed
scheme are ${\cal O}\left(n \max\left\{1 / \epsilon,\log n\right\}\right)$ and
${\cal O}\left(n+1/\epsilon\right),$ respectively, where $\epsilon$ is the
relative approximation error. To the best of our knowledge, the proposed scheme
has almost the same time complexity but a significantly lower space complexity
compared to the best known scheme. Both the correctness and efficiency of the
proposed scheme are validated by numerical simulations. In particular, the
proposed scheme successfully solves ISSP instances with $n=100,000$ and
$\epsilon=0.1\%$ within one second.
| 1 | 0 | 1 | 0 | 0 | 0 |
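To make the problem statement concrete, here is a brute-force exact solver for tiny ISSP instances; it enumerates one choice (or no choice) per interval and is exponential in the number of intervals, so it illustrates the problem definition only and is not the proposed FPTAS.

```python
from itertools import product

def issp_exact(intervals, T):
    """Brute-force exact solver for tiny ISSP instances: pick at most one integer
    from each interval so that the sum is as large as possible without exceeding T.
    Exponential in the number of intervals; for illustration only."""
    best_sum, best_pick = 0, []
    choices = [[None] + list(range(a, b + 1)) for a, b in intervals]
    for pick in product(*choices):
        s = sum(v for v in pick if v is not None)
        if best_sum < s <= T:
            best_sum, best_pick = s, [v for v in pick if v is not None]
    return best_sum, best_pick

print(issp_exact([(3, 5), (10, 12), (20, 21)], T=30))   # -> (30, [10, 20])
```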
Data Dependent Kernel Approximation using Pseudo Random Fourier Features | Kernel methods are a powerful and flexible approach to solving many problems in
machine learning. Due to the pairwise evaluations in kernel methods, the
complexity of kernel computation grows as the data size increases; thus the
applicability of kernel methods is limited for large scale datasets. Random
Fourier Features (RFF) have been proposed to scale kernel methods to
large datasets by approximating the kernel function using randomized Fourier
features. While this method has proved very popular, it still has shortcomings
that limit its effective use. As RFF samples the randomized features from a
distribution independent of the training data, it requires a sufficiently large
number of feature expansions to match the performance of kernelized
classifiers, and this number is proportional to the number of samples in the
dataset. Thus, reducing the number of feature dimensions is necessary to
effectively scale to large datasets. In this paper, we propose a kernel
approximation method in a data dependent way, coined Pseudo Random Fourier
Features (PRFF), for reducing the number of feature dimensions and also to
improve the prediction performance. The proposed approach is evaluated on
classification and regression problems and compared with RFF, orthogonal
random features and the Nyström approach.
| 1 | 0 | 0 | 1 | 0 | 0 |
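For context, the standard data-independent RFF map for the RBF kernel, which the abstract above takes as its starting point, can be sketched in a few lines of NumPy; the kernel bandwidth and feature count below are illustrative, and this is the baseline rather than the proposed data-dependent PRFF.

```python
import numpy as np

def random_fourier_features(X, D=2000, gamma=1.0, seed=0):
    """Standard data-independent RFF map approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Z = random_fourier_features(X)
K_approx = Z @ Z.T
K_exact = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
print(np.max(np.abs(K_approx - K_exact)))   # Monte Carlo error, shrinking as D grows
```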
Estimating Historical Hourly Traffic Volumes via Machine Learning and Vehicle Probe Data: A Maryland Case Study | This paper focuses on the problem of estimating historical traffic volumes
between sparsely-located traffic sensors, which transportation agencies need to
accurately compute statewide performance measures. To this end, the paper
examines applications of vehicle probe data, automatic traffic recorder counts,
and neural network models to estimate hourly volumes in the Maryland highway
network, and proposes a novel approach that combines neural networks with an
existing profiling method. On average, the proposed approach yields 24% more
accurate estimates than volume profiles, which are currently used by
transportation agencies across the US to compute statewide performance
measures. The paper also quantifies the value of using vehicle probe data in
estimating hourly traffic volumes, which provides important managerial insights
to transportation agencies interested in acquiring this type of data. For
example, results show that volumes can be estimated with a mean absolute
percent error of about 21% at locations where the average number of observed probes
is between 30 and 47 vehicles/hr, which provides a useful guideline for
assessing the value of probe vehicle data from different vendors.
| 1 | 0 | 0 | 1 | 0 | 0 |
Monitoring of Wild Pseudomonas Biofilm Strain Conditions Using Statistical Characterisation of Scanning Electron Microscopy Images | The present paper proposes a novel method of quantification of the variation
in biofilm architecture, in correlation with the alteration of growth
conditions that include, variations of substrate and conditioning layer. The
polymeric biomaterial serving as substrates are widely used in implants and
indwelling medical devices, while the plasma proteins serve as the conditioning
layer. The present method uses descriptive statistics of FESEM images of
biofilms obtained under a variety of growth conditions. Here we explore
texture and fractal analysis techniques to identify the most
discriminatory features capable of predicting the difference in
biofilm growth conditions. We initially extract some statistical features of
biofilm images on bare polymer surfaces, followed by those on the same
substrates adsorbed with two different types of plasma proteins, viz. Bovine
serum albumin (BSA) and Fibronectin (FN), for two different adsorption times.
The present analysis has the potential to act as a futuristic technology for
developing a computerized monitoring system in hospitals with automated image
analysis and feature extraction, which may be used to predict the growth
profile of an emerging biofilm on surgical implants or similar medical
applications.
| 0 | 0 | 0 | 1 | 1 | 0 |
Continuous Measurement of an Atomic Current | We are interested in dynamics of quantum many-body systems under continuous
observation, and its physical realizations involving cold atoms in lattices. In
the present work we focus on continuous measurement of atomic currents in
lattice models, including the Hubbard model. We describe a Cavity QED setup,
where measurement of a homodyne current provides a faithful representation of
the atomic current as a function of time. We employ the quantum optical
description in terms of a diffusive stochastic Schrödinger equation to follow
the time evolution of the atomic system conditioned on observing a given
homodyne current trajectory, thus accounting for the competition between the
Hamiltonian evolution and measurement back-action. As an illustration, we
discuss minimal models of atomic dynamics and continuous current measurement on
rings with synthetic gauge fields, involving both real space and synthetic
dimension lattices (represented by internal atomic states). Finally, by `not
reading' the current measurements the time evolution of the atomic system is
governed by a master equation, where - depending on the microscopic details of
our CQED setups - we effectively engineer a current coupling of our system to a
quantum reservoir. This provides novel scenarios of dissipative dynamics
generating `dark' pure quantum many-body states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Zero-Inflated Autoregressive Conditional Duration Model for Discrete Trade Durations with Excessive Zeros | In finance, durations between successive transactions are usually modeled by
the autoregressive conditional duration model based on a continuous
distribution omitting frequent zero values. Zero durations can be caused by
either split transactions or independent transactions. We propose a discrete
model allowing for excessive zero values based on the zero-inflated negative
binomial distribution with score dynamics. We establish the invertibility of
the score filter. Additionally, we derive sufficient conditions for the
consistency and asymptotic normality of the maximum likelihood estimator of the model
parameters. In an empirical study of DJIA stocks, we find that split
transactions cause on average 63% of zero values. Furthermore, the loss of
decimal places in the proposed model is less severe than the incorrect treatment of
zero values in continuous models.
| 0 | 0 | 0 | 0 | 0 | 1 |
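A small sketch of the static zero-inflated negative binomial probability mass function underlying the model above, using SciPy; the parameters are illustrative and the score-driven dynamics of the paper are not included.

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(k, pi, n, p):
    """Probability mass function of a zero-inflated negative binomial distribution:
    with probability `pi` the count is an 'extra' zero, otherwise it follows NB(n, p)."""
    k = np.asarray(k)
    base = nbinom.pmf(k, n, p)
    return np.where(k == 0, pi + (1 - pi) * base, (1 - pi) * base)

# Illustrative parameters; the score-driven time variation of the paper is not modeled here
print(zinb_pmf([0, 1, 2, 3], pi=0.3, n=2.0, p=0.5))
```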
Analog Experiments on Tensile Strength of Dusty and Cometary Matter | The tensile strength of small dusty bodies in the solar system is determined
by the interaction between the composing grains. In the transition regime
between small and sticky dust ($\rm \mu m$) and non-cohesive large grains (mm),
particles still stick to each other but are easily separated. In laboratory
experiments we find that thermal creep gas flow at low ambient pressure
generates an overpressure sufficient to overcome the tensile strength. For the
first time it allows a direct measurement of the tensile strength of
individual, very small (sub)-mm aggregates which consist of only tens of grains
in the (sub)-mm size range. We traced the disintegration of aggregates by
optical imaging in ground-based as well as microgravity experiments and present
first results for basalt, palagonite and vitreous carbon samples with tensile
strengths of up to a few hundred Pa. These measurements show that low tensile
strength can be the result of building loose aggregates with compact (sub)-mm
units. This favours a combined cometary formation scenario by aggregation into
compact aggregates and gravitational instability of these units.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Compiling DNNFs without Determinism | State-of-the-art knowledge compilers generate deterministic subsets of DNNF,
which have been recently shown to be exponentially less succinct than DNNF. In
this paper, we propose a new method to compile DNNFs without enforcing
determinism necessarily. Our approach is based on compiling deterministic DNNFs
with the addition of auxiliary variables to the input formula. These variables
are then existentially quantified from the deterministic structure in linear
time, which would lead to a DNNF that is equivalent to the input formula and
not necessarily deterministic. On the theoretical side, we show that the new
method could generate exponentially smaller DNNFs than deterministic ones, even
by adding a single auxiliary variable. Further, we show that various existing
techniques that introduce auxiliary variables to the input formulas can be
employed in our framework. On the practical side, we empirically demonstrate
that our new method can significantly advance DNNF compilation on certain
benchmarks.
| 1 | 0 | 0 | 0 | 0 | 0 |
Algebraic infinite delooping and derived destabilization | Working over the prime field of characteristic two, consequences of the
Koszul duality between the Steenrod algebra and the big Dyer-Lashof algebra are
studied, with an emphasis on the interplay between instability for the Steenrod
algebra action and that for the Dyer-Lashof operations. The central algebraic
framework is the category of length-graded modules over the Steenrod algebra
equipped with an unstable action of the Dyer-Lashof algebra, with compatibility
via the Nishida relations.
A first ingredient is a functor defined on modules over the Steenrod algebra
that arose in the work of Kuhn and McCarty on the homology of infinite loop
spaces. This functor is given in terms of derived functors of destabilization
from the category of modules over the Steenrod algebra to unstable modules,
enriched by taking into account the action of Dyer-Lashof operations.
A second ingredient is the derived functors of the Dyer-Lashof
indecomposables functor to length-graded modules over the Steenrod algebra.
These are related to functors used by Miller in his study of a spectral
sequence to calculate the homology of an infinite delooping. An important fact
is that these functors can be calculated as the homology of an explicit Koszul
complex with terms expressed as certain Steinberg functors. The latter are
quadratic dual to the more familiar Singer functors.
By exploiting the explicit complex built from the Singer functors which
calculates the derived functors of destabilization, Koszul duality leads to an
algebraic infinite delooping spectral sequence. This is conceptually similar to
Miller's spectral sequence, but there seems to be no direct relationship.
The spectral sequence sheds light on the relationship between unstable
modules over the Steenrod algebra and all modules.
| 0 | 0 | 1 | 0 | 0 | 0 |
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling | To train an inference network jointly with a deep generative topic model,
making it both scalable to big corpora and fast in out-of-sample prediction, we
develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet
allocation, which infers posterior samples via a hybrid of stochastic-gradient
MCMC and autoencoding variational Bayes. The generative network of WHAI has a
hierarchy of gamma distributions, while the inference network of WHAI is a
Weibull upward-downward variational autoencoder, which integrates a
deterministic-upward deep neural network, and a stochastic-downward deep
generative model based on a hierarchy of Weibull distributions. The Weibull
distribution can be used to well approximate a gamma distribution with an
analytic Kullback-Leibler divergence, and has a simple reparameterization via
the uniform noise; these properties help to efficiently compute the gradients of the evidence
lower bound with respect to the parameters of the inference network. The
effectiveness and efficiency of WHAI are illustrated with experiments on big
corpora.
| 0 | 0 | 0 | 1 | 0 | 0 |
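The reparameterization mentioned above can be sketched directly: a Weibull draw is an explicit function of uniform noise via the inverse CDF, so gradients with respect to the shape and scale parameters can flow through the sample. The parameters and the moment check below are illustrative, and the deep generative and inference networks of WHAI are not reproduced here.

```python
import numpy as np
from math import gamma

def sample_weibull_reparam(k, lam, rng):
    """Reparameterized Weibull draw via the inverse CDF: x = lam * (-log(1 - u))**(1/k).
    The noise u is uniform and independent of (k, lam), so the sample is a
    differentiable function of the variational parameters."""
    u = rng.uniform(size=np.shape(k))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
k, lam = 2.0, 1.5
x = sample_weibull_reparam(np.full(100000, k), np.full(100000, lam), rng)
print(x.mean(), lam * gamma(1 + 1 / k))   # sample mean vs. the Weibull mean lam*Gamma(1+1/k)
```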
Constraining the Dynamics of Deep Probabilistic Models | We introduce a novel generative formulation of deep probabilistic models
implementing "soft" constraints on their function dynamics. In particular, we
develop a flexible methodological framework where the modeled functions and
derivatives of a given order are subject to inequality or equality constraints.
We then characterize the posterior distribution over model and constraint
parameters through stochastic variational inference. As a result, the proposed
approach allows for accurate and scalable uncertainty quantification on the
predictions and on all parameters. We demonstrate the application of equality
constraints in the challenging problem of parameter inference in ordinary
differential equation models, while we showcase the application of inequality
constraints on the problem of monotonic regression of count data. The proposed
approach is extensively tested in several experimental settings, leading to
highly competitive results in challenging modeling applications, while offering
high expressiveness, flexibility and scalability.
| 0 | 0 | 0 | 1 | 0 | 0 |
Democratizing Design for Future Computing Platforms | Information and communications technology can continue to change our world.
These advances will partially depend upon designs that synergistically combine
software with specialized hardware. Today open-source software incubates rapid
software-only innovation. The government can unleash software-hardware
innovation with programs to develop open hardware components, tools, and design
flows that simplify and reduce the cost of hardware design. Such programs will
speed development for startup companies, established industry leaders,
education, scientific research, and for government intelligence and defense
platforms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Estimated Wold Representation and Spectral Density-Driven Bootstrap for Time Series | The second-order dependence structure of a purely nondeterministic stationary
process is described by the coefficients of the famous Wold representation.
These coefficients can be obtained by factorizing the spectral density of the
process. This relation together with some spectral density estimator is used in
order to obtain consistent estimators of these coefficients. A spectral
density-driven bootstrap for time series is then developed which uses the
entire sequence of estimated MA coefficients together with appropriately
generated pseudo innovations in order to obtain a bootstrap pseudo time series.
It is shown that if the underlying process is linear and if the pseudo
innovations are generated by means of an i.i.d. wild bootstrap which mimics, to
the necessary extent, the moment structure of the true innovations, this
bootstrap proposal asymptotically works for a wide range of statistics. The
relations of the proposed bootstrap procedure to some other bootstrap
procedures, including the autoregressive-sieve bootstrap, are discussed. It is
shown that the latter is a special case of the spectral density-driven
bootstrap, if a parametric autoregressive spectral density estimator is used.
Simulations investigate the performance of the new bootstrap procedure in
finite sample situations. Furthermore, a real-life data example is presented.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Polynomial Time Match Test for Large Classes of Extended Regular Expressions | In the present paper, we study the match test for extended regular
expressions. We approach this NP-complete problem by introducing a novel
variant of two-way multihead automata, which reveals that the complexity of the
match test is determined by a hidden combinatorial property of extended regular
expressions, and it shows that a restriction of the corresponding parameter
leads to rich classes with a polynomial time match test. For presentational
reasons, we use the concept of pattern languages in order to specify extended
regular expressions. While this decision, formally, slightly narrows the scope
of our results, an extension of our concepts and results to more general
notions of extended regular expressions is straightforward.
| 1 | 0 | 0 | 0 | 0 | 0 |
Anomalous Magnetism for Dirac Electrons in Two Dimensional Rashba Systems | The spin-spin correlation function response is evaluated in the low electronic
density regime under an externally applied electric field for 2D metallic crystals
with Rashba-type coupling, a fixed number of particles and a two-fold energy band
structure. Intrinsic Zeeman-like effects on the electron spin polarization, density
of states, Fermi surface topology and transverse magnetic susceptibility are
analyzed in the zero temperature limit. A possible magnetic state for Dirac
electrons, depending on the zero-field band gap magnitude, is found under these
conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Estimating Average Treatment Effects with a Double-Index Propensity Score | We consider estimating average treatment effects (ATE) of a binary treatment
in observational data when data-driven variable selection is needed to select
relevant covariates from a moderately large number of available covariates
$\mathbf{X}$. To leverage covariates among $\mathbf{X}$ predictive of the
outcome for efficiency gain while using regularization to fit a parameteric
propensity score (PS) model, we consider a dimension reduction of $\mathbf{X}$
based on fitting both working PS and outcome models using adaptive LASSO. A
novel PS estimator, the Double-index Propensity Score (DiPS), is proposed, in
which the treatment status is smoothed over the linear predictors for
$\mathbf{X}$ from both the initial working models. The ATE is estimated by
using the DiPS in a normalized inverse probability weighting (IPW) estimator,
which is found to maintain double-robustness and also local semiparametric
efficiency with a fixed number of covariates $p$. Under misspecification of
working models, the smoothing step leads to gains in efficiency and robustness
over traditional doubly-robust estimators. These results are extended to the
case where $p$ diverges with sample size and working models are sparse.
Simulations show the benefits of the approach in finite samples. We illustrate
the method by estimating the ATE of statins on colorectal cancer risk in an
electronic medical record (EMR) study and the effect of smoking on C-reactive
protein (CRP) in the Framingham Offspring Study.
| 0 | 0 | 0 | 1 | 0 | 0 |
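For reference, the normalized (Hajek-type) IPW estimator into which a propensity score estimate such as the DiPS is plugged can be written compactly; the synthetic data below, with known propensities and a true ATE of 2, is purely illustrative and does not implement the smoothing or variable-selection steps of the paper.

```python
import numpy as np

def normalized_ipw_ate(y, t, e):
    """Normalized (Hajek-type) inverse probability weighting estimator of the ATE.
    y: outcomes, t: binary treatment indicators, e: estimated propensity scores."""
    w1, w0 = t / e, (1 - t) / (1 - e)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                     # a single confounder
e_true = 1 / (1 + np.exp(-x))                 # true propensity score
t = rng.binomial(1, e_true)
y = 2.0 * t + x + rng.normal(size=5000)       # true ATE = 2
print(normalized_ipw_ate(y, t, e_true))       # close to 2
```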
Emotion Detection and Analysis on Social Media | In this paper, we address the problem of detection, classification and
quantification of emotions of text in any form. We consider English text
collected from social media like Twitter, which can provide information having
utility in a variety of ways, especially opinion mining. Social media like
Twitter and Facebook are full of emotions, feelings and opinions of people all
over the world. However, analyzing and classifying text on the basis of
emotions is a big challenge and can be considered as an advanced form of
Sentiment Analysis. This paper proposes a method to classify text into six
different Emotion-Categories: Happiness, Sadness, Fear, Anger, Surprise and
Disgust. In our model, we use two different approaches and combine them to
effectively extract these emotions from text. The first approach is based on
Natural Language Processing, and uses several textual features like emoticons,
degree words and negations, Parts Of Speech and other grammatical analysis. The
second approach is based on Machine Learning classification algorithms. We have
also successfully devised a method to automate the creation of the training-set
itself, so as to eliminate the need of manual annotation of large datasets.
Moreover, we have managed to create a large bag of emotional words, along with
their emotion-intensities. On testing, it is shown that our model provides
significant accuracy in classifying tweets taken from Twitter.
| 1 | 0 | 0 | 0 | 0 | 0 |
HotFlip: White-Box Adversarial Examples for Text Classification | We propose an efficient method to generate white-box adversarial examples to
trick a character-level neural classifier. We find that only a few
manipulations are needed to greatly decrease the accuracy. Our method relies on
an atomic flip operation, which swaps one token for another, based on the
gradients of the one-hot input vectors. Due to the efficiency of our method, we can
perform adversarial training which makes the model more robust to attacks at
test time. With the use of a few semantics-preserving constraints, we
demonstrate that HotFlip can be adapted to attack a word-level classifier as
well.
| 1 | 0 | 0 | 0 | 0 | 0 |
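A minimal sketch of the first-order flip selection described above: given the loss gradient with respect to the one-hot character inputs, the estimated loss increase of replacing character a at position i by character b is grad[i, b] - grad[i, a], and the best flip maximizes this quantity. The random gradient below merely stands in for a real model's gradient; beam search and the semantics-preserving constraints are not included.

```python
import numpy as np

def best_char_flip(grad, onehot):
    """First-order flip selection: for one-hot inputs, the estimated loss increase of
    replacing character a at position i by character b is grad[i, b] - grad[i, a];
    return the position, new character, and gain of the best such flip."""
    current = onehot.argmax(axis=1)                      # current character index per position
    gain = grad - grad[np.arange(len(current)), current][:, None]
    gain[np.arange(len(current)), current] = -np.inf     # forbid flipping a character to itself
    i, b = np.unravel_index(np.argmax(gain), gain.shape)
    return int(i), int(b), gain[i, b]

rng = np.random.default_rng(0)
L, V = 12, 26
onehot = np.eye(V)[rng.integers(0, V, size=L)]
grad = rng.normal(size=(L, V))                           # stand-in for a real model's gradient
print(best_char_flip(grad, onehot))
```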
Towards fully commercial, UV-compatible fiber patch cords | We present and analyze two pathways to produce commercial optical-fiber patch
cords with stable long-term transmission in the ultraviolet (UV) at powers up
to $\sim$ 200 mW, and typical bulk transmission of 66-75\%. Commercial
fiber patch cords in the UV are of great interest across a wide variety of
scientific applications ranging from biology to metrology, and the lack of
availability has yet to be suitably addressed. We provide a guide to producing
such solarization-resistant, hydrogen-passivated, polarization-maintaining,
connectorized and jacketed optical fibers compatible with demanding scientific
and industrial applications. Our presentation describes the fabrication and
hydrogen loading procedure in detail and presents a high-pressure vessel
design, calculations of required H$_2$ loading times, and information on patch
cord handling and the mitigation of bending sensitivities. Transmission at 313
nm is measured over many months for cumulative energy on the fiber output of >
10 kJ with no demonstrable degradation due to UV solarization, in contrast to
standard uncured fibers. Polarization sensitivity and stability are
characterized yielding polarization extinction ratios between 15 dB and 25 dB
at 313 nm, where we find patch cords become linearly polarizing. We observe
that particle deposition at the fiber facet induced by high-intensity UV
exposure can (reversibly) deteriorate patch cord performance and describe a
technique for nitrogen purging of fiber collimators which mitigates this
phenomenon.
| 0 | 1 | 0 | 0 | 0 | 0 |
Convex Parameterizations and Fidelity Bounds for Nonlinear Identification and Reduced-Order Modelling | Model instability and poor prediction of long-term behavior are common
problems when modeling dynamical systems using nonlinear "black-box"
techniques. Direct optimization of the long-term predictions, often called
simulation error minimization, leads to optimization problems that are
generally non-convex in the model parameters and suffer from multiple local
minima. In this work we present methods which address these problems through
convex optimization, based on Lagrangian relaxation, dissipation inequalities,
contraction theory, and semidefinite programming. We demonstrate the proposed
methods with a model order reduction task for electronic circuit design and the
identification of a pneumatic actuator from experiment.
| 1 | 0 | 1 | 0 | 0 | 0 |
Rydberg states of helium in electric and magnetic fields of arbitrary relative orientation | A spectroscopic study of Rydberg states of helium ($n$ = 30 and 45) in
magnetic, electric and combined magnetic and electric fields with arbitrary
relative orientations of the field vectors is presented. The emphasis is on two
special cases where (i) the diamagnetic term is negligible and both
paramagnetic Zeeman and Stark effects are linear ($n$ = 30, $B \leq$ 120 mT and
$F$ = 0 - 78 V/cm ), and (ii) the diamagnetic term is dominant and the Stark
effect is linear ($n$ = 45, $B$ = 277 mT and $F$ = 0 - 8 V/cm). Both cases
correspond to regimes where the interactions induced by the electric and
magnetic fields are much weaker than the Coulomb interaction, but much stronger
than the spin-orbit interaction. The experimental spectra are compared to
spectra calculated by determining the eigenvalues of the Hamiltonian matrix
describing helium Rydberg states in the external fields. The spectra and the
calculated energy-level diagrams in external fields reveal avoided crossings
between levels of different $m_l$ values and pronounced $m_l$-mixing effects at
all angles between the electric and magnetic field vectors other than 0. These
observations are discussed in the context of the development of a method to
generate dense samples of cold atoms and molecules in a magnetic trap following
Rydberg-Stark deceleration.
| 0 | 1 | 0 | 0 | 0 | 0 |
Change-point inference on volatility in noisy Itô semimartingales | This work is concerned with tests on structural breaks in the spot volatility
process of a general Itô semimartingale based on discrete observations
contaminated with i.i.d. microstructure noise. We construct a consistent test
building up on infill asymptotic results for certain functionals of spectral
spot volatility estimates. A weak limit theorem is established under the null
hypothesis relying on extreme value theory. We prove consistency of the test
and of an associated estimator for the change point. A simulation study
illustrates the finite-sample performance of the method and efficiency gains
compared to a skip-sampling approach.
| 0 | 0 | 1 | 1 | 0 | 0 |
Double-diffusive erosion of the core of Jupiter | We present Direct Numerical Simulations of the transport of heat and heavy
elements across a double-diffusive interface or a double-diffusive staircase,
in conditions that are close to those one may expect to find near the boundary
between the heavy-element rich core and the hydrogen-helium envelope of giant
planets such as Jupiter. We find that the non-dimensional ratio of the buoyancy
flux associated with heavy element transport to the buoyancy flux associated
with heat transport lies roughly between 0.5 and 1, which is much larger than
previous estimates derived by analogy with geophysical double-diffusive
convection. Using these results in combination with a core-erosion model
proposed by Guillot et al. (2004), we find that the entire core of Jupiter
would be eroded within less than 1 Myr assuming that the core-envelope boundary
is composed of a single interface. We also propose an alternative model that is
more appropriate in the presence of a well-established double-diffusive
staircase, and find that in this limit a large fraction of the core could be
preserved. These findings are interesting in the context of Juno's recent
results, but call for further modeling efforts to better understand the process
of core erosion from first principles.
| 0 | 1 | 0 | 0 | 0 | 0 |
Classifying Exoplanets with Gaussian Mixture Model | Recently, Odrzywolek and Rafelski (arXiv:1612.03556) have found three
distinct categories of exoplanets, when they are classified based on density.
We first carry out a similar classification of exoplanets according to their
density using the Gaussian Mixture Model, followed by information-theoretic
criteria (AIC and BIC) to determine the optimum number of components. Such a
one-dimensional classification favors two components using AIC and three using
BIC, but the statistical evidence from both tests is not strong
enough to decisively pick the best model between two and three components. We
then extend this GMM-based classification to two dimensions by using both the
density and the Earth similarity index (arXiv:1702.03678), which is a measure
of how similar each planet is compared to the Earth. For this two-dimensional
classification, both AIC and BIC provide decisive evidence in favor of three
components.
| 0 | 1 | 0 | 0 | 0 | 0 |
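The model-selection step described above (fit GMMs with an increasing number of components and compare AIC/BIC) can be reproduced with scikit-learn; the two-column synthetic data below merely stands in for the (density, Earth similarity index) features and is not the paper's dataset.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm_components(X, max_components=6, seed=0):
    """Fit GMMs with 1..max_components components and report AIC/BIC for each."""
    scores = []
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, n_init=5, random_state=seed).fit(X)
        scores.append((k, gmm.aic(X), gmm.bic(X)))
    return scores

# Synthetic two-column stand-in for (density, Earth similarity index)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 0.2], 0.1, size=(200, 2)),
               rng.normal([5.0, 0.8], 0.3, size=(200, 2)),
               rng.normal([30.0, 0.1], 3.0, size=(100, 2))])
for k, aic, bic in select_gmm_components(X):
    print(k, round(aic, 1), round(bic, 1))
```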
Competing effects of Hund's splitting and symmetry-breaking perturbations on electronic order in Pb$_{1-x}$Sn$_{x}$Te | We study the effect of a uniform external magnetization on p-wave
superconductivity on the (001) surface of the crystalline topological
insulator(TCI) Pb$_{1-x}$Sn$_{x}$Te. It was shown by us in an earlier work that
a chiral p-wave finite momentum pairing (FFLO) state can be stabilized in this
system in the presence of weak repulsive interparticle interactions. In
particular, the superconducting instability is very sensitive to the Hund's
interaction in the multiorbital TCI, and no instabilities are found to be
possible for the "wrong" sign of the Hund's splitting. Here we show that for a
finite Hund's splitting of interactions, a significant value of the external
magnetization is needed to degrade the surface superconductivity, while in the
absence of the Hund's interaction, an arbitrarily small external magnetization
can destroy the superconductivity. This implies that multiorbital effects in
this system play an important role in stabilizing electronic order on the
surface.
| 0 | 1 | 0 | 0 | 0 | 0 |
Partition function of Chern-Simons theory as renormalized q-dimension | We calculate the $q$-dimension of the $k$-th Cartan power of the fundamental
representation $\Lambda_0$, corresponding to the affine root of affine simply laced
Kac-Moody algebras, and show that in the limit $q\rightarrow 1$, and with a
natural renormalization, it is equal to the universal partition function of
Chern-Simons theory on the three-dimensional sphere.
| 0 | 0 | 1 | 0 | 0 | 0 |
Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities | Workers participating in a crowdsourcing platform can have a wide range of
abilities and interests. An important problem in crowdsourcing is the task
recommendation problem, in which tasks that best match a particular worker's
preferences and reliabilities are recommended to that worker. A task
recommendation scheme that assigns tasks more likely to be accepted by a worker
who is more likely to complete it reliably results in better performance for
the task requester. Without prior information about a worker, his preferences
and reliabilities need to be learned over time. In this paper, we propose a
multi-armed bandit (MAB) framework to learn a worker's preferences and his
reliabilities for different categories of tasks. However, unlike the classical
MAB problem, the reward from the worker's completion of a task is unobservable.
We therefore include the use of gold tasks (i.e., tasks whose solutions are
known \emph{a priori} and which do not produce any rewards) in our task
recommendation procedure. Our model could be viewed as a new variant of MAB, in
which the random rewards can only be observed at those time steps where gold
tasks are used, and the accuracy of estimating the expected reward of
recommending a task to a worker depends on the number of gold tasks used. We
show that the optimal regret is $O(\sqrt{n})$, where $n$ is the number of tasks
recommended to the worker. We develop three task recommendation strategies to
determine the number of gold tasks for different task categories, and show that
they are order optimal. Simulations verify the efficiency of our approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
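As a generic point of comparison only, a standard UCB1 loop over task categories is sketched below; the paper's setting differs in that rewards are observed only on interspersed gold tasks, which this baseline does not model. The per-category success rates are illustrative assumptions.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, rng):
    """Generic UCB1 loop over task categories (arms). Rewards here are assumed to be
    observed after every recommendation, unlike the gold-task-only feedback above."""
    counts, means = np.zeros(n_arms), np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                                    # play each arm once first
        else:
            arm = int(np.argmax(means + np.sqrt(2 * np.log(t) / counts)))
        r = pull(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]       # running mean update
    return means, counts

probs = [0.2, 0.5, 0.7]                                    # illustrative per-category success rates
pull = lambda a, rng: rng.binomial(1, probs[a])
print(ucb1(pull, 3, 5000, np.random.default_rng(0)))
```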
Elliptic Weight Functions and Elliptic q-KZ Equation | By using representation theory of the elliptic quantum group U_{q,p}(\widehat{sl}_N),
we present a systematic method of deriving the weight functions. The resultant
sl_N type elliptic weight functions are new and give elliptic and dynamical
analogues of those obtained in the trigonometric case. We then discuss some
basic properties of the elliptic weight functions. We also present an explicit
formula for formal elliptic hypergeometric integral solution to the face type,
i.e. dynamical, elliptic q-KZ equation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Formal duality in finite cyclic groups | The notion of formal duality in finite Abelian groups appeared recently in
relation to spherical designs, tight sphere packings, and energy minimizing
configurations in Euclidean spaces. For finite cyclic groups it is conjectured
that there are no primitive formally dual pairs besides the trivial one and the
TITO configuration. This conjecture has been verified for cyclic groups of
prime power order, as well as of square-free order. In this paper, we will
confirm the conjecture for other classes of cyclic groups, namely almost all
cyclic groups of order a product of two prime powers, with finitely many
exceptions for each pair of primes, or whose order $N$ satisfies $p\mid\!\mid
N$, where $p$ a prime satisfying the so-called self-conjugacy property with
respect to $N$. For the above proofs, various tools were needed: the field
descent method, used chiefly for the circulant Hadamard conjecture, the
techniques of Coven & Meyerowitz for sets that tile $\mathbb{Z}$ or
$\mathbb{Z}_N$ by translations, dubbed herein as the polynomial method, as well
as basic number theory of cyclotomic fields, especially the splitting of primes
in a given cyclotomic extension.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonlinear Dynamics of Binocular Rivalry: A Comparative Study | When our eyes are presented with the same image, the brain processes it to
view it as a single coherent one. The lateral shift in the position of our
eyes, causes the two images to possess certain differences, which our brain
exploits for the purpose of depth perception and to gauge the size of objects
at different distances, a process commonly known as stereopsis. However, when
presented with two different visual stimuli, the visual awareness alternates.
This phenomenon of binocular rivalry is a result of competition between the
corresponding neuronal populations of the two eyes. The article presents a
comparative study of various dynamical models proposed to capture this process.
It goes on to study the effect of a certain parameter on the rate of perceptual
alternations and proceeds to disprove the initial propositions laid down to
characterise this phenomenon. It concludes with a discussion on the possible
future work that can be conducted to obtain a better picture of the neuronal
functioning behind this rivalry.
| 0 | 0 | 0 | 0 | 1 | 0 |
Nonstandard Analysis and Constructivism! | Almost two decades ago, Wattenberg published a paper with the title
'Nonstandard Analysis and Constructivism?' in which he speculates on a possible
connection between Nonstandard Analysis and constructive mathematics. We study
Wattenberg's work in light of recent research on the aforementioned connection.
On one hand, with only slight modification, some of Wattenberg's theorems in
Nonstandard Analysis are seen to yield effective and constructive theorems (not
involving Nonstandard Analysis). On the other hand, we establish the
incorrectness of some of Wattenberg's (explicit and implicit) claims regarding
the constructive status of the axioms Transfer and Standard Part of Nonstandard
Analysis.
| 0 | 0 | 1 | 0 | 0 | 0 |
Text-Independent Speaker Verification Using 3D Convolutional Neural Networks | In this paper, a novel method using 3D Convolutional Neural Network (3D-CNN)
architecture has been proposed for speaker verification in the text-independent
setting. One of the main challenges is the creation of the speaker models. Most
of the previously reported approaches create speaker models by averaging the
features extracted from a speaker's utterances, an approach known as the
d-vector system. In this paper, we propose adaptive feature learning that
utilizes 3D-CNNs to create the speaker model directly: in both the development
and enrollment phases, an identical number of spoken utterances per speaker is
fed to the network to represent the speakers' utterances and build the speaker
model.
speaker-related information and building a more robust system to cope with
within-speaker variation. We demonstrate that the proposed method significantly
outperforms the traditional d-vector verification system. Moreover, by
utilizing 3D-CNNs, the proposed system provides one-shot speaker modeling and
can thus serve as an alternative to the traditional d-vector system.
| 1 | 0 | 0 | 0 | 0 | 0 |
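A minimal PyTorch sketch of the kind of 3D convolutional speaker embedder described above follows; the layer sizes, the input layout (stacked utterances × frames × features), and the embedding dimension are assumptions for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Speaker3DCNN(nn.Module):
    """Toy 3D-CNN speaker embedder.

    Input:  (batch, 1, n_utterances, n_frames, n_features)
    Output: L2-normalised speaker embedding of size `embed_dim`.
    The depth axis spans the stacked utterances of one speaker, so the
    network sees several utterances jointly when building the speaker model.
    """
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # pool over utterances, time and frequency
        )
        self.embedding = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.embedding(h), dim=1)

if __name__ == "__main__":
    # 4 speakers, 20 utterances each, 80 frames of 40-dimensional features.
    dummy = torch.randn(4, 1, 20, 80, 40)
    emb = Speaker3DCNN()(dummy)
    print(emb.shape)                        # torch.Size([4, 128])
    # Verification score between two speaker models: cosine similarity
    # (embeddings are already L2-normalised, so a dot product suffices).
    print(torch.dot(emb[0], emb[1]).item())
```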
Sequential identification of nonignorable missing data mechanisms | With nonignorable missing data, likelihood-based inference should be based on
the joint distribution of the study variables and their missingness indicators.
These joint models cannot be estimated from the data alone, thus requiring the
analyst to impose restrictions that make the models uniquely obtainable from
the distribution of the observed data. We present an approach for constructing
classes of identifiable nonignorable missing data models. The main idea is to
use a sequence of carefully set up identifying assumptions, whereby we specify
potentially different missingness mechanisms for different blocks of variables.
We show that the procedure results in models with the desirable property of
being non-parametric saturated.
| 0 | 0 | 1 | 1 | 0 | 0 |
Image Stitching by Line-guided Local Warping with Global Similarity Constraint | Low-textured image stitching remains a challenging problem. It is difficult
to achieve good alignment and it is easy to break image structures due to
insufficient and unreliable point correspondences. Moreover, because of the
viewpoint variations between multiple images, the stitched images suffer from
projective distortions. To solve these problems, this paper presents a
line-guided local warping method with a global similarity constraint for image
stitching. Line features which serve well for geometric descriptions and scene
constraints, are employed to guide image stitching accurately. On one hand, the
line features are integrated into a local warping model through a designed
weight function. On the other hand, line features are adopted to impose strong
geometric constraints, including line correspondence and line collinearity, to
improve the stitching performance through mesh optimization. To mitigate
projective distortions, we adopt a global similarity constraint, which is
integrated with the projective warps via a designed weight strategy. This
constraint causes the final warp to slowly change from a projective to a
similarity transformation across the image. Finally, the images undergo a
two-stage alignment scheme that provides accurate alignment and reduces
projective distortion. We evaluate our method on a series of images and compare
it with several other methods. The experimental results demonstrate that the
proposed method provides convincing stitching performance and that it
outperforms other state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
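The projective-to-similarity idea in the abstract above can be illustrated with a much simpler single-homography sketch: estimate a projective and a similarity transform from the same matches and blend them. The feature detector, the blend weight, the entry-wise matrix blending, and the file names are illustrative assumptions, not the paper's line-guided mesh optimization.

```python
import cv2
import numpy as np

def stitch_with_similarity_blend(img_ref, img_src, alpha=0.3):
    """Warp img_src toward img_ref with a blend of projective and similarity warps.

    alpha = 0 uses the full homography; alpha = 1 uses the pure similarity
    transform.  Blending the 3x3 matrices entry-wise is only a crude
    illustration of mitigating projective distortion, not the paper's
    line-guided, mesh-based optimization.
    """
    gray_ref = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY)
    gray_src = cv2.cvtColor(img_src, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(gray_src, None)
    k2, d2 = orb.detectAndCompute(gray_ref, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)                # projective
    S2x3, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)  # similarity
    S = np.vstack([S2x3, [0.0, 0.0, 1.0]])

    H_blend = (1.0 - alpha) * H + alpha * S
    h, w = img_ref.shape[:2]
    warped = cv2.warpPerspective(img_src, H_blend, (w * 2, h))
    canvas = warped.copy()
    canvas[:h, :w] = img_ref   # overlay the reference image on the left half
    return canvas

if __name__ == "__main__":
    ref = cv2.imread("left.jpg")    # hypothetical file names
    src = cv2.imread("right.jpg")
    cv2.imwrite("panorama.jpg", stitch_with_similarity_blend(ref, src))
```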