text | labels | Predictions
---|---|---|
Title: On discrimination between two close distribution tails,
Abstract: A goodness-of-fit test for discriminating between two tail distributions using higher order statistics is proposed. The consistency of the proposed test is proved for two different alternatives. We do not assume that the corresponding distribution function belongs to a maximum domain of attraction. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Belief Propagation Min-Sum Algorithm for Generalized Min-Cost Network Flow,
Abstract: Belief Propagation algorithms are instruments used broadly to solve graphical
model optimization and statistical inference problems. In the general case of a
loopy Graphical Model, Belief Propagation is a heuristic which is quite
successful in practice, even though its empirical success, typically, lacks
theoretical guarantees. This paper extends the short list of special cases
where correctness and/or convergence of a Belief Propagation algorithm is
proven. We generalize the formulation of the Min-Sum Network Flow problem by relaxing the flow conservation (balance) constraints, and then prove that the Belief Propagation algorithm converges to the exact result. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Sharper and Simpler Nonlinear Interpolants for Program Verification,
Abstract: Interpolation of jointly infeasible predicates plays an important role in various program verification techniques such as invariant synthesis and CEGAR. Intrigued by the recent result by Dai et al. that combines real algebraic geometry and SDP optimization in the synthesis of polynomial interpolants, the current paper contributes an enhancement of it that yields sharper and simpler
interpolants. The enhancement is made possible by: theoretical observations in
real algebraic geometry; and our continued fraction-based algorithm that rounds
off (potentially erroneous) numerical solutions of SDP solvers. Experimental
results support our tool's effectiveness; we also demonstrate the benefit of
sharp and simple interpolants in program verification examples. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Understanding the Impact of Label Granularity on CNN-based Image Classification,
Abstract: In recent years, supervised learning using Convolutional Neural Networks
(CNNs) has achieved great success in image classification tasks, and large
scale labeled datasets have contributed significantly to this achievement.
However, the definition of a label is often application dependent. For example,
an image of a cat can be labeled as "cat" or perhaps more specifically "Persian
cat." We refer to this as label granularity. In this paper, we conduct
extensive experiments using various datasets to demonstrate and analyze how and
why training based on fine-grain labeling, such as "Persian cat", can improve
CNN accuracy on classifying coarse-grain classes, in this case "cat." The
experimental results show that training CNNs with fine-grain labels improves
both the network's optimization and generalization capabilities, as intuitively it encourages the network to learn more features, and hence increases classification accuracy on coarse-grain classes across all datasets considered.
Moreover, fine-grain labels enhance data efficiency in CNN training. For
example, a CNN trained with fine-grain labels and only 40% of the total
training data can achieve higher accuracy than a CNN trained with the full
training dataset and coarse-grain labels. These results point to two possible
applications of this work: (i) with sufficient human resources, one can improve
CNN performance by re-labeling the dataset with fine-grain labels, and (ii)
with limited human resources, to improve CNN performance, rather than
collecting more training data, one may instead use fine-grain labels for the
dataset. We further propose a metric called Average Confusion Ratio to
characterize the effectiveness of fine-grain labeling, and show its use through
extensive experimentation. Code is available at
this https URL. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Anisotropic spin-density distribution and magnetic anisotropy of strained La$_{1-x}$Sr$_x$MnO$_3$ thin films: Angle-dependent x-ray magnetic circular dichroism,
Abstract: Magnetic anisotropies of ferromagnetic thin films are induced by epitaxial
strain from the substrate via strain-induced anisotropy in the orbital magnetic
moment and that in the spatial distribution of spin-polarized electrons.
However, the preferential orbital occupation in ferromagnetic metallic
La$_{1-x}$Sr$_x$MnO$_3$ (LSMO) thin films studied by x-ray linear dichroism
(XLD) has always been found to be out-of-plane for both tensile and compressive epitaxial strain, and hence irrespective of the magnetic anisotropy. In order to
resolve this mystery, we directly probed the preferential orbital occupation of
spin-polarized electrons in LSMO thin films under strain by angle-dependent
x-ray magnetic circular dichroism (XMCD). Anisotropy of the spin-density
distribution was found to be in-plane for the tensile strain and out-of-plane
for the compressive strain, consistent with the observed magnetic anisotropy.
The ubiquitous out-of-plane preferential orbital occupation seen by XLD is
attributed to the occupation of both spin-up and spin-down out-of-plane
orbitals in the surface magnetic dead layer. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Putting gravity in control,
Abstract: The aim of the present manuscript is to present a novel proposal in Geometric Control Theory inspired by the principles of General Relativity and energy-shaping control. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples,
Abstract: Sometimes it is not enough for a DNN to produce an outcome. For example, in
applications such as healthcare, users need to understand the rationale of the
decisions. Therefore, it is imperative to develop algorithms to learn models
with good interpretability (Doshi-Velez 2017). An important factor that leads
to the lack of interpretability of DNNs is the ambiguity of neurons, where a
neuron may fire for various unrelated concepts. This work aims to increase the
interpretability of DNNs on the whole image space by reducing the ambiguity of
neurons. In this paper, we make the following contributions:
1) We propose a metric to evaluate the consistency level of neurons in a
network quantitatively.
2) We find that the learned features of neurons are ambiguous by leveraging
adversarial examples.
3) We propose to improve the consistency of neurons on the adversarial example subset by an adversarial training algorithm with a consistent loss. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Learning a Hierarchical Latent-Variable Model of 3D Shapes,
Abstract: We propose the Variational Shape Learner (VSL), a generative model that
learns the underlying structure of voxelized 3D shapes in an unsupervised
fashion. Through the use of skip-connections, our model can successfully learn
and infer a latent, hierarchical representation of objects. Furthermore,
realistic 3D objects can be easily generated by sampling the VSL's latent
probabilistic manifold. We show that our generative model can be trained
end-to-end from 2D images to perform single image 3D model retrieval.
Experiments show, both quantitatively and qualitatively, the improved
generalization of our proposed model over a range of tasks, performing better
or comparable to various state-of-the-art alternatives. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Maximally rotating waves in AdS and on spheres,
Abstract: We study the cubic wave equation in AdS$_{d+1}$ (and a closely related cubic wave equation on $S^3$) in a weakly nonlinear regime. Via time-averaging, these
systems are accurately described by simplified infinite-dimensional quartic
Hamiltonian systems, whose structure is mandated by the fully resonant spectrum
of linearized perturbations. The maximally rotating sector, comprising only the
modes of maximal angular momentum at each frequency level, consistently
decouples in the weakly nonlinear regime. The Hamiltonian systems obtained by
this decoupling display remarkable periodic return behaviors closely analogous
to what has been demonstrated in recent literature for a few other related
equations (the cubic Szegő equation, the conformal flow, the LLL equation).
This suggests a powerful underlying analytic structure, such as integrability.
We comment on the connection of our considerations to the Gross-Pitaevskii
equation for harmonically trapped Bose-Einstein condensates. | [
0,
1,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: AirCode: Unobtrusive Physical Tags for Digital Fabrication,
Abstract: We present AirCode, a technique that allows the user to tag physically
fabricated objects with given information. An AirCode tag consists of a group
of carefully designed air pockets placed beneath the object surface. These air
pockets are easily produced during the fabrication process of the object,
without any additional material or postprocessing. Meanwhile, the air pockets
affect only the scattering light transport under the surface, and thus are hard to notice with the naked eye. But, by using a computational imaging method, the tags become detectable. We present a tool that automates the design of air pockets for the user to encode information. The AirCode system also allows the user
to retrieve the information from captured images via a robust decoding
algorithm. We demonstrate our tagging technique with applications for metadata
embedding, robotic grasping, as well as conveying object affordances. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Adversarial Learning for Neural Dialogue Generation,
Abstract: In this paper, drawing intuition from the Turing test, we propose using
adversarial training for open-domain dialogue generation: the system is trained
to produce sequences that are indistinguishable from human-generated dialogue
utterances. We cast the task as a reinforcement learning (RL) problem where we
jointly train two systems, a generative model to produce response sequences,
and a discriminator---analogous to the human evaluator in the Turing test---to
distinguish between the human-generated dialogues and the machine-generated
ones. The outputs from the discriminator are then used as rewards for the
generative model, pushing the system to generate dialogues that mostly resemble
human dialogues.
In addition to adversarial training we describe a model for adversarial {\em
evaluation} that uses success in fooling an adversary as a dialogue evaluation
metric, while avoiding a number of potential pitfalls. Experimental results on
several metrics, including adversarial evaluation, demonstrate that the
adversarially-trained system generates higher-quality responses than previous
baselines. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: On the impact origin of Phobos and Deimos III: resulting composition from different impactors,
Abstract: The origin of Phobos and Deimos in a giant-impact-generated disk is gaining increasing attention. Although this scenario has been the subject of many studies,
an evaluation of the chemical composition of Mars' moons in this framework
is missing. The chemical composition of Phobos and Deimos is unconstrained. The
large uncertainty about the origin of the mid-infrared features, the lack of
absorption bands in the visible and near-infrared spectra, and the effects of
secondary processes on the moons' surface make the determination of their
composition very difficult from remote sensing data. Simulations suggest a
formation of a disk made of gas and melt with their composition linked to the
nature of the impactor and Mars. Using thermodynamic equilibrium we investigate
the composition of dust (condensates from gas) and solids (from a cooling melt)
that result from different types of Mars impactors (Mars-, CI-, CV-, EH-,
comet-like). Our calculations show a wide range of possible chemical
compositions and noticeable differences between dust and solids depending on
the considered impactors. Assuming Phobos and Deimos are the result of the accretion and mixing of dust and solids, we find that the derived assemblage (dust rich in metallic iron, sulphides and/or carbon, and quenched solids rich in silicates) can be compatible with the observations. JAXA's MMX (Martian Moons eXploration) mission will investigate the physical and chemical properties of the Martian moons, especially by sampling from Phobos, before returning to Earth. Our results could then be used to disentangle the origin and chemical composition of the pristine body that hit Mars, and suggest guidelines to help in the analysis of the returned samples. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Face Detection using Deep Learning: An Improved Faster RCNN Approach,
Abstract: In this report, we present a new face detection scheme using deep learning
and achieve the state-of-the-art detection performance on the well-known FDDB
face detection benchmark evaluation. In particular, we improve the
state-of-the-art faster RCNN framework by combining a number of strategies,
including feature concatenation, hard negative mining, multi-scale training,
model pretraining, and proper calibration of key parameters. As a consequence,
the proposed scheme obtained the state-of-the-art face detection performance,
making it the best model in terms of ROC curves among all the published methods
on the FDDB benchmark. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: The cosmic spiderweb: equivalence of cosmic, architectural, and origami tessellations,
Abstract: For over twenty years, the term 'cosmic web' has guided our understanding of
the large-scale arrangement of matter in the cosmos, accurately evoking the
concept of a network of galaxies linked by filaments. But the physical
correspondence between the cosmic web and structural-engineering or textile
'spiderwebs' is even deeper than previously known, and extends to origami
tessellations as well. Here we explain that in a good structure-formation
approximation known as the adhesion model, threads of the cosmic web form a
spiderweb, i.e. can be strung up to be entirely in tension. The correspondence
is exact if nodes sampling voids are included, and if structure is excluded
within collapsed regions (walls, filaments and haloes), where dark-matter
multistreaming and baryonic physics affect the structure. We also suggest how
concepts arising from this link might be used to test cosmological models: for
example, to test for large-scale anisotropy and rotational flows in the cosmos. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Data-driven polynomial chaos expansion for machine learning regression,
Abstract: We present a regression technique for data-driven problems based on
polynomial chaos expansion (PCE). PCE is a popular technique in the field of
uncertainty quantification (UQ), where it is typically used to replace a
runnable but expensive computational model subject to random inputs with an
inexpensive-to-evaluate polynomial function. The metamodel obtained enables a
reliable estimation of the statistics of the output, provided that a suitable
probabilistic model of the input is available.
In classical machine learning (ML) regression settings, however, the system
is only known through observations of its inputs and output, and the interest
lies in obtaining accurate pointwise predictions of the latter. Here, we show
that a PCE metamodel purely trained on data can yield pointwise predictions
whose accuracy is comparable to that of other ML regression models, such as
neural networks and support vector machines. The comparisons are performed on
benchmark datasets available from the literature. The methodology also enables
the quantification of the output uncertainties and is robust to noise.
Furthermore, it enjoys additional desirable properties, such as good
performance for small training sets and simplicity of construction, with only
little parameter tuning required. In the presence of statistically dependent
inputs, we investigate two ways to build the PCE, and show through simulations
that one approach is superior to the other in the stated settings. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
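A minimal sketch of the data-driven PCE regression described above, assuming inputs standardized to be roughly standard-normal and a total-degree probabilists' Hermite basis; the helper name `pce_design`, the degree, and the plain least-squares fit are illustrative choices, not the authors' implementation:

```python
# Data-driven polynomial chaos regression sketch (illustrative assumptions:
# ~N(0,1) inputs, total-degree Hermite basis, ordinary least-squares fit).
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pce_design(X, degree):
    """Evaluate a total-degree probabilists' Hermite basis at the rows of X."""
    n, d = X.shape
    multi_indices = [m for m in itertools.product(range(degree + 1), repeat=d)
                     if sum(m) <= degree]
    cols = []
    for m in multi_indices:
        col = np.ones(n)
        for j, mj in enumerate(m):
            c = np.zeros(mj + 1)
            c[mj] = 1.0                      # coefficient vector selecting He_mj
            col *= hermeval(X[:, j], c)
        cols.append(col)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

Phi = pce_design(X, degree=3)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares PCE fit
y_hat = Phi @ coef                               # pointwise predictions
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

A side benefit the abstract alludes to: with an orthogonal basis, the fitted constant coefficient directly estimates the output mean, so the same model yields output statistics alongside pointwise predictions.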
Title: Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection,
Abstract: With a rapidly increasing number of devices connected to the internet, big
data has been applied to various domains of human life. Nevertheless, it has
also opened new avenues for breaching users' privacy. Hence it is essential to develop techniques that enable data owners to privatize their data
while keeping it useful for intended applications. Existing methods, however,
do not offer enough flexibility for controlling the utility-privacy trade-off
and may incur unfavorable results when privacy requirements are high. To tackle
these drawbacks, we propose a compressive-privacy based method, namely RUCA
(Ratio Utility and Cost Analysis), which can not only maximize performance for
a privacy-insensitive classification task but also minimize the ability of any
classifier to infer private information from the data. Experimental results on
Census and Human Activity Recognition data sets demonstrate that RUCA
significantly outperforms existing privacy preserving data projection
techniques for a wide range of privacy pricings. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Time-optimal control strategies in SIR epidemic models,
Abstract: We investigate the time-optimal control problem in SIR
(Susceptible-Infected-Recovered) epidemic models, focusing on different control
policies: vaccination, isolation, culling, and reduction of transmission.
Applying Pontryagin's Minimum Principle (PMP) to the unconstrained control
problems (i.e. without costs of control or resource limitations), we prove
that, for all the policies investigated, only bang-bang controls with at most
one switch are admitted. When a switch occurs, the optimal strategy is to delay
the control action for some amount of time and then apply the control at the
maximum rate for the remainder of the outbreak. This result is in contrast with
previous findings on the unconstrained problems of minimizing the total
infectious burden over an outbreak, where the optimal strategy is to use the
maximal control for the entire epidemic. A critical consequence of our results is that, in a wide range of epidemiological circumstances, it may be impossible to minimize the total infectious burden while minimizing the epidemic duration, and vice versa. Moreover, numerical simulations highlight additional unexpected results, showing that the optimal control can be delayed
also when the control reproduction number is lower than one and that the
switching time from no control to maximum control can even occur after the peak
of infection has been reached. Our results are especially important for
livestock diseases, where the minimization of outbreak duration is a priority due to sanitary restrictions imposed on farms during ongoing epidemics, such as
animal movements and export bans. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Quantitative Biology"
] |
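A toy forward simulation of the one-switch bang-bang structure described above (a sketch with made-up parameter values, not the paper's optimal-control computation; the control is a reduction of transmission that stays off until a switching time and then runs at maximum rate):

```python
# Toy SIR model with a one-switch bang-bang transmission-reduction control.
# All parameter values are illustrative; this only demonstrates the policy
# structure (delay, then maximum control) discussed in the abstract.
import numpy as np

def epidemic_duration(t_switch, u_max=0.6, beta=0.5, gamma=0.2,
                      T=150.0, dt=0.01, S0=0.99, I0=0.01):
    S, I, duration = S0, I0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        u = u_max if t >= t_switch else 0.0   # bang-bang: off, then fully on
        new_inf = (1.0 - u) * beta * S * I    # controlled incidence
        S += dt * (-new_inf)
        I += dt * (new_inf - gamma * I)
        if I > 1e-4:
            duration = t                      # last time infection is non-negligible
    return duration

for ts in [0.0, 10.0, 20.0, 40.0]:
    print(f"switch at t={ts:5.1f} -> outbreak lasts ~{epidemic_duration(ts):6.1f}")
```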
Title: DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization,
Abstract: Adaptive gradient-based optimization methods such as ADAGRAD, RMSPROP, and
ADAM are widely used in solving large-scale machine learning problems including
deep learning. A number of schemes have been proposed in the literature aiming
at parallelizing them, based on communications of peripheral nodes with a central node, but these incur high communication costs. To address this issue, we
develop a novel consensus-based distributed adaptive moment estimation method
(DADAM) for online optimization over a decentralized network that enables data
parallelization, as well as decentralized computation. The method is
particularly useful, since it can accommodate settings where access to local
data is allowed. Further, as established theoretically in this work, it can
outperform centralized adaptive algorithms, for certain classes of loss
functions used in applications. We analyze the convergence properties of the
proposed algorithm and provide a dynamic regret bound on the convergence rate
of adaptive moment estimation methods in both stochastic and deterministic
settings. Empirical results demonstrate that DADAM also works well in practice
and compares favorably to competing online optimization methods. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
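A schematic of one consensus-plus-Adam iteration in the spirit of the abstract. Hedged: the doubly stochastic mixing matrix `W`, the hyperparameters, and the order of the consensus and adaptation steps are illustrative assumptions, not the paper's exact recursion:

```python
# Schematic consensus-based distributed Adam step (illustrative sketch; the
# exact DADAM updates may differ). Each node mixes neighbors' iterates via a
# doubly stochastic matrix W, then takes an Adam step on its local gradient.
import numpy as np

def dadam_step(X, M, V, grads, W, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """X: (nodes, dim) iterates; M, V: per-node Adam moments; grads: local grads."""
    X_mix = W @ X                                # consensus averaging
    M = b1 * M + (1 - b1) * grads                # first-moment estimate
    V = b2 * V + (1 - b2) * grads ** 2           # second-moment estimate
    m_hat = M / (1 - b1 ** t)                    # bias corrections
    v_hat = V / (1 - b2 ** t)
    return X_mix - lr * m_hat / (np.sqrt(v_hat) + eps), M, V

# Toy problem: node i holds the local loss 0.5 * ||x - c_i||^2, so the
# network-wide minimizer is the average of the c_i.
rng = np.random.default_rng(1)
n, d = 4, 3
C = rng.standard_normal((n, d))
W = np.full((n, n), 1.0 / n)                     # fully connected mixing
X, M, V = np.zeros((n, d)), np.zeros((n, d)), np.zeros((n, d))
for t in range(1, 2001):
    X, M, V = dadam_step(X, M, V, X - C, W, t)   # local gradient is x_i - c_i
print("node-0 iterate:", np.round(X[0], 3), " target:", np.round(C.mean(0), 3))
```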
Title: Paramagnetic Meissner effect in ZrB12 single crystal with non-monotonic vortex-vortex interactions,
Abstract: The magnetic response related to paramagnetic Meissner effect (PME) is
studied in a high quality single crystal ZrB12 with non-monotonic vortex-vortex
interactions. We observe the expulsion and penetration of magnetic flux in the
form of vortex clusters with increasing temperature. A vortex phase diagram is
constructed which shows that the PME can be explained by considering the
interplay among the flux compression, the different temperature dependencies of
the vortex-vortex and the vortex-pin interactions, and thermal fluctuations.
Such a scenario is in good agreement with the results of the magnetic
relaxation measurements. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Credal Networks under Epistemic Irrelevance,
Abstract: A credal network under epistemic irrelevance is a generalised type of
Bayesian network that relaxes its two main building blocks. On the one hand,
the local probabilities are allowed to be partially specified. On the other
hand, the assessments of independence do not have to hold exactly.
Conceptually, these two features turn credal networks under epistemic
irrelevance into a powerful alternative to Bayesian networks, offering a more
flexible approach to graph-based multivariate uncertainty modelling. However,
in practice, they have long been perceived as very hard to work with, both
theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer
justified. We provide a general introduction to credal networks under epistemic
irrelevance, give an overview of the state of the art, and present several new
theoretical results. Most importantly, we explain how these results can be
combined to allow for the design of recursive inference methods. We provide
numerous concrete examples of how this can be achieved, and use these to
demonstrate that computing with credal networks under epistemic irrelevance is
most definitely feasible, and in some cases even highly efficient. We also
discuss several philosophical aspects, including the lack of symmetry, how to
deal with probability zero, the interpretation of lower expectations, the
axiomatic status of graphoid properties, and the difference between updating
and conditioning. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: EMRIs and the relativistic loss-cone: The curious case of the fortunate coincidence,
Abstract: Extreme mass ratio inspiral (EMRI) events are vulnerable to perturbations by
the stellar background, which can abort them prematurely by deflecting EMRI
orbits to plunging ones that fall directly into the massive black hole (MBH),
or to less eccentric ones that no longer interact strongly with the MBH. A
coincidental hierarchy between the collective resonant Newtonian torques due to
the stellar background, and the relative magnitudes of the leading-order
post-Newtonian precessional and radiative terms of the general relativistic
2-body problem, allows EMRIs to decouple from the background and produce
semi-periodic gravitational wave signals. I review the recent theoretical
developments that confirm this conjectured fortunate coincidence, and briefly
discuss the implications for EMRI rates, and show how these dynamical effects
can be probed locally by stars near the Galactic MBH. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Investigating the configurations in cross-shareholding: a joint copula-entropy approach,
Abstract: The companies populating a stock market, along with their connections,
can be effectively modeled through a directed network, where the nodes
represent the companies, and the links indicate the ownership. This paper deals
with this theme and discusses the concentration of a market. A
cross-shareholding matrix is considered, along with two key factors: the node
out-degree distribution which represents the diversification of investments in
terms of the number of involved companies, and the node in-degree distribution
which reports the integration of a company due to the sales of its own shares
to other companies. While diversification is widely explored in the literature,
integration is mostly present in the literature on contagion. This paper captures
such quantities of interest in the two frameworks and studies the stochastic
dependence of diversification and integration through a copula approach. We
adopt entropies as measures for assessing the concentration in the market. The
main question is to assess the dependence structure leading to a better
description of the data or to market polarization (minimal entropy) or market
fairness (maximal entropy). In so doing, we derive information on the way in
which the in- and out-degrees should be connected in order to shape the market.
The question is of interest to regulatory bodies, as witnessed by the specific alert thresholds published in the US merger guidelines for limiting the possibility of acquisitions and the prevalence of a single company in the market. Indeed, individual countries and the EU also have rules or guidelines in order to limit concentrations, in a country or across borders, respectively. The
calibration of copulas and model parameters on the basis of real data serves as
an illustrative application of the theoretical proposal. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance",
"Statistics"
] |
Title: The Enemy Among Us: Detecting Hate Speech with Threats Based 'Othering' Language Embeddings,
Abstract: Offensive or antagonistic language targeted at individuals and social groups based on their personal characteristics (also known as cyber hate speech or cyberhate) has been frequently posted and widely circulated via the World Wide Web. This can be considered as a key risk factor for individual and societal tension linked to regional instability. Automated Web-based cyberhate detection is important for observing and understanding community and regional societal tension - especially in online social networks where posts can be rapidly and widely viewed and disseminated. While previous work has involved using lexicons, bags-of-words or probabilistic language parsing approaches, they often suffer from a similar issue, which is that cyberhate can be subtle and indirect - thus depending on the occurrence of individual words or phrases can lead to a significant number of false negatives, providing an inaccurate representation of the trends in cyberhate. This problem motivated us to challenge thinking around the representation of subtle language use, such as references to perceived threats from "the other" including immigration or job prosperity in a hateful context. We propose a novel framework that utilises language use around the concept of "othering" and intergroup threat theory to identify these subtleties, and we implement a novel classification method using embedding learning to compute semantic distances between parts of speech considered to be part of an "othering" narrative. To validate our approach we conduct several experiments on different types of cyberhate, namely religion, disability, race and sexual orientation, with F-measure scores for classifying hateful instances obtained through applying our model of 0.93, 0.86, 0.97 and 0.98 respectively, providing a significant improvement in classifier accuracy over the state-of-the-art. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Linear Programming Formulations of Deterministic Infinite Horizon Optimal Control Problems in Discrete Time,
Abstract: This paper is devoted to a study of infinite horizon optimal control problems
with time discounting and time averaging criteria in discrete time. We
establish that these problems are related to certain infinite-dimensional
linear programming (IDLP) problems. We also establish asymptotic relationships
between the optimal values of problems with time discounting and long-run
average criteria. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Non-normality, reactivity, and intrinsic stochasticity in neural dynamics: a non-equilibrium potential approach,
Abstract: Intrinsic stochasticity can induce highly non-trivial effects on dynamical
systems, including stochastic and coherence resonance, noise-induced bistability, and noise-induced oscillations, to name but a few. In this paper we
revisit a mechanism first investigated in the context of neuroscience by which
relatively small demographic (intrinsic) fluctuations can lead to the emergence
of avalanching behavior in systems that are deterministically characterized by
a single stable fixed point (up state). The anomalously large response of such
systems to stochasticity stems from (or is strongly associated with) the existence
of a "non-normal" stability matrix at the deterministic fixed point, which may
induce the system to be "reactive". Here, we further investigate this mechanism
by exploring the interplay between non-normality and intrinsic (demographic)
stochasticity, by employing a number of analytical and computational
approaches. We establish, in particular, that the resulting dynamics in this
type of systems cannot be simply derived from a scalar potential but,
additionally, one needs to consider a curl flux which describes the essential
non-equilibrium nature of this type of noisy non-normal systems. Moreover, we
shed further light on the origin of the phenomenon, introduce the novel concept
of "non-linear reactivity", and rationalize of the observed the value of the
emerging avalanche exponents. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Physics",
"Mathematics"
] |
Title: Small nonlinearities in activation functions create bad local minima in neural networks,
Abstract: We investigate the loss surface of neural networks. We prove that even for
one-hidden-layer networks with "slightest" nonlinearity, the empirical risks
have spurious local minima in most cases. Our results thus indicate that in
general "no spurious local minima" is a property limited to deep linear
networks, and insights obtained from linear networks are not robust.
Specifically, for ReLU(-like) networks we constructively prove that for almost
all (in contrast to previous results) practical datasets there exist infinitely
many local minima. We also present a counterexample for more general
activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad
local minimum. Our results make the least restrictive assumptions relative to
existing results on local optimality in neural networks. We complete our
discussion by presenting a comprehensive characterization of global optimality
for deep linear networks, which unifies other results on this topic. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Stabilization Bounds for Linear Finite Dynamical Systems,
Abstract: A problem common to all applications of linear finite dynamical systems is
analyzing the dynamics without enumerating every possible state transition. Of
particular interest is the long term dynamical behaviour. In this paper, we
study the number of iterations needed for a system to settle on a fixed set of
elements. As our main result, we present two upper bounds on iterations needed,
and each one may be readily applied to a fixed point system test. The bounds
are based on submodule properties of iterated images and reduced systems modulo
a prime. We also provide examples where our bounds are optimal. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
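A toy numerical companion to this abstract (a sketch under stated assumptions, not the paper's bounds): for a linear map x -> Ax over F_p, iterate and watch the dimension of the image stabilize, which is exactly the settling behaviour being bounded.

```python
# Count iterations until the image of A^k over F_p stops shrinking (p prime).
# Illustrative experiment only; the paper derives theoretical upper bounds.
import numpy as np

def rank_mod_p(A, p):
    """Rank over F_p via Gaussian elimination (p must be prime)."""
    A = A.copy() % p
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, c]), p - 2, p)) % p   # scale pivot row to 1
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
    return r

p, n = 5, 6
rng = np.random.default_rng(2)
A = rng.integers(0, p, size=(n, n))
Ak, steps, prev_rank = np.eye(n, dtype=np.int64), 0, n + 1
while True:
    Ak = (Ak @ A) % p
    steps += 1
    rk = rank_mod_p(Ak, p)
    if rk == prev_rank:              # equal consecutive ranks => image settled
        break
    prev_rank = rk
print(f"image dimension settles at {prev_rank} after {steps - 1} iteration(s)")
```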
Title: Magneto-thermopower in the Weak Ferromagnetic Oxide CaRu0.8Sc0.2O3: An Experimental Test for the Kelvin Formula in a Magnetic Material,
Abstract: We have measured the resistivity, the thermopower, and the specific heat of
the weak ferromagnetic oxide CaRu0.8Sc0.2O3 in external magnetic fields up to
140 kOe below 80 K. We have observed that the thermopower Q is significantly
suppressed by magnetic fields at around the ferromagnetic transition
temperature of 30 K, and have further found that the magneto-thermopower
{\Delta}Q(H, T) = Q(H, T) - Q(0, T) is roughly proportional to the
magneto-entropy {\Delta}S(H, T) = S(H, T) - S(0, T). We discuss this relationship
between the two quantities in terms of the Kelvin formula, and find that the
observed {\Delta}Q is quantitatively consistent with the values expected from
the Kelvin formula, a possible physical meaning of which is discussed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Dispersive Magnetic and Electronic Excitations in Iridate Perovskites Probed with Oxygen $K$-Edge Resonant Inelastic X-ray Scattering,
Abstract: Resonant inelastic X-ray scattering (RIXS) experiments performed at the
oxygen-$K$ edge on the iridate perovskites {\SIOS} and {\SION} reveal a
sequence of well-defined dispersive modes over the energy range up to $\sim
0.8$ eV. The momentum dependence of these modes and their variation with the
experimental geometry allows us to assign each of them to specific collective
magnetic and/or electronic excitation processes, including single and
bi-magnons, and spin-orbit and electron-hole excitons. We thus demonstrate that dispersive magnetic and electronic excitations are observable at the O-$K$ edge in the presence of the strong spin-orbit coupling in the $5d$ shell of iridium and strong hybridization between Ir $5d$ and O $2p$ orbitals, which confirms and expands theoretical expectations. More generally, our results
establish the utility of O-$K$ edge RIXS for studying the collective
excitations in a range of $5d$ materials that are attracting increasing
attention due to their novel magnetic and electronic properties. In particular, the strong RIXS response at the O-$K$ edge opens up the opportunity for
investigating collective excitations in thin films and heterostructures
fabricated from these materials. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On reducing the communication cost of the diffusion LMS algorithm,
Abstract: The rise of digital and mobile communications has recently made the world
more connected and networked, resulting in an unprecedented volume of data
flowing between sources, data centers, or processes. While these data may be
processed in a centralized manner, it is often more suitable to consider
distributed strategies such as diffusion as they are scalable and can handle
large amounts of data by distributing tasks over networked agents. Although it
is relatively simple to implement diffusion strategies over a cluster, it
appears to be challenging to deploy them in an ad-hoc network with limited
energy budget for communication. In this paper, we introduce a diffusion LMS
strategy that significantly reduces communication costs without compromising
the performance. Then, we analyze the proposed algorithm in the mean and
mean-square sense. Next, we conduct numerical experiments to confirm the
theoretical findings. Finally, we perform large scale simulations to test the
algorithm efficiency in a scenario where energy is limited. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
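For context on the baseline whose communication cost is being reduced, here is a minimal adapt-then-combine (ATC) diffusion LMS sketch; the uniform combination weights and toy data are assumptions, and the paper's communication-saving mechanism is deliberately not reproduced:

```python
# Adapt-then-combine diffusion LMS over a toy fully connected network.
# Each agent adapts with its local data, then combines neighbors' estimates.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, mu = 5, 4, 0.02
w_true = rng.standard_normal(dim)
A = np.full((n_agents, n_agents), 1.0 / n_agents)  # combination weights
W = np.zeros((n_agents, dim))                      # per-agent estimates

for _ in range(3000):
    Psi = np.empty_like(W)
    for k in range(n_agents):
        u = rng.standard_normal(dim)               # local regressor
        d = u @ w_true + 0.05 * rng.standard_normal()
        Psi[k] = W[k] + mu * u * (d - u @ W[k])    # adapt: local LMS step
    W = A @ Psi                                    # combine: neighborhood average

print("max estimation error:", float(np.abs(W - w_true).max()))
```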
Title: Statistical Inference on Panel Data Models: A Kernel Ridge Regression Method,
Abstract: We propose statistical inferential procedures for panel data models with
interactive fixed effects in a kernel ridge regression framework. Compared with traditional sieve methods, our method is automatic in the sense that it does not require the choice of basis functions and truncation parameters. Model
complexity is controlled by a continuous regularization parameter which can be
automatically selected by generalized cross validation. Based on empirical
processes theory and functional analysis tools, we derive joint asymptotic
distributions for the estimators in the heterogeneous setting. These joint
asymptotic results are then used to construct confidence intervals for the
regression means and prediction intervals for the future observations, both
being the first provably valid intervals in the literature. Marginal asymptotic normality of the functional estimators in the homogeneous setting is also obtained.
Simulation and real data analysis demonstrate the advantages of our method. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
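The generalized cross-validation step mentioned above can be sketched generically. Hedged: this is plain kernel ridge regression with an RBF kernel, not the paper's panel-data estimator with interactive fixed effects, and `gamma` plus the candidate grid are illustrative:

```python
# Kernel ridge regression with GCV selection of the regularization parameter.
import numpy as np

def krr_gcv(X, y, lambdas, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                         # RBF kernel matrix
    n = len(y)
    best = None
    for lam in lambdas:
        H = K @ np.linalg.inv(K + lam * np.eye(n))  # smoother ("hat") matrix
        resid = y - H @ y
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam)
    return best[1]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(100)
print("GCV-selected lambda:", krr_gcv(X, y, [1e-4, 1e-3, 1e-2, 1e-1, 1.0]))
```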
Title: How to Scale Up Kernel Methods to Be As Good As Deep Neural Nets,
Abstract: The computational complexity of kernel methods has often been a major barrier
for applying them to large-scale learning problems. We argue that this barrier
can be effectively overcome. In particular, we develop methods to scale up
kernel models to successfully tackle large-scale learning problems that are so
far only approachable by deep learning architectures. Based on the seminal work
by Rahimi and Recht on approximating kernel functions with features derived
from random projections, we advance the state-of-the-art by proposing methods
that can efficiently train models with hundreds of millions of parameters, and
learn optimal representations from multiple kernels. We conduct extensive
empirical studies on problems from image recognition and automatic speech
recognition, and show that the performance of our kernel models matches that of
well-engineered deep neural nets (DNNs). To the best of our knowledge, this is
the first time that a direct comparison between these two methods on
large-scale problems is reported. Our kernel methods have several appealing
properties: training with convex optimization, cost for training a single model
comparable to DNNs, and significantly reduced total cost due to fewer
hyperparameters to tune for model selection. Our contrastive study between
these two very different but equally competitive models sheds light on
fundamental questions such as how to learn good representations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
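The Rahimi-Recht random-features construction the abstract builds on is easy to sketch; the feature count, `gamma`, and the plain ridge solve below are illustrative assumptions:

```python
# Random Fourier features approximating an RBF kernel, followed by ridge
# regression in the feature space (a linear stand-in for kernel ridge).
import numpy as np

def rff(X, n_features, gamma, rng):
    """Features whose inner products approximate exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    Wf = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Wf + b)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

Z = rff(X, n_features=400, gamma=0.5, rng=rng)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)  # ridge solve
print("train RMSE:", np.sqrt(np.mean((Z @ w - y) ** 2)))
```

Training cost then scales with the number of random features rather than with the square of the number of samples, which is the lever that lets such models reach problems of the scale the abstract targets.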
Title: Layered Based Augmented Complex Kalman Filter for Fast Forecasting-Aided State Estimation of Distribution Networks,
Abstract: In the presence of renewable resources, distribution networks have become
extremely complex to monitor, operate and control. Furthermore, for real-time applications, active distribution networks require fast real-time distribution state estimation (DSE). A forecasting-aided state estimator (FASE) deploys measured data in consecutive time samples to refine the state estimate. Although most DSE algorithms deal with the real and imaginary parts of distribution network states independently, we propose a non-iterative complex DSE algorithm based on an augmented complex Kalman filter (ACKF), which considers the states as complex values. In the case of real-time DSE, and in the presence of a large number of customer loads in the system, employing DSE in one single estimation layer is not computationally efficient. Consequently, our proposed method performs in several estimation layers hierarchically, as a multi-layer DSE using ACKF (DSEMACKF). In the proposed method, a distribution network can
be divided into one main area and several subareas. The aggregated loads in
each subarea act like a big customer load in the main area. Load aggregation
results in a lower variability and higher cross correlation. This increases the
accuracy of the estimated states. Additionally, the proposed method is
formulated to include unbalanced loads in low voltage (LV) distribution networks. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Single-Queue Decoding for Neural Machine Translation,
Abstract: Neural machine translation models rely on the beam search algorithm for
decoding. In practice, we found that the quality of hypotheses in the search
space is negatively affected by the fixed beam size. To mitigate this
problem, we store all hypotheses in a single priority queue and use a universal
score function for hypothesis selection. The proposed algorithm is more
flexible as the discarded hypotheses can be revisited in a later step. We
further design a penalty function to punish the hypotheses that tend to produce
a final translation that is much longer or shorter than expected. Despite its
simplicity, we show that the proposed decoding algorithm is able to select
hypotheses with better qualities and improve the translation performance. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
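A toy rendering of the single-priority-queue idea. Hedged: the length-normalized mean log-probability score, the expansion width, and the step budget below are simple stand-ins for the paper's universal score and penalty functions:

```python
# Toy single-priority-queue decoder (a sketch of the idea above, not the
# paper's system). All partial hypotheses share one heap, so hypotheses
# discarded earlier can be revisited at a later step.
import heapq
import math

def single_queue_decode(step_logprobs, eos, max_len=10, expand_k=3, budget=2000):
    """step_logprobs(prefix) -> {token: log-prob}. Returns (sequence, score)."""
    heap = [(0.0, 0.0, ())]                    # (neg normalized score, logp sum, prefix)
    best_score, best_seq = float("-inf"), ()
    while heap and budget > 0:
        budget -= 1
        _, logp, prefix = heapq.heappop(heap)
        if (prefix and prefix[-1] == eos) or len(prefix) >= max_len:
            score = logp / len(prefix)         # toy universal score: mean log-prob
            if score > best_score:
                best_score, best_seq = score, prefix
            continue
        top = sorted(step_logprobs(prefix).items(), key=lambda kv: -kv[1])[:expand_k]
        for tok, lp in top:
            new_logp, new_prefix = logp + lp, prefix + (tok,)
            heapq.heappush(heap, (-new_logp / len(new_prefix), new_logp, new_prefix))
    return best_seq, best_score

# Tiny stand-in "model": token 1 is likely, token 0 acts as end-of-sequence.
def toy_model(prefix):
    return {0: math.log(0.2), 1: math.log(0.5), 2: math.log(0.3)}

print(single_queue_decode(toy_model, eos=0))
```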
Title: PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network,
Abstract: Music creation is typically composed of two parts: composing the musical
score, and then performing the score with instruments to make sounds. While
recent work has made much progress in automatic music generation in the
symbolic domain, few attempts have been made to build an AI model that can
render realistic music audio from musical scores. Directly synthesizing audio
with sound sample libraries often leads to mechanical and deadpan results,
since musical scores do not contain performance-level information, such as
subtle changes in timing and dynamics. Moreover, while the task may sound like
a text-to-speech synthesis problem, there are fundamental differences since
music audio has rich polyphonic sounds. To build such an AI performer, we
propose in this paper a deep convolutional model that learns in an end-to-end
manner the score-to-audio mapping between a symbolic representation of music
called the piano rolls and an audio representation of music called the
spectrograms. The model consists of two subnets: the ContourNet, which uses a
U-Net structure to learn the correspondence between piano rolls and
spectrograms and to give an initial result; and the TextureNet, which further
uses a multi-band residual network to refine the result by adding the spectral
texture of overtones and timbre. We train the model to generate music clips of
the violin, cello, and flute, with a dataset of moderate size. We also present
the result of a user study that shows our model achieves higher mean opinion
score (MOS) in naturalness and emotional expressivity than a WaveNet-based
model and two commercial sound libraries. We open our source code at
this https URL | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Learning Disentangled Representations with Semi-Supervised Deep Generative Models,
Abstract: Variational autoencoders (VAEs) learn representations of data by jointly
training a probabilistic encoder and decoder network. Typically these models
encode all features of the data into a single variable. Here we are interested
in learning disentangled representations that encode distinct aspects of the
data into separate variables. We propose to learn such representations using
model architectures that generalise from standard VAEs, employing a general
graphical model structure in the encoder and decoder. This allows us to train
partially-specified models that make relatively strong assumptions about a
subset of interpretable variables and rely on the flexibility of neural
networks to learn representations for the remaining variables. We further
define a general objective for semi-supervised learning in this model class,
which can be approximated using an importance sampling procedure. We evaluate
our framework's ability to learn disentangled representations, both by
qualitative exploration of its generative capacity, and quantitative evaluation
of its discriminative ability on a variety of models and datasets. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Meteorites from Phobos and Deimos at Earth?,
Abstract: We examine the conditions under which material from the martian moons Phobos
and Deimos could reach our planet in the form of meteorites. We find that the
necessary ejection speeds from these moons (900 and 600 m/s for Phobos and
Deimos respectively) are much smaller than from Mars' surface (5000 m/s). These
speeds are below typical impact speeds for asteroids and comets (10-40 km/s) at
Mars' orbit, and we conclude that the delivery of meteorites from Phobos and
Deimos to the Earth can occur. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Watermark Signal Detection and Its Application in Image Retrieval,
Abstract: We propose a few fundamental techniques to obtain effective watermark
features of images in the image search index, and utilize the signals in a
commercial search engine to improve the image search quality. We collect a
diverse and large set (about 1M) of images with human labels indicating whether
the image contains a visible watermark. We train a few deep convolutional neural
networks to extract watermark information from the raw images. We also analyze
the images based on their domains to get watermark information from a
domain-based watermark classifier. The deep CNN classifiers we trained can
achieve high accuracy on the watermark data set. We demonstrate that using
these signals in Bing image search ranker, powered by LambdaMART, can
effectively reduce the watermark rate during the online image ranking. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Loop-augmented forests and a variant of the Foulkes' conjecture,
Abstract: A loop-augmented forest is a labeled rooted forest with loops on some of its
roots. By exploiting an interplay between nilpotent partial functions and
labeled rooted forests, we investigate the permutation action of the symmetric
group on loop-augmented forests. Furthermore, we describe an extension of the
Foulkes' conjecture and prove a special case. Among other important outcomes of
our analysis are a complete description of the stabilizer subgroup of an
idempotent in the semigroup of partial transformations and a generalization of
the (Knuth-Sagan) hook length formula. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: An Extension of Proof Graphs for Disjunctive Parameterised Boolean Equation Systems,
Abstract: A parameterised Boolean equation system (PBES) is a set of equations that
defines sets as the least and/or greatest fixed-points that satisfy the
equations. This system is regarded as a declarative program defining functions
that take a datum and returns a Boolean value. The membership problem of PBESs
is a problem to decide whether a given element is in the defined set or not,
which corresponds to an execution of the program. This paper introduces reduced
proof graphs, and studies a technique to solve the membership problem of PBESs,
which is undecidable in general, by transforming it into a reduced proof graph.
A vertex X(v) in a proof graph represents that the data v is in the set X, if
the graph satisfies conditions induced from a given PBES. Proof graphs are,
however, infinite in general. Thus we introduce vertices each of which stands
for a set of vertices of the original ones, which possibly results in a finite
graph. For a subclass of disjunctive PBESs, we clarify some conditions which
reduced proof graphs should satisfy. We also show some examples having no
finite proof graph except for reduced one. We further propose a reduced
dependency space, which contains reduced proof graphs as sub-graphs if a proof
graph exists. We provide a procedure to construct finite reduced dependency
spaces, and show the soundness and completeness of the procedure. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Scalable Structure Learning for Probabilistic Soft Logic,
Abstract: Statistical relational frameworks such as Markov logic networks and
probabilistic soft logic (PSL) encode model structure with weighted first-order
logical clauses. Learning these clauses from data is referred to as structure
learning. Structure learning alleviates the manual cost of specifying models.
However, this benefit comes with high computational costs; structure learning
typically requires an expensive search over the space of clauses which involves
repeated optimization of clause weights. In this paper, we propose the first
two approaches to structure learning for PSL. We introduce a greedy
search-based algorithm and a novel optimization method that trade-off
scalability and approximations to the structure learning problem in varying
ways. The highly scalable optimization method combines data-driven generation
of clauses with a piecewise pseudolikelihood (PPLL) objective that learns model
structure by optimizing clause weights only once. We compare both methods
across five real-world tasks, showing that PPLL achieves an order of magnitude
runtime speedup and AUC gains up to 15% over greedy search. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: A mode-coupling theory analysis of the rotation driven translational motion of aqueous polyatomic ions,
Abstract: In contrast to simple monatomic alkali and halide ions, complex polyatomic
ions like nitrate, acetate, nitrite, chlorate etc. have not been studied in any
great detail. Experiments have shown that diffusion of polyatomic ions exhibits
many remarkable anomalies, notable among them the fact that polyatomic ions of similar size show large differences in their diffusivity values. This fact
has drawn relatively little interest in scientific discussions. We show here
that a mode-coupling theory (MCT) can provide a physically meaningful
interpretation of the anomalous diffusivity of polyatomic ions in water, by
including the contribution of rotational jumps on translational friction. The
two systems discussed here, namely aqueous nitrate ion and aqueous acetate ion,
although having similar ionic radii, exhibit largely different diffusivity values
due to the differences in the rate of their rotational jump motions. We have
further verified the mode-coupling theory formalism by comparing it with
experimental and simulation results, which agree well with the theoretical
prediction. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Chemistry"
] |
Title: Haptic Assembly and Prototyping: An Expository Review,
Abstract: An important application of haptic technology to digital product development
is in virtual prototyping (VP), part of which deals with interactive planning,
simulation, and verification of assembly-related activities, collectively
called virtual assembly (VA). In spite of numerous research and development
efforts over the last two decades, the industrial adoption of haptic-assisted
VP/VA has been slower than expected. Putting hardware limitations aside, the
main roadblocks faced in software development can be traced to the lack of
effective and efficient computational models of haptic feedback. Such models
must 1) accommodate the inherent geometric complexities faced when assembling
objects of arbitrary shape; and 2) conform to the computation time limitation
imposed by the notorious frame rate requirements---namely, 1 kHz for haptic
feedback compared to the more manageable 30-60 Hz for graphic rendering. The
simultaneous fulfillment of these competing objectives is far from trivial.
This survey presents some of the conceptual and computational challenges and
opportunities as well as promising future directions in haptic-assisted VP/VA,
with a focus on haptic assembly from a geometric modeling and spatial reasoning
perspective. The main focus is on revisiting definitions and classifications of
different methods used to handle the constrained multibody simulation in
real-time, ranging from physics-based and geometry-based to hybrid and unified
approaches using a variety of auxiliary computational devices to specify,
impose, and solve assembly constraints. Particular attention is given to the
newly developed 'analytic methods' inherited from motion planning and protein
docking that have shown great promise as an alternative paradigm to the more
popular combinatorial methods. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: On links between horocyclic and geodesic orbits on geometrically infinite surfaces,
Abstract: We study the topological dynamics of the horocycle flow $h_\mathbb{R}$ on a
geometrically infinite hyperbolic surface S. Let u be a non-periodic vector for
$h_\mathbb{R}$ in T^1 S. Suppose that the half-geodesic $u(\mathbb{R}^+)$ is
almost minimizing and that the injectivity radius along $u(\mathbb{R}^+)$ has a
finite inferior limit $Inj(u(\mathbb{R}^+))$. We prove that the closure of $h_\mathbb{R} u$ meets the geodesic orbit along an unbounded sequence of points $g_{t_n} u$. Moreover, if $Inj(u(\mathbb{R}^+)) = 0$, the whole half-orbit $g_{\mathbb{R}^+} u$ is contained in $\overline{h_\mathbb{R} u}$. When $Inj(u(\mathbb{R}^+)) > 0$, it is known that in general $g_{\mathbb{R}^+} u \not\subset \overline{h_\mathbb{R} u}$. Yet, we give a construction where $Inj(u(\mathbb{R}^+)) > 0$ and $g_{\mathbb{R}^+} u \subset \overline{h_\mathbb{R} u}$,
which also constitutes a counterexample to Proposition 3 of [Led97]. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Landau levels from neutral Bogoliubov particles in two-dimensional nodal superconductors under strain and doping gradients,
Abstract: Motivated by recent work on strain-induced pseudo-magnetic fields in Dirac
and Weyl semimetals, we analyze the possibility of analogous fields in
two-dimensional nodal superconductors. We consider the prototypical case of a
d-wave superconductor, a representative of the cuprate family, and find that
the presence of weak strain leads to pseudo-magnetic fields and Landau
quantization of Bogoliubov quasiparticles in the low-energy sector. A similar
effect is induced by the presence of generic, weak doping gradients. In
contrast to genuine magnetic fields in superconductors, the strain- and doping
gradient-induced pseudo-magnetic fields couple in a way that preserves
time-reversal symmetry and is not subject to the screening associated with the
Meissner effect. These effects can be probed by tuning weak applied
supercurrents which lead to shifts in the energies of the Landau levels and
hence to quantum oscillations in thermodynamic and transport quantities. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Affine maps between quadratic assignment polytopes and subgraph isomorphism polytopes,
Abstract: We consider two polytopes. The quadratic assignment polytope $QAP(n)$ is the
convex hull of the set of tensors $x\otimes x$, $x \in P_n$, where $P_n$ is the
set of $n\times n$ permutation matrices. The second polytope is defined as
follows. For every permutation of vertices of the complete graph $K_n$ we
consider the corresponding $\binom{n}{2} \times \binom{n}{2}$ permutation matrix of
the edges of $K_n$. The Young polytope $P((n-2,2))$ is the convex hull of all
such matrices.
In 2009, S. Onn showed that the subgraph isomorphism problem can be reduced
to optimization both over $QAP(n)$ and over $P((n-2,2))$. He also posed the
question whether $QAP(n)$ and $P((n-2,2))$, having $n!$ vertices each, are
isomorphic. We show that $QAP(n)$ and $P((n-2,2))$ are not isomorphic. Also, we
show that $QAP(n)$ is a face of $P((2n-2,2))$, but $P((n-2,2))$ is a projection
of $QAP(n)$. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: LinXGBoost: Extension of XGBoost to Generalized Local Linear Models,
Abstract: XGBoost is often presented as the algorithm that wins every ML competition.
Surprisingly, this is true even though predictions are piecewise constant. This
might be justified in high dimensional input spaces, but when the number of
features is low, a piecewise linear model is likely to perform better. XGBoost is extended into LinXGBoost, which stores a linear model at each leaf. This
extension, equivalent to piecewise regularized least-squares, is particularly
attractive for regression of functions that exhibit jumps or discontinuities.
Those functions are notoriously hard to regress. Our extension is compared to
the vanilla XGBoost and Random Forest in experiments on both synthetic and
real-world data sets. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
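To make the "linear model in each leaf" idea concrete, here is a depth-one sketch, not the LinXGBoost implementation itself: an exhaustive split search with a ridge-regularized linear fit in each leaf, applied to a function with a jump:

```python
# Depth-1 regression tree with linear (instead of constant) leaves.
import numpy as np

def fit_linear_leaf(X, y, lam=1e-3):
    Xb = np.column_stack([X, np.ones(len(X))])       # add intercept column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def sse(X, y, w):
    Xb = np.column_stack([X, np.ones(len(X))])
    r = y - Xb @ w
    return r @ r

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-2, 2, size=(200, 1)), axis=0)
y = np.where(X[:, 0] < 0, 1.0 + X[:, 0], -1.0 + 0.5 * X[:, 0])  # jump at 0

best = None
for s in np.linspace(-1.5, 1.5, 61):                 # exhaustive split search
    L, R = X[:, 0] < s, X[:, 0] >= s
    if L.sum() < 5 or R.sum() < 5:
        continue
    wl, wr = fit_linear_leaf(X[L], y[L]), fit_linear_leaf(X[R], y[R])
    err = sse(X[L], y[L], wl) + sse(X[R], y[R], wr)
    if best is None or err < best[0]:
        best = (err, s, wl, wr)
print("best split at x ~", round(best[1], 2))        # recovers the jump near 0
```

A constant-leaf tree would need many splits to track the two slopes; the linear leaves recover both the jump location and the local trends with a single split.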
Title: Measuring Player Retention and Monetization using the Mean Cumulative Function,
Abstract: Game analytics supports game development by providing direct quantitative
feedback about player experience. Player retention and monetization in
particular have become central business statistics in free-to-play game
development. Many metrics have been used for this purpose. However, game
developers often want to perform analytics in a timely manner before all users
have churned from the game. This causes data censoring which makes many metrics
biased. In this work, we show how the Mean Cumulative Function (MCF) can
be used to generalize many academic metrics to censored data. The MCF allows us
to estimate the expected value of a metric over time, which for example may be
the number of game sessions, number of purchases, total playtime and lifetime
value. Furthermore, the popular retention rate metric is the derivative of this
estimate applied to the expected number of distinct days played. Statistical
tools based on the MCF allow game developers to determine whether a given
change improves a game, or whether a game is yet good enough for public
release. The advantages of this approach are demonstrated on a real
in-development free-to-play mobile game, the Hipster Sheep. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
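To make the censoring-aware estimator concrete, the following is a minimal mean cumulative function computed from recurrent events under per-player right censoring; the data layout and numbers are hypothetical.

```python
# Minimal sketch of a Mean Cumulative Function (MCF) estimator for recurrent
# events (e.g., purchases) under right censoring. Data layout is hypothetical:
# one censoring time per player, plus (event_time <= censoring time) events.
import numpy as np

def mcf(censor_times, event_times):
    censor_times = np.asarray(censor_times, dtype=float)
    event_times = np.asarray(event_times, dtype=float)
    times = np.unique(event_times)
    increments = []
    for t in times:
        at_risk = np.sum(censor_times >= t)   # players still observed at t
        d = np.sum(event_times == t)          # events occurring at t
        increments.append(d / at_risk)
    return times, np.cumsum(increments)       # expected cumulative events/player

# Three players observed for 5, 10 and 20 days; events on the listed days.
censor = [5, 10, 20]
events = [1, 2, 2, 7, 12, 15]
t, m = mcf(censor, events)
for ti, mi in zip(t, m):
    print(f"day {ti:>4.0f}: expected cumulative events per player = {mi:.3f}")
```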
Title: Bohemian Upper Hessenberg Toeplitz Matrices,
Abstract: We look at Bohemian matrices, specifically those with entries from $\{-1, 0,
{+1}\}$. Further, we specialize the matrices to be upper Hessenberg, with
subdiagonal entries $1$. Further still, we consider Toeplitz matrices of this kind.
Many properties remain after these specializations, some of which surprised us.
Focusing on only those matrices whose characteristic polynomials have maximal
height allows us to explicitly identify these polynomials and give a lower
bound on their height. This bound is exponential in the order of the matrix. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics"
] |
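For concreteness, here is a brute-force sketch (our own, not the authors' code) that enumerates all such matrices of a small order and reports the maximal characteristic-polynomial height:

```python
# Brute-force sketch: upper Hessenberg Toeplitz matrices of order n with
# entries from {-1, 0, +1}, subdiagonal fixed to 1, zero below it. Such a
# matrix is determined by the diagonal value t0 and the superdiagonal values
# t1..t_{n-1}, so there are 3^n of them.
import itertools
import numpy as np

def hessenberg_toeplitz(symbol):
    n = len(symbol)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j == i - 1:
                A[i, j] = 1              # fixed subdiagonal
            elif j >= i:
                A[i, j] = symbol[j - i]  # Toeplitz diagonals
    return A

n = 5
max_height, argmax = 0, None
for symbol in itertools.product([-1, 0, 1], repeat=n):
    p = np.poly(hessenberg_toeplitz(symbol))  # characteristic polynomial coeffs
    height = int(np.max(np.abs(np.rint(p))))  # height = max |coefficient|
    if height > max_height:
        max_height, argmax = height, symbol
print(f"order {n}: maximal height {max_height} at symbol {argmax}")
```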
Title: Evidence Logics with Relational Evidence,
Abstract: Dynamic evidence logics are logics for reasoning about the evidence and
evidence-based beliefs of agents in a dynamic environment. In this paper, we
introduce a family of logics for reasoning about relational evidence: evidence
that involves an ordering of states in terms of their relative plausibility.
We provide sound and complete axiomatizations for the logics. We also present
several evidential actions and prove soundness and completeness for the
associated dynamic logics. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Cable-Driven Actuation for Highly Dynamic Robotic Systems,
Abstract: This paper presents design and experimental evaluations of an articulated
robotic limb called Capler-Leg. The key element of Capler-Leg is its
single-stage cable-pulley transmission combined with a high-gap radius motor.
Our cable-pulley system is designed to be as light-weight as possible and to
additionally serve as the primary cooling element, thus significantly
increasing the power density and efficiency of the overall system. The total
weight of the active elements on the leg, i.e. the stators and the rotors,
accounts for more than 60% of the total leg weight, a fraction an order of
magnitude higher than in most existing robots. The resulting robotic leg has low
inertia, high torque transparency, low manufacturing cost, no backlash, and a
low number of parts. The Capler-Leg system itself serves as an experimental setup
for evaluating the proposed cable-pulley design in terms of robustness and
efficiency. A continuous jump experiment shows a remarkable 96.5% recuperation
rate, measured at the battery output. This means that almost all the mechanical
energy output used during push-off returned back to the battery during
touch-down. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Improved $A_1-A_\infty$ and related estimates for commutators of rough singular integrals,
Abstract: An $A_1-A_\infty$ estimate improving a previous result in arXiv:1607.06432 is
obtained. A new result in terms of the $A_\infty$ constant and the one-supremum
$A_q-A_\infty^{\exp}$ constant is also proved, providing a counterpart for the
result obtained in arXiv:1705.08364. Both of the preceding results rely upon a
sparse domination in terms of bilinear forms for $[b,T_\Omega]$ with
$\Omega\in L^\infty(\mathbb{S}^{n-1})$ and $b\in BMO$, which is established
relying upon techniques from arXiv:1705.07397. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Interpreted Formalisms for Configurations,
Abstract: Imprecise and incomplete specification of system \textit{configurations}
threatens safety, security, functionality, and other critical system properties
and uselessly enlarges the configuration spaces to be searched by configuration
engineers and auto-tuners. To address these problems, this paper introduces
\textit{interpreted formalisms based on real-world types for configurations}.
Configuration values are lifted to values of real-world types, which we
formalize as \textit{subset types} in Coq. Values of these types are dependent
pairs whose components are values of underlying Coq types and proofs of
additional properties about them. Real-world types both extend and further
constrain \textit{machine-level} configurations, enabling richer, proof-based
checking of their consistency with real-world constraints. Tactic-based proof
scripts are written once to automate the construction of proofs, if proofs
exist, for configuration fields and whole configurations. \textit{Failures to
prove} reveal real-world type errors. Evaluation is based on a case study of
combinatorial optimization of Hadoop performance by meta-heuristic search over
Hadoop configurations spaces. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: High-Mobility OFDM Downlink Transmission with Large-Scale Antenna Array,
Abstract: In this correspondence, we propose a new receiver design for high-mobility
orthogonal frequency division multiplexing (OFDM) downlink transmissions with a
large-scale antenna array. The downlink signal experiences the challenging fast
time-varying propagation channel. The time-varying nature originates from the
multiple carrier frequency offsets (CFOs) due to the transceiver oscillator
frequency offset (OFO) and multiple Doppler shifts. Let the received signal
first go through a carefully designed beamforming network, which could separate
multiple CFOs in the spatial domain with sufficient number of receive antennas.
A joint estimation method for the Doppler shifts and the OFO is further
developed. Then the conventional single-CFO compensation and channel estimation
method can be carried out for each beamforming branch. The proposed receiver
design avoids the complicated time-varying channel estimation, which differs a
lot from the conventional methods. More importantly, the proposed scheme can be
applied to the commonly used time-varying channel models, such as the Jakes'
channel model. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science"
] |
Title: Feature Enhancement in Visually Impaired Images,
Abstract: One of the major open problems in computer vision is detection of features in
visually impaired images. In this paper, we describe a potential solution using
Phase Stretch Transform, a new computational approach for image analysis, edge
detection and resolution enhancement that is inspired by the physics of the
photonic time stretch technique. We mathematically derive the intrinsic
nonlinear transfer function and demonstrate how it leads to (1) superior
performance at low contrast levels and (2) a reconfigurable operator for
hyper-dimensional classification. We prove that the Phase Stretch Transform
equalizes the input image brightness across the range of intensities resulting
in a high dynamic range in visually impaired images. We also show further
improvement in the dynamic range by combining our method with the conventional
techniques. Finally, our results show a method for computation of mathematical
derivatives via group delay dispersion operations. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Physics",
"Mathematics"
] |
Title: Heavy-Tailed Universality Predicts Trends in Test Accuracies for Very Large Pre-Trained Deep Neural Networks,
Abstract: Given two or more Deep Neural Networks (DNNs) with the same or similar
architectures, and trained on the same dataset, but trained with different
solvers, parameters, hyper-parameters, regularization, etc., can we predict
which DNN will have the best test accuracy, and can we do so without peeking at
the test data? In this paper, we show how to use a new Theory of Heavy-Tailed
Self-Regularization (HT-SR) to answer this. HT-SR suggests, among other things,
that modern DNNs exhibit what we call Heavy-Tailed Mechanistic Universality
(HT-MU), meaning that the correlations in the layer weight matrices can be fit
to a power law with exponents that lie in common Universality classes from
Heavy-Tailed Random Matrix Theory (HT-RMT). From this, we develop a Universal
capacity control metric that is a weighted average of these PL exponents.
Rather than considering small toy NNs, we examine over 50 different,
large-scale pre-trained DNNs, ranging over 15 different architectures, trained
on ImageNet, each of which has been reported to have different test
accuracies. We show that this new capacity metric correlates very well with the
reported test accuracies of these DNNs, looking across each architecture
(VGG16/.../VGG19, ResNet10/.../ResNet152, etc.). We also show how to
approximate the metric by the more familiar Product Norm capacity measure, as
the average of the log Frobenius norm of the layer weight matrices. Our
approach requires no changes to the underlying DNN or its loss function, it
does not require us to train a model (although it could be used to monitor
training), and it does not even require access to the ImageNet data. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
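A minimal sketch of the two capacity proxies mentioned above, with a simple Hill estimator standing in for the paper's power-law fitting procedure and random matrices standing in for trained weights:

```python
# Sketch: two capacity proxies for a list of layer weight matrices.
# (1) a power-law tail exponent of each layer's eigenvalue spectrum, via a
#     Hill estimator (a simple stand-in for the paper's fitting), averaged;
# (2) the average log Frobenius norm. Random weights stand in for a real net.
import numpy as np

def esd(W):
    # Eigenvalues of the correlation matrix X = W^T W / n.
    n = W.shape[0]
    return np.linalg.eigvalsh(W.T @ W / n)

def hill_alpha(eigs, k=20):
    # Hill estimator on the k largest eigenvalues.
    tail = np.sort(eigs)[-k:]
    return 1.0 + k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(0)
layers = [rng.standard_normal((512, 512)),
          rng.standard_normal((512, 256)),
          rng.standard_normal((256, 128))]

alphas = [hill_alpha(esd(W)) for W in layers]
log_norms = [np.log(np.linalg.norm(W)) for W in layers]
print("mean PL exponent :", np.mean(alphas))
print("mean log ||W||_F :", np.mean(log_norms))
```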
Title: Atmospheric Circulation and Cloud Evolution on the Highly Eccentric Extrasolar Planet HD 80606b,
Abstract: Observations of the highly-eccentric (e~0.9) hot-Jupiter HD 80606b with
Spitzer have provided some of the best probes of the physics at work in exoplanet
atmospheres. By observing HD 80606b during its periapse passage, atmospheric
radiative, advective, and chemical timescales can be directly measured and used
to constrain fundamental planetary properties such as rotation period, tidal
dissipation rate, and atmospheric composition (including aerosols). Here we
present three-dimensional general circulation models for HD 80606b that aim to
further explore the atmospheric physics shaping HD 80606b's observed Spitzer
phase curves. We find that our models that assume a planetary rotation period
twice that of the pseudo-synchronous rotation period best reproduce the phase
variations observed for HD~80606b near periapse passage with Spitzer.
Additionally, we find that the rapid formation/dissipation and vertical
transport of clouds in HD 80606b's atmosphere near periapse passage likely
shapes its observed phase variations. We predict that observations near
periapse passage at visible wavelengths could constrain the composition and
formation/advection timescales of the dominant cloud species in HD 80606b's
atmosphere. The time-variable forcing experienced by exoplanets on eccentric
orbits provides a unique and important window on radiative, dynamical, and
chemical processes in planetary atmospheres and an important link between
exoplanet observations and theory. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering,
Abstract: Version information plays an important role in spreadsheet understanding,
maintaining and quality improving. However, end users rarely use version
control tools to document spreadsheet version information. Thus, the
spreadsheet version information is missing, and different versions of a
spreadsheet coexist as individual and similar spreadsheets. Existing approaches
try to recover spreadsheet version information through clustering these similar
spreadsheets based on spreadsheet filenames or related email conversation.
However, the applicability and accuracy of existing clustering approaches are
limited because the necessary information (e.g., filenames and email
conversations) is usually missing. We inspected the versioned spreadsheets in
VEnron, which is extracted from the Enron Corporation. In VEnron, the different
versions of a spreadsheet are clustered into an evolution group. We observed
that the versioned spreadsheets in each evolution group exhibit certain common
features (e.g., similar table headers and worksheet names). Based on this
observation, we proposed an automatic clustering algorithm, SpreadCluster.
SpreadCluster learns the criteria of features from the versioned spreadsheets
in VEnron, and then automatically clusters spreadsheets with the similar
features into the same evolution group. We applied SpreadCluster on all
spreadsheets in the Enron corpus. The evaluation result shows that
SpreadCluster could cluster spreadsheets with higher precision and recall rate
than the filename-based approach used by VEnron. Based on the clustering result
by SpreadCluster, we further created a new versioned spreadsheet corpus
VEnron2, which is much bigger than VEnron. We also applied SpreadCluster on the
other two spreadsheet corpora FUSE and EUSES. The results show that
SpreadCluster can cluster the versioned spreadsheets in these two corpora with
high precision. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Field-induced coexistence of $s_{++}$ and $s_{\pm}$ superconducting states in dirty multiband superconductors,
Abstract: In multiband systems, such as iron-based superconductors, the superconducting
states with locking and anti-locking of the interband phase differences, are
usually considered as mutually exclusive. For example, a dirty two-band system
with interband impurity scattering undergoes a sharp crossover between the
$s_{\pm}$ state (which favors phase anti-locking) and the $s_{++}$ state (which
favors phase locking). We discuss here that the situation can be much more
complex in the presence of an external field or superconducting currents. In an
external applied magnetic field, dirty two-band superconductors do not feature
a sharp $s_{\pm}\to s_{++}$ crossover but rather a washed-out crossover to a
finite region in the parameter space where both $s_{\pm}$ and $s_{++}$ states
can coexist for example as a lattice or a microemulsion of inclusions of
different states. The current-carrying regions such as the regions near vortex
cores can exhibit an $s_\pm$ state while it is the $s_{++}$ state that is
favored in the bulk. This coexistence of both states can even be realized in
the Meissner state at the domain's boundaries featuring Meissner currents. We
demonstrate that there is a magnetic-field-driven crossover between the pure
$s_{\pm}$ and the $s_{++}$ states. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Journal of Open Source Software (JOSS): design and first-year review,
Abstract: This article describes the motivation, design, and progress of the Journal of
Open Source Software (JOSS). JOSS is a free and open-access journal that
publishes articles describing research software. It has the dual goals of
improving the quality of the software submitted and providing a mechanism for
research software developers to receive credit. While designed to work within
the current merit system of science, JOSS addresses the dearth of rewards for
key contributions to science made in the form of software. JOSS publishes
articles that encapsulate scholarship contained in the software itself, and its
rigorous peer review targets the software components: functionality,
documentation, tests, continuous integration, and the license. A JOSS article
contains an abstract describing the purpose and functionality of the software,
references, and a link to the software archive. The article is the entry point
of a JOSS submission, which encompasses the full set of software artifacts.
Submission and review proceed in the open, on GitHub. Editors, reviewers, and
authors work collaboratively and openly. Unlike other journals, JOSS does not
reject articles requiring major revision; while not yet accepted, articles
remain visible and under review until the authors make adequate changes (or
withdraw, if unable to meet requirements). Once an article is accepted, JOSS
gives it a DOI, deposits its metadata in Crossref, and the article can begin
collecting citations on indexers like Google Scholar and other services.
Authors retain copyright of their JOSS article, releasing it under a Creative
Commons Attribution 4.0 International License. In its first year, starting in
May 2016, JOSS published 111 articles, with more than 40 additional articles
under review. JOSS is a sponsored project of the nonprofit organization
NumFOCUS and is an affiliate of the Open Source Initiative. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Sample Complexity of Estimating the Policy Gradient for Nearly Deterministic Dynamical Systems,
Abstract: Reinforcement learning is a promising approach to learning robot controllers.
It has recently been shown that algorithms based on finite-difference estimates
of the policy gradient are competitive with algorithms based on the policy
gradient theorem. We propose a theoretical framework for understanding this
phenomenon. Our key insight is that many dynamical systems (especially those of
interest in robot control tasks) are \emph{nearly deterministic}---i.e., they
can be modeled as a deterministic system with a small stochastic perturbation.
We show that for such systems, finite-difference estimates of the policy
gradient can have substantially lower variance than estimates based on the
policy gradient theorem. We interpret these results in the context of
counterfactual estimation. Finally, we empirically evaluate our insights in an
experiment on the inverted pendulum. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
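The contrast the abstract draws can be reproduced on a toy one-step system of our own design: compare the spread of a central finite-difference gradient estimate against a likelihood-ratio (policy-gradient-theorem-style) estimate when the dynamics noise eps is small.

```python
# Toy comparison of two policy-gradient estimators on a nearly deterministic
# one-step system: x = a + eps * w, reward r = -x^2. Finite differences
# perturb the policy parameter directly; the likelihood-ratio (REINFORCE)
# estimator needs policy noise sigma. System and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
theta, eps, delta, sigma, n = 0.5, 0.01, 0.1, 0.1, 20000

def reward(a):
    return -(a + eps * rng.standard_normal(a.shape)) ** 2

# Central finite-difference estimates (one noisy rollout per perturbation).
fd = (reward(np.full(n, theta + delta))
      - reward(np.full(n, theta - delta))) / (2 * delta)

# Likelihood-ratio estimates with Gaussian exploration a ~ N(theta, sigma^2).
a = theta + sigma * rng.standard_normal(n)
lr = (a - theta) / sigma**2 * reward(a)

print(f"true gradient : {-2 * theta:.3f}")
print(f"FD  mean {fd.mean():+.3f}  std {fd.std():.3f}")
print(f"LR  mean {lr.mean():+.3f}  std {lr.std():.3f}")
```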
Title: NDSHA: robust and reliable seismic hazard assessment,
Abstract: The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method reliably and
realistically simulates the suite of earthquake ground motions that may impact
civil populations as well as their heritage buildings. The modeling technique
is developed from comprehensive physical knowledge of the seismic source
process, the propagation of earthquake waves and their combined interactions
with site effects. NDSHA effectively accounts for the tensor nature of
earthquake ground motions formally described as the tensor product of the
earthquake source functions and the Green Functions of the pathway. NDSHA uses
all available information about the spatial distribution of large magnitude
earthquakes, including the Maximum Credible Earthquake (MCE) and geological and
geophysical data. It does not rely on scalar empirical ground motion
attenuation models, as these are often both weakly constrained by available
observations and unable to account for the tensor nature of earthquake ground
motion. Standard NDSHA provides robust and safely conservative hazard estimates
for engineering design and mitigation decision strategies without requiring
(often faulty) assumptions about the probabilistic risk analysis model of
earthquake occurrence. If specific applications may benefit from temporal
information the definition of the Gutenberg-Richter (GR) relation is performed
according to the multi-scale seismicity model and occurrence rate is associated
to each modeled source. Observations from recent destructive earthquakes in
Italy and Nepal have confirmed the validity of NDSHA approach and application,
and suggest that more widespread application of NDSHA will enhance earthquake
safety and resilience of civil populations in all earthquake-prone regions,
especially in tectonically active areas where the historic earthquake record is
too short. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Earth Sciences"
] |
Title: A note on degenerate Stirling polynomials of the second kind,
Abstract: In this paper, we consider the degenerate Stirling polynomials of the second
kind which are derived from the generating function. In addition, we give some
new identities for these polynomials. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Active learning machine learns to create new quantum experiments,
Abstract: How useful can machine learning be in a quantum laboratory? Here we raise the
question of the potential of intelligent machines in the context of scientific
research. A major motivation for the present work is the unknown reachability
of various entanglement classes in quantum experiments. We investigate this
question by using the projective simulation model, a physics-oriented approach
to artificial intelligence. In our approach, the projective simulation system
is challenged to design complex photonic quantum experiments that produce
high-dimensional entangled multiphoton states, which are of high interest in
modern quantum experiments. The artificial intelligence system learns to create
a variety of entangled states, and improves the efficiency of their
realization. In the process, the system autonomously (re)discovers experimental
techniques which are only now becoming standard in modern quantum optical
experiments - a trait which was not explicitly demanded from the system but
emerged through the process of learning. Such features highlight the
possibility that machines could have a significantly more creative role in
future research. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: The Gaia-ESO Survey: radial distribution of abundances in the Galactic disc from open clusters and young field stars,
Abstract: The spatial distribution of elemental abundances in the disc of our Galaxy
gives insights both on its assembly process and subsequent evolution, and on
the stellar nucleosynthesis of the different elements. Gradients can be traced
using several types of objects as, for instance, (young and old) stars, open
clusters, HII regions, planetary nebulae. We aim at tracing the radial
distributions of abundances of elements produced through different
nucleosynthetic channels -the alpha-elements O, Mg, Si, Ca and Ti, and the
iron-peak elements Fe, Cr, Ni and Sc - by using the Gaia-ESO idr4 results of
open clusters and young field stars. From the UVES spectra of member stars, we
determine the average composition of clusters with ages >0.1 Gyr. We derive
statistical ages and distances of field stars. We trace the abundance gradients
using the cluster and field populations and we compare them with a
chemo-dynamical Galactic evolutionary model. Results. The adopted
chemo-dynamical model, with the new generation of metallicity-dependent stellar
yields for massive stars, is able to reproduce the observed spatial
distributions of abundance ratios, in particular the abundance ratios of [O/Fe]
and [Mg/Fe] in the inner disc (5 kpc < R$_{\rm GC}$ < 7 kpc), whose differences
were usually poorly explained by chemical evolution models. Often, oxygen and
magnesium are considered as equivalent in tracing alpha-element abundances and
in deducing, e.g., the formation time-scales of different Galactic stellar
populations. In addition, often [alpha/Fe] is computed combining several
alpha-elements. Our results indicate, as expected, a complex and diverse
nucleosynthesis of the various alpha-elements, in particular in the high
metallicity regimes, pointing towards a different origin of these elements and
highlighting the risk of considering them as a single class with common
features. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: In-situ Optical Characterization of Noble Metal Thin Film Deposition and Development of a High-performance Plasmonic Sensor,
Abstract: The work presented in this thesis introduces, for the first time, the
use of tilted fiber Bragg grating (TFBG) sensors for accurate, real-time, and
in-situ characterization of CVD and ALD processes for noble metals, but with a
particular focus on gold due to its desirable optical and plasmonic properties.
Through the use of orthogonally-polarized transverse electric (TE) and
transverse magnetic (TM) resonance modes imposed by a boundary condition at the
cladding-metal interface of the optical fiber, polarization-dependent
resonances excited by the TFBG are easily decoupled. It was found that for
ultrathin thicknesses of gold films from CVD (~6-65 nm), the anisotropic
property of these films made it non-trivial to characterize their effective
optical properties such as the real component of the permittivity.
Nevertheless, the TFBG introduces a new sensing platform to the ALD and CVD
community for extremely sensitive in-situ process monitoring. We later also
demonstrate thin film growth at low (<10 cycle) numbers for the well-known
Al2O3 thermal ALD process, as well as the plasma-enhanced gold ALD process.
Finally, the use of ALD-grown gold coatings has been employed for the
development of a plasmonic TFBG-based sensor with ultimate refractometric
sensitivity (~550 nm/RIU). | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Guaranteed Sufficient Decrease for Variance Reduced Stochastic Gradient Descent,
Abstract: In this paper, we propose a novel sufficient decrease technique for variance
reduced stochastic gradient descent methods such as SAG, SVRG and SAGA. In
order to make sufficient decrease for stochastic optimization, we design a new
sufficient decrease criterion, which yields sufficient decrease versions of
variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct. We
introduce a coefficient to scale the current iterate and satisfy the sufficient
decrease property, which decides whether to shrink, expand, or move in the
opposite direction, and then give two specific update rules of the coefficient
for Lasso and ridge regression. Moreover, we analyze the convergence properties
of our algorithms for strongly convex problems, which show that both of our
algorithms attain linear convergence rates. We also provide the convergence
guarantees of our algorithms for non-strongly convex problems. Our experimental
results further verify that our algorithms achieve significantly better
performance than their counterparts. | [
1,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
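For orientation, here is a bare-bones SVRG loop for ridge regression with a sufficient-decrease scaling step grafted on; the grid line search over the coefficient is our simplified stand-in for the paper's closed-form update rules, which it does not reproduce.

```python
# Bare-bones SVRG with a "sufficient decrease" scaling step, sketched for
# ridge regression f(w) = ||Xw - y||^2 / (2n) + lam ||w||^2 / 2. The grid
# line search over c is a simplified stand-in for the paper's update rules.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, eta = 500, 10, 0.1, 0.01
X, w_true = rng.standard_normal((n, d)), rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def full_obj(w):
    return np.sum((X @ w - y) ** 2) / (2 * n) + lam * np.sum(w ** 2) / 2

def grad_i(w, i):
    return X[i] * (X[i] @ w - y[i]) + lam * w

w = np.zeros(d)
for epoch in range(30):
    w_snap = w.copy()
    mu = X.T @ (X @ w_snap - y) / n + lam * w_snap   # full gradient at snapshot
    for _ in range(n):
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_snap, i) + mu    # variance-reduced gradient
        w = w - eta * v
    # Sufficient decrease: scale the iterate by the best c on a small grid
    # (shrink, expand, or move in the opposite direction).
    c = min((-1.0, 0.9, 1.0, 1.1), key=lambda c: full_obj(c * w))
    w = c * w

w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(f"SVRG-SD objective {full_obj(w):.6f} vs optimum {full_obj(w_star):.6f}")
```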
Title: Entire holomorphic curves into projective spaces intersecting a generic hypersurface of high degree,
Abstract: In this note, we establish the following Second Main Theorem type estimate
for every entire non-algebraically degenerate holomorphic curve
$f\colon\mathbb{C}\rightarrow\mathbb{P}^n(\mathbb{C})$, in the presence of a {\sl
generic} hypersurface $D\subset\mathbb{P}^n(\mathbb{C})$ of sufficiently high
degree $d\geq 15(5n+1)n^n$: \[ T_f(r) \leq \,N_f^{[1]}(r,D) + O\big(\log T_f(r)
+ \log r \big)\parallel, \] where $T_f(r)$ and $N_f^{[1]}(r,D)$ stand for the
order function and the $1$-truncated counting function in Nevanlinna theory.
This inequality quantifies recent results on the logarithmic Green--Griffiths
conjecture. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters,
Abstract: Computing the inverse covariance matrix (or precision matrix) of large data
vectors is crucial in weak lensing (and multi-probe) analyses of the large
scale structure of the universe. Analytically computed covariances are
noise-free and hence straightforward to invert, however the model
approximations might be insufficient for the statistical precision of future
cosmological data. Estimating covariances from numerical simulations improves
on these approximations, but the sample covariance estimator is inherently
noisy, which introduces uncertainties in the error bars on cosmological
parameters and also additional scatter in their best fit values. For future
surveys, reducing both effects to an acceptable level requires an unfeasibly
large number of simulations.
In this paper we describe a way to expand the true precision matrix around a
covariance model and show how to estimate the leading order terms of this
expansion from simulations. This is especially powerful if the covariance
matrix is the sum of two contributions, $\smash{\mathbf{C} =
\mathbf{A}+\mathbf{B}}$, where $\smash{\mathbf{A}}$ is well understood
analytically and can be turned off in simulations (e.g. shape-noise for cosmic
shear) to yield a direct estimate of $\smash{\mathbf{B}}$. We test our method
in mock experiments resembling tomographic weak lensing data vectors from the
Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES
we find that $400$ N-body simulations are sufficient to achieve negligible
statistical uncertainties on parameter constraints. For LSST this is achieved
with $2400$ simulations. The standard covariance estimator would require
>$10^5$ simulations to reach a similar precision. We extend our analysis to a
DES multi-probe case finding a similar performance. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
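The expansion being described is a Neumann-type series around the analytic part: with $C = A + B$, $(A+B)^{-1} = A^{-1} - A^{-1}BA^{-1} + A^{-1}BA^{-1}BA^{-1} - \dots$, where in practice $B$ is replaced by its simulation estimate in the leading terms. A numerical sanity check with made-up matrices:

```python
# Numerical check of the precision-matrix expansion around an analytic part:
# (A + B)^{-1} = A^{-1} - A^{-1} B A^{-1} + A^{-1} B A^{-1} B A^{-1} - ...
# valid when the eigenvalues of A^{-1} B are small. Matrices are made up.
import numpy as np

rng = np.random.default_rng(0)
d = 8
A = np.diag(rng.uniform(1.0, 2.0, d))   # "analytic" part, e.g. shape noise
M = 0.05 * rng.standard_normal((d, d))
B = M @ M.T                             # small "hard" part from simulations

Ainv = np.linalg.inv(A)
exact = np.linalg.inv(A + B)
order1 = Ainv - Ainv @ B @ Ainv
order2 = order1 + Ainv @ B @ Ainv @ B @ Ainv

for name, approx in [("0th", Ainv), ("1st", order1), ("2nd", order2)]:
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"{name}-order expansion: relative error {err:.2e}")
```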
Title: Fractional quiver W-algebras,
Abstract: We introduce quiver gauge theory associated with the non-simply-laced type
fractional quiver, and define fractional quiver W-algebras by using
construction of arXiv:1512.08533 and arXiv:1608.04651 with representation of
fractional quivers. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Vortex creep at very low temperatures in single crystals of the extreme type-II superconductor Rh$_9$In$_4$S$_4$,
Abstract: We image vortex creep at very low temperatures using Scanning Tunneling
Microscopy (STM) in the superconductor Rh$_9$In$_4$S$_4$ ($T_c$=2.25 K). We
measure the superconducting gap of Rh$_9$In$_4$S$_4$, finding $\Delta\approx
0.33$meV and image a hexagonal vortex lattice up to close to H$_{c2}$,
observing slow vortex creep at temperatures as low as 150 mK. We estimate
thermal and quantum barriers for vortex motion and show that thermal
fluctuations likely cause vortex creep, in spite of being at temperatures
$T/T_c<0.1$. We study creeping vortex lattices by making images during long
times and show that the vortex lattice remains hexagonal during creep with
vortices moving along one of the high symmetry axes of the vortex lattice.
Furthermore, the creep velocity changes with the scanning window suggesting
that creep depends on the local arrangements of pinning centers. Vortices
fluctuate on small scale erratic paths, indicating that the vortex lattice
makes jumps trying different arrangements during its travel along the main
direction for creep. The images provide a visual account of how vortex lattice
motion maintains hexagonal order, while showing dynamic properties
characteristic of a glass. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Intention Games,
Abstract: Strategic interactions between competitive entities are generally considered
from the perspective of complete revelations of benefits achieved from those
interactions, in the form of public payoff functions in the announced games. In
this work, we propose a formal framework for a competitive ecosystem where each
player is permitted to deviate from publicly optimal strategies under certain
private payoffs greater than public payoffs, given that these deviations have
certain acceptable bounds as agreed by all players. We call this game theoretic
construction an Intention Game. We formally define an Intention Game, and
notions of equilibria that exist in such deviant interactions. We give an
example of a Cournot competition in a partially honest setting. We compare
Intention Games with conventional strategic form games. Finally, we give a
cryptographic use of Intention Games and a dual interpretation of this novel
framework. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Advances in Detection and Error Correction for Coherent Optical Communications: Regular, Irregular, and Spatially Coupled LDPC Code Designs,
Abstract: In this chapter, we show how the use of differential coding and the presence
of phase slips in the transmission channel affect the total achievable
information rates and capacity of a system. By means of the commonly used QPSK
modulation, we show that the use of differential coding does not decrease the
total amount of reliably conveyable information over the channel. It is a
common misconception that the use of differential coding introduces an
unavoidable differential loss. This perceived differential loss is rather a
consequence of simplified differential detection and decoding at the receiver.
Afterwards, we show how capacity-approaching coding schemes based on LDPC and
spatially coupled LDPC codes can be constructed by combining iterative
demodulation and decoding. For this, we first show how to modify the
differential decoder to account for phase slips and then how to use this
modified differential decoder to construct good LDPC codes. This construction
method can serve as a blueprint to construct good and practical LDPC codes for
other applications with iterative detection, such as higher order modulation
formats with non-square constellations, multi-dimensional optimized modulation
formats, turbo equalization to mitigate ISI (e.g., due to nonlinearities) and
many more. Finally, we introduce the class of spatially coupled (SC)-LDPC
codes, which are a generalization of LDPC codes with some outstanding
properties and which can be decoded with a very simple windowed decoder. We
show that the universal behavior of spatially coupled codes makes them an ideal
candidate for iterative differential demodulation/detection and decoding. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
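The point that differential detection localizes the damage from a phase slip can be checked in a few lines of differential QPSK; this toy script is ours, not the chapter's coding scheme.

```python
# Toy check: with differential QPSK, a single phase slip (a pi/2 rotation of
# everything after some index) corrupts only one differentially detected
# symbol, while absolute phases would be wrong for the whole tail.
import numpy as np

rng = np.random.default_rng(0)
d = rng.integers(0, 4, 50)          # information symbols (phase increments)
phases = np.cumsum(d) % 4           # differential encoding (units of pi/2)
s = np.exp(1j * np.pi / 2 * phases)

r = s.copy()
r[20:] *= np.exp(1j * np.pi / 2)    # phase slip after symbol index 20

# Differential detection: recover increments from consecutive phase differences.
diff = np.angle(r[1:] * np.conj(r[:-1])) / (np.pi / 2)
d_hat = np.rint(diff).astype(int) % 4

errors = np.flatnonzero(d_hat != d[1:])
print("differential detection errors at indices:", errors + 1)  # one error
```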
Title: Risk-Averse Classification,
Abstract: We develop a new approach to solving classification problems, which is based
on the theory of coherent measures of risk and risk-sharing ideas. The proposed
approach aims at designing a risk-averse classifier. The new approach allows
a distinct risk functional to be associated with each class. The risk may be
measured by different (non-linear in probability) measures of risk.
We analyze the structure of the new classifier design problem and establish
its theoretical relation to known risk-neutral design problems. In particular,
we show that the risk-sharing classification problem is equivalent to an
implicitly defined optimization problem with unequal, implicitly defined but
unknown, weights for each data point. We implement our methodology in a binary
classification scenario on several different data sets and carry out numerical
comparison with classifiers which are obtained using the Huber loss function
and other loss functions known in the literature. We formulate specific
risk-averse support vector machines in order to demonstrate the viability of
our method. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
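One concrete instance of a per-class risk functional is the Conditional Value-at-Risk (superquantile) of each class's hinge losses; the sketch below minimizes its Rockafellar-Uryasev form by plain subgradient descent. The risk levels, solver, and data are our own illustrative choices, not the paper's formulation.

```python
# Sketch: a linear classifier minimizing, per class, the CVaR (superquantile)
# of the hinge losses, i.e. one coherent risk measure per class.
# Rockafellar-Uryasev form: CVaR_a(L) = min_t t + E[(L - t)_+] / (1 - a).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 2
Xp = rng.standard_normal((n, d)) + 2.0   # class +1
Xm = rng.standard_normal((n, d)) - 2.0   # class -1
alphas = {+1: 0.9, -1: 0.5}              # more risk-averse on class +1

w, b = np.zeros(d), 0.0
t = {+1: 0.0, -1: 0.0}                   # RU auxiliary variables
lr = 0.01
for it in range(2000):
    gw, gb = 0.02 * w, 0.0               # small ridge term on w
    for label, Xc in [(+1, Xp), (-1, Xm)]:
        a = alphas[label]
        loss = np.maximum(0.0, 1.0 - label * (Xc @ w + b))  # hinge losses
        tail = loss > t[label]           # subgradient support of (L - t)_+
        t[label] -= lr * (1.0 - tail.mean() / (1.0 - a))
        coef = tail / ((1.0 - a) * len(loss))
        active = loss > 0.0              # hinge subgradient support
        gw += -label * ((coef * active) @ Xc)
        gb += -label * np.sum(coef * active)
    w, b = w - lr * gw, b - lr * gb

acc = np.mean(np.sign(np.vstack([Xp, Xm]) @ w + b)
              == np.r_[np.ones(n), -np.ones(n)])
print(f"training accuracy {acc:.3f}, t = {t}")
```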
Title: Zero divisor and unit elements with support of size 4 in group algebras of torsion free groups,
Abstract: Kaplansky Zero Divisor Conjecture states that if $G $ is a torsion free group
and $ \mathbb{F} $ is a field, then the group ring $\mathbb{F}[G]$ contains no
zero divisor and Kaplansky Unit Conjecture states that if $G $ is a torsion
free group and $ \mathbb{F} $ is a field, then $\mathbb{F}[G]$ contains no
non-trivial units. The support of an element $ \alpha= \sum_{x\in G}\alpha_xx$
in $\mathbb{F}[G] $, denoted by $supp(\alpha)$, is the set $ \{x \in
G|\alpha_x\neq 0\} $. In this paper we study possible zero divisors and units
with supports of size $ 4 $ in $\mathbb{F}[G]$. We prove that if
$ \alpha, \beta $ are non-zero elements in $ \mathbb{F}[G] $ for a possible
torsion free group $ G $ and an arbitrary field $ \mathbb{F} $ such that $
|supp(\alpha)|=4 $ and $ \alpha\beta=0 $, then $|supp(\beta)|\geq 7 $. In [J.
Group Theory, 16 (2013), no. 5, 667-693], it is proved that if $
\mathbb{F}=\mathbb{F}_2 $ is the field with two elements, $ G $ is a torsion
free group and $ \alpha,\beta \in \mathbb{F}_2[G]\setminus \{0\}$ such that
$|supp(\alpha)|=4 $ and $ \alpha\beta =0 $, then $|supp(\beta)|\geq 8$. We
improve the latter result to $|supp(\beta)|\geq 9$. Also, concerning the Unit
Conjecture, we prove that if $\mathsf{a}\mathsf{b}=1$ for some
$\mathsf{a},\mathsf{b}\in \mathbb{F}[G]$ and $|supp(\mathsf{a})|=4$, then
$|supp(\mathsf{b})|\geq 6$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Multimodal Word Distributions,
Abstract: Word embeddings provide point representations of words containing useful
semantic information. We introduce multimodal word distributions formed from
Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty
information. To learn these distributions, we propose an energy-based
max-margin objective. We show that the resulting approach captures uniquely
expressive semantic information, and outperforms alternatives, such as word2vec
skip-grams, and Gaussian embeddings, on benchmark datasets such as word
similarity and entailment. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
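A similarity commonly used for Gaussian-mixture embeddings of this kind is the expected likelihood kernel, which has a closed form as a sum of Gaussian overlap integrals; below is a minimal sketch with diagonal covariances and made-up parameters (the paper's energy-based max-margin training objective is not reproduced here).

```python
# Expected likelihood kernel between two Gaussian-mixture word embeddings:
# <f, g> = sum_ij p_i q_j N(mu_i - nu_j; 0, Sigma_i + Lambda_j).
# Diagonal covariances; all parameters are made up for illustration.
import numpy as np

def log_gauss(x, var):
    # log N(x; 0, diag(var)) for a vector x.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + x**2 / var)

def el_kernel(weights1, means1, vars1, weights2, means2, vars2):
    total = 0.0
    for p, m1, v1 in zip(weights1, means1, vars1):
        for q, m2, v2 in zip(weights2, means2, vars2):
            total += p * q * np.exp(log_gauss(m1 - m2, v1 + v2))
    return total

d, k = 5, 2                              # dimension, mixture components
rng = np.random.default_rng(0)
mk = lambda: (np.full(k, 1.0 / k), rng.standard_normal((k, d)),
              np.full((k, d), 0.5))
wa, ma, va = mk()                        # e.g. "rock": music / stone senses
wb, mb, vb = mk()
print("similarity(a, b) :", el_kernel(wa, ma, va, wb, mb, vb))
print("similarity(a, a) :", el_kernel(wa, ma, va, wa, ma, va))
```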
Title: Efficient Policy Learning,
Abstract: In many areas, practitioners seek to use observational data to learn a
treatment assignment policy that satisfies application-specific constraints,
such as budget, fairness, simplicity, or other functional form constraints. For
example, policies may be restricted to take the form of decision trees based on
a limited set of easily observable individual characteristics. We propose a new
approach to this problem motivated by the theory of semiparametrically
efficient estimation. Our method can be used to optimize either binary
treatments or infinitesimal nudges to continuous treatments, and can leverage
observational data where causal effects are identified using a variety of
strategies, including selection on observables and instrumental variables.
Given a doubly robust estimator of the causal effect of assigning everyone to
treatment, we develop an algorithm for choosing whom to treat, and establish
strong guarantees for the asymptotic utilitarian regret of the resulting
policy. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
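The doubly robust ingredient can be sketched directly: form AIPW scores for each arm and pick, within a small policy class (here, hypothetical threshold rules on one covariate), the rule maximizing the empirical value. The data-generating process is ours and the nuisance estimates are deliberately crude.

```python
# Sketch of policy learning from doubly robust (AIPW) scores, with a known
# randomization probability and crude outcome-regression plug-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(-1, 1, n)
e = 0.5                                 # P(W=1|x): a randomized experiment
w = rng.random(n) < e
tau = np.where(x > 0, 1.0, -1.0)        # true effect: treat only if x > 0
y = x + w * tau + rng.standard_normal(n)

# Crude outcome regressions mu_hat(w): per-arm sample means (intentionally poor).
mu1, mu0 = y[w].mean(), y[~w].mean()

# AIPW scores for "treat" and "don't treat".
gamma1 = mu1 + w * (y - mu1) / e
gamma0 = mu0 + (~w) * (y - mu0) / (1 - e)

# Policy class: threshold rules "treat iff x > c" over a grid of c.
grid = np.linspace(-1, 1, 41)
values = [np.mean(np.where(x > c, gamma1, gamma0)) for c in grid]
best = grid[int(np.argmax(values))]
print(f"learned rule: treat iff x > {best:.2f} (truth: x > 0.00)")
```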
Title: Superintegrable relativistic systems in spacetime-dependent background fields,
Abstract: We consider a relativistic charged particle in background electromagnetic
fields depending on both space and time. We identify which symmetries of the
fields automatically generate integrals (conserved quantities) of the charge
motion, accounting fully for relativistic and gauge invariance. Using this we
present new examples of superintegrable relativistic systems. This includes
examples where the integrals of motion are quadratic or nonpolynomial in the
canonical momenta. | [
0,
1,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Formalization of Transform Methods using HOL Light,
Abstract: Transform methods, like Laplace and Fourier, are frequently used for
analyzing the dynamical behaviour of engineering and physical systems, based on
their transfer function, and frequency response or the solutions of their
corresponding differential equations. In this paper, we present an ongoing
project, which focuses on the higher-order logic formalization of transform
methods using the HOL Light theorem prover. In particular, we present the
motivation of the formalization, which is followed by the related work. Next,
we present the task completed so far while highlighting some of the challenges
faced during the formalization. Finally, we present a roadmap to achieve our
objectives, the current status and the future goals for this project. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Physics"
] |
Title: Evidence of chaotic modes in the analysis of four delta Scuti stars,
Abstract: Since CoRoT observations unveiled the very low amplitude modes that form a
flat plateau in the power spectrum structure of delta Scuti stars, the nature
of this phenomenon, including the possibility of spurious signals due to the
light curve analysis, has been a matter of long-standing scientific debate. We
contribute to this debate by finding the structural parameters of a sample of
four delta Scuti stars, CID 546, CID 3619, CID 8669, and KIC 5892969, and
looking for a possible relation between these stars' structural parameters and
their power spectrum structure. For the purposes of characterization, we
developed a method of studying and analysing the power spectrum with high
precision and have applied it to both CoRoT and Kepler light curves. We obtain
the best estimates to date of these stars' structural parameters. Moreover, we
observe that the power spectrum structure depends on the inclination,
oblateness, and convective efficiency of each star. Our results suggest that
the power spectrum structure is real and is possibly formed by 2-period island
modes and chaotic modes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Analytical Representations of Divisors of Integers,
Abstract: Certain analytical expressions which "feel" the divisors of natural numbers
are investigated. We show that these expressions encode to some extent the
well-known algorithm of the sieve of Eratosthenes. Most of the text is
written in a pedagogical style; however, some formulas are new. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
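One classical example of an expression that "feels" the divisors, given here for concreteness (it is a textbook identity, not necessarily one of the paper's new formulas), counts them via the exponential-sum indicator of divisibility:

```latex
% d(n) counts the divisors of n; the inner sum is 1 if k | n and 0 otherwise,
% so the whole expression "feels" every divisor of n.
\[
  d(n) \;=\; \sum_{k=1}^{n} \frac{1}{k} \sum_{j=0}^{k-1}
             \cos\!\left(\frac{2\pi j n}{k}\right)
  \qquad\text{since}\qquad
  \frac{1}{k}\sum_{j=0}^{k-1} e^{2\pi i j n / k}
  = \begin{cases} 1, & k \mid n,\\ 0, & k \nmid n.\end{cases}
\]
```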
Title: Semiparametrical Gaussian Processes Learning of Forward Dynamical Models for Navigating in a Circular Maze,
Abstract: This paper presents a problem of model learning for the purpose of learning
how to navigate a ball to a goal state in a circular maze environment with two
degrees of freedom. The motion of the ball in the maze environment is
influenced by several non-linear effects such as dry friction and contacts,
which are difficult to model physically. We propose a semiparametric model to
estimate the motion dynamics of the ball based on Gaussian Process Regression
equipped with basis functions obtained from physics first principles. The
accuracy of this semiparametric model is shown not only in estimation but also
in prediction n steps ahead, and it is compared with standard algorithms for
model learning. The learned model is then used in a trajectory optimization
algorithm to compute ball trajectories. We propose the system presented in the
paper as a benchmark problem for reinforcement and robot learning, for its
interesting and challenging dynamics and its relative ease of reproducibility. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
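One standard way to set up such a semiparametric GP puts the physics-derived basis in the mean and fits its coefficients by generalized least squares under the GP covariance (cf. Rasmussen and Williams, Sec. 2.7); the basis below is a made-up stand-in for first-principles features of the maze dynamics.

```python
# Semiparametric GP regression: y = phi(x)^T beta + f(x), f ~ GP(0, k_RBF).
# beta is fit by generalized least squares; the GP models the residual.
# The "physics" basis phi is a hypothetical stand-in.
import numpy as np

def rbf(A, B, ell=0.3, sf=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def phi(X):
    # Hypothetical physics basis: [1, x, x^2] per input dimension.
    return np.hstack([np.ones((len(X), 1)), X, X**2])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 1))
y = 2.0 * X[:, 0]**2 + 0.3 * np.sin(8 * X[:, 0]) + 0.05 * rng.standard_normal(60)

Ky = rbf(X, X) + 0.05**2 * np.eye(len(X))
Phi = phi(X)
A = np.linalg.solve(Ky, Phi)
beta = np.linalg.solve(Phi.T @ A, A.T @ y)   # GLS fit of the parametric part

Xs = np.linspace(-1, 1, 5)[:, None]
mean = phi(Xs) @ beta + rbf(Xs, X) @ np.linalg.solve(Ky, y - Phi @ beta)
print("beta =", np.round(beta, 2))           # should pick up the x^2 term
print("predictions:", np.round(mean, 2))
```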
Title: Non-Generic Unramified Representations in Metaplectic Covering Groups,
Abstract: Let $G^{(r)}$ denote the metaplectic covering group of the linear algebraic
group $G$. In this paper we study conditions on unramified representations of
the group $G^{(r)}$ not to have a nonzero Whittaker function. We state a
general Conjecture about the possible unramified characters $\chi$ such that
the unramified sub-representation of
$Ind_{B^{(r)}}^{G^{(r)}}\chi\delta_B^{1/2}$ will have no nonzero Whittaker
function. We prove this Conjecture for the groups $GL_n^{(r)}$ with $r\ge n-1$,
and for the exceptional groups $G_2^{(r)}$ when $r\ne 2$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Implications of right-handed neutrinos in $B-L$ extended standard model with scalar dark matter,
Abstract: We investigate the Standard Model (SM) with a $U(1)_{B-L}$ gauge extension
where a $B-L$ charged scalar is a viable dark matter (DM) candidate. The
dominant annihilation process, for the DM particle is through the $B-L$
symmetry breaking scalar to right-handed neutrino pair. We exploit the effect
of decay and inverse decay of the right-handed neutrino in thermal relic
abundance of the DM. Depending on the values of the decay rate, the DM relic
density can be significantly different from what is obtained in the standard
calculation assuming the right-handed neutrino is in thermal equilibrium and
there appear different regions of the parameter space satisfying the observed
DM relic density. For a DM mass less than $\mathcal{O}$(TeV), the direct
detection experiments impose a competitive bound on the mass of the
$U(1)_{B-L}$ gauge boson $Z^\prime$ with the collider experiments. Utilizing
the non-observation of the displaced vertices arising from the right-handed
neutrino decays, bound on the mass of $Z^\prime$ has been obtained at present
and higher luminosities at the LHC with 14 TeV centre of mass energy where an
integrated luminosity of 100 fb$^{-1}$ is sufficient to probe $m_{Z'} \sim 5.5$
TeV. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Radiation hardness of small-pitch 3D pixel sensors up to HL-LHC fluences,
Abstract: A new generation of 3D silicon pixel detectors with a small pixel size of
50$\times$50 and 25$\times$100 $\mu$m$^{2}$ is being developed for the HL-LHC
tracker upgrades. The radiation hardness of such detectors was studied in beam
tests after irradiation to HL-LHC fluences up to $1.4\times10^{16}$
n$_{\mathrm{eq}}$/cm$^2$. At this fluence, an operation voltage of only 100 V
is needed to achieve 97% hit efficiency, with a power dissipation of 13
mW/cm$^2$ at -25$^{\circ}$C, considerably lower than for previous 3D sensor
generations and planar sensors. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Optimized State Space Grids for Abstractions,
Abstract: The practical impact of abstraction-based controller synthesis methods is
currently limited by the immense computational effort for obtaining
abstractions. In this note we focus on a recently proposed method to compute
abstractions whose state space is a cover of the state space of the plant by
congruent hyper-intervals. The problem of how to choose the size of the
hyper-intervals so as to obtain computable and useful abstractions is unsolved.
This note provides a twofold contribution towards a solution. Firstly, we
present a functional to predict the computational effort for the abstraction to
be computed. Secondly, we propose a method for choosing the aspect ratio of the
hyper-intervals when their volume is fixed. More precisely, we propose to
choose the aspect ratio so as to minimize a predicted number of transitions of
the abstraction to be computed, in order to reduce the computational effort. To
this end, we derive a functional to predict the number of transitions in
dependence of the aspect ratio. The functional is to be minimized subject to
suitable constraints. We characterize the unique solvability of the respective
optimization problem and prove that it transforms, under appropriate
assumptions, into an equivalent convex problem with strictly convex objective.
The latter problem can then be globally solved using standard numerical
methods. We demonstrate our approach on an example. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Dictionary Learning and Sparse Coding-based Denoising for High-Resolution Task Functional Connectivity MRI Analysis,
Abstract: We propose a novel denoising framework for task functional Magnetic Resonance
Imaging (tfMRI) data to delineate the high-resolution spatial pattern of the
brain functional connectivity via dictionary learning and sparse coding (DLSC).
In order to address the limitations of the unsupervised DLSC-based fMRI
studies, we utilize the prior knowledge of task paradigm in the learning step
to train a data-driven dictionary and to model the sparse representation. We
apply the proposed DLSC-based method to Human Connectome Project (HCP) motor
tfMRI dataset. Studies on the functional connectivity of cerebrocerebellar
circuits in somatomotor networks show that the DLSC-based denoising framework
can significantly improve the prominent connectivity patterns, in comparison to
the temporal non-local means (tNLM)-based denoising method as well as the case
without denoising; the recovered patterns are consistent and neuroscientifically
meaningful within the motor area. The promising results show that the proposed method can
provide an important foundation for the high-resolution functional connectivity
analysis, and provide a better approach for fMRI preprocessing. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Optimal control of a Vlasov-Poisson plasma by an external magnetic field - The basics for variational calculus,
Abstract: We consider the three dimensional Vlasov-Poisson system that is equipped with
an external magnetic field to describe a plasma. The aim of various concrete
applications is to control a plasma in a desired fashion. This can be modeled
by an optimal control problem. For that reason the basics for calculus of
variations will be introduced in this paper. We have to find a suitable class
of fields that are admissible for this procedure as they provide unique global
solutions of the Vlasov-Poisson system. Then we can define a field-state
operator that maps any admissible field onto its corresponding distribution
function. We will show that this field-state operator is Lipschitz continuous
and (weakly) compact. Last we will consider a model problem with a tracking
type cost functional and we will show that this optimal control problem has at
least one globally optimal solution. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Lensless Photography with only an image sensor,
Abstract: Photography usually requires optics in conjunction with a recording device
(an image sensor). Eliminating the optics could lead to new form factors for
cameras. Here, we report a simple demonstration of imaging using a bare CMOS
sensor that utilizes computation. The technique relies on the space variant
point-spread functions resulting from the interaction of a point source in the
field of view with the image sensor. These space-variant point-spread functions
are combined with a reconstruction algorithm in order to image simple objects
displayed on a discrete LED array as well as on an LCD screen. We extended the
approach to video imaging at the native frame rate of the sensor. Finally, we
performed experiments to analyze the parametric impact of the object distance.
Improving the sensor designs and reconstruction algorithms can lead to useful
cameras without optics. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
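In its simplest form, the reconstruction described above is a regularized linear inverse problem: stack the calibrated space-variant responses as columns of a matrix and solve for the scene by ridge-regularized least squares. All sizes and the noise level below are invented for illustration.

```python
# Toy lensless-imaging reconstruction: the sensor reading is b = A x + noise,
# where column j of A is the calibrated sensor response (space-variant PSF)
# to LED j. Recover x by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_led = 400, 64
A = rng.random((n_pix, n_led))         # calibrated per-LED sensor responses
x_true = np.zeros(n_led)
x_true[[5, 20, 41]] = 1.0              # a simple object on the LED array
b = A @ x_true + 0.01 * rng.standard_normal(n_pix)

lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_led), A.T @ b)
print("brightest recovered LEDs:", np.argsort(x_hat)[-3:][::-1])
```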
Title: The evolution of the temperature field during cavity collapse in liquid nitromethane. Part II: Reactive case,
Abstract: We study the effect of cavity collapse in non-ideal explosives as a means of
controlling their sensitivity. The main aim is to understand the origin of
localised temperature peaks (hot spots) that play a leading order role at early
ignition stages. Thus, we perform 2D and 3D numerical simulations of shock
induced single gas-cavity collapse in nitromethane. Ignition is the result of a
complex interplay between fluid dynamics and exothermic chemical reaction. In
part I of this work we focused on the hydrodynamic effects in the collapse
process by switching off the reaction terms in the mathematical model. Here, we
reinstate the reactive terms and study the collapse of the cavity in the
presence of chemical reactions. We use a multi-phase formulation which
overcomes current challenges of cavity collapse modelling in reactive media to
obtain oscillation-free temperature fields across material interfaces to allow
the use of a temperature-based reaction rate law. The mathematical and physical
models are validated against experimental and analytic data. We identify which
of the previously-determined (in part I of this work) high-temperature regions
lead to ignition and comment on their reactive strength and reaction growth
rate. We quantify the sensitisation of nitromethane by the collapse of the
cavity by comparing ignition times of neat and single-cavity material; the
ignition occurs in less than half the ignition time of the neat material. We
compare 2D and 3D simulations to examine the change in topology, temperature
and reactive strength of the hot spots by the third dimension. It is apparent
that belated ignition times can be avoided by the use of 3D simulations. The
effect of the chemical reactions on the topology and strength of the hot spots
in the timescales considered is studied by comparing inert and reactive
simulations and examine maximum temperature fields and their growth rates. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Quantile Regression for Qualifying Match of GEFCom2017 Probabilistic Load Forecasting,
Abstract: We present a simple quantile regression-based forecasting method that was
applied in a probabilistic load forecasting framework of the Global Energy
Forecasting Competition 2017 (GEFCom2017). The hourly load data is log
transformed and split into a long-term trend component and a remainder term.
The key forecasting element is the quantile regression approach for the
remainder term that takes into account weekly and annual seasonalities as well
as their interactions. Temperature information is only used to stabilize the
forecast of the long-term trend component. Public holiday information is
ignored. Still, the forecasting method placed second in the open data track and
fourth in the definite data track, which is remarkable given the simplicity of
the model. The method also outperforms the Vanilla benchmark consistently. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
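A minimal version of the key element, quantile regression of the remainder on seasonal features, can be written with statsmodels; the Fourier features below are a simplified stand-in for the competition model's feature set.

```python
# Minimal quantile-regression forecast of a load "remainder" term with
# statsmodels. The Fourier features for weekly/annual seasonality are a
# simplified stand-in for the competition model's features.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 52)
weekly = 2 * np.pi * hours / (24 * 7)
annual = 2 * np.pi * hours / (24 * 365)
y = (1.0 + 0.5 * np.sin(weekly) + 0.2 * np.sin(annual)
     + (0.2 + 0.1 * np.cos(weekly)) * rng.standard_normal(len(hours)))

X = sm.add_constant(np.column_stack([np.sin(weekly), np.cos(weekly),
                                     np.sin(annual), np.cos(annual)]))
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, X).fit(q=q)     # minimizes the pinball loss at q
    print(f"q={q}: first fitted value {res.fittedvalues[0]:.3f}")
```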
Title: Sparse Named Entity Classification using Factorization Machines,
Abstract: Named entity classification is the task of classifying text-based elements
into various categories, including places, names, dates, times, and monetary
values. A bottleneck in named entity classification, however, is the data
problem of sparseness, because new named entities continually emerge, making it
rather difficult to maintain a dictionary for named entity classification.
Thus, in this paper, we address the problem of named entity classification
using matrix factorization to overcome the problem of feature sparsity.
Experimental results show that our proposed model, with fewer features and a
smaller size, achieves competitive accuracy to state-of-the-art models. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Exploration of Large Networks with Covariates via Fast and Universal Latent Space Model Fitting,
Abstract: Latent space models are effective tools for statistical modeling and
exploration of network data. These models can effectively model real world
network characteristics such as degree heterogeneity, transitivity, homophily,
etc. Due to their close connection to generalized linear models, it is also
natural to incorporate covariate information in them. The current paper
presents two universal fitting algorithms for networks with edge covariates:
one based on nuclear norm penalization and the other based on projected
gradient descent. Both algorithms are motivated by maximizing likelihood for a
special class of inner-product models while working simultaneously for a wide
range of different latent space models, such as distance models, which allow
latent vectors to affect edge formation in flexible ways. These fitting
methods, especially the one based on projected gradient descent, are fast and
scalable to large networks. We obtain their rates of convergence for both
inner-product models and beyond. The effectiveness of the modeling approach and
fitting algorithms is demonstrated on five real world network datasets for
different statistical tasks, including community detection with and without
edge covariates, and network assisted learning. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
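The projected-gradient fitting idea for the inner-product model can be sketched compactly: ascend the Bernoulli log-likelihood of $\Theta_{ij} = \alpha_i + \alpha_j + z_i^T z_j$, with a column re-centering of $Z$ each step as a simple stand-in for the paper's projection step.

```python
# Sketch: fitting the inner-product latent space model
#   logit P(A_ij = 1) = alpha_i + alpha_j + z_i^T z_j
# by gradient ascent on the Bernoulli log-likelihood, with column
# re-centering of Z standing in for the projection step.
import numpy as np

rng = np.random.default_rng(0)
n, k, lr = 150, 2, 0.02

Z0 = rng.standard_normal((n, k))
a0 = rng.normal(-1.0, 0.3, n)
theta0 = a0[:, None] + a0[None, :] + Z0 @ Z0.T
A = (rng.random((n, n)) < 1 / (1 + np.exp(-theta0))).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops

Z, a = 0.1 * rng.standard_normal((n, k)), np.zeros(n)
for it in range(300):
    theta = a[:, None] + a[None, :] + Z @ Z.T
    R = A - 1 / (1 + np.exp(-theta))         # residual A - P
    np.fill_diagonal(R, 0.0)                 # ignore self-loops
    a += lr * R.sum(axis=1) / n
    Z += lr * (R @ Z) / n
    Z -= Z.mean(axis=0)                      # "projection": center columns

theta = a[:, None] + a[None, :] + Z @ Z.T
ll = np.sum(np.triu(A * theta - np.logaddexp(0, theta), 1))
print(f"final Bernoulli log-likelihood (upper triangle): {ll:.1f}")
```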
Title: Relationship Maintenance in Software Language Repositories,
Abstract: The context of this research is testing and building software systems and,
specifically, software language repositories (SLRs), i.e., repositories with
components for language processing (interpreters, translators, analyzers,
transformers, pretty printers, etc.). SLRs are typically set up for developing
and using metaprogramming systems, language workbenches, language definition
frameworks, executable semantic frameworks, and modeling frameworks. This work
is an inquiry into testing and building SLRs in a manner that the repository is
seen as a collection of language-typed artifacts being related by the
applications of language-typed functions or relations which serve language
processing. The notion of language is used in a broad sense to include text-,
tree-, graph-based languages as well as representations based on interchange
formats and also proprietary formats for serialization. The overall approach
underlying this research is one of language design driven by a complex case
study, i.e., a specific SLR with a significant number of processed languages
and language processors as well as a noteworthy heterogeneity in terms of
representation types and implementation languages. The knowledge gained by our
research is best understood as a declarative language design for regression
testing and build management: we introduce a corresponding language, Ueber,
with an executable semantics that maintains relationships between language-typed
artifacts in an SLR. The grounding of the reported research is based on the
comprehensive, formal, executable (logic programming-based) definition of the
Ueber language and its systematic application to the management of the SLR YAS
which consists of hundreds of language definition and processing components
(such as interpreters and transformations) for more than thirty languages (not
counting different representation types) with Prolog, Haskell, Java, and Python
being used as implementation languages. The importance of this work follows
from the significant costs implied by regression testing and build management
and also from the complexity of SLRs which calls for means to help with
understanding. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
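The core idea above, a repository maintained as language-typed artifacts related by language-typed functions, can be illustrated with a toy registry that re-applies each derivation and reports stale outputs, regression-test style. This is a loose Python analogue of the described design, not the (logic programming-based) Ueber language itself; all names are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Artifact:
    path: str
    language: str    # e.g. "text", "tree", "graph", or a serialization format
    content: str

@dataclass
class Repo:
    artifacts: Dict[str, Artifact] = field(default_factory=dict)
    # Each relation records: function name, input paths, output path, function.
    relations: List[Tuple[str, List[str], str, Callable[..., str]]] = \
        field(default_factory=list)

    def relate(self, name, inputs, output, fn):
        """Declare that `output` is derived from `inputs` by `fn`."""
        self.relations.append((name, inputs, output, fn))

    def check(self):
        """Re-apply every derivation; report outputs whose stored content
        no longer matches, i.e. artifacts needing a rebuild or failing a
        regression test."""
        stale = []
        for name, inputs, output, fn in self.relations:
            actual = fn(*(self.artifacts[p].content for p in inputs))
            if actual != self.artifacts[output].content:
                stale.append((name, output))
        return stale
```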
Title: The Emptiness Problem for Valence Automata over Graph Monoids,
Abstract: This work studies which storage mechanisms in automata permit decidability of
the emptiness problem. The question is formalized using valence automata, an
abstract model of automata in which the storage mechanism is given by a monoid.
For each of a variety of storage mechanisms, one can choose a (typically
infinite) monoid $M$ such that valence automata over $M$ are equivalent to
(one-way) automata with this type of storage. In fact, many important storage
mechanisms can be realized by monoids defined by finite graphs, called graph
monoids. Examples include pushdown stacks, partially blind counters (which
behave like Petri net places), blind counters (which may attain negative
values), and combinations thereof.
Hence, we study for which graph monoids the emptiness problem for valence
automata is decidable. A particular model realized by graph monoids is that of
Petri nets with a pushdown stack. For these, decidability is a long-standing
open question and we do not answer it here.
However, if one excludes subgraphs corresponding to this model, a
characterization can be achieved. Moreover, we provide a description of those
storage mechanisms for which decidability remains open. This leads to a model
that naturally generalizes both pushdown Petri nets and the priority
multicounter machines introduced by Reinhardt.
The cases that are proven decidable constitute a natural and apparently new
extension of Petri nets with decidable reachability. It is finally shown that
this model can be combined with another such extension by Atig and Ganty: We
present a further decidability result that subsumes both of these Petri net
extensions. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
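To make the valence-automaton model concrete for one storage mechanism, the bounded search below treats the monoid (Z, +), i.e. a single blind counter that may go negative but must return to 0 on acceptance. This is only an illustrative semi-decision sketch under an explicit step bound, emphatically not the decidability construction of the paper:

```python
from collections import deque

def nonempty_within(transitions, start, finals, bound):
    """Breadth-first search over configurations (state, counter value) of a
    valence automaton over (Z, +). `transitions` maps a state to a list of
    (next_state, delta) pairs; acceptance requires a final state with
    counter value exactly 0."""
    seen = {(start, 0)}
    queue = deque([(start, 0, 0)])           # (state, counter, steps taken)
    while queue:
        q, c, steps = queue.popleft()
        if q in finals and c == 0:
            return True                      # accepting run found
        if steps == bound:
            continue                         # step bound reached on this branch
        for q2, d in transitions.get(q, []):
            if (q2, c + d) not in seen:
                seen.add((q2, c + d))
                queue.append((q2, c + d, steps + 1))
    return False                             # no witness within the bound
```

For a partially blind counter (a Petri net place) one would additionally require c + d >= 0 at every step, which is exactly the kind of distinction the graph-monoid framework captures uniformly.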
Title: Near-Optimal Discrete Optimization for Experimental Design: A Regret Minimization Approach,
Abstract: The experimental design problem concerns the selection of k points from a
potentially large design pool of p-dimensional vectors, so as to maximize the
statistical efficiency of regression on the selected k design points. Statistical
efficiency is measured by optimality criteria, including A(verage),
D(eterminant), T(race), E(igen), V(ariance) and G-optimality. Except for the
T-optimality, exact optimization is NP-hard.
We propose a polynomial-time regret minimization framework to achieve a
$(1+\varepsilon)$ approximation with only $O(p/\varepsilon^2)$ design points,
for all the optimality criteria above.
In contrast, to the best of our knowledge, no polynomial-time algorithm prior
to our work achieves $(1+\varepsilon)$ approximations for D/E/G-optimality, and
the best poly-time algorithm achieving a $(1+\varepsilon)$-approximation for
A/V-optimality requires $k = \Omega(p^2/\varepsilon)$ design points. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
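As a worked illustration of the optimality criteria named above, the helper below scores a candidate design S against the information matrix Sigma = X_S^T X_S. It is a scoring utility only, under assumed conventions for which direction is "better"; the regret-minimization selection algorithm itself is not reproduced here:

```python
import numpy as np

def design_objectives(X, S):
    """Score a design S (row indices into the pool X, shape (n, p)).
    Assumes |S| >= p so that Sigma is invertible.
    A-optimality: minimize trace(Sigma^{-1});  D: maximize det(Sigma);
    E: maximize the smallest eigenvalue;       T: maximize trace(Sigma)."""
    Sigma = X[S].T @ X[S]
    eig = np.linalg.eigvalsh(Sigma)           # Sigma is symmetric PSD
    return {
        "A": np.trace(np.linalg.inv(Sigma)),  # smaller is better
        "D": float(np.prod(eig)),             # larger is better
        "E": float(eig.min()),                # larger is better
        "T": float(np.trace(Sigma)),          # larger is better
    }
```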
Title: A State-Space Approach to Dynamic Nonnegative Matrix Factorization,
Abstract: Nonnegative matrix factorization (NMF) has been actively investigated and
used in a wide range of problems in the past decade. A significant amount of
attention has been given to developing NMF algorithms that are suitable for modeling
time series with strong temporal dependencies. In this paper, we propose a
novel state-space approach to perform dynamic NMF (D-NMF). In the proposed
probabilistic framework, the NMF coefficients act as the state variables and
their dynamics are modeled using a multi-lag nonnegative vector autoregressive
(N-VAR) model within the process equation. We use expectation maximization and
propose a maximum-likelihood estimation framework to estimate the basis matrix
and the N-VAR model parameters. Interestingly, the N-VAR model parameters are
obtained by simply applying NMF. Moreover, we derive a maximum a posteriori
estimate of the state variables (i.e., the NMF coefficients) that is based on a
prediction step and an update step, similar to the Kalman filter. We
illustrate the benefits of the proposed approach using different numerical
simulations where D-NMF significantly outperforms its static counterpart.
Experimental results for three different applications show that the proposed
approach outperforms two state-of-the-art NMF approaches that exploit temporal
dependencies, namely a nonnegative hidden Markov model and a frame stacking
approach, while it requires less memory and computational power. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
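The prediction/update structure described above lends itself to a short filtering sketch: predict the coefficients from past frames with a nonnegative vector autoregression, then refine them against the current observation with multiplicative NMF updates. The update rule and initialization below are illustrative stand-ins for the paper's ML/MAP derivation:

```python
import numpy as np

def dnmf_filter(X, W, A_lags, inner_iters=30, eps=1e-9):
    """Filter NMF coefficients through time.
    X : (m, T) nonnegative data; W : (m, r) fixed basis;
    A_lags : list of (r, r) nonnegative N-VAR lag matrices."""
    m, T = X.shape
    r = W.shape[1]
    H = np.zeros((r, T))
    for t in range(T):
        # --- prediction step: multi-lag nonnegative vector autoregression ---
        if t == 0:
            h = np.full(r, 1.0)                        # uninformative start
        else:
            h = sum(A @ H[:, t - 1 - l]
                    for l, A in enumerate(A_lags) if t - 1 - l >= 0)
            h = np.maximum(h, eps)
        # --- update step: pull the prediction toward the observation x_t ---
        x = X[:, t]
        for _ in range(inner_iters):
            h = h * (W.T @ x) / (W.T @ (W @ h) + eps)  # multiplicative update
        H[:, t] = h
    return H
```

In this sketch the prediction only warm-starts the multiplicative updates; the paper's MAP update instead balances the N-VAR prior against the observation explicitly.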
Title: Dense families of modular curves, prime numbers and uniform symmetric tensor rank of multiplication in certain finite fields,
Abstract: We obtain new uniform bounds for the symmetric tensor rank of multiplication
in finite extensions of any finite field Fp or Fp2, where p denotes a prime
number greater than or equal to 5. To this end, we use the symmetric
Chudnovsky-type generalized algorithm applied on sufficiently dense families of
modular curves defined over Fp2 attaining the Drinfeld-Vladuts bound and on the
descent of these families to the definition field Fp. These families are
obtained thanks to prime number density theorems of Hoheisel type, in
particular a result due to Dudek (2016). | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Bayesian Network Regularized Regression for Modeling Urban Crime Occurrences,
Abstract: This paper considers the problem of statistical inference and prediction for
processes defined on networks. We assume that the network is known and measures
similarity, and our goal is to learn about an attribute associated with its
vertices. Classical regression methods are not immediately applicable to this
setting, as we would like our model to incorporate information from both
network structure and pertinent covariates. Our proposed model consists of a
generalized linear model with vertex indexed predictors and a basis expansion
of their coefficients, allowing the coefficients to vary over the network. We
employ a regularization procedure, cast as a prior distribution on the
regression coefficients under a Bayesian setup, so that the predicted responses
vary smoothly according to the topology of the network. We motivate the need
for this model by examining occurrences of residential burglary in Boston,
Massachusetts. Noting that crime rates are not spatially homogeneous, and that
the rates appear to vary sharply across regions in the city, we construct a
hierarchical model that addresses these issues and gives insight into spatial
patterns of crime occurrences. Furthermore, we examine efficient
expectation-maximization fitting algorithms and provide
computationally-friendly methods for eliciting hyper-prior parameters. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
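The regularization described above, a prior that makes vertex-indexed coefficients vary smoothly over the network, reduces in the Gaussian-response case to a graph-Laplacian penalty with a closed-form posterior mode. The sketch below is that simplified stand-in, not the paper's GLM/EM machinery:

```python
import numpy as np

def laplacian_smoothed_regression(X, y, L, lam=1.0, eps=1e-6):
    """Posterior mode of a Gaussian model with prior precision lam * L:
    beta_hat = (X^T X + lam * (L + eps * I))^{-1} X^T y.
    L is the graph Laplacian over the p vertices, so the penalty
    lam * sum over edges (i, j) of (beta_i - beta_j)^2 shrinks neighboring
    coefficients toward each other; eps * I makes the improper prior proper."""
    p = X.shape[1]
    P = X.T @ X + lam * (L + eps * np.eye(p))
    return np.linalg.solve(P, X.T @ y)
```

The hyperparameter lam plays the role of the hyper-prior parameters the abstract mentions eliciting; large lam forces responses to vary smoothly with the network topology.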
Title: Similarity forces and recurrent components in human face-to-face interaction networks,
Abstract: We show that the social dynamics responsible for the formation of connected
components that appear recurrently in face-to-face interaction networks find a
natural explanation in the assumption that the agents of the temporal network
reside in a hidden similarity space. Distances between the agents in this space
act as similarity forces directing their motion towards other agents in the
physical space and determining the duration of their interactions. By contrast,
if such forces are ignored in the motion of the agents, recurrent components do
not form, although other main properties of such networks can still be
reproduced. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
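A toy agent-based version of this mechanism is easy to state: give each agent a fixed coordinate in a hidden similarity space, move it in physical space toward its most similar peer, and log a contact whenever two agents come within an interaction radius. All parameters below are illustrative assumptions, and the toy omits the interaction-duration dynamics of the full model:

```python
import numpy as np

def simulate(n=50, steps=200, radius=0.05, speed=0.01, seed=0):
    """Record (time, i, j) contacts produced by similarity-driven motion."""
    rng = np.random.default_rng(seed)
    s = rng.random((n, 2))                    # hidden similarity coordinates
    p = rng.random((n, 2))                    # physical positions
    hid = np.linalg.norm(s[:, None] - s[None, :], axis=2)
    np.fill_diagonal(hid, np.inf)
    target = hid.argmin(axis=1)               # each agent's most similar peer
    contacts = []
    for t in range(steps):
        step = p[target] - p                   # similarity force on motion
        norm = np.linalg.norm(step, axis=1, keepdims=True) + 1e-12
        p = p + speed * step / norm
        d = np.linalg.norm(p[:, None] - p[None, :], axis=2)
        i, j = np.where(np.triu(d < radius, k=1))
        contacts += [(t, int(a), int(b)) for a, b in zip(i, j)]
    return contacts
```

Because agents repeatedly steer toward the same similar peers, the same groups keep re-forming, which is the recurrent-component behavior the abstract attributes to similarity forces.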