title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1)
---|---|---|---|---|---|---|---|
Manipulative Elicitation -- A New Attack on Elections with Incomplete Preferences | Lu and Boutilier proposed a novel approach based on "minimax regret" to use
classical score-based voting rules in the setting where preferences can be
arbitrary partial (instead of complete) orders over the set of alternatives. We
show here that such an approach is vulnerable to a new kind of manipulation
which was not present in the classical world of voting, where preferences are
complete orders. We call this attack "manipulative elicitation." More
specifically, it may be possible to (partially) elicit the preferences of the
agents in a way that makes some distinguished alternative win the election,
even though it would not be a winner if every preference were elicited
completely. More alarmingly, we show that the related computational task is
polynomial-time solvable for a large class of voting rules which includes all
scoring rules, maximin, Copeland$^\alpha$ for every $\alpha\in[0,1]$,
simplified Bucklin voting rules, etc. We then show that introducing a parameter
per pair of alternatives which specifies the minimum number of partial
preferences where this pair of alternatives must be comparable makes the
related computational task of manipulative elicitation NP-complete for all
common voting rules, including a class of scoring rules which contains the
plurality, $k$-approval, $k$-veto, veto, and Borda voting rules, as well as
maximin, Copeland$^\alpha$ for every $\alpha\in[0,1]$, and simplified Bucklin
voting rules. Hence, in this work, we discover a fundamental vulnerability in
using the minimax regret based approach in the partial preference setting and
propose a novel way to tackle it.
| 1 | 0 | 0 | 0 | 0 | 0 |
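A small illustrative sketch in Python (ours, not the paper's construction): it
brute-forces the minimax-regret winner under the Borda rule for a toy profile
of partial votes by enumerating all completions. This is only feasible at toy
scale; the point of the polynomial-time results above is precisely to avoid
such enumeration. The candidate names and the example profile are made up.

```python
from itertools import permutations, product

CANDIDATES = ["a", "b", "c"]

def completions(partial):
    """Linear extensions (best first) of a partial order given as a set of
    (x, y) pairs meaning 'x is preferred to y'."""
    for perm in permutations(CANDIDATES):
        pos = {c: i for i, c in enumerate(perm)}
        if all(pos[x] < pos[y] for x, y in partial):
            yield perm

def borda(profile):
    """Borda scores for a profile of complete rankings."""
    m = len(CANDIDATES)
    score = {c: 0 for c in CANDIDATES}
    for ranking in profile:
        for i, c in enumerate(ranking):
            score[c] += m - 1 - i
    return score

def max_regret(cand, partial_profile):
    """Worst-case Borda-score gap to the best alternative, over all
    completions of all partial votes (brute force)."""
    worst = 0
    all_comps = [list(completions(p)) for p in partial_profile]
    for profile in product(*all_comps):
        s = borda(profile)
        worst = max(worst, max(s.values()) - s[cand])
    return worst

votes = [{("a", "b")}, {("b", "c")}, set()]    # three partial votes
winner = min(CANDIDATES, key=lambda c: max_regret(c, votes))
print("minimax-regret Borda winner:", winner)
```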
Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks | Event sequences, asynchronously generated with random timestamps, are
ubiquitous across applications. The precise and arbitrary timestamps carry
important clues about the underlying dynamics and make event data fundamentally
different from time series, in which the series is indexed at fixed and equal
time intervals. One expressive mathematical tool for modeling events is the
point process. The intensity functions of many point processes involve two
components: the background and the effect of the history. Due to its inherent
spontaneity, the background can be treated as a time series, while the other
component needs to handle the history of events. In this paper, we model the
background by a Recurrent Neural Network (RNN) with its units aligned with time
series indexes, while the history effect is modeled by another RNN whose units
are aligned with asynchronous events to capture long-range dynamics. The whole
model, with event type and timestamp prediction output layers, can be trained
end-to-end. Our approach takes an RNN perspective on point processes, modeling
their background and history effect. For utility, our method allows a black-box
treatment for modeling the intensity, which is often a pre-defined parametric
form in point processes. Meanwhile, end-to-end training opens the avenue for
reusing the rich existing techniques of deep networks for point process
modeling. We apply our model to the predictive maintenance problem using a log
dataset from more than 1,000 ATMs of a global bank headquartered in North
America.
| 1 | 0 | 0 | 1 | 0 | 0 |
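A minimal PyTorch sketch of the twin-RNN idea described above: one recurrent
network consumes the evenly indexed background series, another consumes the
asynchronous event sequence, and their final states feed shared heads that
predict the next event type and timestamp. All dimensions, names, and the
fusion-by-concatenation step are our own illustrative assumptions, not the
authors' exact architecture.

```python
import torch
import torch.nn as nn

class TwinRNNPointProcess(nn.Module):
    def __init__(self, series_dim, event_dim, hidden=32, n_types=5):
        super().__init__()
        self.bg_rnn = nn.LSTM(series_dim, hidden, batch_first=True)  # background
        self.ev_rnn = nn.LSTM(event_dim, hidden, batch_first=True)   # history
        self.type_head = nn.Linear(2 * hidden, n_types)  # next event type
        self.time_head = nn.Linear(2 * hidden, 1)        # next timestamp gap

    def forward(self, series, events):
        _, (h_bg, _) = self.bg_rnn(series)   # synchronous (time-series) state
        _, (h_ev, _) = self.ev_rnn(events)   # asynchronous (event) state
        h = torch.cat([h_bg[-1], h_ev[-1]], dim=-1)
        return self.type_head(h), self.time_head(h)

model = TwinRNNPointProcess(series_dim=4, event_dim=8)
series = torch.randn(16, 50, 4)   # batch of evenly indexed series
events = torch.randn(16, 30, 8)   # batch of event embeddings
type_logits, dt_pred = model(series, events)
```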
A note on relative amenability of finite von Neumann algebras | Let $M$ be a finite von Neumann algebra (resp. a type II$_{1}$ factor) and
let $N\subset M$ be a II$_{1}$ factor (resp. let $N\subset M$ have an atomic
part). We prove that if the inclusion $N\subset M$ is amenable, then the
identity map on $M$ has an approximate factorization through
$M_m(\mathbb{C})\otimes N$ via trace-preserving normal unital completely
positive maps, which is a generalization of a result of Haagerup. We also prove
two permanence properties for amenable inclusions: one is the weak Haagerup
property, the other is weak exactness.
| 0 | 0 | 1 | 0 | 0 | 0 |
Powerful numbers in $(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$ | Let $q$ be a positive integer. Recently, Niu and Liu proved that if $n\ge
\max\{q,1198-q\}$, then the product $(1^3+q^3)(2^3+q^3)\cdots (n^3+q^3)$ is not
a powerful number. In this note, we prove that (i) for any odd prime power
$\ell$ and $n\ge \max\{q,11-q\}$, the product
$(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$ is not a
powerful number; (ii) for any positive odd integer $\ell$, there exists an
integer $N_{q,\ell}$ such that for any positive integer $n\ge N_{q,\ell}$, the
product $(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$ is
not a powerful number.
| 0 | 0 | 1 | 0 | 0 | 0 |
LandmarkBoost: Efficient Visual Context Classifiers for Robust Localization | The growing popularity of autonomous systems creates a need for reliable and
efficient metric pose retrieval algorithms. Currently used approaches tend to
rely on nearest neighbor search of binary descriptors to perform the 2D-3D
matching and guarantee real-time capabilities on mobile platforms. These methods
struggle, however, with the growing size of the map, changes in viewpoint or
appearance, and visual aliasing present in the environment. The rigidly defined
descriptor patterns only capture a limited neighborhood of the keypoint and
completely ignore the overall visual context.
We propose LandmarkBoost - an approach that, in contrast to the conventional
2D-3D matching methods, casts the search problem as a landmark classification
task. We use a boosted classifier to classify landmark observations and
directly obtain correspondences as classifier scores. We also introduce a
formulation of visual context that is flexible, efficient to compute, and can
capture relationships in the entire image plane. The original binary
descriptors are augmented with contextual information and informative features
are selected by the boosting framework. Through detailed experiments, we
evaluate the retrieval quality and performance of LandmarkBoost, demonstrating
that it outperforms common state-of-the-art descriptor matching methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
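To make the "correspondences as classifier scores" idea concrete, here is a toy
scikit-learn sketch that trains a boosted classifier to map noisy
descriptor-plus-context features to landmark IDs; the synthetic feature
construction below is a placeholder for the paper's actual descriptors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_landmarks, n_obs, dim = 5, 400, 16
centers = rng.normal(size=(n_landmarks, dim))          # one "landmark" each
labels = rng.integers(0, n_landmarks, size=n_obs)
X = centers[labels] + 0.5 * rng.normal(size=(n_obs, dim))  # noisy observations

clf = GradientBoostingClassifier(n_estimators=50).fit(X[:300], labels[:300])
scores = clf.predict_proba(X[300:])    # classifier scores = correspondences
print("top-1 accuracy:", (scores.argmax(axis=1) == labels[300:]).mean())
```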
On stably trivial spin torsors over low-dimensional schemes | The paper discusses stably trivial torsors for spin and orthogonal groups
over smooth affine schemes over infinite perfect fields of characteristic
unequal to 2. We give a complete description of all the invariants relevant for
the classification of such objects over schemes of dimension at most $3$, along
with many examples. The results are based on the
$\mathbb{A}^1$-representability theorem for torsors and transfer of known
computations of $\mathbb{A}^1$-homotopy sheaves along the sporadic isomorphisms
to spin groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the relation between dependency distance, crossing dependencies, and parsing. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al | Liu et al. (2017) provide a comprehensive account of research on dependency
distance in human languages. While the article is a very rich and useful report
on this complex subject, here I will expand on a few specific issues where
research in computational linguistics (specifically natural language
processing) can inform DDM research, and vice versa. These aspects have not
been explored much in the article by Liu et al. or elsewhere, probably due to
the limited overlap between the two research communities, but they may provide
interesting insights for improving our understanding of the evolution of human
languages, the mechanisms by which the brain processes and understands
language, and the construction of effective computer systems to achieve this
goal.
| 1 | 0 | 0 | 0 | 0 | 0 |
Edge Erasures and Chordal Graphs | We prove several results about chordal graphs and weighted chordal graphs by
focusing on exposed edges. These are edges that are properly contained in a
single maximal complete subgraph. This leads to a characterization of chordal
graphs via deletions of a sequence of exposed edges from a complete graph. Most
interesting is that in this context the connected components of the
edge-induced subgraph of exposed edges are 2-edge connected. We use this latter
fact in the weighted case to give a modified version of Kruskal's second
algorithm for finding a minimum spanning tree in a weighted chordal graph. This
modified algorithm benefits from being local in an important sense.
| 1 | 0 | 1 | 0 | 0 | 0 |
One-dimensional plasmonic hotspots located between silver nanowire dimers evaluated by surface-enhanced resonance Raman scattering | Hotspots of surface-enhanced resonance Raman scattering (SERRS) are localized
within 1 nm at gaps or crevices of plasmonic nanoparticle (NP) dimers. We
demonstrate SERRS hotspots with volumes that are extended in one dimension by
tens of thousands of times compared to standard zero-dimensional hotspots,
using gaps or crevices of plasmonic nanowire (NW) dimers.
| 0 | 1 | 0 | 0 | 0 | 0 |
An efficient methodology for the analysis and modeling of computer experiments with large number of inputs | Complex computer codes are often too time-expensive to be directly used to
perform uncertainty, sensitivity, optimization and robustness analyses. A
widely accepted method to circumvent this problem consists in replacing
CPU-time-expensive computer models with CPU-inexpensive mathematical functions,
called metamodels. For example, the Gaussian process (Gp) model has shown
strong capabilities for solving practical problems, which often involve several
interlinked issues. However, in the case of high-dimensional experiments (with
typically several tens of inputs), building the Gp metamodel remains difficult,
even infeasible, and the application of variable selection techniques cannot be
avoided. In this paper, we present a general methodology for building a Gp
metamodel with a large number of inputs in a very efficient manner. While our
work focuses on the Gp metamodel, its principles are fully generic and can be
applied to any type of metamodel. The objective is to estimate a highly
predictive metamodel from a minimal number of computer experiments. This
methodology is successfully applied to an industrial computer code.
| 0 | 0 | 1 | 1 | 0 | 0 |
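A hedged sketch of the general screen-then-fit workflow described above, using
scikit-learn: rank the inputs with a cheap screening statistic, then fit the Gp
metamodel only on the selected ones. The correlation-based screening rule and
all names here are illustrative stand-ins for the paper's methodology.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 30))                    # 30 inputs, few matter
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.normal(size=200)

# Cheap screening: rank inputs by |correlation| with the output and keep the
# top k before paying the cost of a high-dimensional Gp fit.
k = 5
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(scores)[::-1][:k]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X[:, keep], y)
print("selected inputs:", sorted(keep.tolist()))
```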
Knowledge Adaptation: Teaching to Adapt | Domain adaptation is crucial in many real-world applications where the
distribution of the training data differs from the distribution of the test
data. Previous Deep Learning-based approaches to domain adaptation need to be
trained jointly on source and target domain data and are therefore unappealing
in scenarios where models need to be adapted to a large number of domains or
where a domain is evolving, e.g. spam detection where attackers continuously
change their tactics.
To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge
Distillation (Bucilua et al., 2006; Hinton et al., 2015) to the domain
adaptation scenario. We show how a student model achieves state-of-the-art
results on unsupervised domain adaptation from multiple sources on a standard
sentiment analysis benchmark by taking into account the domain-specific
expertise of multiple teachers and the similarities between their domains.
When learning from a single teacher, using domain similarity to gauge
trustworthiness is inadequate. To this end, we propose a simple metric that
correlates well with the teacher's accuracy in the target domain. We
demonstrate that incorporating high-confidence examples selected by this metric
enables the student model to achieve state-of-the-art performance in the
single-source scenario.
| 1 | 0 | 0 | 0 | 0 | 0 |
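For reference, a minimal PyTorch sketch of the distillation objective that
Knowledge Adaptation builds on (Hinton et al., 2015): the student matches
temperature-softened teacher predictions. The fixed mixture over multiple
teachers below is a crude stand-in for the domain-similarity weighting
described in the abstract; all names are ours.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, weights, T=2.0):
    """KL divergence from a weighted teacher mixture at temperature T."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teachers = sum(w * F.softmax(t / T, dim=-1)
                     for w, t in zip(weights, teacher_logits_list))
    return F.kl_div(log_p_student, p_teachers, reduction="batchmean") * T * T

student = torch.randn(8, 2)                       # e.g. sentiment logits
teachers = [torch.randn(8, 2) for _ in range(3)]  # one per source domain
loss = distillation_loss(student, teachers, weights=[0.5, 0.3, 0.2])
```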
A Datamining Approach for Emotions Extraction and Discovering Cricketers performance from Stadium to Sensex | Microblogging sites are a direct platform for users to express their views.
It has been observed in previous studies that people are apt to express their
emotions about events (e.g., natural catastrophes, sports, academics, etc.),
about persons (actors/actresses, sports persons, scientists) and about the
places they visit. In this study we focus on a sports event, particularly a
cricket tournament, and collect the emotions of fans for their favorite players
using their tweets. Further, we acquire the stock market performance of the
brands which either endorse the players or sponsor matches in the tournament.
We observe that a player's performance triggers users to express their emotions
over social media; accordingly, we find a correlation between players'
performance and fans' emotions, and thereby a direct connection between a
player's performance and the endorsing brand's behavior on the stock market.
| 1 | 0 | 0 | 0 | 0 | 0 |
Local Nonparametric Estimation for Second-Order Jump-Diffusion Model Using Gamma Asymmetric Kernels | This paper discusses local linear smoothing to estimate the unknown first
and second infinitesimal moments in a second-order jump-diffusion model based
on Gamma asymmetric kernels. Under mild conditions, we obtain the weak
consistency and the asymptotic normality of these estimators for both interior
and boundary design points. Besides the standard properties of local linear
estimation, such as simple bias representation and boundary bias correction,
local linear smoothing using Gamma asymmetric kernels possesses some extra
advantages, such as variable bandwidth, variance reduction and resistance to
sparse design, which is validated through a finite sample simulation study.
Finally, we apply the estimators to the returns of some high-frequency
financial data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Ergodic actions of the compact quantum group $O_{-1}(2)$ | Among the ergodic actions of a compact quantum group $\mathbb{G}$ on possibly
non-commutative spaces, those that are {\it embeddable} are the natural
analogues of actions of a compact group on its homogeneous spaces. These can be
realized as {\it coideal subalgebras} of the function algebra
$\mathcal{O}(\mathbb{G})$ attached to the compact quantum group.
We classify the embeddable ergodic actions of the compact quantum group
$O_{-1}(2)$, basing our analysis on the bijective correspondence between all
ergodic actions of the classical group $O(2)$ and those of its quantum twist
resulting from the monoidal equivalence between their respective tensor
categories of unitary representations.
In the last section we give counterexamples showing that in general we cannot
expect a bijective correspondence between embeddable ergodic actions of two
monoidally equivalent compact quantum groups.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Vector ARMA Models Consistent with a Finite Matrix Covariance Sequence | We formulate the so-called "VARMA covariance matching problem" and
demonstrate the existence of a solution using degree theory from differential
topology.
| 0 | 0 | 1 | 1 | 0 | 0 |
Edge states in non-Fermi liquids | We devise an approach to the calculation of scaling dimensions of generic
operators describing scattering within a multi-channel Luttinger liquid. Local
impurity scattering in an arbitrary configuration of conducting and insulating
channels is investigated, and the problem is reduced to a single algebraic
matrix equation. In particular, the solution to this equation is found for a
finite array of chains described by Luttinger liquid models. It is found that
for weak inter-chain hybridisation and intra-channel electron-electron
attraction, the edge wires are robust against disorder whereas the bulk wires,
on the contrary, become insulating. Thus, an edge state may exist in a finite
sliding Luttinger liquid without time-reversal symmetry breaking
(quantum Hall systems) or spin-orbit interaction (topological insulators).
| 0 | 1 | 0 | 0 | 0 | 0 |
RobustFill: Neural Program Learning under Noisy I/O | The problem of automatically generating a computer program from some
specification has been studied since the early days of AI. Recently, two
competing approaches for automatic program learning have received significant
attention: (1) neural program synthesis, where a neural network is conditioned
on input/output (I/O) examples and learns to generate a program, and (2) neural
program induction, where a neural network generates new outputs directly using
a latent program representation.
Here, for the first time, we directly compare both approaches on a
large-scale, real-world learning task. We additionally contrast to rule-based
program synthesis, which uses hand-crafted semantics to guide the program
generation. Our neural models use a modified attention RNN to allow encoding of
variable-sized sets of I/O pairs. Our best synthesis model achieves 92%
accuracy on a real-world test set, compared to the 34% accuracy of the previous
best neural synthesis approach. The synthesis model also outperforms a
comparable induction model on this task, but we more importantly demonstrate
that the strength of each approach is highly dependent on the evaluation metric
and end-user application. Finally, we show that we can train our neural models
to remain very robust to the type of noise expected in real-world data (e.g.,
typos), while a highly-engineered rule-based system fails entirely.
| 1 | 0 | 0 | 0 | 0 | 0 |
Review of flexible and transparent thin-film transistors based on zinc oxide and related materials | Flexible and transparent electronics presents a new era of electronic
technologies. Ubiquitous applications involve wearable electronics, biosensors,
flexible transparent displays, radio-frequency identifications (RFIDs), etc.
Zinc oxide (ZnO) and related materials are the most commonly used inorganic
semiconductors in flexible and transparent devices, owing to their high
electrical performance, together with low processing temperature and good
optical transparency. In this paper, we review recent advances in flexible and
transparent thin-film transistors (TFTs) based on ZnO and related materials.
After a brief introduction, the main progress on the preparation of each
component (substrate, electrodes, channel and dielectrics) is summarized and
discussed. Then, the effect of mechanical bending on electrical performance is
highlighted. Finally, we suggest challenges and opportunities for future
| 0 | 1 | 0 | 0 | 0 | 0 |
Updated physics design of the DAEdALUS and IsoDAR coupled cyclotrons for high intensity H2+ beam production | The Decay-At-rest Experiment for delta-CP violation At a Laboratory for
Underground Science (DAEdALUS) and the Isotope Decay-At-Rest experiment
(IsoDAR) are proposed experiments to search for CP violation in the neutrino
sector, and "sterile" neutrinos, respectively. In order to be decisive within 5
years, the neutrino flux and, consequently, the driver beam current (produced
by chained cyclotrons) must be high. H2+ was chosen as primary beam ion in
order to reduce the electrical current and thus space charge. This has the
added advantage of allowing for stripping extraction at the exit of the
DAEdALUS Superconducting Ring Cyclotron (DSRC). The primary beam current is
higher than existing cyclotrons have demonstrated, which has led to a
substantial R&D effort by our collaboration in recent years. We present the results of
this research, including tests of prototypes and highly realistic beam
simulations, which led to the latest physics-based design. The presented
results suggest that it is feasible, albeit challenging, to accelerate 5 mA of
H2+ to 60 MeV/amu in a compact cyclotron and boost it to 800 MeV/amu in the
DSRC with clean extraction in both cases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scalable Gaussian Process Computations Using Hierarchical Matrices | We present a kernel-independent method that applies hierarchical matrices to
the problem of maximum likelihood estimation for Gaussian processes. The
proposed approximation provides natural and scalable stochastic estimators for
its gradient and Hessian, as well as the expected Fisher information matrix,
that are computable in quasilinear $O(n \log^2 n)$ complexity for a large range
of models. To accomplish this, we (i) choose a specific hierarchical
approximation for covariance matrices that enables the computation of their
exact derivatives and (ii) use a stabilized form of the Hutchinson stochastic
trace estimator. Since both the observed and expected information matrices can
be computed in quasilinear complexity, covariance matrices for MLEs can also be
estimated efficiently. After discussing the associated mathematics, we
demonstrate the scalability of the method, discuss details of its
implementation, and validate that the resulting MLEs and confidence intervals
based on the inverse Fisher information matrix faithfully approach those
obtained by the exact likelihood.
| 0 | 0 | 0 | 1 | 0 | 0 |
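A dense NumPy illustration of the Hutchinson stochastic trace estimator
mentioned above: tr(A) is estimated by averaging z^T A z over random sign
vectors z, since E[z^T A z] = tr(A) for Rademacher z. The paper uses a
stabilized variant inside a hierarchical-matrix framework; this sketch only
shows the basic identity.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_samples=200, rng=None):
    """Estimate tr(A) given only a matrix-vector product for A."""
    rng = rng or np.random.default_rng(0)
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        est += z @ matvec(z)
    return est / n_samples

A = np.random.default_rng(1).normal(size=(500, 500))
A = A @ A.T                                    # symmetric PSD test matrix
print(hutchinson_trace(lambda v: A @ v, 500), "vs exact", np.trace(A))
```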
Evolutionary Data Systems | Anyone in need of a data system today is confronted with numerous complex
options in terms of system architectures, such as traditional relational
databases, NoSQL and NewSQL solutions as well as several sub-categories like
column-stores, row-stores etc. This overwhelming array of choices makes
bootstrapping data-driven applications difficult and time consuming, requiring
expertise often not accessible due to cost issues (e.g., to scientific labs or
small businesses). In this paper, we present the vision of evolutionary data
systems that free systems architects and application designers from the
complex, cumbersome and expensive process of designing and tuning specialized
data system architectures that fit only a single, static application scenario.
Setting up an evolutionary system is as simple as identifying the data. As new
data and queries come in, the system automatically evolves so that its
architecture matches the properties of the incoming workload at all times.
Inspired by the theory of evolution, at any given point in time, an
evolutionary system may employ multiple competing solutions down at the low
level of database architectures -- characterized as combinations of data
layouts, access methods and execution strategies. Over time, "the fittest wins"
and becomes the dominant architecture until the environment (workload) changes.
In our initial prototype, we demonstrate solutions that can seamlessly evolve
(back and forth) between a key-value store and a column-store architecture in
order to adapt to changing workloads.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimal Learning for Sequential Decision Making for Expensive Cost Functions with Stochastic Binary Feedbacks | We consider the problem of sequentially making decisions that are rewarded by
"successes" and "failures" which can be predicted through an unknown
relationship that depends on a partially controllable vector of attributes for
each instance. The learner takes an active role in selecting samples from the
instance pool. The goal is to maximize the probability of success in either
offline (training) or online (testing) phases. Our problem is motivated by
real-world applications where observations are time-consuming and/or expensive.
We develop a knowledge gradient (KG) policy using an online Bayesian linear
classifier to guide the experiment by maximizing the expected value of
information of labeling each alternative. We provide a finite-time analysis of
the estimation error and show that the maximum likelihood estimator produced
by the KG policy is consistent and asymptotically normal. We also show
that the knowledge gradient policy is asymptotically optimal in an offline
setting. This work further extends the knowledge gradient to the setting of
contextual bandits. We report the results of a series of experiments that
demonstrate its efficiency.
| 0 | 0 | 0 | 1 | 0 | 0 |
Determination of hysteresis in finite-state random walks using Bayesian cross validation | Consider the problem of modeling hysteresis for finite-state random walks
using higher-order Markov chains. This Letter introduces a Bayesian framework
to determine, from data, the number of prior states of recent history upon
which a trajectory is statistically dependent. The general recommendation is to
use leave-one-out cross validation, using an easily-computable formula that is
provided in closed form. Importantly, Bayes factors using flat model priors are
biased in favor of too-complex a model (more hysteresis) when a large amount of
data is present and the Akaike information criterion (AIC) is biased in favor
of too-sparse a model (less hysteresis) when few data are present.
| 0 | 1 | 0 | 1 | 0 | 0 |
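The Letter derives a closed-form leave-one-out score; the sketch below is our
own simpler illustration of the same model-selection question, choosing the
Markov order of a finite-state walk by held-out log-likelihood with add-one
smoothing. All details here are simplifications, not the Letter's formula.

```python
import numpy as np
from collections import Counter

def heldout_loglik(train, test, order, n_states):
    """Held-out log-likelihood of an order-k Markov chain fit on `train`,
    with add-one smoothing."""
    counts = Counter(zip(*[train[i:] for i in range(order + 1)]))
    ctx = Counter(zip(*[train[i:] for i in range(order)])) if order else None
    ll = 0.0
    for t in range(order, len(test)):
        c = tuple(test[t - order:t])
        num = counts[c + (test[t],)] + 1                # smoothed count
        den = (ctx[c] if order else len(train)) + n_states
        ll += np.log(num / den)
    return ll

rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=2000).tolist()            # memoryless walk
train, test = seq[:1500], seq[1500:]
for k in (0, 1, 2, 3):                                  # order 0 should win
    print(k, round(heldout_loglik(train, test, k, n_states=2), 1))
```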
Effective mean values of complex multiplicative functions (Moyennes effectives de fonctions multiplicatives complexes) | We establish effective mean-value estimates for a wide class of
multiplicative arithmetic functions, thereby providing (essentially optimal)
quantitative versions of Wirsing's classical estimates and extending those of
Halász. Several applications are derived, including: estimates for the
difference of mean-values of so-called pretentious functions, local laws for
the distribution of prime factors in an arbitrary set, and weighted
distribution of additive functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magnetic Excitations and Continuum of a Field-Induced Quantum Spin Liquid in $α$-RuCl$_3$ | We report on terahertz spectroscopy of quantum spin dynamics in
$\alpha$-RuCl$_3$, a system proximate to the Kitaev honeycomb model, as a
function of temperature and magnetic field. An extended magnetic continuum
develops below the structural phase transition at $T_{s2}=62$K. With the onset
of a long-range magnetic order at $T_N=6.5$K, spectral weight is transferred to
a well-defined magnetic excitation at $\hbar \omega_1 = 2.48$meV, which is
accompanied by a higher-energy band at $\hbar \omega_2 = 6.48$meV. Both
excitations soften in magnetic field, signaling a quantum phase transition at
$B_c=7$T where we find a broad continuum dominating the dynamical response.
Above $B_c$, the long-range order is suppressed, and on top of the continuum,
various emergent magnetic excitations evolve. These excitations follow clear
selection rules and exhibit distinct field dependencies, characterizing the
dynamical properties of the field-induced quantum spin liquid.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wireless Network-Level Partial Relay Cooperation: A Stable Throughput Analysis | In this work, we study the benefit of partial relay cooperation. We consider
a two-node system consisting of one source and one relay node transmitting
information to a common destination. The source and the relay have external
traffic and in addition, the relay is equipped with a flow controller to
regulate the incoming traffic from the source node. The cooperation is
performed at the network level. A collision channel with erasures is
considered. We provide an exact characterization of the stability region of the
system, and we also prove that the system with partial cooperation always
performs at least as well as the system without the flow controller.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving Development Practices through Experimentation: an Industrial TDD Case | Test-Driven Development (TDD), an agile development approach that enforces
the construction of software systems by means of successive micro-iterative
testing coding cycles, has been widely claimed to increase external software
quality. In view of this, some managers at Paf, a Nordic gaming entertainment
company, were interested in knowing how TDD would perform at their premises.
If TDD outperformed their traditional way of coding (i.e., YW,
short for Your Way), it would be possible to switch to TDD considering the
empirical evidence achieved at the company level. We conduct an experiment at
Paf to evaluate the performance of TDD, YW and the reverse approach of TDD
(i.e., ITL, short for Iterative-Test Last) on external quality. TDD outperforms
YW and ITL at Paf. Despite the encouraging results, we cannot recommend Paf to
immediately adopt TDD as the difference in performance between YW and TDD is
small. However, as TDD looks promising at Paf, we suggest to move some
developers to TDD and to run a future experiment to compare the performance of
TDD and YW. TDD slightly outperforms ITL in controlled experiments for TDD
novices. However, more industrial experiments are still needed to evaluate the
performance of TDD in real-life contexts.
| 1 | 0 | 0 | 0 | 0 | 0 |
An explicit Gross-Zagier formula related to the Sylvester Conjecture | Let $p\equiv 4,7 \bmod 9$ be a rational prime number such that $3$ is not a
cubic residue $\bmod\ p$. In this paper we prove that the 3-part of the product
of the full BSD conjectures for $E_p$ and $E_{3p^2}$ is true, using an explicit
Gross-Zagier formula, where $E_p: x^3+y^3=p$ and $E_{3p^2}: x^3+y^3=3p^2$ are
the elliptic curves related to the Sylvester conjecture and cube sum problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Case Studies of Exocomets in the System of HD 10180 | The aim of our study is to investigate the dynamics of possible comets in the
HD 10180 system. This investigation is motivated by the discovery of exocomets
in various systems, especially $\beta$ Pictoris, as well as in at least ten
other systems. Detailed theoretical studies about the formation and evolution
of star--planet systems indicate that exocomets should be quite common. Further
observational results are expected in the foreseeable future, in part due to
the availability of the Large Synoptic Survey Telescope. Nonetheless, the Solar
System represents the best studied example for comets, thus serving as a prime
motivation for investigating comets in HD 10180 as well. HD 10180 is strikingly
similar to the Sun. This system contains six confirmed planets and (at least)
two additional planets subject to final verification. In our studies, we
consider comets of different inclinations and eccentricities and find an array
of different outcomes such as encounters with planets, captures, and escapes.
Comets with relatively large eccentricities are able to enter the inner region
of the system facing early planetary encounters. Stable comets experience
long-term evolution of orbital elements, as expected. We also tried to
distinguish cometary families akin to our Solar System but no clear distinction
between possible families was found. Generally, theoretical and observational
studies of exoplanets have a large range of ramifications, involving the
origin, structure and evolution of systems as well as the proliferation of
water and prebiotic compounds to terrestrial planets, which will increase their
chances of being habitable.
| 0 | 1 | 0 | 0 | 0 | 0 |
3D ab initio modeling in cryo-EM by autocorrelation analysis | Single-Particle Reconstruction (SPR) in Cryo-Electron Microscopy (cryo-EM) is
the task of estimating the 3D structure of a molecule from a set of noisy 2D
projections, taken from unknown viewing directions. Many algorithms for SPR
start from an initial reference molecule, and alternate between refining the
estimated viewing angles given the molecule, and refining the molecule given
the viewing angles. This scheme is called iterative refinement. Reliance on an
initial, user-chosen reference introduces model bias, and poor initialization
can lead to slow convergence. Furthermore, since no ground truth is available
for an unsolved molecule, it is difficult to validate the obtained results.
This creates the need for high quality ab initio models that can be quickly
obtained from experimental data with minimal priors, and which can also be used
for validation. We propose a procedure to obtain such an ab initio model
directly from raw data using Kam's autocorrelation method. Kam's method has
been known since 1980, but it leads to an underdetermined system, with missing
orthogonal matrices. Until now, this system has been solved only for special
cases, such as highly symmetric molecules or molecules for which a homologous
structure was already available. In this paper, we show that knowledge of just
two clean projections is sufficient to guarantee a unique solution to the
system. This system is solved by an optimization-based heuristic. For the first
time, we are then able to obtain a low-resolution ab initio model of an
asymmetric molecule directly from raw data, without 2D class averaging and
without tilting. Numerical results are presented on both synthetic and
experimental data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Reinforcement Learning with a Corrupted Reward Channel | No real-world reward function is perfect. Sensory errors and software bugs
may result in RL agents observing higher (or lower) rewards than they should.
For example, a reinforcement learning agent may prefer states where a sensory
error gives it the maximum reward, but where the true reward is actually small.
We formalise this problem as a generalised Markov Decision Problem called a
Corrupt Reward MDP (CRMDP). Traditional RL methods fare poorly in CRMDPs, even under
strong simplifying assumptions and when trying to compensate for the possibly
corrupt rewards. Two ways around the problem are investigated. First, by giving
the agent richer data, such as in inverse reinforcement learning and
semi-supervised reinforcement learning, reward corruption stemming from
systematic sensory errors may sometimes be completely managed. Second, by using
randomisation to blunt the agent's optimisation, reward corruption can be
partially managed under some assumptions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Divisor-sum fibers | Let $s(\cdot)$ denote the sum-of-proper-divisors function, that is, $s(n) =
\sum_{d\mid n,~d<n}d$. Erdős-Granville-Pomerance-Spiro conjectured that for
any set $\mathcal{A}$ of asymptotic density zero, the preimage set
$s^{-1}(\mathcal{A})$ also has density zero. We prove a weak form of this
conjecture: If $\epsilon(x)$ is any function tending to $0$ as $x\to\infty$,
and $\mathcal{A}$ is a set of integers of cardinality at most
$x^{\frac12+\epsilon(x)}$, then the number of integers $n\le x$ with $s(n) \in
\mathcal{A}$ is $o(x)$, as $x\to\infty$. In particular, the EGPS conjecture
holds for infinite sets with counting function $O(x^{\frac12 + \epsilon(x)})$.
We also disprove a hypothesis from the same paper of EGPS by showing that for
any positive numbers $\alpha$ and $\epsilon$, there are integers $n$ with
arbitrarily many $s$-preimages lying between $\alpha(1-\epsilon)n$ and
$\alpha(1+\epsilon)n$. Finally, we make some remarks on solutions $n$ to
congruences of the form $\sigma(n) \equiv a\pmod{n}$, proposing a modification
of a conjecture appearing in recent work of the first two authors. We also
improve a previous upper bound for the number of solutions $n \leq x$, making
it uniform in $a$.
| 0 | 0 | 1 | 0 | 0 | 0 |
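For experimentation on small ranges, a direct implementation of the
sum-of-proper-divisors function s(n) defined above:

```python
def s(n: int) -> int:
    """s(n) = sum of divisors d of n with d < n."""
    total = 1 if n > 1 else 0        # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:          # avoid double-counting square roots
                total += n // d
        d += 1
    return total

assert s(12) == 1 + 2 + 3 + 4 + 6 == 16
```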
New zirconium hydrides predicted by structure search method based on first principles calculations | The formation of precipitated zirconium (Zr) hydrides is closely related to
the hydrogen embrittlement problem for the cladding materials of pressurized
water reactors (PWR). In this work, we systematically investigated the crystal
structures of zirconium hydride (ZrHx) with different hydrogen concentrations
(x = 0~2, atomic ratio) by combining the basin hopping algorithm with first
principles calculations. We conclude that the P3m1 $\zeta$-ZrH0.5 is
dynamically unstable, while a novel dynamically stable P3m1 ZrH0.5 structure
was discovered in the structure search. The stability of the bistable P42/nnm
ZrH1.5 structures and I4/mmm ZrH2 structures is also revisited. We find that
the P42/nnm (c/a > 1) ZrH1.5 is dynamically unstable, while the I4/mmm (c/a =
1.57) ZrH2 is dynamically stable. The P42/nnm (c/a < 1) ZrH1.5 might be a key
intermediate phase for the $\gamma \to \delta \to \epsilon$ phase transitions.
Additionally, using thermodynamic simulations, we find that $\delta$-ZrH1.5 is
the most stable structure at high temperature while ZrH2 is the most stable
hydride at low temperature. A slow cooling process will promote the formation
of $\delta$-ZrH1.5, and a fast cooling process will promote the formation of
$\gamma$-ZrH. These results may help to understand the phase transitions of
zirconium hydrides.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exchange striction driven magnetodielectric effect and potential photovoltaic effect in polar CaOFeS | CaOFeS is a semiconducting oxysulfide with polar layered triangular
structure. Here a comprehensive theoretical study has been performed to reveal
its physical properties, including magnetism, electronic structure, phase
transition, magnetodielectric effect, as well as optical absorption. Our
calculations confirm the Ising-like G-type antiferromagnetic ground state
driven by the next-nearest neighbor exchanges, which breaks the trigonal
symmetry and is responsible for the magnetodielectric effect driven by exchange
striction. In addition, a large coefficient of visible light absorption is
predicted, which leads to promising photovoltaic effect with the maximum
light-to-electricity energy conversion efficiency up to 24.2%.
| 0 | 1 | 0 | 0 | 0 | 0 |
Studying Magnetic Fields using Low-frequency Pulsar Observations | Low-frequency polarisation observations of pulsars, facilitated by
next-generation radio telescopes, provide powerful probes of astrophysical
plasmas that span many orders of magnitude in magnetic field strength and
scale: from pulsar magnetospheres to intervening magneto-ionic plasmas
including the ISM and the ionosphere. Pulsar magnetospheres with teragauss
field strengths can be explored through their numerous emission phenomena
across multiple frequencies, the mechanism behind which remains elusive.
Precise dispersion and Faraday rotation measurements towards a large number of
pulsars probe the three-dimensional large-scale (and eventually small-scale)
structure of the Galactic magnetic field, which plays a role in many
astrophysical processes, but is not yet well understood, especially towards the
Galactic halo. We describe some results and ongoing work from the Low Frequency
Array (LOFAR) and the Murchison Widefield Array (MWA) radio telescopes in these
areas. These and other pathfinder and precursor telescopes have reinvigorated
low-frequency science and build towards the Square Kilometre Array (SKA), which
will make significant advancements in studies of astrophysical magnetic fields
in the next 50 years.
| 0 | 1 | 0 | 0 | 0 | 0 |
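The standard estimate underlying the combined dispersion and Faraday rotation
measurements mentioned above: a pulsar's rotation measure (RM) and dispersion
measure (DM) together give the electron-density-weighted mean line-of-sight
magnetic field, <B_par> ~ 1.232 RM/DM microgauss, with RM in rad m^-2 and DM in
pc cm^-3. The numeric values below are made up.

```python
def mean_b_parallel(rm_rad_m2: float, dm_pc_cm3: float) -> float:
    """Electron-density-weighted mean line-of-sight field, in microgauss."""
    return 1.232 * rm_rad_m2 / dm_pc_cm3

print(f"<B_par> = {mean_b_parallel(50.0, 25.0):.2f} uG")  # ~2.46 uG
```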
Two-sided Facility Location | Recent years have witnessed the rise of many successful e-commerce
marketplace platforms like the Amazon marketplace, AirBnB, Uber/Lyft, and
Upwork, where a central platform mediates economic transactions between buyers
and sellers. Motivated by these platforms, we formulate a set of facility
location problems that we term Two-sided Facility location. In our model,
agents arrive at nodes in an underlying metric space, where the metric distance
between any buyer and seller captures the quality of the corresponding match.
The platform posts prices and wages at the nodes, and opens a set of facilities
to route the agents to. The agents at any facility are assumed to be matched.
The platform ensures high match quality by imposing a distance constraint
between a node and the facilities it is routed to. It ensures high service
availability by ensuring flow to the facility is at least a pre-specified lower
bound. Subject to these constraints, the goal of the platform is to maximize
the social surplus (or gains from trade) subject to weak budget balance, i.e.,
profit being non-negative.
We present an approximation algorithm for this problem that yields a $(1 +
\epsilon)$ approximation to surplus for any constant $\epsilon > 0$, while
relaxing the match quality (i.e., maximum distance of any match) by a constant
factor. We use an LP rounding framework that easily extends to other objectives
such as maximizing volume of trade or profit.
We justify our models by considering a dynamic marketplace setting where
agents arrive according to a stochastic process and have finite patience (or
deadlines) for being matched. We perform queueing analysis to show that for
policies that route agents to facilities and match them, ensuring a low
abandonment probability of agents reduces to ensuring sufficient flow arrives
at each facility.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimal hedging under fast-varying stochastic volatility | In a market with a rough or Markovian mean-reverting stochastic volatility
there is no perfect hedge. Here it is shown how various delta-type hedging
strategies perform and can be evaluated in such markets. A precise
characterization of the hedging cost, the replication cost caused by the
volatility fluctuations, is presented in an asymptotic regime of rapid mean
reversion for the volatility fluctuations. The optimal dynamic asset based
hedging strategy in the considered regime is identified as the so-called
`practitioners' delta hedging scheme. It is moreover shown that the
performances of the delta-type hedging schemes are essentially independent of
the regularity of the volatility paths in the considered regime and that the
hedging costs are related to a vega risk martingale whose magnitude is
proportional to a new market risk parameter.
| 0 | 0 | 0 | 0 | 0 | 1 |
Personal Food Computer: A new device for controlled-environment agriculture | Due to their interdisciplinary nature, devices for controlled-environment
agriculture have the possibility to turn into ideal tools not only to conduct
research on plant phenology but also to create curricula in a wide range of
disciplines. Controlled-environment devices are increasing their
functionalities as well as improving their accessibility. Traditionally,
building one of these devices from scratch implies knowledge in fields such as
mechanical engineering, digital electronics, programming, and energy
management. However, the requirements of an effective controlled-environment
device for personal use bring new constraints and challenges. This paper
presents the OpenAg Personal Food Computer (PFC); a low cost desktop size
platform, which not only targets plant phenology researchers but also
hobbyists, makers, and teachers from elementary to high-school levels (K-12).
The PFC is completely open-source and it is intended to become a tool that can
be used for collective data sharing and plant growth analysis. Thanks to its
modular design, the PFC can be used in a large spectrum of activities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Quenched Noise and Nonlinear Oscillations in Bistable Multiscale Systems | Nonlinear oscillators are a key modelling tool in many applications. The
influence of annealed noise on nonlinear oscillators has been studied
intensively. It can induce effects in nonlinear oscillators not present in the
deterministic setting. Yet, there is no theory regarding the quenched noise
scenario of random parameters sampled on fixed time intervals, although this
situation is often a lot more natural. Here we study a paradigmatic nonlinear
oscillator of van-der-Pol/FitzHugh-Nagumo type under quenched noise as a
piecewise-deterministic Markov process. There are several interesting effects,
such as period shifts and new types of trapped small-amplitude oscillations,
which can be captured analytically. Furthermore, we numerically discover
quenched resonance and show that it differs significantly from previous
resonance effects, in which a finite noise level is optimal. This demonstrates
that quenched oscillators can be viewed as a new building block of nonlinear
dynamics.
| 0 | 1 | 1 | 0 | 0 | 0 |
Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning | The recently proposed Multi-Layer Convolutional Sparse Coding (ML-CSC) model,
consisting of a cascade of convolutional sparse layers, provides a new
interpretation of Convolutional Neural Networks (CNNs). Under this framework,
the computation of the forward pass in a CNN is equivalent to a pursuit
algorithm aiming to estimate the nested sparse representation vectors -- or
feature maps -- from a given input signal. Despite having served as a pivotal
connection between CNNs and sparse modeling, a deeper understanding of the
ML-CSC is still lacking: there are no pursuit algorithms that can serve this
model exactly, nor are there conditions to guarantee a non-empty model. While
one can easily obtain signals that approximately satisfy the ML-CSC
constraints, it remains unclear how to simply sample from the model and, more
importantly, how one can train the convolutional filters from real data.
In this work, we propose a sound pursuit algorithm for the ML-CSC model by
adopting a projection approach. We provide new and improved bounds on the
stability of the solution of such a pursuit, and we analyze different
alternatives to implement it in practice. We show that the training of the
filters is essential to allow for non-trivial signals in the model, and we
derive an online algorithm to learn the dictionaries from real data,
effectively resulting in cascaded sparse convolutional layers. Last, but not
least, we demonstrate the applicability of the ML-CSC model for several
applications in an unsupervised setting, providing competitive results. Our
work represents a bridge between matrix factorization, sparse dictionary
learning and sparse auto-encoders, and we analyze these connections in detail.
| 1 | 0 | 0 | 1 | 0 | 0 |
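A toy NumPy sketch of the layered-pursuit reading of the forward pass described
above: each layer applies a dictionary transpose followed by soft thresholding,
producing nested sparse codes. Matrix (rather than convolutional) dictionaries
and all shapes and thresholds are our own simplifications.

```python
import numpy as np

def soft(x, thr):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def layered_pursuit(signal, dicts, thrs):
    """Nested sparse codes: gamma_i = soft(D_i^T gamma_{i-1}, thr_i)."""
    gamma, codes = signal, []
    for D, t in zip(dicts, thrs):
        gamma = soft(D.T @ gamma, t)
        codes.append(gamma)
    return codes

rng = np.random.default_rng(0)
dicts = [rng.normal(size=(64, 128)) / 8.0, rng.normal(size=(128, 256)) / 11.0]
codes = layered_pursuit(rng.normal(size=64), dicts, thrs=[0.5, 0.5])
print("nonzeros per layer:", [int((c != 0).sum()) for c in codes])
```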
Coble's group and the integrability of the Gosset-Elte polytopes and tessellations | This paper considers the planar figure of a combinatorial polytope or
tessellation identified by the Coxeter symbol $k_{i,j}$ , inscribed in a conic,
satisfying the geometric constraint that each octahedral cell has a centre.
This realisation exists, and is movable, on account of some constraints being
satisfied as a consequence of the others. A close connection to the birational
group found originally by Coble in the different context of invariants for sets
of points in projective space, allows us to specify precisely a determining subset
of vertices that may be freely chosen. This gives a unified geometric view of
certain integrable discrete systems in one, two and three dimensions. Making
contact with previous geometric accounts in the case of three dimensions, it is
shown how the figure also manifests as a configuration of circles generalising
the Clifford lattices, and how it can be applied to construct the spatial
point-line configurations called the Desargues maps.
| 0 | 1 | 1 | 0 | 0 | 0 |
Latent Hinge-Minimax Risk Minimization for Inference from a Small Number of Training Samples | Deep Learning (DL) methods show very good performance when trained on large,
balanced data sets. However, many practical problems involve imbalanced data
sets and/or classes with a small number of training samples. The performance
of DL methods as well as more traditional classifiers drops significantly in
such settings. Most of the existing solutions for imbalanced problems focus on
customizing the data for training. A more principled solution is to use mixed
Hinge-Minimax risk [19] specifically designed to solve binary problems with
imbalanced training sets. Here we propose a Latent Hinge Minimax (LHM) risk and
a training algorithm that generalizes this paradigm to an ensemble of
hyperplanes that can form arbitrary complex, piecewise linear boundaries. To
extract good features, we combine the LHM model with a CNN via transfer
learning. To solve multi-class problems, we map pre-trained category-specific
LHM classifiers to a multi-class neural network and adjust the weights with
very fast tuning. The LHM classifier enables the use of unlabeled data in its
training, and the mapping allows for multi-class inference, resulting in a classifier that
performs better than alternatives when trained on a small number of training
samples.
| 1 | 0 | 0 | 0 | 0 | 0 |
Autonomous Vehicle Speed Control for Safe Navigation of Occluded Pedestrian Crosswalk | Both humans and the sensors on an autonomous vehicle have limited sensing
capabilities. When these limitations coincide with scenarios involving
vulnerable road users, it becomes important to account for these limitations in
the motion planner. For the scenario of an occluded pedestrian crosswalk, the
speed of the approaching vehicle should be a function of the amount of
uncertainty on the roadway. In this work, the longitudinal controller is
formulated as a partially observable Markov decision process and dynamic
programming is used to compute the control policy. The control policy scales
the speed profile to be used by a model predictive steering controller.
| 1 | 0 | 0 | 0 | 0 | 0 |
Statistical inference methods for cumulative incidence function curves at a fixed point in time | Competing risks data arise frequently in clinical trials. When the
proportional subdistribution hazard assumption is violated or two cumulative
incidence function (CIF) curves cross, rather than comparing the overall
treatment effects, researchers may be interested in focusing on a comparison of
clinical utility at some fixed time points. This paper extends a series of
tests constructed based on a pseudo-value regression technique or on different
transformation functions for CIFs, with their variances based on Gaynor's or
Aalen's work, and compares the differences among CIFs at a given time point.
| 0 | 0 | 0 | 1 | 0 | 0 |
Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines | We compare and contrast the statistical physics and quantum physics inspired
approaches for unsupervised generative modeling of classical data. The two
approaches represent probabilities of observed data using energy-based models
and quantum states, respectively. Classical and quantum information patterns of
the target datasets therefore provide principled guidelines for structural
design and learning in these two approaches. Taking the restricted Boltzmann
machine (RBM) as an example, we analyze the information theoretical bounds of
the two approaches. We verify our reasoning by comparing the performance of
RBMs of various architectures on the standard MNIST dataset.
| 0 | 0 | 0 | 1 | 0 | 0 |
A Cofibration Category on Closure Spaces | We construct a cofibration category structure on the category of closure
spaces $\mathbf{Cl}$, the category whose objects are sets endowed with a
Čech closure operator and whose morphisms are the continuous maps between
them. We then study various closure structures on metric spaces, graphs, and
simplicial complexes, showing how each case gives rise to an interesting
homotopy theory. In particular, we show that there exists a natural family of
closure structures on metric spaces which produces a non-trivial homotopy
theory for finite metric spaces, i.e. point clouds, the spaces of interest in
topological data analysis. We then give a closure structure to graphs and
simplicial complexes which may be used to construct a new combinatorial (as
opposed to topological) homotopy theory for each skeleton of those spaces. We
show that there is a Seifert-van Kampen theorem for closure spaces, a
well-defined notion of persistent homotopy and an associated interleaving
distance, and, as an illustration of the difference with the topological
setting, we calculate the fundamental group for the circle and the wedge of
circles endowed with different closure structures.
| 0 | 0 | 1 | 0 | 0 | 0 |
XSAT of Linear CNF Formulas | Open questions with respect to the computational complexity of linear CNF
formulas in connection with regularity and uniformity are addressed. In
particular it is proven that any l-regular monotone CNF formula is
XSAT-unsatisfiable if its number of clauses m is not a multiple of l. For exact
linear formulas one finds, surprisingly, that l-regularity implies k-uniformity,
with m = 1 + k(l-1), and the allowed k-values obey k(k-1) = 0 (mod l). Then the
computational complexity of the class of monotone exact linear and l-regular
CNF formulas with respect to XSAT can be determined: XSAT-satisfiability is
either trivial, if m is not a multiple of l, or it can be decided in
sub-exponential time, namely O(exp(n^(1/2))). Sub-exponential time behaviour for
the wider class of regular and uniform linear CNF formulas can be shown for
certain subclasses.
| 1 | 0 | 1 | 0 | 0 | 0 |
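A tiny brute-force XSAT checker, handy for playing with the counting conditions
above on very small instances (it is exponential in the number of variables).
The triangle example below is 2-regular and monotone with m = 3 clauses; since
3 is not a multiple of l = 2, the result above says it must be
XSAT-unsatisfiable, and the checker confirms this.

```python
from itertools import product

def xsat(clauses, n_vars):
    """Exact satisfiability: every clause has exactly one true literal.
    Clauses are lists of positive variable indices (monotone formulas)."""
    for bits in product([0, 1], repeat=n_vars):
        if all(sum(bits[v] for v in cl) == 1 for cl in clauses):
            return True
    return False

# 2-regular monotone formula with m = 3 clauses ("triangle"):
print(xsat([[0, 1], [1, 2], [2, 0]], n_vars=3))   # False
```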
Drone Squadron Optimization: a Self-adaptive Algorithm for Global Numerical Optimization | This paper proposes Drone Squadron Optimization (DSO), a new self-adaptive
metaheuristic for global numerical optimization which is updated online by a
hyper-heuristic. DSO is an artifact-inspired technique, as opposed to many
algorithms used nowadays, which are nature-inspired. DSO is very flexible
because it is not related to behaviors or natural phenomena. DSO has two core
parts: the semi-autonomous drones that fly over a landscape to explore, and the
Command Center that processes the retrieved data and updates the drones'
firmware whenever necessary. The self-adaptive aspect of DSO in this work is
the perturbation/movement scheme, which is the procedure used to generate
target coordinates. This procedure is evolved by the Command Center during the
global optimization process in order to adapt DSO to the search landscape. DSO
was evaluated on a set of widely employed benchmark functions. The statistical
analysis of the results shows that the proposed method is competitive with the
other methods in the comparison; its performance is promising, but several
future improvements are planned.
| 1 | 0 | 1 | 0 | 0 | 0 |
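A very loose toy sketch of the two-part structure described above: "drones"
sample candidate coordinates using the current perturbation scheme, and a
"command center" keeps whichever scheme variant is currently producing better
solutions. This is a stand-in for the idea only, not the published DSO
algorithm; the objective, step rules, and all constants are made up.

```python
import numpy as np

def sphere(x):                                  # toy benchmark objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_drones = 10, 20
best = rng.uniform(-5, 5, size=dim)
step = 1.0                                      # current "firmware" parameter
for _ in range(200):
    trial_steps = [0.9 * step, step, 1.1 * step]   # competing schemes
    outcomes = []
    for s in trial_steps:
        pts = best + s * rng.normal(size=(n_drones, dim))  # drones explore
        vals = np.array([sphere(p) for p in pts])
        outcomes.append((vals.min(), pts[vals.argmin()], s))
    f_new, x_new, s_new = min(outcomes, key=lambda o: o[0])
    if f_new < sphere(best):                    # command center keeps winners
        best, step = x_new, s_new
print("best objective:", sphere(best))
```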
A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets | The original ImageNet dataset is a popular large-scale benchmark for training
Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm
design, architecture search, and hyperparameter tuning) on the original dataset
might be prohibitive, we propose to consider a downsampled version of ImageNet.
In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet,
our proposed ImageNet32$\times$32 (and its variants ImageNet64$\times$64 and
ImageNet16$\times$16) contains exactly the same number of classes and images as
ImageNet, with the only difference that the images are downsampled to
32$\times$32 pixels per image (64$\times$64 and 16$\times$16 pixels for the
variants, respectively). Experiments on these downsampled variants are
dramatically faster than on the original ImageNet and the characteristics of
the downsampled datasets with respect to optimal hyperparameters appear to
remain similar. The proposed datasets and scripts to reproduce our results are
available at this http URL and
this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
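The downsampling step itself is simple; a Pillow sketch is below. The box
filter is an assumption on our part (resampling choices affect results), and
the file names are hypothetical placeholders.

```python
from PIL import Image

def downsample(path_in: str, path_out: str, size: int = 32) -> None:
    """Downsample one image to size x size pixels."""
    img = Image.open(path_in).convert("RGB")
    img.resize((size, size), resample=Image.BOX).save(path_out)

# Hypothetical file names:
# downsample("n01440764_42.JPEG", "n01440764_42_32x32.png", size=32)
```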
Stability of Conditional Sequential Monte Carlo | The particle Gibbs (PG) sampler is a Markov Chain Monte Carlo (MCMC)
algorithm, which uses an interacting particle system to perform the Gibbs
steps. Each Gibbs step consists of simulating a particle system conditioned on
one particle path. It relies on a conditional Sequential Monte Carlo (cSMC)
method to create the particle system. We propose a novel interpretation of the
cSMC algorithm as a perturbed Sequential Monte Carlo (SMC) method and apply
telescopic decompositions developed for the analysis of SMC algorithms
\cite{delmoral2004} to derive a bound for the distance between the expected
sampled path from cSMC and the target distribution of the MCMC algorithm. This
can be used to get a uniform ergodicity result. In particular, we can show that
the mixing rate of cSMC can be kept constant by increasing the number of
particles linearly with the number of observations. Based on our decomposition,
we also prove a central limit theorem for the cSMC algorithm, which cannot be
done using the approaches in \cite{Andrieu2013} and \cite{Lindsten2014}.
| 0 | 0 | 0 | 1 | 0 | 0 |
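A schematic NumPy sketch of one cSMC sweep for a generic state-space model, to
fix ideas: particle N is pinned to the conditioned path, the remaining
particles are propagated and resampled as in a bootstrap filter, and a new path
is traced back at the end. The model functions, the AR(1) example, and all
names are placeholders, not the paper's setting.

```python
import numpy as np

def csmc_sweep(y, x_ref, n_part, transition, loglik, rng):
    """One conditional SMC sweep; returns a freshly sampled path."""
    T = len(y)
    x = np.empty((T, n_part))
    anc = np.zeros((T, n_part), dtype=int)
    x[0, :-1] = transition(np.zeros(n_part - 1), rng)  # free particles
    x[0, -1] = x_ref[0]                                # pinned particle
    logw = loglik(y[0], x[0])
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc[t, :-1] = rng.choice(n_part, size=n_part - 1, p=w)
        anc[t, -1] = n_part - 1                        # keep reference lineage
        x[t, :-1] = transition(x[t - 1, anc[t, :-1]], rng)
        x[t, -1] = x_ref[t]
        logw = loglik(y[t], x[t])
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = int(rng.choice(n_part, p=w))                   # trace back one path
    path = np.empty(T)
    for t in range(T - 1, 0, -1):
        path[t] = x[t, k]
        k = anc[t, k]
    path[0] = x[0, k]
    return path

rng = np.random.default_rng(0)
f = lambda xp, rng: 0.9 * xp + rng.normal(size=xp.shape)  # AR(1) transition
g = lambda yt, xt: -0.5 * (yt - xt) ** 2                  # Gaussian log-weight
y = rng.normal(size=25)
new_path = csmc_sweep(y, x_ref=np.zeros(25), n_part=100,
                      transition=f, loglik=g, rng=rng)
```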
Diffusion along chains of normally hyperbolic cylinders | The present paper is part of a series of articles dedicated to the existence
of Arnold diffusion for cusp-residual perturbations of Tonelli Hamiltonians on
$\mathbb{A}^3$. Our goal here is to construct an abstract geometric framework
that can be used to prove the existence of diffusing orbits in the so-called a
priori stable setting, once the preliminary geometric reductions are performed.
Our framework also applies, rather directly, to the a priori unstable setting.
The main geometric objects of interest are $3$-dimensional normally
hyperbolic invariant cylinders with boundary, which in particular admit
well-defined stable and unstable manifolds. These enable us to define, in our
setting, chains of cylinders, i.e., finite, ordered families of cylinders in
which each cylinder admits homoclinic connections, and any two consecutive
elements in the family admit heteroclinic connections.
Our main result is the existence of diffusing orbits drifting along such
chains, under precise conditions on the dynamics on the cylinders, and on their
homoclinic and heteroclinic structure.
| 0 | 1 | 1 | 0 | 0 | 0 |
Multi-wavelength Spectral Analysis of Ellerman Bombs Observed by FISS and IRIS | Ellerman bombs (EBs) are a kind of solar activity that is suggested to
occur in the lower atmosphere. Recent observations using the Interface Region
Imaging Spectrograph (IRIS) show connections of EBs and IRIS bombs (IBs),
implying that EBs might be heated to a much higher temperature ($8\times10^{4}$
K) than previous results. Here we perform a spectral analysis of the EBs
simultaneously observed by the Fast Imaging Solar Spectrograph (FISS) and IRIS.
The observational results show clear evidence of heating in the lower
atmosphere, indicated by the wing enhancement in H$\alpha$, Ca II 8542 Å
and Mg II triplet lines, and also by brightenings in the images of 1700 Å
and 2832 Å ultraviolet continuum channels. Additionally, the Mg II triplet
line intensity is correlated with that of H$\alpha$ when the EB occurs,
indicating the possibility of using the triplet as an alternative way to identify
EBs. However, we do not find any signal in IRIS hotter lines (C II and Si IV).
For further analysis, we employ a two-cloud model to fit the two chromospheric
lines (H$\alpha$ and Ca II 8542 Å) simultaneously, and obtain a temperature
enhancement of 2300 K for a strong EB. This temperature is among the highest of
previous modeling results while still insufficient to produce IB signatures at
ultraviolet wavelengths.
| 0 | 1 | 0 | 0 | 0 | 0 |
Numerical study of the Kadomtsev--Petviashvili equation and dispersive shock waves | A detailed numerical study of the long time behaviour of dispersive shock
waves in solutions to the Kadomtsev-Petviashvili (KP) I equation is presented.
It is shown that modulated lump solutions emerge from the dispersive shock
waves. For the description of dispersive shock waves, Whitham modulation
equations for KP are obtained. It is shown that the modulation equations near
the soliton line are hyperbolic for the KPII equation while they are elliptic
for the KPI equation leading to a focusing effect and the formation of lumps.
Such a behaviour is similar to the appearance of breathers for the focusing
nonlinear Schrödinger equation in the semiclassical limit.
| 0 | 1 | 1 | 0 | 0 | 0 |
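For reference, the KP equation in one standard normalization (added for context; the paper's exact scaling may differ), where the sign of $\sigma^2$ separates the two cases discussed above:

```latex
\[
  \partial_x\left(u_t + 6\,u\,u_x + u_{xxx}\right) + 3\,\sigma^2\,u_{yy} = 0,
  \qquad \sigma^2 = -1 \ \text{(KPI)}, \quad \sigma^2 = +1 \ \text{(KPII)}.
\]
```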
Asymptotic power of Rao's score test for independence in high dimensions | Let ${\bf R}$ be the Pearson correlation matrix of $m$ normal random
variables. Rao's score test for the independence hypothesis $H_0 : {\bf R}
= {\bf I}_m$, where ${\bf I}_m$ is the identity matrix of dimension $m$, was
first considered by Schott (2005) in the high dimensional setting. In this
paper, we study the asymptotic minimax power function of this test, under an
asymptotic regime in which both $m$ and the sample size $n$ tend to infinity
with the ratio $m/n$ upper bounded by a constant. In particular, our result
implies that Rao's score test is rate-optimal for detecting the dependency
signal $\|{\bf R} - {\bf I}_m\|_F$ of order $\sqrt{m/n}$, where $\|\cdot\|_F$
is the matrix Frobenius norm.
| 0 | 0 | 1 | 1 | 0 | 0 |
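A minimal numerical sketch of the statistic behind such tests: the score test for $H_0: {\bf R} = {\bf I}_m$ is driven by the sum of squared off-diagonal sample correlations. Calibrating by column-wise permutation (rather than the asymptotic theory studied in the paper) is our simplification.

```python
import numpy as np

def sum_sq_corr(X):
    # sum of squared off-diagonal sample correlations
    R = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices_from(R, k=1)
    return float(np.sum(R[iu] ** 2))

rng = np.random.default_rng(0)
n, m = 100, 50
X = rng.standard_normal((n, m))          # data generated under H0
stat = sum_sq_corr(X)
# permutation null: shuffling each column independently destroys dependence
null = [sum_sq_corr(np.column_stack([rng.permutation(X[:, j]) for j in range(m)]))
        for _ in range(200)]
pval = np.mean([s >= stat for s in null])
print(f"statistic = {stat:.2f}, permutation p-value = {pval:.3f}")
```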
Sensing and Modeling Human Behavior Using Social Media and Mobile Data | In the past few years we have witnessed the emergence of the new discipline of
computational social science, which promotes a new data-driven and
computation-based approach to social sciences. In this article we discuss how
the availability of new technologies such as online social media and mobile
smartphones has allowed researchers to passively collect human behavioral data
at a scale and a level of granularity that were just unthinkable some years
ago. We also discuss how these digital traces can then be used to prove (or
disprove) existing theories and develop new models of human behavior.
| 1 | 1 | 0 | 0 | 0 | 0 |
Interior transmission eigenvalue problems on compact manifolds with boundary conductivity parameters | In this paper, we consider an interior transmission eigenvalue (ITE) problem
on some compact $C^{\infty }$-Riemannian manifolds with a common smooth
boundary. In particular, these manifolds may have different topologies, but we
impose some conditions on Riemannian metrics, indices of refraction, and
boundary conductivity parameters on the boundary. Then we prove the
discreteness of the set of ITEs, the existence of infinitely many ITEs, and a
Weyl-type lower bound. For our settings, we can adopt the argument by
Lakshtanov and Vainberg, considering the Dirichlet-to-Neumann map. As an
application, we derive the existence of non-scattering energies for
time-harmonic acoustic equations. For the sake of simplicity, we consider the
scattering theory on the Euclidean space. However, the argument is applicable
for certain kinds of non-compact manifolds with ends on which we can define the
scattering matrix.
| 0 | 0 | 1 | 0 | 0 | 0 |
A superpolynomial lower bound for the size of non-deterministic complement of an unambiguous automaton | Unambiguous non-deterministic finite automata have intermediate expressive
power and succinctness between deterministic and non-deterministic automata. It
has been conjectured that every unambiguous non-deterministic one-way finite
automaton (1UFA) recognizing some language L can be converted into a 1UFA
recognizing the complement of the original language L with polynomial increase
in the number of states. We disprove this conjecture by presenting a family of
1UFAs on a single-letter alphabet such that recognizing the complements of the
corresponding languages requires superpolynomial increase in the number of
states even for generic non-deterministic one-way finite automata. We also note
that both the languages and their complements can be recognized by sweeping
deterministic automata with a linear increase in the number of states.
| 1 | 0 | 0 | 0 | 0 | 0 |
Taxonomy Induction using Hypernym Subsequences | We propose a novel, semi-supervised approach towards domain taxonomy
induction from an input vocabulary of seed terms. Unlike all previous
approaches, which typically extract direct hypernym edges for terms, our
approach utilizes a novel probabilistic framework to extract hypernym
subsequences. Taxonomy induction from extracted subsequences is cast as an
instance of the minimum-cost flow problem on a carefully designed directed
graph. Through experiments, we demonstrate that our approach outperforms
state-of-the-art taxonomy induction approaches across four languages.
Importantly, we also show that our approach is robust to the presence of noise
in the input vocabulary. To the best of our knowledge, no previous approaches
have been empirically proven to manifest noise-robustness in the input
vocabulary.
| 1 | 0 | 0 | 0 | 0 | 0 |
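A toy illustration of casting edge selection as minimum-cost flow with networkx; the graph, demands, and costs below are invented for the example and are not the paper's construction.

```python
import networkx as nx

G = nx.DiGraph()
G.add_node("root", demand=-2)      # the source supplies two units of flow
G.add_node("dog", demand=1)        # each leaf term absorbs one unit
G.add_node("cat", demand=1)
G.add_edge("root", "animal", weight=1, capacity=2)
G.add_edge("animal", "dog", weight=1, capacity=1)
G.add_edge("animal", "cat", weight=1, capacity=1)
G.add_edge("root", "dog", weight=5, capacity=1)   # costly direct hypernym edge
flow = nx.min_cost_flow(G)
print(flow)   # both units route through "animal", the cheaper hypernym path
```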
Identifying Clickbait Posts on Social Media with an Ensemble of Linear Models | The purpose of a clickbait is to make a link so appealing that people click
on it. However, the content of such articles is often not related to the title,
is of poor quality, and in the end leaves the reader unsatisfied.
To help the readers, the organizers of the clickbait challenge
(this http URL) asked the participants to build a machine
learning model for scoring articles with respect to their "clickbaitness".
In this paper we propose to solve the clickbait problem with an ensemble of
Linear SVM models, and our approach was tested successfully in the challenge:
it achieved an MSE of 0.036 and ranked 3rd among all the solutions
to the contest.
| 1 | 0 | 0 | 0 | 0 | 0 |
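A minimal scikit-learn sketch of the general approach (an ensemble of linear SVM regressors averaged to score "clickbaitness"); the features, data, and hyperparameters are mocked and are not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))       # stand-in feature vectors
y = rng.uniform(0.0, 1.0, size=500)      # stand-in clickbaitness scores

# ensemble over regularization strengths; predictions are averaged
models = [LinearSVR(C=c, max_iter=10_000).fit(X, y) for c in (0.1, 1.0, 10.0)]
pred = np.mean([m.predict(X) for m in models], axis=0)
print("in-sample MSE:", float(np.mean((pred - y) ** 2)))
```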
WYS*: A DSL for Verified Secure Multi-party Computations | Secure multi-party computation (MPC) enables a set of mutually distrusting
parties to cooperatively compute, using a cryptographic protocol, a function
over their private data. This paper presents Wys*, a new domain-specific
language (DSL) for writing mixed-mode MPCs. Wys* is an embedded DSL hosted in
F*, a verification-oriented, effectful programming language. Wys* source
programs are essentially F* programs written in a custom MPC effect, meaning
that the programmers can use F*'s logic to verify the correctness and security
properties of their programs. To reason about the distributed runtime semantics
of these programs, we formalize a deep embedding of Wys*, also in F*. We
mechanize the necessary metatheory to prove that the properties verified for
the Wys* source programs carry over to the distributed, multi-party semantics.
Finally, we use F*'s extraction to extract an interpreter that we have proved
matches this semantics, yielding a partially verified implementation. Wys* is
the first DSL to enable formal verification of MPC programs. We have
implemented several MPC protocols in Wys*, including private set intersection,
joint median, and an MPC card dealing application, and have verified their
correctness and security.
| 1 | 0 | 0 | 0 | 0 | 0 |
Probabilities of causation of climate changes | Multiple changes in Earth's climate system have been observed over the past
decades. Determining how likely each of these changes is to have been caused
by human influence is important for decision making on mitigation and
adaptation policy. Here we describe an approach for deriving the probability
that anthropogenic forcings have caused a given observed change. The proposed
approach is anchored in causal counterfactual theory (Pearl 2009), which was
introduced recently and has in fact already been partly used in the context
of extreme weather event attribution (EA). We argue that these concepts are
also relevant to, and can be straightforwardly extended to, the context of
detection and attribution of long-term trends associated with climate change
(D&A). For this purpose, and in agreement with the principle of
"fingerprinting" applied in the conventional D&A framework, a trajectory of
change is converted into an event occurrence defined by maximizing the causal
evidence associated with the forcing under scrutiny. Other key assumptions used
in the conventional D&A framework, in particular those related to numerical
model errors, can also be conveniently adapted to this approach. Our proposal
thus allows us to bridge the conventional framework with the standard causal
theory, in an attempt to improve the quantification of causal probabilities. An
illustration suggests that our approach is prone to yield a significantly
higher estimate of the probability that anthropogenic forcings have caused the
observed temperature change, thus supporting more assertive causal claims.
| 0 | 0 | 0 | 1 | 0 | 0 |
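For context, a standard identity from event attribution that this line of work builds on: under exogeneity and monotonicity, the probability of necessary causation reduces to the fraction of attributable risk, where $p_1$ ($p_0$) is the probability of the event in the factual (counterfactual, no-anthropogenic-forcing) world:

```latex
\[
  \mathrm{PN} \;=\; 1 - \frac{p_0}{p_1} \;=\; \mathrm{FAR}.
\]
```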
A Realistic Dataset for the Smart Home Device Scheduling Problem for DCOPs | The field of Distributed Constraint Optimization has gained momentum in
recent years thanks to its ability to address various applications related to
multi-agent cooperation. While techniques to solve Distributed Constraint
Optimization Problems (DCOPs) are abundant and have matured substantially since
the field's inception, the number of realistic DCOP applications and benchmarks
used to assess the performance of DCOP algorithms is lagging behind. Against
this background we (i) introduce the Smart Home Device Scheduling (SHDS)
problem, which describes the problem of coordinating smart device schedules
across multiple homes as a multi-agent system, (ii) detail the physical models
adopted to simulate smart sensors, smart actuators, and home environments, and
(iii) introduce a DCOP realistic benchmark for SHDS problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
The extra scalar degrees of freedom from the two Higgs doublet model for dark energy | In principle a minimal extension of the standard model of Particle Physics,
the two Higgs doublet model, can be invoked to explain the scalar field
responsible for dark energy. The two doublets are in general mixed.
diagonalization, the lightest CP-even Higgs and CP-odd Higgs are jointly taken
to be the dark energy candidate. The dark energy obtained from Higgs fields in
this case is indistinguishable from the cosmological constant.
| 0 | 1 | 0 | 0 | 0 | 0 |
Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience | We consider the problem of transferring policies to the real world by
training on a distribution of simulated scenarios. Rather than manually tuning
the randomization of simulations, we adapt the simulation parameter
distribution using a few real world roll-outs interleaved with policy training.
In doing so, we are able to change the distribution of simulations to improve
the policy transfer by matching the policy behavior in simulation and the real
world. We show that policies trained with our method are able to reliably
transfer to different robots in two real world tasks: swing-peg-in-hole and
opening a cabinet drawer. The video of our experiments can be found at
this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
Revisiting Simple Neural Networks for Learning Representations of Knowledge Graphs | We address the problem of learning vector representations for entities and
relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This
problem has received significant attention in the past few years and multiple
methods have been proposed. Most of the existing methods in the literature use
a predefined characteristic scoring function for evaluating the correctness of
KG triples. These scoring functions distinguish correct triples (high score)
from incorrect ones (low score). However, their performance varies across
different datasets. In this work, we demonstrate that a simple neural network
based score function can consistently achieve near state-of-the-art performance
on multiple datasets. We also quantitatively demonstrate biases in standard
benchmark datasets, and highlight the need to perform evaluation spanning
various datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
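A minimal sketch of what "a simple neural network based score function" can look like, in the spirit of ER-MLP; the architecture and sizes are our assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, d, hidden = 1000, 50, 64, 128
E = rng.normal(scale=0.1, size=(n_ent, d))      # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, d))      # relation embeddings
W1 = rng.normal(scale=0.1, size=(3 * d, hidden))
w2 = rng.normal(scale=0.1, size=hidden)

def score(h, r, t):
    # concatenate head, relation, tail embeddings; one hidden layer
    x = np.concatenate([E[h], R[r], E[t]])
    return float(np.tanh(x @ W1) @ w2)           # higher = more plausible

print(score(0, 3, 17))   # untrained score for the triple (0, 3, 17)
```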
On the tail behavior of a class of multivariate conditionally heteroskedastic processes | Conditions for geometric ergodicity of multivariate autoregressive
conditional heteroskedasticity (ARCH) processes, with the so-called BEKK (Baba,
Engle, Kraft, and Kroner) parametrization, are considered. We show for a class
of BEKK-ARCH processes that the invariant distribution is regularly varying. In
order to account for the possibility of different tail indices of the
marginals, we consider the notion of vector scaling regular variation, in the
spirit of Perfekt (1997, Advances in Applied Probability, 29, pp. 138-164). The
characterization of the tail behavior of the processes is used for deriving the
asymptotic properties of the sample covariance matrices.
| 0 | 0 | 1 | 1 | 0 | 0 |
On the Complexity of Detecting Convexity over a Box | It has recently been shown that the problem of testing global convexity of
polynomials of degree four is {strongly} NP-hard, answering an open question of
N.Z. Shor. This result is minimal in the degree of the polynomial when global
convexity is of concern. In a number of applications however, one is interested
in testing convexity only over a compact region, most commonly a box (i.e.,
hyper-rectangle). In this paper, we show that this problem is also strongly
NP-hard, in fact for polynomials of degree as low as three. This result is
minimal in the degree of the polynomial and in some sense justifies why
convexity detection in nonlinear optimization solvers is limited to quadratic
functions or functions with special structure. As a byproduct, our proof shows
that the problem of testing whether all matrices in an interval family are
positive semidefinite is strongly NP-hard. This problem, which was previously
shown to be (weakly) NP-hard by Nemirovski, is of independent interest in the
theory of robust control.
| 0 | 0 | 0 | 1 | 0 | 0 |
Variance-Reduced Stochastic Learning under Random Reshuffling | Several useful variance-reduced stochastic gradient algorithms, such as SVRG,
SAGA, Finito, and SAG, have been proposed to minimize empirical risks with
linear convergence properties to the exact minimizer. The existing convergence
results assume uniform data sampling with replacement. However, it has been
observed in related works that random reshuffling can deliver superior
performance over uniform sampling and, yet, no formal proofs or guarantees of
exact convergence exist for variance-reduced algorithms under random
reshuffling. This paper makes two contributions. First, it resolves this open
issue and provides the first theoretical guarantee of linear convergence under
random reshuffling for SAGA; the argument is also adaptable to other
variance-reduced algorithms. Second, under random reshuffling, the paper
proposes a new amortized variance-reduced gradient (AVRG) algorithm with
constant storage requirements compared to SAGA and with balanced gradient
computations compared to SVRG. AVRG is also shown analytically to converge
linearly.
| 1 | 0 | 0 | 1 | 0 | 0 |
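A schematic SAGA-style loop under random reshuffling for least squares, to make the sampling scheme concrete; this illustrates the analyzed setting, not the AVRG algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, mu = 200, 5, 0.01
A, b = rng.standard_normal((N, d)), rng.standard_normal(N)
w = np.zeros(d)
grads = A * (A @ w - b)[:, None]     # table of per-sample gradients
avg = grads.mean(axis=0)

for epoch in range(50):
    for i in rng.permutation(N):     # sampling without replacement per epoch
        g_new = A[i] * (A[i] @ w - b[i])
        w -= mu * (g_new - grads[i] + avg)
        avg += (g_new - grads[i]) / N
        grads[i] = g_new

print("residual norm:", float(np.linalg.norm(A @ w - b)))
```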
CELIO: An application development framework for interactive spaces | Developing applications for interactive space is different from developing
cross-platform applications for personal computing. Input, output, and
architectural variations in each interactive space introduce significant overhead in
terms of cost and time for developing, deploying, and maintaining applications
for interactive spaces. Often, these applications become one-off experiences tied
to the deployed spaces. To alleviate this problem and enable rapid responsive
space design applications, similar to responsive web design, we present the CELIO
application development framework for interactive spaces. The framework is
microservices-based and neatly decouples application and design specifications
from hardware and architecture specifications of an interactive space. In this
paper, we describe this framework and its implementation details. Also, we
briefly discuss the use cases developed using this framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Single Iteration Conditional Based DSE Considering Spatial and Temporal Correlation | The increasing complexity of distribution networks calls for advancement in
distribution system state estimation (DSSE) to monitor the operating conditions
more accurately. A sufficient number of measurements is imperative for a reliable
and accurate state estimation. The limitation on the measurement devices is
generally tackled with using the so-called pseudo measured data. However, the
errors in pseudo data by current techniques are quite high, leading to a poor
DSSE. As customer loads in distribution networks show high cross-correlation in
various locations and over successive time steps, it is plausible that
deploying the spatial-temporal dependencies can improve the pseudo data
accuracy and estimation. Although the role of spatial dependency in DSSE has
been addressed in the literature, one can hardly find an efficient DSSE
framework capable of incorporating temporal dependencies present in customer
loads. Consequently, to obtain a more efficient and accurate state estimation,
we propose a new non-iterative DSSE framework to involve spatial-temporal
dependencies together. The spatial-temporal dependencies are modeled by
conditional multivariate complex Gaussian distributions and are studied for
both static and real-time state estimations, where information at preceding
time steps is employed to increase the accuracy of DSSE. The efficiency of the
proposed approach is verified based on quality and accuracy indices, standard
deviation and computational time. Two balanced medium voltage (MV) and one
unbalanced low voltage (LV) distribution case studies are used for evaluations.
| 0 | 0 | 0 | 1 | 0 | 0 |
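The conditioning identity underlying such conditional-Gaussian pseudo-measurement models, stated here for the real case for reference (the paper works with complex Gaussians): if $(x_1, x_2)$ is jointly Gaussian, then

```latex
\[
  x_1 \mid x_2 \;\sim\; \mathcal{N}\!\left(
    \mu_1 + \Sigma_{12}\,\Sigma_{22}^{-1}\,(x_2 - \mu_2),\;
    \Sigma_{11} - \Sigma_{12}\,\Sigma_{22}^{-1}\,\Sigma_{21}
  \right).
\]
```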
Enhanced Network Embeddings via Exploiting Edge Labels | Network embedding methods aim at learning low-dimensional latent
representation of nodes in a network. While achieving competitive performance
on a variety of network inference tasks such as node classification and link
prediction, these methods treat the relation between nodes as a binary
variable and ignore the rich semantics of edges. In this work, we attempt to
learn network embeddings which simultaneously preserve network structure and
relations between nodes. Experiments on several real-world networks illustrate
that by considering different relations between different node pairs, our
method is capable of producing node embeddings of higher quality than a number
of state-of-the-art network embedding methods, as evaluated on a challenging
multi-label node classification task.
| 1 | 0 | 0 | 0 | 0 | 0 |
Virtual Constraints and Hybrid Zero Dynamics for Realizing Underactuated Bipedal Locomotion | Underactuation is ubiquitous in human locomotion and should be ubiquitous in
bipedal robotic locomotion as well. This chapter presents a coherent theory for
the design of feedback controllers that achieve stable walking gaits in
underactuated bipedal robots. Two fundamental tools are introduced, virtual
constraints and hybrid zero dynamics. Virtual constraints are relations on the
state variables of a mechanical model that are imposed through a time-invariant
feedback controller. One of their roles is to synchronize the robot's joints to
an internal gait phasing variable. A second role is to induce a low dimensional
system, the zero dynamics, that captures the underactuated aspects of a robot's
model, without any approximations. To enhance intuition, the relation between
physical constraints and virtual constraints is first established. From here,
the hybrid zero dynamics of an underactuated bipedal model is developed, and
its fundamental role in the design of asymptotically stable walking motions is
established. The chapter includes numerous references to robots on which the
highlighted techniques have been implemented.
| 1 | 0 | 1 | 0 | 0 | 0 |
Selective insulators and anomalous responses in correlated fermions with synthetic extra dimensions | We study a three-component fermionic fluid in an optical lattice in a regime
of intermediate-to-strong interactions allowing for Raman processes connecting
the different components, similar to those used to create artificial gauge
fields (AGF). Using Dynamical Mean-Field Theory we show that the combined
effect of interactions and AGFs induces a variety of anomalous phases in which
different components of the fermionic fluid display qualitative differences,
i.e., the physics is flavor-selective. Remarkably, the different components can
display huge differences in the correlation effects, measured by their
effective masses and non-monotonic behavior of their occupation number as a
function of the chemical potential, signaling a sort of selective instability
of the overall stable quantum fluid.
| 0 | 1 | 0 | 0 | 0 | 0 |
Computation of optimal transport and related hedging problems via penalization and neural networks | This paper presents a widely applicable approach to solving (multi-marginal,
martingale) optimal transport and related problems via neural networks. The
core idea is to penalize the optimization problem in its dual formulation and
reduce it to a finite dimensional one which corresponds to optimizing a neural
network with smooth objective function. We present numerical examples from
optimal transport, martingale optimal transport, portfolio optimization under
uncertainty and generative adversarial networks that showcase the generality
and effectiveness of the approach.
| 0 | 0 | 0 | 1 | 0 | 1 |
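A minimal PyTorch sketch of the penalization idea for the Wasserstein-1 dual: the constraint $u(x)+v(y) \le c(x,y)$ is replaced by a quadratic penalty and the potentials are small networks. Marginals, cost, and all hyperparameters are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
x = torch.rand(256, 1)            # samples from marginal mu = U[0, 1]
y = torch.rand(256, 1) + 0.5      # samples from marginal nu = U[0.5, 1.5]
make_net = lambda: torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
u, v = make_net(), make_net()
opt = torch.optim.Adam(list(u.parameters()) + list(v.parameters()), lr=1e-3)
beta = 100.0                      # penalization strength

for step in range(2000):
    xi = x[torch.randint(256, (128,))]
    yj = y[torch.randint(256, (128,))]
    cost = (xi - yj.T).abs()              # pairwise c(x, y) = |x - y|
    slack = u(xi) + v(yj).T - cost        # violation of u + v <= c
    dual = u(xi).mean() + v(yj).mean() - beta * torch.relu(slack).pow(2).mean()
    opt.zero_grad(); (-dual).backward(); opt.step()

print(float(u(x).mean() + v(y).mean()))   # roughly W1(mu, nu) = 0.5
```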
Active particles in periodic lattices | Both natural and artificial small-scale swimmers may often self-propel in
environments subject to complex geometrical constraints. While most past
theoretical work on low-Reynolds number locomotion addressed idealised
geometrical situations, not much is known on the motion of swimmers in
heterogeneous environments. As a first theoretical model, we investigate
numerically the behaviour of a single spherical micro-swimmer located in an
infinite, periodic body-centred cubic lattice consisting of rigid inert spheres
of the same size as the swimmer. Running a large number of simulations we
uncover the phase diagram of possible trajectories as a function of the
strength of the swimming actuation and the packing density of the lattice. We
then use hydrodynamic theory to rationalise our computational results and show
in particular how the far-field nature of the swimmer (pusher vs. puller)
governs even the behaviour at high volume fractions.
| 0 | 1 | 0 | 0 | 0 | 0 |
First measurements in search for keV-sterile neutrino in tritium beta-decay by Troitsk nu-mass experiment | We present the first measurements of the tritium beta-decay spectrum in the
electron energy range 16-18.6 keV. The goal is to find distortions which may
correspond to the presence of a heavy sterile neutrinos. A possible
contribution of this kind would manifest itself as a kink in the spectrum with
a similar shape but with end point shifted by the value of a heavy neutrino
mass. We set a new upper limits to the neutrino mixing matrix element U^2_{e4}
which improve existing limits by a factor from 2 to 5 in the mass range 0.1-2
keV.
| 0 | 1 | 0 | 0 | 0 | 0 |
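The kink parameterization assumed in such searches, added for context: the measured spectrum is a superposition of a light and a heavy mass eigenstate weighted by the mixing, so a heavy state with mass $m_4$ produces a kink at $E_0 - m_4$,

```latex
\[
  \frac{d\Gamma}{dE} \;=\; \left(1 - |U_{e4}|^2\right)
  \frac{d\Gamma}{dE}\Big|_{m_\nu \approx 0}
  \;+\; |U_{e4}|^2\, \frac{d\Gamma}{dE}\Big|_{m_4}.
\]
```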
Fractional Abelian topological phases of matter for fermions in two-dimensional space | These notes constitute chapter 7 from "l'Ecole de Physique des Houches"
Session CIII, August 2014 dedicated to Topological Aspects of Condensed matter
physics. The tenfold way in quasi-one-dimensional space is presented. The
method of chiral Abelian bosonization is reviewed. It is then applied to the
stability analysis for the edge theory in symmetry class AII, and for the
construction of two-dimensional topological phases from coupled wires.
| 0 | 1 | 0 | 0 | 0 | 0 |
A new construction of universal spaces for asymptotic dimension | For each $n$, we construct a separable metric space $\mathbb{U}_n$ that is
universal in the coarse category of separable metric spaces with asymptotic
dimension ($\mathop{asdim}$) at most $n$ and universal in the uniform category
of separable metric spaces with uniform dimension ($\mathop{udim}$) at most
$n$. Thus, $\mathbb{U}_n$ serves as a universal space for dimension $n$ in both
the large-scale and infinitesimal topology. More precisely, we prove:
\[
\mathop{asdim} \mathbb{U}_n = \mathop{udim} \mathbb{U}_n = n
\] and that, for each separable metric space $X$,
a) if $\mathop{asdim} X \leq n$, then $X$ is coarsely equivalent to a subset
of $\mathbb{U}_n$;
b) if $\mathop{udim} X \leq n$, then $X$ is uniformly homeomorphic to a
subset of $\mathbb{U}_n$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Comprehensive Framework for Dynamic Bike Rebalancing in a Large Bike Sharing Network | Bike sharing is a vital component of a modern multi-modal transportation
system. However, its implementation can lead to bike supply-demand imbalance
due to fluctuating spatial and temporal demands. This study proposes a
comprehensive framework to develop optimal dynamic bike rebalancing strategies
in a large bike sharing network. It consists of three components, including a
station-level pick-up/drop-off prediction model, station clustering model, and
capacitated location-routing optimization model. For the first component, we
propose a powerful deep learning model called graph convolution neural network
model (GCNN) with data-driven graph filter (DDGF), which can automatically
learn the hidden spatial-temporal correlations among stations to provide more
accurate predictions; for the second component, we apply a graph clustering
algorithm labeled the Community Detection algorithm to cluster stations that
are located geographically close to each other and have a small net demand gap;
lastly, a capacitated location-routing problem (CLRP) is solved to deal with the
combination of two types of decision variables: the locations of bike
distribution centers and the design of distribution routes for each cluster.
| 0 | 0 | 0 | 1 | 0 | 0 |
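A small illustration of the clustering component only, using networkx's modularity-based communities as a stand-in for the paper's Community Detection algorithm; the station graph and affinity weights are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# hypothetical station graph: weight = affinity (nearby + balanced demand)
G.add_weighted_edges_from([
    ("s1", "s2", 0.9), ("s2", "s3", 0.8), ("s1", "s3", 0.7),
    ("s4", "s5", 0.9), ("s5", "s6", 0.8), ("s3", "s4", 0.1),
])
clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])   # expect {s1,s2,s3} and {s4,s5,s6}
```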
On Thin Air Reads: Towards an Event Structures Model of Relaxed Memory | To model relaxed memory, we propose confusion-free event structures over an
alphabet with a justification relation. Executions are modeled by justified
configurations, where every read event has a justifying write event.
Justification alone is too weak a criterion, since it allows cycles of the kind
that result in so-called thin-air reads. Acyclic justification forbids such
cycles, but also invalidates event reorderings that result from compiler
optimizations and dynamic instruction scheduling. We propose the notion of
well-justification, based on a game-like model, which strikes a middle ground.
We show that well-justified configurations satisfy the DRF theorem: in any
data-race free program, all well-justified configurations are sequentially
consistent. We also show that rely-guarantee reasoning is sound for
well-justified configurations, but not for justified configurations. For
example, well-justified configurations are type-safe.
Well-justification allows many, but not all reorderings performed by relaxed
memory. In particular, it fails to validate the commutation of independent
reads. We discuss variations that may address these shortcomings.
| 1 | 0 | 0 | 0 | 0 | 0 |
Tensor ring decomposition | Tensor decompositions such as the canonical format and the tensor train
format have been widely utilized to reduce storage costs and operational
complexities for high-dimensional data, achieving linear scaling with the input
dimension instead of exponential scaling. In this paper, we investigate even
lower storage-cost representations in the tensor ring format, which is an
extension of the tensor train format with variable end-ranks. Firstly, we
introduce two algorithms for converting a tensor in full format to tensor ring
format with low storage cost. Secondly, we detail a rounding operation for
tensor rings and show how this requires new definitions of common linear
algebra operations in the format to obtain storage-cost savings. Lastly, we
introduce algorithms for transforming the graph structure of graph-based tensor
formats, with orders of magnitude lower complexity than existing literature.
The efficiency of all algorithms is demonstrated on a number of numerical
examples, and we achieve up to more than an order of magnitude higher
compression ratios than previous approaches to using the tensor ring format.
| 1 | 0 | 0 | 0 | 0 | 0 |
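A minimal sketch of the tensor ring format itself (reconstruction only, not the paper's conversion or rounding algorithms): cores $G_k$ of shape $(r_k, n_k, r_{k+1})$ are contracted in sequence and the ring is closed by a trace.

```python
import numpy as np

def tr_to_full(cores):
    # contract cores left to right, carrying the two open ring bonds
    out = cores[0]                               # shape (r0, n0, r1)
    for G in cores[1:]:
        out = np.einsum("a...b,bmc->a...mc", out, G)
    return np.einsum("a...a->...", out)          # trace closes the ring

rng = np.random.default_rng(0)
dims, ranks = [4, 5, 6], [2, 3, 2]               # end-ranks wrap around
cores = [rng.standard_normal((ranks[k], dims[k], ranks[(k + 1) % 3]))
         for k in range(3)]
print(tr_to_full(cores).shape)                   # (4, 5, 6)
```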
Combining Information from Multiple Forecasters: Inefficiency of Central Tendency | Even though the forecasting literature agrees that aggregating multiple
predictions of some future outcome typically outperforms the individual
predictions, there is no general consensus about the right way to do this. Most
common aggregators are means, defined loosely as aggregators that always remain
between the smallest and largest predictions. Examples include the arithmetic
mean, trimmed means, median, mid-range, and many other measures of central
tendency. If the forecasters use different information, the aggregator ideally
combines their information into a consensus without losing or distorting any of
it. An aggregator that achieves this is considered efficient. Unfortunately,
our results show that if the forecasters use their information accurately, an
aggregator that always remains strictly between the smallest and largest
predictions is never efficient in practice. A similar result holds even if the
ideal predictions are distorted with random error that is centered at zero. If
these noisy predictions are aggregated with a similar notion of centrality,
then, under some mild conditions, the aggregator is asymptotically inefficient.
| 0 | 0 | 1 | 1 | 0 | 0 |
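A toy worked example of the paper's point, under an assumed common prior and conditionally independent signals: two forecasters each report 0.8, and the efficient Bayesian aggregate lands strictly outside [0.8, 0.8], where every measure of central tendency stays.

```python
def combine(p1, p2, prior=0.5):
    # multiply likelihood ratios in odds space (conditional independence)
    odds = (p1 / (1 - p1)) * (p2 / (1 - p2)) / (prior / (1 - prior))
    return odds / (1 + odds)

print(combine(0.8, 0.8))   # ~0.941, above both individual predictions
```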
QCD-Aware Recursive Neural Networks for Jet Physics | Recent progress in applying machine learning for jet physics has been built
upon an analogy between calorimeters and images. In this work, we present a
novel class of recursive neural networks built instead upon an analogy between
QCD and natural languages. In the analogy, four-momenta are like words and the
clustering history of sequential recombination jet algorithms is like the
parsing of a sentence. Our approach works directly with the four-momenta of a
variable-length set of particles, and the jet-based tree structure varies on an
event-by-event basis. Our experiments highlight the flexibility of our method
for building task-specific jet embeddings and show that recursive architectures
are significantly more accurate and data efficient than previous image-based
networks. We extend the analogy from individual jets (sentences) to full events
(paragraphs), and show for the first time an event-level classifier operating
on all the stable particles produced in an LHC event.
| 0 | 1 | 0 | 1 | 0 | 0 |
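A schematic recursive embedding over a (binary) clustering tree, in the spirit of the approach: leaf four-momenta are embedded and parents are combined bottom-up with shared weights. Sizes, weights, and the tiny tree are invented; no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_leaf = rng.normal(scale=0.1, size=(4, d))      # embeds one four-momentum
W_comb = rng.normal(scale=0.1, size=(2 * d, d))  # merges two child embeddings

def embed(node):
    if isinstance(node, tuple):                  # internal node: (left, right)
        h = np.concatenate([embed(node[0]), embed(node[1])])
        return np.tanh(h @ W_comb)
    return np.tanh(np.asarray(node) @ W_leaf)    # leaf: [E, px, py, pz]

# a hypothetical clustering history of three particles
jet = (([10.0, 1.0, 0.0, 9.9], [5.0, -1.0, 0.5, 4.8]), [2.0, 0.1, -0.2, 1.9])
print(embed(jet).shape)                          # fixed-size jet embedding
```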
A conjecture on $C$-matrices of cluster algebras | For a skew-symmetrizable cluster algebra $\mathcal A_{t_0}$ with principal
coefficients at $t_0$, we prove that each seed $\Sigma_t$ of $\mathcal A_{t_0}$
is uniquely determined by its {\bf C-matrix}, which was proposed by Fomin and
Zelevinsky in \cite{FZ3} as a conjecture. Our proof is based on the fact that
the positivity of cluster variables and sign-coherence of $c$-vectors hold for
$\mathcal A_{t_0}$, which was actually verified in \cite{GHKK}. More discussion
is given in the sign-skew-symmetric case so as to obtain a conclusion as a weak
version of the conjecture in this general case.
| 0 | 0 | 1 | 0 | 0 | 0 |
Attacking Similarity-Based Link Prediction in Social Networks | Link prediction is one of the fundamental problems in computational social
science. A particularly common means to predict existence of unobserved links
is via structural similarity metrics, such as the number of common neighbors;
node pairs with higher similarity are thus deemed more likely to be linked.
However, a number of applications of link prediction, such as predicting links
in gang or terrorist networks, are adversarial, with another party incentivized
to minimize its effectiveness by manipulating observed information about the
network. We offer a comprehensive algorithmic investigation of the problem of
attacking similarity-based link prediction through link deletion, focusing on
two broad classes of such approaches, one which uses only local information
about target links, and another which uses global network information. While we
show several variations of the general problem to be NP-Hard for both local and
global metrics, we exhibit a number of well-motivated special cases which are
tractable. Additionally, we provide principled and empirically effective
algorithms for the intractable cases, in some cases proving worst-case
approximation guarantees.
| 1 | 0 | 0 | 0 | 0 | 0 |
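A toy sketch of the local-information attack setting: greedily delete the edge that most reduces the common-neighbors score of a target pair. The graph, target, and budget are invented, and this greedy heuristic is only an illustration, not the paper's algorithms.

```python
import networkx as nx

G = nx.karate_club_graph()
target = (0, 33)                       # hypothetical target pair
cn = lambda H: len(list(nx.common_neighbors(H, *target)))
print("common neighbors before:", cn(G))
for _ in range(3):                     # deletion budget of three edges
    cands = [e for e in G.edges() if e[0] in target or e[1] in target]
    # score each candidate by how much its removal lowers common neighbors
    best = max(cands, key=lambda e: cn(G) - cn(nx.restricted_view(G, [], [e])))
    G.remove_edge(*best)
    print("deleted", best, "-> common neighbors:", cn(G))
```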
On algebraic branching programs of small width | In 1979 Valiant showed that the complexity class VP_e of families with
polynomially bounded formula size is contained in the class VP_s of families
that have algebraic branching programs (ABPs) of polynomially bounded size.
Motivated by the problem of separating these classes we study the topological
closure VP_e-bar, i.e. the class of polynomials that can be approximated
arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a
strikingly simple complete polynomial (in characteristic different from 2)
whose recursive definition is similar to the Fibonacci numbers. Further
understanding this polynomial seems to be a promising route to new formula
lower bounds.
Our methods are rooted in the study of ABPs of small constant width. In 1992
Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3
ABP size. We extend their result (in characteristic different from 2) by
showing that approximate formula size is polynomially equivalent to approximate
width-2 ABP size. This is surprising because in 2011 Allender and Wang gave
explicit polynomials that cannot be computed by width-2 ABPs at all! The
details of our construction lead to the aforementioned characterization of
VP_e-bar.
As a natural continuation of this work we prove that the class VNP can be
described as the class of families that admit a hypercube summation of
polynomially bounded dimension over a product of polynomially many affine
linear forms. This gives the first separations of algebraic complexity classes
from their nondeterministic analogs.
| 1 | 0 | 0 | 0 | 0 | 0 |
A lightweight thermal heat switch for redundant cryocooling on satellites | A previously designed cryogenic thermal heat switch for space applications
has been optimized for low mass, high structural stability, and reliability.
The heat switch makes use of the large linear thermal expansion coefficient
(CTE) of the thermoplastic UHMW-PE for actuation. A structure model, which
includes the temperature dependent properties of the actuator, is derived to be
able to predict the contact pressure between the switch parts. This pressure
was used in a thermal model in order to predict the switch performance under
different heat loads and operating temperatures. The two models were used to
optimize the mass and stability of the switch. Its reliability was proven by
cyclic actuation of the switch and by shaker tests.
| 0 | 1 | 0 | 0 | 0 | 0 |
Fast Automatic Smoothing for Generalized Additive Models | Multiple generalized additive models (GAMs) are a type of distributional
regression wherein parameters of probability distributions depend on predictors
through smooth functions, with selection of the degree of smoothness via $L_2$
regularization. Multiple GAMs allow finer statistical inference by
incorporating explanatory information in any or all of the parameters of the
distribution. Owing to their nonlinearity, flexibility and interpretability,
GAMs are widely used, but reliable and fast methods for automatic smoothing in
large datasets are still lacking, despite recent advances. We develop a general
methodology for automatically learning the optimal degree of $L_2$
regularization for multiple GAMs using an empirical Bayes approach. The smooth
functions are penalized by different amounts, which are learned simultaneously
by maximization of a marginal likelihood through an approximate
expectation-maximization algorithm that involves a double Laplace approximation
at the E-step, and leads to an efficient M-step. Empirical analysis shows that
the resulting algorithm is numerically stable, faster than all existing methods
and achieves state-of-the-art accuracy. For illustration, we apply it to an
important and challenging problem in the analysis of extremal data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Stabilization and control of Majorana bound states with elongated skyrmions | We show that elongated magnetic skyrmions can host Majorana bound states in a
proximity-coupled two-dimensional electron gas sandwiched between a chiral
magnet and an $s$-wave superconductor. Our proposal requires stable skyrmions
with unit topological charge, which can be realized in a wide range of
multilayer magnets, and allows quantum information transfer by using standard
methods in spintronics via skyrmion motion. We also show how braiding
operations can be realized in our proposal.
| 0 | 1 | 0 | 0 | 0 | 0 |
Meromorphic functions with small Schwarzian derivative | We consider the family of all meromorphic functions $f$ of the form $$
f(z)=\frac{1}{z}+b_0+b_1z+b_2z^2+\cdots $$ analytic and locally univalent in
the punctured disk $\mathbb{D}_0:=\{z\in\mathbb{C}:\,0<|z|<1\}$. Our first
objective in this paper is to find a sufficient condition for $f$ to be
meromorphically convex of order $\alpha$, $0\le \alpha<1$, in terms of the fact
that the absolute value of the well-known Schwarzian derivative $S_f (z)$ of
$f$ is bounded above by a smallest positive root of a non-linear equation.
Secondly, we consider a family of functions $g$ of the form
$g(z)=z+a_2z^2+a_3z^3+\cdots$ analytic and locally univalent in the open unit
disk $\mathbb{D}:=\{z\in\mathbb{C}:\,|z|<1\}$, and show that $g$ belongs
to a family of functions convex in one direction if $|S_g(z)|$ is bounded above
by a small positive constant depending on the second coefficient $a_2$. In
particular, we show that such functions $g$ are also contained in the starlike
and close-to-convex family.
| 0 | 0 | 1 | 0 | 0 | 0 |
Measure-geometric Laplacians for discrete distributions | In 2002 Freiberg and Zähle introduced and developed a harmonic calculus for
measure-geometric Laplacians associated with continuous distributions. We show
that their theory can be extended to encompass distributions with finite support and
give a matrix representation for the resulting operators. In the case of a
uniform discrete distribution we make use of this matrix representation to
explicitly determine the eigenvalues and the eigenfunctions of the associated
Laplacian.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimal heat transfer and optimal exit times | A heat exchanger can be modeled as a closed domain containing an
incompressible fluid. The moving fluid has a temperature distribution obeying
the advection-diffusion equation, with zero temperature boundary conditions at
the walls. Starting from a positive initial temperature distribution in the
interior, the goal is to flux the heat through the walls as efficiently as
possible. Here we consider a distinct but closely related problem, that of the
integrated mean exit time of Brownian particles starting inside the domain.
Since flows favorable to rapid heat exchange should lower exit times, we
minimize a norm of the exit time. This is a time-independent optimization
problem that we solve analytically in some limits, and numerically otherwise.
We find an (at least locally) optimal velocity field that cools the domain on a
mechanical time scale, in the sense that the integrated mean exit time is
independent on molecular diffusivity in the limit of large-energy flows.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Semantics Comparison Workbench for a Concurrent, Asynchronous, Distributed Programming Language | A number of high-level languages and libraries have been proposed that offer
novel and simple to use abstractions for concurrent, asynchronous, and
distributed programming. The execution models that realise them, however, often
change over time---whether to improve performance, or to extend them to new
language features---potentially affecting behavioural and safety properties of
existing programs. This is exemplified by SCOOP, a message-passing approach to
concurrent object-oriented programming that has seen multiple changes proposed
and implemented, with demonstrable consequences for an idiomatic usage of its
core abstraction. We propose a semantics comparison workbench for SCOOP with
fully and semi-automatic tools for analysing and comparing the state spaces of
programs with respect to different execution models or semantics. We
demonstrate its use in checking the consistency of properties across semantics
by applying it to a set of representative programs, and highlighting a
deadlock-related discrepancy between the principal execution models of SCOOP.
Furthermore, we demonstrate the extensibility of the workbench by generalising
the formalisation of an execution model to support recently proposed extensions
for distributed programming. Our workbench is based on a modular and
parameterisable graph transformation semantics implemented in the GROOVE tool.
We discuss how graph transformations are leveraged to atomically model
intricate language abstractions, how the visual yet algebraic nature of the
model can be used to ascertain soundness, and highlight how the approach could
be applied to similar languages.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simulating Linear Logic in 1-Only Linear Logic | Linear Logic was introduced by Girard as a resource-sensitive refinement of
classical logic. It turned out that full propositional Linear Logic is
undecidable (Lincoln, Mitchell, Scedrov, and Shankar) and, hence, it is more
expressive than (modalized) classical or intuitionistic logic. In this paper we
focus on the study of the simplest fragments of Linear Logic, such as the
one-literal and constant-only fragments (the latter contains no literals at
all). Here we demonstrate that all these extremely simple fragments of Linear
Logic (one-literal, $\bot$-only, and even unit-only) are exactly of the same
expressive power as the corresponding full versions. We present also a complete
computational interpretation (in terms of acyclic programs with stack) for
bottom-free Intuitionistic Linear Logic. Based on this interpretation, we prove
the fairness of our encodings and establish the foregoing complexity results.
| 1 | 0 | 0 | 0 | 0 | 0 |
Adaptive Network Coding Schemes for Satellite Communications | In this paper, we propose two novel physical layer aware adaptive network
coding and coded modulation schemes for time variant channels. The proposed
schemes have been applied to different satellite communications scenarios with
different Round Trip Times (RTT). Compared to adaptive network coding, and
classical non-adaptive network coding schemes for time variant channels, as
benchmarks, the proposed schemes demonstrate that adaptation of packet
transmission based on the channel variation and corresponding erasures allows
for significant gains in terms of throughput, delay and energy efficiency. We
shed light on the trade-off between energy efficiency and delay-throughput
gains, demonstrating that conservative adaptive approaches that favor less
transmission under high erasures may cause higher delay and smaller throughput
gains in comparison to non-conservative approaches that favor more transmission
to account for high erasures.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Symbolic Computation Framework for Constitutive Modelling Based On Entropy Principles | The entropy principle in the formulation of Müller and Liu is a common
tool used in constitutive modelling for the development of restrictions on the
unknown constitutive functions describing material properties of various
physical continua. In the current work, a symbolic software implementation of
the Liu algorithm, based on \verb|Maple| software and the \verb|GeM| package,
is presented. The computational framework is used to algorithmically perform
technically demanding symbolic computations related to the entropy principle,
to simplify and reduce Liu's identities, and ultimately to derive explicit
formulas describing classes of constitutive functions that do not violate the
entropy principle. Detailed physical examples are presented and discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Measuring bot and human behavioral dynamics | Bots, social media accounts controlled by software rather than by humans,
have recently been under the spotlight for their association with various forms
of online manipulation. To date, much work has focused on social bot detection,
but little attention has been devoted to the characterization and measurement
of the behavior and activity of bots, as opposed to humans'. Over the course of
the years, bots have become more sophisticated and capable of reflecting some
short-term behavior, emulating that of human users. The goal of this paper is
to study the behavioral dynamics that bots exhibit over the course of one
activity session, and highlight if and how these differ from human activity
signatures. By using a large Twitter dataset associated with recent political
events, we first separate bots and humans, then isolate their activity
sessions. We compile a list of quantities to be measured, like the propensity
of users to engage in social interactions or to produce content. Our analysis
highlights the presence of short-term behavioral trends in humans, which can be
associated with a cognitive origin and which are absent in bots, intuitively due to
their automated activity. These findings are finally codified to create and
evaluate a machine learning algorithm to detect activity sessions produced by
bots and humans, to allow for more nuanced bot detection strategies.
| 1 | 0 | 0 | 0 | 0 | 0 |
Optimal control of elliptic equations with positive measures | Optimal control problems without control costs in general do not possess
solutions due to the lack of coercivity. However, unilateral constraints
together with the assumption of existence of strictly positive solutions of a
pre-adjoint state equation, are sufficient to obtain existence of optimal
solutions in the space of Radon measures. Optimality conditions for these
generalized minimizers can be obtained using Fenchel duality, which requires a
non-standard perturbation approach if the control-to-observation mapping is not
continuous (e.g., for Neumann boundary control in three dimensions). Combining
a conforming discretization of the measure space with a semismooth Newton
method allows the numerical solution of the optimal control problem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Beyond Word Embeddings: Learning Entity and Concept Representations from Large Scale Knowledge Bases | Text representations using neural word embeddings have proven effective in
many NLP applications. Recent research adapts the traditional word embedding
models to learn vectors of multiword expressions (concepts/entities). However,
these methods are limited to textual knowledge bases (e.g., Wikipedia). In this
paper, we propose a novel and simple technique for integrating the knowledge
about concepts from two large scale knowledge bases of different structure
(Wikipedia and Probase) in order to learn concept representations. We adapt the
efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia
text and Probase concept graph. We evaluate our concept embedding models on two
tasks: (1) analogical reasoning, where we achieve a state-of-the-art
performance of 91% on semantic analogies, (2) concept categorization, where we
achieve a state-of-the-art performance on two benchmark datasets achieving
categorization accuracy of 100% on one and 98% on the other. Additionally, we
present a case study to evaluate our model on unsupervised argument type
identification for neural semantic parsing. We demonstrate the competitive
accuracy of our unsupervised method and its ability to better generalize to out
of vocabulary entity mentions compared to the tedious and error prone methods
which depend on gazetteers and regular expressions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fraunhofer diffraction at the two-dimensional quadratically distorted (QD) Grating | A two-dimensional (2D) mathematical model of quadratically distorted (QD)
grating is established with the principles of Fraunhofer diffraction and
Fourier optics. Discrete sampling and a bisection algorithm are applied to
find a numerical solution for the diffraction pattern of the QD grating. This 2D
mathematical model allows the precise design of QD gratings and improves the
optical performance of simultaneous multiplane imaging systems.
| 0 | 1 | 0 | 0 | 0 | 0 |