text (stringlengths 57 to 2.88k) | labels (sequencelengths 6) |
---|---|
Title: The vectorial Ribaucour transformation for submanifolds of constant sectional curvature,
Abstract: We obtain a reduction of the vectorial Ribaucour transformation that
preserves the class of submanifolds of constant sectional curvature of space
forms, which we call the $L$-transformation. It allows one to construct a family of
such submanifolds starting with a given one and a vector-valued solution of a
system of linear partial differential equations. We prove a decomposition
theorem for the $L$-transformation, which is a far-reaching generalization of
the classical permutability formula for the Ribaucour transformation of
surfaces of constant curvature in Euclidean three space. As a consequence, we
derive a Bianchi-cube theorem, which allows one to produce, from $k$ initial scalar
$L$-transforms of a given submanifold of constant curvature, a whole
$k$-dimensional cube all of whose remaining $2^k-(k+1)$ vertices are
submanifolds with the same constant sectional curvature given by explicit
algebraic formulae. We also obtain further reductions, as well as corresponding
decomposition and Bianchi-cube theorems, for the classes of $n$-dimensional
flat Lagrangian submanifolds of $\mathbb{C}^n$ and $n$-dimensional Lagrangian
submanifolds with constant curvature $c$ of the complex projective space
$\mathbb C\mathbb P^n(4c)$ or the complex hyperbolic space $\mathbb C\mathbb
H^n(4c)$ of complex dimension $n$ and constant holomorphic curvature $4c$. | [
0,
0,
1,
0,
0,
0
] |
Title: Social Networks through the Prism of Cognition,
Abstract: Human relations are driven by social events - people interact, exchange
information, share knowledge and emotions, or gather news from mass media.
These events leave traces in human memory. The initial strength of a trace
depends on cognitive factors such as emotions or attention span. Each trace
continuously weakens over time unless another related event
strengthens it. Here, we introduce a novel Cognition-driven Social Network
(CogSNet) model that accounts for cognitive aspects of social perception and
explicitly represents human memory dynamics. For validation, we apply our model
to NetSense data on social interactions among university students. The results
show that CogSNet significantly improves the quality of modeling human
interactions in social networks. | [
1,
0,
0,
0,
0,
0
] |
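The memory dynamic the CogSNet abstract describes (traces deposited by events, weakening until reinforced) can be sketched as follows; the exponential decay law, function name, and parameter values are illustrative assumptions, not the paper's exact forgetting function:

```python
import math

def trace_strength(event_times, t, mu=1.0, lam=0.1):
    """Illustrative memory trace: each event deposits strength mu, and the
    trace decays exponentially at rate lam between events (an assumed
    dynamic, not the paper's exact CogSNet formulation)."""
    s, last = 0.0, None
    for et in sorted(event_times):
        if et > t:
            break
        if last is not None:
            s *= math.exp(-lam * (et - last))  # decay since the previous event
        s += mu                                # reinforcement by the new event
        last = et
    return 0.0 if last is None else s * math.exp(-lam * (t - last))

# A frequently reinforced trace stays stronger than a stale one.
recent = trace_strength([0, 5, 9], t=10)
stale = trace_strength([0], t=10)
```

A relation observed often and recently thus keeps a strong trace, while one with a single old event fades toward zero.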
Title: Metriplectic Integrators for the Landau Collision Operator,
Abstract: We present a novel framework for addressing the nonlinear Landau collision
integral in terms of finite element and other subspace projection methods. We
employ the underlying metriplectic structure of the Landau collision integral
and, using a Galerkin discretization for the velocity space, we transform the
infinite-dimensional system into a finite-dimensional, time-continuous
metriplectic system. Temporal discretization is accomplished using the concept
of discrete gradients. The conservation of energy, momentum, and particle
densities, as well as the production of entropy, is demonstrated algebraically
for the fully discrete system. Due to the generality of our approach, the
conservation properties and the monotonic behavior of entropy are guaranteed
for finite element discretizations in general, independently of the mesh
configuration. | [
0,
1,
0,
0,
0,
0
] |
Title: Lagrangian Transport Through Surfaces in Compressible Flows,
Abstract: A material-based, i.e., Lagrangian, methodology for exact integration of flux
by volume-preserving flows through a surface has been developed recently in
[Karrasch, SIAM J. Appl. Math., 76 (2016), pp. 1178-1190]. In the present
paper, we first generalize this framework to general compressible flows,
thereby solving the donating region problem in full generality. Second, we
demonstrate the efficacy of this approach on a slightly idealized version of a
classic two-dimensional mixing problem: transport in a cross-channel
micromixer, as considered recently in [Balasuriya, SIAM J. Appl. Dyn. Syst., 16
(2017), pp. 1015-1044]. | [
1,
1,
1,
0,
0,
0
] |
Title: Powerful numbers in $(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$,
Abstract: Let $q$ be a positive integer. Recently, Niu and Liu proved that if $n\ge
\max\{q,1198-q\}$, then the product $(1^3+q^3)(2^3+q^3)\cdots (n^3+q^3)$ is not
a powerful number. In this note, we prove that (i) for any odd prime power
$\ell$ and $n\ge \max\{q,11-q\}$, the product
$(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$ is not a
powerful number; (ii) for any positive odd integer $\ell$, there exists an
integer $N_{q,\ell}$ such that for any positive integer $n\ge N_{q,\ell}$, the
product $(1^{\ell}+q^{\ell})(2^{\ell}+q^{\ell})\cdots (n^{\ell}+q^{\ell})$ is
not a powerful number. | [
0,
0,
1,
0,
0,
0
] |
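As a companion to the statement above, a powerful number is one in whose prime factorization no exponent equals 1; a minimal trial-division test (our own sketch, fine for small inputs):

```python
def is_powerful(n):
    """A positive integer is powerful iff every prime in its factorization
    appears with exponent >= 2 (equivalently, n = a**2 * b**3).
    Trial division; adequate for small n."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            if exp < 2:
                return False
        p += 1
    return n == 1  # a leftover factor would be a prime with exponent 1

powerful_upto_40 = [m for m in range(1, 40) if is_powerful(m)]
```

For instance, $(1^3+1^3)(2^3+1^3) = 18 = 2\cdot 3^2$ fails the test because the prime 2 appears only once.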
Title: LandmarkBoost: Efficient Visual Context Classifiers for Robust Localization,
Abstract: The growing popularity of autonomous systems creates a need for reliable and
efficient metric pose retrieval algorithms. Currently used approaches tend to
rely on nearest neighbor search of binary descriptors to perform the 2D-3D
matching and guarantee realtime capabilities on mobile platforms. These methods
struggle, however, with the growing size of the map, changes in viewpoint or
appearance, and visual aliasing present in the environment. The rigidly defined
descriptor patterns only capture a limited neighborhood of the keypoint and
completely ignore the overall visual context.
We propose LandmarkBoost - an approach that, in contrast to the conventional
2D-3D matching methods, casts the search problem as a landmark classification
task. We use a boosted classifier to classify landmark observations and
directly obtain correspondences as classifier scores. We also introduce a
formulation of visual context that is flexible, efficient to compute, and can
capture relationships in the entire image plane. The original binary
descriptors are augmented with contextual information and informative features
are selected by the boosting framework. Through detailed experiments, we
evaluate the retrieval quality and performance of LandmarkBoost, demonstrating
that it outperforms common state-of-the-art descriptor matching methods. | [
1,
0,
0,
0,
0,
0
] |
Title: Edge Erasures and Chordal Graphs,
Abstract: We prove several results about chordal graphs and weighted chordal graphs by
focusing on exposed edges. These are edges that are properly contained in a
single maximal complete subgraph. This leads to a characterization of chordal
graphs via deletions of a sequence of exposed edges from a complete graph. Most
interesting is that in this context the connected components of the
edge-induced subgraph of exposed edges are 2-edge connected. We use this latter
fact in the weighted case to give a modified version of Kruskal's second
algorithm for finding a minimum spanning tree in a weighted chordal graph. This
modified algorithm benefits from being local in an important sense. | [
1,
0,
1,
0,
0,
0
] |
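The notion of an exposed edge (on one reading of "properly contained in a single maximal complete subgraph": an edge lying in exactly one maximal clique) can be sketched with a plain Bron-Kerbosch enumeration; this is an illustrative assumption, not the paper's construction:

```python
def maximal_cliques(adj):
    """Basic Bron-Kerbosch enumeration; adj maps each vertex to its neighbour set."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    return cliques

def exposed_edges(adj):
    """Edges contained in exactly one maximal clique (one reading of
    'properly contained in a single maximal complete subgraph')."""
    cliques = maximal_cliques(adj)
    edges = {frozenset((v, u)) for v in adj for u in adj[v]}
    return {tuple(sorted(e)) for e in edges
            if sum(e <= c for c in cliques) == 1}

# Two triangles glued along edge (2,3): the shared edge lies in two maximal
# cliques and hence is not exposed; the four outer edges are.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
exposed = exposed_edges(adj)
```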
Title: Knowledge Adaptation: Teaching to Adapt,
Abstract: Domain adaptation is crucial in many real-world applications where the
distribution of the training data differs from the distribution of the test
data. Previous Deep Learning-based approaches to domain adaptation need to be
trained jointly on source and target domain data and are therefore unappealing
in scenarios where models need to be adapted to a large number of domains or
where a domain is evolving, e.g. spam detection where attackers continuously
change their tactics.
To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge
Distillation (Bucilua et al., 2006; Hinton et al., 2015) to the domain
adaptation scenario. We show how a student model achieves state-of-the-art
results on unsupervised domain adaptation from multiple sources on a standard
sentiment analysis benchmark by taking into account the domain-specific
expertise of multiple teachers and the similarities between their domains.
When learning from a single teacher, using domain similarity to gauge
trustworthiness is inadequate. To this end, we propose a simple metric that
correlates well with the teacher's accuracy in the target domain. We
demonstrate that incorporating high-confidence examples selected by this metric
enables the student model to achieve state-of-the-art performance in the
single-source scenario. | [
1,
0,
0,
0,
0,
0
] |
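The distillation objective that Knowledge Adaptation extends can be sketched as the KL divergence between temperature-softened teacher and student distributions; the function names and temperature value here are illustrative, not the paper's:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions -- the
    core objective of knowledge distillation (Hinton et al., 2015).
    The T**2 factor keeps the gradient scale comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T**2)

loss = distillation_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
```

In the multi-teacher setting of the abstract, one would combine several such soft-target terms, weighted by domain similarity.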
Title: Local Nonparametric Estimation for Second-Order Jump-Diffusion Model Using Gamma Asymmetric Kernels,
Abstract: This paper discusses the local linear smoothing to estimate the unknown first
and second infinitesimal moments in second-order jump-diffusion model based on
Gamma asymmetric kernels. Under mild conditions, we obtain the weak
consistency and the asymptotic normality of these estimators for both interior
and boundary design points. Besides the standard properties of the local linear
estimation such as simple bias representation and boundary bias correction, the
local linear smoothing using Gamma asymmetric kernels possesses some extra
advantages such as variable bandwidth, variance reduction and resistance to
sparse design, which is validated through finite sample simulation study.
Finally, we apply the estimators to the returns of some high-frequency
financial data.
0,
0,
1,
1,
0,
0
] |
Title: Ergodic actions of the compact quantum group $O_{-1}(2)$,
Abstract: Among the ergodic actions of a compact quantum group $\mathbb{G}$ on possibly
non-commutative spaces, those that are {\it embeddable} are the natural
analogues of actions of a compact group on its homogeneous spaces. These can be
realized as {\it coideal subalgebras} of the function algebra
$\mathcal{O}(\mathbb{G})$ attached to the compact quantum group.
We classify the embeddable ergodic actions of the compact quantum group
$O_{-1}(2)$, basing our analysis on the bijective correspondence between all
ergodic actions of the classical group $O(2)$ and those of its quantum twist
resulting from the monoidal equivalence between their respective tensor
categories of unitary representations.
In the last section we give counterexamples showing that in general we cannot
expect a bijective correspondence between embeddable ergodic actions of two
monoidally equivalent compact quantum groups. | [
0,
0,
1,
0,
0,
0
] |
Title: Edge states in non-Fermi liquids,
Abstract: We devise an approach to the calculation of scaling dimensions of generic
operators describing scattering within a multi-channel Luttinger liquid. The
local impurity scattering in an arbitrary configuration of conducting and
insulating channels is investigated and the problem is reduced to a single
algebraic matrix equation. In particular, the solution to this equation is
found for a finite array of chains described by Luttinger liquid models. It is
found that for a weak inter-chain hybridisation and intra-channel
electron-electron attraction the edge wires are robust against disorder whereas
bulk wires, on the contrary, become insulating. Thus, the edge state may exist in a
finite sliding Luttinger liquid without time-reversal symmetry breaking
(quantum Hall systems) or spin-orbit interaction (topological insulators). | [
0,
1,
0,
0,
0,
0
] |
Title: RobustFill: Neural Program Learning under Noisy I/O,
Abstract: The problem of automatically generating a computer program from some
specification has been studied since the early days of AI. Recently, two
competing approaches for automatic program learning have received significant
attention: (1) neural program synthesis, where a neural network is conditioned
on input/output (I/O) examples and learns to generate a program, and (2) neural
program induction, where a neural network generates new outputs directly using
a latent program representation.
Here, for the first time, we directly compare both approaches on a
large-scale, real-world learning task. We additionally contrast to rule-based
program synthesis, which uses hand-crafted semantics to guide the program
generation. Our neural models use a modified attention RNN to allow encoding of
variable-sized sets of I/O pairs. Our best synthesis model achieves 92%
accuracy on a real-world test set, compared to the 34% accuracy of the previous
best neural synthesis approach. The synthesis model also outperforms a
comparable induction model on this task, but we more importantly demonstrate
that the strength of each approach is highly dependent on the evaluation metric
and end-user application. Finally, we show that we can train our neural models
to remain very robust to the type of noise expected in real-world data (e.g.,
typos), while a highly-engineered rule-based system fails entirely. | [
1,
0,
0,
0,
0,
0
] |
Title: Updated physics design of the DAEdALUS and IsoDAR coupled cyclotrons for high intensity H2+ beam production,
Abstract: The Decay-At-rest Experiment for delta-CP violation At a Laboratory for
Underground Science (DAEdALUS) and the Isotope Decay-At-Rest experiment
(IsoDAR) are proposed experiments to search for CP violation in the neutrino
sector, and "sterile" neutrinos, respectively. In order to be decisive within 5
years, the neutrino flux and, consequently, the driver beam current (produced
by chained cyclotrons) must be high. H2+ was chosen as primary beam ion in
order to reduce the electrical current and thus space charge. This has the
added advantage of allowing for stripping extraction at the exit of the
DAEdALUS Superconducting Ring Cyclotron (DSRC). The primary beam current is
higher than existing cyclotrons have demonstrated, which has led to a substantial
R&D effort by our collaboration in recent years. We present the results of
this research, including tests of prototypes and highly realistic beam
simulations, which led to the latest physics-based design. The presented
results suggest that it is feasible, albeit challenging, to accelerate 5 mA of
H2+ to 60 MeV/amu in a compact cyclotron and boost it to 800 MeV/amu in the
DSRC with clean extraction in both cases. | [
0,
1,
0,
0,
0,
0
] |
Title: Scalable Gaussian Process Computations Using Hierarchical Matrices,
Abstract: We present a kernel-independent method that applies hierarchical matrices to
the problem of maximum likelihood estimation for Gaussian processes. The
proposed approximation provides natural and scalable stochastic estimators for
its gradient and Hessian, as well as the expected Fisher information matrix,
that are computable in quasilinear $O(n \log^2 n)$ complexity for a large range
of models. To accomplish this, we (i) choose a specific hierarchical
approximation for covariance matrices that enables the computation of their
exact derivatives and (ii) use a stabilized form of the Hutchinson stochastic
trace estimator. Since both the observed and expected information matrices can
be computed in quasilinear complexity, covariance matrices for MLEs can also be
estimated efficiently. After discussing the associated mathematics, we
demonstrate the scalability of the method, discuss details of its
implementation, and validate that the resulting MLEs and confidence intervals
based on the inverse Fisher information matrix faithfully approach those
obtained by the exact likelihood. | [
0,
0,
0,
1,
0,
0
] |
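The Hutchinson trace estimator mentioned above needs only matrix-vector products, which is exactly what the hierarchical-matrix representation provides in quasilinear time. A minimal unstabilized sketch (the paper uses a stabilized form):

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=200, seed=0):
    """Plain Hutchinson estimator: E[z^T A z] = tr(A) for Rademacher z.
    Only matrix-vector products are required. (Unstabilized sketch; the
    paper employs a stabilized variant.)"""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += z @ matvec(z)
    return total / num_samples

# For a diagonal A the estimator is exact: z_i**2 == 1 kills the variance.
A = np.diag([1.0, 2.0, 3.0, 4.0])
est = hutchinson_trace(lambda v: A @ v, 4)
```

For general symmetric matrices the variance is governed by the off-diagonal mass, which is what stabilization addresses.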
Title: Wireless Network-Level Partial Relay Cooperation: A Stable Throughput Analysis,
Abstract: In this work, we study the benefit of partial relay cooperation. We consider
a two-node system consisting of one source and one relay node transmitting
information to a common destination. The source and the relay have external
traffic and in addition, the relay is equipped with a flow controller to
regulate the incoming traffic from the source node. The cooperation is
performed at the network level. A collision channel with erasures is
considered. We provide an exact characterization of the stability region of the
system and we also prove that the system with partial cooperation is always
better or at least equal to the system without the flow controller. | [
1,
0,
0,
0,
0,
0
] |
Title: An explicit Gross-Zagier formula related to the Sylvester Conjecture,
Abstract: Let $p\equiv 4,7 \mod 9$ be a rational prime such that $3$ is not a
cubic residue mod $p$. In this paper we prove that the 3-part of the product of
the full BSD conjectures for $E_p$ and $E_{3p^2}$ is true, using an explicit
Gross-Zagier formula, where $E_p: x^3+y^3=p$ and $E_{3p^2}: x^3+y^3=3p^2$ are the elliptic
curves related to the Sylvester conjecture and cube sum problems. | [
0,
0,
1,
0,
0,
0
] |
Title: Case Studies of Exocomets in the System of HD 10180,
Abstract: The aim of our study is to investigate the dynamics of possible comets in the
HD 10180 system. This investigation is motivated by the discovery of exocomets
in various systems, especially $\beta$ Pictoris, as well as in at least ten
other systems. Detailed theoretical studies about the formation and evolution
of star--planet systems indicate that exocomets should be quite common. Further
observational results are expected in the foreseeable future, in part due to
the availability of the Large Synoptic Survey Telescope. Nonetheless, the Solar
System represents the best-studied example of comets, thus serving as a prime
motivation for investigating comets in HD 10180 as well. HD 10180 is strikingly
similar to the Sun. This system contains six confirmed planets and (at least)
two additional planets subject to final verification. In our studies, we
consider comets of different inclinations and eccentricities and find an array
of different outcomes such as encounters with planets, captures, and escapes.
Comets with relatively large eccentricities are able to enter the inner region
of the system facing early planetary encounters. Stable comets experience
long-term evolution of orbital elements, as expected. We also attempted to
distinguish cometary families akin to those in our Solar System, but no clear
distinction between possible families was found. Generally, theoretical and observational
studies of exoplanets have a large range of ramifications, involving the
origin, structure and evolution of systems as well as the proliferation of
water and prebiotic compounds to terrestrial planets, which will increase their
chances of being habitable. | [
0,
1,
0,
0,
0,
0
] |
Title: 3D ab initio modeling in cryo-EM by autocorrelation analysis,
Abstract: Single-Particle Reconstruction (SPR) in Cryo-Electron Microscopy (cryo-EM) is
the task of estimating the 3D structure of a molecule from a set of noisy 2D
projections, taken from unknown viewing directions. Many algorithms for SPR
start from an initial reference molecule, and alternate between refining the
estimated viewing angles given the molecule, and refining the molecule given
the viewing angles. This scheme is called iterative refinement. Reliance on an
initial, user-chosen reference introduces model bias, and poor initialization
can lead to slow convergence. Furthermore, since no ground truth is available
for an unsolved molecule, it is difficult to validate the obtained results.
This creates the need for high quality ab initio models that can be quickly
obtained from experimental data with minimal priors, and which can also be used
for validation. We propose a procedure to obtain such an ab initio model
directly from raw data using Kam's autocorrelation method. Kam's method has
been known since 1980, but it leads to an underdetermined system, with missing
orthogonal matrices. Until now, this system has been solved only for special
cases, such as highly symmetric molecules or molecules for which a homologous
structure was already available. In this paper, we show that knowledge of just
two clean projections is sufficient to guarantee a unique solution to the
system. This system is solved by an optimization-based heuristic. For the first
time, we are then able to obtain a low-resolution ab initio model of an
asymmetric molecule directly from raw data, without 2D class averaging and
without tilting. Numerical results are presented on both synthetic and
experimental data. | [
0,
0,
0,
1,
0,
0
] |
Title: Reinforcement Learning with a Corrupted Reward Channel,
Abstract: No real-world reward function is perfect. Sensory errors and software bugs
may result in RL agents observing higher (or lower) rewards than they should.
For example, a reinforcement learning agent may prefer states where a sensory
error gives it the maximum reward, but where the true reward is actually small.
We formalise this problem as a generalised Markov Decision Problem called
Corrupt Reward MDP. Traditional RL methods fare poorly in CRMDPs, even under
strong simplifying assumptions and when trying to compensate for the possibly
corrupt rewards. Two ways around the problem are investigated. First, by giving
the agent richer data, such as in inverse reinforcement learning and
semi-supervised reinforcement learning, reward corruption stemming from
systematic sensory errors may sometimes be completely managed. Second, by using
randomisation to blunt the agent's optimisation, reward corruption can be
partially managed under some assumptions. | [
1,
0,
0,
1,
0,
0
] |
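The failure mode this abstract describes, and its mitigation by randomisation, can be illustrated with a toy bandit; all names and numbers below are our own, not the paper's experiments:

```python
# Toy corrupted-reward setting (illustrative values, not the paper's).
true_reward = {"a": 0.9, "b": 0.5, "c": 0.1}
observed_reward = dict(true_reward)
observed_reward["c"] = 10.0        # a sensory error inflates arm c

# A greedy agent trusting observations locks onto the corrupted arm,
# whose true reward is actually the worst.
greedy = max(observed_reward, key=observed_reward.get)

# Randomising over the top-k observed arms blunts the optimisation and
# limits the damage, echoing the paper's second workaround.
top2 = sorted(observed_reward, key=observed_reward.get, reverse=True)[:2]
expected_true = sum(true_reward[a] for a in top2) / len(top2)
```

Here the greedy agent earns true reward 0.1, while the randomised choice earns 0.5 in expectation.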
Title: Exchange striction driven magnetodielectric effect and potential photovoltaic effect in polar CaOFeS,
Abstract: CaOFeS is a semiconducting oxysulfide with polar layered triangular
structure. Here a comprehensive theoretical study has been performed to reveal
its physical properties, including magnetism, electronic structure, phase
transition, magnetodielectric effect, as well as optical absorption. Our
calculations confirm the Ising-like G-type antiferromagnetic ground state
driven by the next-nearest neighbor exchanges, which breaks the trigonal
symmetry and is responsible for the magnetodielectric effect driven by exchange
striction. In addition, a large coefficient of visible light absorption is
predicted, which leads to promising photovoltaic effect with the maximum
light-to-electricity energy conversion efficiency up to 24.2%. | [
0,
1,
0,
0,
0,
0
] |
Title: Two-sided Facility Location,
Abstract: Recent years have witnessed the rise of many successful e-commerce
marketplace platforms like the Amazon marketplace, AirBnB, Uber/Lyft, and
Upwork, where a central platform mediates economic transactions between buyers
and sellers. Motivated by these platforms, we formulate a set of facility
location problems that we term Two-sided Facility Location. In our model,
agents arrive at nodes in an underlying metric space, where the metric distance
between any buyer and seller captures the quality of the corresponding match.
The platform posts prices and wages at the nodes, and opens a set of facilities
to route the agents to. The agents at any facility are assumed to be matched.
The platform ensures high match quality by imposing a distance constraint
between a node and the facilities it is routed to. It ensures high service
availability by ensuring flow to the facility is at least a pre-specified lower
bound. Subject to these constraints, the goal of the platform is to maximize
the social surplus (or gains from trade) subject to weak budget balance, i.e.,
profit being non-negative.
We present an approximation algorithm for this problem that yields a $(1 +
\epsilon)$ approximation to surplus for any constant $\epsilon > 0$, while
relaxing the match quality (i.e., maximum distance of any match) by a constant
factor. We use an LP rounding framework that easily extends to other objectives
such as maximizing volume of trade or profit.
We justify our models by considering a dynamic marketplace setting where
agents arrive according to a stochastic process and have finite patience (or
deadlines) for being matched. We perform queueing analysis to show that for
policies that route agents to facilities and match them, ensuring a low
abandonment probability of agents reduces to ensuring sufficient flow arrives
at each facility. | [
1,
0,
0,
0,
0,
0
] |
Title: Coble's group and the integrability of the Gosset-Elte polytopes and tessellations,
Abstract: This paper considers the planar figure of a combinatorial polytope or
tessellation identified by the Coxeter symbol $k_{i,j}$ , inscribed in a conic,
satisfying the geometric constraint that each octahedral cell has a centre.
This realisation exists, and is movable, on account of some constraints being
satisfied as a consequence of the others. A close connection to the birational
group found originally by Coble in the different context of invariants for sets
of points in projective space, allows one to specify precisely a determining subset
of vertices that may be freely chosen. This gives a unified geometric view of
certain integrable discrete systems in one, two and three dimensions. Making
contact with previous geometric accounts in the case of three dimensions, it is
shown how the figure also manifests as a configuration of circles generalising
the Clifford lattices, and how it can be applied to construct the spatial
point-line configurations called the Desargues maps. | [
0,
1,
1,
0,
0,
0
] |
Title: Latent Hinge-Minimax Risk Minimization for Inference from a Small Number of Training Samples,
Abstract: Deep Learning (DL) methods show very good performance when trained on large,
balanced data sets. However, many practical problems involve imbalanced data
sets, or/and classes with a small number of training samples. The performance
of DL methods as well as more traditional classifiers drops significantly in
such settings. Most of the existing solutions for imbalanced problems focus on
customizing the data for training. A more principled solution is to use mixed
Hinge-Minimax risk [19] specifically designed to solve binary problems with
imbalanced training sets. Here we propose a Latent Hinge Minimax (LHM) risk and
a training algorithm that generalizes this paradigm to an ensemble of
hyperplanes that can form arbitrary complex, piecewise linear boundaries. To
extract good features, we combine LHM model with CNN via transfer learning. To
solve the multi-class problem, we map pre-trained category-specific LHM classifiers
to a multi-class neural network and adjust the weights with very fast tuning.
The LHM classifier enables the use of unlabeled data in its training, and the
mapping allows for multi-class inference, resulting in a classifier that
performs better than alternatives when trained on a small number of training
samples. | [
1,
0,
0,
0,
0,
0
] |
Title: Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines,
Abstract: We compare and contrast the statistical physics and quantum physics inspired
approaches for unsupervised generative modeling of classical data. The two
approaches represent probabilities of observed data using energy-based models
and quantum states, respectively. Classical and quantum information patterns of
the target datasets therefore provide principled guidelines for structural
design and learning in these two approaches. Taking the restricted Boltzmann
machines (RBM) as an example, we analyze the information theoretical bounds of
the two approaches. We verify our reasoning by comparing the performance of
RBMs of various architectures on the standard MNIST dataset. | [
0,
0,
0,
1,
0,
0
] |
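For the RBM side of the comparison, the probability of a visible configuration is governed by its free energy, obtained by summing out the binary hidden units analytically; the shapes and values below are illustrative:

```python
import numpy as np

def rbm_free_energy(v, W, b, c):
    """Free energy of a binary RBM, p(v) proportional to exp(-F(v)), with
    hidden units summed out analytically:
        F(v) = -b.v - sum_j log(1 + exp(c_j + (v.W)_j)).
    Shapes and values here are illustrative."""
    return float(-(v @ b) - np.sum(np.logaddexp(0.0, c + v @ W)))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # visible-hidden couplings
b = rng.normal(size=4)        # visible biases
c = rng.normal(size=3)        # hidden biases
v = np.array([1.0, 0.0, 1.0, 0.0])
F = rbm_free_energy(v, W, b, c)
```

Training the energy-based model then amounts to lowering the free energy of observed data relative to the rest of the configuration space.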
Title: A Cofibration Category on Closure Spaces,
Abstract: We construct a cofibration category structure on the category of closure
spaces $\mathbf{Cl}$, the category whose objects are sets endowed with a
Čech closure operator and whose morphisms are the continuous maps between
them. We then study various closure structures on metric spaces, graphs, and
simplicial complexes, showing how each case gives rise to an interesting
homotopy theory. In particular, we show that there exists a natural family of
closure structures on metric spaces which produces a non-trivial homotopy
theory for finite metric spaces, i.e. point clouds, the spaces of interest in
topological data analysis. We then give a closure structure to graphs and
simplicial complexes which may be used to construct a new combinatorial (as
opposed to topological) homotopy theory for each skeleton of those spaces. We
show that there is a Seifert-van Kampen theorem for closure spaces, a
well-defined notion of persistent homotopy and an associated interleaving
distance, and, as an illustration of the difference with the topological
setting, we calculate the fundamental group for the circle and the wedge of
circles endowed with different closure structures. | [
0,
0,
1,
0,
0,
0
] |
Title: Drone Squadron Optimization: a Self-adaptive Algorithm for Global Numerical Optimization,
Abstract: This paper proposes Drone Squadron Optimization, a new self-adaptive
metaheuristic for global numerical optimization which is updated online by a
hyper-heuristic. DSO is an artifact-inspired technique, as opposed to many
algorithms used nowadays, which are nature-inspired. DSO is very flexible
because it is not related to behaviors or natural phenomena. DSO has two core
parts: the semi-autonomous drones that fly over a landscape to explore, and the
Command Center that processes the retrieved data and updates the drones'
firmware whenever necessary. The self-adaptive aspect of DSO in this work is
the perturbation/movement scheme, which is the procedure used to generate
target coordinates. This procedure is evolved by the Command Center during the
global optimization process in order to adapt DSO to the search landscape. DSO
was evaluated on a set of widely employed benchmark functions. The statistical
analysis of the results shows that the proposed method is competitive with the
other methods in the comparison; its performance is promising, but several
future improvements are planned. | [
1,
0,
1,
0,
0,
0
] |
Title: A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets,
Abstract: The original ImageNet dataset is a popular large-scale benchmark for training
Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm
design, architecture search, and hyperparameter tuning) on the original dataset
might be prohibitive, we propose to consider a downsampled version of ImageNet.
In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet,
our proposed ImageNet32$\times$32 (and its variants ImageNet64$\times$64 and
ImageNet16$\times$16) contains exactly the same number of classes and images as
ImageNet, with the only difference that the images are downsampled to
32$\times$32 pixels per image (64$\times$64 and 16$\times$16 pixels for the
variants, respectively). Experiments on these downsampled variants are
dramatically faster than on the original ImageNet and the characteristics of
the downsampled datasets with respect to optimal hyperparameters appear to
remain similar. The proposed datasets and scripts to reproduce our results are
available at this http URL and
this https URL | [
1,
0,
0,
0,
0,
0
] |
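The downsampling step itself can be sketched as block averaging (a box filter); the published datasets may have been produced with other standard resampling filters, so this is just the simplest concrete choice:

```python
import numpy as np

def downsample(img, factor):
    """Block-average (box-filter) downsampling of an H x W image, e.g.
    256x256 -> 32x32 with factor 8. Illustrative choice of filter; other
    standard resampling filters give slightly different results."""
    h, w = img.shape[:2]
    assert h % factor == 0 and w % factor == 0
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
small = downsample(img, 2)  # 4x4 -> 2x2
```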
Title: Multi-wavelength Spectral Analysis of Ellerman Bombs Observed by FISS and IRIS,
Abstract: Ellerman bombs (EBs) are a kind of solar activity suggested to occur
in the lower atmosphere. Recent observations using the Interface Region
Imaging Spectrograph (IRIS) show connections of EBs and IRIS bombs (IBs),
implying that EBs might be heated to a much higher temperature ($8\times10^{4}$
K) than previous results. Here we perform a spectral analysis of the EBs
simultaneously observed by the Fast Imaging Solar Spectrograph (FISS) and IRIS.
The observational results show clear evidence of heating in the lower
atmosphere, indicated by the wing enhancement in H$\alpha$, Ca II 8542 Å
and Mg II triplet lines, and also by brightenings in the images of 1700 Å
and 2832 Å ultraviolet continuum channels. Additionally, the Mg II triplet
line intensity is correlated with that of H$\alpha$ when the EB occurs,
indicating the possibility of using the triplet as an alternative way to identify
EBs. However, we do not find any signal in the hotter IRIS lines (C II and Si IV).
For further analysis, we employ a two-cloud model to fit the two chromospheric
lines (H$\alpha$ and Ca II 8542 Å) simultaneously, and obtain a temperature
enhancement of 2300 K for a strong EB. This temperature is among the highest of
previous modeling results while still insufficient to produce IB signatures at
ultraviolet wavelengths. | [
0,
1,
0,
0,
0,
0
] |
Title: Numerical study of the Kadomtsev--Petviashvili equation and dispersive shock waves,
Abstract: A detailed numerical study of the long time behaviour of dispersive shock
waves in solutions to the Kadomtsev-Petviashvili (KP) I equation is presented.
It is shown that modulated lump solutions emerge from the dispersive shock
waves. For the description of dispersive shock waves, Whitham modulation
equations for KP are obtained. It is shown that the modulation equations near
the soliton line are hyperbolic for the KPII equation while they are elliptic
for the KPI equation leading to a focusing effect and the formation of lumps.
Such a behaviour is similar to the appearance of breathers for the focusing
nonlinear Schrödinger equation in the semiclassical limit. | [
0,
1,
1,
0,
0,
0
] |
Title: Asymptotic power of Rao's score test for independence in high dimensions,
Abstract: Let ${\bf R}$ be the Pearson correlation matrix of $m$ normal random
variables. Rao's score test for the independence hypothesis $H_0 : {\bf R}
= {\bf I}_m$, where ${\bf I}_m$ is the identity matrix of dimension $m$, was
first considered by Schott (2005) in the high dimensional setting. In this
paper, we study the asymptotic minimax power function of this test, under an
asymptotic regime in which both $m$ and the sample size $n$ tend to infinity
with the ratio $m/n$ upper bounded by a constant. In particular, our result
implies that Rao's score test is rate-optimal for detecting the dependency
signal $\|{\bf R} - {\bf I}_m\|_F$ of order $\sqrt{m/n}$, where $\|\cdot\|_F$
is the matrix Frobenius norm. | [
0,
0,
1,
1,
0,
0
] |
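The dependency signal $\|{\bf R} - {\bf I}_m\|_F$ from the abstract above is easy to compute from data; a minimal numpy sketch (the function name and the toy dependence structure are illustrative):

```python
import numpy as np

def dependency_signal(X):
    """Frobenius-norm departure of the sample correlation matrix from
    the identity, ||R - I||_F (columns of X are the m variables)."""
    R = np.corrcoef(X, rowvar=False)
    return np.linalg.norm(R - np.eye(R.shape[0]), ord='fro')

rng = np.random.default_rng(0)
n, m = 500, 50
X_indep = rng.standard_normal((n, m))                # H_0 holds
X_dep = X_indep + 0.5 * rng.standard_normal((n, 1))  # common factor
print(dependency_signal(X_indep) < dependency_signal(X_dep))  # True
```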
Title: Sensing and Modeling Human Behavior Using Social Media and Mobile Data,
Abstract: In the past years we have witnessed the emergence of the new discipline of
computational social science, which promotes a new data-driven and
computation-based approach to social sciences. In this article we discuss how
the availability of new technologies such as online social media and mobile
smartphones has allowed researchers to passively collect human behavioral data
at a scale and a level of granularity that were just unthinkable some years
ago. We also discuss how these digital traces can then be used to prove (or
disprove) existing theories and develop new models of human behavior. | [
1,
1,
0,
0,
0,
0
] |
Title: Taxonomy Induction using Hypernym Subsequences,
Abstract: We propose a novel, semi-supervised approach towards domain taxonomy
induction from an input vocabulary of seed terms. Unlike all previous
approaches, which typically extract direct hypernym edges for terms, our
approach utilizes a novel probabilistic framework to extract hypernym
subsequences. Taxonomy induction from extracted subsequences is cast as an
instance of the minimum-cost flow problem on a carefully designed directed
graph. Through experiments, we demonstrate that our approach outperforms
state-of-the-art taxonomy induction approaches across four languages.
Importantly, we also show that our approach is robust to the presence of noise
in the input vocabulary. To the best of our knowledge, no previous approaches
have been empirically proven to manifest noise-robustness in the input
vocabulary. | [
1,
0,
0,
0,
0,
0
] |
Title: WYS*: A DSL for Verified Secure Multi-party Computations,
Abstract: Secure multi-party computation (MPC) enables a set of mutually distrusting
parties to cooperatively compute, using a cryptographic protocol, a function
over their private data. This paper presents Wys*, a new domain-specific
language (DSL) for writing mixed-mode MPCs. Wys* is an embedded DSL hosted in
F*, a verification-oriented, effectful programming language. Wys* source
programs are essentially F* programs written in a custom MPC effect, meaning
that the programmers can use F*'s logic to verify the correctness and security
properties of their programs. To reason about the distributed runtime semantics
of these programs, we formalize a deep embedding of Wys*, also in F*. We
mechanize the necessary metatheory to prove that the properties verified for
the Wys* source programs carry over to the distributed, multi-party semantics.
Finally, we use F*'s extraction to extract an interpreter that we have proved
matches this semantics, yielding a partially verified implementation. Wys* is
the first DSL to enable formal verification of MPC programs. We have
implemented several MPC protocols in Wys*, including private set intersection,
joint median, and an MPC card dealing application, and have verified their
correctness and security. | [
1,
0,
0,
0,
0,
0
] |
Title: Probabilities of causation of climate changes,
Abstract: Multiple changes in Earth's climate system have been observed over the past
decades. Determining how likely each of these changes is to have been caused
by human influence is important for decision making on mitigation and
adaptation policy. Here we describe an approach for deriving the probability
that anthropogenic forcings have caused a given observed change. The proposed
approach is anchored into causal counterfactual theory (Pearl 2009) which has
been introduced recently, and was in fact partly used already, in the context
of extreme weather event attribution (EA). We argue that these concepts are
also relevant to, and can be straightforwardly extended to, the context of
detection and attribution of long term trends associated to climate change
(D&A). For this purpose, and in agreement with the principle of
"fingerprinting" applied in the conventional D&A framework, a trajectory of
change is converted into an event occurrence defined by maximizing the causal
evidence associated to the forcing under scrutiny. Other key assumptions used
in the conventional D&A framework, in particular those related to numerical
model errors, can also be adapted conveniently to this approach. Our proposal
thus allows us to bridge the conventional framework with the standard causal
theory, in an attempt to improve the quantification of causal probabilities. An
illustration suggests that our approach is prone to yield a significantly
higher estimate of the probability that anthropogenic forcings have caused the
observed temperature change, thus supporting more assertive causal claims. | [
0,
0,
0,
1,
0,
0
] |
Title: Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience,
Abstract: We consider the problem of transferring policies to the real world by
training on a distribution of simulated scenarios. Rather than manually tuning
the randomization of simulations, we adapt the simulation parameter
distribution using a few real world roll-outs interleaved with policy training.
In doing so, we are able to change the distribution of simulations to improve
the policy transfer by matching the policy behavior in simulation and the real
world. We show that policies trained with our method are able to reliably
transfer to different robots in two real world tasks: swing-peg-in-hole and
opening a cabinet drawer. The video of our experiments can be found at
this https URL | [
1,
0,
0,
0,
0,
0
] |
Title: Revisiting Simple Neural Networks for Learning Representations of Knowledge Graphs,
Abstract: We address the problem of learning vector representations for entities and
relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This
problem has received significant attention in the past few years and multiple
methods have been proposed. Most of the existing methods in the literature use
a predefined characteristic scoring function for evaluating the correctness of
KG triples. These scoring functions distinguish correct triples (high score)
from incorrect ones (low score). However, their performance varies across
different datasets. In this work, we demonstrate that a simple neural network
based score function can consistently achieve near state-of-the-art performance
on multiple datasets. We also quantitatively demonstrate biases in standard
benchmark datasets, and highlight the need to perform evaluation spanning
various datasets. | [
1,
0,
0,
1,
0,
0
] |
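The abstract does not specify its "simple neural network based score function"; as a hedged illustration of the general shape of such a scorer, here is a tiny untrained MLP over concatenated embeddings (all names, sizes, and weights are made up for the example, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
d, hidden = 8, 16

# Toy embedding tables for 3 entities and 2 relations. In a real KBC
# model these are learned; random weights only illustrate the shapes.
E = rng.standard_normal((3, d))
R = rng.standard_normal((2, d))
W1 = rng.standard_normal((3 * d, hidden))
W2 = rng.standard_normal(hidden)

def score(h, r, t):
    """Scalar MLP score for a triple (head, relation, tail):
    higher = more plausible."""
    x = np.concatenate([E[h], R[r], E[t]])
    return np.tanh(x @ W1) @ W2

print(np.isfinite(score(0, 1, 2)))  # True
```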
Title: Single Iteration Conditional Based DSE Considering Spatial and Temporal Correlation,
Abstract: The increasing complexity of distribution network calls for advancement in
distribution system state estimation (DSSE) to monitor the operating conditions
more accurately. Sufficient number of measurements is imperative for a reliable
and accurate state estimation. The limitation on the measurement devices is
generally tackled by using so-called pseudo measured data. However, the
errors in pseudo data by current techniques are quite high, leading to a poor
DSSE. As customer loads in distribution networks show high cross-correlation in
various locations and over successive time steps, it is plausible that
deploying the spatial-temporal dependencies can improve the pseudo data
accuracy and estimation. Although the role of spatial dependency in DSSE has
been addressed in the literature, one can hardly find an efficient DSSE
framework capable of incorporating temporal dependencies present in customer
loads. Consequently, to obtain a more efficient and accurate state estimation,
we propose a new non-iterative DSSE framework to involve spatial-temporal
dependencies together. The spatial-temporal dependencies are modeled by
conditional multivariate complex Gaussian distributions and are studied for
both static and real-time state estimations, where information at preceding
time steps is employed to increase the accuracy of DSSE. The efficiency of the
proposed approach is verified based on quality and accuracy indices, standard
deviation and computational time. Two balanced medium voltage (MV) and one
unbalanced low voltage (LV) distribution case studies are used for evaluations. | [
0,
0,
0,
1,
0,
0
] |
Title: Enhanced Network Embeddings via Exploiting Edge Labels,
Abstract: Network embedding methods aim at learning low-dimensional latent
representation of nodes in a network. While achieving competitive performance
on a variety of network inference tasks such as node classification and link
prediction, these methods treat the relations between nodes as a binary
variable and ignore the rich semantics of edges. In this work, we attempt to
learn network embeddings which simultaneously preserve network structure and
relations between nodes. Experiments on several real-world networks illustrate
that by considering different relations between different node pairs, our
method is capable of producing node embeddings of higher quality than a number
of state-of-the-art network embedding methods, as evaluated on a challenging
multi-label node classification task. | [
1,
0,
0,
0,
0,
0
] |
Title: Virtual Constraints and Hybrid Zero Dynamics for Realizing Underactuated Bipedal Locomotion,
Abstract: Underactuation is ubiquitous in human locomotion and should be ubiquitous in
bipedal robotic locomotion as well. This chapter presents a coherent theory for
the design of feedback controllers that achieve stable walking gaits in
underactuated bipedal robots. Two fundamental tools are introduced, virtual
constraints and hybrid zero dynamics. Virtual constraints are relations on the
state variables of a mechanical model that are imposed through a time-invariant
feedback controller. One of their roles is to synchronize the robot's joints to
an internal gait phasing variable. A second role is to induce a low dimensional
system, the zero dynamics, that captures the underactuated aspects of a robot's
model, without any approximations. To enhance intuition, the relation between
physical constraints and virtual constraints is first established. From here,
the hybrid zero dynamics of an underactuated bipedal model is developed, and
its fundamental role in the design of asymptotically stable walking motions is
established. The chapter includes numerous references to robots on which the
highlighted techniques have been implemented. | [
1,
0,
1,
0,
0,
0
] |
Title: Selective insulators and anomalous responses in correlated fermions with synthetic extra dimensions,
Abstract: We study a three-component fermionic fluid in an optical lattice in a regime
of intermediate-to-strong interactions allowing for Raman processes connecting
the different components, similar to those used to create artificial gauge
fields (AGF). Using Dynamical Mean-Field Theory we show that the combined
effect of interactions and AGFs induces a variety of anomalous phases in which
different components of the fermionic fluid display qualitative differences,
i.e., the physics is flavor-selective. Remarkably, the different components can
display huge differences in the correlation effects, measured by their
effective masses and non-monotonic behavior of their occupation number as a
function of the chemical potential, signaling a sort of selective instability
of the overall stable quantum fluid. | [
0,
1,
0,
0,
0,
0
] |
Title: Computation of optimal transport and related hedging problems via penalization and neural networks,
Abstract: This paper presents a widely applicable approach to solving (multi-marginal,
martingale) optimal transport and related problems via neural networks. The
core idea is to penalize the optimization problem in its dual formulation and
reduce it to a finite dimensional one which corresponds to optimizing a neural
network with smooth objective function. We present numerical examples from
optimal transport, martingale optimal transport, portfolio optimization under
uncertainty and generative adversarial networks that showcase the generality
and effectiveness of the approach. | [
0,
0,
0,
1,
0,
1
] |
Title: First measurements in search for keV-sterile neutrino in tritium beta-decay by Troitsk nu-mass experiment,
Abstract: We present the first measurements of tritium beta-decay spectrum in the
electron energy range 16-18.6 keV. The goal is to find distortions which may
correspond to the presence of a heavy sterile neutrino. A possible
contribution of this kind would manifest itself as a kink in the spectrum with
a similar shape but with end point shifted by the value of a heavy neutrino
mass. We set new upper limits on the neutrino mixing matrix element U^2_{e4}
which improve existing limits by a factor from 2 to 5 in the mass range 0.1-2
keV. | [
0,
1,
0,
0,
0,
0
] |
Title: A new construction of universal spaces for asymptotic dimension,
Abstract: For each $n$, we construct a separable metric space $\mathbb{U}_n$ that is
universal in the coarse category of separable metric spaces with asymptotic
dimension ($\mathop{asdim}$) at most $n$ and universal in the uniform category
of separable metric spaces with uniform dimension ($\mathop{udim}$) at most
$n$. Thus, $\mathbb{U}_n$ serves as a universal space for dimension $n$ in both
the large-scale and infinitesimal topology. More precisely, we prove:
\[
\mathop{asdim} \mathbb{U}_n = \mathop{udim} \mathbb{U}_n = n
\] and such that for each separable metric space $X$,
a) if $\mathop{asdim} X \leq n$, then $X$ is coarsely equivalent to a subset
of $\mathbb{U}_n$;
b) if $\mathop{udim} X \leq n$, then $X$ is uniformly homeomorphic to a
subset of $\mathbb{U}_n$. | [
0,
0,
1,
0,
0,
0
] |
Title: Tensor ring decomposition,
Abstract: Tensor decompositions such as the canonical format and the tensor train
format have been widely utilized to reduce storage costs and operational
complexities for high-dimensional data, achieving linear scaling with the input
dimension instead of exponential scaling. In this paper, we investigate even
lower storage-cost representations in the tensor ring format, which is an
extension of the tensor train format with variable end-ranks. Firstly, we
introduce two algorithms for converting a tensor in full format to tensor ring
format with low storage cost. Secondly, we detail a rounding operation for
tensor rings and show how this requires new definitions of common linear
algebra operations in the format to obtain storage-cost savings. Lastly, we
introduce algorithms for transforming the graph structure of graph-based tensor
formats, with orders of magnitude lower complexity than existing literature.
The efficiency of all algorithms is demonstrated on a number of numerical
examples, and we achieve up to more than an order of magnitude higher
compression ratios than previous approaches to using the tensor ring format. | [
1,
0,
0,
0,
0,
0
] |
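As a back-of-the-envelope illustration of the storage savings discussed above (not the paper's algorithms): a tensor ring stores one core $G_k$ of shape $(r_k, n_k, r_{k+1})$ per mode, with ranks wrapping around cyclically, so the parameter count is linear in the order:

```python
def tr_storage(dims, ranks):
    """Parameter count of a tensor ring with cores of shape
    (r_k, n_k, r_{k+1}), ranks cyclic so that r_d = r_0."""
    d = len(dims)
    return sum(ranks[k] * dims[k] * ranks[(k + 1) % d] for k in range(d))

dims = [10] * 6   # full tensor would hold 10**6 = 1,000,000 entries
ranks = [4] * 6   # all tensor-ring ranks equal to 4
print(tr_storage(dims, ranks))  # 960
```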
Title: Combining Information from Multiple Forecasters: Inefficiency of Central Tendency,
Abstract: Even though the forecasting literature agrees that aggregating multiple
predictions of some future outcome typically outperforms the individual
predictions, there is no general consensus about the right way to do this. Most
common aggregators are means, defined loosely as aggregators that always remain
between the smallest and largest predictions. Examples include the arithmetic
mean, trimmed means, median, mid-range, and many other measures of central
tendency. If the forecasters use different information, the aggregator ideally
combines their information into a consensus without losing or distorting any of
it. An aggregator that achieves this is considered efficient. Unfortunately,
our results show that if the forecasters use their information accurately, an
aggregator that always remains strictly between the smallest and largest
predictions is never efficient in practice. A similar result holds even if the
ideal predictions are distorted with random error that is centered at zero. If
these noisy predictions are aggregated with a similar notion of centrality,
then, under some mild conditions, the aggregator is asymptotically inefficient. | [
0,
0,
1,
1,
0,
0
] |
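The "means" defined loosely in the abstract above can be sketched concretely (illustrative helper; trimming one value from each end is just one choice of trimmed mean):

```python
import numpy as np

def aggregators(preds):
    """Several measures of central tendency; each always stays between
    the smallest and largest prediction, i.e., each is a 'mean' in the
    abstract's sense."""
    p = np.sort(np.asarray(preds, dtype=float))
    return {
        'mean': p.mean(),
        'median': np.median(p),
        'trimmed': p[1:-1].mean() if len(p) > 2 else p.mean(),
        'mid-range': (p[0] + p[-1]) / 2,
    }

preds = [0.2, 0.4, 0.5, 0.9]
agg = aggregators(preds)
assert all(min(preds) <= v <= max(preds) for v in agg.values())
print(round(agg['mid-range'], 2))  # 0.55
```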
Title: QCD-Aware Recursive Neural Networks for Jet Physics,
Abstract: Recent progress in applying machine learning for jet physics has been built
upon an analogy between calorimeters and images. In this work, we present a
novel class of recursive neural networks built instead upon an analogy between
QCD and natural languages. In the analogy, four-momenta are like words and the
clustering history of sequential recombination jet algorithms is like the
parsing of a sentence. Our approach works directly with the four-momenta of a
variable-length set of particles, and the jet-based tree structure varies on an
event-by-event basis. Our experiments highlight the flexibility of our method
for building task-specific jet embeddings and show that recursive architectures
are significantly more accurate and data efficient than previous image-based
networks. We extend the analogy from individual jets (sentences) to full events
(paragraphs), and show for the first time an event-level classifier operating
on all the stable particles produced in an LHC event. | [
0,
1,
0,
1,
0,
0
] |
Title: On algebraic branching programs of small width,
Abstract: In 1979 Valiant showed that the complexity class VP_e of families with
polynomially bounded formula size is contained in the class VP_s of families
that have algebraic branching programs (ABPs) of polynomially bounded size.
Motivated by the problem of separating these classes we study the topological
closure VP_e-bar, i.e. the class of polynomials that can be approximated
arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a
strikingly simple complete polynomial (in characteristic different from 2)
whose recursive definition is similar to the Fibonacci numbers. Further
understanding this polynomial seems to be a promising route to new formula
lower bounds.
Our methods are rooted in the study of ABPs of small constant width. In 1992
Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3
ABP size. We extend their result (in characteristic different from 2) by
showing that approximate formula size is polynomially equivalent to approximate
width-2 ABP size. This is surprising because in 2011 Allender and Wang gave
explicit polynomials that cannot be computed by width-2 ABPs at all! The
details of our construction lead to the aforementioned characterization of
VP_e-bar.
As a natural continuation of this work we prove that the class VNP can be
described as the class of families that admit a hypercube summation of
polynomially bounded dimension over a product of polynomially many affine
linear forms. This gives the first separations of algebraic complexity classes
from their nondeterministic analogs. | [
1,
0,
0,
0,
0,
0
] |
Title: A lightweight thermal heat switch for redundant cryocooling on satellites,
Abstract: A previously designed cryogenic thermal heat switch for space applications
has been optimized for low mass, high structural stability, and reliability.
The heat switch makes use of the large linear thermal expansion coefficient
(CTE) of the thermoplastic UHMW-PE for actuation. A structure model, which
includes the temperature dependent properties of the actuator, is derived to be
able to predict the contact pressure between the switch parts. This pressure
was used in a thermal model in order to predict the switch performance under
different heat loads and operating temperatures. The two models were used to
optimize the mass and stability of the switch. Its reliability was proven by
cyclic actuation of the switch and by shaker tests. | [
0,
1,
0,
0,
0,
0
] |
Title: Fast Automatic Smoothing for Generalized Additive Models,
Abstract: Multiple generalized additive models (GAMs) are a type of distributional
regression wherein parameters of probability distributions depend on predictors
through smooth functions, with selection of the degree of smoothness via $L_2$
regularization. Multiple GAMs allow finer statistical inference by
incorporating explanatory information in any or all of the parameters of the
distribution. Owing to their nonlinearity, flexibility and interpretability,
GAMs are widely used, but reliable and fast methods for automatic smoothing in
large datasets are still lacking, despite recent advances. We develop a general
methodology for automatically learning the optimal degree of $L_2$
regularization for multiple GAMs using an empirical Bayes approach. The smooth
functions are penalized by different amounts, which are learned simultaneously
by maximization of a marginal likelihood through an approximate
expectation-maximization algorithm that involves a double Laplace approximation
at the E-step, and leads to an efficient M-step. Empirical analysis shows that
the resulting algorithm is numerically stable, faster than all existing methods
and achieves state-of-the-art accuracy. For illustration, we apply it to an
important and challenging problem in the analysis of extremal data. | [
0,
0,
0,
1,
0,
0
] |
Title: Meromorphic functions with small Schwarzian derivative,
Abstract: We consider the family of all meromorphic functions $f$ of the form $$
f(z)=\frac{1}{z}+b_0+b_1z+b_2z^2+\cdots $$ analytic and locally univalent in
the puncture disk $\mathbb{D}_0:=\{z\in\mathbb{C}:\,0<|z|<1\}$. Our first
objective in this paper is to find a sufficient condition for $f$ to be
meromorphically convex of order $\alpha$, $0\le \alpha<1$, in terms of the fact
that the absolute value of the well-known Schwarzian derivative $S_f (z)$ of
$f$ is bounded above by the smallest positive root of a non-linear equation.
Secondly, we consider a family of functions $g$ of the form
$g(z)=z+a_2z^2+a_3z^3+\cdots$ analytic and locally univalent in the open unit
disk $\mathbb{D}:=\{z\in\mathbb{C}:\,|z|<1\}$, and show that $g$ belongs
to a family of functions convex in one direction if $|S_g(z)|$ is bounded above
by a small positive constant depending on the second coefficient $a_2$. In
particular, we show that such functions $g$ are also contained in the starlike
and close-to-convex family. | [
0,
0,
1,
0,
0,
0
] |
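For reference (a standard definition, not specific to this paper), the Schwarzian derivative appearing in the abstract above is $$ S_f(z)=\left(\frac{f''(z)}{f'(z)}\right)'-\frac{1}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}=\frac{f'''(z)}{f'(z)}-\frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}. $$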
Title: Optimal heat transfer and optimal exit times,
Abstract: A heat exchanger can be modeled as a closed domain containing an
incompressible fluid. The moving fluid has a temperature distribution obeying
the advection-diffusion equation, with zero temperature boundary conditions at
the walls. Starting from a positive initial temperature distribution in the
interior, the goal is to flux the heat through the walls as efficiently as
possible. Here we consider a distinct but closely related problem, that of the
integrated mean exit time of Brownian particles starting inside the domain.
Since flows favorable to rapid heat exchange should lower exit times, we
minimize a norm of the exit time. This is a time-independent optimization
problem that we solve analytically in some limits, and numerically otherwise.
We find an (at least locally) optimal velocity field that cools the domain on a
mechanical time scale, in the sense that the integrated mean exit time is
independent of the molecular diffusivity in the limit of large-energy flows.
0,
1,
0,
0,
0,
0
] |
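As a concrete one-dimensional instance of the mean-exit-time problem above (a standard fact, included for orientation): for pure Brownian motion with diffusivity $\kappa$ and no flow on an interval $(0,L)$, the mean exit time $u(x)$ solves $\kappa\, u''(x) = -1$ with $u(0)=u(L)=0$, giving $$ u(x)=\frac{x(L-x)}{2\kappa}, $$ and favorable flows lower this baseline.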
Title: Simulating Linear Logic in 1-Only Linear Logic,
Abstract: Linear Logic was introduced by Girard as a resource-sensitive refinement of
classical logic. It turned out that full propositional Linear Logic is
undecidable (Lincoln, Mitchell, Scedrov, and Shankar) and, hence, it is more
expressive than (modalized) classical or intuitionistic logic. In this paper we
focus on the study of the simplest fragments of Linear Logic, such as the
one-literal and constant-only fragments (the latter contains no literals at
all). Here we demonstrate that all these extremely simple fragments of Linear
Logic (one-literal, $\bot$-only, and even unit-only) are exactly of the same
expressive power as the corresponding full versions. We present also a complete
computational interpretation (in terms of acyclic programs with stack) for
bottom-free Intuitionistic Linear Logic. Based on this interpretation, we prove
the fairness of our encodings and establish the foregoing complexity results. | [
1,
0,
0,
0,
0,
0
] |
Title: Measuring bot and human behavioral dynamics,
Abstract: Bots, social media accounts controlled by software rather than by humans,
have recently been under the spotlight for their association with various forms
of online manipulation. To date, much work has focused on social bot detection,
but little attention has been devoted to the characterization and measurement
of the behavior and activity of bots, as opposed to humans'. Over the course of
the years, bots have become more sophisticated and capable of reflecting some
short-term behavior, emulating that of human users. The goal of this paper is
to study the behavioral dynamics that bots exhibit over the course of one
activity session, and highlight if and how these differ from human activity
signatures. By using a large Twitter dataset associated with recent political
events, we first separate bots and humans, then isolate their activity
sessions. We compile a list of quantities to be measured, like the propensity
of users to engage in social interactions or to produce content. Our analysis
highlights the presence of short-term behavioral trends in humans, which can be
associated with a cognitive origin, that are absent in bots, intuitively due to
their automated activity. These findings are finally codified to create and
evaluate a machine learning algorithm to detect activity sessions produced by
bots and humans, to allow for more nuanced bot detection strategies. | [
1,
0,
0,
0,
0,
0
] |
Title: Optimal control of elliptic equations with positive measures,
Abstract: Optimal control problems without control costs in general do not possess
solutions due to the lack of coercivity. However, unilateral constraints
together with the assumption of existence of strictly positive solutions of a
pre-adjoint state equation, are sufficient to obtain existence of optimal
solutions in the space of Radon measures. Optimality conditions for these
generalized minimizers can be obtained using Fenchel duality, which requires a
non-standard perturbation approach if the control-to-observation mapping is not
continuous (e.g., for Neumann boundary control in three dimensions). Combining
a conforming discretization of the measure space with a semismooth Newton
method allows the numerical solution of the optimal control problem. | [
0,
0,
1,
0,
0,
0
] |
Title: Fraunhofer diffraction at the two-dimensional quadratically distorted (QD) Grating,
Abstract: A two-dimensional (2D) mathematical model of quadratically distorted (QD)
grating is established with the principles of Fraunhofer diffraction and
Fourier optics. Discrete sampling and bisection algorithm are applied for
finding numerical solution of the diffraction pattern of QD grating. This 2D
mathematical model allows the precise design of QD grating and improves the
optical performance of simultaneous multiplane imaging system. | [
0,
1,
0,
0,
0,
0
] |
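The bisection step mentioned above is the standard bracketing root finder; a generic sketch (not the paper's grating equations):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Standard bisection root finder of the kind the abstract applies
    to the QD-grating diffraction pattern (generic sketch)."""
    fa = f(a)
    assert fa * f(b) < 0, "root must be bracketed by [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

print(round(bisect(math.cos, 0.0, 2.0), 6))  # 1.570796, i.e. pi/2
```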
Title: Quantizing Euclidean motions via double-coset decomposition,
Abstract: Concepts from mathematical crystallography and group theory are used here to
quantize the group of rigid-body motions, resulting in a "motion alphabet" with
which to express robot motion primitives. From these primitives it is possible
to develop a dictionary of physical actions. Equipped with an alphabet of the
sort developed here, intelligent actions of robots in the world can be
approximated with finite sequences of characters, thereby forming the
foundation of a language in which to articulate robot motion. In particular, we
use the discrete handedness-preserving symmetries of macromolecular crystals
(known in mathematical crystallography as Sohncke space groups) to form a
coarse discretization of the space $\rm{SE}(3)$ of rigid-body motions. This
discretization is made finer by subdividing using the concept of double-coset
decomposition. More specifically, a very efficient, equivolumetric quantization
of spatial motion can be defined using the group-theoretic concept of a
double-coset decomposition of the form $\Gamma \backslash \rm{SE}(3) / \Delta$,
where $\Gamma$ is a Sohncke space group and $\Delta$ is a finite group of
rotational symmetries such as those of the icosahedron. The resulting discrete
alphabet is based on a very uniform sampling of $\rm{SE}(3)$ and is a tool for
describing the continuous trajectories of robots and humans. The general
"signals to symbols" problem in artificial intelligence is cast in this
framework for robots moving continuously in the world, and we present a
coarse-to-fine search scheme here to efficiently solve this decoding problem in
practice. | [
1,
0,
0,
0,
0,
0
] |
Title: A Study of FOSS'2013 Survey Data Using Clustering Techniques,
Abstract: FOSS is an acronym for Free and Open Source Software. The FOSS 2013 survey
primarily targets FOSS contributors, and the relevant anonymized dataset is
publicly available under a CC BY-SA license. In this study, the dataset is analyzed from a
critical perspective using statistical and clustering techniques (especially
multiple correspondence analysis) with a strong focus on women contributors
towards discovering hidden trends and facts. Important inferences are drawn
about development practices and other facets of the free software and OSS
worlds. | [
1,
0,
0,
1,
0,
0
] |
Title: Linear-scaling electronic structure theory: Electronic temperature in the Kernel Polynomial Method,
Abstract: Linear-scaling electronic structure methods based on the calculation of
moments of the underlying electronic Hamiltonian offer a computationally
efficient and numerically robust scheme to drive large-scale atomistic
simulations, in which the quantum-mechanical nature of the electrons is
explicitly taken into account. We compare the kernel polynomial method to the
Fermi operator expansion method and establish a formal connection between the
two approaches. We show that the convolution of the kernel polynomial method
may be understood as an effective electron temperature. The results of a number
of possible kernels are formally examined, and then applied to a representative
tight-binding model. | [
0,
1,
0,
0,
0,
0
] |
Title: Characterizing the 2016 Russian IRA Influence Campaign,
Abstract: Until recently, social media were seen to promote democratic discourse on
social and political issues. However, this powerful communication ecosystem has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the ongoing U.S.
Congress investigation of Russian interference in the 2016 U.S. election
campaign, with Russia accused of, among other things, using trolls (malicious
accounts created for the purpose of manipulation) and bots (automated accounts)
to spread propaganda and politically biased information. In this study, we
explore the effects of this manipulation campaign, taking a closer look at
users who re-shared the posts produced on Twitter by the Russian troll accounts
publicly disclosed by U.S. Congress investigation. We collected a dataset of 13
million election-related posts shared on Twitter in the year of 2016 by over a
million distinct users. This dataset includes accounts associated with the
identified Russian trolls as well as users sharing posts in the same time
period on a variety of topics around the 2016 elections. We use label
propagation to infer the users' ideology based on the news sources they share.
We are able to classify a large number of users as liberal or conservative with
precision and recall above 84%. Conservative users who retweet Russian trolls
produced significantly more tweets than liberal ones, about 8 times as many.
Additionally, the trolls' position in the retweet network is stable over time,
unlike the users who retweet them, who form the core of the election-related
retweet network by the end of 2016. Using state-of-the-art bot
detection techniques, we estimate that about 5% and 11% of liberal and
conservative users are bots, respectively. | [
1,
0,
0,
0,
0,
0
] |
Title: Correlating Cellular Features with Gene Expression using CCA,
Abstract: To understand the biology of cancer, joint analysis of multiple data
modalities, including imaging and genomics, is crucial. The involved nature of
gene-microenvironment interactions necessitates the use of algorithms which
treat both data types equally. We propose the use of canonical correlation
analysis (CCA) and a sparse variant as a preliminary discovery tool for
identifying connections across modalities, specifically between gene expression
and features describing cell and nucleus shape, texture, and stain intensity in
histopathological images. Applied to 615 breast cancer samples from The Cancer
Genome Atlas, CCA revealed significant correlation of several image features
with expression of PAM50 genes, known to be linked to outcome, while Sparse CCA
revealed associations with enrichment of pathways implicated in cancer without
leveraging prior biological understanding. These findings affirm the utility of
CCA for joint phenotype-genotype analysis of cancer. | [
0,
0,
0,
1,
1,
0
] |
Title: An Empirical Analysis of Vulnerabilities in Python Packages for Web Applications,
Abstract: This paper examines software vulnerabilities in common Python packages used
particularly for web development. The empirical dataset is based on the PyPI
package repository and the so-called Safety DB used to track vulnerabilities in
selected packages within the repository. The methodological approach builds on
a release-based time series analysis of the conditional probabilities for the
releases of the packages to be vulnerable. According to the results, many of
the Python vulnerabilities observed seem to be only modestly severe; input
validation and cross-site scripting have been the most typical vulnerabilities.
In terms of the time series analysis based on the release histories, only the
recent past is observed to be relevant for statistical predictions; the
classical Markov property holds. | [
1,
0,
0,
0,
0,
0
] |
Title: A Hybrid MILP and IPM for Dynamic Economic Dispatch with Valve Point Effect,
Abstract: Dynamic economic dispatch with valve-point effect (DED-VPE) is a non-convex
and non-differentiable optimization problem which is difficult to solve
efficiently. In this paper, a hybrid mixed integer linear programming (MILP)
and interior point method (IPM), denoted by MILP-IPM, is proposed to solve such
a DED-VPE problem, where the complicated transmission loss is also included.
Due to the non-differentiable characteristic of DED-VPE, classical
derivative-based optimization methods can no longer be used. With the help of
model reformulation, a differentiable non-linear programming (NLP) formulation
which can be directly solved by IPM is derived. However, if the DED-VPE is
solved by IPM in a single step, the optimization easily becomes trapped in a
poor local optimum due to its non-convex, multiple-local-minima
characteristics. To obtain a better solution, an MILP method is required to
solve the DED-VPE without transmission loss, yielding a good initial point for
IPM to improve the quality of the solution. Simulation results demonstrate the
validity and effectiveness of the proposed MILP-IPM in solving DED-VPE. | [
0,
0,
1,
0,
0,
0
] |
Title: Using Big Data Technologies for HEP Analysis,
Abstract: The HEP community is approaching an era where the excellent performance of
the particle accelerators in delivering collisions at high rates will force the
experiments to record a large amount of information. The growing size of the
datasets could potentially become a limiting factor in the capability to
produce scientific results timely and efficiently. Recently, new technologies
and new approaches have been developed in industry to address the need to
retrieve information as quickly as possible when analyzing PB and EB datasets.
Providing the scientists with these modern computing tools will lead to
rethinking the principles of data analysis in HEP, making the overall
scientific process faster and smoother.
In this paper, we are presenting the latest developments and the most recent
results on the usage of Apache Spark for HEP analysis. The study aims at
evaluating the efficiency of the application of the new tools both
quantitatively, by measuring the performances, and qualitatively, focusing on
the user experience. The first goal is achieved by developing a data reduction
facility: working together with CERN Openlab and Intel, CMS replicates a real
physics search using Spark-based technologies, with the ambition of reducing 1
PB of public data collected by the CMS experiment to 1 TB of data in a format
suitable for physics analysis, in 5 hours.
The second goal is achieved by implementing multiple physics use-cases in
Apache Spark using as input preprocessed datasets derived from official CMS
data and simulation. By performing different end-analyses up to the publication
plots on different hardware, feasibility, usability and portability are
compared to the ones of a traditional ROOT-based workflow. | [
1,
0,
0,
0,
0,
0
] |
Title: A Quantile Estimate Based on Local Curve Fitting,
Abstract: Quantile estimation is a problem presented in fields such as quality control,
hydrology, and economics. There are different techniques to estimate such
quantiles. Nevertheless, these techniques use an overall fit of the sample when
the quantiles of interest are usually located in the tails of the distribution.
Regression Approach for Quantile Estimation (RAQE) is a method based on
regression techniques and the properties of the empirical distribution to
address this problem. The method was first presented for the problem of
capability analysis. In this paper, a generalization of the method is
presented, extended to the multiple sample scenario, and data from real
examples is used to illustrate the proposed approaches. In addition, a
theoretical framework is presented to support the extension for multiple
homogeneous samples and the use of the uncertainty of the estimated
probabilities as a weighting factor in the analysis. | [
0,
0,
0,
1,
0,
0
] |
Title: Diagnosing added value of convection-permitting regional models using precipitation event identification and tracking,
Abstract: Dynamical downscaling with high-resolution regional climate models may offer
the possibility of realistically reproducing precipitation and weather events
in climate simulations. As resolutions fall to order kilometers, the use of
explicit rather than parametrized convection may offer even greater fidelity.
However, these increased model resolutions both allow and require increasingly
complex diagnostics for evaluating model fidelity. In this study we use a suite
of dynamically downscaled simulations of the summertime U.S. (WRF driven by
NCEP) with systematic variations in parameters and treatment of convection as a
test case for evaluation of model precipitation. In particular, we use a novel
rainstorm identification and tracking algorithm that allocates essentially all
rainfall to individual precipitation events (Chang et al. 2016). This approach
allows multiple insights, including that, at least in these runs, model wet
bias is driven by excessive areal extent of precipitating events. Biases are
time-dependent, producing excessive diurnal cycle amplitude. We show that this
effect is produced not by new production of events but by excessive enlargement
of long-lived precipitation events during daytime, and that in the domain
average, precipitation biases appear best represented as additive offsets. Of
all model configurations evaluated, convection-permitting simulations most
consistently reduced biases in precipitation event characteristics. | [
0,
1,
0,
1,
0,
0
] |
Title: On rate of convergence in non-central limit theorems,
Abstract: The main result of this paper is the rate of convergence to Hermite-type
distributions in non-central limit theorems. To the best of our knowledge, this
is the first result in the literature on rates of convergence of functionals of
random fields to Hermite-type distributions with ranks greater than 2. The
results were obtained under rather general assumptions on the spectral
densities of random fields. These assumptions are even weaker than in the known
convergence results for the case of Rosenblatt distributions. Additionally,
Lévy concentration functions for Hermite-type distributions were
investigated. | [
0,
0,
1,
0,
0,
0
] |
Title: Optimal Output Regulation for Square, Over-Actuated and Under-Actuated Linear Systems,
Abstract: This paper considers two different problems in trajectory tracking control
for linear systems. First, if the control is not unique which is most input
energy efficient. Second, if exact tracking is infeasible which control
performs most accurately. These are typical challenges for over-actuated
systems and for under-actuated systems, respectively. We formulate both goals
as optimal output regulation problems. Then we contribute two new sets of
regulator equations to output regulation theory that provide the desired
solutions. A thorough study indicates solvability and uniqueness under weak
assumptions. E.g., we can always determine the solution of the classical
regulator equations that is most input energy efficient. This is of great value
if there are infinitely many solutions. We derive our results by a linear
quadratic tracking approach and establish a useful link to output regulation
theory. | [
1,
0,
0,
0,
0,
0
] |
Title: Conformal k-NN Anomaly Detector for Univariate Data Streams,
Abstract: Anomalies in time-series data give essential and often actionable information
in many applications. In this paper we consider a model-free anomaly detection
method for univariate time-series which adapts to non-stationarity in the data
stream and provides probabilistic abnormality scores based on the conformal
prediction paradigm. Despite its simplicity the method performs on par with
complex prediction-based models on the Numenta Anomaly Detection benchmark and
the Yahoo! S5 dataset. | [
1,
0,
0,
1,
0,
0
] |
Title: Toeplitz Quantization and Convexity,
Abstract: Let $T^m_f $ be the Toeplitz quantization of a real $ C^{\infty}$ function
defined on the sphere $ \mathbb{CP}(1)$. $T^m_f $ is therefore a Hermitian
matrix with spectrum $\lambda^m= (\lambda_0^m,\ldots,\lambda_m^m)$. Schur's
theorem says that the diagonal of a Hermitian matrix $A$ that has the same
spectrum as $T^m_f$ lies inside a finite-dimensional convex set whose extreme
points are $\{( \lambda_{\sigma(0)}^m,\ldots,\lambda_{\sigma(m)}^m)\}$, where
$\sigma$ is any permutation of $(m+1)$ elements. In this paper, we prove that
these convex sets "converge" to a huge convex set in $L^2([0,1])$ whose extreme
points are $ f^*\circ \phi$, where $ f^*$ is the decreasing rearrangement of $
f$ and $ \phi $ ranges over the set of measure preserving transformations of
the unit interval $ [0,1]$. | [
0,
0,
1,
0,
0,
0
] |
Title: On architectural choices in deep learning: From network structure to gradient convergence and parameter estimation,
Abstract: We study mechanisms to characterize how the asymptotic convergence of
backpropagation in deep architectures, in general, is related to the network
structure, and how it may be influenced by other design choices including
activation type, denoising and dropout rate. We seek to analyze whether network
architecture and input data statistics may guide the choices of learning
parameters and vice versa. Given the broad applicability of deep architectures,
this issue is interesting both from theoretical and a practical standpoint.
Using properties of general nonconvex objectives (with first-order
information), we first build the association between structural, distributional
and learnability aspects of the network vis-à-vis their interaction with
parameter convergence rates. We identify a nice relationship between feature
denoising and dropout, and construct families of networks that achieve the same
level of convergence. We then derive a workflow that provides systematic
guidance regarding the choice of network sizes and learning parameters often
mediated by input statistics. Our technical results are corroborated by an
extensive set of evaluations, presented in this paper as well as independent
empirical observations reported by other groups. We also perform experiments
showing the practical implications of our framework for choosing the best
fully-connected design for a given problem. | [
1,
0,
1,
1,
0,
0
] |
Title: Motion Segmentation via Global and Local Sparse Subspace Optimization,
Abstract: In this paper, we propose a new framework for segmenting feature-based moving
objects under affine subspace model. Since the feature trajectories in practice
are high-dimensional and contain a lot of noise, we first apply sparse PCA to
represent the original trajectories with a low-dimensional global subspace,
which consists of the orthogonal sparse principal vectors. Subsequently, local
subspace separation is achieved by automatically searching for the sparse
representation of the nearest neighbors of each projected data point. To
refine the local subspace estimation and deal with the
missing data problem, we propose an error estimation to encourage the projected
data that span the same local subspace to be clustered together. In the end, the
segmentation of different motions is achieved through the spectral clustering
on an affinity matrix, which is constructed with both the error estimation and
sparse neighbors optimization. We test our method extensively and compare it
with state-of-the-art methods on the Hopkins 155 dataset and Freiburg-Berkeley
Motion Segmentation dataset. The results show that our method is comparable
with the other motion segmentation methods, and in many cases exceed them in
terms of precision and computation time. | [
1,
0,
0,
0,
0,
0
] |
Title: Topological strings linking with quasi-particle exchange in superconducting Dirac semimetals,
Abstract: We demonstrate a topological classification of vortices in three dimensional
time-reversal invariant topological superconductors based on superconducting
Dirac semimetals with an s-wave superconducting order parameter by means of a
pair of numbers $(N_\Phi,N)$, accounting for how many units $N_\Phi$ of magnetic
fluxes $hc/4e$ and how many $N$ chiral Majorana modes the vortex carries. From
these quantities, we introduce a topological invariant which further classifies
the properties of such vortices under linking processes. While such processes
are known to be related to instanton processes in a field theoretic
description, we demonstrate here that they are, in fact, also equivalent to the
fractional Josephson effect on junctions based at the edges of quantum spin
Hall systems. This allows one to consider microscopically the effects of
interactions in the linking problem. We therefore demonstrate that associated
to links between vortices, one has the exchange of quasi-particles, either
Majorana zero-modes or $e/2$ quasi-particles, which allows for a topological
classification of vortices in these systems, seen to be $\mathbb{Z}_8$
classified. While $N_\Phi$ and $N$ are shown to be both even or odd in the
weakly-interacting limit, in the strongly interacting scenario one loosens this
constraint. In this case, one may have further fractionalization possibilities
for the vortices, whose excitations are described by $SO(3)_3$-like conformal
field theories with quasi-particle exchanges of more exotic types. | [
0,
1,
0,
0,
0,
0
] |
Title: Self-Imitation Learning,
Abstract: This paper proposes Self-Imitation Learning (SIL), a simple off-policy
actor-critic algorithm that learns to reproduce the agent's past good
decisions. This algorithm is designed to verify our hypothesis that exploiting
past good experiences can indirectly drive deep exploration. Our empirical
results show that SIL significantly improves advantage actor-critic (A2C) on
several hard exploration Atari games and is competitive to the state-of-the-art
count-based exploration methods. We also show that SIL improves proximal policy
optimization (PPO) on MuJoCo tasks. | [
0,
0,
0,
1,
0,
0
] |
Title: Controllability to Equilibria of the 1-D Fokker-Planck Equation with Zero-Flux Boundary Condition,
Abstract: We consider the problem of controlling the spatiotemporal probability
distribution of a robotic swarm that evolves according to a reflected diffusion
process, using the space- and time-dependent drift vector field parameter as
the control variable. In contrast to previous work on control of the
Fokker-Planck equation, a zero-flux boundary condition is imposed on the
partial differential equation that governs the swarm probability distribution,
and only bounded vector fields are considered to be admissible as control
parameters. Under these constraints, we show that any initial probability
distribution can be transported to a target probability distribution under
certain assumptions on the regularity of the target distribution. In
particular, we show that if the target distribution is (essentially) bounded,
has bounded first-order and second-order partial derivatives, and is bounded
from below by a strictly positive constant, then this distribution can be
reached exactly using a drift vector field that is bounded in space and time.
Our proof is constructive and based on classical linear semigroup theoretic
concepts. | [
1,
0,
1,
0,
0,
0
] |
Title: A Parallel Simulator for Massive Reservoir Models Utilizing Distributed-Memory Parallel Systems,
Abstract: This paper presents our work on developing parallel computational methods for
two-phase flow on modern parallel computers, where techniques for linear
solvers and nonlinear methods are studied and the standard and inexact Newton
methods are investigated. A multi-stage preconditioner for two-phase flow is
applied and advanced matrix processing strategies are studied. A local
reordering method is developed to speed up the solution of linear systems.
Numerical experiments show that these computational methods are effective and
scalable, and are capable of computing large-scale reservoir simulation
problems using thousands of CPU cores on parallel computers. The nonlinear
techniques, preconditioner and matrix processing strategies can also be applied
to three-phase black oil, compositional and thermal models. | [
1,
0,
0,
0,
0,
0
] |
Title: Bifurcation structure of cavity soliton dynamics in a VCSEL with saturable absorber and time-delayed feedback,
Abstract: We consider a wide-aperture surface-emitting laser with a saturable absorber
section subjected to time-delayed feedback. We adopt the mean-field approach
assuming a single longitudinal mode operation of the solitary VCSEL. We
investigate cavity soliton dynamics under the effect of time-delayed feedback
in a self-imaging configuration where diffraction in the external cavity is
negligible. Using bifurcation analysis, direct numerical simulations and
numerical path continuation methods, we identify the possible bifurcations and
map them in a plane of feedback parameters. We show that for both the
homogeneous and localized stationary lasing solutions in one spatial dimension
the time-delayed feedback induces complex spatiotemporal dynamics, in
particular a period doubling route to chaos, quasiperiodic oscillations and
multistability of the stationary solutions. | [
0,
1,
0,
0,
0,
0
] |
Title: Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery,
Abstract: We propose a new iteratively reweighted least squares (IRLS) algorithm for
the recovery of a matrix $X \in \mathbb{C}^{d_1\times d_2}$ of rank $r
\ll\min(d_1,d_2)$ from incomplete linear observations, solving a sequence of
low complexity linear problems. The easily implementable algorithm, which we
call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a
non-convex Schatten-$p$ quasi-norm penalization to promote low-rankness and
carries three major strengths, in particular for the matrix completion setting.
First, we observe a remarkable global convergence behavior of the algorithm's
iterates to the low-rank matrix for relevant, interesting cases, for which any
other state-of-the-art optimization approach fails the recovery. Secondly,
HM-IRLS exhibits an empirical recovery probability close to $1$ even for a
number of measurements very close to the theoretical lower bound $r (d_1 +d_2
-r)$, i.e., already for significantly fewer linear observations than any other
tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally
superlinear rate of convergence (of order $2-p$) if the linear observations
fulfill a suitable null space property. While for the first two properties we
have so far only strong empirical evidence, we prove the third property as our
main theoretical result. | [
1,
0,
1,
0,
0,
0
] |
Title: The Italian Pension Gap: a Stochastic Optimal Control Approach,
Abstract: We study the gap between the state pension provided by the Italian pension
system pre-Dini reform and post-Dini reform. The goal is to fill the gap
between the old and the new pension by joining a defined contribution pension
scheme and adopting an optimal investment strategy that is target-based. We
find that it is possible to cover, at least partially, this gap with the
additional income of the pension scheme, especially in the presence of late
retirement and in the presence of a stagnant career. Workers with a dynamic
career and workers who retire early are those most penalised by the reform.
Results are intuitive and in line with previous studies on the subject. | [
0,
0,
0,
0,
0,
1
] |
Title: Immigration-induced phase transition in a regulated multispecies birth-death process,
Abstract: Power-law-distributed species counts or clone counts arise in many biological
settings such as multispecies cell populations, population genetics, and
ecology. This empirical observation that the number of species $c_{k}$
represented by $k$ individuals scales as negative powers of $k$ is also
supported by a series of theoretical birth-death-immigration (BDI) models that
consistently predict many low-population species, a few intermediate-population
species, and very high-population species. However, we show how a simple global
population-dependent regulation in a neutral BDI model destroys the power law
distributions. Simulation of the regulated BDI model shows a high probability
of observing a high-population species that dominates the total population.
Further analysis reveals that the origin of this breakdown is associated with
the failure of a mean-field approximation for the expected species abundance
distribution. We find an accurate estimate for the expected distribution
$\langle c_k \rangle$ by mapping the problem to a lower-dimensional Moran
process, allowing us to also straightforwardly calculate the covariances
$\langle c_k c_\ell \rangle$. Finally, we exploit the concepts associated with
energy landscapes to explain the failure of the mean-field assumption by
identifying a phase transition in the quasi-steady-state species counts
triggered by a decreasing immigration rate. | [
0,
0,
0,
0,
1,
0
] |
Title: Nanoscale assembly of superconducting vortices with scanning tunnelling microscope tip,
Abstract: Vortices play a crucial role in determining the properties of superconductors
as well as their applications. Therefore, characterization and manipulation of
vortices, especially at the single vortex level, is of great importance. Among
many techniques to study single vortices, scanning tunneling microscopy (STM)
stands out as a powerful tool, due to its ability to detect the local
electronic states and high spatial resolution. However, local control of
superconductivity as well as the manipulation of individual vortices with the
STM tip is still lacking. Here we report a new function of the STM, namely to
control the local pinning in a superconductor through the heating effect. This
effect allows us to quench the superconducting state at the nanoscale, and leads to
the growth of vortex-clusters whose size can be controlled by the bias voltage.
We also demonstrate the use of an STM tip to assemble single quantum vortices
into desired nanoscale configurations. | [
0,
1,
0,
0,
0,
0
] |
Title: Spaces which invert weak homotopy equivalences,
Abstract: It is well known that if $X$ is a CW-complex, then for every weak homotopy
equivalence $f:A\to B$, the map $f_*:[X,A]\to [X,B]$ induced in homotopy
classes is a bijection. For which spaces $X$ is $f^*:[B,X]\to [A,X]$ a
bijection for every weak equivalence $f$? This question was considered by J.
Strom and T. Goodwillie. In this note we prove that a non-empty space inverts
weak equivalences if and only if it is contractible. | [
0,
0,
1,
0,
0,
0
] |
Title: Results from EDGES High-Band: I. Constraints on Phenomenological Models for the Global $21$ cm Signal,
Abstract: We report constraints on the global $21$ cm signal due to neutral hydrogen at
redshifts $14.8 \geq z \geq 6.5$. We derive our constraints from low foreground
observations of the average sky brightness spectrum conducted with the EDGES
High-Band instrument between September $7$ and October $26$, $2015$.
Observations were calibrated by accounting for the effects of antenna beam
chromaticity, antenna and ground losses, signal reflections, and receiver
parameters. We evaluate the consistency between the spectrum and
phenomenological models for the global $21$ cm signal. For tanh-based
representations of the ionization history during the epoch of reionization, we
rule out, at $\geq2\sigma$ significance, models with duration of up to $\Delta
z = 1$ at $z\approx8.5$ and higher than $\Delta z = 0.4$ across most of the
observed redshift range under the usual assumption that the $21$ cm spin
temperature is much larger than the temperature of the cosmic microwave
background (CMB) during reionization. We also investigate a `cold' IGM scenario
that assumes perfect Ly$\alpha$ coupling of the $21$ cm spin temperature to the
temperature of the intergalactic medium (IGM), but that the IGM is not heated
by early stars or stellar remnants. Under this assumption, we reject tanh-based
reionization models of duration $\Delta z \lesssim 2$ over most of the observed
redshift range. Finally, we explore and reject a broad range of Gaussian models
for the $21$ cm absorption feature expected in the First Light era. As an
example, we reject $100$ mK Gaussians with duration (full width at half
maximum) $\Delta z \leq 4$ over the range $14.2\geq z\geq 6.5$ at $\geq2\sigma$
significance. | [
0,
1,
0,
0,
0,
0
] |
Title: Uniformly Bounded Sets in Quasiperiodically Forced Dynamical Systems,
Abstract: This paper addresses structures of state space in quasiperiodically forced
dynamical systems. We develop a theory of ergodic partition of state space in a
class of measure-preserving and dissipative flows, which is a natural extension
of the existing theory for measure-preserving maps. The ergodic partition
result is based on eigenspace at eigenvalue 0 of the associated Koopman
operator, which is realized via time-averages of observables, and provides a
constructive way to visualize a low-dimensional slice through a
high-dimensional invariant set. We apply the result to the systems with a
finite number of attractors and show that the time-average of a continuous
observable is well-defined and reveals the invariant sets, namely, a finite
number of basins of attraction. We provide a characterization of invariant sets
in the quasiperiodically forced systems. A theorem on uniform boundedness of
the invariant sets is proved. The series of analytical results enables
numerical analysis of invariant sets in the quasiperiodically forced systems
based on the ergodic partition and time-averages. Using this, we analyze a
nonlinear model of complex power grids that represents the short-term swing
instability, named the coherent swing instability. We show that our analytical
results can be used to understand stability regions in such complex systems. | [
1,
0,
0,
0,
0,
0
] |
Title: Stable basic sets for finite special linear and unitary group,
Abstract: In this paper we show, using Deligne-Lusztig theory and Kawanaka's theory of
generalised Gelfand-Graev representations, that the decomposition matrix of the
special linear and unitary group in non-defining characteristic can be made
unitriangular with respect to a basic set that is stable under the action of
automorphisms. | [
0,
0,
1,
0,
0,
0
] |
Title: Oxidative species-induced excitonic transport in tubulin aromatic networks: Potential implications for neurodegenerative disease,
Abstract: Oxidative stress is a pathological hallmark of neurodegenerative tauopathic
disorders such as Alzheimer's disease and Parkinson's disease-related dementia,
which are characterized by altered forms of the microtubule-associated protein
(MAP) tau. MAP tau is a key protein in stabilizing the microtubule architecture
that regulates neuron morphology and synaptic strength. The precise role of
reactive oxygen species (ROS) in the tauopathic disease process, however, is
poorly understood. It is known that the production of ROS by mitochondria can
result in ultraweak photon emission (UPE) within cells. One likely absorber of
these photons is the microtubule cytoskeleton, as it forms a vast network
spanning neurons, is highly co-localized with mitochondria, and shows a high
density of aromatic amino acids. Functional microtubule networks may traffic
this ROS-generated endogenous photon energy for cellular signaling, or they may
serve as dissipaters/conduits of such energy. Experimentally, after in vitro
exposure to exogenous photons, microtubules have been shown to reorient and
reorganize in a dose-dependent manner with the greatest effect being observed
around 280 nm, in the tryptophan and tyrosine absorption range. In this paper,
recent modeling efforts based on ambient-temperature experiments are presented,
showing that tubulin polymers can feasibly absorb and channel these
photoexcitations via resonance energy transfer, on the order of dendritic
length scales. Since microtubule networks are compromised in tauopathic
diseases, patients with these illnesses would be unable to support effective
channeling of these photons for signaling or dissipation. Consequent emission
surplus due to increased UPE production or decreased ability to absorb and
transfer may lead to increased cellular oxidative damage, thus hastening the
neurodegenerative process. | [
0,
1,
0,
0,
0,
0
] |
Title: On radial Schroedinger operators with a Coulomb potential,
Abstract: This paper presents a thorough analysis of 1-dimensional Schroedinger
operators whose potential is a linear combination of the Coulomb term 1/r and
the centrifugal term 1/r^2. We allow both coupling constants to be complex.
Using natural boundary conditions at 0, a two parameter holomorphic family of
closed operators is introduced. We call them the Whittaker operators, since in
the mathematical literature their eigenvalue equation is called the Whittaker
equation. Spectral and scattering theory for Whittaker operators is studied.
Whittaker operators appear in quantum mechanics as the radial part of the
Schroedinger operator with a Coulomb potential. | [
0,
0,
1,
0,
0,
0
] |
Title: Extrema-weighted feature extraction for functional data,
Abstract: Motivation: Although there is a rich literature on methods for assessing the
impact of functional predictors, the focus has been on approaches for dimension
reduction that can fail dramatically in certain applications. Examples of
standard approaches include functional linear models, functional principal
components regression, and cluster-based approaches, such as latent trajectory
analysis. This article is motivated by applications in which the dynamics in a
predictor, across times when the value is relatively extreme, are particularly
informative about the response. For example, physicians are interested in
relating the dynamics of blood pressure changes during surgery to post-surgery
adverse outcomes, and it is thought that the dynamics are more important when
blood pressure is significantly elevated or lowered.
Methods: We propose a novel class of extrema-weighted feature (XWF)
extraction models. Key components in defining XWFs include the marginal density
of the predictor, a function up-weighting values at high quantiles of this
marginal, and functionals characterizing local dynamics. Algorithms are
proposed for fitting of XWF-based regression and classification models, and are
compared with current methods for functional predictors in simulations and a
blood pressure during surgery application.
Results: XWFs find features of intraoperative blood pressure trajectories
that are predictive of postoperative mortality. By their nature, most of these
features cannot be found by previous methods. | [
0,
0,
0,
1,
0,
0
] |
Title: Facial Recognition Enabled Smart Door Using Microsoft Face API,
Abstract: Privacy and security are two universal rights. To ensure that we are
secure in our daily lives, a great deal of research is going on in the field of
home security, and the IoT is a turning point for the industry, allowing us to
connect everyday objects to share data for our betterment. Facial recognition is a
well-established process in which the face is detected and identified out of
the image. We aim to create a smart door, which secures the gateway on the
basis of who we are. In our proof of concept of a smart door, we have used a
live HD camera on the front side of the setup, attached to a display monitor
that shows who is standing in front of the door; the whole system can also give
voice output by processing text on the Raspberry Pi's ARM processor and show
the answers as output on the screen.
We are using a set of electromagnets controlled by the microcontroller, which
will act as a lock. So a person can open the smart door with the help of facial
recognition and at the same time also be able to interact with it. The facial
recognition is done by the Microsoft Face API, but our state-of-the-art desktop
application, built in the Microsoft Visual Studio IDE, reduces the
computational time by detecting the face in the photo and passing it as input
to the Microsoft Face API, which is hosted over Microsoft Azure cloud
support. | [
1,
0,
0,
0,
0,
0
] |
Title: Composition by Conversation,
Abstract: Most musical programming languages are developed purely for coding virtual
instruments or algorithmic compositions. Although there has been some work in
the domain of musical query languages for music information retrieval, there
has been little attempt to unify the principles of musical programming and
query languages with cognitive and natural language processing models that
would facilitate the activity of composition by conversation. We present a
prototype framework, called MusECI, that merges these domains, permitting
score-level algorithmic composition in a text editor while also supporting
connectivity to existing natural language processing frameworks. | [
1,
0,
0,
0,
0,
0
] |
Title: Impact of theoretical priors in cosmological analyses: the case of single field quintessence,
Abstract: We investigate the impact of general conditions of theoretical stability and
cosmological viability on dynamical dark energy models. As a powerful example,
we study whether minimally coupled, single field Quintessence models that are
safe from ghost instabilities, can source the CPL expansion history recently
shown to be mildly favored by a combination of CMB (Planck) and Weak Lensing
(KiDS) data. We find that in their most conservative form, the theoretical
conditions impact the analysis in such a way that smooth single field
Quintessence becomes significantly disfavored with respect to the standard LCDM
cosmological model. This is due to the fact that these conditions cut a
significant portion of the (w0;wa) parameter space for CPL, in particular
eliminating the region that would be favored by weak lensing data. Within the
scenario of a smooth dynamical dark energy parametrized with CPL, weak lensing
data favors a region that would require multiple fields to ensure gravitational
stability. | [
0,
1,
0,
0,
0,
0
] |
Title: Wormholes and masses for Goldstone bosons,
Abstract: There exist non-trivial stationary points of the Euclidean action for an
axion particle minimally coupled to Einstein gravity, dubbed wormholes. They
explicitly break the continuous global shift symmetry of the axion in a
non-perturbative way, and generate an effective potential that may compete with
QCD depending on the value of the axion decay constant. In this paper, we
explore both theoretical and phenomenological aspects of this issue. On the
theory side, we address the problem of stability of the wormhole solutions, and
we show that the spectrum of the quadratic action features only positive
eigenvalues. On the phenomenological side, we discuss, beside the obvious
application to the QCD axion, relevant consequences for models with ultralight
dark matter, black hole superradiance, and the relaxation of the electroweak
scale. We conclude discussing wormhole solutions for a generic coset and the
potential they generate. | [
0,
1,
0,
0,
0,
0
] |
Title: Learning to Invert: Signal Recovery via Deep Convolutional Networks,
Abstract: The promise of compressive sensing (CS) has been offset by two significant
challenges. First, real-world data is not exactly sparse in a fixed basis.
Second, current high-performance recovery algorithms are slow to converge,
which limits CS to either non-real-time applications or scenarios where massive
back-end computing is available. In this paper, we attack both of these
challenges head-on by developing a new signal recovery framework we call {\em
DeepInverse} that learns the inverse transformation from measurement vectors to
signals using a {\em deep convolutional network}. When trained on a set of
representative images, the network learns both a representation for the signals
(addressing challenge one) and an inverse map approximating a greedy or convex
recovery algorithm (addressing challenge two). Our experiments indicate that
the DeepInverse network closely approximates the solution produced by
state-of-the-art CS recovery algorithms yet is hundreds of times faster in run
time. The tradeoff for the ultrafast run time is a computationally intensive,
off-line training procedure typical to deep networks. However, the training
needs to be completed only once, which makes the approach attractive for a host
of sparse recovery problems. | [
1,
0,
0,
1,
0,
0
] |
Title: The Garden of Eden theorem: old and new,
Abstract: We review topics in the theory of cellular automata and dynamical systems
that are related to the Moore-Myhill Garden of Eden theorem. | [
0,
1,
1,
0,
0,
0
] |
Title: Bosonization in Non-Relativistic CFTs,
Abstract: We demonstrate explicitly the correspondence between all protected operators
in a 2+1 dimensional non-supersymmetric bosonization duality in the
non-relativistic limit. Roughly speaking we consider $SU(N)$ Chern-Simons field
theory at level $k$ with $N_f$ flavours of fundamental boson, and match its
chiral sector to that of a $SU(k)$ theory at level $N$ with $N_f$ fundamental
fermions. We present the matching at the level of indices and individual
operators, seeing the mechanism of failure for $N_f > N$, and point out that
the non-relativistic regime is a particularly friendly setting for studying
interesting questions about such dualities. | [
0,
1,
0,
0,
0,
0
] |
Title: Learning latent representations for style control and transfer in end-to-end speech synthesis,
Abstract: In this paper, we introduce the Variational Autoencoder (VAE) to an
end-to-end speech synthesis model, to learn the latent representation of
speaking styles in an unsupervised manner. The style representation learned
through VAE shows good properties such as disentangling, scaling, and
combination, which makes it easy for style control. Style transfer can be
achieved in this framework by first inferring style representation through the
recognition network of VAE, then feeding it into TTS network to guide the style
in synthesizing speech. To avoid Kullback-Leibler (KL) divergence collapse in
training, several techniques are adopted. Finally, the proposed model shows
good performance of style control and outperforms Global Style Token (GST)
model in ABX preference tests on style transfer. | [
1,
0,
0,
0,
0,
0
] |
Title: Effects of Planetesimal Accretion on the Thermal and Structural Evolution of Sub-Neptunes,
Abstract: A remarkable discovery of NASA's Kepler mission is the wide diversity in the
average densities of planets of similar mass. After gas disk dissipation, fully
formed planets could interact with nearby planetesimals from a remnant
planetesimal disk. These interactions would often lead to planetesimal
accretion due to the relatively high ratio between the planet size and the hill
radius for typical planets. We present calculations using the open-source
stellar evolution toolkit MESA (Modules for Experiments in Stellar
Astrophysics) modified to include the deposition of planetesimals into the H/He
envelopes of sub-Neptunes (~1-20 MEarth). We show that planetesimal accretion
can alter the mass-radius isochrones for these planets. The same initial
planet, for the same total accreted planetesimal mass, can have up to ~5%
difference in mean density several Gyr after the last accretion due to the
inherent stochasticity of the accretion process. During the phase of rapid
accretion these differences are more dramatic. The additional energy deposition
from the accreted planetesimals increases the ratio of the planet's radius
to that of the core during rapid accretion, which in turn leads to enhanced
loss of atmospheric mass. As a result, the same initial planet can end up with
very different envelope mass fractions. These differences manifest as
differences in mean densities long after accretion stops. These effects are
particularly important for planets initially less massive than ~10 MEarth and
with envelope mass fraction less than ~10%, thought to be the most common type
of planets discovered by Kepler. | [
0,
1,
0,
0,
0,
0
] |
Title: Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions,
Abstract: We obtain estimation error rates and sharp oracle inequalities for
regularization procedures of the form \begin{equation*}
\hat f \in argmin_{f\in
F}\left(\frac{1}{N}\sum_{i=1}^N\ell(f(X_i), Y_i)+\lambda \|f\|\right)
\end{equation*} when $\|\cdot\|$ is any norm, $F$ is a convex class of
functions and $\ell$ is a Lipschitz loss function satisfying a Bernstein
condition over $F$. We explore both the bounded and subgaussian stochastic
frameworks for the distribution of the $f(X_i)$'s, with no assumption on the
distribution of the $Y_i$'s. The general results rely on two main objects: a
complexity function, and a sparsity equation, that depend on the specific
setting in hand (loss $\ell$ and norm $\|\cdot\|$).
As a proof of concept, we obtain minimax rates of convergence in the
following problems: 1) matrix completion with any Lipschitz loss function,
including the hinge and logistic loss for the so-called 1-bit matrix completion
instance of the problem, and quantile losses for the general case, which
makes it possible to estimate any quantile of the entries of the matrix; 2) logistic
LASSO and variants such as the logistic SLOPE; 3) kernel methods, where the
loss is the hinge loss, and the regularization function is the RKHS norm. | [
0,
0,
1,
1,
0,
0
] |
Title: On closed Lie ideals of certain tensor products of $C^*$-algebras,
Abstract: For a simple $C^*$-algebra $A$ and any other $C^*$-algebra $B$, it is proved
that every closed ideal of $A \otimes^{\min} B$ is a product ideal if either
$A$ is exact or $B$ is nuclear. Closed commutator of a closed ideal in a Banach
algebra whose every closed ideal possesses a quasi-central approximate identity
is described in terms of the commutator of the Banach algebra. If $\alpha$ is
either the Haagerup norm, the operator space projective norm or the
$C^*$-minimal norm, then this allows us to identify all closed Lie ideals of $A
\otimes^{\alpha} B$, where $A$ and $B$ are simple, unital $C^*$-algebras with
one of them admitting no tracial functionals, and to deduce that every
non-central closed Lie ideal of $B(H) \otimes^{\alpha} B(H)$ contains the
product ideal $K(H) \otimes^{\alpha} K(H)$. Closed Lie ideals of $A
\otimes^{\min} C(X)$ are also determined, $A$ being any simple unital
$C^*$-algebra with at most one tracial state and $X$ any compact Hausdorff
space. And, it is shown that closed Lie ideals of $A \otimes^{\alpha} K(H)$ are
precisely the product ideals, where $A$ is any unital $C^*$-algebra and
$\alpha$ any completely positive uniform tensor norm. | [
0,
0,
1,
0,
0,
0
] |
Title: Plenoptic Monte Carlo Object Localization for Robot Grasping under Layered Translucency,
Abstract: In order to fully function in human environments, robot perception will need
to account for the uncertainty caused by translucent materials. Translucency
poses several open challenges in the form of transparent objects (e.g.,
drinking glasses), refractive media (e.g., water), and diffuse partial
occlusions (e.g., objects behind stained glass panels). This paper presents
Plenoptic Monte Carlo Localization (PMCL) as a method for localizing object
poses in the presence of translucency using plenoptic (light-field)
observations. We propose a new depth descriptor, the Depth Likelihood Volume
(DLV), and its use within a Monte Carlo object localization algorithm. We
present results of localizing and manipulating objects with translucent
materials and objects occluded by layers of translucency. Our PMCL
implementation uses observations from a Lytro first generation light field
camera to allow a Michigan Progress Fetch robot to perform grasping. | [
1,
0,
0,
0,
0,
0
] |
Title: Impact and mitigation strategy for future solar flares,
Abstract: It is widely established that extreme space weather events associated with
solar flares are capable of causing widespread technological damage. We develop
a simple mathematical model to assess the economic losses arising from these
phenomena over time. We demonstrate that the economic damage is characterized
by an initial period of power-law growth, followed by exponential amplification
and eventual saturation. We outline a mitigation strategy to protect our planet
by setting up a magnetic shield to deflect charged particles at the Lagrange
point L$_1$, and demonstrate that this approach appears to be realizable in
terms of its basic physical parameters. We conclude our analysis by arguing
that shielding strategies adopted by advanced civilizations will lead to
technosignatures that are detectable by upcoming missions. | [
0,
1,
0,
0,
0,
0
] |