title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
An Alternative to EM for Gaussian Mixture Models: Batch and Stochastic Riemannian Optimization | We consider maximum likelihood estimation for Gaussian Mixture Models (Gmms).
This task is almost invariably solved (in theory and practice) via the
Expectation Maximization (EM) algorithm. EM owes its success to various
factors, of which its ability to fulfill positive definiteness constraints
in closed form is of key importance. We propose an alternative to EM by
appealing to the rich Riemannian geometry of positive definite matrices, using
which we cast Gmm parameter estimation as a Riemannian optimization problem.
Surprisingly, such an out-of-the-box Riemannian formulation completely fails
and proves much inferior to EM. This motivates us to take a closer look at the
problem geometry, and derive a better formulation that is much more amenable to
Riemannian optimization. We then develop (Riemannian) batch and stochastic
gradient algorithms that outperform EM, often substantially. We provide a
non-asymptotic convergence analysis for our stochastic method, which is also
the first (to our knowledge) such global analysis for Riemannian stochastic
gradient. Numerous empirical results are included to demonstrate the
effectiveness of our methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
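For contrast with the Riemannian formulation described above, here is a minimal NumPy sketch of the EM baseline for a two-component 1-D Gaussian mixture. The data and initialization are invented for illustration, and the paper's Riemannian reformulation is not reproduced; the closed-form M-step shows the positivity-preserving property the abstract highlights.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    w = np.array([0.5, 0.5])                                    # mixing weights
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)]) # simple init
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i).
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates; variances stay positive automatically,
        # the property the abstract singles out as key to EM's success.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])
w, mu, var = em_gmm_1d(x)
```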
The Coprime Quantum Chain | In this paper we introduce and study the coprime quantum chain, i.e. a
strongly correlated quantum system defined in terms of the integer eigenvalues
$n_i$ of the occupation number operators at each site of a chain of length $M$.
The $n_i$'s take values in the interval $[2,q]$ and may be regarded as $S_z$
eigenvalues in the spin representation $j = (q-2)/2$. The distinctive
interaction of the model is based on the coprimality matrix $\bf \Phi$: for the
ferromagnetic case, this matrix assigns lower energy to configurations where
occupation numbers $n_i$ and $n_{i+1}$ of neighbouring sites share a common
divisor, while for the anti-ferromagnetic case it assigns lower energy to
configurations where $n_i$ and $n_{i+1}$ are coprime. The coprime chain, both
in the ferro and anti-ferromagnetic cases, may present an exponential number of
ground states whose values can be exactly computed by means of graph
theoretical tools. In the ferromagnetic case there are generally also
frustration phenomena. A fine tuning of local operators may lift the
exponential ground state degeneracy and, according to which operators are
switched on, the system may be driven into different classes of universality,
among them the Ising and Potts universality classes. The paper also contains an
appendix by Don Zagier on the exact eigenvalues and eigenvectors of the
coprimality matrix in the limit $q \rightarrow \infty$.
| 0 | 1 | 1 | 0 | 0 | 0 |
Inverse cascades and resonant triads in rotating and stratified turbulence | Kraichnan's seminal ideas on inverse cascades yielded new tools to study common
phenomena in geophysical turbulent flows. In the atmosphere and the oceans,
rotation and stratification result in a flow that can be approximated as
two-dimensional at very large scales, but which requires considering
three-dimensional effects to fully describe turbulent transport processes and
non-linear phenomena. Motions can thus be classified into two classes: fast
modes consisting of inertia-gravity waves, and slow quasi-geostrophic modes for
which the Coriolis force and horizontal pressure gradients are close to
balance. In this paper we review previous results on the strength of the
inverse cascade in rotating and stratified flows, and then present new results
on the effect of varying the strength of rotation and stratification (measured
by the ratio $N/f$ of the Brunt-Väisälä frequency to the Coriolis
frequency) on the amplitude of the waves and on the flow quasi-geostrophic
behavior. We show that the inverse cascade is more efficient in the range of
$N/f$ for which resonant triads do not exist, $1/2 \le N/f \le 2$. We then use
the spatio-temporal spectrum, and characterization of the flow temporal and
spatial scales, to show that in this range slow modes dominate the dynamics,
while the strength of the waves (and their relevance in the flow dynamics) is
weaker.
| 0 | 1 | 0 | 0 | 0 | 0 |
Towards a Rigorous Methodology for Measuring Adoption of RPKI Route Validation and Filtering | A proposal to improve routing security---Route Origin Authorization
(ROA)---has been standardized. A ROA specifies which network is allowed to
announce a set of Internet destinations. While some networks now specify ROAs,
little is known about whether other networks check routes they receive against
these ROAs, a process known as Route Origin Validation (ROV). Which networks
blindly accept invalid routes? Which reject them outright? Which de-preference
them if alternatives exist?
Recent analysis attempts to use uncontrolled experiments to characterize ROV
adoption by comparing valid routes and invalid routes. However, we argue that
gaining a solid understanding of ROV adoption is impossible using currently
available data sets and techniques. Our measurements suggest that, although
some ISPs are not observed using invalid routes in uncontrolled experiments,
they are actually using different routes for (non-security) traffic engineering
purposes, without performing ROV. We conclude with a description of a
controlled, verifiable methodology for measuring ROV and present three ASes
that do implement ROV, confirmed by operators.
| 1 | 0 | 0 | 0 | 0 | 0 |
Half-quadratic transportation problems | We present a primal--dual memory efficient algorithm for solving a relaxed
version of the general transportation problem. Our approach approximates the
original cost function with a differentiable one that is solved as a sequence
of weighted quadratic transportation problems. The new formulation allows us to
solve differentiable, non-convex transportation problems.
| 1 | 0 | 1 | 0 | 0 | 0 |
Entanglement induced interactions in binary mixtures | We establish a conceptual framework for the identification and the
characterization of induced interactions in binary mixtures and reveal their
intricate relation to entanglement between the components or species of the
mixture. Exploiting an expansion in terms of the strength of the entanglement
between the two species enables us to deduce an effective single-species
description. In this way, we naturally incorporate the mutual feedback of the
species and obtain induced interactions for both species which are effectively
present among particles of the same type. Importantly, our approach
incorporates few-body and inhomogeneous systems extending the scope of induced
interactions where two particles interact via a bosonic bath-type environment.
Employing the example of a one-dimensional spin-polarized ultracold Bose-Fermi
mixture, we obtain induced Bose-Bose and Fermi-Fermi interactions with
short-range attraction and long-range repulsion. With this, we show how beyond
species mean-field physics visible in the two-body correlation functions can be
understood via the induced interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Conditional fiducial models | The fiducial is not unique in general, but we prove that in a restricted
class of models it is uniquely determined by the sampling distribution of the
data. In particular, it does not depend on the choice of a data-generating model.
The arguments lead to a generalization of the classical formula found by Fisher
(1930). The restricted class includes cases with discrete distributions, the
case of the shape parameter in the Gamma distribution, and also the case of the
correlation coefficient in a bivariate Gaussian model. One of the examples can
also be used in a pedagogical context to demonstrate possible difficulties with
likelihood-, Bayesian-, and bootstrap-inference. Examples that demonstrate
non-uniqueness are also presented. It is explained that they can be seen as
cases with restrictions on the parameter space. Motivated by this the concept
of a conditional fiducial model is introduced. This class of models includes
the common case of iid samples from a one-parameter model investigated by
Hannig (2013), the structural group models investigated by Fraser (1968), and
also certain models discussed by Fisher (1973) in his final writing on the
subject.
| 0 | 0 | 1 | 1 | 0 | 0 |
Statistics students' identification of inferential model elements within contexts of their own invention | Statistical thinking partially depends upon an iterative process by which
essential features of a problem setting are identified and mapped onto an
abstract model or archetype, and then translated back into the context of the
original problem setting (Wild and Pfannkuch 1999). Assessment in introductory
statistics often relies on tasks that present students with data in context and
expects them to choose and describe an appropriate model. This study explores
post-secondary student responses to an alternative task that prompts students
to clearly identify a sample, population, statistic, and parameter using a
context of their own invention. The data include free text narrative responses
of a random sample of 500 students from a sample of more than 1600 introductory
statistics students. Results suggest that students' responses often portrayed
sample and population accurately. Portrayals of statistic and parameter were
less reliable and were associated with descriptions of a wide variety of other
concepts. Responses frequently attributed a variable of some kind to the
statistic, or a study design detail to the parameter. Implications for
instruction and research are discussed, including a call for emphasis on a
modeling paradigm in introductory statistics.
| 0 | 0 | 0 | 1 | 0 | 0 |
Mask R-CNN | We present a conceptually simple, flexible, and general framework for object
instance segmentation. Our approach efficiently detects objects in an image
while simultaneously generating a high-quality segmentation mask for each
instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a
branch for predicting an object mask in parallel with the existing branch for
bounding box recognition. Mask R-CNN is simple to train and adds only a small
overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to
generalize to other tasks, e.g., allowing us to estimate human poses in the
same framework. We show top results in all three tracks of the COCO suite of
challenges, including instance segmentation, bounding-box object detection, and
person keypoint detection. Without bells and whistles, Mask R-CNN outperforms
all existing, single-model entries on every task, including the COCO 2016
challenge winners. We hope our simple and effective approach will serve as a
solid baseline and help ease future research in instance-level recognition.
Code has been made available at: this https URL
| 1 | 0 | 0 | 0 | 0 | 0 |
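As a small side illustration (not part of Mask R-CNN itself), mask-level results such as those reported above are typically scored by intersection-over-union between predicted and ground-truth binary masks; a minimal NumPy version of that overlap computation:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean instance masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

# Two overlapping 4x4 square masks on a 10x10 grid: 4 shared pixels,
# union of 28 pixels, so IoU = 4/28 = 1/7.
a = np.zeros((10, 10)); a[2:6, 2:6] = 1
b = np.zeros((10, 10)); b[4:8, 4:8] = 1
iou = mask_iou(a, b)
```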
An overview of the marine food web in Icelandic waters using Ecopath with Ecosim | Fishing activities have broad impacts that affect the targeted stocks, though not
exclusively. These impacts affect predators and prey of the harvested
species, as well as the whole ecosystem it inhabits. Ecosystem models can be
used to study the interactions that occur within a system, including those
between different organisms and those between fisheries and targeted species.
Trophic web models like Ecopath with Ecosim (EwE) can handle fishing fleets as
a top predator, with top-down impact on harvested organisms. The aim of this
study was to better understand the Icelandic marine ecosystem and the
interactions within. This was done by constructing an EwE model of Icelandic
waters. The model was run from 1984 to 2013 and was fitted to time series of
biomass estimates, landings data and mean annual temperature. The final model
was chosen by selecting the model with the lowest Akaike information criterion.
A skill assessment was performed using the Pearson's correlation coefficient,
the coefficient of determination, the modelling efficiency and the reliability
index to evaluate the model performance. The model performed satisfactorily
when simulating previously estimated biomass and known landings. Most of the
groups with time series were estimated to have top-down control over their
prey. These are harvested species with direct and/or indirect links to lower
trophic levels and future fishing policies should take this into account. This
model could be used as a tool to investigate how such policies could impact the
marine ecosystem in Icelandic waters.
| 0 | 0 | 0 | 0 | 1 | 0 |
Character tables and the problem of existence of finite projective planes | Recently, the authors of the present work (together with M. N. Kolountzakis)
introduced a new version of the non-commutative Delsarte scheme and applied it
to the problem of mutually unbiased bases. Here we use this method to
investigate the existence of a finite projective plane of a given order d. In
particular, a short new proof is obtained for the nonexistence of a projective
plane of order 6. For higher orders like 10 and 12, the method is not decisive
but could turn out to give important supplementary information.
| 0 | 0 | 1 | 0 | 0 | 0 |
A bilevel approach for optimal contract pricing of independent dispatchable DG units in distribution networks | Distributed Generation (DG) units are increasingly installed in the power
systems. Distribution Companies (DisCo) can opt to purchase the electricity
from DG in an energy purchase contract to supply the customer demand and reduce
energy loss. This paper proposes a framework for optimal contract pricing of
independent dispatchable DG units considering competition among them. While DG
units tend to increase their profit from the energy purchase contract, DisCo
minimizes the demand supply cost. The multi-leader-follower game theory concept is
used to analyze the situation in which competing DG units offer the energy
price to DisCo and DisCo determines the DG generation. A bi-level approach is
used to formulate the competition in which each DG problem is the upper-level
problem and the DisCo problem is considered as the lower-level one. Combining
the optimality conditions of all upper-level problems with the lower-level
problem results in a multi-DG equilibrium problem formulated as an equilibrium
problem with equilibrium constraints (EPEC). Using a nonlinear approach, the
EPEC problem is reformulated as a single nonlinear optimization model which is
simultaneously solved for all independent DG units. The proposed framework was
applied to the Modified IEEE 34-Bus Distribution Test System. Performance and
robustness of the proposed framework in determining econo-technically fair DG
contract prices have been demonstrated through a series of analyses.
| 1 | 0 | 0 | 0 | 0 | 0 |
Human-in-the-loop Artificial Intelligence | Little by little, newspapers are revealing the bright future that Artificial
Intelligence (AI) is building. Intelligent machines will help everywhere.
However, this bright future has a dark side: a dramatic job market contraction
before its unpredictable transformation. Hence, in the near future, large numbers
of job seekers will need financial support while catching up with these novel
unpredictable jobs. This possible job market crisis has an antidote inside. In
fact, the rise of AI is sustained by the biggest knowledge theft of recent
years. Learning AI machines are extracting knowledge from unaware skilled or
unskilled workers by analyzing their interactions. By passionately doing their
jobs, these workers are digging their own graves.
In this paper, we propose Human-in-the-loop Artificial Intelligence (HIT-AI)
as a fairer paradigm for Artificial Intelligence systems. HIT-AI will reward
aware and unaware knowledge producers with a different scheme: decisions of AI
systems generating revenues will repay the legitimate owners of the knowledge
used for taking those decisions. As modern Robin Hoods, HIT-AI researchers
should fight for a fairer Artificial Intelligence that gives back what it
steals.
| 1 | 0 | 0 | 0 | 0 | 0 |
Increased adaptability to rapid environmental change can more than make up for the two-fold cost of males | The famous "two-fold cost of sex" is really the cost of anisogamy -- why
should females mate with males who do not contribute resources to offspring,
rather than isogamous partners who contribute equally? In typical anisogamous
populations, a single very fit male can have an enormous number of offspring,
far larger than is possible for any female or isogamous individual. If the
sexual selection on males aligns with the natural selection on females,
anisogamy thus allows much more rapid adaptation via super-successful males. We
show via simulations that this effect can be sufficient to overcome the
two-fold cost and maintain anisogamy against isogamy in populations adapting to
environmental change. The key quantity is the variance in male fitness -- if
this exceeds what is possible in an isogamous population, anisogamous
populations can win out in direct competition by adapting faster.
| 0 | 0 | 0 | 0 | 1 | 0 |
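The abstract's key quantity, the variance in male fitness, can be illustrated with a toy one-generation resampling sketch. All numbers and the `male_skew` knob are invented for the demo; this is not the paper's simulation, just a picture of why large male reproductive skew speeds the response to directional selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_gen_mean(z, male_skew):
    """Mean trait after one generation of directional selection.
    Mothers are sampled in proportion to fitness; fathers in proportion to
    fitness**male_skew, a crude stand-in for the large reproductive skew
    that anisogamy permits (male_skew = 1 recovers the isogamous case)."""
    w = np.exp(z)                                # fitness increasing in trait z
    moms = rng.choice(z, size=len(z), p=w / w.sum())
    wm = w ** male_skew
    dads = rng.choice(z, size=len(z), p=wm / wm.sum())
    return 0.5 * (moms.mean() + dads.mean())     # midparent mean, no mutation

z = rng.normal(0.0, 1.0, 20000)          # trait values in the parent generation
iso = next_gen_mean(z, male_skew=1)      # isogamy: both parents selected alike
aniso = next_gen_mean(z, male_skew=5)    # anisogamy: few males sire most offspring
```

With the higher skew, the mean trait moves much further toward the optimum in a single generation, which is the adaptation-speed advantage the abstract describes.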
Searching edges in the overlap of two plane graphs | Consider a pair of plane straight-line graphs, whose edges are colored red
and blue, respectively, and let n be the total complexity of both graphs. We
present an O(n log n)-time, O(n)-space technique to preprocess such a pair of
graphs that enables efficient searches among the red-blue intersections along
edges of one of the graphs. Our technique has a number of applications to
geometric problems. This includes: (1) a solution to the batched red-blue
search problem [Dehne et al. 2006] in O(n log n) queries to the oracle; (2) an
algorithm to compute the maximum vertical distance between a pair of 3D
polyhedral terrains one of which is convex in O(n log n) time, where n is the
total complexity of both terrains; (3) an algorithm to construct the Hausdorff
Voronoi diagram of a family of point clusters in the plane in O((n+m) log^3 n)
time and O(n+m) space, where n is the total number of points in all clusters
and m is the number of crossings between all clusters; (4) an algorithm to
construct the farthest-color Voronoi diagram of the corners of n axis-aligned
rectangles in O(n log^2 n) time; (5) an algorithm to solve the stabbing circle
problem for n parallel line segments in the plane in optimal O(n log n) time.
All these results are new or improve on the best known algorithms.
| 1 | 0 | 0 | 0 | 0 | 0 |
Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation | As more industries integrate machine learning into socially sensitive
decision processes like hiring, loan-approval, and parole-granting, we are at
risk of perpetuating historical and contemporary socioeconomic disparities.
This is a critical problem because on the one hand, organizations who use but
do not understand the discriminatory potential of such systems will facilitate
the widening of social disparities under the assumption that algorithms are
categorically objective. On the other hand, the responsible use of machine
learning can help us measure, understand, and mitigate the implicit historical
biases in socially sensitive data by expressing implicit decision-making mental
models in terms of explicit statistical models. In this paper we specify,
implement, and evaluate a "fairness-aware" machine learning interface called
themis-ml, which is intended for use by individual data scientists and
engineers, academic research teams, or larger product teams who use machine
learning in production systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Supersymmetric field theories and geometric Langlands: The other side of the coin | This note announces results on the relations between the approach of
Beilinson and Drinfeld to the geometric Langlands correspondence based on
conformal field theory, the approach of Kapustin and Witten based on $N=4$ SYM,
and the AGT-correspondence. The geometric Langlands correspondence is described
as the Nekrasov-Shatashvili limit of a generalisation of the AGT-correspondence
in the presence of surface operators. Following the approaches of Kapustin -
Witten and Nekrasov - Witten we interpret some aspects of the resulting picture
using an effective description in terms of two-dimensional sigma models having
Hitchin's moduli spaces as target-manifold.
| 0 | 0 | 1 | 0 | 0 | 0 |
Non-dispersive conservative regularisation of nonlinear shallow water (and isothermal Euler) equations | A new regularisation of the shallow water (and isentropic Euler) equations is
proposed. The regularised equations are non-dissipative, non-dispersive and
possess a variational structure. Thus, the mass, the momentum and the energy
are conserved. Hence, for instance, regularised hydraulic jumps are smooth and
non-oscillatory. Another particularly interesting feature of this
regularisation is that smoothed `shocks' propagate at exactly the same speed
as the original discontinuous ones. The performance of the new model is
illustrated numerically on some dam-break test cases, which are classical in
the hyperbolic realm.
| 0 | 1 | 0 | 0 | 0 | 0 |
Some Identities associated with mock theta functions $ω(q)$ and $ν(q)$ | Recently, Andrews, Dixit and Yee defined two partition functions
$p_{\omega}(n)$ and $p_{\nu}(n)$ that are related with Ramanujan's mock theta
functions $\omega(q)$ and $\nu(q)$, respectively. In this paper, we present two
variable generalizations of their results. As an application, we reprove their
results on $p_{\omega}(n)$ and $p_{\nu}(n)$ that are analogous to Euler's
pentagonal number theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Model order reduction for random nonlinear dynamical systems and low-dimensional representations for their quantities of interest | We examine nonlinear dynamical systems of ordinary differential equations or
differential algebraic equations. In an uncertainty quantification, physical
parameters are replaced by random variables. The inner variables as well as a
quantity of interest are expanded into series with orthogonal basis functions
like the polynomial chaos expansions, for example. On the one hand, the
stochastic Galerkin method yields a large coupled dynamical system. On the
other hand, a stochastic collocation method, which uses a quadrature rule or a
sampling scheme, can be written in the form of a large weakly coupled dynamical
system. We apply projection-based methods of nonlinear model order reduction to
the large systems. A reduced-order model implies a low-dimensional
representation of the quantity of interest. We focus on model order reduction
by proper orthogonal decomposition. The error of a best approximation located
in a low-dimensional subspace is analysed. We illustrate results of numerical
computations for test examples.
| 0 | 0 | 1 | 0 | 0 | 0 |
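Proper orthogonal decomposition, the reduction method this abstract focuses on, amounts to an SVD of a snapshot matrix. A minimal sketch on synthetic data (the low-rank structure and tolerances are invented for illustration, not taken from the paper's test examples):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 200 states of a 100-dimensional system that
# effectively lives on a 3-dimensional subspace, plus tiny noise.
basis = np.linalg.qr(rng.normal(size=(100, 3)))[0]
snapshots = basis @ rng.normal(size=(3, 200)) + 1e-6 * rng.normal(size=(100, 200))

# POD: left singular vectors of the snapshot matrix, ordered by "energy".
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # smallest rank capturing 99.99%

# Reduced-order representation: coordinates in the leading r POD modes.
Ur = U[:, :r]
reduced = Ur.T @ snapshots
err = np.linalg.norm(snapshots - Ur @ reduced) / np.linalg.norm(snapshots)
```

The truncation rank recovers the true subspace dimension, and the projection error matches the discarded singular values, which is the best-approximation property the abstract analyses.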
Monomial generators of complete planar ideals | We provide an algorithm that computes a set of generators for any complete
ideal in a smooth complex surface. More interestingly, these generators admit a
presentation as monomials in a set of maximal contact elements associated to
the minimal log-resolution of the ideal. Furthermore, the monomial expression
given by our method is an equisingularity invariant of the ideal. As an
outcome, we provide a geometric method to compute the integral closure of a
planar ideal and we apply our algorithm to some families of complete ideals.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimization of a SSP's Header Bidding Strategy using Thompson Sampling | Over the last decade, digital media (web or app publishers) generalized the
use of real time ad auctions to sell their ad spaces. Multiple auction
platforms, also called Supply-Side Platforms (SSP), were created. Because of
this multiplicity, publishers started to create competition between SSPs. In
this setting, there are two successive auctions: a second price auction in each
SSP and a secondary, first price auction, called header bidding auction,
between SSPs. In this paper, we consider an SSP competing with other SSPs for ad
spaces. The SSP acts as an intermediary between an advertiser wanting to buy ad
spaces and a web publisher wanting to sell its ad spaces, and needs to define a
bidding strategy to be able to deliver to the advertisers as many ads as
possible while spending as little as possible. The revenue optimization of this
SSP can be written as a contextual bandit problem, where the context consists
of the information available about the ad opportunity, such as properties of
the internet user or of the ad placement. Using classical multi-armed bandit
strategies (such as the original versions of UCB and EXP3) is inefficient in
this setting and yields a low convergence speed, as the arms are very
correlated. In this paper we design and experiment a version of the Thompson
Sampling algorithm that easily takes this correlation into account. We combine
this Bayesian algorithm with a particle filter, which makes it possible to handle
non-stationarity by sequentially estimating the distribution of the highest bid
to beat in order to win an auction. We apply this methodology on two real
auction datasets, and show that it significantly outperforms more classical
approaches. The strategy defined in this paper is being developed to be deployed
on thousands of publishers worldwide.
| 0 | 0 | 0 | 1 | 0 | 0 |
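The core bandit machinery can be sketched with a standard Beta-Bernoulli Thompson Sampling loop. The win probabilities below are invented for the demo, and the paper's handling of arm correlation and non-stationarity (the particle filter) is omitted; this independent-arm version only shows the sample-then-act pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each "arm" is a candidate bid level with an unknown win
# probability (numbers invented for the demo).
win_prob = np.array([0.10, 0.45, 0.30])
alpha = np.ones(3)                 # Beta posterior: 1 + observed wins
beta = np.ones(3)                  # Beta posterior: 1 + observed losses

chosen = np.zeros(3, dtype=int)
for _ in range(5000):
    theta = rng.beta(alpha, beta)  # sample a plausible win rate for each arm
    arm = int(np.argmax(theta))    # play the arm that currently looks best
    win = rng.random() < win_prob[arm]
    alpha[arm] += win              # conjugate Bernoulli/Beta update
    beta[arm] += 1 - win
    chosen[arm] += 1
```

After a few thousand rounds the loop concentrates its plays on the best arm, while the posterior sampling keeps a small amount of exploration alive.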
Mixed Rademacher and BPS Black Holes | Dyonic 1/4-BPS states in Type IIB string theory compactified on $\mathrm{K}3
\times T^2$ are counted by meromorphic Jacobi forms. The finite parts of these
functions, which are mixed mock Jacobi forms, account for the degeneracy of
states stable throughout the moduli space of the compactification. In this
paper, we obtain an exact asymptotic expansion for their Fourier coefficients,
refining the Hardy-Ramanujan-Littlewood circle method to deal with their
mixed-mock character. The result is compared to a low-energy supergravity
computation of the exact entropy of extremal dyonic 1/4-BPS single-centered
black holes, obtained by applying supersymmetric localization techniques to the
quantum entropy function.
| 0 | 0 | 1 | 0 | 0 | 0 |
A generalization of the injectivity condition for Projected Entangled Pair States | We introduce a family of tensor network states that we term semi-injective
Projected Entangled-Pair States (PEPS). They extend the class of injective PEPS
and include other states, like the ground states of the AKLT and the CZX models
in square lattices. We construct parent Hamiltonians for which semi-injective
PEPS are unique ground states. We also determine the necessary and sufficient
conditions for two tensors to generate the same family of such states in two
spatial dimensions. Using this result, we show that the third cohomology
labeling of Symmetry Protected Topological phases extends to semi-injective
PEPS.
| 0 | 1 | 0 | 0 | 0 | 0 |
Limits to single photon transduction by a single atom: Non-Markov theory | Single atoms form a model system for understanding the limits of single
photon detection. Here, we develop a non-Markov theory of single-photon
absorption by a two-level atom to place limits on the absorption (transduction)
time. We show the existence of a finite rise time in the probability of
excitation of the atom during the absorption event which is infinitely fast in
previous Markov theories. This rise time is governed by the bandwidth of the
atom-field interaction spectrum and leads to a fundamental jitter in
time-stamping the absorption event. Our theoretical framework captures both the
weak and strong atom-field coupling regimes and sheds light on the spectral
matching between the interaction bandwidth and single photon Fock state pulse
spectrum. Our work opens questions whether such jitter in the absorption event
can be observed in a multi-mode realistic single photon detector. Finally, we
also shed light on the fundamental differences between linear and nonlinear
detector outputs for single photon Fock state vs. coherent state pulses.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantifying the Contributions of Training Data and Algorithm Logic to the Performance of Automated Cause-assignment Algorithms for Verbal Autopsy | A verbal autopsy (VA) consists of a survey with a relative or close contact
of a person who has recently died. VA surveys are commonly used to infer likely
causes of death for individuals when deaths happen outside of hospitals or
healthcare facilities. Several statistical and algorithmic methods are
available to assign cause of death using VA surveys. Each of these methods
require as inputs some information about the joint distribution of symptoms and
causes. In this note, we examine the generalizability of this symptom-cause
information (SCI) by comparing different automated coding methods using various
combinations of inputs and evaluation data. VA algorithm performance is
affected by both the specific SCI themselves and the logic of a given
algorithm. Using a variety of performance metrics for all existing VA
algorithms, we demonstrate that in general the adequacy of the information
about the joint distribution between symptoms and causes affects performance at
least as much as, or more than, the algorithm logic.
| 0 | 0 | 0 | 1 | 0 | 0 |
Rydberg excitation of cold atoms inside a hollow core fiber | We report on a versatile, highly controllable hybrid cold Rydberg atom fiber
interface, based on laser cooled atoms transported into a hollow core
Kagomé crystal fiber. Our experiments are the first to demonstrate the
feasibility of exciting cold Rydberg atoms inside a hollow core fiber and we
study the influence of the fiber on Rydberg electromagnetically induced
transparency (EIT) signals. Using a temporally resolved detection method to
distinguish between excitation and loss, we observe two different regimes of
the Rydberg excitations: one EIT regime and one regime dominated by atom loss.
These results are a substantial advancement towards future use of our system
for quantum simulation or information.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamics and evolution of planets in mean-motion resonances | In some planetary systems the orbital periods of two of its members present a
commensurability, usually known by mean-motion resonance. These resonances
greatly enhance the mutual gravitational influence of the planets. As a
consequence, these systems present uncommon behaviours and their motions need
to be studied with specific methods. Some features are unique and allow a
better understanding and characterisation of these systems. Moreover,
mean-motion resonances are a result of an early migration of the orbits in an
accretion disk, so it is possible to derive constraints on their formation.
Here we review the dynamics of a pair of resonant planets and explain how their
orbits evolve in time. We apply our results to the HD45365 planetary system.
| 0 | 1 | 1 | 0 | 0 | 0 |
The Gain in the Field of Two Electromagnetic Waves | We consider the motion of a nonrelativistic electron in the field of two
strong monochromatic light waves propagating counter to each other. The matrix
elements of emission and absorption are found. An expression is obtained for
the gain of a weak test wave by using such matrix elements.
| 0 | 1 | 0 | 0 | 0 | 0 |
Periods and factors of weak model sets | There is a renewed interest in weak model sets due to their connection to
$\mathcal B$-free systems, which emerged from Sarnak's program on the Möbius
disjointness conjecture. Here we continue our recent investigation
[arXiv:1511.06137] of the extended hull ${\mathcal M}^{\scriptscriptstyle
G}_{\scriptscriptstyle W}$, a dynamical system naturally associated to a weak
model set in an abelian group $G$ with relatively compact window $W$. For
windows having a nowhere dense boundary (this includes compact windows), we
identify the maximal equicontinuous factor of ${\mathcal M}^{\scriptscriptstyle
G}_{\scriptscriptstyle W}$ and give a sufficient condition when ${\mathcal
M}^{\scriptscriptstyle G}_{\scriptscriptstyle W}$ is an almost 1:1 extension of
its maximal equicontinuous factor. If the window is measurable with positive
Haar measure and is almost compact, then the system ${\mathcal
M}^{\scriptscriptstyle G}_{\scriptscriptstyle W}$ equipped with its Mirsky
measure is isomorphic to its Kronecker factor. For general nontrivial ergodic
probability measures on ${\mathcal M}^{\scriptscriptstyle
G}_{\scriptscriptstyle W}$, we provide a kind of lower bound for the Kronecker
factor. All relevant factor systems are natural $G$-actions on quotient
subgroups of the torus underlying the weak model set. These are obtained by
factoring out suitable window periods. Our results are specialised to the usual
hull of the weak model set, and they are also interpreted for ${\mathcal
B}$-free systems.
| 0 | 0 | 1 | 0 | 0 | 0 |
The Individual Impact Index ($i^3$) Statistic: A Novel Article-Level Citation Metric | Citation metrics are analytic measures used to evaluate the usage, impact and
dissemination of scientific research. Traditionally, citation metrics have been
independently measured at each level of the publication pyramid, namely at the
article-level, at the author-level, and at the journal-level. The most commonly
used metrics have been focused on journal-level measurements, such as the
Impact Factor and the Eigenfactor, as well as on researcher-level metrics like
the Hirsch index (h-index) and i10 index. On the other hand, reliable
article-level metrics are less widespread, and are often limited to
non-standardized and non-scientific characteristics of individual articles,
such as views, citations, downloads, and mentions in social and news media.
These characteristics are known as 'altmetrics'. However, when the number of
views and citations are similar between two articles, no discriminating measure
currently exists with which to assess and compare each article's individual
impact. Given the modern, exponentially growing scientific literature,
scientists and readers of Science need optimized, reliable, objective methods
for managing, measuring and comparing research outputs and individual
publications. To this end, I hereby describe and propose a new standardized
article-level metric henceforth known as the 'Individual Impact Index
Statistic', or $i^3$ for short. The $i^3$ is a weighted algorithm that takes
advantage of the peer-review process, and considers a number of characteristics
of individual scientific publications in order to yield a standardized and
readily comparable measure of impact and dissemination. The strengths,
limitations, and potential uses of this novel metric are also discussed.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time-of-Flight Three Dimensional Neutron Diffraction in Transmission Mode for Mapping Crystal Grain Structures | The physical properties of polycrystalline materials depend on their
microstructure, which is the nano-to-centimeter-scale arrangement of phases and
defects in their interior. Such microstructure depends on the shape,
crystallographic phase and orientation, and interfacing of the grains
constituting the material. This article presents a new non-destructive 3D
technique to study bulk samples with sizes in the cm range with a resolution of
one hundred micrometers: time-of-flight three-dimensional neutron diffraction (ToF
3DND). Compared to existing analogous X-ray diffraction techniques, ToF 3DND
enables studies of samples that can be both larger in size and made of heavier
elements. Moreover, ToF 3DND facilitates the use of complicated sample
environments. The basic ToF 3DND setup, utilizing an imaging detector with high
spatial and temporal resolution, can easily be implemented at a time-of-flight
neutron beamline. The technique was developed and tested with data collected at
the Materials and Life Science Experimental Facility of the Japan Proton
Accelerator Research Complex (J-PARC) for an iron sample. We successfully reconstructed
the shape of 108 grains and developed an indexing procedure. The reconstruction
algorithms have been validated by reconstructing two stacked Co-Ni-Ga single
crystals and by comparison with a grain map obtained by post-mortem electron
backscatter diffraction (EBSD).
| 0 | 1 | 0 | 0 | 0 | 0 |
Integrable $sl(\infty)$-modules and Category $\mathcal O$ for $\mathfrak{gl}(m|n)$ | We introduce and study new categories T(g,k) of integrable sl(\infty)-modules
which depend on the choice of a certain reductive subalgebra k in g=sl(\infty).
The simple objects of these categories are tensor modules as in the previously
studied category; however, the choice of k provides more flexibility for
nonsimple modules. We then choose k to have two infinite-dimensional diagonal
blocks, and show that a certain injective object K(m|n) in T(g,k) realizes a
categorical sl(\infty)-action on the integral category O(m|n) of the Lie
superalgebra gl(m|n). We show that the socle of K(m|n) is generated by the
projective modules in O(m|n), and compute the socle filtration of K(m|n)
explicitly. We conjecture that the socle filtration of K(m|n) reflects a
"degree of atypicality filtration" on the category O(m|n). We also conjecture
that a natural tensor filtration on K(m|n) arises via the Duflo--Serganova
functor sending the category O(m|n) to O(m-1|n-1). We prove this latter
conjecture for a direct summand of K(m|n) corresponding to the
finite-dimensional gl(m|n)-modules.
| 0 | 0 | 1 | 0 | 0 | 0 |
Island dynamics and anisotropy during vapor phase epitaxy of m-plane GaN | Using in situ grazing-incidence x-ray scattering, we have measured the
diffuse scattering from islands that form during layer-by-layer growth of GaN
by metal-organic vapor phase epitaxy on the (10$\overline{1}$0) m-plane surface. The diffuse
scattering is extended in the (0001) in-plane direction in reciprocal space,
indicating a strong anisotropy with islands elongated along [1 $\overline{2}$
10] and closely spaced along [0001]. This is confirmed by atomic force
microscopy of a quenched sample. Islands were characterized as a function of
growth rate G and temperature. The island spacing along [0001] observed during
the growth of the first monolayer obeys a power-law dependence on growth rate
G$^{-n}$, with an exponent $n = 0.25 \pm 0.02$. Results are in agreement with
recent kinetic Monte Carlo simulations, indicating that elongated islands
result from the dominant anisotropy in step edge energy and not from surface
diffusion anisotropy. The observed power-law exponent can be explained using a
simple steady-state model, which gives n = 1/4.
| 0 | 1 | 0 | 0 | 0 | 0 |
Komlós-Major-Tusnády approximations to increments of uniform empirical processes | The well-known Komlós-Major-Tusnády inequalities [Z. Wahrsch. Verw.
Gebiete 32 (1975) 111-131; Z. Wahrsch. Verw. Gebiete 34 (1976) 33-58] provide
sharp approximations to partial sums of iid standard exponential random variables
by a sequence of standard Brownian motions. In this paper, we employ these
results to establish Gaussian approximations to weighted increments of uniform
empirical and quantile processes. This approach provides rates to the
approximations which, among others, have direct applications to statistics of
extreme values for randomly censored data.
| 0 | 0 | 1 | 1 | 0 | 0 |
Evaluating Gaussian Process Metamodels and Sequential Designs for Noisy Level Set Estimation | We consider the problem of learning the level set for which a noisy black-box
function exceeds a given threshold.
To efficiently reconstruct the level set, we investigate Gaussian process
(GP) metamodels. Our focus is on strongly stochastic samplers, in particular
with heavy-tailed simulation noise and low signal-to-noise ratio.
To guard against noise misspecification, we assess the performance of three
variants: (i) GPs with Student-$t$ observations; (ii) Student-$t$ processes
(TPs); and (iii) classification GPs modeling the sign of the response. As a
fourth extension, we study GP surrogates with monotonicity constraints that are
relevant when the level set is known to be connected. In conjunction with these
metamodels, we analyze several acquisition functions for guiding the sequential
experimental designs, extending existing stepwise uncertainty reduction
criteria to the stochastic contour-finding context. This also motivates our
development of (approximate) updating formulas to efficiently compute such
acquisition functions. Our schemes are benchmarked by using a variety of
synthetic experiments in 1--6 dimensions. We also consider an application of
level set estimation for determining the optimal exercise policy and valuation
of Bermudan options in finance.
| 0 | 0 | 0 | 1 | 0 | 0 |
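A minimal numpy-only sketch of the basic idea behind the GP metamodel in the abstract above (plain GP regression with Gaussian noise, not the Student-$t$/TP/classification variants the paper studies): fit a GP posterior to noisy samples of a black-box function and estimate the level set as the region where the posterior mean exceeds the threshold. The test function, kernel length-scale, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.15):
    # Squared-exponential kernel matrix between point sets a and b.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Noisy samples of a hypothetical black-box f(x) = sin(2*pi*x).
X = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(X.size)

noise_var = 0.01
K = rbf(X, X) + noise_var * np.eye(X.size)
alpha = np.linalg.solve(K, y)

def posterior_mean(x_star):
    # GP posterior mean at query points x_star.
    return rbf(np.atleast_1d(x_star), X) @ alpha

# Level-set estimate for threshold h = 0: where the posterior mean exceeds h.
grid = np.linspace(0.0, 1.0, 101)
above = posterior_mean(grid) > 0.0
```

Sequential designs would then pick the next sample where the level-set membership is most uncertain; the sketch only shows the metamodel step.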
MTBase: Optimizing Cross-Tenant Database Queries | In the last decade, many business applications have moved into the cloud. In
particular, the "database-as-a-service" paradigm has become mainstream. While
existing multi-tenant data management systems focus on single-tenant query
processing, we believe that it is time to rethink how queries can be processed
across multiple tenants in such a way that we not only gain more valuable
insights, but also do so at minimal cost. As we will argue in this paper, standard
SQL semantics are insufficient to process cross-tenant queries in an
unambiguous way, which is why existing systems use other, expensive means like
ETL or data integration. We first propose MTSQL, a set of extensions to
standard SQL, which fixes the ambiguity problem. Next, we present MTBase, a
query processing middleware that efficiently processes MTSQL on top of SQL. As
we will see, there is a canonical, provably correct rewrite algorithm from
MTSQL to SQL, which may however result in poor query execution performance,
even on high-performance database products. We further show that with
carefully-designed optimizations, execution times can be reduced in such ways
that the difference to single-tenant queries becomes marginal.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mechanism Deduction from Noisy Chemical Reaction Networks | We introduce KiNetX, a fully automated meta-algorithm for the kinetic
analysis of complex chemical reaction networks derived from semi-accurate but
efficient electronic structure calculations. It is designed to (i) accelerate
the automated exploration of such networks, and (ii) cope with model-inherent
errors in electronic structure calculations on elementary reaction steps. We
developed and implemented KiNetX to possess three features. First, KiNetX
evaluates the kinetic relevance of every species in a (yet incomplete) reaction
network to confine the search for new elementary reaction steps only to those
species that are considered possibly relevant. Second, KiNetX identifies and
eliminates all kinetically irrelevant species and elementary reactions to
reduce a complex network graph to a comprehensible mechanism. Third, KiNetX
estimates the sensitivity of species concentrations toward changes in
individual rate constants (derived from relative free energies), which allows
us to systematically select the most efficient electronic structure model for
each elementary reaction given a predefined accuracy. The novelty of KiNetX
consists in the rigorous propagation of correlated free-energy uncertainty
through all steps of our kinetic analysis. To examine the performance of KiNetX,
we developed AutoNetGen. It semirandomly generates chemistry-mimicking reaction
networks by encoding chemical logic into their underlying graph structure.
AutoNetGen allows us to consider a vast number of distinct chemistry-like
scenarios and, hence, to assess the importance of rigorous uncertainty
propagation in a statistical context. Our results reveal that KiNetX reliably
supports the deduction of product ratios, dominant reaction pathways, and
possibly other network properties from semi-accurate electronic structure data.
| 0 | 0 | 0 | 0 | 1 | 0 |
Structural scale $q-$derivative and the LLG-Equation in a scenario with fractionality | In the present contribution, we study the Landau-Lifshitz-Gilbert equation
with two versions of structural derivatives recently proposed: the scale
$q-$derivative in the non-extensive statistical mechanics and the axiomatic
metric derivative, which presents Mittag-Leffler functions as eigenfunctions.
The use of structural derivatives aims to take into account long-range forces,
possible non-manifest or hidden interactions and the dimensionality of space.
Having this purpose in mind, we build up an evolution operator and a deformed
version of the LLG equation. Damping in the oscillations naturally shows up
without an explicit Gilbert damping term.
| 0 | 1 | 1 | 0 | 0 | 0 |
Analog Optical Computing by Half-Wavelength Slabs | A new approach to perform analog optical differentiation is presented using
half-wavelength slabs. First, a half-wavelength dielectric slab is used to
design a first order differentiator. The latter works properly for both major
polarizations, in contrast to designs based on Brewster effect [Opt. Lett. 41,
3467 (2016)]. Inspired by the proposed dielectric differentiator, and by
exploiting the unique features of graphene, we further design and demonstrate a
reconfigurable and highly miniaturized differentiator using a half-wavelength
plasmonic graphene film. To the best of our knowledge, our proposed
graphene-based differentiator is even smaller than the most compact
differentiator presented so far [Opt. Lett. 40, 5239 (2015)].
| 0 | 1 | 0 | 0 | 0 | 0 |
A Separation Principle for Control in the Age of Deep Learning | We review the problem of defining and inferring a "state" for a control
system based on complex, high-dimensional, highly uncertain measurement streams
such as videos. Such a state, or representation, should contain all and only
the information needed for control, and discount nuisance variability in the
data. It should also have finite complexity, ideally modulated depending on
available resources. This representation is what we want to store in memory in
lieu of the data, as it "separates" the control task from the measurement
process. For the trivial case with no dynamics, a representation can be
inferred by minimizing the Information Bottleneck Lagrangian in a function
class realized by deep neural networks. The resulting representation has much
higher dimension than the data, already in the millions, but it is smaller in
the sense of information content, retaining only what is needed for the task.
This process also yields representations that are invariant to nuisance factors
and having maximally independent components. We extend these ideas to the
dynamic case, where the representation is the posterior density of the task
variable given the measurements up to the current time, which is in general
much simpler than the prediction density maintained by the classical Bayesian
filter. Again this can be finitely-parametrized using a deep neural network,
and already some applications are beginning to emerge. No explicit assumption
of Markovianity is needed; instead, complexity trades off approximation of an
optimal representation, including the degree of Markovianity.
| 1 | 0 | 0 | 1 | 0 | 0 |
Improving Neural Network Quantization using Outlier Channel Splitting | Quantization can improve the execution latency and energy efficiency of
neural networks on both commodity GPUs and specialized accelerators. The
majority of existing literature focuses on training quantized DNNs, while this
work examines the less-studied topic of quantizing a floating-point model
without (re)training. DNN weights and activations follow a bell-shaped
distribution post-training, while practical hardware uses a linear quantization
grid. This leads to challenges in dealing with outliers in the distribution.
Prior work has addressed this by clipping the outliers or using specialized
hardware. In this work, we propose outlier channel splitting (OCS), which
duplicates channels containing outliers, then halves the channel values. The
network remains functionally identical, but affected outliers are moved toward
the center of the distribution. OCS requires no additional training and works
on commodity hardware. Experimental evaluation on ImageNet classification and
language modeling shows that OCS can outperform state-of-the-art clipping
techniques with only minor overhead.
| 1 | 0 | 0 | 1 | 0 | 0 |
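The channel-splitting trick described in the abstract above is easy to check numerically. The sketch below (an illustrative numpy reconstruction, not the authors' code) duplicates one weight column of a toy linear layer, halves both copies, and duplicates the matching input entry: the output is unchanged while the extreme weight magnitude shrinks, easing linear quantization.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear layer y = W @ x with one outlier weight column (index 3).
W = 0.1 * rng.standard_normal((4, 8))
W[:, 3] *= 30.0                       # the "outlier channel"
x = rng.standard_normal(8)
y = W @ x

# Outlier channel splitting: duplicate column 3, halve both copies,
# and duplicate the corresponding input entry.
W_split = np.hstack([W, W[:, [3]] / 2.0])
W_split[:, 3] /= 2.0
x_split = np.append(x, x[3])
y_split = W_split @ x_split

# Functionally identical output, smaller extreme weights.
```

The cost is the extra duplicated channel, which matches the "minor overhead" trade-off the abstract reports.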
On some further properties and application of Weibull-R family of distributions | In this paper, we provide some new results for the Weibull-R family of
distributions (Alzaghal, Ghosh and Alzaatreh (2016)). We derive some new
structural properties of the Weibull-R family of distributions. We provide
various characterizations of the family via conditional moments, some functions
of order statistics and via record values.
| 0 | 0 | 1 | 1 | 0 | 0 |
Rejection of the principle of material frame indifference | The principle of material frame indifference is shown to be incompatible with
the basic balance laws of continuum mechanics. In its role of providing
constraints on possible constitutive prescriptions it must be replaced by the
classical principle of Galilean invariance.
| 0 | 1 | 0 | 0 | 0 | 0 |
Construction of flows of finite-dimensional algebras | Recently, we introduced the notion of flow (depending on time) of
finite-dimensional algebras. A flow of algebras (FA) is a particular case of a
continuous-time dynamical system whose states are finite-dimensional algebras
with (cubic) matrices of structural constants satisfying an analogue of the
Kolmogorov-Chapman equation (KCE). Since there are several kinds of
multiplications between cubic matrices, one has to fix a multiplication first and
then consider the KCE with respect to the fixed multiplication. The existence
of a solution for the KCE provides the existence of an FA. In this paper our
aim is to find sufficient conditions on the multiplications under which the
corresponding KCE has a solution. Mainly our conditions are given on the
algebra of cubic matrices (ACM) considered with respect to a fixed
multiplication of cubic matrices. Under some assumptions on the ACM (e.g. power
associative, unital, associative, commutative) we describe a wide class of FAs,
which contain algebras of arbitrary finite dimension. In particular, adapting
the theory of continuous-time Markov processes, we construct a class of FAs
given by the matrix exponent of cubic matrices. Moreover, we considerably extend
the set of FAs given with respect to the Maksimov's multiplications of our
previous paper (J. Algebra 470 (2017) 263--288). For several FAs we study the
time-dependent behavior (dynamics) of the algebras. We derive a system of
differential equations for FAs.
| 0 | 0 | 1 | 0 | 0 | 0 |
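As a simplified numerical illustration of the Kolmogorov-Chapman equation behind the matrix-exponent construction mentioned above (using ordinary square matrices in place of cubic matrices with their special multiplications), the two-parameter family F(s,t) = exp((t-s)A) satisfies F(s,t)F(t,u) = F(s,u):

```python
import numpy as np

def expm(M, terms=30):
    # Truncated Taylor series for the matrix exponential; adequate for small ||M||.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(2)
A = 0.3 * rng.standard_normal((3, 3))

def F(s, t):
    # Two-parameter flow generated by A.
    return expm((t - s) * A)

s, t, u = 0.0, 0.7, 1.5
lhs = F(s, t) @ F(t, u)   # compose the flow over [s,t] and [t,u]
rhs = F(s, u)             # flow directly over [s,u]
```

The paper's setting replaces the matrix product here by a fixed multiplication of cubic matrices of structural constants, which is exactly where the existence question becomes nontrivial.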
PdBI U/LIRG Survey (PULS): Dense Molecular Gas in Arp 220 and NGC 6240 | Aims. We present new IRAM Plateau de Bure Interferometer observations of Arp
220 in HCN, HCO$^{+}$, HN$^{13}$C J=1-0, C$_{2}$H N=1-0, SiO J = 2-1, HNCO
J$_{k,k'}$ = 5$_{0,4}$ - 4$_{0,4}$, CH$_{3}$CN(6-5), CS J=2-1 and 5-4 and
$^{13}$CO J=1-0 and 2-1 and of NGC 6240 in HCN, HCO$^{+}$ J = 1-0 and C$_{2}$H
N = 1-0. In addition, we present Atacama Large Millimeter/submillmeter Array
science verification observations of Arp 220 in CS J = 4-3 and
CH$_{3}$CN(10-9). Various lines are used to analyse the physical conditions of
the molecular gas including the [$^{12}$CO]/[$^{13}$CO] and
[$^{12}$CO]/[C$^{18}$O] abundance ratios. These observations will be made
available to the public. Methods. We create brightness temperature line ratio
maps to present the different physical conditions across Arp 220 and NGC 6240.
In addition, we use the radiative transfer code RADEX and a Monte Carlo Markov
Chain likelihood code to model the $^{12}$CO, $^{13}$CO and C$^{18}$O lines of
Arp 220 at ~2" (~700 pc) scales, where the $^{12}$CO and C$^{18}$O measurements
were obtained from literature. Results. Line ratios of optically thick lines
such as $^{12}$CO show smoothly varying ratios while the line ratios of
optically thin lines such as $^{13}$CO show an east-west gradient across Arp
220. The HCN/HCO$^{+}$ line ratio differs between Arp 220 and NGC 6240, where
Arp 220 has line ratios above 2 and NGC 6240 below 1. The radiative transfer
analysis solution is consistent with a warm (~40 K), moderately dense
(~10$^{3.4}$ cm$^{-3}$) molecular gas component averaged over the two nuclei.
We find [$^{12}$CO]/[$^{13}$CO] and [$^{12}$CO]/[C$^{18}$O] abundance ratios of
~90 for both. The abundance enhancement of C$^{18}$O can be explained by
stellar nucleosynthesis enrichment of the interstellar medium.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Dark Side(-Channel) of Mobile Devices: A Survey on Network Traffic Analysis | In recent years, mobile devices (e.g., smartphones and tablets) have met an
increasing commercial success and have become a fundamental element of the
everyday life for billions of people all around the world. Mobile devices are
used not only for traditional communication activities (e.g., voice calls and
messages) but also for more advanced tasks made possible by an enormous amount
of multi-purpose applications (e.g., finance, gaming, and shopping). As a
result, those devices generate significant network traffic (a considerable part
of the overall Internet traffic). For this reason, the research community has
been investigating security and privacy issues that are related to the network
traffic generated by mobile devices, which could be analyzed to obtain
information useful for a variety of goals (ranging from device security and
network optimization, to fine-grained user profiling).
In this paper, we review the works that contributed to the state of the art
of network traffic analysis targeting mobile devices. In particular, we present
a systematic classification of the works in the literature according to three
criteria: (i) the goal of the analysis; (ii) the point where the network
traffic is captured; and (iii) the targeted mobile platforms. In this survey,
we consider points of capturing such as Wi-Fi Access Points, software
simulation, and inside real mobile devices or emulators. For the surveyed
works, we review and compare analysis techniques, validation methods, and
achieved results. We also discuss possible countermeasures, challenges, and
directions for future research on mobile traffic analysis and other
emerging domains (e.g., Internet of Things). We believe our survey will be a
reference work for researchers and practitioners in this research field.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-Ergodic Delocalization in the Rosenzweig-Porter Model | We consider the Rosenzweig-Porter model $H = V + \sqrt{T}\, \Phi$, where $V$
is a $N \times N$ diagonal matrix, $\Phi$ is drawn from the $N \times N$
Gaussian Orthogonal Ensemble, and $N^{-1} \ll T \ll 1$. We prove that the
eigenfunctions of $H$ are typically supported in a set of approximately $NT$
sites, thereby confirming the existence of a previously conjectured non-ergodic
delocalized phase. Our proof is based on martingale estimates along the
characteristic curves of the stochastic advection equation satisfied by the
local resolvent of the Brownian motion representation of $H$.
| 0 | 1 | 0 | 0 | 0 | 0 |
MUDA: A Truthful Multi-Unit Double-Auction Mechanism | In a seminal paper, McAfee (1992) presented a truthful mechanism for double
auctions, attaining asymptotically-optimal gain-from-trade without any prior
information on the valuations of the traders. McAfee's mechanism handles
single-parametric agents, allowing each seller to sell a single unit and each
buyer to buy a single unit. This paper presents a double-auction mechanism that
handles multi-parametric agents and allows multiple units per trader, as long
as the valuation functions of all traders have decreasing marginal returns. The
mechanism is prior-free, ex-post individually-rational, dominant-strategy
truthful and strongly-budget-balanced. Its gain-from-trade approaches the
optimum when the market size is sufficiently large.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-variable LSTM neural network for autoregressive exogenous model | In this paper, we propose multi-variable LSTM capable of accurate forecasting
and variable importance interpretation for time series with exogenous
variables. Current attention mechanisms in recurrent neural networks mostly
focus on the temporal aspect of data and fall short of characterizing
variable importance. To this end, the multi-variable LSTM equipped with
tensorized hidden states is developed to learn hidden states for individual
variables, which give rise to our mixture temporal and variable attention.
Based on such attention mechanism, we infer and quantify variable importance.
Extensive experiments using real datasets with Granger-causality test and the
synthetic dataset with ground truth demonstrate the prediction performance and
interpretability of multi-variable LSTM in comparison to a variety of
baselines. It exhibits the prospect of multi-variable LSTM as an end-to-end
framework for both forecasting and knowledge discovery.
| 0 | 0 | 0 | 1 | 0 | 0 |
Spectral Image Visualization Using Generative Adversarial Networks | Spectral images captured by satellites and radio-telescopes are analyzed to
obtain information about geological composition distributions and distant stars,
as well as undersea terrain. Spectral images usually contain tens to hundreds
of continuous narrow spectral bands and are widely used in various fields. But
the vast majority of those image signals are beyond the visible range, which
calls for special visualization technique. The visualizations of spectral
images shall convey as much information as possible from the original signal
and facilitate image interpretation. However, most of the existing visualization
methods display spectral images in false colors, which conflict with human
experience and expectation. In this paper, we present a novel visualization
generative adversarial network (GAN) to display spectral images in natural
colors. To achieve our goal, we propose a loss function which consists of an
adversarial loss and a structure loss. The adversarial loss pushes our solution
to the natural image distribution using a discriminator network that is trained
to differentiate between false-color images and natural-color images. We also
use a cycle loss as the structure constraint to guarantee structure
consistency. Experimental results show that our method is able to generate
structure-preserved and natural-looking visualizations.
| 0 | 0 | 0 | 1 | 0 | 0 |
Simulations for 21 cm radiation lensing at EoR redshifts | We introduce simulations aimed at assessing how well weak gravitational
lensing of 21cm radiation from the Epoch of Reionization ($z \sim 8$) can be
measured by an SKA-like radio telescope. A simulation pipeline has been
implemented to study the performance of lensing reconstruction techniques. We
show how well the lensing signal can be reconstructed using the
three-dimensional quadratic lensing estimator in Fourier space assuming
different survey strategies. The numerical code introduced in this work is
capable of dealing with issues that cannot be treated analytically, such as the
discreteness of visibility measurements and the inclusion of a realistic model
for the antennae distribution. This paves the way for future numerical studies
implementing more realistic reionization models, foreground subtraction
schemes, and testing the performance of lensing estimators that take into
account the non-Gaussian distribution of HI after reionization. If multiple
frequency channels covering $z \sim 7-11.6$ are combined, Phase 1 of SKA-Low
should be able to obtain good quality images of the lensing potential with a
total resolution of $\sim 1.6$ arcmin. The SKA-Low Phase 2 should be capable of
providing images with high fidelity even using data from $z\sim 7.7 - 8.3$. We
perform tests aimed at evaluating the numerical implementation of the mapping
reconstruction. We also discuss the possibility of measuring an accurate
lensing power spectrum. Combining data from $z \sim 7-11.6$ using the SKA2-Low
telescope model, we find constraints comparable to sample variance in the range
$L<1000$, even for survey areas as small as $25\mbox{ deg}^2$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Coverage characteristics of self-repelling random walks in mobile ad-hoc networks | A self-repelling random walk of a token on a graph is one in which at each
step, the token moves to a neighbor that has been visited least often (with
ties broken randomly). The properties of self-repelling random walks have been
analyzed for two dimensional lattices and these walks have been shown to
exhibit a remarkable uniformity with which they visit nodes in a graph. In this
paper, we extend this analysis to self-repelling random walks on mobile
networks in which the underlying graph itself is temporally evolving. Using
network simulations in ns-3, we characterize the number of times each node is
visited from the start until all nodes have been visited at least once. We
evaluate under different mobility models and on networks ranging from 100 to
1000 nodes. Our results show that until about 85% coverage, duplicate visits
are very rare, highlighting the efficiency with which a majority of nodes in the
network can be visited. Even at 100% coverage, the exploration overhead (the
ratio of number of steps to number of unique visited nodes) remains low and
under 2. Our analysis shows that self-repelling random walks are effective,
structure-free tools for data aggregation in mobile ad-hoc networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
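A stdlib-only sketch of the walk rule described in the abstract above, on a static ring graph rather than a temporally evolving mobile network (an illustrative simplification): at each step the token moves to its least-visited neighbor, ties broken uniformly at random, and we track coverage and exploration overhead.

```python
import random

random.seed(3)

# Ring graph on n nodes: each node's neighbors are its two ring predecessors/successors.
n = 50
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

visits = {v: 0 for v in range(n)}
token = 0
visits[token] = 1
steps = 0

while min(visits.values()) == 0 and steps < 10_000:
    nbrs = neighbors[token]
    fewest = min(visits[u] for u in nbrs)
    # Move to a least-visited neighbor, breaking ties randomly.
    token = random.choice([u for u in nbrs if visits[u] == fewest])
    visits[token] += 1
    steps += 1

covered = sum(1 for v in visits.values() if v > 0)
overhead = steps / covered  # steps taken per unique node visited
```

On the ring the walk sweeps monotonically once it picks a direction, so it covers all nodes with no duplicate visits; on richer topologies the overhead grows but, per the abstract, stays low until near-full coverage.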
Model Agnostic Time Series Analysis via Matrix Estimation | We propose an algorithm to impute and forecast a time series by transforming
the observed time series into a matrix, utilizing matrix estimation to recover
missing values and de-noise observed entries, and performing linear regression
to make predictions. At the core of our analysis is a representation result,
which states that for a large model class, the transformed time series matrix
is (approximately) low-rank. In effect, this generalizes the widely used
Singular Spectrum Analysis (SSA) in time series literature, and allows us to
establish a rigorous link between time series analysis and matrix estimation.
The key to establishing this link is constructing a Page matrix with
non-overlapping entries rather than a Hankel matrix as is commonly done in the
literature (e.g., SSA). This particular matrix structure allows us to provide
finite sample analysis for imputation and prediction, and prove the asymptotic
consistency of our method. Another salient feature of our algorithm is that it
is model agnostic with respect to both the underlying time dynamics and the
noise distribution in the observations. The noise agnostic property of our
approach allows us to recover the latent states when only given access to noisy
and partial observations, as in a Hidden Markov Model; e.g., recovering the
time-varying parameter of a Poisson process without knowing that the underlying
process is Poisson. Furthermore, since our forecasting algorithm requires
regression with noisy features, our approach suggests a matrix estimation based
method - coupled with a novel, non-standard matrix estimation error metric - to
solve the error-in-variable regression problem, which could be of interest in
its own right. Through synthetic and real-world datasets, we demonstrate that
our algorithm outperforms standard software packages (including R libraries) in
the presence of missing data as well as high levels of noise.
| 0 | 0 | 0 | 1 | 0 | 0 |
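The core construction in the abstract above, stacking non-overlapping segments of the series into a Page matrix and exploiting its approximate low rank, can be sketched as follows (illustrative numpy example with a single noisy sinusoid; the Page matrix of a pure sinusoid has rank exactly 2):

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy sinusoid observations.
T, L = 400, 20
t = np.arange(T)
signal = np.sin(2 * np.pi * t / 50)
series = signal + 0.3 * rng.standard_normal(T)

# Page matrix: entry (i, j) = series[i + j*L] -- non-overlapping columns,
# unlike the Hankel matrix used in classical SSA.
page = series.reshape(T // L, L).T

# De-noise by hard-thresholding the SVD to rank 2, then unfold back to a series.
U, s, Vt = np.linalg.svd(page, full_matrices=False)
page_hat = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
denoised = page_hat.T.reshape(T)

mse_noisy = np.mean((series - signal) ** 2)
mse_denoised = np.mean((denoised - signal) ** 2)
```

Imputation and forecasting in the paper add matrix-estimation of missing entries and a linear regression on top of this matrix view; the sketch shows only the de-noising step.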
Efficient motion planning for problems lacking optimal substructure | We consider the motion-planning problem of planning a collision-free path of
a robot in the presence of risk zones. The robot is allowed to travel in these
zones but is penalized in a super-linear fashion for consecutive accumulative
time spent there. We suggest a natural cost function that balances path length
and risk-exposure time. Specifically, we consider the discrete setting where we
are given a graph, or a roadmap, and we wish to compute the minimal-cost path
under this cost function. Interestingly, paths defined using our cost function
do not have an optimal substructure. Namely, subpaths of an optimal path are
not necessarily optimal. Thus, the Bellman condition is not satisfied and
standard graph-search algorithms such as Dijkstra's cannot be used. We present a
path-finding algorithm, which can be seen as a natural generalization of
Dijkstra's algorithm. Our algorithm runs in $O\left((n_B\cdot n) \log( n_B\cdot
n) + n_B\cdot m\right)$ time, where~$n$ and $m$ are the number of vertices and
edges of the graph, respectively, and $n_B$ is the number of intersections
between edges and the boundary of the risk zone. We present simulations on
robotic platforms demonstrating both the natural paths produced by our cost
function and the computational efficiency of our algorithm.
| 1 | 0 | 0 | 0 | 0 | 0 |
Bayesian Optimal Data Detector for mmWave OFDM System with Low-Resolution ADC | Orthogonal frequency division multiplexing (OFDM) has been widely used in
communication systems operating in the millimeter wave (mmWave) band to combat
frequency-selective fading and achieve multi-Gbps transmissions, such as IEEE
802.15.3c and IEEE 802.11ad. For mmWave systems with ultra high sampling rate
requirements, the use of low-resolution analog-to-digital converters (ADCs)
(i.e., 1-3 bits) ensures an acceptable level of power consumption and system
costs. However, orthogonality among sub-channels in the OFDM system cannot be
maintained because of the severe non-linearity caused by low-resolution ADC,
which renders the design of the data detector challenging. In this study, we
develop an efficient algorithm for optimal data detection in the mmWave OFDM
system with low-resolution ADCs. The analytical performance of the proposed
detector is derived and verified to achieve the fundamental limit of the
Bayesian optimal design. On the basis of the derived analytical expression, we
further propose a power allocation (PA) scheme that seeks to minimize the
average symbol error rate. In addition to the optimal data detector, we also
develop a feasible channel estimation method, which can provide high-quality
channel state information without significant pilot overhead. Simulation
results confirm the accuracy of our analysis and illustrate that the
performance of the proposed detector in conjunction with the proposed PA scheme
is close to the optimal performance of the OFDM system with infinite-resolution
ADC.
| 1 | 0 | 0 | 0 | 0 | 0 |
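The non-linearity of a low-resolution ADC described above can be illustrated with a simple mid-rise uniform quantizer applied separately to the I and Q components (our own sketch under assumed parameters, not the system model of the paper):

```python
import numpy as np

def adc_quantize(x, bits, vmax=1.0):
    # Mid-rise uniform quantizer applied separately to the real (I) and
    # imaginary (Q) parts, modeling a low-resolution complex-valued ADC.
    step = 2 * vmax / (2 ** bits)
    def q(u):
        levels = step * (np.floor(u / step) + 0.5)
        return np.clip(levels, -vmax + step / 2, vmax - step / 2)
    return q(np.real(x)) + 1j * q(np.imag(x))
```

With `bits=1` every sample collapses to one of four points in the complex plane, which is why sub-channel orthogonality cannot be maintained after such quantization.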
A second-order stochastic maximum principle for generalized mean-field control problem | In this paper, we study the generalized mean-field stochastic control problem
when the usual stochastic maximum principle (SMP) is not applicable due to the
singularity of the Hamiltonian function. In this case, we derive a second-order
SMP. We introduce the adjoint process via the generalized mean-field backward
stochastic differential equation. The keys in the proofs are the expansion of
the cost functional in terms of a perturbation parameter, and the use of the
range theorem for vector-valued measures.
| 0 | 0 | 1 | 0 | 0 | 0 |
Representation of I(1) and I(2) autoregressive Hilbertian processes | We extend the Granger-Johansen representation theorems for I(1) and I(2)
vector autoregressive processes to accommodate processes that take values in an
arbitrary complex separable Hilbert space. This more general setting is of
central relevance for statistical applications involving functional time
series. We first obtain a range of necessary and sufficient conditions for a
pole in the inverse of a holomorphic index-zero Fredholm operator pencil to be
of first or second order. Those conditions form the basis for our development
of I(1) and I(2) representations of autoregressive Hilbertian processes.
Cointegrating and attractor subspaces are characterized in terms of the
behavior of the autoregressive operator pencil in a neighborhood of one.
| 0 | 0 | 1 | 1 | 0 | 0 |
Constraining cosmology with the velocity function of low-mass galaxies | The number density of field galaxies per rotation velocity, referred to as
the velocity function, is an intriguing statistical measure probing the
smallest scales of structure formation. In this paper we point out that the
velocity function is sensitive to small shifts in key cosmological parameters
such as the amplitude of primordial perturbations ($\sigma_8$) or the total
matter density ($\Omega_{\rm m}$). Using current data and applying conservative
assumptions about baryonic effects, we show that the observed velocity function
of the Local Volume favours cosmologies in tension with the measurements from
Planck but in agreement with the latest findings from weak lensing surveys.
While the current systematics regarding the relation between observed and true
rotation velocities are potentially important, upcoming data from HI surveys as
well as new insights from hydrodynamical simulations will dramatically improve
the situation in the near future.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the coefficients of symmetric power $L$-functions | We study the signs of the Fourier coefficients of a newform. Let $f$ be a
normalized newform of weight $k$ for $\Gamma_0(N)$. Let $a_f(n)$ be the $n$th
Fourier coefficient of $f$. For any fixed positive integer $m$, we study the
distribution of the signs of $\{a_f(p^m)\}_p$, where $p$ runs over all prime
numbers. We also determine the abscissas of absolute convergence of two
Dirichlet series with coefficients involving the Fourier coefficients of cusp
forms and the coefficients of symmetric power $L$-functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Spin-resolved electronic structure of ferroelectric α-GeTe and multiferroic Ge1-xMnxTe | Germanium telluride features special spin-electric effects originating from
spin-orbit coupling and symmetry breaking by the ferroelectric lattice
polarization, which opens up many prospects for electrically tunable and
switchable spin electronic devices. By Mn doping of the {\alpha}-GeTe host
lattice, the system becomes a multiferroic semiconductor possessing
magnetoelectric properties in which the electric polarization, magnetization
and spin texture are coupled to each other. Employing spin- and angle-resolved
photoemission spectroscopy in bulk- and surface-sensitive energy ranges and by
varying dipole transition matrix elements, we disentangle the bulk, surface and
surface-resonance states of the electronic structure and determine the spin
textures for selected parameters. From our results, we derive a comprehensive
model of the {\alpha}-GeTe surface electronic structure which fits experimental
data and first-principles theoretical predictions, and we discuss the
unconventional evolution of the Rashba-type spin splitting upon manipulation by
external B- and E-fields.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recurrent Scene Parsing with Perspective Understanding in the Loop | Objects may appear at arbitrary scales in perspective images of a scene,
posing a challenge for recognition systems that process images at a fixed
resolution. We propose a depth-aware gating module that adaptively selects the
pooling field size in a convolutional network architecture according to the
object scale (inversely proportional to the depth) so that small details are
preserved for distant objects while larger receptive fields are used for those
nearby. The depth gating signal is provided by stereo disparity or estimated
directly from monocular input. We integrate this depth-aware gating into a
recurrent convolutional neural network to perform semantic segmentation. Our
recurrent module iteratively refines the segmentation results, leveraging the
depth and semantic predictions from the previous iterations.
Through extensive experiments on four popular large-scale RGB-D datasets, we
demonstrate this approach achieves competitive semantic segmentation
performance with a model which is substantially more compact. We carry out
extensive analysis of this architecture including variants that operate on
monocular RGB but use depth as side-information during training, unsupervised
gating as a generic attentional mechanism, and multi-resolution gating. We find
that gated pooling for joint semantic segmentation and depth yields
state-of-the-art results for quantitative monocular depth estimation.
| 1 | 0 | 0 | 0 | 0 | 0 |
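The depth-aware gating idea above can be illustrated with a toy numpy version that selects an average-pooling window per pixel from a depth map (a hedged sketch with our own function names; the paper's module is a learned gate inside a CNN, not this literal loop):

```python
import numpy as np

def depth_gated_pool(feat, depth, sizes=(1, 3, 5)):
    # Toy depth-aware gating: nearby pixels (small depth) get the largest
    # average-pooling window; distant pixels (large depth) get the smallest,
    # so fine detail on distant objects is preserved.
    H, W = feat.shape
    edges = np.quantile(depth, [1 / 3, 2 / 3])
    bins = np.digitize(depth, edges)          # 0 = nearest, 2 = farthest
    pad = max(sizes) // 2
    fp = np.pad(feat, pad, mode='edge')
    out = np.empty_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            k = sizes[::-1][bins[i, j]]       # near -> large window
            h = k // 2
            out[i, j] = fp[i + pad - h:i + pad + h + 1,
                           j + pad - h:j + pad + h + 1].mean()
    return out
```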
Child-sized 3D Printed igus Humanoid Open Platform | The use of standard platforms in the field of humanoid robotics can
accelerate research, and lower the entry barrier for new research groups. While
many affordable humanoid standard platforms exist in the lower size ranges of
up to 60cm, beyond this the few available standard platforms quickly become
significantly more expensive, and difficult to operate and maintain. In this
paper, the igus Humanoid Open Platform is presented---a new, affordable,
versatile and easily customisable standard platform for humanoid robots in the
child-sized range. At 90cm, the robot is large enough to interact with a
human-scale environment in a meaningful way, and is equipped with enough torque
and computing power to foster research in many possible directions. The
structure of the robot is entirely 3D printed, allowing for a lightweight and
appealing design. The electrical and mechanical designs of the robot are
presented, and the main features of the corresponding open-source ROS software
are discussed. The 3D CAD files for all of the robot parts have been released
open-source in conjunction with this paper.
| 1 | 0 | 0 | 0 | 0 | 0 |
Lower Bounds on the Complexity of Solving Two Classes of Non-cooperative Games | This paper studies the complexity of solving two classes of non-cooperative
games in a distributed manner in which the players communicate with a set of
system nodes over noisy communication channels. The complexity of solving each
game class is defined as the minimum number of iterations required to find a
Nash equilibrium (NE) of any game in that class with $\epsilon$ accuracy.
First, we consider the class $\mathcal{G}$ of all $N$-player non-cooperative
games with a continuous action space that admit at least one NE. Using
information-theoretic inequalities, we derive a lower bound on the complexity
of solving $\mathcal{G}$ that depends on the Kolmogorov $2\epsilon$-capacity of
the constraint set and the total capacity of the communication channels. We
also derive a lower bound on the complexity of solving games in $\mathcal{G}$
which depends on the volume and surface area of the constraint set. We next
consider the class of all $N$-player non-cooperative games with at least one NE
such that the players' utility functions satisfy a certain (differential)
constraint. We derive lower bounds on the complexity of solving this game class
under both Gaussian and non-Gaussian noise models. Our result in the
non-Gaussian case is derived by establishing a connection between the
Kullback-Leibler distance and Fisher information.
| 1 | 0 | 0 | 0 | 0 | 0 |
Orbital misalignment of the Neptune-mass exoplanet GJ 436b with the spin of its cool star | The angle between the spin of a star and its planets' orbital planes traces
the history of the planetary system. Exoplanets orbiting close to cool stars
are expected to be on circular, aligned orbits because of strong tidal
interactions with the stellar convective envelope. Spin-orbit alignment can be
measured when the planet transits its star, but such ground-based spectroscopic
measurements are challenging for cool, slowly-rotating stars. Here we report
the characterization of a planet's three-dimensional trajectory around an M dwarf
star, derived by mapping the spectrum of the stellar photosphere along the
chord transited by the planet. We find that the eccentric orbit of the
Neptune-mass exoplanet GJ 436b is nearly perpendicular to the stellar equator.
Both eccentricity and misalignment, surprising around a cool star, can result
from dynamical interactions (via Kozai migration) with a yet-undetected outer
companion. This inward migration of GJ 436b could have triggered the
atmospheric escape that now sustains its giant exosphere. Eccentric, misaligned
exoplanets orbiting close to cool stars might thus hint at the presence of
unseen perturbers and illustrate the diversity of orbital architectures seen in
exoplanetary systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Flux-flow and vortex-glass phase in iron pnictide BaFe$_{2-x}$Ni$_x$As$_2$ single crystals with $T_c$ $\sim$ 20 K | We analysed the flux-flow region of isofield magneto resistivity data
obtained on three crystals of BaFe$_{2-x}$Ni$_x$As$_2$ with $T_c$$\sim$20 K for
three different geometries relative to the angle formed between the applied
magnetic field and the c-axis of the crystals. The field dependent activation
energy, $U_0$, was obtained from the TAFF and modified vortex-glass models,
which were compared with the values of $U_0$ obtained from flux-creep available
in the literature. We observed that the $U_0$ obtained from the TAFF model show
deviations among the different crystals, while the corresponding glass lines
obtained from the vortex glass model are virtually coincident. It is shown that
the data is well explained by the modified vortex glass model, allowing us to
extract values of $T_g$, the glass transition temperature, and $T^*$, a
temperature which scales with the mean field critical temperature $T_c(H)$. The
resulting glass lines obey the anisotropic Ginzburg-Landau theory and are well
fitted by a theory developed in the literature by considering the effect of
disorder.
| 0 | 1 | 0 | 0 | 0 | 0 |
Word problems in Elliott monoids | Algorithmic issues concerning Elliott local semigroups are seldom considered
in the literature, although these combinatorial structures completely classify
AF algebras. In general, the addition operation of an Elliott local semigroup
is {\it partial}, but for every AF algebra $\mathfrak B$ whose Murray-von
Neumann order of projections is a lattice, this operation is uniquely
extendible to the addition of an involutive monoid $E(\mathfrak B)$. Let
$\mathfrak M_1$ be the Farey AF algebra introduced by the present author in
1988 and rediscovered by F. Boca in 2008. The freeness properties of the
involutive monoid $E(\mathfrak M_1)$ yield a natural word problem for every AF
algebra $\mathfrak B$ with singly generated $E(\mathfrak B)$, because
$\mathfrak B$ is automatically a quotient of $\mathfrak M_1$. Given two
formulas $\phi$ and $\psi$ in the language of involutive monoids, the problem
asks to decide whether $\phi$ and $\psi$ code the same equivalence of
projections of $\mathfrak B$. This mimics the classical definition of the word
problem of a group presented by generators and relations. We show that the word
problem of $\mathfrak M_1$ is solvable in polynomial time, and so is the word
problem of the Behnke-Leptin algebras $\mathcal A_{n,k}$, and of the
Effros-Shen algebras $\mathfrak F_{\theta}$, for $\theta\in [0,1]\setminus
\mathbb Q$ a real algebraic number, or $\theta = 1/e$. We construct a quotient
of $\mathfrak M_1$ having a Gödel incomplete word problem, and show that no
primitive quotient of $\mathfrak M_1$ is Gödel incomplete.
| 1 | 0 | 1 | 0 | 0 | 0 |
Improved lower bounds for the Mahler measure of the Fekete polynomials | We show that there is an absolute constant $c > 1/2$ such that the Mahler
measure of the Fekete polynomials $f_p$ of the form $$f_p(z) :=
\sum_{k=1}^{p-1}{\left( \frac kp \right)z^k}\,,$$ (where the coefficients are
the usual Legendre symbols) is at least $c\sqrt{p}$ for all sufficiently large
primes $p$. This improves the lower bound $\left(\frac 12 -
\varepsilon\right)\sqrt{p}$ known before for the Mahler measure of the Fekete
polynomials $f_p$ for all sufficiently large primes $p \geq c_{\varepsilon}$.
Our approach is based on the study of the zeros of the Fekete polynomials on
the unit circle.
| 0 | 0 | 1 | 0 | 0 | 0 |
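The objects in this abstract are easy to evaluate numerically. Below is a small sketch of our own (function names assumed) that builds $f_p$ from Legendre symbols and approximates its Mahler measure $M(f_p)=\exp\int_0^1\log|f_p(e^{2\pi i t})|\,dt$ by a midpoint Riemann sum, which avoids the zero of $f_p$ at $z=1$:

```python
import numpy as np

def legendre(k, p):
    # Legendre symbol (k/p) for an odd prime p, via Euler's criterion.
    r = pow(k, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def mahler_fekete(p, n=4096):
    # Midpoint-rule approximation of the Mahler measure of the Fekete
    # polynomial f_p(z) = sum_{k=1}^{p-1} (k/p) z^k on the unit circle.
    c = np.array([legendre(k, p) for k in range(1, p)], dtype=float)
    t = (np.arange(n) + 0.5) / n
    z = np.exp(2j * np.pi * t)
    f = z * np.polyval(c[::-1], z)   # sum_{k=1}^{p-1} (k/p) z^{k}
    return float(np.exp(np.mean(np.log(np.abs(f)))))
```

For instance, $f_5 = z - z^2 - z^3 + z^4 = z(1-z)^2(1+z)$ has Mahler measure exactly 1, below $\tfrac12\sqrt5$, which illustrates that the $c\sqrt p$ lower bound is asymptotic in $p$.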
Supervised Hashing based on Energy Minimization | Recently, supervised hashing methods have attracted much attention since they
can optimize retrieval speed and storage cost while preserving semantic
information. Because learning hashing codes is NP-hard, many methods resort to
some form of relaxation technique. But the performance of these methods can
easily deteriorate due to the relaxation. Luckily, many supervised hashing
formulations can be viewed as energy functions, hence solving hashing codes is
equivalent to learning marginals in the corresponding conditional random field
(CRF). By minimizing the KL divergence between a fully factorized distribution
and the Gibbs distribution of this CRF, a set of consistency equations can be
obtained, but updating them in parallel may not yield a local optimum since the
variational lower bound is not guaranteed to increase. In this paper, we use a
linear approximation of the sigmoid function to convert these consistency
equations to linear systems, which have a closed-form solution. By applying
this novel technique to two classical hashing formulations KSH and SPLH, we
obtain two new methods called EM (energy minimizing based)-KSH and EM-SPLH.
Experimental results on three datasets show the superiority of our methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Sampled-Data Boundary Feedback Control of 1-D Hyperbolic PDEs with Non-Local Terms | The paper provides results for the application of boundary feedback control
with Zero-Order-Hold (ZOH) to 1-D linear, first-order, hyperbolic systems with
non-local terms on bounded domains. It is shown that the emulation design based
on the recently proposed continuous-time, boundary feedback, designed by means
of backstepping, guarantees closed-loop exponential stability, provided that
the sampling period is sufficiently small. It is also shown that, contrary to
the parabolic case, a smaller sampling period implies a faster convergence rate
with no upper bound for the achieved convergence rate. The obtained results
provide stability estimates for the sup-norm of the state and guarantee
robustness with respect to perturbations of the sampling schedule.
| 1 | 0 | 1 | 0 | 0 | 0 |
Scalable Entity Resolution Using Probabilistic Signatures on Parallel Databases | Accurate and efficient entity resolution is an open challenge of particular
relevance to intelligence organisations that collect large datasets from
disparate sources with differing levels of quality and standard. Starting from
a first-principles formulation of entity resolution, this paper presents a
novel Entity Resolution algorithm that introduces a data-driven blocking and
record-linkage technique based on the probabilistic identification of entity
signatures in data. The scalability and accuracy of the proposed algorithm are
evaluated using benchmark datasets and shown to achieve state-of-the-art
results. The proposed algorithm can be implemented simply on modern parallel
databases, which allows it to be deployed with relative ease in large
industrial applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
ac properties of short Josephson weak links | The admittance of two types of Josephson weak links is calculated, i.e., of a
one-dimensional superconducting wire with a local suppression of the order
parameter, and the second is a short S-c-S structure, where S denotes a
superconductor and c---a constriction. The systems of the first type are
analyzed on the basis of time-dependent Ginzburg-Landau equations. We show that
the impedance $Z(\Omega)$ has a maximum as a function of the frequency
$\Omega$, and the electric field $E_{\Omega}$ is determined by two
gauge-invariant quantities---the condensate momentum $Q_{\Omega}$ and the
potential $\mu$ related to charge imbalance. The structures of the second type
are studied on the basis of microscopic equations for quasiclassical Green's
functions in the Keldysh technique. For short S-c-S contacts (the Thouless
energy ${E_{\text{Th}} = D/L^{2} \gg \Delta}$) we present a formula for
admittance $Y$ valid at frequencies $\Omega$ and temperatures $T$ less than the
Thouless energy but arbitrary with respect to the energy gap $\Delta$. It is
shown that, at low temperatures, the absorption is absent [${\mathrm{Re}(Y) =
0}$] if the frequency does not exceed the energy gap in the center of the
constriction (${\Omega < \Delta \cos \varphi_{0}}$, where $2 \varphi_{0}$ is
the phase difference between the S reservoirs). The absorption gradually
increases with the difference ${(\Omega - \Delta \cos \varphi_{0})}$
if $2 \varphi_{0}$ is less than the phase difference $2 \varphi_{\text{c}}$
corresponding to the critical Josephson current. In the interval ${2
\varphi_{\text{c}} < 2 \varphi_{0} < \pi}$, the absorption has a maximum. This
interval of the phase difference is achievable in phase-biased Josephson
junctions. Close to $T_{\text{c}}$ the admittance has a maximum at low $\Omega$
which is described by an analytical formula.
| 0 | 1 | 0 | 0 | 0 | 0 |
ProtoDash: Fast Interpretable Prototype Selection | In this paper we propose an efficient algorithm ProtoDash for selecting
prototypical examples from complex datasets. Our work generalizes the learn to
criticize (L2C) work by Kim et al. (2016) to not only select prototypes for a
given sparsity level $m$ but also to associate non-negative (for
interpretability) weights with each of them indicative of the importance of
each prototype. This extension provides a single coherent framework under which
both prototypes and criticisms can be found. Furthermore, our framework works
for any symmetric positive definite kernel thus addressing one of the key open
questions laid out in Kim et al. (2016). Our additional requirement of learning
non-negative weights no longer maintains submodularity of the objective as in
the previous work, however, we show that the problem is weakly submodular and
derive approximation guarantees for our fast ProtoDash algorithm. We
demonstrate the efficacy of our method on diverse domains such as retail, digit
recognition (MNIST) and on 40 publicly available health questionnaires obtained
from the Center for Disease Control (CDC) website maintained by the US Dept. of
Health. We validate the results quantitatively as well as qualitatively based
on expert feedback and recently published scientific studies on public health,
thus showcasing the power of our method in providing actionability (for
retail), utility (for MNIST) and insight (on CDC datasets), which presumably
are the hallmarks of an effective interpretable method.
| 1 | 0 | 0 | 1 | 0 | 0 |
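The two ingredients named above, gradient-based greedy selection plus non-negative weights over a kernel, can be sketched as follows. This is our own simplification (the non-negative refit is a clipped linear solve rather than a proper NNLS step), not the paper's exact ProtoDash:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Symmetric RBF kernel matrix; any symmetric PSD kernel would do.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_prototypes(X, m, gamma=1.0):
    # Greedy prototype selection: repeatedly add the point with the largest
    # gradient of an MMD-style objective, then refit non-negative weights.
    K = rbf_kernel(X, gamma)
    mu = K.mean(axis=0)               # mean similarity of each point to the data
    S, w = [], np.zeros(0)
    for _ in range(m):
        grad = mu - (K[:, S] @ w if S else 0.0)
        grad[S] = -np.inf             # never re-select a chosen prototype
        S.append(int(np.argmax(grad)))
        # simplified non-negative refit: solve on the support, clip at zero
        w = np.maximum(np.linalg.solve(K[np.ix_(S, S)], mu[S]), 0.0)
    return S, w
```

On two well-separated clusters the gradient step drives the second prototype into the cluster not yet covered, which is the intuition behind the weak-submodularity guarantee.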
Regional Multi-Armed Bandits | We consider a variant of the classic multi-armed bandit problem where the
expected reward of each arm is a function of an unknown parameter. The arms are
divided into different groups, each of which has a common parameter. Therefore,
when the player selects an arm at each time slot, information of other arms in
the same group is also revealed. This regional bandit model naturally bridges
the non-informative bandit setting where the player can only learn the chosen
arm, and the global bandit model where sampling one arm reveals information of
all arms. We propose an efficient algorithm, UCB-g, that solves the regional
bandit problem by combining the Upper Confidence Bound (UCB) and greedy
principles. Both parameter-dependent and parameter-free regret upper bounds are
derived. We also establish a matching lower bound, which proves the
order-optimality of UCB-g. Moreover, we propose SW-UCB-g, which is an extension
of UCB-g for a non-stationary environment where the parameters slowly vary over
time.
| 0 | 0 | 0 | 1 | 0 | 0 |
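UCB-g combines group-level information sharing with UCB indices; as a minimal illustration of the UCB principle it builds on, here is standard UCB1 (a hedged sketch, not the paper's regional algorithm):

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    # Standard UCB1: play each arm once, then repeatedly pick the arm
    # maximizing empirical mean + sqrt(2 log t / n_i) exploration bonus.
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for a in range(n_arms):
        counts[a], means[a] = 1, pull(a)
    for t in range(n_arms, horizon):
        a = max(range(n_arms),
                key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
    return counts, means
```

In the regional model, pulling one arm would additionally update the estimates of every arm in the same group, which is the information advantage UCB-g exploits.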
Determining stellar parameters of asteroseismic targets: going beyond the use of scaling relations | Asteroseismic parameters allow us to measure the basic stellar properties of
field giants observed far across the Galaxy. Most of such determinations are,
up to now, based on simple scaling relations involving the large frequency
separation, \Delta\nu, and the frequency of maximum power, \nu$_{max}$. In this
work, we implement \Delta\nu\ and the period spacing, {\Delta}P, computed along
detailed grids of stellar evolutionary tracks, into stellar isochrones and
hence into a Bayesian method of parameter estimation. Tests with synthetic data
reveal that masses and ages can be determined with typical precision of 5 and
19 per cent, respectively, provided precise seismic parameters are available.
Adding independent information on the stellar luminosity, these values can
decrease down to 3 and 10 per cent respectively. The application of these
methods to NGC 6819 giants produces a mean age in agreement with those derived
from isochrone fitting, and no evidence of systematic differences between RGB
and RC stars. The age dispersion of NGC 6819 stars, however, is larger than
expected, with at least part of the spread ascribable to stars that underwent
mass-transfer events.
| 0 | 1 | 0 | 0 | 0 | 0 |
Polarised target for Drell-Yan experiment in COMPASS at CERN, part I | In the polarised Drell-Yan experiment at the COMPASS facility in CERN pion
beam with momentum of 190 GeV/c and intensity about $10^8$ pions/s interacted
with transversely polarised NH$_3$ target. Muon pairs produced in Drel-Yan
process were detected. The measurement was done in 2015 as the 1st ever
polarised Drell-Yan fixed target experiment. The hydrogen nuclei in the
solid-state NH$_3$ were polarised by dynamic nuclear polarisation in 2.5 T
field of large-acceptance superconducting magnet. Large helium dilution
cryostat was used to cool the target down below 100 mK. Polarisation of
hydrogen nuclei reached during the data taking was about 80 %. Two oppositely
polarised target cells, each 55 cm long and 4 cm in diameter were used.
An overview of the COMPASS facility and the polarised target, with emphasis on
the dilution cryostat and magnet, is given. Results of the polarisation
measurement in the Drell-Yan run, together with overviews of the target
material, cells and the dynamic nuclear polarisation system, are given in Part
II.
| 0 | 1 | 0 | 0 | 0 | 0 |
Spectral Mixture Kernels for Multi-Output Gaussian Processes | Early approaches to multiple-output Gaussian processes (MOGPs) relied on
linear combinations of independent, latent, single-output Gaussian processes
(GPs). This resulted in cross-covariance functions with limited parametric
interpretation, thus conflicting with the ability of single-output GPs to
understand lengthscales, frequencies and magnitudes, to name a few. On the
contrary, current approaches to MOGP are able to better interpret the
relationship between different channels by directly modelling the
cross-covariances as a spectral mixture kernel with a phase shift. We extend
this rationale and propose a parametric family of complex-valued cross-spectral
densities and then build on Cramér's Theorem (the multivariate version of
Bochner's Theorem) to provide a principled approach to design multivariate
covariance functions. The so-constructed kernels are able to model delays among
channels in addition to phase differences and are thus more expressive than
previous methods, while also providing full parametric interpretation of the
relationship across channels. The proposed method is first validated on
synthetic data and then compared to existing MOGP methods on two real-world
examples.
| 0 | 0 | 0 | 1 | 0 | 0 |
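A one-component sketch of a spectral-mixture-style cross-covariance with delay and phase parameters is shown below. This is our own simplified form in the spirit of the construction described above (parameter names `w`, `mu`, `v`, `delay`, `phase` are assumptions, not the paper's notation):

```python
import numpy as np

def sm_cross_cov(tau, w=1.0, mu=0.5, v=0.1, delay=0.0, phase=0.0):
    # One spectral-mixture component: a Gaussian envelope times a cosine,
    # with a delay and phase shift modeling the relation between channels.
    t = np.asarray(tau, dtype=float) + delay
    return w * np.exp(-2 * np.pi**2 * v * t**2) * np.cos(2 * np.pi * mu * t + phase)
```

With `delay=0` and `phase=0` this reduces to a standard single-output spectral mixture kernel, symmetric in `tau`; a nonzero `delay` shifts the peak correlation away from zero lag, which is exactly the extra expressiveness the abstract claims over phase-only models.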
Supergravity and its Legacy | A personal recollection of events that preceded the construction of
Supergravity and of some subsequent developments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Manton's five vortex equations from self-duality | We demonstrate that the five vortex equations recently introduced by Manton
arise as symmetry reductions of the anti-self-dual Yang--Mills equations in four
dimensions. In particular the Jackiw--Pi vortex and the Ambj\o rn--Olesen
vortex correspond to the gauge group $SU(1, 1)$, and respectively the Euclidean
or the $SU(2)$ symmetry groups acting with two-dimensional orbits. We show how
to obtain vortices with higher vortex numbers, by superposing vortex equations
of different types. Finally we use the kinetic energy of the Yang--Mills theory
in 4+1 dimensions to construct a metric on vortex moduli spaces. This metric is
not positive-definite in cases of non-compact gauge groups.
| 0 | 1 | 1 | 0 | 0 | 0 |
Deep Style Match for Complementary Recommendation | Humans develop a common sense of style compatibility between items based on
their attributes. We seek to automatically answer questions like "Does this
shirt go well with that pair of jeans?" In order to answer these kinds of
questions, we attempt to model human sense of style compatibility in this
paper. The basic assumption of our approach is that most of the important
attributes for a product in an online store are included in its title
description. Therefore it is feasible to learn style compatibility from these
descriptions. We design a Siamese Convolutional Neural Network architecture and
feed it with title pairs of items, which are either compatible or incompatible.
Those pairs will be mapped from the original space of symbolic words into some
embedded style space. Our approach takes only words as input with little
preprocessing, and there is no laborious and expensive feature engineering.
| 1 | 0 | 0 | 0 | 0 | 0 |
SpectralLeader: Online Spectral Learning for Single Topic Models | We study the problem of learning a latent variable model from a stream of
data. Latent variable models are popular in practice because they can explain
observed data in terms of unobserved concepts. These models have been
traditionally studied in the offline setting. In the online setting, on the
other hand, the online EM is arguably the most popular algorithm for learning
latent variable models. Although the online EM is computationally efficient, it
typically converges to a local optimum. In this work, we develop a new online
learning algorithm for latent variable models, which we call SpectralLeader.
SpectralLeader always converges to the global optimum, and we derive a
sublinear upper bound on its $n$-step regret in the bag-of-words model. In both
synthetic and real-world experiments, we show that SpectralLeader performs
similarly to or better than the online EM with tuned hyper-parameters.
| 1 | 0 | 0 | 1 | 0 | 0 |
Fractional Calculus and certain integrals of Generalized multiindex Bessel function | We aim to introduce the generalized multiindex Bessel function $J_{\left(
\beta _{j}\right) _{m},\kappa ,b}^{\left( \alpha _{j}\right)_{m},\gamma
,c}\left[ z\right] $ and to present some formulas of the Riemann-Liouville
fractional integration and differentiation operators. Further, we also derive
certain integral formulas involving the newly defined generalized multiindex
Bessel function $J_{\left( \beta _{j}\right) _{m},\kappa ,b}^{\left( \alpha
_{j}\right)_{m},\gamma ,c}\left[ z\right] $. We prove that such integrals are
expressed in terms of the Fox-Wright function $_{p}\Psi_{q}(z)$. The results
presented here are general in nature and easily reducible to new and known
results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tameness from two successive good frames | We show, assuming a mild set-theoretic hypothesis, that if an abstract
elementary class (AEC) has a superstable-like forking notion for models of
cardinality $\lambda$ and a superstable-like forking notion for models of
cardinality $\lambda^+$, then orbital types over models of cardinality
$\lambda^+$ are determined by their restrictions to submodels of cardinality
$\lambda$. By a superstable-like forking notion, we mean here a good frame, a
central concept of Shelah's book on AECs.
It is known that locality of orbital types together with the existence of a
superstable-like notion for models of cardinality $\lambda$ implies the
existence of a superstable-like notion for models of cardinality $\lambda^+$,
but here we prove the converse. An immediate consequence is that forking in
$\lambda^+$ can be described in terms of forking in $\lambda$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the $\Psi$-fractional integral and applications | Motivated by the ${\rm \Psi}$-Riemann-Liouville $({\rm \Psi-RL})$ fractional
derivative and by the ${\rm \Psi}$-Hilfer $({\rm \Psi-H})$ fractional
derivative, we introduce a new fractional operator, the so-called
$\Psi$-fractional integral. We present some important results by means of
theorems; in particular, we show that the $\Psi$-fractional integration
operator is bounded. In this sense, we discuss some examples, in particular
involving the Mittag-Leffler $({\rm M-L})$ function, which is of paramount
importance in the solution of the population growth problem. On the other
hand, we present a brief discussion of the uniqueness of the nonlinear
$\Psi$-fractional Volterra integral equation (${\rm VIE}$) using
$\beta$-distance functions.
Detection and Characterization of Illegal Marketing and Promotion of Prescription Drugs on Twitter | Illicit online pharmacies allow the purchase of prescription drugs online
without a prescription. Such pharmacies leverage social media platforms such as
Twitter as a promotion and marketing tool with the intent of reaching out to
a larger, potentially younger demographic of the population. Given the serious
negative health effects that arise from abusing such drugs, it is important to
identify the relevant content on social media and exterminate their presence as
quickly as possible. In response, we collected all the tweets that contained
the names of certain preselected controlled substances over a period of 5
months. We found that an unsupervised topic modeling based methodology is able
to identify tweets that promote and market controlled substances with high
precision. We also study the meta-data characteristics of such tweets and the
users who post them and find that they have several distinguishing
characteristics that set them apart. We were able to train supervised methods
and achieve high performance in detecting such content and the users who post
them.
| 1 | 0 | 0 | 0 | 0 | 0 |
First Hochschild cohomology group and stable equivalence classification of Morita type of some tame symmetric algebras | We use the dimension and the Lie algebra structure of the first Hochschild
cohomology group to distinguish some algebras of dihedral, semi-dihedral and
quaternion type up to stable equivalence of Morita type. In particular, we
complete the classification of algebras of dihedral type that was mostly
determined by Zhou and Zimmermann.
| 0 | 0 | 1 | 0 | 0 | 0 |
When is a Convolutional Filter Easy To Learn? | We analyze the convergence of the (stochastic) gradient descent algorithm for
learning a convolutional filter with Rectified Linear Unit (ReLU) activation
function. Our analysis does not rely on any specific form of the input
distribution and our proofs only use the definition of ReLU, in contrast with
previous works that are restricted to standard Gaussian input. We show that
(stochastic) gradient descent with random initialization can learn the
convolutional filter in polynomial time and the convergence rate depends on the
smoothness of the input distribution and the closeness of patches. To the best
of our knowledge, this is the first recovery guarantee of gradient-based
algorithms for a convolutional filter on non-Gaussian input distributions. Our
theory also justifies the two-stage learning rate strategy in deep neural
networks. While our focus is theoretical, we also present experiments that
illustrate our theoretical findings.
| 1 | 0 | 0 | 1 | 0 | 0 |
Robots as Powerful Allies for the Study of Embodied Cognition from the Bottom Up | A large body of compelling evidence has been accumulated demonstrating that
embodiment - the agent's physical setup, including its shape, materials,
sensors and actuators - is constitutive for any form of cognition and as a
consequence, models of cognition need to be embodied. In contrast to methods
from empirical sciences to study cognition, robots can be freely manipulated
and virtually all key variables of their embodiment and control programs can be
systematically varied. As such, they provide an extremely powerful tool of
investigation. We present a robotic bottom-up or developmental approach,
focusing on three stages: (a) low-level behaviors like walking and reflexes,
(b) learning regularities in sensorimotor spaces, and (c) human-like cognition.
We also show that robotics-based research is not only a productive path to
deepening our understanding of cognition, but that robots can strongly benefit
from human-like cognition in order to become more autonomous, robust,
resilient, and safe.
| 1 | 0 | 0 | 0 | 1 | 0 |
Chiral magnetic textures in Ir/Fe/Co/Pt multilayers: Evolution and topological Hall signature | Skyrmions are topologically protected, two-dimensional, localized hedgehogs
and whorls of spin. Originally invented as a concept in field theory for
nuclear interactions, skyrmions are central to a wide range of phenomena in
condensed matter. Their realization at room temperature (RT) in magnetic
multilayers has generated considerable interest, fueled by technological
prospects and the access granted to fundamental questions. The interaction of
skyrmions with charge carriers gives rise to exotic electrodynamics, such as
the topological Hall effect (THE), the Hall response to an emergent magnetic
field, a manifestation of the skyrmion Berry-phase. The proposal that THE can
be used to detect skyrmions needs to be tested quantitatively. For that it is
imperative to develop comprehensive understanding of skyrmions and other chiral
textures, and their electrical fingerprint. Here, using Hall transport and
magnetic imaging, we track the evolution of magnetic textures and their THE
signature in a technologically viable multilayer film as a function of
temperature ($T$) and out-of-plane applied magnetic field ($H$). We show that
topological Hall resistivity ($\rho_\mathrm{TH}$) scales with the density of
isolated skyrmions ($n_\mathrm{sk}$) over a wide range of $T$, confirming the
impact of the skyrmion Berry-phase on electronic transport. We find that at
higher $n_\mathrm{sk}$ skyrmions cluster into worms which carry considerable
topological charge, unlike topologically-trivial spin spirals. While we
establish a qualitative agreement between $\rho_\mathrm{TH}(H,T)$ and areal
density of topological charge $n_\mathrm{T}(H,T)$, our detailed quantitative
analysis shows a much larger $\rho_\mathrm{TH}$ than the prevailing theory
predicts for observed $n_\mathrm{T}$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Conductance distribution in the magnetic field | Using a modification of the Shapiro scaling approach, we derive the
distribution of conductance in the magnetic field applicable in the vicinity of
the Anderson transition. This distribution is described by the same equations
as in the absence of a field. Variation of the magnetic field does not lead to
any qualitative effects in the conductance distribution and only changes its
quantitative characteristics, moving a position of the system in the
three-parameter space. In contrast to the original Shapiro approach, the
evolution equation for quasi-1D systems is established from the generalized
DMPK equation, and not by a simple analogy with one-dimensional systems; as a
result, the whole approach becomes more rigorous and accurate.
| 0 | 1 | 0 | 0 | 0 | 0 |
Linear Convergence of An Iterative Phase Retrieval Algorithm with Data Reuse | Phase retrieval has been an attractive but difficult problem arising from
physical science, and there has been a gap between state-of-the-art theoretical
convergence analyses and the corresponding efficient retrieval methods.
Firstly, these analyses all assume that the sensing vectors and the iterative
updates are independent, which only fits the ideal model with infinite
measurements but not the reality, where data are limited and have to be reused.
Secondly, the empirical results of some efficient methods, such as the
randomized Kaczmarz method, show linear convergence, which is beyond existing
theoretical explanations considering its randomness and reuse of data. In this
work, we study for the first time, without the independence assumption, the
convergence behavior of the randomized Kaczmarz method for phase retrieval.
Specifically, by taking the expectation of the squared estimation error with
respect to the index of the measurement while fixing the sensing vector and
the error from the previous step, we discard the independence assumption, rigorously
derive the upper and lower bounds of the reduction of the mean squared error,
and prove the linear convergence. This work fills the gap between a fast
converging algorithm and its theoretical understanding. The proposed
methodology may contribute to the study of other iterative algorithms for phase
retrieval and other problems in the broad area of signal processing and machine
learning.
| 1 | 0 | 0 | 0 | 0 | 0 |
Representing smooth functions as compositions of near-identity functions with implications for deep network optimization | We show that any smooth bi-Lipschitz $h$ can be represented exactly as a
composition $h_m \circ ... \circ h_1$ of functions $h_1,...,h_m$ that are close
to the identity in the sense that each $\left(h_i-\mathrm{Id}\right)$ is
Lipschitz, and the Lipschitz constant decreases inversely with the number $m$
of functions composed. This implies that $h$ can be represented to any accuracy
by a deep residual network whose nonlinear layers compute functions with a
small Lipschitz constant. Next, we consider nonlinear regression with a
composition of near-identity nonlinear maps. We show that, regarding Fréchet
derivatives with respect to the $h_1,...,h_m$, any critical point of a
quadratic criterion in this near-identity region must be a global minimizer. In
contrast, if we consider derivatives with respect to parameters of a fixed-size
residual network with sigmoid activation functions, we show that there are
near-identity critical points that are suboptimal, even in the realizable case.
Informally, this means that functional gradient methods for residual networks
cannot get stuck at suboptimal critical points corresponding to near-identity
layers, whereas parametric gradient methods for sigmoidal residual networks
suffer from suboptimal critical points in the near-identity region.
| 0 | 0 | 0 | 1 | 0 | 0 |
Bosonizing three-dimensional quiver gauge theories | We start with the recently conjectured 3d bosonization dualities and gauge
global symmetries to generate an infinite sequence of new dualities. These
equate theories with non-Abelian product gauge groups and bifundamental matter.
We uncover examples of Bose/Bose and Fermi/Fermi dualities, as well as a
sequence of dualities between theories with scalar matter in two-index
representations. Our conjectures are consistent with level/rank duality in
massive phases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Deep Learning for Sentiment Analysis : A Survey | Deep learning has emerged as a powerful machine learning technique that
learns multiple layers of representations or features of the data and produces
state-of-the-art prediction results. Along with the success of deep learning in
many other application domains, deep learning has also been widely applied to
sentiment analysis in recent years. This paper first gives an overview of deep
learning and then provides a comprehensive survey of its current applications
in sentiment analysis.
| 0 | 0 | 0 | 1 | 0 | 0 |
Utility Preserving Secure Private Data Release | Differential privacy mechanisms that also make reconstruction of the data
impossible come at a cost - a decrease in utility. In this paper, we tackle
this problem by designing a private data release mechanism that makes
reconstruction of the original data impossible and also preserves utility for a
wide range of machine learning algorithms. We do so by combining the
Johnson-Lindenstrauss (JL) transform with noise generated from a Laplace
distribution. While the JL transform can itself provide privacy guarantees
\cite{blocki2012johnson} and make reconstruction impossible, we do not rely on
its differential privacy properties and only utilize its ability to make
reconstruction impossible. We present novel proofs to show that our mechanism
is differentially private under single element changes as well as single row
changes to any database. In order to show utility, we prove that our mechanism
maintains pairwise distances between points in expectation and also show that
its variance is proportional to the dimensionality of the subspace we
project the data into. Finally, we experimentally show the utility of our
mechanism by deploying it on the task of clustering.
| 1 | 0 | 0 | 0 | 0 | 0 |
Towards colloidal spintronics through Rashba spin-orbit interaction in lead sulphide nanosheets | Employing the spin degree of freedom of charge carriers offers the
possibility to extend the functionality of conventional electronic devices,
while colloidal chemistry can be used to synthesize inexpensive and tuneable
nanomaterials. In order to benefit from both concepts, Rashba spin-orbit
interaction has been investigated in colloidal lead sulphide nanosheets by
electrical measurements on the circular photo-galvanic effect. Lead sulphide
nanosheets possess rock salt crystal structure, which is centrosymmetric. The
symmetry can be broken by quantum confinement, asymmetric vertical interfaces
and a gate electric field leading to Rashba-type band splitting in momentum
space at the M points, which results in an unconventional selection mechanism
for the excitation of the carriers. The effect, which is supported by
simulations of the band structure using density functional theory, can be tuned
by the gate electric field and by the thickness of the sheets. Spin-related
electrical transport phenomena in colloidal materials open a promising pathway
towards future inexpensive spintronic devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the use and abuse of Price equation concepts in ecology | In biodiversity and ecosystem functioning (BEF) research, the Loreau-Hector
(LH) statistical scheme is widely used to partition the effect of biodiversity
on ecosystem properties into a "complementarity effect" and a "selection
effect". This selection effect was originally considered analogous to the
selection term in the Price equation from evolutionary biology. However, a key
paper published over thirteen years ago challenged this interpretation by
devising a new tripartite partitioning scheme that purportedly quantified the
role of selection in biodiversity experiments more accurately. This tripartite
method, as well as its recent spatiotemporal extension, were both developed as
an attempt to apply the Price equation in a BEF context. Here, we demonstrate
that the derivation of this tripartite method, as well as its spatiotemporal
extension, involve a set of incoherent and nonsensical mathematical arguments
driven largely by naïve visual analogies with the original Price equation,
that result in neither partitioning scheme quantifying any real property in the
natural world. Furthermore, we show that Loreau and Hector's original selection
effect always represented a true analog of the original Price selection term,
making the tripartite partitioning scheme a nonsensical solution to a
non-existent problem [...]
| 0 | 0 | 0 | 0 | 1 | 0 |
A Data-Driven MHD Model of the Global Solar Corona within Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) | We have developed a data-driven magnetohydrodynamic (MHD) model of the global
solar corona which uses characteristically-consistent boundary conditions (BCs)
at the inner boundary. Our global solar corona model can be driven by different
observational data including Solar Dynamics Observatory/Helioseismic and
Magnetic Imager (SDO/HMI) synoptic vector magnetograms together with the
horizontal velocity data in the photosphere obtained by the time-distance
helioseismology method, and the line-of-sight (LOS) magnetogram data obtained
by HMI, Solar and Heliospheric Observatory/Michelson Doppler Imager (SOHO/MDI),
National Solar Observatory/Global Oscillation Network Group (NSO/GONG) and
Wilcox Solar Observatory (WSO). We implemented our model in the Multi-Scale
Fluid-Kinetic Simulation Suite (MS-FLUKSS) - a suite of adaptive mesh
refinement (AMR) codes built upon the Chombo AMR framework developed at the
Lawrence Berkeley National Laboratory. We present an overview of our model,
characteristic BCs, and two results we obtained using our model: a benchmark
test of relaxation of a dipole field using characteristic BCs, and relaxation
of an initial PFSS field driven by HMI LOS magnetogram data, and horizontal
velocity data obtained by the time-distance helioseismology method using a set
of non-characteristic BCs.
| 0 | 1 | 0 | 0 | 0 | 0 |
AXNet: ApproXimate computing using an end-to-end trainable neural network | Neural network based approximate computing is a universal architecture
promising to gain tremendous energy-efficiency for many error resilient
applications. To guarantee the approximation quality, existing works deploy two
neural networks (NNs), e.g., an approximator and a predictor. The approximator
provides the approximate results, while the predictor predicts whether the
input data is safe to approximate with the given quality requirement. However,
it is non-trivial and time-consuming to make these two neural networks
coordinate---they have different optimization objectives---by training them
separately. This paper proposes a novel neural network structure---AXNet---to
fuse two NNs to a holistic end-to-end trainable NN. Leveraging the philosophy
of multi-task learning, AXNet can tremendously improve the invocation
(proportion of safe-to-approximate samples) and reduce the approximation error.
The training effort also decreases significantly. Experimental results show 50.7%
more invocation and substantial cuts of training time when compared to existing
neural network based approximate computing framework.
| 0 | 0 | 0 | 1 | 0 | 0 |
500+ Times Faster Than Deep Learning (A Case Study Exploring Faster Methods for Text Mining StackOverflow) | Deep learning methods are useful for high-dimensional data and are becoming
widely used in many areas of software engineering. Deep learners utilize
extensive computational power and can take a long time to train, making it
difficult to widely validate, repeat, and improve their results. Further,
they are not the best solution in all domains. For example, recent results show
that for finding related Stack Overflow posts, a tuned SVM performs similarly
to a deep learner, but is significantly faster to train. This paper extends
that recent result by clustering the dataset, then tuning learners within
each cluster. This approach is over 500 times faster than deep learning (and
over 900 times faster if we use all the cores on a standard laptop computer).
Significantly, this faster approach generates classifiers nearly as good
(within 2\% F1 Score) as the much slower deep learning method. Hence we
recommend this faster method since it is much easier to reproduce and utilizes
far fewer CPU resources. More generally, we recommend that before researchers
release research results, that they compare their supposedly sophisticated
methods against simpler alternatives (e.g., applying simpler learners to build
local models).
| 0 | 0 | 0 | 1 | 0 | 0 |