title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Torsion of elliptic curves and unlikely intersections | We study effective versions of unlikely intersections of images of torsion
points of elliptic curves on the projective line.
| 0 | 0 | 1 | 0 | 0 | 0 |
BoostJet: Towards Combining Statistical Aggregates with Neural Embeddings for Recommendations | Recommenders have become widely popular in recent years because of their
broad applicability in many e-commerce applications. These applications rely
on recommenders for generating advertisements for various offers or providing
content recommendations. However, the quality of the generated recommendations
depends on user features (like demography, temporality), offer features (like
popularity, price), and user-offer features (like implicit or explicit
feedback). Current state-of-the-art recommenders do not explore such diverse
features concurrently while generating the recommendations.
In this paper, we first introduce the notion of Trackers which enables us to
capture the above-mentioned features and thus incorporate users' online
behaviour through statistical aggregates of different features (demography,
temporality, popularity, price). We also show how to capture offer-to-offer
relations, based on their consumption sequence, leveraging neural embeddings
for offers in our Offer2Vec algorithm. We then introduce BoostJet, a novel
recommender which integrates the Trackers along with the neural embeddings
using MatrixNet, an efficient distributed implementation of gradient boosted
decision tree, to improve the recommendation quality significantly. We provide
an in-depth evaluation of BoostJet on Yandex's dataset, collecting online
behaviour from tens of millions of online users, to demonstrate the
practicality of BoostJet in terms of recommendation quality as well as
scalability.
| 1 | 0 | 0 | 1 | 0 | 0 |
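The Tracker idea above can be illustrated with a minimal sketch: a running statistical aggregate over one feature dimension (per-offer impression and click counts, from which a click-through rate is derived). The class name, fields, and update rule are illustrative assumptions, not BoostJet's actual implementation.

```python
from collections import defaultdict

class Tracker:
    """Toy statistical aggregate over offers (hypothetical interface)."""

    def __init__(self):
        self.shows = defaultdict(int)   # impressions per offer
        self.clicks = defaultdict(int)  # clicks per offer

    def update(self, offer, clicked):
        """Record one impression and whether it was clicked."""
        self.shows[offer] += 1
        self.clicks[offer] += int(clicked)

    def ctr(self, offer):
        """Click-through rate aggregate, usable as a model feature."""
        return self.clicks[offer] / self.shows[offer] if self.shows[offer] else 0.0

t = Tracker()
for offer, clicked in [("A", True), ("A", False), ("B", True), ("A", True)]:
    t.update(offer, clicked)
print(round(t.ctr("A"), 3), round(t.ctr("B"), 3))  # 0.667 1.0
```

In the paper's setting such aggregates would be computed per feature family (demography, temporality, popularity, price) and fed, alongside the Offer2Vec embeddings, into the gradient-boosted trees.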
Fixed points of diffeomorphisms on nilmanifolds with a free nilpotent fundamental group | Let $M$ be a nilmanifold with a fundamental group which is free $2$-step
nilpotent on at least 4 generators. We will show that for any nonnegative
integer $n$ there exists a self-diffeomorphism $h_n$ of $M$ such that $h_n$ has
exactly $n$ fixed points and any self-map $f$ of $M$ which is homotopic to
$h_n$ has at least $n$ fixed points. We will also shed some light on the
situation for fewer generators and for higher nilpotency classes.
| 0 | 0 | 1 | 0 | 0 | 0 |
Offloading Execution from Edge to Cloud: a Dynamic Node-RED Based Approach | Fog computing enables use cases where data produced in end devices are
stored, processed, and acted on directly at the edges of the network, yet
computation can be offloaded to more powerful instances through the edge to
cloud continuum. Such an offloading mechanism is especially needed in the case of
modern multi-purpose IoT gateways, where both demand and operation conditions
can vary largely between deployments. To facilitate the development and
operations of gateways, we implement offloading directly as part of the IoT
rapid prototyping process embedded in the software stack, based on Node-RED. We
evaluate the implemented method using an image processing example, and compare
various offloading strategies based on resource consumption and other system
metrics, highlighting the differences in handling demand and service levels
reached.
| 1 | 0 | 0 | 0 | 0 | 0 |
A nested sampling code for targeted searches for continuous gravitational waves from pulsars | This document describes a code to perform parameter estimation and model
selection in targeted searches for continuous gravitational waves from known
pulsars using data from ground-based gravitational wave detectors. We describe
the general workings of the code and characterise it on simulated data
containing both noise and simulated signals. We also show how it performs
compared to a previous MCMC and grid-based approach to signal parameter
estimation. Details of how to run the code in a variety of cases are provided in
Appendix A.
| 0 | 1 | 0 | 0 | 0 | 0 |
Using Big Data to Enhance the Bosch Production Line Performance: A Kaggle Challenge | This paper describes our approach to the Bosch production line performance
challenge run by Kaggle.com. Maximizing the production yield is at the heart of
the manufacturing industry. At the Bosch assembly line, data is recorded for
products as they progress through each stage. Data science methods are applied
to this huge data repository, consisting of records of tests and measurements made
for each component along the assembly line to predict internal failures. We
found that it is possible to train a model that predicts which parts are most
likely to fail. Thus a smarter failure detection system can be built, and the
parts tagged as likely to fail can be salvaged to decrease operating costs and
increase the profit margins.
| 1 | 0 | 0 | 0 | 0 | 0 |
Golden Elliptical Orbits in Newtonian Gravitation | In spherical symmetry with radial coordinate $r$, classical Newtonian
gravitation supports circular orbits and, for $-1/r$ and $r^2$ potentials only,
closed elliptical orbits [1]. Various families of elliptical orbits can be
thought of as arising from the action of perturbations on corresponding
circular orbits. We show that one elliptical orbit in each family is singled
out because its focal length is equal to the radius of the corresponding
unperturbed circular orbit. The eccentricity of this special orbit is related
to the famous irrational number known as the golden ratio. So inanimate
Newtonian gravitation appears to exhibit (but not prefer) the golden ratio
which has been previously identified mostly in settings within the animate
world.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning Sparse Adversarial Dictionaries For Multi-Class Audio Classification | Audio events are quite often overlapping in nature, and more prone to noise
than visual signals. There has been increasing evidence for the superior
performance of representations learned using sparse dictionaries for
applications like audio denoising and speech enhancement. This paper
concentrates on modifying the traditional reconstructive dictionary learning
algorithms, by incorporating a discriminative term into the objective function
in order to learn class-specific adversarial dictionaries that are good at
representing samples of their own class while at the same time being poor at representing
samples belonging to any other class. We quantitatively demonstrate the
effectiveness of our learned dictionaries as a stand-alone solution for both
binary as well as multi-class audio classification problems.
| 1 | 0 | 0 | 0 | 0 | 0 |
Matter-wave solutions in the Bose-Einstein condensates with the harmonic and Gaussian potentials | We study exact solutions of the quasi-one-dimensional Gross-Pitaevskii (GP)
equation with the (space, time)-modulated potential and nonlinearity and the
time-dependent gain or loss term in Bose-Einstein condensates. In particular,
based on the similarity transformation, we report several families of exact
solutions of the GP equation in the combination of the harmonic and Gaussian
potentials, in which some physically relevant solutions are described. The
stability of the obtained matter-wave solutions is addressed numerically such
that some stable solutions are found. Moreover, we also analyze the parameter
regimes for the stable solutions. These results may open the possibility of
related experiments and potential applications.
| 0 | 1 | 1 | 0 | 0 | 0 |
Histogram Transform-based Speaker Identification | A novel text-independent speaker identification (SI) method is proposed. This
method uses the Mel-frequency Cepstral coefficients (MFCCs) and the dynamic
information among adjacent frames as feature sets to capture a speaker's
characteristics. In order to utilize dynamic information, we design super-MFCCs
features by cascading three neighboring MFCCs frames together. The probability
density function (PDF) of these super-MFCCs features is estimated by the
recently proposed histogram transform~(HT) method, which generates more
training data by random transforms to realize the histogram PDF estimation and
mitigates the discontinuity problem that commonly occurs in multivariate
histogram computation. Compared to conventional PDF estimation methods, such as
Gaussian mixture models, the HT model shows promising improvement in the SI
performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
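The super-MFCC construction described above (cascading three neighbouring MFCC frames) can be sketched as follows; the frame-matrix layout and function name are assumptions for illustration.

```python
import numpy as np

def super_mfcc(frames):
    """Cascade each MFCC frame with its two neighbours (toy sketch).

    frames: (T, D) array of T MFCC frames with D coefficients each.
    Returns a (T - 2, 3 * D) array of super-MFCC vectors, each the
    concatenation of frames t-1, t, and t+1.
    """
    return np.hstack([frames[:-2], frames[1:-1], frames[2:]])

frames = np.arange(12, dtype=float).reshape(4, 3)  # 4 frames, 3 coefficients
sup = super_mfcc(frames)
print(sup.shape)  # (2, 9)
```

The PDF of these higher-dimensional vectors is what the histogram transform method then estimates.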
Towards self-adaptable robots: from programming to training machines | We argue that hardware modularity plays a key role in the convergence of
Robotics and Artificial Intelligence (AI). We introduce a new approach for
building robots that leads to more adaptable and capable machines. We present
the concept of a self-adaptable robot that makes use of hardware modularity and
AI techniques to reduce the effort and time required to be built. We
demonstrate in simulation and with a real robot how, rather than programming,
training produces behaviors in the robot that generalize fast and produce
robust outputs in the presence of noise. In particular, we advocate for
mammals.
| 1 | 0 | 0 | 0 | 0 | 0 |
Incidence Results and Bounds of Trilinear and Quadrilinear Exponential Sums | We give a new bound on the number of collinear triples for two arbitrary
subsets of a finite field. This improves on existing results which rely on the
Cauchy inequality. We then use this to provide a new bound on trilinear and
quadrilinear exponential sums.
| 0 | 0 | 1 | 0 | 0 | 0 |
Comment on "Kinetic decoupling of WIMPs: Analytic expressions" | Visinelli and Gondolo (2015, hereafter VG15) derived analytic expressions for
the evolution of the dark matter temperature in a generic cosmological model.
They then calculated the dark matter kinetic decoupling temperature
$T_{\mathrm{kd}}$ and compared their results to the Gelmini and Gondolo (2008,
hereafter GG08) calculation of $T_{\mathrm{kd}}$ in an early matter-dominated
era (EMDE), which occurs when the Universe is dominated by either a decaying
oscillating scalar field or a semistable massive particle before Big Bang
nucleosynthesis. VG15 found that dark matter decouples at a lower temperature
in an EMDE than it would in a radiation-dominated era, while GG08 found that
dark matter decouples at a higher temperature in an EMDE than it would in a
radiation-dominated era. VG15 attributed this discrepancy to the presence of a
matching constant that ensures that the dark matter temperature is continuous
during the transition from the EMDE to the subsequent radiation-dominated era
and concluded that the GG08 result is incorrect. We show that the disparity is
due to the fact that VG15 compared $T_\mathrm{kd}$ in an EMDE to the decoupling
temperature in a radiation-dominated universe that would result in the same
dark matter temperature at late times. Since decoupling during an EMDE leaves
the dark matter colder than it would be if it decoupled during radiation
domination, this temperature is much higher than $T_\mathrm{kd}$ in a standard
thermal history, which is indeed lower than $T_{\mathrm{kd}}$ in an EMDE, as
stated by GG08.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dimension preserving resolutions of singularities of Poisson structures | Some Poisson structures do admit resolutions by symplectic manifolds of the
same dimension. We give examples and simple conditions under which such
resolutions cannot exist.
| 0 | 0 | 1 | 0 | 0 | 0 |
Comparison of hidden Markov chain models and hidden Markov random field models in estimation of computed tomography images | There is an interest to replace computed tomography (CT) images with magnetic
resonance (MR) images for a number of diagnostic and therapeutic workflows. In
this article, predicting CT images from a number of magnetic resonance imaging
(MRI) sequences using a regression approach is explored. Two principal areas of
application for estimated CT images are dose calculations in MRI-based
radiotherapy treatment planning and attenuation correction for positron
emission tomography (PET)/MRI. The main purpose of this work is to investigate
the performance of hidden Markov (chain) models (HMMs) in comparison to hidden
Markov random field (HMRF) models when predicting CT images of the head. Our study
shows that HMMs have clear advantages over HMRF models in this particular
application. The obtained results suggest that HMMs deserve further study for
investigating their potential in modelling applications where the most natural
theoretical choice would be the class of HMRF models.
| 0 | 0 | 0 | 1 | 0 | 0 |
Spin-charge split pairing in underdoped cuprate superconductors: support from low-$T$ specific heat | We calculate the specific heat of a weakly interacting dilute system of
bosons on a lattice and show that it is consistent with the measured electronic
specific heat in the superconducting state of underdoped cuprates with boson
concentration $\rho \sim x/2$, where $x$ is the hole (dopant) concentration. As
usual, the $T^3$ term is due to Goldstone phonons. The zero-point energy,
through its dependence on the condensate density $\rho_0(T)$, accounts for the
anomalous $T$-linear term. These results support the split-pairing mechanism,
in which spinons (pure spin) are paired at $T^*$ and holons (pure charge) form
real-space pairs at $T_p < T^*$, creating a gauge-coupled physical pair of
charge $+2e$ and concentration $x/2$ which Bose condenses below $T_c$,
accounting for the observed phases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Iterative Amortized Inference | Inference models are a key component in scaling variational inference to deep
latent variable models, most notably as encoder networks in variational
auto-encoders (VAEs). By replacing conventional optimization-based inference
with a learned model, inference is amortized over data examples and therefore
more computationally efficient. However, standard inference models are
restricted to direct mappings from data to approximate posterior estimates. The
failure of these models to reach fully optimized approximate posterior
estimates results in an amortization gap. We aim toward closing this gap by
proposing iterative inference models, which learn to perform inference
optimization through repeatedly encoding gradients. Our approach generalizes
standard inference models in VAEs and provides insight into several empirical
findings, including top-down inference techniques. We demonstrate the inference
optimization capabilities of iterative inference models and show that they
outperform standard inference models on several benchmark data sets of images
and text.
| 0 | 0 | 0 | 1 | 0 | 0 |
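A toy sketch of the iterative-inference idea under simplifying assumptions: for the conjugate model z ~ N(0, 1), x | z ~ N(z, 1), the approximate posterior mean is refined by repeatedly applying an update driven by the ELBO gradient. A plain gradient step stands in here for the paper's learned update network; the model and step size are illustrative.

```python
def iterative_inference(x, steps=50, lr=0.2):
    """Refine the approximate posterior mean mu for the toy model
    z ~ N(0, 1), x | z ~ N(z, 1), whose exact posterior mean is x / 2.
    """
    mu = 0.0                    # initial approximate posterior estimate
    for _ in range(steps):
        grad = x - 2.0 * mu     # d(ELBO)/d(mu) with the variance held fixed
        mu += lr * grad         # a learned network would replace this step
    return mu

mu = iterative_inference(x=3.0)
print(round(mu, 4))  # 1.5, the exact posterior mean x / 2
```

The amortization gap of a direct encoder corresponds to stopping this refinement short of the optimum; iterative inference models learn the update rule so that few steps suffice.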
Rethinking probabilistic prediction in the wake of the 2016 U.S. presidential election | To many statisticians and citizens, the outcome of the most recent U.S.
presidential election represents a failure of data-driven methods on the
grandest scale. This impression has led to much debate and discussion about how
the election predictions went awry -- Were the polls inaccurate? Were the
models wrong? Did we misinterpret the probabilities? -- and how they went right
-- Perhaps the analyses were correct even though the predictions were wrong,
that's just the nature of probabilistic forecasting. With this in mind, we
analyze the election outcome with respect to a core set of effectiveness
principles. Regardless of whether and how the election predictions were right
or wrong, we argue that they were ineffective in conveying the extent to which
the data was informative of the outcome and the level of uncertainty in making
these assessments. Among other things, our analysis sheds light on the
shortcomings of the classical interpretations of probability and its
communication to consumers in the form of predictions. We present here an
alternative approach, based on a notion of validity, which offers two immediate
insights for predictive inference. First, the predictions are more
conservative, arguably more realistic, and come with certain guarantees on the
probability of an erroneous prediction. Second, our approach easily and
naturally reflects the (possibly substantial) uncertainty about the model by
outputting plausibilities instead of probabilities. Had these simple steps been
taken by the popular prediction outlets, the election outcome may not have been
so shocking.
| 0 | 0 | 1 | 1 | 0 | 0 |
$L^1$ solutions to one-dimensional BSDEs with sublinear growth generators in $z$ | This paper aims at solving a one-dimensional backward stochastic differential
equation (BSDE for short) with only integrable parameters. We first establish
the existence of a minimal $L^1$ solution for the BSDE when the generator $g$
is continuous in $(y,z)$ and monotonic in $y$, and has a
general growth in $y$ and a sublinear growth in $z$. Particularly, the $g$ may
be not uniformly continuous in $z$. Then, we put forward and prove a comparison
theorem and a Levi type theorem on the minimal $L^1$ solutions. A Lebesgue type
theorem on $L^1$ solutions is also obtained. Furthermore, we investigate the
same problem in the case that $g$ may be discontinuous in $y$. Finally, we
prove a general comparison theorem on $L^1$ solutions when $g$ is weakly
monotonic in $y$ and uniformly continuous in $z$, and has a stronger
sublinear growth in $z$. As a byproduct, we also obtain a general existence and
uniqueness theorem on $L^1$ solutions. Our results extend several known results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Magnifying the early episodes of star formation: super star clusters at cosmological distances | We study the spectrophotometric properties of a highly magnified (\mu~40-70)
pair of stellar systems identified at z=3.2222 behind the Hubble Frontier Field
galaxy cluster MACS~J0416. Five multiple images (out of six) have been
spectroscopically confirmed by means of VLT/MUSE and VLT/X-Shooter
observations. Each image includes two faint (m_uv~30.6), young (<100 Myr),
low-mass (<10^7 Msun), low-metallicity (12+Log(O/H)~7.7, or 1/10 solar) and
compact (30 pc effective radius) stellar systems separated by ~300pc, after
correcting for lensing amplification. We measured several rest-frame
ultraviolet and optical narrow (\sigma_v <~ 25 km/s) high-ionization lines.
These features may be the signature of very hot (T>50000 K) stars within dense
stellar clusters, whose dynamical mass is likely dominated by the stellar
component. Remarkably, the ultraviolet metal lines are not accompanied by Lya
emission (e.g., CIV / Lya > 15), despite the fact that the Lya line flux is
expected to be 150 times brighter (inferred from the Hbeta flux). A
spatially-offset, strongly-magnified (\mu>50) Lya emission with a spatial
extent <~7.6 kpc^2 is instead identified 2 kpc away from the system. The origin
of such a faint emission can be the result of fluorescent Lya induced by a
transverse leakage of ionizing radiation emerging from the stellar systems
and/or can be associated with an underlying, barely detected object (with m_uv
> 34 de-lensed). This is the first confirmed metal-line emitter at such
low-luminosity and redshift without Lya emission, suggesting that, at least in
some cases, a non-uniform covering factor of the neutral gas might hamper the
Lya detection.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ergodicity of a system of interacting random walks with asymmetric interaction | We study N interacting random walks on the positive integers. Each particle
has drift {\delta} towards infinity, a reflection at the origin, and a drift
towards particles with lower positions. This inhomogeneous mean field system is
shown to be ergodic only when the interaction is strong enough. We focus on
this latter regime, and point out the effect of piles of particles, a
phenomenon absent in models of interacting diffusions in continuous space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extreme value statistics for censored data with heavy tails under competing risks | This paper addresses the problem of estimating, in the presence of random
censoring as well as competing risks, the extreme value index of the
(sub)-distribution function associated to one particular cause, in the
heavy-tail case. Asymptotic normality of the proposed estimator (which has the
form of an Aalen-Johansen integral, and is the first estimator proposed in this
context) is established. A small simulation study illustrates its performance on
finite samples. Estimation of extreme quantiles of the cumulative incidence
function is also addressed.
| 0 | 0 | 1 | 1 | 0 | 0 |
Response to "Counterexample to global convergence of DSOS and SDSOS hierarchies" | In a recent note [8], the author provides a counterexample to the global
convergence of what his work refers to as "the DSOS and SDSOS hierarchies" for
polynomial optimization problems (POPs) and purports that this refutes claims
in our extended abstract [4] and slides in [3]. The goal of this paper is to
clarify that neither [4], nor [3], and certainly not our full paper [5], ever
defined DSOS or SDSOS hierarchies as it is done in [8]. It goes without saying
that no claims about convergence properties of the hierarchies in [8] were ever
made as a consequence. What was stated in [4,3] was completely different: we
stated that there exist hierarchies based on DSOS and SDSOS optimization that
converge. This is indeed true as we discuss in this response. We also emphasize
that we were well aware that some (S)DSOS hierarchies do not converge even if
their natural SOS counterparts do. This is readily implied by an example in our
prior work [5], which makes the counterexample in [8] superfluous. Finally, we
provide concrete counterarguments to claims made in [8] that aim to challenge
the scalability improvements obtained by DSOS and SDSOS optimization as
compared to sum of squares (SOS) optimization.
[3] A. A. Ahmadi and A. Majumdar. DSOS and SDSOS: More tractable alternatives
to SOS. Slides at the meeting on Geometry and Algebra of Linear Matrix
Inequalities, CIRM, Marseille, 2013. [4] A. A. Ahmadi and A. Majumdar. DSOS and
SDSOS optimization: LP and SOCP-based alternatives to sum of squares
optimization. In proceedings of the 48th annual IEEE Conference on Information
Sciences and Systems, 2014. [5] A. A. Ahmadi and A. Majumdar. DSOS and SDSOS
optimization: more tractable alternatives to sum of squares and semidefinite
optimization. arXiv:1706.02586, 2017. [8] C. Josz. Counterexample to global
convergence of DSOS and SDSOS hierarchies. arXiv:1707.02964, 2017.
| 1 | 0 | 0 | 1 | 0 | 0 |
Thermal memristor and neuromorphic networks for manipulating heat flow | A memristor is one of the four fundamental two-terminal solid elements in
electronics. Together with the resistor, the capacitor and the inductor,
this passive element relates electric charge to current in solid-state
elements. Here we report the existence of a thermal analog of this element
made with metal-insulator transition materials. We demonstrate that these
memristive systems can be used to create thermal neurons, thus opening the way to
neuromorphic networks for smart thermal management and information treatment.
| 0 | 1 | 0 | 0 | 0 | 0 |
Asymptotic Distribution and Simultaneous Confidence Bands for Ratios of Quantile Functions | The ratio of medians or of other suitable quantiles of two distributions is widely
used in medical research to compare treatment and control groups or in
economics to compare various economic variables when repeated cross-sectional
data are available. Inspired by the so-called growth incidence curves
introduced in poverty research, we argue that the ratio of quantile functions
is a more appropriate and informative tool to compare two distributions. We
present an estimator for the ratio of quantile functions and develop
corresponding simultaneous confidence bands, which allow one to assess the
significance of certain features of the ratio of quantile functions. The derived
simultaneous confidence bands rely on the asymptotic distribution of this ratio
and do not require re-sampling techniques. The performance of the
simultaneous confidence bands is demonstrated in simulations. Analysis of the
expenditure data from Uganda in years 1999, 2002 and 2005 illustrates the
relevance of our approach.
| 0 | 0 | 0 | 1 | 0 | 0 |
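A minimal sketch of the estimand: the pointwise ratio of two empirical quantile functions, computed here on exponential samples whose true quantile ratio is constant in p. Confidence-band construction is omitted; the sample sizes and distributions are illustrative assumptions.

```python
import numpy as np

def quantile_ratio(x, y, probs):
    """Pointwise ratio of empirical quantile functions, Q_x(p) / Q_y(p)."""
    return np.quantile(x, probs) / np.quantile(y, probs)

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)  # treatment-like sample
y = rng.exponential(scale=1.0, size=10_000)  # control-like sample
probs = np.array([0.25, 0.5, 0.75])
print(quantile_ratio(x, y, probs))  # close to 2 at every p
```

A growth-incidence-curve style analysis would plot this ratio over a grid of p together with the simultaneous bands derived in the paper.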
A Topological proof that $O_2$ is $2$-MCFL | We give a new proof of Salvati's theorem that the group language $O_2$ is
$2$-multiple context free. Unlike Salvati's proof, our arguments do not use any
idea specific to two dimensions. This raises the possibility that the argument
might generalize to $O_n$.
| 1 | 0 | 1 | 0 | 0 | 0 |
Projected Variational Integrators for Degenerate Lagrangian Systems | We propose and compare several projection methods applied to variational
integrators for degenerate Lagrangian systems, whose Lagrangian is of the form
$L = \vartheta(q) \cdot \dot{q} - H(q)$ and thus linear in velocities. While
previous methods for such systems only work reliably in the case of $\vartheta$
being a linear function of $q$, our methods are long-time stable also for
systems where $\vartheta$ is a nonlinear function of $q$. We analyse the
properties of the resulting algorithms, in particular with respect to the
conservation of energy, momentum maps and symplecticity. In numerical
experiments, we verify the favourable properties of the projected integrators
and demonstrate their excellent long-time fidelity. In particular, we consider
a two-dimensional Lotka-Volterra system, planar point vortices with
position-dependent circulation and guiding centre dynamics.
| 0 | 1 | 0 | 0 | 0 | 0 |
Boosted Generative Models | We propose a novel approach for using unsupervised boosting to create an
ensemble of generative models, where models are trained in sequence to correct
earlier mistakes. Our meta-algorithmic framework can leverage any existing base
learner that permits likelihood evaluation, including recent deep expressive
models. Further, our approach allows the ensemble to include discriminative
models trained to distinguish real data from model-generated data. We show
theoretical conditions under which incorporating a new model in the ensemble
will improve the fit and empirically demonstrate the effectiveness of our
black-box boosting algorithms on density estimation, classification, and sample
generation on benchmark datasets for a wide range of generative models.
| 1 | 0 | 0 | 1 | 0 | 0 |
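A toy stand-in for unsupervised boosting of generative models: each round fits a Gaussian base learner to the half of the data the current ensemble explains worst, then mixes it into the ensemble. The fitting rule and mixture weighting are simplifying assumptions, not the paper's algorithm.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def boosted_density(data, rounds=2):
    """Stage-wise additive ensemble of Gaussian base learners (toy sketch)."""
    ensemble = []                                   # list of (weight, mu, sigma)
    density = lambda v: sum(w * gauss_pdf(v, m, s) for w, m, s in ensemble)
    for _ in range(rounds):
        # Fit the next base learner to the points with lowest ensemble density.
        hard = sorted(data, key=density)[: max(1, len(data) // 2)]
        mu = sum(hard) / len(hard)
        sigma = max((sum((v - mu) ** 2 for v in hard) / len(hard)) ** 0.5, 0.1)
        ensemble.append((1.0, mu, sigma))
        total = sum(w for w, _, _ in ensemble)      # renormalise mixture weights
        ensemble = [(w / total, m, s) for w, m, s in ensemble]
    return ensemble

data = [-2.1, -2.0, -1.9, 1.9, 2.0, 2.1]            # two well-separated clusters
model = boosted_density(data)
print([round(m, 2) for _, m, _ in model])           # [-2.0, 2.0]
```

After two rounds the ensemble places one component on each cluster, mirroring how sequential training lets later models correct the earlier ones' mistakes.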
The Geometry of Strong Koszul Algebras | Koszul algebras with quadratic Groebner bases, called strong Koszul algebras,
are studied. We introduce affine algebraic varieties whose points are in
one-to-one correspondence with certain strong Koszul algebras and we
investigate the connection between the varieties and the algebras.
| 0 | 0 | 1 | 0 | 0 | 0 |
Overlapping community detection using superior seed set selection in social networks | Community discovery in social networks is a rapidly
expanding area that has drawn the interest of researchers over the past decade.
Many algorithms already exist; however, new seed-based algorithms
represent an emerging trend in this area. The basic idea behind these
strategies is to identify exceptional nodes in the given network, called seeds,
around which communities can be located. This paper proposes a blended strategy
for locating suitable superior seed set by applying various centrality measures
and using them to find overlapping communities. The algorithm has been
evaluated with respect to the goodness of the identified communities, measured
by intra-cluster density and inter-cluster density. Finally, the runtime of the
proposed algorithm has been compared with that of existing community detection
algorithms, showing remarkable improvement.
| 1 | 0 | 0 | 0 | 0 | 0 |
Debugging Transactions and Tracking their Provenance with Reenactment | Debugging transactions and understanding their execution are of immense
importance for developing OLAP applications, to trace causes of errors in
production systems, and to audit the operations of a database. However,
debugging transactions is hard for several reasons: 1) after the execution of a
transaction, its input is no longer available for debugging, 2) internal states
of a transaction are typically not accessible, and 3) the execution of a
transaction may be affected by concurrently running transactions. We present a
debugger for transactions that enables non-invasive, post-mortem debugging of
transactions with provenance tracking and supports what-if scenarios (changes
to transaction code or data). Using reenactment, a declarative replay technique
we have developed, a transaction is replayed over the state of the DB seen by
its original execution including all its interactions with concurrently
executed transactions from the history. Importantly, our approach uses the
temporal database and audit logging capabilities available in many DBMS and
does not require any modifications to the underlying database system or the
transactional workload.
| 1 | 0 | 0 | 0 | 0 | 0 |
How LinkedIn Economic Graph Bonds Information and Product: Applications in LinkedIn Salary | The LinkedIn Salary product was launched in late 2016 with the goal of
providing insights on compensation distribution to job seekers, so that they
can make more informed decisions when discovering and assessing career
opportunities. The compensation insights are provided based on data collected
from LinkedIn members and aggregated in a privacy-preserving manner. Given the
simultaneous desire for computing robust, reliable insights and for having
insights to satisfy as many job seekers as possible, a key challenge is to
reliably infer the insights at the company level when there is limited or no
data at all. We propose a two-step framework that utilizes a novel, semantic
representation of companies (Company2vec) and a Bayesian statistical model to
address this problem. Our approach makes use of the rich information present in
the LinkedIn Economic Graph, and in particular, uses the intuition that two
companies are likely to be similar if employees are very likely to transition
from one company to the other and vice versa. We compute embeddings for
companies by analyzing the LinkedIn members' company transition data using
machine learning algorithms, then compute pairwise similarities between
companies based on these embeddings, and finally incorporate company
similarities in the form of peer company groups as part of the proposed
Bayesian statistical model to predict insights at the company level. We perform
extensive validation using several different evaluation techniques, and show
that we can significantly increase the coverage of insights while, in fact,
even improving the quality of the obtained insights. For example, we were able
to compute salary insights for 35 times as many title-region-company
combinations in the U.S. as compared to previous work, corresponding to 4.9
times as many monthly active users. Finally, we highlight the lessons learned
from deployment of our system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast quantum logic gates with trapped-ion qubits | Quantum bits based on individual trapped atomic ions constitute a promising
technology for building a quantum computer, with all the elementary operations
having been achieved with the necessary precision for some error-correction
schemes. However, the essential two-qubit logic gate used for generating
quantum entanglement has hitherto always been performed in an adiabatic regime,
where the gate is slow compared with the characteristic motional frequencies of
ions in the trap, giving logic speeds of order 10kHz. There have been numerous
proposals for performing gates faster than this natural "speed limit" of the
trap. We implement the method of Steane et al., which uses tailored laser
pulses: these are shaped on 10 ns timescales to drive the ions' motion along
trajectories designed such that the gate operation is insensitive to optical
phase fluctuations. This permits fast (MHz-rate) quantum logic which is robust
to this important source of experimental error. We demonstrate entanglement
generation for gate times as short as 480 ns; this is less than a single
oscillation period of an ion in the trap, and 8 orders of magnitude shorter
than the memory coherence time measured in similar calcium-43 hyperfine qubits.
The method's power is most evident at intermediate timescales, where it yields
a gate error more than ten times lower than conventional techniques; for
example, we achieve a 1.6 μs gate with fidelity 99.8%. Still faster gates are
possible at the price of higher laser intensity. The method requires only a
single amplitude-shaped pulse and one pair of beams derived from a
continuous-wave laser, and offers the prospect of combining the unrivalled
coherence properties, operation fidelities and optical connectivity of
trapped-ion qubits with the sub-microsecond logic speeds usually associated
with solid state devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generative Adversarial Network based Autoencoder: Application to fault detection problem for closed loop dynamical systems | The fault detection problem for closed-loop uncertain dynamical systems is
investigated in this paper using different deep-learning-based methods.
Traditional classifier-based methods do not perform well because of the
inherent difficulty of detecting system-level faults in closed-loop dynamical
systems. Specifically, the controller acting in any closed-loop dynamical
system works to reduce the effect of system-level faults. A novel Generative
Adversarial based deep Autoencoder is designed to classify datasets under
normal and faulty operating conditions. The proposed network performs
significantly better than available classifier-based methods and, moreover,
does not require labeled fault-incorporated datasets for training. Finally,
the network's performance is tested on a high-complexity building energy
system dataset.
| 0 | 0 | 0 | 1 | 0 | 0 |
Non-Asymptotic Rates for Manifold, Tangent Space, and Curvature Estimation | Given an $n$-sample drawn on a submanifold $M \subset \mathbb{R}^D$, we
derive optimal rates for the estimation of tangent spaces $T_X M$, the second
fundamental form $II_X^M$, and the submanifold $M$ itself. After motivating
their study, we introduce a quantitative class of $\mathcal{C}^k$-submanifolds
in analogy with Hölder classes. The proposed estimators are based on local
polynomials and allow us to deal simultaneously with the three problems at stake.
Minimax lower bounds are derived using a conditional version of Assouad's lemma
when the base point $X$ is random.
| 0 | 0 | 1 | 1 | 0 | 0 |
Learning to Optimize Neural Nets | Learning to Optimize is a recently proposed framework for learning
optimization algorithms using reinforcement learning. In this paper, we explore
learning an optimization algorithm for training shallow neural nets. Such
high-dimensional stochastic optimization problems present interesting
challenges for existing reinforcement learning algorithms. We develop an
extension that is suited to learning optimization algorithms in this setting
and demonstrate that the learned optimization algorithm consistently
outperforms other known optimization algorithms even on unseen tasks and is
robust to changes in stochasticity of gradients and the neural net
architecture. More specifically, we show that an optimization algorithm trained
with the proposed method on the problem of training a neural net on MNIST
generalizes to the problems of training neural nets on the Toronto Faces
Dataset, CIFAR-10 and CIFAR-100.
| 1 | 0 | 1 | 1 | 0 | 0 |
A space-time finite element method for neural field equations with transmission delays | We present and analyze a new space-time finite element method for the
solution of neural field equations with transmission delays. The numerical
treatment of these systems is rare in the literature and currently has several
restrictions on the spatial domain and the functions involved, such as
connectivity and delay functions. The use of a space-time discretization, with
basis functions that are discontinuous in time and continuous in space
(dGcG-FEM), is a natural way to deal with space-dependent delays, which is
important for many neural field applications. In this article we provide a
detailed description of a space-time dGcG-FEM algorithm for neural delay
equations, including an a-priori error analysis. We demonstrate the application
of the dGcG-FEM algorithm on several neural field models, including problems
with an inhomogeneous kernel.
| 0 | 0 | 1 | 0 | 0 | 0 |
Notes on complexity of packing coloring | A packing $k$-coloring, for some integer $k$, of a graph $G=(V,E)$ is a mapping
$\varphi:V\to\{1,\ldots,k\}$ such that any two vertices $u, v$ with
$\varphi(u)=\varphi(v)$ are at distance at least $\varphi(u)+1$. This concept
is motivated by frequency assignment problems. The \emph{packing chromatic
number} of $G$ is the smallest $k$ such that there exists a packing
$k$-coloring of $G$.
Fiala and Golovach showed that determining the packing chromatic number for
chordal graphs is NP-complete for diameter exactly 5. While the problem is
easy to solve for diameter 2, we show NP-completeness for any diameter at
least 3. Our reduction also shows that the packing chromatic number is hard to
approximate within $n^{1/2-\varepsilon}$ for any $\varepsilon > 0$.
In addition, we design an FPT algorithm for interval graphs of bounded
diameter. This leads us to explore the problem of finding a partial coloring
that maximizes the number of colored vertices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Predicting Positive and Negative Links with Noisy Queries: Theory & Practice | Social networks involve both positive and negative relationships, which can
be captured in signed graphs. The {\em edge sign prediction problem} aims to
predict whether an interaction between a pair of nodes will be positive or
negative. We provide theoretical results for this problem that motivate natural
improvements to recent heuristics.
The edge sign prediction problem is related to correlation clustering; a
positive relationship means being in the same cluster. We consider the
following model for two clusters: we are allowed to query any pair of nodes
whether they belong to the same cluster or not, but the answer to the query is
corrupted with some probability $0<q<\frac{1}{2}$. Let $\delta=1-2q$ be the
bias. We provide an algorithm that recovers all signs correctly with high
probability in the presence of noise with $O(\frac{n\log
n}{\delta^2}+\frac{\log^2 n}{\delta^6})$ queries. This is the best known result
for this problem for all but tiny $\delta$, improving on the recent work of
Mazumdar and Saha \cite{mazumdar2017clustering}. We also provide an algorithm
that performs $O(\frac{n\log n}{\delta^4})$ queries, and uses breadth first
search as its main algorithmic primitive. While both the running time and the
number of queries for this algorithm are sub-optimal, our result relies on
novel theoretical techniques, and naturally suggests the use of edge-disjoint
paths as a feature for predicting signs in online social networks.
Correspondingly, we experiment with using edge-disjoint $s$-$t$ paths of short
length as a feature for predicting the sign of the edge $(s,t)$ in real-world
signed networks. Empirical findings suggest that the use of such paths improves
the classification accuracy, especially for pairs of nodes with no common
neighbors.
| 1 | 0 | 0 | 0 | 0 | 0 |
Simulating a Topological Transition in a Superconducting Phase Qubit by Fast Adiabatic Trajectories | The significance of topological phases has been widely recognized in the
community of condensed matter physics. Well-controllable quantum systems
provide an artificial platform to probe and engineer various topological
phases. The adiabatic trajectory of a quantum state describes the change of the
bulk Bloch eigenstates with the momentum; however, this adiabatic simulation
method is practically limited by quantum dissipation. Here we apply the
`shortcut to adiabaticity' (STA) protocol to realize fast adiabatic evolutions
in the system of a superconducting phase qubit. The resulting fast adiabatic
trajectories illustrate the change of the bulk Bloch eigenstates in the
Su-Schrieffer-Heeger (SSH) model. A sharp transition is experimentally
determined for the topological invariant of a winding number. Our experiment
helps identify the topological Chern number of a two-dimensional toy model,
suggesting the applicability of the fast adiabatic simulation method for
topological systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improved Absolute Frequency Measurement of the 171Yb Optical Lattice Clock at KRISS Relative to the SI Second | We measured the absolute frequency of the $^1S_0$ - $^3P_0$ transition of
$^{171}$Yb atoms confined in a one-dimensional optical lattice relative to the
SI second. The determined frequency was 518 295 836 590 863.38(57) Hz. The
uncertainty was reduced by a factor of 14 compared with our previously reported
value in 2013, owing to significant reductions in the systematic
uncertainties. This result is expected to contribute to the determination of a
new recommended value for the secondary representations of the second.
| 0 | 1 | 0 | 0 | 0 | 0 |
Theoretical properties of quasi-stationary Monte Carlo methods | This paper gives foundational results for the application of
quasi-stationarity to Monte Carlo inference problems. We prove natural
sufficient conditions for the quasi-limiting distribution of a killed diffusion
to coincide with a target density of interest. We also quantify the rate of
convergence to quasi-stationarity by relating the killed diffusion to an
appropriate Langevin diffusion. As an example, we consider in detail a killed
Ornstein--Uhlenbeck process with Gaussian quasi-stationary distribution.
| 0 | 0 | 1 | 1 | 0 | 0 |
A Hilbert Space of Stationary Ergodic Processes | Identifying meaningful signal buried in noise is a problem of interest
arising in diverse scenarios of data-driven modeling. We present here a
theoretical framework for exploiting intrinsic geometry in data that resists
noise corruption, and might be identifiable under severe obfuscation. Our
approach is based on uncovering a valid complete inner product on the space of
ergodic stationary finite valued processes, providing the latter with the
structure of a Hilbert space on the real field. This rigorous construction,
based on non-standard generalizations of the notions of sum and scalar
multiplication of finite dimensional probability vectors, allows us to
meaningfully talk about "angles" between data streams and data sources, and
make precise the notion of orthogonal stochastic processes. In particular, the
relative angles appear to be preserved, and identifiable, under severe noise,
and will be developed in future work as the underlying principle for robust
classification, clustering and unsupervised featurization algorithms.
| 0 | 0 | 0 | 1 | 0 | 1 |
Total variation regularized non-negative matrix factorization for smooth hyperspectral unmixing | Hyperspectral analysis has gained popularity over recent years as a way to
infer what materials are displayed on a picture whose pixels consist of a
mixture of spectral signatures. Computing both signatures and mixture
coefficients is known as unsupervised unmixing, a set of techniques usually
based on non-negative matrix factorization. Unmixing is a difficult non-convex
problem, and algorithms may converge to one out of many local minima, which may
be far removed from the true global minimum. Computing this true minimum is
NP-hard and seems therefore out of reach. Aiming for interesting local minima,
we investigate the addition of total variation regularization terms. Advantages
of these regularizers are two-fold. Their computation is typically rather
light, and they are deemed to preserve sharp edges in pictures. This paper
describes an algorithm for regularized hyperspectral unmixing based on the
Alternating Direction Method of Multipliers.
| 0 | 0 | 1 | 0 | 0 | 0 |
Efficient Antihydrogen Detection in Antimatter Physics by Deep Learning | Antihydrogen is at the forefront of antimatter research at the CERN
Antiproton Decelerator. Experiments aiming to test the fundamental CPT symmetry
and antigravity effects require the efficient detection of antihydrogen
annihilation events, which is performed using highly granular tracking
detectors installed around an antimatter trap. Improving the efficiency of the
antihydrogen annihilation detection plays a central role in the final
sensitivity of the experiments. We propose deep learning as a novel technique
to analyze antihydrogen annihilation data, and compare its performance with a
traditional track and vertex reconstruction method. We report that the deep
learning approach yields significant improvement, tripling event coverage while
simultaneously improving performance by over 5% in terms of Area Under Curve
(AUC).
| 1 | 1 | 0 | 0 | 0 | 0 |
Spin controlled atom-ion inelastic collisions | The control of the ultracold collisions between neutral atoms is an extensive
and successful field of study. The tools developed allow for ultracold chemical
reactions to be managed using magnetic fields, light fields and spin-state
manipulation of the colliding particles, among other methods. The control of
chemical reactions in ultracold atom-ion collisions is a young and growing
field of research. Recently, the collision energy and the ion electronic state
were used to control atom-ion interactions. Here, we demonstrate
spin-controlled atom-ion inelastic processes. In our experiment, both
spin-exchange and charge-exchange reactions are controlled in an ultracold
Rb-Sr$^+$ mixture by the atomic spin state. We prepare a cloud of atoms in a
single hyperfine spin-state. Spin-exchange collisions between the atoms and
the ion subsequently polarize the ion spin. Electron transfer is only allowed for
(RbSr)$^+$ colliding in the singlet manifold. Initializing the atoms in various
spin states affects the overlap of the collision wavefunction with the singlet
molecular manifold and therefore also the reaction rate. We experimentally show
that by preparing the atoms in different spin states one can vary the
charge-exchange rate in agreement with theoretical predictions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonparametric Cusum Charts for Angular Data with Applications in Health Science and Astrophysics | This paper develops nonparametric rotation-invariant CUSUMs suited to the
detection of changes in the mean direction as well as changes in the
concentration parameter of angular data. The properties of the CUSUMs are
illustrated by theoretical calculations, Monte Carlo simulation and application
to sequentially observed angular data from health science and astrophysics.
| 0 | 0 | 0 | 1 | 0 | 0 |
Origin of Weak Turbulence in the Outer Regions of Protoplanetary Disks | The mechanism behind angular momentum transport in protoplanetary disks, and
whether this transport is turbulent in nature, is a fundamental issue in planet
formation studies. Recent ALMA observations have suggested that turbulent
velocities in the outer regions of these disks are less than ~5-10% of the
sound speed, contradicting theoretical predictions of turbulence driven by the
magnetorotational instability (MRI). These observations have generally been
interpreted to be consistent with a large-scale laminar magnetic wind driving
accretion. Here, we carry out local, shearing box simulations with varying
ionization levels and background magnetic field strengths in order to determine
which parameters produce results consistent with observations. We find that
even when the background magnetic field launches a strong, largely laminar wind,
significant turbulence persists and is driven by localized regions of vertical
magnetic field (the result of zonal flows) that are unstable to the MRI. The
only conditions for which we find turbulent velocities below the observational
limits are weak background magnetic fields and ionization levels well below
that usually assumed in theoretical studies. We interpret these findings within
the context of a preliminary model in which a large-scale magnetic field,
confined to the inner disk, hinders ionizing sources from reaching large radial
distances, e.g., through a sufficiently dense wind. Thus, in addition to such a
wind, this model predicts that for disks with weakly turbulent outer regions,
the outer disk will have significantly reduced ionization levels compared to
standard models and will harbor only a weak vertical magnetic field.
| 0 | 1 | 0 | 0 | 0 | 0 |
Real-time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments | In the context of robotic underwater operations, the visual degradations
induced by the medium properties make it difficult to rely exclusively on
cameras for localization. Hence, most localization methods are based on
expensive navigational sensors associated with acoustic positioning. On the
other hand, visual odometry and visual SLAM have been exhaustively studied for
aerial or terrestrial applications, but state-of-the-art algorithms fail
underwater. In this paper we tackle the problem of using a simple low-cost
camera for underwater localization and propose a new monocular visual odometry
method dedicated to the underwater environment. We evaluate different tracking
methods and show that optical flow based tracking is more suited to underwater
images than classical approaches based on descriptors. We also propose a
keyframe-based visual odometry approach highly relying on nonlinear
optimization. The proposed algorithm has been assessed on both simulated and
real underwater datasets and outperforms state-of-the-art visual SLAM methods
under many of the most challenging conditions. The main application of this
work is the localization of Remotely Operated Vehicles (ROVs) used for
underwater archaeological missions, but the developed system can be used in any
other application as long as visual information is available.
| 1 | 0 | 0 | 0 | 0 | 0 |
Casper the Friendly Finality Gadget | We introduce Casper, a proof-of-stake-based finality system that overlays an
existing proof of work blockchain. Casper is a partial consensus mechanism
combining proof of stake algorithm research and Byzantine fault tolerant
consensus theory. We introduce our system, prove some desirable features, and
show defenses against long range revisions and catastrophic crashes. The Casper
overlay provides almost any proof of work chain with additional protections
against block reversions.
| 1 | 0 | 0 | 0 | 0 | 0 |
Rating Protocol Design for Extortion and Cooperation in the Crowdsourcing Contest Dilemma | Crowdsourcing has emerged as a paradigm for leveraging human intelligence and
activity to solve a wide range of tasks. However, strategic workers are
tempted by self-interest to free-ride and attack in a crowdsourcing contest
dilemma game. Hence, incentive mechanisms are of great importance to
overcome the inefficiency of the socially undesirable equilibrium. Existing
incentive mechanisms are not effective in providing incentives for cooperation
in crowdsourcing competitions due to the following features: heterogeneous
workers compete against each other in a crowdsourcing platform with imperfect
monitoring. In this paper, we take these features into consideration, and
develop a novel game-theoretic design of rating protocols, which integrates
binary rating labels with differential pricing to maximize the requester's
utility, by extorting selfish workers and enforcing cooperation among them. By
quantifying necessary and sufficient conditions for the sustainable social
norm, we formulate the problem of maximizing the revenue of the requester among
all sustainable rating protocols, provide design guidelines for optimal rating
protocols, and design a low-complexity algorithm to select optimal design
parameters which are related to differential punishments and pricing schemes.
Simulation results demonstrate how intrinsic parameters impact on design
parameters, as well as the performance gain of the proposed rating protocol.
| 1 | 0 | 0 | 0 | 0 | 0 |
Form factors of local operators in supersymmetric quantum integrable models | We apply the nested algebraic Bethe ansatz to the models with gl(2|1) and
gl(1|2) supersymmetry. We show that form factors of local operators in these
models can be expressed in terms of the universal form factors. Our derivation
is based on the use of the RTT-algebra only. It does not refer to any specific
representation of this algebra. We thus obtain determinant representations for
form factors of local operators in the cases where an explicit solution of the
quantum inverse scattering problem is not known.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the $E$-polynomial of parabolic $\mathrm{Sp}_{2n}$-character varieties | We find the $E$-polynomials of a family of parabolic
$\mathrm{Sp}_{2n}$-character varieties $\mathcal{M}^{\xi}_{n}$ of Riemann
surfaces by constructing a stratification, proving that each stratum has
polynomial count, applying a result of Katz regarding the counting functions,
and finally adding up the resulting $E$-polynomials of the strata. To count the
number of $\mathbb{F}_{q}$-points of the strata, we invoke a formula due to
Frobenius. Our calculations make use of a formula for the evaluation of
characters on semisimple elements coming from Deligne-Lusztig theory, applied
to the character theory of $\mathrm{Sp}\left(2n,\mathbb{F}_{q}\right)$, and
Möbius inversion on the poset of set-partitions. We compute the Euler
characteristic of the $\mathcal{M}^{\xi}_{n}$ with these polynomials, and show
they are connected.
| 0 | 0 | 1 | 0 | 0 | 0 |
Goldstone and Higgs Hydrodynamics in the BCS-BEC Crossover | We discuss the derivation of a low-energy effective field theory of phase
(Goldstone) and amplitude (Higgs) modes of the pairing field from a microscopic
theory of attractive fermions. The coupled equations for Goldstone and Higgs
fields are critically analyzed in the Bardeen-Cooper-Schrieffer (BCS) to
Bose-Einstein condensate (BEC) crossover both in three spatial dimensions and
in two spatial dimensions. The crucial role of pair fluctuations is
investigated, and the beyond-mean-field Gaussian theory of the BCS-BEC
crossover is compared with available experimental data of the two-dimensional
ultracold Fermi superfluid.
| 0 | 1 | 0 | 0 | 0 | 0 |
A note on knot concordance and involutive knot Floer homology | We prove that if two knots are concordant, their involutive knot Floer
complexes satisfy a certain type of stable equivalence.
| 0 | 0 | 1 | 0 | 0 | 0 |
$M$-QAM Precoder Design for MIMO Directional Modulation Transceivers | Spectrally efficient multi-antenna wireless communication systems are a key
challenge as service demands continue to increase. At the same time, powering
up radio access networks faces environmental and regulatory limitations. In
order to achieve more power efficiency, we design a directional modulation
precoder by considering an $M$-QAM constellation, particularly with
$M=4,8,16,32$. First, extended detection regions are defined for desired
constellations using analytical geometry. Then, constellation points are placed
in the optimal positions of these regions while the minimum Euclidean distance
to adjacent constellation points and detection region boundaries is kept as in
the conventional $M$-QAM modulation. To gain further power efficiency while
keeping a symbol error rate similar to that of the fixed design at high SNR,
relaxed detection regions are modeled for inner points of the $M=16,32$
constellations. The modeled
extended and relaxed detection regions as well as the modulation
characteristics are utilized to formulate symbol-level precoder design problems
for directional modulation to minimize the transmission power while preserving
the minimum required SNR at the destination. In addition, the extended and
relaxed detection regions are used for precoder design to minimize the output
of each power amplifier. We transform the design problems into convex ones and
devise an interior point path-following iterative algorithm to solve the
mentioned problems and provide details on finding the initial values of the
parameters and the starting point. Results show that compared to the benchmark
schemes, the proposed method performs better in terms of power and peak power
reduction as well as symbol error rate reduction for a wide range of SNRs.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces | We study kernel least-squares estimation under a norm constraint. This form
of regularisation is known as Ivanov regularisation and it provides better
control of the norm of the estimator than the well-established Tikhonov
regularisation. This choice of regularisation allows us to dispose of the
standard assumption that the reproducing kernel Hilbert space (RKHS) has a
Mercer kernel, which is restrictive as it usually requires compactness of the
covariate set. Instead, we assume only that the RKHS is separable with a
bounded and measurable kernel. We provide rates of convergence for the expected
squared $L^2$ error of our estimator under the weak assumption that the
variance of the response variables is bounded and the unknown regression
function lies in an interpolation space between $L^2$ and the RKHS. We then
obtain faster rates of convergence when the regression function is bounded by
clipping the estimator. In fact, we attain the optimal rate of convergence.
Furthermore, we provide a high-probability bound under the stronger assumption
that the response variables have subgaussian errors and that the regression
function lies in an interpolation space between $L^\infty$ and the RKHS.
Finally, we derive adaptive results for the settings in which the regression
function is bounded.
| 0 | 0 | 1 | 1 | 0 | 0 |
Using Ice and Dust Lines to Constrain the Surface Densities of Protoplanetary Disks | We present a novel method for determining the surface density of
protoplanetary disks through consideration of disk 'dust lines' which indicate
the observed disk radial scale at different observational wavelengths. This
method relies on the assumption that the processes of particle growth and drift
control the radial scale of the disk at late stages of disk evolution such that
the lifetime of the disk is equal to both the drift timescale and growth
timescale of the maximum particle size at a given dust line. We provide an
initial proof of concept of our model through an application to the disk TW Hya
and are able to estimate the disk dust-to-gas ratio, CO abundance, and
accretion rate in addition to the total disk surface density. We find that our
derived surface density profile and dust-to-gas ratio are consistent with the
lower limits found through measurements of HD gas. The CO ice line also depends
on surface density through grain adsorption rates and drift, and we find that
our theoretical CO ice line estimates have clear observational analogues. We
further apply our model to a large parameter space of theoretical disks and
find three observational diagnostics that may be used to test its validity.
First we predict that the dust lines of disks other than TW Hya will be
consistent with the normalized CO surface density profile shape for those
disks. Second, surface density profiles that we derive from disk ice lines
should match those derived from disk dust lines. Finally, we predict that disk
dust and ice lines will scale oppositely, as a function of surface density,
across a large sample of disks.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quaternionic Projective Bundle Theorem and Gysin Triangle in MW-Motivic Cohomology | In this paper, we show that the motive $HP^n$ of the quaternionic
Grassmannian (as defined by I. Panin and C. Walter) splits in the category of
effective MW-motives (as defined by B. Calmès, F. Déglise and J. Fasel).
Moreover, we extend this result to an arbitrary symplectic bundle, obtaining
the so-called quaternionic projective bundle theorem. This enables us to define
Pontryagin classes of symplectic bundles in the Chow-Witt ring.
As an application, we prove that there is a Gysin triangle in MW-motivic
cohomology when the normal bundle is symplectic.
| 0 | 0 | 1 | 0 | 0 | 0 |
Green function for linearized Navier-Stokes around a boundary layer profile: near critical layers | This is a continuation and completion of the program (initiated in
\cite{GrN1,GrN2}) to derive pointwise estimates on the Green function and sharp
bounds on the semigroup of linearized Navier-Stokes around a generic stationary
boundary layer profile. This is done via a spectral analysis approach and a
careful study of the Orr-Sommerfeld equations, or equivalently the
Navier-Stokes resolvent operator $(\lambda - L)^{-1}$. The earlier work
(\cite{GrN1,GrN2}) treats the Orr-Sommerfeld equations away from critical
layers: this is the case when the phase velocity is away from the range of the
background profile or when $\lambda$ is away from the Euler continuous
spectrum. In this paper, we study the critical case: the Orr-Sommerfeld
equations near critical layers, providing pointwise estimates on the Green
function as well as carefully studying the Dunford contour integral near the
critical layers.
As an application, we obtain pointwise estimates on the Green function and
sharp bounds on the semigroup of the linearized Navier-Stokes problem near
monotonic boundary layers that are spectrally stable to the Euler equations,
complementing \cite{GrN1,GrN2} where unstable profiles are considered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Summary of a Literature Review in Scalability of QoS-aware Service Composition | This paper shows that authors have no consistent way to characterize the
scalability of their solutions, and so consider only a limited number of
scaling characteristics. This review aimed at establishing the evidence that
the route for designing and evaluating the scalability of dynamic QoS-aware
service composition mechanisms has been lacking systematic guidance, and has
been informed by a very limited set of criteria. To this end, we analyzed 47
papers published from 2004 to 2018.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interacting superradiance samples: modified intensities and timescales, and frequency shifts | We consider the interaction between distinct superradiance (SR) systems and
use the dressed state formalism to solve the case of two interacting two-atom
SR samples at resonance. We show that the ensuing entanglement modifies the
transition rates and intensities of radiation, as well as introduces a
potentially measurable frequency chirp in the SR cascade, the magnitude of
which is a function of the separation between the samples. For the dominant
SR cascade we find a significant reduction in the duration and an increase of
the intensity of the SR pulse relative to the case of a single two-atom SR
sample.
| 0 | 1 | 0 | 0 | 0 | 0 |
An optimal XP algorithm for Hamiltonian cycle on graphs of bounded clique-width | In this paper, we prove that, given a clique-width $k$-expression of an
$n$-vertex graph, \textsc{Hamiltonian Cycle} can be solved in time
$n^{\mathcal{O}(k)}$. This improves the naive algorithm that runs in time
$n^{\mathcal{O}(k^2)}$ by Espelage et al. (WG 2001), and it also matches the
lower bound by Fomin et al. (SIAM J. Computing 2014) that, unless the
Exponential Time Hypothesis fails, there is no algorithm running in time
$n^{o(k)}$.
We present a technique of representative sets using two-edge colored
multigraphs on $k$ vertices. The essential idea is that, for a two-edge colored
multigraph, the existence of an Eulerian trail that uses edges with different
colors alternately can be determined from two pieces of information: the number
of colored edges incident with each vertex, and the connectedness of the multigraph. With
this idea, we avoid the bottleneck of the naive algorithm, which stores all the
possible multigraphs on $k$ vertices with at most $n$ edges.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the functional window of the avian compass | The functional window is an experimentally observed property of the avian
compass that refers to its selectivity around the geomagnetic field strength.
We show that the radical-pair model, using biologically feasible hyperfine
parameters, can qualitatively explain the salient features of the avian compass
as observed from behavioral experiments: its functional window, as well as
disruption of the compass action by an RF field of specific frequencies.
Further, we show that adjustment of the hyperfine parameters can tune the
functional window, suggesting a possible mechanism for its observed
adaptability to field variation. While these lend strong support to the
radical-pair model, we find it impossible to explain quantitatively the
observed width of the functional window within this model, or even with simple
augmentations thereto. This suggests that a deeper generalization of this model
may be called for; we conjecture that environmental coupling may be playing a
subtle role here that has not been captured accurately. Lastly, we examine a
possible biological purpose to the functional window; assuming evolutionary
benefit from radical-pair magnetoreception, we conjecture that the functional
window is simply a corollary thereof and brings no additional advantage.
| 0 | 1 | 0 | 0 | 0 | 0 |
Automated text summarisation and evidence-based medicine: A survey of two domains | The practice of evidence-based medicine (EBM) urges medical practitioners to
utilise the latest research evidence when making clinical decisions. Because of
the massive and growing volume of published research on various medical topics,
practitioners often find themselves overloaded with information. As such,
natural language processing research has recently commenced exploring
techniques for medical domain-specific automated text summarisation
(ATS), targeted towards the task of condensing large medical texts.
However, the development of effective summarisation techniques for this task
requires cross-domain knowledge. We present a survey of EBM, the
domain-specific needs for EBM, automated summarisation techniques, and how they
have been applied hitherto. We envision that this survey will serve as a first
resource for the development of future operational text summarisation
techniques for EBM.
| 1 | 0 | 0 | 0 | 0 | 0 |
Generalized Moran sets Generated by Step-wise Adjustable Iterated Function Systems | In this article we provide a systematic way of creating generalized Moran
sets using an analogous iterated function system (IFS) procedure. We use a
step-wise adjustable IFS to introduce some variance (such as
non-self-similarity) in the fractal limit sets. The process retains the
computational simplicity of a standard IFS procedure. In our construction of
the generalized Moran sets, we also weaken the fourth Moran Structure Condition,
which requires that the same pattern of diameter ratios be used across a generation.
Moreover, we provide upper and lower bounds for the Hausdorff dimension of the
fractals created from this generalized process. Specific examples (Cantor-like
sets, Sierpinski-like triangles, etc.), with calculations of their
corresponding dimensions, are studied.
| 0 | 1 | 1 | 0 | 0 | 0 |
On the lateral instability analysis of MEMS comb-drive electrostatic transducers | This paper investigates the lateral pull-in effect of an in-plane
overlap-varying transducer. The instability is induced by the translational and
rotational displacements. Based on the principle of virtual work, the
equilibrium conditions of force and moment in lateral directions are derived.
The analytical solutions of the critical voltage, at which the pull-in
phenomenon occurs, are developed when considering only the translational
stiffness or only the rotational stiffness of the mechanical spring. The
critical voltage in general case is numerically determined by using nonlinear
optimization techniques, taking into account the combined effect of translation
and rotation. The effects of possible translational offsets and angular
deviations to the critical voltage are modeled and numerically analyzed. The
investigation is then expanded, for the first time, to the anti-phase operation
mode and Bennet's doubler configuration of the two transducers.
| 0 | 1 | 0 | 0 | 0 | 0 |
Isospectrality For Orbifold Lens Spaces | We answer Mark Kac's famous question, "can one hear the shape of a drum?" in
the affirmative for orbifolds that are 3-dimensional and 4-dimensional lens
spaces; we thus complete the answer to this question for orbifold lens spaces
in all dimensions. We also show that the coefficients of the asymptotic
expansion of the trace of the heat kernel are not sufficient to determine the
above results.
| 0 | 0 | 1 | 0 | 0 | 0 |
How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models | Machine learning models are vulnerable to Adversarial Examples: minor
perturbations to input samples intended to deliberately cause
misclassification. Current defenses against adversarial examples, especially
for Deep Neural Networks (DNN), are primarily derived from empirical
developments, and their security guarantees are often only justified
retroactively. Many defenses therefore rely on hidden assumptions that are
subsequently subverted by increasingly elaborate attacks. This is not
surprising: deep learning notoriously lacks a comprehensive mathematical
framework to provide meaningful guarantees.
In this paper, we leverage Gaussian Processes to investigate adversarial
examples in the framework of Bayesian inference. Across different models and
datasets, we find that deviating levels of uncertainty reflect the perturbation
introduced into benign samples by state-of-the-art attacks, including novel
white-box attacks on Gaussian Processes. Our experiments demonstrate that even
unoptimized uncertainty thresholds already reject adversarial examples in many
scenarios.
Comment: Thresholds can be broken in a modified attack, which was done in
arXiv:1812.02606 (The limitations of model uncertainty in adversarial
settings).
| 1 | 0 | 0 | 1 | 0 | 0 |
Energy Efficient Adaptive Network Coding Schemes for Satellite Communications | In this paper, we propose novel energy efficient adaptive network coding and
modulation schemes for time variant channels. We evaluate such schemes under a
realistic channel model for open area environments and Geostationary Earth
Orbit (GEO) satellites. Compared to non-adaptive network coding and adaptive
rate efficient network-coded schemes for time variant channels, we show that
our proposed schemes, through physical-layer awareness, can be designed to
transmit only if a target quality of service (QoS) is achieved. As a result,
such schemes can provide remarkable energy savings.
| 1 | 0 | 0 | 0 | 0 | 0 |
DCCO: Towards Deformable Continuous Convolution Operators | Discriminative Correlation Filter (DCF) based methods have shown competitive
performance on tracking benchmarks in recent years. Generally, DCF based
trackers learn a rigid appearance model of the target. However, this reliance
on a single rigid appearance model is insufficient in situations where the
target undergoes non-rigid transformations. In this paper, we propose a unified
formulation for learning a deformable convolution filter. In our framework, the
deformable filter is represented as a linear combination of sub-filters. Both
the sub-filter coefficients and their relative locations are inferred jointly
in our formulation. Experiments are performed on three challenging tracking
benchmarks: OTB-2015, TempleColor and VOT2016. Our approach improves the
baseline method, leading to performance comparable to state-of-the-art.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multitarget search on complex networks: A logarithmic growth of global mean random cover time | We investigate multitarget search on complex networks and derive an exact
expression for the mean random cover time that quantifies the expected time a
walker needs to visit multiple targets. Based on this, we recover and extend
some interesting results of multitarget search on networks. Specifically, we
observe the logarithmic increase of the global mean random cover time with the
target number for a broad range of random search processes, including generic
random walks, biased random walks, and maximal entropy random walks. We show
that the logarithmic growth pattern is a universal feature of multitarget
search on networks by using the annealed network approach and the
Sherman-Morrison formula. Moreover, we find that for biased random walks, the
global mean random cover time can be minimized, and that the corresponding
optimal parameter also minimizes the global mean first passage time, pointing
towards its robustness. Our findings further confirm that the logarithmic
growth pattern is a universal law governing multitarget search in confined
media.
| 1 | 1 | 0 | 0 | 0 | 0 |
New X-ray bound on density of primordial black holes | We set a new upper limit on the abundance of primordial black holes (PBH)
based on existing X-ray data. PBH interactions with interstellar medium should
result in significant fluxes of X-ray photons, which would contribute to the
observed number density of compact X-ray objects in galaxies. The data
constrain PBH number density in the mass range from a few $M_\odot$ to $2\times
10^7 M_\odot$. The PBH density needed to account for the origin of black holes
detected by LIGO is marginally allowed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hierarchical Clustering with Prior Knowledge | Hierarchical clustering is a class of algorithms that seeks to build a
hierarchy of clusters. It has been the dominant approach to constructing
embedded classification schemes since it outputs dendrograms, which capture the
hierarchical relationship among members at all levels of granularity,
simultaneously. Being greedy in the algorithmic sense, a hierarchical
clustering algorithm partitions data at every step solely based on a
similarity/dissimilarity measure. The clustering results oftentimes depend
not only on the distribution of the underlying data, but also on the choice
of dissimilarity measure and clustering algorithm. In this paper, we propose a method to
incorporate prior domain knowledge about entity relationship into the
hierarchical clustering. Specifically, we use a distance function in
ultrametric space to encode the external ontological information. We show that
popular linkage-based algorithms can faithfully recover the encoded structure.
Similar to some regularized machine learning techniques, we add this distance
as a penalty term to the original pairwise distance to regulate the final
structure of the dendrogram. As a case study, we applied this method on real
data in the building of a customer behavior based product taxonomy for an
Amazon service, leveraging the information from a larger Amazon-wide browse
structure. The method is useful when one wants to leverage the relational
information from external sources, or the data used to generate the distance
matrix is noisy and sparse. Our work falls in the category of semi-supervised
or constrained clustering.
| 0 | 0 | 0 | 1 | 0 | 0 |
Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations | We introduce physics informed neural networks -- neural networks that are
trained to solve supervised learning tasks while respecting any given law of
physics described by general nonlinear partial differential equations. In this
second part of our two-part treatise, we focus on the problem of data-driven
discovery of partial differential equations. Depending on whether the available
data is scattered in space-time or arranged in fixed temporal snapshots, we
introduce two main classes of algorithms, namely continuous time and discrete
time models. The effectiveness of our approach is demonstrated using a wide
range of benchmark problems in mathematical physics, including conservation
laws, incompressible fluid flow, and the propagation of nonlinear shallow-water
waves.
| 1 | 0 | 0 | 1 | 0 | 0 |
The HoTT reals coincide with the Escardó-Simpson reals | Escardó and Simpson defined a notion of interval object by a universal
property in any category with binary products. The Homotopy Type Theory book
defines a higher-inductive notion of reals, and suggests that the interval may
satisfy this universal property. We show that this is indeed the case in the
category of sets of any universe. We also show that the type of HoTT reals is
the least Cauchy complete subset of the Dedekind reals containing the
rationals.
| 1 | 0 | 1 | 0 | 0 | 0 |
Noise2Noise: Learning Image Restoration without Clean Data | We apply basic statistical reasoning to signal reconstruction by machine
learning -- learning to map corrupted observations to clean signals -- with a
simple and powerful conclusion: it is possible to learn to restore images by
looking only at corrupted examples, at performance at, and sometimes exceeding,
that of training using clean data, without explicit image priors or likelihood models
of the corruption. In practice, we show that a single model learns photographic
noise removal, denoising synthetic Monte Carlo images, and reconstruction of
undersampled MRI scans -- all corrupted by different processes -- based on
noisy data only.
| 0 | 0 | 0 | 1 | 0 | 0 |
Simple Root Cause Analysis by Separable Likelihoods | Root Cause Analysis for Anomalies is challenging because of the trade-off
between accuracy and explanatory friendliness, required for industrial
applications. In this paper we propose a framework for simple and friendly RCA
within the Bayesian regime under certain restrictions (that Hessian at the mode
is diagonal, here referred to as \emph{separability}) imposed on the predictive
posterior. We show that this assumption is satisfied for important base models,
including Multinomial, Dirichlet-Multinomial and Naive Bayes. To demonstrate the
usefulness of the framework, we embed it into the Bayesian Net and validate on
web server error logs (real world data set).
| 0 | 0 | 0 | 1 | 0 | 0 |
Jeffrey's prior sampling of deep sigmoidal networks | Neural networks have been shown to have a remarkable ability to uncover low
dimensional structure in data: the space of possible reconstructed images forms
a reduced model manifold in image space. We explore this idea directly by
analyzing the manifold learned by Deep Belief Networks and Stacked Denoising
Autoencoders using Monte Carlo sampling. The model manifold forms an only
slightly elongated hyperball with actual reconstructed data appearing
predominantly on the boundaries of the manifold. In connection with the results
we present, we discuss problems of sampling high-dimensional manifolds as well
as recent work [M. Transtrum, G. Hart, and P. Qiu, Submitted (2014)] discussing
the relation between high dimensional geometry and model reduction.
| 1 | 1 | 0 | 0 | 0 | 0 |
Computing the quality of the Laplace approximation | Bayesian inference requires approximation methods to become computable, but
for most of them it is impossible to quantify how close the approximation is to
the true posterior. In this work, we present a theorem upper-bounding the KL
divergence between a log-concave target density
$f\left(\boldsymbol{\theta}\right)$ and its Laplace approximation
$g\left(\boldsymbol{\theta}\right)$. The bound we present is computable: on the
classical logistic regression model, we find our bound to be almost exact as
long as the dimensionality of the parameter space is high.
The approach we followed in this work can be extended to other Gaussian
approximations, as we will do in an extended version of this work, to be
submitted to the Annals of Statistics. It will then become a critical tool for
characterizing whether, for a given problem, a given Gaussian approximation is
suitable, or whether a more precise alternative method should be used instead.
| 0 | 0 | 1 | 1 | 0 | 0 |
Correlations between thresholds and degrees: An analytic approach to model attacks and failure cascades | Two node variables determine the evolution of cascades in random networks: a
node's degree and threshold. Correlations between both fundamentally change the
robustness of a network, yet, they are disregarded in standard analytic methods
as local tree or heterogeneous mean-field approximations because of the poor
tractability of order statistics. We show how they become tractable in the
thermodynamic limit of infinite network size. This enables the analytic
description of node attacks that are characterized by threshold allocations
based on node degree. Using two examples, we discuss possible implications of
irregular phase transitions and different speeds of cascade evolution for the
control of cascades.
| 0 | 1 | 0 | 0 | 0 | 0 |
Faster Learning by Reduction of Data Access Time | Nowadays, the major challenge in machine learning is the Big Data challenge.
Because of big data problems, arising from a large number of data points, a
large number of features in each data point, or both, the training of models
has become very slow. The training time has two major components: the time to
access the data and the time to process (learn from) the data. So far, research has focused only on
the second part, i.e., learning from the data. In this paper, we have proposed
one possible solution to handle the big data problems in machine learning. The
idea is to reduce the training time through reducing data access time by
proposing systematic sampling and cyclic/sequential sampling to select
mini-batches from the dataset. To prove the effectiveness of proposed sampling
techniques, we have used Empirical Risk Minimization, a commonly used
machine learning problem, for the strongly convex and smooth case. The problem has
been solved using SAG, SAGA, SVRG, SAAG-II and MBSGD (Mini-batched SGD), each
using two step determination techniques, namely, constant step size and
backtracking line search method. Theoretical results prove the same convergence
for systematic sampling, cyclic sampling and the widely used random sampling
technique, in expectation. Experimental results with benchmark datasets
prove the efficacy of the proposed sampling techniques and show up to six times
faster training.
| 0 | 0 | 0 | 1 | 0 | 0 |
Optimal Control for Constrained Coverage Path Planning | The problem of constrained coverage path planning involves a robot trying to
cover maximum area of an environment under some constraints that appear as
obstacles in the map. Out of the several coverage path planning methods, we
consider augmenting the linear sweep-based coverage method to achieve minimum
energy/ time optimality along with maximum area coverage. In addition, we also
study the effects of variation of different parameters on the performance of
the modified method.
| 1 | 0 | 0 | 0 | 0 | 0 |
ScaleSimulator: A Fast and Cycle-Accurate Parallel Simulator for Architectural Exploration | Design of next generation computer systems should be supported by simulation
infrastructure that must achieve a few contradictory goals such as fast
execution time, high accuracy, and enough flexibility to allow comparison
between large numbers of possible design points. Most existing architecture
level simulators are designed to be flexible and to execute the code in
parallel for greater efficiency, but at the cost of sacrificed accuracy. This
paper presents the ScaleSimulator simulation environment, which is based on a
new design methodology whose goal is to achieve near cycle accuracy while still
being flexible enough to simulate many different future system architectures
and efficient enough to run meaningful workloads. We achieve these goals by
making parallelism a first-class citizen in our methodology. Thus, this
paper focuses mainly on the ScaleSimulator design points that enable better
parallel execution while maintaining the scalability and cycle accuracy of a
simulated architecture. The paper indicates that the new proposed
ScaleSimulator tool can (1) efficiently parallelize the execution of a
cycle-accurate architecture simulator, (2) efficiently simulate complex
architectures (e.g., out-of-order CPU pipeline, cache coherency protocol, and
network) and massive parallel systems, and (3) use meaningful workloads, such
as full simulation of OLTP benchmarks, to examine future architectural choices.
| 1 | 0 | 0 | 0 | 0 | 0 |
Annihilating wild kernels | Let $L/K$ be a finite Galois extension of number fields with Galois group
$G$. Let $p$ be an odd prime and $r>1$ be an integer. Assuming a conjecture of
Schneider, we formulate a conjecture that relates special values of equivariant
Artin $L$-series at $s=r$ to the compact support cohomology of the étale
$p$-adic sheaf $\mathbb Z_p(r)$. We show that our conjecture is essentially
equivalent to the $p$-part of the equivariant Tamagawa number conjecture for
the pair $(h^0(\mathrm{Spec}(L))(r), \mathbb Z[G])$. We derive from this
explicit constraints on the Galois module structure of Banaszak's $p$-adic wild
kernels.
| 0 | 0 | 1 | 0 | 0 | 0 |
Trading the Twitter Sentiment with Reinforcement Learning | This paper explores the possibility of using alternative data and
artificial intelligence techniques to trade stocks. The efficacy of the daily
Twitter sentiment on predicting the stock return is examined using machine
learning methods. Reinforcement learning(Q-learning) is applied to generate the
optimal trading policy based on the sentiment signal. The predicting power of
the sentiment signal is more significant if the stock price is driven by the
expectation of the company's growth and when the company has a major event that
draws public attention. The optimal trading strategy based on reinforcement
learning outperforms the trading strategy based on the machine learning
prediction.
| 1 | 0 | 0 | 0 | 0 | 0 |
Kahler-Einstein metrics and algebraic geometry | This is a survey article, based on the author's lectures in the 2015 Current
developments in Mathematics meeting; published in "Current developments in
Mathematics". Version 2, references corrected and added.
| 0 | 0 | 1 | 0 | 0 | 0 |
Hemihelical local minimizers in prestrained elastic bi-strips | We consider a double layered prestrained elastic rod in the limit of
vanishing cross section. For the resulting limit Kirchhoff-rod model with
intrinsic curvature we prove a supercritical bifurcation result, rigorously
showing the emergence of a branch of hemihelical local minimizers from the
straight configuration, at a critical force and under clamping at both ends. As
a consequence we obtain the existence of nontrivial local minimizers of the
$3$-d system.
| 0 | 0 | 1 | 0 | 0 | 0 |
AspEm: Embedding Learning by Aspects in Heterogeneous Information Networks | Heterogeneous information networks (HINs) are ubiquitous in real-world
applications. Due to the heterogeneity in HINs, the typed edges may not fully
align with each other. In order to capture the semantic subtlety, we propose
the concept of aspects with each aspect being a unit representing one
underlying semantic facet. Meanwhile, network embedding has emerged as a
powerful method for learning network representation, where the learned
embedding can be used as features in various downstream applications.
Therefore, we are motivated to propose a novel embedding learning
framework---AspEm---to preserve the semantic information in HINs based on
multiple aspects. Instead of preserving information of the network in one
semantic space, AspEm encapsulates information regarding each aspect
individually. In order to select aspects for embedding purpose, we further
devise a solution for AspEm based on dataset-wide statistics. To corroborate
the efficacy of AspEm, we conducted experiments on two real-world datasets with
two types of applications---classification and link prediction. Experiment
results demonstrate that AspEm can outperform baseline network embedding
learning methods by considering multiple aspects, where the aspects can be
selected from the given HIN in an unsupervised manner.
| 1 | 0 | 0 | 0 | 0 | 0 |
Nonlinear Information Bottleneck | Information bottleneck (IB) is a technique for extracting the information in
some 'input' random variable that is relevant for predicting some different
'output' random variable. IB works by encoding the input in a compressed
'bottleneck variable' from which the output can then be accurately decoded. IB can be
difficult to compute in practice, and has been mainly developed for two limited
cases: (1) discrete random variables with small state spaces, and (2)
continuous random variables that are jointly Gaussian distributed (in which
case the encoding and decoding maps are linear). We propose a method to perform
IB in more general domains. Our approach can be applied to discrete or
continuous inputs and outputs, and allows for nonlinear encoding and decoding
maps. The method uses a novel upper bound on the IB objective, derived using a
non-parametric estimator of mutual information and a variational approximation.
We show how to implement the method using neural networks and gradient-based
optimization, and demonstrate its performance on the MNIST dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Top-k Overlapping Densest Subgraphs: Approximation and Complexity | A central problem in graph mining is finding dense subgraphs, with several
applications in different fields, a notable example being identifying
communities. While a lot of effort has been put into the problem of finding a
single dense subgraph, only recently has the focus shifted to the problem
of finding a set of densest subgraphs. Some approaches aim at finding
disjoint subgraphs, while in many real-world networks communities are often
overlapping. An approach introduced to find possible overlapping subgraphs is
the Top-k Overlapping Densest Subgraphs problem. For a given integer k >= 1,
the goal of this problem is to find a set of k densest subgraphs that may share
some vertices. The objective function to be maximized takes into account both
the density of the subgraphs and the distance between subgraphs in the
solution.
The Top-k Overlapping Densest Subgraphs problem has been shown to admit a
1/10-factor approximation algorithm. Furthermore, the computational complexity
of the problem has been left open. In this paper, we present contributions
concerning the approximability and the computational complexity of the problem.
For the approximability, we present approximation algorithms that improve the
approximation factor to 1/2, when k is bounded by the size of the vertex set,
and to 2/3 when k is a constant. For the computational complexity, we show that the
problem is NP-hard even when k = 3.
| 1 | 0 | 0 | 0 | 0 | 0 |
The energy-momentum tensor of electromagnetic fields in matter | We present a complete resolution of the Abraham-Minkowski controversy. This
is done by considering several new aspects which invalidate previous
discussions. We show that: 1)For polarized matter the center of mass theorem is
no longer valid in its usual form. A contribution related to microscopic spin
should be considered. 2)The electromagnetic dipolar energy density contributes
to the inertia of matter and should be incorporated covariantly into the
energy-momentum tensor of matter. Then there is an electromagnetic component in
matter's momentum density whose variation explains the results of the only
experiment which supports Abraham's force. 3)Averaging the microscopic
Lorentz force results in the unambiguous expression for the force density
exerted by the field. This force density is consistent with all the
experimental evidence. 4)Momentum conservation determines the electromagnetic
energy-momentum tensor. This tensor is different from Abraham's and Minkowski's
tensors, but one recovers Minkowski's expression for the momentum density. The
energy density is different from Poynting's expression but Poynting's vector
remains the same. Our tensor is non-symmetric which allows the field to exert a
distributed torque on matter. We use our results to discuss momentum and
angular momentum exchange in various situations of physical interest. We find
complete consistency of our equations in the description of the systems
considered. We also show that several alternative expressions of the field
energy-momentum tensor and force-density cannot be successfully used in all our
examples. In particular we verify in two of these examples that the center of
mass and spin introduced by us moves with constant velocity, but that the
standard center of mass does not.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pinhole induced efficiency variation in perovskite solar cells | Process-induced efficiency variation is a major concern for all thin-film
solar cells, including the emerging perovskite based solar cells. In this
manuscript, we address the effect of pinholes or process-induced surface
coverage aspects on the efficiency of such solar cells through detailed
numerical simulations. Interestingly, we find the pinhole size distribution
affects the short circuit current and open circuit voltage in contrasting
manners. Specifically, while the Jsc is heavily dependent on the pinhole size
distribution, surprisingly, the Voc seems to be only nominally affected by it.
Further, our simulations also indicate that, with appropriate interface
engineering, it is indeed possible to design a nanostructured device with
efficiencies comparable to that of ideal planar structures. Additionally, we
propose a simple technique based on terminal IV characteristics to estimate the
surface coverage in perovskite solar cells.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Overview of Robust Subspace Recovery | This paper will serve as an introduction to the body of work on robust
subspace recovery. Robust subspace recovery involves finding an underlying
low-dimensional subspace in a dataset that is possibly corrupted with outliers.
While this problem is easy to state, it has been difficult to develop optimal
algorithms due to its underlying nonconvexity. This work emphasizes advantages
and disadvantages of proposed approaches and unsolved problems in the area.
| 0 | 0 | 0 | 1 | 0 | 0 |
Tuning Free Orthogonal Matching Pursuit | Orthogonal matching pursuit (OMP) is a widely used compressive sensing (CS)
algorithm for recovering sparse signals in noisy linear regression models. The
performance of OMP depends on its stopping criteria (SC). SC for OMP discussed
in literature typically assumes knowledge of either the sparsity of the signal
to be estimated $k_0$ or noise variance $\sigma^2$, both of which are
unavailable in many practical applications. In this article we develop a
modified version of OMP called tuning free OMP or TF-OMP which does not require
a SC. TF-OMP is proved to accomplish successful sparse recovery under the usual
assumptions on restricted isometry constants (RIC) and mutual coherence of
design matrix. TF-OMP is numerically shown to deliver a highly competitive
performance in comparison with OMP having \textit{a priori} knowledge of $k_0$
or $\sigma^2$. Greedy algorithm for robust de-noising (GARD) is an OMP like
algorithm proposed for efficient estimation in classical overdetermined linear
regression models corrupted by sparse outliers. However, GARD requires the
knowledge of inlier noise variance which is difficult to estimate. We also
produce a tuning free algorithm (TF-GARD) for efficient estimation in the
presence of sparse outliers by extending the operating principle of TF-OMP to
GARD. TF-GARD is numerically shown to achieve a performance comparable to that
of the existing implementation of GARD.
| 1 | 0 | 0 | 1 | 0 | 0 |
New constructions of MDS codes with complementary duals | Linear complementary-dual (LCD for short) codes are linear codes that
intersect with their duals trivially. LCD codes have been used in certain
communication systems. It is recently found that LCD codes can be applied in
cryptography. This application of LCD codes renewed the interest in the
construction of LCD codes having a large minimum distance. MDS codes are
optimal in the sense that the minimum distance cannot be improved for given
length and code size. Constructing LCD MDS codes is thus of significance in
theory and practice. Recently, Jin (\cite{Jin}, IEEE Trans. Inf. Theory, 2016)
constructed several classes of LCD MDS codes through generalized Reed-Solomon
codes. In this paper, a different approach is proposed to obtain new LCD MDS
codes from generalized Reed-Solomon codes. Consequently, new code constructions
are provided and certain previously known results in \cite{Jin} are extended.
| 1 | 0 | 0 | 0 | 0 | 0 |
Porosity and regularity in metric measure spaces | This is a report of a joint work with E. Järvenpää, M. Järvenpää,
T. Rajala, S. Rogovin, and V. Suomala. In [3], we characterized uniformly
porous sets in $s$-regular metric spaces in terms of regular sets by verifying
that a set $A$ is uniformly porous if and only if there is $t < s$ and a
$t$-regular set $F \supset A$. Here we outline the main idea of the proof and
also present an alternative proof for the crucial lemma needed in the proof of
the result.
| 0 | 0 | 1 | 0 | 0 | 0 |
Strong-coupling superconductivity induced by calcium intercalation in bilayer transition-metal dichalcogenides | We theoretically investigate the possibility of achieving a superconducting
state in transition-metal dichalcogenide bilayers through intercalation, a
process previously and widely used to achieve metallization and superconducting
states in novel superconductors. For the Ca-intercalated bilayers MoS$_2$ and
WS$_2$, we find that the superconducting state is characterized by an
electron-phonon coupling constant larger than $1.0$ and a superconducting
critical temperature of $13.3$ and $9.3$ K, respectively. These results are
superior to other predicted or experimentally observed two-dimensional
conventional superconductors and suggest that the investigated materials may be
good candidates for nanoscale superconductors. More interestingly, we show
that the obtained thermodynamic properties go beyond the predictions of the
mean-field Bardeen--Cooper--Schrieffer approximation, and that calculations
conducted within the framework of the strong-coupling Eliashberg theory should
be regarded as the ones yielding quantitatively reliable results.
| 0 | 1 | 0 | 0 | 0 | 0 |
Interval-type theorems concerning quasi-arithmetic means | The family of quasi-arithmetic means has a natural partial order (point-wise
order): $A^{[f]}\le A^{[g]}$ if and only if $A^{[f]}(v)\le A^{[g]}(v)$ for all
admissible vectors $v$ ($f,\,g$ and, later, $h$ are continuous and monotone and
defined on a common interval). Therefore one can introduce the notion of
interval-type sets (sets $\mathcal{I}$ such that whenever $A^{[f]} \le A^{[h]}
\le A^{[g]}$ for some $A^{[f]},\,A^{[g]} \in \mathcal{I}$ then $A^{[h]} \in
\mathcal{I}$ too). Our aim is to give examples of interval-type sets involving
varying smoothness assumptions on the generating functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Integrable flows between exact CFTs | We explicitly construct families of integrable $\sigma$-model actions
smoothly interpolating between exact CFTs. In the ultraviolet the theory is the
direct product of two current algebras at levels $k_1$ and $k_2$. In the
infrared and for the case of two deformation matrices the CFT involves a coset
CFT, whereas for a single matrix deformation it is given by the ultraviolet
direct product theories but at levels $k_1$ and $k_2-k_1$. For isotropic
deformations we demonstrate integrability. In this case we also compute the
exact beta-function for the deformation parameters using gravitational methods.
This is shown to coincide with previous results obtained using perturbation
theory and non-perturbative symmetries.
| 0 | 1 | 0 | 0 | 0 | 0 |