text (stringlengths 57–2.88k) | labels (sequencelengths 6–6) |
---|---|
Title: Linear Time Clustering for High Dimensional Mixtures of Gaussian Clouds,
Abstract: Clustering mixtures of Gaussian distributions is a fundamental and
challenging problem that is ubiquitous in various high-dimensional data
processing tasks. While state-of-the-art work on learning Gaussian mixture
models has focused primarily on improving separation bounds and their
generalization to arbitrary classes of mixture models, less emphasis has been
placed on the practical computational efficiency of the proposed solutions. In this
paper, we propose a novel and highly efficient clustering algorithm for $n$
points drawn from a mixture of two arbitrary Gaussian distributions in
$\mathbb{R}^p$. The algorithm involves performing random 1-dimensional
projections until a direction is found that yields a user-specified clustering
error $e$. For a 1-dimensional separation parameter $\gamma$ satisfying
$\gamma=Q^{-1}(e)$, the expected number of such projections is shown to be
bounded by $o(\ln p)$, when $\gamma$ satisfies $\gamma\leq
c\sqrt{\ln{\ln{p}}}$, with $c$ as the separability parameter of the two
Gaussians in $\mathbb{R}^p$. Consequently, the expected overall running time of
the algorithm is linear in $n$ and quasi-linear in $p$ at $o(\ln{p})O(np)$, and
the sample complexity is independent of $p$. This result stands in contrast to
prior works which provide polynomial, with at-best quadratic, running time in
$p$ and $n$. We show that our bound on the expected number of 1-dimensional
projections extends to the case of three or more Gaussian components, and we
present a generalization of our results to mixture distributions beyond the
Gaussian model. | [
1,
0,
0,
0,
0,
0
] |
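The projection-and-test loop sketched in the abstract above is easy to illustrate. The following is only a rough sketch of the idea, not the paper's algorithm: the two-means split of the projected points and the empirical separation statistic gamma_hat are simplifying assumptions of mine, and only the stopping threshold $\gamma=Q^{-1}(e)$ is taken from the abstract.

```python
# Sketch: project onto random 1-D directions until the projected points look
# sufficiently separated (gamma_hat >= Q^{-1}(e)), then cluster by thresholding.
import numpy as np
from scipy.stats import norm

def split_1d(z, iters=20):
    """Crude two-means split of 1-D points; returns threshold and the two means."""
    t = np.median(z)
    m1 = m2 = t
    for _ in range(iters):
        left, right = z[z <= t], z[z > t]
        if len(left) == 0 or len(right) == 0:
            break
        m1, m2 = left.mean(), right.mean()
        t = 0.5 * (m1 + m2)
    return t, m1, m2

def random_projection_cluster(X, e=0.05, max_tries=1000, seed=None):
    """X: (n, p) points from a two-component mixture; e: target clustering error."""
    rng = np.random.default_rng(seed)
    gamma_target = norm.isf(e)              # gamma = Q^{-1}(e), Gaussian tail quantile
    n, p = X.shape
    best = None
    for _ in range(max_tries):
        u = rng.standard_normal(p)
        u /= np.linalg.norm(u)              # random unit direction
        z = X @ u                           # 1-D projection
        t, m1, m2 = split_1d(z)
        labels = (z > t).astype(int)
        if labels.min() == labels.max():    # degenerate split; try another direction
            continue
        s = 0.5 * (z[labels == 0].std() + z[labels == 1].std()) + 1e-12
        gamma_hat = abs(m2 - m1) / (2.0 * s)   # empirical 1-D separation
        best = (labels, u)
        if gamma_hat >= gamma_target:       # direction is good enough; stop searching
            break
    return best
```

The abstract's bound says that, under the stated separability condition, a loop of this kind is expected to stop after $o(\ln p)$ draws, which is what keeps the overall cost close to linear in $n$ and $p$.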
Title: Transitions from a Kondo-like diamagnetic insulator into a modulated ferromagnetic metal in $\bm{\mathrm{FeGa}_{3-y}\mathrm{Ge}_y}$,
Abstract: One initial and essential question of magnetism is whether the magnetic
properties of a material are governed by localized moments or itinerant
electrons. Here we present the case of the weakly ferromagnetic system
FeGa$_{3-y}$Ge$_y$, wherein these two opposing models are reconciled, such that
the magnetic susceptibility is quantitatively explained by taking into account
the effects of spin-spin correlation. With the electron doping introduced by Ge
substitution, the diamagnetic insulating parent compound FeGa$_3$ becomes a
paramagnetic metal as early as at $ y=0.01 $, and turns into a weakly
ferromagnetic metal around the quantum critical point $ y=0.15 $. Within the
ferromagnetic regime of FeGa$_{3-y}$Ge$_y$, the magnetic properties are of a
weakly itinerant ferromagnetic nature, located in the intermediate regime
between the localized and the itinerant dominance. Our analysis implies a
potential universality for all itinerant-electron ferromagnets. | [
0,
1,
0,
0,
0,
0
] |
Title: Sample, computation vs storage tradeoffs for classification using tensor subspace models,
Abstract: In this paper, we exhibit the tradeoffs between the (training) sample,
computation and storage complexity for the problem of supervised classification
using signal subspace estimation. Our main tool is the use of tensor subspaces,
i.e. subspaces with a Kronecker structure, for embedding the data into lower
dimensions. Among the subspaces with a Kronecker structure, we show that using
subspaces with a hierarchical structure for representing data leads to improved
tradeoffs. One of the main reasons for the improvement is that embedding data
into these hierarchical Kronecker structured subspaces prevents overfitting at
higher latent dimensions. | [
1,
0,
0,
1,
0,
0
] |
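As a rough illustration of the Kronecker-structured (tensor) subspaces the abstract above refers to, the sketch below projects one matrix-shaped sample with two factor bases A and B and checks the result against the explicit Kronecker basis. The dimensions and the plain two-factor (non-hierarchical) structure are illustrative assumptions, not the paper's construction.

```python
# Sketch: embed a matrix-shaped sample with a Kronecker-structured subspace
# U = kron(A, B) without ever forming the big matrix, using the (row-major)
# identity kron(A, B)^T vec(X) = vec(A^T X B).
import numpy as np

d1, d2, k1, k2 = 30, 40, 5, 6            # illustrative dimensions
rng = np.random.default_rng(0)
A = rng.standard_normal((d1, k1))        # factor basis for the row mode
B = rng.standard_normal((d2, k2))        # factor basis for the column mode
X = rng.standard_normal((d1, d2))        # one matrix-shaped sample

z_fast = (A.T @ X @ B).reshape(-1)       # structured projection, cheap

U = np.kron(A, B)                        # explicit basis, only for verification
z_full = U.T @ X.reshape(-1)
assert np.allclose(z_fast, z_full)

print("parameters: structured =", A.size + B.size, " unstructured =", U.size)
```

The parameter count of the structured subspace ($d_1 k_1 + d_2 k_2$ versus $d_1 d_2 k_1 k_2$) is one way to see why such embeddings are less prone to overfitting at higher latent dimensions.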
Title: Attention Solves Your TSP, Approximately,
Abstract: The development of efficient (heuristic) algorithms for practical
combinatorial optimization problems is costly, so we want to automatically
learn them instead. We show the feasibility of this approach on the important
Travelling Salesman Problem (TSP). We learn a heuristic algorithm that uses a
Neural Network policy to construct a tour. As an alternative to the Pointer
Network, our model is based entirely on (graph) attention layers and is
invariant to the input order of the nodes. We train the model efficiently using
REINFORCE with a simple and robust baseline based on a deterministic (greedy)
rollout of the best policy so far. We significantly improve over results from
previous works that consider learned heuristics for the TSP, reducing the
optimality gap for a single tour construction from 1.51% to 0.32% for instances
with 20 nodes, from 4.59% to 1.71% for 50 nodes and from 6.89% to 4.43% for 100
nodes. Additionally, we improve over a recent Reinforcement Learning framework
for two variants of the Vehicle Routing Problem (VRP). | [
0,
0,
0,
1,
0,
0
] |
Title: A Distributed Online Pricing Strategy for Demand Response Programs,
Abstract: We study a demand response problem from the perspective of the utility (also
referred to as the operator) under realistic settings, in which the utility faces
uncertainty and limited communication. Specifically, the utility does not know
the cost function of consumers and cannot have multiple rounds of information
exchange with consumers. We formulate an optimization problem for the utility
to minimize its operational cost considering time-varying demand response
targets and responses of consumers. We develop a joint online learning and
pricing algorithm. In each time slot, the utility sends out a price signal to
all consumers and estimates the cost functions of consumers based on their
noisy responses. We measure the performance of our algorithm using regret
analysis and show that our online algorithm achieves logarithmic regret with
respect to the operating horizon. In addition, our algorithm employs linear
regression to estimate the aggregate response of consumers, making it easy to
implement in practice. Simulation experiments validate the theoretical results
and show that the performance gap between our algorithm and the offline
optimality decays quickly. | [
1,
0,
1,
0,
0,
0
] |
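A toy sketch of the joint online learning and pricing loop described above, assuming a simple linear aggregate consumer response and leaving out the paper's operational-cost objective and regret analysis; every parameter, target profile and price range below is illustrative.

```python
# Toy loop: post one price per slot, observe the noisy aggregate response,
# refit a linear response model by least squares, and invert it to chase a
# time-varying demand-response target.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, noise = 100.0, 4.0, 1.0        # unknown response d = a - b*p + eps
T = 200
targets = 60 + 10 * np.sin(np.arange(T) / 10)  # time-varying DR targets

prices, responses = [], []
for t in range(T):
    if t < 5:                                   # a few exploratory prices to start
        p = rng.uniform(5.0, 15.0)
    else:                                       # least-squares fit of d ~ a - b*p
        P = np.column_stack([np.ones(len(prices)), -np.array(prices)])
        a_hat, b_hat = np.linalg.lstsq(P, np.array(responses), rcond=None)[0]
        b_hat = max(b_hat, 1e-3)                # keep the inverted model well defined
        p = (a_hat - targets[t]) / b_hat        # price expected to hit the target
    d = a_true - b_true * p + noise * rng.standard_normal()
    prices.append(p)
    responses.append(d)

gap = np.abs(np.array(responses)[5:] - targets[5:])
print("mean tracking error after warm-up:", gap.mean())
```

Posting a single price per slot mirrors the one-round communication constraint in the abstract; the logarithmic-regret guarantee itself is not reproduced here.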
Title: Highly Nonlinear and Low Confinement Loss Photonic Crystal Fiber Using GaP Slot Core,
Abstract: This paper presents a triangular-lattice photonic crystal fiber with a very
high nonlinear coefficient. The finite element method (FEM) is used to scrutinize
different optical properties of the proposed highly nonlinear photonic crystal
fiber (HNL-PCF). The HNL-PCF exhibits a high nonlinearity of up to $10\times10^{4}
W^{-1}km^{-1}$ over the wavelength range of 1500 nm to 1700 nm. Moreover, the proposed
HNL-PCF shows a very low confinement loss of $10^{-3} dB/km$ at a wavelength of
1550 nm. Furthermore, chromatic dispersion, dispersion slope, effective area,
etc. are also analyzed thoroughly. The proposed fiber will be a suitable
candidate for broadband dispersion compensation, sensor devices and
supercontinuum generation. | [
0,
1,
0,
0,
0,
0
] |
Title: Is Epicurus the father of Reinforcement Learning?,
Abstract: The Epicurean Philosophy is commonly thought of as simplistic and hedonistic.
Here I discuss how this is a misconception and explore its link to
Reinforcement Learning. Based on the letters of Epicurus, I construct an
objective function for hedonism which turns out to be equivalent to the
Reinforcement Learning objective function when omitting the discount factor. I
then discuss how Plato's and Aristotle's views can also be loosely linked
to Reinforcement Learning, as well as their weaknesses in relation to it.
Finally, I emphasise the close affinity of the Epicurean views and the Bellman
equation. | [
1,
0,
0,
1,
0,
0
] |
Title: Deep Person Re-Identification with Improved Embedding and Efficient Training,
Abstract: The person re-identification task has been greatly boosted by deep convolutional
neural networks (CNNs) in recent years. The core of the task is to enlarge the
inter-class distinction as well as to reduce the intra-class variance. However, to
achieve this, existing deep models prefer to adopt image pairs or triplets to
form verification loss, which is inefficient and unstable since the number of
training pairs or triplets grows rapidly as the number of training data grows.
Moreover, their performance is limited since they ignore the fact that
different dimensions of the embedding may carry different importance. In this paper,
we propose to employ identification loss with center loss to train a deep model
for person re-identification. The training process is efficient since it does
not require image pairs or triplets for training while the inter-class
distinction and intra-class variance are well handled. To boost the
performance, a new feature reweighting (FRW) layer is designed to explicitly
emphasize the importance of each embedding dimension, thus leading to an
improved embedding. Experiments on several benchmark datasets have shown the
superiority of our method over the state-of-the-art alternatives on both
accuracy and speed. | [
1,
0,
0,
0,
0,
0
] |
Title: Second-generation p-values: improved rigor, reproducibility, & transparency in statistical analyses,
Abstract: Verifying that a statistically significant result is scientifically
meaningful is not only good scientific practice, it is a natural way to control
the Type I error rate. Here we introduce a novel extension of the p-value - a
second-generation p-value - that formally accounts for scientific relevance and
leverages this natural Type I Error control. The approach relies on a
pre-specified interval null hypothesis that represents the collection of effect
sizes that are scientifically uninteresting or are practically null. The
second-generation p-value is the proportion of data-supported hypotheses that
are also null hypotheses. As such, second-generation p-values indicate when the
data are compatible with null hypotheses, or with alternative hypotheses, or
when the data are inconclusive. Moreover, second-generation p-values provide a
proper scientific adjustment for multiple comparisons and reduce false
discovery rates. This is an advance for environments rich in data, where
traditional p-value adjustments are needlessly punitive. Second-generation
p-values promote transparency, rigor and reproducibility of scientific results
by a priori specifying which candidate hypotheses are practically meaningful
and by providing a more reliable statistical summary of when the data are
compatible with alternative or null hypotheses. | [
0,
0,
0,
1,
0,
0
] |
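The interval-overlap computation behind the second-generation p-value can be sketched directly from the description above, reading the "data-supported hypotheses" as an interval estimate (e.g., a confidence interval). The half-credit rule for very wide intervals reflects my understanding of the published correction; consult the paper for the exact convention.

```python
# Sketch: second-generation p-value as the fraction of an interval estimate
# that lies inside a pre-specified interval null.
def second_generation_p(est_lo, est_hi, null_lo, null_hi):
    """Fraction of the interval estimate [est_lo, est_hi] that is practically null."""
    assert null_hi > null_lo, "the interval null must have positive width"
    interval_len = est_hi - est_lo
    null_len = null_hi - null_lo
    overlap = max(0.0, min(est_hi, null_hi) - max(est_lo, null_lo))
    if interval_len == 0.0:                    # degenerate point estimate
        return 1.0 if null_lo <= est_lo <= null_hi else 0.0
    if interval_len > 2.0 * null_len:          # very wide interval: cap at one half
        return min(0.5, overlap / (2.0 * null_len))
    return overlap / interval_len

# 1 -> data support only practically-null effects; 0 -> only non-null effects;
# values in between -> the data are inconclusive.
print(second_generation_p(0.2, 0.9, -0.3, 0.3))   # partial overlap, inconclusive
```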
Title: Construction of Directed 2K Graphs,
Abstract: We study the problem of constructing synthetic graphs that resemble
real-world directed graphs in terms of their degree correlations. We define the
problem of directed 2K construction (D2K) that takes as input the directed
degree sequence (DDS) and a joint degree and attribute matrix (JDAM) so as to
capture degree correlation specifically in directed graphs. We provide
necessary and sufficient conditions to decide whether a target D2K is
realizable, and we design an efficient algorithm that creates realizations with
that target D2K. We evaluate our algorithm in creating synthetic graphs that
target real-world directed graphs (such as Twitter) and we show that it brings
significant benefits compared to state-of-the-art approaches. | [
1,
0,
0,
0,
0,
0
] |
Title: Actions of automorphism groups of Lie groups,
Abstract: This is an expository article on properties of actions on Lie groups by
subgroups of their automorphism groups. After recalling various results on the
structure of the automorphism groups, we discuss actions with dense orbits,
invariant and quasi-invariant measures, the induced actions on the spaces of
probability measures on the groups, and results concerning various issues in
ergodic theory, topological dynamics, smooth dynamical systems, and probability
theory on Lie groups. | [
0,
0,
1,
0,
0,
0
] |
Title: Interplay between relativistic energy corrections and resonant excitations in x-ray multiphoton ionization dynamics of Xe atoms,
Abstract: In this paper, we theoretically study x-ray multiphoton ionization dynamics
of heavy atoms taking into account relativistic and resonance effects. When an
atom is exposed to an intense x-ray pulse generated by an x-ray free-electron
laser (XFEL), it is ionized to a highly charged ion via a sequence of
single-photon ionization and accompanying relaxation processes, and its final
charge state is limited by the last ionic state that can be ionized by a
single-photon ionization. If x-ray multiphoton ionization involves deep
inner-shell electrons in heavy atoms, energy shifts by relativistic effects
play an important role in ionization dynamics, as pointed out in [Phys.\ Rev.\
Lett.\ \textbf{110}, 173005 (2013)]. On the other hand, if the x-ray beam has a
broad energy bandwidth, the high-intensity x-ray pulse can drive resonant
photo-excitations for a broad range of ionic states and ionize even beyond the
direct one-photon ionization limit, as first proposed in [Nature\ Photon.\
\textbf{6}, 858 (2012)]. To investigate both relativistic and resonance
effects, we extend the \textsc{xatom} toolkit to incorporate relativistic
energy corrections and resonant excitations in x-ray multiphoton ionization
dynamics calculations. Charge-state distributions are calculated for Xe atoms
interacting with intense XFEL pulses at a photon energy of 1.5~keV and 5.5~keV,
respectively. For both photon energies, we demonstrate that the role of
resonant excitations in ionization dynamics is altered due to significant
shifts of orbital energy levels by relativistic effects. Therefore it is
necessary to take into account both effects to accurately simulate multiphoton
multiple ionization dynamics at high x-ray intensity. | [
0,
1,
0,
0,
0,
0
] |
Title: On the Relation between Color Image Denoising and Classification,
Abstract: A large amount of the image denoising literature focuses on single-channel images
and often experimentally validates the proposed methods on tens of images at
most. In this paper, we investigate the interaction between denoising and
classification on a large-scale dataset. Inspired by classification models, we
propose a novel deep learning architecture for color (multichannel) image
denoising and report on thousands of images from the ImageNet dataset as well as
commonly used imagery. We study the importance of (sufficient) training data and
how semantic class information can be traded for improved denoising results. As
a result, our method greatly improves PSNR performance by 0.34 - 0.51 dB on
average over state-of-the-art methods on a large-scale dataset. We conclude that
it is beneficial to incorporate denoising in classification models. On the other hand, we
also study how noise affects classification performance. In the end, we come to
a number of interesting conclusions, some being counter-intuitive. | [
1,
0,
0,
0,
0,
0
] |
Title: A proof of the Flaherty-Keller formula on the effective property of densely packed elastic composites,
Abstract: We prove in a mathematically rigorous way the asymptotic formula of Flaherty
and Keller on the effective property of densely packed periodic elastic
composites with hard inclusions. The proof is based on the primal-dual
variational principle, where the upper bound is derived by using the
Keller-type test functions and the lower bound by singular functions made of
nuclei of strain. The singular functions are solutions of the Lamé system and
precisely capture the singular behavior of the stress in the narrow region between
two adjacent hard inclusions. | [
0,
0,
1,
0,
0,
0
] |
Title: Superradiance with local phase-breaking effects,
Abstract: We study the superradiant evolution of a set of $N$ two-level systems
spontaneously radiating under the effect of phase-breaking mechanisms. We
investigate the dynamics generated by non-radiative losses and pure dephasing,
and their interplay with spontaneous emission. Our results show that in the
parameter region relevant to many solid-state cavity quantum electrodynamics
experiments, even with a dephasing rate much faster than the radiative lifetime
of a single two-level system, a sub-optimal collective superfluorescent burst
is still observable. We also apply our theory to the dilute excitation regime,
often used to describe optical excitations in solid-state systems. In this
regime, excitations can be described in terms of bright and dark bosonic
quasiparticles. We show how the effect of dephasing and losses in this regime
translates into inter-mode scattering rates and quasiparticle lifetimes. | [
0,
1,
0,
0,
0,
0
] |
Title: About Synchronized Globular Cluster Formation over Supra-galactic Scales,
Abstract: Observational and theoretical arguments support the idea that violent events
connected with $AGN$ activity and/or intense star forming episodes have played
a significant role in the early phases of galaxy formation at high red shifts.
Being old stellar systems, globular clusters seem adequate candidates to search
for the eventual signatures that might have been left by those energetic
phenomena. The analysis of the colour distributions of several thousands of
globular clusters in the Virgo and Fornax galaxy clusters reveals the existence
of some interesting and previously undetected features. A simple pattern
recognition technique indicates the presence of "colour modulations",
distinctive for each galaxy cluster. The results suggest that the globular
cluster formation process has not been completely stochastic but, rather,
included a significant fraction of globulars that formed in a synchronized way
and over supra-galactic spatial scales. | [
0,
1,
0,
0,
0,
0
] |
Title: Geometric counting on wavefront real spherical spaces,
Abstract: We provide $L^p$-versus $L^\infty$-bounds for eigenfunctions on a real
spherical space $Z$ of wavefront type. It is shown that these bounds imply a
non-trivial error term estimate for lattice counting on $Z$. The paper also
serves as an introduction to geometric counting on spaces of the mentioned
type. Section 7, on higher rank, is new and extends the results of v1. Final
version. To appear in Acta Math. Sinica.
0,
0,
1,
0,
0,
0
] |
Title: Extragalactic source population studies at very high energies in the Cherenkov Telescope Array era,
Abstract: The Cherenkov Telescope Array (CTA) is the next generation ground-based
$\gamma$-ray observatory. It will provide an order of magnitude better
sensitivity and an extended energy coverage, 20 GeV - 300 TeV, relative to
current Imaging Atmospheric Cherenkov Telescopes (IACTs). IACTs, despite
featuring an excellent sensitivity, are characterized by a limited field of
view that makes the blind search for new sources very time-inefficient.
Fortunately, the $\textit{Fermi}$-LAT collaboration recently released a new
catalog of 1,556 sources detected in the 10 GeV - 2 TeV range by the Large Area
Telescope (LAT) in the first 7 years of its operation (the 3FHL catalog). This
catalog is currently the most appropriate description of the sky that will be
energetically accessible to CTA. Here, we discuss a detailed analysis of the
extragalactic source population (mostly blazars) that will be studied in the
near future by CTA. This analysis is based on simulations built from the
expected array configurations and information reported in the 3FHL catalog.
These results show the improvements that CTA will provide on the extragalactic
TeV source population studies, which will be carried out by Key Science
Projects as well as dedicated proposals. | [
0,
1,
0,
0,
0,
0
] |
Title: A Survey of Parallel A*,
Abstract: A* is a best-first search algorithm for finding optimal-cost paths in graphs.
A* benefits significantly from parallelism because in many applications, A* is
limited by memory usage, so distributed-memory implementations of A* that use
all of the aggregate memory on the cluster make it possible to solve problems that
cannot be solved by serial, single-machine implementations. We survey
approaches to parallel A*, focusing on decentralized approaches to A* which
partition the state space among processors. We also survey approaches to
parallel, limited-memory variants of A* such as parallel IDA*. | [
1,
0,
0,
0,
0,
0
] |
Title: Large second harmonic generation enhancement in SiN waveguides by all-optically induced quasi phase matching,
Abstract: Integrated waveguides exhibiting efficient second-order nonlinearities are
crucial to obtain compact and low power optical signal processing devices.
Silicon nitride (SiN) has shown second harmonic generation (SHG) capabilities
in resonant structures and single-pass devices leveraging intermodal phase
matching, which is defined by waveguide design. Lithium niobate allows
compensating for the phase mismatch using periodically poled waveguides;
however, the latter are not reconfigurable and remain difficult to integrate
with SiN or silicon (Si) circuits. Here we show the all-optical enhancement of
SHG in SiN waveguides by more than 30 dB. We demonstrate that a Watt-level
laser causes a periodic modification of the waveguide second-order
susceptibility. The resulting second order nonlinear grating has a periodicity
allowing for quasi phase matching (QPM) between the pump and SH mode. Moreover,
changing the pump wavelength or polarization updates the period, relaxing phase
matching constraints imposed by the waveguide geometry. We show that the
grating is long term inscribed in the waveguides, and we estimate a second
order nonlinearity of the order of 0.3 pm/V, while a maximum conversion
efficiency (CE) of $1.8\times10^{-6}\ W^{-1}cm^{-2}$ is reached.
0,
1,
0,
0,
0,
0
] |
Title: Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning,
Abstract: In this paper, we revisit the large-scale constrained linear regression
problem and propose faster methods based on some recent developments in
sketching and optimization. Our algorithms combine (accelerated) mini-batch SGD
with a new method called two-step preconditioning to achieve an approximate
solution with a time complexity lower than that of the state-of-the-art
techniques for the low precision case. Our idea can also be extended to the
high precision case, which gives an alternative implementation to the Iterative
Hessian Sketch (IHS) method with significantly improved time complexity.
Experiments on benchmark and synthetic datasets suggest that our methods indeed
outperform existing ones considerably in both the low and high precision cases. | [
0,
0,
0,
1,
0,
0
] |
Title: Anisotropic Exchange in ${\bf LiCu_2O_2}$,
Abstract: We investigate the magnetic properties of the multiferroic quantum-spin
system LiCu$_2$O$_2$ by electron spin resonance (ESR) measurements at $X$- and
$Q$-band frequencies in a wide temperature range $(T_{\rm N1} \leq T \leq
300$\,K). The observed anisotropies of the $g$ tensor and the ESR linewidth in
untwinned single crystals result from the crystal-electric field and from local
exchange geometries acting on the magnetic Cu$^{2+}$ ions in the zigzag-ladder
like structure of LiCu$_2$O$_2$. Supported by a microscopic analysis of the
exchange paths involved, we show that both the symmetric anisotropic exchange
interaction and the antisymmetric Dzyaloshinskii-Moriya interaction provide the
dominant spin-spin relaxation channels in this material. | [
0,
1,
0,
0,
0,
0
] |
Title: Which friends are more popular than you? Contact strength and the friendship paradox in social networks,
Abstract: The friendship paradox states that in a social network, egos tend to have
lower degree than their alters, or, "your friends have more friends than you
do". Most research has focused on the friendship paradox and its implications
for information transmission, while treating the network as static and
unweighted. Yet, people can dedicate only a finite fraction of their attention
budget to each social interaction: a high-degree individual may have less time
to dedicate to individual social links, forcing them to modulate the quantities
of contact made to their different social ties. Here we study the friendship
paradox in the context of differing contact volumes between egos and alters,
finding a connection between contact volume and the strength of the friendship
paradox. The most frequently contacted alters exhibit a less pronounced
friendship paradox compared with the ego, whereas less-frequently contacted
alters are more likely to be high degree and give rise to the paradox. We argue
therefore for a more nuanced version of the friendship paradox: "your closest
friends have slightly more friends than you do", and in certain networks even:
"your best friend has no more friends than you do". We demonstrate that this
relationship is robust, holding in both a social media and a mobile phone
dataset. These results have implications for information transfer and influence
in social networks, which we explore using a simple dynamical model. | [
1,
1,
0,
0,
0,
0
] |
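The comparison behind the contact-strength version of the friendship paradox described above can be illustrated on a synthetic contact network. The toy generative model below (heavy-tailed node popularity, with contact volume shrinking as the endpoints' degrees grow) is only an assumption made to mimic a finite attention budget; it is not the social media or mobile phone data used in the paper.

```python
# Sketch: compare each ego's degree with the degree of its most- and
# least-contacted alter on a toy weighted contact network.
import random
from collections import defaultdict

random.seed(0)
n = 500
popularity = [random.paretovariate(1.5) for _ in range(n)]   # heterogeneous degrees
edges = set()
while len(edges) < 3000:
    i, j = random.choices(range(n), weights=popularity, k=2)
    if i != j:
        edges.add((min(i, j), max(i, j)))

neighbors = defaultdict(set)
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)
degree = {i: len(nbrs) for i, nbrs in neighbors.items()}

# Finite attention budget: contact volume decays with both endpoints' degrees.
contacts = defaultdict(dict)
for i, j in edges:
    w = 1.0 / (degree[i] * degree[j])
    contacts[i][j] = contacts[j][i] = w

top_wins = bottom_wins = egos = 0
for ego, nbrs in contacts.items():
    if len(nbrs) < 2:
        continue
    egos += 1
    top = max(nbrs, key=nbrs.get)            # most frequently contacted alter
    bottom = min(nbrs, key=nbrs.get)         # least frequently contacted alter
    top_wins += degree[top] > degree[ego]
    bottom_wins += degree[bottom] > degree[ego]

print("P(most-contacted alter has more friends than ego):  %.2f" % (top_wins / egos))
print("P(least-contacted alter has more friends than ego): %.2f" % (bottom_wins / egos))
```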
Title: Stochastic Optimization with Bandit Sampling,
Abstract: Many stochastic optimization algorithms work by estimating the gradient of
the cost function on the fly by sampling datapoints uniformly at random from a
training set. However, the estimator might have a large variance, which
inadvertently slows down the convergence rate of the algorithms. One way to
reduce this variance is to sample the datapoints from a carefully selected
non-uniform distribution. In this work, we propose a novel non-uniform sampling
approach that uses the multi-armed bandit framework. Theoretically, we show
that our algorithm asymptotically approximates the optimal variance within a
factor of 3. Empirically, we show that using this datapoint-selection technique
results in a significant reduction in the convergence time and variance of
several stochastic optimization algorithms such as SGD, SVRG and SAGA. This
approach for sampling datapoints is general, and can be used in conjunction
with any algorithm that uses an unbiased gradient estimation -- we expect it to
have broad applicability beyond the specific examples explored in this work. | [
1,
0,
0,
1,
0,
0
] |
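The key device in the abstract above is drawing datapoints from a non-uniform distribution while keeping the gradient estimator unbiased. The sketch below demonstrates only that importance-reweighting step on a least-squares objective with a fixed, row-norm-based sampling distribution; the paper's adaptive multi-armed bandit choice of the distribution is not reproduced here.

```python
# Sketch: a stochastic gradient drawn from distribution p and scaled by
# 1/(n * p_i) matches the full gradient in expectation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
A[:10] *= 5.0                                # a few influential examples
b = rng.standard_normal(n)
x = rng.standard_normal(d)

def grad_i(i, x):
    """Gradient of the i-th term of f(x) = (1/n) * sum_i 0.5 * (a_i.x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

full_grad = A.T @ (A @ x - b) / n            # exact gradient, for comparison
p = np.linalg.norm(A, axis=1) ** 2
p /= p.sum()                                 # non-uniform sampling distribution

samples = rng.choice(n, size=200_000, p=p)
est = np.mean([grad_i(i, x) / (n * p[i]) for i in samples], axis=0)
print("max deviation from the full gradient:", np.abs(est - full_grad).max())
```

In an SGD, SVRG or SAGA loop the same 1/(n p_i) factor multiplies every sampled gradient, which is what lets the sampling distribution be tuned for variance without biasing the updates.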
Title: Simultaneous Detection of H and D NMR Signals in a micro-Tesla Field,
Abstract: We present NMR spectra of remote-magnetized deuterated water, detected in an
unshielded environment by means of a differential atomic magnetometer. The
measurements are performed in a $\mu$T field, while pulsed techniques are
applied (following the sample displacement) in a 100~$\mu$T field, to tip both
D and H nuclei by controllable amounts. The broadband nature of the detection
system enables simultaneous detection of the two signals and accurate
evaluation of their decay times. The outcomes of the experiment demonstrate the
potential of ultra-low-field NMR spectroscopy in important applications where
the correlation between proton and deuteron spin-spin relaxation rates as a
function of external parameters contains significant information. | [
0,
1,
0,
0,
0,
0
] |
Title: Structured Black Box Variational Inference for Latent Time Series Models,
Abstract: Continuous latent time series models are prevalent in Bayesian modeling;
examples include the Kalman filter, dynamic collaborative filtering, or dynamic
topic models. These models often benefit from structured, non-mean-field
variational approximations that capture correlations between time steps. Black
box variational inference with reparameterization gradients (BBVI) allows us to
explore a rich new class of Bayesian non-conjugate latent time series models;
however, a naive application of BBVI to such structured variational models
would scale quadratically in the number of time steps. We describe a BBVI
algorithm analogous to the forward-backward algorithm which instead scales
linearly in time. It allows us to efficiently sample from the variational
distribution and estimate the gradients of the ELBO. Finally, we show results
on the recently proposed dynamic word embedding model, which was trained using
our method. | [
1,
0,
0,
1,
0,
0
] |
Title: $L^p$ Norms of Eigenfunctions on Regular Graphs and on the Sphere,
Abstract: We prove upper bounds on the $L^p$ norms of eigenfunctions of the discrete
Laplacian on regular graphs. We then apply these ideas to study the $L^p$ norms
of joint eigenfunctions of the Laplacian and an averaging operator over a
finite collection of algebraic rotations of the $2$-sphere. Under mild
conditions, such joint eigenfunctions are shown to satisfy for large $p$ the
same bounds as those known for Laplace eigenfunctions on a surface of
non-positive curvature. | [
0,
0,
1,
0,
0,
0
] |
Title: Multipath IP Routing on End Devices: Motivation, Design, and Performance,
Abstract: Most end devices are now equipped with multiple network interfaces.
Applications can exploit all available interfaces and benefit from multipath
transmission. Recently Multipath TCP (MPTCP) was proposed to implement
multipath transmission at the transport layer and has attracted lots of
attention from academia and industry. However, MPTCP only supports TCP-based
applications and its multipath routing flexibility is limited. In this paper,
we investigate the possibility of orchestrating multipath transmission from the
network layer of end devices, and develop a Multipath IP (MPIP) design
consisting of signaling, session and path management, multipath routing, and
NAT traversal. We implement MPIP in Linux and Android kernels. Through
controlled lab experiments and Internet experiments, we demonstrate that MPIP
can effectively achieve multipath gains at the network layer. It not only
supports the legacy TCP and UDP protocols, but also works seamlessly with
MPTCP. By facilitating user-defined customized routing, MPIP can route traffic
from competing applications in a coordinated fashion to maximize the aggregate
user Quality-of-Experience. | [
1,
0,
0,
0,
0,
0
] |
Title: Fast Global Convergence via Landscape of Empirical Loss,
Abstract: While optimizing convex objective (loss) functions has been a powerhouse for
machine learning for at least two decades, non-convex loss functions have
attracted fast-growing interest recently, due to many desirable properties
such as superior robustness and classification accuracy, compared with their
convex counterparts. The main obstacle for non-convex estimators is that it is
in general intractable to find the optimal solution. In this paper, we study
the computational issues for some non-convex M-estimators. In particular, we
show that the stochastic variance reduction methods converge to the global
optimum at a linear rate, by exploiting the statistical property of the
population loss. En route, we improve the convergence analysis for the batch
gradient method in \cite{mei2016landscape}. | [
0,
0,
0,
1,
0,
0
] |
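For reference, here is a minimal sketch of the SVRG update, one representative of the "stochastic variance reduction methods" the abstract above analyzes. The toy problem is a plain convex least-squares fit, chosen only to show the snapshot/full-gradient correction; the paper's setting is non-convex M-estimation, and the step size below is an arbitrary conservative choice.

```python
# Sketch of SVRG: an outer loop stores a snapshot and its full gradient, and
# the inner loop takes variance-reduced stochastic steps.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def grad_i(i, w):
    return (A[i] @ w - b[i]) * A[i]           # gradient of the i-th squared residual

w = np.zeros(d)
eta = 0.05 / np.mean(np.sum(A ** 2, axis=1))  # conservative step size
for epoch in range(30):
    w_snap = w.copy()
    mu = A.T @ (A @ w_snap - b) / n           # full gradient at the snapshot
    for _ in range(2 * n):
        i = rng.integers(n)
        w -= eta * (grad_i(i, w) - grad_i(i, w_snap) + mu)   # variance-reduced step
print("distance to the true parameters:", np.linalg.norm(w - x_true))
```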
Title: Photodetector figures of merit in terms of POVMs,
Abstract: A photodetector may be characterized by various figures of merit such as
response time, bandwidth, dark count rate, efficiency, wavelength resolution,
and photon-number resolution. On the other hand, quantum theory says that any
measurement device is fully described by its POVM, which stands for
Positive-Operator-Valued Measure, and which generalizes the textbook notion of
the eigenstates of the appropriate hermitian operator (the "observable") as
measurement outcomes. Here we show how to define a multitude of photodetector
figures of merit in terms of a given POVM. We distinguish classical and quantum
figures of merit and issue a conjecture regarding trade-off relations between
them. We discuss the relationship between POVM elements and photodetector
clicks, and how models of photodetectors may be tested by measuring either POVM
elements or figures of merit. Finally, the POVM is advertised as a
platform-independent way of comparing different types of photodetectors, since
any such POVM refers to the Hilbert space of the incoming light, and not to any
Hilbert space internal to the detector. | [
0,
1,
0,
0,
0,
0
] |
Title: Kinetics of Protein-DNA Interactions: First-Passage Analysis,
Abstract: All living systems can function only far away from equilibrium, and for this
reason chemical kinetic methods are critically important for uncovering the
mechanisms of biological processes. Here we present a new theoretical method of
investigating dynamics of protein-DNA interactions, which govern all major
biological processes. It is based on a first-passage analysis of biochemical
and biophysical transitions, and it provides a fully analytic description of
the processes. Our approach is explained for the case of a single protein
searching for a specific binding site on DNA. In addition, the application of
the method to investigations of the effect of DNA sequence heterogeneity, and
the role of multiple targets and traps in the protein search dynamics are
discussed. | [
0,
0,
0,
0,
1,
0
] |
Title: Detecting Topological Changes in Dynamic Community Networks,
Abstract: The study of time-varying (dynamic) networks (graphs) is of fundamental
importance for computer network analytics. Several methods have been proposed
to detect the effect of significant structural changes in a time series of
graphs. The main contribution of this work is a detailed analysis of a dynamic
community graph model. This model is formed by adding new vertices, and
randomly attaching them to the existing nodes. It is a dynamic extension of the
well-known stochastic blockmodel. The goal of the work is to detect the time at
which the graph dynamics switches from a normal evolution -- where balanced
communities grow at the same rate -- to an abnormal behavior -- where
communities start merging. In order to circumvent the problem of decomposing
each graph into communities, we use a metric to quantify changes in the graph
topology as a function of time. The detection of anomalies becomes one of
testing the hypothesis that the graph is undergoing a significant structural
change. In addition the the theoretical analysis of the test statistic, we
perform Monte Carlo simulations of our dynamic graph model to demonstrate that
our test can detect changes in graph topology. | [
1,
0,
0,
0,
0,
0
] |
Title: Semisuper Efimov effect of two-dimensional bosons at a three-body resonance,
Abstract: Wave-particle duality in quantum mechanics allows for a halo bound state
whose spatial extension far exceeds a range of the interaction potential. What
is even more striking is that such quantum halos can be arbitrarily large on
special occasions. The two examples known so far are the Efimov effect and the
super Efimov effect, which predict that spatial extensions of higher excited
states grow exponentially and double exponentially, respectively. Here, we
establish yet another new class of arbitrarily large quantum halos formed by
spinless bosons with short-range interactions in two dimensions. When the
two-body interaction is absent but the three-body interaction is resonant, four
bosons exhibit an infinite tower of bound states whose spatial extensions scale
as $R_n\sim e^{(\pi n)^2/27}$ for a large $n$. The emergent scaling law is
universal and is termed a semisuper Efimov effect, which together with the
Efimov and super Efimov effects constitutes a trio of few-body universality
classes allowing for arbitrarily large quantum halos. | [
0,
1,
0,
0,
0,
0
] |
Title: Optimizing the Latent Space of Generative Networks,
Abstract: Generative Adversarial Networks (GANs) have been shown to be able to sample
impressively realistic images. GAN training consists of a saddle point
optimization problem that can be thought of as an adversarial game between a
generator which produces the images, and a discriminator, which judges if the
images are real. Both the generator and the discriminator are commonly
parametrized as deep convolutional neural networks. The goal of this paper is
to disentangle the contribution of the optimization procedure and the network
parametrization to the success of GANs. To this end we introduce and study
Generative Latent Optimization (GLO), a framework to train a generator without
the need to learn a discriminator, thus avoiding challenging adversarial
optimization problems. We show experimentally that GLO enjoys many of the
desirable properties of GANs: learning from large data, synthesizing
visually-appealing samples, interpolating meaningfully between samples, and
performing linear arithmetic with noise vectors. | [
1,
0,
0,
1,
0,
0
] |
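The core of GLO as described above is to learn one latent vector per training image jointly with the generator, using a plain reconstruction loss and no discriminator. The sketch below substitutes a linear generator and alternating least squares for the paper's deep convolutional generator and SGD, so it only illustrates the optimization structure (including the projection of the latents onto the unit ball).

```python
# Sketch of Generative Latent Optimization: jointly fit per-sample latent codes
# Z and a (here linear) generator W by minimizing reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
n, dim_x, dim_z = 300, 64, 8
X = rng.standard_normal((n, dim_x))          # toy "images"

Z = rng.standard_normal((n, dim_z))          # one learnable latent per sample
Z /= np.maximum(1.0, np.linalg.norm(Z, axis=1, keepdims=True))   # unit-ball projection
W = rng.standard_normal((dim_z, dim_x)) * 0.1                    # linear "generator"

for _ in range(100):
    # update the generator with the latents fixed (least squares)
    W = np.linalg.lstsq(Z, X, rcond=None)[0]
    # update each latent with the generator fixed, then re-project to the unit ball
    Z = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T
    Z /= np.maximum(1.0, np.linalg.norm(Z, axis=1, keepdims=True))

recon = Z @ W
print("mean squared reconstruction error:", np.mean((X - recon) ** 2))
```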
Title: Conservativity of realizations on motives of abelian type over finite fields,
Abstract: We show that the l-adic realization functor is conservative when restricted
to the Chow motives of abelian type over a finite field. A weak version of this
conservativity result extends to mixed motives of abelian type. | [
0,
0,
1,
0,
0,
0
] |
Title: Shape-dependence of the barrier for skyrmions on a two-lane racetrack,
Abstract: Single magnetic skyrmions are localized whirls in the magnetization with an
integer winding number. They have been observed on nanometer scales up to room
temperature in multilayer structures. Due to their small size, topological
winding number, and their ability to be manipulated by extremely tiny forces,
they are often called interesting candidates for future memory devices. The
two-lane racetrack has to exhibit two lanes that are separated by an energy
barrier. The information is then encoded in the position of a skyrmion which is
located in one of these close-by lanes. The artificial barrier between the
lanes can be created by an additional nanostrip on top of the track. Here we
study the dependence of the potential barrier on the shape of the additional
nanostrip, calculating the potentials for a rectangular, triangular, and
parabolic cross section, as well as interpolations between the first two. We
find that a narrow barrier is always repulsive and that the height of the
potential strongly depends on the shape of the nanostrip, whereas the shape of
the potential is more universal. We finally show that the shape-dependence is
redundant for possible applications. | [
0,
1,
0,
0,
0,
0
] |
Title: Further Results on Size and Power of Heteroskedasticity and Autocorrelation Robust Tests, with an Application to Trend Testing,
Abstract: We complement the theory developed in Preinerstorfer and Pötscher (2016)
with further finite sample results on size and power of heteroskedasticity and
autocorrelation robust tests. These allow us, in particular, to show that the
sufficient conditions for the existence of size-controlling critical values
recently obtained in Pötscher and Preinerstorfer (2016) are often also
necessary. We furthermore apply the results obtained to tests for hypotheses on
deterministic trends in stationary time series regressions, and find that many
tests currently used are strongly size-distorted. | [
0,
0,
1,
1,
0,
0
] |
Title: A powerful approach to the study of moderate effect modification in observational studies,
Abstract: Effect modification means the magnitude or stability of a treatment effect
varies as a function of an observed covariate. Generally, larger and more
stable treatment effects are insensitive to larger biases from unmeasured
covariates, so a causal conclusion may be considerably firmer if this pattern
is noted when it occurs. We propose a new strategy, called the submax-method,
that combines exploratory and confirmatory efforts to determine whether there
is stronger evidence of causality - that is, greater insensitivity to
unmeasured confounding - in some subgroups of individuals. It uses the joint
distribution of test statistics that split the data in various ways based on
certain observed covariates. For $L$ binary covariates, the method splits the
population $L$ times into two subpopulations, perhaps first men and women,
perhaps then smokers and nonsmokers, computing a test statistic from each
subpopulation, and appends the test statistic for the whole population, making
$2L+1$ test statistics in total. Although $L$ binary covariates define $2^{L}$
interaction groups, only $2L+1$ tests are performed, and at least $L+1$ of
these tests use at least half of the data. The submax-method achieves the
highest design sensitivity and the highest Bahadur efficiency of its component
tests. Moreover, the form of the test is sufficiently tractable that its large
sample power may be studied analytically. The simulation suggests that the
submax method exhibits superior performance, in comparison with an approach
using CART, when there is effect modification of moderate size. Using data from
the NHANES I Epidemiologic Follow-Up Survey, an observational study of the
effects of physical activity on survival is used to illustrate the method. The
method is implemented in the $\texttt{R}$ package $\texttt{submax}$ which
contains the NHANES example. | [
0,
0,
0,
1,
0,
0
] |
Title: Ad-blocking: A Study on Performance, Privacy and Counter-measures,
Abstract: Many internet ventures rely on advertising for their revenue. However, users
feel discontented by the presence of ads on the websites they visit, as the
data-size of ads is often comparable to that of the actual content. This has an
impact not only on the loading time of webpages, but also on the internet bill
of the user in some cases. In the absence of a mutually-agreed procedure for opting
out of advertisements, many users resort to ad-blocking browser-extensions. In
this work, we study the performance of popular ad-blockers on a large set of
news websites. Moreover, we investigate the benefits of ad-blockers on user
privacy as well as the mechanisms used by websites to counter them. Finally, we
explore the traffic overhead due to the ad-blockers themselves. | [
1,
0,
0,
0,
0,
0
] |
Title: On recognizing shapes of polytopes from their shadows,
Abstract: Let $P$ and $Q$ be two convex polytopes both contained in the interior of a
Euclidean ball $r\textbf{B}^{d}$. We prove that $P=Q$ provided that their sight
cones from any point on the sphere $rS^{d-1}$ are congruent. We also prove an
analogous result for spherical projections. | [
0,
0,
1,
0,
0,
0
] |
Title: GANs for Biological Image Synthesis,
Abstract: In this paper, we propose a novel application of Generative Adversarial
Networks (GAN) to the synthesis of cells imaged by fluorescence microscopy.
Compared to natural images, cells tend to have a simpler and more geometric
global structure that facilitates image generation. However, the correlation
between the spatial pattern of different fluorescent proteins reflects
important biological functions, and synthesized images have to capture these
relationships to be relevant for biological applications. We adapt GANs to the
task at hand and propose new models with causal dependencies between image
channels that can generate multi-channel images, which would be impossible to
obtain experimentally. We evaluate our approach using two independent
techniques and compare it against sensible baselines. Finally, we demonstrate
that by interpolating across the latent space we can mimic the known changes in
protein localization that occur through time during the cell cycle, allowing us
to predict temporal evolution from static images. | [
1,
0,
0,
1,
0,
0
] |
Title: An objective classification of Saturn cloud features from Cassini ISS images,
Abstract: A clustering algorithm is applied to Cassini Imaging Science Subsystem
continuum and methane band images of Saturn's northern hemisphere to objectively
classify regional albedo features and aid in their dynamical interpretation.
The procedure is based on a technique applied previously to visible-infrared
images of Earth. It provides a new perspective on giant planet cloud morphology
and its relationship to the dynamics and a meteorological context for the
analysis of other types of simultaneous Saturn observations. The method
identifies six clusters that exhibit distinct morphology, vertical structure,
and preferred latitudes of occurrence. These correspond to areas dominated by
deep convective cells; low contrast areas, some including thinner and thicker
clouds possibly associated with baroclinic instability; regions with possible
isolated thin cirrus clouds; darker areas due to thinner low level clouds or
clearer skies due to downwelling, or due to absorbing particles; and fields of
relatively shallow cumulus clouds. The spatial associations among these cloud
types suggest that dynamically, there are three distinct types of latitude
bands on Saturn: deep convectively disturbed latitudes in cyclonic shear
regions poleward of the eastward jets; convectively suppressed regions near and
surrounding the westward jets; and baroclinically unstable latitudes near
eastward jet cores and in the anti-cyclonic regions equatorward of them. These
are roughly analogous to some of the features of Earth's tropics, subtropics,
and midlatitudes, respectively. Temporal variations of feature contrast and
cluster occurrence suggest that the upper tropospheric haze in the northern
hemisphere may have thickened by 2014. | [
0,
1,
0,
0,
0,
0
] |
Title: Intuitionistic Layered Graph Logic: Semantics and Proof Theory,
Abstract: Models of complex systems are widely used in the physical and social
sciences, and the concept of layering, typically building upon graph-theoretic
structure, is a common feature. We describe an intuitionistic substructural
logic called ILGL that gives an account of layering. The logic is a bunched
system, combining the usual intuitionistic connectives, together with a
non-commutative, non-associative conjunction (used to capture layering) and its
associated implications. We give soundness and completeness theorems for a
labelled tableaux system with respect to a Kripke semantics on graphs. We then
give an equivalent relational semantics, itself proven equivalent to an
algebraic semantics via a representation theorem. We utilise this result in two
ways. First, we prove decidability of the logic by showing the finite
embeddability property holds for the algebraic semantics. Second, we prove a
Stone-type duality theorem for the logic. By introducing the notions of ILGL
hyperdoctrine and indexed layered frame we are able to extend this result to a
predicate version of the logic and prove soundness and completeness theorems
for an extension of the layered graph semantics. We indicate the utility of
predicate ILGL with a resource-labelled bigraph model. | [
1,
0,
0,
0,
0,
0
] |
Title: Exciting Nucleons in Compton Scattering and Hydrogen-Like Atoms,
Abstract: This PhD thesis is devoted to the low-energy structure of the nucleon (proton
and neutron) as seen through electromagnetic probes, e.g., electron and Compton
scattering. The research presented here is based primarily on dispersion theory
and chiral effective-field theory. The main motivation is the recent proton
radius puzzle, which is the discrepancy between the classic proton charge
radius determinations (based on electron-proton scattering and normal hydrogen
spectroscopy) and the highly precise extraction based on first muonic-hydrogen
experiments by the CREMA Collaboration. The precision of muonic-hydrogen
experiments is presently limited by the knowledge of proton structure effects
beyond the charge radius. A major part of this thesis is devoted to calculating
these effects using everything we know about the nucleon electromagnetic
structure from both theory and experiment.
The thesis consists of eight chapters. The first and last are, respectively,
the introduction and conclusion. The remainder of this thesis can roughly be
divided into the following three topics: finite-size effects in hydrogen-like
atoms, real and virtual Compton scattering, and two-photon-exchange effects. | [
0,
1,
0,
0,
0,
0
] |
Title: Privacy-Preserving Deep Inference for Rich User Data on The Cloud,
Abstract: Deep neural networks are increasingly being used in a variety of machine
learning applications applied to rich user data on the cloud. However, this
approach introduces a number of privacy and efficiency challenges, as the cloud
operator can perform secondary inferences on the available data. Recently,
advances in edge processing have paved the way for more efficient, and private,
data processing at the source for simple tasks and lighter models, though they
remain a challenge for larger, and more complicated models. In this paper, we
present a hybrid approach for breaking down large, complex deep models for
cooperative, privacy-preserving analytics. We do this by breaking down the
popular deep architectures and fine-tune them in a particular way. We then
evaluate the privacy benefits of this approach based on the information exposed
to the cloud service. We also assess the local inference cost of different
layers on a modern handset for mobile applications. Our evaluations show that
by using certain kinds of fine-tuning and embedding techniques, and at a small
processing cost, we can greatly reduce the level of information available to
unintended tasks applied to the data features on the cloud, hence achieving
the desired tradeoff between privacy and performance. | [
1,
0,
0,
0,
0,
0
] |
Title: Gradient Method With Inexact Oracle for Composite Non-Convex Optimization,
Abstract: In this paper, we develop new first-order method for composite non-convex
minimization problems with simple constraints and inexact oracle. The objective
function is given as a sum of a "hard", possibly non-convex, part and a
"simple" convex part. Informally speaking, oracle inexactness means that, for
the "hard" part, at any point we can approximately calculate the value of the
function and construct a quadratic function, which approximately bounds this
function from above. We give several examples of such inexactness: smooth
non-convex functions with inexact Hölder-continuous gradient, functions given
by auxiliary uniformly concave maximization problem, which can be solved only
approximately. For the introduced class of problems, we propose a gradient-type
method, which allows one to use different proximal setups to adapt to the geometry of
the feasible set, adaptively chooses the controlled oracle error, and allows for
inexact proximal mappings. We provide a convergence rate for our method in terms
of the norm of generalized gradient mapping and show that, in the case of
inexact Hölder-continuous gradient, our method is universal with respect to
Hölder parameters of the problem. Finally, in a particular case, we show that
small value of the norm of generalized gradient mapping at a point means that a
necessary condition of local minimum approximately holds at that point. | [
0,
0,
1,
0,
0,
0
] |
Title: Kernel Implicit Variational Inference,
Abstract: Recent progress in variational inference has paid much attention to the
flexibility of variational posteriors. One promising direction is to use
implicit distributions, i.e., distributions without tractable densities as the
variational posterior. However, existing methods on implicit posteriors still
face challenges of noisy estimation and computational infeasibility when
applied to models with high-dimensional latent variables. In this paper, we
present a new approach named Kernel Implicit Variational Inference that
addresses these challenges. As far as we know, for the first time implicit
variational inference is successfully applied to Bayesian neural networks,
which shows promising results on both regression and classification tasks. | [
1,
0,
0,
1,
0,
0
] |
Title: The socle filtrations of principal series representations of $SL(3,\mathbb{R})$ and $Sp(2,\mathbb{R})$,
Abstract: We study the structure of the $(\mathfrak{g},K)$-modules of the principal
series representations of $SL(3,\mathbb{R})$ and $Sp(2,\mathbb{R})$ induced
from minimal parabolic subgroups, in the case when the infinitesimal character
is nonsingular. The composition factors of these modules are known by
the Kazhdan-Lusztig-Vogan conjecture. In this paper, we give complete descriptions
of the socle filtrations of these modules. | [
0,
0,
1,
0,
0,
0
] |
Title: Improving the phase response of an atom interferometer by means of temporal pulse shaping,
Abstract: We study theoretically and experimentally the influence of temporally shaping
the light pulses in an atom interferometer, with a focus on the phase response
of the interferometer. We show that smooth light pulse shapes allow rejecting
high frequency phase fluctuations (above the Rabi frequency) and thus relax the
requirements on the phase noise or frequency noise of the interrogation lasers
driving the interferometer. The light pulse shape is also shown to modify the
scale factor of the interferometer, which has to be taken into account in the
evaluation of its accuracy budget. We discuss the trade-offs to operate when
choosing a particular pulse shape, by taking into account phase noise
rejection, velocity selectivity, and applicability to large momentum transfer
atom interferometry. | [
0,
1,
0,
0,
0,
0
] |
Title: Thermoregulation in mice, rats and humans: An insight into the evolution of human hairlessness,
Abstract: The thermoregulation system in animals removes body heat in hot temperatures
and retains body heat in cold temperatures. The better the animal removes heat,
the worse the animal retains heat, and vice versa. It is the balance between
these two conflicting goals that determines the mammal's size, heart rate and
amount of hair. The rat's loss of tail hair and the human's loss of body hair
are responses to these conflicting thermoregulation needs as these animals
evolved to larger size over time. | [
0,
0,
0,
0,
1,
0
] |
Title: Duality and Universal Transport in a Mixed-Dimension Electrodynamics,
Abstract: We consider a theory of a two-component Dirac fermion localized on a (2+1)
dimensional brane coupled to a (3+1) dimensional bulk. Using the fermionic
particle-vortex duality, we show that the theory has a strong-weak duality that
maps the coupling $e$ to $\tilde e=(8\pi)/e$. We explore the theory at
$e^2=8\pi$ where it is self-dual. The electrical conductivity of the theory is
a constant independent of frequency. When the system is at finite density and
magnetic field at filling factor $\nu=\frac12$, the longitudinal and Hall
conductivity satisfies a semicircle law, and the ratio of the longitudinal and
Hall thermal electric coefficients is completely determined by the Hall angle.
The thermal Hall conductivity is directly related to the thermal electric
coefficients. | [
0,
1,
0,
0,
0,
0
] |
Title: New Methods of Enhancing Prediction Accuracy in Linear Models with Missing Data,
Abstract: In this paper, prediction for linear systems with missing information is
investigated. New methods are introduced to improve the Mean Squared Error
(MSE) on the test set in comparison to state-of-the-art methods, through
appropriate tuning of the Bias-Variance trade-off. First, the use of the proposed Soft
Weighted Prediction (SWP) algorithm and its efficacy are depicted and compared
to previous works for non-missing scenarios. The algorithm is then modified and
optimized for missing scenarios. It is shown that controlled over-fitting by
suggested algorithms will improve prediction accuracy in various cases.
Simulation results confirm our heuristics in enhancing the prediction accuracy. | [
1,
0,
0,
1,
0,
0
] |
Title: Revisiting Imidazolium Based Ionic Liquids: Effect of the Conformation Bias of the [NTf$_{2}$] Anion Studied By Molecular Dynamics Simulations,
Abstract: We study ionic liquids composed of 1-alkyl-3-methylimidazolium cations and
bis(trifluoromethyl-sulfonyl)imide anions ([C$_n$MIm][NTf$_2$]) with varying
chain-length $n\!=\!2, 4, 6, 8$ by using molecular dynamics simulations. We
show that a reparametrization of the dihedral potentials as well as charges of
the [NTf$_2$] anion leads to an improvement of the force field model introduced
by Köddermann {\em et al.} [ChemPhysChem, \textbf{8}, 2464 (2007)] (KPL-force
field). A crucial advantage of the new parameter set is that the minimum energy
conformations of the anion ({\em trans} and {\em gauche}), as deduced from {\em
ab initio} calculations and {\sc Raman} experiments, are now both well
represented by our model. In addition, the results for [C$_n$MIm][NTf$_2$] show
that this modification leads to an even better agreement between experiment and
molecular dynamics simulation as demonstrated for densities, diffusion
coefficients, vaporization enthalpies, reorientational correlation times, and
viscosities. Even though we focused on a better representation of the anion
conformation, also the alkyl chain-length dependence of the cation behaves
closer to the experiment. We strongly encourage the use of the new NGKPL force
field for the [NTf$_2$] anion instead of the earlier KPL parameter set for
computer simulations aiming to describe the thermodynamics, dynamics and also
structure of imidazolium based ionic liquids. | [
0,
1,
0,
0,
0,
0
] |
Title: Tick: a Python library for statistical learning, with a particular emphasis on time-dependent modelling,
Abstract: Tick is a statistical learning library for Python~3, with a particular
emphasis on time-dependent models, such as point processes, and tools for
generalized linear models and survival analysis. The core of the library is an
optimization module providing model computational classes, solvers and proximal
operators for regularization. Tick relies on a C++ implementation and
state-of-the-art optimization algorithms to provide very fast computations in a
single node multi-core setting. Source code and documentation can be downloaded
from this https URL | [
0,
0,
0,
1,
0,
0
] |
Title: An energy method for rough partial differential equations,
Abstract: We present a well-posedness and stability result for a class of nondegenerate
linear parabolic equations driven by rough paths. More precisely, we introduce
a notion of weak solution that satisfies an intrinsic formulation of the
equation in a suitable Sobolev space of negative order. Weak solutions are then
shown to satisfy the corresponding energy estimates which are deduced
directly from the equation. Existence is obtained by showing compactness of a
suitable sequence of approximate solutions whereas uniqueness relies on a
doubling of variables argument and a careful analysis of the passage to the
diagonal. Our result is optimal in the sense that the assumptions on the
deterministic part of the equation as well as the initial condition are the
same as in the classical PDEs theory. | [
0,
0,
1,
0,
0,
0
] |
Title: High Order Hierarchical Divergence-free Constrained Transport $H(div)$ Finite Element Method for Magnetic Induction Equation,
Abstract: In this paper, we will use the interior functions of a hierarchical basis
for high order $BDM_p$ elements to enforce the divergence-free condition of a
magnetic field $B$ approximated by the H(div) $BDM_p$ basis. The resulting
constrained finite element method can be used to solve the magnetic induction
equation in the MHD equations. The proposed procedure is based on the fact that the
scalar $(p-1)$-th order polynomial space on each element can be decomposed as
an orthogonal sum of the subspace defined by the divergence of the interior
functions of the $p$-th order $BDM_p$ basis and the constant function.
Therefore, the interior functions can be used to remove element-wise all higher
order terms except the constant in the divergence error of the finite element
solution of the $B$-field. The constant terms from each element can then be easily
corrected using a first order H(div) basis globally. Numerical results for a
3-D magnetic induction equation show the effectiveness of the proposed method
in enforcing divergence-free condition of the magnetic field. | [
0,
0,
1,
0,
0,
0
] |
Title: REMOTEGATE: Incentive-Compatible Remote Configuration of Security Gateways,
Abstract: Imagine that a malicious hacker is trying to attack a server over the
Internet and the server wants to block the attack packets as close to their
point of origin as possible. However, the security gateway ahead of the source
of attack is untrusted. How can the server block the attack packets through
this gateway? In this paper, we introduce REMOTEGATE, a trustworthy mechanism
for allowing any party (server) on the Internet to configure a security gateway
owned by a second party, at a certain agreed upon reward that the former pays
to the latter for its service. We take an interactive incentive-compatible
approach, for the case when both the server and the gateway are rational, to
devise a protocol that will allow the server to help the security gateway
generate and deploy a policy rule that filters the attack packets before they
reach the server. The server will reward the gateway only when the latter can
successfully verify that it has generated and deployed the correct rule for the
issue. This mechanism will enable an Internet-scale approach to improving
security and privacy, backed by digital payment incentives. | [
1,
0,
0,
0,
0,
0
] |
Title: Autocommuting probability of a finite group relative to its subgroups,
Abstract: Let $H \subseteq K$ be two subgroups of a finite group $G$ and Aut$(K)$ the
automorphism group of $K$. The autocommuting probability of $G$ relative to its
subgroups $H$ and $K$, denoted by ${\rm Pr}(H, {\rm Aut}(K))$, is the
probability that the autocommutator of a randomly chosen pair of elements, one
from $H$ and the other from Aut$(K)$, is equal to the identity element of $G$.
In this paper, we study ${\rm Pr}(H, {\rm Aut}(K))$ through a generalization. | [
0,
0,
1,
0,
0,
0
] |
Title: Radio observations confirm young stellar populations in local analogues to $z\sim5$ Lyman break galaxies,
Abstract: We present radio observations at 1.5 GHz of 32 local objects selected to
reproduce the physical properties of $z\sim5$ star-forming galaxies. We also
report non-detections of five such sources in the sub-millimetre. We find a
radio-derived star formation rate which is typically half that derived from
H$\alpha$ emission for the same objects. These observations support previous
indications that we are observing galaxies with a young dominant stellar
population, which has not yet established a strong supernova-driven synchrotron
continuum. We stress caution when applying star formation rate calibrations to
stellar populations younger than 100 Myr. We calibrate the conversions for
younger galaxies, which are dominated by a thermal radio emission component. We
improve the size constraints for these sources, compared to previous unresolved
ground-based optical observations. Their physical size limits indicate very
high star formation rate surface densities, several orders of magnitude higher
than the local galaxy population. In typical nearby galaxies, this would imply
the presence of galaxy-wide winds. Given the young stellar populations, it is
unclear whether a mechanism exists in our sources that can deposit sufficient
kinetic energy into the interstellar medium to drive such outflows. | [
0,
1,
0,
0,
0,
0
] |
Title: Sparse Deep Neural Network Exact Solutions,
Abstract: Deep neural networks (DNNs) have emerged as key enablers of machine learning.
Applying larger DNNs to more diverse applications is an important challenge.
The computations performed during DNN training and inference are dominated by
operations on the weight matrices describing the DNN. As DNNs incorporate more
layers and more neurons per layer, these weight matrices may be required to be
sparse because of memory limitations. Sparse DNNs are one possible approach,
but the underlying theory is in the early stages of development and presents a
number of challenges, including determining the accuracy of inference and
selecting nonzero weights for training. Associative array algebra has been
developed by the big data community to combine and extend database, matrix, and
graph/network concepts for use in large, sparse data problems. Applying this
mathematics to DNNs simplifies the formulation of DNN mathematics and reveals
that DNNs are linear over oscillating semirings. This work uses associative
array DNNs to construct exact solutions and corresponding perturbation models
to the rectified linear unit (ReLU) DNN equations that can be used to construct
test vectors for sparse DNN implementations over various precisions. These
solutions can be used for DNN verification, theoretical explorations of DNN
properties, and a starting point for the challenge of sparse training. | [
0,
0,
0,
1,
0,
0
] |
Title: Variation formulas for an extended Gompf invariant,
Abstract: In 1998, R. Gompf defined a homotopy invariant $\theta_G$ of oriented 2-plane
fields in 3-manifolds. This invariant is defined for oriented 2-plane fields
$\xi$ in a closed oriented 3-manifold $M$ when the first Chern class $c_1(\xi)$
is a torsion element of $H^2(M;\mathbb{Z})$. In this article, we define an
extension of the Gompf invariant for all compact oriented 3-manifolds with
boundary and we study its iterated variations under Lagrangian-preserving
surgeries. It follows that the extended Gompf invariant is a degree two
invariant with respect to a suitable finite type invariant theory. | [
0,
0,
1,
0,
0,
0
] |
Title: An Exploration of Mimic Architectures for Residual Network Based Spectral Mapping,
Abstract: Spectral mapping uses a deep neural network (DNN) to map directly from noisy
speech to clean speech. Our previous study found that the performance of
spectral mapping improves greatly when using helpful cues from an acoustic
model trained on clean speech. The mapper network learns to mimic the input
favored by the spectral classifier and cleans the features accordingly. In this
study, we explore two new innovations: we replace a DNN-based spectral mapper
with a residual network that is more attuned to the goal of predicting clean
speech. We also examine how integrating long term context in the mimic
criterion (via wide-residual biLSTM networks) affects the performance of
spectral mapping compared to DNNs. Our goal is to derive a model that can be
used as a preprocessor for any recognition system; the features derived from
our model are passed through the standard Kaldi ASR pipeline and achieve a WER
of 9.3%, which is the lowest recorded word error rate for CHiME-2 dataset using
only feature adaptation. | [
1,
0,
0,
0,
0,
0
] |
Title: Deep Neural Networks to Enable Real-time Multimessenger Astrophysics,
Abstract: Gravitational wave astronomy has set in motion a scientific revolution. To
further enhance the science reach of this emergent field, there is a pressing
need to increase the depth and speed of the gravitational wave algorithms that
have enabled these groundbreaking discoveries. To contribute to this effort, we
introduce Deep Filtering, a new highly scalable method for end-to-end
time-series signal processing, based on a system of two deep convolutional
neural networks, which we designed for classification and regression to rapidly
detect and estimate parameters of signals in highly noisy time-series data
streams. We demonstrate a novel training scheme with gradually increasing noise
levels, and a transfer learning procedure between the two networks. We showcase
the application of this method for the detection and parameter estimation of
gravitational waves from binary black hole mergers. Our results indicate that
Deep Filtering significantly outperforms conventional machine learning
techniques, achieves similar performance compared to matched-filtering while
being several orders of magnitude faster, thus allowing real-time processing of
raw big data with minimal resources. More importantly, Deep Filtering extends
the range of gravitational wave signals that can be detected with ground-based
gravitational wave detectors. This framework leverages recent advances in
artificial intelligence algorithms and emerging hardware architectures, such as
deep-learning-optimized GPUs, to facilitate real-time searches of gravitational
wave sources and their electromagnetic and astro-particle counterparts. | [
1,
1,
0,
0,
0,
0
] |
Title: Orbital Evolution, Activity, and Mass Loss of Comet C/1995 O1 (Hale-Bopp). I. Close Encounter with Jupiter in Third Millennium BCE and Effects of Outgassing on the Comet's Motion and Physical Properties,
Abstract: This comprehensive study of comet C/1995 O1 focuses first on investigating
its orbital motion over a period of 17.6 yr (1993-2010). The comet is suggested
to have approached Jupiter to 0.005 AU on -2251 November 7, in general
conformity with Marsden's (1999) proposal of a Jovian encounter nearly 4300 yr
ago. The variations of sizable nongravitational effects with heliocentric
distance correlate with the evolution of outgassing, asymmetric relative to
perihelion. The future orbital period will shorten to ~1000 yr because of
orbital-cascade resonance effects. We find that the sublimation curves of
parent molecules are fitted with the type of a law used for the
nongravitational acceleration, determine their orbit-integrated mass loss, and
conclude that the share of water ice was at most 57%, and possibly less than
50%, of the total outgassed mass. Even though organic parent molecules (many
still unidentified) had very low abundances relative to water individually,
their high molar mass and sheer number made them, summarily, important
potential mass contributors to the total production of gas. The mass loss of
dust per orbit exceeded that of water ice by a factor of ~12, a dust loading
high enough to imply a major role for heavy organic molecules of low volatility
in accelerating the minuscule dust particles in the expanding halos to terminal
velocities as high as 0.7 km s^{-1}. In Part II, the comet's nucleus will be
modeled as a compact cluster of massive fragments to conform to the integrated
nongravitational effect. | [
0,
1,
0,
0,
0,
0
] |
Title: Implementing GraphQL as a Query Language for Deductive Databases in SWI-Prolog Using DCGs, Quasi Quotations, and Dicts,
Abstract: The methods to access large relational databases in a distributed system are
well established: the relational query language SQL often serves as a language
for data access and manipulation, and in addition public interfaces are exposed
using communication protocols like REST. Similarly to REST, GraphQL is the
query protocol of an application layer developed by Facebook. It provides a
unified interface between the client and the server for data fetching and
manipulation. Using GraphQL's type system, it is possible to specify data
handling of various sources and to combine, e.g., relational with NoSQL
databases. In contrast to REST, GraphQL provides a single API endpoint and
supports flexible queries over linked data.
GraphQL can also be used as an interface for deductive databases. In this
paper, we give an introduction to GraphQL and a comparison to REST. Using
language features recently added to SWI-Prolog 7, we have developed the Prolog
library GraphQL.pl, which implements the GraphQL type system and query syntax
as a domain-specific language with the help of definite clause grammars (DCG),
quasi quotations, and dicts. Using our library, the type system created for a
deductive database can be validated, while the query system provides a unified
interface for data access and introspection. | [
1,
0,
0,
0,
0,
0
] |
Title: New Determinant Expressions of the Multi-indexed Orthogonal Polynomials in Discrete Quantum Mechanics,
Abstract: The multi-indexed orthogonal polynomials (the Meixner, little $q$-Jacobi
(Laguerre), ($q$-)Racah, Wilson, Askey-Wilson types) satisfying second order
difference equations were constructed in discrete quantum mechanics. They are
polynomials in the sinusoidal coordinates $\eta(x)$ ($x$ is the coordinate of
quantum system) and expressed in terms of the Casorati determinants whose
matrix elements are functions of $x$ at various points. By using shape
invariance properties, we derive various equivalent determinant expressions,
especially those whose matrix elements are functions of the same point $x$.
Except for the ($q$-)Racah case, they can be expressed in terms of $\eta$ only,
without explicit $x$-dependence. | [
0,
1,
1,
0,
0,
0
] |
Title: Integrating a Global Induction Mechanism into a Sequent Calculus,
Abstract: Most interesting proofs in mathematics contain an inductive argument which
requires an extension of the LK-calculus to formalize. The most commonly used
calculi for induction contain a separate rule or axiom which reduces the valid
proof theoretic properties of the calculus. To the best of our knowledge, there
are no such calculi which allow cut-elimination to a normal form with the
subformula property, i.e. every formula occurring in the proof is a subformula
of the end sequent. Proof schemata are a variant of LK-proofs able to simulate
induction by linking proofs together. There exists a schematic normal form
which has comparable proof theoretic behaviour to normal forms with the
subformula property. However, a calculus for the construction of proof schemata
does not exist. In this paper, we introduce a calculus for proof schemata and
prove soundness and completeness with respect to a fragment of the inductive
arguments formalizable in Peano arithmetic. | [
1,
0,
0,
0,
0,
0
] |
Title: An analytic resolution to the competition between Lyman-Werner radiation and metal winds in direct collapse black hole hosts,
Abstract: A near pristine atomic cooling halo close to a star forming galaxy offers a
natural pathway for forming massive direct collapse black hole (DCBH) seeds
which could be the progenitors of the $z>6$ redshift quasars. The close
proximity of the haloes enables a sufficient Lyman-Werner flux to effectively
dissociate H$_2$ in the core of the atomic cooling halo. A mild background may
also be required to delay star formation in the atomic cooling halo, often
attributed to distant background galaxies. In this letter we investigate the
impact of metal enrichment from both the background galaxies and the close star
forming galaxy under extremely unfavourable conditions such as instantaneous
metal mixing. We find that within the time window of DCBH formation, the level
of enrichment never exceeds the critical threshold (Z$_{cr} \sim 1 \times
10^{-5} \ \rm Z_{\odot})$, and attains a maximum metallicity of Z $\sim 2
\times 10^{-6} \ \rm Z_{\odot}$. As the system evolves, the metallicity
eventually exceeds the critical threshold, long after the DCBH has formed. | [
0,
1,
0,
0,
0,
0
] |
Title: A comprehensive study of batch construction strategies for recurrent neural networks in MXNet,
Abstract: In this work we compare different batch construction methods for mini-batch
training of recurrent neural networks. While popular implementations like
TensorFlow and MXNet suggest a bucketing approach to improve the
parallelization capabilities of the recurrent training process, we propose a
simple ordering strategy that arranges the training sequences in a stochastic
alternatingly sorted way. We compare our method to sequence bucketing as well
as various other batch construction strategies on the CHiME-4 noisy speech
recognition corpus. The experiments show that our alternated sorting approach
is able to compete both in training time and recognition performance while
being conceptually simpler to implement. | [
1,
0,
0,
1,
0,
0
] |
Title: On a class of shift-invariant subspaces of the Drury-Arveson space,
Abstract: In the Drury-Arveson space, we consider the subspace of functions whose
Taylor coefficients are supported in the complement of a set
$Y\subset\mathbb{N}^d$ with the property that $Y+e_j\subset Y$ for all
$j=1,\dots,d$. This is an easy example of a shift-invariant subspace, which can
be considered as a RKHS in its own right, with a kernel that can be explicitly
calculated. Moreover, every such space can be seen as an intersection of
kernels of Hankel operators, whose symbols can be explicitly calculated as well.
Finally, this is the right space on which Drury's inequality can be optimally
adapted to a sub-family of the commuting and contractive operators originally
considered by Drury. | [
0,
0,
1,
0,
0,
0
] |
Title: Search for axions in streaming dark matter,
Abstract: A new search strategy for the detection of the elusive dark matter (DM) axion
is proposed. The idea is based on streaming DM axions, whose flux might get
temporally enormously enhanced due to gravitational lensing. This can happen if
the Sun or some planet (including the Moon) is found along the direction of a
DM stream propagating towards the Earth's location. The experimental requirements
for the axion haloscope are wide-band performance combined with a fast axion
rest mass scanning mode, which are feasible. Once both conditions have been
implemented in a haloscope, the axion search can continue parasitically almost
as before. Interestingly, some new DM axion detectors are operating wide-band
by default. In order not to miss the actually unpredictable timing of a
potential short duration signal, a network of co-ordinated axion antennae is
required, preferentially distributed world-wide. The reasoning presented here
for axions also applies, to some degree, to other DM candidates such as
WIMPs. | [
0,
1,
0,
0,
0,
0
] |
Title: Faster integer and polynomial multiplication using cyclotomic coefficient rings,
Abstract: We present an algorithm that computes the product of two n-bit integers in
O(n log n (4\sqrt 2)^{log^* n}) bit operations. Previously, the best known
bound was O(n log n 6^{log^* n}). We also prove that for a fixed prime p,
polynomials in F_p[X] of degree n may be multiplied in O(n log n 4^{log^* n})
bit operations; the previous best bound was O(n log n 8^{log^* n}). | [
1,
0,
0,
0,
0,
0
] |
Title: Multiscale Change-point Segmentation: Beyond Step Functions,
Abstract: Modern multiscale type segmentation methods are known to detect multiple
change-points with high statistical accuracy, while allowing for fast
computation. Underpinning theory has been developed mainly for models that
assume the signal as a piecewise constant function. In this paper this will be
extended to certain function classes beyond such step functions in a
nonparametric regression setting, revealing certain multiscale segmentation
methods as robust to deviation from such piecewise constant functions. Our main
finding is adaptation over such function classes for a universal
thresholding, which includes bounded variation functions and (piecewise)
Hölder functions of smoothness order $0 < \alpha \le 1$ as special cases.
From this we derive statistical guarantees on feature detection in terms of
jumps and modes. Another key finding is that these multiscale segmentation
methods perform nearly (up to a log-factor) as well as the oracle piecewise
constant segmentation estimator (with known jump locations), and the best
piecewise constant approximants of the (unknown) true signal. Theoretical
findings are examined by various numerical simulations. | [
0,
0,
1,
1,
0,
0
] |
Title: Data Motif-based Proxy Benchmarks for Big Data and AI Workloads,
Abstract: For the architecture community, reasonable simulation time is a strong
requirement in addition to performance data accuracy. However, emerging big
data and AI workloads are too huge at binary size level and prohibitively
expensive to run on cycle-accurate simulators. The concept of data motif, which
is identified as a class of units of computation performed on initial or
intermediate data, is the first step towards building proxy benchmark to mimic
the real-world big data and AI workloads. However, there is no practical way to
construct a proxy benchmark based on the data motifs to help simulation-based
research. In this paper, we embark on a study to bridge the gap between data
motif and a practical proxy benchmark. We propose a data motif-based proxy
benchmark generating methodology by means of a machine learning method, which
combines data motifs with different weights to mimic the big data and AI
workloads. Furthermore, we implement various data motifs using light-weight
stacks and apply the methodology to five real-world workloads to construct a
suite of proxy benchmarks, considering the data types, patterns, and
distributions. The evaluation results show that our proxy benchmarks shorten
the execution time by factors of hundreds on real systems while maintaining the average
system and micro-architecture performance data accuracy above 90%, even
changing the input data sets or cluster configurations. Moreover, the generated
proxy benchmarks reflect consistent performance trends across different
architectures. To facilitate the community, we will release the proxy
benchmarks on the project homepage this http URL. | [
1,
0,
0,
0,
0,
0
] |
Title: The neighborhood lattice for encoding partial correlations in a Hilbert space,
Abstract: Neighborhood regression has been a successful approach in graphical and
structural equation modeling, with applications to learning undirected and
directed graphical models. We extend these ideas by defining and studying an
algebraic structure called the neighborhood lattice based on a generalized
notion of neighborhood regression. We show that this algebraic structure has
the potential to provide an economic encoding of all conditional independence
statements in a Gaussian distribution (or conditional uncorrelatedness in
general), even in the cases where no graphical model exists that could
"perfectly" encode all such statements. We study the computational complexity
of computing these structures and show that under a sparsity assumption, they
can be computed in polynomial time, even in the absence of the assumption of
perfectness to a graph. On the other hand, assuming perfectness, we show how
these neighborhood lattices may be "graphically" computed using the separation
properties of the so-called partial correlation graph. We also draw connections
with directed acyclic graphical models and Bayesian networks. We derive these
results using an abstract generalization of partial uncorrelatedness, called
partial orthogonality, which allows us to use algebraic properties of
projection operators on Hilbert spaces to significantly simplify and extend
existing ideas and arguments. Consequently, our results apply to a wide range
of random objects and data structures, such as random vectors, data matrices,
and functions. | [
1,
0,
1,
1,
0,
0
] |
Title: The 2-adic complexity of a class of binary sequences with almost optimal autocorrelation,
Abstract: Pseudo-random sequences with good statistical properties, such as low
autocorrelation, high linear complexity and large 2-adic complexity, have been
applied in stream ciphers. In general, it is difficult to give both the linear
complexity and 2-adic complexity of a periodic binary sequence. Cai and Ding
\cite{Cai Ying} gave a class of sequences with almost optimal autocorrelation
by constructing almost difference sets. Wang \cite{Wang Qi} proved that one
type of those sequences by Cai and Ding has large linear complexity. Sun et al.
\cite{Sun Yuhua} showed that another type of sequences by Cai and Ding has also
large linear complexity. Additionally, Sun et al. also generalized the
construction by Cai and Ding using $d$-form function with difference-balanced
property. In this paper, we first give the detailed autocorrelation
distribution of the sequences generalized from Cai and Ding \cite{Cai Ying}
by Sun et al. \cite{Sun Yuhua}. Then, inspired by the method of Hu \cite{Hu
Honggang}, we analyse their 2-adic complexity and give a lower bound on the
2-adic complexity of these sequences. Our results show that the 2-adic
complexity of these sequences is at least $N-\mathrm{log}_2\sqrt{N+1}$ and that
it reaches $N-1$ in many cases, which is large enough to resist the rational
approximation algorithm (RAA) for feedback with carry shift registers (FCSRs). | [
1,
0,
1,
0,
0,
0
] |
Title: Minimal solutions to generalized Lambda-semiflows and gradient flows in metric spaces,
Abstract: Generalized Lambda-semiflows are an abstraction of semiflows with
non-periodic solutions, for which there may be more than one solution
corresponding to given initial data. A select class of solutions to generalized
Lambda-semiflows is introduced. It is proved that such minimal solutions are
unique corresponding to given ranges and generate all other solutions by time
reparametrization. Special qualities of minimal solutions are shown. The
concept of minimal solutions is applied to gradient flows in metric spaces and
generalized semiflows. Generalized semiflows have been introduced by Ball. | [
0,
0,
1,
0,
0,
0
] |
Title: Estimates for maximal functions associated to hypersurfaces in $\Bbb R^3$ with height $h<2:$ Part I,
Abstract: In this article, we continue the study of the problem of $L^p$-boundedness of
the maximal operator $M$ associated to averages along isotropic dilates of a
given, smooth hypersurface $S$ of finite type in 3-dimensional Euclidean space.
An essentially complete answer to this problem had been given about seven years
ago by the last named two authors in joint work with M. Kempe for the case
where the height h of the given surface is at least two. In the present
article, we turn to the case $h<2.$ More precisely, in this Part I, we study
the case where $h<2,$ assuming that $S$ is contained in a sufficiently small
neighborhood of a given point $x^0\in S$ at which both principal curvatures of
$S$ vanish. Under these assumptions and a natural transversality assumption, we
show that, as in the case where $h\ge 2,$ the critical Lebesgue exponent for
the boundedness of $M$ remains to be $p_c=h,$ even though the proof of this
result turns out to require new methods, some of which are inspired by the more
recent work by the last named two authors on Fourier restriction to S. Results
on the case where $h<2$ and exactly one principal curvature of $S$ does not
vanish at $x^0$ will appear elsewhere. | [
0,
0,
1,
0,
0,
0
] |
Title: Using Inertial Sensors for Position and Orientation Estimation,
Abstract: In recent years, MEMS inertial sensors (3D accelerometers and 3D gyroscopes)
have become widely available due to their small size and low cost. Inertial
sensor measurements are obtained at high sampling rates and can be integrated
to obtain position and orientation information. These estimates are accurate on
a short time scale, but suffer from integration drift over longer time scales.
To overcome this issue, inertial sensors are typically combined with additional
sensors and models. In this tutorial we focus on the signal processing aspects
of position and orientation estimation using inertial sensors. We discuss
different modeling choices and a selected number of important algorithms. The
algorithms include optimization-based smoothing and filtering as well as
computationally cheaper extended Kalman filter and complementary filter
implementations. The quality of their estimates is illustrated using both
experimental and simulated data. | [
1,
0,
0,
0,
0,
0
] |
Title: The Massive CO White Dwarf in the Symbiotic Recurrent Nova RS Ophiuchi,
Abstract: If accreting white dwarfs (WD) in binary systems are to produce type Ia
supernovae (SNIa), they must grow to nearly the Chandrasekhar mass and ignite
carbon burning. Proving conclusively that a WD has grown substantially since
its birth is a challenging task. Slow accretion of hydrogen inevitably leads to
the erosion, rather than the growth of WDs. Rapid hydrogen accretion does lead
to growth of a helium layer, due to both decreased degeneracy and the
inhibition of mixing of the accreted hydrogen with the underlying WD. However,
until recently, simulations of helium-accreting WDs all claimed to show the
explosive ejection of a helium envelope once it exceeded $\sim 10^{-1}\, \rm
M_{\odot}$. Because CO WDs cannot be born with masses in excess of $\sim 1.1\,
\rm M_{\odot}$, any such object, in excess of $\sim 1.2\, \rm M_{\odot}$, must
have grown substantially. We demonstrate that the WD in the symbiotic nova RS
Oph is in the mass range 1.2-1.4\,M$_{\odot}$. We compare UV spectra of RS Oph
with those of novae with ONe WDs, and with novae erupting on CO WDs. The RS Oph
WD is clearly made of CO, demonstrating that it has grown substantially since
birth. It is a prime candidate to eventually produce an SNIa. | [
0,
1,
0,
0,
0,
0
] |
Title: Localization landscape theory of disorder in semiconductors II: Urbach tails of disordered quantum well layers,
Abstract: Urbach tails in semiconductors are often associated to effects of
compositional disorder. The Urbach tail observed in InGaN alloy quantum wells
of solar cells and LEDs by biased photocurrent spectroscopy is shown to be
characteristic of the ternary alloy disorder. The broadening of the absorption
edge observed for quantum wells emitting from violet to green (indium content
ranging from 0 to 28\%) corresponds to a typical Urbach energy of 20~meV. A 3D
absorption model is developed based on a recent theory of disorder-induced
localization which provides the effective potential seen by the localized
carriers without having to resort to the solution of the Schrödinger equation
in a disordered potential. This model incorporating compositional disorder
accounts well for the experimental broadening of the Urbach tail of the
absorption edge. For energies below the Urbach tail of the InGaN quantum wells,
type-II well-to-barrier transitions are observed and modeled. This contribution
to the below bandgap absorption is particularly efficient in near-UV emitting
quantum wells. When reverse biasing the device, the well-to-barrier below
bandgap absorption exhibits a red shift, while the Urbach tail corresponding to
the absorption within the quantum wells is blue shifted, due to the partial
compensation of the internal piezoelectric fields by the external bias. The
good agreement between the measured Urbach tail and its modeling by the new
localization theory demonstrates the applicability of the latter to
compositional disorder effects in nitride semiconductors. | [
0,
1,
0,
0,
0,
0
] |
Title: Why Abeta42 Is Much More Toxic Than Abeta40,
Abstract: Amyloid precursor with 770 amino acids dimerizes and aggregates, as do its
C-terminal 99 amino acid fragment and the amyloid 40 and 42 amino acid fragments. The titled
question has been discussed extensively, and here it is addressed further using
thermodynamic scaling theory to analyze mutational trends in structural factors
and kinetics. Special attention is given to Family Alzheimer's Disease
mutations outside amyloid 42. The scaling analysis is connected to extensive
docking simulations which included membranes, thereby confirming their results
and extending them to Amyloid precursor. | [
0,
0,
0,
0,
1,
0
] |
Title: A Polynomial Time Algorithm for Spatio-Temporal Security Games,
Abstract: An ever-important issue is protecting infrastructure and other valuable
targets from a range of threats from vandalism to theft to piracy to terrorism.
The "defender" can rarely afford the needed resources for a 100% protection.
Thus, the key question is, how to provide the best protection using the limited
available resources. We study a practically important class of security games
that is played out in space and time, with targets and "patrols" moving on a
real line. A central open question here is whether the Nash equilibrium (i.e.,
the minimax strategy of the defender) can be computed in polynomial time. We
resolve this question in the affirmative. Our algorithm runs in time polynomial
in the input size, and only polylogarithmic in the number of possible patrol
locations (M). Further, we provide a continuous extension in which patrol
locations can take arbitrary real values. Prior work obtained polynomial-time
algorithms only under a substantial assumption, e.g., a constant number of
rounds. Further, all these algorithms have running times polynomial in M, which
can be very large. | [
1,
0,
0,
0,
0,
0
] |
Title: TIDBD: Adapting Temporal-difference Step-sizes Through Stochastic Meta-descent,
Abstract: In this paper, we introduce a method for adapting the step-sizes of temporal
difference (TD) learning. The performance of TD methods often depends on well
chosen step-sizes, yet few algorithms have been developed for setting the
step-size automatically for TD learning. An important limitation of current
methods is that they adapt a single step-size shared by all the weights of the
learning system. A vector step-size enables greater optimization by specifying
parameters on a per-feature basis. Furthermore, adapting parameters at
different rates has the added benefit of being a simple form of representation
learning. We generalize Incremental Delta Bar Delta (IDBD)---a vectorized
adaptive step-size method for supervised learning---to TD learning, which we
name TIDBD. We demonstrate that TIDBD is able to find appropriate step-sizes in
both stationary and non-stationary prediction tasks, outperforming ordinary TD
methods and TD methods with scalar step-size adaptation; we demonstrate that it
can differentiate between features which are relevant and irrelevant for a
given task, performing representation learning; and we show on a real-world
robot prediction task that TIDBD is able to outperform ordinary TD methods and
TD methods augmented with AlphaBound and RMSprop. | [
0,
0,
0,
1,
0,
0
] |
Title: Enhanced clustering tendency of Cu-impurities with a number of oxygen vacancies in heavy carbon-loaded TiO2 - the bulk and surface morphologies,
Abstract: The over-threshold carbon loadings (~50 at.%) of the initial TiO2 hosts and
the posterior Cu-sensitization (~7 at.%) were made using a pulsed ion-implantation
technique in sequential mode with a 1 hour vacuum-idle cycle between sequential
stages of embedding. The final Cx-TiO2:Cu samples were qualified using XPS
wide-scan elemental analysis, core-levels and valence band mappings. The
results obtained were discussed on a theoretical background employing
DFT-calculations. The combined XPS and DFT analysis allows us to establish and
prove the final formula of the synthesized samples as Cx-TiO2:[Cu+][Cu2+] for
the bulk and Cx-TiO2:[Cu+][Cu0] for thin-films. It was demonstrated the in the
mode of heavy carbon-loadings the remaining majority of neutral C-C bonds
(sp3-type) is dominating and only a lack of embedded carbon is fabricating the
O-C=O clusters. No valence base-band width altering was established after
sequential carbon-copper modification of the atomic structure of initial
TiO2-hosts except the dominating majority of Cu 3s states after
Cu-sensitization. The crucial role of neutral carbon low-dimensional impurities
as the precursors for the new phases growth was shown for Cu-sensitized Cx-TiO2
intermediate-state hosts. | [
0,
1,
0,
0,
0,
0
] |
Title: Quantum Paramagnet and Frustrated Quantum Criticality in a Spin-One Diamond Lattice Antiferromagnet,
Abstract: Motivated by the proposal of topological quantum paramagnet in the diamond
lattice antiferromagnet NiRh$_2$O$_4$, we propose a minimal model to describe
the magnetic interaction and properties of the diamond material with the
spin-one local moments. Our model includes the first and second neighbor
Heisenberg interactions as well as a local single-ion spin anisotropy that is
allowed by the spin-one nature of the local moment and the tetragonal symmetry
of the system. We point out that there exists a quantum phase transition from a
trivial quantum paramagnet when the single-ion spin anisotropy is dominant to
the magnetic ordered states when the exchange is dominant. Due to the
frustrated spin interaction, the magnetic excitation in the quantum
paramagnetic state supports extensively degenerate band minima in the spectra.
As the system approaches the transition, extensively degenerate bosonic modes
become critical at the criticality, giving rise to unusual magnetic properties.
Our phase diagram and experimental predictions for different phases provide a
guideline for the identification of the ground state of NiRh$_2$O$_4$.
Although our results are fundamentally different from the proposal of a
topological quantum paramagnet, they represent interesting possibilities for
spin-one diamond lattice antiferromagnets. | [
0,
1,
0,
0,
0,
0
] |
Title: Characterizations of minimal dominating sets and the well-dominated property in lexicographic product graphs,
Abstract: A graph is said to be well-dominated if all its minimal dominating sets are
of the same size. The class of well-dominated graphs forms a subclass of the
well studied class of well-covered graphs. While the recognition problem for
the class of well-covered graphs is known to be co-NP-complete, the recognition
complexity of well-dominated graphs is open.
In this paper we introduce the notion of an irreducible dominating set, a
variant of dominating set generalizing both minimal dominating sets and minimal
total dominating sets. Based on this notion, we characterize the family of
minimal dominating sets in a lexicographic product of two graphs and derive a
characterization of the well-dominated lexicographic product graphs. As a side
result motivated by this study, we give a polynomially testable
characterization of well-dominated graphs with domination number two, and show,
more generally, that well-dominated graphs can be recognized in polynomial time
in any class of graphs with bounded domination number. Our results include a
characterization of dominating sets in lexicographic product graphs, which
generalizes the expression for the domination number of such graphs following
from works of Zhang et al. (2011) and of Šumenjak et al. (2012). | [
1,
0,
1,
0,
0,
0
] |
Title: Affiliation networks with an increasing degree sequence,
Abstract: An affiliation network is a kind of two-mode social network with two different
sets of nodes (namely, a set of actors and a set of social events) and edges
representing the affiliation of the actors with the social events. Although a
number of statistical models are proposed to analyze affiliation networks, the
asymptotic behaviors of the estimator are still unknown or have not been
properly explored. In this paper, we study an affiliation model with the degree
sequence as the exclusively natural sufficient statistic in the exponential
family distributions. We establish the uniform consistency and asymptotic
normality of the maximum likelihood estimator when the numbers of actors and
events both go to infinity. Simulation studies and a real data example
demonstrate our theoretical results. | [
0,
0,
1,
1,
0,
0
] |
Title: Coarse Grained Parallel Selection,
Abstract: We analyze the running time of the Saukas-Song algorithm for selection on a
coarse grained multicomputer without expressing the running time in terms of
communication rounds. This shows that while in the best case the Saukas-Song
algorithm runs in asymptotically optimal time, in general it does not. We
propose other algorithms for coarse grained selection that have optimal
expected running time. | [
1,
0,
0,
0,
0,
0
] |
Title: Aggregation of Classifiers: A Justifiable Information Granularity Approach,
Abstract: In this study, we introduce a new approach to combine multi-classifiers in an
ensemble system. Instead of using numeric membership values encountered in
fixed combining rules, we construct interval membership values associated with
each class prediction at the level of meta-data of observation by using
concepts of information granules. In the proposed method, uncertainty
(diversity) of findings produced by the base classifiers is quantified by
interval-based information granules. The discriminative decision model is
generated by considering both the bounds and the length of the obtained
intervals. We select ten and then fifteen learning algorithms to build a
heterogeneous ensemble system and then conduct experiments on a number of
UCI datasets. The experimental results demonstrate that the proposed approach
performs better than the benchmark algorithms including six fixed combining
methods, one trainable combining method, AdaBoost, Bagging, and Random
Subspace. | [
1,
0,
0,
1,
0,
0
] |
Title: FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search,
Abstract: We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for
\textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search
system for ultra-high dimensional datasets on a single machine, that does not
require similarity computations and is tailored for high-performance computing
platforms. By leveraging an LSH-style randomized indexing procedure and
combining it with several principled techniques, such as reservoir sampling,
recent advances in one-pass minwise hashing, and count based estimations, we
reduce the computational and parallelization costs of similarity search, while
retaining sound theoretical guarantees.
We evaluate FLASH on several real, high-dimensional datasets from different
domains, including text, malicious URL, click-through prediction, social
networks, etc. Our experiments shed new light on the difficulties associated
with datasets having several million dimensions. Current state-of-the-art
implementations either fail on the presented scale or are orders of magnitude
slower than FLASH. FLASH is capable of computing an approximate k-NN graph,
from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than
10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam
dataset using brute force ($n^2D$) would require at least 20 teraflops. We
provide CPU and GPU implementations of FLASH for replicability of our results. | [
1,
0,
0,
0,
0,
0
] |
Title: Output Range Analysis for Deep Neural Networks,
Abstract: Deep neural networks (NN) are extensively used for machine learning tasks
such as image classification, perception and control of autonomous systems.
Increasingly, these deep NNs are also being deployed in high-assurance
applications. Thus, there is a pressing need for developing techniques to
verify neural networks to check whether certain user-expected properties are
satisfied. In this paper, we study a specific verification problem of computing
a guaranteed range for the output of a deep neural network given a set of
inputs represented as a convex polyhedron. Range estimation is a key primitive
for verifying deep NNs. We present an efficient range estimation algorithm that
uses a combination of local search and linear programming problems to
efficiently find the maximum and minimum values taken by the outputs of the NN
over the given input set. In contrast to recently proposed "monolithic"
optimization approaches, we use local gradient descent to repeatedly find and
eliminate local minima of the function. The final global optimum is certified
using a mixed integer programming instance. We implement our approach and
compare it with Reluplex, a recently proposed solver for deep neural networks.
We demonstrate the effectiveness of the proposed approach for verification of
NNs used in automated control as well as those used in classification. | [
1,
0,
0,
1,
0,
0
] |
Title: The Mismeasure of Mergers: Revised Limits on Self-interacting Dark Matter in Merging Galaxy Clusters,
Abstract: In an influential recent paper, Harvey et al (2015) derive an upper limit to
the self-interaction cross section of dark matter ($\sigma_{\rm DM} < 0.47$
cm$^2$/g at 95\% confidence) by averaging the dark matter-galaxy offsets in a
sample of merging galaxy clusters. Using much more comprehensive data on the
same clusters, we identify several substantial errors in their offset
measurements. Correcting these errors relaxes the upper limit on $\sigma_{\rm
DM}$ to $\lesssim 2$ cm$^2$/g, following the Harvey et al prescription for
relating offsets to cross sections in a simple solid body scattering model.
Furthermore, many clusters in the sample violate the assumptions behind this
prescription, so even this revised upper limit should be used with caution.
Although this particular sample does not tightly constrain self-interacting
dark matter models when analyzed this way, we discuss how merger ensembles may
be used more effectively in the future. We conclude that errors inherent in
using single-band imaging to identify mass and light peaks do not necessarily
average out in a sample of this size, particularly when a handful of
substructures constitute a majority of the weight in the ensemble. | [
0,
1,
0,
0,
0,
0
] |
Title: International crop trade networks: The impact of shocks and cascades,
Abstract: Analyzing available FAO data from 176 countries over 21 years, we observe an
increase of complexity in the international trade of maize, rice, soy, and
wheat. A larger number of countries play a role as producers or intermediaries,
either for trade or food processing. In consequence, we find that the trade
networks become more prone to failure cascades caused by exogenous shocks. In
our model, countries compensate for demand deficits by imposing export
restrictions. To capture these, we construct higher-order trade dependency
networks for the different crops and years. These networks reveal hidden
dependencies between countries and allow us to discuss policy implications. | [
0,
0,
0,
0,
0,
1
] |
Title: Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes,
Abstract: Debate and deliberation play essential roles in politics and government, but
most models presume that debates are won mainly via superior style or agenda
control. Ideally, however, debates would be won on the merits, as a function of
which side has the stronger arguments. We propose a predictive model of debate
that estimates the effects of linguistic features and the latent persuasive
strengths of different topics, as well as the interactions between the two.
Using a dataset of 118 Oxford-style debates, our model's combination of content
(as latent topics) and style (as linguistic features) allows us to predict
audience-adjudicated winners with 74% accuracy, significantly outperforming
linguistic features alone (66%). Our model finds that winning sides employ
stronger arguments, and allows us to identify the linguistic features
associated with strong or weak arguments. | [
1,
0,
0,
0,
0,
0
] |
Title: Gene regulatory networks: a primer in biological processes and statistical modelling,
Abstract: Modelling gene regulatory networks not only requires a thorough understanding
of the biological system depicted but also the ability to accurately represent
this system from a mathematical perspective. Throughout this chapter, we aim to
familiarise the reader with the biological processes and molecular factors at
play in the process of gene expression regulation. We first describe the
different interactions controlling each step of the expression process, from
transcription to mRNA and protein decay. In the second section, we provide
statistical tools to accurately represent this biological complexity in the
form of mathematical models. Amongst other considerations, we discuss the
topological properties of biological networks, the application of deterministic
and stochastic frameworks and the quantitative modelling of regulation. We
particularly focus on the use of such models for the simulation of expression
data that can serve as a benchmark for the testing of network inference
algorithms. | [
0,
0,
0,
1,
1,
0
] |
Title: Mathematical Knowledge and the Role of an Observer: Ontological and epistemological aspects,
Abstract: As David Berlinski writes (1997), the existence and nature of mathematics is
a more compelling and far deeper problem than any of the problems raised by
mathematics itself. Here we analyze the essence of mathematics making the main
emphasis on mathematics as an advanced system of knowledge. This knowledge
consists of structures and represents structures, existence of which depends on
observers in a nonstandard way. Structural nature of mathematics explains its
reasonable effectiveness. | [
0,
0,
1,
0,
0,
0
] |
Title: Variable Prioritization in Nonlinear Black Box Methods: A Genetic Association Case Study,
Abstract: The central aim in this paper is to address variable selection questions in
nonlinear and nonparametric regression. Motivated by statistical genetics,
where nonlinear interactions are of particular interest, we introduce a novel
and interpretable way to summarize the relative importance of predictor
variables. Methodologically, we develop the "RelATive cEntrality" (RATE)
measure to prioritize candidate genetic variants that are not just marginally
important, but whose associations also stem from significant covarying
relationships with other variants in the data. We illustrate RATE through
Bayesian Gaussian process regression, but the methodological innovations apply
to other "black box" methods. It is known that nonlinear models often exhibit
greater predictive accuracy than linear models, particularly for phenotypes
generated by complex genetic architectures. With detailed simulations and two
real data association mapping studies, we show that applying RATE enables an
explanation for this improved performance. | [
0,
0,
0,
1,
1,
0
] |
Title: Activité motrice des truies en groupes dans les différents systèmes de logement,
Abstract: Assessment of the motor activity of group-housed sows in commercial farms.
The objective of this study was to specify the level of motor activity of
pregnant sows housed in groups in different housing systems. Eleven commercial
farms were selected for this study. Four housing systems were represented:
small groups of five to seven sows (SG), free access stalls (FS) with exercise
area, electronic sow feeder with a stable group (ESFsta) or a dynamic group
(ESFdyn). Ten sows in mid-gestation were observed in each farm. The
observations of motor activity were made for 6 hours at the first meal or at
the start of the feeding sequence, on two consecutive days and at regular
intervals of 4 minutes. The results show that the motor activity of
group-housed sows depends on the housing system. The activity is higher with
the ESFdyn system (standing: 55.7%), sows are less active in the SG system
(standing: 26.5%), and FS system is intermediate. The distance traveled by sows
in ESF system is linked to a larger area available. Thus, sows travel an
average of 362 m $\pm$ 167 m in the ESFdyn system with an average available
surface of 446 m${}^2$ whereas sows in small groups travel 50 m $\pm$ 15 m for
15 m${}^2$ available. | [
0,
0,
0,
0,
1,
0
] |
Title: Posterior Concentration for Bayesian Regression Trees and Forests,
Abstract: Since their inception in the 1980s, regression trees have been one of the
more widely used non-parametric prediction methods. Tree-structured methods
yield a histogram reconstruction of the regression surface, where the bins
correspond to terminal nodes of recursive partitioning. Trees are powerful, yet
susceptible to over-fitting. Strategies against overfitting have traditionally
relied on pruning greedily grown trees. The Bayesian framework offers an
alternative remedy against overfitting through priors. Roughly speaking, a good
prior charges smaller trees where overfitting does not occur. While the
consistency of random histograms, trees and their ensembles has been studied
quite extensively, the theoretical understanding of the Bayesian counterparts
has been missing. In this paper, we take a step towards understanding why/when
Bayesian trees and their ensembles do not overfit. To address this question, we
study the speed at which the posterior concentrates around the true smooth
regression function. We propose a spike-and-tree variant of the popular
Bayesian CART prior and establish new theoretical results showing that
regression trees (and their ensembles) (a) are capable of recovering smooth
regression surfaces, achieving optimal rates up to a log factor, (b) can adapt
to the unknown level of smoothness and (c) can perform effective dimension
reduction when p>n. These results provide a piece of missing theoretical
evidence explaining why Bayesian trees (and additive variants thereof) have
worked so well in practice. | [
0,
0,
1,
1,
0,
0
] |