title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Morphology of PbTe crystal surface sputtered by argon plasma under Secondary Neutral Mass Spectrometry conditions | We have investigated the morphology of the lateral surfaces of PbTe crystal
samples grown from the melt by the Bridgman method and sputtered by Ar+ plasma with ion
energies of 50-550 eV for 5-50 minutes under Secondary Neutral Mass Spectrometry
(SNMS) conditions. The sputtered PbTe crystal surface was found to be
simultaneously both the source of sputtered material and an efficient
substrate for re-deposition of the sputtered material during depth
profiling. During sputtering, the PbTe crystal surface develops a dimpled relief.
Upon re-deposition, the sputtered Pb and Te form arrays of microscopic
surface structures in the shapes of hillocks, pyramids, cones, and others on the
sputtered PbTe crystal surface. A correlation is revealed between the density,
shape, and average size of the re-deposited microscopic surface structures on the
one hand, and the energy and duration of sputtering on the other.
| 0 | 1 | 0 | 0 | 0 | 0 |
Indefinite Integrals of Spherical Bessel Functions | Highly oscillatory integrals, such as those involving Bessel functions, are
best evaluated analytically as much as possible, as numerical errors can be
difficult to control. We investigate indefinite integrals involving monomials
in $x$ multiplying one or two spherical Bessel functions of the first kind
$j_l(x)$ with integer order $l$. Closed-form solutions are presented where
possible, and recursion relations are developed that are guaranteed to reduce
all integrals in this class to closed-form solutions. These results allow for
definite integrals over spherical Bessel functions to be computed quickly and
accurately. For completeness, we also present our results in terms of ordinary
Bessel functions, but in general, the recursion relations do not terminate.
| 0 | 0 | 1 | 0 | 0 | 0 |
Proceedings 14th International Workshop on the ACL2 Theorem Prover and its Applications | This volume contains the proceedings of the Fourteenth International Workshop
on the ACL2 Theorem Prover and Its Applications, ACL2 2017, a two-day workshop
held in Austin, Texas, USA, on May 22-23, 2017. ACL2 workshops occur at
approximately 18-month intervals, and they provide a technical forum for
researchers to present and discuss improvements and extensions to the theorem
prover, comparisons of ACL2 with other systems, and applications of ACL2 in
formal verification.
ACL2 is a state-of-the-art automated reasoning system that has been
successfully applied in academia, government, and industry for specification
and verification of computing systems and in teaching computer science courses.
Boyer, Kaufmann, and Moore were awarded the 2005 ACM Software System Award for
their work on ACL2 and the other theorem provers in the Boyer-Moore
theorem-prover family.
The proceedings of ACL2 2017 include the seven technical papers and two
extended abstracts that were presented at the workshop. Each submission
received two or three reviews. The workshop also included three invited talks:
"Using Mechanized Mathematics in an Organization with a Simulation-Based
Mentality", by Glenn Henry of Centaur Technology, Inc.; "Formal Verification of
Financial Algorithms, Progress and Prospects", by Grant Passmore of Aesthetic
Integration; and "Verifying Oracle's SPARC Processors with ACL2" by Greg
Grohoski of Oracle. The workshop also included several rump sessions discussing
ongoing research and the use of ACL2 within industry.
| 1 | 0 | 0 | 0 | 0 | 0 |
Mean Field Stochastic Games with Binary Action Spaces and Monotone Costs | This paper considers mean field games in a multi-agent Markov decision
process (MDP) framework. Each player has a continuum state and binary action.
By active control, a player can bring its state to a resetting point. All
players are coupled through their cost functions. The structural property of
the individual strategies is characterized in terms of threshold policies when
the mean field game admits a solution. We further introduce a stationary
equation system of the mean field game and analyze the uniqueness of its solution
under positive externalities.
| 0 | 0 | 1 | 0 | 0 | 0 |
Heterogeneous Cellular Networks with LoS and NLoS Transmissions--The Role of Massive MIMO and Small Cells | We develop a framework for downlink heterogeneous cellular networks with
line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions. Using
stochastic geometry, we derive a tight approximation of the achievable downlink
rate that enables us to compare the performance of densifying small cells with
that of expanding BS antenna arrays. Interestingly, we find that adding small cells
to the network improves the achievable rate much faster than expanding
antenna arrays at the macro BS. However, when the small cell density exceeds a
critical threshold, spatial densification loses its benefits and
further impairs the network capacity. To this end, we present the optimal small
cell density that maximizes the rate as practical deployment guidance. In
contrast, expanding the macro BS antenna array always benefits the capacity
up to an upper bound caused by pilot contamination, and this bound also
surpasses the peak rate obtained from the deployment of small cells. Furthermore,
we find that allocating part of the antennas to distributed small cell BSs works
better than centralizing all antennas at the macro BS, and the optimal
allocation proportion is also given as a practical configuration reference. In
summary, this work provides a further understanding of how to leverage small
cells and massive MIMO in future heterogeneous cellular network deployments.
| 1 | 0 | 1 | 0 | 0 | 0 |
Gould's Belt: Local Large Scale Structure in the Milky Way | Gould's Belt is a flat local system composed of young OB stars, molecular
clouds and neutral hydrogen within 500 pc from the Sun. It is inclined about 20
degrees to the galactic plane and its velocity field significantly deviates
from rotation around the distant center of the Milky Way. We discuss possible
models of its origin: free expansion from a point or from a ring, expansion of
a shell, or a collision of a high velocity cloud with the plane of the Milky
Way. Currently, no convincing model exists. Similar structures are identified
in the HI and CO distributions in our own and other nearby galaxies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning with Correntropy-induced Losses for Regression with Mixture of Symmetric Stable Noise | In recent years, correntropy and its applications in machine learning have
been attracting continuous attention owing to their merits in dealing with
non-Gaussian noise and outliers. However, a theoretical understanding of
correntropy, especially in the statistical learning context, is still limited.
In this study, within the statistical learning framework, we investigate
correntropy based regression in the presence of non-Gaussian noise or outliers.
Motivated by the practical way of generating non-Gaussian noise or outliers, we
introduce the mixture of symmetric stable noise, which includes Gaussian noise,
Cauchy noise, and their mixtures as special cases, to model non-Gaussian noise
or outliers. We demonstrate that under the mixture of symmetric stable noise
assumption, correntropy based regression can learn the conditional mean
function or the conditional median function well without resorting to the
finite-variance or even the finite first-order moment condition on the noise.
In particular, for the above two cases, we establish asymptotic optimal
learning rates for correntropy based regression estimators that are
asymptotically of type $\mathcal{O}(n^{-1})$. These results justify the
effectiveness of the correntropy based regression estimators in dealing with
outliers as well as non-Gaussian noise. We believe that the present study
completes our understanding of correntropy based regression from a
statistical learning viewpoint, and may also shed some light on robust
statistical learning for regression.
| 0 | 0 | 0 | 1 | 0 | 0 |
The Suppression and Promotion of Magnetic Flux Emergence in Fully Convective Stars | Evidence of surface magnetism is now observed on an increasing number of cool
stars. The detailed manner in which the dynamo-generated magnetic fields giving
rise to starspots traverse the convection zone remains unclear. Some
insight into this flux emergence mechanism has been gained by assuming bundles
of magnetic field can be represented by idealized thin flux tubes (TFTs). Weber
& Browning (2016) have recently investigated how individual flux tubes might
evolve in a 0.3 solar-mass M dwarf by effectively embedding TFTs in
time-dependent flows representative of a fully convective star. We expand upon
this work by initiating flux tubes at various depths in the upper 50-75% of the
star in order to sample the differing convective flow pattern and differential
rotation across this region. Specifically, we comment on the role of
differential rotation and time-varying flows in both the suppression and
promotion of the magnetic flux emergence process.
| 0 | 1 | 0 | 0 | 0 | 0 |
Opportunistic Content Delivery in Fading Broadcast Channels | We consider content delivery over fading broadcast channels. A server wants
to transmit K files to K users, each equipped with a cache of finite size.
Using the coded caching scheme of Maddah-Ali and Niesen, we design an
opportunistic delivery scheme where the long-term sum content delivery rate
scales with K, the number of users in the system. The proposed delivery scheme
combines superposition coding with appropriate power allocation across
sub-files intended for different subsets of users. We analyze the long-term
average sum content delivery rate achieved by two special cases of our scheme:
a) a selection scheme that chooses the subset of users with the largest
weighted rate, and b) a baseline scheme that transmits to K users using the
scheme of Maddah-Ali and Niesen. We prove that coded caching with appropriate
user selection is scalable since it yields a linear increase of the average sum
content delivery rate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Conservation Laws With Random and Deterministic Data | The dynamics of nonlinear conservation laws have long posed fascinating
problems. With the introduction of some nonlinearity, e.g. Burgers' equation,
discontinuous behavior in the solutions is exhibited, even for smooth initial
data. The introduction of randomness in any of several forms into the initial
condition makes the problem even more interesting. We present a broad spectrum
of results from a number of works, both deterministic and random, to provide a
diverse introduction to some of the methods of analysis for conservation laws.
Some of the deep theorems are applied to discrete examples and illuminated
using diagrams.
| 0 | 0 | 1 | 0 | 0 | 0 |
Theoretical study of HfF$^+$ cation to search for the T,P-odd interactions | The combined all-electron and two-step approach is applied to calculate the
molecular parameters which are required to interpret the ongoing experiment to
search for manifestations of the T,P-odd fundamental interactions
in the HfF$^+$ cation by the Cornell/Ye group [Science 342, 1220 (2013); J. Mol.
Spectrosc. 300, 12 (2014)]. The effective electric field that is required to
interpret the experiment in terms of the electron electric dipole moment is
found to be 22.5 GV/cm. In Ref. [Phys. Rev. D 89, 056006 (2014)] it was shown
that another source of T,P-odd interaction, the scalar-pseudoscalar
nucleus-electron interaction with the dimensionless strength constant $k_{T,P}$
can dominate over the direct contribution from the electron EDM within the
standard model and some of its extensions. Therefore, for a comprehensive and
correct interpretation of the HfF$^+$ experiment one should also know the
molecular parameter $W_{T,P}$, whose value is reported here to be 20.1
kHz.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Information Gathering via Imitation Learning | In the adaptive information gathering problem, a policy is required to select
an informative sensing location using the history of measurements acquired thus
far. While there is an extensive amount of prior work investigating effective
practical approximations using variants of Shannon's entropy, the efficacy of
such policies heavily depends on the geometric distribution of objects in the
world. On the other hand, the principled approach of employing online POMDP
solvers is rendered impractical by the need to explicitly sample online from a
posterior distribution of world maps.
We present a novel data-driven imitation learning framework to efficiently
train information gathering policies. The policy imitates a clairvoyant oracle
- an oracle that at train time has full knowledge about the world map and can
compute maximally informative sensing locations. We analyze the learnt policy
by showing that offline imitation of a clairvoyant oracle is implicitly
equivalent to online oracle execution in conjunction with posterior sampling.
This observation allows us to obtain powerful near-optimality guarantees for
information gathering problems possessing an adaptive sub-modularity property.
As demonstrated on a spectrum of 2D and 3D exploration problems, the trained
policies enjoy the best of both worlds - they adapt to different world map
distributions while being computationally inexpensive to evaluate.
| 1 | 0 | 0 | 0 | 0 | 0 |
Demonstration of dispersive rarefaction shocks in hollow elliptical cylinder chains | We report an experimental and numerical demonstration of dispersive
rarefaction shocks (DRS) in a 3D-printed soft chain of hollow elliptical
cylinders. We find that, in contrast to conventional nonlinear waves, these DRS
have their lower-amplitude components travel faster, while the higher-amplitude
ones propagate more slowly. This results in the backward-tilted shape of the front
of the wave (the rarefaction segment) and the breakage of wave tails into a
modulated waveform (the dispersive shock segment). Examining the DRS under
various impact conditions, we find the counter-intuitive feature that a
higher striker velocity causes slower propagation of the DRS. These unique
features can be useful for mitigating impact controllably and efficiently
without relying on material damping or plasticity effects.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gaussian Process Regression for Arctic Coastal Erosion Forecasting | Arctic coastal morphology is governed by multiple factors, many of which are
affected by climatological changes. As the season length for shorefast ice
decreases and temperatures warm permafrost soils, coastlines are more
susceptible to erosion from storm waves. Such coastal erosion is a concern,
since the majority of the population centers and infrastructure in the Arctic
are located near the coasts. Stakeholders and decision makers increasingly need
models capable of scenario-based predictions to assess and mitigate the effects
of coastal morphology on infrastructure and land use. Our research uses
Gaussian process models to forecast Arctic coastal erosion along the Beaufort
Sea near Drew Point, AK. Gaussian process regression is a data-driven modeling
methodology capable of extracting patterns and trends from data-sparse
environments such as remote Arctic coastlines. To train our model, we use
annual coastline positions and near-shore summer temperature averages from
existing datasets and extend these data by extracting additional coastlines
from satellite imagery. We combine our calibrated models with future climate
models to generate a range of plausible future erosion scenarios. Our results
show that the Gaussian process methodology substantially improves yearly
predictions compared to linear and nonlinear least squares methods, and is
capable of generating detailed forecasts suitable for use by decision makers.
| 0 | 1 | 0 | 1 | 0 | 0 |
Calibration for Stratified Classification Models | In classification problems, sampling bias between training data and testing
data is critical to the ranking performance of classification scores. Such bias
can be both unintentionally introduced by data collection and intentionally
introduced by the algorithm, such as under-sampling or weighting techniques
applied to imbalanced data. When such sampling bias exists, using the raw
classification score to rank observations in the testing data can lead to
suboptimal results. In this paper, I investigate the optimal calibration
strategy in general settings, and develop a practical solution for one specific
sampling bias case, where the sampling bias is introduced by stratified
sampling. The optimal solution is developed by analytically solving the problem
of optimizing the ROC curve. For practical data, I propose a ranking algorithm
for general classification models with stratified data. Numerical experiments
demonstrate that the proposed algorithm effectively addresses the stratified
sampling bias issue. Interestingly, the proposed method shows its potential
applicability in two other machine learning areas: unsupervised learning and
model ensembling, which can be future research topics.
| 1 | 0 | 0 | 1 | 0 | 0 |
Two classes of fast-declining type Ia supernovae | Fast-declining Type Ia supernovae (SN Ia) separate into two categories based
on their bolometric and near-infrared (NIR) properties. The peak bolometric
luminosity ($\mathrm{L_{max}}$), the phase of the first maximum relative to the
optical, the NIR peak luminosity and the occurrence of a second maximum in the
NIR distinguish a group of very faint SN Ia. Fast-declining supernovae show a
large range of peak bolometric luminosities ($\mathrm{L_{max}}$ differing by up
to a factor of $\sim 8$). All fast-declining SN Ia with $\mathrm{L_{max}} < 0.3
\times 10^{43}\,\mathrm{erg\,s^{-1}}$ are spectroscopically classified as
91bg-like and show only a single NIR peak. SNe with $\mathrm{L_{max}} > 0.5
\times 10^{43}\,\mathrm{erg\,s^{-1}}$ appear to smoothly connect to normal SN Ia.
The total ejecta mass ($M_{\rm ej}$) values for SNe with enough late-time data are
$\lesssim 1\,M_{\odot}$, indicating a sub-Chandrasekhar-mass progenitor for
these SNe.
| 0 | 1 | 0 | 0 | 0 | 0 |
What Sets the Radial Locations of Warm Debris Disks? | The architectures of debris disks encode the history of planet formation in
these systems. Studies of debris disks via their spectral energy distributions
(SEDs) have found infrared excesses arising from cold dust, warm dust, or a
combination of the two. The cold outer belts of many systems have been imaged,
facilitating their study in great detail. Far less is known about the warm
components, including the origin of the dust. The regularity of the disk
temperatures indicates an underlying structure that may be linked to the water
snow line. If the dust is generated from collisions in an exo-asteroid belt,
the dust will likely trace the location of the water snow line in the
primordial protoplanetary disk where planetesimal growth was enhanced. If
instead the warm dust arises from the inward transport from a reservoir of icy
material farther out in the system, the dust location is expected to be set by
the current snow line. We analyze the SEDs of a large sample of debris disks
with warm components. We find that warm components in single-component systems
(those without detectable cold components) follow the primordial snow line
rather than the current snow line, so they likely arise from exo-asteroid
belts. While the locations of many warm components in two-component systems are
also consistent with the primordial snow line, there is more diversity among
these systems, suggesting additional effects play a role.
| 0 | 1 | 0 | 0 | 0 | 0 |
Nonlinear Modulational Instability of Dispersive PDE Models | We prove nonlinear modulational instability for both periodic and localized
perturbations of periodic traveling waves for several dispersive PDEs,
including KdV-type equations (e.g., the Whitham equation, the generalized
KdV equation, the Benjamin-Ono equation), the nonlinear Schrödinger equation
and the BBM equation. First, the semigroup estimates required for the nonlinear
proof are obtained by using the Hamiltonian structures of the linearized PDEs;
second, for KdV-type equations the loss of derivative in the nonlinear term is
overcome in two complementary cases: (1) for smooth nonlinear terms and general
dispersive operators, we construct higher order approximation solutions and
then use energy type estimates; (2) for nonlinear terms of low regularity, with
some additional assumption on the dispersive operator, we use a bootstrap
argument to overcome the loss of derivative.
| 0 | 0 | 1 | 0 | 0 | 0 |
Unsupervised Learning of Disentangled Representations from Video | We present a new model DrNET that learns disentangled image representations
from video. Our approach leverages the temporal coherence of video and a novel
adversarial loss to learn a representation that factorizes each frame into a
stationary part and a temporally varying component. The disentangled
representation can be used for a range of tasks. For example, applying a
standard LSTM to the time-varying components enables prediction of future frames.
We evaluate our approach on a range of synthetic and real videos, demonstrating
the ability to coherently generate hundreds of steps into the future.
| 1 | 0 | 0 | 1 | 0 | 0 |
Control of Ultracold Photodissociation with Magnetic Fields | Photodissociation of a molecule produces a spatial distribution of
photofragments determined by the molecular structure and the characteristics of
the dissociating light. Performing this basic chemical reaction at ultracold
temperatures allows its quantum mechanical features to dominate. In this
regime, weak applied fields can be used to control the reaction. Here, we
photodissociate ultracold diatomic strontium in magnetic fields below 10 G and
observe striking changes in photofragment angular distributions. The
observations are in excellent qualitative agreement with a multichannel quantum
chemistry model that includes nonadiabatic effects and predicts strong mixing
of partial waves in the photofragment energy continuum. The experiment is
enabled by precise quantum-state control of the molecules.
| 0 | 1 | 0 | 0 | 0 | 0 |
JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets | A new generative adversarial network is developed for joint distribution
matching. Distinct from most existing approaches, which only learn conditional
distributions, the proposed model aims to learn a joint distribution of
multiple random variables (domains). This is achieved by learning to sample
from conditional distributions between the domains, while simultaneously
learning to sample from the marginals of each individual domain. The proposed
framework consists of multiple generators and a single softmax-based critic,
all jointly trained via adversarial learning. From a simple noise source, the
proposed framework allows synthesis of draws from the marginals, conditional
draws given observations from a subset of random variables, or complete draws
from the full joint distribution. Most examples considered are for joint
analysis of two domains, with examples for three domains also presented.
| 0 | 0 | 0 | 1 | 0 | 0 |
A lightweight MapReduce framework for secure processing with SGX | MapReduce is a programming model used extensively for parallel data
processing in distributed environments. A wide range of algorithms have been
implemented using MapReduce, from simple tasks like sorting and searching up to
complex clustering and machine learning operations. Many of these
implementations are part of services externalized to cloud infrastructures.
Over the past years, however, many concerns have been raised regarding the
security guarantees offered in such environments. Some solutions relying on
cryptography were proposed for countering threats but these typically imply a
high computational overhead. Intel, the largest manufacturer of commodity CPUs,
recently introduced SGX (software guard extensions), a set of hardware
instructions that support execution of code in an isolated secure environment.
In this paper, we explore the use of Intel SGX for providing privacy guarantees
for MapReduce operations, and based on our evaluation we conclude that it
represents a viable alternative to a cryptographic mechanism. We present
results based on the widely used k-means clustering algorithm, but our
implementation can be generalized to other applications that can be expressed
using the MapReduce model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Formation of wide-orbit gas giants near the stability limit in multi-stellar systems | We have investigated the formation of a circumstellar wide-orbit gas giant
planet in a multiple stellar system. We consider a model of orbital
circularization for the core of a giant planet after it is scattered from an
inner disk region by a more massive planet, which was proposed by Kikuchi et al
(2014). We extend their model for single star systems to binary (multiple) star
systems, by taking into account tidal truncation of the protoplanetary gas disk
by a binary companion. As an example, we consider a wide-orbit gas giant in a
hierarchical triple system, HD131399Ab. The best-fit orbit of the planet is
that with semimajor axis $\sim 80$ au and eccentricity $\sim 0.35$. As the
binary separation is $\sim 350$ au, the planet's orbit is very close to the stability
limit, which is puzzling. With the original core location of $\sim 20$-30 au, the core
(planet) mass $\sim 50 M_{\rm E}$ and the disk truncation radius $\sim 150$ au,
our model reproduces the best-fit orbit of HD131399Ab. We find that the orbit
after the circularization is usually close to the stability limit against the
perturbations from the binary companion, because the scattered core accretes
gas from the truncated disk. Our conclusion can also be applied to wider or
more compact binary systems if the separation is not too large and another
planet with $\gtrsim 20$-$30\,M_{\rm E}$ that scattered the core existed in the
inner region of the system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Frank-Wolfe Optimization for Symmetric-NMF under Simplicial Constraint | Symmetric nonnegative matrix factorization has found abundant applications in
various domains by providing a symmetric low-rank decomposition of nonnegative
matrices. In this paper we propose a Frank-Wolfe (FW) solver to optimize the
symmetric nonnegative matrix factorization problem under a simplicial
constraint, which has recently been proposed for probabilistic clustering.
Compared with existing solutions, this algorithm is simple to implement, and
has no hyperparameters to be tuned. Building on the recent advances of FW
algorithms in nonconvex optimization, we prove an $O(1/\varepsilon^2)$
convergence rate to $\varepsilon$-approximate KKT points, via a tight bound
$\Theta(n^2)$ on the curvature constant, which matches the best known result in
the unconstrained nonconvex setting using gradient methods. Numerical results
demonstrate the effectiveness of our algorithm. As a side contribution, we
construct a simple nonsmooth convex problem where the FW algorithm fails to
converge to the optimum. This result raises an interesting question about
necessary conditions for the success of the FW algorithm on convex problems.
| 1 | 0 | 1 | 1 | 0 | 0 |
A Fourier Disparity Layer representation for Light Fields | In this paper, we present a new Light Field representation for efficient
Light Field processing and rendering called Fourier Disparity Layers (FDL). The
proposed FDL representation samples the Light Field in the depth (or
equivalently the disparity) dimension by decomposing the scene as a discrete
sum of layers. The layers can be constructed from various types of Light Field
inputs including a set of sub-aperture images, a focal stack, or even a
combination of both. From our derivations in the Fourier domain, the layers are
simply obtained by a regularized least square regression performed
independently at each spatial frequency, which is efficiently parallelized in a
GPU implementation. Our model is also used to derive a gradient descent based
calibration step that estimates the input view positions and an optimal set of
disparity values required for the layer construction. Once the layers are
known, they can be simply shifted and filtered to produce different viewpoints
of the scene while controlling the focus and simulating a camera aperture of
arbitrary shape and size. Our implementation in the Fourier domain allows
real-time Light Field rendering. Finally, direct applications such as view
interpolation or extrapolation and denoising are presented and evaluated.
| 1 | 0 | 0 | 0 | 0 | 0 |
Narrating Networks | Networks have become the de facto diagram of the Big Data age (try searching
Google Images for [big data AND visualisation] and see). The concept of
networks has become central to many fields of human inquiry and is said to
revolutionise everything from medicine to markets to military intelligence.
While the mathematical and analytical capabilities of networks have been
extensively studied over the years, in this article we argue that the
storytelling affordances of networks have been comparatively neglected. In
order to address this we use multimodal analysis to examine the stories that
networks evoke in a series of journalism articles. We develop a protocol by
means of which narrative meanings can be construed from network imagery and the
context in which it is embedded, and discuss five different kinds of narrative
readings of networks, illustrated with analyses of examples from journalism.
Finally, to support further research in this area, we discuss methodological
issues that we encountered and suggest directions for future study to advance
and broaden research around this defining aspect of visual culture after the
digital turn.
| 1 | 0 | 0 | 0 | 0 | 0 |
Policy Evaluation and Optimization with Continuous Treatments | We study the problem of policy evaluation and learning from batched
contextual bandit data when treatments are continuous, going beyond previous
work on discrete treatments. Previous work for discrete treatment/action spaces
focuses on inverse probability weighting (IPW) and doubly robust (DR) methods
that use a rejection sampling approach for evaluation and the equivalent
weighted classification problem for learning. In the continuous setting, this
reduction fails as we would almost surely reject all observations. To tackle
the case of continuous treatments, we extend the IPW and DR approaches to the
continuous setting using a kernel function that leverages treatment proximity
to attenuate discrete rejection. Our policy estimator is consistent and we
characterize the optimal bandwidth. The resulting continuous policy optimizer
(CPO) approach using our estimator achieves convergent regret and approaches
the best-in-class policy for learnable policy classes. We demonstrate that the
estimator performs well and, in particular, outperforms a discretization-based
benchmark. We further study the performance of our policy optimizer in a case
study on personalized dosing based on a dataset of Warfarin patients, their
covariates, and final therapeutic doses. Our learned policy outperforms
benchmarks and nears the oracle-best linear policy.
| 0 | 0 | 0 | 1 | 0 | 0 |
Geometry of Factored Nuclear Norm Regularization | This work investigates the geometry of a nonconvex reformulation of
minimizing a general convex loss function $f(X)$ regularized by the matrix
nuclear norm $\|X\|_*$. Nuclear-norm regularized matrix inverse problems are at
the heart of many applications in machine learning, signal processing, and
control. The statistical performance of nuclear norm regularization has been
studied extensively in the literature using convex analysis techniques. Despite its
optimal performance, the resulting optimization has high computational
complexity when solved using standard or even tailored fast convex solvers. To
develop faster and more scalable algorithms, we follow the proposal of
Burer-Monteiro to factor the matrix variable $X$ into the product of two
smaller rectangular matrices $X=UV^T$ and also replace the nuclear norm
$\|X\|_*$ with $(\|U\|_F^2+\|V\|_F^2)/2$. In spite of the nonconvexity of the
factored formulation, we prove that when the convex loss function $f(X)$ is
$(2r,4r)$-restricted well-conditioned, each critical point of the factored
problem either corresponds to the optimal solution $X^\star$ of the original
convex optimization or is a strict saddle point where the Hessian matrix has a
strictly negative eigenvalue. Such a geometric structure of the factored
formulation allows many local search algorithms to converge to the global
optimum with random initializations.
| 1 | 0 | 1 | 0 | 0 | 0 |
Effect of Scrape-Off-Layer Current on Reconstructed Tokamak Equilibrium | Methods are described that extend fields from reconstructed equilibria to
include scrape-off-layer current through extrapolated parametrized and
experimental fits. The extrapolation includes both the effects of the
toroidal-field and pressure gradients which produce scrape-off-layer current
after recomputation of the Grad-Shafranov solution. To quantify the degree that
inclusion of scrape-off-layer current modifies the equilibrium, the
$\chi$-squared goodness-of-fit parameter is calculated for cases with and
without scrape-off-layer current. The change in $\chi$-squared is found to be
minor when scrape-off-layer current is included however flux surfaces are
shifted by up to 3 cm. The impact on edge modes of these scrape-off-layer
modifications is also found to be small and the importance of these methods to
nonlinear computation is discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
Holomorphy of Osborn loops | Let $(L,\cdot)$ be any loop and let $A(L)$ be a group of automorphisms of
$(L,\cdot)$ such that $\alpha$ and $\phi$ are elements of $A(L)$. It is shown
that, for all $x,y,z\in L$, the $A(L)$-holomorph $(H,\circ)=H(L)$ of
$(L,\cdot)$ is an Osborn loop if and only if $x\alpha (yz\cdot x\phi^{-1})=
x\alpha (yx^\lambda\cdot x) \cdot zx\phi^{-1}$. Furthermore, it is shown that
for all $x\in L$, $H(L)$ is an Osborn loop if and only if $(L,\cdot)$ is an
Osborn loop, $(x\alpha\cdot x^{\rho})x=x\alpha$, $x(x^{\lambda}\cdot
x\phi^{-1})=x\phi^{-1}$ and every pair of automorphisms in $A(L)$ is nuclear
(i.e. $x\alpha\cdot x^{\rho},x^{\lambda}\cdot x\phi\in N(L,\cdot )$). It is
shown that if $H(L)$ is an Osborn loop, then $A(L,\cdot)=
\mathcal{P}(L,\cdot)\cap\Lambda(L,\cdot)\cap\Phi(L,\cdot)\cap\Psi(L,\cdot)$ and
for any $\alpha\in A(L)$, $\alpha= L_{e\pi}=R^{-1}_{e\varrho}$ for some $\pi\in
\Phi(L,\cdot)$ and some $\varrho\in \Psi(L,\cdot)$. Some commutative diagrams
are deduced by considering isomorphisms among the various groups of regular
bijections (whose intersection is $A(L)$) and the nucleus of $(L,\cdot)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Multi-Objective Learning to re-Rank Approach to Optimize Online Marketplaces for Multiple Stakeholders | Multi-objective recommender systems address the difficult task of
recommending items that are relevant to multiple, possibly conflicting,
criteria. However, these systems are most often designed to address the
objective of one single stakeholder, typically, in online commerce, the
consumers whose input and purchasing decisions ultimately determine the success
of the recommendation systems. In this work, we address the multi-objective,
multi-stakeholder, recommendation problem involving one or more objective(s)
per stakeholder. In addition to the consumer stakeholder, we also consider two
other stakeholders: the suppliers who provide the goods and services for sale
and the intermediary who is responsible for helping connect consumers to
suppliers via its recommendation algorithms. We analyze the multi-objective,
multi-stakeholder, problem from the point of view of the online marketplace
intermediary whose objective is to maximize its commission through its
recommender system. We define a multi-objective problem relating all three of our
stakeholders, which we solve with a novel learning-to-re-rank approach that
makes use of a novel regularization function based on the Kendall tau
correlation metric and its kernel version; given an initial ranking of item
recommendations built for the consumer, we aim to re-rank it such that the new
ranking is also optimized for the secondary objectives while staying close to
the initial ranking. We evaluate our approach on a real-world dataset of hotel
recommendations provided by Expedia where we show the effectiveness of our
approach against a business-rules oriented baseline model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Interactive Discovery System for Direct Democracy | Decide Madrid is the civic technology of the Madrid City Council that allows
users to create and support online petitions. Despite the initial success, the
platform is encountering problems with the growth of petition signing because
petitions are far from the minimum number of supporting votes they must gather.
Previous analyses have suggested that this problem is produced by the
interface: a paginated list of petitions which applies a non-optimal ranking
algorithm. For this reason, we present an interactive system for the discovery
of topics and petitions. This approach leads us to reflect on the usefulness of
data visualization techniques to address relevant societal challenges.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Learning to Attend to Risk in ICU | Modeling physiological time-series in the ICU is of high clinical importance.
However, data collected within ICU are irregular in time and often contain
missing measurements. Since the absence of a measurement may signify its lack of
importance, the missingness is itself informative and might reflect the
clinician's decision making. Here we propose a deep learning architecture
that can effectively handle these challenges for predicting ICU mortality
outcomes. The model is based on Long Short-Term Memory, and has layered
attention mechanisms. At the sensing layer, the model decides whether to
observe and incorporate parts of the current measurements. At the reasoning
layer, evidences across time steps are weighted and combined. The model is
evaluated on the PhysioNet 2012 dataset showing competitive and interpretable
results.
| 1 | 0 | 0 | 1 | 0 | 0 |
Steklov problem on differential forms | In this paper we study spectral properties of the Dirichlet-to-Neumann map on
differential forms obtained by a slight modification of the definition due to
Belishev and Sharafutdinov. The resulting operator $\Lambda$ is shown to be
self-adjoint on the subspace of coclosed forms and to have purely discrete
spectrum there. We investigate properties of the eigenvalues of $\Lambda$ and prove a
Hersch-Payne-Schiffer type inequality relating products of those eigenvalues to
eigenvalues of the Hodge Laplacian on the boundary. Moreover, the non-trivial
eigenvalues of $\Lambda$ are always at least as large as the eigenvalues of the
Dirichlet-to-Neumann map defined by Raulot and Savo. Finally, we remark that the
particular case of $p$-forms on the boundary of a $(2p+2)$-dimensional manifold
shares a lot of important properties with the classical Steklov eigenvalue
problem on surfaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Constraining a dark matter and dark energy interaction scenario with a dynamical equation of state | In this work we have used the recent cosmic chronometers data along with the
latest estimate of the local Hubble parameter value $H_0$ at 2.4\% precision,
as well as the standard dark energy probes, such as the Supernovae Type Ia,
baryon acoustic oscillation distance measurements, and cosmic microwave
background measurements (PlanckTT $+$ lowP) to constrain a dark energy model
where the dark energy is allowed to interact with the dark matter. A general
equation of state of dark energy parametrized by a dimensionless parameter
`$\beta$' is utilized. From our analysis, we find that the interaction is
compatible with zero within the 1$\sigma$ confidence limit. We also show that
the same evolution history can be reproduced by a small pressure of the dark
matter.
| 0 | 1 | 0 | 0 | 0 | 0 |
Comprehensive evaluation of statistical speech waveform synthesis | Statistical TTS systems that directly predict the speech waveform have
recently reported improvements in synthesis quality. This investigation
evaluates Amazon's statistical speech waveform synthesis (SSWS) system. An
in-depth evaluation of SSWS is conducted across a number of domains to better
understand the consistency in quality. The results of this evaluation are
validated by repeating the procedure on a separate group of testers. Finally,
an analysis of the nature of speech errors of SSWS compared to hybrid unit
selection synthesis is conducted to identify the strengths and weaknesses of
SSWS. Having a deeper insight into SSWS allows us to better define the focus of
future work to improve this new technology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Component response rate variation drives stability in large complex systems | The stability of a complex system generally decreases with increasing system
size and interconnectivity, a counterintuitive result of widespread importance
across the physical, life, and social sciences. Despite recent interest in the
relationship between system properties and stability, the effect of variation
in the response rate of individual system components remains unconsidered. Here
I vary the component response rates ($\boldsymbol{\gamma}$) of randomly
generated complex systems. I show that when component response rates vary, the
potential for system stability is markedly increased. Variation in
$\boldsymbol{\gamma}$ becomes increasingly important as system size increases,
such that the largest stable complex systems would be unstable if not for
$\boldsymbol{Var(\gamma)}$. My results reveal a previously unconsidered driver
of system stability that is likely to be pervasive across all complex systems.
| 0 | 0 | 0 | 0 | 1 | 0 |
Time-resolved ultrafast x-ray scattering from an incoherent electronic mixture | Time-resolved ultrafast x-ray scattering from photo-excited matter is an
emerging method to image ultrafast dynamics in matter with atomic-scale spatial
and temporal resolutions. For a correct and rigorous understanding of current
and upcoming imaging experiments, we present the theory of time-resolved x-ray
scattering from an incoherent electronic mixture using quantum electrodynamical
theory of light-matter interaction. We show that the total scattering signal is
an incoherent sum of the individual scattering signals arising from different
electronic states and therefore heterodyning of the individual signals is not
possible for an ensemble of gas-phase photo-excited molecules. We scrutinize
the information encoded in the total signal for the experimentally important
situation in which the pulse duration of the x-ray pulse is short in comparison
to the timescale of the vibrational motion, while its coherence time is long in
comparison to the timescale of the electronic motion. Finally, we show that
in the case of an electronically excited crystal the total scattering signal
imprints the interference of the individual scattering amplitudes associated
with different electronic states and heterodyning is possible.
| 0 | 1 | 0 | 0 | 0 | 0 |
Calculation of the critical overdensity in the spherical-collapse approximation | The critical overdensity $\delta_c$ is a key concept in estimating the number
count of halos for different redshift and halo-mass bins, and therefore, it is
a powerful tool to compare cosmological models to observations. There are
currently two different prescriptions in the literature for its calculation,
namely, the differential-radius and the constant-infinity methods. In this work
we show that the latter yields precise results {\it only} if we are careful in
the definition of the so-called numerical infinities. Although the subtleties
we point out are crucial ingredients for an accurate determination of
$\delta_c$ both in general relativity and in any other gravity theory, we focus
on $f(R)$ modified-gravity models in the metric approach; in particular, we use
the so-called large-field ($F=1/3$) and small-field ($F=0$) limits. For both of them,
we calculate the relative errors (between our method and the others) in the
critical density $\delta_c$, in the comoving number density of halos per
logarithmic mass interval $n_{\ln M}$ and in the number of clusters at a given
redshift in a given mass bin $N_{\rm bin}$, as functions of the redshift. We
have also derived an analytical expression for the density contrast in the
linear regime as a function of the collapse redshift $z_c$ and $\Omega_{m0}$
for any $F$.
| 0 | 1 | 0 | 0 | 0 | 0 |
Point distributions in compact metric spaces, II | We consider finite point subsets (distributions) in compact metric spaces. In
the case of general rectifiable metric spaces, non-trivial bounds for sums of
distances between points of distributions and for discrepancies of
distributions in metric balls are given (Theorem 1.1). We generalize
Stolarsky's invariance principle to distance-invariant spaces (Theorem 2.1).
For arbitrary metric spaces, we prove a probabilistic invariance principle
(Theorem 3.1). Furthermore, we construct equal-measure partitions of general
rectifiable compact metric spaces into parts of small average diameter (Theorem
4.1). This version of the paper will be published in Mathematika
| 0 | 0 | 1 | 0 | 0 | 0 |
A Variational Projection Scheme for Nonmatching Surface-to-Line Coupling between 3D Flexible Multibody System and Incompressible Turbulent Flow | This paper is concerned with the partitioned iterative formulation to
simulate the fluid-structure interaction of a nonlinear multibody system in an
incompressible turbulent flow. The proposed formulation relies on a
three-dimensional (3D) incompressible turbulent flow solver, a nonlinear
monolithic elastic structural solver for constrained flexible multibody system
and the nonlinear iterative force correction scheme for coupling of the
turbulent fluid-flexible multibody system with nonmatching interface meshes.
While the fluid equations are discretized using a stabilized Petrov-Galerkin
formulation in space and the generalized-$\alpha$ updates in time, the
multibody system utilizes a discontinuous space-time Galerkin finite element
method. We address two key challenges in the present formulation. The first is the
coupling of the incompressible turbulent flow with a system of nonlinear
elastic bodies described in a co-rotated frame. The second is the projection of the
tractions and displacements across the nonmatching 3D fluid surface elements
and the one-dimensional line elements for the flexible multibody system in a
conservative manner. Through the nonlinear iterative correction and the
conservative projection, the developed fluid-flexible multibody interaction
solver is stable for problems involving strong inertial effects between the
fluid-flexible multibody system and the coupled interactions among each
multibody component. The accuracy of the proposed coupled finite element
framework is validated against the available experimental data for a long
flexible cylinder undergoing vortex-induced vibration in a uniform current flow
condition. Finally, a practical application of the proposed framework is
demonstrated by simulating the flow-induced vibration of a realistic offshore
floating platform connected to a long riser and an elastic mooring system.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Hubble Catalog of Variables | The Hubble Catalog of Variables (HCV) is a 3-year, ESA-funded project that
aims to develop a set of algorithms to identify variables among the sources
included in the Hubble Source Catalog (HSC) and produce the HCV. We will
process all HSC sources with more than a predefined number of measurements in a
single filter/instrument combination and compute a range of lightcurve features
to determine the variability status of each source. At the end of the project,
the first release of the Hubble Catalog of Variables will be made available at
the Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives.
The variability detection pipeline will be implemented at the Space Telescope
Science Institute (STScI) so that updated versions of the HCV may be created
following the future releases of the HSC.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Loop-Based Methodology for Reducing Computational Redundancy in Workload Sets | The design of general purpose processors relies heavily on a workload
gathering step in which representative programs are collected from various
application domains. Processor performance, when running the workload set, is
profiled using simulators that model the targeted processor architecture.
However, simulating the entire workload set is prohibitively time-consuming,
which precludes considering a large number of programs. To reduce simulation
time, several techniques in the literature have exploited the internal program
repetitiveness to extract and execute only representative code segments.
Existing so- lutions are based on reducing cross-program computational
redundancy or on eliminating internal-program redundancy to decrease execution
time. In this work, we propose an orthogonal and complementary loop- centric
methodology that targets loop-dominant programs by exploiting internal-program
characteristics to reduce cross-program computational redundancy. The approach
employs a newly developed framework that extracts and analyzes core loops
within workloads. The collected characteristics model memory behavior,
computational complexity, and data structures of a program, and are used to
construct a signature vector for each program. From these vectors,
cross-workload similarity metrics are extracted, which are processed by a novel
heuristic to exclude similar programs and reduce redundancy within the set.
Finally, a reverse engineering approach that synthesizes executable
micro-benchmarks having the same instruction mix as the loops in the original
workload is introduced. A tool that automates the flow steps of the proposed
methodology is developed. Simulation results demonstrate that applying the
proposed methodology to a set of workloads reduces the set size by half, while
preserving the main characterizations of the initial workloads.
| 1 | 0 | 0 | 0 | 0 | 0 |
Multi-View Surveillance Video Summarization via Joint Embedding and Sparse Optimization | Most traditional video summarization methods are designed to generate
effective summaries for single-view videos, and thus they cannot fully exploit
the complicated intra- and inter-view correlations in summarizing multi-view
videos in a camera network. In this paper, with the aim of summarizing
multi-view videos, we introduce a novel unsupervised framework via joint
embedding and sparse representative selection. The objective function is
two-fold. The first is to capture the multi-view correlations via an embedding,
which helps in extracting a diverse set of representatives. The second is to
use an $\ell_{2,1}$-norm to model the sparsity while selecting representative shots for
the summary. We propose to jointly optimize both of the objectives, such that
embedding can not only characterize the correlations, but also indicate the
requirements of sparse representative selection. We present an efficient
alternating algorithm based on half-quadratic minimization to solve the
proposed non-smooth and non-convex objective with convergence analysis. A key
advantage of the proposed approach with respect to the state-of-the-art is that
it can summarize multi-view videos without assuming any prior
correspondences/alignment between them, e.g., uncalibrated camera networks.
Rigorous experiments on several multi-view datasets demonstrate that our
approach clearly outperforms the state-of-the-art methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Logical Approach to Cloud Federation | Federated clouds raise a variety of challenges for managing identity,
resource access, naming, connectivity, and object access control. This paper
shows how to address these challenges in a comprehensive and uniform way using
a data-centric approach. The foundation of our approach is a trust logic in
which participants issue authenticated statements about principals, objects,
attributes, and relationships in a logic language, with reasoning based on
declarative policy rules. We show how to use the logic to implement a trust
infrastructure for cloud federation that extends the model of NSF GENI, a
federated IaaS testbed. It captures shared identity management, GENI authority
services, cross-site interconnection using L2 circuits, and a naming and access
control system similar to AWS Identity and Access Management (IAM), but
extended to a federated system without central control.
| 1 | 0 | 0 | 0 | 0 | 0 |
Deep Affordance-grounded Sensorimotor Object Recognition | It is well-established by cognitive neuroscience that human perception of
objects constitutes a complex process, where object appearance information is
combined with evidence about the so-called object "affordances", namely the
types of actions that humans typically perform when interacting with them. This
fact has recently motivated the "sensorimotor" approach to the challenging task
of automatic object recognition, where both information sources are fused to
improve robustness. In this work, the aforementioned paradigm is adopted,
surpassing current limitations of sensorimotor object recognition research.
Specifically, the deep learning paradigm is introduced to the problem for the
first time, developing a number of novel neuro-biologically and
neuro-physiologically inspired architectures that utilize state-of-the-art
neural networks for fusing the available information sources in multiple ways.
The proposed methods are evaluated using a large RGB-D corpus, which is
specifically collected for the task of sensorimotor object recognition and is
made publicly available. Experimental results demonstrate the utility of
affordance information to object recognition, achieving up to a 29% relative
error reduction by its inclusion.
| 1 | 0 | 0 | 0 | 0 | 0 |
Liquid crystal induced elasto-capillary suppression of crack formation in thin colloidal films | Drying of colloidal droplets on solid, rigid substrates is associated with a
capillary pressure developing within the droplet. Over the course of time, the
capillary pressure builds up due to droplet evaporation, resulting in the
formation of a colloidal thin film that is prone to crack formation. In this
study, we show that introducing a minimal amount of nematic liquid crystal
(NLC) can completely suppress the crack formation. The mechanism behind the
curbing of the crack formation may be attributed to the capillary
stress-absorbing cushion provided by the elastic arrangements of the liquid
crystal at the substrate-droplet interface. Cracks and allied surface
instabilities are detrimental to the quality of the final product like surface
coatings, and therefore, its suppression by an external inert additive is a
promising technique that will be of immense importance for several industrial
applications. We believe this fundamental investigation of crack suppression
will open up an entire avenue of applications for the NLCs in the field of
coatings, broadening its already existing wide range of benefits.
| 0 | 1 | 0 | 0 | 0 | 0 |
Autonomous Reactive Mission Scheduling and Task-Path Planning Architecture for Autonomous Underwater Vehicle | An Autonomous Underwater Vehicle (AUV) should carry out complex tasks in a
limited time interval. Since existing AUVs have limited battery capacity and
restricted endurance, they should autonomously manage mission time and the
resources to perform effective persistent deployment in longer missions. Task
assignment requires making decisions subject to resource constraints, while
tasks are assigned with costs and/or values that are budgeted in advance. Tasks
are distributed in a particular operation zone and mapped by a waypoint covered
network. Thus, designing an efficient routing and task-priority assignment
framework that considers the vehicle's availability and properties is essential
for increasing mission productivity and on-time mission completion. This depends strongly on
the order and priority of the tasks that are located between node-like
waypoints in an operation network. On the other hand, autonomous operation of
AUVs in an unfamiliar, dynamic underwater environment while performing quick
responses to sudden environmental changes is a complicated process. Water current
instabilities can deflect the vehicle in an undesired direction and compromise
the AUV's safety. The vehicle's robustness to strong environmental variations is
crucial for its safe and optimal operation in an uncertain and
dynamic environment. To this end, the AUV needs to have a general overview of
the environment in top level to perform an autonomous action selection (task
selection) and a lower level local motion planner to operate successfully in
dealing with continuously changing situations. This research deals with
developing a novel reactive control architecture to provide a higher level of
decision autonomy for the AUV operation that enables a single vehicle to
accomplish multiple tasks in a single mission in the face of periodic
disturbances in a turbulent and highly uncertain environment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Stealth Attacks on the Smart Grid | Random attacks that jointly minimize the amount of information acquired by
the operator about the state of the grid and the probability of attack
detection are presented. The attacks minimize the information acquired by the
operator by minimizing the mutual information between the observations and the
state variables describing the grid. Simultaneously, the attacker aims to
minimize the probability of attack detection by minimizing the Kullback-Leibler
(KL) divergence between the distribution when the attack is present and the
distribution under normal operation. The resulting cost function is the
weighted sum of the mutual information and the KL divergence mentioned above.
The tradeoff between the probability of attack detection and the reduction of
mutual information is governed by the weighting parameter on the KL divergence
term in the cost function. The probability of attack detection is evaluated as
a function of the weighting parameter. A sufficient condition on the weighting
parameter is given for achieving an arbitrarily small probability of attack
detection. The attack performance is numerically assessed on the IEEE 30-Bus
and 118-Bus test systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
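As a sketch of the attack construction described above, in notation of our own (not taken from the abstract): with grid state $X$, observations under attack $Y_a$, and observations under normal operation $Y$, the attacker solves

\[
\min \; I(X; Y_a) \;+\; \lambda\, D\big(P_{Y_a} \,\|\, P_{Y}\big), \qquad \lambda \ge 0,
\]

where the mutual-information term limits what the operator learns about the state, the Kullback-Leibler term controls the probability of attack detection, and the weighting parameter $\lambda$ trades one off against the other (larger $\lambda$ gives stealthier but less disruptive attacks).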
Gradual Tuning: a better way of Fine Tuning the parameters of a Deep Neural Network | In this paper we present an alternative strategy for fine-tuning the
parameters of a network. We named the technique Gradual Tuning. Once trained on
a first task, the network is fine-tuned on a second task by modifying a
progressively larger set of the network's parameters. We test Gradual Tuning on
different transfer learning tasks, using networks of different sizes trained
with different regularization techniques. The results show that, compared to
standard fine-tuning, our approach significantly reduces catastrophic forgetting
of the initial task, while retaining comparable if not better performance
on the new task.
| 1 | 0 | 0 | 0 | 0 | 0 |
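A minimal PyTorch-style sketch of the procedure as described, assuming the network exposes an ordered list of layer modules (model.layers); the stage schedule and optimizer below are illustrative assumptions, not the paper's exact recipe:

    import torch

    def gradual_tuning(model, loss_fn, loader, epochs_per_stage=2, lr=1e-4):
        # Freeze all parameters, then unfreeze a progressively larger
        # suffix of layers, retraining on the second task at each stage.
        layers = list(model.layers)
        for p in model.parameters():
            p.requires_grad = False
        for stage in range(1, len(layers) + 1):
            for p in layers[-stage].parameters():
                p.requires_grad = True   # one more layer joins the tuned set
            opt = torch.optim.Adam(
                [p for p in model.parameters() if p.requires_grad], lr=lr)
            for _ in range(epochs_per_stage):
                for x, y in loader:
                    opt.zero_grad()
                    loss_fn(model(x), y).backward()
                    opt.step()
        return model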
Bayesian power-spectrum inference with foreground and target contamination treatment | This work presents a joint and self-consistent Bayesian treatment of various
foreground and target contaminations when inferring cosmological power-spectra
and three dimensional density fields from galaxy redshift surveys. This is
achieved by introducing additional block sampling procedures for unknown
coefficients of foreground and target contamination templates to the previously
presented ARES framework for Bayesian large scale structure analyses. As a
result the method infers jointly and fully self-consistently three dimensional
density fields, cosmological power-spectra, luminosity dependent galaxy biases,
noise levels of respective galaxy distributions and coefficients for a set of a
priori specified foreground templates. In addition this fully Bayesian approach
permits detailed quantification of correlated uncertainties amongst all
inferred quantities and correctly marginalizes over observational systematic
effects. We demonstrate the validity and efficiency of our approach in
obtaining unbiased estimates of power-spectra via applications to realistic
mock galaxy observations subject to stellar contamination and dust extinction.
While simultaneously accounting for galaxy biases and unknown noise levels our
method reliably and robustly infers three dimensional density fields and
corresponding cosmological power-spectra from deep galaxy surveys. Further our
approach correctly accounts for joint and correlated uncertainties between
unknown coefficients of foreground templates and the amplitudes of the
power-spectrum. This effect amounts to correlations and anti-correlations of up
to $10$ percent across large ranges in Fourier space.
| 0 | 1 | 0 | 0 | 0 | 0 |
3-Lie bialgebras and 3-Lie classical Yang-Baxter equations in low dimensions | In this paper, we give some low-dimensional examples of local cocycle 3-Lie
bialgebras and double construction 3-Lie bialgebras which were introduced in
the study of the classical Yang-Baxter equation and Manin triples for 3-Lie
algebras. We give an explicit and practical formula to compute the
skew-symmetric solutions of the 3-Lie classical Yang-Baxter equation (CYBE). As
an illustration, we obtain all skew-symmetric solutions of the 3-Lie CYBE in
complex 3-Lie algebras of dimension 3 and 4 and then the induced local cocycle
3-Lie bialgebras. On the other hand, we classify the double construction 3-Lie
bialgebras for complex 3-Lie algebras in dimensions 3 and 4 and then give the
corresponding 8-dimensional pseudo-metric 3-Lie algebras.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adaptive pixel-super-resolved lensfree holography for wide-field on-chip microscopy | High-resolution wide field-of-view (FOV) microscopic imaging plays an
essential role in various fields of biomedicine, engineering, and physical
sciences. As an alternative to conventional lens-based scanning techniques,
lensfree holography provides a new way to effectively bypass the intrinsic
trade-off between the spatial resolution and FOV of conventional microscopes.
Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance
during image acquisition, and sub-optimum solution to the phase retrieval
problem, typical lensfree microscopes only produce compromised imaging quality
in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we
propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which
can solve, or at least partially alleviate, these limitations. Our approach
addresses the pixel aliasing problem by Z-scanning only, without resorting to
subpixel shifting or beam-angle manipulation. An automatic positional error
correction algorithm and an adaptive relaxation strategy are introduced to enhance
the robustness and SNR of reconstruction significantly. Based on APLI, we
perform full-FOV reconstruction of a USAF resolution target ($\sim$29.85
$mm^2$) and achieve a half-pitch lateral resolution of 770 $nm$, surpassing
the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor
pixel-size (1.67 $\mu m$) by a factor of 2.17. A full-FOV imaging result of a
typical dicot root is also provided to demonstrate promising potential
applications in biological imaging.
| 0 | 1 | 0 | 0 | 0 | 0 |
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks | We address personalization issues of image captioning, which have not been
discussed yet in previous research. For a query image, we aim to generate a
descriptive sentence, accounting for prior knowledge such as the user's active
vocabularies in previous documents. As applications of personalized image
captioning, we tackle two post automation tasks: hashtag prediction and post
generation, on our newly collected Instagram dataset, consisting of 1.1M posts
from 6.3K users. We propose a novel captioning model named Context Sequence
Memory Network (CSMN). Its unique updates over previous memory network models
include (i) exploiting memory as a repository for multiple types of context
information, (ii) appending previously generated words into memory to capture
long-term information without suffering from the vanishing gradient problem,
and (iii) adopting CNN memory structure to jointly represent nearby ordered
memory slots for better context understanding. With quantitative evaluation and
user studies via Amazon Mechanical Turk, we show the effectiveness of the three
novel features of CSMN and its performance enhancement for personalized image
captioning over state-of-the-art captioning models.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hardness of almost embedding simplicial complexes in $\mathbb R^d$ | A map $f\colon K\to \mathbb R^d$ of a simplicial complex is an almost
embedding if $f(\sigma)\cap f(\tau)=\emptyset$ whenever $\sigma,\tau$ are
disjoint simplices of $K$.
Theorem. Fix integers $d,k\ge2$ such that $d=\frac{3k}2+1$.
(a) Assume that $P\ne NP$. Then there exists a finite $k$-dimensional complex
$K$ that does not admit an almost embedding in $\mathbb R^d$ but for which
there exists an equivariant map $\tilde K\to S^{d-1}$.
(b) The algorithmic problem of recognizing almost embeddability of finite
$k$-dimensional complexes in $\mathbb R^d$ is NP-hard.
The proof is based on the technique from the Matoušek-Tancer-Wagner paper
(proving an analogous result for embeddings), and on singular versions of the
higher-dimensional Borromean rings lemma and a generalized van Kampen--Flores
theorem.
| 1 | 0 | 1 | 0 | 0 | 0 |
Asymmetry of short-term control of spatio-temporal gait parameters during treadmill walking | Optimization of energy cost determines average values of spatio-temporal gait
parameters such as step duration, step length or step speed. However, during
walking, humans need to adapt these parameters at every step to respond to
exogenous and/or endogenic perturbations. While some neurological mechanisms
that trigger these responses are known, our understanding of the fundamental
principles governing step-by-step adaptation remains elusive. We determined the
gait parameters of 20 healthy subjects with right-foot preference during
treadmill walking at speeds of 1.1, 1.4 and 1.7 m/s. We found that when the
value of the gait parameter was conspicuously greater (smaller) than the mean
value, it was either followed immediately by a smaller (greater) value of the
contralateral leg (interleg control), or the deviation from the mean value
decreased during the next movement of the ipsilateral leg (intraleg control). The
selection of step duration and the selection of step length during such
transient control events were performed in unique ways. We quantified the
symmetry of short-term control of gait parameters and observed the significant
dominance of the right leg in short-term control of all three parameters at
higher speeds (1.4 and 1.7 m/s).
| 0 | 1 | 0 | 0 | 0 | 0 |
Edge N-Level Sparse Visibility Graphs: Fast Optimal Any-Angle Pathfinding Using Hierarchical Taut Paths | In the Any-Angle Pathfinding problem, the goal is to find the shortest path
between a pair of vertices on a uniform square grid that is not constrained to
any fixed number of possible directions over the grid. Visibility Graphs are a
known optimal algorithm for solving the problem with the use of pre-processing.
However, Visibility Graphs are known to perform poorly in terms of running
time, especially on large, complex maps. In this paper, we introduce two
improvements over the Visibility Graph Algorithm to compute optimal paths.
Sparse Visibility Graphs (SVGs) are constructed by pruning unnecessary edges
from the original Visibility Graph. Edge N-Level Sparse Visibility Graphs
(ENLSVGs) is a hierarchical SVG built by iteratively pruning non-taut paths. We
also introduce Line-of-Sight Scans, a faster algorithm for building Visibility
Graphs over a grid. SVGs run much faster than Visibility Graphs by reducing the
average vertex degree. ENLSVGs, a hierarchical algorithm, improves this
further, especially on larger maps. On large maps, with the use of
pre-processing, these algorithms are orders of magnitude faster than existing
algorithms like Visibility Graphs and Theta*.
| 1 | 0 | 0 | 0 | 0 | 0 |
Model-Free Renewable Scenario Generation Using Generative Adversarial Networks | Scenario generation is an important step in the operation and planning of
power systems with high renewable penetration. In this work, we propose a
data-driven approach for scenario generation using generative adversarial
networks, which is based on two interconnected deep neural networks. Compared
with existing methods based on probabilistic models that are often hard to
scale or sample from, our method is data-driven, and captures renewable energy
production patterns in both temporal and spatial dimensions for a large number
of correlated resources. For validation, we use wind and solar time-series
data from NREL integration data sets. We demonstrate that the proposed method
is able to generate realistic wind and photovoltaic power profiles with full
diversity of behaviors. We also illustrate how to generate scenarios based on
different conditions of interest by using labeled data during training. For
example, scenarios can be conditioned on weather events~(e.g. high wind day) or
time of the year~(e.g., solar generation for a day in July). Because of the
feedforward nature of the neural networks, scenarios can be generated extremely
efficiently without sophisticated sampling techniques.
| 1 | 0 | 1 | 0 | 0 | 0 |
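For reference, the classical adversarial objective behind such a scenario generator (shown only as a sketch; the paper may use a variant such as a Wasserstein-distance-based loss) is

\[
\min_{G} \max_{D} \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
\]

where the generator $G$ maps noise $z$ (optionally concatenated with a condition label such as a weather event) to a synthetic renewable-power scenario and the discriminator $D$ scores realism; once trained, new scenarios are produced by cheap feedforward passes of $G$.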
Higher-genus quasimap wall-crossing via localization | We give a new proof of Ciocan-Fontanine and Kim's wall-crossing formula
relating the virtual classes of the moduli spaces of $\epsilon$-stable
quasimaps for different $\epsilon$ in any genus, whenever the target is a
complete intersection in projective space and there is at least one marked
point.
Our techniques involve a twisted graph space, which we expect to generalize
to yield wall-crossing formulas for general gauged linear sigma models.
| 0 | 0 | 1 | 0 | 0 | 0 |
Uncertainty in Cyber Security Investments | When undertaking cyber security risk assessments, we must assign numeric
values to metrics to compute the final expected loss that represents the risk
that an organization is exposed to due to cyber threats. Even if risk
assessment is motivated by real-world observations and data, there is always
a high chance of assigning inaccurate values due to different uncertainties
involved (e.g., evolving threat landscape, human errors) and the natural
difficulty of quantifying risk per se. Our previous work has proposed a model
and a software tool that empowers organizations to compute optimal cyber
security strategies given their financial constraints, i.e., available cyber
security budget. We have also introduced a general game-theoretic model with
uncertain payoffs (probability-distribution-valued payoffs) showing that such
uncertainty can be incorporated in the game-theoretic model by allowing payoffs
to be random. In this paper, we combine our aforesaid works and conclude
that although uncertainties in cyber security risk assessment lead, on average,
to different cyber security strategies, they do not play a significant role in
the final expected loss of the organization when using our model and
methodology to derive these strategies. We show that our tool is capable of
providing effective decision support. To the best of our knowledge this is the
first paper that investigates how uncertainties on various parameters affect
cyber security investments.
| 1 | 0 | 0 | 0 | 0 | 0 |
ELT Linear Algebra II | This paper is a continuation of [arXiv:1603.02204]. Exploded layered tropical
(ELT) algebra is an extension of tropical algebra with a structure of layers.
These layers allow us to use classical algebraic results in order to easily
prove analogous tropical results. Specifically, we prove and use an ELT version
of the transfer principle presented in [2]. In this paper we use the transfer
principle to prove an ELT version of the Cayley-Hamilton Theorem, and study the
multiplicity of the ELT determinant, ELT adjoint matrices and quasi-invertible
matrices. We also define a new notion of trace -- the essential trace -- and
study its properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inner-Scene Similarities as a Contextual Cue for Object Detection | Using image context is an effective approach for improving object detection.
Previously proposed methods used contextual cues that rely on semantic or
spatial information. In this work, we explore a different kind of contextual
information: inner-scene similarity. We present the CISS (Context by Inner
Scene Similarity) algorithm, which is based on the observation that two
visually similar sub-image patches are likely to share semantic identities,
especially when both appear in the same image. CISS uses base-scores provided
by a base detector and performs as a post-detection stage. For each candidate
sub-image (denoted anchor), the CISS algorithm finds a few similar sub-images
(denoted supporters), and, using them, calculates a new enhanced score for the
anchor. This is done by utilizing the base-scores of the supporters and a
pre-trained dependency model. The new scores are modeled as a linear function
of the base scores of the anchor and the supporters and are estimated using a
minimum mean square error optimization. This approach results in: (a) improved
detection of partly occluded objects (when there are similar non-occluded
objects in the scene), and (b) fewer false alarms (when the base detector
mistakenly classifies a background patch as an object). This work relates to
Duncan and Humphreys' "similarity theory," a psychophysical study, which
suggested that the human visual system perceptually groups similar image
regions and that the classification of one region is affected by the estimated
identity of the other. Experimental results demonstrate the enhancement of a
base detector's scores on the PASCAL VOC dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
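In notation of our own (not taken from the paper), the score-enhancement step described above can be sketched as

\[
s'_a \;=\; w_0\, s_a \;+\; \sum_{i=1}^{m} w_i\, s_{u_i},
\]

where $s_a$ is the base-score of the anchor, $s_{u_1},\dots,s_{u_m}$ are the base-scores of its supporters, and the weights $w_0,\dots,w_m$ are estimated by minimum mean square error using the pre-trained dependency model.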
Time-dependent spectral renormalization method | The spectral renormalization method was introduced by Ablowitz and Musslimani
in 2005 [Opt. Lett. 30, pp. 2140-2142] as an effective way to numerically
compute (time-independent) bound states for certain nonlinear boundary value
problems, such as those of nonlinear Schrödinger (NLS), Gross-Pitaevskii and
water-wave type, to mention a few. In this paper, we extend those ideas to
the time domain and introduce a time-dependent spectral renormalization method
as a numerical means to simulate linear and nonlinear evolution equations. The
essence of the method is to convert the underlying evolution equation from its
partial or ordinary differential form (using Duhamel's principle) into an
integral equation. The solution sought is then viewed as a fixed point in both
space and time. The resulting integral equation is then numerically solved
using a simple renormalized fixed-point iteration method. Convergence is
achieved by introducing a time-dependent renormalization factor which is
numerically computed from the physical properties of the governing evolution
equation. The proposed method has the ability to incorporate physics into the
simulations in the form of conservation laws or dissipation rates. This novel
scheme is implemented on benchmark evolution equations: the classical nonlinear
Schrödinger (NLS), integrable $PT$ symmetric nonlocal NLS and the viscous
Burgers' equations, each a prototypical example of a conservative or
dissipative dynamical system. Numerical implementation and
algorithm performance are also discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
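In generic notation (ours, not the paper's), Duhamel's principle converts an evolution equation $u_t = Lu + N(u)$, with linear part $L$ and nonlinearity $N$, into the integral form

\[
u(t) \;=\; e^{tL}u_0 \;+\; \int_0^t e^{(t-s)L}\, N\big(u(s)\big)\, ds \;=:\; \mathcal{T}[u](t),
\]

and the solution is sought as a fixed point of $\mathcal{T}$ via a renormalized iteration $u^{(j+1)} = R^{(j)}\,\mathcal{T}[u^{(j)}]$, where the time-dependent factor $R^{(j)}$ is chosen at each step so that the iterate satisfies a physical constraint such as a conservation law or a prescribed dissipation rate.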
Cloth Manipulation Using Random-Forest-Based Imitation Learning | We present a novel approach for robust manipulation of high-DOF deformable
objects such as cloth. Our approach uses a random forest-based controller that
maps the observed visual features of the cloth to an optimal control action of
the manipulator. The topological structure of this random forest-based
controller is determined automatically based on the training data consisting of
visual features and optimal control actions. This enables us to integrate the
overall process of training data classification and controller optimization
into an imitation learning (IL) approach. Our approach enables learning of a
robust control policy for cloth manipulation with guarantees on convergence. We
have evaluated our approach on different multi-task cloth manipulation
benchmarks such as flattening, folding and twisting. In practice, our approach
works well with different deformable features learned based on the specific
task or deep learning. Moreover, our controller outperforms a simple or
piecewise linear controller in terms of robustness to noise. In addition, our
approach is easy to implement and does not require much parameter tuning.
| 1 | 0 | 0 | 0 | 0 | 0 |
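The paper learns the forest topology jointly with controller optimization; as a generic, hedged stand-in for the feature-to-action mapping it describes, an off-the-shelf random forest regressor can illustrate the idea (the data below are synthetic placeholders for expert demonstrations):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    features = rng.standard_normal((500, 16))  # visual features of the cloth
    actions = rng.standard_normal((500, 6))    # expert control actions

    # Learn the visual-feature -> control-action map by regression.
    controller = RandomForestRegressor(n_estimators=200, random_state=0)
    controller.fit(features, actions)
    u = controller.predict(features[:1])       # action for the current frame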
Probabilistic Reduced-Order Modeling for Stochastic Partial Differential Equations | We discuss a Bayesian formulation to coarse-graining (CG) of PDEs where the
coefficients (e.g. material parameters) exhibit random, fine scale variability.
The direct solution to such problems requires grids that are small enough to
resolve this fine scale variability which unavoidably requires the repeated
solution of very large systems of algebraic equations. We establish a
physically inspired, data-driven coarse-grained model which learns a
low-dimensional set of microstructural features that are predictive of the
fine-grained model (FG) response. Once learned, those features provide a sharp
distribution over the coarse-scale effective coefficients of the PDE that are
most suitable for prediction of the fine-scale model output. This ultimately
allows us to replace the computationally expensive FG by a generative
probabilistic model based on evaluating the much cheaper CG several times.
Sparsity-enforcing priors further increase predictive efficiency and reveal
microstructural features that are important in predicting the FG response.
Moreover, the model yields probabilistic rather than single-point predictions,
which enables the quantification of the unavoidable epistemic uncertainty that
is present due to the information loss that occurs during the coarse-graining
process.
| 0 | 0 | 0 | 1 | 0 | 0 |
A practical guide to the simultaneous determination of protein structure and dynamics using metainference | Accurate protein structural ensembles can be determined with metainference, a
Bayesian inference method that integrates experimental information with prior
knowledge of the system and deals with all sources of uncertainty and errors as
well as with system heterogeneity. Furthermore, metainference can be
implemented using the metadynamics approach, which enables the computational
study of complex biological systems requiring extensive conformational
sampling. In this chapter, we provide a step-by-step guide to perform and
analyse metadynamic metainference simulations using the ISDB module of the
open-source PLUMED library, as well as a series of practical tips to avoid
common mistakes. Specifically, we will guide the reader in the process of
learning how to model the structural ensemble of a small disordered peptide by
combining state-of-the-art molecular mechanics force fields with nuclear
magnetic resonance data, including chemical shifts, scalar couplings and
residual dipolar couplings.
| 0 | 0 | 0 | 0 | 1 | 0 |
Propensity score prediction for electronic healthcare databases using Super Learner and High-dimensional Propensity Score Methods | The optimal learner for prediction modeling varies depending on the
underlying data-generating distribution. Super Learner (SL) is a generic
ensemble learning algorithm that uses cross-validation to select among a
"library" of candidate prediction models. The SL is not restricted to a single
prediction model, but uses the strengths of a variety of learning algorithms to
adapt to different databases. While the SL has been shown to perform well in a
number of settings, it has not been thoroughly evaluated in large electronic
healthcare databases that are common in pharmacoepidemiology and comparative
effectiveness research. In this study, we applied and evaluated the performance
of the SL in its ability to predict treatment assignment using three electronic
healthcare databases. We considered a library of algorithms that consisted of
both nonparametric and parametric models. We also considered a novel strategy
for prediction modeling that combines the SL with the high-dimensional
propensity score (hdPS) variable selection algorithm. Predictive performance
was assessed using three metrics: the negative log-likelihood, area under the
curve (AUC), and time complexity. Results showed that the best individual
algorithm, in terms of predictive performance, varied across datasets. The SL
was able to adapt to the given dataset and optimize predictive performance
relative to any individual learner. Combining the SL with the hdPS was the most
consistent prediction method and may be promising for PS estimation and
prediction modeling in electronic healthcare databases.
| 0 | 0 | 0 | 1 | 0 | 0 |
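A rough scikit-learn approximation of the Super Learner's cross-validated combination of a candidate library (a sketch under the assumption that stacking with a cross-validated meta-learner is an acceptable stand-in; the data are synthetic placeholders for covariates X and treatment assignment y):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 20))        # covariates
    y = rng.integers(0, 2, 1000)               # treatment assignment

    # Library of parametric and nonparametric candidate learners
    library = [
        ("logit", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ]
    sl = StackingClassifier(estimators=library,
                            final_estimator=LogisticRegression(),
                            cv=10, stack_method="predict_proba")
    sl.fit(X, y)
    propensity = sl.predict_proba(X)[:, 1]     # estimated propensity scores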
Transmission spectroscopy of the hot Jupiter TrES-3 b: Disproof of an overly large Rayleigh-like feature | Context. Transit events of extrasolar planets offer the opportunity to study
the composition of their atmospheres. Previous work on transmission
spectroscopy of the close-in gas giant TrES-3 b revealed an increase in
absorption towards blue wavelengths of very large amplitude in terms of
atmospheric pressure scale heights, too large to be explained by
Rayleigh-scattering in the planetary atmosphere. Aims. We present a follow-up
study of the optical transmission spectrum of the hot Jupiter TrES-3 b to
investigate the strong increase in opacity towards short wavelengths found by a
previous study. Furthermore, we aim to estimate the effect of stellar spots on
the transmission spectrum. Methods. This work uses previously published long
slit spectroscopy transit data of the Gran Telescopio Canarias (GTC) and
published broad band observations as well as new observations in different
bands from the near-UV to the near-IR, for a homogeneous transit light curve
analysis. Additionally, a long-term photometric monitoring of the TrES-3 host
star was performed. Results. Our newly analysed GTC spectroscopic transit
observations show a slope of much lower amplitude than previous studies. We
conclude from our results that the previously reported increasing signal towards
short wavelengths is not intrinsic to the TrES-3 system. Furthermore, the broad
band spectrum favours a flat spectrum. Long-term photometric monitoring rules
out a significant modification of the transmission spectrum by unocculted star
spots.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters | Markov decision processes (MDPs) are a popular model for performance analysis
and optimization of stochastic systems. The parameters of stochastic behavior
of MDPs are estimated from empirical observations of a system; their values are
not known precisely. Different types of MDPs with uncertain, imprecise or
bounded transition rates or probabilities and rewards exist in the literature.
Commonly, analysis of models with uncertainties amounts to searching for the
most robust policy which means that the goal is to generate a policy with the
greatest lower bound on performance (or, symmetrically, the lowest upper bound
on costs). However, hedging against an unlikely worst case may lead to losses
in other situations. In general, one is interested in policies that behave well
in all situations which results in a multi-objective view on decision making.
In this paper, we consider policies for the expected discounted reward
measure of MDPs with uncertain parameters. In particular, the approach is
defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best
and average case performances of a policy are analyzed simultaneously, which
yields a multi-scenario multi-objective optimization problem. The paper
presents and evaluates approaches to compute the pure Pareto optimal policies
in the value vector space.
| 1 | 0 | 0 | 0 | 0 | 0 |
Certificates for triangular equivalence and rank profiles | In this paper, we give novel certificates for triangular equivalence and rank
profiles. These certificates enable one to verify the row or column rank profiles
or the whole rank profile matrix faster than recomputing them, with a
negligible overall overhead. We first provide quadratic time and space
non-interactive certificates saving the logarithmic factors of previously known
ones. Then we propose interactive certificates for the same problems whose
Monte Carlo verification complexity requires a small constant number of
matrix-vector multiplications, a linear space, and a linear number of extra
field operations. As an application we also give an interactive protocol
certifying the determinant of dense matrices, faster than the best previously
known one.
| 1 | 0 | 0 | 0 | 0 | 0 |
TF Boosted Trees: A scalable TensorFlow based framework for gradient boosting | TF Boosted Trees (TFBT) is a new open-source framework for the distributed
training of gradient boosted trees. It is based on TensorFlow, and its
distinguishing features include a novel architecture, automatic loss
differentiation, layer-by-layer boosting that results in smaller ensembles and
faster prediction, principled multi-class handling, and a number of
regularization techniques to prevent overfitting.
| 1 | 0 | 0 | 1 | 0 | 0 |
Circuit Treewidth, Sentential Decision, and Query Compilation | The evaluation of a query over a probabilistic database boils down to
computing the probability of a suitable Boolean function, the lineage of the
query over the database. The method of query compilation approaches the task in
two stages: first, the query lineage is implemented (compiled) in a circuit
form where probability computation is tractable; and second, the desired
probability is computed over the compiled circuit. A basic theoretical quest in
query compilation is that of identifying pertinent classes of queries whose
lineages admit compact representations over increasingly succinct, tractable
circuit classes. Building on previous work by Jha and Suciu (2012) and Petke and
Razgon (2013), we focus on queries whose lineages admit circuit implementations
with small treewidth, and investigate their compilability within tame classes
of decision diagrams. In perfect analogy with the characterization of bounded
circuit pathwidth by bounded OBDD width, we show that a class of Boolean
functions has bounded circuit treewidth if and only if it has bounded SDD
width. Sentential decision diagrams (SDDs) are central in knowledge
compilation, being essentially as tractable as OBDDs but exponentially more
succinct. By incorporating constant width SDDs and polynomial size SDDs, we
refine the panorama of query compilation for unions of conjunctive queries with
and without inequalities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improvements in the Small Sample Efficiency of the Minimum $S$-Divergence Estimators under Discrete Models | This paper considers the problem of inliers and empty cells and the resulting
issue of relative inefficiency in estimation under pure samples from a discrete
population when the sample size is small. Many minimum divergence estimators in
the $S$-divergence family, although possessing very strong outlier stability
properties, often have very poor small sample efficiency in the presence of
inliers and some are not even defined in the presence of a single empty cell;
this limits the practical applicability of these estimators, in spite of their
otherwise sound robustness properties and high asymptotic efficiency. Here, we
will study a penalized version of the $S$-divergences such that the resulting
minimum divergence estimators are free from these issues without altering their
robustness properties and asymptotic efficiencies. We will give a general proof
for the asymptotic properties of these minimum penalized $S$-divergence
estimators. This provides a significant addition to the literature as the
asymptotics of penalized divergences which are not finitely defined are
currently unavailable in the literature. The small sample advantages of the
minimum penalized $S$-divergence estimators are examined through an extensive
simulation study and some empirical suggestions regarding the choice of the
relevant underlying tuning parameters are also provided.
| 0 | 0 | 1 | 1 | 0 | 0 |
Homogeneous Kobayashi-hyperbolic manifolds with automorphism group of subcritical dimension | We determine all connected homogeneous Kobayashi-hyperbolic manifolds of
dimension $n\ge 2$ whose holomorphic automorphism group has dimension $n^2-3$.
This result complements existing classifications for automorphism group
dimension $n^2-2$ (which is in some sense critical) and greater.
| 0 | 0 | 1 | 0 | 0 | 0 |
Centroidal localization game | One important problem in a network is to locate an (invisible) moving entity
by using distance-detectors placed at strategic locations. For instance, the
metric dimension of a graph $G$ is the minimum number $k$ of detectors placed
in some vertices $\{v_1,\cdots,v_k\}$ such that the vector $(d_1,\cdots,d_k)$
of the distances $d(v_i,r)$ between the detectors and the entity's location $r$
allows one to uniquely determine $r \in V(G)$. In a more realistic setting, instead
of getting the exact distance information, given devices placed in
$\{v_1,\cdots,v_k\}$, we get only relative distances between the entity's
location $r$ and the devices (for every $1\leq i,j\leq k$, it is revealed
whether $d(v_i,r)$ is greater than, less than, or equal to $d(v_j,r)$). The centroidal dimension of a
graph $G$ is the minimum number of devices required to locate the entity in
this setting.
We consider the natural generalization of the latter problem, where vertices
may be probed sequentially until the moving entity is located. At every turn, a
set $\{v_1,\cdots,v_k\}$ of vertices is probed and then the relative distances
between the vertices $v_i$ and the current location $r$ of the entity are
given. If not located, the moving entity may move along one edge. Let $\zeta^*
(G)$ be the minimum $k$ such that the entity is eventually located, whatever it
does, in the graph $G$.
We prove that $\zeta^* (T)\leq 2$ for every tree $T$ and give an upper bound
on $\zeta^*(G\square H)$ in the Cartesian product of graphs $G$ and $H$. Our main
result is that $\zeta^* (G)\leq 3$ for any outerplanar graph $G$. We then prove
that $\zeta^* (G)$ is bounded by the pathwidth of $G$ plus 1 and that the
optimization problem of determining $\zeta^* (G)$ is NP-hard in general graphs.
Finally, we show that approximating (up to any constant distance) the entity's
location in the Euclidean plane requires at most two vertices per turn.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improving the Burgess bound via Polya-Vinogradov | We show that even mild improvements of the Polya-Vinogradov inequality would
imply significant improvements of Burgess' bound on character sums. Our main
ingredients are a lower bound on certain types of character sums (coming from
works of the second author joint with J. Bober and Y. Lamzouri) and a
quantitative relationship between the mean and the logarithmic mean of a
completely multiplicative function.
| 0 | 0 | 1 | 0 | 0 | 0 |
Processes accompanying stimulated recombination of atoms | The phenomenon of polarization of nuclei in the process of stimulated
recombination of atoms in the field of circularly polarized laser radiation is
considered. This effect is considered for the case of the proton-electron beams
used in the method of electron cooling. An estimate is obtained for the maximum
degree of polarization of the protons on components of the hyperfine structure
of the 2s state of the hydrogen atom.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Complexity of Factors of Multivariate Polynomials | The existence of string functions, which are not polynomial time computable,
but whose graph is checkable in polynomial time, is a basic assumption in
cryptography. We prove that in the framework of algebraic complexity, there are
no such families of polynomial functions of polynomially bounded degree over
fields of characteristic zero. The proof relies on a polynomial upper bound on
the approximative complexity of a factor g of a polynomial f in terms of the
(approximative) complexity of f and the degree of the factor g. This extends a
result by Kaltofen (STOC 1986). The concept of approximative complexity allows
one to cope with the case that a factor has an exponential multiplicity, by using a
perturbation argument. Our result extends to randomized (two-sided error)
decision complexity.
| 1 | 0 | 0 | 0 | 0 | 0 |
Chatbots as Conversational Recommender Systems in Urban Contexts | In this paper, we outline the vision of chatbots that facilitate the
interaction between citizens and policy-makers at the city scale. We report the
results of a co-design session attended by more than 60 participants. We give
an outlook of how some challenges associated with such chatbot systems could be
addressed in the future.
| 1 | 0 | 0 | 0 | 0 | 0 |
An estimate of the first non-zero eigenvalue of the Laplacian by the Ricci curvature on edges of graphs | We define the distance between edges of graphs and study the coarse Ricci
curvature on edges. We consider the Laplacian on edges based on the
Jost-Horak's definition of the Laplacian on simplicial complexes. As one of our
main results, we obtain an estimate of the first non-zero eigenvalue of the
Laplacian by the Ricci curvature for a regular graph.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Deep Reinforcement Learning Chatbot | We present MILABOT: a deep reinforcement learning chatbot developed by the
Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize
competition. MILABOT is capable of conversing with humans on popular small talk
topics through both speech and text. The system consists of an ensemble of
natural language generation and retrieval models, including template-based
models, bag-of-words models, sequence-to-sequence neural network and latent
variable neural network models. By applying reinforcement learning to
crowdsourced data and real-world user interactions, the system has been trained
to select an appropriate response from the models in its ensemble. The system
has been evaluated through A/B testing with real-world users, where it
performed significantly better than many competing systems. Due to its machine
learning architecture, the system is likely to improve with additional data.
| 1 | 0 | 0 | 1 | 0 | 0 |
Stochastic Optimal Power Flow Based on Data-Driven Distributionally Robust Optimization | We propose a data-driven method to solve a stochastic optimal power flow
(OPF) problem based on limited information about forecast error distributions.
The objective is to determine power schedules for controllable devices in a
power network to balance operation cost and conditional value-at-risk (CVaR) of
device and network constraint violations. These decisions include scheduled
power output adjustments and reserve policies, which specify planned reactions
to forecast errors in order to accommodate fluctuating renewable energy
sources. Instead of assuming the uncertainties across the networks follow
prescribed probability distributions, we assume the distributions are only
observable through a finite training dataset. By utilizing the Wasserstein
metric to quantify differences between the empirical data-based distribution
and the real data-generating distribution, we formulate a distributionally
robust OPF problem to search for power schedules and reserve
policies that are robust to sampling errors inherent in the dataset. A simple
numerical example illustrates inherent tradeoffs between operation cost and
risk of constraint violation, and we show how our proposed method offers a
data-driven framework to balance these objectives.
| 1 | 0 | 1 | 0 | 0 | 0 |
Deep Learning for Real-Time Crime Forecasting and its Ternarization | Real-time crime forecasting is important. However, accurate prediction of
when and where the next crime will happen is difficult. No known physical model
provides a reasonable approximation to such a complex system. Historical crime
data are sparse in both space and time and the signal of interest is weak. In
this work, we first present a proper representation of crime data. We then
adapt the spatial temporal residual network on the well represented data to
predict the distribution of crime in Los Angeles at the scale of hours in
neighborhood-sized parcels. These experiments as well as comparisons with
several existing approaches to prediction demonstrate the superiority of the
proposed model in terms of accuracy. Finally, we present a ternarization
technique to address the resource-consumption issue for deployment in the real
world. This work is an extension of our short conference proceedings paper [Wang
et al., arXiv:1707.03340].
| 1 | 0 | 0 | 1 | 0 | 0 |
Thermal Modeling of Comet-Like Objects from AKARI Observation | We investigated the physical properties of the comet-like objects 107P/(4015)
Wilson--Harrington (4015WH) and P/2006 HR30 (Siding Spring; HR30) by applying a
simple thermophysical model (TPM) to the near-infrared spectroscopy and
broadband observation data obtained by the AKARI satellite of JAXA when they showed
no detectable comet-like activity. We selected these two targets since the
tendency of thermal inertia to decrease with the size of an asteroid, which has
been demonstrated in recent studies, has not been confirmed for comet-like
objects. It was found that 4015WH, which was originally discovered as a comet
but has not shown comet-like activity since its discovery, has effective size $
D= $ 3.74--4.39 km and geometric albedo $ p_V \approx $ 0.040--0.055 with
thermal inertia $ \Gamma = $ 100--250 J m$ ^{-2} $ K$ ^{-1} $ s$ ^{-1/2}$. The
corresponding grain size is estimated to be 1--3 mm. We also found that HR30,
which was observed as a bare cometary nucleus at the time of our observation,
has $ D= $ 23.9--27.1 km and $ p_V= $ 0.035--0.045 with $ \Gamma= $ 250--1,000
J m$ ^{-2} $ K$ ^{-1} $ s$ ^{-1/2}$. We conjecture a pole latitude of
$-20^{\circ} \lesssim \beta_s \lesssim +60^{\circ}$. The results for both targets
are consistent with previous studies. Based on the results, we propose that
comet-like objects are not clearly distinguishable from their asteroidal counterparts
on the $ D $--$ \Gamma $ plane.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improvement to the Prediction of Fuel Cost Distributions Using ARIMA Model | Availability of a validated, realistic fuel cost model is a prerequisite to
the development and validation of new optimization methods and control tools.
This paper uses an autoregressive integrated moving average (ARIMA) model with
historical fuel cost data to develop a three-step-ahead fuel cost
distribution prediction. First, the data features of Form EIA-923 are explored
and the natural gas fuel costs of Texas generating facilities are used to
develop and validate the forecasting algorithm for the Texas example.
Furthermore, the spot price associated with the natural gas hub in Texas is
utilized to enhance the fuel cost prediction. The forecasted data is fit to a
normal distribution and the Kullback-Leibler divergence is employed to evaluate
the difference between the real fuel cost distributions and the estimated
distributions. The comparative evaluation suggests the proposed forecasting
algorithm is effective in general and is worth pursuing further.
| 0 | 0 | 0 | 1 | 0 | 0 |
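A minimal sketch of the forecasting-and-evaluation pipeline using statsmodels (the ARIMA order, the synthetic series, and the normal fit are illustrative assumptions):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    costs = 30.0 + np.cumsum(rng.standard_normal(120))  # stand-in fuel-cost series

    # Fit ARIMA to the historical series and forecast three steps ahead.
    forecast = ARIMA(costs, order=(1, 1, 1)).fit().forecast(steps=3)

    def kl_normal(mu0, s0, mu1, s1):
        # KL divergence between N(mu0, s0^2) and N(mu1, s1^2), used to compare
        # the real and the estimated fuel-cost distributions.
        return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5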
Timed Discrete-Event Systems are Synchronous Product Structures | In this work, we show that the model of timed discrete-event systems (TDES)
proposed by Brandin and Wonham is essentially a synchronous product structure.
This resolves an open problem that has remained unaddressed for the past 25
years and has its application in developing a more efficient timed state-tree
structures (TSTS) framework. The proof is constructive in the sense that an
explicit synchronous production rule is provided to generate a TDES from the
activity automaton and the timer automata after a suitable transformation of
the model.
| 1 | 0 | 0 | 0 | 0 | 0 |
Maturation Trajectories of Cortical Resting-State Networks Depend on the Mediating Frequency Band | The functional significance of resting state networks and their abnormal
manifestations in psychiatric disorders are firmly established, as is the
importance of the cortical rhythms in mediating these networks. Resting state
networks are known to undergo substantial reorganization from childhood to
adulthood, but whether distinct cortical rhythms, which are generated by
separable neural mechanisms and are often manifested abnormally in psychiatric
conditions, mediate maturation differentially, remains unknown. Using
magnetoencephalography (MEG) to map frequency band specific maturation of
resting state networks from age 7 to 29 in 162 participants (31 independent),
we found significant changes with age in networks mediated by the beta
(13-30Hz) and gamma (31-80Hz) bands. More specifically, gamma band mediated
networks followed an expected asymptotic trajectory, but beta band mediated
networks followed a linear trajectory. Network integration increased with age
in gamma band mediated networks, while local segregation increased with age in
beta band mediated networks. Spatially, the hubs that changed in importance
with age in the beta band mediated networks had relatively little overlap with
those that showed the greatest changes in the gamma band mediated networks.
These findings are relevant for our understanding of the neural mechanisms of
cortical maturation, in both typical and atypical development.
| 0 | 0 | 0 | 1 | 1 | 0 |
High-power closed-cycle $^4$He cryostat with top-loading sample exchange | We report on the development of a versatile cryogen-free laboratory cryostat
based upon a commercial pulse tube cryocooler. It provides enough cooling power
for continuous recondensation of circulating $^4$He gas at a condensation
pressure of approximately 250~mbar. Moreover, the cryostat allows for exchange
of different cryostat-inserts as well as fast and easy "wet" top-loading of
samples directly into the 1 K pot with a turn-over time of less than 75~min.
Starting from room temperature and using a $^4$He cryostat-insert, a base
temperature of 1.0~K is reached within approximately seven hours and a cooling
power of 250~mW is established at 1.24~K.
| 0 | 1 | 0 | 0 | 0 | 0 |
A new proof of Kirchberg's $\mathcal O_2$-stable classification | I present a new proof of Kirchberg's $\mathcal O_2$-stable classification
theorem: two separable, nuclear, stable/unital, $\mathcal O_2$-stable
$C^\ast$-algebras are isomorphic if and only if their ideal lattices are order
isomorphic, or equivalently, their primitive ideal spaces are homeomorphic.
Many intermediate results do not depend on pure infiniteness of any sort.
| 0 | 0 | 1 | 0 | 0 | 0 |
Scaling evidence of the homothetic nature of cities | In this paper we analyse the profile of land use and population density with
respect to the distance to the city centre for the European city. In addition
to providing the radial population density and soil-sealing profiles for a
large set of cities, we demonstrate a remarkable constancy of the profiles
across city size.
Our analysis combines the GMES/Copernicus Urban Atlas 2006 land use database
at 5m resolution for 300 European cities with more than 100,000 inhabitants and
the Geostat population grid at 1km resolution. Population is allocated
proportionally to surface and weighted by soil sealing and density classes of
the Urban Atlas. We analyse the profile of each artificial land use and
population with distance to the town hall.
In line with earlier literature, we confirm the strong monocentricity of the
European city and the negative exponential curve for population density.
Moreover, we find that land use curves, in particular the share of housing and
roads, scale along the two horizontal dimensions with the square root of city
population, while population curves scale in three dimensions with the cubic
root of city population. In short, European cities of different sizes are
homothetic in terms of land use and population density. While earlier
literature documented the scaling of average densities (total surface and
population) with city size, we document the scaling of the whole radial
distance profile with city size, thus linking intra-urban radial analysis with
systems of cities. In addition to providing a new empirical view of the
European city, our scaling offers a set of practical and coherent definitions
of a city, independent of its population, from which we can re-question urban
scaling laws and Zipf's law for cities.
| 0 | 1 | 0 | 0 | 0 | 0 |
Subconvex bounds for Hecke-Maass forms on compact arithmetic quotients of semisimple Lie groups | Let $H$ be a semisimple algebraic group, $K$ a maximal compact subgroup of
$G:=H(\mathbb{R})$, and $\Gamma\subset H(\mathbb{Q})$ a congruence arithmetic
subgroup. In this paper, we generalize existing subconvex bounds for
Hecke-Maass forms on the locally symmetric space $\Gamma \backslash G/K$ to
corresponding bounds on the arithmetic quotient $\Gamma \backslash G$ for
cocompact lattices using the spectral function of an elliptic operator. The
bounds obtained extend known subconvex bounds for automorphic forms to
non-trivial $K$-types, yielding subconvex bounds for new classes of automorphic
representations, and constitute subconvex bounds for eigenfunctions on compact
manifolds with both positive and negative sectional curvature. We also obtain
new subconvex bounds for holomorphic modular forms in the weight aspect.
| 0 | 0 | 1 | 0 | 0 | 0 |
Studying Positive Speech on Twitter | We present results of empirical studies on positive speech on Twitter. By
positive speech we understand speech that works for the betterment of a given
situation, in this case relations between different communities in a
conflict-prone country. We worked with four Twitter data sets. Through
semi-manual opinion mining, we found that positive speech accounted for < 1% of
the data. In fully automated studies, we tested two approaches: unsupervised
statistical analysis, and supervised text classification based on distributed
word representation. We discuss benefits and challenges of those approaches and
report empirical evidence obtained in the study.
| 1 | 0 | 0 | 0 | 0 | 0 |
Geostatistical inference in the presence of geomasking: a composite-likelihood approach | In almost any geostatistical analysis, one of the underlying, often implicit,
modelling assumptions is that the spatial locations, where measurements are
taken, are recorded without error. In this study we develop geostatistical
inference when this assumption is not valid. This is often the case when, for
example, individual address information is randomly altered to provide privacy
protection or imprecisions are induced by geocoding processes and
measurement devices. Our objective is to develop a method of inference based on
the composite likelihood that overcomes the inherent computational limits of
the full likelihood method as set out in Fanshawe and Diggle (2011). Through a
simulation study, we then compare the performance of our proposed approach with
an N-weighted least squares estimation procedure, based on a corrected version
of the empirical variogram. Our results indicate that the composite-likelihood
approach outperforms the latter, leading to smaller root-mean-square-errors
in the parameter estimates. Finally, we illustrate an application of our method
to analyse data on malnutrition from a Demographic and Health Survey conducted
in Senegal in 2011, where locations were randomly perturbed to protect the
privacy of respondents.
| 0 | 0 | 0 | 1 | 0 | 0 |
Large sample analysis of the median heuristic | In kernel methods, the median heuristic has been widely used as a way of
setting the bandwidth of RBF kernels. While its empirical performances make it
a safe choice under many circumstances, there is little theoretical
understanding of why this is the case. Our aim in this paper is to advance our
understanding of the median heuristic by focusing on the setting of kernel
two-sample test. We collect new findings that may be of interest for both
theoreticians and practitioners. In theory, we provide a convergence analysis
that shows the asymptotic normality of the bandwidth chosen by the median
heuristic in the setting of kernel two-sample test. Systematic empirical
investigations are also conducted in simple settings, comparing the
performances based on the bandwidths chosen by the median heuristic and those
by the maximization of test power.
| 0 | 0 | 1 | 1 | 0 | 0 |
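A minimal sketch of the heuristic itself (conventions vary, e.g. median of pairwise distances versus median of squared distances; the distance-based form is shown here):

    import numpy as np
    from scipy.spatial.distance import pdist

    def median_heuristic_bandwidth(X):
        # Bandwidth = median of all pairwise Euclidean distances in the sample.
        return np.median(pdist(X))

    def rbf_kernel(X, Y, h):
        # Gaussian RBF kernel matrix with bandwidth h.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * h ** 2))

    X = np.random.default_rng(0).standard_normal((100, 5))
    K = rbf_kernel(X, X, median_heuristic_bandwidth(X))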
Divergence Framework for EEG based Multiclass Motor Imagery Brain Computer Interface | As with most real-world data, the ubiquitous presence of
non-stationarities in EEG signals significantly perturbs the feature
distribution, thus deteriorating the performance of Brain Computer Interfaces. In
this letter, a novel method is proposed based on Joint Approximate
Diagonalization (JAD) to optimize stationarity for multiclass motor imagery
Brain Computer Interface (BCI) in an information theoretic framework.
Specifically, in the proposed method, we estimate the subspace which optimizes
the discriminability between the classes and simultaneously preserve
stationarity within the motor imagery classes. We determine the subspace for
the proposed approach through optimization using gradient descent on an
orthogonal manifold. The performance of the proposed stationarity enforcing
algorithm is compared to that of baseline One-Versus-Rest (OVR)-CSP and JAD on
publicly available BCI competition IV dataset IIa. Results show an
improvement in average classification accuracies across the subjects over the
baseline algorithms, demonstrating the benefit of alleviating within-session
non-stationarities.
| 1 | 0 | 0 | 0 | 1 | 0 |
Accelerated Stochastic Power Iteration | Principal component analysis (PCA) is one of the most powerful tools in
machine learning. The simplest method for PCA, the power iteration, requires
$\mathcal O(1/\Delta)$ full-data passes to recover the principal component of a
matrix with eigen-gap $\Delta$. Lanczos, a significantly more complex method,
achieves an accelerated rate of $\mathcal O(1/\sqrt{\Delta})$ passes. Modern
applications, however, motivate methods that only ingest a subset of available
data, known as the stochastic setting. In the online stochastic setting, simple
algorithms like Oja's iteration achieve the optimal sample complexity $\mathcal
O(\sigma^2/\Delta^2)$. Unfortunately, they are fully sequential, and also
require $\mathcal O(\sigma^2/\Delta^2)$ iterations, far from the $\mathcal
O(1/\sqrt{\Delta})$ rate of Lanczos. We propose a simple variant of the power
iteration with an added momentum term, that achieves both the optimal sample
and iteration complexity. In the full-pass setting, standard analysis shows
that momentum achieves the accelerated rate, $\mathcal O(1/\sqrt{\Delta})$. We
demonstrate empirically that naively applying momentum to a stochastic method
does not result in acceleration. We perform a novel, tight variance analysis
that reveals the "breaking-point variance" beyond which this acceleration does
not occur. By combining this insight with modern variance reduction techniques,
we construct stochastic PCA algorithms, for the online and offline setting,
that achieve an accelerated iteration complexity $\mathcal O(1/\sqrt{\Delta})$.
Due to the embarrassingly parallel nature of our methods, this acceleration
translates directly to wall-clock time if deployed in a parallel environment.
Our approach is very general, and applies to many non-convex optimization
problems that can now be accelerated using the same technique.
| 1 | 0 | 1 | 1 | 0 | 0 |
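A minimal sketch of the momentum-augmented power iteration in the full-pass setting (the exact update form and the choice of the momentum coefficient beta are assumptions in the spirit of the abstract; in the standard heavy-ball analysis, acceleration is obtained for beta near $\lambda_2^2/4$):

    import numpy as np

    def power_iteration_momentum(A, beta, iters=500, seed=0):
        # x_{t+1} = A x_t - beta * x_{t-1}, with both iterates rescaled
        # jointly so the recursion stays numerically stable.
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        x_prev = np.zeros_like(x)
        for _ in range(iters):
            x_next = A @ x - beta * x_prev
            scale = np.linalg.norm(x_next)
            x, x_prev = x_next / scale, x / scale
        return x  # approximate top eigenvector of A

    A = np.cov(np.random.default_rng(1).standard_normal((50, 200)))
    v = power_iteration_momentum(A, beta=0.0)  # beta=0 recovers plain power iteration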
Generalized phase mixing: Turbulence-like behaviour from unidirectionally propagating MHD waves | We present the results of three-dimensional (3D) ideal magnetohydrodynamics
(MHD) simulations on the dynamics of a perpendicularly inhomogeneous plasma
disturbed by propagating Alfvénic waves. Simpler versions of this scenario
have been extensively studied as the phenomenon of phase mixing. We show that,
by generalizing the textbook version of phase mixing, interesting phenomena are
obtained, such as turbulence-like behavior and complex current-sheet structure,
a novelty in longitudinally homogeneous plasma excited by unidirectionally
propagating waves. This constitutes an important finding for turbulence-related
phenomena in astrophysics in general, relaxing the conditions that have to be
fulfilled in order to generate turbulent behavior.
| 0 | 1 | 0 | 0 | 0 | 0 |
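For readers unfamiliar with the textbook phenomenon the paper generalizes, the toy script below illustrates classical phase mixing in a hypothetical 1.5D setting: an Alfvén wave on a background whose Alfvén speed varies across x dephases, so transverse gradients grow linearly in time. This is only the starting point the abstract refers to, not the paper's 3D MHD setup.

```python
# Toy illustration of textbook phase mixing (assumed setup, not the paper's):
# v_y = sin(k (z - vA(x) t)) on a perpendicularly inhomogeneous background.
import numpy as np

def phase_mixed_wave(x, z, t, k=2.0 * np.pi, vA0=1.0, dvA=0.3):
    vA = vA0 + dvA * x                  # Alfven speed varying across x
    return np.sin(k * (z - vA * t))     # unidirectionally propagating wave

x = np.linspace(0.0, 1.0, 400)
for t in (0.0, 10.0, 40.0):
    v = phase_mixed_wave(x, z=0.0, t=t)
    grad = np.gradient(v, x)            # transverse gradient d v_y / d x
    print(t, np.abs(grad).max())        # grows roughly like k * dvA * t
```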
Discovering Bayesian Market Views for Intelligent Asset Allocation | Along with the advance of opinion mining techniques, public mood has been
found to be a key element for stock market prediction. However, how market
participants' behavior is affected by public mood has been rarely discussed.
Consequently, there has been little progress in leveraging public mood for the
asset allocation problem, which is preferably solved in a trusted and
interpretable way. To address the issue of incorporating public mood extracted
from social media, we propose to formalize public mood into market views, because
market views can be integrated into the modern portfolio theory. In our
framework, the optimal market views will maximize returns in each period with a
Bayesian asset allocation model. We train two neural models to generate the
market views and benchmark their performance against other popular asset
allocation strategies. Our experimental results suggest that the formalization
of market views significantly increases the profitability (5% to 10% annually)
of the simulated portfolio at a given risk level.
| 0 | 0 | 0 | 0 | 0 | 1 |
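The standard Bayesian machinery for blending views with market equilibrium is the Black-Litterman model, which matches the abstract's description of integrating market views into modern portfolio theory; whether the paper uses exactly this form is an assumption. A minimal sketch of the posterior update:

```python
# Hedged sketch of the textbook Black-Litterman posterior; the paper's exact
# model is not spelled out in the abstract, so treat this as canonical form.
import numpy as np

def black_litterman_posterior(pi, Sigma, P, q, Omega, tau=0.05):
    """Posterior expected returns given prior pi and views P mu = q + eps."""
    tS_inv = np.linalg.inv(tau * Sigma)    # precision of the equilibrium prior
    Om_inv = np.linalg.inv(Omega)          # precision of the views
    A = tS_inv + P.T @ Om_inv @ P          # posterior precision
    b = tS_inv @ pi + P.T @ Om_inv @ q     # precision-weighted means
    return np.linalg.solve(A, b)
```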
Bayesian Static Parameter Estimation for Partially Observed Diffusions via Multilevel Monte Carlo | In this article we consider static Bayesian parameter estimation for
partially observed diffusions that are discretely observed. We work under the
assumption that one must resort to discretizing the underlying diffusion
process, for instance using the Euler-Maruyama method. Given this assumption,
we show how one can use Markov chain Monte Carlo (MCMC) and particularly
particle MCMC [Andrieu, C., Doucet, A. and Holenstein, R. (2010). Particle
Markov chain Monte Carlo methods (with discussion). J. R. Statist. Soc. Ser. B,
72, 269--342] to implement a new approximation of the multilevel (ML) Monte
Carlo (MC) collapsing sum identity. Our approach comprises constructing an
approximate coupling of the posterior density of the joint distribution over
parameter and hidden variables at two different discretization levels and then
correcting by an importance sampling method. The variance of the weights is
independent of the length of the observed data set. The utility of such a
method is that, for a prescribed level of mean square error, the cost of this
MLMC method is provably less than i.i.d. sampling from the posterior associated
to the most precise discretization. However, the method here uses only known
and efficient simulation methodologies. The theoretical results are
illustrated by inference of the parameters of two prototypical processes given
noisy partial observations of the process: the first is an Ornstein-Uhlenbeck
process and the second is a more general Langevin equation.
| 0 | 0 | 1 | 1 | 0 | 0 |
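A minimal sketch of the Euler-Maruyama discretization the abstract assumes, applied to the Ornstein-Uhlenbeck example; the parameterization below is illustrative. In an MLMC coupling, a coarse and a fine path would be driven by the same Brownian increments, with pairs of fine increments summed to form each coarse step.

```python
# Euler-Maruyama for dX = theta * (mu - X) dt + sigma dW; illustrative
# parameter names, not the paper's notation.
import numpy as np

def euler_maruyama_ou(x0, theta, mu, sigma, T, n_steps, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW
    return x
```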
EigenNetworks | In many applications, the interdependencies among a set of $N$ time series
$\{ x_{nk}, k>0 \}_{n=1}^{N}$ are well captured by a graph or network $G$. The
network itself may change over time as well (i.e., as $G_k$). We expect the
network to change at a much slower rate than the time series themselves. This
paper introduces eigennetworks, networks that are building blocks to compose
the actual networks $G_k$ capturing the dependencies among the time series.
These eigennetworks can be estimated by first learning the time series of
graphs $G_k$ from the data, followed by a Principal Network Analysis procedure.
Algorithms for learning both the original time series of graphs and the
eigennetworks are presented and discussed. Experiments on simulated and real
time series data demonstrate the performance of the learning algorithms and
the interpretability of the eigennetworks.
| 1 | 0 | 0 | 1 | 0 | 0 |
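One natural reading of the Principal Network Analysis step is PCA on the vectorized adjacency matrices of the learned graph sequence; the sketch below shows that idea only, and the paper's exact procedure may differ.

```python
# Assumed interpretation: eigennetworks as leading principal components of
# the vectorized adjacency matrices of the learned graphs G_k.
import numpy as np

def eigennetworks(adjacency_seq, n_components):
    """adjacency_seq: array of shape (K, N, N); returns (n_components, N, N)."""
    K, N, _ = adjacency_seq.shape
    X = adjacency_seq.reshape(K, N * N)
    X = X - X.mean(axis=0)                  # center over time
    # Right singular vectors of the centered data are the components;
    # n_components must not exceed min(K, N * N).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components].reshape(n_components, N, N)
```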