title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Strong and broadly tunable plasmon resonances in thick films of aligned carbon nanotubes | Low-dimensional plasmonic materials can function as high quality terahertz
and infrared antennas at deep subwavelength scales. Despite these antennas'
strong coupling to electromagnetic fields, there is a pressing need to further
strengthen their absorption. We address this problem by fabricating thick films
of aligned, uniformly sized carbon nanotubes and showing that their plasmon
resonances are strong, narrow, and broadly tunable. With thicknesses ranging
from 25 to 250 nm, our films exhibit peak attenuation reaching 70%, quality
factors reaching 9, and peak frequencies that are electrostatically tunable by
a factor of 2.3. Excellent nanotube alignment leads to the attenuation being
99% linearly polarized along the nanotube axis. Increasing the film thickness
blueshifts the plasmon resonances down to peak wavelengths as low as 1.4
both overlap the S11 nanotube exciton energy and access the technologically
important infrared telecom band.
| 0 | 1 | 0 | 0 | 0 | 0 |
The coordination of centralised and distributed generation | In this paper, we analyse the interaction between centralised carbon emissive
technologies and distributed intermittent non-emissive technologies. A
representative consumer can satisfy her electricity demand by investing in
distributed generation (solar panels) and by buying power from a centralised
firm at a price the firm sets. Distributed generation is intermittent and
imposes an externality cost on the consumer. The firm provides non-random
electricity generation subject to a carbon tax and to transmission costs. The
objective of the consumer is to satisfy her demand while minimising investment
costs, payments to the firm, and intermittency costs. The objective of the firm
is to satisfy the consumer's residual demand while minimising investment
costs and demand deviation costs, and maximising the payments from the
consumer. We
formulate the investment decisions as McKean-Vlasov control problems with
stochastic coefficients. We provide explicit, price model-free solutions to the
optimal decision problems faced by each player, the solution of the Pareto
optimum, and the laissez-faire market situation represented by a Stackelberg
equilibrium where the firm is the leader. We find that, from the social
planner's point of view, the high adjustment cost of centralised technology
damages the development of distributed generation. The Stackelberg equilibrium
leads to significant deviation from the socially desirable ratio of centralised
versus distributed generation. In a situation where a power system is to be
built from scratch, the firm's optimal strategy is high price/low market
share, whereas for existing power systems it is low price/large market share.
Further, from a regulatory standpoint, we find that a carbon tax and a
subsidy to distributed technology are equally efficient in achieving a given
level of distributed generation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Computation of annular capacity by Hamiltonian Floer theory of non-contractible periodic trajectories | The first author introduced a relative symplectic capacity $C$ for a
symplectic manifold $(N,\omega_N)$ and its subset $X$ which measures the
existence of non-contractible periodic trajectories of Hamiltonian isotopies on
the product of $N$ with the annulus $A_R=(-R,R)\times\mathbb{R}/\mathbb{Z}$. In
the present paper, we give an exact computation of the capacity $C$ of the
$2n$-torus $\mathbb{T}^{2n}$ relative to a Lagrangian submanifold
$\mathbb{T}^n$ which implies the existence of non-contractible Hamiltonian
periodic trajectories on $A_R\times\mathbb{T}^{2n}$. Moreover, we give a lower
bound on the number of such trajectories.
| 0 | 0 | 1 | 0 | 0 | 0 |
A New Torsion Pendulum for Gravitational Reference Sensor Technology Development | We report on the design and sensitivity of a new torsion pendulum for
measuring the performance of ultra-precise inertial sensors and for the
development of associated technologies for space-based gravitational wave
observatories and geodesy missions. The apparatus comprises a 1 m long, 50
um diameter tungsten fiber that supports an inertial member inside a vacuum
system. The inertial member is an aluminum crossbar with four hollow cubic test
masses, one at each end. This structure converts the rotation of the torsion
pendulum into translation of the test masses. Two test masses are enclosed in
capacitive sensors which provide readout and actuation. These test masses are
electrically insulated from the rest of the crossbar, and their electrical
charge is controlled by photoemission using fiber-coupled ultraviolet light
emitting diodes. The capacitive readout measures the test mass displacement
with a broadband sensitivity of 30 nm / sqrt(Hz), and is complemented by a
laser interferometer with a sensitivity of about 0.5 nm / sqrt(Hz). The
performance of the pendulum, as determined by the measured residual torque
noise and expressed in terms of equivalent force acting on a single test mass,
is roughly 200 fN / sqrt(Hz) around 2 mHz, which is about a factor of 20 above
the thermal noise limit of the fiber.
| 0 | 1 | 0 | 0 | 0 | 0 |
Optical Angular Momentum in Classical Electrodynamics | Invoking Maxwell's classical equations in conjunction with expressions for
the electromagnetic (EM) energy, momentum, force, and torque, we use a few
simple examples to demonstrate the nature of the EM angular momentum. The
energy and the angular momentum of an EM field will be shown to have an
intimate relationship; a source radiating EM angular momentum will, of
necessity, pick up an equal but opposite amount of mechanical angular momentum;
and the spin and orbital angular momenta of the EM field, when absorbed by a
small particle, will be seen to elicit different responses from the particle.
| 0 | 1 | 0 | 0 | 0 | 0 |
Efficient variational Bayesian neural network ensembles for outlier detection | In this work we perform outlier detection using ensembles of neural networks
obtained by variational approximation of the posterior in a Bayesian neural
network setting. The variational parameters are obtained by sampling from the
true posterior by gradient descent. We show that our outlier detection results
are comparable to those obtained using other efficient ensembling methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Emergent high-spin state above 7 GPa in superconducting FeSe | The local electronic and magnetic properties of superconducting FeSe have
been investigated by K$\beta$ x-ray emission (XES) and simultaneous x-ray
absorption spectroscopy (XAS) at the Fe K-edge at high pressure and low
temperature. Our results indicate a sluggish decrease of the local Fe spin
moment under pressure up to 7 GPa, in line with previous reports, followed by a
sudden increase at higher pressure which has been hitherto unobserved. The
magnetic surge is preceded by an abrupt change of the Fe local structure as
observed by the decrease of the XAS pre-edge region intensity and corroborated
by ab-initio simulations. This pressure corresponds to a structural transition,
previously detected by x-ray diffraction, from the $Cmma$ form to the denser
$Pbnm$ form with octahedral coordination of iron. Finally, the near-edge region
of the XAS spectra shows a change before this transition at 5 GPa,
corresponding well with the onset pressure of the previously observed
enhancement of $T_c$. Our results emphasize the delicate interplay between
structural, magnetic, and superconducting properties in FeSe under pressure.
| 0 | 1 | 0 | 0 | 0 | 0 |
Verification in Staged Tile Self-Assembly | We prove that the unique assembly and unique shape verification problems,
benchmark measures of self-assembly model power, are
$\mathrm{coNP}^{\mathrm{NP}}$-hard and contained in $\mathrm{PSPACE}$ (and in
$\mathrm{\Pi}^\mathrm{P}_{2s}$ for staged systems with $s$ stages). En route,
we prove that the unique shape verification problem in the 2HAM is
$\mathrm{coNP}^{\mathrm{NP}}$-complete.
| 1 | 0 | 0 | 0 | 0 | 0 |
Combinets: Creativity via Recombination of Neural Networks | One of the defining characteristics of human creativity is the ability to
make conceptual leaps, creating something surprising from typical knowledge. In
comparison, deep neural networks often struggle to handle cases outside of
their training data, which is especially problematic for domains with limited
training data. Approaches exist to transfer knowledge from problems with
sufficient data to those with insufficient data, but they tend to require
additional training or a domain-specific method of transfer. We present a new
approach, conceptual expansion, that serves as a general representation for
reusing existing trained models to derive new models without backpropagation.
We evaluate our approach on few-shot variations of two tasks: image
classification and image generation, and outperform standard transfer learning
approaches.
| 0 | 0 | 0 | 1 | 0 | 0 |
Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval | This paper addresses the problem of large scale image retrieval, with the aim
of accurately ranking the similarity of a large number of images to a given
query image. To achieve this, we propose a novel Siamese network. This network
consists of two computational strands, each comprising a CNN component
followed by a Fisher Vector component. The CNN component produces dense, deep
convolutional descriptors that are then aggregated by the Fisher Vector method.
Crucially, we propose to simultaneously learn both the CNN filter weights and
Fisher Vector model parameters. This allows us to account for the evolving
distribution of deep descriptors over the course of the learning process. We
show that the proposed approach gives significant improvements over the
state-of-the-art methods on the Oxford and Paris image retrieval datasets.
Additionally, we provide a baseline performance measure for both these datasets
with the inclusion of 1 million distractors.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gender Disparities in Science? Dropout, Productivity, Collaborations and Success of Male and Female Computer Scientists | Scientific collaborations shape ideas as well as innovations and are both the
substrate for, and the outcome of, academic careers. Recent studies show that
gender inequality is still present in many scientific practices ranging from
hiring to peer-review processes and grant applications. In this work, we
investigate gender-specific differences in collaboration patterns of more than
one million computer scientists over the course of 47 years. We explore how
these patterns change over years and career ages and how they impact scientific
success. Our results highlight that successful male and female scientists
reveal the same collaboration patterns: compared to scientists in the same
career age, they tend to collaborate with more colleagues than other
scientists, seek innovations as brokers and establish longer-lasting and more
repetitive collaborations. However, women are on average less likely to adopt
the collaboration patterns that are associated with success, more likely to embed
into ego networks devoid of structural holes, and they exhibit stronger gender
homophily as well as a consistently higher dropout rate than men in all career
ages.
| 1 | 1 | 0 | 0 | 0 | 0 |
Estimating a network from multiple noisy realizations | Complex interactions between entities are often represented as edges in a
network. In practice, the network is often constructed from noisy measurements
and inevitably contains some errors. In this paper we consider the problem of
estimating a network from multiple noisy observations where edges of the
original network are recorded with both false positives and false negatives.
This problem is motivated by neuroimaging applications where brain networks of
a group of patients with a particular brain condition could be viewed as noisy
versions of an unobserved true network corresponding to the disease. The key to
optimally leveraging these multiple observations is to take advantage of
network structure, and here we focus on the case where the true network
contains communities. Communities are common in real networks in general and in
particular are believed to be present in brain networks. Under a community
structure assumption on the truth, we derive an efficient method to estimate
the noise levels and the original network, with theoretical guarantees on the
convergence of our estimates. We show on synthetic networks that the
performance of our method is close to an oracle method using the true parameter
values, and apply our method to fMRI brain data, demonstrating that it
constructs stable and plausible estimates of the population network.
| 0 | 0 | 1 | 1 | 0 | 0 |
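A point of reference for the estimation problem above: the naive baseline that
ignores community structure is a per-edge majority vote across the noisy
observations. The numpy sketch below (illustrative names, not the paper's
method) shows this baseline; the paper's estimator improves on it by
exploiting the community structure of the truth.

```python
import numpy as np

def majority_vote_network(adjs):
    """Naive baseline estimator of the true network.

    adjs: array of shape (K, n, n) holding K noisy 0/1 adjacency matrices.
    An edge is kept if it appears in more than half of the observations;
    with false-positive and false-negative rates both below 1/2, each
    entry is recovered with probability approaching 1 as K grows.
    """
    return (np.asarray(adjs).mean(axis=0) > 0.5).astype(int)

# Tiny demo: a 2-block "community" network observed K times with random flips.
rng = np.random.default_rng(0)
n, K, p_flip = 20, 15, 0.2
block = np.arange(n) < n // 2
truth = (np.add.outer(block, block) % 2 == 0).astype(int)  # within-block edges
np.fill_diagonal(truth, 0)
obs = np.array([(truth ^ (rng.random((n, n)) < p_flip)).astype(int)
                for _ in range(K)])
obs = np.triu(obs, 1) + np.triu(obs, 1).transpose(0, 2, 1)  # keep symmetric
est = majority_vote_network(obs)
print("entry error rate:", (est != truth).mean())
```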
Autonomous drone race: A computationally efficient vision-based navigation and control strategy | Drone racing is becoming a popular sport where human pilots have to control
their drones to fly at high speed through complex environments and pass a
number of gates in a pre-defined sequence. In this paper, we develop an
autonomous system for drones to race fully autonomously using only onboard
resources. Instead of commonly used visual navigation methods, such as
simultaneous localization and mapping and visual inertial odometry, which are
computationally expensive for micro aerial vehicles (MAVs), we developed the
highly efficient snake gate detection algorithm for visual navigation, which
can detect the gate at 20 Hz on a Parrot Bebop drone. Then, with the gate
detection result, we developed a robust pose estimation algorithm which has
better tolerance to detection noise than a state-of-the-art perspective-n-point
method. During the race, sometimes the gates are not in the drone's field of
view. For this case, a state prediction-based feed-forward control strategy is
developed to steer the drone to fly to the next gate. Experiments show that the
drone can fly a half-circle of 1.5 m radius within 2 seconds with only 30 cm
error at the end of the circle without any position feedback. Finally, the
whole system is tested in a complex environment (a showroom in the faculty of
Aerospace Engineering, TU Delft). The result shows that the drone can complete
the track of 15 gates at a speed of 1.5 m/s, which is faster than the speeds
exhibited at the 2016 and 2017 IROS autonomous drone races.
| 1 | 0 | 0 | 0 | 0 | 0 |
Experimental investigations on nucleation, bubble growth, and micro-explosion characteristics during the combustion of ethanol/Jet A-1 fuel droplets | The combustion characteristics of ethanol/Jet A-1 fuel droplets having three
different proportions of ethanol (10%, 30%, and 50% by vol.) are investigated
in the present study. The large volatility differential between ethanol and Jet
A-1 and the nominal immiscibility of the fuels seem to result in combustion
characteristics that are rather different from our previous work on butanol/Jet
A-1 droplets (miscible blends). Abrupt explosion was facilitated in fuel
droplets comprising lower proportions of ethanol (10%), possibly due to
insufficient nucleation sites inside the droplet and the partially unmixed fuel
mixture. For the fuel droplets containing higher proportions of ethanol (30%
and 50%), micro-explosion occurred through homogeneous nucleation, leading to
the ejection of secondary droplets and subsequent significant reduction in the
overall droplet lifetime. The rate of bubble growth is nearly the same in all
the ethanol blends; however, the ethanol vapor bubble evolves significantly
faster than a vapor bubble in the butanol blends. The
probability of disruptive behavior is considerably higher in ethanol/Jet A-1
blends than in butanol/Jet A-1 blends. The Sauter mean diameter of the
secondary droplets produced from micro-explosion is larger for blends with a
higher proportion of ethanol. Both abrupt explosion and micro-explosion create
a large-scale distortion of the flame, which surrounds the parent droplet. The
secondary droplets generated from abrupt explosion undergo rapid evaporation
whereas the secondary droplets from micro-explosion carry their individual
flame and evaporate slowly. The growth of vapor bubble was also witnessed in
the secondary droplets, which leads to the further breakup of the droplet
(puffing/micro-explosion).
| 0 | 1 | 0 | 0 | 0 | 0 |
Hypergraph $p$-Laplacian: A Differential Geometry View | The graph Laplacian plays key roles in information processing of relational
data, and has analogies with the Laplacian in differential geometry. In this
paper, we generalize the analogy between graph Laplacian and differential
geometry to the hypergraph setting, and propose a novel hypergraph
$p$-Laplacian. Unlike the existing two-node graph Laplacians, this
generalization makes it possible to analyze hypergraphs, where the edges are
allowed to connect any number of nodes. Moreover, we propose a semi-supervised
learning method based on the proposed hypergraph $p$-Laplacian, and formalize
it as an analogue of the Dirichlet problem, which often appears in physics.
We further explore theoretical connections to the normalized hypergraph cut,
and propose a normalized cut corresponding to the hypergraph $p$-Laplacian.
The proposed $p$-Laplacian is shown to outperform standard hypergraph
Laplacians in experiments on hypergraph semi-supervised learning and
normalized cut settings.
| 1 | 0 | 0 | 1 | 0 | 0 |
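For orientation, the two-node graph $p$-Laplacian that the abstract
generalizes has the compact form
$(L_p f)(v) = \sum_u W_{vu}\,|f(v)-f(u)|^{p-2}(f(v)-f(u))$, reducing to the
familiar $(D-W)f$ at $p=2$. A minimal numpy sketch of this classical operator
(background only, not the paper's hypergraph construction):

```python
import numpy as np

def graph_p_laplacian(W, f, p=2.0):
    """Apply the two-node graph p-Laplacian to a vertex function f.

    W: (n, n) symmetric nonnegative weights; f: (n,) vertex values.
    (L_p f)(v) = sum_u W[v, u] * |f(v) - f(u)|**(p - 2) * (f(v) - f(u)),
    which for p = 2 is the usual graph Laplacian (D - W) f.
    """
    diff = f[:, None] - f[None, :]          # diff[v, u] = f(v) - f(u)
    out = np.zeros_like(diff)
    nz = diff != 0                          # avoid 0**(negative) when p < 2
    out[nz] = np.abs(diff[nz]) ** (p - 2) * diff[nz]
    return (W * out).sum(axis=1)

# Sanity check against (D - W) f at p = 2 on a small random graph.
rng = np.random.default_rng(1)
W = rng.random((5, 5))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
f = rng.random(5)
L = np.diag(W.sum(axis=1)) - W
assert np.allclose(graph_p_laplacian(W, f, p=2.0), L @ f)
```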
Controlling motile disclinations in a thick nematogenic material with an electric field | Manipulating topological disclination networks that arise in a
symmetry-breaking phase transformation in widely varied systems including
anisotropic materials can potentially lead to the design of novel materials
like conductive microwires, self-assembled resonators, and active anisotropic
matter. However, progress in this direction is hindered by a lack of control of
the kinetics and microstructure due to inherent complexity arising from
competing energy and topology. We have studied thermal and electrokinetic
effects on disclinations in a three-dimensional nonabsorbing nematic material
with positive and negative signs of the dielectric anisotropy. The electric
flux lines are highly non-uniform in uniaxial media after an electric field
below the Fréedericksz threshold is switched on, and the kinetics of the
disclination lines is slowed down. In biaxial media, depending on the sign of
the dielectric anisotropy, apart from the slowing down of the disclination
kinetics, a non-uniform electric field filters out disclinations of different
topology by inducing a kinetic asymmetry. These results enhance the current
understanding of forced disclination networks and establish the presented
method, which we call fluctuating electronematics, as a potentially useful tool
for designing materials with novel properties in silico.
| 0 | 1 | 0 | 0 | 0 | 0 |
How Generative Adversarial Networks and Their Variants Work: An Overview | Generative Adversarial Networks (GAN) have received wide attention in the
machine learning field for their potential to learn high-dimensional, complex
real-world data distributions. Specifically, they do not rely on any
assumptions about the distribution and can generate realistic samples from a
latent space in a simple manner. This powerful property has led GANs to be
applied in various applications such as image synthesis, image attribute
editing, image translation, domain adaptation, and other academic fields. In
this paper, we aim to discuss the details of GANs for readers who are familiar
with them but do not understand them deeply, or who wish to view GANs from
various perspectives. In
addition, we explain how GANs operate and the fundamental meaning of the
various objective functions that have been suggested recently. We then focus
on how GANs can be combined with an autoencoder framework. Finally, we
enumerate GAN variants that are applied to various tasks and other fields, for
those who are interested in exploiting GANs in their research.
| 1 | 0 | 0 | 0 | 0 | 0 |
Principal Boundary on Riemannian Manifolds | We revisit the classification problem and focus on nonlinear methods for
classification on manifolds. For multivariate datasets lying on an embedded
nonlinear Riemannian manifold within the higher-dimensional space, our aim is
to acquire a classification boundary between the classes with labels. Motivated
by the principal flow [Panaretos, Pham and Yao, 2014], a curve that moves along
a path of the maximum variation of the data, we introduce the principal
boundary. From the classification perspective, the principal boundary is
defined as an optimal curve that moves in between the principal flows traced
out from two classes of the data, and at any point on the boundary, it
maximizes the margin between the two classes. We estimate the boundary in
quality with its direction supervised by the two principal flows. We show that
the principal boundary yields the usual decision boundary found by the support
vector machine, in the sense that locally, the two boundaries coincide. By
means of examples, we illustrate how to find, use and interpret the principal
boundary.
| 1 | 0 | 0 | 1 | 0 | 0 |
Exploiting network topology for large-scale inference of nonlinear reaction models | The development of chemical reaction models aids understanding and prediction
in areas ranging from biology to electrochemistry and combustion. A systematic
approach to building reaction network models uses observational data not only
to estimate unknown parameters, but also to learn model structure. Bayesian
inference provides a natural approach to this data-driven construction of
models. Yet traditional Bayesian model inference methodologies that numerically
evaluate the evidence for each model are often infeasible for nonlinear
reaction network inference, as the number of plausible models can be
combinatorially large. Alternative approaches based on model-space sampling can
enable large-scale network inference, but their realization presents many
challenges. In this paper, we present new computational methods that make
large-scale nonlinear network inference tractable. First, we exploit the
topology of networks describing potential interactions among chemical species
to design improved "between-model" proposals for reversible-jump Markov chain
Monte Carlo. Second, we introduce a sensitivity-based determination of move
types which, when combined with network-aware proposals, yields significant
additional gains in sampling performance. These algorithms are demonstrated on
inference problems drawn from systems biology, with nonlinear differential
equation models of species interactions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Numerical investigations of non-uniqueness for the Navier-Stokes initial value problem in borderline spaces | We consider the Cauchy problem for the incompressible Navier-Stokes equations
in $\mathbb{R}^3$ for a one-parameter family of explicit scale-invariant
axi-symmetric initial data, which is smooth away from the origin and invariant
under the reflection with respect to the $xy$-plane. Working in the class of
axi-symmetric fields, we calculate numerically scale-invariant solutions of the
Cauchy problem in terms of their profile functions, which are smooth. The
solutions are necessarily unique for small data, but for large data we observe
a breaking of the reflection symmetry of the initial data through a
pitchfork-type bifurcation. By a variation of previous results by Jia &
Šverák (2013) it is known rigorously that if the behavior seen here
numerically can be proved, optimal non-uniqueness examples for the Cauchy
problem can be established, and two different solutions can exist for the same
initial datum which is divergence-free, smooth away from the origin, compactly
supported, and locally $(-1)$-homogeneous near the origin. In particular,
assuming our (finite-dimensional) numerics represents faithfully the behavior
of the full (infinite-dimensional) system, the problem of uniqueness of the
Leray-Hopf solutions (with non-smooth initial data) has a negative answer and,
in addition, perturbative arguments such as those by Kato (1984) and Koch &
Tataru (2001), or the weak-strong uniqueness results by Leray, Prodi, Serrin,
Ladyzhenskaya and others, already give essentially optimal results. There are
no singularities involved in the numerics, as we work only with smooth profile
functions. It is conceivable that our calculations could be upgraded to a
computer-assisted proof, although this would involve a substantial amount of
additional work and calculations, including a much more detailed analysis of
the asymptotic expansions of the solutions at large distances.
| 0 | 1 | 1 | 0 | 0 | 0 |
Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics | Inspired by the success of deep learning techniques in the physical and
chemical sciences, we apply a modification of an autoencoder-type deep neural
network to the task of dimension reduction of molecular dynamics data. We show
that our time-lagged autoencoder reliably finds low-dimensional embeddings for
high-dimensional feature spaces which capture the slow dynamics of the
underlying stochastic processes, beyond the capabilities of linear dimension
reduction techniques.
| 1 | 1 | 0 | 1 | 0 | 0 |
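A hedged illustration of the underlying idea: in the linear limit, a
time-lagged autoencoder that reconstructs $x(t+\tau)$ from $x(t)$ through a
rank-constrained bottleneck coincides with reduced-rank regression, which has
a closed-form solution. The deep nonlinear version in the abstract goes beyond
this sketch; all names below are illustrative.

```python
import numpy as np

def linear_time_lagged_autoencoder(X, lag, dim):
    """Linear limit of a time-lagged autoencoder.

    X: (T, d) trajectory. Finds the rank-`dim` linear map minimizing
    || X[t+lag] - decode(encode(X[t])) ||^2 over t, via the classical
    reduced-rank regression solution (least squares + SVD truncation).
    Returns (encode, decode); encode yields the slow coordinates.
    """
    X0 = X[:-lag] - X[:-lag].mean(axis=0)
    X1 = X[lag:] - X[lag:].mean(axis=0)
    B, *_ = np.linalg.lstsq(X0, X1, rcond=None)     # full-rank map X0 -> X1
    _, _, Vt = np.linalg.svd(X0 @ B, full_matrices=False)
    Vr = Vt[:dim].T                                 # (d, dim) output basis
    encode = lambda x: x @ (B @ Vr)                 # slow collective variables
    decode = lambda z: z @ Vr.T
    return encode, decode

# Demo: a slow 1-D mode embedded in 5-D noise is recovered in the bottleneck.
rng = np.random.default_rng(2)
T = 5000
slow = np.cumsum(rng.normal(size=T)) * 0.1          # slowly varying mode
X = np.outer(slow, rng.normal(size=5)) + rng.normal(size=(T, 5))
encode, decode = linear_time_lagged_autoencoder(X, lag=10, dim=1)
z = encode(X - X.mean(axis=0))
print("corr(slow, z):", np.corrcoef(slow, z[:, 0])[0, 1])
```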
Some integrable maps and their Hirota bilinear forms | We introduce a two-parameter family of birational maps, which reduces to a
family previously found by Demskoi, Tran, van der Kamp and Quispel (DTKQ) when
one of the parameters is set to zero. The study of the singularity confinement
pattern for these maps leads to the introduction of a tau function satisfying a
homogeneous recurrence which has the Laurent property, and the tropical (or
ultradiscrete) analogue of this homogeneous recurrence confirms the quadratic
degree growth found empirically by Demskoi et al. We prove that the tau
function also satisfies two different bilinear equations, each of which is a
reduction of the Hirota-Miwa equation (also known as the discrete KP equation,
or the octahedron recurrence). Furthermore, these bilinear equations are
related to reductions of particular two-dimensional integrable lattice
equations, of discrete KdV or discrete Toda type. These connections, as well as
the cluster algebra structure of the bilinear equations, allow a direct
construction of Poisson brackets, Lax pairs and first integrals for the
birational maps. As a consequence of the latter results, we show how each
member of the family can be lifted to a system that is integrable in the
Liouville sense, clarifying observations made previously in the original DTKQ
case.
| 0 | 1 | 0 | 0 | 0 | 0 |
Dynamics of higher-order rational solitons for the nonlocal nonlinear Schrodinger equation with the self-induced parity-time-symmetric potential | The integrable nonlocal nonlinear Schrodinger (NNLS) equation with the
self-induced parity-time-symmetric potential [Phys. Rev. Lett. 110 (2013)
064105], an integrable extension of the standard NLS equation, is
investigated. Its novel higher-order rational solitons are found using the nonlocal
version of the generalized perturbation (1, N-1)-fold Darboux transformation.
These rational solitons illustrate abundant wave structures for the distinct
choices of parameters (e.g., the strong and weak interactions of bright and
dark rational solitons). Moreover, we also explore the dynamical behaviors of
these higher-order rational solitons with some small noises on the basis of
numerical simulations.
| 0 | 1 | 1 | 0 | 0 | 0 |
N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning | While bigger and deeper neural network architectures continue to advance the
state-of-the-art for many computer vision tasks, real-world adoption of these
networks is impeded by hardware and speed constraints. Conventional model
compression methods attempt to address this problem by modifying the
architecture manually or using pre-defined heuristics. Since the space of all
reduced architectures is very large, modifying the architecture of a deep
neural network in this way is a difficult task. In this paper, we tackle this
issue by introducing a principled method for learning reduced network
architectures in a data-driven way using reinforcement learning. Our approach
takes a larger `teacher' network as input and outputs a compressed `student'
network derived from the `teacher' network. In the first stage of our method, a
recurrent policy network aggressively removes layers from the large `teacher'
model. In the second stage, another recurrent policy network carefully reduces
the size of each remaining layer. The resulting network is then evaluated to
obtain a reward -- a score based on the accuracy and compression of the
network. Our approach uses this reward signal with policy gradients to train
the policies to find a locally optimal student network. Our experiments show
that we can achieve compression rates of more than 10x for models such as
ResNet-34 while maintaining similar performance to the input `teacher' network.
We also present a valuable transfer learning result which shows that policies
which are pre-trained on smaller `teacher' networks can be used to rapidly
speed up training on larger `teacher' networks.
| 1 | 0 | 0 | 1 | 0 | 0 |
Spectral analysis of stationary random bivariate signals | A novel approach towards the spectral analysis of stationary random bivariate
signals is proposed. Using the Quaternion Fourier Transform, we introduce a
quaternion-valued spectral representation of random bivariate signals seen as
complex-valued sequences. This makes possible the definition of a scalar
quaternion-valued spectral density for bivariate signals. This spectral density
can be meaningfully interpreted in terms of frequency-dependent polarization
attributes. A natural decomposition of any random bivariate signal in terms of
unpolarized and polarized components is introduced. Nonparametric spectral
density estimation is investigated, and we introduce the polarization
periodogram of a random bivariate signal. Numerical experiments support our
theoretical analysis, illustrating the relevance of the approach on synthetic
data.
| 0 | 0 | 0 | 1 | 0 | 0 |
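For context, the classical baseline that the quaternion formalism above
refines treats the bivariate signal as the complex sequence $x = u + iv$: the
positive- and negative-frequency FFT components are the two counter-rotating
circular components (the "rotary" spectrum). A minimal numpy sketch of that
baseline, not of the paper's quaternion estimator:

```python
import numpy as np

def rotary_periodogram(u, v):
    """Periodogram of the bivariate signal x = u + i v.

    Returns (freqs, power). For a purely circularly polarized signal the
    power sits entirely at positive or negative frequencies; roughly equal
    power at +f and -f indicates a linearly polarized or unpolarized
    component at that frequency.
    """
    x = np.asarray(u) + 1j * np.asarray(v)
    N = len(x)
    X = np.fft.fft(x) / np.sqrt(N)
    return np.fft.fftfreq(N), np.abs(X) ** 2

# Demo: a counter-clockwise circular oscillation at frequency +0.1.
t = np.arange(256)
u, v = np.cos(2 * np.pi * 0.1 * t), np.sin(2 * np.pi * 0.1 * t)
freqs, power = rotary_periodogram(u, v)
print("dominant frequency:", freqs[np.argmax(power)])   # ~ +0.1
```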
Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition | We propose a method (TT-GP) for approximate inference in Gaussian Process
(GP) models. We build on previous scalable GP research including stochastic
variational inference based on inducing inputs, kernel interpolation, and
structure exploiting algebra. The key idea of our method is to use Tensor Train
decomposition for variational parameters, which allows us to train GPs with
billions of inducing inputs and achieve state-of-the-art results on several
benchmarks. Further, our approach allows for training kernels based on deep
neural networks without any modifications to the underlying GP model. A neural
network learns a multidimensional embedding for the data, which is used by the
GP to make the final prediction. We train GP and neural network parameters
end-to-end without pretraining, through maximization of GP marginal likelihood.
We show the efficiency of the proposed approach on several regression and
classification benchmark datasets including MNIST, CIFAR-10, and Airline.
| 1 | 0 | 0 | 1 | 0 | 0 |
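As background for the abstract above, here is a minimal sketch of the Tensor
Train format itself: the TT-SVD algorithm factorizes a full tensor into a
chain of 3-way cores by successive truncated SVDs. This generic decomposition
(assumed here in the style of Oseledets' TT-SVD, not taken from the paper) is
the primitive that makes huge inducing-input grids representable.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` (shape n1 x ... x nd) into TT cores of shape
    (r_{k-1}, n_k, r_k), all ranks capped at `max_rank`, via successive
    truncated SVDs (the TT-SVD algorithm)."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    rest = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        rk = min(max_rank, len(S))
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        rest = (S[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(rest.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a full tensor (for checking)."""
    out = cores[0]                                   # (1, n1, r1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))  # chain over the ranks
    return out.squeeze(axis=(0, -1))

# A rank-1 tensor is recovered exactly with all TT ranks equal to 1.
a, b, c = np.arange(2.0), np.arange(3.0) + 1, np.arange(4.0) + 1
T = np.einsum('i,j,k->ijk', a, b, c)
assert np.allclose(tt_reconstruct(tt_svd(T, max_rank=1)), T)
```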
Some preliminary results on the set of principal congruences of a finite lattice | In the second edition of the congruence lattice book, Problem 22.1 asks for a
characterization of subsets $Q$ of a finite distributive lattice $D$ such that
there is a finite lattice $L$ whose congruence lattice is isomorphic to $D$ and
under this isomorphism $Q$ corresponds to the principal congruences of $L$. In
this note, we prove some preliminary results.
| 0 | 0 | 1 | 0 | 0 | 0 |
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks | Vasculature is known to be of key biological significance, especially in the
study of cancer. As such, considerable effort has been focused on the automated
measurement and analysis of vasculature in medical and pre-clinical images. In
tumors in particular, the vascular networks may be extremely irregular and the
appearance of the individual vessels may not conform to classical descriptions
of vascular appearance. Typically, vessels are extracted by either a
segmentation and thinning pipeline, or by direct tracking. Neither of these
methods is well suited to microscopy images of tumor vasculature. To address
this, we propose a method to directly extract a medial representation of
the vessels using Convolutional Neural Networks. We then show that these
two-dimensional centerlines can be meaningfully extended into 3D in anisotropic
and complex microscopy images using the recently popularized Convolutional Long
Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this
hybrid convolutional-recurrent architecture over both 2D and 3D convolutional
comparators.
| 1 | 0 | 0 | 0 | 0 | 0 |
Search for Interstellar LiH in the Milky Way | We report the results of a sensitive search for the 443.952902 GHz $J=1-0$
transition of the LiH molecule toward two interstellar clouds in the Milky Way,
W49N and Sgr B2 (Main), that has been carried out using the Atacama Pathfinder
Experiment (APEX) telescope. The results obtained toward W49N place an upper
limit of $1.9 \times 10^{-11}\, (3\sigma)$ on the LiH abundance, $N({\rm
LiH})/N({\rm H}_2)$, in a foreground, diffuse molecular cloud along the
sight-line to W49N, corresponding to 0.5% of the solar system lithium
abundance. Those obtained toward Sgr B2 (Main) place an abundance limit $N({\rm
LiH})/N({\rm H}_2) < 3.6 \times 10^{-13} \,(3\sigma)$ in the dense gas within
the Sgr B2 cloud itself. These limits are considerably smaller than those
implied by the tentative detection of LiH reported previously for the $z=0.685$
absorber toward B0218+357.
| 0 | 1 | 0 | 0 | 0 | 0 |
Neural Probabilistic Model for Non-projective MST Parsing | In this paper, we propose a probabilistic parsing model, which defines a
proper conditional probability distribution over non-projective dependency
trees for a given sentence, using neural representations as inputs. The neural
network architecture is based on bi-directional LSTM-CNNs, which automatically
benefit from both word- and character-level representations by combining
bidirectional LSTMs and CNNs. On top of the neural network, we
introduce a probabilistic structured layer, defining a conditional log-linear
model over non-projective trees. We evaluate our model on 17 different
datasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree
Theorem (Tutte, 1984), the partition functions and marginals can be computed
efficiently, leading to a straightforward end-to-end model training procedure
via back-propagation. Our parser achieves state-of-the-art parsing performance
on nine datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
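The Matrix-Tree computation mentioned in the abstract can be made concrete: in
the standard single-root construction (our reading of Koo et al., 2007), the
partition function over non-projective dependency trees is the determinant of
a modified Laplacian built from exponentiated arc scores. A numpy sketch with
hypothetical toy scores:

```python
import numpy as np

def tree_partition_function(arc_scores, root_scores):
    """Sum of exp(total score) over all single-root non-projective
    dependency trees, via Kirchhoff's Matrix-Tree theorem.

    arc_scores: (n, n), arc_scores[h, m] = score of arc head h -> modifier m.
    root_scores: (n,), score of attaching word m directly to the root.
    """
    A = np.exp(arc_scores)
    np.fill_diagonal(A, 0.0)                  # no self-attachments
    L = -A.copy()
    np.fill_diagonal(L, A.sum(axis=0))        # L[m, m] = total weight into m
    L[0, :] = np.exp(root_scores)             # first row holds root arcs
    return np.linalg.det(L)

# Check on a 2-word sentence: trees are {root->1, 1->2} and {root->2, 2->1}.
s = np.log(np.array([[1.0, 2.0], [3.0, 1.0]]))   # arc weights 2 and 3
r = np.log(np.array([5.0, 7.0]))                 # root weights 5 and 7
assert np.isclose(tree_partition_function(s, r), 5 * 2 + 7 * 3)
```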
Modularity of complex networks models | Modularity is designed to measure the strength of division of a network into
clusters (also known as communities). Networks with high modularity have dense
connections between the vertices within clusters but sparse connections between
vertices of different clusters. As a result, modularity is often used in
optimization methods for detecting community structure in networks, and so it
is an important graph parameter from a practical point of view. Unfortunately,
many existing non-spatial models of complex networks do not generate graphs
with high modularity; on the other hand, spatial models naturally create
clusters. We investigate this phenomenon by considering a few examples from
both sub-classes. We prove precise theoretical results for the classical model
of random d-regular graphs as well as the preferential attachment model, and
contrast these results with the ones for the spatial preferential attachment
(SPA) model that is a model for complex networks in which vertices are embedded
in a metric space, and each vertex has a sphere of influence whose size
increases if the vertex gains an in-link, and otherwise decreases with time.
The results obtained in this paper can be used for developing statistical tests
for model selection and to measure the statistical significance of clusters
observed in complex networks.
| 0 | 0 | 1 | 0 | 0 | 0 |
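For reference, the modularity analyzed above has the closed form
$Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j)$;
a minimal numpy implementation of this textbook definition (independent of the
paper's specific models) follows.

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.

    A: (n, n) symmetric adjacency matrix; communities: length-n labels.
    Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j].
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                       # degrees
    two_m = k.sum()                         # 2m = total degree
    same = np.equal.outer(communities, communities)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two 3-cliques joined by one edge: the natural split scores 5/14 ~ 0.357.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))
```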
BOLD5000: A public fMRI dataset of 5000 images | Vision science, particularly machine vision, has been revolutionized by
the introduction of large-scale image datasets and statistical learning
approaches.
Yet, human neuroimaging studies of visual perception still rely on small
numbers of images (around 100) due to time-constrained experimental procedures.
To apply statistical learning approaches that integrate neuroscience, the
number of images used in neuroimaging must be significantly increased. We
present BOLD5000, a human functional MRI (fMRI) study that includes almost
5,000 distinct images depicting real-world scenes. Beyond dramatically
increasing image dataset size relative to prior fMRI studies, BOLD5000 also
accounts for image diversity, overlapping with standard computer vision
datasets by incorporating images from the Scene UNderstanding (SUN), Common
Objects in Context (COCO), and ImageNet datasets. The scale and diversity of
these image datasets, combined with a slow event-related fMRI design, enable
fine-grained exploration into the neural representation of a wide range of
visual features, categories, and semantics. Concurrently, BOLD5000 brings us
closer to realizing Marr's dream of a singular vision science - the intertwined
study of biological and computer vision.
| 0 | 0 | 0 | 0 | 1 | 0 |
Prospects for gravitational wave astronomy with next generation large-scale pulsar timing arrays | Next generation radio telescopes, namely the Five-hundred-meter Aperture
Spherical Telescope (FAST) and the Square Kilometer Array (SKA), will
revolutionize the pulsar timing arrays (PTAs) based gravitational wave (GW)
searches. We review some of the characteristics of FAST and SKA, and the
resulting PTAs, that are pertinent to the detection of gravitational wave
signals from individual supermassive black hole binaries.
| 0 | 1 | 0 | 0 | 0 | 0 |
On Identifiability of Nonnegative Matrix Factorization | In this letter, we propose a new identification criterion that guarantees the
recovery of the low-rank latent factors in the nonnegative matrix factorization
(NMF) model, under mild conditions. Specifically, using the proposed criterion,
it suffices to identify the latent factors if the rows of one factor are
\emph{sufficiently scattered} over the nonnegative orthant, while no structural
assumption is imposed on the other factor except being full-rank. This is by
far the mildest condition under which the latent factors are provably
identifiable from the NMF model.
| 1 | 0 | 0 | 1 | 0 | 0 |
Optimal Non-blocking Decentralized Supervisory Control Using G-Control Consistency | Supervisory control synthesis suffers from computational complexity, which
can be reduced by a decentralized supervisory control approach. In this paper,
we define intrinsic control consistency for a pair of states of the plant.
G-control consistency (GCC) is another concept, defined for a natural
projection with respect to the plant. We prove that if a natural projection is
output control consistent for the closed language of the plant and is a
natural observer for the marked language of the plant, then it is G-control
consistent. Namely, we relax the conditions for synthesizing the optimal
non-blocking decentralized supervisory control by substituting the GCC
property for the L-OCC and Lm-observer properties of a natural projection. We
propose a method to synthesize the optimal non-blocking decentralized
supervisory control based on the GCC property of a natural projection. In
effect, we change the approach from language-based properties of a natural
projection to a DES-based property by defining the GCC property.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fair mixing: the case of dichotomous preferences | Agents vote to choose a fair mixture of public outcomes; each agent likes or
dislikes each outcome. We discuss three outstanding voting rules. The
Conditional Utilitarian rule, a variant of the random dictator, is
Strategyproof and guarantees to any group of like-minded agents an influence
proportional to its size. It is easier to compute and more efficient than the
familiar Random Priority rule. Its worst case (resp. average) inefficiency is
provably (resp. in numerical experiments) low if the number of agents is low.
The efficient Egalitarian rule protects similarly individual agents but not
coalitions. It is Excludable Strategyproof: I do not want to lie if I cannot
consume outcomes I claim to dislike. The efficient Nash Max Product rule offers
the strongest welfare guarantees to coalitions, who can force any outcome with
a probability proportional to their size. But it fails even the excludable form
of Strategyproofness.
| 1 | 0 | 0 | 0 | 0 | 0 |
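An illustrative sketch of the Conditional Utilitarian rule as we read it from
the fair-division literature: each agent controls a $1/n$ share of the lottery
and spends it uniformly on her approved outcomes of greatest total support.
The tie-breaking and the empty-approval convention below are our assumptions,
not specified by the abstract.

```python
from collections import Counter
from fractions import Fraction

def conditional_utilitarian(approvals, outcomes):
    """approvals: one set of approved outcomes per agent.
    Returns the mixture (outcome -> probability): each agent's 1/n share
    goes, uniformly, to the outcomes she approves that have the highest
    total approval count. Agents approving nothing spread their share over
    all outcomes (a convention chosen here, not dictated by the rule)."""
    n = len(approvals)
    support = Counter(o for s in approvals for o in s)
    mix = {o: Fraction(0) for o in outcomes}
    for s in approvals:
        pool = s if s else set(outcomes)
        best = max(support[o] for o in pool)
        tops = [o for o in pool if support[o] == best]
        for o in tops:
            mix[o] += Fraction(1, n * len(tops))
    return mix

# Three agents, two outcomes: the like-minded majority gets 2/3 influence.
print(conditional_utilitarian([{'a'}, {'a'}, {'b'}], ['a', 'b']))
# {'a': Fraction(2, 3), 'b': Fraction(1, 3)}
```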
Inverse system characterizations of the (hereditarily) just infinite property in profinite groups | We give criteria on an inverse system of finite groups that ensure the limit
is just infinite or hereditarily just infinite. More significantly, these
criteria are 'universal' in that all (hereditarily) just infinite profinite
groups arise as limits of the specified form.
This is a corrected and revised version of the article 'Inverse system
characterizations of the (hereditarily) just infinite property in profinite
groups', Bull. London Math. Soc. 44 (3) (2012), 413-425.
| 0 | 0 | 1 | 0 | 0 | 0 |
p-FP: Extraction, Classification, and Prediction of Website Fingerprints with Deep Learning | Recent advances in learning Deep Neural Network (DNN) architectures have
received a great deal of attention due to their ability to outperform
state-of-the-art classifiers across a wide range of applications, with little
or no feature engineering. In this paper, we broadly study the applicability of
deep learning to website fingerprinting. We show that unsupervised DNNs can be
used to extract low-dimensional feature vectors that improve the performance of
state-of-the-art website fingerprinting attacks. When used as classifiers, we
show that they can match or exceed performance of existing attacks across a
range of application scenarios, including fingerprinting Tor website traces,
fingerprinting search engine queries over Tor, defeating fingerprinting
defenses, and fingerprinting TLS-encrypted websites. Finally, we show that DNNs
can be used to predict the fingerprintability of a website based on its
contents, achieving 99% accuracy on a data set of 4500 website downloads.
| 1 | 0 | 0 | 1 | 0 | 0 |
Equal confidence weighted expectation value estimates | In this article we discuss issues with the Bayesian approach, least-squares
fits, and maximum-likelihood fits. To counter these issues, a method based on
weighted confidence is proposed for estimating probabilities and other
observables. This method sums over different model parameter
combinations but does not require the need for making assumptions on priors or
underlying probability functions. Moreover, by construction the results are
invariant under reparametrization of the model parameters. In one case the
result appears similar to that of Bayesian statistics, but in general there is
no agreement. The binomial distribution is also studied, and it turns out to be
useful for making predictions on production processes without the need to make
further assumptions. In the last part, the case of a simple linear fit (a
multi-variate example) is studied using the standard approaches and the
confidence weighted approach.
| 0 | 0 | 1 | 1 | 0 | 0 |
Protein Folding and Machine Learning: Fundamentals | In spite of decades of research, much remains to be discovered about folding:
the detailed structure of the initial (unfolded) state, vestigial folding
instructions remaining only in the unfolded state, the interaction of the
molecule with the solvent, instantaneous power at each point within the
molecule during folding, the fact that the process is stable in spite of myriad
possible disturbances, potential stabilization of trajectory by chaos, and, of
course, the exact physical mechanism (code or instructions) by which the
folding process is specified in the amino acid sequence. Simulations based upon
microscopic physics have had some spectacular successes and continue to
improve, particularly as super-computer capabilities increase. The simulations,
exciting as they are, are still too slow and expensive to deal with the
enormous number of molecules of interest. In this paper, we introduce an
approximate model, based upon physics, empirics, and information science,
intended for machine learning applications in which very large
numbers of sub-simulations must be made. In particular, we focus upon machine
learning applications in the learning phase and argue that our model is
sufficiently close to the physics that, in spite of its approximate nature, it
can facilitate stepping through machine learning solutions to explore the
mechanics of folding mentioned above. We particularly emphasize the exploration
of energy
flow (power) within the molecule during folding, the possibility of energy
scale invariance (above a threshold), vestigial information in the unfolded
state as attractive targets for such machine learning analysis, and statistical
analysis of an ensemble of folding micro-steps.
| 0 | 0 | 0 | 0 | 1 | 0 |
Discrete configuration spaces of squares and hexagons | We consider generalizations of the familiar fifteen-piece sliding puzzle on
the 4 by 4 square grid. On larger grids with more pieces and more holes,
asymptotically how fast can we move the puzzle into the solved state? We also
give a variation with sliding hexagons. The square puzzles and the hexagon
puzzles are both discrete versions of configuration spaces of disks, which are
of interest in statistical mechanics and topological robotics. The
combinatorial theorems and proofs in this paper suggest followup questions in
both combinatorics and topology, and may turn out to be useful for proving
topological statements about configuration spaces.
| 1 | 0 | 1 | 0 | 0 | 0 |
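A concrete entry point to these discrete configuration spaces: for even grid
widths, the classical parity invariant splits the configurations into two
connected components, so reachability of the solved state is a simple check.
A sketch of this standard criterion (background material, not from the paper):

```python
def solvable(board, width=4):
    """Classic solvability test for the (width**2 - 1)-puzzle on an
    even-width grid. `board` is a flat tuple in row-major order, 0 = hole.
    Solvable iff (#inversions among the tiles) + (hole's row counted from
    the bottom, 1-indexed) is odd."""
    tiles = [t for t in board if t != 0]
    inversions = sum(a > b for i, a in enumerate(tiles) for b in tiles[i + 1:])
    hole_row_from_bottom = len(board) // width - board.index(0) // width
    return (inversions + hole_row_from_bottom) % 2 == 1

solved = tuple(range(1, 16)) + (0,)
assert solvable(solved)                    # trivially reachable
loyd = tuple(range(1, 14)) + (15, 14, 0)   # Sam Loyd's 14-15 swap
assert not solvable(loyd)                  # lies in the other component
```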
On the non commutative Iwasawa main conjecture for abelian varieties over function fields | We establish the Iwasawa main conjecture for semi-stable abelian varieties
over a function field of characteristic $p$ under certain restrictive
assumptions. Namely we consider $p$-torsion free $p$-adic Lie extensions of the
base field which contain the constant $\mathbb Z_p$-extension and are
everywhere unramified. Under the classical $\mu=0$ hypothesis we give a proof
which mainly relies on the interpretation of the Selmer complex in terms of
$p$-adic cohomology [TV] together with the trace formulas of [EL1].
| 0 | 0 | 1 | 0 | 0 | 0 |
Absorption and Emission Probabilities of Electrons in Electric and Magnetic Fields for FEL | We consider induced emission of ultrarelativistic electrons in strong
electric (magnetic) fields that are uniform along the direction of the electron
motion and are not uniform in the transverse direction. The stimulated
absorption and emission probabilities are found for such a system.
| 0 | 1 | 0 | 0 | 0 | 0 |
Design, Development and Evaluation of a UAV to Study Air Quality in Qatar | Measuring gases for air quality monitoring is a challenging task that demands
long observation times and large numbers of sensors. The aim of this
project is to develop a partially autonomous unmanned aerial vehicle (UAV)
equipped with sensors, in order to monitor and collect air quality real time
data in designated areas and send it to the ground base. This project is
designed and implemented by a multidisciplinary team from electrical and
computer engineering departments. The electrical engineering team is
responsible for implementing the air quality sensors that collect real-time
data and for transmitting the data from the plane to the ground. The computer
engineering team, in turn, is in charge of interfacing the sensors and
providing a platform to view and visualize the air quality data and live video
streaming. The proposed project
contains several sensors to measure Temperature, Humidity, Dust, CO, CO2 and
O3. The collected data are transmitted to a server over a wireless internet
connection; the server stores the data and supplies them, in semi-real time, to
any party with permission to access them through an Android phone or a website.
The developed UAV has undergone several field tests at Al Shamal airport in
Qatar, with interesting results and proof-of-concept outcomes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gaussian Process bandits with adaptive discretization | In this paper, the problem of maximizing a black-box function $f:\mathcal{X}
\to \mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process
(GP) prior. In particular, a new algorithm for this problem is proposed, and
high probability bounds on its simple and cumulative regret are established.
The query point selection rule in most existing methods involves an exhaustive
search over an increasingly fine sequence of uniform discretizations of
$\mathcal{X}$. The proposed algorithm, in contrast, adaptively refines
$\mathcal{X}$ which leads to a lower computational complexity, particularly
when $\mathcal{X}$ is a subset of a high dimensional Euclidean space. In
addition to the computational gains, sufficient conditions are identified under
which the regret bounds of the new algorithm improve upon the known results.
Finally, an extension of the algorithm to the case of contextual bandits is
proposed, and high probability bounds on the contextual regret are presented.
| 1 | 0 | 0 | 1 | 0 | 0 |
Conditional Time Series Forecasting with Convolutional Neural Networks | We present a method for conditional time series forecasting based on an
adaptation of the recent deep convolutional WaveNet architecture. The proposed
network contains stacks of dilated convolutions that allow it to access a
broad range of history when forecasting, together with a ReLU activation
function; conditioning is performed by applying multiple convolutional filters
in parallel to separate time series, which allows for fast processing of data
and exploitation of the correlation structure between the multivariate time
series. We test and
analyze the performance of the convolutional network both unconditionally as
well as conditionally for financial time series forecasting using the S&P500,
the volatility index, the CBOE interest rate and several exchange rates and
extensively compare it to the performance of the well-known autoregressive
model and a long-short term memory network. We show that a convolutional
network is well-suited for regression-type problems and is able to effectively
learn dependencies in and between the series without the need for long
historical time series, is a time-efficient and easy-to-implement alternative
to recurrent-type networks, and tends to outperform linear and recurrent models.
| 0 | 0 | 0 | 1 | 0 | 0 |
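To make the dilation mechanism concrete, below is a minimal numpy sketch of a
causal dilated convolution and a WaveNet-like ReLU stack in the spirit of the
architecture described above; the weights are illustrative, and the paper's
full model additionally conditions on parallel series.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], ...
    x: (T,) series; w: (k,) filter; left zero-padding keeps length T."""
    k = len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([w @ xp[t : t + pad + 1 : dilation] for t in range(len(x))])

def wavenet_like(x, filters):
    """Stack causal convolutions with dilations 1, 2, 4, ... plus ReLU.
    With L layers of width-k filters the receptive field grows to
    (k - 1) * (2**L - 1) + 1 steps, i.e. exponentially in depth."""
    h = x
    for i, w in enumerate(filters):
        h = np.maximum(causal_dilated_conv(h, w, dilation=2 ** i), 0.0)
    return h

# Demo: 3 layers of width-2 filters see (2-1)*(2**3-1)+1 = 8 past steps.
rng = np.random.default_rng(3)
x = rng.normal(size=64)
filters = [rng.normal(size=2) for _ in range(3)]
print(wavenet_like(x, filters).shape)   # (64,)
```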
Automated Assistants to Identify and Prompt Action on Visual News Bias | Bias is a common problem in today's media, appearing frequently in text and
in visual imagery. Users on social media websites such as Twitter need better
methods for identifying bias. Additionally, activists (those who are motivated
to effect change related to some topic) need better methods to identify and
counteract bias that is contrary to their mission. With both of these use cases
in mind, in this paper we propose a novel tool called UnbiasedCrowd that
supports identification of, and action on bias in visual news media. In
particular, it addresses the following key challenges (1) identification of
bias; (2) aggregation and presentation of evidence to users; (3) enabling
activists to inform the public of bias and take action by engaging people in
conversation with bots. We describe a preliminary study on the Twitter platform
that explores the impressions that activists had of our tool, and how people
reacted to and engaged with online bots that exposed visual bias. We conclude
by discussing the design implications of our findings for creating future
systems that identify and counteract the effects of news bias.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Matched Filter Technique For Slow Radio Transient Detection And First Demonstration With The Murchison Widefield Array | Many astronomical sources produce transient phenomena at radio frequencies,
but the transient sky at low frequencies (<300 MHz) remains relatively
unexplored. Blind surveys with new widefield radio instruments are setting
increasingly stringent limits on the transient surface density on various
timescales. Although many of these instruments are limited by classical
confusion noise from an ensemble of faint, unresolved sources, one can in
principle detect transients below the classical confusion limit to the extent
that the classical confusion noise is independent of time. We develop a
technique for detecting radio transients that is based on temporal matched
filters applied directly to time series of images rather than relying on
source-finding algorithms applied to individual images. This technique has
well-defined statistical properties and is applicable to variable and transient
searches for both confusion-limited and non-confusion-limited instruments.
Using the Murchison Widefield Array as an example, we demonstrate that the
technique works well on real data despite the presence of classical confusion
noise, sidelobe confusion noise, and other systematic errors. We searched for
transients lasting between 2 minutes and 3 months. We found no transients and
set improved upper limits on the transient surface density at 182 MHz for flux
densities between ~20--200 mJy, providing the best limits to date for hour- and
month-long transients.
| 0 | 1 | 0 | 0 | 0 | 0 |
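A simplified numpy sketch of the idea described above: subtract each pixel's
temporal mean (so time-independent confusion cancels), correlate the residual
time series with a unit-norm transient template, and threshold in units of a
robust noise estimate. Illustrative only; the paper's filters and statistics
are constructed more carefully.

```python
import numpy as np

def matched_filter_detect(cube, template, threshold=5.0):
    """cube: (T, ny, nx) image time series; template: (L,) light-curve shape.
    Returns a boolean (T-L+1, ny, nx) map of above-threshold detections."""
    resid = cube - cube.mean(axis=0)          # time-independent sky cancels
    tpl = np.asarray(template, float)
    tpl = tpl / np.linalg.norm(tpl)           # unit-norm matched template
    T, L = cube.shape[0], len(tpl)
    snr = np.stack([np.tensordot(tpl, resid[t:t + L], axes=(0, 0))
                    for t in range(T - L + 1)])
    # robust per-pixel noise estimate (median absolute deviation)
    sigma = 1.4826 * np.median(np.abs(snr - np.median(snr, axis=0)), axis=0)
    return snr > threshold * np.maximum(sigma, 1e-12)

# Demo: inject a 5-frame boxcar transient into one pixel of pure noise.
rng = np.random.default_rng(4)
cube = rng.normal(size=(100, 8, 8))
cube[40:45, 3, 3] += 4.0
hits = matched_filter_detect(cube, np.ones(5))
print(np.argwhere(hits))   # expect hits near start time t = 40, pixel (3, 3)
```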
Shape and Energy Consistent Pseudopotentials for Correlated Electron systems | A method is developed for generating pseudopotentials for use in
correlated-electron calculations. The paradigms of shape and energy consistency
are combined and defined in terms of correlated-electron wave-functions. The
resulting energy consistent correlated electron pseudopotentials (eCEPPs) are
constructed for H, Li--F, Sc--Fe, and Cu. Their accuracy is quantified by
comparing the relaxed molecular geometries and dissociation energies they
provide with all-electron results, with all quantities evaluated using coupled
cluster singles, doubles, and triples calculations. Errors inherent in the
pseudopotentials are also compared with those arising from a number of
approximations commonly used with pseudopotentials. The eCEPPs provide a
significant improvement in optimised geometries and dissociation energies for
small molecules, with errors for the latter being an order of magnitude smaller
than for Hartree-Fock-based pseudopotentials available in the literature.
Gaussian basis sets are optimised for use with these pseudopotentials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Bayes model selection | We offer a general Bayes theoretic framework to tackle the model selection
problem under a two-step prior design: the first-step prior serves to assess
the model selection uncertainty, and the second-step prior quantifies the prior
belief on the strength of the signals within the model chosen from the first
step.
We establish non-asymptotic oracle posterior contraction rates under (i) a
new Bernstein-inequality condition on the log likelihood ratio of the
statistical experiment, (ii) a local entropy condition on the dimensionality of
the models, and (iii) a sufficient mass condition on the second-step prior near
the best approximating signal for each model. The first-step prior can be
designed generically. The resulting posterior mean also satisfies an oracle
inequality, thus automatically serving as an adaptive point estimator in a
frequentist sense. Model mis-specification is allowed in these oracle rates.
The new Bernstein-inequality condition not only eliminates the convention of
constructing explicit tests with exponentially small type I and II errors, but
also suggests the intrinsic metric to use in a given statistical experiment,
both as a loss function and as an entropy measurement. This gives a unified
reduction scheme for many experiments considered in Ghosal & van der
Vaart (2007) and beyond. As an illustration of the scope of our general results
in concrete applications, we consider (i) trace regression, (ii)
shape-restricted isotonic/convex regression, (iii) high-dimensional partially
linear regression and (iv) covariance matrix estimation in the sparse factor
model. These new results serve either as theoretical justification of practical
prior proposals in the literature, or as an illustration of the generic
construction scheme of a (nearly) minimax adaptive estimator for a
multi-structured experiment.
| 0 | 0 | 1 | 1 | 0 | 0 |
Phase Congruency Parameter Optimization for Enhanced Detection of Image Features for both Natural and Medical Applications | Following the presentation and proof of the hypothesis that image features
are particularly perceived at points where the Fourier components are maximally
in phase, the concept of phase congruency (PC) is introduced. Subsequently, a
two-dimensional multi-scale phase congruency (2D-MSPC) is developed, which has
become an important tool for detecting and evaluating image features. However,
the 2D-MSPC requires many parameters to be tuned appropriately for optimal
image feature detection. In this paper, we define a criterion for parameter
optimization of the 2D-MSPC, which is a function of its maximum and minimum
moments. We formulate the problem in various optimal and suboptimal
frameworks, and discuss the conditions and features of the suboptimal
solutions. The effectiveness of the proposed method is verified through
several examples, ranging from natural objects to medical images from patients
with a neurological disease, multiple sclerosis.
| 1 | 0 | 1 | 0 | 0 | 0 |
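A crude one-dimensional illustration of the phase congruency measure underlying the 2D-MSPC discussed above: the ratio of the magnitude of the summed analytic filter responses to the sum of their magnitudes. The octave band-pass filter bank here is an assumption of this sketch, not the paper's filter design:

```python
import numpy as np
from scipy.signal import hilbert

def phase_congruency_1d(signal, n_scales=4):
    """Crude 1-D phase congruency: |sum of analytic responses| divided by
    the sum of their magnitudes, over octave-spaced FFT band-pass bands."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n)
    energy = np.zeros(n, dtype=complex)
    amp_sum = np.zeros(n)
    for s in range(n_scales):
        lo, hi = 0.5 ** (s + 2), 0.5 ** (s + 1)   # octave bands
        band = spec * ((freqs >= lo) & (freqs < hi))
        f = np.fft.irfft(band, n)                  # band-passed signal
        analytic = hilbert(f)                      # f + i * Hilbert(f)
        energy += analytic
        amp_sum += np.abs(analytic)
    return np.abs(energy) / (amp_sum + 1e-12)

x = np.zeros(512)
x[256:] = 1.0                                      # a step edge
pc = phase_congruency_1d(x)
print(pc[250:262].round(2))                        # peaks near the edge
```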
Community structure detection and evaluation during the pre- and post-ictal hippocampal depth recordings | Detecting and evaluating regions of the brain under various circumstances is one
of the most interesting topics in computational neuroscience. However, most
studies on detecting communities in functional connectivity networks of the
brain are carried out on networks obtained from coherency attributes,
and not from correlation. This gap is due, in part, to the fact
that many common methods for clustering graphs require the nodes of the network
to be `positively' linked together, a property that is guaranteed by a
coherency matrix, by definition. However, correlation matrices reveal more
information regarding how each pair of nodes is linked together. In this
study, for the first time we simultaneously examine four inherently different
network clustering methods (spectral, heuristic, and optimization methods)
applied to the functional connectivity networks of the CA1 region of the
hippocampus of an anaesthetized rat during pre-ictal and post-ictal states. The
networks are obtained from correlation matrices, and the results are compared
with those obtained by applying the same methods to coherency matrices. The
correlation matrices show a much finer community structure compared to the
coherency matrices. Furthermore, we examine the potential smoothing effect of
choosing various window sizes for computing the correlation/coherency matrices.
| 1 | 0 | 0 | 0 | 1 | 0 |
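One minimal way to cluster a correlation-derived functional network, in the spirit of the comparison above; the threshold, the simple dropping of negative correlations, and the greedy-modularity method are illustrative stand-ins for the four methods the study examines:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def communities_from_correlation(corr, threshold=0.3):
    """Threshold a correlation matrix into a weighted graph and cluster it.
    Negative correlations are simply dropped here, which is one naive way
    around methods that require positively linked nodes."""
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                g.add_edge(i, j, weight=corr[i, j])
    return list(greedy_modularity_communities(g, weight="weight"))

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 10))
x[:, :5] += rng.normal(size=(200, 1))      # first five channels co-vary
corr = np.corrcoef(x, rowvar=False)
print(communities_from_correlation(corr))  # recovers the co-varying group
```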
Shapley effects for sensitivity analysis with correlated inputs: comparisons with Sobol' indices, numerical estimation and applications | The global sensitivity analysis of a numerical model aims to quantify, by
means of sensitivity index estimates, the contributions of each uncertain
input variable to the model output uncertainty. The so-called Sobol' indices,
which are based on functional variance analysis, are difficult to
interpret in the presence of statistical dependence between inputs.
Shapley effects were recently introduced to overcome this problem, as they
allocate the mutual contribution (due to correlation and interaction) of a
group of inputs to each individual input within the group. In this paper, using
several new analytical results, we study the effects of linear correlation
between some Gaussian input variables on Shapley effects, and compare these
effects to classical first-order and total Sobol' indices. This illustrates the
interest, in terms of sensitivity analysis setting and interpretation, of
Shapley effects in the case of dependent inputs. We also investigate the
numerical convergence of the estimated Shapley effects. For the practical issue
of computationally demanding computer models, we show that the substitution of
the original model by a metamodel (here, kriging) makes it possible to estimate
these indices with precision at a reasonable computational cost.
| 0 | 0 | 1 | 1 | 0 | 0 |
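As a worked toy example of the quantities compared above, exact brute-force Shapley effects for a linear model with correlated Gaussian inputs, using the closed form Var(E[Y|X_S]) from Gaussian conditioning; the linear model and covariance are illustrative assumptions, not the paper's test cases:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_effects_linear_gaussian(beta, sigma):
    """Exact Shapley effects for Y = beta^T X, X ~ N(0, sigma), by brute
    force over all subsets (only feasible for small dimension d)."""
    d = len(beta)
    total_var = beta @ sigma @ beta

    def explained(S):                       # Var(E[Y | X_S])
        if not S:
            return 0.0
        S = list(S)
        cross = sigma[:, S]                 # Cov(X, X_S)
        a = np.linalg.solve(sigma[np.ix_(S, S)], cross.T @ beta)
        return float(beta @ cross @ a)

    sh = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            w = factorial(k) * factorial(d - k - 1) / factorial(d)
            for S in combinations(rest, k):
                sh[i] += w * (explained(S + (i,)) - explained(S))
    return sh / total_var                   # normalised: sums to 1

beta = np.array([1.0, 1.0, 0.5])
rho = 0.7                                   # correlation between X1 and X2
sigma = np.array([[1, rho, 0], [rho, 1, 0], [0, 0, 1.0]])
print(shapley_effects_linear_gaussian(beta, sigma))
```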
Ridesourcing Car Detection by Transfer Learning | Ridesourcing platforms like Uber and Didi are getting more and more popular
around the world. However, unauthorized ridesourcing activities that take
advantage of the sharing economy can greatly impair the healthy development of
this emerging industry. As a first step toward regulating on-demand ride services
and eliminating the black market, we design a method to detect ridesourcing cars from
a pool of cars based on their trajectories. Since licensed ridesourcing car
traces are not openly available and may be completely missing in some cities
due to legal issues, we turn to transferring knowledge from public transport
open data, i.e., taxis and buses, to ridesourcing detection among ordinary
vehicles. We propose a two-stage transfer learning framework. In Stage 1, we
take taxi and bus data as input to learn a random forest (RF) classifier using
trajectory features shared by taxis/buses and ridesourcing/other cars. Then, we
use the RF to label all the candidate cars. In Stage 2, leveraging the subset
of high-confidence labels from the previous stage as input, we further learn a
convolutional neural network (CNN) classifier for ridesourcing detection, and
iteratively refine RF and CNN, as well as the feature set, via a co-training
process. Finally, we use the resulting ensemble of RF and CNN to identify the
ridesourcing cars in the candidate pool. Experiments on real car, taxi and bus
traces show that our transfer learning framework, with no need of a pre-labeled
ridesourcing dataset, can achieve similar accuracy as the supervised learning
methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
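A sketch of Stage 1 as described above: train a random forest on source-domain (taxi/bus) trajectory features, then keep only high-confidence pseudo-labels on the candidate cars for Stage 2. The features, labels, and thresholds below are hypothetical placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical hand-crafted trajectory features shared across domains,
# e.g. daily mileage, fraction of idle time, radius of gyration.
X_source = rng.normal(size=(1000, 3))
y_source = (X_source[:, 0] + 0.5 * X_source[:, 1] > 0).astype(int)  # 1 = "for-hire-like"

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_source, y_source)                         # Stage 1: learn from taxis/buses

X_candidates = rng.normal(size=(500, 3))           # unlabelled ordinary cars
proba = rf.predict_proba(X_candidates)[:, 1]
confident = (proba > 0.9) | (proba < 0.1)          # Stage-2 training pool
pseudo_labels = (proba > 0.5).astype(int)
print(confident.sum(), "high-confidence pseudo-labels out of", len(proba))
```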
A Semantic Cross-Species Derived Data Management Application | Managing dynamic information in large multi-site, multi-species, and
multi-discipline consortia is a challenging task for data management
applications. Often in academic research studies the goals for informatics
teams are to build applications that provide extract-transform-load (ETL)
functionality to archive and catalog source data that has been collected by the
research teams. In consortia that cross species and methodological or
scientific domains, building interfaces that supply data in a usable fashion
and make intuitive sense to scientists from dramatically different backgrounds
increases the complexity for developers. Further, reusing source data from
outside one's scientific domain is fraught with ambiguities in understanding
the data types, analysis methodologies, and how to combine the data with those
from other research teams. We report on the design, implementation, and
performance of a semantic data management application to support the NIMH
funded Conte Center at the University of California, Irvine. The Center is
testing a theory of the consequences of "fragmented" (unpredictable, high
entropy) early-life experiences on adolescent cognitive and emotional outcomes
in both humans and rodents. It employs cross-species neuroimaging, epigenomic,
molecular, and neuroanatomical approaches in humans and rodents to assess the
potential consequences of fragmented unpredictable experience on brain
structure and circuitry. To address this multi-technology, multi-species
approach, the system uses semantic web techniques based on the Neuroimaging
Data Model (NIDM) to facilitate data ETL functionality. We find this approach
enables a low-cost, easy to maintain, and semantically meaningful information
management system, enabling the diverse research teams to access and use the
data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Retrosynthetic reaction prediction using neural sequence-to-sequence models | We describe a fully data-driven model that learns to perform a retrosynthetic
reaction prediction task, which is treated as a sequence-to-sequence mapping
problem. The end-to-end trained model has an encoder-decoder architecture
consisting of two recurrent neural networks, an architecture that has
previously shown great success in solving other sequence-to-sequence
prediction tasks such as machine
translation. The model is trained on 50,000 experimental reaction examples from
the United States patent literature, which span 10 broad reaction types that
are commonly used by medicinal chemists. We find that our model performs
comparably with a rule-based expert system baseline model, and also overcomes
certain limitations associated with rule-based expert systems and with any
machine learning approach that contains a rule-based expert system component.
Our model provides an important first step towards solving the challenging
problem of computational retrosynthetic analysis.
| 1 | 0 | 0 | 1 | 0 | 0 |
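A minimal PyTorch sketch of the encoder-decoder idea described above; the vocabulary size, dimensions, and random token tensors are placeholders, not the paper's setup (which trains on tokenized reactions from patent data):

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder mapping a product token sequence to a
    reactant token sequence; sizes here are illustrative only."""
    def __init__(self, vocab_size, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.embed(src))      # h: (1, batch, hidden)
        dec_out, _ = self.decoder(self.embed(tgt_in), h)
        return self.out(dec_out)                  # (batch, T, vocab)

vocab = 40
model = Seq2Seq(vocab)
src = torch.randint(0, vocab, (8, 30))            # "product" tokens
tgt = torch.randint(0, vocab, (8, 35))            # "reactant" tokens
logits = model(src, tgt[:, :-1])                  # teacher forcing
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab), tgt[:, 1:].reshape(-1))
loss.backward()
print(float(loss))
```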
Redshift, metallicity and size of two extended dwarf Irregular galaxies. A link between dwarf Irregulars and Ultra Diffuse Galaxies? | We present the results of the spectroscopic and photometric follow-up of two
field galaxies that were selected as possible stellar counterparts of local
high velocity clouds. Our analysis shows that the two systems are distant (D>20
Mpc) dwarf irregular galaxies unrelated to the local HI clouds. However, the
newly derived distance and structural parameters reveal that the two galaxies
have luminosities and effective radii very similar to the recently identified
Ultra Diffuse Galaxies (UDGs). In contrast to classical UDGs, they are remarkably
isolated, having no known giant galaxy within ~2.0 Mpc. Moreover, one of them
has a very high gas content compared to galaxies of similar stellar mass, with
an HI-to-stellar-mass ratio M_HI/M_* ~ 90, typical of almost-dark dwarfs.
Expanding on this finding, we show that extended dwarf irregulars overlap the
distribution of UDGs in the M_V vs. log(r_e) plane and that the sequence
including dwarf spheroidals, dwarf irregulars and UDGs appears as continuously
populated in this plane.
| 0 | 1 | 0 | 0 | 0 | 0 |
Collapsing hyperkähler manifolds | Given a projective hyperkähler manifold with a holomorphic Lagrangian
fibration, we prove that hyperkähler metrics with the volume of the torus fibers
shrinking to zero collapse in the Gromov-Hausdorff sense (and smoothly away
from the singular fibers) to a compact metric space, which is a half-dimensional
special Kähler manifold outside a singular set of real Hausdorff codimension 2
and is homeomorphic to the base projective space.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fast amortized inference of neural activity from calcium imaging data with variational autoencoders | Calcium imaging permits optical measurement of neural activity. Since
intracellular calcium concentration is an indirect measurement of neural
activity, computational tools are necessary to infer the true underlying
spiking activity from fluorescence measurements. Bayesian model inversion can
be used to solve this problem, but typically requires either computationally
expensive MCMC sampling, or faster but approximate maximum-a-posteriori
optimization. Here, we introduce a flexible algorithmic framework for fast,
efficient and accurate extraction of neural spikes from imaging data. Using the
framework of variational autoencoders, we propose to amortize inference by
training a deep neural network to perform model inversion efficiently. The
recognition network is trained to produce samples from the posterior
distribution over spike trains. Once trained, performing inference amounts to a
fast single forward pass through the network, without the need for iterative
optimization or sampling. We show that amortization can be applied flexibly to
a wide range of nonlinear generative models and significantly improves upon the
state of the art in computation time, while achieving competitive accuracy. Our
framework is also able to represent posterior distributions over spike trains.
We demonstrate the generality of our method by proposing the first
probabilistic approach for separating backpropagating action potentials from
putative synaptic inputs in calcium imaging of dendritic spines.
| 1 | 0 | 0 | 1 | 0 | 0 |
Hidden multiparticle excitation in weakly interacting Bose-Einstein Condensate | We investigate the multiparticle excitation effect on a collective density
excitation as well as a single-particle excitation in a weakly interacting
Bose--Einstein condensate (BEC). We find that although the weakly interacting
BEC exhibits a weak multiparticle excitation spectrum at low temperatures, this
multiparticle excitation effect may not remain hidden, but emerges as
bimodality in the density response function through the single-particle
excitation. The identification of the single-particle and density excitation
spectra, known to be a unique feature of the BEC at absolute zero temperature,
is also assessed at nonzero temperatures.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hausdorff Measure: Lost in Translation | In the present article we describe how one can define Hausdorff measure
allowing empty elements in coverings, and using infinite countable coverings
only. In addition, we discuss how the use of different, nonequivalent
interpretations of the notion of a "countable set", which is typical of classical
and modern mathematics, may lead to contradictions.
| 0 | 0 | 1 | 0 | 0 | 0 |
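For reference, a sketch of the conventional definition that the article above varies, stated for subsets of $\mathbb{R}^n$; the treatment of empty covering elements is exactly the subtlety at stake:

```latex
% Conventional Hausdorff measure, for E \subseteq \mathbb{R}^n, s \ge 0:
\[
  \mathcal{H}^{s}_{\delta}(E)
  = \inf\Big\{ \sum_{i=1}^{\infty} (\operatorname{diam} U_i)^{s} \;:\;
      E \subseteq \bigcup_{i=1}^{\infty} U_i,\
      \operatorname{diam} U_i \le \delta \Big\},
  \qquad
  \mathcal{H}^{s}(E) = \lim_{\delta \to 0^{+}} \mathcal{H}^{s}_{\delta}(E).
\]
% Permitting U_i = \emptyset (with the convention (diam \emptyset)^s := 0)
% lets any finite covering be padded into a countably infinite one, so
% restricting to infinite countable coverings leaves the infimum unchanged
% -- except in the delicate case s = 0, where the 0^0 convention matters.
```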
Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use | The popular Alternating Least Squares (ALS) algorithm for tensor
decomposition is efficient and easy to implement, but often converges to poor
local optima---particularly when the weights of the factors are non-uniform. We
propose a modification of the ALS approach that is as efficient as standard
ALS, but provably recovers the true factors with random initialization under
standard incoherence assumptions on the factors of the tensor. We demonstrate
the significant practical superiority of our approach over traditional ALS for
a variety of tasks on synthetic data---including tensor factorization on exact,
noisy and over-complete tensors, as well as tensor completion---and for
computing word embeddings from a third-order word tri-occurrence tensor.
| 1 | 0 | 0 | 1 | 0 | 0 |
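A numpy sketch of ALS for CP decomposition with a factor-orthogonalization step in the early iterations, conveying the flavour of the modification described above; this is not the authors' exact algorithm, and all sizes and schedules are illustrative:

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product: (I, R) and (J, R) -> (I*J, R)."""
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

def orth_als(T, rank, n_iter=50, orth_iters=5):
    """CP decomposition by ALS; the factors are re-orthogonalised (QR)
    during the first few iterations, a sketch of the orthogonalized-ALS
    idea for escaping poor local optima."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.normal(size=(n, rank)) for n in (I, J, K))
    T1 = T.reshape(I, -1)                        # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)     # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)     # mode-3 unfolding
    for it in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
        if it < orth_iters:                      # orthogonalisation step
            A, B = np.linalg.qr(A)[0], np.linalg.qr(B)[0]
    return A, B, C

# Recover a random rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(10, 3)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = orth_als(T, 3)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - That) / np.linalg.norm(T))   # small relative error
```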
Domain Generalization by Marginal Transfer Learning | Domain generalization is the problem of assigning class labels to an
unlabeled test data set, given several labeled training data sets drawn from
similar distributions. This problem arises in several applications where data
distributions fluctuate because of biological, technical, or other sources of
variation. We develop a distribution-free, kernel-based approach that predicts
a classifier from the marginal distribution of features, by leveraging the
trends present in related classification tasks. This approach involves
identifying an appropriate reproducing kernel Hilbert space and optimizing a
regularized empirical risk over the space. We present generalization error
analysis, describe universal kernels, and establish universal consistency of
the proposed methodology. Experimental results on synthetic data and three real
data applications demonstrate the superiority of the method with respect to a
pooling strategy.
| 0 | 0 | 0 | 1 | 0 | 0 |
(Non-)formality of the extended Swiss Cheese operads | We study two colored operads of configurations of little $n$-disks in a unit
$n$-disk, with the centers of the small disks of one color restricted to an
$m$-plane, $m<n$. We compute the rational homotopy type of these \emph{extended
Swiss Cheese operads} and show how they are connected to the rational homotopy
types of the inclusion maps from the little $m$-disks to the little $n$-disks
operad.
| 0 | 0 | 1 | 0 | 0 | 0 |
Pricing options and computing implied volatilities using neural networks | This paper proposes a data-driven approach, by means of an Artificial Neural
Network (ANN), to value financial options and to calculate implied volatilities
with the aim of accelerating the corresponding numerical methods. With ANNs
being universal function approximators, this method trains an optimized ANN on
a data set generated by a sophisticated financial model, and runs the trained
ANN as an agent of the original solver in a fast and efficient way. We test
this approach on three different types of solvers, including the analytic
solution for the Black-Scholes equation, the COS method for the Heston
stochastic volatility model and Brent's iterative root-finding method for the
calculation of implied volatilities. The numerical results show that the ANN
solver can reduce the computing time significantly.
| 1 | 0 | 0 | 0 | 0 | 1 |
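A minimal sketch of the data-driven pipeline above for the analytic Black-Scholes case: generate option prices from the closed-form solver, then fit an ANN surrogate. The sampling ranges and network hyperparameters are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def black_scholes_call(S, K, T, r, sigma):
    """Analytic Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(0.4, 1.6, n)        # moneyness S/K, taking K = 1
T = rng.uniform(0.1, 2.0, n)
r = rng.uniform(0.0, 0.1, n)
sigma = rng.uniform(0.05, 0.5, n)
X = np.column_stack([S, T, r, sigma])
y = black_scholes_call(S, 1.0, T, r, sigma)   # labels from the solver

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=400, random_state=0)
ann.fit(X[:18000], y[:18000])                 # train the surrogate
err = np.abs(ann.predict(X[18000:]) - y[18000:])
print("mean abs error:", err.mean())          # fast evaluation thereafter
```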
Effect of magnetization on the tunneling anomaly in compressible quantum Hall states | Tunneling of electrons into a two-dimensional electron system is known to
exhibit an anomaly at low bias, in which the tunneling conductance vanishes due
to a many-body interaction effect. Recent experiments have measured this
anomaly between two copies of the half-filled Landau level as a function of
in-plane magnetic field, and they suggest that increasing spin polarization
drives a deeper suppression of tunneling. Here we present a theory of the
tunneling anomaly between two copies of the partially spin-polarized
Halperin-Lee-Read state, and we show that the conventional description of the
tunneling anomaly, based on the Coulomb self-energy of the injected charge
packet, is inconsistent with the experimental observation. We propose that the
experiment is operating in a different regime, not previously considered, in
which the charge-spreading action is determined by the compressibility of the
composite fermions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning to Acquire Information | We consider the problem of diagnosis, where a set of simple observations is
used to infer a potentially complex hidden hypothesis. Finding the optimal
subset of observations is intractable in general, thus we focus on the problem
of active diagnosis, where the agent selects the next most-informative
observation based on the results of previous observations. We show that under
the assumption of uniform observation entropy, one can build an implication
model which directly predicts the outcome of the potential next observation
conditioned on the results of past observations, and selects the observation
with the maximum entropy. This approach enjoys reduced computation complexity
by bypassing the complicated hypothesis space, and can be trained on
observation data alone, learning how to query without knowledge of the hidden
hypothesis.
| 1 | 0 | 0 | 1 | 0 | 0 |
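A toy sketch of the selection rule described above: query the unobserved binary observation whose predicted outcome has maximum entropy. The probability array stands in for the learned implication model, which is an assumption of this sketch:

```python
import numpy as np

def next_observation(pred_probs, observed):
    """Pick the unobserved binary observation with maximum predictive
    entropy, given per-observation outcome probabilities from some
    implication model."""
    p = np.clip(pred_probs, 1e-9, 1 - 1e-9)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    entropy[observed] = -np.inf             # never re-query
    return int(np.argmax(entropy))

probs = np.array([0.95, 0.5, 0.7, 0.02])    # model's predicted outcomes
observed = np.array([False, False, True, False])
print(next_observation(probs, observed))    # -> 1, the most uncertain
```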
How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV | This work explores the feasibility of steering a drone with a (recurrent)
neural network, based on input from a forward-looking camera, in the context of
a high-level navigation task. We set up a generic framework for training a
network to perform navigation tasks based on imitation learning. It can be
applied to both aerial and land vehicles. As a proof of concept we apply it to
a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a
room containing a number of obstacles. So far only feedforward neural networks
(FNNs) have been used to train UAV control. To cope with more complex tasks, we
propose the use of recurrent neural networks (RNN) instead and successfully
train an LSTM (Long-Short Term Memory) network for controlling UAVs. Vision
based control is a sequential prediction problem, known for its highly
correlated input data. The correlation makes training a network hard,
especially an RNN. To overcome this issue, we investigate an alternative
sampling method during training, namely window-wise truncated backpropagation
through time (WW-TBPTT). Further, end-to-end training requires large amounts
of data, which are often unavailable. Therefore, we compare the performance of
retraining only the Fully Connected (FC) and LSTM control layers with networks
which are trained end-to-end. Performing the relatively simple task of crossing
a room already reveals important guidelines and good practices for training
neural control networks. Different visualizations help to explain the learned
behavior.
| 1 | 0 | 0 | 0 | 0 | 0 |
Range-efficient consistent sampling and locality-sensitive hashing for polygons | Locality-sensitive hashing (LSH) is a fundamental technique for similarity
search and similarity estimation in high-dimensional spaces. The basic idea is
that similar objects should produce hash collisions with probability
significantly larger than objects with low similarity. We consider LSH for
objects that can be represented as point sets in either one or two dimensions.
To make the point sets finite, we consider the subset of points lying on a grid.
Directly applying LSH (e.g. min-wise hashing) to these point sets would require
time proportional to the number of points. We seek to achieve time that is much
lower than direct approaches.
Technically, we introduce new primitives for range-efficient consistent
sampling (of independent interest), and show how to turn such samples into LSH
values. Another application of our technique is a data structure for quickly
estimating the size of the intersection or union of a set of preprocessed
polygons. Curiously, our consistent sampling method uses a transformation to a
geometric problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
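For contrast, here is the direct (non-range-efficient) baseline the abstract seeks to improve on: min-wise consistent sampling over an explicitly enumerated grid point set, with running time proportional to the number of points. The hash choice and toy sets are assumptions of this sketch:

```python
import hashlib

def minhash(points, n_hashes=64):
    """Naive min-wise consistent sampling over an explicit point set.
    The paper's contribution is producing equivalent samples *without*
    enumerating the grid points, which this baseline does not do."""
    sigs = []
    for k in range(n_hashes):
        sigs.append(min(
            hashlib.blake2b(f"{k}:{p}".encode(), digest_size=8).digest()
            for p in points))
    return sigs

def jaccard_estimate(a, b):
    """Fraction of matching min-hashes estimates the Jaccard similarity."""
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

# Two overlapping 1-D integer ranges as toy "shapes on a grid".
A = [(x,) for x in range(0, 100)]
B = [(x,) for x in range(50, 150)]
print(jaccard_estimate(A, B))   # true Jaccard = 50/150 ~ 0.33
```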
Decoupled Greedy Learning of CNNs | A commonly cited inefficiency of neural network training by back-propagation
is the update locking problem: each layer must wait for the signal to propagate
through the network before updating. We consider and analyze a training
procedure, Decoupled Greedy Learning (DGL), that addresses this problem more
effectively and at scales beyond those of previous solutions. It is based on a
greedy relaxation of the joint training objective, recently shown to be
effective in the context of Convolutional Neural Networks (CNNs) on large-scale
image classification. We consider an optimization of this objective that
permits us to decouple the layer training, allowing for layers or modules in
networks to be trained with a potentially linear parallelization in layers. We
show theoretically and empirically that this approach converges. In addition,
we empirically find that it can lead to better generalization than sequential
greedy optimization and even standard end-to-end back-propagation. We show that
an extension of this approach to asynchronous settings, where modules can
operate with large communication delays, is possible with the use of a replay
buffer. We demonstrate the effectiveness of DGL on the CIFAR-10 dataset
against alternatives and on the large-scale ImageNet dataset, where we are able
to effectively train VGG and ResNet-152 models.
| 1 | 0 | 0 | 1 | 0 | 0 |
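A sketch of the decoupled greedy update pattern described above: each module trains against its own auxiliary head, and a detach() blocks gradients between modules so they could, in principle, be updated in parallel. Architecture and optimiser settings are placeholders:

```python
import torch
import torch.nn as nn

# Two modules, each with its own auxiliary classifier and optimiser.
modules = nn.ModuleList([
    nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
])
aux_heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(256, 10)])
opts = [torch.optim.Adam(list(m.parameters()) + list(h.parameters()))
        for m, h in zip(modules, aux_heads)]

x = torch.randn(32, 1, 28, 28)                   # a toy mini-batch
y = torch.randint(0, 10, (32,))
h = x
for module, head, opt in zip(modules, aux_heads, opts):
    h = module(h)
    loss = nn.functional.cross_entropy(head(h), y)
    opt.zero_grad()
    loss.backward()                              # stays within this module
    opt.step()
    h = h.detach()       # decouple: no gradient flows to earlier modules
```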
Discrete time Pontryagin maximum principle for optimal control problems under state-action-frequency constraints | We establish a Pontryagin maximum principle for discrete time optimal control
problems under the following three types of constraints: a) constraints on the
states pointwise in time, b) constraints on the control actions pointwise in
time, and c) constraints on the frequency spectrum of the optimal control
trajectories. While the first two types of constraints are already included in
the existing versions of the Pontryagin maximum principle, it turns out that
the third type of constraints cannot be recast in any of the standard forms of
the existing results for the original control system. We provide two different
proofs of our Pontryagin maximum principle in this article, and include several
special cases fine-tuned to control-affine nonlinear and linear system models.
In particular, for minimization of quadratic cost functions and linear time
invariant control systems, we provide tight conditions under which the optimal
controls under frequency constraints are either normal or abnormal.
| 1 | 0 | 1 | 0 | 0 | 0 |
Quantitative evaluation of an active Chemotaxis model in Discrete time | A system of $N$ particles in a chemical medium in $\mathbb{R}^{d}$ is studied
in a discrete time setting. The underlying interacting particle system in
continuous time can be expressed as
\begin{eqnarray}
dX_{i}(t) &=& \left[-(I-A)X_{i}(t) + \nabla h(t,X_{i}(t))\right]dt + dW_{i}(t),
\quad X_{i}(0)=x_{i}\in \mathbb{R}^{d} \quad \forall\, i=1,\ldots,N, \nonumber\\
\frac{\partial}{\partial t} h(t,x) &=& -\alpha h(t,x) + D\Delta h(t,x)
+ \frac{\beta}{N} \sum_{i=1}^{N} g(X_{i}(t),x), \qquad h(0,\cdot) = h(\cdot),
\label{main}
\end{eqnarray}
where $X_{i}(t)$ is the location of the $i$th particle at time $t$ and
$h(t,x)$ is the function measuring the concentration of the medium at
location $x$, with $h(0,x) = h(x)$. In this
article we describe a general discrete time non-linear formulation of the
aforementioned model and a strongly coupled particle system approximating it.
Similar models have been studied before (Budhiraja et al. (2011)) under a
restrictive compactness assumption on the domain of the particles. In the
current work the particles take values in $\mathbb{R}^{d}$, and consequently
the stability analysis is particularly challenging. We provide sufficient
conditions for the existence
of a unique fixed point for the dynamical system governing the large $N$
asymptotics of the particle empirical measure. We also provide uniform in time
convergence rates for the particle empirical measure to the corresponding limit
measure under suitable conditions on the model.
| 0 | 0 | 1 | 0 | 0 | 0 |
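A hedged Euler--Maruyama sketch of a one-dimensional analogue of the continuous-time system (\ref{main}) above; the constants, the point-mass choice of $g$, and the periodic grid are toy assumptions, not the paper's discrete-time formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M, dt, steps = 50, 10.0, 200, 1e-3, 2000
xs = np.linspace(0, L, M, endpoint=False)
dx = L / M
X = rng.uniform(0, L, N)                 # particle locations X_i
h = np.zeros(M)                          # chemical field h(t, x)
A, alpha, beta, D = 0.5, 1.0, 2.0, 0.1   # toy constants

for _ in range(steps):
    # Drift: -(1-A) X_i + grad h at the particle, plus Brownian noise.
    grad_h = np.interp(X, xs, np.gradient(h, dx), period=L)
    X = X + (-(1 - A) * X + grad_h) * dt + np.sqrt(dt) * rng.normal(size=N)
    # Field update: decay, diffusion, and deposition by the particles.
    lap_h = (np.roll(h, 1) - 2 * h + np.roll(h, -1)) / dx ** 2
    deposit = np.zeros(M)                # g(X_i, .) as a point deposit
    np.add.at(deposit, (X / dx).astype(int) % M, 1.0 / (N * dx))
    h = h + (-alpha * h + D * lap_h + beta * deposit) * dt

print("total field mass:", h.sum() * dx, "particle spread:", X.std())
```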
Deep Learning the Ising Model Near Criticality | It is well established that neural networks with deep architectures perform
better than shallow networks for many tasks in machine learning. In statistical
physics, while there has been recent interest in representing physical data
with generative modelling, the focus has been on shallow neural networks. A
natural question to ask is whether deep neural networks hold any advantage over
shallow networks in representing such data. We investigate this question by
using unsupervised, generative graphical models to learn the probability
distribution of a two-dimensional Ising system. Deep Boltzmann machines, deep
belief networks, and deep restricted Boltzmann networks are trained on thermal
spin configurations from this system, and compared to the shallow architecture
of the restricted Boltzmann machine. We benchmark the models, focussing on the
accuracy of generating energetic observables near the phase transition, where
these quantities are most difficult to approximate. Interestingly, after
training the generative networks, we observe that the accuracy essentially
depends only on the number of neurons in the first hidden layer of the network,
and not on other model details such as network depth or model type. This is
evidence that shallow networks are more efficient than deep networks at
representing physical probability distributions associated with Ising systems
near criticality.
| 1 | 1 | 0 | 1 | 0 | 0 |
A supervised approach to time scale detection in dynamic networks | For any stream of time-stamped edges that form a dynamic network, an
important choice is the aggregation granularity that an analyst uses to bin the
data. Picking such a windowing of the data is often done by hand, or left up to
the technology that is collecting the data. However, the choice can make a big
difference in the properties of the dynamic network. This is the time scale
detection problem. In previous work, this problem is often solved with a
heuristic as an unsupervised task. As an unsupervised problem, it is difficult
to measure how well a given algorithm performs. In addition, we show that the
quality of the windowing is dependent on which task an analyst wants to perform
on the network after windowing. Therefore the time scale detection problem
should not be handled independently from the rest of the analysis of the
network.
We introduce a framework that tackles both of these issues: By measuring the
performance of the time scale detection algorithm based on how well a given
task is accomplished on the resulting network, we are for the first time able
to directly compare different time scale detection algorithms to each other.
Using this framework, we introduce time scale detection algorithms that take a
supervised approach: they leverage ground truth on training data to find a good
windowing of the test data. We compare the supervised approach to previous
approaches and several baselines on real data.
| 1 | 0 | 0 | 0 | 0 | 0 |
Binarized octree generation for Cartesian adaptive mesh refinement around immersed geometries | We revisit the generation of balanced octrees for adaptive mesh refinement
(AMR) of Cartesian domains with immersed complex geometries. In a recent short
note [Hasbestan and Senocak, J. Comput. Phys. vol. 351:473-477 (2017)], we
showed that the data-locality of the Z-order curve in hashed linear octree
generation methods may not be perfect because of potential collisions in the
hash table. Building on that observation, we propose a binarized octree
generation method that complies with the Z-order curve exactly. Similar to a
hashed linear octree generation method, we use Morton encoding to index the
nodes of an octree, but use a red-black tree in place of the hash table. A
red-black tree is a special kind of self-balancing binary tree, which we use
for insertion and deletion of elements during mesh adaptation. By strictly
working with the
bitwise representation of the octree, we remove computer hardware limitations
on the depth of adaptation on a single processor. Additionally, we introduce a
geometry encoding technique for rapidly tagging the solid geometry for
refinement. Our results for several geometries with different levels of
adaptations show that the binarized octree generation outperforms the linear
octree generation in terms of runtime performance at the expense of only a
slight increase in memory usage. We provide the current AMR capability as
open-source software.
| 1 | 1 | 0 | 0 | 0 | 0 |
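The Morton (Z-order) encoding mentioned above interleaves the bits of the coordinates into a single key; a minimal sketch follows. Storing these keys in an ordered map (such as a red-black tree) keeps the nodes sorted exactly along the Z-order curve, which is the point made in the abstract; the bit width here is an arbitrary choice:

```python
def morton_encode_3d(x, y, z, bits=21):
    """Interleave the bits of three integer coordinates into a single
    Morton (Z-order) key, as used to index octree nodes."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Nearby cells map to nearby keys along the Z-order curve:
for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1), (2, 0, 0)]:
    print(p, bin(morton_encode_3d(*p)))
```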
Adversarial Examples: Opportunities and Challenges | With the advent of the era of artificial intelligence (AI), deep neural
networks (DNNs) have shown huge superiority over humans in image recognition,
speech processing, autonomous vehicles and medical diagnosis. However, recent
studies indicate that DNNs are vulnerable to adversarial examples (AEs), which
are designed by attackers to fool deep learning models. Unlike real
examples, AEs can hardly be distinguished by the human eye, yet they mislead
models into predicting incorrect outputs and therefore threaten security-critical
deep-learning applications. In recent years, the generation of and defense
against AEs have become a research hotspot in the field of AI security. This
article reviews the latest research progress on AEs. First, we introduce the
concepts, causes, characteristics and evaluation metrics of AEs, then survey
the state-of-the-art AE generation methods, discussing their advantages and
disadvantages. After that, we review the existing defenses and discuss their
limitations. Finally, future research opportunities and challenges around AEs
are discussed.
| 0 | 0 | 0 | 1 | 0 | 0 |
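As a concrete instance of the AE generation methods such a survey covers, here is a sketch of the classic Fast Gradient Sign Method (Goodfellow et al., 2015); the toy classifier is a placeholder:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: perturb the input in the direction of
    the sign of the loss gradient, bounded by eps in the L-inf norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # toy classifier
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```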
Decoupling Learning Rules from Representations | In the artificial intelligence field, learning often corresponds to changing
the parameters of a parameterized function. A learning rule is an algorithm or
mathematical expression that specifies precisely how the parameters should be
changed. When creating an artificial intelligence system, we must make two
decisions: what representation should be used (i.e., what parameterized
function should be used) and what learning rule should be used to search
through the resulting set of representable functions. Using most learning
rules, these two decisions are coupled in a subtle (and often unintentional)
way. That is, using the same learning rule with two different representations
that can represent the same sets of functions can result in two different
outcomes. After arguing that this coupling is undesirable, particularly when
using artificial neural networks, we present a method for partially decoupling
these two decisions for a broad class of learning rules that span unsupervised
learning, reinforcement learning, and supervised learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Schur P-positivity and involution Stanley symmetric functions | The involution Stanley symmetric functions $\hat{F}_y$ are the stable limits
of the analogues of Schubert polynomials for the orbits of the orthogonal group
in the flag variety. These symmetric functions are also generating functions
for involution words, and are indexed by the involutions in the symmetric
group. By construction each $\hat{F}_y$ is a sum of Stanley symmetric functions
and therefore Schur positive. We prove the stronger fact that these power
series are Schur $P$-positive. We give an algorithm to efficiently compute the
decomposition of $\hat{F}_y$ into Schur $P$-summands, and prove that this
decomposition is triangular with respect to the dominance order on partitions.
As an application, we derive pattern avoidance conditions which characterize
the involution Stanley symmetric functions which are equal to Schur
$P$-functions. We deduce as a corollary that the involution Stanley symmetric
function of the reverse permutation is a Schur $P$-function indexed by a
shifted staircase shape. These results lead to alternate proofs of theorems of
Ardila-Serrano and DeWitt on skew Schur functions which are Schur
$P$-functions. We also prove new Pfaffian formulas for certain related
involution Schubert polynomials.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Riemannian gossip approach to subspace learning on Grassmann manifold | In this paper, we focus on subspace learning problems on the Grassmann
manifold. Interesting applications in this setting include low-rank matrix
completion and low-dimensional multivariate regression, among others. Motivated
by privacy concerns, we aim to solve such problems in a decentralized setting
where multiple agents have access to (and solve) only a part of the whole
optimization problem. The agents communicate with each other to arrive at a
consensus, i.e., agree on a common quantity, via the gossip protocol.
We propose a novel cost function for subspace learning on the Grassmann
manifold, which is a weighted sum of several sub-problems (each solved by an
agent) and the communication cost among the agents. The cost function has a
finite sum structure. In the proposed modeling approach, different agents learn
individual local subspaces, but they achieve asymptotic consensus on the global
learned subspace. The approach is scalable and parallelizable. Numerical
experiments show the efficacy of the proposed decentralized algorithms on
various matrix completion and multivariate regression benchmarks.
| 1 | 0 | 1 | 0 | 0 | 0 |
Space-Valued Diagrams, Type-Theoretically (Extended Abstract) | Topologists are sometimes interested in space-valued diagrams over a given
index category, but it is tricky to say what such a diagram even is if we look
for a notion that is stable under equivalence. The same happens in (homotopy)
type theory, where it is known only for special cases how one can define a type
of type-valued diagrams over a given index category. We offer several
constructions. We first show how to define homotopy coherent diagrams which
come with all higher coherence laws explicitly, with two variants that come
with assumptions on the index category or on the type theory. Further, we
present a construction of diagrams over certain Reedy categories. As an
application, we add the degeneracies to the well-known construction of
semisimplicial types, yielding a construction of simplicial types up to any
given finite level. The current paper is only an extended abstract, and a full
version is to follow. In the full paper, we will show that the different
notions of diagrams are equivalent to each other and to the known notion of
Reedy fibrant diagrams whenever the statement makes sense. In the current
paper, we only sketch some core ideas of the proofs.
| 1 | 0 | 1 | 0 | 0 | 0 |
Properties of cyanobacterial UV-absorbing pigments suggest their evolution was driven by optimizing photon dissipation rather than photoprotection | An ancient repertoire of UV-absorbing pigments, which survives today in the
phylogenetically oldest extant photosynthetic organisms, the cyanobacteria,
points to a direction in the evolutionary adaptation of the pigments and their
associated biota: from largely UVC-absorbing pigments in the Archean to
pigments covering ever more of the longer-wavelength UV and visible range in
the Phanerozoic. Such a scenario implies selection for photon dissipation
rather than photoprotection over the evolutionary history of life. This is
consistent with the thermodynamic dissipation theory of the origin and
evolution of life, which suggests that the most important hallmark of
biological evolution has been the covering of Earth's surface with organic
pigment molecules and water to absorb and dissipate ever more completely the
prevailing surface solar spectrum. In this article we compare a set of
photophysical, photochemical, biosynthetic and other germane properties of the
two dominant classes of cyanobacterial UV-absorbing pigments, the
mycosporine-like amino acids (MAAs) and the scytonemins. Pigment wavelengths of
maximum absorption correspond with the time dependence of the prevailing
Earth-surface solar spectrum, and we proffer this as evidence for the
selection of photon dissipation rather than photoprotection over the history
of life on Earth.
| 0 | 1 | 0 | 0 | 0 | 0 |
Output Impedance Diffusion into Lossy Power Lines | Output impedances are inherent elements of power sources in electrical
grids. In this paper, we answer the following question: What is the
effect of output impedances on the inductivity of the power network? To address
this question, we propose a measure to evaluate the inductivity of a power
grid, and we compute this measure for various types of output impedances.
Following this computation, it turns out that network inductivity highly
depends on the algebraic connectivity of the network. By exploiting the derived
expressions of the proposed measure, one can tune the output impedances in
order to enforce a desired level of inductivity on the power system.
Furthermore, the results show that the more "connected" the network is, the
more the output impedances diffuse into the network. Finally, using Kron
reduction, we provide examples that demonstrate the utility and validity of the
method.
| 1 | 0 | 0 | 0 | 0 | 0 |
Enhancing the significance of gravitational wave bursts through signal classification | The quest to observe gravitational waves challenges our ability to
discriminate signals from detector noise. This issue is especially relevant for
transient gravitational waves searches with a robust eyes wide open approach,
the so called all- sky burst searches. Here we show how signal classification
methods inspired by broad astrophysical characteristics can be implemented in
all-sky burst searches preserving their generality. In our case study, we apply
a multivariate analyses based on artificial neural networks to classify waves
emitted in compact binary coalescences. We enhance by orders of magnitude the
significance of signals belonging to this broad astrophysical class against the
noise background. Alternatively, at a given level of mis-classification of
noise events, we can detect about 1/4 more of the total signal population. We
also show that a more general strategy of signal classification can actually be
performed, by testing the ability of artificial neural networks in
discriminating different signal classes. The possible impact on future
observations by the LIGO-Virgo network of detectors is discussed by analysing
recoloured noise from previous LIGO-Virgo data with coherent WaveBurst, one of
the flagship pipelines dedicated to all-sky searches for transient
gravitational waves.
| 0 | 1 | 0 | 0 | 0 | 0 |
Model-based Clustering with Sparse Covariance Matrices | Finite Gaussian mixture models are widely used for model-based clustering of
continuous data. Nevertheless, since the number of model parameters scales
quadratically with the number of variables, these models can be easily
over-parameterized. For this reason, parsimonious models have been developed
via covariance matrix decompositions or assuming local independence. However,
these remedies do not allow for direct estimation of sparse covariance matrices
nor do they take into account that the structure of association among the
variables can vary from one cluster to the other. To this end, we introduce
mixtures of Gaussian covariance graph models for model-based clustering with
sparse covariance matrices. A penalized likelihood approach is employed for
estimation and a general penalty term on the graph configurations can be used
to induce different levels of sparsity and incorporate prior knowledge. Model
estimation is carried out using a structural-EM algorithm for parameters and
graph structure estimation, where two alternative strategies based on a genetic
algorithm and an efficient stepwise search are proposed for inference. With
this approach, sparse component covariance matrices are directly obtained. The
framework results in a parsimonious model-based clustering of the data via a
flexible model for the within-group joint distribution of the variables.
Extensive simulated data experiments and application to illustrative datasets
show that the method attains good classification performance and model quality.
| 0 | 0 | 0 | 1 | 0 | 0 |
An Assessment of Data Transfer Performance for Large-Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6 | We document the data transfer workflow, data transfer performance, and other
aspects of staging approximately 56 terabytes of climate model output data from
the distributed Coupled Model Intercomparison Project (CMIP5) archive to the
National Energy Research Supercomputing Center (NERSC) at the Lawrence Berkeley
National Laboratory required for tracking and characterizing extratropical
storms, a phenomenon of importance in the mid-latitudes. We present this
analysis to illustrate the current challenges in assembling multi-model data
sets at major computing facilities for large-scale studies of CMIP5 data.
Because of the larger archive size of the upcoming CMIP6 phase of model
intercomparison, we expect such data transfers to become of increasing
importance, and perhaps of routine necessity. We find that data transfer rates
using the ESGF are often slower than what is typically available to US
residences and that there is significant room for improvement in the data
transfer capabilities of the ESGF portal and data centers both in terms of
workflow mechanics and in data transfer performance. We believe performance
improvements of at least an order of magnitude are within technical reach using
current best practices, as illustrated by the performance we achieved in
transferring the complete raw data set between two high performance computing
facilities. To achieve these performance improvements, we recommend: that
current best practices (such as the Science DMZ model) be applied to the data
servers and networks at ESGF data centers; that sufficient financial and human
resources be devoted at the ESGF data centers for systems and network
engineering tasks to support high performance data movement; and that
performance metrics for data transfer between ESGF data centers and major
computing facilities used for climate data analysis be established, regularly
tested, and published.
| 1 | 1 | 0 | 0 | 0 | 0 |
Generalization for Adaptively-chosen Estimators via Stable Median | Datasets are often reused to perform multiple statistical analyses in an
adaptive way, in which each analysis may depend on the outcomes of previous
analyses on the same dataset. Standard statistical guarantees do not account
for these dependencies and little is known about how to provably avoid
overfitting and false discovery in the adaptive setting. We consider a natural
formalization of this problem in which the goal is to design an algorithm that,
given a limited number of i.i.d. samples from an unknown distribution, can
answer adaptively-chosen queries about that distribution.
We present an algorithm that estimates the expectations of $k$ arbitrary
adaptively-chosen real-valued estimators using a number of samples that scales
as $\sqrt{k}$. The answers given by our algorithm are essentially as accurate
as if fresh samples were used to evaluate each estimator. In contrast, prior
work yields error guarantees that scale with the worst-case sensitivity of each
estimator. We also give a version of our algorithm that can be used to verify
answers to such queries where the sample complexity depends logarithmically on
the number of queries $k$ (as in the reusable holdout technique).
Our algorithm is based on a simple approximate median algorithm that
satisfies the strong stability guarantees of differential privacy. Our
techniques provide a new approach for analyzing the generalization guarantees
of differentially private algorithms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Automated Problem Identification: Regression vs Classification via Evolutionary Deep Networks | Regression or classification? This is perhaps the most basic question faced
when tackling a new supervised learning problem. We present an Evolutionary
Deep Learning (EDL) algorithm that automatically solves this by identifying the
question type with high accuracy, along with a proposed deep architecture.
Typically, a significant amount of human insight and preparation is required
prior to executing machine learning algorithms. For example, when creating deep
neural networks, the number of parameters must be selected in advance and
furthermore, a lot of these choices are made based upon pre-existing knowledge
of the data such as the use of a categorical cross entropy loss function.
Humans are able to study a dataset and decide whether it represents a
classification or a regression problem, and consequently make decisions which
will be applied to the execution of the neural network. We propose the
Automated Problem Identification (API) algorithm, which uses an evolutionary
algorithm interface to TensorFlow to manipulate a deep neural network to decide
if a dataset represents a classification or a regression problem. We test API
on 16 different classification, regression and sentiment analysis datasets with
up to 10,000 features and up to 17,000 unique target values. API achieves an
average accuracy of $96.3\%$ in identifying the problem type without hardcoding
any insights about the general characteristics of regression or classification
problems. For example, API successfully identifies classification problems even
with 1000 target values. Furthermore, the algorithm recommends which loss
function to use and also recommends a neural network architecture. Our work is
therefore a step towards fully automated machine learning.
| 1 | 0 | 0 | 1 | 0 | 0 |
Attribution of extreme rainfall in Southeast China during May 2015 | Anthropogenic climate change increased the probability that a short-duration,
intense rainfall event would occur in parts of southeast China. This type of
event occurred in May 2015, causing serious flooding.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Kellogg property and boundary regularity for p-harmonic functions with respect to the Mazurkiewicz boundary and other compactifications | In this paper, boundary regularity for p-harmonic functions is studied with
respect to the Mazurkiewicz boundary and other compactifications. In
particular, the Kellogg property (which says that the set of irregular boundary
points has capacity zero) is obtained for a large class of compactifications,
but also two examples when it fails are given. This study is done for complete
metric spaces equipped with doubling measures supporting a p-Poincaré
inequality, but the results are new also in unweighted Euclidean spaces.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonparametric Inference via Bootstrapping the Debiased Estimator | In this paper, we propose to construct confidence bands by bootstrapping the
debiased kernel density estimator (for density estimation) and the debiased
local polynomial regression estimator (for regression analysis). The idea of
using a debiased estimator was first introduced in Calonico et al. (2015),
where they construct a confidence interval of the density function (and
regression function) at a given point by explicitly estimating stochastic
variations. We extend their ideas and propose a bootstrap approach for
constructing confidence bands that is uniform for every point in the support.
We prove that the resulting bootstrap confidence band is asymptotically valid
and is compatible with most tuning parameter selection approaches, such as the
rule of thumb and cross-validation. We further generalize our method to
confidence sets of density level sets and inverse regression problems.
Simulation studies confirm the validity of the proposed confidence bands/sets.
| 0 | 0 | 1 | 1 | 0 | 0 |
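An illustrative sketch of bootstrapping a sup-norm confidence band for a kernel density estimate. Note that it bootstraps the plain (not debiased) KDE, so the bandwidth handling and coverage properties are simplified assumptions relative to the paper's scheme:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(size=300)
grid = np.linspace(-3, 3, 100)
f_hat = gaussian_kde(data)(grid)          # point estimate on the grid

# Bootstrap the supremum deviation to get a *uniform* critical value.
sup_devs = []
for _ in range(500):
    boot = rng.choice(data, size=len(data), replace=True)
    sup_devs.append(np.max(np.abs(gaussian_kde(boot)(grid) - f_hat)))
c = np.quantile(sup_devs, 0.95)           # sup-norm critical value
lower, upper = f_hat - c, f_hat + c       # band valid for every grid point
print("band half-width:", c)
```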
Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks | Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee these graph coloring problems will
convergence to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space driven through the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
| 0 | 0 | 0 | 0 | 1 | 0 |
Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks | A major challenge in brain tumor treatment planning and quantitative
evaluation is determination of the tumor extent. The noninvasive magnetic
resonance imaging (MRI) technique has emerged as a front-line diagnostic tool
for brain tumors without ionizing radiation. Manual segmentation of brain tumor
extent from 3D MRI volumes is a very time-consuming task, and the performance
depends heavily on the operator's experience. In this context, a reliable,
fully automatic method for brain tumor segmentation is necessary for
an efficient measurement of the tumor extent. In this study, we propose a fully
automatic method for brain tumor segmentation, which is developed using U-Net
based deep convolutional networks. Our method was evaluated on Multimodal Brain
Tumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade
brain tumor and 54 low-grade tumor cases. Cross-validation has shown that our
method can obtain promising segmentation efficiently.
| 1 | 0 | 0 | 0 | 0 | 0 |
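A one-level U-Net sketch in PyTorch showing the encoder/decoder with the characteristic skip connection that the method above builds on; the channel counts, depth, and input size are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, and a decoder with a skip
    connection; real tumour-segmentation nets use more levels/channels."""
    def __init__(self, in_ch=4, n_classes=2):    # e.g. 4 MRI modalities
        super().__init__()
        self.enc = block(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                 # 32 = 16 skip + 16 up
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([e, self.up(m)], dim=1))
        return self.head(d)                      # per-pixel class logits

net = TinyUNet()
logits = net(torch.randn(1, 4, 64, 64))
print(logits.shape)                              # (1, 2, 64, 64)
```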
Asymptotic Blind-spot Analysis of Localization Networks under Correlated Blocking using a Poisson Line Process | In a localization network, the line-of-sight between anchors (transceivers)
and targets may be blocked due to the presence of obstacles in the environment.
Due to the non-zero size of the obstacles, the blocking is typically correlated
across both anchor and target locations, with the extent of correlation
increasing with obstacle size. If a target does not have line-of-sight to a
minimum number of anchors, then its position cannot be estimated unambiguously
and is, therefore, said to be in a blind-spot. However, the analysis of the
blind-spot probability of a given target is challenging due to the inherent
randomness in the obstacle locations and sizes. In this letter, we develop a
new framework to analyze the worst-case impact of correlated blocking on the
blind-spot probability of a typical target; in particular, we model the
obstacles by a Poisson line process and the anchor locations by a Poisson point
process. For this setup, we define the notion of the asymptotic blind-spot
probability of the typical target and derive a closed-form expression for it as
a function of the area distribution of a typical Poisson-Voronoi cell. As an
upper bound for the more realistic case when obstacles have finite dimensions,
the asymptotic blind-spot probability is useful as a design tool to ensure that
the blind-spot probability of a typical target does not exceed a desired
threshold, $\epsilon$.
| 1 | 0 | 0 | 0 | 0 | 0 |
The relation between galaxy morphology and colour in the EAGLE simulation | We investigate the relation between kinematic morphology, intrinsic colour
and stellar mass of galaxies in the EAGLE cosmological hydrodynamical
simulation. We calculate the intrinsic u-r colours and measure the fraction of
kinetic energy invested in ordered corotation of 3562 galaxies at z=0 with
stellar masses larger than $10^{10}M_{\odot}$. We perform a visual inspection
of gri-composite images and find that our kinematic morphology correlates
strongly with visual morphology. EAGLE produces a galaxy population for which
morphology is tightly correlated with the location in the colour-mass diagram,
with the red sequence mostly populated by elliptical galaxies and the blue
cloud by disc galaxies. Satellite galaxies are more likely to be on the red
sequence than centrals, and for satellites the red sequence is morphologically
more diverse. These results show that the connection between mass, intrinsic
colour and morphology arises from galaxy formation models that reproduce the
observed galaxy mass function and sizes.
| 0 | 1 | 0 | 0 | 0 | 0 |
An alternative to continuous univariate distributions supported on a bounded interval: The BMT distribution | In this paper, we introduce the BMT distribution as a unimodal alternative
to continuous univariate distributions supported on a bounded interval. The
ideas behind the mathematical formulation of this new distribution come from
computer aid geometric design, specifically from Bezier curves. First, we
review general properties of a distribution given by parametric equations and
extend the definition of a Bezier distribution. Then, after proposing the BMT
cumulative distribution function, we derive its probability density function
and a closed-form expression for quantile function, median, interquartile
range, mode, and moments. The domain change from [0,1] to [c,d] is mentioned.
Estimation of parameters is approached by the methods of maximum likelihood and
maximum product of spacing. We test the numerical estimation procedures using
some simulated data. Usefulness and flexibility of the new distribution are
illustrated in three real data sets. The BMT distribution has a significant
potential to estimate domain parameters and to model data outside the scope of
the beta or similar distributions.
| 0 | 0 | 1 | 1 | 0 | 0 |
Deep Object Centric Policies for Autonomous Driving | While learning visuomotor skills in an end-to-end manner is appealing, deep
neural networks are often uninterpretable and fail in surprising ways. For
robotics tasks, such as autonomous driving, models that explicitly represent
objects may be more robust to new scenes and provide intuitive visualizations.
We describe a taxonomy of object-centric models which leverage both object
instances and end-to-end learning. In the Grand Theft Auto V simulator, we show
that object centric models outperform object-agnostic methods in scenes with
other vehicles and pedestrians, even with an imperfect detector. We also
demonstrate that our architectures perform well on real world environments by
evaluating on the Berkeley DeepDrive Video dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Search for Laser Emission with Megawatt Thresholds from 5600 FGKM Stars | We searched high-resolution spectra of 5600 nearby stars for emission lines
that are both inconsistent with a natural origin and unresolved spatially, as
would be expected from extraterrestrial optical lasers. The spectra were
obtained with the Keck 10-meter telescope, including light coming from within
0.5 arcsec of the star, corresponding typically to within a few to tens of au
of the star, and covering nearly the entire visible wavelength range from 3640
to 7890 angstroms. We establish detection thresholds by injecting synthetic
laser emission lines into our spectra and blindly analyzing them for
detections. We compute flux density detection thresholds for all wavelengths
and spectral types sampled. Our detection thresholds for the power of the
lasers themselves range from 3 kW to 13 MW, independent of distance to the star
but dependent on the competing "glare" of the spectral energy distribution of
the star and on the wavelength of the laser light, launched from a benchmark,
diffraction-limited 10-meter class telescope. We found no such laser emission
coming from the planetary region around any of the 5600 stars. As they contain
roughly 2000 lukewarm, Earth-size planets, we rule out models of the Milky Way
in which over 0.1 percent of warm, Earth-size planets harbor technological
civilizations that, intentionally or not, are beaming optical lasers toward us.
A next generation spectroscopic laser search will be done by the Breakthrough
Listen initiative, targeting more stars, especially stellar types overlooked
here including spectral types O, B, A, early F, late M, and brown dwarfs, and
astrophysical exotica.
| 0 | 1 | 0 | 0 | 0 | 0 |
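The injection-recovery idea is easy to sketch: add a synthetic, unresolved Gaussian emission line to a noisy spectrum and check whether a simple threshold detector recovers it. The toy example below only conveys the procedure; the wavelength grid, line width, and 6-sigma rule are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrum: flat continuum of 1e4 counts per pixel with photon noise.
wave = np.linspace(3640.0, 7890.0, 20000)            # angstroms
continuum = np.full_like(wave, 1.0e4)
spec = rng.poisson(continuum).astype(float)

def inject_line(spec, wave, w0, amplitude, fwhm=0.5):
    """Add a narrow Gaussian emission line centred at w0 (angstroms)."""
    sigma = fwhm / 2.355
    return spec + amplitude * np.exp(-0.5 * ((wave - w0) / sigma) ** 2)

def detect(spec, continuum, nsigma=6.0):
    """Return pixel indices exceeding the continuum by nsigma photon sigmas."""
    resid = (spec - continuum) / np.sqrt(continuum)
    return np.flatnonzero(resid > nsigma)

injected = inject_line(spec, wave, w0=6000.0, amplitude=1000.0)
print(wave[detect(injected, continuum)])             # recovered line pixels
```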
Learning Large Scale Ordinary Differential Equation Systems | Learning large scale nonlinear ordinary differential equation (ODE) systems
from data is known to be computationally and statistically challenging. We
present a framework together with the adaptive integral matching (AIM)
algorithm for learning polynomial or rational ODE systems with a sparse network
structure. The framework allows for time-course data sampled from multiple
environments representing, e.g., different interventions or perturbations of the
system. The algorithm AIM combines an initial penalised integral matching step
with an adapted least squares step based on solving the ODE numerically. The R
package episode implements AIM together with several other algorithms and is
available from CRAN. It is shown that AIM achieves state-of-the-art network
recovery for the in silico phosphoprotein abundance data from the eighth DREAM
challenge with an AUROC of 0.74, and it is demonstrated via a range of
numerical examples that AIM has good statistical properties while being
computationally feasible even for large systems.
| 0 | 0 | 1 | 1 | 0 | 0 |
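The core integral-matching step can be sketched in a few lines: integrate the candidate basis functions along the observed trajectory and regress the state increments on them with an l1 penalty. This is a simplified illustration of the idea, not the AIM algorithm itself (no adaptive weights and no numerical-ODE refinement step); the function names and penalty level are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def integral_matching(t, X, basis, alpha=1e-3):
    """Sparse fit of dX/dt = basis(X) @ theta in integrated form:
    X(t) - X(0) = (integral of basis(X(s)) ds) @ theta.

    t: (T,) sample times; X: (T, d) states;
    basis: maps (T, d) states to (T, k) basis evaluations.
    Returns a (k, d) coefficient matrix, sparse in its entries."""
    B = basis(X)
    dt = np.diff(t)[:, None]
    IB = np.vstack([np.zeros(B.shape[1]),             # cumulative trapezoid
                    np.cumsum(0.5 * (B[1:] + B[:-1]) * dt, axis=0)])
    model = Lasso(alpha=alpha, fit_intercept=False).fit(IB, X - X[0])
    return model.coef_.T

# Example basis: constant, linear, and pairwise-product (rational-free) terms.
def poly_basis(X):
    cross = np.stack([X[:, i] * X[:, j]
                      for i in range(X.shape[1])
                      for j in range(i, X.shape[1])], axis=1)
    return np.hstack([np.ones((len(X), 1)), X, cross])
```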
Linear Time Clustering for High Dimensional Mixtures of Gaussian Clouds | Clustering mixtures of Gaussian distributions is a fundamental and
challenging problem that is ubiquitous in various high-dimensional data
processing tasks. While state-of-the-art work on learning Gaussian mixture
models has focused primarily on improving separation bounds and their
generalization to arbitrary classes of mixture models, less attention has been
paid to the practical computational efficiency of the proposed solutions. In this
paper, we propose a novel and highly efficient clustering algorithm for $n$
points drawn from a mixture of two arbitrary Gaussian distributions in
$\mathbb{R}^p$. The algorithm involves performing random 1-dimensional
projections until a direction is found that yields a user-specified clustering
error $e$. For a 1-dimensional separation parameter $\gamma$ satisfying
$\gamma=Q^{-1}(e)$, the expected number of such projections is shown to be
bounded by $o(\ln p)$, when $\gamma$ satisfies $\gamma\leq
c\sqrt{\ln{\ln{p}}}$, with $c$ as the separability parameter of the two
Gaussians in $\mathbb{R}^p$. Consequently, the expected overall running time of
the algorithm is $o(\ln p)\,O(np)$, i.e., linear in $n$ and quasi-linear in $p$, and
the sample complexity is independent of $p$. This result stands in contrast to
prior works which provide polynomial, with at-best quadratic, running time in
$p$ and $n$. We show that our bound on the expected number of 1-dimensional
projections extends to the case of three or more Gaussian components, and we
present a generalization of our results to mixture distributions beyond the
Gaussian model.
| 1 | 0 | 0 | 0 | 0 | 0 |
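The projection loop at the heart of the algorithm is simple to sketch: draw random unit directions, project the data to one dimension, and stop once the projected clusters are sufficiently separated. The sketch below uses a median split and a pooled-variance separation test as stand-ins for the paper's stopping rule $\gamma = Q^{-1}(e)$; the names and thresholds are illustrative.

```python
import numpy as np

def cluster_by_random_projection(X, gamma, max_tries=1000, seed=0):
    """Project the rows of X onto random 1-D directions until the two
    halves of the projected data have means at least 2*gamma pooled
    standard deviations apart; return the induced binary labels."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    for _ in range(max_tries):
        u = rng.standard_normal(p)
        u /= np.linalg.norm(u)                 # random unit direction
        z = X @ u                              # 1-D projection, O(np)
        thr = np.median(z)
        left, right = z[z <= thr], z[z > thr]
        pooled = np.sqrt(0.5 * (left.var() + right.var()))
        if right.mean() - left.mean() >= 2 * gamma * pooled:
            return (z > thr).astype(int)       # cluster labels
    raise RuntimeError("no sufficiently separating direction found")
```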
Estimation of Relationship between Stimulation Current and Force Exerted during Isometric Contraction | In this study, we developed a method to estimate the relationship between
stimulation current and force exerted during isometric contraction. In functional
electrical stimulation (FES), joints are driven by applying voltage to muscles.
This technology has long been used in the field of rehabilitation, and
application-oriented research has recently been reported. However, the
relationship between stimulus magnitude and exercise capacity has received
little attention. Therefore, in this study, a human muscle
model was estimated using the transfer function estimation method with fast
Fourier transform. It was found that the relationship between stimulation
current and force exerted could be expressed by a first-order lag system. In
verification experiments, the proposed model was found to estimate the exerted
force well under steady-state conditions.
| 0 | 0 | 0 | 0 | 1 | 0 |
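The FFT-based identification step can be sketched as follows: form the empirical frequency response H = FFT(force) / FFT(current) and fit a first-order lag K/(Ts+1) by linear least squares on 1/H. This is a bare-bones illustration under idealised assumptions (periodic, low-noise records, no windowing or averaging), not the authors' exact procedure; the names are illustrative.

```python
import numpy as np

def fit_first_order_lag(u, y, dt):
    """Fit H(s) = K / (T*s + 1) to input u (stimulation current) and
    output y (exerted force), both sampled every dt seconds.

    Uses 1/H(jw) = (1 + j*w*T) / K, which is linear in (1/K, T/K)."""
    H = np.fft.rfft(y) / np.fft.rfft(u)                # empirical response
    w = 2 * np.pi * np.fft.rfftfreq(len(u), dt)[1:]    # skip the DC bin
    A = np.column_stack([np.ones_like(w), 1j * w])
    (inv_K, T_over_K), *_ = np.linalg.lstsq(A, 1.0 / H[1:], rcond=None)
    K = 1.0 / inv_K.real
    return K, (T_over_K * K).real                      # gain, time constant
```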