text (string, lengths 57–2.88k) | labels (sequence, length 6) |
---|---|
Title: Closure operators, frames, and neatest representations,
Abstract: Given a poset $P$ and a standard closure operator $\Gamma:\wp(P)\to\wp(P)$ we
give a necessary and sufficient condition for the lattice of $\Gamma$-closed
sets of $\wp(P)$ to be a frame in terms of the recursive construction of the
$\Gamma$-closure of sets. We use this condition to show that given a set
$\mathcal{U}$ of distinguished joins from $P$, the lattice of
$\mathcal{U}$-ideals of $P$ fails to be a frame if and only if it fails to be
$\sigma$-distributive, with $\sigma$ depending on the cardinalities of sets in
$\mathcal{U}$. From this we deduce that if a poset has the property that
whenever $a\wedge(b\vee c)$ is defined for $a,b,c\in P$ it is necessarily equal
to $(a\wedge b)\vee (a\wedge c)$, then it has an $(\omega,3)$-representation.
This answers a question from the literature. | [
0,
0,
1,
0,
0,
0
] |
Title: The structure of rationally factorized Lax type flows and their analytical integrability,
Abstract: The work is devoted to constructing a wide class of differential-functional
dynamical systems, whose rich algebraic structure makes their integrability
analytically effective. In particular, the operator Lax-type equations for
factorized seed elements are analyzed in detail, and an important theorem is
proved about their operator factorization and the related analytical solution
scheme for the corresponding nonlinear differential-functional dynamical
systems. | [
0,
1,
0,
0,
0,
0
] |
Title: Multiplex core-periphery organization of the human connectome,
Abstract: The behavior of many complex systems is determined by a core of densely
interconnected units. While many methods are available to identify the core of
a network when connections between nodes are all of the same type, a principled
approach to define the core when multiple types of connectivity are allowed is
still lacking. Here we introduce a general framework to define and extract the
core-periphery structure of multi-layer networks by explicitly taking into
account the connectivity of the nodes at each layer. We show how our method
works on synthetic networks with different size, density, and overlap between
the cores at the different layers. We then apply the method to multiplex brain
networks whose layers encode information both on the anatomical and the
functional connectivity among regions of the human cortex. Results confirm the
presence of the main known hubs, but also suggest the existence of novel brain
core regions that have been discarded by previous analyses that focused
exclusively on the structural layer. Our work is a step forward in the
identification of the core of the human connectome, and contributes to shedding
light on a fundamental question in modern neuroscience. | [
1,
0,
0,
0,
1,
0
] |
Title: Towards Learned Clauses Database Reduction Strategies Based on Dominance Relationship,
Abstract: Clause Learning is one of the most important components of a conflict driven
clause learning (CDCL) SAT solver that is effective on industrial instances.
Since the number of learned clauses is proved to be exponential in the worst
case, it is necessary to identify the most relevant clauses to maintain and
delete the irrelevant ones. As reported in the literature, several learned
clause deletion strategies have been proposed. However, the diversity in both
the number of clauses to be removed at each reduction step and the results
obtained with each strategy makes it difficult to determine which criterion is
better. Thus, the problem of selecting which learned clauses to remove
during the search remains very challenging. In this paper, we propose a
novel approach to identify the most relevant learned clauses without favoring
or excluding any of the proposed measures, but by adopting the notion of
dominance relationship among those measures. Our approach bypasses the problem
of the diversity of results and reaches a compromise between the assessments of
these measures. Furthermore, the proposed approach also avoids another
non-trivial problem, namely the number of clauses to be deleted at each
reduction of the learned clause database. | [
1,
0,
0,
0,
0,
0
] |
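As a rough illustration of the dominance idea in the abstract above, the sketch below computes the Pareto non-dominated learned clauses with respect to a set of relevance measures. The measure names (lbd, activity, size) and the convention that lower values are better are assumptions for illustration, not the paper's exact setup.

```python
from typing import Dict, List

# Hypothetical relevance measures per clause; lower is assumed better here.
Clause = Dict[str, float]  # e.g. {"lbd": 3, "activity": 0.2, "size": 5}

def dominates(a: Clause, b: Clause, measures: List[str]) -> bool:
    """a dominates b if a is no worse on every measure and strictly better on one."""
    no_worse = all(a[m] <= b[m] for m in measures)
    strictly_better = any(a[m] < b[m] for m in measures)
    return no_worse and strictly_better

def non_dominated(clauses: List[Clause], measures: List[str]) -> List[Clause]:
    """Keep only clauses not dominated by any other clause (a Pareto front)."""
    return [c for c in clauses
            if not any(dominates(other, c, measures)
                       for other in clauses if other is not c)]

if __name__ == "__main__":
    db = [{"lbd": 2, "activity": 0.1, "size": 4},
          {"lbd": 5, "activity": 0.8, "size": 12},
          {"lbd": 2, "activity": 0.4, "size": 4}]
    print(non_dominated(db, ["lbd", "activity", "size"]))
```

Keeping the non-dominated set avoids committing to any single measure, which is the compromise the abstract alludes to.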
Title: Global well-posedness of the 3D primitive equations with horizontal viscosity and vertical diffusivity,
Abstract: In this paper, we consider the 3D primitive equations of oceanic and
atmospheric dynamics with only horizontal eddy viscosities in the horizontal
momentum equations and only vertical diffusivity in the temperature equation.
Global well-posedness of strong solutions is established for any initial data
such that the initial horizontal velocity $v_0\in H^2(\Omega)$ and the initial
temperature $T_0\in H^1(\Omega)\cap L^\infty(\Omega)$ with $\nabla_HT_0\in
L^q(\Omega)$, for some $q\in(2,\infty)$. Moreover, the strong solutions enjoy
correspondingly more regularities if the initial temperature belongs to
$H^2(\Omega)$. The main difficulties are the absence of the vertical viscosity
and the lack of the horizontal diffusivity, which interact with each other,
thus causing the "mismatching" of regularities between the horizontal
momentum and temperature equations. To handle this "mismatching" of
regularities, we introduce several auxiliary functions, i.e., $\eta, \theta,
\varphi,$ and $\psi$ in the paper, which are the horizontal curls or some
appropriate combinations of the temperature with the horizontal divergences of
the horizontal velocity $v$ or its vertical derivative $\partial_zv$. To
overcome the difficulties caused by the absence of the horizontal diffusivity,
which leads to the requirement of some $L^1_t(W^{1,\infty}_\textbf{x})$-type a
priori estimates on $v$, we decompose the velocity into the
"temperature-independent" and temperature-dependent parts and deal with them in
different ways, by using the logarithmic Sobolev inequalities of the
Brézis-Gallouet-Wainger and Beale-Kato-Majda types, respectively.
Specifically, a logarithmic Sobolev inequality of the limiting type, introduced
in our previous work [12], is used, and a new logarithmic type Gronwall
inequality is exploited. | [
0,
1,
1,
0,
0,
0
] |
Title: An Incentive-Based Online Optimization Framework for Distribution Grids,
Abstract: This paper formulates a time-varying social-welfare maximization problem for
distribution grids with distributed energy resources (DERs) and develops online
distributed algorithms to identify (and track) its solutions. In the considered
setting, network operator and DER-owners pursue given operational and economic
objectives, while concurrently ensuring that voltages are within prescribed
limits. The proposed algorithm affords an online implementation to enable
tracking of the solutions in the presence of time-varying operational
conditions and changing optimization objectives. It involves a strategy where
the network operator collects voltage measurements throughout the feeder to
build incentive signals for the DER-owners in real time; DERs then adjust the
generated/consumed powers in order to avoid the violation of the voltage
constraints while maximizing given objectives. The stability of the proposed
schemes is analytically established and numerically corroborated. | [
1,
0,
1,
0,
0,
0
] |
Title: Towards a scientific blockchain framework for reproducible data analysis,
Abstract: Publishing reproducible analyses is a long-standing and widespread challenge
for the scientific community, funding bodies and publishers. Although a
definitive solution is still elusive, the problem is recognized to affect all
disciplines and lead to a critical system inefficiency. Here, we propose a
blockchain-based approach to enhance scientific reproducibility, with a focus
on life science studies and precision medicine. While the value of permanently
encoding into an immutable ledger all key study information (including
endpoints, data and metadata, protocols, analytical methods, and all
findings) has already been highlighted, here we apply the blockchain approach to
solve the issue of rewarding the time and expertise of scientists who commit to
verifying reproducibility. Our mechanism builds a trustless ecosystem of
researchers, funding bodies and publishers cooperating to guarantee digital and
permanent access to information and reproducible results. As a natural
byproduct, a procedure to quantify scientists' and institutions' reputation for
ranking purposes is obtained. | [
1,
0,
0,
0,
0,
0
] |
Title: Time Series Anomaly Detection; Detection of anomalous drops with limited features and sparse examples in noisy highly periodic data,
Abstract: Google uses continuous streams of data from industry partners in order to
deliver accurate results to users. Unexpected drops in traffic can be an
indication of an underlying issue and may be an early warning that remedial
action may be necessary. Detecting such drops is non-trivial because streams
are variable and noisy, with roughly regular spikes (in many different shapes)
in traffic data. We investigated the question of whether or not we can predict
anomalies in these data streams. Our goal is to utilize Machine Learning and
statistical approaches to classify anomalous drops in periodic, but noisy,
traffic patterns. Since we do not have a large body of labeled examples to
directly apply supervised learning for anomaly classification, we approached
the problem in two parts. First, we used TensorFlow to train our various models,
including DNNs, RNNs, and LSTMs, to perform regression and predict the expected
value in the time series. Second, we created anomaly detection rules that
compared the actual values to predicted values. Since the problem requires
finding sustained anomalies, rather than just short delays or momentary
inactivity in the data, our two detection methods focused on continuous
sections of activity rather than just single points. We tried multiple
combinations of our models and rules and found that using the intersection of
our two anomaly detection methods proved to be an effective method of detecting
anomalies on almost all of our models. In the process we also found that not
all data fell within our experimental assumptions, as one data stream had no
periodicity, and therefore no time based model could predict it. | [
1,
0,
0,
1,
0,
0
] |
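A minimal sketch of the second stage described in the abstract above: comparing actual values against model predictions and flagging only sustained drops rather than single points. The relative threshold and minimum run length are illustrative assumptions, and the predictions would come from whatever regression model (DNN/RNN/LSTM) was trained in the first stage.

```python
import numpy as np

def sustained_drop_anomalies(actual, predicted, rel_threshold=0.3, min_len=5):
    """Flag indices where actual falls below predicted by rel_threshold
    for at least min_len consecutive points (a sustained drop)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Pointwise "low" indicator: actual is at least rel_threshold below predicted.
    low = actual < (1.0 - rel_threshold) * predicted
    anomalies = np.zeros_like(low, dtype=bool)
    run_start = None
    for i, flag in enumerate(np.append(low, False)):  # sentinel to close final run
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_len:              # keep only sustained runs
                anomalies[run_start:i] = True
            run_start = None
    return anomalies

# Example: a noisy periodic signal with an injected sustained drop.
t = np.arange(200)
predicted = 100 + 10 * np.sin(2 * np.pi * t / 24)
actual = predicted + np.random.normal(0, 2, size=t.size)
actual[120:140] *= 0.5                                # simulated traffic drop
print(np.where(sustained_drop_anomalies(actual, predicted))[0])
```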
Title: Comparison of Self-Aware and Organic Computing Systems,
Abstract: With the increasing complexity and heterogeneity of computing devices, it has
become crucial for systems to be autonomous, adaptive to dynamic environments,
robust, flexible, and equipped with so-called self-* properties. These autonomous
systems are called organic computing (OC) systems. OC systems were proposed as a
solution for tackling complex systems. Design-time decisions have been shifted to
run time in highly complex and interconnected systems, as it is very hard to
consider all scenarios and their appropriate actions in advance. Consequently,
self-awareness becomes crucial for these adaptive autonomous systems. To cope
with evolving environments and changing user needs, a system needs to have
knowledge about itself and its surroundings. A literature review shows that, for
autonomous and intelligent systems, researchers are concerned with knowledge
acquisition, representation, and learning, which are necessary for a system to
adapt. This paper compares self-awareness and organic computing by
discussing their definitions, properties, and architecture. | [
1,
0,
0,
0,
0,
0
] |
Title: First Order Methods beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems,
Abstract: We focus on nonconvex and nonsmooth minimization problems with a composite
objective, where the differentiable part of the objective is freed from the
usual and restrictive global Lipschitz gradient continuity assumption. This
longstanding smoothness restriction is pervasive in first order methods (FOM),
and was recently circumvented for convex composite optimization by Bauschke,
Bolte and Teboulle, through a simple and elegant framework which captures, all
at once, the geometry of the function and of the feasible set. Building on this
work, we tackle genuine nonconvex problems. We first complement and extend
their approach to derive a full extended descent lemma by introducing the
notion of smooth adaptable functions. We then consider a Bregman-based proximal
gradient method for the nonconvex composite model with smooth adaptable
functions, which is proven to globally converge to a critical point under
natural assumptions on the problem's data. To illustrate the power and
potential of our general framework and results, we consider a broad class of
quadratic inverse problems with sparsity constraints which arises in many
fundamental applications, and we apply our approach to derive new globally
convergent schemes for this class. | [
1,
0,
1,
0,
0,
0
] |
Title: Robust Bayesian Optimization with Student-t Likelihood,
Abstract: Bayesian optimization has recently attracted the attention of the automatic
machine learning community for its excellent results in hyperparameter tuning.
BO is characterized by the sample efficiency with which it can optimize
expensive black-box functions. The efficiency is achieved in a similar fashion
to the learning to learn methods: surrogate models (typically in the form of
Gaussian processes) learn the target function and perform intelligent sampling.
This surrogate model can be applied even in the presence of noise; however, as
with most regression methods, it is very sensitive to outlier data. This can
result in erroneous predictions and, in the case of BO, biased and inefficient
exploration. In this work, we present a GP model that is robust to outliers,
using a Student-t likelihood to segregate outliers and robustly conduct
Bayesian optimization. We present numerical results evaluating the proposed
method in both artificial functions and real problems. | [
1,
0,
0,
1,
0,
0
] |
Title: Estimating Tactile Data for Adaptive Grasping of Novel Objects,
Abstract: We present an adaptive grasping method that finds stable grasps on novel
objects. The main contribution of this paper is the computation of the
probability of success of grasps in the vicinity of an already applied grasp.
Our method performs grasp adaptations by simulating tactile data for grasps in
the vicinity of the current grasp. The simulated data is used to evaluate
hypothetical grasps and thereby guide us toward better grasps. We demonstrate
the applicability of our method by constructing a system that can plan, apply
and adapt grasps on novel objects. Experiments are conducted on objects from
the YCB object set and the success rate of our method is 88%. Our experiments
show that the application of our grasp adaptation method improves grasp stability
significantly. | [
1,
0,
0,
0,
0,
0
] |
Title: Near-UV OH Prompt Emission in the Innermost Coma of 103P/Hartley 2,
Abstract: The Deep Impact spacecraft fly-by of comet 103P/Hartley 2 occurred on 2010
November 4, one week after perihelion with a closest approach (CA) distance of
about 700 km. We used narrowband images obtained by the Medium Resolution
Imager (MRI) onboard the spacecraft to study the gas and dust in the innermost
coma. We derived an overall dust reddening of 15%/100 nm between 345 and 749
nm and identified a blue enhancement in the dust coma in the sunward direction
within 5 km from the nucleus, which we interpret as a localized enrichment in
water ice. OH column density maps show an anti-sunward enhancement throughout
the encounter except for the highest resolution images, acquired at CA, where a
radial jet becomes visible in the innermost coma, extending up to 12 km from
the nucleus. The OH distribution in the inner coma is very different from that
expected for a fragment species. Instead, it correlates well with the water
vapor map derived by the HRI-IR instrument onboard Deep Impact
(A'Hearn et al. 2011). Radial profiles of the OH column density and derived water
production rates show an excess of OH emission during CA that cannot be
explained with pure fluorescence. We attribute this excess to a prompt emission
process where photodissociation of H$_2$O directly produces excited
OH*($A^2\it{\Sigma}^+$) radicals. Our observations provide the first direct
imaging of Near-UV prompt emission of OH. We therefore suggest the use of a
dedicated filter centered at 318.8 nm to directly trace the water in the coma
of comets. | [
0,
1,
0,
0,
0,
0
] |
Title: A gradient estimate for nonlocal minimal graphs,
Abstract: We consider the class of measurable functions defined in all of
$\mathbb{R}^n$ that give rise to a nonlocal minimal graph over a ball of
$\mathbb{R}^n$. We establish that the gradient of any such function is bounded
in the interior of the ball by a power of its oscillation. This estimate,
together with previously known results, leads to the $C^\infty$ regularity of
the function in the ball. While the smoothness of nonlocal minimal graphs was
known for $n = 1, 2$ (but without a quantitative bound), in higher dimensions
only their continuity had been established.
To prove the gradient bound, we show that the normal to a nonlocal minimal
graph is a supersolution of a truncated fractional Jacobi operator, for which
we prove a weak Harnack inequality. To this end, we establish a new universal
fractional Sobolev inequality on nonlocal minimal surfaces.
Our estimate provides an extension to the fractional setting of the
celebrated gradient bounds of Finn and of Bombieri, De Giorgi & Miranda for
solutions of the classical mean curvature equation. | [
0,
0,
1,
0,
0,
0
] |
Title: The GAPS Programme with HARPS-N@TNG XIV. Investigating giant planet migration history via improved eccentricity and mass determination for 231 transiting planets,
Abstract: We carried out a Bayesian homogeneous determination of the orbital parameters
of 231 transiting giant planets (TGPs) that are alone or have distant
companions; we employed DE-MCMC methods to analyse radial-velocity (RV) data
from the literature and 782 new high-accuracy RVs obtained with the HARPS-N
spectrograph for 45 systems over 3 years. Our work yields the largest sample of
systems with a transiting giant exoplanet and coherently determined orbital,
planetary, and stellar parameters. We found that the orbital parameters of TGPs
in non-compact planetary systems are clearly shaped by tides raised by their
host stars. Indeed, the most eccentric planets have relatively large orbital
separations and/or high mass ratios, as expected from the equilibrium tide
theory. This feature would be the outcome of high-eccentricity migration (HEM).
The distribution of $\alpha=a/a_R$, where $a$ and $a_R$ are the semi-major axis
and the Roche limit, for well-determined circular orbits peaks at 2.5; this
also agrees with expectations from the HEM. The few planets of our sample with
circular orbits and $\alpha >5$ values may have migrated through disc-planet
interactions instead of HEM. By comparing circularisation times with stellar
ages, we found that hot Jupiters with $a < 0.05$ au have modified tidal quality
factors $10^{5} < Q'_p < 10^{9}$, and that stellar $Q'_s > 10^{6}-10^{7}$ are
required to explain the presence of eccentric planets at the same orbital
distance. As a by-product of our analysis, we detected a non-zero eccentricity
for HAT-P-29; we determined that five planets previously regarded as having
hints of non-zero eccentricity have circular orbits or undetermined
eccentricities; we unveiled curvatures caused by distant companions in the RV
time series of HAT-P-2, HAT-P-22, and HAT-P-29; and we revised the planetary
parameters of CoRoT-1b. | [
0,
1,
0,
0,
0,
0
] |
Title: Reinforcement Learning using Augmented Neural Networks,
Abstract: Neural networks allow Q-learning reinforcement learning agents such as deep
Q-networks (DQN) to approximate complex mappings from state spaces to value
functions. However, this also brings drawbacks when compared to other function
approximators such as tile coding or their generalisations, radial basis
functions (RBF), because neural networks introduce instability due to the side
effect of their globalised updates. This instability does not even
vanish in neural networks that do not have any hidden layers. In this paper, we
show that simple modifications to the structure of the neural network can
improve stability of DQN learning when a multi-layer perceptron is used for
function approximation. | [
0,
0,
0,
1,
0,
0
] |
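For context on the alternative function approximators named in the abstract above, here is a small sketch of Gaussian radial-basis-function features for a state, which produce localized (rather than global) updates when used with a linear Q-function. The centers, bandwidth, and state dimension are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_features(state, centers, sigma=0.5):
    """Gaussian RBF feature vector: one localized activation per center."""
    state = np.asarray(state, dtype=float)
    d2 = np.sum((centers - state) ** 2, axis=1)   # squared distances to centers
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Illustrative 2-D state space with a 5x5 grid of RBF centers.
grid = np.linspace(0.0, 1.0, 5)
centers = np.array([[x, y] for x in grid for y in grid])

phi = rbf_features([0.2, 0.7], centers)
# With a linear Q-function Q(s, a) = w[a] @ phi(s), a TD update mostly changes
# weights attached to features active near the visited state, in contrast to
# the globalised updates of a fully connected network.
print(phi.round(3))
```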
Title: Perishability of Data: Dynamic Pricing under Varying-Coefficient Models,
Abstract: We consider a firm that sells a large number of products to its customers in
an online fashion. Each product is described by a high dimensional feature
vector, and the market value of a product is assumed to be linear in the values
of its features. Parameters of the valuation model are unknown and can change
over time. The firm sequentially observes a product's features and can use the
historical sales data (binary sale/no-sale feedback) to set the price of the
current product, with the objective of maximizing the collected revenue. We
measure the performance of a dynamic pricing policy via regret, which is the
expected revenue loss compared to a clairvoyant that knows the sequence of
model parameters in advance.
We propose a pricing policy based on projected stochastic gradient descent
(PSGD) and characterize its regret in terms of time $T$, features dimension
$d$, and the temporal variability in the model parameters, $\delta_t$. We
consider two settings. In the first one, feature vectors are chosen
antagonistically by nature and we prove that the regret of PSGD pricing policy
is of order $O(\sqrt{T} + \sum_{t=1}^T \sqrt{t}\delta_t)$. In the second
setting (referred to as stochastic features model), the feature vectors are
drawn independently from an unknown distribution. We show that in this case,
the regret of PSGD pricing policy is of order $O(d^2 \log T + \sum_{t=1}^T
t\delta_t/d)$. | [
1,
0,
0,
1,
0,
0
] |
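A schematic version of the projected-stochastic-gradient pricing loop described in the abstract above, assuming a logistic noise model for the market-value shocks, a unit-ball parameter set, and a small exploration perturbation on the posted price; step sizes, prices, and the feature distribution are illustrative assumptions rather than the paper's exact policy.

```python
import numpy as np

def project_to_ball(theta, radius=1.0):
    """Project the parameter estimate onto an L2 ball of the given radius."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(0)
d, T = 5, 2000
theta_true = project_to_ball(rng.normal(size=d))   # could also drift over time
theta_hat = np.zeros(d)

for t in range(1, T + 1):
    x = rng.normal(size=d) / np.sqrt(d)            # observed product features
    # Posted price: estimated value plus a small exploration term (assumption).
    price = float(theta_hat @ x) + 0.1 * rng.standard_normal()
    # Binary feedback: sale iff true value plus logistic noise exceeds the price.
    value = theta_true @ x + rng.logistic()
    y = 1.0 if value >= price else 0.0
    # Gradient of the negative log-likelihood under the logistic model,
    # with u = price - theta_hat @ x, so that P(sale) = 1 - F(u).
    u = price - theta_hat @ x
    grad = -(y * sigmoid(u) - (1.0 - y) * (1.0 - sigmoid(u))) * x
    theta_hat = project_to_ball(theta_hat - grad / np.sqrt(t))   # PSGD step

print("estimate:", np.round(theta_hat, 2), "truth:", np.round(theta_true, 2))
```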
Title: Fractional quantum Hall systems near nematicity: bimetric theory, composite fermions, and Dirac brackets,
Abstract: We perform a detailed comparison of the Dirac composite fermion and the
recently proposed bimetric theory for quantum Hall Jain states near half
filling. By tuning the composite Fermi liquid to the vicinity of a nematic
phase transition, we find that the two theories are equivalent to each other.
We verify that the single mode approximation for the response functions and the
static structure factor becomes reliable near the phase transition. We show
that the dispersion relation of the nematic mode near the phase transition can
be obtained from the Dirac brackets between the components of the nematic order
parameter. The dispersion is quadratic at low momenta and has a magnetoroton
minimum at a finite momentum, which is not related to any nearby inhomogeneous
phase. | [
0,
1,
0,
0,
0,
0
] |
Title: Recognizing Objects In-the-wild: Where Do We Stand?,
Abstract: The ability to recognize objects is an essential skill for a robotic system
acting in human-populated environments. Despite decades of effort from the
robotic and vision research communities, robots are still missing good visual
perceptual systems, preventing the use of autonomous agents for real-world
applications. The progress is slowed down by the lack of a testbed able to
accurately represent the world perceived by the robot in-the-wild. In order to
fill this gap, we introduce a large-scale, multi-view object dataset collected
with an RGB-D camera mounted on a mobile robot. The dataset embeds the
challenges faced by a robot in a real-life application and provides a useful
tool for validating object recognition algorithms. Besides describing the
characteristics of the dataset, the paper evaluates the performance of a
collection of well-established deep convolutional networks on the new dataset
and analyzes the transferability of deep representations from Web images to
robotic data. Despite the promising results obtained with such representations,
the experiments demonstrate that object classification with real-life robotic
data is far from being solved. Finally, we provide a comparative study to
analyze and highlight the open challenges in robot vision, explaining the
discrepancies in the performance. | [
1,
0,
0,
0,
0,
0
] |
Title: Generalizing Geometric Brownian Motion,
Abstract: To convert standard Brownian motion $Z$ into a positive process, Geometric
Brownian motion (GBM) $e^{\beta Z_t}, \beta >0$ is widely used. We generalize
this positive process by introducing an asymmetry parameter $ \alpha \geq 0$
which describes the instantaneous volatility whenever the process reaches a new
low. For our new process, $\beta$ is the instantaneous volatility as prices
become arbitrarily high. Our generalization preserves the positivity, constant
proportional drift, and tractability of GBM, while expressing the instantaneous
volatility as a randomly weighted $L^2$ mean of $\alpha$ and $\beta$. The
running minimum and relative drawup of this process are also analytically
tractable. Letting $\alpha = \beta$, our positive process reduces to Geometric
Brownian motion. By adding a jump to default to the new process, we introduce a
non-negative martingale with the same tractabilities. Assuming a security's
dynamics are driven by these processes in risk neutral measure, we price
several derivatives including vanilla, barrier and lookback options. | [
0,
0,
0,
0,
0,
1
] |
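For reference on the baseline construction the abstract above generalizes, the block below recalls standard geometric Brownian motion facts (textbook identities, not results from the paper): applying Itô's formula to $e^{\beta Z_t}$ shows it has constant proportional drift $\beta^2/2$ and instantaneous volatility $\beta$.

```latex
% Standard GBM solving dS_t = \mu S_t\,dt + \sigma S_t\,dW_t:
S_t = S_0 \exp\!\Big( \big(\mu - \tfrac{\sigma^2}{2}\big)\,t + \sigma W_t \Big).

% For the positive process e^{\beta Z_t} from the abstract, Ito's formula gives
d\big(e^{\beta Z_t}\big) = e^{\beta Z_t}\Big( \tfrac{\beta^2}{2}\,dt + \beta\,dZ_t \Big),
% i.e. constant proportional drift \beta^2/2 and instantaneous volatility \beta.
```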
Title: Speculation On a Source of Dark Matter,
Abstract: By drawing an analogy with superfluid 4He vortices we suggest that dark
matter may consist of irreducibly small remnants of cosmic strings. | [
0,
1,
0,
0,
0,
0
] |
Title: Response Formulae for $n$-point Correlations in Statistical Mechanical Systems and Application to a Problem of Coarse Graining,
Abstract: Predicting the response of a system to perturbations is a key challenge in
mathematical and natural sciences. Under suitable conditions on the nature of
the system, of the perturbation, and of the observables of interest, response
theories allow one to construct operators describing the smooth change of the
invariant measure of the system of interest as a function of the small
parameter controlling the intensity of the perturbation. In particular,
response theories can be developed both for stochastic and chaotic
deterministic dynamical systems, where in the latter case stricter conditions
imposing some degree of structural stability are required. In this paper we
extend previous findings and derive general response formulae describing how
n-point correlations are affected by perturbations to the vector flow. We also
show how to compute the response of the spectral properties of the system to
perturbations. We then apply our results to the seemingly unrelated problem of
coarse graining in multiscale systems: we find explicit formulae describing the
change in the terms describing parameterisation of the neglected degrees of
freedom resulting from applying perturbations to the full system. All the terms
envisioned by the Mori-Zwanzig theory (the deterministic, stochastic, and
non-Markovian terms) are affected at first order in the perturbation. The
obtained results provide a more comprehensive understanding of the response of
statistical mechanical systems to perturbations and contribute to the goal of
constructing accurate and robust parameterisations and are of potential
relevance for fields like molecular dynamics, condensed matter, and geophysical
fluid dynamics. We envision possible applications of our general results to the
study of the response of climate variability to anthropogenic and natural
forcing and to the study of the equivalence of thermostatted statistical
mechanical systems. | [
0,
1,
1,
0,
0,
0
] |
Title: Randomized Kernel Methods for Least-Squares Support Vector Machines,
Abstract: The least-squares support vector machine is a frequently used kernel method
for non-linear regression and classification tasks. Here we discuss several
approximation algorithms for the least-squares support vector machine
classifier. The proposed methods are based on randomized block kernel matrices,
and we show that they provide good accuracy and reliable scaling for
multi-class classification problems with relatively large data sets. Also, we
present several numerical experiments that illustrate the practical
applicability of the proposed methods. | [
1,
0,
0,
1,
0,
0
] |
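As background for the abstract above, here is a minimal sketch of one common least-squares SVM formulation in function-estimation form (build a kernel matrix, solve a single linear system). This is standard LS-SVM material, not the randomized block construction the paper proposes, and the RBF bandwidth and regularization values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-wise data sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=1.0, reg=10.0):
    """Solve the LS-SVM system  [[0, 1^T], [1, K + I/reg]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # bias b and dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

# Toy binary classification with +/-1 targets, decided by the sign of the output.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 2)) + np.repeat([[2, 2], [-2, -2]], 40, axis=0)
y = np.repeat([1.0, -1.0], 40)
b, alpha = lssvm_fit(X, y)
pred = np.sign(lssvm_predict(X, b, alpha, X))
print("training accuracy:", float(np.mean(pred == y)))
```

The cost of the dense solve is cubic in the number of samples, which is the scaling problem that randomized kernel approximations aim to relieve.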
Title: Short-time behavior of the heat kernel and Weyl's law on $RCD^*(K, N)$-spaces,
Abstract: In this paper, we prove pointwise convergence of heat kernels for
mGH-convergent sequences of $RCD^*(K,N)$-spaces. We obtain as a corollary
results on the short-time behavior of the heat kernel in $RCD^*(K,N)$-spaces.
We then use these results to initiate the study of Weyl's law in the $RCD$
setting. | [
0,
0,
1,
0,
0,
0
] |
Title: Football and Beer - a Social Media Analysis on Twitter in Context of the FIFA Football World Cup 2018,
Abstract: In many societies alcohol is a legal and common recreational substance and
socially accepted. Alcohol consumption often comes along with social events as
it helps people to increase their sociability and to overcome their
inhibitions. On the other hand we know that increased alcohol consumption can
lead to serious health issues, such as cancer, cardiovascular diseases and
diseases of the digestive system, to mention a few. This work examines alcohol
consumption during the FIFA Football World Cup 2018, particularly the usage of
alcohol related information on Twitter. For this we analyse the tweeting
behaviour and show that the tournament strongly increases the interest in beer.
Furthermore, we show that countries that had to leave the tournament at an early
stage might have done their fans some good, as the interest in beer
decreased again. | [
1,
0,
0,
0,
0,
0
] |
Title: Cross-stream migration of a surfactant-laden deformable droplet in a Poiseuille flow,
Abstract: The motion of a viscous deformable droplet suspended in an unbounded
Poiseuille flow in the presence of bulk-insoluble surfactants is studied
analytically. Assuming the convective transport of fluid and heat to be
negligible, we perform a small-deformation perturbation analysis to obtain the
droplet migration velocity. The droplet dynamics strongly depends on the
distribution of surfactants along the droplet interface, which is governed by
the relative strength of convective transport of surfactants as compared with
the diffusive transport of surfactants. The present study is focused on the
following two limits: (i) when the surfactant transport is dominated by surface
diffusion, and (ii) when the surfactant transport is dominated by surface
convection. In the first limiting case, it is seen that the axial velocity of
the droplet decreases with increase in the advection of the surfactants along
the surface. The variation of cross-stream migration velocity, on the other
hand, is analyzed over three different regimes based on the ratio of the
viscosity of the droplet phase to that of the carrier phase. In the first
regime the migration velocity decreases with increase in surface advection of
the surfactants although there is no change in direction of droplet migration.
For the second regime, the direction of the cross-stream migration of the
droplet changes depending on different parameters. In the third regime, the
migration velocity is hardly affected by any change in the surfactant
distribution. For the other limit of higher surface advection in comparison to
surface diffusion of the surfactants, the axial velocity of the droplet is
found to be independent of the surfactant distribution. However, the
cross-stream velocity is found to decrease with increase in non-uniformity in
surfactant distribution. | [
0,
1,
0,
0,
0,
0
] |
Title: PCA in Data-Dependent Noise (Correlated-PCA): Nearly Optimal Finite Sample Guarantees,
Abstract: We study Principal Component Analysis (PCA) in a setting where a part of the
corrupting noise is data-dependent and, as a result, the noise and the true
data are correlated. Under a boundedness assumption on the true data and the
noise, and a simple assumption on data-noise correlation, we obtain a nearly
optimal sample complexity bound for the most commonly used PCA solution,
singular value decomposition (SVD). This bound is a significant improvement
over the bound obtained by Vaswani and Guo in recent work (NIPS 2016) where
this "correlated-PCA" problem was first studied; and it holds under a
significantly weaker data-noise correlation assumption than the one used for
this earlier result. | [
1,
0,
0,
1,
0,
0
] |
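For reference, the PCA-via-SVD estimate analyzed in the abstract above can be computed as below; the rank, data shapes, and noise level are illustrative assumptions, and the data-dependent noise model itself is not reproduced here.

```python
import numpy as np

def pca_svd(Y, r):
    """Estimate the top-r principal subspace of the columns of Y via SVD."""
    # Y is d x n: each column is one (noisy) data vector.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r]                           # d x r orthonormal basis estimate

# Toy example: true rank-2 data plus small noise.
rng = np.random.default_rng(0)
d, n, r = 50, 500, 2
basis = np.linalg.qr(rng.normal(size=(d, r)))[0]
Y = basis @ rng.normal(size=(r, n)) + 0.05 * rng.normal(size=(d, n))
U_hat = pca_svd(Y, r)
# Subspace error: how far the estimate is from the true basis.
err = np.linalg.norm((np.eye(d) - U_hat @ U_hat.T) @ basis, 2)
print("subspace error:", round(float(err), 4))
```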
Title: SING: Symbol-to-Instrument Neural Generator,
Abstract: Recent progress in deep learning for audio synthesis opens the way to models
that directly produce the waveform, shifting away from the traditional paradigm
of relying on vocoders or MIDI synthesizers for speech or music generation.
Despite their successes, current state-of-the-art neural audio synthesizers
such as WaveNet and SampleRNN suffer from prohibitive training and inference
times because they are based on autoregressive models that generate audio
samples one at a time at a rate of 16kHz. In this work, we study the more
computationally efficient alternative of generating the waveform frame-by-frame
with large strides. We present SING, a lightweight neural audio synthesizer for
the original task of generating musical notes given desired instrument, pitch
and velocity. Our model is trained end-to-end to generate notes from nearly
1000 instruments with a single decoder, thanks to a new loss function that
minimizes the distances between the log spectrograms of the generated and
target waveforms. On the generalization task of synthesizing notes for pairs of
pitch and instrument not seen during training, SING produces audio with
significantly improved perceptual quality compared to a state-of-the-art
autoencoder based on WaveNet as measured by a Mean Opinion Score (MOS), and is
about 32 times faster for training and 2,500 times faster for inference. | [
1,
0,
0,
0,
0,
0
] |
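A rough sketch of a log-spectrogram distance in the spirit of the loss described in the abstract above, using SciPy's STFT; the FFT size, epsilon, and the use of an L1 distance are assumptions for illustration and may differ from the exact training loss.

```python
import numpy as np
from scipy.signal import stft

def log_spectrogram(x, fs=16000, nperseg=1024, eps=1e-8):
    """Log power spectrogram of a waveform."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return np.log(eps + np.abs(Z) ** 2)

def spectral_l1_loss(generated, target, fs=16000):
    """Mean absolute difference between log spectrograms of two waveforms."""
    return float(np.mean(np.abs(log_spectrogram(generated, fs) -
                                log_spectrogram(target, fs))))

# Toy example: compare a clean tone with a slightly detuned one.
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440.0 * t)
generated = np.sin(2 * np.pi * 445.0 * t)
print(spectral_l1_loss(generated, target, fs))
```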
Title: Neural IR Meets Graph Embedding: A Ranking Model for Product Search,
Abstract: Recently, neural models for information retrieval are becoming increasingly
popular. They provide effective approaches for product search due to their
competitive advantages in semantic matching. However, it is challenging to use
graph-based features, though they have proved very useful in the IR literature, in these
neural approaches. In this paper, we leverage the recent advances in graph
embedding techniques to enable neural retrieval models to exploit
graph-structured data for automatic feature extraction. The proposed approach
can not only help to overcome the long-tail problem of click-through data, but
also incorporate external heterogeneous information to improve search results.
Extensive experiments on a real-world e-commerce dataset demonstrate
significant improvement achieved by our proposed approach over multiple strong
baselines both as an individual retrieval model and as a feature used in
learning-to-rank frameworks. | [
1,
0,
0,
0,
0,
0
] |
Title: Lyapunov exponents for products of matrices,
Abstract: Let ${\bf M}=(M_1,\ldots, M_k)$ be a tuple of real $d\times d$ matrices.
Under certain irreducibility assumptions, we give checkable criteria for
deciding whether ${\bf M}$ possesses the following property: there exist two
constants $\lambda\in {\Bbb R}$ and $C>0$ such that for any $n\in {\Bbb N}$ and
any $i_1, \ldots, i_n \in \{1,\ldots, k\}$, either $M_{i_1} \cdots M_{i_n}={\bf
0}$ or $C^{-1} e^{\lambda n} \leq \| M_{i_1} \cdots M_{i_n} \| \leq C
e^{\lambda n}$, where $\|\cdot\|$ is a matrix norm. The proof is based on
symbolic dynamics and the thermodynamic formalism for matrix products. As
applications, we are able to check the absolute continuity of a class of
overlapping self-similar measures on ${\Bbb R}$, the absolute continuity of
certain self-affine measures in ${\Bbb R}^d$ and the dimensional regularity of
a class of sofic affine-invariant sets in the plane. | [
0,
0,
1,
0,
0,
0
] |
Title: Group-Server Queues,
Abstract: By analyzing energy-efficient management of data centers, this paper proposes
and develops a class of interesting {\it Group-Server Queues}, and establishes
two representative group-server queues through loss networks and impatient
customers, respectively. Furthermore, model descriptions and the necessary
interpretation are given for these two group-server queues. Also, a simple
mathematical discussion is provided, and simulations are carried out to study the
expected queue lengths, the expected sojourn times, and the expected virtual
service times. In addition, this paper also shows that this class of group-server
queues is often encountered in many other practical areas, including
communication networks, manufacturing systems, transportation networks,
financial networks, and healthcare systems. Note that group-server queues can be
used to design effective dynamic control mechanisms by regrouping and
recombining the many servers of a large-scale service system, for example
through bilateral threshold control and transfers of customers to the buffer or
server groups. This divides the large-scale service system
into several adaptive and self-organizing subsystems through the scheduling of
batch customers and the regrouping of service resources, which makes the middle
layer of this service system more effectively managed and strengthened under a
dynamic, real-time, and even reward-optimal framework. On this basis, the
performance of such a large-scale service system may be improved greatly by
introducing and analyzing such group-server queues. Therefore, not
only is the analysis of group-server queues regarded as an interesting new research
direction, but there also exist many theoretical challenges, basic
difficulties, and open problems in the area of queueing networks. | [
1,
0,
0,
0,
0,
0
] |
Title: MC$^2$: Multi-wavelength and dynamical analysis of the merging galaxy cluster ZwCl 0008.8+5215: An older and less massive Bullet Cluster,
Abstract: We analyze a rich dataset including Subaru/SuprimeCam, HST/ACS and WFC3,
Keck/DEIMOS, Chandra/ACIS-I, and JVLA/C and D array for the merging galaxy
cluster ZwCl 0008.8+5215. With a joint Subaru/HST weak gravitational lensing
analysis, we identify two dominant subclusters and estimate the masses to be
M$_{200}=\text{5.7}^{+\text{2.8}}_{-\text{1.8}}\times\text{10}^{\text{14}}\,\text{M}_{\odot}$
and 1.2$^{+\text{1.4}}_{-\text{0.6}}\times10^{14}$ M$_{\odot}$. We estimate the
projected separation between the two subclusters to be
924$^{+\text{243}}_{-\text{206}}$ kpc. We perform a clustering analysis on
confirmed cluster member galaxies and estimate the line of sight velocity
difference between the two subclusters to be 92$\pm$164 km s$^{-\text{1}}$. We
further motivate, discuss, and analyze the merger scenario through an analysis
of the 42 ks of Chandra/ACIS-I and JVLA/C and D polarization data. The X-ray
surface brightness profile reveals a remnant core reminiscent of the Bullet
Cluster. The X-ray luminosity in the 0.5-7.0 keV band is
1.7$\pm$0.1$\times$10$^{\text{44}}$ erg s$^{-\text{1}}$ and the X-ray
temperature is 4.90$\pm$0.13 keV. The radio relics are polarized up to 40%.
We implement a Monte Carlo dynamical analysis and estimate the merger velocity
at pericenter to be 1800$^{+\text{400}}_{-\text{300}}$ km s$^{-\text{1}}$. ZwCl
0008.8+5215 is a low-mass version of the Bullet Cluster and therefore may prove
useful in testing alternative models of dark matter. We do not find significant
offsets between dark matter and galaxies, as the uncertainties are large with
the current lensing data. Furthermore, in the east, the BCG is offset from
other luminous cluster galaxies, which poses a puzzle for defining dark matter
-- galaxy offsets. | [
0,
1,
0,
0,
0,
0
] |
Title: Bayesian adaptive bandit-based designs using the Gittins index for multi-armed trials with normally distributed endpoints,
Abstract: Adaptive designs for multi-armed clinical trials have become increasingly
popular recently in many areas of medical research because of their potential
to shorten development times and to increase patient response. However,
developing response-adaptive trial designs that offer patient benefit while
ensuring the resulting trial avoids bias and provides a statistically rigorous
comparison of the different treatments included is highly challenging. In this
paper, the theory of Multi-Armed Bandit Problems is used to define a family of
near optimal adaptive designs in the context of a clinical trial with a
normally distributed endpoint with known variance. Through simulation studies
based on an ongoing trial as a motivation we report the operating
characteristics (type I error, power, bias) and patient benefit of these
approaches and compare them to traditional and existing alternative designs.
These results are then compared to those recently published in the context of
Bernoulli endpoints. Many limitations and advantages are similar in both cases
but there are also important differences, especially with respect to type I
error control. This paper proposes a simulation-based testing procedure to
correct for the observed type I error inflation that bandit-based and adaptive
rules can induce. Results presented extend recent work by considering a
normally distributed endpoint, a very common case in clinical practice yet
mostly ignored in the response-adaptive theoretical literature, and illustrate
the potential advantages of using these methods in a rare disease context. We
also recommend a suitable modified implementation of the bandit-based adaptive
designs for the case of common diseases. | [
0,
0,
0,
1,
0,
0
] |
Title: Strong Convergence Rate of Splitting Schemes for Stochastic Nonlinear Schrödinger Equations,
Abstract: We prove the optimal strong convergence rate of a fully discrete scheme,
based on a splitting approach, for a stochastic nonlinear Schrödinger (NLS)
equation. The main novelty of our method lies in the uniform a priori estimate
and exponential integrability of a sequence of splitting processes which are
used to approximate the solution of the stochastic NLS equation. We show that
the splitting processes converge to the solution with strong order $1/2$. Then
we use the Crank--Nicolson scheme to temporally discretize the splitting
process and get the temporal splitting scheme which also possesses strong order
$1/2$. To obtain a full discretization, we apply this splitting Crank--Nicolson
scheme to the spatially discrete equation which is achieved through the
spectral Galerkin approximation. Furthermore, we establish the convergence of
this fully discrete scheme with optimal strong convergence rate
$\mathcal{O}(N^{-2}+\tau^\frac12)$, where $N$ denotes the dimension of the
approximate space and $\tau$ denotes the time step size. To the best of our
knowledge, this is the first result about strong convergence rates of
temporally numerical approximations and fully discrete schemes for stochastic
NLS equations, or even for stochastic partial differential equations (SPDEs)
with non-monotone coefficients. Numerical experiments verify our theoretical
result. | [
0,
0,
1,
0,
0,
0
] |
Title: Is Task Board Customization Beneficial? - An Eye Tracking Study,
Abstract: The task board is an essential artifact in many agile development approaches.
It provides a good overview of the project status. Teams often customize their
task boards according to the team members' needs. They modify the structure of
boards, define colored codings for different purposes, and introduce different
card sizes. Although the customizations are intended to improve the task
board's usability and effectiveness, they may also complicate its comprehension
and use. The increased effort impedes the work of both the team and team
externals. Hence, task board customization is in conflict with the agile
practice of fast and easy overview for everyone. In an eye tracking study with
30 participants, we compared an original task board design with three
customized ones to investigate which design shortened the required time to
identify a particular story card. Our findings show that only the customized
task board design with modified structures reduces the required time. The
original task board design is more beneficial than individual colored codings
and changed card sizes. According to our findings, agile teams should rethink
their current task board design. They may be better served by focusing on the
original task board design and by applying only carefully selected adjustments.
In case of customization, a task board's structure should be adjusted since
this is the only beneficial kind of customization, which additionally complies
more precisely with the concept of fast and easy project overview. | [
1,
0,
0,
0,
0,
0
] |
Title: Characterizing the impact of model error in hydrogeologic time series recovery inverse problems,
Abstract: Hydrogeologic models are commonly over-smoothed relative to reality, owing to
the difficulty of obtaining accurate high-resolution information about the
subsurface. When used in an inversion context, such models may introduce
systematic biases which cannot be encapsulated by an unbiased "observation
noise" term of the type assumed by standard regularization theory and typical
Bayesian formulations. Despite its importance, model error is difficult to
encapsulate systematically and is often neglected. Here, model error is
considered for a hydrogeologically important class of inverse problems that
includes interpretation of hydraulic transients and contaminant source history
inference: reconstruction of a time series that has been convolved against a
transfer function (i.e., impulse response) that is only approximately known.
Using established harmonic theory along with two results established here
regarding triangular Toeplitz matrices, upper and lower error bounds are
derived for the effect of systematic model error on time series recovery for
both well-determined and over-determined inverse problems. A Monte Carlo study
of a realistic hydraulic reconstruction problem is presented, and the lower
error bound is seen to be informative about expected behavior. A possible diagnostic
criterion for blind transfer function characterization is also uncovered. | [
0,
0,
1,
0,
0,
0
] |
Title: Suszko's Problem: Mixed Consequence and Compositionality,
Abstract: Suszko's problem is the problem of finding the minimal number of truth values
needed to semantically characterize a syntactic consequence relation. Suszko
proved that every Tarskian consequence relation can be characterized using only
two truth values. Malinowski showed that this number can equal three if some of
Tarski's structural constraints are relaxed. By so doing, Malinowski introduced
a case of so-called mixed consequence, allowing the notion of a designated
value to vary between the premises and the conclusions of an argument. In this
paper we give a more systematic perspective on Suszko's problem and on mixed
consequence. First, we prove general representation theorems relating
structural properties of a consequence relation to their semantic
interpretation, uncovering the semantic counterpart of substitution-invariance,
and establishing that (intersective) mixed consequence is fundamentally the
semantic counterpart of the structural property of monotonicity. We use those
to derive maximum-rank results proved recently in a different setting by French
and Ripley, as well as by Blasio, Marcos and Wansing, for logics with various
structural properties (reflexivity, transitivity, none, or both). We strengthen
these results into exact rank results for non-permeable logics (roughly, those
which distinguish the role of premises and conclusions). We discuss the
underlying notion of rank, and the associated reduction proposed independently
by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve
compositionality in general, meaning that the resulting semantics is no longer
truth-functional. We propose a modification of that notion of reduction,
allowing us to prove that over compact logics with what we call regular
connectives, rank results are maintained even if we request the preservation of
truth-functionality and additional semantic properties. | [
1,
0,
1,
0,
0,
0
] |
Title: Optimization of distributions differences for classification,
Abstract: In this paper we introduce a new classification algorithm called Optimization
of Distributions Differences (ODD). The algorithm aims to find a transformation
from the feature space to a new space where the instances in the same class are
as close as possible to one another while the gravity centers of these classes
are as far as possible from one another. This aim is formulated as a
multiobjective optimization problem that is solved by a hybrid of an
evolutionary strategy and the Quasi-Newton method. The choice of the
transformation function is flexible and could be any continuous space function.
We experiment with a linear and a non-linear transformation in this paper. We
show that the algorithm can outperform 6 other state-of-the-art classification
methods, namely naive Bayes, support vector machines, linear discriminant
analysis, multi-layer perceptrons, decision trees, and k-nearest neighbors, in
12 standard classification datasets. Our results show that the method is less
sensitive to an imbalanced number of instances compared to these methods. We
also show that ODD maintains its performance better than other classification
methods in these datasets and hence offers better generalization ability. | [
1,
0,
0,
1,
0,
0
] |
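A simplified sketch of the objective described in the abstract above for a linear transformation: pull same-class instances together while pushing class gravity centers apart. Here the two goals are combined into a single scale-invariant ratio, and a quasi-Newton optimizer (BFGS via SciPy) stands in for the hybrid evolutionary/quasi-Newton scheme of the paper; both choices are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def odd_objective(w_flat, X, y, out_dim, eps=1e-8):
    """Within-class spread divided by the spread of class gravity centers:
    a scale-invariant single-objective stand-in for the ODD criteria."""
    W = w_flat.reshape(X.shape[1], out_dim)
    Z = X @ W                                          # transformed instances
    classes = np.unique(y)
    centers = np.array([Z[y == c].mean(axis=0) for c in classes])
    within = sum(np.mean(np.sum((Z[y == c] - centers[i]) ** 2, axis=1))
                 for i, c in enumerate(classes))
    between = sum(np.sum((centers[i] - centers[j]) ** 2)
                  for i in range(len(classes)) for j in range(i + 1, len(classes)))
    return within / (between + eps)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0, 0.0], 1.0, (50, 3)),
               rng.normal([2.0, 2.0, 2.0], 1.0, (50, 3))])
y = np.repeat([0, 1], 50)
out_dim = 2
res = minimize(odd_objective, rng.normal(size=3 * out_dim),
               args=(X, y, out_dim), method="BFGS")
print("objective value:", round(float(res.fun), 4))
```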
Title: Synthesis, Crystal Structure, and Physical Properties of New Layered Oxychalcogenide La2O2Bi3AgS6,
Abstract: We have synthesized a new layered oxychalcogenide La2O2Bi3AgS6. From
synchrotron X-ray diffraction and Rietveld refinement, the crystal structure of
La2O2Bi3AgS6 was refined using a model of the P4/nmm space group with a =
4.0644(1) Å and c = 19.412(1) Å, which is similar to the related
compound LaOBiPbS3, while the interlayer bonds (M2-S1 bonds) are apparently
shorter in La2O2Bi3AgS6. The transmission electron microscopy (TEM) image
confirmed the lattice constant derived from Rietveld refinement (c ~ 20 Å).
The electrical resistivity and Seebeck coefficient suggested that the
electronic states of La2O2Bi3AgS6 are more metallic than those of LaOBiS2 and
LaOBiPbS3. The insertion of a rock-salt-type chalcogenide into the van der
Waals gap of BiS2-based layered compounds, such as LaOBiS2, will be a useful
strategy for designing new layered functional materials in the layered
chalcogenide family. | [
0,
1,
0,
0,
0,
0
] |
Title: Simulation to scaled city: zero-shot policy transfer for traffic control via autonomous vehicles,
Abstract: Using deep reinforcement learning, we train control policies for autonomous
vehicles leading a platoon of vehicles onto a roundabout. Using Flow, a library
for deep reinforcement learning in micro-simulators, we train two policies, one
policy with noise injected into the state and action space and one without any
injected noise. In simulation, the autonomous vehicle learns an emergent
metering behavior for both policies in which it slows to allow for smoother
merging. We then directly transfer this policy without any tuning to the
University of Delaware Scaled Smart City (UDSSC), a 1:25 scale testbed for
connected and automated vehicles. We characterize the performance of both
policies on the scaled city. We show that the noise-free policy winds up
crashing and only occasionally metering. However, the noise-injected policy
consistently performs the metering behavior and remains collision-free,
suggesting that the noise helps with the zero-shot policy transfer.
Additionally, the transferred, noise-injected policy leads to a 5% reduction of
average travel time and a reduction of 22% in maximum travel time in the UDSSC.
Videos of the controllers can be found at
this https URL. | [
1,
0,
0,
0,
0,
0
] |
Title: Relative Singularity Categories,
Abstract: We study the following generalization of singularity categories. Let X be a
quasi-projective Gorenstein scheme with isolated singularities and A a
non-commutative resolution of singularities of X in the sense of Van den Bergh.
We introduce the relative singularity category as the Verdier quotient of the
bounded derived category of coherent sheaves on A modulo the category of
perfect complexes on X. We view it as a measure for the difference between X
and A. The main results of this thesis are the following.
(i) We prove an analogue of Orlov's localization result in our setup. If X
has isolated singularities, then this reduces the study of the relative
singularity categories to the affine case.
(ii) We prove Hom-finiteness and idempotent completeness of the relative
singularity categories in the complete local situation and determine its
Grothendieck group.
(iii) We give a complete and explicit description of the relative singularity
categories when X has only nodal singularities and the resolution is given by a
sheaf of Auslander algebras.
(iv) We study relations between relative singularity categories and classical
singularity categories. For a simple hypersurface singularity and its Auslander
resolution, we show that these categories determine each other.
(v) The developed technique leads to the following `purely commutative'
application: a description of Iyama & Wemyss triangulated category for rational
surface singularities in terms of the singularity category of the rational
double point resolution.
(vi) We give a description of singularity categories of gentle algebras. | [
0,
0,
1,
0,
0,
0
] |
Title: Arimoto-Rényi Conditional Entropy and Bayesian $M$-ary Hypothesis Testing,
Abstract: This paper gives upper and lower bounds on the minimum error probability of
Bayesian $M$-ary hypothesis testing in terms of the Arimoto-Rényi conditional
entropy of an arbitrary order $\alpha$. The improved tightness of these bounds
over their specialized versions with the Shannon conditional entropy
($\alpha=1$) is demonstrated. In particular, in the case where $M$ is finite,
we show how to generalize Fano's inequality under both the conventional and
list-decision settings. As a counterpart to the generalized Fano's inequality,
allowing $M$ to be infinite, a lower bound on the Arimoto-Rényi conditional
entropy is derived as a function of the minimum error probability. Explicit
upper and lower bounds on the minimum error probability are obtained as a
function of the Arimoto-Rényi conditional entropy for both positive and
negative $\alpha$. Furthermore, we give upper bounds on the minimum error
probability as functions of the Rényi divergence. In the setup of discrete
memoryless channels, we analyze the exponentially vanishing decay of the
Arimoto-Rényi conditional entropy of the transmitted codeword given the
channel output when averaged over a random coding ensemble. | [
1,
0,
1,
1,
0,
0
] |
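For readers unfamiliar with the quantity in the abstract above, the block below recalls the standard definition of the Arimoto-Rényi conditional entropy and its order-infinity link to the minimum error probability; these are standard identities quoted for context, with notation assumed rather than copied from the paper.

```latex
% Arimoto-Renyi conditional entropy of order \alpha \in (0,1)\cup(1,\infty):
H_\alpha(X \mid Y) \;=\; \frac{\alpha}{1-\alpha}\,
  \log \sum_{y} P_Y(y) \Big( \sum_{x} P_{X\mid Y}(x\mid y)^{\alpha} \Big)^{1/\alpha}.

% As \alpha \to \infty it recovers the minimum error probability
% \varepsilon_{X|Y} of Bayesian M-ary hypothesis testing:
H_\infty(X \mid Y) \;=\; -\log \sum_{y} P_Y(y)\, \max_{x} P_{X\mid Y}(x\mid y)
                    \;=\; -\log\big(1-\varepsilon_{X\mid Y}\big).
```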
Title: Real-Time Model Predictive Control for Energy Management in Autonomous Underwater Vehicle,
Abstract: Improving endurance is crucial for extending the spatial and temporal
operation range of autonomous underwater vehicles (AUVs). Considering the
hardware constraints and the performance requirements, an intelligent energy
management system is required to extend the operation range of AUVs. This paper
presents a novel model predictive control (MPC) framework for energy-optimal
point-to-point motion control of an AUV. In this scheme, the energy management
problem of an AUV is reformulated as a surge motion optimization problem in two
stages. First, a system-level energy minimization problem is solved by managing
the trade-off between the energies required for overcoming the positive
buoyancy and surge drag force in static optimization. Next, an MPC with a
special cost function formulation is proposed to deal with transients and
system dynamics. A switching logic for handling the transition between the
static and dynamic stages is incorporated to reduce the computational efforts.
Simulation results show that the proposed method is able to achieve
near-optimal energy consumption with considerably lower computational
complexity. | [
1,
0,
0,
0,
0,
0
] |
Title: A modal typing system for self-referential programs and specifications,
Abstract: This paper proposes a modal typing system that enables us to handle
self-referential formulae, including ones with negative self-references, which
on one hand, would introduce a logical contradiction, namely Russell's paradox,
in the conventional setting, while on the other hand, are necessary to capture
a certain class of programs such as fixed-point combinators and objects with
so-called binary methods in object-oriented programming. The proposed system
provides a basis for axiomatic semantics of such a wider range of programs and
a new framework for natural construction of recursive programs in the
proofs-as-programs paradigm. | [
1,
0,
0,
0,
0,
0
] |
Title: Stacking and stability,
Abstract: Stacking is a general approach for combining multiple models toward greater
predictive accuracy. It has found various applications across different domains,
owing to its meta-learning nature. Our understanding, nevertheless, of how
and why stacking works remains intuitive and lacking in theoretical insight. In
this paper, we use the stability of learning algorithms as an elemental
analysis framework suitable for addressing the issue. To this end, we analyze
the hypothesis stability of stacking, bag-stacking, and dag-stacking and
establish a connection between bag-stacking and weighted bagging. We show that
the hypothesis stability of stacking is a product of the hypothesis stability
of each of the base models and the combiner. Moreover, in bag-stacking and
dag-stacking, the hypothesis stability depends on the sampling strategy used to
generate the training set replicates. Our findings suggest that 1) subsampling
and bootstrap sampling improve the stability of stacking, and 2) stacking
improves the stability of both subbagging and bagging. | [
1,
0,
0,
1,
0,
0
] |
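
As a concrete point of reference for the terms used above, a minimal scikit-learn sketch of plain stacking and of "bag-stacking" (each base learner wrapped in a bagging ensemble); the dataset and model choices are placeholders, not those analyzed in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base = [("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("svm", SVC(probability=True, random_state=0))]

# plain stacking: base models plus a combiner trained on their out-of-fold predictions
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())

# "bag-stacking": each base model is itself bagged over bootstrap replicates
bagged = [(name, BaggingClassifier(est, n_estimators=10, random_state=0))
          for name, est in base]
bag_stack = StackingClassifier(estimators=bagged, final_estimator=LogisticRegression())

for name, clf in [("stacking", stack), ("bag-stacking", bag_stack)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```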
Title: Accelerated Consensus via Min-Sum Splitting,
Abstract: We apply the Min-Sum message-passing protocol to solve the consensus problem
in distributed optimization. We show that while the ordinary Min-Sum algorithm
does not converge, a modified version of it known as Splitting yields
convergence to the problem solution. We prove that a proper choice of the
tuning parameters allows Min-Sum Splitting to yield subdiffusive accelerated
convergence rates, matching the rates obtained by shift-register methods. The
acceleration scheme embodied by Min-Sum Splitting for the consensus problem
bears similarities with lifted Markov chains techniques and with multi-step
first order methods in convex optimization. | [
0,
0,
1,
0,
0,
0
] |
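
The Min-Sum Splitting protocol itself is defined in the paper; for orientation, the sketch below contrasts plain linear average consensus on a cycle graph with a simple heavy-ball (shift-register style) accelerated variant, i.e., the baseline problem and the flavour of acceleration the abstract refers to, not the authors' algorithm.

```python
import numpy as np

n = 50
# doubly stochastic ("lazy" Metropolis) weights on a cycle graph
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

rng = np.random.default_rng(0)
x0 = rng.random(n)
target = x0.mean()                                   # consensus value

def plain(x0, iters=2000):
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return np.abs(x - target).max()

def heavy_ball(x0, iters=2000, beta=0.9):
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x, x_prev = W @ x + beta * (x - x_prev), x   # two-step (shift-register) memory
    return np.abs(x - target).max()

print("plain consensus error:       ", plain(x0))
print("accelerated consensus error: ", heavy_ball(x0))
```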
Title: Constraining the Milky Way assembly history with Galactic Archaeology. Ludwig Biermann Award Lecture 2015,
Abstract: The aim of Galactic Archaeology is to recover the evolutionary history of the
Milky Way from its present day kinematical and chemical state. Because stars
move away from their birth sites, the current dynamical information alone is
not sufficient for this task. The chemical composition of stellar atmospheres,
on the other hand, is largely preserved over the stellar lifetime and, together
with accurate ages, can be used to recover the birthplaces of stars currently
found at the same Galactic radius. In addition to the availability of large
stellar samples with accurate 6D kinematics and chemical abundance
measurements, this requires detailed modeling with both dynamical and chemical
evolution taken into account. An important first step is to understand the
variety of dynamical processes that can take place in the Milky Way, including
the perturbative effects of both internal (bar and spiral structure) and
external (infalling satellites) agents. We discuss here (1) how to constrain
the Galactic bar, spiral structure, and merging satellites by their effect on
the local and global disc phase-space, (2) the effect of multiple patterns on
the disc dynamics, and (3) the importance of radial migration and merger
perturbations for the formation of the Galactic thick disc. Finally, we discuss
the construction of Milky Way chemo-dynamical models and relate to
observations. | [
0,
1,
0,
0,
0,
0
] |
Title: A unified view of entropy-regularized Markov decision processes,
Abstract: We propose a general framework for entropy-regularized average-reward
reinforcement learning in Markov decision processes (MDPs). Our approach is
based on extending the linear-programming formulation of policy optimization in
MDPs to accommodate convex regularization functions. Our key result is showing
that using the conditional entropy of the joint state-action distributions as
regularization yields a dual optimization problem closely resembling the
Bellman optimality equations. This result enables us to formalize a number of
state-of-the-art entropy-regularized reinforcement learning algorithms as
approximate variants of Mirror Descent or Dual Averaging, and thus to argue
about the convergence properties of these methods. In particular, we show that
the exact version of the TRPO algorithm of Schulman et al. (2015) actually
converges to the optimal policy, while the entropy-regularized policy gradient
methods of Mnih et al. (2016) may fail to converge to a fixed point. Finally,
we illustrate empirically the effects of using various regularization
techniques on learning performance in a simple reinforcement learning setup. | [
1,
0,
0,
1,
0,
0
] |
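
A toy illustration of the kind of entropy-regularized Bellman backup such frameworks build on: soft value iteration with a log-sum-exp backup on a random discounted MDP. Note the paper works in the average-reward, linear-programming setting, which this sketch does not implement.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
nS, nA, gamma, tau = 6, 3, 0.95, 0.1          # states, actions, discount, temperature

P = rng.random((nS, nA, nS))
P /= P.sum(axis=2, keepdims=True)             # transition kernel P(s'|s,a)
R = rng.random((nS, nA))                      # rewards r(s,a)

V = np.zeros(nS)
for _ in range(2000):
    Q = R + gamma * P @ V                     # state-action values, shape (nS, nA)
    V_new = tau * logsumexp(Q / tau, axis=1)  # soft (entropy-regularized) backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

pi = np.exp((Q - V[:, None]) / tau)           # Boltzmann (soft-greedy) policy
pi /= pi.sum(axis=1, keepdims=True)
print("soft value function:", np.round(V, 3))
print("policy at state 0:  ", np.round(pi[0], 3))
```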
Title: Arithmetic Circuits for Multilevel Qudits Based on Quantum Fourier Transform,
Abstract: We present some basic integer arithmetic quantum circuits, such as adders and
multipliers-accumulators of various forms, as well as diagonal operators, which
operate on multilevel qudits. The integers to be processed are represented in
an alternative basis after they have been Fourier transformed. Several
arithmetic circuits operating on Fourier transformed integers have appeared in
the literature for two level qubits. Here we extend these techniques on
multilevel qudits, as they may offer some advantages relative to qubits
implementations. The arithmetic circuits presented can be used as basic
building blocks for higher level algorithms such as quantum phase estimation,
quantum simulation, quantum optimization etc., but they can also be used in the
implementation of a quantum fractional Fourier transform as it is shown in a
companion work presented separately. | [
1,
0,
0,
0,
0,
0
] |
Title: Strong and broadly tunable plasmon resonances in thick films of aligned carbon nanotubes,
Abstract: Low-dimensional plasmonic materials can function as high quality terahertz
and infrared antennas at deep subwavelength scales. Despite these antennas'
strong coupling to electromagnetic fields, there is a pressing need to further
strengthen their absorption. We address this problem by fabricating thick films
of aligned, uniformly sized carbon nanotubes and showing that their plasmon
resonances are strong, narrow, and broadly tunable. With thicknesses ranging
from 25 to 250 nm, our films exhibit peak attenuation reaching 70%, quality
factors reaching 9, and electrostatically tunable peak frequencies by a factor
of 2.3x. Excellent nanotube alignment leads to the attenuation being 99%
linearly polarized along the nanotube axis. Increasing the film thickness
blueshifts the plasmon resonators down to peak wavelengths as low as 1.4
micrometers, promoting them to a new near-infrared regime in which they can
both overlap the S11 nanotube exciton energy and access the technologically
important infrared telecom band. | [
0,
1,
0,
0,
0,
0
] |
Title: Computation of annular capacity by Hamiltonian Floer theory of non-contractible periodic trajectories,
Abstract: The first author introduced a relative symplectic capacity $C$ for a
symplectic manifold $(N,\omega_N)$ and its subset $X$ which measures the
existence of non-contractible periodic trajectories of Hamiltonian isotopies on
the product of $N$ with the annulus $A_R=(-R,R)\times\mathbb{R}/\mathbb{Z}$. In
the present paper, we give an exact computation of the capacity $C$ of the
$2n$-torus $\mathbb{T}^{2n}$ relative to a Lagrangian submanifold
$\mathbb{T}^n$ which implies the existence of non-contractible Hamiltonian
periodic trajectories on $A_R\times\mathbb{T}^{2n}$. Moreover, we give a lower
bound on the number of such trajectories. | [
0,
0,
1,
0,
0,
0
] |
Title: Optical Angular Momentum in Classical Electrodynamics,
Abstract: Invoking Maxwell's classical equations in conjunction with expressions for
the electromagnetic (EM) energy, momentum, force, and torque, we use a few
simple examples to demonstrate the nature of the EM angular momentum. The
energy and the angular momentum of an EM field will be shown to have an
intimate relationship; a source radiating EM angular momentum will, of
necessity, pick up an equal but opposite amount of mechanical angular momentum;
and the spin and orbital angular momenta of the EM field, when absorbed by a
small particle, will be seen to elicit different responses from the particle. | [
0,
1,
0,
0,
0,
0
] |
Title: Efficient variational Bayesian neural network ensembles for outlier detection,
Abstract: In this work we perform outlier detection using ensembles of neural networks
obtained by variational approximation of the posterior in a Bayesian neural
network setting. The variational parameters are obtained by sampling from the
true posterior by gradient descent. We show our outlier detection results are
comparable to those obtained using other efficient ensembling methods. | [
1,
0,
0,
1,
0,
0
] |
Title: Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval,
Abstract: This paper addresses the problem of large scale image retrieval, with the aim
of accurately ranking the similarity of a large number of images to a given
query image. To achieve this, we propose a novel Siamese network. This network
consists of two computational strands, each comprising of a CNN component
followed by a Fisher vector component. The CNN component produces dense, deep
convolutional descriptors that are then aggregated by the Fisher Vector method.
Crucially, we propose to simultaneously learn both the CNN filter weights and
Fisher Vector model parameters. This allows us to account for the evolving
distribution of deep descriptors over the course of the learning process. We
show that the proposed approach gives significant improvements over the
state-of-the-art methods on the Oxford and Paris image retrieval datasets.
Additionally, we provide a baseline performance measure for both these datasets
with the inclusion of 1 million distractors. | [
1,
0,
0,
0,
0,
0
] |
Title: Gender Disparities in Science? Dropout, Productivity, Collaborations and Success of Male and Female Computer Scientists,
Abstract: Scientific collaborations shape ideas as well as innovations and are both the
substrate for, and the outcome of, academic careers. Recent studies show that
gender inequality is still present in many scientific practices ranging from
hiring to peer-review processes and grant applications. In this work, we
investigate gender-specific differences in collaboration patterns of more than
one million computer scientists over the course of 47 years. We explore how
these patterns change over years and career ages and how they impact scientific
success. Our results highlight that successful male and female scientists
reveal the same collaboration patterns: compared to scientists in the same
career age, they tend to collaborate with more colleagues than other
scientists, seek innovations as brokers and establish longer-lasting and more
repetitive collaborations. However, women are on average less likely to adopt
the collaboration patterns that are related to success, more likely to embed
into ego networks devoid of structural holes, and they exhibit stronger gender
homophily as well as a consistently higher dropout rate than men in all career
ages. | [
1,
1,
0,
0,
0,
0
] |
Title: Estimating a network from multiple noisy realizations,
Abstract: Complex interactions between entities are often represented as edges in a
network. In practice, the network is often constructed from noisy measurements
and inevitably contains some errors. In this paper we consider the problem of
estimating a network from multiple noisy observations where edges of the
original network are recorded with both false positives and false negatives.
This problem is motivated by neuroimaging applications where brain networks of
a group of patients with a particular brain condition could be viewed as noisy
versions of an unobserved true network corresponding to the disease. The key to
optimally leveraging these multiple observations is to take advantage of
network structure, and here we focus on the case where the true network
contains communities. Communities are common in real networks in general and in
particular are believed to be presented in brain networks. Under a community
structure assumption on the truth, we derive an efficient method to estimate
the noise levels and the original network, with theoretical guarantees on the
convergence of our estimates. We show on synthetic networks that the
performance of our method is close to an oracle method using the true parameter
values, and apply our method to fMRI brain data, demonstrating that it
constructs stable and plausible estimates of the population network. | [
0,
0,
1,
1,
0,
0
] |
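
The paper's estimator exploits community structure and estimates the noise levels; the sketch below only sets up the problem on a synthetic two-community network and applies the naive baseline of a majority vote across the noisy realizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 60, 10                                 # nodes, number of noisy observations

# ground-truth network: two communities (stochastic block model)
z = np.repeat([0, 1], n // 2)
prob = np.where(z[:, None] == z[None, :], 0.3, 0.05)
A_true = (rng.random((n, n)) < prob).astype(int)
A_true = np.triu(A_true, 1)
A_true += A_true.T

# K noisy realizations with false positives and false negatives
fp, fn = 0.05, 0.2
obs = []
for _ in range(K):
    noise = rng.random((n, n))
    A = np.where(A_true == 1, noise > fn, noise < fp).astype(int)
    A = np.triu(A, 1)
    A += A.T
    obs.append(A)

# naive estimate: majority vote across realizations
A_hat = (np.mean(obs, axis=0) > 0.5).astype(int)
err = np.abs(A_hat - A_true)[np.triu_indices(n, 1)].mean()
print("edge-wise error of the majority-vote estimate:", err)
```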
Title: Experimental investigations on nucleation, bubble growth, and micro-explosion characteristics during the combustion of ethanol/Jet A-1 fuel droplets,
Abstract: The combustion characteristics of ethanol/Jet A-1 fuel droplets having three
different proportions of ethanol (10%, 30%, and 50% by vol.) are investigated
in the present study. The large volatility differential between ethanol and Jet
A-1 and the nominal immiscibility of the fuels seem to result in combustion
characteristics that are rather different from our previous work on butanol/Jet
A-1 droplets (miscible blends). Abrupt explosion was facilitated in fuel
droplets comprising lower proportions of ethanol (10%), possibly due to
insufficient nucleation sites inside the droplet and the partially unmixed fuel
mixture. For the fuel droplets containing higher proportions of ethanol (30%
and 50%), micro-explosion occurred through homogeneous nucleation, leading to
the ejection of secondary droplets and subsequent significant reduction in the
overall droplet lifetime. The rate of bubble growth is nearly similar in all
the blends of ethanol; however, the evolution of ethanol vapor bubble is
significantly faster than that of a vapor bubble in the blends of butanol. The
probability of disruptive behavior is considerably higher in ethanol/Jet A-1
blends than that of butanol/Jet A-1 blends. The Sauter mean diameter of the
secondary droplets produced from micro-explosion is larger for blends with a
higher proportion of ethanol. Both abrupt explosion and micro-explosion create
a large-scale distortion of the flame, which surrounds the parent droplet. The
secondary droplets generated from abrupt explosion undergo rapid evaporation
whereas the secondary droplets from micro-explosion carry their individual
flame and evaporate slowly. The growth of vapor bubble was also witnessed in
the secondary droplets, which leads to the further breakup of the droplet
(puffing/micro-explosion). | [
0,
1,
0,
0,
0,
0
] |
Title: How Generative Adversarial Networks and Their Variants Work: An Overview,
Abstract: Generative Adversarial Networks (GAN) have received wide attention in the
machine learning field for their potential to learn high-dimensional, complex
real data distributions. Specifically, they do not rely on any assumptions about
the distribution and can generate real-like samples from latent space in a
simple manner. This powerful property leads GAN to be applied to various
applications such as image synthesis, image attribute editing, image
translation, domain adaptation and other academic fields. In this paper, we aim
to discuss the details of GAN for those readers who are familiar with, but do
not comprehend GAN deeply or who wish to view GAN from various perspectives. In
addition, we explain how GAN operates and the fundamental meaning of various
objective functions that have been suggested recently. We then focus on how the
GAN can be combined with an autoencoder framework. Finally, we enumerate the
GAN variants that are applied to various tasks and other fields for those who
are interested in exploiting GAN for their research. | [
1,
0,
0,
0,
0,
0
] |
Title: Principal Boundary on Riemannian Manifolds,
Abstract: We revisit the classification problem and focus on nonlinear methods for
classification on manifolds. For multivariate datasets lying on an embedded
nonlinear Riemannian manifold within the higher-dimensional space, our aim is
to acquire a classification boundary between the classes with labels. Motivated
by the principal flow [Panaretos, Pham and Yao, 2014], a curve that moves along
a path of the maximum variation of the data, we introduce the principal
boundary. From the classification perspective, the principal boundary is
defined as an optimal curve that moves in between the principal flows traced
out from two classes of the data, and at any point on the boundary, it
maximizes the margin between the two classes. We estimate the boundary in
quality with its direction supervised by the two principal flows. We show that
the principal boundary yields the usual decision boundary found by the support
vector machine, in the sense that locally, the two boundaries coincide. By
means of examples, we illustrate how to find, use and interpret the principal
boundary. | [
1,
0,
0,
1,
0,
0
] |
Title: Numerical investigations of non-uniqueness for the Navier-Stokes initial value problem in borderline spaces,
Abstract: We consider the Cauchy problem for the incompressible Navier-Stokes equations
in $\mathbb{R}^3$ for a one-parameter family of explicit scale-invariant
axi-symmetric initial data, which is smooth away from the origin and invariant
under the reflection with respect to the $xy$-plane. Working in the class of
axi-symmetric fields, we calculate numerically scale-invariant solutions of the
Cauchy problem in terms of their profile functions, which are smooth. The
solutions are necessarily unique for small data, but for large data we observe
a breaking of the reflection symmetry of the initial data through a
pitchfork-type bifurcation. By a variation of previous results by Jia &
Šverák (2013) it is known rigorously that if the behavior seen here
numerically can be proved, optimal non-uniqueness examples for the Cauchy
problem can be established, and two different solutions can exist for the same
initial datum which is divergence-free, smooth away from the origin, compactly
supported, and locally $(-1)$-homogeneous near the origin. In particular,
assuming our (finite-dimensional) numerics represents faithfully the behavior
of the full (infinite-dimensional) system, the problem of uniqueness of the
Leray-Hopf solutions (with non-smooth initial data) has a negative answer and,
in addition, the perturbative arguments such those by Kato (1984) and Koch &
Tataru (2001), or the weak-strong uniqueness results by Leray, Prodi, Serrin,
Ladyzhenskaya and others, already give essentially optimal results. There are
no singularities involved in the numerics, as we work only with smooth profile
functions. It is conceivable that our calculations could be upgraded to a
computer-assisted proof, although this would involve a substantial amount of
additional work and calculations, including a much more detailed analysis of
the asymptotic expansions of the solutions at large distances. | [
0,
1,
1,
0,
0,
0
] |
Title: Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics,
Abstract: Inspired by the success of deep learning techniques in the physical and
chemical sciences, we apply a modification of an autoencoder type deep neural
network to the task of dimension reduction of molecular dynamics data. We can
show that our time-lagged autoencoder reliably finds low-dimensional embeddings
for high-dimensional feature spaces which capture the slow dynamics of the
underlying stochastic processes - beyond the capabilities of linear dimension
reduction techniques. | [
1,
1,
0,
1,
0,
0
] |
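
A minimal PyTorch sketch of the time-lagged autoencoder idea on a made-up two-dimensional trajectory with one slow coordinate; the network sizes, lag, and data are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy trajectory: a slow 1-D process embedded in 2-D with fast noise
T, lag = 5000, 10
slow = torch.cumsum(0.01 * torch.randn(T), dim=0)
x = torch.stack([slow + 0.1 * torch.randn(T),
                 -slow + 0.1 * torch.randn(T)], dim=1)

enc = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
dec = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 2))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x_t, x_lag = x[:-lag], x[lag:]             # reconstruct the time-lagged frame
for epoch in range(200):
    opt.zero_grad()
    loss = ((dec(enc(x_t)) - x_lag) ** 2).mean()
    loss.backward()
    opt.step()

print("final time-lagged reconstruction loss:", float(loss))
print("latent coordinate of the first 3 frames:", enc(x[:3]).detach().squeeze())
```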
Title: Some integrable maps and their Hirota bilinear forms,
Abstract: We introduce a two-parameter family of birational maps, which reduces to a
family previously found by Demskoi, Tran, van der Kamp and Quispel (DTKQ) when
one of the parameters is set to zero. The study of the singularity confinement
pattern for these maps leads to the introduction of a tau function satisfying a
homogeneous recurrence which has the Laurent property, and the tropical (or
ultradiscrete) analogue of this homogeneous recurrence confirms the quadratic
degree growth found empirically by Demskoi et al. We prove that the tau
function also satisfies two different bilinear equations, each of which is a
reduction of the Hirota-Miwa equation (also known as the discrete KP equation,
or the octahedron recurrence). Furthermore, these bilinear equations are
related to reductions of particular two-dimensional integrable lattice
equations, of discrete KdV or discrete Toda type. These connections, as well as
the cluster algebra structure of the bilinear equations, allow a direct
construction of Poisson brackets, Lax pairs and first integrals for the
birational maps. As a consequence of the latter results, we show how each
member of the family can be lifted to a system that is integrable in the
Liouville sense, clarifying observations made previously in the original DTKQ
case. | [
0,
1,
0,
0,
0,
0
] |
Title: Spectral analysis of stationary random bivariate signals,
Abstract: A novel approach towards the spectral analysis of stationary random bivariate
signals is proposed. Using the Quaternion Fourier Transform, we introduce a
quaternion-valued spectral representation of random bivariate signals seen as
complex-valued sequences. This makes possible the definition of a scalar
quaternion-valued spectral density for bivariate signals. This spectral density
can be meaningfully interpreted in terms of frequency-dependent polarization
attributes. A natural decomposition of any random bivariate signal in terms of
unpolarized and polarized components is introduced. Nonparametric spectral
density estimation is investigated, and we introduce the polarization
periodogram of a random bivariate signal. Numerical experiments support our
theoretical analysis, illustrating the relevance of the approach on synthetic
data. | [
0,
0,
0,
1,
0,
0
] |
Title: Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition,
Abstract: We propose a method (TT-GP) for approximate inference in Gaussian Process
(GP) models. We build on previous scalable GP research including stochastic
variational inference based on inducing inputs, kernel interpolation, and
structure exploiting algebra. The key idea of our method is to use Tensor Train
decomposition for variational parameters, which allows us to train GPs with
billions of inducing inputs and achieve state-of-the-art results on several
benchmarks. Further, our approach allows for training kernels based on deep
neural networks without any modifications to the underlying GP model. A neural
network learns a multidimensional embedding for the data, which is used by the
GP to make the final prediction. We train GP and neural network parameters
end-to-end without pretraining, through maximization of GP marginal likelihood.
We show the efficiency of the proposed approach on several regression and
classification benchmark datasets including MNIST, CIFAR-10, and Airline. | [
1,
0,
0,
1,
0,
0
] |
Title: Modularity of complex networks models,
Abstract: Modularity is designed to measure the strength of division of a network into
clusters (known also as communities). Networks with high modularity have dense
connections between the vertices within clusters but sparse connections between
vertices of different clusters. As a result, modularity is often used in
optimization methods for detecting community structure in networks, and so it
is an important graph parameter from a practical point of view. Unfortunately,
many existing non-spatial models of complex networks do not generate graphs
with high modularity; on the other hand, spatial models naturally create
clusters. We investigate this phenomenon by considering a few examples from
both sub-classes. We prove precise theoretical results for the classical model
of random d-regular graphs as well as the preferential attachment model, and
contrast these results with the ones for the spatial preferential attachment
(SPA) model that is a model for complex networks in which vertices are embedded
in a metric space, and each vertex has a sphere of influence whose size
increases if the vertex gains an in-link, and otherwise decreases with time.
The results obtained in this paper can be used for developing statistical tests
for models selection and to measure statistical significance of clusters
observed in complex networks. | [
0,
0,
1,
0,
0,
0
] |
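
A small networkx illustration of the contrast discussed above: the modularity of communities detected in a non-spatial random regular graph versus a spatially embedded random geometric graph (used here only as a stand-in for spatial models; the SPA model itself is not implemented).

```python
import networkx as nx
from networkx.algorithms import community

# non-spatial: random 4-regular graph
G_reg = nx.random_regular_graph(d=4, n=500, seed=0)

# spatial flavour: random geometric graph (vertices embedded in the unit square)
G_geo = nx.random_geometric_graph(n=500, radius=0.1, seed=0)

for name, G in [("random 4-regular", G_reg), ("random geometric", G_geo)]:
    parts = community.greedy_modularity_communities(G)
    q = community.modularity(G, parts)
    print(f"{name:18s}  communities={len(parts):3d}  modularity={q:.3f}")
```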
Title: BOLD5000: A public fMRI dataset of 5000 images,
Abstract: Vision science, particularly machine vision, has been revolutionized by
introducing large-scale image datasets and statistical learning approaches.
Yet, human neuroimaging studies of visual perception still rely on small
numbers of images (around 100) due to time-constrained experimental procedures.
To apply statistical learning approaches that integrate neuroscience, the
number of images used in neuroimaging must be significantly increased. We
present BOLD5000, a human functional MRI (fMRI) study that includes almost
5,000 distinct images depicting real-world scenes. Beyond dramatically
increasing image dataset size relative to prior fMRI studies, BOLD5000 also
accounts for image diversity, overlapping with standard computer vision
datasets by incorporating images from the Scene UNderstanding (SUN), Common
Objects in Context (COCO), and ImageNet datasets. The scale and diversity of
these image datasets, combined with a slow event-related fMRI design, enable
fine-grained exploration into the neural representation of a wide range of
visual features, categories, and semantics. Concurrently, BOLD5000 brings us
closer to realizing Marr's dream of a singular vision science - the intertwined
study of biological and computer vision. | [
0,
0,
0,
0,
1,
0
] |
Title: Prospects for gravitational wave astronomy with next generation large-scale pulsar timing arrays,
Abstract: Next generation radio telescopes, namely the Five-hundred-meter Aperture
Spherical Telescope (FAST) and the Square Kilometer Array (SKA), will
revolutionize the pulsar timing arrays (PTAs) based gravitational wave (GW)
searches. We review some of the characteristics of FAST and SKA, and the
resulting PTAs, that are pertinent to the detection of gravitational wave
signals from individual supermassive black hole binaries. | [
0,
1,
0,
0,
0,
0
] |
Title: On Identifiability of Nonnegative Matrix Factorization,
Abstract: In this letter, we propose a new identification criterion that guarantees the
recovery of the low-rank latent factors in the nonnegative matrix factorization
(NMF) model, under mild conditions. Specifically, using the proposed criterion,
it suffices to identify the latent factors if the rows of one factor are
\emph{sufficiently scattered} over the nonnegative orthant, while no structural
assumption is imposed on the other factor except being full-rank. This is by
far the mildest condition under which the latent factors are provably
identifiable from the NMF model. | [
1,
0,
0,
1,
0,
0
] |
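
The identification criterion itself is a theoretical condition; the sketch below merely runs a standard scikit-learn NMF on synthetic data whose ground-truth left factor has sparse, scattered rows, i.e., the regime the "sufficiently scattered" condition targets. Only the reconstruction error is reported here, not recovery of the factors up to permutation and scaling.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n, m, r = 200, 50, 5

# ground truth: W has sparse ("scattered") rows, H is dense and full-rank
W_true = rng.random((n, r)) * (rng.random((n, r)) < 0.3)
H_true = rng.random((r, m)) + 0.1
X = W_true @ H_true

model = NMF(n_components=r, init="nndsvda", max_iter=2000, tol=1e-8)
W_hat = model.fit_transform(X)
H_hat = model.components_

print("relative reconstruction error:",
      np.linalg.norm(X - W_hat @ H_hat) / np.linalg.norm(X))
```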
Title: Optimal Non-blocking Decentralized Supervisory Control Using G-Control Consistency,
Abstract: Supervisory control synthesis suffers from computational complexity. This
can be reduced by a decentralized supervisory control approach. In this paper, we
define intrinsic control consistency for a pair of states of the plant.
G-control consistency (GCC) is another concept which is defined for a natural
projection w.r.t. the plant. We prove that, if a natural projection is output
control consistent for the closed language of the plant, and is a natural
observer for the marked language of the plant, then it is G-control consistent.
Namely, we relax the conditions for synthesizing the optimal non-blocking
decentralized supervisory control by substituting the GCC property for the L-OCC
and Lm-observer properties of a natural projection. We propose a method to
synthesize the optimal non-blocking decentralized supervisory control based on
the GCC property for a natural projection. In fact, we change the approach from
language-based properties of a natural projection to a DES-based property by
defining the GCC property. | [
1,
0,
0,
0,
0,
0
] |
Title: Discrete configuration spaces of squares and hexagons,
Abstract: We consider generalizations of the familiar fifteen-piece sliding puzzle on
the 4 by 4 square grid. On larger grids with more pieces and more holes,
asymptotically how fast can we move the puzzle into the solved state? We also
give a variation with sliding hexagons. The square puzzles and the hexagon
puzzles are both discrete versions of configuration spaces of disks, which are
of interest in statistical mechanics and topological robotics. The
combinatorial theorems and proofs in this paper suggest followup questions in
both combinatorics and topology, and may turn out to be useful for proving
topological statements about configuration spaces. | [
1,
0,
1,
0,
0,
0
] |
Title: On the non commutative Iwasawa main conjecture for abelian varieties over function fields,
Abstract: We establish the Iwasawa main conjecture for semi-stable abelian varieties
over a function field of characteristic $p$ under certain restrictive
assumptions. Namely we consider $p$-torsion free $p$-adic Lie extensions of the
base field which contain the constant $\mathbb Z_p$-extension and are
everywhere unramified. Under the classical $\mu=0$ hypothesis we give a proof
which mainly relies on the interpretation of the Selmer complex in terms of
$p$-adic cohomology [TV] together with the trace formulas of [EL1]. | [
0,
0,
1,
0,
0,
0
] |
Title: Design, Development and Evaluation of a UAV to Study Air Quality in Qatar,
Abstract: Measuring gases for air quality monitoring is a challenging task that demands
long observation times and large numbers of sensors. The aim of this
project is to develop a partially autonomous unmanned aerial vehicle (UAV)
equipped with sensors, in order to monitor and collect air quality real time
data in designated areas and send it to the ground base. This project is
designed and implemented by a multidisciplinary team from electrical and
computer engineering departments. The electrical engineering team responsible
for implementing air quality sensors for detecting real time data and transmit
it from the plane to the ground. On the other hand, the computer engineering
team is in charge of Interface sensors and provide platform to view and
visualize air quality data and live video streaming. The proposed project
contains several sensors to measure Temperature, Humidity, Dust, CO, CO2 and
O3. The collected data is transmitted to a server over a wireless internet
connection and the server will store, and supply these data to any party who
has permission to access it through android phone or website in semi-real time.
The developed UAV has carried several field tests in Al Shamal airport in
Qatar, with interesting results and proof of concept outcomes. | [
1,
0,
0,
0,
0,
0
] |
Title: Gaussian Process bandits with adaptive discretization,
Abstract: In this paper, the problem of maximizing a black-box function $f:\mathcal{X}
\to \mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process
(GP) prior. In particular, a new algorithm for this problem is proposed, and
high probability bounds on its simple and cumulative regret are established.
The query point selection rule in most existing methods involves an exhaustive
search over an increasingly fine sequence of uniform discretizations of
$\mathcal{X}$. The proposed algorithm, in contrast, adaptively refines
$\mathcal{X}$ which leads to a lower computational complexity, particularly
when $\mathcal{X}$ is a subset of a high dimensional Euclidean space. In
addition to the computational gains, sufficient conditions are identified under
which the regret bounds of the new algorithm improve upon the known results.
Finally an extension of the algorithm to the case of contextual bandits is
proposed, and high probability bounds on the contextual regret are presented. | [
1,
0,
0,
1,
0,
0
] |
Title: A Matched Filter Technique For Slow Radio Transient Detection And First Demonstration With The Murchison Widefield Array,
Abstract: Many astronomical sources produce transient phenomena at radio frequencies,
but the transient sky at low frequencies (<300 MHz) remains relatively
unexplored. Blind surveys with new widefield radio instruments are setting
increasingly stringent limits on the transient surface density on various
timescales. Although many of these instruments are limited by classical
confusion noise from an ensemble of faint, unresolved sources, one can in
principle detect transients below the classical confusion limit to the extent
that the classical confusion noise is independent of time. We develop a
technique for detecting radio transients that is based on temporal matched
filters applied directly to time series of images rather than relying on
source-finding algorithms applied to individual images. This technique has
well-defined statistical properties and is applicable to variable and transient
searches for both confusion-limited and non-confusion-limited instruments.
Using the Murchison Widefield Array as an example, we demonstrate that the
technique works well on real data despite the presence of classical confusion
noise, sidelobe confusion noise, and other systematic errors. We searched for
transients lasting between 2 minutes and 3 months. We found no transients and
set improved upper limits on the transient surface density at 182 MHz for flux
densities between ~20--200 mJy, providing the best limits to date for hour- and
month-long transients. | [
0,
1,
0,
0,
0,
0
] |
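
A one-dimensional toy version of the idea: a temporal matched filter (box-car template) applied to the light curve of a single pixel with an injected transient. The actual pipeline operates on image time series with proper noise estimation; the numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, sigma = 500, 1.0

# simulated pixel light curve: constant confusion background plus Gaussian noise
lc = 3.0 + sigma * rng.standard_normal(n_epochs)
# inject a transient lasting 20 epochs with a per-epoch amplitude of 0.8*sigma
start, dur, amp = 230, 20, 0.8 * sigma
lc[start:start + dur] += amp

# matched filter: correlate the mean-subtracted series with a unit-norm box-car template
template = np.ones(dur) / np.sqrt(dur)
resid = lc - lc.mean()
stat = np.correlate(resid, template, mode="valid") / sigma   # detection statistic
print("peak S/N:", round(stat.max(), 2),
      "at epoch", int(np.argmax(stat)), "(injected at", start, ")")
```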
Title: Bayes model selection,
Abstract: We offer a general Bayes theoretic framework to tackle the model selection
problem under a two-step prior design: the first-step prior serves to assess
the model selection uncertainty, and the second-step prior quantifies the prior
belief on the strength of the signals within the model chosen from the first
step.
We establish non-asymptotic oracle posterior contraction rates under (i) a
new Bernstein-inequality condition on the log likelihood ratio of the
statistical experiment, (ii) a local entropy condition on the dimensionality of
the models, and (iii) a sufficient mass condition on the second-step prior near
the best approximating signal for each model. The first-step prior can be
designed generically. The resulting posterior mean also satisfies an oracle
inequality, thus automatically serving as an adaptive point estimator in a
frequentist sense. Model mis-specification is allowed in these oracle rates.
The new Bernstein-inequality condition not only eliminates the convention of
constructing explicit tests with exponentially small type I and II errors, but
also suggests the intrinsic metric to use in a given statistical experiment,
both as a loss function and as an entropy measurement. This gives a unified
reduction scheme for many experiments considered in Ghoshal & van der
Vaart (2007) and beyond. As an illustration of the scope of our general results
in concrete applications, we consider (i) trace regression, (ii)
shape-restricted isotonic/convex regression, (iii) high-dimensional partially
linear regression and (iv) covariance matrix estimation in the sparse factor
model. These new results serve either as theoretical justification of practical
prior proposals in the literature, or as an illustration of the generic
construction scheme of a (nearly) minimax adaptive estimator for a
multi-structured experiment. | [
0,
0,
1,
1,
0,
0
] |
Title: A Semantic Cross-Species Derived Data Management Application,
Abstract: Managing dynamic information in large multi-site, multi-species, and
multi-discipline consortia is a challenging task for data management
applications. Often in academic research studies the goals for informatics
teams are to build applications that provide extract-transform-load (ETL)
functionality to archive and catalog source data that has been collected by the
research teams. In consortia that cross species and methodological or
scientific domains, building interfaces that supply data in a usable fashion
and make intuitive sense to scientists from dramatically different backgrounds
increases the complexity for developers. Further, reusing source data from
outside one's scientific domain is fraught with ambiguities in understanding
the data types, analysis methodologies, and how to combine the data with those
from other research teams. We report on the design, implementation, and
performance of a semantic data management application to support the NIMH
funded Conte Center at the University of California, Irvine. The Center is
testing a theory of the consequences of "fragmented" (unpredictable, high
entropy) early-life experiences on adolescent cognitive and emotional outcomes
in both humans and rodents. It employs cross-species neuroimaging, epigenomic,
molecular, and neuroanatomical approaches in humans and rodents to assess the
potential consequences of fragmented unpredictable experience on brain
structure and circuitry. To address this multi-technology, multi-species
approach, the system uses semantic web techniques based on the Neuroimaging
Data Model (NIDM) to facilitate data ETL functionality. We find this approach
enables a low-cost, easy to maintain, and semantically meaningful information
management system, enabling the diverse research teams to access and use the
data. | [
1,
0,
0,
0,
0,
0
] |
Title: Retrosynthetic reaction prediction using neural sequence-to-sequence models,
Abstract: We describe a fully data driven model that learns to perform a retrosynthetic
reaction prediction task, which is treated as a sequence-to-sequence mapping
problem. The end-to-end trained model has an encoder-decoder architecture that
consists of two recurrent neural networks, which has previously shown great
success in solving other sequence-to-sequence prediction tasks such as machine
translation. The model is trained on 50,000 experimental reaction examples from
the United States patent literature, which span 10 broad reaction types that
are commonly used by medicinal chemists. We find that our model performs
comparably with a rule-based expert system baseline model, and also overcomes
certain limitations associated with rule-based expert systems and with any
machine learning approach that contains a rule-based expert system component.
Our model provides an important first step towards solving the challenging
problem of computational retrosynthetic analysis. | [
1,
0,
0,
1,
0,
0
] |
Title: Redshift, metallicity and size of two extended dwarf Irregular galaxies. A link between dwarf Irregulars and Ultra Diffuse Galaxies?,
Abstract: We present the results of the spectroscopic and photometric follow-up of two
field galaxies that were selected as possible stellar counterparts of local
high velocity clouds. Our analysis shows that the two systems are distant (D>20
Mpc) dwarf irregular galaxies unrelated to the local HI clouds. However, the
newly derived distance and structural parameters reveal that the two galaxies
have luminosities and effective radii very similar to the recently identified
Ultra Diffuse Galaxies (UDGs). At odds with classical UDGs, they are remarkably
isolated, having no known giant galaxy within ~2.0 Mpc. Moreover, one of them
has a very high gas content compared to galaxies of similar stellar mass, with
a HI to stellar mass ratio M_HI/M_* ~90, typical of almost-dark dwarfs.
Expanding on this finding, we show that extended dwarf irregulars overlap the
distribution of UDGs in the M_V vs. log(r_e) plane and that the sequence
including dwarf spheroidals, dwarf irregulars and UDGs appears as continuously
populated in this plane. | [
0,
1,
0,
0,
0,
0
] |
Title: Hausdorff Measure: Lost in Translation,
Abstract: In the present article we describe how one can define Hausdorff measure
allowing empty elements in coverings, and using infinite countable coverings
only. In addition, we discuss how the use of different nonequivalent
interpretations of the notion "countable set", that is typical for classical
and modern mathematics, may lead to contradictions. | [
0,
0,
1,
0,
0,
0
] |
Title: Orthogonalized ALS: A Theoretically Principled Tensor Decomposition Algorithm for Practical Use,
Abstract: The popular Alternating Least Squares (ALS) algorithm for tensor
decomposition is efficient and easy to implement, but often converges to poor
local optima---particularly when the weights of the factors are non-uniform. We
propose a modification of the ALS approach that is as efficient as standard
ALS, but provably recovers the true factors with random initialization under
standard incoherence assumptions on the factors of the tensor. We demonstrate
the significant practical superiority of our approach over traditional ALS for
a variety of tasks on synthetic data---including tensor factorization on exact,
noisy and over-complete tensors, as well as tensor completion---and for
computing word embeddings from a third-order word tri-occurrence tensor. | [
1,
0,
0,
1,
0,
0
] |
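
For reference, plain ALS for a rank-r CP decomposition of a synthetic third-order tensor in numpy; the orthogonalization step that distinguishes Orthogonalized ALS, and the recovery guarantees discussed above, are not part of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, r = 20, 25, 30, 4

# synthetic rank-r tensor with random factors
A = rng.standard_normal((I, r))
B = rng.standard_normal((J, r))
C = rng.standard_normal((K, r))
T = np.einsum("ir,jr,kr->ijk", A, B, C)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, rows ordered as (u-index, v-index)."""
    return np.einsum("ir,jr->ijr", U, V).reshape(-1, U.shape[1])

Ah = rng.standard_normal((I, r))
Bh = rng.standard_normal((J, r))
Ch = rng.standard_normal((K, r))
T1 = T.reshape(I, -1)                          # mode-1 unfolding
T2 = np.moveaxis(T, 1, 0).reshape(J, -1)       # mode-2 unfolding
T3 = np.moveaxis(T, 2, 0).reshape(K, -1)       # mode-3 unfolding

for _ in range(100):                           # plain ALS sweeps
    Ah = np.linalg.lstsq(khatri_rao(Bh, Ch), T1.T, rcond=None)[0].T
    Bh = np.linalg.lstsq(khatri_rao(Ah, Ch), T2.T, rcond=None)[0].T
    Ch = np.linalg.lstsq(khatri_rao(Ah, Bh), T3.T, rcond=None)[0].T

T_hat = np.einsum("ir,jr,kr->ijk", Ah, Bh, Ch)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```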
Title: Domain Generalization by Marginal Transfer Learning,
Abstract: Domain generalization is the problem of assigning class labels to an
unlabeled test data set, given several labeled training data sets drawn from
similar distributions. This problem arises in several applications where data
distributions fluctuate because of biological, technical, or other sources of
variation. We develop a distribution-free, kernel-based approach that predicts
a classifier from the marginal distribution of features, by leveraging the
trends present in related classification tasks. This approach involves
identifying an appropriate reproducing kernel Hilbert space and optimizing a
regularized empirical risk over the space. We present generalization error
analysis, describe universal kernels, and establish universal consistency of
the proposed methodology. Experimental results on synthetic data and three real
data applications demonstrate the superiority of the method with respect to a
pooling strategy. | [
0,
0,
0,
1,
0,
0
] |
Title: Pricing options and computing implied volatilities using neural networks,
Abstract: This paper proposes a data-driven approach, by means of an Artificial Neural
Network (ANN), to value financial options and to calculate implied volatilities
with the aim of accelerating the corresponding numerical methods. With ANNs
being universal function approximators, this method trains an optimized ANN on
a data set generated by a sophisticated financial model, and runs the trained
ANN as an agent of the original solver in a fast and efficient way. We test
this approach on three different types of solvers, including the analytic
solution for the Black-Scholes equation, the COS method for the Heston
stochastic volatility model and Brent's iterative root-finding method for the
calculation of implied volatilities. The numerical results show that the ANN
solver can reduce the computing time significantly. | [
1,
0,
0,
0,
0,
1
] |
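
A stripped-down version of the idea for the simplest of the three solvers: generate Black-Scholes call prices from the analytic formula and train a small MLP regressor to reproduce them. The Heston/COS and implied-volatility solvers of the paper are not covered, and the parameter ranges below are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):
    """Analytic Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
n = 20000
S = rng.uniform(0.4, 1.6, n)          # spot, with the strike normalised to 1
T = rng.uniform(0.1, 2.0, n)
sigma = rng.uniform(0.05, 0.6, n)
r = 0.02
X = np.column_stack([S, T, sigma])
y = bs_call(S, 1.0, T, r, sigma)

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ann.fit(X[:15000], y[:15000])
err = np.abs(ann.predict(X[15000:]) - y[15000:])
print("mean absolute pricing error on held-out samples:", err.mean())
```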
Title: Effect of magnetization on the tunneling anomaly in compressible quantum Hall states,
Abstract: Tunneling of electrons into a two-dimensional electron system is known to
exhibit an anomaly at low bias, in which the tunneling conductance vanishes due
to a many-body interaction effect. Recent experiments have measured this
anomaly between two copies of the half-filled Landau level as a function of
in-plane magnetic field, and they suggest that increasing spin polarization
drives a deeper suppression of tunneling. Here we present a theory of the
tunneling anomaly between two copies of the partially spin-polarized
Halperin-Lee-Read state, and we show that the conventional description of the
tunneling anomaly, based on the Coulomb self-energy of the injected charge
packet, is inconsistent with the experimental observation. We propose that the
experiment is operating in a different regime, not previously considered, in
which the charge-spreading action is determined by the compressibility of the
composite fermions. | [
0,
1,
0,
0,
0,
0
] |
Title: Range-efficient consistent sampling and locality-sensitive hashing for polygons,
Abstract: Locality-sensitive hashing (LSH) is a fundamental technique for similarity
search and similarity estimation in high-dimensional spaces. The basic idea is
that similar objects should produce hash collisions with probability
significantly larger than objects with low similarity. We consider LSH for
objects that can be represented as point sets in either one or two dimensions.
To make the point sets finite size we consider the subset of points on a grid.
Directly applying LSH (e.g. min-wise hashing) to these point sets would require
time proportional to the number of points. We seek to achieve time that is much
lower than that of direct approaches.
Technically, we introduce new primitives for range-efficient consistent
sampling (of independent interest), and show how to turn such samples into LSH
values. Another application of our technique is a data structure for quickly
estimating the size of the intersection or union of a set of preprocessed
polygons. Curiously, our consistent sampling method uses transformation to a
geometric problem. | [
1,
0,
0,
0,
0,
0
] |
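
A minimal min-wise hashing sketch for grid point sets. Note that it iterates over every point, which is exactly the cost the range-efficient consistent sampling in the paper avoids, so this only illustrates the LSH principle, not the contribution.

```python
import random

def minhash_signature(points, num_hashes=64, seed=0):
    """Min-wise hashing of a set of grid points (hashable tuples)."""
    rnd = random.Random(seed)
    seeds = [rnd.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((s, p)) for p in points) for s in seeds]

def estimate_jaccard(sig_a, sig_b):
    """Fraction of hash functions on which the two sets collide."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# two overlapping rectangular point sets on a grid
A = {(x, y) for x in range(0, 40) for y in range(0, 40)}
B = {(x, y) for x in range(20, 60) for y in range(0, 40)}
true_j = len(A & B) / len(A | B)

sa, sb = minhash_signature(A), minhash_signature(B)
print("true Jaccard:     ", round(true_j, 3))
print("estimated Jaccard:", round(estimate_jaccard(sa, sb), 3))
```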
Title: Decoupled Greedy Learning of CNNs,
Abstract: A commonly cited inefficiency of neural network training by back-propagation
is the update locking problem: each layer must wait for the signal to propagate
through the network before updating. We consider and analyze a training
procedure, Decoupled Greedy Learning (DGL), that addresses this problem more
effectively and at scales beyond those of previous solutions. It is based on a
greedy relaxation of the joint training objective, recently shown to be
effective in the context of Convolutional Neural Networks (CNNs) on large-scale
image classification. We consider an optimization of this objective that
permits us to decouple the layer training, allowing for layers or modules in
networks to be trained with a potentially linear parallelization in layers. We
show theoretically and empirically that this approach converges. In addition,
we empirically find that it can lead to better generalization than sequential
greedy optimization and even standard end-to-end back-propagation. We show that
an extension of this approach to asynchronous settings, where modules can
operate with large communication delays, is possible with the use of a replay
buffer. We demonstrate the effectiveness of DGL on the CIFAR-10 datasets
against alternatives and on the large-scale ImageNet dataset, where we are able
to effectively train VGG and ResNet-152 models. | [
1,
0,
0,
1,
0,
0
] |
Title: Discrete time Pontryagin maximum principle for optimal control problems under state-action-frequency constraints,
Abstract: We establish a Pontryagin maximum principle for discrete time optimal control
problems under the following three types of constraints: a) constraints on the
states pointwise in time, b) constraints on the control actions pointwise in
time, and c) constraints on the frequency spectrum of the optimal control
trajectories. While the first two types of constraints are already included in
the existing versions of the Pontryagin maximum principle, it turns out that
the third type of constraints cannot be recast in any of the standard forms of
the existing results for the original control system. We provide two different
proofs of our Pontryagin maximum principle in this article, and include several
special cases fine-tuned to control-affine nonlinear and linear system models.
In particular, for minimization of quadratic cost functions and linear time
invariant control systems, we provide tight conditions under which the optimal
controls under frequency constraints are either normal or abnormal. | [
1,
0,
1,
0,
0,
0
] |
Title: Quantitative evaluation of an active Chemotaxis model in Discrete time,
Abstract: A system of $N$ particles in a chemical medium in $\mathbb{R}^{d}$ is studied
in a discrete time setting. Underlying interacting particle system in
continuous time can be expressed as \begin{eqnarray} dX_{i}(t)
&=&[-(I-A)X_{i}(t) + \bigtriangledown h(t,X_{i}(t))]dt + dW_{i}(t), \,\,
X_{i}(0)=x_{i}\in \mathbb{R}^{d}\,\,\forall i=1,\ldots,N\nonumber\\
\frac{\partial}{\partial t} h(t,x)&=&-\alpha h(t,x) + D\bigtriangleup h(t,x)
+\frac{\beta}{n} \sum_{i=1}^{N} g(X_{i}(t),x),\quad h(0,\cdot) =
h(\cdot).\label{main} \end{eqnarray} where $X_{i}(t)$ is the location of the
$i$th particle at time $t$ and $h(t,x)$ is the function measuring the
concentration of the medium at location $x$ with $h(0,x) = h(x)$. In this
article we describe a general discrete time non-linear formulation of the
aforementioned model and a strongly coupled particle system approximating it.
Similar models have been studied before (Budhiraja et al. (2011)) under a
restrictive compactness assumption on the domain of the particles. In the current
work the particles take values in $\mathbb{R}^{d}$ and consequently the stability analysis
is particularly challenging. We provide sufficient conditions for the existence
of a unique fixed point for the dynamical system governing the large $N$
asymptotics of the particle empirical measure. We also provide uniform in time
convergence rates for the particle empirical measure to the corresponding limit
measure under suitable conditions on the model. | [
0,
0,
1,
0,
0,
0
] |
Title: Decoupling Learning Rules from Representations,
Abstract: In the artificial intelligence field, learning often corresponds to changing
the parameters of a parameterized function. A learning rule is an algorithm or
mathematical expression that specifies precisely how the parameters should be
changed. When creating an artificial intelligence system, we must make two
decisions: what representation should be used (i.e., what parameterized
function should be used) and what learning rule should be used to search
through the resulting set of representable functions. Using most learning
rules, these two decisions are coupled in a subtle (and often unintentional)
way. That is, using the same learning rule with two different representations
that can represent the same sets of functions can result in two different
outcomes. After arguing that this coupling is undesirable, particularly when
using artificial neural networks, we present a method for partially decoupling
these two decisions for a broad class of learning rules that span unsupervised
learning, reinforcement learning, and supervised learning. | [
1,
0,
0,
1,
0,
0
] |
Title: Space-Valued Diagrams, Type-Theoretically (Extended Abstract),
Abstract: Topologists are sometimes interested in space-valued diagrams over a given
index category, but it is tricky to say what such a diagram even is if we look
for a notion that is stable under equivalence. The same happens in (homotopy)
type theory, where it is known only for special cases how one can define a type
of type-valued diagrams over a given index category. We offer several
constructions. We first show how to define homotopy coherent diagrams which
come with all higher coherence laws explicitly, with two variants that come
with assumption on the index category or on the type theory. Further, we
present a construction of diagrams over certain Reedy categories. As an
application, we add the degeneracies to the well-known construction of
semisimplicial types, yielding a construction of simplicial types up to any
given finite level. The current paper is only an extended abstract, and a full
version is to follow. In the full paper, we will show that the different
notions of diagrams are equivalent to each other and to the known notion of
Reedy fibrant diagrams whenever the statement makes sense. In the current
paper, we only sketch some core ideas of the proofs. | [
1,
0,
1,
0,
0,
0
] |
Title: Output Impedance Diffusion into Lossy Power Lines,
Abstract: Output impedances are inherent elements of power sources in the electrical
grids. In this paper, we give an answer to the following question: What is the
effect of output impedances on the inductivity of the power network? To address
this question, we propose a measure to evaluate the inductivity of a power
grid, and we compute this measure for various types of output impedances.
Following this computation, it turns out that network inductivity highly
depends on the algebraic connectivity of the network. By exploiting the derived
expressions of the proposed measure, one can tune the output impedances in
order to enforce a desired level of inductivity on the power system.
Furthermore, the results show that the more "connected" the network is, the
more the output impedances diffuse into the network. Finally, using Kron
reduction, we provide examples that demonstrate the utility and validity of the
method. | [
1,
0,
0,
0,
0,
0
] |
Title: Enhancing the significance of gravitational wave bursts through signal classification,
Abstract: The quest to observe gravitational waves challenges our ability to
discriminate signals from detector noise. This issue is especially relevant for
transient gravitational wave searches with a robust eyes-wide-open approach,
the so-called all-sky burst searches. Here we show how signal classification
methods inspired by broad astrophysical characteristics can be implemented in
all-sky burst searches while preserving their generality. In our case study, we
apply a multivariate analysis based on artificial neural networks to classify waves
emitted in compact binary coalescences. We enhance by orders of magnitude the
significance of signals belonging to this broad astrophysical class against the
noise background. Alternatively, at a given level of mis-classification of
noise events, we can detect about 1/4 more of the total signal population. We
also show that a more general strategy of signal classification can actually be
performed, by testing the ability of artificial neural networks in
discriminating different signal classes. The possible impact on future
observations by the LIGO-Virgo network of detectors is discussed by analysing
recoloured noise from previous LIGO-Virgo data with coherent WaveBurst, one of
the flagship pipelines dedicated to all-sky searches for transient
gravitational waves. | [
0,
1,
0,
0,
0,
0
] |
Title: Model-based Clustering with Sparse Covariance Matrices,
Abstract: Finite Gaussian mixture models are widely used for model-based clustering of
continuous data. Nevertheless, since the number of model parameters scales
quadratically with the number of variables, these models can be easily
over-parameterized. For this reason, parsimonious models have been developed
via covariance matrix decompositions or assuming local independence. However,
these remedies do not allow for direct estimation of sparse covariance matrices
nor do they take into account that the structure of association among the
variables can vary from one cluster to the other. To this end, we introduce
mixtures of Gaussian covariance graph models for model-based clustering with
sparse covariance matrices. A penalized likelihood approach is employed for
estimation and a general penalty term on the graph configurations can be used
to induce different levels of sparsity and incorporate prior knowledge. Model
estimation is carried out using a structural-EM algorithm for parameters and
graph structure estimation, where two alternative strategies based on a genetic
algorithm and an efficient stepwise search are proposed for inference. With
this approach, sparse component covariance matrices are directly obtained. The
framework results in a parsimonious model-based clustering of the data via a
flexible model for the within-group joint distribution of the variables.
Extensive simulated data experiments and application to illustrative datasets
show that the method attains good classification performance and model quality. | [
0,
0,
0,
1,
0,
0
] |
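
The paper's penalized structural-EM for covariance graph models is not reproduced here. As a rough, assumption-laden sketch of cluster-wise sparsity, the code below fits a Gaussian mixture and then a graphical lasso within each cluster; note that the graphical lasso sparsifies the precision matrix, whereas the paper works with sparse covariance matrices (marginal independence), so this only conveys the general idea.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=600, centers=3, n_features=6, random_state=0)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)

for k in range(3):
    gl = GraphicalLasso(alpha=0.2).fit(X[labels == k])
    n_zero = int(np.sum(np.abs(gl.precision_) < 1e-6))
    print(f"cluster {k}: {n_zero} (near-)zero entries in the estimated precision matrix")
```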
Title: An Assessment of Data Transfer Performance for Large-Scale Climate Data Analysis and Recommendations for the Data Infrastructure for CMIP6,
Abstract: We document the data transfer workflow, data transfer performance, and other
aspects of staging approximately 56 terabytes of climate model output data from
the distributed Coupled Model Intercomparison Project (CMIP5) archive to the
National Energy Research Supercomputing Center (NERSC) at the Lawrence Berkeley
National Laboratory required for tracking and characterizing extratropical
storms, phenomena of importance in the mid-latitudes. We present this
analysis to illustrate the current challenges in assembling multi-model data
sets at major computing facilities for large-scale studies of CMIP5 data.
Because of the larger archive size of the upcoming CMIP6 phase of model
intercomparison, we expect such data transfers to become of increasing
importance, and perhaps of routine necessity. We find that data transfer rates
using the Earth System Grid Federation (ESGF) are often slower than what is typically available to US
residences and that there is significant room for improvement in the data
transfer capabilities of the ESGF portal and data centers both in terms of
workflow mechanics and in data transfer performance. We believe performance
improvements of at least an order of magnitude are within technical reach using
current best practices, as illustrated by the performance we achieved in
transferring the complete raw data set between two high performance computing
facilities. To achieve these performance improvements, we recommend: that
current best practices (such as the Science DMZ model) be applied to the data
servers and networks at ESGF data centers; that sufficient financial and human
resources be devoted at the ESGF data centers for systems and network
engineering tasks to support high performance data movement; and that
performance metrics for data transfer between ESGF data centers and major
computing facilities used for climate data analysis be established, regularly
tested, and published. | [
1,
1,
0,
0,
0,
0
] |
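A back-of-envelope illustration of why the order-of-magnitude improvement discussed in the abstract above matters for staging roughly 56 terabytes: the rates below are illustrative assumptions, not measured ESGF figures.

```python
# Transfer time for ~56 TB at a few assumed sustained rates.
data_tb = 56
for label, rate_mbps in [("residential-class rate", 100),
                         ("10x improved (Science DMZ style)", 1000),
                         ("HPC-to-HPC rate", 10000)]:
    seconds = data_tb * 8e6 / rate_mbps   # 1 TB = 8e6 megabits
    print(f"{label:35s} {rate_mbps:6d} Mb/s -> {seconds / 86400:6.1f} days")
```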
Title: Automated Problem Identification: Regression vs Classification via Evolutionary Deep Networks,
Abstract: Regression or classification? This is perhaps the most basic question faced
when tackling a new supervised learning problem. We present an Evolutionary
Deep Learning (EDL) algorithm that automatically solves this by identifying the
question type with high accuracy, along with a proposed deep architecture.
Typically, a significant amount of human insight and preparation is required
prior to executing machine learning algorithms. For example, when creating deep
neural networks, the number of parameters must be selected in advance and
furthermore, many of these choices are made based on pre-existing knowledge
of the data such as the use of a categorical cross entropy loss function.
Humans are able to study a dataset and decide whether it represents a
classification or a regression problem, and consequently make decisions which
will be applied to the execution of the neural network. We propose the
Automated Problem Identification (API) algorithm, which uses an evolutionary
algorithm interface to TensorFlow to manipulate a deep neural network to decide
if a dataset represents a classification or a regression problem. We test API
on 16 different classification, regression and sentiment analysis datasets with
up to 10,000 features and up to 17,000 unique target values. API achieves an
average accuracy of $96.3\%$ in identifying the problem type without hardcoding
any insights about the general characteristics of regression or classification
problems. For example, API successfully identifies classification problems even
with 1000 target values. Furthermore, the algorithm recommends which loss
function to use and also recommends a neural network architecture. Our work is
therefore a step towards fully automated machine learning. | [
1,
0,
0,
1,
0,
0
] |
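For contrast with the learned approach in the abstract above, a deliberately simple hand-coded baseline for the same decision is sketched below. The API algorithm evolves a deep network to identify the problem type from the data itself; the rule here, including its thresholds, is only an illustrative assumption of what such a baseline might look like.

```python
# Naive baseline: guess classification vs regression from the target column.
import numpy as np

def naive_problem_type(y, max_classes=1000):
    y = np.asarray(y)
    if not np.issubdtype(y.dtype, np.number):
        return "classification"
    # Integer-valued targets with relatively few distinct values look categorical.
    if np.all(np.mod(y, 1) == 0) and np.unique(y).size <= max_classes:
        return "classification"
    return "regression"

print(naive_problem_type(np.random.randint(0, 10, 500)))   # -> classification
print(naive_problem_type(np.random.randn(500)))            # -> regression
```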
Title: The Kellogg property and boundary regularity for p-harmonic functions with respect to the Mazurkiewicz boundary and other compactifications,
Abstract: In this paper boundary regularity for p-harmonic functions is studied with
respect to the Mazurkiewicz boundary and other compactifications. In
particular, the Kellogg property (which says that the set of irregular boundary
points has capacity zero) is obtained for a large class of compactifications,
but also two examples when it fails are given. This study is done for complete
metric spaces equipped with doubling measures supporting a p-Poincaré
inequality, but the results are new also in unweighted Euclidean spaces. | [
0,
0,
1,
0,
0,
0
] |
Title: Nonparametric Inference via Bootstrapping the Debiased Estimator,
Abstract: In this paper, we propose to construct confidence bands by bootstrapping the
debiased kernel density estimator (for density estimation) and the debiased
local polynomial regression estimator (for regression analysis). The idea of
using a debiased estimator was first introduced in Calonico et al. (2015),
where they construct a confidence interval of the density function (and
regression function) at a given point by explicitly estimating stochastic
variations. We extend their ideas and propose a bootstrap approach for
constructing confidence bands that are uniform over every point in the support.
We prove that the resulting bootstrap confidence band is asymptotically valid
and is compatible with most tuning parameter selection approaches, such as the
rule of thumb and cross-validation. We further generalize our method to
confidence sets of density level sets and inverse regression problems.
Simulation studies confirm the validity of the proposed confidence bands/sets. | [
0,
0,
1,
1,
0,
0
] |
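A minimal sketch of the density-estimation case from the abstract above: debias a kernel density estimate by subtracting an estimate of its leading bias term, then bootstrap the sup-norm deviation of the debiased estimator to obtain a band that is uniform over a grid. The bandwidths, grid, and bootstrap size are assumptions for illustration.

```python
# Bootstrap a uniform confidence band for a debiased Gaussian-kernel KDE.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=400)
grid = np.linspace(-3, 3, 121)
h = 0.4   # estimation bandwidth (e.g. rule of thumb)
b = 0.6   # bandwidth for the second-derivative (bias) estimate

def kde(data, pts, bw):
    u = (pts[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * u**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

def kde_second_deriv(data, pts, bw):
    u = (pts[:, None] - data[None, :]) / bw
    k2 = (u**2 - 1) * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return k2.mean(axis=1) / bw**3

def debiased_kde(data, pts):
    # The Gaussian kernel has second moment 1, so the leading bias is h^2 f''(x)/2.
    return kde(data, pts, h) - 0.5 * h**2 * kde_second_deriv(data, pts, b)

f_hat = debiased_kde(x, grid)
sup_devs = []
for _ in range(300):
    xb = rng.choice(x, size=x.size, replace=True)
    sup_devs.append(np.max(np.abs(debiased_kde(xb, grid) - f_hat)))
half_width = np.quantile(sup_devs, 0.95)
print("uniform 95% band half-width:", round(float(half_width), 4))
```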
Title: Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks,
Abstract: Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee these graph coloring problems will
converge to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space, driven by the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits. | [
0,
0,
0,
0,
1,
0
] |
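A toy sketch of the mechanism described in the abstract above: one winner-take-all group of "color" units per graph node, with inhibitory constraint connections that penalize neighboring nodes choosing the same color. The update rule, gains, and graph are assumptions for illustration, and this toy version is not guaranteed to converge the way the paper's circuit is; random restarts may be needed.

```python
# Toy WTA network for 4-coloring a small planar graph via constraint inhibition.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_nodes, n_colors = 4, 4
rng = np.random.default_rng(2)
a = rng.uniform(0.4, 0.6, size=(n_nodes, n_colors))   # unit activations

for _ in range(200):
    for v in range(n_nodes):
        # Constraint neurons: inhibition from same-color units of neighbors.
        inhib = sum(a[u] for u, w in edges if w == v) + \
                sum(a[w] for u, w in edges if u == v)
        drive = 1.0 + 0.05 * rng.standard_normal(n_colors) - 0.8 * inhib
        # WTA module: self-excitation minus shared inhibition, linear threshold.
        a[v] = np.maximum(0.0, a[v] + 0.2 * (drive + 1.2 * a[v] - a[v].sum()))
        a[v] = np.minimum(a[v], 3.0)   # keep the toy dynamics bounded

coloring = a.argmax(axis=1)
valid = all(coloring[u] != coloring[w] for u, w in edges)
print("coloring:", coloring, "valid:", valid)
```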
Title: Asymptotic Blind-spot Analysis of Localization Networks under Correlated Blocking using a Poisson Line Process,
Abstract: In a localization network, the line-of-sight between anchors (transceivers)
and targets may be blocked due to the presence of obstacles in the environment.
Due to the non-zero size of the obstacles, the blocking is typically correlated
across both anchor and target locations, with the extent of correlation
increasing with obstacle size. If a target does not have line-of-sight to a
minimum number of anchors, then its position cannot be estimated unambiguously
and is, therefore, said to be in a blind-spot. However, the analysis of the
blind-spot probability of a given target is challenging due to the inherent
randomness in the obstacle locations and sizes. In this letter, we develop a
new framework to analyze the worst-case impact of correlated blocking on the
blind-spot probability of a typical target; in particular, we model the
obstacles by a Poisson line process and the anchor locations by a Poisson point
process. For this setup, we define the notion of the asymptotic blind-spot
probability of the typical target and derive a closed-form expression for it as
a function of the area distribution of a typical Poisson-Voronoi cell. As an
upper bound for the more realistic case when obstacles have finite dimensions,
the asymptotic blind-spot probability is useful as a design tool to ensure that
the blind-spot probability of a typical target does not exceed a desired
threshold, $\epsilon$. | [
1,
0,
0,
0,
0,
0
] |
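A Monte Carlo sketch of the setup in the abstract above: anchors form a Poisson point process, obstacles are modeled as a Poisson line process (infinite lines, the worst case for blocking), and the typical target at the origin is in a blind spot if fewer than k anchors have unblocked line-of-sight. The intensities, disc radius, and k are illustrative assumptions, and this simulation estimates the blind-spot probability numerically rather than via the paper's closed-form Poisson-Voronoi expression.

```python
# Monte Carlo estimate of the blind-spot probability under a Poisson line process.
import numpy as np

rng = np.random.default_rng(3)
R, lam_anchor, mu_lines, k_min, trials = 10.0, 0.2, 3.0, 3, 2000

def blind_spot_indicator():
    # Anchors: Poisson point process in a disc of radius R around the target.
    n_a = rng.poisson(lam_anchor * np.pi * R**2)
    rad = R * np.sqrt(rng.uniform(size=n_a))
    ang = rng.uniform(0, 2 * np.pi, n_a)
    anchors = np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
    # Obstacle lines hitting the disc: angle uniform, signed distance uniform.
    n_l = rng.poisson(mu_lines)
    theta = rng.uniform(0, np.pi, n_l)
    r = rng.uniform(-R, R, n_l)
    omega = np.column_stack([np.cos(theta), np.sin(theta)])
    visible = 0
    for p in anchors:
        # The segment target->anchor crosses a line iff its endpoints lie on
        # opposite sides of that line.
        proj = omega @ p
        blocked = np.any((0 - r) * (proj - r) < 0)
        visible += not blocked
    return visible < k_min

p_blind = np.mean([blind_spot_indicator() for _ in range(trials)])
print("estimated blind-spot probability:", p_blind)
```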
Title: The relation between galaxy morphology and colour in the EAGLE simulation,
Abstract: We investigate the relation between kinematic morphology, intrinsic colour
and stellar mass of galaxies in the EAGLE cosmological hydrodynamical
simulation. We calculate the intrinsic u-r colours and measure the fraction of
kinetic energy invested in ordered corotation of 3562 galaxies at z=0 with
stellar masses larger than $10^{10}M_{\odot}$. We perform a visual inspection
of gri-composite images and find that our kinematic morphology correlates
strongly with visual morphology. EAGLE produces a galaxy population for which
morphology is tightly correlated with the location in the colour-mass diagram,
with the red sequence mostly populated by elliptical galaxies and the blue
cloud by disc galaxies. Satellite galaxies are more likely to be on the red
sequence than centrals, and for satellites the red sequence is morphologically
more diverse. These results show that the connection between mass, intrinsic
colour and morphology arises from galaxy formation models that reproduce the
observed galaxy mass function and sizes. | [
0,
1,
0,
0,
0,
0
] |
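A sketch of the kinematic morphology indicator referred to in the abstract above: the fraction of stellar kinetic energy invested in ordered corotation, computed from star-particle masses, positions, and velocities. The synthetic particles below are placeholders, and the exact EAGLE apertures and conventions are not reproduced.

```python
# Fraction of stellar kinetic energy in ordered corotation (kappa_co), toy data.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
pos = rng.normal(scale=3.0, size=(n, 3))    # kpc, galaxy-centred
vel = rng.normal(scale=50.0, size=(n, 3))   # km/s
mass = np.full(n, 1e6)                      # Msun

def kappa_corot(mass, pos, vel):
    # Align the z-axis with the total stellar angular momentum.
    j = np.cross(pos, vel)
    L = (mass[:, None] * j).sum(axis=0)
    z = L / np.linalg.norm(L)
    jz = j @ z                                         # specific j along L
    Rxy = np.linalg.norm(pos - np.outer(pos @ z, z), axis=1)
    vphi = jz / np.maximum(Rxy, 1e-10)                 # rotation velocity
    K = 0.5 * (mass * (vel**2).sum(axis=1)).sum()      # total kinetic energy
    corot = vphi > 0                                   # only co-rotating motion
    K_co = 0.5 * (mass[corot] * vphi[corot]**2).sum()
    return K_co / K

print("kappa_co =", round(kappa_corot(mass, pos, vel), 3))
```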
Title: Learning Large Scale Ordinary Differential Equation Systems,
Abstract: Learning large scale nonlinear ordinary differential equation (ODE) systems
from data is known to be computationally and statistically challenging. We
present a framework together with the adaptive integral matching (AIM)
algorithm for learning polynomial or rational ODE systems with a sparse network
structure. The framework allows for time course data sampled from multiple
environments representing e.g. different interventions or perturbations of the
system. The algorithm AIM combines an initial penalised integral matching step
with an adapted least squares step based on solving the ODE numerically. The R
package episode implements AIM together with several other algorithms and is
available from CRAN. It is shown that AIM achieves state-of-the-art network
recovery for the in silico phosphoprotein abundance data from the eighth DREAM
challenge with an AUROC of 0.74, and it is demonstrated via a range of
numerical examples that AIM has good statistical properties while being
computationally feasible even for large systems. | [
0,
0,
1,
1,
0,
0
] |
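A simplified sketch of the penalised integral-matching idea behind the abstract above: for dx/dt = Theta(x) beta, match increments x(t_{i+1}) - x(t_i) against trapezoid-rule integrals of candidate terms and fit beta with a lasso to obtain a sparse network. This is not the episode/AIM implementation (there is no adapted least-squares refinement step here), only the core regression on a toy Lotka-Volterra system with an assumed candidate library and penalty.

```python
# Penalised integral matching with a lasso on a toy Lotka-Volterra system.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import Lasso

def lv(t, z, a=1.0, b=0.4, c=0.4, d=1.0):
    x, y = z
    return [a * x - b * x * y, c * x * y - d * y]

t = np.linspace(0, 12, 200)
sol = solve_ivp(lv, (0, 12), [2.0, 1.0], t_eval=t)
X = sol.y.T + 0.01 * np.random.default_rng(5).normal(size=sol.y.T.shape)

def theta(X):
    # Candidate library of polynomial terms: 1, x, y, x*y, x^2, y^2.
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

T = theta(X)
dt = np.diff(t)[:, None]
design = 0.5 * dt * (T[:-1] + T[1:])     # trapezoid integrals of each term
increments = np.diff(X, axis=0)          # x(t_{i+1}) - x(t_i)

for j, name in enumerate(["dx/dt", "dy/dt"]):
    fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000)
    fit.fit(design, increments[:, j])
    print(name, "coefficients:", np.round(fit.coef_, 2))
```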