title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance |
---|---|---|---|---|---|---|---|
Stochastic comparisons of series and parallel systems with heterogeneous components | In this paper, we discuss stochastic comparisons of parallel systems with
independent heterogeneous exponentiated Nadarajah-Haghighi (ENH) components in
terms of the usual stochastic order, dispersive order, convex transform order
and the likelihood ratio order. In the presence of the Archimedean copula, we
study stochastic comparisons of dependent series systems in terms of the usual
stochastic order.
| 0 | 0 | 1 | 1 | 0 | 0 |
Statistical inference in two-sample summary-data Mendelian randomization using robust adjusted profile score | Mendelian randomization (MR) is a method of exploiting genetic variation to
unbiasedly estimate a causal effect in the presence of unmeasured confounding. MR
is being widely used in epidemiology and other related areas of population
science. In this paper, we study statistical inference in the increasingly
popular two-sample summary-data MR design. We show that a linear model for the
observed associations approximately holds in a wide variety of settings when
all the genetic variants satisfy the exclusion restriction assumption, or in
genetic terms, when there is no pleiotropy. In this scenario, we derive a
maximum profile likelihood estimator with provable consistency and asymptotic
normality. However, through analyzing real datasets, we find strong evidence of
both systematic and idiosyncratic pleiotropy in MR, echoing the omnigenic model
of complex traits recently proposed in genetics. We model the
systematic pleiotropy by a random effects model, where no genetic variant
satisfies the exclusion restriction condition exactly. In this case we propose
a consistent and asymptotically normal estimator by adjusting the profile
score. We then tackle the idiosyncratic pleiotropy by robustifying the adjusted
profile score. We demonstrate the robustness and efficiency of the proposed
methods using several simulated and real datasets.
| 0 | 0 | 0 | 1 | 0 | 0 |
Edgeworth correction for the largest eigenvalue in a spiked PCA model | We study improved approximations to the distribution of the largest
eigenvalue $\hat{\ell}$ of the sample covariance matrix of $n$ zero-mean
Gaussian observations in dimension $p+1$. We assume that one population
principal component has variance $\ell > 1$ and the remaining `noise'
components have common variance $1$. In the high dimensional limit $p/n \to
\gamma > 0$, we begin the study of Edgeworth corrections to the limiting Gaussian
distribution of $\hat{\ell}$ in the supercritical case $\ell > 1 + \sqrt
\gamma$. The skewness correction involves a quadratic polynomial as in
classical settings, but the coefficients reflect the high dimensional
structure. The methods involve Edgeworth expansions for sums of independent
non-identically distributed variates obtained by conditioning on the sample
noise eigenvalues, and limiting bulk properties \textit{and} fluctuations of
these noise eigenvalues.
| 0 | 0 | 1 | 1 | 0 | 0 |
On one nearly everywhere continuous and nowhere differentiable function defined by an automaton with finite memory | This paper is devoted to the investigation of the following function $$ f:
x=\Delta^{3}_{\alpha_{1}\alpha_{2}...\alpha_{n}...}{\rightarrow}
\Delta^{3}_{\varphi(\alpha_{1})\varphi(\alpha_{2})...\varphi(\alpha_{n})...}=f(x)=y,
$$ where $\varphi(i)=\frac{-3i^{2}+7i}{2}$, $ i \in N^{0}_{2}=\{0,1,2\}$, and
$\Delta^{3}_{\alpha_{1}\alpha_{2}...\alpha_{n}...}$ is the ternary
representation of $x \in [0;1]$. That is, the values of this function are
obtained from the ternary representation of the argument by the following
change of digits: 0 to 0, 1 to 2, and 2 to 1. This function preserves the
ternary digit $0$.
The main mapping properties of the function, together with its differential,
integral, and fractal properties, are studied. Equivalent representations of
this function in terms of additionally defined auxiliary functions are proved.
This paper is a translation from Ukrainian (the Ukrainian version is
available at this https URL). In 2012, the
Ukrainian version of this paper was presented by the author at the
International Scientific Conference "Asymptotic Methods in the Theory of
Differential Equations" dedicated to the 80th anniversary of M. I. Shkil (the
conference paper is available at
this https URL). In 2013, the
investigations of the present article were generalized by the author in the
paper "On one class of functions with complicated local structure"
(this https URL) and in several conference papers
(available at: this https URL,
this https URL).
| 0 | 0 | 1 | 0 | 0 | 0 |
Hochschild cohomology for periodic algebras of polynomial growth | We describe the dimensions of low Hochschild cohomology spaces of exceptional
periodic representation-infinite algebras of polynomial growth. As an
application we obtain that an indecomposable non-standard periodic
representation-infinite algebra of polynomial growth is not derived equivalent
to a standard self-injective algebra.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fracton Models on General Three-Dimensional Manifolds | Fracton models, a collection of exotic gapped lattice Hamiltonians recently
discovered in three spatial dimensions, contain some 'topological' features:
they support fractional bulk excitations (dubbed fractons), and a ground state
degeneracy that is robust to local perturbations. However, because previous
fracton models have only been defined and analyzed on a cubic lattice with
periodic boundary conditions, it is unclear to what extent a notion of topology
is applicable. In this paper, we demonstrate that the X-cube model, a
prototypical type-I fracton model, can be defined on general three-dimensional
manifolds. Our construction revolves around the notion of a singular compact
total foliation of the spatial manifold, which constructs a lattice from
intersecting stacks of parallel surfaces called leaves. We find that the ground
state degeneracy depends on the topology of the leaves and the pattern of leaf
intersections. We further show that such a dependence can be understood from a
renormalization group transformation for the X-cube model, wherein the system
size can be changed by adding or removing 2D layers of topological states. Our
results lead to an improved definition of fracton phase and bring to the fore
the topological nature of fracton orders.
| 0 | 1 | 0 | 0 | 0 | 0 |
Topic Modeling on Health Journals with Regularized Variational Inference | Topic modeling enables exploration and compact representation of a corpus.
The CaringBridge (CB) dataset is a massive collection of journals written by
patients and caregivers during a health crisis. Topic modeling on the CB
dataset, however, is challenging due to the asynchronous nature of multiple
authors writing about their health journeys. To overcome this challenge we
introduce the Dynamic Author-Persona topic model (DAP), a probabilistic
graphical model designed for temporal corpora with multiple authors. The
novelty of the DAP model lies in its representation of authors by a persona ---
where personas capture the propensity to write about certain topics over time.
Further, we present a regularized variational inference algorithm, which we use
to encourage the DAP model's personas to be distinct. Our results show
significant improvements over competing topic models, particularly after
regularization, and highlight the DAP model's unique ability to capture common
journeys shared by different authors.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the Taylor coefficients of a subclass of meromorphic univalent functions | Let $\mathcal{V}_p(\lambda)$ be the collection of all functions $f$ defined
in the unit disc $\mathbb{D}$, having a simple pole at $z=p$ where $0<p<1$, analytic
in $\mathbb{D}\setminus\{p\}$ with $f(0)=0=f'(0)-1$, and satisfying the differential
inequality $|(z/f(z))^2 f'(z)-1|< \lambda$ for $z\in \mathbb{D}$, $0<\lambda\leq 1$.
Each $f\in\mathcal{V}_p(\lambda)$ has the following Taylor expansion:
$$
f(z)=z+\sum_{n=2}^{\infty}a_n(f) z^n, \quad |z|<p.
$$
In \cite{BF-3}, we conjectured that
$$
|a_n(f)|\leq \frac{1-(\lambda p^2)^n}{p^{n-1}(1-\lambda p^2)}\quad
\mbox{for}\quad n\geq3. $$ In the present article, we first obtain a
representation formula for functions in the class $\mathcal{V}_p(\lambda)$.
Using this representation, we prove the aforementioned conjecture for $n=3,4,5$
whenever $p$ belongs to certain subintervals of $(0,1)$. Also, we determine
non-sharp bounds for $|a_n(f)|,\,n\geq 3$, and for $|a_{n+1}(f)-a_n(f)/p|,\,n\geq
2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs | The loss functions of deep neural networks are complex and their geometric
properties are not well understood. We show that the optima of these complex
loss functions are in fact connected by simple curves over which training and
test accuracy are nearly constant. We introduce a training procedure to
discover these high-accuracy pathways between modes. Inspired by this new
geometric insight, we also propose a new ensembling method entitled Fast
Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in
the time required to train a single model. We achieve improved performance
compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10,
CIFAR-100, and ImageNet.
| 0 | 0 | 0 | 1 | 0 | 0 |
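The mode-connecting curves in the abstract above can be parameterized very simply. The sketch below (not the authors' training code; `w1`, `w2`, and `theta` are toy stand-ins for flattened network weights, and the bend point would in practice be trained to keep loss low along the curve) shows a quadratic Bezier curve of the kind used to connect two optima:

```python
import numpy as np

# Hedged sketch of the curve parameterization idea only: connect two sets of
# weights w1, w2 by a quadratic Bezier curve with a trainable midpoint theta.
# Along phi(t) the paper reports near-constant train/test accuracy.
def bezier(t, w1, theta, w2):
    """Quadratic Bezier point phi(t) on the curve joining modes w1 and w2."""
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

w1 = np.zeros(4)                  # toy "mode" 1
w2 = np.ones(4)                   # toy "mode" 2
theta = 0.5 * (w1 + w2) + 0.1     # bend point (illustrative value, not trained)

# The endpoints of the curve recover the original modes exactly.
print(np.allclose(bezier(0.0, w1, theta, w2), w1))  # True
print(np.allclose(bezier(1.0, w1, theta, w2), w2))  # True
```

Ensembling along such a curve amounts to averaging predictions of models evaluated at several values of `t`.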
On the scaling patterns of infectious disease incidence in cities | Urban areas with larger and more connected populations offer an auspicious
environment for contagion processes such as the spread of pathogens. Empirical
evidence reveals a systematic increase in the rates of certain sexually
transmitted diseases (STDs) with larger urban population size. However, the
main drivers of these systemic infection patterns are still not well
understood, and rampant urbanization rates worldwide make it critical to
advance our understanding on this front. Using confirmed-cases data for three
STDs in US metropolitan areas, we investigate the scaling patterns of
infectious disease incidence in urban areas. The most salient features of these
patterns are that, on average, the incidence of infectious diseases that
transmit with less ease -- either because of a lower inherent transmissibility
or due to a less suitable environment for transmission -- scales more steeply
with population size, is less predictable across time, and is more variable across
cities of similar size. These features are explained, first, using a simple
mathematical model of contagion, and then through the lens of a new theory of
urban scaling. These theoretical frameworks help us reveal the links between
the factors that determine the transmissibility of infectious diseases and the
properties of their scaling patterns across cities.
| 0 | 0 | 0 | 0 | 1 | 0 |
Homological subsets of Spec | We investigate homological subsets of the prime spectrum of a ring, defined
with the help of the Ext-family $\{\operatorname{Ext}^i_R(-,R)\}$. We extend Grothendieck's
calculation of $\dim(\operatorname{Ext}^g_R(M,R))$. We compute the support of $\operatorname{Ext}^i_R(M,R)$ in
many cases. Also, we answer a low-dimensional case of a problem posed by
Vasconcelos on the finiteness of the associated prime ideals of
$\{\operatorname{Ext}^i_R(M,R)\}$. An application is given.
| 0 | 0 | 1 | 0 | 0 | 0 |
Mean field repulsive Kuramoto models: Phase locking and spatial signs | The phenomenon of self-synchronization in populations of oscillatory units
appears naturally in neurosciences. However, in some situations, the formation
of a coherent state is damaging. In this article we study a repulsive
mean-field Kuramoto model that describes the time evolution of n points on the
unit circle, which are transformed into incoherent phase-locked states. It has
been recently shown that such systems can be reduced to a three-dimensional
system of ordinary differential equations, whose mathematical structure is
strongly related to hyperbolic geometry. The orbits of the Kuramoto dynamical
system are then described by a flow of Möbius transformations. We show that this
underlying dynamic performs statistical inference by dynamically computing
M-estimates of scatter matrices. We also describe the limiting phase-locked
states for random initial conditions using Tyler's transformation matrix.
Moreover, we show that the repulsive Kuramoto model dynamically performs not only
robust covariance matrix estimation, but also data processing: the initial
configuration of the n points is transformed by the dynamic into a limiting
phase-locked state that surprisingly equals the spatial signs from
nonparametric statistics. This makes the sign empirical covariance matrix
equal to $\frac{1}{2}\,\mathrm{id}_2$, the variance-covariance matrix of a random
vector that is uniformly distributed on the unit circle.
| 0 | 0 | 0 | 0 | 1 | 0 |
Signal tracking beyond the time resolution of an atomic sensor by Kalman filtering | We study causal waveform estimation (tracking) of time-varying signals in a
paradigmatic atomic sensor, an alkali vapor monitored by Faraday rotation
probing. We use Kalman filtering, which optimally tracks known linear Gaussian
stochastic processes, to estimate stochastic input signals that we generate by
optical pumping. Comparing the known input to the estimates, we confirm the
accuracy of the atomic statistical model and the reliability of the Kalman
filter, allowing recovery of waveform details far briefer than the sensor's
intrinsic time resolution. With proper filter choice, we obtain similar
benefits when tracking partially-known and non-Gaussian signal processes, as
are found in most practical sensing applications. The method evades the
trade-off between sensitivity and time resolution in coherent sensing.
| 0 | 1 | 0 | 0 | 0 | 0 |
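As a rough illustration of the causal tracking idea in the abstract above, here is a scalar Kalman filter following a random-walk signal through noisy readouts. This is a generic textbook sketch, not the paper's atomic-sensor model: the process noise `q`, measurement noise `r`, and the random-walk signal are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a scalar Kalman filter tracking a random-walk signal x_k
# from noisy measurements y_k = x_k + v_k. All parameters are assumed for
# illustration; the paper's sensor has a richer (vector) statistical model.
rng = np.random.default_rng(0)
q, r = 0.1, 1.0                  # process / measurement noise variances (assumed)
n = 200

# Simulate a known input signal and its noisy measurements.
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), n))
y = x_true + rng.normal(0.0, np.sqrt(r), n)

x_est, p = 0.0, 1.0              # state estimate and its variance
estimates = []
for yk in y:
    p = p + q                            # predict: variance grows by process noise
    k = p / (p + r)                      # Kalman gain
    x_est = x_est + k * (yk - x_est)     # update with the innovation
    p = (1 - k) * p
    estimates.append(x_est)
estimates = np.asarray(estimates)

# Comparing the known input to the estimates, as in the abstract: the causal
# filtered estimate should track the signal better than the raw readout.
mse_raw = np.mean((y - x_true) ** 2)
mse_kf = np.mean((estimates - x_true) ** 2)
```

The same predict/update loop generalizes to vector states, which is how partially known signal processes are handled with an appropriate filter choice.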
Case Study: Explaining Diabetic Retinopathy Detection Deep CNNs via Integrated Gradients | In this report, we applied integrated gradients to explain a neural
network for diabetic retinopathy detection. Integrated gradients is an
attribution method which measures the contribution of the input to the
quantity of interest. We explored some new ways of applying this method, such
as explaining intermediate layers, filtering out unimportant units by their
attribution values, and generating contrary samples. Moreover, the
visualization results extend the use of the diabetic retinopathy detection
model from merely predicting to assisting in finding potential lesions.
| 1 | 0 | 0 | 0 | 0 | 0 |
6.2-GHz modulated terahertz light detection using fast terahertz quantum well photodetectors | The fast detection of terahertz radiation is of great importance for various
applications such as fast imaging, high speed communications, and spectroscopy.
Most commercial products capable of sensitively responding to terahertz
radiation are thermal detectors, i.e., pyroelectric sensors and bolometers.
This class of terahertz detectors is normally characterized by low modulation
frequency (dozens or hundreds of Hz). Here we demonstrate the first fast
semiconductor-based terahertz quantum well photodetectors by carefully
designing the device structure and microwave transmission line for high
frequency signal extraction. Modulation response bandwidth of gigahertz level
is obtained. As an example, the 6.2-GHz modulated terahertz light emitted from
a Fabry-Pérot terahertz quantum cascade laser is successfully detected
using the fast terahertz quantum well photodetector. In addition to the fast
terahertz detection, the technique presented in this work can also facilitate
the frequency stability or phase noise characterizations for terahertz quantum
cascade lasers.
| 0 | 1 | 0 | 0 | 0 | 0 |
Inference via low-dimensional couplings | We investigate the low-dimensional structure of deterministic transformations
between random variables, i.e., transport maps between probability measures. In
the context of statistics and machine learning, these transformations can be
used to couple a tractable "reference" measure (e.g., a standard Gaussian) with
a target measure of interest. Direct simulation from the desired measure can
then be achieved by pushing forward reference samples through the map. Yet
characterizing such a map---e.g., representing and evaluating it---grows
challenging in high dimensions. The central contribution of this paper is to
establish a link between the Markov properties of the target measure and the
existence of low-dimensional couplings, induced by transport maps that are
sparse and/or decomposable. Our analysis not only facilitates the construction
of transformations in high-dimensional settings, but also suggests new
inference methodologies for continuous non-Gaussian graphical models. For
instance, in the context of nonlinear state-space models, we describe new
variational algorithms for filtering, smoothing, and sequential parameter
inference. These algorithms can be understood as the natural
generalization---to the non-Gaussian case---of the square-root
Rauch-Tung-Striebel Gaussian smoother.
| 0 | 0 | 0 | 1 | 0 | 0 |
A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics | We show a unified second-order scheme for constructing simple, robust and
accurate algorithms for typical thermostats for configurational sampling for
the canonical ensemble. When Langevin dynamics is used, the scheme leads to the
BAOAB algorithm that has been recently investigated. We show that the scheme is
also useful for other types of thermostat, such as the Andersen thermostat and
Nosé-Hoover chain. Two 1-dimensional models and three typical realistic
molecular systems that range from the gas phase, clusters, to the condensed
phase are used in numerical examples for demonstration. Accuracy may be
increased by an order of magnitude for estimating coordinate-dependent
properties in molecular dynamics (when the same time interval is used),
irrespective of which type of thermostat is applied. The scheme is especially
useful for path integral molecular dynamics, because it consistently improves
the efficiency for evaluating all thermodynamic properties for any type of
thermostat.
| 0 | 1 | 0 | 0 | 0 | 0 |
Wide Bandwidth, Frequency Modulated Free Electron Laser | It is shown via theory and simulation that the resonant frequency of a Free
Electron Laser may be modulated to obtain an FEL interaction with a frequency
bandwidth which is at least an order of magnitude greater than normal FEL
operation. The system is described in the linear regime by a summation over
exponential gain modes, allowing the amplification of multiple light
frequencies simultaneously. Simulation in 3D demonstrates the process for
parameters of the UK's CLARA FEL test facility currently under construction.
This new mode of FEL operation has close analogies to Frequency Modulation in a
conventional cavity laser. This new, wide bandwidth mode of FEL operation
scales well for X-ray generation and offers users a new form of high-power FEL
output.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Transformation-Proximal Bundle Algorithm for Solving Large-Scale Multistage Adaptive Robust Optimization Problems | This paper presents a novel transformation-proximal bundle algorithm to solve
multistage adaptive robust mixed-integer linear programs (MARMILPs). By
explicitly partitioning recourse decisions into state decisions and local
decisions, the proposed algorithm applies the affine decision rule only to state
decisions and allows local decisions to be fully adaptive. In this way, the
MARMILP is proved to be transformed into an equivalent two-stage adaptive
robust optimization (ARO) problem. The proposed multi-to-two transformation
scheme remains valid for other types of non-anticipative decision rules besides
the affine one, and it is general enough to be employed with existing two-stage
ARO algorithms for solving MARMILPs. The proximal bundle method is developed
for the resulting two-stage ARO problem. We perform a theoretical analysis to
show finite convergence of the proposed algorithm with any positive tolerance.
To quantitatively assess solution quality, we develop a scenario-tree-based
lower bounding technique. Computational studies on multiperiod inventory
management and process network planning are presented to demonstrate its
effectiveness and computational scalability. In the inventory management
application, the affine decision rule method suffers from a severe
suboptimality with an average gap of 34.88%, while the proposed algorithm
generates near-optimal solutions with an average gap of merely 1.68%.
| 1 | 0 | 0 | 0 | 0 | 0 |
Dual SVM Training on a Budget | We present a dual subspace ascent algorithm for support vector machine
training that respects a budget constraint limiting the number of support
vectors. Budget methods are effective for reducing the training time of kernel
SVM while retaining high accuracy. To date, budget training is available only
for primal (SGD-based) solvers. Dual subspace ascent methods like sequential
minimal optimization are attractive for their good adaptation to the problem
structure, their fast convergence rate, and their practical speed. By
incorporating a budget constraint into a dual algorithm, our method enjoys the
best of both worlds. We demonstrate considerable speed-ups over primal budget
training methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
AWAKE readiness for the study of the seeded self-modulation of a 400\,GeV proton bunch | AWAKE is a proton-driven plasma wakefield acceleration experiment. We show
that the experimental setup briefly described here is ready for the systematic
study of the seeded self-modulation of the 400\,GeV proton bunch in the
10\,m-long rubidium plasma with density adjustable from 1 to
10$\times10^{14}$\,cm$^{-3}$. We show that the short laser pulse used for
ionization of the rubidium vapor propagates all the way along the column,
suggesting full ionization of the vapor. We show that ionization occurs along
the proton bunch, at the laser time, and that the plasma that follows affects
the proton bunch.
Exploring the Psychological Basis for Transitions in the Archaeological Record | In lieu of an abstract, here is the first paragraph: No other species remotely
approaches the human capacity for the cultural evolution of novelty that is
accumulative, adaptive, and open-ended (i.e., with no a priori limit on the
size or scope of possibilities). By culture we mean extrasomatic
adaptations--including behavior and technology--that are socially rather than
sexually transmitted. This chapter synthesizes research from anthropology,
psychology, archaeology, and agent-based modeling into a speculative yet
coherent account of two fundamental cognitive transitions underlying human
cultural evolution that is consistent with contemporary psychology. While the
chapter overlaps with a more technical paper on this topic (Gabora & Smith
2018), it incorporates new research and elaborates a genetic component to our
overall argument. The ideas in this chapter grew out of a non-Darwinian
framework for cultural evolution, referred to as the Self-other Reorganization
(SOR) theory of cultural evolution (Gabora, 2013, in press; Smith, 2013), which
was inspired by research on the origin and earliest stage in the evolution of
life (Cornish-Bowden & Cárdenas 2017; Goldenfeld, Biancalani, & Jafarpour,
2017, Vetsigian, Woese, & Goldenfeld 2006; Woese, 2002). SOR bridges
psychological research on fundamental aspects of our human nature such as
creativity and our proclivity to reflect on ideas from different perspectives,
with the literature on evolutionary approaches to cultural evolution that
aspire to synthesize the behavioral sciences much as has been done for the
biological sciences. The current chapter is complementary to this effort, but
less abstract; it attempts to ground the theory of cultural evolution in terms
of cognitive transitions as suggested by archaeological evidence.
| 0 | 0 | 0 | 0 | 1 | 0 |
Well quasi-orders and the functional interpretation | The purpose of this article is to study the role of Gödel's functional
interpretation in the extraction of programs from proofs in well quasi-order
theory. The main focus is on the interpretation of Nash-Williams' famous
minimal bad sequence construction, and the exploration of a number of much
broader problems which are related to this, particularly the question of the
constructive meaning of Zorn's lemma and the notion of recursion over the
non-wellfounded lexicographic ordering on infinite sequences.
| 1 | 0 | 1 | 0 | 0 | 0 |
A local limit theorem for Quicksort key comparisons via multi-round smoothing | As proved by Régnier and Rösler, the number of key comparisons required
by the randomized sorting algorithm QuickSort to sort a list of $n$ distinct
items (keys) satisfies a global distributional limit theorem. Fill and Janson
proved results about the limiting distribution and the rate of convergence, and
used these to prove a result part way towards a corresponding local limit
theorem. In this paper we use a multi-round smoothing technique to prove the
full local limit theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Computation of Kantorovich-Wasserstein Distances between 2D-Histograms by Uncapacitated Minimum Cost Flows | In this work, we present a method to compute the Kantorovich distance, that
is, the Wasserstein distance of order one, between a pair of two-dimensional
histograms. Recent works in Computer Vision and Machine Learning have shown the
benefits of measuring Wasserstein distances of order one between histograms
with $N$ bins, by solving a classical transportation problem on (very large)
complete bipartite graphs with $N$ nodes and $N^2$ edges. The main contribution
of our work is to approximate the original transportation problem by an
uncapacitated min cost flow problem on a reduced flow network of size $O(N)$.
More precisely, when the distance among the bin centers is measured with the
1-norm or the $\infty$-norm, our approach provides an optimal solution. When
the distance amongst bins is measured with the 2-norm: (i) we derive a
quantitative estimate on the error between optimal and approximate solution;
(ii) given the error, we construct a reduced flow network of size $O(N)$. We
numerically show the benefits of our approach by computing Wasserstein
distances of order one on a set of grey scale images used as benchmarks in the
literature. We show how our approach scales with the size of the images with
1-norm, 2-norm and $\infty$-norm ground distances.
| 0 | 0 | 0 | 1 | 0 | 0 |
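For intuition about the order-one Wasserstein (Kantorovich) distance discussed in the abstract above, here is the much simpler one-dimensional special case, where it reduces to the L1 distance between cumulative distributions. This is an illustrative reduction, not the paper's min-cost-flow construction for 2D histograms, and the function name is ours.

```python
import numpy as np

# 1D special case only: for histograms on a common uniform grid, the
# order-one Wasserstein distance equals the L1 distance between CDFs.
def wasserstein1_1d(h1, h2, bin_width=1.0):
    h1 = np.asarray(h1, float) / np.sum(h1)   # normalize to probability mass
    h2 = np.asarray(h2, float) / np.sum(h2)
    return bin_width * np.sum(np.abs(np.cumsum(h1 - h2)))

# Moving all mass one bin to the right costs exactly one unit of work.
print(wasserstein1_1d([1, 0, 0], [0, 1, 0]))  # 1.0
```

In two dimensions no such closed form exists, which is why the transportation problem (or the reduced flow network of size $O(N)$) is needed.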
Efficient anchor loss suppression in coupled near-field optomechanical resonators | Elastic dissipation through radiation towards the substrate is a major loss
channel in micro- and nanomechanical resonators. Engineering the coupling of
these resonators with optical cavities further complicates and constrains the
design of low-loss optomechanical devices. In this work we rely on the coherent
cancellation of mechanical radiation to demonstrate material and surface
absorption limited silicon near-field optomechanical resonators oscillating at
tens of MHz. The effectiveness of our dissipation suppression scheme is
investigated at room and cryogenic temperatures. While at room temperature we
can reach a maximum quality factor of 7.61k ($fQ$-product of the order of
$10^{11}$~Hz), at 22~K the quality factor increases to 37k, resulting in a
$fQ$-product of $2\times10^{12}$~Hz.
| 0 | 1 | 0 | 0 | 0 | 0 |
Searching for previously unknown classes of objects in the AKARI-NEP Deep data with fuzzy logic SVM algorithm | In this proceedings application of a fuzzy Support Vector Machine (FSVM)
learning algorithm, to classify mid-infrared (MIR) sources from the AKARI NEP
Deep field into three classes: stars, galaxies and AGNs, is presented. FSVM is
an improved version of the classical SVM algorithm, incorporating measurement
errors into the classification process; this is the first successful
application of this algorithm in astronomy. We created reliable catalogues
of galaxies, stars and AGNs consisting of objects with MIR measurements, some
of them with no optical counterparts. Some examples of identified objects are
shown, among them O-rich and C-rich AGB stars.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stability of laminar Couette flow of compressible fluids | Cylindrical Couette flow is a subject where the main focus has long been on
the onset of turbulence or, more precisely, the limit of stability of the
simplest laminar flow. The theoretical framework of this paper is a recently
developed action principle for hydrodynamics. It incorporates Euler-Lagrange
equations that are in essential agreement with the Navier-Stokes equation, but
applicable to the general case of a compressible fluid. The variational
principle incorporates the equation of continuity, a canonical structure and a
conserved Hamiltonian. The density is compressible, characterized by a general
(non-polar) equation of state, and homogeneous. The onset of instability is
often accompanied by bubble formation. It is proposed that the limit of
stability of laminar Couette flow may sometimes be related to cavitation. In
contrast to traditional stability theory we are not looking for mathematical
instabilities of a system of differential equations, but instead for the
possibility that the system is driven to a metastable or unstable
configuration. The application of this idea to cylindrical Couette flow
reported here turns out to account rather well for the observations. The
failure of a famous criterion due to Rayleigh is well known. It is here shown
that it may be due to the use of methods that are appropriate only in the case
that the equations of motion are derived from an action principle.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Hierarchical Max-infinitely Divisible Process for Extreme Areal Precipitation Over Watersheds | Understanding the spatial extent of extreme precipitation is necessary for
determining flood risk and adequately designing infrastructure (e.g.,
stormwater pipes) to withstand such hazards. While environmental phenomena
typically exhibit weakening spatial dependence at increasingly extreme levels,
limiting max-stable process models for block maxima have a rigid dependence
structure that does not capture this type of behavior. We propose a flexible
Bayesian model from a broader family of max-infinitely divisible processes that
allows for weakening spatial dependence at increasingly extreme levels, and due
to a hierarchical representation of the likelihood in terms of random effects,
our inference approach scales to large datasets. The proposed model is
constructed using flexible random basis functions that are estimated from the
data, allowing for straightforward inspection of the predominant spatial
patterns of extremes. In addition, the described process possesses
max-stability as a special case, making inference on the tail dependence class
possible. We apply our model to extreme precipitation in eastern North America,
and show that the proposed model adequately captures the extremal behavior of
the data.
| 0 | 0 | 0 | 1 | 0 | 0 |
The dependence of cluster galaxy properties on the central entropy of their host cluster | We present a study of the connection between brightest cluster galaxies
(BCGs) and their host galaxy clusters. Using galaxy clusters at $0.1<z<0.3$
from the Hectospec Cluster Survey (HeCS) with X-ray information from the
Archive of {\it Chandra} Cluster Entropy Profile Tables (ACCEPT), we confirm
that BCGs in low central entropy clusters are well aligned with the X-ray
center. Additionally, the magnitude difference between the BCG and the second-brightest
galaxy also correlates with the central entropy of the intracluster
medium. From the red-sequence (RS) galaxies, we cannot find any significant
dependence of RS color scatter and stellar population on the central entropy of
the intracluster medium of their host cluster. However, BCGs in low entropy
clusters are systematically less massive than those in high entropy clusters,
although this is dependent on the method used to derive the stellar mass of
BCGs. In contrast, the stellar velocity dispersion of BCGs shows no dependence
on BCG activity and cluster central entropy. This implies that the potential of
the BCG is established earlier and the activity leading to optical emission
lines is dictated by the properties of the intracluster medium in the cluster
core.
| 0 | 1 | 0 | 0 | 0 | 0 |
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot | We propose a novel architecture for $k$-shot classification on the Omniglot
dataset. Building on prototypical networks, we extend their architecture to
what we call Gaussian prototypical networks. Prototypical networks learn a map
between images and embedding vectors, and use their clustering for
classification. In our model, a part of the encoder output is interpreted as a
confidence region estimate about the embedding point, and expressed as a
Gaussian covariance matrix. Our network then constructs a direction and class
dependent distance metric on the embedding space, using uncertainties of
individual data points as weights. We show that Gaussian prototypical networks
are a preferred architecture over vanilla prototypical networks with an
equivalent number of parameters. We report state-of-the-art performance in
1-shot and 5-shot classification in both the 5-way and 20-way regimes (for 5-shot
5-way, we are comparable to previous state-of-the-art) on the Omniglot dataset.
We explore artificially down-sampling a fraction of images in the training set,
which improves our performance even further. We therefore hypothesize that
Gaussian prototypical networks might perform better in less homogeneous,
noisier datasets, which are commonplace in real world applications.
| 1 | 0 | 0 | 1 | 0 | 0 |
Uncertainty principle and geometry of the infinite Grassmann manifold | We study the pairs of projections $$ P_If=\chi_If ,\ \ Q_Jf= \left(\chi_J
\hat{f}\right)\check{\ } , \ \ f\in L^2(\mathbb{R}^n), $$ where $I, J\subset
\mathbb{R}^n$ are sets of finite Lebesgue measure, $\chi_I, \chi_J$ denote the
corresponding characteristic functions and $\hat{\ } , \check{\ }$ denote the
Fourier-Plancherel transformation $L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$ and
its inverse. These pairs of projections have been widely studied by several
authors in connection with the mathematical formulation of Heisenberg's
uncertainty principle. Our study is done from a differential geometric point of
view. We apply known results on the Finsler geometry of the Grassmann manifold
${\cal P}({\cal H})$ of a Hilbert space ${\cal H}$ to establish that there
exists a unique minimal geodesic of ${\cal P}({\cal H})$, which is a curve of
the form $$ \delta(t)=e^{itX_{I,J}}P_Ie^{-itX_{I,J}} $$ which joins $P_I$ and
$Q_J$ and has length $\pi/2$. As a consequence we obtain that if $H$ is the
logarithm of the Fourier-Plancherel map, then $$ \|[H,P_I]\|\ge \pi/2. $$ The
spectrum of $X_{I,J}$ is denumerable and symmetric with respect to the origin,
it has a smallest positive eigenvalue $\gamma(X_{I,J})$ which satisfies $$
\cos(\gamma(X_{I,J}))=\|P_IQ_J\|. $$
| 0 | 0 | 1 | 0 | 0 | 0 |
Existence and symmetry of solutions for critical fractional Schrödinger equations with bounded potentials | This paper is concerned with the following fractional Schrödinger
equations involving critical exponents: \begin{eqnarray*}
(-\Delta)^{\alpha}u+V(x)u=k(x)f(u)+\lambda|u|^{2_{\alpha}^{*}-2}u\quad\quad
\mbox{in}\ \mathbb{R}^{N}, \end{eqnarray*} where $(-\Delta)^{\alpha}$ is the
fractional Laplacian operator with $\alpha\in(0,1)$, $N\geq2$, $\lambda$ is a
positive real parameter and $2_{\alpha}^{*}=2N/(N-2\alpha)$ is the critical
Sobolev exponent, $V(x)$ and $k(x)$ are positive and bounded functions
satisfying some extra hypotheses. Based on the principle of concentration
compactness in the fractional Sobolev space and the minimax arguments, we
obtain the existence of a nontrivial radially symmetric weak solution for the
above-mentioned equations without assuming the Ambrosetti-Rabinowitz condition
on the subcritical nonlinearity.
| 0 | 0 | 1 | 0 | 0 | 0 |
Early Detection of Promoted Campaigns on Social Media | Social media expose millions of users every day to information campaigns ---
some emerging organically from grassroots activity, others sustained by
advertising or other coordinated efforts. These campaigns contribute to the
shaping of collective opinions. While most information campaigns are benign,
some may be deployed for nefarious purposes. It is therefore important to be
able to detect whether a meme is being artificially promoted at the very moment
it becomes wildly popular. This problem has important social implications and
poses numerous technical challenges. As a first step, here we focus on
discriminating between trending memes that are either organic or promoted by
means of advertisement. The classification is not trivial: ads cause bursts of
attention that can be easily mistaken for those of organic trends. We designed
a machine learning framework to classify memes that have been labeled as
trending on Twitter. After trending, we can rely on a large volume of activity
data. Early detection, occurring immediately at trending time, is a more
challenging problem due to the minimal volume of activity data that is
available prior to trending. Our supervised learning framework exploits hundreds
of time-varying features to capture changing network and diffusion patterns,
content and sentiment information, timing signals, and user meta-data. We
explore different methods for encoding feature time series. Using millions of
tweets containing trending hashtags, we achieve 75% AUC score for early
detection, increasing to above 95% after trending. We evaluate the robustness
of the algorithms by introducing random temporal shifts on the trend time
series. Feature selection analysis reveals that content cues provide
consistently useful signals; user features are more informative for early
detection, while network and timing features are more helpful once more data is
available.
| 1 | 0 | 0 | 0 | 0 | 0 |
Discovery of statistical equivalence classes using computer algebra | Discrete statistical models supported on labelled event trees can be
specified using so-called interpolating polynomials which are generalizations
of generating functions. These admit a nested representation. A new algorithm
exploits the primary decomposition of monomial ideals associated with an
interpolating polynomial to quickly compute all nested representations of that
polynomial. It thereby determines an important subclass of all trees
representing the same statistical model. To illustrate this method we analyze
the full polynomial equivalence class of a staged tree representing the best
fitting model inferred from a real-world dataset.
| 0 | 0 | 1 | 1 | 0 | 0 |
Can Boltzmann Machines Discover Cluster Updates? | Boltzmann machines are physics-informed generative models with wide
applications in machine learning. They can learn the probability distribution
from an input dataset and generate new samples accordingly. Applying them back
to physics, the Boltzmann machines are ideal recommender systems to accelerate
Monte Carlo simulation of physical systems due to their flexibility and
effectiveness. More intriguingly, we show that the generative sampling of the
Boltzmann Machines can even discover unknown cluster Monte Carlo algorithms.
The creative power comes from the latent representation of the Boltzmann
machines, which learn to mediate complex interactions and identify clusters of
the physical system. We demonstrate these findings with concrete examples of
the classical Ising model with and without four spin plaquette interactions.
Our results endorse a fresh research paradigm where intelligent machines are
designed to create or inspire human discovery of innovative algorithms.
| 0 | 1 | 0 | 1 | 0 | 0 |
Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction | Graph Signal Processing (GSP) is a promising framework to analyze
multi-dimensional neuroimaging datasets, while taking into account both the
spatial and functional dependencies between brain signals. In the present work,
we apply dimensionality reduction techniques based on graph representations of
the brain to decode brain activity from real and simulated fMRI datasets. We
introduce seven graphs obtained from a) geometric structure and/or b)
functional connectivity between brain areas at rest, and compare them when
performing dimension reduction for classification. We show that mixed graphs
using both a) and b) offer the best performance. We also show that graph
sampling methods perform better than classical dimension reduction methods, including
Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
| 1 | 0 | 0 | 1 | 0 | 0 |
Weak subsolutions to complex Monge-Ampère equations | We compare various notions of weak subsolutions to degenerate complex
Monge-Ampère equations, showing that they all coincide. This allows us to
give an alternative proof of mixed Monge-Ampère inequalities due to Kolodziej
and Dinew.
| 0 | 0 | 1 | 0 | 0 | 0 |
Bayesian inversion of convolved hidden Markov models with applications in reservoir prediction | Efficient assessment of convolved hidden Markov models is discussed. The
bottom-layer is defined as an unobservable categorical first-order Markov
chain, while the middle-layer is assumed to be a Gaussian spatial variable
conditional on the bottom-layer. Hence, this layer appears as a Gaussian mixture
spatial variable unconditionally. We observe the top-layer as a convolution of
the middle-layer with Gaussian errors. Focus is on assessment of the
categorical and Gaussian mixture variables given the observations, and we
operate in a Bayesian inversion framework. The model is defined to make
inversion of subsurface seismic AVO data into lithology/fluid classes and to
assess the associated elastic material properties. Due to the spatial coupling
in the likelihood functions, evaluation of the posterior normalizing constant
is computationally demanding, and brute-force, single-site updating Markov
chain Monte Carlo algorithms converge far too slowly to be useful. We construct
two classes of approximate posterior models which we assess analytically and
efficiently using the recursive Forward-Backward algorithm. These approximate
posterior densities are used as proposal densities in an independent proposal
Markov chain Monte Carlo algorithm, to assess the correct posterior model. A
set of realistic synthetic examples is presented. The proposed approximations provide efficient proposal densities, resulting in acceptance probabilities
in the range 0.10-0.50 in the Markov chain Monte Carlo algorithm. A case study
of lithology/fluid seismic inversion is presented. The lithology/fluid classes
and the elastic material properties can be reliably predicted.
| 0 | 1 | 0 | 1 | 0 | 0 |
On the post-Keplerian corrections to the orbital periods of a two-body system and their application to the Galactic Center | Detailed numerical analyses of the orbital motion of a test particle around a
spinning primary are performed. They aim to investigate the possibility of
using the post-Keplerian (pK) corrections to the orbiter's periods (draconitic,
anomalistic and sidereal) as a further opportunity to perform new tests of
post-Newtonian (pN) gravity. As a specific scenario, the S-stars orbiting the
Massive Black Hole (MBH) supposedly lurking in Sgr A$^\ast$ at the center of
the Galaxy are adopted. We first study the effects of the pK Schwarzschild,
Lense-Thirring and quadrupole moment accelerations experienced by a target star
for various possible initial orbital configurations. It turns out that the
results of the numerical simulations are consistent with the analytical ones in
the small-eccentricity approximation, in which almost all of the latter were derived. For highly elliptical orbits, the sizes of all three pK corrections considered turn out to increase remarkably. The periods of the observed S2 and
S0-102 stars as functions of the MBH's spin axis orientation are considered as
well. The pK accelerations considered lead to corrections of the orbital
periods of the order of 1-100 d (Schwarzschild), 0.1-10 h (Lense-Thirring) and 1-10^3 s (quadrupole) for a target star with a = 300-800 AU and e ~ 0.8, which could possibly be measured by future facilities.
| 0 | 1 | 0 | 0 | 0 | 0 |
Disentangling top-down vs. bottom-up and low-level vs. high-level influences on eye movements over time | Bottom-up and top-down, as well as low-level and high-level factors influence
where we fixate when viewing natural scenes. However, the importance of each of
these factors and how they interact remains a matter of debate. Here, we
disentangle these factors by analysing their influence over time. For this
purpose we develop a saliency model which is based on the internal
representation of a recent early spatial vision model to measure the low-level
bottom-up factor. To measure the influence of high-level bottom-up features, we
use a recent DNN-based saliency model. To account for top-down influences, we
evaluate the models on two large datasets with different tasks: first, a
memorisation task and, second, a search task. Our results lend support to a
separation of visual scene exploration into three phases: The first saccade, an
initial guided exploration characterised by a gradual broadening of the
fixation density, and a steady state which is reached after roughly 10
fixations. Saccade target selection during the initial exploration and in the
steady state is related to similar areas of interest, which are better
predicted when including high-level features. In the search dataset, fixation
locations are determined predominantly by top-down processes. In contrast, the
first fixation follows a different fixation density and contains a strong
central fixation bias. Nonetheless, first fixations are guided strongly by
image properties and as early as 200 ms after image onset, fixations are better
predicted by high-level information. We conclude that any low-level bottom-up
factors are mainly limited to the generation of the first saccade. All saccades
are better explained when high-level features are considered, and later this
high-level bottom-up control can be overruled by top-down influences.
| 0 | 0 | 0 | 0 | 1 | 0 |
Sharp estimates for oscillatory integral operators via polynomial partitioning | The sharp range of $L^p$-estimates for the class of Hörmander-type
oscillatory integral operators is established in all dimensions under a
positive-definite assumption on the phase. This is achieved by generalising a
recent approach of the first author for studying the Fourier extension
operator, which utilises polynomial partitioning arguments.
| 0 | 0 | 1 | 0 | 0 | 0 |
Inhomogeneous Heisenberg Spin Chain and Quantum Vortex Filament as Non-Holonomically Deformed NLS Systems | Through the Hasimoto map, various dynamical systems can be mapped to
different integrodifferential generalizations of Nonlinear Schrodinger (NLS)
family of equations some of which are known to be integrable. Two such
continuum limits, corresponding to the inhomogeneous XXX Heisenberg spin chain
[Balakrishnan, J. Phys. C 15, L1305 (1982)] and that of a thin vortex filament
moving in a superfluid with drag [Shivamoggi, Eur. Phys. J. B 86, 275 (2013); Van Gorder, Phys. Rev. E 91, 053201 (2015)], are shown to be particular
non-holonomic deformations (NHDs) of the standard NLS system involving
generalized parameterizations. Crucially, such NHDs of the NLS system are
restricted to specific spectral orders that exactly complement NHDs of the
original physical systems. The specific non-holonomic constraints associated
with these integrodifferential generalizations additionally possess distinct semi-classical signatures.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Information-Theoretic Analysis of Deduplication | Deduplication finds and removes long-range data duplicates. It is commonly
used in cloud and enterprise server settings and has been successfully applied
to primary, backup, and archival storage. Despite its practical importance as a
source-coding technique, its analysis from the point of view of information
theory is missing. This paper provides such an information-theoretic analysis
of data deduplication. It introduces a new source model adapted to the
deduplication setting. It formalizes the two standard fixed-length and
variable-length deduplication schemes, and it introduces a novel multi-chunk
deduplication scheme. It then provides an analysis of these three deduplication
variants, emphasizing the importance of boundary synchronization between source
blocks and deduplication chunks. In particular, under fairly mild assumptions,
the proposed multi-chunk deduplication scheme is shown to be order optimal.
| 1 | 0 | 1 | 0 | 0 | 0 |
A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception | While deep neural networks take loose inspiration from neuroscience, it is an
open question how seriously to take the analogies between artificial deep
networks and biological neuronal systems. Interestingly, recent work has shown
that deep convolutional neural networks (CNNs) trained on large-scale image
recognition tasks can serve as strikingly good models for predicting the
responses of neurons in visual cortex to visual stimuli, suggesting that
analogies between artificial and biological neural networks may be more than
superficial. However, while CNNs capture key properties of the average
responses of cortical neurons, they fail to explain other properties of these
neurons. For one, CNNs typically require large quantities of labeled input data
for training. Our own brains, in contrast, rarely have access to this kind of
supervision, so to the extent that representations are similar between CNNs and
brains, this similarity must arise via different training paths. In addition,
neurons in visual cortex produce complex time-varying responses even to static
inputs, and they dynamically tune themselves to temporal regularities in the
visual environment. We argue that these differences are clues to fundamental
differences between the computations performed in the brain and in deep
networks. To begin to close the gap, here we study the emergent properties of a
previously-described recurrent generative network that is trained to predict
future video frames in a self-supervised manner. Remarkably, the model is able
to capture a wide variety of seemingly disparate phenomena observed in visual
cortex, ranging from single unit response dynamics to complex perceptual motion
illusions. These results suggest potentially deep connections between recurrent
predictive neural network models and the brain, providing new leads that can
enrich both fields.
| 0 | 0 | 0 | 0 | 1 | 0 |
Triangulum II: Not Especially Dense After All | Among the Milky Way satellites discovered in the past three years, Triangulum
II has presented the most difficulty in revealing its dynamical status. Kirby
et al. (2015a) identified it as the most dark matter-dominated galaxy known,
with a mass-to-light ratio within the half-light radius of 3600 +3500 -2100
M_sun/L_sun. On the other hand, Martin et al. (2016) measured an outer velocity
dispersion that is 3.5 +/- 2.1 times larger than the central velocity
dispersion, suggesting that the system might not be in equilibrium. From new
multi-epoch Keck/DEIMOS measurements of 13 member stars in Triangulum II, we
constrain the velocity dispersion to be sigma_v < 3.4 km/s (90% C.L.). Our
previous measurement of sigma_v, based on six stars, was inflated by the
presence of a binary star with variable radial velocity. We find no evidence
that the velocity dispersion increases with radius. The stars display a wide
range of metallicities, indicating that Triangulum II retained supernova ejecta
and therefore possesses or once possessed a massive dark matter halo. However,
the detection of a metallicity dispersion hinges on the membership of the two
most metal-rich stars. The stellar mass is lower than galaxies of similar mean
stellar metallicity, which might indicate that Triangulum II is either a star
cluster or a tidally stripped dwarf galaxy. Detailed abundances of one star
show heavily depressed neutron-capture abundances, similar to stars in most
other ultra-faint dwarf galaxies but unlike stars in globular clusters.
| 0 | 1 | 0 | 0 | 0 | 0 |
Improving the upper bound on the length of the shortest reset words | We improve the best known upper bound on the length of the shortest reset
words of synchronizing automata. The new bound is slightly better than $114 n^3
/ 685 + O(n^2)$. The Černý conjecture states that $(n-1)^2$ is an upper
bound. So far, the best general upper bound was $(n^3-n)/6-1$ obtained by
J.-E. Pin and P. Frankl in 1982. Despite a number of efforts, it remained
unchanged for about 35 years.
To obtain the new upper bound we utilize avoiding words. A word is avoiding
for a state $q$ if after reading the word the automaton cannot be in $q$. We
obtain upper bounds on the length of the shortest avoiding words, and using the
approach of Trahtman from 2011 combined with the well known Frankl theorem from
1982, we improve the general upper bound on the length of the shortest reset
words. For all the bounds, there exist polynomial algorithms finding a word of
length not exceeding the bound.
| 1 | 0 | 0 | 0 | 0 | 0 |
Graphite: Iterative Generative Modeling of Graphs | Graphs are a fundamental abstraction for modeling relational data. However,
graphs are discrete and combinatorial in nature, and learning representations
suitable for machine learning tasks poses statistical and computational
challenges. In this work, we propose Graphite, an algorithmic framework for
unsupervised learning of representations over nodes in a graph using deep
latent variable generative models. Our model is based on variational
autoencoders (VAE), and uses graph neural networks for parameterizing both the
generative model (i.e., decoder) and inference model (i.e., encoder). The use
of graph neural networks incorporates inductive biases due to the spatial, local structure of graphs directly into the generative model. We draw
novel connections of our framework with approximate inference via kernel
embeddings. Empirically, Graphite outperforms competing approaches for the
tasks of density estimation, link prediction, and node classification on
synthetic and benchmark datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Characteristic classes in general relativity on a modified Poincare curvature bundle | Characteristic classes in space-time manifolds are discussed for both even-
and odd-dimensional spacetimes. In particular, it is shown that the
Einstein--Hilbert action is equivalent to a second Chern-class on a modified
Poincare bundle in four dimensions. Consequently, the cosmological constant and
the trace of an energy-momentum tensor become divisible modulo R/Z.
| 0 | 1 | 0 | 0 | 0 | 0 |
Observation of a 3D magnetic null point | We describe high resolution observations of a GOES B-class flare
characterized by a circular ribbon at chromospheric level, corresponding to the
network at photospheric level. We interpret the flare as a consequence of a
magnetic reconnection event that occurred at a three-dimensional (3D) coronal null
point located above the supergranular cell. The potential field extrapolation
of the photospheric magnetic field indicates that the circular chromospheric
ribbon is cospatial with the fan footpoints, while the ribbons of the inner and
outer spines look like compact kernels. We found new interesting observational
aspects that need to be explained by models: 1) a loop corresponding to the
outer spine became brighter a few minutes before the onset of the flare; 2) the
circular ribbon was formed by several adjacent compact kernels characterized by
a size of 1"-2"; 3) the kernels with stronger intensity emission were located
at the outer footpoint of the darker filaments departing radially from the
center of the supergranular cell; 4) these kernels start to brighten
sequentially in the clockwise direction; 5) the site of the 3D null point and the
shape of the outer spine were detected by RHESSI in the low energy channel
between 6.0 and 12.0 keV. Taking into account all these features and the length
scales of the magnetic systems involved in the event, we argue that the low
intensity of the flare may be ascribed to the low amount of magnetic flux and
to its symmetric configuration.
| 0 | 1 | 0 | 0 | 0 | 0 |
3D spatial exploration by E. coli echoes motor temporal variability | Unraveling bacterial strategies for spatial exploration is crucial to
understand the complexity of the organization of life. Currently, a cornerstone of quantitative modeling of bacterial transport is their
run-and-tumble strategy to explore their environment. For Escherichia coli, the
run time distribution was reported to follow a Poisson process with a single
characteristic time related to the rotational switching of the flagellar motor.
Direct measurements on flagellar motors show, on the contrary, heavy-tailed
distributions of rotation times stemming from the intrinsic noise in the
chemotactic mechanism. The crucial role of stochasticity on the chemotactic
response has also been highlighted by recent modeling, suggesting its
determinant influence on motility. In stark contrast with the accepted vision
of run-and-tumble, here we report a large behavioral variability of wild-type
E. coli, revealed in their three-dimensional trajectories. At short times, a
broad distribution of run times is measured on a population and attributed to
the slow fluctuations of a signaling protein triggering the flagellar motor
reversal. Over long times, individual bacteria undergo significant changes in
motility. We demonstrate that such a large distribution introduces measurement
biases in most practical situations. These results resolve the notorious conundrum between run-time observations and motor-switching statistics. We
finally propose that statistical modeling of transport properties currently
undertaken in the emerging framework of active matter studies should be
reconsidered under the scope of this large variability of motility features.
| 0 | 0 | 0 | 0 | 1 | 0 |
Unsupervised Latent Behavior Manifold Learning from Acoustic Features: audio2behavior | Behavioral annotation using signal processing and machine learning is highly
dependent on training data and manual annotations of behavioral labels.
Previous studies have shown that speech information encodes significant
behavioral information and can be used in a variety of automated behavior
recognition tasks. However, extracting behavior information from speech is
still a difficult task due to the sparseness of training data coupled with the
complex, high-dimensional nature of speech, and the multiple, complex information streams it encodes. In this work we exploit the slowly varying
properties of human behavior. We hypothesize that nearby segments of speech
share the same behavioral context and hence share a similar underlying
representation in a latent space. Specifically, we propose a Deep Neural
Network (DNN) model to connect behavioral context and derive the behavioral
manifold in an unsupervised manner. We evaluate the proposed manifold in the
couples therapy domain and also provide examples from publicly available data
(e.g. stand-up comedy). We further investigate training within the couples'
therapy domain and from movie data. The results are extremely encouraging and
promise improved behavioral quantification in an unsupervised manner and
warrant further investigation in a range of applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Test map characterizations of local properties of fundamental groups | Local properties of the fundamental group of a path-connected topological
space can pose obstructions to the applicability of covering space theory. A
generalized covering map is a generalization of the classical notion of
covering map defined in terms of unique lifting properties. The existence of
generalized covering maps depends entirely on the verification of the unique
path lifting property for a standard covering construction. Given any
path-connected metric space $X$, and a subgroup $H\leq\pi_1(X,x_0)$, we
characterize the unique path lifting property relative to $H$ in terms of a new
closure operator on the $\pi_1$-subgroup lattice that is induced by maps from a
fixed "test" domain into $X$. Using this test map framework, we develop a
unified approach to comparing the existence of generalized coverings with a
number of related properties.
| 0 | 0 | 1 | 0 | 0 | 0 |
Adaptive channel selection for DOA estimation in MIMO radar | We present adaptive strategies for antenna selection for Direction of Arrival
(DoA) estimation of a far-field source using TDM MIMO radar with linear arrays.
Our treatment is formulated within a general adaptive sensing framework that
uses one-step ahead predictions of the Bayesian MSE using a parametric family
of Weiss-Weinstein bounds that depend on previous measurements. We compare in
simulations our strategy with adaptive policies that optimize the Bobrovsky-Zakai bound and the Expected Cramér-Rao bound, and show the performance
for different levels of measurement noise.
| 1 | 0 | 0 | 0 | 0 | 0 |
Photometric characterization of the Dark Energy Camera | We characterize the variation in photometric response of the Dark Energy
Camera (DECam) across its 520 Mpix science array during 4 years of operation.
These variations are measured using high signal-to-noise aperture photometry of
$>10^7$ stellar images in thousands of exposures of a few selected fields, with
the telescope dithered to move the sources around the array. A calibration
procedure based on these results brings the RMS variation in aperture
magnitudes of bright stars on cloudless nights down to 2--3 mmag, with <1 mmag
of correlated photometric errors for stars separated by $\ge20$". On cloudless
nights, any departures of the exposure zeropoints from a secant airmass law
exceeding 1 mmag are plausibly attributable to spatial/temporal variations in
aperture corrections. These variations can be inferred and corrected by
measuring the fraction of stellar light in an annulus between 6" and 8"
diameter. Key elements of this calibration include: correction of amplifier
nonlinearities; distinguishing pixel-area variations and stray light from
quantum-efficiency variations in the flat fields; field-dependent color
corrections; and the use of an aperture-correction proxy. The DECam response
pattern across the 2-degree field drifts over months by up to $\pm7$ mmag, in a
nearly-wavelength-independent low-order pattern. We find no fundamental
barriers to pushing global photometric calibrations toward mmag accuracy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Tough self-healing elastomers by molecular enforced integration of covalent and reversible networks | Self-healing polymers crosslinked by solely reversible bonds are
intrinsically weaker than common covalently crosslinked networks. Introducing
covalent crosslinks into a reversible network would improve mechanical
strength. It is challenging, however, to apply this design concept to dry
elastomers, largely because reversible crosslinks such as hydrogen bonds are
often polar motifs, whereas covalent crosslinks are non-polar motifs, and these
two types of bonds are intrinsically immiscible without co-solvents. Here we
design and fabricate a hybrid polymer network by crosslinking randomly branched
polymers carrying motifs that can form both reversible hydrogen bonds and
permanent covalent crosslinks. The randomly branched polymer links such two
types of bonds and forces them to mix on the molecular level without
co-solvents. This allows us to create a hybrid dry elastomer that is very tough
with a fracture energy $13,500J/m^2$ comparable to that of natural rubber;
moreover, the elastomer can self-heal at room temperature with a recovered
tensile strength 4 MPa similar to that of existing self-healing elastomers. The
concept of forcing covalent and reversible bonds to mix at molecular scale to
create a homogenous network is quite general and should enable development of
tough, self-healing polymers of practical usage.
| 0 | 1 | 0 | 0 | 0 | 0 |
SYK Models and SYK-like Tensor Models with Global Symmetry | In this paper, we study an SYK model and an SYK-like tensor model with global
symmetry. First, we study the large $N$ expansion of the bi-local collective
action for the SYK model with manifest global symmetry. We show that the global
symmetry is enhanced to a local symmetry in the strong coupling limit, and the
corresponding symmetry algebra is the Kac-Moody algebra. The emergent local
symmetry, together with the emergent reparametrization invariance, is
spontaneously and explicitly broken. This leads to a low energy effective
action. We evaluate four point functions and obtain the spectrum of our model.
We derive the low energy effective action and analyze the chaotic behavior of
the four point functions.
We also consider the recent 3D gravity conjecture for our model.
We also introduce an SYK-like tensor model with global symmetry. We first
study chaotic behavior of four point functions in various channels for the
rank-3 case, and generalize this into a rank-$(q-1)$ tensor model.
| 0 | 1 | 0 | 0 | 0 | 0 |
Which Stars are Ionizing the Orion Nebula? | The common assumption that Theta-1-Ori C is the dominant ionizing source for
the Orion Nebula is critically examined. This assumption underlies much of the
existing analysis of the nebula. In this paper we establish through comparison
of the relative strengths of emission lines with expectations from Cloudy
models and through the direction of the bright edges of proplyds that
Theta-2-Ori-A, which lies beyond the Bright Bar, also plays an important role.
Theta-1-Ori-C does dominate ionization in the inner part of the Orion Nebula,
but outside of the Bright Bar as far as the southeast boundary of the Extended
Orion Nebula, Theta-2-Ori-A is the dominant source. In addition to identifying
the ionizing star in sample regions, we were able to locate those portions of
the nebula in 3-D. This analysis illustrates the power of MUSE spectral imaging
observations in identifying sources of ionization in extended regions.
| 0 | 1 | 0 | 0 | 0 | 0 |
FLaapLUC: a pipeline for the generation of prompt alerts on transient Fermi-LAT $γ$-ray sources | The large majority of high energy sources detected with Fermi-LAT are
blazars, which are known to be very variable sources. Since high-cadence,
long-term monitoring simultaneously at different wavelengths is prohibitive,
the study of their transient activity can help shed light on our understanding
of these objects. The early detection of such potentially fast transient events is
the key for triggering follow-up observations at other wavelengths. A Python
tool, FLaapLUC, built on top of the Science Tools provided by the Fermi Science
Support Center and the Fermi-LAT collaboration, has been developed using a
simple aperture photometry approach. This tool can effectively detect relative
flux variations in a set of predefined sources and alert potential users. Such
alerts can then be used to trigger target of opportunity observations with
other facilities. It is shown that FLaapLUC is an efficient tool to reveal
transient events in Fermi-LAT data, providing quick results which can be used
to promptly organise follow-up observations. Results from this simple aperture
photometry method are also compared to full likelihood analyses. The FLaapLUC
package is made available on GitHub and is open to contributions by the
community.
| 0 | 1 | 0 | 0 | 0 | 0 |
A network approach to topic models | One of the main computational and scientific challenges in the modern age is
to extract useful information from unstructured texts. Topic models are one
popular machine-learning approach which infers the latent topical structure of
a collection of documents. Despite their success --- in particular that of the
most widely used variant, Latent Dirichlet Allocation (LDA) --- and numerous
applications in sociology, history, and linguistics, topic models are known to
suffer from severe conceptual and practical problems, e.g. a lack of
justification for the Bayesian priors, discrepancies with statistical
properties of real texts, and the inability to properly choose the number of
topics. Here we obtain a fresh view on the problem of identifying topical
structures by relating it to the problem of finding communities in complex
networks. This is achieved by representing text corpora as bipartite networks
of documents and words. By adapting existing community-detection methods --
using a stochastic block model (SBM) with non-parametric priors -- we obtain a
more versatile and principled framework for topic modeling (e.g., it
automatically detects the number of topics and hierarchically clusters both the
words and documents). The analysis of artificial and real corpora demonstrates
that our SBM approach leads to better topic models than LDA in terms of
statistical model selection. More importantly, our work shows how to formally
relate methods from community detection and topic modeling, opening the
possibility of cross-fertilization between these two fields.
| 1 | 0 | 0 | 1 | 0 | 0 |
CRPropa 3.1 -- A low energy extension based on stochastic differential equations | The propagation of charged cosmic rays through the Galactic environment
influences all aspects of the observation at Earth. Energy spectrum,
composition and arrival directions are changed due to deflections in magnetic
fields and interactions with the interstellar medium. Today the transport is
simulated with different simulation methods either based on the solution of a
transport equation (multi-particle picture) or a solution of an equation of
motion (single-particle picture).
We developed a new module for the publicly available propagation software
CRPropa 3.1, where we implemented an algorithm to solve the transport equation
using stochastic differential equations. This technique allows us to use a
diffusion tensor which is anisotropic with respect to an arbitrary magnetic
background field. The source code of CRPropa is written in C++ with python
steering via SWIG which makes it easy to use and computationally fast.
In this paper, we present the new low-energy propagation code together with
validation procedures that are developed to prove the accuracy of the new
implementation. Furthermore, we show first examples of the cosmic ray density
evolution, which depends strongly on the ratio of the parallel
$\kappa_\parallel$ and perpendicular $\kappa_\perp$ diffusion coefficients.
This dependency is systematically examined, as is the influence of the particle
rigidity on the diffusion process.
| 0 | 1 | 0 | 0 | 0 | 0 |
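The core idea of the abstract above, solving the diffusive transport equation by integrating an equivalent stochastic differential equation, can be illustrated in a few lines. The sketch below is not CRPropa code; it is a minimal Euler-Maruyama integration of the pure-diffusion SDE $dx = \sqrt{2\kappa}\,dW$ with an anisotropic diffusion tensor defined relative to a homogeneous background-field direction. All function and parameter names, as well as the coefficient values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def diffuse(n_particles, n_steps, dt, kappa_par, kappa_perp, b_hat):
    # Euler-Maruyama solution of the pure-diffusion SDE dx = sqrt(2 kappa) dW,
    # with the diffusion tensor anisotropic w.r.t. the unit
    # background-field direction b_hat
    b_hat = np.asarray(b_hat, dtype=float)
    b_hat /= np.linalg.norm(b_hat)
    x = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_particles, 3))
        # split the Wiener increment into components parallel and
        # perpendicular to the background field
        dW_par = (dW @ b_hat)[:, None] * b_hat
        dW_perp = dW - dW_par
        x += np.sqrt(2 * kappa_par) * dW_par + np.sqrt(2 * kappa_perp) * dW_perp
    return x

# ratio kappa_par / kappa_perp = 100: particles spread mostly along the field
x = diffuse(n_particles=5000, n_steps=200, dt=0.01,
            kappa_par=1.0, kappa_perp=0.01, b_hat=[0, 0, 1])
var_par = x[:, 2].var()   # expected ~ 2 * kappa_par * T = 4
var_perp = x[:, 0].var()  # expected ~ 2 * kappa_perp * T = 0.04
```

The actual CRPropa module additionally handles spatially varying background fields, which introduce drift terms absent from this constant-coefficient toy.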
Isometries in spaces of Kähler potentials | The space of Kähler potentials in a compact Kähler manifold, endowed with
Mabuchi's metric, is an infinite dimensional Riemannian manifold. We
characterize local isometries between spaces of Kähler potentials, and prove
existence and uniqueness for such isometries.
| 0 | 0 | 1 | 0 | 0 | 0 |
On the Statistical Challenges of Echo State Networks and Some Potential Remedies | Echo state networks (ESNs) are powerful recurrent neural networks. However,
they are often unstable, making the process of finding a good ESN for a
specific dataset quite hard, and obtaining high accuracy with an ESN is
challenging. We create, develop, and implement a family of predictably optimal,
robust, and stable ensembles of Echo State Networks by regularizing the
training and perturbing the input. Furthermore, several weight distributions of
different shapes are tried to see whether the shape of the distribution helps
reduce the error. We find that an ESN can track most datasets in the short term
but collapses in the long run, while short-term tracking with a large reservoir
enables the ESN to deliver strikingly good predictions. Based on this
observation, we go a step further and aggregate many ESNs into an ensemble,
lowering the variance and stabilizing the system by stochastic replication and
bootstrapping of the input data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Identifiability of Gaussian Structural Equation Models with Dependent Errors Having Equal Variances | In this paper, we prove that some Gaussian structural equation models with
dependent errors having equal variances are identifiable from their
corresponding Gaussian distributions. Specifically, we prove identifiability
for the Gaussian structural equation models that can be represented as
Andersson-Madigan-Perlman chain graphs (Andersson et al., 2001). These chain
graphs were originally developed to represent independence models. However,
they are also suitable for representing causal models with additive noise
(Peña, 2016). Our result implies that these causal models can be
identified from observational data alone. Our result generalizes the result by
Peters and Bühlmann (2014), who considered independent errors having equal
variances. The suitability of the equal error variances assumption should be
assessed on a per domain basis.
| 0 | 0 | 0 | 1 | 0 | 0 |
Smart Contract SLAs for Dense Small-Cell-as-a-Service | The disruptive power of blockchain technologies represents a great
opportunity to re-imagine standard practices of telecommunication networks and
to identify critical areas that can benefit from brand new approaches. As a
starting point for this debate, we look at the current limits of infrastructure
sharing, and specifically at the Small-Cell-as-a-Service trend, asking
ourselves how we could push it to its natural extreme: a scenario in which any
individual home or business user can become a service provider for mobile
network operators, freed from all the scalability and legal constraints that
are inherent to the current modus operandi. We propose the adoption of smart
contracts to implement simple but effective Service Level Agreements (SLAs)
between small cell providers and mobile operators, and present an example
contract template based on the Ethereum blockchain.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Kiguradze theorem for linear boundary value problems | We investigate the limiting behavior of solutions of nonhomogeneous boundary
value problems for the systems of linear ordinary differential equations. The
generalization of Kiguradze theorem (1987) on passage to the limit is obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
Fidelity Lower Bounds for Stabilizer and CSS Quantum Codes | In this paper we estimate the fidelity of stabilizer and CSS codes. First, we
derive a lower bound on the fidelity of a stabilizer code via its quantum
enumerator. Next, we find the average quantum enumerators of the ensembles of
finite length stabilizer and CSS codes. We use the average quantum enumerators
for obtaining lower bounds on the average fidelity of these ensembles. We
further improve the fidelity bounds by estimating the quantum enumerators of
expurgated ensembles of stabilizer and CSS codes. Finally, we derive fidelity
bounds in the asymptotic regime when the code length tends to infinity.
These results tell us which code rate we can afford for achieving a target
fidelity with codes of a given length. The results also show that in the
symmetric depolarizing channel a typical stabilizer code performs better, in
terms of fidelity and code rate, than a typical CSS code, and that balanced
CSS codes significantly outperform other CSS codes. Asymptotic results
demonstrate that CSS codes have a fundamental performance loss compared to
stabilizer codes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cross-validation improved by aggregation: Agghoo | Cross-validation is widely used for selecting among a family of learning
rules. This paper studies a related method, called aggregated hold-out
(Agghoo), which mixes cross-validation with aggregation; Agghoo can also be
related to bagging. According to numerical experiments, Agghoo can
significantly improve on cross-validation's prediction error, at the same
computational cost; this makes it very promising as a general-purpose tool for
prediction. We
provide the first theoretical guarantees on Agghoo, in the supervised
classification setting, ensuring that one can use it safely: at worst, Agghoo
performs like the hold-out, up to a constant factor. We also prove a
non-asymptotic oracle inequality, in binary classification under the margin
condition, which is sharp enough to get (fast) minimax rates.
| 0 | 0 | 1 | 1 | 0 | 0 |
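The aggregated hold-out procedure described above can be sketched in a few lines: draw several train/validation splits, select the best rule on each split by hold-out error, then aggregate the selected predictors by majority vote. This is an illustrative toy, not the authors' code: the rule family (brute-force k-NN over a grid of k), the split counts, and all names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_train, y_train, X, k):
    # plain brute-force k-nearest-neighbour majority vote
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) > 0.5).astype(int)

def agghoo(X, y, ks, n_splits=5, train_frac=0.8):
    n = len(X)
    chosen = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        cut = int(train_frac * n)
        tr, va = perm[:cut], perm[cut:]
        # hold-out selection: pick the k with the smallest validation error
        errs = [np.mean(knn_predict(X[tr], y[tr], X[va], k) != y[va])
                for k in ks]
        chosen.append((tr, ks[int(np.argmin(errs))]))

    def predict(X_new):
        # aggregate the selected hold-out predictors by majority vote
        votes = np.stack([knn_predict(X[tr], y[tr], X_new, k)
                          for tr, k in chosen])
        return (votes.mean(axis=0) > 0.5).astype(int)
    return predict

# toy two-class problem: label = 1 iff x1 + x2 > 0
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = agghoo(X, y, ks=[1, 3, 5, 7])
X_test = rng.normal(size=(100, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
acc = np.mean(predict(X_test) == y_test)
```

Note how this differs from cross-validation: no single hyperparameter is selected globally; instead each split contributes its own selected predictor to the final vote.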
MAT: A Multimodal Attentive Translator for Image Captioning | In this work we formulate the problem of image captioning as a multimodal
translation task. Analogous to machine translation, we present a
sequence-to-sequence recurrent neural network (RNN) model for image caption
generation. Different from most existing work where the whole image is
represented by a convolutional neural network (CNN) feature, we propose to
represent the input image as a sequence of detected objects, which serves as the
source sequence of the RNN model. In this way, the sequential representation of
an image can be naturally translated to a sequence of words, as the target
sequence of the RNN model. To represent the image in a sequential way, we
extract the object features in the image and arrange them in an order using
convolutional neural networks. To further leverage the visual information from
the encoded objects, a sequential attention layer is introduced to selectively
attend to the objects that are related to generate corresponding words in the
sentences. Extensive experiments are conducted to validate the proposed
approach on popular benchmark dataset, i.e., MS COCO, and the proposed model
surpasses the state-of-the-art methods in all metrics following the dataset
splits of previous work. The proposed approach is also evaluated by the
evaluation server of MS COCO captioning challenge, and achieves very
competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).
| 1 | 0 | 0 | 0 | 0 | 0 |
A Semi-Supervised and Inductive Embedding Model for Churn Prediction of Large-Scale Mobile Games | Mobile gaming has emerged as a promising market with billion-dollar revenues.
A variety of mobile game platforms and services have been developed around the
world. One critical challenge for these platforms and services is to understand
user churn behavior in mobile games. Accurate churn prediction will benefit
many stakeholders such as game developers, advertisers, and platform operators.
In this paper, we present the first large-scale churn prediction solution for
mobile games. In view of the common limitations of the state-of-the-art methods
built upon traditional machine learning models, we devise a novel
semi-supervised and inductive embedding model that jointly learns the
prediction function and the embedding function for user-app relationships. We
model these two functions by deep neural networks with a unique edge embedding
technique that is able to capture both contextual information and relationship
dynamics. We also design a novel attributed random walk technique that takes
into consideration both topological adjacency and attribute similarities. To
evaluate the performance of our solution, we collect real-world data from the
Samsung Game Launcher platform that includes tens of thousands of games and
hundreds of millions of user-app interactions. The experimental results with
this data demonstrate the superiority of our proposed model against existing
state-of-the-art methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
Sequential rerandomization | The seminal work of Morgan and Rubin (2012) considers rerandomization for all
the units at one time. In practice, however, experimenters may have to
rerandomize units sequentially. For example, a clinician studying a rare
disease may be unable to wait to perform an experiment until all the
experimental units are recruited. Our work offers a mathematical framework for
sequential rerandomization designs, where the experimental units are enrolled
in groups. We formulate an adaptive rerandomization procedure for balancing
treatment/control assignments over some continuous or binary covariates, using
Mahalanobis distance as the imbalance measure. We prove in our key result,
Theorem 3, that given the same number of rerandomizations (in expected value),
under certain mild assumptions, sequential rerandomization achieves better
covariate balance than rerandomization at one time.
| 0 | 0 | 0 | 1 | 0 | 0 |
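The group-wise acceptance-rejection step at the heart of this design can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the threshold, group sizes, and function names are assumptions for the example, while the imbalance measure is the standard Mahalanobis distance between treatment and control covariate means.

```python
import numpy as np

rng = np.random.default_rng(1)

def mahalanobis_imbalance(X, z):
    # Morgan-Rubin style imbalance: Mahalanobis distance between the
    # treatment and control covariate means
    diff = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    cov = np.cov(X, rowvar=False)
    scale = 1.0 / (z == 1).sum() + 1.0 / (z == 0).sum()
    return diff @ np.linalg.solve(scale * cov, diff)

def rerandomize_group(X_prev, z_prev, X_new, threshold, max_tries=1000):
    # keep earlier groups' assignments fixed; redraw only the new group's
    # assignment until the pooled imbalance falls below the threshold
    X = np.vstack([X_prev, X_new])
    for _ in range(max_tries):
        z_new = rng.permutation(np.repeat([0, 1], len(X_new) // 2))
        z = np.concatenate([z_prev, z_new])
        if mahalanobis_imbalance(X, z) < threshold:
            return z_new
    return z_new  # fall back to the last draw

# enroll two groups of 20 units with 3 covariates sequentially
X1 = rng.normal(size=(20, 3))
z1 = rerandomize_group(np.empty((0, 3)), np.empty(0, dtype=int),
                       X1, threshold=2.0)
X2 = rng.normal(size=(20, 3))
z2 = rerandomize_group(X1, z1, X2, threshold=2.0)
M = mahalanobis_imbalance(np.vstack([X1, X2]), np.concatenate([z1, z2]))
```

The key point of the sequential setting is visible in `rerandomize_group`: only the newest group's assignment is redrawn, yet acceptance is decided on the imbalance of all units enrolled so far.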
Neural SLAM: Learning to Explore with External Memory | We present an approach for agents to learn representations of a global map
from sensor data, to aid their exploration in new environments. To achieve
this, we embed procedures mimicking those of traditional Simultaneous
Localization and Mapping (SLAM) into the soft attention based addressing of
external memory architectures, in which the external memory acts as an internal
representation of the environment. This structure encourages the evolution of
SLAM-like behaviors inside a completely differentiable deep neural network. We
show that this approach can help reinforcement learning agents to successfully
explore new environments where long-term memory is essential. We validate our
approach in both challenging grid-world environments and preliminary Gazebo
experiments. A video of our experiments can be found at: this https URL.
| 1 | 0 | 0 | 0 | 0 | 0 |
Information Storage and Retrieval using Macromolecules as Storage Media | To store information at extremely high-density and data-rate, we propose to
adapt, integrate, and extend the techniques developed by chemists and molecular
biologists for the purpose of manipulating biological and other macromolecules.
In principle, volumetric densities in excess of 10^21 bits/cm^3 can be achieved
when individual molecules having dimensions below a nanometer or so are used to
encode the 0's and 1's of a binary string of data. In practice, however, given
the limitations of electron-beam lithography, thin film deposition and
patterning technologies, molecular manipulation in submicron dimensions, etc.,
we believe that volumetric storage densities on the order of 10^16 bits/cm^3
(i.e., petabytes per cubic centimeter) should be readily attainable, leaving
plenty of room for future growth. The unique feature of the proposed new
approach is its focus on the feasibility of storing bits of information in
individual molecules, each only a few angstroms in size.
| 1 | 1 | 0 | 0 | 0 | 0 |
Morgan type uncertainty principle and unique continuation properties for abstract Schrödinger equations | In this paper, a Morgan type uncertainty principle and unique continuation
properties of abstract Schrödinger equations with time dependent potentials
in vector-valued classes are obtained. The equation involves a linear operator
acting in a Hilbert space. By choosing the corresponding space H and operator,
we derive unique continuation properties for numerous classes of Schrödinger
type equations and their systems, which occur in a wide variety of physical
systems.
| 0 | 0 | 1 | 0 | 0 | 0 |
Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data | Early in researchers' careers, it is difficult to assess how good their work
is or how important or influential the scholars will eventually be. Hence,
funding agencies, academic departments, and others often use the Journal Impact
Factor (JIF) of where the authors have published to assess their work and
provide resources and rewards for future work. The use of JIFs in this way has
been heavily criticized, however. Using a large data set with many thousands of
publication profiles of individual researchers, this study tests the ability of
the JIF (in its normalized variant) to identify, at the beginning of their
careers, those candidates who will be successful in the long run. Instead of
bare JIFs and citation counts, the metrics used here are standardized according
to Web of Science subject categories and publication years. The results of the
study indicate that the JIF (in its normalized variant) is able to discriminate
between researchers who published papers later on with a citation impact above
or below average in a field and publication year - not only in the short term,
but also in the long term. However, the low to medium effect sizes of the
results also indicate that the JIF (in its normalized variant) should not be
used as the sole criterion for identifying later success: other criteria, such
as the novelty and significance of the specific research, academic
distinctions, and the reputation of previous institutions, should also be
considered.
| 1 | 1 | 0 | 0 | 0 | 0 |
Deep MIMO Detection | In this paper, we consider the use of deep neural networks in the context of
Multiple-Input-Multiple-Output (MIMO) detection. We give a brief introduction
to deep learning and propose a modern neural network architecture suitable for
this detection task. First, we consider the case in which the MIMO channel is
constant, and we learn a detector for a specific system. Next, we consider the
harder case in which the parameters are known yet changing and a single
detector must be learned for all multiple varying channels. We demonstrate the
performance of our deep MIMO detector using numerical simulations in comparison
to competing methods including approximate message passing and semidefinite
relaxation. The results show that deep networks can achieve state of the art
accuracy with significantly lower complexity while providing robustness against
ill conditioned channels and mis-specified noise variance.
| 1 | 0 | 0 | 1 | 0 | 0 |
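For context, the classical linear baseline that such deep detectors are measured against can be written in a few lines. The sketch below is not the paper's network; it is plain zero-forcing detection for the standard model $y = Hx + w$ with BPSK symbols, where the dimensions and noise level are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# linear MIMO model y = H x + w with BPSK symbols x in {-1, +1}
n_tx, n_rx = 4, 8
H = rng.normal(size=(n_rx, n_tx))
x = rng.choice([-1.0, 1.0], size=n_tx)
y = H @ x + 0.1 * rng.normal(size=n_rx)

# zero-forcing: invert the channel with the pseudo-inverse, then slice
# to the nearest constellation point
x_zf = np.sign(np.linalg.pinv(H) @ y)
```

Zero-forcing amplifies noise when $H$ is ill conditioned, which is exactly the regime where the abstract reports learned detectors being more robust.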
Cross-View Image Matching for Geo-localization in Urban Environments | In this paper, we address the problem of cross-view image geo-localization.
Specifically, we aim to estimate the GPS location of a query street view image
by finding the matching images in a reference database of geo-tagged bird's eye
view images, or vice versa. To this end, we present a new framework for
cross-view image geo-localization by taking advantage of the tremendous success
of deep convolutional neural networks (CNNs) in image classification and object
detection. First, we employ the Faster R-CNN to detect buildings in the query
and reference images. Next, for each building in the query image, we retrieve
the $k$ nearest neighbors from the reference buildings using a Siamese network
trained on both positive matching image pairs and negative pairs. To find the
correct NN for each query building, we develop an efficient multiple nearest
neighbors matching method based on dominant sets. We evaluate the proposed
framework on a new dataset that consists of pairs of street view and bird's eye
view images. Experimental results show that the proposed method achieves better
geo-localization accuracy than other approaches and is able to generalize to
images at unseen locations.
| 1 | 0 | 0 | 0 | 0 | 0 |
Understanding the Feedforward Artificial Neural Network Model From the Perspective of Network Flow | In recent years, deep learning based on artificial neural network (ANN) has
achieved great success in pattern recognition. However, there is no clear
understanding of such neural computational models. In this paper, we try to
unravel the "black-box" structure of the ANN model from the perspective of
network flow. Specifically, we consider the feedforward ANN as a network flow
model, which consists of many directional class-pathways. Each class-pathway
encodes one class. The class-pathway of a class is obtained by connecting the
activated neural nodes in each layer from input to output, where the activation
value of a neural node (node-value) is defined by the weights of each layer in
a trained ANN classifier. From the perspective of the class-pathway, training
an ANN classifier can be regarded as the formation process of the
class-pathways of different classes. By analyzing the distances between each
pair of class-pathways in a trained ANN classifier, we try to answer the
question of why the classifier performs as it does. Finally, from the neural
encoding view, we define the importance of each neural node through the
class-pathways, which is helpful for optimizing the structure of a classifier.
Experiments on two types of ANN models, the multi-layer perceptron (MLP) and
the CNN, verify that the class-pathway-based network flow is a reasonable
explanation for ANN models.
| 1 | 0 | 0 | 0 | 0 | 0 |
Excitation of multiple 2-mode parametric resonances by a single driven mode | We demonstrate autoparametric excitation of two distinct sub-harmonic
mechanical modes by the same driven mechanical mode corresponding to different
drive frequencies within its resonance dispersion band. This experimental
observation is used to motivate a more general physical picture wherein
multiple mechanical modes could be excited by the same driven primary mode
within the same device as long as the frequency spacing between the
sub-harmonic modes is less than half the dispersion bandwidth of the driven
primary mode. The excitation of both modes is seen to be threshold-dependent
and a parametric back-action is observed impacting the response of the
driven primary mode. Motivated by this experimental observation, modified
dynamical equations specifying 2-mode auto-parametric excitation for such
systems are presented.
| 0 | 1 | 0 | 0 | 0 | 0 |
Solar system science with the Wide-Field InfraRed Survey Telescope (WFIRST) | We present a community-led assessment of the solar system investigations
achievable with NASA's next-generation space telescope, the Wide Field InfraRed
Survey Telescope (WFIRST). WFIRST will provide imaging, spectroscopic, and
coronagraphic capabilities from 0.43-2.0 $\mu$m and will be a potential
contemporary and eventual successor to JWST. Surveys of irregular satellites
and minor bodies are where WFIRST will excel with its 0.28 deg$^2$ field of
view Wide Field Instrument (WFI). Potential ground-breaking discoveries from
WFIRST could include detection of the first minor bodies orbiting in the Inner
Oort Cloud, identification of additional Earth Trojan asteroids, and the
discovery and characterization of asteroid binary systems similar to
Ida/Dactyl. Additional investigations into asteroids, giant planet satellites,
Trojan asteroids, Centaurs, Kuiper Belt Objects, and comets are presented.
Previous use of astrophysics assets for solar system science and synergies
between WFIRST, LSST, JWST, and the proposed NEOCam mission are discussed. We
also present the case for implementation of moving target tracking, a feature
that will benefit from the heritage of JWST and enable a broader range of solar
system observations.
| 0 | 1 | 0 | 0 | 0 | 0 |
Combining Generative and Discriminative Approaches to Unsupervised Dependency Parsing via Dual Decomposition | Unsupervised dependency parsing aims to learn a dependency parser from
unannotated sentences. Existing work focuses on either learning generative
models using the expectation-maximization algorithm and its variants, or
learning discriminative models using the discriminative clustering algorithm.
In this paper, we propose a new learning strategy that learns a generative
model and a discriminative model jointly based on the dual decomposition
method. Our method is simple and general, yet effective to capture the
advantages of both models and improve their learning results. We tested our
method on the UD treebank and achieved state-of-the-art performance on thirty
languages.
| 1 | 0 | 0 | 0 | 0 | 0 |
Near-Infrared Knots and Dense Fe Ejecta in the Cassiopeia A Supernova Remnant | We report the results of broadband (0.95--2.46 $\mu$m) near-infrared
spectroscopic observations of the Cassiopeia A supernova remnant. Using a
clump-finding algorithm in two-dimensional dispersed images, we identify 63
"knots" from eight slit positions and derive their spectroscopic properties.
All of the knots emit [Fe II] lines together with other ionic forbidden lines
of heavy elements, and some of them also emit H and He lines. We identify 46
emission line features in total from the 63 knots and measure their fluxes and
radial velocities. The results of our analyses of the emission line features
based on principal component analysis show that the knots can be classified
into three groups: (1) He-rich, (2) S-rich, and (3) Fe-rich knots. The He-rich
knots have relatively small, $\lesssim 200~{\rm km~s}^{-1}$, line-of-sight
speeds and radiate strong He I and [Fe II] lines resembling closely optical
quasi-stationary flocculi of circumstellar medium, while the S-rich knots show
strong lines from O-burning material with large radial velocities up to $\sim
2000~{\rm km~s}^{-1}$ indicating that they are supernova ejecta material known
as fast-moving knots. The Fe-rich knots also have large radial velocities but
show no lines from O-burning material. We discuss the origin of the Fe-rich
knots and conclude that they are most likely "pure" Fe ejecta synthesized in
the innermost region during the supernova explosion. The comparison of [Fe II]
images with other waveband images shows that these dense Fe ejecta are mainly
distributed along the southwestern shell just outside the unshocked $^{44}$Ti
in the interior, supporting the presence of unshocked Fe associated with
$^{44}$Ti.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the conjecture of Jeśmanowicz | We give a survey of some results from the last 60 years concerning
Jeśmanowicz' conjecture. Moreover, we conclude the survey with a new result
by showing that the special Diophantine equation $$(20k)^x+(99k)^y=(101k)^z$$
has no solution other than $(x,y,z)=(2,2,2)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
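For illustration, the base case $k=1$ of the equation above rests on the identity $20^2+99^2=101^2$, and uniqueness over a small exponent range is easy to check by brute force. The bound of 40 is an arbitrary choice for the example; the result announced in the abstract covers all exponents and all $k$.

```python
# exhaustive check of 20**x + 99**y == 101**z for exponents up to 40;
# the only solution found is (x, y, z) = (2, 2, 2),
# i.e. 400 + 9801 = 10201 = 101**2
solutions = [(x, y, z)
             for x in range(1, 41)
             for y in range(1, 41)
             for z in range(1, 41)
             if 20**x + 99**y == 101**z]
```

A search like this cannot prove the conjecture, of course; it only confirms there is no small counterexample for this particular triple.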
Lattice implementation of Abelian gauge theories with Chern-Simons number and an axion field | Real time evolution of classical gauge fields is relevant for a number of
applications in particle physics and cosmology, ranging from the early Universe
to dynamics of quark-gluon plasma. We present a lattice formulation of the
interaction between a $shift$-symmetric field and some $U(1)$ gauge sector,
$a(x)\tilde{F}_{\mu\nu}F^{\mu\nu}$, reproducing the continuum limit to order
$\mathcal{O}(dx_\mu^2)$ and obeying the following properties: (i) the system is
gauge invariant and (ii) shift symmetry is exact on the lattice. To this end
we construct a definition of the {\it topological number density} $Q =
\tilde{F}_{\mu\nu}F^{\mu\nu}$ that admits a lattice total derivative
representation $Q = \Delta_\mu^+ K^\mu$, reproducing to order
$\mathcal{O}(dx_\mu^2)$ the continuum expression $Q = \partial_\mu K^\mu
\propto \vec E \cdot \vec B$. If we consider a homogeneous field $a(x) = a(t)$,
the system can be mapped into an Abelian gauge theory with Hamiltonian
containing a Chern-Simons term for the gauge fields. This allows us to study in
an accompanying paper the real time dynamics of fermion number non-conservation
(or chirality breaking) in Abelian gauge theories at finite temperature. When
$a(x) = a(\vec x,t)$ is inhomogeneous, the set of lattice equations of motion
does not, however, admit a simple explicit local solution (while preserving
$\mathcal{O}(dx_\mu^2)$ accuracy). We discuss an iterative scheme that allows us to
overcome this difficulty.
| 0 | 1 | 0 | 0 | 0 | 0 |
II-FCN for skin lesion analysis towards melanoma detection | Dermoscopy image detection remains a tough task due to the weakly
distinguishable properties of the lesions. Although deep convolutional neural
networks have significantly boosted performance on prevalent computer vision
tasks in recent years, there remains room to explore more robust and precise
models for the problem of low-contrast image segmentation. For the Lesion
Segmentation challenge of ISBI 2017, we built a symmetrical identity inception
fully convolutional network based on only 10 reversible inception blocks, each
block composed of four convolution branches combining different layer depths
and kernel sizes to extract diverse semantic features. We then propose an
approximate loss function for the Jaccard index metric to train our model. To
overcome the drawbacks of traditional convolution, we adopt dilated convolution
and a conditional random field to rectify our segmentation. We also introduce
multiple ways to prevent overfitting. The experimental results show that our
model achieves a Jaccard index of 0.82 and keeps learning from epoch to epoch.
| 1 | 0 | 0 | 0 | 0 | 0 |
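The "approximate loss function for the Jaccard index" mentioned in the abstract above can be illustrated with a common differentiable surrogate. This is a minimal NumPy sketch; the function name and exact form are assumptions, not the authors' implementation:

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Differentiable surrogate for 1 - Jaccard index.

    pred   : predicted foreground probabilities in [0, 1]
    target : binary ground-truth mask
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    return 1.0 - (intersection + eps) / (union + eps)

# A perfect prediction gives zero loss.
mask = np.array([[1, 0], [0, 1]])
print(round(soft_jaccard_loss(mask.astype(float), mask), 6))  # prints 0.0
```

Because the probabilities enter the intersection and union directly, the loss is smooth in the network outputs and can be minimized with gradient descent, unlike the discrete Jaccard index itself.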
Zero-Shot Visual Imitation | The current dominant paradigm for imitation learning relies on strong
supervision of expert actions to learn both 'what' and 'how' to imitate. We
pursue an alternative paradigm wherein an agent first explores the world
without any expert supervision and then distills its experience into a
goal-conditioned skill policy with a novel forward consistency loss. In our
framework, the role of the expert is only to communicate the goals (i.e., what
to imitate) during inference. The learned policy is then employed to mimic the
expert (i.e., how to imitate) after seeing just a sequence of images
demonstrating the desired task. Our method is 'zero-shot' in the sense that the
agent never has access to expert actions during training or for the task
demonstration at inference. We evaluate our zero-shot imitator in two
real-world settings: complex rope manipulation with a Baxter robot and
navigation in previously unseen office environments with a TurtleBot. Through
further experiments in VizDoom simulation, we provide evidence that better
mechanisms for exploration lead to learning a more capable policy which in turn
improves end task performance. Videos, models, and more details are available
at this https URL
| 1 | 0 | 0 | 1 | 0 | 0 |
Spreading in kinetic reaction-transport equations in higher velocity dimensions | In this paper, we extend and complement previous works about propagation in
kinetic reaction-transport equations. The model we study describes particles
moving according to a velocity-jump process, and proliferating according to a
reaction term of monostable type. We focus on the case of bounded velocities,
having dimension higher than one. We extend previous results obtained by the
first author with Calvez and Nadin in dimension one. We study the large
time/large scale hyperbolic limit via a Hamilton-Jacobi framework together
with the half-relaxed limits method. We deduce spreading results and the
existence of travelling wave solutions. A crucial difference from the one-dimensional case is the resolution of the spectral problem at the edge of
the front, that yields potential singular velocity distributions. As a
consequence, the minimal speed of propagation may not be determined by a first
order condition.
| 0 | 0 | 1 | 0 | 0 | 0 |
Matching of orbital integrals (transfer) and Roche Hecke algebra isomorphisms | Let $F$ be a non-Archimedean local field, $G$ a connected reductive group
defined and split over $F$, and $T$ a maximal $F$-split torus in $G$. Let
$\chi_0$ be a depth zero character of the maximal compact subgroup
$\mathcal{T}$ of $T(F)$. It gives by inflation a character $\rho$ of an Iwahori
subgroup $\mathcal{I}$ of $G(F)$ containing $\mathcal{T}$. From Roche, $\chi_0$
defines a split endoscopic group $G'$ of $G$, and there is an injective
morphism of ${\Bbb C}$-algebras $\mathcal{H}(G(F),\rho) \rightarrow
\mathcal{H}(G'(F),1_{\mathcal{I}'})$ where $\mathcal{H}(G(F),\rho)$ is the
Hecke algebra of compactly supported $\rho^{-1}$-spherical functions on $G(F)$
and $\mathcal{I}'$ is an Iwahori subgroup of $G'(F)$. This morphism restricts
to an injective morphism $\zeta: \mathcal{Z}(G(F),\rho)\rightarrow
\mathcal{Z}(G'(F),1_{\mathcal{I}'})$ between the centers of the Hecke algebras.
We prove here that a certain linear combination of morphisms analogous to
$\zeta$ realizes the transfer (matching of strongly $G$-regular semisimple
orbital integrals). If ${\rm char}(F)=p>0$, our result is unconditional only if
$p$ is large enough.
| 0 | 0 | 1 | 0 | 0 | 0 |
Testing atomic collision theory with the two-photon continuum of astrophysical nebulae | Accurate rates for energy-degenerate l-changing collisions are needed to
determine cosmological abundances and recombination. There are now several
competing theories for the treatment of this process, and it is not possible to
test these experimentally. We show that the H I two-photon continuum produced
by astrophysical nebulae is strongly affected by l-changing collisions. We
perform an analysis of the different underlying atomic processes and simulate
the recombination and two-photon spectrum of a nebula containing H and He. We
provide an extended set of effective recombination coefficients and updated
l-changing 2s-2p transition rates using several competing theories. In
principle, accurate astronomical observations could determine which theory is
correct.
| 0 | 1 | 0 | 0 | 0 | 0 |
Metrologically useful states of spin-1 Bose condensates with macroscopic magnetization | We study theoretically the usefulness of spin-1 Bose condensates with
macroscopic magnetization in a homogeneous magnetic field for quantum
metrology. We demonstrate Heisenberg scaling of the quantum Fisher information
for states in thermal equilibrium. The scaling applies to both
antiferromagnetic and ferromagnetic interactions. The effect persists as long
as fluctuations of magnetization are sufficiently small. Scaling of the quantum
Fisher information with the total particle number is derived within the
mean-field approach in the zero temperature limit and exactly in the high
magnetic field limit for any temperature. The precision gain is intuitively
explained owing to subtle features of the quasi-distribution function in phase
space.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Final Chapter In The Saga Of YIG | The magnetic insulator Yttrium Iron Garnet can be grown with exceptional
quality, has a ferrimagnetic transition temperature of nearly 600 K, and is
used in microwave and spintronic devices that can operate at room temperature.
The most accurate prior measurements of the magnon spectrum date back nearly 40
years, but cover only 3 of the lowest energy modes out of 20 distinct magnon
branches. Here we have used time-of-flight inelastic neutron scattering to
measure the full magnon spectrum throughout the Brillouin zone. We find that
the existing model of the excitation spectrum, well known from an earlier work
titled "The Saga of YIG", fails to describe the optical magnon modes. Using a
very general spin Hamiltonian, we show that the magnetic interactions are both
longer-ranged and more complex than was previously understood. The results
provide the basis for accurate microscopic models of the finite temperature
magnetic properties of Yttrium Iron Garnet, necessary for next-generation
electronic devices.
| 0 | 1 | 0 | 0 | 0 | 0 |
Modeling and Analysis of HetNets with mm-Wave Multi-RAT Small Cells Deployed Along Roads | We characterize a multi-tier network with classical macro cells and multi-radio-access-technology (RAT) small cells, which are able to operate in
microwave and millimeter-wave (mm-wave) bands. The small cells are assumed to
be deployed along roads modeled as a Poisson line process. This
characterization is more realistic as compared to the classical Poisson point
processes typically used in literature. In this context, we derive the
association and RAT selection probabilities of the typical user under various
system parameters such as the small cell deployment density and mm-wave antenna
gain, and with varying street densities. Finally, we calculate the signal to
interference plus noise ratio (SINR) coverage probability for the typical user
considering a tractable dominant interference based model for mm-wave
interference. Our analysis reveals the need to deploy more small cells per
street in cities with more streets to maintain coverage, and highlights that
mm-wave RAT in small cells can help to improve the SINR performance of the
users.
| 1 | 0 | 0 | 0 | 0 | 0 |
Ermakov-Painlevé II Symmetry Reduction of a Korteweg Capillarity System | A class of nonlinear Schrödinger equations involving a triad of power law
terms together with a de Broglie-Bohm potential is shown to admit symmetry
reduction to a hybrid Ermakov-Painlevé II equation which is linked, in turn,
to the integrable Painlevé XXXIV equation. A nonlinear Schrödinger
encapsulation of a Korteweg-type capillary system is thereby used in the
isolation of such an Ermakov-Painlevé II reduction valid for a multi-parameter
class of free energy functions. Iterated application of a Bäcklund
transformation then allows the construction of novel classes of exact solutions
of the nonlinear capillarity system in terms of Yablonskii-Vorob'ev polynomials
or classical Airy functions. A Painlevé XXXIV equation is derived for the
density in the capillarity system and seen to correspond to the symmetry
reduction of its Bernoulli integral of motion.
| 0 | 1 | 1 | 0 | 0 | 0 |
Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks | Matrix completion models are among the most common formulations of
recommender systems. Recent works have shown a boost in the performance of these
techniques when introducing the pairwise relationships between users/items in
the form of graphs, and imposing smoothness priors on these graphs. However,
such techniques do not fully exploit the local stationarity structures of
user/item graphs, and the number of parameters to learn is linear w.r.t. the
number of users and items. We propose a novel approach to overcome these
limitations by using geometric deep learning on graphs. Our matrix completion
architecture combines graph convolutional neural networks and recurrent neural
networks to learn meaningful statistical graph-structured patterns and the
non-linear diffusion process that generates the known ratings. This neural
network system requires a constant number of parameters independent of the
matrix size. We apply our method on both synthetic and real datasets, showing
that it outperforms state-of-the-art techniques.
| 1 | 0 | 0 | 1 | 0 | 0 |
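The graph-regularized completion idea summarized above can be illustrated with a plain gradient-descent sketch. This is a generic factorized formulation with hypothetical path-graph Laplacians standing in for the user/item graphs, not the authors' recurrent multi-graph architecture:

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of a simple path graph (a stand-in for a user/item graph)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def complete_matrix(M, mask, L_row, L_col, rank=2, lam=0.01, lr=0.02,
                    n_iter=3000, seed=0):
    """Gradient-descent sketch of graph-regularized low-rank completion:
    minimize ||mask * (U V^T - M)||_F^2
             + lam * (tr(U^T L_row U) + tr(V^T L_col V))."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iter):
        R = mask * (U @ V.T - M)  # residual on observed entries only
        U_new = U - lr * (R @ V + lam * (L_row @ U))
        V = V - lr * (R.T @ U + lam * (L_col @ V))
        U = U_new
    return U @ V.T

# Rank-1 ground truth with roughly half of the entries observed.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(10), rng.standard_normal(10))
mask = (rng.uniform(size=M.shape) < 0.5).astype(float)
M_hat = complete_matrix(M, mask, path_laplacian(10), path_laplacian(10))
fit_err = np.abs(mask * (M_hat - M)).sum() / mask.sum()
```

The Laplacian terms encourage rows (and columns) of the factors that are adjacent in the graph to stay close, which is the smoothness prior the abstract refers to; the parameter count is `rank * (m + n)` regardless of which entries are observed.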
Value Asymptotics in Dynamic Games on Large Horizons | This paper is concerned with two-person dynamic zero-sum games. Let the games in some family share common dynamics, running costs, and capabilities of the players,
and let these games differ in densities only. We show that the Dynamic
Programming Principle directly leads to the General Tauberian Theorem---that
the existence of a uniform limit of the value functions for the uniform distribution or for the exponential distribution implies that the value functions uniformly converge to the same limit for an arbitrary distribution from a large
class. No assumptions on strategies are necessary. Applications to differential
games and to the stochastic setting are considered.
| 0 | 0 | 1 | 0 | 0 | 0 |
Classical counterparts of quantum attractors in generic dissipative systems | In the context of dissipative systems, we show that for any quantum chaotic
attractor, a corresponding classical chaotic attractor can always be found. We
provide a general way to locate them, rooted in the structure of the
parameter space (which is typically bidimensional, accounting for the forcing
strength and dissipation parameters). In the cases where an approximate point-like quantum distribution is found, it can be associated with exceptionally large
regular structures. Moreover, supposedly anomalous quantum chaotic behaviour
can be very well reproduced by the classical dynamics plus Gaussian noise of
the size of an effective Planck constant $\hbar_{\rm eff}$. We give support to
our conjectures by means of two paradigmatic examples of quantum chaos and
transport theory. In particular, a dissipative driven system proves fundamental for extending the validity of our conjectures to generic cases.
| 0 | 1 | 0 | 0 | 0 | 0 |
Marginal likelihood based model comparison in Fuzzy Bayesian Learning | In a recent paper [1] we introduced the Fuzzy Bayesian Learning (FBL)
paradigm where expert opinions can be encoded in the form of fuzzy rule bases
and the hyper-parameters of the fuzzy sets can be learned from data using a
Bayesian approach. The present paper extends this work for selecting the most
appropriate rule base among a set of competing alternatives, which best
explains the data, by calculating the model evidence or marginal likelihood. We
explain why this is an attractive alternative over simply minimizing a mean
squared error metric of prediction and show the validity of the proposition
using synthetic examples and a real world case study in the financial services
sector.
| 0 | 0 | 0 | 1 | 0 | 0 |
Position-sensitive propagation of information on social media using social physics approach | The excitement and convergence of tweets on specific topics are well studied.
However, by utilizing the position information of tweets, it is also possible to analyze position-sensitive tweets. In this research, we focus on bomb
terrorist attacks and propose a method for separately analyzing the number of
tweets at the place where the incident occurred, nearby, and far away. We made measurements of position-sensitive tweets and suggest a theory to explain them.
This theory is an extension of the mathematical model of the hit phenomenon.
| 1 | 1 | 0 | 0 | 0 | 0 |
The application of Monte Carlo methods for learning generalized linear model | The Monte Carlo method is a broad class of computational algorithms that rely on
repeated random sampling to obtain numerical results. They are often used in
physical and mathematical problems and are most useful when it is difficult or
impossible to use other mathematical methods. Basically, many statisticians
have been increasingly drawn to Monte Carlo method in three distinct problem
classes: optimization, numerical integration, and generating draws from a
probability distribution. In this paper, we will introduce the Monte Carlo
method for calculating coefficients in Generalized Linear Models (GLMs), especially for logistic regression. Our main methods are the Metropolis-Hastings (MH) algorithm and Stochastic Approximation in Monte Carlo Computation (SAMC). For comparison, we also obtain results using the MLE method in R.
| 0 | 0 | 0 | 1 | 0 | 0 |
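The Metropolis-Hastings approach to logistic-regression coefficients described above can be sketched as follows. This is a minimal random-walk sampler under a flat prior; the variable names and toy data are hypothetical, and SAMC is not shown:

```python
import numpy as np

def mh_logistic(X, y, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for logistic-regression
    coefficients under a flat prior (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])

    def log_lik(b):
        z = X @ b
        # log p(y | X, b) for y in {0, 1}
        return np.sum(y * z - np.log1p(np.exp(z)))

    samples = []
    ll = log_lik(beta)
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
            beta, ll = prop, ll_prop
        samples.append(beta.copy())
    return np.array(samples)

# Toy data: intercept 0, true slope 2 on a single standardized feature.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.standard_normal(500)])
p = 1.0 / (1.0 + np.exp(-2.0 * X[:, 1]))
y = (rng.uniform(size=500) < p).astype(float)
est = mh_logistic(X, y)[10000:].mean(axis=0)  # posterior mean after burn-in
print(est)  # should be close to [0, 2]
```

With a flat prior the posterior mean tracks the MLE, so the sampled coefficients can be checked directly against, e.g., `glm(..., family = binomial)` in R, as the abstract suggests.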
Language Bootstrapping: Learning Word Meanings From Perception-Action Association | We address the problem of bootstrapping language acquisition for an
artificial system similarly to what is observed in experiments with human
infants. Our method works by associating meanings to words in manipulation
tasks, as a robot interacts with objects and listens to verbal descriptions of
the interactions. The model is based on an affordance network, i.e., a mapping
between robot actions, robot perceptions, and the perceived effects of these
actions upon objects. We extend the affordance model to incorporate spoken
words, which allows us to ground the verbal symbols to the execution of actions
and the perception of the environment. The model takes verbal descriptions of a
task as the input and uses temporal co-occurrence to create links between
speech utterances and the involved objects, actions, and effects. We show that
the robot is able to form useful word-to-meaning associations, even without
considering grammatical structure in the learning process and in the presence
of recognition errors. These word-to-meaning associations are embedded in the
robot's own understanding of its actions. Thus, they can be directly used to
instruct the robot to perform tasks and also allow us to incorporate context in
the speech recognition task. We believe that the encouraging results with our
approach may afford robots the capacity to acquire language descriptors in their operating environment, as well as shed some light on how this
challenging process develops with human infants.
| 1 | 0 | 0 | 1 | 0 | 0 |