| title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0-1) | phy (int64, 0-1) | math (int64, 0-1) | stat (int64, 0-1) | quantitative biology (int64, 0-1) | quantitative finance (int64, 0-1) |
---|---|---|---|---|---|---|---|
Learning to detect chest radiographs containing lung nodules using visual attention networks | Machine learning approaches hold great potential for the automated detection
of lung nodules in chest radiographs, but training the algorithms requires very
large amounts of manually annotated images, which are difficult to obtain. Weak
labels indicating whether a radiograph is likely to contain pulmonary nodules
are typically easier to obtain at scale by parsing historical free-text
radiological reports associated with the radiographs. Using a repository of
over 700,000 chest radiographs, in this study we demonstrate that promising
nodule detection performance can be achieved using weak labels through
convolutional neural networks for radiograph classification. We propose two
network architectures for the classification of images likely to contain
pulmonary nodules using both weak labels and manually-delineated bounding
boxes, when these are available. Annotated nodules are used at training time to
deliver a visual attention mechanism informing the model about its localisation
performance. The first architecture extracts saliency maps from high-level
convolutional layers and compares the estimated position of a nodule against
the ground truth, when this is available. A corresponding localisation error is
then back-propagated along with the softmax classification error. The second
approach consists of a recurrent attention model that learns to observe a short
sequence of smaller image portions through reinforcement learning. When a
nodule annotation is available at training time, the reward function is
modified accordingly so that exploring portions of the radiographs away from a
nodule incurs a larger penalty. Our empirical results demonstrate the potential
advantages of these architectures in comparison to competing methodologies.
| 0 | 0 | 0 | 1 | 0 | 0 |
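A minimal sketch of the first architecture's idea from the abstract above: a softmax classification error combined with a saliency-based localisation error whenever a bounding-box annotation is available. This is an illustrative PyTorch sketch under assumed layer sizes; the 1x1-conv saliency head and the down-sampled `box_mask` annotation are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyAttentionNet(nn.Module):
    """Toy CNN classifier that also emits a low-resolution saliency map."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.saliency_head = nn.Conv2d(32, 1, 1)   # 1x1 conv -> saliency logits
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.features(x)                        # (B, 32, H/4, W/4)
        saliency = self.saliency_head(h)            # (B, 1, H/4, W/4)
        pooled = h.mean(dim=(2, 3))                 # global average pooling
        return self.classifier(pooled), saliency

def combined_loss(logits, saliency, labels, box_mask=None, lam=1.0):
    """Softmax classification error plus, when a nodule annotation exists,
    a localisation error comparing the saliency map with the box mask."""
    loss = F.cross_entropy(logits, labels)
    if box_mask is not None:                        # weak label only -> skip
        loss = loss + lam * F.binary_cross_entropy_with_logits(saliency, box_mask)
    return loss
```

At training time, `box_mask` would be the annotated nodule bounding box rasterised to the saliency map's resolution; radiographs carrying only a weak label simply skip the localisation term.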
New conformal map for the Sinc approximation for exponentially decaying functions over the semi-infinite interval | The Sinc approximation has shown high efficiency for numerical methods in
many fields. Conformal maps play an important role in this success; that is, an
appropriate conformal map must be employed to elicit high performance of the
Sinc approximation. Appropriate conformal maps have been proposed for typical
cases; however, such maps may not be optimal. Thus, the performance of the Sinc
approximation may be improved by using another conformal map rather than an
existing map. In this paper, we propose a new conformal map for the case where
functions are defined over the semi-infinite interval and decay exponentially.
Then, we demonstrate in both theoretical and numerical ways that the
convergence rate is improved by replacing the existing conformal map with the
proposed map.
| 1 | 0 | 0 | 0 | 0 | 0 |
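For context, a generic form of the Sinc approximation combined with a conformal map $\psi$ (the standard construction, not the map proposed in the paper) reads

$$
f(x) \approx \sum_{k=-N}^{N} f\big(\psi(kh)\big)\, S(k,h)\big(\psi^{-1}(x)\big),
\qquad
S(k,h)(t) = \frac{\sin\left[\pi\,(t/h - k)\right]}{\pi\,(t/h - k)},
$$

where $h$ is the mesh size on $\mathbb{R}$. The achievable convergence rate is governed by the decay and analyticity of $f\circ\psi$, which is why replacing the conformal map can improve the rate for exponentially decaying $f$ over the semi-infinite interval.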
Evidence synthesis for stochastic epidemic models | In recent years the role of epidemic models in informing public health
policies has progressively grown. Models have become increasingly realistic and
more complex, requiring the use of multiple data sources to estimate all
quantities of interest. This review summarises the different types of
stochastic epidemic models that use evidence synthesis and highlights current
challenges.
| 0 | 0 | 0 | 1 | 0 | 0 |
The multi-resonant Lugiato-Lefever model | We introduce a new model describing multiple resonances in Kerr optical
cavities. It agrees quantitatively with the Ikeda map and predicts
complex phenomena such as super cavity solitons and coexistence of multiple
nonlinear states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression | Bayesian shrinkage methods have generated a lot of recent interest as tools
for high-dimensional regression and model selection. These methods naturally
facilitate tractable uncertainty quantification and incorporation of prior
information. A common feature of these models, including the Bayesian lasso,
global-local shrinkage priors, and spike-and-slab priors is that the
corresponding priors on the regression coefficients can be expressed as scale
mixtures of normals. While the three-step Gibbs sampler used to sample from the
often intractable associated posterior density has been shown to be
geometrically ergodic for several of these models (Khare and Hobert, 2013; Pal
and Khare, 2014), it has been demonstrated recently that convergence of this
sampler can still be quite slow in modern high-dimensional settings despite
this apparent theoretical safeguard. We propose a new method to draw from the
same posterior via a tractable two-step blocked Gibbs sampler. We demonstrate
that our proposed two-step blocked sampler exhibits vastly superior convergence
behavior compared to the original three-step sampler in high-dimensional
regimes on both real and simulated data. We also provide a detailed theoretical
underpinning to the new method in the context of the Bayesian lasso. First, we
derive explicit upper bounds for the (geometric) rate of convergence.
Furthermore, we demonstrate theoretically that while the original Bayesian
lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and
hence Hilbert-Schmidt). The trace class property has useful theoretical and
practical implications. It implies that the corresponding Markov operator is
compact, and its eigenvalues are summable. It also facilitates a rigorous
comparison of the two-step blocked chain with "sandwich" algorithms which aim
to improve performance of the two-step chain by inserting an inexpensive extra
step.
| 0 | 0 | 0 | 1 | 0 | 0 |
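A sketch of the two-step blocked update for the Bayesian lasso, assuming the standard Park and Casella (2008) hierarchy with the improper prior $\pi(\sigma^2) \propto 1/\sigma^2$; the paper's exact derivation may differ in details. Step 2 draws $(\sigma^2, \beta)$ jointly given the local scales by first drawing $\sigma^2$ with $\beta$ marginalised out.

```python
import numpy as np

def blocked_gibbs_bayesian_lasso(X, y, lam, n_iter=1000, seed=0):
    """Two-step blocked Gibbs sketch for the Bayesian lasso.

    Step 1: draw 1/tau_j^2 | beta, sigma^2 (Inverse-Gaussian, Park & Casella 2008).
    Step 2: draw (sigma^2, beta) | tau^2 as one block: first sigma^2 with beta
            marginalised out, then beta | sigma^2, tau^2 (multivariate normal).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2, draws = 1.0, []
    for _ in range(n_iter):
        # Step 1: 1/tau_j^2 ~ InvGauss(mean=sqrt(lam^2 sigma2 / beta_j^2), shape=lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))  # guard beta_j = 0
        inv_tau2 = rng.wald(mu, lam**2)
        # Step 2a: sigma2 | tau2, y.  By Woodbury,
        # y^T (I + X D X^T)^{-1} y = y^T y - y^T X A^{-1} X^T y,  A = X^T X + D^{-1}.
        A = XtX + np.diag(inv_tau2)
        A_inv_Xty = np.linalg.solve(A, Xty)
        quad = y @ y - Xty @ A_inv_Xty
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / quad)   # InvGamma(n/2, quad/2)
        # Step 2b: beta | sigma2, tau2 ~ N(A^{-1} X^T y, sigma2 A^{-1})
        L = np.linalg.cholesky(np.linalg.inv(A))
        beta = A_inv_Xty + np.sqrt(sigma2) * (L @ rng.standard_normal(p))
        draws.append(beta.copy())
    return np.array(draws)
```

The blocking is visible in the loop body: the two draws of step 2 share the same matrix factorisation, so the blocked chain costs essentially the same per iteration as the three-step chain.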
Inclusion and Majorization Properties of Certain Subclasses of Multivalent Analytic Functions Involving a Linear Operator | The object of the present paper is to study certain properties and
characteristics of the operator $Q_{p,\beta}^{\alpha}$defined on p-valent
analytic function by using technique of differential subordination.We also
obtained result involving majorization problems by applying the operator to
p-valent analytic function.Relevant connection of the the result are presented
here with those obtained by earlier worker are pointed out.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tameness in least fixed-point logic and McColm's conjecture | We investigate fundamental model-theoretic dividing lines (the order
property, the independence property, the strict order property, and the tree
property 2) in the context of least fixed-point (LFP) logic over families of
finite structures. We show that, unlike the first-order (FO) case, the order
property and the independence property are equivalent, but all of the other
natural implications are strict. We identify the LFP strict order property with
proficiency, a well-studied notion in finite model theory.
Gregory McColm conjectured that FO and LFP definability coincide over a
family C of finite structures exactly when C is non-proficient. McColm's
conjecture is false in general, but as an application of our results, we show
that it holds under standard FO tameness assumptions adapted to families of
finite structures.
| 1 | 0 | 1 | 0 | 0 | 0 |
Enhanced activity of the Southern Taurids in 2005 and 2015 | The paper presents an analysis of Polish Fireball Network (PFN) observations
of enhanced activity of the Southern Taurid meteor shower in 2005 and 2015. In
2005, between October 20 and November 10, seven stations of PFN determined 107
accurate orbits with 37 of them belonging to the Southern Taurid shower. In the
same period of 2015, 25 stations of PFN recorded 719 accurate orbits with 215
orbits of the Southern Taurids. Both maxima were rich in fireballs which
accounted for 17% of all observed Taurids. The whole sample of Taurid fireballs
is quite uniform in terms of the beginning and terminal heights of the
trajectory. On the other hand, a clear decreasing trend in geocentric velocity
with increasing solar longitude was observed.
Orbital parameters of observed Southern Taurids were compared to orbital
elements of Near Earth Objects (NEO) from the NEODYS-2 database. Using the
Drummond criterion $D'$ with threshold as low as 0.06, we found over 100
fireballs with orbits strikingly similar to that of asteroid 2015 TX24. Several
dozen Southern Taurids have orbits similar to those of three other asteroids,
namely 2005 TF50, 2005 UR and 2010 TU149. All of the mentioned NEOs have
orbital periods very close to the 7:2 resonance with Jupiter's orbit. This
confirms the theory of a
"resonant meteoroid swarm" within the Taurid complex that predicts that in
specific years, the Earth is hit by a greater number of meteoroids capable of
producing fireballs.
| 0 | 1 | 0 | 0 | 0 | 0 |
Comparing anticyclotomic Selmer groups of positive coranks for congruent modular forms | We study the variation of Iwasawa invariants of the anticyclotomic Selmer
groups of congruent modular forms under the Heegner hypothesis. In particular,
we show that even if the Selmer groups we study may have positive coranks, the
mu-invariant vanishes for one modular form if and only if it vanishes for the
other, and that their lambda-invariants are related by an explicit formula.
This generalizes results of Greenberg-Vatsal for the cyclotomic extension, as
well as results of Pollack-Weston and Castella-Kim-Longo for the anticyclotomic
extension when the Selmer groups in question are cotorsion.
| 0 | 0 | 1 | 0 | 0 | 0 |
Predicting the Results of LTL Model Checking using Multiple Machine Learning Algorithms | In this paper, we study how to predict the results of LTL model checking
using some machine learning algorithms. Some Kripke structures and LTL formulas
and their model checking results are made up data set. The approaches based on
the Random Forest (RF), K-Nearest Neighbors (KNN), Decision tree (DT), and
Logistic Regression (LR) are used to training and prediction. The experiment
results show that the average computation efficiencies of the RF, LR, DT, and
KNN-based approaches are 2066181, 2525333, 1894000 and 294 times than that of
the existing approach, respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
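A minimal sketch of the training/prediction pipeline with the four classifiers named in the abstract, using scikit-learn. The feature encoding of the (Kripke structure, LTL formula) pairs is not specified in the abstract, so the stand-in data below is purely hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: numeric features extracted from (Kripke structure, LTL formula) pairs,
# y: model checking outcome (1 = formula holds, 0 = it does not).
X = np.random.rand(1000, 20)                     # stand-in features
y = np.random.randint(0, 2, 1000)                # stand-in outcomes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                        # training
    acc = accuracy_score(y_te, model.predict(X_te))  # prediction quality
    print(name, acc)
```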
Magnetoelectric properties of the layered room-temperature antiferromagnets BaMn2P2 and BaMn2As2 | Properties of two ThCr2Si2-type materials are discussed within the context of
their established structural and magnetic symmetries. Both materials develop
collinear, G-type antiferromagnetic order above room temperature, and magnetic
ions occupy acentric sites in centrosymmetric structures. We refute a previous
conjecture that BaMn2As2 is an example of a magnetoelectric material with
hexadecapole order by exposing flaws in supporting arguments, principally, an
omission of discrete symmetries enforced by the symmetry of sites used by Mn
ions, and also improper classifications of the primary and secondary
order parameters. Implications for future experiments designed to improve our
understanding of BaMn2P2 and BaMn2As2 magnetoelectric properties, using neutron
and x-ray diffraction, are examined. Patterns of Bragg spots caused by
conventional magnetic dipoles and magnetoelectric (Dirac) multipoles are
predicted to be distinct, which raises the intriguing possibility of a unique
and comprehensive examination of the magnetoelectric state by diffraction. A
roto-inversion operation in Mn site symmetry is ultimately responsible for the
distinguishing features.
| 0 | 1 | 0 | 0 | 0 | 0 |
Discrete Spectrum Reconstruction using Integral Approximation Algorithm | An inverse problem in spectroscopy is considered. The objective is to restore
the discrete spectrum from observed spectrum data, taking into account the
spectrometer's line spread function. The problem is reduced to the solution of a
system of linear-nonlinear equations (SLNE) with respect to the intensities and
frequencies of the discrete spectral lines. The SLNE is linear with respect to
the lines' intensities and nonlinear with respect to the lines' frequencies. An
integral approximation algorithm is proposed for the solution of this SLNE. The
algorithm combines the solution of linear integral equations with the solution
of a system of linear algebraic equations and avoids nonlinear equations. Numerical
examples of the application of the technique, both to synthetic and
experimental spectra, demonstrate the efficacy of the proposed approach in
enabling an effective enhancement of the spectrometer's resolution.
| 0 | 0 | 1 | 0 | 0 | 0 |
Tackling Over-pruning in Variational Autoencoders | Variational autoencoders (VAE) are directed generative models that learn
factorial latent variables. As noted by Burda et al. (2015), these models
exhibit the problem of factor over-pruning where a significant number of
stochastic factors fail to learn anything and become inactive. This can limit
their modeling power and their ability to learn diverse and meaningful latent
representations. In this paper, we evaluate several methods to address this
problem and propose a more effective model-based approach called the epitomic
variational autoencoder (eVAE). The so-called epitomes of this model are groups
of mutually exclusive latent factors that compete to explain the data. This
approach helps prevent inactive units since each group is pressured to explain
the data. We compare the approaches with qualitative and quantitative results
on the MNIST and TFD datasets. Our results show that eVAE makes efficient use of
model capacity and generalizes better than VAE.
| 1 | 0 | 0 | 0 | 0 | 0 |
Elastic collision and molecule formation of spatiotemporal light bullets in a cubic-quintic nonlinear medium | We consider the statics and dynamics of a stable, mobile three-dimensional
(3D) spatiotemporal light bullet in a cubic-quintic nonlinear medium with a
focusing cubic nonlinearity above a critical value and any defocusing quintic
nonlinearity. The 3D light bullet can propagate with a constant velocity in any
direction. Stability of the light bullet under a small perturbation is
established numerically. We consider frontal collisions between two light
bullets with different relative velocities. At large velocities the collision is
elastic, with the bullets emerging after the collision with practically no
distortion. At small velocities the two bullets coalesce to form a bullet
molecule. In a small range of intermediate velocities, the localized bullets can
form a single entity which expands indefinitely, leading to the destruction of
the bullets after the collision. The present study is based on an analytic
Lagrange variational
approximation and a full numerical solution of the 3D nonlinear Schrödinger
equation.
| 0 | 1 | 0 | 0 | 0 | 0 |
From Query-By-Keyword to Query-By-Example: LinkedIn Talent Search Approach | One key challenge in talent search is to translate complex criteria of a
hiring position into a search query, while it is relatively easy for a searcher
to list examples of suitable candidates for a given position. To improve search
efficiency, we propose the next generation of talent search at LinkedIn, also
referred to as Search By Ideal Candidates. In this system, a searcher provides
one or several ideal candidates as the input to hire for a given position. The
system then generates a query based on the ideal candidates and uses it to
retrieve and rank results. Shifting from the traditional Query-By-Keyword to
this new Query-By-Example system poses a number of challenges: How to generate
a query that best describes the candidates? When moving to a completely
different paradigm, how does one leverage previous product logs to learn
ranking models and/or evaluate the new system with no existing usage logs?
Finally, given the different nature between the two search paradigms, the
ranking features typically used for Query-By-Keyword systems might not be
optimal for Query-By-Example. This paper describes our approach to solving
these challenges. We present experimental results confirming the effectiveness
of the proposed solution, particularly on query building and search ranking
tasks. As of the writing of this paper, the new system is available to all
LinkedIn members.
| 1 | 0 | 0 | 0 | 0 | 0 |
Cyber-Physical System for Energy-Efficient Stadium Operation: Methodology and Experimental Validation | The environmental impacts of medium to large scale buildings receive
substantial attention in research, industry, and media. This paper studies the
energy savings potential of a commercial soccer stadium during day-to-day
operation. Buildings of this kind are characterized by special purpose system
installations like grass heating systems and by event-driven usage patterns.
This work presents a methodology to holistically analyze the stadium's
characteristics and integrate its existing instrumentation into a
Cyber-Physical System, enabling different control strategies to be deployed
flexibly. In total, seven different strategies for controlling the studied
stadium's grass heating system are developed and tested in operation.
Experiments in the winter season 2014/2015 validated the strategies' impacts within
the real operational setup of the Commerzbank Arena, Frankfurt, Germany. With
95% confidence, these experiments saved up to 66% of median daily
weather-normalized energy consumption. Extrapolated to an average heating
season, this corresponds to savings of 775 MWh and 148 t of CO2 emissions. In
winter 2015/2016 an additional predictive nighttime heating experiment targeted
lower temperatures, which increased the savings to up to 85%, equivalent to 1
GWh (197 t CO2) in an average winter. Beyond achieving significant energy
savings, the different control strategies also met the target temperature
levels to the satisfaction of the stadium's operational staff. While the case
study constitutes a significant part, the discussions dedicated to the
transferability of this work to other stadiums and other building types show
that the concepts and the approach are of a general nature. Furthermore, this
work demonstrates the first successful application of Deep Belief Networks to
regress and predict the thermal evolution of building systems.
| 1 | 0 | 0 | 0 | 0 | 0 |
On Bayesian Exponentially Embedded Family for Model Order Selection | In this paper, we derive a Bayesian model order selection rule by using the
exponentially embedded family method, termed Bayesian EEF. Unlike many other
Bayesian model selection methods, the Bayesian EEF can use vague proper priors
and improper noninformative priors to be objective in the elicitation of
parameter priors. Moreover, the penalty term of the rule is shown to be the sum
of half of the parameter dimension and the estimated mutual information between
parameter and observed data. This helps to reveal the EEF mechanism in
selecting model orders and may provide new insights into the open problems of
choosing an optimal penalty term for model order selection and choosing a good
prior from information theoretic viewpoints. The important example of linear
model order selection is given to illustrate the algorithms and arguments.
Lastly, the Bayesian EEF that uses Jeffreys prior coincides with the EEF rule
derived by frequentist strategies. This shows another interesting relationship
between the frequentist and Bayesian philosophies for model selection.
| 0 | 0 | 0 | 1 | 0 | 0 |
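A schematic rendering of the rule as stated in the abstract (the paper's exact expression may differ): for a candidate order $k$ with parameter vector $\theta_k$ of dimension $d_k$,

$$
\mathrm{EEF}(k) \;=\; \log p\big(y;\hat{\theta}_k\big) \;-\; \left( \frac{d_k}{2} + \hat{I}(\theta_k; y) \right),
$$

and the order maximizing $\mathrm{EEF}(k)$ is selected, so the penalty grows with both the parameter dimension and the estimated mutual information between parameter and data.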
Estimating solar flux density at low radio frequencies using a sky brightness model | Sky models have been used in the past to calibrate individual low radio
frequency telescopes. Here we generalize this approach from a single antenna to
a two element interferometer and formulate the problem in a manner to allow us
to estimate the flux density of the Sun using the normalized cross-correlations
(visibilities) measured on a low resolution interferometric baseline. For wide
field-of-view instruments, typically the case at low radio frequencies, this
approach can provide robust absolute solar flux calibration for well
characterized antennas and receiver systems. It can provide a reliable and
computationally lean method for extracting parameters of physical interest
using a small fraction of the voluminous interferometric data, which can be
prohibitively compute-intensive to calibrate and image using conventional
approaches. We demonstrate this technique by applying it to data from the
Murchison Widefield Array and assess its reliability.
| 0 | 1 | 0 | 0 | 0 | 0 |
Revisiting Activation Regularization for Language RNNs | Recurrent neural networks (RNNs) serve as a fundamental building block for
many sequence tasks across natural language processing. Recent research has
focused on recurrent dropout techniques or custom RNN cells in order to improve
performance. Both of these can require substantial modifications to the machine
learning model or to the underlying RNN configurations. We revisit traditional
regularization techniques, specifically L2 regularization on RNN activations
and slowness regularization over successive hidden states, to improve the
performance of RNNs on the task of language modeling. Both of these techniques
require minimal modification to existing RNN architectures and result in
performance improvements comparable or superior to more complicated
regularization techniques or custom cell architectures. These regularization
techniques can be used without any modification on optimized LSTM
implementations such as the NVIDIA cuDNN LSTM.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-equilibrium time dynamics of genetic evolution | Biological systems are typically highly open, non-equilibrium systems that
are very challenging to understand from a statistical mechanics perspective.
While statistical treatments of evolutionary biological systems have a long and
rich history, the time-dependent non-equilibrium dynamics have received less
attention. In this paper we first derive a generalized master equation
in the genotype space for diploid organisms incorporating the processes of
selection, mutation, recombination, and reproduction. The master equation is
defined in terms of continuous time and can handle an arbitrary number of gene
loci and alleles, and can be defined in terms of an absolute population or
probabilities. We examine and analytically solve several prototypical cases
which illustrate the interplay of the various processes and discuss the
timescales of their evolution. The entropy production during the evolution
towards steady state is calculated and we find that it agrees with predictions
from non-equilibrium statistical mechanics where it is large when the
population distribution evolves towards a more viable genotype. The stability
of the non-equilibrium steady state is confirmed using the Glansdorff-Prigogine
criterion.
| 0 | 0 | 0 | 0 | 1 | 0 |
On the scaling of entropy viscosity in high order methods | In this work, we outline the entropy viscosity method and discuss how the
choice of scaling influences the size of viscosity for a simple shock problem.
We present examples to illustrate the performance of the entropy viscosity
method under two distinct scalings.
| 0 | 0 | 1 | 0 | 0 | 0 |
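For reference, a commonly used form of the entropy viscosity (a Guermond-type construction quoted here as an assumption about the setup, not necessarily the note's exact definition): on a cell $K$ of size $h_K$,

$$
\nu_E\big|_K \;=\; \min\!\left( c_{\max}\, h_K\, \|u\|_{L^\infty(K)},\;\;
c_E\, h_K^{2}\, \frac{\|r_E\|_{L^\infty(K)}}{\|E - \bar{E}\|_{L^\infty(\Omega)}} \right),
$$

where $r_E$ is the residual of the entropy equation and $\bar{E}$ a spatial average of the entropy; the normalization in the denominator is precisely the kind of scaling whose influence on the size of the viscosity is at issue.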
An Architecture Combining Convolutional Neural Network (CNN) and Support Vector Machine (SVM) for Image Classification | Convolutional neural networks (CNNs) are similar to "ordinary" neural
networks in the sense that they are made up of hidden layers consisting of
neurons with "learnable" parameters. These neurons receive inputs, perform a
dot product, and then follow it with a non-linearity. The whole network
expresses the mapping between raw image pixels and their class scores.
Conventionally, the Softmax function is the classifier used at the last layer
of this network. However, there have been studies (Alalshekmubarak and Smith,
2013; Agarap, 2017; Tang, 2013) conducted to challenge this norm. The cited
studies introduce the usage of linear support vector machine (SVM) in an
artificial neural network architecture. This project is yet another take on the
subject, and is inspired by (Tang, 2013). Empirical data has shown that the
CNN-SVM model was able to achieve a test accuracy of ~99.04% using the MNIST
dataset (LeCun, Cortes, and Burges, 2010). On the other hand, the CNN-Softmax
was able to achieve a test accuracy of ~99.23% using the same dataset. Both
models were also tested on the recently-published Fashion-MNIST dataset (Xiao,
Rasul, and Vollgraf, 2017), which is supposed to be a more difficult image
classification dataset than MNIST (Zalandoresearch, 2017). This proved to be
the case as CNN-SVM reached a test accuracy of ~90.72%, while the CNN-Softmax
reached a test accuracy of ~91.86%. The said results may be improved if data
preprocessing techniques were employed on the datasets, and if the base CNN
model were relatively more sophisticated than the one used in this study.
| 1 | 0 | 0 | 1 | 0 | 0 |
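A sketch of the swap at the last layer: replacing softmax cross-entropy with a one-vs-rest squared-hinge (L2-SVM) objective in the spirit of Tang (2013). The PyTorch rendering below is illustrative, not the study's exact code.

```python
import torch

def l2_svm_loss(scores, labels):
    """One-vs-rest squared-hinge (L2-SVM) loss on the CNN's raw output scores.

    scores: (batch, n_classes) raw scores; labels: (batch,) class indices.
    """
    targets = torch.full_like(scores, -1.0)
    targets.scatter_(1, labels.unsqueeze(1), 1.0)   # +1 for true class, -1 otherwise
    margins = torch.clamp(1.0 - targets * scores, min=0.0)
    return margins.pow(2).sum(dim=1).mean()
```

Training then proceeds exactly as with cross-entropy; only the criterion applied to the final linear layer changes.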
Derivation of the cutoff length from the quantum quadratic enhancement of a mass in vacuum energy constant Lambda | Ultraviolet self-interaction energies in field theory sometimes contain
meaningful physical quantities. The self-energies in theories such as classical
electrodynamics are usually subtracted from the rest mass. For the consistent
treatment of energies as sources of curvature in the Einstein field equations,
this study includes these subtracted self-energies into vacuum energy expressed
by the constant Lambda (as used in, e.g., Lambda-CDM). In this study, the
self-energies in electrodynamics and macroscopic classical Einstein field
equations are examined, using the formalisms with the ultraviolet cutoff
scheme. One of the cutoff formalisms is the field theory in terms of the
step-function-type basis functions, developed by the present authors. The other
is a continuum theory of a fundamental particle with the same cutoff length.
Based on the effectiveness of the continuum theory with the cutoff length shown
in the examination, the dominant self-energy is the quadratic term of the Higgs
field at a quantum level (classical self-energies are reduced to logarithmic
forms by quantum corrections). The cutoff length is then determined to
reproduce today's tiny value of Lambda for vacuum energy. Additionally, a field
with nonperiodic vanishing boundary conditions is treated, showing that the
field has no zero-point energy.
| 0 | 1 | 0 | 0 | 0 | 0 |
Exact MAP Inference by Avoiding Fractional Vertices | Given a graphical model, one essential problem is MAP inference, that is,
finding the most likely configuration of states according to the model.
Although this problem is NP-hard, large instances can be solved in practice. A
major open question is to explain why this is true. We give a natural condition
under which we can provably perform MAP inference in polynomial time. We
require that the number of fractional vertices in the LP relaxation exceeding
the optimal solution is bounded by a polynomial in the problem size. This
resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for
general LP relaxations of integer programs, known techniques can only handle a
constant number of fractional vertices whose value exceeds the optimal
solution. We experimentally verify this condition and demonstrate how efficient
various integer programming methods are at removing fractional solutions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Study on a Poisson's Equation Solver Based On Deep Learning Technique | In this work, we investigated the feasibility of applying deep learning
techniques to solve Poisson's equation. A deep convolutional neural network is
set up to predict the distribution of electric potential in 2D or 3D cases.
With proper training data generated from a finite difference solver, the strong
approximation capability of the deep convolutional neural network allows it to
make correct prediction given information of the source and distribution of
permittivity. With applications of L2 regularization, numerical experiments
show that the prediction error of 2D cases can reach below 1.5\% and the
prediction error of 3D cases can reach below 3\%, with a significant reduction in
CPU time compared with the traditional solver based on finite difference
methods.
| 1 | 1 | 0 | 0 | 0 | 0 |
On the compressibility of the transition-metal carbides and nitrides alloys Zr_xNb_{1-x}C and Zr_xNb_{1-x}N | The 4d-transition-metals carbides (ZrC, NbC) and nitrides (ZrN, NbN) in the
rocksalt structure, as well as their ternary alloys, have been recently studied
by means of a first-principles full potential linearized augmented plane waves
method within the local density approximation. These materials are important
because of their interesting mechanical and physical properties, which make
them suitable for many technological applications. Here, by using a simple
theoretical model, we estimate the bulk moduli of their ternary alloys
Zr$_x$Nb$_{1-x}$C and Zr$_x$Nb$_{1-x}$N in terms of the bulk moduli of the end
members alone. The results are comparable to those deduced from the
first-principles calculations.
| 0 | 1 | 0 | 0 | 0 | 0 |
On multifractals: a non-linear study of actigraphy data | This work aimed, to determine the characteristics of activity series from
fractal geometry concepts application, in addition to evaluate the possibility
of identifying individuals with fibromyalgia. Activity level data were
collected from 27 healthy subjects and 27 fibromyalgia patients, with the use
of clock-like devices equipped with accelerometers, for about four weeks, all
day long. The activity series were evaluated through fractal and multifractal
methods. Hurst exponent analysis exhibited values consistent with other studies
($H>0.5$) for both groups ($H=0.98\pm0.04$ for healthy subjects and
$H=0.97\pm0.03$ for fibromyalgia patients); however, it is not possible to
distinguish between the two groups by such analysis. Activity time series also
exhibited a multifractal pattern. A paired analysis of the spectra indices for
the sleep and awake states revealed differences between healthy subjects and
fibromyalgia patients. The individuals feature differences between the awake and
sleep states, with statistically significant differences for $\alpha_{q-} -
\alpha_{0}$ in healthy subjects ($p = 0.014$) and $D_{0}$ in patients with
fibromyalgia ($p = 0.013$). The approach has proven to be a viable option for
the characterisation of such signals and was able to distinguish between the
healthy and fibromyalgia groups. This outcome suggests changes in the
physiologic mechanisms of movement control.
| 0 | 1 | 0 | 1 | 0 | 0 |
Exact semi-separation of variables in waveguides with nonplanar boundaries | Series expansions of unknown fields $\Phi=\sum\varphi_n Z_n$ in elongated
waveguides are commonly used in acoustics, optics, geophysics, water waves and
other applications, in the context of coupled-mode theories (CMTs). The
transverse functions $Z_n$ are determined by solving local Sturm-Liouville
problems (reference waveguides). In most cases, the boundary conditions
assigned to $Z_n$ cannot be compatible with the physical boundary conditions of
$\Phi$, leading to slowly convergent series, and rendering CMTs mild-slope
approximations. In the present paper, the heuristic approach introduced in
(Athanassoulis & Belibassakis 1999, J. Fluid Mech. 389, 275-301) is generalized
and justified. It is proved that an appropriately enhanced series expansion
becomes an exact, rapidly-convergent representation of the field $\Phi$, valid
for any smooth, nonplanar boundaries and any smooth enough $\Phi$. This series
expansion can be differentiated termwise everywhere in the domain, including
the boundaries, implementing an exact semi-separation of variables for
non-separable domains. The efficiency of the method is illustrated by solving a
boundary value problem for the Laplace equation, and computing the
corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations
for nonlinear water waves. The present method provides accurate results with
only a few modes for quite general domains. Extensions to general waveguides
are also discussed.
| 0 | 0 | 1 | 0 | 0 | 0 |
Universality for eigenvalue algorithms on sample covariance matrices | We prove a universal limit theorem for the halting time, or iteration count,
of the power/inverse power methods and the QR eigenvalue algorithm.
Specifically, we analyze the required number of iterations to compute extreme
eigenvalues of random, positive-definite sample covariance matrices to within a
prescribed tolerance. The universality theorem provides a complexity estimate
for the algorithms which, in this random setting, holds with high probability.
The method of proof relies on recent results on the statistics of the
eigenvalues and eigenvectors of random sample covariance matrices (i.e.,
delocalization, rigidity and edge universality).
| 0 | 0 | 1 | 0 | 0 | 0 |
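An illustrative numpy experiment in the spirit of the setup: count power-method iterations needed to approximate the top eigenvalue of a random sample covariance matrix to a fixed tolerance. The stopping rule (successive Rayleigh quotients) and the dimensions are assumptions, not the paper's exact protocol.

```python
import numpy as np

def power_method_halting_time(A, tol=1e-8, max_iter=10_000, seed=0):
    """Iterations needed for the power method to stabilise the top
    eigenvalue estimate of a positive-definite matrix to within `tol`."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam_old = np.inf
    for k in range(1, max_iter + 1):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam = v @ A @ v                       # Rayleigh quotient
        if abs(lam - lam_old) < tol:
            return k
        lam_old = lam
    return max_iter

# Random sample covariance matrix: W = X X^T / m, with X an n x m Gaussian matrix.
n, m = 200, 400
X = np.random.default_rng(1).standard_normal((n, m))
print(power_method_halting_time(X @ X.T / m))
```

Repeating this over many random draws yields the empirical distribution of the halting time whose universal limit the paper studies.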
Dynamical Isometry is Achieved in Residual Networks in a Universal Way for any Activation Function | We demonstrate that in residual neural networks (ResNets) dynamical isometry
is achievable irrespective of the activation function used. We do that by
deriving, with the help of Free Probability and Random Matrix Theories, a
universal formula for the spectral density of the input-output Jacobian at
initialization, in the large network width and depth limit. The resulting
singular value spectrum depends on a single parameter, which we calculate for a
variety of popular activation functions, by analyzing the signal propagation in
the artificial neural network. We corroborate our results with numerical
simulations of both random matrices and ResNets applied to the CIFAR-10
classification problem. Moreover, we study the consequence of this universal
behavior for the initial and late phases of the learning processes. We conclude
by drawing attention to the simple fact that initialization acts as a
confounding factor between the choice of activation function and the rate of
learning. We propose that in ResNets this can be resolved based on our results,
by ensuring the same level of dynamical isometry at initialization.
| 0 | 0 | 0 | 1 | 0 | 0 |
Similarity-based Multi-label Learning | Multi-label classification is an important learning problem with many
applications. In this work, we propose a principled similarity-based approach
for multi-label learning called SML. We also introduce a similarity-based
approach for predicting the label set size. The experimental results
demonstrate the effectiveness of SML for multi-label classification where it is
shown to compare favorably with a wide variety of existing algorithms across a
range of evaluation criteria.
| 1 | 0 | 0 | 1 | 0 | 0 |
Burst Synchronization in A Scale-Free Neuronal Network with Inhibitory Spike-Timing-Dependent Plasticity | We are concerned about burst synchronization (BS), related to neural
information processes in health and disease, in the Barabási-Albert
scale-free network (SFN) composed of inhibitory bursting Hindmarsh-Rose
neurons. This inhibitory neuronal population has adaptive dynamic synaptic
strengths governed by the inhibitory spike-timing-dependent plasticity (iSTDP).
In previous works without considering iSTDP, BS was found to appear in a range
of noise intensities for fixed synaptic inhibition strengths. In contrast, in
our present work, we take into consideration iSTDP and investigate its effect
on BS by varying the noise intensity. Our new main result is to find occurrence
of a Matthew effect in inhibitory synaptic plasticity: good BS gets better via
LTD, while bad BS gets worse via LTP. This kind of Matthew effect in inhibitory
synaptic plasticity is in contrast to that in excitatory synaptic plasticity
where good (bad) synchronization gets better (worse) via LTP (LTD). We note
that, due to inhibition, the roles of LTD and LTP in inhibitory synaptic
plasticity are reversed in comparison with those in excitatory synaptic
plasticity. Moreover, emergences of LTD and LTP of synaptic inhibition
strengths are intensively investigated via a microscopic method based on the
distributions of time delays between the pre- and the post-synaptic burst onset
times. Finally, in the presence of iSTDP we investigate the effects of network
architecture on BS by varying the symmetric attachment degree $l^*$ and the
asymmetry parameter $\Delta l$ in the SFN.
| 0 | 0 | 0 | 0 | 1 | 0 |
The Arrow of Time in the collapse of collisionless self-gravitating systems: non-validity of the Vlasov-Poisson equation during violent relaxation | The collapse of a collisionless self-gravitating system, with the fast
achievement of a quasi-stationary state, is driven by violent relaxation, with
a typical particle interacting with the time-changing collective potential. It
is traditionally assumed that this evolution is governed by the Vlasov-Poisson
equation, in which case entropy must be conserved. We run N-body simulations of
isolated self-gravitating systems, using three simulation codes: NBODY-6
(direct summation without softening), NBODY-2 (direct summation with softening)
and GADGET-2 (tree code with softening), for different numbers of particles and
initial conditions. At each snapshot, we estimate the Shannon entropy of the
distribution function with three different techniques: Kernel, Nearest Neighbor
and EnBiD. For all simulation codes and estimators, the entropy evolution
converges to the same limit as N increases. During violent relaxation, the
entropy has a fast increase followed by damping oscillations, indicating that
violent relaxation must be described by a kinetic equation other than the
Vlasov-Poisson, even for N as large as that of astronomical structures. This
indicates that violent relaxation cannot be described by a time-reversible
equation, shedding some light on the so-called "fundamental paradox of stellar
dynamics". The long-term evolution is well described by the orbit-averaged
Fokker-Planck model, with Coulomb logarithm values in the expected range 10-12.
By means of NBODY-2, we also study the dependence of the 2-body relaxation
time-scale on the softening length. The approach presented in the current work
can potentially provide a general method for testing any kinetic equation
intended to describe the macroscopic evolution of N-body systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Numerical Observation of Parafermion Zero Modes and their Stability in 2D Topological States | The possibility of realizing non-Abelian excitations (non-Abelions) in
two-dimensional (2D) Abelian states of matter has generated a lot of interest
recently. A well-known example of such non-Abelions is parafermion zero modes
(PFZMs), which can be realized at the endpoints of so-called genons in
fractional quantum Hall (FQH) states or fractional Chern insulators (FCIs). In
this letter, we discuss some known signatures of PFZMs and also introduce some
novel ones. In particular, we show that the topological entanglement entropy
(TEE) shifts by a quantized value after crossing PFZMs. Utilizing those
signatures, we present the first large-scale numerical study of PFZMs and their
stability against perturbations in both FQH states and FCIs within the
density matrix renormalization group (DMRG) framework. Our results can help
build a closer connection with future experiments on FQH states with genons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stochastic Optimal Control of Epidemic Processes in Networks | We approach the development of models and control strategies of
susceptible-infected-susceptible (SIS) epidemic processes from the perspective
of marked temporal point processes and stochastic optimal control of stochastic
differential equations (SDEs) with jumps. In contrast to previous work, this
novel perspective is particularly well-suited to make use of fine-grained data
about disease outbreaks and lets us overcome the shortcomings of current
control strategies. Our control strategy resorts to treatment intensities to
determine whom to treat and when to do so to minimize the number of infected
individuals over time. Preliminary experiments with synthetic data show that
our control strategy consistently outperforms several alternatives. Looking
into the future, we believe our methodology provides a promising step towards
the development of practical data-driven control strategies of epidemic
processes.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improved upper bounds in the moving sofa problem | The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape
of maximal area that can move around a right-angled corner in a hallway of unit
width. It is known that a maximal area shape exists, and that its area is at
least 2.2195... - the area of an explicit construction found by Gerver in 1992
- and at most $2\sqrt{2}=2.82...$, with the lower bound being conjectured as
the true value. We prove a new and improved upper bound of 2.37. The method
involves a computer-assisted proof scheme that can be used to rigorously derive
further improved upper bounds that converge to the correct value.
| 0 | 0 | 1 | 0 | 0 | 0 |
Neural correlates of episodic memory in the Memento cohort | IntroductionThe free and cued selective reminding test is used to identify
memory deficits in mild cognitive impairment and demented patients. It allows
assessing three processes: encoding, storage, and recollection of verbal
episodic memory.MethodsWe investigated the neural correlates of these three
memory processes in a large cohort study. The Memento cohort enrolled 2323
outpatients presenting either with subjective cognitive decline or mild
cognitive impairment who underwent cognitive, structural MRI and, for a subset,
fluorodeoxyglucose--positron emission tomography evaluations.ResultsEncoding
was associated with a network including parietal and temporal cortices; storage
was mainly associated with entorhinal and parahippocampal regions, bilaterally;
retrieval was associated with a widespread network encompassing frontal
regions.DiscussionThe neural correlates of episodic memory processes can be
assessed in large and standardized cohorts of patients at risk for Alzheimer's
disease. Their relation to pathophysiological markers of Alzheimer's disease
remains to be studied.
| 0 | 0 | 0 | 0 | 1 | 0 |
Neighborhood-Based Label Propagation in Large Protein Graphs | Understanding protein function is one of the keys to understanding life at
the molecular level. It is also important in several scenarios including human
disease and drug discovery. In this age of rapid and affordable biological
sequencing, the number of sequences accumulating in databases is rising at an
increasing rate. This presents many challenges for biologists and computer
scientists alike. In order to make sense of this huge quantity of data, these
sequences should be annotated with functional properties. UniProtKB consists of
two components: i) the UniProtKB/Swiss-Prot database containing protein
sequences with reliable information manually reviewed by expert bio-curators
and ii) the UniProtKB/TrEMBL database that is used for storing and processing
the unknown sequences. Hence, for all proteins we have available the sequence
along with a few more attributes, such as the taxon and some structural domains.
Pairwise similarity can be defined and computed on proteins based on such
attributes. Other important attributes, while present for proteins in
Swiss-Prot, are often missing for proteins in TrEMBL, such as their function
and cellular localization. The enormous number of protein sequences now in
TrEMBL calls for rapid procedures to annotate them automatically. In this work,
we present DistNBLP, a novel Distributed Neighborhood-Based Label Propagation
approach for large-scale annotation of proteins. To do this, the functional
annotations of reviewed proteins are used to predict those of non-reviewed
proteins using label propagation on a graph representation of the protein
database. DistNBLP is built on top of the "akka" toolkit for building resilient
distributed message-driven applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
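A toy sketch of neighborhood-based label propagation on a protein similarity graph: reviewed (Swiss-Prot) labels stay fixed, and unreviewed (TrEMBL) nodes iteratively adopt the majority annotation of their neighbours. The distributed akka implementation is not shown; this is only the sequential idea.

```python
from collections import Counter

def neighborhood_label_propagation(adj, labels, n_rounds=5):
    """adj: dict node -> iterable of neighbour nodes (similarity graph);
    labels: dict node -> functional annotation, given for reviewed proteins only.
    Unreviewed nodes repeatedly adopt the majority label of their neighbours."""
    labels = dict(labels)
    for _ in range(n_rounds):
        updates = {}
        for node, neighbours in adj.items():
            if node in labels:
                continue                       # reviewed labels stay fixed
            votes = Counter(labels[nb] for nb in neighbours if nb in labels)
            if votes:
                updates[node] = votes.most_common(1)[0][0]
        if not updates:
            break                              # converged: nothing left to label
        labels.update(updates)
    return labels

# Toy graph: A and B are reviewed, C is unreviewed.
adj = {"A": ["C"], "B": ["C"], "C": ["A", "B"]}
print(neighborhood_label_propagation(adj, {"A": "kinase", "B": "kinase"}))
```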
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions | Compressive sensing is a powerful technique for recovering sparse solutions
of underdetermined linear systems, which is often encountered in uncertainty
quantification analysis of expensive and high-dimensional physical models. We
perform numerical investigations employing several compressive sensing solvers
that target the unconstrained LASSO formulation, with a focus on linear systems
that arise in the construction of polynomial chaos expansions. With core
solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to
mitigate overfitting through an automated selection of regularization constant
based on cross-validation, and a heuristic strategy to guide the stop-sampling
decision. Practical recommendations on parameter settings for these techniques
are provided and discussed. The overall method is applied to a series of
numerical examples of increasing complexity, including large eddy simulations
of supersonic turbulent jet-in-crossflow involving a 24-dimensional input.
Through empirical phase-transition diagrams and convergence plots, we
illustrate sparse recovery performance under structures induced by polynomial
chaos, accuracy and computational tradeoffs between polynomial bases of
different degrees, and practicability of conducting compressive sensing for a
realistic, high-dimensional physical application. Across test cases studied in
this paper, we find ADMM to have demonstrated empirical advantages through
consistent lower errors and faster computational times.
| 0 | 1 | 0 | 1 | 0 | 0 |
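The unconstrained LASSO formulation targeted by the solvers, written in polynomial chaos notation (the symbols here are assumed, not taken from the paper):

$$
\min_{c \in \mathbb{R}^{P}} \; \tfrac{1}{2}\,\|\Psi c - y\|_2^2 + \lambda\,\|c\|_1,
$$

where the columns of $\Psi$ contain the polynomial chaos basis functions evaluated at the sample inputs, $y$ collects the model outputs, and $c$ is the sparse coefficient vector; the regularization constant $\lambda$ is the quantity selected by cross-validation.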
Closure Properties in the Class of Multiple Context Free Groups | We show that the class of groups with $k$-multiple context-free word problem
is closed under graphs of groups with finite edge groups.
| 1 | 0 | 1 | 0 | 0 | 0 |
Random gauge models of the superconductor-insulator transition in two-dimensional disordered superconductors | We study numerically the superconductor-insulator transition in
two-dimensional inhomogeneous superconductors with gauge disorder, described by
four different quantum rotor models: a gauge glass, a flux glass, a binary
phase glass and a Gaussian phase glass. The first two models describe the
combined effect of geometrical disorder in the array of local superconducting
islands and a uniform external magnetic field, while the last two describe the
effects of random negative Josephson-junction couplings or $\pi$ junctions.
Monte Carlo simulations in the path-integral representation of the models are
used to determine the critical exponents and the universal conductivity at the
quantum phase transition. The gauge and flux glass models display the same
critical behavior, within the estimated numerical uncertainties. Similar
agreement is found for the binary and Gaussian phase-glass models. Despite the
different symmetries and disorder correlations, we find that the universal
conductivity of these models is approximately the same. In particular, the
ratio of this value to that of the pure model agrees with recent experiments on
nanohole thin film superconductors in a magnetic field, in the large disorder
limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
Extended quantum field theory, index theory and the parity anomaly | We use techniques from functorial quantum field theory to provide a geometric
description of the parity anomaly in fermionic systems coupled to background
gauge and gravitational fields on odd-dimensional spacetimes. We give an
explicit construction of a geometric cobordism bicategory which incorporates
general background fields in a stack, and together with the theory of symmetric
monoidal bicategories we use it to provide the concrete forms of invertible
extended quantum field theories which capture anomalies in both the path
integral and Hamiltonian frameworks. Specialising this situation by using the
extension of the Atiyah-Patodi-Singer index theorem to manifolds with corners
due to Loya and Melrose, we obtain a new Hamiltonian perspective on the parity
anomaly. We compute explicitly the 2-cocycle of the projective representation
of the gauge symmetry on the quantum state space, which is defined in a
parity-symmetric way by suitably augmenting the standard chiral fermionic Fock
spaces with Lagrangian subspaces of zero modes of the Dirac Hamiltonian that
naturally appear in the index theorem. We describe the significance of our
constructions for the bulk-boundary correspondence in a large class of
time-reversal invariant gauge-gravity symmetry-protected topological phases of
quantum matter with gapless charged boundary fermions, including the standard
topological insulator in 3+1 dimensions.
| 0 | 1 | 1 | 0 | 0 | 0 |
Learning Graphical Models Using Multiplicative Weights | We give a simple, multiplicative-weight update algorithm for learning
undirected graphical models or Markov random fields (MRFs). The approach is
new, and for the well-studied case of Ising models or Boltzmann machines, we
obtain an algorithm that uses a nearly optimal number of samples and has
quadratic running time (up to logarithmic factors), subsuming and improving on
all prior work. Additionally, we give the first efficient algorithm for
learning Ising models over general alphabets.
Our main application is an algorithm for learning the structure of t-wise
MRFs with nearly-optimal sample complexity (up to polynomial losses in
necessary terms that depend on the weights) and running time that is
$n^{O(t)}$. In addition, given $n^{O(t)}$ samples, we can also learn the
parameters of the model and generate a hypothesis that is close in statistical
distance to the true MRF. All prior work runs in time $n^{\Omega(d)}$ for
graphs of bounded degree d and does not generate a hypothesis close in
statistical distance even for t=3. We observe that our runtime has the correct
dependence on n and t assuming the hardness of learning sparse parities with
noise.
Our algorithm, the Sparsitron, is easy to implement (it has only one parameter)
and works in the online setting. Its analysis applies a regret bound from
Freund and Schapire's classic Hedge algorithm. It also gives the first solution
to the problem of learning sparse Generalized Linear Models (GLMs).
| 1 | 0 | 1 | 1 | 0 | 0 |
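A schematic numpy sketch of a Hedge-style multiplicative-weights learner in the spirit of the Sparsitron; the exact loss, feature scaling and parameterisation in the paper may differ. It assumes features scaled to $[-1,1]$ and labels in $[0,1]$, and uses a norm bound `lam` plus a Hedge learning rate `eta` (the paper's algorithm reportedly needs only one parameter).

```python
import numpy as np

def hedge_glm(X, y, lam, eta=0.5, n_epochs=1):
    """Hedge-style multiplicative-weights learner, schematic only.

    X: (n, d) features scaled to [-1, 1]; y: labels in [0, 1];
    lam: assumed bound on the l1 norm of the true weight vector.
    """
    n, d = X.shape
    Xs = np.hstack([X, -X])            # signed copies allow +/- weights
    w = np.ones(2 * d)
    for _ in range(n_epochs):
        for x, target in zip(Xs, y):
            p = w / w.sum()
            pred = 1.0 / (1.0 + np.exp(-lam * (p @ x)))   # sigmoid link
            loss = 0.5 * (1.0 + (pred - target) * x)      # per-coordinate loss in [0, 1]
            w = w * (1.0 - eta) ** loss                   # Hedge update
    p = w / w.sum()
    return lam * (p[:d] - p[d:])       # recover a signed weight estimate
```

The key design point is that the update is purely multiplicative over coordinates, which is what yields the regret guarantee from Freund and Schapire's Hedge analysis mentioned in the abstract.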
Neutron response of PARIS phoswich detector | We have studied neutron response of PARIS phoswich [LaBr$_3$(Ce)-NaI(Tl)]
detector which is being developed for measuring the high energy (E$_{\gamma}$ =
5 - 30 MeV) $\gamma$ rays emitted from the decay of highly collective states in
atomic nuclei. The relative neutron detection efficiencies of the LaBr$_3$(Ce)
and NaI(Tl) crystals of the phoswich detector have been measured using the
time-of-flight (TOF) and pulse shape discrimination (PSD) technique in the
energy range of E$_n$ = 1 - 9 MeV and compared with the GEANT4 based
simulations. It has been found that for E$_n$ $>$ 3 MeV, $\sim$ 95 \% of
neutrons have their primary interaction in the LaBr$_3$(Ce) crystal, indicating
that a clear n-$\gamma$ separation can be achieved even at $\sim$15 cm flight
path.
| 0 | 1 | 0 | 0 | 0 | 0 |
Swarm robotics in wireless distributed protocol design for coordinating robots involved in cooperative tasks | The mine detection in an unexplored area is an optimization problem where
multiple mines, randomly distributed throughout an area, need to be discovered
and disarmed in a minimum amount of time. We propose a strategy to explore an
unknown area, using a stigmergy approach based on ant behavior, and a novel
swarm based protocol to recruit and coordinate robots for disarming the mines
cooperatively. Simulation tests are presented to show the effectiveness of our
proposed Ant-based Task Robot Coordination (ATRC) with only the exploration
task and with both exploration and recruiting strategies. Multiple minimization
objectives have been considered: the robots' recruiting time and the overall
area exploration time. We discuss, through simulation, different cases under
different network and field conditions, performed by the robots. The results
have shown that the proposed decentralized approaches enable the swarm of
robots to perform cooperative tasks intelligently without any central control.
| 1 | 0 | 0 | 0 | 0 | 0 |
A polynomial time knot polynomial | We present the strongest known knot invariant that can be computed
effectively (in polynomial time).
| 0 | 0 | 1 | 0 | 0 | 0 |
Corruption-free scheme of entering into contract: mathematical model | The main purpose of this paper is to formalize the modelling process,
analysis and mathematical definition of corruption when entering into a
contract between a principal agent and producers. The formulation of the problem
and the definition of concepts for the general case are considered. For
definiteness, all calculations and formulas are given for the case of three
producers, one principal agent and one intermediary. Economic analysis of
corruption allowed building a mathematical model of interaction between agents.
The problem of distributing financial resources in a contract with a corrupt
intermediary is considered. Conditions for the emergence of corruption and its
possible consequences are then proposed. Optimal corruption-free schemes of
financial resources distribution in a contract are formed when the principal
agent's choice is limited first only by asymmetric information and then also by
external influences. Numerical examples suggesting optimal corruption-free
agents' behaviour are presented.
| 0 | 0 | 0 | 0 | 0 | 1 |
Geometric Biplane Graphs I: Maximal Graphs | We study biplane graphs drawn on a finite planar point set $S$ in general
position. This is the family of geometric graphs whose vertex set is $S$ and
can be decomposed into two plane graphs. We show that two maximal biplane
graphs---in the sense that no edge can be added while staying biplane---may
differ in the number of edges, and we provide an efficient algorithm for adding
edges to a biplane graph to make it maximal. We also study extremal properties
of maximal biplane graphs such as the maximum number of edges and the largest
maximum connectivity over $n$-element point sets.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Multiobjective Approach to Multimicrogrid System Design | The main goal of this paper is to design a market operator (MO) and a
distribution network operator (DNO) for a network of microgrids in
consideration of multiple objectives. This is a high-level design and only
those microgrids with nondispatchable renewable energy sources are considered.
For a power grid in the network, the net value derived from providing power to
the network must be maximized. For a microgrid, it is desirable to maximize the
net gain derived from consuming the received power. Finally, for an independent
system operator, stored energy levels at microgrids must be maintained as close
as possible to storage capacity to secure network emergency operation. To
achieve these objectives, a multiobjective approach is proposed. The price
signal generated by the MO and power distributed by the DNO are assigned based
on a Pareto optimal solution of a multiobjective optimization problem. By using
the proposed approach, a fair scheme that does not advantage one particular
objective can be attained. Simulations are provided to validate the proposed
methodology.
| 1 | 0 | 0 | 0 | 0 | 0 |
Neural Architecture for Question Answering Using a Knowledge Graph and Web Corpus | In Web search, entity-seeking queries often trigger a special Question
Answering (QA) system. It may use a parser to interpret the question to a
structured query, execute that on a knowledge graph (KG), and return direct
entity responses. QA systems based on precise parsing tend to be brittle: minor
syntax variations may dramatically change the response. Moreover, KG coverage
is patchy. At the other extreme, a large corpus may provide broader coverage,
but in an unstructured, unreliable form. We present AQQUCN, a QA system that
gracefully combines KG and corpus evidence. AQQUCN accepts a broad spectrum of
query syntax, ranging from well-formed questions to short `telegraphic' keyword
sequences. In the face of inherent query ambiguities, AQQUCN aggregates signals
from KGs and large corpora to directly rank KG entities, rather than commit to
one semantic interpretation of the query. AQQUCN models the ideal
interpretation as an unobservable or latent variable. Interpretations and
candidate entity responses are scored as pairs, by combining signals from
multiple convolutional networks that operate collectively on the query, KG and
corpus. On four public query workloads, amounting to over 8,000 queries with
diverse query syntax, we see 5--16% absolute improvement in mean average
precision (MAP), compared to the entity ranking performance of recent systems.
Our system is also competitive at entity set retrieval, almost doubling F1
scores for challenging short queries.
| 1 | 0 | 0 | 0 | 0 | 0 |
Calculating the closed ordinal Ramsey number $R^{cl}(ω\cdot 2,3)^2$ | We show that $R^{cl}(\omega\cdot 2,3)^2$ is equal to $\omega^3\cdot 2$.
| 0 | 0 | 1 | 0 | 0 | 0 |
A spatially explicit capture recapture model for partially identified individuals when trap detection rate is less than one | Spatially explicit capture recapture (SECR) models have gained enormous
popularity for solving abundance estimation problems in ecology. In this study, we
develop a novel Bayesian SECR model that disentangles the process of animal
movement through a detector from the process of recording data by a detector in
the face of imperfect detection. We integrate this complexity into an advanced
version of a recent SECR model involving partially identified individuals
(Royle, 2015). We assess the performance of our model over a range of realistic
simulation scenarios and demonstrate that estimates of population size $N$
improve when we utilize the proposed model relative to the model that does not
explicitly estimate trap detection probability (Royle, 2015). We then apply the
proposed model to a spatial capture-recapture data set from a camera trapping
survey on tigers (\textit{Panthera tigris}) in Nagarahole, southern India. Trap
detection probability is estimated at 0.489, which justifies the necessity of
using our model in field situations. We discuss
possible extensions, future work and relevance of our model to other
statistical applications beyond ecology.
| 0 | 0 | 0 | 1 | 0 | 0 |
Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery | Obtaining models that capture imaging markers relevant for disease
progression and treatment monitoring is challenging. Models are typically based
on large amounts of data with annotated examples of known markers aiming at
automating detection. High annotation effort and the limitation to a vocabulary
of known markers limit the power of such approaches. Here, we perform
unsupervised learning to identify anomalies in imaging data as candidates for
markers. We propose AnoGAN, a deep convolutional generative adversarial network
to learn a manifold of normal anatomical variability, accompanied by a novel
anomaly scoring scheme based on the mapping from image space to a latent space.
Applied to new data, the model labels anomalies, and scores image patches
indicating their fit into the learned distribution. Results on optical
coherence tomography images of the retina demonstrate that the approach
correctly identifies anomalous images, such as images containing retinal fluid
or hyperreflective foci.
| 1 | 0 | 0 | 0 | 0 | 0 |
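The scoring scheme sketched in the abstract above maps a query image to latent space by gradient descent and scores it by how well the generator reproduces it. A minimal sketch, assuming pretrained hypothetical torch modules `G` (generator) and `f` (an intermediate discriminator feature map), with the residual/discrimination weighting commonly reported for AnoGAN:

```python
# Minimal sketch of the latent-space scoring step; G and f are assumed
# pretrained modules, lam = 0.1 weights the two mismatch terms.
import torch

def anomaly_score(x, G, f, lam=0.1, steps=500, lr=1e-2, zdim=100):
    z = torch.randn(1, zdim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    loss = torch.tensor(0.0)
    for _ in range(steps):
        opt.zero_grad()
        residual = (G(z) - x).abs().sum()        # pixel-space mismatch
        discrim = (f(G(z)) - f(x)).abs().sum()   # feature-space mismatch
        loss = (1 - lam) * residual + lam * discrim
        loss.backward()
        opt.step()
    return loss.item()  # large score = far from the learned manifold
```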
ZOOpt: Toolbox for Derivative-Free Optimization | Recent advances in derivative-free optimization allow efficient approximation
of the global optima of sophisticated functions, such as functions with many
local optima and non-differentiable or non-continuous functions. This article
describes the ZOOpt (this https URL) toolbox, which provides
efficient derivative-free solvers and is designed to be easy to use. ZOOpt
provides a Python package for single-thread optimization, and a lightweight
distributed version, with the help of the Julia language, for functions
described in Python. The ZOOpt toolbox particularly focuses on optimization
problems in machine learning, addressing high-dimensional, noisy, and
large-scale problems. The toolbox is maintained as a ready-to-use tool for
real-world machine learning tasks.
| 0 | 0 | 0 | 1 | 0 | 0 |
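A quick-start sketch of the single-thread usage described above, following ZOOpt's documented Dimension/Objective/Parameter/Opt interface; exact signatures may differ across versions, and the sphere objective is just a stand-in test function:

```python
# Minimize a toy sphere function over [-1, 1]^100 with ZOOpt.
import numpy as np
from zoopt import Dimension, Objective, Parameter, Opt

def sphere(solution):
    x = np.array(solution.get_x())
    return float(np.sum((x - 0.2) ** 2))

dim = 100  # dimensionality; box [-1, 1], all coordinates continuous
objective = Objective(sphere, Dimension(dim, [[-1, 1]] * dim, [True] * dim))
solution = Opt.min(objective, Parameter(budget=100 * dim))
print(solution.get_x(), solution.get_value())
```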
Non-Gaussian Stochastic Volatility Model with Jumps via Gibbs Sampler | In this work, we propose a model for estimating volatility from financial
time series, extending the non-Gaussian family of state-space models with exact
marginal likelihood proposed by Gamerman, Santos and Franco (2013). In the
literature there are models focused on estimating the risk of financial assets;
however, most of them rely on MCMC methods based on Metropolis algorithms,
since the full conditional posterior distributions are not known. We present an
alternative model capable of estimating the volatility, in an automatic way,
since all full conditional posterior distributions are known, and it is
possible to obtain an exact sample of parameters via Gibbs Sampler. The
incorporation of jumps in returns allows the model to capture speculative
movements of the data, so that their influence does not propagate to
volatility. We evaluate the performance of the algorithm using synthetic and
real data time series.
Keywords: Financial time series, Stochastic volatility, Gibbs Sampler,
Dynamic linear models.
| 0 | 0 | 0 | 0 | 0 | 1 |
Smart Guiding Glasses for Visually Impaired People in Indoor Environment | To help visually impaired people overcome the difficulty of travelling, this
paper presents a novel ETA (Electronic Travel Aid): a smart guiding device in
the shape of a pair of eyeglasses that guides these users efficiently and
safely. Unlike existing works, we propose a novel multi-sensor fusion based
obstacle avoidance algorithm, which utilizes both a depth sensor and an
ultrasonic sensor to solve the problems of detecting small obstacles and
transparent obstacles, e.g., French doors. For totally blind people, three
kinds of auditory cues are developed to indicate the directions in which they
can proceed. For people with weak sight, we adopt a visual enhancement that
leverages AR (Augmented Reality) techniques and overlays the traversable
direction. A prototype consisting of a pair of display glasses and several
low-cost sensors is developed, and its efficiency and accuracy were tested by
a number of users. The experimental results show that the smart guiding
glasses can effectively improve the user's travelling experience in
complicated indoor environments, and thus can serve as a consumer device for
helping visually impaired people travel safely.
| 1 | 0 | 0 | 0 | 0 | 0 |
Recurrent Poisson Factorization for Temporal Recommendation | Poisson factorization is a probabilistic model of users and items for
recommendation systems, where the so-called implicit consumer data is modeled
by a factorized Poisson distribution. There are many variants of Poisson
factorization methods that show state-of-the-art performance on real-world
recommendation tasks. However, most of them do not explicitly take into account
the temporal behavior and the recurrent activities of users which is essential
to recommend the right item to the right user at the right time. In this paper,
we introduce Recurrent Poisson Factorization (RPF) framework that generalizes
the classical PF methods by utilizing a Poisson process for modeling the
implicit feedback. RPF treats time as a natural constituent of the model and
brings to the table a rich family of time-sensitive factorization models. To
elaborate, we instantiate several variants of RPF that are capable of handling
dynamic user preferences and item specification (DRPF), modeling the
social-aspect of product adoption (SRPF), and capturing the consumption
heterogeneity among users and items (HRPF). We also develop a variational
algorithm for approximate posterior inference that scales up to massive data
sets. Furthermore, we demonstrate RPF's superior performance over many
state-of-the-art methods on a synthetic dataset and on large-scale real-world
datasets of music streaming logs and user-item interactions in M-Commerce
platforms.
| 1 | 0 | 0 | 1 | 0 | 0 |
Categorification of sign-skew-symmetric cluster algebras and some conjectures on g-vectors | Using the unfolding method given in \cite{HL}, we prove the conjectures on
sign-coherence and a recurrence formula respectively of ${\bf g}$-vectors for
acyclic sign-skew-symmetric cluster algebras. As a consequence, the conjecture
stating that the ${\bf g}$-vectors of any cluster form a basis of $\mathbb Z^n$
is affirmed in the same case. Also, the additive
categorification of an acyclic sign-skew-symmetric cluster algebra $\mathcal
A(\Sigma)$ is given, which is realized as $(\mathcal C^{\widetilde Q},\Gamma)$
for a Frobenius $2$-Calabi-Yau category $\mathcal C^{\widetilde Q}$ constructed
from an unfolding $(Q,\Gamma)$ of the acyclic exchange matrix $B$ of $\mathcal
A(\Sigma)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Capacitated Bounded Cardinality Hub Routing Problem: Model and Solution Algorithm | In this paper, we address the Bounded Cardinality Hub Location Routing with
Route Capacity wherein each hub acts as a transshipment node for one directed
route. The number of hubs lies between a minimum and a maximum and the
hub-level network is a complete subgraph. The transshipment operations take
place at the hub nodes and flow transfer time from a hub-level transporter to a
spoke-level vehicle influences spoke-to-hub allocations. We propose a
mathematical model and a branch-and-cut algorithm based on Benders
decomposition to solve the problem. To accelerate convergence, our solution
framework embeds an efficient heuristic producing high-quality solutions in
short computation times. In addition, we show how symmetry can be exploited to
accelerate and improve the performance of our method.
| 0 | 0 | 1 | 0 | 0 | 0 |
Large Kernel Matters -- Improve Semantic Segmentation by Global Convolutional Network | One of the recent trends [30, 31, 14] in network architecture design is
stacking small filters (e.g., 1x1 or 3x3) in the entire network because the
stacked small filters are more efficient than a large kernel, given the same
computational complexity. However, in the field of semantic segmentation,
where we need to perform dense per-pixel prediction, we find that the large
kernel (and effective receptive field) plays an important role when we have to
perform the classification and localization tasks simultaneously. Following
our design principle, we propose a Global Convolutional Network to address
both the classification and localization issues for semantic segmentation. We
also suggest a residual-based boundary refinement to further refine the object
boundaries. Our approach achieves state-of-the-art performance on two public
benchmarks and significantly outperforms previous results, 82.2% (vs 80.2%)
on the PASCAL VOC 2012 dataset and 76.9% (vs 71.8%) on the Cityscapes dataset.
| 1 | 0 | 0 | 0 | 0 | 0 |
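The Global Convolutional Network block described above factorizes a large k x k receptive field into two cheap branches, (k x 1 then 1 x k) and (1 x k then k x 1), whose outputs are summed. A minimal PyTorch sketch of that module (layer sizes are illustrative):

```python
# Sketch of the GCN block: two separable-convolution branches summed to
# approximate a dense k x k kernel at a fraction of the parameter cost.
import torch.nn as nn

class GCNBlock(nn.Module):
    def __init__(self, cin, cout, k=15):
        super().__init__()
        p = k // 2
        self.branch_a = nn.Sequential(
            nn.Conv2d(cin, cout, (k, 1), padding=(p, 0)),
            nn.Conv2d(cout, cout, (1, k), padding=(0, p)))
        self.branch_b = nn.Sequential(
            nn.Conv2d(cin, cout, (1, k), padding=(0, p)),
            nn.Conv2d(cout, cout, (k, 1), padding=(p, 0)))

    def forward(self, x):
        return self.branch_a(x) + self.branch_b(x)
```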
Is ram-pressure stripping an efficient mechanism to remove gas in galaxies? | We study how the gas in a sample of galaxies (M* > 10e9 Msun) in clusters,
obtained in a cosmological simulation, is affected by the interaction with the
intra-cluster medium (ICM). The dynamical state of each elemental parcel of gas
is studied using the total energy. At z ~ 2, the galaxies in the simulation are
evenly distributed within clusters, moving later on towards more central
locations. In this process, gas from the ICM is accreted and mixed with the gas
in the galactic halo. Simultaneously, the interaction with the environment
removes part of the gas. A characteristic stellar mass around M* ~ 10e10 Msun
appears as a threshold marking two differentiated behaviours. Below this mass,
galaxies are located at the external part of clusters and have eccentric
orbits. The effect of the interaction with the environment is marginal. Above,
galaxies are mainly located at the inner part of clusters with mostly radial
orbits with low velocities. In these massive systems, part of the gas, strongly
correlated with the stellar mass of the galaxy, is removed. The amount of
removed gas is sub-dominant compared with the quantity of retained gas which is
continuously influenced by the hot gas coming from the ICM. The analysis of
individual galaxies reveals the existence of a complex pattern of flows,
turbulence and a constant fuelling of gas to the hot corona from the ICM that
could make the global effect of the interaction of galaxies with their
environment substantially less dramatic than previously expected.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data | We present hidden fluid mechanics (HFM), a physics informed deep learning
framework capable of encoding an important class of physical laws governing
fluid motions, namely the Navier-Stokes equations. In particular, we seek to
leverage the underlying conservation laws (i.e., for mass, momentum, and
energy) to infer hidden quantities of interest such as velocity and pressure
fields merely from spatio-temporal visualizations of a passive scalar (e.g.,
dye or smoke), transported in arbitrarily complex domains (e.g., in human
arteries or brain aneurysms). Our approach towards solving the aforementioned
data assimilation problem is unique as we design an algorithm that is agnostic
to the geometry or the initial and boundary conditions. This makes HFM highly
flexible in choosing the spatio-temporal domain of interest for data
acquisition as well as subsequent training and predictions. Consequently, HFM
can make predictions in cases that a pure machine learning strategy or a mere
scientific computing approach simply cannot reproduce. The
proposed algorithm achieves accurate predictions of the pressure and velocity
fields in both two and three dimensional flows for several benchmark problems
motivated by real-world applications. Our results demonstrate that this
relatively simple methodology can be used in physical and biomedical problems
to extract valuable quantitative information (e.g., lift and drag forces or
wall shear stresses in arteries) for which direct measurements may not be
possible.
| 0 | 0 | 0 | 1 | 0 | 0 |
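The core ingredient described above is automatic differentiation of a network's outputs to form PDE residuals. A minimal sketch of the passive-scalar transport residual, assuming a hypothetical `net(t, x, y)` that returns concentration, velocities and pressure, and an assumed Peclet number `Pe`; the Navier-Stokes residuals are assembled analogously:

```python
# Autograd residual of the scalar transport equation
# c_t + u c_x + v c_y - (c_xx + c_yy) / Pe = 0.
import torch

def transport_residual(net, t, x, y, Pe=100.0):
    t, x, y = (w.clone().requires_grad_(True) for w in (t, x, y))
    c, u, v, p = net(t, x, y)
    def d(f, w):
        return torch.autograd.grad(f, w, torch.ones_like(f),
                                   create_graph=True)[0]
    c_t, c_x, c_y = d(c, t), d(c, x), d(c, y)
    c_xx, c_yy = d(c_x, x), d(c_y, y)
    return c_t + u * c_x + v * c_y - (c_xx + c_yy) / Pe
```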
Testing Network Structure Using Relations Between Small Subgraph Probabilities | We study the problem of testing for structure in networks using relations
between the observed frequencies of small subgraphs. We consider the statistics
\begin{align*} T_3 & =(\text{edge frequency})^3 - \text{triangle frequency}\\
T_2 & =3(\text{edge frequency})^2(1-\text{edge frequency}) - \text{V-shape
frequency} \end{align*} and prove a central limit theorem for $(T_2, T_3)$
under an Erdős-Rényi null model. We then analyze the power of the
associated $\chi^2$ test statistic under a general class of alternative models.
In particular, when the alternative is a $k$-community stochastic block model,
with $k$ unknown, the power of the test approaches one. Moreover, the
signal-to-noise ratio required is strictly weaker than that required for
community detection. We also study the relation with other statistics over
three-node subgraphs, and analyze the error under two natural algorithms for
sampling small subgraphs. Together, our results show how global structural
characteristics of networks can be inferred from local subgraph frequencies,
without requiring the global community structure to be explicitly estimated.
| 1 | 0 | 1 | 1 | 0 | 0 |
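Reading "frequency" as the fraction of vertex pairs or triples realizing the motif (our interpretation; the paper's normalization may differ), both statistics can be computed from an adjacency matrix in a few lines, using trace identities for the triangle count and the exactly-two-edge-triple count:

```python
# Compute T2 and T3 from a 0/1 symmetric adjacency matrix A.
import numpy as np

def subgraph_statistics(A):
    n = A.shape[0]
    deg = A.sum(axis=1)
    m = deg.sum() / 2
    pairs = n * (n - 1) / 2
    triples = n * (n - 1) * (n - 2) / 6
    p_hat = m / pairs                          # edge frequency
    triangles = np.trace(A @ A @ A) / 6
    # triples with exactly two edges: 2-paths minus 3 per triangle
    vshapes = (deg * (deg - 1) / 2).sum() - 3 * triangles
    T3 = p_hat ** 3 - triangles / triples
    T2 = 3 * p_hat ** 2 * (1 - p_hat) - vshapes / triples
    return T2, T3
```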
A 3D MHD simulation of SN 1006: a polarized emission study for the turbulent case | Three-dimensional magnetohydrodynamical simulations were carried out in order
to perform a new polarization study of the radio emission of the supernova
remnant SN 1006. These simulations consider that the remnant expands into a
turbulent interstellar medium (including both magnetic field and density
perturbations). Based on the referenced-polar angle technique, a statistical
study was done on observational and numerical magnetic field position-angle
distributions. Our results show that a turbulent medium with an adiabatic index
of 1.3 can reproduce the polarization properties of the SN 1006 remnant. This
statistical study proves to be a useful tool for obtaining the orientation of
the ambient magnetic field prior to its being swept up by the main supernova
remnant shock.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hierarchical Detail Enhancing Mesh-Based Shape Generation with 3D Generative Adversarial Network | Automatic mesh-based shape generation is of great interest across a wide
range of disciplines, from industrial design to gaming, computer graphics and
various other forms of digital art. While most traditional methods focus on
primitive-based model generation, advances in deep learning have made it
possible to learn 3-dimensional geometric shape representations in an
end-to-end manner. However, most current deep learning based frameworks focus
on the representation and generation of voxel and point-cloud based shapes,
making them not directly applicable to the design and graphics communities.
This study addresses the need for automatic generation of mesh-based
geometries, and proposes a novel framework that utilizes a signed distance
function representation to generate detail-preserving three-dimensional
surface meshes via a deep learning based approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Combined analysis of galaxy cluster number count, thermal Sunyaev-Zel'dovich power spectrum, and bispectrum | The Sunyaev-Zel'dovich (SZ) effect is a powerful probe of the evolution of
structures in the universe, and is thus highly sensitive to cosmological
parameters $\sigma_8$ and $\Omega_m$, though its power is hampered by the
current uncertainties on the cluster mass calibration. In this analysis we
revisit constraints on these cosmological parameters as well as the hydrostatic
mass bias, by performing (i) a robust estimation of the tSZ power-spectrum,
(ii) a complete modeling and analysis of the tSZ bispectrum, and (iii) a
combined analysis of galaxy cluster number counts, tSZ power spectrum, and tSZ
bispectrum. From this analysis, we derive as final constraints $\sigma_8 = 0.79
\pm 0.02$, $\Omega_{\rm m} = 0.29 \pm 0.02$, and $(1-b) = 0.71 \pm 0.07$. These
results favour a high value for the hydrostatic mass bias compared to numerical
simulations and weak-lensing based estimations. They are furthermore consistent
with previous tSZ analyses, CMB-derived cosmological parameters, and
ancillary estimations of the hydrostatic mass bias.
| 0 | 1 | 0 | 0 | 0 | 0 |
On short cycle enumeration in biregular bipartite graphs | A number of recent works have used a variety of combinatorial constructions
to derive Tanner graphs for LDPC codes and some of these have been shown to
perform well in terms of their probability of error curves and error floors.
Such graphs are bipartite and many of these constructions yield biregular
graphs where the degree of left vertices is a constant $c+1$ and that of the
right vertices is a constant $d+1$. Such graphs are termed $(c+1,d+1)$
biregular bipartite graphs here. Properties of interest in such work are the
girth of the graph and the number of short cycles in the graph, i.e., cycles
of length equal to the girth or slightly larger. Such numbers have been shown to be
related to the error floor of the probability of error curve of the related
LDPC code. Using known results of graph theory, it is shown how the girth and
the number of cycles of length equal to the girth may be computed for these
$(c+1,d+1)$ biregular bipartite graphs knowing only the parameters $c$ and $d$
and the numbers of left and right vertices. While numerous algorithms to
determine the number of short cycles in arbitrary graphs exist, the reduction
of the problem from an algorithm to a computation for these biregular bipartite
graphs is of interest.
| 1 | 0 | 1 | 0 | 0 | 0 |
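As a numerical companion for the girth-4 case, the count of shortest cycles can be checked from adjacency traces via the standard identity #C4 = (tr(A^4) - 2*sum_i d_i^2 + 2m) / 8; longer girths require the closed-form counts the abstract refers to, which are not reproduced here.

```python
# Count 4-cycles of a graph from its 0/1 adjacency matrix A.
import numpy as np

def four_cycle_count(A):
    deg = A.sum(axis=1)
    m = deg.sum() / 2
    tr_A4 = np.trace(np.linalg.matrix_power(A, 4))
    return (tr_A4 - 2 * (deg ** 2).sum() + 2 * m) / 8
```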
Exhaled breath barbotage: a new method for pulmonary surfactant dysfunction assessment | Exhaled air contains aerosol of submicron droplets of the alveolar lining
fluid (ALF), which are generated in the small airways of a human lung. Since
the exhaled particles are micro-samples of the ALF, their trapping opens up an
opportunity to collect native material non-invasively from the respiratory tract.
Recent studies of the particle characteristics (such as size distribution,
concentration and composition) in healthy and diseased subjects performed under
various conditions have demonstrated a high potential of the analysis of
exhaled aerosol droplets for identifying and monitoring pathological processes
in the ALF. In this paper we present a new method for sampling of aerosol
particles during the exhaled breath barbotage (EBB) through liquid. The
barbotage procedure results in accumulation of the pulmonary surfactant, the
main component of the ALF, on the liquid surface, which makes it possible to
study its surface properties. We also propose a data processing algorithm to
evaluate the surface pressure ($\pi$) -- surface concentration ($\Gamma$)
isotherm from the raw data measured in a Langmuir trough. Finally, we analyze
the $(\pi-\Gamma)$ isotherms obtained for the samples collected in the groups
of healthy volunteers and patients with pulmonary tuberculosis and compare them
with the isotherm measured for the artificial pulmonary surfactant.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Computational Study of the Role of Tonal Tension in Expressive Piano Performance | Expressive variations of tempo and dynamics are an important aspect of music
performances, involving a variety of underlying factors. Previous work has
shown a relation between such expressive variations (in particular expressive
tempo) and perceptual characteristics derived from the musical score, such as
musical expectations, and perceived tension. In this work we use a
computational approach to study the role of three measures of tonal tension
proposed by Herremans and Chew (2016) in the prediction of expressive
performances of classical piano music. These features capture tonal
relationships of the music represented in Chew's spiral array model, a
three-dimensional representation of pitch classes, chords and keys constructed in
such a way that spatial proximity represents close tonal relationships. We use
non-linear sequential models (recurrent neural networks) to assess the
contribution of these features to the prediction of expressive dynamics and
expressive tempo using a dataset of Mozart piano sonatas performed by a
professional concert pianist. Experiments on models trained with and without
tonal tension features show that tonal tension helps predict change of tempo
and dynamics more than absolute tempo and dynamics values. Furthermore, the
improvement is stronger for dynamics than for tempo.
| 1 | 0 | 0 | 0 | 0 | 0 |
On the least upper bound for the settling time of a class of fixed-time stable systems | This paper deals with the convergence time analysis of a class of fixed-time
stable systems, with the aim of providing a new non-conservative upper bound for
its settling time. Our contribution is threefold. First, we revisit a
well-known class of fixed-time stable systems showing the conservatism of the
classical upper estimate of the settling time. Second, we provide the smallest
constant that uniformly upper bounds the settling time of any trajectory of the
system under consideration. Then, introducing a slight modification of the
previous class of fixed-time systems, we propose a new predefined-time
convergent algorithm where the least upper bound of the settling time is set a
priori as a parameter of the system. This calculation is a valuable
contribution toward online differentiators, observers, and controllers in
applications with real-time constraints.
| 1 | 0 | 0 | 0 | 0 | 0 |
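For orientation, the well-known class being revisited is typically exemplified by the scalar system below, together with its classical (conservative) settling-time estimate; the paper's contribution is a sharper, least upper bound, which is not reproduced here.

```latex
% Standard representative of the class and its classical estimate:
\dot{x} \;=\; -\alpha\,|x|^{p}\,\mathrm{sign}(x) \;-\; \beta\,|x|^{q}\,\mathrm{sign}(x),
\qquad 0 < p < 1 < q,\quad \alpha,\beta > 0,
\qquad\Longrightarrow\qquad
T(x_0) \;\le\; \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)} \quad \text{for all } x_0.
```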
Self-organization and the Maximum Empower Principle in the Framework of max-plus Algebra | Self-organization is a process where the order of a whole system arises out
of local interactions between the small components of the system.
Emergy, defined as the amount of (solar) energy used to make a product or a
service, is becoming an important ecological indicator. To explain the observed
self-organization of systems in terms of emergy, the Maximum Empower Principle
(MEP) was proposed, initially without a mathematical formulation.
Emergy analysis is based on four rules called the emergy algebra. Most emergy
computations in steady state are in fact approximate results, which rely on
linear algebra. In such a context, a mathematical formulation of the MEP has
been proposed by Giannantoni (2002).
In 2012, Le Corre and the second author of this paper proposed a rigorous
mathematical framework for emergy analysis. They established that the exact
computation of emergy is based on the so-called max-plus algebra and seven
coherent axioms that replace the emergy algebra. In this paper the MEP in
steady state is formalized in the context of the max-plus algebra and graph
theory. The main concepts of the paper are (a) a particular graph called
'emergy graph', (b) the notion of compatible paths of the emergy graph, and (c)
sets of compatible paths, which are called 'emergy states'. The main results of
the paper are as follows:
(1) Emergy is mathematically expressed as a maximum over all possible emergy
states. (2) The maximum is always reached by an emergy state. (3) Only the
emergy states for which the maximum is reached prevail.
| 1 | 1 | 0 | 0 | 0 | 0 |
Nonequilibrium Work and its Hamiltonian Connection for a Microstate in Nonequilibrium Statistical Thermodynamics: A Case of Mistaken Identity | Nonequilibrium work-Hamiltonian connection for a microstate plays a central
role in diverse branches of statistical thermodynamics (fluctuation theorems,
quantum thermodynamics, stochastic thermodynamics, etc.). We show that the
change in the Hamiltonian for a microstate should be identified with the work
done by it, and not the work done on it. This contradicts the current practice
in the field. The difference represents a contribution whose average gives the
work that is dissipated due to irreversibility. As the latter has been
overlooked, the current identification does not properly account for
irreversibility. As an example, we show that the corrected version of
Jarzynski's relation can be applied to free expansion, where the original
relation fails. Thus, the correction has far-reaching consequences and requires
reassessment of current applications.
| 0 | 1 | 0 | 0 | 0 | 0 |
Thick Subcategories of the stable category of modules over the exterior algebra I | We study thick subcategories defined by modules of complexity one in
$\underline{\mathrm{mod}}\,R$, where $R$ is the exterior algebra in $n+1$ indeterminates.
| 0 | 0 | 1 | 0 | 0 | 0 |
Planet-driven spiral arms in protoplanetary disks: II. Implications | We examine whether various characteristics of planet-driven spiral arms can
be used to constrain the masses of unseen planets and their positions within
their disks. By carrying out two-dimensional hydrodynamic simulations varying
planet mass and disk gas temperature, we find that a larger number of spiral
arms form with a smaller planet mass and a lower disk temperature. A planet
excites two or more spiral arms interior to its orbit for a range of disk
temperature characterized by the disk aspect ratio $0.04\leq(h/r)_p\leq0.15$,
whereas exterior to a planet's orbit multiple spiral arms can form only in cold
disks with $(h/r)_p \lesssim 0.06$. Constraining the planet mass with the pitch
angle of spiral arms requires accurate disk temperature measurements that might
be challenging even with ALMA. However, the property that the pitch angle of
planet-driven spiral arms decreases away from the planet can be a powerful
diagnostic to determine whether the planet is located interior or exterior to
the observed spirals. The arm-to-arm separations increase as a function of
planet mass, consistent with previous studies; however, the exact slope depends
on disk temperature as well as the radial location where the arm-to-arm
separations are measured. We apply these diagnostics to the spiral arms seen in
MWC 758 and Elias 2-27. As shown in Bae et al. (2017), planet-driven spiral
arms can create concentric rings and gaps, which can produce more dominant
observable signature than spiral arms under certain circumstances. We discuss
the observability of planet-driven spiral arms versus rings and gaps.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ideal structure and pure infiniteness of ample groupoid $C^*$-algebras | In this paper, we study the ideal structure of reduced $C^*$-algebras
$C^*_r(G)$ associated to étale groupoids $G$. In particular, we characterize
when there is a one-to-one correspondence between the closed, two-sided ideals
in $C_r^*(G)$ and the open invariant subsets of the unit space $G^{(0)}$ of
$G$. As a consequence, we show that if $G$ is an inner exact, essentially
principal, ample groupoid, then $C_r^*(G)$ is (strongly) purely infinite if and
only if every non-zero projection in $C_0(G^{(0)})$ is properly infinite in
$C_r^*(G)$. We also establish a sufficient condition on the ample groupoid $G$
that ensures pure infiniteness of $C_r^*(G)$ in terms of paradoxicality of
compact open subsets of the unit space $G^{(0)}$.
Finally, we introduce the type semigroup for ample groupoids and also obtain
a dichotomy result: Let $G$ be an ample groupoid with compact unit space which
is minimal and topologically principal. If the type semigroup is almost
unperforated, then $C_r^*(G)$ is a simple $C^*$-algebra which is either stably
finite or strongly purely infinite.
| 0 | 0 | 1 | 0 | 0 | 0 |
Nonparametric mean curvature type flows of graphs with contact angle conditions | In this paper we study nonparametric mean curvature type flows in
$M\times\mathbb{R}$ which are represented as graphs $(x,u(x,t))$ over a domain
in a Riemannian manifold $M$ with prescribed contact angle. The speed of $u$ is
the mean curvature speed minus an admissible function $\psi(x,u,Du)$. Long-time
existence and uniform convergence are established if $\psi(x,u, Du)\equiv 0$
with vertical contact angle and $\psi(x,u,Du)=h(x,u)\omega$ with $h_u(x,u)\geq
h_0>0$ and $\omega=\sqrt{1+|Du|^2}$. Their applications include mean curvature
type equations with prescribed contact angle boundary condition and the
asymptotic behavior of nonparametric mean curvature flows of graphs over a
convex domain in $M^2$ which is a surface with nonnegative Ricci curvature.
| 0 | 0 | 1 | 0 | 0 | 0 |
Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals | In this article we develop a new sequential Monte Carlo (SMC) method for
multilevel (ML) Monte Carlo estimation. In particular, the method can be used
to estimate expectations with respect to a target probability distribution over
an infinite-dimensional and non-compact space as given, for example, by a
Bayesian inverse problem with Gaussian random field prior. Under suitable
assumptions the MLSMC method has the optimal $O(\epsilon^{-2})$ bound on the
cost to obtain a mean-square error of $O(\epsilon^2)$. The algorithm is
accelerated by dimension-independent likelihood-informed (DILI) proposals
designed for Gaussian priors, leveraging a novel variation which uses empirical
sample covariance information in lieu of Hessian information, hence eliminating
the requirement for gradient evaluations. The efficiency of the algorithm is
illustrated on two examples: inversion of noisy pressure measurements in a PDE
model of Darcy flow to recover the posterior distribution of the permeability
field, and inversion of noisy measurements of the solution of an SDE to recover
the posterior path measure.
| 0 | 0 | 0 | 1 | 0 | 0 |
Spontaneous symmetry breaking as a triangular relation between pairs of Goldstone bosons and the degenerate vacuum: Interactions of D-branes | We formulate the Nambu-Goldstone theorem as a triangular relation between
pairs of Goldstone bosons with the degenerate vacuum. The vacuum degeneracy is
then a natural consequence of this relation. Within the scenario of String
Theory, we find that there is a correspondence between the way the
$D$-branes interact and the properties of the Goldstone bosons.
| 0 | 1 | 0 | 0 | 0 | 0 |
Differentially Private Dropout | Large data collections required for the training of neural networks often
contain sensitive information such as the medical histories of patients, and
the privacy of the training data must be preserved. In this paper, we introduce
a dropout technique that provides an elegant Bayesian interpretation to
dropout, and show that the intrinsic noise added, with the primary goal of
regularization, can be exploited to obtain a degree of differential privacy.
The iterative nature of training neural networks presents a challenge for
privacy-preserving estimation since multiple iterations increase the amount of
noise added. We overcome this by using a relaxed notion of differential
privacy, called concentrated differential privacy, which provides tighter
estimates on the overall privacy loss. We demonstrate the accuracy of our
privacy-preserving dropout algorithm on benchmark datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
On (in)stabilities of perturbations in mimetic models with higher derivatives | Usually when applying the mimetic model to the early universe, higher
derivative terms are needed to promote the mimetic field to be dynamical.
However such models suffer from the ghost and/or the gradient instabilities and
simple extensions cannot cure this pathology. We point out in this paper that
it is possible to overcome this difficulty by considering the direct couplings
of the higher derivatives of the mimetic field to the curvature of the
spacetime.
| 0 | 1 | 0 | 0 | 0 | 0 |
Quantitative Connection Between Ensemble Thermodynamics and Single-Molecule Kinetics: A Case Study Using Cryogenic Electron Microscopy and Single-Molecule Fluorescence Resonance Energy Transfer Investigations of the Ribosome | At equilibrium, thermodynamic and kinetic information can be extracted from
biomolecular energy landscapes by many techniques. However, while static,
ensemble techniques yield thermodynamic data, often only dynamic,
single-molecule techniques can yield the kinetic data that describes
transition-state energy barriers. Here we present a generalized framework based
upon dwell-time distributions that can be used to connect such static, ensemble
techniques with dynamic, single-molecule techniques, and thus characterize
energy landscapes to greater resolutions. We demonstrate the utility of this
framework by applying it to cryogenic electron microscopy (cryo-EM) and
single-molecule fluorescence resonance energy transfer (smFRET) studies of the
bacterial ribosomal pre-translocation complex. Among other benefits,
application of this framework to these data explains why two transient,
intermediate conformations of the pre-translocation complex, which are observed
in a cryo-EM study, may not be observed in several smFRET studies.
| 0 | 1 | 0 | 0 | 0 | 0 |
Non-parametric estimation of Jensen-Shannon Divergence in Generative Adversarial Network training | Generative Adversarial Networks (GANs) have become a widely popular framework
for generative modelling of high-dimensional datasets. However their training
is well-known to be difficult. This work presents a rigorous statistical
analysis of GANs, providing straightforward explanations for common training
pathologies such as vanishing gradients. Furthermore, it proposes a new
training objective, Kernel GANs, and demonstrates its practical effectiveness
on large-scale real-world data sets. A key element in the analysis is the
distinction between training with respect to the (unknown) data distribution,
and its empirical counterpart. To overcome issues in GAN training, we pursue
the idea of smoothing the Jensen-Shannon Divergence (JSD) by incorporating
noise in the input distributions of the discriminator. As we show, this
effectively leads to an empirical version of the JSD in which the true and the
generator densities are replaced by kernel density estimates, which leads to
Kernel GANs.
| 0 | 0 | 0 | 1 | 0 | 0 |
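A 1-D toy rendering of the smoothing construction described above: convolve both samples with a Gaussian kernel and evaluate the Jensen-Shannon divergence between the resulting kernel density estimates on a grid. The bandwidth `h` and grid size are arbitrary illustrative choices, not the paper's settings.

```python
# JSD between Gaussian-KDE estimates of two 1-D samples.
import numpy as np

def kde(samples, grid, h):
    d = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d ** 2).sum(axis=1) / (len(samples) * h
                                                * np.sqrt(2 * np.pi))

def kernel_jsd(xs, ys, h=0.1, npts=512):
    lo = min(xs.min(), ys.min()) - 3 * h
    hi = max(xs.max(), ys.max()) + 3 * h
    grid = np.linspace(lo, hi, npts)
    p = kde(xs, grid, h) + 1e-12
    q = kde(ys, grid, h) + 1e-12
    m = 0.5 * (p + q)
    kl = lambda a, b: np.trapz(a * np.log(a / b), grid)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```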
METAGUI 3: a graphical user interface for choosing the collective variables in molecular dynamics simulations | Molecular dynamics (MD) simulations allow the exploration of the phase space
of biopolymers through the integration of equations of motion of their
constituent atoms. The analysis of MD trajectories often relies on the choice
of collective variables (CVs) along which the dynamics of the system is
projected. We developed a graphical user interface (GUI) for facilitating the
interactive choice of the appropriate CVs. The GUI allows: defining
interactively new CVs; partitioning the configurations into microstates
characterized by similar values of the CVs; calculating the free energies of
the microstates for both unbiased and biased (metadynamics) simulations;
clustering the microstates in kinetic basins; visualizing the free energy
landscape as a function of a subset of the CVs used for the analysis. A simple
mouse click allows one to quickly inspect structures corresponding to specific
points in the landscape.
| 0 | 1 | 0 | 0 | 0 | 0 |
Model compression as constrained optimization, with application to neural nets. Part II: quantization | We consider the problem of deep neural net compression by quantization: given
a large, reference net, we want to quantize its real-valued weights using a
codebook with $K$ entries so that the training loss of the quantized net is
minimal. The codebook can be optimally learned jointly with the net, or fixed,
as for binarization or ternarization approaches. Previous work has quantized
the weights of the reference net, or incorporated rounding operations in the
backpropagation algorithm, but this has no guarantee of converging to a
loss-optimal, quantized net. We describe a new approach based on the recently
proposed framework of model compression as constrained optimization
\citep{Carreir17a}. This results in a simple iterative "learning-compression"
algorithm, which alternates a step that learns a net of continuous weights with
a step that quantizes (or binarizes/ternarizes) the weights, and is guaranteed
to converge to a local optimum of the loss for quantized nets. We develop
algorithms for an adaptive codebook or a (partially) fixed codebook. The latter
includes binarization, ternarization, powers-of-two and other important
particular cases. We show experimentally that we can achieve much higher
compression rates than previous quantization work (even using just 1 bit per
weight) with negligible loss degradation.
| 1 | 0 | 1 | 1 | 0 | 0 |
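A compact sketch of the alternating "learning-compression" iteration for an adaptive codebook, with 1-D k-means (Lloyd's algorithm) as the compression step and a single penalized gradient update as the learning step. Here `grad_loss`, the penalty schedule, and step sizes are placeholders, and the real algorithm's handling of the constraint is more refined than this quadratic-penalty simplification.

```python
# Alternating L/C steps: k-means codebook (C) and penalized SGD (L).
import numpy as np

def c_step(w, K, iters=20):
    codebook = np.quantile(w, np.linspace(0, 1, K))   # simple init
    for _ in range(iters):
        assign = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        for k in range(K):
            if np.any(assign == k):
                codebook[k] = w[assign == k].mean()
    assign = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, codebook[assign]                 # quantized weights

def learning_compression(w, grad_loss, K=4, lr=1e-2,
                         mu_schedule=(1e-3, 1e-2, 1e-1), inner=100):
    for mu in mu_schedule:        # mu -> infinity enforces the constraint
        for _ in range(inner):
            _, w_quant = c_step(w, K)
            w = w - lr * (grad_loss(w) + mu * (w - w_quant))  # L step
    return c_step(w, K)[1]
```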
Three Skewed Matrix Variate Distributions | Three-way data can be conveniently modelled by using matrix variate
distributions. Although there has been a lot of work on the matrix variate
normal distribution, there is little work in the area of matrix skew
distributions. Three matrix variate distributions that incorporate skewness, as
well as other flexible properties such as concentration, are discussed.
Equivalences to multivariate analogues are presented, and moment generating
functions are derived. Maximum likelihood parameter estimation is discussed,
and simulated data is used for illustration.
| 0 | 0 | 1 | 1 | 0 | 0 |
Anticipating Persistent Infection | We explore the emergence of persistent infection in a closed region where the
disease progression of the individuals is given by the SIRS model, with an
individual becoming infected on contact with another infected individual within
a given range. We focus on the role of synchronization in the persistence of
contagion. Our key result is that a higher degree of synchronization, both
globally in the population and locally in the neighborhoods, hinders
persistence of infection. Importantly, we find that early short-time asynchrony
appears to be a consistent precursor to future persistence of infection, and
can potentially provide valuable early warnings for sustained contagion in a
population patch. Thus transient synchronization can help anticipate the
long-term persistence of infection. Further, we demonstrate that when the range
of influence of an infected individual is wider, one obtains less persistent
infection. This counter-intuitive observation can also be understood through
the relation of synchronization to infection burn-out.
| 0 | 0 | 0 | 0 | 1 | 0 |
The first global-scale 30 m resolution mangrove canopy height map using Shuttle Radar Topography Mission data | No high-resolution canopy height map exists for global mangroves. Here we
present the first global mangrove height map at a consistent 30 m pixel
resolution, derived from digital elevation model data collected through the
Shuttle Radar Topography Mission. Additionally, we refined the current global mangrove
area maps by discarding the non-mangrove areas that are included in current
mangrove maps.
| 0 | 1 | 0 | 0 | 0 | 0 |
Direct Estimation of Regional Wall Thicknesses via Residual Recurrent Neural Network | Accurate estimation of regional wall thicknesses (RWT) of left ventricular
(LV) myocardium from cardiac MR sequences is of significant importance for
identification and diagnosis of cardiac disease. Existing RWT estimation still
relies on segmentation of LV myocardium, which requires strong prior
information and user interaction. No work has been devoted to the direct
estimation of RWT from cardiac MR images due to the diverse shapes and
structures for various subjects and cardiac diseases, as well as the complex
regional deformation of LV myocardium during the systole and diastole phases of
the cardiac cycle. In this paper, we present a newly proposed Residual
Recurrent Neural Network (ResRNN) that fully leverages the spatial and temporal
dynamics of LV myocardium to achieve accurate frame-wise RWT estimation. Our
ResRNN comprises two paths: 1) a feed forward convolution neural network (CNN)
for effective and robust CNN embedding learning of various cardiac images and
preliminary estimation of RWT from each frame itself independently, and 2) a
recurrent neural network (RNN) for further improving the estimation by modeling
spatial and temporal dynamics of LV myocardium. For the RNN path, we design for
cardiac sequences a Circle-RNN to eliminate the effect of null hidden input for
the first time-step. Our ResRNN is capable of obtaining accurate estimation of
cardiac RWT with Mean Absolute Error of 1.44mm (less than 1-pixel error) when
validated on cardiac MR sequences of 145 subjects, evidencing its great
potential in clinical cardiac function assessment.
| 1 | 0 | 0 | 0 | 0 | 0 |
Non-dipole recollision-gated double ionization and observable effects | Using a three-dimensional semiclassical model, we study double ionization for
strongly-driven He fully accounting for magnetic field effects. For linearly
and slightly elliptically polarized laser fields, we show that recollisions and
the magnetic field combined act as a gate. This gate favors more transverse -
with respect to the electric field - initial momenta of the tunneling electron
that are opposite to the propagation direction of the laser field. In the
absence of non-dipole effects, the transverse initial momentum is symmetric
with respect to zero. We find that this asymmetry in the transverse initial
momentum gives rise to an asymmetry in a double ionization observable. Finally,
we show that this asymmetry in the transverse initial momentum of the tunneling
electron accounts for a recently-reported unexpectedly large average sum of the
electron momenta parallel to the propagation direction of the laser field.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Survey of Active Attacks on Wireless Sensor Networks and their Countermeasures | Lately, Wireless Sensor Networks (WSNs) have become an emerging technology
and can be utilized in crucial settings such as battlegrounds,
commercial applications, habitat monitoring, buildings, smart homes, traffic
surveillance, and other places. One of the foremost difficulties that
WSN faces nowadays is protection from serious attacks. While organizing the
sensor nodes in an abandoned environment makes network systems helpless against
an assortment of strong assaults, intrinsic memory and power restrictions of
sensor nodes make the traditional security arrangements impractical. The
sensing knowledge combined with the wireless communication and processing power
makes it lucrative for abuse. Wireless sensor network technology is also
subject to a wide variety of security threats. This paper describes four
basic security threats and many active attacks on WSN with their possible
countermeasures proposed by different research scholars.
| 1 | 0 | 0 | 0 | 0 | 0 |
A Deterministic Nonsmooth Frank Wolfe Algorithm with Coreset Guarantees | We present a new Frank-Wolfe (FW) type algorithm that is applicable to
minimization problems with a nonsmooth convex objective. We provide convergence
bounds and show that the scheme yields so-called coreset results for various
Machine Learning problems including 1-median, Balanced Development, Sparse PCA,
Graph Cuts, and the $\ell_1$-norm-regularized Support Vector Machine (SVM)
among others. This means that the algorithm provides approximate solutions to
these problems in time complexity bounds that are not dependent on the size of
the input problem. Our framework, motivated by a growing body of work on
sublinear algorithms for various data analysis problems, is entirely
deterministic and makes no use of smoothing or proximal operators. Apart from
these theoretical results, we show experimentally that the algorithm is very
practical and in some cases also offers significant computational advantages on
large problem instances. We provide an open source implementation that can be
adapted for other problems that fit the overall structure.
| 1 | 0 | 0 | 1 | 0 | 0 |
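For orientation, a textbook Frank-Wolfe loop over the probability simplex is sketched below with a smooth objective; the paper's scheme handles nonsmooth objectives and derives coreset guarantees, which this sketch omits. The coreset flavour is visible even here: after t iterations the iterate is supported on at most t+1 vertices.

```python
# Classic Frank-Wolfe on the probability simplex (smooth objective).
import numpy as np

def frank_wolfe_simplex(grad, n, iters=200):
    x = np.ones(n) / n
    for t in range(iters):
        g = grad(x)
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0           # linear minimization oracle
        x += 2.0 / (t + 2.0) * (s - x)  # classic step size
    return x

# e.g. minimize ||Ax - b||^2 over the simplex:
# x = frank_wolfe_simplex(lambda x: 2 * A.T @ (A @ x - b), A.shape[1])
```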
The problem of boundary conditions for the shallow water equations (Russian) | The problem of choosing boundary conditions is discussed for the case of
numerical integration of the shallow water equations on a substantially
irregular relief. The modeling of unsteady surface water flows involves a
dynamic boundary separating the liquid from the dry bottom. The situation is
complicated by the emergence of sub- and supercritical flow regimes in
problems of seasonal floodplain flooding, flash floods, and tsunami landfalls.
An analysis of various methods of setting boundary conditions for the physical
quantities of the liquid shows the advantages of using waterfall-type
conditions in the presence of strong relief inhomogeneities. When there is a
waterfall at the border of the computational domain and the relief is
heterogeneous in the vicinity of the boundary, a region of critical flow with
a hydraulic jump may form, which greatly weakens the effect of the waterfall
on the flow pattern upstream.
| 0 | 1 | 0 | 0 | 0 | 0 |
The exit time finite state projection scheme: bounding exit distributions and occupation measures of continuous-time Markov chains | We introduce the exit time finite state projection (ETFSP) scheme, a
truncation-based method that yields approximations to the exit distribution and
occupation measure associated with the time of exit from a domain (i.e., the
time of first passage to the complement of the domain) of time-homogeneous
continuous-time Markov chains. We prove that: (i) the computed approximations
bound the measures from below; (ii) the total variation distances between the
approximations and the measures decrease monotonically as states are added to
the truncation; and (iii) the scheme converges, in the sense that, as the
truncation tends to the entire state space, the total variation distances tend
to zero. Furthermore, we give a computable bound on the total variation
distance between the exit distribution and its approximation, and we delineate
the cases in which the bound is sharp. We also revisit the related finite state
projection scheme and give a comprehensive account of its theoretical
properties. We demonstrate the use of the ETFSP scheme by applying it to two
biological examples: the computation of the first passage time associated with
the expression of a gene, and the fixation times of competing species subject
to demographic noise.
| 0 | 0 | 0 | 0 | 1 | 0 |
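On a finite truncation, the quantities the ETFSP scheme bounds reduce to one linear solve: with `Q_DD` the generator block over the truncated domain states and `Q_DE` the rates from domain states into exit states, the occupation measure is pi0 (-Q_DD)^{-1} and the exit distribution follows by one more multiplication. A numpy sketch of that computation (the scheme additionally tracks the mass lost to truncation, `1 - exit_dist.sum()`, which is what yields the lower-bound property stated above):

```python
# Exit distribution and occupation measure of a truncated CTMC.
import numpy as np

def exit_quantities(Q_DD, Q_DE, pi0):
    occupation = np.linalg.solve(-Q_DD.T, pi0)  # expected time per state
    exit_dist = occupation @ Q_DE               # mass per exit state
    return occupation, exit_dist
```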
Ontological Multidimensional Data Models and Contextual Data Quality | Data quality assessment and data cleaning are context-dependent activities.
Motivated by this observation, we propose the Ontological Multidimensional Data
Model (OMD model), which can be used to model and represent contexts as
logic-based ontologies. The data under assessment is mapped into the context,
for additional analysis, processing, and quality data extraction. The resulting
contexts allow for the representation of dimensions, and multidimensional data
quality assessment becomes possible. At the core of a multidimensional context
we include a generalized multidimensional data model and a Datalog+/- ontology
with provably good properties in terms of query answering. These main
components are used to represent dimension hierarchies, dimensional
constraints, dimensional rules, and define predicates for quality data
specification. Query answering relies upon and triggers navigation through
dimension hierarchies, and becomes the basic tool for the extraction of quality
data. The OMD model is interesting per se, beyond applications to data quality.
It allows for a logic-based, and computationally tractable representation of
multidimensional data, extending previous multidimensional data models with
additional expressive power and functionalities.
| 1 | 0 | 0 | 0 | 0 | 0 |
Time-Assisted Authentication Protocol | Authentication is the first step toward establishing a service provider and
customer (C-P) association. In a mobile network environment, a lightweight and
secure authentication protocol is one of the most significant factors to
enhance the degree of service persistence. This work presents a secure and
lightweight keying and authentication protocol suite termed TAP (Time-Assisted
Authentication Protocol). TAP improves the security of protocols with the
assistance of time-based encryption keys and scales down the authentication
complexity by issuing a re-authentication ticket. While moving across the
network, a mobile customer node sends a re-authentication ticket to establish
new sessions with service-providing nodes. Consequently, this reduces the
communication and computational complexity of the authentication process. In
the keying protocol suite, a key distributor controls the key generation
arguments and time factors, while other participants independently generate a
keychain based on key generation arguments. We undertake a rigorous security
analysis and prove the security strength of TAP using CSP and rank function
analysis.
| 1 | 0 | 0 | 0 | 0 | 0 |
Improved Bounds for Online Dominating Sets of Trees | The online dominating set problem is an online variant of the minimum
dominating set problem, which is one of the most important NP-hard problems on
graphs. This problem is defined as follows: Given an undirected graph $G = (V,
E)$, in which $V$ is a set of vertices and $E$ is a set of edges. We say that a
set $D \subseteq V$ of vertices is a {\em dominating set} of $G$ if for each $v
\in V \setminus D$, there exists a vertex $u \in D$ such that $\{ u, v \} \in
E$. The vertices are revealed to an online algorithm one by one over time. When
a vertex is revealed, edges between the vertex and vertices revealed in the
past are also revealed. The revealed subgraph is connected at all times.
Immediately after the revelation of each vertex, an online algorithm can choose
vertices which were already revealed irrevocably and must maintain a dominating
set of a graph revealed so far. The cost of an algorithm on a given tree is the
number of vertices chosen by it, and its objective is to minimize the cost.
Eidenbenz (Technical report, Institute of Theoretical Computer Science, ETH
Zürich, 2002) and Boyar et al.\ (SWAT 2016) studied the case in which given
graphs are trees. They designed a deterministic online algorithm whose
competitive ratio is at most three, and proved that a lower bound on the
competitive ratio of any deterministic algorithm is two. In this paper, we also
focus on trees. We establish a matching lower bound for any deterministic
algorithm. Moreover, we design a randomized online algorithm whose competitive
ratio is at most $5/2 = 2.5$, and show that the competitive ratio of any
randomized algorithm is at least $4/3 \approx 1.333$.
| 1 | 0 | 0 | 0 | 0 | 0 |
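A natural greedy baseline for the tree setting described above, shown only for concreteness (it is not the algorithm of the cited works, and no competitive ratio is claimed for it): whenever a newly revealed vertex is undominated, put its parent into the dominating set, since the parent may also cover children revealed later.

```python
# Greedy online dominating set on a tree. arrivals = [(v, parent)], with
# parent=None for the first (root) vertex; in a tree a new vertex's only
# revealed neighbour is its parent, so v is dominated iff parent is in D.
def online_dominating_set(arrivals):
    D = set()
    for v, parent in arrivals:
        if parent is None:
            D.add(v)               # the root has no neighbour yet
        elif parent not in D:
            D.add(parent)          # dominates v, parent, future children
    return D
```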
Systematical design and three-dimensional simulation of X-ray FEL oscillator for Shanghai Coherent Light Facility | The Shanghai Coherent Light Facility (SCLF) is a recently proposed quasi-CW
hard X-ray free electron laser user facility. Due to its high repetition rate
and high-quality electron beams, it is natural to consider X-ray free electron
laser oscillator (XFELO) operation for the SCLF. The main processes of XFELO
design and the parameter optimization of the undulator, X-ray cavity, and
electron beam are described. The first three-dimensional X-ray crystal Bragg
diffraction code, named BRIGHT, is built; it works closely with GENESIS and
OPC for numerical simulations of the XFELO. The XFELO performance of the SCLF
is investigated and optimized by theoretical analysis and numerical
simulation.
| 0 | 1 | 0 | 0 | 0 | 0 |
Who is the infector? Epidemic models with symptomatic and asymptomatic cases | What role do asymptomatically infected individuals play in the transmission
dynamics? There are many diseases, such as norovirus and influenza, where some
infected hosts show symptoms of the disease while others are asymptomatically
infected, i.e. do not show any symptoms. The current paper considers a class of
epidemic models following an SEIR (Susceptible $\to$ Exposed $\to$ Infectious
$\to$ Recovered) structure that allows for both symptomatic and asymptomatic
cases. The following question is addressed: what fraction $\rho$ of those
individuals getting infected are infected by symptomatic (asymptomatic) cases?
This is a more complicated question than the related question for the beginning
of the epidemic: what fraction of the expected number of secondary cases of a
typical newly infected individual, i.e. what fraction of the basic reproduction
number $R_0$, is caused by symptomatic individuals? The latter fraction only
depends on the type-specific reproduction numbers, while the former fraction
$\rho$ also depends on timing and hence on the probabilistic distributions of
latent and infectious periods of the two types (not only their means). Bounds
on $\rho$ are derived for the situation where these distributions (and even
their means) are unknown. Special attention is given to the class of Markov
models and the class of continuous-time Reed-Frost models as two classes of
distribution functions. We show how these two classes of models can exhibit
very different behaviour.
| 0 | 0 | 0 | 0 | 1 | 0 |
Calibration of a two-state pitch-wise HMM method for note segmentation in Automatic Music Transcription systems | Many methods for automatic music transcription involve a multi-pitch
estimation method that estimates an activity score for each pitch. A second
processing step, called note segmentation, has to be performed for each pitch
in order to identify the time intervals when the notes are played. In this
study, a pitch-wise two-state on/off first-order Hidden Markov Model (HMM) is
developed for note segmentation. A complete parametrization of the HMM sigmoid
function is proposed, based on its original regression formulation, including
a slope-smoothing parameter alpha and a thresholding-contrast parameter beta.
A comparative evaluation of different note segmentation strategies was
performed, differentiated according to whether they use a fixed threshold,
called "Hard Thresholding" (HT), or an HMM-based thresholding method, called
"Soft Thresholding" (ST). This evaluation was done following MIREX standards
and using the MAPS dataset. Also, different transcription scenarios and
recording conditions were tested using three units of the Degradation toolbox.
Results show that note segmentation through HMM soft thresholding with a
data-based optimization of the {alpha, beta} parameter pair significantly
enhances transcription performance.
| 1 | 0 | 0 | 1 | 0 | 0 |
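A sketch of the pitch-wise on/off decoding described above: the per-frame "on" probability is a logistic function of the activity score with slope alpha and threshold beta (our reading of the parametrization), and Viterbi decoding over the two-state chain yields the note segments. Treating the sigmoid output directly as an emission likelihood is a simplification of the calibrated model.

```python
# Two-state on/off Viterbi segmentation of a per-pitch activity score.
import numpy as np

def segment_notes(activity, alpha=20.0, beta=0.5, p_stay=0.95):
    p_on = 1.0 / (1.0 + np.exp(-alpha * (activity - beta)))
    emis = np.stack([1.0 - p_on, p_on])          # rows: off, on
    logA = np.log(np.array([[p_stay, 1 - p_stay],
                            [1 - p_stay, p_stay]]))
    T = len(activity)
    logd = np.log(emis[:, 0] + 1e-12)            # uniform prior absorbed
    back = np.zeros((2, T), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + logA              # cand[i, j]: i -> j
        back[:, t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(emis[:, t] + 1e-12)
    states = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        states.append(back[states[-1], t])
    return np.array(states[::-1])                # 1 = note on
```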
Action-dependent Control Variates for Policy Optimization via Stein's Identity | Policy gradient methods have achieved remarkable successes in solving
challenging reinforcement learning problems. However, they still often suffer
from large variance in policy gradient estimation, which leads to
poor sample efficiency during training. In this work, we propose a control
variate method to effectively reduce variance for policy gradient methods.
Motivated by Stein's identity, our method extends the previous control
variate methods used in REINFORCE and advantage actor-critic by introducing
more general action-dependent baseline functions. Empirical studies show that
our method significantly improves the sample efficiency of the state-of-the-art
policy gradient approaches.
| 1 | 0 | 0 | 1 | 0 | 0 |