title (string, 7-239 chars) | abstract (string, 7-2.76k chars) | cs (int64, 0/1) | phy (int64, 0/1) | math (int64, 0/1) | stat (int64, 0/1) | quantitative biology (int64, 0/1) | quantitative finance (int64, 0/1) |
---|---|---|---|---|---|---|---|
A note on the role of projectivity in likelihood-based inference for random graph models | There is widespread confusion about the role of projectivity in
likelihood-based inference for random graph models. The confusion is rooted in
claims that projectivity, a form of marginalizability, may be necessary for
likelihood-based inference and consistency of maximum likelihood estimators. We
show that likelihood-based superpopulation inference is not affected by lack of
projectivity and that projectivity is not a necessary condition for consistency
of maximum likelihood estimators.
| 0 | 0 | 1 | 1 | 0 | 0 |
Alternating Double Euler Sums, Hypergeometric Identities and a Theorem of Zagier | In this work, we derive relations between generating functions of double
stuffle relations and double shuffle relations to express the alternating
double Euler sums $\zeta\left(\overline{r}, s\right)$, $\zeta\left(r,
\overline{s}\right)$ and $\zeta\left(\overline{r}, \overline{s}\right)$ with
$r+s$ odd in terms of zeta values. We also give a direct proof of a
hypergeometric identity which is a limiting case of a basic hypergeometric
identity of Andrews. Finally, we give another proof of the formula of Zagier
for the multiple zeta values $\zeta(2,\ldots,2,3,2,\ldots,2)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
Achieving Dilution without Knowledge of Coordinates in the SINR Model | Considerable literature has been developed for various fundamental
distributed problems in the SINR (Signal-to-Interference-plus-Noise-Ratio)
model for radio transmission. A setting typically studied is when all nodes
transmit a signal of the same strength, and each device only has access to
knowledge about the total number of nodes in the network $n$, the range from
which each node's label is taken $[1,\dots,N]$, and the label of the device
itself. In addition, an assumption is made that each node also knows its
coordinates in the Euclidean plane. In this paper, we develop a technique that
allows algorithm designers to remove that last assumption. Working without
knowledge of the physical coordinates of the nodes truly captures the `ad-hoc'
nature of wireless networks.
Previous work in this area uses a flavor of a technique called dilution, in
which nodes transmit in a (predetermined) round-robin fashion and are thereby
able to reach all their neighbors. However, without knowing their physical
coordinates, nodes cannot determine the coordinates of their containing
(pivotal) grid box and thus seemingly cannot use dilution (to coordinate their
transmissions). We propose a new technique to achieve dilution without using
the knowledge of physical coordinates. This technique exploits the
understanding that the transmitting nodes lie in 2-D space, segmented by an
appropriate pivotal grid, without explicitly referring to the actual physical
coordinates of these nodes. Using this technique, it is possible for every weak
device to successfully transmit its message to all of its neighbors in
$\Theta(\lg N)$ rounds, as long as the density of transmitting nodes in any
physical grid box is bounded by a known constant. This technique, we feel, is
an important generic tool for devising practical protocols when physical
coordinates of the nodes are not known.
| 1 | 0 | 0 | 0 | 0 | 0 |
Probing the topology of density matrices | The mixedness of a quantum state is usually seen as an adversary to
topological quantization of observables. For example, exact quantization of the
charge transported in a so-called Thouless adiabatic pump is lifted at any
finite temperature in symmetry-protected topological insulators. Here, we show
that certain directly observable many-body correlators preserve the integrity
of topological invariants for mixed Gaussian quantum states in one dimension.
Our approach relies on the expectation value of the many-body
momentum-translation operator, and leads to a physical observable --- the
"ensemble geometric phase" (EGP) --- which represents a bona fide geometric
phase for mixed quantum states, in the thermodynamic limit. In cyclic
protocols, the EGP provides a topologically quantized observable which detects
encircled spectral singularities ("purity-gap" closing points) of density
matrices. While we identify the many-body nature of the EGP as a key
ingredient, we propose a conceptually simple, interferometric setup to directly
measure the latter in experiments with mesoscopic ensembles of ultracold atoms.
| 0 | 1 | 0 | 0 | 0 | 0 |
Stable Limit Theorems for Empirical Processes under Conditional Neighborhood Dependence | This paper introduces a new concept of stochastic dependence among many
random variables which we call conditional neighborhood dependence (CND).
Suppose that there are a set of random variables and a set of sigma algebras
where both sets are indexed by the same set endowed with a neighborhood system.
When the set of random variables satisfies CND, any two non-adjacent sets of
random variables are conditionally independent given sigma algebras having
indices in one of the two sets' neighborhood. Random variables with CND include
those with conditional dependency graphs and a class of Markov random fields
with a global Markov property. The CND property is useful for modeling
cross-sectional dependence governed by a complex, large network. This paper
provides two main results. The first result is a stable central limit theorem
for a sum of random variables with CND. The second result is a Donsker-type
result of stable convergence of empirical processes indexed by a class of
functions satisfying a certain bracketing entropy condition when the random
variables satisfy CND.
| 0 | 0 | 1 | 1 | 0 | 0 |
Topological Maxwell Metal Bands in a Superconducting Qutrit | We experimentally explore the topological Maxwell metal bands by mapping the
momentum space of condensed-matter models to the tunable parameter space of
superconducting quantum circuits. An exotic band structure that is effectively
described by the spin-1 Maxwell equations is imaged. Three-fold degenerate
points dubbed Maxwell points are observed in the Maxwell metal bands. Moreover,
we engineer and observe the topological phase transition from the topological
Maxwell metal to a trivial insulator, and report the first experiment to
measure Chern numbers higher than one.
| 0 | 1 | 0 | 0 | 0 | 0 |
Pressure effect and Superconductivity in $\beta$-Bi$_4$I$_4$ Topological Insulator | We report a detailed study of the transport coefficients of
$\beta$-Bi$_4$I$_4$ quasi-one dimensional topological insulator. Electrical
resistivity, thermoelectric power, thermal conductivity and Hall coefficient
measurements are consistent with the possible appearance of a charge density
wave order at low temperatures. Both electrons and holes contribute to the
conduction in $\beta$-Bi$_4$I$_4$ and the dominant type of charge carrier
changes with temperature as a consequence of temperature-dependent carrier
densities and mobilities. Measurements of resistivity and Seebeck coefficient
under hydrostatic pressure up to 2 GPa show a shift of the charge density wave
order to higher temperatures suggesting a strongly one-dimensional character at
ambient pressure. Surprisingly, superconductivity is induced in
$\beta$-Bi$_4$I$_4$ above 10 GPa, with a transition temperature of 4.0 K that
decreases slightly as the pressure is increased up to 20 GPa. Chemical
characterisation of the
pressure-treated samples shows amorphization of $\beta$-Bi$_4$I$_4$ under
pressure and rules out decomposition into Bi and BiI$_3$ at room-temperature
conditions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Backprop-Q: Generalized Backpropagation for Stochastic Computation Graphs | In real-world scenarios, it is appealing to learn a model carrying out
stochastic operations internally, known as stochastic computation graphs
(SCGs), rather than learning a deterministic mapping. However, standard
backpropagation is not applicable to SCGs. We attempt to address this issue
from the angle of cost propagation, with local surrogate costs, called
Q-functions, constructed and learned for each stochastic node in an SCG. Then,
the SCG can be trained based on these surrogate costs using standard
backpropagation. We propose the entire framework as a solution to generalize
backpropagation for SCGs, which resembles an actor-critic architecture but
based on a graph. For broad applicability, we study a variety of SCG structures
from one cost to multiple costs. We utilize recent advances in reinforcement
learning (RL) and variational Bayes (VB), such as off-policy critic learning
and unbiased-and-low-variance gradient estimation, and review them in the
context of SCGs. The generalized backpropagation extends transported learning
signals beyond gradients between stochastic nodes while preserving the benefit
of backpropagating gradients through deterministic nodes. Experimental
suggestions and concerns are listed to help design and test any specific model
using this framework.
| 0 | 0 | 0 | 1 | 0 | 0 |
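The cost-propagation scheme above leans on stochastic-gradient machinery borrowed from RL and VB. As a generic point of reference, here is a minimal likelihood-ratio (score-function) gradient estimator for a single Bernoulli stochastic node; this sketches the underlying technique only and is not the Backprop-Q implementation itself.

```python
import numpy as np

def score_function_grad(p, f, n_samples=200_000, rng=None):
    """Estimate d/dp E_{x~Bernoulli(p)}[f(x)] via the likelihood-ratio trick.

    grad = E[f(x) * d/dp log P(x; p)], where
    d/dp log P(x; p) = x/p - (1-x)/(1-p).
    """
    rng = np.random.default_rng(rng)
    x = rng.random(n_samples) < p                     # Bernoulli(p) samples
    score = np.where(x, 1.0 / p, -1.0 / (1.0 - p))    # per-sample score
    return np.mean(np.where(x, f(1), f(0)) * score)

# For f(x) = 2x + 1, E[f] = 2p + 1, so the exact gradient is 2.
g = score_function_grad(0.3, lambda x: 2 * x + 1, rng=0)
```

In an SCG, a surrogate Q-function would replace the raw cost `f` at each stochastic node, which is what keeps the variance of such estimators manageable.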
Development of a low-alpha-emitting μ-PIC for NEWAGE direction-sensitive dark-matter search | NEWAGE is a direction-sensitive dark-matter-search experiment that uses a
micro-patterned gaseous detector, or {\mu}-PIC, as the readout. The main
background sources are {\alpha}-rays from radioactive contaminants in the
{\mu}-PIC. We have therefore developed a low-alpha-emitting {\mu}-PIC and
measured its performance. We measured the surface {\alpha}-ray emission rate
of the {\mu}-PIC in the Kamioka mine using a surface {\alpha}-ray counter based
on a micro TPC.
| 0 | 1 | 0 | 0 | 0 | 0 |
Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool | The lack of open-source tools for hyperspectral data visualization and
analysis creates a demand for new tools. In this paper we present the new
PlanetServer, a set of tools comprising a web Geographic Information System
(GIS) and a recently developed Python Application Programming Interface (API)
capable of visualizing and analyzing a wide variety of hyperspectral data from
different planetary bodies. Current open-source WebGIS tools are evaluated in
order to give an overview and to contextualize how PlanetServer can help in
these matters. The web client is thoroughly described, as are the datasets
available in PlanetServer. The Python API is also described, along with the
rationale for its development. Two examples of mineral characterization of
different hydrosilicates, such as chlorites, prehnites and kaolinites, in the
Nili Fossae area on Mars are presented. As the obtained results show a positive
outcome in hyperspectral analysis and visualization compared to previous
literature, we suggest using the PlanetServer approach for such investigations.
| 1 | 1 | 0 | 0 | 0 | 0 |
GM-PHD Filter for Searching and Tracking an Unknown Number of Targets with a Mobile Sensor with Limited FOV | We study the problem of searching for and tracking a collection of moving
targets using a robot with a limited Field-Of-View (FOV) sensor. The actual
number of targets present in the environment is not known a priori. We propose
a search and tracking framework based on the concept of Bayesian Random Finite
Sets (RFSs). Specifically, we generalize the Gaussian Mixture Probability
Hypothesis Density (GM-PHD) filter which was previously applied for tracking
problems to allow for simultaneous search and tracking with a limited FOV
sensor. The proposed framework can extract individual target tracks as well as
estimate the number and the spatial density of targets. We also show how to use
the Gaussian Process (GP) regression to extract and predict non-linear target
trajectories in this framework. We demonstrate the efficacy of our techniques
through representative simulations and real data collected from an aerial
robot.
| 1 | 0 | 0 | 0 | 0 | 0 |
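The trajectory-modeling step above uses GP regression. As a reference point, here is a minimal numpy sketch of GP posterior-mean prediction for a 1-D trajectory (zero-mean prior and squared-exponential kernel; the kernel choice and hyperparameters are illustrative assumptions, not the paper's settings).

```python
import numpy as np

def rbf_kernel(t1, t2, length=1.0, var=1.0):
    """Squared-exponential kernel between two vectors of time stamps."""
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-4):
    """Posterior mean of a zero-mean GP at t_test given noisy observations."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    K_star = rbf_kernel(t_test, t_train)
    return K_star @ np.linalg.solve(K, y_train)

# Noise-free sine "trajectory" observed at 10 time steps.
t = np.linspace(0, 2 * np.pi, 10)
y = np.sin(t)
mu = gp_predict(t, y, t)   # posterior mean at the training times
```

With a small noise term the posterior mean nearly interpolates the observations; predicting at intermediate times gives the smooth trajectory estimate the framework feeds back into tracking.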
Regularity of solutions to scalar conservation laws with a force | We prove regularity estimates for entropy solutions to scalar conservation
laws with a force. Based on the kinetic form of a scalar conservation law, a
new decomposition of entropy solutions is introduced, by means of a
decomposition in the velocity variable, adapted to the non-degeneracy
properties of the flux function. This allows a finer control of the degeneracy
behavior of the flux. In addition, this decomposition allows us to make use of the
fact that the entropy dissipation measure has locally finite singular moments.
Based on these observations, improved regularity estimates for entropy
solutions to (forced) scalar conservation laws are obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
Coupled identical localized fermionic chains with quasi-random disorder | We analyze the ground state localization properties of an array of identical
interacting spinless fermionic chains with quasi-random disorder, using
non-perturbative Renormalization Group methods. In the case of one or two
chains, localization persists, while for a larger number of chains a
qualitatively different behavior is generically expected, unless the many-body
interaction vanishes. This is due to number-theoretical properties of the
frequency, similar to those assumed in KAM theory, and to cancellations from
the Pauli principle, which in the one- or two-chain case imply that all the
effective interactions are irrelevant; in contrast, for a larger number of
chains relevant effective interactions are present.
| 0 | 1 | 0 | 0 | 0 | 0 |
Calibrated Filtered Reduced Order Modeling | We propose a calibrated filtered reduced order model (CF-ROM) framework for
the numerical simulation of general nonlinear PDEs that are amenable to reduced
order modeling. The novel CF-ROM framework consists of two steps: (i) In the
first step, we use explicit ROM spatial filtering of the nonlinear PDE to
construct a filtered ROM. This filtered ROM is low-dimensional, but is not
closed (because of the nonlinearity in the given PDE). (ii) In the second step,
we use a calibration procedure to close the filtered ROM, i.e., to model the
interaction between the resolved and unresolved modes. To this end, we use a
linear or quadratic ansatz to model this interaction and close the filtered
ROM. To find the new coefficients in the closed filtered ROM, we solve an
optimization problem that minimizes the difference between the full order model
data and our ansatz. Although we use a fluid dynamics setting to illustrate how
to construct and use the CF-ROM framework, we emphasize that it is built on
general ideas of spatial filtering and optimization and is independent of
(restrictive) phenomenological arguments. Thus, the CF-ROM framework can be
applied to a wide variety of PDEs.
| 0 | 1 | 1 | 0 | 0 | 0 |
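Step (ii) above is, at bottom, a least-squares problem. A toy sketch with synthetic data and the linear ansatz (the dimensions and data generation here are made up for illustration; the actual CF-ROM fits its ansatz against full-order-model snapshots):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": r resolved ROM modes at m time instants.
m, r = 200, 3
a = rng.standard_normal((m, r))        # resolved-mode coefficients a(t)
A_true = rng.standard_normal((r, r))   # ground-truth closure operator
tau = a @ A_true.T                     # closure term extracted from FOM data

# Linear ansatz tau(a) ~ a @ A.T: fit A by minimizing ||a A^T - tau||_F.
A_fit, *_ = np.linalg.lstsq(a, tau, rcond=None)
A_fit = A_fit.T
residual = np.linalg.norm(a @ A_fit.T - tau)
```

A quadratic ansatz adds terms bilinear in `a`, which simply enlarges the regression matrix; the optimization stays linear in the unknown coefficients.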
Population of collective modes in light scattering by many atoms | The interaction of light with an atomic sample containing a large number of
particles gives rise to many collective (or cooperative) effects, such as
multiple scattering, superradiance and subradiance, even if the atomic density
is low and the incident optical intensity weak (linear optics regime). Tracing
over the degrees of freedom of the light field, the system can be well
described by an effective atomic Hamiltonian, which contains the light-mediated
dipole-dipole interaction between atoms. This long-range interaction is at the
origin of the various collective effects, or of collective excitation modes of
the system. Even though an analysis of the eigenvalues and eigenfunctions of
these collective modes does allow distinguishing superradiant modes, for
instance, from other collective modes, this is not sufficient to understand the
dynamics of a driven system, as not all collective modes are significantly
populated. Here, we study how the excitation parameters, i.e., the driving
field, determine the population of the collective modes. We investigate in
particular the role of the laser detuning from the atomic transition, and
demonstrate a simple relation between the detuning and the steady-state
population of the modes. This relation allows understanding several properties
of cooperative scattering, such as why superradiance and subradiance become
independent of the detuning at large enough detuning without vanishing, and why
superradiance, but not subradiance, is suppressed near resonance.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Framework for Accurate Drought Forecasting System Using Semantics-Based Data Integration Middleware | Technological advancement in Wireless Sensor Networks (WSNs) has made them
an invaluable component of reliable environmental monitoring systems; they
form the 'digital skin' through which to 'sense' and collect the context of
the surroundings and provide information on the processes leading to complex
events such as drought. However, these environmental properties are measured by
various heterogeneous sensors of different modalities in distributed locations
making up the WSN, using different abstruse terms and vocabulary in most cases
to denote the same observed property, causing data heterogeneity. Adding
semantics and understanding the relationships that exist between the observed
properties, and augmenting it with local indigenous knowledge is necessary for
an accurate drought forecasting system. In this paper, we propose the framework
for the semantic representation of sensor data and integration with indigenous
knowledge on drought using a middleware for an efficient drought forecasting
system.
| 1 | 0 | 0 | 0 | 0 | 0 |
Readings and Misreadings of J. Willard Gibbs' Elementary Principles in Statistical Mechanics | J. Willard Gibbs' Elementary Principles in Statistical Mechanics was the
definitive work of one of America's greatest physicists. Gibbs' book on
statistical mechanics establishes the basic principles and fundamental results
that have flowered into the modern field of statistical mechanics. However, at
a number of points, Gibbs' teachings on statistical mechanics diverge from
positions on the canonical ensemble found in more recent works, at points where
seemingly there should be agreement. The objective of this paper is to note
some of these points, so that Gibbs' actual positions are not misrepresented to
future generations of students.
| 0 | 1 | 0 | 0 | 0 | 0 |
Multilingual Adaptation of RNN Based ASR Systems | In this work, we focus on multilingual systems based on recurrent neural
networks (RNNs), trained using the Connectionist Temporal Classification (CTC)
loss function. Using a multilingual set of acoustic units poses difficulties.
To address this issue, we proposed Language Feature Vectors (LFVs) to train
language adaptive multilingual systems. Language adaptation, in contrast to
speaker adaptation, needs to be applied not only on the feature level, but also
to deeper layers of the network. In this work, we therefore extended our
previous approach by introducing a novel technique which we call "modulation".
Based on this method, we modulated the hidden layers of RNNs using LFVs. We
evaluated this approach in both full and low resource conditions, as well as
for grapheme- and phone-based systems. Using modulation, lower error rates
could be achieved across all of these conditions.
| 1 | 0 | 0 | 0 | 0 | 0 |
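A schematic numpy sketch of multiplicative modulation of hidden activations by LFVs (the sigmoid gating and the dimensions here are illustrative assumptions; the paper applies the idea inside the hidden layers of RNNs):

```python
import numpy as np

def modulate(hidden, lfv, W):
    """Scale each hidden unit by a language-dependent gate in (0, 1).

    hidden : (batch, n_hidden) hidden-layer activations
    lfv    : (batch, n_lfv)    language feature vectors
    W      : (n_lfv, n_hidden) learned projection of LFVs to gates
    """
    gate = 1.0 / (1.0 + np.exp(-(lfv @ W)))   # sigmoid gate per hidden unit
    return hidden * gate                       # element-wise modulation

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))     # batch of hidden activations
lfv = rng.standard_normal((4, 2))   # batch of language feature vectors
W = rng.standard_normal((2, 8))
out = modulate(h, lfv, W)
```

Because the gate lies in (0, 1), modulation can only attenuate units, letting the network suppress language-irrelevant features per utterance.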
SPIDERS: Selection of spectroscopic targets using AGN candidates detected in all-sky X-ray surveys | SPIDERS (SPectroscopic IDentification of eROSITA Sources) is an SDSS-IV
survey running in parallel to the eBOSS cosmology project. SPIDERS will obtain
optical spectroscopy for large numbers of X-ray-selected AGN and galaxy cluster
members detected in wide area eROSITA, XMM-Newton and ROSAT surveys. We
describe the methods used to choose spectroscopic targets for two
sub-programmes of SPIDERS: X-ray selected AGN candidates detected in the ROSAT
All Sky and the XMM-Newton Slew surveys. We have exploited a Bayesian
cross-matching algorithm, guided by priors based on mid-IR colour-magnitude
information from the WISE survey, to select the most probable optical
counterpart to each X-ray detection. We empirically demonstrate the high
fidelity of our counterpart selection method using a reference sample of bright
well-localised X-ray sources collated from XMM-Newton, Chandra and Swift-XRT
serendipitous catalogues, and also by examining blank-sky locations. We
describe the down-selection steps which resulted in the final set of
SPIDERS-AGN targets put forward for spectroscopy within the eBOSS/TDSS/SPIDERS
survey, and present catalogues of these targets. We also present catalogues of
~12000 ROSAT and ~1500 XMM-Newton Slew survey sources which have existing
optical spectroscopy from SDSS-DR12, including the results of our visual
inspections. On completion of the SPIDERS program, we expect to have collected
homogeneous spectroscopic redshift information over a footprint of ~7500
deg$^2$ for >85 percent of the ROSAT and XMM-Newton Slew survey sources having
optical counterparts in the magnitude range 17<r<22.5, producing a large and
highly complete sample of bright X-ray-selected AGN suitable for statistical
studies of AGN evolution and clustering.
| 0 | 1 | 0 | 0 | 0 | 0 |
Task-specific Word Identification from Short Texts Using a Convolutional Neural Network | Task-specific word identification aims to choose the task-related words that
best describe a short text. Existing approaches require well-defined seed words
or lexical dictionaries (e.g., WordNet), which are often unavailable for many
applications such as social discrimination detection and fake review detection.
However, we often have a set of labeled short texts where each short text has a
task-related class label, e.g., discriminatory or non-discriminatory, specified
by users or learned by classification algorithms. In this paper, we focus on
identifying task-specific words and phrases from short texts by exploiting
their class labels rather than using seed words or lexical dictionaries. We
consider the task-specific word and phrase identification as feature learning.
We train a convolutional neural network over a set of labeled texts and use
score vectors to localize the task-specific words and phrases. Experimental
results on sentiment word identification show that our approach significantly
outperforms existing methods. We further conduct two case studies to show the
effectiveness of our approach. One case study on a crawled tweets dataset
demonstrates that our approach can successfully capture the
discrimination-related words/phrases. The other case study on fake review
detection shows that our approach can identify the fake-review words/phrases.
| 1 | 0 | 0 | 0 | 0 | 0 |
Merlin-Arthur with efficient quantum Merlin and quantum supremacy for the second level of the Fourier hierarchy | We introduce a simple sub-universal quantum computing model, which we call
the Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a
classical reversible circuit sandwiched by two layers of Hadamard gates, and
therefore it is in the second level of the Fourier hierarchy. We show that
output probability distributions of the HC1Q model cannot be classically
efficiently sampled within a multiplicative error unless the polynomial-time
hierarchy collapses to the second level. The proof technique is different from
those used for previous sub-universal models, such as IQP, Boson Sampling, and
DQC1, and therefore the technique itself might be useful for finding other
sub-universal models that are hard to classically simulate. We also study the
classical verification of quantum computing in the second level of the Fourier
hierarchy. To this end, we define a promise problem, which we call the
probability distribution distinguishability with maximum norm (PDD-Max). It is
a promise problem to decide whether output probability distributions of two
quantum circuits are far apart or close. We show that PDD-Max is BQP-complete,
but if the two circuits are restricted to some types in the second level of the
Fourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a
Merlin-Arthur system with quantum polynomial-time Merlin and classical
probabilistic polynomial-time Arthur.
| 1 | 0 | 0 | 0 | 0 | 0 |
Light sterile neutrinos, dark matter, and new resonances in a $U(1)$ extension of the MSSM | We present $\psi'$MSSM, a model based on a $U(1)_{\psi'}$ extension of the
minimal supersymmetric standard model. The gauge symmetry $U(1)_{\psi'}$, also
known as $U(1)_N$, is a linear combination of the $U(1)_\chi$ and $U(1)_\psi$
subgroups of $E_6$. The model predicts the existence of three sterile neutrinos
with masses $\lesssim 0.1~{\rm eV}$, if the $U(1)_{\psi'}$ breaking scale is of
order 10 TeV. Their contribution to the effective number of neutrinos at
nucleosynthesis is $\Delta N_{\nu}\simeq 0.29$. The model can provide a variety
of possible cold dark matter candidates including the lightest sterile
sneutrino. If the $U(1)_{\psi'}$ breaking scale is increased to $10^3~{\rm
TeV}$, the sterile neutrinos, which are stable on account of a $Z_2$ symmetry,
become viable warm dark matter candidates. The observed value of the standard
model Higgs boson mass can be obtained with relatively light stop quarks thanks
to the D-term contribution from $U(1)_{\psi'}$. The model predicts diquark and
diphoton resonances which may be found at an updated LHC. The well-known $\mu$
problem is resolved and the observed baryon asymmetry of the universe can be
generated via leptogenesis. The breaking of $U(1)_{\psi'}$ produces
superconducting strings that may be present in our galaxy. A $U(1)$ R symmetry
plays a key role in keeping the proton stable and providing the light sterile
neutrinos.
| 0 | 1 | 0 | 0 | 0 | 0 |
Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers | Reliable extraction of cosmological information from clustering measurements
of galaxy surveys requires estimation of the error covariance matrices of
observables. The accuracy of covariance matrices is limited by our ability to
generate a sufficiently large number of independent mock catalogs that can
describe the physics of galaxy clustering across a wide range of scales.
Furthermore, galaxy mock catalogs are required to study systematics in galaxy
surveys and to test analysis tools. In this investigation, we present a fast
and accurate approach for generation of mock catalogs for the upcoming galaxy
surveys. Our method relies on low-resolution approximate gravity solvers to
simulate the large scale dark matter field, which we then populate with halos
according to a flexible nonlinear and stochastic bias model. In particular, we
extend the \textsc{patchy} code with an efficient particle mesh algorithm to
simulate the dark matter field (the \textsc{FastPM} code), and with a robust
MCMC method relying on the \textsc{emcee} code for constraining the parameters
of the bias model. Using the halos in the BigMultiDark high-resolution $N$-body
simulation as a reference catalog, we demonstrate that our technique can model
the bivariate probability distribution function (counts-in-cells), power
spectrum, and bispectrum of halos in the reference catalog. Specifically, we
show that the new ingredients permit us to reach percent-level accuracy in the
power spectrum up to $k\sim 0.4\; \,h\,{\rm Mpc}^{-1}$ (within 5\% up to $k\sim
0.6\; \,h\,{\rm Mpc}^{-1}$) with accurate bispectra improving previous results
based on Lagrangian perturbation theory.
| 0 | 1 | 0 | 0 | 0 | 0 |
Time-Optimal Path Tracking via Reachability Analysis | Given a geometric path, the Time-Optimal Path Tracking problem consists in
finding the control strategy to traverse the path time-optimally while
regulating tracking errors. A simple yet effective approach to this problem is
to decompose the controller into two components: (i)~a path controller, which
modulates the parameterization of the desired path in an online manner,
yielding a reference trajectory; and (ii)~a tracking controller, which takes
the reference trajectory and outputs joint torques for tracking. However, there
is one major difficulty: the path controller might not find any feasible
reference trajectory that can be tracked by the tracking controller because of
torque bounds. In turn, this results in degraded tracking performance. Here,
we propose a new path controller that is guaranteed to find feasible reference
trajectories by accounting for possible future perturbations. The main
technical tool underlying the proposed controller is Reachability Analysis, a
new method for analyzing path parameterization problems. Simulations show that
the proposed controller outperforms existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians | We consider deep classifying neural networks. We expose a structure in the
derivative of the logits with respect to the parameters of the model, which is
used to explain the existence of outliers in the spectrum of the Hessian.
Previous works decomposed the Hessian into two components, attributing the
outliers to one of them, the so-called Covariance of gradients. We show this
term is not a Covariance but a second moment matrix, i.e., it is influenced by
means of gradients. These means possess an additive two-way structure that is
the source of the outliers in the spectrum. This structure can be used to
approximate the principal subspace of the Hessian using certain "averaging"
operations, avoiding the need for high-dimensional eigenanalysis. We
corroborate this claim across different datasets, architectures and sample
sizes.
| 1 | 0 | 0 | 1 | 0 | 0 |
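The covariance-versus-second-moment distinction drawn above is the classical identity E[g g^T] = Cov(g) + E[g] E[g]^T: when gradient means are nonzero, the "Covariance of gradients" term picks up an additive contribution from the means. A quick numerical check of the identity (synthetic data, not actual deepnet gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 synthetic "gradient" samples in 5 dimensions with a nonzero mean.
G = rng.standard_normal((1000, 5)) + np.array([1.0, 2.0, 0.0, -1.0, 3.0])

second_moment = G.T @ G / len(G)          # (1/n) * sum_i g_i g_i^T
mean = G.mean(axis=0)
cov = np.cov(G, rowvar=False, bias=True)  # biased (1/n) sample covariance

# Second moment = covariance + outer product of the means.
recon = cov + np.outer(mean, mean)
```

The outer-product term is exactly the mean-driven part the paper identifies as the source of the spectral outliers.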
Discrete-Time Statistical Inference for Multiscale Diffusions | We study statistical inference for small-noise-perturbed multiscale dynamical
systems under the assumption that we observe a single time series from the slow
process only. We construct estimators for both averaging and homogenization
regimes, based on an appropriate misspecified model motivated by a second-order
stochastic Taylor expansion of the slow process with respect to a function of
the time-scale separation parameter. In the case of a fixed number of
observations, we establish consistency, asymptotic normality, and asymptotic
statistical efficiency of a minimum contrast estimator (MCE), the limiting
variance having been identified explicitly; we furthermore establish
consistency and asymptotic normality of a simplified minimum contrast
estimator (SMCE), which is however not in general efficient. These results are
then extended to the case of high-frequency observations under a condition
restricting the rate at which the number of observations may grow vis-à-vis
the separation of scales. Numerical simulations illustrate the theoretical
results.
| 0 | 0 | 1 | 1 | 0 | 0 |
Taggle: Scalable Visualization of Tabular Data through Aggregation | Visualization of tabular data---for both presentation and exploration
purposes---is a well-researched area. Although effective visual presentations
of complex tables are supported by various plotting libraries, creating such
tables is a tedious process and requires scripting skills. In contrast,
interactive table visualizations that are designed for exploration purposes
either operate at the level of individual rows, where large parts of the table
are accessible only via scrolling, or provide a high-level overview that often
lacks context-preserving drill-down capabilities. In this work we present
Taggle, a novel visualization technique for exploring and presenting large and
complex tables that are composed of individual columns of categorical or
numerical data and homogeneous matrices. The key contribution of Taggle is the
hierarchical aggregation of data subsets, for which the user can also choose
suitable visual representations. The aggregation strategy is complemented by the
ability to sort hierarchically such that groups of items can be flexibly
defined by combining categorical stratifications and by rich data selection and
filtering capabilities. We demonstrate the usefulness of Taggle for interactive
analysis and presentation of complex genomics data for the purpose of drug
discovery.
| 1 | 0 | 0 | 0 | 0 | 0 |
A parity-breaking electronic nematic phase transition in the spin-orbit coupled metal Cd$_2$Re$_2$O$_7$ | Strong electron interactions can drive metallic systems toward a variety of
well-known symmetry-broken phases, but the instabilities of correlated metals
with strong spin-orbit coupling have only recently begun to be explored. We
uncovered a multipolar nematic phase of matter in the metallic pyrochlore
Cd$_2$Re$_2$O$_7$ using spatially resolved second-harmonic optical anisotropy
measurements. Like previously discovered electronic nematic phases, this
multipolar phase spontaneously breaks rotational symmetry while preserving
translational invariance. However, it has the distinguishing property of being
odd under spatial inversion, which is allowed only in the presence of
spin-orbit coupling. By examining the critical behavior of the multipolar
nematic order parameter, we show that it drives the thermal phase transition
near 200 kelvin in Cd$_2$Re$_2$O$_7$ and induces a parity-breaking lattice
distortion as a secondary order.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sparse bounds for a prototypical singular Radon transform | We use a variant of the technique in [Lac17a] to give sparse $L^p(\log L)^4$
bounds for a class of model singular and maximal Radon transforms.
| 0 | 0 | 1 | 0 | 0 | 0 |
Sterile neutrinos in cosmology | Sterile neutrinos are natural extensions to the standard model of particle
physics in neutrino mass generation mechanisms. If they are relatively light,
less than approximately 10 keV, they can alter cosmology significantly, from
the early Universe to the matter and radiation energy density today. Here, we
review the cosmological role such light sterile neutrinos can play from the
early Universe, including production of keV-scale sterile neutrinos as dark
matter candidates, and dynamics of light eV-scale sterile neutrinos during the
weakly-coupled active neutrino era. We review proposed signatures of light
sterile neutrinos in cosmic microwave background and large scale structure
data. We also discuss keV-scale sterile neutrino dark matter decay signatures
in X-ray observations, including recent candidate $\sim$3.5 keV X-ray line
detections consistent with the decay of a $\sim$7 keV sterile neutrino dark
matter particle.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Statistical Approach to Increase Classification Accuracy in Supervised Learning Algorithms | Probabilistic mixture models have been widely used for different machine
learning and pattern recognition tasks such as clustering, dimensionality
reduction, and classification. In this paper, we address the most common
challenges faced by supervised learning algorithms by using mixture probability
distribution functions. With this modeling strategy, we identify sub-labels and
generate synthetic data in order to reach better classification accuracy. In
other words, we augment the training data synthetically in order to increase
the classification accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
Kinematics and workspace analysis of a 3-PPPS parallel robot with U-shaped base | This paper presents the kinematic analysis of the 3-PPPS parallel robot with
an equilateral mobile platform and a U-shaped base. The proposed design and
appropriate selection of parameters allow us to formulate simpler direct and
inverse kinematics for the manipulator under study. The parallel singularities
associated with the manipulator depend only on the orientation of the
end-effector. The quaternion parameters are used to represent the aspects, i.e.
the singularity
free regions of the workspace. A cylindrical algebraic decomposition is used to
characterize the workspace and joint space with a low number of cells. The
discriminant variety is obtained to describe the boundaries of each cell. With
these simplifications, the 3-PPPS parallel robot with the proposed design can
be regarded as the simplest 6-DOF robot, which further makes it useful for
industrial applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Accurate spectroscopic redshift of the multiply lensed quasar PSOJ0147 from the Pan-STARRS survey | Context: The gravitational lensing time delay method provides a one-step
determination of the Hubble constant (H0) with an uncertainty level on par with
the cosmic distance ladder method. However, to further investigate the nature
of the dark energy, a H0 estimate down to 1% level is greatly needed. This
requires dozens of strongly lensed quasars that are yet to be delivered by
ongoing and forthcoming all-sky surveys.
Aims: In this work we aim to determine the spectroscopic redshift of
PSOJ0147, the first strongly lensed quasar candidate found in the Pan-STARRS
survey. The main goal of our work is to derive an accurate redshift estimate of
the background quasar for cosmography.
Methods: To obtain timely spectroscopic follow-up, we took advantage of
the fast-track service programme that is carried out by the Nordic Optical
Telescope. Using a grism covering 3200 - 9600 A, we identified prominent
emission line features, such as Ly-alpha, N V, O I, C II, Si IV, C IV, and [C
III] in the spectra of the background quasar of the PSOJ0147 lens system. This
enables us to determine accurately the redshift of the background quasar.
Results: The spectrum of the background quasar exhibits prominent absorption
features bluewards of the strong emission lines, such as Ly-alpha, N V, and C
IV. These blue absorption lines indicate that the background source is a broad
absorption line (BAL) quasar. Unfortunately, the BAL features hamper an
accurate determination of redshift using the above-mentioned strong emission
lines. Nevertheless, we are able to determine a redshift of 2.341+/-0.001 from
three of the four lensed quasar images with the clean forbidden line [C III].
In addition, we also derive a maximum outflow velocity of ~ 9800 km/s with the
broad absorption features bluewards of the C IV emission line. This value of
maximum outflow velocity is in good agreement with other BAL quasars.
| 0 | 1 | 0 | 0 | 0 | 0 |
Upper bounds on the smallest size of a saturating set in projective planes and spaces of even dimension | In a projective plane $\Pi_{q}$ (not necessarily Desarguesian) of order $q$,
a point subset $\mathcal{S}$ is saturating (or dense) if any point of
$\Pi_{q}\setminus \mathcal{S}$ is collinear with two points in $\mathcal{S}$.
Modifying an approach of [31], we prove the following upper bound on the
smallest size $s(2,q)$ of a saturating set in $\Pi_{q}$: \begin{equation*}
s(2,q)\leq \sqrt{(q+1)\left(3\ln q+\ln\ln q
+\ln\frac{3}{4}\right)}+\sqrt{\frac{q}{3\ln q}}+3. \end{equation*} The bound
holds for all q, not necessarily large.
By using inductive constructions, upper bounds on the smallest size of a
saturating set in the projective space $\mathrm{PG}(N,q)$ with even dimension
$N$ are obtained.
All the results are also stated in terms of linear covering codes.
| 1 | 0 | 1 | 0 | 0 | 0 |
A Neural Representation of Sketch Drawings | We present sketch-rnn, a recurrent neural network (RNN) able to construct
stroke-based drawings of common objects. The model is trained on thousands of
crude human-drawn images representing hundreds of classes. We outline a
framework for conditional and unconditional sketch generation, and describe new
robust training methods for generating coherent sketch drawings in a vector
format.
| 1 | 0 | 0 | 1 | 0 | 0 |
Privileged Multi-label Learning | This paper presents privileged multi-label learning (PrML) to explore and
exploit the relationship between labels in multi-label learning problems. We
suggest that each individual label not only can be implicitly connected with
other labels via the low-rank constraint over label predictors, but its
performance on examples can also receive explicit comments from the other
labels, which together act as an \emph{Oracle teacher}. We generate a privileged
label
feature for each example and its individual label, and then integrate it into
the framework of low-rank based multi-label learning. The proposed algorithm
can therefore comprehensively explore and exploit label relationships by
inheriting all the merits of privileged information and low-rank constraints.
We show that PrML can be efficiently solved by a dual coordinate descent
algorithm using an iterative optimization strategy with cheap updates. Experiments
on benchmark datasets show that through privileged label features, the
performance can be significantly improved and PrML is superior to several
competing methods in most cases.
| 1 | 0 | 0 | 1 | 0 | 0 |
A Diophantine approximation problem with two primes and one $k$-th power of a prime | We refine a result of the last two Authors of [8] on a Diophantine
approximation problem with two primes and a $k$-th power of a prime which was
only proved to hold for $1<k<4/3$. We improve the $k$-range to $1<k\le 3$ by
combining Harman's technique on the minor arc with a suitable estimate for the
$L^4$-norm of the relevant exponential sum over primes $S_k$. In the common
range we also give a stronger bound for the approximation.
| 0 | 0 | 1 | 0 | 0 | 0 |
Reexamining Low Rank Matrix Factorization for Trace Norm Regularization | Trace norm regularization is a widely used approach for learning low rank
matrices. A standard optimization strategy is based on formulating the problem
as one of low rank matrix factorization which, however, leads to a non-convex
problem. In practice this approach works well, and it is often computationally
faster than standard convex solvers such as proximal gradient methods.
Nevertheless, it is not guaranteed to converge to a global optimum, and the
optimization can be trapped at poor stationary points. In this paper we show
that it is possible to characterize all critical points of the non-convex
problem. This allows us to provide an efficient criterion to determine whether
a critical point is also a global minimizer. Our analysis suggests an iterative
meta-algorithm that dynamically expands the parameter space and allows the
optimization to escape any non-global critical point, thereby converging to a
global minimizer. The algorithm can be applied to problems such as matrix
completion or multitask learning, and our analysis holds for any random
initialization of the factor matrices. Finally, we confirm the good performance
of the algorithm on synthetic and real datasets.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tier structure of strongly endotactic reaction networks | Reaction networks are mainly used to model the time-evolution of molecules of
interacting chemical species. Stochastic models are typically used when the
counts of the molecules are low, whereas deterministic models are used when the
counts are in high abundance. In 2011, the notion of `tiers' was introduced to
study the long time behavior of deterministically modeled reaction networks
that are weakly reversible and have a single linkage class. This `tier' based
argument was analytical in nature. Later, in 2014, the notion of a strongly
endotactic network was introduced in order to generalize the previous results
from weakly reversible networks with a single linkage class to this wider
family of networks. The point of view of this later work was more geometric and
algebraic in nature. The notion of strongly endotactic networks was later used
in 2018 to prove a large deviation principle for a class of stochastically
modeled reaction networks.
We provide an analytical characterization of strongly endotactic networks in
terms of tier structures. By doing so, we shed light on the connection between
the two points of view, and also make available a new proof technique for the
study of strongly endotactic networks. We show the power of this new technique
in two distinct ways. First, we demonstrate how the main previous results
related to strongly endotactic networks, both for the deterministic and
stochastic modeling choices, can be quickly obtained from our characterization.
Second, we demonstrate how new results can be obtained by proving that a
sub-class of strongly endotactic networks, when modeled stochastically, is
positive recurrent. Finally, and similarly to recent independent work by Agazzi
and Mattingly, we provide an example which closes a conjecture in the negative
by showing that stochastically modeled strongly endotactic networks can be
transient (and even explosive).
| 0 | 0 | 0 | 0 | 1 | 0 |
Relative FP-injective and FP-flat complexes and their model structures | In this paper, we introduce the notions of ${\rm FP}_n$-injective and ${\rm
FP}_n$-flat complexes in terms of complexes of type ${\rm FP}_n$. We show that
some characterizations analogous to those of injective, FP-injective and flat
complexes exist for ${\rm FP}_n$-injective and ${\rm FP}_n$-flat complexes. We
also introduce and study ${\rm FP}_n$-injective and ${\rm FP}_n$-flat
dimensions of modules and complexes, and give a relation between them in terms
of Pontrjagin duality. The existence of pre-envelopes and covers in this
setting is discussed, and we prove that any complex has an ${\rm FP}_n$-flat
cover and an ${\rm FP}_n$-flat pre-envelope, and in the case $n \geq 2$ that
any complex has an ${\rm FP}_n$-injective cover and an ${\rm FP}_n$-injective
pre-envelope. Finally, we construct model structures on the category of
complexes from the classes of modules with bounded ${\rm FP}_n$-injective and
${\rm FP}_n$-flat dimensions, and analyze several conditions under which it is
possible to connect these model structures via Quillen functors and Quillen
equivalences.
| 0 | 0 | 1 | 0 | 0 | 0 |
An Interactive Tool to Explore and Improve the Ply Number of Drawings | Given a straight-line drawing $\Gamma$ of a graph $G=(V,E)$, for every vertex
$v$ the ply disk $D_v$ is defined as a disk centered at $v$ where the radius of
the disk is half the length of the longest edge incident to $v$. The ply number
of a given drawing is defined as the maximum number of overlapping disks at
some point in $\mathbb{R}^2$. Here we present a tool to explore and evaluate
the ply number for graphs with instant visual feedback for the user. We
evaluate our methods in comparison to an existing ply computation by De Luca et
al. [WALCOM'17]. We are able to reduce the computation time from seconds to
milliseconds for given drawings and thereby contribute to further research on
the ply topic by providing an efficient tool to examine graphs extensively by
user interaction as well as some automatic features to reduce the ply number.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distinguishing the albedo of exoplanets from stellar activity | Light curves show the flux variation from the target star and its orbiting
planets as a function of time. In addition to the transit features created by
the planets, the flux also includes the reflected light component of each
planet, which depends on the planetary albedo. This signal is typically
referred to as phase curve and could be easily identified if there were no
additional noise. In addition to instrumental noise, stellar activity, such as
spots, can create a modulation in the data, which may be very difficult to
distinguish from the planetary signal. We analyze the limitations imposed by
the stellar activity on the detection of the planetary albedo, considering the
limitations imposed by the predicted level of instrumental noise and the short
duration of the observations planned in the context of the CHEOPS mission. As
initial condition, we have assumed that each star is characterized by just one
orbiting planet. We built mock light curves that included a realistic stellar
activity pattern, the reflected light component of the planet and an
instrumental noise level, which we have chosen to be at the same level as
predicted for CHEOPS. We then fit these light curves to try to recover the
reflected light component, assuming the activity patterns can be modeled with a
Gaussian process. We estimate that at least one full stellar rotation is
necessary to obtain a reliable detection of the planetary albedo. This result
is independent of the level of noise, but it depends on the limitation of the
Gaussian process to describe the stellar activity when the light curve
time-span is shorter than the stellar rotation. Finally, in the presence of
typical CHEOPS gaps in the simulations, we confirm that it is still possible to
obtain
a reliable albedo.
| 0 | 1 | 0 | 0 | 0 | 0 |
Inverse Reinforcement Learning Under Noisy Observations | We consider the problem of performing inverse reinforcement learning when the
trajectory of the expert is not perfectly observed by the learner. Instead, a
noisy continuous-time observation of the trajectory is provided to the learner.
This problem exhibits wide-ranging applications and the specific application we
consider here is the scenario in which the learner seeks to penetrate a
perimeter patrolled by a robot. The learner's field of view is limited, so it
cannot observe the patroller's complete trajectory. Instead, we allow
the learner to listen to the expert's movement sound, which it can also use to
estimate the expert's state and action using an observation model. We treat the
expert's state and action as hidden data and present an algorithm based on
expectation maximization and maximum entropy principle to solve the non-linear,
non-convex problem. Related work considers discrete-time observations and an
observation model that does not include actions. In contrast, our technique
takes expectations over both state and action of the expert, enabling learning
even in the presence of extreme noise and broader applications.
| 1 | 0 | 0 | 0 | 0 | 0 |
Using Perturbed Underdamped Langevin Dynamics to Efficiently Sample from Probability Distributions | In this paper we introduce and analyse Langevin samplers that consist of
perturbations of the standard underdamped Langevin dynamics. The perturbed
dynamics is such that its invariant measure is the same as that of the
unperturbed dynamics. We show that appropriate choices of the perturbations can
lead to samplers that have improved properties, at least in terms of reducing
the asymptotic variance. We present a detailed analysis of the new Langevin
sampler for Gaussian target distributions. Our theoretical results are
supported by numerical experiments with non-Gaussian target measures.
| 0 | 0 | 1 | 1 | 0 | 0 |
Meta Learning Framework for Automated Driving | The success of automated driving deployment depends highly on the
ability to develop an efficient and safe driving policy. The problem is well
formulated under the framework of optimal control as a cost optimization
problem. Model based solutions using traditional planning are efficient, but
require knowledge of the environment model. On the other hand, model-free
solutions suffer from sample inefficiency and require too many interactions with
the environment, which is infeasible in practice. Methods under the Reinforcement
Learning framework usually require the notion of a reward function, which is
not available in the real world. Imitation learning helps improve sample
efficiency by introducing prior knowledge obtained from the demonstrated
behavior, at the risk of exact behavior cloning without generalizing to unseen
environments. In this paper we propose a meta learning framework, based on data
set aggregation, to improve generalization of imitation learning algorithms.
Under the proposed framework, we propose MetaDAgger, a novel algorithm which
tackles the generalization issues in traditional imitation learning. We use The
Open Race Car Simulator (TORCS) to test our algorithm. Results on unseen test
tracks show significant improvement over traditional imitation learning
algorithms, while also improving the learning time and sample efficiency.
The results are also supported by visualization of the learnt features, which
demonstrates the generalization of the captured details.
| 1 | 0 | 0 | 1 | 0 | 0 |
MOBILITY21: Strategic Investments for Transportation Infrastructure & Technology | America's transportation infrastructure is the backbone of our economy. A
strong infrastructure means a strong America - an America that competes
globally, supports local and regional economic development, and creates jobs.
Strategic investments in our transportation infrastructure are vital to our
national security, economic growth, transportation safety and our technology
leadership. This document outlines critical needs for our transportation
infrastructure, identifies new technology drivers and proposes strategic
investments for safe and efficient air, ground, rail and marine mobility of
people and goods.
| 1 | 0 | 0 | 0 | 0 | 0 |
Hysteretic behaviour of metal connectors for hybrid (high- and low-grade mixed species) cross laminated timber | Cross-laminated timber (CLT) is a prefabricated solid engineered wood product
made of at least three orthogonally bonded layers of solid-sawn lumber that are
laminated by gluing longitudinal and transverse layers with structural
adhesives to form a solid panel. Previous studies have shown that CLT
buildings can perform well under seismic loading and have recognized the
essential role of connector performance in structural design, modelling, and
analysis of CLT buildings. When CLT is composed of high-grade/high-density
layers for the outer lamellas and low-grade/low-density for the core of the
panels, the CLT panels are herein designated as hybrid CLT panels as opposed to
conventional CLT panels that are built using one lumber type for both outer and
core lamellas. This paper presents results of a testing program developed to
estimate the cyclic performance of CLT connectors applied on hybrid CLT layups.
Two connectors are selected, which can be used in wall-to-floor connections.
These are readily available in the North American market. Characterization of
the performance of connectors is done in two perpendicular directions under a
modified CUREE cyclic loading protocol. Depending on the mode of failure, in
some cases, testing results indicate that when the nails or screws penetrate
the low-grade/low-density core lumber, a statistically significant difference
is obtained between hybrid and conventional layups. However, in other cases,
due to damage in the face layer or in the connection, force-displacement
results for conventional and hybrid CLT layups were not statistically
significant.
| 0 | 1 | 0 | 0 | 0 | 0 |
Partial Knowledge In Embeddings | Representing domain knowledge is crucial for any task. There has been a wide
range of techniques developed to represent this knowledge, from older logic
based approaches to the more recent deep learning based techniques (i.e.
embeddings). In this paper, we discuss some of these methods, focusing on the
representational expressiveness tradeoffs that are often made. In particular,
we focus on the ability of various techniques to encode `partial knowledge'
- a key component of successful knowledge systems. We introduce and describe
the concepts of `ensembles of embeddings' and `aggregate embeddings' and
demonstrate how they allow for partial knowledge.
| 1 | 0 | 0 | 0 | 0 | 0 |
Impact of the latest measurement of Hubble constant on constraining inflation models | We investigate how the constraint results of inflation models are affected by
considering the latest local measurement of $H_0$ in the global fit. We use the
observational data, including the Planck CMB full data, the BICEP2 and Keck
Array CMB B-mode data, the BAO data, and the latest measurement of Hubble
constant, to constrain the $\Lambda$CDM+$r$+$N_{\rm eff}$ model, and the
obtained 1$\sigma$ and 2$\sigma$ contours of $(n_s, r)$ are compared to the
theoretical predictions of selected inflationary models. We find that, in this
fit, the scale invariance is only excluded at the 3.3$\sigma$ level, and
$\Delta N_{\rm eff}>0$ is favored at the 1.6$\sigma$ level. The natural
inflation model is now excluded at more than 2$\sigma$ level; the Starobinsky
$R^2$ model becomes only favored at around 2$\sigma$ level; the most favored
model becomes the spontaneously broken SUSY inflation model; and, the brane
inflation model is also well consistent with the current data, in this case.
| 0 | 1 | 0 | 0 | 0 | 0 |
Ultra-light and strong: the massless harmonic oscillator and its singular path integral | In classical mechanics, a light particle bound by a strong elastic force just
oscillates at high frequency in the region allowed by its initial position and
velocity. In quantum mechanics, instead, the ground state of the particle
becomes completely de-localized in the limit $m \to 0$. The harmonic oscillator
thus ceases to be a useful microscopic physical model in the limit $m \to 0$,
but its Feynman path integral has interesting singularities which make it a
prototype of other systems exhibiting a "quantum runaway" from the classical
configurations near the minimum of the action. The probability density of the
coherent runaway modes can be obtained as the solution of a Fokker-Planck
equation associated to the condition $S=S_{min}$. This technique can be applied
also to other systems, notably to a dimensional reduction of the
Einstein-Hilbert action.
| 0 | 1 | 0 | 0 | 0 | 0 |
The Shannon-McMillan-Breiman theorem beyond amenable groups | We introduce a new isomorphism-invariant notion of entropy for measure
preserving actions of arbitrary countable groups on probability spaces, which
we call cocycle entropy. We develop methods to show that cocycle entropy
satisfies many of the properties of classical amenable entropy theory, but
applies in much greater generality to actions of non-amenable groups. One key
ingredient in our approach is a proof of a subadditive convergence principle
which is valid for measure-preserving amenable equivalence relations, going
beyond the Ornstein-Weiss Lemma for amenable groups.
For a large class of countable groups, which may in fact include all of them,
we prove the Shannon-McMillan-Breiman pointwise convergence theorem for cocycle
entropy in their measure-preserving actions.
We also compare cocycle entropy to Rokhlin entropy, and using an important
recent result of Seward we show that they coincide for free, ergodic actions of
any countable group in the class. Finally, we use the example of the free group
to demonstrate the geometric significance of the entropy equipartition property
implied by the Shannon-McMillan-Breiman theorem.
| 0 | 0 | 1 | 0 | 0 | 0 |
Assistive robotic device: evaluation of intelligent algorithms | Assistive robotic devices can be used to help people with upper body
disabilities gain more autonomy in their daily life. Although basic motions
such as positioning and orienting an assistive robot gripper in space allow the
performance of many tasks, it might be time-consuming and tedious to perform
more complex tasks. To overcome these difficulties, improvements can be
implemented at different levels, such as mechanical design, control interfaces
and intelligent control algorithms. In order to guide the design of solutions,
it is important to assess the impact and potential of different innovations.
This paper thus presents the evaluation of three intelligent algorithms aiming
to improve the performance of the JACO robotic arm (Kinova Robotics). The
evaluated algorithms are 'preset position', 'fluidity filter' and 'drinking
mode'. The algorithm evaluation was performed with 14 motorized wheelchair
users and showed a statistically significant improvement of the robot's
performance.
| 1 | 0 | 0 | 0 | 0 | 0 |
Solving the Brachistochrone Problem by an Influence Diagram | Influence diagrams are a decision-theoretic extension of probabilistic
graphical models. In this paper we show how they can be used to solve the
Brachistochrone problem. We present results of numerical experiments on this
problem, compare the solution provided by the influence diagram with the
optimal solution. The R code used for the experiments is presented in the
Appendix.
| 1 | 0 | 1 | 0 | 0 | 0 |
OH Survey along Sightlines of Galactic Observations of Terahertz C+ | We have obtained OH spectra of four transitions in the $^2\Pi_{3/2}$ ground
state, at 1612, 1665, 1667, and 1720 MHz, toward 51 sightlines that were
observed in the Herschel project Galactic Observations of Terahertz C+. The
observations cover the longitude range of (32$^\circ$, 64$^\circ$) and
(189$^\circ$, 207$^\circ$) in the northern Galactic plane. All of the diffuse
OH emissions conform to the so-called 'Sum Rule' of the four brightness
temperatures, indicating optically thin emission conditions for OH from diffuse
clouds in the Galactic plane. The column densities of the HI `halos' N(HI)
surrounding molecular clouds increase monotonically with OH column density,
N(OH), until saturating when N(HI) $=1.0\times 10^{21}$ cm$^{-2}$ and N(OH) $\geq
4.5\times 10^{15}$ cm$^{-2}$, indicating the presence of molecular gas that
cannot be traced by HI. Such a linear correlation, albeit weak, is suggestive
of HI halos' contribution to the UV shielding required for molecular formation.
About 18% of OH clouds have no associated CO emission (CO-dark) at a
sensitivity of 0.07 K but are associated with C$^+$ emission. A weak
correlation exists between C$^+$ intensity and OH column density for CO-dark
molecular clouds. These results imply that OH seems to be a better tracer of
molecular gas than CO in diffuse molecular regions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Strain manipulation of Majorana fermions in graphene armchair nanoribbons | Graphene nanoribbons with armchair edges are studied for externally enhanced,
but realistic parameter values: enhanced Rashba spin-orbit coupling due to
proximity to a transition metal dichalcogenide like WS$_{2}$, and enhanced
Zeeman field due to exchange coupling with a magnetic insulator like EuS under
applied magnetic field. The presence of s-wave superconductivity, induced
either by proximity or by decoration with alkali metal atoms like Ca or Li,
leads to a topological superconducting phase with Majorana end modes. The
topological phase is highly sensitive to the application of uniaxial strain,
with a transition to the trivial state above a critical strain well below
$0.1\%$. This sensitivity allows for real space manipulation of Majorana
fermions by applying non-uniform strain profiles. Similar manipulation is also
possible by applying inhomogeneous Zeeman field or chemical potential.
| 0 | 1 | 0 | 0 | 0 | 0 |
Relativistic wide-angle galaxy bispectrum on the light-cone | Given the important role that the galaxy bispectrum has recently acquired in
cosmology and the scale and precision of forthcoming galaxy clustering
observations, it is timely to derive the full expression of the large-scale
bispectrum, going beyond approximate treatments that neglect integrated terms
or higher-order bias terms or use the Limber approximation. On cosmological
scales, relativistic effects that arise from observing on the past light-cone
alter the observed galaxy number counts, therefore leaving their imprints on
N-point correlators at all orders. In this paper we compute for the first time
the bispectrum including all general relativistic, local and integrated,
effects at second order, the tracers' bias at second order, geometric effects
as well as the primordial non-Gaussianity contribution. This is timely
considering that future surveys will probe scales comparable to the horizon
where approximations widely used currently may not hold; neglecting these
effects may introduce biases in estimation of cosmological parameters as well
as primordial non-Gaussianity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Recent progress in many-body localization | This article is a brief introduction to the rapidly evolving field of
many-body localization. Rather than giving an in-depth review of the subject,
our aspiration here is simply to introduce the problem and its general context,
outlining a few directions where notable progress has been achieved in recent
years. We hope that this will prepare the readers for the more specialized
articles appearing in the forthcoming dedicated volume of Annalen der Physik,
where these developments are discussed in more detail.
| 0 | 1 | 0 | 0 | 0 | 0 |
Magnetic Field Dependence of Spin Glass Free Energy Barriers | We measure the field dependence of spin glass free energy barriers in a thin
amorphous Ge:Mn film through the time dependence of the magnetization. After
the correlation length $\xi(t, T)$ has reached the film thickness $\mathcal
{L}=155$~\AA~so that the dynamics are activated, we change the initial magnetic
field by $\delta H$. In agreement with the scaling behavior exhibited in a
companion Letter [Janus collaboration: M. Baity-Jesi {\it et al.}, Phys. Rev.
Lett. {\bf 118}, 157202 (2017)], we find the activation energy is increased
when $\delta H < 0$. The change is proportional to $(\delta H)^2$ with the
addition of a small $(\delta H)^4$ term. The magnitude of the change of the
spin glass free energy barriers is in near quantitative agreement with the
prediction of a barrier model.
| 0 | 1 | 0 | 0 | 0 | 0 |
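The abstract above reports a barrier change of the form $a(\delta H)^2 + b(\delta H)^4$. As an illustrative aside (not part of the dataset or the paper), such an even-polynomial law is linear in its coefficients and can be recovered by ordinary least squares; the data and coefficient values below are synthetic assumptions:

```python
import numpy as np

# Fit dE = a*(dH)**2 + b*(dH)**4 by linear least squares.
# The "measurements" below are synthetic, with invented coefficients a=2.0, b=0.3.
dH = np.linspace(-1.0, 1.0, 21)
dE = 2.0 * dH**2 + 0.3 * dH**4

# The model is linear in (a, b), so ordinary least squares applies directly.
A = np.column_stack([dH**2, dH**4])
(a, b), *_ = np.linalg.lstsq(A, dE, rcond=None)
```

Because the synthetic data are noise-free, the fit recovers the coefficients to machine precision.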
Topological Structures on DMC spaces | Two channels are said to be equivalent if they are degraded from each other.
The space of equivalent channels with input alphabet $X$ and output alphabet
$Y$ can be naturally endowed with the quotient of the Euclidean topology by the
equivalence relation. A topology on the space of equivalent channels with fixed
input alphabet $X$ and arbitrary but finite output alphabet is said to be
natural if and only if it induces the quotient topology on the subspaces of
equivalent channels sharing the same output alphabet. We show that every
natural topology is $\sigma$-compact, separable and path-connected. On the
other hand, if $|X|\geq 2$, a Hausdorff natural topology is not Baire and it is
not locally compact anywhere. This implies that no natural topology can be
completely metrized if $|X|\geq 2$. The finest natural topology, which we call
the strong topology, is shown to be compactly generated, sequential and $T_4$.
On the other hand, the strong topology is not first-countable anywhere, hence
it is not metrizable. We show that in the strong topology, a subspace is
compact if and only if it is rank-bounded and strongly-closed. We introduce a
metric distance on the space of equivalent channels which compares the noise
levels between channels. The induced metric topology, which we call the
noisiness topology, is shown to be natural. We also study topologies that are
inherited from the space of meta-probability measures by identifying channels
with their Blackwell measures. We show that the weak-* topology is exactly the
same as the noisiness topology and hence it is natural. We prove that if
$|X|\geq 2$, the total variation topology is neither natural nor Baire, hence it
is not completely metrizable. Moreover, it is not locally compact anywhere.
Finally, we show that the Borel $\sigma$-algebra is the same for all Hausdorff
natural topologies.
| 1 | 0 | 1 | 0 | 0 | 0 |
The spectral element method as an efficient tool for transient simulations of hydraulic systems | This paper presents transient numerical simulations of hydraulic systems in
engineering applications using the spectral element method (SEM). Along with a
detailed description of the underlying numerical method, it is shown that the
SEM yields highly accurate numerical approximations at modest computational
costs, which is in particular useful for optimization-based control
applications. In order to enable fast explicit time stepping methods, the
boundary conditions are imposed weakly using a numerically stable upwind
discretization. The benefits of the SEM in the area of hydraulic system
simulations are demonstrated in various examples including several simulations
of strong water hammer effects. Due to its exceptional convergence
characteristics, the SEM is particularly well suited to be used in real-time
capable control applications. As an example, it is shown that the time
evolution of pressure waves in a large scale pumped-storage power plant can be
well approximated using a low-dimensional system representation utilizing a
minimum number of dynamical states.
| 0 | 1 | 0 | 0 | 0 | 0 |
Re-entrant charge order in overdoped (Bi,Pb)$_{2.12}$Sr$_{1.88}$CuO$_{6+δ}$ outside the pseudogap regime | Charge modulations are considered as a leading competitor of high-temperature
superconductivity in the underdoped cuprates, and their relationship to Fermi
surface reconstructions and to the pseudogap state is an important subject of
current research. Overdoped cuprates, on the other hand, are widely regarded as
conventional Fermi liquids without collective electronic order. For the
overdoped (Bi,Pb)$_{2.12}$Sr$_{1.88}$CuO$_{6+\delta}$ (Bi2201) high-temperature
superconductor, here we report resonant x-ray scattering measurements revealing
incommensurate charge order reflections, with correlation lengths of 40-60
lattice units, that persist up to at least 250K. Charge order is markedly more
robust in the overdoped than in the underdoped regime, but the incommensurate
wave vectors follow a common trend; moreover, it coexists with a single,
unreconstructed Fermi surface, without pseudogap or nesting features, as
determined from angle-resolved photoemission spectroscopy. This re-entrant
charge order is reproduced by model calculations that consider a strong van
Hove singularity within a Fermi liquid framework.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Spectral Approach for the Design of Experiments: Design, Analysis and Algorithms | This paper proposes a new approach to construct high quality space-filling
sample designs. First, we propose a novel technique to quantify the
space-filling property and optimally trade-off uniformity and randomness in
sample designs in arbitrary dimensions. Second, we connect the proposed metric
(defined in the spatial domain) to the objective measure of the design
performance (defined in the spectral domain). This connection serves as an
analytic framework for evaluating the qualitative properties of space-filling
designs in general. Using the theoretical insights provided by this
spatial-spectral analysis, we derive the notion of optimal space-filling
designs, which we refer to as space-filling spectral designs. Third, we propose
an efficient estimator to evaluate the space-filling properties of sample
designs in arbitrary dimensions and use it to develop an optimization framework
to generate high quality space-filling designs. Finally, we carry out a
detailed performance comparison on two different applications in 2 to 6
dimensions: a) image reconstruction and b) surrogate modeling on several
benchmark optimization functions and an inertial confinement fusion (ICF)
simulation code. We demonstrate that the proposed spectral designs significantly
outperform existing approaches, especially in high dimensions.
| 1 | 0 | 0 | 1 | 0 | 0 |
Detection and Tracking of General Movable Objects in Large 3D Maps | This paper studies the problem of detection and tracking of general objects
with long-term dynamics, observed by a mobile robot moving in a large
environment. A key problem is that, due to the environment scale, the robot can
only observe a subset of the objects at any given time. Since some time passes
between observations of objects in different places, the objects might be moved
when the robot is not there. We propose a model for this movement in which the
objects typically only move locally, but with some small probability they jump
longer distances, through what we call global motion. For filtering, we
decompose the posterior over local and global movements into two linked
processes. The posterior over the global movements and measurement associations
is sampled, while we track the local movement analytically using Kalman
filters. This novel filter is evaluated on point cloud data gathered
autonomously by a mobile robot over an extended period of time. We show that
tracking jumping objects is feasible, and that the proposed probabilistic
treatment outperforms previous methods when applied to real world data. The key
to efficient probabilistic tracking in this scenario is focused sampling of the
object posteriors.
| 1 | 0 | 0 | 0 | 0 | 0 |
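The record above tracks local object movement analytically with Kalman filters. As an illustrative aside (not the authors' implementation), a minimal one-dimensional Kalman filter for a locally static position looks like this; the noise parameters and measurements are invented:

```python
# Minimal 1D Kalman filter: track an object's position under small local motion.
# The process/measurement noise values and the measurement sequence are
# illustrative assumptions, not values from the paper.

def kalman_step(x, P, z, q=0.1, r=1.0):
    """One predict+update cycle for a locally constant position model.
    x: state estimate, P: estimate variance, z: new measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: position assumed locally constant, so only uncertainty grows.
    P = P + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

# Start with a vague prior, then absorb four noisy position measurements.
x, P = 0.0, 10.0
for z in [1.0, 1.2, 0.9, 1.1]:
    x, P = kalman_step(x, P, z)
```

After a few measurements the estimate settles near the measured positions and the variance shrinks well below the prior.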
Mapping the aberrations of a wide-field spectrograph using a photonic comb | We demonstrate a new approach to calibrating the spectral-spatial response of
a wide-field spectrograph using a fibre etalon comb. Conventional wide-field
instruments employed on front-line telescopes are mapped with a grid of
diffraction-limited holes cut into a focal plane mask. The aberrated grid
pattern in the image plane typically reveals n-symmetric (e.g. pincushion)
distortion patterns over the field arising from the optical train. This
approach is impractical in the presence of a dispersing element because the
diffraction-limited spots in the focal plane are imaged as an array of
overlapping spectra. Instead we propose a compact solution that builds on
recent developments in fibre-based Fabry-Perot etalons. We introduce a novel
approach to near-field illumination that exploits a 25cm commercial telescope
and the propagation of skew rays in a multimode fibre. The mapping of the
optical transfer function across the full field is represented accurately
(<0.5% rms residual) by an orthonormal set of Chebyshev moments. Thus we are
able to reconstruct the full 4Kx4K CCD image of the dispersed output from the
optical fibres using this mapping, as we demonstrate. Our method removes one of
the largest sources of systematic error in multi-object spectroscopy.
| 0 | 1 | 0 | 0 | 0 | 0 |
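The record above represents the optical transfer function with an orthonormal set of Chebyshev moments. As an illustrative aside (synthetic data, not the instrument's), fitting a smooth distortion with a low-order Chebyshev expansion and checking the rms residual can be sketched as:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a synthetic cubic (pincushion-like) 1D distortion on [-1, 1].
x = np.linspace(-1.0, 1.0, 200)
distortion = x + 0.05 * x**3

# Fit a low-order Chebyshev expansion and measure the rms residual,
# analogous to the <0.5% rms figure quoted for the full 2D mapping.
coeffs = C.chebfit(x, distortion, deg=5)
residual = distortion - C.chebval(x, coeffs)
rms = np.sqrt(np.mean(residual**2))
```

A degree-5 Chebyshev fit represents a cubic exactly, so the residual here is at machine precision; real distortion maps would leave a small but nonzero residual.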
On The Asymptotic Efficiency of Selection Procedures for Independent Gaussian Populations | The field of discrete event simulation and optimization techniques motivates
researchers to adjust classic ranking and selection (R&S) procedures to the
settings where the number of populations is large. We use insights from extreme
value theory in order to reveal the asymptotic properties of R&S procedures.
Namely, we generalize the asymptotic result of Robbins and Siegmund regarding
selection from independent Gaussian populations with known constant variance by
their means to the case of selecting a subset of varying size out of a given
set of populations. In addition, we revisit the problem of selecting the
population with the highest mean among independent Gaussian populations with
unknown and possibly different variances. Particularly, we derive the relative
asymptotic efficiency of Dudewicz and Dalal's and Rinott's procedures, showing
that the former can be asymptotically superior by a multiplicative factor which
is larger than one, but this factor may be reduced by proper choice of
parameters. We also use our asymptotic results to suggest that the sample size
in the first stage of the two procedures should be logarithmic in the number of
populations.
| 0 | 0 | 1 | 1 | 0 | 0 |
DSOD: Learning Deeply Supervised Object Detectors from Scratch | We present Deeply Supervised Object Detector (DSOD), a framework that can
learn object detectors from scratch. State-of-the-art object detectors rely
heavily on the off-the-shelf networks pre-trained on large-scale classification
datasets like ImageNet, which incurs learning bias due to the difference on
both the loss functions and the category distributions between classification
and detection tasks. Model fine-tuning for the detection task could alleviate
this bias to some extent but not fundamentally. Besides, transferring
pre-trained models from classification to detection between discrepant domains
is even more difficult (e.g. RGB to depth images). A better solution to tackle
these two critical problems is to train object detectors from scratch, which
motivates our proposed DSOD. Previous efforts in this direction mostly failed
due to much more complicated loss functions and limited training data in object
detection. In DSOD, we contribute a set of design principles for training
object detectors from scratch. One of the key findings is that deep
supervision, enabled by dense layer-wise connections, plays a critical role in
learning a good detector. Combining with several other principles, we develop
DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL
VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better
results than the state-of-the-art solutions with much more compact models. For
instance, DSOD outperforms SSD on all three benchmarks with real-time detection
speed, while requiring only 1/2 the parameters of SSD and 1/10 the parameters
of Faster R-CNN. Our code and models are available at: this https URL .
| 1 | 0 | 0 | 0 | 0 | 0 |
Data Capture & Analysis to Assess Impact of Carbon Credit Schemes | Data enables Non-Governmental Organisations (NGOs) to quantify the impact of
their initiatives to themselves and to others. The increasing amount of data
stored today can be seen as a direct consequence of the falling cost of
obtaining it. Cheap data acquisition harnesses existing communications networks
to collect information. Globally, more people are connected by the mobile phone
network than by the Internet. We worked with Vita, a development organisation
implementing green initiatives, to develop an SMS-based data collection
application to collect social data surrounding the impacts of their
initiatives. We present our system design and lessons learned from
on-the-ground testing.
| 1 | 0 | 0 | 0 | 0 | 0 |
Conducting Simulations in Causal Inference with Networks-Based Structural Equation Models | The past decade has seen an increasing body of literature devoted to the
estimation of causal effects in network-dependent data. However, the validity
of many classical statistical methods in such data is often questioned. There
is an emerging need for objective and practical ways to assess which causal
methodologies might be applicable and valid in network-dependent data. This
paper describes a set of tools implemented in the simcausal R package that
allow simulating data based on a user-specified structural equation model for
connected units. Specification and simulation of counterfactual data is
implemented for static, dynamic and stochastic interventions. A new interface
aims to simplify the specification of network-based functional relationships
between connected units. A set of examples illustrates how these simulations
may be applied to evaluation of different statistical methods for estimation of
causal effects in network-dependent data.
| 0 | 0 | 0 | 1 | 0 | 0 |
Finite Semihypergroups Built From Groups | Necessary and sufficient conditions for finite semihypergroups to be built
from groups of the same order are established.
| 0 | 0 | 1 | 0 | 0 | 0 |
Auslander Modules | In this paper, we introduce the notion of Auslander modules, inspired from
Auslander's zero-divisor conjecture (theorem) and give some interesting results
for these modules. We also investigate torsion-free modules.
| 0 | 0 | 1 | 0 | 0 | 0 |
Verifying Security Protocols using Dynamic Strategies | Current formal approaches have been successfully used to find design flaws in
many security protocols. However, it is still challenging to automatically
analyze protocols due to their large or infinite state spaces. In this paper,
we propose SmartVerif, a novel framework that can automatically verify security
protocols without any human intervention. Experimental results show that
SmartVerif automatically verifies security protocols that cannot be
automatically verified by existing approaches. The case studies also validate
the effectiveness of our dynamic strategy.
| 1 | 0 | 0 | 0 | 0 | 0 |
Distribution uniformity of laser-accelerated proton beams | Compared with conventional accelerators, laser plasma accelerators can
generate high energy ions at a greatly reduced scale, due to their TV/m
acceleration gradient. A compact laser plasma accelerator (CLAPA) has been
built at the Institute of Heavy Ion Physics at Peking University. It will be
used for applied research like biological irradiation, astrophysics
simulations, etc. A beamline system with multiple quadrupoles and an analyzing
magnet for laser-accelerated ions is proposed here. Since laser-accelerated ion
beams have broad energy spectra and large angular divergence, the parameters
(beam waist position in the Y direction, beam line layout, drift distance,
magnet angles etc.) of the beamline system are carefully designed and optimised
to obtain a radially symmetric proton distribution at the irradiation platform.
Requirements of energy selection and differences in focusing or defocusing in
application systems greatly influence the evolution of proton distributions.
With optimal parameters, radially symmetric proton distributions can be
achieved, and protons of different energies with an energy spread within 5%
have similar transverse areas at the experimental target.
| 0 | 1 | 0 | 0 | 0 | 0 |
Learning to Detect Human-Object Interactions | We study the problem of detecting human-object interactions (HOI) in static
images, defined as predicting a human and an object bounding box with an
interaction class label that connects them. HOI detection is a fundamental
problem in computer vision as it provides semantic information about the
interactions among the detected objects. We introduce HICO-DET, a new large
benchmark for HOI detection, by augmenting the current HICO classification
benchmark with instance annotations. To solve the task, we propose Human-Object
Region-based Convolutional Neural Networks (HO-RCNN). At the core of our
HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the
spatial relations between two bounding boxes. Experiments on HICO-DET
demonstrate that our HO-RCNN, by exploiting human-object spatial relations
through Interaction Patterns, significantly improves the performance of HOI
detection over baseline approaches.
| 1 | 0 | 0 | 0 | 0 | 0 |
Data clustering with edge domination in complex networks | This paper presents a model for a dynamical system where particles dominate
edges in a complex network. The proposed dynamical system is then extended to
an application on the problem of community detection and data clustering. In
the case of the data clustering problem, 6 different techniques were simulated
on 10 different datasets in order to compare with the proposed technique. The
results show that the proposed algorithm performs well when prior knowledge of
the number of clusters is known to the algorithm.
| 1 | 1 | 0 | 0 | 0 | 0 |
sWSI: A Low-cost and Commercial-quality Whole Slide Imaging System on Android and iOS Smartphones | In this paper, scalable Whole Slide Imaging (sWSI), a novel high-throughput,
cost-effective and robust whole slide imaging system on both Android and iOS
platforms, is introduced and analyzed. With sWSI, most mainstream smartphones
connected to an optical eyepiece of any manually controlled microscope can be
automatically controlled to capture sequences of mega-pixel fields of view
that are synthesized into giga-pixel virtual slides. Remote servers carry out
the majority of computation asynchronously to support clients running at
satisfying frame rates without sacrificing image quality or robustness. A
typical 15x15mm sample can be digitized in 30 seconds with 4X or in 3 minutes
with 10X object magnification, costing under $1. The virtual slide quality is
considered comparable to existing high-end scanners, and thus satisfactory for
clinical usage, by surveyed pathologists. The scan procedure, with features
such as support for magnification up to 100x, recording of z-stacks,
specimen-type neutrality and real-time feedback, is deemed
work-flow-friendly and reliable.
| 1 | 1 | 0 | 0 | 0 | 0 |
Predicting multicellular function through multi-layer tissue networks | Motivation: Understanding functions of proteins in specific human tissues is
essential for insights into disease diagnostics and therapeutics, yet
prediction of tissue-specific cellular function remains a critical challenge
for biomedicine.
Results: Here we present OhmNet, a hierarchy-aware unsupervised node feature
learning approach for multi-layer networks. We build a multi-layer network,
where each layer represents molecular interactions in a different human tissue.
OhmNet then automatically learns a mapping of proteins, represented as nodes,
to a neural embedding based low-dimensional space of features. OhmNet
encourages sharing of similar features among proteins with similar network
neighborhoods and among proteins activated in similar tissues. The algorithm
generalizes prior work, which generally ignores relationships between tissues,
by modeling tissue organization with a rich multiscale tissue hierarchy. We use
OhmNet to study multicellular function in a multi-layer protein interaction
network of 107 human tissues. In 48 tissues with known tissue-specific cellular
functions, OhmNet provides more accurate predictions of cellular function than
alternative approaches, and also generates more accurate hypotheses about
tissue-specific protein actions. We show that taking into account the tissue
hierarchy leads to improved predictive power. Remarkably, we also demonstrate
that it is possible to leverage the tissue hierarchy in order to effectively
transfer cellular functions to a functionally uncharacterized tissue. Overall,
OhmNet moves from flat networks to multiscale models able to predict a range of
phenotypes spanning cellular subsystems.
| 1 | 0 | 0 | 1 | 0 | 0 |
Toward Microphononic Circuits on Chip: An Evaluation of Components based on High-Contrast Evanescent Confinement of Acoustic Waves | We investigate the prospects for micron-scale acoustic wave components and
circuits on chip in solid planar structures that do not require suspension. We
leverage evanescent guiding of acoustic waves by high slowness contrast
materials readily available in silicon complementary metal-oxide semiconductor
(CMOS) processes. High slowness contrast provides strong confinement of GHz
frequency acoustic fields in micron-scale structures. We address the
fundamental implications of intrinsic material and radiation losses on
operating frequency, bandwidth, device size and as a result practicality of
multi-element microphononic circuits based on solid embedded waveguides. We
show that a family of acoustic components based on evanescently guided acoustic
waves, including waveguide bends, evanescent couplers, Y-splitters, and
acoustic-wave microring resonators, can be realized in compact, micron-scale
structures, and provide basic scaling and performance arguments for these
components based on material properties and simulations. We further find that
wave propagation losses are expected to permit high quality factor (Q),
narrowband resonators and propagation lengths allowing delay lines and the
coupling or cascading of multiple components to form functional circuits, of
potential utility in guided acoustic signal processing on chip. We also address
and simulate bends and radiation loss, providing insight into routing and
resonators. Such circuits could be monolithically integrated with electronic
and photonic circuits on a single chip with expanded capabilities.
| 0 | 1 | 0 | 0 | 0 | 0 |
Adaptive Submodular Influence Maximization with Myopic Feedback | This paper examines the problem of adaptive influence maximization in social
networks. As adaptive decision making is a time-critical task, a realistic
feedback model has been considered, called myopic. In this direction, we
propose the myopic adaptive greedy policy that is guaranteed to provide a (1 -
1/e)-approximation of the optimal policy under a variant of the independent
cascade diffusion model. This strategy maximizes an alternative utility
function that has been proven to be adaptive monotone and adaptive submodular.
The proposed utility function considers the cumulative number of active nodes
over time, instead of the total number of active nodes at the end of the
diffusion. Our empirical analysis on real-world social networks reveals the
benefits of the proposed myopic strategy, validating our theoretical results.
| 1 | 0 | 0 | 0 | 0 | 0 |
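The record above builds on the classical (1 - 1/e) greedy guarantee for monotone submodular maximization. As an illustrative aside, here is a toy greedy routine on a coverage function (a standard monotone submodular objective), not the authors' adaptive myopic policy; the sets are invented:

```python
# Toy greedy maximization of a monotone submodular coverage function.
# This conveys the greedy template behind the (1 - 1/e) guarantee;
# the sets below are invented for illustration.

def greedy_max_coverage(sets, k):
    """Pick k set indices greedily by marginal coverage gain."""
    covered, chosen = set(), []
    for _ in range(k):
        # Select the set adding the most new elements to the cover.
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
```

With the invented sets, the first pick is the largest set and the second the one with the largest marginal gain, covering all seven elements.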
Evaluation of Direct Haptic 4D Volume Rendering of Partially Segmented Data for Liver Puncture Simulation | This work presents an evaluation study using a force feedback evaluation
framework for a novel direct needle force volume rendering concept in the
context of liver puncture simulation. PTC/PTCD puncture interventions targeting
the bile ducts have been selected to illustrate this concept. The haptic
algorithms of the simulator system are based on (1) partially segmented patient
image data and (2) a non-linear spring model effective at organ borders. The
primary aim is to quantitatively evaluate force errors caused by our patient
modeling approach, in comparison to haptic force output obtained from using
gold-standard, completely manually segmented data. The evaluation of the force
algorithms against force output from fully manually segmented gold-standard
patient models yields a low mean root-mean-squared force error of 0.12 N, and
systematic maximum absolute errors of up to 1.6 N. Force
errors were evaluated on 31,222 preplanned test paths from 10 patients. Only
twelve percent of the emitted forces along these paths were affected by errors.
This is the first study evaluating haptic algorithms with deformable virtual
patients in silico. We prove haptic rendering plausibility on a very high
number of test paths. Important errors are below just noticeable differences
for the hand-arm system.
| 1 | 1 | 0 | 0 | 0 | 0 |
The Role of Big Data on Smart Grid Transition | Despite being popularly referred to as the ultimate solution for all problems
of our current electric power system, smart grid is still a growing and
unstable concept. It is usually considered as a set of advanced features
powered by promising technological solutions. In this paper, we describe smart
grid as a socio-technical transition and illustrate the evolutionary path on
which a smart grid can be realized. Through this conceptual lens, we reveal the
role of big data, and how it can fuel the organic growth of smart grid. We also
provide a rough estimate of how much data will be potentially generated from
different data sources, which helps clarify the big data challenges during the
evolutionary process.
| 1 | 0 | 0 | 0 | 0 | 0 |
Learning Deep ResNet Blocks Sequentially using Boosting Theory | Deep neural networks are known to be difficult to train due to the
instability of back-propagation. A deep \emph{residual network} (ResNet) with
identity loops remedies this by stabilizing gradient computations. We prove a
boosting theory for the ResNet architecture. We construct $T$ weak module
classifiers, each containing two of the $T$ layers, such that the combined strong
learner is a ResNet. Therefore, we introduce an alternative Deep ResNet
training algorithm, \emph{BoostResNet}, which is particularly suitable for
non-differentiable architectures. Our proposed algorithm merely requires a
sequential training of $T$ "shallow ResNets" which are inexpensive. We prove
that the training error decays exponentially with the depth $T$ if the
\emph{weak module classifiers} that we train perform slightly better than some
weak baseline. In other words, we propose a weak learning condition and prove a
boosting theory for ResNet under the weak learning condition. Our results apply
to general multi-class ResNets. A generalization error bound based on margin
theory is proved and suggests that ResNet is resistant to overfitting in
networks with $l_1$-norm-bounded weights.
| 1 | 0 | 0 | 0 | 0 | 0 |
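The record above trains $T$ weak modules sequentially so that their combination is a strong learner. As an illustrative aside, the stage-wise flavor of this idea can be conveyed with gradient boosting on residuals using weak linear stages; note this is plain residual boosting, not the BoostResNet algorithm, and all data are synthetic:

```python
import numpy as np

# Toy sequential "boosting": weak linear stages fit one after another to the
# current residual. This is gradient boosting on residuals, NOT BoostResNet;
# it only illustrates stage-wise training of T inexpensive weak modules.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)

pred = np.zeros_like(y)
for t in range(5):  # T = 5 weak stages, trained sequentially
    residual = y - pred
    w, *_ = np.linalg.lstsq(X, residual, rcond=None)  # fit the residual
    pred = pred + 0.5 * (X @ w)  # shrunk stage output added to the ensemble

err = np.mean((y - pred) ** 2)
```

Each stage removes half of the remaining linear signal, so the training error decays geometrically with the number of stages, echoing the exponential decay claimed in the abstract.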
Total energy of radial mappings | The main aim of this paper is to extend one of the main results of Iwaniec
and Onninen (Arch. Ration. Mech. Anal., 194: 927-986, 2009). We prove that the
so-called total energy functional, defined on the class of radial stretchings
between annuli, attains its minimum on a total energy diffeomorphism between
annuli. This involves a subtle analysis of certain special ODEs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Divergence and Sufficiency for Convex Optimization | Logarithmic score and information divergence appear in information theory,
statistics, statistical mechanics, and portfolio theory. We demonstrate that
all these topics involve some kind of optimization that leads directly to
regret functions and such regret functions are often given by a Bregman
divergence. If the regret function also fulfills a sufficiency condition it
must be proportional to information divergence. We will demonstrate that
sufficiency is equivalent to the apparently weaker notion of locality and it is
also equivalent to the apparently stronger notion of monotonicity. These
sufficiency conditions have quite different relevance in the different areas of
application, and often they are not fulfilled. Therefore sufficiency conditions
can be used to explain when results from one area can be transferred directly
to another and when one will experience differences.
| 1 | 1 | 1 | 0 | 0 | 0 |
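The record above states that regret functions are often Bregman divergences and, under sufficiency, proportional to information divergence. As an illustrative aside, one can check numerically that the Bregman divergence generated by negative entropy equals the information (KL) divergence on probability vectors; the vectors below are invented:

```python
import numpy as np

# Numeric check: the Bregman divergence of F(p) = sum p_i log p_i (negative
# entropy) between probability vectors equals the information (KL) divergence.
# The probability vectors are illustrative assumptions.

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return F(p) - F(q) - gradF(q) @ (p - q)

F = lambda p: np.sum(p * np.log(p))        # negative entropy
gradF = lambda p: np.log(p) + 1.0          # its gradient

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
kl = np.sum(p * np.log(p / q))             # information divergence D(p||q)
```

Since both vectors sum to one, the linear correction terms cancel and the two quantities agree exactly.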
Compact linear programs for 2SAT | For each integer $n$ we present an explicit formulation of a compact linear
program, with $O(n^3)$ variables and constraints, which determines the
satisfiability of any 2SAT formula with $n$ boolean variables by a single
linear optimization. This contrasts with the fact that the natural polytope for
this problem, formed from the convex hull of all satisfiable formulas and their
satisfying assignments, has superpolynomial extension complexity. Our
formulation is based on multicommodity flows. We also discuss connections of
these results to the stable matching problem.
| 1 | 0 | 1 | 0 | 0 | 0 |
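The record above concerns a polynomial-size LP deciding 2SAT. As an illustrative aside, the decision problem itself can be made concrete with a brute-force check; this exhaustive sketch is neither the paper's LP formulation nor the classical efficient implication-graph algorithm, and the clauses are invented:

```python
from itertools import product

# Brute-force 2SAT check for small n. NOT the paper's LP formulation; just an
# exhaustive sketch that makes the decision problem concrete.
# Literal +i / -i denotes variable x_i asserted / negated (1-based).

def sat2(clauses, n):
    def holds(lit, assign):
        value = assign[abs(lit) - 1]
        return value if lit > 0 else not value
    # Try every assignment; a formula is satisfiable iff some assignment
    # makes at least one literal true in every clause.
    return any(
        all(holds(a, assign) or holds(b, assign) for a, b in clauses)
        for assign in product([False, True], repeat=n)
    )

satisfiable = sat2([(1, 2), (-1, 2)], 2)                      # x2 = True works
unsatisfiable = sat2([(1, 2), (-1, 2), (1, -2), (-1, -2)], 2) # all four ruled out
```

The second formula forces a contradiction on both variables, so no assignment survives.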
Distributed Protocols at the Rescue for Trustworthy Online Voting | While online services emerge in all areas of life, the voting procedure in
many democracies remains paper-based as the security of current online voting
technology is highly disputed. We address the issue of trustworthy online
voting protocols and therefore recall their security concepts and trust
assumptions. Inspired by the Bitcoin protocol, the prospects of distributed
online voting protocols are analysed. No trusted authority is assumed to ensure
ballot secrecy. Further, the integrity of the voting is enforced by all voters
themselves; without a weakest link, the protocol becomes more robust. We
introduce a taxonomy of notions of distribution in online voting protocols that
we apply on selected online voting protocols. Accordingly, blockchain-based
protocols seem to be promising for online voting due to their similarity with
paper-based protocols.
| 1 | 0 | 0 | 0 | 0 | 0 |
Isomonodromy aspects of the tt* equations of Cecotti and Vafa III. Iwasawa factorization and asymptotics | This paper, the third in a series, completes our description of all (radial)
solutions on C* of the tt*-Toda equations, using a combination of methods from
p.d.e., isomonodromic deformations (Riemann-Hilbert method), and loop groups.
We place these global solutions into the broader context of solutions which are
smooth near 0. For such solutions, we compute explicitly the Stokes data and
connection matrix of the associated meromorphic system, in the resonant cases
as well as the non-resonant case. This allows us to give a complete picture of
the monodromy data of the global solutions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Decorative Plasmonic Surfaces | Low-profile patterned plasmonic surfaces are synergized with a broad class of
silicon microstructures to greatly enhance near-field nanoscale imaging,
sensing, and energy harvesting coupled with far-field free-space detection.
This concept has a clear impact on several key areas of interest for the MEMS
community, including but not limited to ultra-compact microsystems for
sensitive detection of a small number of target molecules, and surface devices
for optical data storage, micro-imaging and displaying. In this paper, we
review the current state-of-the-art in plasmonic theory as well as derive
design guidance for plasmonic integration with microsystems, fabrication
techniques, and selected applications in biosensing, including refractive-index
based label-free biosensing, plasmonic integrated lab-on-chip systems,
plasmonic near-field scanning optical microscopy and plasmonics on-chip systems
for cellular imaging. This paradigm enables low-profile conformal surfaces on
microdevices, rather than bulk material or coatings, which provide clear
advantages for physical, chemical and biological-related sensing, imaging, and
light harvesting, in addition to easier realization, enhanced flexibility, and
tunability.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hamiltonian approach to slip-stacking dynamics | Hamiltonian dynamics has been applied to study the slip-stacking dynamics.
The canonical-perturbation method is employed to obtain the second-harmonic
correction term in the slip-stacking Hamiltonian. The Hamiltonian approach
provides a clear optimal method for choosing the slip-stacking parameter and
improving stacking efficiency. The dynamics are applied specifically to the
Fermilab Booster-Recycler complex. The dynamics can also be applied to other
accelerator complexes.
| 0 | 1 | 0 | 0 | 0 | 0 |
Materials processing with intense pulsed ion beams and masked targets | Intense, pulsed ion beams locally heat materials and deliver dense electronic
excitations that can induce materials modifications and phase transitions.
Materials properties can potentially be stabilized by rapid quenching. Pulsed
ion beams with (sub-) ns pulse lengths have recently become available for
materials processing. Here, we optimize mask geometries for local modification
of materials by intense ion pulses. The goal is to rapidly excite targets
volumetrically to the point where a phase transition or local lattice
reconstruction is induced followed by rapid cooling that stabilizes desired
materials properties fast enough before the target is altered or damaged by,
e.g., hydrodynamic expansion. We performed HYDRA simulations that calculate peak
temperatures for a series of excitation conditions and cooling rates of silicon
targets with micro-structured masks and compare these to a simple analytical
model. The model gives scaling laws that can guide the design of targets over a
wide range of pulsed ion beam parameters.
| 0 | 1 | 0 | 0 | 0 | 0 |
Distributions of Historic Market Data -- Implied and Realized Volatility | We undertake a systematic comparison between implied volatility, as
represented by VIX (new methodology) and VXO (old methodology), and realized
volatility. We compare visually and statistically distributions of realized and
implied variance (volatility squared) and study the distribution of their
ratio. We find that the ratio is best fitted by heavy-tailed -- lognormal and
fat-tailed (power-law) -- distributions, depending on whether the preceding or
concurrent month of realized variance is used. We do not find a substantial
difference in accuracy between VIX and VXO. Additionally, we study the variance
of theoretical realized variance for Heston and multiplicative models of
stochastic volatility and compare those with realized variance obtained from
historic market data.
| 0 | 0 | 0 | 0 | 0 | 1 |
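As an illustrative sketch (not taken from the paper), the realized-to-implied variance ratio studied in the abstract above can be computed as follows; the returns, the 21-day window, and the implied-volatility quote are made-up inputs, not market data.

```python
import numpy as np

# Illustrative computation of the realized-to-implied variance ratio.
# All inputs below are synthetic assumptions, not market data.

def realized_variance(returns, periods_per_year=252):
    """Annualized realized variance from a window of log returns."""
    return periods_per_year * np.mean(returns**2)

rng = np.random.default_rng(2)
daily_vol = 0.01                               # roughly 16% annualized
returns = rng.standard_normal(21) * daily_vol  # one month of log returns
rv = realized_variance(returns)

implied_vol = 0.18                             # e.g. a VIX-style quote of 18, as a fraction
iv = implied_vol**2                            # implied variance
print(rv / iv)                                 # the ratio whose distribution is studied
```

The paper then fits heavy- and fat-tailed distributions to many such ratios; this snippet only shows how a single ratio is formed.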
Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net | We propose a novel method to directly learn a stochastic transition operator
whose repeated application provides generated samples. Traditional undirected
graphical models approach this problem indirectly by learning a Markov chain
model whose stationary distribution obeys detailed balance with respect to a
parameterized energy function. The energy function is then modified so the
model and data distributions match, with no guarantee on the number of steps
required for the Markov chain to converge. Moreover, the detailed balance
condition is highly restrictive: energy based models corresponding to neural
networks must have symmetric weights, unlike biological neural circuits. In
contrast, we develop a method for directly learning arbitrarily parameterized
transition operators capable of expressing non-equilibrium stationary
distributions that violate detailed balance, thereby enabling us to learn more
biologically plausible asymmetric neural networks and more general non-energy
based dynamical systems. The proposed training objective, which we derive via
principled variational methods, encourages the transition operator to "walk
back" in multi-step trajectories that start at data-points, as quickly as
possible back to the original data points. We present a series of experimental
results illustrating the soundness of the proposed approach, Variational
Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating
superior samples compared to earlier attempts to learn a transition operator.
We also show that although each rapid training trajectory is limited to a
finite but variable number of steps, our transition operator continues to
generate good samples well past the length of such trajectories, thereby
demonstrating the match of its non-equilibrium stationary distribution to the
data distribution. Source Code: this http URL
| 1 | 0 | 0 | 1 | 0 | 0 |
Toward a language-theoretic foundation for planning and filtering | We address problems underlying the algorithmic question of automating the
co-design of robot hardware in tandem with its apposite software. Specifically,
we consider the impact that degradations of a robot's sensor and actuation
suites may have on the ability of that robot to complete its tasks. We
introduce a new formal structure that generalizes and consolidates a variety of
well-known structures including many forms of plans, planning problems, and
filters, into a single data structure called a procrustean graph, and give
these graph structures semantics in terms of ideas based in formal language
theory. We describe a collection of operations on procrustean graphs (both
semantics-preserving and semantics-mutating), and show how a family of
questions about the destructiveness of a change to the robot hardware can be
answered by applying these operations. We also highlight the connections
between this new approach and existing threads of research, including
combinatorial filtering, Erdmann's strategy complexes, and hybrid automata.
| 1 | 0 | 0 | 0 | 0 | 0 |
Algebraic relations between solutions of Painlevé equations | We calculate model theoretic ranks of Painlevé equations in this article,
showing in particular, that any equation in any of the Painlevé families has
Morley rank one, extending results of Nagloo and Pillay (2011). We show that
the type of the generic solution of any equation in the second Painlevé
family is geometrically trivial, extending a result of Nagloo (2015).
We also establish the orthogonality of various pairs of equations in the
Painlevé families, showing at least generically, that all instances of
nonorthogonality between equations in the same Painlevé family come from
classically studied Bäcklund transformations. For instance, we show that if
at least one of $\alpha, \beta$ is transcendental, then $P_{II} (\alpha)$ is
nonorthogonal to $P_{II} ( \beta )$ if and only if $\alpha + \beta \in \mathbb{Z}$
or $\alpha - \beta \in \mathbb{Z}$. Our results have concrete interpretations
in terms of characterizing the algebraic relations between solutions of
Painlevé equations. We give similar results for orthogonality relations
between equations in different Painlevé families, and formulate some general
questions which extend conjectures of Nagloo and Pillay (2011) on transcendence
and algebraic independence of solutions to Painlevé equations. We also apply
our analysis of ranks to establish some orthogonality results for pairs of
Painlevé equations from different families. For instance, we answer several
open questions of Nagloo (2016), and in the process answer a question of Boalch
(2012).
| 0 | 0 | 1 | 0 | 0 | 0 |
Pore lifetimes in cell electroporation: Complex dark pores? | We review some of the basic concepts and the possible pore structures
associated with electroporation (EP) for times after electrical pulsing. We
purposefully give only a short description of pore creation and subsequent
evolution of pore populations, as these are adequately discussed in both
reviews and original research reports. In contrast, post-pulse pore concepts
have changed dramatically. For perspective we note that pores are not directly
observed. Instead, understanding of pores is based on inference from experiments
and, increasingly, molecular dynamics (MD) simulations. In the past decade
concepts for post-pulse pores have changed significantly: The idea of pure
lipidic transient pores (TPs) that exist for milliseconds or longer post-pulse
has become inconsistent with MD results, which support TP lifetimes of only
$\sim$100 ns. A typical large TP number during cell EP pulsing is of order
$10^6$. In twenty MD-based TP lifetimes (2 $\mu$s total), the TP number plummets to
$\sim$0.001. In short, TPs vanish 2 $\mu$s after a pulse ends, and cannot account
for post-pulse behavior such as large and relatively non-specific ionic and
molecular transport. Instead, an early conjecture of complex pores (CPs), with
both lipidic and other molecules, should be taken seriously. Indeed, in the past
decade several experiments have provided partial support for CPs.
Presently, CPs are "dark", in the sense that while some CP functions are known,
little is known about their structure(s). There may be a wide range of
lifetimes and permeabilities, not yet revealed by experiments. Like cosmology's
dark matter, these unseen pores present us with an outstanding problem.
| 0 | 1 | 0 | 0 | 0 | 0 |
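The decay figure quoted in the abstract above can be checked with a back-of-envelope calculation, assuming simple exponential decay with the $\sim$100 ns lifetime (the exponential form is an assumption made explicit here; the abstract only quotes the endpoints).

```python
import math

# Back-of-envelope check of the transient-pore (TP) decay figure:
# starting from ~10^6 pores and decaying exponentially, twenty
# lifetimes (~2 microseconds) leave only ~10^-3 pores.

n0 = 1e6          # typical TP count during pulsing
lifetimes = 20    # twenty lifetimes, about 2 microseconds
n = n0 * math.exp(-lifetimes)
print(n)  # ~2e-3, consistent with the quoted ~0.001
```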
Data-driven causal path discovery without prior knowledge - a benchmark study | Causal discovery broadens the inference possibilities, as correlation does
not inform about the relationship direction. The common approaches were
proposed for cases in which prior knowledge is desired, in which the impact of a
treatment/intervention variable is to be discovered, or in which time-related
dependencies are analyzed. In some practical applications, more universal techniques are
needed and have already been presented. Therefore, the aim of the study was to
assess the accuracies in determining causal paths in a dataset without
considering the ground truth and the contextual information. This benchmark was
performed on the database with cause-effect pairs, using a framework consisting
of generalized correlations (GC), kernel regression gradients (GR) and absolute
residuals criteria (AR), along with causal additive modeling (CAM). The best
overall accuracy, 80%, was achieved for the (majority voting) combination of
GC, AR, and CAM; however, the most similar sensitivity and specificity values
were obtained for AR. Bootstrap simulation established the probability of
correct causal path determination (which pairs should remain indeterminate).
The mean accuracy was then improved to 83% for the selected subset of pairs.
The described approach can be used for preliminary dependence assessment, as an
initial step for commonly used causality assessment frameworks or for
comparison with prior assumptions.
| 0 | 0 | 0 | 1 | 0 | 0 |
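The majority-voting combination mentioned in the abstract above can be sketched generically; the method names (GC, AR, CAM) follow the abstract, but the votes below are made-up examples, not results from the benchmark.

```python
# Illustrative majority vote over per-method causal-direction calls.
# Each method outputs +1 ("X causes Y") or -1 ("Y causes X") per pair;
# the combined call is the sign of the vote sum.

def majority_vote(votes):
    """Combine direction calls (+1 / -1); return 0 on a tie (indeterminate)."""
    s = sum(votes)
    return (s > 0) - (s < 0)

pairs = {
    "pair01": {"GC": +1, "AR": +1, "CAM": -1},
    "pair02": {"GC": -1, "AR": -1, "CAM": -1},
    "pair03": {"GC": +1, "AR": -1, "CAM": +1},
}

decisions = {name: majority_vote(v.values()) for name, v in pairs.items()}
print(decisions)  # {'pair01': 1, 'pair02': -1, 'pair03': 1}
```

A tie (return value 0) corresponds to the indeterminate pairs the abstract sets aside via bootstrap simulation.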
Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization | In this paper we combine concepts from Riemannian Optimization and the theory
of Sobolev gradients to derive a new conjugate gradient method for direct
minimization of the Gross-Pitaevskii energy functional with rotation. The
conservation of the number of particles constrains the minimizers to lie on a
manifold corresponding to the unit $L^2$ norm. The idea developed here is to
transform the original constrained optimization problem to an unconstrained
problem on this (spherical) Riemannian manifold, so that fast minimization
algorithms can be applied as alternatives to more standard constrained
formulations. First, we obtain Sobolev gradients using an equivalent definition
of an $H^1$ inner product which takes into account rotation. Then, the
Riemannian gradient (RG) steepest descent method is derived based on projected
gradients and retraction of an intermediate solution back to the constraint
manifold. Finally, we use the concept of the Riemannian vector transport to
propose a Riemannian conjugate gradient (RCG) method for this problem. It is
derived at the continuous level based on the "optimize-then-discretize"
paradigm instead of the usual "discretize-then-optimize" approach, as this
ensures robustness of the method when adaptive mesh refinement is performed in
computations. We evaluate various design choices inherent in the formulation of
the method and conclude with recommendations concerning selection of the best
options. Numerical tests demonstrate that the proposed RCG method outperforms
the simple gradient descent (RG) method in terms of rate of convergence. While
on simple problems a Newton-type method implemented in the {\tt Ipopt} library
exhibits a faster convergence than the (RCG) approach, the two methods perform
similarly on more complex problems requiring the use of mesh adaptation. At the
same time the (RCG) approach has far fewer tunable parameters.
| 0 | 1 | 1 | 0 | 0 | 0 |
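The projected-gradient and retraction steps described in the abstract above can be sketched on a toy problem: minimizing a quadratic form $x^T A x$ on the unit sphere, whose constrained minimizer is the eigenvector of $A$ with the smallest eigenvalue. The quadratic energy, step size, and iteration count are stand-in assumptions, not the paper's Gross-Pitaevskii functional or its Sobolev-gradient machinery.

```python
import numpy as np

# Toy Riemannian steepest descent on the unit sphere: Euclidean
# gradient -> tangent-space projection -> descent step -> retraction
# (renormalization) back onto the sphere.

def riemannian_gd(A, x0, tau=0.01, iters=5000):
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = 2.0 * A @ x               # Euclidean gradient of x^T A x
        rg = g - (g @ x) * x          # project onto tangent space at x
        x = x - tau * rg              # descent step
        x = x / np.linalg.norm(x)     # retraction to the constraint manifold
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T                           # symmetric positive semidefinite
x = riemannian_gd(A, rng.standard_normal(5))
print(x @ A @ x)                      # Rayleigh quotient, near the smallest eigenvalue
```

The conjugate-gradient variant in the paper additionally transports the previous search direction between tangent spaces; this sketch shows only the simpler steepest-descent (RG) scheme.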
On the symplectic size of convex polytopes | In this paper we introduce a combinatorial formula for the
Ekeland-Hofer-Zehnder capacity of a convex polytope in $\mathbb{R}^{2n}$. One
application of this formula is a certain subadditivity property of this
capacity.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Fully Convolutional Neural Network Approach to End-to-End Speech Enhancement | This paper will describe a novel approach to the cocktail party problem that
relies on a fully convolutional neural network (FCN) architecture. The FCN
takes noisy audio data as input and performs nonlinear, filtering operations to
produce clean audio data of the target speech at the output. Our method learns
a model for one specific speaker, and is then able to extract that speaker's
voice from babble background noise. Results from experimentation indicate the
ability to generalize to new speakers and robustness to new noise environments
of varying signal-to-noise ratios. A potential application of this method would
be for use in hearing aids. A pre-trained model could be quickly fine-tuned for
an individual's family members and close friends, and deployed onto a hearing
aid to assist listeners in noisy environments.
| 1 | 0 | 0 | 1 | 0 | 0 |
Tunnelling Spectroscopy of Andreev States in Graphene | A normal conductor placed in good contact with a superconductor can inherit
its remarkable electronic properties. This proximity effect microscopically
originates from the formation in the conductor of entangled electron-hole
states, called Andreev states. Spectroscopic studies of Andreev states have
been performed in just a handful of systems. The unique geometry, electronic
structure and high mobility of graphene make it a novel platform for studying
Andreev physics in two dimensions. Here we use a full van der Waals
heterostructure to perform tunnelling spectroscopy measurements of the
proximity effect in superconductor-graphene-superconductor junctions. The
measured energy spectra, which depend on the phase difference between the
superconductors, reveal the presence of a continuum of Andreev bound states.
Moreover, our device heterostructure geometry and materials enable us to
measure the Andreev spectrum as a function of the graphene Fermi energy,
showing a transition between different mesoscopic regimes. Furthermore, by
experimentally introducing a novel concept, the supercurrent spectral density,
we determine the supercurrent-phase relation in a tunnelling experiment, thus
establishing the connection between Andreev physics at finite energy and the
Josephson effect. This work opens up new avenues for probing exotic topological
phases of matter in hybrid superconducting Dirac materials.
| 0 | 1 | 0 | 0 | 0 | 0 |
Volatility estimation for stochastic PDEs using high-frequency observations | We study the parameter estimation for parabolic, linear, second order,
stochastic partial differential equations (SPDEs) observing a mild solution on
a discrete grid in time and space. A high-frequency regime is considered where
the mesh of the grid in the time variable goes to zero. Focusing on volatility
estimation, we provide an explicit and easy to implement method of moments
estimator based on squared increments. The estimator is consistent and admits a
central limit theorem. This is established moreover for the estimation of the
integrated volatility in a semi-parametric framework and for the joint
estimation of the volatility and an unknown parameter in the differential
operator. Starting from a representation of the solution as an infinite factor
model and exploiting mixing-type properties of time series, the theory
considerably differs from the statistics for semi-martingales literature. The
performance of the method is illustrated in a simulation study.
| 0 | 0 | 1 | 1 | 0 | 0 |
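The squared-increment idea in the abstract above can be illustrated in a deliberately simplified setting: a scalar model $X_t = \sigma W_t$ observed on a high-frequency time grid, rather than the paper's SPDE mild solution. The volatility value and grid size below are assumptions.

```python
import numpy as np

# Simplified method-of-moments volatility estimator from squared
# increments: for X_t = sigma * W_t, each squared increment has mean
# sigma^2 * dt, so their normalized sum estimates sigma^2.

def estimate_sigma2(x, dt):
    """Estimate sigma^2 via the normalized sum of squared increments."""
    increments = np.diff(x)
    return np.sum(increments**2) / (len(increments) * dt)

rng = np.random.default_rng(1)
sigma, n, dt = 0.8, 100_000, 1e-4
w = np.cumsum(rng.standard_normal(n) * np.sqrt(dt))  # Brownian path
x = sigma * w
print(estimate_sigma2(x, dt))  # close to sigma**2 = 0.64
```

In the paper the analogous estimator is built from squared temporal increments of the SPDE solution at fixed spatial points, where the infinite-factor structure changes the normalization and the limit theory.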