text (string, lengths 138-2.38k) | labels (sequence, length 6) | Predictions (sequence, lengths 1-3) |
---|---|---|
Title: On the Hilbert coefficients, depth of associated graded rings and reduction numbers,
Abstract: Let $(R,\mathfrak{m})$ be a $d$-dimensional Cohen-Macaulay local ring, $I$ an
$\mathfrak{m}$-primary ideal of $R$ and $J=(x_1,...,x_d)$ a minimal reduction
of $I$. We show that if $J_{d-1}=(x_1,...,x_{d-1})$ and
$\sum\limits_{n=1}^\infty\lambda((I^{n+1}\cap J_{d-1})/(JI^{n} \cap J_{d-1}))=i$ where $i=0,1$, then depth $G(I)\geq d-i-1$. Moreover, we prove
that if $e_2(I) = \sum_{n=2}^\infty (n-1) \lambda (I^n/JI^{n-1})-2;$ or if $I$
is integrally closed and $e_2(I) = \sum_{n=2}^\infty
(n-1)\lambda({I^{n}}/JI^{n-1})-i$ where $i=3,4$, then $e_1(I) =
\sum_{n=1}^\infty \lambda(I^n / JI^{n-1})-1.$ In addition, we show that $r(I)$ is independent of the choice of minimal reduction. Furthermore, we study the independence of $r(I)$ under some other conditions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Tracking performance in high multiplicities environment at ALICE,
Abstract: In LHC Run 3, ALICE will increase the data-taking rate significantly to 50 kHz continuous readout of minimum-bias Pb-Pb events. This challenges the online and offline computing infrastructure, which must process 50 times as many events per second as in Run 2 and increase the data compression ratio from 5 to 20. Such high data compression cannot be achieved with lossless ZIP-like algorithms; instead it must use results from online reconstruction, which in turn requires online calibration. These important online processing steps are the most computing-intensive ones and will use GPUs as hardware accelerators. The
new online features are already under test during Run 2 in the High Level
Trigger (HLT) online processing farm. The TPC (Time Projection Chamber)
tracking algorithm for Run 3 is derived from the current HLT online tracking
and is based on the Cellular Automaton and Kalman Filter. HLT has deployed
online calibration for the TPC drift time, which needs to be extended to space
charge distortions calibration. This requires online reconstruction for
additional detectors like TRD (Transition Radiation Detector) and TOF (Time Of
Flight). We present prototypes of these developments, in particular a data compression algorithm that achieves a compression factor of 9 on Run 2 TPC data, and the efficiency of online TRD tracking. We give an outlook on the challenges of TPC tracking with continuous readout. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Symplectic stability on manifolds with cylindrical ends,
Abstract: A famous result of Jürgen Moser states that a symplectic form on a compact
manifold cannot be deformed within its cohomology class to an inequivalent
symplectic form. It is well known that this does not hold in general for
noncompact symplectic manifolds. The notion of Eliashberg-Gromov convex ends
provides a natural restricted setting for the study of analogs of Moser's
symplectic stability result in the noncompact case, and this has been
significantly developed in work of Cieliebak-Eliashberg. Retaining the end
structure on the underlying smooth manifold, but dropping the convexity and
completeness assumptions on the symplectic forms at infinity, we show that
symplectic stability holds under a natural growth condition on the path of
symplectic forms. The result can be straightforwardly applied as we show
through explicit examples. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Making the Dzyaloshinskii-Moriya interaction visible,
Abstract: Brillouin light spectroscopy is a powerful and robust technique for measuring
the interfacial Dzyaloshinskii-Moriya interaction in thin films with broken
inversion symmetry. Here we show that the magnon visibility, i.e. the intensity
of the inelastically scattered light, strongly depends on the thickness of the
dielectric seed material - SiO$_2$. By using both analytical thin-film optics and numerical calculations, we reproduce the experimental data. We therefore provide a guideline for maximizing the signal by adapting the substrate properties to the geometry of the measurement. Such a boost of the signal eases magnon visualization in ultrathin magnetic films, speeds up the measurement and increases the reliability of the data. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Ferroionic states in ferroelectric thin films,
Abstract: The electric coupling between surface ions and bulk ferroelectricity gives
rise to a continuum of mixed states in ferroelectric thin films, exquisitely
sensitive to temperature and external factors, such as applied voltage and
oxygen pressure. Here we develop a comprehensive analytical description of
these coupled ferroelectric and ionic ("ferroionic") states by combining the
Ginzburg-Landau-Devonshire description of the ferroelectric properties of the
film with the Langmuir adsorption model for the electrochemical reaction at the
film surface. We explore the thermodynamic and kinetic characteristics of the
ferroionic states as a function of temperature, film thickness, and external
electric potential. These studies provide new insight into the mesoscopic properties of ferroelectric thin films whose surfaces are exposed to a chemical environment acting as a supplier of screening charges. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Quantum Chebyshev's Inequality and Applications,
Abstract: In this paper we provide new quantum algorithms with polynomial speed-up for
a range of problems for which no such results were known, or we improve
previous algorithms. First, we consider the approximation of the frequency
moments $F_k$ of order $k \geq 3$ in the multi-pass streaming model with
updates (turnstile model). We design a $P$-pass quantum streaming algorithm
with memory $M$ satisfying a tradeoff of $P^2 M = \tilde{O}(n^{1-2/k})$,
whereas the best classical algorithm requires $P M = \Theta(n^{1-2/k})$. Then,
we study the problem of estimating the number $m$ of edges and the number $t$
of triangles given query access to an $n$-vertex graph. We describe optimal
quantum algorithms that perform $\tilde{O}(\sqrt{n}/m^{1/4})$ and
$\tilde{O}(\sqrt{n}/t^{1/6} + m^{3/4}/\sqrt{t})$ queries respectively. This is
a quadratic speed-up compared to the classical complexity of these problems.
For this purpose we develop a new quantum paradigm that we call Quantum
Chebyshev's inequality. Namely we demonstrate that, in a certain model of
quantum sampling, one can approximate with relative error the mean of any
random variable with a number of quantum samples that is linear in the ratio of
the square root of the variance to the mean. Classically the dependency is
quadratic. Our algorithm subsumes a previous result of Montanaro [Mon15]. This
new paradigm is based on a refinement of the Amplitude Estimation algorithm of
Brassard et al. [BHMT02] and of previous quantum algorithms for the mean
estimation problem. We show that this speed-up is optimal, and we identify
another common model of quantum sampling where it cannot be obtained. For our
applications, we also adapt the variable-time amplitude amplification technique
of Ambainis [Amb10] into a variable-time amplitude estimation algorithm. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Physics",
"Mathematics"
] |
Title: Data-driven Advice for Applying Machine Learning to Bioinformatics Problems,
Abstract: As the bioinformatics field grows, it must keep pace not only with new data
but also with new algorithms. Here we contribute a thorough analysis of 13
state-of-the-art, commonly used machine learning algorithms on a set of 165
publicly available classification problems in order to provide data-driven
algorithm recommendations to current researchers. We present a number of
statistical and visual comparisons of algorithm performance and quantify the
effect of model selection and algorithm tuning for each algorithm and dataset.
The analysis culminates in the recommendation of five algorithms with
hyperparameters that maximize classifier performance across the tested
problems, as well as general guidelines for applying machine learning to
supervised classification problems. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology",
"Statistics"
] |
Title: Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace,
Abstract: Human-in-the-loop manipulation is useful when autonomous grasping is not able to deal sufficiently well with corner cases or cannot operate fast enough.
Using the teleoperator's hand as an input device can provide an intuitive
control method but requires mapping between pose spaces which may not be
similar. We propose a low-dimensional and continuous teleoperation subspace
which can be used as an intermediary for mapping between different hand pose
spaces. We present an algorithm to project between pose space and teleoperation
subspace. We use a non-anthropomorphic robot to experimentally prove that it is
possible for teleoperation subspaces to effectively and intuitively enable
teleoperation. In experiments, novice users completed pick and place tasks
significantly faster using teleoperation subspace mapping than they did using
state of the art teleoperation methods. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A Hierarchical Bayes Approach to Adjust for Selection Bias in Before-After Analyses of Vision Zero Policies,
Abstract: American cities devote significant resources to the implementation of traffic
safety countermeasures that prevent pedestrian fatalities. However, the
before-after comparisons typically used to evaluate the success of these
countermeasures often suffer from selection bias. This paper motivates the
tendency for selection bias to overestimate the benefits of traffic safety
policy, using New York City's Vision Zero strategy as an example. The NASS
General Estimates System, Fatality Analysis Reporting System and other
databases are combined into a Bayesian hierarchical model to calculate a more
realistic before-after comparison. The results confirm the before-after
analysis of New York City's Vision Zero policy did in fact overestimate the
effect of the policy, and a more realistic estimate is roughly two-thirds the
size. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
Title: Shift-Coupling of Random Rooted Graphs and Networks,
Abstract: In this paper, we present a result similar to the shift-coupling result of
Thorisson (1996) in the context of random graphs and networks. The result is
that a given random rooted network can be obtained by changing the root of
another given one if and only if the distributions of the two agree on the
invariant sigma-field. Several applications of the result are presented for the
case of unimodular networks. In particular, it is shown that the distribution
of a unimodular network is uniquely determined by its restriction to the
invariant sigma-field. Also, the theorem is applied to the existence of an invariant transport kernel that balances between two given (discrete) measures on the vertices. An application is the existence of a so-called extra head
scheme for the Bernoulli process on an infinite unimodular graph. Moreover, a
construction is presented for balancing transport kernels that is a
generalization of the Gale-Shapley stable matching algorithm in bipartite
graphs. Another application is a general method that covers the situations
where some vertices and edges are added to a unimodular network and then, to
make it unimodular, the probability measure is biased and then a new root is
selected. It is proved that this method provides all possible
unimodularizations in these situations. Finally, analogous existing results for
stationary point processes and unimodular networks are discussed in detail. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Statistics",
"Computer Science"
] |
Title: Fast Automated Analysis of Strong Gravitational Lenses with Convolutional Neural Networks,
Abstract: Quantifying image distortions caused by strong gravitational lensing and
estimating the corresponding matter distribution in lensing galaxies has been
primarily performed by maximum likelihood modeling of observations. This is
typically a time and resource-consuming procedure, requiring sophisticated
lensing codes, several data preparation steps, and finding the maximum
likelihood model parameters in a computationally expensive process with
downhill optimizers. Accurate analysis of a single lens can take up to a few
weeks and requires the attention of dedicated experts. Tens of thousands of new
lenses are expected to be discovered with the upcoming generation of ground and
space surveys, the analysis of which can be a challenging task. Here we report
the use of deep convolutional neural networks to accurately estimate lensing
parameters in an extremely fast and automated way, circumventing the
difficulties faced by maximum likelihood methods. We also show that lens
removal can be made fast and automated using Independent Component Analysis of
multi-filter imaging data. Our networks can recover the parameters of the
Singular Isothermal Ellipsoid density profile, commonly used to model strong
lensing systems, with an accuracy comparable to the uncertainties of
sophisticated models, but about ten million times faster: 100 systems in
approximately 1s on a single graphics processing unit. These networks can
provide a way for non-experts to obtain lensing parameter estimates for large
samples of data. Our results suggest that neural networks can be a powerful and
fast alternative to maximum likelihood procedures commonly used in
astrophysics, radically transforming the traditional methods of data reduction
and analysis. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: End-to-End Task-Completion Neural Dialogue Systems,
Abstract: One of the major drawbacks of modularized task-completion dialogue systems is
that each module is trained individually, which presents several challenges.
For example, downstream modules are affected by earlier modules, and the
performance of the entire system is not robust to the accumulated errors. This
paper presents a novel end-to-end learning framework for task-completion
dialogue systems to tackle such issues. Our neural dialogue system can directly
interact with a structured database to assist users in accessing information
and accomplishing certain tasks. The reinforcement learning based dialogue
manager offers robust capabilities to handle noises caused by other components
of the dialogue system. Our experiments in a movie-ticket booking domain show
that our end-to-end system not only outperforms modularized dialogue system
baselines for both objective and subjective evaluation, but also is robust to
noises as demonstrated by several systematic experiments with different error
granularity and rates specific to the language understanding module. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Improving power of genetic association studies by extreme phenotype sampling: a review and some new results,
Abstract: Extreme phenotype sampling is a selective genotyping design for genetic
association studies where only individuals with extreme values of a continuous
trait are genotyped for a set of genetic variants. Under financial or other
limitations, this design is assumed to improve the power to detect associations
between genetic variants and the trait, compared to randomly selecting the same
number of individuals for genotyping. Here we present extensions of likelihood
models that can be used for inference when the data are sampled according to
the extreme phenotype sampling design. Computational methods for parameter
estimation and hypothesis testing are provided. We consider methods for common
variant genetic effects and gene-environment interaction effects in linear
regression models with a normally distributed trait. We use simulated and real
data to show that extreme phenotype sampling can be powerful compared to random
sampling, but that this does not hold for all extreme sampling methods and
situations. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Biology"
] |
Title: Invitation to Alexandrov geometry: CAT[0] spaces,
Abstract: The idea is to demonstrate the beauty and power of Alexandrov geometry by
reaching interesting applications with a minimum of preparation.
The topics include
1. Estimates on the number of collisions in billiards.
2. Construction of exotic aspherical manifolds.
3. The geometry of two-convex sets in Euclidean space. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: The extension of some D(4)-pairs,
Abstract: In this paper we illustrate the use of the results from [1] by proving that a $D(4)$-triple $\{a, b, c\}$ with $a < b < a + 57\sqrt{a}$ has a unique extension to a quadruple with a larger element. This furthermore implies that a $D(4)$-pair $\{a, b\}$ cannot be extended to a quintuple if $a < b < a + 57\sqrt{a}$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Grid-based Approaches for Distributed Data Mining Applications,
Abstract: The data mining field is an important source of large-scale applications and
datasets which are getting more and more common. In this paper, we present
grid-based approaches for two basic data mining applications, and a performance
evaluation on an experimental grid environment that provides interesting
monitoring capabilities and configuration tools. We propose a new distributed clustering approach and a distributed frequent itemset generation approach, both well adapted to grid environments. Performance evaluation is done using the Condor system
and its workflow manager DAGMan. We also compare this performance analysis to a
simple analytical model to evaluate the overheads related to the workflow
engine and the underlying grid system. This will specifically show that
realistic performance expectations are currently difficult to achieve on the
grid. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: LoopInvGen: A Loop Invariant Generator based on Precondition Inference,
Abstract: We describe the LoopInvGen tool for generating loop invariants that can
provably guarantee correctness of a program with respect to a given
specification. LoopInvGen is an efficient implementation of the inference
technique originally proposed in our earlier work on PIE
(this https URL).
In contrast to existing techniques, LoopInvGen is not restricted to a fixed
set of features -- atomic predicates that are composed together to build
complex loop invariants. Instead, we start with no initial features, and use
program synthesis techniques to grow the set on demand. This not only enables a
less onerous and more expressive approach, but also appears to be significantly
faster than the existing tools over the SyGuS-COMP 2017 benchmarks from the INV
track. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Dimensional crossover of effective orbital dynamics in polar distorted 3He-A: Transitions to anti-spacetime,
Abstract: Topologically protected superfluid phases of $^3$He allow one to simulate
many important aspects of relativistic quantum field theories and quantum
gravity in condensed matter. Here we discuss a topological Lifshitz transition
of the effective quantum vacuum in which the determinant of the tetrad field
changes sign through a crossing to a vacuum state with a degenerate fermionic
metric. Such a transition is realized in polar distorted superfluid $^3$He-A in
terms of the effective tetrad fields emerging in the vicinity of the superfluid
gap nodes: the tetrads of the Weyl points in the chiral A-phase of $^3$He and
the degenerate tetrad in the vicinity of a Dirac nodal line in the polar phase
of $^3$He. The continuous phase transition from the $A$-phase to the polar
phase, i.e. in the transition from the Weyl nodes to the Dirac nodal line and
back, allows one to follow the behavior of the fermionic and bosonic effective
actions when the sign of the tetrad determinant changes, and the effective
chiral space-time transforms to anti-chiral "anti-spacetime". This condensed
matter realization demonstrates that while the original fermionic action is
analytic across the transition, the effective actions for the orbital degrees of freedom (pseudo-EM fields) and gravity have non-analytic behavior. In
particular, the action for the pseudo-EM field in the vacuum with Weyl fermions
(A-phase) contains the modulus of the tetrad determinant. In the vacuum with
the degenerate metric (polar phase) the nodal line is effectively a family of
$2+1$d Dirac fermion patches, which leads to a non-analytic $(B^2-E^2)^{3/4}$
QED action in the vicinity of the Dirac line. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Scalable Spectrum Allocation and User Association in Networks with Many Small Cells,
Abstract: A scalable framework is developed to allocate radio resources across a large
number of densely deployed small cells with given traffic statistics on a slow
timescale. Joint user association and spectrum allocation is first formulated
as a convex optimization problem by dividing the spectrum among all possible
transmission patterns of active access points (APs). To improve scalability
with the number of APs, the problem is reformulated using local patterns of
interfering APs. To maintain global consistency among local patterns,
inter-cluster interaction is characterized as hyper-edges in a hyper-graph with
nodes corresponding to neighborhoods of APs. A scalable solution is obtained by
iteratively solving a convex optimization problem for bandwidth allocation with
reduced complexity and constructing a global spectrum allocation using
hyper-graph coloring. Numerical results demonstrate the proposed solution for a
network with 100 APs and several hundred user equipments. For a given quality
of service (QoS), the proposed scheme can increase the network capacity several
fold compared to assigning each user to the strongest AP with full-spectrum
reuse. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: The collisional frequency shift of a trapped-ion optical clock,
Abstract: Collisions with background gas can perturb the transition frequency of
trapped ions in an optical atomic clock. We develop a non-perturbative
framework based on a quantum channel description of the scattering process, and
use it to derive a master equation which leads to a simple analytic expression
for the collisional frequency shift. As a demonstration of our method, we
calculate the frequency shift of the Sr$^+$ optical atomic clock transition due
to elastic collisions with helium. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Continuity properties for Born-Jordan operators with symbols in Hörmander classes and modulation spaces,
Abstract: We show that the Weyl symbol of a Born-Jordan operator is in the same class
as the Born-Jordan symbol, when Hörmander symbols and certain types of
modulation spaces are used as symbol classes. We use these properties to carry
over continuity and Schatten-von Neumann properties to the Born-Jordan
calculus. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Correcting Two Deletions and Insertions in Racetrack Memory,
Abstract: Racetrack memory is a non-volatile memory engineered to provide both high density and low latency, but it is subject to synchronization or shift errors.
This paper describes a fast coding solution, in which delimiter bits assist in
identifying the type of shift error, and easily implementable graph-based codes
are used to correct the error, once identified. A code that is able to detect
and correct double shift errors is described in detail. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Limits of Yang-Mills α-connections,
Abstract: In the spirit of recent work of Lamm, Malchiodi and Micallef in the setting
of harmonic maps, we identify Yang-Mills connections obtained by approximations
with respect to the Yang-Mills {\alpha}-energy. More specifically, we show that
for the SU(2) Hopf fibration over the four sphere, for sufficiently small
{\alpha} values the SO(4) invariant ADHM instanton is the unique
{\alpha}-critical point which has Yang-Mills {\alpha}-energy lower than a
specific threshold. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Online Robust Principal Component Analysis with Change Point Detection,
Abstract: Robust PCA methods are typically batch algorithms that require loading all observations into memory before processing, which makes them inefficient for processing big data. In this paper, we develop an efficient online robust principal component method, namely online moving window robust principal component analysis (OMWRPCA). Unlike existing algorithms, OMWRPCA can successfully track not only slowly changing subspaces but also abruptly changed ones. By embedding hypothesis testing into the algorithm, OMWRPCA can
detect change points of the underlying subspaces. Extensive simulation studies
demonstrate the superior performance of OMWRPCA compared with other state-of-the-art approaches. We also apply the algorithm to real-time background
subtraction of surveillance video. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: How production networks amplify economic growth,
Abstract: Technological improvement is the most important cause of long-term economic
growth, but the factors that drive it are still not fully understood. In
standard growth models technology is treated in the aggregate, and a main goal
has been to understand how growth depends on factors such as knowledge
production. But an economy can also be viewed as a network, in which producers
purchase goods, convert them to new goods, and sell them to households or other
producers. Here we develop a simple theory that shows how the network
properties of an economy can amplify the effects of technological improvements
as they propagate along chains of production. A key property of an industry is
its output multiplier, which can be understood as the average number of
production steps required to make a good. The model predicts that the output
multiplier of an industry predicts future changes in prices, and that the
average output multiplier of a country predicts future economic growth. We test
these predictions using data from the World Input Output Database and find
results in good agreement with the model. The results show how purely
structural properties of an economy, that have nothing to do with innovation or
human creativity, can exert an important influence on long-term growth. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance",
"Economics"
] |
Title: Exact density functional obtained via the Levy constrained search,
Abstract: A stochastic minimization method for a real-space wavefunction, $\Psi({\bf
r}_{1},{\bf r}_{2},\ldots,{\bf r}_{n})$, constrained to a chosen density,
$\rho({\bf r})$, is developed. It enables the explicit calculation of the Levy
constrained search
$F[\rho]=\min_{\Psi\rightarrow\rho}\langle\Psi|\hat{T}+\hat{V}_{ee}|\Psi\rangle$
(Proc. Natl. Acad. Sci. 76 6062 (1979)), that gives the exact functional of
density functional theory. This general method is illustrated in the evaluation
of $F[\rho]$ for two-electron densities in one dimension with a soft-Coulomb
interaction. Additionally, procedures are given to determine the first and
second functional derivatives, $\frac{\delta F}{\delta\rho({\bf r})}$ and
$\frac{\delta^{2}F}{\delta\rho({\bf r})\delta\rho({\bf r}')}$. For a chosen
external potential, $v({\bf r})$, the functional and its derivatives are used
in minimizations only over densities to give the exact energy, $E_{v}$ without
needing to solve the Schrödinger equation. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Compile-Time Symbolic Differentiation Using C++ Expression Templates,
Abstract: Template metaprogramming is a popular technique for implementing compile time
mechanisms for numerical computing. We demonstrate how expression templates can
be used for compile time symbolic differentiation of algebraic expressions in
C++ computer programs. Given a positive integer $N$ and an algebraic function
of multiple variables, the compiler generates executable code for the $N$th
partial derivatives of the function. Compile-time simplification of the
derivative expressions is achieved using recursive templates. A detailed
analysis indicates that current C++ compiler technology is already sufficient
for practical use of our results, and highlights a number of issues where
further improvements may be desirable. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Autocommuting probability of a finite group,
Abstract: Let $G$ be a finite group and $\Aut(G)$ the automorphism group of $G$. The
autocommuting probability of $G$, denoted by $\Pr(G, \Aut(G))$, is the
probability that a randomly chosen automorphism of $G$ fixes a randomly chosen
element of $G$. In this paper, we study $\Pr(G, \Aut(G))$ through a
generalization. We obtain a computing formula, several bounds and
characterizations of $G$ through $\Pr(G, \Aut(G))$. We conclude the paper by
showing that the generalized autocommuting probability of $G$ remains unchanged
under autoisoclinism. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: SilhoNet: An RGB Method for 3D Object Pose Estimation and Grasp Planning,
Abstract: Autonomous robot manipulation often involves both estimating the pose of the
object to be manipulated and selecting a viable grasp point. Methods using
RGB-D data have shown great success in solving these problems. However, there
are situations where cost constraints or the working environment may limit the
use of RGB-D sensors. When limited to monocular camera data only, both the
problem of object pose estimation and of grasp point selection are very
challenging. In the past, research has focused on solving these problems
separately. In this work, we introduce a novel method called SilhoNet that
bridges the gap between these two tasks. We use a Convolutional Neural Network
(CNN) pipeline that takes in ROI proposals to simultaneously predict an
intermediate silhouette representation for objects with an associated occlusion
mask. The 3D pose is then regressed from the predicted silhouettes. Grasp
points from a precomputed database are filtered by back-projecting them onto
the occlusion mask to find which points are visible in the scene. We show that
our method achieves better overall performance than the state-of-the-art
PoseCNN network for 3D pose estimation on the YCB-video dataset. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Collapsed Tetragonal Phase Transition in LaRu$_2$P$_2$,
Abstract: The structural properties of LaRu$_2$P$_2$ under external pressure have been
studied up to 14 GPa, employing high-energy x-ray diffraction in a
diamond-anvil pressure cell. At ambient conditions, LaRu$_2$P$_2$ (I4/mmm) has
a tetragonal structure with a bulk modulus of $B=105(2)$ GPa and exhibits
superconductivity at $T_c= 4.1$ K. With the application of pressure,
LaRu$_2$P$_2$ undergoes a phase transition to a collapsed tetragonal (cT) state
with a bulk modulus of $B=175(5)$ GPa. At the transition, the c-lattice
parameter exhibits a sharp decrease with a concurrent increase of the a-lattice
parameter. The cT phase transition in LaRu$_2$P$_2$ is consistent with a second
order transition, and was found to be temperature dependent, increasing from
$P=3.9(3)$ GPa at 160 K to $P=4.6(3)$ GPa at 300 K. In total, our data are
consistent with the cT transition being near, but slightly above 2 GPa at 5 K.
Finally, we compare the effect of physical and chemical pressure in the
RRu$_2$P$_2$ (R = Y, La-Er, Yb) isostructural series of compounds and find them
to be analogous. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Controlling the thermoelectric effect by mechanical manipulation of the electron's quantum phase in atomic junctions,
Abstract: The thermoelectric voltage developed across an atomic metal junction (i.e., a
nanostructure in which one or a few atoms connect two metal electrodes) in
response to a temperature difference between the electrodes, results from the
quantum interference of electrons that pass through the junction multiple times
after being scattered by the surrounding defects. Here we report successfully
tuning this quantum interference and thus controlling the magnitude and sign of
the thermoelectric voltage by applying a mechanical force that deforms the
junction. The observed switching of the thermoelectric voltage is reversible
and can be cycled many times. Our ab initio and semi-empirical calculations
elucidate the detailed mechanism by which the quantum interference is tuned. We
show that the applied strain alters the quantum phases of electrons passing
through the narrowest part of the junction and hence modifies the electronic
quantum interference in the device. Tuning the quantum interference causes the
energies of electronic transport resonances to shift, which affects the
thermoelectric voltage. These experimental and theoretical studies reveal that
Au atomic junctions can be made to exhibit both positive and negative
thermoelectric voltages on demand, and demonstrate the importance and
tunability of the quantum interference effect in the atomic-scale metal
nanostructures. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Improving Protein Gamma-Turn Prediction Using Inception Capsule Networks,
Abstract: Protein gamma-turn prediction is useful in protein function studies and
experimental design. Several methods for gamma-turn prediction have been
developed, but the results were unsatisfactory with Matthew correlation
coefficients (MCC) around 0.2-0.4. One reason for the low prediction accuracy
is the limited capacity of the methods; in particular, the traditional
machine-learning methods like SVM may not extract high-level features well to
distinguish between turn and non-turn. Hence, it is worthwhile exploring new
machine-learning methods for the prediction. A cutting-edge deep neural
network, named Capsule Network (CapsuleNet), provides a new opportunity for
gamma-turn prediction. Even when the number of input samples is relatively
small, the capsules from CapsuleNet are very effective at extracting high-level features for classification tasks. Here, we propose a deep inception capsule
network for gamma-turn prediction. Its performance on the gamma-turn benchmark
GT320 achieved an MCC of 0.45, which significantly outperformed the previous
best method with an MCC of 0.38. This is the first gamma-turn prediction method
utilizing deep neural networks. Also, to our knowledge, it is the first
published bioinformatics application utilizing a capsule network, which will provide a useful example for the community. | [
0,
0,
0,
0,
1,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Non-exponential decoherence of radio-frequency resonance rotation of spin in storage rings,
Abstract: Precision experiments, such as the search for electric dipole moments of
charged particles using radiofrequency spin rotators in storage rings, demand maintaining the exact spin resonance condition for several thousand
seconds. Synchrotron oscillations in the stored beam modulate the spin tune of
off-central particles, moving it off the perfect resonance condition set for
central particles on the reference orbit. Here we report an analytic
description of how synchrotron oscillations lead to non-exponential decoherence
of the radiofrequency resonance driven up-down spin rotations. This
non-exponential decoherence is shown to be accompanied by a nontrivial walk of
the spin phase. We also comment on the sensitivity of the decoherence rate to the harmonics of the radiofrequency spin rotator and on the possibility to check
predictions of decoherence-free magic energies. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Flow-Sensitive Composition of Thread-Modular Abstract Interpretation,
Abstract: We propose a constraint-based flow-sensitive static analysis for concurrent
programs by iteratively composing thread-modular abstract interpreters via the
use of a system of lightweight constraints. Our method is compositional in that
it first applies sequential abstract interpreters to individual threads and
then composes their results. It is flow-sensitive in that the causality
ordering of interferences (flow of data from global writes to reads) is modeled
by a system of constraints. These interference constraints are lightweight
since they only refer to the execution order of program statements as opposed
to their numerical properties: they can be decided efficiently using an
off-the-shelf Datalog engine. Our new method has the advantage of being more
accurate than existing, flow-insensitive, static analyzers while remaining
scalable and providing the expected soundness and termination guarantees even
for programs with unbounded data. We implemented our method and evaluated it on
a large number of benchmarks, demonstrating its effectiveness at increasing the
accuracy of thread-modular abstract interpretation. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Momentum Control of Humanoid Robots with Series Elastic Actuators,
Abstract: Humanoid robots may require a degree of compliance at the joint level for
improving efficiency, shock tolerance, and safe interaction with humans. The
presence of joint elasticity, however, complexifies the design of balancing and
walking controllers. This paper proposes a control framework for extending
momentum based controllers developed for stiff actuators to the case of series
elastic actuators. The key point is to consider the motor velocities as an intermediate control input, and then to apply high-gain control to stabilise the desired motor velocities, thereby achieving momentum control. Simulations carried out on
a model of the robot iCub verify the soundness of the proposed approach. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science"
] |
Title: Time-Optimal Trajectories of Generic Control-Affine Systems Have at Worst Iterated Fuller Singularities,
Abstract: We consider in this paper the regularity problem for time-optimal
trajectories of a single-input control-affine system on an n-dimensional
manifold. We prove that, under generic conditions on the drift and the
controlled vector field, any control u associated with an optimal trajectory is
smooth out of a countable set of times. More precisely, there exists an integer
K, only depending on the dimension n, such that the non-smoothness set of u is
made of isolated points, accumulations of isolated points, and so on up to K-th
order iterated accumulations. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Wikipedia for Smart Machines and Double Deep Machine Learning,
Abstract: Very important breakthroughs in data centric deep learning algorithms led to
impressive performance in transactional point applications of Artificial
Intelligence (AI) such as Face Recognition or EKG classification. With all due appreciation, however, knowledge-blind, data-only machine learning algorithms have severe limitations for non-transactional AI applications, such as medical
diagnosis beyond the EKG results. Such applications require deeper and broader
knowledge in their problem solving capabilities, e.g. integrating anatomy and
physiology knowledge with EKG results and other patient findings. Following a
review and illustrations of such limitations for several real life AI
applications, we point at ways to overcome them. The proposed Wikipedia for
Smart Machines initiative aims at building repositories of software structures
that represent humanity's science & technology knowledge in various parts of life; knowledge that we all learn in schools, universities and during our professional life. Target readers for these repositories are smart machines, not humans. AI software developers will have these Reusable Knowledge structures readily available; hence the proposed name ReKopedia. Big Data is by now a mature technology; it is time to focus on Big Knowledge. Some will be derived from data, and some will be obtained from mankind's gigantic repository of knowledge.
Wikipedia for smart machines along with the new Double Deep Learning approach
offer a paradigm for integrating datacentric deep learning algorithms with
algorithms that leverage deep knowledge, e.g. evidential reasoning and
causality reasoning. For illustration, a project is described to produce
ReKopedia knowledge modules for medical diagnosis of about 1,000 disorders.
Data is important, but knowledge (deep, basic, and commonsense) is equally important. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Deep Text Classification Can be Fooled,
Abstract: In this paper, we present an effective method to craft text adversarial
samples, revealing one important yet underestimated fact that DNN-based text
classifiers are also prone to adversarial sample attack. Specifically,
confronted with different adversarial scenarios, the text items that are
important for classification are identified by computing the cost gradients of
the input (white-box attack) or generating a series of occluded test samples
(black-box attack). Based on these items, we design three perturbation
strategies, namely insertion, modification, and removal, to generate
adversarial samples. The experiment results show that the adversarial samples
generated by our method can successfully fool both state-of-the-art
character-level and word-level DNN-based text classifiers. The adversarial
samples can be perturbed to any desired class without compromising their utility. At the same time, the introduced perturbation is difficult to perceive. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Strong Bayesian Evidence for the Normal Neutrino Hierarchy,
Abstract: The configuration of the three neutrino masses can take two forms, known as
the normal and inverted hierarchies. We compute the Bayesian evidence
associated with these two hierarchies. Previous studies found a mild preference
for the normal hierarchy, and this was driven by the asymmetric manner in which
cosmological data has confined the available parameter space. Here we identify
the presence of a second asymmetry, which is imposed by data from neutrino
oscillations. By combining constraints on the squared-mass splittings with the
limit on the sum of neutrino masses of $\Sigma m_\nu < 0.13$ eV, and using a
minimally informative prior on the masses, we infer odds of 42:1 in favour of
the normal hierarchy, which is classified as "strong" in the Jeffreys' scale.
We explore how these odds may evolve in light of higher precision cosmological
data, and discuss the implications of this finding with regards to the nature
of neutrinos. Finally the individual masses are inferred to be $m_1 =
3.80^{+26.2}_{-3.73} \, \text{meV}, m_2 = 8.8^{+18}_{-1.2} \, \text{meV}, m_3 =
50.4^{+5.8}_{-1.2} \, \text{meV}$ ($95\%$ credible intervals). | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: SMARTies: Sentiment Models for Arabic Target Entities,
Abstract: We consider entity-level sentiment analysis in Arabic, a morphologically rich
language with increasing resources. We present a system that is applied to
complex posts written in response to Arabic newspaper articles. Our goal is to
identify important entity "targets" within the post along with the polarity
expressed about each target. We achieve significant improvements over multiple
baselines, demonstrating that the use of specific morphological representations
improves the performance of identifying both important targets and their
sentiment, and that the use of distributional semantic clusters further boosts performance for these representations, especially when richer linguistic
resources are not available. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Explaining the elongated shape of 'Oumuamua by the Eikonal abrasion model,
Abstract: The photometry of the minor body with extrasolar origin (1I/2017 U1)
'Oumuamua revealed an unprecedented shape: Meech et al. (2017) reported a shape
elongation b/a close to 1/10, which calls for theoretical explanation. Here we
show that the abrasion of a primordial asteroid by a huge number of tiny
particles ultimately leads to such elongated shape. The model (called the
Eikonal equation) predicting this outcome was already suggested in Domokos et
al. (2009) to play an important role in the evolution of asteroid shapes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On the lattice of the $σ$-permutable subgroups of a finite group,
Abstract: Let $\sigma =\{\sigma_{i} | i\in I\}$ be some partition of the set of all
primes $\Bbb{P}$, $G$ a finite group and $\sigma (G) =\{\sigma_{i}
|\sigma_{i}\cap \pi (G)\ne \emptyset \}$. A set ${\cal H}$ of subgroups of $G$
is said to be a complete Hall $\sigma $-set of $G$ if every member $\ne 1$ of
${\cal H}$ is a Hall $\sigma_{i}$-subgroup of $G$ for some $\sigma_{i}\in
\sigma $ and ${\cal H}$ contains exactly one Hall $\sigma_{i}$-subgroup of $G$
for every $\sigma_{i}\in \sigma (G)$. A subgroup $A$ of $G$ is said to be
${\sigma}$-permutable in $G$ if $G$ possesses a complete Hall $\sigma $-set and
$A$ permutes with each Hall $\sigma_{i}$-subgroup $H$ of $G$, that is, $AH=HA$
for all $i \in I$. We characterize finite groups with distributive lattice of
the ${\sigma}$-permutable subgroups. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: High Order Numerical Integrators for Relativistic Charged Particle Tracking,
Abstract: In this paper, we extend several time reversible numerical integrators to
solve the Lorentz force equations from second order accuracy to higher order
accuracy for relativistic charged particle tracking in electromagnetic fields.
A fourth order algorithm is given explicitly and tested with numerical
examples. Such high order numerical integrators can significantly save the
computational cost by using a larger step size in comparison to the second
order integrators. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Computer Science"
] |
Title: Towards quantitative methods to assess network generative models,
Abstract: Assessing generative models is not an easy task. Generative models should
synthesize graphs which are not replicates of real networks but show
topological features similar to real graphs. We introduce an approach for
assessing graph generative models using graph classifiers. The inability of an established graph classifier to distinguish real and synthesized graphs could be considered a performance measure for graph generators. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Soliton groups as the reason for extreme statistics of unidirectional sea waves,
Abstract: The results of the probabilistic analysis of the direct numerical simulations
of irregular unidirectional deep-water waves are discussed. It is shown that an
occurrence of large-amplitude soliton-like groups represents an extraordinary
case, which is able to increase noticeably the probability of high waves even
in moderately rough sea conditions. The ensemble of wave realizations should be
large enough to take these rare events into account. Hence we provide a
striking example when long-living coherent structures make the water wave
statistics extreme. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: New Abilities and Limitations of Spectral Graph Bisection,
Abstract: Spectral-based heuristics belong to well-known, commonly used methods which determine a provably minimal graph bisection or output "fail" when the optimality cannot be certified. In this paper we focus on Boppana's algorithm, which is one of the most prominent methods of this type. It is well known that the algorithm works well in the random \emph{planted bisection model} -- the standard class of graphs for the analysis of minimum bisection and related problems. In 2001 Feige and Kilian posed the question of whether Boppana's
algorithm works well in the semirandom model by Blum and Spencer. In our paper
we answer this question affirmatively. We show also that the algorithm achieves
similar performance on graph classes which extend the semirandom model.
Since the behavior of Boppana's algorithm on the semirandom graphs remained
unknown, Feige and Kilian proposed a new semidefinite programming (SDP) based
approach and proved that it works on this model. The relationship between the
performance of the SDP based algorithm and Boppana's approach was left as an
open problem. In this paper we solve the problem in a complete way by proving
that the bisection algorithm of Feige and Kilian provides exactly the same
results as Boppana's algorithm. As a consequence we get that Boppana's
algorithm achieves the optimal threshold for exact cluster recovery in the
\emph{stochastic block model}. On the other hand we prove some limitations of
Boppana's approach: we show that if the density difference on the parameters of
the planted bisection model is too small then the algorithm fails with high
probability in the model. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Deep Residual Learning for Accelerated MRI using Magnitude and Phase Networks,
Abstract: Accelerated magnetic resonance (MR) scan acquisition with compressed sensing
(CS) and parallel imaging is a powerful method to reduce MR imaging scan time.
However, many reconstruction algorithms have high computational costs. To
address this, we investigate deep residual learning networks to remove aliasing
artifacts from artifact corrupted images. The proposed deep residual learning
networks are composed of magnitude and phase networks that are separately
trained. If both phase and magnitude information are available, the proposed
algorithm can work as an iterative k-space interpolation algorithm using
framelet representation. When only magnitude data is available, the proposed
approach works as an image domain post-processing algorithm. Even with strong
coherent aliasing artifacts, the proposed network successfully learned and
removed the aliasing artifacts, whereas current parallel and CS reconstruction
methods were unable to remove these artifacts. Comparisons using single and multiple coils show that the proposed residual network provides good
reconstruction results with orders of magnitude faster computational time than
existing compressed sensing methods. The proposed deep learning framework may
have a great potential for accelerated MR reconstruction by generating accurate
results immediately. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: A critical analysis of resampling strategies for the regularized particle filter,
Abstract: We analyze the performance of different resampling strategies for the
regularized particle filter regarding parameter estimation. We show in
particular, building on analytical insight obtained in the linear Gaussian
case, that resampling systematically can prevent the filtered density from
converging towards the true posterior distribution. We discuss several means to
overcome this limitation, including kernel bandwidth modulation, and provide
evidence that the resulting particle filter clearly outperforms traditional
bootstrap particle filters. Our results are supported by numerical simulations
on a linear textbook example, the logistic map and a non-linear plant growth
model. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Geometry of the free-sliding Bernoulli beam,
Abstract: If a variational problem comes with no boundary conditions prescribed
beforehand, and yet these arise as a consequence of the variation process
itself, we speak of a free boundary values variational problem. Such is, for
instance, the problem of finding the shortest curve whose endpoints can slide
along two prescribed curves. There exists a rigorous geometric way to formulate
this sort of problems on smooth manifolds with boundary, which we review here
in a friendly self-contained way. As an application, we study a particular free
boundary values variational problem, the free-sliding Bernoulli beam. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Rapid, User-Transparent, and Trustworthy Device Pairing for D2D-Enabled Mobile Crowdsourcing,
Abstract: Mobile Crowdsourcing is a promising service paradigm utilizing ubiquitous
mobile devices to facilitate large-scale crowdsourcing tasks (e.g. urban sensing
and collaborative computing). Many applications in this domain require
Device-to-Device (D2D) communications between participating devices for
interactive operations such as task collaborations and file transmissions.
Considering the private participating devices and their opportunistic
encountering behaviors, it is highly desired to establish secure and
trustworthy D2D connections in a fast and autonomous way, which is vital for
implementing practical Mobile Crowdsourcing Systems (MCSs). In this paper, we
develop an efficient scheme, Trustworthy Device Pairing (TDP), which achieves
user-transparent secure D2D connections and reliable peer device selections for
trustworthy D2D communications. Through rigorous analysis, we demonstrate the
effectiveness and security intensity of TDP in theory. The performance of TDP
is evaluated based on both real-world prototype experiments and extensive
trace-driven simulations. Evaluation results verify our theoretical analysis
and show that TDP significantly outperforms existing approaches in terms of
pairing speed, stability, and security. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Partially chaotic orbits in a perturbed cubic force model,
Abstract: Three types of orbits are theoretically possible in autonomous Hamiltonian
systems with three degrees of freedom: fully chaotic (they only obey the energy
integral), partially chaotic (they obey an additional isolating integral
besides energy) and regular (they obey two isolating integrals besides energy).
The existence of partially chaotic orbits has been denied by several authors,
however, arguing either that there is a sudden transition from regularity to
full chaoticity, or that a long enough follow up of a supposedly partially
chaotic orbit would reveal a fully chaotic nature. This situation needs
clarification, because partially chaotic orbits might play a significant role
in the process of chaotic diffusion. Here we use numerically computed Lyapunov
exponents to explore the phase space of a perturbed three dimensional cubic
force toy model, and a generalization of the Poincaré maps to show that
partially chaotic orbits are actually present in that model. They turn out to
be double orbits joined by a bifurcation zone, which is the most likely source
of their chaos, and they are encapsulated in regions of phase space bounded by
regular orbits similar to each one of the components of the double orbit. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Just-infinite C*-algebras and their invariants,
Abstract: Just-infinite C*-algebras, i.e., infinite dimensional C*-algebras, whose
proper quotients are finite dimensional, were investigated in
[Grigorchuk-Musat-Rordam, 2016]. One particular example of a just-infinite residually finite dimensional AF-algebra was constructed in that article. In
this paper we extend that construction by showing that each infinite
dimensional metrizable Choquet simplex is affinely homeomorphic to the trace
simplex of a just-infinite residually finite dimensional C*-algebra. The trace simplex of any unital residually finite dimensional C*-algebra is hence realized by a just-infinite one. We determine the trace simplex of the particular residually finite dimensional AF-algebra constructed in the above-mentioned article, and we show that it has precisely one extremal trace of type
II_1.
We give a complete description of the Bratteli diagrams corresponding to
residually finite dimensional AF-algebras. We show that a modification of any
such Bratteli diagram, similar to the modification that makes an arbitrary
Bratteli diagram simple, will yield a just-infinite residually finite
dimensional AF-algebra. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Large-scale Dataset and Benchmark for Similar Trademark Retrieval,
Abstract: Trademark retrieval (TR) has become an important yet challenging problem due
to an ever increasing trend in trademark applications and infringement
incidents. There have been many promising attempts at the TR problem, which, however, proved impractical since they were evaluated with limited and mostly trivial datasets. In this paper, we provide a large-scale dataset with
benchmark queries with which different TR approaches can be evaluated
systematically. Moreover, we provide a baseline on this benchmark using the
widely-used methods applied to TR in the literature. Furthermore, we identify
and correct two important issues in TR approaches that were not addressed
before: reversal of contrast, and presence of irrelevant text in trademarks
severely affect the TR methods. Lastly, we applied deep learning, namely,
several popular Convolutional Neural Network models, to the TR problem. To the
best of the authors' knowledge, this is the first attempt to do so. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Tracing Networks of Knowledge in the Digital Age,
Abstract: The emergence of new digital technologies has allowed the study of human
behaviour at a scale and at a level of granularity that were unthinkable just a
decade ago. In particular, by analysing the digital traces left by people
interacting in the online and offline worlds, we are able to trace the
spreading of knowledge and ideas at both local and global scales.
In this article we will discuss how these digital traces can be used to map
knowledge across the world, outlining both the limitations and the challenges
in performing this type of analysis. We will focus on data collected from
social media platforms, large-scale digital repositories and mobile data.
Finally, we will provide an overview of the tools that are available to
scholars and practitioners for understanding these processes using these
emerging forms of data. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Data-Driven Tree Transforms and Metrics,
Abstract: We consider the analysis of high dimensional data given in the form of a
matrix with columns consisting of observations and rows consisting of features.
Often the data is such that the observations do not reside on a regular grid,
and the given order of the features is arbitrary and does not convey a notion
of locality. Therefore, traditional transforms and metrics cannot be used for
data organization and analysis. In this paper, our goal is to organize the data
by defining an appropriate representation and metric such that they respect the
smoothness and structure underlying the data. We also aim to generalize the
joint clustering of observations and features in the case where the data does
not fall into clear disjoint groups. For this purpose, we propose multiscale
data-driven transforms and metrics based on trees. Their construction is
implemented in an iterative refinement procedure that exploits the
co-dependencies between features and observations. Beyond the organization of a
single dataset, our approach enables us to transfer the organization learned
from one dataset to another and to integrate several datasets together. We
present an application to breast cancer gene expression analysis: learning
metrics on the genes to cluster the tumor samples into cancer sub-types and
validating the joint organization of both the genes and the samples. We
demonstrate that using our approach to combine information from multiple gene
expression cohorts, acquired by different profiling technologies, improves the
clustering of tumor samples. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
Title: Aktuelle Entwicklungen in der Automatischen Musikverfolgung,
Abstract: In this paper we present current trends in real-time music tracking (a.k.a.
score following). Casually speaking, these algorithms "listen" to a live
performance of music, compare the audio signal to an abstract representation of
the score, and "read" along in the sheet music. In this way at any given time
the exact position of the musician(s) in the sheet music is computed. Here, we
focus on the aspects of flexibility and usability of these algorithms. This
comprises work on automatic identification and flexible tracking of the piece
being played as well as current approaches based on Deep Learning. The latter
enables direct learning of correspondences between complex audio data and
images of the sheet music, avoiding the complicated and time-consuming
definition of a mid-level representation.
-----
This paper deals with current developments in automatic music tracking by
computer. These are algorithms that "listen" to a musical performance, compare
the recorded audio signal with an (abstract) representation of the score and,
so to speak, read along in it. At any point in time the algorithm thus knows
the position of the musicians in the score. Besides providing a general
overview, this paper focuses on the flexibility and easier usability of these
algorithms. It outlines which steps have been taken (and are currently being
taken) to make the process of automatic music tracking more accessible. This
comprises work on the automatic identification of the pieces being played and
their flexible tracking, as well as current approaches based on Deep Learning
that allow image and sound to be connected directly, without the detour via
abstract intermediate representations that can only be created at great
expense of time. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A Visualization of the Classical Musical Tradition,
Abstract: A study of around 13,000 musical compositions from the Western classical
tradition is carried out, spanning 33 major composers from the Baroque to the
Romantic, with a focus on the usage of major/minor key signatures. A
2-dimensional chromatic diagram is proposed to succinctly visualize the data.
The diagram is found to be useful not only in distinguishing style and period,
but also in tracking the career development of a particular composer. | [
0,
0,
0,
1,
0,
0
] | [
"Quantitative Biology"
] |
Title: Electron-Hole Symmetry Breaking in Charge Transport in Nitrogen-Doped Graphene,
Abstract: Graphitic nitrogen-doped graphene is an excellent platform to study
scattering processes of massless Dirac fermions by charged impurities, in which
high mobility can be preserved due to the absence of lattice defects through
direct substitution of carbon atoms in the graphene lattice by nitrogen atoms.
In this work, we report on electrical and magnetotransport measurements of
high-quality graphitic nitrogen-doped graphene. We show that the substitutional
nitrogen dopants in graphene introduce atomically sharp scatterers for
electrons but long-range Coulomb scatterers for holes and, thus, graphitic nitrogen-doped
graphene exhibits clear electron-hole asymmetry in transport properties.
Dominant scattering processes of charge carriers in graphitic nitrogen-doped
graphene are analyzed. It is shown that the electron-hole asymmetry originates
from a distinct difference in intervalley scattering of electrons and holes. We
have also carried out the magnetotransport measurements of graphitic
nitrogen-doped graphene at different temperatures and the temperature
dependences of intervalley scattering, intravalley scattering and phase
coherent scattering rates are extracted and discussed. Our results provide an
evidence for the electron-hole asymmetry in the intervalley scattering induced
by substitutional nitrogen dopants in graphene and shed light on the versatile
potential applications of graphitic nitrogen-doped graphene in electronic
and valleytronic devices. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments,
Abstract: Consistently checking the statistical significance of experimental results is
one of the mandatory methodological steps to address the so-called
"reproducibility crisis" in deep reinforcement learning. In this tutorial
paper, we explain how the number of random seeds relates to the probabilities
of statistical errors. For both the t-test and the bootstrap confidence
interval test, we recall theoretical guidelines to determine the number of
random seeds one should use to provide a statistically significant comparison
of the performance of two algorithms. Finally, we discuss the influence of
deviations from the assumptions usually made by statistical tests. We show that
they can lead to inaccurate evaluations of statistical errors and provide
guidelines to counter these negative effects. We make our code available to
perform the tests. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Training Quantized Nets: A Deeper Understanding,
Abstract: Currently, deep neural networks are deployed on low-power portable devices by
first training a full-precision model using powerful hardware, and then
deriving a corresponding low-precision model for efficient inference on such
systems. However, training models directly with coarsely quantized weights is a
key step towards learning on embedded platforms that have limited computing
resources, memory capacity, and power consumption. Numerous recent publications
have studied methods for training quantized networks, but these studies have
mostly been empirical. In this work, we investigate training methods for
quantized neural networks from a theoretical viewpoint. We first explore
accuracy guarantees for training methods under convexity assumptions. We then
look at the behavior of these algorithms for non-convex problems, and show that
training algorithms that exploit high-precision representations have an
important greedy search phase that purely quantized training methods lack,
which explains the difficulty of training using low-precision arithmetic. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Computational Aided Design for Generating a Modular, Lightweight Car Concept,
Abstract: Developing an appropriate design process for a conceptual model is a stepping
stone toward designing car bodies. This paper presents a methodology to design
a lightweight and modular space frame chassis for a sedan electric car. The
dual phase high strength steel with improved mechanical properties is employed
to reduce the weight of the car body. Utilizing the finite element analysis
yields two models in order to predict the performance of each component. The
first model is a beam structure with a rapid response in structural stiffness
simulation. This model is used for performing the static tests, including modal
frequency, bending stiffness, and torsional stiffness evaluation. The
second model, i.e., a shell model, is proposed to illustrate every module's
mechanical behavior as well as its crashworthiness efficiency. In order to
perform the crashworthiness analysis, the explicit nonlinear dynamic solver
provided by ABAQUS, a commercial finite element software, is used. The results
of finite element beam and shell models are in line with the concept design
specifications. Implementation of this procedure leads to a lightweight and
modular concept for an electric car. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Reaction-Diffusion Systems in Epidemiology,
Abstract: A key problem in modelling the evolution dynamics of infectious diseases is
the mathematical representation of the mechanism of transmission of the
contagion. Models with a finite number of subpopulations can be described via
systems of ordinary differential equations. When dealing with populations with
space structure the relevant quantities are spatial densities, whose evolution
in time requires nonlinear partial differential equations, which are known as
reaction-diffusion systems. Here we present an (historical) outline of
mathematical epidemiology, with a particular attention to the role of spatial
heterogeneity and dispersal in the population dynamics of infectious diseases.
Two specific examples are discussed, which have been the subject of intensive
research by the authors, i.e. man-environment-man epidemics, and malaria. In
addition to the epidemiological relevance of these epidemics all over the
world, their treatment requires a wide range of sophisticated mathematical
methods, and has even posed new non-trivial mathematical problems,
as one can realize from the list of references. One of the most relevant
problems posed by the authors, i.e. regional control, has been emphasized here:
the public health concern consists of eradicating the disease in the relevant
population, as fast as possible. On the other hand, very often the entire
domain of interest for the epidemic, is either unknown, or difficult to manage
for an affordable implementation of suitable environmental programmes. For
regional control instead it might be sufficient to implement such programmes
only in a given subregion conveniently chosen so to lead to an effective
(exponentially fast) eradication of the epidemic in the whole habitat; it is
evident that this practice may have an enormous importance in real cases with
respect to both financial and practical affordability. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Quantitative Biology"
] |
Title: Commissioning and performance results of the WFIRST/PISCES integral field spectrograph,
Abstract: The Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies
(PISCES) is a high contrast integral field spectrograph (IFS) whose design was
driven by WFIRST coronagraph instrument requirements. We present commissioning
and operational results using PISCES as a camera on the High Contrast Imaging
Testbed at JPL. PISCES has demonstrated ability to achieve high contrast
spectral retrieval with flight-like data reduction and analysis techniques. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Revealing strong bias in common measures of galaxy properties using new inclination-independent structures,
Abstract: Accurate measurement of galaxy structures is a prerequisite for quantitative
investigation of galaxy properties or evolution. Yet, the impact of galaxy
inclination and dust on commonly used metrics of galaxy structure is poorly
quantified. We use infrared data sets to select inclination-independent samples
of disc and flattened elliptical galaxies. These samples show strong variation
in Sérsic index, concentration, and half-light radii with inclination. We
develop novel inclination-independent galaxy structures by collapsing the light
distribution in the near-infrared on to the major axis, yielding
inclination-independent `linear' measures of size and concentration. With these
new metrics we select a sample of Milky Way analogue galaxies with similar
stellar masses, star formation rates, sizes and concentrations. Optical
luminosities, light distributions, and spectral properties are all found to
vary strongly with inclination: When inclining to edge-on, $r$-band
luminosities dim by $>$1 magnitude, sizes decrease by a factor of 2,
`dust-corrected' estimates of star formation rate drop threefold, metallicities
decrease by 0.1 dex, and edge-on galaxies are half as likely to be classified
as star forming. These systematic effects should be accounted for in analyses
of galaxy properties. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Online Nonparametric Anomaly Detection based on Geometric Entropy Minimization,
Abstract: We consider the online and nonparametric detection of abrupt and persistent
anomalies, such as a change in the regular system dynamics at a time instance
due to an anomalous event (e.g., a failure, a malicious activity). Combining
the simplicity of the nonparametric Geometric Entropy Minimization (GEM) method
with the timely detection capability of the Cumulative Sum (CUSUM) algorithm we
propose a computationally efficient online anomaly detection method that is
applicable to high-dimensional datasets and at the same time achieves a
near-optimum average detection delay performance for a given false alarm
constraint. We provide new insights to both GEM and CUSUM, including new
asymptotic analysis for GEM, which enables soft decisions for outlier
detection, and a novel interpretation of CUSUM in terms of the discrepancy
theory, which helps us generalize it to the nonparametric GEM statistic. We
numerically show, using both simulated and real datasets, that the proposed
nonparametric algorithm attains a close performance to the clairvoyant
parametric CUSUM test. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Learning latent structure of large random graphs,
Abstract: In this paper, we estimate the distribution of hidden nodes weights in large
random graphs from the observation of very few edges weights. In this very
sparse setting, the first non-asymptotic risk bounds for maximum likelihood
estimators (MLE) are established. The proof relies on the construction of a
graphical model encoding conditional dependencies that is extremely efficient
to study n-regular graphs obtained using a round-robin scheduling. This
graphical model allows to prove geometric loss of memory properties and deduce
the asymptotic behavior of the likelihood function. Following a classical
construction in learning theory, the asymptotic likelihood is used to define a
measure of performance for the MLE. Risk bounds for the MLE are finally
obtained by subgaussian deviation results derived from concentration
inequalities for Markov chains applied to our graphical model. | [
0,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
Title: Second differentials in the Quillen spectral sequence,
Abstract: For an algebraic variety $X$ we introduce generalized first Chern classes,
which are defined for coherent sheaves on $X$ with support in codimension $p$
and take values in $CH^p(X)$. We use them to provide an explicit formula for
the differentials ${d_2^p: E_2^{p,-p-1} \to E_2^{p+2, -p-2} \cong CH^{p+2}(X)}$
in the Quillen spectral sequence. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: General Dynamics of Spinors,
Abstract: In this paper, we consider a general twisted-curved space-time hosting Dirac
spinors and we take into account the Lorentz covariant polar decomposition of
the Dirac spinor field: the corresponding decomposition of the Dirac spinor
field equation leads to a set of field equations that are real and where
spinorial components have disappeared while still maintaining Lorentz
covariance. We will see that the Dirac spinor will contain two real scalar
degrees of freedom, the module and the so-called Yvon-Takabayashi angle, and we
will display their field equations. This will permit us to study the coupling
of curvature and torsion respectively to the module and the YT angle. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: PT-Spike: A Precise-Time-Dependent Single Spike Neuromorphic Architecture with Efficient Supervised Learning,
Abstract: One of the most exciting advancements in AI over the last decade is the wide
adoption of ANNs, such as DNN and CNN, in many real-world applications.
However, the underlying massive amounts of computation and storage requirements
greatly challenge their applicability on resource-limited platforms such as
drones, mobile phones, and IoT devices. The third generation of neural
network model--Spiking Neural Network (SNN), inspired by the working mechanism
and efficiency of human brain, has emerged as a promising solution for
achieving more impressive computing and power efficiency within light-weighted
devices (e.g. single chip). However, the relevant research activities have been
narrowly carried out on conventional rate-based spiking system designs for
fulfilling the practical cognitive tasks, underestimating SNN's energy
efficiency, throughput, and system flexibility. Although the time-based SNN can
be more attractive conceptually, its potentials are not unleashed in realistic
applications due to lack of efficient coding and practical learning schemes. In
this work, a Precise-Time-Dependent Single Spike Neuromorphic Architecture,
namely "PT-Spike", is developed to bridge this gap. Three constituent
hardware-favorable techniques: precise single-spike temporal encoding,
efficient supervised temporal learning, and fast asymmetric decoding are
proposed accordingly to boost the energy efficiency and data processing
capability of the time-based SNN at a more compact neural network model size
when executing real cognitive tasks. Simulation results show that "PT-Spike"
demonstrates significant improvements in network size, processing efficiency
and power consumption with marginal classification accuracy degradation when
compared with the rate-based SNN and ANN under the similar network
configuration. | [
0,
0,
0,
0,
1,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Learning to Represent Edits,
Abstract: We introduce the problem of learning distributed representations of edits. By
combining a "neural editor" with an "edit encoder", our models learn to
represent the salient information of an edit and can be used to apply edits to
new inputs. We experiment on natural language and source code edit data. Our
evaluation yields promising results that suggest that our neural network models
learn to capture the structure and semantics of edits. We hope that this
interesting task and data source will inspire other researchers to work further
on this problem. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Photon-gated spin transistor,
Abstract: Spin-polarized field-effect transistor (spin-FET), where a dielectric layer
is generally employed for the electrical gating as the traditional FET, stands
out as a seminal spintronic device under the miniaturization trend of
electronics. It would be fundamentally transformative if optical gating was
used for spin-FET. We report a new type of spin-polarized field-effect
transistor (spin-FET) with optical gating, which is fabricated by partial
exposure of the (La,Sr)MnO3 channel to light-emitting diode (LED) light. The
manipulation of the channel conductivity is ascribed to the enhanced scattering
of the spin-polarized current by photon-excited antiparallel aligned spins. And
the photon-gated spin-FET shows strong light power dependence and reproducible
enhancement of resistance under light illumination, indicating well-defined
conductivity cycling features. Our finding would enrich the concept of spin-FET
and promote the use of optical means in spintronics for low power consumption
and ultrafast data processing. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Two bosonic quantum walkers in one-dimensional optical lattices,
Abstract: Dynamical properties of two bosonic quantum walkers in a one-dimensional
lattice are studied theoretically. Depending on the initial state,
interactions, lattice tilting, and lattice disorder, a whole plethora of
different behaviors is observed. In particular, it is shown that the two-boson
system manifests many-body-localization-like behavior in the presence of a
quenched disorder. The whole analysis is based on a specific decomposition of
the temporal density profile into different contributions from singly and
doubly occupied sites. In this way, the role of interactions is extracted.
Since the contributions can be directly measured in experiments with ultra-cold
atoms in optical lattices, the predictions presented may have some importance
for upcoming experiments. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Effect of stellar flares on the upper atmospheres of HD 189733b and HD 209458b,
Abstract: Stellar flares are a frequent occurrence on young low-mass stars around which
many detected exoplanets orbit. Flares are energetic, impulsive events, and
their impact on exoplanetary atmospheres needs to be taken into account when
interpreting transit observations. We have developed a model to describe the
upper atmosphere of Extrasolar Giant Planets (EGPs) orbiting flaring stars. The
model simulates thermal escape from the upper atmospheres of close-in EGPs.
Ionisation by solar radiation and electron impact is included and photochemical
and diffusive transport processes are simulated. This model is used to study
the effect of stellar flares from the solar-like G star HD209458 and the young
K star HD189733 on their respective planets. A hypothetical HD209458b-like
planet orbiting the active M star AU Mic is also simulated. We find that the
neutral upper atmosphere of EGPs is not significantly affected by typical
flares. Therefore, stellar flares alone would not cause large enough changes in
planetary mass loss to explain the variations in HD189733b transit depth seen
in previous studies, although we show that it may be possible that an extreme
stellar proton event could result in the required mass loss. Our simulations do
however reveal an enhancement in electron number density in the ionosphere of
these planets, the peak of which is located in the layer where stellar X-rays
are absorbed. Electron densities are found to reach 2.2 to 3.5 times pre-flare
levels and enhanced electron densities last from about 3 to 10 hours after the
onset of the flare. The strength of the flare and the width of its spectral
energy distribution affect the range of altitudes that see enhancements in
ionisation. A large broadband continuum component in the XUV portion of the
flaring spectrum in very young flare stars, such as AU Mic, results in a broad
range of altitudes affected in planets orbiting this star. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Gross-Hopkins Duals of Higher Real K-theory Spectra,
Abstract: We determine the Gross-Hopkins duals of certain higher real K-theory spectra.
More specifically, let p be an odd prime, and consider the Morava E-theory
spectrum of height n=p-1. It is known, in the expert circles, that for certain
finite subgroups G of the Morava stabilizer group, the homotopy fixed point
spectra E_n^{hG} are Gross-Hopkins self-dual up to a shift. In this paper, we
determine the shift for those finite subgroups G which contain p-torsion. This
generalizes previous results for n=2 and p=3. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Projection Method for Metric-Constrained Optimization,
Abstract: We outline a new approach for solving optimization problems which enforce
triangle inequalities on output variables. We refer to this as
metric-constrained optimization, and give several examples where problems of
this form arise in machine learning applications and theoretical approximation
algorithms for graph clustering. Although these problems are interesting from
a theoretical perspective, they are challenging to solve in practice due to the
high memory requirement of black-box solvers. In order to address this
challenge we first prove that the metric-constrained linear program relaxation
of correlation clustering is equivalent to a special case of the metric
nearness problem. We then develop a general solver for metric-constrained
linear and quadratic programs by generalizing and improving a simple projection
algorithm originally developed for metric nearness. We give several novel
approximation guarantees for using our framework to find lower bounds for
optimal solutions to several challenging graph clustering problems. We also
demonstrate the power of our framework by solving optimization problems involving
up to 10^{8} variables and 10^{11} constraints. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Curie: Policy-based Secure Data Exchange,
Abstract: Data sharing among partners---users, organizations, companies---is crucial
for the advancement of data analytics in many domains. Sharing through secure
computation and differential privacy allows these partners to perform private
computations on their sensitive data in controlled ways. However, in reality,
there exist complex relationships among members. Politics, regulations,
interest, trust, and data demands and needs are among the many reasons. Thus,
there is a need for a mechanism to meet these conflicting relationships on data
sharing. This paper presents Curie, an approach to exchange data among members
whose membership has complex relationships. The CPL policy language that allows
members to define the specifications of data exchange requirements is
introduced. Members (partners) assert who and what to exchange through their
local policies and negotiate a global sharing agreement. The agreement is
implemented in a multi-party computation that guarantees sharing among members
will comply with the policy as negotiated. The use of Curie is validated
through an example of a health care application built on recently introduced
secure multi-party computation and differential privacy frameworks, and policy
and performance trade-offs are explored. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Gemini/GMOS Transmission Spectral Survey: Complete Optical Transmission Spectrum of the hot Jupiter WASP-4b,
Abstract: We present the complete optical transmission spectrum of the hot Jupiter
WASP-4b from 440-940 nm at R ~ 400-1500 obtained with the Gemini Multi-Object
Spectrometers (GMOS); this is the first result from a comparative
exoplanetology survey program of close-in gas giants conducted with GMOS.
WASP-4b has an equilibrium temperature of 1700 K and is favorable to study in
transmission due to a large scale height (370 km). We derive the transmission
spectrum of WASP-4b using 4 transits observed with the MOS technique. We
demonstrate repeatable results across multiple epochs with GMOS, and derive a
combined transmission spectrum at a precision about a factor of two above the
photon noise, which is roughly equal to one atmospheric scale height. The transmission
spectrum is well fitted with a uniform opacity as a function of wavelength. The
uniform opacity and absence of a Rayleigh slope from molecular hydrogen suggest
that the atmosphere is dominated by clouds with condensate grain size of ~1 um.
This result is consistent with previous observations of hot Jupiters since
clouds have been seen in planets with similar equilibrium temperatures to
WASP-4b. We describe a custom pipeline that we have written to reduce GMOS
time-series data of exoplanet transits, and present a thorough analysis of the
dominant noise sources in GMOS, which primarily consist of wavelength- and
time- dependent displacements of the spectra on the detector, mainly due to a
lack of atmospheric dispersion correction. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Astrophysics"
] |
Title: Periods of abelian differentials and dynamics,
Abstract: Given a closed oriented surface S we describe those cohomology classes which
appear as the period characters of abelian differentials for some choice of
complex structure on S consistent with the orientation. The proof is based upon
Ratner's solution of Raghunathan's conjecture. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Carbon Nanotube Wools Directly from CO2 By Molten Electrolysis Value Driven Pathways to Carbon Dioxide Greenhouse Gas Mitigation,
Abstract: A climate mitigation comprehensive solution is presented through the first
high yield, low energy synthesis of macroscopic length carbon nanotubes (CNT)
wool from CO2 by molten carbonate electrolysis, suitable for weaving into
carbon composites and textiles. Growing CO2 concentrations, the concurrent
climate change and species extinction can be addressed if CO2 becomes a sought
resource rather than a greenhouse pollutant. Inexpensive carbon composites
formed from carbon wool, as a lighter replacement for metals, textiles and
cement, comprise a major market sink to compactly store transformed anthropogenic CO2.
100x-longer CNTs grow on Monel versus steel. Monel, electrolyte equilibration,
and a mixed metal nucleation facilitate the synthesis. CO2, the sole reactant
in this transformation, is directly extractable from dilute (atmospheric) or
concentrated sources, and is cost constrained only by the (low) cost of
electricity. Today's $100K per ton CNT valuation incentivizes CO2 removal. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Rock-Paper-Scissors Random Walks on Temporal Multilayer Networks,
Abstract: We study diffusion on a multilayer network where the contact dynamics between
the nodes is governed by a random process and where the waiting time
distribution differs for edges from different layers. We study the impact on a
random walk of the competition that naturally emerges between the edges of the
different layers. In contrast to previous studies, which have imposed a priori
inter-layer competition, the competition is here induced by the heterogeneity
of the activity on the different layers. We first study the precedence relation
between different edges and by extension between different layers, and show
that it determines biased paths for the walker. We also discuss the emergence
of cyclic, rock-paper-scissors random walks, when the precedence between layers
is non-transitive. Finally, we numerically show the slowing-down effect due to
the competition on a heterogeneous multilayer as the walker is likely to be
trapped for a longer time either on a single layer or on an oriented cycle.
Keywords: random walks; multilayer networks; dynamical systems on networks;
models of networks; simulations of networks; competition between layers. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics",
"Physics"
] |
Title: Time-frequency analysis of ship wave patterns in shallow water: modelling and experiments,
Abstract: A spectrogram of a ship wake is a heat map that visualises the time-dependent
frequency spectrum of surface height measurements taken at a single point as
the ship travels by. Spectrograms are easy to compute and, if properly
interpreted, have the potential to provide crucial information about various
properties of the ship in question. Here we use geometrical arguments and
analysis of an idealised mathematical model to identify features of
spectrograms, concentrating on the effects of a finite-depth channel. Our
results depend heavily on whether the flow regime is subcritical or
supercritical. To support our theoretical predictions, we compare with data
taken from experiments we conducted in a model test basin using a variety of
realistic ship hulls. Finally, we note that vessels with a high aspect ratio
appear to produce spectrogram data that contains periodic patterns. We can
reproduce this behaviour in our mathematical model by using a so-called
two-point wavemaker. These results highlight the role of wave interference
effects in spectrograms of ship wakes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: HARE: Supporting efficient uplink multi-hop communications in self-organizing LPWANs,
Abstract: The emergence of low-power wide area networks (LPWANs) as a new agent in the
Internet of Things (IoT) will result in the incorporation into the digital
world of low-automated processes from a wide variety of sectors. The single-hop
conception of typical LPWAN deployments, though simple and robust, overlooks
the self-organization capabilities of network devices, suffers from lack of
scalability in crowded scenarios, and pays little attention to energy
consumption. Aiming to make the most of devices' capabilities, the HARE
protocol stack is proposed in this paper as a new LPWAN technology flexible
enough to adopt uplink multi-hop communications when these prove energetically
more efficient. In this way, results from a real testbed show energy savings of up
to 15% when using a multi-hop approach while keeping the same network
reliability. The system's self-organizing capability and resilience have also
been validated after performing numerous iterations of the association mechanism and
deliberately switching off network devices. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Tensor tomography in periodic slabs,
Abstract: The X-ray transform on the periodic slab $[0,1]\times\mathbb T^n$, $n\geq0$,
has a non-trivial kernel due to the symmetry of the manifold and presence of
trapped geodesics. For tensor fields gauge freedom increases the kernel
further, and the X-ray transform is not solenoidally injective unless $n=0$. We
characterize the kernel of the geodesic X-ray transform for $L^2$-regular
$m$-tensors for any $m\geq0$. The characterization extends to more general
manifolds, twisted slabs, including the Möbius strip as the simplest example. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Meromorphic Jacobi Forms of Half-Integral Index and Umbral Moonshine Modules,
Abstract: In this work we consider an association of meromorphic Jacobi forms of
half-integral index to the pure D-type cases of umbral moonshine, and solve the
module problem for four of these cases by constructing vertex operator
superalgebras that realise the corresponding meromorphic Jacobi forms as graded
traces. We also present a general discussion of meromorphic Jacobi forms with
half-integral index and their relationship to mock modular forms. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Active Hypothesis Testing: Beyond Chernoff-Stein,
Abstract: An active hypothesis testing problem is formulated. In this problem, the
agent can perform a fixed number of experiments and then decide on one of the
hypotheses. The agent is also allowed to declare its experiments inconclusive
if needed. The objective is to minimize the probability of making an incorrect
inference (misclassification probability) while ensuring that the true
hypothesis is declared conclusively with moderately high probability. For this
problem, lower and upper bounds on the optimal misclassification probability
are derived and these bounds are shown to be asymptotically tight. In the
analysis, a sub-problem, which can be viewed as a generalization of the
Chernoff-Stein lemma, is formulated and analyzed. A heuristic approach to
strategy design is proposed and its relationship with existing heuristic
strategies is discussed. | [
1,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Learning to detect chest radiographs containing lung nodules using visual attention networks,
Abstract: Machine learning approaches hold great potential for the automated detection
of lung nodules in chest radiographs, but training the algorithms requires very
large amounts of manually annotated images, which are difficult to obtain. Weak
labels indicating whether a radiograph is likely to contain pulmonary nodules
are typically easier to obtain at scale by parsing historical free-text
radiological reports associated with the radiographs. Using a repository of
over 700,000 chest radiographs, in this study we demonstrate that promising
nodule detection performance can be achieved using weak labels through
convolutional neural networks for radiograph classification. We propose two
network architectures for the classification of images likely to contain
pulmonary nodules using both weak labels and manually-delineated bounding
boxes, when these are available. Annotated nodules are used at training time to
deliver a visual attention mechanism informing the model about its localisation
performance. The first architecture extracts saliency maps from high-level
convolutional layers and compares the estimated position of a nodule against
the ground truth, when this is available. A corresponding localisation error is
then back-propagated along with the softmax classification error. The second
approach consists of a recurrent attention model that learns to observe a short
sequence of smaller image portions through reinforcement learning. When a
nodule annotation is available at training time, the reward function is
modified accordingly so that exploring portions of the radiographs away from a
nodule incurs a larger penalty. Our empirical results demonstrate the potential
advantages of these architectures in comparison to competing methodologies. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: New conformal map for the Sinc approximation for exponentially decaying functions over the semi-infinite interval,
Abstract: The Sinc approximation has shown high efficiency for numerical methods in
many fields. Conformal maps play an important role in the success, i.e.,
appropriate conformal map must be employed to elicit high performance of the
Sinc approximation. Appropriate conformal maps have been proposed for typical
cases; however, such maps may not be optimal. Thus, the performance of the Sinc
approximation may be improved by using another conformal map rather than an
existing map. In this paper, we propose a new conformal map for the case where
functions are defined over the semi-infinite interval and decay exponentially.
Then, we demonstrate in both theoretical and numerical ways that the
convergence rate is improved by replacing the existing conformal map with the
proposed map. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: The multi-resonant Lugiato-Lefever model,
Abstract: We introduce a new model describing multiple resonances in Kerr optical
cavities. It perfectly agrees quantitatively with the Ikeda map and predicts
complex phenomena such as super cavity solitons and coexistence of multiple
nonlinear states. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression,
Abstract: Bayesian shrinkage methods have generated a lot of recent interest as tools
for high-dimensional regression and model selection. These methods naturally
facilitate tractable uncertainty quantification and incorporation of prior
information. A common feature of these models, including the Bayesian lasso,
global-local shrinkage priors, and spike-and-slab priors is that the
corresponding priors on the regression coefficients can be expressed as scale
mixture of normals. While the three-step Gibbs sampler used to sample from the
often intractable associated posterior density has been shown to be
geometrically ergodic for several of these models (Khare and Hobert, 2013; Pal
and Khare, 2014), it has been demonstrated recently that convergence of this
sampler can still be quite slow in modern high-dimensional settings despite
this apparent theoretical safeguard. We propose a new method to draw from the
same posterior via a tractable two-step blocked Gibbs sampler. We demonstrate
that our proposed two-step blocked sampler exhibits vastly superior convergence
behavior compared to the original three-step sampler in high-dimensional
regimes on both real and simulated data. We also provide a detailed theoretical
underpinning to the new method in the context of the Bayesian lasso. First, we
derive explicit upper bounds for the (geometric) rate of convergence.
Furthermore, we demonstrate theoretically that while the original Bayesian
lasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and
hence Hilbert-Schmidt). The trace class property has useful theoretical and
practical implications. It implies that the corresponding Markov operator is
compact, and its eigenvalues are summable. It also facilitates a rigorous
comparison of the two-step blocked chain with "sandwich" algorithms which aim
to improve performance of the two-step chain by inserting an inexpensive extra
step. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Inclusion and Majorization Properties of Certain Subclasses of Multivalent Analytic Functions Involving a Linear Operator,
Abstract: The object of the present paper is to study certain properties and
characteristics of the operator $Q_{p,\beta}^{\alpha}$ defined on p-valent
analytic functions by using the technique of differential subordination. We
also obtain results involving majorization problems by applying the operator to
p-valent analytic functions. Relevant connections of the results presented
here with those obtained by earlier workers are pointed out. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Tameness in least fixed-point logic and McColm's conjecture,
Abstract: We investigate fundamental model-theoretic dividing lines (the order
property, the independence property, the strict order property, and the tree
property 2) in the context of least fixed-point (LFP) logic over families of
finite structures. We show that, unlike the first-order (FO) case, the order
property and the independence property are equivalent, but all of the other
natural implications are strict. We identify the LFP strict order property with
proficiency, a well-studied notion in finite model theory.
Gregory McColm conjectured that FO and LFP definability coincide over a
family C of finite structures exactly when C is non-proficient. McColm's
conjecture is false in general, but as an application of our results, we show
that it holds under standard FO tameness assumptions adapted to families of
finite structures. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Magnetoelectric properties of the layered room-temperature antiferromagnets BaMn2P2 and BaMn2As2,
Abstract: Properties of two ThCr2Si2-type materials are discussed within the context of
their established structural and magnetic symmetries. Both materials develop
collinear, G-type antiferromagnetic order above room temperature, and magnetic
ions occupy acentric sites in centrosymmetric structures. We refute a previous
conjecture that BaMn2As2 is an example of a magnetoelectric material with
hexadecapole order by exposing flaws in supporting arguments, principally, an
omission of discrete symmetries enforced by the symmetry of sites used by Mn
ions and, also, improper classifications of the primary and secondary
order-parameters. Implications for future experiments designed to improve our
understanding of BaMn2P2 and BaMn2As2 magnetoelectric properties, using neutron
and x-ray diffraction, are examined. Patterns of Bragg spots caused by
conventional magnetic dipoles and magnetoelectric (Dirac) multipoles are
predicted to be distinct, which raises the intriguing possibility of a unique
and comprehensive examination of the magnetoelectric state by diffraction. A
roto-inversion operation in Mn site symmetry is ultimately responsible for the
distinguishing features. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Discrete Spectrum Reconstruction using Integral Approximation Algorithm,
Abstract: An inverse problem in spectroscopy is considered. The objective is to restore
the discrete spectrum from observed spectrum data, taking into account the
spectrometer's line spread function. The problem is reduced to solution of a
system of linear-nonlinear equations (SLNE) with respect to intensities and
frequencies of the discrete spectral lines. The SLNE is linear with respect to
lines' intensities and nonlinear with respect to the lines' frequencies. The
integral approximation algorithm is proposed for the solution of this SLNE. The
algorithm combines solution of linear integral equations with solution of a
system of linear algebraic equations and avoids nonlinear equations. Numerical
examples of the application of the technique, both to synthetic and
experimental spectra, demonstrate the efficacy of the proposed approach in
enabling an effective enhancement of the spectrometer's resolution. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Tackling Over-pruning in Variational Autoencoders,
Abstract: Variational autoencoders (VAE) are directed generative models that learn
factorial latent variables. As noted by Burda et al. (2015), these models
exhibit the problem of factor over-pruning where a significant number of
stochastic factors fail to learn anything and become inactive. This can limit
their modeling power and their ability to learn diverse and meaningful latent
representations. In this paper, we evaluate several methods to address this
problem and propose a more effective model-based approach called the epitomic
variational autoencoder (eVAE). The so-called epitomes of this model are groups
of mutually exclusive latent factors that compete to explain the data. This
approach helps prevent inactive units since each group is pressured to explain
the data. We compare the approaches with qualitative and quantitative results
on MNIST and TFD datasets. Our results show that eVAE makes efficient use of
model capacity and generalizes better than VAE. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Cyber-Physical System for Energy-Efficient Stadium Operation: Methodology and Experimental Validation,
Abstract: The environmental impacts of medium to large scale buildings receive
substantial attention in research, industry, and media. This paper studies the
energy savings potential of a commercial soccer stadium during day-to-day
operation. Buildings of this kind are characterized by special purpose system
installations like grass heating systems and by event-driven usage patterns.
This work presents a methodology to holistically analyze the stadiums
characteristics and integrate its existing instrumentation into a
Cyber-Physical System, enabling to deploy different control strategies
flexibly. In total, seven different strategies for controlling the studied
stadiums grass heating system are developed and tested in operation.
Experiments in winter season 2014/2015 validated the strategies impacts within
the real operational setup of the Commerzbank Arena, Frankfurt, Germany. With
95% confidence, these experiments saved up to 66% of median daily
weather-normalized energy consumption. Extrapolated to an average heating
season, this corresponds to savings of 775 MWh and 148 t of CO2 emissions. In
winter 2015/2016 an additional predictive nighttime heating experiment targeted
lower temperatures, which increased the savings to up to 85%, equivalent to 1
GWh (197 t CO2) in an average winter. Beyond achieving significant energy
savings, the different control strategies also met the target temperature
levels to the satisfaction of the stadiums operational staff. While the case
study constitutes a significant part, the discussions dedicated to the
transferability of this work to other stadiums and other building types show
that the concepts and the approach are of general nature. Furthermore, this
work demonstrates the first successful application of Deep Belief Networks to
regress and predict the thermal evolution of building systems. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics"
] |
Title: Estimating solar flux density at low radio frequencies using a sky brightness model,
Abstract: Sky models have been used in the past to calibrate individual low radio
frequency telescopes. Here we generalize this approach from a single antenna to
a two element interferometer and formulate the problem in a manner to allow us
to estimate the flux density of the Sun using the normalized cross-correlations
(visibilities) measured on a low resolution interferometric baseline. For wide
field-of-view instruments, typically the case at low radio frequencies, this
approach can provide robust absolute solar flux calibration for well
characterized antennas and receiver systems. It can provide a reliable and
computationally lean method for extracting parameters of physical interest
using a small fraction of the voluminous interferometric data, which can be
prohibitingly compute intensive to calibrate and image using conventional
approaches. We demonstrate this technique by applying it to data from the
Murchison Widefield Array and assess its reliability. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Astrophysics"
] |
Title: Non-equilibrium time dynamics of genetic evolution,
Abstract: Biological systems are typically highly open, non-equilibrium systems that
are very challenging to understand from a statistical mechanics perspective.
While statistical treatments of evolutionary biological systems have a long and
rich history, examination of the time-dependent non-equilibrium dynamics has
been less studied. In this paper we first derive a generalized master equation
in the genotype space for diploid organisms incorporating the processes of
selection, mutation, recombination, and reproduction. The master equation is
defined in terms of continuous time and can handle an arbitrary number of gene
loci and alleles, and can be defined in terms of an absolute population or
probabilities. We examine and analytically solve several prototypical cases
which illustrate the interplay of the various processes and discuss the
timescales of their evolution. The entropy production during the evolution
towards steady state is calculated and we find that it agrees with predictions
from non-equilibrium statistical mechanics where it is large when the
population distribution evolves towards a more viable genotype. The stability
of the non-equilibrium steady state is confirmed using the Glansdorff-Prigogine
criterion. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Physics",
"Statistics"
] |
Title: Exact MAP Inference by Avoiding Fractional Vertices,
Abstract: Given a graphical model, one essential problem is MAP inference, that is,
finding the most likely configuration of states according to the model.
Although this problem is NP-hard, large instances can be solved in practice. A
major open question is to explain why this is true. We give a natural condition
under which we can provably perform MAP inference in polynomial time. We
require that the number of fractional vertices in the LP relaxation exceeding
the optimal solution is bounded by a polynomial in the problem size. This
resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for
general LP relaxations of integer programs, known techniques can only handle a
constant number of fractional vertices whose value exceeds the optimal
solution. We experimentally verify this condition and demonstrate how efficient
various integer programming methods are at removing fractional solutions. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: On the compressibility of the transition-metal carbides and nitrides alloys Zr_xNb_{1-x}C and Zr_xNb_{1-x}N,
Abstract: The 4d-transition-metals carbides (ZrC, NbC) and nitrides (ZrN, NbN) in the
rocksalt structure, as well as their ternary alloys, have been recently studied
by means of a first-principles full potential linearized augmented plane waves
method within the local density approximation. These materials are important
because of their interesting mechanical and physical properties, which make
them suitable for many technological applications. Here, by using a simple
theoretical model, we estimate the bulk moduli of their ternary alloys
Zr$_x$Nb$_{1-x}$C and Zr$_x$Nb$_{1-x}$N in terms of the bulk moduli of the end
members alone. The results are comparable to those deduced from the
first-principles calculations. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Materials Science"
] |
Title: Exact semi-separation of variables in waveguides with nonplanar boundaries,
Abstract: Series expansions of unknown fields $\Phi=\sum\varphi_n Z_n$ in elongated
waveguides are commonly used in acoustics, optics, geophysics, water waves and
other applications, in the context of coupled-mode theories (CMTs). The
transverse functions $Z_n$ are determined by solving local Sturm-Liouville
problems (reference waveguides). In most cases, the boundary conditions
assigned to $Z_n$ cannot be compatible with the physical boundary conditions of
$\Phi$, leading to slowly convergent series, and rendering CMTs mild-slope
approximations. In the present paper, the heuristic approach introduced in
(Athanassoulis & Belibassakis 1999, J. Fluid Mech. 389, 275-301) is generalized
and justified. It is proved that an appropriately enhanced series expansion
becomes an exact, rapidly-convergent representation of the field $\Phi$, valid
for any smooth, nonplanar boundaries and any smooth enough $\Phi$. This series
expansion can be differentiated termwise everywhere in the domain, including
the boundaries, implementing an exact semi-separation of variables for
non-separable domains. The efficiency of the method is illustrated by solving a
boundary value problem for the Laplace equation, and computing the
corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations
for nonlinear water waves. The present method provides accurate results with
only a few modes for quite general domains. Extensions to general waveguides
are also discussed. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |