text | labels
---|---|
Title: Simulation of Drop Impact on a Hot Wall using SPH Method with Peng-Robinson Equation of State,
Abstract: This study presents a smoothed particle hydrodynamics (SPH) method with
Peng-Robinson equation of state for simulating drop vaporization and drop
impact on a hot surface. The conservation equations of momentum and energy and
Peng-Robinson equation of state are applied to describe both the liquid and gas
phases. The governing equations are solved numerically by the SPH method. The
phase change between the liquid and gas phases is simulated directly without
using any phase change models. The numerical method is validated by comparing
numerical results with analytical solutions for the vaporization of n-heptane
drops at different temperatures. Using the SPH method, the processes of
n-heptane drops impacting on a solid wall with different temperatures are
studied numerically. The results show that the size of the film formed by drop impact decreases as the wall temperature increases. When the temperature is high enough, the drop rebounds. | [
0,
1,
0,
0,
0,
0
] |
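The SPH entry above closes both the liquid and gas phases with the Peng-Robinson equation of state instead of a separate phase-change model. As a hedged illustration (not the authors' SPH code), the sketch below evaluates that cubic EOS in Python; the n-heptane critical constants are approximate literature values assumed for the example.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Pressure (Pa) from the Peng-Robinson EOS at temperature T (K) and
    molar volume v (m^3/mol), given critical constants and acentric factor."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Approximate n-heptane critical constants (assumed for illustration).
Tc, Pc, omega = 540.2, 2.74e6, 0.35
print(peng_robinson_pressure(400.0, 1.0e-3, Tc, Pc, omega))
```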
Title: Generation of attosecond electron beams in relativistic ionization by short laser pulses,
Abstract: Ionization by relativistically intense short laser pulses is studied in the
framework of strong-field quantum electrodynamics. Distinctive patterns are
found in the energy probability distributions of photoelectrons. Apart from the previously observed patterns, which were studied in Phys. Rev. A {\bf 94}, 013402 (2016), we discover an additional interference-free smooth supercontinuum in the high-energy portion of the spectrum, reaching tens of kiloelectronvolts.
As we show, the latter is sensitive to the driving field intensity and it can
be detected in a narrow polar-angular window. Once these high-energy electrons
are collected, they can form solitary attosecond pulses. This is particularly
important in light of various applications of attosecond electron beams such as
in ultrafast electron diffraction and crystallography, or in time-resolved
electron microscopy of physical, chemical, and biological processes. | [
0,
1,
0,
0,
0,
0
] |
Title: Surface defects and elliptic quantum groups,
Abstract: A brane construction of an integrable lattice model is proposed. The model is
composed of Belavin's R-matrix, Felder's dynamical R-matrix, the
Bazhanov-Sergeev-Derkachov-Spiridonov R-operator and some intertwining
operators. This construction implies that a family of surface defects act on
supersymmetric indices of four-dimensional $\mathcal{N} = 1$ supersymmetric
field theories as transfer matrices related to elliptic quantum groups. | [
0,
0,
1,
0,
0,
0
] |
Title: Disentangling and Assessing Uncertainties in Multiperiod Corporate Default Risk Predictions,
Abstract: Measuring the corporate default risk is broadly important in economics and
finance. Quantitative methods have been developed to predictively assess future
corporate default probabilities. However, as a more difficult yet crucial
problem, evaluating the uncertainties associated with the default predictions
remains little explored. In this paper, we attempt to fill this gap by
developing a procedure for quantifying the level of associated uncertainties
upon carefully disentangling multiple contributing sources. Our framework
effectively incorporates broad information from historical default data,
corporations' financial records, and macroeconomic conditions by a)
characterizing the default mechanism, and b) capturing the future dynamics of
various features contributing to the default mechanism. Our procedure overcomes
the major challenges in this large-scale statistical inference problem and
makes it practically feasible by using parsimonious models, innovative methods,
and modern computational facilities. By predicting the marketwide total number
of defaults and assessing the associated uncertainties, our method can also be
applied for evaluating the aggregated market credit risk level. Upon analyzing
a US market data set, we demonstrate that the level of uncertainties associated
with default risk assessments is indeed substantial. More informatively, we
also find that the level of uncertainties associated with the default risk
predictions is correlated with the level of default risk itself, indicating potential new practical applications, including improving the accuracy of default risk assessments. | [
0,
0,
0,
1,
0,
1
] |
Title: The XXL Survey: XVII. X-ray and Sunyaev-Zel'dovich Properties of the Redshift 2.0 Galaxy Cluster XLSSC 122,
Abstract: We present results from a 100 ks XMM-Newton observation of galaxy cluster
XLSSC 122, the first massive cluster discovered through its X-ray emission at
$z\approx2$. The data provide the first precise constraints on the bulk
thermodynamic properties of such a distant cluster, as well as an X-ray
spectroscopic confirmation of its redshift. We measure an average temperature
of $kT=5.0\pm0.7$ keV; a metallicity with respect to solar of
$Z/Z_{\odot}=0.33^{+0.19}_{-0.17}$, consistent with lower-redshift clusters;
and a redshift of $z=1.99^{+0.07}_{-0.06}$, consistent with the earlier photo-z
estimate. The measured gas density profile leads to a mass estimate at
$r_{500}$ of $M_{500}=(6.3\pm1.5)\times10^{13}M_{\odot}$. From CARMA 30 GHz
data, we measure the spherically integrated Compton parameter within $r_{500}$
to be $Y_{500}=(3.6\pm0.4)\times10^{-12}$. We compare the measured properties
of XLSSC 122 to lower-redshift cluster samples, and find good agreement when
assuming the simplest (self-similar) form for the evolution of cluster scaling
relations. While a single cluster provides limited information, this result
suggests that the evolution of the intracluster medium in the most massive,
well-developed clusters is remarkably simple, even out to the highest redshifts
where they have been found. At the same time, our data reaffirm the previously
reported spatial offset between the centers of the X-ray and SZ signals for
XLSSC 122, suggesting a disturbed configuration. Higher spatial resolution data
could thus provide greater insights into the internal dynamics of this system. | [
0,
1,
0,
0,
0,
0
] |
Title: A bound for rational Thurston-Bennequin invariants,
Abstract: In this paper, we introduce a rational $\tau$ invariant for rationally
null-homologous knots in contact 3-manifolds with nontrivial
Ozsváth-Szabó contact invariants. Such an invariant is an upper bound
for the sum of the rational Thurston-Bennequin invariant and the rational rotation
number of the Legendrian representatives of the knot. In the special case of
Floer simple knots in L-spaces, we can compute the rational $\tau$ invariants
by correction terms. | [
0,
0,
1,
0,
0,
0
] |
Title: Generalized Fréchet Bounds for Cell Entries in Multidimensional Contingency Tables,
Abstract: We consider the lattice, $\mathcal{L}$, of all subsets of a multidimensional
contingency table and establish the properties of monotonicity and
supermodularity for the marginalization function, $n(\cdot)$, on $\mathcal{L}$.
We derive from the supermodularity of $n(\cdot)$ some generalized Fréchet
inequalities complementing and extending inequalities of Dobra and Fienberg.
Further, we construct new monotonic and supermodular functions from $n(\cdot)$,
and we remark on the connection between supermodularity and some correlation
inequalities for probability distributions on lattices. We also apply an
inequality of Ky Fan to derive a new approach to Fréchet inequalities for
multidimensional contingency tables. | [
0,
0,
1,
1,
0,
0
] |
Title: Signaling on the Continuous Spectrum of Nonlinear Optical Fiber,
Abstract: This paper studies different signaling techniques on the continuous spectrum
(CS) of nonlinear optical fiber defined by the nonlinear Fourier transform. Three
different signaling techniques are proposed and analyzed based on the
statistics of the noise added to CS after propagation along the nonlinear
optical fiber. The proposed methods are compared in terms of error performance,
distance reach, and complexity. Furthermore, the effect of chromatic dispersion
on the data rate and noise in nonlinear spectral domain is investigated. It is
demonstrated that, for a given sequence of CS symbols, an optimal bandwidth (or
symbol rate) can be determined so that the temporal duration of the propagated
signal at the end of the fiber is minimized. In effect, the required guard
interval between the subsequently transmitted data packets in time is minimized
and the effective data rate is significantly enhanced. Moreover, by selecting the proper signaling method and design criteria, a reach distance of 7100 km is reported by signaling only on the CS at a rate of 9.6 Gbps. | [
1,
1,
0,
0,
0,
0
] |
Title: Symmetry analysis and soliton solution of the (2+1)-dimensional Zoomeron equation,
Abstract: Traveling wave solutions of the (2+1)-dimensional Zoomeron equation (ZE) are developed in terms of exponential functions involving free parameters. It is shown that the novel Lie group of transformations method is a competent and prominent tool for solving nonlinear partial differential equations (PDEs) in mathematical physics. The similarity transformation method (STM) is applied first to the (2+1)-dimensional ZE to find the infinitesimal generators. By discussing the different cases of these infinitesimal generators, the STM reduces the (2+1)-dimensional ZE to (1+1)-dimensional PDEs; it then reduces these PDEs to various ordinary differential equations (ODEs), which help to find exact solutions of the (2+1)-dimensional ZE. | [
0,
1,
1,
0,
0,
0
] |
Title: Estimation in the convolution structure density model. Part II: adaptation over the scale of anisotropic classes,
Abstract: This paper continues the research started in \cite{LW16}. In the framework of
the convolution structure density model on $\mathbb{R}^d$, we address the problem of adaptive minimax estimation with $\mathbb{L}_p$-loss over the scale of anisotropic
Nikol'skii classes. We fully characterize the behavior of the minimax risk for
different relationships between regularity parameters and norm indexes in the
definitions of the functional class and of the risk. In particular, we show
that the boundedness of the function to be estimated leads to an essential
improvement of the asymptotics of the minimax risk. We prove that the selection rule proposed in Part I leads to the construction of an optimally or nearly optimally (up to a logarithmic factor) adaptive estimator. | [
0,
0,
1,
1,
0,
0
] |
Title: Explaining Recurrent Neural Network Predictions in Sentiment Analysis,
Abstract: Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown
to deliver insightful explanations in the form of input space relevances for
understanding feed-forward neural network classification decisions. In the
present work, we extend the usage of LRP to recurrent neural networks. We
propose a specific propagation rule applicable to multiplicative connections as
they arise in recurrent network architectures such as LSTMs and GRUs. We apply
our technique to a word-based bi-directional LSTM model on a five-class
sentiment prediction task, and evaluate the resulting LRP relevances both
qualitatively and quantitatively, obtaining better results than a
gradient-based related method which was used in previous work. | [
1,
0,
0,
1,
0,
0
] |
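The LRP entry above extends relevance propagation to recurrent networks. Below is a minimal numpy sketch of the two ingredients, assuming our own variable names: the common epsilon-stabilized rule for dense layers, and a signal-takes-all treatment of multiplicative gates in the spirit of the rule the abstract describes (not the authors' exact code).

```python
import numpy as np

def lrp_linear(x, w, b, r_out, eps=1e-3):
    """Epsilon-LRP for a dense layer y = w @ x + b: redistribute the output
    relevance r_out to the inputs in proportion to contributions z_ji = w_ji * x_i."""
    z = w * x[np.newaxis, :]                 # contributions, shape (out, in)
    denom = z.sum(axis=1) + b                # pre-activations, shape (out,)
    denom = denom + eps * np.sign(denom)     # stabilizer avoids division by ~0
    return (z / denom[:, np.newaxis] * r_out[:, np.newaxis]).sum(axis=0)

def lrp_multiplicative_gate(r_out):
    """For a product gate * signal (as in LSTM/GRU cells), pass all relevance
    to the signal neuron and none to the gate."""
    return r_out.copy(), np.zeros_like(r_out)
```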
Title: Generalized Coherence Concurrence and Path distinguishability,
Abstract: We propose a new family of coherence monotones, named the \emph{generalized
coherence concurrence} (or coherence $k$-concurrence), which is an analogous
concept to the generalized entanglement concurrence. The coherence
$k$-concurrence of a state is nonzero if and only if the coherence number (a
recently introduced discrete coherence monotone) of the state is not smaller
than $k$, and a state can be converted to a state with nonzero entanglement
$k$-concurrence via incoherent operations if and only if the state has nonzero
coherence $k$-concurrence. We apply the coherence concurrence family to the
problem of wave-particle duality in multi-path interference phenomena. We
obtain a sharper equation for path distinguishability (which witnesses the duality) than the known value, and show that the amount of each concurrence for the quanton state determines the number of slits that can be identified unambiguously. | [
1,
0,
0,
0,
0,
0
] |
Title: Passivity Based Whole-body Control for Quadrupedal Locomotion on Challenging Terrain,
Abstract: We present a passivity-based Whole-Body Control approach for quadruped robots
that achieves dynamic locomotion while compliantly balancing the robot's trunk.
We formulate the motion tracking as a Quadratic Program that takes into account
the full robot rigid body dynamics, the actuation limit, the joint limits and
the contact interaction. We analyze the controller robustness against
inaccurate friction coefficient estimates and unstable footholds, as well as
its capability to redistribute the load as a consequence of enforcing actuation
limits. Additionally, we present some practical implementation details gained
from the experience with the real platform. Extensive experimental trials on
the 90 kg hydraulically actuated quadruped robot validate the capabilities of this controller under various terrain conditions and gaits. Compared to the current state of the art, the proposed approach enables more accurate execution of highly dynamic motions. | [
1,
0,
0,
0,
0,
0
] |
Title: Accelerations for Graph Isomorphism,
Abstract: In this paper, we present two main results. First, relying on only one conjecture
(Conjecture 2.9) for recognizing a vertex symmetric graph, which is the hardest
task for our problem, we construct an algorithm for finding an isomorphism
between two graphs in polynomial time $ O(n^{3}) $. Second, without that
conjecture, we prove the algorithm to be of quasi-polynomial time $
O(n^{1.5\log n}) $. The conjectures in this paper hold for all graphs of size no larger than $ 5 $ and for all graphs we have encountered; at the very least, the conjecture for determining whether a graph is vertex symmetric is intuitively plausible. We are unable to prove the conjectures by hand, so we plan to search for possible counterexamples by computer. We also introduce new concepts such as
collapse pattern and collapse tomography, which play important roles in our
algorithms. | [
1,
0,
0,
0,
0,
0
] |
Title: Multi-Objective Maximization of Monotone Submodular Functions with Cardinality Constraint,
Abstract: We consider the problem of multi-objective maximization of monotone
submodular functions subject to cardinality constraint, often formulated as
$\max_{|A|=k}\min_{i\in\{1,\dots,m\}}f_i(A)$. While it is widely known that
greedy methods work well for a single objective, the problem becomes much
harder with multiple objectives. In fact, Krause et al.\ (2008) showed that
when the number of objectives $m$ grows with the cardinality $k$, i.e.,
$m=\Omega(k)$, the problem is inapproximable (unless $P=NP$). On the other
hand, when $m$ is constant Chekuri et al.\ (2010) showed a randomized
$(1-1/e)-\epsilon$ approximation with runtime (number of queries to function
oracle) $n^{m/\epsilon^3}$.
We focus on finding a fast and practical algorithm that has (asymptotic)
approximation guarantees even when $m$ is super constant. We first modify the
algorithm of Chekuri et al.\ (2010) to achieve a $(1-1/e)$ approximation for
$m=o(\frac{k}{\log^3 k})$. This demonstrates a steep transition from constant
factor approximability to inapproximability around $m=\Omega(k)$. Then using
Multiplicative-Weight-Updates (MWU), we find a much faster
$\tilde{O}(n/\delta^3)$ time asymptotic $(1-1/e)^2-\delta$ approximation. While
the above results are all randomized, we also give a simple deterministic
$(1-1/e)-\epsilon$ approximation with runtime $kn^{m/\epsilon^4}$. Finally, we
run synthetic experiments using Kronecker graphs and find that our MWU inspired
heuristic outperforms existing heuristics. | [
1,
0,
0,
1,
0,
0
] |
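For contrast with the entry above, the plain greedy heuristic for $\max_{|A|=k}\min_i f_i(A)$ is easy to state. The sketch below is a simple baseline without the paper's guarantees (the paper's algorithms rely on continuous relaxations and MWU instead).

```python
def greedy_multi_objective(ground_set, objectives, k):
    """Greedily add the element that most increases min_i f_i(A).
    Each objective must be a set function defined on frozensets."""
    chosen = set()
    for _ in range(k):
        best, best_val = None, float("-inf")
        for e in ground_set - chosen:
            val = min(f(frozenset(chosen | {e})) for f in objectives)
            if val > best_val:
                best, best_val = e, val
        chosen.add(best)
    return chosen

# Toy example: two coverage objectives over a tiny universe.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
cover = lambda A, target: len(set().union(*(sets[e] for e in A)) & target) if A else 0
f1 = lambda A: cover(A, {"a", "b", "c"})
f2 = lambda A: cover(A, {"c", "d"})
print(greedy_multi_objective(set(sets), [f1, f2], k=2))  # e.g. {2, 3}
```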
Title: Cosmology from conservation of global energy,
Abstract: It is argued that many of the problems and ambiguities of standard cosmology
derive from a single one: violation of conservation of energy in the standard
paradigm. Standard cosmology satisfies conservation of local energy, however
disregards the inherent global aspect of energy. We therefore explore
conservation of the quasi-local Misner-Sharp energy within the causal horizon,
which, as we argue, is necessarily an apparent horizon. Misner-Sharp energy
assumes the presence of arbitrary mass-energy. Its conservation, however,
yields "empty" de Sitter (open, flat, closed) as single cosmological solution,
where Misner-Sharp total energy acts as cosmological constant and where the
source of curvature energy is unidentified. It is argued that de Sitter is only
apparently empty of matter. That is, total matter energy scales as curvature
energy in open de Sitter, which causes evolution of the cosmic potential and
induces gravitational time dilation. Curvature of time accounts completely for
the extrinsic curvature, i.e., renders open de Sitter spatially flat. This
explains the well-known, surprising spatial flatness of Misner-Sharp energy,
even if extrinsic curvature is non-zero. The general relativistic derivation
from Misner-Sharp energy is confirmed by a Machian equation of recessional and
peculiar energy, which explicitly assumes the presence of matter. This
relational model enhances interpretation. Time-dilated open de Sitter is
spatially flat, dynamically close to $\Lambda$CDM, and is shown to be without
the conceptual problems of concordance cosmology. | [
0,
1,
0,
0,
0,
0
] |
Title: Electric field modulation of the non-linear areal magnetic anisotropy energy,
Abstract: We study the ferromagnetic layer thickness dependence of the
voltage-controlled magnetic anisotropy (VCMA) in gated CoFeB/MgO
heterostructures with heavy metal underlayers. When the effective CoFeB
thickness is below ~1 nm, the VCMA efficiency of Ta/CoFeB/MgO heterostructures
considerably decreases with decreasing CoFeB thickness. We find that a high-order phenomenological term used to describe the thickness dependence of the
areal magnetic anisotropy energy can also account for the change in the areal
VCMA efficiency. In this structure, the higher order term competes against the
common interfacial VCMA, thereby reducing the efficiency at lower CoFeB
thickness. The areal VCMA efficiency does not saturate even when the effective
CoFeB thickness exceeds ~1 nm. We consider that the higher-order term is related to
the strain that develops at the CoFeB/MgO interface: as the average strain of
the CoFeB layer changes with its thickness, the electronic structure of the
CoFeB/MgO interface varies leading to changes in areal magnetic anisotropy
energy and VCMA efficiency. | [
0,
1,
0,
0,
0,
0
] |
Title: Gapless surface states originated from accidentally degenerate quadratic band touching in a three-dimensional tetragonal photonic crystal,
Abstract: A tetragonal photonic crystal composed of high-index pillars can exhibit a
frequency-isolated accidental degeneracy at a high-symmetry point in the first
Brillouin zone. A photonic band gap can be formed there by introducing a
geometrical anisotropy in the pillars. In this gap, gapless surface/domain-wall
states emerge under a certain condition. We analyze their physical properties in terms of an effective Hamiltonian, and good agreement between the effective theory and numerical calculations is obtained. | [
0,
1,
0,
0,
0,
0
] |
Title: On the impact of pull request decisions on future contributions,
Abstract: The pull-based development process has become prevalent on platforms such as
GitHub as a form of distributed software development. Potential contributors
can create and submit a set of changes to a software project through pull
requests. These changes can be accepted, discussed or rejected by the
maintainers of the software project, and can influence further contribution
proposals. As such, it is important to examine the practices that encourage
contributors to a project to submit pull requests. Specifically, we consider
the impact of prior pull requests on the acceptance or rejection of subsequent
pull requests. We also consider the potential effect of rejecting or ignoring
pull requests on further contributions. In this preliminary research, we study
three large projects on \textsf{GitHub}, using pull request data obtained
through the \textsf{GitHub} API, and we perform empirical analyses to
investigate the above questions. Our results show that continued contribution
to a project is correlated with higher pull request acceptance rates and that
pull request rejections lead to fewer future contributions. | [
1,
0,
0,
0,
0,
0
] |
Title: The unreasonable effectiveness of the forget gate,
Abstract: Given the success of the gated recurrent unit, a natural question is whether
all the gates of the long short-term memory (LSTM) network are necessary.
Previous research has shown that the forget gate is one of the most important
gates in the LSTM. Here we show that a forget-gate-only version of the LSTM
with chrono-initialized biases not only provides computational savings but also outperforms the standard LSTM on multiple benchmark datasets and competes with
some of the best contemporary models. Our proposed network, the JANET, achieves
accuracies of 99% and 92.5% on the MNIST and pMNIST datasets, outperforming the
standard LSTM which yields accuracies of 98.5% and 91%. | [
0,
0,
0,
1,
0,
0
] |
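The JANET entry above keeps only the forget gate. Below is a hedged numpy sketch of a forget-gate-only update plus the chrono bias initialization, based on our reading of the abstract; the names and exact variant are assumptions, not the authors' code.

```python
import numpy as np

def janet_style_step(x, h, Wf, Uf, bf, Wg, Ug, bg):
    """One recurrent step: the forget gate f interpolates between the
    previous state h and a tanh candidate update g."""
    f = 1.0 / (1.0 + np.exp(-(Wf @ x + Uf @ h + bf)))  # sigmoid forget gate
    g = np.tanh(Wg @ x + Ug @ h + bg)                  # candidate state
    return f * h + (1.0 - f) * g

# Chrono initialization: forget biases drawn as log U(1, T_max - 1),
# biasing the gate toward remembering over long sequences.
T_max, n_hidden = 784, 128
bf = np.log(np.random.uniform(1.0, T_max - 1.0, size=n_hidden))
```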
Title: WMRB: Learning to Rank in a Scalable Batch Training Approach,
Abstract: We propose a new learning to rank algorithm, named Weighted Margin-Rank Batch
loss (WMRB), to extend the popular Weighted Approximate-Rank Pairwise loss
(WARP). WMRB uses a new rank estimator and an efficient batch training
algorithm. The approach allows more accurate item rank approximation and
explicit utilization of parallel computation to accelerate training. In three
item recommendation tasks, WMRB consistently outperforms WARP and other
baselines. Moreover, WMRB shows clear time efficiency advantages as data scale
increases. | [
1,
0,
0,
1,
0,
0
] |
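The WMRB entry above replaces WARP's sequential rank sampling with a batch estimator. A hedged sketch of one plausible form follows (our reading of the abstract, not the published loss): margin violations over a sampled batch of negatives estimate the positive item's rank, smoothed by a log transform.

```python
import numpy as np

def margin_rank_batch_loss(pos_score, neg_scores, n_items, margin=1.0):
    """Estimate the positive item's rank from margin violations over a
    batch of sampled negative scores, then apply a log transform."""
    violations = np.maximum(0.0, margin - pos_score + neg_scores)
    rank_estimate = n_items / len(neg_scores) * violations.sum()
    return np.log(1.0 + rank_estimate)

print(margin_rank_batch_loss(2.0, np.array([0.5, 1.8, 2.5]), n_items=1000))
```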
Title: Learning Features from Co-occurrences: A Theoretical Analysis,
Abstract: Representing a word by its co-occurrences with other words in context is an
effective way to capture the meaning of the word. However, the theory behind this approach remains a challenge. In this work, taking the example of a word classification
task, we give a theoretical analysis of the approaches that represent a word X
by a function f(P(C|X)), where C is a context feature, P(C|X) is the
conditional probability estimated from a text corpus, and the function f maps
the co-occurrence measure to a prediction score. We investigate the impact of
context feature C and the function f. We also explain the reasons why using the
co-occurrences with multiple context features may be better than just using a
single one. In addition, some of the results shed light on the theory of
feature learning and machine learning in general. | [
1,
0,
1,
1,
0,
0
] |
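The co-occurrence entry above studies representations of the form f(P(C|X)). A toy estimator of P(C|X) from a tokenized corpus is sketched below, as an illustration of the quantity being analyzed rather than the paper's method.

```python
from collections import Counter, defaultdict

def conditional_context_probs(corpus, window=1):
    """Estimate P(C|X): the probability of seeing context word C within
    `window` positions of target word X, from co-occurrence counts."""
    pair_counts = defaultdict(Counter)
    for sent in corpus:
        for i, x in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pair_counts[x][sent[j]] += 1
    return {x: {c: n / sum(ctx.values()) for c, n in ctx.items()}
            for x, ctx in pair_counts.items()}

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
print(conditional_context_probs(corpus)["sat"])  # {'cat': 0.5, 'dog': 0.5}
```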
Title: Azumaya algebras and canonical components,
Abstract: Let $M$ be a compact 3-manifold and $\Gamma=\pi_1(M)$. The work of Thurston
and Culler--Shalen established the $\mathrm{SL}_2(\mathbb{C})$ character
variety $X(\Gamma)$ as a fundamental tool in the study of the geometry and
topology of $M$. This is particularly so in the case when $M$ is the exterior
of a hyperbolic knot $K$ in $S^3$. The main goals of this paper are to bring to
bear tools from algebraic and arithmetic geometry to understand algebraic and
number theoretic properties of the so-called canonical component of $X(\Gamma)$
as well as distinguished points on the canonical component when $\Gamma$ is a
knot group. In particular, we study how the theory of quaternion Azumaya
algebras can be used to obtain algebraic and arithmetic information about Dehn
surgeries, and perhaps of most interest, to construct new knot invariants that
lie in the Brauer groups of curves over number fields. | [
0,
0,
1,
0,
0,
0
] |
Title: On the Spectrum of Multi-Frequency Quasiperiodic Schrödinger Operators with Large Coupling,
Abstract: We study multi-frequency quasiperiodic Schrödinger operators on
$\mathbb{Z}$. We prove that, for a large real analytic potential satisfying certain restrictions, the spectrum consists of a single interval. The result is
a consequence of a criterion for the spectrum to contain an interval at a given
location that we establish non-perturbatively in the regime of positive
Lyapunov exponent. | [
0,
0,
1,
0,
0,
0
] |
Title: Model Spaces of Regularity Structures for Space-Fractional SPDEs,
Abstract: We study model spaces, in the sense of Hairer, for stochastic partial
differential equations involving the fractional Laplacian. We prove that the
fractional Laplacian is a singular kernel suitable for applying the theory of
regularity structures. Our main contribution is to study the dependence of the
model space for a regularity structure on the three-parameter problem involving
the spatial dimension, the polynomial order of the nonlinearity, and the
exponent of the fractional Laplacian. The goal is to investigate the growth of
the model space under parameter variation. In particular, we prove several
results in the approaching subcriticality limit leading to universal growth
exponents of the regularity structure. A key role is played by the viewpoint
that model spaces can be identified with families of rooted trees. Our proofs
are based upon a geometrical construction similar to Newton polygons for
classical Taylor series and various combinatorial arguments. We also present
several explicit examples listing all elements with negative homogeneity by
implementing a new symbolic software package to work with regularity
structures. We use this package to illustrate our analytical results and to
obtain new conjectures regarding coarse-grained network measures for model
spaces. | [
0,
0,
1,
0,
0,
0
] |
Title: Transit Detection of a "Starshade" at the Inner Lagrange Point of an Exoplanet,
Abstract: All water-covered rocky planets in the inner habitable zones of solar-type
stars will inevitably experience a catastrophic runaway climate due to
increasing stellar luminosity and limits to outgoing infrared radiation from
wet greenhouse atmospheres. Reflectors or scatterers placed near Earth's inner
Lagrange point (L1) have been proposed as a "geo-engineering" solution to
anthropogenic climate change and an advanced version of this could modulate
incident irradiation over many Gyr or "rescue" a planet from the interior of
the habitable zone. The distance of the starshade from the planet that
minimizes its mass is 1.6 times the Earth-L1 distance. Such a starshade would
have to be similar in size to the planet and the mutual occultations during
planetary transits could produce a characteristic maximum at mid-transit in the
light-curve. Because of a fortuitous ratio of densities, Earth-size planets
around G dwarf stars present the best opportunity to detect such an artifact.
The signal would be persistent and is potentially detectable by a future space
photometry mission to characterize transiting planets. The signal could be
distinguished from natural phenomena, e.g., starspots or cometary dust clouds,
by its shape, persistence, and transmission spectrum. | [
0,
1,
0,
0,
0,
0
] |
Title: Automatic Error Analysis of Human Motor Performance for Interactive Coaching in Virtual Reality,
Abstract: In the context of fitness coaching or for rehabilitation purposes, the motor
actions of a human participant must be observed and analyzed for errors in
order to provide effective feedback. This task is normally carried out by human
coaches, and it needs to be solved automatically in technical applications that
are to provide automatic coaching (e.g. training environments in VR). However,
most coaching systems only provide coarse information on movement quality, such
as a scalar value per body part that describes the overall deviation from the
correct movement. Further, they are often limited to static body postures or
rather simple movements of single body parts. While there are many approaches
to distinguish between different types of movements (e.g., between walking and
jumping), the detection of more subtle errors in a motor performance is less
investigated. We propose a novel approach to classify errors in sports or
rehabilitation exercises such that feedback can be delivered in a rapid and
detailed manner: Homogeneous sub-sequences of exercises are first temporally
aligned via Dynamic Time Warping. Next, we extract a feature vector from the
aligned sequences, which serves as a basis for feature selection using Random
Forests. The selected features are used as input for Support Vector Machines,
which finally classify the movement errors. We compare our algorithm to a well-established state-of-the-art approach in time series classification, 1-Nearest
Neighbor combined with Dynamic Time Warping, and show our algorithm's
superiority regarding classification quality as well as computational cost. | [
1,
0,
0,
0,
0,
0
] |
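The coaching entry above chains DTW alignment, Random-Forest-based feature selection, and an SVM. Below is a structural sketch with placeholder data; the feature extraction step and all parameters are assumptions, with scikit-learn standing in for the classifier stack.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance,
    usable for aligning exercise sub-sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# Features from DTW-aligned sequences -> RF-based selection -> SVM.
clf = make_pipeline(
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf"),
)
X = np.random.rand(40, 30)        # placeholder per-repetition feature vectors
y = np.random.randint(0, 3, 40)   # placeholder movement-error labels
clf.fit(X, y)
```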
Title: A mechanism of synaptic clock underlying subjective time perception,
Abstract: Temporal resolution of visual information processing is thought to be an
important factor in predator-prey interactions, shaped in the course of
evolution by animals' ecology. Here I show that light can be considered to have
a dual role of a source of information, which guides motor actions, and an
environmental feedback for those actions. I consequently show how temporal
perception might depend on behavioral adaptations realized by the nervous
system. I propose an underlying mechanism of synaptic clock, with every synapse
having its characteristic time unit, determined by the persistence of memory
traces of synaptic inputs, which is used by the synapse to tell time. The
present theory offers a testable framework, which may account for numerous
experimental findings, including the interspecies variation in temporal
resolution and the properties of subjective time perception, specifically the
variable speed of perceived time passage, depending on emotional and
attentional states or tasks performed. | [
0,
0,
0,
0,
1,
0
] |
Title: Optimal Packings of Two to Four Equal Circles on Any Flat Torus,
Abstract: We find explicit formulas for the radii and locations of the circles in all
the optimally dense packings of two, three or four equal circles on any flat
torus, defined to be the quotient of the Euclidean plane by the lattice
generated by two independent vectors. We prove the optimality of the
arrangements using techniques from rigidity theory and topological graph
theory. | [
0,
0,
1,
0,
0,
0
] |
Title: Obfuscation in Bitcoin: Techniques and Politics,
Abstract: In the cryptographic currency Bitcoin, all transactions are recorded in the
blockchain - a public, global, and immutable ledger. Because transactions are
public, Bitcoin and its users employ obfuscation to maintain a degree of
financial privacy. Critically, and in contrast to typical uses of obfuscation,
in Bitcoin obfuscation is not aimed against the system designer but is instead
enabled by design. We map sixteen proposed privacy-preserving techniques for
Bitcoin on an obfuscation-vs.-cryptography axis, and find that those that are
used in practice tend toward obfuscation. We argue that this has led to a
balance between privacy and regulatory acceptance. | [
1,
0,
0,
0,
0,
0
] |
Title: Path Planning and Controlled Crash Landing of a Quadcopter in case of a Rotor Failure,
Abstract: This paper presents a framework for controlled emergency landing of a
quadcopter, experiencing a rotor failure, away from sensitive areas. A complete
mathematical model capturing the dynamics of the system is presented that takes
the asymmetrical aerodynamic load on the propellers into account. An
equilibrium state of the system is calculated around which a linear
time-invariant control strategy is developed to stabilize the system. By
utilizing the proposed model, a specific configuration for a quadcopter is
introduced that leads to the minimum power consumption during a
yaw-rate-resolved hovering after a rotor failure. Furthermore, given a 3D
representation of the environment, an optimal flight trajectory towards a safe
crash landing spot, while avoiding collision with obstacles, is developed using
an RRT* approach. The cost function for determining the best landing spot
consists of: (i) finding the safest landing spot with the largest clearance
from the obstacles; and (ii) finding the most energy-efficient trajectory
towards the landing spot. The performance of the proposed framework is tested
via simulations. | [
1,
0,
0,
0,
0,
0
] |
Title: Exact short-time height distribution in 1D KPZ equation with Brownian initial condition,
Abstract: The early time regime of the Kardar-Parisi-Zhang (KPZ) equation in $1+1$
dimension, starting from a Brownian initial condition with a drift $w$, is
studied using the exact Fredholm determinant representation. For large drift we
recover the exact results for the droplet initial condition, whereas a
vanishingly small drift describes the stationary KPZ case, recently studied by
weak noise theory (WNT). We show that for short time $t$, the probability
distribution $P(H,t)$ of the height $H$ at a given point takes the large
deviation form $P(H,t) \sim \exp{\left(-\Phi(H)/\sqrt{t} \right)}$. We obtain
the exact expressions for the rate function $\Phi(H)$ for $H<H_{c2}$. Our exact
expression for $H_{c2}$ numerically coincides with the value at which WNT was
found to exhibit a spontaneous reflection symmetry breaking. We propose two
continuations for $H>H_{c2}$, which apparently correspond to the symmetric and
asymmetric WNT solutions. The rate function $\Phi(H)$ is Gaussian in the
center, while it has asymmetric tails, $|H|^{5/2}$ on the negative $H$ side and
$H^{3/2}$ on the positive $H$ side. | [
0,
1,
0,
0,
0,
0
] |
Title: The role of cosmology in modern physics,
Abstract: The subject of this article is the relationship between modern cosmology and
fundamental physics, in particular general relativity as a theory of gravity on
one side, together with its unique application in cosmology, and the formation
of structures and their statistics on the other. It summarises arguments for
the formulation of a metric theory of gravity and the uniqueness of the
construction of general relativity. It discusses symmetry arguments in the
construction of Friedmann-Lemaître cosmologies as well as assumptions in
relation to the presence of dark matter, when adopting general relativity as
the gravitational theory. A large section is dedicated to $\Lambda$CDM as the
standard model for structure formation and the arguments that led to its
construction, and to the role of statistics and to the problem of scientific
inference in cosmology as an empirical science. The article concludes with an
outlook on current and future developments in cosmology. | [
0,
1,
0,
0,
0,
0
] |
Title: TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals,
Abstract: Temporal Action Proposal (TAP) generation is an important problem, as fast
and accurate extraction of semantically important (e.g. human actions) segments
from untrimmed videos is an important step for large-scale video analysis. We
propose a novel Temporal Unit Regression Network (TURN) model. There are two
salient aspects of TURN: (1) TURN jointly predicts action proposals and refines
the temporal boundaries by temporal coordinate regression; (2) Fast computation
is enabled by unit feature reuse: a long untrimmed video is decomposed into
video units, which are reused as basic building blocks of temporal proposals.
TURN outperforms the state-of-the-art methods under average recall (AR) by a
large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames
per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal
generation stage for existing temporal action localization pipelines, where it outperforms the state of the art on THUMOS-14 and ActivityNet. | [
1,
0,
0,
0,
0,
0
] |
Title: Efficiently Learning Nonstationary Gaussian Processes for Real World Impact,
Abstract: Most real-world phenomena, such as sunlight distribution under a forest canopy, mineral concentration, and stock valuation, exhibit nonstationary dynamics, i.e., the variation of the phenomenon changes depending on locality. Nonstationary dynamics pose both theoretical and practical challenges to statistical machine learning algorithms that aim to accurately capture the complexities governing the evolution of such processes. Typically, the nonstationary dynamics are modeled using nonstationary Gaussian Process models (NGPs) that employ a local latent dynamics parameterization to correspondingly model the nonstationary real observable dynamics. Recently, an approach based on a most-likely induced latent dynamics representation attracted the research community's attention. The approach could not be employed for large-scale real-world
applications because learning a most likely latent dynamics representation
involves maximization of marginal likelihood of the observed real dynamics that
becomes intractable as the number of induced latent points grows with problem
size. We have established a direct relationship between informativeness of the
induced latent dynamics and the marginal likelihood of the observed real
dynamics. This opens up the possibility of maximizing marginal likelihood of
observed real dynamics indirectly by near optimally maximizing entropy or
mutual information gain on the induced latent dynamics using greedy algorithms.
Therefore, for an efficient yet accurate inference, we propose to build an
induced latent dynamics representation using a novel algorithm LISAL that
adaptively maximizes entropy or mutual information on the induced latent
dynamics and marginal likelihood of observed real dynamics in an iterative
manner. The relevance of LISAL is validated using real world datasets. | [
0,
0,
0,
1,
0,
0
] |
Title: Causal Holography in Application to the Inverse Scattering Problems,
Abstract: For a given smooth compact manifold $M$, we introduce an open class $\mathcal
G(M)$ of Riemannian metrics, which we call \emph{metrics of the gradient type}.
For such metrics $g$, the geodesic flow $v^g$ on the spherical tangent bundle
$SM \to M$ admits a Lyapunov function (so the $v^g$-flow is traversing). It turns out that metrics of the gradient type are exactly the non-trapping metrics.
For every $g \in \mathcal G(M)$, the geodesic scattering along the boundary
$\partial M$ can be expressed in terms of the \emph{scattering map} $C_{v^g}:
\partial_1^+(SM) \to \partial_1^-(SM)$. It acts from a domain
$\partial_1^+(SM)$ in the boundary $\partial(SM)$ to the complementary domain
$\partial_1^-(SM)$, both domains being diffeomorphic. We prove that, for a
\emph{boundary generic} metric $g \in \mathcal G(M)$ the map $C_{v^g}$ allows
for a reconstruction of $SM$ and of the geodesic foliation $\mathcal F(v^g)$ on
it, up to a homeomorphism (often a diffeomorphism).
Also, for such $g$, the knowledge of the scattering map $C_{v^g}$ makes it
possible to recover the homology of $M$, the Gromov simplicial semi-norm on it,
and the fundamental group of $M$. Additionally, $C_{v^g}$ allows one to reconstruct the naturally stratified topological type of the space of geodesics on $M$. | [
0,
0,
1,
0,
0,
0
] |
Title: Inferring short-term volatility indicators from Bitcoin blockchain,
Abstract: In this paper, we study the possibility of inferring early warning indicators
(EWIs) for periods of extreme bitcoin price volatility using features obtained
from Bitcoin daily transaction graphs. We infer the low-dimensional
representations of transaction graphs in the time period from 2012 to 2017
using the Bitcoin blockchain, and demonstrate how these representations can be used
to predict extreme price volatility events. Our EWI, which is obtained with a
non-negative decomposition, contains more predictive information than those
obtained with singular value decomposition or scalar value of the total Bitcoin
transaction volume. | [
1,
0,
0,
0,
0,
1
] |
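The entry above derives its early warning indicator from a non-negative decomposition of daily transaction-graph features. A hedged sketch with scikit-learn's NMF on placeholder data follows; the feature choice and dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows: days; columns: nonnegative features of each daily transaction
# graph (e.g. degree-distribution bins). Random placeholder data here.
rng = np.random.default_rng(0)
X = rng.random((2000, 50))

model = NMF(n_components=5, init="nndsvd", random_state=0)
W = model.fit_transform(X)   # low-dimensional representation per day
H = model.components_        # nonnegative basis "graph motifs"

ewi = W[:, 0]  # one candidate indicator: a single activation tracked over time
```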
Title: Discrete structure of the brain rhythms,
Abstract: Neuronal activity in the brain generates synchronous oscillations of the
Local Field Potential (LFP). The traditional analyses of the LFPs are based on
decomposing the signal into simpler components, such as sinusoidal harmonics.
However, a common drawback of such methods is that the decomposition primitives
are usually presumed from the onset, which may bias our understanding of the
signal's structure. Here, we introduce an alternative approach that allows an
impartial, high resolution, hands-off decomposition of the brain waves into a
small number of discrete, frequency-modulated oscillatory processes, which we
call oscillons. In particular, we demonstrate that the mouse hippocampal LFP contains a single oscillon that occupies the $\theta$-frequency band and a
couple of $\gamma$-oscillons that correspond, respectively, to slow and fast
$\gamma$-waves. Since the oscillons were identified empirically, they may
represent the actual, physical structure of synchronous oscillations in
neuronal ensembles, whereas Fourier-defined "brain waves" are nothing but
poorly resolved oscillons. | [
0,
0,
0,
0,
1,
0
] |
Title: A new class of solutions for the multi-component extended Harry Dym equation,
Abstract: We construct a point transformation between two integrable systems, the
multi-component Harry Dym equation and the multi-component extended Harry Dym
equation, that does not preserve the class of multi-phase solutions. As a
consequence we obtain a new type of wave-like solutions, generalising
the multi-phase solutions of the multi-component extended Harry Dym equation.
Our construction is easily transferable to other integrable systems with
analogous properties. | [
0,
1,
0,
0,
0,
0
] |
Title: Multivariate Generalized Linear Mixed Models for Joint Estimation of Sporting Outcomes,
Abstract: This paper explores improvements in prediction accuracy and inference
capability when allowing for potential correlation in team-level random effects
across multiple game-level responses from different assumed distributions.
First-order and fully exponential Laplace approximations are used to fit
normal-binary and Poisson-binary multivariate generalized linear mixed models
with non-nested random effects structures. We have built these models into the
R package mvglmmRank, which is used to explore several seasons of American
college football and basketball data. | [
0,
0,
0,
1,
0,
0
] |
Title: Accelerator Codesign as Non-Linear Optimization,
Abstract: We propose an optimization approach for determining both hardware and
software parameters for the efficient implementation of a (family of)
applications called dense stencil computations on programmable GPGPUs. We first
introduce a simple, analytical model for the silicon area usage of accelerator
architectures and a workload characterization of stencil computations. We
combine this characterization with a parametric execution time model and
formulate a mathematical optimization problem. That problem seeks to maximize a
common objective function of 'all the hardware and software parameters'. The
solution to this problem therefore "solves" the codesign problem:
simultaneously choosing software-hardware parameters to optimize total
performance.
We validate this approach by proposing architectural variants of the NVIDIA
Maxwell GTX-980 (respectively, Titan X) specifically tuned to a predetermined
workload of four common 2D stencils (Heat, Jacobi, Laplacian, and Gradient) and
two 3D ones (Heat and Laplacian). Our model predicts that performance would
potentially improve by 28% (respectively, 33%) with simple tweaks to the
hardware parameters such as adapting coarse and fine-grained parallelism by
changing the number of streaming multiprocessors and the number of compute
cores each contains. We propose a set of Pareto-optimal design points to
exploit the trade-off between performance and silicon area and show that by
additionally eliminating GPU caches, we can get a further 2-fold improvement. | [
1,
0,
0,
0,
0,
0
] |
Title: Riddim: A Rhythm Analysis and Decomposition Tool Based On Independent Subspace Analysis,
Abstract: The goal of this thesis was to implement a tool that, given a digital audio
input, can extract and represent rhythm and musical time. The purpose of the
tool is to help develop better models of rhythm for real-time computer based
performance and composition. This analysis tool, Riddim, uses Independent
Subspace Analysis (ISA) and a robust onset detection scheme to separate and
detect salient rhythmic and timing information from different sonic sources
within the input. This information is then represented in a format that can be
used by a variety of algorithms that interpret timing information to infer
rhythmic and musical structure. A secondary objective of this work is a "proof of concept" of a non-real-time rhythm analysis system based on ISA. This is a
necessary step since ultimately it is desirable to incorporate this
functionality in a real-time plug-in for live performance and improvisation. | [
1,
0,
0,
0,
0,
0
] |
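The Riddim entry above separates sources with Independent Subspace Analysis before onset detection. Below is a crude sketch of the ISA idea as ICA applied to a magnitude spectrogram; this is a simplification under our assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import FastICA

def isa_component_activations(audio, fs, n_components=4):
    """Run ICA over spectrogram frames so each independent component
    groups energy from one sonic source; onsets can then be detected
    per component activation curve."""
    _, _, Z = stft(audio, fs=fs, nperseg=1024)
    S = np.abs(Z)                                   # (freq, time) magnitudes
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(S.T)                   # (time, components)

activations = isa_component_activations(np.random.randn(44100), fs=44100)
```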
Title: Cholesterol modulates acetylcholine receptor diffusion by tuning confinement sojourns and nanocluster stability,
Abstract: Translational motion of neurotransmitter receptors is key for determining
receptor number at the synapse and hence, synaptic efficacy. We combine
live-cell STORM superresolution microscopy of nicotinic acetylcholine receptor
(nAChR) with single-particle tracking, mean-squared displacement (MSD), turning
angle, ergodicity, and clustering analyses to characterize the lateral motion
of individual molecules and their collective behaviour. nAChR diffusion is
highly heterogeneous: subdiffusive, Brownian and, less frequently,
superdiffusive. At the single-track level, free walks are transiently
interrupted by ms-long confinement sojourns occurring in nanodomains of ~36 nm
radius. Cholesterol modulates the time and the area spent in confinement.
Turning angle analysis reveals anticorrelated steps with time-lag dependence,
in good agreement with the permeable fence model. At the ensemble level,
nanocluster assembly occurs in second-long bursts separated by periods of
cluster disassembly. Thus, millisecond-long confinement sojourns and
second-long reversible nanoclustering with similar cholesterol sensitivities
affect all trajectories; the proportion of the two regimes determines the
resulting macroscopic motional mode and breadth of heterogeneity in the
ensemble population. | [
0,
1,
0,
0,
0,
0
] |
Title: Incidence systems on Cartesian powers of algebraic curves,
Abstract: We show that a reduct of the Zariski structure of an algebraic curve which is
not locally modular interprets a field, answering a question of Zilber's. | [
0,
0,
1,
0,
0,
0
] |
Title: Automated versus do-it-yourself methods for causal inference: Lessons learned from a data analysis competition,
Abstract: Statisticians have made great progress in creating methods that reduce our
reliance on parametric assumptions. However this explosion in research has
resulted in a breadth of inferential strategies that both create opportunities
for more reliable inference as well as complicate the choices that an applied
researcher has to make and defend. Relatedly, researchers advocating for new
methods typically compare their method to at best 2 or 3 other causal inference
strategies and test using simulations that may or may not be designed to
equally tease out flaws in all the competing methods. The causal inference data
analysis challenge, "Is Your SATT Where It's At?", launched as part of the 2016
Atlantic Causal Inference Conference, sought to make progress with respect to
both of these issues. The researchers creating the data testing grounds were
distinct from the researchers submitting methods whose efficacy would be
evaluated. Results from 30 competitors across the two versions of the
competition (black box algorithms and do-it-yourself analyses) are presented
along with post-hoc analyses that reveal information about the characteristics
of causal inference strategies and settings that affect performance. The most
consistent conclusion was that methods that flexibly model the response surface
perform better overall than methods that fail to do so. Finally, new methods are
proposed that combine features of several of the top-performing submitted
methods. | [
0,
0,
0,
1,
0,
0
] |
Title: Certified Computation from Unreliable Datasets,
Abstract: A wide range of learning tasks require human input in labeling massive data.
The collected data, though, are usually of low quality and contain inaccuracies and
errors. As a result, modern science and business face the problem of learning
from unreliable data sets.
In this work, we provide a generic approach that is based on
\textit{verification} of only a few records of the data set to guarantee high-quality learning outcomes for various optimization objectives. Our method identifies small sets of critical records and verifies their validity. We show
that many problems only need $\text{poly}(1/\varepsilon)$ verifications, to
ensure that the output of the computation is at most a factor of $(1 \pm
\varepsilon)$ away from the truth. For any given instance, we provide an
\textit{instance optimal} solution that verifies the minimum possible number of
records to approximately certify correctness. Then using this instance optimal
formulation of the problem we prove our main result: "every function that
satisfies some Lipschitz continuity condition can be certified with a small
number of verifications". We show that the required Lipschitz continuity
condition is satisfied even by some NP-complete problems, which illustrates the
generality and importance of this theorem.
In case this certification step fails, an invalid record will be identified.
Removing these records and repeating until success, guarantees that the result
will be accurate and will depend only on the verified records. Surprisingly, as
we show, for several computation tasks more efficient methods are possible.
These methods always guarantee that the produced result is not affected by the
invalid records, since any invalid record that affects the output will be
detected and verified. | [
1,
0,
0,
0,
0,
0
] |
Title: A sparse grid approach to balance sheet risk measurement,
Abstract: In this work, we present a numerical method based on a sparse grid
approximation to compute the loss distribution of the balance sheet of a
financial or an insurance company. We first describe, in a stylised way, the
assets and liabilities dynamics that are used for the numerical estimation of
the balance sheet distribution. For the pricing and hedging model, we chose a
classical Black & Scholes model with a stochastic interest rate following a
Hull & White model. The risk management model describing the evolution of the
parameters of the pricing and hedging model is a Gaussian model. The new
numerical method is compared with the traditional nested simulation approach.
We review the convergence of both methods to estimate the risk indicators under
consideration. Finally, we provide numerical results showing that the sparse
grid approach is extremely competitive for models with moderate dimension. | [
0,
0,
0,
0,
0,
1
] |
Title: Towards Metamerism via Foveated Style Transfer,
Abstract: The problem of $\textit{visual metamerism}$ is defined as finding a family of
perceptually indistinguishable, yet physically different images. In this paper,
we propose our NeuroFovea metamer model, a foveated generative model that is
based on a mixture of peripheral representations and style transfer
forward-pass algorithms. Our gradient-descent free model is parametrized by a
foveated VGG19 encoder-decoder which allows us to encode images in high
dimensional space and interpolate between the content and texture information
with adaptive instance normalization anywhere in the visual field. Our
contributions include: 1) A framework for computing metamers that resembles a
noisy communication system via a foveated feed-forward encoder-decoder network
-- We observe that metamerism arises as a byproduct of noisy perturbations that
partially lie in the perceptual null space; 2) A perceptual optimization scheme
as a solution to the hyperparametric nature of our metamer model that requires
tuning of the image-texture tradeoff coefficients everywhere in the visual
field which are a consequence of internal noise; 3) An ABX psychophysical
evaluation of our metamers where we also find that the rate of growth of the
receptive fields in our model match V1 for reference metamers and V2 between
synthesized samples. Our model also renders metamers in roughly a second,
presenting a $\times1000$ speed-up compared to the previous work, which allows
for tractable data-driven metamer experiments. | [
1,
0,
0,
0,
0,
0
] |
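The metamer entry above interpolates content and texture with adaptive instance normalization. A minimal numpy sketch of the AdaIN primitive and the blend follows; the spatially varying coefficient alpha is described in the abstract, but the exact blend written here is our assumption.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps:
    re-standardize content channels to the style's channel-wise
    mean and standard deviation."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_sd = content.std(axis=(1, 2), keepdims=True) + eps
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_sd = style.std(axis=(1, 2), keepdims=True) + eps
    return s_sd * (content - c_mu) / c_sd + s_mu

# Interpolate between content and texture with coefficient alpha in [0, 1]:
c, s = np.random.rand(64, 32, 32), np.random.rand(64, 32, 32)
alpha = 0.7
out = alpha * adain(c, s) + (1.0 - alpha) * c
```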
Title: Modeling Sheep pox Disease from the 1994-1998 Epidemic in Evros Prefecture, Greece,
Abstract: Sheep pox is a highly transmissible disease which can cause serious loss of
livestock and can therefore have major economic impact. We present data from
sheep pox epidemics which occurred between 1994 and 1998. The data include
weekly records of infected farms as well as a number of covariates. We
implement Bayesian stochastic regression models which, in addition to various
explanatory variables like seasonal and environmental/meteorological factors,
also contain serial correlation structure based on variants of the
Ornstein-Uhlenbeck process. We take a predictive view in model selection by
utilizing deviance-based measures. The results indicate that seasonality and
the number of infected farms are important predictors for sheep pox incidence. | [
0,
0,
0,
1,
0,
0
] |
Title: Minimal hard surface-unlink and classical unlink diagrams,
Abstract: We describe a method for generating minimal hard prime surface-link diagrams.
We extend the known examples of minimal hard prime classical unknot and unlink
diagrams up to three components and generate figures of all minimal hard prime
surface-unknot and surface-unlink diagrams with prime base surface components
up to ten crossings. | [
0,
0,
1,
0,
0,
0
] |
Title: Fading of collective attention shapes the evolution of linguistic variants,
Abstract: Language change involves the competition between alternative linguistic forms
(1). The spontaneous evolution of these forms typically results in monotonic
growths or decays (2, 3) like in winner-take-all attractor behaviors. In the
case of the Spanish past subjunctive, the spontaneous evolution of its two
competing forms (ended in -ra and -se) was perturbed by the appearance of the
Royal Spanish Academy in 1713, which enforced the spelling of both forms as
perfectly interchangeable variants (4), at a moment in which the -ra form was
dominant (5). Time series extracted from a massive corpus of books (6) reveal
that this regulation in fact produced a transient renewed interest for the old
form -se which, once faded, left the -ra again as the dominant form up to the
present day. We show that time series are successfully explained by a
two-dimensional linear model that integrates an imitative and a novelty
component. The model reveals that the temporal scale over which collective
attention fades is in inverse proportion to the verb frequency. The integration
of the two basic mechanisms of imitation and attention to novelty allows us to
understand diverse competing objects, with lifetimes that range from hours for
memes and news (7, 8) to decades for verbs, suggesting the existence of a
general mechanism underlying cultural evolution. | [
0,
0,
0,
0,
1,
0
] |
Title: Continuous-Time User Modeling in the Presence of Badges: A Probabilistic Approach,
Abstract: User modeling plays an important role in delivering customized web services
to the users and improving their engagement. However, most user models in the
literature do not explicitly consider the temporal behavior of users. More
recently, continuous-time user modeling has gained considerable attention and
many user behavior models have been proposed based on temporal point processes.
However, typical point process based models often considered the impact of peer
influence and content on user participation and neglected other factors. Gamification elements are among those neglected factors, even though they have a strong impact on user participation in online services. In this paper,
we propose interdependent multi-dimensional temporal point processes that
capture the impact of badges on user participation besides the peer influence
and content factors. We extend the proposed processes to model user actions
over the community based question and answering websites, and propose an
inference algorithm based on Variational-EM that can efficiently learn the
model parameters. Extensive experiments on both synthetic and real data
gathered from Stack Overflow show that our inference algorithm learns the
parameters efficiently and the proposed method can better predict the user
behavior compared to the alternatives. | [
1,
0,
0,
0,
0,
0
] |
Title: Testing for observation-dependent regime switching in mixture autoregressive models,
Abstract: Testing for regime switching when the regime switching probabilities are
specified either as constants (`mixture models') or are governed by a
finite-state Markov chain (`Markov switching models') are long-standing
problems that have also attracted recent interest. This paper considers testing
for regime switching when the regime switching probabilities are time-varying
and depend on observed data (`observation-dependent regime switching').
Specifically, we consider the likelihood ratio test for observation-dependent
regime switching in mixture autoregressive models. The testing problem is
highly nonstandard, involving unidentified nuisance parameters under the null,
parameters on the boundary, singular information matrices, and higher-order
approximations of the log-likelihood. We derive the asymptotic null
distribution of the likelihood ratio test statistic in a general mixture
autoregressive setting using high-level conditions that allow for various forms
of dependence of the regime switching probabilities on past observations, and
we illustrate the theory using two particular mixture autoregressive models.
The likelihood ratio test has a nonstandard asymptotic distribution that can
easily be simulated, and Monte Carlo studies show the test to have satisfactory
finite sample size and power properties. | [
0,
0,
1,
1,
0,
0
] |
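A generic Monte Carlo recipe for the "easily simulated" null distribution mentioned above; the toy chi-bar-square-style mixture below is a placeholder, not the paper's actual limit law:

```python
import numpy as np

def lr_statistic(loglik_null, loglik_alt):
    """Likelihood ratio statistic: twice the gap in maximized log-likelihoods."""
    return 2.0 * (loglik_alt - loglik_null)

def simulated_pvalue(lr_obs, simulate_null_lr, n_sim=5000, seed=0):
    """Monte Carlo p-value: draw from the (nonstandard) null distribution of
    the LR statistic and report the exceedance frequency."""
    rng = np.random.default_rng(seed)
    draws = np.array([simulate_null_lr(rng) for _ in range(n_sim)])
    return (draws >= lr_obs).mean()

# placeholder null: point mass at zero mixed with a chi-square(1)
toy_null = lambda rng: 0.0 if rng.random() < 0.5 else rng.chisquare(1)
print(simulated_pvalue(lr_statistic(-100.0, -98.4), toy_null))
```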
Title: The finiteness dimension of modules and relative Cohen-Macaulayness,
Abstract: Let $R$ be a commutative Noetherian ring, $\mathfrak a$ and $\mathfrak b$
ideals of $R$. In this paper, we study the finiteness dimension $f_{\mathfrak
a}(M)$ of $M$ relative to $\mathfrak a$ and the $\mathfrak b$-minimum
$\mathfrak a$-adjusted depth $\lambda_{\mathfrak a}^{\mathfrak b}(M)$ of $M$,
where the underlying module $M$ is relative Cohen-Macaulay w.r.t. $\mathfrak a$.
Some applications of such modules are given. | [
0,
0,
1,
0,
0,
0
] |
Title: On Ladder Logic Bombs in Industrial Control Systems,
Abstract: In industrial control systems, devices such as Programmable Logic Controllers
(PLCs) are commonly used to directly interact with sensors and actuators, and
perform local automatic control. PLCs run software on two different layers: a)
firmware (i.e. the OS) and b) control logic (processing sensor readings to
determine control actions). In this work, we discuss ladder logic bombs, i.e.
malware written in ladder logic (or one of the other IEC 61131-3-compatible
languages). Such malware would be inserted by an attacker into existing control
logic on a PLC, and either persistently change the behavior, or wait for
specific trigger signals to activate malicious behavior. For example, the LLB
could replace legitimate sensor readings with manipulated values. We see the
concept of LLBs as a generalization of attacks such as the Stuxnet attack. We
introduce LLBs on an abstract level, and then demonstrate several designs based
on real PLC devices in our lab. In particular, we also focus on stealthy LLBs,
i.e. LLBs that are hard to detect by human operators manually validating the
program running in PLCs. In addition to introducing vulnerabilities on the
logic layer, we also discuss countermeasures and we propose two detection
techniques. | [
1,
0,
0,
0,
0,
0
] |
Title: Stochastic variance reduced multiplicative update for nonnegative matrix factorization,
Abstract: Nonnegative matrix factorization (NMF), a dimensionality reduction and factor
analysis method, is a special case in which factor matrices have low-rank
nonnegative constraints. Considering the stochastic learning in NMF, we
specifically address the multiplicative update (MU) rule, which is the most
popular but which has a slow convergence property. This paper introduces into
the stochastic MU rule a variance-reduction technique for stochastic gradients.
Numerical comparisons suggest that our proposed algorithms robustly outperform
state-of-the-art algorithms across different synthetic and real-world datasets. | [
1,
0,
0,
1,
0,
0
] |
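For context, a sketch of the classical multiplicative-update rule that the paper accelerates; the variance-reduced stochastic variant replaces the full-data matrix products below with SVRG-style estimates, a detail left to the paper:

```python
import numpy as np

def nmf_mu(V, rank=5, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, then W
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
W, H = nmf_mu(V)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative residual
```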
Title: Bi-National Delay Pattern Analysis For Commercial and Passenger Vehicles at Niagara Frontier Border,
Abstract: Border crossing delays between New York State and Southern Ontario cause
problems like enormous economic loss and massive environmental pollutions. In
this area, there are three border-crossing ports: Peace Bridge (PB), Rainbow
Bridge (RB) and Lewiston-Queenston Bridge (LQ) at Niagara Frontier border. The
goals of this paper are to figure out whether the distributions of bi-national
wait times for commercial and passenger vehicles are evenly distributed among
the three ports and uncover the hidden significant influential factors that
result in the possible insufficient utilization. The historical border wait
time data from 7:00 to 21:00 between 08/22/2016 and 06/20/2017 are archived, as
well as the corresponding temporal and weather data. For each vehicle type
towards each direction, a Decision Tree is built to identify the various border
delay patterns over the three bridges. We find that for the passenger vehicles
to the USA, the convenient connections between Canadian freeways and US
I-190 via LQ and PB may make these two bridges more congested than RB,
especially on Canadian holidays. For passenger vehicles in the opposite
direction, RB is much more congested than LQ and PB in some cases, and
summer visitors to Niagara Falls on the US side may be a reason. For
commercial trucks to the USA, the delay patterns show that PB is always more
congested than LQ. Hour interval and weekend are the most significant factors
appearing in all four Decision Trees. These Decision Trees can help the
authorities to make specific routing suggestions when the corresponding
conditions are satisfied. | [
1,
0,
0,
0,
0,
0
] |
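A toy illustration of fitting a Decision Tree to delay patterns of the kind described above, using scikit-learn; the features and labels are synthetic stand-ins for the archived border data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(7, 22, 500),   # hour interval
                     rng.integers(0, 2, 500),    # weekend flag
                     rng.integers(0, 2, 500)])   # Canadian-holiday flag
# synthetic rule standing in for observed congestion at one port/direction
y = ((X[:, 0] > 15) & ((X[:, 1] == 1) | (X[:, 2] == 1))).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["hour", "weekend", "ca_holiday"]))
```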
Title: A Scalable and Adaptive Method for Finding Semantically Equivalent Cue Words of Uncertainty,
Abstract: Scientific knowledge is constantly subject to a variety of changes due to new
discoveries, alternative interpretations, and fresh perspectives. Understanding
uncertainties associated with various stages of scientific inquiries is an
integral part of scientists' domain expertise and it serves as the core of
their meta-knowledge of science. Despite the growing interest in areas such as
computational linguistics, systematically characterizing and tracking the
epistemic status of scientific claims and their evolution in scientific
disciplines remains a challenge. We present a unifying framework for the study
of uncertainties explicitly and implicitly conveyed in scientific publications.
The framework aims to accommodate a wide range of uncertainty types, from
speculations to inconsistencies and controversies. We introduce a scalable and
adaptive method to recognize semantically equivalent cues of uncertainty across
different fields of research and accommodate individual analysts' unique
perspectives. We demonstrate how the new method can be used to expand a small
seed list of uncertainty cue words and how the validity of the expanded
candidate cue words is verified. We visualize the mixture of the original and
expanded uncertainty cue words to reveal the diversity of expressions of
uncertainty. These cue words offer a novel resource for the study of
uncertainty in scientific assertions. | [
1,
0,
0,
0,
0,
0
] |
Title: An optimization method to simultaneously estimate electrophysiology and connectivity in a model central pattern generator,
Abstract: Central pattern generators (CPGs) appear to have evolved multiple times
throughout the animal kingdom, indicating that their design imparts a
significant evolutionary advantage. Insight into how this design is achieved is
hindered by the difficulty inherent in examining relationships among
electrophysiological properties of the constituent cells of a CPG and their
functional connectivity. That is: experimentally it is challenging to estimate
the values of more than two or three of these properties simultaneously. We
employ a method of statistical data assimilation (D.A.) to estimate the
synaptic weights, synaptic reversal potentials, and maximum conductances of ion
channels of the constituent neurons in a multi-modal network model. We then use
these estimates to predict the functional mode of activity that the network is
expressing. The measurements used are the membrane voltage time series of all
neurons in the circuit. We find that these measurements provide sufficient
information to yield accurate predictions of the network's associated
electrical activity. This experiment can be applied directly in a real laboratory
using intracellular recordings from a known biological CPG whose structural
mapping is known, and which can be completely isolated from the animal. The
simulated results in this paper suggest that D.A. might provide a tool for
simultaneously estimating tens to hundreds of CPG properties, thereby offering
the opportunity to seek possible systematic relationships among these
properties and the emergent electrical activity. | [
0,
1,
0,
0,
0,
0
] |
Title: Overdensities of SMGs around WISE-selected, ultra-luminous, high-redshift AGN,
Abstract: We investigate extremely luminous dusty galaxies in the environments around
WISE-selected hot dust obscured galaxies (Hot DOGs) and WISE/radio-selected
active galactic nuclei (AGNs) at average redshifts of z = 2.7 and z = 1.7,
respectively. Previous observations have detected overdensities of companion
submillimetre-selected sources around 10 Hot DOGs and 30 WISE/radio AGNs, with
overdensities of ~ 2 - 3 and ~ 5 - 6, respectively. We find the space
densities in both samples to be overdense compared to those of normal
star-forming galaxies and submillimetre galaxies (SMGs) in the SCUBA-2
Cosmology Legacy Survey (S2CLS). Both samples of companion sources have
mid-IR colours and mid-IR-to-submm ratios consistent with those of SMGs. The
brighter population around WISE/radio AGNs could be responsible for the higher
overdensity reported. We also find that the star formation rate densities
(SFRDs) are higher than in the field,
but consistent with clusters of dusty galaxies. WISE-selected AGNs appear to be
good signposts for protoclusters at high redshift on arcmin scales. The results
reported here provide an upper limit to the strength of angular clustering
using the two-point correlation function. Monte Carlo simulations show no
angular correlation, which could indicate protoclusters on scales larger than
the SCUBA-2 1.5 arcmin scale maps. | [
0,
1,
0,
0,
0,
0
] |
Title: Constrained Best Linear Unbiased Estimation,
Abstract: The least squares (LS) estimator and the best linear unbiased estimator
(BLUE) are two well-studied approaches for the estimation of a deterministic
but unknown parameter vector. In many applications it is known that the
parameter vector fulfills some constraints, e.g., linear constraints. For such
situations the constrained LS estimator, which is a simple extension of the LS
estimator, can be employed. In this paper we derive the constrained version of
the BLUE. It will turn out that the incorporation of the linear constraints
into the derivation of the BLUE is not straight forward as for the constrained
LS estimator, but the final expression for the constrained BLUE is closely
related to that of the constrained LS estimator. | [
0,
0,
1,
1,
0,
0
] |
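For reference, a sketch of the closed-form constrained least squares estimator that the abstract contrasts with; the constrained BLUE itself is derived in the paper and not reproduced here:

```python
import numpy as np

def constrained_ls(H, y, A, b):
    """Minimize ||y - H theta||^2 subject to A theta = b:
    theta_cls = theta_ls - (H'H)^-1 A' (A (H'H)^-1 A')^-1 (A theta_ls - b)."""
    HtH_inv = np.linalg.inv(H.T @ H)
    theta_ls = HtH_inv @ H.T @ y
    K = HtH_inv @ A.T @ np.linalg.inv(A @ HtH_inv @ A.T)
    return theta_ls - K @ (A @ theta_ls - b)

rng = np.random.default_rng(0)
H = rng.normal(size=(30, 4))
theta = np.array([1.0, -2.0, 0.5, 0.5])
y = H @ theta + 0.1 * rng.normal(size=30)
A = np.array([[0.0, 0.0, 1.0, -1.0]])   # constraint: theta_3 = theta_4
b = np.array([0.0])
print(constrained_ls(H, y, A, b))
```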
Title: Compact Tensor Pooling for Visual Question Answering,
Abstract: Performing high level cognitive tasks requires the integration of feature
maps with drastically different structure. In Visual Question Answering (VQA)
image descriptors have spatial structures, while lexical inputs inherently
follow a temporal sequence. The recently proposed Multimodal Compact Bilinear
pooling (MCB) forms the outer products, via count-sketch approximation, of the
visual and textual representation at each spatial location. While this
procedure preserves spatial information locally, outer-products are taken
independently for each fiber of the activation tensor, and therefore do not
include spatial context. In this work, we introduce multi-dimensional sketch
({MD-sketch}), a novel extension of count-sketch to tensors. Using this new
formulation, we propose Multimodal Compact Tensor Pooling (MCT) to fully
exploit the global spatial context during bilinear pooling operations.
Contrarily to MCB, our approach preserves spatial context by directly
convolving the MD-sketch from the visual tensor features with the text vector
feature using higher order FFT. Furthermore we apply MCT incrementally at each
step of the question embedding and accumulate the multi-modal vectors with a
second LSTM layer before the final answer is chosen. | [
1,
0,
0,
0,
0,
0
] |
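A minimal sketch of the classical (vector) count sketch that MD-sketch generalizes to tensors; the hash functions are simulated with a random number generator for brevity:

```python
import numpy as np

def count_sketch(x, d, seed=0):
    """Hash each coordinate to one of d buckets with a random sign."""
    rng = np.random.default_rng(seed)
    h = rng.integers(0, d, size=x.shape[0])        # bucket hash
    s = rng.choice([-1.0, 1.0], size=x.shape[0])   # sign hash
    sketch = np.zeros(d)
    np.add.at(sketch, h, s * x)
    return sketch, h, s

x = np.random.default_rng(1).normal(size=1000)
sk, h, s = count_sketch(x, d=64)
# s_i * sketch[h_i] is an unbiased (noisy) estimate of x_i
print(round(x[0], 3), round(s[0] * sk[h[0]], 3))
```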
Title: Beam-induced Back-streaming Electron Suppression Analysis for Accelerator Type Neutron Generators,
Abstract: A facility based on a next-generation, high-flux D-D neutron generator has
been commissioned and it is now operational at the University of California,
Berkeley. The current generator design produces near monoenergetic 2.45 MeV
neutrons at outputs of 10^8 n/s. Calculations show that future conditioning
at higher currents and voltages will allow for a production rate over 10^10
n/s. A significant problem encountered was beam-induced electron
backstreaming, which needed to be resolved to achieve meaningful beam currents.
Two methods of suppressing secondary electrons resulting from the deuterium
beam striking the target were tested: the application of static electric and
magnetic fields. Computational simulations of both techniques were done using a
finite element analysis in COMSOL Multiphysics. Experimental tests verified
these simulation results. The most reliable suppression was achieved via the
implementation of an electrostatic shroud with a voltage offset of -800 V
relative to the target. | [
0,
1,
0,
0,
0,
0
] |
Title: Vertical stratification of forest canopy for segmentation of under-story trees within small-footprint airborne LiDAR point clouds,
Abstract: Airborne LiDAR point cloud representing a forest contains 3D data, from which
vertical stand structure even of understory layers can be derived. This paper
presents a tree segmentation approach for multi-story stands that stratifies
the point cloud to canopy layers and segments individual tree crowns within
each layer using a digital surface model based tree segmentation method. The
novelty of the approach is the stratification procedure that separates the
point cloud to an overstory and multiple understory tree canopy layers by
analyzing vertical distributions of LiDAR points within overlapping locales.
The procedure does not make a priori assumptions about the shape and size of
the tree crowns and can, independent of the tree segmentation method, be
utilized to vertically stratify tree crowns of forest canopies. We applied the
proposed approach to the University of Kentucky Robinson Forest - a natural
deciduous forest with complex and highly variable terrain and vegetation
structure. The segmentation results showed that using the stratification
procedure strongly improved detecting understory trees (from 46% to 68%) at the
cost of introducing a fair number of over-segmented understory trees (increased
from 1% to 16%), while barely affecting the overall segmentation quality of
overstory trees. Results of vertical stratification of the canopy showed that
the point density of understory canopy layers was suboptimal for performing a
reasonable tree segmentation, suggesting that acquiring denser LiDAR point
clouds would further improve the segmentation of understory trees. As shown
by inspecting correlations of the results with forest structure, the
segmentation approach is applicable to a variety of forest types. | [
1,
0,
0,
0,
0,
0
] |
Title: Temperature effect observed by the Nagoya muon telescope,
Abstract: The temperature coefficients for all the directions of the Nagoya muon
telescope were obtained. The zenith angular dependence of the temperature
coefficients was studied. | [
0,
1,
0,
0,
0,
0
] |
Title: Generalizing Hamiltonian Monte Carlo with Neural Networks,
Abstract: We present a general-purpose method to train Markov chain Monte Carlo
kernels, parameterized by deep neural networks, that converge and mix quickly
to their target distribution. Our method generalizes Hamiltonian Monte Carlo
and is trained to maximize expected squared jumped distance, a proxy for mixing
speed. We demonstrate large empirical gains on a collection of simple but
challenging distributions, for instance achieving a 106x improvement in
effective sample size in one case, and mixing when standard HMC makes no
measurable progress in a second. Finally, we show quantitative and qualitative
gains on a real-world task: latent-variable generative modeling. We release an
open source TensorFlow implementation of the algorithm. | [
1,
0,
0,
1,
0,
0
] |
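For context, a plain HMC transition with a hand-coded leapfrog integrator; the paper's method replaces these fixed updates with learned, neural-network-parameterized ones:

```python
import numpy as np

def hmc_step(x, logp, grad_logp, eps=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: sample momentum, leapfrog, Metropolis accept."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(x_new)          # half step in momentum
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new
        p_new += eps * grad_logp(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(x_new)
    log_accept = logp(x_new) - logp(x) - 0.5 * (p_new @ p_new - p @ p)
    return x_new if np.log(rng.random()) < log_accept else x

logp = lambda x: -0.5 * x @ x    # standard Gaussian target
grad = lambda x: -x
x = np.zeros(2)
for _ in range(100):
    x = hmc_step(x, logp, grad)
print(x)
```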
Title: Multiplicities of Character Values of Binary Sidel'nikov-Lempel-Cohn-Eastman Sequences,
Abstract: Binary Sidel'nikov-Lempel-Cohn-Eastman sequences (or SLCE sequences) over F 2
have even period and almost perfect autocorrelation. However, the evaluation of
the linear complexity of these sequences is a difficult problem. In this paper, we
continue the study of [1]. We first express the multiple roots of character
polynomials of SLCE sequences in terms of certain kinds of Jacobi sums. Then by making
use of Gauss sums and Jacobi sums in the "semiprimitive" case, we derive new
divisibility results for SLCE sequences. | [
1,
0,
1,
0,
0,
0
] |
Title: Complex and Holographic Embeddings of Knowledge Graphs: A Comparison,
Abstract: Embeddings of knowledge graphs have received significant attention due to
their excellent performance for tasks like link prediction and entity
resolution. In this short paper, we provide a comparison of two
state-of-the-art knowledge graph embeddings for which their equivalence has
recently been established, i.e., ComplEx and HolE [Nickel, Rosasco, and Poggio,
2016; Trouillon et al., 2016; Hayashi and Shimbo, 2017]. First, we briefly
review both models and discuss how their scoring functions are equivalent. We
then analyze the discrepancy of results reported in the original articles, and
show experimentally that they are likely due to the use of different loss
functions. In further experiments, we evaluate the ability of both models to
embed symmetric and antisymmetric patterns. Finally, we discuss advantages and
disadvantages of both models and under which conditions one would be preferable
to the other. | [
1,
0,
0,
1,
0,
0
] |
Title: Deep Image Prior,
Abstract: Deep convolutional networks have become a popular tool for image generation
and restoration. Generally, their excellent performance is imputed to their
ability to learn realistic image priors from a large number of example images.
In this paper, we show that, on the contrary, the structure of a generator
network is sufficient to capture a great deal of low-level image statistics
prior to any learning. In order to do so, we show that a randomly-initialized
neural network can be used as a handcrafted prior with excellent results in
standard inverse problems such as denoising, super-resolution, and inpainting.
Furthermore, the same prior can be used to invert deep neural representations
to diagnose them, and to restore images based on flash-no flash input pairs.
Apart from its diverse applications, our approach highlights the inductive
bias captured by standard generator network architectures. It also bridges the
gap between two very popular families of image restoration methods:
learning-based methods using deep convolutional networks and learning-free
methods based on handcrafted image priors such as self-similarity. Code and
supplementary material are available at
this https URL . | [
0,
0,
0,
1,
0,
0
] |
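A minimal sketch of the deep-image-prior idea under toy assumptions: a small convolutional network (not the paper's architecture) mapping fixed random noise to a synthetic "corrupted" image is fitted with early stopping, so the network structure itself acts as the prior:

```python
import torch
import torch.nn.functional as F

net = torch.nn.Sequential(                       # toy generator, purely
    torch.nn.Conv2d(32, 64, 3, padding=1),       # illustrative
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1))
z = torch.randn(1, 32, 64, 64)                   # fixed random input
noisy = torch.rand(1, 3, 64, 64)                 # stand-in corrupted image
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):                          # early stopping is essential
    opt.zero_grad()
    loss = F.mse_loss(net(z), noisy)
    loss.backward()
    opt.step()
restored = net(z).detach()
print(restored.shape)
```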
Title: Efficient SMC$^2$ schemes for stochastic kinetic models,
Abstract: Fitting stochastic kinetic models represented by Markov jump processes within
the Bayesian paradigm is complicated by the intractability of the observed data
likelihood. There has therefore been considerable attention given to the design
of pseudo-marginal Markov chain Monte Carlo algorithms for such models.
However, these methods are typically computationally intensive, often require
careful tuning and must be restarted from scratch upon receipt of new
observations. Sequential Monte Carlo (SMC) methods on the other hand aim to
efficiently reuse posterior samples at each time point. Despite their appeal,
applying SMC schemes in scenarios with both dynamic states and static
parameters is made difficult by the problem of particle degeneracy. A
principled approach for overcoming this problem is to move each parameter
particle through a Metropolis-Hastings kernel that leaves the target invariant.
This rejuvenation step is key to a recently proposed SMC$^2$ algorithm, which
can be seen as the pseudo-marginal analogue of an idealised scheme known as
iterated batch importance sampling. Computing the parameter weights in SMC$^2$
requires running a particle filter over dynamic states to unbiasedly estimate
the intractable observed data likelihood contributions at each time point. In
this paper, we propose to use an auxiliary particle filter inside the SMC$^2$
scheme. Our method uses two recently proposed constructs for sampling
conditioned jump processes and we find that the resulting inference schemes
typically require fewer state particles than when using a simple bootstrap
filter. Using two applications, we compare the performance of the proposed
approach with various competing methods, including two global MCMC schemes. | [
0,
0,
0,
1,
0,
0
] |
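A sketch of the bootstrap particle filter whose unbiased marginal-likelihood estimates SMC^2 consumes for each parameter particle; the auxiliary particle filter advocated in the paper modifies the proposal and weighting steps:

```python
import numpy as np

def bootstrap_filter(y, propagate, loglik, n_particles=500, seed=0):
    """Returns an unbiased estimate of the marginal log-likelihood log p(y)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    ll = 0.0
    for obs in y:
        x = propagate(x, rng)                    # sample x_t | x_{t-1}
        logw = loglik(obs, x)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())               # log-mean-exp of weights
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx]                               # multinomial resampling
    return ll

# toy linear-Gaussian state space as a stand-in for a jump process
prop = lambda x, rng: 0.9 * x + rng.normal(size=x.shape)
ll_fn = lambda obs, x: -0.5 * (obs - x) ** 2
y = np.random.default_rng(1).normal(size=50)
print(bootstrap_filter(y, prop, ll_fn))
```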
Title: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks,
Abstract: Image-to-image translation is a class of vision and graphics problems where
the goal is to learn the mapping between an input image and an output image
using a training set of aligned image pairs. However, for many tasks, paired
training data will not be available. We present an approach for learning to
translate an image from a source domain $X$ to a target domain $Y$ in the
absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$
such that the distribution of images from $G(X)$ is indistinguishable from the
distribution $Y$ using an adversarial loss. Because this mapping is highly
under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$
and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice
versa). Qualitative results are presented on several tasks where paired
training data does not exist, including collection style transfer, object
transfiguration, season transfer, photo enhancement, etc. Quantitative
comparisons against several prior methods demonstrate the superiority of our
approach. | [
1,
0,
0,
0,
0,
0
] |
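A minimal sketch of the cycle consistency loss described above, written with PyTorch; the 1x1-convolution "generators" are placeholders for the paper's actual networks:

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, real_x, real_y, lam=10.0):
    """lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1), added to the two
    adversarial losses during training."""
    loss_x = F.l1_loss(F_inv(G(real_x)), real_x)
    loss_y = F.l1_loss(G(F_inv(real_y)), real_y)
    return lam * (loss_x + loss_y)

G = torch.nn.Conv2d(3, 3, 1)       # placeholder mapping X -> Y
F_inv = torch.nn.Conv2d(3, 3, 1)   # placeholder inverse mapping Y -> X
x = torch.randn(4, 3, 64, 64)
y = torch.randn(4, 3, 64, 64)
print(cycle_consistency_loss(G, F_inv, x, y).item())
```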
Title: Consistent structure estimation of exponential-family random graph models with block structure,
Abstract: We consider the challenging problem of statistical inference for
exponential-family random graph models based on a single observation of a
random graph with complex dependence. To facilitate statistical inference, we
consider random graphs with additional structure in the form of block
structure. We have shown elsewhere that when the block structure is known, it
facilitates consistency results for $M$-estimators of canonical and curved
exponential-family random graph models with complex dependence, such as
transitivity. In practice, the block structure is known in some applications
(e.g., multilevel networks), but is unknown in others. When the block structure
is unknown, the first and foremost question is whether it can be recovered with
high probability based on a single observation of a random graph with complex
dependence. The main consistency results of the paper show that it is possible
to do so provided the number of blocks grows as fast as in high-dimensional
stochastic block models. These results confirm that exponential-family random
graph models with block structure constitute a promising direction of
statistical network analysis. | [
0,
0,
1,
1,
0,
0
] |
Title: Taskonomy: Disentangling Task Transfer Learning,
Abstract: Do visual tasks have a relationship, or are they unrelated? For instance,
could having surface normals simplify estimating the depth of an image?
Intuition answers these questions positively, implying the existence of a structure
among visual tasks. Knowing this structure has notable values; it is the
concept underlying transfer learning and provides a principled way for
identifying redundancies across tasks, e.g., to seamlessly reuse supervision
among related tasks or solve many tasks in one system without piling up the
complexity.
We propose a fully computational approach for modeling the structure of the
space of visual tasks. This is done via finding (first and higher-order)
transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D,
and semantic tasks in a latent space. The product is a computational taxonomic
map for task transfer learning. We study the consequences of this structure,
e.g., nontrivial emergent relationships, and exploit them to reduce the demand
for labeled data. For example, we show that the total number of labeled
datapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3
(compared to training independently) while keeping the performance nearly the
same. We provide a set of tools for computing and probing this taxonomical
structure including a solver that users can employ to devise efficient
supervision policies for their use cases. | [
1,
0,
0,
0,
0,
0
] |
Title: The Energy Measure for the Euler and Navier-Stokes Equations,
Abstract: The potential failure of energy equality for a solution $u$ of the Euler or
Navier-Stokes equations can be quantified using a so-called `energy measure' $\mathcal{E}$:
the weak-$*$ limit of the measures $|u(t)|^2\,\mbox{d}x$ as $t$ approaches the
first possible blowup time. We show that membership of $u$ in certain (weak or
strong) $L^q L^p$ classes gives a uniform lower bound on the lower local
dimension of $\mathcal{E}$; more precisely, it implies uniform boundedness of a
certain upper $s$-density of $\mathcal{E}$. We also define and give lower
bounds on the `concentration dimension' associated to $\mathcal{E}$, which is
the Hausdorff dimension of the smallest set on which energy can concentrate.
Both the lower local dimension and the concentration dimension of $\mathcal{E}$
measure the departure from energy equality. As an application of our estimates,
we prove that any solution to the $3$-dimensional Navier-Stokes Equations which
is Type-I in time must satisfy the energy equality at the first blowup time. | [
0,
0,
1,
0,
0,
0
] |
Title: Inference of Spatio-Temporal Functions over Graphs via Multi-Kernel Kriged Kalman Filtering,
Abstract: Inference of space-time varying signals on graphs emerges naturally in a
plethora of network science related applications. A frequently encountered
challenge pertains to reconstructing such dynamic processes, given their values
over a subset of vertices and time instants. The present paper develops a
graph-aware kernel-based kriged Kalman filter that accounts for the
spatio-temporal variations, and offers efficient online reconstruction, even
for dynamically evolving network topologies. The kernel-based learning
framework bypasses the need for statistical information by capitalizing on the
smoothness that graph signals exhibit with respect to the underlying graph. To
address the challenge of selecting the appropriate kernel, the proposed filter
is combined with a multi-kernel selection module. Such a data-driven method
selects a kernel attuned to the signal dynamics on-the-fly within the linear
span of a pre-selected dictionary. The novel multi-kernel learning algorithm
exploits the eigenstructure of Laplacian kernel matrices to reduce
computational complexity. Numerical tests with synthetic and real data
demonstrate the superior reconstruction performance of the novel approach
relative to state-of-the-art alternatives. | [
1,
0,
0,
1,
0,
0
] |
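One plausible member of the kernel dictionary mentioned above is a Laplacian diffusion kernel; a small sketch follows, though the paper's actual dictionary and selection rule may differ:

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, sigma2=1.0):
    """K = expm(-sigma2 * L) for the combinatorial graph Laplacian L."""
    d = adjacency.sum(axis=1)
    L = np.diag(d) - adjacency
    return expm(-sigma2 * L)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(np.round(diffusion_kernel(A, sigma2=0.5), 3))
```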
Title: Z2-Thurston Norm and Complexity of 3-Manifolds, II,
Abstract: In this sequel to earlier papers by three of the authors, we obtain a new
bound on the complexity of a closed 3--manifold, as well as a characterisation
of manifolds realising our complexity bounds. As an application, we obtain the
first infinite families of minimal triangulations of Seifert fibred spaces
modelled on Thurston's geometry $\widetilde{\text{SL}_2(\mathbb{R})}.$ | [
0,
0,
1,
0,
0,
0
] |
Title: Discriminative k-shot learning using probabilistic models,
Abstract: This paper introduces a probabilistic framework for k-shot image
classification. The goal is to generalise from an initial large-scale
classification task to a separate task comprising new classes and small numbers
of examples. The new approach not only leverages the feature-based
representation learned by a neural network from the initial task
(representational transfer), but also information about the classes (concept
transfer). The concept information is encapsulated in a probabilistic model for
the final layer weights of the neural network which acts as a prior for
probabilistic k-shot learning. We show that even a simple probabilistic model
achieves state-of-the-art on a standard k-shot learning dataset by a large
margin. Moreover, it is able to accurately model uncertainty, leading to well
calibrated classifiers, and is easily extensible and flexible, unlike many
recent approaches to k-shot learning. | [
1,
0,
0,
1,
0,
0
] |
Title: Comparative analysis of criteria for filtering time series of word usage frequencies,
Abstract: This paper describes a method of nonlinear wavelet thresholding of time
series. The Ramachandran-Ranganathan runs test is used to assess the quality of
approximation. To minimize the objective function, it is proposed to use
genetic algorithms, a class of stochastic optimization methods. The suggested
method is tested both on model series and on word frequency series using the
Google Books Ngram data. It is shown that the filtering method based on the
runs criterion yields significantly better results than standard wavelet
thresholding. The method can be used when the quality of filtering, rather
than the speed of calculation, is of primary importance. | [
0,
0,
0,
1,
0,
0
] |
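For contrast, a sketch of the standard wavelet-thresholding baseline the paper compares against, using PyWavelets; the fixed threshold below is what the paper's genetic-algorithm search over the runs criterion replaces:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4, thr=0.5, mode="soft"):
    """Decompose, shrink the detail coefficients, reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode)
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 512)
noisy = np.sin(2 * np.pi * 5 * t) \
        + 0.3 * np.random.default_rng(0).normal(size=512)
print(np.round(wavelet_denoise(noisy)[:5], 3))
```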
Title: Extent of the $\mathbb{Z}_2$-spin liquid phase on the Kagomé-lattice,
Abstract: The $\mathbb{Z}_2$ topological phase in the quantum dimer model on the
Kagomé-lattice is a candidate for the description of the low-energy physics
of the anti-ferromagnetic Heisenberg model on the same lattice. We study the
extent of the topological phase by interpolating between the exactly solvable
parent Hamiltonian of the topological phase and an effective low-energy
description of the Heisenberg model in terms of a quantum-dimer Hamiltonian.
To this end, we perform a perturbative treatment of the low-energy excitations in
the topological phase including free and interacting quasi-particles. We find a
phase transition out of the topological phase far from the Heisenberg point.
The resulting phase is characterized by a spontaneously broken rotational
symmetry and a unit cell involving six sites. | [
0,
1,
0,
0,
0,
0
] |
Title: Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds,
Abstract: We propose an L-BFGS optimization algorithm on Riemannian manifolds using
minibatched stochastic variance reduction techniques for fast convergence with
constant step sizes, without resorting to linesearch methods designed to
satisfy Wolfe conditions. We provide a new convergence proof for strongly
convex functions without using curvature conditions on the manifold, as well as
a convergence discussion for nonconvex functions. We discuss a couple of ways
to obtain the correction pairs used to calculate the product of the gradient
with the inverse Hessian, and empirically demonstrate their use in synthetic
experiments on computation of Karcher means for symmetric positive definite
matrices and leading eigenvalues of large scale data matrices. We compare our
method to VR-PCA for the latter experiment, along with Riemannian SVRG for both
cases, and show strong convergence results for a range of datasets. | [
0,
0,
1,
1,
0,
0
] |
Title: Bridging trees for posterior inference on Ancestral Recombination Graphs,
Abstract: We present a new Markov chain Monte Carlo algorithm, implemented in the software
Arbores, for inferring the history of a sample of DNA sequences. Our principal
innovation is a bridging procedure, previously applied only for simple
stochastic processes, in which the local computations within a bridge can
proceed independently of the rest of the DNA sequence, facilitating large-scale
parallelisation. | [
0,
0,
0,
0,
1,
0
] |
Title: Strong magnetic frustration in Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$: a distorted kagome antiferromagnet,
Abstract: We present the crystal structure and magnetic properties of
Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$, a stoichiometric frustrated quantum spin
system with slightly distorted kagome layers. Single crystals of
Y$_{3}$Cu$_{9}$(OH)$_{19}$Cl$_{8}$ were grown under hydrothermal conditions.
The structure was determined from single crystal X-ray diffraction and
confirmed by neutron powder diffraction. The observed structure reveals two
different Cu positions leading to a slightly distorted kagome layer in contrast
to the closely related YCu$_{3}$(OH)$_{6}$Cl$_{3}$. Curie-Weiss behavior at
high temperatures, with a Weiss temperature $\theta_{W}$ of the order of $-100$
K, indicates a large dominant antiferromagnetic coupling within the kagome planes.
Specific-heat and magnetization measurements on single crystals reveal an
antiferromagnetic transition at T$_{N}=2.2$ K indicating a pronounced
frustration parameter of $\theta_{W}/T_{N}\approx50$. Optical transmission
experiments on powder samples and single crystals confirm the structural
findings. Specific-heat measurements on YCu$_{3}$(OH)$_{6}$Cl$_{3}$ down to 0.4
K confirm the proposed quantum spin-liquid state of that system. Therefore, the
two Y-Cu-OH-Cl compounds present a unique setting to investigate closely
related structures with a spin-liquid state and a strongly frustrated AFM
ordered state, by slightly releasing the frustration in a kagome lattice. | [
0,
1,
0,
0,
0,
0
] |
Title: The Character Field Theory and Homology of Character Varieties,
Abstract: We construct an extended oriented $(2+\epsilon)$-dimensional topological
field theory, the character field theory $X_G$ attached to an affine algebraic
group in characteristic zero, which calculates the homology of character
varieties of surfaces. It is a model for a dimensional reduction of
Kapustin-Witten theory ($N=4$ $d=4$ super-Yang-Mills in the GL twist), and a
universal version of the unipotent character field theory introduced in
arXiv:0904.1247. Boundary conditions in $X_G$ are given by quantum Hamiltonian
$G$-spaces, as captured by de Rham (or strong) $G$-categories, i.e., module
categories for the monoidal dg category $D(G)$ of $D$-modules on $G$. We show
that the circle integral $X_G(S^1)$ (the center and trace of $D(G)$) is
identified with the category $D(G/G)$ of "class $D$-modules", while for an
oriented surface $S$ (with arbitrary decorations at punctures) we show that
$X_G(S)\simeq{\rm H}_*^{BM}(Loc_G(S))$ is the Borel-Moore homology of the
corresponding character stack. We also describe the "Hodge filtration" on the
character theory, a one parameter degeneration to a TFT whose boundary
conditions are given by classical Hamiltonian $G$-spaces, and which encodes a
variant of the Hodge filtration on character varieties. | [
0,
0,
1,
0,
0,
0
] |
Title: A new generator of chaotic bit sequences with mixed-mode inputs,
Abstract: This paper presents a new generator of chaotic bit sequences with mixed-mode
(continuous and discrete) inputs. The generator exhibits stronger chaotic
properties than existing single-source (single-input) digital chaotic bit
generators. The 0-1 test is used to show the improved
chaotic behavior of our generator having a chaotic continuous input (Chua,
Rössler or Lorenz system) intermingled with a discrete input (logistic,
Tinkerbell or Henon map) with various parameters. The obtained sequences of
chaotic bits show some features of random processes with increased entropy
levels, even in the cases of small numbers of bit representations. The
properties of the new generator and its binary sequences compare well with
those obtained from a truly random binary reference quantum generator, as
evidenced by the results of the $ent$ tests. | [
0,
1,
0,
0,
0,
0
] |
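A minimal sketch of a discrete chaotic bit source of the kind used as one of the generator's inputs (here the logistic map); the mixing with a continuous chaotic system such as Chua, Rössler or Lorenz is left to the paper:

```python
import numpy as np

def logistic_bits(n, r=3.99, x0=0.3141, burn_in=1000):
    """Iterate x <- r x (1 - x) past transients, then threshold at 1/2."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = x > 0.5
    return bits

b = logistic_bits(10000)
print(b[:16], b.mean())   # mean near 0.5 for a balanced stream
```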
Title: HAWC response to atmospheric electricity activity,
Abstract: The HAWC Gamma Ray observatory consists of 300 water Cherenkov detectors
(WCD) instrumented with four photomultiplier tubes (PMTs) per WCD. HAWC is
located between two of the highest mountains in Mexico. The high altitude (4100
m asl), the relatively short distance to the Gulf of Mexico (~100 km), the
large detecting area (22 000 m$^2$) and its high sensitivity, make HAWC a good
instrument to explore the acceleration of particles due to the electric fields
existing inside storm clouds. In particular, the scaler system of HAWC records
the output of each one of the 1200 PMTs as well as the 2, 3, and 4-fold
multiplicities (logic AND in a time window of 30 ns) of each WCD with a
sampling rate of 40 Hz. Using the scaler data, we have identified 20
enhancements of the observed rate during periods when storm clouds were over
HAWC but without cloud-earth discharges. These enhancements can be produced by
electrons with energies of tens of MeV, accelerated by the electric fields of
tens of kV/m measured at the site during the storm periods. In this work, we
present the recorded data, the method of analysis and our preliminary
conclusions on the electron acceleration by the electric fields inside the
clouds. | [
0,
1,
0,
0,
0,
0
] |
Title: Optical Music Recognition with Convolutional Sequence-to-Sequence Models,
Abstract: Optical Music Recognition (OMR) is an important technology within Music
Information Retrieval. Deep learning models show promising results on OMR
tasks, but symbol-level annotated data sets of sufficient size to train such
models are not available and difficult to develop. We present a deep learning
architecture called a Convolutional Sequence-to-Sequence model to both move
towards an end-to-end trainable OMR pipeline, and apply a learning process that
trains on full sentences of sheet music instead of individually labeled
symbols. The model is trained and evaluated on a human generated data set, with
various image augmentations based on real-world scenarios. This data set is the
first publicly available set in OMR research with sufficient size to train and
evaluate deep learning models. With the introduced augmentations a pitch
recognition accuracy of 81% and a duration accuracy of 94% is achieved,
resulting in a note level accuracy of 80%. Finally, the model is compared to
commercially available methods, showing large improvements over these
applications. | [
1,
0,
0,
0,
0,
0
] |
Title: Fractional Cable Model for Signal Conduction in Spiny Neuronal Dendrites,
Abstract: The cable model is widely used in several fields of science to describe the
propagation of signals. A relevant medical and biological example is the
anomalous subdiffusion in spiny neuronal dendrites observed in several studies
of the last decade. Anomalous subdiffusion can be modelled in several ways
introducing some fractional component into the classical cable model. The
Chauchy problem associated to these kind of models has been investigated by
many authors, but up to our knowledge an explicit solution for the signalling
problem has not yet been published. Here we propose how this solution can be
derived applying the generalized convolution theorem (known as Efros theorem)
for Laplace transforms. The fractional cable model considered in this paper is
defined by replacing the first order time derivative with a fractional
derivative of order $\alpha\in(0,1)$ of Caputo type. The signalling problem is
solved for any input function applied to the accessible end of a semi-infinite
cable, which satisfies the requirements of the Efros theorem. The solutions
corresponding to the simple cases of impulsive and step inputs are explicitly
calculated in integral form containing Wright functions. Thanks to the
variability of the parameter $\alpha$, the corresponding solutions are expected
to adapt to the qualitative behaviour of the membrane potential observed in
experiments better than in the standard case $\alpha=1$. | [
0,
1,
0,
0,
0,
0
] |
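For reference, the Caputo fractional derivative of order $\alpha\in(0,1)$ that replaces the first-order time derivative in the fractional cable model:

```latex
{}^{C}\!D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)}
    \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau,
  \qquad 0 < \alpha < 1 .
```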
Title: Hedging in fractional Black-Scholes model with transaction costs,
Abstract: We consider conditional-mean hedging in a fractional Black-Scholes pricing
model in the presence of proportional transaction costs. We develop an explicit
formula for the conditional-mean hedging portfolio in terms of the recently
discovered explicit conditional law of the fractional Brownian motion. | [
0,
0,
1,
1,
0,
0
] |
Title: Finite flat spaces,
Abstract: We say that a finite metric space $X$ can be embedded almost isometrically
into a class of metric spaces $C$, if for every $\epsilon > 0$ there exists an
embedding of $X$ into one of the elements of $C$ with the bi-Lipschitz
distortion less than $1 + \epsilon$. We show that almost isometric
embeddability conditions are equivalent for the following classes of spaces:
(a) Quotients of Euclidean spaces by isometric actions of finite groups,
(b) $L_2$-Wasserstein spaces over Euclidean spaces,
(c) Compact flat manifolds,
(d) Compact flat orbifolds,
(e) Quotients of bi-invariant Lie groups by isometric actions of compact Lie
groups. (This one is the most surprising.)
We call spaces which satisfy these conditions finite flat spaces. The question
of a synthetic definition naturally arises.
Since Markov type constants depend only on finite subsets, we can conclude
that bi-invariant Lie groups and their quotients have Markov type $2$ with
constant $1$. | [
0,
0,
1,
0,
0,
0
] |
Title: On The Equivalence of Projections In Relative $α$-Entropy and Rényi Divergence,
Abstract: The aim of this work is to establish that two recently published projection
theorems, one dealing with a parametric generalization of relative entropy and
another dealing with Rényi divergence, are equivalent under a
correspondence on the space of probability measures. Further, we demonstrate
that the associated "Pythagorean" theorems are equivalent under this
correspondence. Finally, we apply Eguchi's method of obtaining Riemannian
metrics from general divergence functions to show that the geometry arising
from the above divergences are equivalent under the aforementioned
correspondence. | [
1,
0,
0,
0,
0,
0
] |
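For reference, the Rényi divergence of order $\alpha$ between probability measures $P$ and $Q$ with densities $p$ and $q$ with respect to a common dominating measure $\mu$, one of the two objects whose projection theorems are shown to be equivalent:

```latex
D_{\alpha}(P \,\|\, Q)
  = \frac{1}{\alpha - 1}
    \log \int p^{\alpha}\, q^{1-\alpha} \, d\mu ,
  \qquad \alpha > 0 , \ \alpha \neq 1 .
```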
Title: Comparing high dimensional partitions, with the Coclustering Adjusted Rand Index,
Abstract: The popular Adjusted Rand Index (ARI) is extended to the task of simultaneous
clustering of the rows and columns of a given matrix. This new index called
Coclustering Adjusted Rand Index (CARI) remains convenient and competitive
compared with other indices. Indeed, partitions with a high number of clusters
can be considered, and it does not require any convention when the numbers of clusters
in partitions are different. Experiments on simulated partitions are presented
and the performance of this index to measure the agreement between two pairs of
partitions is assessed. Comparison with other indices is discussed. | [
0,
0,
0,
1,
0,
0
] |
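For reference, a self-contained implementation of the classical ARI that CARI extends; the co-clustering extension itself follows the paper and is not reproduced here:

```python
import numpy as np
from scipy.special import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI from the contingency table of two partitions of the same items."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = a.size
    ua, ia = np.unique(a, return_inverse=True)
    ub, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((ua.size, ub.size))
    np.add.at(cont, (ia, ib), 1)
    sum_cells = comb(cont, 2).sum()
    sum_rows = comb(cont.sum(axis=1), 2).sum()
    sum_cols = comb(cont.sum(axis=0), 2).sum()
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = 0.5 * (sum_rows + sum_cols)
    return (sum_cells - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition
```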
Title: Dependence between Path-length and Size in Random Digital Trees,
Abstract: We study the size and the external path length of random tries and show that
they are asymptotically independent in the asymmetric case but strongly
dependent with small periodic fluctuations in the symmetric case. Such an
unexpected behavior is in sharp contrast to the previously known results on
random tries that the size is totally positively correlated to the internal
path length and that both tend to the same normal limit law. These two
dependence examples provide concrete instances of bivariate normal
distributions (as limit laws) whose correlation is $0$, $1$ and periodically
oscillating. Moreover, the same type of behaviors is also clarified for other
classes of digital trees such as bucket digital trees and Patricia tries. | [
0,
0,
1,
0,
0,
0
] |
Title: Analysing Shortcomings of Statistical Parametric Speech Synthesis,
Abstract: Output from statistical parametric speech synthesis (SPSS) remains noticeably
worse than natural speech recordings in terms of quality, naturalness, speaker
similarity, and intelligibility in noise. There are many hypotheses regarding
the origins of these shortcomings, but these hypotheses are often kept vague
and presented without empirical evidence that could confirm and quantify how a
specific shortcoming contributes to imperfections in the synthesised speech.
Throughout the speech synthesis literature, surprisingly little work is dedicated
to identifying the perceptually most important problems in speech
synthesis, even though such knowledge would be of great value for creating
better SPSS systems.
In this book chapter, we analyse some of the shortcomings of SPSS. In
particular, we discuss issues with vocoding and present a general methodology
for quantifying the effect of any of the many assumptions and design choices
that hold SPSS back. The methodology is accompanied by an example that
carefully measures and compares the severity of perceptual limitations imposed
by vocoding as well as other factors such as the statistical model and its use. | [
1,
0,
0,
0,
0,
0
] |
Title: On Meshfree GFDM Solvers for the Incompressible Navier-Stokes Equations,
Abstract: Meshfree solution schemes for the incompressible Navier--Stokes equations are
usually based on algorithms commonly used in finite volume methods, such as
projection methods, SIMPLE and PISO algorithms. However, drawbacks of these
algorithms that are specific to meshfree methods have often been overlooked. In
this paper, we study the drawbacks of conventionally used meshfree Generalized
Finite Difference Method~(GFDM) schemes for Lagrangian incompressible
Navier-Stokes equations, both operator splitting schemes and monolithic
schemes. The major drawback of most of these schemes is inaccurate local
approximations to the mass conservation condition. Further, we propose a new
modification of a commonly used monolithic scheme that overcomes these problems
and shows a better approximation for the velocity divergence condition. We then
perform a numerical comparison which shows the new monolithic scheme to be more
accurate than existing schemes. | [
0,
1,
1,
0,
0,
0
] |
Title: Using a new parsimonious AHP methodology combined with the Choquet integral: An application for evaluating social housing initiatives,
Abstract: We propose a development of the Analytic Hierarchy Process (AHP) that permits
the methodology to be used also for decision problems with a very large
number of alternatives evaluated with respect to several criteria. While the
application of the original AHP method involves many pairwise comparisons
between alternatives and criteria, our proposal is composed of four steps: (i)
direct evaluation of the alternatives at hand on the considered criteria, (ii)
selection of some reference evaluations; (iii) application of the original AHP
method to reference evaluations; (iv) revision of the direct evaluation on the
basis of the prioritization supplied by AHP on reference evaluations. The new
proposal has been tested and validated in an experiment conducted on a sample
of university students. The new methodology has been therefore applied to a
real world problem involving the evaluation of 21 Social Housing initiatives
sited in the Piedmont region (Italy). To take into account interaction between
criteria, the Choquet integral preference model has been considered within a
Non Additive Robust Ordinal Regression approach. | [
1,
0,
1,
0,
0,
0
] |
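A small sketch of the discrete Choquet integral used as the preference model above; the two-criterion capacity below is a toy example, not one elicited in the case study:

```python
import numpy as np

def choquet_integral(x, mu):
    """Choquet integral of scores x w.r.t. a capacity mu given as a dict from
    frozensets of criterion indices to [0, 1], with mu(empty set) = 0 and
    mu(all criteria) = 1; interaction enters through non-additivity of mu."""
    order = np.argsort(x)                  # criteria by ascending score
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(int(j) for j in order[k:])
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

mu = {frozenset({0, 1}): 1.0,    # sub-additive: negative interaction
      frozenset({0}): 0.7,
      frozenset({1}): 0.7,
      frozenset(): 0.0}
print(choquet_integral(np.array([0.4, 0.9]), mu))   # 0.4*1.0 + 0.5*0.7 = 0.75
```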
Title: Importance Sketching of Influence Dynamics in Billion-scale Networks,
Abstract: The blooming availability of traces for social, biological, and communication
networks opens up unprecedented opportunities in analyzing diffusion processes
in networks. However, the sheer sizes of the nowadays networks raise serious
challenges in computational efficiency and scalability.
In this paper, we propose a new hyper-graph sketching framework for influence
dynamics in networks. The centerpiece of our sketching framework, called SKIS, is
an efficient importance sampling algorithm that returns only non-singular
reverse cascades in the network. Compared to previously developed sketches
like RIS and SKIM, our sketch significantly enhances estimation quality while
substantially reducing processing time and memory-footprint. Further, we
present general strategies of using SKIS to enhance existing algorithms for
influence estimation and influence maximization which are motivated by
practical applications like viral marketing. Using SKIS, we design high-quality
influence oracle for seed sets with average estimation error up to 10 times
smaller than those using RIS and 6 times smaller than SKIM. In addition, our
influence maximization using SKIS substantially improves the quality of
solutions for greedy algorithms. It achieves up to a 10x speed-up and a 4x
memory reduction for the fastest RIS-based DSSA algorithm, while maintaining
the same theoretical guarantees. | [
1,
0,
0,
0,
0,
0
] |
Title: Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies,
Abstract: Advances in artificial intelligence have renewed interest in conversational
agents. So-called chatbots have reached maturity for industrial applications.
German insurance companies are interested in improving their customer service
and digitizing their business processes. In this work we investigate the
potential use of conversational agents in insurance companies by determining
which classes of agents are of interest to insurance companies, finding
relevant use cases and requirements, and developing a prototype for an
exemplary insurance scenario. Based on this approach, we derive key findings
for conversational agent implementation in insurance companies. | [
1,
0,
0,
0,
0,
0
] |
Title: Dark Matter and Neutrinos,
Abstract: The Keplerian distribution of velocities is not observed in the rotation of
large-scale structures such as spiral galaxies. This deviation from the
Keplerian distribution provides compelling evidence of the presence of
non-luminous matter, i.e., dark matter. There are several astrophysical
motivations for investigating the dark matter in and around the galaxy as a
halo. In this work we address various theoretical and experimental
indications pointing towards the existence of this unknown form of matter.
Among its candidate constituents, the neutrino is one of the most promising. We
know that neutrinos oscillate and have tiny masses, but there are also
signatures of the existence of heavy and light sterile neutrinos and of the
possibility of their mixing. Altogether, the role of neutrinos is of great
interest in cosmology and in understanding dark matter. | [
0,
1,
0,
0,
0,
0
] |
Title: Variational Bi-LSTMs,
Abstract: Recurrent neural networks like long short-term memory (LSTM) are important
architectures for sequential prediction tasks. LSTMs (and RNNs in general)
model sequences along the forward time direction. Bidirectional LSTMs
(Bi-LSTMs) on the other hand model sequences along both forward and backward
directions and are generally known to perform better at such tasks because they
capture a richer representation of the data. In the training of Bi-LSTMs, the
forward and backward paths are learned independently. We propose a variant of
the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a
channel between the two paths (during training, but which may be omitted during
inference), thus optimizing the two paths jointly. We arrive at this joint
objective for our model by minimizing a variational lower bound of the joint
likelihood of the data sequence. Our model acts as a regularizer and encourages
the two networks to inform each other in making their respective predictions
using distinct information. We perform ablation studies to better understand
the different components of our model and evaluate the method on various
benchmarks, showing state-of-the-art performance. | [
1,
0,
0,
1,
0,
0
] |
Title: Riemann-Langevin Particle Filtering in Track-Before-Detect,
Abstract: Track-before-detect (TBD) is a powerful approach that consists in providing
the tracker with sensor measurements directly without pre-detection. Due to the
measurement model non-linearities, online state estimation in TBD is most
commonly solved via particle filtering. Existing particle filters for TBD do
not incorporate measurement information in their proposal distribution. The
Langevin Monte Carlo (LMC) is a sampling method whose proposal is able to
exploit all available knowledge of the posterior (that is, both prior and
measurement information). This letter synthesizes recent advances in LMC-based
filtering to describe the Riemann-Langevin particle filter and introduces its
novel application to TBD. The benefits of our approach are illustrated in a
challenging low-noise scenario. | [
0,
0,
0,
1,
0,
0
] |
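A minimal sketch of a Metropolis-adjusted Langevin (MALA) step, the building block behind Langevin proposals; the Riemann variant used in the letter additionally preconditions the drift with a position-dependent metric:

```python
import numpy as np

def mala_step(x, logp, grad_logp, eps=0.1, rng=None):
    """Proposal drifts along the gradient of the log-posterior, so (in TBD)
    measurement information steers the particles; Metropolis-corrected."""
    rng = rng or np.random.default_rng()
    drift = lambda z: z + 0.5 * eps ** 2 * grad_logp(z)
    x_prop = drift(x) + eps * rng.normal(size=x.shape)
    log_q_fwd = -0.5 * np.sum((x_prop - drift(x)) ** 2) / eps ** 2
    log_q_bwd = -0.5 * np.sum((x - drift(x_prop)) ** 2) / eps ** 2
    log_alpha = logp(x_prop) - logp(x) + log_q_bwd - log_q_fwd
    return x_prop if np.log(rng.random()) < log_alpha else x

logp = lambda x: -0.5 * np.sum(x ** 2)   # standard Gaussian target
grad = lambda x: -x
x = np.ones(3)
for _ in range(200):
    x = mala_step(x, logp, grad)
print(x)
```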