text (string, lengths 138 to 2.38k) | labels (sequence, length 6) | Predictions (sequence, lengths 1 to 3) |
---|---|---|
Title: Discrete Local Induction Equation,
Abstract: The local induction equation, or the binormal flow on space curves, is a
well-known model of deformation of space curves, as it describes the dynamics of
vortex filaments, and the complex curvature is governed by the nonlinear
Schrödinger equation. In this paper, we present its discrete analogue,
namely, a model of deformation of discrete space curves by the discrete
nonlinear Schrödinger equation. We also present explicit formulas for both
smooth and discrete curves in terms of tau functions of the two-component KP
hierarchy. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: A sharp lower bound for the lifespan of small solutions to the Schrödinger equation with a subcritical power nonlinearity,
Abstract: Let $T_{\epsilon}$ be the lifespan for the solution to the Schrödinger
equation on $\mathbb{R}^d$ with a power nonlinearity $\lambda |u|^{2\theta/d}u$
($\lambda \in \mathbb{C}$, $0<\theta<1$) and the initial data in the form
$\epsilon \varphi(x)$. We provide a sharp lower bound estimate for
$T_{\epsilon}$ as $\epsilon \to +0$ which can be written explicitly by
$\lambda$, $d$, $\theta$, $\varphi$ and $\epsilon$. This is an improvement of
the previous result by H. Sasaki [Adv. Diff. Eq. 14 (2009), 1021--1039]. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Permission Inference for Array Programs,
Abstract: Information about the memory locations accessed by a program is, for
instance, required for program parallelisation and program verification.
Existing inference techniques for this information provide only partial
solutions for the important class of array-manipulating programs. In this
paper, we present a static analysis that infers the memory footprint of an
array program in terms of permission pre- and postconditions as used, for
example, in separation logic. This formulation allows our analysis to handle
concurrent programs and produces specifications that can be used by
verification tools. Our analysis expresses the permissions required by a loop
via maximum expressions over the individual loop iterations. These maximum
expressions are then solved by a novel maximum elimination algorithm, in the
spirit of quantifier elimination. Our approach is sound and is implemented; an
evaluation on existing benchmarks for memory safety of array programs
demonstrates accurate results, even for programs with complex access patterns
and nested loops. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Application of Spin-Exchange Relaxation-Free Magnetometry to the Cosmic Axion Spin Precession Experiment,
Abstract: The Cosmic Axion Spin Precession Experiment (CASPEr) seeks to measure
oscillating torques on nuclear spins caused by axion or axion-like-particle
(ALP) dark matter via nuclear magnetic resonance (NMR) techniques. A sample
spin-polarized along a leading magnetic field experiences a resonance when the
Larmor frequency matches the axion/ALP Compton frequency, generating precessing
transverse nuclear magnetization. Here we demonstrate a Spin-Exchange
Relaxation-Free (SERF) magnetometer with sensitivity $\approx 1~{\rm
fT/\sqrt{Hz}}$ and an effective sensing volume of 0.1 $\rm{cm^3}$ that may be
useful for NMR detection in CASPEr. A potential drawback of
SERF-magnetometer-based NMR detection is the SERF's limited dynamic range. Use
of a magnetic flux transformer to suppress the leading magnetic field is
considered as a potential method to expand the SERF's dynamic range in order to
probe higher axion/ALP Compton frequencies. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Characterization and Photometric Performance of the Hyper Suprime-Cam Software Pipeline,
Abstract: The Subaru Strategic Program (SSP) is an ambitious multi-band survey using
the Hyper Suprime-Cam (HSC) on the Subaru telescope. The Wide layer of the SSP
is both wide and deep, reaching a detection limit of i~26.0 mag. At these
depths, it is challenging to achieve accurate, unbiased, and consistent
photometry across all five bands. The HSC data are reduced using a pipeline
that builds on the prototype pipeline for the Large Synoptic Survey Telescope.
We have developed a Python-based, flexible framework to inject synthetic
galaxies into real HSC images called SynPipe. Here we explain the design and
implementation of SynPipe and generate a sample of synthetic galaxies to
examine the photometric performance of the HSC pipeline. For stars, we achieve
1% photometric precision at i~19.0 mag and 6% precision at i~25.0 in the
i-band. For synthetic galaxies with single-Sersic profiles, forced CModel
photometry achieves 13% photometric precision at i~20.0 mag and 18% precision
at i~25.0 in the i-band. We show that both forced PSF and CModel photometry
yield unbiased color estimates that are robust to seeing conditions. We
identify several caveats that apply to the version of HSC pipeline used for the
first public HSC data release (DR1) that need to be taken into consideration.
First, the degree to which an object is blended with other objects impacts the
overall photometric performance. This is especially true for point sources.
Highly blended objects tend to have larger photometric uncertainties,
systematically underestimated fluxes and slightly biased colors. Second, >20%
of stars at 22.5 < i < 25.0 mag can be misclassified as extended objects. Third,
the current CModel algorithm tends to strongly underestimate the half-light
radius and ellipticity of galaxies with i > 21.5 mag. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Information Geometry Approach to Parameter Estimation in Hidden Markov Models,
Abstract: We consider the estimation of a hidden Markovian process by using information
geometry with respect to transition matrices. We consider the case when we use
only the histogram of $k$-memory data. Firstly, we focus on a partial
observation model with a Markovian process and show that the asymptotic
estimation error of this model is given as the inverse of the projective Fisher
information of transition matrices. Next, we apply this result to the
estimation of the hidden Markovian process. We carefully discuss the equivalence
problem for hidden Markovian processes on the tangent space. Then, we propose a
novel method to estimate the hidden Markovian process. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics",
"Computer Science"
] |
Title: Gotta Learn Fast: A New Benchmark for Generalization in RL,
Abstract: In this report, we present a new reinforcement learning (RL) benchmark based
on the Sonic the Hedgehog (TM) video game franchise. This benchmark is intended
to measure the performance of transfer learning and few-shot learning
algorithms in the RL domain. We also present and evaluate some baseline
algorithms on the new benchmark. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Kinetic Trans-assembly of DNA Nanostructures,
Abstract: The central dogma of molecular biology is the principal framework for
understanding how nucleic acid information is propagated and used by living
systems to create complex biomolecules. Here, by integrating the structural and
dynamic paradigms of DNA nanotechnology, we present a rationally designed
synthetic platform which functions in an analogous manner to create complex DNA
nanostructures. Starting from one type of DNA nanostructure, DNA strand
displacement circuits were designed to interact and pass along the information
encoded in the initial structure to mediate the self-assembly of a different
type of structure, the final output structure depending on the type of circuit
triggered. Using this concept of a DNA structure "trans-assembling" a different
DNA structure through non-local strand displacement circuitry, four different
schemes were implemented. Specifically, 1D ladder and 2D double-crossover (DX)
lattices were designed to kinetically trigger DNA circuits to activate
polymerization of either ring structures or another type of DX lattice under
enzyme-free, isothermal conditions. In each scheme, the desired multilayer
reaction pathway was activated, among multiple possible pathways, ultimately
leading to the downstream self-assembly of the correct output structure. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Physics"
] |
Title: The Leray transform: factorization, dual $CR$ structures and model hypersurfaces in $\mathbb{C}\mathbb{P}^2$,
Abstract: We compute the exact norms of the Leray transforms for a family
$\mathcal{S}_{\beta}$ of unbounded hypersurfaces in two complex dimensions. The
$\mathcal{S}_{\beta}$ generalize the Heisenberg group, and provide local
projective approximations to any smooth, strongly $\mathbb{C}$-convex
hypersurface $\mathcal{S}_{\beta}$ to two orders of tangency. This work is then
examined in the context of projective dual $CR$-structures and the
corresponding pair of canonical dual Hardy spaces associated to
$\mathcal{S}_{\beta}$, leading to a universal description of the Leray
transform and a factorization of the transform through orthogonal projection
onto the conjugate dual Hardy space. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Fast Quantum-safe Asymmetric Cryptosystem Using Extra Superincreasing Sequences,
Abstract: This paper gives the definitions of an extra superincreasing sequence and an
anomalous subset sum, and proposes a fast quantum-safe asymmetric cryptosystem
called JUOAN2. The new cryptosystem is based on an additive multivariate
permutation problem (AMPP) and an anomalous subset sum problem (ASSP) which
parallel a multivariate polynomial problem and a shortest vector problem
respectively, and composed of a key generator, an encryption algorithm, and a
decryption algorithm. The authors analyze the security of the new cryptosystem
against the Shamir minima accumulation point attack and the LLL lattice basis
reduction attack, and prove it to be semantically secure (namely IND-CPA) on
the assumption that AMPP and ASSP have no subexponential time solutions.
Particularly, the analysis shows that the new cryptosystem has the potential to
be resistant to quantum computing attacks, and is especially suitable for
secret communication between two mobile terminals in maneuvering field
operations in any weather. Lastly, an example explaining the correctness of
the new cryptosystem is given. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Detecting Adversarial Examples via Key-based Network,
Abstract: Though deep neural networks have achieved state-of-the-art performance in
visual classification, recent studies have shown that they are all vulnerable
to the attack of adversarial examples. Small and often imperceptible
perturbations to the input images are sufficient to fool the most powerful deep
neural networks. Various defense methods have been proposed to address this
issue. However, they either require knowledge on the process of generating
adversarial examples, or are not robust against new attacks specifically
designed to penetrate the existing defense. In this work, we introduce
key-based network, a new detection-based defense mechanism to distinguish
adversarial examples from normal ones based on error correcting output codes,
using the binary code vectors produced by multiple binary classifiers applied
to randomly chosen label-sets as signatures to match normal images and reject
adversarial examples. In contrast to existing defense methods, the proposed
method does not require knowledge of the process for generating adversarial
examples and can be applied to defend against different types of attacks. For
the practical black-box and gray-box scenarios, where the attacker does not
know the encoding scheme, we show empirically that key-based network can
effectively detect adversarial examples generated by several state-of-the-art
attacks. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Guessing Attacks on Distributed-Storage Systems,
Abstract: The secrecy of a distributed-storage system for passwords is studied. The
encoder, Alice, observes a length-n password and describes it using two hints,
which she stores in different locations. The legitimate receiver, Bob, observes
both hints. In one scenario the requirement is that the expected number of
guesses it takes Bob to guess the password approach one as n tends to infinity,
and in the other that the expected size of the shortest list that Bob must form
to guarantee that it contain the password approach one. The eavesdropper, Eve,
sees only one of the hints. Assuming that Alice cannot control which hints Eve
observes, the largest normalized (by n) exponent that can be guaranteed for the
expected number of guesses it takes Eve to guess the password is characterized
for each scenario. Key to the proof are new results on Arikan's guessing and
Bunte and Lapidoth's task-encoding problem; in particular, the paper
establishes a close relation between the two problems. A rate-distortion
version of the model is also discussed, as is a generalization that allows for
Alice to produce {\delta} (not necessarily two) hints, for Bob to observe {\nu}
(not necessarily two) of the hints, and for Eve to observe {\eta} (not
necessarily one) of the hints. The generalized model is robust against {\delta}
- {\nu} disk failures. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Numerical analysis of nonlocal fracture models in Hölder space,
Abstract: In this work, we calculate the convergence rate of the finite difference
approximation for a class of nonlocal fracture models. We consider two point
force interactions characterized by a double well potential. We show the
existence of an evolving displacement field in Hölder space with Hölder
exponent $\gamma \in (0,1]$. The rate of convergence of the finite difference
approximation depends on the factor $C_s h^\gamma/\epsilon^2$ where $\epsilon$
gives the length scale of nonlocal interaction, $h$ is the discretization
length and $C_s$ is the maximum of the Hölder norm of the solution and its second
derivatives during the evolution. It is shown that the rate of convergence
holds for both the forward Euler scheme as well as general single step implicit
schemes. A stability result is established for the semi-discrete approximation.
The Hölder continuous evolutions are seen to converge to a brittle fracture
evolution in the limit of vanishing nonlocality. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Iterative Collaborative Filtering for Sparse Matrix Estimation,
Abstract: The sparse matrix estimation problem consists of estimating the distribution
of an $n\times n$ matrix $Y$, from a sparsely observed single instance of this
matrix where the entries of $Y$ are independent random variables. This captures
a wide array of problems; special instances include matrix completion in the
context of recommendation systems, graphon estimation, and community detection
in (mixed membership) stochastic block models. Inspired by classical
collaborative filtering for recommendation systems, we propose a novel
iterative, collaborative filtering-style algorithm for matrix estimation in
this generic setting. We show that the mean squared error (MSE) of our
estimator converges to $0$ at the rate of $O(d^2 (pn)^{-2/5})$ as long as
$\omega(d^5 n)$ random entries from a total of $n^2$ entries of $Y$ are
observed (uniformly sampled), $\mathbb{E}[Y]$ has rank $d$, and the entries of
$Y$ have bounded support. The maximum squared error across all entries
converges to $0$ with high probability as long as we observe a little more,
$\Omega(d^5 n \ln^2(n))$ entries. Our results are the best known sample
complexity results in this generality. | [
0,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
Title: Parameter Estimation in Finite Mixture Models by Regularized Optimal Transport: A Unified Framework for Hard and Soft Clustering,
Abstract: In this short paper, we formulate parameter estimation for finite mixture
models in the context of discrete optimal transportation with convex
regularization. The proposed framework unifies hard and soft clustering methods
for general mixture models. It also generalizes the celebrated
$k$-means and expectation-maximization algorithms in relation to
associated Bregman divergences when applied to exponential family mixture
models. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
Title: Direct observation of domain wall surface tension by deflating or inflating a magnetic bubble,
Abstract: The surface energy of a magnetic Domain Wall (DW) strongly affects its static
and dynamic behaviours. However, this effect has seldom been directly observed and
many related phenomena have not been well understood. Moreover, a reliable
method to quantify the DW surface energy is still missing. Here, we report a
series of experiments in which the DW surface energy becomes a dominant
parameter. We observed that a semicircular magnetic domain bubble could
spontaneously collapse under the Laplace pressure induced by DW surface energy.
We further demonstrated that the surface energy could lead to a geometrically
induced pinning when the DW propagates in a Hall cross or from a nanowire into
a nucleation pad. Based on these observations, we developed two methods to
quantify the DW surface energy, which could be very helpful to estimate
intrinsic parameters such as Dzyaloshinskii-Moriya Interactions (DMI) or
exchange stiffness in magnetic ultra-thin films. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Finite size effects for spiking neural networks with spatially dependent coupling,
Abstract: We study finite-size fluctuations in a network of spiking deterministic
neurons coupled with non-uniform synaptic coupling. We generalize a previously
developed theory of finite size effects for uniform globally coupled neurons.
In the uniform case, mean field theory is well defined by averaging over the
network as the number of neurons in the network goes to infinity. However, for
nonuniform coupling it is no longer possible to average over the entire network
if we are interested in fluctuations at a particular location within the
network. We show that if the coupling function approaches a continuous function
in the infinite system size limit then an average over a local neighborhood can
be defined such that mean field theory is well defined for a spatially
dependent field. We then derive a perturbation expansion in the inverse system
size around the mean field limit for the covariance of the input to a neuron
(synaptic drive) and firing rate fluctuations due to dynamical deterministic
finite-size effects. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Mathematics"
] |
Title: Shape and fission instabilities of ferrofluids in non-uniform magnetic fields,
Abstract: We study static distributions of ferrofluid submitted to non-uniform magnetic
fields. We show how the normal-field instability is modified in the presence of
a weak magnetic field gradient. Then we consider a ferrofluid droplet and show
how the gradient affects its shape. A rich phase-transition phenomenology is
found. We also investigate the creation of droplets by successive splits when a
magnet is vertically approached from below and derive theoretical expressions
which are solved numerically to obtain the number of droplets and their aspect
ratio as a function of the field configuration. A quantitative comparison is
performed with previous experimental results, as well as with our own
experiments, and yields good agreement with the theoretical modeling. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Tuplemax Loss for Language Identification,
Abstract: In many scenarios of a language identification task, the user will specify a
small set of languages which he/she can speak instead of a large set of all
possible languages. We want to model such prior knowledge into the way we train
our neural networks, by replacing the commonly used softmax loss function with
a novel loss function named tuplemax loss. As a matter of fact, a typical
language identification system launched in North America has about 95% of users
who speak no more than two languages. Using the tuplemax loss, our system
achieved a 2.33% error rate, which is a relative 39.4% improvement over the
3.85% error rate of standard softmax loss method. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Short Presburger arithmetic is hard,
Abstract: We study the computational complexity of short sentences in Presburger
arithmetic (Short-PA). Here by "short" we mean sentences with a bounded number
of variables, quantifiers, inequalities and Boolean operations; the input
consists only of the integer coefficients involved in the linear inequalities.
We prove that satisfiability of Short-PA sentences with $m+2$ alternating
quantifiers is $\Sigma_{P}^m$-complete or $\Pi_{P}^m$-complete, when the first
quantifier is $\exists$ or $\forall$, respectively. Counting versions and
restricted systems are also analyzed. Further applications are given to the hardness
of two natural problems in Integer Optimizations. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: The Bias of the Log Power Spectrum for Discrete Surveys,
Abstract: A primary goal of galaxy surveys is to tighten constraints on cosmological
parameters, and the power spectrum $P(k)$ is the standard means of doing so.
However, at translinear scales $P(k)$ is blind to much of these surveys'
information---information which the log density power spectrum recovers. For
discrete fields (such as the galaxy density), $A^*$ denotes the statistic
analogous to the log density: $A^*$ is a "sufficient statistic" in that its
power spectrum (and mean) capture virtually all of a discrete survey's
information. However, the power spectrum of $A^*$ is biased with respect to the
corresponding log spectrum for continuous fields, and to use $P_{A^*}(k)$ to
constrain the values of cosmological parameters, we require some means of
predicting this bias. Here we present a prescription for doing so; for
Euclid-like surveys (with cubical cells 16$h^{-1}$ Mpc across) our bias
prescription's error is less than 3 per cent. This prediction will facilitate
optimal utilization of the information in future galaxy surveys. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: Local Symmetry and Global Structure in Adaptive Voter Models,
Abstract: "Coevolving" or "adaptive" voter models (AVMs) are natural systems for
modeling the emergence of mesoscopic structure from local networked processes
driven by conflict and homophily. Because of this, many methods for
approximating the long-run behavior of AVMs have been proposed over the last
decade. However, most such methods are either restricted in scope, expensive in
computation, or inaccurate in predicting important statistics. In this work, we
develop a novel, second-order moment closure approximation method for studying
the equilibrium mesoscopic structure of AVMs and apply it to binary-state
rewire-to-random and rewire-to-same model variants with random state-switching.
This framework exploits an asymmetry in voting events that enables us to derive
analytic approximations for the fast-timescale dynamics. The resulting
numerical approximations enable the computation of key properties of the model
behavior, such as the location of the fragmentation transition and the
equilibrium active edge density, across the entire range of state densities.
Numerically, they are nearly exact for the rewire-to-random model, and
competitive with other current approaches for the rewire-to-same model. We
conclude with suggestions for model refinement and extensions to more complex
models. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Statistics"
] |
Title: Poverty Mapping Using Convolutional Neural Networks Trained on High and Medium Resolution Satellite Images, With an Application in Mexico,
Abstract: Mapping the spatial distribution of poverty in developing countries remains
an important and costly challenge. These "poverty maps" are key inputs for
poverty targeting, public goods provision, political accountability, and impact
evaluation, that are all the more important given the geographic dispersion of
the remaining bottom billion severely poor individuals. In this paper we train
Convolutional Neural Networks (CNNs) to estimate poverty directly from high and
medium resolution satellite images. We use both Planet and Digital Globe
imagery with spatial resolutions of 3-5 sq. m. and 50 sq. cm. respectively,
covering all 2 million sq. km. of Mexico. Benchmark poverty estimates come from
the 2014 MCS-ENIGH combined with the 2015 Intercensus and are used to estimate
poverty rates for 2,456 Mexican municipalities. CNNs are trained using the 896
municipalities in the 2014 MCS-ENIGH. We experiment with several architectures
(GoogleNet, VGG) and use GoogleNet as a final architecture where weights are
fine-tuned from ImageNet. We find that 1) the best models, which incorporate
satellite-estimated land use as a predictor, explain approximately 57% of the
variation in poverty in a validation sample of 10 percent of MCS-ENIGH
municipalities; 2) Across all MCS-ENIGH municipalities explanatory power
reduces to 44% in a CNN prediction and landcover model; 3) Predicted poverty
from the CNN predictions alone explains 47% of the variation in poverty in the
validation sample, and 37% over all MCS-ENIGH municipalities; 4) In urban areas
we see slight improvements from using Digital Globe versus Planet imagery,
which explain 61% and 54% of poverty variation respectively. We conclude that
CNNs can be trained end-to-end on satellite imagery to estimate poverty,
although there is much work to be done to understand how the training process
influences out of sample validation. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Network of sensitive magnetometers for urban studies,
Abstract: The magnetic signature of an urban environment is investigated using a
geographically distributed network of fluxgate magnetometers deployed in and
around Berkeley, California. The system hardware and software are described and
results from initial operation of the network are reported. The sensors sample
the vector magnetic field with a 4 kHz resolution and are sensitive to
fluctuations below 0.1 $\textrm{nT}/\sqrt{\textrm{Hz}}$. Data from separate
stations are synchronized to around $\pm100$ $\mu{s}$ using GPS and computer
system clocks. Data from all sensors are automatically uploaded to a central
server. Anomalous events, such as lightning strikes, have been observed. A
wavelet analysis is used to study observations over a wide range of temporal
scales up to daily variations that show strong differences between weekend and
weekdays. The Bay Area Rapid Transit (BART) is identified as the most dominant
signal from these observations and a superposed epoch analysis is used to study
and extract the BART signal. Initial results of the correlation between sensors
are also presented. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: On the Limitation of Convolutional Neural Networks in Recognizing Negative Images,
Abstract: Convolutional Neural Networks (CNNs) have achieved state-of-the-art
performance on a variety of computer vision tasks, particularly visual
classification problems, where new algorithms are reported to achieve or even
surpass human performance. In this paper, we examine whether CNNs are
capable of learning the semantics of training data. To this end, we evaluate
CNNs on negative images, since they share the same structure and semantics as
regular images and humans can classify them correctly. Our experimental results
indicate that when training on regular images and testing on negative images,
the model accuracy is significantly lower than when it is tested on regular
images. This leads us to the conjecture that current training methods do not
effectively train models to generalize the concepts. We then introduce the
notion of semantic adversarial examples - transformed inputs that semantically
represent the same objects, but the model does not classify them correctly -
and present negative images as one class of such inputs. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Variational Monte Carlo study of spin dynamics in underdoped cuprates,
Abstract: The hour-glass-like dispersion of spin excitations is a common feature of
underdoped cuprates. It was qualitatively explained by the random phase
approximation based on various ordered states with some phenomenological
parameters; however, its origin remains elusive. Here, we present a numerical
study of spin dynamics in the $t$-$J$ model using the variational Monte Carlo
method. This parameter-free method satisfies the no double-occupancy constraint
of the model and thus provides a better evaluation on the spin dynamics with
respect to various mean-field trial states. We conclude that the lower branch
of the hour-glass dispersion is a collective mode and the upper branch is more
likely the consequence of the stripe state than the other candidates. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Parabolic equations with divergence-free drift in space $L_{t}^{l}L_{x}^{q}$,
Abstract: In this paper we study the fundamental solution $\varGamma(t,x;\tau,\xi)$ of
the parabolic operator $L_{t}=\partial_{t}-\Delta+b(t,x)\cdot\nabla$, where for
every $t$, $b(t,\cdot)$ is a divergence-free vector field, and we consider the
case that $b$ belongs to the Lebesgue space
$L^{l}\left(0,T;L^{q}\left(\mathbb{R}^{n}\right)\right)$. The regularity of
weak solutions to the parabolic equation $L_{t}u=0$ depends critically on the
value of the parabolic exponent $\gamma=\frac{2}{l}+\frac{n}{q}$. Without the
divergence-free condition on $b$, the regularity of weak solutions has been
established when $\gamma\leq1$, and the heat kernel estimate has been obtained
as well, except for the case that $l=\infty,q=n$. The regularity of weak
solutions was deemed not true for the critical case
$L^{\infty}\left(0,T;L^{n}\left(\mathbb{R}^{n}\right)\right)$ for a general
$b$, while it is true for the divergence-free case, and a written proof can be
deduced from the results in [Semenov, 2006]. One of the results obtained in the
present paper establishes the Aronson type estimate for critical and
supercritical cases and for vector fields $b$ which are divergence-free. We
will prove the best possible lower and upper bounds for the fundamental
solution one can derive under the current approach. The significance of the
divergence-free condition enters the study of parabolic equations rather
recently, mainly due to the discovery of the compensated compactness. The
interest for the study of such parabolic equations comes from its connections
with Leray's weak solutions of the Navier-Stokes equations and the Taylor
diffusion associated with a vector field where the heat operator $L_{t}$
appears naturally. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Detecting Arbitrary Attacks Using Continuous Secured Side Information in Wireless Networks,
Abstract: This paper focuses on Byzantine attack detection for Gaussian two-hop one-way
relay network, where an amplify-and-forward relay may conduct Byzantine attacks
by forwarding altered symbols to the destination. For facilitating attack
detection, we utilize the openness of wireless medium to make the destination
observe some secured signals that are not attacked. Then, a detection scheme is
developed for the destination by using its secured observations to
statistically check other observations from the relay. On the other hand,
note that the Gaussian channel is continuous, which allows possible Byzantine
attacks to be conducted within continuous alphabet(s). The existing work on
discrete channels is not applicable for investigating the performance of the
proposed scheme. The main contribution of this paper is to prove that if and
only if the wireless relay network satisfies a non-manipulable channel
condition, the proposed detection scheme achieves asymptotic errorless
performance against arbitrary attacks that allow the stochastic distributions
of altered symbols to vary arbitrarily and depend on each other. No pre-shared
secret or secret transmission is needed for the detection. Furthermore, we also
prove that the relay network is non-manipulable as long as all channel
coefficients are non-zero, which is not an essential restriction for many practical
systems. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Graph Convolutional Networks for Classification with a Structured Label Space,
Abstract: It is a usual practice to ignore any structural information underlying
classes in multi-class classification. In this paper, we propose a graph
convolutional network (GCN) augmented neural network classifier to exploit a
known, underlying graph structure of labels. The proposed approach resembles an
(approximate) inference procedure in, for instance, a conditional random field
(CRF). We evaluate the proposed approach on document classification and object
recognition and report both accuracies and graph-theoretic metrics that
correspond to the consistency of the model's prediction. The experiment results
reveal that the proposed model outperforms a baseline method which ignores the
graph structures of a label space in terms of graph-theoretic metrics. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Decoupling of graphene from Ni(111) via oxygen intercalation,
Abstract: The combination of the surface science techniques (STM, XPS, ARPES) and
density-functional theory calculations was used to study the decoupling of
graphene from Ni(111) by oxygen intercalation. The formation of the
antiferromagnetic (AFM) NiO layer at the interface between graphene and
ferromagnetic (FM) Ni is found, where graphene protects the underlying AFM/FM
sandwich system. It is found that graphene is fully decoupled in this system
and strongly $p$-doped via charge transfer with a position of the Dirac point
of $(0.69\pm0.02)$ eV above the Fermi level. Our theoretical analysis confirms
all experimental findings, addressing also the interface properties between
graphene and AFM NiO. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Topic supervised non-negative matrix factorization,
Abstract: Topic models have been extensively used to organize and interpret the
contents of large, unstructured corpora of text documents. Although topic
models often perform well on traditional training vs. test set evaluations, it
is often the case that the results of a topic model do not align with human
interpretation. This interpretability fallacy is largely due to the
unsupervised nature of topic models, which prohibits any user guidance on the
results of a model. In this paper, we introduce a semi-supervised method called
topic supervised non-negative matrix factorization (TS-NMF) that enables the
user to provide labeled example documents to promote the discovery of more
meaningful semantic structure of a corpus. In this way, the results of TS-NMF
better match the intuition and desired labeling of the user. The core of TS-NMF
relies on solving a non-convex optimization problem for which we derive an
iterative algorithm that is shown to be monotonic and convergent to a local
optimum. We demonstrate the practical utility of TS-NMF on the Reuters and
PubMed corpora, and find that TS-NMF is especially useful for conceptual or
broad topics, where topic key terms are not well understood. Although
identifying an optimal latent structure for the data is not a primary objective
of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard
similarity scores than the contemporary methods, (unsupervised) NMF and latent
Dirichlet allocation, at supervision rates as low as 10% to 20%. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: ECO-AMLP: A Decision Support System using an Enhanced Class Outlier with Automatic Multilayer Perceptron for Diabetes Prediction,
Abstract: With advanced data analytical techniques, efforts for more accurate decision
support systems for disease prediction are on the rise. Surveys by the World Health
Organization (WHO) indicate a great increase in the number of diabetic patients and
related deaths each year. Early diagnosis of diabetes is a major concern among
researchers and practitioners. The paper presents an application of
\textit{Automatic Multilayer Perceptron}, which is combined with an outlier
detection method, \textit{Enhanced Class Outlier Detection using a
distance-based algorithm}, to create a prediction framework named Enhanced
Class Outlier with Automatic Multilayer Perceptron (ECO-AMLP). A series of
experiments are performed on publicly available Pima Indian Diabetes Dataset to
compare ECO-AMLP with other individual classifiers as well as ensemble based
methods. The outlier technique used in our framework gave better results as
compared to other pre-processing and classification techniques. Finally, the
results are compared with other state-of-the-art methods reported in the
literature for diabetes prediction on PIDD, and the achieved accuracy of 88.7\%
bests all other reported studies. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Modeling and control of modern wind turbine systems: An introduction,
Abstract: This chapter provides an introduction to the modeling and control of power
generation from wind turbine systems. In modeling, the focus is on the
electrical components: electrical machine (e.g. permanent-magnet synchronous
generators), back-to-back converter (consisting of machine-side and grid-side
converter sharing a common DC-link), mains filters and ideal (balanced) power
grid. The aerodynamics and the torque generation of the wind turbine are
explained in simplified terms using a so-called power coefficient. The overall
control system is considered. In particular, the phase-locked loop system for
grid-side voltage orientation, the nonlinear speed control system for the
generator (and turbine), and the non-minimum phase DC-link voltage control
system are discussed in detail, based on a brief derivation of the underlying
machine-side and grid-side current control systems. With the help of the power
balance of the wind turbine, the operation management and the control of the
power flow are explained. Concluding simulation results illustrate the overall
system behavior of a controlled wind turbine with a permanent-magnet
synchronous generator. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Engineering"
] |
Title: Linear algebraic analogues of the graph isomorphism problem and the Erdős-Rényi model,
Abstract: A classical difficult isomorphism testing problem is to test isomorphism of
p-groups of class 2 and exponent p in time polynomial in the group order. It is
known that this problem can be reduced to solving the alternating matrix space
isometry problem over a finite field in time polynomial in the underlying
vector space size. We propose a venue of attack for the latter problem by
viewing it as a linear algebraic analogue of the graph isomorphism problem.
This viewpoint leads us to explore the possibility of transferring techniques
for graph isomorphism to this long-believed bottleneck case of group
isomorphism.
In the 1970s, Babai, Erdős, and Selkow presented the first average-case
efficient graph isomorphism testing algorithm (SIAM J Computing, 1980).
Inspired by that algorithm, we devise an average-case efficient algorithm for
the alternating matrix space isometry problem over a key range of parameters,
in a random model of alternating matrix spaces in the vein of the Erdős-Rényi
model of random graphs. For this, we develop a linear algebraic analogue of the
classical individualisation technique, a technique belonging to a set of
combinatorial techniques that has been critical for the progress on the
worst-case time complexity for graph isomorphism, but was missing in the group
isomorphism context. As a consequence of the main algorithm, we establish a
weaker linear algebraic analogue of Erdős and Rényi's classical result
that most graphs have the trivial automorphism group. We finally show that
Luks' dynamic programming technique for graph isomorphism (STOC 1999) can be
adapted to slightly improve the worst-case time complexity of the alternating
matrix space isometry problem in a certain range of parameters. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: A numerical scheme for an improved Green-Naghdi model in the Camassa-Holm regime for the propagation of internal waves,
Abstract: In this paper we introduce a new reformulation of the Green-Naghdi model in
the Camassa-Holm regime for the propagation of internal waves over a flat
topography derived by Duchêne, Israwi and Talhouk. These new Green-Naghdi
systems are adapted to improve the frequency dispersion of the original model;
they share the same order of precision as the standard one but have an
appropriate structure which makes them much more suitable for numerical
resolution. We develop a second-order splitting scheme where the hyperbolic
part of the system is treated with a high-order finite volume scheme and the
dispersive part is treated with a finite difference approach. Numerical
simulations are then performed to validate the model and the numerical methods. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: General Bayesian Updating and the Loss-Likelihood Bootstrap,
Abstract: In this paper we revisit the weighted likelihood bootstrap, a method that
generates samples from an approximate Bayesian posterior of a parametric model.
We show that the same method can be derived, without approximation, under a
Bayesian nonparametric model with the parameter of interest defined as
minimising an expected negative log-likelihood under an unknown sampling
distribution. This interpretation enables us to extend the weighted likelihood
bootstrap to posterior sampling for parameters minimizing an expected loss. We
call this method the loss-likelihood bootstrap. We make a connection between
this and general Bayesian updating, which is a way of updating prior belief
distributions without needing to construct a global probability model, yet
requires the calibration of two forms of loss function. The loss-likelihood
bootstrap is used to calibrate the general Bayesian posterior by matching
asymptotic Fisher information. We demonstrate the methodology on a number of
examples. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Self-consistent calculation of the flux-flow conductivity in diffusive superconductors,
Abstract: In the framework of Keldysh-Usadel kinetic theory, we study the temperature
dependence of flux-flow conductivity (FFC) in diffusive superconductors. By
using self-consistent vortex solutions we find the exact values of
dimensionless parameters that determine the diffusion-controlled FFC both in
the low-temperature limit and close to the critical temperature. Taking into
account the electron-phonon scattering we study the transition between
flux-flow regimes controlled either by the diffusion or the inelastic
relaxation of non-equilibrium quasiparticles. We demonstrate that the inelastic
electron-phonon relaxation leads to a strong suppression of the FFC compared
to previous estimates, making it possible to obtain numerical agreement
with experimental results. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: "Found in Translation": Predicting Outcomes of Complex Organic Chemistry Reactions using Neural Sequence-to-Sequence Models,
Abstract: There is an intuitive analogy between an organic chemist's understanding of a
compound and a language speaker's understanding of a word. Consequently, it is
possible to introduce the basic concepts and analyze potential impacts of
linguistic analysis to the world of organic chemistry. In this work, we cast
the reaction prediction task as a translation problem by introducing a
template-free sequence-to-sequence model, trained end-to-end and fully
data-driven. We propose a novel way of tokenization, which is arbitrarily
extensible with reaction information. With this approach, we demonstrate
results superior to the state-of-the-art solution by a significant margin on
the top-1 accuracy. Specifically, our approach achieves an accuracy of 80.1%
without relying on auxiliary knowledge such as reaction templates. Also, 66.4%
accuracy is reached on a larger and noisier dataset. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Efficient Decomposition of High-Rank Tensors,
Abstract: Tensors are a natural way to express correlations among many physical
variables, but storing tensors in a computer naively requires memory which
scales exponentially in the rank of the tensor. This is not optimal, as the
required memory is actually set not by the rank but by the mutual information
amongst the variables in question. Representations such as the tensor tree
perform near-optimally when the tree decomposition is chosen to reflect the
correlation structure in question, but making such a choice is non-trivial and
good heuristics remain highly context-specific. In this work I present two new
algorithms for choosing efficient tree decompositions, independent of the
physical context of the tensor. The first is a brute-force algorithm which
produces optimal decompositions up to truncation error but is generally
impractical for high-rank tensors, as the number of possible choices grows
exponentially in rank. The second is a greedy algorithm, and while it is not
optimal it performs extremely well in numerical experiments while having
runtime which makes it practical even for tensors of very high rank. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: A Decision Procedure for Herbrand Formulae without Skolemization,
Abstract: This paper describes a decision procedure for disjunctions of conjunctions of
anti-prenex normal forms of pure first-order logic (FOLDNFs) that do not
contain $\vee$ within the scope of quantifiers. The disjuncts of these FOLDNFs
are equivalent to prenex normal forms whose quantifier-free parts are
conjunctions of atomic and negated atomic formulae (= Herbrand formulae). In
contrast to the usual algorithms for Herbrand formulae, neither skolemization
nor unification algorithms with function symbols are applied. Instead, a
procedure is described that rests on nothing but equivalence transformations
within pure first-order logic (FOL). This procedure involves the application of
a calculus for negative normal forms (the NNF-calculus) with $A \dashv\vdash A
\wedge A$ (= $\wedge$I) as the sole rule that increases the complexity of given
FOLDNFs. The described algorithm illustrates how, in the case of Herbrand
formulae, decision problems can be solved through a systematic search for
proofs that reduce the number of applications of the rule $\wedge$I to a
minimum in the NNF-calculus. In the case of Herbrand formulae, it is even
possible to entirely abstain from applying $\wedge$I. Finally, it is shown how
the described procedure can be used within an optimized general search for
proofs of contradiction and what kind of questions arise for a
$\wedge$I-minimal proof strategy in the case of a general search for proofs of
contradiction. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: The role of relativistic many-body theory in probing new physics beyond the standard model via the electric dipole moments of diamagnetic atoms,
Abstract: The observation of electric dipole moments (EDMs) in atomic systems due to
parity and time-reversal violating (P,T-odd) interactions can probe new physics
beyond the standard model and also provide insights into the matter-antimatter
asymmetry in the Universe. The EDMs of open-shell atomic systems are sensitive
to the electron EDM and the P,T-odd scalar-pseudoscalar (S-PS) semi-leptonic
interaction, but the dominant contributions to the EDMs of diamagnetic atoms
come from the hadronic and tensor-pseudotensor (T-PT) semi-leptonic
interactions. Several diamagnetic atoms like $^{129}$Xe, $^{171}$Yb,
$^{199}$Hg, $^{223}$Rn, and $^{225}$Ra are candidates for the experimental
search for the possible existence of EDMs, and among these $^{199}$Hg has
yielded the lowest limit till date. The T or CP violating coupling constants of
the aforementioned interactions can be extracted from these measurements by
combining with atomic and nuclear calculations. In this work, we report the
calculations of the EDMs of the above atoms by including both the
electromagnetic and P,T-odd violating interactions simultaneously. These
calculations are performed by employing relativistic many-body methods based on
the random phase approximation (RPA) and the singles and doubles
coupled-cluster (CCSD) method starting with the Dirac-Hartree-Fock (DHF) wave
function in both cases. The differences in the results from both the methods
shed light on the importance of the non-core-polarization electron correlation
effects that are accounted for by the CCSD method. We also determine electric
dipole polarizabilities of these atoms, which have computational similarities
with EDMs and compare them with the available experimental and other
theoretical results to assess the accuracy of our calculations. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Security Incident Recognition and Reporting (SIRR): An Industrial Perspective,
Abstract: Reports and press releases highlight that security incidents continue to
plague organizations. While researchers and practitioners alike endeavor to
identify and implement realistic security solutions to prevent incidents from
occurring, the ability to initially identify a security incident is paramount
when researching a security incident lifecycle. Hence, this research
investigates the ability of employees in a Global Fortune 500 financial
organization, through internal electronic surveys, to recognize and report
security incidents to pursue a more holistic security posture. The research
contribution is an initial insight into security incident perceptions by
employees in the financial sector as well as serving as an initial guide for
future security incident recognition and reporting initiatives. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Finance"
] |
Title: On attainability of optimal controls in coefficients for system of Hammerstein type with anisotropic p-Laplacian,
Abstract: In this paper we consider an optimal control problem for the coupled system
of a nonlinear monotone Dirichlet problem with anisotropic p-Laplacian and
matrix-valued nonsmooth controls in its coefficients and a nonlinear equation
of Hammerstein type. Using the direct method in calculus of variations, we
prove the existence of an optimal control for the considered problem and provide
sensitivity analysis for a specific case of the considered problem with respect to
two-parameter regularization. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Personalizing Path-Specific Effects,
Abstract: Unlike classical causal inference, which often has an average causal effect
of a treatment within a population as a target, in settings such as
personalized medicine, the goal is to map a given unit's characteristics to a
treatment tailored to maximize the expected outcome for that unit. Obtaining
high-quality mappings of this type is the goal of the dynamic regime literature
(Chakraborty and Moodie 2013), with connections to reinforcement learning and
experimental design. Aside from the average treatment effects, mechanisms
behind causal relationships are also of interest. A well-studied approach to
mechanism analysis is establishing average effects along with a particular set
of causal pathways, in the simplest case the direct and indirect effects.
Estimating such effects is the subject of the mediation analysis literature
(Robins and Greenland 1992; Pearl 2001).
In this paper, we consider how unit characteristics may be used to tailor a
treatment assignment strategy that maximizes a particular path-specific effect.
In healthcare applications, finding such a policy is of interest if, for
instance, we are interested in maximizing the chemical effect of a drug on an
outcome (corresponding to the direct effect), while assuming drug adherence
(corresponding to the indirect effect) is set to some reference level. To solve
our problem, we define counterfactuals associated with path-specific effects of
a policy, give a general identification algorithm for these counterfactuals,
give a proof of completeness, and show how classification algorithms in machine
learning (Chen, Zeng, and Kosorok 2016) may be used to find a high-quality
policy. We validate our approach via a simulation study. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Computer Science"
] |
Title: WASP-12b: A Mass-Losing Extremely Hot Jupiter,
Abstract: WASP-12b is an extreme hot Jupiter in a 1 day orbit, suffering profound
irradiation from its F type host star. The planet is surrounded by a
translucent exosphere which overfills the Roche lobe and produces
line-blanketing absorption in the near-UV. The planet is losing mass. Another
unusual property of the WASP-12 system is that observed chromospheric emission
from the star is anomalously low: WASP-12 is an extreme outlier amongst
thousands of stars when the log $R^{'}_{HK}$ chromospheric activity indicator
is considered. Occam's razor suggests these two extremely rare properties
coincide in this system because they are causally related. The absence of the
expected chromospheric emission is attributable to absorption by a diffuse
circumstellar gas shroud which surrounds the entire planetary system and fills
our line of sight to the chromospherically active regions of the star. This
circumstellar gas shroud is probably fed by mass loss from WASP-12b. The
orbital eccentricity of WASP-12b is small but may be non-zero. The planet is
part of a hierarchical quadruple system; its current orbit is consistent with
prior secular dynamical evolution leading to a highly eccentric orbit followed
by tidal circularization. When compared with the Galaxy's population of
planets, WASP-12b lies on the upper boundary of the sub-Jovian desert in both
the $(M_{\rm P}, P)$ and $(R_{\rm P}, P)$ planes. Determining the mass loss
rate for WASP-12b will illuminate the mechanism(s) responsible for the
sub-Jovian desert. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: The Extinction Properties of and Distance to the Highly Reddened Type Ia Supernova SN 2012cu,
Abstract: Correction of Type Ia Supernova brightnesses for extinction by dust has
proven to be a vexing problem. Here we study the dust foreground to the highly
reddened SN 2012cu, which is projected onto a dust lane in the galaxy NGC 4772.
The analysis is based on multi-epoch, spectrophotometric observations spanning
3,300 - 9,200 {\AA}, obtained by the Nearby Supernova Factory. Phase-matched
comparison of the spectroscopically twinned SN 2012cu and SN 2011fe across 10
epochs results in the best-fit color excess of (E(B-V), RMS) = (1.00, 0.03) and
total-to-selective extinction ratio of (R_V, RMS) = (2.95, 0.08) toward SN
2012cu within its host galaxy. We further identify several diffuse interstellar
bands, and compare the 5780 {\AA} band with the dust-to-band ratio for the
Milky Way. Overall, we find the foreground dust-extinction properties for SN
2012cu to be consistent with those of the Milky Way. Furthermore we find no
evidence for significant time variation in any of these extinction tracers. We
also compare the dust extinction curve models of Cardelli et al. (1989),
O'Donnell (1994), and Fitzpatrick (1999), and find the predictions of
Fitzpatrick (1999) fit SN 2012cu the best. Finally, the distance to NGC 4772,
the host of SN 2012cu, at a redshift of z = 0.0035, often assigned to the Virgo
Southern Extension, is determined to be 16.6$\pm$1.1 Mpc. We compare this
result with distance measurements in the literature. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On Whitham and related equations,
Abstract: The aim of this paper is to study, via theoretical analysis and numerical
simulations, the dynamics of Whitham and related equations. In particular we
establish rigorous bounds between solutions of the Whitham and KdV equations
and provide some insights into the dynamics of the Whitham equation in
different regimes, some of them being outside the range of validity of the
Whitham equation as a water waves model. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Analytic Connectivity in General Hypergraphs,
Abstract: In this paper we extend the known results of analytic connectivity to
non-uniform hypergraphs. We prove a modified Cheeger's inequality and also give
a bound on analytic connectivity with respect to the degree sequence and
diameter of a hypergraph. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics"
] |
Title: Optimal VWAP execution under transient price impact,
Abstract: We solve the problem of optimal liquidation with volume weighted average
price (VWAP) benchmark when the market impact is linear and transient. Our
setting is indeed more general as it considers the case when the trading
interval is not necessarily coincident with the benchmark interval:
Implementation Shortfall and Target Close execution are shown to be particular
cases of our setting. We find explicit solutions in continuous and discrete
time considering risk averse investors having a CARA utility function. Finally,
we show that, contrary to what is observed for Implementation Shortfall, the
optimal VWAP solution contains both buy and sell trades even when the decay
kernel is convex. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance",
"Mathematics"
] |
Title: Image-derived generative modeling of pseudo-macromolecular structures - towards the statistical assessment of Electron CryoTomography template matching,
Abstract: Cellular Electron CryoTomography (CECT) is a 3D imaging technique that
captures information about the structure and spatial organization of
macromolecular complexes within single cells, in near-native state and at
sub-molecular resolution. Although template matching is often used to locate
macromolecules in a CECT image, it is insufficient as it only measures the
relative structural similarity. Therefore, it is preferable to assess the
statistical credibility of the decision through hypothesis testing, requiring
many templates derived from a diverse population of macromolecular structures.
Due to the very limited number of known structures, we need a generative model
to efficiently and reliably sample pseudo-structures from the complex
distribution of macromolecular structures. To address this challenge, we
propose a novel image-derived approach for performing hypothesis testing for
template matching by constructing generative models using the generative
adversarial network. Finally, we conducted hypothesis testing experiments for
template matching on both simulated and experimental subtomograms, allowing us
to conclude the identity of subtomograms with high statistical credibility and
significantly reducing false positives. | [
0,
0,
0,
1,
1,
0
] | [
"Quantitative Biology",
"Statistics",
"Computer Science"
] |
Title: Structure, magnetic susceptibility and specific heat of the spin-orbital-liquid candidate FeSc2S4: Influence of Fe off-stoichiometry,
Abstract: We report structural, susceptibility and specific heat studies of
stoichiometric and off-stoichiometric poly- and single crystals of the A-site
spinel compound FeSc2S4. In stoichiometric samples no long-range magnetic order
is found down to 1.8 K. The magnetic susceptibility of these samples is field
independent in the temperature range 10 - 400 K and does not show irreversible
effects at low temperatures. In contrast, the magnetic susceptibility of
samples with iron excess shows substantial field dependence at high
temperatures and manifests a pronounced magnetic irreversibility at low
temperatures with a difference between ZFC and FC susceptibilities and a
maximum at 10 K reminiscent of a magnetic transition. Single crystal x-ray
diffraction of the stoichiometric samples revealed a single phase spinel
structure without site inversion. In single crystalline samples with Fe excess
besides the main spinel phase a second ordered single-crystal phase was
detected with the diffraction pattern of a vacancy-ordered superstructure of
iron sulfide, close to the 5C polytype Fe9S10. Specific heat studies reveal a
broad anomaly, which evolves below 20 K in both stoichiometric and
off-stoichiometric crystals. We show that the low-temperature specific heat can
be well described by considering the low-lying spin-orbital electronic levels
of Fe2+ ions. Our results demonstrate significant influence of excess Fe ions
on intrinsic magnetic behavior of FeSc2S4 and provide support for the
spin-orbital liquid scenario proposed in earlier studies for the stoichiometric
compound. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Directed Information as Privacy Measure in Cloud-based Control,
Abstract: We consider cloud-based control scenarios in which clients with local control
tasks outsource their computational or physical duties to a cloud service
provider. In order to address privacy concerns in such a control architecture,
we first investigate the issue of finding an appropriate privacy measure for
clients who desire to keep local state information as private as possible
during the control operation. Specifically, we justify the use of Kramer's
notion of causally conditioned directed information as a measure of privacy
loss based on an axiomatic argument. Then we propose a methodology to design an
optimal "privacy filter" that minimizes privacy loss while a given level of
control performance is guaranteed. We show in particular that the optimal
privacy filter for cloud-based Linear-Quadratic-Gaussian (LQG) control can be
synthesized by a Linear-Matrix-Inequality (LMI) algorithm. The trade-off in the
design is illustrated by a numerical example. | [
0,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Code-division multiplexed resistive pulse sensor networks for spatio-temporal detection of particles in microfluidic devices,
Abstract: Spatial separation of suspended particles based on contrast in their physical
or chemical properties forms the basis of various biological assays performed
on lab-on-a-chip devices. To electronically acquire this information, we have
recently introduced a microfluidic sensing platform, called Microfluidic CODES,
which combines resistive pulse sensing with code division multiple access to
multiplex a network of integrated electrical sensors. In this
paper, we enhance the multiplexing capacity of the Microfluidic CODES by
employing sensors that generate non-orthogonal code waveforms and a new
decoding algorithm that combines machine learning techniques with minimum
mean-squared error estimation. As a proof of principle, we fabricated a
microfluidic device with a network of 10 code-multiplexed sensors and
characterized it using cells suspended in phosphate buffer saline solution. | [
0,
0,
0,
1,
0,
0
] | [
"Quantitative Biology",
"Physics",
"Computer Science"
] |
Title: Navigability evaluation of complex networks by greedy routing efficiency,
Abstract: Network navigability is a key feature of complex networked systems. For a
network embedded in a geometrical space, maximization of greedy routing (GR)
measures based on the node geometrical coordinates can ensure efficient greedy
navigability. In PNAS, Seguin et al. (PNAS 2018, vol. 115, no. 24) define a
measure for quantifying the efficiency of brain network navigability in the
Euclidean space, referred to as the efficiency ratio, whose formula exactly
coincides with the GR-score (GR-efficiency) previously published by Muscoloni
et al. (Nature Communications 2017, vol. 8, no. 1615). In this Letter, we point
out potential flaws in the study of Seguin et al. regarding the discussion of
the GR evaluation. In particular, we revise the concept of GR navigability,
together with a careful discussion of the advantage offered by the newly proposed
GR-efficiency measure in comparison to the main measures previously adopted in
literature. Finally, we clarify and standardize the GR-efficiency terminology
in order to simplify and facilitate the discussion in future studies. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: PIMKL: Pathway Induced Multiple Kernel Learning,
Abstract: Reliable identification of molecular biomarkers is essential for accurate
patient stratification. While state-of-the-art machine learning approaches for
sample classification continue to push boundaries in terms of performance, most
of these methods are not able to integrate different data types and lack
generalization power, limiting their application in a clinical setting.
Furthermore, many methods behave as black boxes, and we have very little
understanding about the mechanisms that lead to the prediction. While
opaqueness concerning machine behaviour might not be a problem in deterministic
domains, in health care, providing explanations about the molecular factors and
phenotypes that are driving the classification is crucial to build trust in the
performance of the predictive system. We propose Pathway Induced Multiple
Kernel Learning (PIMKL), a novel methodology to reliably classify samples that
can also help gain insights into the molecular mechanisms that underlie the
classification. PIMKL exploits prior knowledge in the form of a molecular
interaction network and annotated gene sets, by optimizing a mixture of
pathway-induced kernels using a Multiple Kernel Learning (MKL) algorithm, an
approach that has demonstrated excellent performance in different machine
learning applications. After optimizing the combination of kernels for
prediction of a specific phenotype, the model provides a stable molecular
signature that can be interpreted in the light of the ingested prior knowledge
and that can be used in transfer learning tasks. | [
0,
0,
0,
1,
1,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Spin-orbit interactions in optically active materials,
Abstract: We investigate the inherent influence of light polarization on the intensity
distribution in anisotropic media undergoing a local inhomogeneous rotation of
the principal axes. Whereas in general such a configuration implies a complicated
interaction between geometric and dynamic phase, we show that, in a medium
showing an inhomogeneous circular birefringence, the geometric phase vanishes.
Due to the spin-orbit interaction, the two circular polarizations perceive
reversed spatial distribution of the dynamic phase. Based upon this effect,
polarization-selective lenses, waveguides and beam deflectors are proposed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Automatic White-Box Testing of First-Order Logic Ontologies,
Abstract: Formal ontologies are axiomatizations in a logic-based formalism. The
development of formal ontologies, and their important role in the Semantic Web
area, is generating considerable research on the use of automated reasoning
techniques and tools that help in ontology engineering. One of the main aims is
to refine and to improve axiomatizations for enabling automated reasoning tools
to efficiently infer reliable information. Defects in the axiomatization can
not only cause wrong inferences, but can also hinder the inference of expected
information, either by increasing the computational cost of, or even
preventing, the inference. In this paper, we introduce a novel, fully automatic
white-box testing framework for first-order logic ontologies. Our methodology
is based on the detection of inference-based redundancies in the given
axiomatization. The application of the proposed testing method is fully
automatic since a) the automated generation of tests is guided only by the
syntax of axioms and b) the evaluation of tests is performed by automated
theorem provers. Our proposal enables the detection of defects and serves to
certify the degree of suitability --for reasoning purposes-- of every axiom. We
formally define the set of tests that are generated from any axiom and prove
that every test is logically related to redundancies in the axiom from which
the test has been generated. We have implemented our method and used this
implementation to automatically detect several non-trivial defects that were
hidden in various first-order logic ontologies. Throughout the paper we provide
illustrative examples of these defects, explain how they were found, and how
each proof --given by an automated theorem-prover-- provides useful hints on
the nature of each defect. Additionally, by correcting all the detected
defects, we have obtained an improved version of one of the tested ontologies:
Adimen-SUMO. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: On the correspondence of deviances and maximum likelihood and interval estimates from log-linear to logistic regression modelling,
Abstract: Consider a set of categorical variables $\mathcal{P}$ where at least one,
denoted by $Y$, is binary. The log-linear model that describes the counts in
the resulting contingency table implies a specific logistic regression model,
with the binary variable as the outcome. Extending results in Christensen
(1997), by also considering the case where factors present in the contingency
table disappear from the logistic regression model, we prove that the Maximum
Likelihood Estimate (MLE) for the parameters of the logistic regression equals
the MLE for the corresponding parameters of the log-linear model. We prove
that, asymptotically, standard errors for the two sets of parameters are also
equal. Subsequently, Wald confidence intervals are asymptotically equal. These
results demonstrate the extent to which inferences from the log-linear
framework can be translated to inferences within the logistic regression
framework, on the magnitude of main effects and interactions. Finally, we prove
that the deviance of the log-linear model is equal to the deviance of the
corresponding logistic regression, provided that the latter is fitted to a
dataset where no cell observations are merged when one or more factors in
$\mathcal{P} \setminus \{ Y \}$ become obsolete. We illustrate the derived
results with the analysis of a real dataset. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Ultra-High Electro-Optic Activity Demonstrated in a Silicon-Organic Hybrid (SOH) Modulator,
Abstract: Efficient electro-optic (EO) modulators crucially rely on advanced materials
that exhibit strong electro-optic activity and that can be integrated into
high-speed and efficient phase shifter structures. In this paper, we
demonstrate ultra-high in-device EO figures of merit of up to $n^3r_{33}$ = 2300 pm/V
achieved in a silicon-organic hybrid (SOH) Mach-Zehnder Modulator (MZM) using
the EO chromophore JRD1. This is the highest material-related in-device EO
figure of merit hitherto achieved in a high-speed modulator at any operating
wavelength. The {\pi}-voltage of the 1.5 mm-long device amounts to 210 mV,
leading to a voltage-length product of U{\pi}L = 320 V{\mu}m - the lowest value
reported for MZMs based on low-loss dielectric waveguides. The
viability of the devices is demonstrated by generating high-quality
on-off-keying (OOK) signals at 40 Gbit/s with Q factors in excess of 8 at a
drive voltage as low as 140 mVpp. We expect that efficient high-speed EO
modulators will not only have major impact in the field of optical
communications, but will also open new avenues towards ultra-fast
photonic-electronic signal processing. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games,
Abstract: How should an AI-based explanation system explain an agent's complex behavior
to ordinary end users who have no background in AI? Answering this question is
an active research area, for if an AI-based explanation system could
effectively explain intelligent agents' behavior, it could enable the end users
to understand, assess, and appropriately trust (or distrust) the agents
attempting to help them. To provide insights into this question, we turned to
human expert explainers in the real-time strategy domain, "shoutcasters", to
understand (1) how they foraged in an evolving strategy game in real time, (2)
how they assessed the players' behaviors, and (3) how they constructed
pertinent and timely explanations out of their insights and delivered them to
their audience. The results provided insights into shoutcasters' foraging
strategies for gleaning information necessary to assess and explain the
players; a characterization of the types of implicit questions shoutcasters
answered; and implications for creating explanations by using the patterns | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Focusing light through dynamical samples using fast closed-loop wavefront optimization,
Abstract: We describe a fast closed-loop optimization wavefront shaping system able to
focus light through dynamic scattering media. A MEMS-based spatial light
modulator (SLM), a fast photodetector and FPGA electronics are combined to
implement a closed-loop optimization of a wavefront with a single mode
optimization rate of 4.1 kHz. The system performance is demonstrated by
focusing light through colloidal solutions of TiO2 particles in glycerol with
tunable temporal stability. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: The curvature estimates for convex solutions of some fully nonlinear Hessian type equations,
Abstract: Curvature estimates for the quotient curvature equation do not always exist,
even in the convex setting \cite{GRW}. It is therefore a natural question to
find other types of elliptic equations possessing curvature estimates. In this
paper, we discuss the existence of curvature estimates for fully nonlinear
elliptic equations defined by symmetric polynomials, mainly linear combinations
of elementary symmetric polynomials. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: A Real-Time Autonomous Highway Accident Detection Model Based on Big Data Processing and Computational Intelligence,
Abstract: Due to increasing urban population and growing number of motor vehicles,
traffic congestion is becoming a major problem of the 21st century. One of the
main reasons behind traffic congestion is accidents which can not only result
in casualties and losses for the participants, but also in wasted and lost time
for others stuck behind the wheel. Early detection of an accident
can save lives, provides quicker road openings, hence decreases wasted time and
resources, and increases efficiency. In this study, we propose a preliminary
real-time autonomous accident-detection system based on computational
intelligence techniques. Istanbul City traffic-flow data for the year 2015 from
various sensor locations are populated using big data processing methodologies.
The extracted features are then fed into a nearest neighbor model, a regression
tree, and a feed-forward neural network model. For the output, the possibility
of an occurrence of an accident is predicted. The results indicate that even
though the number of false alarms dominates the real accident cases, the system
can still provide useful information that can be used for status verification
and early reaction to possible accidents. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Asymptotic Eigenfunctions for a class of Difference Operators,
Abstract: We analyze a general class of difference operators $H_\varepsilon =
T_\varepsilon + V_\varepsilon$ on $\ell^2(\varepsilon \mathbb{Z}^d)$, where
$V_\varepsilon$ is a one-well potential and $\varepsilon$ is a small parameter.
We construct formal asymptotic expansions of WKB-type for eigenfunctions
associated with the low lying eigenvalues of $H_\varepsilon$. These are
obtained from eigenfunctions or quasimodes for the operator $H_\varepsilon$,
acting on $L^2(\mathbb{R}^d)$, via restriction to the lattice
$\varepsilon\mathbb{Z}^d$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Britannia Rule the Waves,
Abstract: The students are introduced to navigation in general and the longitude
problem in particular. A few videos provide insight into scientific and
historical facts related to the issue. Then, the students learn in two steps
how longitude can be derived from time measurements. They first build a
Longitude Clock that visualises the math behind the concept. They use it to
determine the longitudes corresponding to five time measurements. In the second
step, they assume the position of James Cook's navigator and plot the location
of seven destinations on Cook's second voyage between 1772 and 1775. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning,
Abstract: We introduce Error Forward-Propagation, a biologically plausible mechanism to
propagate error feedback forward through the network. Architectural constraints
on connectivity are virtually eliminated for error feedback in the brain;
systematic backward connectivity is not used or needed to deliver error
feedback. Feedback as a means of assigning credit to neurons earlier in the
forward pathway for their contribution to the final output is thought to be
used in learning in the brain. How the brain solves the credit assignment
problem is unclear. In machine learning, error backpropagation is a highly
successful mechanism for credit assignment in deep multilayered networks.
Backpropagation requires symmetric reciprocal connectivity for every neuron.
From a biological perspective, there is no evidence of such an architectural
constraint, which makes backpropagation implausible for learning in the brain.
This architectural constraint is reduced with the use of random feedback
weights. Models using random feedback weights require backward connectivity
patterns for every neuron, but avoid symmetric weights and reciprocal
connections. In this paper, we practically remove this architectural
constraint, requiring only a backward loop connection for effective error
feedback. We propose reusing the forward connections to deliver the error
feedback by feeding the outputs into the input receiving layer. This mechanism,
Error Forward-Propagation, is a plausible basis for how error feedback occurs
deep in the brain independent of and yet in support of the functionality
underlying intricate network architectures. We show experimentally that
recurrent neural networks with two and three hidden layers can be trained using
Error Forward-Propagation on the MNIST and Fashion MNIST datasets, achieving
$1.90\%$ and $11\%$ generalization errors respectively. | [
0,
0,
0,
0,
1,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: $α$-Variational Inference with Statistical Guarantees,
Abstract: We propose a family of variational approximations to Bayesian posterior
distributions, called $\alpha$-VB, with provable statistical guarantees. The
standard variational approximation is a special case of $\alpha$-VB with
$\alpha=1$. When $\alpha \in(0,1]$, a novel class of variational inequalities
are developed for linking the Bayes risk under the variational approximation to
the objective function in the variational optimization problem, implying that
maximizing the evidence lower bound in variational inference has the effect of
minimizing the Bayes risk within the variational density family. Operating in a
frequentist setup, the variational inequalities imply that point estimates
constructed from the $\alpha$-VB procedure converge at an optimal rate to the
true parameter in a wide range of problems. We illustrate our general theory
with a number of examples, including the mean-field variational approximation
to (low)-high-dimensional Bayesian linear regression with spike and slab
priors, mixture of Gaussian models, latent Dirichlet allocation, and (mixture
of) Gaussian variational approximation in regular parametric models. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
Title: Experimental demonstration of an ultra-compact on-chip polarization controlling structure,
Abstract: We demonstrated a novel on-chip polarization controlling structure,
fabricated by standard 0.18-um foundry technology. It achieved polarization
rotation within a footprint of 0.726 um x 5.27 um and can be easily extended into
dynamic polarization controllers. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Letter-Based Speech Recognition with Gated ConvNets,
Abstract: In the recent literature, "end-to-end" speech systems often refer to
letter-based acoustic models trained in a sequence-to-sequence manner, either
via a recurrent model or via a structured output learning approach (such as
CTC). In contrast to traditional phone (or senone)-based approaches, these
"end-to-end'' approaches alleviate the need of word pronunciation modeling, and
do not require a "forced alignment" step at training time. Phone-based
approaches remain however state of the art on classical benchmarks. In this
paper, we propose a letter-based speech recognition system, leveraging a
ConvNet acoustic model. Key ingredients of the ConvNet are Gated Linear Units
and high dropout. The ConvNet is trained to map audio sequences to their
corresponding letter transcriptions, either via a classical CTC approach, or
via a recent variant called ASG. Coupled with a simple decoder at inference
time, our system matches the best existing letter-based systems on WSJ (in word
error rate), and shows near state of the art performance on LibriSpeech. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Classifying and Qualifying GUI Defects,
Abstract: Graphical user interfaces (GUIs) are integral parts of software systems that
require interactions from their users. Software testers have paid special
attention to GUI testing in the last decade, and have devised techniques that
are effective in finding several kinds of GUI errors. However, the introduction
of new types of interactions in GUIs (e.g., direct manipulation) presents new
kinds of errors that are not targeted by current testing techniques. We believe
that to advance GUI testing, the community needs a comprehensive and high level
GUI fault model, which incorporates all types of interactions. The work
detailed in this paper establishes 4 contributions: 1) A GUI fault model
designed to identify and classify GUI faults. 2) An empirical analysis for
assessing the relevance of the proposed fault model against failures found in
real GUIs. 3) An empirical assessment of two GUI testing tools (i.e. GUITAR and
Jubula) against those failures. 4) GUI mutants we've developed according to our
fault model. These mutants are freely available and can be reused by developers
for benchmarking their GUI testing tools. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A mechanistic model of connector hubs, modularity, and cognition,
Abstract: The human brain network is modular--comprised of communities of tightly
interconnected nodes. This network contains local hubs, which have many
connections within their own communities, and connector hubs, which have
connections diversely distributed across communities. A mechanistic
understanding of these hubs and how they support cognition has not been
demonstrated. Here, we leveraged individual differences in hub connectivity and
cognition. We show that a model of hub connectivity accurately predicts the
cognitive performance of 476 individuals in four distinct tasks. Moreover,
there is a general optimal network structure for cognitive
performance--individuals with diversely connected hubs and consequent modular
brain networks exhibit increased cognitive performance, regardless of the task.
Critically, we find evidence consistent with a mechanistic model in which
connector hubs tune the connectivity of their neighbors to be more modular
while allowing for task appropriate information integration across communities,
which increases global modularity and cognitive performance. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Statistics"
] |
Title: Some studies using capillary for flow control in a closed loop gas recirculation system,
Abstract: A pilot unit of a closed loop gas (CLS) mixing and distribution system for
the INO project was designed and is being operated with (1.8 x 1.9) m^2 glass
RPCs (Resistive Plate Chambers). Since the performance of an RPC depends on the
quality and quantity of the gas mixture being used, a number of studies on
controlling the flow and optimizing the gas mixture are being carried out. This
paper highlights the effect of a capillary, used as a dynamic impedance element,
on the differential pressure across an RPC detector in a closed loop gas system.
The flow versus pressure variation with different types of capillaries, and with
the different gases used in an RPC, is presented. An attempt is also made to
measure the transient time of the gas flow through the capillary. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Verifiable Light-Weight Monitoring for Certificate Transparency Logs,
Abstract: Trust in publicly verifiable Certificate Transparency (CT) logs is reduced
through cryptography, gossip, auditing, and monitoring. The role of a monitor
is to observe each and every log entry, looking for suspicious certificates
that interest the entity running the monitor. While anyone can run a monitor,
it requires continuous operation and copies of the logs to be inspected. This
has led to the emergence of monitoring-as-a-service: a trusted party runs the
monitor and provides registered subjects with selective certificate
notifications, e.g., "notify me of all foo.com certificates". We present a
CT/bis extension for verifiable light-weight monitoring that enables subjects
to verify the correctness of such notifications, reducing the trust that is
placed in these monitors. Our extension supports verifiable monitoring of
wild-card domains and piggybacks on CT's existing gossip-audit security model. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Cryptography"
] |
Title: In silico optimization of critical currents in superconductors,
Abstract: For many technological applications of superconductors the performance of a
material is determined by the highest current it can carry losslessly - the
critical current. In turn, the critical current can be controlled by adding
non-superconducting defects in the superconductor matrix. Here we report on
a systematic comparison of different local and global optimization strategies to
predict optimal structures of pinning centers leading to the highest possible
critical currents. We demonstrate performance of these methods for a
superconductor with randomly placed spherical, elliptical, and columnar
defects. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Attracting sequences of holomorphic automorphisms that agree to a certain order,
Abstract: The basin of attraction of a uniformly attracting sequence of holomorphic
automorphisms that agree to a certain order at the common fixed point is
biholomorphic to $\mathbb{C}^n$. We also give sufficient estimates of how large
this order has to be. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Kohn anomalies in momentum dependence of magnetic susceptibility of some three-dimensional systems,
Abstract: We study the question of the presence of Kohn points, which at low
temperatures yield a non-analytic momentum dependence of the magnetic
susceptibility near its maximum, in the electronic spectrum of some
three-dimensional systems. In particular, we consider a one-band model on the
face-centered cubic lattice with hopping between nearest and next-nearest
neighbors, which models some aspects of the dispersion of ZrZn$_2$, and a
two-band model on the body-centered cubic lattice, modeling
the dispersion of chromium. For the former model it is shown that Kohn points
yielding maxima of susceptibility exist in a certain (sufficiently wide) region
of electronic concentrations; the dependence of the wave vectors, corresponding
to the maxima, on the chemical potential is investigated. For the two-band
model we show the existence of lines of Kohn points yielding a maximum of the
susceptibility, whose position agrees with the results of band-structure
calculations and with experimental data on the wave vector of the antiferromagnetism of
chromium. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Stability conditions, $τ$-tilting Theory and Maximal Green Sequences,
Abstract: Extending the notion of maximal green sequences to an abelian category, we
characterize the stability functions, as defined by Rudakov, that induce a
maximal green sequence in an abelian length category. Furthermore, we use
$\tau$-tilting theory to give a description of the wall and chamber structure
of any finite dimensional algebra. Finally we introduce the notion of green
paths in the wall and chamber structure of an algebra and show that green paths
serve as a geometrical generalization of maximal green sequences in this context. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: On Asymptotic Standard Normality of the Two Sample Pivot,
Abstract: The asymptotic solution to the problem of comparing the means of two
heteroscedastic populations, based on two random samples from the populations,
hinges on the pivot underpinning the construction of the confidence interval
and the test statistic being asymptotically standard Normal, which is known to
happen if the two samples are independent and the ratio of the sample sizes
converges to a finite positive number. This restriction on the asymptotic
behavior of the ratio of the sample sizes carries the risk of rendering the
asymptotic justification of the finite sample approximation invalid. It turns
out that neither the restriction on the asymptotic behavior of the ratio of the
sample sizes nor the assumption of cross sample independence is necessary for
the pivotal convergence in question to take place. If the joint distribution of
the standardized sample means converges to a spherically symmetric
distribution, then that distribution must be bivariate standard Normal (which
can happen without the assumption of cross sample independence), and the
aforesaid pivotal convergence holds. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
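The abstract above does not display the pivot itself; for reference, the usual Welch-type two-sample pivot it presumably refers to (our notation, assuming independent samples of sizes $m$ and $n$) is:

```latex
% Two-sample pivot for comparing means of heteroscedastic populations;
% \bar X, \bar Y are the sample means and S_X^2, S_Y^2 the sample variances.
T \;=\; \frac{(\bar X - \bar Y) - (\mu_X - \mu_Y)}{\sqrt{S_X^2/m + S_Y^2/n}}
\;\xrightarrow{\;d\;}\; \mathcal{N}(0,1)
```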
Title: Breakthrough revisited: investigating the requirements for growth of dust beyond the bouncing barrier,
Abstract: For grain growth to proceed effectively and lead to planet formation a number
of barriers to growth must be overcome. One such barrier, relevant for compact
grains in the inner regions of the disc, is the `bouncing barrier' in which
large grains ($\sim$ mm size) tend to bounce off each other rather than
sticking. However, by maintaining a population of small grains it has been
suggested that cm-size particles may grow rapidly by sweeping up these small
grains. We present the first numerically resolved investigation into the
conditions under which grains may be lucky enough to grow beyond the bouncing
barrier by a series of rare collisions leading to growth (so-called
`breakthrough'). Our models support previous results, and show that in simple
models breakthrough requires the mass ratio at which high velocity collisions
transition to growth instead of causing fragmentation to be low, $\phi \lesssim
50$. However, in models that take into account the dependence of the
fragmentation threshold on mass-ratio, we find breakthrough occurs more
readily, even if mass transfer is relatively inefficient. This suggests that
bouncing may only slow down growth, rather than preventing growth beyond a
threshold barrier. However, even when growth beyond the bouncing barrier is
possible, radial drift will usually prevent growth to arbitrarily large sizes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Density and current profiles in $U_q(A^{(1)}_2)$ zero range process,
Abstract: The stochastic $R$ matrix for $U_q(A^{(1)}_n)$ introduced recently gives rise
to an integrable zero range process of $n$ classes of particles in one
dimension. For $n=2$ we investigate how finitely many first class particles
fixed as defects influence the grand canonical ensemble of the second class
particles. By using the matrix product stationary probabilities involving
infinite products of $q$-bosons, exact formulas are derived for the local
density and current of the second class particles in the large volume limit. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Responses of Pre-transitional Materials with Stress-Generating Defects to External Stimuli: Superelasticity, Supermagnetostriction, Invar and Elinvar Effects,
Abstract: We considered a generic case of pre-transitional materials with static
stress-generating defects, dislocations and coherent nano-precipitates, at
temperatures close but above the starting temperature of martensitic
transformation, Ms. Using the Phase Field Microelasticity theory and 3D
simulation, we demonstrated that the local stress generated by these defects
produces equilibrium nano-size martensitic embryos (MEs) in pre-transitional
state, these embryos being orientation variants of martensite. This is a new
type of equilibrium: the thermoelastic equilibrium between the MEs and parent
phase in which the total volume of MEs and their size are equilibrium internal
thermodynamic parameters. This thermoelastic equilibrium exists only in
presence of the stress-generating defects. Cooling the pre-transitional state
towards Ms or applying the external stimuli, stress or magnetic field, results
in a shift of the thermoelastic equilibrium provided by a reversible
anhysteretic growth of MEs that results in a giant ME-generated macroscopic
strain. In particular, this effect can be associated with the diffuse phase
transformations observed in some ferroelectrics above the Curie point. It is
shown that the ME-generated strain is giant and describes superelasticity if
the applied field is a stress. It describes supermagnetostriction if the
martensite (or austenite) is ferromagnetic and the applied field is a magnetic
field. In general, the material with defects can be a multiferroic with a giant
multiferroic response if the parent and martensitic phase have different
ferroic properties. Finally, the ME-generated strain may explain or, at least,
contribute to the Invar and Elinvar effects that are typically observed in
pre-transitional austenite. The thermoelastic equilibrium and all these effects
exist only if the interaction between the defects and MEs is infinite-range. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Materials Science"
] |
Title: Logics for Word Transductions with Synthesis,
Abstract: We introduce a logic, called LT, to express properties of transductions, i.e.
binary relations from input to output (finite) words. In LT, the input/output
dependencies are modelled via an origin function which associates to any
position of the output word, the input position from which it originates. LT is
well-suited to express relations (which are not necessarily functional), and
can express all regular functional transductions, i.e. transductions definable
for instance by deterministic two-way transducers. Despite its high expressive
power, LT has decidable satisfiability and equivalence problems, with tight
non-elementary and elementary complexities, depending on the specific
representation of LT-formulas. Our main contribution is a synthesis result:
from any transduction R defined in LT , it is possible to synthesise a regular
functional transduction f such that for all input words u in the domain of R, f
is defined and (u,f(u)) belongs to R. As a consequence, we obtain that any
functional transduction is regular iff it is LT-definable. We also investigate
the algorithmic and expressiveness properties of several extensions of LT, and
explicit a correspondence between transductions and data words. As a
side-result, we obtain a new decidable logic for data words. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Augmentor: An Image Augmentation Library for Machine Learning,
Abstract: The generation of artificial data based on existing observations, known as
data augmentation, is a technique used in machine learning to improve model
accuracy, generalisation, and to control overfitting. Augmentor is a software
package, available in both Python and Julia versions, that provides a high
level API for the expansion of image data using a stochastic, pipeline-based
approach which effectively allows for images to be sampled from a distribution
of augmented images at runtime. Augmentor provides methods for most standard
augmentation practices as well as several advanced features such as
label-preserving, randomised elastic distortions, and provides many helper
functions for typical augmentation tasks used in machine learning. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
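A minimal usage sketch of the Python version of the Augmentor package described above; the source directory and operation parameters are placeholders, and the method names should be checked against the package documentation rather than taken from the paper:

```python
# Hypothetical Augmentor pipeline: rotate, zoom and elastic distortion,
# then sample augmented images to disk. Paths and parameters are placeholders.
import Augmentor

p = Augmentor.Pipeline("data/images")    # directory of source images (placeholder)
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.zoom(probability=0.5, min_factor=1.05, max_factor=1.2)
p.random_distortion(probability=0.3, grid_width=4, grid_height=4, magnitude=4)
p.sample(1000)                           # generate 1000 augmented images
```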
Title: Multilevel Sequential${}^2$ Monte Carlo for Bayesian Inverse Problems,
Abstract: The identification of parameters in mathematical models using noisy
observations is a common task in uncertainty quantification. We employ the
framework of Bayesian inversion: we combine monitoring and observational data
with prior information to estimate the posterior distribution of a parameter.
Specifically, we are interested in the distribution of a diffusion coefficient
of an elliptic PDE. In this setting, the sample space is high-dimensional, and
each sample of the PDE solution is expensive. To address these issues we
propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the
approximation of the posterior distribution. Classical, single-level SMC
constructs a sequence of measures, starting with the prior distribution, and
finishing with the posterior distribution. The intermediate measures arise from
a tempering of the likelihood, or, equivalently, a rescaling of the noise. The
resolution of the PDE discretisation is fixed. In contrast, our estimator
employs a hierarchy of PDE discretisations to decrease the computational cost.
We construct a sequence of intermediate measures by decreasing the temperature
or by increasing the discretisation level at the same time. This idea builds on
and generalises the multi-resolution sampler proposed in [P.S. Koutsourelakis,
J. Comput. Phys., 228 (2009), pp. 6184-6211] where a bridging scheme is used to
transfer samples from coarse to fine discretisation levels. Importantly, our
choice between tempering and bridging is fully adaptive. We present numerical
experiments in 2D space, comparing our estimator to single-level SMC and the
multi-resolution sampler. | [
0,
0,
0,
1,
0,
0
] | [
"Mathematics",
"Statistics",
"Computer Science"
] |
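As a hedged sketch of the likelihood-tempering step that single-level SMC relies on (and which the multilevel estimator above interleaves with level changes), the following toy Python fragment reweights and resamples particles when the temperature is raised; it is illustrative only and does not implement the paper's adaptive choice between tempering and bridging:

```python
# Minimal sketch of one tempering step in a single-level SMC sampler.
import numpy as np

def temper_step(particles, log_lik, beta_old, beta_new, rng):
    """Reweight and resample particles when raising the likelihood temperature."""
    log_w = (beta_new - beta_old) * log_lik(particles)   # incremental weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)  # multinomial resampling
    return particles[idx]

# Toy example: Gaussian likelihood, standard-normal prior sample.
rng = np.random.default_rng(0)
theta = rng.normal(size=5000)
loglik = lambda th: -0.5 * (th - 1.0) ** 2
theta = temper_step(theta, loglik, beta_old=0.0, beta_new=0.25, rng=rng)
```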
Title: Multiparameter actuation of a neutrally-stable shell: a flexible gear-less motor,
Abstract: We have designed and tested experimentally a morphing structure consisting of
a neutrally stable thin cylindrical shell driven by a multiparameter
piezoelectric actuation. The shell is obtained by plastically deforming an
initially flat copper disk, so as to induce large isotropic and almost uniform
inelastic curvatures. Following the plastic deformation, in a perfectly
isotropic system, the shell is theoretically neutrally stable, possessing a
continuous manifold of stable cylindrical shapes corresponding to the rotation
of the axis of maximal curvature. Small imperfections render the actual
structure bistable, giving preferred orientations. A three-parameter
piezoelectric actuation, exerted through micro-fiber-composite actuators,
allows us to add a small perturbation to the plastic inelastic curvature and to
control the direction of maximal curvature. This actuation law is designed
through a geometrical analogy based on a fully non-linear inextensible
uniform-curvature shell model. We report on the fabrication, identification,
and experimental testing of a prototype and demonstrate the effectiveness of
the piezoelectric actuators in controlling its shape. The resulting motion is
an apparent rotation of the shell, controlled by the voltages as in a
"gear-less motor", which is, in reality, a precession of the axis of principal
curvature. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Engineering"
] |
Title: Learning Distributions of Meant Color,
Abstract: When a speaker says the name of a color, the color that they picture is not
necessarily the same as the listener imagines. Color is a grounded semantic
task, but that grounding is not a mapping of a single word (or phrase) to a
single point in color-space. Proper understanding of color language requires
the capacity to map a sequence of words to a probability distribution in
color-space. A distribution is required as there is no clear agreement between
people as to what a particular color describes -- different people have a
different idea of what it means to be `very dark orange'. We propose a novel
GRU-based model to handle this case. Learning how each word in a color name
contributes to the color described, allows for knowledge sharing between uses
of the words in different color names. This knowledge sharing significantly
improves predictive capacity for color names with sparse training data. The
extreme case of this challenge in data sparsity is for color names without any
direct training data. Our model is able to predict reasonable distributions for
these cases, as evaluated on a held-out dataset consisting only of such terms. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
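A minimal sketch, not the authors' architecture, of a GRU that maps a colour-name token sequence to a diagonal Gaussian over RGB space; the vocabulary size, dimensions and output parameterisation are placeholder choices:

```python
# Toy GRU model: token ids of a colour name -> mean and log-variance over RGB.
import torch
import torch.nn as nn

class ColorNameToDistribution(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.mean = nn.Linear(hidden, 3)      # RGB mean in [0, 1]
        self.log_var = nn.Linear(hidden, 3)   # per-channel log-variance

    def forward(self, token_ids):
        _, h = self.gru(self.emb(token_ids))  # h: (1, batch, hidden)
        h = h.squeeze(0)
        return torch.sigmoid(self.mean(h)), self.log_var(h)

model = ColorNameToDistribution()
mu, log_var = model(torch.tensor([[12, 7, 3]]))   # placeholder ids, e.g. "very dark orange"
print(mu.shape, log_var.shape)                    # torch.Size([1, 3]) twice
```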
Title: Coherent anti-Stokes Raman Scattering Lidar Using Slow Light: A Theoretical Study,
Abstract: We theoretically investigate a scheme in which backward coherent anti-Stokes
Raman scattering (CARS) is significantly enhanced by using slow light.
Specifically, we reduce the group velocity of the Stokes excitation pulse by
introducing a coupling laser that causes electromagnetically induced
transparency (EIT). When the Stokes pulse has a spatial length shorter than the
CARS wavelength, the backward CARS emission is significantly enhanced. We also
investigated the possibility of applying this scheme as a CARS lidar with O2 or
N2 as the EIT medium. We found that if a nanosecond laser with large pulse
energy (>1 J) and a telescope with a large aperture (~10 m) are used in the lidar
system, a CARS lidar could become much more sensitive than a spontaneous Raman
lidar. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Tests based on characterizations, and their efficiencies: a survey,
Abstract: A survey of goodness-of-fit and symmetry tests based on the characterization
properties of distributions is presented. This approach became popular in
recent years. In most cases the test statistics are functionals of
$U$-empirical processes. The limiting distributions and large deviations of new
statistics under the null hypothesis are described. Their local Bahadur
efficiency for various parametric alternatives is calculated and compared with
each other as well as with diverse previously known tests. We also describe new
directions of possible research in this domain. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Hyperprior on symmetric Dirichlet distribution,
Abstract: In this article we show how to place a vague hyperprior on the symmetric
Dirichlet distribution and update its parameter by adaptive rejection sampling
(ARS). Finally, we analyze this hyperprior in an over-fitted mixture model
through synthetic experiments. | [
1,
0,
0,
0,
0,
0
] | [
"Statistics",
"Mathematics"
] |
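An illustrative sketch of placing a vague Gamma hyperprior on the concentration parameter of a symmetric Dirichlet; here the update uses a random-walk Metropolis step on the log scale instead of the adaptive rejection sampling used in the article, and all numerical settings are placeholders:

```python
# Gamma(a0, b0) hyperprior on the symmetric-Dirichlet concentration alpha,
# updated by log-scale random-walk Metropolis (a stand-in for ARS).
import numpy as np
from scipy.special import gammaln

def log_post(alpha, P, a0=1.0, b0=1.0):
    """Log posterior of alpha given rows of P (probability vectors)."""
    if alpha <= 0:
        return -np.inf
    K = P.shape[1]
    lik = np.sum(gammaln(K * alpha) - K * gammaln(alpha)
                 + (alpha - 1.0) * np.log(P).sum(axis=1))
    return (a0 - 1.0) * np.log(alpha) - b0 * alpha + lik

def update_alpha(alpha, P, rng, step=0.3):
    prop = alpha * np.exp(step * rng.normal())        # log-scale random walk
    log_acc = log_post(prop, P) - log_post(alpha, P) + np.log(prop) - np.log(alpha)
    return prop if np.log(rng.uniform()) < log_acc else alpha

rng = np.random.default_rng(1)
P = rng.dirichlet(np.full(4, 2.0), size=200)          # synthetic mixture weights
alpha = 1.0
for _ in range(500):
    alpha = update_alpha(alpha, P, rng)
print("posterior draw of alpha:", alpha)
```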
Title: Ramsey expansions of metrically homogeneous graphs,
Abstract: We discuss the Ramsey property, the existence of a stationary independence
relation and the coherent extension property for partial isometries (coherent
EPPA) for all classes of metrically homogeneous graphs from Cherlin's
catalogue, which is conjectured to include all such structures. We show that,
with the exception of tree-like graphs, all metric spaces in the catalogue have
precompact Ramsey expansions (or lifts) with the expansion property. With two
exceptions we can also characterise the existence of a stationary independence
relation and the coherent EPPA.
Our results can be seen as a new contribution to Nešetřil's
classification programme of Ramsey classes and as empirical evidence of the
recent convergence in techniques employed to establish the Ramsey property, the
expansion (or lift or ordering) property, EPPA and the existence of a
stationary independence relation. At the heart of our proof is a canonical way
of completing edge-labelled graphs to metric spaces in Cherlin's classes. The
existence of such a "completion algorithm" then allows us to apply several
strong results in the areas that imply EPPA and respectively the Ramsey
property.
The main results have numerous corollaries on the automorphism groups of the
Fraïssé limits of the classes, such as amenability, unique ergodicity,
existence of universal minimal flows, ample generics, small index property,
21-Bergman property and Serre's property (FA). | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Rapidly star-forming galaxies adjacent to quasars at redshifts exceeding 6,
Abstract: The existence of massive ($10^{11}$ solar masses) elliptical galaxies by
redshift z~4 (when the Universe was 1.5 billion years old) necessitates the
presence of galaxies with star-formation rates exceeding 100 solar masses per
year at z>6 (corresponding to an age of the Universe of less than 1 billion
years). Surveys have discovered hundreds of galaxies at these early cosmic
epochs, but their star-formation rates are more than an order of magnitude
lower. The only known galaxies with very high star-formation rates at z>6 are,
with only one exception, the host galaxies of quasars, but these galaxies also
host accreting supermassive (more than $10^9$ solar masses) black holes, which
probably affect the properties of the galaxies. Here we report observations of
an emission line of singly ionized carbon ([CII] at a wavelength of 158
micrometres) in four galaxies at z>6 that are companions of quasars, with
velocity offsets of less than 600 kilometers per second and linear offsets of
less than 600 kiloparsecs. The discovery of these four galaxies was
serendipitous; they are close to their companion quasars and appear bright in
the far-infrared. On the basis of the [CII] measurements, we estimate
star-formation rates in the companions of more than 100 solar masses per year.
These sources are similar to the host galaxies of the quasars in [CII]
brightness, linewidth and implied dynamical masses, but do not show evidence
for accreting supermassive black holes. Similar systems have previously been
found at lower redshift. We find such close companions in four out of
twenty-five z>6 quasars surveyed, a fraction that needs to be accounted for in
simulations. If they are representative of the bright end of the [CII]
luminosity function, then they can account for the population of massive
elliptical galaxies at z~4 in terms of cosmic space density. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Measuring Territorial Control in Civil Wars Using Hidden Markov Models: A Data Informatics-Based Approach,
Abstract: Territorial control is a key aspect shaping the dynamics of civil war.
Despite its importance, we lack data on territorial control that are
fine-grained enough to account for subnational spatio-temporal variation and
that cover a large set of conflicts. To resolve this issue, we propose a
theoretical model of the relationship between territorial control and tactical
choice in civil war and outline how Hidden Markov Models (HMMs) are suitable to
capture theoretical intuitions and estimate levels of territorial control. We
discuss challenges of using HMMs in this application and mitigation strategies
for future work. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Biology"
] |
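A toy sketch of the HMM forward recursion underlying such an estimator; the three territorial-control states, the transition matrix and the emission probabilities below are invented for illustration and are not taken from the paper:

```python
# Forward (filtering) recursion for a 3-state HMM over territorial control.
import numpy as np

states = ["rebel control", "contested", "government control"]
A = np.array([[0.90, 0.08, 0.02],          # transition probabilities
              [0.10, 0.80, 0.10],
              [0.02, 0.08, 0.90]])
B = np.array([[0.70, 0.25, 0.05],          # P(observed tactic | state); columns:
              [0.30, 0.40, 0.30],          # conventional attack, terror tactic,
              [0.05, 0.25, 0.70]])         # no event
pi = np.array([1 / 3, 1 / 3, 1 / 3])

def forward(obs):
    """Return filtered state probabilities after a sequence of observed tactics."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

print(dict(zip(states, forward([0, 0, 1, 2, 2]).round(3))))
```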
Title: Virtual quandle for links in lens spaces,
Abstract: We construct a virtual quandle for links in lens spaces $L(p,q)$, with $q=1$.
This invariant has two valuable advantages over an ordinary fundamental quandle
for links in lens spaces: the virtual quandle is an essential invariant and the
presentation of the virtual quandle can be easily written from the band diagram
of a link. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Adaptation to Easy Data in Prediction with Limited Advice,
Abstract: We derive an online learning algorithm with improved regret guarantees for
`easy' loss sequences. We consider two types of `easiness': (a) stochastic loss
sequences and (b) adversarial loss sequences with small effective range of the
losses. While a number of algorithms have been proposed for exploiting small
effective range in the full information setting, Gerchinovitz and Lattimore
[2016] have shown the impossibility of regret scaling with the effective range
of the losses in the bandit setting. We show that just one additional
observation per round is sufficient to circumvent the impossibility result. The
proposed Second Order Difference Adjustments (SODA) algorithm requires no prior
knowledge of the effective range of the losses, $\varepsilon$, and achieves an
$O(\varepsilon \sqrt{KT \ln K}) + \tilde{O}(\varepsilon K \sqrt[4]{T})$
expected regret guarantee, where $T$ is the time horizon and $K$ is the number
of actions. The scaling with the effective loss range is achieved under
significantly weaker assumptions than those made by Cesa-Bianchi and Shamir
[2018] in an earlier attempt to circumvent the impossibility result. We also
provide a regret lower bound of $\Omega(\varepsilon\sqrt{T K})$, which almost
matches the upper bound. In addition, we show that in the stochastic setting
SODA achieves an $O\left(\sum_{a:\Delta_a>0}
\frac{K\varepsilon^2}{\Delta_a}\right)$ pseudo-regret bound that holds
simultaneously with the adversarial regret guarantee. In other words, SODA is
safe against an unrestricted oblivious adversary and provides improved regret
guarantees for at least two different types of `easiness' simultaneously. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
Title: Generalizing Distance Covariance to Measure and Test Multivariate Mutual Dependence,
Abstract: We propose three measures of mutual dependence between multiple random
vectors. All the measures are zero if and only if the random vectors are
mutually independent. The first measure generalizes distance covariance from
pairwise dependence to mutual dependence, while the other two measures are sums
of squared distance covariance. All the measures share similar properties and
asymptotic distributions to distance covariance, and capture non-linear and
non-monotone mutual dependence between the random vectors. Inspired by complete
and incomplete V-statistics, we define the empirical measures and simplified
empirical measures as a trade-off between the complexity and power when testing
mutual independence. Implementation of the tests is demonstrated by both
simulation results and real data examples. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
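For reference, a short sketch of the standard pairwise sample distance covariance of Szekely et al., the quantity the proposed mutual-dependence measures build upon; this is our own illustration, not the authors' code:

```python
# Squared sample distance covariance via double-centered distance matrices.
import numpy as np
from scipy.spatial.distance import cdist

def distance_covariance(X, Y):
    """Squared sample distance covariance between samples X (n x p) and Y (n x q)."""
    a = cdist(X, X)                          # pairwise Euclidean distances
    b = cdist(Y, Y)
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()   # double centering
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = x ** 2 + 0.1 * rng.normal(size=(500, 1))   # non-linear, non-monotone dependence
print(distance_covariance(x, y))                # clearly positive
print(distance_covariance(x, rng.normal(size=(500, 1))))  # near zero
```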
Title: Joint Routing, Scheduling and Power Control Providing Hard Deadline in Wireless Multihop Networks,
Abstract: We consider optimal/efficient power allocation policies in a single/multihop
wireless network in the presence of hard end-to-end deadline delay constraints
on the transmitted packets. Such constraints can be useful for real time voice
and video. Power is consumed in only transmission of the data. We consider the
case when the power used in transmission is a convex function of the data
transmitted. We develop a computationally efficient online algorithm, which
minimizes the average power for the single hop. We model this problem as
dynamic program (DP) and obtain the optimal solution. Next, we generalize it to
the multiuser, multihop scenario when there are multiple real time streams with
different hard deadline constraints. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Activation of Microwave Fields in a Spin-Torque Nano-Oscillator by Neuronal Action Potentials,
Abstract: Action potentials are the basic unit of information in the nervous system and
their reliable detection and decoding holds the key to understanding how the
brain generates complex thought and behavior. Transducing these signals into
microwave field oscillations can enable wireless sensors that report on brain
activity through magnetic induction. In the present work we demonstrate that
action potentials from crayfish lateral giant neuron can trigger microwave
oscillations in spin-torque nano-oscillators. These nanoscale devices take as
input small currents and convert them to microwave current oscillations that
can wirelessly broadcast neuronal activity, opening up the possibility for
compact neuro-sensors. We show that action potentials activate microwave
oscillations in spin-torque nano-oscillators with an amplitude that follows the
action potential signal, demonstrating that the device has both the sensitivity
and temporal resolution to respond to action potentials from a single neuron.
The activation of magnetic oscillations by action potentials, together with the
small footprint and the high frequency tunability, makes these devices
promising candidates for high resolution sensing of bioelectric signals from
neural tissues. These device attributes may be useful for the design of
high-throughput bi-directional brain-machine interfaces. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: On the unit distance problem,
Abstract: The Erd\H os unit distance conjecture in the plane says that the number of
pairs of points from a point set of size $n$ separated by a fixed (Euclidean)
distance is $\leq C_{\epsilon} n^{1+\epsilon}$ for any $\epsilon>0$. The best
known bound is $Cn^{\frac{4}{3}}$. We show that if the set under consideration
is well-distributed and the fixed distance is much smaller than the diameter of
the set, then the exponent $\frac{4}{3}$ is significantly improved.
Corresponding results are also established in higher dimensions. The results
are obtained by solving the corresponding continuous problem and using a
continuous-to-discrete conversion mechanism. The sharpness of the results is
tested using the known results on the distribution of lattice points in dilates
of convex domains.
We also introduce the following variant of the Erd\H os unit distance
problem: how many pairs of points from a set of size $n$ are separated by an
integer distance? We obtain some results in this direction and formulate a
conjecture. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
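For intuition about the quantities bounded in the abstract above, a small brute-force sketch counts unit-distance and integer-distance pairs in a finite planar point set. It illustrates only the counting problem, not the continuous-to-discrete technique; the helper name and the grid example are assumptions.

```python
import math
from itertools import combinations

def count_distance_pairs(points, tol=1e-9):
    """Count pairs at exactly unit distance and pairs at a (positive)
    integer distance, up to a numerical tolerance."""
    unit = integer = 0
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d = math.hypot(x1 - x2, y1 - y2)
        if abs(d - 1.0) < tol:
            unit += 1
        if round(d) > 0 and abs(d - round(d)) < tol:
            integer += 1
    return unit, integer

# On a 10-by-10 integer grid every axis-aligned neighbour pair is a unit pair,
# and many more pairs (same row/column, 3-4-5 triples, ...) realise integer distances.
grid = [(i, j) for i in range(10) for j in range(10)]
print(count_distance_pairs(grid))
```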
Title: Ordered states in the Kitaev-Heisenberg model: From 1D chains to 2D honeycomb,
Abstract: We study the ground state of the 1D Kitaev-Heisenberg (KH) model using the
density-matrix renormalization group and Lanczos exact diagonalization methods.
We obtain a rich ground-state phase diagram as a function of the ratio between
Heisenberg ($J=\cos\phi)$ and Kitaev ($K=\sin\phi$) interactions. Depending on
the ratio, the system exhibits four long-range ordered states
(ferromagnetic-$z$, ferromagnetic-$xy$, staggered-$xy$, and Néel-$z$) and two
liquid states (Tomonaga-Luttinger liquid and spiral-$xy$). The two Kitaev points
$\phi=\frac{\pi}{2}$ and $\phi=\frac{3\pi}{2}$ are singular. The
$\phi$-dependent phase diagram is similar to that for the 2D honeycomb-lattice
KH model. Remarkably, all the ordered states of the honeycomb-lattice KH model
can be interpreted in terms of the coupled KH chains. We also discuss the
magnetic structure of the K-intercalated RuCl$_3$, a potential Kitaev material,
in the framework of the 1D KH model. Furthermore, we demonstrate that the
low-lying excitations of the 1D KH Hamiltonian can be explained within the
combination of the known six-vertex model and spin-wave theory. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
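The 1D chain described above can be explored at small sizes by exact diagonalization. The sketch below builds an open Kitaev-Heisenberg chain with $J=\cos\phi$, $K=\sin\phi$ and a bond-alternating x/y Kitaev term; this convention and the small system size are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    ops = [I2] * n
    ops[i] = op
    return reduce(np.kron, ops)

def kh_chain_hamiltonian(n, phi):
    """Open 1D Kitaev-Heisenberg chain with J = cos(phi), K = sin(phi),
    the Kitaev bond alternating between x-x and y-y couplings."""
    J, K = np.cos(phi), np.sin(phi)
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - 1):
        heis = sum(site_op(S, i, n) @ site_op(S, i + 1, n) for S in (Sx, Sy, Sz))
        gamma = Sx if i % 2 == 0 else Sy
        kitaev = site_op(gamma, i, n) @ site_op(gamma, i + 1, n)
        H += J * heis + K * kitaev
    return H

# Ground-state energy per site for a short chain at a few values of phi.
n = 8
for phi in (0.0, np.pi / 4, np.pi / 2):
    e0 = np.linalg.eigvalsh(kh_chain_hamiltonian(n, phi))[0]
    print(f"phi = {phi:.3f}, E0/N = {e0.real / n:.4f}")
```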
Title: Neurology-as-a-Service for the Developing World,
Abstract: Electroencephalography (EEG) is an extensively-used and well-studied
technique in the field of medical diagnostics and treatment for brain
disorders, including epilepsy, migraines, and tumors. The analysis and
interpretation of EEGs require physicians to have specialized training, which
is not common even among most doctors in the developed world, let alone the
developing world where physician shortages plague society. This problem can be
addressed by teleEEG, which uses remote EEG analysis by experts, or by local
computer processing of EEGs. However, both of these options are prohibitively
expensive and the second option requires abundant computing resources and
infrastructure, which is another concern in developing countries where there
are resource constraints on capital and computing infrastructure. In this work,
we present a cloud-based deep neural network approach to provide decision
support for non-specialist physicians in EEG analysis and interpretation. Named
`neurology-as-a-service,' the approach requires almost no manual intervention
in feature engineering and in the selection of an optimal architecture and
hyperparameters of the neural network. In this study, we deploy a pipeline that
includes moving EEG data to the cloud and getting optimal models for various
classification tasks. Our initial prototype has been tested only in
developed-world environments to date, but our intention is to test it in developing-world
environments in future work. We demonstrate the performance of our proposed
approach using the BCI2000 EEG MMI dataset, on which our service attains 63.4%
accuracy for the task of classifying real vs. imaginary activity performed by
the subject, which is significantly higher than what is obtained with a shallow
approach such as support vector machines. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
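The abstract above compares a deep pipeline against a shallow baseline such as support vector machines. The sketch below shows what such a shallow baseline might look like with scikit-learn on synthetic stand-in data (per-channel mean/variance features); the real BCI2000 preprocessing, feature set, and reported accuracies are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for epoched EEG: 400 trials, 64 channels, 160 samples per trial,
# with binary labels (e.g. real vs. imagined movement). Random data, so the
# score only demonstrates the pipeline structure, not achievable accuracy.
X_raw = rng.normal(size=(400, 64, 160))
y = rng.integers(0, 2, size=400)

# A very shallow feature set: per-channel mean and variance of each trial.
features = np.concatenate([X_raw.mean(axis=2), X_raw.var(axis=2)], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("shallow SVM accuracy:", clf.score(X_test, y_test))
```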