title | abstract | cs | phy | math | stat | quantitative biology | quantitative finance
---|---|---|---|---|---|---|---|
Comment on Jackson's analysis of electric charge quantization due to interaction with Dirac's magnetic monopole | In J.D. Jackson's Classical Electrodynamics textbook, the analysis of Dirac's
charge quantization condition in the presence of a magnetic monopole has a
mathematical omission and an all too brief physical argument that might mislead
some students. This paper presents a detailed derivation of Jackson's main
result, explains the significance of the missing term, and highlights the close
connection between Jackson's findings and Dirac's original argument.
| 0 | 1 | 0 | 0 | 0 | 0 |
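For context, the quantization condition analyzed in the abstract above is, in Gaussian units (conventions differ between unit systems),

$$ \frac{eg}{\hbar c}=\frac{n}{2},\qquad n\in\mathbb{Z}, $$

so the existence of a single monopole of magnetic charge $g$ forces every electric charge $e$ to be an integer multiple of $\hbar c/2g$.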
Study of charged hadron multiplicities in charged-current neutrino-lead interactions in the OPERA detector | The OPERA experiment was designed to search for $\nu_{\mu} \rightarrow
\nu_{\tau}$ oscillations in appearance mode through the direct observation of
tau neutrinos in the CNGS neutrino beam. In this paper, we report a study of
the multiplicity of charged particles produced in charged-current neutrino
interactions in lead. We present the average multiplicities of charged hadrons
and their dispersion, and investigate KNO scaling in different kinematical regions.
The results are presented in detail in the form of tables that can be used in
the validation of Monte Carlo generators of neutrino-lead interactions.
| 0 | 1 | 0 | 0 | 0 | 0 |
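For reference, KNO (Koba-Nielsen-Olesen) scaling is the statement that multiplicity distributions collapse onto a universal curve when rescaled by the mean multiplicity:

$$ \langle n\rangle\, P_n=\psi\!\left(\frac{n}{\langle n\rangle}\right), $$

where $P_n$ is the probability of producing $n$ charged hadrons and $\psi$ is an energy-independent scaling function.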
On the coefficients of the Alekseev-Torossian associator | This paper explains a method to calculate the coefficients of the
Alekseev-Torossian associator as linear combinations of iterated integrals of
Kontsevich weight forms of Lie graphs.
| 0 | 0 | 1 | 0 | 0 | 0 |
Exact spectral decomposition of a time-dependent one-particle reduced density matrix | We determine the exact time-dependent non-idempotent one-particle reduced
density matrix and its spectral decomposition for a harmonically confined
two-particle correlated one-dimensional system when the interaction terms in
the Schrödinger Hamiltonian are changed abruptly. Based on this matrix in
coordinate space we derive a precise condition for the equivalence of the purity
and the overlap-square of the correlated and non-correlated wave functions as
the system evolves in time. This equivalence holds only if the interparticle
interactions are affected, while the confinement terms are unaffected within
the stability range of the system. Under this condition we also analyze various
time-dependent measures of entanglement and demonstrate that, depending on the
magnitude of the changes made in the Schrödinger Hamiltonian, the von Neumann
entropy can show periodic, logarithmically increasing, or constant behavior.
| 0 | 1 | 0 | 0 | 0 | 0 |
A short variational proof of equivalence between policy gradients and soft Q learning | Two main families of reinforcement learning algorithms, Q-learning and policy
gradients, have recently been proven to be equivalent when using a softmax
relaxation on one part, and an entropic regularization on the other. We relate
this result to the well-known convex duality of Shannon entropy and the softmax
function. Such a result is also known as the Donsker-Varadhan formula. This
provides a short proof of the equivalence. We then interpret this duality
further, and use ideas of convex analysis to prove a new policy inequality
relative to soft Q-learning.
| 1 | 0 | 0 | 0 | 0 | 0 |
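Concretely, the convex duality referred to in this abstract is the Legendre-Fenchel (Donsker-Varadhan) pairing of the log-sum-exp (softmax) function with Shannon entropy; with temperature $\tau>0$ it reads

$$ \tau\log\sum_{a}\exp\!\left(\frac{Q(s,a)}{\tau}\right)=\max_{\pi}\left[\sum_{a}\pi(a)\,Q(s,a)+\tau H(\pi)\right], $$

with the maximum attained by the Gibbs policy $\pi(a)\propto\exp(Q(s,a)/\tau)$; this identity links the soft Q-value on the left to entropy-regularized policy optimization on the right.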
Non-parametric Message Important Measure: Storage Code Design and Transmission Planning for Big Data | Storage and transmission in big data are discussed in this paper, where
message importance is taken into account. Similar to Shannon entropy and Renyi
entropy, we define the non-parametric message importance measure (NMIM) as a
measure of message importance in the scenario of big data, which can
characterize the uncertainty of random events. It is proved that the proposed
NMIM can adequately describe two key characteristics of big data: the finding
of rare events and the large diversity of events. Based on the NMIM, we first
propose an effective compressed encoding mode for data storage, and then
discuss channel transmission over some typical channel models. Numerical
simulation results show that our proposed strategy occupies less storage space
without losing too much message importance, and that the maximum transmission
exhibits a growth region and a saturation region, which contributes to the
design of better practical communication systems.
| 0 | 0 | 1 | 1 | 0 | 0 |
Graphene and its elemental analogue: A molecular dynamics view of fracture phenomenon | Graphene and some graphene-like two-dimensional materials, such as hexagonal
boron nitride (hBN) and silicene, have unique mechanical properties which severely
limit the suitability of conventional theories used for common brittle and
ductile materials to predict the fracture response of these materials. This
study reveals the fracture response of graphene, hBN and silicene nanosheets
under different tiny crack lengths by molecular dynamics (MD) simulations using
LAMMPS. The useful strength of these large-area two-dimensional materials is
determined by their fracture toughness. Our study presents a comparative analysis
of mechanical properties among the elemental analogues of graphene and
suggests that hBN can be a good substitute for graphene in terms of mechanical
properties. We have also found that the pre-cracked sheets fail in a brittle
manner and their failure is governed by the strength of the atomic bonds at the
crack tip. The MD prediction of fracture toughness differs significantly
from the fracture toughness determined by Griffith's theory of brittle failure,
which restricts the applicability of Griffith's criterion for these materials
in the case of nano-cracks. Moreover, the strengths measured in the armchair and
zigzag directions of nanosheets of these materials imply that bonds in the
armchair direction have a stronger capability to resist crack propagation than
those in the zigzag direction.
| 0 | 1 | 0 | 0 | 0 | 0 |
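For reference, the Griffith criterion against which the MD results are compared predicts, for a plane-stress sheet containing a central crack of length $2a$, a failure stress

$$ \sigma_f=\sqrt{\frac{2E\gamma_s}{\pi a}}, $$

where $E$ is the Young's modulus and $\gamma_s$ the surface energy; the reported discrepancy at nanometre crack lengths is what restricts this continuum estimate for nano-cracks.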
Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning | Skilled robotic manipulation benefits from complex synergies between
non-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing
can help rearrange cluttered objects to make space for arms and fingers;
likewise, grasping can help displace objects to make pushing movements more
precise and collision-free. In this work, we demonstrate that it is possible to
discover and learn these synergies from scratch through model-free deep
reinforcement learning. Our method involves training two fully convolutional
networks that map from visual observations to actions: one infers the utility
of pushes for a dense pixel-wise sampling of end effector orientations and
locations, while the other does the same for grasping. Both networks are
trained jointly in a Q-learning framework and are entirely self-supervised by
trial and error, where rewards are provided from successful grasps. In this
way, our policy learns pushing motions that enable future grasps, while
learning grasps that can leverage past pushes. During picking experiments in
both simulation and real-world scenarios, we find that our system quickly
learns complex behaviors amid challenging cases of clutter, and achieves better
grasping success rates and picking efficiencies than baseline alternatives
after only a few hours of training. We further demonstrate that our method is
capable of generalizing to novel objects. Qualitative results (videos), code,
pre-trained models, and simulation environments are available at
this http URL
| 1 | 0 | 0 | 1 | 0 | 0 |
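To make the action parameterization in this abstract concrete, here is a minimal sketch (not the authors' code; array names and shapes are assumptions) of how pixel-wise Q maps from a pushing network and a grasping network can be combined into a single greedy action choice:

```python
import numpy as np

def select_action(push_q, grasp_q):
    """Pick the best primitive, pixel and rotation from dense Q maps.

    push_q, grasp_q: float arrays of shape (R, H, W), one Q value per
    end-effector rotation (R) and image pixel (H x W). These names and
    shapes are illustrative assumptions, not the paper's API.
    """
    q_maps = {"push": push_q, "grasp": grasp_q}
    best = None
    for primitive, q in q_maps.items():
        r, y, x = np.unravel_index(np.argmax(q), q.shape)
        if best is None or q[r, y, x] > best[1]:
            best = (primitive, q[r, y, x], (r, y, x))
    return best  # (primitive name, Q value, (rotation, row, col))

# Toy usage: random Q maps for 16 rotations over a 224x224 heightmap.
rng = np.random.default_rng(0)
print(select_action(rng.random((16, 224, 224)), rng.random((16, 224, 224))))
```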
A Versatile Approach to Evaluating and Testing Automated Vehicles based on Kernel Methods | Evaluation and validation of complicated control systems are crucial to
guarantee usability and safety. Usually, failure happens in some very rarely
encountered situations, but once triggered, the consequence is disastrous.
Accelerated Evaluation is a methodology that efficiently tests those
rarely-occurring yet critical failures via smartly-sampled test cases. The
distribution used in sampling is pivotal to the performance of the method, but
building a suitable distribution requires case-by-case analysis. This paper
proposes a versatile approach for constructing sampling distributions using
kernel methods. The approach uses statistical learning tools to approximate the
critical event sets and constructs distributions based on the unique properties
of Gaussian distributions. We apply the method to the evaluation of automated
vehicles. Numerical experiments show that the proposed approach can robustly
identify the rare failures and significantly reduce the evaluation time.
| 1 | 0 | 0 | 1 | 0 | 0 |
Decoupled Access-Execute on ARM big.LITTLE | Energy-efficiency plays a significant role given the battery lifetime
constraints in embedded systems and hand-held devices. In this work we target
the ARM big.LITTLE, a heterogeneous platform that is dominant in the mobile and
embedded market, which allows code to run transparently on different
microarchitectures with individual energy and performance characteristics. It
allows the use of more energy-efficient cores to conserve power during simple
tasks and idle times, and a switch to faster, more power-hungry cores when
performance is needed. This proposal explores the power-savings and the
performance gains that can be achieved by utilizing the ARM big.LITTLE core in
combination with Decoupled Access-Execute (DAE). DAE is a compiler technique
that splits code regions into two distinct phases: a memory-bound Access phase
and a compute-bound Execute phase. By scheduling the memory-bound phase on the
LITTLE core, and the compute-bound phase on the big core, we conserve energy
while caching data from main memory and perform computations at maximum
performance. Our preliminary findings show that applying DAE on ARM big.LITTLE
has potential. By prefetching data in Access we can achieve an IPC improvement
of up to 37% in the Execute phase, and manage to shift more than half of the
program runtime to the LITTLE core. We also provide insight into advantages and
disadvantages of our approach, present preliminary results and discuss
potential solutions to overcome locking overhead.
| 1 | 0 | 0 | 0 | 0 | 0 |
Closure structures parameterized by systems of isotone Galois connections | We study properties of classes of closure operators and closure systems
parameterized by systems of isotone Galois connections. The parameterizations
express stronger requirements on the idempotency and monotonicity conditions of closure
operators. The present approach extends previous approaches to fuzzy closure
operators which appeared in analysis of object-attribute data with graded
attributes and reasoning with if-then rules in graded setting and is also
related to analogous results developed in linear temporal logic. In the paper,
we present foundations of the operators and include examples of general
problems in data analysis where such operators appear.
| 1 | 0 | 0 | 0 | 0 | 0 |
High Isolation Improvement in a Compact UWB MIMO Antenna | A compact multiple-input-multiple-output (MIMO) antenna with very high
isolation is proposed for ultrawide-band (UWB) applications. The antenna with a
compact size of 30.1 x 20.5 mm$^2$ (0.31${\lambda}_0$ x 0.21${\lambda}_0$) consists
of two planar-monopole antenna elements. It is found that isolation of more
than 25 dB can be achieved between two parallel monopole antenna elements. For
the low-frequency isolation, an efficient technique of bending the feed-line
and applying a new protruded ground is introduced. To increase isolation, a
design based on suppressing surface wave, near-field, and far-field coupling is
applied. The simulation and measurement results of the proposed antenna are in
good agreement and show a bandwidth with S$_{11}$ < -10 dB and S$_{12}$ < -25 dB
ranging from 3.1 to 10.6 GHz, making the proposed antenna a good
candidate for UWB MIMO systems.
| 0 | 1 | 0 | 0 | 0 | 0 |
Identifying and Alleviating Concept Drift in Streaming Tensor Decomposition | Tensor decompositions are used in various data mining applications, from
social networks to medical applications, and are extremely useful in discovering
latent structures or concepts in the data. Many real-world applications are
dynamic in nature and so are their data. To deal with this dynamic nature of
data, there exist a variety of online tensor decomposition algorithms. A
central assumption in all those algorithms is that the number of latent
concepts remains fixed throughout the entire stream. However, this need not be
the case. Every incoming batch in the stream may have a different number of
latent concepts, and the difference in latent concepts from one tensor batch to
another can provide insights into how our findings in a particular application
behave and deviate over time. In this paper, we define "concept" and "concept
drift" in the context of streaming tensor decomposition, as the manifestation
of the variability of latent concepts throughout the stream. Furthermore, we
introduce SeekAndDestroy, an algorithm that detects concept drift in streaming
tensor decomposition and is able to produce results robust to that drift. To
the best of our knowledge, this is the first work that investigates concept
drift in streaming tensor decomposition. We extensively evaluate SeekAndDestroy
on synthetic datasets, which exhibit a wide variety of realistic drift. Our
experiments demonstrate the effectiveness of SeekAndDestroy, both in the
detection of concept drift and in the alleviation of its effects, producing
results with similar quality to decomposing the entire tensor in one shot.
Additionally, in real datasets, SeekAndDestroy outperforms other streaming
baselines, while discovering novel useful components.
| 0 | 0 | 0 | 1 | 0 | 0 |
Handling Adversarial Concept Drift in Streaming Data | Classifiers operating in a dynamic, real-world environment are vulnerable to
adversarial activity, which causes the data distribution to change over time.
These changes are traditionally referred to as concept drift, and several
approaches have been developed in literature to deal with the problem of drift
handling and detection. However, most concept drift handling techniques
approach it as a domain-independent task, to make them applicable to a wide
gamut of reactive systems. These techniques were developed from an
adversarial-agnostic perspective, where they naively assume that drift is a benign
change, which can be fixed by updating the model. However, this is not the case
when an active adversary is trying to evade the deployed classification system.
In such an environment, the properties of concept drift are unique, as the
drift is intended to degrade the system and at the same time designed to avoid
detection by traditional concept drift detection techniques. This special
category of drift is termed adversarial drift, and this paper analyzes its
characteristics and impact, in a streaming environment. A novel framework for
dealing with adversarial concept drift is proposed, called the Predict-Detect
streaming framework. Experimental evaluation of the framework, on generated
adversarial drifting data streams, demonstrates that this framework is able to
provide reliable unsupervised indication of drift, and is able to recover from
drifts swiftly. While traditional partially labeled concept drift detection
methodologies fail to detect adversarial drifts, the proposed framework is able
to detect such drifts and operates with <6% labeled data, on average. Also, the
framework provides benefits for active learning over imbalanced data streams,
by innately providing for feature space honeypots, where minority class
adversarial samples may be captured.
| 0 | 0 | 0 | 1 | 0 | 0 |
Real intersection homology | We present a definition of intersection homology for real algebraic varieties
that is analogous to Goresky and MacPherson's original definition of
intersection homology for complex varieties.
| 0 | 0 | 1 | 0 | 0 | 0 |
High Contrast Observations of Bright Stars with a Starshade | Starshades are a leading technology to enable the direct detection and
spectroscopic characterization of Earth-like exoplanets. In an effort to
advance starshade technology through system level demonstrations, the
McMath-Pierce Solar Telescope was adapted to enable the suppression of
astronomical sources with a starshade. The long baselines achievable with the
heliostat provide measurements of starshade performance at a flight-like
Fresnel number and resolution, aspects critical to the validation of optical
models. The heliostat has provided the opportunity to perform the first
astronomical observations with a starshade and has made science accessible in a
unique parameter space, high contrast at moderate inner working angles. On-sky
images are valuable for developing the experience and tools needed to extract
science results from future starshade observations. We report on high contrast
observations of nearby stars provided by a starshade. We achieve 5.6e-7
contrast at 30 arcseconds inner working angle on the star Vega and provide new
photometric constraints on background stars near Vega.
| 0 | 1 | 0 | 0 | 0 | 0 |
Unconstrained inverse quadratic programming problem | The paper covers a formulation of the inverse quadratic programming problem
in terms of unconstrained optimization, where it is required to find the unknown
parameters (the matrix of the quadratic form and the vector of the quasi-linear
part of the quadratic form) given only approximate estimates of the optimal
solution of the direct problem and of the minimum value of the target function,
in the form of pairs of values lying in the corresponding neighborhoods. The
formulation of the inverse problem and its solution are based on the least
squares method. The solution of the inverse problem is derived in explicit form
as a system of linear equations. The parameters obtained can be used to
reconstruct the direct quadratic programming problem and to determine the
optimal solution and the extreme value of the target function, which were not
known beforehand. This approach may open new possibilities in other
applications, for example in neurocomputing and quadric surface fitting. Simple
numerical examples are demonstrated. A script in the Octave/MATLAB programming
language is provided for practical implementation of the method.
| 1 | 0 | 1 | 0 | 0 | 0 |
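A minimal numpy sketch of the least-squares idea described above (an illustration under assumptions, not the paper's formulation): fit a symmetric matrix $A$ and vector $b$ so that $f(x)=\tfrac{1}{2}x^{\top}Ax+b^{\top}x$ matches observed (solution, value) pairs, by solving one linear system.

```python
import numpy as np

def fit_quadratic(xs, fs):
    """Least-squares fit of f(x) = 0.5*x'Ax + b'x (A symmetric) to pairs (x, f).

    xs: (m, n) array of approximate solution points; fs: (m,) observed values.
    Illustrative only: the paper's inverse problem also uses optimality
    conditions of the direct problem, which this toy fit omits.
    """
    m, n = xs.shape
    rows = []
    for x in xs:
        row = []
        for i in range(n):            # coefficients for upper-triangular A
            for j in range(i, n):
                row.append(0.5 * x[i] * x[j] if i == j else x[i] * x[j])
        row.extend(x)                 # coefficients for b
        rows.append(row)
    design = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(design, fs, rcond=None)
    A = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            A[i, j] = A[j, i] = theta[k]
            k += 1
    return A, theta[n * (n + 1) // 2:]

# Toy check: recover A, b from noiseless samples of a known quadratic.
rng = np.random.default_rng(1)
A_true = np.array([[2.0, 0.5], [0.5, 1.0]]); b_true = np.array([-1.0, 0.3])
xs = rng.normal(size=(20, 2))
fs = 0.5 * np.einsum("ki,ij,kj->k", xs, A_true, xs) + xs @ b_true
A_hat, b_hat = fit_quadratic(xs, fs)
print(np.allclose(A_hat, A_true), np.allclose(b_hat, b_true))
```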
Families of Thue equations associated with a rank one subgroup of the unit group of a number field | Twisting a binary form $F_0(X,Y)\in{\mathbb{Z}}[X,Y]$ of degree $d\ge 3$ by
powers $\upsilon^a$ ($a\in{\mathbb{Z}}$) of an algebraic unit $\upsilon$ gives
rise to a binary form $F_a(X,Y)\in{\mathbb{Z}}[X,Y]$. More precisely, when $K$
is a number field of degree $d$, $\sigma_1,\sigma_2,\dots,\sigma_d$ the
embeddings of $K$ into $\mathbb{C}$, $\alpha$ a nonzero element in $K$,
$a_0\in{\mathbb{Z}}$, $a_0>0$ and $$ F_0(X,Y)=a_0\displaystyle\prod_{i=1}^d
(X-\sigma_i(\alpha) Y), $$ then for $a\in{\mathbb{Z}}$ we set $$
F_a(X,Y)=\displaystyle a_0\prod_{i=1}^d (X-\sigma_i(\alpha\upsilon^a) Y). $$
Given $m\ge 0$, our main result is an effective upper bound for the solutions
$(x,y,a)\in{\mathbb{Z}}^3$ of the Diophantine inequalities $$ 0<|F_a(x,y)|\le m
$$ for which $xy\not=0$ and ${\mathbb{Q}}(\alpha \upsilon^a)=K$. Our estimate
involves an effectively computable constant depending only on $d$; it is
explicit in terms of $m$, in terms of the heights of $F_0$ and of $\upsilon$,
and in terms of the regulator of the number field $K$.
| 0 | 0 | 1 | 0 | 0 | 0 |
On Defects Between Gapped Boundaries in Two-Dimensional Topological Phases of Matter | Defects between gapped boundaries provide a possible physical realization of
projective non-abelian braid statistics. A notable example is the projective
Majorana/parafermion braid statistics of boundary defects in fractional quantum
Hall/topological insulator and superconductor heterostructures. In this paper,
we develop general theories to analyze the topological properties and
projective braiding of boundary defects of topological phases of matter in two
spatial dimensions. We present commuting Hamiltonians to realize defects
between gapped boundaries in any $(2+1)D$ untwisted Dijkgraaf-Witten theory,
and use these to describe their topological properties such as their quantum
dimension. By modeling the algebraic structure of boundary defects through
multi-fusion categories, we establish a bulk-edge correspondence between
certain boundary defects and symmetry defects in the bulk. Even though it is
not clear how to physically braid the defects, this correspondence elucidates
the projective braid statistics for many classes of boundary defects, both
amongst themselves and with bulk anyons. Specifically, three such classes of
importance to condensed matter physics/topological quantum computation are
studied in detail: (1) A boundary defect version of Majorana and parafermion
zero modes, (2) a similar version of genons in bilayer theories, and (3)
boundary defects in $\mathfrak{D}(S_3)$.
| 0 | 1 | 1 | 0 | 0 | 0 |
Arbitrary order 2D virtual elements for polygonal meshes: Part II, inelastic problem | The present paper is the second part of a twofold work, whose first part is
reported in [3], concerning a newly developed Virtual Element Method (VEM) for
2D continuum problems. The first part of the work proposed a study for linear
elastic problem. The aim of this part is to explore the features of the VEM
formulation when material nonlinearity is considered, showing that the accuracy
and easiness of implementation discovered in the analysis inherent to the first
part of the work are still retained. Three different nonlinear constitutive
laws are considered in the VEM formulation. In particular, the generalized
viscoplastic model, the classical Mises plasticity with isotropic/kinematic
hardening and a shape memory alloy (SMA) constitutive law are implemented. The
versatility with respect to all the considered nonlinear material constitutive
laws is demonstrated through several numerical examples, also remarking that
the proposed 2D VEM formulation can be straightforwardly implemented as in a
standard nonlinear structural finite element method (FEM) framework.
| 0 | 0 | 1 | 0 | 0 | 0 |
Reconstruction of Hidden Representation for Robust Feature Extraction | This paper aims to develop a new and robust approach to feature
representation. Motivated by the success of Auto-Encoders, we first
theoretically summarize the general properties of all algorithms that are based on
traditional Auto-Encoders: 1) The reconstruction error of the input cannot be
lower than a lower bound, which can be viewed as a guiding principle for
reconstructing the input. Additionally, when the input is corrupted with
noise, the reconstruction error of the corrupted input also cannot be lower
than a lower bound. 2) The reconstruction of a hidden representation achieving
its ideal situation is the necessary condition for the reconstruction of the
input to reach the ideal state. 3) Minimizing the Frobenius norm of the
Jacobian matrix of the hidden representation has a deficiency and may result in
a much worse local optimum value. We believe that minimizing the reconstruction
error of the hidden representation is more robust than minimizing the Frobenius
norm of the Jacobian matrix of the hidden representation. Based on the above
analysis, we propose a new model termed Double Denoising Auto-Encoders (DDAEs),
which uses corruption and reconstruction on both the input and the hidden
representation. We demonstrate that the proposed model is highly flexible and
extensible and has a potentially better capability to learn invariant and
robust feature representations. We also show that our model is more robust than
Denoising Auto-Encoders (DAEs) for dealing with noise or inessential features.
Furthermore, we detail how to train DDAEs with two different pre-training
methods by optimizing the objective function in a combined and separate manner,
respectively. Comparative experiments illustrate that the proposed model is
significantly better for representation learning than the state-of-the-art
models.
| 1 | 0 | 0 | 1 | 0 | 0 |
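A minimal PyTorch sketch of the double-denoising idea as we read it (layer sizes, noise levels, loss weighting, and the detached target are assumptions, not the paper's settings): corrupt both the input and the hidden representation, and penalize the reconstruction error of each.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DDAE(nn.Module):
    """Toy Double Denoising Auto-Encoder: denoise the input AND the hidden code."""
    def __init__(self, d_in=784, d_h=128, noise=0.1):
        super().__init__()
        self.enc = nn.Linear(d_in, d_h)
        self.dec = nn.Linear(d_h, d_in)
        self.hdec = nn.Linear(d_h, d_h)   # reconstructs the hidden representation
        self.noise = noise

    def forward(self, x):
        h_clean = torch.relu(self.enc(x))                       # target hidden code
        h_noisy = torch.relu(self.enc(x + self.noise * torch.randn_like(x)))
        x_rec = self.dec(h_noisy)                               # input reconstruction
        h_rec = self.hdec(h_noisy + self.noise * torch.randn_like(h_noisy))
        # Combined objective: input denoising + hidden-representation denoising.
        # Detaching the target code is one way to avoid a trivial collapse.
        return F.mse_loss(x_rec, x) + F.mse_loss(h_rec, h_clean.detach())

model = DDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                    # stand-in batch of flattened images
loss = model(x)
loss.backward()
opt.step()
print(float(loss))
```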
HONE: Higher-Order Network Embeddings | This paper describes a general framework for learning Higher-Order Network
Embeddings (HONE) from graph data based on network motifs. The HONE framework
is highly expressive and flexible with many interchangeable components. The
experimental results demonstrate the effectiveness of learning higher-order
network representations. In all cases, HONE outperforms recent embedding
methods that are unable to capture higher-order structures with a mean relative
gain in AUC of $19\%$ (and up to $75\%$ gain) across a wide variety of networks
and embedding methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Cosmological Evolution and Exact Solutions in a Fourth-order Theory of Gravity | A fourth-order theory of gravity is considered which in terms of dynamics has
the same degrees of freedom and number of constraints as those of scalar-tensor
theories. In addition it admits a canonical point-like Lagrangian description.
We study the critical points of the theory and we show that it can describe the
matter epoch of the universe and that two accelerated phases can be recovered,
one of which describes a de Sitter universe. Finally, for some models, exact
solutions are presented.
| 0 | 1 | 1 | 0 | 0 | 0 |
Linear Progress with Exponential Decay in Weakly Hyperbolic Groups | A random walk $w_n$ on a separable, geodesic hyperbolic metric space $X$
converges to the boundary $\partial X$ with probability one when the step
distribution supports two independent loxodromics. In particular, the random
walk makes positive linear progress. Progress is known to be linear with
exponential decay when (1) the step distribution has exponential tail and (2)
the action on $X$ is acylindrical. We extend exponential decay to the
non-acylindrical case.
| 0 | 0 | 1 | 0 | 0 | 0 |
Talbot-enhanced, maximum-visibility imaging of condensate interference | Nearly two centuries ago Talbot first observed the fascinating effect whereby
light propagating through a periodic structure generates a `carpet' of image
revivals in the near field. Here we report the first observation of the spatial
Talbot effect for light interacting with periodic Bose-Einstein condensate
interference fringes. The Talbot effect can lead to dramatic loss of fringe
visibility in images, degrading precision interferometry; however, we
demonstrate how the effect can also be used as a tool to enhance visibility, as
well as extend the useful focal range of matter wave detection systems by
orders of magnitude. We show that negative optical densities arise from
matter-wave induced lensing of detuned imaging light -- yielding
Talbot-enhanced single-shot interference visibility of >135% compared to the
ideal visibility for resonant light.
| 0 | 1 | 0 | 0 | 0 | 0 |
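For orientation, the characteristic revival distance behind a periodic structure of period $d$ illuminated at wavelength $\lambda$ (the paraxial Talbot length) is

$$ z_T=\frac{2d^2}{\lambda}, $$

with self-images recurring at integer multiples of $z_T$ and half-period-shifted copies at half-integer multiples; this sets the focal scales over which fringe visibility oscillates in the imaging system described above.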
A numerical study of the F-model with domain-wall boundaries | We perform a numerical study of the F-model with domain-wall boundary
conditions. Various exact results are known for this particular case of the
six-vertex model, including closed expressions for the partition function for
any system size as well as its asymptotics and leading finite-size corrections.
To complement this picture we use a full lattice multi-cluster algorithm to
study equilibrium properties of this model for systems of moderate size, up to
L=512. We compare the energy to its exactly known large-L asymptotics. We
investigate the model's infinite-order phase transition by means of finite-size
scaling for an observable derived from the staggered polarization in order to
test the method put forward in our recent joint work with Duine and Barkema. In
addition we analyse local properties of the model. Our data are perfectly
consistent with analytical expressions for the arctic curves. We investigate
the structure inside the temperate region of the lattice, confirming the
oscillations in vertex densities that were first observed by Sylju{\aa}sen and
Zvonarev, and recently studied by Lyberg et al. We point out
'(anti)ferroelectric' oscillations close to the corresponding frozen regions as
well as 'higher-order' oscillations forming an intricate pattern with
saddle-point-like features.
| 0 | 1 | 0 | 0 | 0 | 0 |
Visualizing the Loss Landscape of Neural Nets | Neural network training relies on our ability to find "good" minimizers of
highly non-convex loss functions. It is well-known that certain network
architecture designs (e.g., skip connections) produce loss functions that train
easier, and well-chosen training parameters (batch size, learning rate,
optimizer) produce minimizers that generalize better. However, the reasons for
these differences, and their effects on the underlying loss landscape, are not
well understood. In this paper, we explore the structure of neural loss
functions, and the effect of loss landscapes on generalization, using a range
of visualization methods. First, we introduce a simple "filter normalization"
method that helps us visualize loss function curvature and make meaningful
side-by-side comparisons between loss functions. Then, using a variety of
visualizations, we explore how network architecture affects the loss landscape,
and how training parameters affect the shape of minimizers.
| 1 | 0 | 0 | 1 | 0 | 0 |
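A minimal numpy sketch of the filter-wise normalization idea as described (an illustration; the function name is ours, not the authors' released code): each filter of a random direction is rescaled to the norm of the corresponding weight filter, so that loss plots are comparable across networks with different weight scales.

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter of a random direction to match the weight filter norm.

    direction, weights: arrays of shape (num_filters, filter_size), e.g. a
    conv layer flattened per output channel. Illustrative helper only.
    """
    d = direction.copy()
    for i in range(d.shape[0]):
        d_norm = np.linalg.norm(d[i]) + 1e-10
        d[i] *= np.linalg.norm(weights[i]) / d_norm
    return d

# 1-D loss slice along a filter-normalized random direction.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3 * 3 * 16))             # toy conv layer weights
d = filter_normalize(rng.normal(size=w.shape), w)
alphas = np.linspace(-1.0, 1.0, 5)
perturbed = [w + a * d for a in alphas]           # evaluate the loss at each in practice
print([float(np.linalg.norm(p)) for p in perturbed])
```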
Hydrogen bonding characterization in water and small molecules | The prototypical Hydrogen bond in water dimer and Hydrogen bonds in the
protonated water dimer, in other small molecules, in water cyclic clusters, and
in ice, covering a wide range of bond strengths, are theoretically investigated
by first-principles calculations based on the Density Functional Theory,
considering a standard Generalized Gradient Approximation functional but also,
for the water dimer, hybrid and van-der-Waals corrected functionals. We compute
structural, energetic, and electrostatic (induced molecular dipole moments)
properties. In particular, Hydrogen bonds are characterized in terms of
differential electron density distributions and profiles, and of the shifts
of the centres of Maximally localized Wannier Functions. The information from
the latter quantities can be conveyed into a single geometric bonding parameter
that appears to be correlated to the Mayer bond order parameter and can be
taken as an estimate of the covalent contribution to the Hydrogen bond. By
considering the cyclic water hexamer and the hexagonal phase of ice we also
elucidate the importance of cooperative/anticooperative effects in
Hydrogen-bonding formation.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Calculus of Truly Concurrent Mobile Processes | We make a mixture of Milner's $\pi$-calculus and our previous work on truly
concurrent process algebra, which is called $\pi_{tc}$. We introduce the syntax
and semantics of $\pi_{tc}$ and its properties based on strongly truly concurrent
bisimilarities. We also include an axiomatization of $\pi_{tc}$. $\pi_{tc}$
can be used as a formal tool in verifying mobile systems in a truly concurrent
flavor.
| 1 | 0 | 0 | 0 | 0 | 0 |
Gigahertz optomechanical modulation by split-ring-resonator nanophotonic meta-atom arrays | Using polarization-resolved transient reflection spectroscopy, we investigate
the ultrafast modulation of light interacting with a metasurface consisting of
coherently vibrating nanophotonic meta-atoms in the form of U-shaped split-ring
resonators that exhibit co-localized optical and mechanical resonances. With a
two-dimensional square-lattice array of these resonators formed of gold on a
glass substrate, we monitor the visible-pump-pulse induced gigahertz
oscillations in intensity of reflected linearly-polarized infrared probe light
pulses, modulated by the resonators effectively acting as miniature tuning
forks. A multimodal vibrational response involving the opening and closing
motion of the split rings is detected in this way. Numerical simulations of the
associated transient deformations and strain fields elucidate the complex
nanomechanical dynamics contributing to the ultrafast optical modulation, and
point to the role of acousto-plasmonic interactions through the opening and
closing motion of the SRR gaps as the dominant effect. Applications include
ultrafast acoustooptic modulator design and sensing.
| 0 | 1 | 0 | 0 | 0 | 0 |
Guaranteed Fault Detection and Isolation for Switched Affine Models | This paper considers the problem of fault detection and isolation (FDI) for
switched affine models. We first study the model invalidation problem and its
application to guaranteed fault detection. Novel and intuitive
optimization-based formulations are proposed for model invalidation and
T-distinguishability problems, which we demonstrate to be computationally more
efficient than an earlier formulation that required a complicated change of
variables. Moreover, we introduce a distinguishability index as a measure of
separation between the system and fault models, which offers a practical method
for finding the smallest receding time horizon that is required for fault
detection, and for finding potential design recommendations for ensuring
T-distinguishability. Then, we extend our fault detection guarantees to the
problem of fault isolation with multiple fault models, i.e., the identification
of the type and location of faults, by introducing the concept of
I-isolability. An efficient way to implement the FDI scheme is also proposed,
whose run-time does not grow with the number of fault models that are
considered. Moreover, we derive bounds on detection and isolation delays and
present an adaptive scheme for reducing isolation delays. Finally, the
effectiveness of the proposed method is illustrated using several examples,
including an HVAC system model with multiple faults.
| 1 | 0 | 1 | 0 | 0 | 0 |
NOOP: A Domain-Theoretic Model of Nominally-Typed OOP | The majority of industrial-strength object-oriented (OO) software is written
using nominally-typed OO programming languages. Extant domain-theoretic models
of OOP developed to analyze OO type systems miss, however, a crucial feature of
these mainstream OO languages: nominality. This paper presents the construction
of NOOP as the first domain-theoretic model of OOP that includes full
class/type names information found in nominally-typed OOP. Inclusion of nominal
information in objects of NOOP and asserting that type inheritance in
statically-typed OO programming languages is an inherently nominal notion allow
readily proving that type inheritance and subtyping are completely identified
in these languages. This conclusion is in full agreement with intuitions of
developers and language designers of these OO languages, and contrary to the
belief that "inheritance is not subtyping," which came from assuming
non-nominal (a.k.a., structural) models of OOP.
To motivate the construction of NOOP, this paper briefly presents the
benefits of nominal-typing to mainstream OO developers and OO language
designers, as compared to structural-typing. After presenting NOOP, the paper
further briefly compares NOOP to the most widely known domain-theoretic models
of OOP. Leveraging the development of NOOP, the comparisons presented in this
paper provide clear, brief and precise technical and mathematical accounts for
the relation between nominal and structural OO type systems. NOOP, thus,
provides a firmer semantic foundation for analyzing and progressing
nominally-typed OO programming languages.
| 1 | 0 | 0 | 0 | 0 | 0 |
Modelling wave-induced sea ice breakup in the marginal ice zone | A model of ice floe breakup under ocean wave forcing in the marginal ice zone
(MIZ) is proposed to investigate how floe size distribution (FSD) evolves under
repeated wave breakup events. A three-dimensional linear model of ocean wave
scattering by a finite array of compliant circular ice floes is coupled to a
flexural failure model, which breaks a floe into two floes provided the
two-dimensional stress field satisfies a breakup criterion. A closed-feedback
loop algorithm is devised, which (i)~solves the wave scattering problem for a given
FSD under time-harmonic plane wave forcing, (ii)~computes the stress field in
all the floes, (iii)~fractures the floes satisfying the breakup criterion, and
(iv)~generates an updated FSD, initialising the geometry for the next iteration
of the loop. The FSD after 50 breakup events is uni-modal and near normal, or
bi-modal. Multiple scattering is found to enhance breakup for long waves and
thin ice, but to reduce breakup for short waves and thick ice. A breakup front
marches forward in the latter regime, as wave-induced fracture weakens the ice
cover allowing waves to travel deeper into the MIZ.
| 0 | 1 | 0 | 0 | 0 | 0 |
An Introduction to Animal Movement Modeling with Hidden Markov Models using Stan for Bayesian Inference | Hidden Markov models (HMMs) are popular time series models in many fields
including ecology, economics and genetics. HMMs can be defined over discrete or
continuous time, though here we only cover the former. In the field of movement
ecology in particular, HMMs have become a popular tool for the analysis of
movement data because of their ability to connect observed movement data to an
underlying latent process, generally interpreted as the animal's unobserved
behavior. Further, we model the tendency to persist in a given behavior over
time. Notation presented here will generally follow the format of Zucchini et
al. (2016) and cover HMMs applied in an unsupervised case to animal movement
data, specifically positional data. We provide Stan code to analyze movement
data of the wild haggis as presented first in Michelot et al. (2016).
| 0 | 0 | 0 | 1 | 1 | 0 |
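As a complement to the Stan code the paper provides, here is a minimal Python sketch (our illustration, not the paper's code) of the log-space forward algorithm that such an HMM likelihood rests on, for a 2-state model with Gaussian step-length emissions:

```python
import numpy as np
from scipy.stats import norm

def hmm_loglik(obs, log_pi, log_A, means, sds):
    """Log-likelihood of a sequence under a K-state Gaussian-emission HMM.

    obs: (T,) observed step lengths; log_pi: (K,) log initial probabilities;
    log_A: (K, K) log transition matrix; means, sds: (K,) emission parameters.
    A toy stand-in for the movement models discussed above (real step lengths
    are positive, so a gamma emission would be the more usual choice).
    """
    log_b = norm.logpdf(obs[:, None], means, sds)   # (T, K) emission log-densities
    alpha = log_pi + log_b[0]
    for t in range(1, len(obs)):
        m = alpha.max()                             # log-sum-exp over previous states
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_b[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

# Two behaviours: 'encamped' (short steps) vs 'exploring' (long steps).
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
obs = np.array([0.1, 0.2, 1.5, 2.0, 0.15])
print(hmm_loglik(obs, log_pi, log_A, np.array([0.2, 1.8]), np.array([0.1, 0.5])))
```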
Sampling a Network to Find Nodes of Interest | The focus of the current research is to identify people of interest in social
networks. We are especially interested in studying dark networks, which
represent illegal or covert activity. In such networks, people are unlikely to
disclose accurate information when queried. We present REDLEARN, an algorithm
for sampling dark networks with the goal of identifying as many nodes of
interest as possible. We consider two realistic lying scenarios, which describe
how individuals in a dark network may attempt to conceal their connections. We
test and present our results on several real-world multilayered networks, and
show that REDLEARN achieves up to a 340% improvement over the next best
strategy.
| 1 | 1 | 0 | 0 | 0 | 0 |
Representation Mixing for TTS Synthesis | Recent character and phoneme-based parametric TTS systems using deep learning
have shown strong performance in natural speech generation. However, the choice
between character or phoneme input can create serious limitations for practical
deployment, as direct control of pronunciation is crucial in certain cases. We
demonstrate a simple method for combining multiple types of linguistic
information in a single encoder, named representation mixing, enabling flexible
choice between character, phoneme, or mixed representations during inference.
Experiments and user studies on a public audiobook corpus show the efficacy of
our approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
Safe Open-Loop Strategies for Handling Intermittent Communications in Multi-Robot Systems | In multi-robot systems where a central decision maker is specifying the
movement of each individual robot, a communication failure can severely impair
the performance of the system. This paper develops a motion strategy that
allows robots to safely handle critical communication failures for such
multi-robot architectures. For each robot, the proposed algorithm computes a
time horizon over which collisions with other robots are guaranteed not to
occur. These safe time horizons are included in the commands being transmitted
to the individual robots. In the event of a communication failure, the robots
execute the last received velocity commands for the corresponding safe time
horizons leading to a provably safe open-loop motion strategy. The resulting
algorithm is computationally effective and is agnostic to the task that the
robots are performing. The efficacy of the strategy is verified in simulation
as well as on a team of differential-drive mobile robots.
| 1 | 0 | 0 | 0 | 0 | 0 |
Scientific co-authorship networks | The paper addresses the stability of co-authorship networks over time. The
analysis is done on the networks of Slovenian researchers in two time periods
(1991-2000 and 2001-2010). Two researchers are linked if they published at
least one scientific bibliographic unit in a given time period. As proposed by
Kronegger et al. (2011), the global network structures are examined by
generalized blockmodeling with the assumed
multi-core--semi-periphery--periphery blockmodel type. The term core denotes a
group of researchers who published together in a systematic way with each
other.
The obtained blockmodels are comprehensively analyzed by visualizations and
through considering several statistics regarding the global network structure.
To measure the stability of the obtained blockmodels, different adjusted
modified Rand and Wallace indices are applied. These make it possible to
distinguish between the splitting and merging of cores when operationalizing
the stability of cores. The adjusted modified indices can also be used when new
researchers appear in the second time period (newcomers) and when some researchers are no
longer present in the second time period (departures). The research disciplines
are described and clustered according to the values of these indices.
Considering the obtained clusters, the sources of instability of the research
disciplines are studied (e.g., merging or splitting of cores, newcomers or
departures). Furthermore, the differences in the stability of the obtained
cores on the level of scientific disciplines are studied by linear regression
analysis, where some personal characteristics of the researchers (e.g., age,
gender) are also considered.
| 1 | 0 | 0 | 1 | 0 | 0 |
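To illustrate the kind of stability measurement involved, here is a toy example using the plain adjusted Rand index from scikit-learn as a stand-in for the adjusted *modified* Rand indices the paper develops for newcomers and departures:

```python
from sklearn.metrics import adjusted_rand_score

# Core memberships of the same six researchers in two periods (toy data):
cores_1991_2000 = [0, 0, 0, 1, 1, 2]   # three cores
cores_2001_2010 = [0, 0, 1, 1, 1, 2]   # one researcher changed core
print(adjusted_rand_score(cores_1991_2000, cores_2001_2010))
```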
Projection Based Weight Normalization for Deep Neural Networks | Optimizing deep neural networks (DNNs) often suffers from ill-conditioning.
We observe that the scaling-based weight space symmetry property in
rectified nonlinear networks causes this negative effect. Therefore, we
propose to constrain the incoming weights of each neuron to be unit-norm, which
is formulated as an optimization problem over the Oblique manifold. A simple yet
efficient method referred to as projection based weight normalization (PBWN) is
also developed to solve this problem. PBWN executes standard gradient updates,
followed by projecting the updated weights back onto the Oblique manifold. This
proposed method has the property of regularization and collaborates well with
the commonly used batch normalization technique. We conduct comprehensive
experiments on several widely-used image datasets including CIFAR-10,
CIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art
convolutional neural networks, such as Inception, VGG and residual networks.
The results show that our method is able to improve the performance of DNNs
with different architectures consistently. We also apply our method to Ladder
network for semi-supervised learning on permutation invariant MNIST dataset,
and our method outperforms the state-of-the-art methods: we obtain test errors
as 2.52%, 1.06%, and 0.91% with only 20, 50, and 100 labeled samples,
respectively.
| 1 | 0 | 0 | 0 | 0 | 0 |
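A minimal numpy sketch of the projection step as described (our illustration; the real method operates per neuron inside an SGD training loop): take a standard gradient step, then project each neuron's incoming weight row back to unit norm, i.e. back onto the Oblique manifold.

```python
import numpy as np

def pbwn_step(W, grad, lr=0.1):
    """One PBWN-style update: gradient step, then row-wise projection to unit norm.

    W, grad: (num_neurons, fan_in) incoming weights and their gradient.
    Illustrative sketch of the projection idea, not the authors' code.
    """
    W = W - lr * grad                                   # standard update
    norms = np.linalg.norm(W, axis=1, keepdims=True)    # per-neuron norms
    return W / np.maximum(norms, 1e-12)                 # back onto the manifold

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
W = pbwn_step(W, rng.normal(size=W.shape))
print(np.linalg.norm(W, axis=1))   # all rows now unit-norm
```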
Mapping stable direct and retrograde orbits around the triple system of asteroids (45) Eugenia | It is well accepted that knowing the composition and the orbital evolution of
asteroids may help us to understand the process of formation of the Solar
System. It is also known that asteroids can represent a threat to our planet.
Such important role made space missions to asteroids a very popular topic in
the current astrodynamics and astronomy studies. By taking into account the
increasingly interest in space missions to asteroids, especially to multiple
systems, we present a study aimed to characterize the stable and unstable
regions around the triple system of asteroids (45) Eugenia. The goal is to
characterize unstable and stable regions of this system and compare with the
system 2001 SN263 - the target of the ASTER mission. Besides, Prado (2014) used
a new concept for mapping orbits considering the disturbance received by the
spacecraft from all the perturbing forces individually. This method was also
applied to (45) Eugenia. We present the stable and unstable regions for
particles with relative inclination between 0 and 180 degrees. We found that
(45) Eugenia presents larger stable regions for both prograde and retrograde
cases. This is mainly because the satellites of this system are small when
compared to the primary body, and because they are not so close to each other.
We also present a comparison between those two triple systems, and a discussion
on how these results may guide us in the planning of future missions.
| 0 | 1 | 0 | 0 | 0 | 0 |
Zinc oxide induces the stringent response and major reorientations in the central metabolism of Bacillus subtilis | Microorganisms, such as bacteria, are one of the first targets of
nanoparticles in the environment. In this study, we tested the effect of two
nanoparticles, ZnO and TiO2, with the salt ZnSO4 as the control, on the
Gram-positive bacterium Bacillus subtilis by 2D gel electrophoresis-based
proteomics. Despite a significant effect on viability (LD50), TiO2 NPs had no
detectable effect on the proteomic pattern, while ZnO NPs and ZnSO4
significantly modified B. subtilis metabolism. These results allowed us to
conclude that the effects of ZnO observed in this work were mainly attributable
to Zn dissolution in the culture media. Proteomic analysis highlighted twelve
modulated proteins related to central metabolism: MetE and MccB (cysteine
metabolism), OdhA, AspB, IolD, AnsB, PdhB and YtsJ (Krebs cycle) and XylA,
YqjI, Drm and Tal (pentose phosphate pathway). Biochemical assays, such as free
sulfhydryl, CoA-SH and malate dehydrogenase assays corroborated the observed
central metabolism reorientation and showed that Zn stress induced oxidative
stress, probably as a consequence of thiol chelation stress by Zn ions. The
other patterns affected by ZnO and ZnSO4 were the stringent response and the
general stress response. Nine proteins involved in or controlled by the
stringent response showed a modified expression profile in the presence of ZnO
NPs or ZnSO4: YwaC, SigH, YtxH, YtzB, TufA, RplJ, RpsB, PdhB and Mbl. An
increase in the ppGpp concentration confirmed the involvement of the stringent
response during a Zn stress. All these metabolic reorientations in response to
Zn stress were probably the result of complex regulatory mechanisms including
at least the stringent response via YwaC.
| 0 | 0 | 0 | 0 | 1 | 0 |
Learning what matters - Sampling interesting patterns | In the field of exploratory data mining, local structure in data can be
described by patterns and discovered by mining algorithms. Although many
solutions have been proposed to address the redundancy problems in pattern
mining, most of them either provide succinct pattern sets or take the interests
of the user into account, but not both. Consequently, the analyst has to invest
substantial effort in identifying those patterns that are relevant to her
specific interests and goals. To address this problem, we propose a novel
approach that combines pattern sampling with interactive data mining. In
particular, we introduce the LetSIP algorithm, which builds upon recent
advances in 1) weighted sampling in SAT and 2) learning to rank in interactive
pattern mining. Specifically, it exploits user feedback to directly learn the
parameters of the sampling distribution that represents the user's interests.
We compare the performance of the proposed algorithm to the state-of-the-art in
interactive pattern mining by emulating the interests of a user. The resulting
system allows efficient and interleaved learning and sampling, and thus
user-specific anytime data exploration. Finally, LetSIP demonstrates favourable
trade-offs concerning both quality-diversity and exploitation-exploration when
compared to existing methods.
| 1 | 0 | 0 | 1 | 0 | 0 |
Proofs as Relational Invariants of Synthesized Execution Grammars | The automatic verification of programs that maintain unbounded low-level data
structures is a critical and open problem. Analyzers and verifiers developed in
previous work can synthesize invariants that only describe data structures of
heavily restricted forms, or require an analyst to provide predicates over
program data and structure that are used in a synthesized proof of correctness.
In this work, we introduce a novel automatic safety verifier of programs that
maintain low-level data structures, named LTTP. LTTP synthesizes proofs of
program safety represented as a grammar of a given program's control paths,
annotated with invariants that relate program state at distinct points within
its path of execution. LTTP synthesizes such proofs completely automatically,
using a novel inductive-synthesis algorithm.
We have implemented LTTP as a verifier for JVM bytecode and applied it to
verify the safety of a collection of verification benchmarks. Our results
demonstrate that LTTP can be applied to automatically verify the safety of
programs that are beyond the scope of previously-developed verifiers.
| 1 | 0 | 0 | 0 | 0 | 0 |
Orthogonal involutions and totally singular quadratic forms in characteristic two | We associate to every central simple algebra with involution of orthogonal
type in characteristic two a totally singular quadratic form which reflects
certain anisotropy properties of the involution. It is shown that this
quadratic form can be used to classify totally decomposable algebras with
orthogonal involution. Also, using this form, a criterion is obtained for an
orthogonal involution on a split algebra to be conjugate to the transpose
involution.
| 0 | 0 | 1 | 0 | 0 | 0 |
Coastal flood implications of 1.5 °C, 2.0 °C, and 2.5 °C temperature stabilization targets in the 21st and 22nd century | Sea-level rise (SLR) is magnifying the frequency and severity of coastal
flooding. The rate and amount of global mean sea-level (GMSL) rise is a
function of the trajectory of global mean surface temperature (GMST).
Therefore, temperature stabilization targets (e.g., 1.5 °C and 2.0 °C
of warming above pre-industrial levels, as from the Paris Agreement) have
important implications for coastal flood risk. Here, we assess differences in
the return periods of coastal floods at a global network of tide gauges between
scenarios that stabilize GMST warming at 1.5 °C, 2.0 °C, and 2.5
°C above pre-industrial levels. We employ probabilistic, localized SLR
projections and long-term hourly tide gauge records to construct estimates of
the return levels of current and future flood heights for the 21st and 22nd
centuries. By 2100, under 1.5 °C, 2.0 °C, and 2.5 °C GMST
stabilization, median GMSL is projected to rise 47 cm with a very likely range
of 28-82 cm (90% probability), 55 cm (very likely 30-94 cm), and 58 cm (very
likely 36-93 cm), respectively. As an independent comparison, a semi-empirical
sea level model calibrated to temperature and GMSL over the past two millennia
estimates median GMSL will rise within < 13% of these projections. By 2150,
relative to the 2.0 °C scenario, GMST stabilization at 1.5 °C
inundates land currently occupied by roughly 5 million fewer inhabitants,
including 40,000 fewer individuals currently residing in Small Island
Developing States. Relative to a 2.0 °C scenario, the reduction in the
amplification of the frequency of the 100-yr flood arising from a 1.5 °C
GMST stabilization is greatest in the eastern United States and in Europe, with
flood frequency amplification being reduced by about half.
| 0 | 1 | 0 | 0 | 0 | 0 |
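One common way to compute the frequency amplification discussed above (a hedged sketch: it assumes extreme flood heights follow a Gumbel tail with scale $\lambda$, in which case raising mean sea level by $\Delta$ multiplies the frequency of exceeding a fixed height by $e^{\Delta/\lambda}$; the paper's probabilistic machinery is more elaborate):

```python
import math

def gumbel_amplification(slr_m, scale_m):
    """Amplification factor for the exceedance frequency of a fixed flood height.

    slr_m: sea-level rise in metres; scale_m: Gumbel scale of the extremes.
    Under a Gumbel tail, AF = exp(SLR / scale). Illustrative assumption only.
    """
    return math.exp(slr_m / scale_m)

# Toy numbers: 47 cm vs 55 cm of GMSL rise at a gauge with a 0.15 m scale.
for slr in (0.47, 0.55):
    af = gumbel_amplification(slr, 0.15)
    print(f"SLR {slr:.2f} m: 100-yr flood becomes a {100 / af:.1f}-yr event")
```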
Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data | We present a factorized hierarchical variational autoencoder, which learns
disentangled and interpretable representations from sequential data without
supervision. Specifically, we exploit the multi-scale nature of information in
sequential data by formulating it explicitly within a factorized hierarchical
graphical model that imposes sequence-dependent priors and sequence-independent
priors to different sets of latent variables. The model is evaluated on two
speech corpora to demonstrate, qualitatively, its ability to transform speakers
or linguistic content by manipulating different sets of latent variables; and
quantitatively, its ability to outperform an i-vector baseline for speaker
verification and reduce the word error rate by as much as 35% in mismatched
train/test scenarios for automatic speech recognition tasks.
| 1 | 0 | 0 | 1 | 0 | 0 |
De-blending Deep Herschel Surveys: A Multi-wavelength Approach | Cosmological surveys in the far infrared are known to suffer from confusion.
The Bayesian de-blending tool, XID+, currently provides one of the best ways to
de-confuse deep Herschel SPIRE images, using a flat flux density prior. This
work is to demonstrate that existing multi-wavelength data sets can be
exploited to improve XID+ by providing an informed prior, resulting in more
accurate and precise extracted flux densities. Photometric data for galaxies in
the COSMOS field were used to constrain spectral energy distributions (SEDs)
using the fitting tool CIGALE. These SEDs were used to create Gaussian prior
estimates in the SPIRE bands for XID+. The multi-wavelength photometry and the
extracted SPIRE flux densities were run through CIGALE again to allow us to
compare the performance of the two priors. Inferred ALMA flux densities
(F$^i$), at 870$\mu$m and 1250$\mu$m, from the best fitting SEDs from the
second CIGALE run were compared with measured ALMA flux densities (F$^m$) as an
independent performance validation. Similar validations were conducted with the
SED modelling and fitting tool MAGPHYS and modified black body functions to
test for model dependency. We demonstrate a clear improvement in agreement
between the flux densities extracted with XID+ and existing data at other
wavelengths when using the new informed Gaussian prior over the original
uninformed prior. The residuals between F$^m$ and F$^i$ were calculated. For
the Gaussian prior, these residuals, expressed as a multiple of the ALMA error
($\sigma$), have a smaller standard deviation, 7.95$\sigma$ for the Gaussian
prior compared to 12.21$\sigma$ for the flat prior, reduced mean, 1.83$\sigma$
compared to 3.44$\sigma$, and have reduced skew to positive values, 7.97
compared to 11.50. These results were determined to not be significantly model
dependent. This results in statistically more reliable SPIRE flux densities.
| 0 | 1 | 0 | 0 | 0 | 0 |
Room-Temperature Ionic Liquids Meet Bio-Membranes: the State-of-the- Art | Room-temperature ionic liquids (RTIL) are a new class of organic salts whose
melting temperature falls below the conventional limit of 100 °C. Their low vapor
pressure, moreover, has made these ionic compounds the solvents of choice of
the so-called green chemistry. For these and other peculiar characteristics,
they are increasingly used in industrial applications. However, studies of
their interaction with living organisms have highlighted mild to severe health
hazards. Since their cytotoxicity shows a positive correlation with their
lipophilicity, several chemical-physical studies of their interaction with
biomembranes have been carried out in the last few years, aiming to identify
the microscopic mechanisms behind their toxicity. Cation chain length and anion
nature have been seen to affect the lipophilicity and, in turn, the toxicity
of RTILs. The emerging picture, however, raises new questions, points to the
need to assess toxicity on a case-by-case basis, but also suggests a potential
positive role of RTILs in pharmacology, bio-medicine, and, more generally,
bio-nano-technology. Here, we review this new subject of research, and comment
on the future and the potential importance of this new field of study.
| 0 | 1 | 0 | 0 | 0 | 0 |
The use of Charts, Pivot Tables, and Array Formulas in two Popular Spreadsheet Corpora | The use of spreadsheets in industry is widespread. Companies base decisions
on information coming from spreadsheets. Unfortunately, spreadsheets are
error-prone and this increases the risk that companies base their decisions on
inaccurate information, which can lead to incorrect decisions and loss of
money. In general, spreadsheet research aims to reduce the error-proneness
of spreadsheets. Most research is concentrated on the use of formulas. However,
there are other constructions in spreadsheets, like charts, pivot tables, and
array formulas, that are also used to present decision support information to
the user. There is almost no research about how these constructions are used.
To improve spreadsheet quality, it is important to understand how spreadsheets
are used; to obtain a complete picture, the use of charts, pivot tables, and
array formulas should be included in research. In this paper, we analyze the
use of these constructions in two popular spreadsheet corpora: Enron and
EUSES.
| 1 | 0 | 0 | 0 | 0 | 0 |
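Counting such constructions across a corpus can be sketched with only the
standard library, since an .xlsx file is a ZIP package whose chart and
pivot-table parts live under fixed paths (xl/charts/, xl/pivotTables/) and
whose array formulas are marked with t="array" in the sheet XML. The function
below is a minimal illustration of that idea, not the tooling used in the
paper:

```python
import zipfile

def count_constructions(xlsx_path):
    """Count charts, pivot tables, and array formulas in one .xlsx file."""
    counts = {"charts": 0, "pivot_tables": 0, "array_formulas": 0}
    with zipfile.ZipFile(xlsx_path) as pkg:
        for name in pkg.namelist():
            if name.startswith("xl/charts/chart"):
                counts["charts"] += 1
            elif name.startswith("xl/pivotTables/pivotTable"):
                counts["pivot_tables"] += 1
            elif name.startswith("xl/worksheets/sheet"):
                # Array formulas appear as <f t="array" ...> in sheet XML.
                counts["array_formulas"] += pkg.read(name).count(b't="array"')
    return counts

# counts = count_constructions("corpus/example.xlsx")  # hypothetical path
```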
Disordered statistical physics in low dimensions: extremes, glass transition, and localization | This thesis presents original results in two domains of disordered
statistical physics: logarithmic correlated Random Energy Models (logREMs), and
localization transitions in long-range random matrices.
In the first part devoted to logREMs, we show how to characterise their
common properties and model--specific data. Then we develop their replica
symmetry breaking treatment, which leads to the freezing scenario of their free
energy distribution and the general description of their minima process, in
terms of a decorated Poisson point process. We also report a series of new
applications of the Jack polynomials in the exact predictions of some
observables in the circular model and its variants. Finally, we present the
recent progress on the exact connection between logREMs and the Liouville
conformal field theory.
The goal of the second part is to introduce and study a new class of banded
random matrices, the broadly distributed class, which is characterized by an
effective sparseness. We will first study a specific model of the class, the
Beta Banded random matrices, inspired by an exact mapping to a recently studied
statistical model of long--range first--passage percolation/epidemics dynamics.
Using analytical arguments based on the mapping and numerics, we show the
existence of localization transitions with mobility edges in the
"stretch--exponential" parameter--regime of the statistical models. Then, using
a block--diagonalization renormalization approach, we argue that such
localization transitions occur generically in the broadly distributed class.
| 0 | 1 | 0 | 0 | 0 | 0 |
HyperMinHash: MinHash in LogLog space | In this extended abstract, we describe and analyze a lossy compression of
MinHash from buckets of size $O(\log n)$ to buckets of size $O(\log\log n)$ by
encoding using floating-point notation. This new compressed sketch, which we
call HyperMinHash, as we build off a HyperLogLog scaffold, can be used as a
drop-in replacement of MinHash. Unlike comparable Jaccard index fingerprinting
algorithms in sub-logarithmic space (such as b-bit MinHash), HyperMinHash
retains MinHash's features of streaming updates, unions, and cardinality
estimation. For a multiplicative approximation error $1+ \epsilon$ on a Jaccard
index $ t $, given a random oracle, HyperMinHash needs $O\left(\epsilon^{-2}
\left( \log\log n + \log \frac{1}{ t \epsilon} \right)\right)$ space.
HyperMinHash allows estimating Jaccard indices of 0.01 for set cardinalities on
the order of $10^{19}$ with a relative error of around 10\% using 64 KiB of
memory; MinHash can only estimate Jaccard indices for cardinalities of
$10^{10}$ with the same memory consumption.
| 1 | 0 | 0 | 0 | 0 | 0 |
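A minimal sketch of the floating-point compression idea: store the
HyperLogLog-style position of the leading one of each bucket's minimum hash,
plus a few trailing mantissa bits, instead of the full $O(\log n)$ value. The
bit widths and function below are illustrative assumptions, not the paper's
reference implementation:

```python
def compress_bucket(h: int, width: int = 64, exp_bits: int = 6, man_bits: int = 4) -> int:
    """Compress a `width`-bit minimum hash to exp_bits + man_bits bits.

    The exponent is the number of leading zeros (saturated, as in
    HyperLogLog); the mantissa keeps the first `man_bits` bits that follow
    the leading one.
    """
    lz = width - h.bit_length()              # leading zeros of the hash
    exponent = min(lz, 2**exp_bits - 1)      # saturating counter
    remaining = width - exponent - 1         # bits after the leading one
    if remaining >= man_bits:
        mantissa = (h >> (remaining - man_bits)) & (2**man_bits - 1)
    else:
        mantissa = (h << (man_bits - remaining)) & (2**man_bits - 1)
    return (exponent << man_bits) | mantissa

print(compress_bucket(0x0000_1234_5678_9ABC))  # a 10-bit code
```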
Asynchronous stochastic price pump | We propose a model for equity trading in a population of agents where each
agent acts to achieve his or her target stock-to-bond ratio, and, as a feedback
mechanism, follows a market adaptive strategy. In this model only a fraction of
agents participates in buying and selling stock during a trading period, while
the rest of the group accepts the newly set price. Using numerical simulations
we show that the stochastic process settles on a stationary regime for the
returns. The mean return can be greater or less than the return on the bond and
it is determined by the parameters of the adaptive mechanism. When the number
of interacting agents is fixed, the distribution of the returns follows the
log-normal density. In this case, we give an analytic formula for the mean rate
of return in terms of the rate of change of agents' risk levels and confirm the
formula by numerical simulations. However, when the number of interacting
agents per period is random, the distribution of returns can significantly
deviate from the log-normal, especially as the variance of the distribution for
the number of interacting agents increases.
| 0 | 0 | 0 | 0 | 0 | 1 |
Character Distributions of Classical Chinese Literary Texts: Zipf's Law, Genres, and Epochs | We collect 14 representative corpora for major periods in Chinese history in
this study. These corpora include poetic works produced in several dynasties,
novels of the Ming and Qing dynasties, and essays and news reports written in
modern Chinese. The time span of these corpora ranges between 1046 BCE and 2007
CE. We analyze their character and word distributions from the viewpoint of
Zipf's law, and look for factors that affect the deviations and similarities
between their Zipfian curves. Both genre and epoch demonstrate their influence
in our analyses. Specifically, the character distributions for poetic works produced
between 618 CE and 1644 CE exhibit striking similarity. In addition, although
texts of the same dynasty may tend to use the same set of characters, their
character distributions still deviate from each other.
| 1 | 0 | 0 | 0 | 0 | 0 |
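As an illustration of the kind of Zipfian analysis described here, a
least-squares fit of the rank-frequency relation log f = log C - s log r on
synthetic counts (any real corpus' character frequencies could be substituted):

```python
import numpy as np

def zipf_exponent(counts):
    """Fit log(freq) = log(C) - s*log(rank); return the Zipf exponent s."""
    freqs = np.sort(np.asarray(counts, dtype=float))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

# Synthetic "corpus": counts drawn from an ideal Zipf curve with s = 1.
counts = (1000.0 / np.arange(1, 501)).round()
print(zipf_exponent(counts))  # close to 1
```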
Improving Bi-directional Generation between Different Modalities with Variational Autoencoders | We investigate deep generative models that can exchange multiple modalities
bi-directionally, e.g., generating images from corresponding texts and vice
versa. A major approach to achieve this objective is to train a model that
integrates all the information of different modalities into a joint
representation and then to generate one modality from the corresponding other
modality via this joint representation. We simply applied this approach to
variational autoencoders (VAEs), which we call a joint multimodal variational
autoencoder (JMVAE). However, we found that when this model attempts to
generate a high-dimensional modality that is missing at the input, the joint
representation collapses and this modality cannot be generated successfully.
Furthermore, we confirmed that this difficulty cannot be resolved even using a
known solution. Therefore, in this study, we propose two models to prevent this
difficulty: JMVAE-kl and JMVAE-h. Results of our experiments demonstrate that
these methods can prevent the difficulty above and that they generate
modalities bi-directionally with equal or higher likelihood than conventional
VAE methods, which generate in only one direction. Moreover, we confirm that
these methods can obtain the joint representation appropriately, so that they
can generate diverse variations of a modality by moving over the joint
representation or by changing the value of another modality.
| 0 | 0 | 0 | 1 | 0 | 0 |
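A sketch of the kind of regularizer JMVAE-kl adds, assuming diagonal-Gaussian
encoders: the unimodal encoders q(z|x) and q(z|w) are pulled toward the joint
encoder q(z|x,w) via closed-form Gaussian KL terms. The weighting and exact
formulation here are illustrative rather than a faithful reproduction of the
paper's objective:

```python
import numpy as np

def kl_diag_gauss(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) )."""
    v1, v2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)

def jmvae_kl_penalty(joint, enc_x, enc_w, alpha=1.0):
    """Pull the unimodal encoders toward the joint encoder q(z|x,w).

    Each argument is a (mu, logvar) pair of 1-D arrays for one sample.
    """
    return alpha * (kl_diag_gauss(*joint, *enc_x) + kl_diag_gauss(*joint, *enc_w))

z_dim = 8
joint = (np.zeros(z_dim), np.zeros(z_dim))
enc_x = (0.1 * np.ones(z_dim), np.zeros(z_dim))
enc_w = (np.zeros(z_dim), 0.2 * np.ones(z_dim))
print(jmvae_kl_penalty(joint, enc_x, enc_w))
```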
Matching neural paths: transfer from recognition to correspondence search | Many machine learning tasks require finding per-part correspondences between
objects. In this work we focus on low-level correspondences - a highly
ambiguous matching problem. We propose to use a hierarchical semantic
representation of the objects, coming from a convolutional neural network, to
solve this ambiguity. Training it for low-level correspondence prediction
directly might not be an option in some domains where the ground-truth
correspondences are hard to obtain. We show how transfer from recognition can
be used to avoid such training. Our idea is to mark parts as "matching" if
their features are close to each other at all levels of the convolutional
feature hierarchy (neural paths). Although the overall number of such paths is
exponential in the number of layers, we propose a polynomial algorithm for
aggregating all of them in a single backward pass. The empirical validation is
done on the task of stereo correspondence and demonstrates that we achieve
competitive results among the methods which do not use labeled target domain
data.
| 1 | 0 | 0 | 0 | 0 | 0 |
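A toy sketch of the polynomial aggregation idea: in a layered graph where each
unit is either "close" or not, the number of all-close bottom-to-top paths
(exponentially many in the depth) can be accumulated in a single backward
pass. The layer sizes, connectivity, and closeness flags below are invented
for illustration:

```python
import numpy as np

def count_matching_paths(close, parents):
    """Count paths passing only through 'close' units, in one backward pass.

    close[l] is a boolean array over units of layer l (0 = bottom).
    parents[l][i] lists the layer-(l+1) units connected to unit i of layer l.
    Returns, per bottom unit, the number of all-close paths to the top layer.
    """
    L = len(close)
    paths = close[L - 1].astype(np.int64)   # top layer: 1 path per close unit
    for l in range(L - 2, -1, -1):          # single backward sweep
        paths = np.array(
            [close[l][i] * sum(paths[p] for p in parents[l][i])
             for i in range(len(close[l]))]
        )
    return paths

close = [np.array([True, True]), np.array([True, False, True]), np.array([True])]
parents = [[[0, 1], [1, 2]],                # layer 0 -> layer 1 links
           [[0], [0], [0]]]                 # layer 1 -> layer 2 links
print(count_matching_paths(close, parents)) # [1, 1]
```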
On the affine random walk on the torus | Let $\mu$ be a borelian probability measure on
$\mathbf{G}:=\mathrm{SL}_d(\mathbb{Z}) \ltimes \mathbb{T}^d$. Define, for $x\in
\mathbb{T}^d$, a random walk starting at $x$ by setting, for $n\in \mathbb{N}$, \[
\left\{\begin{array}{rcl} X_0 &=&x\\ X_{n+1} &=& a_{n+1} X_n + b_{n+1}
\end{array}\right. \] where $((a_n,b_n))\in \mathbf{G}^\mathbb{N}$ is an iid
sequence of law $\mu$.
Then, we denote by $\mathbb{P}_x$ the measure on $(\mathbb{T}^d)^\mathbb{N}$
that is the image of $\mu^{\otimes \mathbb{N}}$ by the map $\left((g_n) \mapsto
(x,g_1 x, g_2 g_1 x, \dots , g_n \dots g_1 x, \dots)\right)$ and for any
$\varphi \in \mathrm{L}^1((\mathbb{T}^d)^\mathbb{N}, \mathbb{P}_x)$, we set
$\mathbb{E}_x \varphi((X_n)) = \int \varphi((X_n))
\mathrm{d}\mathbb{P}_x((X_n))$.
Bourgain, Furman, Lindenstrauss and Mozes studied this random walk when
$\mu$ is concentrated on $\mathrm{SL}_d(\mathbb{Z}) \ltimes\{0\}$, and their
results allow one to study, for any Hölder-continuous function $f$ on the
torus, the sequence $(f(X_n))$ when $x$ is not too well approximable by
rational points.
In this article, we are interested in the case where $\mu$ is not
concentrated on $\mathrm{SL}_d(\mathbb{Z}) \ltimes \mathbb{Q}^d/\mathbb{Z}^d$
and we prove that, under assumptions on the group spanned by the support of
$\mu$, the Lebesgue measure $\nu$ on the torus is the only stationary
probability measure and that, for any Hölder-continuous function $f$ on the
torus, $\mathbb{E}_x f(X_n)$ converges exponentially fast to $\int
f\mathrm{d}\nu$.
Then, we use this to prove the law of large numbers, a non-concentration
inequality, the functional central limit theorem and its almost-sure version
for the sequence $(f(X_n))$.
In the appendix, we state a non-concentration inequality for products of
random matrices without any irreducibility assumption.
| 0 | 0 | 1 | 0 | 0 | 0 |
StackInsights: Cognitive Learning for Hybrid Cloud Readiness | Hybrid cloud is an integrated cloud computing environment utilizing a mix of
public cloud, private cloud, and on-premise traditional IT infrastructures.
Workload awareness, defined as a detailed, full-range understanding of each
individual workload, is essential in implementing the hybrid cloud. While it is
critical to perform an accurate analysis to determine which workloads are
appropriate for on-premise deployment versus which workloads can be migrated to
a cloud off-premise, the assessment is mainly performed by rule- or policy-based
approaches. In this paper, we introduce StackInsights, a novel cognitive system
to automatically analyze and predict the cloud readiness of workloads for an
enterprise. Our system harnesses the critical metrics across the entire stack:
1) infrastructure metrics, 2) data relevance metrics, and 3) application
taxonomy, to identify workloads that have characteristics of a) low sensitivity
with respect to business security, criticality and compliance, and b) low
response time requirements and access patterns. Since the capture of the data
relevance metrics involves an intrusive and in-depth scanning of the content of
storage objects, a machine learning model is applied to perform the business
relevance classification by learning from the meta-level metrics harnessed
across the stack. In contrast to traditional methods, StackInsights
reduces the total time for hybrid cloud readiness assessment by orders of
magnitude.
| 1 | 0 | 0 | 0 | 0 | 0 |
Risk-averse model predictive control | Risk-averse model predictive control (MPC) offers a control framework that
allows one to account for ambiguity in the knowledge of the underlying
probability distribution and unifies stochastic and worst-case MPC. In this
paper we study risk-averse MPC problems for constrained nonlinear Markovian
switching systems using generic cost functions, and derive Lyapunov-type
risk-averse stability conditions by leveraging the properties of risk-averse
dynamic programming operators. We propose a controller design procedure to
design risk-averse stabilizing terminal conditions for constrained nonlinear
Markovian switching systems. Lastly, we cast the resulting risk-averse optimal
control problem in a favorable form which can be solved efficiently, rendering
risk-averse MPC suitable for applications.
| 0 | 0 | 1 | 0 | 0 | 0 |
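As a small illustration of the risk-averse ingredient, the average
value-at-risk (AV@R, also called CVaR) of a set of scenario costs — one common
coherent risk measure in risk-averse control, though the paper treats generic
cost functions — can be computed as follows:

```python
import numpy as np

def avar(costs, probs, alpha=0.1):
    """Average value-at-risk at level alpha: mean cost of the worst alpha-tail."""
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(costs)[::-1]                # worst scenarios first
    c, p = costs[order], probs[order]
    cum = np.cumsum(p)
    weights = np.clip(np.minimum(cum, alpha) - (cum - p), 0.0, None)
    return float(weights @ c) / alpha

costs = [1.0, 2.0, 10.0, 4.0]
probs = [0.4, 0.3, 0.1, 0.2]
print(avar(costs, probs, alpha=0.2))  # 7.0: mean of the worst 20% of mass
```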
Neutral evolution and turnover over centuries of English word popularity | Here we test Neutral models against the evolution of English word frequency
and vocabulary at the population scale, as recorded in annual word frequencies
from three centuries of English language books. Against these data, we test
both static and dynamic predictions of two neutral models, including the
relation between corpus size and vocabulary size, frequency distributions, and
turnover within those frequency distributions. Although a commonly used Neutral
model fails to replicate all these emergent properties at once, we find that a
modified two-stage Neutral model does replicate the static and dynamic
properties of the corpus data. This two-stage model is meant to represent a
relatively small corpus (population) of English books, analogous to a `canon',
sampled by an exponentially increasing corpus of books in the wider population
of authors. More broadly, this model -- a smaller neutral model within a
larger neutral model -- could represent those situations where mass attention
is focused on a small subset of the cultural variants.
| 1 | 1 | 0 | 0 | 0 | 0 |
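A minimal sketch of the standard neutral (random-copying) model against which
such corpus properties are tested; the population size, innovation rate, and
turnover statistic are illustrative choices:

```python
import random
from collections import Counter

def neutral_model(n_agents=1000, mu=0.01, steps=200, top=10, seed=1):
    """Random copying with innovation rate mu; track top-list turnover."""
    rng = random.Random(seed)
    pop = list(range(n_agents))            # each agent holds one variant
    next_new = n_agents
    prev_top, turnover = None, []
    for _ in range(steps):
        new_pop = []
        for _ in range(n_agents):
            if rng.random() < mu:          # invent a brand-new variant
                new_pop.append(next_new)
                next_new += 1
            else:                          # copy a random individual
                new_pop.append(rng.choice(pop))
        pop = new_pop
        cur_top = {v for v, _ in Counter(pop).most_common(top)}
        if prev_top is not None:
            turnover.append(len(cur_top - prev_top))
        prev_top = cur_top
    return sum(turnover) / len(turnover)   # mean new entries per step

print(neutral_model())  # average turnover in the top-10 list
```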
On Statistically-Secure Quantum Homomorphic Encryption | Homomorphic encryption is an encryption scheme that allows computations to be
evaluated on encrypted inputs without knowledge of their raw messages. Recently
Ouyang et al. constructed a quantum homomorphic encryption (QHE) scheme for
Clifford circuits with statistical security (or information-theoretic security
(IT-security)). It is natural to ask whether an
information-theoretically-secure (ITS) quantum FHE exists. If not, what other
nontrivial class of quantum circuits can be homomorphically evaluated with
IT-security? For the first question, we provide a limitation: an ITS quantum
FHE necessarily incurs exponential overhead. As for the second, we propose
a QHE scheme for the instantaneous quantum polynomial-time (IQP) circuits. Our
QHE scheme for IQP circuits follows from the one-time pad.
| 1 | 0 | 0 | 0 | 0 | 0 |
Monte Carlo Tree Search for Asymmetric Trees | We present an extension of Monte Carlo Tree Search (MCTS) that strongly
increases its efficiency for trees with asymmetry and/or loops. Asymmetric
termination of search trees introduces a type of uncertainty for which the
standard upper confidence bound (UCB) formula does not account. Our first
algorithm (MCTS-T), which assumes a non-stochastic environment, backs up
tree-structure uncertainty and leverages it for exploration in a modified UCB
formula. Results show vastly improved efficiency in a well-known asymmetric
domain in which MCTS performs arbitrarily badly. Next, we connect the ideas about
asymmetric termination to the presence of loops in the tree, where the same
state appears multiple times in a single trace. An extension to our algorithm
(MCTS-T+), which in addition to non-stochasticity assumes full state
observability, further increases search efficiency for domains with loops as
well. Benchmark testing on a set of OpenAI Gym and Atari 2600 games indicates
that our algorithms always perform better than or at least equivalent to
standard MCTS, and could be first-choice tree search algorithms for
non-stochastic, fully-observable environments.
| 0 | 0 | 0 | 1 | 0 | 0 |
On the difference-to-sum power ratio of speech and wind noise based on the Corcos model | The difference-to-sum power ratio was proposed and used to suppress wind
noise under specific acoustic conditions. In this contribution, a general
formulation of the difference-to-sum power ratio associated with a mixture of
speech and wind noise is proposed and analyzed. In particular, it is assumed
that the complex coherence of convective turbulence can be modelled by the
Corcos model. In contrast to the work in which the power ratio was first
presented, the employed Corcos model holds for every possible air stream
direction and takes into account the lateral coherence decay rate. The obtained
expression is subsequently validated with real data for a dual microphone
set-up. Finally, the difference-to-sum power ratio is exploited as a spatial
feature to indicate the frame-wise presence of wind noise, obtaining improved
detection performance when compared to an existing multi-channel wind noise
detection approach.
| 1 | 0 | 0 | 0 | 0 | 0 |
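For a two-microphone array with equal auto-spectra, the difference and sum
channel powers are proportional to (1 - Re γ) and (1 + Re γ), where γ is the
complex inter-microphone coherence, so the power ratio separates coherent
speech (γ close to 1 at low frequencies) from poorly coherent wind noise. A
sketch, with a simplified Corcos-style coherence whose decay constant,
convection speed, and geometry are assumed values:

```python
import numpy as np

def diff_to_sum_ratio(coh):
    """Power ratio of difference to sum channel for equal mic auto-spectra."""
    return (1.0 - np.real(coh)) / (1.0 + np.real(coh))

f = np.linspace(50.0, 4000.0, 5)            # frequency grid in Hz
d, c = 0.02, 343.0                          # mic spacing (m), speed of sound
# Plane-wave speech at an assumed 60-degree incidence angle:
speech_coh = np.exp(-2j * np.pi * f * d * np.cos(np.pi / 3) / c)
# Simplified Corcos-style convective coherence (alpha and U_c are assumed):
alpha, U_c = 0.125, 2.0                     # decay constant, convection speed (m/s)
wind_coh = np.exp(-alpha * 2 * np.pi * f * d / U_c)

print(diff_to_sum_ratio(speech_coh))        # small: coherent across the array
print(diff_to_sum_ratio(wind_coh))          # approaches 1: decaying coherence
```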
Quantum torus algebras and B(C) type Toda systems | In this paper, we construct a new even constrained B(C) type Toda hierarchy
and derive its B(C) type Block type additional symmetry. We also generalize the
B(C) type Toda hierarchy to the $N$-component B(C) type Toda hierarchy which is
proved to have symmetries of a coupled $\bigotimes^NQT_+ $ algebra ( $N$-folds
direct product of the positive half of the quantum torus algebra $QT$).
| 0 | 1 | 1 | 0 | 0 | 0 |
A Decidable Intuitionistic Temporal Logic | We introduce the logic $\sf ITL^e$, an intuitionistic temporal logic based on
structures $(W,\preccurlyeq,S)$, where $\preccurlyeq$ is used to interpret
intuitionistic implication and $S$ is a $\preccurlyeq$-monotone function used
to interpret temporal modalities. Our main result is that the satisfiability
and validity problems for $\sf ITL^e$ are decidable. We prove this by showing
that the logic enjoys the strong finite model property. In contrast, we also
consider a `persistent' version of the logic, $\sf ITL^p$, whose models are
similar to Cartesian products. We prove that, unlike $\sf ITL^e$, $\sf ITL^p$
does not have the finite model property.
| 0 | 0 | 1 | 0 | 0 | 0 |
A Re-weighted Joint Spatial-Radon Domain CT Image Reconstruction Model for Metal Artifact Reduction | High density implants such as metals often lead to serious artifacts in the
reconstructed CT images which hampers the accuracy of image based diagnosis and
treatment planning. In this paper, we propose a novel wavelet frame based CT
image reconstruction model to reduce metal artifacts. This model is built on a
joint spatial and Radon (projection) domain (JSR) image reconstruction
framework with a built-in weighting and re-weighting mechanism in Radon domain
to repair degraded projection data. The new weighting strategy used in the
proposed model not only makes the regularization in Radon domain by wavelet
frame transform more effective, but also makes the commonly assumed linear
model for CT imaging a more accurate approximation of the nonlinear physical
problem. The proposed model, which will be referred to as the re-weighted JSR
model, combines the ideas of the recently proposed wavelet frame based JSR
model \cite{Dong2013} and the normalized metal artifact reduction model
\cite{meyer2010normalized}, and manages to achieve noticeably better CT
reconstruction quality than both methods. To solve the proposed re-weighted JSR
model, an efficient alternating iteration algorithm is proposed with guaranteed
convergence. Numerical experiments on both simulated and real CT image data
demonstrate the effectiveness of the re-weighted JSR model and its advantage
over some of the state-of-the-art methods.
| 0 | 1 | 1 | 0 | 0 | 0 |
A simple proof that the $(n^2-1)$-puzzle is hard | The 15 puzzle is a classic reconfiguration puzzle with fifteen uniquely
labeled unit squares within a $4 \times 4$ board in which the goal is to slide
the squares (without ever overlapping) into a target configuration. By
generalizing the puzzle to an $n \times n$ board with $n^2-1$ squares, we can
study the computational complexity of problems related to the puzzle; in
particular, we consider the problem of determining whether a given end
configuration can be reached from a given start configuration via at most a
given number of moves. This problem was shown to be NP-complete by Ratner and
Warmuth (1990). We provide an alternative, simpler proof of this fact by a
reduction from the rectilinear Steiner tree problem.
| 1 | 0 | 0 | 0 | 0 | 0 |
Is Proxima Centauri b habitable? -- A study of atmospheric loss | We address the important question of whether the newly discovered exoplanet,
Proxima Centauri b (PCb), is capable of retaining an atmosphere over long
periods of time. This is done by adapting a sophisticated multi-species MHD
model originally developed for Venus and Mars, and computing the ion escape
losses from PCb. The results suggest that the ion escape rates are about two
orders of magnitude higher than those of the terrestrial planets of our Solar
system if
PCb is unmagnetized. In contrast, if the planet does have an intrinsic dipole
magnetic field, the rates are lowered for certain values of the stellar wind
dynamic pressure, but they are still higher than the observed values for our
Solar system's terrestrial planets. These results must be interpreted with due
caution, since most of the relevant parameters for PCb remain partly or wholly
unknown.
| 0 | 1 | 0 | 0 | 0 | 0 |
A Geometric Perspective on the Power of Principal Component Association Tests in Multiple Phenotype Studies | Joint analysis of multiple phenotypes can increase statistical power in
genetic association studies. Principal component analysis, as a popular
dimension reduction method, especially when the number of phenotypes is
high-dimensional, has been proposed to analyze multiple correlated phenotypes.
It has been empirically observed that the first PC, which summarizes the
largest amount of variance, can be less powerful than higher order PCs and
other commonly used methods in detecting genetic association signals. In this
paper, we investigate the properties of PCA-based multiple phenotype analysis
from a geometric perspective by introducing a novel concept called principal
angle. A particular PC is powerful if its principal angle is $0^\circ$ and is
powerless if its principal angle is $90^\circ$. Without prior knowledge about the
true principal angle, each PC can be powerless. We propose linear, non-linear
and data-adaptive omnibus tests by combining PCs. We show that the omnibus PC
test is robust and powerful in a wide range of scenarios. We study the
properties of the proposed methods using power analysis and eigen-analysis. The
subtle differences and close connections between these combined PC methods are
illustrated graphically in terms of their rejection boundaries. Our proposed
tests have convex acceptance regions and hence are admissible. The $p$-values
for the proposed tests can be efficiently calculated analytically and the
proposed tests have been implemented in a publicly available R package {\it
MPAT}. We conduct simulation studies in both low and high dimensional settings
with various signal vectors and correlation structures. We apply the proposed
tests to the joint analysis of metabolic syndrome related phenotypes with data
sets collected from four international consortia to demonstrate the
effectiveness of the proposed combined PC testing procedures.
| 0 | 0 | 0 | 1 | 0 | 0 |
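A small sketch of the geometric quantity at play: the angle between a
(hypothetical) association signal vector and each principal component
direction of the phenotype covariance, computed with NumPy:

```python
import numpy as np

def principal_angles(signal, cov):
    """Angle (degrees) between a signal vector and each eigenvector of cov."""
    _, eigvecs = np.linalg.eigh(cov)                 # columns are PCs
    eigvecs = eigvecs[:, ::-1]                       # sort so PC1 comes first
    u = signal / np.linalg.norm(signal)
    cos = np.abs(eigvecs.T @ u)                      # |cos| of each angle
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

cov = np.array([[1.0, 0.6], [0.6, 1.0]])             # two correlated phenotypes
signal = np.array([1.0, -1.0])                       # association signal direction
print(principal_angles(signal, cov))                  # PC1 is ~90 deg, PC2 ~0 deg
```

In this toy example the first PC is powerless (angle 90°) while the second PC
is fully aligned with the signal, mirroring the observation that PC1 can be
less powerful than higher-order PCs.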
A Design Based on Stair-case Band Alignment of Electron Transport Layer for Improving Performance and Stability in Planar Perovskite Solar Cells | Among the n-type metal oxide materials used in the planar perovskite solar
cells, zinc oxide (ZnO) is a promising candidate to replace titanium dioxide
(TiO2) due to its relatively high electron mobility, high transparency, and
versatile nanostructures. Here, we present the application of low temperature
solution processed ZnO/Al-doped ZnO (AZO) bilayer thin film as electron
transport layers (ETLs) in the inverted perovskite solar cells, which provide a
stair-case band profile. Experimental results revealed that the power
conversion efficiency (PCE) of the perovskite solar cells was significantly
increased from 12.25% to 16.07% by employing the AZO thin film as the buffer
layer. Meanwhile, the short-circuit current density (Jsc), open-circuit voltage
(Voc), and fill factor (FF) were improved to 20.58 mA/cm2, 1.09 V, and 71.6%,
respectively. The enhancement in performance is attributed to the modified
interface in ETL with stair-case band alignment of ZnO/AZO/CH3NH3PbI3, which
allows more efficient extraction of photogenerated electrons in the CH3NH3PbI3
active layer. Thus, it is demonstrated that the ZnO/AZO bilayer ETLs would
benefit the electron extraction and contribute to enhancing the performance of
perovskite solar cells.
| 0 | 1 | 0 | 0 | 0 | 0 |
Statistics on functional data and covariance operators in linear inverse problems | We introduce a framework for the statistical analysis of functional data in a
setting where these objects cannot be fully observed, but only indirect and
noisy measurements are available, namely an inverse problem setting. The
proposed methodology can be applied either to the analysis of indirectly
observed functional data or to the associated covariance operators,
representing second-order information, and thus lying on a non-Euclidean space.
To deal with the ill-posedness of the inverse problem, we exploit the spatial
structure of the sample data by introducing a flexible regularizing term
embedded in the model. Thanks to its efficiency, the proposed model is applied
to MEG data, leading to a novel statistical approach to the investigation of
functional connectivity.
| 0 | 0 | 0 | 1 | 0 | 0 |
Development of ICA and IVA Algorithms with Application to Medical Image Analysis | Independent component analysis (ICA) is a widely used BSS method that can
uniquely achieve source recovery, subject to only scaling and permutation
ambiguities, through the assumption of statistical independence on the part of
the latent sources. Independent vector analysis (IVA) extends the applicability
of ICA by jointly decomposing multiple datasets through the exploitation of the
dependencies across datasets. Though both ICA and IVA algorithms cast in the
maximum likelihood (ML) framework enable, in principle, the use of all available
statistical information, in practice they often deviate from their theoretical optimality
properties due to improper estimation of the probability density function
(PDF). This motivates the development of flexible ICA and IVA algorithms that
closely adhere to the underlying statistical description of the data. Although
it is attractive to minimize the assumptions, important prior information about
the data, such as sparsity, is usually available. If incorporated into the ICA
model, use of this additional information can relax the independence
assumption, resulting in an improvement in the overall separation performance.
Therefore, the development of a unified mathematical framework that can take
into account both statistical independence and sparsity is of great interest.
In this work, we first introduce a flexible ICA algorithm that uses an
effective PDF estimator to accurately capture the underlying statistical
properties of the data. We then discuss several techniques to accurately
estimate the parameters of the multivariate generalized Gaussian distribution,
and how to integrate them into the IVA model. Finally, we provide a
mathematical framework that enables direct control over the influence of
statistical independence and sparsity, and use this framework to develop an
effective ICA algorithm that can jointly exploit these two forms of diversity.
| 0 | 0 | 0 | 1 | 0 | 0 |
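As a point of reference for the ICA setting described above, a minimal blind
source separation example with scikit-learn's FastICA (a standard fixed-point
ICA implementation, not the flexible ML algorithm developed in this work):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t),                       # smooth source
                np.sign(np.sin(3 * t)),              # square wave
                rng.laplace(size=t.size)]            # super-Gaussian noise
mixing = rng.normal(size=(3, 3))
x = sources @ mixing.T                               # observed mixtures

ica = FastICA(n_components=3, random_state=0)
s_hat = ica.fit_transform(x)                         # recovered up to scale/permutation
print(np.round(np.corrcoef(sources.T, s_hat.T)[:3, 3:], 2))
```

Each recovered component correlates strongly with exactly one true source,
illustrating recovery subject to the scaling and permutation ambiguities noted
above.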
Observability of characteristic binary-induced structures in circumbinary disks | Context: A substantial fraction of protoplanetary disks forms around stellar
binaries. The binary system generates a time-dependent non-axisymmetric
gravitational potential, inducing strong tidal forces on the circumbinary disk.
This leads to a change in basic physical properties of the circumbinary disk,
which should in turn result in unique structures that are potentially
observable with the current generation of instruments.
Aims: The goal of this study is to identify these characteristic structures,
to constrain the physical conditions that cause them, and to evaluate the
feasibility to observe them in circumbinary disks.
Methods: To achieve this, two-dimensional hydrodynamic simulations are first
performed. The resulting density distributions are post-processed with a 3D
radiative transfer code to generate re-emission and scattered light maps. Based
on these, we study the influence of various parameters, such as the mass of the
stellar components, the mass of the disk and the binary separation on
observable features in circumbinary disks.
Results: We find that the Atacama Large (sub-)Millimetre Array (ALMA) as well
as the European Extremely Large Telescope (E-ELT) are capable of tracing
asymmetries in the inner region of circumbinary disks which are affected most
by the binary-disk interaction. Observations at submillimetre/millimetre
wavelengths will allow the detection of the density waves at the inner rim of
the disk and the inner cavity. With the E-ELT one can partially resolve the
innermost parts of the disk in the infrared wavelength range, including the
disk's rim, accretion arms and potentially the expected circumstellar disks
around each of the binary components.
| 0 | 1 | 0 | 0 | 0 | 0 |
Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results | As part of the 2016 public evaluation challenge on Detection and
Classification of Acoustic Scenes and Events (DCASE 2016), the second task
focused on evaluating sound event detection systems using synthetic mixtures of
office sounds. This task, which follows the `Event Detection - Office
Synthetic' task of DCASE 2013, studies the behaviour of tested algorithms when
facing controlled levels of audio complexity with respect to background noise
and polyphony/density, with the added benefit of a very accurate ground truth.
This paper presents the task formulation, evaluation metrics, and submitted
systems, and provides a statistical analysis of the results achieved, with
respect to various aspects of the evaluation dataset.
| 1 | 0 | 0 | 1 | 0 | 0 |
Pressure-induced Superconductivity in the Three-component Fermion Topological Semimetal Molybdenum Phosphide | Topological semimetal, a novel state of quantum matter hosting exotic
emergent quantum phenomena dictated by the non-trivial band topology, has
emerged as a new frontier in condensed-matter physics. Very recently, a
coexistence of triply degenerate points of band crossing and Weyl points near
the Fermi level was theoretically predicted and immediately experimentally
verified in single crystalline molybdenum phosphide (MoP). Here we report
high-pressure electronic transport and synchrotron X-ray
diffraction (XRD) measurements on this material, combined with density functional theory (DFT)
calculations. We report the emergence of pressure-induced superconductivity in
MoP with a critical temperature Tc of about 2 K at 27.6 GPa, rising to 3.7 K at
the highest pressure of 95.0 GPa studied. No structural phase transition is
detected up to 60.6 GPa in the XRD data. Meanwhile, the Weyl points and triply
degenerate points topologically protected by the crystal symmetry are retained
at high pressure as revealed by our DFT calculations. The coexistence of
three-component fermion and superconductivity in heavily pressurized MoP offers
an excellent platform to study the interplay between topological phase of
matter and superconductivity.
| 0 | 1 | 0 | 0 | 0 | 0 |
Chaotic zones around rotating small bodies | Small bodies of the Solar system, like asteroids, trans-Neptunian objects,
cometary nuclei, planetary satellites, with diameters smaller than one thousand
kilometers usually have irregular shapes, often resembling dumb-bells, or
contact binaries. The spinning of such a gravitating dumb-bell creates around
it a zone of chaotic orbits. We determine its extent analytically and
numerically. We find that the chaotic zone swells significantly if the rotation
rate is decreased, in particular, the zone swells more than twice if the
rotation rate is decreased ten times with respect to the "centrifugal breakup"
threshold. We illustrate the properties of the chaotic orbital zones with
examples of the global orbital dynamics about asteroid 243 Ida (which has a
moon, Dactyl, orbiting near the edge of the chaotic zone) and asteroid 25143
Itokawa.
| 0 | 1 | 0 | 0 | 0 | 0 |
Collective excitations and supersolid behavior of bosonic atoms inside two crossed optical cavities | We discuss the nature of symmetry breaking and the associated collective
excitations for a system of bosons coupled to the electromagnetic field of two
optical cavities. For the specific configuration realized in a recent
experiment at ETH, we show that, in the absence of direct intercavity scattering
and for parameters chosen such that the atoms couple symmetrically to both
cavities, the system possesses an approximate $U(1)$ symmetry which holds
asymptotically for vanishing cavity field intensity. It corresponds to the
invariance with respect to redistributing the total intensity $I=I_1+I_2$
between the two cavities. The spontaneous breaking of this symmetry gives rise
to a broken continuous translation-invariance for the atoms, creating a
supersolid-like order in the presence of a Bose-Einstein condensate. In
particular, we show that atom-mediated scattering between the two cavities,
which favors the state with equal light intensities $I_1=I_2$ and reduces the
symmetry to $\mathbf{Z}_2\otimes \mathbf{Z}_2$, gives rise to a finite value
$\sim \sqrt{I}$ of the effective Goldstone mass. For strong atom driving, this
low energy mode is clearly separated from an effective Higgs excitation
associated with changes of the total intensity $I$. In addition, we compute the
spectral distribution of the cavity light field and show that both the Higgs
and Goldstone mode acquire a finite lifetime due to Landau damping at non-zero
temperature.
| 0 | 1 | 0 | 0 | 0 | 0 |
Generalized Value Iteration Networks: Life Beyond Lattices | In this paper, we introduce a generalized value iteration network (GVIN),
which is an end-to-end neural network planning module. GVIN emulates the value
iteration algorithm by using a novel graph convolution operator, which enables
GVIN to learn and plan on irregular spatial graphs. We propose three novel
differentiable kernels as graph convolution operators and show that the
embedding based kernel achieves the best performance. We further propose
episodic Q-learning, an improvement upon traditional n-step Q-learning that
stabilizes training for networks that contain a planning module. Lastly, we
evaluate GVIN on planning problems in 2D mazes, irregular graphs, and
real-world street networks, showing that GVIN generalizes well for both
arbitrary graphs and unseen graphs of larger scale and outperforms a naive
generalization of VIN (discretizing a spatial graph into a 2D image).
| 1 | 0 | 0 | 0 | 0 | 0 |
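For reference, the classical algorithm that GVIN emulates: value iteration on
an arbitrary graph, here in a toy NumPy form with invented rewards and
adjacency structure:

```python
import numpy as np

def graph_value_iteration(neighbors, reward, gamma=0.95, iters=100):
    """Value iteration on a graph: V(s) = max over s' in N(s) of r(s') + gamma*V(s')."""
    v = np.zeros(len(neighbors))
    for _ in range(iters):
        v = np.array([max(reward[n] + gamma * v[n] for n in nbrs)
                      for nbrs in neighbors])
    return v

# Toy irregular graph: node 3 is the goal (reward 1), others cost -0.05.
neighbors = [[1, 2], [0, 3], [0, 3], [3]]        # adjacency lists
reward = np.array([-0.05, -0.05, -0.05, 1.0])
v = graph_value_iteration(neighbors, reward)
print(np.round(v, 2))          # values increase toward the goal
policy = [max(nbrs, key=lambda n: reward[n] + 0.95 * v[n]) for nbrs in neighbors]
print(policy)                   # greedy next-node choices
```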
Structure and Evolution of Internally Heated Hot Jupiters | Hot Jupiters receive strong stellar irradiation, producing equilibrium
temperatures of $1000 - 2500 \ \mathrm{Kelvin}$. Incoming irradiation directly
heats just their thin outer layer, down to pressures of $\sim 0.1 \
\mathrm{bars}$. In standard irradiated evolution models of hot Jupiters,
predicted transit radii are too small. Previous studies have shown that deeper
heating -- at a small fraction of the heating rate from irradiation -- can
explain observed radii. Here we present a suite of evolution models for HD
209458b where we systematically vary both the depth and intensity of internal
heating, without specifying the uncertain heating mechanism(s). Our models
start with a hot, high entropy planet whose radius decreases as the convective
interior cools. The applied heating suppresses this cooling. We find that very
shallow heating -- at pressures of $1 - 10 \ \mathrm{bars}$ -- does not
significantly suppress cooling, unless the total heating rate is $\gtrsim 10\%$
of the incident stellar power. Deeper heating, at $100 \ \mathrm{bars}$,
requires heating at only $1\%$ of the stellar irradiation to explain the
observed transit radius of $1.4 R_{\rm Jup}$ after 5 Gyr of cooling. In
general, more intense and deeper heating results in larger hot Jupiter radii.
Surprisingly, we find that heat deposited at $10^4 \ \mathrm{bars}$ -- which is
exterior to $\approx 99\%$ of the planet's mass -- suppresses planetary cooling
as effectively as heating at the center. In summary, we find that relatively
shallow heating is required to explain the radii of most hot Jupiters, provided
that this heat is applied early and persists throughout their evolution.
| 0 | 1 | 0 | 0 | 0 | 0 |
Inference-Based Distributed Channel Allocation in Wireless Sensor Networks | Interference-aware resource allocation of time slots and frequency channels
in single-antenna, half-duplex radio wireless sensor networks (WSN) is
challenging. Devising distributed algorithms for such a task further complicates
the problem. This work studies WSN joint time and frequency channel allocation
for a given routing tree, such that: a) allocation is performed in a fully
distributed way, i.e., information exchange is only performed among neighboring
WSN terminals, within communication up to two hops, and b) detection of
potential interfering terminals is simplified and can be practically realized.
The algorithm imprints space, time, frequency and radio hardware constraints
into a loopy factor graph and performs iterative message passing/loopy belief
propagation (BP) with randomized initial priors. Sufficient conditions for
convergence to a valid solution are offered, for the first time in the
literature, exploiting the structure of the proposed factor graph. Based on
theoretical findings, modifications of BP are devised that i) accelerate
convergence to a valid solution and ii) reduce computation cost. Simulations
reveal promising throughput results of the proposed distributed algorithm, even
though it utilizes simplified interfering terminals set detection. Future work
could modify the constraints such that other disruptive wireless technologies
(e.g., full-duplex radios or network coding) could be accommodated within the
same inference framework.
| 1 | 0 | 0 | 0 | 0 | 0 |
Switch Functions | We define a switch function to be a function from an interval to $\{1,-1\}$
with a finite number of sign changes. (Special cases are the Walsh functions.)
By a topological argument, we prove that, given $n$ real-valued functions,
$f_1, \dots, f_n$, in $L^1[0,1]$, there exists a switch function, $\sigma$,
with at most $n$ sign changes that is simultaneously orthogonal to all of them
in the sense that $\int_0^1 \sigma(t)f_i(t)dt=0$, for all $i = 1, \dots , n$.
Moreover, we prove that, for each $\lambda \in (-1,1)$, there exists a unique
switch function, $\sigma$, with $n$ switches such that $\int_0^1 \sigma(t) p(t)
dt = \lambda \int_0^1 p(t)dt$ for every real polynomial $p$ of degree at most
$n-1$. We also prove the same statement holds for every real even polynomial of
degree at most $2n-2$. Furthermore, for each of these latter results, we write
down, in terms of $\lambda$ and $n$, a degree $n$ polynomial whose roots are
the switch points of $\sigma$; we are thereby able to compute these switch
functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
Schrödinger operators periodic in octants | We consider Schrödinger operators with periodic potentials in the positive
quadrant for dim $>1$ with Dirichlet boundary condition. We show that for any
integer $N$ and any interval $I$ there exists a periodic potential such that
the Schrödinger operator has $N$ eigenvalues, counted with multiplicity, in
this interval and there is no other spectrum in the interval. Furthermore,
there is essential spectrum to the right and to the left of it.
Moreover, we prove similar results for Schrödinger operators for other
domains. The proof is based on the inverse spectral theory for Hill operators
on the real line.
| 0 | 0 | 1 | 0 | 0 | 0 |
First international comparison of fountain primary frequency standards via a long distance optical fiber link | We report on the first comparison of distant caesium fountain primary
frequency standards (PFSs) via an optical fiber link. The 1415 km long optical
link connects two PFSs at LNE-SYRTE (Laboratoire National de métrologie et
d'Essais - SYstème de Références Temps-Espace) in Paris (France)
with two at PTB (Physikalisch-Technische Bundesanstalt) in Braunschweig
(Germany). For a long time, these PFSs have been major contributors to the
accuracy of International Atomic Time (TAI), with stated accuracies of around
$3\times 10^{-16}$. They have also been the references for a number of absolute
measurements of clock transition frequencies in various optical frequency
standards in view of a future redefinition of the second. The phase coherent
optical frequency transfer via a stabilized telecom fiber link enables far
better resolution than any other means of frequency transfer based on satellite
links. The agreement for each pair of distant fountains compared is well within
the combined uncertainty of a few 10$^{-16}$ for all the comparisons, which
fully supports the stated PFSs' uncertainties. The comparison also includes a
rubidium fountain frequency standard participating in the steering of TAI and
enables a new absolute determination of the $^{87}$Rb ground state hyperfine
transition frequency with an uncertainty of $3.1\times 10^{-16}$.
This paper is dedicated to the memory of André Clairon, who passed away
on the 24$^{th}$ of December 2015, for his pioneering and long-lasting efforts
in atomic fountains. He also pioneered optical links from as early as 1997.
| 0 | 1 | 0 | 0 | 0 | 0 |
Hardy inequalities, Rellich inequalities and local Dirichlet forms | First the Hardy and Rellich inequalities are defined for the submarkovian
operator associated with a local Dirichlet form. Secondly, two general
conditions are derived which are sufficient to deduce the Rellich inequality
from the Hardy inequality. In addition the Rellich constant is calculated from
the Hardy constant. Thirdly, we establish that the criteria for the Rellich
inequality are verified for a large class of weighted second-order operators on
a domain $\Omega\subseteq \mathbb{R}^d$. The weighting near the boundary $\partial
\Omega$ can be different from the weighting at infinity. Finally these results
are applied to weighted second-order operators on $\mathbb{R}^d\backslash\{0\}$ and to
a general class of operators of Grushin type.
| 0 | 0 | 1 | 0 | 0 | 0 |
Optimized Quantification of Spin Relaxation Times in the Hybrid State | Purpose: The analysis of optimized spin ensemble trajectories for relaxometry
in the hybrid state.
Methods: First, we constructed visual representations to elucidate the
differential equation that governs spin dynamics in hybrid state. Subsequently,
numerical optimizations were performed to find spin ensemble trajectories that
minimize the Cramér-Rao bound for $T_1$-encoding, $T_2$-encoding, and their
weighted sum, respectively, followed by a comparison of the Cramér-Rao bounds
obtained with our optimized spin-trajectories, as well as Look-Locker and
multi-spin-echo methods. Finally, we experimentally tested our optimized spin
trajectories with in vivo scans of the human brain.
Results: After a nonrecurring inversion segment on the southern hemisphere of
the Bloch sphere, all optimized spin trajectories pursue repetitive loops on
the northern half of the sphere in which the beginning of the first and the end
of the last loop deviate from the others. The numerical results obtained in
this work align well with intuitive insights gleaned directly from the
governing equation. Our results suggest that hybrid-state sequences outperform
traditional methods. Moreover, hybrid-state sequences that balance $T_1$- and
$T_2$-encoding still result in near optimal signal-to-noise efficiency. Thus,
the second parameter can be encoded at virtually no extra cost.
Conclusion: We provide insights regarding the optimal encoding processes of
spin relaxation times in order to guide the design of robust and efficient
pulse sequences. We find that joint acquisitions of $T_1$ and $T_2$ in the
hybrid state are substantially more efficient than sequential encoding
techniques.
| 0 | 1 | 0 | 0 | 0 | 0 |
On the generation of drift flows in wall-bounded flows transiting to turbulence | Despite recent progress, laminar-turbulent coexistence in transitional planar
wall-bounded shear flows is still not well understood. Contrasting with the
processes by which chaotic flow inside turbulent patches is sustained at the
local (minimal flow unit) scale, the mechanisms controlling the obliqueness of
laminar-turbulent interfaces typically observed all along the coexistence range
are still mysterious. An extension of Waleffe's approach [Phys. Fluids 9 (1997)
883--900] is used to show that, already at the local scale, drift flows
breaking the problem's spanwise symmetry are generated just by slightly
detuning the modes involved in the self-sustainment process. This opens
perspectives for theorizing the formation of laminar-turbulent patterns.
| 0 | 1 | 0 | 0 | 0 | 0 |
Goldbach's Function Approximation Using Deep Learning | Goldbach conjecture is one of the most famous open mathematical problems. It
states that every even number greater than two can be represented as a sum of
two prime numbers. In this work we present a deep learning based model that
predicts the number of Goldbach partitions for a given even number.
Surprisingly, our model outperforms all state-of-the-art analytically derived
estimations for the number of such pairs, while not requiring prime factorization
of the given number. We believe that building a model that can accurately
predict the number of such pairs brings us one step closer to solving one of the
world's most famous open problems. To the best of our knowledge, this is the
first attempt to consider machine learning based data-driven methods to
approximate open mathematical problems in the field of number theory, and we
hope that this work will encourage further attempts.
| 0 | 0 | 0 | 1 | 0 | 0 |
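The target quantity of the model — the Goldbach partition count G(n), i.e. the
number of unordered prime pairs summing to an even n — can be computed exactly
for small n with a sieve, which is one plausible way training labels could be
generated:

```python
def goldbach_partitions(n: int) -> int:
    """Count unordered prime pairs (p, q), p <= q, with p + q = n (n even)."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

print([goldbach_partitions(n) for n in (4, 10, 100)])  # [1, 2, 6]
```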
Estimation of a Continuous Distribution on a Real Line by Discretization Methods -- Complete Version-- | For an unknown continuous distribution on a real line, we consider the
approximate estimation by discretization. There are two methods for the
discretization. The first method divides the real line into several intervals
before taking samples ("fixed interval method"). The second method divides the
real line using the estimated percentiles after taking samples ("moving
interval method"). Either way, the problem reduces to the estimation of a
multinomial distribution. We use the (symmetrized) $f$-divergence in order to
measure the discrepancy of the true distribution and the estimated one. Our
main result is the asymptotic expansion of the risk (i.e. expected divergence)
up to the second-order term in the sample size. We prove theoretically that the
moving interval method is asymptotically superior to the fixed interval method.
We also observe how the presupposed intervals (fixed interval method) or
percentiles (moving interval method) affect the asymptotic risk.
| 0 | 0 | 1 | 1 | 0 | 0 |
Reviving and Improving Recurrent Back-Propagation | In this paper, we revisit the recurrent back-propagation (RBP) algorithm,
discuss the conditions under which it applies as well as how to satisfy them in
deep neural networks. We show that RBP can be unstable and propose two variants
based on conjugate gradient on the normal equations (CG-RBP) and Neumann series
(Neumann-RBP). We further investigate the relationship between Neumann-RBP and
back propagation through time (BPTT) and its truncated version (TBPTT). Our
Neumann-RBP has the same time complexity as TBPTT but only requires constant
memory, whereas TBPTT's memory cost scales linearly with the number of
truncation steps. We examine all RBP variants along with BPTT and TBPTT in
three different application domains: associative memory with continuous
Hopfield networks, document classification in citation networks using graph
neural networks and hyperparameter optimization for fully connected networks.
All experiments demonstrate that RBPs, especially the Neumann-RBP variant, are
efficient and effective for optimizing convergent recurrent neural networks.
| 0 | 0 | 0 | 1 | 0 | 0 |
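A sketch of the Neumann-series idea behind Neumann-RBP: at a fixed point
h* = F(h*, w), the implicit gradient involves (I - J)^{-1} with J the Jacobian
of F at h*, which can be approximated by a truncated series using only
vector-Jacobian products and constant memory. The toy linear map below stands
in for a converged recurrent network:

```python
import numpy as np

def neumann_rbp(vjp, grad_out, n_terms=20):
    """Approximate (I - J^T)^{-1} grad_out with a truncated Neumann series.

    vjp(v) must return J^T v, e.g. via automatic differentiation at the
    fixed point; memory use is constant in n_terms.
    """
    term, total = grad_out.copy(), grad_out.copy()
    for _ in range(n_terms):
        term = vjp(term)          # apply J^T one more time
        total += term
    return total

# Toy fixed-point map F(h) = A h + b with spectral radius below 1.
A = np.array([[0.5, 0.2], [0.1, 0.4]])
g = np.array([1.0, -1.0])
approx = neumann_rbp(lambda v: A.T @ v, g, n_terms=50)
exact = np.linalg.solve(np.eye(2) - A.T, g)
print(np.allclose(approx, exact, atol=1e-8))  # True
```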
A family of Dirichlet-Morrey spaces | To each weighted Dirichlet space $\mathcal{D}_p$, $0<p<1$, we associate a
family of Morrey-type spaces ${\mathcal{D}}_p^{\lambda}$, $0< \lambda < 1$,
constructed by imposing growth conditions on the norm of hyperbolic translates
of functions. We indicate some of the properties of these spaces, mention the
characterization in terms of boundary values, and study integration and
multiplication operators on them.
| 0 | 0 | 1 | 0 | 0 | 0 |
Merging real and virtual worlds: An analysis of the state of the art and practical evaluation of Microsoft Hololens | Achieving a symbiotic blending between reality and virtuality is a dream that
has been lying in the minds of many people for a long time. Advances in various
domains constantly bring us closer to making that dream come true. Augmented
reality as well as virtual reality are in fact trending terms and are expected
to further progress in the years to come.
This master's thesis aims to explore these areas and starts by defining
necessary terms such as augmented reality (AR) or virtual reality (VR). Usual
taxonomies to classify and compare the corresponding experiences are then
discussed.
In order to enable those applications, many technical challenges need to be
tackled, such as accurate motion tracking with 6 degrees of freedom (positional
and rotational), that is necessary for compelling experiences and to prevent
user sickness. Additionally, augmented reality experiences typically rely on
image processing to position the superimposed content. To do so, "paper"
markers or features extracted from the environment are often employed. Both
sets of techniques are explored and common solutions and algorithms are
presented.
After investigating those technical aspects, I carry out an objective
comparison of the existing state-of-the-art and state-of-the-practice in those
domains, and I discuss present and potential applications in these areas. As a
practical validation, I present the results of an application that I have
developed using Microsoft HoloLens, one of the more advanced affordable
technologies for augmented reality that is available today. Based on the
experience and lessons learned during this development, I discuss the
limitations of current technologies and present some avenues of future
research.
| 1 | 0 | 0 | 0 | 0 | 0 |
Fast Characterization of Segmental Duplications in Genome Assemblies | Segmental duplications (SDs), or low-copy repeats (LCR), are segments of DNA
greater than 1 Kbp with high sequence identity that are copied to other regions
of the genome. SDs are among the most important sources of evolution, a common
cause of genomic structural variation, and several are associated with diseases
of genomic origin. Despite their functional importance, SDs present one of the
major hurdles for de novo genome assembly due to the ambiguity they cause in
building and traversing both state-of-the-art overlap-layout-consensus and de
Bruijn graphs. This causes SD regions to be misassembled, collapsed into a
unique representation, or completely missing from assembled reference genomes
for various organisms. In turn, this missing or incorrect information limits
our ability to fully understand the evolution and the architecture of the
genomes. Despite the essential need to accurately characterize SDs in
assemblies, there is only one tool that has been developed for this purpose,
called Whole Genome Assembly Comparison (WGAC). WGAC comprises several
steps that employ different tools and custom scripts, which makes it difficult
and time-consuming to use. Thus there is still a need for algorithms to
characterize within-assembly SDs quickly, accurately, and in a user-friendly
manner.
Here we introduce a SEgmental Duplication Evaluation Framework (SEDEF) to
rapidly detect SDs through sophisticated filtering strategies based on Jaccard
similarity and local chaining. We show that SEDEF accurately detects SDs while
maintaining a substantial speed-up over WGAC that translates into practical run
times of minutes instead of weeks. Notably, our algorithm captures up to 25%
pairwise error between segments, where previous studies focused on only 10%,
allowing us to more deeply track the evolutionary history of the genome.
SEDEF is available at this https URL
| 0 | 0 | 0 | 0 | 1 | 0 |
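A toy sketch of the Jaccard-based filtering ingredient: estimate the k-mer
Jaccard similarity of two sequences from bottom-k MinHash sketches. The k-mer
length, sketch size, and hash function are illustrative; SEDEF's actual
filtering and chaining are considerably more involved:

```python
import hashlib
import random

def kmer_minhash(seq, k=11, sketch_size=64):
    """Smallest `sketch_size` hash values over all k-mers of seq."""
    hashes = {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(),
                                             digest_size=8).digest(), "big")
              for i in range(len(seq) - k + 1)}
    return sorted(hashes)[:sketch_size]

def jaccard_estimate(sk1, sk2, sketch_size=64):
    """Bottom-k estimator of Jaccard similarity from two MinHash sketches."""
    merged = sorted(set(sk1) | set(sk2))[:sketch_size]
    return len(set(merged) & set(sk1) & set(sk2)) / len(merged)

random.seed(0)
a = "".join(random.choice("ACGT") for _ in range(1000))
b = a[:600] + "".join(random.choice("ACGT") for _ in range(400))  # shared prefix
print(round(jaccard_estimate(kmer_minhash(a), kmer_minhash(b)), 2))
```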
Cross validation for locally stationary processes | We propose an adaptive bandwidth selector via cross validation for local
M-estimators in locally stationary processes. We prove asymptotic optimality of
the procedure under mild conditions on the underlying parameter curves. The
results are applicable to a wide range of locally stationary processes, such
as linear and nonlinear processes. A simulation study shows that the method works
fairly well also in misspecified situations.
| 0 | 0 | 1 | 1 | 0 | 0 |
Weyl nodes in Andreev spectra of multiterminal Josephson junctions: Chern numbers, conductances and supercurrents | We consider mesoscopic four-terminal Josephson junctions and study emergent
topological properties of the Andreev subgap bands. We use symmetry-constrained
analysis for Wigner-Dyson classes of scattering matrices to derive band
dispersions. When scattering matrix of the normal region connecting
superconducting leads is energy-independent, the determinant formula for
Andreev spectrum can be reduced to a palindromic equation that admits a
complete analytical solution. Band topology manifests with an appearance of the
Weyl nodes which serve as monopoles of finite Berry curvature. The
corresponding fluxes are quantified by Chern numbers that translate into a
quantized nonlocal conductance that we compute explicitly for the
time-reversal-symmetric scattering matrix. The topological regime can be also
identified by supercurrents as Josephson current-phase relationships exhibit
pronounced nonanalytic behavior and discontinuities near Weyl points that can
be controllably accessed in experiments.
| 0 | 1 | 0 | 0 | 0 | 0 |
Injectivity of the connecting homomorphisms | Let $A$ be the inductive limit of a sequence $$A_1\, \xrightarrow{\phi_{1,2}}
\,A_2\,\xrightarrow{\phi_{2,3}} \,A_3\rightarrow\cdots$$ with
$A_n=\oplus_{i=1}^{n_i}A_{[n,i]}$, where all the $A_{[n,i]}$ are
Elliott-Thomsen algebras and the $\phi_{n,n+1}$ are homomorphisms. In this
paper, we prove that $A$ can be written as another inductive limit
$$B_1\,\xrightarrow{\psi_{1,2}} \,B_2\,\xrightarrow{\psi_{2,3}}
\,B_3\rightarrow\cdots$$ with $B_n=\oplus_{i=1}^{n_i}B_{[n,i]}$, where all the
$B_{[n,i]}$ are Elliott-Thomsen building blocks, with the extra condition
that all the $\psi_{n,n+1}$ are injective.
| 0 | 0 | 1 | 0 | 0 | 0 |
Selective Inference for Change Point Detection in Multi-dimensional Sequences | We study the problem of detecting change points (CPs) that are characterized
by a subset of dimensions in a multi-dimensional sequence. A method for
detecting those CPs can be formulated as a two-stage method: one for selecting
relevant dimensions, and another for selecting CPs. It has been difficult to
properly control the false detection probability of these CP detection methods
because selection bias in each stage must be properly corrected. Our main
contribution in this paper is to formulate a CP detection problem as a
selective inference problem, and show that exact (non-asymptotic) inference is
possible for a class of CP detection methods. We demonstrate the performances
of the proposed selective inference framework through numerical simulations and
its application to our motivating medical data analysis problem.
| 0 | 0 | 0 | 1 | 0 | 0 |
RPC: A Large-Scale Retail Product Checkout Dataset | Over recent years, emerging interest has occurred in integrating computer
vision technology into the retail industry. Automatic checkout (ACO) is one of
the critical problems in this area which aims to automatically generate the
shopping list from the images of the products to purchase. The main challenge
of this problem comes from the large scale and the fine-grained nature of the
product categories, as well as the difficulty of collecting training images
that reflect realistic checkout scenarios, given the continuous update of the
products. Despite its significant practical and research value, this problem is
not extensively studied in the computer vision community, largely due to the
lack of a high-quality dataset. To fill this gap, in this work we propose a new
dataset to facilitate relevant research. Our dataset enjoys the following
characteristics: (1) It is by far the largest dataset in terms of both product
image quantity and product categories. (2) It includes single-product images
taken in a controlled environment and multi-product images taken by the
checkout system. (3) It provides different levels of annotations for the
checkout images. Compared with existing datasets, ours is closer to the
realistic setting and can derive a variety of research problems. Besides the
dataset, we also benchmark the performance on this dataset with various
approaches. The dataset and related resources can be found at
\url{this https URL}.
| 1 | 0 | 0 | 0 | 0 | 0 |
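A note on the entry above: ACO systems are commonly scored by whether the predicted shopping list (category-to-count mapping) matches the ground truth exactly. A minimal sketch of such a checkout-level accuracy metric follows; the function name and category labels are hypothetical and this is not the dataset's official evaluation toolkit.

```python
from collections import Counter

def checkout_accuracy(predictions, ground_truths):
    # Fraction of checkout images whose predicted shopping list
    # (category -> count) matches the ground truth exactly.
    hits = sum(Counter(p) == Counter(g)
               for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Toy example with two checkout images (hypothetical category names).
pred = [{"cola_330ml": 2, "chips_bbq": 1}, {"soap_bar": 1}]
true = [{"cola_330ml": 2, "chips_bbq": 1}, {"soap_bar": 2}]
print(checkout_accuracy(pred, true))  # 0.5
```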
Anyonic self-induced disorder in a stabilizer code: quasi-many body localization in a translational invariant model | We investigate quasi-many-body localization in topologically ordered
states of matter, focusing on the Kitaev toric code on a ladder geometry,
where different types of anyonic defects carry different masses induced by
environmental errors. Our study verifies that a random arrangement of anyons
generates a complex energy landscape solely through braiding statistics, which
suffices to suppress the diffusion of defects in such a multi-component
anyonic liquid. This non-ergodic dynamics suggests a promising scenario for
the investigation of quasi-many-body localization. Computing standard
diagnostics shows that, in such a disorder-free many-body system, a typical
initial inhomogeneity of anyons gives rise to glassy dynamics with an
exponentially diverging time scale for full relaxation. A by-product of this
dynamical effect is the slow growth of entanglement entropy, with
characteristic time scales bearing resemblance to those of the inhomogeneity
relaxation. This setting provides a new platform that paves the way toward
impeding logical errors through the self-localization of anyons in a generic,
high-energy state, originating from their exotic statistics.
| 0 | 1 | 0 | 0 | 0 | 0 |
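A note on the entry above: the entanglement entropy diagnostic mentioned in the abstract is, for a pure state, the von Neumann entropy of the reduced density matrix across a cut, computable from the Schmidt (singular) values of the reshaped state vector. The sketch below is that generic computation, not the paper's toric-code simulation; the bipartition and the Bell-state check are illustrative.

```python
import numpy as np

def half_chain_entropy(psi, dim_a):
    # Von Neumann entanglement entropy S = -sum p log p of subsystem A,
    # for a normalized pure state psi on H_A (dim_a) tensor H_B.
    m = psi.reshape(dim_a, -1)
    s = np.linalg.svd(m, compute_uv=False)  # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]                        # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# Sanity check: a Bell pair across the cut gives S = log 2.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(half_chain_entropy(bell, 2), np.log(2))
```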
KiDS-450: Tomographic Cross-Correlation of Galaxy Shear with {\it Planck} Lensing | We present the tomographic cross-correlation between galaxy lensing measured
in the Kilo Degree Survey (KiDS-450) with overlapping lensing measurements of
the cosmic microwave background (CMB), as detected by Planck 2015. We compare
our joint probe measurement to the theoretical expectation for a flat
$\Lambda$CDM cosmology, assuming the best-fitting cosmological parameters from
the KiDS-450 cosmic shear and Planck CMB analyses. We find that our results are
consistent within $1\sigma$ with the KiDS-450 cosmology, with an amplitude
re-scaling parameter $A_{\rm KiDS} = 0.86 \pm 0.19$. Adopting a Planck
cosmology, we find our results are consistent within $2\sigma$, with $A_{\it
Planck} = 0.68 \pm 0.15$. We show that the agreement is improved in both cases
when the contamination to the signal by intrinsic galaxy alignments is
accounted for, increasing $A$ by $\sim 0.1$. This is the first tomographic
analysis of the galaxy lensing -- CMB lensing cross-correlation signal, and is
based on five photometric redshift bins. We use this measurement as an
independent validation of the multiplicative shear calibration and of the
calibrated source redshift distribution at high redshifts. We find that
constraints on these two quantities are strongly correlated when obtained from
this technique, which should therefore not be considered a stand-alone
competitive calibration tool.
| 0 | 1 | 0 | 0 | 0 | 0 |
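A note on the entry above: an amplitude re-scaling parameter such as $A_{\rm KiDS}$ is commonly obtained as the generalized-least-squares amplitude of a theory template given the measured band powers and their covariance. The sketch below works under that assumption; the estimator choice and the toy numbers are ours, not necessarily the paper's pipeline.

```python
import numpy as np

def fit_amplitude(data, template, cov):
    # Best-fit amplitude A minimizing (d - A t)^T C^{-1} (d - A t),
    # with its 1-sigma error: A = (t^T C^-1 d) / (t^T C^-1 t).
    cinv = np.linalg.inv(cov)
    denom = template @ cinv @ template
    a_hat = (template @ cinv @ data) / denom
    sigma_a = 1.0 / np.sqrt(denom)
    return a_hat, sigma_a

# Toy example: noisy band powers scattered around 0.9 x template.
rng = np.random.default_rng(2)
t = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
cov = 0.01 * np.eye(5)
d = 0.9 * t + rng.multivariate_normal(np.zeros(5), cov)
print(fit_amplitude(d, t, cov))
```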
Earthquake Early Warning and Beyond: Systems Challenges in Smartphone-based Seismic Network | Earthquake Early Warning (EEW) systems can effectively reduce fatalities,
injuries, and damages caused by earthquakes. Current EEW systems are mostly
based on traditional seismic and geodetic networks, and exist only in a few
countries due to the high cost of installing and maintaining such systems. The
MyShake system takes a different approach and turns people's smartphones into
portable seismic sensors to detect earthquake-like motions. However, to issue
EEW messages with high accuracy and low latency in the real world, we need to
address a number of challenges related to mobile computing. In this paper, we
first summarize our experience building and deploying the MyShake system, then
focus on two key challenges for smartphone-based EEW (sensing heterogeneity
and user/system dynamics) and present some preliminary explorations. We also
discuss other challenges and new research directions associated with
smartphone-based seismic networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
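A note on the entry above: a classic screening step in seismic networks is the STA/LTA trigger, which flags windows where short-term signal energy jumps relative to the long-term background. The sketch below is that generic trigger on a toy accelerometer trace; it is not MyShake's actual detection algorithm, and the sampling rate, window lengths, and threshold are illustrative assumptions.

```python
import numpy as np

def sta_lta_trigger(accel, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    # STA/LTA trigger on an acceleration trace sampled at fs Hz: flag
    # samples where the short-term average of the squared signal exceeds
    # `threshold` times the long-term average.
    x = np.asarray(accel, dtype=float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    i = np.arange(nl - 1, len(x))              # need a full LTA window
    sta = (csum[i + 1] - csum[i + 1 - ns]) / ns
    lta = (csum[i + 1] - csum[i + 1 - nl]) / nl
    return i[sta > threshold * lta]

# Toy trace: background noise with a burst of strong shaking at 60 s.
fs = 25  # a plausible smartphone accelerometer rate, in Hz
rng = np.random.default_rng(3)
trace = 0.01 * rng.standard_normal(120 * fs)
trace[60 * fs:62 * fs] += 0.2 * rng.standard_normal(2 * fs)
hits = sta_lta_trigger(trace, fs)
print(hits[0] / fs if hits.size else "no trigger")  # ~60 s
```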
Gate-error analysis in simulations of quantum computers with transmon qubits | In the model of gate-based quantum computation, the qubits are controlled by
a sequence of quantum gates. In superconducting qubit systems, these gates can
be implemented by voltage pulses. The success of implementing a particular gate
can be expressed by various metrics such as the average gate fidelity, the
diamond distance, and the unitarity. We analyze these metrics for gate pulses
applied to a system of two superconducting transmon qubits coupled by a
resonator, inspired by the architecture of the IBM Quantum Experience. The metrics
are obtained by numerical solution of the time-dependent Schrödinger equation
of the transmon system. We find that the metrics reflect systematic errors that
are most pronounced for echoed cross-resonance gates, but that none of the
studied metrics can reliably predict the performance of a gate when used
repeatedly in a quantum algorithm.
| 0 | 1 | 0 | 0 | 0 | 0 |
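A note on the entry above: when the implemented gate is itself unitary, the average gate fidelity mentioned in the abstract has the closed form $F_{\rm avg} = \big(d + |\mathrm{Tr}(U_{\rm ideal}^\dagger U_{\rm sim})|^2\big)/\big(d(d+1)\big)$ on a $d$-dimensional space. A sketch follows, with a hypothetical residual-phase error standing in for pulse imperfections; this is not the paper's simulation.

```python
import numpy as np

def average_gate_fidelity(u_sim, u_ideal):
    # Average gate fidelity between two unitaries on a d-dim space:
    # F_avg = (d + |Tr(U_ideal^dag U_sim)|^2) / (d (d + 1)).
    d = u_ideal.shape[0]
    m = u_ideal.conj().T @ u_sim
    return (d + abs(np.trace(m)) ** 2) / (d * (d + 1))

# Toy example: a CNOT with a small residual ZZ-type phase error.
cnot = np.eye(4, dtype=complex)
cnot[2:, 2:] = [[0, 1], [1, 0]]
eps = 0.02  # illustrative error magnitude
zz = np.diag(np.exp(1j * eps * np.array([1, -1, -1, 1])))
print(average_gate_fidelity(zz @ cnot, cnot))  # slightly below 1
```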