In earlier work we introduced an abelianized gauge field model in which a
Rarita-Schwinger field is directly coupled to a spin-$\frac{1}{2}$ field, and
showed that this model admits a perturbative expansion in the gauge field
coupling. As a preliminary to further study of the coupled model, in this paper
we present a detailed analysis of the free field structure that obtains when
the dimensionless gauge coupling is set to zero, but the dimension one coupling
of the spin-$\frac{3}{2}$ and spin-$\frac{1}{2}$ fields remains nonzero.
|
In this paper we give a survey of various multiscale methods for the
numerical solution of second order hyperbolic equations in highly heterogeneous
media. We concentrate on the wave equation and distinguish between two classes
of applications. First we discuss numerical methods for the wave equation in
heterogeneous media without scale separation. Such a setting is for instance
encountered in the geosciences, where natural structures often exhibit a
continuum of different scales that all need to be resolved numerically to get
meaningful approximations. Approaches tailored for these settings typically
involve the construction of generalized finite element spaces, where the basis
functions incorporate information about the data variations. In the second part
of the paper, we discuss numerical methods for the case of structured media
with scale separation. This setting is for instance encountered in engineering
sciences, where materials are often artificially designed. If this is the case,
the structure and the scale separation can be explicitly exploited to compute
appropriate homogenized/upscaled wave models that only exhibit a single coarse
scale and that can hence be solved at significantly reduced computational
costs.
|
Generalized parton distributions of the nucleon are accessed via exclusive
leptoproduction of a real photon. While earlier analytical considerations of
phenomenological observables were restricted to twist-three accuracy, i.e.,
taking into account only terms suppressed by a single power of the hard scale,
in the present study we revisit this differential cross section within the
helicity formalism and restore power-suppressed effects stemming from the
process kinematics exactly. We restrict ourselves to the phenomenologically
important case of lepton scattering off a longitudinally polarized nucleon,
where the photon flips its helicity at most by one unit.
|
We review the main concepts of the recently introduced principle of relative
locality and investigate some aspects of classical interactions between point
particles from this new perspective. We start with a physical motivation and
basic mathematical description of relative locality and review the treatment of
a system of classical point particles in this framework. We then examine one of
the unsolved problems of this picture, the apparent ambiguities in the
definition of momentum constraints caused by a non-commutative and/or
non-associative momentum addition rule. The gamma-ray burst experiment is used
as an illustration. Finally, we use the formalism of relative locality to
reinterpret the well-known multiple point particle system coupled to 2+1
Einstein gravity, analyzing the geometry of its phase space and once again
referring to the gamma-ray burst problem as an example.
|
We report a study of the homogeneous isotropic Boltzmann equation for an open
system. We seek nonequilibrium steady solutions in the presence of forcing
and dissipation. Using the language of weak turbulence theory, we analyze the
possibility of observing Kolmogorov-Zakharov steady distributions. We derive a
differential approximation model and find that the expected nonequilibrium
steady solutions always have the form of warm cascades. We propose an
analytical prediction for the relation between the forcing and dissipation and
the thermodynamic quantities of the system. Specifically, we find that the
temperature of the system is independent of the forcing amplitude and
determined only by the forcing and dissipation scales. Finally, we perform
direct numerical simulations of the Boltzmann equation, finding results
consistent with our theoretical predictions.
|
Pseudorandom quantum states (PRS) are efficiently constructible states that
are computationally indistinguishable from being Haar-random, and have recently
found cryptographic applications. We explore new definitions, new properties
and applications of pseudorandom states, and present the following
contributions:
1. New Definitions: We study variants of pseudorandom function-like state
(PRFS) generators, introduced by Ananth, Qian, and Yuen (CRYPTO'22), where the
pseudorandomness property holds even when the generator can be queried
adaptively or in superposition. We show feasibility of these variants assuming
the existence of post-quantum one-way functions.
2. Classical Communication: We show that PRS generators with logarithmic
output length imply commitment and encryption schemes with classical
communication. Previous constructions of such schemes from PRS generators
required quantum communication.
3. Simplified Proof: We give a simpler proof of the Brakerski--Shmueli
(TCC'19) result that polynomially-many copies of uniform superposition states
with random binary phases are indistinguishable from Haar-random states.
4. Necessity of Computational Assumptions: We also show that a secure PRS
with output length logarithmic, or larger, in the key length necessarily
requires computational assumptions.
|
We introduce a new method for detecting ultra-diffuse galaxies by searching
for over-densities in intergalactic globular cluster populations. Our approach
is based on an application of the log-Gaussian Cox process, which is a commonly
used model in the spatial statistics literature but rarely used in astronomy.
This method is applied to the globular cluster data obtained from the PIPER
survey, a \textit{Hubble Space Telescope} imaging program targeting the Perseus
cluster. We successfully detect all confirmed ultra-diffuse galaxies with known
globular cluster populations in the survey. We also identify a potential galaxy
that has no detected diffuse stellar content. Preliminary analysis shows that
it is unlikely to be merely an accidental clump of globular clusters or other
objects. If confirmed, this system would be the first of its kind. Simulations
are used to assess how the physical parameters of the globular cluster systems
within ultra-diffuse galaxies affect their detectability using our method. We
quantify the correlation of the detection probability with the total number of
globular clusters in the galaxy and the anti-correlation with increasing
half-number radius of the globular cluster system. The S\'{e}rsic index of the
globular cluster distribution has little impact on detectability.
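As an illustration of the statistical model involved (a minimal sketch, not the PIPER analysis pipeline; the grid size, covariance kernel, and all parameter values below are arbitrary choices), a log-Gaussian Cox process can be simulated by exponentiating a Gaussian random field to obtain a random intensity surface and then thinning a homogeneous Poisson process against it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random log-intensity: a Gaussian field G on a 32x32 grid over the unit
# square, with exponential covariance (variance 1, correlation scale 0.1).
n = 32
xs = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(xs, xs)
grid = np.column_stack([X.ravel(), Y.ravel()])
dist = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=-1)
cov = np.exp(-dist / 0.1)
G = rng.multivariate_normal(np.zeros(n * n), cov + 1e-9 * np.eye(n * n))
lam = np.exp(3.0 + G).reshape(n, n)          # intensity surface exp(mu + G)

# Sample the point pattern by thinning: draw a homogeneous Poisson process
# of rate lam.max() and keep each candidate with probability lam(s)/lam.max().
lam_max = lam.max()
n_cand = rng.poisson(lam_max)                # candidate count on unit window
cand = rng.uniform(0, 1, size=(n_cand, 2))
cell = np.clip((cand * n).astype(int), 0, n - 1)
lam_at = lam[cell[:, 1], cell[:, 0]]
points = cand[rng.uniform(0, 1, n_cand) < lam_at / lam_max]
```

Detecting an over-density then amounts to inference on the latent field $G$ given an observed point pattern of globular clusters.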
|
A classical theorem of Coifman, Rochberg, and Weiss on commutators of
singular integrals is extended to the case of generalized $L^p$ spaces with
variable exponent.
|
Recently, it was found that there is a remarkable intuitive similarity
between studies in theoretical computer science dealing with large data sets on
the one hand, and categorical methods of topology and geometry in pure
mathematics, on the other. In this article, we treat the key notion of
persistency from computer science in the algebraic geometric context involving
Nori motivic constructions and related methods. We also discuss model
structures for persistent topology.
|
Neural network quantization enables the deployment of large models on
resource-constrained devices. Current post-training quantization methods fall
short in terms of accuracy for INT4 (or lower) but provide reasonable accuracy
for INT8 (or above). In this work, we study the effect of quantization on the
structure of the loss landscape. We show that the structure is flat and
separable for mild quantization, enabling straightforward post-training
quantization methods to achieve good results, whereas with
more aggressive quantization, the loss landscape becomes highly non-separable
with steep curvature, making the selection of quantization parameters more
challenging. Armed with this understanding, we design a method that quantizes
the layer parameters jointly, enabling significant accuracy improvement over
current post-training quantization methods. A reference implementation is
available at
https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq
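As a minimal illustration of the quantities involved (a sketch of per-layer symmetric uniform quantization only, not the paper's joint optimization; the weight distribution and scale grid are arbitrary), the "loss landscape" can be pictured as the quantization error as a function of the quantizer's scale parameter:

```python
import numpy as np

def quantize(w, scale, n_bits=4):
    # symmetric uniform quantizer: round onto the integer grid, then rescale
    qmax = 2 ** (n_bits - 1) - 1
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=4096)                       # stand-in for layer weights

# Sweep the scale parameter and record the mean squared quantization error;
# at low bit-widths this landscape is what the paper analyzes, jointly over
# the scale parameters of all layers rather than one at a time.
scales = np.linspace(0.01, 1.0, 200)
errors = np.array([np.mean((quantize(w, s) - w) ** 2) for s in scales])
best_scale = scales[int(np.argmin(errors))]
```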
|
We consider a binary repulsive Bose-Einstein condensate in a harmonic trap in
one spatial dimension and investigate particular solutions consisting of two
dark-bright (DB) solitons. There are two different stationary solutions
characterized by the phase difference in the bright component, in-phase and
out-of-phase states. We show that above a critical particle number in the
bright component, a symmetry breaking bifurcation of the pitchfork type occurs
that leads to a new asymmetric solution, while the parent branch, i.e., the
out-of-phase state, becomes unstable. These three different states support
different small amplitude oscillations, characterized by an almost stationary
density of the dark component and a tunneling of the bright component between
the two dark solitons. Within a suitable effective double-well picture, these
can be understood as the characteristic features of a Bosonic Josephson
Junction (BJJ), and we show within a two-mode approach that all characteristic
features of the BJJ phase space are recovered. For larger deviations from the
stationary states, the simplifying double-well description breaks down due to
the feedback of the bright component onto the dark one, causing the solitons to
move. In this regime we observe intricate anharmonic and aperiodic dynamics,
exhibiting remnants of the BJJ phase space.
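For context, the two-mode (Bosonic Josephson Junction) phase space invoked above is governed, in its textbook form, by the equations of motion for the population imbalance $z$ and relative phase $\phi$ of the bright component between the two wells (rescaled time; the effective parameters would have to be derived from the trapped two-component system and are not spelled out here):

$$\dot{z} = -\sqrt{1-z^{2}}\,\sin\phi, \qquad \dot{\phi} = \Lambda\, z + \frac{z}{\sqrt{1-z^{2}}}\,\cos\phi,$$

where $\Lambda$ is the ratio of interaction to tunneling energy; plasma oscillations and macroscopic self-trapping are the characteristic BJJ features of this phase space.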
|
Recent experiments by Larson et al. demonstrate the feasibility of measuring
local $dd$ excitations using nonresonant inelastic X-ray scattering (IXS). We
establish a general framework for the interpretation where the $dd$ transitions
created in the scattering process are expressed in terms of effective one-particle
operators that follow a simple selection rule. The different operators can be
selectively probed by employing their different dependence on the direction and
magnitude of the transferred momentum. We use the operators to explain the
presence of nodal directions and the nonresonant IXS in specific directions and
planes. We demonstrate how nonresonant IXS can be used to extract valuable
ground state information for orbiton excitations in manganites.
|
This is the abstract prepared for the Workshop on Topology and Geometry
(Zhangjiang, China, October 1994), and is a review of my recent work. What
kinds of combinations of singularities can appear in small deformation fibers
of a fixed singularity? We consider this problem for hypersurface
singularities on complex analytic spaces of dimension 2. For all singularities
in the beginning part of Arnold's classification list, the answer to this
problem is given by a unique principle described by Dynkin graphs. This
article contains several figures; a hard copy containing the figures is
available by mail upon request.
|
This paper gathers, from the literature and private communication, 72 new
Galactic Population I Wolf-Rayet stars and 17 candidate WCLd stars, recognized
and/or discovered after the publication of The VIIth Catalogue of Galactic
Wolf-Rayet Stars. This brings the total number of known Galactic Wolf-Rayet
stars to 298, of which 24 (8%) are in the open cluster Westerlund 1, and 60 (20%)
are in open clusters near the Galactic Center.
|
Despite the growing interest in expressive speech synthesis, the synthesis of
nonverbal expressions is an under-explored area. In this paper we propose an
audio laughter synthesis system based on a sequence-to-sequence TTS synthesis
system. We leverage transfer learning by training a deep learning model to
learn to generate both speech and laughs from annotations. We evaluate our
model with a listening test, comparing its performance to that of an HMM-based
laughter synthesis system, and find that it achieves higher perceived
naturalness. Our solution is a first step towards a TTS system able to
synthesize speech with control over the amusement level through laughter
integration.
|
Hard X-ray imaging of the Galactic plane by the INTEGRAL satellite is
uncovering large numbers of 20-100 keV "IGR" sources. We present results from
Chandra, INTEGRAL, optical, and IR observations of 4 IGR sources: 3 sources in
the Norma region of the Galaxy (IGR J16195-4945, IGR J16207-5129, and IGR
J16167-4957) and one that is closer to the Galactic center (IGR J17195-4100).
In all 4 cases, one relatively bright Chandra source is seen in the INTEGRAL
error circle, and these are likely to be the soft X-ray counterparts of the IGR
sources. They have hard 0.3-10 keV spectra with power-law photon indices of 0.5
to 1.1. While many previously studied IGR sources show high column densities,
only IGR J16195-4945 has a column density that could be as high as 10^23 cm^-2.
Using optical and IR sky survey catalogs and our own photometry, we have
obtained identifications for all 4 sources. The J-band magnitudes are in the
range 14.9-10.4, and we have used the optical/IR spectral energy distributions
(SEDs) to constrain the nature of the sources. Blackbody components with
temperature lower limits of >9400 K for IGR J16195-4945 and >18,000 K for IGR
J16207-5129 indicate that these are very likely High-Mass X-ray Binaries
(HMXBs). However, for IGR J16167-4957 and IGR J17195-4100, low extinction and
the SEDs indicate later spectral types for the putative companions, indicating
that these are not HMXBs.
|
The dual dynamics of Einstein gravity on AdS$_3$ supplemented with boundary
conditions of KdV-type is identified. It corresponds to a two-dimensional field
theory at the boundary, described by a novel action principle whose field
equations are given by two copies of the "potential modified KdV equation". The
asymptotic symmetries then transmute into the global Noether symmetries of the
dual action, giving rise to an infinite set of commuting conserved charges,
implying the integrability of the system. Notably, the theory at the boundary
is non-relativistic and possesses anisotropic scaling of Lifshitz type.
|
By exploiting multipath fading channels as a source of common randomness,
physical layer (PHY) based key generation protocols allow two terminals with
correlated observations to generate secret keys with information-theoretical
security. The state of the art, however, still suffers from major limitations,
e.g., low key generation rates, low entropy of the key bits, and a high reliance on
node mobility. In this paper, a novel cooperative key generation protocol is
developed to facilitate high-rate key generation in narrowband fading channels,
where two keying nodes extract the phase randomness of the fading channel with
the aid of relay node(s). For the first time, we explicitly consider the effect
of estimation methods on the extraction of secret key bits from the underlying
fading channels and focus on a popular statistical method--maximum likelihood
estimation (MLE). The performance of the cooperative key generation scheme is
extensively evaluated theoretically. We successfully establish both a
theoretical upper bound on the maximum secret key rate from mutual information
of correlated random sources and a more practical upper bound from Cramer-Rao
bound (CRB) in estimation theory. Numerical examples and simulation studies are
also presented to demonstrate the performance of the cooperative key generation
system. The results show that the key rate can be improved by a couple of
orders of magnitude compared to the existing approaches.
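As a minimal illustration of phase-based key extraction (a sketch only: a single-hop channel with a uniform 2-bit phase quantizer and arbitrary noise levels, not the paper's cooperative relay protocol or its MLE analysis), two nodes observing reciprocal estimates of the same channel phase can quantize them into matching key symbols:

```python
import numpy as np

rng = np.random.default_rng(4)

def quantize_phase(phase, n_bits=2):
    # uniform quantizer mapping [0, 2*pi) onto 2**n_bits key symbols
    levels = 2 ** n_bits
    return np.floor(phase / (2 * np.pi) * levels).astype(int) % levels

# Both nodes estimate the same (reciprocal) channel phase, each with its own
# independent estimation error; a small error flips a symbol only when the
# phase lands near a quantization-bin boundary.
true_phase = rng.uniform(0, 2 * np.pi, size=1000)
est_a = (true_phase + rng.normal(0, 0.05, size=1000)) % (2 * np.pi)
est_b = (true_phase + rng.normal(0, 0.05, size=1000)) % (2 * np.pi)
key_a = quantize_phase(est_a)
key_b = quantize_phase(est_b)
agreement = float(np.mean(key_a == key_b))   # fraction of matching symbols
```

Disagreements concentrated near bin edges are what information-reconciliation stages of such protocols are designed to repair.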
|
Using data acquired with the CLEO detector at the CESR e+e- collider at
sqrt{s} = 3.773 GeV, we measure the cross section for the radiative return
process e+e- --> gamma J/psi, J/psi --> mu+mu-, resulting in B(J/psi -->
mu+mu-) x Gamma_ee(J/psi) = 0.3384 +- 0.0058 +- 0.0071 keV, Gamma_ee(J/psi) =
5.68 +- 0.11 +- 0.13 keV, and Gamma_tot(J/psi) = 95.5 +- 2.4 +- 2.4 keV, in
which the errors are statistical and systematic, respectively. We also
determine the ratio Gamma_ee[psi(2S)] / Gamma_ee(J/psi) = 0.45 +- 0.01 +- 0.02.
|
Gamma-ray absorption due to gamma-gamma-pair creation on cosmological scales
depends on the line-of-sight integral of the evolving density of low-energy
photons in the Universe, i.e. on the history of the diffuse, isotropic
radiation field. Here we present and discuss a semi-empirical model for this
metagalactic radiation field based on stellar light produced and reprocessed in
evolving galaxies. With a minimum of parameters and assumptions, the
present-day background intensity is obtained from the far-IR to the ultraviolet
band. Predicted model intensities are independent of cosmological parameters,
since we require that the comoving emissivity, as a function of redshift,
agrees with observed values obtained from deep galaxy surveys. The present-day
far-infrared background predicted from optical galaxy surveys falls short of
the observed one, and we show that this deficit can be removed by taking into
account (ultra)luminous infrared galaxies (ULIGs/LIGs) with a separate star
formation rate. The accuracy and reliability of the model, out to
redshifts of 5, allow a realistic estimate of the attenuation length of
GeV-to-TeV gamma-rays and its uncertainty, which is the focus of a subsequent
paper.
|
Short lived resonances are sensitive to the medium properties in heavy-ion
collisions. Heavy hadrons have a larger probability of being produced within
the quark-gluon plasma phase due to their short formation times. Therefore heavy
mass resonances are more likely to be affected by the medium, and the
identification of early produced resonances from jet fragmentation might be a
viable option to study chirality. The high momentum resonances on the away-side
of a triggered di-jet are likely to be the most modified by the partonic or
early hadronic medium. We will discuss first results of triggered
hadron-resonance correlations in Cu+Cu heavy ion collisions.
|
We argue that the 4-state Potts antiferromagnet has a finite-temperature
phase transition on any Eulerian plane triangulation in which one sublattice
consists of vertices of degree 4. We furthermore predict the universality class
of this transition. We then present transfer-matrix and Monte Carlo data
confirming these predictions for the cases of the union-jack and bisected
hexagonal lattices.
|
The jump process introduced by J. S. Bell in 1986, for defining a quantum
field theory without observers, presupposes that space is discrete whereas time
is continuous. In this letter, our interest is to find an analogous process in
discrete time. We argue that a genuine analog does not exist, but provide
examples of processes in discrete time that could be used as a replacement.
|
We introduce snowballs, which are compact sets in $\R^3$ homeomorphic to the
unit ball. They are 3-dimensional analogs of domains in the plane bounded by
snowflake curves. For each snowball $B$ a quasiconformal map $f\colon \R^3\to
\R^3$ is constructed that maps $B$ to the unit ball.
|
This work investigates continual learning of two segmentation tasks in brain
MRI with neural networks. To explore in this context the capabilities of
current methods for countering catastrophic forgetting of the first task when a
new one is learned, we investigate elastic weight consolidation, a recently
proposed method based on Fisher information, originally evaluated on
reinforcement learning of Atari games. We use it to sequentially learn
segmentation of normal brain structures and then segmentation of white matter
lesions. Our findings show that this method reduces catastrophic forgetting,
though substantial room for improvement remains in these challenging settings
for continual learning.
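The method evaluated above penalizes changes to parameters deemed important for the earlier task; a minimal sketch of the elastic weight consolidation penalty (illustrative numbers only; the diagonal Fisher information is assumed to have been estimated on the first task):

```python
import numpy as np

# Elastic weight consolidation (EWC): after learning task A, parameter
# changes are penalized in proportion to the diagonal Fisher information,
# anchoring parameters important for task A while leaving unimportant ones
# free to adapt to task B.
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])   # parameters after task A
fisher = np.array([10.0, 0.1, 5.0])       # per-parameter importance

# Moving an important parameter is costly; moving an unimportant one is cheap.
cost_important = ewc_penalty(theta_star + np.array([1.0, 0.0, 0.0]),
                             theta_star, fisher)
cost_unimportant = ewc_penalty(theta_star + np.array([0.0, 1.0, 0.0]),
                               theta_star, fisher)
```

In training, this penalty is simply added to the loss of the new task, weighted by `lam`.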
|
Session types, types for structuring communication between endpoints in
distributed systems, are recently being integrated into mainstream programming
languages. In practice, a very important notion for dealing with such types is
that of subtyping, since it allows for typing larger classes of systems, where
a program does not have precisely the expected behaviour but a similar one.
Unfortunately, recent work has shown that subtyping for session types in an
asynchronous setting is undecidable. To cope with this negative result, the
only approaches we are aware of either restrict the syntax of session types or
limit communication (by considering forms of bounded asynchrony). Both
approaches are too restrictive in practice, hence we proceed differently by
presenting an algorithm for checking subtyping which is sound, but not complete
(in some cases it terminates without returning a decisive verdict). The
algorithm is based on a tree representation of the coinductive definition of
asynchronous subtyping; this tree could be infinite, and the algorithm checks
for the presence of finite witnesses of infinite successful subtrees.
Furthermore, we provide a tool that implements our algorithm. We use this tool
to test our algorithm on many examples that cannot be managed with the previous
approaches, and to provide an empirical evaluation of the time and space cost
of the algorithm.
|
Motivated by Tukey classification problems and building on work in
\cite{Dobrinen/Todorcevic11}, we develop a new hierarchy of topological Ramsey
spaces $\mathcal{R}_{\alpha}$, $\alpha<\omega_1$. These spaces form a natural
hierarchy of complexity, $\mathcal{R}_0$ being the Ellentuck space, and for
each $\alpha<\omega_1$, $\mathcal{R}_{\alpha+1}$ coming immediately after
$\mathcal{R}_{\alpha}$ in complexity. Associated with each
$\mathcal{R}_{\alpha}$ is an ultrafilter $\mathcal{U}_{\alpha}$, which is
Ramsey for $\mathcal{R}_{\alpha}$, and in particular, is a rapid p-point
satisfying certain partition properties. We prove Ramsey-classification
theorems for equivalence relations on fronts on $\mathcal{R}_{\alpha}$,
$2\le\alpha<\omega_1$. These are analogous to the Pudlak-\Rodl\ Theorem
canonizing equivalence relations on barriers on the Ellentuck space. We then
apply our Ramsey-classification theorems to completely classify all
Rudin-Keisler equivalence classes of ultrafilters which are Tukey reducible to
$\mathcal{U}_{\alpha}$, for each $2\le\alpha<\omega_1$: Every ultrafilter which
is Tukey reducible to $\mathcal{U}_{\alpha}$ is isomorphic to a countable
iteration of Fubini products of ultrafilters from among a fixed countable
collection of rapid p-points. Moreover, we show that the Tukey types of
nonprincipal ultrafilters Tukey reducible to $\mathcal{U}_{\alpha}$ form a
descending chain of order type $\alpha+1$.
|
We investigate the effectiveness of the statistical radio frequency
interference (RFI) mitigation technique spectral kurtosis (SK) in the face of
simulated realistic RFI signals. SK estimates the kurtosis of a collection of M
power values in a single channel and provides a detection metric that is able
to discern between human-made RFI and incoherent astronomical signals of
interest. We test the ability of SK to flag signals with various representative
modulation types, data rates, duty cycles, and carrier frequencies. We flag
with various accumulation lengths M and implement multi-scale SK, which
combines information from adjacent time-frequency bins to mitigate weaknesses
in single-scale SK. We find that signals with significant sidelobe emission
from high data rates are harder to flag, as well as signals with a 50%
effective duty cycle and weak signal-to-noise ratios. Multi-scale SK with at
least one extra channel can detect both the center channel and side-band
interference, flagging greater than 90% as long as the bin channel width is
wider in frequency than the RFI.
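The single-scale detection statistic referred to above is the generalized spectral kurtosis estimator; a minimal sketch (toy signals, not the paper's simulated RFI suite; the single-spectrum, shape-parameter-one form of the estimator is assumed) of how it separates noise-like power from a contaminated channel:

```python
import numpy as np

def spectral_kurtosis(power, M):
    # generalized SK estimator for M accumulated power values in one channel
    # (single spectrum per accumulation, shape parameter d = 1)
    s1 = power.sum()
    s2 = (power ** 2).sum()
    return (M + 1) / (M - 1) * (M * s2 / s1 ** 2 - 1)

rng = np.random.default_rng(2)
M = 1024
# Pure noise power in one channel is exponentially distributed (chi-squared
# with 2 degrees of freedom), for which SK fluctuates around 1.
noise = rng.exponential(scale=1.0, size=M)
# A strong constant-envelope interferer raises the mean power far more than
# the variance, pushing SK well below 1.
contaminated = noise + 5.0
sk_noise = spectral_kurtosis(noise, M)
sk_rfi = spectral_kurtosis(contaminated, M)
```

Flagging then thresholds SK symmetrically around 1, with the spread of the estimator shrinking roughly as $2/\sqrt{M}$ for larger accumulation lengths.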
|
Polar bubble domains are complex topological defects akin to magnetic
skyrmions that can spontaneously form in ferroelectric thin films and
superlattices. They can be deterministically written and deleted and exhibit a
set of properties, such as sub-10 nm radius and room-temperature stability,
that are highly attractive for dense data storage and reconfigurable
nano-electronics technologies. However, the controlled motion of electric
bubble skyrmions, a critical technology requirement, is currently missing.
Here we present atomistic simulations that demonstrate how external
electric-field perturbations can induce two types of motion of bubble skyrmions
in low-dimensional tetragonal PbZr$_{0.4}$Ti$_{0.6}$O$_3$ systems under
residual depolarizing field. Specifically, we show that, depending on the
spatial profile and magnitude of the external field, bubble skyrmions can
exhibit either a continuous motion driven by the external electric field
gradient or a discontinuous, teleportation-like, skyrmion domain transfer.
These findings provide the first analysis of dynamics and controlled motion of
polar skyrmions that are essential for functionalization of these particle-like
domain structures.
|
Radio monitoring of the gravitational lens system B0218+357 reveals it to be
a highly variable source with variations on timescales of a few days correlated
in both images. This shows that the variability is intrinsic to the background
lensed source and suggests that similar variations in other intraday variable
sources can also be intrinsic in origin.
|
We propose here to garnish the folklore of function spaces on Lipschitz
domains. We prove the boundedness of the trace operator for homogeneous Sobolev
and Besov spaces on a special Lipschitz domain with sharp regularity. In order
to obtain such a result, we also provide appropriate definitions and
properties for our construction of homogeneous Sobolev and Besov spaces on
special Lipschitz domains, and on their boundary, that are suitable for the
treatment of non-linear partial differential equations and boundary value
problems. The trace theorem for homogeneous Sobolev and Besov spaces on
special Lipschitz domains holds in the range
$s\in(\frac{1}{p},1+\frac{1}{p})$. While the case of inhomogeneous Sobolev and
Besov spaces is very common and well known, the case of homogeneous function
spaces seems to be new. This paper uses and improves several arguments given
by the author in a previous paper for function spaces on the whole space and
the half-space.
|
Recently, self-supervised instance discrimination methods have achieved
significant success in learning visual representations from unlabeled
photographic images. However, given the marked differences between photographic
and medical images, the efficacy of instance-based objectives, which focus on
learning the most discriminative global features in an image (e.g., the wheels
of a bicycle), remains unknown in medical imaging. Our preliminary analysis showed
that high global similarity of medical images in terms of anatomy hampers
instance discrimination methods for capturing a set of distinct features,
negatively impacting their performance on medical downstream tasks. To
alleviate this limitation, we have developed a simple yet effective
self-supervised framework, called Context-Aware instance Discrimination (CAiD).
CAiD aims to improve instance discrimination learning by providing finer and
more discriminative information encoded from a diverse local context of
unlabeled medical images. We conduct a systematic analysis to investigate the
utility of the learned features from a three-pronged perspective: (i)
generalizability and transferability, (ii) separability in the embedding space,
and (iii) reusability. Our extensive experiments demonstrate that CAiD (1)
enriches representations learned from existing instance discrimination methods;
(2) delivers more discriminative features by adequately capturing finer
contextual information from individual medical images; and (3) improves
reusability of low/mid-level features compared to standard instance
discriminative methods. As open science, all codes and pre-trained models are
available on our GitHub page: https://github.com/JLiangLab/CAiD.
|
The gauge equivalence between the Manin-Radul and Laberge-Mathieu super KdV
hierarchies is revisited. Apart from the Inami-Kanno transformation, we show
that there is another gauge transformation which also possesses the canonical
property. We explore the relationship of these two gauge transformations from
the Kupershmidt-Wilson theorem viewpoint and, as a by-product, obtain the
Darboux-Backlund transformation for the Manin-Radul super KdV hierarchy. The
geometrical interpretation of these transformations is also briefly discussed.
|
Fix an integer $d>0$. In 2008, David and Weston showed that, on average, an
elliptic curve over $\mathbf{Q}$ picks up a nontrivial $p$-torsion point
defined over a finite extension $K$ of the $p$-adics of degree at most $d$ for
only finitely many primes $p$. This paper proves an analogous averaging result
for principally polarized abelian surfaces over $\mathbf{Q}$ with real
multiplication by $\mathbf{Q}(\sqrt{5})$ and a level-$\sqrt{5}$ structure.
Furthermore, we indicate how the result on abelian surfaces with real
multiplication by $\mathbf{Q}(\sqrt{5})$ relates to the deformation theory of
modular Galois representations.
|
Clustering of the four-nucleon system at kinetic freezeout conditions is
studied using path-integral Monte Carlo techniques. This method seeks to
improve upon previous calculations which relied on approximate semiclassical
methods or few-body quantum mechanics. Estimates are given for the decay
probabilities of the 4N system into various light nuclei decay channels and the
strength of spatial correlations is characterized. Additionally, a simple model
is presented to describe the impact of this clustering on nucleon multiplicity
distributions. The effects of a possible modification of the inter-nucleon
interaction due to the nearby critical line (and hypothetical QCD critical
point) on the clustering are also studied.
|
After a brief review of spin networks and their interpretation as wave
functions for the (space) geometry, we discuss the renormalisation of the area
operator in loop quantum gravity. In such a background independent framework,
we propose to probe the structure of a surface through the analysis of the
coarse-graining and renormalisation flow(s) of its area. We further introduce a
procedure to coarse-grain spin network states and we quantitatively study the
decrease in the number of degrees of freedom during this process. Finally, we
use these coarse-graining tools to define the correlation and entanglement
between parts of a spin network and discuss their potential interpretation as a
natural measure of distance in such a state of quantum geometry.
|
This paper studies the question of how well a signal can be represented by a
sparse linear combination of reference signals from an overcomplete dictionary.
When the dictionary size is exponential in the dimension of signal, then the
exact characterization of the optimal distortion is given as a function of the
dictionary size exponent and the number of reference signals for the linear
representation. Roughly speaking, every signal is sparse if the dictionary size
is exponentially large, no matter how small the exponent is. Furthermore, an
iterative method similar to matching pursuit that successively finds the best
reference signal at each stage gives asymptotically optimal representations.
This method is essentially equivalent to successive refinement for multiple
descriptions and provides a simple alternative proof of the successive
refinability of white Gaussian sources.
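The greedy stage-wise method described above is, in its simplest form, matching pursuit; a minimal sketch (random Gaussian dictionary and signal, purely illustrative of the successive-selection idea, not the paper's asymptotic analysis):

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    # greedy sparse approximation: at each stage pick the dictionary atom
    # (unit-norm column of D) most correlated with the current residual
    residual = x.copy()
    approx = np.zeros_like(x)
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        approx += corr[k] * D[:, k]
        residual -= corr[k] * D[:, k]
    return approx, residual

rng = np.random.default_rng(3)
n, N = 16, 256                      # signal dimension, dictionary size
D = rng.normal(size=(n, N))
D /= np.linalg.norm(D, axis=0)      # normalize atoms to unit norm
x = rng.normal(size=n)
approx, residual = matching_pursuit(x, D, n_iter=8)
```

Each stage removes the component along the chosen atom, so the residual energy decreases monotonically, mirroring the successive-refinement view taken in the paper.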
|
Thermal field theory is indispensable for describing hot and dense systems.
Yet perturbative calculations are often stymied by a host of energy scales, and
tend to converge slowly. This means that precise results require the apt use of
effective field theories. In this paper we refine the effective description of
slowly varying gauge fields known as hard thermal loops. We match this effective
theory to the full theory at two loops. Our results apply for any
renormalizable model and fermion chemical potential. We also discuss how to
consistently define asymptotic masses at higher orders; and how to treat
spectral functions close to the lightcone. In particular, we demonstrate that
the gluon mass is well-defined to next-to-leading order.
|
Recently the CIBER experiment measured the diffuse cosmic infrared background
(CIB) flux and claimed an excess compared with integrated emission from
galaxies. We show that the CIB spectrum can be fitted by the additional photons
produced by the decay of a new particle. However, it also contributes too much
to the anisotropy of the CIB, which is in contradiction with the anisotropy
measurements by the CIBER and Hubble Space Telescope.
|
We address the problem that state-of-the-art Convolutional Neural Network
(CNN) classifiers are not invariant to small shifts. The problem can be solved
by the removal of sub-sampling operations such as stride and max pooling, but
at a cost of severely degraded training and test efficiency. We present a novel
use of a Gaussian-Hermite basis to efficiently approximate arbitrary filters
within the CNN framework and obtain translation invariance. This is shown to be
invariant to small shifts, and preserves the efficiency of training. Further,
to improve efficiency in memory usage as well as computational speed, we show
that it is still possible to sub-sample with this approach and retain a weaker
form of invariance that we call \emph{translation insensitivity}, which leads
to stability with respect to shifts. We prove these claims analytically and
empirically. Our analytic methods further provide a framework for understanding
any architecture in terms of translation insensitivity, and provide guiding
principles for design.
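The filter-approximation idea can be illustrated by building Gaussian-windowed Hermite basis functions and forming filters as their linear combinations; a sketch with made-up sizes and parameters (not the paper's implementation):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def gauss_hermite_basis(order, size=7, sigma=1.5):
    """Separable 2-D Gaussian-Hermite basis functions up to `order`.
    Illustrative construction; size and sigma are arbitrary choices."""
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))                       # Gaussian window
    basis_1d = [g * hermeval(x / sigma, [0] * n + [1])       # He_n(x/sigma)
                for n in range(order + 1)]
    return [np.outer(bi, bj) for bi in basis_1d for bj in basis_1d]

basis = gauss_hermite_basis(order=2)          # (order+1)^2 = 9 2-D filters
w = np.random.default_rng(0).normal(size=len(basis))
filt = sum(wi * b for wi, b in zip(w, basis)) # a "learned" filter = combination
```

In a CNN the weights `w` would be learned, so each convolutional filter is constrained to the span of a small fixed basis.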
|
We exploit the asymptotic normality of the extreme value theory (EVT) based
estimators of the parameters of a symmetric L\'evy-stable distribution, to
construct confidence intervals. The accuracy of these intervals is evaluated
through a simulation study.
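The interval construction from asymptotic normality is the standard Wald recipe; a generic sketch with made-up numbers (not the paper's specific EVT estimator):

```python
from math import sqrt

def wald_ci(estimate, asymptotic_var, n, z=1.96):
    """95% confidence interval from asymptotic normality:
    estimate +/- z * sqrt(asymptotic_var / n)."""
    half = z * sqrt(asymptotic_var / n)
    return estimate - half, estimate + half

# Hypothetical point estimate of a stability index with a made-up
# asymptotic variance, from n = 400 observations.
lo, hi = wald_ci(1.7, asymptotic_var=2.5, n=400)
```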
|
Magnetic flux emergence from the solar interior to the atmosphere is believed
to be a key process in the formation of solar active regions and the driving of
solar eruptions. Owing to limited observational capabilities, the flux emergence
process is commonly studied using numerical simulations. In this paper, we
developed a numerical model to simulate the emergence of a twisted magnetic
flux tube from the convection zone to the corona using the AMR--CESE--MHD code,
which is based on the conservation-element solution-element method with
adaptive mesh refinement. The result of our simulation agrees with that of many
previous ones with similar initial conditions but using different numerical
codes. In the early stage, the flux tube rises from the convection zone,
driven by magnetic buoyancy, until it approaches the photosphere. The
emergence decelerates there and, as magnetic flux piles up, the magnetic
buoyancy instability is triggered, allowing the magnetic field to partially
enter the atmosphere. Meanwhile, two gradually separating
polarity concentration zones appear in the photospheric layer, transporting the
magnetic field and energy into the atmosphere through their vortical and
shearing motions. Correspondingly, the coronal magnetic field has also been
reshaped to a sigmoid configuration containing a thin current layer, which
resembles the typical pre-eruptive magnetic configuration of an active region.
The numerical framework of magnetic flux emergence established here will be
applied in future investigations of how solar eruptions are initiated in
flux-emergence active regions.
|
Grid computing is distributed computing performed transparently across
multiple administrative domains. Grid middleware, which is meant to enable
access to grid resources, is currently widely seen as being too heavyweight
and, in consequence, unwieldy for general scientific use. Its heavyweight
nature, especially on the client-side, has severely restricted the uptake of
grid technology by computational scientists. In this paper, we describe the
Application Hosting Environment (AHE) which we have developed to address some
of these problems. The AHE is a lightweight, easily deployable environment
designed to allow the scientist to quickly and easily run legacy applications
on distributed grid resources. It provides a higher level abstraction of a grid
than is offered by existing grid middleware schemes such as the Globus Toolkit.
As a result the computational scientist does not need to know the details of
any particular underlying grid middleware and is isolated from any changes to
it on the distributed resources. The functionality provided by the AHE is
`application-centric': applications are exposed as web services with a
well-defined standards-compliant interface. This allows the computational
scientist to start and manage application instances on a grid in a transparent
manner, thus greatly simplifying the user experience. We describe how a range
of computational science codes have been hosted within the AHE and how the
design of the AHE allows us to implement complex workflows for deployment on
grid infrastructure.
|
It is now well-known that automatic speaker verification (ASV) systems can be
spoofed using various types of adversaries. The usual approach to counteract
ASV systems against such attacks is to develop a separate spoofing
countermeasure (CM) module to classify speech input either as a bonafide, or a
spoofed utterance. Nevertheless, such a design requires additional computation
and utilization efforts at the authentication stage. An alternative strategy
involves a single monolithic ASV system designed to handle both zero-effort
imposter (non-targets) and spoofing attacks. Such spoof-aware ASV systems have
the potential to provide stronger protection and more economical computation.
To this end, we propose to generalize the standalone ASV (G-SASV) against
spoofing attacks, where we leverage limited training data from CM to enhance a
simple backend in the embedding space, without the involvement of a separate CM
module during the test (authentication) phase. We propose a novel yet simple
backend classifier based on deep neural networks and conduct the study via
domain adaptation and multi-task integration of spoof embeddings at the
training stage. Experiments are conducted on the ASVspoof 2019 logical access
dataset, where we improve the performance of statistical ASV backends on the
joint (bonafide and spoofed) and spoofed conditions by a maximum of 36.2% and
49.8% in terms of equal error rates, respectively.
|
Based on a calibration argument, we give a simple proof of a Bernstein-type
theorem for entire minimal graphs over Gauss space $\mathbb{G}^n$.
|
The development, assessment, and comparison of randomized search algorithms
heavily rely on benchmarking. Regarding the domain of constrained optimization,
the number of currently available benchmark environments bears no relation to
the number of distinct problem features. The present paper proposes a scalable
linearly constrained optimization problem that is suitable for benchmarking
Evolutionary Algorithms (EAs). The linear benchmarking environment is
demonstrated by comparing two recent EA variants.
|
In this paper we investigate the existence of nontrivial ground state
solutions for the following fractional scalar field equation \begin{align*}
(-\Delta)^{s} u+V(x)u= f(u) \mbox{ in } \mathbb{R}^{N}, \end{align*} where
$s\in (0,1)$, $N> 2s$, $(-\Delta)^{s}$ is the fractional Laplacian, $V:
\mathbb{R}^{N}\rightarrow \mathbb{R}$ is a bounded potential satisfying
suitable assumptions, and $f\in C^{1, \beta}(\mathbb{R}, \mathbb{R})$ has
critical growth. We first analyze the case of constant $V$, and then we develop
a Jeanjean-Tanaka argument \cite{JT} to deal with the nonautonomous case. As far
as we know, all results presented here are new.
|
This paper explores the application of machine learning methods for
classifying astronomical sources using photometric data, including normal and
emission line galaxies (ELGs; starforming, starburst, AGN, broad line),
quasars, and stars. We utilized samples from Sloan Digital Sky Survey (SDSS)
Data Release 17 (DR17) and the ALLWISE catalog, which contain spectroscopically
labeled sources from SDSS. Our methodology comprises two parts. First, we
conducted experiments, including three-class, four-class, and seven-class
classifications, employing the Random Forest (RF) algorithm. This phase aimed
to achieve optimal performance with balanced datasets. In the second part, we
trained various machine learning methods, such as $k$-nearest neighbors (KNN),
RF, XGBoost (XGB), voting, and artificial neural network (ANN), using all
available data based on promising results from the first phase. Our results
highlight the effectiveness of combining optical and infrared features,
yielding the best performance across all classifiers. Specifically, in the
three-class experiment, RF and XGB algorithms achieved identical average F1
scores of 98.93 per~cent on both balanced and unbalanced datasets. In the
seven-class experiment, our average F1 score was 73.57 per~cent. Using the XGB
method in the four-class experiment, we achieved F1 scores of 87.9 per~cent for
normal galaxies (NGs), 81.5 per~cent for ELGs, 99.1 per~cent for stars, and
98.5 per~cent for quasars (QSOs). Unlike classical methods based on
time-consuming spectroscopy, our experiments demonstrate the feasibility of
using automated algorithms on carefully classified photometric data. With more
data and ample training samples, detailed photometric classification becomes
possible, aiding in the selection of follow-up observation candidates.
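As a toy illustration of classifying sources from photometric features, here is a from-scratch k-nearest-neighbour vote on synthetic two-class data; all data and parameters are made up and merely stand in for colour indices:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """Majority vote among the k nearest training points in feature space
    (a toy stand-in for the KNN baseline mentioned above)."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

rng = np.random.default_rng(1)
# Two synthetic "classes" separated in a 2-D colour-colour plane.
a = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
b = rng.normal([2.0, 2.0], 0.3, size=(100, 2))
X = np.vstack([a, b])
y = np.repeat([0, 1], 100)
acc = np.mean(knn_predict(X, y, X) == y)      # training accuracy on toy data
```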
|
Nanoscale systems of metal atoms antiferromagnetically exchange coupled to
several magnetic impurities are shown to exhibit an unconventional re-entrant
competition between Kondo screening and indirect magnetic exchange interaction.
Depending on the atomic positions of the magnetic moments, the total
ground-state spin deviates from predictions of standard
Ruderman-Kittel-Kasuya-Yosida perturbation theory. The effect shows up on an
energy scale larger than the level width induced by the coupling to the
environment and is experimentally verifiable by studying magnetic field
dependencies.
|
In this work, we present a paradigm bridging electromagnetic (EM) and
molecular communication through a stimuli-responsive intra-body model. It has
been established that protein molecules, which play a key role in governing
cell behavior, can be selectively stimulated using Terahertz (THz) band
frequencies. By triggering protein vibrational modes using THz waves, we induce
changes in protein conformation, resulting in the activation of a controlled
cascade of biochemical and biomechanical events. To analyze such an
interaction, we formulate a communication system composed of a nanoantenna
transmitter and a protein receiver. We adopt a Markov chain model to account
for protein stochasticity with transition rates governed by the nanoantenna
force. Both two-state and multi-state protein models are presented to depict
different biological configurations. Closed-form expressions for the mutual
information of each scenario are derived and maximized to find the capacity
between the input nanoantenna force and the protein state. The results we
obtain indicate that controlled protein signaling provides a communication
platform for information transmission between the nanoantenna and the protein
with a clear physical significance. The analysis reported in this work should
further research into the EM-based control of protein networks.
|
Over the past couple of years, the growing debate around automated facial
recognition has reached a boiling point. As developers have continued to
swiftly expand the scope of these kinds of technologies into an almost
unbounded range of applications, an increasingly strident chorus of critical
voices has sounded concerns about the injurious effects of the proliferation of
such systems. Opponents argue that the irresponsible design and use of facial
detection and recognition technologies (FDRTs) threatens to violate civil
liberties, infringe on basic human rights and further entrench structural
racism and systemic marginalisation. They also caution that the gradual creep
of face surveillance infrastructures into every domain of lived experience may
eventually eradicate the modern democratic forms of life that have long
provided cherished means to individual flourishing, social solidarity and human
self-creation. Defenders, by contrast, emphasise the gains in public safety,
security and efficiency that digitally streamlined capacities for facial
identification, identity verification and trait characterisation may bring. In
this explainer, I focus on one central aspect of this debate: the role that
dynamics of bias and discrimination play in the development and deployment of
FDRTs. I examine how historical patterns of discrimination have made inroads
into the design and implementation of FDRTs from their very earliest moments.
And, I explain the ways in which the use of biased FDRTs can lead to
distributional and recognitional injustices. The explainer concludes with an
exploration of broader ethical questions around the potential proliferation of
pervasive face-based surveillance infrastructures and makes some
recommendations for cultivating more responsible approaches to the development
and governance of these technologies.
|
We calculate the angular correlation function of galaxies in the Two Micron
All Sky Survey. We minimize the possible contamination by stars, dust, seeing
and sky brightness by studying their cross correlation with galaxy density, and
limiting the galaxy sample accordingly. We measure the correlation function at
scales between 1 and 18 degrees using half a million galaxies. We find that the
best-fit power law to the correlation function has a slope of 0.76 and an amplitude of
0.11. However, there are statistically significant oscillations around this
power law. The largest oscillation occurs at about 0.8 degrees, corresponding
to 2.8 h^{-1} Mpc at the median redshift of our survey, as expected in halo
occupation distribution descriptions of galaxy clustering.
We invert the angular correlation function using Singular Value Decomposition
to measure the three-dimensional power spectrum and find that it too is in good
agreement with previous measurements. A dip seen in the power spectrum at small
wavenumber k is statistically consistent with CDM-type power spectra. A fit of
CDM-type power spectra to k < 0.2 h Mpc^{-1} gives constraints of
\Gamma_{eff}=0.116 and \sigma_8=0.96. This suggests a K_s-band linear bias of
1.1+/-0.2. This \Gamma_{eff} differs from the WMAP CMB-derived value. On
small scales the power-law shape of our power spectrum is shallower than that
derived for the SDSS. These facts together imply a biasing of these different
galaxies that might be nonlinear, that might be either waveband or luminosity
dependent, and that might have a nonlocal origin.
|
This article is about Lehn-Lehn-Sorger-van Straten eightfolds $Z$, and their
anti-symplectic involution $\iota$. When $Z$ is birational to the Hilbert
scheme of points on a K3 surface, we give an explicit formula for the action of
$\iota$ on the Chow group of $0$-cycles of $Z$. The formula is in agreement
with the Bloch-Beilinson conjectures, and has some non-trivial consequences for
the Chow ring of the quotient.
|
The relation between the ratio of infrared (IR) and ultraviolet (UV) flux
densities (the infrared excess: IRX) and the slope of the UV spectrum (\beta)
of galaxies plays a fundamental role in the evaluation of the dust attenuation
of star forming galaxies especially at high redshifts. Many authors, however,
pointed out that there is a significant dispersion and/or deviation from the
originally proposed IRX-\beta relation depending on sample selection. We
reexamined the IRX-\beta relation by measuring the far- and near-UV flux
densities of the original sample galaxies with GALEX and AKARI imaging data,
and constructed a revised formula. We found that the newly obtained IRX values
were lower than the original relation because of the significant
underestimation of the UV flux densities of the galaxies, caused by the small
aperture of IUE. Further, since the original relation was based on IRAS data,
which covered a wavelength range of \lambda = 42--122 \mum, we used data from
AKARI, which has wider wavelength coverage toward longer wavelengths, to
obtain an appropriate IRX-\beta relation with the total dust emission (TIR):
\log(L_{\rm TIR}/L_{\rm FUV}) = \log [10^{0.4(3.06+1.58\beta)}-1] +0.22. This
new relation is consistent with most of the preceding results for samples
selected at optical and UV, though there is a significant scatter around it. We
also found that even the quiescent class of IR galaxies follows this new
relation, though luminous and ultraluminous IR galaxies are distributed quite
differently, as was already known.
|
By adapting some ideas of M. Ledoux \cite{ledoux2}, \cite{ledoux-stflour} and
\cite{Led} to a sub-Riemannian framework we study Sobolev, Poincar\'e and
isoperimetric inequalities associated to subelliptic diffusion operators that
satisfy the generalized curvature dimension inequality that was introduced by
F. Baudoin and N. Garofalo in \cite{Bau2}. Our results apply in particular on
all CR Sasakian manifolds whose horizontal Webster-Tanaka-Ricci curvature is
nonnegative, all Carnot groups of step two, and wide subclasses of principal
bundles over Riemannian manifolds whose Ricci curvature is nonnegative.
|
We prove that small smooth solutions of semi-linear Klein-Gordon equations
with quadratic potential exist over a longer interval than the one given by
local existence theory, for almost every value of mass. We use normal form for
the Sobolev energy. The difficulty in comparison with some similar results on
the sphere comes from the fact that two successive eigenvalues $\lambda,
\lambda'$ of $\sqrt{-\Delta+|x|^2}$ may be separated by a distance as small as
$\frac{1}{\sqrt{\lambda}}$.
|
The giant molecular cloud Sagittarius B2, located near the Galactic Centre,
has been observed in the far-infrared by the ISO Long Wavelength Spectrometer.
Wavelengths in the range 47-196 microns were covered with the high resolution
Fabry-Perot spectrometer, giving a spectral resolution of 30-40 km/s. The J=1-0
and J=2-1 rotational transitions of HD fall within this range at 112 microns
and 56 microns. A probable detection was made of the ground state J=1-0 line in
emission but the J=2-1 line was not detected above the noise. This allowed us
to calculate an upper limit on the temperature in the emitting region of
approximately 80 K and a value for the deuterium abundance in the Sgr B2
envelope of D/H=(0.2-11)x10^-6.
|
The coordinates, along any fixed direction, of points on the sphere
$S^{n-1}(\sqrt{n})$ roughly follow a standard Gaussian distribution as $n$
approaches infinity. We revisit this classical result from a nonstandard
analysis perspective, providing a new proof by working with hyperfinite
dimensional spheres. We also set up a nonstandard theory for the asymptotic
behavior of integrals over varying domains in general. We obtain a new proof of
the Riemann--Lebesgue lemma as a by-product of this theory. We finally show
that for any function $f \colon \mathbb{R}^k \to \mathbb{R}$ with finite
Gaussian moment of an order larger than one, its expectation is given by a Loeb
integral over a hyperfinite dimensional sphere. Some useful inequalities
between high-dimensional spherical means of $f$ and its Gaussian mean are
obtained in order to complete the above proof. A review of the requisite
nonstandard analysis is provided.
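The classical fact revisited here is easy to check numerically: sample uniform points on the sphere of radius sqrt(n) and look at one coordinate. A small sketch (sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 20000                     # ambient dimension, number of samples
g = rng.normal(size=(m, n))
# Normalizing Gaussian vectors gives uniform points on the unit sphere;
# rescale to radius sqrt(n).
points = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)
coord = points[:, 0]                   # coordinate along a fixed direction
mean, var = coord.mean(), coord.var()  # approach 0 and 1 as n, m grow
```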
|
In this paper, we determine the Hausdorff dimension of the set of points with
divergent trajectories on the product of certain homogeneous spaces. The flow
is allowed to be weighted with respect to the factors in the product space. The
result is derived from its counterpart in Diophantine approximation. In doing
this, we introduce a notion of jointly singular matrix tuples, and extend the
dimension formula for singular matrices to such matrix tuples.
|
This paper was motivated by a remarkable group, the maximal subgroup
$M=S_3\ltimes 2^{2+1}_{-}\ltimes3^{2+1}\ltimes2^{6+1}_{-}$ of the sporadic
simple group ${\rm Fi}_{23}$, where $S_3$ is the symmetric group of degree 3,
and $2^{2+1}_{-}$, $3^{2+1}$ and $2^{6+1}_{-}$ denote extraspecial groups. The
representation $3^{2+1}\to{\rm GL}(3,\mathbb{F}_4)\to{\rm GL}(6,\mathbb{F}_2)$
extends (remarkably) to $S_3\ltimes 2^{2+1}_{-}\ltimes3^{2+1}$ and preserves a
quadratic form (of minus type) which allows the construction of $M$. The paper
describes certain (Weil) representations of extraspecial groups which extend,
and preserve various forms. Incidentally, $M$ is a remarkable solvable group
with derived length 10, and composition length 24.
|
The quantized Hall conductance in a plateau is related to the index of a
Fredholm operator. In this paper we describe the generic ``phase diagram'' of
Fredholm indices associated with bounded and Toeplitz operators. We discuss the
possible relevance of our results to the phase diagram of disordered integer
quantum Hall systems.
|
We point out that the equivalence theorem, which relates the amplitude for a
process with external longitudinally polarized vector bosons to the amplitude
in which the longitudinal vector bosons are replaced by the corresponding
pseudo-Goldstone bosons, is not valid for effective Lagrangians. However, a
more general formulation of this theorem also holds for effective interactions.
The generalized theorem can be utilized to determine the high-energy behaviour
of scattering processes just by power counting and to simplify the calculation
of the corresponding amplitudes. We apply this method to the phenomenologically
most interesting terms describing effective interactions of the electroweak
vector and Higgs bosons in order to examine their effects on vector-boson
scattering and on vector-boson-pair production in $f\bar{f}$ annihilation. The
use of the equivalence theorem in the literature is examined.
|
We provide comments on the article by Shannon et al. (Sep 2015) entitled
"Gravitational waves from binary supermassive black holes missing in pulsar
observations". The purpose of this letter is to address several misconceptions
of the public and other scientists regarding the conclusions of that work.
|
Consistency regularization on label predictions becomes a fundamental
technique in semi-supervised learning, but it still requires a large number of
training iterations for high performance. In this study, our analysis shows
that consistency regularization restricts the propagation of label information
because samples with unconfident pseudo-labels are excluded from the model
updates. We then propose contrastive regularization to improve both the
efficiency and accuracy of consistency regularization via well-clustered
features of unlabeled data. Specifically, after strongly augmented samples are
assigned to
clusters by their pseudo-labels, our contrastive regularization updates the
model so that the features with confident pseudo-labels aggregate the features
in the same cluster, while pushing away features in different clusters. As a
result, the information of confident pseudo-labels can be effectively
propagated into more unlabeled samples during training by the well-clustered
features. On benchmarks of semi-supervised learning tasks, our contrastive
regularization improves the previous consistency-based methods and achieves
state-of-the-art results, especially with fewer training iterations. Our method
also shows robust performance on open-set semi-supervised learning where
unlabeled data includes out-of-distribution samples.
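The pull-together/push-apart objective described above can be caricatured with a toy cluster-contrastive loss in numpy; this is an illustrative reading of the idea, not the paper's exact loss, and all names and data are made up:

```python
import numpy as np

def cluster_contrastive_loss(feats, pseudo, tau=0.1):
    """Treat features sharing a pseudo-label as positives and all others
    as negatives; lower loss means better-clustered features."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = np.exp(f @ f.T / tau)                       # pairwise similarities
    np.fill_diagonal(sim, 0.0)                        # exclude self-pairs
    same = (pseudo[:, None] == pseudo[None, :]) & ~np.eye(len(pseudo), dtype=bool)
    pos = (sim * same).sum(axis=1)                    # aggregate positives
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 8)
tight = np.vstack([rng.normal([3, 0], 0.1, (8, 2)),   # two tight clusters
                   rng.normal([0, 3], 0.1, (8, 2))])
mixed = rng.normal(0, 1, (16, 2))                     # no cluster structure
loss_clustered = cluster_contrastive_loss(tight, labels)
loss_scattered = cluster_contrastive_loss(mixed, labels)
```

Well-clustered pseudo-labeled features give a much lower loss than scattered ones, which is the signal the regularizer exploits.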
|
The operation of many classical and quantum systems in nonequilibrium steady
state is constrained by cost-precision (dissipation-fluctuation) tradeoff
relations, delineated by the thermodynamic uncertainty relation (TUR). However,
coherent quantum electronic nanojunctions can escape such a constraint, showing
finite charge current and nonzero entropic cost with vanishing current
fluctuations. Here, we analyze the absence, and restoration, of cost-precision
tradeoff relations in fermionic nanojunctions under different affinities:
voltage and temperature biases. With analytic work and simulations, we show
that both charge and energy currents can display the absence of cost-precision
tradeoff if we engineer the transmission probability as a boxcar function --
with a perfect transmission and hard energy cutoffs. Specifically for charge
current under voltage bias, the standard TUR may be immediately violated as we
depart from equilibrium, and it is exponentially suppressed with increased
voltage. However, beyond idealized, hard-cutoff energy-filtered transmission
functions, we show that realistic models with soft cutoffs or imperfect
transmission functions follow cost-precision tradeoffs, and eventually recover
the standard TUR sufficiently far from equilibrium. The existence of
cost-precision tradeoff relations is thus suggested as a generic feature of
realistic nonequilibrium quantum transport junctions.
|
We study the high-temperature regime of a mean-field spin glass model whose
couplings matrix is orthogonally invariant in law. The magnetization of this
model is conjectured to satisfy a system of TAP equations, originally derived
by Parisi and Potters using a diagrammatic expansion of the Gibbs free energy.
We prove that this TAP description is correct in an $L^2$ sense, in a regime of
sufficiently high temperature. Our approach develops a novel geometric argument
for proving the convergence of an Approximate Message Passing (AMP) algorithm
to the magnetization vector, which is applicable in models without i.i.d.
couplings. This convergence is shown via a conditional second moment analysis
of the free energy restricted to a thin band around the output of the AMP
algorithm, in a system of many "orthogonal" replicas.
|
Determining the physical properties of microlensing events depends on having
accurate angular sizes of the source star. Using long-baseline optical
interferometry we are able to measure the angular sizes of nearby stars with
uncertainties $\leq 2\%$. We present empirically derived relations of angular
diameters that are calibrated using both a sample of dwarfs/subgiants and a
sample of giant stars. These relations are functions of five color indices in
the visible and near-infrared, and have uncertainties of 1.8-6.5% depending on
the color used. We find that a combined sample of both main-sequence and
evolved stars of A-K spectral types is well fit by a single relation for each
color considered. We find that in the colors considered, metallicity does not
play a statistically significant role in predicting stellar size, leading to a
means of predicting observed sizes of stars from color alone.
|
In this study, we present a systematic computational investigation to analyze
the long debated crystal stability of two well known aspirin polymorphs,
labeled as Form I and Form II. Specifically, we developed a strategy to collect
training configurations covering diverse interatomic interactions between
representative functional groups in the aspirin crystals. Utilizing a
state-of-the-art neural network interatomic potential (NNIP) model, we
developed an accurate machine learning potential to simulate aspirin crystal
dynamics under finite temperature conditions with $\sim$0.46 kJ/mol/molecule
accuracy. Employing the trained NNIP model, we performed thermodynamic
integration to assess the free energy difference between aspirin Forms I and
II, accounting for the anharmonic effects in a large supercell consisting of
512 molecules. For the first time, our results convincingly demonstrated that
Form I is more stable than Form II at 300 K, ranging from 0.74 to 1.83
kJ/mol/molecule, aligning with the experimental observations. Unlike the
majority of previous simulations based on (quasi)harmonic approximations in a
small supercell, which often found degenerate energies between aspirin Forms I
and II, our findings underscore the importance of anharmonic effects in
determining polymorphic stability ranking. Furthermore, we proposed the use of
rotational degrees of freedom of methyl and ester/phenyl groups in the aspirin
crystal, as characteristic motions to highlight rotational entropic
contribution that favors the stability of Form I. Beyond aspirin polymorphism,
we anticipate that such entropy-driven stabilization is broadly applicable to
many other organic systems, suggesting that our approach holds great promise
for stability studies in small-molecule drug design.
|
We address the ground-state properties of the long-standing and much-studied
three-dimensional quantum spin liquid candidate, the $S=\frac 1 2$ pyrochlore
Heisenberg antiferromagnet. By using $SU(2)$ density-matrix renormalization
group (DMRG), we are able to access cluster sizes of up to 128 spins. Our most
striking finding is a robust spontaneous inversion symmetry breaking, reflected
in an energy density difference between the two sublattices of tetrahedra,
familiar as a starting point of earlier perturbative treatments. We also
determine the ground-state energy, $E_0/N_\text{sites} = -0.490(6) J$, by
combining extrapolations of DMRG with those of a numerical linked cluster
expansion. These findings suggest a scenario in which a finite-temperature spin
liquid regime gives way to a symmetry-broken state at low temperatures.
|
We present numerical and analytical results for the reflection and
transmission properties of matter wave solitons impinging on localized
scattering potentials in one spatial dimension. Our mean field analysis
identifies regimes where the solitons behave more like waves or more like
particles as a result of the interplay between the dispersive wave propagation
and the attractive interactions between the atoms. For a bright soliton
propagating together with a dark soliton void in a two-species Bose-Einstein
condensate of atoms with repulsive interactions, we find different reflection
and transmission properties of the dark and the bright components.
|
In this paper, perfect $k$-orthogonal colourings of tensor graphs are studied.
First, the problem of determining whether a given graph has a perfect
$2$-orthogonal colouring is reformulated as a tensor subgraph problem. Then, it
is shown that
if two graphs have a perfect $k$-orthogonal colouring, then so does their
tensor graph. This provides an upper bound on the $k$-orthogonal chromatic
number for general tensor graphs. Lastly, two other conditions for a tensor
graph to have a perfect $k$-orthogonal colouring are given.
|
We analyze the charged lepton flavor violating (CLFV) decays of vector mesons
$V\rightarrow l_i^{\pm}l_j^{\mp}$ with $V\in\{\phi, J/\Psi, \Upsilon, \rho^0,
\omega \}$ in the BLMSSM model. This new model is introduced as a
supersymmetric extension of the Standard Model (SM) in which local gauged
baryon number B and lepton number L are considered. The numerical results
indicate that the BLMSSM model can produce significant contributions to such
two-body CLFV decays, and the branching ratios for these CLFV processes can
easily reach the present experimental upper bounds. Therefore, searching for
CLFV processes of vector mesons may be an effective channel to study new
physics.
|
We combine searches by the CDF and D0 collaborations for a Higgs boson
decaying to W+W-. The data correspond to integrated luminosities of 4.8 (CDF)
and 5.4 (D0) fb-1 of p-pbar collisions at sqrt{s}=1.96 TeV at the
Fermilab Tevatron collider. No excess is observed above background expectation,
and resulting limits on Higgs boson production exclude a standard-model Higgs
boson in the mass range 162-166 GeV at the 95% C.L.
|
Making language models bigger does not inherently make them better at
following a user's intent. For example, large language models can generate
outputs that are untruthful, toxic, or simply not helpful to the user. In other
words, these models are not aligned with their users. In this paper, we show an
avenue for aligning language models with user intent on a wide range of tasks
by fine-tuning with human feedback. Starting with a set of labeler-written
prompts and prompts submitted through the OpenAI API, we collect a dataset of
labeler demonstrations of the desired model behavior, which we use to fine-tune
GPT-3 using supervised learning. We then collect a dataset of rankings of model
outputs, which we use to further fine-tune this supervised model using
reinforcement learning from human feedback. We call the resulting models
InstructGPT. In human evaluations on our prompt distribution, outputs from the
1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,
despite having 100x fewer parameters. Moreover, InstructGPT models show
improvements in truthfulness and reductions in toxic output generation while
having minimal performance regressions on public NLP datasets. Even though
InstructGPT still makes simple mistakes, our results show that fine-tuning with
human feedback is a promising direction for aligning language models with human
intent.
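The ranking stage described above trains a reward model on human preference comparisons. A common objective for this (assumed here for illustration; the abstract does not spell out the loss) is the pairwise logistic loss, which pushes the reward of the preferred output above that of the rejected one:

```python
import math

def pairwise_ranking_loss(r_chosen, r_rejected):
    """Pairwise logistic loss for training a reward model from human
    rankings: -log sigmoid(r_chosen - r_rejected). The loss is small
    when the preferred completion already scores higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A confidently correct ranking costs little...
print(round(pairwise_ranking_loss(2.0, -1.0), 4))   # 0.0486
# ...while an inverted ranking is penalized heavily.
print(round(pairwise_ranking_loss(-1.0, 2.0), 4))   # 3.0486
```

Reinforcement learning then optimizes the policy against the reward model trained with such an objective.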
|
A scalar theory of gravitation with a preferred reference frame (PRF) is
considered, that accounts for special relativity and reduces to it if the
gravitational field cancels. The gravitating system consists of a finite number
of perfect-fluid bodies. An "asymptotic" post-Newtonian (PN) approximation
scheme is used, allowing an explicit weak-field limit with all fields expanded.
Exact mass centers are defined and their exact equations of motion are derived.
The PN expansion of these equations is obtained: the zero-order equations are
those of Newtonian gravity (NG), and the equations for the first-order (PN)
corrections depend linearly on the PN fields. For PN corrections to the motion
of the mass centers, especially in the solar system, one may assume
"very-well-separated" rigidly moving bodies with spherical self-fields of the
zero-order approximation. The PN corrections reduce then to a time integration
and include spin effects, which might be significant. It is shown that the
Newtonian masses are not correct zero-order masses for the PN calculations. An
algorithm is proposed, in order to minimize the residual and to assess the
velocity in the PRF.
|
This paper performs the first study to understand the prevalence, challenges,
and effectiveness of using Static Application Security Testing (SAST) tools on
Open-Source Embedded Software (EMBOSS) repositories. We collect a corpus of 258
of the most popular EMBOSS projects, representing 13 distinct categories such
as real-time operating systems, network stacks, and applications. To understand
the current use of SAST tools on EMBOSS, we measured this corpus and surveyed
developers. To understand the challenges and effectiveness of using SAST tools
on EMBOSS projects, we applied these tools to the projects in our corpus. We
report that almost none of these projects (just 3%) use SAST tools beyond those
baked into the compiler, and developers give rationales such as ineffectiveness
and false positives. In applying SAST tools ourselves, we show that minimal
engineering effort and project expertise are needed to apply many tools to a
given EMBOSS project. GitHub's CodeQL was the most effective SAST tool -- using
its built-in security checks we found a total of 540 defects (with a false
positive rate of 23%) across the 258 projects, with 399 (74%) likely security
vulnerabilities, including in projects maintained by Microsoft, Amazon, and the
Apache Foundation. EMBOSS engineers have confirmed 273 (51%) of these defects,
mainly by accepting our pull requests. Two CVEs were issued. In summary, we
urge EMBOSS engineers to adopt the current generation of SAST tools, which
offer low false positive rates and are effective at finding security-relevant
defects.
|
The central engine of GRB170817A post-merger to GW170817 is probed by
GW-calorimetry and event timing, applied to a post-merger descending chirp
which can potentially break the degeneracy in spin-down of a neutron star or
black hole remnant by the relatively large energy reservoir in the angular
momentum, $E_J$, of the latter according to the Kerr metric. This analysis
derives from model-agnostic spectrograms with equal sensitivity to ascending
and descending chirps generated by time-symmetric butterfly matched filtering.
The sensitivity was calibrated by response curves generated by software
injection experiments. The statistical significance for candidate emission from
the central engine of GRB170817A is expressed by probabilities of false alarm
(PFA; type I errors) derived from an event-timing analysis. PDFs were derived
for start-time $t_s$, identified via high-resolution image analyses of the
available spectrograms. For merged (H1,L1)-spectrograms of the LIGO detectors,
a PFA $p_1$ derives from causality in $t_s$ given GW170817-GRB170817A. A
statistically independent confirmation is presented in individual H1 and L1
analyses, in a second PFA $p_2$ of consistency in their respective observations
of $t_s$. A combined PFA derives from their product since mean and
(respectively) difference in timing are statistically independent. Applied to
GW170817-GRB170817A, PFAs of event timing in $t_s$ produce $p_1=8.3\times
10^{-4}$ and $p_2=4.9\times 10^{-5}$ of a post-merger output ${\cal
E}_{GW}\simeq 3.5\%M_\odot c^2$ ($p_1p_2=4.1\times 10^{-8}$, equivalent
$Z$-score 5.48). ${\cal E}_{GW}$ exceeds $E_J$ of the hyper-massive neutron
star in the immediate aftermath of GW170817, yet it is consistent with $E_J$
rejuvenated in delayed gravitational collapse to a Kerr black hole. Similar
emission may be expected from energetic core-collapse supernovae producing
black holes. (Abbr.)
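The combined false-alarm probability quoted above is simply the product of the two independent PFAs; an equivalent Z-score follows from the inverse normal CDF (a one-sided convention is assumed in this sketch, so the exact Z depends on the convention adopted):

```python
from statistics import NormalDist

p1 = 8.3e-4  # PFA from causality of t_s given GW170817-GRB170817A
p2 = 4.9e-5  # PFA from H1-L1 consistency in t_s
combined = p1 * p2  # valid: mean and difference in timing are independent
z = NormalDist().inv_cdf(1.0 - combined)  # equivalent one-sided Z-score

print(f"{combined:.1e}")  # 4.1e-08, as quoted in the abstract
print(f"{z:.2f}")
```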
|
A promising way to measure the distribution of matter on small scales (k ~ 10
hMpc^-1) is to use gravitational lensing of the Cosmic Microwave Background
(CMB). CMB-HD, a proposed high-resolution, low-noise millimeter survey over
half the sky, can measure the CMB lensing auto spectrum on such small scales
enabling measurements that can distinguish between a cold dark matter (CDM)
model and alternative models designed to solve problems with CDM on small
scales. However, extragalactic foregrounds can bias the CMB lensing auto
spectrum if left untreated. We present a foreground mitigation strategy that
provides a path to reduce the bias from two of the most dominant foregrounds,
the thermal Sunyaev-Zel'dovich effect (tSZ) and the Cosmic Infrared Background
(CIB). Given the level of realism included in our analysis, we find that the
tSZ alone and the CIB alone bias the lensing auto spectrum by 0.6 sigma and 1.1
sigma respectively, in the lensing multipole range of L in [5000,20000] for a
CMB-HD survey; combined, these foregrounds yield a bias of only 1.3 sigma.
Including these foregrounds, we also find that a CMB-HD survey can distinguish
between a CDM model and a 10^-22 eV FDM model at the 5 sigma level. These
results provide an important step in demonstrating that foreground
contamination can be sufficiently reduced to enable a robust measurement of the
small-scale matter power spectrum with CMB-HD.
|
The US EPA and the WHO claim that PM2.5 is causal of all-cause deaths. Both
support and fund research on air quality and health effects. WHO funded a
massive systematic review and meta-analyses of air quality and health-effect
papers. 1,632 literature papers were reviewed and 196 were selected for
meta-analyses. The standard air components, particulate matter, PM10 and PM2.5,
nitrogen dioxide, NO2, and ozone, were selected as causes and all-cause and
cause-specific mortalities were selected as outcomes. A claim was made for
PM2.5 and all-cause deaths, with a risk ratio of 1.0065 and confidence limits of
1.0044 to 1.0086. There is a need to evaluate the reliability of this causal
claim. Based on a p-value plot and discussion of several forms of bias, we
conclude that the association is not causal.
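As a quick plausibility check on the quoted numbers, the implied z-statistic can be recovered from the risk ratio and its confidence limits, assuming the conventional normal approximation on the log scale (an assumption of this sketch, not a statement from the review):

```python
import math

rr, lo, hi = 1.0065, 1.0044, 1.0086  # quoted risk ratio and 95% CI
# Under the usual log-normal approximation, the CI half-width on the
# log scale equals 1.96 standard errors.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(rr) / se
print(round(z, 1))  # ~6.1: nominally significant, but bias is the issue
```

A large nominal z-statistic says nothing about the biases (e.g. selective reporting) that the p-value-plot analysis targets.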
|
In robotics, motion is often described from an external perspective, i.e.,
obstacle motion is given mathematically with respect to a specific (often
inertial) reference frame. In the current work, we propose to describe robotic
motion with respect to the robot itself. Similar to how we give instructions to
each other (go straight, then after a few meters move left, then take a sharp
turn right), we give the instructions to a robot
as a relative rotation. We first introduce an obstacle avoidance framework that
allows avoiding star-shaped obstacles while trying to stay close to an initial
(linear or nonlinear) dynamical system. The framework of the local rotation is
extended to motion learning. Automated clustering defines regions of local
stability, for which the precise dynamics are individually learned. The
framework has been applied to the LASA-handwriting dataset and shows promising
results.
|
Much research in the last two decades has focused on Virtual Topology
Reconfiguration (VTR) problem. However, most of the proposed methods either have
low controllability, or the analysis of a control parameter is limited to
empirical study. In this paper, we present a highly tunable Virtual Topology
(VT) controller. First, we analyze the controllability of two previously
proposed VTR algorithms: a heuristic method and a neural-network-based method.
Then we present insights on how to transform these VTR methods to their tunable
versions. To benefit from the controllability, an optimality analysis of the
control parameter is needed. In the second part of the paper, through a
probabilistic analysis we find an optimal parameter for the neural-network-based
method. We validate our analysis through simulations. We propose this
highly tunable method as a new VTR algorithm.
|
Spoken language identification (LID) technologies have improved in recent
years from discriminating largely distinct languages to discriminating highly
similar languages or even dialects of the same language. One aspect that has
been mostly neglected, however, is discrimination of languages for multilingual
speakers, despite being a primary target audience of many systems that utilize
LID technologies. As we show in this work, LID systems can have a high average
accuracy for most combinations of languages while greatly underperforming for
others when accented speech is present. We address this by using
coarser-grained targets for the acoustic LID model and integrating its outputs
with interaction context signals in a context-aware model to tailor the system
to each user. This combined system achieves an average 97% accuracy across all
language combinations while improving worst-case accuracy by over 60% relative
to our baseline.
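One minimal way to picture the integration of acoustic LID outputs with interaction-context signals is a Bayes-style reweighting of the acoustic posteriors by a per-user language prior. This is only an illustrative sketch with made-up numbers; the paper's context-aware model is learned, not a fixed rule:

```python
def contextual_lid(acoustic_posteriors, user_prior):
    """Reweight acoustic LID posteriors by a per-user language prior
    (e.g. derived from past interactions) and renormalize."""
    scores = {lang: p * user_prior.get(lang, 1e-6)
              for lang, p in acoustic_posteriors.items()}
    total = sum(scores.values())
    return {lang: s / total for lang, s in scores.items()}

# Accented speech from a bilingual user: the acoustic model slightly
# favours Hindi, but the user's interaction history favours English.
acoustic = {"en": 0.45, "hi": 0.55}
prior = {"en": 0.8, "hi": 0.2}
combined = contextual_lid(acoustic, prior)
print(max(combined, key=combined.get))  # en
```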
|
Conventional video segmentation methods often rely on temporal continuity to
propagate masks. Such an assumption suffers from issues like drifting and
inability to handle large displacement. To overcome these issues, we formulate
an effective mechanism to prevent the target from being lost via adaptive
object re-identification. Specifically, our Video Object Segmentation with
Re-identification (VS-ReID) model includes a mask propagation module and a ReID
module. The former module produces an initial probability map by flow warping
while the latter module retrieves missing instances by adaptive matching. With
these two modules iteratively applied, our VS-ReID records a global mean
(Region Jaccard and Boundary F measure) of 0.699, the best performance in 2017
DAVIS Challenge.
|
After about a century since the first attempts by Bohr, the interpretation of
quantum theory is still a field with many open questions. In this article a new
interpretation of quantum theory is suggested, motivated by philosophical
considerations. Based on the findings that the 'weirdness' of quantum theory
can be understood to derive from a vanishing distinguishability of
indiscernible particles, and the observation that a similar vanishing
distinguishability is found for bundle theories in philosophical ontology, the
claim is made that quantum theory can be interpreted in an intelligible way by
positing a bundle-theoretic view of objective idealism instead of materialism
as the underlying fundamental nature of reality.
|
In recent experiments [M. Dubois, B. Dem\'e, T. Gulik-Krzywicki, J.-C.
Dedieu, C. Vautrin, S. D\'esert, E. Perez, and T. Zemb, Nature (London) Vol.
411, 672 (2001)] the spontaneous formation of hollow bilayer vesicles with
polyhedral symmetry has been observed. On the basis of the experimental
phenomenology it was suggested [M. Dubois, V. Lizunov, A. Meister, T.
Gulik-Krzywicki, J. M. Verbavatz, E. Perez, J. Zimmerberg, and T. Zemb, Proc.
Natl. Acad. Sci. U.S.A. Vol. 101, 15082 (2004)] that the mechanism for the
formation of bilayer polyhedra is minimization of elastic bending energy.
Motivated by these experiments, we study the elastic bending energy of
polyhedral bilayer vesicles. In agreement with experiments, and provided that
excess amphiphiles exhibiting spontaneous curvature are present in sufficient
quantity, we find that polyhedral bilayer vesicles can indeed be energetically
favorable compared to spherical bilayer vesicles. Consistent with experimental
observations we also find that the bending energy associated with the vertices
of bilayer polyhedra can be locally reduced through the formation of pores.
However, the stabilization of polyhedral bilayer vesicles over spherical
bilayer vesicles relies crucially on molecular segregation of excess
amphiphiles along the ridges rather than the vertices of bilayer polyhedra.
Furthermore, our analysis implies that, contrary to what has been suggested on
the basis of experiments, the icosahedron does not minimize elastic bending
energy among arbitrary polyhedral shapes and sizes. Instead, we find that, for
large polyhedron sizes, the snub dodecahedron and the snub cube both have lower
total bending energies than the icosahedron.
|
Matrix product states play an important role in quantum information theory to
represent states of many-body systems. They can be seen as low-dimensional
subvarieties of a high-dimensional tensor space. In these notes, we consider
two variants: homogeneous matrix product states and uniform matrix product
states. Studying the linear spans of these varieties leads to a natural
connection with invariant theory of matrices. For homogeneous matrix product
states, a classical result on polynomial identities of matrices leads to a
formula for the dimension of the linear span, in the case of 2x2 matrices.
These notes are based partially on a talk given by the author at the
University of Warsaw during the thematic semester "AGATES: Algebraic Geometry
with Applications to TEnsors and Secants", and partially on further research
done during the semester. This is still a preliminary version; an updated
version will be uploaded over the course of 2023.
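The homogeneous matrix product states discussed above assign to each string of physical indices the trace of a matrix product, T(i_1, ..., i_n) = tr(A_{i_1} ... A_{i_n}). A toy evaluation with 2x2 matrices (chosen arbitrarily for illustration) makes the cyclic symmetry of such states visible:

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mps_amplitude(matrices, indices):
    """T(i1, ..., in) = tr(A_{i1} A_{i2} ... A_{in}) for a homogeneous
    matrix product state with periodic boundary conditions."""
    m = matrices[indices[0]]
    for i in indices[1:]:
        m = matmul(m, matrices[i])
    return sum(m[i][i] for i in range(len(m)))

A = [[[1, 1], [0, 1]],   # matrix for physical index 0
     [[1, 0], [1, 1]]]   # matrix for physical index 1
# Cyclic invariance of the trace: T(0,0,1) == T(0,1,0) == T(1,0,0).
print(mps_amplitude(A, (0, 0, 1)), mps_amplitude(A, (0, 1, 0)))  # 4 4
```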
|
We discuss the early evolution of ultrarelativistic heavy-ion collisions
within a multi-fluid dynamical model. In particular, we show that due to the
finite mean-free path of the particles compression shock waves are smeared out
considerably as compared to the one-fluid limit. Also, the maximal energy
density of the baryons is much lower. We discuss the time scale of kinetic
equilibration of the baryons in the central region and its relevance for
directed flow. Finally, thermal emission of direct photons from the fluid of
produced particles is calculated within the three-fluid model and two other
simple expansion models. It is shown that the transverse momentum and rapidity
spectra of photons give a clue to the cooling law and the early rapidity
distribution of the photon source.
|
Face recognition has been one of the most relevant and explored fields of
Biometrics. In real-world applications, face recognition methods usually must
deal with scenarios where not all probe individuals were seen during the
training phase (open-set scenarios). Therefore, open-set face recognition is a
subject of increasing interest as it deals with identifying individuals in a
space where not all faces are known in advance. This is useful in several
applications, such as access authentication, on which only a few individuals
that have been previously enrolled in a gallery are allowed. The present work
introduces a novel approach towards open-set face recognition focusing on small
galleries and on enrollment detection rather than identity retrieval. A Siamese
Network architecture is proposed to learn a model to detect if a face probe is
enrolled in the gallery based on a verification-like approach. Promising
results were achieved for small galleries on experiments carried out on
Pubfig83, FRGCv1 and LFW datasets. State-of-the-art methods like HFCN and HPLS
were outperformed on FRGCv1. Besides, a new evaluation protocol is introduced
for experiments in small galleries on LFW.
|
M. Kobayashi introduced a notion of duality of weight systems. We tone this
notion down slightly to a notion called coupling. We show that coupling induces
a relation between the reduced zeta functions of the monodromy operators of the
corresponding singularities generalizing an observation of K. Saito concerning
Arnold's strange duality. We show that the weight systems of the mirror
symmetric pairs of M. Reid's list of 95 families of Gorenstein K3 surfaces in
weighted projective 3-spaces are strongly coupled. This includes Arnold's
strange duality where the corresponding weight systems are strongly dual in
Kobayashi's original sense. We show that the same is true for the extension of
Arnold's strange duality found by the author and C. T. C. Wall.
|
We reconsider the hypothesis of a vast cometary reservoir surrounding the
Solar System - the Oort cloud of comets - within the framework of Milgromian
Dynamics (MD or MOND). For this purpose we built a numerical model of the cloud
assuming QUMOND, a modified gravity theory of MD. In the modified gravity
versions of MD, the internal dynamics of a system is influenced by the external
gravitational field in which the system is embedded, even when this external
field is constant and uniform, a phenomenon dubbed the external field effect
(EFE). Adopting the popular pair $\nu(x)=[1-\exp(-x^{1/2})]^{-1}$ for the MD
interpolating function and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ for the MD
acceleration scale, we found that the observationally inferred Milgromian cloud
of comets is much more radially compact than its Newtonian counterpart. The
comets of the Milgromian cloud stay away from the zone where the Galactic tide
can torque their orbits significantly. However, this does not need to be an
obstacle for the injection of the comets into the inner solar system as the EFE
can induce significant change in perihelion distance during one revolution of a
comet around the Sun. Adopting constraints on different interpolating function
families and a revised value of $a_{0}$ (provided recently by the Cassini
spacecraft), the aforementioned qualitative results no longer hold, and, in
conclusion, the Milgromian cloud is very similar to the Newtonian in its
overall size, binding energies of comets and hence the operation of the
Jupiter-Saturn barrier. However, EFE torquing of perihelia still plays a
significant role in the inner parts of the cloud. Consequently, Sedna-like
orbits and orbits of large semi-major axis Centaurs are easily comprehensible
in MD. In MD, they both belong to the same population, just in different modes
of their evolution.
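The interpolating-function behaviour underlying these results can be sketched numerically. The snippet below evaluates the pair ν(x) = [1 − exp(−x^{1/2})]^{−1} quoted above in a point-mass, spherically symmetric simplification (an assumption of this sketch; the paper's model integrates full QUMOND orbits):

```python
import math

def nu(x):
    """The MD interpolating function adopted in the abstract:
    nu(x) = [1 - exp(-x**0.5)]**(-1)."""
    return 1.0 / (1.0 - math.exp(-math.sqrt(x)))

a0 = 1.2e-10  # MD acceleration scale, m s^-2

def md_accel(g_newton):
    """Effective acceleration nu(g_N / a0) * g_N for a point mass
    (spherically symmetric simplification of QUMOND)."""
    return nu(g_newton / a0) * g_newton

# Strong-field regime (g_N >> a0): the Newtonian result is recovered.
print(md_accel(1e-6) / 1e-6)
# Weak-field regime (g_N << a0): the acceleration is strongly boosted,
# approaching the deep-MOND limit sqrt(g_N * a0).
print(md_accel(1e-14) / 1e-14)
```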
|
The formation of a 3D network composed of free standing and interconnected Pt
nanowires is achieved by a two-step method, consisting of conformal deposition
of Pt by atomic layer deposition (ALD) on a forest of carbon nanotubes and
subsequent removal of the carbonaceous template. Detailed characterization of
this novel 3D nanostructure was carried out by transmission electron microscopy
(TEM) and electrochemical impedance spectroscopy (EIS). These characterizations
showed that this pure 3D nanostructure of platinum is self-supported and offers
an enhancement of the electrochemically active surface area by a factor of 50.
|
Religious adherence can be considered as a degree of freedom, in a
statistical physics sense, for a human agent belonging to a population. The
distribution, performance and lifetime of religions can thus be studied with
heterogeneous interacting agent modeling in mind. We present a comprehensive
analysis of the evolutions of 58 so-called religions (to be better defined in
the main text), as measured through their number of adherents between 1900 and
2000, with data taken from the World Christian Encyclopedia: 40 are considered
to be ''presently growing'' cases, including 11 turnovers in the XX century; 18
are ''presently decaying'', among which 12 are found to have had a recent
maximum, in the XIX or the XX century. The Avrami-Kolmogorov
differential equation which usually describes solid state transformations, like
crystal growth, is used in each case in order to obtain the preferential
attachment parameter introduced previously. It is often found close to unity,
indicating a smooth evolution. However large values suggest the occurrence of
extreme cases which we conjecture are controlled by so called external fields.
A few cases indicate the likelihood of a detachment process. We discuss
different growing and decaying religions, and illustrate various fits. Some
cases seem to indicate a lack of reliability of the data; others, a departure
from the Avrami law. We point out two difficulties in the analysis: (i) the
''precise'' original time of appearance of a religion, and (ii) the time of
its maximum, both pieces of information being necessary for reliably
integrating any evolution equation. Moreover, the Avrami evolution equation
could surely be improved, in particular, and somewhat obviously, for the
decaying-religion cases.
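For concreteness, the classical Avrami-Kolmogorov transformed fraction X(t) = 1 − exp(−k t^n) can be fitted by linearizing the Avrami plot ln(−ln(1 − X)) = ln k + n ln t. This sketch uses the classical form with synthetic data only; the paper's version carries an additional preferential-attachment parameter:

```python
import math

def avrami(t, k, n):
    """Classical Avrami-Kolmogorov transformed fraction:
    X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t**n)

def fit_avrami(ts, xs):
    """Estimate (k, n) by linear least squares on the Avrami plot
    ln(-ln(1 - X)) = ln k + n * ln t."""
    ys = [math.log(-math.log(1.0 - x)) for x in xs]
    ls = [math.log(t) for t in ts]
    m = len(ts)
    lbar = sum(ls) / m
    ybar = sum(ys) / m
    num = sum((l - lbar) * (y - ybar) for l, y in zip(ls, ys))
    den = sum((l - lbar) ** 2 for l in ls)
    n = num / den
    k = math.exp(ybar - n * lbar)
    return k, n

# Synthetic "adherent fraction" data generated from known parameters.
ts = [1, 2, 4, 8, 16, 32]
xs = [avrami(t, 0.05, 1.1) for t in ts]
k, n = fit_avrami(ts, xs)
print(round(k, 3), round(n, 3))  # recovers 0.05 and 1.1
```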
|
We present a general result giving us families of incomplete and boundedly
complete families of discrete distributions. For such families, the classes of
unbiased estimators of zero with finite variance and of parametric functions
which will have uniformly minimum variance unbiased estimators with finite
variance are explicitly characterized. The general result allows us to
construct a large number of families of incomplete and boundedly complete
families of discrete distributions. Several new examples of such families are
described.
|
The main objective of this paper is to provide a tool for performing path
planning at the servo level of a mobile robot. The ability to perform, in a
provably correct manner, such a complex task at the servo level can lead to a
large increase in the speed of operation, low energy consumption and high
quality of response. Planning has been traditionally limited to the high level
controller of a robot. The guidance velocity signal from this stage is usually
converted to a control signal using what is known as an electronic speed
controller (ESC). This paper demonstrates the ability of the harmonic potential
field (HPF) approach to generate a provably correct, constrained, well behaved
trajectory and control signal for a rigid, nonholonomic robot in a stationary,
cluttered environment. It is shown that the HPF based, servo level planner can
address a large number of challenges facing planning in a realistic situation.
The suggested approach migrates the rich and provably correct properties of the
solution trajectories from an HPF planner to those of the robot. This is
achieved using a synchronizing control signal whose aim is to align the
velocity of the robot in its local coordinates, with that of the gradient of
the HPF. The link between the two is made possible by representing the robot
using what the paper terms separable form. The context-sensitive and
goal-oriented control signal used to steer the robot is demonstrated to be well
behaved and robust in the presence of actuator noise, saturation and
uncertainty in the parameters. The approach is developed, proofs of correctness
are provided and the capabilities of the scheme are demonstrated using
simulation results.
|
In this paper, we introduce a novel approach for diagnosis of Parkinson's
Disease (PD) based on deep Echo State Networks (ESNs). The identification of PD
is performed by analyzing the whole time-series collected from a tablet device
during the sketching of spiral tests, without the need for feature extraction
and data preprocessing. We evaluated the proposed approach on a public dataset
of spiral tests. The results of the experimental analysis show that DeepESNs
perform significantly better than a shallow ESN model. Overall, the proposed
approach obtains state-of-the-art results in the identification of PD on this
kind of temporal data.
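A single reservoir layer of the kind stacked in a deep ESN can be sketched in a few lines. This is an illustrative shallow reservoir with arbitrary hyperparameters, not the paper's architecture; in a DeepESN several such layers are stacked and only a linear readout is trained:

```python
import math
import random

def esn_states(inputs, n_res=20, seed=0, scale=0.9):
    """Run a minimal echo state network reservoir over a 1-D input
    series and return the sequence of reservoir states:
        x(t+1) = tanh(W x(t) + W_in u(t+1)).
    W and W_in are random and stay fixed; only a (linear) readout
    on top of the states would be trained."""
    rng = random.Random(seed)
    # Dividing by n_res keeps the recurrent weights small enough for
    # the echo state property to hold in practice.
    W = [[rng.uniform(-1, 1) * scale / n_res for _ in range(n_res)]
         for _ in range(n_res)]
    W_in = [rng.uniform(-1, 1) for _ in range(n_res)]
    x = [0.0] * n_res
    states = []
    for u in inputs:
        pre = [sum(W[i][j] * x[j] for j in range(n_res)) + W_in[i] * u
               for i in range(n_res)]
        x = [math.tanh(p) for p in pre]
        states.append(x)
    return states

# A spiral-test-like series (here just a sine sweep as a stand-in).
series = [math.sin(0.1 * t) for t in range(100)]
states = esn_states(series)
print(len(states), len(states[0]))  # 100 20
```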
|
The term \emph{moderate deviations} is often used in the literature to mean a
class of large deviation principles that, in some sense, fills the gap between
a convergence in probability to zero (governed by a large deviation principle)
and a weak convergence to a centered Normal distribution. We talk about
\emph{noncentral moderate deviations} when the weak convergence is towards a
non-Gaussian distribution. In this paper we present noncentral moderate
deviation results for two fractional Skellam processes in the literature (see
Kerss, Leonenko and Sikorskii, 2014). We also establish that, for the
fractional Skellam process of type 2 (for which we can refer to the recent results
for compound fractional Poisson processes in Beghin and Macci (2022)), the
convergences to zero are usually faster because we can prove suitable
inequalities between rate functions.
|
Quantum Key Distribution (QKD) is a technique enabling provable secure
communication but faces challenges in device characterization, posing potential
security risks. Device-Independent (DI) QKD protocols overcome this issue by
making minimal device assumptions but are limited in distance because they
require high detection efficiencies, which refer to the ability of the
experimental setup to detect quantum states. It is thus desirable to find
quantum key distribution protocols that are based on realistic assumptions on
the devices as well as implementable over long distances. In this work, we
consider a one-sided DI QKD scheme with two measurements per party and show
that it is secure against coherent attacks up to detection efficiencies greater
than 50.1% specifically on the untrusted side. This is almost the theoretical
limit achievable for protocols with two untrusted measurements. Interestingly,
we also show that, by placing the source of states close to the untrusted side,
our protocol is secure over distances comparable to standard QKD protocols.
|
We use chiral perturbation theory to compute the effective nucleon propagator
in an expansion about low density in the chiral limit. We neglect four-nucleon
interactions and focus on pion exchange. Evaluating the nucleon self-energy on
its mass shell to leading order, we show that the effective nucleon mass
increases by a small amount. We discuss the relevance of our results to the
structure of compact stars.
|
Query optimization is one of the most challenging problems in database
systems. Despite the progress made over the past decades, query optimizers
remain extremely complex components that require a great deal of hand-tuning
for specific workloads and datasets. Motivated by this shortcoming and inspired
by recent advances in applying machine learning to data management challenges,
we introduce Neo (Neural Optimizer), a novel learning-based query optimizer
that relies on deep neural networks to generate query execution plans. Neo
bootstraps its query optimization model from existing optimizers and continues
to learn from incoming queries, building upon its successes and learning from
its failures. Furthermore, Neo naturally adapts to underlying data patterns and
is robust to estimation errors. Experimental results demonstrate that Neo, even
when bootstrapped from a simple optimizer like PostgreSQL, can learn a model
that offers similar performance to state-of-the-art commercial optimizers, and
in some cases even surpasses them.
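The plan-search idea can be caricatured as greedy bottom-up join enumeration guided by a learned cost estimate. Here the "model" is a stub lookup table standing in for Neo's value network, and all table names and cardinalities below are hypothetical:

```python
from itertools import combinations

def greedy_join_order(tables, est):
    """Greedy bottom-up join ordering guided by a cost estimate:
    repeatedly join the pair of sub-plans the model scores cheapest,
    returning the nested join tree."""
    plans = {frozenset([t]): t for t in tables}
    while len(plans) > 1:
        a, b = min(combinations(plans, 2),
                   key=lambda ab: est[ab[0] | ab[1]])
        tree = (plans.pop(a), plans.pop(b))
        plans[a | b] = tree
    return next(iter(plans.values()))

# Stub "learned" estimates for every candidate join result.
est = {frozenset({"a", "b"}): 100, frozenset({"a", "c"}): 10,
       frozenset({"b", "c"}): 1000, frozenset({"a", "b", "c"}): 50}
print(greedy_join_order(["a", "b", "c"], est))  # ('b', ('a', 'c'))
```

A real optimizer searches over far more than join order (operators, indexes), but the same pattern — a learned value function steering plan construction — is the core idea.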
|
The first linear global electromagnetic gyrokinetic particle simulation on
the excitation of toroidicity induced Alfven eigenmode (TAE) by energetic
particles is reported. With an increase in the energetic particle pressure, the
TAE frequency moves down into the lower continuum.
|