Basis state shift is central to many quantum algorithms, most notably the
quantum walk. Efficient implementations are of major importance for achieving a
quantum speedup for computational applications. We optimize the state shift
algorithm by incorporating the shift in different directions in parallel. This
provides a significant reduction in the depth of the quantum circuit in
comparison to the currently known methods, giving a linear scaling in the
number of gates versus working qubits in contrast to the quadratic scaling of
the state-of-the-art method based on the quantum Fourier transform. For a
one-dimensional array of size $2^n$ with $n > 4$, we derive a total of
$15n + 74$ two-qubit $CX$ gates in the parallel circuit, using
$2n-2$ qubits including an ancilla register for the decomposition of
multi-controlled gates. We focus on the one-dimensional and periodic shift, but
note that the method can be extended to more complex cases.
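As a quick illustration of the stated scaling, the following minimal sketch (a hypothetical helper, not part of the paper) simply evaluates the quoted resource counts for a given register size:

```python
def parallel_shift_resources(n: int) -> dict:
    """Evaluate the resource counts quoted above for a size-2^n array, n > 4:
    15n + 74 two-qubit CX gates and 2n - 2 qubits in total (working register
    plus the ancilla register used to decompose multi-controlled gates)."""
    if n <= 4:
        raise ValueError("the quoted formula is stated for n > 4")
    return {"cx_gates": 15 * n + 74, "total_qubits": 2 * n - 2}

# Example: a one-dimensional array of size 2^10.
print(parallel_shift_resources(10))  # {'cx_gates': 224, 'total_qubits': 18}
```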
|
We find the distribution function f(E) for dark matter (DM) halos in galaxies
and the corresponding equation of state from the (empirical) DM density
profiles derived from observations. For DM in galaxies we solve the analogue
of the Eddington equation originally used for the gas of stars in globular
clusters. The observed density profiles are a good realistic starting point, and
the distribution functions derived from them are realistic. We do not make any
assumption about the DM nature, the methods developed here apply to any DM
kind, though all results are consistent with Warm DM. With these methods we
find: (i) Cored density profiles behaving quadratically at small distances,
rho(r) -> rho(0) - K r^2 as r -> 0, produce distribution functions which are finite
and positive at the halo center, while cusped density profiles always produce
divergent distribution functions at the center. (ii) Cored density profiles
produce approximate thermal Boltzmann distribution functions for r < 3 r_h
where r_h is the halo radius. (iii) Analytic expressions for the dispersion
velocity and the pressure are derived yielding an ideal DM gas equation of
state with local temperature T(r) = m v^2(r)/3. T(r) turns out to be constant in
the same region where the distribution function is thermal and exhibits the
same temperature to within a percent. The self-gravitating DM gas can thermalize
despite being collisionless because it is an ergodic system. (iv) The DM halo
can be consistently considered at local thermal equilibrium with: (a) a
constant temperature T(r) = T_0 for r < 3 \; r_h, (b) a space dependent
temperature T(r) for 3 r_h < r < R_{virial}, which slowly decreases with r.
That is, the DM halo is realistically a collisionless self-gravitating thermal
gas for r < R_{virial}. (v) T(r) outside the halo radius nicely follows the
decrease of the circular velocity squared.
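For reference, the standard Eddington inversion for an isotropic, spherically symmetric self-gravitating system, of which the equation solved here for DM is the analogue (textbook notation, not necessarily that of the paper), reads
$$ f(E) = \frac{1}{\sqrt{8}\,\pi^2}\left[\int_0^{E}\frac{d^2\rho}{d\Psi^2}\,\frac{d\Psi}{\sqrt{E-\Psi}} + \frac{1}{\sqrt{E}}\left(\frac{d\rho}{d\Psi}\right)_{\Psi=0}\right], $$
where $\Psi$ is the relative gravitational potential and $E$ the relative energy.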
|
We introduce two tractable analytical models to describe dynamic effects at
resonant light scattering by subwavelength particles. One of them is based on
generalization of the temporal coupled-mode theory, and the other employs the
normal mode approach. We show that sharp variations in the envelope of the
incident pulse may initiate unusual, counterintuitive dynamics of the
scattering associated with interference of modes with fast and slow relaxation.
To exhibit the power of the models, we apply them to explain the dynamic light
scattering of a square-envelope pulse by an infinite circular cylinder made of
$GaP$, when the pulse carrier frequency lies in the vicinity of the destructive
interference at the Fano resonances. We observe and explain intense sharp
spikes in scattering cross section just behind the leading and trailing edges
of the incident pulse. The latter occurs when the incident pulse is over and is
explained by the electromagnetic energy released in the particle at the
previous scattering stages. The accuracy of the models is checked by comparison
with the results of direct numerical integration of the complete set of
Maxwell's equations and turns out to be very high. The models' advantages and
disadvantages are revealed, and the ways to apply them to other types of
dynamic resonant scattering are discussed.
|
We continue to develop Bootstrability -- a method merging Integrability and
Conformal Bootstrap to extract CFT data in integrable conformal gauge theories
such as $\mathcal{N}$=4 SYM. In this paper, we consider the 1D defect CFT
defined on a $\frac{1}{2}$-BPS Wilson line in the theory, whose
non-perturbative spectrum is governed by the Quantum Spectral Curve (QSC). In
addition, we use that the deformed setup of a cusped Wilson line is also
controlled by the QSC. In terms of the defect CFT, this translates into two
nontrivial relations connecting integrated 4-point correlators to cusp spectral
data, such as the Bremsstrahlung and Curvature functions -- known analytically
from the QSC. Combining these new constraints and the spectrum of the $10$
lowest-lying states with the Numerical Conformal Bootstrap, we obtain very
sharp rigorous numerical bounds for the structure constant of the first
non-protected state, determining this observable to seven digits of precision
in the intermediate 't Hooft coupling region
$\frac{\sqrt{\lambda}}{4\pi}\sim 1$, with the error decreasing quickly at large
't Hooft coupling. Furthermore, for the same structure constant we obtain a
$4$-loop analytic result at weak coupling. We also present results for excited
states.
|
Based on the vector meson dominance assumption, a Hamiltonian model has been
developed to investigate the $J/\Psi$ photo-production reaction on the nucleon by
using the $J/\Psi$-nucleon potential extracted from a lattice QCD calculation
of Phys. Rev. D{\bf 82}, 091501 (2010). It is found that the predicted total
cross sections are comparable to the recent data of $J/\Psi$ photo-production
reaction from Jefferson Laboratory. The model is then extended to include the
two-gluon exchange amplitude modeled by Donnachie and Landshoff within Regge
phenomenology. The resulting Pomeron-LQCD model can then explain the data up to
an invariant mass of $W=$ 300 GeV. Future improvements needed to reduce the
uncertainties of the predictions are discussed. The need for an accurate
extraction of the $J/\Psi$-N potential at short distances from LQCD is illustrated.
|
We report on singular and nonsingular flat bands in a Sierpinski fractal-like
photonic lattice. We demonstrate that the lowest two bands, being isolated
and degenerate due to geometrical frustration, are nonsingular and thus can be
spanned by a complete set of compact localized states. We experimentally show
that these states propagate without diffraction in the photonic lattice. Our results
reveal the interplay between geometrical frustration, degenerate flat bands and
compact localized states in a single photonic lattice, and pave the way to
photonic spin liquid ground states.
|
Smart contracts are programs that execute inside blockchains such as Ethereum
to manipulate digital assets. Since bugs in smart contracts may lead to
substantial financial losses, there is considerable interest in formally
proving their correctness. However, the specification and verification of smart
contracts faces challenges that do not arise in other application domains.
Smart contracts frequently interact with unverified, potentially adversarial
outside code, which substantially weakens the assumptions that formal analyses
can (soundly) make. Moreover, the core functionality of smart contracts is to
manipulate and transfer resources; describing this functionality concisely
requires dedicated specification support. Current reasoning techniques do not
fully address these challenges, being restricted in their scope or
expressiveness (in particular, in the presence of re-entrant calls), and
offering limited means of expressing the resource transfers a contract
performs.
In this paper, we present a novel specification methodology tailored to the
domain of smart contracts. Our specification constructs and associated
reasoning technique are the first to enable: (1) sound and precise reasoning in
the presence of unverified code and arbitrary re-entrancy, (2) modular
reasoning about collaborating smart contracts, and (3) domain-specific
specifications based on resources and resource transfers, which allow
expressing a contract's behavior in intuitive and concise ways and exclude
typical errors by default. We have implemented our approach in 2vyper, an
SMT-based automated verification tool for Ethereum smart contracts written in
the Vyper language, and demonstrated its effectiveness in succinctly capturing
and verifying strong correctness guarantees for real-world contracts.
|
Kouveliotou et al. (1993) recently confirmed that gamma-ray bursts are
bimodal in duration. In this paper we compute the statistical properties of the
short ($\le 2$~s) and long ($>2$~s) bursts using a method of analysis that
makes no assumption regarding the location of the bursts, whether in the Galaxy
or at a cosmological distance. We find that the 64 ms channel on BATSE is more
sensitive to short bursts and the 1024 ms channel is more sensitive to long
bursts. We show that all the currently available data are consistent with the
simple hypothesis that both short and long bursts have the same spatial
distribution and that within each population the sources are standard candles.
The rate of short bursts is $\sim 0.4$ of the rate of long bursts. Although the
durations of short and long gamma-ray bursts span several orders of magnitude
and the total energy of a typical short burst is smaller than that of a typical
long burst by a factor of $\sim 20$, surprisingly the peak luminosities of the
two kinds of bursts are equal to within a factor of $\sim 2$.
|
In this paper, we study the relation between $m$-strongly Gorenstein
projective (resp. injective) modules and $n$-strongly Gorenstein projective
(resp. injective) modules whenever $m \neq n$, and the homological behavior of
$n$-strongly Gorenstein projective (resp. injective) modules. We introduce the
notion of $n$-strongly Gorenstein flat modules. Then we study the homological
behavior of $n$-strongly Gorenstein flat modules, and the relation between
these modules and $n$-strongly Gorenstein projective (resp. injective) modules.
|
Thermonuclear explosions of Type Ia supernovae (SNIa) involve turbulent
deflagrations, detonations, and possibly a deflagration-to-detonation
transition. A phenomenological delayed detonation model of SNIa successfully
explains many observational properties of SNIa including monochromatic light
curves, spectra, brightness-decline and color-decline relations. Observed
variations among SNIa are explained as a result of the varying nickel mass
synthesised in an explosion of a Chandrasekhar-mass C/O white dwarf. Based on
theoretical models of SNIa, the value of the Hubble constant H_0 \simeq
67 km/s/Mpc was determined without the use of secondary distance indicators. The
cause for the nickel mass variations in SNIa is still debated. It may be a
variation of the initial C/O ratio in a supernova progenitor, rotation, or
other effects.
|
An S-Matrix ansatz is used to determine the mass and width of the Z boson, as
well as the contributions of gamma/Z interference and Z boson exchange to
fermion-pair production. For this purpose we use hadron and lepton-pair
production cross sections and lepton forward-backward asymmetries that have
been measured with the L3 detector at centre-of-mass energies between 87 GeV and
189 GeV.
|
We study sums of directed paths on a hierarchical lattice where each bond has
either a positive or negative sign with a probability $p$. Such path sums $J$
have been used to model interference effects by hopping electrons in the
strongly localized regime. The advantage of hierarchical lattices is that they
include path crossings, ignored by mean field approaches, while still
permitting analytical treatment. Here, we perform a scaling analysis of the
controversial ``sign transition'' using Monte Carlo sampling, and conclude that
the transition exists and is second order. Furthermore, we make use of exact
moment recursion relations to find that the moments $<J^n>$ always determine,
uniquely, the probability distribution $P(J)$. We also derive, exactly, the
moment behavior as a function of $p$ in the thermodynamic limit. Extrapolations
($n\to 0$) to obtain $<\ln J>$ for odd and even moments yield a new signal for
the transition that coincides with Monte Carlo simulations. Analysis of high
moments yields interesting ``solitonic'' structures that propagate as a function
of $p$. Finally, we derive the exact probability distribution for path sums $J$
up to length L=64 for all sign probabilities.
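A minimal Monte Carlo sketch of such a path-sum model, assuming the standard diamond hierarchical lattice recursion with branching two (two bonds in series per branch, two branches in parallel), so that $J' = J_1 J_2 + J_3 J_4$ and the path length doubles at every generation; the branching number and recursion form are assumptions, not taken from the text:

```python
import random

def sample_J(generations: int, p: float) -> int:
    """Sample one directed-path sum J on a diamond hierarchical lattice.

    Sketch assuming the recursion J' = J1*J2 + J3*J4 (two branches of two
    bonds), with elementary bonds equal to -1 with probability p and +1
    otherwise; the path length is 2**generations.
    """
    if generations == 0:
        return -1 if random.random() < p else 1
    j1, j2, j3, j4 = (sample_J(generations - 1, p) for _ in range(4))
    return j1 * j2 + j3 * j4

# Monte Carlo estimate of the fraction of positive path sums at length L = 64.
p, n_samples = 0.05, 1000
positive = sum(sample_J(6, p) > 0 for _ in range(n_samples))
print(f"P(J > 0) ~ {positive / n_samples:.3f} at p = {p}")
```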
|
The rapid advancement in self-supervised learning (SSL) has highlighted its
potential to leverage unlabeled data for learning rich visual representations.
However, the existing SSL techniques, particularly those employing different
augmentations of the same image, often rely on a limited set of simple
transformations that are not representative of real-world data variations. This
constrains the diversity and quality of samples, which leads to sub-optimal
representations. In this paper, we introduce a novel framework that enriches
the SSL paradigm by utilizing generative models to produce semantically
consistent image augmentations. By directly conditioning generative models on a
source image representation, our method enables the generation of diverse
augmentations while maintaining the semantics of the source image, thus
offering a richer set of data for self-supervised learning. Our extensive
experimental results on various SSL methods demonstrate that our framework
significantly enhances the quality of learned visual representations by up to
10\% Top-1 accuracy in downstream tasks. This research demonstrates that
incorporating generative models into the SSL workflow opens new avenues for
exploring the potential of synthetic data. This development paves the way for
more robust and versatile representation learning techniques.
|
Two formulas for the multiplicity of the fiber cone
$F(I)=\oplus_{n=0}^{\infty} I^n/\m I^n$ of an $\m$-primary ideal of a
$d$-dimensional Cohen-Macaulay local ring $(R,\m)$ are derived in terms of the
mixed multiplicity $e_{d-1}(\m | I),$ the multiplicity $e(I)$ and superficial
elements. As a consequence, the Cohen-Macaulay property of $F(I)$ when $I$ has
minimal mixed multiplicity or almost minimal mixed multiplicity is
characterized in terms of reduction number of $I$ and lengths of certain
ideals. We also characterize the Cohen-Macaulay and Gorenstein properties of fiber
cones of $\m$-primary ideals with a $d$-generated minimal reduction $J$
satisfying (i) $\ell(I^2/JI)=1$ or (ii) $\ell(I\m/J\m)=1.$
|
We show how to derive state-of-the-art unsupervised neural machine
translation systems from generatively pre-trained language models. Our method
consists of three steps: few-shot amplification, distillation, and
backtranslation. We first use the zero-shot translation ability of large
pre-trained language models to generate translations for a small set of
unlabeled sentences. We then amplify these zero-shot translations by using them
as few-shot demonstrations for sampling a larger synthetic dataset. This
dataset is distilled by discarding the few-shot demonstrations and then
fine-tuning. During backtranslation, we repeatedly generate translations for a
set of inputs and then fine-tune a single language model on both directions of
the translation task at once, ensuring cycle-consistency by swapping the roles
of gold monotext and generated translations when fine-tuning. By using our
method to leverage GPT-3's zero-shot translation capability, we achieve a new
state-of-the-art in unsupervised translation on the WMT14 English-French
benchmark, attaining a BLEU score of 42.1.
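A schematic sketch of the three steps, with the language pair fixed to French-English for concreteness; `generate` and `finetune` stand for a hypothetical interface to the pretrained language model and are not part of the paper:

```python
from typing import Callable, List, Tuple

Generate = Callable[[str], str]  # hypothetical hook into a pretrained LM

def few_shot_amplify(generate: Generate,
                     zero_shot_pairs: List[Tuple[str, str]],
                     monotext: List[str]) -> List[Tuple[str, str]]:
    """Steps 1-2: use a handful of zero-shot translations as few-shot
    demonstrations to sample a larger synthetic parallel dataset; the
    demonstrations are later discarded when distilling by fine-tuning."""
    demos = "\n".join(f"French: {f}\nEnglish: {e}" for f, e in zero_shot_pairs)
    return [(src, generate(f"{demos}\nFrench: {src}\nEnglish:").strip())
            for src in monotext]

def backtranslation_round(generate: Generate, finetune: Callable,
                          gold_en: List[str], gold_fr: List[str]) -> None:
    """Step 3: translate gold monotext in both directions, then fine-tune a
    single model on both directions at once, using the generated translation
    as source and the gold text as target (roles swapped for cycle-consistency)."""
    fr_to_en = [(generate(f"English: {en}\nFrench:").strip(), en) for en in gold_en]
    en_to_fr = [(generate(f"French: {fr}\nEnglish:").strip(), fr) for fr in gold_fr]
    finetune(fr_to_en + en_to_fr)  # (generated source, gold target) pairs
```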
|
We present new results of our program to systematically search for strongly
lensed galaxies in the Sloan Digital Sky Survey (SDSS) imaging data. In this
study six strong lens systems are presented which we have confirmed with
follow-up spectroscopy and imaging using the 3.5m telescope at the Apache Point
Observatory. Preliminary mass models indicate that the lenses are group-scale
systems with velocity dispersions ranging from 466 to 878 km s^{-1} at z=0.17-0.45,
which are strongly lensing source galaxies at z=0.4-1.4. Galaxy groups are a
relatively new mass scale just beginning to be probed with strong lensing. Our
sample of lenses roughly doubles the confirmed number of group-scale lenses in
the SDSS and complements ongoing strong lens searches in other imaging surveys
such as the CFHTLS (Cabanac et al. 2007). As our arcs were discovered in the
SDSS imaging data they are all bright ($r\lesssim22$), making them ideally
suited for detailed follow-up studies.
|
The Ethereum platform allows developers to implement and deploy applications
called Dapps onto the blockchain for public use through the use of smart
contracts. To execute code within a smart contract, a paid transaction must be
issued towards one of the functions that are exposed in the interface of a
contract. However, such a transaction is only processed once one of the miners
in the peer-to-peer network selects it, adds it to a block, and appends that
block to the blockchain. This creates a delay between transaction submission and
code execution. It is crucial for Dapp developers to be able to precisely
estimate when transactions will be processed, since this allows them to define
and provide a certain Quality of Service (QoS) level (e.g., 95% of the
transactions processed within 1 minute). However, the impact that different
factors have on these times has not yet been studied. Processing time
estimation services are used by Dapp developers to achieve predefined QoS. Yet,
these services offer minimal insights into what factors impact processing
times. Considering the vast amount of data that surrounds the Ethereum
blockchain, changes in processing times are hard for Dapp developers to
predict, making it difficult to maintain said QoS. In our study, we build
random forest models to understand the factors that are associated with
transaction processing times. We engineer several features that capture
blockchain internal factors, as well as gas pricing behaviors of transaction
issuers. By interpreting our models, we conclude that features surrounding gas
pricing behaviors are very strongly associated with transaction processing
times. Based on our empirical results, we provide Dapp developers with concrete
insights that can help them provide and maintain high levels of QoS.
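A minimal sketch of the kind of model interpretation described above, using scikit-learn; the feature names are illustrative placeholders, not the engineered features of the study:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative features: blockchain-internal factors and gas-pricing behaviour.
FEATURES = ["gas_price_gwei", "gas_price_percentile", "gas_limit",
            "pending_pool_size", "block_gas_used_ratio", "issuer_mean_gas_price"]

def fit_processing_time_model(df: pd.DataFrame) -> RandomForestRegressor:
    """Fit a random forest for transaction processing time and rank features."""
    X, y = df[FEATURES], df["processing_time_seconds"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    # Rank features by how strongly they are associated with processing times.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, score in sorted(zip(FEATURES, imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name:26s} {score:.3f}")
    return model
```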
|
We report a comparative Raman spectroscopic study of the
quasi-one-dimensional charge-density-wave systems \ab (A = K, Rb). The
temperature and polarization dependent experiments reveal charge-coupled
vibrational Raman features. The strongly temperature-dependent collective
amplitudon mode in the two materials differs by about 3 cm^{-1}, thus revealing the role
of the alkali atom. We discuss the observed vibrational features in terms of a
charge-density-wave ground state accompanied by a change in the crystal symmetry.
A frequency kink in some modes seen in \bb between T = 80 K and 100 K supports
the first-order lock-in transition, unlike \rb. The unusually sharp Raman
lines (limited by the instrumental response) at very low temperatures and their
temperature evolution suggest that the decay of the low-energy phonons is
strongly influenced by the presence of the temperature dependent charge density
wave gap.
|
Automatic organ segmentation is an important yet challenging problem for
medical image analysis. The pancreas is an abdominal organ with very high
anatomical variability. This inhibits previous segmentation methods from
achieving high accuracies, especially compared to other organs such as the
liver, heart or kidneys. In this paper, we present a probabilistic bottom-up
approach for pancreas segmentation in abdominal computed tomography (CT) scans,
using multi-level deep convolutional networks (ConvNets). We propose and
evaluate several variations of deep ConvNets in the context of hierarchical,
coarse-to-fine classification on image patches and regions, i.e. superpixels.
We first present a dense labeling of local image patches via
$P{-}\mathrm{ConvNet}$ and nearest neighbor fusion. Then we describe a regional
ConvNet ($R_1{-}\mathrm{ConvNet}$) that samples a set of bounding boxes around
each image superpixel at different scales of contexts in a "zoom-out" fashion.
Our ConvNets learn to assign the probability of each superpixel region
being pancreas. Last, we study a stacked $R_2{-}\mathrm{ConvNet}$ leveraging
the joint space of CT intensities and the $P{-}\mathrm{ConvNet}$ dense
probability maps. Both 3D Gaussian smoothing and 2D conditional random fields
are exploited as structured predictions for post-processing. We evaluate on CT
images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity
Coefficient of 83.6$\pm$6.3% in training and 71.8$\pm$10.7% in testing.
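For reference, the evaluation metric reported above can be computed for binary masks as in this short sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom
```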
|
We experimentally realized an optical nanofiber-based cavity by combining a
1-D photonic crystal and Bragg grating structures. The cavity morphology
comprises a periodic, triplex air-cube introduced at the waist of the
nanofiber. The cavity has been theoretically characterized using FDTD
simulations to obtain the reflection and transmission spectra. We have also
experimentally measured the transmission spectra and a Q-factor of ~784(87) for
a very short periodic structure has been observed. The structure provides
strong confinement of the cavity field and its potential for optical network
integration makes it an ideal candidate for use in nanophotonic and quantum
information systems.
|
Designing a registration framework for images that do not share the same
probability distribution is a major challenge in modern image analytics, yet a
trivial task for the human visual system (HVS). Discrepancies in probability
distributions, also known as \emph{drifts}, can occur due to various reasons
including, but not limited to, differences in sequences and modalities (e.g.,
MRI T1-T2 and MRI-CT registration), or acquisition settings (e.g., multisite,
inter-subject, or intra-subject registrations). The popular assumption about
the working of the HVS is that it exploits a communal feature subspace that exists
between the registering images or fields-of-view and encompasses key
drift-invariant features. Mimicking the approach that is potentially adopted by
the HVS, herein, we present a representation learning technique of this
invariant communal subspace that is shared by registering domains. The proposed
communal domain learning (CDL) framework uses a set of hierarchical nonlinear
transforms to learn the communal subspace that minimizes the probability
differences and maximizes the amount of shared information between the
registering domains. Similarity metric and parameter optimization calculations
for registration are subsequently performed in the drift-minimized learned
communal subspace. This generic registration framework is applied to register
multisequence (MR: T1, T2) and multimodal (MR, CT) images. Results demonstrated
generic applicability, consistent performance, and statistically significant
improvement for both multi-sequence and multi-modal data using the proposed
approach ($p$-value$<0.001$; Wilcoxon rank sum test) over baseline methods.
|
This study is concerned with estimating the inequality measures associated
with the underlying hypothetical income distribution from time series of
grouped data on the Lorenz curve. We adopt the Dirichlet pseudo-likelihood
approach where the parameters of the Dirichlet likelihood are set to the
differences between the Lorenz curve of the hypothetical income distribution
for the consecutive income classes and propose a state space model which
combines the transformed parameters of the Lorenz curve through a time series
structure. Furthermore, the information on the sample size in each survey is
introduced into the Dirichlet precision parameter, originally a nuisance
parameter, to take into account the sampling variability. From the simulated data and
real data on the Japanese monthly income survey, it is confirmed that the
proposed model produces more efficient estimates on the inequality measures
than the existing models without time series structures.
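In one standard formulation of this construction (illustrative notation, not necessarily the authors'), the observed Lorenz-curve increments $q_k - q_{k-1}$ for income classes $k=1,\dots,K$ are modelled as
$$ p(q \mid \theta, \lambda) \propto \prod_{k=1}^{K} (q_k - q_{k-1})^{\lambda\{L(u_k;\theta)-L(u_{k-1};\theta)\}-1}, $$
where $L(u;\theta)$ is the Lorenz curve of the hypothetical income distribution and $\lambda$ is the Dirichlet precision parameter that is here tied to the survey sample size.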
|
This paper is concerned with geometric motion of a closed surface whose
velocity depends on a nonlocal quantity of the enclosed region. Using the level
set formulation, we study a class of nonlocal Hamilton--Jacobi equations and
establish a control-based representation formula for solutions. We also apply
the formula to discuss the fattening phenomenon and large-time asymptotics of
the solutions.
|
We report the serendipitous detection with ALMA of the vibrationally-excited
pure-rotational CO transition $v=1, J=3-2$ towards five asymptotic giant branch
(AGB) stars, $o$ Cet, R Aqr, R Scl, W Aql, and $\pi^1$ Gru. The observed lines
are formed in the poorly-understood region located between the stellar surface
and the region where the wind starts, the so-called warm molecular layer. We
successfully reproduce the observed lines profiles using a simple model. We
constrain the extents, densities, and kinematics of the region where the lines
are produced. R Aqr and R Scl show inverse P-Cygni line profiles which indicate
infall of material onto the stars. The line profiles of $o$ Cet and R Scl show
variability. The serendipitous detection towards these five sources shows that
vibrationally-excited rotational lines can be observed towards a large number
of nearby AGB stars using ALMA. This opens a new possibility for the study of
the innermost regions of AGB circumstellar envelopes.
|
We show that there are models of MA where the boldface
$\Sigma^1_3$-uniformization property holds. Further, we show that BPFA together
with the assertion that $\aleph_1$ is accessible to reals outright implies that
the boldface $\Sigma^1_3$-uniformization property is true.
|
Baseline design of a typical X-ray FEL undulator assumes a planar
configuration which results in a linear polarization of the FEL radiation.
However, many experiments at X-ray FEL user facilities would profit from using
a circularly polarized radiation. As a cheap upgrade one can consider an
installation of a short helical (or cross-planar) afterburner, but then one
should have an efficient method to suppress powerful linearly polarized
background from the main undulator. In this paper we propose a new method for
such a suppression: an application of the reverse taper in the main undulator.
We discover that in a certain range of the taper strength, the density
modulation (bunching) at saturation is practically the same as in the case of
a non-tapered undulator, while the power of linearly polarized radiation is
suppressed by orders of magnitude. The strongly modulated electron beam then
radiates at full power in the afterburner. Considering the SASE3 undulator of the
European XFEL as a practical example, we demonstrate that soft X-ray radiation
pulses with peak power in excess of 100 GW and an ultimately high degree of
circular polarization can be produced. The proposed method is rather universal,
i.e. it can be used at SASE FELs and seeded (self-seeded) FELs, with any
wavelength of interest, in a wide range of electron beam parameters, and with
any repetition rate. It can be used at different X-ray FEL facilities, in
particular at LCLS after installation of the helical afterburner in the near
future.
|
We give a method for designing a mechanical impedance to suppress the
propagation of disturbances along a chain of masses. The key feature of our
method is that it is scale free. This means that it can be used to give a
single, fixed, design, with provable performance guarantees in mass chains of
any length. We illustrate the approach by designing a bidirectional control law
in a vehicle platoon in a manner that is independent of the number of vehicles
in the platoon.
|
In this paper, we apply experimental number theory to two integrable quantum
models in one dimension, the Lieb-Liniger Bose gas and the Yang-Gaudin Fermi
gas with contact interactions. We identify patterns in weak- and
strong-coupling series expansions of the ground-state energy, local correlation
functions and pressure. Based on the most accurate data available in the
literature, we make a few conjectures about their mathematical structure and
extrapolate to higher orders.
|
Variations of physical and chemical characteristics of biomass lead to an
uneven flow of biomass in a biorefinery, which reduces equipment utilization
and increases operational costs. Uncertainty of biomass supply and high
processing costs increase the risk of investing in the US's cellulosic biofuel
industry. We propose a stochastic programming model to streamline processes
within a biorefinery. A chance constraint models the system's reliability
requirement that the reactor operates at a high utilization rate given
uncertain biomass moisture content, particle size distribution, and equipment
failure. The model identifies operating conditions of equipment and inventory
level to maintain a continuous flow of biomass to the reactor. The Sample
Average Approximation method approximates the chance constraint and a bisection
search-based heuristic solves this approximation. A case study is developed
using real-life data collected at Idaho National Laboratory's pilot biomass
processing facility. An extensive computational analysis indicates that
sequencing of biomass bales based on moisture level, increasing storage
capacity, and managing particle size distribution increase utilization of the
reactor and reduce operational costs.
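Schematically, and in illustrative notation rather than the paper's exact model, the reliability requirement and its Sample Average Approximation take the form
$$ \mathbb{P}\{\, u(x,\xi) \ge u_{\min} \,\} \ge 1-\epsilon \quad\longrightarrow\quad \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\{\, u(x,\xi^i) \ge u_{\min} \,\} \ge 1-\epsilon, $$
where $x$ collects the decisions (equipment operating conditions, inventory levels), $\xi$ the random data (moisture content, particle-size distribution, equipment failures), $u(x,\xi)$ the reactor utilization, and $\xi^1,\dots,\xi^N$ sampled scenarios.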
|
We propose a method to separate \Delta\eta-dependent and
\Delta\eta-independent azimuthal correlations using two- and four-particle
cumulants between pseudo-rapidity (\eta) bins in symmetric heavy-ion
collisions. The \Delta\eta-independent correlation may be dominated by harmonic
flows, a global correlation to the common collision geometry. The
\Delta\eta-dependent correlation can be identified as nonflow, particle
correlations unrelated to the common geometry. Our method exploits the \eta
symmetry of the average harmonic flows and is "data-driven." We use the AMPT
and HIJING event generators to illustrate our method. We discuss the decomposed
\Delta\eta-independent and \Delta\eta-dependent correlations regarding flow and
nonflow in the models.
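In standard notation (assumed here, not necessarily the authors'), the two-particle correlator between pseudo-rapidity bins $a$ and $b$ decomposes as
$$ V_n(\eta_a,\eta_b) \equiv \langle\langle \cos n(\phi_a-\phi_b) \rangle\rangle = v_n(\eta_a)\,v_n(\eta_b) + \delta_n(\eta_a,\eta_b), $$
where the flow term is set by the common collision geometry and the nonflow term $\delta_n$ carries the $\Delta\eta$ dependence; the $\eta$ symmetry $v_n(\eta)=v_n(-\eta)$ in symmetric collisions is what such a separation exploits.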
|
We propose different implementations of the sparse matrix--dense vector
multiplication (\spmv{}) for finite fields and rings $\Zb/m\Zb$. We take
advantage of graphics processing units (GPUs) and multi-core architectures. Our
aim is to improve the speed of \spmv{} in the \linbox library, and henceforth
the speed of its black box algorithms. In addition, we use this, together with a
new parallelization of the sigma-basis algorithm, in a parallel block Wiedemann
rank implementation over finite fields.
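As a minimal illustration of the core operation (a sketch only, not the \linbox or GPU implementation), an SpMV over $\Zb/m\Zb$ is an ordinary integer SpMV followed by a reduction modulo $m$:

```python
import numpy as np
from scipy.sparse import random as sparse_random

def spmv_mod(A_csr, x: np.ndarray, m: int) -> np.ndarray:
    """Sparse matrix--dense vector product over Z/mZ: integer SpMV followed
    by reduction modulo m (delayed reduction is the usual optimization)."""
    return np.asarray((A_csr @ x) % m, dtype=np.int64)

# Example over Z/101Z with a random 1000 x 1000 matrix of density 1%.
m = 101
A = sparse_random(1000, 1000, density=0.01, format="csr",
                  data_rvs=lambda k: np.random.randint(0, m, k)).astype(np.int64)
x = np.random.randint(0, m, size=1000).astype(np.int64)
y = spmv_mod(A, x, m)
```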
|
Ferroelectric crystals must adopt one of the 10 polar point groups according
to Neumann's principle. In this paper we propose that this conclusion is
based on perfect bulk crystals without taking the boundaries into account, and
we show first-principles evidence that ferroelectric polarizations may also be
formed in some non-polar point groups as the edges generally break the crystal
symmetry, an effect which may even persist at the macroscale. They can be switchable in some
systems with weak van der Waals bondings or covalent-like ionic bondings where
long ion displacements with moderate barriers are possible. Such unconventional
ferroelectricity violates Neumann's principle and Abrahams' conditions, due
respectively to the boundaries and the long ion displacements, which may
explain some unclarified phenomena reported previously as well as significantly
expand the scope of ferroelectrics.
|
We have used the master equation approach to study a moderately complex
network of diffusive reactions occurring on the surfaces of interstellar dust
particles. This network is meant to apply to dense clouds in which a large
portion of the gas-phase carbon has already been converted to carbon monoxide.
Hydrogen atoms, oxygen atoms, and CO molecules are allowed to accrete onto dust
particles and their chemistry is followed. The stable molecules produced are
oxygen, hydrogen, water, carbon dioxide (CO2), formaldehyde (H2CO), and
methanol (CH3OH). The surface abundances calculated via the master equation
approach are in good agreement with those obtained via a Monte Carlo method but
can differ considerably from those obtained with standard rate equations.
|
There are two dominant and contrasting classes of origin of life scenarios:
those predicting that life emerged in submarine hydrothermal systems, where
chemical disequilibrium can provide an energy source for nascent life; and
those predicting that life emerged within subaerial environments, where UV
catalysis of reactions may occur to form the building blocks of life. Here, we
describe a prebiotically plausible environment that draws on the strengths of
both scenarios: surface hydrothermal vents. We show how key feedstock molecules
for prebiotic chemistry can be produced in abundance in shallow and surficial
hydrothermal systems. We calculate the chemistry of volcanic gases feeding
these vents over a range of pressures and basalt C/N/O contents. If
ultra-reducing, carbon-rich, nitrogen-rich gases interact with subsurface water
at a volcanic vent, they result in 1 mM to 1 M concentrations of diacetylene,
acetylene, cyanoacetylene, hydrogen cyanide, bisulfite, hydrogen sulfide and
soluble iron in vent water. One key feedstock molecule, cyanamide, is not
formed in significant quantities within this scenario, suggesting that it may
need to be delivered exogenously, or formed from hydrogen cyanide either via
organometallic compounds, or by some as yet-unknown chemical synthesis. Given
the likely ubiquity of surface hydrothermal vents on young, hot, terrestrial
planets, these results identify a prebiotically plausible local geochemical
environment, which is also amenable to future lab-based simulation.
|
Infinite-order U-statistics (IOUS) have been used extensively in subbagging
ensemble learning algorithms such as random forests to quantify their
uncertainty. While normality results for IOUS have been studied extensively, its
variance estimation approaches and theoretical properties remain mostly
unexplored. Existing approaches mainly utilize the leading term dominance
property in the Hoeffding decomposition. However, such a view usually leads to
biased estimation when the kernel size is large or the sample size is small. On
the other hand, while several unbiased estimators exist in the literature,
their relationships and theoretical properties, especially the ratio
consistency, have never been studied. These limitations lead to unguaranteed
performances of constructed confidence intervals. To bridge these gaps in the
literature, we propose a new view of the Hoeffding decomposition for variance
estimation that leads to an unbiased estimator. Instead of leading term
dominance, our view utilizes the dominance of the peak region. Moreover, we
establish the connection and equivalence of our estimator with several existing
unbiased variance estimators. Theoretically, we are the first to establish the
ratio consistency of such a variance estimator, which justifies the coverage
rate of confidence intervals constructed from random forests. Numerically, we
further propose a local smoothing procedure to improve the estimator's finite
sample performance. Extensive simulation studies show that our estimators enjoy
lower bias and achieve the targeted coverage rates.
|
Recent measurements of the time-dependent CP violation are presented. The
decays of $B_{s}^{0}$ mesons to $J/\psi\,K^+ K^-$ and $J/\psi\,\pi^+ \pi^-$
final states are used to measure CP-violating parameters with proton-proton
collision data, corresponding to an integrated luminosity of 1.9 fb$^{-1}$,
collected by the LHCb detector at a centre-of-mass energy of 13 TeV in 2015 and
2016.
|
This paper studies the motion planning problem of the pick-and-place of an
aerial manipulator that consists of a quadcopter flying base and a Delta arm.
We propose a novel partially decoupled motion planning framework to solve this
problem. Compared to the state-of-the-art approaches, the proposed one has two
novel features. First, it does not suffer from increased computation in
high-dimensional configuration spaces. That is because it calculates the
trajectories of the quadcopter base and the end-effector separately in the
Cartesian space based on proposed geometric feasibility constraints. The
geometric feasibility constraints can ensure the resulting trajectories satisfy
the aerial manipulator's geometry. Second, collision avoidance for the Delta
arm is achieved through an iterative approach based on a pinhole mapping
method, so that the feasible trajectory can be found in an efficient manner.
The proposed approach is verified by three experiments on a real aerial
manipulation platform. The experimental results show the effectiveness of the
proposed method for the aerial pick-and-place task.
|
Explainable artificial intelligence (XAI) has been one of the most intensively
developed areas of AI in recent years. It is also one of the most fragmented,
with multiple methods that focus on different aspects of explanations. This
makes it difficult to obtain the full spectrum of explanation at once in a compact
and consistent way. To address this issue, we present the Local Universal Explainer
(LUX), which is a rule-based explainer that can generate factual,
counterfactual and visual explanations. It is based on a modified version of
decision tree algorithms that allows for oblique splits and integration with
feature importance XAI methods such as SHAP or LIME. In contrast to other
algorithms, it does not use data generation, but is focused on selecting local
concepts in the form of high-density clusters of real data that have the highest
impact on forming the decision boundary of the explained model. We tested our
method on real and synthetic datasets and compared it with state-of-the-art
rule-based explainers such as LORE, EXPLAN and Anchor. Our method outperforms
the existing approaches in terms of simplicity, global fidelity,
representativeness, and consistency.
|
Today many compact and efficient on-water data acquisition units support
modern coaching by measuring and analyzing various inertial signals during
kayaking. One of the most challenging problems is how these signals can be used
to estimate performance and to develop the technique. Recently we have
introduced indicators based on the fluctuations of the inertial signals as
promising additions to the existing parameters. In this work we report on
our more detailed analysis, compare new indicators and discuss the possible
advantages of the applied methods. Our primary aim is to draw attention to
several exciting and inspiring open problems and to initiate further research
even in several related multidisciplinary fields. More detailed information can
be found on a dedicated web page, http://www.noise.inf.u-szeged.hu/kayak.
|
We derive a second order estimate for the first m eigenvalues and
eigenfunctions of the linearized Gel'fand problem associated to solutions which
blow up at m points. This allows us to determine, in some suitable situations,
some qualitative properties of the first m eigenfunctions as the number of
points of concentration or the multiplicity of the eigenvalue.
|
Motivated by the fact that information is encoded and processed by physical
systems, the P versus NP problem is examined in terms of physical processes. In
particular, we consider P as a class of deterministic, and NP as
nondeterministic, polynomial-time physical processes. Based on these
identifications, we review a self-reference physical process in quantum theory,
which belongs to NP but cannot be contained in P.
|
We present ab initio GW plus cumulant-expansion calculations for an organic
compound (TMTSF)2PF6 and a transition-metal oxide SrVO3. These materials
exhibit characteristic low-energy band structures around the Fermi level, which
bring about interesting low-energy properties; the low-energy bands near the
Fermi level are isolated from the other bands and, in the isolated bands,
unusually low-energy plasmon excitations occur. To study the effect of this
low-energy-plasmon fluctuation on the electronic structure, we calculate
spectral functions and photoemission spectra using the ab initio cumulant
expansion of the Green's function based on the GW self-energy. We find that
the low-energy plasmon fluctuation leads to an appreciable renormalization of
the low-energy bands and a transfer of the spectral weight into the incoherent
part, thus resulting in an agreement with experimental photoemission data.
|
We study one-dimensional Bose liquids of interacting ultracold atoms in a
Y-shaped potential in which each branch is filled with atoms. We find that the
excitation packet incident on a single Y-junction should experience a negative
density reflection analogous to the Andreev reflection at normal-superconductor
interfaces, although the present system does not contain fermions. In a ring
interferometer type configuration, we find that the transport is completely
insensitive to the (effective) flux contained in the ring, in contrast to the
Aharonov-Bohm effect of a single particle in the same geometry.
|
We characterize, using the Bergman kernel, Carleson measures of Bergman spaces
in strongly pseudoconvex bounded domains in several complex variables,
generalizing to this setting theorems proved by Duren and Weir for the unit
ball. We also show that uniformly discrete (with respect to the Kobayashi
distance) sequences give examples of Carleson measures, and we compute the
speed of escape to the boundary of uniformly discrete sequences in strongly
pseudoconvex domains, generalizing results obtained in the unit ball by
Jevti\'c, Massaneda and Thomas, by Duren and Weir, and by MacCluer.
|
Annular substructures in protoplanetary discs, ubiquitous in sub-mm
observations, can be caused by gravitational coupling between a disc and its
embedded planets. Planetary density waves inject angular momentum into the disc
leading to gap opening only after travelling some distance and steepening into
shocks (in the absence of linear damping); no angular momentum is deposited in
the planetary coorbital region, where the wave has not shocked yet. Despite
that, simulations show mass evacuation from the coorbital region even in
inviscid discs, leading to smooth, double-trough gap profiles. Here we consider
the early, time-dependent stages of planetary gap opening in inviscid discs. We
find that an often-overlooked contribution to the angular momentum balance
caused by the time-variability of the specific angular momentum of the disc
fluid (caused, in turn, by the time-variability of the radial pressure support)
plays a key role in gap opening. Focusing on the regime of shallow gaps with
depths of $\lesssim 20\%$, we demonstrate analytically that early gap opening
is a self-similar process, with the amplitude of the planet-driven perturbation
growing linearly in time and the radial gap profile that can be computed
semi-analytically. We show that mass indeed gets evacuated from the coorbital
region even in inviscid discs. This evolution pattern holds even in viscous
discs over a limited period of time. These results are found to be in excellent
agreement with 2D numerical simulations. Our simple gap evolution solutions can
be used in studies of dust dynamics near planets and for interpreting
protoplanetary disc observations.
|
We present a combined experimental and theoretical study of the surface
vibrational modes of the topological insulator Bi$_2$Te$_3$. Using
high-resolution helium-3 spin-echo spectroscopy we are able to resolve the
acoustic phonon modes of Bi$_2$Te$_3$(111). The low energy region of the
lattice vibrations is mainly dominated by the Rayleigh mode which has been
claimed to be absent in previous experimental studies. The appearance of the
Rayleigh mode is consistent with previous bulk lattice dynamics studies as well
as theoretical predictions of the surface phonon modes. Density functional
perturbation theory calculations including van der Waals corrections are in
excellent agreement with the experimental data. Comparison of the experimental
results with theoretically obtained values for films with a thickness of
several layers further demonstrates that for an accurate theoretical
description of three-dimensional topological insulators with their layered
structure the inclusion of van der Waals corrections is essential. The presence
of a prominent surface acoustic wave and the contribution of van der Waals
bonding to the lattice dynamics may hold important implications for the
thermoelectric properties of thin-film and nanoscale devices.
|
In the framework of the Global Architecture of Planetary Systems (GAPS)
project we collected more than 300 spectra with HARPS-N at the TNG for the
bright G9V star HD164922. This target is known to host one gas giant planet in
a wide orbit (Pb~1200 days, semi-major axis ~2 au) and a Neptune-mass planet
with a period Pc ~76 days. We searched for additional low-mass companions in
the inner region of the system. We compared the radial velocities (RV) and the
activity indices derived from the HARPS-N time series to measure the rotation
period of the star and used a Gaussian process regression to describe the
behaviour of the stellar activity. We exploited this information in a combined
model of planetary and stellar activity signals in an RV time-series composed
of almost 700 high-precision RVs, both from HARPS-N and literature data. We
performed a dynamical analysis to evaluate the stability of the system and the
allowed regions for additional potential companions. Thanks to the high
sensitivity of the HARPS-N dataset, we detect an additional inner super-Earth
with an RV semi-amplitude of 1.3+/-0.2 m/s, a minimum mass of ~4+/-1 M_E and a
period of 12.458+/-0.003 days. We disentangle the planetary signal from
activity and measure a stellar rotation period of ~42 days. The dynamical
analysis shows the long term stability of the orbits of the three-planet system
and allows us to identify the permitted regions for additional planets in the
semi-major axis ranges 0.18-0.21 au and 0.6-1.4 au. The latter partially
includes the habitable zone of the system. We did not detect any planet in
these regions, down to minimum detectable masses of 5 and 18 M_E, respectively.
A larger region of allowed planets is expected beyond the orbit of planet b,
where our sampling rules out bodies with minimum mass > 50 M_E.
|
The aim of this talk is to present the most recent advances in establishing
plausible planetary system architectures determined by the gravitational tidal
interactions between the planets and the disc in which they are embedded during
the early epoch of planetary system formation. We concentrate on a very well
defined and intensively studied process of the disc-planet interaction leading
to the planet migration. We focus on the dynamics of the systems in which
low-mass planets are present. Particular attention is devoted to investigation
of the role of resonant configurations. Our studies, apart from being
complementary to the rapid progress now occurring in observing the whole
variety of planetary systems and uncovering their structure and origin, can
also constitute a valuable contribution in support of the missions planned to
enhance the number of detected multiple systems.
|
We study compactifications of the $N=2$ 6D tensionless string on various
complex two-folds down to two-dimensions. In the IR limit they become
non-trivial conformal field theories in 2D. Using results of Vafa and Witten on
the partition functions of twisted Super-Yang-Mills theories, we can study the
resulting CFT. We also discuss the contribution of instantons made by wrapping
strings on 2-cycles of the complex two-fold.
|
We formulate elementary SFT spectral invariants of a large class of
symplectic cobordisms and stable Hamiltonian manifolds, in any dimension. We
give criteria for the strong closing property using these invariants, and
verify these criteria for Hofer near periodic systems. This extends the class
of symplectic dynamical systems in any dimension that satisfy the strong
closing property.
|
Atom chips use current flowing in lithographically patterned wires to produce
microscopic magnetic traps for atoms. The density distribution of a trapped
cold atom cloud reveals disorder in the trapping potential, which results from
meandering current flow in the wire. Roughness in the edges of the wire is
usually the main cause of this behaviour. Here, we point out that the edges of
microfabricated wires normally exhibit self-affine roughness. We investigate
the consequences of this for disorder in atom traps. In particular, we consider
how closely the trap can approach the wire when there is a maximum allowable
strength of the disorder. We comment on the role of roughness in future
atom--surface interaction experiments.
|
From the partial wave analysis, the phase shift D33 of pion-nucleon
scattering containing the $\Delta$(1232) resonance, corresponding to isospin I=3/2 and
angular momentum J=3/2, has been parameterized over the energy range 1100 < W <
1375 MeV, using $\pi^+ p$ data. The result of our parameterization shows good
agreement with the available experimental data.
|
The excitation of many cells and tissues is associated with cell mechanical
changes. The evidence presented herein corroborates that single cells deform
during an action potential (AP). It is demonstrated that excitation of plant
cells (Chara braunii internodes) is accompanied by out-of-plane displacements
of the cell surface in the micrometer range (1-10 micron). The onset of
cellular deformation coincides with the depolarization phase of the AP. The
mechanical pulse (i) propagates with the same velocity as the electrical pulse
(within experimental accuracy; 10 mm/s), (ii) is reversible, (iii) is in most
cases of a biphasic nature (109 out of 152 experiments) and (iv) is presumably
independent of actin-myosin motility. The existence of transient mechanical
changes in the cell cortex is confirmed by micropipette aspiration experiments.
A theoretical analysis demonstrates that this observation can be explained by a
reversible change in the mechanical properties of the cell surface
(transmembrane pressure, surface tension and bending rigidity). Taken together,
these findings contribute to the ongoing debate about the physical nature of
cellular excitability.
|
Physicians write notes about patients. In doing so, they reveal much about
themselves. Using data from 129,228 emergency room visits, we train a model to
identify notes written by fatigued physicians -- those who worked 5 or more of
the prior 7 days. In a hold-out set, the model accurately identifies notes
written by these high-workload physicians, and also flags notes written in
other high-fatigue settings: on overnight shifts, and after high patient
volumes. Model predictions also correlate with worse decision-making on at
least one important metric: yield of testing for heart attack is 18% lower with
each standard deviation increase in model-predicted fatigue. Finally, the model
indicates that notes written about Black and Hispanic patients have 12% and 21%
higher predicted fatigue than Whites -- larger than overnight vs. daytime
differences. These results have an important implication for large language
models (LLMs). Our model indicates that fatigued doctors write more predictable
notes. Perhaps unsurprisingly, because word prediction is the core of how LLMs
work, we find that LLM-written notes have 17% higher predicted fatigue than
real physicians' notes. This indicates that LLMs may introduce distortions in
generated text that are not yet fully understood.
|
In the practice of sequential decision making, agents are often designed to
sense state at regular intervals of $d$ time steps, $d > 1$, ignoring state
information in between sensing steps. While it is clear that this practice can
reduce sensing and compute costs, recent results indicate a further benefit. On
many Atari console games, reinforcement learning (RL) algorithms deliver
substantially better policies when run with $d > 1$ -- in fact with $d$ even as
high as $180$. In this paper, we investigate the role of the parameter $d$ in
RL; $d$ is called the "frame-skip" parameter, since states in the Atari domain
are images. For evaluating a fixed policy, we observe that under standard
conditions, frame-skipping does not affect asymptotic consistency. Depending on
other parameters, it can possibly even benefit learning. To use $d > 1$ in the
control setting, one must first specify which $d$-step open-loop action
sequences can be executed in between sensing steps. We focus on
"action-repetition", the common restriction of this choice to $d$-length
sequences of the same action. We define a task-dependent quantity called the
"price of inertia", in terms of which we upper-bound the loss incurred by
action-repetition. We show that this loss may be offset by the gain brought to
learning by a smaller task horizon. Our analysis is supported by experiments on
different tasks and learning algorithms.
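A minimal sketch of the action-repetition construction for a generic Gym-style environment (the wrapper below is illustrative, not code from the paper):

```python
class ActionRepeat:
    """Hold each agent decision for d consecutive environment steps and only
    sense state every d steps (the "frame-skip" / action-repetition setting)."""

    def __init__(self, env, d: int):
        assert d >= 1
        self.env, self.d = env, d

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        obs = None
        for _ in range(self.d):
            obs, reward, done, info = self.env.step(action)  # repeat same action
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```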
|
We revisit the issue of whether radio loudness in AGN is associated with
central black hole mass, as has been suggested in the literature. We present
new estimates of black hole mass for 295 AGN (mostly radio-quiet), calculating
their radio loudnesses from published radio and optical fluxes, and combine
with our previously published values, for a sample of 452 AGN for which both
black hole mass and radio loudness (or upper limits thereto) are known. Among
the radio-quiet AGN, there are now many black holes with mass larger than
$10^{9}$ $M_{\odot}$, extending to the same high black hole masses as
radio-loud AGN of the same redshifts. Over the full sample, the black hole
masses of radio-loud and radio-quiet AGN span the same large range, $10^6 -
10^{10} M_\odot$. We conclude that radio loudness in AGN does not depend
strongly on central black hole mass.
|
Medicare fraud results in considerable losses for governments and insurance
companies and leads to higher premiums for clients. Medicare fraud costs
around 13 billion euros in Europe and between 21 billion and 71 billion US
dollars per year in the United States. This study aims to use artificial neural
network based classifiers to predict medicare fraud. The main difficulty using
machine learning techniques in fraud detection or more generally anomaly
detection is that the data sets are highly imbalanced. To detect medicare
fraud, we propose a multiple-input deep neural network based classifier with
a Long Short-Term Memory (LSTM) autoencoder component. This architecture makes
it possible to take into account many sources of data without mixing them and
makes the classification task easier for the final model. The latent features
extracted from the LSTM autoencoder have a strong discriminating power and
separate the providers into homogeneous clusters. We use the data sets from the
Centers for Medicaid and Medicare Services (CMS) of the US federal government.
The CMS provides publicly available data that brings together all of the cost
price requests sent by American hospitals to medicare companies. Our results
show that although baseline artificial neural networks give good performance,
they are outperformed by our multiple-input neural networks. We have shown
that using an LSTM autoencoder to embed the provider behavior gives better
results and makes the classifiers more robust to class imbalance.
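A minimal Keras sketch of such an LSTM autoencoder component (layer sizes and names are illustrative; the full multiple-input classifier that consumes the latent embedding alongside the other data sources is omitted):

```python
from tensorflow.keras import layers, models

def build_lstm_autoencoder(timesteps: int, n_features: int, latent_dim: int = 32):
    """Encode a provider's claim sequence into a latent vector and learn to
    reconstruct the sequence; the encoder output serves as the embedding."""
    inputs = layers.Input(shape=(timesteps, n_features))
    latent = layers.LSTM(latent_dim, name="provider_embedding")(inputs)
    decoded = layers.RepeatVector(timesteps)(latent)
    decoded = layers.LSTM(latent_dim, return_sequences=True)(decoded)
    outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)
    autoencoder = models.Model(inputs, outputs)
    encoder = models.Model(inputs, latent)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder
```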
|
We present the Hamiltonian formulation of the Ghost Free Mimetic Massive
Gravity theory. The linearized theory is studied and the Hamiltonian equations
of motion are analyzed. Poisson brackets are computed and closure is proved. To
prove that this theory is ghost-free, the number of degrees of freedom is
analyzed, showing that the theory has only five degrees of freedom.
|
We discuss preliminary results on medium-modified fragmentation functions
obtained in a combined NLO fit to data on semi-inclusive deep inelastic
scattering off nuclei and hadroproduction in deuteron-gold collisions.
|
We propose a simple mechanism for solving the $\mu$ problem in the context of
minimal low--energy supergravity models. This is based on the appearance of
non--renormalizable couplings in the superpotential. In particular, if $H_1H_2$
is an operator allowed by all the symmetries of the theory, it is natural to
promote the usual renormalizable superpotential $W_o$ to $W_o+\lambda W_o
H_1H_2$, yielding an effective $\mu$ parameter whose size is directly related
to the gravitino mass once supersymmetry is broken (this result is maintained
if $H_1H_2$ couples with different strengths to the various terms present in
$W_o$). On the other hand, the $\mu$ term must be absent from $W_o$, otherwise
the natural scale for $\mu$ would be $M_P$. Remarkably enough, this is entirely
justified in the supergravity theories coming from superstrings, where mass
terms for light fields are forbidden in the superpotential. We also analyse the
$SU(2)\times U(1)$ breaking, finding that it takes place satisfactorily.
Finally, we give a realistic example in which supersymmetry is broken by
gaugino condensation, where the mechanism proposed for solving the $\mu$
problem can be gracefully implemented.
|
The main result of this paper is that two large collections of ergodic
measure preserving systems, the Odometer Based and the Circular Systems have
the same global structure with respect to joinings. The classes are canonically
isomorphic by a continuous map that takes factor maps to factor maps,
measure-isomorphisms to measure-isomorphisms, weakly mixing extensions to
weakly mixing extensions and compact extensions to compact extensions. The
first class includes all finite entropy ergodic transformations with an
odometer factor. By results in a previous paper, the second class contains all
transformations realizable as diffeomorphisms using the strongly uniform
untwisted Anosov-Katok method. An application of the main result will appear in
a forthcoming paper that shows that the diffeomorphisms of the torus are
inherently unclassifiable up to measure-isomorphism. Other consequences include
the existence of measure-distal diffeomorphisms of arbitrary countable distal
height.
|
In our work, we analyse $5\times10^{4}$ single pulses from the recycled
pulsar PSR J2222$-$0137 in one of its scintillation maxima observed by the
Five-hundred-meter Aperture Spherical radio Telescope (FAST). PSR J2222$-$0137
is one of the nearest and best-studied binary pulsars and a unique
laboratory for testing gravitational theories. We report the energy distribution
and polarization of single pulses from the pulsar's main-pulse region. The single
pulse energy follows the log-normal distribution. We resolve a steep
polarization swing, but at the current time resolution ($64\,\mu{\rm s}$), we
find no evidence for the orthogonal jump in the main-pulse region, as has been
suspected. We find a potential sub-pulse drifting period of $P_{3} \sim
3.5\,P$. We analyse the jitter noise from different integrated numbers of
pulses and find that its $\sigma_{j}$ is $270\pm{9}\,{\rm ns}$ for 1-hr
integration at 1.25 GHz. This result is useful for optimizing future timing
campaigns with FAST or other radio telescopes.
|
We construct irreducible modules V_{\alpha}, \alpha \in \mathbb{C}, over the W_3
algebra with c = -2 in terms of a free bosonic field. We prove that these modules
exhaust all the irreducible modules of the W_3 algebra with c = -2. The highest
weights of the modules V_{\alpha}, \alpha \in \mathbb{C}, with respect to the full
(two-dimensional) Cartan subalgebra of the W_3 algebra are (\alpha(\alpha -1)/2,
\alpha(\alpha -1)(2\alpha -1)/6). They are parametrized by points (t, w) on the
rational curve w^2 - t^2 (8t + 1)/9 = 0. Irreducible modules of the vertex algebra
W_{1+\infty} with c = -1 are also classified.
|
In this note, following \cite{Chitescuetal2014}, we show that the
Monge-Kantorovich norm on the vector space of countably additive measures on a
compact metric space admits a primal representation analogous to that of the
Hanin norm. In other words, like the Hanin norm, the Monge-Kantorovich norm can
be seen as an extension of the Kantorovich-Rubinstein norm from the vector
subspace of zero-charge measures. This implies a number of novel results, such
as the equivalence of the Monge-Kantorovich and Hanin norms.
|
Key processes in biological and chemical systems are described by networks of
chemical reactions. From molecular biology to biotechnology applications,
computational models of reaction networks are used extensively to elucidate
their non-linear dynamics. Model dynamics are crucially dependent on parameter
values which are often estimated from observations. Over the past decade,
interest in parameter and state estimation in models of (bio-)chemical reaction
networks (BRNs) has grown considerably. Statistical inference problems are also
encountered in many other tasks, including model calibration, discrimination,
identifiability analysis and model checking, as well as optimum experiment design,
sensitivity analysis and bifurcation analysis. The aim of this review paper is to
explore the developments of the past decade to understand which BRN models are
commonly used in the literature, and for what inference tasks and with which
inference methods. An initial collection of about 700 publications (excluding
books) in computational biology and chemistry was screened to select over 260
research papers and 20 graduate theses concerning estimation problems in BRNs.
The paper selection was performed as text mining, using scripts to automate the
search for relevant keywords and terms. The outcome is a set of tables revealing
the level of interest in different inference tasks and methods for given models
in the literature, as well as recent trends. In addition, a brief survey of
general estimation strategies is provided to facilitate understanding of the
estimation methods used for BRNs. Our findings indicate that many combinations
of models, tasks and methods are still relatively sparse, representing new
research opportunities to explore those that have not been considered - perhaps
for a good reason. The paper concludes by discussing future research directions,
including research problems which cannot be directly deduced from the presented
tables.
|
In [I. Cardinali and L. Giuzzi. Line Hermitian Grassmann codes and their
parameters. Finite Fields Appl., 51: 407-432, 2018] we introduced line
Hermitian Grassmann codes and determined their parameters. The aim of this
paper is to present (in the spirit of [I. Cardinali and L. Giuzzi. Enumerative
coding for line polar Grassmannians with applications to codes. Finite Fields
Appl., 46:107-138, 2017]) an algorithm for the point enumerator of a line
Hermitian Grassmannian which can be usefully applied to get efficient encoders,
decoders and error correction algorithms for the aforementioned codes.
|
In this paper, we introduce a two-level attention schema, Poolingformer, for
long document modeling. Its first level uses a smaller sliding window pattern
to aggregate information from neighbors. Its second level employs a larger
window to increase receptive fields with pooling attention to reduce both
computational cost and memory consumption. We first evaluate Poolingformer on
two long sequence QA tasks: the monolingual NQ and the multilingual TyDi QA.
Experimental results show that Poolingformer sits atop three official
leaderboards measured by F1, outperforming previous state-of-the-art models by
1.9 points (79.8 vs. 77.9) on NQ long answer, 1.9 points (79.5 vs. 77.6) on
TyDi QA passage answer, and 1.6 points (67.6 vs. 66.0) on TyDi QA minimal
answer. We further evaluate Poolingformer on a long sequence summarization
task. Experimental results on the arXiv benchmark continue to demonstrate its
superior performance.
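As an illustration of the two-level idea (a simplified single-head sketch, not
the paper's implementation), the PyTorch code below lets each query attend to a
small local window of keys/values plus a strided mean-pooled summary of the full
sequence; the window size, pooling stride and single head are assumptions.

# Minimal single-head sketch of a two-level (local window + pooled) attention.
import torch
import torch.nn.functional as F

def two_level_attention(q, k, v, local_window=4, pool_stride=4):
    """q, k, v: tensors of shape (seq_len, dim)."""
    seq_len, dim = q.shape
    scale = dim ** -0.5

    # Level 2: pool keys/values over a larger receptive field (strided mean).
    k_pool = F.avg_pool1d(k.t().unsqueeze(0), pool_stride, pool_stride).squeeze(0).t()
    v_pool = F.avg_pool1d(v.t().unsqueeze(0), pool_stride, pool_stride).squeeze(0).t()

    out = torch.zeros_like(q)
    for i in range(seq_len):
        # Level 1: keys/values from a small sliding window around position i.
        lo, hi = max(0, i - local_window), min(seq_len, i + local_window + 1)
        k_cat = torch.cat([k[lo:hi], k_pool], dim=0)
        v_cat = torch.cat([v[lo:hi], v_pool], dim=0)
        attn = torch.softmax((q[i] @ k_cat.t()) * scale, dim=-1)
        out[i] = attn @ v_cat
    return out

# Toy usage with self-attention on a random sequence.
x = torch.randn(16, 8)
y = two_level_attention(x, x, x)

The loop is written for clarity rather than efficiency; a practical version
would batch the window extraction and use multiple heads.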
|
In the present paper, as an application of Roth's theorem concerning the
rational approximation of algebraic numbers, we give a sufficient condition
ensuring that certain sums, products and quotients of series of positive
rational terms are transcendental numbers. We recall that all the infinite
series treated here are Liouville numbers. At the end of this article, we
establish an approximation measure for these numbers.
|
In this paper, we construct the axisymmetric weak solutions to the 3D
isothermal stationary compressible Navier-Stokes equations on the domain D,
where the heat ratio equals 1 and the external force g satisfies certain
cancellation conditions. We first establish the compactness assertion of the
approximation solutions by assuming the total mass of the fluid is finite, then
we exclude the trivial solution via imposing another type of restrictions on
the density. To deal with the singularities near the symmetric axis and at the
far field, we will derive proper weighted estimates of the L2-norm of the
velocity field on the unbounded domain D, which belongs to the critical case of
Sobolev's inequality.
|
Asteroseismology is a powerful tool for probing stellar interiors and
determining stellar fundamental parameters. In previous works, the
$\chi^2$-minimization method has usually been used to find the best-matching
model to characterize observations. In this letter, we adopt the
$\chi^2$-minimization method but use only the observed high-precision
oscillation frequencies to constrain theoretical models of the solar-like
oscillating star KIC 6225718, which was observed by the \kepler\ satellite. We
also take into account the influence of model precision. Finally, we find that
the time resolution of stellar evolution cannot be ignored in high-precision
asteroseismic analysis. Based on this, we find that the acoustic radius
$\tau_{0}$ is the only global parameter that can be accurately measured by the
$\chi^2_{\nu}$ matching method between observed frequencies and theoretical
model calculations. We obtain $\tau_{0}=4601.5^{+4.4}_{-8.3}$ seconds. In
addition, we analyze the distribution of $\chi^2_{\nu}$-minimization models
(CMMs), and find that the distribution range of CMMs is slightly enlarged by
some extreme cases, which possess both larger mass and higher (or lower)
heavy-element abundance, at the lower end of the acoustic radius range.
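For concreteness, a generic reduced-$\chi^2$ frequency match of the kind
referred to above can be written as follows (an illustrative sketch with
placeholder arrays, not the actual KIC 6225718 analysis):

import numpy as np

def chi2_nu(nu_obs, sigma_obs, nu_model):
    # Reduced chi-square between observed and model oscillation frequencies.
    nu_obs, sigma_obs, nu_model = map(np.asarray, (nu_obs, sigma_obs, nu_model))
    return np.mean(((nu_obs - nu_model) / sigma_obs) ** 2)

# Placeholder values; in practice nu_model comes from a grid of stellar models
# and the best model minimizes chi2_nu.
print(chi2_nu([1000.0, 1100.0], [0.1, 0.1], [1000.05, 1100.02]))

The acoustic radius itself is related to the large frequency separation
approximately by $\tau_{0} \simeq 1/(2\Delta\nu)$.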
|
We announce a factorization result for equivariant birational morphisms
between toric 4-folds whose source is Fano: such a morphism is always a
composite of blow-ups along smooth invariant centers. Moreover, we show with a
counterexample that, differently from the 3-dimensional case, even if both the
source and the target are Fano, the intermediate varieties cannot be chosen to be
Fano.
|
In this paper, we prove the existence of a unique strong solution to a
stochastic tamed 3D Navier-Stokes equation in the whole space as well as in the
periodic boundary case. Then, we also study the Feller property of solutions,
and prove the existence of invariant measures for the corresponding Feller
semigroup in the case of periodic conditions. Moreover, in the case of periodic
boundary conditions and degenerate additive noise, using the notion of asymptotic strong
Feller property proposed by Hairer and Mattingly \cite{Ha-Ma}, we prove the
uniqueness of invariant measures for the corresponding transition semigroup.
|
We discuss relations between the para-CR structures and differential
equations (both ODEs and PDEs of finite type).
|
Motivated by Thurston and Daskalopoulos--Uhlenbeck's approach to
Teichm\"uller theory, we study the behavior of $q$-harmonic functions and their
$p$-harmonic conjugates in the limit as $q \to 1$, where $1/p + 1/q = 1$. The
$1$-Laplacian is already known to give rise to laminations by minimal
hypersurfaces; we show that the limiting $p$-harmonic conjugates converge to
calibrations $F$ of the laminations. Moreover, we show that the laminations
which are calibrated by $F$ are exactly those which arise from the
$1$-Laplacian. We also explore the limiting dual problem as a model problem for
the optimal Lipschitz extension problem, which exhibits behavior rather unlike
the scalar $\infty$-Laplacian. In a companion work, we will apply the main
result of this paper to associate to each class in $H^{d - 1}$ a lamination in
a canonical way, and study the duality of the stable norm on $H_{d - 1}$.
|
Tensor data are increasingly available in many application domains. We
develop several tensor decomposition methods for binary tensor data. Different
from classical tensor decompositions for continuous-valued data with squared
error loss, we formulate logistic tensor decompositions for binary data with a
Bernoulli likelihood. To enhance the interpretability of estimated factors and
improve their stability further, we propose sparse formulations of logistic
tensor decomposition by considering $\ell_{1}$-norm and $\ell_{0}$-norm
regularized likelihood. To handle the resulting optimization problems, we
develop computational algorithms which combine the strengths of tensor power
method and majorization-minimization (MM) algorithm. Through simulation
studies, we demonstrate the utility of our methods in analysis of binary tensor
data. To illustrate the effectiveness of the proposed methods, we analyze a
dataset concerning nations and their political relations and perform
co-clustering of estimated factors to find associations between the nations and
political relations.
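To make the model structure concrete, here is a minimal sketch of an
unpenalized rank-R logistic CP decomposition for a binary three-way tensor, fit
by plain gradient ascent on the Bernoulli log-likelihood; the paper's actual
algorithms combine a tensor power method with MM updates and add
$\ell_{1}$/$\ell_{0}$ penalties, none of which is reproduced here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def logistic_cp(Y, rank=2, lr=0.05, iters=500, seed=0):
    """Y: binary array of shape (I, J, K). Returns factor matrices A, B, C."""
    rng = np.random.default_rng(seed)
    I, J, K = Y.shape
    A, B, C = (rng.normal(scale=0.1, size=(d, rank)) for d in (I, J, K))
    for _ in range(iters):
        theta = np.einsum('ir,jr,kr->ijk', A, B, C)   # low-rank logits
        resid = Y - sigmoid(theta)                    # gradient of log-likelihood wrt theta
        A += lr * np.einsum('ijk,jr,kr->ir', resid, B, C)
        B += lr * np.einsum('ijk,ir,kr->jr', resid, A, C)
        C += lr * np.einsum('ijk,ir,jr->kr', resid, A, B)
    return A, B, C

A sparsity-inducing variant would add a soft-thresholding (for $\ell_{1}$) or
hard-thresholding (for $\ell_{0}$) step after each factor update.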
|
The linear combination of atomic orbitals (LCAO) is a standard method for
studying solids and molecules; it is also known as the tight-binding (TB)
method. In most implementations only the basis set and the coupling constants
are provided, without an explicit definition of the kinetic and potential
energy operators. The tight-binding scheme is, nonetheless, capable of
providing an accurate description of properties such as the electronic bands
and elastic constants for many materials. However, for some applications,
knowledge of the underlying electronic potential associated with the
tight-binding Hamiltonian may be important to guarantee that the actual physics
is preserved by the semiempirical scheme. In this work the electronic
potentials that arise from the use of tight-binding effective Hamiltonians are
explored. The formalism is applied to the extended H\"{u}ckel tight-binding
(EHTB) Hamiltonian, which is a two-center Slater-Koster
approach that makes explicit use of the overlap matrix.
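As a generic illustration of how a tight-binding parametrization works (a
textbook one-dimensional example, not the EHTB scheme studied in the paper),
the band structure of a nearest-neighbor chain follows from just an on-site
energy and a hopping constant, with no explicit kinetic or potential operator:

import numpy as np

eps0 = 0.0      # on-site energy (assumed value)
t = -1.0        # nearest-neighbour hopping integral (assumed value)
a = 1.0         # lattice constant

k = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
E = eps0 + 2.0 * t * np.cos(k * a)            # dispersion E(k) = eps0 + 2 t cos(ka)

Recovering a real-space potential consistent with such a parametrization is
precisely the kind of inverse question the abstract refers to.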
|
Let S be a finitely generated standard multi-graded algebra over a Noetherian
local ring A. This paper first expresses the mixed multiplicities of S in terms
of Hilbert-Samuel multiplicities, interpreting the mixed multiplicities of S as
Hilbert-Samuel multiplicities of quotient modules of S. As an application, we
obtain formulas for the mixed multiplicities of ideals that cover the main
result of Trung-Verma 2007 [TV].
|
It is argued that in certain 2d dilaton gravity theories there exist
self-consistent solutions of field equations with quantum terms which describe
extreme black holes at nonzero temperature. The curvature remains finite on the
horizon due to the cancellation of thermal divergences in the stress-energy
tensor against divergences in the classical part of the field equations. The extreme
black hole solutions under discussion are due to quantum effects only and do
not have classical counterparts.
|
We review the background field method for general N = 2 super Yang-Mills
theories formulated in the N = 2 harmonic superspace. The covariant harmonic
supergraph technique is then applied to rigorously prove the N=2
non-renormalization theorem as well as to compute the holomorphic low-energy
action for the N = 2 SU(2) pure super Yang-Mills theory and the leading
non-holomorphic low-energy correction for N = 4 SU(2) super Yang-Mills theory.
|
Here, we introduce a price-formation model where a large number of small
players can store and trade electricity. Our model is a constrained mean-field
game (MFG) where the price is a Lagrange multiplier for the supply vs. demand
balance condition. We establish the existence of a unique solution using a
fixed-point argument. In particular, we show that the price is well-defined and
it is a Lipschitz function of time. Then, we study linear-quadratic models that
can be solved explicitly and compare our model with real data.
|
We present a series of measurements based on K -> pi+pi- and K -> pi0pi0
decays collected in 1996-1997 by the KTeV experiment (E832) at Fermilab. We
compare these four K -> pipi decay rates to measure the direct CP violation
parameter Re(e'/e) = (20.7 +- 2.8) x 10^-4. We also test CPT symmetry by
measuring the relative phase between the CP violating and CP conserving decay
amplitudes for K->pi+pi- (phi+-) and for K -> pi0pi0 (phi00). We find the
difference between the relative phases to be Delta-phi = phi00 - phi+- = (+0.39
+- 0.50) degrees and the deviation of phi+- from the superweak phase to be
phi+- - phi_SW =(+0.61 +- 1.19) degrees; both results are consistent with CPT
symmetry. In addition, we present new measurements of the KL-KS mass difference
and KS lifetime: Delta-m = (5261 +- 15) x 10^6 hbar/s and tauS = (89.65 +-
0.07) x 10^-12 s.
|
We study counting-regular languages -- these are languages $L$ for which
there is a regular language $L'$ such that the number of strings of length $n$
in $L$ and $L'$ are the same for all $n$. We show that the languages accepted
by unambiguous nondeterministic Turing machines with a one-way read-only input
tape and a reversal-bounded worktape are counting-regular. Many one-way
acceptors are a special case of this model, such as reversal-bounded
deterministic pushdown automata, reversal-bounded deterministic queue automata,
and many others, and therefore all languages accepted by these models are
counting-regular. This result is the best possible in the sense that the claim
does not hold for $2$-ambiguous PDAs, for unambiguous PDAs with no reversal
bound, or for other models.
We also study closure properties of counting-regular languages, and we study
decidability problems with regard to counting-regularity. For example, it is
shown that the counting-regularity of even some restricted subclasses of PDAs
is undecidable. Lastly, $k$-slender languages -- where there are at most $k$
words of any length -- are also studied. Amongst other results, it is shown
that it is decidable whether a language in any semilinear full trio is
$k$-slender.
|
We propose a new mechanism for rendering dark matter self-interacting in the
presence of a massive spin-2 mediator. The derived Yukawa-type potential for
dark matter is independent of the spins of dark matter in the leading order of
the momentum expansion, so are the resulting non-perturbative effects for the
dark matter self-scattering. We find that both the Born cross section and
relatively mild resonance effects assist to make the self-scattering cross
section velocity-dependent. We discuss how to evade the current indirect bounds
on dark matter annihilations and show that the model is marginally compatible
with perturbative unitarity in the ghost-free realization of the massive spin-2
particle.
|
A recently proposed canonical form of Boolean functions, namely tagged
sentential decision diagrams (TSDDs), exploits both the standard and
zero-suppressed trimming rules. The standard ones minimize the size of
sentential decision diagrams (SDDs) while the zero-suppressed trimming rules
have the same objective as the standard ones but for zero-suppressed sentential
decision diagrams (ZSDDs). The original TSDDs, which we call zero-suppressed
TSDDs (ZTSDDs), first fully apply the zero-suppressed trimming rules and then
the standard ones. In this paper, we present a variant of TSDDs, which we call
standard TSDDs (STSDDs), obtained by reversing the order of the trimming rules.
We then prove the canonicity of STSDDs and present algorithms for binary
operations on TSDDs. In addition, we offer two kinds of implementations of
STSDDs and ZTSDDs, and obtain three variations of the original TSDDs.
Experimental evaluations demonstrate that the four versions of TSDDs have a
size advantage over SDDs and ZSDDs.
|
Medical images can be decomposed into normal and abnormal features, a property
referred to as compositionality. Based on this idea, we propose an
encoder-decoder network to decompose a medical image into two discrete latent
codes: a normal anatomy code and an abnormal anatomy code. Using these latent
codes, we demonstrate a similarity retrieval by focusing on either normal or
abnormal features of medical images.
|
We study the boundaries of the positroid cells which arise from N = 4 super
Yang Mills theory. Our main tool is a new diagrammatic object which generalizes
the Wilson loop diagrams used to represent interactions in the theory. We prove
conditions under which these new generalized Wilson loop diagrams correspond to
positroids and give an explicit algorithm to calculate the Grassmann necklace
of said positroids. Then we develop a graphical calculus operating directly on
noncrossing generalized Wilson loop diagrams. In this paradigm, applying
diagrammatic moves to a generalized Wilson loop diagram results in new diagrams
that represent boundaries of its associated positroid, without passing through
cryptomorphisms. We provide a Python implementation of the graphical calculus
and use it to show that the boundaries of positroids associated to ordinary
Wilson loop diagrams are generated by our diagrammatic moves in certain cases.
|
The goal of information-seeking dialogue is to respond to seeker queries with
natural language utterances that are grounded on knowledge sources. However,
dialogue systems often produce unsupported utterances, a phenomenon known as
hallucination. To mitigate this behavior, we adopt a data-centric solution and
create FaithDial, a new benchmark for hallucination-free dialogues, by editing
hallucinated responses in the Wizard of Wikipedia (WoW) benchmark. We observe
that FaithDial is more faithful than WoW while also maintaining engaging
conversations. We show that FaithDial can serve as a training signal for: i) a
hallucination critic, which discriminates whether an utterance is faithful or
not, and boosts performance by 12.8 F1 points on the BEGIN benchmark
compared to existing datasets for dialogue coherence; ii) high-quality dialogue
generation. We benchmark a series of state-of-the-art models and propose an
auxiliary contrastive objective that achieves the highest level of faithfulness
and abstractiveness based on several automated metrics. Further, we find that
the benefits of FaithDial generalize to zero-shot transfer on other datasets,
such as CMU-Dog and TopicalChat. Finally, human evaluation reveals that
responses generated by models trained on FaithDial are perceived as more
interpretable, cooperative, and engaging.
|
We approach the problem of improving robustness of deep learning algorithms
in the presence of label noise. Building upon existing label correction and
co-teaching methods, we propose a novel training procedure to mitigate the
memorization of noisy labels, called CrossSplit, which uses a pair of neural
networks trained on two disjoint parts of the labelled dataset. CrossSplit
combines two main ingredients: (i) Cross-split label correction. The idea is
that, since the model trained on one part of the data cannot memorize
example-label pairs from the other part, the training labels presented to each
network can be smoothly adjusted by using the predictions of its peer network;
(ii) Cross-split semi-supervised training. A network trained on one part of the
data also uses the unlabeled inputs of the other part. Extensive experiments on
CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that
our method can outperform the current state-of-the-art in a wide range of noise
ratios.
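A schematic version of the cross-split label-correction step might look as
follows (an illustrative PyTorch sketch; the exact mixing rule, coefficient and
training loop of CrossSplit are not reproduced here):

import torch
import torch.nn.functional as F

def corrected_targets(noisy_labels, peer_logits, num_classes, alpha=0.5):
    """noisy_labels: (N,) int tensor; peer_logits: (N, C) from the peer network."""
    one_hot = F.one_hot(noisy_labels, num_classes).float()
    peer_probs = peer_logits.softmax(dim=-1).detach()   # peer saw the other data split
    # Smoothly mix the given (possibly noisy) label with the peer's prediction.
    return alpha * one_hot + (1.0 - alpha) * peer_probs

def soft_ce(student_logits, soft_targets):
    # Cross-entropy against soft targets produced above.
    return -(soft_targets * student_logits.log_softmax(dim=-1)).sum(dim=-1).mean()

Because the peer network never trains on these example-label pairs, its
predictions are less likely to reflect memorized noise, which is the intuition
behind the correction.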
|
We present results from XMM-Newton observations of the obscured QSO 1SAX
J1218.9+2958. We find that the previously reported optical and soft X-ray
counterpart positions are incorrect. However we confirm the spectroscopic
redshift of 0.176. The optical counterpart has a K magnitude of 13.5 and an R-K
colour of 5.0 and is therefore a bright extremely red object (ERO). The X-ray
spectrum is well described by a power-law ($\Gamma=2.0\pm0.2$) absorbed by an
intrinsic neutral column density of $8.2^{+1.1}_{-0.7}\times 10^{22} cm^{-2}$.
We find that any scattered emission contributes at most 0.5 percent to the
total X-ray flux. From the optical/near-IR colour we estimate that the active
nucleus must contribute at least 50 percent of the total flux in the K band and
that the ratio of extinction to X-ray absorption is 0.1-0.7 times that expected
from a Galactic dust-gas ratio and extinction curve. If 1SAX J1218.9+2958 were
100 times less luminous it would be indistinguishable from the population
responsible for most of the 2-10 keV X-ray background. This has important
implications for the optical/IR properties of faint absorbed X-ray sources.
|
The factorial hull of the projective variety X (or its cone) is a graded
algebra R(X) that can be used in some situations to consider simultaneously all
divisor classes on X. Associated with X is a certain cone H in the divisor
class group Cl(X) of X. We calculate H explicitly in the case where X is the
wonderful embedding (plongement magnifique) of the simple group G. In this case
the elements of H can be represented by semisimple algebraic monoids.
|
It is known that the volume functional $\,\phi\mapsto\int e^{-\phi}\,$
satisfies certain concavity or convexity inequalities with respect to three of
the four linear structures induced by the order isomorphisms acting on
${\rm{Cvx}}_0({\mathbb R}^n)$. In this note we define the fourth linear
structure on ${\rm{Cvx}}_0({\mathbb R}^n)$ as the pullback of the standard
linear structure under the ${\cal J}$ transform. We show that, interpolating
with respect to this linear structure, no concavity or convexity inequalities
hold, and prove that a quasi-convexity inequality is violated only by up to a
factor of $2$. We also establish all the order relations which the four
different interpolations satisfy.
|
Investigating the relationship between structure and dynamical processes is a
central goal in condensed matter physics. Perhaps the most noted relationship
between the two is the phenomenon of de Gennes narrowing, in which relaxation
times in liquids are proportional to the scattering structure factor. Here a
similar relationship is discovered during the self-organized ion beam
nanopatterning of silicon using coherent x-ray scattering. However, in contrast
to the exponential relaxation of fluctuations in classic de Gennes narrowing,
the dynamic surface exhibits a wide range of behaviors as a function of length
scale, with a compressed exponential relaxation at lengths corresponding to the
dominant structural motif - self-organized nanoscale ripples. These behaviors
are reproduced in simulations of a nonlinear model describing the surface
evolution. We suggest that the compressed exponential behavior observed here is
due to the morphological persistence of the self-organized surface ripple
patterns which form and evolve during ion beam nanopatterning.
|
The field of query-by-example aims at inferring queries from output examples
given by non-expert users, by finding the underlying logic that binds the
examples. However, for a very small set of examples, it is difficult to
correctly infer such logic. To bridge this gap, previous work suggested
attaching explanations to each output example, modeled as provenance, allowing
users to explain the reason behind their choice of example. In this paper, we
explore the problem of inferring queries from a few output examples and
intuitive explanations. We propose a two-step framework: (1) convert the
explanations into (partial) provenance and (2) infer a query that generates the
output examples using a novel algorithm that employs a graph-based approach.
This framework is suitable for non-experts as it does not require the
specification of the provenance in its entirety or an understanding of its
structure. We show promising initial experimental results of our approach.
|
Quantum Hall effect (QHE) is a macroscopic manifestation of quantized states
which only occurs in confined two-dimensional electron gas (2DEG) systems.
Experimentally, QHE is hosted in high mobility 2DEG with large external
magnetic field at low temperature. Two-dimensional van der Waals materials,
such as graphene and black phosphorus, are considered interesting material
systems for studying quantum transport, because they could unveil unique host
material properties thanks to the easy accessibility of monolayer or few-layer
thin films at the 2D quantum limit. Here, for the first time, we report the
direct observation of QHE in a novel low-dimensional material system:
tellurene. High-quality 2D tellurene thin films were acquired using a recently
reported hydrothermal method, with a high hole mobility of nearly 3,000 cm^2/Vs
at low temperatures, which allows the observation of well-developed
Shubnikov-de Haas (SdH) oscillations and QHE. A four-fold degeneracy of Landau
levels in SdH oscillations and QHE was revealed. Quantum oscillations were
investigated under different gate biases, tilted magnetic fields and various
temperatures, and the results manifest the inherent information of the
electronic structure of Te. Anomalies in both temperature-dependent oscillation
amplitudes and transport characteristics were observed which are ascribed to
the interplay between Zeeman effect and spin-orbit coupling as depicted by the
density functional theory (DFT) calculations.
|
Non-Hermitian topological phases can produce some remarkable properties
compared with their Hermitian counterparts, such as the breakdown of
conventional bulk-boundary correspondence and the non-Hermitian topological
edge mode. Here, we introduce several algorithms based on the multi-layer
perceptron (MLP) and the convolutional neural network (CNN), from the field of
deep learning, to predict the winding of the eigenvalues of non-Hermitian
Hamiltonians. Subsequently, we
use the smallest module of the periodic circuit as one unit to construct
high-dimensional circuit data features. Further, we use the Dense Convolutional
Network (DenseNet), a type of convolutional neural network that utilizes dense
connections between layers, to design a non-Hermitian topolectrical Chern
circuit, since the DenseNet algorithm is more suitable for processing
high-dimensional data. Our results demonstrate the effectiveness of the deep
learning network in capturing the global topological characteristics of a
non-Hermitian system based on training data.
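As an illustration of the kind of label such a network would learn (not the
authors' data pipeline), the spectral winding number of a single-band
Hatano-Nelson chain $H(k) = t_R e^{ik} + t_L e^{-ik}$ can be computed from the
accumulated phase of $E(k) - E_{\rm base}$ around the Brillouin zone; pairs of
Hamiltonian parameters and such labels could then be fed to an MLP or CNN.

import numpy as np

def spectral_winding(t_left, t_right, e_base=0.0, nk=401):
    # Closed loop over the Brillouin zone, k = 0 and k = 2*pi both included.
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    ek = t_right * np.exp(1j * k) + t_left * np.exp(-1j * k)   # PBC spectrum
    phase = np.unwrap(np.angle(ek - e_base))
    return int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))

print(spectral_winding(0.5, 1.0))   # expected +1 (right hopping dominant)
print(spectral_winding(1.0, 0.5))   # expected -1 (left hopping dominant)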
|
An example of the application of the specialized computer algebra system Grg-EC
to the search for solutions of the source-free Maxwell and Einstein--Maxwell
equations is demonstrated. The solution involving five arbitrary functions of
two variables is presented in explicit form (up to quadratures). Emphasis is
placed on characterizing the software used.
|
The theory of mean field electrodynamics for diffusive processes in Electron
Magnetohydrodynamic (EMHD) model is presented. In contrast to
Magnetohydrodynamics (MHD) the evolution of magnetic field here is governed by
a nonlinear equation in the magnetic field variables. A detailed description of
diffusive processes in two dimensions is presented in this paper. In
particular, it is shown analytically that the turbulent magnetic field
diffusivity is suppressed relative to naive quasilinear estimates. It is shown
that for complete whistlerization of the spectrum, the turbulent diffusivity
vanishes. The question of whistlerization of the turbulent spectrum is
investigated numerically, and a reasonable tendency towards whistlerization is
observed. Numerical studies also show the suppression of magnetic field
diffusivity in accordance with the analytical estimates.
|
A novel factorization formula is presented for the longitudinal structure
function $F_L$ near the elastic region $x \to 1$ of deeply inelastic
scattering. In moment space this formula can resum all contributions to $F_L$
that are of order $\ln^k N/N$. This is achieved by defining a new jet function
which probes the transverse momentum of the struck parton in the target at
leading twist. The anomalous dimension $\gamma_{J^\prime}$ of this new jet
operator generates in moment space the logarithmic enhancements coming from the
fragmentation of the current jet in the final state. It is also shown how the
suggested factorization for $F_L$ is related to the corresponding one for $F_2$
in the same kinematic region.
|
Deep Q-Network (DQN) based multi-agent systems (MAS) for reinforcement
learning (RL) use various schemes wherein the agents have to learn and
communicate. The learning is, however, specific to each agent, and communication
may be satisfactorily designed for the agents. As more complex Deep Q-Networks
come to the fore, the overall complexity of the multi-agent system increases,
leading to issues like difficulty in training, the need for more resources and
training time, difficulty in fine-tuning, etc. To address these issues we
propose a simple but efficient DQN-based MAS for RL which uses shared state and
rewards, but agent-specific actions, to update the experience replay pool
of the DQNs, where each agent is a DQN. The benefits of the approach are
overall simplicity, faster convergence and better performance compared to
conventional DQN-based approaches. It should be noted that the method can be
extended to any DQN. As such, we use simple DQN and DDQN (Double Q-learning),
respectively, on three separate tasks, i.e. CartPole-v1 (OpenAI Gym
environment), LunarLander-v2 (OpenAI Gym environment) and Maze Traversal
(customized environment). The proposed approach outperforms the baseline on
these tasks by decent margins.
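A schematic of the shared-state, shared-reward replay update described above
might look like this (a sketch under assumed interfaces for the environment and
agents, omitting the networks and the DQN/DDQN update itself):

import random
from collections import deque

class ReplayDQNAgent:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def act(self, state):                 # placeholder policy (e.g. epsilon-greedy)
        return random.randrange(2)

    def remember(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def train_step(self):                 # DQN / DDQN update would go here
        pass

def run_episode(env, agents):
    state, done = env.reset(), False
    while not done:
        actions = [agent.act(state) for agent in agents]       # agent-specific actions
        next_state, reward, done = env.step(actions)           # shared state and reward
        for agent, action in zip(agents, actions):
            agent.remember(state, action, reward, next_state, done)
            agent.train_step()
        state = next_state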
|
Exploiting $B$-meson decays for Standard Model tests and beyond requires a
precise understanding of the strong final-state interactions that can be
provided model-independently by means of dispersion theory. This formalism
allows one to deduce the universal pion-pion final-state interactions from the
accurately known $\pi\pi$ phase shifts and, in the scalar sector, a
coupled-channel treatment with the kaon-antikaon system. In this work an
analysis of the decays $\bar B_d^0 \to J/\psi \pi^+\pi^-$ and $\bar B_s^0 \to
J/\psi \pi^+\pi^-$ is presented. We find very good agreement with the data up
to 1.05 GeV with a number of parameters reduced significantly compared to a
phenomenological analysis. In addition, the phases of the amplitudes are
correct by construction, a crucial feature when it comes to studies of $CP$
violation in heavy-meson decays.
|