We consider the weak convergence of numerical methods for stochastic
differential equations (SDEs). Weak convergence is usually expressed in terms
of the convergence of expected values of test functions of the trajectories.
Here we present an alternative formulation of weak convergence in terms of the
well-known Prokhorov metric on spaces of random variables. For a general class
of methods, we establish bounds on the rates of convergence in terms of the
Prokhorov metric. In doing so, we revisit the original proofs of weak
convergence and show explicitly how the bounds on the error depend on the
smoothness of the test functions. As an application of our result, we use the
Strassen-Dudley theorem to show that the numerical approximation and the true
solution to the system of SDEs can be re-embedded in a probability space in
such a way that the method converges there in a strong sense. One corollary of
this last result is that the method converges in the Wasserstein distance,
another metric on spaces of random variables. Another corollary establishes
rates of convergence for expected values of test functions assuming only local
Lipschitz continuity. We conclude with a review of the existing results for
pathwise convergence of weakly converging methods and the corresponding strong
results available under re-embedding.
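For reference, the two ingredients named above are standard (not specific to this paper): the Prokhorov metric between the laws $\mu$, $\nu$ of random variables on a metric space $(S,d)$ is
$$
\pi(\mu,\nu)=\inf\bigl\{\varepsilon>0:\ \mu(A)\le\nu(A^{\varepsilon})+\varepsilon\ \text{for all Borel sets }A\bigr\},
$$
where $A^{\varepsilon}$ is the open $\varepsilon$-neighbourhood of $A$, and the Strassen-Dudley theorem states that $\pi(\mu,\nu)\le\varepsilon$ exactly when there exist $X\sim\mu$ and $Y\sim\nu$ on a common probability space with $\mathbb{P}(d(X,Y)>\varepsilon)\le\varepsilon$; it is this coupling that permits the re-embedding described above.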
|
The weak-field Hall resistance Rxy(T) of the 2D electron system in Si was
measured over the temperature range 1-35 K, at densities for which the diagonal
resistivity exhibits a ``metallic'' behavior. The Rxy(T) dependence was found
to be non-monotonic, with a maximum at temperatures Tm~0.16Tf. The Rxy(T)
variations in the low-temperature domain (T<Tm) agree qualitatively with a
semiclassical model that takes into account solely the thermal broadening of
the Fermi distribution. The semiclassical result considerably exceeds the
interaction-induced quantum correction. In the ``high-temperature'' domain
(T>Tm), the Rxy(T) dependence can be qualitatively explained in terms of either
a semiclassical T-dependence of the transport time or thermal activation of
carriers from a localized band.
|
This document provides some technical notes on the polar field correction
scheme for the HMI synoptic maps and daily updated synchronic frames. It is
intended as a reference for the new data products and for some minor updates on
our previous scheme for MDI (Sun et al. 2011).
|
This paper develops a model of a miniature power supply based on a
piezoelectric cantilever. The future aim is further hybrid integration and the
use of nanotechnology. The article belongs to the category of renewable energy
sources, with conversion of ambient energy into electrical energy. The work
focuses on applications involving small temperature differences.
|
We formulate singular classical theories without involving constraints.
Applying the action principle for the action (27) we develop a partial (in the
sense that not all velocities are transformed to momenta) Hamiltonian formalism
in the initially reduced phase space (with the canonical coordinates
$q_{i},p_{i}$, where the number $n_{p}$ of momenta $p_{i}$, $i=1,\ldots,n_{p}$
(17) is arbitrary $n_{p}\leq n$, where $n$ is the dimension of the
configuration space), in terms of the partial Hamiltonian $H_{0}$ (18) and
$(n-n_{p})$ additional Hamiltonians $H_{\alpha}$, $\alpha=n_{p}+1,\ldots,n$ (20).
We obtain $(n-n_{p}+1)$ Hamilton-Jacobi equations (25)-(26). The equations of
motion are first order differential equations (33)-(34) with respect to
$q_{i},p_{i}$ and second order differential equations (35) for $q_{\alpha}$. If
$H_{0}$, $H_{\alpha}$ do not depend on $\dot{q}_{\alpha}$ (42), then the second
order differential equations (35) become algebraic equations (43) with respect
to $\dot{q}_{\alpha}$. We interpret $q_{\alpha}$ as additional times by (45),
and arrive at a multi-time dynamics. The above independence is satisfied in
singular theories and $r_{W}\leq n_{p}$ (58), where $r_{W}$ is the Hessian
rank. If $n_{p}=r_{W}$, then there are no constraints. A classification of the
singular theories is given by analyzing system (62) in terms of
$F_{\alpha\beta}$ (63). If its rank is full, then we can solve the system (62);
if not, some of $\dot{q}_{\alpha}$ remain arbitrary (a sign of a gauge theory).
We define new antisymmetric brackets (69) and (80) and present the equations of
motion in the Hamilton-like form, (67)-(68) and (81)-(82) respectively. The
origin of the Dirac constraints in our framework is shown: if we define extra
momenta $p_{\alpha}$ by (86), then we obtain the standard primary constraints
(87), and the new brackets transform to the Dirac bracket. Quantization is
discussed.
|
This contribution summarizes some of the important theoretical progress that
has been made in the arena of electroweak physics at hadron colliders. The
focus is on developments that have sharpened theoretical predictions for final
states produced through electroweak processes. Special attention is paid to new
results that have been presented in the last year, since LHCP2015, as well as
on key issues for future measurements at the LHC.
|
We numerically study the system of rapidly rotating Bose atoms at the filling
factor (ratio of particle number to vortex number) $\nu=1$ with the dipolar
interaction. A moderate dipolar interaction stabilizes the incompressible
quantum liquid at $\nu=1$. Further strengthening of the dipolar interaction
induces its collapse. The state after the collapse is a compressible state
with stripe and bubble phases. There are two types of bubbles with different
arrays. We also investigate models constructed from truncated interactions, as
well as models with a three-body contact interaction. These also exhibit
stripe and bubble phases.
|
We argue that flat space amplitudes for the process $2 \to n$ gravitons at
center of mass energies $\sqrt{s}$ much less than the Planck scale will
coincide approximately with amplitudes calculated from correlators of a
boundary CFT for AdS space of radius $R\gg L_P$, only when $n < R/L_P$. For
larger values of $n$ in AdS space, insisting that all the incoming energy
enters "the arena" [arXiv:hep-th/9901079], implies the production of black
holes, whereas there is no black hole production in the flat space amplitudes.
We also argue, from unitarity, that flat space amplitudes for all $n$ are
necessary to reconstruct the complicated singularity at zero momentum in the $2
\to 2$ amplitude, which can therefore not be reproduced as the limit of an AdS
calculation. Applying similar reasoning to amplitudes for real black hole
production in flat space, we argue that unitarity of the flat space S-matrix
cannot be assessed or inferred from properties of CFT correlators.
|
We investigate the exclusive semileptonic $B_c\to
(D,\eta_c,B,B_s)\ell\nu_\ell$ and $\eta_b\to B_c\ell\nu_\ell$ ($\ell=e,\mu,\tau$)
decays using the light-front quark model constrained by the variational
principle for the QCD motivated effective Hamiltonian. The form factors
$f_+(q^2)$ and $f_-(q^2)$ are obtained from the analytic continuation method in
the $q^+=0$ frame. While the form factor $f_+(q^2)$ is free of zero-mode
contributions in the $q^+=0$ frame, the form factor $f_-(q^2)$ is not.
We quantify the zero-mode contributions to $f_-(q^2)$ for various semileptonic
$B_c$ decays. Using our effective method to relate the non-wave function vertex
to the light-front valence wave function, we incorporate the zero-mode
contribution as a convolution of zero-mode operator with the initial and final
state wave functions. Our results are then compared to the available
experimental data and the results from other theoretical approaches. Since the
prediction for the magnetic dipole $B^*_c\to B_c+\gamma$ decay turns out to be
very sensitive to the mass difference between $B^*_c$ and $B_c$ mesons, the
decay width $\Gamma(B^*_c \to B_c \gamma)$ may help in determining the mass of
$B^*_c$ experimentally. Furthermore, we compare the results from the harmonic
oscillator potential and the linear potential and identify the decay processes
that are sensitive to the choice of confining potential. From the future
experimental data on these sensitive processes, one may obtain more realistic
information on the potential between quark and antiquark in the heavy meson
system.
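For reference, the standard Lorentz decomposition that defines $f_{\pm}(q^2)$ for a pseudoscalar-to-pseudoscalar transition (a textbook convention, not a result of this paper) is
$$
\langle P_{2}(p_{2})|\bar{q}\,\gamma^{\mu}Q|P_{1}(p_{1})\rangle
= f_{+}(q^{2})\,(p_{1}+p_{2})^{\mu}+f_{-}(q^{2})\,q^{\mu},
\qquad q^{\mu}=(p_{1}-p_{2})^{\mu}.
$$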
|
This technical report presents the 2nd-place winning model for AQTC, a task
newly introduced in the CVPR 2022 LOng-form VidEo Understanding (LOVEU)
challenges. This challenge involves multi-step answers, multi-modal inputs, and
diverse and changing button representations in video. We address this problem
by proposing a new context-ground module attention mechanism for more effective
feature mapping. In addition, we perform an analysis over the number of buttons
and an ablation study of different step networks and video features. As a
result, we achieved overall 2nd place in LOVEU competition track 3, and
specifically 1st place in two out of four evaluation metrics. Our code is
available at https://github.com/jaykim9870/CVPR-22_LOVEU_unipyler.
|
Observations suggest that the structural parameters of disk galaxies have not
changed greatly since redshift 1. We examine whether these observations are
consistent with a cosmology in which structures form hierarchically. We use
SPH/N-body galaxy-scale simulations to simulate the formation and evolution of
Milky-Way-like disk galaxies by fragmentation, followed by hierarchical
merging. The simulated galaxies have a thick disk that forms during a period of
chaotic merging at high redshift, in which a large amount of alpha-elements is
produced, and a thin disk that forms later and has a higher metallicity. Our
simulated disks settle down quickly and have not evolved much since redshift
z~1, mostly because no major mergers take place between z=1 and z=0. During
this period, the disk radius increases (inside-out growth) while its thickness
remains constant. These results are consistent with observations of disk
galaxies at low and high redshift.
|
Reconstructing dynamic 3D scenes from 2D images and generating diverse views
over time presents a significant challenge due to the inherent complexity and
temporal dynamics involved. While recent advancements in neural implicit models
and dynamic Gaussian Splatting have shown promise, limitations persist,
particularly in accurately capturing the underlying geometry of highly dynamic
scenes. Some approaches address this by incorporating strong semantic and
geometric priors through diffusion models. However, we explore a different
avenue by investigating the potential of regularizing the native warp field
within the dynamic Gaussian Splatting framework. Our method is grounded in the
key intuition that an accurate warp field should produce continuous space-time
motions. While enforcing motion constraints on warp fields is non-trivial, we
show that we can exploit knowledge innate to the forward warp field network to
derive an analytical velocity field, which we then integrate in time to obtain
scene flows that effectively constrain both the 2D motion and 3D positions of
the Gaussians.
This derived Lucas-Kanade style analytical regularization enables our method to
achieve superior performance in reconstructing highly dynamic scenes, even
under minimal camera movement, extending the boundaries of what existing
dynamic Gaussian Splatting frameworks can achieve.
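A minimal sketch of this velocity-field idea, assuming a PyTorch 2.x forward warp network warp(x, t) that maps canonical points and a time to warped positions (the names and the Euler integrator are ours, not the authors'):

```python
import torch
from torch.func import jvp

def velocity_field(warp, x, t):
    """Analytical velocity v(x, t) = dW/dt of a forward warp network W(x, t),
    via one forward-mode JVP with a unit tangent on the time input."""
    _, v = jvp(lambda tt: warp(x, tt), (t,), (torch.ones_like(t),))
    return v  # same shape as the warped positions, e.g. (N, 3)

def scene_flow(warp, x, t0, t1, steps=8):
    """Approximate the scene flow between t0 and t1 by midpoint-Euler
    integration of the velocity field (flow ~ W(x, t1) - W(x, t0))."""
    n = x.shape[0]
    flow = torch.zeros(n, 3)
    dt = (t1 - t0) / steps
    for k in range(steps):
        tk = torch.full((n, 1), t0 + (k + 0.5) * dt)
        flow = flow + dt * velocity_field(warp, x, tk)
    return flow  # used to constrain 2D motion and 3D Gaussian positions
```

Forward-mode autodiff yields the exact analytical velocity in a single extra pass, avoiding finite-difference approximations of the warp.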
|
We investigate theoretically and numerically the light-matter interaction in
a two-level system (TLS) as a model system for excitation in a solid-state band
structure. We identify five clearly distinct excitation regimes, categorized
with well-known adiabaticity parameters: (1) the perturbative multiphoton
absorption regime for small driving field strengths, and four light
field-driven regimes, where intraband motion connects different TLS: (2) the
impulsive Landau-Zener (LZ) regime, (3) the non-impulsive LZ regime, (4) the
adiabatic regime and (5) the adiabatic-impulsive regime for large electric
field strengths. This categorization is tremendously helpful to understand the
highly complex excitation dynamics in any TLS, in particular when the driving
field strength varies, and naturally connects Rabi physics with Landau-Zener
physics. In addition, we find an insightful analytical expression for the
photon orders connecting the perturbative multiphoton regime with the light
field-driven regimes. Moreover, in the adiabatic-impulsive regime, adiabatic
motion and impulsive LZ transitions are equally important, leading to an
inversion symmetry breaking of the TLS when applying few-cycle laser pulses.
This categorization allows a deep understanding of driven TLS in a large
variety of settings ranging from cold atoms and molecules to solids and qubits,
and will help to find optimal driving parameters for a given purpose.
|
It is shown in this paper that suitable weak solutions to the 6D steady
incompressible Navier-Stokes equations are H\"{o}lder continuous at $0$
provided that $\int_{B_1}|u(x)|^3\,dx+\int_{B_1}|f(x)|^q\,dx$ or
$\int_{B_1}|\nabla u(x)|^2\,dx+\int_{B_1}|\nabla
u(x)|^2\,dx\left(\int_{B_1}|u(x)|\,dx\right)^2+\int_{B_1}|f(x)|^q\,dx$ with
$q>3$ is sufficiently small, which implies that the 2D Hausdorff measure of the
set of
singular points is zero. For the boundary case, we obtain that $0$ is regular
provided that $\int_{B_1^+} |u(x)|^3 dx + \int_{B_1^+} |f(x)|^3 dx$ or
$\int_{B_1^+} |\nabla u(x)|^2 dx + \int_{B_1^+} |f(x)|^3 dx$ is sufficiently
small. These results improve previous regularity theorems by Dong-Strain
(\cite{DS}, Indiana Univ. Math. J., 2012), Dong-Gu (\cite{DG2}, J. Funct.
Anal., 2014), and Liu-Wang (\cite{LW}, J. Differential Equations, 2018), where
either the smallness of the pressure or the smallness on all balls is
necessary.
|
We present numerical results for the gravitational self-force and redshift
invariant calculated in the Regge-Wheeler and Easy gauges for circular orbits
in a Schwarzschild background, utilizing the regularization framework
introduced by Pound, Merlin, and Barack. The numerical calculation is performed
in the frequency domain and requires the integration of a single second-order
ODE, greatly improving computation times over more traditional Lorenz gauge
numerical methods. A sufficiently high-order, analytic expansion of the
Detweiler-Whiting singular field is gauge-transformed to both the Regge-Wheeler
and Easy gauges and used to construct tensor-harmonic mode-sum regularization
parameters. We compare our results to the gravitational self-force calculated
in the Lorenz gauge by explicitly gauge-transforming the Lorenz gauge
self-force to the Regge-Wheeler and Easy gauges, and find that our results
agree to a relative accuracy of $10^{-15}$ for an orbital radius of $r_0=6M$
and $10^{-16}$ for an orbital radius of $r_0=10M$.
|
The equivalent electrical circuit of the Ebers-Moll type is introduced for
magnetic bipolar transistors. In addition to conventional diodes and current
sources, the new circuit comprises two novel elements due to spin-charge
coupling. A classification scheme of the operating modes of magnetic bipolar
transistors in the low bias regime is presented.
|
We compute the fully non-linear Cosmic Microwave Background (CMB)
anisotropies on scales larger than the horizon at last-scattering in terms of
only the curvature perturbation, providing a generalization of the linear
Sachs-Wolfe effect at any order in perturbation theory. We show how to compute
the n-point connected correlation functions of the large-scale CMB anisotropies
for generic primordial seeds provided by standard slow-roll inflation as well
as the curvaton and other scenarios for the generation of cosmological
perturbations. As an application of our formalism, we compute the three- and
four-point connected correlation functions whose detection in future CMB
experiments might be used to assess the level of primordial non-Gaussianity,
giving the theoretical predictions for the parameters of quadratic and cubic
non-linearities f_NL and g_NL.
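For orientation, $f_{NL}$ and $g_{NL}$ conventionally parametrize local quadratic and cubic non-linearities of the primordial gravitational potential (sign and normalization conventions vary between papers):
$$
\Phi=\Phi_{L}+f_{NL}\bigl(\Phi_{L}^{2}-\langle\Phi_{L}^{2}\rangle\bigr)+g_{NL}\,\Phi_{L}^{3},
$$
with $\Phi_{L}$ Gaussian, so that the three-point function is proportional to $f_{NL}$ and the connected four-point function to $g_{NL}$ (plus terms of order $f_{NL}^{2}$).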
|
A hundred years ago, the quantum concept provoked a revolution in science and
the search for a new conceptual basis for the whole of physics, as emphasized
by Einstein. In this paper, I discuss the essential features of Planck's works
of 1900 on blackbody radiation and the hypothesis of energy quantization.
|
There is something unknown in the cosmos. Something big, which causes the
acceleration of the Universe's expansion, perhaps the most surprising and
unexpected discovery of the last decades, and thus one of the most pressing
mysteries of the Universe. The current standard $\Lambda$CDM model
uses two unknown entities to make everything fit: dark energy and dark matter,
which together would constitute more than 95% of the energy density of the
Universe. A bit like saying that we have understood almost nothing, but without
openly admitting it. Here we start from the recent theoretical results that
come from the extension of general relativity to antimatter, through CPT
symmetry. This theory predicts a mutual gravitational repulsion between matter
and antimatter. Our basic assumption is that the Universe contains equal
amounts of matter and antimatter, with antimatter possibly located in cosmic
voids, as discussed in previous works. From this scenario we develop a simple
cosmological model, from whose equations we derive the first results. While the
existence of the elusive dark energy is completely replaced by gravitational
repulsion, the presence of dark matter is not excluded, but not strictly
required, as most of the related phenomena can also be ascribed to
repulsive-gravity effects. With a matter energy density ranging from $\sim5\%$
(baryonic matter alone, and as much antimatter) to $\sim25\%$ of the so-called
critical density, the present age of the Universe varies between about 13 and
$15\rm\,Gyr$. The SN Ia test is successfully passed, with residuals comparable
with those of the $\Lambda$CDM model in the observed redshift range, but with a
clear prediction for fainter SNe at higher $z$. Moreover, this model has
neither horizon nor coincidence problems, and no initial singularity is
requested. In conclusion, we have replaced all the tough problems of the
current standard cosmology.
|
We present here the transformations required to recast the Robertson-Walker
metric and Friedmann-Robertson-Walker equations in terms of observer-dependent
coordinates for several commonly assumed cosmologies. The overriding motivation
is the derivation of explicit expressions for the radius R_h of our cosmic
horizon in terms of measurable quantities for each of the cases we consider. We
show that the cosmological time dt diverges for any finite interval ds
associated with a process at R -> R_h, which therefore represents a physical
limit to our observations. This is a key component required for a complete
interpretation of the data, particularly as they pertain to the nature of dark
energy. With these results, we affirm the conclusion drawn in our earlier work
that the identification of dark energy as a cosmological constant does not
appear to be consistent with the data.
|
The character change of a superfluid state due to the variation of the
attractive force is investigated in the relativistic framework with a massive
fermion. Two crossovers are found. One is a crossover from the usual BCS state
to the Bose-Einstein condensation (BEC) of bound fermion pairs. The other is
from the BEC to the relativistic Bose-Einstein condensation (RBEC) of nearly
massless bound pairs where antiparticles as well as particles dominate the
thermodynamics. Possible realization of the BEC and RBEC states in quark
matter is also pointed out.
|
In this essay we extend the standard discussion of neutrino oscillations to
astrophysical neutrinos propagating through expanding space. This extension
introduces a new cosmological parameter $I$ into the oscillation phase. The new
parameter records cosmic history in much the same manner as the redshift z or
the apparent luminosity D_L. Measuring $I$ through neutrino oscillations could
help determine cosmological parameters and discriminate among different
cosmologies.
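To sketch why a single integral can record cosmic history (our reconstruction under standard assumptions; the paper's conventions may differ): for a relativistic neutrino the flat-space two-flavour phase $\Delta\phi=\Delta m^{2}L/2E$ generalizes, with $E=E_{0}(1+z)$ along the path and $dt=dz/[(1+z)H(z)]$, to
$$
\Delta\phi=\frac{\Delta m^{2}}{2}\int\frac{dt}{E(t)}
=\frac{\Delta m^{2}}{2E_{0}}\int_{0}^{z}\frac{dz'}{(1+z')^{2}H(z')},
$$
so that the redshift integral supplies a cosmology-dependent oscillation parameter of the kind denoted $I$ above.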
|
We derive several new bounds for the problem of difference sets with local
properties, such as establishing the super-linear threshold of the problem. For
our proofs, we develop several new tools, including a variant of higher moment
energies and a Ramsey-theoretic approach for the problem.
|
The detailed knowledge of the inner skin temperature behavior is very
important to evaluate and manage the aging of large pipes in cooling systems.
We describe here a method to obtain this information as a function of outer
skin temperature measurements, in space and time. This goal is achieved by
mixing fine simulations and numerical methods such as impulse response and data
assimilation. Demonstration is done on loads representing extreme transient
stratification or thermal shocks. From a numerical point of view, the results
of the reconstruction are outstanding, with a mean accuracy of better than half
a percent of the temperature values of the thermal transient.
|
We study natural supersymmetric scenarios with light right-handed neutrino
superfields, and consider the possibility of having either a neutrino or a
sneutrino as a dark matter candidate. For the former, we evaluate the
possibility of SUSY corrections to the $\nu_4\to\nu_\ell\gamma$ decay rate,
such that the NuSTAR bounds are relaxed. We find that these corrections are too
small. For sneutrino dark matter, we consider thermal and non-thermal
production, taking into account freeze-out, freeze-in and super-WIMP
mechanisms. For the non-thermal case, we find that the $\tilde\nu_R$ can
reproduce the observed relic density by adjusting the R-sneutrino mass and
Yukawa couplings. For the thermal case, we find the need to extend the model in
order to enhance sneutrino annihilations, which we exemplify in a model with an
extended gauge symmetry.
|
This paper proposes and evaluates a novel architecture for a low-power
Time-to-Digital Converter with high resolution, optimized for both integration
in multichannel chips and high rate operation (40 Mconversion/s/channel). This
converter is based on a three-step architecture. The first step uses a counter
whereas the following ones are based on two kinds of Delay Line structures. A
programmable time amplifier is used between the second and third steps to reach
the final resolution of 24.4 ps in the standard mode of operation. The system
makes use of common continuously stabilized master blocks that control
trimmable slave blocks, in each channel, against the effects of global PVT
variations. Thanks to this structure, the power consumption of a channel is
considerably reduced when it does not process a hit, and limited to 2.2 mW when
it processes a hit. In the 130 nm CMOS technology used for the prototype, the
area of a TDC channel is only 0.051 mm2. This compactness combined with low
power consumption is a key advantage for integration in multi-channel front-end
chips. The performance of this new structure has been evaluated on prototype
chips. Measurements show excellent timing performance over a wide range of
operating temperatures (-40{\deg}C to 60{\deg}C) in agreement with our
expectations. For example, the measured timing integral nonlinearity is better
than 1 LSB (25 ps) and the overall timing precision is better than 21 ps RMS.
|
We compute the abelianisations of the mapping class groups of the manifolds
$W_g^{2n} = g(S^n \times S^n)$ for $n \geq 3$ and $g \geq 5$. The answer is a
direct sum of two parts. The first part arises from the action of the mapping
class group on the middle homology, and takes values in the abelianisation of
the automorphism group of the middle homology. The second part arises from
bordism classes of mapping cylinders and takes values in the quotient of the
stable homotopy groups of spheres by a certain subgroup which in many cases
agrees with the image of the stable $J$-homomorphism. We relate its calculation
to a purely homotopy theoretic problem.
|
We study a deep learning (DL) based limited feedback method for multi-antenna
systems. Deep neural networks (DNNs) are introduced to replace an end-to-end
limited feedback procedure, including the pilot-aided channel training process,
channel codebook design, and beamforming vector selection. The DNNs are trained
to yield binary feedback information as well as an efficient beamforming vector
which maximizes the effective channel gain. Compared to conventional limited
feedback schemes, the proposed DL method shows a 1 dB symbol error rate (SER)
gain with reduced computational complexity.
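A hedged sketch of such an end-to-end pipeline, under assumptions not stated in the abstract (a single-user MISO link, a fixed number of feedback bits, and a straight-through estimator for the binarization):

```python
import torch
import torch.nn as nn

class LimitedFeedbackNet(nn.Module):
    """User-side encoder emits binary feedback bits from the channel;
    base-station decoder maps the bits to a unit-norm beamformer."""
    def __init__(self, n_tx=8, n_bits=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * n_tx, 64), nn.ReLU(),
                                 nn.Linear(64, n_bits))
        self.dec = nn.Sequential(nn.Linear(n_bits, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * n_tx))
        self.n_tx = n_tx

    def forward(self, h):                          # h: (batch, n_tx), complex
        logits = self.enc(torch.cat([h.real, h.imag], dim=-1))
        soft = torch.sigmoid(logits)
        # Straight-through estimator: hard bits forward, soft gradient back.
        bits = (logits > 0).float() + soft - soft.detach()
        f_ri = self.dec(bits)
        f = torch.complex(f_ri[..., :self.n_tx], f_ri[..., self.n_tx:])
        return f / f.norm(dim=-1, keepdim=True)    # unit-power beamformer

def loss_effective_gain(h, f):
    """Negative effective channel gain |h^H f|^2, minimized during training."""
    return -(torch.abs((h.conj() * f).sum(dim=-1)) ** 2).mean()
```

Training end-to-end on channel samples jointly learns the implicit codebook (the bit mapping) and the beamformer selection that the conventional pipeline handles in separate stages.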
|
Collisionless, magnetized turbulence offers a promising framework for the
generation of non-thermal high-energy particles in various astrophysical sites.
Yet, the detailed mechanism that governs particle acceleration has remained
subject to debate. By means of 2D and 3D PIC, as well as 3D (incompressible)
magnetohydrodynamic (MHD) simulations, we test here a recent model of
non-resonant particle acceleration in strongly magnetized
turbulence~\cite{2021PhRvD.104f3020L}, which ascribes the energization of
particles to their continuous interaction with the random velocity flow of the
turbulence, in the spirit of the original Fermi model. To do so, we compare,
for a large number of particles that were tracked in the simulations, the
predicted and the observed histories of particle momenta. The predicted
history is that derived from the model, after extracting from the simulations,
at each point along the particle trajectory, the three force terms that control
acceleration: the acceleration of the field line velocity projected along the
field line direction, its shear projected along the same direction, and its
transverse compressive part. Overall, we find a clear correlation between the
model predictions and the numerical experiments, indicating that this
non-resonant model can successfully account for the bulk of particle
energization through Fermi-type processes in strongly magnetized turbulence. We
also observe that the parallel shear contribution tends to dominate the physics
of energization in the PIC simulations, while in the MHD incompressible
simulation, both the parallel shear and the transverse compressive term provide
about equal contributions.
|
K+ meson production in pA (A = C, Cu, Au) collisions has been studied using
the ANKE spectrometer at an internal target position of the COSY-Juelich
accelerator. The complete momentum spectrum of kaons emitted at forward angles,
theta < 12 degrees, has been measured for a beam energy of T(p)=1.0 GeV, far
below the free NN threshold of 1.58 GeV. The spectrum does not follow a thermal
distribution at low kaon momenta and the larger momenta reflect a high degree
of collectivity in the target nucleus.
|
Multimode optical fibres are hair-thin strands of glass that efficiently
transport light. They promise next-generation medical endoscopes that provide
unprecedented sub-cellular image resolution deep inside the body. However,
confining light to such fibres means that images are inherently scrambled in
transit. Conventionally, this scrambling has been compensated by
pre-calibrating how a specific fibre scrambles light and solving a stationary
linear matrix equation that represents a physical model of the fibre. However,
as the technology develops towards real-world deployment, the unscrambling
process must account for dynamic changes in the matrix representing the fibre's
effect on light, due to factors such as movement and temperature shifts, and
non-linearities resulting from the inaccessibility of the fibre tip when inside
the body. Such complex, dynamic and nonlinear behaviour is well-suited to
approximation by neural networks, but most leading image reconstruction
networks rely on convolutional layers, which assume strong correlations between
adjacent pixels, a strong inductive bias that is inappropriate for fibre
matrices which may be expressed in a range of arbitrary coordinate
representations with long-range correlations. We introduce a new concept that
uses self-attention layers to dynamically transform the coordinate
representations of varying fibre matrices to a basis that admits compact,
low-dimensional representations suitable for further processing. We demonstrate
the effectiveness of this approach on diverse fibre matrix datasets. We show
that our models significantly improve the sparsity of fibre matrices in their
transformed bases, with a participation ratio p, as a measure of sparsity, of
between 0.01 and 0.11. Further, we show that these transformed representations
admit reconstruction of the original matrices with < 10% reconstruction error,
demonstrating their invertibility.
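For concreteness, one common convention for the participation ratio as a sparsity measure (the paper's exact normalization is not given here, so this is an assumption):

```python
import numpy as np

def participation_ratio(M):
    """Normalized participation ratio p in (0, 1]: p ~ 1/N when the matrix
    energy is concentrated in one entry, p -> 1 when spread uniformly."""
    a = np.abs(np.asarray(M)).ravel() ** 2
    a = a / a.sum()                    # treat |entries|^2 as a distribution
    return 1.0 / (a.size * np.sum(a ** 2))

# A matrix that is sparse in its (transformed) basis yields a much smaller
# p than a dense one, mirroring the 0.01-0.11 range quoted above.
rng = np.random.default_rng(0)
print(participation_ratio(rng.standard_normal((64, 64))))     # ~0.33, dense
print(participation_ratio(np.diag(rng.standard_normal(64))))  # ~0.005, sparse
```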
|
The dynamical spin susceptibility is studied in the magnetically-disordered
phase of heavy-Fermion systems near the antiferromagnetic quantum phase
transition. In the framework of the $S=1/2$ Kondo lattice model, we introduce a
perturbative expansion treating the spin and Kondo-like degrees of freedom on
an equal footing. The general expression of the dynamical spin susceptibility
that we derive presents a two-component behaviour: a quasielastic peak as in a
Fermi liquid theory, and a strongly q-dependent inelastic peak typical of a
non-Fermi liquid behaviour. Very strikingly, the position of the inelastic peak
is found to be pushed to zero at the antiferromagnetic transition with a
vanishing relaxation rate. A quantitative comparison has been made with
Inelastic Neutron Scattering (INS) experiments performed in $CeCu_{6}$ and
$Ce_{1-x}La_{x}Ru_{2}Si_{2}$. The excellent agreement that we have found gives
strong support to a two-band model with new prospects for the study of the
quantum critical phenomena in the vicinity of the magnetic phase transition.
|
We examine the notion of inconsistency in pairwise comparisons and propose an
axiomatization which is independent of any method of approximation or the
inconsistency indicator definition (e.g., Analytic Hierarchy Process, AHP). It
has been proven that the eigenvalue-based inconsistency (proposed as a part of
AHP) is incorrect.
|
We show that all existing deterministic microscopic traffic models with
identical drivers (including both two-phase and three-phase models) can be
understood as special cases of a master model by expansion around
well-defined ground states. This allows two traffic models to be compared in a
well-defined way. The three-phase models are characterized by the vanishing of
leading orders of expansion within a certain density range, and as an example
the popular intelligent driver model (IDM) is shown to be equivalent to a
generalized optimal velocity (OV) model. We also explore the diverse solutions
of the generalized OV model, which can be important both for understanding
human driving behaviors and for designing algorithms for autonomous driverless
vehicles.
|
Compressing large neural networks with minimal performance loss is crucial to
enabling their deployment on edge devices. (Cho et al., 2022) proposed a weight
quantization method that uses an attention-based clustering algorithm called
differentiable $k$-means (DKM). Despite achieving state-of-the-art results,
DKM's performance is constrained by its heavy memory dependency. We propose an
implicit, differentiable $k$-means algorithm (IDKM), which eliminates the major
memory restriction of DKM. Let $t$ be the number of $k$-means iterations, $m$
be the number of weight-vectors, and $b$ be the number of bits per cluster
address. IDKM reduces the overall memory complexity of a single $k$-means layer
from $\mathcal{O}(t \cdot m \cdot 2^b)$ to $\mathcal{O}( m \cdot 2^b)$. We also
introduce a variant, IDKM with Jacobian-Free-Backpropagation (IDKM-JFB), for
which the time complexity of the gradient calculation is independent of $t$ as
well. We provide a proof of concept of our methods by showing that, under the
same settings, IDKM achieves comparable performance to DKM with less compute
time and less memory. We also use IDKM and IDKM-JFB to quantize a large neural
network, Resnet18, on hardware where DKM cannot train at all.
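A minimal sketch of the memory-saving idea as we read it (illustrative code, not the authors' implementation): iterate the attention-based k-means update to a fixed point without recording the autograd graph, then differentiate through a single final step, so activation memory is independent of the iteration count $t$.

```python
import torch

def soft_kmeans_step(W, C, temp=1.0):
    """One attention-based (soft) k-means update: softly assign weight
    vectors W (m, d) to centroids C (k, d), then recompute the centroids."""
    A = torch.softmax(-torch.cdist(W, C) ** 2 / temp, dim=1)   # (m, k)
    return (A.t() @ W) / (A.sum(dim=0).unsqueeze(1) + 1e-9)

def idkm_centroids(W, C0, iters=50):
    """Fixed-point iteration without autograd history, O(m * 2^b) memory
    instead of O(t * m * 2^b); gradients come from the last step only
    (the Jacobian-free backpropagation shortcut)."""
    C = C0
    with torch.no_grad():
        for _ in range(iters):
            C = soft_kmeans_step(W, C)
    return soft_kmeans_step(W, C.detach())
```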
|
Nonthermal electrons accelerated in solar flares produce electromagnetic
emission in two distinct, highly complementary domains - hard X-rays (HXRs) and
microwaves (MWs). This paper reports MW imaging spectroscopy observations from
the Expanded Owens Valley Solar Array of an M1.2 flare that occurred on 2017
September 9, from which we deduce evolving coronal parameter maps. We analyze
these data jointly with the complementary Reuven Ramaty High-Energy Solar
Spectroscopic Imager HXR data to reveal the spatially-resolved evolution of the
nonthermal electrons in the flaring volume. We find that the high-energy
portion of the nonthermal electron distribution, responsible for the MW
emission, displays a much more prominent evolution (in the form of strong
spectral hardening) than the low-energy portion, responsible for the HXR
emission. We show that the revealed trends are consistent with a single
electron population evolving according to a simplified trap-plus-precipitation
model with sustained injection/acceleration of nonthermal electrons, which
produces a double power law with a steadily increasing break energy.
|
Planning can often be simplified by decomposing the task into smaller tasks
arranged hierarchically. Charlin et al. [4] recently showed that the hierarchy
discovery problem can be framed as a non-convex optimization problem. However,
the inherent computational difficulty of solving such an optimization problem
makes it hard to scale to real-world problems. In another line of research,
Toussaint et al. [18] developed a method to solve planning problems by
maximum-likelihood estimation. In this paper, we show how the hierarchy
discovery problem in partially observable domains can be tackled using a
similar maximum-likelihood approach. Our technique first transforms the problem
into a dynamic Bayesian network through which a hierarchical structure can
naturally be discovered while optimizing the policy. Experimental results
demonstrate that this approach scales better than previous techniques based on
non-convex optimization.
|
Multipartite quantum entanglement of many-body systems is not well understood.
Here we numerically study the number of tripartite
Greenberger-Horne-Zeilinger (GHZ) states that can be extracted from the state
generated by random Clifford circuits with probabilistic single-qubit
projective measurements. We find a GHZ-entangled phase where this number is
finite and a GHZ-trivial phase where no tripartite entanglement is available.
The transition between them is either measurement-induced, at $p_c\approx
0.16$, or partition-induced when a party contains more than half of the
qubits. We find that the GHZ entanglement can be enhanced by measurements in
certain regimes, which can be understood from the perspective of the quantum
internet. Effects of the measurements on the growth of GHZ entanglement are
also studied.
|
Stochastic simulations such as large-scale, spatiotemporal, age-structured
epidemic models are computationally expensive at fine-grained resolution. While
deep surrogate models can speed up the simulations, doing so for stochastic
simulations and with active learning approaches is an underexplored area. We
propose Interactive Neural Process (INP), a deep Bayesian active learning
framework for learning deep surrogate models to accelerate stochastic
simulations. INP consists of two components: a spatiotemporal surrogate model
built upon the Neural Process (NP) family and an acquisition function for
active learning. For surrogate modeling, we develop the Spatiotemporal Neural
Process (STNP) to mimic the simulator dynamics. For active learning, we propose
a novel acquisition function, Latent Information Gain (LIG), calculated in the
latent space of NP-based models. We perform a theoretical analysis and
demonstrate
that LIG reduces sample complexity compared with random sampling in high
dimensions. We also conduct empirical studies on three complex spatiotemporal
simulators for reaction diffusion, heat flow, and infectious disease. The
results demonstrate that STNP outperforms the baselines in the offline learning
setting and LIG achieves the state-of-the-art for Bayesian active learning.
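A sketch of how such an acquisition score could look for an NP-style surrogate, assuming the model exposes a latent encoder returning a diagonal Gaussian over $z$ (the interface names are ours):

```python
import torch

def gaussian_kl(mu_q, std_q, mu_p, std_p):
    """KL( N(mu_q, std_q^2) || N(mu_p, std_p^2) ) for diagonal Gaussians."""
    return (torch.log(std_p / std_q)
            + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2 * std_p ** 2)
            - 0.5).sum(-1)

def latent_information_gain(model, context, x_candidate):
    """Score a candidate simulation input by how much it would shift the
    NP latent: KL between the latent distribution with the candidate's
    imagined outcome appended and the latent given the current context."""
    mu0, std0 = model.latent_encoder(context)
    y_hat = model.predict(context, x_candidate)       # surrogate rollout
    mu1, std1 = model.latent_encoder(context + [(x_candidate, y_hat)])
    return gaussian_kl(mu1, std1, mu0, std0)
```

Candidates maximizing this score are then run through the expensive simulator, and the surrogate is refit.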
|
We investigate the geometry of the space of N-valent SU(2)-intertwiners. We
propose a new set of holomorphic operators acting on this space and a new set
of coherent states which are covariant under U(N) transformations. These states
are labeled by elements of the Grassmannian Gr(N,2); they possess a direct
geometrical interpretation in terms of framed polyhedra and are shown to be
related to the well-known coherent intertwiners.
|
We present a new, probabilistic method for determining the systemic proper
motions of Milky Way (MW) ultra-faint satellites in the Dark Energy Survey
(DES). We utilize the superb photometry from the first public data release
(DR1) of DES to select candidate members, and cross-match them with the proper
motions from $Gaia$ DR2. We model the candidate members with a mixture model
(satellite and MW) in spatial and proper motion space. This method does not
require prior knowledge of satellite membership, and can successfully determine
the tangential motion of thirteen DES satellites. With our method we present
measurements of the following satellites: Columba~I, Eridanus~III, Grus~II,
Phoenix~II, Pictor~I, Reticulum~III, and Tucana~IV; this is the first systemic
proper motion measurement for several of them, and the majority lack extensive
spectroscopic follow-up studies. We compare these measurements to predictions
for Large Magellanic Cloud satellites and to the vast polar structure. With the
high
precision DES photometry we conclude that most of the newly identified member
stars are very metal-poor ([Fe/H] $\lesssim -2$) similar to other ultra-faint
dwarf galaxies, while Reticulum III is likely more metal-rich. We also find
potential members in the following satellites that might indicate their overall
proper motion: Cetus~II, Kim~2, and Horologium~II; however, due to the small
number of members in each satellite, spectroscopic follow-up observations are
necessary to determine the systemic proper motion in these satellites.
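In outline, the mixture likelihood underlying such a fit (our paraphrase of the standard construction, with notation of our own) evaluates, for each candidate star $i$ with sky position $\mathbf{x}_{i}$ and proper motion $\boldsymbol{\mu}_{i}$,
$$
\mathcal{L}_{i}=f_{\mathrm{sat}}\,p_{\mathrm{sat}}(\mathbf{x}_{i})\,p_{\mathrm{sat}}(\boldsymbol{\mu}_{i})+(1-f_{\mathrm{sat}})\,p_{\mathrm{MW}}(\mathbf{x}_{i})\,p_{\mathrm{MW}}(\boldsymbol{\mu}_{i}),
$$
where $f_{\mathrm{sat}}$ is the satellite membership fraction; maximizing $\prod_{i}\mathcal{L}_{i}$ yields the systemic proper motion without prior knowledge of membership.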
|
The use of proper ``time'' to describe classical ``spacetimes'' which contain
both Euclidean and Lorentzian regions permits the introduction of smooth
(generalized) orthonormal frames. This remarkable fact permits one to describe
both a variational treatment of Einstein's equations and distribution theory
using straightforward generalizations of the standard treatments for constant
signature.
|
New-generation spectrographs dedicated to the study of exoplanetary
atmospheres require a high accuracy in the atmospheric models to better
interpret the input spectra. Thanks to space missions, the observed spectra
will cover a large wavelength range from visible to mid-infrared with a higher
precision compared to the old-generation instrumentation, revealing complex
features coming from different regions of the atmosphere. For hot and ultra hot
Jupiters (HJs and UHJs), the main source of complexity in the spectra comes
from thermal and chemical differences between the day and the night sides. In
this context, one-dimensional plane parallel retrieval models of atmospheres
may not be suitable to extract the complexity of such spectra. In addition,
Bayesian frameworks are computationally intensive and prevent us from using
complete three-dimensional self-consistent models to retrieve exoplanetary
atmospheres. We propose the TauREx 2D retrieval code, which uses
two-dimensional atmospheric models as a good compromise between computational
cost and model accuracy to better infer exoplanetary atmospheric
characteristics for the hottest planets. TauREx 2D uses a 2D parametrization
across the limb which computes the transmission spectrum from an exoplanetary
atmosphere assuming azimuthal symmetry. It also includes a thermal dissociation
model of various species. We demonstrate that, given an input observation,
TauREx 2D mitigates the biases between the retrieved atmospheric parameters and
the real atmospheric parameters. We also show that having a prior knowledge on
the link between local temperature and composition is instrumental in inferring
the temperature structure of the atmosphere. Finally, we apply such a model on
a synthetic spectrum computed from a GCM simulation of WASP-121b and show how
parameter biases can be removed when using two-dimensional forward models
across the limb.
|
This paper tackles the problem of parts-aware point cloud generation. Unlike
existing works which require the point cloud to be segmented into parts a
priori, our parts-aware editing and generation are performed in an unsupervised
manner. We achieve this with a simple modification of the Variational
Auto-Encoder which yields a joint model of the point cloud itself along with a
schematic representation of it as a combination of shape primitives. In
particular, we introduce a latent representation of the point cloud which can
be decomposed into a disentangled representation for each part of the shape.
These parts are in turn disentangled into both a shape primitive and a point
cloud representation, along with a standardising transformation to a canonical
coordinate system. The dependencies between our standardising transformations
preserve the spatial dependencies between the parts in a manner that allows
meaningful parts-aware point cloud generation and shape editing. In addition to
the flexibility afforded by our disentangled representation, the inductive bias
introduced by our joint modeling approach yields state-of-the-art experimental
results on the ShapeNet dataset.
|
We discuss the possibility to explain the anomalies in short-baseline
neutrino oscillation experiments in terms of sterile neutrinos. We work in a
3+1 framework and pay special attention to recent new data from reactor
experiments, IceCube and MINOS+. We find that results from the DANSS and NEOS
reactor experiments support the sterile neutrino explanation of the reactor
anomaly, based on an analysis that relies solely on the relative comparison of
measured reactor spectra. Global data from the $\nu_e$ disappearance channel
favour sterile neutrino oscillations at the $3\sigma$ level with $\Delta
m^2_{41} \approx 1.3$ eV$^2$ and $|U_{e4}| \approx 0.1$, even without any
assumptions on predicted reactor fluxes. In contrast, the anomalies in the
$\nu_e$ appearance channel (dominated by LSND) are in strong tension with
improved bounds on $\nu_\mu$ disappearance, mostly driven by MINOS+ and
IceCube. Under the sterile neutrino oscillation hypothesis, the p-value for
those data sets being consistent is less than $2.6\times 10^{-6}$. Therefore,
an explanation of the LSND anomaly in terms of sterile neutrino oscillations in
the 3+1 scenario is excluded at the $4.7\sigma$ level. This result is robust
with respect to variations in the analysis and used data, in particular it
depends neither on the theoretically predicted reactor neutrino fluxes, nor on
constraints from any single experiment. Irrespective of the anomalies, we
provide updated constraints on the allowed mixing strengths $|U_{\alpha 4}|$
($\alpha = e,\mu,\tau$) of active neutrinos with a fourth neutrino mass state
in the eV range.
|
We investigate the disconnection time of a simple random walk in a discrete
cylinder with a large finite connected base. In a recent article of A. Dembo
and the author it was found that for large $N$ the disconnection time of
$G_N\times\mathbb{Z}$ has rough order $|G_N|^2$, when
$G_N=(\mathbb{Z}/N\mathbb{Z})^d$. In agreement with a conjecture by I.
Benjamini, we show here that this behavior has broad generality when the bases
of the discrete cylinders are large connected graphs of uniformly bounded
degree.
|
We construct ladder operators, $\tilde{C}$ and $\tilde{C}^\dagger$, for a
multi-step rational extension of the harmonic oscillator on the half line,
$x\ge0$. These ladder operators connect all states of the spectrum in only
infinite-dimensional representations of their polynomial Heisenberg algebra.
For comparison, we also construct two different classes of ladder operator
acting on this system that form finite-dimensional as well as
infinite-dimensional representations of their respective polynomial Heisenberg
algebras. For the rational extension, we construct the position wavefunctions
in terms of exceptional orthogonal polynomials. For a particular choice of
parameters, we construct the coherent states, eigenvectors of $\tilde{C}$ with
generally complex eigenvalues, $z$, as superpositions of a subset of the energy
eigenvectors. Then we calculate the properties of these coherent states,
looking for classical or non-classical behaviour. We calculate the energy
expectation as a function of $|z|$. We plot position probability densities for
the coherent states and for the even and odd cat states formed from these
coherent states. We plot the Wigner function for a particular choice of $z$.
For these coherent states on one arm of a beamsplitter, we calculate the two
excitation number distribution and the linear entropy of the output state. We
plot the standard deviations in $x$ and $p$ and find no squeezing in the regime
considered. By plotting the Mandel $Q$ parameter for the coherent states as a
function of $|z|$, we find that the number statistics is sub-Poissonian.
|
Let $k$ be a perfect field of characteristic $p$. Associated to any
(1-dimensional, commutative) formal group law of finite height $n$ over $k$
there is a complex oriented cohomology theory represented by a spectrum denoted
$E(n)$ and commonly referred to as Morava $E$-theory. These spectra are known
to admit $E_\infty$-structures, and the dependence of the $E_\infty$-structure
on the choice of formal group law has been well studied (cf.\ [GH], [R], [L],
Section 5, [PV]). In this note we show that the underlying homotopy type of
$E(n)$ is independent of the choice of formal group law.
|
We apply Kramers theory to investigate the dissociation of multiple bonds
under mechanical force and interpret experimental results for the
unfolding/refolding force distributions of an RNA hairpin pulled at different
loading rates using laser tweezers. We identify two different kinetic regimes
depending on the range of forces explored during the unfolding and refolding
process. The present approach extends the range of validity of the two-states
approximation by providing a theoretical framework to reconstruct free-energy
landscapes and identify force-induced structural changes in molecular
transition states using single molecule pulling experiments. The method should
be applicable to RNA hairpins with multiple kinetic barriers.
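For context, the simplest two-state limit of this Kramers picture is the standard Bell-Evans form for the force-dependent rate and the distribution of rupture forces at loading rate $r$:
$$
k(F)=k_{0}\,e^{F x^{\ddagger}/k_{B}T},\qquad
p(F)=\frac{k(F)}{r}\exp\!\left(-\frac{1}{r}\int_{0}^{F}k(F')\,dF'\right),
$$
where $x^{\ddagger}$ is the distance from the folded state to the transition state; force-induced changes in $x^{\ddagger}$ are precisely the structural changes in the transition state that the reconstructed free-energy landscapes expose.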
|
SPIRou is a near-infrared (nIR) spectropolarimeter at the CFHT, covering the
YJHK nIR spectral bands ($980-2350\,\mathrm{nm}$). We describe the development
and current status of the SPIRou wavelength calibration in order to obtain
precise radial velocities (RVs) in the nIR. We make use of a UNe hollow-cathode
lamp and a Fabry-P\'erot \'etalon to calibrate the pixel-wavelength
correspondence for SPIRou. Different methods are developed for identifying the
hollow-cathode lines, for calibrating the wavelength dependence of the
Fabry-P\'erot cavity width, and for combining the two calibrators. The
hollow-cathode spectra alone do not provide a sufficiently accurate wavelength
solution to meet the design requirements of an internal error of
$\mathrm{<0.45\,m\,s^{-1}}$, for an overall RV precision of
$\mathrm{1\,m\,s^{-1}}$. However, the combination with the Fabry-P\'erot
spectra allows for significant improvements, leading to an internal error of
$\mathrm{\sim 0.15\,m\,s^{-1}}$. We examine the inter-night stability,
intra-night stability, and impact on the stellar RVs of the wavelength
solution.
|
This note will address the issue of the existence of God from a game
theoretic perspective. We will show that, under certain assumptions, man cannot
simultaneously be (i) rational and (ii) believe that an infinitely powerful God
exists. Game theory and decision theory have long been used to address this
thorny question.
|
The connection between Supergravity and the low-energy world is analyzed. In
particular, the soft Supersymmetry-breaking terms arising in Supergravity, the
$\mu$ problem and various solutions proposed to solve it are reviewed. The soft
terms arising in Supergravity theories coming from Superstring theory are also
computed and the solutions proposed to solve the $\mu$ problem, which are
naturally present in Superstrings, are also discussed. The associated $B$ soft
terms are given for the different solutions. Finally, the low-energy
supersymmetric spectra, which are very characteristic, are obtained.
|
Solution derived La2Zr2O7 films have drawn much attention for potential
applications as thermal barriers or low-cost buffer layers for coated conductor
technology. Annealing and coating parameters strongly affect the microstructure
of La2Zr2O7, but different film processing methods can yield similar
microstructural features such as nanovoids and nanometer-sized La2Zr2O7 grains.
Nanoporosity is a typical feature found in such films and the implications for
the functionality of the films are investigated by a combination of scanning
transmission electron microscopy, electron energy-loss spectroscopy and
quantitative electron tomography. Chemical solution based La2Zr2O7 films
deposited on flexible Ni-5at.%W substrates with a {100}<001> biaxial texture
were prepared for an in-depth characterization. A sponge-like structure
composed of nanometer sized voids is revealed by high-angle annular dark-field
scanning transmission electron microscopy in combination with electron
tomography. A three-dimensional quantification of nanovoids in the La2Zr2O7
film is obtained on a local scale. Mostly non-interconnected, highly facetted
nanovoids comprise more than one-fifth of the investigated sample volume. The
diffusion barrier efficiency of a 170 nm thick La2Zr2O7 film is investigated by
STEM-EELS, yielding a 1.8 \pm 0.2 nm oxide layer beyond which no significant
nickel diffusion or intermixing can be detected. This is of
particular significance for the functionality of YBa2Cu3O7-{\delta} coated
conductor architectures based on solution derived La2Zr2O7 films as diffusion
barriers.
|
We report on a multi-frequency, multi-epoch campaign of Very Long Baseline
Interferometry observations of the radio galaxy 1946+708 using the VLBA and a
Global VLBI array. From these high-resolution observations we deduce the
kinematic age of the radio source to be $\sim$4000 years, comparable with the
ages of other Compact Symmetric Objects (CSOs). Ejections of pairs of jet
components appear to take place on time scales of 10 years, and these
components in the jet travel outward at intrinsic velocities between 0.6 and
0.9 c. From the constraint that jet components cannot have intrinsic velocities
faster than light, we derive H_0 > 57 km s^-1 Mpc^-1 from the fastest pair of
components launched from the core. We provide strong evidence for the ejection
of a new pair of components in ~1997. From the trajectories of the jet
components we deduce that the jet is most likely to be helically confined,
rather than purely ballistic in nature.
|
It has been commonly accepted that a magnetic field suppresses
superconductivity by inducing the ordered motion of Cooper pairs. We
demonstrate that a magnetic field can instead generate
superconducting correlations by inducing the motion of the superconducting
condensate. This effect arises in superconductor/ferromagnet heterostructures
in the presence of Rashba spin-orbital coupling. We predict the odd-frequency
spin-triplet superconducting correlations called the Berezinskii order to be
switched on at large distances from the superconductor/ferromagnet interface by
the application of a magnetic field. This is shown to result in unusual
behaviour of the Josephson effect and of the local density of states in
superconductor/ferromagnet structures.
|
For any admissible value of the parameters $n$ and $k$ there exist
$[n,k]$ Maximum Rank Distance (MRD) ${\mathbb F}_q$-linear codes. Indeed, it
can be shown that, if sufficiently large field extensions are considered,
almost all rank distance codes are MRD. On the other hand, very few families up
to equivalence
of such codes are currently known. In the present paper we study some
invariants of MRD codes and evaluate their value for the known families,
providing a new characterization of generalized twisted Gabidulin codes.
|
This paper provides some evidence for conjectural relations between
extensions of (right) weak order on Coxeter groups, closure operators on root
systems, and Bruhat order. The conjecture focused upon here refines an earlier
question as to whether the set of initial sections of reflection orders,
ordered by inclusion, forms a complete lattice. Meet and join in weak order are
described in terms of a suitable closure operator. Galois connections are
defined from the power set of W to itself, under which maximal subgroups of
certain groupoids correspond to certain complete meet subsemilattices of weak
order. An analogue of weak order for standard parabolic subsets of any rank of
the root system is defined, reducing to the usual weak order in rank zero, and
having some analogous properties in rank one (and conjecturally in general).
|
We study one-dimensional systems of two-orbital SU(4) fermionic cold atoms.
In particular, we focus on an SU(4) spin model [named SU(4) $e$-$g$ spin model]
that is realized in a low-energy state in the Mott insulator phase at the
filling $n_g=3, n_e=1$ ($n_g, n_e$: numbers of atoms in ground and excited
states, respectively). Our numerical study with the infinite-size density
matrix renormalization group shows that the ground state of SU(4) $e$-$g$ spin
model is a nontrivial symmetry protected topological (SPT) phase protected by
$Z_4 \times Z_4$ symmetry. Specifically, we find that the ground state belongs
to an SPT phase with the topological index $2\in\mathbb{Z}_4$ and exhibits
sixfold degenerate edge states. This is topologically distinct from SPT phases
with the
index $1\in\mathbb{Z}_4$ that are realized in the SU(4) bilinear model and the
SU(4) Affleck-Kennedy-Lieb-Tasaki (AKLT) model. We explore the phase diagram of
SU(4) spin models including the $e$-$g$ spin model, the bilinear-biquadratic
model, and the AKLT model, and identify the antisymmetrization effect between
neighboring spins (which we quantify with Casimir operators) as the driving
force of the phase
transitions. Furthermore, we demonstrate by using the matrix product state how
the $\mathbb{Z}_4$ SPT state with six edge states appears in the SU(4) $e$-$g$
spin model.
|
The correlated variability in the responses of a neural population to the
repeated presentation of a sensory stimulus is a universally observed
phenomenon. Such correlations have been studied in much detail, both with
respect to their mechanistic origin and to their influence on stimulus
discrimination and on the performance of population codes. In particular,
recurrent neural network models have been used to understand the origin (or
lack) of correlations in neural activity. Here, we apply a model of recurrently
connected stochastic neurons to interpret correlations found in a population of
neurons recorded from mouse auditory cortex. We study the consequences of
recurrent connections on the stimulus dependence of correlations, and we
compare them to those from alternative sources of correlated variability, like
correlated gain fluctuations and common input in feed-forward architectures. We
find that a recurrent network model with random effective connections
reproduces observed statistics, like the relation between noise and signal
correlations in the data, in a natural way. In the model, we can analyze
directly the relation between network parameters, correlations, and how well
pairs of stimuli can be discriminated based on population activity. In this
way, we can relate circuit parameters to information processing.
|
Magnetic activity in stars manifests as dark spots on their surfaces that
modulate the brightness observed by telescopes. These light curves contain
important information on stellar rotation. However, the accurate estimation of
rotation periods is computationally expensive due to scarce ground truth
information, noisy data, and large parameter spaces that lead to degenerate
solutions. We harness the power of deep learning and successfully apply
Convolutional Neural Networks to regress stellar rotation periods from Kepler
light curves. Geometry-preserving time-series to image transformations of the
light curves serve as inputs to a ResNet-18 based architecture which is trained
through transfer learning. The McQuillan catalog of published rotation periods
serves as an approximate ground truth. We benchmark the performance of our
method against a random forest regressor, a 1D CNN, and the Auto-Correlation
Function (ACF), the current standard for estimating rotation periods. Despite
restricting our input to fewer data points (1k), our model yields more accurate
results and runs 350 times faster than ACF on the same number of data points
and 10,000 times faster than ACF on 65k data points. With only minimal feature
engineering, our approach achieves impressive accuracy, motivating the
application of deep learning to the regression of stellar parameters on an even
larger scale.
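A minimal sketch of the transfer-learning setup described above, assuming PyTorch/torchvision; the input image size, learning rate, and the choice of a Gramian-angular-field-style preprocessing are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 backbone, pretrained on ImageNet, with the classification
# head swapped for a single regression output (the rotation period).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

# Hypothetical input: a batch of light curves already converted to
# 3-channel images by a geometry-preserving time-series-to-image
# transform, shape (batch, 3, 224, 224).
images = torch.randn(8, 3, 224, 224)
periods = torch.rand(8, 1) * 60.0            # toy targets, in days

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

optimizer.zero_grad()
pred = backbone(images)
loss = loss_fn(pred, periods)
loss.backward()
optimizer.step()
```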
|
Network functions virtualization (NFV) is a new concept that has received the
attention of both researchers and network providers. NFV decouples network
functions from specialized hardware devices and virtualizes these network
functions as software instances called virtualized network functions (VNFs).
NFV leads to various benefits, including more flexibility, high resource
utilization, and easier upgrades and maintenance. Despite recent works in this
field, the placement and chaining of VNFs need more attention. More
specifically, some of the existing works have considered only the placement of
VNFs and ignored the chaining part. As a result, they do not provide an
integrated view of host and bandwidth resources or of the propagation delay of
paths. In this paper, we
solve the VNF placement and chaining problem as an optimization problem based
on the particle swarm optimization (PSO) algorithm. Our goal is to minimize the
required number of used servers, the average propagation delay of paths, and
the average utilization of links while meeting network demands and constraints.
Based on the obtained results, the algorithm proposed in this study can find
feasible and high-quality solutions.
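The following is a minimal, generic PSO loop of the kind the paper builds on. A real VNF placement is a discrete problem over servers and paths; the continuous toy objective and its weights here are our assumptions, used only to illustrate the particle update rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):
    # Hypothetical weighted objective combining the three goals above:
    # number of used servers, average path delay, and link utilization.
    servers, delay, util = x
    return 0.5 * servers + 0.3 * delay + 0.2 * util

n_particles, n_dims, iters = 30, 3, 100
pos = rng.uniform(0, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.apply_along_axis(cost, 1, pos)
gbest = pbest[pbest_val.argmin()]

w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = np.apply_along_axis(cost, 1, pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("best solution:", gbest, "cost:", cost(gbest))
```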
|
We present a Heisenberg operator based formulation of coherent quantum
feedback and Pyragas control. This model is easy to implement and allows for an
efficient and fast calculation of the dynamics of feedback-driven observables,
since in systems with a fixed number of excitations the number of contributing
correlations grows only linearly in time. Furthermore, our model unravels the
quantum kinetics of entanglement growth in the system by explicitly calculating
non-Markovian multi-time correlations, e.g., how the emission of a photon is
correlated with an absorption process in the past. Therefore, the time-delayed
differential equations are expressed in terms of insightful physical
quantities. Another considerable advantage of this method is its compatibility
with typical approximation schemes, such as factorization techniques and the
semi-classical treatment of coherent fields. This allows application to a
variety of setups, ranging from closed quantum systems in the few-excitation
regimes to open systems and Pyragas control in general.
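For readers unfamiliar with Pyragas control, here is a minimal sketch of its classical form, where the feedback force is $K[x(t-\tau)-x(t)]$, integrated with a plain Euler scheme on a toy bistable system. All parameters are illustrative; the paper's quantum-operator formulation is considerably richer.

```python
import numpy as np

# Pyragas-type time-delayed feedback applied to a toy system:
# dx/dt = f(x) + K * (x(t - tau) - x(t)).
dt, tau, K, T = 1e-3, 1.0, 0.3, 20.0
delay_steps = int(tau / dt)
n_steps = int(T / dt)

x = np.zeros(n_steps)
x[:delay_steps] = 0.1                      # constant history on [-tau, 0]

def f(x):
    return x - x**3                        # bistable toy dynamics

for n in range(delay_steps, n_steps - 1):
    feedback = K * (x[n - delay_steps] - x[n])
    x[n + 1] = x[n] + dt * (f(x[n]) + feedback)

print("final state:", x[-1])
```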
|
We report the discovery of a new quadruply imaged quasar surrounded by an
optical Einstein ring candidate. Spectra of the different components of 1RXS
J113155.4-123155 reveal a source at z=0.658. To date, this object is the
closest known gravitationally lensed quasar. The lensing galaxy is clearly
detected. Its redshift is measured to be z=0.295. Additionally, the total V
magnitude of the system has varied by 0.3 mag between two epochs separated by
33 weeks. The measured relative astrometry of the lensed images is best fitted
with an SIS model plus shear. This modeling suggests very high magnification of
the source (up to 50 for the total magnification) and predicts flux ratios
between the lensed images significantly different from what is actually
observed. This suggests that the lensed images may be affected by a combination
of micro or milli-lensing and dust extinction effects.
|
Rare earth pyrochlore Iridates (RE2Ir2O7) consist of two interpenetrating
cation sublattices, the RE with highly-frustrated magnetic moments, and the
Iridium with extended conduction orbitals significantly mixed by spin-orbit
interactions. The coexistence and coupling of these two sublattices create a
landscape for discovery and manipulation of quantum phenomena such as the
topological Hall effect, massless conduction bands, and quantum criticality.
Thin films allow extended control of the material system via symmetry-lowering
effects such as strain. While bulk Pr2Ir2O7 shows a spontaneous hysteretic Hall
effect below 1.5 K, we observe the effect at elevated temperatures up to 15 K in
epitaxial thin films on (111) YSZ substrates synthesized via solid phase
epitaxy. Similar to the bulk, the lack of observable long-range magnetic order
in the thin films points to a topological origin. We use synchrotron-based
element-specific x-ray diffraction (XRD) and x-ray magnetic circular dichroism
(XMCD) to compare powders and thin films to attribute the spontaneous Hall
effect in the films to localization of the Ir moments. We link the thin film Ir
local moments to lattice distortions absent in the bulk-like powders. We
conclude that the elevated-temperature spontaneous Hall effect is caused by the
topological effect originating either from the Ir or Pr sublattice, with
interaction strength enhanced by the Ir local moments. This spontaneous Hall
effect with weak net moment highlights the effect of vanishingly small lattice
distortions as a means to discover topological phenomena in metallic frustrated
magnetic materials.
|
In 1999, Molodtsov \cite{1} introduced the idea of soft set theory, showing it
to be a flexible mathematical tool for dealing with uncertainty. Several
researchers have extended the framework by combining it with other theories of
uncertainty, such as fuzzy set theory, intuitionistic fuzzy soft set theory,
rough soft set theory, and so on. These enhancements aim to increase the
applicability and expressiveness of soft set theory, making it a more robust
tool for dealing with complex, real-world problems characterized by uncertainty
and vagueness. The notion of fuzzy soft sets and their associated operations
were introduced by Maji et al. \cite{7}. However, Molodtsov \cite{3} identified
numerous incorrect results and notions of soft set theory that were introduced
in the paper \cite{7}. Consequently, the derived concept of fuzzy soft sets is
equally flawed, since it builds on the incorrect notion of soft sets in
\cite{7}. It is therefore essential to address these incorrect notions and
provide an exact and formal definition of the idea of fuzzy soft sets. This
reevaluation is important to guarantee the theoretical soundness of fuzzy soft
set theory and its practical application across a range of domains. In this
paper, we propose
fuzzy soft set theory based on Molodtsov's correct notion of soft set theory
and demonstrate a fuzzy soft set in matrix form. Additionally, we derive
several significant findings on fuzzy soft sets.
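A minimal sketch of the matrix form of a fuzzy soft set, as described above: rows index objects of the universe, columns index parameters, and entries are membership degrees. The toy universe, parameters, and the entrywise max/min operations for union and intersection are standard conventions, shown here only for illustration.

```python
import numpy as np

# A fuzzy soft set over a universe U and parameter set A assigns to
# each parameter a fuzzy subset of U; in matrix form, entry M[i, j] is
# the membership degree of object u_i under parameter a_j.
U = ["u1", "u2", "u3"]          # objects (toy data)
A = ["cheap", "modern"]         # parameters (toy data)

M = np.array([[0.9, 0.2],
              [0.4, 0.7],
              [0.6, 0.5]])

N = np.array([[0.5, 0.3],
              [0.8, 0.1],
              [0.2, 0.9]])

# Standard fuzzy soft union/intersection are entrywise max/min.
print("union:\n", np.maximum(M, N))
print("intersection:\n", np.minimum(M, N))
```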
|
Besides the chemical constituents, it is the lattice geometry that controls
the most important material properties. In many interesting compounds, the
arrangement of elements leads to pronounced anisotropies, which reflect into a
varying degree of quasi two-dimensionality of their low-energy excitations.
Here, we start by classifying important families of correlated materials
according to a simple measure for the tetragonal anisotropy of their ab initio
electronic (band) structure. Second, we investigate the impact of a
progressively large anisotropy in driving the non-locality of many-body
effects. To this end, we tune the Hubbard model from isotropic cubic in three
dimensions to the two-dimensional limit and analyze it using the dynamical
vertex approximation. For sufficiently isotropic hoppings, we find the
self-energy to be well separable into a static non-local and a dynamical local
contribution. While the latter could potentially be obtained from dynamical
mean-field approaches, we find the former to be non-negligible in all cases.
Further, by increasing the model anisotropy, we quantify the degree of quasi
two-dimensionality which causes this "space-time separation" to break down. Our
systematic analysis improves the general understanding of electronic
correlations in anisotropic materials, heterostructures and ultra-thin films,
and provides useful guidance for future realistic studies.
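A toy quantification of what tuning the Hubbard model from isotropic cubic to the two-dimensional limit means at the band-structure level, assuming a simple tight-binding dispersion with in-plane hopping t and out-of-plane hopping tz; the specific anisotropy measure below is our illustration, not the paper's classification scheme.

```python
import numpy as np

# eps(k) = -2t(cos kx + cos ky) - 2 tz cos kz.  Tuning tz/t from 1 to 0
# interpolates between the isotropic cubic and the two-dimensional limit.
def bandwidth(t, tz, nk=32):
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = -2 * t * (np.cos(kx) + np.cos(ky)) - 2 * tz * np.cos(kz)
    return eps.max() - eps.min()

for tz in (1.0, 0.5, 0.1, 0.0):
    # tz/t is one possible measure of tetragonal anisotropy.
    print(f"tz/t = {tz:.1f}, total bandwidth = {bandwidth(1.0, tz):.2f}")
```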
|
In image denoising (IDN) processing, the low-rank property is usually
considered as an important image prior. As a convex relaxation approximation of
low rank, nuclear norm based algorithms and their variants have attracted
significant attention. These algorithms can be collectively called
image-domain-based methods, whose common drawback is that a great number of
iterations is required to reach an acceptable solution. Meanwhile, the sparsity
of images in a
certain transform domain has also been exploited in image denoising problems.
Sparsity transform learning algorithms can achieve extremely fast computations
as well as desirable performance. By taking both advantages of image domain and
transform domain in a general framework, we propose a sparsity transform
learning and weighted singular values minimization method (STLWSM) for IDN
problems. The proposed method can make full use of the preponderance of both
domains. For solving the non-convex cost function, we also present an efficient
alternative solution for acceleration. Experimental results show that the
proposed STLWSM achieves improvements, both visually and quantitatively, by a
large margin over state-of-the-art approaches based on a single domain. It also
needs far fewer iterations than the image-domain algorithms.
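A minimal sketch of the weighted singular value shrinkage step that underlies weighted singular value minimization (our illustration; the paper's STLWSM couples this with transform learning and a different solver). The reweighting rule below, which penalizes large singular values less, is one common heuristic assumed for the example.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: one possible proximal step
    for weighted singular value minimization.  Each singular value s_i
    is soft-thresholded by its weight w_i.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))
noisy = low_rank + 0.5 * rng.standard_normal((64, 64))

# Illustrative weights: inversely proportional to the singular values.
_, s, _ = np.linalg.svd(noisy, full_matrices=False)
w = 5.0 / (s + 1e-8)
denoised = weighted_svt(noisy, w)
print("rank after shrinkage:", np.linalg.matrix_rank(denoised))
```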
|
We study a multipole vector-based decomposition of cosmic microwave
background (CMB) data in order to search for signatures of a multiconnected
topology of the universe. Using 10^6 simulated maps, we analyse the multipole
vector distribution on the sky for the lowest order multipoles together with
the probability distribution function of statistics based on the sum of the dot
products of the multipole vectors for both the simply-connected flat universe
and universes with the topology of a 3-torus. The estimated probabilities of
obtaining lower values for these statistics as compared to the 5-year WMAP data
indicate that the observed alignment of the quadrupole and octopole is
statistically favoured in a 3-torus topology where at least one dimension of
the fundamental domain is significantly shorter than the diameter of the
observable universe, as compared to the usual standard simply-connected
universe. However, none of the obtained results are able to clearly rule out
the latter (at more than 97% confidence level). Multipole vector statistics do
not appear to be very sensitive to the signatures of a 3-torus topology if the
shorter dimension of the domain becomes comparable to the diameter of the
observable universe. Unfortunately, the signatures are also significantly
diluted by the integrated Sachs-Wolfe effect.
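A minimal sketch of a dot-product alignment statistic of the kind described above, with the null distribution built from isotropically drawn multipole vectors; the exact statistic used in the paper may differ in normalization and weighting.

```python
import numpy as np

def alignment_statistic(v2, v3):
    """Sum of absolute dot products between quadrupole (l=2) and
    octopole (l=3) multipole vectors -- an illustrative form of the
    alignment statistics discussed above.
    """
    return sum(abs(np.dot(a, b)) for a in v2 for b in v3)

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# l=2 has 2 multipole vectors and l=3 has 3; drawing them isotropically
# builds the null (simply-connected) distribution by Monte Carlo.
samples = [alignment_statistic(random_unit_vectors(2),
                               random_unit_vectors(3))
           for _ in range(10000)]
print("mean statistic under isotropy:", np.mean(samples))
```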
|
The rapid development of Large Language Models (LLMs) has facilitated a
variety of applications from different domains. In this technical report, we
explore the integration of LLMs and the popular academic writing tool,
Overleaf, to enhance the efficiency and quality of academic writing. To achieve
the above goal, there are three challenges: i) enabling seamless interaction
between Overleaf and LLMs, ii) establishing reliable communication with the LLM
provider, and iii) ensuring user privacy. To address these challenges, we
present OverleafCopilot, the first-ever tool (i.e., a browser extension) that
seamlessly integrates LLMs and Overleaf, enabling researchers to leverage the
power of LLMs while writing papers. Specifically, we first propose an effective
framework to bridge LLMs and Overleaf. Second, we develop PromptGenius, a
website for researchers to easily find and share high-quality, up-to-date
prompts. Third, we propose an agent command system to help researchers
quickly build their customizable agents. OverleafCopilot
(https://chromewebstore.google.com/detail/overleaf-copilot/eoadabdpninlhkkbhngoddfjianhlghb
) has been released on the Chrome Web Store, where it now serves thousands of
researchers. Additionally, the code of PromptGenius is released at
https://github.com/wenhaomin/ChatGPT-PromptGenius. We believe our work has the
potential to revolutionize academic writing practices, empowering researchers
to produce higher-quality papers in less time.
|
Due to the three-dimensional nature of CT- or MR-scans, generative modeling
of medical images is a particularly challenging task. Existing approaches
mostly apply patch-wise, slice-wise, or cascaded generation techniques to fit
the high-dimensional data into the limited GPU memory. However, these
approaches may introduce artifacts and potentially restrict the model's
applicability for certain downstream tasks. This work presents WDM, a
wavelet-based medical image synthesis framework that applies a diffusion model
on wavelet decomposed images. The presented approach is a simple yet effective
way of scaling 3D diffusion models to high resolutions and can be trained on a
single 40 GB GPU. Experimental results on BraTS and LIDC-IDRI
unconditional image generation at a resolution of $128 \times 128 \times 128$
demonstrate state-of-the-art image fidelity (FID) and sample diversity
(MS-SSIM) scores compared to recent GANs, Diffusion Models, and Latent
Diffusion Models. Our proposed method is the only one capable of generating
high-quality images at a resolution of $256 \times 256 \times 256$,
outperforming all competing methods.
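A minimal sketch of the wavelet trick, assuming PyWavelets: one level of a 3D Haar transform halves each spatial dimension and yields eight sub-bands that can be stacked as channels for the diffusion model, with an exact inverse transform. The diffusion model itself is omitted here.

```python
import numpy as np
import pywt

# A single-level 3D discrete wavelet transform turns a (D, H, W) volume
# into 8 sub-bands at half the spatial resolution.
volume = np.random.rand(128, 128, 128).astype(np.float32)

coeffs = pywt.dwtn(volume, wavelet="haar")             # dict of 8 sub-bands
bands = np.stack([coeffs[k] for k in sorted(coeffs)])  # (8, 64, 64, 64)
print("diffusion-model input shape:", bands.shape)

# After (hypothetical) denoising/sampling in wavelet space, invert:
restored = pywt.idwtn({k: bands[i] for i, k in enumerate(sorted(coeffs))},
                      wavelet="haar")
print("reconstruction error:", np.abs(restored - volume).max())
```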
|
We compute theoretical predictions for the production of a W-boson in
association with a bottom-quark pair at hadron colliders at
next-to-next-to-leading order (NNLO) in QCD, including the leptonic decay of
the W-boson, while treating the bottom quark as massless. This calculation
constitutes the very first $2 \to 3$ process with a massive external particle
to be studied at such a perturbative order. We derive an analytic expression
for the required two-loop five-particle amplitudes in the leading colour
approximation employing finite-field methods. Numerical results for the cross
section and differential distributions are presented for the Large Hadron
Collider at $\sqrt{s} = 8$ TeV. We observe an improvement of the perturbative
convergence for the inclusive case and for the prediction with a jet veto upon
the inclusion of the NNLO QCD corrections.
|
We have recently begun a search for Classical Novae in M31 using three years
of multicolour data taken by the POINT-AGAPE microlensing collaboration with
the 2.5m Isaac Newton Telescope (INT) on La Palma. This is a pilot program
leading to the use of the Liverpool Telescope (LT) to systematically search for
and follow novae of all speed classes in external galaxies to distances up to
around 5Mpc.
|
We derive the transport equations for two-dimensional electron systems with
spin-orbit interaction and short-range spin-independent disorder. In the limit
of slow spatial variations of the electron distribution we obtain coupled
diffusion equations for the electron density and spin. Using these equations we
calculate electric-field induced spin accumulation in a finite-size sample for
arbitrary ratio between spin-orbit energy splitting and elastic scattering
rate. We demonstrate that the spin-Hall conductivity vanishes in an infinite
system independent of this ratio.
|
The state space in Multiagent Reinforcement Learning (MARL) grows
exponentially with the agent number. Such a curse of dimensionality results in
poor scalability and low sample efficiency, inhibiting MARL for decades. To
break this curse, we propose a unified agent permutation framework that
exploits the permutation invariance (PI) and permutation equivariance (PE)
inductive biases to reduce the multiagent state space. Our insight is that
permuting the order of entities in the factored multiagent state space does not
change the information. Specifically, we propose two novel implementations: a
Dynamic Permutation Network (DPN) and a Hyper Policy Network (HPN). The core
idea is to build separate entity-wise PI input and PE output network modules to
connect the entity-factored state space and action space in an end-to-end way.
DPN achieves such connections by two separate module selection networks, which
consistently assign the same input module to the same input entity (guarantee
PI) and assign the same output module to the same entity-related output
(guarantee PE). To enhance the representation capability, HPN replaces the
module selection networks of DPN with hypernetworks to directly generate the
corresponding module weights. Extensive experiments in SMAC, Google Research
Football and MPE validate that the proposed methods significantly boost the
performance and the learning efficiency of existing MARL algorithms.
Remarkably, in SMAC, we achieve 100% win rates in almost all hard and
super-hard scenarios (never achieved before).
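A minimal sketch of the permutation-invariance idea (a shared per-entity embedding followed by sum pooling), not the DPN/HPN architectures themselves; dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PIEncoder(nn.Module):
    """Minimal permutation-invariant state encoder: a shared embedding
    applied to every entity followed by sum pooling, so permuting the
    entity order leaves the output unchanged.
    """
    def __init__(self, entity_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(entity_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, entities):          # (batch, n_entities, entity_dim)
        return self.embed(entities).sum(dim=1)

enc = PIEncoder(entity_dim=6, hidden_dim=32)
x = torch.randn(4, 5, 6)
perm = x[:, torch.randperm(5), :]
print(torch.allclose(enc(x), enc(perm), atol=1e-5))   # True: PI holds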
|
Using quantum Monte Carlo (QMC) simulations and a mean field (MF) theory, we
investigate the spin-1/2 XXZ model with nearest neighbor interactions on a
periodic depleted square lattice. In particular, we present results for the
1/4-depleted lattice in an applied magnetic field and investigate the effect of
depletion on the ground state. The ground state phase diagram is found to
include an antiferromagnetic (AF) phase of magnetization $m_{z}=\pm 1/6$ and an
in-plane ferromagnetic (FM) phase with finite spin stiffness. The agreement
between the QMC simulations and the mean field theory based on resonating
trimers suggests the AF phase and in-plane FM phase can be interpreted as a
Mott insulator and a superfluid of trimer states, respectively. While the
thermal transitions of the in-plane FM phase are well described by the
Kosterlitz-Thouless mechanism, the quantum phase transition from the AF phase
to the in-plane FM phase is a direct second-order insulator-superfluid
transition driven by an increasing magnetic field.
|
Since any consistent quantum field theory in curved space-time must include
black hole radiation, in this paper we examine the Krein-Gupta-Bleuler (KGB)
formalism as an inevitable quantization scheme in order to follow the guideline
of the covariance of the minimally coupled massless scalar field and
linear gravity on de Sitter (dS) background in the sense of
Wightman-G\"{a}rding approach, by investigating thermodynamical aspects of
black holes. The formalism is interestingly free of pathological large distance
behavior. Moreover, in this construction no infinite term appears in the
calculation of expectation values of the energy-momentum tensor
(renormalization is automatic and covariant), with the result that the vacuum
energy of the free field vanishes. However, the existence of an effective
potential barrier, intrinsically created by the black hole's gravitational
field, gives a
Casimir-type contribution to the vacuum expectation value of the
energy-momentum tensor. On this basis, by evaluating the Casimir
energy-momentum tensor for a conformally coupled massless scalar field in the
vicinity of a non-rotating black hole event horizon through the KGB
quantization, we explicitly prove that the hole produces black-body radiation
whose temperature exactly coincides with the result obtained by Hawking for
black hole radiation.
|
CO and CO$_2$ are the two dominant carbon-bearing molecules in comae and have
major roles in driving activity. Their relative abundances also provide strong
observational constraints to models of solar system formation and evolution but
have never been studied together in a large sample of comets. We carefully
compiled and analyzed published measurements of simultaneous CO and CO$_2$
production rates for 25 comets. Approximately half of the comae have
substantially more CO$_2$ than CO, about a third are CO-dominated and about a
tenth produce a comparable amount of both. There may be a heliocentric
dependence to this ratio with CO dominating comae beyond 3.5 au. Eight out of
nine of the Jupiter Family Comets in our study produce more CO$_2$ than CO. The
six dynamically new comets produce more CO$_2$ relative to CO than the eight
Oort Cloud comets that have made multiple passes through the inner solar
system. This may be explained by long-term cosmic ray processing of a comet
nucleus's outer layers. We find (Q$_{CO}$/Q$_{H_2O}$)$_{median}$ = 3 $\pm$ 1\%
and (Q$_{CO_2}$/Q$_{H_2O}$)$_{median}$ = 12 $\pm$ 2\%. The inorganic volatile
carbon budget was estimated to be (Q$_{CO}$+Q$_{CO_2}$)/Q$_{H_2O}$ $\sim$ 18\%
for most comets. Between 0.7 and 4.6 au, CO$_2$ outgassing appears to be
intimately tied to water production in a way that CO is not. The
volatile carbon/oxygen ratio for 18 comets is C/O$_{median}$ $\sim$ 13\%, which
is consistent with a comet formation environment that is well within the CO
snow line.
|
We show that a resummation model for the evolution kernel at small x creates
a bridge between the weak and strong couplings. The resummation model embodies
DGLAP and BFKL anomalous dimensions at leading logarithmic orders, as well as a
kinematical constraint on the real emission part of the kernel. In the case of
pure gluodynamics the strong coupling limit of the Pomeron intercept is
consistent with the exchange of the spin-two, colorless particle.
|
Measurements of the elastic scattering angular distribution for the
d+$^{197}$Au system were carried out covering deuteron incident energies in the
range from 5 to 16 MeV, i.e. approximately 50% below and above the Coulomb
barrier. A critical interaction distance of $d_I$= 2.49 fm was determined from
these distributions, which is comparable to that of the radioactive halo
nucleus $^{6}$He. The experimental angular distributions were systematically
analyzed using two alternative models: the semi-microscopic Sao Paulo and the
effective Woods-Saxon optical potentials, for which the best-fitting parameters
were determined. These potentials, integrated in the vicinity of the
sensitivity radius, were calculated for each energy. For both models, the
energy dependence of these integrals exhibited the breakup threshold anomaly
around the Coulomb barrier, a typical signature of weakly bound nuclei.
|
Recently, a new stabilizer-free weak Galerkin (SFWG) method was proposed,
which is easier to implement. The idea is to raise the degree j of the
polynomials used to compute the weak gradient. It has been shown that if j>=j0
for some j0, then SFWG achieves the optimal rate of convergence. However, a
large j causes numerical difficulties. To improve the efficiency of SFWG and
avoid numerical locking, in this note we determine the optimal j0 with a
rigorous mathematical proof.
|
Trusted Platform Module (TPM) serves as a hardware-based root of trust that
protects cryptographic keys from privileged system and physical adversaries. In
this work, we perform a black-box timing analysis of TPM 2.0 devices deployed
on commodity computers. Our analysis reveals that some of these devices feature
secret-dependent execution times during signature generation based on elliptic
curves. In particular, we discovered timing leakage on an Intel firmware-based
TPM as well as a hardware TPM. We show how this information allows an attacker
to apply lattice techniques to recover 256-bit private keys for ECDSA and
ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about
1,300 observations and in less than two minutes. Similarly, we extract the
private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which
is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000
observations. We further highlight the impact of these vulnerabilities by
demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to
generate the digital signatures for authentication. In this attack, the remote
client recovers the server's private authentication key by timing only 45,000
authentication handshakes via a network connection.
The vulnerabilities we have uncovered emphasize the difficulty of correctly
implementing known constant-time techniques, and show the importance of
evolutionary testing and transparent evaluation of cryptographic
implementations. Even certified devices that claim resistance against attacks
require additional scrutiny by the community and industry, as we learn more
about these attacks.
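A toy illustration (ours, not the paper's attack code) of why such timing leakage is devastating: if signing time grows with the nonce's bit length, the fastest observed signatures correspond to nonces with leading zero bits, which is exactly the partial-nonce information that lattice attacks on (EC)DSA consume. The cost model and jitter below are assumptions.

```python
import random
import statistics

N_BITS = 256

def signing_time(nonce):
    # Hypothetical cost model: one loop iteration per bit of the nonce,
    # plus measurement jitter.
    return nonce.bit_length() + random.gauss(0, 0.5)

samples = []
for _ in range(50000):
    nonce = random.getrandbits(N_BITS)
    samples.append((signing_time(nonce), nonce))

# Keep only the fastest signatures: their nonces must be short.
threshold = statistics.mean(t for t, _ in samples) - 6
fast = [n for t, n in samples if t < threshold]
print(f"kept {len(fast)} signatures; all nonces short:",
      all(n.bit_length() <= N_BITS - 4 for n in fast))
```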
|
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis
results. However, they are slow to render, requiring hundreds of network
evaluations per pixel to approximate a volume rendering integral. Baking NeRFs
into explicit data structures enables efficient rendering, but results in a
large increase in memory footprint and, in many cases, a quality reduction. In
this paper, we propose a novel neural light field representation that, in
contrast, is compact and directly predicts integrated radiance along rays. Our
method supports rendering with a single network evaluation per pixel for small
baseline light field datasets and can also be applied to larger baselines with
only a few evaluations per pixel. At the core of our approach is a ray-space
embedding network that maps the 4D ray-space manifold into an intermediate,
interpolable latent space. Our method achieves state-of-the-art quality on
dense forward-facing datasets such as the Stanford Light Field dataset. In
addition, for forward-facing scenes with sparser inputs we achieve results that
are competitive with NeRF-based approaches in terms of quality while providing
a better speed/quality/memory trade-off with far fewer network evaluations.
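A minimal sketch of the neural-light-field idea with a ray-space embedding, assuming PyTorch: a 4D ray parametrization is embedded into a latent space and decoded directly to integrated radiance in a single network evaluation per ray. Layer sizes and the two-plane parametrization are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LightFieldNet(nn.Module):
    """Toy neural light field: a 4D ray is mapped by an embedding
    network into a latent space, then decoded directly to integrated
    radiance -- one network evaluation per ray.
    """
    def __init__(self, latent_dim=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(4, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),        # RGB radiance
        )

    def forward(self, rays):                 # (batch, 4) = (u, v, s, t)
        return self.decode(self.embed(rays))

net = LightFieldNet()
colors = net(torch.rand(1024, 4))            # one evaluation per ray
print(colors.shape)                          # torch.Size([1024, 3])
```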
|
An edge-face colouring of a plane graph with edge set $E$ and face set $F$ is
a colouring of the elements of $E \cup F$ such that adjacent or incident
elements receive different colours. Borodin proved that every plane graph of
maximum degree $\Delta\ge10$ can be edge-face coloured with $\Delta+1$ colours.
Borodin's bound was recently extended to the case where $\Delta=9$. In this
paper, we extend it to the case $\Delta=8$.
|
We propose a $\mu-\tau$ reflection symmetric Littlest Seesaw ($\mu\tau$-LSS)
model. In this model the two mass parameters of the LSS model are fixed to be
in a special ratio by symmetry, so that the resulting neutrino mass matrix in
the flavour basis (after the seesaw mechanism has been applied) satisfies
$\mu-\tau$ reflection symmetry and has only one free adjustable parameter,
namely an overall free mass scale. However the physical low energy predictions
of the neutrino masses and lepton mixing angles and CP phases are subject to
renormalisation group (RG) corrections, which introduces further parameters.
Although the high energy model is rather complicated, involving $(S_4\times
U(1))^2$ and supersymmetry, with many flavons and driving fields, the low
energy neutrino mass matrix has ultimate simplicity.
|
We present a method that leverages the complementarity of event cameras and
standard cameras to track visual features with low latency. Event cameras are
novel sensors that output pixel-level brightness changes, called "events". They
offer significant advantages over standard cameras, namely a very high dynamic
range, no motion blur, and a latency in the order of microseconds. However,
because the same scene pattern can produce different events depending on the
motion direction, establishing event correspondences across time is
challenging. By contrast, standard cameras provide intensity measurements
(frames) that do not depend on motion direction. Our method extracts features
on frames and subsequently tracks them asynchronously using events, thereby
exploiting the best of both types of data: the frames provide a photometric
representation that does not depend on motion direction and the events provide
low-latency updates. In contrast to previous works, which are based on
heuristics, this is the first principled method that uses raw intensity
measurements directly, based on a generative event model within a
maximum-likelihood framework. As a result, our method produces feature tracks
that are both more accurate (subpixel accuracy) and longer than the state of
the art, across a wide variety of scenes.
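A minimal sketch of the idealized generative event model underlying such methods: a pixel emits an event when its log-intensity change exceeds a contrast threshold, with polarity given by the sign of the change. The threshold and toy frames are assumptions; the paper embeds a model of this kind in a maximum-likelihood framework.

```python
import numpy as np

def generate_events(log_I_prev, log_I_now, threshold=0.15):
    """Idealized event camera model: a pixel fires an event whenever
    its log intensity changes by more than a contrast threshold, with
    polarity giving the sign of the change.
    """
    diff = log_I_now - log_I_prev
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)
    return list(zip(xs, ys, polarities))            # (x, y, +1/-1)

rng = np.random.default_rng(0)
frame0 = np.log(rng.uniform(0.2, 1.0, (64, 64)))
frame1 = frame0.copy()
frame1[20:30, 20:30] += 0.3                          # a brightening patch
events = generate_events(frame0, frame1)
print(f"{len(events)} events, e.g. {events[0]}")
```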
|
The effect of monolayers of oxygen (O) and hydrogen (H) on the possibility of
material transfer at aluminium/titanium nitride (Al/TiN) and copper/diamond
(Cu/C$_{\text{dia}}$) interfaces, respectively, was investigated within the
framework of density functional theory (DFT). To this end, the approach,
contact, and subsequent separation of two atomically flat surfaces consisting
of the aforementioned pairs of materials were simulated. These calculations
were performed for the clean as well as oxygenated and hydrogenated Al and
C$_{\text{dia}}$ surfaces, respectively. Various contact configurations were
considered by studying several lateral arrangements of the involved surfaces at
the interface. Material transfer is typically possible at interfaces between
the investigated clean surfaces; however, the addition of O to the Al and H to
the C$_{\text{dia}}$ surfaces was found to hinder material transfer. This
passivation occurs because of a significant reduction of the adhesion energy at
the examined interfaces, which can be explained by the distinct bonding
situations.
|
The doublet--triplet mass splitting problem is one of the most serious
problems in supersymmetric grand unified theories (GUTs). A class of models
based on a product gauge group, such as the SU(5)_{GUT} times U(3)_H or the
SU(5)_{GUT} times U(2)_H, realize naturally the desired mass splitting that is
protected by an unbroken R symmetry. It has been pointed out that various
features in the models suggest that these product-group unification models are
embedded in a supersymmetric brane world. We show an explicit construction of
those models in the supersymmetric brane world based on the Type IIB
supergravity in ten dimensions. We consider T^6/(Z_{12} times Z_2) orientifold
for the compactified six extra dimensions. We find that all of the particles
needed for the GUT-symmetry-breaking sector are obtained from the D-brane
fluctuations. The three families of quarks and leptons are introduced at an
orbifold singularity, although their origin remains unexplained. This paper
includes extensive discussion on anomaly cancellation in a given orbifold
geometry. Relation to the Type IIB string theory, realization of R symmetry as
a rotation of extra-dimensional space, and effective superpotential at low
energies are also discussed.
|
The invariant-comb approach is a method to construct entanglement measures
for multipartite systems of qubits. The essential step is the construction of
an antilinear operator that we call {\em comb} in reference to the {\em
hairy-ball theorem}. An appealing feature of this approach is that for qubits
(or spins 1/2) the combs are automatically invariant under $SL(2,\CC)$, which
implies that the obtained invariants are entanglement monotones by
construction. By asking which property of a state determines whether or not it
is detected by a polynomial $SL(2,\CC)$ invariant we find that it is the
presence of a {\em balanced part} that persists under local unitary
transformations. We present a detailed analysis for the maximally entangled
states detected by such polynomial invariants, which leads to the concept of
{\em irreducibly balanced} states. The latter indicates a tight connection with
SLOCC classifications of qubit entanglement. Combs may also help to define
measures for multipartite entanglement of higher-dimensional subsystems.
However, for higher spins there are many independent combs such that it is
non-trivial to find an invariant one. By restricting the allowed local
operations to rotations of the coordinate system (i.e. again to the
$SL(2,\CC)$) we manage to define a unique extension of the concurrence to
general half-integer spin with an analytic convex-roof expression for mixed
states.
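For two qubits the comb construction reduces to Wootters' concurrence, which the following sketch evaluates as the antilinear expectation value $|\langle\psi^*|\sigma_y\otimes\sigma_y|\psi\rangle|$ (a standard result; toy states below are for illustration).

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(psi):
    comb = np.kron(sy, sy)
    # Bilinear (not sesquilinear) form: the antilinear expectation
    # value <psi*| sigma_y (x) sigma_y |psi>.
    return abs(psi @ comb @ psi)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # maximally entangled
product = np.kron([1, 0], [0, 1]).astype(complex)  # separable |01>
print(concurrence(bell))      # -> 1.0 (balanced state, detected)
print(concurrence(product))   # -> 0.0 (no balanced part)
```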
|
We perform a detailed, fully-correlated study of the chiral behavior of the
pion mass and decay constant, based on 2+1 flavor lattice QCD simulations.
These calculations are implemented using tree-level, O(a)-improved Wilson
fermions, at four values of the lattice spacing down to 0.054 fm and all the
way down to below the physical value of the pion mass. They allow a sharp
comparison with the predictions of SU(2) chiral perturbation theory (\chi PT)
and a determination of some of its low energy constants. In particular, we
systematically explore the range of applicability of NLO SU(2) \chi PT in two
different expansions: the first in quark mass (x-expansion), and the second in
pion mass (\xi-expansion). We find that these expansions begin showing signs of
failure around M_\pi=300 MeV for the typical percent-level precision of our
N_f=2+1 lattice results. We further determine the LO low energy constants
(LECs), F=88.0 \pm 1.3\pm 0.3 and B^\msbar(2 GeV)=2.58 \pm 0.07 \pm 0.02 GeV,
and the related quark condensate, \Sigma^\msbar(2 GeV)=(271\pm 4\pm 1 MeV)^3,
as well as the NLO ones, l_3=2.5 \pm 0.5 \pm 0.4 and l_4=3.8 \pm 0.4 \pm 0.2,
with fully controlled uncertainties. We also explore the NNLO expansions and
the values of NNLO LECs. In addition, we show that the lattice results favor
the presence of chiral logarithms. We further demonstrate how the absence of
lattice results with pion masses below 200 MeV can lead to misleading results
and conclusions. Our calculations allow a fully controlled, ab initio
determination of the pion decay constant with a total 1% error, which is in
excellent agreement with experiment.
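For reference, the NLO SU(2) \chi PT expressions being tested in the x-expansion take the standard Gasser-Leutwyler form (our transcription of textbook formulas, with $x = M^2/(4\pi F)^2$, $M^2 = 2Bm_{ud}$, and $\bar\ell_i = \ln(\Lambda_i^2/M^2)$):

```latex
M_\pi^2 = M^2\Big[1 - \tfrac{1}{2}\,x\,\bar\ell_3 + O(x^2)\Big],
\qquad
F_\pi = F\Big[1 + x\,\bar\ell_4 + O(x^2)\Big].
```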
|
We introduce a novel class of field theories where energy always flows along
timelike geodesics, mimicking in that respect dust, yet which possess non-zero
pressure. This theory comprises two scalar fields, one of which is a Lagrange
multiplier enforcing a constraint between the other's field value and
derivative. We show that this system possesses no wave-like modes but retains a
single dynamical degree of freedom. Thus, the sound speed is always identically
zero on all backgrounds. In particular, cosmological perturbations reproduce
the standard behaviour for hydrodynamics with vanishing sound speed. Using all
these properties we propose a model unifying Dark Matter and Dark Energy in a
single degree of freedom. In a certain limit this model exactly reproduces the
evolution history of Lambda-CDM, while deviations away from the standard
expansion history produce a potentially measurable difference in the evolution
of structure.
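Schematically, the class of theories described here can be illustrated by a Lagrange-multiplier action of the form (our illustration; the paper's precise form may differ):

```latex
S = \int d^4x\,\sqrt{-g}\;\lambda\left(g^{\mu\nu}\partial_\mu\varphi\,\partial_\nu\varphi + f(\varphi)\right).
```

Varying with respect to $\lambda$ enforces $g^{\mu\nu}\partial_\mu\varphi\,\partial_\nu\varphi = -f(\varphi)$, so for $f>0$ the gradient $\partial_\mu\varphi$ is timelike and defines the direction along which energy flows.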
|
In the current landscape of ever-increasing levels of digitalization, we are
facing major challenges pertaining to scalability. Recommender systems have
become irreplaceable both for helping users navigate the increasing amounts of
data and, conversely, aiding providers in marketing products to interested
users. The growing awareness of discrimination in machine learning methods has
recently motivated both academia and industry to research how fairness can be
ensured in recommender systems. For recommender systems, such issues are well
exemplified by occupation recommendation, where biases in historical data may
lead to recommender systems relating one gender to lower wages or to the
propagation of stereotypes. In particular, consumer-side fairness, which
focuses on mitigating discrimination experienced by users of recommender
systems, has seen a vast number of diverse approaches for addressing different
types of discrimination. The nature of said discrimination depends on the
setting and the applied fairness interpretation, of which there are many
variations. This survey serves as a systematic overview and discussion of the
current research on consumer-side fairness in recommender systems. To that end,
a novel taxonomy based on high-level fairness interpretation is proposed and
used to categorize the research and their proposed fairness evaluation metrics.
Finally, we highlight some suggestions for the future direction of the field.
|
Let $K$ be a complete discretely valued field of residue characteristic not
$2$ and $O_K$ its ring of integers. We explicitly construct a regular model
over $O_K$ with strict normal crossings of any hyperelliptic curve
$C/K:y^2=f(x)$. For this purpose, we introduce the new notion of a ``MacLane
cluster picture'', which aims to be a link between clusters and MacLane
valuations.
|
FIDO2 authentication is starting to be applied in numerous web authentication
services, aiming to replace passwords and their known vulnerabilities. However,
this new authentication method has not been integrated yet with network
authentication systems. In this paper, we introduce FIDO2CAP: FIDO2
Captive-portal Authentication Protocol. Our proposal describes a novel protocol
for captive-portal network authentication using FIDO2 authenticators, as
security keys and passkeys. For validating our proposal, we have developed a
prototype of FIDO2CAP authentication in a mock scenario. Using this prototype,
we performed a usability experiment with 15 real users. This work constitutes
the first systematic approach to adapting network authentication to the new
authentication paradigm relying on FIDO2 authentication.
|
In this article I investigate the phenomenon of minimum models of
second-order set theories, focusing on Kelley--Morse set theory $\mathsf{KM}$,
G\"odel--Bernays set theory $\mathsf{GB}$, and $\mathsf{GB}$ augmented with the
principle of Elementary Transfinite Recursion. The main results are the
following. (1) A countable model of $\mathsf{ZFC}$ has a minimum
$\mathsf{GBC}$-realization if and only if it admits a parametrically definable
global well-order. (2) Countable models of $\mathsf{GBC}$ admit minimal
extensions with the same sets. (3) There is no minimum transitive model of
$\mathsf{KM}$. (4) There is a minimum $\beta$-model of $\mathsf{GB} +
\mathsf{ETR}$. The main question left unanswered by this article is whether
there is a minimum transitive model of $\mathsf{GB} + \mathsf{ETR}$.
|
We report on simultaneous sub-second optical and X-ray timing observations of
the low mass X-ray binary black hole candidate MAXI J1820+070. The bright 2018
outburst rise allowed simultaneous photometry in five optical bands ($ugriz_s$)
with HiPERCAM/GTC (Optical) at frame rates over 100 Hz, together with NICER/ISS
observations (X-rays). Intense (factor of two) red flaring activity in the
optical is seen over a broad range of timescales down to $\sim$10 ms.
Cross-correlating the bands reveals a prominent anti-correlation on timescales
of $\sim$seconds, and a narrow sub-second correlation at a lag of $\approx$+165
ms (optical lagging X-rays). This lag increases with optical wavelength, and is
approximately constant over Fourier frequencies of $\sim$0.3-10 Hz. These
features are consistent with an origin in the inner accretion flow and jet base
within $\sim$5000 Gravitational radii. An additional $\sim$+5 s lag feature may
be ascribable to disc reprocessing. MAXI J1820+070 is the third black hole
transient to display a clear $\sim$0.1 s optical lag, which may be a common
feature of such objects. The sub-second lag $variation$ with wavelength is
novel, and may allow constraints on internal shock jet stratification models.
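A toy cross-correlation estimator (ours, not the paper's pipeline) showing how such a sub-second lag can be recovered from evenly sampled light curves; the sampling, lag, and noise level are illustrative.

```python
import numpy as np

def ccf_lag(optical, xray, dt):
    """Peak of the cross-correlation function between two evenly
    sampled, mean-subtracted light curves; positive lag means the
    optical lags the X-rays.
    """
    o = (optical - optical.mean()) / optical.std()
    x = (xray - xray.mean()) / xray.std()
    ccf = np.correlate(o, x, mode="full") / len(o)
    lags = (np.arange(len(ccf)) - (len(o) - 1)) * dt
    return lags[np.argmax(ccf)]

rng = np.random.default_rng(0)
dt, true_lag = 0.005, 0.165                # 5 ms sampling, 165 ms lag
xray = rng.standard_normal(5000)
shift = round(true_lag / dt)
optical = np.roll(xray, shift) + 0.5 * rng.standard_normal(5000)
print(f"recovered lag: {ccf_lag(optical, xray, dt)*1e3:.0f} ms")
```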
|
We investigate the Kovacs (or crossover) effect in facilitated $f$-spin
models of glassy dynamics. Although the Kovacs hump shows a behavior
qualitatively similar for all cases we have examined (irrespective of the
facilitation parameter $f$ and the spatial dimension $d$), we find that the
dependence of the Kovacs peak time on the temperature of the second quench
allows one to distinguish among different microscopic mechanisms responsible for
the glassy relaxation (e.g. cooperative vs defect diffusion). We also analyze
the inherent structure dynamics underlying the Kovacs protocol, and find that
the class of facilitated spin models with $d>1$ and $f>1$ shows features
resembling those obtained recently in a realistic model of a fragile
glass-forming liquid.
|
We develop a computational method for evaluating the damping of vibrational
modes in mono-atomic metallic chains suspended between bulk crystals under
external strain. The damping is due to the coupling between the chain and
contact modes and the phonons in the bulk substrates. The geometry of the atoms
forming the contact is taken into account. The dynamical matrix is computed
with density functional theory in the atomic chain and the contacts using
finite atomic displacements, while an empirical method is employed for the bulk
substrate. As a specific example, we present results for the experimentally
realized case of gold chains in two different crystallographic directions. The
range of the computed damping rates confirms the estimates obtained by fits to
experimental data [Frederiksen et al., Phys. Rev. B, 75, 205413(R)(2007)]. Our
method indicates that an order-of-magnitude variation in the damping is
possible even for relatively small changes in the strain. Such detailed insight
is necessary for a quantitative analysis of damping in metallic atomic chains,
and in explaining the rich phenomenology seen in the experiments.
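A toy stand-in for the dynamical-matrix step, replacing the DFT finite-displacement calculation by nearest-neighbour springs: diagonalizing D yields the chain mode frequencies whose damping is the subject of the paper. Chain length and spring constant are illustrative.

```python
import numpy as np

# Vibrational modes of a finite mono-atomic chain from its dynamical
# matrix D = K / m, built from nearest-neighbour springs with fixed
# boundary conditions.
n, k_spring, mass = 7, 1.0, 1.0

D = np.zeros((n, n))
for i in range(n):
    D[i, i] = 2 * k_spring / mass
    if i > 0:
        D[i, i - 1] = D[i - 1, i] = -k_spring / mass

eigvals, eigvecs = np.linalg.eigh(D)
frequencies = np.sqrt(np.abs(eigvals))     # omega_i in sqrt(k/m) units
print("chain mode frequencies:", np.round(frequencies, 3))
```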
|
We consider image reconstruction in full-field photoacoustic tomography,
where 2D projections of the full 3D acoustic pressure distribution at a given
time T>0 are collected. We discuss existing results on the stability and
uniqueness of the resulting image reconstruction problem and review existing
reconstruction algorithms. Open challenges are also mentioned. Additionally, we
introduce novel one-step reconstruction methods allowing for a variable speed
of sound. We apply preconditioned iterative and variational regularization
methods to the one-step formulation. Numerical results using the one-step
formulation are presented, together with a comparison with the previous
two-step approach for full-field photoacoustic tomography
|
Numerous signals in relevant signal processing applications can be modeled as
a sum of complex exponentials. Each exponential term entails a particular
property of the modeled physical system, and it is possible to define families
of signals that are associated with the complex exponentials. In this paper, we
formulate a classification problem for this guiding principle and we propose a
data processing strategy. In particular, we exploit the information obtained
from the analytical model by combining it with data-driven learning techniques.
As a result, we obtain a classification strategy that is robust under modeling
uncertainties and experimental perturbations. To assess the performance of the
new scheme, we test it with experimental data obtained from the scattering
response of targets illuminated with an impulse radio ultra-wideband radar.
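A minimal sketch of the signal model, with illustrative damping and frequency parameters: each complex exponential contributes a damped oscillation, and model knowledge can be folded into simple features such as the spectral energy near each candidate mode.

```python
import numpy as np

# x[n] = sum_k a_k * exp(s_k * n * dt), with s_k = -d_k + 1j*w_k
# encoding a damping d_k and an angular frequency w_k of the system.
dt, n = 1e-4, np.arange(512)
modes = [(1.0, -30.0 + 1j * 2 * np.pi * 900),
         (0.4, -80.0 + 1j * 2 * np.pi * 2100)]

rng = np.random.default_rng(0)
x = sum(a * np.exp(s * n * dt) for a, s in modes)
x += 0.01 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))

# A simple model-aware feature: spectral energy near each mode.
spectrum = np.fft.fft(x)
freqs = np.fft.fftfreq(len(n), dt)
for a, s in modes:
    f0 = s.imag / (2 * np.pi)
    idx = np.argmin(np.abs(freqs - f0))
    print(f"mode at {f0:.0f} Hz -> |X| = {abs(spectrum[idx]):.1f}")
```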
|
The mixing efficiency of a flow advecting a passive scalar sustained by
steady sources and sinks is naturally defined in terms of the suppression of
bulk scalar variance in the presence of stirring, relative to the variance in
the absence of stirring. These variances can be weighted at various spatial
scales, leading to a family of multi-scale mixing measures and efficiencies. We
derive a priori estimates on these efficiencies from the advection--diffusion
partial differential equation, focusing on a broad class of statistically
homogeneous and isotropic incompressible flows. The analysis produces bounds on
the mixing efficiencies in terms of the Peclet number, a measure of the
strength of the stirring relative to molecular diffusion. We show by example
that the
estimates are sharp for particular source, sink and flow combinations. In
general the high-Peclet number behavior of the bounds (scaling exponents as
well as prefactors) depends on the structure and smoothness properties of, and
length scales in, the scalar source and sink distribution. The fundamental
model of the stirring of a monochromatic source/sink combination by the random
sine flow is investigated in detail via direct numerical simulation and
analysis. The large-scale mixing efficiency follows the upper bound scaling
(within a logarithm) at high Peclet number but the intermediate and small-scale
efficiencies are qualitatively less than optimal. The Peclet number scaling
exponents of the efficiencies observed in the simulations are deduced
theoretically from the asymptotic solution of an internal layer problem arising
in a quasi-static model.
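For concreteness, the multi-scale mixing efficiencies and the Peclet number referred to above are commonly defined along the following lines (our transcription; the paper's precise weighting may differ), where $\theta$ is the steady scalar field in the presence of stirring, $\theta_0$ the purely diffusive solution for the same source-sink distribution, and $p=1,0,-1$ weight small, intermediate, and large scales:

```latex
\mathcal{E}_p = \left(\frac{\langle |\nabla^{p}\theta_0|^2\rangle}{\langle |\nabla^{p}\theta|^2\rangle}\right)^{1/2},
\qquad
\mathrm{Pe} = \frac{U L}{\kappa},
```

with $U$ the rms stirring speed, $L$ a characteristic length scale, and $\kappa$ the molecular diffusivity.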
|