It is argued that a new mechanism and many-body theory of superconductivity
are required for doped correlated insulators. Here we review the essential
features of and the experimental support for such a theory, in which the
physics is driven by the kinetic energy.
|
The dust-to-stellar mass ratio ($M_{\rm dust}$/$M_{\rm \star}$) is a crucial
yet poorly constrained quantity for understanding the production mechanisms of
dust, metals and stars in galaxy evolution. In this work we explore and
interpret the nature of $M_{\rm dust}$/$M_{\rm \star}$ in 300 massive
($M_{\star}>10^{10}M_{\odot}$), dusty star-forming galaxies detected with ALMA
up to $z\approx5$. We find that $M_{\rm dust}$/$M_{\rm \star}$ evolves with
redshift, stellar mass, specific SFR and integrated dust size, differently for
main sequence and starburst galaxies. In both galaxy populations $M_{\rm
dust}$/$M_{\rm \star}$ rises until $z\sim2$ followed by a roughly flat trend
towards higher redshifts. We show that the inverse relation between $M_{\rm
dust}$/$M_{\rm \star}$ and $M_{\star}$ holds up to $z\approx5$ and can be
interpreted as an evolutionary transition from early to late starburst phases.
We demonstrate that $M_{\rm dust}$/$M_{\rm \star}$ in starbursts mirrors the
increase in molecular gas fraction with redshift, and is enhanced in objects
with the most compact dusty star-formation. The state-of-the-art cosmological
simulation SIMBA broadly matches the evolution of $M_{\rm dust}$/$M_{\rm
\star}$ in main sequence galaxies, but underestimates it in starbursts. The
latter is found to be linked to lower gas-phase metallicities and longer dust
growth timescales relative to the data. Our data are well reproduced by an
analytical model that includes recipes for rapid metal enrichment, strongly
suggesting that high $M_{\rm dust}$/$M_{\rm \star}$ is due to fast grain growth
in a metal-enriched ISM. Our work highlights the multiple benefits of using
$M_{\rm dust}$/$M_{\rm \star}$ as a diagnostic tool for: (1) separating main
sequence and starburst galaxies out to $z\sim5$; (2) probing the evolutionary
phases of dusty galaxies; and (3) refining the treatment of the dust life cycle
in simulations.
|
Results of Morse and Schilling show that the set of increasing factorizations
of reduced words for a permutation is naturally a crystal for the general
linear Lie algebra. Hiroshima has recently constructed two superalgebra
analogues of such crystals. Specifically, Hiroshima has shown that the sets of
increasing factorizations of involution words and fpf-involution words for a
self-inverse permutation are each crystals for the queer Lie superalgebra. In
this paper, we prove that these crystals are normal and identify their
connected components. To accomplish this, we study two insertion algorithms
that may be viewed as shifted analogues of the Edelman-Greene correspondence.
We prove that the connected components of Hiroshima's crystals are the subsets
of factorizations with the same insertion tableau for these algorithms, and
that passing to the recording tableau defines a crystal morphism. This confirms
a conjecture of Hiroshima. Our methods involve a detailed investigation of
certain analogues of the Little map, through which we extend several results of
Hamaker and Young.
|
The total Hamiltonian in general relativity, which involves the first class
Hamiltonian and momentum constraints, weakly vanishes. However, when the action
is expanded around a classical solution as in the case of a single scalar field
inflationary model, there appears a non-vanishing Hamiltonian and additional
first class constraints; but this time the theory becomes perturbative in the
number of fluctuation fields. We show that one can reorganize this expansion
and solve the Hamiltonian constraint exactly, which yields an explicit
all-order action. On the other hand, the momentum constraint can be solved
perturbatively in the tensor modes $\gamma_{ij}$ while keeping the curvature
perturbation $\zeta$ dependence exact. In this way, after gauge fixing, one can
obtain a semi-exact Hamiltonian for $\zeta$ which only receives corrections
from the interactions with the tensor modes (hence the Hamiltonian becomes
exact when the tensor perturbations are set to zero). The equations of motion
clearly exhibit
when the evolution of $\zeta$ involves a logarithmic time dependence, which is
a subtle point that has been debated in the literature. We discuss the long
wavelength and late time limits, and obtain some simple but non-trivial
classical solutions of the $\zeta$ zero-mode.
|
We consider an impurity immersed in a Bose-Einstein condensate with tunable
boson-impurity interactions. Such a Bose polaron has recently been predicted to
exhibit an intriguing energy spectrum at finite temperature, where the
ground-state quasiparticle evenly splits into two branches as the temperature
is increased from zero [Guenther et al., Phys. Rev. Lett. 120, 050405 (2018)].
To investigate this theoretical prediction, we employ a recently developed
variational approach that systematically includes multi-body correlations
between the impurity and the finite-temperature medium, thus allowing us to go
beyond previous finite-temperature methods. Crucially, we find that the number
of quasiparticle branches is simply set by the number of hole excitations of
the thermal cloud, such that including up to one hole yields one splitting, two
holes yield two splittings, and so on. Moreover, this effect is independent of
the impurity mass. We thus expect that the exact ground-state quasiparticle
will evolve into a single broad peak for temperatures $T>0$, with a broadening
that scales as $T^{3/4}$ at low temperatures and sufficiently weak boson-boson
interactions. In the zero-temperature limit, we show that our calculated
ground-state polaron energy is in excellent agreement with recent quantum Monte
Carlo results and with experiments.
|
The cross-sections for the production of single charged and neutral
intermediate vector bosons were measured using integrated luminosities of 52
pb^{-1} and 154 pb^{-1} collected by the DELPHI experiment at centre-of-mass
energies of 182.6 GeV and 188.6 GeV, respectively. The cross-sections for the
reactions were determined in limited kinematic regions. The results found are
in agreement with the Standard Model predictions for these channels.
|
We discuss questions pertaining to the definition of `momentum', `momentum
space', `phase space', and `Wigner distributions' for finite dimensional
quantum systems. For such systems, where traditional concepts of `momenta'
established for continuum situations offer little help, we propose a physically
reasonable and mathematically tangible definition and use it for the purpose of
setting up Wigner distributions in a purely algebraic manner. It is found that
the point of view adopted here is limited to odd dimensional systems only. The
mathematical reasons which force this situation are examined in detail.
|
We introduce a novel compositional description of Feynman diagrams, with
well-defined categorical semantics as morphisms in a dagger-compact category.
Our chosen setting is suitable for infinite-dimensional diagrammatic reasoning,
generalising the ZX calculus and other algebraic gadgets familiar to the
categorical quantum theory community.
The Feynman diagrams we define look very similar to their traditional
counterparts, but are more general: instead of depicting scattering amplitudes,
they embody the linear maps from which the amplitudes themselves are computed,
for any given initial and final particle states. This shift in perspective is
reflected in a formal transition from the syntactic, graph-theoretic
compositionality of traditional Feynman diagrams to a semantic,
categorical-diagrammatic compositionality.
Because we work in a concrete categorical setting -- powered by non-standard
analysis -- we are able to take direct advantage of complex additive structure
in our description. This makes it possible to derive a particularly compelling
characterisation for the sequential composition of categorical Feynman
diagrams, which automatically results in the superposition of all possible
graph-theoretic combinations of the individual diagrams themselves.
|
We simulate the response of acoustic seismic waves in horizontally layered
media using a deep neural network. In contrast to traditional finite-difference
modelling techniques, our network is able to directly approximate the recorded
seismic response at multiple receiver locations in a single inference step,
without needing to iteratively model the seismic wavefield through time. This
results in an order of magnitude reduction in simulation time from the order of
1 s for FD modelling to the order of 0.1 s using our approach. Such a speed
improvement could lead to real-time seismic simulation applications and benefit
seismic inversion algorithms based on forward modelling, such as full waveform
inversion. Our proof-of-concept deep neural network is trained using 50,000
synthetic examples of seismic waves propagating through different 2D
horizontally layered velocity models. We discuss how our approach could be
extended to arbitrary velocity models. Our deep neural network design is
inspired by the WaveNet architecture used for speech synthesis. We also
investigate using deep neural networks for simulating the full seismic
wavefield and for carrying out seismic inversion directly.
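To make the architectural idea concrete, here is a minimal sketch in plain NumPy (illustrative only, with made-up layer sizes, not the trained network described above) of the dilated causal 1-D convolutions that WaveNet-style models stack to reach a large receptive field in a single forward pass:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution of signal x (shape (T,)) with kernel w
    (shape (k,)), where taps are spaced `dilation` samples apart."""
    k, pad = len(w), dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so output stays causal
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

rng = np.random.default_rng(0)
signal = rng.normal(size=256)          # stand-in for a layered velocity model
for d in (1, 2, 4, 8, 16):             # exponentially growing dilations
    kernel = 0.5 * rng.normal(size=2)
    signal = np.tanh(dilated_causal_conv1d(signal, kernel, d))
# `signal` now plays the role of a trace produced in one inference pass
```

The receptive field grows with the sum of the dilations, which is what lets a single forward pass replace iterative time-stepping of the wavefield.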
|
We systematically examine uncertainties from fitting rare earth single-ion
crystal electric field (CEF) Hamiltonians to inelastic neutron scattering data.
Using pyrochlore and delafossite structures as test cases, we find that
uncertainty in CEF parameters can be large despite visually excellent fits.
These results show Yb$^{3+}$ compounds have particularly large $g$-tensor
uncertainty because of the small number of available peaks. In such cases,
additional
constraints are necessary for meaningful fits.
|
A perturbation-iteration method is developed for the computation of
Hermite-Gaussian-like solitons with arbitrary peak numbers in nonlocal
nonlinear media. This method is based on the perturbed model of the
Schr\"{o}dinger equation for the harmonic oscillator, in which the minimum
perturbation is obtained by iteration. The method takes a few tens of
iteration loops to achieve sufficiently high accuracy, and the initial
condition is fixed to the Hermite-Gaussian function. The method we developed
might also be extended to the numerical integration of Schr\"{o}dinger
equations with any type of potential.
|
The use of nonlinear lattices with large betatron tune spreads can increase
instability and space charge thresholds due to improved Landau damping.
Unfortunately, the majority of nonlinear accelerator lattices turn out to be
nonintegrable, producing chaotic motion and a complex network of stable and
unstable resonances. Recent advances in finding the integrable nonlinear
accelerator lattices have led to a proposal to construct at Fermilab a test
accelerator with strong nonlinear focusing which avoids resonances and chaotic
particle motion. This presentation will outline the main challenges,
theoretical design solutions and construction status of the Integrable Optics
Test Accelerator underway at Fermilab.
|
We study the averaging of a diffusion process living in a simplex $K$ of
$\mathbb R^n$, $n\ge 1$. We assume that its infinitesimal generator can be
decomposed as a sum of two generators corresponding to two distinct timescales
and that the one corresponding to the fastest timescale is pure noise with a
diffusion coefficient vanishing exactly on the vertices of $K$. We show that
this diffusion process averages to a pure jump Markov process living on the
vertices of $K$ for the Meyer-Zheng topology. The role of the geometric
assumptions made on $K$ is also discussed.
|
We re-derive the expression for the heat current for a classical system
subject to periodic boundary conditions and show that it can be written as a
sum of two terms. The first term is a time derivative of the first moment of
the system energy density while the second term is expressed through the energy
transfer rate through the periodic boundary. We show that in solids the second
term alone leads to the same thermal conductivity as the full expression for
the heat current when used in the Green-Kubo approach. More generally, energy
passing through any surface formed by translation of the original periodic
boundary can be used to calculate thermal conductivity. These statements are
verified for two systems: crystalline argon and crystals of argon and krypton
forming an interface.
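For reference, the Green-Kubo route referred to above obtains the thermal conductivity from the equilibrium autocorrelation of the heat current (standard form; the paper's point concerns which expression for $\mathbf{J}$ may be inserted here):

$$\kappa_{\alpha\beta} = \frac{1}{V k_B T^2}\int_0^{\infty}\left\langle J_{\alpha}(t)\,J_{\beta}(0)\right\rangle\,dt ,$$

so two currents that differ by the total time derivative of a bounded quantity yield the same $\kappa$, which is why the boundary-transfer term alone suffices.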
|
Gravitational-wave observations of double compact object (DCO) mergers are
providing new insights into the physics of massive stars and the evolution of
binary systems. Making the most of expected near-future observations for
understanding stellar physics will rely on comparisons with binary population
synthesis models. However, the vast majority of simulated binaries never
produce DCOs, which makes calculating such populations computationally
inefficient. We present an importance sampling algorithm, STROOPWAFEL, that
improves the computational efficiency of population studies of rare events, by
focusing the simulation around regions of the initial parameter space found to
produce outputs of interest. We implement the algorithm in the binary
population synthesis code COMPAS, and compare the efficiency of our
implementation to the standard method of Monte Carlo sampling from the birth
probability distributions. STROOPWAFEL finds $\sim$25-200 times more DCO
mergers than the standard sampling method with the same simulation size, and so
speeds up simulations by up to two orders of magnitude. Finding more DCO
mergers automatically maps the parameter space with far higher resolution than
when using the traditional sampling. This increase in efficiency also leads to
a decrease of a factor $\sim$3-10 in statistical sampling uncertainty for the
predictions from the simulations. This is particularly notable for the
distribution functions of observable quantities such as the black hole and
neutron star chirp mass distribution, including in the tails of the
distribution functions where predictions using standard sampling can be
dominated by sampling noise.
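The core of the method is adaptive importance sampling with reweighting by the ratio of the birth prior to the instrumental distribution. A minimal one-dimensional toy sketch (illustrative only, not the COMPAS/STROOPWAFEL implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prior = stats.uniform(0.0, 1.0)                 # stand-in birth distribution
is_hit = lambda x: (0.42 < x) & (x < 0.43)      # rare "DCO-forming" region

# Phase 1: explore by sampling the birth prior directly.
explore = prior.rvs(size=2000, random_state=rng)
hits = explore[is_hit(explore)]

# Phase 2: refine with an instrumental distribution centred on the hits.
instr = stats.norm(hits.mean(), 5.0 * hits.std() + 1e-3)
refine = instr.rvs(size=8000, random_state=rng)

# Importance weights keep the estimate unbiased under the prior.
w = prior.pdf(refine) / instr.pdf(refine)
rate = np.mean(w * is_hit(refine))              # estimated rare-event rate
print(f"estimated rate {rate:.5f} vs true 0.01")
```

The full algorithm additionally combines the exploration and refinement phases and uses an adaptive Gaussian mixture around the hits in the multi-dimensional initial parameter space.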
|
In this paper we present the realization of further steps towards the
measurement of the magnetic birefringence of the vacuum using pulsed fields.
After describing our experiment, we report the calibration of our apparatus
using nitrogen gas and we discuss the precision of our measurement giving a
detailed error budget. Our best present vacuum upper limit is $\Delta n <
5.0\times 10^{-20}$ T$^{-2}$ per 4 ms acquisition time. We finally discuss the
improvements necessary
to reach our final goal.
|
The multi-messenger joint observations of GW170817 and GRB170817A shed new
light on the study of short-duration gamma-ray bursts (SGRBs). Not only did it
substantiate the assumption that SGRBs originate from binary neutron star (BNS)
mergers, but it also confirmed that the jet generated by this type of merger
must be structured, so that the observed energy of an SGRB depends on the
viewing angle of the observer. However, the precise structure of the jet is
still
subject to debate. Moreover, whether a single unified jet model can be applied
to all SGRBs is not known. Another uncertainty is the delay timescale of BNS
mergers with respect to star formation history of the Universe. In this paper,
we conduct a global test of both delay and jet models of BNS mergers across a
wide parameter space with simulated SGRBs. We compare the simulated peak flux,
redshift and luminosity distributions with the observed ones and test the
goodness-of-fit for a set of models and parameter combinations. Our simulations
suggest that GW170817/GRB 170817A and all SGRBs can be understood within the
framework of a universal structured jet viewed at different viewing angles.
Furthermore, models invoking a jet-plus-cocoon structure with a lognormal delay
timescale are most favored. Some other combinations (e.g. a Gaussian delay with
a power-law jet model) are also acceptable. However, the Gaussian delay with
Gaussian jet model and the entire set of power-law delay models are disfavored.
|
We present the first multi-task learning model -- named PhoNLP -- for joint
Vietnamese part-of-speech (POS) tagging, named entity recognition (NER) and
dependency parsing. Experiments on Vietnamese benchmark datasets show that
PhoNLP produces state-of-the-art results, outperforming a single-task learning
approach that fine-tunes the pre-trained Vietnamese language model PhoBERT
(Nguyen and Nguyen, 2020) for each task independently. We publicly release
PhoNLP as an open-source toolkit under the Apache License 2.0. Although we
specify PhoNLP for Vietnamese, our PhoNLP training and evaluation command
scripts in fact can directly work for other languages that have a pre-trained
BERT-based language model and gold annotated corpora available for the three
tasks of POS tagging, NER and dependency parsing. We hope that PhoNLP can serve
as a strong baseline and useful toolkit for future NLP research and
applications to not only Vietnamese but also other languages. Our PhoNLP is
available at: https://github.com/VinAIResearch/PhoNLP
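For illustration, the released toolkit exposes a compact Python interface along the following lines (a hedged sketch based on the repository's documented usage; exact function names and arguments may differ between versions):

```python
# Hedged sketch of PhoNLP toolkit usage (per the repository README;
# names/arguments may differ across releases).
import phonlp

phonlp.download(save_dir="./pretrained_phonlp")     # fetch the joint model
model = phonlp.load(save_dir="./pretrained_phonlp")

# One call jointly produces POS tags, NER labels and a dependency parse
# for a word-segmented Vietnamese sentence.
model.print_out(model.annotate(text="Tôi là sinh_viên ."))
```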
|
An abundance analysis for 20 elements from Na to Eu is reported for 34 K
giants from the Hyades supercluster and for 22 K giants from the Sirius
supercluster. Observed giants were identified as highly probable members of
their respective superclusters by Famaey et al. (2005, A&A, 430, 165). Three
giants each from the Hyades and Praesepe open clusters were similarly observed
and analysed. Each supercluster shows a range in metallicity: $-0.20 \leq$
[Fe/H] $\leq +0.25$ for the Hyades supercluster and $-0.22 \leq $ [Fe/H] $\leq
+0.15$ for the Sirius supercluster with the metal-rich tail of the metallicity
distribution of the Hyades supercluster extending beyond that of the Sirius
supercluster and spanning the metallicity of the Hyades and Praesepe cluster
giants. Relative elemental abundances [El/Fe] across the supercluster giants
are representative of the Galactic thin disc as determined from giants in open
clusters analysed in a similar way to our approach. Judged by metallicity and
age, very few and likely none of the giants in these superclusters originated
in an open cluster: the pairings include the Hyades supercluster with the
Hyades - Praesepe open clusters and the Sirius supercluster with the U Ma open
cluster. Literature on main sequence stars attributed to the two superclusters
and the possible relation to the associated open cluster is reviewed. It is
suggested that the Hyades supercluster's main sequence population contains few
stars from the two associated open clusters. As suggested by some previous
investigations, the Sirius supercluster, when tightly defined kinematically,
appears to be well populated by stars shed by the U Ma open cluster.
|
We present an elastic constitutive model of gravity where we identify
physical space with the mid-hypersurface of an elastic hyperplate called the
"cosmic fabric" and spacetime with the fabric's world volume. Using a
Lagrangian formulation, we show that the fabric's behavior as derived from
Hooke's Law is analogous to that of spacetime per the Field Equations of
General Relativity. The study is conducted in the limit of small strains, or
analogously, in the limit of weak and nearly static gravitational fields. The
fabric's Lagrangian outside of inclusions is shown to have the same form as the
Einstein-Hilbert Lagrangian for free space. Properties of the fabric such as
strain, stress, vibrations, and elastic moduli are related to properties of
gravity and space, such as the gravitational potential, gravitational
acceleration, gravitational waves, and the energy density of free space. By
introducing a mechanical analogy of General Relativity, we enable the
application of Solid Mechanics tools to address problems in Cosmology.
|
The electronic structure of a prototype Kondo system, a cobalt impurity in a
copper host, is calculated with correlation effects on the Co atom taken into
account accurately. Using the recently developed continuous-time QMC technique,
it is possible to describe the Kondo resonance with a complete four-index
Coulomb interaction matrix. This opens a way to completely first-principles
calculations of the Kondo temperature. We demonstrate that the standard
practice of using a truncated Hubbard Hamiltonian to treat the Kondo physics
can be quantitatively inadequate.
|
Heating induced by the noise postulated in wave function collapse models
leads to a lower bound to the temperature of solid objects. For the noise
parameter values $\lambda ={\rm coupling~strength}\sim 10^{-8} {\rm s}^{-1}$
and $r_C ={\rm correlation~length} \sim 10^{-5} {\rm cm}$, which were suggested
\cite{adler1} to make latent image formation an indicator of wave function
collapse and which are consistent with the recent experiment of Vinante et al.
\cite{vin}, the effect may be observable. For metals, where the heat
conductivity is proportional to the temperature at low temperatures, the lower
bound (specifically for RRR=30 copper) is $\sim 5\times 10^{-11} (L/r_C) $K,
with $L$ the size of the object. For the thermal insulator Torlon 4203, the
comparable lower bound is $\sim 3 \times 10^{-6} (L/r_C)^{0.63}$ K. We first
give a rough estimate for a cubical metal solid, and then give an exact
solution of the heat transfer problem for a sphere.
|
Recent developments in the field of high precision calculations in the
Standard Model are illustrated with particular emphasis on the evidence for
radiative corrections and on the estimate of the theoretical error in
perturbative calculations.
|
Jones introduced unitary representations for the Thompson groups $F$ and $T$
from a given subfactor planar algebra. Some interesting subgroups arise as the
stabilizer of a certain vector, in particular the Jones subgroups $\vec{F}$ and
$\vec{T}$. Golan and Sapir studied $\vec{F}$ and identified it as a copy of the
Thompson group $F_3$. In this paper we completely describe $\vec{T}$ and show
that $\vec{T}$ coincides with its commensurator in $T$, implying that the
corresponding unitary representation is irreducible. We also generalize the
notion of the Stallings 2-core for diagram groups to $T$, showing that
$\vec{T}$ and $T_3$ are not isomorphic, but as annular diagram groups they have
very similar presentations.
|
We measured the radial velocity curve of the companion of the neutron star
X-ray transient XTE J2123-058. Its semi-amplitude (K_2) of 298.5 +/- 6.9 km/s
is the highest value that has been measured for any neutron star LMXB. The high
value for K_2 is, in part, due to the high binary inclination of the system but
may also indicate a high neutron star mass. The mass function (f_2) of 0.684
+/- 0.047 solar masses, along with our constraints on the companion's spectral
type (K5V-K9V) and previous constraints on the inclination, gives a likely
range of neutron star masses from 1.2 to 1.8 solar masses. We also derive a
source distance of 8.5 +/- 2.5 kpc, indicating that XTE J2123-058 is unusually
far, 5.0 +/- 1.5 kpc, from the Galactic plane. Our measurement of the systemic
radial velocity is -94.5 +/- 5.5 km/s, which is significantly different from
what would be observed if this object corotates with the disk of the Galaxy.
|
The Bose-Einstein correlation (BEC) in the forward region ($2.0<\eta<4.8$)
measured at 7 TeV in the Large Hadron Collider (LHC) by the LHCb collaboration
is analyzed using two conventional formulas of different types named CF$_{\rm
I}$ and CF$_{\rm II}$. The first formula is well known and contains the degree
of coherence ($\lambda$) and the exchange function $E_{\rm BE}^2$ from the BE
statistics. The second formula is an extended formula (CF$_{\rm II}$) that
contains the second degree of coherence $\lambda_2$ and the second exchange
function $E_{\rm BE_2}^2$ in addition to CF$_{\rm I}$. To examine the physical
meaning of the parameters estimated by CF$_{\rm II}$, we analyze the LHCb BEC
data by using a stochastic approach of the three-negative binomial distribution
and the three-generalized Glauber-Lachs formula. Our results reveal that the
BEC at 7 TeV, divided into three activity intervals defined by the multiplicity
$n$ ([8, 18], [19, 35], and [36, 96]), can be well explained by CF$_{\rm II}$.
|
We present the first-ever discovery of a short-period and unusually
helium-deficient dwarf nova KSP-OT-201701a by the Korea Microlensing Telescope
Network Supernova Program. The source shows three superoutbursts, each led by a
precursor outburst, and several normal outbursts in BVI during the span of ~2.6
years with supercycle and normal cycle lengths of about 360 and 76 days,
respectively. Spectroscopic observations near the end of a superoutburst reveal
the presence of strong double-peaked HI emission lines together with weak HeI
emission lines. The helium-to-hydrogen intensity ratios measured by
HeI{\lambda}5876 and H{\alpha} lines are 0.10 {\pm} 0.01 at a quiescent phase
and 0.26 {\pm} 0.04 at an outburst phase, similar to the ratios found in
long-period dwarf novae while significantly lower than those in helium
cataclysmic variables (He CVs). Its orbital period of 51.91 {\pm} 2.50 minutes,
which is estimated based on time-series spectroscopy, is slightly shorter than
the
superhump period of 56.52 {\pm} 0.19 minutes, as expected from the
gravitational interaction between the eccentric disk and the secondary star. We
measure its mass ratio to be 0.37^{+0.32}_{-0.21} using the superhump period
excess of 0.089 {\pm} 0.053. The short orbital period, which is under the
period minimum, the unusual helium deficiency, and the large mass ratio suggest
that KSP-OT-201701a is a transition object evolving to a He CV from a
long-period dwarf nova with an evolved secondary star.
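For concreteness, the quoted superhump period excess follows directly from the two measured periods,

$$\epsilon = \frac{P_{\rm sh}-P_{\rm orb}}{P_{\rm orb}} = \frac{56.52-51.91}{51.91}\approx 0.089 ,$$

and the mass ratio $q = 0.37^{+0.32}_{-0.21}$ then follows from an empirical $\epsilon(q)$ calibration for superhumping systems.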
|
We use channel probing to determine the best transponder configurations for
spectral services in a long-haul production network. An estimation accuracy
better than +/- 0.7 dB in GSNR margin is obtained for lightpaths of up to
5738 km.
|
In this paper, we give results that partially prove a conjecture discussed in
our previous work (arXiv:1307.4991). More precisely, we prove that as $n\to
\infty$, the zeros of the polynomial
$${}_{2}F_{1}\!\left[\begin{matrix} -n,\ \alpha n+1 \\ \alpha n+2
\end{matrix}\,;\, z \right]$$
cluster on a certain curve, defined as part of a level curve of an explicit
harmonic function. This generalizes work by Boggs, Driver, Duren et al. to a
complex parameter $\alpha$.
|
We have measured the microwave conductance of mechanically exfoliated
graphene at frequencies up to 8.5 GHz. The conductance at 4.2 K exhibits
quantum oscillations, and is independent of the frequency.
|
The steepening (break) of the power-law fall-off observed in the optical
emission of some GRB afterglows at epoch ~1 day is often attributed to a
collimated outflow (jet), undergoing lateral spreading. Wider opening GRB
ejecta with a non-uniform energy angular distribution (structured outflows) or
the cessation of energy injection in the afterglow can also yield light-curve
breaks.
We determine the optical and X-ray light-curve decay indices and spectral
energy distribution slopes for 10 GRB afterglows with optical light-curve
breaks (980519, 990123, 990510, 991216, 000301, 000926, 010222, 011211, 020813,
030226), and use these properties to test the above models for light-curve
steepening. It is found that the optical breaks of six of these afterglows can
be accommodated by either energy injection or by structured outflows. In the
refreshed shock model, a wind-like stratification of the circumburst medium (as
expected for massive stars as GRB progenitors) is slightly favoured. A
spreading jet interacting with a homogeneous circumburst medium is required by
the afterglows 990510, 000301, 011211, and 030226. The optical pre- and
post-break decays of these four afterglows are incompatible with a wind-like
medium.
The current sample of 10 afterglows with breaks suggests that the
distribution of the break magnitude (defined as the increase of the afterglow
decay exponent) is bimodal, with a gap at 1. If true, this bimodality favours
the structured outflow model, while the gap location indicates a homogeneous
circumburst environment.
|
Compressive sensing (CS) is a mathematically elegant tool for reducing the
sampling rate, potentially bringing context-awareness to a wider range of
devices. Nevertheless, practical issues with the sampling and reconstruction
algorithms prevent further proliferation of CS in real world domains,
especially among heterogeneous ubiquitous devices. Deep learning (DL) naturally
complements CS for adapting the sampling matrix, reconstructing the signal, and
learning from the compressed samples. While the CS-DL integration has received
substantial research interest recently, it has not yet been thoroughly
surveyed, nor has light been shed on the practical issues of bringing CS-DL to
real-world implementations in the ubicomp domain. In this paper we identify the
main ways in which CS and DL can interplay, extract key ideas for making CS-DL
efficient, identify major trends in the CS-DL research space, and derive
guidelines for the future evolution of CS-DL within the ubicomp domain.
|
In this study we coded, for individual student participation on each
question, the video of twenty-seven groups interacting in the group phase of a
variety of two-phase exams. We found that maximum group participation occurred
on questions where at least one person in the group had answered that question
incorrectly during the solo phase of the exam. We also observed that students
who were correct on a question during the solo phase have higher participation
than those who were incorrect. Finally, we observed that, from a
participation standpoint, the strongest (weakest) students seem to benefit the
most (least) from heterogeneous groups, while homogeneous groups do not seem to
favor students of any particular performance level.
|
We present a reduced-order model (ROM) methodology for inverse scattering
problems in which the reduced-order models are data-driven, i.e. they are
constructed directly from data gathered by sensors. Moreover, the entries of
the ROM contain localised information about the coefficients of the wave
equation.
We solve the inverse problem by embedding the ROM in physical space. Such an
approach is also followed in the theory of ``optimal grids,'' where the ROMs
are interpreted as two-point finite-difference discretisations of an underlying
set of equations of a first-order continuous system on this special grid. Here,
we extend this line of work to wave equations and introduce a new embedding
technique, which we call Krein embedding, since it is inspired by Krein's
seminal work on vibrations of a string. In this embedding approach, an adaptive
grid and a set of medium parameters can be directly extracted from a ROM and we
show that several limitations of optimal grid embeddings can be avoided.
Furthermore, we show how Krein embedding is connected to classical optimal grid
embedding and that convergence results for optimal grids can be extended to
this novel embedding approach. Finally, we also briefly discuss Krein embedding
for open domains, that is, semi-infinite domains that extend to infinity in one
direction.
|
In strong stellar and solar flares, flare loops typically appear during the
decay phase, providing an additional contribution to the flare emission and
possibly obscuring it. Super-flares, common in active, cool
stars, persist mostly from minutes to several hours and alter the star's
luminosity across the electromagnetic spectrum. Recent observations of a young
main-sequence star reveal a distinctive cool loop arcade forming above the
flaring region during a 27-hour superflare event, obscuring the region multiple
times. Analysis of these occultations enables the estimation of the arcade's
geometry and physical properties. The arcade's size expanded from 0.213 to
0.391 R$_*$ at a speed of approximately 3.5$\,$km/s. The covering structure
exhibited a thickness below 12$\,$200$\,$km, with electron densities ranging
from 10$^{13}$ to 10$^{14}\,$cm$^{-3}$ and temperatures below 7$\,$600$\,$K,
6$\,$400$\,$K, and 5$\,$077$\,$K for successive occultations. Additionally, the
flare's maximum emission temperature has to exceed 12$\,$000$\,$K for the
occultations to appear. Comparing these parameters with known values from other
stars and the Sun suggests the structure's nature as an arcade of cool flare
loops. For the first time, we present the physical parameters and the
reconstructed geometry of the cool flare loops that obscure the flaring region
during the gradual phase of a long-duration flare on a star other than the Sun.
|
This presentation reviews an approach to nuclear many-body systems based on
the spontaneously broken chiral symmetry of low-energy QCD. In the low-energy
limit, for energies and momenta small compared to a characteristic symmetry
breaking scale of order 1 GeV, QCD is realized as an effective field theory of
Goldstone bosons (pions) coupled to heavy fermionic sources (nucleons). Nuclear
forces at long and intermediate distance scales result from a systematic
hierarchy of one- and two-pion exchange processes in combination with Pauli
blocking effects in the nuclear medium. Short distance dynamics, not resolved
at the wavelengths corresponding to typical nuclear Fermi momenta, are
introduced as contact interactions between nucleons. Apart from a set of
low-energy constants associated with these contact terms, the parameters of
this theory are entirely determined by pion properties and low-energy
pion-nucleon scattering observables. This framework (in-medium chiral
perturbation theory) can provide a realistic description of both
isospin-symmetric nuclear matter and neutron matter. The importance of
three-body forces is emphasized, and the role of explicit Delta(1232)-isobar
degrees of freedom is investigated in detail. Nuclear chiral thermodynamics is
developed and a calculation of the nuclear phase diagram is performed. This
includes a successful description of the first-order phase transition from a
nuclear Fermi liquid to an interacting Fermi gas and the coexistence of these
phases below a critical temperature T_c. Density functional methods for finite
nuclei based on this approach are also discussed. Effective interactions, their
density dependence and connections to Landau Fermi liquid theory are outlined.
Finally, the density and temperature dependence of the chiral (quark)
condensate is investigated.
|
We report experiments in which two photoluminescent samples of Strontium
Aluminate pigments and Zinc Sulfide pebbles were quantum entangled via
photoexcitation with entangled photons from a mercury lamp and a CRT screen.
After photo-excitation, remote triggering of one of the sample with infrared
(IR) photons yielded stimulated light variation from the quantum entangled
other sample located 4 m away. The initial half-life of Strontium Aluminate is
approximately one minute. However, molecules with a longer half-life may be
found in the future. These experiments demonstrate that useful quantum
information could be transferred through quantum channels via de-excitation of
one sample of photoluminescent material quantum entangled with another.
|
We present an algorithm to compute all factorizations into linear factors of
univariate polynomials over the split quaternions, provided such a
factorization exists. Failure of the algorithm is equivalent to
non-factorizability, for which we also present geometric interpretations in
terms of rulings on the quadric of non-invertible split quaternions. However,
suitable real polynomial multiples of split quaternion polynomials can still be
factorized and we describe how to find these real polynomials. Split quaternion
polynomials describe rational motions in the hyperbolic plane. Factorization
with linear factors corresponds to the decomposition of the rational motion
into hyperbolic rotations. Since multiplication with a real polynomial does not
change the motion, this decomposition is always possible. Some of our ideas can
be transferred to the factorization theory of motion polynomials. These are
polynomials over the dual quaternions with real norm polynomial and they
describe rational motions in Euclidean kinematics. We transfer techniques
developed for split quaternions to compute new factorizations of certain dual
quaternion polynomials.
|
The amount of audio-visual information has increased dramatically with the
advent of high-speed Internet. Furthermore, technological advances in
information technology in recent years have simplified the use of video data
in various fields by the general public, making it possible to store large
collections of video documents in computer systems. To enable efficient use of
these collections, it is necessary to develop tools that facilitate access to
these documents and their handling. In this paper we propose a method for
indexing and retrieving video sequences in a large video database, based on a
weighting technique for calculating the degree of membership of a concept in a
video, together with a structuring of the audio-visual data (context / concept
/ video). Finally, we built a search system that offers, in addition to the
usual commands, different types of access depending on the user's disability:
the application provides access to commands through voice or gestures. Our
experimental contribution consists of the implementation of a prototype, in
which we integrated the proposed techniques to evaluate their contributions in
terms of effectiveness and precision.
|
Spectra that cover wavelengths from 0.6 to 1.1um are used to examine the
behavior of emission and absorption features in a contiguous 22 x 300 arcsec
region centered on the nearby dwarf galaxy NGC 55. Based on the relative
strengths of various emission features measured over spatial scales of many
tens of parsecs, it is concluded that the ionization states and sulphur
abundances in most of the star-forming regions near the center of NGC 55 are
similar. A large star-forming region is identified in the northwest part of
the disk at a projected distance of ~1 kpc from the galaxy center that has
distinct ionization properties. Fossil star-forming regions are also identified
using the depth of the near-infrared Ca triplet. One such area is identified
near the intersection of the major and minor axes, and it is suggested that
this area is a proto-nucleus. The spectra of bright unresolved sources that are
blended stellar asterisms, compact HII regions, and star clusters are also
discussed. The spectra of some of the HII regions contain Ca triplet absorption
lines, signalling a concentration of stars in the resolution element that span
many Myr. Six of the unresolved sources have spectroscopic characteristics that
are indicative of C stars embedded in intermediate age clusters. The peculiar
properties of NGC 55 have been well documented in the literature, and it is
argued that these may indicate that NGC 55 is transforming into a dwarf
lenticular galaxy.
|
The results of Higgs boson searches in the context of the Minimal
Supersymmetric extension of the Standard Model (MSSM) in proton-proton
collisions with the ATLAS detector based on collected data corresponding to up
to 36 pb^{-1} are presented. Searches in the channels H+->cs, H+->taunu, and
H->tautau are discussed. All observations agree with the expectation of the
Standard Model (SM)-only hypothesis and thus exclusion limits are derived.
|
We address a long-standing problem of describing the thermodynamics of
rotating Taub--NUT solutions. The obtained first law is of full cohomogeneity
and allows for asymmetric distributions of Misner strings as well as their
potential variable strengths---encoded in the gravitational Misner charges.
Notably, the angular momentum is no longer given by the Noether charge over the
sphere at infinity and picks up non-trivial contributions from Misner strings.
|
In a recent analysis of the world data on polarized DIS, Blumlein and
Bottcher conclude that there is no evidence for higher twist contributions, in
contrast to the claim of the LSS group, who find evidence for significant
higher twist effects. We explain the origin of the apparent contradiction
between these results.
|
Despite the unquestionable empirical success of quantum theory, witnessed by
the recent uprising of quantum technologies, the debate on how to reconcile the
theory with the macroscopic classical world is still open. Spontaneous collapse
models are one of the few testable solutions so far proposed. In particular,
the continuous spontaneous localization (CSL) model has become the subject of
intense experimental research. Experiments looking for the universal force
noise predicted by CSL in ultrasensitive mechanical resonators have recently
set the strongest unambiguous bounds on CSL; further improving these
experiments by direct reduction of mechanical noise is technically challenging.
Here, we implement a recently proposed alternative strategy, that aims at
enhancing the CSL noise by exploiting a multilayer test mass attached on a high
quality factor microcantilever. The test mass is specifically designed to
enhance the effect of CSL noise at the characteristic length $r_c=10^{-7}$ m.
The measurements are in good agreement with pure thermal motion for
temperatures down to 100 mK. From the absence of excess noise we infer a new
bound on the collapse rate at the characteristic length $r_c=10^{-7}$ m, which
improves over previous mechanical experiments by more than one order of
magnitude. Our results explicitly challenge a well-motivated region of
the CSL parameter space proposed by Adler.
|
We describe the real quasi-exactly solvable spectral locus of the
PT-symmetric quartic using the Nevanlinna parametrization.
|
It has been hypothesized that $k$-SAT is hard to solve for randomly chosen
instances near the "critical threshold", where the clause-to-variable ratio is
$2^k \ln 2-\Theta(1)$. Feige's hypothesis for $k$-SAT says that for all
sufficiently large clause-to-variable ratios, random $k$-SAT cannot be refuted
in polynomial time. It has also been hypothesized that the worst-case $k$-SAT
problem cannot be solved in $2^{n(1-\omega_k(1)/k)}$ time, as multiple known
algorithmic paradigms (backtracking, local search and the polynomial method)
only yield a $2^{n(1-1/O(k))}$-time algorithm. This hypothesis has been called
the "Super-Strong ETH", modeled after the ETH and the Strong ETH.
Our main result is a randomized algorithm which refutes the Super-Strong ETH
for the case of random $k$-SAT, for any clause-to-variable ratio. Given any
random $k$-SAT instance $F$ with $n$ variables and $m$ clauses, our algorithm
decides satisfiability for $F$ in $2^{n(1-\Omega(\log k)/k)}$ time, with high
probability. It turns out that a well-known algorithm from the literature on
SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlak, and Zane
(1998).
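For the reader's convenience, the PPZ procedure itself is short: repeatedly assign the variables in a uniformly random order, letting each variable be forced whenever it occurs in a current unit clause and flipping a fair coin otherwise. A minimal sketch (illustrative only; clauses are lists of signed integers, DIMACS-style):

```python
import random

def ppz_round(clauses, n):
    """One PPZ round: assign variables in random order; a variable is
    forced if it appears in a current unit clause, else set randomly."""
    order = list(range(1, n + 1))
    random.shuffle(order)
    live = [set(c) for c in clauses]       # clauses not yet satisfied
    assignment = {}
    for v in order:
        forced = None
        for c in live:                     # look for a unit clause on v
            if len(c) == 1 and abs(next(iter(c))) == v:
                forced = next(iter(c)) > 0
                break
        value = forced if forced is not None else random.random() < 0.5
        assignment[v] = value
        lit = v if value else -v
        live = [c - {-lit} for c in live if lit not in c]
        if any(not c for c in live):       # a clause was falsified
            return None
    return assignment

def ppz(clauses, n, rounds=100000):
    for _ in range(rounds):
        a = ppz_round(clauses, n)
        if a is not None:
            return a                       # satisfying assignment found
    return None                            # report unsatisfiable w.h.p.
```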
|
In a world of ever-increasing systems interdependence, effective
cybersecurity policy design seems to be one of the most critically understudied
elements of our national security strategy. Enterprise cyber technologies are
often implemented without much regard to the interactions that occur between
humans and the new technology. Furthermore, the interactions that occur between
individuals can often have an impact on the newly employed technology as well.
Without a rigorous, evidence-based approach to ground an employment strategy
and elucidate the emergent organizational needs that will come with the
fielding of new cyber capabilities, one is left to speculate on the impact that
novel technologies will have on the aggregate functioning of the enterprise. In
this paper, we will explore a scenario in which a hypothetical government
agency applies a complexity science perspective, supported by agent-based
modeling, to more fully understand the impacts of strategic policy decisions.
We present a model to explore the socio-technical dynamics of these systems,
discuss lessons using this platform, and suggest further research and
development.
|
This study investigates lightning at tall objects and evaluates the risk of
upward lightning (UL) over the eastern Alps and its surrounding areas. While
uncommon, UL poses a threat, especially to wind turbines, as the long-duration
current of UL can cause significant damage. Current risk assessment methods
overlook the impact of meteorological conditions, potentially underestimating
UL risks. Therefore, this study employs random forests, a machine learning
technique, to analyze the relationship between UL measured at Gaisberg Tower
(Austria) and $35$ larger-scale meteorological variables. Of these, the
larger-scale upward velocity, wind speed and direction at 10 meters and cloud
physics variables contribute most information. The random forests predict the
risk of UL across the study area at a 1 km$^2$ resolution. Strong near-surface
winds combined with upward deflection by elevated terrain increase UL risk. The
diurnal cycle of the UL risk as well as the high-risk areas shift seasonally.
They are concentrated north/northeast of the Alps in winter due to prevailing
northerly winds, and expand southward, impacting northern Italy in the
transitional and summer months. The model performs best in winter, with the
highest predicted UL risk coinciding with observed peaks in measured lightning
at tall objects. The highest concentration is north of the Alps, where most
wind turbines are located, leading to an increase in overall lightning
activity. Comprehensive meteorological information is essential for UL risk
assessment, as lightning densities are a poor indicator of lightning at tall
objects.
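A minimal sketch of the modelling step (illustrative only, with invented feature names and synthetic stand-in data; the study's 35 larger-scale predictors and tuning are not reproduced here), using scikit-learn:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: in the study, rows are cases at the tower and the ~35
# columns are larger-scale meteorological variables (names here are mine).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "upward_velocity": rng.normal(size=1000),
    "wind_speed_10m": rng.gamma(2.0, 2.0, size=1000),
    "wind_dir_10m": rng.uniform(0, 360, size=1000),
    "cloud_ice": rng.exponential(1.0, size=1000),
})
df["ul_observed"] = (df["upward_velocity"] + 0.1 * df["wind_speed_10m"]
                     + rng.normal(scale=0.5, size=1000)) > 1.5

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="ul_observed"), df["ul_observed"],
    test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
ul_risk = rf.predict_proba(X_te)[:, 1]   # per-case UL risk in [0, 1]
```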
|
In this paper, we propose enhancing monocular depth estimation by adding 3D
points as depth guidance. Unlike existing depth completion methods, our
approach performs well on extremely sparse and unevenly distributed point
clouds, which makes it agnostic to the source of the 3D points. We achieve this
by introducing a novel multi-scale 3D point fusion network that is both
lightweight and efficient. We demonstrate its versatility on two different
depth estimation problems where the 3D points have been acquired with
conventional structure-from-motion and LiDAR. In both cases, our network
performs on par with state-of-the-art depth completion methods and achieves
significantly higher accuracy when only a small number of points is used while
being more compact in terms of the number of parameters. We show that our
method outperforms some contemporary deep learning based multi-view stereo and
structure-from-motion methods both in accuracy and in compactness.
|
Region-based convolutional neural networks
(R-CNN)~\cite{fast_rcnn,faster_rcnn,mask_rcnn} have largely dominated object
detection. Operators defined on RoIs (Region of Interests) play an important
role in R-CNNs such as RoIPooling~\cite{fast_rcnn} and
RoIAlign~\cite{mask_rcnn}. They all only utilize information inside RoIs for
RoI prediction, even with their recent deformable
extensions~\cite{deformable_cnn}. Although surrounding context is well-known
for its importance in object detection, it has yet to be integrated in R-CNNs
in
a flexible and effective way. Inspired by the auto-context
work~\cite{auto_context} and the multi-class object layout
work~\cite{nms_context}, this paper presents a generic context-mining RoI
operator (i.e., \textit{RoICtxMining}) seamlessly integrated in R-CNNs, and the
resulting object detection system is termed \textbf{Auto-Context R-CNN} which
is trained end-to-end. The proposed RoICtxMining operator is a simple yet
effective two-layer extension of the RoIPooling or RoIAlign operator. Centered
at an object-RoI, it creates a $3\times 3$ layout to mine contextual
information adaptively in the $8$ surrounding context regions on-the-fly.
Within each of the $8$ context regions, a context-RoI is mined in terms of
discriminative power and its RoIPooling / RoIAlign features are concatenated
with the object-RoI for final prediction. \textit{The proposed Auto-Context
R-CNN is robust to occlusion and small objects, and shows promising robustness
to adversarial attacks without being adversarially trained.} In
experiments, it is evaluated using RoIPooling as the backbone and shows
competitive results on Pascal VOC, Microsoft COCO, and KITTI datasets
(including $6.9\%$ mAP improvements over the R-FCN~\cite{rfcn} method on COCO
\textit{test-dev} dataset and the first place on both KITTI pedestrian and
cyclist detection as of this submission).
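To make the $3\times 3$ layout concrete, here is a minimal sketch (illustrative only, not the authors' code) of how the $8$ surrounding context regions can be generated from an object-RoI; in the paper, a context-RoI is then mined adaptively within each cell:

```python
def context_layout(x1, y1, x2, y2):
    """Given an object-RoI box, return the 8 surrounding cells of a 3x3
    grid whose centre cell is the RoI itself (each cell RoI-sized)."""
    w, h = x2 - x1, y2 - y1
    xs = [x1 - w, x1, x2, x2 + w]          # column boundaries
    ys = [y1 - h, y1, y2, y2 + h]          # row boundaries
    cells = [(xs[i], ys[j], xs[i + 1], ys[j + 1])
             for j in range(3) for i in range(3)]
    del cells[4]                           # drop the centre (the object-RoI)
    return cells

# Example: the 8 context cells around a 10x20 RoI anchored at (100, 50).
print(context_layout(100, 50, 110, 70))
```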
|
Respiratory audio, such as coughing and breathing sounds, has predictive
power for a wide range of healthcare applications, yet is currently
under-explored. The main problem for those applications arises from the
difficulty in collecting large labeled task-specific data for model
development. Generalizable respiratory acoustic foundation models pretrained
with unlabeled data would offer appealing advantages and possibly unlock this
impasse. However, given the safety-critical nature of healthcare applications,
it is pivotal to also ensure openness and replicability for any proposed
foundation model solution. To this end, we introduce OPERA, an OPEn Respiratory
Acoustic foundation model pretraining and benchmarking system, as the first
approach answering this need. We curate large-scale respiratory audio datasets
(~136K samples, 440 hours), pretrain three pioneering foundation models, and
build a benchmark consisting of 19 downstream respiratory health tasks for
evaluation. Our pretrained models demonstrate superior performance (against
existing acoustic models pretrained with general audio on 16 out of 19 tasks)
and generalizability (to unseen datasets and new respiratory audio modalities).
This highlights the great promise of respiratory acoustic foundation models and
encourages more studies using OPERA as an open resource to accelerate research
on respiratory audio for health. The system is accessible from
https://github.com/evelyn0414/OPERA.
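A typical downstream use of such a foundation model is linear probing of frozen embeddings. A self-contained sketch (illustrative only, with a stand-in encoder; see the repository above for the actual pretrained models and interface):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def encode(clips):
    """Stand-in for a frozen OPERA encoder forward pass (hypothetical);
    returns one fixed-size embedding per audio clip."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(clips), 768))

clips = [np.zeros(16000) for _ in range(200)]   # stand-in 1 s audio clips
labels = np.arange(200) % 2                     # stand-in binary health task

X = encode(clips)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear-probe accuracy:", probe.score(X_te, y_te))
```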
|
Physiological solvent flows surround biological structures triggering therein
collective motions. Notable examples are virus/host-cell interactions and
solvent-mediated allosteric regulation. The present work describes a multiscale
approach joining Lattice Boltzmann fluid dynamics (for solvent flows) with
all-atom molecular dynamics (for proteins) to model functional interactions
between flows and molecules. We present, as an applicative scenario, the study
of the SARS-CoV-2 virus spike glycoprotein interacting with the surrounding
solvent, modeled as a mesoscopic fluid. The
equilibrium properties of the wild-type spike and of the Alpha variant in
implicit solvent are described by suitable observables. The mesoscopic solvent
description is critically compared to the all-atom solvent model, to quantify
the advantages and limitations of the mesoscopic fluid description.
|
The tidal torque theory (TTT) relates the origin and evolution of angular
momentum with the environment in which dark matter (DM) haloes form. The
deviations introduced by late non-linearities are commonly thought as noise in
the model. In this work, we analyze a cosmological simulation looking for
systematics on these deviations, finding that the classification of DM haloes
according to their angular momentum growth results in samples with different
internal alignment, spin parameter distribution and assembly history. Based on
this classification, we obtain that low mass haloes are embedded in denser
environments if they have acquired angular momentum below the TTT expectations
(L haloes), whereas at high masses enhanced clustering is typically associated
with higher angular momentum growths (W haloes). Additionally, we find that the
low mass signal has a weak dependence on the direction, whereas the high mass
signal is entirely due to the structure perpendicular to the angular momentum.
Finally, we study the anisotropy of the matter distribution around haloes as a
function of their mass. We find that the angular momentum direction of W (L)
haloes remains statistically perpendicular (parallel) to the surrounding
structure across the mass range
$11<\mathrm{log}(M/h^{-1}\mathrm{M}_{\odot})<14$, whereas haloes following TTT
show a "spin flip" mass consistent with previously reported values ($\sim 5
\times 10^{12}$ $h^{-1}\mathrm{M}_\odot$). Hence, whether the spin flip mass of
the deviated samples is strongly shifted or simply undefined, our results
indicate that it is remarkably connected to the haloes' angular momentum
growth.
|
We discuss the problem of the third black hole parameter, an electric charge.
While the mass and the spin of black holes are frequently considered in the
majority of publications, the charge is often neglected and implicitly set
identically to zero. However, both classical and relativistic processes can
lead to a small non-zero charge of black holes. When dealing with neutral
particles and photons, zero charge is a good approximation. On the other hand,
even a small charge can significantly influence the motion of charged
particles, in particular cosmic rays, in the vicinity of black holes.
Therefore, we stress that more attention should be paid to the problem of a
black-hole charge and hence, it should not be neglected a priori, as it is done
in most astrophysical studies nowadays. The paper looks at the problem of the
black-hole charge mainly from the astrophysical point of view, which is
complemented by a few historical as well as philosophical notes when relevant.
In particular, we show that a cosmic ray or in general elementary charged
particles passing a non-neutral black hole can experience an electromagnetic
force as much as sixteen times the gravitational force for the mass of the
Galactic centre black hole and its charge being seventeen orders of magnitude
less than the extremal value (calculated for a proton). Furthermore, a
Kerr-Newman rotating black hole with the maximum likely charge of 1 Coulomb per
solar mass can have the position of its innermost stable circular orbit (ISCO)
moved by both rotation and charge in ways that can enhance or partly cancel
each other, putting the ISCO not far from the gravitational radius or out at
more than 6 gravitational radii. An interpretation of X-ray radiation from near
the ISCO of a black hole in X-ray binaries is then no longer unique.
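As a back-of-the-envelope check of the force-ratio claim (a sketch, using the extremal Kerr-Newman charge $Q_{\rm extr}=\sqrt{4\pi\epsilon_0 G}\,M$), for a proton of charge $q$ and mass $m_p$ the Coulomb-to-Newton force ratio at any radius is

$$\frac{F_{\rm EM}}{F_{\rm grav}} = \frac{qQ/(4\pi\epsilon_0 r^2)}{GMm_p/r^2} = \frac{q}{m_p\sqrt{4\pi\epsilon_0 G}}\,\frac{Q}{Q_{\rm extr}} \approx 1.1\times 10^{18}\,\frac{Q}{Q_{\rm extr}} ,$$

so a charge seventeen orders of magnitude below extremal still leaves an electromagnetic force on a proton roughly an order of magnitude above the gravitational one, consistent with the factor of sixteen quoted above.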
|
We introduce an approximation strategy for the discounted moments of a
stochastic process that can, for a large class of problems, approximate the
true moments. These moments appear in pricing formulas of financial products
such as bonds and credit derivatives. The approximation relies on high-order
power series expansion of the infinitesimal generator, and draws parallels with
the theory of polynomial processes. We demonstrate applications to bond pricing
and credit derivatives. In the special cases that allow for an analytical
solution, the approximation error decreases to around 10 to 100 times machine
precision for higher orders. When no analytical solution exists we tie out the
approximation with Monte Carlo simulations.
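Schematically (a sketch of the stated idea, with $\mathcal{G}$ the infinitesimal generator of $X$ and $r$ the discount rate), the discounted moment admits the operator power series truncated at order $K$:

$$\mathbb{E}_x\!\left[e^{-rt}f(X_t)\right] = \left(e^{t(\mathcal{G}-r)}f\right)(x) \approx \sum_{k=0}^{K}\frac{t^k}{k!}\,\bigl((\mathcal{G}-r)^k f\bigr)(x) ;$$

for polynomial processes and polynomial $f$, each term stays inside a finite-dimensional space of polynomials, which is the parallel drawn above.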
|
One practical requirement in solving dynamic games is to ensure that the
players play well from any decision point onward. To satisfy this requirement,
existing efforts focus on equilibrium refinement, but the scalability and
applicability of existing techniques are limited. In this paper, we propose
Temporal-Induced Self-Play (TISP), a novel reinforcement learning-based
framework to find strategies with decent performances from any decision point
onward. TISP uses belief-space representation, backward induction, policy
learning, and non-parametric approximation. Building upon TISP, we design a
policy-gradient-based algorithm TISP-PG. We prove that TISP-based algorithms
can find approximate Perfect Bayesian Equilibrium in zero-sum one-sided
stochastic Bayesian games with finite horizon. We test TISP-based algorithms in
various games, including finitely repeated security games and a grid-world
game. The results show that TISP-PG is more scalable than existing mathematical
programming-based methods and significantly outperforms other learning-based
methods.
|
This paper develops a class of low-complexity device scheduling algorithms
for over-the-air federated learning via the method of matching pursuit. The
proposed scheme closely tracks the near-optimal performance achieved by
difference-of-convex programming and significantly outperforms the well-known
benchmark algorithms based on convex relaxation. Compared to the
state-of-the-art, the proposed scheme poses a drastically lower computational
load on the system: For $K$ devices and $N$ antennas at the parameter server,
the benchmark complexity scales with $\left(N^2+K\right)^3 + N^6$ while the
complexity of the proposed scheme scales with $K^p N^q$ for some $0 < p,q \leq
2$. The efficiency of the proposed scheme is confirmed via numerical
experiments on the CIFAR-10 dataset.
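The generic matching-pursuit building block can be sketched as follows; this is
textbook orthogonal matching pursuit for greedy column selection, an
illustrative stand-in rather than the paper's exact scheduling objective.

```python
import numpy as np

def omp_select(A, y, k):
    """Greedily pick k columns of A (e.g., k devices) that best
    explain the target vector y, re-fitting on the support each step."""
    support, residual = [], y.astype(float).copy()
    coeffs = None
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-aligned column
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs        # orthogonal residual
    return support, coeffs
```

Each iteration costs one correlation and one small least-squares solve, which is
consistent with the low-order polynomial complexity in $K$ and $N$ quoted above.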
|
The similarity in atomic structure between liquids and glasses has stimulated
a long-standing hypothesis that glasses may be more fluid-like than apparently
solid. In principle, the nature of glasses can be
characterized by measuring the dynamic response of rheology to shear strain
rate in the glass state. However, limited by the brittleness of glasses and
current experimental techniques, the dynamic behaviors of glasses were mainly
assessed in the supercooled liquid state or in the glass state within a narrow
rate range. Therefore, the nature of glasses has not been well elucidated
experimentally. Here we report the dynamic response of shear stress to shear
strain rate of metallic glasses over nine orders of magnitude in time scale,
equivalent to hundreds of years, by broadband stress relaxation experiments.
The full spectrum dynamic response of metallic glasses, together with other
glasses including silicate and polymer glasses, granular materials, soils,
emulsifiers and even fire ant aggregations, follows a universal scaling law
within the framework of fluid dynamics. Moreover, the universal scaling law
provides comprehensive validation of the conjecture on the jamming phase
diagram, by which the dynamic behaviors of a wide variety of glass systems can
be unified under one rubric parameterized by thermodynamic variables of
temperature, volume and stress in trajectory space.
|
Although convergence of the Parareal and multigrid-reduction-in-time (MGRIT)
parallel-in-time algorithms is well studied, results on their optimality are
limited. Appealing to recently derived tight bounds on two-level Parareal and
MGRIT convergence, this paper proves (or disproves) $h_x$- and
$h_t$-independent convergence of two-level Parareal and MGRIT, for linear
problems of the form $\mathbf{u}'(t) + \mathcal{L}\mathbf{u}(t) = f(t)$, where
$\mathcal{L}$ is symmetric positive definite and Runge-Kutta time integration
is used. The theory presented in this paper also encompasses analysis of some
modified Parareal algorithms, such as the $\theta$-Parareal method, and shows
that not all Runge-Kutta schemes are equal from the perspective of
parallel-in-time. Some schemes, particularly L-stable methods, offer
significantly better convergence than others as they are guaranteed to converge
rapidly at both limits of small and large $h_t\xi$, where $\xi$ denotes an
eigenvalue of $\mathcal{L}$ and $h_t$ the time-step size. On the other hand, some
schemes do not obtain $h$-optimal convergence, and two-level convergence is
restricted to certain regimes. In certain cases, an $\mathcal{O}(1)$ factor
change in time step $h_t$ or coarsening factor $k$ can be the difference
between convergence factors $\rho\approx0.02$ and divergence! The analysis is
extended to skew-symmetric operators as well, which cannot obtain
$h$-independent convergence and, in fact, will generally not converge for a
sufficiently large number of time steps. Numerical results confirm the analysis
in practice and emphasize the importance of a priori analysis in choosing an
effective coarse-grid scheme and coarsening factor. A Mathematica notebook to
perform a priori two-grid analysis is available at
https://github.com/XBraid/xbraid-convergence-est.
|
In recent years, large pre-trained language models (PLMs) have achieved
remarkable performance on many natural language processing benchmarks. Despite
their success, prior studies have shown that PLMs are vulnerable to attacks
from adversarial examples. In this work, we focus on the named entity
recognition task and study context-aware adversarial attack methods to examine
the model's robustness. Specifically, we propose perturbing the most
informative words for recognizing entities to create adversarial examples and
investigate different candidate replacement methods to generate natural and
plausible adversarial examples. Experiments and analyses show that our methods
are more effective in deceiving the model into making wrong predictions than
strong baselines.
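One common way to instantiate "most informative words" is a masking-based
importance score; the sketch below is a hedged illustration (not necessarily the
authors' exact scoring), with `predict_confidence` a hypothetical callable
returning the model's confidence in the gold entity label.

```python
def word_importance(predict_confidence, tokens, mask='[MASK]'):
    # Rank each word by the drop in the model's confidence in the
    # gold entity label when that word is masked out.
    base = predict_confidence(tokens)
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - predict_confidence(masked))
    return scores  # perturb the highest-scoring words first
```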
|
We develop a general framework for effective equations of expectation values
in quantum cosmology and pose for them the quantum Cauchy problem with
no-boundary and tunneling wavefunctions. We apply this framework in the model
with a big negative non-minimal coupling, which incorporates a recently
proposed low energy (GUT scale) mechanism of the quantum origin of the
inflationary Universe and study the effects of the quantum inflaton mode.
|
The dipole moment in the angular distribution of the cosmic microwave
background (CMB) is thought to originate from the Doppler effect and our motion
relative to the CMB frame. Observations of large-scale structure (LSS) should
show a related ``kinematic dipole'' and help test the kinematic origin of the
CMB dipole. Intriguingly, many previous LSS dipole studies suggest
discrepancies with the expectations from the CMB. Here we reassess the apparent
inconsistency between the CMB measurements and dipole estimates from the NVSS
catalog of radio sources. We find that it is important to account for the shot
noise and clustering of the NVSS sources, as well as kinematic contributions,
in determining the expected dipole signal. We use the clustering redshift
method and a cross-matching technique to refine estimates of the clustering
term. We then derive a probability distribution for the expected NVSS dipole in
a standard $\Lambda$CDM cosmological model including all (i.e., kinematic,
shot-noise and clustering) dipole components. Our model agrees with most of the
previous NVSS dipole measurements in the literature at better than $\lesssim
2\sigma$. We conclude that the NVSS dipole is consistent with a kinematic
origin for the CMB dipole within $\Lambda$CDM.
|
For a bounded non-negative self-adjoint operator acting in a complex,
infinite-dimensional, separable Hilbert space $H$ and possessing a dense range
$R$, we propose a new approach to the characterisation of the phenomenon
concerning the existence of subspaces $M\subset H$ such that
$M\cap R=M^\perp\cap R=\{0\}$. We show how the existence of such subspaces leads
to various pathological properties of unbounded self-adjoint operators related
to the von Neumann theorems \cite{Neumann}--\cite{Neumann2}. We revise the von
Neumann-Van Daele-Schm\"udgen assertions \cite{Neumann}, \cite{Daele},
\cite{schmud} to refine them. We also develop a new systematic approach, which
allows one to construct, for any unbounded densely defined
symmetric/self-adjoint operator $T$, infinitely many pairs of its closed densely
defined restrictions $T_k\subset T$ such that $\dom(T^* T_{k})=\{0\}$
($\Rightarrow \dom T_{k}^2=\{0\}$), $k=1,2$, and $\dom T_1\cap\dom T_2=\{0\}$,
$\dom T_1\dot+\dom T_2=\dom T$.
|
Background and Objective: In the current study, we sought to determine the
value of a meta-analysis to improve decision-making processes related to
nutrition in the poultry industry. To this end, nine commercial-scale
experiments were conducted to test the effect of a phytogenic feed additive and
three approaches were applied to the data. Materials and Methods: In all
experiments, 1-day-old male Cobb 500 chicks were used and fed corn-soybean meal
diets. Two dietary treatments were tested: T1, control diet and T2, control
diet + feed additive at a 0.05% inclusion rate. The experimental units were
broiler houses (7 experiments), floor pens (1 experiment) and cages (1
experiment). The response variables were final body weight, feed intake, feed
conversion ratio, mortality and production efficiency. Analyses of variance of
data from each and all the experiments were performed using SAS under
completely randomized non-blocked or blocked designs, respectively. The
meta-analyses were performed in R programming language. Results: No
statistically significant effects were found in the evaluated variables in any
of the independent experiments (p>0.12), nor following the application of a
block design (p>0.08). The meta-analyses showed no statistically significant
global effects in terms of final body weight (p>0.19), feed intake (p>0.23),
mortality (p>0.09), or European Production Efficiency Factor (p>0.08); however,
a positive global effect was found with respect to feed conversion ratio
(p<0.046). Conclusion: This meta-analysis demonstrated that the phytogenic feed
additive improved the efficiency of birds to convert feed to body weight (35 g
less feed per 1 kg of body weight obtained). Thus, the use of meta-analyses in
commercial-scale poultry trials can increase statistical power and as a result,
help to detect statistical differences if they exist.
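For reference, the pooling step of such a meta-analysis can be sketched with the
standard inverse-variance formula (a generic fixed-effect version; the R
analysis in the paper may use a random-effects variant).

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance pooling: returns the pooled effect estimate
    and its standard error across independent experiments."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# e.g., pooled feed-conversion-ratio difference across nine experiments
# (hypothetical numbers): fixed_effect_meta([-0.04, -0.02, ...], [...])
```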
|
Online portals include an increasing amount of user feedback in form of
ratings and reviews. Recent research highlighted the importance of this
feedback and confirmed that positive feedback improves product sales figures
and thus its success. However, online portals' operators act as central
authorities throughout the overall review process. In the worst case, operators
can exclude users from submitting reviews, modify existing reviews, and
introduce fake reviews by fictional consumers. This paper presents ReviewChain,
a decentralized review approach. Our approach avoids central authorities by
using blockchain technologies, decentralized apps and storage. Thereby, we
enable users to submit and retrieve untampered reviews. We highlight the
implementation challenges encountered when realizing our approach on the public
Ethereum blockchain. For each implementation challenge, we discuss possible
design alternatives and their trade-offs regarding costs, security, and
trustworthiness. Finally, we analyze which design decisions should be chosen to
support specific trade-offs and present the resulting combinations of
decentralized blockchain technologies, as well as combinations with conventional
centralized technologies.
|
The dynamical control of tunneling processes of single particles plays a
major role in science ranging from Shapiro steps in Josephson junctions to the
control of chemical reactions via light in molecules. Here we show how such
control can be extended to the regime of strongly interacting particles.
Through a weak modulation of a biased tunnel contact, we have been able to
coherently control single particle and correlated two-particle hopping
processes. We have furthermore been able to extend this control to
superexchange spin interactions in the presence of a magnetic-field gradient.
We show how such photon assisted superexchange processes constitute a novel
approach to realize arbitrary XXZ spin models in ultracold quantum gases, where
transverse and Ising type spin couplings can be fully controlled in magnitude
and sign.
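For reference, the XXZ Hamiltonian in question has the standard form, with the
transverse coupling $J_\perp$ and the Ising coupling $J_z$ being the quantities
the photon-assisted scheme can tune independently in magnitude and sign:

$$ H_{\mathrm{XXZ}} = \sum_{\langle i,j\rangle} \left[ J_\perp \left( S^x_i S^x_j + S^y_i S^y_j \right) + J_z\, S^z_i S^z_j \right]. $$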
|
The electromagnetic form factors, charge radii and decay constants of pion, K
and K*(892) are calculated using the three forms of relativistic kinematics:
instant form, point form and (light) front form. Simple representations of the
mass operator together with single quark currents are employed with all the
forms. Making use of previously fixed parameters, together with the constituent
quark mass for the strange quark, a reasonable reproduction of the available
data for form factors, charge radii and decay constants of pion, rho, K and
K*(892) is obtained in front form. With instant form a similar description is
obtained, but with a systematic underestimation of the vector meson decay
constants, using two different sets of parameters, one for pion and rho and
another one for K and K*(892). Point form produces a poor description of the
data.
|
We compute the hereditary part of the third post-Newtonian accurate
gravitational energy radiation from hyperbolic scatterings (and parabolic
scatterings) of non-spinning compact objects. We employ a large-angular-momentum
($j$) expansion and compute it to relative order $1/j^{11}$ (i.e., the first
12 terms). For the parabolic scattering case, the exact solution is computed.
Finally, the complete collected expression for the energy radiation, up to the
third post-Newtonian order and from $1/j^{3}$ to $1/j^{15}$, is presented,
including the instantaneous contribution.
|
Let $F$ be a nonlinear map in a real Hilbert space $H$. Suppose that
$\sup_{u\in B(u_0,R)}$ $\|[F'(u)]^{-1}\|\leq m(R)$, where
$B(u_0,R)=\{u:\|u-u_0\|\leq R\}$, $R>0$ is arbitrary, $u_0\in H$ is an element.
If $\sup_{R>0}\frac{R}{m(R)}=\infty$, then $F$ is surjective. If
$\|[F'(u)]^{-1}\|\leq a\|u\|+b$, $a\geq 0$ and $b>0$ are constants independent
of $u$, then $F$ is a homeomorphism of $H$ onto $H$. The last result is known
as a Hadamard-type theorem, but we give a new simple proof of it based on the
DSM (dynamical systems method).
|
Presenting a general phase approach to stochastic processes we analyze in
particular the Fokker-Planck equation for the noisy Burgers equation and
discuss the time dependent and stationary probability distributions. In one
dimension we derive the long-time skew distribution approaching the symmetric
stationary Gaussian distribution. In the short time regime we discuss
heuristically the nonlinear soliton contributions and derive an expression for
the distribution in accordance with the directed polymer-replica model and
asymmetric exclusion model results.
|
Comparing the theoretically predicted and measured values of the mass
difference of the $B^{0}_{s}$ system, we estimate the lower bound on the mass
of the $Z^{\prime}$ boson of models based on the $SU(3)_{c} \otimes SU(3)_{L}
\otimes U(1)_X$ gauge group. By assuming zero-texture approaches of the quark
mass matrices, we find the ratio of the measured value to the theoretical
prediction from the Standard Model and the $Z^{\prime}$ contribution from the
331 models of the mass difference of the $B^{0}_{s}$ system. We find lower
bounds on the $Z^{\prime}$ mass ranging between 1 TeV and 30 TeV for the two
most popular 331 models, and four different zero-textures ans\"atze. The above
results are expressed as a function of the weak angle associated with the
$b-s-Z^{\prime}$ couplings.
|
Most existing recommender systems represent a user's preference with a
feature vector, which is assumed to be fixed when predicting this user's
preferences for different items. However, the same vector cannot accurately
capture a user's varying preferences on all items, especially when considering
the diverse characteristics of various items. To tackle this problem, in this
paper, we propose a novel Multimodal Attentive Metric Learning (MAML) method to
model users' diverse preferences for various items. In particular, for each
user-item pair, we propose an attention neural network, which exploits the
item's multimodal features to estimate the user's special attention to
different aspects of this item. The obtained attention is then integrated into
a metric-based learning method to predict the user preference on this item. The
advantage of metric learning is that it can naturally overcome the problem of
dot product similarity, which is adopted by matrix factorization (MF) based
recommendation models but does not satisfy the triangle inequality property. In
addition, it is worth mentioning that the attention mechanism not only helps
model users' diverse preferences towards different items, but also overcomes the
geometrically restrictive problem of collaborative metric learning.
Extensive experiments on large-scale real-world datasets show that our model
can substantially outperform the state-of-the-art baselines, demonstrating the
potential of modeling users' diverse preferences for recommendation.
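A minimal sketch of the attention-weighted metric scoring idea (our simplified
reading; the paper's attention network is a learned neural model and the
training loss is more elaborate):

```python
import numpy as np

def attentive_metric_score(user_vec, item_vec, modal_feats, W, b):
    """Score a user-item pair by an attention-weighted distance.
    W and b stand in for learned attention parameters; modal_feats
    are the item's multimodal features."""
    logits = W @ modal_feats + b          # one logit per latent dimension
    a = np.exp(logits - logits.max())
    a /= a.sum()                          # softmax attention weights
    diff = user_vec - item_vec
    return -np.sqrt(np.sum(a * diff**2))  # smaller distance = higher score
```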
|
The notion of non-perturbative renormalization is discussed and extended.
Within the extended picture, a new non-perturbative representation for the
generating functional of Green functions of quantum field theories is
suggested. It is argued that the new expression agrees with the standard
renormalized perturbation theory if the latter is renormalized in an
appropriate renormalization scheme.
|
Semi-supervised learning has made significant progress in medical image
segmentation. However, existing methods primarily utilize information acquired
from a single dimensionality (2D/3D), resulting in sub-optimal performance on
challenging data, such as magnetic resonance imaging (MRI) scans with multiple
objects and highly anisotropic resolution. To address this issue, we present a
Hybrid Dual Mean-Teacher (HD-Teacher) model with hybrid, semi-supervised, and
multi-task learning to achieve highly effective semi-supervised segmentation.
HD-Teacher employs a 2D and a 3D mean-teacher network to produce segmentation
labels and signed distance fields from the hybrid information captured in both
dimensionalities. This hybrid learning mechanism allows HD-Teacher to combine
the `best of both worlds', utilizing features extracted from either 2D, 3D, or
both dimensions to produce outputs as it sees fit. Outputs from 2D and 3D
teacher models are also dynamically combined, based on their individual
uncertainty scores, into a single hybrid prediction, where the hybrid
uncertainty is estimated. We then propose a hybrid regularization module to
encourage both student models to produce results close to the
uncertainty-weighted hybrid prediction. The hybrid uncertainty suppresses
unreliable knowledge in the hybrid prediction, leaving only useful information
to improve network performance further. Extensive experiments of binary and
multi-class segmentation conducted on three MRI datasets demonstrate the
effectiveness of the proposed framework. Code is available at
https://github.com/ThisGame42/Hybrid-Teacher.
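A plausible form of the uncertainty-weighted fusion described above can be
sketched as follows (our guess at a minimal version; the paper's exact weighting
may differ):

```python
import numpy as np

def hybrid_prediction(p2d, p3d, u2d, u3d, eps=1e-8):
    """Combine 2D and 3D teacher outputs, voxel-wise, with weights
    inversely proportional to each teacher's uncertainty map."""
    w2d, w3d = 1.0 / (u2d + eps), 1.0 / (u3d + eps)
    return (w2d * p2d + w3d * p3d) / (w2d + w3d)
```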
|
Inspired by the recent work of Bekka, we study two reasonable analogues of
property (T) for not necessarily unital C*-algebras. The stronger one of the
two is called ``property (T)'' and the weaker one is called ``property
(T_{e})''. It is shown that no non-unital C*-algebra has property (T) (nor does
its unitalization). Moreover, no non-unital $\sigma$-unital C*-algebra has
property (T_e).
|
Criticality with strong coupling is described by a theory in the vicinity of
a non-Gaussian fixed point. The holographic duality conjectures that a theory
at a non-Gaussian fixed point with strong coupling is dual to a gravitational
theory. In this paper, we present a holographic theory for treating the strongly
coupled critical spin fluctuations in quasi-two dimensions. We show that a
universal frequency-over-temperature scaling law is a rather general property
of the critical ac spin susceptibility in the strongly coupled limit. Explicit
results for the dynamic scaling of the spin susceptibility are obtained in the
large-N and large 't Hooft limits. We argue that such critical scaling is in
good agreement with a number of experiments, some of which cannot be explained by
any perturbative spin-density-wave theory. Our results strongly suggest that
the anomalous behavior of non-Fermi liquids in materials is closely related to
the spin fluctuations described through the non-Gaussian fixed point. The
exotic properties of non-Fermi liquids can be viewed as the Fermi liquids
coupling to strongly coupled critical spin fluctuations.
|
The interaction of a multi-Petawatt, pancake-shaped laser pulse with an
unmagnetized plasma is studied analytically and numerically in the regime of
fully relativistic electron jitter velocities and in the context of the laser
wakefield acceleration scheme. The study is applied to the specifications
available at present time, or planned for the near future, of the Ti:Sa
Frascati Laser for Acceleration and Multidisciplinary Experiments (FLAME) in
Frascati. Novel nonlinear equations are derived by a three-timescale description,
with an intermediate timescale associated with the nonlinear phase of the laser
wave. They describe on an equal footing both the strong and moderate laser
intensity regimes, pertinent to the core and the edges of the pulse. These have
fundamentally different dispersive properties since, in the core, the electrons
are almost completely expelled by a very strong ponderomotive force and the
electromagnetic wave packet is embedded in a vacuum channel and has (almost)
linear properties, while at the pulse edges the laser amplitude is smaller and
the wave is dispersive. The nonlinear phase provides a transition to a
nondispersive electromagnetic wave at large intensities and the saturation of
the previously known nonlocal cubic nonlinearity, without the violation of the
imposed scaling laws. The temporal evolution of the laser pulse is studied by
the numerical solution of the model equations in a two-dimensional geometry,
with the spot diameter presently used in the self-injection test experiment
(SITE) with FLAME. The most stable initial pulse length is found to be around 1
$\mu$m, which is several times shorter than presently available. A stretching
of the laser pulse is observed, followed by the development of a vacuum channel
and a very large electrostatic wake potential, as well as the bending of the
laser wave front.
|
We analyze the colors and sizes of 32 quiescent (UVJ-selected) galaxies with
strong Balmer absorption ($\mbox{EW}(H\delta) \geq 4$\AA) at $z\sim0.8$ drawn
from DR2 of the LEGA-C survey to test the hypothesis that these galaxies
experienced compact, central starbursts before quenching. These recently
quenched galaxies, usually referred to as post-starburst galaxies, span a wide
range of colors and we find a clear correlation between color and half-light
radius, such that bluer galaxies are smaller. We build simple toy models to
explain this correlation: a normal star-forming disk plus a central, compact
starburst component. Bursts with an exponential decay timescale of $\sim 100$ Myr
that produce $\sim10\%$ to more than 100\% of the pre-existing masses can
reproduce the observed correlation. More significant bursts also produce bluer
and smaller descendants. Our findings imply that when galaxies shut down star
formation rapidly, they generally had experienced compact, starburst events and
that the large, observed spread in sizes and colors mostly reflects a variety
of burst strengths. Recently quenched galaxies should have younger stellar ages
in their centers; multi-wavelength data with high spatial resolution are required
to reveal the age gradient. Highly dissipative processes should be responsible
for this type of formation history. While determining the mechanisms for
individual galaxies is challenging, some recently quenched galaxies show signs
of gravitational interactions, suggesting that mergers are likely an important
mechanism in triggering the rapid shut-down of star-formation activities at
$z\sim0.8$.
|
In this paper we prove that if T:C[0,1] \rightarrow C[0,1] is a positive
linear operator with T(e_0)=1 such that T(e_1)-e_1 does not change sign, then
the iterates T^{m} converge to some positive linear operator T^{\infty}: C[0,1]
\rightarrow C[0,1], and we derive quantitative estimates in terms of moduli of
smoothness. This result enlarges the class of operators for which the limit of
the iterates can be computed and for which quantitative estimates of the
iterates can be given.
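A classical instance covered by results of this type is the Bernstein operator
$B_n$ (which fixes $e_0$ and $e_1$), whose iterates converge to the linear
interpolant at the endpoints by the Kelisky-Rivlin theorem; a quick numerical
check (our illustration, not from the paper):

```python
import numpy as np
from math import comb

n = 6
nodes = np.array([k / n for k in range(n + 1)])
# Matrix of B_n acting on function values at the nodes k/n:
# (B_n f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k).
M = np.array([[comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)]
              for x in nodes])

f = np.sin(np.pi * nodes)                # sample any continuous f
iterate = np.linalg.matrix_power(M, 500) @ f
limit = f[0] + nodes * (f[-1] - f[0])    # Kelisky-Rivlin limit
print(np.max(np.abs(iterate - limit)))   # essentially zero
```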
|
We investigate the phase structure and conductivity of a relativistic fluid
in a circulating electric field with a transverse magnetic field. This system
exhibits behavior similar to other driven systems such as strongly coupled
driven CFTs [Rangamani2015] or a simple anharmonic oscillator. We identify
distinct regions of fluid behavior as a function of driving frequency, and
argue that a "phase" transition will occur. Such a transition could be
measurable in graphene, and may be characterized by a sudden discontinuous
increase in the Hall conductivity. The presence of the discontinuity depends on
how the boundary is approached as the frequency or amplitude is dialed. In the
region where two solutions exist, the measured conductivity will depend on how
the system is prepared.
|
A stochastic heat equation on an unbounded nested fractal driven by a general
stochastic measure is investigated. Existence, uniqueness and continuity of the
mild solution are proved provided that the spectral dimension of the fractal is
less than 4/3.
|
We explore varying face recognition accuracy across demographic groups as a
phenomenon partly caused by differences in face illumination. We observe that
for a common operational scenario with controlled image acquisition, there is a
large difference in face region brightness between African-American and
Caucasian subjects, and a smaller difference between males and females. We show
that
impostor image pairs with both faces under-exposed, or both over-exposed, have
an increased false match rate (FMR). Conversely, image pairs with strongly
different face brightness have a decreased similarity measure. We propose a
brightness information metric to measure variation in brightness in the face
and show that face brightness that is too low or too high has reduced
information in the face region, providing a cause for the lower accuracy. Based
on this, for operational scenarios with controlled image acquisition,
illumination should be adjusted for each individual to obtain appropriate face
image brightness. This is the first work that we are aware of to explore how
the level of brightness of the skin region in a pair of face images (rather
than a single image) impacts face recognition accuracy, and to evaluate this as
a systematic factor causing unequal accuracy across demographics. The code is
at https://github.com/HaiyuWu/FaceBrightness.
|
In this article we show the duality between tensor networks and undirected
graphical models with discrete variables. We study tensor networks on
hypergraphs, which we call tensor hypernetworks. We show that the tensor
hypernetwork on a hypergraph exactly corresponds to the graphical model given
by the dual hypergraph. We translate various notions under duality. For
example, marginalization in a graphical model is dual to contraction in the
tensor network. Algorithms also translate under duality. We show that belief
propagation corresponds to a known algorithm for tensor network contraction.
This article is a reminder that the research areas of graphical models and
tensor networks can benefit from interaction.
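A two-factor toy example of the duality (our illustration):

```python
import numpy as np

# Graphical model p(x, y, z) proportional to f(x, y) * g(y, z),
# all variables binary; the dual tensor network has tensors f, g
# sharing the index y.
rng = np.random.default_rng(1)
f = rng.random((2, 2))    # factor on (x, y)
g = rng.random((2, 2))    # factor on (y, z)

# Marginalizing out y in the graphical model is exactly a tensor
# contraction over the shared index y in the dual network:
p_xz = np.einsum('xy,yz->xz', f, g)
print(p_xz / p_xz.sum())  # normalized marginal over (x, z)
```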
|
The nonadiabatic photodissociation dynamics of alkali halide molecules
excited by a femtosecond laser pulse in the gas phase are investigated
theoretically, and it is shown that the population of the photoexcited
molecules exhibits power-law decay with exponent -1/2, in contrast to
exponential decay, which is often assumed in femtosecond spectroscopy and
unimolecular reaction theory. To elucidate the mechanism of the power-law
decay, a diagrammatic method that visualizes the structure of the nonadiabatic
reaction dynamics as a pattern of occurrence of dynamical events, such as
wavepacket bifurcation, turning, and dissociation, is developed. Using this
diagrammatic method, an analytical formula for the power-law decay is derived,
and the theoretical decay curve is compared with the corresponding numerical
decay curve computed by a wavepacket dynamics simulation in the case of lithium
fluoride. This study reveals that the cause of the power-law decay is the
quantum interference arising from the wavepacket bifurcation and merging due to
nonadiabatic transitions.
|
We derive the linear force-extension relation for a wormlike chain of
arbitrary stiffness including entropy elasticity, bending and thermodynamic
buckling. From this we infer the plateau modulus $G^0$ of an isotropic
entangled solution of wormlike chains. The entanglement length $L_e$ is
expressed in terms of the characteristic network parameters for three different
scaling regimes in the entangled phase. The entanglement transition and the
concentration dependence of $G^0$ are analyzed. Finally we compare our findings
with experimental data.
|
Two {\it ASCA} observations were made of two ultra-luminous compact X-ray
sources (ULXs), Source 1 and Source 2, in the spiral galaxy IC 342. In the 1993
observation, Source 2 showed a 0.5--10 keV luminosity of $6 \times 10^{39}$
ergs s$^{-1}$ (assuming a distance of 4.0 Mpc), and a hard power-law spectrum
of photon index $\sim 1.4$. As already reported, Source 1 was $\sim 3$ times
brighter on that occasion, and exhibited a soft spectrum represented by a
multi-color disk model of inner-disk temperature $ \sim 1.8$ keV. The second
observation made in February 2000 revealed that Source 1 had made a transition
into a hard spectral state, while Source 2 into a soft spectral state. The ULXs
are therefore inferred to exhibit two distinct spectral states, and sometimes
make transitions between them. These results significantly reinforce the
scenario which describes ULXs as mass-accreting black holes.
|
We perform phase-field simulations of the electrodeposition process that
forms dendrites within metal-anode batteries, including an anisotropic
representation. We describe the evolution of a phase field, the lithium-ion
concentration, and an electric potential, during a battery charge cycle,
solving equations using time-marching algorithms with automatic time-step
adjustment and implemented on an open-source finite element library. A modified
lithium crystal surface anisotropy representation for the phase-field
electrodeposition model is proposed and evaluated through different numerical
tests, exhibiting low sensitivity to the numerical parameters. A change of
dendritic morphological behaviour is captured by varying the simulated
inter-electrode distance. A set of simulations is presented to validate the
proposed formulation, showing their agreement with experimentally-observed
lithium dendrite growth rates, and morphologies reported in the literature.
|
A new infinite class of Chern-Simons theories is presented using brane
tilings. The new class reproduces all known cases so far and introduces many
new models that are dual to M2 brane theories which probe a toric non-compact
CY 4-fold. The master space of the quiver theory is used as a tool to construct
the moduli space for this class and the Hilbert Series is computed for a
selected set of examples.
|
Widefield stochastic microscopy techniques such as PALM or STORM rely on the
progressive accumulation of a large number of frames, each containing a scarce
number of super-resolved point images. We justify that the redundancy in the
localization of detected events imposes a specific limit on the temporal
resolution. Based on a theoretical model, we derive analytical predictions for
the minimal time required to obtain a reliable image at a given spatial
resolution, called image completion time. In contrast to standard assumptions,
we find that the image completion time scales logarithmically with the ratio of
the image size to the spatial resolution volume. We justify that this
non-linear relation is the hallmark of a random coverage problem. We propose a
method to estimate the risk that the image reconstruction is not complete,
which we apply to an experimental data set. Our results provide a theoretical
framework to quantify the pattern detection efficiency and to optimize the
trade-off between image coverage and acquisition time, with applications to
$1$, $2$ or $3$ dimension structural imaging.
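The random-coverage (coupon-collector) mechanism behind the logarithmic scaling
can be checked with a short simulation (our illustration of the generic
mechanism, not the paper's estimator): with $n$ resolution cells and uniformly
random localizations, complete coverage needs about $n\ln n$ events, i.e.,
$\ln n$ events per cell.

```python
import numpy as np

rng = np.random.default_rng(0)

def completion_events(n_cells):
    """Number of uniformly random localization events until every
    resolution cell has been hit at least once."""
    covered = np.zeros(n_cells, dtype=bool)
    hit, t = 0, 0
    while hit < n_cells:
        j = rng.integers(n_cells)
        if not covered[j]:
            covered[j] = True
            hit += 1
        t += 1
    return t

for n in (100, 1000, 10000):
    mean_t = np.mean([completion_events(n) for _ in range(20)])
    print(n, mean_t / (n * np.log(n)))   # ratio is roughly constant
```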
|
For point vortices in the plane, we consider the correlation coefficient of
Ovchinnikov and Sigal. Generalising a result by Esposito, we show that it
vanishes for all vortex equilibria.
|
The g_{YM}-perturbed, non-supersymmetric extension of the dual single-matrix
description of 1/2 BPS states, within the Hilbert space reduction to the
oscillator subsector associated with chiral primaries is considered. This
matrix model is described in terms of a single Hermitian matrix. It is found
that, apart from a trivial shift in the energy, the large N background,
spectrum and interaction of invariant states are independent of g_{YM}. This
property applies to more general D terms.
|
Inclusive event-shape variables have been measured in the current region of
the Breit frame for neutral current deep inelastic ep scattering using an
integrated luminosity of 45.0 pb^-1 collected with the ZEUS detector at HERA.
The variables studied included thrust, jet broadening and invariant jet mass.
The kinematic range covered was 10 < Q^2 < 20,480 GeV^2 and 6 x 10^-4 < x < 0.6,
where Q^2 is the virtuality of the exchanged boson and x is the Bjorken
variable. The Q dependence of the shape variables has been used in conjunction
with NLO perturbative calculations and the Dokshitzer-Webber non-perturbative
corrections (`power corrections') to investigate the validity of this approach.
|
We introduce and analytically solve a directed sandpile model with stochastic
toppling rules. The model clearly belongs to a different universality class
from its counterpart with deterministic toppling rules, previously solved by
Dhar and Ramaswamy. The critical exponents are D_||=7/4, \tau=10/7 in two
dimensions and D_||=3/2, \tau=4/3 in one dimension. The upper critical
dimension of the model is three, at which the exponents apart from logarithmic
corrections reach their mean-field values D_||=2, \tau=3/2.
|
We present a detailed investigation of the impact of astrophysical processes
on the shape and amplitude of the kinetic Sunyaev-Zel'dovich (kSZ) power
spectrum from the post-reionization epoch. This is achieved by constructing a
new model of the kSZ power spectrum which we calibrate to the results of
hydrodynamic simulations. By construction, our method accounts for all relevant
density and velocity modes and so is unaffected by the limited box size of our
simulations. We find that radiative cooling and star-formation can reduce the
amplitude of the kSZ power spectrum by up to 33%, or 1 uK^2 at ell = 3000. This
is driven by a decrease in the mean gas density in groups and clusters due to
the conversion of gas into stars. Variations in the redshifts at which helium
reionization occurs can affect the amplitude by a similar fraction, while
current constraints on cosmological parameters (namely sigma_8) translate to a
further +-15% uncertainty on the kSZ power spectrum. We demonstrate how the
models presented in this work can be constrained -- reducing the astrophysical
uncertainty on the kSZ signal -- by measuring the redshift dependence of the
signal via kSZ tomography. Finally, we discuss how the results of this work can
help constrain the duration of reionization via measurements of the kinetic SZ
signal sourced by inhomogeneous (or patchy) reionization.
|
Call the sum of the singular values of a matrix A the energy of A. We
investigate graphs and matrices of energy close to the maximal one. We prove a
conjecture of Koolen and Moulton and give a stability theorem characterizing
all square nonnegative matrices and all graphs with energy close to the maximal
one. In particular, such graphs are quasi-random.
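In this terminology, the energy is computed directly from the singular values;
for a graph one applies this to its adjacency matrix (a quick illustration):

```python
import numpy as np

def energy(A):
    """Energy of a matrix: the sum of its singular values."""
    return np.linalg.svd(A, compute_uv=False).sum()

# e.g., the 5-cycle graph:
A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
print(energy(A))
```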
|
The field of plasma-based particle accelerators has seen tremendous progress
over the past decade and experienced significant growth in the number of
activities. During this process, the involved scientific community has expanded
from traditional university-based research and is now encompassing many large
research laboratories worldwide, such as BNL, CERN, DESY, KEK, LBNL and SLAC.
As a consequence, there is a strong demand for a consolidated effort in
education at the intersection of accelerator, laser and plasma physics. The
CERN Accelerator School on Plasma Wake Acceleration has been organized as a
result of this development. In this paper, we describe the interactive
component of this one-week school, which consisted of three case studies to be
solved in 11 working groups by the participants of the CERN Accelerator School.
|
As the penetration of wind generation increases, the uncertainty it brings
has imposed great challenges to power system operation. To cope with the
challenges, tremendous research work has been conducted, among which two
aspects are of most importance, i.e., devising operation strategies immune to
uncertainty and assessing the power system's capability to accommodate the
variable energy.
Driven and inspired by the latter problem, this paper will discuss the power
system's capability to accommodate variable wind generation in a probabilistic
sense. Wind generation, along with its uncertainty, is represented by a
polyhedron, which contains prediction, risk and uncertainty information. Then,
a three-level optimization problem is presented to estimate the lower
probability bound of power system's capability to fully accommodate wind
generation. After reformulating the inner \emph{max-min} problem, or
feasibility check problem, into its equivalent mixed-integer linear program
(MILP) form, the bisection algorithm is presented to solve this challenging
problem. Modified IEEE systems are adopted to show the effectiveness of the
proposed method.
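The outer bisection loop can be sketched generically; here `accommodates(p)` is
an abstract stand-in for the paper's MILP feasibility check at probability level
p (a hedged sketch, not the full reformulation).

```python
def lower_probability_bound(accommodates, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection for the largest probability level at which the system
    can still fully accommodate the wind-generation polyhedron."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if accommodates(mid):
            lo = mid    # feasible: the bound can be raised
        else:
            hi = mid    # infeasible: tighten from above
    return lo
```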
|
The aim of this paper is to show time-decay estimates of solutions to
linearized two-phase Navier-Stokes equations with surface tension and gravity.
The original two-phase Navier-Stokes equations describe the two-phase
incompressible viscous flow with a sharp interface that is close to the
hyperplane $x_N=0$ in the $N$-dimensional Euclidean space, $N \geq 2$. It is
well-known that the Rayleigh-Taylor instability occurs when the upper fluid is
heavier than the lower one, while this paper assumes that the lower fluid is
heavier than the upper one and proves time-decay estimates of $L_p-L_q$ type
for the linearized equations. Our approach is based on solution formulas, given
by Shibata and Shimizu (2011), for a resolvent problem associated with the
linearized equations.
|
Dan Lovallo and Daniel Kahneman must be commended for their clear
identification of causes and cures to the planning fallacy in "Delusions of
Success: How Optimism Undermines Executives' Decisions" (HBR July 2003). Their
look at overoptimism, anchoring, competitor neglect, and the outside view in
forecasting is highly useful to executives and forecasters. However, Lovallo
and Kahneman underrate one source of bias in forecasting - the deliberate
"cooking" of forecasts to get ventures started.
|
In this work, we compute rates of merging neutron stars (MNS) in galaxies of
different morphological type, as well as the cosmic MNS rate in a unitary
volume of the Universe adopting different cosmological scenarios. Our aim is to
provide predictions of kilonova rates for future observations both at low and
high redshift. In the adopted galaxy models, we take into account the
production of r-process elements either by MNS or core-collapse supernovae. In
computing the MNS rates we adopt either a constant total time delay for merging
(10 Myr) or a distribution function of such delays. Our main conclusions are:
i) the observed present time MNS rate in our Galaxy is well reproduced either
with a constant time delay or a distribution function $\propto t^{-1}$. The
[Eu/Fe] vs. [Fe/H] relation in the Milky Way can be well reproduced with only
MNS, if the time delay is short and constant. If the distribution function of
delays is adopted, core-collapse supernovae are also required. ii) The
present time cosmic MNS rate can be well reproduced in any cosmological
scenario, either pure luminosity evolution or a typical hierarchical one, and
spirals are the main contributors to it. iii) The spirals are the major
contributors to the cosmic MNS rate at all redshifts in hierarchical scenarios. In
the pure luminosity evolution scenario, the spirals are the major contributors
locally, whereas at high redshift ellipticals dominate. iv) The predicted
cosmic MNS rate well agrees with the cosmic rate of short Gamma Ray Bursts if
the distribution function of delays is adopted, in a cosmological hierarchical
scenario observationally derived. v) Future observations of kilonovae in
ellipticals will allow us to distinguish between a constant time delay and a
distribution of time delays, as well as among different cosmological scenarios.
|