Turbulence in protoplanetary disks, when present, plays a critical role in
transporting dust particles embedded in the gaseous disk component. When using
a field description of dust dynamics, a diffusion approach is traditionally
used to model this turbulent dust transport. However, it has been shown that
classical turbulent diffusion models are not fully self-consistent. Several
shortcomings exist, including the ambiguous nature of the diffused quantity and
the nonconservation of angular momentum. Orbital effects are also neglected
without an explicit prescription. In response to these inconsistencies, we
present a novel Eulerian turbulent dust transport model for isotropic and
homogeneous turbulence on the basis of a mean-field theory. Our model is based
on density-weighted averaging applied to the pressureless fluid equations and
uses appropriate turbulence closures. Our model yields novel dynamic equations
for the turbulent dust mass flux and recovers existing turbulent transport
models in special limiting cases, thus providing a more general and
self-consistent description of turbulent particle transport. Importantly, our
model ensures the conservation of global angular and linear momentum
unconditionally and implicitly accounts for the effects of orbital dynamics in
protoplanetary disks. Furthermore, our model correctly describes the vertical
settling-diffusion equilibrium solutions for both small and large particles.
Hence, this work presents a generalized Eulerian turbulent dust transport
model, establishing a comprehensive framework for more detailed studies of
turbulent dust transport in protoplanetary disks.
|
We are concerned with the dependence of the lowest positive eigenvalue of the
Dirac operator on the geometry of rectangles, subject to infinite-mass boundary
conditions. We conjecture that the square is a global minimiser under both the
area and the perimeter constraints. Contrary to well-known non-relativistic
analogues, we show that the present spectral problem does not admit explicit
solutions. We prove partial optimisation results based on a variational
reformulation and newly established lower and upper bounds to the Dirac
eigenvalue. We also propose an alternative approach based on symmetries of
rectangles and a non-convex minimisation problem; this implies a sufficient
condition formulated in terms of a symmetry of the minimiser which guarantees
the conjectured results.
|
Depending on the parity of $n$ and the regularity of a bent function $f$ from
$\mathbb F_p^n$ to $\mathbb F_p$, $f$ can be affine on a subspace of dimension
at most $n/2$, $(n-1)/2$ or $n/2-1$. We point out that many $p$-ary bent
functions attain this bound, and it appears difficult to find examples for which
a different behaviour can be shown. This resembles the situation for Boolean
bent functions, many of which are (weakly) $n/2$-normal, i.e. affine on an
$n/2$-dimensional subspace. However, using an algorithm by Canteaut et al.,
some Boolean bent functions were shown not to be $n/2$-normal. We develop an
algorithm for testing normality of functions from $\mathbb F_p^n$ to $\mathbb
F_p$. Applying the algorithm, we show that some bent functions in small
dimensions do not attain the bound on normality. Via the direct sum of
functions, this yields bent functions with this property in infinitely many
dimensions.
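For concreteness, the core subroutine of such a normality test is a check of whether $f$ is affine on the span of a given set of basis vectors. Below is a minimal brute-force sketch in Python; the representation of vectors as tuples over $\mathbb Z_p$ and the function names are illustrative assumptions, and the actual algorithm additionally has to organize the search over candidate subspaces of the dimensions quoted above.

```python
from itertools import product

def span(basis, p):
    """All vectors in the F_p-span of the given basis (vectors as tuples over Z_p)."""
    n = len(basis[0])
    vecs = set()
    for coeffs in product(range(p), repeat=len(basis)):
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % p for i in range(n))
        vecs.add(v)
    return vecs

def is_affine_on_subspace(f, basis, p):
    """Check whether f is affine on span(basis), i.e. x -> f(x) - f(0) is additive
    (hence F_p-linear) on that subspace."""
    V = span(basis, p)
    zero = tuple(0 for _ in basis[0])
    c = f(zero)
    for x in V:
        for y in V:
            xy = tuple((xi + yi) % p for xi, yi in zip(x, y))
            if (f(xy) - c) % p != ((f(x) - c) + (f(y) - c)) % p:
                return False
    return True

# Illustrative check: f(x) = x1*x2 mod 3 on the line spanned by (1, 0),
# where f vanishes identically and is therefore affine.
p = 3
f = lambda x: (x[0] * x[1]) % p
print(is_affine_on_subspace(f, [(1, 0)], p))  # True
```
The combinatorially expensive part of a full normality test is enumerating candidate subspaces of the relevant dimension, which is what the algorithm is designed to handle.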
|
In the spirit of the White-Bear version of fundamental measure theory we
derive a new density functional for hard-sphere mixtures which is based on a
recent mixture extension of the Carnahan-Starling equation of state. In
addition to the capability to predict inhomogeneous density distributions very
accurately, like the original White-Bear version, the new functional improves
upon consistency with an exact scaled-particle theory relation in the case of
the pure fluid. We examine consistency in detail within the context of
morphological thermodynamics. Interestingly, for the pure fluid the degree of
consistency of the new version is not only higher than for the original
White-Bear version but also higher than for Rosenfeld's original fundamental
measure theory.
|
We explore Noether gauge symmetries of FRW and Bianchi I universe models for
a perfect fluid in scalar-tensor gravity with an extra $R^{-1}$ term as curvature
correction. The Noether symmetry approach can be used to fix the form of the
coupling function $\omega(\phi)$ and the field potential $V(\phi)$. It is shown
that for both models, the Noether symmetries, the gauge function, and the
conserved quantity, i.e., the integral of motion, exist for the respective
point-like Lagrangians. We determine the form of the coupling function as well as the
field potential in each case. Finally, we investigate solutions through scaling
or dilatational symmetries for Bianchi I universe model without curvature
correction and discuss its cosmological implications.
|
The aim of this report of the Working Group on Hadronic Interactions and Air
Shower Simulation is to give an overview of the status of the field,
emphasizing open questions and a comparison of relevant results of the
different experiments. It is shown that an approximate overall understanding of
extensive air showers and the corresponding hadronic interactions has been
reached. The simulations provide a qualitative description of the bulk of the
air shower observables. Discrepancies are found, however, when measurements of
the longitudinal shower profile are compared with those of the lateral particle
distributions at ground. The report concludes with a list
of important problems that should be addressed to make progress in
understanding hadronic interactions and, hence, improve the reliability of air
shower simulations.
|
In this paper we study some properties of quadrilaterals concerning
concurrence of lines under few or no restrictive conditions, and obtain an
extension of a transversal theorem from triangles to quadrilaterals.
|
In this note we study packing or covering integer programs with at most k
constraints, which are also known as k-dimensional knapsack problems. For any
integer k > 0 and real epsilon > 0, we observe there is a polynomial-sized LP
for the k-dimensional knapsack problem with integrality gap at most 1+epsilon.
The variables may be unbounded or have arbitrary upper bounds. In the packing
case, we can also remove the dependence of the LP on the cost-function,
yielding a polyhedral approximation of the integer hull. This generalizes a
recent result of Bienstock on the classical knapsack problem.
|
We present a new framework where we simultaneously fit strong lensing (SL) and
dynamical data. The SL analysis is based on LENSTOOL, and the dynamical
analysis uses the MAMPOSSt code, which we have integrated into LENSTOOL. After
describing the implementation of this new tool, we apply it to the galaxy group
SL2S\,J02140-0535 ($z_{\rm spec}=0.44$), which we have already studied in the
past. We use new VLT/FORS2 spectroscopy of multiple images and group members,
as well as shallow X-ray data from \xmm. We confirm that the observed lensing
features in SL2S\,J02140-0535 belong to different background sources. One of
these sources is located at $z_{\rm spec}$ = 1.017 $\pm$ 0.001, whereas the
other source is located at $z_{\rm spec}$ = 1.628 $\pm$ 0.001. With the
analysis of our new and our previously reported spectroscopic data, we find 24
secure members for SL2S\,J02140-0535. Both data sets are well reproduced by a
single NFW mass profile: the dark matter halo coincides with the peak of the
light distribution, with scale radius, concentration, and mass equal to $r_s$
= $82^{+44}_{-17}$ kpc, $c_{200}$ = $10.0^{+1.7}_{-2.5}$, and $M_{200}$ =
$1.0^{+0.5}_{-0.2}$ $\times$ 10$^{14}$ M$_{\odot}$, respectively. These
parameters are better constrained when we simultaneously fit the SL and dynamical
information. The mass contours of our best model agree with the direction
defined by the luminosity contours and the X-ray emission of SL2S\,J02140-0535.
The simultaneous fit lowers the error in the mass estimate by 0.34 dex when
compared to the SL model, and by 0.15 dex when compared to the dynamical
method. The combination of SL and dynamical tools yields a more accurate probe of
the mass profile of SL2S\,J02140-0535 up to $r_{200}$. However, there is
tension between the best elliptical SL model and the best spherical dynamical
model.
|
It has been observed that deep learning architectures tend to make erroneous
decisions with high reliability for specially designed adversarial
instances. In this work, we show that the perturbation analysis of these
architectures provides a framework for generating adversarial instances by
convex programming which, for classification tasks, is able to recover variants
of existing non-adaptive adversarial methods. The proposed framework can be
used for the design of adversarial noise under various desirable constraints
and different types of networks. Moreover, this framework is capable of
explaining various existing adversarial methods and can be used to derive new
algorithms as well. We make use of these results to obtain novel algorithms.
The experiments show the competitive performance of the obtained solutions, in
terms of fooling ratio, when benchmarked with well-known adversarial methods.
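For reference, one classical non-adaptive attack of the kind such frameworks recover variants of is the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only: the model, inputs, labels, and $\epsilon$ are placeholders, and this is not the convex-programming formulation of the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: a single-step, non-adaptive adversarial
    perturbation bounded in the l-infinity norm.

    model : a differentiable classifier returning logits (placeholder)
    x, y  : input batch and true labels (placeholders)"""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, per input entry.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```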
|
We characterize positive definiteness for a family of matrices. As an
application, we derive explicit values of the quadratic embedding constants of
the path graphs.
|
Interest in ZrTe5 has been reinvigorated in recent years owing to its
potential for hosting versatile topological electronic states and intriguing
experimental discoveries. However, the mechanism of many of its unusual
transport behaviors remains controversial, for example, the characteristic peak
in the temperature-dependent resistivity and the anomalous Hall effect. Here,
by employing a clean dry-transfer fabrication method in an inert
environment, we successfully obtain high-quality ZrTe5 thin devices that
exhibit clear dual-gate tunability and ambipolar field effects. Such devices
allow us to systematically study the resistance peak as well as the Hall effect
at various doping densities and temperatures, revealing the contribution from
electron-hole asymmetry and multiple-carrier transport. By comparing with
theoretical calculations, we suggest a simplified semiclassical two-band model
to explain the experimental observations. Our work helps to resolve the
long-standing puzzles on ZrTe5 and could potentially pave the way for realizing
novel topological states in the two-dimensional limit.
|
We consider a Josephson junction consisting of superconductor/ferromagnetic
insulator (S/FI) bilayers as electrodes which proximize a nearby 2D electron
gas. Starting from a generic Josephson hybrid planar setup, we present an
exhaustive analysis of the interplay between the superconducting and
magnetic proximity effects and the conditions under which the structure
undergoes transitions to a non-trivial topological phase. We address the 2D
bound state problem using a general transfer matrix approach that reduces the
problem to an effective 1D Hamiltonian. This allows for straightforward study
of topological properties in different symmetry classes. As an example we
consider a narrow channel coupled with multiple ferromagnetic superconducting
fingers, and discuss how the Majorana bound states can be spatially controlled
by tuning the superconducting phases. Following our approach we also show the
energy spectrum, the free energy and finally the multiterminal Josephson
current of the setup.
|
Broken symmetries in graphene affect the massless nature of its charge
carriers. We present an analysis of scattering by defects in graphene in the
presence of spin-orbit interactions (SOIs). A characteristic constant ratio
($\simeq 2$) of the transport to elastic times for massless electrons signals
the anisotropy of the scattering. We show that SOIs lead to a drastic decrease
of this ratio, especially at low carrier concentrations, while the scattering
becomes increasingly isotropic. As the strength of the SOI determines the
energy (carrier concentration) where this drop is more evident, this effect
could help evaluate these interactions through transport measurements.
|
To investigate universal behavior and effects of long-range temporal
correlations in kinetic roughening, we perform extensive simulations on the
Kardar-Parisi-Zhang (KPZ) equation with temporally correlated noise, based on a
pseudospectral (PS) scheme and one of the improved finite-difference (FD)
schemes. We find that scaling properties are affected by long-range temporal
correlations within the effective temporally correlated regions. Our results
from these two independent numerical schemes are consistent with each other: the
three characteristic roughness exponents (global roughness exponent $\alpha$,
local roughness exponent $\alpha_{loc}$, and spectral roughness exponent
$\alpha_{s}$) are approximately equal within the small temporally correlated
regime; they satisfy $\alpha_{loc} \approx \alpha<\alpha_{s}$ in the large
temporally correlated regime; and the difference between $\alpha_{s}$ and
$\alpha$ increases with the temporal correlation exponent $\theta$.
Our results also show that PS and the improved FD schemes could effectively
suppress numerical instabilities in the temporally correlated KPZ growth
equation. Furthermore, our investigations suggest that when the effects of
long-range temporal correlation are present, the continuum and discrete growth
systems do not belong to the same universality class with the same temporal
correlation exponent.
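For orientation, the conventional explicit finite-difference update for the 1D KPZ equation with uncorrelated noise can be written in a few lines. The sketch below is a minimal reference point only: the improved discretization of the nonlinear term and the generation of temporally correlated noise used in the study are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def kpz_fd_step(h, dt, dx, nu, lam, D, rng):
    """One explicit Euler step of the 1D KPZ equation with periodic boundaries:
    dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2 + noise,
    using centered differences (the 'conventional' discretization; improved
    schemes replace the nonlinear term to suppress numerical instabilities)."""
    lap = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
    grad = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    noise = rng.standard_normal(h.size) * np.sqrt(2.0 * D * dt / dx)
    return h + dt * (nu * lap + 0.5 * lam * grad**2) + noise

# Illustrative run: 512 lattice sites, modest coupling and time step.
rng = np.random.default_rng(0)
h = np.zeros(512)
for _ in range(5000):
    h = kpz_fd_step(h, dt=0.01, dx=1.0, nu=1.0, lam=1.0, D=1.0, rng=rng)
print("interface width:", h.std())
```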
|
Heat transfer across interfaces of graphene and polar dielectrics (e.g. SiO2)
could be mediated by direct phonon coupling, as well as electronic coupling
with remote interfacial phonons (RIPs). To understand the relative contribution
of each component, we develop a new pump-probe technique, called
voltage-modulated thermoreflectance (VMTR), to accurately measure the change of
interfacial thermal conductance under an electrostatic field. We employed VMTR
on top gates of graphene field-effect transistors and find that the thermal
conductance of SiO2/graphene/SiO2 interfaces increases by up to {\Delta}G=0.8
MW m-2 K-1 under electrostatic fields of <0.2 V nm-1 . We propose two possible
explanations for the observed {\Delta}G. First, since the applied electrostatic
field induces charge carriers in graphene, our VMTR measurements could
originate from heat transfer between the charge carriers in graphene and RIPs
in SiO2. Second, the increase in heat conduction could be caused by better
conformity of graphene interfaces under electrostatic pressure exerted by the
induced charge carriers. Regardless of the origins of the observed {\Delta}G,
our VMTR measurements establish an upper limit for heat transfer from unbiased
graphene to SiO2 substrates via RIP scattering; i.e., only <2 % of the
interfacial heat transport is facilitated by RIP scattering even at a carrier
concentration of 4x10^12 cm-2.
|
Chiral phase properties of finite size hadronic systems are investigated
within the Nambu--Jona-Lasinio model. Finite size effects are taken into
account by making use of the multiple reflection expansion. We find that, for
droplets with relatively small baryon numbers, chiral symmetry restoration is
enhanced by the finite size effects. However the radius of the stable droplet
does not change much, as compared to that without the multiple reflection
expansion.
|
Algocracy is the rule by algorithms. This paper summarises technologies
useful to create algocratic social machines and presents idealistic examples of
their application. In particular, it describes smart contracts and their
implementations, challenges of behaviour mining and prediction, as well as
game-theoretic and AI approaches to mechanism design. The presented idealistic
examples of new algocratic solutions are picked from the reality of a modern
state. The examples are science funding, trade by organisations, regulation of
rental agreements, ranking of significance and sortition. Artificial General
Intelligence is not in the scope of this feasibility study.
|
Pre-trained Large Language Models (LLMs) often struggle on out-of-domain
datasets like healthcare-focused text. We explore specialized pre-training to
adapt smaller LLMs to different healthcare datasets. Three methods are
assessed: traditional masked language modeling, Deep Contrastive Learning for
Unsupervised Textual Representations (DeCLUTR), and a novel pre-training
objective utilizing metadata categories from the healthcare settings. These
schemes are evaluated on downstream document classification tasks for each
dataset, with additional analysis of the resultant embedding spaces.
Contrastively trained models outperform other approaches on the classification
tasks, delivering strong performance from limited labeled data and with fewer
model parameter updates required. While metadata-based pre-training does not
further improve classifications across the datasets, it yields interesting
embedding cluster separability. All domain adapted LLMs outperform their
publicly available general base LLM, validating the importance of
domain-specialization. This research illustrates efficient approaches to
instill healthcare competency in compact LLMs even under tight computational
budgets, an essential capability for responsible and sustainable deployment in
local healthcare settings. We provide pre-training guidelines for specialized
healthcare LLMs, motivate continued inquiry into contrastive objectives, and
demonstrate adaptation techniques to align small LLMs with privacy-sensitive
medical tasks.
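To illustrate the flavor of a DeCLUTR-style contrastive objective, a minimal InfoNCE loss over paired anchor/positive span embeddings might look as follows. The encoder, span-sampling strategy, batch size, and temperature here are illustrative assumptions rather than the exact setup of this work.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor_emb, positive_emb, temperature=0.05):
    """InfoNCE loss: each anchor should be most similar to its own positive.

    anchor_emb, positive_emb: (batch, dim) embeddings of text spans drawn from
    the same source documents (stand-ins for encoder outputs during
    contrastive pre-training)."""
    a = F.normalize(anchor_emb, dim=-1)
    b = F.normalize(positive_emb, dim=-1)
    logits = a @ b.t() / temperature            # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for encoder outputs.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```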
|
We address three major questions in astronomy, namely the detection of
biosignatures on habitable exoplanets, the geophysics of exoplanets and
cosmology. To achieve this goal, two requirements must be met: first, a very
large aperture to detect spectro-polarimetric and spatial features of faint
objects such as exoplanets, and second, continuous monitoring to characterize
the temporal behavior of exoplanets such as rotation period, meteorology and
seasons. An Earth-based telescope is not suited for continuous monitoring and
the atmosphere limits the ultimate angular resolution and
spectro-polarimetrical domain. Moreover, a space telescope in orbit is limited
in aperture, to perhaps 15 m over the next several decades. This is why we
propose an OWL-class lunar telescope with a 50-100 m aperture for visible and
infrared (IR) astronomy, based on ESO's Overwhelmingly Large Telescope concept,
unachievable on Earth owing to technical issues, such as wind stress, that are not
relevant for a lunar platform. It will be installed near the south pole of the
Moon to allow continuous target monitoring. The low gravity of the Moon will
facilitate its building and manoeuvring, compared to Earth-based telescopes. As
a guaranteed by-product, such a large lunar telescope will allow Intensity
Interferometric measurements when coupled with large Earth-based telescopes,
leading to pico-second angular resolution.
|
We study the phase diagrams of a family of 3D "Walker-Wang" type lattice
models, which are not topologically ordered but have deconfined anyonic
excitations confined to their surfaces. We add a perturbation (analogous to
that which drives the confining transition in Z_p lattice gauge theories) to
the Walker-Wang Hamiltonians, driving a transition in which all or some of the
variables associated with the loop gas or string-net ground states of these
models become confined. We show that in many cases the location and nature of
the phase transitions involved are exactly those of a generalized Z_p lattice
gauge theory, and use this to deduce the basic structure of the phase diagram.
We further show that the relationship between the phases on opposite sides of
the transition is fundamentally different from that in conventional gauge theories:
in the Walker-Wang case, the number of species of excitations that are
deconfined in the bulk can increase across a transition that confines only some
of the species of loops or string-nets. The analogue of the confining
transition in the Walker-Wang models can therefore lead to bulk deconfinement
and topological order.
|
The question of what regulates star formation is a long standing issue. To
investigate this issue, we run simulations of a kiloparsec cube section of a
galaxy with three kinds of stellar feedback: the formation of HII regions, the
explosion of supernovae, and UV heating. We show that stellar feedback is
sufficient to reduce the averaged star formation rate (SFR) to the level of the
Schmidt-Kennicutt law in Milky-Way-like galaxies, but not in high-redshift
gas-rich galaxies, suggesting that another type of support should be added. We
investigate whether an external driving of the turbulence such as the one
created by the large galactic scales could diminish the SFR at the observed
level. Assuming that the Toomre parameter is close to 1 as suggested by the
observations, we infer a typical turbulent forcing that we argue should be
applied parallel to the plane of the galactic disc. When this forcing is
applied, the SFR within our simulations closely follows the Schmidt-Kennicutt
relation. We find that the velocity dispersion is strongly anisotropic, with the
dispersion along the galactic plane being up to 10 times larger than the
perpendicular one.
|
Our unified chemical and spectrophotometric evolution code allows us to
simultaneously study the ISM abundances of a series of elements and the
spectral properties of the stellar population in our model galaxies. We use
stellar evolutionary tracks, yields, spectra, color and absorption index
calibrations for 5 different metallicities and account for the increase in
initial metallicity of successive generations of stars. For any kind of stellar
system, as described by its star formation history and IMF, we thus can
directly compare the time evolution of gaseous and stellar abundance ratios.
Spiral galaxy models that successfully reproduce spectral properties as well as
ISM abundances of nearby templates are combined with a cosmological model and
compared to damped Lyman $\alpha$ absorbers. For early type galaxies various
formation scenarios -- initial monolithic collapse, spiral-spiral merger,
hierarchical formation -- are tested with respect to their predicted spectral
energy distributions from UV to NIR and absorption indices and index ratios, as
e.g. [MgFe].
|
We establish inequalities for the eigenvalues of the sub-Laplace operator
associated with a pseudo-Hermitian structure on a strictly pseudoconvex CR
manifold. Our inequalities extend those obtained by Niu and Zhang
\cite{NiuZhang} for the Dirichlet eigenvalues of the sub-Laplacian on a bounded
domain in the Heisenberg group and are in the spirit of the well known
Payne-P\'{o}lya-Weinberger and Yang universal inequalities.
|
We consider the generation of the baryon asymmetry in models with
right-handed neutrinos produced through gravitational scattering of the
inflaton during reheating. The right-handed neutrinos later decay and generate
a lepton asymmetry, which is partially converted to a baryon asymmetry by
Standard Model sphaleron processes. We find that a sufficient asymmetry can be
generated for a wide range of right-handed neutrino masses and reheating
temperatures. We also show that the same type of gravitational scattering
produces Standard Model Higgs bosons, which can achieve inflationary reheating
consistent with the production of a baryon asymmetry.
|
Let $A$ be a $C^*$-algebra. It is shown that every absolutely summing
operator from $A$ into $\ell_2$ factors through a Hilbert space operator that
belongs to the 4-Schatten-von Neumann class. We also provide finite-dimensional
examples showing that one cannot improve the 4-Schatten-von Neumann class to the
$p$-Schatten-von Neumann class for any $p<4$. As an application, we prove that
there exists a modulus of capacity $\epsilon \to N(\epsilon)$ so that if $A$ is
a $C^*$-algebra and $T \in \Pi_1(A,\ell_2)$ with $\pi_1(T)\leq 1$, then for
every $\epsilon >0$, the $\epsilon$-capacity of the image of the unit ball of
$A$ under $T$ does not exceed $N(\epsilon)$. This answers positively a question
raised by Pe\l czynski.
|
The MEG II experiment, based at the Paul Scherrer Institut in Switzerland,
reports the result of a search for the decay $\mu^+\to e^+\gamma$ from data
taken in the first physics run in 2021. No excess of events over the expected
background is observed, yielding an upper limit on the branching ratio of
B($\mu^+\to e^+\gamma$) < $7.5 \times 10^{-13}$ (90% C.L.). The combination of
this result and the limit obtained by MEG gives B($\mu^+\to e^+\gamma$) < $3.1
\times 10^{-13}$ (90% C.L.), which is the most stringent limit to date. A
ten-fold larger sample of data is being collected during the years 2022-2023,
and data-taking will continue in the coming years.
|
We investigate a bosonic Josephson junction by using the path-integral
formalism with relative phase and population imbalance as dynamical variables.
We derive an effective only-phase action by performing functional integration over
the population imbalance. We then analyze the quantum effective only-phase
action, which formally contains all the quantum corrections. To the second
order in the derivative expansion and to the lowest order in $\hbar$, we obtain
the quantum correction to the Josephson frequency of oscillation. Finally, the
same quantum correction is found by adopting an alternative approach. Our
predictions are a useful theoretical tool for experiments with atomic or
superconducting Josephson junctions.
|
In this paper, a novel process is developed to efficiently realize high-level
complex cognitive behaviors in reactive agents. This method paves the way for
deducing high-level reactive behaviors from low-level perceptive information in
autonomous robots. The aforementioned process lets us actualize different
generations of Braitenberg vehicles, which are able to mimic desired behaviors
to survive in complex environments with high degrees of flexibility in
perception and emergence of high-level cognitive actions. The
approach has been used to engineer a Braitenberg vehicle with a wide range of
perception-action capabilities. Verification would be realized within this
framework, due to the efficient traceability between each sequential pair of
process phases. The applied simulations demonstrate the efficiency of the
established development process, based on the Braitenberg vehicle's behavior.
|
The classical Birkhoff conjecture says that the only integrable convex
domains are circles and ellipses. In this paper we show that a version of this
conjecture is true for small perturbations of ellipses of small
eccentricity.
|
In his classical paper, L. Schwartz proved that on the real line, in every
linear translation invariant space of continuous complex valued functions,
which is closed under compact convergence, the exponential monomials span a
dense subspace. He studied so-called local ideals in the space of Fourier
transforms, and his proof was based on the observation that, on the one hand, these
local ideals are completely determined by the exponential monomials in the
space, and, on the other hand, these local ideals completely determine the
space itself. D.I.Gurevich used a similar idea of localization to give
counterexamples for Schwartz's theorem in higher dimension. This localization
process depends on differential operators and differentiability properties of
the Fourier transforms. We note that in his two papers R.J. Elliott used
somewhat similar ideas, but unfortunately, some of his proofs and results are
incorrect. In this paper we show that the ideas of localization can be extended
to general locally compact Abelian groups using abstract derivations on the
Fourier algebra of compactly supported measures. Based on this method, we present
necessary and sufficient conditions for spectral synthesis for varieties on
locally compact Abelian groups.
|
We refine the presentation of the previous paper of our group, Y.Ezawa et
al., Class. Quantum Grav. {\bf 23} (2006), 3205 [arXiv:gr-qc/0507060]. In that
paper, we proposed a canonical formalism of f(R)-type generalized gravity by
using the Lie derivatives instead of the time derivatives. However, the use of
the Lie derivatives was not sufficient. In this note, we make use of the Lie
derivatives as far as possible, so that no time derivatives are used.
|
We present a unified description of temperature and entropy in spaces with
either "true" or "accelerated observer" horizons: In their (higher dimensional)
global embedding Minkowski geometries, the relevant detectors have constant
accelerations a_{G}; associated with their Rindler horizons are temperature
a_{G}/2\pi and entropy equal to 1/4 the horizon area. Both quantities agree
with those calculated in the original curved spaces.
As one example of this equivalence, we obtain the temperature and entropy of
Schwarzschild geometry from its flat D=6 embedding.
|
In this article, we study the on-shell production of low-mass vector
mediators from neutrino-antineutrino coalescence in the core of proto-neutron
stars. Taking into account the radial dependence of the density, energy, and
temperature inside the proto-neutron star, we compute the neutrino-antineutrino
interaction rate in the star interior in the well-motivated
$U(1)_{L_{\mu}-L_{\tau}}$ model. First, we determine the values of the coupling
above which neutrino-antineutrino interactions dominate over the Standard Model
neutrino-nucleon scattering. We argue that, although in this regime a
redistribution of the neutrino energies might take place, making low-energy
neutrinos more trapped, this only affects a small part of the neutrino
population and it cannot be constrained with the SN 1987A data. Thus, contrary
to previous claims, the region of the parameter space where the
$U(1)_{L_{\mu}-L_{\tau}}$ model explains the discrepancy in the muon anomalous
magnetic moment is not ruled out. We then focus on small gauge couplings, where
the decay length of the new gauge boson is larger than the neutrino-nucleon
mean free path, but still smaller than the size of the proto-neutron star. We show
that in this regime, the on-shell production of a long-lived $Z'$ and its
subsequent decay into neutrinos can significantly reduce the duration of the
neutrino burst, probing values of the coupling below ${\cal O}(10^{-7})$ for
mediator masses between 10 and 100 MeV. This disfavours new areas of the
parameter space of the $U(1)_{L_{\mu}-L_{\tau}}$ model.
|
We find the following improved laboratory bounds on the coupling of light
pseudoscalars to protons and neutrons: $g_p^2/4\pi < 1.7 \times 10^{-9}$ and
$g_n^2/4\pi < 6.8 \times 10^{-8}$. The limit on $g_p$ arises since a nonzero
$g_p$ would induce a coupling of the pseudoscalar to two photons, which is
limited by experiments studying laser beam propagation in magnetic fields.
Combining our bound on $g_p$ with a recent analysis of Fischbach and Krause on
two-pseudoscalar exchange potentials and experiments testing the equivalence
principle, we obtain our limit on $g_n$. (PACS number(s):
14.20.Dh/14.80.-j/12.20.Fv/04.90.+e)
|
We propose and demonstrate a novel method for generating
propagation-invariant spatially-stationary fields in a controllable manner. Our
method relies on producing incoherent mixtures of plane waves using planar
primary sources that are spatially completely uncorrelated. The strengths of
the individual plane waves in the mixture decide the exact functional form of
the generated coherence function. We use LEDs as the primary incoherent sources
and experimentally demonstrate the effectiveness of our method by generating
several spatially-stationary fields, including a new type, which we refer to as
the "region-wise spatially-stationary field." We also experimentally
demonstrate the propagation-invariance of these fields, which is an extremely
interesting and useful property of such fields. Our work should have important
implications for applications that exploit the spatial coherence properties
either in a transverse plane or in a propagation-invariant manner, such as
correlation holography, wide-field OCT, and imaging through turbulence.
|
In this report, two commonly used data-driven models for predicting well
production in a waterflood setting, the capacitance resistance model (CRM)
and recurrent neural networks (RNN), are compared. Both models are completely
data-driven and are intended to learn the reservoir behavior during a water
flood from historical data. This report serves as a technical guide to the
python-based implementation of the CRM model available from the associated
GitHub repository.
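As a flavor of what such an implementation computes, a single-tank capacitance resistance model (CRMT) forecasts production from injection rates through a time constant and a gain. The sketch below is minimal and illustrative: it neglects the bottom-hole-pressure term and uses assumed variable names; consult the repository for the full model.

```python
import numpy as np

def crm_tank_forecast(q0, injection, dt, tau, gain):
    """Single-tank capacitance resistance model (CRMT), neglecting the
    bottom-hole-pressure term.

    q0        : initial production rate
    injection : array of injection rates I(t_k)
    dt        : time-step size
    tau       : time constant of the tank
    gain      : fraction of injection supporting production (connectivity)"""
    decay = np.exp(-dt / tau)
    q = np.empty(len(injection))
    prev = q0
    for k, inj in enumerate(injection):
        # Exponential decay of the previous rate plus injection support.
        prev = prev * decay + (1.0 - decay) * gain * inj
        q[k] = prev
    return q

# Illustrative forecast: constant injection, production relaxes toward gain * I.
rates = crm_tank_forecast(q0=500.0, injection=np.full(24, 800.0),
                          dt=30.0, tau=90.0, gain=0.9)
print(rates[:3], rates[-1])
```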
|
Students often enter physics classrooms with deeply ingrained misconceptions
stemming from everyday experiences. These misconceptions challenge educators,
as students often resist information that conflicts with their preconceptions.
The first aim of this manuscript is to summarize the existing literature on
misconceptions in university physics, reviewing misconceptions' sources,
diagnoses, and remediation strategies. Most of this literature has concentrated
on classical physics. However, quantum physics poses unique challenges because
its concepts are removed from everyday experiences. This signals the need to
ask how well existing strategies for addressing misconceptions apply to quantum
physics. This is underscored by the recent surge of people from diverse
backgrounds entering quantum physics because of the growing significance of
quantum technologies. To help answer this question, we conducted in-depth
interviews with quantum physics instructors at the University of Waterloo who
have collectively taught over 100 quantum physics courses. These interviews
explored common misconceptions in quantum physics, their origins, and effective
instructional techniques to address them. We highlight specific misconceptions,
such as misunderstanding of entanglement and spin, and successful teaching
strategies, including ``misconception-trap quizzes.'' We integrate insights
from the literature review with our interview data to provide an overview of
current best practices in addressing physics misconceptions. Furthermore, we
identify key research questions that warrant further exploration, such as the
efficacy of multi-tier tests in quantum physics and developing a cohesive
quantum curriculum. This paper aims to inform educators and curriculum
developers, offering practical recommendations and setting a research agenda to
improve conceptual understanding in classical and quantum physics.
|
Hot spots in tumors are regions of high vascular density in the center of the
tumor and their analysis is an important diagnostic tool in cancer treatment.
We present a model for vascular remodeling in tumors predicting that the
formation of hot spots correlates with local inhomogeneities of the original
arterio-venous vasculature of the healthy tissue. Probable locations for hot
spots in the late stages of the tumor are locations of increased blood pressure
gradients. The developing tumor vasculature is non-hierarchical but still
complex displaying algebraically decaying density distributions.
|
More than 30 million high-energy muons collected with the MACRO detector
at the underground Gran Sasso Laboratory have been used to search for flux
variations of different natures. Two kinds of studies were carried out: search
for periodic variations and for the occurrence of clusters of events. Different
analysis methods, including Lomb-Scargle spectral analysis and Scan Test
statistics have been applied to the data.
|
Let $\Omega\subset\mathbb{R}^n$ be a $C^2$ bounded domain and $\chi>0$ be a
constant. We will prove the existence of constants
$\lambda_N\ge\lambda_N^{\ast}\ge\lambda^{\ast}(1+\chi\int_{\Omega}\frac{dx}{1-w_{\ast}})^2$
for the nonlocal MEMS equation $-\Delta
v=\lambda/[(1-v)^2(1+\chi\int_{\Omega}\frac{dx}{1-v})^2]$ in $\Omega$, $v=0$ on
$\partial\Omega$, such that a solution exists for any $0\le\lambda<\lambda_N^{\ast}$
and no solution exists for any $\lambda>\lambda_N$, where $\lambda^{\ast}$ is
the pull-in voltage and $w_{\ast}$ is the limit of the minimal solution of
$-\Delta v=\lambda/(1-v)^2$ in $\Omega$ with $v=0$ on $\partial\Omega$ as
$\lambda\nearrow \lambda^{\ast}$. We will prove the existence, uniqueness and
asymptotic behaviour of the global solution of the corresponding parabolic
nonlocal MEMS equation under various boundedness conditions on $\lambda$. We
also obtain the quenching behaviour of the solution of the parabolic nonlocal
MEMS equation when $\lambda$ is large.
|
We prove that the motion of a triaxial Riemann ellipsoid of homogeneous
liquid without angular momentum does not possess an additional first integral
which is meromorphic in positions, momenta, and the elliptic functions which
appear in the potential, and thus is not integrable. We prove moreover that
this system is not integrable even on a fixed energy level hypersurface.
|
We examine the effect of a kinetic undercooling condition on the evolution of
a free boundary in Hele--Shaw flow, in both bubble and channel geometries. We
present analytical and numerical evidence that the bubble boundary is unstable
and may develop one or more corners in finite time, for both expansion and
contraction cases. This loss of regularity is interesting because it occurs
regardless of whether the less viscous fluid is displacing the more viscous
fluid, or vice versa. We show that small contracting bubbles are described to
leading order by a well-studied geometric flow rule. Exact solutions to this
asymptotic problem continue past the corner formation until the bubble
contracts to a point as a slit in the limit. Lastly, we consider the evolving
boundary with kinetic undercooling in a Saffman--Taylor channel geometry. The
boundary may either form corners in finite time, or evolve to a single long
finger travelling at constant speed, depending on the strength of kinetic
undercooling. We demonstrate these two different behaviours numerically. For
the travelling finger, we present results of a numerical solution method
similar to that used to demonstrate the selection of discrete fingers by
surface tension. With kinetic undercooling, a continuum of corner-free
travelling fingers exists for any finger width above a critical value, which
goes to zero as the kinetic undercooling vanishes. We have not been able to
compute the discrete family of analytic solutions, predicted by previous
asymptotic analysis, because the numerical scheme cannot distinguish between
solutions characterised by analytic fingers and those which are corner-free but
non-analytic.
|
We consider the Cauchy problem for the kinetic derivative nonlinear
Schr\"odinger equation on the torus: \[ \partial_t u - i \partial_x^2 u =
\alpha \partial_x \big( |u|^2 u \big) + \beta \partial_x \big[ H \big( |u|^2
\big) u \big] , \quad (t, x) \in [0,T] \times \mathbf{T}, \] where the
constants $\alpha,\beta$ are such that $\alpha \in \mathbf{R}$ and $\beta <0$,
and $H$ denotes the Hilbert transform. This equation has dissipative nature,
and the energy method is applicable to prove local well-posedness of the Cauchy
problem in Sobolev spaces $H^s$ for $s>3/2$. However, the gauge transform
technique, which is useful for dealing with the derivative loss in the
nonlinearity when $\beta =0$, cannot be directly adapted due to the presence of
the Hilbert transform. In particular, there has been no result on local
well-posedness in low regularity spaces or global solvability of the Cauchy
problem. In this article, we shall prove local and global well-posedness of the
Cauchy problem for small initial data in $H^s(\mathbf{T})$, $s>1/2$. To this
end, we make use of the parabolic-type smoothing effect arising from the
resonant part of the nonlocal nonlinear term $\beta \partial_x [H(|u|^2)u]$, in
addition to the usual dispersive-type smoothing effect for nonlinear
Schr\"odinger equations with cubic nonlinearities. As by-products of the proof,
we also obtain smoothing effect and backward-in-time ill-posedness results.
|
Adaptive Optics at mid-IR wavelengths has long been seen as either not
necessary or easy. The impact of atmospheric turbulence on the performance of
8-10 meter class telescopes in the mid-IR is relatively small compared to other
performance issues like sky background and telescope emission. Using a
relatively low order AO system, Strehl Ratios of larger than 95% have been
reported on 6-8 meter class telescopes. Going to 30-42 meter class telescopes
changes this picture dramatically. High Strehl Ratios require what is currently
considered a high-order AO system. Furthermore, even with a moderate AO system,
first order simulations show that the performance of such a system drops
significantly when not taking into account refractivity effects and atmospheric
composition variations. Reaching Strehl Ratios of over 90% at L, M and N band
will require special considerations and will impact the system design and
control scheme of AO systems for mid-IR on ELTs. In this paper we present an
overview of the effects that impact the performance of an AO system at mid-IR
wavelengths on an ELT, together with performance simulations, and we present a
first-order system concept of such an AO system for METIS, the mid-IR
instrument for the E-ELT.
|
In this study, we examined consequences of unconventional time development of
two-dimensional conformal field theory induced by the $L_{1}$ and $L_{-1}$
operators, employing the formalism previously developed in a study of
sine-square deformation. We discovered that the retainment of the Virasoro
algebra requires the presence of a cut-off near the fixed points. The
introduction of a scale by the cut-off makes it possible to recapture the
formula for entanglement entropy in a natural and straightforward manner.
|
We examine Dirac's early algebraic approach which introduces the {\em
standard} ket and show that it emerges more clearly from a unitary
transformation of the operators based on the action. This establishes a new
picture that is unitarily equivalent to both the Schr\"{o}dinger and Heisenberg
pictures. We will call this the Dirac-Bohm picture for the reasons we discuss
in the paper. This picture forms the basis of the Feynman path theory and
allows us to show that the so-called `Bohm trajectories' are averages of an
ensemble of Feynman paths.
|
The density profiles of dark matter haloes contain rich information about
their growth history and physical properties. One particularly interesting
region is the splashback radius, $R_{\rm sp}$, which marks the transition
between particles orbiting in the halo and particles undergoing first infall.
While the dependence of $R_{\rm sp}$ on the recent accretion rate is well
established and theoretically expected, it is not clear exactly what parts of
the accretion history $R_{\rm sp}$ responds to, and what other halo properties
might additionally influence its position. We comprehensively investigate these
questions by correlating the dynamically measured splashback radii of a large
set of simulated haloes with their individual growth histories as well as their
structural, dynamical, and environmental properties. We find that $R_{\rm sp}$
is sensitive to the accretion over one crossing time but largely insensitive to
the prior history (in contrast to concentration, which probes earlier epochs).
All secondary correlations are much weaker, but we discern a relatively higher
$R_{\rm sp}$ in less massive, older, more elliptical, and more tidally deformed
haloes. Despite these minor influences, we conclude that the splashback radius
is a clean indicator of a halo's growth over the past dynamical time. We
predict that the magnitude gap should be a promising observable indicator of a
halo's accretion rate and splashback radius.
|
We consider average-cost Markov decision processes (MDPs) with Borel state
and action spaces and universally measurable policies. For the nonnegative cost
model and an unbounded cost model with a Lyapunov-type stability character, we
introduce a set of new conditions under which we prove the average cost
optimality inequality (ACOI) via the vanishing discount factor approach. Unlike
most existing results on the ACOI, our result does not require any compactness
and continuity conditions on the MDPs. Instead, the main idea is to use the
almost-uniform-convergence property of a pointwise convergent sequence of
measurable functions as asserted in Egoroff's theorem. Our conditions are
formulated in order to exploit this property. Among others, we require that for
each state, on selected subsets of actions at that state, the state transition
stochastic kernel is majorized by finite measures. We combine this majorization
property of the transition kernel with Egoroff's theorem to prove the ACOI.
|
In 1970, Coxeter gave a short and elegant geometric proof showing that if
$p_1, p_2, \ldots, p_n$ are vertices of an $n$-gon $P$ in cyclic order, then
$P$ is affinely regular if, and only if there is some $\lambda \geq 0$ such
that $p_{j+2}-p_{j-1} = \lambda (p_{j+1}-p_j)$ for $j=1,2,\ldots, n$. The aim
of this paper is to examine the properties of polygons whose vertices
$p_1,p_2,\ldots,p_n \in \mathbb{C}$ satisfy the property that
$p_{j+m_1}-p_{j+m_2} = w (p_{j+k}-p_j)$ for some $w \in \mathbb{C}$ and
$m_1,m_2,k \in \mathbb{Z}$. In particular, we show that in `most' cases this
implies that the polygon is affinely regular, but in some special cases there
are polygons which satisfy this property but are not affinely regular. The
proofs are based on the use of linear algebraic and number theoretic tools. In
addition, we apply our method to characterize polytopes with certain symmetry
groups.
|
Let ${\cal{C}}_1$ be the set of fundamental cycles of breadth-first-search
trees in a graph $G$ and ${\cal{C}}_2$ the set of the sums of two cycles in
${\cal{C}}_1$. Then we show that $(1)$ ${\cal{C}}={\cal{C}}_1\bigcup{\cal{C}}_2$
contains a shortest $\Pi$-twosided cycle in a $\Pi$-embedded graph $G$; $(2)$
$\cal{C}$ contains all the possible shortest even cycles in a graph $G$; $(3)$
if a shortest cycle in a graph $G$ is an odd cycle, then $\cal{C}$ contains all
the shortest odd cycles in $G$. This implies the existence of a polynomially
bounded algorithm to find a shortest $\Pi$-twosided cycle in an embedded graph
and thus solves an open problem of B. Mohar and C. Thomassen [2, p. 112].
|
The Comment by Holas et al. [A. Holas, M. Cinal, and N. H. March, Phys. Rev.
A 78, 016501 (2008)] on our recent paper [J. Schirmer and A. Dreuw, Phys. Rev.
A 75, 022513 (2007)] is an appropriate and valuable contribution. As a small
addendum we briefly comment on the relationship between the radical Kohn-Sham
(rKS) form of density-functional theory and previous one-electron (particle)
potential (OPP) developments.
|
An important issue when using Machine Learning algorithms in recent research
is the lack of interpretability. Although these algorithms provide accurate
point predictions for various learning problems, uncertainty estimates
connected with point predictions are rather sparse. A contribution toward closing
this gap for the Random Forest Regression Learner is presented here. Based on its
Out-of-Bag procedure, several parametric and non-parametric prediction
intervals are provided for Random Forest point predictions, and theoretical
guarantees for their correct coverage probability are delivered. In a second part,
a thorough investigation through Monte-Carlo simulation is conducted evaluating
the performance of the proposed methods from three aspects: (i) Analyzing the
correct coverage rate of the proposed prediction intervals, (ii) Inspecting
interval width and (iii) Verifying the competitiveness of the proposed
intervals with existing methods. The simulation yields that the proposed
prediction intervals are robust towards non-normal residual distributions and
are competitive by providing correct coverage rates and comparably narrow
interval lengths, even for comparably small samples.
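A minimal sketch of one non-parametric variant of such an Out-of-Bag construction, shifting the point prediction by empirical quantiles of the OOB residuals, could look as follows. Scikit-learn's `RandomForestRegressor`, the quantile-shift construction, and the parameter values are illustrative assumptions, not necessarily the exact estimators studied in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def oob_prediction_interval(X, y, X_new, alpha=0.1, **rf_kwargs):
    """Prediction interval for new points from out-of-bag residual quantiles.

    Fits a random forest with OOB predictions enabled, computes residuals of
    the OOB predictions, and shifts the point prediction by their empirical
    alpha/2 and 1 - alpha/2 quantiles (a simple non-parametric variant)."""
    rf = RandomForestRegressor(oob_score=True, bootstrap=True, **rf_kwargs)
    rf.fit(X, y)
    residuals = y - rf.oob_prediction_
    lo, hi = np.quantile(residuals, [alpha / 2.0, 1.0 - alpha / 2.0])
    pred = rf.predict(X_new)
    return pred + lo, pred + hi

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=400)
lower, upper = oob_prediction_interval(X, y, X[:5], n_estimators=300, random_state=0)
print(np.c_[lower, upper])
```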
|
Smart contracts are increasingly being used to manage large numbers of
high-value cryptocurrency accounts. There is a strong demand for automated,
efficient, and comprehensive methods to detect security vulnerabilities in a
given contract. While the literature features a plethora of analysis methods
for smart contracts, the existing proposals do not address the increasing
complexity of contracts. Existing analysis tools suffer from false alarms and
missed bugs in today's smart contracts that are increasingly defined by
complexity and interdependencies. To scale accurate analysis to modern smart
contracts, we introduce EF/CF, a high-performance fuzzer for Ethereum smart
contracts. In contrast to previous work, EF/CF efficiently and accurately
models complex smart contract interactions, such as reentrancy and
cross-contract interactions, at a very high fuzzing throughput rate. To achieve
this, EF/CF transpiles smart contract bytecode into native C++ code, thereby
enabling the reuse of existing, optimized fuzzing toolchains. Furthermore,
EF/CF increases fuzzing efficiency by employing a structure-aware mutation
engine for smart contract transaction sequences and using a contract's ABI to
generate valid transaction inputs. In a comprehensive evaluation, we show that
EF/CF scales better -- without compromising accuracy -- to complex contracts
compared to state-of-the-art approaches, including other fuzzers,
symbolic/concolic execution, and hybrid approaches. Moreover, we show that
EF/CF can automatically generate transaction sequences that exploit reentrancy
bugs to steal Ether.
|
We discuss the production mechanism of partons via vacuum polarization during
the very early, gluon dominated phase of an ultrarelativistic heavy-ion
collision in the framework of the background field method of quantum
chromodynamics.
|
In many real world problems, control decisions have to be made with limited
information. The controller may have no a priori (or even a posteriori) data on
the nonlinear system, except from a limited number of points that are obtained
over time. This is either due to high cost of observation or the highly
non-stationary nature of the system. The resulting conflict between information
collection (identification, exploration) and control (optimization,
exploitation) necessitates an active learning approach for iteratively
selecting the control actions which concurrently provide the data points for
system identification. This paper presents a dual control approach where the
information acquired at each control step is quantified using the entropy
measure from information theory and serves as the training input to a
state-of-the-art Gaussian process regression (Bayesian learning) method. The
explicit quantification of the information obtained from each data point allows
for iterative optimization of both identification and control objectives. The
approach developed is illustrated with two examples: control of the logistic map
as a chaotic system and position control of a cart with an inverted pendulum.
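A minimal sketch of the general idea, scoring candidate actions by a trade-off between a tracking objective and the predictive entropy of a Gaussian process model fitted to past data, is given below. The scoring rule, kernel, and parameter values are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_action(X_obs, y_obs, candidates, target, trade_off=0.5):
    """Dual-control-style action selection: trade off tracking error against
    the information (predictive entropy) available at each candidate action.

    X_obs, y_obs : past (action, response) data used for GP identification
    candidates   : array of candidate actions, shape (n, d)
    target       : desired system output
    trade_off    : weight on the exploration (entropy) term"""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
    gp.fit(X_obs, y_obs)
    mean, std = gp.predict(candidates, return_std=True)
    control_cost = (mean - target) ** 2                   # exploitation term
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * std**2)   # exploration term
    scores = control_cost - trade_off * entropy
    return candidates[np.argmin(scores)]

# Toy usage: one-dimensional actions on an unknown logistic-map-like response.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(15, 1))
y = 3.7 * X[:, 0] * (1.0 - X[:, 0])
cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
print(select_action(X, y, cand, target=0.8))
```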
|
The burgeoning complexity of contemporary deep learning models, while
achieving unparalleled accuracy, has inadvertently introduced deployment
challenges in resource-constrained environments. Knowledge distillation, a
technique aiming to transfer knowledge from a high-capacity "teacher" model to
a streamlined "student" model, emerges as a promising solution to this dilemma.
This paper provides a comprehensive overview of the knowledge distillation
paradigm, emphasizing its foundational principles such as the utility of soft
labels and the significance of temperature scaling. Through meticulous
examination, we elucidate the critical determinants of successful distillation,
including the architecture of the student model, the caliber of the teacher,
and the delicate balance of hyperparameters. While acknowledging its profound
advantages, we also delve into the complexities and challenges inherent in the
process. Our exploration underscores knowledge distillation's potential as a
pivotal technique in optimizing the trade-off between model performance and
deployment efficiency.
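To make the role of soft labels and temperature scaling concrete, the canonical distillation objective combines a temperature-softened KL term against the teacher with the usual cross-entropy on hard labels. The temperature and weighting below are illustrative choices, not prescriptions from this overview.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Knowledge-distillation loss: softened teacher/student KL divergence
    plus standard cross-entropy on the ground-truth labels.

    T     : temperature used to soften both distributions
    alpha : weight on the distillation (soft-label) term"""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T ** 2)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage with random logits standing in for model outputs.
loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                         torch.randint(0, 10, (8,)))
print(loss.item())
```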
|
We investigate statistics of the decay process in the equal-mass three-body
problem with randomized initial conditions. Contrary to earlier expectations of
similarity with "radioactive decay", the lifetime distributions obtained in our
numerical experiments turn out to be heavy-tailed, i.e. the tails are not
exponential, but algebraic. The computed power-law index for the differential
distribution is within the narrow range, approximately from -1.7 to -1.4,
depending on the virial coefficient. Possible applications of our results to
studies of the dynamics of triple stars known to be at the edge of disruption
are considered.
|
The Kagome lattice is an important fundamental structure in condensed matter
physics for investigating the interplay of electron correlation, topology, and
frustrated magnetism. Recent work on Kagome metals in the AV3Sb5 (A = K, Rb,
Cs) family has shown a multitude of correlation-driven distortions, including
symmetry breaking charge density waves and nematic superconductivity at low
temperatures. Here we study the new Kagome metal Yb0.5Co3Ge3 and find a
temperature-dependent kink in the resistivity that is highly similar to the
AV3Sb5 behavior and is commensurate with an in-plane structural distortion of
the Co Kagome lattice along with a doubling of the c-axis. The space group is
found to lower from P6/mmm to P63/m below the transition temperature, breaking
the in-plane mirror planes and C6 rotation, while gaining a screw axis along
the c-direction. At very low temperatures, anisotropic negative
magnetoresistance is observed, which may be related to anisotropic magnetism.
This raises questions about the types of distortions in Kagome nets and
their resulting physical properties including superconductivity and magnetism.
|
In scalaron-Higgs inflation the Standard Model Higgs boson is non-minimally
coupled to gravity and the Einstein-Hilbert action is supplemented by the
quadratic scalar curvature invariant. For the quartic Higgs self-coupling
$\lambda$ fixed at the electroweak scale, we find that the resulting
inflationary two-field model effectively reduces to a single field model with
the same predictions as in Higgs inflation or Starobinsky inflation, including
the limit of a vanishing non-minimal coupling. For the same model, but with the
scalar field a priori not identified with the Standard Model Higgs boson, we
study the inflationary consequences of an extremely small $\lambda$. Depending
on the initial conditions for the inflationary background trajectories, we find
that the two-field dynamics either again reduces to an effective single-field
model with a larger tensor-to-scalar ratio than predicted in Higgs inflation
and Starobinsky inflation, or involves the full two-field dynamics and leads to
oscillatory features in the inflationary power spectrum. Finally, we
investigate under which conditions the inflationary scenario with extremely
small $\lambda$ can be realized dynamically by the Standard Model
renormalization group flow and discuss how the scalaron-Higgs model can provide
a natural way to stabilize the electroweak vacuum.
|
We present synthesis and $^{75}$As-nuclear quadrupole resonance (NQR)
measurements for the noncentrosymmetric superconductor CaPtAs with a
superconducting transition temperature $T_c$ of $\sim 1.5$ K. We discovered two
different forms of CaPtAs during synthesis; one is a high-temperature
tetragonal form that was previously reported, and the other is a
low-temperature form consistent with the orthorhombic structure of CaPtP.
According to the $^{75}$As-NQR measurement for superconducting tetragonal
CaPtAs, the nuclear spin-lattice relaxation rate $1/T_1$ has an obvious
coherence peak below $T_c$ and does not follow a simple exponential variation
at low temperatures. These findings indicate that CaPtAs is a multigap
superconductor with a large $s$-wave component.
|
We present an accurate and fast 3D simulation scheme for out-of-plane grating
couplers, based on two dimensional rigorous (finite difference time domain)
grating simulations, the effective index method (EIM), and the
Rayleigh-Sommerfeld diffraction formula. In comparison with full 3D FDTD
simulations, the rms difference in electric field is below 5% and the
difference in power flux is below 3%. A grating coupler for coupling from a
silicon-on-insulator photonic integrated circuit to an optical fiber positioned
0.1 mm above the circuit is designed as an example.
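As an illustrative sketch of the final propagation step of such a hybrid scheme (our own minimal example with assumed function and parameter names; the paper's Rayleigh-Sommerfeld implementation may differ in detail), the near field computed above the grating can be propagated to the fiber plane with the angular-spectrum method, which for scalar fields is equivalent to the first Rayleigh-Sommerfeld solution:

import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    # field: sampled complex scalar field on a square grid with pixel size dx [m]
    # returns the field propagated by a distance z in a homogeneous medium
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)                       # spatial frequencies [1/m]
    FX, FY = np.meshgrid(f, f)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2
    prop = np.zeros_like(field, dtype=complex)
    mask = kz_sq > 0                                  # keep propagating waves only
    prop[mask] = np.exp(1j * np.sqrt(kz_sq[mask]) * z)
    return np.fft.ifft2(np.fft.fft2(field) * prop)

For instance, propagating a near field sampled at 50 nm to a fiber 0.1 mm above the chip at 1550 nm would read angular_spectrum_propagate(E_near, dx=50e-9, wavelength=1.55e-6, z=1e-4), with E_near and the sampling purely illustrative.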
|
Interferometric scattering microscopy is a powerful technique that enables
various applications, such as mass photometry and particle tracking. Here we
present a numerical toolbox to simulate images obtained in interferometric
scattering, coherent bright-field, and dark-field microscopy. The scattered
fields are calculated using a boundary element method, facilitating the
simulation of arbitrary sample geometries and substrate layer structures. A
fully vectorial model is used for simulating the imaging setup. We demonstrate
excellent agreement between our simulations and experiments for different
shapes of scatterers and excitation angles. Notably, for angles near the
Brewster angle, we observe a contrast enhancement which may be beneficial for
nanosensing applications. The software is available as a Matlab toolbox.
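For context, the basic interferometric contrast that such simulations compute can be written in a simplified scalar form (the toolbox itself uses a fully vectorial model) as
\begin{equation*}
I = |E_{\mathrm{ref}} + E_{\mathrm{sca}}|^{2}
  = |E_{\mathrm{ref}}|^{2} + 2\,|E_{\mathrm{ref}}||E_{\mathrm{sca}}|\cos\phi + |E_{\mathrm{sca}}|^{2},
\qquad
C = \frac{I - |E_{\mathrm{ref}}|^{2}}{|E_{\mathrm{ref}}|^{2}} \approx 2\,\frac{|E_{\mathrm{sca}}|}{|E_{\mathrm{ref}}|}\cos\phi,
\end{equation*}
so that for weak scatterers the contrast scales linearly with the scattered field amplitude, with its sign set by the interference phase $\phi$.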
|
DH Tau is a young ($\sim$1 Myr) classical T Tauri star. It is one of the few
young PMS stars known to be associated with a planetary mass companion, DH Tau
b, orbiting at large separation and detected by direct imaging. DH Tau b is
thought to be accreting based on copious H${\alpha}$ emission and exhibits
variable Paschen $\beta$ emission. NOEMA observations at 230 GHz allow us to place
constraints on the disk dust mass for both DH Tau b and the primary in a regime
where the disks will appear optically thin. We estimate a disk dust mass for
the primary, DH Tau A of $17.2\pm1.7\,M_{\oplus}$, which gives a disk-to-star
mass ratio of 0.014 (assuming the usual gas-to-dust mass ratio of 100 in the
disk). We find a conservative disk dust mass upper limit of 0.42$M_{\oplus}$
for DH Tau b, assuming that the disk temperature is dominated by irradiation
from DH Tau b itself. Given the environment of the circumplanetary disk,
variable illumination from the primary or the equilibrium temperature of the
surrounding cloud would lead to even lower disk mass estimates. An MCFOST
radiative transfer model including heating of the circumplanetary disk by DH
Tau b and DH Tau A suggests that a mass averaged disk temperature of 22 K is
more realistic, resulting in a dust disk mass upper limit of 0.09$M_{\oplus}$
for DH Tau b. We place DH Tau b in context with similar objects and discuss the
consequences for planet formation models.
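The dust masses quoted here follow from the standard optically thin relation (stated schematically; the specific opacity and temperature assumptions are those described above),
\begin{equation*}
M_{\mathrm{dust}} = \frac{F_{\nu}\, d^{2}}{\kappa_{\nu}\, B_{\nu}(T_{\mathrm{dust}})},
\end{equation*}
where $F_{\nu}$ is the observed (sub)millimeter flux density, $d$ the distance, $\kappa_{\nu}$ the dust opacity, and $B_{\nu}(T_{\mathrm{dust}})$ the Planck function at the adopted dust temperature. Since $B_{\nu}$ increases with temperature, a warmer assumed disk (e.g. the 22 K obtained when heating by both DH Tau A and DH Tau b is included) yields a lower dust mass upper limit for the same flux limit.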
|
Human attention is a scarce resource in modern computing. A multitude of
microtasks vie for user attention to crowdsource information, perform momentary
assessments, personalize services, and execute actions with a single touch. A
lot gets done when these tasks take up the invisible free moments of the day.
However, an interruption at an inappropriate time degrades productivity and
causes annoyance. Prior works have exploited contextual cues and behavioral
data to identify interruptibility for microtasks with much success. With Quick
Question, we explore the use of reinforcement learning (RL) to schedule microtasks
while minimizing user annoyance and compare its performance with supervised
learning. We model the problem as a Markov decision process and use the
Advantage Actor-Critic algorithm to identify interruptible moments based on context and
history of user interactions. In our 5-week, 30-participant study, we compare
the proposed RL algorithm against supervised learning methods. While the two
methods elicit a comparable mean number of responses, RL is more effective
at avoiding dismissal of notifications and improves user experience over time.
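To make the RL formulation concrete, a minimal one-step advantage actor-critic update of the kind used here might look as follows. This is a sketch under our own assumptions: the state features, the binary prompt/no-prompt action, and a reward such as +1 for an answered prompt and -1 for a dismissed one are illustrative, not the paper's exact design.

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    # Shared body with a policy head (prompt now vs. stay silent) and a value head.
    def __init__(self, n_features, n_actions=2):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.policy = nn.Linear(32, n_actions)
        self.value = nn.Linear(32, 1)

    def forward(self, x):
        h = self.body(x)
        return self.policy(h), self.value(h)

def a2c_step(model, optimizer, state, action, reward, next_state, gamma=0.99):
    # One-step advantage actor-critic update from a single user interaction.
    logits, value = model(state)
    with torch.no_grad():
        _, next_value = model(next_state)
        target = reward + gamma * next_value.squeeze()
    advantage = target - value.squeeze()
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    loss = -log_prob * advantage.detach() + advantage.pow(2)   # policy loss + value loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The critic supplies a baseline so that the policy is only reinforced when prompting at this moment does better than expected, which is how such an agent learns to avoid dismissal-prone moments over time.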
|
Open Information Extraction (OIE) is a field of natural language processing
that aims to present textual information in a format that allows it to be
organized, analyzed and reflected upon. Numerous OIE systems have been
developed, claiming ever-increasing performance and underscoring the need for
objective benchmarks. BenchIE is the latest such reference we know of. Although
it is very well thought out, we noticed a number of issues that we believe are limiting. Therefore,
we propose $\textit{BenchIE}^{FL}$, a new OIE benchmark which fully enforces
the principles of BenchIE while containing fewer errors, omissions and
shortcomings when candidate facts are matched against reference ones.
$\textit{BenchIE}^{FL}$ allows insightful conclusions to be drawn on the actual
performance of OIE extractors.
|
This article constructs a class of random probability measures based on
exponential and polynomial tilting applied to the laws of completely
random measures. The class is proved to be conjugate in that it covers both
prior and posterior random probability measures in the Bayesian sense.
Moreover, the class includes some common and widely used random probability
measures, the normalized completely random measures (James (Poisson process
partition calculus with applications to exchangeable models and Bayesian
nonparametrics (2002) Preprint), Regazzini, Lijoi and Pr\"{u}nster (Ann.
Statist. 31 (2003) 560-585), Lijoi, Mena and Pr\"{u}nster (J. Amer. Statist.
Assoc. 100 (2005) 1278-1291)) and the Poisson-Dirichlet process (Pitman and Yor
(Ann. Probab. 25 (1997) 855-900), Ishwaran and James (J. Amer. Statist. Assoc.
96 (2001) 161-173), Pitman (In Science and Statistics: A Festschrift for Terry
Speed (2003) 1-34 IMS)), in a single construction. We describe an augmented
version of the Blackwell-MacQueen P\'{o}lya urn sampling scheme (Blackwell and
MacQueen (Ann. Statist. 1 (1973) 353-355)) that simplifies implementation and
provide a simulation study for approximating the probabilities of partition
sizes.
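Schematically (the precise exponential and polynomial tilting operations, and their normalizations, are defined in the article), exponential tilting reweights the law $P$ of a completely random measure $\mu$ by a nonnegative function $g$:
\begin{equation*}
P_{g}(d\mu) \;=\; \frac{e^{-\mu(g)}\,P(d\mu)}{\int e^{-\nu(g)}\,P(d\nu)}, \qquad \mu(g)=\int g\,d\mu,
\end{equation*}
and a random probability measure is then obtained by normalizing the tilted completely random measure.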
|
ReaxFF is a computationally efficient model for reactive molecular dynamics
simulations, which has been applied to a wide variety of chemical systems. When
ReaxFF parameters are not yet available for a chemistry of interest, they must
be (re)optimized, for which one defines a set of training data that the new
ReaxFF parameters should reproduce. ReaxFF training sets typically contain
diverse properties with different units, some of which are more abundant (by
orders of magnitude) than others. To find the best parameters, one
conventionally minimizes a weighted sum of squared errors over all data in the
training set. One of the challenges in such numerical optimizations is to
assign weights so that the optimized parameters represent a good compromise
between all the requirements defined in the training set. This work introduces
a new loss function, called Balanced Loss, and a workflow that replaces weight
assignment with a more manageable procedure. The training data is divided into
categories with corresponding "tolerances", i.e. acceptable root-mean-square
errors for the categories, which define the expectations for the optimized
ReaxFF parameters. Through the Log-Sum-Exp form of Balanced Loss, the parameter
optimization is also a validation of one's expectations, providing meaningful
feedback that can be used to reconfigure the tolerances if needed. The new
methodology is demonstrated with a non-trivial parameterization of ReaxFF for
water adsorption on alumina. This results in a new force field that reproduces
both rare and frequent properties of a validation set not used for training. We
also demonstrate the robustness of the new force field with a molecular
dynamics simulation of water desorption from a $\gamma$-Al$_2$O$_3$ slab model.
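One plausible reading of such a Log-Sum-Exp balanced loss (a sketch under our assumptions; the exact functional form and any scaling constants are defined in the paper) is a smooth maximum over the per-category ratios of achieved RMSE to tolerance:

import numpy as np

def balanced_loss(errors_by_category, tolerances, beta=1.0):
    # errors_by_category: dict mapping a category name to an array of residuals
    # tolerances: dict mapping the same names to the acceptable RMSE for that category
    # The Log-Sum-Exp acts as a smooth maximum of RMSE/tolerance, so no category can
    # stay far above its tolerance just because the other categories are tiny.
    ratios = []
    for name, err in errors_by_category.items():
        rmse = np.sqrt(np.mean(np.square(np.asarray(err, dtype=float))))
        ratios.append(rmse / tolerances[name])
    ratios = np.asarray(ratios)
    return np.log(np.sum(np.exp(beta * ratios))) / beta

In such a scheme, a category whose ratio remains well above 1 after optimization signals that the corresponding tolerance was too ambitious and can be reconfigured, which is the feedback loop described above.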
|
We consider hardcore bosons in two coupled chains of one-dimensional lattices
at half filling, with repulsive intra-chain interaction and inter-chain
attraction. This can be mapped onto two coupled spin-1/2 XXZ chains with
inter-chain ferromagnetic coupling. We investigate the various phases of hardcore
bosons (and of the related spin model) at zero temperature by the density matrix
renormalization group method. Apart from the usual superfluid and density wave
phases, pairing of inter-chain bosons leads to the formation of novel phases
such as a pair superfluid and a density wave of strongly bound pairs. We discuss the
possible experimental realization of such correlated phases in the context of
cold dipolar gas.
|
The principle of absence of arbitrage opportunities allows obtaining the
distribution of stock price fluctuations by maximizing its information entropy.
This leads to a physical description of the underlying dynamics as a random
walk characterized by a stochastic diffusion coefficient and constrained to a
given value of the expected volatility, thereby taking into account the
information provided by the existence of an option market. This model is
validated by a comprehensive comparison with observed distributions of both
price return and diffusion coefficient. Expected volatility is the only
parameter in the model and can be obtained by analysing option prices. We give
an analytic formulation of the probability density function for price returns
which can be used to extract expected volatility from stock option data. This
distribution is of high practical interest since it should be preferred to a
Gaussian when dealing with the problem of pricing derivative financial
contracts.
|
Superconducting MgB2 strands with nanometer-scale SiC additions have been
investigated systematically using transport and magnetic measurements. A
comparative study of MgB2 strands with different nano-SiC addition levels has
shown C-doping-enhanced critical current density Jc through enhancements in the
upper critical field, Hc2, and decreased anisotropy. The critical current
density and flux pinning force density obtained from magnetic measurements were
found to greatly differ from the values obtained through transport
measurements, particularly with regard to the magnetic field dependence. The
differences in magnetic and transport results are largely attributed to
connectivity related effects. On the other hand, based on the scaling behavior
of flux pinning force, there may be other effective pinning centers in MgB2
strands in addition to grain boundary pinning.
|
Femtosecond high-order harmonic transient absorption spectroscopy is used to
resolve the complete |j,m> quantum state distribution of Xe+ produced by
optical strong-field ionization of Xe atoms at 800 nm. Probing at the Xe N_4/5
edge yields a population distribution rho_j,|m| of rho_3/2,1/2 : rho_1/2,1/2 :
rho_3/2,3/2 = 75 +- 6 : 12 +- 3 : 13 +- 6 %. The result is compared to a tunnel
ionization calculation with the inclusion of spin-orbit coupling, revealing
nonadiabatic ionization behavior. The sub-50-fs time resolution paves the way
for table-top extreme ultraviolet absorption probing of ultrafast dynamics.
|
We report here the signature of bi-modal fission, one asymmetric and the
other symmetric, in Uranium nuclei in the mass range A = 230 to 236. The
finding is unexpected and striking and is based on a model independent analysis
of experimental mass distributions (cumulative yields) at various excitations
from about 23 to 66 MeV in the alpha induced fission of 232Th. It has been
found that the observed asymmetry in the mass distributions and the unusually
narrow peak in the symmetric region can both be explained in a consistent
manner if one assumes: a) multi-chance fission, b) bi-modal fission at lower
excitations (9 < E* < 25 MeV) for all the Uranium nuclei in the range A = 230
to 236, and c) that the shell effects get washed out completely beyond about 25
MeV of excitation resulting in symmetric fission. The analysis has allowed a
quantitative estimation of the percentages of the asymmetric and the symmetric
components in the bi-modal fission. The bi-modal fission in Uranium nuclei is
found to be predominantly asymmetric (~ 85%), contributing in a
major way to the observed asymmetric peaks, while the ~15% bi-modal symmetric
fission is primarily responsible for the observed narrow symmetric peak in the
mass distributions. The unusually narrow symmetric peak in the mass
distributions indicates that the symmetric bi-modal fission in Uranium nuclei
must have proceeded from a configuration at the bi-modal symmetric saddle that
is highly deformed with a well-developed neck.
|
Bit-level sparsity in neural network models harbors immense untapped
potential. Eliminating redundant calculations of randomly distributed zero-bits
significantly boosts computational efficiency. Yet, the traditional digital
SRAM-PIM architecture, limited by its rigid crossbar structure, struggles to
effectively exploit this unstructured sparsity. To address this challenge, we
propose Dyadic Block PIM (DB-PIM), a groundbreaking algorithm-architecture
co-design framework. First, we propose an algorithm coupled with a distinctive
sparsity pattern, termed a dyadic block (DB), that preserves the random
distribution of non-zero bits to maintain accuracy while restricting the number
of these bits in each weight to improve regularity. Architecturally, we develop
a custom PIM macro that includes dyadic block multiplication units (DBMUs) and
Canonical Signed Digit (CSD)-based adder trees, specifically tailored for
Multiply-Accumulate (MAC) operations. An input pre-processing unit (IPU)
further refines performance and efficiency by capitalizing on block-wise input
sparsity. Results show that our proposed co-design framework achieves a
remarkable speedup of up to 7.69x and energy savings of 83.43%.
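For readers unfamiliar with CSD recoding, which the adder trees above exploit, the following sketch (our own illustration, unrelated to the actual DB-PIM macro design) converts an integer weight into its canonical signed-digit form, in which no two adjacent digits are nonzero and the number of nonzero terms, i.e. of required add/subtract operations, is minimized:

def csd_digits(n):
    # Canonical signed-digit (non-adjacent form) representation of an integer.
    # Returns digits in {-1, 0, +1}, least significant digit first.
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n & 3)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

# Example: 7 = 111 in binary (three nonzero bits) becomes [-1, 0, 0, 1],
# i.e. 8 - 1, requiring only two add/subtract terms.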
|
Since the discovery of the Verwey transition in magnetite, transition metal
compounds with pyrochlore structures have been intensively studied as a
platform for realizing remarkable electronic phase transitions. We report the
discovery of a unique phase transition that preserves the cubic symmetry of the
beta-pyrochlore oxide CsW$_2$O$_6$, where the W 5d electrons are confined
in regular-triangle W3 trimers. This trimer formation is an unprecedented
self-organization of d electrons, which can be resolved into a charge order
satisfying the Anderson condition in a nontrivial way, orbital order caused by
the distortion of WO6 octahedra, and the formation of a spin-singlet pair in a
regular-triangle trimer. Electronic instability due to the unusual
three-dimensional nesting of Fermi surfaces and the localized nature of the 5d
electrons characteristic of the pyrochlore oxides were found to play important
roles in this unique charge-orbital-spin coupled phenomenon.
|
The first purpose of this note is to comment on a recent article of Bursztyn,
Lima and Meinrenken, in which it is proved that if M is a smooth submanifold of
a manifold V, then there is a bijection between germs of tubular neighborhoods
of M and germs of "Euler-like" vector fields on V. We shall explain how to
approach this bijection through the deformation to the normal cone that is
associated to the embedding of M into V. The second purpose is to study
generalizations to smooth manifolds equipped with Lie filtrations. Following in
the footsteps of several others, we shall define a deformation to the normal
cone that is appropriate to this context, and relate it to Euler-like vector
fields and tubular neighborhood embeddings.
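For the reader's convenience we recall the notion involved, stated schematically and up to technical conditions such as completeness (see the article of Bursztyn, Lima and Meinrenken for the precise definition): a vector field $X$ on $V$ is Euler-like along the submanifold $M$ if
\begin{equation*}
X\big|_{M} = 0 \qquad \text{and} \qquad \nu(X) = \mathcal{E},
\end{equation*}
where $\nu(X)$ denotes the linearization of $X$ along $M$, viewed as a vector field on the normal bundle $\nu(M,V)$, and $\mathcal{E}$ is the Euler vector field generating the fibrewise dilations of that bundle.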
|
Following Geroch, Traschen, Mars and Senovilla, we consider Lorentzian
manifolds with distributional curvature tensor. Such manifolds represent
spacetimes of general relativity that possibly contain gravitational waves,
shock waves, and other singular patterns. We aim here at providing a
comprehensive and geometric (i.e., coordinate-free) framework. First, we
determine the minimal assumptions required on the metric tensor in order to
give a rigorous meaning to the spacetime curvature within the framework of
distribution theory. This leads us to a direct derivation of the jump relations
associated with singular parts of connection and curvature operators. Second,
we investigate the induced geometry on a hypersurface with general signature,
and we determine the minimal assumptions required to define, in the sense of
distributions, the curvature tensors and the second fundamental form of the
hypersurface and to establish the Gauss-Codazzi equations.
|
The optimal operation problem of an electric vehicle aggregator (EVA) is
considered. An EVA can participate in energy and regulation markets with its
current and upcoming EVs, thus reducing its total cost of purchasing energy to
fulfill the EVs' charging requirements. An MPC-based optimization model is
developed to account for future arrivals of EVs as well as energy and regulation
prices. The conditional value-at-risk (CVaR) is used to model the risk aversion of the EVA.
Simulations on a 1000-EV test system validate the effectiveness of our work
in achieving a lucrative revenue while satisfying the charging requests from EV
owners.
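For reference, the CVaR risk measure entering such an objective can be written in the standard Rockafellar-Uryasev form (shown schematically; the concrete scenario-based formulation in the optimization model may differ in notation), for a random cost $\ell$ and confidence level $\alpha$:
\begin{equation*}
\mathrm{CVaR}_{\alpha}(\ell) \;=\; \min_{\eta\in\mathbb{R}} \left\{ \eta + \frac{1}{1-\alpha}\,\mathbb{E}\big[(\ell-\eta)_{+}\big] \right\}, \qquad (x)_{+}=\max(x,0),
\end{equation*}
which reduces to linear constraints with auxiliary variables when the expectation is taken over a finite set of price and arrival scenarios, so the risk-averse MPC problem remains tractable.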
|
In this paper, we propose and study the polar Orlicz-Minkowski problems:
under what conditions on a nonzero finite measure $\mu$ and a continuous
function $\varphi:(0,\infty)\rightarrow(0,\infty)$, there exists a convex body
$K\in\mathcal{K}_0$ such that $K$ is an optimizer of the following optimization
problems: \begin{equation*} \inf/\sup \bigg\{\int_{S^{n-1}}\varphi\big( h_L
\big) \,d \mu: L \in \mathcal{K}_{0} \ \text{and}\ |L^\circ|=\omega_{n}\bigg\}.
\end{equation*} The solvability of the polar Orlicz-Minkowski problems is
discussed under different conditions. In particular, under certain conditions
on $\varphi,$ the existence of a solution is proved for a nonzero finite
measure $\mu$ on $S^{n-1}$ which is not concentrated on any hemisphere of
$S^{n-1}.$ Another part of this paper deals with the $p$-capacitary
Orlicz-Petty bodies. In particular, the existence of the $p$-capacitary
Orlicz-Petty bodies is established and the continuity of the $p$-capacitary
Orlicz-Petty bodies is proved.
|
We investigate the sensitivity of the heavy-ion mode of the LHC to Higgs
boson and Radion production via photon-photon fusion through an analysis of
the processes photon photon to photon photon, photon photon to b anti-b, and photon
photon to g g in peripheral heavy-ion collisions. We suggest cuts to improve
the ratio of the Higgs and Radion signal to the standard model background and determine
the capability of the LHC to detect the production of these particles.
|
This is a Comment on the paper by Galitski and Larkin in Phys. Rev. Lett. 87
(2001) 087001 (cond-mat/0104247). It is pointed out that their argument that
the quantum glass transition field should be higher than the mean-field
H_{c2}(0) is incompatible with available data showing the so-called field-tuned
superconductor-insulator transition phenomena.
|
We study the energetics of superconducting vortices in the SO(5) model for
high-$T_c$ materials proposed by Zhang. We show that for a wide range of
parameters normally corresponding to type II superconductivity, the free energy
per unit flux $\FF(m)$ of a vortex with $m$ flux quanta is a decreasing
function of $m$, provided the doping is close to its critical value. This
implies that the Abrikosov lattice is unstable, a behaviour typical of type I
superconductors. For dopings far from the critical value, $\mathcal{F}(m)$ can become
very flat, indicating a less rigid vortex lattice, which would melt at a lower
temperature than expected for a BCS superconductor.
|
In this paper, we establish the existence and uniqueness of invariant
measures for a class of semilinear stochastic partial differential equations
driven by multiplicative noise on a bounded domain. The main results can be
applied to SPDEs of various types such as the stochastic Burgers equation and
the reaction-diffusion equations perturbed by space-time white noise.
|
A new curvature obstruction to the existence of a timelike (resp. causal)
Killing or homothetic vector field $X$ on an even-dimensional (odd-dimensional)
Lorentzian manifold, in terms of its timelike (resp. null) sectional curvature
is given. As a consequence for the compact case, the well-known
Gauss-Bonnet-Chern obstruction to the existence of semi-Riemannian metrics is
extended from non-zero constant sectional curvature to non-zero timelike
sectional curvature on $X$.
|
In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD)
problems, called JDSVD, a large symmetric and generally indefinite correction
equation is approximately solved iteratively at each outer iteration, which
constitutes the inner iterations and dominates the overall efficiency of JDSVD.
In this paper, a convergence analysis is made on the minimal residual (MINRES)
method for the correction equation. Motivated by the results obtained, a
preconditioned correction equation is derived that extracts useful information
from current searching subspaces to construct effective preconditioners for the
correction equation and is proved to retain the same convergence of outer
iterations of JDSVD. The resulting method is called inner preconditioned JDSVD
(IPJDSVD) method. Convergence results show that MINRES for the preconditioned
correction equation can converge much faster when there is a cluster of
singular values closest to a given target, so that IPJDSVD is more efficient
than JDSVD. A new thick-restart IPJDSVD algorithm with deflation and purgation
is proposed that simultaneously accelerates the outer and inner convergence of
the standard thick-restart JDSVD and computes several singular triplets of a
large matrix. Numerical experiments justify the theory and illustrate the
considerable superiority of IPJDSVD to JDSVD.
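To fix ideas, a generic Jacobi-Davidson-type correction equation of the form (I - uu^T)(A - theta I)(I - uu^T) t = -r with t orthogonal to u can be handed to MINRES through a projected operator, as in the following sketch (our own illustration using SciPy; the actual JDSVD/IPJDSVD correction equation and its preconditioner are those derived in the paper):

import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def solve_correction_equation(A, theta, u, r, maxiter=50):
    # Approximately solve (I - u u^T)(A - theta*I)(I - u u^T) t = -r, t orthogonal to u,
    # with MINRES, which handles the symmetric and possibly indefinite operator.
    n = A.shape[0]
    def matvec(x):
        x = x - u * (u @ x)          # project onto the orthogonal complement of span{u}
        y = A @ x - theta * x        # apply the shifted operator
        return y - u * (u @ y)       # project the result back
    op = LinearOperator((n, n), matvec=matvec, dtype=float)
    t, _ = minres(op, -r, maxiter=maxiter)
    return t - u * (u @ t)

A preconditioned variant, in the spirit of IPJDSVD, would replace the plain MINRES call with a preconditioned one built from information in the current searching subspace.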
|
In this letter we present the results of a simple model for intercellular
communication via calcium oscillations, motivated in part by a recent
experimental study. The model describes two cells (a "donor" and "sensor")
whose intracellular dynamics involve a calcium-induced calcium release
process. The cells are coupled by assuming that the input of the sensor
cell is proportional to the output of the donor cell. As one varies the
frequency of calcium oscillations of the donor cell, the sensor cell passes
through a sequence of N:M phase locked regimes and exhibits a "Devil's
staircase" behavior. Such a phase locked response has been seen experimentally
in pulsatile stimulation of single cells. We also study a stochastic version of
the coupled two cell model. We find that phase locking holds for realistic
choices for the cell volume.
|
Within the framework of theories where both scalars and fermions are present,
we develop a systematic prescription for the construction of CP-violating
quantities that are invariant under basis transformations of those matter
fields. In theories with Spontaneous Symmetry Breaking, the analysis involves
the vevs' transformation properties under a scalar basis change, with a
considerable simplification of the study of CP violation in the scalar sector.
These techniques are then applied in detail to the two Higgs-doublet model with
quarks. It is shown that there are new invariants involving scalar-fermion
interactions, besides those already derived in previous analyses for the
fermion-gauge and scalar-gauge sectors.
|
Low-mass galaxies are highly susceptible to environmental effects that can
efficiently quench star formation. We explore the role of ram pressure in
quenching low-mass galaxies ($M_{*}\sim10^{5-9}\,\rm{M}_{\odot}$) within 2 Mpc
of Milky Way (MW) hosts using the FIRE-2 simulations. Ram pressure is highly
variable across different environments, within individual MW haloes, and for
individual low-mass galaxies over time. The impulsiveness of ram pressure --
the maximum ram pressure scaled to the integrated ram pressure prior to
quenching -- correlates with whether a galaxy is quiescent or star-forming. The
time-scale between maximum ram pressure and quenching is anticorrelated with
impulsiveness, such that high impulsiveness corresponds to quenching
time-scales $<1$ Gyr. Galaxies in low-mass groups
($M_\mathrm{*,host}\sim10^{7-9}\,\rm{M}_{\odot}$) outside of MW haloes
experience typical ram pressure only slightly lower than ram pressure on MW
satellites, helping to explain effective quenching via group pre-processing.
Ram pressure on MW satellites rises sharply with decreasing distance to the
host, and, at a fixed physical distance, more recent pericentre passages are
typically associated with higher ram pressure because of greater gas density in
the inner host halo at late times. Furthermore, the ram pressure and gas
density in the inner regions of Local Group-like paired host haloes is higher
at small angles off the host galaxy disc compared to isolated hosts. The
quiescent fraction of satellites within these low-latitude regions is also
elevated in the simulations and observations, signaling possible anisotropic
quenching via ram pressure around MW-mass hosts.
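Throughout, ram pressure has its usual meaning (written here for orientation),
\begin{equation*}
P_{\mathrm{ram}} = \rho_{\mathrm{host}}\, v^{2},
\end{equation*}
with $\rho_{\mathrm{host}}$ the density of the host's gaseous halo and $v$ the satellite's velocity relative to that gas; stripping becomes effective roughly when $P_{\mathrm{ram}}$ exceeds the gravitational restoring pressure per unit area of the satellite's disc, $\sim 2\pi G \Sigma_{\ast}\Sigma_{\mathrm{gas}}$, in the spirit of the Gunn & Gott criterion.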
|
Multi-access Edge Computing (MEC) facilitates the deployment of critical
applications with stringent QoS requirements, latency in particular. This paper
considers the problem of jointly planning the availability of computational
resources at the edge, the slicing of mobile network and edge computation
resources, and the routing of heterogeneous traffic types to the various
slices. These aspects are intertwined and must be addressed together to provide
the desired QoS to all mobile users and traffic types while keeping costs under
control. We formulate our problem as a mixed-integer nonlinear program (MINLP)
and we define a heuristic, named Neighbor Exploration and Sequential Fixing
(NESF), to facilitate the solution of the problem. The approach allows network
operators to fine-tune the network operation cost and the total latency
experienced by users. We evaluate the performance of the proposed model and
heuristic against two natural greedy approaches. We show the impact of the
variation of all the considered parameters (viz., different types of traffic,
tolerable latency, network topology and bandwidth, computation and link
capacity) on the defined model. Numerical results demonstrate that NESF is very
effective, achieving near-optimal planning and resource allocation solutions in
a very short computing time even for large-scale network scenarios.
|
In 1991, Moore [20] raised a question about whether hydrodynamics is capable
of performing computations. Similarly, in 2016, Tao [25] asked whether a
mechanical system, including a fluid flow, can simulate a universal Turing
machine. In this expository article, we review the construction in [8] of a
"Fluid computer" in dimension 3 that combines techniques in symbolic dynamics
with the connection between steady Euler flows and contact geometry unveiled by
Etnyre and Ghrist. In addition, we argue that the metric that renders the
vector field Beltrami cannot be critical in the Chern-Hamilton sense [9]. We
also sketch the completely different construction for the Euclidean metric in
$\mathbb R^3$ as given in [7]. These results reveal the existence of
undecidable fluid particle paths. We conclude the article with a list of open
problems.
|
We provide a cosmological implementation of the evolutionary quantum gravity,
describing an isotropic Universe, in the presence of a negative cosmological
constant and a massive (preinflationary) scalar field. We demonstrate that the
considered Universe has a nonsingular quantum behavior, associated to a
primordial bounce, whose ground state has a high occupation number.
Furthermore, in such a vacuum state, the super-Hamiltonian eigenvalue is
negative, corresponding to a positive emerging dust energy density. The
regularization of the model is performed via a polymer quantum approach to the
Universe scale factor and the proper classical limit is then recovered, in
agreement with a preinflationary state of the Universe. Since the dust energy
density is redshifted by the Universe's de Sitter phase and the cosmological
constant does not enter the ground state eigenvalue, we get a late-time
cosmology, compatible with the present observations, endowed with a turning
point in the far future.
|
Cluster galaxies are affected by the surrounding environment, which
influences their gas and stellar content as well as their morphology. In
particular, the ram pressure exerted by the intracluster medium promotes the
formation of multi-phase tails of stripped gas, detectable both at optical
wavelengths and in the sub-mm and radio regimes, tracing the cold molecular and
atomic gas components, respectively. In this work we analyze a sample of
sixteen galaxies belonging to clusters at redshift $\sim 0.05$ showing evidence
of an asymmetric HI morphology (based on MeerKAT observations) with and without
a star forming tail. To this sample we add three galaxies with evidence of a
star forming tail and no HI detection. Here we present the galaxies $\rm H_{2}$
gas content from APEX observations of the CO(2-1) emission. We find that in
most galaxies with a star forming tail the $\rm H_{2}$ global content is
enhanced with respect to undisturbed field galaxies with similar stellar
masses, suggesting an evolutionary path driven by the ram-pressure stripping.
As galaxies enter into the clusters their HI is displaced but also partially
converted into $\rm H_{2}$, so that they are $\rm H_{2}$ enriched when they
pass close to the pericenter, i.e. when they develop the star forming tails
that are visible in UV/B broad bands and in H$\alpha$ emission. An inspection
of the phase-space diagram for our sample suggests an anticorrelation between
the HI and $\rm H_{2}$ gas phases as galaxies fall into the cluster potential.
This peculiar behaviour is a key signature of ram-pressure stripping in
action.
|
Conventional active magnetic bearing (AMB) systems use several separate
radial and thrust bearings to provide 5-degree-of-freedom (DOF) levitation
control. This paper presents a novel combination 5-DOF active magnetic bearing
(C5AMB) designed for a shaft-less, hub-less, high-strength steel energy storage
flywheel (SHFES), which achieves doubled energy density compared to prior
technologies. As a single device, the C5AMB provides radial, axial, and tilting
levitations simultaneously. In addition, it utilizes low-cost and more readily
available materials to replace silicon steels and laminations, which results in
reduced costs and more convenient assemblies. Apart from the unique structure
and the use of low magnetic grade material, other design challenges include
shared flux paths, large dimensions, and relatively small air gaps. The finite
element method (FEM) is too computationally intensive for early-stage analysis.
An equivalent magnetic circuit method (EMCM) is developed for modeling and
analysis. Nonlinear FEM is then used for detailed simulations. Both permanent
magnets (PM) and electromagnetic control currents provide the weight-balancing
lifting force. During the full-scale prototype testing, the C5AMB successfully
levitates a 5440 kg and 2 m diameter flywheel at an air gap of 1.14 mm. Its
current and position stiffnesses are verified experimentally.
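As a toy illustration of the equivalent-magnetic-circuit reasoning behind an EMCM (a single-loop sketch with assumed names; the actual C5AMB model involves shared, multi-path flux circuits and permanent-magnet sources), the air-gap flux and the resulting attractive force follow directly from the gap reluctance:

import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability [H/m]

def gap_flux_and_force(mmf, gap, area, extra_reluctance=0.0):
    # mmf: magnetomotive force N*I of the coil (or PM equivalent) [A-turns]
    # gap: air-gap length [m]; area: pole-face area [m^2]
    # extra_reluctance: lumped core/PM reluctance in series [1/H]
    R_gap = gap / (MU0 * area)              # dominant air-gap reluctance
    flux = mmf / (R_gap + extra_reluctance)
    B = flux / area                          # flux density in the gap [T]
    force = B**2 * area / (2 * MU0)          # Maxwell stress on one pole face [N]
    return B, force

Closed-form estimates of this kind are what make an equivalent-circuit model fast enough for early-stage sizing before the nonlinear FEM simulations are run.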
|
We derive a bound on the precision of state estimation for finite dimensional
quantum systems and prove its attainability in the generic case where the
spectrum is non-degenerate. Our results hold under an assumption called local
asymptotic covariance, which is weaker than unbiasedness or local unbiasedness.
The derivation is based on an analysis of the limiting distribution of the
estimator's deviation from the true value of the parameter, and takes advantage
of quantum local asymptotic normality, a useful asymptotic characterization of
identically prepared states in terms of Gaussian states. We first prove our
results for the mean square error of a special class of models, called
D-invariant, and then extend the results to arbitrary models, generic cost
functions, and global state estimation, where the unknown parameter is not
restricted to a local neighbourhood of the true value. The extension includes a
treatment of nuisance parameters, i.e. parameters that are not of interest to
the experimenter but nevertheless affect the precision of the estimation. As an
illustration of the general approach, we provide the optimal estimation
strategies for the joint measurement of two qubit observables, for the
estimation of qubit states in the presence of amplitude damping noise, and for
noisy multiphase estimation.
|
In an addendum to his seminal 1969 article J\"{o}reskog stated two sets of
conditions for rotational identification of the oblique factor solution based
on fixed zero elements in the factor loadings matrix. These
condition sets, formulated under factor correlation and factor covariance
metrics, respectively, were claimed to be equivalent and to lead to global
rotational uniqueness of the factor solution. It is shown here that the
conditions for the oblique factor correlation structure need to be amended for
global rotational uniqueness, and hence, that the condition sets are not
equivalent in terms of unicity of the solution.
|
An anisotropic (dichroic) optical cavity containing a self-focusing Kerr
medium is shown to display a bifurcation between static --Ising-- and moving
--Bloch-- domain walls, the so-called nonequilibrium Ising-Bloch transition
(NIB). Bloch walls can show regular or irregular temporal behaviour, in
particular, bursting and spiking. These phenomena are interpreted in terms of
the spatio-temporal dynamics of the extended patterns connected by the wall,
which display complex dynamical behaviour as well. Domain wall interaction,
including the formation of bound states is also addressed.
|
We study the two-body decay of a mother particle into a massless daughter. We
further assume that the mother particle is unpolarized and has a generic boost
distribution in the laboratory frame. In this case, we show analytically that
the laboratory frame energy distribution of the massless decay product has a
peak, whose location is identical to the (fixed) energy of that particle in the
rest frame of the corresponding mother particle. Given its simplicity and
"invariance" under variations of the boost distribution of the mother particle,
our finding should be useful for the determination of masses of mother
particles. In particular, we anticipate that such a procedure will then not
require a full reconstruction of this two-body decay chain (or for that matter,
information about the rest of the event). With this eventual goal in mind, we
make a proposal for extracting the peak position by fitting the data to a
well-motivated analytic function describing the shape of such energy
distribution. This fitting function is then tested on the theoretical
prediction for top quark pair production and its decay, and is found to be
quite successful. As a proof of principle of the usefulness of
our observation, we apply it for measuring the mass of the top quark at the
LHC, using simulated data and including experimental effects.
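As an illustration of the fitting step (a sketch only: the ansatz below merely has the qualitative properties described in the text, namely a single maximum at E = E* and invariance under E -> E*^2/E, and is not necessarily the exact function used in the paper), the peak position can be extracted with a standard least-squares fit:

import numpy as np
from scipy.optimize import curve_fit

def peak_ansatz(E, norm, E_star, w):
    # Illustrative shape: peaks at E = E_star and is symmetric under E -> E_star**2 / E.
    x = E / E_star
    return norm * np.exp(-w * (x + 1.0 / x))

# energies, counts = ...   # histogram of the massless daughter's lab-frame energies
# popt, pcov = curve_fit(peak_ansatz, energies, counts,
#                        p0=[counts.max(), 70.0, 1.0])   # initial guesses, illustrative only
# E_star = popt[1]          # fitted peak position = rest-frame energy of the daughter

The fitted E_star then feeds into the kinematic relation between the rest-frame daughter energy and the mother mass for the decay under study.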
|
We report 0.5"x0.9" resolution, interferometric observations of the 1.3 mm CO
J=2-1 line in the infrared luminous galactic merger NGC 6240. About half of the
CO flux is concentrated in a rotating but highly turbulent, thick disk
structure centered between the two radio and near-infrared nuclei. A number of
gas features connect this ~500 pc diameter central disk to larger scales.
Throughout this region the molecular gas has local velocity widths which exceed
300 km/s FWHM and even reach FWZP line widths of 1000 km/s in a number of
directions. The mass of the central gas concentration constitutes a significant
fraction of the dynamical mass, M_gas(R<470 pc) ~ 2-4x10^9 M_o ~ 0.3-0.7 M_dyn.
We conclude that NGC 6240 is in an earlier merging stage than the prototypical
ultraluminous galaxy, Arp 220. The interstellar gas in NGC 6240 is in the
process of settling between the two progenitor stellar nuclei, is dissipating
rapidly and will likely form a central thin disk. In the next merger stage, NGC
6240 may well experience a major starburst like that observed in Arp 220.
|
We study the recently proposed effective field theory for the phonon of an
arbitrary non-relativistic superfluid. After computing the one-loop phonon
self-energy, we obtain the low temperature T contributions to the phonon
dispersion law at low momentum, and see that their real part can be
parametrized as a thermal correction to the phonon velocity. Because the
phonons are the quanta of the sound waves, at low momentum their velocity
should agree with the speed of sound. We find that our results match at order
T^4ln(T) with those predicted by Andreev and Khalatnikov for the speed of
sound, derived from the superfluid hydrodynamical equations and the phonon
kinetic theory. We also obtain higher-order corrections of order T^4, which are
not reproduced by naively pushing the kinetic theory computation. Finally, as an
application, we consider the cold Fermi gas in the unitarity limit, and find a
universal expression for the low T relative correction to the speed of sound
for these systems.
|
We present a systematic study of the constraints coming from target-space
duality and the associated duality anomaly cancellations on orbifold-like 4-D
strings. A prominent role is played by the modular weights of the massless
fields. We present a general classification of all possible modular weights of
massless fields in Abelian orbifolds. We show that the cancellation of modular
anomalies strongly constrains the massless fermion content of the theory, in
close analogy with the standard ABJ anomalies. We emphasize the validity of
this approach not only for (2,2) orbifolds but for (0,2) models with and
without Wilson lines. As an application one can show that one cannot build a
${\bf Z}_3$ or ${\bf Z}_7$ orbifold whose massless charged sector with respect
to the (level one) gauge group $SU(3)\times SU(2) \times U(1)$ is that of the
minimal supersymmetric standard model, since any such model would necessarily
have duality anomalies. A general study of those constraints for Abelian
orbifolds is presented. Duality anomalies are also related to the computation
of string threshold corrections to gauge coupling constants. We present an
analysis of the possible relevance of those threshold corrections to the
computation of $\sin^2\theta_W$ and $\alpha_3$ for all Abelian orbifolds. Some
particular {\it minimal} scenarios, namely those based on all ${\bf Z}_N$
orbifolds except ${\bf Z}_6$
|