Using the Karpman-Solov'ev quasiparticle approach for soliton-soliton
interaction, I show that the train propagation of N well separated solitons of
the massive Thirring model is described by the complex Toda chain with N nodes.
For the optical gap system a generalised (non-integrable) complex Toda chain is
derived to describe the train propagation of well separated gap solitons. These
results support the recently proposed conjecture of universality of the complex
Toda chain.
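For reference, the complex Toda chain with N nodes takes the standard textbook
form (added here for context, with free-end boundary conditions
$q_0\to-\infty$, $q_{N+1}\to+\infty$):
\[
\frac{d^2 q_k}{dt^2} = e^{-(q_k-q_{k-1})} - e^{-(q_{k+1}-q_k)}, \qquad k=1,\dots,N,
\]
where, unlike in the real Toda chain, the positions $q_k$ are complex-valued,
with real and imaginary parts encoding the soliton positions and phases.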
|
We present the discovery in TMC-1 of allenyl acetylene, H2CCCHCCH, through
the observation of nineteen lines with a signal-to-noise ratio ~4-15. For this
species, we derived a rotational temperature of 7 +/- 1 K and a column density
of (1.2 +/- 0.2)e13 cm-2. The other well-known isomer of this molecule, methyl
diacetylene (CH3C4H), has also been observed and we derived a similar
rotational temperature, Trot = 7.0 +/- 0.3 K, and a column density for its two
states (A and E) of (6.5 +/- 0.3)e12 cm-2. Hence, allenyl acetylene and methyl
diacetylene have a similar abundance. Remarkably, their abundances are close to
that of vinyl acetylene (CH2CHCCH). We also searched for the other isomer of
C5H4, HCCCH2CCH (1,4-pentadiyne), but only a 3sigma upper limit of 2.5e12 cm-2
to the column density can be established. These results have been compared to
state-of-the-art chemical models for TMC-1, indicating the important role of
these hydrocarbons in its chemistry. The rotational parameters of allenyl
acetylene have been improved by fitting the existing laboratory data together
with the frequencies of the transitions observed in TMC-1.
|
In the absence of CMB precision measurements, a Taylor expansion has often
been invoked to parametrize the Hubble flow function during inflation. The
standard "horizon flow" procedure implicitly relies on this assumption.
However, the recent Planck results indicate a strong preference for plateau
inflation, which suggests the use of Pad\'e approximants instead. We propose a
novel method that provides analytic solutions of the flow equations for a given
parametrization of the Hubble function. This method is illustrated in the
Taylor and Pad\'e cases, for low-order expansions. We then present the results
of a full numerical treatment scanning higher-order expansions, and compare
these parametrizations in terms of convergence, prior dependence, predictivity
and compatibility with the data. Finally, we highlight the implications for
potential reconstruction methods.
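As a schematic contrast (our notation, not necessarily the paper's
conventions), the two parametrizations of the Hubble function read
\[
H_{\rm Taylor}(\phi) \propto \sum_{n=0}^{N} a_n \phi^n, \qquad
H_{[M/N]}(\phi) \propto \frac{\sum_{m=0}^{M} a_m \phi^m}{1+\sum_{n=1}^{N} b_n \phi^n};
\]
a Pad\'e approximant can remain bounded at large field values and thus mimic
plateau inflation, whereas any non-constant truncated Taylor series necessarily
diverges there.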
|
We analyze the vanishing and non-vanishing behavior of the graded Betti
numbers for Segre embeddings of products of projective spaces. We give lower
bounds for when each of the rows of the Betti table becomes non-zero, and prove
that our bounds are tight for Segre embeddings of products of P^1. This
generalizes results of Rubei concerning the Green-Lazarsfeld property N_p for
Segre embeddings. Our methods combine the Kempf-Weyman geometric technique for
computing syzygies, the Ein-Erman-Lazarsfeld approach to proving non-vanishing
of Betti numbers, and the theory of algebras with straightening laws.
|
A simplified model of particle transport at a quasiparallel one-dimensional
collisionless shock is suggested. In this model the MHD-turbulence behind the
shock is dominated by a circularly polarized, large amplitude Alfv\'en wave
originating upstream from the turbulence excited by particles leaking from the
downstream medium. It is argued that such a wave having significantly increased
its magnetic field during the transmission through the shock interface can
effectively trap thermal ions, regulating their leakage upstream. Together with
a background turbulence this wave also plays a fundamental role in
thermalization of the incoming ion flow. The spectrum of leaking particles and
the amplitude of the wave excited by these particles are self-consistently
calculated. The injection rate into the first order Fermi acceleration based on
this leakage mechanism is obtained and compared with computer simulations. The
related problem of shock energy distribution between thermal and nonthermal
components of the shocked plasma is discussed. The chemical composition of the
leaking particles is studied.
|
Studies of energy flow in quantum systems complement the information provided
by common conductance measurements. The quantum limit of heat flow in one
dimensional (1D) ballistic modes was predicted, and experimentally
demonstrated, to have a universal value for bosons, fermions, and fractionally
charged anyons. A fraction of this value is expected in non-abelian states.
Nevertheless, open questions about energy relaxation along the propagation
length in 1D modes remain. Here, we introduce a novel experimental setup that
measures the energy relaxation in chiral 1D modes of the quantum Hall effect
(QHE). Edge modes, emanating from a heated reservoir, are partitioned by a
quantum point contact (QPC) located along their path. The resulting noise allows a
determination of the 'effective temperature' at the location of the QPC. We
found energy relaxation in all the tested QHE states, whether integer or
fractional. However, the relaxation was found to be mild in particle-like
states, and prominent in hole-conjugate states.
|
Modeled along the lines of the truncated approach in Panigrahi (2016),
selection-adjusted inference in a Bayesian regime is based on a selective
posterior. Such a posterior is determined jointly by a generative model imposed
on the data and the
selection event that enforces a truncation on the assumed law. The effective
difference between the selective posterior and the usual Bayesian framework is
reflected in the use of a truncated likelihood. The normalizer of the truncated
law in the adjusted framework is the probability of the selection event; this
is typically intractable and leads to the computational bottleneck in sampling
from such a posterior. The current work lays out a primal-dual approach to
solving an approximating optimization problem to provide valid post-selective
Bayesian inference. The selection procedures are posed as data queries that
solve a randomized version of a convex learning program, which has the
advantage of preserving more leftover information for inference. We
propose a randomization scheme under which the optimization has separable
constraints that result in a partially separable objective in lower dimensions
for many commonly used selective queries to approximate the otherwise
intractable selective posterior. We show that the approximating optimization
under a Gaussian randomization gives a valid exponential rate of decay for the
selection probability on a large deviation scale. We offer a primal-dual method
to solve the optimization problem leading to an approximate posterior; this
allows us to exploit the usual merits of a Bayesian machinery in both low and
high dimensional regimes where the underlying signal is effectively sparse. We
show that the adjusted estimates empirically demonstrate better frequentist
properties in comparison to the unadjusted estimates based on the usual
posterior, when applied to a wide range of constrained, convex data queries.
|
We report the discovery ($20\sigma$) of kilohertz quasi-periodic oscillations
(kHz QPOs) at ~ 690 Hz from the transient neutron star low-mass X-ray binary
EXO 1745-248. We find that this is a lower kHz QPO, and systematically study
the time variation of its properties using smaller data segments with and
without the shift-and-add technique. The quality (Q) factor occasionally
significantly varies within short ranges of frequency and time. A high Q-factor
(264.5 +- 38.5) of the QPO is found for a 200 s time segment, which might be
the largest value reported in the literature. We argue that an effective way to
rule out kHz QPO models is to observationally find such high Q-factors, even
for a short duration, as many models cannot explain a high coherence. However,
as we demonstrate, the shift-and-add technique cannot recover a very high
Q-factor that appears only for a short period of time. This shows that the
coherences of kHz
QPOs can be higher than the already high values reported using this technique,
implying further constraints on models. We also discuss the energy dependence
of fractional rms amplitude and Q-factor of the kHz QPO.
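For context, the quality factor quoted above is the standard one (a definition
we supply):
\[
Q = \frac{\nu_0}{\Delta\nu},
\]
where $\nu_0$ is the centroid frequency of the QPO and $\Delta\nu$ the full
width at half maximum of the Lorentzian fitted to the power spectrum.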
|
First hitting times (FHTs) describe the time it takes a random "searcher" to
find a "target" and are used to study timescales in many applications. FHTs
have been well-studied for diffusive search, especially for small targets,
which is called the narrow capture or narrow escape problem. In this paper, we
study the first hitting time to small targets for a one-dimensional
superdiffusive search described by a L\'evy flight. By applying the method of
matched asymptotic expansions to a fractional differential equation we obtain
an explicit asymptotic expansion for the mean FHT (MFHT). For fractional order
$s\in(0,1)$ (describing a $(2s)$-stable L\'evy flight whose squared displacement
scales as $t^{1/s}$ in time $t$) and targets of radius $\varepsilon\ll1$, we
show that the MFHT is order one for $s\in(1/2,1)$ and diverges as
$\log(1/\varepsilon)$ for $s=1/2$ and $\varepsilon^{2s-1}$ for $s\in(0,1/2)$.
We then use our asymptotic results to identify the value of $s\in(0,1]$ which
minimizes the average MFHT and find that (a) this optimal value of $s$ vanishes
for sparse targets and (b) the value $s=1/2$ (corresponding to an inverse
square L\'evy search) is optimal only in very specific circumstances. We confirm
our results by comparison to both deterministic numerical solutions of the
associated fractional differential equation and stochastic simulations.
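In compact form, the asymptotic behaviour of the mean FHT stated above is
\[
\mathrm{MFHT} \sim
\begin{cases}
O(1), & s\in(1/2,1),\\
\log(1/\varepsilon), & s=1/2,\\
\varepsilon^{2s-1}, & s\in(0,1/2),
\end{cases}
\qquad \varepsilon\ll 1.
\]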
|
This document introduces the exciting and fundamentally new science and
astronomy that the European New Gravitational Wave Observatory (NGO) mission
(derived from the previous LISA proposal) will deliver. The mission (which we
will refer to by its informal name "eLISA") will survey for the first time the
low-frequency gravitational wave band (about 0.1 mHz to 1 Hz), with sufficient
sensitivity to detect interesting individual astrophysical sources out to z =
15. The eLISA mission will discover and study a variety of cosmic events and
systems with high sensitivity: coalescences of massive black hole binaries,
brought together by galaxy mergers; mergers of earlier, less-massive black
holes during the epoch of hierarchical galaxy and black-hole growth;
stellar-mass black holes and compact stars in orbits just skimming the horizons
of massive black holes in galactic nuclei of the present era; extremely compact
white dwarf binaries in our Galaxy, a rich source of information about binary
evolution and about future Type Ia supernovae; and possibly most interesting of
all, the uncertain and unpredicted sources, for example relics of inflation and
of the symmetry-breaking epoch directly after the Big Bang. eLISA's
measurements will allow detailed studies of these signals with high
signal-to-noise ratio, addressing most of the key scientific questions raised
by ESA's Cosmic Vision programme in the areas of astrophysics and cosmology.
They will also provide stringent tests of general relativity in the
strong-field dynamical regime, which cannot be probed in any other way. This
document not only describes the science but also gives an overview of the
mission design and orbits.
|
The aim of this paper is to construct the structural equations of
supermanifolds immersed in Euclidean, hyperbolic and spherical superspaces
parametrised with two bosonic and two fermionic variables. To perform this
analysis, for each type of immersion, we split the supermanifold into its
Grassmannian components and study separately each manifold generated. Even
though we consider four variables in the Euclidean case, we find that the
structural equations of each manifold are linked with the Gauss--Codazzi
equations of a surface immersed in a Euclidean or spherical space. In the
hyperbolic and spherical superspaces, we find that the body manifolds are
linked with the classical Gauss--Codazzi equations for a surface immersed in
hyperbolic and spherical spaces, respectively. For some soul manifolds, we show
that the immersion of the manifolds must be in a hyperbolic space and that the
structural equations split into two cases. In one case, the structural
equations reduce to the Liouville equation, which can be completely solved. In
the other case, we can express the geometric quantities solely in terms of the
metric coefficients, which provides a geometric characterization of the
structural equations in terms of functions linked with the Hopf differential,
the mean curvature and a new function which does not appear in the
characterization of a classical (not super) surface.
|
An explicit quantization is given of certain skew-symmetric solutions of the
classical Yang-Baxter equation, yielding a family of $R$-matrices which generalize to
higher dimensions the Jordanian $R$-matrices. Three different approaches to
their construction are given: as twists of degenerations of the Shibukawa-Ueno
Yang-Baxter operators on meromorphic functions; as boundary solutions of the
quantum Yang-Baxter equation; via a vertex-IRF transformation from solutions to
the dynamical Yang-Baxter equation.
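For reference, the quantum Yang-Baxter equation satisfied by such $R$-matrices
is the standard relation (supplied here for context)
\[
R_{12}\,R_{13}\,R_{23} = R_{23}\,R_{13}\,R_{12}
\]
on $V\otimes V\otimes V$, where the subscripts indicate the pair of tensor
factors on which $R$ acts.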
|
Stoichiometric, epitaxial LaCrO3 films have been grown on TiO2-terminated
SrTiO3(001) substrates by molecular beam epitaxy using O2 as the oxidant. Film
growth occurred in a layer-by-layer fashion, giving rise to structurally
excellent films and surfaces which preserve the step-terrace structure of the
substrate. The critical thickness is in excess of 500 {\AA}. Near-surface
Cr(III) is highly susceptible to further oxidation to Cr(V), leading to the
formation of a disordered phase upon exposure to atomic oxygen. Recovery of the
original epitaxial LaCrO3 phase is readily achieved by vacuum annealing.
|
We theoretically investigate the creation of squeezed states of a
Bose-Einstein condensate (BEC) trapped in a magnetic double-well potential.
Number- or phase-squeezed states are created by modulating the tunnel coupling
between the two wells periodically with twice the Josephson frequency, i.e.,
through parametric amplification. Simulations are performed with the
multiconfigurational time-dependent Hartree method for bosons (MCTDHB). We
employ optimal control
theory to bring the condensate to a complete halt at a final time, thus
creating a highly squeezed state (squeezing factor of 0.12, $\xi_S^2=-18$ dB)
suitable for atom interferometry.
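As an arithmetic cross-check of the quoted numbers (ours, not the paper's): a
squeezing factor $\xi_S = 0.12$ corresponds to
\[
\xi_S^2\,[\mathrm{dB}] = 10\log_{10}\xi_S^2 = 20\log_{10}(0.12) \approx -18.4~\mathrm{dB},
\]
consistent with the stated $-18$ dB.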
|
Interplanetary Coronal Mass Ejections (ICMEs) originate from the eruption of
complex magnetic structures occurring in our star's atmosphere. Determining the
general properties of ICMEs and the physical processes at the heart of their
interactions with the solar wind is a hard task, in particular using only
unidimensional in situ profiles. Thus, these phenomena are still not well
understood. In this study we simulate the propagation of a set of flux ropes in
order to understand some of the physical processes occurring during the
propagation of an ICME such as their growth or their rotation. We present
simulations of the propagation of a set of flux ropes in a simplified solar
wind. We consider different magnetic field strengths and sizes at the
initiation of the eruption, and characterize their influence on the properties
of the flux ropes during their propagation. We use the 3D MHD module of the
PLUTO code on an Adaptive Mesh Refinement grid. The evolution of the magnetic
field of the flux rope during the propagation matches the evolution law deduced
from in situ observations. We also simulate in situ profiles that spacecraft
would have measured at the Earth, and compare them with the results of
statistical studies. We find a good match between simulated in situ profiles
and typical profiles obtained in these studies. During their propagation, flux
ropes interact with the magnetic field of the wind but still show realistic
signatures of ICMEs when analyzed with synthetic satellite crossings. We also
show that flux ropes with different shapes and orientations can lead to similar
unidimensional crossings. This warrants some care when extracting the magnetic
topology of ICMEs from unidimensional crossings.
|
With the HIJING/BBbar v2.0 heavy ion event generator, we explore the
phenomenological consequences of several high parton density dynamical effects
predicted in central Pb+Pb collisions at the Large Hadron Collider (LHC)
energies. These include (1) jet quenching due to parton energy loss (dE/dx),
(2) strangeness and hyperon enhancement due to strong longitudinal color field
(SCF), and (3) enhancement of baryon-to-meson ratios due to baryon-anti-baryon
junctions (JJbar) loops and SCF effects. The saturation/minijet cutoff scale
p0(s) and the effective string tension kappa(s,A) are constrained by our previous
analysis of LHC p+p data and recent data on the charged multiplicity for Pb+Pb
collisions reported by the ALICE collaboration. We predict the hadron flavor
dependence (mesons and baryons) of the nuclear modification factor RAA(pT) and
emphasize the possibility that the baryon anomaly could persist at the LHC up
to pT=10 GeV, well beyond the range observed in central Au+Au collisions at
RHIC energies.
|
We prove a general structure theorem for finitely presented torsion modules
over a class of commutative rings that need not be Noetherian. As a first
application, we then use this result to study the Weil-\'etale cohomology
groups of $\mathbb{G}_m$ for curves over finite fields.
|
Observational data on the bursting activity of all five known Soft Gamma
Repeaters are presented. This information was obtained with Konus gamma-ray
burst experiments on board Venera 11-14, Wind, and Kosmos-2326 spacecraft in
the period from 1978 to 2000. These data on appearance rates, time histories,
and energy spectra of repeated soft bursts obtained with similar instruments
and collected together in a comparable form should be useful for further
studies of SGRs. (available at http://www.ioffe.rssi.ru/LEA/SGR/Catalog/).
|
In this paper we generalize and improve the multiscale organization of graphs
by introducing a new measure that quantifies the "closeness" between two nodes.
The calculation of the measure is linear in the number of edges in the graph
and involves just a small number of relaxation sweeps. A similar notion of
distance is then calculated and used at each coarser level. We demonstrate the
use of this measure in multiscale methods for several important combinatorial
optimization problems and discuss the multiscale graph organization.
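A plausible sketch of how such a relaxation-based closeness can be computed
(our illustration in the spirit of algebraic distance; the paper's exact
measure may differ): relax a few random test vectors with damped Jacobi sweeps,
at a cost linear in the number of edges, and call two nodes close when their
relaxed values agree.

    import numpy as np

    def relaxation_closeness(adj, n_sweeps=10, n_vectors=5, omega=0.5, seed=0):
        """Closeness of graph nodes via damped Jacobi relaxation sweeps.

        adj: dict node -> list of neighbours. Returns dist(u, v); small
        values mean u and v are 'close'. Cost is O(n_sweeps * n_vectors
        * |E|), i.e. linear in the number of edges.
        """
        rng = np.random.default_rng(seed)
        nodes = sorted(adj)
        index = {u: i for i, u in enumerate(nodes)}
        X = rng.uniform(-1, 1, size=(len(nodes), n_vectors))
        for _ in range(n_sweeps):
            X_new = np.empty_like(X)
            for u in nodes:
                nbrs = [index[v] for v in adj[u]]
                # damped Jacobi: move towards the neighbourhood average
                X_new[index[u]] = ((1 - omega) * X[index[u]]
                                   + omega * X[nbrs].mean(axis=0))
            X = X_new
        return lambda u, v: float(np.max(np.abs(X[index[u]] - X[index[v]])))

    # Two triangles joined by one edge: within-triangle pairs relax to
    # nearly equal values, so they typically come out closer than cross-pairs.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    dist = relaxation_closeness(adj)
    print(dist(0, 1), dist(0, 4))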
|
Webcam eye tracking for the collection of gaze data in the context of user
studies is convenient - it can be used in remote tests where participants do
not need special hardware. The approach has strong limitations, especially
regarding the motion-free nature of the test persons during data recording and
the quality of the gaze data obtained. Our study with 52 participants shows
that usable eye tracking data can be obtained with commercially available
webcams in a remote setting. However, a high drop-off rate must be considered,
which is why we recommend a high over-recruitment of 150%. We also show that
the acceptance of the approach by the study participants is high despite the
given limitations.
|
Data analysis has high value both for commercial and research purposes.
However, disclosing analysis results may pose severe privacy risks to
individuals. Privug is a method to quantify privacy risks of data analytics
programs by analyzing their source code. The method uses probability
distributions to model attacker knowledge and Bayesian inference to update said
knowledge based on observable outputs. Currently, Privug uses Markov Chain
Monte Carlo (MCMC) to perform inference, which is a flexible but approximate
solution. This paper presents an exact Bayesian inference engine based on
multivariate Gaussian distributions to accurately and efficiently quantify
privacy risks. The inference engine is implemented for a subset of Python
programs that can be modeled as multivariate Gaussian models. We evaluate the
method by analyzing privacy risks in programs to release public statistics. The
evaluation shows that our method accurately and efficiently analyzes privacy
risks, and outperforms existing methods. Furthermore, we demonstrate the use of
our engine to analyze the effect of differential privacy in public statistics.
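To illustrate the principle behind such an exact engine (a generic sketch of
multivariate-Gaussian conditioning in our own notation, not the Privug code
base): the attacker's posterior over a secret, after observing a released
output that is jointly Gaussian with it, has a closed form.

    import numpy as np

    def condition_gaussian(mu, Sigma, obs_idx, obs_val):
        """Condition a joint Gaussian N(mu, Sigma) on x[obs_idx] = obs_val.

        Returns mean and covariance of the remaining (hidden) variables,
        i.e. the attacker's exact posterior after seeing the output.
        """
        idx = np.arange(len(mu))
        h = np.setdiff1d(idx, obs_idx)               # hidden (secret) coords
        o = np.asarray(obs_idx)
        S_hh = Sigma[np.ix_(h, h)]
        S_ho = Sigma[np.ix_(h, o)]
        S_oo = Sigma[np.ix_(o, o)]
        K = S_ho @ np.linalg.inv(S_oo)               # gain matrix
        mu_post = mu[h] + K @ (obs_val - mu[o])
        Sigma_post = S_hh - K @ S_ho.T
        return mu_post, Sigma_post

    # Toy program: secret ~ N(40, 10^2); the released statistic is the mean
    # of the secret and 99 other i.i.d. records, modelled jointly as Gaussian.
    mu = np.array([40.0, 40.0])                      # [secret, released mean]
    Sigma = np.array([[100.0, 1.0],
                      [1.0,   1.0]])                 # cov(secret, mean) = 100/100
    post_mu, post_Sigma = condition_gaussian(mu, Sigma, [1], np.array([41.0]))
    print(post_mu, post_Sigma)                       # exact posterior over secret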
|
We present the initial results from the Infrared Array Camera (IRAC) imaging
survey of planetary nebulae (PN). The IRAC colors of PN are red, especially
when considering the 8.0 micron band. Emission in this band is likely due to
contributions from two strong molecular hydrogen lines and an [Ar III] line in
that bandpass. IRAC is sensitive to the emission in the halos as well as in the
ionized regions that are optically bright. In NGC 246, we have observed an
unexpected ring of emission in the 5.8 and 8.0 micron IRAC bands not seen
previously at other wavelengths. In NGC 650 and NGC 3132, the 8.0 micron
emission is at larger distances from the central star compared to the optical
and other IRAC bands, possibly related to the molecular hydrogen emission in
that band and the tendency for the molecular material to exist outside of the
ionized zones. In the flocculi of the outer halo of NGC 6543, however, this
trend is reversed, with the 8.0 micron emission bright on the inner edges of
the structures. This may be related to the emission mechanism, where the
molecular hydrogen is possibly excited in shocks in the NGC 6543 halo, whereas
the emission is likely fluorescently excited in the UV fields near the central
star.
|
We give sufficient conditions for F-injectivity to deform. We show that these
conditions are met in two common, geometrically interesting settings, namely
when the special fiber has isolated CM-locus or is F-split.
|
In this paper we study a class of matrix-valued linear-quadratic
mean-field-type games in the risk-neutral, risk-sensitive, and robust cases.
Non-cooperative, fully cooperative, and adversarial interactions between teams
are treated. We provide semi-explicit solutions for these problems by means of a
direct method. The state dynamics is described by a matrix-valued linear
jump-diffusion-regime switching system of conditional mean-field type where the
conditioning is with respect to common noise which is a regime switching
process. The optimal strategies are in a state-and-conditional mean-field
feedback form. Semi-explicit solutions of equilibrium costs and strategies are
also provided for the full cooperative, adversarial teams, risk-sensitive full
cooperative and risk-sensitive adversarial team cases. It is shown that full
cooperation increases the well-posedness domain under risk-sensitive
decision-makers by means of population risk-sharing. Finally, relationships
between risk-sensitivity and robustness are established in the mean-field-type
context.
|
The structural and magnetic phase transitions have been studied on NdFeAsO
single crystals by neutron and x-ray diffraction complemented by resistivity
and specific heat measurements. Two low-temperature phase transitions have been
observed in addition to the tetragonal-to-orthorhombic transition at T_S = 142
K and the onset of antiferromagnetic (AFM) Fe order below T_N = 137 K. The Fe
moments order AFM in the well-known stripe-like structure in the (ab) plane,
but change from AFM to ferromagnetic (FM) arrangement along the c direction
below T* = 15 K accompanied by the onset of Nd AFM order below T_Nd = 6 K with
this same AFM configuration. The iron magnetic order-order transition in
NdFeAsO accentuates the Nd-Fe interaction and the delicate balance of c-axis
exchange couplings that results in AFM in LaFeAsO and FM in CeFeAsO and
PrFeAsO.
|
We study a particular generalisation of the classical Kramers model
describing Brownian particles in an external potential. The generalised model
includes a stochastic force, modelled as an additive random noise that depends
on the position of the particle as well as on time. The stationary
solution of the Fokker-Planck equation is analysed in two limits: weak external
forcing, where the solution is equivalent to that of the classical model with
an effectively increased potential, and strong external forcing, where the
solution yields a non-zero probability flux for the motion in a periodic
potential with a broken reflection symmetry.
|
We investigate pseudo-gap phenomena realized in the BCS pairing model with a
long but finite interaction range. We calculate the single-particle self-energy
exactly to all orders in the temperature range where the superconducting
fluctuation propagator is Gaussian-like. It is found that vertex corrections to
the self-energy, which were discarded in previous studies, are crucially
important for the pseudo-gap of the single-particle density of states in higher
order calculations.
|
Most methods for medical image segmentation use U-Net or its variants, as they
have been successful in most applications. After a detailed analysis of
these "traditional" encoder-decoder based approaches, we observed that they
perform poorly in detecting smaller structures and are unable to segment
boundary regions precisely. This issue can be attributed to the increase in
receptive field size as we go deeper into the encoder. The extra focus on
learning high level features causes the U-Net based approaches to learn less
information about low-level features which are crucial for detecting small
structures. To overcome this issue, we propose using an overcomplete
convolutional architecture where we project our input image into a higher
dimension such that we constrain the receptive field from increasing in the
deep layers of the network. We design a new architecture for image
segmentation, KiU-Net, which has two branches: (1) an overcomplete convolutional
network Kite-Net which learns to capture fine details and accurate edges of the
input, and (2) U-Net which learns high level features. Furthermore, we also
propose KiU-Net 3D which is a 3D convolutional architecture for volumetric
segmentation. We perform a detailed study of KiU-Net by performing experiments
on five different datasets covering various image modalities like ultrasound
(US), magnetic resonance imaging (MRI), computed tomography (CT), microscopic
and fundus images. The proposed method achieves better performance than recent
methods, with the additional benefits of fewer parameters and faster
convergence. Additionally, we demonstrate that the
extensions of KiU-Net based on residual blocks and dense blocks result in
further performance improvements. The implementation of KiU-Net can be found
here: https://github.com/jeya-maria-jose/KiU-Net-pytorch
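A minimal sketch of the two-branch idea (our simplification for illustration;
the reference implementation is in the repository above): the Kite-Net branch
upsamples in its encoder so that the receptive field stays small and fine
details survive, while the U-Net-style branch downsamples as usual to capture
context.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoBranchSeg(nn.Module):
        """Toy two-branch segmentation net in the spirit of KiU-Net."""

        def __init__(self, in_ch=1, feat=16, n_classes=1):
            super().__init__()
            self.kite_enc = nn.Conv2d(in_ch, feat, 3, padding=1)  # overcomplete
            self.unet_enc = nn.Conv2d(in_ch, feat, 3, padding=1)  # undercomplete
            self.head = nn.Conv2d(2 * feat, n_classes, 1)

        def forward(self, x):
            size = x.shape[-2:]
            # Kite-Net: project to HIGHER resolution -> small receptive
            # field, preserving fine structures and accurate edges.
            k = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=False)
            k = F.relu(self.kite_enc(k))
            k = F.interpolate(k, size=size, mode="bilinear", align_corners=False)
            # U-Net style: project to LOWER resolution -> large receptive
            # field, capturing high-level context.
            u = F.relu(self.unet_enc(F.max_pool2d(x, 2)))
            u = F.interpolate(u, size=size, mode="bilinear", align_corners=False)
            return self.head(torch.cat([k, u], dim=1))

    logits = TwoBranchSeg()(torch.randn(1, 1, 64, 64))  # -> (1, 1, 64, 64)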
|
In this paper we study the general conditions that have to be met for a
gauged extension of a two-dimensional bosonic sigma-model to exist. In an
inversion of the usual approach of identifying a global symmetry and then
promoting it to a local one, we focus directly on the gauge symmetries of the
theory. This allows for action functionals which are gauge invariant for rather
general background fields in the sense that their invariance conditions are
milder than in the usual case. In particular, the vector fields that control the
gauging need not be Killing. The relaxation of isometry for the background
fields is controlled by two connections on a Lie algebroid L in which the gauge
fields take values, in a generalization of the common Lie-algebraic picture.
Here we show that these connections can always be determined when L is a Dirac
structure in the H-twisted Courant algebroid. This also leads us to a
derivation of the general form for the gauge symmetries of a wide class of
two-dimensional topological field theories called Dirac sigma-models, which
interpolate between the G/G Wess-Zumino-Witten model and the (Wess-Zumino-term
twisted) Poisson sigma model.
|
The question of whether a neutrino-antineutrino transition could be
responsible for the muon neutrino deficit found in underground experiments
(Super-Kamiokande, MACRO, Soudan 2) and in the accelerator long-baseline K2K
experiment is discussed in this paper. The intention of the work is not to
consider concrete models for the muon neutrino-antineutrino transition but to
draw attention to another possible way of understanding the nature of the
measured muon neutrino deficit in neutrino experiments.
|
Segmented primary mirrors are indispensable for sustaining the steady increase
in spatial resolution. Phasing optics systems must reduce segment misalignments
to
guarantee the high optical quality required for astronomical science programs.
Modern telescopes routinely use adaptive optics systems to compensate for the
atmosphere and use laser-guide-stars to create artificial stars as bright
references in the field of observation. Because multiple laser-guide-star
adaptive optics are being implemented in all major observatories, we propose to
use man-made stars not only for adaptive optics but also for phasing optics. We
propose a method called the doublet-wavelength coherence technique (DWCT), which
exploits the D lines of sodium in the mesosphere excited by laser guide-stars
and then uses the coherence properties of the returned signal. The DWCT capture
range exceeds
current abilities by a factor of 100. It represents a change in paradigm by
improving the phasing optics capture range from micrometric to millimetric. It
thereby potentially eliminates the need for a mechanical pre-phasing
step. Extremely large telescopes require hundreds of segments, several of which
need to be substituted on a daily basis to be recoated. The DWCT relaxes
mechanical integration requirements and speeds up the integration and
re-integration process.
|
In this note we give three identities for partitions with parts separated by
parity, which were recently introduced by Andrews.
|
Within the general setting of algebraic quantum field theory, a new approach
to the analysis of the physical state space of a theory is presented; it covers
theories with long range forces, such as quantum electrodynamics. Making use of
the notion of charge class, which generalizes the concept of superselection
sector, infrared problems are avoided. In fact, on this basis one can determine
and classify in a systematic manner the proper charge content of a theory, the
statistics of the corresponding states and their spectral properties. A key
ingredient in this approach is the fact that in real experiments the arrow of
time gives rise to a Lorentz invariant infrared cutoff of a purely geometric
nature.
|
This paper studies recovery conditions of weighted L1 minimization for signal
reconstruction from compressed sensing measurements. A sufficient condition for
exact recovery by using the general weighted L1 minimization is derived, which
builds a direct relationship between the weights and the recoverability.
Simulation results indicate that this sufficient condition provides a precise
prediction of the scaling law for the weighted L1 minimization.
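For concreteness, the optimization being analyzed has the following form (a
generic weighted L1 recovery sketch with sizes and weights of our choosing, not
the paper's experiments):

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 10                   # signal dim, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    support = rng.choice(n, k, replace=False)
    x_true[support] = rng.standard_normal(k)
    y = A @ x_true                          # compressed measurements

    # Weighted L1: smaller weights where prior information suggests support.
    w = np.ones(n)
    w[support[: k // 2]] = 0.1              # partial (possibly imperfect) prior

    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == y])
    prob.solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))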
|
A key assumption in quasar absorption line studies of the circumgalactic
medium (CGM) is that each absorption component maps to a spatially isolated
"cloud" structure that has single valued properties (e.g. density, temperature,
metallicity). We aim to assess and quantify the degree of accuracy underlying
this assumption. We used adaptive mesh refinement hydrodynamic cosmological
simulations of two $z=1$ dwarf galaxies and generated synthetic quasar
absorption-line spectra of their CGM. For the SiII $\lambda 1260$ transition,
and the CIV $\lambda\lambda1548, 1550$ and OVI $\lambda\lambda1031, 1037$
fine-structure doublets, we objectively determined which gas cells along a
line-of-sight (LOS) contribute to detected absorption. We implemented a fast,
efficient, and objective method to define individual absorption components in
each absorption profile. For each absorption component, we quantified the
spatial distribution of the absorbing gas. We studied a total of 1,302
absorption systems containing 7,755 absorption components. 48% of
SiII, 68% of CIV, and 72% of OVI absorption components arise from two or more
spatially isolated "cloud" structures along the LOS. Spatially isolated "cloud"
structures were most likely to have cloud-cloud LOS separations of
0.03$R_{vir}$, 0.11$R_{vir}$, and 0.13$R_{vir}$ for SiII, CIV, and OVI,
respectively. There can be very little overlap between multi-phase gas
structures giving rise to absorption components. If our results reflect the
underlying reality of how absorption lines record CGM gas, they put current
observational analysis methods in tension, as they suggest that
component-by-component absorption-line formation is more complex than is
assumed and applied in chemical-ionisation modelling.
|
The sensitivity of experiments searching for neutrinoless double beta-decay
of germanium was so far limited by the background induced by external
gamma-radiation. Segmented germanium detectors can be used to identify photons
and thus reduce this background component.
The GERmanium Detector Array, GERDA, will use highly segmented germanium
detectors in its second phase. The identification of photonic events is
investigated using a prototype detector. The results are compared with Monte
Carlo data.
|
This document reviews the general approach to correcting the process
e+e-->X->ffbar for radiative effects, where X represents an exchanged gauge
boson arising from some new physics. The validity of current methods is
discussed in the context of the differential cross section. To this end the
universality of the dominant QED radiative corrections to such a process is
discussed and an attempt is made to quantify it. The paper aims to justify, as
much as possible, the general approach taken by e+e- collider experiments to
the issue of how to treat the dominant radiative corrections in fitting models
of new physics using inclusive and exclusive cross section measurements. We
conclude that in all but the most pathological of new physics models the
dominant radiative corrections (QED) to the tree level processes of the
standard model can be expected to hold well for new physics. This argument
follows from the fact that the phase space of indirect new physics searches is
generally restrictive (high s' events) in such a way that the factorization of
radiative corrections is expected to hold well and generally universal infrared
corrections should be prevalent.
|
In this paper we show that from the estimate $\sup_{t \geq 0}\|C(t) -
\cos(at)I\| <1$ we can conclude that $C(t)$ equals $\cos(at) I$. Here
$\left(C(t)\right)_{t \geq 0}$ is a strongly continuous cosine family on a
Banach space.
|
We shall give various realizations of crystals. One of them is the monomial
realization introduced by Nakajima.
|
The mode transition algebra $\mathfrak{A}$, and $d$-th mode transition
subalgebras $\mathfrak{A}_d\subset \mathfrak{A}$ are associative algebras
attached to vertex operator algebras. Here, under natural assumptions, we
establish Morita equivalences involving Zhu's associative algebra $\mathsf{A}$
and $\mathfrak{A}_d$. As an application, we obtain explicit expressions for
higher-level Zhu algebras as products of matrices, generalizing the result for
the 1-dimensional Heisenberg vertex operator algebra from our previous work.
This theory applies, for instance, to certain VOAs with commutative and
connected Zhu algebra, and to rational vertex operator algebras. Examples are
given.
|
JWST has recently revealed a large population of accreting black holes (BHs)
in the early Universe. Even after accounting for possible systematic biases,
the high-z $M_*-M_{\rm bh}$ relation derived from these objects by Pacucci et
al. (2023; hereafter the P23 relation) lies above the local scaling relation by
$>3\sigma$.
To understand the implications of potentially overmassive high-z BH
populations, we study the BH growth at $z\sim4-7$ using the
$[18~\mathrm{Mpc}]^3$ BRAHMA suite of cosmological simulations with systematic
variations of heavy seed models that emulate direct collapse black hole (DCBH)
formation. In our least restrictive seed model, we place $\sim10^5~M_{\odot}$
seeds in halos with sufficiently dense and metal-poor gas. To model conditions
for direct collapse, we impose additional criteria based on a minimum Lyman
Werner flux (LW flux $=10~J_{21}$), maximum gas spin, and an environmental
richness criterion. The high-z BH growth in our simulations is merger
dominated, with a relatively small contribution from gas accretion. For the
most restrictive simulation that includes all the above seeding criteria for
DCBH formation, the high-z $M_*-M_{\rm bh}$ relation falls significantly below
the P23 relation (by a factor of $\sim10$ at $z\sim4$). Only by excluding the
spin and environment based criteria, and by assuming $\lesssim750~\mathrm{Myr}$
delay times between host galaxy mergers and subsequent BH mergers, are we able
to reproduce the P23 relation. Overall, our results suggest that if high-z BHs
are indeed systematically overmassive, assembling them would require more
efficient heavy seeding channels, higher initial seed masses, additional
contributions from lighter seeds to BH mergers, and/or more efficient modes
for BH accretion.
|
We consider the limit of sequences of normalized $(s,2)$-Gagliardo seminorms
with an oscillating coefficient as $s\to 1$. In a seminal paper by Bourgain,
Brezis and Mironescu (subsequently extended by Ponce) it is proven that if the
coefficient is constant then this sequence $\Gamma$-converges to a multiple of
the Dirichlet integral. Here we prove that, if we denote by $\varepsilon$ the
scale of the oscillations and we assume that $1-s\ll\varepsilon^2$, this
sequence converges to the homogenized functional formally obtained by
separating the effects of $s$ and $\varepsilon$; that is, by the homogenization
as $\varepsilon\to 0$ of the Dirichlet integral with oscillating coefficient
obtained by formally letting $s\to 1$ first.
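Schematically, in our notation (with $a$ a periodic oscillating coefficient),
the normalized seminorms in question are of the form
\[
F_s^{\varepsilon}(u) = (1-s)\iint \frac{a\big(\tfrac{x}{\varepsilon},\tfrac{y}{\varepsilon}\big)\,|u(x)-u(y)|^{2}}{|x-y|^{d+2s}}\,dx\,dy,
\]
the prefactor $(1-s)$ being the Bourgain-Brezis-Mironescu normalization under
which the constant-coefficient case converges to a multiple of the Dirichlet
integral $\int|\nabla u|^{2}\,dx$ as $s\to 1$.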
|
We summarize basic observational results on Sagittarius~A* obtained from the
radio, infrared and X-ray domain. Infrared observations have revealed that a
dusty S-cluster object (DSO/G2) passes by SgrA*, the central super-massive
black hole of the Milky Way. It is still expected that this event will give
rise to exceptionally intense activity in the entire electromagnetic spectrum.
Based on SINFONI observations from February to September 2014, the detection
of spatially compact and red-shifted hydrogen recombination line emission
allows us to obtain a new estimate of the orbital parameters of the DSO. We
have not
detected strong pre-pericenter blue-shifted or post-pericenter red-shifted
emission above the noise level at the position of SgrA* or upstream the orbit.
The periapse position was reached in May 2014. Our 2004-2012 infrared
polarization statistics show that SgrA* must be a very stable system - both in
terms of geometrical orientation of a jet or accretion disk and in terms of the
variability spectrum which must be linked to the accretion rate. Hence
polarization and variability measurements are the ideal tool to probe for any
change in the system as a function of the DSO/G2 fly-by. Due to the 2014 fly-by
of the DSO, increased accretion activity of SgrA* may still be upcoming. Future
observations of bright flares will improve the derivation of the spin and the
inclination of the SMBH from NIR/sub-mm observations.
|
We report computational results of blood flow through a model of the human
aortic arch and a vessel of actual diameter and length. On the top of the
aortic arch the branching arteries are included: the subclavian and jugular. A
realistic pulsatile flow is used in all simulations.
Calculations for bifurcation type vessels are also carried out and presented.
Different mathematical methods for numerical solution of the fluid dynamics
equations have been considered. The non-Newtonian behaviour of the human blood
is investigated together with turbulence effects. A detailed time-dependent
mathematical convergence test has been carried out. The results of computer
simulations of the blood flow in vessels of three different geometries are
presented: for pressure, strain rate and velocity component distributions we
found significant disagreements between our results obtained with realistic
non-Newtonian treatment of human blood and the widely used method in the
literature: a simple Newtonian approximation. A significant increase of the
strain rate and, as a result, a wall shear stress distribution, is found in the
region of the aortic arch. Turbulent effects are found to be important,
particularly in the case of bifurcation vessels.
|
We propose Hydra-MDP, a novel paradigm employing multiple teachers in a
teacher-student model. This approach uses knowledge distillation from both
human and rule-based teachers to train the student model, which features a
multi-head decoder to learn diverse trajectory candidates tailored to various
evaluation metrics. With the knowledge of rule-based teachers, Hydra-MDP learns
how the environment influences the planning in an end-to-end manner instead of
resorting to non-differentiable post-processing. This method achieves the
$1^{st}$ place in the Navsim challenge, demonstrating significant improvements
in generalization across diverse driving environments and conditions. Code will
be available at \url{https://github.com/NVlabs/Hydra-MDP}.
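A schematic of the multi-teacher, multi-head idea (a toy sketch of ours; names
and shapes are invented and this is not the Hydra-MDP code): a shared trunk
scores a fixed set of trajectory candidates with one head per evaluation
metric, and each head is distilled from the corresponding teacher's scores.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiHeadPlanner(nn.Module):
        """Toy multi-head decoder: one score head per teacher metric."""

        def __init__(self, feat_dim=128, n_candidates=64, n_metrics=3):
            super().__init__()
            self.trunk = nn.Linear(feat_dim, 256)
            self.heads = nn.ModuleList(
                [nn.Linear(256, n_candidates) for _ in range(n_metrics)])

        def forward(self, scene_feat):
            h = F.relu(self.trunk(scene_feat))
            return [head(h) for head in self.heads]  # per-metric scores

    model = MultiHeadPlanner()
    scene = torch.randn(8, 128)                      # batch of scene features
    teacher_scores = [torch.rand(8, 64) for _ in range(3)]  # rule-based teachers

    # Distillation: each head regresses its teacher's metric over candidates.
    loss = sum(F.mse_loss(torch.sigmoid(s), t)
               for s, t in zip(model(scene), teacher_scores))
    loss.backward()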
|
The control of gene expression involves complex mechanisms that show large
variation in design. For example, genes can be turned on either by the binding
of an activator (positive control) or the unbinding of a repressor (negative
control). What determines the choice of mode of control for each gene? This
study proposes rules for gene regulation based on the assumption that free
regulatory sites are exposed to nonspecific binding errors, whereas sites bound
to their cognate regulators are protected from errors. Hence, the selected
mechanisms keep the sites bound to their designated regulators for most of the
time, thus minimizing fitness-reducing errors. This offers an explanation of
the empirically demonstrated Savageau demand rule: Genes that are needed often
in the natural environment tend to be regulated by activators, and rarely
needed genes tend to be regulated by repressors; in both cases, sites are bound
for most of the time, and errors are minimized. The fitness advantage of error
minimization appears to be readily selectable. The present approach can also
generate rules for multi-regulator systems. The error-minimization framework
raises several experimentally testable hypotheses. It may also apply to other
biological regulation systems, such as those involving protein-protein
interactions.
|
A traveling wave model for a semiconductor diode laser based on quantum wells
is presented as well as a comprehensive theoretical model of the lasing
dynamics produced by the intensity discrimination of the nonlinear
mode-coupling in a waveguide array. By leveraging a recently developed model
for the detailed semiconductor gain dynamics, the temporal shaping effects of
the nonlinear mode-coupling induced by the waveguide arrays can be
characterized. Specifically, the enhanced nonlinear pulse shaping provided by
the waveguides is capable of generating stable frequency combs at a wavelength
of 800 nm in a GaAs device, a parameter regime not feasible for stable
comb-line generation using a single waveguide. Extensive numerical simulations
showed
that stable waveform generation could be achieved and optimized by an
appropriate choice of the linear waveguide coupling coefficient, quantum well
depth, and the input currents to the first and second waveguides. The model
provides a first demonstration that a compact, efficient and robust on-chip
comb source can be produced in GaAs.
|
We present a small x resummation for the GLAP anomalous dimension and its
corresponding dual BFKL kernel, which includes all the available perturbative
information and nonperturbative constraints. Specifically, it includes all the
information coming from next-to-leading order GLAP anomalous dimensions and
BFKL kernels, from the constraints of momentum conservation, from
renormalization-group improvement of the running coupling and from gluon
interchange symmetry. The ensuing evolution kernel has a uniformly stable
perturbative expansion. It is very close to the unresummed NLO GLAP kernel in
most of the HERA kinematic region, the small x BFKL behaviour being softened by
momentum conservation and the running of the coupling. Next-to-leading
corrections are small thanks to the constraint of gluon interchange symmetry.
This result subsumes all previous resummations in that it combines optimally
all the information contained in them.
|
This paper presents a new array response control scheme named
complex-coefficient weight vector orthogonal decomposition ($
\textrm{C}^2\textrm{-WORD} $) and its application to pattern synthesis. The
proposed $ \textrm{C}^2\textrm{-WORD} $ algorithm is a modified version of the
existing WORD approach. We extend WORD by allowing a complex-valued combining
coefficient in $ \textrm{C}^2\textrm{-WORD} $, and find the optimal combining
coefficient by maximizing the white noise gain (WNG). Our algorithm offers a
closed-form expression to precisely control the array response level at a given
point starting from an arbitrarily-specified weight vector. In addition, it
results in smaller pattern variations at the uncontrolled angles. Elaborate
analysis
shows that the proposed $ \textrm{C}^2\textrm{-WORD} $ scheme performs at least
as well as the state-of-the-art $\textrm{A}^\textrm{2}\textrm{RC}$ or WORD
approach. By applying $ \textrm{C}^2\textrm{-WORD} $ successively, we present a
flexible and effective approach to pattern synthesis. Numerical examples are
provided to demonstrate the flexibility and effectiveness of $
\textrm{C}^2\textrm{-WORD} $ in array response control as well as pattern
synthesis.
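To make the controlled quantities concrete (standard array-processing
definitions in our own notation; this is not the $\textrm{C}^2\textrm{-WORD}$
update itself): for a weight vector $w$ and steering vector $a(\theta)$, the
response level relative to the beam axis $\theta_0$ and the WNG can be
computed as follows.

    import numpy as np

    def steering_vector(theta_deg, n, d_over_lambda=0.5):
        """Steering vector of an n-element uniform linear array."""
        theta = np.deg2rad(theta_deg)
        return np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))

    def response_level(w, theta_deg, theta0_deg, n):
        """Normalized power response at theta relative to the axis theta0."""
        a, a0 = steering_vector(theta_deg, n), steering_vector(theta0_deg, n)
        return np.abs(w.conj() @ a) ** 2 / np.abs(w.conj() @ a0) ** 2

    def wng(w, theta0_deg, n):
        """White noise gain: array gain against spatially white noise."""
        a0 = steering_vector(theta0_deg, n)
        return np.abs(w.conj() @ a0) ** 2 / np.real(w.conj() @ w)

    n = 16
    w = steering_vector(0.0, n)       # conventional beamformer steered to 0 deg
    print(response_level(w, 30.0, 0.0, n), wng(w, 0.0, n))  # WNG equals n here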
|
An enduring problem of our society is the fair division of goods.
The problem of proportional cake cutting focuses on dividing a heterogeneous
and divisible resource, the cake, among $n$ players who value pieces according
to their own measure function. The goal is to assign each player a not
necessarily connected part of the cake that the player evaluates at least as
much as her proportional share.
In this paper, we investigate the problem of proportional division with
unequal shares, where each player is entitled to receive a predetermined
portion of the cake. Our main contribution is threefold. First we present a
protocol for integer demands that delivers a proportional solution in fewer
queries than all known algorithms. Then we show that our protocol is
asymptotically the fastest possible by giving a matching lower bound. Finally,
we turn to irrational demands and solve the proportional cake cutting problem
by reducing it to the same problem with integer demands only. All results
remain valid in a highly general cake cutting model, which can be of
independent interest.
|
We show that, for any prime power p^k and any convex body K (i.e., a compact
convex set with interior) in Rd, there exists a partition of K into p^k convex
sets with equal volume and equal surface area. We derive this result from a
more general one for absolutely continuous measures and continuous functionals
on the space of convex bodies. This result was independently found by Roman
Karasev generalizing work of Gromov and Memarian who proved it for the standard
measure on the sphere and p=2. Our proof uses basics from the theory of optimal
transport and equivariant topology. The topological ingredient is a Borsuk-Ulam
type statement on configuration space with the standard action of the symmetric
group. This result was discovered in increasing generality by Fuks, Vassiliev,
and Karasev. We include a detailed proof and discuss how it relates to Gromov's
proof for the case p=2.
|
The carbon-to-oxygen (C/O) ratio of asymptotic giant branch (AGB) stars
constitutes an important index of evolutionary stage and of
environment/metallicity. We develop a method for mass C/O classification of
AGBs in photometric
surveys without using periods. For this purpose we rely on the slopes in the
tracks of individual stars in the colour-magnitude diagram. We demonstrate that
our method enables the separation of C-rich and O-rich AGB stars with little
confusion. For the Magellanic Clouds we demonstrate that this method works for
several photometric surveys and filter combinations. As we rely on no period
identification, our results are relatively insensitive to the phase coverage,
aliasing, and time-sampling problems that plague period analyses. For a
subsample of our stars, we verify our C/O classification against published C/O
catalogues. With our method we are able to produce C/O maps of the entire
Magellanic Clouds. Our purely photometric method for classification of C- and
O-rich AGBs constitutes a method of choice for large, near-infrared photometric
surveys. Because our method depends on the slope of colour-magnitude variation
but not on magnitude zero point, it remains applicable to objects with unknown
distances.
|
We study the hydrodynamic coupling of neighboring micro-beads placed in a
dual optical trap setup allowing us to precisely control the degree of coupling
and directly measure time-dependent trajectories of the entrained beads.
Average experimental trajectories of a probe bead entrained by the motion of a
neighboring scan bead are compared with theoretical computation, illustrating
the role of viscous coupling and setting timescales for probe bead relaxation.
The findings provide direct experimental corroborations of hydrodynamic
coupling at larger, micron spatial scales and millisecond timescales, of
relevance to hydrodynamic-assisted colloidal assembly as well as improving the
resolution of optical tweezers. We repeat the experiments for three-bead
setups.
|
We present new results on dynamical instabilities in rapidly rotating
neutron stars. In particular, using numerical simulations in full General
Relativity, we analyse the effects that the stellar compactness has on the
threshold for the onset of the dynamical bar-mode instability, as well as on
the appearance of other dynamical instabilities. By using an extrapolation
technique developed and tested in our previous study [1], we explicitly
determine the threshold for a wide range of compactnesses using four sequences
of models of constant baryonic mass comprising a total of 59 stellar models.
Our calculation of the threshold is in good agreement with the Newtonian
prediction and improves the previous post-Newtonian estimates. In addition, we
find that for stars with sufficiently large mass and compactness, the m=3
deformation is the fastest growing one. For all of the models considered, the
non-axisymmetric instability is suppressed on a dynamical timescale with an m=1
deformation dominating the final stages of the instability. These results,
together with those presented in [1], suggest that an m=1 deformation
represents a general and late-time feature of non-axisymmetric dynamical
instabilities both in full General Relativity and in Newtonian gravity.
|
Self-organized patterns of cathode spots in glow discharges are computed in
the cathode boundary layer geometry, which is the one employed in most of the
experiments reported in the literature. The model comprises conservation and
transport equations of electrons and a single ion species, written in the
drift-diffusion and local-field approximations, and Poisson's equation.
Multiple solutions existing for the same value of the discharge current and
describing modes with different configurations of cathode spots are computed by
means of a stationary solver. The computed solutions are compared to their
counterparts for plane-parallel electrodes, and experiments. All of the
computed spot patterns have been observed in the experiment.
|
We have calculated the evolution of neutron star binaries towards coalescence
driven by gravitational radiation. The hydrodynamical effects as well as the
general relativistic effects are important in the final phase. All corrections
up to post$^{2.5}$-Newtonian order and the tidal effect are included in the
orbital motion. The star is approximated by a simple Newtonian stellar model
called affine star model. Stellar spins and angular momentum are assumed to be
aligned. We have shown how the internal stellar structure affects the stellar
deformation, variations of the spins, and the orbital motion of the binary just
before contact. The gravitational waveforms from the last few revolutions
depend significantly on the stellar structure.
|
The outstanding properties of transition metal dichalcogenide (TMD)
monolayers and their van der Waals (vdW) heterostructures, arising from their
structure and the modified electron-hole Coulomb interaction in 2D, make them
promising candidates for potential electro-optical devices. However, the
production of reproducible devices remains challenging, partly due to
variability at the nanometer to atomic scales. Thus, access to chemical,
structural, and optical characterization at these lengthscales is essential.
While electron microscopy and spectroscopy can provide chemical and structural
data, accessing the optical response at the nanoscale through electron
spectroscopies has been hindered until recently. This review focuses on the
application of two electron spectroscopies in scanning (transmission) electron
microscopes, namely cathodoluminescence and electron energy-loss spectroscopy,
to study the nano-optics of TMD atomic layers and their vdW heterostructures.
We discuss how technological advancements that can improve these
spectroscopies, many of which are already underway, will make them ideal for
studying the physics of vdW heterostructures at the nanoscale.
|
Searches for periodicity in time series are often done with models of
periodic signals, whose statistical significance is assessed via false alarm
probabilities or Bayes factors. However, a statistically significant periodic
model might not originate from a strictly periodic source. In astronomy in
particular, one expects transient signals that show periodicity for a certain
amount of time before vanishing. This situation is encountered for instance in
the search for planets in radial velocity data. While planetary signals are
expected to have a stable phase, amplitude and frequency - except when strong
planet-planet interactions are present - signals induced by stellar activity
will typically not exhibit the same stability. In the present article, we
explore the use of periodic functions multiplied by time windows to diagnose
whether an apparently periodic signal is truly so. We suggest diagnostics to
check whether a signal is consistently present in the time series, and has a
stable phase, amplitude and period. The tests are expressed both in a
periodogram and Bayesian framework. Our methods are applied to the Solar
HARPS-N data as well as HD 215152, HD 69830 and HD 13808. We find that (i) the
HARPS-N Solar data exhibits signals at the Solar rotation period and its first
harmonic ($\sim$ 13.4 days). The frequency and phase of the 13.4 days signal
appear constant within the estimation uncertainties, but its amplitude presents
significant variations which can be mapped to activity levels. (ii) as
previously reported, we find four, three, and two planets orbiting HD 215152,
HD 69830, and HD 13808, respectively.
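A minimal sketch of the windowed-periodic-model idea (our illustration, not the
paper's implementation): fit a sinusoid multiplied by a time window, here a
Gaussian apodization centred at t0 with width tau, by linear least squares, and
compare the fit quality with that of a strictly periodic sinusoid.

    import numpy as np

    def windowed_sinusoid_chi2(t, y, period, t0, tau):
        """Least-squares fit of w(t)*(a*cos + b*sin) + c with a Gaussian
        window w; tau = np.inf recovers a strictly periodic sinusoid fit."""
        w = (np.exp(-0.5 * ((t - t0) / tau) ** 2)
             if np.isfinite(tau) else np.ones_like(t))
        X = np.column_stack([w * np.cos(2 * np.pi * t / period),
                             w * np.sin(2 * np.pi * t / period),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ coef) ** 2)

    # Transient signal: periodic only during the first half of the series.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 100, 500)
    y = (np.where(t < 50, np.sin(2 * np.pi * t / 13.4), 0.0)
         + 0.3 * rng.standard_normal(t.size))

    chi2_periodic = windowed_sinusoid_chi2(t, y, 13.4, 0.0, np.inf)
    chi2_windowed = windowed_sinusoid_chi2(t, y, 13.4, 25.0, 15.0)
    print(chi2_windowed < chi2_periodic)   # the window localizes the transient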
|
We analyze 26 archival Kepler transits of the exo-Neptune HAT-P-11b,
supplemented by ground-based transits observed in the blue (B-band) and near-IR
(J-band). Both the planet and host star are smaller than previously believed;
our analysis yields Rp=4.31 +/-0.06 Earth-radii, and Rs = 0.683 +/-0.009 solar
radii, both about 3-sigma smaller than the discovery values. Our ground-based
transit data at wavelengths bracketing the Kepler bandpass serve to check the
wavelength dependence of stellar limb darkening, and the J-band transit
provides a precise and independent constraint on the transit duration. Both the
limb darkening and transit duration from our ground-based data are consistent
with the new Kepler values for the system parameters. Our smaller radius for
the planet implies that its gaseous envelope can be less extensive than
previously believed, being very similar to the H-He envelope of GJ436b and
Kepler-4b. HAT-P-11 is an active star, and signatures of star spot crossings
are ubiquitous in the Kepler transit data. We develop and apply a methodology
to correct the planetary radius for the presence of both crossed and uncrossed
star spots. Star spot crossings are concentrated at phases -0.002 and +0.006.
This is consistent with inferences from Rossiter-McLaughlin measurements that
the planet transits nearly perpendicular to the stellar equator. We identify
the dominant phases of star spot crossings with active latitudes on the star,
and we infer that the stellar rotational pole is inclined at about 12 +/-5
degrees to the plane of the sky. We point out that precise transit measurements
over long durations could in principle allow us to construct a stellar
Butterfly diagram, to probe the cyclic evolution of magnetic activity on this
active K-dwarf star.
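As a quick consistency check on the revised radii (a back-of-the-envelope
sketch, not part of the original analysis), the quoted values imply a transit
depth of roughly 0.33%:

    R_EARTH_KM, R_SUN_KM = 6371.0, 695700.0  # nominal terrestrial/solar radii
    rp = 4.31 * R_EARTH_KM                   # planet radius quoted above
    rs = 0.683 * R_SUN_KM                    # stellar radius quoted above
    ratio = rp / rs                          # ~ 0.058
    print(f"Rp/Rs = {ratio:.4f}, depth = {ratio**2 * 100:.2f} %")  # ~ 0.33 %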
|
The 2-dimensional Lyness map is a 5-periodic birational map of the plane
which may famously be resolved to give an automorphism of a log Calabi-Yau
surface, given by the complement of an anticanonical pentagon of $(-1)$-curves
in a del Pezzo surface of degree 5. This surface has many remarkable properties
and, in particular, it is mirror to itself. We construct the 3-dimensional big
brother of this surface by considering the 3-dimensional Lyness map, which is
an 8-periodic birational map. The variety we obtain is a special (non-$\mathbb
Q$-factorial) affine Fano 3-fold of type $V_{12}$, and we show that it is a
self-mirror log Calabi-Yau 3-fold.
|
Myriad articles are devoted to Mertens's theorem. In yet another, we merely
wish to draw attention to a proof by Hardy, which uses a Tauberian theorem of
Landau that "leads to the conclusion in a direct and elegant manner". Hardy's
proof is also quite adaptable, and it is readily combined with well-known
results from prime number theory. We demonstrate this by proving a version of
the theorem for primes in arithmetic progressions with uniformity in the
modulus, as well as a non-abelian analogue of this.
|
Preserving beam quality during the transportation of high-brightness electron
beams is a significant and widespread challenge in the design of modern
accelerators. The importance of this issue stems from the fact that the quality
of the beam at the accelerator's output is crucial for various applications,
including particle colliders, free-electron lasers, and synchrotron radiation
sources. The coherent synchrotron radiation (CSR) effect can degrade beam
quality when a bunch is deflected. Therefore, developing a structure that
effectively suppresses the CSR effect, especially for short bunches, is
critically important. This involves protecting both the transverse emittance
and the longitudinal profile to ensure the production of a high-quality beam.
In this study, an optimization based on the reverse lattice of the beamline is
proposed. This method can simplify the optimization process. Based on this
approach, the Quadruple Bend Achromat (QBA) deflection structure has been
designed and optimized. We then derive a general solution to completely
suppress the impact of steady-state CSR on the transverse plane for different
topologies of QBA. Furthermore, a general condition is proposed for suppressing
displacements caused by CSR in sequence drifts for isochronous structures.
Moreover, QBA proves to be the simplest structure that can simultaneously
suppress both types of CSR effects. Simulation results for bunches with a peak
current of up to $3000\,\mathrm{A}$ show almost no change in transverse
emittance for large-angle deflections.
|
We investigate identical pion HBT intensity interferometry for central Au+Au
collisions at 1.23A GeV. High-statistics $\pi^-\pi^-$ and $\pi^+\pi^+$ data are
measured with HADES at SIS18/GSI. The radius parameters, derived from the
correlation function depending on relative momenta in the longitudinal-comoving
system and parametrized as a three-dimensional Gaussian distribution, are
studied as a function of transverse momentum. A substantial charge-sign
difference of the
source radii is found, particularly pronounced at low transverse momentum. The
extracted Coulomb-corrected source parameters agree well with a smooth
extrapolation of the center-of-mass energy dependence established at higher
energies, extending the corresponding excitation functions down towards a very
low energy. Our data would thus rather disfavour any strong energy dependence
of the radius parameters in the low energy region.
|
While there have been important theoretical advances in understanding the
universality classes of interfaces moving in porous media, the developed tools
cannot be directly applied to experiments. Here we introduce a method that can
identify the universality class from snapshots of the interface profile. We
test the method on discrete models whose universality class is well known, and
use it to identify the universality class of interfaces obtained in experiments
on fluid flow in porous media.
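The abstract does not spell out the method, but a standard ingredient of such
snapshot-based analyses is the roughness exponent extracted from the scaling
of the local interface width; the sketch below is our illustration, not
necessarily the authors' estimator.

    import numpy as np

    def roughness_exponent(h, window_sizes):
        # Local-width scaling w(l) ~ l^alpha from a single snapshot h(x).
        w = [np.mean(h[: (len(h) // l) * l].reshape(-1, l).std(axis=1))
             for l in window_sizes]
        alpha, _ = np.polyfit(np.log(window_sizes), np.log(w), 1)
        return alpha

    rng = np.random.default_rng(1)
    h = np.cumsum(rng.standard_normal(2 ** 14))  # random-walk test profile
    print(roughness_exponent(h, [8, 16, 32, 64, 128, 256]))  # ~ 0.5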
|
The success of deep learning models is heavily tied to the use of massive
amounts of labeled data and excessively long training times. With the emergence
of intelligent edge applications that use these models, the critical challenge
is to obtain the same inference capability on a resource-constrained device
while providing adaptability to cope with the dynamic changes in the data. We
propose AgileNet, a novel lightweight dictionary-based few-shot learning
methodology which provides reduced complexity deep neural network for efficient
execution at the edge while enabling low-cost updates to capture the dynamics
of the new data. Evaluations of state-of-the-art few-shot learning benchmarks
demonstrate the superior accuracy of AgileNet compared to prior art.
Additionally, AgileNet is the first few-shot learning approach whose low-cost
model updates do not eliminate the knowledge obtained from the primary training.
This property is ensured through the dictionaries learned by our novel
end-to-end structured decomposition, which also reduces the memory footprint
and computation complexity to match the edge device constraints.
|
Distributed storage systems must store large amounts of data over long
periods of time. To avoid data loss due to device failures, an $[n,k]$ erasure
code is used to encode $k$ data symbols into a codeword of $n$ symbols that are
stored across different devices. However, device failure rates change
throughout the life of the data, and tuning $n$ and $k$ according to these
changes has been shown to save significant storage space. Code conversion is
the process of converting multiple codewords of an initial $[n^I,k^I]$ code
into codewords of a final $[n^F,k^F]$ code that decode to the same set of data
symbols. In this paper, we study conversion bandwidth, defined as the total
amount of data transferred between nodes during conversion. In particular, we
consider the case where the initial and final codes are MDS and a single
initial codeword is split into several final codewords ($k^I=\lambda^F k^F$ for
integer $\lambda^F \geq 2$), called the split regime. We derive lower bounds on
the conversion bandwidth in the split regime and propose constructions that
significantly reduce conversion bandwidth and are optimal for certain
parameters.
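For orientation, the naive "decode and re-encode" baseline in the split regime
reads one full set of data symbols and writes all new parities; the accounting
below is illustrative (with assumed parameters), not the paper's lower bound
or construction.

    def naive_split_bandwidth(n_I, k_I, n_F, k_F):
        # Download k_I symbols to decode the initial codeword, then upload
        # the parity symbols of the lam_F final codewords (the data symbols
        # themselves can stay in place on their nodes).
        assert k_I % k_F == 0
        lam_F = k_I // k_F
        return k_I + lam_F * (n_F - k_F)

    print(naive_split_bandwidth(n_I=14, k_I=12, n_F=8, k_F=6))  # 16 symbols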
|
Statistical Data Assimilation (SDA) is the transfer of information from field
or laboratory observations to a user-selected model of the dynamical system
producing those observations. The data is noisy and the model has errors; the
information transfer addresses properties of the conditional probability
distribution of the states of the model conditioned on the observations. The
quantities of interest in SDA are the conditional expected values of functions
of the model state, and these require the approximate evaluation of high
dimensional integrals. We introduce a conditional probability distribution and
use the Laplace method with annealing to identify the maxima of the conditional
probability distribution. The annealing method slowly increases the precision
term of the model as it enters the Laplace method. In this paper, we extend the
idea of precision annealing (PA) to Monte Carlo calculations of conditional
expected values using Metropolis-Hastings methods.
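A minimal sketch of the precision-annealing idea with Metropolis-Hastings
follows; the toy action (a smoothness penalty standing in for a real dynamical
model) and all parameters are our assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(2)

    def action(x, Rf, y, Rm=1.0):
        # Measurement term with precision Rm plus a model term whose
        # precision Rf is annealed upward across the ladder below.
        return (0.5 * Rm * np.sum((x - y) ** 2)
                + 0.5 * Rf * np.sum(np.diff(x) ** 2))

    def metropolis(x, Rf, y, n_steps=2000, step=0.1):
        a = action(x, Rf, y)
        for _ in range(n_steps):
            prop = x + step * rng.standard_normal(x.size)
            a_prop = action(prop, Rf, y)
            if np.log(rng.uniform()) < a - a_prop:  # accept w.p. exp(-dA)
                x, a = prop, a_prop
        return x

    y = np.sin(np.linspace(0, 3, 50)) + 0.2 * rng.standard_normal(50)
    x = y.copy()
    for Rf in [0.1 * 2 ** b for b in range(10)]:  # precision-annealing ladder
        x = metropolis(x, Rf, y)                  # warm-start at each rung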
|
Hedge fund managers with the first-loss scheme charge a management fee, a
performance fee and guarantee to cover a certain amount of investors' potential
losses. We study how parties can choose a mutually preferred first-loss scheme
in a hedge fund with the manager's first-loss deposit and investors' assets
segregated. For that, we solve the manager's non-concave utility maximization
problem, calculate Pareto optimal first-loss schemes and maximize a decision
criterion on this set. The traditional 2% management and 20% performance fees
are found to be not Pareto optimal, neither are common first-loss fee
arrangements. The preferred first-loss coverage guarantee increases as the
investor's risk-aversion or the interest rate increases. It decreases as the
manager's risk-aversion or the market price of risk increases. The more risk
averse the investor or the higher the interest rate, the larger is the
preferred performance fee. The preferred fee schemes significantly decrease the
fund's volatility.
|
Across industries, traditional design and engineering workflows are being
upgraded to simulation-driven processes. Many workflows include computational
fluid dynamics (CFD). Simulations of turbulent flow are notorious for high
compute costs and reliance on approximate methods that compromise accuracy.
Improvements in the speed and accuracy of CFD calculations would potentially
reduce design workflow costs by reducing computational costs and eliminating
the need for experimental testing. This study explores the feasibility of using
fault-tolerant quantum computers to improve the speed and accuracy of CFD
simulations in the incompressible or weakly compressible regime. For the
example of simulation-driven ship design, we consider simulations for
calculating the drag force in steady-state flows, and provide analysis on
economic utility and classical hardness. As a waypoint toward assessing the
feasibility of our chosen quantum approach, we estimate the quantum resources
required for the simpler case of drag force on a sphere. We estimate the
product of logical qubits $\times$ $T$ gates to range from $10^{22}$ to
$10^{28}$. These high initial estimates suggest that future quantum computers
are unlikely to provide utility for incompressible CFD applications unless
significant algorithmic advancements or alternative quantum approaches are
developed. Encouraged by applications in quantum chemistry that have realized
orders-of-magnitude improvements as they matured, we identify the most
promising next steps for quantum resource reduction as we work to scale up our
estimates from spheres to utility-scale problems with more complex geometry.
|
Even under constant external conditions, the expression levels of genes
fluctuate. Much emphasis has been placed on the components of this noise that
are due to randomness in transcription and translation; here we analyze the
role of noise associated with the inputs to transcriptional regulation, the
random arrival and binding of transcription factors to their target sites along
the genome. This noise sets a fundamental physical limit to the reliability of
genetic control, and has clear signatures, but we show that these are easily
obscured by experimental limitations and even by conventional methods for
plotting the variance vs. mean expression level. We argue that simple, global
models of noise dominated by transcription and translation are inconsistent
with the embedding of gene expression in a network of regulatory interactions.
Analysis of recent experiments on transcriptional control in the early
Drosophila embryo shows that these results are quantitatively consistent with
the predicted signatures of input noise, and we discuss the experiments needed
to test the importance of input noise more generally.
|
Network coding is a technique to maximize communication rates within a
network, in communication protocols for simultaneous multi-party transmission
of information. Linear network codes are examples of such protocols in which
the local computations performed at the nodes in the network are limited to
linear transformations of their input data (represented as elements of a ring,
such as the integers modulo 2). The quantum linear network coding protocols of
Kobayashi et al [arXiv:0908.1457 and arXiv:1012.4583] coherently simulate
classical linear network codes, using supplemental classical communication. We
demonstrate that these protocols correspond in a natural way to
measurement-based quantum computations with graph states over qudits
[arXiv:quant-ph/0301052, arXiv:quant-ph/0603226, and arXiv:0704.1263] having a
structure directly related to the network.
|
Planetesimals inevitably bear the signatures of their natal environment,
preserving in their composition a record of the metallicity of their system's
original gas and dust, albeit one altered by the formation process. When
planetesimals are dispersed from their system of origin, this record is carried
with them. As each star is likely to contribute at least $10^{12}$ interstellar
objects, the Galaxy's drifting population of interstellar objects (ISOs)
provides an overview of the properties of its stellar population through time.
Using the EAGLE cosmological simulation and models of protoplanetary formation,
our modelling predicts an ISO population with a bimodal distribution in their
water mass fraction. Objects formed in low-metallicity, typically older,
systems have a higher water fraction than their counterparts formed in
high-metallicity protoplanetary disks, and these water-rich objects comprise
the majority of the population. Both detected ISOs seem to belong to the lower
water fraction population; these results suggest they come from recently formed
systems. We show that the population of ISOs in galaxies with different star
formation histories will have different proportions of objects with high and
low water fractions. This work suggests that it is possible that the upcoming
Vera C. Rubin Observatory Legacy Survey of Space and Time will detect a large
enough population of ISOs to place useful constraints on models of
protoplanetary disks, as well as galactic structure and evolution.
|
Neutrino mass spectrum is reanalyzed in supersymmetric models with explicit
trilinear $R$ violation. Models in this category are argued to provide
simultaneous solution to the solar and atmospheric neutrino anomalies. It is
shown specifically that large mixing and hierarchical masses needed for the
vacuum solution of neutrino anomalies arise naturally in these models without
requiring any additional symmetries or hierarchies among the trilinear
couplings.
|
Recent works suggest that the surface chemistry, in particular the presence
of oxygen vacancies, can affect the polarization in a ferroelectric material.
This should, in turn, influence the domain ordering driven by the need to
screen the depolarizing field. Here we show using density functional theory
that the presence of oxygen vacancies at the surface of BaTiO3 (001)
preferentially stabilizes an inward-pointing polarization, P-. Mirror electron
microscopy measurements of the domain ordering confirm the theoretical results.
|
The ultrafast thermal and mechanical dynamics of a two-dimensional lattice of
metallic nano-disks has been studied by near infrared pump-probe diffraction
measurements, over a temporal range spanning from 100 fs to several
nanoseconds. The experiments demonstrate that, in these systems, a
two-dimensional surface acoustic wave (2DSAW), with a wavevector given by the
reciprocal periodicity of the array, can be excited by ~120 fs Ti:sapphire
laser pulses. In order to clarify the interaction between the nanodisks and the
substrate, numerical calculations of the elastic eigenmodes and simulations of
the thermodynamics of the system are developed through finite-element analysis.
In this light, we unambiguously show that the observed 2DSAW velocity shift
originates from the mechanical interaction between the 2DSAWs and the
nano-disks, while the correlated 2DSAW damping is due to the energy radiation
into the substrate.
|
The process $e^{+}e^{-}\rightarrow D_{s}^{\ast+}D_{s}^{\ast-}$ is studied
with a semi-inclusive method using data samples at center-of-mass energies from
threshold to 4.95 GeV collected with the BESIII detector operating at the
Beijing Electron Positron Collider. The Born cross sections of the process are
measured for the first time with high precision in this energy region. Two
resonance structures are observed in the energy-dependent cross sections around
4.2 and 4.4 GeV. By fitting the cross sections with a coherent sum of three
Breit-Wigner amplitudes and one phase-space amplitude, the two significant
structures are assigned masses of (4186.5$\pm$9.0$\pm$30) MeV/$c^{2}$ and
(4414.5$\pm$3.2$\pm$6.0) MeV/$c^{2}$, widths of (55$\pm$17$\pm$53) MeV and
(122.6$\pm$7.0$\pm$8.2) MeV, where the first errors are statistical and the
second ones are systematic. The inclusion of a third Breit-Wigner amplitude is
necessary to describe a structure around 4.79 GeV.
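Schematically, the fitted cross section is the modulus squared of a coherent
sum of Breit-Wigner amplitudes plus a phase-space term. In the sketch below
only the masses and widths of the first two resonances come from the fit
quoted above; the strengths, relative phases, third width, and normalizations
are placeholders.

    import numpy as np

    def bw(E, M, G):
        # Schematic Breit-Wigner amplitude (energies in GeV).
        return np.sqrt(M * G) / (E ** 2 - M ** 2 + 1j * M * G)

    def born_xsec(E, resonances, c_ps=1.0):
        thr = 2 * 2.1122                                   # ~ 2 m(Ds*) in GeV
        amp = c_ps * np.sqrt(np.clip(E - thr, 0.0, None))  # phase-space term
        for c, phi, M, G in resonances:
            amp = amp + c * np.exp(1j * phi) * bw(E, M, G)
        return np.abs(amp) ** 2                            # arbitrary units

    resonances = [(1.0, 0.0, 4.1865, 0.055),
                  (1.0, 1.0, 4.4145, 0.1226),
                  (1.0, 2.0, 4.79,   0.10)]
    E = np.linspace(4.23, 4.95, 500)
    sigma = born_xsec(E, resonances)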
|
Mental illness is a global health problem, but access to mental healthcare
resources remains poor worldwide. Online peer-to-peer support platforms attempt
to alleviate this fundamental gap by enabling those who struggle with mental
illness to provide and receive social support from their peers. However,
successful social support requires users to engage with each other and failures
may have serious consequences for users in need. Our understanding of
engagement patterns on mental health platforms is limited but critical to
inform the role, limitations, and design of these platforms. Here, we present a
large-scale analysis of engagement patterns of 35 million posts on two popular
online mental health platforms, TalkLife and Reddit. Leveraging communication
models in human-computer interaction and communication theory, we
operationalize a set of four engagement indicators based on attention and
interaction. We then propose a generative model to jointly model these
indicators of engagement, the output of which is synthesized into a novel set
of eleven distinct, interpretable patterns. We demonstrate that this framework
of engagement patterns enables informative evaluations and analysis of online
support platforms. Specifically, we find that mutual back-and-forth
interactions are associated with significantly higher user retention rates on
TalkLife. Such back-and-forth interactions, in turn, are associated with early
response times and the sentiment of posts.
|
Material accreted onto a neutron star can stably burn in steady state only
when the accretion rate is high (typically super-Eddington) or if a large flux
from the neutron star crust permeates the outer atmosphere. For such situations
we have analyzed the stability of nonradial oscillations, finding one unstable
mode for pure helium accretion. This is a shallow surface wave which resides in
the helium atmosphere above the heavier ashes of the ocean. It is excited by
the increase in the nuclear reaction rate during the oscillations, and it grows
on the timescale of a second. For a slowly rotating star, this mode has a
frequency of approximately 20-30 Hz (for l=1), and we calculate the full
spectrum that a rapidly rotating (>>30 Hz) neutron star would support. The
short period X-ray binary 4U 1820--30 is accreting helium rich material and is
the system most likely to show this unstable mode, especially when it is not
exhibiting X-ray bursts. Our discovery of an unstable mode in a thermally
stable atmosphere shows that nonradial perturbations have a different stability
criterion than the spherically symmetric thermal perturbations that generate
type I X-ray bursts.
|
In the framework of the Cartan classification of Hamiltonians, a kind of
topological classification of Fermi surfaces is established in terms of
topological charges. The topological charge of a Fermi surface depends on its
codimension and the class to which its Hamiltonian belongs. It is revealed that
six types of topological charges exist, and they form two groups with respect
to the chiral symmetry, with each group consisting of one original charge and
two descendants. It is these nontrivial topological charges which lead to the
robust topological protection of the corresponding Fermi surfaces against
perturbations that preserve discrete symmetries.
|
The Gatenby-Gawlinski model for cancer invasion is the object of analysis,
with the aim of investigating the mathematical framework behind the model by
means of suitable reductions. We perform numerical simulations to study the
sharpness/smoothness of the traveling fronts starting from a brief overview
about the full model and proceed by examining the cases of a two-equation-based
and a one-equation-based reduction. We exploit a numerical strategy based on
a finite volume approximation and employ a space-averaged wave speed estimate
to quantitatively approach the traveling-wave phenomenon. Concerning the
one-equation-based model, we propose a reduction framed within the degenerate
reaction-diffusion equations field, which proves to be effective in order to
qualitatively recover the typical trends arising from the Gatenby-Gawlinski
model. Finally, we carry out some numerical tests in a specific case where the
analytical solution is available.
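As an illustration of the degenerate reaction-diffusion setting invoked for
the one-equation-based reduction, here is a minimal explicit finite-volume
sketch for u_t = (D(u) u_x)_x + u(1 - u) with D(u) = u (degenerate at u = 0);
the coefficients are illustrative, not those of the Gatenby-Gawlinski model.

    import numpy as np

    L, N, T = 40.0, 400, 20.0
    dx = L / N
    dt = 0.2 * dx ** 2                   # stable explicit step for D <= 1
    x = (np.arange(N) + 0.5) * dx
    u = np.where(x < 5.0, 1.0, 0.0)      # sharp initial front

    t = 0.0
    while t < T:
        D_face = 0.5 * (u[1:] + u[:-1])  # D(u) = u at interior cell faces
        flux = D_face * np.diff(u) / dx  # diffusive flux at the faces
        div = np.zeros_like(u)
        div[:-1] += flux / dx            # no-flux (Neumann) boundaries
        div[1:] -= flux / dx
        u += dt * (div + u * (1.0 - u))  # diffusion + logistic reaction
        t += dt

Tracking the front position over successive times then yields a wave-speed
estimate, analogous in spirit to the space-averaged estimate used above.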
|
This brief report (6 pages) was written in 1983 but never published. It
concerns the hyperbolic 3-orbifolds obtained as quotients of hyperbolic 3-space
by the group of invertible 2 by 2 matrices whose entries are integers in the
imaginary quadratic extension of Q of discriminant D. For values D > -100 the
topological type of this orbifold is tabulated, and in the cases when the
topological type is a punctured 3-sphere, the singular locus of the orbifold is
drawn. A few miscellaneous comments about these orbifolds are included. The
tables and pictures are based on Bob Riley's computer calculations of Ford
domains and face pairings. Nothing is said about later developments after 1983.
The pictures are also viewable on my webpage in a perhaps more convenient
format; see http://math.cornell.edu/~hatcher
|
Low-cost inertial navigation sensors (INS) can be exploited for a reliable
tracking solution for autonomous vehicles. However, position errors grow
exponentially due to noise in the measurements. Several deep learning
techniques have been investigated to mitigate the errors for a better
navigation solution [1-10]. However, these studies have involved the use of
different datasets not made publicly available. The lack of a robust benchmark
dataset has thus hindered the advancement in the research, comparison and
adoption of deep learning techniques for vehicle positioning based on inertial
navigation. In order to facilitate the benchmarking, fast development and
evaluation of positioning algorithms, we therefore present the first of its
kind large-scale and information-rich inertial and odometry focused public
dataset called IO-VNBD (Inertial Odometry Vehicle Navigation Benchmark
Dataset). The vehicle tracking dataset was recorded using a research vehicle
equipped with ego-motion sensors on public roads in the United Kingdom,
Nigeria, and France. The sensors include a GPS receiver, inertial navigation
sensors, and wheel-speed sensors, amongst other sensors found on the car, as
well as the inertial navigation sensors and GPS receiver in an Android
smartphone sampling at 10 Hz. A diverse set of scenarios and vehicle dynamics
is captured, such as traffic, roundabouts, and hard braking, on different road
types (country roads, motorways, etc.) with varying driving patterns. The
dataset consists of a total driving time of about 40 hours over 1,300km for the
vehicle extracted data and about 58 hours over 4,400 km for the smartphone
recorded data. We hope that this dataset will prove valuable in furthering
research on the correlation between a vehicle's dynamics and its displacement,
as well as other related studies.
|
Fault-tolerant quantum computation (FTQC) is essential to implement quantum
algorithms in a noise-resilient way, and thus to enjoy advantages of quantum
computers even with presence of noise. In FTQC, a quantum circuit is decomposed
into universal gates that can be fault-tolerantly implemented, for example,
Clifford+$T$ gates. Here, the $T$ gate is usually regarded as an essential resource
for quantum computation because its action cannot be simulated efficiently on
classical computers and it is experimentally difficult to implement
fault-tolerantly. Practically, it is highly likely that only a limited number
of $T$ gates are available in the near future. In the pre-FTQC era, due to the
constraint on available resources, it is vital to precisely estimate the
decomposition error of a whole circuit. In this paper, we propose that the
Clifford+$T$ decomposition error for a given quantum circuit containing a large
number of quantum gates can be modeled as depolarizing noise by averaging
the decomposition error for each quantum gate in the circuit, and our model
provides more accurate error estimation than the naive estimation. We exemplify
this by taking unitary coupled-cluster (UCC) ansatz used in the applications of
quantum computers to quantum chemistry as an example. We theoretically evaluate
the approximation error of UCC ansatz when decomposed into Clifford+$T$ gates,
and the numerical simulation for a wide variety of molecules verified that our
model well explains the total decomposition error of the ansatz. Our results
enable the precise and efficient usage of quantum resources in the early-stage
applications of quantum computers and fuel further research towards what
quantum computation can achieve in the upcoming future.
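The gist can be caricatured in a few lines (an illustrative toy with assumed
per-gate synthesis errors, not the paper's derivation): a naive bound adds
per-gate errors linearly, while the depolarizing model composes per-gate
infidelities, which scale quadratically in the synthesis error.

    import numpy as np

    rng = np.random.default_rng(3)
    eps = 10 ** rng.uniform(-5, -4, size=10_000)  # assumed per-gate errors
    naive = eps.sum()                             # worst-case (coherent) bound
    p = eps ** 2                                  # per-gate infidelity ~ eps^2
    depol = 1.0 - np.prod(1.0 - p)                # depolarizing accumulation
    print(f"naive: {naive:.2e}  depolarizing model: {depol:.2e}")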
|
As the Internet of Things (IoT) emerges over the next decade, developing
secure communication for IoT devices is of paramount importance. Achieving
end-to-end encryption for large-scale IoT systems, like smart buildings or
smart cities, is challenging because multiple principals typically interact
indirectly via intermediaries, meaning that the recipient of a message is not
known in advance. This paper proposes JEDI (Joining Encryption and Delegation
for IoT), a many-to-many end-to-end encryption protocol for IoT. JEDI encrypts
and signs messages end-to-end, while conforming to the decoupled communication
model typical of IoT systems. JEDI's keys support expiry and fine-grained
access to data, common in IoT. Furthermore, JEDI allows principals to delegate
their keys, restricted in expiry or scope, to other principals, thereby
granting access to data and managing access control in a scalable, distributed
way. Through careful protocol design and implementation, JEDI can run across
the spectrum of IoT devices, including ultra low-power deeply embedded sensors
severely constrained in CPU, memory, and energy consumption. We apply JEDI to
an existing IoT messaging system and demonstrate that its overhead is modest.
|
We investigate the static charge response for the Hubbard model. Using the
Slave-Boson method in the saddle-point approximation we calculate the charge
susceptibility. We find that RPA works quite well close to half-filling,
breaking, of course, down close to the Mott transition. Away from half filling
RPA is much less reliable: Already for very small values of the Hubbard
interaction U, the linear response becomes much more efficient than RPA,
eventually leading to overscreening already beyond quite moderate values of U.
To understand this behavior we give a simple argument, which implies that the
response to an external perturbation at large U should actually be strongly
non-linear. This prediction is confirmed by the results of exact
diagonalization.
|
Speech enhancement in the time-frequency domain is often performed by
estimating a multiplicative mask to extract clean speech. However, most neural
network-based methods perform point estimation, i.e., their output consists of
a single mask. In this paper, we study the benefits of modeling uncertainty in
neural network-based speech enhancement. For this, our neural network is
trained to map a noisy spectrogram to the Wiener filter and its associated
variance, which quantifies uncertainty, based on the maximum a posteriori (MAP)
inference of spectral coefficients. By estimating the distribution instead of
the point estimate, one can model the uncertainty associated with each
estimate. We further propose to use the estimated Wiener filter and its
uncertainty to build an approximate MAP (A-MAP) estimator of spectral
magnitudes, which in turn is combined with the MAP inference of spectral
coefficients to form a hybrid loss function to jointly reinforce the
estimation. Experimental results on different datasets show that the proposed
method can not only capture the uncertainty associated with the estimated
filters, but also yield a higher enhancement performance over comparable models
that do not take uncertainty into account.
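One standard way to train such a mean-plus-variance output is the
heteroscedastic Gaussian negative log-likelihood over time-frequency bins; the
sketch below conveys the general idea and is not the paper's exact hybrid loss.

    import numpy as np

    def gaussian_nll(s, s_hat, var, eps=1e-8):
        # s: clean spectral coefficients; s_hat: predicted (Wiener-filtered)
        # mean; var: predicted variance quantifying the uncertainty.
        var = np.maximum(var, eps)
        return np.mean(np.log(var) + np.abs(s - s_hat) ** 2 / var)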
|
The Standard Model is extended minimally with a new flavor-violating family
symmetry ${\rm U(1)}_\lambda$, which acts only on leptons including the
right-handed neutrinos. The model is anomaly free with family-dependent ${\rm
U(1)}_\lambda$ charges, and consistent with the observed neutrino mixing
angles. It predicts charged lepton flavor-violating processes mediated by a new
gauge boson. Under certain conditions, the smallness of $\theta_{13}$ of
neutrino mixing can be justified in terms of the muon-to-tau mass ratio, at the
same time explaining the electron-to-tau large mass hierarchy.
|
Cognitive communications have emerged as a promising solution to enhance,
adapt, and invent new tools and capabilities that transcend conventional
wireless networks. Deep learning (DL) is critical in enabling essential
features of cognitive systems because of its fast prediction performance,
adaptive behavior, and model-free structure. These features are especially
significant for multi-antenna wireless communications systems, which generate
and handle massive data. Multiple antennas may provide multiplexing, diversity,
or antenna gains that, respectively, improve the capacity, bit error rate, or
the signal-to-interference-plus-noise ratio. In practice, multi-antenna
cognitive communications encounter challenges in terms of data complexity and
diversity, hardware complexity, and wireless channel dynamics. DL solutions
such as federated learning, transfer learning and online learning, tackle these
problems at various stages of communications processing, including
multi-channel estimation, hybrid beamforming, user localization, and sparse
array design. This article provides a synopsis of various DL-based methods to
impart cognitive behavior to multi-antenna wireless communications for improved
robustness and adaptation to the environmental changes while providing
satisfactory spectral efficiency and computation times. We discuss DL design
challenges from the perspective of data, learning, and transceiver
architectures. In particular, we suggest quantized learning models, data/model
parallelization, and distributed learning methods to address the aforementioned
challenges.
|
A new approach for the weak noise analysis of exit problems removes an
intrinsic contradiction of an existing method. It applies to both the mean
time and the location of the exits; novel outcomes mainly concern the exits
from entire domains of attraction. Moreover, the involved quasipotential is
obtained without use of a Hamiltonian system in the case of two variables.
|
Let $n$ be a positive integer and $f(x) := x^{2^n}+1$. In this paper, we
study orders of primes dividing products of the form $P_{m,n}:=f(1)f(2)\cdots
f(m)$. We prove that if $m > \max\{10^{12},4^{n+1}\}$, then there exists a
prime divisor $p$ of $P_{m,n}$ such that ord$_{p}(P_{m,n}) \leq n \cdot
2^{n-1}$. For $n=2$, we establish that for every positive integer $m$, there
exists a prime divisor $p$ of $P_{m,2}$ such that ord$_{p}(P_{m,2}) \leq 4$.
Consequently, $P_{m,2}$ is never a fifth or higher power. This extends work of
Cilleruelo who studied the case $n=1$.
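The $n=2$ statement is easy to probe empirically for small $m$ (a quick sketch
using sympy; the finite range checked is of course no substitute for the
proof):

    from sympy import factorint

    def min_valuation(m):
        # Smallest multiplicity among primes dividing
        # P_{m,2} = f(1) f(2) ... f(m), with f(x) = x**4 + 1.
        val = {}
        for x in range(1, m + 1):
            for p, e in factorint(x ** 4 + 1).items():
                val[p] = val.get(p, 0) + e
        return min(val.values())

    assert all(min_valuation(m) <= 4 for m in range(1, 30))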
|
Dewetting of thin metal films is one of the most widespread methods for
fabricating functional plasmonic nanostructures. However, simple
thermally-induced dewetting does not allow control over the degree of
nanostructure order without additional lithographic process steps. Here we
propose a novel method for lithography-free and large-scale fabrication of
plasmonic nanostructures via controllable femtosecond laser-induced dewetting.
The method is based on femtosecond laser surface patterning of a thin film
followed by a nanoscale hydrodynamic instability, which is found to be very
controllable under specific irradiation conditions. We achieve control over
the degree of nanostructure order by changing the laser irradiation parameters
and film thickness. This allows us to exploit the method for a broad range of
applications: resonant light absorption and scattering, sensing, and the
potential improvement of thin-film solar cells.
|
With the development of nanotechnology, the measurement of electrical
properties in local areas of materials and devices has become a great need.
Although many kinds of scanning probe microscopes have been developed to
satisfy the requirements of nanotechnology, a microscope technique that can
determine electrical properties in local areas of materials and devices has
not yet been developed. Recently, the microwave microscope has attracted the
interest of many researchers due to its potential for evaluating the
electrical properties of materials and devices. The advantage of microwaves is
that the response of a material is directly related to its electromagnetic
properties. However, because of limitations in probe structure,
nanometer-scale resolution has not been achieved. To reach this goal, a
microwave probe with a new structure is required. In this paper, we report a
nanostructured microwave probe. To restrain the attenuation of the microwave
signal in the probe, GaAs was used as the probe substrate. To obtain the
desired structure, wet etching was used to fabricate the probe. In contrast to
dry etching, side-etching occurs under the etching mask. Utilizing this
property, a micro-tip can be fabricated by etching a wafer onto whose surface
a small mask has been introduced in advance.
|
We investigate the existence of fair and efficient allocations of indivisible
chores to asymmetric agents who have unequal entitlements or weights. We
consider the fairness notion of weighted envy-freeness up to one chore (wEF1)
and the efficiency notion of Pareto-optimality (PO). The existence of EF1 and
PO allocations of chores to symmetric agents is a major open problem in
discrete fair division, and positive results are known only for certain
structured instances. In this paper, we study this problem for a more general
setting of asymmetric agents and show that an allocation that is wEF1 and PO
exists and can be computed in polynomial time for instances with:
- Three types of agents, where agents with the same type have identical
preferences but can have different weights.
- Two types of chores, where the chores can be partitioned into two sets,
each containing copies of the same chore.
For symmetric agents, our results establish that EF1 and PO allocations exist
for three types of agents and also generalize known results for three agents,
two types of agents, and two types of chores.
Our algorithms use a weighted picking sequence algorithm as a subroutine; we
expect this idea and our analysis to be of independent interest.
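For concreteness, the sketch below shows one simple form of a weighted picking
sequence for chores, in which agent $i$ picks whenever $(\text{picks}_i+1)/w_i$
is smallest and takes the remaining chore it dislikes least. This conveys only
the flavor of the subroutine; the paper's algorithm is more involved.

    import heapq

    def weighted_picking_sequence(weights, chores, disutility):
        # weights: {agent: w_i}; disutility: {agent: {chore: cost}}.
        bundles = {i: [] for i in weights}
        heap = [(1.0 / w, i) for i, w in weights.items()]
        heapq.heapify(heap)
        remaining = set(chores)
        while remaining:
            prio, i = heapq.heappop(heap)
            c = min(remaining, key=lambda ch: disutility[i][ch])
            bundles[i].append(c)
            remaining.remove(c)
            heapq.heappush(heap, (prio + 1.0 / weights[i], i))
        return bundles

    print(weighted_picking_sequence(
        {"a": 2.0, "b": 1.0}, ["c1", "c2", "c3"],
        {"a": {"c1": 1, "c2": 2, "c3": 3},
         "b": {"c1": 3, "c2": 1, "c3": 2}}))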
|
We report unusual near- and mid-infrared photometric properties of G 196-3 B,
the young substellar companion at 16 arcsec from the active M2.5-type star G
196-3 A, using data taken with the IRAC and MIPS instruments onboard Spitzer. G
196-3 B shows markedly redder colors at all wavelengths from 1.6 up to 24
micron than expected for its spectral type, which is determined to be L3 from
optical and near-infrared spectra. We discuss various physical scenarios to
account for its reddish nature, and conclude that a low-gravity atmosphere with
enshrouded upper atmospheric layers and/or a warm dusty disk/envelope provides
the most likely explanations, both of them consistent with an age in the
interval 20-300 Myr. We also present new and accurate separate proper motion
measurements for G 196-3 A and B confirming that both objects are
gravitationally linked and share the same motion within a few mas/yr. After
integration of the combined spectrophotometric spectral energy distributions,
we obtain that the difference in the bolometric magnitudes of G 196-3 A and B
is 6.15 +/- 0.10 mag. Kinematic consideration of the Galactic space motions of
the system for distances in the interval 15-30 pc suggests that the pair is a
likely member of the Local Association, and that it lay near the past positions
of young star clusters like alpha Persei less than 85 Myr ago, where the binary
might have originated. At these young ages, the mass of G 196-3 B would be in
the range 12-25 Mjup, close to the frontier between planets and brown dwarfs.
|
Global observations of ocean swell, from satellite Synthetic Aperture Radar
data, are used to estimate the dissipation of swell energy for a number of
storms. Swells can be very persistent with energy e-folding scales exceeding
20,000 km. For increasing swell steepness this scale shrinks systematically,
down to 2800 km for the steepest observed swells, revealing a significant loss
of swell energy. This value corresponds to a normalized energy decay rate in
time of $\beta = 4.2 \times 10^{-6}$ s$^{-1}$. Many processes may be
responsible for this dissipation. Because no particular trend is found with
wind magnitude and direction, the increase of dissipation rate with swell
steepness is interpreted as a laminar-to-turbulent transition of the boundary
layer, with a threshold Reynolds number of the order of 100,000. These
observations of swell evolution open the way for more accurate wave
forecasting models, and provide a constraint on swell-induced air-sea fluxes
of momentum and energy.
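The spatial and temporal decay rates quoted above can be cross-checked in a
few lines, assuming the deep-water group speed $c_g = gT/(4\pi)$ and a typical
swell period (the period value is our assumption for illustration):

    from math import pi

    g, T = 9.81, 15.0                 # m s^-2; assumed swell period in s
    c_g = g * T / (4 * pi)            # deep-water group speed ~ 11.7 m/s
    beta = c_g / 2800e3               # e-folding scale of the steepest swells
    print(f"beta = {beta:.1e} s^-1")  # ~ 4.2e-6 s^-1, matching the value above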
|
The SL(2,Z) duality symmetry of IIB superstring is naturally realized on the
D=11 supermembrane restricted to have central charges arising from a nontrivial
wrapping. This supermembrane is minimally immersed on the target space (MIM2).
The Hamiltonian of the MIM2 has a discrete quantum spectrum. It is manifestly
invariant under the SL(2,Z) symmetry associated to the conformal symmetry on
the base manifold and under a SL(2,Z) symmetry on the moduli of the target
space. The mass contribution of the string states on the MIM2 is obtained by
freezing the remaining degrees of freedom. It exactly agrees with the
perturbative spectrum of the (p,q) IIB and IIA superstring compactified on a
circle. We also construct a MIM2 in terms of a dual target space, from which a (p,q)
set of non-perturbative states associated to the IIA superstring is obtained.
|
The mayfly nymph breathes under water through an oscillating array of
wing-shaped tracheal gills. As the nymph grows, the kinematics of these gills
change abruptly from rowing to flapping. The classical fluid dynamics approach
to consider the mayfly nymph as a pumping device fails to give clear reasons
for this switch. In order to understand the whys and hows of this switch
between the two distinct kinematics, we analyze the problem under a Lagrangian
viewpoint. We consider that a good Lagrangian transport that distributes and
spreads water and dissolved oxygen well between and around the gills is the
main goal of the gill motion. Using this Lagrangian approach we are able to
provide the reason behind the switch from rowing to flapping that the mayfly
nymph experiences as it grows. More precisely, recent and powerful tools from
this Lagrangian approach are applied to in silico mayfly nymph experiments,
where the body shape, as well as the gill shapes, structures, and kinematics,
are matched to those in vivo. In this letter, we show both qualitatively and
quantitatively how the change of kinematics enables a better attraction,
stirring and confinement of water charged with dissolved oxygen inside the
gill area. From the computational velocity field we reveal attracting barriers
to
transport, i.e. attracting Lagrangian coherent structures, that form the
transport skeleton between and around the gills. In addition, we quantify how
well the fluid particles, and consequently the dissolved oxygen, are spread
and stirred inside the gill area.
|
We present a numerical study of detonation propagation in unconfined
explosive charges shaped as an annular arc (rib). Steady detonation in a
straight charge propagates at constant speed but when it enters an annular
section, it goes through a transition phase and eventually reaches a new steady
state of constant angular velocity. This study examines the speed of the
detonation wave along the annular charge during the transition phase and at
steady state, as well as its dependence on the dimensions of the annulus. The
system is modeled using a recently proposed diffuse-interface formulation which
allows for the representation of a two-phase explosive and of an additional
inert material. The explosive considered is the polymer-bonded TATB-based LX-17
and is modeled using two JWL equations of state and the Ignition and Growth
reaction rate law. Results show that steady state speeds are in good agreement
with experiment. In the transition phase, the evolution of outer detonation
speed deviates from the exponential bounded growth function suggested by
previous studies. We propose a new description of the transition phase which
consists of two regimes. The first is caused by local effects at the outer edge
of the annulus and leads to a dependence of outer detonation speed on angular
position along the arc. The second regime is induced by effects originating
from the inner edge of the annular charge and leads to the deceleration of the
outer detonation until steady state is reached. The study concludes with a
parametric study where the dependence of the steady state and the transition
phase on the dimensions of the annulus is investigated.
|
Generic inspirals and mergers of binary black holes produce beamed emission
of gravitational radiation that can lead to a gravitational recoil or kick of
the final black hole. The kick velocity depends on the mass ratio and spins of
the binary as well as on the dynamics of the binary configuration. Studies have
focused so far on the most astrophysically relevant configuration of
quasi-circular inspirals, for which kicks as large as 3,300 km/s have been
found. We present the first study of gravitational recoil in hyperbolic
encounters. Contrary to quasi-circular configurations, in which the beamed
radiation tends to average during the inspiral, radiation from hyperbolic
encounters is plunge dominated, resulting in an enhancement of preferential
beaming. As a consequence, it is possible to achieve kick velocities as large
as 10,000 km/s.
|
We study competition of two spreading colors starting from single sources on
the configuration model with i.i.d. degrees following a power-law distribution
with exponent $\tau\in (2,3)$. In this model two colors spread with a fixed and
equal speed on the unweighted random graph.
We analyse how many vertices the two colors paint eventually. We show that
coexistence sensitively depends on the initial local neighborhoods of the
source vertices: if these neighborhoods are `dissimilar enough', then there is
no coexistence, and the `loser' color paints a polynomial fraction of the
vertices with a random exponent.
If the local neighborhoods of the starting vertices are `similar enough',
then there is coexistence, i.e., both colors paint a strictly positive
proportion of vertices. We give a quantitative characterization of `similar'
local neighborhoods: two random variables describing the double exponential
growth of local neighborhoods of the source vertices must be within a factor
$\tau-2$ of each other. Both of the two outcomes happen with positive
probability with asymptotic value that is explicitly computable.
This picture reinforces the common belief that location is an important
feature in advertising.
This paper is a follow-up of the similarly named paper that handles the case
when the speeds of the two colors are not equal. There, we have shown that the
faster color paints almost all vertices, while the slower color paints only a
random sub-polynomial fraction of the vertices.
|