Many modern cyber-physical systems incorporate computer vision technologies,
complex sensors, and advanced control software, allowing them to interact with
the environment autonomously. Testing such systems poses numerous challenges:
not only should the system inputs be varied, but the surrounding environment
should also be accounted for. A number of tools have been developed to test a
system model by searching for inputs that falsify its requirements. However,
they are not directly applicable to autonomous cyber-physical systems, whose
model inputs are generated while the system operates in a virtual environment.
In this paper, we aim to design a search-based framework, named AmbieGen, for
generating diverse, fault-revealing test scenarios for autonomous
cyber-physical systems. The scenarios represent an environment in which an
autonomous agent operates. The framework should be applicable to generating
different types of environments. To generate the test scenarios, we leverage
the NSGA-II algorithm with two objectives. The first objective evaluates the
deviation of the observed system behaviour from its expected behaviour. The
second objective is test case diversity, calculated as the Jaccard distance
from a reference test case. We evaluate AmbieGen on three scenario generation
case studies, namely a smart thermostat, a robot obstacle avoidance system, and
a vehicle lane keeping assist system. We compare three configurations of
AmbieGen: one based on a single-objective genetic algorithm, one on a
multi-objective algorithm, and one on random search. Both the single- and
multi-objective configurations outperform random search. The multi-objective
configuration finds individuals of the same quality as the single-objective
one while producing more unique test scenarios within the same time budget.
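
To make the two objectives concrete, the following minimal sketch (not the authors' implementation) shows how the behavioural-deviation and Jaccard-diversity fitness values could be computed; encoding a scenario as a set of discrete environment elements is an assumption for illustration.

```python
# Minimal sketch of the two NSGA-II objectives described above: behavioural
# deviation and Jaccard-based diversity with respect to a reference test case.
# The scenario encoding (a set of discrete environment elements) is an
# illustrative assumption, not the paper's exact representation.

def deviation_objective(observed_trace, expected_trace):
    """Mean absolute deviation between observed and expected behaviour."""
    return sum(abs(o - e) for o, e in zip(observed_trace, expected_trace)) / len(expected_trace)

def jaccard_diversity_objective(scenario, reference):
    """1 - Jaccard similarity: larger values mean a more diverse test case."""
    scenario, reference = set(scenario), set(reference)
    union = len(scenario | reference)
    return 1.0 - len(scenario & reference) / union if union else 0.0

# NSGA-II would then maximize both objectives (or minimize their negatives),
# e.g. via a multi-objective library such as pymoo (an assumption here).
```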
|
In this paper, we introduce Concise Chain-of-Thought (CCoT) prompting. We
compared standard CoT and CCoT prompts to see how conciseness impacts response
length and correct-answer accuracy. We evaluated this using GPT-3.5 and GPT-4
with a multiple-choice question-and-answer (MCQA) benchmark. CCoT reduced
average response length by 48.70% for both GPT-3.5 and GPT-4 while having a
negligible impact on problem-solving performance. However, on math problems,
GPT-3.5 with CCoT incurs a performance penalty of 27.69%. Overall, CCoT leads
to an average per-token cost reduction of 22.67%. These results have practical
implications for AI systems engineers using LLMs to solve real-world problems
with CoT prompt-engineering techniques. In addition, these results provide more
general insight for AI researchers studying the emergent behavior of
step-by-step reasoning in LLMs.
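
As an illustration only, the sketch below contrasts a standard CoT prompt with a concise variant and measures the relative reduction in response length; the exact prompt wording and the `query_model` call are hypothetical placeholders, not the prompts or API used in the paper.

```python
# Illustrative sketch: `query_model` stands in for an LLM API call, and the
# prompt wording is an assumption rather than the paper's exact prompts.

COT_PROMPT = (
    "Answer the following multiple-choice question. "
    "Think step by step, then give the final answer.\n\n{question}"
)
CCOT_PROMPT = (
    "Answer the following multiple-choice question. "
    "Think step by step, but be concise, then give the final answer.\n\n{question}"
)

def response_length_reduction(question, query_model):
    """Relative reduction in response length when using CCoT vs. standard CoT."""
    cot_answer = query_model(COT_PROMPT.format(question=question))
    ccot_answer = query_model(CCOT_PROMPT.format(question=question))
    return 1.0 - len(ccot_answer) / len(cot_answer)
```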
|
We perform a suite of high-resolution smoothed particle hydrodynamics
simulations to investigate the orbital decay and mass evolution of massive
black hole (MBH) pairs down to scales of ~30 pc during minor mergers of disk
galaxies. Our simulation set includes star formation and accretion onto the
MBHs, as well as feedback from both processes. We consider 1:10 merger events
starting at z~3, with MBH masses in the sensitivity window of the Laser
Interferometer Space Antenna, and we follow the coupling between the merger
dynamics and the evolution of the MBH mass ratio until the satellite galaxy is
tidally disrupted. While the more massive MBH accretes in most cases as if the
galaxy were in isolation, the satellite MBH may undergo distinct episodes of
enhanced accretion, owing to strong tidal torques acting on its host galaxy and
to orbital circularization inside the disk of the primary galaxy. As a
consequence, the initial 1:10 mass ratio of the MBHs changes by the time the
satellite is disrupted. Depending on the initial fraction of cold gas in the
galactic disks and the geometry of the encounter, the mass ratios of the MBH
pairs at the time of satellite disruption can stay unchanged or become as large
as 1:2. Remarkably, the efficiency of MBH orbital decay correlates with the
final mass ratio of the pair itself: MBH pairs that significantly increase
their mass ratio are also expected to inspiral more promptly down to
nuclear-scale separations. These findings indicate that the mass ratios of MBH
pairs in galactic nuclei do not necessarily trace the mass ratios of their
merging host galaxies, but are determined by the complex interplay between gas
accretion and merger dynamics.
|
We compute the Zero Point Energy in a spherically symmetric background
distorted at high energy as predicted by Gravity's Rainbow. In this context we
set up a Sturm-Liouville problem with the cosmological constant considered as
the associated eigenvalue. The eigenvalue equation is a reformulation of the
Wheeler-DeWitt equation. We find that the ordinary divergences can here be
handled by an appropriate choice of the rainbow's functions, in contrast to
what happens in other conventional approaches.
|
A pruned variant of polar coding is reinvented for all binary erasure
channels. For small $\varepsilon>0$, we construct codes with block length
$\varepsilon^{-5}$, code rate $\text{Capacity}-\varepsilon$, error probability
$\varepsilon$, and encoding and decoding time complexity
$O(N\log|\log\varepsilon|)$ per block, equivalently $O(\log|\log\varepsilon|)$
per information bit (Propositions 5 to 8).
This result also follows if one applies systematic polar coding [Ar{\i}kan
10.1109/LCOMM.2011.061611.110862] with simplified successive cancelation
decoding [Alamdar-Yazdi-Kschischang 10.1109/LCOMM.2011.101811.111480], and then
analyzes the performance using [Guruswami-Xia arXiv:1304.4321] or
[Mondelli-Hassani-Urbanke arXiv:1501.02444].
|
We demonstrate a two-photon interference technique based on
polarization-resolved measurements for the simultaneous estimation, with the
maximum sensitivity achievable in nature, of multiple parameters associated
with the polarization state of two interfering photonic qubits. This estimation is
done by exploiting a novel interferometry technique based on
polarization-resolved two-photon interference. We show the experimental
feasibility and accuracy of this technique even when a limited number of
sampling measurements is employed. This work is relevant for the development of
quantum technologies with photonic qubits and sheds light on the physics at the
interface between multiphoton interference, boson sampling, multi-parameter
quantum sensing and quantum information processing.
|
The primary objective of this paper is to derive explicit formulas for rank
one and rank two Drinfeld modules over a specific domain denoted by A. This
domain corresponds to the projective line associated with an infinite place of
degree two. To achieve these goals, we construct a pair of standard Drinfeld
modules whose coefficients are in the Hilbert class field of A. We demonstrate
that the period lattice of the exponential functions corresponding to both
modules behaves similarly to the period lattice of the Carlitz module, the
standard rank one Drinfeld module defined over the rational function field.
Moreover, we employ Anderson's t-motive to obtain the complete family of rank
two Drinfeld modules. This family is parameterized by the invariant J =
\lambda^{q^2+1} which effectively serves as the counterpart of the j-invariant
for elliptic curves. Building upon the concepts introduced by van~der~Heiden,
particularly with regard to rank two Drinfeld modules, we are able to
reformulate the Weil pairing of Drinfeld modules of any rank using a
specialized polynomial in multiple variables known as the Weil operator. As an
illustrative example, we provide a detailed examination of a more explicit
formula for the Weil pairing and the Weil operator of rank two Drinfeld modules
over the domain A.
|
Hydrothermal liquefaction (HTL) could potentially utilize mixed plastic wastes
for sustainable biocrude production; however, the fate of different plastics
under identical HTL reaction conditions is largely unexplored. In this study,
we evaluate how synthetic waste polymers can be depolymerized into biocrude or
platform chemicals using HTL at conditions typical of expected future
commercial applications, with and without an alkali catalyst (potassium
hydroxide). We evaluate the HTL processing characteristics of
poly-acrylonitrile-butadiene-styrene (ABS), Bisphenol-A epoxy resin,
high-density polyethylene (HDPE), low-density PE (LDPE), polyamide 6 (PA6),
polyamide 66 (PA66), polyethylene terephthalate (PET), polycarbonate (PC),
polypropylene (PP), polystyrene (PS) and polyurethane (PUR) at 350 {\deg}C and
20 minutes residence time. Polyolefins and PS showed little depolymerization
due to the lack of reactive sites for hydrolysis. HTL of PC and epoxy yielded
predominantly bisphenol-A in the oil fraction and phenols in the aqueous phase.
PA6 and PA66 yielded one of their monomers, caprolactam, together with a range
of platform chemicals in the aqueous phase. PET yielded both of its original
monomers. PUR yielded a complex oil containing molecules similar to its
monomers as well as longer hydrocarbons. Our results show how HTL can
depolymerize several different synthetic polymers and highlight which of them
are most attractive, and which are unsuitable, for subcritical processing.
|
The identification of topological Weyl semimetals has recently gained
considerable attention. Here, we report the results of density-functional
theory calculations regarding the magnetic properties, the electronic
structure, and the intrinsic anomalous Hall conductivity of the title compound,
which was synthesized 50 years ago but has hitherto received little attention.
We found Cs$_{2}$Co$_{3}$S$_4$ to be a ferrimagnetic half-metal with
a total spin magnetic moment of about 3 $\mu_B$ per formula unit. It shows an
energy band gap of 0.36 eV in the majority-spin channel and a pseudo-gap at the
Fermi level in the minority-spin channel. We identified several sets of
low-energy Weyl points and traced their dependence on the direction of
magnetization. The intrinsic anomalous Hall conductivity is predicted to reach
a magnitude up to 500 $\Omega^{-1}$cm$^{-1}$, which is comparable to values
obtained in other celebrated Weyl semimetals.
|
Deep learning has emerged as an effective solution for solving the task of
object detection in images but at the cost of requiring large labeled datasets.
To mitigate this cost, semi-supervised object detection methods, which consist
in leveraging abundant unlabeled data, have been proposed and have already
shown impressive results. However, most of these methods require linking a
pseudo-label to a ground-truth object by thresholding. In previous works, this
threshold value is usually determined empirically, which is time consuming, and
only done for a single data distribution. When the domain, and thus the data
distribution, changes, a new and costly parameter search is necessary. In this
work, we introduce our method Adaptive Self-Training for Object Detection
(ASTOD), which is a simple yet effective teacher-student method. ASTOD
determines, at no cost, a threshold value based directly on the ground value of
the score histogram. To improve the quality of the teacher predictions, we also
propose a novel pseudo-labeling procedure. We use different views of the
unlabeled images during the pseudo-labeling step to reduce the number of missed
predictions and thus obtain better candidate labels. Our teacher and our
student are trained separately, and our method can be used in an iterative
fashion by replacing the teacher with the student. On the MS-COCO dataset, our
method consistently performs favorably against state-of-the-art methods that do
not require a threshold parameter, and shows competitive results with methods
that require a parameter sweep search. Additional experiments with respect to a
supervised baseline on the DIOR dataset containing satellite images lead to
similar conclusions, and prove that it is possible to adapt the score threshold
automatically in self-training, regardless of the data distribution. The code
is available at https://github.com/rvandeghen/ASTOD.
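
A minimal sketch of a histogram-derived threshold is given below; interpreting the "ground value" as the valley between the low- and high-confidence modes of the score histogram is our assumption, not necessarily the exact ASTOD procedure.

```python
import numpy as np

# Sketch of a threshold chosen from the score histogram, under the assumption
# that the "ground value" is the valley (lowest-count interior bin) between
# the low- and high-confidence modes.

def threshold_from_histogram(scores, bins=100):
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    interior = counts[1:-1]                      # search strictly inside (0, 1)
    valley = 1 + int(np.argmin(interior))
    return 0.5 * (edges[valley] + edges[valley + 1])

# Predictions whose confidence exceeds the returned threshold would then be
# kept as pseudo-labels for the student.
```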
|
We study the deconfinement of hadronic matter into quark matter in a
protoneutron star focusing on the effects of the finite size on the formation
of just-deconfined color superconducting quark droplets embedded in the
hadronic environment. The hadronic phase is modeled by the non-linear Walecka
model at finite temperature including the baryon octet and neutrino trapping.
For quark matter we use an $SU(3)_f$ Nambu-Jona-Lasinio model including color
superconductivity. The finite size effects on the just deconfined droplets are
considered in the frame of the multiple reflection expansion. In addition, we
consider that just-deconfined quark matter is transitorily out of equilibrium
with respect to the weak interaction, and we impose color neutrality and flavor
conservation during the transition. We self-consistently calculate the surface
tension and curvature energy density of the quark-hadron interphase and find
that they are larger than the values typically assumed in the literature. The
transition density is calculated for drops of different sizes, and at different
temperatures and neutrino trapping conditions. Then, we show that
energy-density fluctuations are much more relevant for deconfinement than
temperature and neutrino density fluctuations. We calculate the critical size
spectrum of energy-density fluctuations that allows deconfinement as well as
the nucleation rate of each critical bubble. We find that drops with radii
smaller than 800 fm can be formed at a huge rate when matter reaches the bulk
transition limit of 5-6 times the nuclear saturation density.
|
We present, for the first time, an \textit{ab initio} calculation of the
individual up, down and strange quark helicity parton distribution functions
for the proton. The calculation is performed within the twisted mass
clover-improved fermion formulation of lattice QCD using one ensemble of
dynamical up, down, strange and charm quarks with a pion mass of 260 MeV. The
lattice matrix elements are non-perturbatively renormalized and the final
results are presented in the $\overline{ \rm MS}$ scheme at a scale of 2 GeV.
We give results for $\Delta u^+(x)$ and $\Delta d^+(x)$, including
disconnected quark loop contributions, as well as for $\Delta s^+(x)$. For
the latter we achieve unprecedented precision compared to the phenomenological
estimates.
|
We present recent results for radiative and electroweak penguin decays of $B$
mesons at Belle. Measurements of differential branching fraction, isospin
asymmetry, $K^*$ polarization, and forward-backward asymmetry as functions of
$q^2$ for $B \to K^{(*)}ll$ decays are reported. For the results of the
radiative process, we report measurements of branching fractions for inclusive
$B\to X_s \gamma$ and the exclusive $B\to K \eta' \gamma$ modes.
|
In the drying process of a paste, we can imprint into the paste the order in
which it will break in the future. That is, if we vibrate the paste before it
dries, it remembers the direction of the initial external vibration, and the
morphology of the resultant crack patterns is determined solely by the memory
of that direction. The morphological phase diagram of crack patterns and the
rheological measurement of the paste show that this memory effect is induced by
the plasticity of paste.
|
A subgroup Q is commensurated in a group G if each G-conjugate of Q
intersects Q in a group that has finite index in both Q and the conjugate. So
commensurated subgroups are similar to normal subgroups. Semistability and
simple connectivity at infinity are geometric asymptotic properties of finitely
presented groups. In this paper we generalize several of the classic
semistability and simple connectivity at infinity results for finitely
presented groups. In particular, we show that if a finitely generated group G
contains an infinite finitely generated commensurated subgroup Q of infinite
index in G, then G is semistable at infinity. If additionally G and Q are
finitely presented and either Q is 1-ended or the pair (G,Q) has one filtered
end, then G is simply connected at infinity. This result leads to a relatively
short proof of V. M. Lew's theorem that finitely presented groups with infinite
finitely generated subnormal subgroups of infinite index are semistable at
infinity.
|
Many systems generate data as a set of triplets (a, b, c): they may represent
that user a called b at time c or that customer a purchased product b in store
c. These datasets are traditionally studied as networks with an extra dimension
(time or layer), for which the fields of temporal and multiplex networks have
extended graph theory to account for the new dimension. However, such
frameworks detach one variable from the others and allow the same concept to
be extended in many ways, making it hard to capture patterns across all
dimensions and to identify the best definitions for a given dataset. This
extended abstract departs from this view and proposes a direct processing of
the set of triplets. In particular, our work shows that a more general analysis is
possible by partitioning the data and building categorical propositions that
encode informative patterns. We show that several concepts from graph theory
can be framed under this formalism and leverage such insights to extend the
concepts to data triplets. Lastly, we propose an algorithm to list propositions
satisfying specific constraints and apply it to a real-world dataset.
|
This paper provides a systematic and comprehensive survey that reviews the
latest research efforts focused on machine learning (ML) based performance
improvement of wireless networks, while considering all layers of the protocol
stack (PHY, MAC and network). First, the related work and paper contributions
are discussed, followed by providing the necessary background on data-driven
approaches and machine learning for non-machine learning experts to understand
all discussed techniques. Then, a comprehensive review is presented on works
employing ML-based approaches to optimize the wireless communication parameters
settings to achieve improved network quality-of-service (QoS) and
quality-of-experience (QoE). We first categorize these works into: radio
analysis, MAC analysis and network prediction approaches, followed by
subcategories within each. Finally, open challenges and broader perspectives
are discussed.
|
Manifold learning methods play a prominent role in nonlinear dimensionality
reduction and other tasks involving high-dimensional data sets with low
intrinsic dimensionality. Many of these methods are graph-based: they associate
a vertex with each data point and a weighted edge with each pair. Existing
theory shows that the Laplacian matrix of the graph converges to the
Laplace-Beltrami operator of the data manifold, under the assumption that the
pairwise affinities are based on the Euclidean norm. In this paper, we
determine the limiting differential operator for graph Laplacians constructed
using $\textit{any}$ norm. Our proof involves an interplay between the second
fundamental form of the manifold and the convex geometry of the given norm's
unit ball. To demonstrate the potential benefits of non-Euclidean norms in
manifold learning, we consider the task of mapping the motion of large
molecules with continuous variability. In a numerical simulation we show that a
modified Laplacian eigenmaps algorithm, based on the Earthmover's distance,
outperforms the classic Euclidean Laplacian eigenmaps, both in terms of
computational cost and the sample size needed to recover the intrinsic
geometry.
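
The sketch below illustrates Laplacian eigenmaps built from affinities under an arbitrary norm; the Gaussian kernel, the unnormalized Laplacian, and the dense eigendecomposition are simplifying assumptions rather than the paper's exact construction.

```python
import numpy as np

# Minimal Laplacian-eigenmaps sketch in which the pairwise affinities are
# built from an arbitrary norm. The Gaussian kernel and the unnormalized
# graph Laplacian are illustrative choices, not the paper's construction.

def laplacian_eigenmaps(X, norm, epsilon, n_components=2):
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = norm(X[i] - X[j])            # any norm, e.g. L1, Linf, Earthmover
            W[i, j] = W[j, i] = np.exp(-d**2 / epsilon)
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:1 + n_components]     # skip the constant eigenvector

# Usage with the L1 norm as an example:
# Y = laplacian_eigenmaps(X, norm=lambda v: np.abs(v).sum(), epsilon=0.1)
```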
|
The propagation of cosmic rays was studied in detail using the HelMod-2D Monte
Carlo code, which includes a general description of the diffusion tensor and
the polar magnetic field. The numerical approach used in this work is based on
a set of stochastic differential equations fully equivalent to the well-known
Parker equation for the transport of cosmic rays. In our approach, the
diffusion tensor in the frame of the magnetic-field turbulence does not depend
explicitly on solar latitude but varies with time through a diffusion parameter
obtained from neutron monitors. The parameters of the model were tuned using
data from solar cycle 23 and the Ulysses latitudinal fast scan in 1995. The
present parametrization reproduces well the observed latitudinal gradient of
protons and the southward shift of the minimum of the latitudinal intensity. A
description of the model is also available online at www.helmod.org.
The model was then applied to the PAMELA/Ulysses proton intensity from 2006 up
to 2009. Over this 4-year continuous period the model agrees well with both
PAMELA (at 1 AU) and Ulysses data (at various solar distances and solar
latitudes). The agreement improves when considering the ratio between these
data sets. Studies performed with particles of different charge (e.g.,
electrons) also allow us to explain the presence (or absence) of the proton and
electron latitudinal gradients observed by Ulysses during the latitudinal fast
scans in 1995 and 2007.
|
Near-Earth asteroid (3200) Phaethon is an active asteroid with a dust tail
repeatedly observed over the past decade for 3 days during each perihelion
passage down to a heliocentric distance of 0.14 au. The mechanism causing the
activity is still debated, and the suggested mechanisms lack clear supporting
evidence. Phaethon has been identified as the likely parent body of the annual
Geminid meteor shower, making it one of the few active asteroids associated
with a meteoroid stream. Its low albedo and B-type reflectance spectrum
indicate that Phaethon's composition is similar to carbonaceous chondrite
meteorites, but a connection to a specific meteorite group is ambiguous due to
the lack of diagnostic absorption features. In this study, we analyze the
mid-infrared emissivity spectrum of Phaethon and find that it is closely
associated with the Yamato-group (CY) of carbonaceous chondrites. The CY
chondrites represent primitive carbonaceous material that experienced early
aqueous alteration and subsequent late-stage thermal metamorphism. Minerals in
these meteorites, some of which we identify in Phaethon's spectrum, show
evidence of thermal decomposition; notably, the dehydroxylation and
transformation of phyllosilicates into poorly crystalline olivine.
Additionally, sulfides and carbonates in CYs are known to release S2 and CO2
gas upon heating to ~700 {\deg}C. We show that Phaethon's surface temperature during its
observed window of activity is consistent with the thermal decomposition
temperatures of several components in CY meteorites. All of these lines of
evidence are strong indicators that gas release from thermal decomposition
reactions is responsible for Phaethon's activity. The results of this study
have implications for the formation of the Geminid meteoroid stream, the
origins of thermally-altered primitive meteorites, and the destruction of
low-perihelion asteroids.
|
We report the development and detailed calibration of a multiphoton
fluorescence lifetime imaging system (FLIM) using a streak camera. The present
system is versatile with high spatial (0.2 micron) and temporal (50 psec)
resolution and allows rapid data acquisition and reliable and reproducible
lifetime determinations. The system was calibrated with standard fluorescent
dyes and the lifetime values obtained were in very good agreement with values
reported in literature for these dyes. We also demonstrate the applicability of
the system to FLIM studies in cellular specimens including stained pollen
grains and fibroblast cells expressing green fluorescent protein. The lifetime
values obtained matched well with those reported earlier by other groups for
these same specimens. Potential applications of the present system include the
measurement of intracellular physiology and Fluorescence Resonance Energy
Transfer (FRET) imaging which are discussed in the context of live cell
imaging.
|
In my previous paper on the SRWS-$\zeta$ theory [Y. Ueoka, viXra:2205.014,
2022], I proposed an approximation of a roughly averaged summation of the
typical critical Green function for the Anderson transition in the orthogonal
class. In this paper, I remove the rough approximate summation for the series
of the typical critical Green function by replacing the summation with an
integral. A Pade approximant is used to take the summation. The perturbation
series of the critical exponent $\nu$ of the localization length from the
upper critical dimension is obtained. The dimensional dependence of the
critical exponent is again directly related to the Riemann $\zeta$ function.
The degree of freedom in the lower critical exponent improves the estimate
compared with previous studies. When I fix the lower critical dimension equal
to two, I obtain an estimate of the critical exponent similar to the
fitting-curve estimate of the critical exponent [E. Tarquini et al., Phys.
Rev. B 95 (2017) 094204].
|
We discuss a variant of Thompson sampling for nonparametric reinforcement
learning in countable classes of general stochastic environments. These
environments can be non-Markov, non-ergodic, and partially observable. We show
that Thompson sampling learns the environment class in the sense that (1)
asymptotically its value converges to the optimal value in mean and (2) given a
recoverability assumption, its regret is sublinear.
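
The schematic loop below sketches Thompson sampling over a countable environment class: sample a model from the posterior, act optimally for it, and reweight the posterior by predictive likelihood. The environment interface, the `optimal_policy` oracle, and the per-model `likelihood` method are placeholders, not the paper's construction.

```python
import random

# Schematic Thompson-sampling sketch over a countable class of environment
# models; the interfaces used here (env.step, policy(history), m.likelihood)
# are hypothetical placeholders.

def thompson_sampling(env, prior, optimal_policy, episode_len, n_episodes):
    posterior = dict(prior)                  # weights over the countable class
    history = []
    for _ in range(n_episodes):
        # Sample a model from the current posterior and act optimally for it.
        models, probs = zip(*posterior.items())
        model = random.choices(models, weights=probs)[0]
        policy = optimal_policy(model)
        for _ in range(episode_len):
            action = policy(history)
            observation, reward = env.step(action)   # hypothetical interface
            history.append((action, observation, reward))
        # Bayesian update: reweight every model by how well it predicts the history.
        scores = {m: prior[m] * m.likelihood(history) for m in prior}
        total = sum(scores.values())
        posterior = {m: s / total for m, s in scores.items()}
    return history
```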
|
We derive a strong bound on the chromo-electric dipole moment of the charm
quark, and we quantify its impact on models that allow for a sizeable flavour
violation in the up quark sector. In particular we show how the constraints
coming from the charm and up CEDMs limit the size of new physics contributions
to direct flavour violation in D meson decays. We also specialize our analysis
to the cases of split-families Supersymmetry and composite Higgs models. The
results we present motivate an increase in experimental sensitivity to
fundamental hadronic dipoles, and a further exploration of the SM contribution
to both flavour violating D decays and nuclear electric dipole moments.
|
We report experimental studies of the parametric excitation of second
sound (SS) by first sound (FS) in superfluid helium in a resonance cavity.
The results on several topics in this system are presented: (i) The linear
properties of the instability, namely, the threshold, its temperature and
geometrical dependencies, and the spectra of SS just above the onset were
measured. They were found to be in good quantitative agreement with the
theory. (ii) It was shown that the mechanism of SS amplitude saturation is due
to the nonlinear attenuation of SS via three wave interactions between the SS
waves. Strong low frequency amplitude fluctuations of SS above the threshold
were observed. The spectra of these fluctuations had a universal shape with
exponentially decaying tails. Furthermore, the spectral width grew continuously
with the FS amplitude. The roles of three- and four-wave interactions are
discussed with respect to the nonlinear SS behavior. The first evidence of
Gaussian statistics of the wave amplitudes for the parametrically generated
wave ensemble was obtained. (iii) The experiments on simultaneous pumping of
the FS and independent SS waves revealed new effects. Below the instability
threshold, the SS phase conjugation as a result of three-wave interactions
between the FS and SS waves was observed. Above the threshold two new effects
were found: a giant amplification of the SS wave intensity and strong resonance
oscillations of the SS wave amplitude as a function of the FS amplitude.
Qualitative explanations of these effects are suggested.
|
Magnetic nanoparticles are useful in many medical applications because they
interact with biology on a cellular level thus allowing microenvironmental
investigation. An enhanced understanding of the dynamics of magnetic particles
may lead to advances in imaging directly in magnetic particle imaging (MPI) or
through enhanced MRI contrast and is essential for nanoparticle sensing as in
magnetic spectroscopy of Brownian motion (MSB). Moreover, therapeutic
techniques like hyperthermia require information about particle dynamics for
effective, safe, and reliable use in the clinic. To that end, we have developed
and validated a stochastic dynamical model of rotating Brownian nanoparticles
from a Langevin equation approach. With no field, the relaxation time toward
equilibrium matches Einstein's model of Brownian motion. In a static field, the
equilibrium magnetization agrees with the Langevin function. For high frequency
or low amplitude driving fields, behavior characteristic of the linearized
Debye approximation is reproduced. In a higher field regime where magnetic
saturation occurs, the magnetization and its harmonics compare well with the
effective field model. On another level, the model has been benchmarked against
experimental results, successfully demonstrating that harmonics of the
magnetization carry enough information to infer environmental parameters like
viscosity and temperature.
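
A toy Euler-Maruyama sketch of such a rotational Langevin model is given below; the drag coefficient, parameter values, and the way the thermal torque noise enters are illustrative assumptions, not the validated model from the paper.

```python
import numpy as np

# Toy Euler-Maruyama sketch of rotational Brownian dynamics for the unit
# magnetic moment of a single nanoparticle in an applied field H. The drag
# coefficient 6*eta*V and the isotropic thermal angular-velocity noise are
# simplifying assumptions.

def simulate_moment(H, mu, eta, V, T, dt, n_steps, kB=1.380649e-23):
    gamma_r = 6.0 * eta * V                   # rotational drag coefficient
    D_r = kB * T / gamma_r                    # rotational diffusion coefficient
    m = np.array([0.0, 0.0, 1.0])             # unit moment direction
    trajectory = [m.copy()]
    for _ in range(n_steps):
        torque = mu * np.cross(m, H)                           # deterministic torque
        omega = torque / gamma_r                               # drift angular velocity
        omega += np.sqrt(2.0 * D_r / dt) * np.random.randn(3)  # thermal kicks
        m = m + np.cross(omega, m) * dt                        # rotate the moment
        m /= np.linalg.norm(m)                                 # keep |m| = 1
        trajectory.append(m.copy())
    return np.array(trajectory)
```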
|
Federated Learning (FL) is a distributed learning paradigm that empowers edge
devices to collaboratively learn a global model leveraging local data.
Simulating FL on GPU is essential to expedite FL algorithm prototyping and
evaluations. However, current FL frameworks overlook the disparity between
algorithm simulation and real-world deployment, which arises from heterogeneous
computing capabilities and imbalanced workloads, thus misleading evaluations of
new algorithms. Additionally, they lack flexibility and scalability to
accommodate resource-constrained clients. In this paper, we present FedHC, a
scalable federated learning framework for heterogeneous and
resource-constrained clients. FedHC realizes system heterogeneity by allocating
a dedicated and constrained GPU resource budget to each client, and also
simulates workload heterogeneity in terms of framework-provided runtime.
Furthermore, we enhance GPU resource utilization for scalable clients by
introducing a dynamic client scheduler, process manager, and resource-sharing
mechanism. Our experiments demonstrate that FedHC has the capability to capture
the influence of various factors on client execution time. Moreover, despite
resource constraints for each client, FedHC achieves state-of-the-art
efficiency compared to existing frameworks running without resource limits.
When subjecting existing frameworks to the same resource constraints, FedHC achieves a 2.75x
speedup. Code has been released on https://github.com/if-lab-repository/FedHC.
|
The alpha complex is a subset of the Delaunay triangulation and is often used
in computational geometry and topology. One of the main drawbacks of using the
alpha complex is that it is non-monotone, in the sense that if ${\cal
X}\subset{\cal X}'$ it is not necessarily (and generically not) the case that
the corresponding alpha complexes satisfy ${\cal A}_r({\cal X})\subset{\cal
A}_r({\cal X}')$. The lack of monotonicity may introduce significant
computational costs when using the alpha complex, and in some cases even render
it unusable. In this work we present a new construction based on the alpha
complex, that is homotopy equivalent to the alpha complex while maintaining
monotonicity. We provide the formal definitions and algorithms required to
construct this complex, and to compute its homology. In addition, we analyze
the size of this complex in order to argue that it is not significantly more
costly to use than the standard alpha complex.
|
Reliable generation of single photons is of key importance for fundamental
physical experiments and to demonstrate quantum technologies. Waveguide-based
photon pair sources have shown great promise in this regard due to their large
degree of spectral tunability, high generation rates and long photon coherence
times. However, for such a source to have real-world applications it needs to
be efficiently integrated with fiber-optic networks. We answer this challenge
by presenting an alignment-free source of photon pairs in the
telecommunications band that maintains heralding efficiency > 50 % even after
fiber pigtailing, photon separation, and pump suppression. The source combines
this outstanding performance in heralding efficiency and brightness with a
compact, stable, and easy-to-use 'plug & play' package: one simply connects a
laser to the input and detectors to the output and the source is ready to use.
This high performance can be achieved even outside the lab without the need for
alignment which makes the source extremely useful for any experiment or
demonstration needing heralded single photons.
|
We focus on elliptic quasi-variational inequalities (QVIs) of obstacle type
and prove a number of results on the existence of solutions, directional
differentiability and optimal control of such QVIs. We give three existence
theorems based on an order approach, an iteration scheme and a sequential
regularisation through partial differential equations. We show that the
solution map taking the source term into the set of solutions of the QVI is
directionally differentiable for general data and locally Hadamard
differentiable obstacle mappings, thereby extending in particular the results
of our previous work which provided the first differentiability result for QVIs
in infinite dimensions. Optimal control problems with QVI constraints are also
considered and we derive various forms of stationarity conditions for control
problems, thus supplying some of the first such results in this area.
|
Epitaxial SrTi1-xVxO3 thin films with thicknesses of ~16 nm were grown on
(001)-oriented LSAT substrates using the pulsed electron-beam deposition
technique. The transport study revealed a temperature driven metal-insulator
transition (MIT) at 95 K for the film with x = 0.67. The films with higher
vanadium concentration (x > 0.67) were metallic, and the electrical resistivity
followed the T^2 law corresponding to a Fermi liquid system. In the insulating
region of x < 0.67, the temperature dependence of electrical resistivity for
the x = 0.5 and 0.33 films can be scaled with the variable range hopping model.
The possible mechanisms behind the observed MIT were discussed, including the
effects of electron correlation, lattice distortion and Anderson localization.
|
Assuming that a Higgs sector is responsible for electroweak symmetry
breaking, we attempt to address two important questions: How much more precise
are various measurements of Higgs boson properties at a future linear collider
than at the LHC? What can a future linear collider do for Higgs
physics that the LHC cannot?
|
We study complete convergence and closely related
Hsu-Robbins-Erd\H{o}s-Spitzer-Baum-Katz series for sums whose terms are
elements of linear autoregression sequences. We obtain criteria for the
convergence of these series, expressed in terms of moment assumptions, which
for "weakly dependent" sequences are the same as in classical results
concerning the independent case.
|
In this article we study the inverse problem of recovering a space-dependent
coefficient of the Moore-Gibson-Thompson (MGT) equation, from knowledge of the
trace of the solution on some open subset of the boundary. We obtain the
Lipschitz stability for this inverse problem, and we design a convergent
algorithm for the reconstruction of the unknown coefficient. The techniques
used are based on Carleman inequalities for wave equations and properties of
the MGT equation.
|
We have obtained deep near-infrared images in J and K filters of four fields
in the Sculptor Group spiral galaxy NGC 247 with the ESO VLT and ISAAC camera.
For a sample of ten Cepheids in these fields, previously discovered by
Garc{\'i}a-Varela et al. from optical wide-field images, we have determined
mean J and K magnitudes and have constructed the period-luminosity (PL)
relations in these bands. Using the near-infrared PL relations together with
those in the optical V and I bands, we have determined a true distance modulus
for NGC 247 of 27.64 mag, with a random uncertainty of $\pm$2% and a systematic
uncertainty of $\sim$4% which is dominated by the effect of unresolved stars on
the Cepheid photometry. The mean reddening affecting the NGC 247 Cepheids of
E(B-V) = 0.18 $\pm$ 0.02 mag is mostly produced in the host galaxy itself and
is significantly higher than what was found in the previous optical Cepheid
studies in NGC 247 of our own group, and Madore et al., leading to a 7%
decrease in the previous optical Cepheid distance. As in other studies of our
project, the distance modulus of NGC 247 we report is tied to an assumed LMC
distance modulus of 18.50. Comparison with other distance measurements to NGC
247 shows that the present IR-based Cepheid distance is the most accurate among
these determinations.
With a distance of 3.4 Mpc, NGC 247 is about 1.5 Mpc more distant than NGC 55
and NGC 300, two other Sculptor Group spirals analyzed before with the same
technique by our group.
|
We investigate the copolymerization behavior of a two-component system into
quasi-linear self-assemblies under conditions in which interspecies binding is
favored over identical-species binding. The theoretical framework is based on a
coarse-grained self-assembled Ising model with nearest neighbor interactions.
In Ising language, such conditions correspond to the anti-ferromagnetic case
giving rise to copolymers with predominantly alternating configurations. In the
strong coupling limit, we show that the maximum fraction of polymerized
material and the average length of strictly alternating copolymers depend on
the stoichiometric ratio and the activation free energy of the more abundant
species. They are substantially reduced when the stoichiometric ratio
noticeably differs from unity. Moreover, for stoichiometric ratios close to
unity, the copolymerization critical concentration is remarkably lower than the
homopolymerization critical concentration of either species. We further analyze
the polymerization behavior for a finite and negative coupling constant and
characterize the composition of supramolecular copolymers. Our theoretical
insights rationalize experimental results of supramolecular polymerization of
oppositely charged monomeric species in aqueous solutions.
|
Experimental evidence of mode-selective evanescent power coupling at
telecommunication frequencies with efficiencies up to 75 % from a tapered
optical fiber to a carefully designed metal nanoparticle plasmon waveguide is
presented. The waveguide consists of a two-dimensional square lattice of
lithographically defined Au nanoparticles on an optically thin silicon
membrane. The dispersion and attenuation properties of the waveguide are
analyzed using the fiber taper. The high efficiency of power transfer into
these waveguides solves the coupling problem between conventional optics and
plasmonic devices and could lead to the development of highly efficient
plasmonic sensors and optical switches.
|
In voting contexts, some new candidates may show up in the course of the
process. In this case, we may want to determine which of the initial candidates
are possible winners, given that a fixed number $k$ of new candidates will be
added. We give a computational study of this problem, focusing on scoring
rules, and we provide a formal comparison with related problems such as control
via adding candidates or cloning.
|
This work is part of the BinaMIcS project, the aim of which is to understand
the interaction between binarity and magnetism in close binary systems. All the
studied spectroscopic binaries targeted by the BinaMIcS project encompass hot
massive and intermediate-mass stars on the main sequence, as well as cool stars
over a wide range of evolutionary stages. The present paper focuses on the
binary system FK Aqr, which is composed of two early M dwarfs. Both stars are
already known to be magnetically active based on their light curves and
detected flare activity. In addition, the two components have large convective
envelopes with masses just above the fully convective limit, making the system
an ideal target for studying the effect of binarity on stellar dynamos. We use
spectropolarimetric observations obtained with ESPaDOnS at CFHT in September
2014. Mean Stokes I and V line profiles are extracted using the least-squares
deconvolution (LSD) method. The radial velocities of the two components are
measured from the LSD Stokes I profiles and are combined with interferometric
measurements in order to constrain the orbital parameters of the system. The
longitudinal magnetic fields Bl and chromospheric activity indicators are
measured from the LSD mean line profiles. The rotational modulation of the
Stokes V profiles is used to reconstruct the surface magnetic field structures
of both stars via the Zeeman Doppler imaging (ZDI) inversion technique. Maps of
the surface magnetic field structures of both components of FK Aqr are
presented for the first time. Our study shows that both components host similar
large-scale magnetic fields of moderate intensity (Bmean ~ 0.25 kG); both are
predominantly poloidal and feature a strong axisymmetric dipolar component.
(abridged)
|
In highly distributed Internet measurement systems distributed agents
periodically measure the Internet using a tool called {\tt traceroute}, which
discovers a path in the network graph. Each agent performs many traceroute
measurements to a set of destinations in the network, and thus reveals a portion
of the Internet graph as it is seen from the agent locations. In every period
we need to check whether previously discovered edges still exist in this
period, a process termed {\em validation}. To this end we maintain a database
of all the different measurements performed by each agent. Our aim is to be
able to {\em validate} the existence of all previously discovered edges in the
minimum possible time. In this work we formulate the validation problem as a
generalization of the well-known set cover problem. We reduce the set cover
problem to the validation problem, thus proving that the validation problem is
${\cal NP}$-hard. We present a $O(\log n)$-approximation algorithm to the
validation problem, where $n$ is the number of edges that need to be validated.
We also show that unless ${\cal P = NP}$ the approximation ratio of the
validation problem is $\Omega(\log n)$.
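
A minimal greedy sketch in the spirit of the $O(\log n)$-approximation is shown below; representing each measurement as the set of edges it traverses is an assumption for illustration.

```python
# Greedy sketch for the validation problem cast as set cover: each available
# measurement (an agent-destination traceroute) covers a set of previously
# discovered edges; repeatedly pick the measurement covering the most
# still-unvalidated edges. The data layout is an illustrative assumption.

def greedy_validation(measurements, edges_to_validate):
    """measurements: dict mapping measurement id -> set of edges it traverses."""
    uncovered = set(edges_to_validate)
    chosen = []
    while uncovered:
        best = max(measurements, key=lambda m: len(measurements[m] & uncovered))
        gain = measurements[best] & uncovered
        if not gain:                 # remaining edges cannot be validated
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```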
|
Noncommutative rational functions appeared in many contexts in system theory
and control, from the theory of finite automata and formal languages to robust
control and LMIs. We survey the construction of noncommutative rational
functions, their realization theory and some of their applications. We also
develop a difference-differential calculus as a tool for further analysis.
|
PSR J2129-0429 is a "redback" eclipsing millisecond pulsar binary with an
unusually long 15.2 hour orbit. It was discovered by the Green Bank Telescope
in a targeted search of unidentified Fermi gamma-ray sources. The pulsar
companion is optically bright (mean $m_R = 16.6$ mag), allowing us to construct
the longest baseline photometric dataset available for such a system. We
present ten years of archival and new photometry of the companion from LINEAR,
CRTS, PTF, the Palomar 60-inch, and LCOGT. Radial velocity spectroscopy using
the Double-Beam Spectrograph on the Palomar 200-inch indicates that the pulsar
is massive: $1.74\pm0.18 M_\odot$. The G-type pulsar companion has mass
$0.44\pm0.04 M_\odot$, one of the heaviest known redback companions. It is
currently 95\% Roche-lobe filling and only mildly irradiated by the pulsar. We
identify a clear 13.1 mmag yr$^{-1}$ secular decline in the mean magnitude of
the companion as well as smaller-scale variations in the optical lightcurve
shape. This behavior may indicate that the companion is cooling. Binary
evolution calculations indicate that PSR J2129-0429 has an orbital period
almost exactly at the bifurcation period between systems that converge into
tighter orbits as black widows and redbacks and those that diverge into wider
pulsar--white dwarf binaries. Its eventual fate may depend on whether it
undergoes future episodes of mass transfer and increased irradiation.
|
We investigate the dynamic structure factor of atomic Bose and Fermi gases in
one-dimensional optical lattices at zero temperature. The focus is on the
generic behaviour of S(k,omega) as a function of filling and interaction strength
with the aim of identifying possible experimental signatures for the different
quantum phase transitions. We employ the Hubbard or Bose-Hubbard model and
solve the eigenvalue problem of the Hamiltonian exactly for moderate lattice
sizes. This allows us to determine the dynamic structure factor and other
observables directly in the phase transition regime, where approximation
schemes are generally not applicable. We discuss the characteristic signatures
of the various quantum phases appearing in the dynamic structure factor and
illustrate that the centroid of the strength distribution can be used to
estimate the relevant excitation gaps. Employing sum rules, these quantities
can be evaluated using ground state expectation values only. Important
differences between bosonic and fermionic systems are observed, e.g., regarding
the origin of the excitation gap in the Mott-insulator phase.
|
A Kerr type solution in the Regge calculus is considered. It is assumed that
the discrete general relativity, the Regge calculus, is quantized within the
path integral approach. The only consequence of this approach used here is the
existence of a length scale at which edge lengths are loosely fixed, as
considered in our earlier paper.
In addition, we previously considered the Regge action on a simplicial
manifold on which the vertices are coordinatized and the corresponding
piecewise constant metric introduced, and found that for the simplest periodic
simplicial structure and in the leading order over metric variations between
4-simplices, this reduces to a finite-difference form of the Hilbert-Einstein
action.
The problem of solving the corresponding discrete Einstein equations
(classical) with a length scale (having a quantum nature) arises as the problem
of determining the optimal background metric for the perturbative expansion
generated by the functional integral. Using a one-complex-function ansatz for
the metric, which reduces to the Kerr-Schild metric in the continuum, we find a
discrete metric that approximates the continuum one at large distances and is
nonsingular on the (former) singularity ring. The effective curvature
$R_{\lambda \mu \nu \rho}$, including where $R_{\lambda \mu} \neq 0$ (gravity
sources), is analyzed with a focus on the vicinity of the singularity ring.
|
The difficulty in quantifying the benefit of Structural Health Monitoring
(SHM) for decision support is one of the bottlenecks to an extensive adoption
of SHM on real-world structures. In this paper, we present a framework for such
a quantification of the value of vibration-based SHM, which can be flexibly
applied to different use cases. These cover SHM-based decisions at different
time scales, from near-real time diagnostics to the prognosis of slowly
evolving deterioration processes over the lifetime of a structure. The
framework includes an advanced model of the SHM system. It employs a Bayesian
filter for the tasks of sequential joint deterioration state-parameter
estimation and structural reliability updating, using continuously identified
modal and intermittent visual inspection data. It also includes a realistic
model of the inspection and maintenance decisions throughout the structural
life-cycle. On this basis, the Value of SHM is quantified by the difference in
expected total life-cycle costs with and without the SHM. We investigate the
framework through application to a numerical model of a two-span bridge system,
subjected to gradual and shock deterioration, as well as to changing
environmental conditions, over its lifetime. The results show that this
framework can be used as an a-priori decision support tool to inform the
decision on whether or not to install a vibration-based SHM system on a
structure, for a wide range of SHM use cases.
|
The Temperley-Lieb and Brauer algebras and their cyclotomic analogues, as
well as the partition algebra, are all examples of twisted semigroup algebras.
We prove a general theorem about the cellularity of twisted semigroup algebras
of regular semigroups. This theorem, which generalises a recent result of East
about semigroup algebras of inverse semigroups, allows us to easily reproduce
the cellularity of these algebras.
|
We investigate the convexity property on $(0,1)$ of the function
$$f_a(x)=\frac{{\cal K}{(\sqrt x)}}{a-(1/2)\log(1-x)}.$$ We show that $f_a$ is
strictly convex on $(0,1)$ if and only if $a\geq a_c$ and $1/f_a$ is strictly
convex on $(0,1)$ if and only if $a\leq\log 4$, where $a_c$ is some critical
value. The second main result of the paper is to study the log-convexity and
log-concavity of the function $$h_p(x)=(1-x)^p{\cal K}(\sqrt x).$$ We prove
that $h_p$ is strictly log-concave on $(0,1)$ if and only if $p\geq 7/32$ and
strictly log-convex if and only if $p\leq 0$. This solves some problems posed
by Yang and Tian and completes their results as well as a result of Alzer and Richards
that $f_a$ is strictly concave on $(0,1)$ if and only if $a=4/3$ and $1/f_a$ is
strictly concave on $(0,1)$ if and only if $a\geq 8/5$. As applications of the
convexity and concavity, we establish among other inequalities, that for $a\geq
a_c$ and all $r\in(0,1)$ $$\frac{2\pi\sqrt\pi}{(2a+\log 2)\Gamma(3/4)^2}\leq
\frac{{\cal K}(\sqrt r)}{a-\frac12\log (r)}+\frac{{\cal
K}(\sqrt{1-r})}{a-\frac12\log (1-r)}<1+\frac\pi{2a},$$ and for $p\geq 3(2+\sqrt
2)/8$ and all $r\in(0,1)$ $$\sqrt{(r-r^2)^p{\cal K}(\sqrt{1-r}){\cal K}(\sqrt
r)}< \frac{\pi\sqrt\pi}{2^{p+1}\Gamma(3/4)^2}<\frac{r^p{\cal
K}(\sqrt{1-r})+(1-r)^p{\cal K}(\sqrt r)}{2}.$$
|
We prove that functions of intrinsic-mode type (a classical model for
signals) behave essentially like holomorphic functions: adding a pure carrier
frequency $e^{int}$ ensures that the anti-holomorphic part is much smaller than
the holomorphic part $ \| P_{-}(f)\|_{L^2} \ll \|P_{+}(f)\|_{L^2}.$ This
enables us to use techniques from complex analysis, in particular the
\textit{unwinding series}. We study its stability and convergence properties
and show that the unwinding series can provide a high-resolution
time-frequency representation which is robust to noise.
|
We establish some deviation inequalities, moment bounds and almost sure
results for the Wasserstein distance of order p $\in$ [1, $\infty$) between the
empirical measure of independent and identically distributed $\mathbb{R}^d$-valued random
variables and the common distribution of the variables. We only assume the
existence of a (strong or weak) moment of order rp for some r > 1, and we
discuss the optimality of the bounds. Mathematics subject classification.
60B10, 60F10, 60F15, 60E15.
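
As a one-dimensional illustration only (the paper treats $\mathbb{R}^d$-valued variables), the sketch below computes the order-$p$ Wasserstein distance between an empirical measure and the true distribution from the quantile representation $W_p^p = \int_0^1 |F_n^{-1}(u) - F^{-1}(u)|^p \, du$.

```python
import numpy as np

# One-dimensional illustration (an assumption: the abstract concerns R^d):
# approximate the order-p Wasserstein distance between the empirical measure
# of an i.i.d. sample and the true distribution via its quantile function.

def empirical_wasserstein_p(sample, true_quantile, p=1, grid_size=10_000):
    u = (np.arange(grid_size) + 0.5) / grid_size
    emp_quantile = np.quantile(sample, u)            # empirical quantile function
    return np.mean(np.abs(emp_quantile - true_quantile(u)) ** p) ** (1.0 / p)

# Example with a standard normal target distribution:
# from scipy.stats import norm
# d = empirical_wasserstein_p(np.random.randn(1000), norm.ppf, p=2)
```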
|
Single-molecule experiments on single- and double-stranded DNA have sparked a
renewed interest in the force-extension behavior of polymers. The extensible Freely
Jointed Chain (FJC) model is frequently invoked to explain the observed
behavior of single-stranded DNA. We demonstrate that this model does not
satisfactorily describe recent high-force stretching data. We instead propose a
model (the Discrete Persistent Chain, or ``DPC'') that borrows features from
both the FJC and the Wormlike Chain, and show that it resembles the data more
closely. We find that most of the high-force behavior previously attributed to
stretch elasticity is really a feature of the corrected entropic elasticity;
the true stretch compliance of single-stranded DNA is several times smaller
than that found by previous authors. Next we elaborate our model to allow
coexistence of two conformational states of DNA, each with its own stretch and
bend elastic constants. Our model is computationally simple, and gives an
excellent fit through the entire overstretching transition of nicked,
double-stranded DNA. The fit gives the first values for the elastic constants
of the stretched state. In particular we find the effective bend stiffness for
DNA in this state to be about 10 nm$\,k_B T$, a value quite different from either
B-form or single-stranded DNA.
|
Pulsed gamma-ray emission from millisecond pulsars (MSPs) has been detected
by the sensitive Fermi telescope, which sheds light on studies of the emission region and
mechanism. In particular, the specific patterns of radio and gamma-ray emission
from PSR J0101-6422 challenge the popular pulsar models, e.g. outer gap and
two-pole caustic (TPC) models. Using the three-dimensional (3D) annular gap
model, we have jointly simulated radio and gamma-ray light curves for three
representative MSPs (PSR J0034-0534, PSR J0101-6422 and PSR J0437-4715) with
distinct radio phase lags and present the best simulated results for these
MSPs, particularly for PSR J0101-6422 with complex radio and gamma-ray pulse
profiles and for PSR J0437-4715 with a radio interpulse. It is found that both
the gamma-ray and radio emission originate from the annular gap region located
in only one magnetic pole, and the radio emission region is not primarily lower
than the gamma-ray region in most cases. In addition, the annular gap model
with a small magnetic inclination angle instead of "orthogonal rotator" can
account for MSPs' radio interpulse with a large phase separation from the main
pulse. The annular gap model is a self-consistent model not only for young
pulsars but also for MSPs, and multi-wavelength light curves can be fundamentally
explained by this model.
|
In this paper, we consider both finite and infinite horizon discounted
dynamic mean-field games where there is a large population of homogeneous
players sequentially making strategic decisions and each player is affected by
other players through an aggregate population state. Each player has a private
type that only she observes. Such games have been studied in the literature
under the simplifying assumption that the population state dynamics are
stationary. In this paper, we consider non-stationary population state dynamics
and present a novel backward recursive algorithm to compute Markov perfect
equilibria (MPE) that depend on both a player's private type and the current
(dynamic) population state. Using this algorithm, we study a security problem
in a cyber-physical system where infected nodes impose a negative externality
on the system, and each node decides whether to get vaccinated. We numerically
compute the MPE of the
game.
|
Classical measurement strategies in many areas are approaching their maximum
resolution and sensitivity levels, but these levels often still fall far short
of the ultimate limits allowed by the laws of physics. To go further,
strategies must be adopted that take into account the quantum nature of the
probe particles and that optimize their quantum states for the desired
application. Here, we review some of these approaches, in which quantum
entanglement, the orbital angular momentum of single photons, and quantum
interferometry are used to produce optical measurements beyond the classical
limit.
|
Contents: 1) Introduction and a few excursions [A word on the role of
explicit solutions in other parts of physics and astrophysics. Einstein's field
equations. "Just so" notes on the simplest solutions: The Minkowski, de Sitter
and anti-de Sitter spacetimes. On the interpretation and characterization of
metrics. The choice of solutions. The outline] 2) The Schwarzschild solution
[Spherically symmetric spacetimes. The Schwarzschild metric and its role in the
solar system. Schwarzschild metric outside a collapsing star. The
Schwarzschild-Kruskal spacetime. The Schwarzschild metric as a case against
Lorentz-covariant approaches. The Schwarzschild metric and astrophysics] 3) The
Reissner-Nordstrom solution [Reissner-Nordstrom black holes and the question
of cosmic censorship. On extreme black holes, d-dimensional black holes, string
theory and "all that"] 4) The Kerr metric [Basic features. The physics and
astrophysics around rotating black holes. Astrophysical evidence for a Kerr
metric] 5) Black hole uniqueness and multi-black hole solutions 6) Stationary
axisymmetric fields and relativistic disks [Static Weyl metrics. Relativistic
disks as sources of the Kerr metric and other stationary spacetimes. Uniformly
rotating disks] 7) Taub-NUT space [A new way to the NUT metric. Taub-NUT
pathologies and applications] 8) Plane waves and their collisions
[Plane-fronted waves. New developments and applications. Colliding plane waves]
9) Cylindrical waves [Cylindrical waves and the asymptotic structure of
3-dimensional general relativity. Cylindrical waves and quantum gravity.
Cylindrical waves: a miscellany] 10) On the Robinson-Trautman solutions 11) The
boost-rotation symmetric radiative spacetimes 12) The cosmological models
[Spatially homogeneous cosmologies. Inhomogeneous models] 13) Concluding
remarks
|
We study the geometry of families of hypersurfaces in Eguchi-Hanson space
that arise as complex line bundles over curves in $S^2$ and are
three-dimensional, non-compact Riemannian manifolds, which are foliated in Hopf
tori for closed curves. They are negatively curved, asymptotically flat spaces,
and we compute the complete three-dimensional curvature tensor as well as the
second fundamental form, giving also some results concerning their geodesic
flow. We show the non-existence of $L^p$-harmonic functions on these
hypersurfaces for every $p \geq 1$ and arbitrary curves, and determine the
infima of the essential spectra of the Laplace and of the square of the Dirac
operator in the case of closed curves. For circles we also compute the
$L^2$-kernel of the Dirac operator in the sense of spectral theory and show
that it is infinite dimensional. We consider further the Einstein Dirac system
on these spaces and construct explicit examples of T-Killing spinors on them.
|
In recent years, remarkable progress has been made in the construction of spatial
cloaks using the methods of transformation optics and metamaterials.
Temporal cloaking, i.e. the cloaking of an event in spacetime, has also been
widely studied using transformations on spacetime domains.
We propose a simple and general method for constructing temporal
cloaks using only a change of time variables.
|
This work examines the limits of the principal spectrum point, $\lambda_p$,
of a nonlocal dispersal cooperative system with respect to the dispersal rates.
In particular, we provide precise information on the sign of $\lambda_p$ as one
of the dispersal rates is: (i) small while the other dispersal rate is
arbitrary, and (ii) large while the other is either also large or fixed. We
then apply our results to study the effects of dispersal rates on a two-stage
structured nonlocal dispersal population model whose linearized system at the
trivial solution results in a nonlocal dispersal cooperative system. The
asymptotic profiles of the steady-state solutions with respect to the dispersal
rates of the two-stage nonlocal dispersal population model are also obtained.
Some biological interpretations of our results are discussed.
|
Blue Compact Dwarf (BCD) Galaxies in the nearby Universe provide a means for
studying feedback mechanisms and star-formation processes in low-metallicity
environments in great detail. Due to their proximity, these local analogues to
young galaxies are well suited for high-resolution studies that would be
unfeasible for primordial galaxies in the high-redshift universe. Here we
present HST-WFC3 observations of one such BCD, Mrk 71, one of the most powerful
local starbursts known, in the light of [O II], He II, Hb, [O III], Ha, and [S
II]. At D=3.44 Mpc, this extensive suite of emission line images enables us to
explore the chemical and physical conditions of Mrk 71 on ~2 pc scales. Using
these high spatial-resolution observations, we use emission line diagnostics to
distinguish ionisation mechanisms on a pixel-by-pixel basis and show that
despite the previously reported hypersonic gas and super-bubble blow out, the
gas in Mrk 71 is photoionised, with no sign of shock-excited emission. Using
strong-line metallicity diagnostics, we present the first 'metallicity image'
of a galaxy, revealing chemical inhomogeneity on scales of <50 pc. We
additionally demonstrate that while chemical structure can be lost at large
spatial scales, metallicity diagnostics can break down on spatial scales
smaller than an HII region. He II emission line images are used to identify up to
six Wolf-Rayet stars in Mrk 71, three of which lie on the edge of the blow-out
region. This study not only demonstrates the benefits of high-resolution
spatially-resolved observations in assessing the effects of feedback
mechanisms, but also the limitations of fine spatial scales when employing
emission-line diagnostics. Both aspects are especially relevant as we enter the
era of extremely large telescopes, when observing structure on ~10 pc scales
will no longer be limited to the local universe.
|
The magnetic field effects on excitons in an InAs nano-ring are studied
theoretically. By numerically diagonalizing the effective-mass Hamiltonian of
the problem, which can be separated into terms in centre-of-mass and relative
coordinates, we calculate the low-lying exciton energy levels and oscillator
strengths as a function of the width of the ring and the strength of the
external magnetic field. The analytical results are obtained for a narrow-width
nano-ring in which the radial motion is the fastest one and adiabatically
decoupled from the azimuthal motions. It is shown that in the presence of
Coulomb correlation, the so-called Aharonov-Bohm effect of excitons exists in a
finite (but small) width nano-ring. However, when the ring width becomes large,
the non-simply-connected geometry of the nano-ring is effectively destroyed, which
in turn suppresses the Aharonov-Bohm effect. The conditional probability
distribution calculated for the low-lying exciton states allows identification
of the presence of the Aharonov-Bohm effect. The linear optical susceptibility is
also calculated as a function of the magnetic field, to be confronted with the
future measurements of optical emission experiments on InAs nano-rings.
|
A superconducting tunnel junction is used to directly extract quasiparticles
from one of the leads of a single-Cooper-pair-transistor. The consequent
reduction in quasiparticle density causes a lower rate of quasiparticle
tunneling onto the device. This rate is directly measured by radio-frequency
reflectometry. Local cooling may be of direct benefit in reducing the effect of
quasiparticles on coherent superconducting nanostructures.
|
The approximate Bernstein polynomial model, a mixture of beta distributions,
is applied to obtain maximum likelihood estimates of the regression
coefficients, and the baseline density and survival functions in an accelerated
failure time model based on interval censored data including current status
data. The rates of convergence of the proposed estimates are given under some
conditions for uncensored and interval censored data. Simulation shows that the
proposed method is better than its competitors. The proposed method is
illustrated by fitting the Breast Cosmetic Data using the accelerated failure
time model.
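For readers unfamiliar with the approximate Bernstein polynomial model, a minimal sketch of the underlying beta-mixture density on [0, 1] is given below; the mixture weights are arbitrary placeholders here, whereas the paper estimates them (together with the regression coefficients) by maximum likelihood.

```python
# Minimal sketch of the beta-mixture (Bernstein polynomial) density on [0, 1]:
# f_m(x) = sum_j w_j * Beta(x; j, m - j + 1), with placeholder weights.
import numpy as np
from scipy.stats import beta

def bernstein_density(x, weights):
    m = len(weights)
    comps = np.array([beta.pdf(x, j, m - j + 1) for j in range(1, m + 1)])
    return np.dot(weights, comps)

w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # assumed mixture weights, summing to 1
xs = np.linspace(0.01, 0.99, 5)
print(bernstein_density(xs, w))
```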
|
This is a survey about actions of groups on Hilbert geometries. It will be a
chapter of the "Handbook of Hilbert geometry" edited by G. Besson, M. Troyanov
and A. Papadopoulos.
|
We demonstrate a simple technique for adding controlled dissipation to
Rydberg atom experiments. In our experiments we excite cold rubidium atoms in a
magneto-optical trap to $70$-S Rydberg states whilst simultaneously inducing
forced dissipation by resonantly coupling the Rydberg state to a hyperfine
level of the short-lived $6$-P state. The resulting effective dissipation can
be varied in strength and switched on and off during a single experimental
cycle.
|
Continuous sign language recognition (SLR) deals with unaligned video-text
pairs and uses the word error rate (WER), i.e., edit distance, as the main
evaluation metric. Since it is not differentiable, we usually instead optimize
the learning model with the connectionist temporal classification (CTC)
objective loss, which maximizes the posterior probability over the sequential
alignment. Due to the optimization gap, the predicted sentence with the highest
decoding probability may not be the best choice under the WER metric. To tackle
this issue, we propose a novel architecture with cross modality augmentation.
Specifically, we first augment cross-modal data by simulating the calculation
procedure of WER, i.e., substitution, deletion and insertion on both text label
and its corresponding video. With these real and generated pseudo video-text
pairs, we propose multiple loss terms to minimize the cross modality distance
between the video and ground truth label, and make the network distinguish the
difference between real and pseudo modalities. The proposed framework can be
easily extended to other existing CTC based continuous SLR architectures.
Extensive experiments on two continuous SLR benchmarks, i.e.,
RWTH-PHOENIX-Weather and CSL, validate the effectiveness of our proposed
method.
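For reference, a minimal sketch of the WER (edit distance) computation that the cross-modal augmentation simulates is shown below; substitution, deletion and insertion each cost one edit.

```python
# Minimal word error rate (edit distance) sketch: substitution, deletion and
# insertion each cost 1; the result is normalized by the reference length.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / max(len(ref), 1)

print(word_error_rate("he reads the book", "she reads a book"))  # 0.5
```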
|
This paper proposes a forward attention method for the sequence-to-sequence
acoustic modeling of speech synthesis. This method is motivated by the nature
of the monotonic alignment from phone sequences to acoustic sequences. Only the
alignment paths that satisfy the monotonic condition are taken into
consideration at each decoder timestep. The modified attention probabilities at
each timestep are computed recursively using a forward algorithm. A transition
agent for forward attention is further proposed, which helps the attention
mechanism decide whether to move forward or stay at each decoder
timestep. Experimental results show that the proposed forward attention method
achieves faster convergence speed and higher stability than the baseline
attention method. Besides, the method of forward attention with transition
agent can also help improve the naturalness of synthetic speech and control the
speed of synthetic speech effectively.
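A minimal sketch of the recursive forward-attention update described above follows; it assumes the standard forward-algorithm form with renormalization at each decoder timestep, and the variable names and dummy inputs are ours rather than the paper's.

```python
# Sketch of a forward-attention step: only monotonic alignment paths are kept
# (stay at position n, or move from n-1 to n), then the weights are renormalized.
import numpy as np

def forward_attention_step(alpha_prev, y_t):
    """alpha_prev: previous forward attention over N encoder states;
    y_t: attention probabilities from the base attention at this decoder step."""
    shifted = np.concatenate(([0.0], alpha_prev[:-1]))   # contribution of alpha_{t-1}(n-1)
    alpha = (alpha_prev + shifted) * y_t                 # monotonic paths only
    return alpha / alpha.sum()                           # renormalize

N = 6
alpha = np.zeros(N); alpha[0] = 1.0                      # attention starts at the first phone
y = np.full(N, 1.0 / N)                                  # dummy base attention weights
for _ in range(3):
    alpha = forward_attention_step(alpha, y)
print(alpha)
```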
|
We present REMM, a rotation-equivariant framework for end-to-end multimodal
image matching, which fully encodes rotational differences of descriptors in
the whole matching pipeline. Previous learning-based methods mainly focus on
extracting modal-invariant descriptors, while consistently ignoring the
rotational invariance. In this paper, we demonstrate that REMM is highly
effective for multimodal image matching; it comprises a multimodal feature learning
module and a cyclic shift module. We first learn modal-invariant features through
the multimodal feature learning module. Then, we design the cyclic shift module
to rotationally encode the descriptors, greatly improving rotation-equivariant
matching performance and making the descriptors robust to arbitrary rotation angles. To
validate our method, we establish a comprehensive rotation and scale-matching
benchmark for evaluating the anti-rotation performance of multimodal images,
which contains a combination of multi-angle and multi-scale transformations
from four publicly available datasets. Extensive experiments show that our
method outperforms existing methods in benchmarking and generalizes well to
independent datasets. Additionally, we conducted an in-depth analysis of the
key components of the REMM to validate the improvements brought about by the
cyclic shift module. Code and dataset at https://github.com/HanNieWHU/REMM.
|
Two novel phenomena in a weakly coupled granular superconductor under an
applied stress are predicted, based on the recently suggested piezophase
effect (a macroscopic quantum analog of the piezoelectric effect) in
mechanically loaded grain boundary Josephson junctions. Namely, we consider the
existence of stress induced paramagnetic moment in zero applied magnetic field
(piezomagnetism) and its influence on a low-field magnetization (leading to a
mechanically induced paramagnetic Meissner effect). The conditions under which
these two effects can be experimentally measured in high-$T_c$ granular
superconductors are discussed.
|
Reading the magnetic state of antiferromagnetic (AFM) thin films is key for
AFM spintronic devices. We investigate the underlying physics behind the spin
Hall magnetoresistance (SMR) of bilayers of platinum and insulating AFM
hematite ({\alpha}-Fe2O3) and find an SMR efficiency of up to 0.1%, comparable
to that of ferromagnet-based structures. To understand the observed complex SMR field
dependence, we analyse the effect of misalignments of the magnetic axis that
arise during growth of thin films, by electrical measurements and direct
magnetic imaging, and find that a small deviation can result in significant
signatures in the SMR response. This highlights the care that must be taken
when interpreting SMR measurements on AFM spin textures.
|
This paper investigates the impact of small instanton effects on the axion
mass in composite axion models. In particular, we focus on the Composite
Accidental Axion (CAA) models, which are designed to address the axion quality
problem, and where the Peccei-Quinn (PQ) symmetry emerges accidentally. In the
CAA models, the QCD gauge symmetry is embedded in a larger gauge group at high
energy. These models contain small instantons not included in low-energy QCD,
which could enhance the axion mass significantly. However, in the CAA models,
our analysis reveals that these effects on the axion mass are non-vanishing but
are negligible compared to the QCD effects. The suppression of the small
instanton effects originates from the global chiral U(1) symmetries which are
not broken spontaneously and play a crucial role in eliminating $\theta$-terms
in the hidden sectors through anomalies. We find these U(1) symmetries restrict
the impact of small instantons in hidden sectors on the axion mass. Our study
provides crucial insights into the dynamics within the CAA models and suggests
broader implications for understanding small instanton effects in other
composite axion models.
|
I show that holographic calculations of entanglement entropy in the context
of AdS bulk space modified by wormhole geometries provide the expected
entanglement magnitude. This arises in the context of string theory by means of
additional geometric structure that is seen by the string in its bulk
evolution. The process can be described as a net entanglement flow towards
stringy geometry. I make use of the fact that as opposed to quantum field
theory, strings have additional winding mode states around small extra
dimensions which modify the area computation given by the standard application
of the Ryu-Takayanagi entanglement entropy formula.
|
If there are multiple hidden sectors which independently break supersymmetry,
then the spectrum will contain multiple goldstini. In this paper, we explore
the possibility that the visible sector might also break supersymmetry, giving
rise to an additional pseudo-goldstino. By the standard lore, visible sector
supersymmetry breaking is phenomenologically excluded by the supertrace sum
rule, but this sum rule is relaxed with multiple supersymmetry breaking.
However, we find that visible sector supersymmetry breaking is still
phenomenologically disfavored, not because of a sum rule, but because the
visible sector pseudo-goldstino is generically overproduced in the early
universe. A way to avoid this cosmological bound is to ensure that an R
symmetry is preserved in the visible sector up to supergravity effects. A key
expectation of this R-symmetric case is that the Higgs boson will dominantly
decay invisibly at the LHC.
|
We show that a class of solutions of minimal supergravity in five dimensions
is given by lifts of three--dimensional Einstein--Weyl structures of hyper-CR
type. We characterise this class as most general near--horizon limits of
supersymmetric solutions to the five--dimensional theory. In particular, we
deduce that a compact spatial section of a horizon can only be a Berger sphere,
a product metric on $S^1\times S^2$ or a flat three-torus. We then consider the
problem of reconstructing all supersymmetric solutions from a given
near--horizon geometry. By exploiting the ellipticity of the linearised field
equations we demonstrate that the moduli space of transverse infinitesimal
deformations of a near--horizon geometry is finite--dimensional.
|
The antisymmetric square of the adjoint representation of any simple Lie
algebra is equal to the sum of adjoint and $X_2$ representations. We present
universal formulae for quantum dimensions of an arbitrary Cartan power of
$X_2$. They are analyzed for singular cases and permuted universal Vogel's
parameters. $X_2$ has been the only representation in the decomposition of the
square of the adjoint with unknown universal series. Application to universal
knot polynomials is discussed.
|
We construct real polarizable Hodge structures on the reduced leafwise
cohomology of K\"ahler-Riemann foliations by complex manifolds. As in the
classical case one obtains a hard Lefschetz theorem for this cohomology.
Serre's K\"ahlerian analogue of the Weil conjectures carries over as well.
Generalizing a construction of Looijenga and Lunts one obtains possibly
infinite dimensional Lie algebras attached to K\"ahler-Riemann foliations.
Finally using $(\mathfrak{g},K)$-cohomology we discuss a class of examples
obtained by dividing a product of symmetric spaces by a cocompact lattice and
considering the foliations coming from the factors.
|
We extend the dictionary between Fontaine rings and $p$-adic functional
analysis, and we give a refinement of the $p$-adic local Langlands
correspondence for principal series representations of ${\rm
GL}_2(\mathbf{Q}_p)$.
|
A search is performed for electroweak production of a vector-like top quark
partner T of charge 2/3 in association with a top or bottom quark, using
proton-proton collision data at $\sqrt{s} =$ 13 TeV collected by the CMS
experiment at the LHC in 2016. The data sample corresponds to an integrated
luminosity of 35.9 fb$^{-1}$. The search targets T quarks over a wide range of
masses and fractional widths, decaying to a top quark and either a Higgs boson
or a Z boson in fully hadronic final states. The search is performed using two
experimentally distinct signatures that depend on whether or not each quark
from the decays of the top quark, Higgs boson, or Z boson produces an
individual resolved jet. Jet substructure, b tagging, and kinematic variables
are used to identify the top quark and boson jets, and also to suppress the
standard model backgrounds. The data are found to be consistent with the
expected backgrounds. Upper limits at 95% confidence level are set on the cross
sections for T quark-mediated production of tHQq, tZQq, and their sum, where Q
is the associated top or bottom heavy quark and q is another associated quark.
The limits are given for each search signature for various T quark widths up to
30% of the T quark mass, and are between 2 pb and 20 fb for T quark masses in
the range 0.6-2.6 TeV. These results are significantly more sensitive than
prior searches for electroweak single production of T $\to$ tH and represent
the first constraints on T $\to$ tZ using hadronic decays of the Z boson with
this production mode.
|
Superparamagnetism of tryptophan, implying the presence of magnetic domains, is
reported. The observation allows us to conceive of protein assemblies as a
physical lattice gas with multidimensional Ising character, each lattice point
assuming discrete spin states. When a magnetic field is applied the equilibrium
is lost and the population density of one spin state increases (unidirectional
alignment), resulting in net magnetization. Spatial coherence between
identical spin states further imparts a ferromagnetic memory. This effect is
observed using direct nanoscale video imaging. Of the three proteins studied
(ferritin, serum albumin, and fibrinogen), fibrinogen showed an attenuated
response, the protein being essentially one-dimensional. Indeed, an Ising
lattice can show ferromagnetic memory only when it has a higher-dimensional
character. The study highlights the possible presence of long-range
spatial coherence under physiological conditions and a plausible microscopic origin
for it.
|
In this paper, we investigate the external field effect in the context of the
MOdified Newtonian Dynamics (MOND) on the surface brightness and velocity
dispersion profiles of globular clusters (GCs). Using N-MODY, which is an
N-body simulation code with a MOND potential solver, we show that the general
effect of the external field for diffuse clusters, which obey MOND in most of
their parts, is that it pushes the dynamics towards the Newtonian regime. On
the other hand, for more compact clusters, which are essentially Newtonian in
their inner parts, the external field is effective mainly in their outer parts.
As a case study, we then choose the remote Galactic GC NGC
2419. By varying the cluster mass, half-light radius, and mass-to-light ratio
we aim to find a model that will reproduce the observational data most
effectively, using N-MODY. We find that even if we take the Galactic external
field into account, a Newtonian Plummer sphere represents the observational
data better than MOND by an order of magnitude in terms of the total $\chi^2$
of surface brightness and velocity dispersion.
|
Efficient continual learning techniques have been a topic of significant
research over the last few years. A fundamental problem with such learning is
severe degradation of performance on previously learned tasks, known also as
catastrophic forgetting. This paper introduces a novel method to reduce
catastrophic forgetting in the context of incremental class learning called
Gradient Correlation Subspace Learning (GCSL). The method detects a subspace of
the weights that is least affected by previous tasks and projects the weights
to train for the new task into said subspace. The method can be applied to one
or more layers of a given network architecture, and the size of the subspace
used can be altered from layer to layer and task to task. Code will be
available at
\href{https://github.com/vgthengane/GCSL}{https://github.com/vgthengane/GCSL}
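One plausible reading of the method, sketched below under our own assumptions (the paper's exact procedure may differ): estimate a gradient correlation matrix per layer from previous-task gradients, keep the eigen-directions with the smallest eigenvalues, and project new-task updates onto that subspace.

```python
# Hedged sketch of a gradient-correlation subspace projection. Function and
# variable names are ours; the paper's precise construction may differ.
import numpy as np

def least_used_subspace(prev_grads, k):
    """prev_grads: (num_samples, dim) gradients of one layer on previous tasks."""
    corr = prev_grads.T @ prev_grads / len(prev_grads)   # (dim, dim) correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)              # eigenvalues in ascending order
    return eigvecs[:, :k]                                # k directions least used by old tasks

def project_update(grad_new, basis):
    return basis @ (basis.T @ grad_new)                  # restrict the update to the subspace

rng = np.random.default_rng(0)
prev = rng.normal(size=(128, 16))                        # dummy previous-task gradients
B = least_used_subspace(prev, k=4)
g = rng.normal(size=16)                                  # dummy new-task gradient
print(np.linalg.norm(project_update(g, B)) <= np.linalg.norm(g))
```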
|
In this paper we study filtration laws for polymeric flow in a porous
medium. We use quasi-Newtonian models with shear-dependent viscosity
obeying the power law and the Carreau law. Using the method of homogenization,
we derive the coupled micro-macro homogenized law governing quasi-Newtonian flow
in a periodic model of a porous medium. We then decouple that law, separating
the micro from the macro scale, write the macroscopic filtration law in the
form of a non-linear Darcy law, and prove that the resulting law is well posed.
We provide both an analytical and a numerical study of our model.
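For reference, the two constitutive laws named above are commonly written (in one standard convention; the paper's normalization may differ) as
$$\eta(\dot\gamma)=\mu\,|\dot\gamma|^{\,r-2},\qquad
\eta(\dot\gamma)=\eta_\infty+(\eta_0-\eta_\infty)\bigl(1+\lambda|\dot\gamma|^{2}\bigr)^{\frac{r-2}{2}},$$
where $\dot\gamma$ is the shear rate, $\mu$ the consistency, $r>1$ the flow-behaviour exponent, $\lambda$ a relaxation parameter, and $\eta_0$, $\eta_\infty$ the zero- and infinite-shear-rate viscosities.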
|
Social media generates an enormous amount of data on a daily basis but it is
very challenging to effectively utilize the data without annotating or labeling
it according to the target application. We investigate the problem of localized
flood detection using the social sensing model (Twitter) in order to provide an
efficient, reliable and accurate flood text classification model with minimal
labeled data. This study is important since it can immensely help in providing
flood-related updates and notifications to city officials for emergency
decision making, rescue operations, and early warnings. We propose to
perform the text classification with an inductive transfer learning method,
namely the pre-trained language model ULMFiT, which we fine-tune in order to
effectively classify flood-related feeds in any new location. Finally, we show that
using very little new labeled data in the target domain we can successfully
build an efficient and high performing model for flood detection and analysis
with human-generated facts and observations from Twitter.
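A hedged sketch of the fine-tuning step with the fastai implementation of the ULMFiT backbone (AWD-LSTM) is shown below; the file name, column names and hyperparameters are placeholders, and the full ULMFiT recipe additionally fine-tunes the language model on target-domain tweets before training the classifier, which is omitted here for brevity.

```python
# Hedged sketch: fine-tune an AWD-LSTM text classifier on a small labeled set.
import pandas as pd
from fastai.text.all import TextDataLoaders, text_classifier_learner, AWD_LSTM, accuracy

df = pd.read_csv("flood_tweets.csv")          # assumed columns: "text", "label"
dls = TextDataLoaders.from_df(df, text_col="text", label_col="label", valid_pct=0.2)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)                      # a few epochs suffice with little labeled data
print(learn.validate())
```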
|
We study some variations of the product topology on families of clopen
subsets of $2^{\mathbb{N}}\times\mathbb{N}$ in order to construct countable
nodec regular spaces (i.e. in which every nowhere dense set is closed) with
analytic topology which in addition are not selectively separable and do not
satisfy the combinatorial principle $q^+$.
|
We argue that the complex numbers are an irreducible object of quantum
probability. This can be seen in the measurements of geometric phases that have
no classical probabilistic analogue. Having complex phases as primitive
ingredient implies that we need to accept non-additive probabilities. This has
the desirable consequence of removing constraints of standard theorems about
the possibility of describing quantum theory with commutative variables.
Motivated by the formalism of consistent histories and keeping an analogy with
the theory of stochastic processes, we develop a (statistical) theory of
quantum processes. They are characterised by the introduction of a "density
matrix" on phase space paths -thus including phase information- and fully
reproduce quantum mechanical predictions. In this framework we can write quantum
differential equations, that could be interpreted as referring to a single
system (in analogy to Langevin's equation). We describe a reconstruction
theorem by which a quantum process can yield the standard Hilbert space
structure if the Markov property is imposed. Finally, we discuss the relevance
of our results for the interpretation of quantum theory (a sample space is
possible if probabilities are non-additive) and quantum gravity (the Hilbert
space arises after the consideration of a background causal structure).
|
In this paper, an augmented analysis of a delay-angle information spoofing
(DAIS) is provided for location-privacy preservation, where the
location-relevant delays and angles are artificially shifted to obfuscate the
eavesdropper with an incorrect physical location. A simplified misspecified
Cramer-Rao bound (MCRB) is derived, which clearly manifests that not only
estimation error, but also the geometric mismatch introduced by DAIS can lead
to a significant increase in localization error for an eavesdropper. Given an
assumption of the orthogonality among wireless paths, the simplified MCRB can
be further expressed as a function of delay-angle shifts in a closed-form,
which enables the more straightforward optimization of these design parameters
for location-privacy enhancement. Numerical results are provided, validating
the theoretical analysis and showing that the root-mean-square error for
eavesdropper's localization can be more than 150 m with the optimized
delay-angle shifts for DAIS.
|
We present a new algorithm which allows for direct numerically exact
solutions within dynamical mean-field theory (DMFT). It is based on the
established Hirsch-Fye quantum Monte Carlo (HF-QMC) method. However, the DMFT
impurity model is solved not at fixed imaginary-time discretization Delta_tau,
but for a range of discretization grids; by extrapolation, unbiased Green
functions are obtained in each DMFT iteration. In contrast to conventional
HF-QMC, the multigrid algorithm converges to the exact DMFT fixed points. It
extends the useful range of Delta_tau, is precise and reliable even in the
immediate vicinity of phase transitions and is more efficient, also in
comparison to continuous-time methods. Using this algorithm, we show that the
spectral weight transfer at the Mott transition has been overestimated in a
recent density matrix renormalization group study.
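The extrapolation step can be illustrated with a toy sketch (not the authors' code): compute an observable on several Delta_tau grids, fit the leading Trotter error, which for Hirsch-Fye scales as Delta_tau^2, and read off the Delta_tau -> 0 limit.

```python
# Toy illustration of the Delta_tau -> 0 extrapolation using synthetic data
# with a quadratic Trotter error plus a little noise.
import numpy as np

dtaus = np.array([0.25, 0.20, 0.15, 0.10])
observable = 0.5 + 0.8 * dtaus**2 + 0.001 * np.random.default_rng(1).normal(size=4)
# fit observable = a + b * Delta_tau^2 and extrapolate to Delta_tau = 0
b, a = np.polyfit(dtaus**2, observable, 1)
print("extrapolated value:", a)               # should be close to 0.5
```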
|
The suggested association between the sources of gamma-ray bursts (GRB's) and
the sources of ultra-high energy cosmic rays (UHECR's) is based on two
arguments: (i) The average energy generation rate of UHECR's is similar to the
gamma-ray generation rate of GRB's, and (ii) The constraints that UHECR sources
must satisfy to allow proton acceleration to >10^{20} eV are similar to those
inferred for GRB sources from gamma-ray observations. We show that recent GRB
and UHECR observations strengthen both arguments, and hence strengthen the
suggested association.
|
By local Auger-electron spectroscopy on solid targets and accumulating
screens, we studied the composition of nucleosynthesis products, in which we
expected to reveal the presence of long-lived transuranium elements (LTE). In a
number of cases, for the analyzed elements in complicated spectra, the Auger
spectra of the corresponding pure elements or their simple compounds were
registered with a high signal-to-noise ratio. Among the artifacts of the
analysis we consider phenomena such as electric charging, characteristic
energy losses, and chemical shifts. We found unidentifiable Auger peaks with
energies of 172, 527, 1096, 94, and 560 eV and a doublet of peaks with energies
of 130 and 115 eV. We could not attribute them to any Auger peaks of chemical
elements in the atlases and catalogs, or to any artifacts. One possible
interpretation of the revealed peaks is that they belong to LTE.
|
Primordial inflation is here regarded as being driven by a phantom field,
implemented as a scalar field satisfying an equation of state
$p=\omega\rho$ with $\omega<-1$. Aggravated by the weird properties
of phantom energy, this poses a serious problem for the exit from the
inflationary phase. We argue, however, in favor of the speculation that a smooth
exit from the phantom inflationary phase can still be tentatively recovered by
considering a multiverse scenario where the primordial phantom universe would
travel in time toward a future universe filled with usual radiation, before
reaching the big rip. We call this transition the "big trip" and assume it to
take place with the help of some form of anthropic principle which chooses our
current universe as being the final destination of the time transition.
|
We consider the distribution of ascents, descents, peaks, valleys, double
ascents, and double descents over permutations avoiding a set of patterns. Many
of these statistics have already been studied over sets of permutations
avoiding a single pattern of length 3. However, the distribution of peaks over
321-avoiding permutations is new and we relate it to statistics on Dyck paths. We
also obtain new interpretations of a number of well-known combinatorial
sequences by studying these statistics over permutations avoiding two patterns
of length 3.
|
We prove that the linear stochastic equation
$dx(t)=(A(t)x(t)+f(t))dt+g(t)dW(t)$ with linear operator $A(t)$ generating a
continuous linear cocycle $\varphi$ and Bohr/Levitan almost periodic or almost
automorphic coefficients $(A(t),f(t),g(t))$ admits a unique Bohr/Levitan almost
periodic (respectively, almost automorphic) solution in distribution sense if
it has at least one precompact solution on $\mathbb R_{+}$ and the linear
cocycle $\varphi$ is asymptotically stable.
|
The technique of conformal mappings is applied to enlarge the convergence of
the Borel series and to accelerate the convergence of Borel-summed Green
functions in perturbative QCD. We use the optimal mapping, which takes into
account the location of all the singularities of the Borel transform as well as
the present knowledge about its behaviour near the first branch points. The
determination of \alpha_{s}(m_{\tau}) from the hadronic decay rate of the
\tau-lepton is discussed as an illustration of the method.
|
A brief review is given of black holes in Kaluza-Klein theory. This includes
both solutions which are homogeneous around the compact extra dimension and
those which are not.
|
We compute the connected four point correlation function (the trispectrum in
Fourier space) of cosmological density perturbations at one-loop order in
Standard Perturbation Theory (SPT) and the Effective Field Theory of Large
Scale Structure (EFT of LSS). This paper is a companion to our earlier work on
the non-Gaussian covariance of the matter power spectrum, which corresponds to
a particular wavenumber configuration of the trispectrum. In the present
calculation, we highlight and clarify some of the subtle aspects of the EFT
framework that arise at third order in perturbation theory for general
wavenumber configurations of the trispectrum. We consistently incorporate
vorticity and non-locality in time into the EFT counterterms and lay out a
complete basis of building blocks for the stress tensor. We show predictions
for the one-loop SPT trispectrum and the EFT contributions, focusing on
configurations which have particular relevance for using LSS to constrain
primordial non-Gaussianity.
|
Advertising, long the financial mainstay of the web ecosystem, has become
nearly ubiquitous in the world of mobile apps. While ad targeting on the web is
fairly well understood, mobile ad targeting is much less studied. In this
paper, we use empirical methods to collect a database of over 225,000 ads on 32
simulated devices hosting one of three distinct user profiles. We then analyze
how the ads are targeted by correlating ads to potential targeting profiles
using Bayes' rule and Pearson's chi squared test. This enables us to measure
the prevalence of different forms of targeting. We find that nearly all ads
show the effects of application- and time-based targeting, while we are able to
identify location-based targeting in 43% of the ads and user-based targeting in
39%.
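The flavour of the targeting test can be illustrated with a small sketch using Pearson's chi-squared test on a contingency table of ad categories versus user profiles; the counts and category names below are invented for illustration.

```python
# Test whether ad category counts are independent of the simulated user profile.
from scipy.stats import chi2_contingency

#                profile A  profile B  profile C   (made-up counts)
ad_counts = [
    [120,        40,        35],   # fitness ads
    [ 30,       150,        45],   # finance ads
    [ 25,        35,       160],   # gaming ads
]
chi2, p_value, dof, expected = chi2_contingency(ad_counts)
print(f"chi2={chi2:.1f}, p={p_value:.2e}")    # a small p-value suggests profile-based targeting
```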
|
Let $X$ be a smooth scheme over a finite field. It is conjectured that a
convergent $F$-isocrystal on $X$ is overconvergent if its restriction to every
curve contained in $X$ is overconvergent. Using the theory of \'etale and
crystalline companions, we establish a weaker version of this criterion in
which we also assume that the wild local monodromy of the restrictions to
curves is trivialized by pullback along a single dominant morphism to $X$.
|
The existence of closed hypersurfaces of prescribed scalar curvature in
globally hyperbolic Lorentzian manifolds is proved provided there are barriers.
|
We explore the properties of an 'almost' dark cloud of neutral hydrogen (HI)
using data from the Widefield ASKAP L-band Legacy All-sky Survey (WALLABY).
Until recently, WALLABY J103508-283427 (also known as H1032-2819 or LEDA
2793457) was not known to have an optical counterpart, but we have identified
an extremely faint optical counterpart in the DESI Legacy Imaging Survey Data
Release 10. We measured the mean g-band surface brightness to be $27.0\pm0.3$
mag arcsec$^{-2}$. The WALLABY data revealed the cloud to be closely associated
with the interacting group Klemola 13 (also known as HIPASS J1034-28 and the
Tol 9 group), which itself is associated with the Hydra cluster. In addition to
WALLABY J103508-283427/H1032-2819, Klemola 13 contains ten known significant
galaxies and almost half of the total HI gas is beyond the optical limits of
the galaxies. By combining the new WALLABY data with archival data from the
Australia Telescope Compact Array (ATCA), we investigate the HI distribution
and kinematics of the system. We discuss the relative role of tidal
interactions and ram pressure stripping in the formation of the cloud and the
evolution of the system. The ease of detection of this cloud and intragroup gas
is due to the sensitivity, resolution and wide field of view of WALLABY, and
showcases the potential of the full WALLABY survey to detect many more
examples.
|
We extend to the super Yangian of the special linear Lie superalgebra
$\mathfrak{sl}_{m|n}$ and its affine version certain results related to
Schur-Weyl duality. We do the same for the deformed double current superalgebra
of $\mathfrak{sl}_{m|n}$, which is introduced here for the first time.
|
If you are sharing a meal with a companion, how best to make sure you get
your favourite mouthfuls? Ethiopian Dinner is a game in which two players take
turns eating morsels from a common plate. Each morsel comes with a pair of
utility values measuring its tastiness to the two players. Kohler and
Chandrasekaharan discovered a good strategy -- a subgame perfect equilibrium,
to be exact -- for this game. We give a new visual proof of their result. The
players arrive at the equilibrium by figuring out their last move first and
working backward. We conclude that it's never too early to start thinking about
dessert.
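A compact backward-induction sketch of the equilibrium computation described above is given below; the morsel utilities are made-up examples, and ties are broken arbitrarily.

```python
# Backward induction for the two-player alternating-choice game: a state is the
# set of remaining morsels plus whose turn it is; the mover picks the morsel
# maximizing her own total under optimal continuation (a subgame perfect equilibrium).
from functools import lru_cache

morsels = ((5, 1), (4, 4), (2, 6), (1, 3))   # (tastiness to player 0, to player 1), all distinct

@lru_cache(maxsize=None)
def spe_value(remaining, player):
    """Return (total for player 0, total for player 1) under optimal play."""
    if not remaining:
        return (0, 0)
    best = None
    for m in remaining:
        rest = tuple(x for x in remaining if x != m)
        future = spe_value(rest, 1 - player)
        totals = (future[0] + (m[0] if player == 0 else 0),
                  future[1] + (m[1] if player == 1 else 0))
        if best is None or totals[player] > best[player]:
            best = totals
    return best

print(spe_value(morsels, 0))
```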
|
Cooperative spontaneous emission of a single photon from a cloud of N atoms
modifies substantially the radiation pressure exerted by a far-detuned laser
beam exciting the atoms. On one hand, the force induced by photon absorption
depends on the collective decay rate of the excited atomic state. On the other
hand, directional spontaneous emission counteracts the recoil induced by the
absorption. We derive an analytical expression for the radiation pressure in
steady-state. For a smooth extended atomic distribution we show that the
radiation pressure depends on the atom number via cooperative scattering and
that, for certain atom numbers, it can be suppressed or enhanced.
|