Microwave optomechanical circuits have been demonstrated in the past years to
be extremely powerful tools both for exploring the fundamental physics of
macroscopic mechanical oscillators and as promising candidates for novel
on-chip quantum-limited microwave devices. In most experiments so far,
the mechanical oscillator is either used as a passive device element and its
displacement is detected using the superconducting cavity or manipulated by
intracavity fields. Here, we explore the possibility of directly and
parametrically manipulating the mechanical nanobeam resonator of a cavity
electromechanical system, which provides additional functionality to the
toolbox of microwave optomechanical devices. In addition to using the cavity as
an interferometer to detect parametrically modulated mechanical displacement
and squeezed thermomechanical motion, we demonstrate that parametric modulation
of the nanobeam resonance frequency can realize a phase-sensitive parametric
amplifier for intracavity microwave photons. In contrast to many other
microwave amplification schemes using electromechanical circuits, the presented
technique allows for simultaneous cooling of the mechanical element, which
potentially enables this type of optomechanical microwave amplifier to be
quantum-limited.
|
In this paper, Doi-Peliti field theory is used to describe the motion of free
Run and Tumble particles in arbitrary dimensions. After deriving the action and
propagators, the mean square displacement and the corresponding entropy
production at stationarity are calculated in this framework. We further derive
the field theory of free Active Brownian Particles in two dimensions for
comparison.
|
In this work we consider UAVs as cooperative agents supporting human users in
their operations. In this context, the 3D localisation of the UAV assistant is
an important task that can facilitate the exchange of spatial information
between the user and the UAV. To address this in a data-driven manner, we
design a data synthesis pipeline to create a realistic multimodal dataset that
includes both the exocentric user view, and the egocentric UAV view. We then
exploit the joint availability of photorealistic and synthesized inputs to
train a single-shot monocular pose estimation model. During training we
leverage differentiable rendering to supplement a state-of-the-art direct
regression objective with a novel smooth silhouette loss. Our results
demonstrate its qualitative and quantitative performance gains over traditional
silhouette objectives. Our data and code are available at
https://vcl3d.github.io/DronePose
|
We present the first experimental observation of accelerating beams in curved
space. More specifically, we demonstrate, experimentally and theoretically,
shape-preserving accelerating beams propagating on spherical surfaces:
closed-form solutions of the wave equation manifesting nongeodesic self-similar
evolution. Unlike accelerating beams in flat space, these wave packets change
their acceleration trajectory due to the interplay between interference effects
and the space curvature, and they focus and defocus periodically due to the
spatial curvature of the medium in which they propagate.
|
Gravitational-wave (GW) detections of merging neutron star-black hole (NSBH)
systems probe astrophysical neutron star (NS) and black hole (BH) mass
distributions, especially at the transition between NS and BH masses. Of
particular interest are the maximum NS mass, minimum BH mass, and potential
mass gap between them. While previous GW population analyses assumed all NSs
obey the same maximum mass, if rapidly spinning NSs exist, they can extend to
larger maximum masses than nonspinning NSs. In fact, several authors have
proposed that the $\sim2.6\,M_\odot$ object in the event GW190814 -- either the
most massive NS or least massive BH observed to date -- is a rapidly spinning
NS. We therefore infer the NSBH mass distribution jointly with the NS spin
distribution, modeling the NS maximum mass as a function of spin. Using 4
LIGO-Virgo NSBH events including GW190814, if we assume that the NS spin
distribution is uniformly distributed up to the maximum (breakup) spin, we
infer the maximum non-spinning NS mass is $2.7^{+0.5}_{-0.4}\,M_\odot$ (90\%
credibility), while assuming only nonspinning NSs, the NS maximum mass must be
$>2.53 M_\odot$ (90\% credibility). The data support the mass gap's existence,
with a minimum BH mass at $5.4^{+0.7}_{-1.0} M_\odot$. With future
observations, under simplified assumptions, 150 NSBH events may constrain the
maximum nonspinning NS mass to $\pm0.02\,M_\odot$, and we may even measure the
relation between the NS spin and maximum mass entirely from GW data. If rapidly
rotating NSs exist, their spins and masses must be modeled simultaneously to
avoid biasing the NS maximum mass.
|
Numerical and experimental studies of transitional pipe flow have shown the
prevalence of coherent flow structures that are dominated by downstream
vortices. They attract special attention because they contribute predominantly
to the increase of the Reynolds stresses in turbulent flow. In the present
study we introduce a convenient detector for these coherent states, calculate
the fraction of time the structures appear in the flow, and present a Markov
model for the transition between the structures. The fraction of states that
show vortical structures exceeds 24% for a Reynolds number of about Re=2200,
and it decreases to about 20% for Re=2500. The Markov model for the transition
between these states is in good agreement with the observed fraction of states,
and in reasonable agreement with the prediction for their persistence. It
provides insight into dominant qualitative changes of the flow when increasing
the Reynolds number.
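As a schematic illustration of the kind of two-state Markov description used above (not the paper's actual detector or data), the following sketch estimates the transition matrix and the stationary fraction of vortical states from a binary detection time series; the synthetic input series is a placeholder.

    import numpy as np

    # Placeholder binary series: 1 = coherent (vortical) structure detected, 0 = not.
    rng = np.random.default_rng(0)
    states = (rng.random(10_000) < 0.22).astype(int)

    # Estimate P[i, j] = Prob(next state = j | current state = i).
    counts = np.zeros((2, 2))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi /= pi.sum()

    print("transition matrix:\n", P)
    print("stationary fraction of vortical states:", pi[1])
    print("mean persistence of vortical episodes:", 1.0 / (1.0 - P[1, 1]), "steps")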
|
Recently, MLP-based vision backbones have emerged. MLP-based vision architectures
with less inductive bias achieve competitive performance in image recognition
compared with CNNs and vision Transformers. Among them, spatial-shift MLP
(S$^2$-MLP), adopting the straightforward spatial-shift operation, achieves
better performance than the pioneering works including MLP-mixer and ResMLP.
More recently, using smaller patches with a pyramid structure, Vision
Permutator (ViP) and Global Filter Network (GFNet) achieve better performance
than S$^2$-MLP.
In this paper, we improve the S$^2$-MLP vision backbone. We expand the
feature map along the channel dimension and split the expanded feature map into
several parts. We conduct different spatial-shift operations on the split parts.
Meanwhile, we exploit the split-attention operation to fuse these split
parts. Moreover, like its counterparts, we adopt smaller-scale patches and use
a pyramid structure to boost the image recognition accuracy. We term the
improved spatial-shift MLP vision backbone S$^2$-MLPv2. Using 55M
parameters, our medium-scale model, S$^2$-MLPv2-Medium, achieves an $83.6\%$
top-1 accuracy on the ImageNet-1K benchmark using $224\times 224$ images
without self-attention and external training data.
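The operations above are described only at a high level; the following PyTorch-style sketch is one illustrative reading of them (channel expansion, splitting, per-part spatial shifts, and a simple split-attention fusion). The module names, the expansion factor, and the shift pattern are assumptions for illustration, not the authors' exact design.

    import torch
    import torch.nn as nn

    def spatial_shift(x, direction):
        """Shift a (B, H, W, C) tensor by one pixel; zero-pad the vacated border."""
        out = torch.zeros_like(x)
        if direction == "left":
            out[:, :, :-1, :] = x[:, :, 1:, :]
        elif direction == "right":
            out[:, :, 1:, :] = x[:, :, :-1, :]
        elif direction == "up":
            out[:, :-1, :, :] = x[:, 1:, :, :]
        elif direction == "down":
            out[:, 1:, :, :] = x[:, :-1, :, :]
        else:  # identity branch
            out = x
        return out

    class ShiftSplitAttentionBlock(nn.Module):
        """Illustrative block: expand channels, split, shift each split differently,
        then fuse the splits with a simple split-attention over global descriptors."""
        def __init__(self, dim, expansion=3):
            super().__init__()
            self.expansion = expansion
            self.expand = nn.Linear(dim, dim * expansion)
            self.attn = nn.Linear(dim, expansion)   # one weight per split part
            self.proj = nn.Linear(dim, dim)

        def forward(self, x):                        # x: (B, H, W, C)
            parts = self.expand(x).chunk(self.expansion, dim=-1)
            directions = ["left", "right", "up", "down", "none"][: self.expansion]
            shifted = torch.stack(
                [spatial_shift(p, d) for p, d in zip(parts, directions)], dim=1
            )                                        # (B, S, H, W, C)
            gap = shifted.mean(dim=(2, 3)).sum(dim=1)          # (B, C) global descriptor
            weights = torch.softmax(self.attn(gap), dim=-1)    # (B, S) split-attention
            fused = (shifted * weights[:, :, None, None, None]).sum(dim=1)
            return self.proj(fused)

    x = torch.randn(2, 14, 14, 64)
    print(ShiftSplitAttentionBlock(64)(x).shape)     # torch.Size([2, 14, 14, 64])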
|
In this paper, we first introduce the concept of Rota-Baxter family
BiHom-$\Omega$-associative algebras, then we define the cochain complex of
BiHom-$\Omega$-associative algebras and verify it via Maurer-Cartan methods.
Next, we further introduce and study the cohomology theory of Rota-Baxter
family BiHom-$\Omega$-associative algebras of weight $\lambda$ and show that
this cohomology controls the corresponding deformations. Finally, we study
abelian extensions of Rota-Baxter family BiHom-$\Omega$-associative algebras in
terms of the second cohomology group.
|
It is shown that the analysis and the main result of the article by L-A. Wu
[Phys. Rev. A 53, 2053 (1996)] are completely erroneous.
|
The process controlling the differentiation of stem, or progenitor, cells into
one specific functional direction is called lineage specification. An important
characteristic of this process is the multi-lineage priming, which requires the
simultaneous expression of lineage-specific genes. Prior to commitment to a
certain lineage, it has been observed that these genes exhibit intermediate
values of their expression levels. Multi-lineage differentiation has been
reported for various progenitor cells, and it has been explained through the
bifurcation of a metastable state. During the differentiation process the
dynamics of the core regulatory network follows a bifurcation, where the
metastable state, corresponding to the progenitor cell, is destabilized and the
system is forced to choose between the possible developmental alternatives.
While this approach gives a reasonable interpretation of the cell fate decision
process, it fails to explain the multi-lineage priming characteristic. Here, we
describe a new multi-dimensional switch-like model that captures both the
process of cell fate decision and the phenomenon of multi-lineage priming. We
show that in the symmetrical interaction case, the system exhibits a new type
of degenerate bifurcation, characterized by a critical hyperplane, containing
an infinite number of critical steady states. This critical hyperplane may be
interpreted as the support for the multi-lineage priming states of the
progenitor. Also, the cell fate decision (the multi-stability and switching
behavior) can be explained by a symmetry breaking in the parameter space of
this critical hyperplane. These analytical results are confirmed by Monte-Carlo
simulations of the corresponding chemical master equations.
|
The yrast spectra (i.e. the lowest states for a given total angular momentum)
of quantum dots in strong magnetic fields are studied in terms of exact
numerical diagonalization and analytic trial wave functions. We argue that
certain features (cusps) in the many-body spectrum can be understood in terms
of particle localization due to the strong field. A new class of trial
wavefunctions supports the picture of the electrons being localized in Wigner
molecule-like states consisting of consecutive rings of electrons, with
low-lying excitations corresponding to rigid rotation of the outer ring of
electrons. The geometry of the Wigner molecule is independent of interparticle
interactions and the statistics of the particles.
|
We argue that near a Kondo breakdown critical point, a spin liquid with
spatial modulations can form. Unlike its uniform counterpart, we find that this
occurs via a second order phase transition. The amount of entropy quenched upon
ordering is of the same magnitude as for an antiferromagnet. Moreover, the two
states are competitive, and at low temperatures are separated by a first order
phase transition. The modulated spin liquid we find breaks $Z_4$ symmetry, as
recently seen in the hidden order phase of URu$_2$Si$_2$. Based on this, we
suggest that the modulated spin liquid is a viable candidate for this unique
phase of matter.
|
We establish a version of a statement attributed to Kazhdan by Yau. As a
corollary, we obtain a more transparent form of our uniformization theorem in
complex algebraic geometry.
|
The COVID-19 pandemic created a significant interest and demand for infection
detection and monitoring solutions. In this paper we propose a machine learning
method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning
networks and provides methods for signal denoising, cough detection and
classification. We have also developed and deployed a mobile application that
uses a symptom checker together with voice, breath and cough signals to detect
COVID-19 infection. The application showed robust performance on both
open-source datasets and on the noisy data collected during beta testing by the
end users.
|
We study the decay width and CP-asymmetry of the inclusive process b--> s g g
(g denotes gluon) in the three and two Higgs doublet models with complex Yukawa
couplings. We analyse the dependence of the differential decay width and
CP-asymmetry on the s-quark energy E_s and the CP violating parameter \theta. We
observe that there exists a considerable enhancement in the decay width, and the
CP asymmetry is of the order of 10^{-2}. Further, it is possible to predict the
sign of C_7^{eff} using the CP asymmetry.
|
Since the pioneering work of Kontsevich and Soibelman [51], scattering
diagrams have started playing an important role in mirror symmetry, in
particular in the study of the reconstruction problem. This paper aims at
introducing the main ideas on the subject describing the role of scattering
diagrams in relation to the SYZ conjecture and the HMS conjecture.
|
This paper presents arguments purporting to show that von Neumann's
description of the measurement process in quantum mechanics has a modern day
version in the decoherence approach. We claim that this approach and the de
Broglie-Bohm theory emerge from Bohr's interpretation and are therefore
obliged to deal with some obscure ideas which were anticipated, explicitly or
implicitly, and carefully circumvented by Bohr.
|
We develop the XFaster Cosmic Microwave Background (CMB) temperature and
polarization anisotropy power spectrum and likelihood technique for the Planck
CMB satellite mission. We give an overview of this estimator and its current
implementation and present the results of applying this algorithm to simulated
Planck data. We show that it can accurately extract the power spectrum of
Planck data over the high-l multipole range. We compare the XFaster
approximation for the likelihood to other high-l likelihood approximations such
as Gaussian and Offset Lognormal and a low-l pixel-based likelihood. We show
that the XFaster likelihood is not only accurate at high-l, but also performs
well at moderately low multipoles. We also present results for cosmological
parameter Markov Chain Monte Carlo estimation with the XFaster likelihood. As
long as the low-l polarization and temperature power are properly accounted
for, e.g., by adding an adequate low-l likelihood ingredient, the input
parameters are recovered to a high level of accuracy.
|
In this paper we are interested in testing whether there are any signals
hidden in high dimensional noise data. Therefore we study the family of
goodness-of-fit tests based on $\Phi$-divergences including the test of Berk
and Jones as well as Tukey's higher criticism test. The optimality of this
family is already known for the heterogeneous normal mixture model. We now
present a technique to transfer this optimality to more general models. For
illustration we apply our results to dense signal and sparse signal models
including the exponential-$\chi^2$ mixture model and general exponential
families such as the normal, exponential and Gumbel distributions. Besides the
optimality of the whole family, we discuss the power behavior on the detection
boundary and show that the whole family has no power there, whereas the
likelihood ratio test does.
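For concreteness, one standard form of Tukey's higher criticism statistic referred to above can be computed from sorted p-values as in the sketch below; the specific $\Phi$-divergence family and the calibration used in the paper are not reproduced here.

    import numpy as np

    def higher_criticism(pvalues, alpha0=0.5):
        """HC statistic: max over the smallest alpha0*n sorted p-values of
        sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
        p = np.clip(np.sort(np.asarray(pvalues)), 1e-12, 1 - 1e-12)
        n = p.size
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
        k = max(1, int(alpha0 * n))
        return hc[:k].max()

    rng = np.random.default_rng(1)
    null_p = rng.uniform(size=1000)                        # pure noise
    mixed_p = np.concatenate([rng.uniform(size=990),       # noise plus a few
                              rng.uniform(0, 1e-4, 10)])   # very small p-values
    print(higher_criticism(null_p), higher_criticism(mixed_p))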
|
We consider the numerical stability of the parameter recovery problem in
Linear Structural Equation Model ($\LSEM$) of causal inference. A long line of
work starting from Wright (1920) has focused on understanding which sub-classes
of $\LSEM$ allow for efficient parameter recovery. Despite decades of study,
this question is not yet fully resolved. The goal of this paper is
complementary to this line of work; we want to understand the stability of the
recovery problem in the cases when efficient recovery is possible. Numerical
stability of Pearl's notion of causality was first studied in Schulman and
Srivastava (2016) using the concept of condition number where they provide
ill-conditioned examples. In this work, we provide a condition number analysis
for the $\LSEM$. First we prove that under a sufficient condition, for a
certain sub-class of $\LSEM$ that are \emph{bow-free} (Brito and Pearl (2002)),
the parameter recovery is stable. We further prove that \emph{randomly} chosen
input parameters for this family satisfy the condition with a substantial
probability. Hence for this family, on a large subset of parameter space,
recovery is numerically stable. Next we construct an example of $\LSEM$ on four
vertices with \emph{unbounded} condition number. We then corroborate our
theoretical findings via simulations as well as real-world experiments for a
sociology application. Finally, we provide a general heuristic for estimating
the condition number of any $\LSEM$ instance.
|
We consider two-player games played over finite state spaces for an infinite
number of rounds. At each state, the players simultaneously choose moves; the
moves determine a successor state. It is often advantageous for players to
choose probability distributions over moves, rather than single moves. Given a
goal (for example, reaching a target state), the question of winning is thus a
probabilistic one: what is the maximal probability of winning from a given
state?
On these game structures, two fundamental notions are those of equivalences
and metrics. Given a set of winning conditions, two states are equivalent if
the players can win the same games with the same probability from both states.
Metrics provide a bound on the difference in the probabilities of winning
across states, capturing a quantitative notion of state similarity.
We introduce equivalences and metrics for two-player game structures, and we
show that they characterize the difference in probability of winning games
whose goals are expressed in the quantitative mu-calculus. The quantitative
mu-calculus can express a large set of goals, including reachability, safety,
and omega-regular properties. Thus, we claim that our relations and metrics
provide the canonical extensions to games of the classical notion of
bisimulation for transition systems. We develop our results both for
equivalences and metrics, which generalize bisimulation, and for asymmetrical
versions, which generalize simulation.
|
In this paper, we study the influence of anisotropy on the usefulness of the
entanglement in a two-qubit Heisenberg XY chain, at thermal equilibrium in the
presence of an external magnetic field, as a resource for quantum teleportation
via the standard teleportation protocol. We show that the nonzero thermal
entanglement produced by adjusting the external magnetic field strength beyond
some critical strength is a useful resource. We also consider entanglement
teleportation via two two-qubit Heisenberg XY chains.
|
Applying the generalization of the model for chain formation in
break-junctions [JPCM 24, 135501 (2012)], we study the effect of light
impurities on the energetics and elongation properties of Pt and Ir chains. Our
model provides us with a tool ideal for a detailed analysis of impurity-assisted
chain formation, where zigzag bonds play an important role. In particular we
focus on H (s-like) and O (p-like) impurities and assume, for simplicity, that
the presence of impurity atoms in experiments results in ..M-X-M-X-... (M:
metal, X: impurity) chain structure in between the metallic leads. Feeding our
model with material-specific parameters from systematic full-potential
first-principles calculations, we find that the presence of such impurities
strongly affects the binding properties of the chains. We find that while both
types of impurities enhance the probability of chains to be elongated, the
s-like impurities lower the chain's stability. We also analyze the effect of
magnetism and spin-orbit interaction on the growth properties of the chains.
|
We explore the possibility to manipulate massive, i.e. motional, degrees of
freedom of trapped ions. In particular, we demonstrate that, if local control
of the trapping frequencies is achieved, one can reproduce the full toolbox of
linear optics on radial modes. Furthermore, assuming only global control of the
trapping potential, we show that unprecedented degrees of continuous variable
entanglement can be obtained and that nonlocality tests with massive degrees of
freedom can be carried out.
|
In these lectures I will present an introduction to the modern way of
studying the properties of glassy systems. I will start from soluble models of
increasing complexity, the Random Energy Model and the $p$-spin interacting
model, and I will show how these models can be solved due to their mean field
properties. Finally, in the last section, I will discuss the difficulties in
the generalization of these findings to short range models.
|
Two fundamental requirements for the deployment of machine learning models in
safety-critical systems are to be able to detect out-of-distribution (OOD) data
correctly and to be able to explain the prediction of the model. Although
significant effort has gone into both OOD detection and explainable AI, there
has been little work on explaining why a model predicts a certain data point is
OOD. In this paper, we address this question by introducing the concept of an
OOD counterfactual, which is a perturbed data point that iteratively moves
between different OOD categories. We propose a method for generating such
counterfactuals, investigate its application on synthetic and benchmark data,
and compare it to several benchmark methods using a range of metrics.
|
It is proved that if a graph is regular of even degree and contains a
Hamilton cycle, or regular of odd degree and contains a Hamiltonian $3$-factor,
then its line graph is Hamilton decomposable. This result partially extends
Kotzig's result that a $3$-regular graph is Hamiltonian if and only if its line
graph is Hamilton decomposable, and proves the conjecture of Bermond that the
line graph of a Hamilton decomposable graph is Hamilton decomposable.
|
Efficiency at maximum power (MP) output for an engine with a passive piston
without mechanical controls between two reservoirs is theoretically studied. We
enclose a hard core gas partitioned by a massive piston in a
temperature-controlled container and analyze the efficiency at MP under a
heating and cooling protocol without controlling the pressure acting on the
piston from outside. We find the following three results: (i) The efficiency at
MP for a dilute gas is close to the Chambadal-Novikov-Curzon-Ahlborn (CNCA)
efficiency if we can ignore the side wall friction and the loss of energy
between a gas particle and the piston, while (ii) the efficiency for a
moderately dense gas becomes smaller than the CNCA efficiency even when the
temperature difference of reservoirs is small. (iii) Introducing the Onsager
matrix for an engine with a passive piston, we verify that the tight coupling
condition for the matrix of the dilute gas is satisfied, while that of the
moderately dense gas is not satisfied because of the inevitable heat leak. We
confirm the validity of these results using the molecular dynamics simulation
and introducing an effective mean-field-like model which we call stochastic
mean field model.
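For reference, the Chambadal-Novikov-Curzon-Ahlborn efficiency referred to in result (i) is

$$ \eta_{\mathrm{CNCA}} = 1 - \sqrt{T_c/T_h}, $$

where $T_h$ and $T_c$ are the temperatures of the hot and cold reservoirs.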
|
Operating on the principles of quantum mechanics, quantum algorithms hold the
promise for solving problems that are beyond the reach of the best-available
classical algorithms. An integral part of realizing such speedup is the
implementation of quantum queries, which read data into forms that quantum
computers can process. Quantum random access memory (QRAM) is a promising
architecture for realizing quantum queries. However, implementing QRAM in
practice poses significant challenges, including query latency, memory capacity
and fault-tolerance.
In this paper, we propose the first end-to-end system architecture for QRAM.
First, we introduce a novel QRAM that hybridizes two existing implementations
and achieves asymptotically superior scaling in space (qubit number) and time
(circuit depth). Like in classical virtual memory, our construction enables
queries to a virtual address space larger than what is actually available in
hardware. Second, we present a compilation framework to synthesize, map, and
schedule QRAM circuits on realistic hardware. For the first time, we
demonstrate how to embed large-scale QRAM on a 2D Euclidean space, such as a
grid layout, with minimal routing overhead. Third, we show how to leverage the
intrinsic biased-noise resilience of the proposed QRAM for implementation on
either Noisy Intermediate-Scale Quantum (NISQ) or Fault-Tolerant Quantum
Computing (FTQC) hardware. Finally, we validate these results numerically via
both classical simulation and quantum hardware experimentation. Our novel
Feynman-path-based simulator allows for efficient simulation of noisy QRAM
circuits at a larger scale than previously possible. Collectively, our results
outline the set of software and hardware controls needed to implement practical
QRAM.
|
Automatic speech recognition (ASR) has recently become an important challenge
when using deep learning (DL). It requires large-scale training datasets and
high computational and storage resources. Moreover, DL techniques and machine
learning (ML) approaches in general, hypothesize that training and testing data
come from the same domain, with the same input feature space and data
distribution characteristics. This assumption, however, is not applicable in
some real-world artificial intelligence (AI) applications. Moreover, there are
situations where gathering real data is challenging, expensive, or rarely
occurring, which cannot meet the data requirements of DL models. Deep transfer
learning (DTL) has been introduced to overcome these issues; it helps
develop high-performing models using real datasets that are small or slightly
different but related to the training data. This paper presents a comprehensive
survey of DTL-based ASR frameworks to shed light on the latest developments and
to help academics and professionals understand current challenges. Specifically,
after presenting the DTL background, a well-designed taxonomy is adopted to
inform the state-of-the-art. A critical analysis is then conducted to identify
the limitations and advantages of each framework. Moving on, a comparative
study is introduced to highlight the current challenges before deriving
opportunities for future research.
|
This paper presents a characteristic-based flux partitioning for the
semi-implicit time integration of atmospheric flows. Nonhydrostatic models
require the solution of the compressible Euler equations. The acoustic
time-scale is significantly faster than the advective scale, yet it is
typically not relevant to atmospheric and weather phenomena. The acoustic and
advective components of the hyperbolic flux are separated in the characteristic
space. High-order, conservative additive Runge-Kutta methods are applied to the
partitioned equations so that the acoustic component is integrated in time
implicitly with an unconditionally stable method, while the advective component
is integrated explicitly. The time step of the overall algorithm is thus
determined by the advective scale. Benchmark flow problems are used to
demonstrate the accuracy, stability, and convergence of the proposed algorithm.
The computational cost of the partitioned semi-implicit approach is compared
with that of explicit time integration.
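As a schematic of the time integration strategy described above (written for a generic flux splitting rather than the characteristic-space partition used in the paper), an additive Runge-Kutta step for $\partial_t U + \nabla\cdot F_{\mathrm{adv}}(U) + \nabla\cdot F_{\mathrm{ac}}(U) = 0$ treats the advective flux with explicit coefficients $\tilde a_{ij}, \tilde b_i$ and the acoustic flux with implicit coefficients $a_{ij}, b_i$:

$$ U^{(i)} = U^n - \Delta t \sum_{j=1}^{i-1} \tilde a_{ij}\, \nabla\cdot F_{\mathrm{adv}}\big(U^{(j)}\big) - \Delta t \sum_{j=1}^{i} a_{ij}\, \nabla\cdot F_{\mathrm{ac}}\big(U^{(j)}\big), $$

$$ U^{n+1} = U^n - \Delta t \sum_{i=1}^{s} \Big[ \tilde b_i\, \nabla\cdot F_{\mathrm{adv}}\big(U^{(i)}\big) + b_i\, \nabla\cdot F_{\mathrm{ac}}\big(U^{(i)}\big) \Big], $$

so that only the stiff acoustic part requires an implicit solve and the step size is set by the advective scale.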
|
In this paper we mainly describe $\mathbb{Q}$-Gorenstein smoothings of
projective surfaces with only Wahl singularities which have birational fibers.
For instance, these degenerations appear in normal degenerations of the
projective plane, and in boundary divisors of the KSBA compactification of the
moduli space of surfaces of general type [KSB88]. We give an explicit
description of them as smooth deformations plus 3-fold birational operations,
through the flips and divisorial contractions in [HTU13]. We interpret the
continuous part (smooth deformations) as degenerations of certain curves in the
general fiber. At the end, we work out examples happening in the KSBA boundary
for invariants $K^2=1$, $p_g=0$, and $\pi_1=0$ using plane curves.
|
Elastic constants and their derived properties of various cubic Heusler
compounds were calculated using first-principles density functional theory. To
begin with, Cu$_2$MnAl is used as a case study to explain the interpretation of
the basic quantities and compare them with experiments. The main part of the
work focuses on Co$_2$-based compounds that are Co$_2$Mn$M$ with the main group
elements $M=$~Al, Ga, In, Si, Ge, Sn, Pb, Sb, Bi, and Co$_2TM$ with the main
group elements Si or Ge, and the $3d$ transition metals $T=$~Sc, Ti, V, Cr, Mn,
and Fe. It is found that many properties of Heusler compounds correlate to the
mass or nuclear charge $Z$ of the main group element.
Blackman's and Every's diagrams are used to compare the elastic properties of
the materials, whereas Pugh's and Poisson's ratios are used to analyze the
relationship between interatomic bonding and physical properties. It is found
that the {\it Pugh's criterion} on brittleness needs to be revised whereas {\it
Christensen's criterion} describes the ductile--brittle transition of Heusler
compounds very well. The calculated elastic properties hint at a metallic
bonding with intermediate brittleness for the studied Heusler compounds.
The universal anisotropy of the stable compounds has values in the range of
$0.57 <A_U <2.73$. The compounds with higher $A_U$ values are found close to
the middle of the transition metal series. In particular, Co$_2$ScAl with
$A_U=0.01$ is predicted to be an isotropic material that comes closest to an
ideal Cauchy solid as compared to the remaining Co$_2$-based compounds. Apart
from the elastic constants and moduli, the sound velocities, Debye
temperatures, and hardness are predicted and discussed for the studied systems.
The calculated slowness surfaces for sound waves reflect the degree of
anisotropy of the compounds.
|
Let $\|\cdot\|$ be a norm on $\mathbb{R}^n$. Averaging $\|(\epsilon_1 x_1, \ldots, \epsilon_n x_n)\|$
over all the $2^n$ choices of $\epsilon = (\epsilon_1, \ldots, \epsilon_n) \in \{-1, +1\}^n$, we
obtain an expression $|||\cdot|||$ which is an unconditional norm on $\mathbb{R}^n$.
  Bourgain, Lindenstrauss and Milman showed that, for a certain (large)
constant $\eta > 1$, one may average over $\eta n$ (random) choices of $\epsilon$ and
obtain a norm that is isomorphic to $|||\cdot|||$. We show that this is the case
for any $\eta > 1$.
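A small numerical illustration of the two averages discussed above, using a generic (non-unconditional) norm $\|v\| = |Av|_2$ for a fixed matrix $A$ and a dimension small enough that the full average over all $2^n$ sign choices can be computed exactly:

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    A = rng.normal(size=(n, n))                # fixed (almost surely invertible) map
    norm = lambda v: np.linalg.norm(A @ v)     # a norm on R^n that is not unconditional

    def sign_averaged_norm(x):
        """|||x|||: average of ||(eps_1 x_1, ..., eps_n x_n)|| over all 2^n sign vectors."""
        return np.mean([norm(np.array(eps) * x)
                        for eps in itertools.product([-1.0, 1.0], repeat=n)])

    def random_sign_average(x, m):
        """The same average over only m randomly chosen sign vectors."""
        eps = rng.choice([-1.0, 1.0], size=(m, n))
        return np.mean([norm(e * x) for e in eps])

    x = rng.normal(size=n)
    print(norm(x), sign_averaged_norm(x), random_sign_average(x, m=2 * n))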
|
We study the parameter dependence of the internal structure of resonance
states by formulating a complex two-dimensional (2D) matrix model, where the two
dimensions represent two levels of resonances. We calculate a critical value of
the parameter at which a "nature transition" with character exchange occurs
between two resonance states, from the viewpoint of geometry on the
complex-parameter space. Such a critical value is useful for knowing the internal
structure of resonance states as the parameter of the system is varied. We
apply the model to analyze the internal structure of hadrons with variation of
the color number Nc from infinity to a realistic value 3. By regarding 1/Nc as
the variable parameter in our model, we calculate a critical color number for the
nature transition between hadronic states described as a quark-antiquark pair and as
a mesonic molecule (an exotic), from the geometry on the complex-Nc plane. For the
large-Nc effective theory, we employ the chiral Lagrangian induced by
holographic QCD with D4/D8/D8-bar multi-D brane system in the type IIA
superstring theory.
|
We develop a consistent perturbation theory in quantum fluctuations around
the classical evolution of a system of interacting bosons. The zero order
approximation gives the classical Gross-Pitaevskii equations. In the next order
we recover the truncated Wigner approximation, where the evolution is still
classical but the initial conditions are distributed according to the Wigner
transform of the initial density matrix. Further corrections can be
characterized as quantum scattering events, which appear in the form of a
nonlinear response of the observable to an infinitesimal displacement of the
field along its classical evolution. At the end of the paper we give a few
numerical examples to test the formalism.
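For a single-component Bose gas, the zero-order (classical) equation referred to above is the Gross-Pitaevskii equation,

$$ i\hbar\, \partial_t \psi(\mathbf{r},t) = \Big( -\tfrac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,|\psi(\mathbf{r},t)|^2 \Big)\, \psi(\mathbf{r},t), $$

and in the truncated Wigner approximation this same equation is propagated from initial fields sampled from the Wigner transform of the initial density matrix.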
|
The isotope $^{229}$Th is the only nucleus known to possess an excited state
$^{229m}$Th in the energy range of a few electron volts, a transition energy
typical for electrons in the valence shell of atoms, but about four orders of
magnitude lower than common nuclear excitation energies. A number of
applications of this unique nuclear system, which is accessible by optical
methods, have been proposed. Most promising among them appears a highly precise
nuclear clock that outperforms existing atomic timekeepers. Here we present the
laser spectroscopic investigation of the hyperfine structure of
$^{229m}$Th$^{2+}$, yielding values of fundamental nuclear properties, namely
the magnetic dipole and electric quadrupole moments as well as the nuclear
charge radius. After the recent direct detection of this long-searched-for
isomer, our results now provide detailed insight into its nuclear structure and
present a method for its non-destructive optical detection.
|
Recently, we have worked out the axial two-nucleon current operator to
leading one-loop order in chiral effective field theory using the method of
unitary transformation. Our final expressions, however, differ from the ones
derived by the JLab-Pisa group using time-ordered perturbation theory (Phys.
Rev. C 93, no. 1, 015501 (2016) Erratum: [Phys. Rev. C 93, no. 4, 049902
(2016)] Erratum: [Phys. Rev. C 95, no. 5, 059901 (2017)]). In this paper we
consider the box diagram contribution to the axial current and demonstrate that
the results obtained using the two methods are unitary equivalent at the
Fock-space level. We adjust the unitary phases by matching the corresponding
two-pion exchange nucleon-nucleon potentials and rederive the box diagram
contribution to the axial current operator following the approach of the
JLab-Pisa group, thereby reproducing our original result. We provide detailed
information on the calculation, including the relevant intermediate steps, in
order to facilitate a clarification of this disagreement.
|
Kernel adaptive filtering (KAF) is proposed for nonlinearity-tolerant optical
direct detection. For 7x128Gbit/s PAM4 transmission over a 33.6km 7-core fiber,
KAF needs only 10 equalizer taps to reach the KP4-FEC BER limit, whereas a
decision-feedback equalizer needs 43 equalizer taps to reach the HD-FEC BER
limit.
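The abstract does not specify which kernel adaptive filter is used; as a minimal illustration of the algorithm class, the sketch below implements a plain kernel least-mean-squares (KLMS) equalizer with a Gaussian kernel on synthetic PAM4-like data. The kernel width, step size, window length, and toy nonlinear channel are all assumptions.

    import numpy as np

    def klms(x, d, taps=10, step=0.2, sigma=1.0):
        """Kernel LMS: y_n = sum_i a_i * k(u_n, c_i); one new center is added per sample."""
        kernel = lambda U, u: np.exp(-np.sum((U - u) ** 2, axis=-1) / (2 * sigma ** 2))
        centers, coeffs, y = [], [], np.zeros_like(d)
        for n in range(taps - 1, len(x)):
            u = x[n - taps + 1:n + 1]                 # sliding window ending at sample n
            if centers:
                y[n] = np.dot(coeffs, kernel(np.array(centers), u))
            e = d[n] - y[n]                           # prediction error
            centers.append(u)                         # grow the dictionary
            coeffs.append(step * e)                   # KLMS coefficient for the new center
        return y

    rng = np.random.default_rng(0)
    s = rng.choice([-3.0, -1.0, 1.0, 3.0], size=2000)       # PAM4-like symbols
    x = s + 0.1 * s ** 2 + 0.05 * rng.normal(size=s.size)   # toy nonlinear channel
    y = klms(x, s)
    print(np.mean((y[500:] - s[500:]) ** 2))                # post-convergence MSE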
|
Asymptotic expansions in the far field for the incompressible Navier-Stokes flow
are established. Under moment conditions on the initial vorticity, a technique of
renormalization together with the Biot-Savart law yields a high-order asymptotic
expansion for the velocity. In particular, the scalings and large-time behaviors
of the expansions are clarified. By employing them, the time evolution of the
velocity in the far field is described. In an appendix, the asymptotic behavior of
solutions as the time variable tends to infinity is given.
|
The most important problem of fundamental Physics is the quantization of the
gravitational field. A main difficulty is the lack of available experimental
tests that discriminate among the theories proposed to quantize gravity.
Recently, Lorentz invariance violation by Quantum Gravity(QG) have been the
source of a growing interest. However, the predictions depend on ad-hoc
hypothesis and too many arbitrary parameters. Here we show that the Standard
Model(SM) itself contains tiny Lorentz invariance violation(LIV) terms coming
from QG. All terms depend on one arbitrary parameter $\alpha$ that set the
scale of QG effects. This parameter can be estimated using data from the Ultra
High Energy Cosmic Rays spectrum to be $|\alpha|<\sim 10^{-22}-10^{-23}$.
|
In a very recent paper, M. Rahman introduced a remarkable family of
polynomials in two variables as the eigenfunctions of the transition matrix for
a nontrivial Markov chain due to M. Hoare and M. Rahman. I indicate here that
these polynomials are bispectral. This should be just one of the many
remarkable properties enjoyed by these polynomials. For several challenges,
including finding a general proof of some of the facts displayed here, the
reader should look at the last section of this paper.
|
We study the weak error associated with the Euler scheme for non-degenerate
diffusion processes with non-smooth bounded coefficients. Namely, we consider
the cases of H{\"o}lder continuous coefficients as well as piecewise smooth
drifts with smooth diffusion matrices.
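Here the weak error of the Euler scheme $\bar X^h$ with time step $h$ on $[0,T]$ is measured, for suitable test functions $f$, by

$$ \mathcal{E}_w(h) = \big| \mathbb{E}[f(X_T)] - \mathbb{E}[f(\bar X_T^h)] \big|, $$

as opposed to the strong (pathwise) error $\mathbb{E}|X_T - \bar X_T^h|$.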
|
The surface of a molecule determines many of its chemical and physical
properties and is of great interest and importance. In this Letter, we introduce
the concept of molecular multiresolution surfaces as a new paradigm of
multiscale biological modeling. Molecular multiresolution surfaces contain not
only a family of molecular surfaces, corresponding to different probe radii,
but also the solvent accessible surface and van der Waals surface as limiting
cases. All the proposed surfaces are generated by a novel approach, the
diffusion map of continuum solvent over the van der Waals surface of a
molecule. A new local spectral evolution kernel is introduced for the numerical
integration of the diffusion equation in a single time step.
|
A variety of network modeling problems begin by generating a degree sequence
drawn from a given probability distribution. If the randomly generated sequence
is not graphic, we give a new approach for generating a graphic approximation
of the sequence. This approximation scheme is fast, requiring only one pass
through the sequence, and produces small probability distribution distances for
large sequences.
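The approximation scheme itself is not described in the abstract; for context, the graphicality test that the generated sequence must pass is the Erdős-Gallai condition, sketched below.

    def is_graphic(degrees):
        """Erdos-Gallai: a degree sequence is graphic iff its sum is even and, for
        every k, the sum of the k largest degrees is at most
        k*(k-1) + sum of min(d_i, k) over the remaining degrees."""
        d = sorted(degrees, reverse=True)
        n = len(d)
        if n == 0:
            return True
        if sum(d) % 2 != 0 or d[0] > n - 1 or d[-1] < 0:
            return False
        for k in range(1, n + 1):
            if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
                return False
        return True

    print(is_graphic([3, 3, 2, 2, 1, 1]))   # True
    print(is_graphic([4, 4, 4, 1, 1]))      # False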
|
In canonical quantum gravity the wave function of the universe is static,
leading to the so-called problem of time. We summarize here how Bohmian
mechanics solves this problem.
|
This report reviews recent theory progress in the field of heavy quarkonium
and open heavy flavour production calculations.
|
We investigate the algebraic conditions the scattering data of short-range
perturbations of quasi-periodic finite-gap Jacobi operators have to satisfy. As
our main result we provide the Poisson-Jensen-type formula for the transmission
coefficient in terms of Abelian integrals on the underlying hyperelliptic
Riemann surface and give an explicit condition for its single-valuedness. In
addition, we establish trace formulas which relate the scattering data to the
conserved quantities in this case.
|
We produce explicit geometric representatives of nontrivial homology classes
in the space of long knots in R^d, when d is even. We generalize results of
Cattaneo, Cotta-Ramusino and Longoni to define cycles which live off of the
vanishing line of a homology spectral sequence due to Sinha. We use
configuration space integrals to show our classes pair nontrivially with
cohomology classes due to Longoni.
|
We have determined the mass-density radial profiles of the first five strong
gravitational lens systems discovered by the Herschel Astrophysical Terahertz
Large Area Survey (H-ATLAS). We present an enhancement of the semi-linear lens
inversion method of Warren & Dye which allows simultaneous reconstruction of
several different wavebands and apply this to dual-band imaging of the lenses
acquired with the Hubble Space Telescope. The five systems analysed here have
lens redshifts which span the range 0.22<z<0.94. Our findings are consistent
with other studies in concluding that: 1) the logarithmic slope of the total
mass density profile steepens with decreasing redshift; 2) the slope is
positively correlated with the average total projected mass density of the lens
contained within half the effective radius and negatively correlated with the
effective radius; 3) the fraction of dark matter contained within half the
effective radius increases with increasing effective radius and increases with
redshift.
|
Accurate prediction of the onset and strength of breaking surface gravity
waves is a long-standing problem of significant theoretical and applied
interest. Recently, Barth\'el\'emy et al (https://doi.org/10.1017/jfm.2018.93)
examined the energetics of focusing wave groups in deep and intermediate depth
water and found that breaking and non-breaking regimes were clearly separated
by the normalised energy flux, $B$, near the crest tip. Furthermore, the
transition of $B$ through a generic breaking threshold value $B_\mathrm{th}
\approx 0.85$ was found to precede visible breaking onset by up to one fifth of
a wave period. This remarkable generic threshold for breaking inception has
since been validated numerically for 2D and 3D domains and for shallow and
shoaling water waves; however, there is presently no theoretical explanation
for its efficacy as a predictor for breaking. This study investigates the
correspondence between the parameter $B$ and the crest energy growth rate
following the evolving crest for breaking and non-breaking waves in a numerical
wave tank using a range of wave packet configurations. Our results indicate
that the time rate of change of $B$ is strongly correlated with the energy
density convergence rate at the evolving wave crest. These findings further
advance present understanding of the elusive process of wave breaking.
|
In the family grand unification models (fGUTs), we propose that gauge U(1)'s
beyond the minimal GUT gauge group are family gauge symmetries. For the
symmetry $L_\mu-L_\tau$, i.e. $Q_{2}-Q_{3}$ in our case, to be useful for the
LHC anomaly, we discuss an SU(9) fGUT and also present an example in Georgi's
SU(11) fGUT.
|
We demonstrate that time-domain ptychography, a recently introduced ultrafast
pulse reconstruction modality, has properties ideally suited for the temporal
characterization of complex light pulses with large time-bandwidth products as
it achieves temporal resolution on the scale of a single optical cycle using
long probe pulses, low sampling rates, and an extremely fast and robust
algorithm. In comparison to existing techniques, ptychography minimizes the
data to be recorded and processed, and drastically reduces the computational
time of the reconstruction. Experimentally we measure the temporal waveform of
an octave-spanning, 3.5~ps long supercontinuum pulse generated in photonic
crystal fiber, resolving features as short as 5.7~fs with sub-fs resolution and
30~dB dynamic range using 100~fs probe pulses and similarly large delay steps.
|
We prove that integer programming with three quantifier alternations is
$NP$-complete, even for a fixed number of variables. This complements earlier
results by Lenstra and Kannan, which together say that integer programming with
at most two quantifier alternations can be done in polynomial time for a fixed
number of variables. As a byproduct of the proof, we show that for two
polytopes $P,Q \subset \mathbb{R}^4$ , counting the projection of integer
points in $Q \backslash P$ is $\#P$-complete. This contrasts the 2003 result by
Barvinok and Woods, which allows counting in polynomial time the projection of
integer points in $P$ and $Q$ separately.
|
Let $\mathbf{k}$ be a field which is either finite or algebraically closed
and let $R = \mathbf{k}[x_1,\ldots,x_n].$ We prove that any $g_1,\ldots,g_s\in
R$ homogeneous of positive degrees $\le d$ are contained in an ideal generated
by an $R_t$-sequence of $\le A(d)(s+t)^{B(d)}$ homogeneous polynomials of
degree $\le d,$ subject to some restrictions on the characteristic of
$\mathbf{k}.$ This yields effective bounds for new cases of Ananyan and
Hochster's theorem A in arXiv:1610.09268 on strength and the codimension of the
singular locus. It also implies effective bounds when $d$ equals the
characteristic of $\mathbf{k}$ for Tao and Ziegler's result in arXiv:1101.1469
on rank and $U^d$ Gowers norms of polynomials over finite fields.
|
We describe KMS and ground states arising from a generalized gauge action on
ultragraph C*-algebras. We focus on ultragraphs that satisfy Condition~(RFUM),
so that we can use the partial crossed product description of ultragraph
C*-algebras recently described by the second author and Danilo Royer. In
particular, for ultragraphs with no sinks, we generalize a recent result by
Toke Carlsen and Nadia Larsen: Given a time evolution on the C*-algebra of an
ultragraph, induced by a function on the edge set, we characterize the KMS
states in five different ways and ground states in four different ways. In both
cases we include a characterization given by maps on the set of generalized
vertices of the ultragraph. We apply this last result to show the existence of
KMS and ground states for the ultragraph C*-algebra that is neither an
Exel-Laca nor a graph C*-algebra.
|
Quantitative predictions about the processes that promote species coexistence
are a subject of active research in ecology. In particular, competitive
interactions are known to shape and maintain ecological communities, and
situations where some species out-compete or dominate over some others are key
to describe natural ecosystems. Here we develop ecological theory using a
stochastic, synthetic framework for plant community assembly that leads to
predictions amenable to empirical testing. We propose two stochastic
continuous-time Markov models that incorporate competitive dominance through a
hierarchy of species heights. The first model, which is spatially implicit,
predicts both the expected number of species that survive and the conditions
under which heights are clustered in realized model communities. The second one
allows spatially-explicit interactions of individuals and alternative
mechanisms that can help shorter plants overcome height-driven competition, and
it demonstrates that clustering patterns persist not only locally but also
across increasing spatial scales. Moreover, although plants are actually
height-clustered in the spatially-explicit model, it allows for plant species
abundances that are not necessarily skewed toward taller plants.
|
The aim of this paper is to investigate the existence of solutions of some
semilinear elliptic problems on open bounded domains when the nonlinearity is
subcritical and asymptotically linear at infinity and there is a perturbation
term which is just continuous. Even in the case when the problem does not have a
variational structure, suitable procedures and estimates allow us to prove that
the number of distinct critical levels of the functional associated to the
unperturbed problem is "stable" under small perturbations, obtaining
multiplicity results if the nonlinearity is odd, both in the non-resonant and
in the resonant case.
|
EvenQuads is a new card game that is a generalization of the SET game, where
each card is characterized by three attributes, each taking four possible
values. Four cards form a quad when, for each attribute, the values are the
same, all different, or half and half. Given $\ell$ cards from the deck of
EvenQuads, we can build an error-correcting linear binary code of length $\ell$
and Hamming distance 4. The quads correspond to codewords of weight 4.
Error-correcting codes help us calculate the possible number of quads when
given up to 8 cards. We also estimate the number of cards that do not contain
quads for decks of different sizes. In addition, we discuss properties of
error-correcting codes built on semimagic, magic, and strongly magic quad
squares.
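A direct check of the quad condition described above (the attribute encoding is arbitrary; here each card is a triple of values in {0, 1, 2, 3}):

    from collections import Counter
    from itertools import combinations

    def is_quad(cards):
        """Four cards form a quad iff, for each attribute, the four values are all the
        same, all different, or split two-and-two ('half and half')."""
        for attribute in zip(*cards):
            counts = sorted(Counter(attribute).values())
            if counts not in ([4], [1, 1, 1, 1], [2, 2]):
                return False
        return True

    def count_quads(cards):
        """Number of quads among a given collection of cards."""
        return sum(is_quad(four) for four in combinations(cards, 4))

    deck = [(a, b, c) for a in range(4) for b in range(4) for c in range(4)]
    print(count_quads(deck[:8]))    # quads among one particular choice of 8 cards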
|
This is the report for the topical group Theory of Neutrino Physics
(TF11/NF08) for Snowmass 2021. This report summarizes the progress in the field
of theoretical neutrino physics in the past decade, the current status of the
field, and the prospects for the upcoming decade.
|
Metric learning algorithms aim to learn a distance function that brings the
semantically similar data items together and keeps dissimilar ones at a
distance. Traditional Mahalanobis distance learning is equivalent to finding a
linear projection. In contrast, Deep Metric Learning (DML) methods have been proposed
that automatically extract features from data and learn a non-linear
transformation from the input space to a semantic embedding space. Recently,
many DML methods have been proposed that focus on enhancing the discrimination power of
the learned metric through novel sampling strategies or loss functions.
This approach is very helpful when both the training and test examples are
coming from the same set of categories. However, it is less effective in many
applications of DML such as image retrieval and person re-identification. Here,
the DML should learn general semantic concepts from observed classes and employ
them to rank or identify objects from unseen categories. Neglecting the
generalization ability of the learned representation and merely emphasizing
learning a more discriminative embedding on the observed classes may lead to
overfitting. To address this limitation, we propose a framework to
enhance the generalization power of existing DML methods in a Zero-Shot
Learning (ZSL) setting by general yet discriminative representation learning
and employing a class adversarial neural network. To learn a more general
representation, we propose to employ feature maps of intermediate layers in a
deep neural network and enhance their discrimination power through an attention
mechanism. Besides, a class adversarial network is utilized to enforce the deep
model to seek class invariant features for the DML task. We evaluate our work
on widely used machine vision datasets in a ZSL setting.
|
A direct sampling method (DSM) is designed herein for a real-time detection
of small anomalies from scattering parameters measured by a small number of
dipole antennas. Applicability of the DSM is theoretically demonstrated by
proving that its indicator function can be represented in terms of an infinite
series of Bessel functions of integer order and the antenna locations.
Experiments using real data then demonstrate both the effectiveness and
limitations of this method.
|
It is widely known how the human ability to cooperate has influenced the
thriving of our species. However, as we move towards a hybrid human-machine
future, it is still unclear how the introduction of AI agents in our social
interactions will affect this cooperative capacity. Within the context of the
one-shot collective risk dilemma, where enough members of a group must
cooperate in order to avoid a collective disaster, we study the evolutionary
dynamics of cooperation in a hybrid population made of both adaptive and
fixed-behavior agents. Specifically, we show how the former learn to adapt their
behavior to compensate for the behavior of the latter. The less the
(artificially) fixed agents cooperate, the more the adaptive population is
motivated to cooperate, and vice-versa, especially when the risk is higher. By
pinpointing how adaptive agents avoid their share of costly cooperation if the
fixed-behavior agents implement a cooperative policy, our work hints towards an
unbalanced hybrid world. On one hand, this means that introducing cooperative
AI agents within our society might unburden human efforts. Nevertheless, it is
important to note that costless artificial cooperation might not be realistic,
and more than deploying AI systems that carry the cooperative effort, we must
focus on mechanisms that nudge shared cooperation among all members in the
hybrid system.
|
The maximal inequalities for diffusion processes have drawn increasing
attention in recent years. However, the existing proof of the $L^p$ maximum
inequalities for the Ornstein-Uhlenbeck process was dubious. Here we give a
rigorous proof of the moderate maximum inequalities for the Ornstein-Uhlenbeck
process, which include the $L^p$ maximum inequalities as special cases and
generalize the remarkable $L^1$ maximum inequalities obtained by Graversen and
Peskir [P. Am. Math. Soc., 128(10):3035-3041, 2000]. As a corollary, we also
obtain a new moderate maximal inequality for continuous local martingales,
which can be viewed as a supplement of the classical Burkholder-Davis-Gundy
inequality.
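For reference, the classical Burkholder-Davis-Gundy inequality mentioned above states that for every $p > 0$ there exist constants $c_p, C_p > 0$ such that, for any continuous local martingale $M$ with $M_0 = 0$,

$$ c_p\, \mathbb{E}\big[\langle M \rangle_T^{p/2}\big] \;\le\; \mathbb{E}\Big[\sup_{0 \le t \le T} |M_t|^p\Big] \;\le\; C_p\, \mathbb{E}\big[\langle M \rangle_T^{p/2}\big]; $$

the moderate maximal inequality obtained here is presented as a supplement to this type of two-sided estimate.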
|
Distributed system architectures such as cloud computing or the emergent
architectures of the Internet Of Things, present significant challenges for
security and privacy. Specifically, in a complex application there is a need to
securely delegate access control mechanisms to one or more parties, who in turn
can govern methods that enable multiple other parties to be authenticated in
relation to the services that they wish to consume. We identify shortcomings in
an existing proposal by Xu et al for multiparty authentication and evaluate a
novel model from Al-Aqrabi et al that has been designed specifically for
complex multiple security realm environments. The adoption of a Session
Authority Cloud ensures that resources for authentication requests are
scalable, whilst permitting the necessary architectural abstraction for myriad
hardware IoT devices such as actuators and sensor networks, etc. In addition,
the ability to ensure that session credentials are confirmed with the relevant
resource principles means that the essential rigour for multiparty
authentication is established.
|
An oscillating universe cycles through a series of expansions and
contractions. We propose a model in which ``phantom'' energy with $p < -\rho$
grows rapidly and dominates the late-time expanding phase. The universe's
energy density is so large that the effects of quantum gravity are important at
both the beginning and the end of each expansion (or contraction). The bounce
can be caused by high energy modifications to the Friedmann equation, which
make the cosmology nonsingular. The classic problem of black hole overproduction in
oscillating universes is resolved due to the destruction of the black holes by the phantom
energy.
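The abstract does not commit to a specific high-energy modification; one commonly cited example that produces a nonsingular bounce (of loop-quantum-cosmology type) is

$$ H^2 = \frac{8\pi G}{3}\, \rho \left( 1 - \frac{\rho}{\rho_c} \right), $$

in which the expansion rate vanishes and reverses when the energy density reaches the critical value $\rho_c$.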
|
The hybrid relay selection (HRS) scheme, which adaptively chooses
amplify-and-forward (AF) and decode-and-forward (DF) protocols, is very
effective in achieving robust performance in wireless networks. This paper
analyzes the frame error rate (FER) of the HRS scheme in general cooperative
wireless networks without and with utilizing error control coding at the source
node. We first develop an improved signal-to-noise ratio (SNR) threshold-based
FER approximation model. Then, we derive an analytical average FER expression
as well as an asymptotic expression at high SNR for the HRS scheme and
generalize to other relaying schemes. Simulation results are in excellent
agreement with the theoretical analysis, which validates the derived FER
expressions.
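The improved model itself is not reproduced in the abstract, but the basic SNR-threshold idea it refines approximates the frame error rate by the probability that the instantaneous received SNR $\gamma$ falls below a threshold $\gamma_{\mathrm{th}}$,

$$ \mathrm{FER} \;\approx\; \Pr(\gamma < \gamma_{\mathrm{th}}) \;=\; \int_0^{\gamma_{\mathrm{th}}} p_\gamma(\gamma)\, d\gamma, $$

averaged over the fading distribution $p_\gamma$ of the end-to-end link.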
|
The cross-correlation search for gravitational waves, known as
'radiometry', has previously been applied to map the gravitational wave
stochastic background in the sky and also to target gravitational waves from
rotating neutron stars/pulsars. We consider the Virgo cluster, which may
appear as a `hot spot' spanning a few pixels in the sky in the radiometry analysis. Our
results show that a sufficient signal-to-noise ratio can be accumulated with
integration times of the order of a year. We also construct a numerical
simulation of the radiometry analysis, assuming the ground-based detectors currently
under construction or upgrade. The point spread functions of the injected sources are
confirmed by numerical tests. The typical resolution of the radiometry analysis is a few
square degrees, which corresponds to several thousand pixels of sky mapping.
|
The surface brightness fluctuations (SBF) method measures the variance in a
galaxy's light distribution arising from fluctuations in the numbers and
luminosities of individual stars per resolution element. Once calibrated for
stellar population effects, SBF measurements with HST provide distances to
early-type galaxies with unrivaled precision. Optical SBF data from HST for the
Virgo and Fornax clusters give the relative distances of these nearby fiducial
clusters with 2% precision and constrain their internal structures.
Observations in hand will allow us to tie the Coma cluster, the standard of
comparison for distant cluster studies, into the same precise relative distance
scale. The SBF method can be calibrated in an absolute sense either empirically
from Cepheids or theoretically from stellar population models. The agreement
between the model and empirical zero points has improved dramatically,
providing an independent confirmation of the Cepheid distance scale. The SBF signal is
brighter still in the near-IR, and an ongoing program to calibrate the method
for the F110W and F160W passbands of the WFC3 IR channel will enable accurate
distance derivation whenever a large early-type galaxy or bulge is observed in
these passbands at distances reaching well out into the Hubble flow.
|
The nucleosynthesis and production of radioactive elements in SN 1987A are
reviewed. Different methods for estimating the masses of 56Ni, 57Ni, and 44Ti
are discussed, and we conclude that broad band photometry in combination with
time-dependent models for the light curve gives the most reliable estimates.
|
We investigate theoretically spin and orbital pseudospin magnetic properties
of a molecular orbital in parabolic and elliptic double quantum dots (DQDs). In
our many body calculation we include intra- and inter-dot electron-electron
interactions, in addition to the intradot exchange interaction of `p' orbitals.
We find for parabolic DQDs that, except for the half or completely filled
molecular orbital, spins in different dots are ferromagnetically coupled while
orbital pseudospins are antiferromagnetically coupled. For elliptic DQDs spins
and pseudospins are either ferromagnetically or antiferromagnetically coupled,
depending on the number of electrons in the molecular orbital. We have
determined orbital pseudospin quantum numbers for the ground states of elliptic
DQDs. An experiment is suggested to test the interplay between orbital
pseudospin and spin magnetism.
|
We consider three- and four-level atomic lasers that are either incoherently
(unidirectionally) or coherently (bidirectionally) pumped, the single-mode
cavity being resonant with the laser transition. The intra-cavity Fano factor
and the photo-current spectral density are evaluated on the basis of rate
equations.
According to that approach, fluctuations are caused by jumps in active and
detecting atoms. The algebra is considerably simpler than the one required by
Quantum-Optics treatments.
Whenever a comparison can be made, the expressions obtained coincide. The
conditions under which the output light exhibits sub-Poissonian statistics are
considered in detail. Analytical results, based on linearization, are verified
by comparison with Monte Carlo simulations. An essentially exhaustive
investigation of sub-Poissonian light generation by three- and four-level atom
lasers has been performed. Only special forms were reported earlier.
|
We introduce the categories of infinitesimal Hopf modules and bimodules over
an infinitesimal bialgebra. We show that they correspond to modules and
bimodules over the infinitesimal version of the double. We show that there is a
natural, but non-obvious way to construct a pre-Lie algebra from an arbitrary
infinitesimal bialgebra and a dendriform algebra from a quasitriangular
infinitesimal bialgebra. As consequences, we obtain a pre-Lie structure on the
space of paths on an arbitrary quiver, and a striking dendriform structure on
the space of endomorphisms of an arbitrary infinitesimal bialgebra, which
combines the convolution and composition products. We extend the previous
constructions to the categories of Hopf, pre-Lie and dendriform bimodules. We
construct a brace algebra structure from an arbitrary infinitesimal bialgebra;
this refines the pre-Lie algebra construction. In two appendices, we show that
infinitesimal bialgebras are comonoid objects in a certain monoidal category
and discuss a related construction for counital infinitesimal bialgebras.
|
We discuss perturbative O(g^2a) matching with static heavy quarks and
domain-wall light quarks for lattice operators relevant to B-meson decays and
$B^0$-$\bar{B}^0$ mixing. The chiral symmetry of the light domain-wall quarks
does not prohibit operator mixing at O(a) for these operators. The O(a)
corrections to physical quantities are non-negligible and must be included to
obtain high-precision simulation results for CKM physics. We provide results
using plaquette, Symanzik, Iwasaki and DBW2 gluon actions and applying APE,
HYP1 and HYP2 link-smearing for the static quark action.
|
Wolf-Rayet (WR) stars are evolved massive stars with strong fast stellar
winds. WR stars in our Galaxy have shown three possible sources of X-ray
emission associated with their winds: shocks in the winds, colliding stellar
winds, and wind-blown bubbles; however, quantitative analyses of observations
are often hampered by uncertainties in distances and heavy foreground
absorption. These problems are mitigated in the Magellanic Clouds (MCs), which
are at known distances and have small foreground and internal extinction. We
have therefore started a survey of X-ray emission associated with WR stars in
the MCs using archival Chandra, ROSAT, and XMM-Newton observations. In the
first paper of this series, we report the results for 70 WR stars in the MCs
using 192 archival Chandra ACIS observations. X-ray emission is detected from
29 WR stars. We have investigated their X-ray spectral properties,
luminosities, and temporal variability. These X-ray sources all have
luminosities greater than a few times 10^32 ergs s^-1, with spectra indicative
of highly absorbed emission from a thin plasma at high temperatures typical of
colliding winds in WR+OB binary systems. Significant X-ray variability with
periods ranging from a few hours up to ~20 days is seen associated with several
WR stars. In most of these cases, the X-ray variability can be linked to the
orbital motion of the WR star in a binary system, further supporting the
colliding wind scenario for the origin of the X-ray emission from these stars.
|
The Lorentz Integral Transform approach allows microscopic calculations of
electromagnetic reaction cross sections without explicit knowledge of final
state wave functions. The necessary inversion of the transform has to be
treated with great care, since it constitutes a so-called ill-posed problem. In
this work new inversion techniques for the Lorentz Integral Transform are
introduced. It is shown that they all contain a regularization scheme, which is
necessary to overcome the ill-posed problem. In addition it is illustrated that
the new techniques have a much broader range of application than the present
standard inversion method of the Lorentz Integral Transform.
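
The following sketch shows one standard regularization scheme, Tikhonov regularization, applied to a discretized Lorentz-kernel transform; it is meant only to illustrate why regularization is needed for this ill-posed inversion and is not one of the new techniques introduced in the paper. The grid, kernel width, noise level, and test response are assumptions.

```python
import numpy as np

# Discretize L(sigma) = \int dE R(E) / ((E - sigma)^2 + sigma_I^2)
energies = np.linspace(0.0, 100.0, 150)          # assumed energy grid (MeV)
sigmas = np.linspace(0.0, 100.0, 180)
sigma_i = 10.0                                   # assumed Lorentzian width
dE = energies[1] - energies[0]

K = dE / ((energies[None, :] - sigmas[:, None])**2 + sigma_i**2)

# Assumed "true" response: two Gaussian peaks, plus a small noise on the transform.
r_true = np.exp(-0.5*((energies - 20)/4)**2) + 0.6*np.exp(-0.5*((energies - 55)/8)**2)
transform = K @ r_true + 1e-6 * np.random.default_rng(3).normal(size=len(sigmas))

def tikhonov_inverse(K, L, lam):
    """Minimize ||K r - L||^2 + lam ||r||^2 (lam is the regularization parameter)."""
    A = K.T @ K + lam * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ L)

for lam in (1e-8, 1e-5, 1e-2):
    r_hat = tikhonov_inverse(K, transform, lam)
    print(lam, np.linalg.norm(r_hat - r_true))   # too little or too much lam degrades r_hat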
|
Latent Gaussian process (GP) models are widely used in neuroscience to
uncover hidden state evolutions from sequential observations, mainly in neural
activity recordings. While latent GP models provide a principled and powerful
solution in theory, the intractable posterior in non-conjugate settings
necessitates approximate inference schemes, which may lack scalability. In this
work, we propose cvHM, a general inference framework for latent GP models
leveraging Hida-Mat\'ern kernels and conjugate computation variational
inference (CVI). With cvHM, we are able to perform variational inference of
latent neural trajectories with linear time complexity for arbitrary
likelihoods. The reparameterization of stationary kernels using Hida-Mat\'ern
GPs helps us connect the latent variable models that encode prior assumptions
through dynamical systems to those that encode trajectory assumptions through
GPs. In contrast to previous work, we use bidirectional information filtering,
leading to a more concise implementation. Furthermore, we employ the Whittle
approximate likelihood to achieve highly efficient hyperparameter learning.
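
The sketch below is a generic linear-Gaussian Kalman filter with a Rauch-Tung-Striebel (RTS) backward pass, shown only to illustrate the linear-time, bidirectional state-space style of inference that Hida-Matern reparameterizations of GPs make possible; it is not the cvHM algorithm, and the scalar random-walk dynamics and Gaussian observations are stand-in assumptions (the paper handles arbitrary likelihoods via CVI).

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in linear-Gaussian state-space model: x_t = A x_{t-1} + w, y_t = C x_t + v.
T, A, C, Q, R = 300, 0.99, 1.0, 0.05, 0.5
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = A*x[t-1] + rng.normal(0, np.sqrt(Q))
    y[t] = C*x[t] + rng.normal(0, np.sqrt(R))

# Forward (Kalman) pass: O(T).
mf = np.zeros(T); Pf = np.zeros(T)
m, P = 0.0, 1.0
for t in range(T):
    m, P = A*m, A*P*A + Q                       # predict
    S = C*P*C + R
    K = P*C/S
    m, P = m + K*(y[t] - C*m), (1 - K*C)*P      # update
    mf[t], Pf[t] = m, P

# Backward (RTS smoother) pass: O(T).
ms = mf.copy(); Ps = Pf.copy()
for t in range(T-2, -1, -1):
    Pp = A*Pf[t]*A + Q
    G = Pf[t]*A/Pp
    ms[t] = mf[t] + G*(ms[t+1] - A*mf[t])
    Ps[t] = Pf[t] + G*(Ps[t+1] - Pp)*G

print(np.mean((ms - x)**2), np.mean((mf - x)**2))  # smoothing typically beats filtering
```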
|
Person Re-Identification (person re-id) is a crucial task due to its
applications in visual surveillance and human-computer interaction. In this
work, we present a novel joint Spatial and Temporal Attention Pooling Network
(ASTPN) for video-based person re-identification, which enables the feature
extractor to be aware of the current input video sequences, in a way that
interdependency from the matching items can directly influence the computation
of each other's representation. Specifically, the spatial pooling layer is able
to select regions from each frame, while the attention temporal pooling can
select informative frames over the sequence, with both pooling operations
guided by the information from distance matching. Experiments are conducted on
the iLIDS-VID, PRID-2011 and MARS datasets, and the results demonstrate that
this approach outperforms existing state-of-the-art methods. We also analyze how the joint
pooling in both dimensions can boost the person re-id performance more
effectively than using either of them separately.
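
A minimal PyTorch sketch of attention-based temporal pooling in the spirit of ASTPN; the feature dimension and the single-layer scoring network are assumptions, and the published model additionally couples the two sequences through the distance-matching information, which is omitted here.

```python
import torch
import torch.nn as nn

class AttentionTemporalPooling(nn.Module):
    """Pool per-frame features into one sequence descriptor with learned weights."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, 1)        # one relevance score per frame

    def forward(self, frame_feats):                 # (batch, time, feat_dim)
        scores = self.scorer(frame_feats)           # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)      # attention over the time axis
        return (weights * frame_feats).sum(dim=1)   # (batch, feat_dim)

pool = AttentionTemporalPooling(feat_dim=128)
video_feats = torch.randn(4, 16, 128)               # 4 sequences, 16 frames each
print(pool(video_feats).shape)                       # torch.Size([4, 128])
```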
|
Coupling between two singing wineglasses was obtained and investigated.
Rubbing the rim of one wineglass produces a tone and, due to the coupling,
induces oscillations in the other wineglass. The coupling strength between the
wineglasses needed to induce oscillations was investigated as a function of the
detuning.
|
A unified theory is presented for finite-temperature many-body perturbation
expansions of the anharmonic vibrational contributions to thermodynamic
functions: the free energy, internal energy, and entropy. The theory is
diagrammatically size-consistent at any order, as ensured by the linked-diagram
theorem proved here, and thus applicable to molecular gases and solids on an
equal footing. It is also a basis-set-free formalism, just like its underlying
Bose-Einstein theory, capable of summing anharmonic effects over an infinite
number of states analytically. It is formulated by the
Rayleigh-Schrodinger-style recursions, generating sum-over-states formulas for
the perturbation series, which unambiguously converges to the
finite-temperature vibrational full-configuration-interaction limits. Two
strategies are introduced for reducing these sum-over-states formulas to
compact sum-over-modes analytical formulas. One is a purely algebraic method
that factorizes each many-mode thermal average into a product of one-mode
thermal averages, which are then evaluated by the thermal Born-Huang rules.
Canonical forms of these rules are proposed, dramatically expediting the
reduction process. The other is finite-temperature normal-ordered second
quantization, which is fully developed in this study, including a proof of
thermal Wick's theorem and the derivation of a normal-ordered vibrational
Hamiltonian at finite temperature. The latter naturally defines a
finite-temperature extension of size-extensive vibrational self-consistent
field theory. These reduced formulas can be represented graphically as Feynman
diagrams with resolvent lines, which include anomalous and renormalization
diagrams. Two order-by-order and one general-order algorithms of computing
these perturbation corrections are implemented and applied up to the eighth
order. The results show no signs of Kohn-Luttinger-type nonconvergence.
|
One of the focus areas of modern scientific research is to reveal mysteries
related to genes and their interactions. The dynamic interactions between genes
can be encoded into a gene regulatory network (GRN), which can be used to gain
understanding on the genetic mechanisms behind observable phenotypes. GRN
inference from time series data has recently been a focus area of systems
biology. Due to low sampling frequency of the data, this is a notoriously
difficult problem. We tackle the challenge by introducing the so-called
continuous-time Gaussian process dynamical model (GPDM), based on the Gaussian
process framework that has gained popularity in nonlinear regression problems
arising
in machine learning. The model dynamics are governed by a stochastic
differential equation, where the dynamics function is modelled as a Gaussian
process. We prove the existence and uniqueness of solutions of the stochastic
differential equation. We derive the probability distribution for the Euler
discretised trajectories and establish the convergence of the discretisation.
We develop a GRN inference method called BINGO, based on the developed
framework. BINGO is based on MCMC sampling of trajectories of the GPDM and
estimating the hyperparameters of the covariance function of the Gaussian
process. Using benchmark data examples, we show that BINGO is superior in
dealing with poor time resolution and that it is computationally feasible.
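
A minimal sketch of the Euler discretisation of a continuous-time model dx = f(x) dt + sigma dW of the kind whose convergence is analysed above; the drift here is a fixed toy function standing in for the Gaussian-process-modelled dynamics, and the step size and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def drift(x):
    """Toy stand-in for the GP-modelled dynamics function f(x)."""
    return np.array([x[1] - x[0], x[0]*(1.0 - x[0]) - 0.5*x[1]])

def euler_maruyama(x0, dt, n_steps, sigma=0.05):
    """Euler discretisation of dx = f(x) dt + sigma dW."""
    traj = np.empty((n_steps + 1, len(x0)))
    traj[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(x0))
        traj[k + 1] = traj[k] + drift(traj[k]) * dt + sigma * dW
    return traj

traj = euler_maruyama(np.array([0.2, 0.1]), dt=0.01, n_steps=2000)
print(traj[-1])
```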
|
A scheme of quantum electrodynamic (QED) background-free radiative emission
of neutrino pair (RENP) is proposed in order to achieve precision determination
of neutrino properties so far not accessible. The important point for the
background rejection is the fact that the dispersion relation between the wave
vector along the propagation direction in a waveguide (or in a
photonic-crystal-type fiber) and the frequency is modified by a discretized,
non-vanishing effective mass. This effective mass acts as a cutoff on the
allowed frequencies, and one may
select the RENP photon energy region free of all macro-coherently amplified QED
processes by choosing the cutoff larger than the mass of neutrinos.
|
Production of electron-positron pairs from the quantum vacuum polarized by
the superposition of a strong and a perturbative oscillating electric-field
mode is studied. Our outcomes rely on a nonequilibrium quantum field
theoretical approach, described by the quantum kinetic Boltzmann-Vlasov
equation. By superimposing the perturbative mode, the characteristic resonant
effects and Rabi-like frequencies in the single-particle distribution function
are modified, as compared to the predictions resulting from the case driven by
a strong oscillating field mode only. This is demonstrated in the momentum
spectra of the produced pairs. Moreover, the dependence of the total number of
pairs on the intensity parameter of each mode is discussed, and a strong
enhancement is found for large values of the relative Keldysh parameter.
|
Fine-grained anomaly detection has recently been dominated by segmentation
based approaches. These approaches first classify each element of the sample
(e.g., image patch) as normal or anomalous and then classify the entire sample
as anomalous if it contains anomalous elements. However, such approaches do not
extend to scenarios where the anomalies are expressed by an unusual combination
of normal elements. In this paper, we overcome this limitation by proposing set
features that model each sample by the distribution of its elements. We compute
the anomaly score of each sample using a simple density estimation method. Our
simple-to-implement approach outperforms the state-of-the-art in image-level
logical anomaly detection (+3.4%) and sequence-level time-series anomaly
detection (+2.4%).
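
A minimal sketch of the set-feature idea under simplifying assumptions: each sample's elements are embedded, quantized against a codebook learned from normal data, and summarized by a histogram; the anomaly score is the distance to the nearest normal-sample histogram. The codebook size, element features, and scoring rule are illustrative choices, not the paper's exact design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)

def sample_elements(shift=0.0, n_elems=64, dim=8):
    """Stand-in for per-element features (e.g., image patch embeddings)."""
    return rng.normal(shift, 1.0, size=(n_elems, dim))

train = [sample_elements() for _ in range(200)]               # normal samples
test = [sample_elements() for _ in range(20)] + \
       [sample_elements(shift=1.5) for _ in range(20)]        # last 20 anomalous

codebook = KMeans(n_clusters=16, n_init=4, random_state=0).fit(np.vstack(train))

def set_feature(elements):
    """Histogram of codebook assignments: models the distribution of the elements."""
    counts = np.bincount(codebook.predict(elements), minlength=16)
    return counts / counts.sum()

train_feats = np.array([set_feature(s) for s in train])
nn_index = NearestNeighbors(n_neighbors=1).fit(train_feats)

scores = nn_index.kneighbors(np.array([set_feature(s) for s in test]))[0].ravel()
print(scores[:20].mean(), scores[20:].mean())    # anomalies should score higher
```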
|
Noise contamination affects the performance of orthogonal time frequency
space (OTFS) signals in real-world environments, making radar sensing
challenging. Our objective is to derive the range and velocity from the
delay-Doppler (DD) domain for radar sensing by using OTFS signaling. This work
introduces a two-stage approach to tackle this issue. In the first stage, we
use a convolutional neural network (CNN) model to classify the noise levels as
moderate or severe. Subsequently, if the noise level is severe, the OTFS
samples are denoised using a generative adversarial network (GAN). The proposed
approach achieves notable levels of accuracy in the classification of noisy
signals and mean absolute error (MAE) for the entire system even in low
signal-to-noise ratio (SNR) scenarios.
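
A skeletal PyTorch version of the two-stage idea under assumed shapes: a small CNN classifies a delay-Doppler grid's noise level, and samples classified as severe are passed through a generator network acting as the denoiser. The architectures, training procedure, and OTFS grid dimensions are placeholders, not the published design.

```python
import torch
import torch.nn as nn

class NoiseLevelCNN(nn.Module):
    """Binary classifier: moderate (0) vs severe (1) noise in a DD-domain grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
    def forward(self, x):
        return self.net(x)

class Denoiser(nn.Module):
    """Stand-in for the GAN generator used at inference time as a denoiser."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1))
    def forward(self, x):
        return self.net(x)

classifier, denoiser = NoiseLevelCNN(), Denoiser()

def process(dd_grid):
    """dd_grid: (batch, 2, delay_bins, doppler_bins) with real/imag channels."""
    severe = classifier(dd_grid).argmax(dim=1) == 1
    out = dd_grid.clone()
    if severe.any():
        out[severe] = denoiser(dd_grid[severe])
    return out

print(process(torch.randn(8, 2, 64, 16)).shape)
```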
|
Entity Matching is the task of deciding whether two entity descriptions refer
to the same real-world entity and is a central step in most data integration
pipelines. Many state-of-the-art entity matching methods rely on pre-trained
language models (PLMs) such as BERT or RoBERTa. Two major drawbacks of these
models for entity matching are that (i) the models require significant amounts
of task-specific training data and (ii) the fine-tuned models are not robust
concerning out-of-distribution entities. This paper investigates using
generative large language models (LLMs) as a less task-specific training
data-dependent and more robust alternative to PLM-based matchers. Our study
covers hosted and open-source LLMs, which can be run locally. We evaluate these
models in a zero-shot scenario and a scenario where task-specific training data
is available. We compare different prompt designs and the prompt sensitivity of
the models and show that there is no single best prompt; instead, the prompt
needs to be tuned for each model/dataset combination. We further investigate
(i) the selection of
in-context demonstrations, (ii) the generation of matching rules, as well as
(iii) fine-tuning a hosted LLM using the same pool of training data. Our
experiments show that the best LLMs require no or only a few training examples
to perform similarly to PLMs that were fine-tuned using thousands of examples.
LLM-based matchers further exhibit higher robustness to unseen entities. We
show that GPT4 can generate structured explanations for matching decisions. The
model can automatically identify potential causes of matching errors by
analyzing explanations of wrong decisions. We demonstrate that the model can
generate meaningful textual descriptions of the identified error classes, which
can help data engineers improve entity matching pipelines.
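
A minimal sketch of zero-shot LLM-based matching: build a prompt from two entity descriptions, send it to a model, and parse a yes/no answer. The call_llm function is a hypothetical placeholder for whichever hosted or local LLM API is used, and the prompt wording is illustrative only; as noted above, prompts need tuning per model and dataset.

```python
def build_prompt(entity_a: dict, entity_b: dict) -> str:
    """Serialize two entity descriptions into a yes/no matching question."""
    def serialize(e):
        return ", ".join(f"{k}: {v}" for k, v in e.items())
    return (
        "Do the following two entity descriptions refer to the same real-world entity?\n"
        f"Entity A: {serialize(entity_a)}\n"
        f"Entity B: {serialize(entity_b)}\n"
        "Answer with 'yes' or 'no'."
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; replace with an actual hosted or local LLM call."""
    raise NotImplementedError

def match(entity_a: dict, entity_b: dict) -> bool:
    answer = call_llm(build_prompt(entity_a, entity_b))
    return answer.strip().lower().startswith("yes")

prompt = build_prompt(
    {"title": "Canon EOS 2000D DSLR", "price": "399"},
    {"title": "Canon EOS Rebel T7 Camera", "price": "399.00"},
)
print(prompt)
```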
|
We observe the outcome of the discrete time noisy voter model at a single
vertex of a graph. We show that certain pairs of graphs can be distinguished by
the frequency of repetitions in the sequence of observations. We prove that
this statistic is asymptotically normal and that it distinguishes between
(asymptotically) almost all pairs of finite graphs. We conjecture that the
noisy voter model distinguishes between any two graphs other than stars.
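
A small simulation of one common discrete-time variant of the noisy voter model (synchronous updates: each vertex either adopts a uniformly random opinion with probability eps or copies a uniformly random neighbour), recording the repetition frequency of the observations at a single vertex. The exact dynamics and statistic studied in the paper may differ in details.

```python
import numpy as np

rng = np.random.default_rng(7)

def repetition_frequency(adj, observed=0, eps=0.1, steps=20_000):
    """Fraction of consecutive equal observations at one vertex."""
    n = len(adj)
    state = rng.integers(0, 2, size=n)
    obs = np.empty(steps, dtype=int)
    for t in range(steps):
        new_state = state.copy()
        for v in range(n):
            if rng.random() < eps:                    # noise: adopt a random opinion
                new_state[v] = rng.integers(0, 2)
            else:                                     # copy a uniformly random neighbour
                new_state[v] = state[rng.choice(adj[v])]
        state = new_state
        obs[t] = state[observed]
    return np.mean(obs[1:] == obs[:-1])

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}         # path graph on 4 vertices
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}         # star graph on 4 vertices
print(repetition_frequency(path), repetition_frequency(star))
```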
|
We calculate the proton lifetime in an SO(10) supersymmetric grand unified
theory [SUSY GUT] with U(2) family symmetry. This model fits the low energy
data, including the recent data for neutrino oscillations. We discuss the
predictions of this model for the proton lifetime in light of recent
SuperKamiokande results which significantly constrain the SUSY parameter space
of the model.
|
Grain boundary (GB) enthalpies in nanocrystalline (NC) $\mathrm
{Pd_{90}Au_{10}}$ are studied after preparation, thermal relaxation and plastic
deformation. By comparing results from atomistic computer simulations and
calorimetry, we show that increasing plastic deformation of equilibrated NC
$\mathrm {Pd_{90}Au_{10}}$ specimens causes an increase of the stored GB
enthalpy $\Delta \gamma$. We interpret this change of $\Delta \gamma$ as
stress-induced complexion transition from a low-energy to a high-energy GB-core
state. In fact, GBs not only act as mere sinks and sources of zero- and
one-dimensional defects or as migration barriers to the latter, but also
have the capability of storing deformation history through configurational
changes of their core structure and hence GB enthalpy. Such a scenario can be
understood as a continuous complexion transition under non-equilibrium
conditions, which is related to hysteresis effects under loading-unloading
conditions.
|
In this paper, a logo classification system based on the appearance of logo
images is proposed. The proposed classification system makes use of global
characteristics of logo images for classification. Color, texture, and shape of
a logo wholly describe the global characteristics of logo images. The various
combinations of these characteristics are used for classification: a single
feature, a fusion of two features, or a fusion of all three features at a time.
Further, the system categorizes a logo image as containing only text, only
symbols, or both symbols and text. The K-Nearest Neighbour (K-NN) classifier is
used for classification. Due to the lack of a color logo image dataset in the
literature, one is created, consisting of 5044 color logo
images. Finally, the performance of the classification system is evaluated
through accuracy, precision, recall and F-measure computed from the confusion
matrix. The experimental results show that the most promising results are
obtained for fusion of features.
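
A compact sketch of feature fusion with a K-NN classifier, using randomly generated vectors as stand-ins for the color, texture, and shape descriptors; the actual descriptors, dataset, and evaluation protocol of the paper are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(8)
n_logos, n_classes = 600, 3                  # text-only, symbol-only, text+symbol

# Stand-ins for the three global descriptors of each logo image.
labels = rng.integers(0, n_classes, n_logos)
color = rng.normal(labels[:, None], 1.0, (n_logos, 32))    # e.g., color histogram
texture = rng.normal(labels[:, None], 1.5, (n_logos, 16))  # e.g., texture features
shape = rng.normal(labels[:, None], 2.0, (n_logos, 8))     # e.g., shape features

def evaluate(features):
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    return accuracy_score(y_te, knn.predict(X_te))

print("color only    :", evaluate(color))
print("color+texture :", evaluate(np.hstack([color, texture])))
print("all fused     :", evaluate(np.hstack([color, texture, shape])))
```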
|
A search of more than 3,000 square degrees of high latitude sky by the Sloan
Digital Sky Survey has yielded 251 faint high-latitude carbon stars (FHLCs),
the large majority previously uncataloged. We present homogeneous spectroscopy,
photometry, and astrometry for the sample. The objects lie in the 15.6 < r <
20.8 range, and exhibit a wide variety of apparent photospheric temperatures,
ranging from spectral types near M to as early as F. Proper motion measurements
for 222 of the objects show that at least 50%, and quite probably more than
60%, of these objects are actually low luminosity dwarf carbon (dC) stars, in
agreement with a variety of recent, more limited investigations which show that
such objects are the numerically dominant type of star with C_2 in the
spectrum. This SDSS homogeneous sample of ~110 dC stars now constitutes 90% of
all known carbon dwarfs, and will grow by another factor of 2-3 by the
completion of the Survey. As the spectra of the dC and the faint halo giant C
stars are very similar (at least at spectral resolution of 1,000) despite a
difference of 10 mag in luminosity, it is imperative that simple luminosity
discriminants other than proper motion be developed. We use our enlarged sample
of FHLCs to examine a variety of possible luminosity criteria, including many
previously suggested, and find that, with certain important caveats, JHK
photometry may segregate dwarfs and giants.
|
The objective of this paper is the proof of a conjecture of Kontsevich on the
isomorphism between groups of polynomial symplectomorphisms and automorphisms
of the corresponding Weyl algebra in characteristic zero. The proof is based on
the study of topological properties of automorphism $\Ind$-varieties of the
so-called augmented and skew augmented versions of Poisson and Weyl algebras.
Approximation by tame automorphisms as well as a certain singularity analysis
procedure is utilized in the construction of the lifting of augmented
polynomial symplectomorphisms, after which specialization of the augmentation
parameter is performed in order to obtain the main result.
|
For articulatory-to-acoustic mapping using deep neural networks, typically
spectral and excitation parameters of vocoders have been used as the training
targets. However, vocoding often results in buzzy and muffled final speech
quality. Therefore, in this paper on ultrasound-based articulatory-to-acoustic
conversion, we use a flow-based neural vocoder (WaveGlow) pre-trained on a
large amount of English and Hungarian speech data. The inputs of the
convolutional neural network are ultrasound tongue images. The training target
is the 80-dimensional mel-spectrogram, which results in a finer detailed
spectral representation than the previously used 25-dimensional Mel-Generalized
Cepstrum. From the output of the ultrasound-to-mel-spectrogram prediction,
WaveGlow inference results in synthesized speech. We compare the proposed
WaveGlow-based system with a continuous vocoder which does not use strict
voiced/unvoiced decision when predicting F0. The results demonstrate that
during the articulatory-to-acoustic mapping experiments, the WaveGlow neural
vocoder produces significantly more natural synthesized speech than the
baseline system. Moreover, an advantage of WaveGlow is that F0 is included in
the mel-spectrogram representation, so it is not necessary to predict the
excitation separately.
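
A skeletal PyTorch sketch of the prediction stage under assumed input and output shapes: a small convolutional network maps one ultrasound tongue image to one 80-dimensional mel-spectrogram frame, and a pre-trained neural vocoder (represented here by a placeholder function) would then synthesize the waveform. The architecture and image size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UltrasoundToMel(nn.Module):
    """Map one ultrasound tongue image to an 80-dim mel-spectrogram frame."""
    def __init__(self, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_mels))
    def forward(self, x):                # x: (batch, 1, height, width)
        return self.net(x)

def vocoder_infer(mel_frames):
    """Placeholder for the pre-trained neural vocoder (e.g., WaveGlow) inference call."""
    raise NotImplementedError

model = UltrasoundToMel()
ultrasound = torch.randn(100, 1, 64, 128)    # 100 frames of an assumed 64x128 video
mel = model(ultrasound)                      # (100, 80) predicted mel-spectrogram
print(mel.shape)
```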
|
It is shown that interference effects between velocity and density of states,
which occur as electrons move along open orbits in the extended Brillouin zone,
result in a change of wave-function dimensionality at Magic Angle (MA)
directions of a magnetic field.
In particular, we demonstrate that these 1D -> 2D dimensional crossovers
result in the appearance of sharp minima in a resistivity component Rzz,
perpendicular to conducting layers, which explains the main qualitative
features of MA and Angular Magneto-Resistance Oscillations (AMRO) phenomena
observed in low-dimensional conductors (TMTSF)2X, (DMET-TSeF)2X, and
a-(BEDT-TTF)2MHg(SCN)4.
|
Histopathological image classification is an important task in medical image
analysis. Recent approaches generally rely on weakly supervised learning due to
the ease of acquiring case-level labels from pathology reports. However,
patch-level classification is preferable in applications where only a limited
number of cases are available or when local prediction accuracy is critical. On
the other hand, acquiring extensive datasets with localized labels for training
is not feasible. In this paper, we propose a semi-supervised patch-level
histopathological image classification model, named CLASS-M, that does not
require extensively labeled datasets. CLASS-M is formed by two main parts: a
contrastive learning module that uses separated Hematoxylin and Eosin images
generated through an adaptive stain separation process, and a module with
pseudo-labels using MixUp. We compare our model with other state-of-the-art
models on two clear cell renal cell carcinoma datasets. We demonstrate that our
CLASS-M model has the best performance on both datasets. Our code is available
at github.com/BzhangURU/Paper_CLASS-M/tree/main
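
A condensed PyTorch sketch of the two ingredients named above, under simplifying assumptions: an NT-Xent-style contrastive loss between embeddings of two views of the same patch (standing in for the separated Hematoxylin and Eosin images), and MixUp applied to unlabeled patches with pseudo-labels from the current model. The encoder, classifier, and hyperparameters are placeholders, not the CLASS-M implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss between two views (e.g., H and E channels) of the same patches."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2N, d)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def mixup_pseudo_label_loss(model, x_unlabeled, alpha=0.4):
    """MixUp on unlabeled patches using the model's own (detached) pseudo-labels."""
    with torch.no_grad():
        pseudo = F.softmax(model(x_unlabeled), dim=1)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x_unlabeled.size(0))
    x_mix = lam * x_unlabeled + (1 - lam) * x_unlabeled[perm]
    y_mix = lam * pseudo + (1 - lam) * pseudo[perm]
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()

# Toy check with a linear "model" on flattened patches.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
x = torch.randn(8, 3, 32, 32)
print(contrastive_loss(torch.randn(8, 16), torch.randn(8, 16)),
      mixup_pseudo_label_loss(model, x))
```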
|
We study current-driven skyrmion motion in uniaxial thin film
antiferromagnets in the presence of the Dzyaloshinskii-Moriya interactions and
in an external magnetic field. We phenomenologically include relaxation and
current-induced torques due to both spin-orbit coupling and spatially
inhomogeneous magnetic textures in the equation for the N\'eel vector of the
antiferromagnet. Using the collective coordinate approach we apply the theory
to a two-dimensional antiferromagnetic skyrmion and estimate the skyrmion
velocity under an applied DC electric current.
|
We report upon new results regarding the Lya output of galaxies, derived from
the Lyman alpha Reference Sample (LARS), focusing on Hubble Space Telescope
imaging. For 14 galaxies we present intensity images in Lya, Halpha, and UV,
and maps of Halpha/Hbeta, Lya equivalent width (EW), and Lya/Halpha. We present
Lya and UV light profiles and show they are well-fitted by S\'ersic profiles,
but Lya profiles show indices systematically lower than those of the UV (n
approx 1-2 instead of >~4). This reveals a general lack of the central
concentration in Lya that is ubiquitous in the UV. Photometric growth curves
increase more slowly for Lya than the FUV, showing that small apertures may
underestimate the EW. For most galaxies, however, flux and EW curves flatten by
radii ~10 kpc, suggesting that if placed at high-z, only a few of our galaxies
would suffer from large flux losses. We compute global properties of the sample
in large apertures, and show total luminosities to be independent of all other
quantities. Normalized Lya throughput, however, shows significant correlations:
escape is found to be higher in galaxies of lower star formation rate, dust
content, mass, and several quantities that suggest harder ionizing continuum
and lower metallicity. Eight galaxies could be selected as high-z Lya emitters,
based upon their luminosity and EW. We discuss the results in the context of
high-z Lya and UV samples. A few galaxies have EWs above 50 AA, and one shows
f_escLya of 80%; such objects have not previously been reported at low-z.
|
In the standard model of magnetic reconnection, both ions and electrons couple
to the newly reconnected magnetic field lines and are ejected away from the
reconnection diffusion region in the form of bidirectional bursty ion and
electron jets. Recent observations suggest a new model: electron-only magnetic
reconnection, without ion coupling, in electron-scale current sheets. Based on
data from the Magnetospheric Multiscale (MMS) mission, we observe an elongated
inner electron diffusion region (EDR) at least 40 di away from the X line at
the terrestrial magnetopause, implying that the extension of the EDR is much
longer than predicted by theory and simulations. This inner EDR is embedded in
an ion-scale current sheet (with a width of 4 di, where di is the ion inertial
length). However, this ongoing magnetic reconnection was not accompanied by a
bursty ion outflow, implying the presence of electron-only reconnection in an
ion-scale current sheet. Our observations present a new challenge for
understanding both the standard magnetic reconnection model and the
electron-only reconnection model in electron-scale current sheets.
|
Inclusive cross sections $\sigma^A=Ed^3{\sigma(X,P_t^2)/d^3p}$ of antiproton
and negative pion production on Be, Al, Cu and Ta targets hit by 10 GeV protons
were measured at the laboratory angles of 10.5$^{\circ}$ and 59$^{\circ}$.
Antiproton cross sections were obtained in both kinematically allowed and
kinematically forbidden regions for antiproton production on a free nucleon.
The antiproton cross section ratio as a function of the longitudinal variable
$X$ exhibits three separate plateaus, which gives evidence for the existence of
compact baryon configurations in nuclei, i.e., small-distance scaled objects of
nuclear structure. The comparability of the measured cross section ratios with
those obtained in inclusive electron scattering off nuclei suggests weak
antiproton absorption in nuclei. The observed behavior of the cross section ratios
is interpreted in the framework of a model considering the hadron production as
a fragmentation of quarks (antiquarks) into hadrons. It has been established
that the antiproton formation length in nuclear matter can reach the magnitude
of 4.5 fm.
|
We present a method for EMG-driven teleoperation of non-anthropomorphic robot
hands. EMG sensors are appealing as a wearable, inexpensive, and unobtrusive
way to gather information about the teleoperator's hand pose. However, mapping
from EMG signals to the pose space of a non-anthropomorphic hand presents
multiple challenges. We present a method that first projects from forearm EMG
into a subspace relevant to teleoperation. To increase robustness, we use a
model which combines continuous and discrete predictors along different
dimensions of this subspace. We then project from the teleoperation subspace
into the pose space of the robot hand. Our method is effective and intuitive,
as it enables novice users to teleoperate pick and place tasks faster and more
robustly than state-of-the-art EMG teleoperation methods when applied to a
non-anthropomorphic, multi-DOF robot hand.
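
A minimal sketch of the two projection steps with synthetic data and linear maps learned by ridge regression; the dimensions, the data, and the use of purely linear projections are assumptions (the method described above additionally combines continuous and discrete predictors along the subspace dimensions).

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(9)

n_samples, n_emg, n_subspace, n_robot_dof = 500, 8, 3, 5

# Synthetic training data: EMG features, teleoperation-subspace coordinates,
# and corresponding robot hand joint poses.
emg = rng.normal(size=(n_samples, n_emg))
true_W = rng.normal(size=(n_emg, n_subspace))
subspace = emg @ true_W + 0.05 * rng.normal(size=(n_samples, n_subspace))
true_M = rng.normal(size=(n_subspace, n_robot_dof))
robot_pose = subspace @ true_M

# Stage 1: forearm EMG -> teleoperation subspace.
emg_to_sub = Ridge(alpha=1.0).fit(emg, subspace)
# Stage 2: teleoperation subspace -> robot hand pose space.
sub_to_pose = Ridge(alpha=1.0).fit(subspace, robot_pose)

def teleop_command(emg_features):
    coords = emg_to_sub.predict(emg_features.reshape(1, -1))
    return sub_to_pose.predict(coords).ravel()

print(teleop_command(emg[0]))
```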
|