Isolated low-mass stars are formed in dense cores of molecular clouds. In the
standard picture, the cores are envisioned to condense out of strongly
magnetized clouds through ambipolar diffusion. Most previous calculations based
on this scenario are limited to axisymmetric cloud evolution leading to a
single core, which collapses to form an isolated star or stellar system at the
center. These calculations are here extended to the nonaxisymmetric case under
the thin-disk approximation, which allows a detailed investigation of the
process of fragmentation, fundamental to binary, multiple system, and cluster
formation. We have shown previously that initially axisymmetric, magnetically
subcritical clouds with an $m=2$ density perturbation of modest fractional
amplitude ($\sim 5\%$) can develop highly elongated bars, which facilitate
binary and multiple system formation. In this paper, we show that in the
presence of higher order ($m\ge 3$) perturbations of similar amplitude such
clouds are capable of breaking up into a set of discrete dense cores. These
multiple cores are magnetically supercritical. They are expected to collapse
into single stars or stellar systems individually and, collectively, to form a
small stellar group. Our calculations demonstrate that the standard scenario
for single star formation involving magnetically subcritical clouds and
ambipolar diffusion can readily produce more than one star, provided that the
cloud mass is well above the Jeans limit and relatively uniformly distributed.
The fragments develop in the central part of the cloud, after the region has
become magnetically supercritical but before rapid collapse sets in.
Fragmentation is enhanced by the flattening of the mass distribution along the
field lines and by the magnetic tension force.
|
A treatment regime is a deterministic function that dictates personalized
treatment based on patients' individual prognostic information. There is
fast-growing interest in finding optimal treatment regimes to maximize the expected
long-term clinical outcomes of patients for complex diseases, such as cancer
and AIDS. For many clinical studies with survival time as a primary endpoint, a
main goal is to maximize patients' survival probabilities given treatments. In
this article, we first propose two nonparametric estimators for the survival
function of patients following a given treatment regime. We then estimate the
optimal treatment regime via a value-based search algorithm within a set of
treatment regimes indexed by parameters. The
asymptotic properties of the proposed estimators for survival probabilities
under derived optimal treatment regimes are established under suitable
regularity conditions. Simulations are conducted to evaluate the numerical
performance of the proposed estimators under various scenarios. An application
to data from an AIDS clinical trial is also given to illustrate the methods.
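As a toy illustration of the value-based search (assuming a randomized trial
with known propensity 1/2, no censoring, and hypothetical regimes of the form
d_eta(x) = 1{x > eta}; the estimators in the paper are more general):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0, 1, n)                  # prognostic covariate
a = rng.integers(0, 2, n)                 # randomized treatment, P(A=1) = 1/2
rate = np.where(a == (x > 0.4), 0.5, 1.0) # toy model: treatment helps iff x > 0.4
t_surv = rng.exponential(1.0 / rate)      # survival times (no censoring)

def value(eta, t0=1.0):
    """IPW estimate of S(t0) for patients following d_eta(x) = 1{x > eta}."""
    d = (x > eta).astype(int)
    w = (a == d) / 0.5                    # inverse propensity weights
    return np.sum(w * (t_surv > t0)) / np.sum(w)

grid = np.linspace(0, 1, 101)
eta_hat = grid[np.argmax([value(e) for e in grid])]   # value-based search
```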
|
Topological defects are ubiquitous in condensed-matter physics but only
hypothetical in the early universe. In spite of this, even indirect evidence
for one of these cosmic objects would revolutionize our vision of the cosmos.
We give here an introduction to the subject of cosmic topological defects and
their possible observable signatures. Beginning with a review of the basics of
general defect formation and evolution, we then focus mainly on two topics in
some detail: conducting strings and vorton formation, and some specific
imprints in the cosmic microwave background radiation from simulated cosmic
strings.
|
The signature transform is a 'universal nonlinearity' on the space of
continuous vector-valued paths, and has received attention for use in machine
learning on time series. However, real-world temporal data is typically
observed at discrete points in time, and must first be transformed into a
continuous path before signature techniques can be applied. We make this step
explicit by characterising it as an imputation problem, and empirically assess
the impact of various imputation strategies when applying signature-based
neural nets to irregular time series data. For one of these strategies,
Gaussian process (GP) adapters, we propose an extension~(GP-PoM) that makes
uncertainty information directly available to the subsequent classifier while
at the same time preventing costly Monte-Carlo (MC) sampling. In our
experiments, we find that the choice of imputation drastically affects shallow
signature models, whereas deeper architectures are more robust. Next, we
observe that uncertainty-aware predictions (based on GP-PoM or indicator
imputations) are beneficial for predictive performance, even compared to the
uncertainty-aware training of conventional GP adapters. In conclusion, we have
demonstrated that the path construction is indeed crucial for signature models
and that our proposed strategy leads to competitive performance in general,
while improving robustness of signature models in particular.
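A minimal sketch of the path-construction step: irregular observations are
linearly interpolated into a time-augmented continuous path whose depth-2
signature is computed segment by segment (the depth, interpolation scheme and
time augmentation are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def signature_level2(path):
    """Depth-2 signature of a piecewise-linear path of shape (N+1, d)."""
    inc = np.diff(path, axis=0)             # segment increments
    s1 = inc.sum(axis=0)                    # level 1: total increment
    start = path[:-1] - path[0]             # displacement at segment starts
    s2 = (np.einsum('ki,kj->ij', start, inc)
          + 0.5 * np.einsum('ki,kj->ij', inc, inc))  # level 2: iterated integrals
    return s1, s2

# irregularly sampled series -> continuous path via linear interpolation
t_obs = np.array([0.0, 0.3, 1.1, 2.0])
x_obs = np.array([0.0, 1.0, -0.5, 0.7])
t_grid = np.linspace(0.0, 2.0, 50)
path = np.stack([t_grid, np.interp(t_grid, t_obs, x_obs)], axis=1)  # time-augmented
s1, s2 = signature_level2(path)
```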
|
Crowd-shipping is a promising shared mobility service that involves the
delivery of goods using non-professional shippers. This service is mainly
intended to reduce congestion and pollution in city centers but, as some
authors observe, in most crowd-shipping initiatives the crowd relies on private
motorized vehicles, and hence the environmental benefits could be small, if not
negative. Conversely, a crowd-shipping service relying on public transport
should maximize the environmental benefits. Motivated by this observation, in
this study we assess the potential of crowd-shipping based on metro commuters
in the city of Brescia, Italy. Our contribution is twofold. First, we analyze
the results of a survey conducted among metro users to assess their willingness
to act as crowd-shippers. The main result is that most young commuters and
retirees are willing to be crowd-shippers even for a null reward. Second, we
assess the potential economic impact of using metro-based crowd-shipping
coupled with a traditional home delivery service. To this end, we formulate a
variant of the VRP model where the customers closest to the metro stations may
be served either by a conventional vehicle or by a crowd-shipper. The model is
implemented in Python with the Gurobi solver. A computational study based on the
Brescia case is performed to get insights on the economic advantages that a
metro-based crowd delivery option may have for a retailing company.
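A sketch of the flavor of the model, collapsed to a single assignment decision
between the conventional vehicle and crowd-shippers (the actual model is a
routing problem; the instance data, flat reward and capacity below are
hypothetical):

```python
import gurobipy as gp
from gurobipy import GRB

customers = range(6)
near_metro = {0: True, 1: True, 2: False, 3: True, 4: False, 5: False}
van_cost = {0: 5.0, 1: 7.0, 2: 4.0, 3: 6.0, 4: 8.0, 5: 3.0}  # per-customer proxy
reward = 2.0      # flat payment per crowd-shipped parcel
capacity = 3      # available crowd-shippers

m = gp.Model("metro_crowdshipping")
y = m.addVars(customers, vtype=GRB.BINARY, name="crowd")  # 1 = crowd-shipped
m.addConstrs(y[c] == 0 for c in customers if not near_metro[c])
m.addConstr(y.sum() <= capacity)
m.setObjective(gp.quicksum(reward * y[c] + van_cost[c] * (1 - y[c])
                           for c in customers), GRB.MINIMIZE)
m.optimize()
```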
|
It is shown that, at variance with previous analyses, the MIT bag model can
explain the available data on the Sivers function and satisfies the Burkardt
Sum Rule to a few percent accuracy. The agreement is similar to the one
recently found in the constituent quark model. Therefore, these two model
calculations of the Sivers function are in agreement with the present
experimental and theoretical wisdom.
|
Anatomical connectivity imposes strong constraints on brain function, but
there is no general agreement about principles that govern its organization.
Based on extensive quantitative data we tested the power of three models to
predict connections of the primate cerebral cortex: architectonic similarity
(structural model), spatial proximity (distance model) and thickness similarity
(thickness model). Architectonic similarity showed the strongest and most
consistent influence on connection features. This parameter was strongly
associated with the presence or absence of inter-areal connections and, when
integrated with spatial distance, allowed the existence of projections to be
predicted with very high accuracy. Moreover, architectonic similarity was
strongly related to the laminar pattern of projection origins and to the
absolute number of cortical connections of an area. By contrast, cortical
thickness similarity and distance were not systematically related to connection
features. These findings suggest that cortical architecture provides a general
organizing principle for connections in the primate brain.
|
The production of final state photons in deep inelastic scattering originates
from photon radiation off leptons or quarks involved in the scattering process.
Photon radiation off quarks involves a contribution from the quark-to-photon
fragmentation function, corresponding to the non-perturbative transition of a
hadronic jet into a single, highly energetic photon accompanied by some limited
hadronic activity. Up to now, this fragmentation function has been measured only in
electron-positron annihilation at LEP. We demonstrate by a dedicated
parton-level calculation that a competitive measurement of the quark-to-photon
fragmentation function can be obtained in deep inelastic scattering at HERA.
Such a measurement can be obtained by studying the photon energy spectra in
$\gamma + (0+1)$-jet events, where $\gamma$ denotes a hadronic jet containing a
highly energetic photon (the photon jet). Isolated photons are then defined
from the photon jet by imposing a minimal photon energy fraction. For this
so-called democratic clustering approach, we study the cross sections for
isolated $\gamma + (0+1)$-jet and $\gamma + (1+1)$-jet production as well as
for the inclusive isolated photon production in deep inelastic scattering.
|
To meet the ever-growing need for performance in silicon devices, SoC
providers have been increasingly relying on software-hardware cooperation. By
controlling hardware resources such as power or clock management from the
software, developers gain the ability to build more flexible and
power-efficient applications. Despite the benefits, these hardware components are now
exposed to software code and can potentially be misused as open doors to
jeopardize trusted environments, perform privilege escalation or steal
cryptographic secrets. In this work, we introduce SideLine, a novel
side-channel vector based on delay-line components widely implemented in
high-end SoCs. After providing a detailed method on how to access and convert
delay-line data into power consumption information, we demonstrate that these
entities can be used to perform remote power side-channel attacks. We report
experiments carried out on two SoCs from distinct vendors and we recount
several core-vs-core attack scenarios in which an adversary process located in
one processor core aims at eavesdropping on the activity of a victim process
located in another core. For each scenario, we demonstrate the adversary's
ability to fully recover the secret key of an OpenSSL AES implementation running
in the victim core. Even more worryingly, we show that these attacks remain
practicable if the victim or the attacker program runs on top of an operating system.
|
We present a spectral decomposition technique and its applications to a
sample of galaxies hosting large-scale counter-rotating stellar disks. Our
spectral decomposition technique allows us to separate and simultaneously
measure the kinematics and the stellar-population properties of the two
counter-rotating disks in the observed galaxies. Our results provide new
insights into the epoch and mechanism of formation of these galaxies.
|
In the preceding paper we constructed an infinite exact sequence à la
Villamayor-Zelinsky for a symmetric finite tensor category. It consists of
cohomology groups evaluated at three types of coefficients which repeat
periodically. In the present paper we interpret the middle cohomology group in
the second level of the sequence. We introduce the notion of coring categories
and show that this middle cohomology group is isomorphic to the
group of Azumaya quasi coring categories. This result is a categorical
generalization of the classical Crossed Product Theorem, which relates the
relative Brauer group and the second Galois cohomology group with respect to a
Galois field extension. We construct the colimit over symmetric finite tensor
categories of the relative groups of Azumaya quasi coring categories and the
full group of Azumaya quasi coring categories over $vec$. We prove that the
latter two groups are isomorphic.
|
We show that models of opinion formation and dissemination in a community of
individuals can be framed within stochastic thermodynamics from which we can
build a nonequilibrium thermodynamics of opinion dynamics. This is accomplished
by decomposing the original transition rate that defines an opinion model into
two or more transition rates, each representing the contact with heat
reservoirs at different temperatures, and postulating an energy function. As
the temperatures are distinct, heat fluxes are present even at the stationary
state and linked to the production of entropy, the fundamental quantity that
characterizes nonequilibrium states. We apply the present framework to a
generic vote model, including the majority-vote model on a square lattice and on
a cubic lattice. The fluxes and the rate of entropy production are calculated
by numerical simulation and by the use of a pair approximation.
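A two-state sketch of the rate decomposition (the models in the paper live on
lattices; here a single spin with energies 0 and eps in contact with two baths
shows how distinct temperatures produce stationary heat fluxes and positive
entropy production):

```python
import numpy as np

eps, T = 1.0, [1.0, 2.0]
w01 = [np.exp(-eps / (2 * Tk)) for Tk in T]  # 0 -> 1 via bath k (detailed balance per bath)
w10 = [np.exp(+eps / (2 * Tk)) for Tk in T]  # 1 -> 0 via bath k

W01, W10 = sum(w01), sum(w10)                # total rates define the opinion model
p0 = W10 / (W01 + W10)                       # stationary distribution
p1 = 1.0 - p0

# entropy production rate: sum over baths of (net flux) * log(forward/backward)
sigma = sum((p0 * w01[k] - p1 * w10[k])
            * np.log((p0 * w01[k]) / (p1 * w10[k])) for k in range(2))
J = [eps * (p0 * w01[k] - p1 * w10[k]) for k in range(2)]  # heat flux from bath k
# sigma > 0 and J[0] = -J[1] != 0 whenever T[0] != T[1]
```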
|
A stochastic approach to time-dependent density functional theory (TDDFT) is
developed for computing the absorption cross section and the random phase
approximation (RPA) correlation energy. The core idea of the approach involves
time-propagation of a small set of stochastic orbitals which are first
projected on the occupied space and then propagated in time according to the
time-dependent Kohn-Sham equations. The evolving electron density is exactly
represented when the number of random orbitals is infinite, but even a small
number ($\sim$16) of such orbitals is enough to obtain meaningful results for the
absorption spectrum and the RPA correlation energy per electron. We implement
the approach for silicon nanocrystals (NCs) using real-space grids and find
that the algorithm exhibits sublinear scaling of both computational time
and memory.
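The statistics behind the stochastic density representation can be checked with
a toy sketch (random orthonormal vectors stand in for occupied Kohn-Sham
orbitals; no time propagation is performed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_occ, n_chi = 200, 10, 16

phi, _ = np.linalg.qr(rng.standard_normal((n_grid, n_occ)))  # orthonormal "orbitals"
rho_exact = np.sum(phi**2, axis=1)      # rho(r) = sum_i |phi_i(r)|^2

chi = rng.choice([-1.0, 1.0], size=(n_grid, n_chi))  # stochastic orbitals
xi = phi @ (phi.T @ chi)                # projection onto the occupied space
rho_stoch = np.mean(xi**2, axis=1)      # E[|xi(r)|^2] equals rho(r)

print(np.linalg.norm(rho_stoch - rho_exact) / np.linalg.norm(rho_exact))
```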
|
We discuss three different formulations of the equivariant Iwasawa main
conjecture attached to an extension K/k of totally real fields with Galois
group G, where k is a number field and G is a p-adic Lie group of dimension 1
for an odd prime p. All these formulations are equivalent and hold if Iwasawa's
\mu-invariant vanishes. Under mild hypotheses, we use this to prove non-abelian
generalizations of Brumer's conjecture, the Brumer-Stark conjecture and a
strong version of the Coates-Sinnott conjecture provided that \mu = 0.
|
The excited states of $N=44$ $^{74}$Zn were investigated via $\gamma$-ray
spectroscopy following $^{74}$Cu $\beta$ decay. By exploiting $\gamma$-$\gamma$
angular correlation analysis, the $2_2^+$, $3_1^+$, $0_2^+$ and $2_3^+$ states
in $^{74}$Zn were firmly established. The $\gamma$-ray branching and $E2/M1$
mixing ratios for transitions de-exciting the $2_2^+$, $3_1^+$ and $2_3^+$
states were measured, allowing for the extraction of relative $B(E2)$ values.
In particular, the $2_3^+ \to 0_2^+$ and $2_3^+ \to 4_1^+$ transitions were
observed for the first time. The results show excellent agreement with new
microscopic large-scale shell-model calculations, and are discussed in terms of
underlying shapes, as well as the role of neutron excitations across the $N=40$
gap. Enhanced axial shape asymmetry (triaxiality) is suggested to characterize
$^{74}$Zn in its ground state. Furthermore, an excited $K=0$ band with a
significantly larger softness in its shape is identified. A shore of the $N=40$
``island of inversion'' appears to manifest above $Z=26$, previously thought
to be its northern limit in the chart of the nuclides.
|
Most of the celestial gamma rays detected by the Large Area Telescope (LAT)
aboard the Fermi Gamma-ray Space Telescope originate from the interstellar
medium when energetic cosmic rays interact with interstellar nucleons and
photons. Conventional point and extended source studies rely on the modeling of
this diffuse emission for accurate characterization. We describe here the
development of the Galactic Interstellar Emission Model (GIEM) that is the
standard adopted by the LAT Collaboration and is publicly available. The model
is based on a linear combination of maps for interstellar gas column density in
Galactocentric annuli and for the inverse Compton emission produced in the
Galaxy. We also include in the GIEM large-scale structures like Loop I and the
Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray
proton density decreases with Galactocentric distance beyond 5 kpc from the
Galactic Center. The measurements also suggest a softening of the proton
spectrum with Galactocentric distance. We observe that the Fermi bubbles have
boundaries whose shape resembles a catenary at latitudes below 20 degrees, and
we observe enhanced emission toward their base, extending in the north and
south Galactic directions and located within 4 degrees of the Galactic Center.
|
We observe magnetic domain structures of MgO/CoFeB with a perpendicular
magnetic easy axis under an electric field. The domain structure shows a maze
pattern whose isotropic period depends on the electric field. We find that the
electric-field modulation of the period is explained by considering the
electric-field modulation of the exchange stiffness constant in addition to the
known magnetic anisotropy modulation.
|
We present a UV completion of the twin Higgs idea in the framework of
holographic composite Higgs. The SM contribution to the Higgs potential is
effectively cut off by the SM-singlet mirror partners at the sigma-model scale
f, naturally allowing for m_{KK} beyond the LHC reach. The bulk symmetry is
SU(7) X SO(8), broken on the IR brane into SU(7) X SO(7) and on the UV brane
into (SU(3) X SU(2) X U(1))^{SM} X (SU(3) X SU(2) X U(1))^{mirror} X Z2. The
field content on the UV brane is the SM, extended by a sector transforming
under the mirror gauge group, with the Z2 exchanging the two sectors. An
additional Z2 breaking term is holographically generated to reproduce the Higgs
mass and VEV, with a mild O(10%) tuning. This model leaves no trace at the LHC,
but can be probed by precision Higgs measurements at future lepton colliders,
and by direct searches for KK excitations at a 100 TeV collider.
|
Ion traps are promising architectures for implementing scalable quantum
computing, but they suffer from excessive "anomalous" heating that prevents
their full potential from being realized. This heating, which is orders of
magnitude larger than that expected from Johnson-Nyquist noise, results in ion
motion that leads to decoherence and reduced fidelity in quantum logic gates.
The exact origin of anomalous heating is an open question, but experiments
point to adsorbates on trap electrodes as a likely source. Many different
models of anomalous heating have been proposed, but these models have yet to
pinpoint the atomistic origin of the experimentally observed $1/\omega$
electric-field noise scaling in ion traps at frequencies between
0.1 and 10 MHz. In this work, we perform the first computational study of the ion
trap electric field noise produced by the motions of multiple monolayers of
adsorbates described by first principles potentials. In so doing, we show that
correlated adsorbate motions play a definitive role in producing $1/\omega$
noise and identify candidate collective adsorbate motions, including
translational and rotational motions of adsorbate patches and multilayer
exchanges, that give rise to $1/\omega$ scaling at the MHz frequencies
typically employed in ion traps. These results demonstrate that multi-adsorbate
systems, even simple ones, can give rise to a set of activated motions that can
produce the $1/\omega$ noise observed in ion traps and that collective, rather
than individual, adsorbate motions are much more likely to give rise to
low-frequency heating.
|
The vacancy concentration at finite temperatures is studied for a series of
(CoCrFeMn)$_{1-x_\mathrm{Ni}}$Ni$_{x_\mathrm{Ni}}$ alloys by grand-canonical
Monte-Carlo (MC) simulations. The vacancy formation energies are calculated
from a classical interatomic potential and exhibit a distribution due to the
different chemical environments of the vacated sites. In dilute alloys, this
distribution features multiple discrete peaks, while concentrated alloys
exhibit an unimodal distribution as there are many different chemical
environments of similar vacancy formation energy. MC simulations using a
numerically efficient bond-counting model confirm that the vacancy
concentration even in concentrated alloys may be calculated by the established
Maxwell-Boltzmann equation weighted by the given distribution of formation
energies. We calculate the variation of the vacancy concentration as a function
of Ni content in (CoCrFeMn)$_{1-x_\mathrm{Ni}}$Ni$_{x_\mathrm{Ni}}$ and
demonstrate the excellent agreement between the thermodynamic model and the
results of the grand-canonical Monte-Carlo simulations.
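The weighted Maxwell-Boltzmann estimate amounts to averaging the Boltzmann
factor over the sampled formation energies; a sketch (with a hypothetical
Gaussian distribution in place of potential-derived energies):

```python
import numpy as np

kB, T = 8.617e-5, 1000.0        # Boltzmann constant (eV/K), temperature (K)
rng = np.random.default_rng(2)
E_f = rng.normal(1.8, 0.15, size=10_000)  # sampled vacancy formation energies (eV)

# vacancy concentration averaged over the formation-energy distribution
c_v = np.mean(np.exp(-E_f / (kB * T)))
```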
|
Understanding how multicellular organisms reliably orchestrate cell-fate
decisions is a central challenge in developmental biology. This is particularly
intriguing in early mammalian development, where early cell-lineage
differentiation arises from processes that initially appear cell-autonomous but
later materialize reliably at the tissue level. In this study, we develop a
multi-scale, spatial-stochastic simulator of mouse embryogenesis, focusing on
inner-cell mass (ICM) differentiation in the blastocyst stage. Our model
features biophysically realistic regulatory interactions and accounts for the
innate stochasticity of the biological processes driving cell-fate decisions at
the cellular scale. We advance event-driven simulation techniques to
incorporate relevant tissue-scale phenomena and integrate them with
Simulation-Based Inference (SBI), building on a recent AI-based parameter
learning method: the Sequential Neural Posterior Estimation (SNPE) algorithm.
Using this framework, we carry out a large-scale Bayesian inferential analysis
and determine parameter sets that reproduce the experimentally observed system
behavior. We elucidate how autocrine and paracrine feedbacks via the signaling
protein FGF4 orchestrate the inherently stochastic expression of
fate-specifying genes at the cellular level into reproducible ICM patterning at
the tissue scale. This mechanism is remarkably independent of the system size.
FGF4 not only ensures correct cell lineage ratios in the ICM, but also enhances
its resilience to perturbations. Intriguingly, we find that high variability in
intracellular initial conditions does not compromise, but rather can enhance
the accuracy and precision of tissue-level dynamics. Our work provides a
genuinely spatial-stochastic description of the biochemical processes driving
ICM differentiation and the necessary conditions under which it can proceed
robustly.
|
In this paper we study the class of backward doubly stochastic differential
equations (BDSDEs, for short) whose terminal value depends on the history of
a forward diffusion. We first establish a probabilistic representation for the
spatial gradient of the stochastic viscosity solution to a quasilinear
parabolic SPDE in the spirit of the Feynman-Kac formula, without using the
derivatives of the coefficients of the corresponding BDSDE. This representation
then yields a closed-form expression for the martingale integrand of the BDSDE,
under only a standard Lipschitz condition on the coefficients.
|
This paper deals with the local asymptotic structure, in the sense of Le
Cam's asymptotic theory of statistical experiments, of the signal detection
problem in high dimension. More precisely, we consider the problem of testing
the null hypothesis of sphericity of a high-dimensional covariance matrix
against an alternative of (unspecified) multiple symmetry-breaking directions
(\textit{multispiked} alternatives). Simple analytical expressions for the
asymptotic power envelope and the asymptotic powers of previously proposed
tests are derived. These asymptotic powers are shown to lie very substantially
below the envelope, at least for relatively small values of the number of
symmetry-breaking directions under the alternative. In contrast, the asymptotic
power of the likelihood ratio test based on the eigenvalues of the sample
covariance matrix is shown to be close to that envelope. These results extend
to the case of multispiked alternatives the findings of an earlier study
(Onatski, Moreira and Hallin, 2011) of the single-spiked case. The methods we
are using here, however, are entirely new, as the Laplace approximations
considered in the single-spiked context do not extend to the multispiked case.
|
I discuss the possibility of differentiating between popular models for
gamma-ray bursts by using multiwavelength observations to constrain the
characteristics of their host galaxies, in particular the age of the stellar
populations.
|
Two notions of "having a derivative of logarithmic order" have been studied.
They come from the study of regularity of flows and renormalized solutions for
the transport and continuity equation associated to weakly differentiable
drifts.
|
A method is developed that allows analysis of quantum Monte Carlo simulations
to identify errors in trial wave functions. The purpose of this method is to
allow for the systematic improvement of variational wave functions by
identifying degrees of freedom that are not well-described by an initial trial
state. We provide proof-of-concept implementations of this method by
identifying the need for a Jastrow correlation factor, and implementing a
selected multi-determinant wave function algorithm for small dimers that
systematically decreases the variational energy. Selection of the two-particle
excitations is done using quantum Monte Carlo within the presence of a Jastrow
correlation factor, and without the need to explicitly construct the
determinants. We also show how this technique can be used to design compact
wave functions for transition metal systems. This method may provide a route to
analyze and systematically improve descriptions of complex quantum systems in a
scalable way.
|
We compute the three-loop corrections to the helicity amplitudes for
$q\bar{q}\to Q\bar{Q}$ scattering in massless QCD. In the Lorentz decomposition
of the scattering amplitude we avoid evanescent Lorentz structures and map the
corresponding form factors directly to the physical helicity amplitudes. We
reduce the amplitudes to master integrals and express them in terms of harmonic
polylogarithms. The renormalised amplitudes exhibit infrared divergences of
dipole and quadrupole type, as predicted by previous work on the infrared
structure of multileg scattering amplitudes. We derive the finite remainders
and present explicit results for all relevant partonic channels, both for equal
and different quark flavours.
|
Let $\Omega \subset \mathbb{R}^d$ be a bounded open connected set with
Lipschitz boundary. Let $A^N$ and $A^D$ be the Neumann Stokes operator and
Dirichlet Stokes operator on $\Omega$, respectively. Further let $\lambda_1^N
\leq \lambda_2^N \leq \ldots$ and $\lambda_1^D \leq \lambda_2^D \leq \ldots$ be
the eigenvalues of $A^N$ and $A^D$ repeated with multiplicity, respectively.
Then \[ \lambda_{n+1}^N < \lambda_n^D \] for all $n \in \mathbb{N}$.
|
We study the steady state fluctuations of an Edwards-Wilkinson type surface
with the substrate taken to be a sphere. We show that the height fluctuations
on circles at a given latitude have the effective action of a perfect Gaussian
$1/f$ noise, just as in the case of fixed radius circles on an infinite planar
substrate. The effective surface tension, which is the overall coefficient of
the action, does not depend on the latitude angle of the circles.
|
We present a comprehensive spectral and temporal study of the black hole
X-ray transient MAXI J1820+070 during its outbursts in 2018 using Swift/XRT,
NICER, NuSTAR and AstroSat observations. The Swift/XRT and NICER spectral study
shows a plateau in the light curve with spectral softening (hardness changes
from $\sim$ $2.5$ to $2$) followed by a gradual decline without spectral
softening during the first outburst. Also, spectral modelling suggests that the
first outburst is in the low/hard state throughout with a truncated disk
whereas the thermal disk emission dominates during the second outburst. During
the entire outburst, strong reflection signature (reflection fraction varies
between $\sim$ $0.38 - 3.8$) is observed in the simultaneous wideband
(NICER-NuSTAR, XRT-NuSTAR, AstroSat) data due to the presence of a dynamically
evolving corona. The NICER timing analysis shows Quasi-periodic Oscillation
(QPO) signatures and the characteristic frequency increases (decreases) in the
plateau (decline) phase with time during the first outburst. We suggest that
the reduction of the electron cooling timescale in the corona due to spectral
softening, together with resonant oscillation at the local dynamical timescale,
may explain this behavior of the source during the outburst. Also, we propose
a possible scenario of outburst triggering and the associated accretion
geometry of the source.
|
In this short note we present a new approach to non-classical correlations
that is based on the compression rates for bit strings generated by Alice and
Bob. We use the normalised compression distance introduced by Cilibrasi and Vitanyi
to derive information-theoretic inequalities that must be obeyed by classically
correlated bit strings and that are violated by PR-boxes. We speculate about a
violation of our inequalities by quantum mechanical correlations.
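For reference, the normalised compression distance
NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) can be approximated
with any real-world compressor; a sketch using zlib:

```python
import zlib

def C(b: bytes) -> int:
    return len(zlib.compress(b, 9))    # compressed length as a complexity proxy

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = bytes([0, 1] * 500)
b = bytes([0, 1] * 500)
print(ncd(a, b))                       # near 0 for strongly correlated strings
```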
|
In this paper a novel calculus system is established based on the concept of
'werden'. The basis of the logical self-contradiction of the current theories
of calculus is shown, and mistakes and defects in their structure and meaning
are exposed. A new quantity-figure model serving as the premise of mathematics
is formed after correcting the definitions of the real number and the point.
Basic concepts such as the derivative, the differential, the primitive function
and the integral are redefined, and the theory of calculus is re-established.
Historical verification of the theories of calculus demonstrates that the
Newton-Leibniz theory returns in the form of the novel theory established in
this paper.
|
The electroweak phase transition in the Two-Higgs-Doublet Model is
investigated. The Gibbs potential at finite temperature is computed taking into
account the one-loop plus ring-diagram contributions. The strong first-order phase
transition satisfying Sakharov's baryogenesis conditions is determined for the
values of the scalar field masses allowed by experimental data. The relation
between the model parameters that ensures a first-order phase transition is
derived. It is shown that a sequence of phase transitions is also possible.
A comparison with the results of other authors is given.
|
Chandra ACIS-S observations of the galaxy cluster A3112 feature the presence
of an excess of X-ray emission above the contribution from the diffuse hot gas,
which can be equally well modeled with an additional non-thermal power-law
model or with a low-temperature thermal model of low metal abundance. We show
that the excess emission cannot be due to uncertainties in the background
subtraction or in the Galactic HI column density. Calibration uncertainties in
the ACIS detector that may affect our results are addressed by comparing the
Chandra data to XMM MOS and PN spectra. While differences between the three
instruments remain, all detect the excess in similar amounts, providing
evidence against an instrumental nature of the excess. Given the presence of
non-thermal radio emission near the center of A3112, we argue that the excess
X-ray emission is of non-thermal nature and distributed throughout the entire
X-ray bandpass, from soft to hard X-rays. The excess can be explained with the
presence of a population of relativistic electrons with ~7% of the cluster's
gas pressure. We also discuss a possible thermal nature of the excess and
examine the problems associated with such an interpretation.
|
For many compiled languages, source-level types are erased very early in the
compilation process. As a result, further compiler passes may convert type-safe
source into type-unsafe machine code. Type-unsafe idioms in the original source
and type-unsafe optimizations mean that type information in a stripped binary
is essentially nonexistent. The problem of recovering high-level types by
performing type inference over stripped machine code is called type
reconstruction, and offers a useful capability in support of reverse
engineering and decompilation.
In this paper, we motivate and develop a novel type system and algorithm for
machine-code type inference. The features of this type system were developed by
surveying a wide collection of common source- and machine-code idioms, building
a catalog of challenging cases for type reconstruction. We found that these
idioms place a sophisticated set of requirements on the type system, inducing
features such as recursively-constrained polymorphic types. Many of the
features we identify are often seen only in expressive and powerful type
systems used by high-level functional languages.
Using these type-system features as a guideline, we have developed Retypd: a
novel static type-inference algorithm for machine code that supports recursive
types, polymorphism, and subtyping. Retypd yields more accurate inferred types
than existing algorithms, while also enabling new capabilities such as
reconstruction of pointer const annotations with 98% recall. Retypd can operate
on weaker program representations than the current state of the art, removing
the need for high-quality points-to information that may be impractical to
compute.
|
We prove that the Tate conjecture is invariant under Homological Projective
Duality (=HPD). As an application, we prove the Tate conjecture in the new
cases of linear sections of determinantal varieties, and also in the cases of
complete intersections of two quadrics. Furthermore, we extend the Tate
conjecture from schemes to stacks and prove it for certain global orbifolds.
|
We discuss two expressions for the conserved quantities (energy momentum and
angular momentum) of the Poincar\'e Gauge Theory. We show that the
of the Hamiltonians, of which the expressions are the respective boundary
terms, are well defined if we choose an appropriate phase space for
asymptotically flat gravitating systems. Furthermore, we compare the expressions with others,
known from the literature.
|
Monocular 3D object detection has become a mainstream approach in autonomous
driving owing to its easy application. A prominent advantage is that it does not
require LiDAR point clouds during inference. However, most current methods
still rely on 3D point cloud data for labeling the ground truths used in the
training phase. This inconsistency between the training and inference makes it
hard to utilize the large-scale feedback data and increases the data collection
expenses. To bridge this gap, we propose a new weakly supervised monocular 3D
object detection method, which can train the model with only 2D labels
marked on images. To be specific, we explore three types of consistency in this
task, i.e. the projection, multi-view and direction consistency, and design a
weakly-supervised architecture based on these consistencies. Moreover, we
propose a new 2D direction labeling method in this task to guide the model for
accurate rotation direction prediction. Experiments show that our
weakly-supervised method achieves comparable performance with some fully
supervised methods. When used as a pre-training method, our model can
significantly outperform the corresponding fully-supervised baseline with only
1/3 3D labels. https://github.com/weakmono3d/weakmono3d
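One plausible form of the projection-consistency term (the paper's exact
formulation may differ): the predicted 3D box center, projected through the
camera intrinsics, should land at the center of the 2D box label.

```python
import numpy as np

def projection_consistency(center3d, K, box2d):
    """Pixel distance between the projected 3D center and the 2D box center.
    center3d: (3,) in camera coordinates; K: (3, 3) intrinsics;
    box2d: (x1, y1, x2, y2) in pixels."""
    p = K @ center3d
    u, v = p[0] / p[2], p[1] / p[2]            # perspective projection
    cu, cv = (box2d[0] + box2d[2]) / 2, (box2d[1] + box2d[3]) / 2
    return np.hypot(u - cu, v - cv)

K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
loss = projection_consistency(np.array([2.0, 1.0, 20.0]), K, (680, 350, 760, 430))
```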
|
We consider the following general hidden hubs model: an $n \times n$ random
matrix $A$ with a subset $S$ of $k$ special rows (hubs): entries in rows
outside $S$ are generated from the probability distribution $p_0 \sim
N(0,\sigma_0^2)$; for each row in $S$, some $k$ of its entries are generated
from $p_1 \sim N(0,\sigma_1^2)$, $\sigma_1>\sigma_0$, and the rest of the
entries from $p_0$. The problem is to identify the high-degree hubs
efficiently. This model includes and significantly generalizes the planted
Gaussian Submatrix Model, where the special entries are all in a $k \times k$
submatrix. There are two well-known barriers: if $k\geq c\sqrt{n\ln n}$, just
the row sums are sufficient to find $S$ in the general model. For the submatrix
problem, this can be improved by a $\sqrt{\ln n}$ factor to $k \ge c\sqrt{n}$
by spectral methods or combinatorial methods. In the variant with $p_0=\pm 1$
(with probability $1/2$ each) and $p_1\equiv 1$, neither barrier has been
broken.
We give a polynomial-time algorithm to identify all the hidden hubs with high
probability for $k \ge n^{0.5-\delta}$ for some $\delta >0$, when
$\sigma_1^2>2\sigma_0^2$. The algorithm extends to the setting where planted
entries might have different variances each at least as large as $\sigma_1^2$.
We also show a nearly matching lower bound: for $\sigma_1^2 \le 2\sigma_0^2$,
there is no polynomial-time Statistical Query algorithm for distinguishing
between a matrix whose entries are all from $N(0,\sigma_0^2)$ and a matrix with
$k=n^{0.5-\delta}$ hidden hubs for any $\delta >0$. The lower bound as well as
the algorithm are related to whether the chi-squared distance of the two
distributions diverges. At the critical value $\sigma_1^2=2\sigma_0^2$, we show
that the general hidden hubs problem can be solved for $k\geq c\sqrt n(\ln
n)^{1/4}$, improving on the naive row sum-based method.
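A sketch of the easy regime ($k \geq c\sqrt{n\ln n}$), where a squared-entry
row statistic already separates hub rows in this variance-elevated model
(instance sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 4000, 300                       # k well above sqrt(n log n) here
sigma0, sigma1 = 1.0, 2.0

A = rng.normal(0.0, sigma0, (n, n))
hubs = rng.choice(n, k, replace=False)
for i in hubs:                         # k planted high-variance entries per hub row
    cols = rng.choice(n, k, replace=False)
    A[i, cols] = rng.normal(0.0, sigma1, k)

score = (A**2).sum(axis=1)             # hub rows have elevated expected value
found = np.argsort(score)[-k:]
print(len(set(found) & set(hubs)) / k) # fraction of hubs recovered (close to 1)
```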
|
We present a learning approach for localization and segmentation of objects
in an image in a manner that is robust to partial occlusion. Our algorithm
produces a bounding box around the full extent of the object and labels pixels
in the interior that belong to the object. Like existing segmentation aware
detection approaches, we learn an appearance model of the object and consider
regions that do not fit this model as potential occlusions. However, in
addition to the established use of pairwise potentials for encouraging local
consistency, we use higher-order potentials which capture information at the
level of image segments. We also propose an efficient loss function that
targets both localization and segmentation performance. Our algorithm achieves
13.52% segmentation error and 0.81 area under the false-positive per image vs.
recall curve on average over the challenging CMU Kitchen Occlusion Dataset.
This is a 42.44% decrease in segmentation error and a 16.13% increase in
localization performance compared to the state-of-the-art. Finally, we show
that the visibility labelling produced by our algorithm can make full 3D pose
estimation from a single image robust to occlusion.
|
We present a series of monitoring observations of the ultrasoft broad-line
AGN RE J2248-511 with XMM-Newton. Previous X-ray observations showed a
transition from a very soft state to a harder state five years later. We find
that the ultrasoft X-ray excess has re-emerged, yet there is no change in the
hard power-law. Reflection models with a reflection fraction of >15, and
Comptonisation models with two components of different temperatures and optical
depths can be fit to the spectrum, but cannot be constrained. The best
representation of the spectrum is a model consisting of two blackbodies
(kT ~ 90 and 200 eV) plus a power law (Gamma = 1.8). We also present simultaneous
optical and infrared data showing that the optical spectral slope also changes
on timescales of years. If the optical to X-ray flux comes primarily from a
Comptonised accretion disk, we obtain estimates for the black hole mass of ~1e8
solar masses, an accretion rate of ~0.8x Eddington, and a disk inclination cos i > 0.8.
|
The finite-degree Zariski (Z-) closure is a classical algebraic object that
has found a key place in several applications of the polynomial method in
combinatorics. In this work, we characterize the finite-degree Z-closures of a
subclass of symmetric sets (subsets that are invariant under permutations of
coordinates) of the Boolean cube, in positive characteristic.
Our results subsume multiple statements on finite-degree Z-closures that have
found applications in extremal combinatorial problems, for instance, pertaining
to set systems (Heged\H{u}s, Stud. Sci. Math. Hung. 2010; Heged\H{u}s, arXiv
2021), and Boolean circuits (Hr\v{u}bes et al., ICALP 2019). Our
characterization also establishes that for the subclasses of symmetric sets
that we consider, the finite-degree Z-closures have low computational
complexity.
A key ingredient in our characterization is a new variant of finite-degree
Z-closures, defined using vanishing conditions on only symmetric polynomials
satisfying a degree bound.
|
We present optical and near-infrared photometry, as well as ground-based
optical spectra and Hubble Space Telescope ultraviolet spectra, of the Type Ia
supernova (SN) 2001ay. At maximum light the Si II and Mg II lines indicated
expansion velocities of 14,000 km/sec, while Si III and S II showed velocities
of 9,000 km/sec There is also evidence for some unburned carbon at 12,000
km/sec. SN 2001ay exhibited a decline-rate parameter Delta m_15(B) = 0.68 \pm
0.05 mag; this and the B-band photometry at t > +25 d past maximum make it the
most slowly declining Type Ia SN yet discovered. Three of four
super-Chandrasekhar-mass candidates have decline rates almost as slow as this.
After correction for Galactic and host-galaxy extinction, SN 2001ay had M_B =
-19.19 and M_V = -19.17 mag at maximum light; thus, it was not overluminous in
optical bands. In near-infrared bands it was overluminous only at the 2-sigma
level at most. For a rise time of 18 d (explosion to bolometric maximum) the
implied Ni-56 yield was (0.58 \pm 0.15)/alpha M_Sun, with alpha = L_max/E_Ni
probably in the range 1.0 to 1.2. The Ni-56 yield is comparable to that of many
Type Ia supernovae. The "normal" Ni-56 yield and the typical peak optical
brightness suggest that the very broad optical light curve is explained by the
trapping of the gamma rays in the inner regions.
|
Dynamic real-time optimization (DRTO) is a challenging task due to the fact
that optimal operating conditions must be computed in real time. The main
bottleneck in the industrial application of DRTO is the presence of
uncertainty. Many stochastic systems present the following obstacles: 1)
plant-model mismatch, 2) process disturbances, and 3) the risk of violating
process constraints. To accommodate these difficulties, we present a constrained
reinforcement learning (RL) based approach. RL naturally handles the process
uncertainty by computing an optimal feedback policy. However, state constraints
cannot be introduced straightforwardly. To address this problem, we present
a chance-constrained RL methodology. We use chance constraints to guarantee the
probabilistic satisfaction of process constraints, which is accomplished by
introducing backoffs, such that the optimal policy and backoffs are computed
simultaneously. Backoffs are adjusted using the empirical cumulative
distribution function to guarantee the satisfaction of a joint chance
constraint. The advantage and performance of this strategy are illustrated
through a stochastic dynamic bioprocess optimization problem, to produce
sustainable high-value bioproducts.
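A minimal sketch of the backoff-adjustment loop, with the closed-loop Monte
Carlo rollouts replaced by a hypothetical Gaussian surrogate for the constraint
value g(x):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 0.05                           # allowed violation probability

def constraint_samples(backoff, n_mc=2000):
    """Stand-in for closed-loop rollouts when the policy tracks the
    tightened constraint g(x) + backoff <= 0; here g ~ N(-backoff, 0.5^2)."""
    return rng.normal(-backoff, 0.5, n_mc)

b = 0.0
for _ in range(50):                    # grow the backoff until the empirical
    g = constraint_samples(b)          # CDF at 0 meets the 1 - alpha level
    if np.mean(g > 0) <= alpha:
        break
    b += 0.05
```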
|
The inverse medium problem for a circular cylindrical domain is studied using
low-frequency acoustic waves as the probe radiation. It is shown that to second
order in $k_{0}a$ ($k_{0}$ the wavenumber in the host medium, $a$ the radius of
the cylinder), only the first three terms (i.e., of orders 0, -1 and +1) in the
partial wave representation of the scattered field are non-vanishing, and the
material parameters enter into these terms in an explicit manner. Moreover, the
zeroth-order term contains only two of the unknown material constants (i.e.,
the real and imaginary parts of complex compressibility of the cylinder
$\kappa_{1}$) whereas the $\pm 1$ order terms contain the other material
constant (i.e., the density of the cylinder $\rho_{1}$). A method, relying on
the knowledge of the totality of the far-zone scattered field and resulting in
explicit expressions for $\rho_{1}$ and $\kappa_{1}$, is devised and shown to
give highly-accurate estimates of these quantities even for frequencies such
that $k_{0}a$ is as large as 0.1.
|
The wave functions of quantum Calogero-Sutherland systems in the trigonometric
case are related to polynomials in l variables (l is the rank of the root
system), which generalize the Gegenbauer polynomials and Jack polynomials.
Using the technique of \kappa-deformation of Clebsch-Gordan series developed in
the authors' previous papers, we investigate some new properties of the
generalized Gegenbauer polynomials. Note that similar results are also valid in
the A_2 case for the more general two-parameter deformation (the
(q,t)-deformation) introduced by Macdonald.
|
We consider random walk among random conductances where the conductance
environment is shift invariant and ergodic. We study which moment conditions of
the conductances guarantee speed zero of the random walk. We show that if there
exists \alpha>1 such that E[log^\alpha({\omega}_e)]<\infty, then the random
walk has speed zero. On the other hand, for each \alpha>1 we provide examples,
satisfying E[log^\alpha({\omega}_e)]<\infty, of random walks with non-zero speed
and of random walks for which the limiting speed does not exist.
|
The reaction $K^-p\rightarrow\Lambda\eta$ at low energies is studied with a
chiral quark model approach. Good descriptions of the existing experimental
data are obtained. It is found that $\Lambda(1670)$ dominates the reaction
around threshold. Furthermore, $u$- and $t$-channel backgrounds play crucial
roles in this reaction as well. The contributions from the $D$-wave state
$\Lambda(1690)$ are negligibly small owing to its tiny coupling to $\eta\Lambda$. To
understand the strong coupling properties of the low-lying negative parity
$\Lambda$ resonances extracted from the $\bar{K}N$ scattering, we further study
their strong decays. It is found that these resonances are most likely mixed
states between different configurations. Considering these low-lying negative
parity $\Lambda$ resonances as mixed three-quark states, we can reasonably
understand both their strong decay properties from Particle Data Group and
their strong coupling properties extracted from the $\bar{K}N$ scattering. As a
byproduct, we also predict the strong decay properties of the missing $D$-wave
state $|\Lambda\frac{3}{2}^-\rangle_3$ with a mass of $\sim1.8$ GeV. We suggest
that our experimental colleagues search for it in the $\Sigma(1385)\pi$ and
$\Sigma\pi$ channels.
|
In the NP-hard Optimizing PD with Dependencies (PDD) problem, the input
consists of a phylogenetic tree $T$ over a set of taxa $X$, a food-web that
describes the prey-predator relationships in $X$, and integers $k$ and $D$. The
task is to find a set $S$ of $k$ species that is viable in the food-web such
that the subtree of $T$ obtained by retaining only the vertices of $S$ has
total edge weight at least $D$. Herein, viable means that for every predator
taxon of $S$, the set $S$ contains at least one prey taxon. We provide the
first systematic analysis of PDD and its special case s-PDD from a
parameterized complexity perspective. For solution-size related parameters, we
show that PDD is FPT with respect to $D$ and with respect to $k$ plus the
height of the phylogenetic tree. Moreover, we consider structural
parameterizations of the food-web. For example, we show an FPT-algorithm for
the parameter that measures the vertex deletion distance to graphs where every
connected component is a complete graph. Finally, we show that s-PDD admits an
FPT-algorithm for the treewidth of the food-web. This disproves a conjecture of
Faller et al. [Annals of Combinatorics, 2011] who conjectured that s-PDD is
NP-hard even when the food-web is a tree.
|
The thermal Hall effect, defined as a heat current response transversal to an
applied temperature gradient, is a central experimental probe of exotic
electrically insulating phases of matter. A key question is how the interplay
between magnetic and structural degrees of freedom gives rise to a nonzero
thermal Hall conductivity (THC). Here, we present evidence for an intrinsic
thermal Hall effect in the Heisenberg-Kitaev antiferromagnet and spin-liquid
candidate Na$_2$Co$_2$TeO$_6$ brought about by the quantum-geometric Berry
curvature of so-called magnon polarons, resulting from magnon-phonon
hybridization. At low temperatures, our field- and temperature-dependent
measurements show a negative THC for magnetic fields below 10 T and a sign
change to positive THC above. Theoretically, the sign and the order of
magnitude of the THC cannot be solely explained with magnetic excitations. We
demonstrate that, by incorporating spin-lattice coupling into our theoretical
calculations, the Berry curvature of magnon polarons counteracts the purely
magnonic contribution, reverses the overall sign of the THC, and increases its
magnitude, which significantly improves agreement with experimental data. Our
work highlights the crucial role of spin-lattice coupling in the thermal Hall
effect.
|
In this article we generalize a theorem by Palais on the rigidity of compact
group actions to cotangent lifts. We use this result to prove rigidity for
integrable systems on symplectic manifolds, including systems with degenerate
singularities which are invariant under a torus action.
|
Dense gas in minihalos with masses of $10^6-10^8~M_\odot$ can shield
themselves from reionization for $\sim100$ Myr after being exposed to the UV
background. These self-shielded systems, often unresolved in cosmological
simulations, can introduce strong absorption in quasar spectra. This paper is
the first systematic study on the impact of these systems on the Ly$\alpha$
forest. We first derive the HI column density profile of photoevaporating
minihalos by conducting 1D radiation-hydrodynamics simulations. We utilize
these results to estimate the Ly$\alpha$ opacity from minihalos in a
large-scale simulation that cannot resolve self-shielding. When the ionization
rate of the background radiation is $0.03\times10^{-12}~{\rm s}^{-1}$, as
expected near the end of reionization at $z\sim5.5$, we find that the incidence
rate of damped Ly$\alpha$ absorbers increases by a factor of $\sim2-4$ compared
to at $z=4.5$. The Ly$\alpha$ flux is, on average, suppressed by $\sim 3\%$ of
its mean due to minihalos. The absorption features enhance the 1D power
spectrum up to $\sim5\%$ at $k\sim0.1~h~{\rm Mpc}^{-1}~({\rm or}~10^{-3}~{\rm
km}^{-1}~{\rm s})$, which is comparable to the enhancement caused by
inhomogeneous reionization. The flux is particularly suppressed in the vicinity
of large halos along the line-of-sight direction at separations of up to
$10~h^{-1}~{\rm Mpc}$ at $r_\perp\lesssim2~h^{-1}~{\rm Mpc}$. However, these
effects become much smaller for higher ionizing rates
($\gtrsim0.3\times10^{-12}~{\rm s}^{-1}$) expected in the post-reionization
Universe. Our findings highlight the need to consider minihalo absorption when
interpreting the Ly$\alpha$ forest at $z\gtrsim5.5$. Moreover, the sensitivity
of these quantities to the ionizing background intensity can be exploited to
constrain the intensity itself.
|
This paper proposes ConsistDreamer - a novel framework that lifts 2D
diffusion models with 3D awareness and 3D consistency, thus enabling
high-fidelity instruction-guided scene editing. To overcome the fundamental
limitation of missing 3D consistency in 2D diffusion models, our key insight is
to introduce three synergetic strategies that augment the input of the 2D
diffusion model to become 3D-aware and to explicitly enforce 3D consistency
during the training process. Specifically, we design surrounding views as
context-rich input for the 2D diffusion model, and generate 3D-consistent,
structured noise instead of image-independent noise. Moreover, we introduce
self-supervised consistency-enforcing training within the per-scene editing
procedure. Extensive evaluation shows that our ConsistDreamer achieves
state-of-the-art performance for instruction-guided scene editing across
various scenes and editing instructions, particularly in complicated
large-scale indoor scenes from ScanNet++, with significantly improved sharpness
and fine-grained textures. Notably, ConsistDreamer stands as the first work
capable of successfully editing complex (e.g., plaid/checkered) patterns. Our
project page is at immortalco.github.io/ConsistDreamer.
|
It has been an open question in deep learning if fault-tolerant computation
is possible: can arbitrarily reliable computation be achieved using only
unreliable neurons? In the grid cells of the mammalian cortex, analog error
correction codes have been observed to protect states against neural spiking
noise, but their role in information processing is unclear. Here, we use these
biological error correction codes to develop a universal fault-tolerant neural
network that achieves reliable computation if the faultiness of each neuron
lies below a sharp threshold; remarkably, we find that noisy biological neurons
fall below this threshold. The discovery of a phase transition from faulty to
fault-tolerant neural computation suggests a mechanism for reliable computation
in the cortex and opens a path towards understanding noisy analog systems
relevant to artificial intelligence and neuromorphic computing.
|
We show that the path-ordered Wilson line integral used in 0802.0313 to make
a nonlocal action gauge invariant is mathematically inconsistent. We also show
that it can lead to reasonable gauge-field vertices through the use of a second
mathematically unjustifiable procedure.
|
Let R be a differential domain finitely generated over a differential field,
F, with field of constants, C, of characteristic 0. Let E be the quotient field
of R. The paper investigates necessary and sufficient conditions on R's
differential ideals for the constants of differentiation of E to also be C (no
new constants). It was known that the absence of proper nonzero differential
ideals guarantees no new constants. The paper shows that, when C is
algebraically closed, if R has only finitely many height-one prime differential
ideals then E has no new constants. An example with F infinitely generated over
C shows that the converse is false. The paper gives conditions on C and F such
that, when F is finitely generated over C and R is a polynomial ring over F, no
new constants in E implies that R has only finitely many height-one prime
differential ideals.
|
The problem of approximating the discrete spectra of families of self-adjoint
operators that are merely strongly continuous is addressed. It is well-known
that the spectrum need not vary continuously (as a set) under strong
perturbations. However, it is shown that under an additional compactness
assumption the spectrum does vary continuously, and a family of symmetric
finite-dimensional approximations is constructed. An important feature of these
approximations is that they are valid for the entire family uniformly. An
application of this result to the study of plasma instabilities is illustrated.
|
We introduce a continuous modeling approach which combines the elastic response
of the trabecular bone structure, the concentration of signaling molecules
within the bone, and a mechanism by which this concentration at the bone surface
drives local bone formation and resorption. In an abstract setting, bone can
be considered as a shape-changing structure. For similar problems in materials
science, phase-field approximations have been established as an efficient
computational tool. We adapt such an approach to trabecular bone remodeling.
It allows for a smooth representation of the trabecular bone structure and
drastically reduces computational costs compared with traditional micro
finite element approaches. We demonstrate the advantage of the approach within
a minimal model. We quantitatively compare the results with established micro
finite element approaches on simple geometries and consider the bone morphology
within a bone segment obtained from $\mu$CT data of a sheep vertebra with
realistic parameters.
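A minimal sketch of an Allen-Cahn-type phase-field update of the kind adapted
here, with a toy signaling field acting at the diffuse interface (geometry,
parameters and source term are illustrative, not the paper's model):

```python
import numpy as np

N, dx, dt, eps = 128, 1.0, 0.05, 2.0
x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.where((X - 64)**2 + (Y - 64)**2 < 30**2, 1.0, -1.0)  # bone: phi = +1
s = 0.1 * np.sin(2 * np.pi * X / N)    # toy signaling-molecule concentration

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(500):
    interface = 1 - phi**2             # localizes the source at the surface
    phi += dt * (eps * laplacian(phi) - (phi**3 - phi) + s * interface)
```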
|
A low-light-power theory of nonlinear magneto-optical rotation of
frequency-modulated light resonant with a J=1->J'=0 transition is presented.
The theory is developed for a Doppler-free transition, and then modified to
account for Doppler broadening and velocity mixing due to collisions. The
results of the theory are shown to be in qualitative agreement with
experimental data obtained for the rubidium D1 line.
|
We derive moment and exponential tail estimates for so-called bivariate, or more generally multivariate, functional operations, which need not be linear or even multilinear. We also show the strong, or at least weak (i.e. up to a multiplicative constant), exactness of the obtained estimates.
|
Large panels of etched plastic, situated aboard the Skylab Space Station and
inside the Ohya quarry near Tokyo, have been used to set limits on fluxes of
cosmogenic particles. These plastic particle track detectors also provide the
best sensitivity for some heavy dark matter that interacts strongly with
nuclei. We revisit prior dark matter bounds from Skylab, and incorporate
geometry-dependent thresholds, a halo velocity distribution, and a complete
accounting of observed through-going particle fluxes. These considerations
reduce the Skylab bound's mass range by a few orders of magnitude. However, a
new analysis of Ohya data covers a portion of the prior Skylab bound, and
excludes dark matter masses up to the Planck mass. Prospects for future etched
plastic dark matter searches are discussed.
|
Quantum systems can be used as probes in the context of metrology for
enhanced parameter estimation. In particular, the delicacy of critical systems
to perturbations can make them ideal sensors. Arguably the simplest realistic
probe system is a spin-1/2 impurity, which can be manipulated and measured
in-situ when embedded in a fermionic environment. Although entanglement between
a single impurity probe and its environment produces nontrivial many-body
effects, criticality cannot be leveraged for sensing. Here we introduce instead
the two-impurity Kondo (2IK) model as a novel paradigm for critical quantum
metrology, and examine the multiparameter estimation scenario at finite
temperature. We explore the full metrological phase diagram numerically and
obtain exact analytic results near criticality. Enhanced sensitivity to the
inter-impurity coupling driving a second-order phase transition is evidenced by
diverging quantum Fisher information (QFI) and quantum signal-to-noise ratio
(QSNR). However, with uncertainty in both coupling strength and temperature,
the multiparameter QFI matrix becomes singular -- even though the parameters to
be estimated are independent -- resulting in vanishing QSNRs. We demonstrate
that by applying a known control field, the singularity can be removed and
measurement sensitivity restored. For general systems, we show that the
degradation in the QSNR due to uncertainties in another parameter is controlled
by the degree of correlation between the unknown parameters.
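Schematically, the singularity mechanism described above can be summarized as follows; the conventions (e.g. the exact normalization of the QSNR) are our own and may differ from the paper's.

```latex
% Multiparameter quantum Cramer-Rao bound for theta = (theta_1, theta_2):
\mathrm{Cov}(\hat{\theta}) \;\succeq\; F^{-1}(\theta), \qquad
\mathrm{QSNR}_i \;\equiv\; \frac{\theta_i^{2}}{\left[F^{-1}(\theta)\right]_{ii}} .
% If det F(theta) -> 0 while the diagonal entries stay finite, the inverse
% entries [F^{-1}]_{ii} diverge and each QSNR_i vanishes; a known control
% field deforms F so that det F != 0, restoring measurement sensitivity.
```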
|
In recent years, artificial neural networks have achieved performance close to or better than that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have witnessed remarkable improvements in state-of-the-art models. One advantage of this technological
boost is to facilitate comparison between different neural networks and human
performance, in order to deepen our understanding of human cognition. Here, we
investigate which neural network architecture (feed-forward vs. recurrent)
matches human behavior in artificial grammar learning, a crucial aspect of
language acquisition. Prior experimental studies proved that artificial
grammars can be learnt by human subjects after little exposure and often
without explicit knowledge of the underlying rules. We tested four grammars
with different complexity levels both in humans and in feedforward and
recurrent networks. Our results show that both architectures can 'learn' (via
error back-propagation) the grammars after the same number of training
sequences as humans do, but recurrent networks perform closer to humans than
feedforward ones, irrespective of the grammar complexity level. Moreover,
similar to visual processing, in which feedforward and recurrent architectures
have been related to unconscious and conscious processes, our results suggest
that explicit learning is best modeled by recurrent architectures, whereas
feedforward networks better capture the dynamics involved in implicit learning.
|
Superconducting high-entropy alloys (HEAs) are a newly burgeoning field of
unconventional superconductors and raise intriguing questions about the
presence of superconductivity in highly disordered systems, which lack regular
phonon modes. In our study, we have synthesized and investigated the superconducting characteristics of two new transition-metal-based HEAs: Re$_{0.35}$Os$_{0.35}$Mo$_{0.08}$W$_{0.10}$Zr$_{0.12}$ (ReOMWZ), crystallizing in the noncentrosymmetric $\alpha$-Mn structure, and Ru$_{0.35}$Os$_{0.35}$Mo$_{0.10}$W$_{0.10}$Zr$_{0.10}$ (RuOMWZ), crystallizing in the hexagonal close-packed (hcp) structure. Transition-metal-based hcp HEAs are rare and highly desirable for practical applications due to their high hardness. Bulk magnetization, resistivity, and specific heat measurements confirmed bulk type-II superconductivity in both alloys. Specific heat analysis down to the lowest measured temperatures is consistent with a BCS description. Upper critical fields comparable to the Pauli paramagnetic limit suggest the possibility of unconventional superconductivity in both HEAs.
|
Let $f$ be a continuous monotone real function defined on a compact interval $[a,b]$ of the real line. Given a sequence of partitions $\Delta_n$ of $[a,b]$ with $\left\Vert \Delta_{n}\right\Vert \rightarrow 0$, and given $l\geq 0$, $m\geq 1$, let $\mathbf{S}_{m}^{l}(\Delta_{n})$ be the space of all functions with the same monotonicity as $f$ that are $\Delta_n$-piecewise polynomial of order $m$ and that belong to the smoothness class $C^{l}[a,b]$. In this paper we show that, for any $m\geq 2l+1$:
$\bullet$ sequences of best $L^p$-approximation in $\mathbf{S}_{m}^{l}(\Delta_{n})$ converge uniformly to $f$ on any compact subinterval of $(a,b)$;
$\bullet$ sequences of best $L^p$-approximation in $\mathbf{S}_{m}^{0}(\Delta_{n})$ converge uniformly to $f$ on the whole interval $[a,b]$.
|
Over the past few years, deep learning has risen to the foreground as a topic
of massive interest, mainly as a result of successes obtained in solving
large-scale image processing tasks. There are multiple challenging mathematical
problems involved in applying deep learning: most deep learning methods require
the solution of hard optimisation problems, and a good understanding of the
tradeoff between computational effort, amount of data and model complexity is
required to successfully design a deep learning approach for a given problem. Much of the progress in deep learning has been based on heuristic exploration, but there is a growing effort to mathematically understand the structure in existing deep learning methods and to systematically design new methods that preserve certain types of structure. In this article, we review a number of these directions: some deep neural
In this article, we review a number of these directions: some deep neural
networks can be understood as discretisations of dynamical systems, neural
networks can be designed to have desirable properties such as invertibility or
group equivariance, and new algorithmic frameworks based on conformal
Hamiltonian systems and Riemannian manifolds to solve the optimisation problems
have been proposed. We conclude our review of each of these topics by
discussing some open problems that we consider to be interesting directions for
future research.
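As a concrete instance of the first direction mentioned above, a residual block $x_{k+1} = x_k + h\,f(x_k)$ is exactly one explicit-Euler step of the ODE $dx/dt = f(x)$. The sketch below (our illustration with arbitrary weights and sizes, not drawn from the article) makes that identification explicit.

```python
import numpy as np

# A residual block x_{k+1} = x_k + h * f(x_k) is one explicit-Euler step of
# the ODE dx/dt = f(x): depth plays the role of integration time. The tanh
# layer, step size h, and dimensions are illustrative placeholders.
rng = np.random.default_rng(0)
W, b, h = rng.normal(size=(16, 16)) / 4, np.zeros(16), 0.1

def f(x):
    return np.tanh(W @ x + b)

def resnet_forward(x, depth=20):
    for _ in range(depth):
        x = x + h * f(x)       # Euler step == residual connection
    return x

y = resnet_forward(rng.normal(size=16))
```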
|
Word embeddings represent words as multidimensional real vectors,
facilitating data analysis and processing, but are often challenging to
interpret. Independent Component Analysis (ICA) creates clearer semantic axes
by identifying independent key features. Previous research has shown ICA's
potential to reveal universal semantic axes across languages. However, it
lacked verification of the consistency of independent components within and
across languages. We investigated the consistency of semantic axes in two ways:
both within a single language and across multiple languages. We first probed
into intra-language consistency, focusing on the reproducibility of axes by
performing ICA multiple times and clustering the outcomes. Then, we
statistically examined inter-language consistency by verifying those axes'
correspondences using statistical tests. These newly applied statistical methods establish a robust framework that ensures the reliability and universality of semantic axes.
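A minimal sketch of the intra-language reproducibility check described above, assuming scikit-learn's FastICA; the random matrix stands in for a real embedding matrix, on which (unlike Gaussian noise) consistent non-Gaussian axes would be expected.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Run ICA several times on the same (words x dims) matrix and measure how
# well the recovered axes correlate across runs. Real word embeddings would
# replace the random placeholder below.
X = np.random.default_rng(0).normal(size=(5000, 100))

runs = []
for seed in range(3):
    ica = FastICA(n_components=20, random_state=seed, max_iter=1000)
    runs.append(ica.fit_transform(X))            # independent components

# Correlate run-0 axes with run-1 axes; consistent axes give |corr| near 1.
C = np.corrcoef(runs[0].T, runs[1].T)[:20, 20:]
print(np.abs(C).max(axis=1))   # best-matching similarity per component
```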
|
Compressed image super-resolution has attracted great attention in recent years; in this task, images are degraded by both compression artifacts and low-resolution artifacts. Owing to these complex hybrid distortions, it is hard to restore the distorted image through a simple combination of super-resolution and compression artifact removal. In this paper, we take a step forward and propose the Hierarchical Swin Transformer (HST) network to restore low-resolution compressed images; it jointly captures hierarchical feature representations and enhances the representation at each scale with a Swin transformer. Moreover, we find that pretraining on a super-resolution (SR) task is vital for compressed image super-resolution. To explore the effects of different SR pretraining strategies, we take commonly used SR tasks (e.g., bicubic and various real super-resolution simulations) as our pretraining tasks, and reveal that SR plays an irreplaceable role in compressed image super-resolution. With the cooperation of HST and pretraining, our HST achieves fifth place in the AIM 2022 challenge on the low-quality compressed image super-resolution track, with a PSNR of 23.51 dB. Extensive experiments
and ablation studies have validated the effectiveness of our proposed methods.
The code and models are available at
https://github.com/USTC-IMCL/HST-for-Compressed-Image-SR.
|
In this work, a momentum-space geometrical structure in helical evanescent
electromagnetic waves is revealed. It is shown that for every helical
evanescent wave on a helicity-dependent half tangent line in momentum space,
the orientation of each of its field, spin, and Poynting vectors is the same.
This geometric structure reveals itself as a remarkable relation between the
far-field and near-field components of the angular spectrum. Any general
evanescent wavevector is linked to two points on the $k_{\rho}=k_0$ circle of
propagating wavevectors via two helicity-dependent tangent lines. Knowing the
field on the $k_{\rho}=k_0$ circle of a general dipolar source is sufficient to
determine its entire evanescent angular spectrum. Applying this concept, we
gain insights into near-field directionality by showing that every zero in the
angular spectrum is a helicity singularity where two half-tangent lines of
opposite helicity intersect. A powerful method for synthetic design of
near-field directional sources is also devised, using structured helical
illumination to gain full control of the near-field directionality. The results
provide fundamental insight into helical evanescent waves and have implications for areas where chiral light-matter interaction plays a central role.
|
A stellar occultation by the large trans-Neptunian object (90482) Orcus was
predicted to occur on 2017 March 07. Observations were made at five sites in
North and South America. High-speed, visible-wavelength images were taken at
all sites, in addition to simultaneous K-band images at one location.
Solid-body occultations were observed from two sites. Post-event reconstruction
suggested an occultation of two different stars observed from two different
sites. Follow-up speckle imaging at Gemini Observatory revealed a second star, which verified that the occulting body in both cases was Orcus' satellite,
Vanth. The two single-chord detections, with an anomalously large timing delay
in one chord, have lengths of 291+/-125 km and 434.4+/-2.4 km. The
observations, combined with a non-detection at a nearby site, allow a tight
constraint of 443+/-10 km to be placed on Vanth's size (assuming it is
spherical). A 3-{\sigma} upper limit of 1-4 {\mu}bar (depending on constituent)
is found for a global Vanth atmosphere. The immersion and emersion profiles are
slightly different, with atmospheric constraints 40 percent higher on immersion
than on emersion. No rings or other material were detected within 10,000 km of Vanth, and beyond 8010 km from Orcus, to the tightest optical depth
limit of approximately 0.1 at 5 km scale. The occultation probed as close as
5040 km from Orcus, placing an optical depth limit of approximately 0.3 at 5 km
scale on any encircling material at that distance.
|
The anti-de Sitter space/conformal field theory correspondence (AdS/CFT) can
potentially provide a complete formulation of string theory on a landscape of
stable and metastable vacua that naturally give rise to eternal inflation. As a
model for this process, we consider bubble solutions with de Sitter interiors,
obtained by patching together dS and Schwarzschild-AdS solutions along a bubble
wall. For an interesting subclass of these solutions the bubble wall reaches
spacelike infinity in the black hole interior. Including the effects of
perturbations leads to a null singularity emanating from this point. Such
solutions are interpreted as states in a single CFT, and are shown to be
compatible with holographic entropy bounds. The construction suggests that de Sitter entropy be interpreted as the total number of degrees of freedom in effective
field theory, with a novel adaptive stepsize cutoff.
|
Interactions of quantum materials with strong-laser fields can induce exotic
nonequilibrium electronic states. Monolayer transition-metal dichalcogenides, a
new class of direct-gap semiconductors with prominent quantum confinement,
offer exceptional opportunities toward Floquet engineering of quasiparticle
electron-hole states, or excitons. Strong-field driving has the potential to achieve enhanced control of the electronic band structure, and thus to open a new realm of exciton light-matter interactions. However, experimental
implementation of strongly-driven excitons has so far remained out of reach.
Here, we use mid-infrared laser pulses below the optical bandgap to excite
monolayer tungsten disulfide up to a field strength of 0.3 V/nm, and
demonstrate strong-field light dressing of excitons in excess of a hundred millielectronvolts. Our high-sensitivity transient absorption spectroscopy
further reveals formation of a virtual absorption feature below the 1s-exciton
resonance, which is assigned to a light-dressed sideband from the dark
2p-exciton state. Quantum-mechanical simulations substantiate the experimental
results and enable us to retrieve real-space movies of the exciton dynamics.
This study advances our understanding of the exciton dynamics in the
strong-field regime, and showcases the possibility of harnessing ultrafast,
strong-field phenomena in device applications of two-dimensional materials.
|
The generation of isotropic vortex configurations in trapped atomic
Bose-Einstein condensates offers a platform to elucidate quantum turbulence on
mesoscopic scales. We demonstrate that a laser-induced obstacle moving in a
figure-eight path within the condensate provides a simple and effective means
to generate an isotropic three-dimensional vortex tangle due to its minimal net
transfer of angular momentum to the condensate. Our characterisation of vortex
structures and their isotropy is based on projected vortex lengths and velocity
statistics obtained numerically via the Gross-Pitaevskii equation. Our
methodology provides a possible experimental route for generating and
characterising vortex tangles and quantum turbulence in atomic Bose-Einstein
condensates.
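The following sketch (2D for brevity, whereas the study is 3D) illustrates the ingredients described above: a Gaussian obstacle stirred along a figure-eight path inside a harmonic trap, evolved with a split-step Fourier integrator for the Gross-Pitaevskii equation. All amplitudes, widths, and the interaction strength are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

# Split-step Fourier GPE evolution with a figure-eight stirring obstacle.
N, L, dt, g = 128, 20.0, 1e-3, 500.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

def potential(t, A=3.0, w=1.0, V0=20.0, sigma=0.5):
    xo, yo = A * np.sin(w*t), (A/2) * np.sin(2*w*t)   # figure-eight path
    trap = 0.5 * (X**2 + Y**2)
    obstacle = V0 * np.exp(-((X - xo)**2 + (Y - yo)**2) / (2 * sigma**2))
    return trap + obstacle

def split_step(psi, t):
    # Strang splitting: half potential step, full kinetic step, half potential step
    psi = psi * np.exp(-0.5j * dt * (potential(t) + g * np.abs(psi)**2))
    psi = np.fft.ifft2(np.exp(-0.5j * dt * K2) * np.fft.fft2(psi))
    psi = psi * np.exp(-0.5j * dt * (potential(t + dt) + g * np.abs(psi)**2))
    return psi

psi = np.exp(-(X**2 + Y**2) / 8).astype(complex)      # crude initial cloud
for n in range(100):
    psi = split_step(psi, n * dt)
```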
|
The Vandermonde-Chu Binomial Coefficients Identity is shown to imply
Bombieri's deep norm inequalities, via identities of Beauzamy-D\'egot, and
Reznick.
|
Clues to the physical conditions in radio cores of blazars come from
measurements of brightness temperatures as well as effects produced by
intrinsic opacity. We study the properties of the ultra compact blazar AO
0235+164 with RadioAstron ground-space radio interferometer, multi-frequency
VLBA, EVN and single-dish radio observations. We employ visibility modeling and
image stacking for deriving structure and kinematics of the source, and use
Gaussian process regression to find the relative multi-band time delays of the
flares. The multi-frequency core size and time lags support prevailing synchrotron self-absorption. The intrinsic brightness temperature of the core derived from ground-based VLBI is close to the equipartition regime value. At the same time, there is evidence for ultra-compact features with sizes of less
than 10 $\mu$as in the source, which might be responsible for the extreme
apparent brightness temperatures of up to $10^{14}$ K as measured by
RadioAstron. In 2007--2016 the VLBI components in the source at 43 GHz are
found predominantly in two directions, suggesting a bend of the outflow from
southern to northern direction. The apparent opening angle of the jet seen in
the stacked image at 43 GHz is two times wider than that at 15 GHz, indicating
a collimation of the flow within the central 1.5 mas. We estimate the Lorentz
factor $\Gamma = 14$, the Doppler factor $\delta=21$, and the viewing angle
$\theta = 1.7^\circ$ of the apparent jet base, derive the gradients of magnetic
field strength and electron density in the outflow, and the distance between
jet apex and the core at each frequency.
|
The denoising diffusion model has recently emerged as a powerful generative
technique that converts noise into data. While there are many studies providing
theoretical guarantees for diffusion processes based on discretized stochastic
differential equation (D-SDE), many generative samplers in real applications
directly employ a discrete-time (DT) diffusion process. However, there are very
few studies analyzing these DT processes, e.g., convergence for DT diffusion
processes has been obtained only for distributions with bounded support. In
this paper, we establish the convergence guarantee for substantially larger
classes of distributions under DT diffusion processes and further improve the
convergence rate for distributions with bounded support. In particular, we
first establish the convergence rates for both smooth and general (possibly
non-smooth) distributions having a finite second moment. We then specialize our
results to a number of interesting classes of distributions with explicit
parameter dependencies, including distributions with Lipschitz scores, Gaussian
mixture distributions, and any distributions with early-stopping. We further
propose a novel accelerated sampler and show that it improves the convergence
rates of the corresponding regular sampler by orders of magnitude with respect
to all system parameters. Our study features a novel analytical technique that
constructs a tilting factor representation of the convergence error and
exploits Tweedie's formula for handling Taylor expansion power terms.
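A minimal sketch of the kind of discrete-time (DT) sampler analyzed above, written in terms of a score estimate; the linear noise schedule and the toy score function are our illustrative choices, not the paper's. The commented Tweedie step shows the identity the analysis exploits.

```python
import numpy as np

# One DT ancestral-sampling step of a DDPM-style sampler in score form.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abars = np.cumprod(alphas)

def dt_sampler_step(x, t, score_fn, rng):
    score = score_fn(x, t)                  # approx. grad log p_t(x)
    # Tweedie's formula (shown for reference, not used below):
    # E[x0 | x_t] = (x_t + (1 - abars[t]) * score) / sqrt(abars[t])
    mean = (x + betas[t] * score) / np.sqrt(alphas[t])
    noise = rng.normal(size=x.shape) if t > 0 else 0.0
    return mean + np.sqrt(betas[t]) * noise

# Toy usage: for data ~ N(0, I), p_t stays N(0, I) and the score is -x.
rng = np.random.default_rng(0)
x = rng.normal(size=(2,))
for t in reversed(range(T)):
    x = dt_sampler_step(x, t, lambda x, t: -x, rng)
```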
|
We extend the notion of convexity of functions defined on global nonpositive
curvature spaces by introducing (geodesically) $h$-convex functions. We prove
estimates of Hermite-Hadamard type via Katugampola's fractional integrals. We
obtain an important corollary which gives an essentially sharp estimate
involving squared distance mappings between points in a global NPC space. This
is a contribution to analysis on spaces with curved geometry.
|
Fast Blue Optical Transients (FBOTs) are luminous transients with fast-evolving (typically $t_{\rm rise}<12\ \rm days$) light curves and blue colors (usually $-0.2 > g-r > -0.3$) that cannot be explained by a supernova-like explosion. We propose a model of radiative diffusion in a time-dependent outflow to interpret such special transients. In this model, we assume that a central engine ejects a continuous outflow over a few days, and we take the ejection of the outflow to be time-dependent. The outflow is initially optically thick, and photons are frozen in it. As the outflow expands over time, photons gradually escape, and our work models this evolution. Numerical
and analytical calculations are considered separately, and the results are
consistent. We apply the model to three typical FBOTs: PS1-10bjp, ZTF18abukavn,
and ATLAS19dqr. The modeling finds the total mass of the outflow ($\sim 1-5
{M_{\odot}}$), and the total time of the ejection ($\sim$ a few days) for them,
leading us to speculate that they may be the result of the collapse of massive
stars.
|
Learning the parameters of graphical models via maximum likelihood estimation is generally hard and requires approximation. Maximum composite likelihood estimation is a statistical approximation of maximum likelihood estimation and a higher-order generalization of maximum pseudo-likelihood estimation. In this paper, we propose a composite likelihood method and investigate its properties. Furthermore, we apply our composite likelihood method to restricted Boltzmann machines.
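To make the idea concrete, here is a minimal sketch (our illustration, not the paper's method) of the block-size-one composite likelihood, i.e. the pseudo-likelihood, for a binary pairwise Markov random field; higher-order composite likelihoods replace single sites with small blocks. The couplings and data are placeholders.

```python
import numpy as np

# Log pseudo-likelihood of an Ising-type model: sum over sites of the log
# conditional probability of each spin given all the others.
def log_pseudo_likelihood(W, X):
    # X: (n_samples, d) array of +/-1 spins; W: symmetric (d, d), zero diagonal
    fields = X @ W                                   # local field at each site
    # P(x_i | rest) = exp(x_i h_i) / (2 cosh h_i) = (1 + tanh(x_i h_i)) / 2
    return np.sum(np.log(0.5 * (1 + np.tanh(X * fields))))

rng = np.random.default_rng(0)
d = 10
W = rng.normal(scale=0.1, size=(d, d))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
X = rng.choice([-1.0, 1.0], size=(200, d))
print(log_pseudo_likelihood(W, X))
```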
|
We define a mechanical analog to the electrical basic circuit element $M = d\phi/dQ$, namely the ideal mechanical memristance $M = dp/dx$; $p$ is momentum.
We then introduce a mechanical memory resistor which has M(x) independent of
velocity v, so it is a perfect (= not-just-memristive) memristor, although its
memristance does not crucially involve inert mass. It is practically realizable
with a 1cm radius hollow sphere in heavy fuel oil with a temperature gradient.
It has a pinched hysteretic loop that collapses at high frequency in the v
versus p plot. The mechanical system clarifies the nature of memristor devices
that can be hypothesized on grounds of physical symmetries. We hypothesize a
missing mechanical perfect memristor, which must be crucially mass-involving
(MI) precisely like the 1971 implied EM memristor device needs magnetism. We
also construct MI memristive nano systems, which clarifies why perfect MI
memristors and EM memristors are still missing and likely impossible.
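As a toy illustration of the pinched loop and its high-frequency collapse (our parameter choices, not the sphere-in-oil device; we show the loop in the force-velocity plane via $F = dp/dt = M(x)v$, rather than the abstract's v versus p plane):

```python
import numpy as np

# Position-dependent memristance M(x) = dp/dx gives F = dp/dt = M(x) v.
# With fixed velocity amplitude V, the position excursion (V/w) shrinks as
# the frequency w grows, so M(x) stops varying over a cycle and the pinched
# F-v loop collapses to the line F = M0 v. Constants are placeholders.
def fv_loop(w, M0=1.0, a=0.5, V=1.0, n=2000):
    t = np.linspace(0.0, 2 * np.pi / w, n)
    x = (V / w) * np.sin(w * t)       # position excursion ~ 1/w
    v = V * np.cos(w * t)
    F = M0 * (1 + a * x) * v          # F = 0 whenever v = 0: pinched at origin
    return v, F

v_lo, F_lo = fv_loop(w=0.5)    # low frequency: wide-open pinched loop
v_hi, F_hi = fv_loop(w=50.0)   # high frequency: collapses toward F = M0 v
```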
|
The concept of orthogonality through the block factor (OTB), defined in
Bagchi (2010), is extended here to orthogonality through a set (say S) of other
factors. We discuss the impact of such an orthogonality on the precision of the
estimates as well as on the inference procedure. Concentrating on the case when
$S$ is of size two, we construct a series of plans in each of which every pair
of other factors is orthogonal through a given pair of factors.
Next we concentrate on plans through the block factors (POTB). We construct
POTBs for symmetrical experiments with two and three-level factors. The plans
for two factors are E-optimal, while those for three-level factors are
universally optimal.
Finally, we construct POTBs for $s^t(s+1)$ experiments, where $s \equiv 3
\pmod 4$ is a prime power. The plan is universally optimal.
|
We consider the confined Fr\"ohlich polaron and establish an asymptotic
series for the low-energy eigenvalues in negative powers of the coupling
constant. The coefficients of the series are derived through a two-fold
perturbation approach, involving expansions around the electron Pekar minimizer
and the excitations of the quantum field.
|
We present diameters and albedos computed for the near-Earth and Main Belt
asteroids observed by the Near-Earth Object Wide-field Infrared Survey Explorer
(NEOWISE) spacecraft during the sixth and seventh years of its Reactivation
mission. These diameters and albedos are calculated from fitting thermal models
to NEOWISE observations of $199$ NEOs and $5851$ MBAs detected during the sixth
year of the survey, and $175$ NEOs and $5861$ MBAs from the seventh year.
Comparisons of the near-Earth object diameters derived from Reactivation data
with those derived from the WISE cryogenic mission data show a $\sim30\%$
relative uncertainty. This uncertainty is larger than for the cryogenic-mission data because the Reactivation mission is limited to shorter-wavelength data, for which a beaming parameter must be assumed in the fits. We
also present an analysis of the orbital parameters of the Main Belt asteroids
that have been discovered by NEOWISE during Reactivation, finding that these
objects tend to be on orbits that result in their perihelia being far from the
ecliptic, and thus missed by other surveys. To date, the NEOWISE Reactivation
survey has provided thermal fits of $1415$ unique NEOs. Including the mission
phases before spacecraft hibernation increases the count of unique NEOs
characterized to $1845$ from WISE's launch to the present.
|
In 1980 and 1981, two pioneering papers laid the foundation for what became
known as nonlinear time-series analysis: the analysis of observed
data---typically univariate---via dynamical systems theory. Based on the
concept of state-space reconstruction, this set of methods allows us to compute
characteristic quantities such as Lyapunov exponents and fractal dimensions, to
predict the future course of the time series, and even to reconstruct the
equations of motion in some cases. In practice, however, there are a number of
issues that restrict the power of this approach: whether the signal accurately
and thoroughly samples the dynamics, for instance, and whether it contains
noise. Moreover, the numerical algorithms that we use to instantiate these
ideas are not perfect; they involve approximations, scale parameters, and
finite-precision arithmetic, among other things. Even so, nonlinear time-series
analysis has been used to great advantage on thousands of real and synthetic
data sets from a wide variety of systems ranging from roulette wheels to lasers
to the human heart. Even in cases where the data do not meet the mathematical
or algorithmic requirements to assure full topological conjugacy, the results
of nonlinear time-series analysis can be helpful in understanding,
characterizing, and predicting dynamical systems.
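A minimal sketch of the delay-coordinate (state-space) reconstruction these methods build on: a scalar series $s$ is embedded into vectors $[s(t), s(t-\tau), \dots, s(t-(m-1)\tau)]$. The toy signal, $\tau$, and $m$ below are illustrative; in practice they are chosen with, e.g., mutual-information and false-nearest-neighbour criteria.

```python
import numpy as np

# Delay embedding of a scalar time series into m-dimensional state vectors.
def delay_embed(s, m=3, tau=10):
    n = len(s) - (m - 1) * tau
    return np.column_stack([s[i * tau : i * tau + n] for i in range(m)])

t = np.linspace(0, 100, 10000)
s = np.sin(t) + 0.5 * np.sin(2.3 * t)      # stand-in for measured data
X = delay_embed(s, m=3, tau=25)            # reconstructed state vectors
print(X.shape)
```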
|
Replay is the reactivation of one or more neural patterns, which are similar
to the activation patterns experienced during past waking experiences. Replay
was first observed in biological neural networks during sleep, and it is now
thought to play a critical role in memory formation, retrieval, and
consolidation. Replay-like mechanisms have been incorporated into deep
artificial neural networks that learn over time to avoid catastrophic
forgetting of previous knowledge. Replay algorithms have been successfully used
in a wide range of deep learning methods within supervised, unsupervised, and
reinforcement learning paradigms. In this paper, we provide the first
comprehensive comparison between replay in the mammalian brain and replay in
artificial neural networks. We identify multiple aspects of biological replay
that are missing in deep learning systems and hypothesize how they could be
utilized to improve artificial neural networks.
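A minimal sketch of the replay mechanism used by the deep learning methods surveyed above: past experiences are stored and replayed in random mini-batches, interleaving old and new data to mitigate catastrophic forgetting. Capacity and batch size are illustrative defaults, not drawn from any specific system.

```python
import random
from collections import deque

# Generic experience-replay buffer.
class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # oldest items fall out when full

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
for step in range(100):
    buf.add(("state", "action", "reward", "next_state"))
batch = buf.sample()
```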
|
A consistent statistical description of kinetics and hydrodynamics of dusty
plasma is proposed based on the Zubarev nonequilibrium statistical operator
method. For the case of partial dynamics the nonequilibrium statistical
operator and the generalized transport equations for a consistent description
of kinetics of dust particles and hydrodynamics of electrons, ions and neutral
atoms are obtained. In the approximation of a weakly nonequilibrium process, the spectrum of collective excitations of dusty plasma is investigated in the hydrodynamic limit.
|
Evidential reasoning is now a leading topic in Artificial Intelligence.
Evidence is represented by a variety of evidential functions. Evidential
reasoning is carried out by certain kinds of fundamental operation on these
functions. This paper discusses two of the basic operations on evidential
functions, the discount operation and the well-known orthogonal sum operation.
We show that the discount operation is not commutative with the orthogonal sum
operation, and derive expressions for the two operations applied to the various evidential functions.
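A tiny numeric sketch of the non-commutativity result stated above, assuming the standard definitions of Dempster's orthogonal sum and the discounting operation; the frame, masses, and discount rate are arbitrary illustrative choices.

```python
from itertools import product

THETA = frozenset({"a", "b"})   # frame of discernment

def discount(m, alpha):
    # Discounting: scale all masses by (1 - alpha), move alpha to the frame.
    out = {A: (1 - alpha) * v for A, v in m.items()}
    out[THETA] = out.get(THETA, 0.0) + alpha
    return out

def dempster(m1, m2):
    # Orthogonal sum: combine intersecting focal elements, renormalize conflict.
    raw, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            raw[C] = raw.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {A: v / (1 - conflict) for A, v in raw.items()}

m1 = {frozenset({"a"}): 0.6, THETA: 0.4}
m2 = {frozenset({"b"}): 0.7, THETA: 0.3}
print(discount(dempster(m1, m2), 0.2))                   # discount after combining
print(dempster(discount(m1, 0.2), discount(m2, 0.2)))    # combine after discounting
```

Running this prints two different mass functions, confirming that the order of the two operations matters.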
|
We examine 4-yr almost continuous Kepler photometry of 115 B stars. We find
that the light curves of 39 percent of these stars are simply described by a
low-frequency sinusoid and its harmonic, usually with variable amplitudes,
which we interpret as rotational modulation. A large fraction (28 percent) of B
stars might be classified as ellipsoidal variables, but a statistical argument
suggests that these are probably rotational variables as well. About 8 percent
of the rotational variables have a peculiar periodogram feature which is common
among A stars. The physical cause of this is very likely related to rotation.
The presence of so many rotating variables indicates the presence of star
spots. This suggests that magnetic fields are indeed generated in radiative
stellar envelopes. We find five beta Cep variables, all of which have low
frequencies with relatively large amplitudes. The presence of these frequencies
is a puzzle. About half the stars with high frequencies are cooler than the red
edge of the beta Cep instability strip. These stars do not fit into the general
definition of beta Cep or SPB variables. We have therefore assumed they are
further examples of the anomalous pulsating stars which in the past have been
called "Maia" variables. We also examined 300 B stars observed in the K2
Campaign 0 field. We find 11 beta Cep/Maia candidates and many SPB variables.
For the stars where the effective temperature can be measured, we find at least
two further examples of Maia variables.
|
We present photometry and moderate-resolution spectroscopy of the luminous
red variable [HBS2006] 40671 originally detected as a possible nova in the
galaxy M33. We found that the star is a pulsating Mira-type variable with a
long period of 665 days and an amplitude exceeding 7 mag in the R band.
[HBS2006] 40671 is the first confirmed Mira-type star in M33. It is one of the
most luminous Mira-type variables. In the K band its mean absolute magnitude is
M_K = -9.5; its bolometric magnitude measured at maximum light is also extreme, M_bol = -7.4. The spectral type of the star at maximum is M2e -
M3e. The heliocentric radial velocity of the star is -475 km/s. There is a large negative excess (-210 km/s) in the radial velocity of [HBS2006] 40671 relative to the average radial velocity of stars in its neighborhood, pointing to an exceptionally peculiar motion of the star. All the extreme properties of this new Mira star make it important for further studies.
|
We investigate quantum gravity in the path integral formulation using the
Regge calculus. Restricting the quadratic link lengths of the originally triangular lattice, the path integral can be transformed into the partition function of a spin system with higher couplings on a Kagome lattice. Various measures acting as an external field are considered. Extensions to matter fields and higher dimensions are discussed.
|
The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the
performance of general artificial intelligence algorithms. The ARC's focus on
broad generalization and few-shot learning has made it difficult to solve using
pure machine learning. A more promising approach has been to perform program
synthesis within an appropriately designed Domain Specific Language (DSL).
However, these too have seen limited success. We propose Abstract Reasoning
with Graph Abstractions (ARGA), a new object-centric framework that first
represents images using graphs and then performs a search for a correct program
in a DSL that is based on the abstracted graph space. The complexity of this
combinatorial search is tamed through the use of constraint acquisition, state
hashing, and Tabu search. An extensive set of experiments demonstrates the
promise of ARGA in tackling some of the complicated object-centric tasks of the
ARC rather efficiently, producing programs that are correct and easy to
understand.
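For illustration, here is a generic Tabu-search skeleton of the kind used to tame such combinatorial searches; the neighbourhood, cost function, and tenure below are placeholders, not ARGA's actual implementation (which also relies on constraint acquisition and state hashing).

```python
# Generic Tabu search: greedy local moves, with a short memory of recently
# visited states to forbid immediate backtracking and cycling.
def tabu_search(start, neighbours, cost, iters=100, tenure=10):
    best = current = start
    tabu = []                                   # recently visited states
    for _ in range(iters):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)     # best admissible neighbour
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                         # expire old tabu entries
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: walk integers toward the minimum of (i - 7)^2.
print(tabu_search(0, lambda i: [i - 1, i + 1], lambda i: (i - 7) ** 2))
```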
|
The present paper uses the Galerkin Finite Element Method to numerically
study the triple diffusive boundary layer flow of homogenous nanofluid over
power-law stretching sheet with the effect of external magnetic field. The
fluid is composed of nanoparticles along with dissolved solutal particles in
the base fluid. The chief mechanisms responsible for the enhancement of convective transport in nanofluids (Brownian motion, diffusiophoresis, and thermophoresis) have been considered. The simulations performed in this study
are based on the boundary layer approach. Recently proposed heat flux and
nanoparticle mass flux boundary conditions have been imposed. Heat transfer,
solutal mass transfer and nanoparticle mass transfer are investigated for
different values of the controlling parameters, i.e., the Brownian motion parameter, thermophoresis parameter, magnetic influence parameter, and stretching parameter. Multiple regression analysis has been performed to verify the
relationship among transfer rate parameters and controlling parameters. The
present study finds application in insulation of wires, manufacture of tetra
packs, production of glass fibres, fabrication of various polymer and plastic
products, rubber sheets etc. where the quality merit of desired product depends
on the rate of stretching, external magnetic field and composition of materials
used.
|
Let (M,g) be an odd-dimensional incomplete compact Riemannian singular space
with a simple edge singularity. We study the analytic torsion on M, and in
particular consider how it depends on the metric g. If g is an admissible edge
metric, we prove that the torsion zeta function is holomorphic near s = 0,
hence the torsion is well-defined, but possibly depends on g. In general
dimensions, we prove that the analytic torsion depends only on the asymptotic
structure of g near the singular stratum of M; when the dimension of the edge
is odd, we prove that the analytic torsion is independent of the choice of
admissible edge metric. The main tool is the construction, via the methodology
of geometric microlocal analysis, of the heat kernel for the Friedrichs
extension of the Hodge Laplacian in all degrees. In this way we obtain detailed
asymptotics of this heat kernel and its trace.
|
The Rayleigh product channel model is useful in capturing the performance
degradation due to rank deficiency of MIMO channels. In this paper, such a
performance degradation is investigated via the channel outage probability
assuming slowly varying channel with delay-constrained decoding. Using
techniques of free probability theory, the asymptotic variance of channel
capacity is derived when the dimensions of the channel matrices approach
infinity. In this asymptotic regime, the channel capacity is rigorously proven
to be Gaussian distributed. Using the obtained results, a fundamental tradeoff
between multiplexing gain and diversity gain of Rayleigh product channels can
be characterized by closed-form expression at any finite signal-to-noise ratio.
Numerical results are provided to compare the relative outage performance
between Rayleigh product channels and conventional Rayleigh MIMO channels.
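Schematically, and in our own notation rather than necessarily the paper's, the asymptotic Gaussianity yields a closed-form outage expression:

```latex
% With asymptotic mean mu_C and the derived asymptotic variance sigma_C^2 of
% the capacity C, Gaussianity gives, for target rate R,
P_{\mathrm{out}}(R) \;=\; \Pr\left( C < R \right)
\;\approx\; Q\!\left( \frac{\mu_C - R}{\sigma_C} \right),
% where Q is the Gaussian tail function; evaluating this at finite SNR
% traces out the diversity-multiplexing tradeoff discussed above.
```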
|
This manuscript investigates unconditional and conditional-on-stopping
maximum likelihood estimators (MLEs), information measures and information loss
associated with conditioning in group sequential designs (GSDs). The
possibility of early stopping brings truncation to the distributional form of
MLEs; sequentially, GSD decisions eliminate some events from the sample space.
Multiple testing induces mixtures on the adapted sample space. Distributions of
MLEs are mixtures of truncated distributions. Test statistics that are
asymptotically normal without GSD, have asymptotic distributions, under GSD,
that are non-normal mixtures of truncated normal distributions under local
alternatives; under fixed alternatives, asymptotic distributions of test
statistics are degenerate. Estimation of various statistical quantities such as
information, information fractions, and confidence intervals should account for
the effect of planned adaptations. Calculation of adapted information fractions
requires substantial computational effort. Therefore, a new GSD is proposed in
which stage-specific sample sizes are fully determined by desired operational
characteristics, and calculation of information fractions is not needed.
|
The f-electron spectral function of the Falicov-Kimball model is calculated
within the dynamical mean-field theory using the numerical renormalization
group method as the impurity solver. Both the Bethe lattice and the hypercubic
lattice are considered at half filling. For small U we obtain a single-peaked
f-electron spectral function, which --for zero temperature-- exhibits an
algebraic (X-ray) singularity ($|\omega|^{-\alpha}$) for $\omega \to 0$. The
characteristic exponent $\alpha$ depends on the Coulomb (Hubbard) correlation
U. This X-ray singularity cannot be observed when using alternative
(Keldysh-based) many-body approaches. With increasing U, $\alpha$ decreases and
vanishes for sufficiently large U when the f-electron spectral function
develops a gap and a two-peak structure (metal-insulator transition).
|
The Diep and Johnson (DJ) H$_2$-H$_2$ potential energy surface (PES) obtained
from the first principles [P. Diep, K. Johnson, J. Chem. Phys. 113, 3480
(2000); 114, 222 (2000)], has been adjusted through appropriate rotation of the
three-dimensional coordinate system and applied to low-temperature ($T<300$ K)
HD+$o$-/$p$-H$_2$ collisions of astrophysical interest. A non-reactive quantum
mechanical close-coupling method is used to carry out the computation for the
total rotational state-to-state cross sections $\sigma_{j_1j_2\rightarrow
j'_1j'_2}(\epsilon)$ and corresponding thermal rate coefficients
$k_{j_1j_2\rightarrow j'_1j'_2}(T)$. A rather satisfactory agreement has been
obtained between our results computed with the modified DJ PES and with the
newer H$_4$ PES [A.I. Boothroyd, P.G. Martin, W.J. Keogh, M.J. Peterson, J.
Chem. Phys. 116, 666 (2002)], which is also applied in this work. A comparative
study with previous results is presented and discussed. Significant differences
have been obtained for a few specific rotational transitions in the H$_2$/HD
molecules between our results and previous calculations. The low temperature
data for $k_{j_1j_2\rightarrow j'_1j'_2}(T)$ calculated in this work can be
used in a future application such as a new computation of the HD cooling
function of primordial gas, which is important in the astrophysics of the early
Universe.
|
Aims. We present and analyse late-time observations of the type-Ib supernova
with possible pre-supernova progenitor detection, iPTF13bvn, taken at $\sim$300
days after the explosion, and discuss these in the context of constraints on
the supernova's progenitor. Previous studies have proposed two possible natures
for the progenitor of the supernova, i.e. a massive Wolf-Rayet star or a
lower-mass star in a close binary system. Methods. Our observations show that the
supernova has entered the nebular phase, with the spectrum dominated by
Mg~I]$\lambda\lambda$4571, [O~I]$\lambda\lambda$6300, 6364, and
[Ca~II]$\lambda\lambda$7291, 7324 emission lines. We measured the emission line
fluxes to estimate the core oxygen mass and compare the [O~I]/[Ca~II] line
ratio with other supernovae. Results. The core oxygen mass of the supernova
progenitor was estimated to be $\lesssim$0.7 M$_\odot$, which implies an initial
progenitor mass not exceeding $\sim$15 -- 17 M$_\odot$. Since the derived mass
is too small for a single star to become a Wolf-Rayet star, this result lends
more support to the binary nature of the progenitor star of iPTF13bvn. The
comparison of [O~I]/[Ca~II] line ratio with other supernovae also shows that
iPTF13bvn appears to be in close association with the lower-mass progenitors of
stripped-envelope and type-II supernovae.
|
We propose a MIMO channel estimation method for millimeter-wave (mmWave) and
terahertz (THz) systems based on frequency-selective atomic norm minimization
(FS-ANM). For the strong line-of-sight property of the channel in such
high-frequency bands, prior knowledge on the ranges of angles of
departure/arrival (AoD/AoA) can be obtained as the prior knowledge, which can
be exploited by the proposed channel estimator to improve the estimation
accuracy. Simulation results show that the proposed method can achieve
considerable performance gain when compared with the existing approaches
without incorporating the the strong line-of-sight property.
|
Recently a paper entitled ``The Igex 76Ge Neutrinoless Double-Beta Decay
Experiment: Prospects for Next Generation Experiments'' has been published in
Phys. Rev. D 65 (2002) 092007. In view of the recently reported evidence for
neutrinoless double beta decay (KK-Evid01,KK02-Found,KK02-PN) it is
particularly unfortunate that the IGEX paper is rather incomplete in its
presentation. We would like to point out in this Comment that and why it would
be highly desirable to make more details about the experimental conditions and
the analysis of IGEX available. We list some of the main points, which require
further explanation. We also point to an arithmetic mistake in the analysis of the IGEX data, the consequence of which is that the half-life limits given in that paper are too high.
|