We compare the contributions from quark and from gluon exchange to the
exclusive process gamma* p -> rho0 p. We present evidence that the gluon
contribution is substantial for values of the Bjorken variable xB around 0.1.
|
Self-supervised learning (SSL) models have recently demonstrated remarkable
performance across various tasks, including image segmentation. This study
delves into the emergent characteristics of the Self-Distillation with No
Labels (DINO) algorithm and its application to Synthetic Aperture Radar (SAR)
imagery. We pre-train a vision transformer (ViT)-based DINO model using
unlabeled SAR data, and later fine-tune the model to predict high-resolution
land cover maps. We rigorously evaluate the utility of attention maps generated
by the ViT backbone and compare them with the model's token embedding space. We
observe a small improvement in model performance with pre-training compared to
training from scratch and discuss the limitations and opportunities of SSL for
remote sensing and land cover segmentation. Beyond small performance increases,
we show that ViT attention maps hold great intrinsic value for remote sensing,
and could provide useful inputs to other algorithms. With this, our work lays
the groundwork for bigger and better SSL models for Earth Observation.
|
In the field of natural language processing, the rapid development of large
language models (LLMs) has attracted increasing attention. LLMs have shown a
high level of creativity in various tasks, but the methods for assessing such
creativity are inadequate. The assessment of LLM creativity needs to consider
differences from humans, requiring multi-dimensional measurement while
balancing accuracy and efficiency. This paper aims to establish an efficient
framework for assessing the level of creativity in LLMs. By adapting the
modified Torrance Tests of Creative Thinking, the research evaluates the
creative performance of various LLMs across 7 tasks, emphasizing 4 criteria
including Fluency, Flexibility, Originality, and Elaboration. In this context,
we develop a comprehensive dataset of 700 questions for testing and an
LLM-based evaluation method. In addition, this study presents a novel analysis
of LLMs' responses to diverse prompts and role-play situations. We found that
the creativity of LLMs primarily falls short in originality, while excelling in
elaboration. Moreover, the prompts and role-play settings of the model
significantly influence creativity, and the experimental results indicate that
collaboration among multiple LLMs can enhance originality. Notably, our
findings reveal a consensus between human evaluations and LLMs regarding the
personality traits that influence creativity. These findings underscore the
significant impact of LLM design on creativity, bridge artificial intelligence
and human creativity, and offer insights into LLMs' creativity and potential
applications.
|
A model of a collapsing quasi-spherical radiating star is proposed, with
matter content a shear-free isotropic fluid undergoing radial heat flow with
outgoing radiation. To describe the radiation of the system, we have considered
both plane symmetric and spherical Vaidya solutions. Physical conditions and
thermodynamical relations are studied using local conservation of momentum and
surface red-shift. We find that nonzero pressure on the boundary is not
necessary for radiation to exist there.
|
Effect size indices are useful parameters that quantify the strength of
association and are unaffected by sample size. There are many available effect
size parameters and estimators, but it is difficult to compare effect sizes
across studies as most are defined for a specific type of population parameter.
We recently introduced a new Robust Effect Size Index (RESI) and confidence
interval, which is advantageous because it is not model-specific. Here we
present the RESI R package, which makes it easy to report the RESI and its
confidence interval for many different model classes, with a consistent
interpretation across parameters and model types. The package produces
coefficient tables, ANOVA tables, and overall Wald tests for model inputs, appending
the RESI estimate and confidence interval to each. The package also includes
functions for visualization and conversions to and from other effect size
measures. For illustration, we analyze and interpret three different model
types.
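As a rough illustration of the kind of quantity the package reports, here is a sketch under the assumption that the point estimate has the common chi-squared-Wald form; the package's actual estimators and confidence intervals are model-class specific, so treat the formula below as illustrative:

```python
import math

def resi(chisq, df, n):
    """Sketch of a robust effect size index from a Wald test (assumed form):
        S_hat = sqrt(max(0, (chisq - df) / n))
    where chisq is a chi-squared Wald statistic testing df parameters and
    n is the sample size. Negative values are truncated to zero."""
    return math.sqrt(max(0.0, (chisq - df) / n))
```

For example, a chi-squared statistic of 10 on 1 degree of freedom with n = 100 would give an index of about 0.3, while a statistic below its degrees of freedom gives 0.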
|
We systematically evaluate some new contributions of the QCD scalar mesons,
including radiative decay production, which have not previously been considered
in detail in evaluations of the hadronic contributions to the muon anomaly.
The sum of the scalar contributions to be added to the existing Standard Model
prediction a_mu^{SM} is estimated, in units of 10^{-10}, to be a^{S}_mu = 1.0(0.6)
[TH based] and 13(11) [PDG based], where the errors are dominated by those
from the experimental widths of these scalar mesons. PDG based results suggest
that the value of a_mu^{SM} and its errors might have been underestimated in
previous works. The inclusion of these new effects leads to a perfect agreement
(< 1.1\sigma) between the measured value a^{exp}_mu and the a_mu^{SM} from tau-decay
data, and implies a (1.5-3.3) sigma discrepancy between a^{exp}_mu and the
a_mu^{SM} from e^+e^- -> hadrons data. More refined, unbiased estimates of a_mu^{SM} require
improved measurements of the scalar meson masses and widths. The impact of our
results on a_mu^{SM} is summarized in the conclusions.
|
Vector rogue wave (RW) formation and their dynamics in Rabi coupled two- and
three-species Bose-Einstein condensates with spatially varying dispersion and
nonlinearity are studied. For this purpose, we obtain the RW solution of the
two- and three-component inhomogeneous Gross-Pitaevskii (GP) systems with Rabi
coupling by introducing suitable rotational and similarity transformations.
Then, we investigate the effect of inhomogeneity (spatially varying dispersion,
trapping potential and nonlinearity) on vector RWs for two different forms of
potential strengths, namely periodic (optical lattice) with specific reference
to hyperbolic type potentials and parabolic cylinder potentials. First, we show
an interesting oscillating boomeronic behaviour of dark-bright solitons due to
Rabi coupling in the two-component condensate with constant nonlinearities. Then, in
the presence of inhomogeneity but in the absence of Rabi coupling, we
demonstrate the creation of new daughter RWs co-existing with the dark (bright)
soliton part in the first (second) component of the two-component GP system. Further,
hyperbolic (sech-type) modulation of a parameter along with the Rabi effect leads to
the formation of dromion (two-dimensional localized structure) trains even in
the (1+1)-dimensional two-component GP system, which is a striking feature of
Rabi coupling with spatial modulation. Next, our study of the three-component
condensate reveals that the three RWs can be converted into a broad-based
zero-background RW appearing on top of a bright soliton by introducing
spatial modulation only. Further, by including Rabi coupling we observe beating
behaviour of solitons with internal oscillations mostly at the wings. Also, we
show that by employing parabolic cylinder modulation with model parameter $n$,
one can produce $(n+1)$ RWs.
|
We analyze the persistence of curvature singularities when probed with
quantum theory. First, quantum test particles obeying the Klein-Gordon and
Chandrasekhar-Dirac equations are used to probe the classical timelike naked
singularity. We show that the classical singularity is felt even by our quantum
probes. Next, we use loop quantization to resolve the singularity hidden beneath
the horizon. The singularity is resolved in this case.
|
We have performed a first principles study of structural, mechanical,
electronic, and optical properties of orthorhombic Sb2S3 and Sb2Se3 compounds
using the density functional theory within the local density approximation. The
lattice parameters, bulk modulus, and its pressure derivatives of these
compounds have been obtained. The second-order elastic constants have been
calculated, and the other related quantities such as the Young's modulus, shear
modulus, Poisson's ratio, anisotropy factor, sound velocities, Debye
temperature, and hardness have also been estimated in the present work. The
linear photon-energy dependent dielectric functions and some optical properties
such as the energy-loss function, the effective number of valence electrons and
the effective optical dielectric constant are calculated. Our structural
estimation and some other results are in agreement with the available
experimental and theoretical data.
|
We propose two methods for generating non-Gaussian maps with fixed power
spectrum and bispectrum. The first makes use of a recently proposed rigorous,
non-perturbative, Bayesian framework for generating non-Gaussian distributions.
The second uses a simple superposition of Gaussian distributions. The former is
best suited for generating mildly non-Gaussian maps, and we discuss in detail
the limitations of this method. The latter is better suited for the opposite
situation, i.e. generating strongly non-Gaussian maps. The ensembles produced
are isotropic and the power spectrum can be jointly fixed; however, we cannot
set to zero all other higher order cumulants (an unavoidable mathematical
obstruction). We briefly quantify the leakage into higher order moments present
in our method. Finally, we present an implementation of our code within the
HEALPix package.
|
We study a squeezed vacuum field generated in hot Rb vapor via the
polarization self-rotation effect. Our previous experiments showed that the
amount of observed squeezing may be limited by the contamination of the
squeezed vacuum output with higher-order spatial modes, also generated inside
the cell. Here, we demonstrate that the squeezing can be improved by making the
light interact several times with a less dense atomic ensemble. By optimizing
some parameters, we can achieve up to -2.6 dB of squeezing in
the multi-pass case, a 0.6 dB improvement over the single-pass
experimental configuration. Our results show that other than the optical depth
of the medium, the spatial mode structure and cell configuration also affect
the squeezing level.
|
Let $M$ be a strongly pseudoconvex complex $G$-manifold with compact quotient
$M/G$. We provide a simple condition on forms $\alpha$ sufficient for the
regular solvability of the equation $\square u=\alpha$ and other problems
related to the $\bar\partial$-Neumann problem on $M$.
|
We study arrival directions of 1.4x10^6 extensive air showers (EAS)
registered with the EAS--1000 Prototype Array in the energy range 0.1--10 PeV.
By applying an iterative algorithm that provides uniform distribution of the
data with respect to sidereal time and azimuthal angles, we find a number of
zones with excessive flux of cosmic rays (CRs) at >=3 sigma level. We compare
locations of the zones with positions of galactic supernova remnants (SNRs),
pulsars, open star clusters, and regions of ionized hydrogen and find
remarkable coincidences, which may support the hypothesis that
certain objects of these types, including the SNRs Cassiopeia A, the Crab
Nebula, and the Monogem Ring, among others, provide a noticeable contribution to
the flux of CRs in the PeV range of energies. In addition, we find certain
signs of a contribution from the M33 galaxy and a number of comparatively
nearby groups of active galactic nuclei and interacting galaxies, in particular
those in the Virgo cluster of galaxies. The results also provide some hints for
the search for possible sources of ultra-high energy (UHE) cosmic rays and support
an earlier idea that a part of both UHE and PeV CRs may originate from the same
astrophysical objects.
|
It is shown that charged hadron multiplicity distributions in restricted
(pseudo)rapidity intervals in e+e- annihilation and in e+p scattering at HERA
are quite well described by the modified negative binomial distribution and its
simple extension.
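For reference, the plain negative binomial distribution underlying such fits can be sketched as follows; the modified NBD and its extension used here add further parameters that this summary does not specify, so this is only the baseline form:

```python
import math

def nbd(n, nbar, k):
    """Plain negative binomial multiplicity distribution (baseline, not the
    modified NBD):
        P(n) = Gamma(n+k) / (Gamma(k) n!) * (nbar/k)^n / (1 + nbar/k)^(n+k),
    with mean nbar and shape parameter k. Computed in log space for stability."""
    log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
             + n * math.log(nbar / k) - (n + k) * math.log1p(nbar / k))
    return math.exp(log_p)
```

The distribution is normalized and has mean nbar, which can be verified numerically by summing over a sufficiently long range of multiplicities.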
|
We present the complete analytical result for the two-loop logarithmically
enhanced contributions to the high energy asymptotic behavior of the vector
form factor and the four-fermion cross section in a spontaneously broken SU(2)
gauge model. On the basis of this result we derive the dominant two-loop
electroweak corrections to the neutral current four-fermion processes at high
energies.
|
While the star formation rates and morphologies of galaxies have long been
known to correlate with their local environment, the process by which these
correlations are generated is not well understood. Galaxy groups are thought to
play an important role in shaping the physical properties of galaxies before
entering massive clusters at low redshift, and transformations of satellite
galaxies likely dominate the buildup of local environmental correlations. To
illuminate the physical processes that shape galaxy evolution in dense
environments, we study a sample of 116 X-ray selected galaxy groups at z=0.2-1
with halo masses of 10^13-10^14 M_sun and centroids determined with weak
lensing. We analyze morphologies based on HST imaging and colors determined
from 31 photometric bands for a stellar mass-limited population of 923
satellite galaxies and a comparison sample of 16644 field galaxies. Controlling
for variations in stellar mass across environments, we find significant trends
in the colors and morphologies of satellite galaxies with group-centric
distance and across cosmic time. Specifically, at low stellar mass
(log(M_stellar/M_sun) = 9.8-10.3), the fraction of disk-dominated star-forming
galaxies declines from >50% among field galaxies to <20% among satellites near
the centers of groups. This decline is accompanied by a rise in quenched
galaxies with intermediate bulge+disk morphologies, and only a weak increase in
red bulge-dominated systems. These results show that both color and morphology
are influenced by a galaxy's location within a group halo. We suggest that
strangulation and disk fading alone are insufficient to explain the observed
morphological dependence on environment, and that galaxy mergers or close tidal
encounters must play a role in building up the population of quenched galaxies
with bulges seen in dense environments at low redshift.
|
In this paper, we propose a Zero-Touch, deep reinforcement learning
(DRL)-based Proactive Failure Recovery framework called ZT-PFR for stateful
network function virtualization (NFV)-enabled networks. To this end, we
formulate a resource-efficient optimization problem minimizing the network cost
function including resource cost and wrong decision penalty. As a solution, we
propose state-of-the-art DRL-based methods such as soft-actor-critic (SAC) and
proximal-policy-optimization (PPO). In addition, to train and test our DRL
agents, we propose a novel impending-failure model. Moreover, to keep network
status information at an acceptable freshness level for appropriate
decision-making, we apply the concept of age of information to strike a balance
between event-based and schedule-based monitoring. Several key system and DRL
algorithm design insights for ZT-PFR are drawn from our analysis and simulation
results. For example, we use a hybrid neural network, incorporating long
short-term memory (LSTM) layers into the DRL agent structure, to capture
the time dependency of impending failures.
|
In this paper, we investigate the resource allocation problem in the context of
multiple virtual reality (VR) video flows sharing a certain link, considering
the specific deadline of each video frame and the impact of different frames on
video quality. First, we establish a queuing delay bound estimation model,
enabling the link node to proactively discard frames that will exceed the deadline.
Second, we model the importance of different frames based on the viewport
features of VR video and the encoding method. Accordingly, the frames of each flow are
sorted. Then we formulate a problem of minimizing long-term quality loss caused
by frame dropping subject to per-flow quality guarantee and bandwidth
constraints. Since the frequency of frame dropping and network fluctuation are
not on the same time scale, we propose a two-timescale resource allocation
scheme. On the long timescale, a queuing theory based resource allocation
method is proposed to satisfy the quality requirement, using the frame queuing
delay bound to obtain the minimum resource demand for each flow. On the short
timescale, in order to quickly fine-tune allocation results to cope with the
unstable network state, we propose a low-complexity heuristic algorithm,
scheduling available resources based on the importance of frames in each flow.
Extensive experimental results demonstrate that the proposed scheme can
efficiently improve the quality and fairness of VR video flows under various
network conditions.
|
This paper studies permutation statistics that count occurrences of patterns.
Their expected values on a product of $t$ permutations chosen randomly from
$\Gamma \subseteq S_{n}$, where $\Gamma$ is a union of conjugacy classes, are
considered. Hultman has described a method for computing such an expected
value, denoted $\mathbb{E}_{\Gamma}(s,t)$, of a statistic $s$, when $\Gamma$ is
a union of conjugacy classes of $S_{n}$. The only prerequisite is that the mean
of $s$ over the conjugacy classes is written as a linear combination of
irreducible characters of $S_{n}$. Therefore, the main focus of this article is
to express the means of pattern-counting statistics as such linear
combinations. A procedure for calculating such expressions for statistics
counting occurrences of classical and vincular patterns of length 3 is
developed, and is then used to calculate all these expressions. The results can
be used to compute $\mathbb{E}_{\Gamma}(s,t)$ for all the above statistics, and
for all functions on $S_{n}$ that are linear combinations of them.
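The character-free baseline behind these computations can be checked by brute force: by symmetry, any fixed triple of positions realizes a given length-3 classical pattern with probability 1/3!, so the mean number of occurrences over a uniform permutation in $S_n$ is $\binom{n}{3}/6$. A small sketch (an illustration of the statistic, not Hultman's character method):

```python
from fractions import Fraction
from itertools import combinations, permutations

def count_123(p):
    """Occurrences of the classical pattern 123 in p:
    index triples i < j < k with p[i] < p[j] < p[k]."""
    return sum(1 for i, j, k in combinations(range(len(p)), 3)
               if p[i] < p[j] < p[k])

def mean_over_sn(n):
    """Exact mean of the 123-counting statistic over all of S_n."""
    perms = list(permutations(range(n)))
    return Fraction(sum(count_123(p) for p in perms), len(perms))
```

For instance, the mean over $S_4$ is $\binom{4}{3}/6 = 2/3$ and over $S_5$ is $\binom{5}{3}/6 = 5/3$, matching the symmetry argument above.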
|
Methodological development for the inference of gene regulatory networks from
transcriptomic data is an active and important research area. Several
approaches have been proposed to infer relationships among genes from
observational steady-state expression data alone, mainly based on the use of
graphical Gaussian models. However, these methods rely on the estimation of
partial correlations and are only able to provide undirected graphs that cannot
highlight causal relationships among genes. A major upcoming challenge is to
jointly analyze observational transcriptomic data and intervention data
obtained by performing knock-out or knock-down experiments in order to uncover
causal gene regulatory relationships. To this end, in this technical note we
present an explicit formula for the likelihood function for any complex
intervention design in the context of Gaussian Bayesian networks, as well as
its analytical maximization. This allows a direct calculation of the causal
effects for known graph structure. We also show how to obtain the Fisher
information in this context, which will be extremely useful for the choice of
optimal intervention designs in the future.
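A minimal sketch of the truncated factorization on which such an intervention likelihood builds (an illustration, not the note's general formula): in a linear Gaussian Bayesian network, the likelihood factor of every intervened node is dropped, while the remaining factors are evaluated as usual with the intervened values plugged in.

```python
import math

def log_normal(x, mean, var):
    """Log density of N(mean, var) at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def loglik(parents, coef, var, sample, do=frozenset()):
    """Log-likelihood of one sample from a linear Gaussian Bayesian network.
    parents[j]: parent indices of node j; coef[j]: matching edge weights;
    var[j]: noise variance of node j. Under do(X_j = c), the factor of each
    intervened node j is dropped (truncated factorization)."""
    total = 0.0
    for j in range(len(sample)):
        if j in do:
            continue  # intervened node: its mechanism is replaced, factor drops
        mean = sum(c * sample[p] for c, p in zip(coef[j], parents[j]))
        total += log_normal(sample[j], mean, var[j])
    return total
```

On a two-node network X1 -> X2, the likelihood under do(X1) equals the observational likelihood minus the X1 factor, which is the defining property of the truncation.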
|
A special subclass of shear-free null congruences (SFC) is studied, with
tangent vector field being a repeated principal null direction of the Weyl
tensor. We demonstrate that this field is parallel with respect to an effective
affine connection which contains the Weyl nonmetricity and the skew symmetric
torsion. On the other hand, a Maxwell-like field can be directly associated
with any special SFC, and the electric charge for bounded singularities of this
field turns out to be ``self-quantized''. Two invariant differential operators are
introduced which can be thought of as spinor analogues of the Beltrami
operators and both nullify the principal spinor of any special SFC.
|
We study the Wasserstein metric to measure distances between molecules
represented by the atom index dependent adjacency "Coulomb" matrix, used in
kernel ridge regression based supervised learning. Resulting quantum machine
learning models exhibit improved training efficiency and result in smoother
predictions of molecular distortions. We first demonstrate smoothness for the
continuous extraction of an atom from some organic molecule. Learning curves,
quantifying the decay of the atomization energy's prediction error as a
function of training set size, have been obtained for tens of thousands of
organic molecules drawn from the QM9 data set. In comparison to conventionally
used metrics ($L_1$ and $L_2$ norm), our numerical results indicate systematic
improvement in terms of learning curve offset for random as well as sorted (by
row norms) atom indexing in Coulomb matrices. Our findings suggest that this
metric corresponds to a favorable similarity measure which introduces
index-invariance in any kernel based model relying on adjacency matrix
representations.
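One simple index-invariant comparison in this spirit can be sketched as follows; this is an illustration using the standard Coulomb-matrix definition and a 1D Wasserstein distance between row-norm multisets, not the paper's exact construction on full adjacency matrices:

```python
import math

def coulomb_matrix(Z, R):
    """Standard Coulomb matrix: M_ii = 0.5 * Z_i**2.4,
    M_ij = Z_i * Z_j / |R_i - R_j| for i != j."""
    n = len(Z)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i][j] = 0.5 * Z[i] ** 2.4
            else:
                M[i][j] = Z[i] * Z[j] / math.dist(R[i], R[j])
    return M

def w1_sorted(a, b):
    """1D Wasserstein-1 distance between two equal-size point sets:
    mean absolute difference of the sorted values."""
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def row_norm_distance(M1, M2):
    """Index-invariant comparison: W1 between the multisets of row norms,
    so reindexing the atoms leaves the distance unchanged."""
    r1 = [math.sqrt(sum(x * x for x in row)) for row in M1]
    r2 = [math.sqrt(sum(x * x for x in row)) for row in M2]
    return w1_sorted(r1, r2)
```

Reindexing the atoms of a molecule leaves this distance at zero, while a genuine geometric distortion gives a nonzero value, which is the index-invariance property discussed above.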
|
The noncentrosymmetric superconductor AuBe has been investigated using
magnetization, resistivity, specific heat, and muon-spin relaxation/rotation
measurements. AuBe crystallizes in the cubic FeSi-type B20 structure, with a
superconducting transition temperature of $T_{c}$ = 3.2 $\pm$ 0.1 K.
The low-temperature specific heat data, $C_{el}(T)$, indicate weakly coupled,
fully gapped BCS superconductivity with an isotropic energy gap
2$\Delta(0)/k_{B}T_{c}$ = 3.76, which is close to the BCS value of 3.52.
Interestingly, type-I superconductivity is inferred from the $\mu$SR
measurements, which is in contrast with the earlier reports of type-II
superconductivity in AuBe. The Ginzburg-Landau parameter is $\kappa_{GL}$ = 0.4
$<$ 1/$\sqrt{2}$. The transverse-field $\mu$SR data, transformed into maximum
entropy spectra depicting the internal magnetic field probability distribution
P(H), also confirm the absence of the mixed state in AuBe. The thermodynamic
critical field, $H_{c}$, is calculated to be around 259 Oe. The zero-field $\mu$SR
results indicate that time-reversal symmetry is preserved, supporting
spin-singlet pairing in the superconducting ground state.
|
We describe a method for solving deterministic and stochastic Walras
equilibrium models based on associating with the given problem a bifunction
whose maxinf-points turn out to be equilibrium points. The numerical procedure
relies on an augmentation of this bifunction. Convergence of the proposed
procedure is proved by relying on the relevant lopsided convergence. In the
dynamic versions of our models, deterministic and stochastic, we are mostly
concerned with models that equip the agents with a mechanism to transfer goods
from one time period to the next, possibly simply savings, but that also allow
for the transformation of goods via production.
|
We investigate the acoustic properties of meta-materials that are inspired by
sound-absorbing structures. We show that it is possible to construct
meta-materials with frequency-dependent effective properties, with large and/or
negative permittivities. Mathematically, we investigate solutions
$u^\varepsilon: \Omega_\varepsilon \rightarrow \mathbb{R}$ to a Helmholtz
equation in the limit $\varepsilon\rightarrow 0$ with the help of two-scale
convergence. The domain $\Omega_\varepsilon$ is obtained by removing from an
open set $\Omega\subset \mathbb{R}^n$ in a periodic fashion a large number
(order $\varepsilon^{-n}$) of small resonators (order $\varepsilon$). The
special properties of the meta-material are obtained through sub-scale
structures in the perforations.
|
Electronic flat band systems are a fertile platform to host
correlation-induced quantum phenomena such as unconventional superconductivity,
magnetism and topological orders. While flat bands have been established in
geometrically frustrated structures, such as the kagome lattice, flat
band-induced correlation effects especially in those multi-orbital bulk systems
are rarely seen. Here we report negative magnetoresistance and signature of
ferromagnetic fluctuations in a prototypical kagome metal CoSn, which features
a flat band in proximity to the Fermi level. We find that the magnetoresistance
is dictated by electronic correlations via Fermi level tuning. Combining with
first principles and model calculations, we establish flat band-induced
correlation effects in a multi-orbital electronic system, which opens new
routes to realize unconventional superconducting and topological states in
geometrically frustrated metals.
|
Engineering design problems often involve large state and action spaces along
with highly sparse rewards. Since an exhaustive search of those spaces is not
feasible, humans utilize relevant domain knowledge to condense the search
space. Previously, deep learning agents (DLAgents) were introduced to use
visual imitation learning to model design domain knowledge. This note builds on
DLAgents and integrates them with one-step lookahead search to develop
goal-directed agents capable of enhancing learned strategies for sequentially
generating designs. Goal-directed DLAgents can employ human strategies learned
from data along with optimizing an objective function. The visual imitation
network from DLAgents is composed of a convolutional encoder-decoder network,
acting as a rough planning step that is agnostic to feedback. Meanwhile, the
lookahead search identifies the fine-tuned design action guided by an
objective. These design agents are trained on an unconstrained truss design
problem that is modeled as a sequential, action-based configuration design
problem. The agents are then evaluated on two versions of the problem: the
original version used for training and an unseen constrained version with an
obstructed construction space. The goal-directed agents outperform the human
designers used to train the network as well as the previous objective-agnostic
versions of the agent in both scenarios. This illustrates a design agent
framework that can efficiently use feedback to not only enhance learned design
strategies but also adapt to unseen design problems.
|
The H\'enon--Heiles system in the general form is studied. In a nonintegrable
case, new solutions have been found as formal Laurent series depending on three
parameters. One of the parameters determines the location of the singular point,
while the other two determine the coefficients of the Laurent series. For some
values of these two parameters, the obtained Laurent series coincide with the
Laurent series of the known exact solutions.
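For reference, the classical H\'enon--Heiles Hamiltonian reads as follows; the general form studied here is assumed to replace the frequencies and the cubic coefficient by free parameters, so this is only the standard special case:

```latex
H = \frac{1}{2}\left(p_x^2 + p_y^2\right)
  + \frac{1}{2}\left(x^2 + y^2\right)
  + x^2 y - \frac{1}{3}\, y^3 .
```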
|
We propose a novel compressed sensing technique to accelerate the magnetic
resonance imaging (MRI) acquisition process. The method, coined spread spectrum
MRI or simply s2MRI, consists of pre-modulating the signal of interest by a
linear chirp before random k-space under-sampling, and then reconstructing the
signal with non-linear algorithms that promote sparsity. The effectiveness of
the procedure is theoretically underpinned by the optimization of the coherence
between the sparsity and sensing bases. The proposed technique is thoroughly
studied by means of numerical simulations, as well as phantom and in vivo
experiments on a 7T scanner. Our results suggest that s2MRI performs better
than state-of-the-art variable-density k-space under-sampling approaches.
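The measurement model described above can be sketched as a linear operator: chirp pre-modulation, a unitary Fourier transform, and random k-space masking. The chirp rate and sampling ratio below are illustrative assumptions, and a standard sanity check on any such operator pair is the adjoint identity <Ax, y> = <x, A^H y>.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
n = np.arange(N)
# Linear chirp phase; the rate 0.5 is an assumed, illustrative value.
chirp = np.exp(1j * np.pi * 0.5 * n ** 2 / N)
# Random k-space under-sampling pattern (about 30% of samples kept).
mask = rng.random(N) < 0.3

def A(x):
    """Forward model: chirp pre-modulation, unitary FFT, k-space masking."""
    return mask * np.fft.fft(chirp * x, norm="ortho")

def AH(y):
    """Adjoint: masking, unitary inverse FFT, conjugate chirp demodulation."""
    return np.conj(chirp) * np.fft.ifft(mask * y, norm="ortho")
```

A sparsity-promoting reconstruction would then minimize a data-fidelity term built from A together with an l1 penalty; that solver is omitted here.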
|
Ultraprecise space photometry enables us to reveal light variability even in
stars that were previously deemed constant. A large group of such stars show
variations that may be rotationally modulated. This type of light variability
is of special interest because it provides precise estimates of rotational
rates. We aim to understand the origin of the light variability of K2 targets
that show signatures of rotational modulation. We used phase-resolved
medium-resolution XSHOOTER spectroscopy to understand the light variability of
the stars KIC~250152017 and KIC~249660366, which are possibly rotationally
modulated. We determined the atmospheric parameters at individual phases and
tested for the presence of rotational modulation in the spectra. KIC 250152017
is a HgMn star, whose light variability is caused by the inhomogeneous surface
distribution of manganese and iron. It is only the second HgMn star whose light
variability is well understood. KIC 249660366 is a He-weak, high-velocity
horizontal branch star with overabundances of silicon and argon. The light
variability of this star is likely caused by a reflection effect in this
post-common envelope binary.
|
We report a Karl G. Jansky Very Large Array (JVLA) search for redshifted
CO(1-0) or CO(2-1) emission, and a Hubble Space Telescope Wide Field Camera~3
(HST-WFC3) search for rest-frame near-ultraviolet (NUV) stellar emission, from
seven HI-selected galaxies associated with high-metallicity ([M/H]~$\geq -1.3$)
damped Ly$\alpha$ absorbers (DLAs) at $z\approx 4$. The galaxies were earlier
identified by ALMA imaging of their [CII]~158$\mu$m emission. We also used the
JVLA to search for CO(2-1) emission from the field of a low-metallicity
([M/H]~$=-2.47$) DLA at $z\approx 4.8$. No statistically significant CO
emission is detected from any of the galaxies, yielding upper limits of
$M_{mol}<(7.4 - 17.9)\times 10^{10}\times (\alpha_{CO}/4.36) M_\odot$ on their
molecular gas mass. We detect rest-frame NUV emission from four of the seven
[CII]~158$\mu$m-emitting galaxies, the first detections of the stellar
continuum from HI-selected galaxies at $z\gtrsim 4$. The HST-WFC3 images yield
typical sizes of the stellar continua of $\approx 2-4$~kpc and inferred
dust-unobscured star-formation rates (SFRs) of $\approx 5.0-17.5 M_\odot$/yr,
consistent with, or slightly lower than, the total SFRs estimated from the
far-infrared (FIR) luminosity. We further stacked the CO(2-1) emission signals
of six [CII]~158$\mu$m-emitting galaxies in the image plane. Our non-detection
of CO(2-1) emission in the stacked image yields the limit $M_{mol}<4.1 \times
10^{10}\times (\alpha_{CO}/4.36) M_\odot$ on the average molecular gas mass of
the six galaxies. Our molecular gas mass estimates and NUV SFR estimates in
HI-selected galaxies at $z\approx 4$ are consistent with those of main-sequence
galaxies with similar [CII]~158$\mu$m and FIR luminosities at similar
redshifts. However, the NUV emission in the HI-selected galaxies appears more
extended than that in main-sequence galaxies at similar redshifts.
|
This paper deals with the solution of delay differential equations describing
evolution of dislocation density in metallic materials. Hardening, restoration,
and recrystallization characterizing the evolution of dislocation populations
provide the essential equation of the model. The last term transforms the ordinary
differential equation (ODE) into a delay differential equation (DDE) with a strong
(in general, H\"older) nonlinearity. We prove upper error bounds for the
explicit Euler method under the assumption that the right-hand side function
is H\"older continuous and monotone, which allows us to compare the accuracy of
other numerical methods in our model (e.g. Runge-Kutta), in particular when
explicit formulas for solutions are not known. Finally, we test the above
results in simulations of a real industrial process.
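A minimal sketch of the explicit Euler method for a DDE y'(t) = f(t, y(t), y(t - tau)) with a history function; this is illustrative, as the paper's dislocation-density model and error analysis are more specific:

```python
def euler_dde(f, history, tau, t0, t1, h):
    """Explicit Euler for y'(t) = f(t, y(t), y(t - tau)).
    Assumes tau is an integer multiple of the step h, so delayed values
    fall exactly on earlier grid points; before t0 the history function
    supplies them. Returns the list of grid values y_0, ..., y_n."""
    m = round(tau / h)          # delay measured in steps
    steps = round((t1 - t0) / h)
    ys = [history(t0)]
    for k in range(steps):
        t = t0 + k * h
        yd = history(t - tau) if k < m else ys[k - m]  # delayed value
        ys.append(ys[k] + h * f(t, ys[k], yd))
    return ys
```

For the test equation y'(t) = -y(t - 1) with history y = 1 for t <= 0, the exact solution is y(t) = 1 - t on [0, 1], so the Euler grid value at t = 1 is exact, and at t = 2 it approximates the exact value -1/2.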
|
The importance and demands of visual scene understanding have been steadily
increasing along with the active development of autonomous systems.
Consequently, there has been a large amount of research dedicated to semantic
segmentation and dense motion estimation. In this paper, we propose a method
for jointly estimating optical flow and temporally consistent semantic
segmentation, which closely connects these two problem domains so that each
leverages the other. Semantic segmentation provides information on plausible physical
motion to its associated pixels, and accurate pixel-level temporal
correspondences enhance the accuracy of semantic segmentation in the temporal
domain. We demonstrate the benefits of our approach on the KITTI benchmark,
where we observe performance gains for flow and segmentation. We achieve
state-of-the-art optical flow results, and outperform all published algorithms
by a large margin on challenging, but crucial dynamic objects.
|
Effects of heavy sea quarks on the low energy physics are described by an
effective theory where the expansion parameter is the inverse quark mass,
1/$M$. At leading order in 1/$M$ (and neglecting light quark masses) the
dependence of any low energy quantity on $M$ is given in terms of the ratio of
$\Lambda$ parameters of the effective and the fundamental theory. We define a
function describing the scaling with the mass $M$. We find that its
perturbative expansion is very reliable for the bottom quark and also seems to
work very well at the charm quark mass. The same is then true for the ratios of
$\Lambda^{(4)}/\Lambda^{(5)}$ and $\Lambda^{(3)}/\Lambda^{(4)}$, which play a
major r\^ole in connecting lattice determinations of $\alpha^{(3)}_{MSbar}$
from the three-flavor theory with $\alpha^{(5)}_{MSbar}(M_Z)$. Also the charm
quark content of the nucleon, relevant for dark matter searches, can be
computed accurately from perturbation theory.
We investigate a very closely related model, namely QCD with $N_f=2$ heavy
quarks. Our non-perturbative information is derived from simulations on the
lattice, with masses up to the charm quark mass and lattice spacings down to
about 0.023 fm followed by a continuum extrapolation. The non-perturbative mass
dependence agrees within rather small errors with the perturbative prediction
at masses around the charm quark mass. Surprisingly, from studying solely the
massive theory we can make a prediction for the ratio
$Q^{1/\sqrt{t_0}}_{0,2}=[\Lambda \sqrt{t_0(0)}]_{N_f=2}/[\Lambda
\sqrt{t_0}]_{N_f=0}$, which refers to the chiral limit in $N_f=2$. Here $t_0$
is the Gradient Flow scale of [1]. The uncertainty for $Q$ is estimated to be
2.5%. For the phenomenologically interesting $\Lambda^{(3)}/\Lambda^{(4)}$, we
conclude that perturbation theory introduces errors which are at most at the
1.5% level, far smaller than other current uncertainties.
|
BRST-invariant action of general relativity in the unimodular gauge proposed
by Baulieu is studied without using perturbative expansions. The expression for
the path integral in the unimodular gauge is reduced to a form in which a
functional measure is defined by a norm invariant under Transverse
Diffeomorphism. It is shown that general relativity in the unimodular gauge
with this action and the quantum unimodular gravity are equivalent. It is also
shown that Vacuum Expectation Values (VEVs) of Diff invariant operators in the
unimodular gauge and in other gauges, such as the harmonic gauge, take distinct
values. The path integral for the harmonic gauge is found to be gauge equivalent to a
superposition of those for the unimodular gauge obtained by performing constant Weyl
transformation of the metric, after a non-dynamical cosmological term is
introduced into the action of the unimodular gauge.
|
We have calculated the general dispersion relationship for surface waves on a
ferrofluid layer of any thickness and viscosity, under the influence of a
uniform vertical magnetic field. The amplification of these waves can induce
the peak instability (Rosensweig instability). The dispersion relationship
implies that the critical magnetic field and the critical wavenumber of the
instability depend on the thickness of the ferrofluid layer. The dispersion
relationship has been simplified into four
asymptotic regimes: thick or thin layer and viscous or inertial behaviour. The
corresponding critical values are presented. We show that a typical parameter
of the ferrofluid enables one to know in which regime, viscous or inertial, the
ferrofluid will be near the onset of instability.
|
The harmonic numbers $H_n=\sum_{0<k\le n}1/k\ (n=0,1,2,\ldots)$ play
important roles in mathematics. Let $p>3$ be a prime. With helps of some
combinatorial identities, we establish the following two new congruences:
$$\sum_{k=1}^{p-1}\frac{\binom{2k}k}kH_k\equiv\frac13\left(\frac
p3\right)B_{p-2}\left(\frac13\right)\pmod{p}$$ and
$$\sum_{k=1}^{p-1}\frac{\binom{2k}k}kH_{2k}\equiv\frac7{12}\left(\frac
p3\right)B_{p-2}\left(\frac13\right)\pmod{p},$$ where $B_n(x)$ denotes the
Bernoulli polynomial of degree $n$. As an application, we determine
$\sum_{n=1}^{p-1}g_n$ and $\sum_{n=1}^{p-1}h_n$ modulo $p^3$, where
$$g_n=\sum_{k=0}^n\binom nk^2\binom{2k}k\quad\mbox{and}\quad
h_n=\sum_{k=0}^n\binom nk^2C_k$$ with $C_k=\binom{2k}k/(k+1)$.
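The first congruence can be checked numerically for small primes. The sketch below computes both sides modulo $p$ in pure Python, obtaining the Bernoulli numbers exactly from the standard recurrence (convention $B_1=-1/2$); this is a sanity check, not part of the proof.

```python
# Check: sum_{k=1}^{p-1} C(2k,k)/k * H_k == (1/3)(p/3) B_{p-2}(1/3) (mod p).
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Exact Bernoulli numbers B_0..B_n via the defining recurrence."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def frac_mod(q, p):
    """Reduce a Fraction modulo p (denominator invertible mod p)."""
    return q.numerator * pow(q.denominator, -1, p) % p

def check(p):
    inv = [0] + [pow(k, -1, p) for k in range(1, p)]
    H, lhs = 0, 0
    for k in range(1, p):
        H = (H + inv[k]) % p                       # H_k mod p
        lhs = (lhs + comb(2 * k, k) * inv[k] * H) % p
    B = bernoulli(p - 2)
    x = Fraction(1, 3)
    Bp2 = sum(comb(p - 2, k) * B[k] * x ** (p - 2 - k) for k in range(p - 1))
    legendre = 1 if p % 3 == 1 else -1             # Legendre symbol (p/3)
    return lhs == frac_mod(Fraction(legendre, 3) * Bp2, p)

print(all(check(p) for p in (5, 7, 11, 13)))  # True
```

The Bernoulli denominators are coprime to $p$ for indices below $p-1$ (von Staudt-Clausen), so the reduction modulo $p$ is well defined.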
|
We have detected solar-like oscillations in the mid K-dwarf $\varepsilon$
Indi A, making it the coolest dwarf with measured oscillations. The star is
noteworthy for harboring a pair of brown dwarf companions and a Jupiter-type
planet. We observed $\varepsilon$ Indi A during two radial velocity campaigns,
using the high-resolution spectrographs HARPS (2011) and UVES (2021). Weighting
the time series, we computed the power spectra and established the detection of
solar-like oscillations with a power excess located at $5265 \pm 110 \ \mu$Hz
-- the highest frequency solar-like oscillations so far measured in any star.
The measurement of the center of the power excess allows us to compute a
stellar mass of $0.782 \pm 0.023 \ M_\odot$ based on scaling relations and a
known radius from interferometry. We also determine the amplitude of the peak
power and note that there is a slight difference between the two observing
campaigns, indicating a varying activity level. Overall, this work confirms
that low-amplitude solar-like oscillations can be detected in mid-K type stars
in radial velocity measurements obtained with high-precision spectrographs.
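The quoted mass follows from the standard $\nu_{\max}$ scaling relation; a minimal sketch is below. The solar reference values and the radius and effective temperature of $\varepsilon$ Indi A are approximate literature values assumed for illustration, not quoted from this work.

```python
# Sketch of the nu_max scaling relation behind the mass estimate:
#   M/Msun ~ (nu_max/nu_max_sun) * (R/Rsun)^2 * (Teff/Teff_sun)^0.5
# Radius and Teff below are approximate literature values (assumptions).
NU_MAX_SUN = 3090.0   # muHz
TEFF_SUN = 5777.0     # K

def mass_from_numax(nu_max, radius, teff):
    return (nu_max / NU_MAX_SUN) * radius**2 * (teff / TEFF_SUN) ** 0.5

m = mass_from_numax(nu_max=5265.0, radius=0.711, teff=4650.0)
print(round(m, 3))  # close to the quoted 0.782 Msun
```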
|
Effective Burnside $\infty$-categories are the centerpiece of the
$\infty$-categorical approach to equivariant stable homotopy theory. In this
\'etude, we recall the construction of the twisted arrow $\infty$-category, and
we give a new proof that it is an $\infty$-category, using an extremely helpful
modification of an argument due to Joyal--Tierney. The twisted arrow
$\infty$-category is in turn used to construct the effective Burnside
$\infty$-category. We employ a variation on this theme to construct a fibrewise
effective Burnside $\infty$-category. To show that this construction works
fibrewise, we introduce a fragment of a theory of what we call marbled
simplicial sets, and we use a yet further modified form of the Joyal--Tierney
argument.
|
A major cause of mid-vehicle collisions is driver distraction with respect to
both front- and rear-end vehicles, as witnessed in dense traffic and high-speed
road conditions. In view of this scenario, a crash detection and collision
avoidance algorithm, coined the Midvehicle Collision Detection and Avoidance
System (MCDAS), is proposed to evade possible crashes at both ends of the host
vehicle. The method, based on a Constant Velocity (CV) model, addresses two
scenarios. The first scenario encompasses two sub-scenarios: a) a rear-end
collision avoidance mechanism that accelerates the host vehicle when no
front-end vehicle is present, and b) curvilinear motion based on the offset
(position) between the front and host vehicles; the other scenario deals with
parallel parking issues. The offset-based curvilinear motion of the host
vehicle plays a vital role in avoiding threats from the front-end vehicle. A
desired curvilinear strategy on the left and right sides is achieved by the
host vehicle with regard to the possible CV so as to avoid collisions at both
ends. In this methodology, a path constraint is applicable to both scenarios
with the required direction. Monte Carlo analysis of MCDAS covering the vehicle
kinematics demonstrated acute discrimination with consistent collision-avoidance
performance, validated on simulated and real-time data.
|
The 2-Fano varieties, defined by De Jong and Starr, satisfy some higher
dimensional analogous properties of Fano varieties. We propose a definition of
(weak) $k$-Fano variety and conjecture the polyhedrality of the cone of
pseudoeffective $k$-cycles for those varieties in analogy with the case $k=1$.
Then, we calculate some Betti numbers of a large class of $k$-Fano varieties to
prove some special cases of the conjecture. In particular, the conjecture is
true for all 2-Fano varieties of index $\ge n-2$, and we also complete the
classification of weak 2-Fano varieties of Araujo and Castravet.
|
We present an end-to-end method for transforming audio from one style to
another. For the case of speech, by conditioning on speaker identities, we can
train a single model to transform words spoken by multiple people into multiple
target voices. For the case of music, we can specify musical instruments and
achieve the same result. Architecturally, our method is a fully-differentiable
sequence-to-sequence model based on convolutional and hierarchical recurrent
neural networks. It is designed to capture long-term acoustic dependencies,
requires minimal post-processing, and produces realistic audio transforms.
Ablation studies confirm that our model can separate speaker and instrument
properties from acoustic content at different receptive fields. Empirically,
our method achieves competitive performance on community-standard datasets.
|
This work considers the effects of the Hurst exponent ($H$) on the local
electric field distribution and the slope of the Fowler-Nordheim (FN) plot when
considering the cold field electron emission properties of rough Large-Area
Conducting Field Emitter Surfaces (LACFESs). A LACFES is represented by a
self-affine Weierstrass-Mandelbrot function in a given spatial direction. For
$0.1 \leqslant H < 0.5$, the local electric field distribution exhibits two
clear exponential regimes. Moreover, a scaling between the macroscopic current
density ($J_M$) and the characteristic kernel current density ($J_{kC}$),
$J_{M} \sim [J_{kC}]^{\beta_{H}}$, with an H-dependent exponent $\beta_{H} >
1$, has been found. This feature, which is less pronounced (but not absent) in
the range where smoother surfaces have been found ($0.5 \leqslant H
\leqslant 0.9$), is a consequence of the dependency between the area efficiency
of emission of a LACFES and the macroscopic electric field, which is often
neglected in the interpretation of cold field electron emission experiments.
Considering the recent developments in orthodox field emission theory, we show
that the exponent $\beta_{H}$ must be considered when calculating the slope
characterization parameter (SCP) and thus provides a relevant method of more
precisely extracting the characteristic field enhancement factor from the slope
of the FN plot.
|
Approximating higher-order tensors by the Tucker format has been applied in
many fields such as psychometrics, chemometrics, signal processing, pattern
classification, and so on. In this paper, we propose some new Tucker-like
approximations based on the modal semi-tensor product (STP), especially, a new
singular value decomposition (SVD) and a new higher-order SVD (HOSVD) are
derived. Algorithms for computing new decompositions are provided. We also give
some numerical examples to illustrate our theoretical results.
|
Glauber-Sudarshan states, sometimes simply referred to as Glauber states, or
alternatively as coherent and squeezed-coherent states, are interesting states
in the configuration spaces of any quantum field theories, that closely
resemble classical trajectories in space-time. In this paper, we identify
four-dimensional de Sitter space as a coherent state over a supersymmetric
Minkowski vacuum. Although such an identification is not new, what is new
however is the claim that this is realizable in full string theory, but only in
conjunction with temporally varying degrees of freedom and quantum corrections
resulting from them. Furthermore, fluctuations over the de Sitter space are
governed by a generalized graviton (and flux)-added coherent state, also known
as the Agarwal-Tara state. The realization of de Sitter space as a state, and
not as a vacuum, resolves many issues associated with its entropy, zero-point
energy and trans-Planckian censorship, amongst other things.
|
We investigate the nonlocal property of the fractional statistics in Kitaev's
toric code model. To this end, we construct the Greenberger-Horne-Zeilinger
paradox which builds a direct conflict between the statistics and local
realism. It turns out that the fractional statistics in the model is purely a
quantum effect, independent of any classical theory. We also discuss a
feasible experimental scheme using anyonic interferometry to test this
contradiction.
|
We study the influence of quantum fluctuations on the phase, density, and
pair correlations in a trapped quasicondensate after a quench of the
interaction strength. To do so, we derive a description similar to the
stochastic Gross-Pitaevskii equation (SGPE) but keeping a fully quantum
description of the low-energy fields using the positive-P representation. This
allows us to treat both the quantum and thermal fluctuations together in an
integrated way. A plain SGPE only allows for thermal fluctuations. The approach
is applicable to such situations as finite temperature quantum quenches, but
not equilibrium calculations due to the time limitations inherent in positive-P
descriptions of interacting gases. One sees the appearance of antibunching, the
generation of counter-propagating atom pairs, and increased phase fluctuations.
We show that the behavior can be estimated by adding the T=0 quantum
fluctuation contribution to the thermal fluctuations described by the plain
SGPE.
|
We study the effect of adding lower dimensional brane charges to the 't Hooft
monopole, di-baryon and baryon vertex configurations in AdS_4 x CP^3. We show
that these configurations capture the background fluxes in a way that depends
on the induced charges, requiring additional fundamental strings to cancel the
worldvolume tadpoles. The dynamics reveal that the charges must lie inside some
interval, a situation familiar from the baryon vertex in AdS_5 x S^5 with
charges. For the baryon vertex and the di-baryon the number of fundamental
strings must also lie inside an allowed interval. Some ideas about the
existence of these bounds in relation to the stringy exclusion principle are
given.
|
Mapping the thermal transport properties of materials at the nanoscale is of
critical importance for optimizing heat conduction in nanoscale devices.
Several methods to determine the thermal conductivity of materials have been
developed, most of them yielding an average value across the sample, thereby
disregarding the role of local variations. Here, we present a method for the
spatially-resolved assessment of the thermal conductivity of suspended graphene
by using a combination of confocal Raman thermometry and a fitting procedure
based on finite-element calculations. We demonstrate the working principle of
our method by extracting the two-dimensional thermal conductivity map of one
pristine suspended single-layer graphene sheet and one irradiated using helium
ions. Our method paves the way for spatially resolving the thermal conductivity
of other types of layered materials. This is particularly relevant for the
design and engineering of nanoscale thermal circuits (e.g. thermal diodes).
|
In this paper, we introduce MADARi, a joint morphological annotation and
spelling correction system for texts in Standard and Dialectal Arabic. The
MADARi framework provides intuitive interfaces for annotating text and managing
the annotation process of a large number of sizable documents. Morphological
annotation includes indicating, for a word, in context, its baseword, clitics,
part-of-speech, lemma, gloss, and dialect identification. MADARi has a suite of
utilities to help with annotator productivity. For example, annotators are
provided with pre-computed analyses to assist them in their task and reduce the
amount of work needed to complete it. MADARi also allows annotators to query a
morphological analyzer for a list of possible analyses in multiple dialects or
look up previously submitted analyses. The MADARi management interface enables
a lead annotator to easily manage and organize the whole annotation process
remotely and concurrently. We describe the motivation, design and
implementation of this interface; and we present details from a user study
working with this system.
|
A method for reconstructing the energy landscape of simple polypeptidic
chains is described. We show that we can construct an equivalent representation
of the energy landscape by a suitable directed graph. Its topological and
dynamical features are shown to yield an effective estimate of the time scales
associated with the folding and with the equilibration processes. This
conclusion is drawn by comparing molecular dynamics simulations at constant
temperature with the dynamics on the graph, defined by a temperature dependent
Markov process. The main advantage of the graph representation is that its
dynamics can be naturally renormalized by collecting nodes into "hubs", while
redefining their connectivity. We show that both topological and dynamical
properties are preserved by the renormalization procedure. Moreover, we obtain
clear indications that the heteropolymers exhibit common topological
properties, at variance with the homopolymer, whose peculiar graph structure
stems from its spatial homogeneity. In order to obtain a clear distinction
between a "fast folder" and a "slow folder" in the heteropolymers one has to
look at kinetic features of the directed graph. We find that the average time
needed by the fast folder to reach its native configuration is two orders
of magnitude smaller than its equilibration time, while for the slow folder
these time scales are comparable. Accordingly, we can conclude that the
strategy described in this paper can be successfully applied also to more
realistic models, by studying their renormalized dynamics on the directed
graph, rather than performing lengthy molecular dynamics simulations.
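The hub renormalization can be illustrated on a toy Markov chain: collapsing a set of nodes into one hub, with the hub's outgoing probabilities defined as stationary-weighted averages, preserves the aggregated stationary distribution exactly. The 4-state chain below is hypothetical and far simpler than the landscape graphs studied here.

```python
# Toy "hub" renormalization of a Markov chain on a directed graph.
def stationary(P, iters=5000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):                       # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.1, 0.6, 0.2, 0.1],                       # hypothetical 4-state chain
     [0.3, 0.2, 0.4, 0.1],
     [0.2, 0.3, 0.1, 0.4],
     [0.4, 0.1, 0.3, 0.2]]
pi = stationary(P)

hub = [2, 3]                                     # nodes collected into a hub
keep = [0, 1]
w = [pi[i] / sum(pi[i] for i in hub) for i in hub]

def lump(i, j):
    """Transition probability in the renormalized 3-state chain."""
    src, ws = (hub, w) if i == 2 else ([keep[i]], [1.0])
    dst = hub if j == 2 else [keep[j]]
    return sum(wi * sum(P[a][b] for b in dst) for wi, a in zip(ws, src))

Q = [[lump(i, j) for j in range(3)] for i in range(3)]
pi_q = stationary(Q)
print(abs(pi_q[2] - (pi[2] + pi[3])) < 1e-9)     # hub mass preserved
```

This exact preservation only concerns the stationary (equilibrium) weights; as the abstract notes, kinetic quantities such as folding times require studying the renormalized dynamics themselves.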
|
A detailed quantum-electrodynamic calculation of muon pair creation in
laser-driven electron-positron collisions is presented. The colliding particles
stem from a positronium atom exposed to a superintense laser wave of linear
polarization, which allows for high luminosity. The threshold laser intensity
of this high-energy reaction amounts to a few 10^22 W/cm^2 in the near-infrared
frequency range. The muons produced form an ultrarelativistic, strongly
collimated beam, which is explicable in terms of a classical simple-man's
model. Our results indicate that the process can be observed at high
positronium densities with the help of present-day laser technology.
|
Fermi/LAT observations of star-forming galaxies in the ~0.1-100GeV range have
made possible a first population study. Evidence was found for a correlation
between gamma-ray luminosity and tracers of the star formation activity.
Studying galactic cosmic rays (CRs) in various global conditions can yield
information about their origin and transport in the interstellar medium (ISM).
This work addresses the question of the scaling laws that can be expected for
the interstellar gamma-ray emission as a function of global galactic
properties, with the goal of establishing whether the current experimental data
in the GeV range can be constraining. I developed a 2D model for the
non-thermal emissions from steady-state CR populations interacting with the ISM
in star-forming galaxies. Most CR-related parameters were taken from Milky Way
studies, and a large number of galaxies were then simulated with sizes from 4
to 40kpc, several gas distributions, and star formation rates (SFR) covering
six orders of magnitude. The evolution of the gamma-ray luminosity over the
100keV-100TeV range is presented, with emphasis on the contribution of the
different emission processes and particle populations, and on the transition
between transport regimes. The model can reproduce the normalisation and trend
inferred from the Fermi/LAT population study over most of the SFR range. This
is obtained with a plain diffusion scheme, a single diffusion coefficient, and
the assumption that CRs experience large-scale volume-averaged interstellar
conditions. There is, however, no universal relation between high-energy
gamma-ray luminosity and star formation activity, as illustrated by the scatter
introduced by different galactic global properties and the downturn in
gamma-ray emission at the low end (abridged).
|
The task of point cloud completion aims to predict the missing part for an
incomplete 3D shape. A widely used strategy is to generate a complete point
cloud from the incomplete one. However, the unordered nature of point clouds
will degrade the generation of high-quality 3D shapes, as the detailed topology
and structure of discrete points are hard to capture by a generative
process using only a latent code. In this paper, we address the above problem
by reconsidering the completion task from a new perspective, where we formulate
the prediction as a point cloud deformation process. Specifically, we design a
novel neural network, named PMP-Net, to mimic the behavior of an earth mover.
It moves each point of the incomplete input to complete the point cloud, where
the total distance of point moving paths (PMP) should be shortest. Therefore,
PMP-Net predicts a unique point moving path for each point according to the
constraint of total point moving distances. As a result, the network learns a
strict and unique correspondence on point-level, which can capture the detailed
topology and structure relationships between the incomplete shape and the
complete target, and thus improves the quality of the predicted complete shape.
We conduct comprehensive experiments on Completion3D and PCN datasets, which
demonstrate our advantages over the state-of-the-art point cloud completion
methods.
|
In quantum theory, a physical observable is represented by a Hermitian
operator as it admits real eigenvalues. This stems from the fact that any
measuring apparatus that is supposed to measure a physical observable will
always yield a real number. However, reality of eigenvalue of some operator
does not mean that it is necessarily Hermitian. There are examples of
non-Hermitian operators which may admit real eigenvalues under some symmetry
conditions. However, in general, given a non-Hermitian operator, its average
value in a quantum state is a complex number and there are only very limited
methods available to measure it. Following standard quantum mechanics, we
provide an experimentally feasible protocol to measure the expectation value of
any non-Hermitian operator via weak measurements. The average of a
non-Hermitian operator in a pure state is a complex multiple of the weak value
of the positive semi-definite part of the non-Hermitian operator. We also prove
a new uncertainty relation for any two non-Hermitian operators and show that
the fidelity of a quantum state under quantum channel can be measured using the
average of the corresponding Kraus operators. The importance of our method is
shown in testing the stronger uncertainty relation, verifying the Ramanujan
formula, and in measuring the product of non-commuting projectors.
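The identity behind the protocol can be illustrated in a minimal 2x2 example: writing a non-Hermitian $A$ in polar form $A = UP$ with $U$ unitary and $P \geq 0$, the average $\langle\psi|A|\psi\rangle$ equals $\langle\phi|\psi\rangle$ times the weak value of $P$ with post-selection $|\phi\rangle = U^\dagger|\psi\rangle$. The matrices and state below are arbitrary illustrative choices, not the experimental setup.

```python
# 2x2 illustration: <psi|A|psi> = <phi|psi> * P_w, with A = U P,
# |phi> = U^dagger |psi>, P_w = <phi|P|psi>/<phi|psi>. U, P, psi arbitrary.
import cmath, math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def inner(u, v):  # <u|v>, conjugating the first argument
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

theta = 0.7
U = [[cmath.exp(1j * 0.3) * math.cos(theta), -math.sin(theta)],
     [math.sin(theta), cmath.exp(-1j * 0.3) * math.cos(theta)]]  # unitary
P = [[2.0, 0.5], [0.5, 1.0]]                     # Hermitian, positive definite
A = [[sum(U[i][k] * P[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                          # non-Hermitian A = U P

psi = [1 / math.sqrt(2), 1j / math.sqrt(2)]
Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
phi = matvec(Udag, psi)                          # post-selected state

lhs = inner(psi, matvec(A, psi))                 # complex average of A
Pw = inner(phi, matvec(P, psi)) / inner(phi, psi)  # weak value of P
rhs = inner(phi, psi) * Pw
print(abs(lhs - rhs) < 1e-12)  # True
```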
|
The planar Hall sensitivity of obliquely deposited NiFe(10)/Pt(tPt)
/IrMn(8)/Pt(3) (nm) trilayer structures has been investigated by introducing
interfacial modification and altering the sensor geometry. The peak-to-peak
planar Hall effect (PHE) voltage and anisotropic magnetoresistance (AMR) ratio
of the sensors exhibit an oscillatory increase as a
function of Pt thickness. This behaviour was attributed to the strong electron
spin-orbit scattering at the NiFe/Pt interface of the trilayers. The
temperature-dependent PHE signal profiles reveal that the Pt-inserted PHE
sensors are stable even at 390 K with a high signal-to-noise ratio and an
increased sensitivity due to reduction of exchange bias. In order to further
increase the sensitivity, we have fabricated PHE sensors for a fixed Pt
thickness of 8 {\AA} by using sensor architectures of a cross, tilted-cross,
one-ring and five-ring junctions. We have obtained a sensitivity of 3.82
{\mu}V/Oe.mA for the cross junction, while it considerably increased to 298.5
{\mu}V/Oe.mA for the five-ring sensor geometry. The real-time voltage profiles
of the PHE sensors demonstrate that the sensor states are very stable under
various magnetic fields and that the sensor output voltages return to their
initial offset values. This provides great potential for the NiFe/Pt/IrMn-based
planar Hall sensors in many sensing applications.
|
Estimation of a precision matrix (i.e., inverse covariance matrix) is widely
used to exploit conditional independence among continuous variables. The
influence of abnormal observations is exacerbated in a high dimensional setting
as the dimensionality increases. In this work, we propose robust estimation of
the inverse covariance matrix based on an $l_1$ regularized objective function
with a weighted sample covariance matrix. The robustness of the proposed
objective function can be justified by a nonparametric technique of the
integrated squared error criterion. To address the non-convexity of the
objective function, we develop an efficient algorithm in the spirit of
majorization-minimization. Asymptotic consistency of the proposed estimator is
also established. The performance of the proposed method is compared with
several existing approaches via numerical simulations. We further demonstrate
the merits of the proposed method with application in genetic network
inference.
|
For all classical groups (and for their analogs in infinite dimension or over
general base fields or rings) we construct certain contractions, called
"homotopes". The construction is geometric, using as ingredient involutions of
associative geometries. We prove that, under suitable assumptions, the groups
and their homotopes have a canonical semigroup completion.
|
It is pointed out that a cavity supernova (SN) explosion of a moving massive
star could result in a significant offset of the neutron star (NS) birth-place
from the geometrical centre of the supernova remnant (SNR). Therefore: a) the
high implied transverse velocities of a number of NSs (e.g. PSR B1610-50, PSR
B1757-24, SGR0525-66) could be reduced; b) the proper motion vector of an NS
should not necessarily point away from the geometrical centre of the associated
SNR; c) the circle of possible NS/SNR associations could be enlarged. An
observational test is discussed, which could make it possible to find the true
birth-places of NSs associated with middle-aged SNRs, and thereby to obtain
more reliable estimates of their transverse velocities.
|
In this article the Helmholtz-Weyl decomposition in three dimensional
exterior domains is established within the $L^r$-setting for $1<r<\infty$.
|
As one of the key equipment in the distribution system, the distribution
transformer directly affects the reliability of the user power supply. The
probability of accidents occurring in the operation of transformer equipment is
high, so it has become a focus of material inspection in recent years. However,
the large amount of raw data from sample testing is not being used effectively.
Given the above problems, this paper aims to mine the relationships between
unqualified distribution transformer inspection items by using an association
rule algorithm based on the distribution transformer inspection data collected
from 2017 to 2021 and sorting out the key inspection items. At the same time,
the unqualified judgment basis of the relevant items is given, and the internal
relationship between the inspection items is clarified to a certain extent.
Furthermore, based on material and equipment inspection reports, correlations
between failed inspection items, and expert knowledge, the knowledge graph of
material equipment inspection management is constructed in the graph database
Neo4j. The experimental results show that the FP-Growth method performs
significantly better than the Apriori method and can accurately assess the
relationship between failed distribution transformer inspection items. Finally,
the knowledge graph network is visualized to provide a systematic knowledge
base for material inspection, which is convenient for knowledge query and
management. This method can provide a scientific guidance program for operation
and maintenance personnel to do equipment maintenance and also offers a
reference for the state evaluation of other high-voltage equipment.
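The support/confidence computations behind such association rules can be sketched in a few lines. The records and item names below are invented for illustration, and the paper's pipeline uses FP-Growth rather than this naive Apriori-style enumeration.

```python
# Apriori-style sketch on hypothetical failed-inspection records
# (item names invented; real data comes from the 2017-2021 inspections).
from itertools import combinations

records = [
    {"winding_resistance", "insulation", "oil_leak"},
    {"winding_resistance", "insulation"},
    {"insulation", "no_load_loss"},
    {"winding_resistance", "insulation", "no_load_loss"},
    {"oil_leak", "no_load_loss"},
]

def support(itemset):
    """Fraction of records containing every item of the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def frequent_itemsets(min_support=0.4, max_size=2):
    items = sorted({i for r in records for i in r})
    return [set(c)
            for k in range(1, max_size + 1)
            for c in combinations(items, k)
            if support(set(c)) >= min_support]

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

freq = frequent_itemsets()
print({"winding_resistance", "insulation"} in freq)        # True
print(confidence({"winding_resistance"}, {"insulation"}))  # 1.0
```

FP-Growth produces the same frequent itemsets without candidate enumeration, which is why it scales better, as the experiments report.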
|
We construct a finite element method for the numerical solution of a
fractional porous medium equation on a bounded open Lipschitz polytopal domain
$\Omega \subset \mathbb{R}^{d}$, where $d = 2$ or $3$. The pressure in the
model is defined as the solution of a fractional Poisson equation, involving
the fractional Neumann Laplacian in terms of its spectral definition. We
perform a rigorous passage to the limit as the spatial and temporal
discretization parameters tend to zero and show that a subsequence of the
sequence of finite element approximations defined by the proposed numerical
method converges to a bounded and nonnegative weak solution of the
initial-boundary-value problem under consideration. This result can be
therefore viewed as a constructive proof of the existence of a nonnegative,
energy-dissipative, weak solution to the initial-boundary-value problem for the
fractional porous medium equation under consideration, based on the Neumann
Laplacian. The convergence proof relies on results concerning the finite
element approximation of the spectral fractional Laplacian and compactness
techniques for nonlinear partial differential equations, together with
properties of the equation, which are shown to be inherited by the numerical
method. We also prove that the total energy associated with the problem under
consideration exhibits exponential decay in time.
|
Heterostructures of 2D materials offer a fertile ground to study ion
transport and charge storage. Here we employ ab initio molecular dynamics to
examine the proton-transfer/diffusion and redox behavior in a water layer
confined in the graphene-Ti3C2O2 heterostructure. We find that in comparison
with the similar interface of water confined between Ti3C2O2 layers, proton
redox rate in the dissimilar interface of graphene-Ti3C2O2 is much higher,
owning to the very different interfacial structure as well as the interfacial
electric field induced by an electron transfer in the latter. Water molecules
in the dissimilar interface of the graphene-Ti3C2O2 heterostructure form a
denser hydrogen-bond network with a preferred orientation of water molecules,
leading to an increase of proton mobility with proton concentration in the
graphene-Ti3C2O2 interface. As the proton concentration further increases,
proton mobility decreases, owing to increasingly frequent surface redox
events that slow protons down through binding with surface O atoms. Our
work provides important insights into how the dissimilar interface and their
associated interfacial structure and properties impact proton transfer and
redox in the confined space.
|
In order to better understand Kondo insulators, we have studied both the
symmetric and asymmetric Anderson lattices at half-filling in one dimension
using the density matrix formulation of the numerical renormalization group. We
have calculated the charge gap, spin gap and quasiparticle gap as a function of
the repulsive interaction U using open boundary conditions for lattices as
large as 24 sites. We find that the charge gap is larger than the spin gap for
all U for both the symmetric and asymmetric cases. RKKY interactions are
evident in the f-spin-f-spin correlation functions at large U in the symmetric
case, but are suppressed in the asymmetric case as the f-level approaches the
Fermi energy. This suppression can also be seen in the staggered susceptibility
and it is consistent with neutron scattering measurements in CeNiSn.
|
Information that is of relevance for decision-making is often distributed,
and held by self-interested agents. Decision markets are well-suited mechanisms
to elicit such information and aggregate it into conditional forecasts that can
be used for decision-making. However, for incentive-compatible elicitation,
decision markets rely on stochastic decision rules, which entails that actions
predicted to be sub-optimal sometimes have to be taken. In this
work, we propose three closely related mechanisms that elicit and aggregate
information similar to a decision market, but are incentive compatible despite
using a deterministic decision rule. Following ideas from peer prediction
mechanisms, proxies rather than observed future outcomes are used to score
predictions. The first mechanism requires the principal to have her own signal,
which is then used as a proxy to elicit information from a group of
self-interested agents. The principal then deterministically maps the
aggregated forecasts and the proxy to the best possible decision. The second
and third mechanisms expand the first to cover a scenario where the principal
does not have access to her own signal. The principal offers a partial profit
to align the interest of one agent and retrieve its signal as a proxy; or
alternatively uses a proper peer prediction mechanism to elicit signals from
two agents. Aggregation and decision-making then follow the first mechanism. We
evaluate our first mechanism using a multi-agent bandit learning system. The
result suggests that the mechanism can train agents to achieve a performance
similar to a Bayesian inference model with access to all information held by
the agents.
|
We develop a complete mathematical theory for the symmetrical solutions of
the generalized nonlinear Schr\"odinger equation based on the new concept of
angular pseudomomentum. We consider the symmetric solitons of a generalized
nonlinear Schr\"odinger equation with a nonlinearity depending on the modulus
of the field. We provide a rigorous proof of a set of mathematical results
justifying that these solitons can be classified according to the irreducible
representations of a discrete group. Then we extend this theory to
non-stationary solutions and study the relationship between angular momentum
and pseudomomentum. We illustrate these theoretical results with numerical
examples. Finally, we explore the possibility of generalizing the previous
framework to the quantum limit.
|
The complexity of the neutron transport phenomenon casts its shadow on every
physical system in which neutrons are produced or used. In the current study,
we present an ab initio derivation of the neutron self-shielding factor,
addressing the decrease of the neutron flux as it penetrates a material placed
in an isotropic neutron field. We have employed the theory of
steady-state neutron transport, starting from Stuart's formula. Simple formulae
were derived based on the integral cross-section parameters that could be
adopted by the user according to various variables, such as the neutron flux
distribution and geometry of the simulation at hand. The concluded formulae of
the self-shielding factors comprise an inverted sigmoid function normalized
with a weight representing the ratio between the macroscopic total and
scattering cross-sections of the medium. The general convex volume geometries
are reduced to a set of chord lengths, while the neutron interaction
probabilities within the volume are parameterized to the epithermal and thermal
neutron energies. The arguments of the inverted-sigmoid function were derived
from a simplified version of the neutron transport formulation. Accordingly,
the obtained general formulae successfully reproduce the experimental values of
the neutron self-shielding factor for different elements and different
geometries.
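As an illustrative sketch only (the paper's exact expression depends on its derived arguments and integral cross-section parameters, which are not reproduced here), a self-shielding factor of the stated form, an inverted sigmoid normalized by the scattering-to-total cross-section ratio, could look like:

```python
import math

def self_shielding_factor(sigma_t, sigma_s, x):
    """Hypothetical illustrative form of the self-shielding factor.

    sigma_t, sigma_s : macroscopic total and scattering cross-sections
    x : dimensionless argument (in the paper, derived from the chord
        lengths and cross-sections of the medium)
    """
    weight = sigma_s / sigma_t        # normalizing weight from the abstract
    return weight / (1.0 + math.exp(x))   # inverted sigmoid, decreasing in x
```

The factor decreases monotonically as the argument grows, mirroring the attenuation of flux with penetration depth.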
|
We show that for any finite group $G$ and for any $d$ there exists a word
$w\in F_{d}$ such that a $d$-tuple in $G$ satisfies $w$ if and only if it
generates a solvable subgroup. In particular, if $G$ itself is not solvable,
then it cannot be obtained as a quotient of the one-relator group $F_{d}/<w>$.
As a corollary, the probability that a word is satisfied in a fixed
non-solvable group can be made arbitrarily small, answering a question of Alon
Amit.
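The word $w$ constructed in the paper is intricate, but the underlying notion of a $d$-tuple satisfying a word can be sketched directly. In this toy illustration (not the paper's construction), permutations stand in for elements of a finite group and the commutator word plays the role of $w$:

```python
def compose(p, q):
    """(p * q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def evaluate_word(word, tup, n):
    """Evaluate a word in F_d, given as a list of (generator_index, +-1),
    on a d-tuple of permutations of {0, ..., n-1}."""
    result = tuple(range(n))  # identity permutation
    for idx, exp in word:
        g = tup[idx] if exp == 1 else inverse(tup[idx])
        result = compose(result, g)
    return result

# Toy example: the commutator word [x0, x1] = x0 x1 x0^-1 x1^-1 is
# satisfied by a 2-tuple iff its two entries commute.
comm = [(0, 1), (1, 1), (0, -1), (1, -1)]
a, b = (1, 0, 2), (0, 2, 1)   # two transpositions in S_3
ident = (0, 1, 2)
```

A tuple "satisfies" the word exactly when evaluation returns the identity.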
|
We derive a general expression of the quantum Fisher information for a
Mach-Zehnder interferometer, with the port inputs of an \emph{arbitrary} pure
state and a squeezed thermal state. We find that the standard quantum limit can
be beaten, when even or odd states are applied to the pure-state port. In
particular, when the squeezed thermal state becomes a thermal state, all the
even or odd states have the same quantum Fisher information for given photon
numbers. For a squeezed thermal state, optimal even or odd states are needed to
approach the Heisenberg limit. As examples, we consider several common even or
odd states: Fock states, even or odd coherent states, squeezed vacuum states,
and single-photon-subtracted squeezed vacuum states. We also demonstrate that
super-precision can be realized by implementing the parity measurement for
these states.
|
We present the first high-resolution sub-mm survey of both dust and gas for a
large population of protoplanetary disks. Characterizing fundamental properties
of protoplanetary disks on a statistical level is critical to understanding how
disks evolve into the diverse exoplanet population. We use ALMA to survey 89
protoplanetary disks around stars with $M_{\ast}>0.1~M_{\odot}$ in the young
(1--3~Myr), nearby (150--200~pc) Lupus complex. Our observations cover the
890~$\mu$m continuum and the $^{13}$CO and C$^{18}$O 3--2 lines. We use the
sub-mm continuum to constrain $M_{\rm dust}$ to a few Martian masses
(0.2--0.4~$M_{\oplus}$) and the CO isotopologue lines to constrain $M_{\rm
gas}$ to roughly a Jupiter mass (assuming ISM-like $\rm {[CO]/[H_2]}$
abundance). Of 89 sources, we detect 62 in continuum, 36 in $^{13}$CO, and 11
in C$^{18}$O at $>3\sigma$ significance. Stacking individually undetected
sources limits their average dust mass to $\lesssim6$ Lunar masses
(0.03~$M_{\oplus}$), indicating rapid evolution once disk clearing begins. We
find a positive correlation between $M_{\rm dust}$ and $M_{\ast}$, and present
the first evidence for a positive correlation between $M_{\rm gas}$ and
$M_{\ast}$, which may explain the dependence of giant planet frequency on host
star mass. The mean dust mass in Lupus is 3$\times$ higher than in Upper Sco,
while the dust mass distributions in Lupus and Taurus are statistically
indistinguishable. Most detected disks have $M_{\rm gas}\lesssim1~M_{\rm Jup}$
and gas-to-dust ratios $<100$, assuming ISM-like $\rm {[CO]/[H_2]}$ abundance;
unless CO is very depleted, the inferred gas depletion indicates that planet
formation is well underway by a few Myr and may explain the unexpected
prevalence of super-Earths in the exoplanet population.
|
We prove that the dimension $h^{1,1}_{\overline\partial}$ of the space of
Dolbeault harmonic $(1,1)$-forms is not necessarily always equal to $b^-$ on a
compact almost complex 4-manifold endowed with an almost Hermitian metric which
is not locally conformally almost K\"ahler. Indeed, we provide examples of non
integrable, non locally conformally almost K\"ahler, almost Hermitian
structures on compact 4-manifolds with $h^{1,1}_{\overline\partial}=b^-+1$.
This answers a question of Holt.
|
Representation learning on graphs has been gaining attention due to its wide
applicability in predicting missing links, and classifying and recommending
nodes. Most embedding methods aim to preserve certain properties of the
original graph in the low dimensional space. However, real world graphs have a
combination of several properties which are difficult to characterize and
capture by a single approach. In this work, we introduce the problem of graph
representation ensemble learning and provide a first of its kind framework to
aggregate multiple graph embedding methods efficiently. We analyze our
framework -- theoretically and empirically -- to understand the dependence
among state-of-the-art embedding methods. We test our models on the node
classification task on four real world graphs and show that proposed ensemble
approaches can outperform the state-of-the-art methods by up to 8% on macro-F1.
We further show that the approach is even more beneficial for underrepresented
classes providing an improvement of up to 12%.
|
In this paper, we study the Feldman-Katok metric. We give entropy formulas
obtained by replacing the Bowen metric with the Feldman-Katok metric. Some
related topics are also discussed.
|
Cerium-doped Cs$_2$LiYCl$_6$ (CLYC) and Cs$_2$LiLaBr$_x$Cl$_{6-x}$ (CLLBC)
are scintillators in the elpasolite family that are attractive options for
resource-constrained applications due to their ability to detect both gamma
rays and neutrons within a single volume. Space-based detectors are one such
application; however, the radiation environment in space can over time damage
the crystal structure of the elpasolites, leading to degraded performance. We
have exposed 4 samples each of CLYC and CLLBC to 800 MeV protons at the Los
Alamos Neutron Science Center. The samples were irradiated with a total number
of protons of 1.3$\times$10$^{9}$, 1.3$\times$10$^{10}$, 5.2$\times$10$^{10}$,
and 1.3$\times$10$^{11}$, corresponding to estimated doses of 0.14, 1.46, 5.82,
and 14.6 kRad, respectively on the CLYC samples and 0.14, 1.38, 5.52, and 13.8
kRad, respectively on the CLLBC samples. We report the impact these radiation
doses have on the light output, activation, gamma-ray energy resolution, pulse
shapes, and pulse-shape discrimination figure of merit for CLYC and CLLBC.
|
We study the multiloop amplitudes of the light-cone gauge closed bosonic
string field theory for $d \neq 26$. We show that the amplitudes can be recast
into a BRST invariant form by adding a nonstandard worldsheet theory for the
longitudinal variables $X^{\pm}$ and the reparametrization ghost system. The
results obtained in this paper for bosonic strings provide a first step towards
examining whether dimensional regularization works for the multiloop
amplitudes of the light-cone gauge superstring field theory.
|
We study the problem of predicting and controlling the future state
distribution of an autonomous agent. This problem, which can be viewed as a
reframing of goal-conditioned reinforcement learning (RL), is centered around
learning a conditional probability density function over future states. Instead
of estimating this density function directly, we estimate it indirectly by
training a classifier to predict whether an observation
comes from the future. Via Bayes' rule, predictions from our classifier can be
transformed into predictions over future states. Importantly, an off-policy
variant of our algorithm allows us to predict the future state distribution of
a new policy, without collecting new experience. This variant allows us to
optimize functionals of a policy's future state distribution, such as the
density of reaching a particular goal state. While conceptually similar to
Q-learning, our work lays a principled foundation for goal-conditioned RL as
density estimation, providing justification for goal-conditioned methods used
in prior work. This foundation yields testable hypotheses about Q-learning,
including the optimal goal-sampling ratio, which we confirm experimentally. Moreover, our
proposed method is competitive with prior goal-conditioned RL methods.
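The Bayes'-rule step can be illustrated with exact densities: for balanced classes, the optimal classifier outputs C = p_F / (p_F + p_M), so the ratio C / (1 - C) recovers the density ratio between "future" and marginal state distributions. A minimal sketch with stand-in Gaussians (not the paper's learned classifier):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_classifier(x, p_future, p_marginal):
    """Optimal classifier for balanced classes: C(x) = p_F(x) / (p_F(x) + p_M(x))."""
    f, m = p_future(x), p_marginal(x)
    return f / (f + m)

# Stand-in densities: "future state" vs. marginal distribution.
p_f = lambda x: normal_pdf(x, 1.0, 1.0)
p_m = lambda x: normal_pdf(x, 0.0, 1.0)

x = 0.7
C = bayes_classifier(x, p_f, p_m)
ratio = C / (1.0 - C)   # recovers the density ratio p_f(x) / p_m(x)
```

In practice the classifier is trained from samples, but the same ratio transformation converts its outputs into density-ratio predictions over future states.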
|
In recent years, SSDs have gained tremendous attention in computing and
storage systems due to significant performance improvement over HDDs. The cost
per capacity of SSDs, however, prevents them from entirely replacing HDDs in
such systems. One approach to effectively take advantage of SSDs is to use them
as a caching layer to store performance-critical data blocks, reducing the
number of accesses to the disk subsystem. Due to characteristics of Flash-based
SSDs such as limited write endurance and long latency on write operations,
employing caching algorithms at the Operating System (OS) level necessitates
taking these characteristics into consideration. Previous caching techniques are
optimized towards only one type of application, which affects both generality
and applicability. In addition, they are not adaptive when the workload pattern
changes over time. This paper presents an efficient Reconfigurable Cache
Architecture (ReCA) for storage systems using a comprehensive workload
characterization to find an optimal cache configuration for I/O intensive
applications. For this purpose, we first investigate various types of I/O
workloads and classify them into five major classes. Based on this
characterization, an optimal cache configuration is presented for each class of
workloads. Then, using the main features of each class, we continuously monitor
the characteristics of an application during system runtime and the cache
organization is reconfigured if the application changes from one class to
another class of workloads. The cache reconfiguration is done online and
workload classes can be extended to emerging I/O workloads in order to maintain
its efficiency with the characteristics of I/O requests. Experimental results
obtained by implementing ReCA in a server running Linux show that the proposed
architecture improves performance and lifetime up to 24\% and 33\%,
respectively.
|
These are the proceedings of the workshop "Math in the Black Forest", which
brought together researchers in shape analysis to discuss promising new
directions. Shape analysis is an inter-disciplinary area of research with
theoretical foundations in infinite-dimensional Riemannian geometry, geometric
statistics, and geometric stochastics, and with applications in medical
imaging, evolutionary development, and fluid dynamics. The workshop is the 6th
instance of a series of workshops on the same topic.
|
Diffusion models are loosely modelled on non-equilibrium
thermodynamics, where \textit{diffusion} refers to particles flowing from
high-concentration regions towards low-concentration regions. In statistics,
the meaning is quite similar, namely the process of transforming a complex
distribution $p_{\text{complex}}$ on $\mathbb{R}^d$ to a simple distribution
$p_{\text{prior}}$ on the same domain. This constitutes a Markov chain of
diffusion steps of slowly adding random noise to data, followed by a reverse
diffusion process in which the data is reconstructed from the noise. The
diffusion model learns the data manifold to which the original and thus the
reconstructed data samples belong, by training on a large number of data
points. While the diffusion process pushes a data sample off the data manifold,
the reverse process finds a trajectory back to the data manifold. Diffusion
models have -- unlike variational autoencoder and flow models -- latent
variables with the same dimensionality as the original data, and they are
currently\footnote{At the time of writing, 2023.} outperforming other
approaches -- including Generative Adversarial Networks (GANs) -- to modelling
the distribution of, e.g., natural images.
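The forward (noising) half of this Markov chain has a well-known closed form: x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps with alpha_bar_t = prod_s (1 - beta_s). A minimal sketch follows; the linear beta schedule is a common choice in the literature, not something specified by this text:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear noise schedule
x0 = rng.standard_normal(16)            # stand-in data sample
xT, _ = forward_diffuse(x0, 999, betas, rng)
# after the full chain, x_T is close to a draw from the simple prior N(0, I)
```

The reverse process, learned by the model, inverts these steps to find a trajectory back to the data manifold.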
|
We introduce a new sequential methodology to calibrate the fixed parameters
and track the stochastic dynamical variables of a state-space system. The
proposed method is based on the nested hybrid filtering (NHF) framework of [1],
that combines two layers of filters, one inside the other, to compute the joint
posterior probability distribution of the static parameters and the state
variables. In particular, we explore the use of deterministic sampling
techniques for Gaussian approximation in the first layer of the algorithm,
instead of the Monte Carlo methods employed in the original procedure. The
resulting scheme reduces the computational cost and so makes the algorithms
potentially better-suited for high-dimensional state and parameter spaces. We
describe a specific instance of the new method and then study the performance
and efficiency of the resulting algorithm for a stochastic Lorenz 63 model
with uncertain parameters.
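One standard deterministic-sampling scheme for Gaussian approximation is the unscented transform's sigma points: 2d+1 points that match a Gaussian's mean and covariance exactly. The sketch below is illustrative and not necessarily the exact rule used in the paper:

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Unscented sigma points: 2d+1 deterministic samples that match the
    mean and covariance of N(mean, cov) exactly (kappa is a spread knob)."""
    d = len(mean)
    S = np.linalg.cholesky((d + kappa) * cov)
    pts = np.vstack([mean]
                    + [mean + S[:, i] for i in range(d)]
                    + [mean - S[:, i] for i in range(d)])
    w = np.full(2 * d + 1, 1.0 / (2.0 * (d + kappa)))
    w[0] = kappa / (d + kappa)
    return pts, w

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = sigma_points(mean, cov)
```

Because the points are deterministic, propagating them through the model replaces the Monte Carlo sampling of the original NHF layer at a fraction of the cost.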
|
We demonstrate a real-space imaging of a heterodyne signal of light that is
produced as a result of the Brillouin light scattering by coherently driven
magnons in magnetostatic modes. With this imaging technique, we characterize
surface magnetostatic modes (Damon-Eshbach modes) in a one-dimensional magnonic
crystal, which is formed by patterned aluminum strips deposited on the
ferromagnetic film. The modified band structures of the magnonic crystal are
deduced from the Fourier transforms of the real-space images. The heterodyne
imaging provides a simple and powerful method to probe magnons in structured
ferromagnetic films, paving the way to investigate more complex phenomena, such
as Anderson localization and topological transport with magnons.
|
Large Language Models (LLMs), renowned for their superior proficiency in
language comprehension and generation, stimulate a vibrant ecosystem of
applications around them. However, their extensive assimilation into various
services introduces significant security risks. This study deconstructs the
complexities and implications of prompt injection attacks on actual
LLM-integrated applications. Initially, we conduct an exploratory analysis on
ten commercial applications, highlighting the constraints of current attack
strategies in practice. Prompted by these limitations, we subsequently
formulate HouYi, a novel black-box prompt injection attack technique, which
draws inspiration from traditional web injection attacks. HouYi is
compartmentalized into three crucial elements: a seamlessly-incorporated
pre-constructed prompt, an injection prompt inducing context partition, and a
malicious payload designed to fulfill the attack objectives. Leveraging HouYi,
we unveil previously unknown and severe attack outcomes, such as unrestricted
arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi
on 36 actual LLM-integrated applications and discern 31 applications
susceptible to prompt injection. 10 vendors have validated our discoveries,
including Notion, which has the potential to impact millions of users. Our
investigation illuminates both the possible risks of prompt injection attacks
and the possible tactics for mitigation.
|
We build a set of new observables using closely related non-leptonic
penguin-mediated $B_d$ and $B_s$ decays: ${\bar B}_{d,s}\to
K^{*0}\bar{K}^{*0}$, ${\bar B}_{d,s}\to K^{0}\bar{K}^{0}$, ${\bar B}_{d,s}\to
K^{0}\bar{K}^{*0}$ and ${\bar B}_{d,s}\to\bar{K}^{0}{K^{*0}}$ together with
their CP conjugate partners. These optimised observables are designed to reduce
hadronic uncertainties, mainly coming from form factors and power-suppressed
infrared divergences, and thus maximize their sensitivity to New Physics (NP).
The deviations observed with respect to the SM in the ratios of branching
ratios of ${\bar B}_{d,s}\to K^{*0}\bar{K}^{*0}$ ($2.6\sigma$) and ${\bar
B}_{d,s}\to K^{0}\bar{K}^{0}$ ($2.4\sigma$) can be explained by simple NP
scenarios involving the Wilson coefficients ${\cal C}_4$ and ${\cal C}_6$ (QCD
penguin operators) and the coefficient ${\cal C}_{8g}$ (chromomagnetic
operator). The optimised observables for ${\bar B}_{d,s}\to K^{0}\bar{K}^{*0}$
and ${\bar B}_{d,s}\to\bar{K}^{0}{K^{*0}}$ show distinctive patterns of
deviations with respect to their SM predictions under these NP scenarios. The
pattern of deviations of individual branching ratios, though affected by
significant hadronic uncertainties, suggests that NP is needed both in $b\to d$
and $b\to s$ transitions. We provide the regions for the Wilson coefficients
consistent with both optimised observables and individual branching ratios. The
NP scenarios considered to explain the deviations of ${\bar B}_{d,s}\to
K^{*0}\bar{K}^{*0}$ and ${\bar B}_{d,s}\to K^{0}\bar{K}^{0}$ can yield
deviations up to an order of magnitude among the observables that we introduced
for ${\bar B}_{d,s} \to K^{0} \bar{K}^{*0}$ and ${\bar B}_{d,s}\to\bar{K}^{0}
{K^{*0}}$. Probing these new observables experimentally may confirm the
consistency of the deviations already observed and provide a highly valuable
hint of NP in the non-leptonic sector.
|
Nanometallic devices based on amorphous insulator-metal thin films are
developed to provide a novel non-volatile resistance-switching random-access
memory (RRAM). In these devices, data recording is controlled by a bipolar
voltage, which tunes electron localization length, thus resistivity, through
electron trapping/detrapping. The low-resistance state is a metallic state
while the high-resistance state is an insulating state, as established by
conductivity studies from 2K to 300K. The material is exemplified by a Si3N4
thin film with randomly dispersed Pt or Cr. It has been extended to other
materials, spanning a large library of oxide and nitride insulator films,
dispersed with transition and main-group metal atoms. Nanometallic RRAMs have
superior properties that set them apart from other RRAMs. The critical
switching voltage is independent of the film thickness/device
area/temperature/switching speed. Trapped electrons are relaxed by
electron-phonon interaction, adding stability which enables long-term memory
retention. As the electron-phonon interaction is mechanically altered, trapped
electrons can be destabilized, and sub-picosecond switching has been
demonstrated using an electromagnetically generated stress pulse. AC impedance
spectroscopy confirms the resistance state is spatially uniform, providing a
capacitance that linearly scales with area and inversely scales with thickness.
The spatial uniformity is also manifested in outstanding uniformity of
switching properties. Device degradation, due to moisture, electrode oxidation
and dielectrophoresis, is minimal when dense thin films are used or when a
hermetic seal is provided. The potential for low power operation, multi-bit
storage and complementary stacking have been demonstrated in various RRAM
configurations.
|
We analyze mobility changes following the implementation of containment
measures aimed at mitigating the spread of COVID-19 in Bogot\'a, Colombia. We
characterize the mobility network before and during the pandemic and analyze
its evolution and changes between January and July 2020. We then link the
observed mobility changes to socioeconomic conditions, estimating a gravity
model to assess the effect of socioeconomic conditions on mobility flows. We
observe an overall reduction in mobility trends, but the overall connectivity
between different areas of the city remains after the lockdown, reflecting the
mobility network's resilience. We find that the responses to lockdown policies
depend on socioeconomic conditions. Before the pandemic, the population with
better socioeconomic conditions shows higher mobility flows. Since the
lockdown, mobility presents a general decrease, but the population with worse
socioeconomic conditions shows smaller decreases in mobility flows. We conclude
by deriving policy implications.
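A gravity model of the kind estimated here relates flows to zone "masses" and a distance deterrence. A minimal sketch; the parameter names and power-law deterrence form are illustrative assumptions, and the paper's specification additionally includes socioeconomic covariates:

```python
def gravity_flow(mass_i, mass_j, dist_ij, k=1.0, alpha=1.0, beta=1.0, gamma=2.0):
    """Toy gravity model: the flow between zones i and j grows with their
    'masses' (e.g. population) and decays as a power law of distance.
    All parameter values here are illustrative, not estimated."""
    return k * (mass_i ** alpha) * (mass_j ** beta) / (dist_ij ** gamma)
```

For example, with gamma = 2, doubling the distance between two zones reduces the predicted flow by a factor of four.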
|
In this paper, we investigate the spinless stationary Schr\"odinger equation
for the electron when it is permanently bound to a generalized Ellis-Bronnikov
graphene wormhole-like surface. The curvature gives rise to a geometric
potential affecting thus the electronic dynamics. The geometry of the
wormhole's shape is controlled by the parameter $n$ which assumes even values.
We discuss the role played by the parameter $n$ and the orbital angular
momentum on bound states and probability density for the electron.
|
As a representation learning method, the nearest regularized subspace (NRS)
algorithm is an effective tool for obtaining both accuracy and speed in PolSAR
image classification. However, existing NRS methods use the polarimetric
feature vector rather than the original PolSAR covariance matrix (known as a
Hermitian positive definite (HPD) matrix) as the input. Without considering the
matrix structure, existing NRS-based methods cannot learn correlations among
channels. How to utilize the original covariance matrix in the NRS method is a
key problem. To address this limitation, a Riemannian NRS method is proposed,
which considers HPD matrices as points in a Riemannian space. Firstly, to
utilize the original PolSAR data, a Riemannian NRS method (RNRS) is proposed by
constructing an HPD dictionary and an HPD distance metric. Secondly, a new
Tikhonov regularization term is designed to reduce the differences within the
same class. Finally, an optimization method is developed and the first-order
derivative is derived. In the experimental tests, only the T matrix is used in
the proposed method, while multiple features are utilized by the compared
methods. Experimental results demonstrate that the proposed method can
outperform state-of-the-art algorithms even when using fewer features.
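The abstract does not specify which HPD distance metric is used; one common choice for comparing HPD/SPD matrices, sketched below purely as an assumption, is the log-Euclidean distance:

```python
import numpy as np

def logm_hpd(A):
    """Matrix logarithm of a Hermitian positive definite matrix,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def log_euclidean_dist(A, B):
    """Log-Euclidean distance: d(A, B) = || log(A) - log(B) ||_F.
    One common Riemannian-style metric on HPD matrices."""
    return np.linalg.norm(logm_hpd(A) - logm_hpd(B))

I = np.eye(2)
```

Such a metric respects the curved geometry of the HPD cone, which a Euclidean distance on raw matrix entries ignores.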
|
The geometric description of Yang-Mills theories and their configuration
space M is reviewed. The presence of singularities in M is explained and some
of their properties are described. The singularity structure is analyzed in
detail for structure group SU(2).
|
Quantum mechanics represents one of the greatest triumphs of human intellect
and, undoubtedly, is the most successful physical theory we have to date.
However, since its foundation about a century ago, it has been uninterruptedly
the center of harsh debates ignited by the counterintuitive character of some
of its predictions. The subject of one of these heated discussions is the
so-called "retrodiction paradox", namely a deceptive inconsistency of quantum
mechanics which is often associated with the "measurement paradox" and the
"collapse of the wave function"; it comes from the apparent time-asymmetry
between state preparation and measurement. Actually, in the literature one
finds several versions of the retrodiction paradox; however, a particularly
insightful one was presented by Sir Roger Penrose in his seminal book \emph{The
Road to Reality}. Here, we address the question to what degree Penrose's
retrodiction paradox occurs in the classical and quantum domain. We achieve a
twofold result. First, we show that Penrose's paradox manifests itself in some
form also in classical optics. Second, we demonstrate that when information is
correctly extracted from the measurements and the quantum-mechanical formalism
is properly applied, Penrose's retrodiction paradox does not manifest itself in
quantum optics.
|
We present a highly parallel implementation of the cross-correlation of
time-series data using graphics processing units (GPUs), which is scalable to
hundreds of independent inputs and suitable for the processing of signals from
"Large-N" arrays of many radio antennas. The computational part of the
algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi
architecture, sustaining up to 79% of the peak single-precision floating-point
throughput. We compare performance obtained for hardware- and software-managed
caches, observing significantly better performance for the latter. The high
performance reported involves use of a multi-level data tiling strategy in
memory and use of a pipelined algorithm with simultaneous computation and
transfer of data from host to device memory. The speed of code development,
flexibility, and low cost of the GPU implementations compared to ASIC and FPGA
implementations have the potential to greatly shorten the cycle of correlator
development and deployment, for cases where some power consumption penalty can
be tolerated.
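The X-engine's cross-multiply-and-accumulate step has a compact reference formulation (a sketch of the mathematics on the CPU, not the GPU kernel): for complex time samples x_i(t) from antenna i, the visibility matrix is V[i, j] = sum_t x_i*(t) x_j(t).

```python
import numpy as np

def x_engine(samples):
    """Reference X-engine: V[i, j] = sum_t conj(x_i(t)) * x_j(t)
    for complex samples of shape (n_time, n_antenna).
    Returns the Hermitian (n_antenna, n_antenna) visibility matrix."""
    return samples.conj().T @ samples

rng = np.random.default_rng(1)
x = rng.standard_normal((1024, 8)) + 1j * rng.standard_normal((1024, 8))
V = x_engine(x)   # 8x8 Hermitian visibility matrix
```

The GPU implementation tiles this O(N^2) pairwise product across thread blocks and overlaps computation with host-to-device transfers, as described above.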
|
The topological analysis from Bjorkman (1995) for the standard model that
describes the winds from hot stars by Castor, Abbott & Klein (1975) has been
extended to include the effect of stellar rotation and changes in the
ionization of the wind. The differential equation for the momentum of the wind
is non--linear and transcendental for the velocity gradient. Due to this
non--linearity the number of solutions that this equation possess is not known.
After a change of variables and the introduction of a new physically
meaningless independent variable, we manage to replace the non--linear momentum
differential equation by a system of differential equations where all the
derivatives are {\it{explicitly}} given. We then use this system of equations
to study the topology of the rotating--CAK model. For the particular case when
the wind is frozen in ionization ($\delta=0$) only one physical solution is
found, the standard CAK solution, with a X--type singular point. For the more
general case ($\delta \neq 0$), besides the standard CAK singular point, we
find a second singular point which is focal--type (or attractor). We also find
that the wind does not adopt the maximal mass--loss rate but almost the
minimal one.
|
Sensor simulation is a key component for testing the performance of
self-driving vehicles and for data augmentation to better train perception
systems. Typical approaches rely on artists to create both 3D assets and their
animations to generate a new scenario. This, however, does not scale. In
contrast, we propose to recover the shape and motion of pedestrians from sensor
readings captured in the wild by a self-driving car driving around. Towards
this goal, we formulate the problem as energy minimization in a deep structured
model that exploits human shape priors, reprojection consistency with 2D poses
extracted from images, and a ray-caster that encourages the reconstructed mesh
to agree with the LiDAR readings. Importantly, we do not require any
ground-truth 3D scans or 3D pose annotations. We then incorporate the
reconstructed pedestrian assets bank in a realistic LiDAR simulation system by
performing motion retargeting, and show that the simulated LiDAR data can be
used to significantly reduce the amount of annotated real-world data required
for visual perception tasks.
|
We propose novel two-channel filter banks for signals on graphs. Our designs
can be applied to arbitrary graphs, given a positive semidefinite variation
operator, while using arbitrary vertex partitions for downsampling. The
proposed generalized filter banks (GFBs) also satisfy several desirable
properties including perfect reconstruction and critical sampling, while having
efficient implementations. Our results generalize previous approaches that were
only valid for the normalized Laplacian of bipartite graphs. Our approach is
based on novel graph Fourier transforms (GFTs) given by the generalized
eigenvectors of the variation operator. These GFTs are orthogonal in an
alternative inner product space which depends on the downsampling and variation
operators. Our key theoretical contribution is showing that the spectral
folding property of the normalized Laplacian of bipartite graphs, at the core
of bipartite filter bank theory, can be generalized for the proposed GFT if the
inner product matrix is chosen properly. In addition, we study vertex domain
and spectral domain properties of GFBs and illustrate their probabilistic
interpretation using Gaussian graphical models. While GFBs can be defined given
any choice of a vertex partition for downsampling, we propose an algorithm to
optimize these partitions with a criterion that favors balanced partitions with
large graph cuts, which are shown to lead to efficient and stable GFB
implementations. Our numerical experiments show that partition-optimized GFBs
can be implemented efficiently on 3D point clouds with hundreds of thousands of
points (nodes), while also improving the color signal representation quality
over competing state-of-the-art approaches.
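The construction of a GFT from generalized eigenvectors can be sketched concretely: solving L u = lambda Q u through a Cholesky factor of the inner-product matrix Q yields a basis orthonormal in the Q-inner product <x, y>_Q = x^T Q y. A minimal sketch; the choice Q = degree matrix in the example is illustrative, not the paper's specific operator:

```python
import numpy as np

def generalized_gft(L, Q):
    """GFT basis from the generalized eigenproblem L u = lam * Q u.
    With Q = C C^T (Cholesky), u = C^{-T} w for eigenvectors w of
    C^{-1} L C^{-T}; the returned columns are orthonormal in the
    Q-inner product <x, y>_Q = x^T Q y."""
    C = np.linalg.cholesky(Q)
    Cinv = np.linalg.inv(C)
    lam, W = np.linalg.eigh(Cinv @ L @ Cinv.T)
    return lam, Cinv.T @ W

# Toy graph: a path 1-0-2 with combinatorial Laplacian L and Q = degree matrix.
A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
D = np.diag(A.sum(axis=1))
L = D - A
lam, U = generalized_gft(L, D)
```

It is precisely this Q-orthogonality that restores the spectral folding property on which the two-channel filter-bank design relies.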
|
Given a symplectomorphism f of a symplectic manifold X, one can form the
`symplectic mapping cylinder' $X_f = (X \times R \times S^1)/Z$ where the Z
action is generated by $(x,s,t)\mapsto (f(x),s+1,t)$. In this paper we compute
the Gromov invariants of the manifolds $X_f$ and of fiber sums of the $X_f$
with other symplectic manifolds. This is done by expressing the Gromov
invariants in terms of the Lefschetz zeta function of f and, in special cases,
in terms of the Alexander polynomials of knots. The result is a large set of
interesting non-Kahler symplectic manifolds with computational ways of
distinguishing them. In particular, this gives a simple symplectic construction
of the `exotic' elliptic surfaces recently discovered by Fintushel and Stern
and of related `exotic' symplectic 6-manifolds.
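For reference, the Lefschetz zeta function entering this computation is the standard exponential generating function of the Lefschetz numbers of the iterates of $f$ (this is the textbook definition, not a formula specific to the paper):

```latex
\zeta_f(t) \;=\; \exp\!\left( \sum_{n\ge 1} \frac{L(f^n)}{n}\, t^n \right),
\qquad
L(f^n) \;=\; \sum_{k} (-1)^k \,\mathrm{tr}\!\left( (f^n)_* : H_k(X;\mathbb{Q}) \to H_k(X;\mathbb{Q}) \right).
```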
|
This brief note is devoted to a study of genuine non-perturbative corrections
to the Landau gauge ghost-gluon vertex in terms of the non-vanishing
dimension-two gluon condensate. We pay special attention to the kinematical
limit in which, according to the well-known Taylor theorem, the vertex reduces
to its tree-level expression at any perturbative order. Based on our
OPE analysis, we also present a simple model for the vertex, in acceptable
agreement with lattice data.
|
Semi-supervised domain adaptation (SSDA) aims to bridge source and target
domain distributions, with a small number of target labels available, achieving
better classification performance than unsupervised domain adaptation (UDA).
However, existing SSDA work fails to make full use of label information from
both source and target domains for feature alignment across domains, resulting
in label mismatch in the label space during model testing. This paper presents
a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE),
to tackle this issue. Firstly, we introduce a cross-domain feature alignment
strategy, Inter-domain Mixup, that incorporates label information into model
adaptation. Specifically, we employ sample-level and manifold-level data mixing
to generate compatible training samples. These newly generated samples,
combined with reliable, actual label information, are both diverse and
compatible across domains; this extra supervision facilitates cross-domain
feature alignment and mitigates label mismatch. Additionally, we
utilize Neighborhood Expansion to leverage high-confidence pseudo-labeled
samples in the target domain, diversifying the label information of the target
domain and thereby further increasing the performance of the adaptation model.
Accordingly, the proposed approach outperforms existing state-of-the-art
methods, achieving significant accuracy improvements on popular SSDA
benchmarks, including DomainNet, Office-Home, and Office-31.
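The sample-level data mixing described above follows the standard mixup recipe. A minimal sketch, where the batch shapes, the Beta prior, and all names are illustrative, and the paper's manifold-level mixing and loss design are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def interdomain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=1.0):
    """Mix labeled source and labeled target samples pairwise (sample level)."""
    lam = rng.beta(alpha, alpha)                # mixing ratio ~ Beta(alpha, alpha)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt   # soft labels from one-hot inputs
    return x_mix, y_mix, lam

# Toy batch: 4 source and 4 target samples, one-hot labels over 3 classes.
x_s = rng.standard_normal((4, 8))
x_t = rng.standard_normal((4, 8))
y_s = np.eye(3)[[0, 1, 2, 0]]
y_t = np.eye(3)[[2, 0, 1, 1]]

x_m, y_m, lam = interdomain_mixup(x_s, y_s, x_t, y_t)
assert 0.0 <= lam <= 1.0
assert np.allclose(y_m.sum(axis=1), 1.0)   # mixed labels remain distributions
```

In the actual method, these mixed samples are fed to the adaptation model alongside pseudo-labeled target samples from Neighborhood Expansion.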
|
Within the field of Humanities, there is a recognized need for educational
innovation, as there are currently no reported tools available that enable
individuals to interact with their environment to create an enhanced learning
experience in the humanities (e.g., immersive spaces). This project proposes a
solution to address this gap by integrating technology and promoting the
development of teaching methodologies in the humanities, specifically by
incorporating emotional monitoring during the learning process of humanistic
context inside an immersive space. In order to achieve this goal, a real-time
emotion detection EEG-based system was developed to interpret and classify
specific emotions. These emotions align with those proposed early by Descartes
(the Passions): admiration, love, hate, desire, joy, and sadness. This
system aims to integrate emotional data into the Neurohumanities Lab
interactive platform, creating a comprehensive and immersive learning
environment. This work developed a machine learning (ML), real-time emotion
detection model that provided Valence, Arousal, and Dominance (VAD)
estimations every 5 seconds. Using principal component analysis (PCA), power
spectral density (PSD) features, Random Forest (RF), and Extra-Trees, the
eight best channels and their respective best band powers were extracted;
furthermore, multiple models were evaluated using shift-based data division
and cross-validation. After assessing their performance, Extra-Trees achieved
a general accuracy of 96%, higher than that reported in the literature (88%).
The proposed model provided real-time predictions of the VAD variables and was
adapted to classify Descartes' six main passions. Moreover, the obtained VAD
values allow more than 15 emotions to be classified (as reported in VAD
emotion mappings), extending the range of this application.
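Mapping continuous VAD estimates to Descartes' discrete passions can be done by nearest prototype in VAD space. The sketch below illustrates the idea only; the prototype coordinates are hypothetical placeholders, not the paper's calibrated mapping.

```python
import math

# Hypothetical VAD prototypes (valence, arousal, dominance) in [-1, 1];
# these coordinates are illustrative, not taken from the paper.
PASSIONS = {
    "admiration": (0.6, 0.5, 0.4),
    "love":       (0.9, 0.5, 0.6),
    "hate":       (-0.8, 0.6, 0.4),
    "desire":     (0.5, 0.7, 0.5),
    "joy":        (0.9, 0.7, 0.7),
    "sadness":    (-0.7, -0.4, -0.4),
}

def classify_vad(v, a, d):
    """Return the passion whose prototype is closest (Euclidean) in VAD space."""
    return min(PASSIONS, key=lambda p: math.dist((v, a, d), PASSIONS[p]))

assert classify_vad(0.85, 0.65, 0.7) == "joy"
assert classify_vad(-0.7, -0.3, -0.5) == "sadness"
```

With a richer prototype table (e.g., a published VAD emotion mapping), the same nearest-prototype rule extends to the 15+ emotions mentioned above.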
|
We investigate the causes of the different shape of the $K$-band number
counts when compared to other bands, analyzing in detail the presence of a
change in the slope around $K\sim17.5$. We present a near-infrared imaging
survey, conducted at the 3.5m telescope of the Calar Alto Spanish-German
Astronomical Center (CAHA), covering two separated fields centered on the HFDN
and the Groth field, with a total combined area of $\sim0.27$deg$^{2}$ to a
depth of $K\sim19$ ($3\sigma$,Vega). We derive luminosity functions from the
observed $K$-band in the redshift range [0.25-1.25], that are combined with
data from the references in multiple bands and redshifts, to build up the
$K$-band number count distribution. We find that the overall shape of the
number counts can be grouped into three regimes: the classic Euclidean slope
regime ($d\log N/dm\sim0.6$) at bright magnitudes; a transition regime at
intermediate magnitudes, dominated by $M^{\ast}$ galaxies at the redshift that
maximizes the product $\phi^{\ast}\frac{dV_{c}}{d\Omega}$; and an $\alpha$
dominated regime at faint magnitudes, where the slope asymptotically approaches
$-0.4(\alpha+1)$, controlled by post-$M^{\ast}$ galaxies. The slope of the
$K$-band number counts shows an average decrement of $\sim50\%$ in the range
$15.5<K<18.5$ ($d\log N/dm\sim0.6-0.30$). The rate of change in the slope is
highly sensitive to cosmic variance effects. The decreasing trend is the
consequence of a prominent decrease of the characteristic density
$\phi^{\ast}_{K,obs}$ ($\sim60\%$ from $z=0.5$ to $z=1.5$) and an almost flat
evolution of $M^{\ast}_{K,obs}$ (1$\sigma$ compatible with
$M^{\ast}_{K,obs}=-22.89\pm0.25$ in the same redshift range).
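The faint-end asymptote quoted above, $d\log N/dm \to -0.4(\alpha+1)$, is a one-line relation between the Schechter faint-end slope and the counts slope. A trivial check (the specific $\alpha$ values are illustrative, not fits from the paper):

```python
# Asymptotic slope of the number counts, d log N / dm -> -0.4 * (alpha + 1),
# in the alpha-dominated regime at faint magnitudes.
def faint_end_count_slope(alpha: float) -> float:
    return -0.4 * (alpha + 1.0)

# A counts slope of ~0.30 (the faint end of the range quoted above)
# corresponds to alpha = -1.75 under this relation.
assert abs(faint_end_count_slope(-1.75) - 0.30) < 1e-12

# alpha = -1 gives flat counts, the boundary between rising and falling N(m).
assert faint_end_count_slope(-1.0) == 0.0
```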
|
Speech 'in-the-wild' is a handicap for speaker recognition systems due to the
variability induced by real-life conditions, such as environmental noise and
the emotional state of the speaker. Taking advantage of the principles of
representation learning, we aim to design a recurrent denoising autoencoder
that extracts robust speaker embeddings from noisy spectrograms to perform
speaker identification. The end-to-end proposed architecture uses a feedback
loop to encode information regarding the speaker into low-dimensional
representations extracted by a spectrogram denoising autoencoder. We employ
data augmentation techniques by additively corrupting clean speech with
real-life environmental noise in a database containing real stressed speech.
Our study shows that joint optimization of the denoiser and speaker
identification modules outperforms both independent optimization of the two
components and hand-crafted features under stress and noise distortions.
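The additive-noise augmentation described above amounts to scaling a noise recording so that the clean-to-noise power ratio hits a target SNR before mixing. A self-contained sketch with synthetic placeholder signals (real speech/noise recordings and sample rates would be substituted in practice):

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio equals `snr_db`, then add."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + gain * noise

# Placeholder signals: 1 s of a 440 Hz tone as "speech", white noise as "noise".
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)

noisy = mix_at_snr(clean, noise, snr_db=5.0)

# Verify the achieved SNR matches the 5 dB target.
achieved = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
assert abs(achieved - 5.0) < 1e-6
```

Spectrograms of such noisy/clean pairs would then serve as input/target for the denoising autoencoder.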
|
We show that the free factor complex of the free group of rank at least 3
does not satisfy a combinatorial isoperimetric inequality: that is, for every N
greater than or equal to 3, there is a loop of length 4 in the free factor
complex that only bounds discs containing at least O(N) triangles. To prove the
result, we construct a coarsely Lipschitz function from the `upward link' of a
free factor to the set of integers.
|