We study a model with fractional quantum numbers using Monte Carlo
techniques. The model is composed of bosons interacting through a $Z_2$ gauge
field. We find that the system has three phases: a phase in which the bosons
are confined, a fractionalized phase in which the bosons are deconfined, and a
phase in which the bosons are condensed. The deconfined phase has a
``topological'' order due to the degeneracy in the ground state of the gauge
field. We discuss an experimental test proposed by Senthil and Fisher that uses
the topological order to determine the existence of a deconfined,
fractionalized phase.
|
The aim of this paper is to give a simpler, more usable sufficient condition
for the regularity of generic weakly stationary time series. This condition
is also used to show how regular processes satisfying it can be approximated
by a lower-rank \emph{regular} process. The relevance of these issues is
underscored by the ever increasing presence of high-dimensional data in many
fields, which makes low-rank processes and low-rank approximations more
important. Moreover, regular processes are the ones completely driven by
random innovations, so they are primary targets in both theory and
applications.
|
This paper defines the specifications of a management language intended to
automate the control and administration of various service components connected
to a digital ecosystem. The language, called EML (short for Ecosystem
Management Language), is based on a proprietary syntax and notation and
contains a set of managerial commands issued by the system's administrator
via a command console. Additionally, EML ships with a collection of
self-adaptation procedures called SAP. Their purpose is to provide
self-adaptation properties to the ecosystem, allowing it to optimize itself
based on the state of its execution environment. On top of that, there exists
the EMU (short for Ecosystem Management Unit), which interprets, validates,
parses, and executes EML commands and SAP procedures. Future research can
extend EML to support a larger set of commands as well as a larger set of
SAP procedures.
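Since EML's syntax is proprietary and not given in the abstract, the following is a purely hypothetical sketch of how an EMU-style unit might validate, parse, and dispatch console commands; every command name and the registry structure are invented for illustration.

```python
# Hypothetical sketch of an EMU-style command interpreter.
# EML's real syntax is proprietary; all verbs below are invented.

def make_registry():
    """Map command verbs to handlers (illustrative placeholders)."""
    return {
        "start":  lambda target: f"started {target}",
        "stop":   lambda target: f"stopped {target}",
        "status": lambda target: f"{target}: ok",
    }

def execute(line, registry):
    """Validate, parse, and execute a single console command."""
    parts = line.strip().split()
    if len(parts) != 2:                 # parse step
        raise ValueError(f"expected '<verb> <target>', got: {line!r}")
    verb, target = parts
    if verb not in registry:            # validation step
        raise ValueError(f"unknown command: {verb}")
    return registry[verb](target)       # execution step
```

For example, `execute("start cache-service", make_registry())` would return `"started cache-service"`; a real EMU would presumably add authentication, richer grammar, and SAP hooks on top of such a dispatch core.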
|
Using a formulation of quantum mechanics based on orthogonal polynomials in
the energy and physical parameters, we study quantum systems totally confined
in space and associated with the discrete Meixner polynomials. We present
several examples of such systems, derive their corresponding potential
functions, and plot some of their bound states.
|
We have numerically studied the dynamic correlation functions in
thermodynamic equilibrium of two-dimensional O(2)-symmetry models with either
bond (RSJ) or site (TDGL) dissipation as a function of temperature T. We find
that above the critical temperature the frequency dependent flux noise
$S_{\Phi}(\omega)\sim \vert 1+ {(\omega/\Omega)}^2\vert^{-\alpha (T)/2}$, with
$0.85\leq \alpha (TDGL)(T)\leq 0.95$ and $1.17 \leq \alpha (RSJ)(T) \leq 1.27$,
while the dynamic critical exponents $z(TDGL)\sim 2.0$ and $z(RSJ)\sim 0.9$.
Contrary to expectation, the TDGL results are in closer agreement with the
experiments on Josephson-junction arrays by Shaw et al. than those from the
RSJ model. We find that these results are related to anomalous vortex diffusion
through vortex clusters.
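As a minimal illustration of how the quoted spectral form can be used, the sketch below fits the exponent $\alpha$ by linearizing $S_\Phi(\omega)=S_0[1+(\omega/\Omega)^2]^{-\alpha/2}$ in log space; the knee frequency $\Omega$ and all numbers are synthetic assumptions, not the paper's data.

```python
import numpy as np

def fit_alpha(omega, S, Omega):
    """Fit (alpha, S0) in S = S0 * (1 + (omega/Omega)^2)^(-alpha/2),
    assuming the knee frequency Omega is known: taking logs makes the
    model linear in x = log(1 + (omega/Omega)^2)."""
    x = np.log1p((omega / Omega) ** 2)
    y = np.log(S)
    slope, intercept = np.polyfit(x, y, 1)
    return -2.0 * slope, np.exp(intercept)

# synthetic spectrum with alpha = 1.2 (within the quoted RSJ range)
omega = np.logspace(-2, 2, 200)
S = 3.0 * (1.0 + (omega / 1.5) ** 2) ** (-1.2 / 2)
alpha, S0 = fit_alpha(omega, S, Omega=1.5)   # recovers alpha ~ 1.2
```

In practice $\Omega$ would be fit jointly with $\alpha$ (e.g. by nonlinear least squares), but the linearized version suffices to show how $\alpha(T)$ could be extracted at each temperature.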
|
A precise knowledge of the masses of supernova progenitors is essential to
answer various questions of modern astrophysics, such as those related to the
dynamical and chemical evolution of galaxies. In this paper we revise the upper
bound for the mass of the progenitors of CO white dwarfs (\mup) and the lower
bound for the mass of the progenitors of normal type II supernovae (\mups). In
particular, we present new stellar models with mass between 7 and 10 \msun,
discussing their final destiny and the impact of recent improvements in our
understanding of the low energy rate of the \c12c12 reaction.
|
In this work, we propose an economical model to address some open
cosmological problems such as the absence of the initial cosmological
singularity, an early acceleration of the Universe and the generation of
matter-antimatter asymmetry. The model is based on a scenario in which the
early Universe consists of non-linear electrodynamics fields. It is found
that the non-linear electrodynamics model has an equation of state
$p=\frac{1}{3} \rho - \frac{4}{3} \beta \rho^{1+\alpha}$, which shows that the
Universe undergoes an early epoch of acceleration before reaching a radiation
era given by $p=\frac{1}{3} \rho$. We show that the singularities in the energy density,
pressure and curvature are absent at early stages. In our scenario, the baryon
asymmetry is generated by the non-linearity parameter $\beta$. Additionally, we
calculate the resulting baryon asymmetry and discuss how a successful
gravitational baryogenesis is obtained for different values of the model's
parameter space.
|
Recombined fingerprints have been suggested as a convenient approach to
improve the efficiency of anonymous fingerprinting for the legal distribution
of copyrighted multimedia contents in P2P systems. The recombination idea is
inspired by the principles of mating, recombination and heredity of the DNA
sequences of living beings, but applied to binary sequences, like in genetic
algorithms. However, the existing recombination-based fingerprinting systems do
not provide a convenient solution for collusion resistance, since they require
double-layer fingerprinting codes, making the practical implementation of such
systems a challenging task. In fact, collusion resistance is regarded as the
most relevant requirement of a fingerprinting scheme, and the lack of any
acceptable solution to this problem would possibly deter content merchants from
deploying any practical implementation of the recombination approach. In this
paper, this drawback is overcome by introducing two non-trivial improvements,
paving the way for a future real-life application of recombination-based
systems. First, Nuida et al.'s collusion-resistant codes are used in
segment-wise fashion for the first time. Second, a novel version of the
traitor-tracing algorithm is proposed in the encrypted domain, also for the
first time, making it possible to provide the buyers with security against
framing. In addition, the proposed method avoids the use of public-key
cryptography for the multimedia content and expensive cryptographic protocols,
leading to excellent performance in terms of both computational and
communication burdens. The paper also analyzes the security and privacy
properties of the proposed system both formally and informally, whereas the
collusion resistance and the performance of the method are shown by means of
experiments and simulations.
|
We present a novel concept of proportional gas amplification for the read-out
of the spherical proportional counter. The standard single-ball read-out
presents limitations for large-diameter spherical detectors and high-pressure
operations. We have developed a multi-ball read-out system which consists of
several balls sitting at a fixed distance from the center of the spherical
vessel. Such a module can tune the volume electric field at the desired value
and can also provide detector segmentation with individual ball read-out. In
the latter case the large volume of the vessel becomes a spherical time
projection chamber with 3D capabilities.
|
Higher-dimensional models of grand unification allow us to relate the top
Yukawa coupling y_t to the gauge coupling g. The tree level relation y_t=g at
the scale of grand unification implies, in the framework of the MSSM, a rather
small ratio of Higgs expectation values tan beta. We find that, in the presence
of localized Fayet-Iliopoulos terms, y_t is suppressed relative to g because the
bulk fields acquire non-trivial profiles whose overlap is smaller than in the
case of flat profiles. This increases the prediction for tan beta to moderately
large values. Thus tan beta is related to the geometry of compact space. We
also discuss explicit realizations of such settings in orbifold
compactifications of the heterotic string. It turns out that anisotropic
compactifications, allowing for an orbifold GUT interpretation, are favored.
|
The statistical properties of patch electric fields due to a polycrystalline
metal surface are calculated. The fluctuations in the electric field scale like
1/z^2, when z >> w, where z is the distance to the surface, and w is the
characteristic length scale of the surface patches. For typical thermally
evaporated gold surfaces these field fluctuations are comparable to the image
field of an elementary charge, and scale in the same way with distance to the
surface. Expressions for calculating the statistics of the inhomogeneous
broadening of Rydberg atom energies due to patch electric fields are presented.
Spatial variations in the patch fields over the Rydberg orbit are found to be
insignificant.
|
We introduce a new model that mimics the strong and sudden effects induced by
conformity in tightly interacting human societies. Such effects range from mere
crowd phenomena to dramatic political turmoil. The model is a modified version
of the Ising Hamiltonian. We have studied the properties of this Hamiltonian
using both a Metropolis simulation and analytical derivations. Our study shows
that increasing the value of the conformity parameter results in a first-order
phase transition. As a result, a majority of people begin to honestly support
an idea that may contradict the moral principles of a normal human being, even
though each individual would uphold those principles without tight interaction
with the society. Thus, above some critical level of conformity, our society
turns out to be unstable with respect to ideas that might be doubtful. Our
model includes, in a simplified way, human diversity with respect to loyalty
to moral principles.
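The abstract does not give the modified Hamiltonian explicitly, so the following is only a minimal Metropolis sketch of an Ising-type model in which spins +1 represent support for the doubtful idea, J stands in for the conformity coupling, and a field h stands in for loyalty to the moral principle; all parameter values are illustrative assumptions.

```python
import numpy as np

def metropolis(L=16, J=1.0, h=-0.2, T=2.0, sweeps=200, seed=0):
    """Metropolis dynamics for H = -J*sum_<ij> s_i s_j - h*sum_i s_i
    on an L x L periodic lattice; returns the mean 'support' per agent."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
           + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * (J * nb + h)     # energy cost of flipping s[i,j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return s.mean()
```

Sweeping J upward at fixed T and h and watching the mean support jump abruptly is one way to probe the kind of first-order, conformity-driven transition the abstract describes.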
|
The study of arguments as abstract entities and their interaction as
introduced by Dung (Artificial Intelligence 77, 1995) has become one of the
most active research branches within Artificial Intelligence and Reasoning. A
main issue for abstract argumentation systems is the selection of acceptable
sets of arguments. Value-based argumentation, as introduced by Bench-Capon (J.
Logic Comput. 13, 2003), extends Dung's framework. It takes into account the
relative strength of arguments with respect to some ranking representing an
audience: an argument is subjectively accepted if it is accepted with respect
to some audience, it is objectively accepted if it is accepted with respect to
all audiences. Deciding whether an argument is subjectively or objectively
accepted are both computationally intractable problems. In fact, the
problems remain intractable under structural restrictions that render the main
computational problems for non-value-based argumentation systems tractable. In
this paper we identify nontrivial classes of value-based argumentation systems
for which the acceptance problems are polynomial-time tractable. The classes
are defined by means of structural restrictions in terms of the underlying
graphical structure of the value-based system. Furthermore we show that the
acceptance problems are intractable for two classes of value-based systems that
were conjectured to be tractable by Dunne (Artificial Intelligence 171, 2007).
|
The multiple extension problem arises frequently in diagnostic and default
inference. That is, we can often use any of a number of sets of defaults or
possible hypotheses to explain observations or make predictions. In default
inference, some extensions seem to be simply wrong and we use qualitative
techniques to weed out the unwanted ones. In the area of diagnosis, however,
the multiple explanations may all seem reasonable, however improbable. Choosing
among them is a matter of quantitative preference. Quantitative preference
works well in diagnosis when knowledge is modelled causally. Here we suggest a
single unified framework that combines probabilities and defaults and retains
the semantics of diagnosis as the construction of explanations from a fixed
set of possible hypotheses. We can then compute probabilities incrementally as
we construct explanations. We describe a
branch and bound algorithm that maintains a set of all partial explanations
while exploring a most promising one first. A most probable explanation is
found first if explanations are partially ordered.
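As an illustration of the best-first strategy described above (not the paper's exact algorithm), the sketch below grows partial explanations in a priority queue ordered by probability; because adding a hypothesis can only lower an explanation's probability, the first complete explanation popped is a most probable one. The toy hypotheses and their priors are invented.

```python
import heapq
from itertools import count

def most_probable_explanation(hypotheses, observations):
    """hypotheses: {name: (prior, frozenset of observations it explains)}.
    Best-first branch and bound over partial explanations."""
    tie = count()                       # tie-breaker so the heap never
    frontier = [(-1.0, next(tie), frozenset(), ())]   # compares sets
    while frontier:
        neg_p, _, covered, chosen = heapq.heappop(frontier)
        if observations <= covered:     # complete explanation pops first
            return -neg_p, set(chosen)
        for h, (p, obs) in hypotheses.items():
            if h not in chosen and obs & (observations - covered):
                heapq.heappush(
                    frontier,
                    (neg_p * p, next(tie), covered | obs, chosen + (h,)))
    return None

hyps = {
    "flu":     (0.10, frozenset({"fever", "cough"})),
    "cold":    (0.30, frozenset({"cough"})),
    "measles": (0.01, frozenset({"fever", "rash"})),
}
best = most_probable_explanation(hyps, frozenset({"fever", "cough"}))
# best == (0.1, {"flu"}): flu alone beats cold+measles (0.3 * 0.01)
```

The frontier here holds all partial explanations, exactly as in a branch-and-bound search, so the same structure can enumerate the second-best and later explanations by continuing to pop after the first return.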
|
Over the past few years, ubiquitous, or pervasive computing has gained
popularity as the primary approach for a wide range of applications, including
enterprise-grade systems, consumer applications, and gaming systems. Ubiquitous
computing refers to the integration of computing technologies into everyday
objects and environments, creating a network of interconnected devices that can
communicate with each other and with humans. By using ubiquitous computing
technologies, communities can become more connected and efficient, with members
able to communicate and collaborate more easily. Such interconnectedness and
collaboration can lead to a more successful and sustainable community. The
spread of ubiquitous computing, however, has
emphasized the importance of automated learning and smart applications in
general. Even though there have been significant strides in Artificial
Intelligence and Deep Learning, large scale adoption has been hesitant due to
mounting pressure on expensive and highly complex cloud numerical-compute
infrastructures. Adopting, and even developing, practical machine learning
systems can come with prohibitive costs, not only in terms of complex
infrastructures but also of solid expertise in Data Science and Machine
Learning. In this paper we present an innovative approach for low-code
development and deployment of end-to-end AI cooperative application pipelines.
We address infrastructure allocation, costs, and secure job distribution in a
fully decentralized global cooperative community based on tokenized economics.
|
We study the energy loss of a quark moving in a strongly coupled QGP under
the influence of anisotropy. The heavy quark drag force, diffusion coefficient,
and jet quenching parameter are calculated using the Einstein-Maxwell-dilaton
model, where the anisotropic background is characterized by an arbitrary
dynamical parameter $A$. Our findings indicate that as the anisotropic factor
$A$ increases, the drag force and jet quenching parameter both increase, while
the diffusion coefficient decreases. Additionally, we observe that the energy
loss becomes more significant when the quark moves perpendicular to the
anisotropy direction in the transverse plane. The enhancement of the rescaled
jet quenching parameter near the critical temperature $T_c$, as well as of the
drag force for a fast-moving heavy quark, is observed, which presents one of
the typical features of the QCD phase transition.
|
We investigate CP violation in the purely right-handed b quark to c- and u-
quark coupling model under the constraint of right-handed W-gauge boson mass
M_R > 720 GeV, which was recently obtained experimentally by the D0
Collaboration at Fermilab. By using the data on the $K_L-K_S$ mass
difference, the CP violating
parameter $\epsilon$ in the neutral kaon system and $B_d-\bar B_d$ mixing,
together with the new data of $Br (B^-\to \psi\pi^-) /Br (B^-\to \psi K^-) =
0.052 \pm 0.024 (\approx \mid V_{cd}/V_{cs} \mid^2)$, we can fix all of the
three independent angles and one phase of the right-handed mixing matrix V^R.
Under these constraints, another CP-violating parameter $\epsilon'/\epsilon$
and electric dipole moment of neutron are shown to be consistent with the data
in our model. The pattern of CP violation in the nonleptonic decay of
$B_d(B_s)$ mesons to CP eigenstates is different from that in the Standard
Model.
|
Irregularly-sampled time series (ITS) are native to high-impact domains like
healthcare, where measurements are collected over time at uneven intervals.
However, for many classification problems, only small portions of long time
series are often relevant to the class label. In this case, existing ITS models
often fail to classify long series since they rely on careful imputation, which
easily over- or under-samples the relevant regions. Motivated by this insight,
we propose CAT, a model that classifies multivariate ITS by explicitly seeking
highly-relevant portions of an input series' timeline. CAT achieves this by
integrating three components: (1) A Moment Network learns to seek relevant
moments in an ITS's continuous timeline using reinforcement learning. (2) A
Receptor Network models the temporal dynamics of both observations and their
timing localized around predicted moments. (3) A recurrent Transition Model
models the sequence of transitions between these moments, cultivating a
representation with which the series is classified. Using synthetic and real
data, we find that CAT outperforms ten state-of-the-art methods by finding
short signals in long irregular time series.
|
We investigate the spin transfer torque (STT) in the magnetic multilayer
structures with micromagnetic simulations. We implement the STT contribution
for the magnetic multilayer structures in addition to the
Landau-Lifshitz-Gilbert (LLG) micromagnetic simulators. In addition to the
Slonczewski STT term, the zeroth-, first-, and second-order field-like terms
are also considered, and the effects of the Oersted field generated by the
current are addressed.
We determine the switching current densities of the free layer with the
exchange biased synthetic ferrimagnetic reference layers for various cases.
|
The coexistence pressure of two phases is a well-defined point at fixed
temperature. In experiment, however, due to non-hydrostatic stresses and a
stress-dependent potential energy barrier, different measurements yield
different ranges of pressure with a hysteresis. Accounting for these effects,
we propose an inequality for comparison of the theoretical value to a plurality
of measured intervals. We revisit decades of pressure experiments on the
bcc-hcp transformations in iron, which are sensitive to non-hydrostatic
conditions and sample size. From electronic-structure calculations, we find a
bcc-hcp coexistence pressure of 8.4 GPa. We construct the equation of state for
competing phases under hydrostatic pressure, compare to experiments and other
calculations, and address the observed pressure hysteresis and range of onset
pressures of the nucleating phase.
|
We present the results of 3-D SPMHD numerical simulations of
supermagnetosonic, overdense, radiatively cooling jets. Two initial magnetic
configurations are considered: (i) a helical and (ii) a longitudinal field. We
find that magnetic fields have important effects on the dynamics and structure
of radiative cooling jets, especially at the head. The presence of a helical
field suppresses the formation of the clumpy structure which is found to
develop at the head of purely hydrodynamical jets. On the other hand, a cooling
jet embedded in a longitudinal magnetic field retains clumpy morphology at its
head. This fragmented structure resembles the knotty pattern commonly observed
in HH objects behind the bow shocks of HH jets. This suggests that a strong
(equipartition) helical magnetic field configuration is ruled out at the jet
head. Therefore, if strong magnetic fields are present, they are probably
predominantly longitudinal in those regions. In both magnetic configurations,
we find that the confining pressure of the cocoon is able to excite
short-wavelength MHD K-H pinch modes that drive low-amplitude internal shocks
along the beam. These shocks are not strong, however, and it is likely that they
could only play a secondary role in the formation of the bright knots observed
in HH jets.
|
We present a finite-size scaling for both interaction and disorder strengths
in the critical regime of the many-body localization (MBL) transition for a
spin-1/2 XXZ spin chain with a random field by studying level statistics. We
show how the dynamical transition from the thermal to MBL phase depends on
interaction together with disorder by evaluating the ratio of adjacent level
spacings, and thus, extend previous studies in which interaction coupling is
fixed. We introduce an extra critical exponent in order to describe the
nontrivial interaction dependence of the MBL transition. The transition is
characterized by the ratio of the disorder strength to the interaction
coupling raised to the power of this extra critical exponent, and not by the
simple ratio between them.
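The diagnostic used here, the ratio of adjacent level spacings, is straightforward to compute from any spectrum; a minimal sketch (checked on a synthetic Poisson spectrum, not the XXZ-chain data):

```python
import numpy as np

def mean_gap_ratio(levels):
    """r = <min(s_n, s_{n+1}) / max(s_n, s_{n+1})> over sorted levels;
    r ~ 0.386 signals Poisson (localized) statistics, r ~ 0.531 GOE
    (thermal) statistics."""
    s = np.diff(np.sort(levels))
    return float(np.mean(np.minimum(s[:-1], s[1:]) /
                         np.maximum(s[:-1], s[1:])))

rng = np.random.default_rng(7)
levels = np.cumsum(rng.exponential(size=100_000))  # Poisson spectrum
r = mean_gap_ratio(levels)   # ~ 2 ln 2 - 1 = 0.386...
```

Unlike the raw spacing distribution, this ratio needs no unfolding of the density of states, which is why it is the standard probe for locating the MBL transition point.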
|
The Efimov effect in heteronuclear cold atomic systems is experimentally more
easily accessible than the Efimov effect for identical atoms, because of the
potentially smaller scaling factor. We focus on the case of two or three heavy
identical bosons and another atom. The former case was recently observed in a
mixture of 133Cs and 6Li atoms. We employ the Gaussian Expansion Method as
developed by Hiyama, Kino, et al. This is a variational method that uses
Gaussians that are distributed geometrically over a chosen range. Supplemental
calculations are performed using the Skorniakov-Ter-Martirosian equation. Blume
et al. previously investigated the scaling properties of heteronuclear systems
in the unitary limit and at the three-body breakup threshold. We have completed
this picture by calculating the behaviour on the positive scattering length
side of the Efimov plot, focussing on the dimer threshold.
|
It is known that there is no three-dimensional analog of de Sitter black
holes. I show that the analog does exist when non-Gaussian (i.e., ring-type)
smearings of point matter hairs are considered. This provides a new way of
constructing black hole solutions from hairs. I find that the obtained black
hole solutions are quite different from the usual large black holes in that
there are i) large to small black hole transitions which may be considered as
inverse Hawking-Page transitions and ii) soliton-like (i.e., non-perturbative)
behaviors. For Gaussian smearing, there is no black hole but a gravastar
solution exists.
|
The main goal of this paper is to establish close relations among sheaves of
modules on atomic sites, representations of categories, and discrete
representations of topological groups. We characterize sheaves of modules on
atomic sites as saturated representations, and show that the category of
sheaves is equivalent to the Serre quotient of the category of presheaves by
the category of torsion presheaves. Consequently, the sheafification functor
and sheaf cohomology functors are interpreted by localization functors, section
functors, and derived functors of torsion functor in representation theory.
These results, as well as a classical theorem of Artin, provide us with a new
approach to studying discrete representations of topological groups. In
particular, by importing established facts in representation stability theory,
we explicitly classify simple or indecomposable injective discrete
representations of some discrete topological groups such as the infinite
symmetric group, the infinite general or special linear group over a finite
field, and the automorphism group of the linearly ordered set $\mathbb{Q}$. We
also show that discrete representations of these topological groups satisfy a
certain stability property.
|
We confirm the UHECR horizon established by the Pierre Auger Observatory
using the heterogeneous Veron-Cetty Veron (VCV) catalog of AGNs, by performing
a redshift-angle-IR luminosity scan using PSCz galaxies having infrared
luminosity greater than 10^{10}L_sun. The strongest correlation -- for z <
0.016, psi = 2.1 deg, and L_ir > 10^{10.5}L_sun -- arises in fewer than 0.3% of
scans with isotropic source directions. When we apply a penalty for using the
UHECR energy threshold that was tuned to maximize the correlation with VCV, the
significance degrades to 1.1%. Since the PSCz catalog is complete and
volume-limited for these parameters, this suggests that the UHECR horizon
discovered by the Pierre Auger Observatory is not an artifact of the
incompleteness and other idiosyncrasies of the VCV catalog. The strength of the
correlation between UHECRs and the nearby highest-IR-luminosity PSCz galaxies
is stronger than in about 90 percent of trials with scrambled luminosity
assignments for the PSCz galaxies. If confirmed by future data, this result
would indicate that the sources of UHECRs are more strongly associated with
luminous IR galaxies than with ordinary, lower IR luminosity galaxies.
|
We discuss the use of the FMEA to address some of the root causes of the
reliability disasters discussed in the article Reliability Disasters by
Doganaksoy, Meeker, and Hahn, which appeared in Quality Progress in August
2020.
|
Recently, contrastive learning has largely advanced the progress of
unsupervised visual representation learning. Pre-trained on ImageNet, some
self-supervised algorithms reported higher transfer learning performance
compared to fully-supervised methods, seeming to deliver the message that human
labels hardly contribute to learning transferable visual features. In this
paper, we defend the usefulness of semantic labels but point out that
fully-supervised and self-supervised methods are pursuing different kinds of
features. To alleviate this issue, we present a new algorithm named Supervised
Contrastive Adjustment in Neighborhood (SCAN) that maximally prevents the
semantic guidance from damaging the appearance feature embedding. In a series
of downstream tasks, SCAN achieves superior performance compared to previous
fully-supervised and self-supervised methods, and sometimes the gain is
significant. More importantly, our study reveals that semantic labels are
useful in assisting self-supervised methods, opening a new direction for the
community.
|
In the period between May 1997 and August 1997 a series of pointed RXTE
observations were made of Cyg X-3. During this period Cyg X-3 made a transition
from a quiescent radio state to a flare state (including a major flare) and
then returned to a quiescent radio state. Analyses of the observations are made
in the context of concurrent observations in the hard X-ray (CGRO/BATSE), soft
X-ray (RXTE/ASM) and the radio (Green Bank Interferometer, Ryle Telescope, and
RATAN-600). Preliminary analyses of the observations are presented.
|
A semi-inverse analytical solution of a pure bending problem for a
piezoelectric layer is developed in the framework of linear electroelasticity
theory with strain gradient and electric field gradient effects. A
two-dimensional solution is derived assuming a plane strain state of the
layer. It is shown that the obtained solution can be used for the validation
of size-dependent beam and plate models in second gradient electroelasticity
theory.
|
In this paper, we introduce the notion of a left-symmetric bialgebroid as a
geometric generalization of a left-symmetric bialgebra and construct a
left-symmetric bialgebroid from a pseudo-Hessian manifold. We also introduce
the notion of a Manin triple for left-symmetric algebroids, which is equivalent
to a left-symmetric bialgebroid. The corresponding double structure is a
pre-symplectic algebroid rather than a left-symmetric algebroid. In particular,
we establish a relation between Maurer-Cartan type equations and Dirac
structures of the pre-symplectic algebroid which is the corresponding double
structure for a left-symmetric bialgebroid.
|
Sorting is a fundamental operation in computing. However, the speed of
state-of-the-art sorting algorithms on a single thread has reached its
limit. Meanwhile, deep learning has demonstrated its potential to provide
significant performance improvements in data mining and machine learning
tasks. Therefore, it is interesting to explore whether sorting can also be
sped up by deep learning techniques. In this paper, a neural network-based,
data distribution aware sorting method named NN-sort is presented. Compared
to traditional comparison-based sorting algorithms, which need to compare
data elements pairwise, NN-sort leverages a neural network model to learn the
data distribution and uses it to map disordered data elements into ordered
ones. Although the complexity of NN-sort is $n \log n$ in theory, it runs in
near-linear time as observed in most cases. Experimental results
on both synthetic and real-world datasets show that NN-sort yields performance
improvement by up to 10.9x over traditional sorting algorithms.
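The abstract does not spell out NN-sort's architecture, so the sketch below substitutes a histogram-based empirical CDF for the learned neural model to convey the idea: the model places each element into an approximate output bucket, and a cheap per-bucket sort repairs the remaining local disorder.

```python
import numpy as np

def distribution_aware_sort(data, bucket_size=32):
    """Map elements to buckets via an (assumed) learned CDF, then sort
    within buckets; near-linear when the model spreads data evenly."""
    data = np.asarray(data, dtype=float)
    n_buckets = max(1, len(data) // bucket_size)
    # stand-in "model": quantile edges estimated from the data itself
    edges = np.quantile(data, np.linspace(0, 1, n_buckets + 1)[1:-1])
    buckets = np.searchsorted(edges, data, side="right")
    order = np.argsort(buckets, kind="stable")   # coarse global placement
    out = data[order]
    b = buckets[order]
    start = 0
    for k in range(n_buckets):                   # local repair pass
        end = start + int(np.count_nonzero(b == k))
        out[start:end] = np.sort(out[start:end])
        start = end
    return out
```

Because the bucket index is monotone in the element's value, sorting each bucket independently yields a fully sorted array; the payoff of the learned model is that, for well-predicted distributions, each bucket stays small and the repair pass is cheap.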
|
We study plane partitions satisfying condition $a_{n+1,m+1}=0$ (this
condition is called "pit") and asymptotic conditions along three coordinate
axes. We find formulas for the generating function of such plane partitions.
Such plane partitions label the basis vectors in certain representations of
quantum toroidal $\mathfrak{gl}_1$ algebra, therefore our formulas can be
interpreted as the characters of these representations. The resulting formulas
resemble formulas for characters of tensor representations of Lie superalgebra
$\mathfrak{gl}_{m|n}$. We discuss a representation-theoretic interpretation of
our formulas using the $q$-deformed $W$-algebra $\mathfrak{gl}_{m|n}$.
|
We consider the following singularly perturbed nonlinear elliptic problem:
$$-\e^2\Delta u+V(x)u=f(u),\ u\in H^1(\mathbb{R}^N),$$ where $N\ge 3$ and the
nonlinearity $f$ is of critical growth. In this paper, we construct a solution
$u_\e$ of the above problem which concentrates at an isolated component of
positive local minimum points of $V$ as $\e\to 0$ under certain conditions on
$f$. Our result completes the study made in some very recent works in the
sense that, in those papers, only the subcritical growth was considered.
|
The valley degrees of freedom of carriers in crystals are useful for
processing information and performing logic operations, and realizing valley
polarization is a key factor for valley applications. Here, we propose a
model in which the valley polarization transition between different valley
points (-K and K points) is produced by biaxial strain. By first-principles
calculations, we illustrate our idea with a concrete example of the Janus
$\mathrm{GdClF}$ monolayer. The predicted $\mathrm{GdClF}$ monolayer is
dynamically, mechanically and thermally stable, and is a ferromagnetic (FM)
semiconductor with perpendicular magnetic anisotropy (PMA), a valence band
maximum (VBM) at the valley points, and a high Curie temperature ($T_C$). Due
to its intrinsic ferromagnetism and spin-orbit coupling (SOC), a spontaneous
valley polarization is induced, but the valley splitting is only -3.1 meV,
which provides an opportunity to achieve a valley polarization transition
between different valley points by strain. In the considered strain range
($a/a_0$: 0.94$\sim$1.06), the strained GdClF monolayer always has an energy
band gap, strong FM coupling and PMA. Compressive strain favours -K valley
polarization, while tensile strain favours K valley polarization. The
corresponding valley splittings at 0.96 and 1.04 strain are -44.5 meV and
29.4 meV, which exceed in magnitude the thermal energy at room temperature
(25 meV). Due to the special Janus structure, both in-plane and out-of-plane
piezoelectric polarizations can be observed. It is found that the direction
of the in-plane piezoelectric polarization can be overturned by strain, and
the $d_{11}$ values at 0.96 and 1.04 strain are -1.37 pm/V and 2.05 pm/V. Our
work paves the way to designing ferrovalley materials as multifunctional
valleytronic and piezoelectric devices by strain.
|
We perform an exact diagonalization study of the topological order in
topological flat band models through calculating entanglement entropy and
spectra of low energy states. We identify multiple independent minimal
entangled states, which form a set of orthogonal basis states for the
ground-state manifold. We extract the modular transformation matrices S (U),
which contain the information on mutual (self) statistics, quantum dimensions
and fusion rules of quasi-particles. Moreover, we demonstrate that these
matrices are robust and universal in the whole topological phase against
different perturbations until the quantum phase transition takes place.
|
Super-compressible foam-like carbon nanotube films have been reported to
exhibit highly nonlinear viscoelastic behaviour in compression similar to soft
tissue. Their unique combination of light weight and exceptional electrical,
thermal and mechanical properties have helped identify them as viable building
blocks for more complex nanosystems and as stand-alone structures for a variety
of different applications. In the as-grown state, their mechanical performance
is limited by the weak adhesion between the tubes, controlled by the van der
Waals forces, and the substrate allowing the forests to split easily and to
have low resistance in shear. Under axial compression loading, carbon
nanotubes have demonstrated bending, buckling and fracture (or a combination
of the above), depending on the loading conditions and on the number of
loading cycles.
In this work, we partially anchor dense vertically aligned foam-like forests of
carbon nanotubes on a thin, flexible polymer layer to provide structural
stability, and report the mechanical response of such systems as a function of
the strain rate. We test the sample under quasi-static indentation loading and
under impact loading and report a variable nonlinear response and different
elastic recovery with varying strain rates. A Bauschinger-like effect is
observed at very low strain rates, while buckling and the formation of
permanent defects in the tube structure are reported at very high strain rates,
as seen using high-resolution transmission microscopy.
|
In this paper, we derive general theorems for controlling (vector-valued)
first order ordinary differential equations such that their solutions stop at a
finite time $T>0$ and apply them to relaxation and dissipative oscillation
processes. We discuss several interesting examples for relaxation processes
with finite stopping time and their energy behaviour. Our results on relaxation
and dissipative oscillations enable us to model diffusion processes with finite
front speeds and dissipative waves that cause in each space point $x$ an
oscillation with a finite stopping time $T(x)$. In the latter case, we derive
the relation between $T(0)$ and $T(x)$. Moreover, the relations between the
control functions in the ODE model and the respective PDE model are derived. In
particular, we present an application of the Paley-Wiener-Schwartz Theorem that
is used in our analysis. A complementary approach for dissipative oscillations
and its application to dissipative waves is presented in [Ko19b], where the
finite stopping time is achieved due to nonconstant coefficients in second
order ODEs.
|
In Robbins' problem of minimizing the expected rank, a finite sequence of $n$
independent, identically distributed random variables is observed sequentially
and the objective is to stop at such a time that the expected rank of the
selected variable (among the sequence of all $n$ variables) is as small as
possible. In this paper we consider an analogous problem in which the observed
random variables are the steps of a symmetric random walk. Assuming
continuously distributed step sizes, we describe the optimal stopping rules for
the cases $n=2$ and $n=3$ in two versions of the problem: a "full information"
version in which the actual steps of the random walk are disclosed to the
decision maker; and a "partial information" version in which only the relative
ranks of the positions taken by the random walk are observed. When $n=3$, the
optimal rule and expected rank depend on the distribution of the step sizes. We
give sharp bounds for the optimal expected rank in the partial information
version, and fairly sharp bounds in the full information version.
|
This paper deals with the class of existentially closed models of fields with
a distinguished submodule (over a fixed subring). In the positive
characteristic case, this class is elementary and was investigated by the
first-named author. Here we study this class in Robinson's logic, meaning the
category of existentially closed models with embeddings following Haykazyan and
Kirby, and prove that in this context this class is NSOP$_1$ and TP$_2$.
|
We study the Principal Component Analysis (PCA) problem in the distributed
and streaming models of computation. Given a matrix $A \in R^{m \times n},$ a
rank parameter $k < rank(A)$, and an accuracy parameter $0 < \epsilon < 1$, we
want to output an $m \times k$ orthonormal matrix $U$ for which $$ || A - U U^T
A ||_F^2 \le \left(1 + \epsilon \right) \cdot || A - A_k||_F^2, $$ where $A_k
\in R^{m \times n}$ is the best rank-$k$ approximation to $A$.
This paper provides improved algorithms for distributed PCA and streaming
PCA.
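As a rough numerical illustration of the guarantee above (not the distributed or streaming algorithm of the paper), one can check the Frobenius-norm bound with a truncated SVD; all matrix sizes below are arbitrary:

```python
import numpy as np

# Sketch only: verify || A - U U^T A ||_F^2 <= (1 + eps) ||A - A_k||_F^2
# for an orthonormal U. Here U is the top-k left singular vectors, so the
# bound holds even with eps = 0; a distributed/streaming PCA algorithm
# would return an approximate U instead.
rng = np.random.default_rng(0)
m, n, k, eps = 50, 30, 5, 0.1
A = rng.standard_normal((m, n))

U_full, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U_full[:, :k] * s[:k] @ Vt[:k, :]            # best rank-k approximation
opt_err = np.linalg.norm(A - A_k, "fro") ** 2

U = U_full[:, :k]                                  # m x k orthonormal output
err = np.linalg.norm(A - U @ U.T @ A, "fro") ** 2
assert err <= (1 + eps) * opt_err
```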
|
We propose a Model-Based Clustering (MBC) method combined with loci selection
using multi-allelic loci genetic data. The loci selection problem is regarded
as a model selection problem and models in competition are compared with the
Bayesian Information Criterion (BIC). The resulting procedure selects the
subset of clustering loci and the number of clusters, and estimates the
proportion of each cluster and the allelic frequencies within each cluster. We
prove that the selected model converges in probability to the true model under
a single realistic assumption as the sample size tends to infinity. The
proposed method, named MixMoGenD (Mixture Model using Genetic Data), was
implemented in the C++ programming language. Numerical experiments on simulated
data sets were conducted to highlight the interest of the proposed loci
selection procedure.
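To make the selection criterion concrete, here is a toy sketch of the BIC comparison step only; the log-likelihoods and parameter counts are invented for illustration, not MixMoGenD output:

```python
import math

# BIC(M) = -2 log L_M + p_M log n; the model minimizing BIC is selected.
def bic(log_lik, n_params, n_obs):
    return -2.0 * log_lik + n_params * math.log(n_obs)

n_obs = 200
# Invented (log-likelihood, parameter count) pairs for candidate models:
candidates = {
    "K=1, no selected loci": (-1450.0, 10),
    "K=2, 5 selected loci":  (-1325.0, 25),
    "K=3, 5 selected loci":  (-1320.0, 40),
}
scores = {m: bic(ll, p, n_obs) for m, (ll, p) in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # → K=2, 5 selected loci
```

The K=3 model has a slightly better likelihood, but its 15 extra parameters are penalized by the $p\log n$ term, so the more parsimonious K=2 model wins.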
|
These notes follow from a course delivered at the V Jos\'e Pl\'{\i}nio
Baptista School of Cosmology, held at Guarapari (Esp\'{\i}rito Santo) Brazil,
from 30 September to 5 October 2021. A review of the current status of the
linear stability of black holes and naked singularities is given. The standard
modal approach, which takes advantage of the background symmetries and analyzes
separately the harmonic components of linear perturbations, is briefly
introduced and used to prove that the naked singularities in the Kerr--Newman
family, as well as the inner black hole regions beyond Cauchy horizons, are
unstable and therefore unphysical. The proofs require a treatment of the
boundary condition at the timelike boundary, which is given in detail. The
nonmodal linear stability concept is then introduced, and used to prove that
the domain of outer communications of a Schwarzschild black hole with a
non-negative cosmological constant satisfies this stronger stability condition,
which rules out transient growths of perturbations, and also to show that the
perturbed black hole settles into a slowly rotating Kerr black hole. The
encoding of the perturbation fields in gauge invariant curvature scalars and
the effects of the perturbation on the geometry of the spacetime are discussed.
|
The exponential growth of the web has increased the importance of web document
classification and data mining. Obtaining exact information, in the form of
knowing which classes a web document belongs to, is expensive. Automatic
classification of web documents is of great use to search engines, which can
provide this information at a low cost. In this paper, we propose an approach
for classifying web documents using the frequent word sets generated by
Frequent Pattern (FP) Growth, an association analysis technique of data mining.
These sets of associated words act as the feature set. The final classification
is obtained by applying a Na\"ive Bayes classifier to the feature set. For the
experimental work, we use the Gensim package, as it is simple and robust.
Results show that our approach can effectively classify web documents.
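A minimal end-to-end sketch of this pipeline follows; a plain support count stands in for the FP-Growth algorithm, and the tiny corpus and threshold are invented:

```python
from collections import Counter
from itertools import combinations
import math

# Tiny invented corpus: (word set, class label).
docs = [
    ({"goal", "match", "team"}, "sport"),
    ({"match", "team", "score"}, "sport"),
    ({"stock", "market", "price"}, "finance"),
    ({"market", "price", "trade"}, "finance"),
]

# 1) "Frequent itemsets": word pairs with support >= 2
#    (a plain support count standing in for FP-Growth).
pair_counts = Counter(
    p for words, _ in docs for p in combinations(sorted(words), 2))
features = [p for p, c in pair_counts.items() if c >= 2]

def vectorize(words):
    # Binary feature: does the document contain the whole frequent word set?
    return [int(set(p) <= words) for p in features]

# 2) Bernoulli Naive Bayes with Laplace smoothing, trained by hand.
classes = sorted({label for _, label in docs})
prior = {c: sum(1 for _, l in docs if l == c) / len(docs) for c in classes}
cond = {}
for c in classes:
    vecs = [vectorize(w) for w, l in docs if l == c]
    cond[c] = [(sum(v[j] for v in vecs) + 1) / (len(vecs) + 2)
               for j in range(len(features))]

def predict(words):
    x = vectorize(words)
    def log_post(c):
        return math.log(prior[c]) + sum(
            math.log(cond[c][j] if x[j] else 1.0 - cond[c][j])
            for j in range(len(features)))
    return max(classes, key=log_post)

print(predict({"team", "match", "win"}))  # → sport
```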
|
Flavour-changing neutral currents are extremely rare processes in the
standard model that can be sensitive to various new physics effects. The
summary of the latest experimental results from the LHC experiments is given.
Preliminary results of sensitivity studies for future colliders are also
discussed.
|
The key for realizing fault-tolerant quantum computation lies in maintaining
the coherence of all qubits so that high-fidelity and robust quantum
manipulations on them can be achieved. One of the promising approaches is to
use geometric phases in the construction of universal quantum gates, due to
their intrinsic robustness against certain types of local noises. However, due
to limitations in previous implementations, the noise-resilience feature of
nonadiabatic holonomic quantum computation (NHQC) still needs to be improved.
Here, by combining NHQC with the dynamical correction technique, we propose a general
protocol of universal NHQC with simplified control, which can greatly suppress
the effect of the accompanied X errors, retaining the main merit of geometric
quantum operations. Numerical simulation shows that the performance of our gate
can be much better than previous protocols. Remarkably, when incorporating a
decoherence-free subspace encoding for the collective dephasing noise, our
scheme can also be robust against the involved Z errors. In addition, we also
outline the physical implementation of the protocol that is insensitive to both
X and Z errors. Therefore, our protocol provides a promising strategy for
scalable fault-tolerant quantum computation.
|
This work is a continuation of our previous work (JMP, Vol. 48, 12, pp.
122103-1-122103-20, 2007), where we constructed the non-relativistic Lee model
in three dimensional Riemannian manifolds. Here we renormalize the two
dimensional version by using the same methods, and the results are only briefly given
since the calculations are basically the same as in the three dimensional
model. We also show that the ground state energy is bounded from below due to
the upper bound of the heat kernel for compact and Cartan-Hadamard manifolds.
In contrast to the construction of the model and the proof of the lower bound
of the ground state energy, the mean field approximation to the two dimensional
model is not similar to the one in three dimensions and it requires a deeper
analysis, which is the main result of this paper.
|
Electronic nematicity, a state in which rotational symmetry is spontaneously
broken, has become a familiar characteristic of many strongly correlated
materials. One widely studied example is the Ising nematicity discovered in
tetragonal iron pnictides and its interplay with superconductivity. Since
nematic directors in crystalline solids are restricted by the underlying
crystal symmetry, recently identified quantum material systems with three-fold
rotational (C$_3$) symmetry offer a new platform to investigate nematic order
with three-state Potts character. Here, we report reversible strain control of
the three-state Potts nematicity in a zigzag antiferromagnetic insulator,
FePSe$_3$. Probing the nematicity via optical linear dichroism, we demonstrate
either $2{\pi}/3$ or ${\pi}/2$ rotation of the nematic director by uniaxial strain.
The nature of the nematic phase transition can also be controlled such that it
undergoes a smooth crossover transition, a Potts nematic transition, or an Ising
nematic flop transition. Further elastocaloric measurements demonstrate
signatures of two coupled phase transitions, indicating that the nematic phase
is a vestigial order arising from the antiferromagnetism. The ability to tune the
nematic order with in-situ strain further enables the extraction of nematic
susceptibility, which exhibits a divergent behavior near the magnetic ordering
temperature, which is corroborated by both linear dichroism and elastocaloric
measurements. Our work points to an active control approach to manipulate and
explore nematicity in three-state Potts correlated materials.
|
Higher order networks are able to characterize data as different as
functional brain networks, protein interaction networks and social networks
beyond the framework of pairwise interactions. Most notably, higher order
networks include simplicial complexes formed not only by nodes and links but
also by triangles, tetrahedra, etc. More generally, higher-order networks can
be cell-complexes formed by gluing convex polytopes along their faces.
Interestingly, higher order networks have a natural geometric interpretation
and therefore constitute a natural way to explore the discrete network geometry
of complex networks. Here we investigate the rich interplay between emergent
network geometry of higher order networks and their complexity in the framework
of a non-equilibrium model called Network Geometry with Flavor. This model,
originally proposed for capturing the evolution of simplicial complexes, is
here extended to cell-complexes formed by subsequently gluing different copies
of an arbitrary regular polytope. We reveal the interplay between complexity
and geometry of the higher order networks generated by the model by studying
the emergent community structure and the degree distribution as a function of
the regular polytope forming its building blocks. Additionally we discuss the
underlying hyperbolic nature of the emergent geometry and we relate the
spectral dimension of the higher-order network to the dimension and nature of
its building blocks.
|
In the field of healthcare, electronic health records (EHR) serve as crucial
training data for developing machine learning models for diagnosis, treatment,
and the management of healthcare resources. However, medical datasets are often
imbalanced in terms of sensitive attributes such as race/ethnicity, gender, and
age. Machine learning models trained on class-imbalanced EHR datasets perform
significantly worse in deployment for individuals of the minority classes
compared to those from majority classes, which may lead to inequitable
healthcare outcomes for minority groups. To address this challenge, we propose
Minority Class Rebalancing through Augmentation by Generative modeling
(MCRAGE), a novel approach to augment imbalanced datasets using samples
generated by a deep generative model. The MCRAGE process involves training a
Conditional Denoising Diffusion Probabilistic Model (CDDPM) capable of
generating high-quality synthetic EHR samples from underrepresented classes. We
use this synthetic data to augment the existing imbalanced dataset, resulting
in a more balanced distribution across all classes, which can be used to train
less biased downstream models. We measure the performance of MCRAGE versus
alternative approaches using Accuracy, F1 score and AUROC of these downstream
models. We provide theoretical justification for our method in terms of recent
convergence results for DDPMs.
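The rebalancing step can be sketched as follows; a Gaussian fitted to the minority class stands in for the CDDPM sampler (which is the actual contribution of the method), and all shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy data: 900 majority-class rows vs 100 minority-class rows.
X_maj = rng.normal(0.0, 1.0, size=(900, 4))
X_min = rng.normal(2.0, 1.0, size=(100, 4))

# Generative stand-in: sample synthetic minority rows from a Gaussian
# fitted to the minority class (MCRAGE uses a CDDPM here instead).
n_needed = len(X_maj) - len(X_min)
mu, sigma = X_min.mean(axis=0), X_min.std(axis=0)
X_synth = rng.normal(mu, sigma, size=(n_needed, 4))

# Augment: the minority class now matches the majority class in size,
# and the balanced data can be fed to any downstream classifier.
X_min_aug = np.vstack([X_min, X_synth])
assert X_min_aug.shape == X_maj.shape
```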
|
For any finitely generated group G, we describe the space of maps G->C which
satisfy the parallelogram identity, f(xy)+f(xy^{-1})=2f(x)+2f(y).
It is known (but not well-known) that these functions correspond to
Zariski-tangent vectors at the trivial character of the character variety of G
in SL_2(C). We study the obstructions for deforming the trivial character in
the direction given by f. Along the way, we show that the trivial character is
a smooth point of the character variety if dim H_1(G,C)<2 and not a smooth
point if dim H_1(G,C)>2.
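The simplest example is G = Z, where the identity reduces to the usual parallelogram law satisfied by quadratic functions; a quick numerical check:

```python
# Simplest example: G = Z (abelian, so the product xy means x + y) with
# f(n) = c * n^2.  Then
#   f(x+y) + f(x-y) = c[(x+y)^2 + (x-y)^2] = 2c x^2 + 2c y^2 = 2f(x) + 2f(y).
def f(n, c=3):
    return c * n * n

for x in range(-5, 6):
    for y in range(-5, 6):
        assert f(x + y) + f(x - y) == 2 * f(x) + 2 * f(y)
print("parallelogram identity holds for f(n) = c n^2 on Z")
```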
|
Daytime radiative cooling is a promising passive cooling technology for
combating global warming. Existing daytime radiative coolers usually show
whitish colors due to their broadband high solar reflectivity, which severely
impedes applications in real-life situations with aesthetic demands and
effective display. However, there is a trade-off between vivid colors and high
cooling performance because colors are often produced by absorption of visible
light, decreasing net cooling power. To break this trade-off, we design
multilayered structures with coupled nanocavities and produce structural colors
with high cooling performance. Using this design, we can obtain colorful
radiative coolers which show a larger color gamut (occupying 17.7% sRGB area)
than reported ones. We further fabricate colorful multilayered radiative
coolers (CMRCs) and demonstrate they have temperature drops of 3.4 - 4.4
degrees on average based on outdoor experiments. These CMRCs are promising in
thermal management of electronic/optoelectronic devices and outdoor facilities.
|
The solutions of the time independent Schrodinger equation for non-Hermitian
(NH) Hamiltonians have been extensively studied and calculated in many
different fields of physics by using L^2 methods that originally have been
developed for the calculation of bound states. The existing non-Hermitian
formalism breaks down when dealing with wavepackets (WP). An open question is
how time dependent expectation values can be calculated when the Hamiltonian is
NH. Using the recently proposed F-product formalism [J. Phys. Chem., 107, 7181
(2003)], we calculate the time dependent expectation values of different
observable quantities for a simple, well-known test-case model Hamiltonian. We
compare these results with those obtained from conventional (i.e., Hermitian)
quantum mechanics (QM) calculations. The remarkable agreement between them
emphasizes the fact that in NH-QM, unlike standard QM, there is no need to
split the entire space into two regions, i.e., the interaction region and its
surroundings. Our results open the door to a type of WP propagation
calculation within the NH-QM formalism that until now was impossible.
|
Infrared spectroscopy of the H-alpha emission lines of a sub-sample of 19
high-redshift (0.8 < z < 2.3) Molonglo quasars, selected at 408 MHz, is
presented. These emission lines are fitted with composite models of broad and
narrow emission, which include combinations of classical broad-line regions of
fast-moving gas clouds lying outside the quasar nucleus, and/or a theoretical
model of emission from an optically-thick, flattened, rotating accretion disk.
All bar one of the nineteen sources are found to have emission consistent with
the presence of an optically-emitting accretion disk, with the exception
appearing to display complex emission including at least three broad
components. Ten of the quasars have strong Bayesian evidence for broad-line
emission arising from an accretion disk together with a standard broad-line
region, selected in preference to a model with two simple broad lines. Thus the
best explanation for the complexity required to fit the broad H-alpha lines in
this sample is optical emission from an accretion disk in addition to a region
of fast-moving clouds. We derive estimates of the angle between the rotation
axis of the accretion disk and the line of sight. A weak correlation is found
between the accretion disk angle and the logarithm of the low-frequency radio
luminosity. This is direct, albeit tenuous, evidence for the receding torus
model. Velocity shifts of the broad H-alpha components are analysed and the
results found to be consistent with a two-component model comprising one
single-peaked broad line emitted at the same redshift as the narrow lines, and
emission from an accretion disk which appears to be preferentially redshifted
with respect to the narrow lines for high-redshift sources and blueshifted
relative to the narrow lines for low-redshift sources.
|
In this paper, we teach a machine to discover the laws of physics from video
streams. We assume no prior knowledge of physics, beyond a temporal stream of
bounding boxes. The problem is very difficult because a machine must learn not
only a governing equation (e.g. projectile motion) but also the existence of
governing parameters (e.g. velocities). We evaluate our ability to discover
physical laws on videos of elementary physical phenomena, such as projectile
motion or circular motion. These elementary tasks have textbook governing
equations and enable ground truth verification of our approach.
|
In her PhD thesis, Milin developed an equivariant version of the contact
homology groups constructed by Eliashberg, Kim and Polterovich and used it to
prove an equivariant contact non-squeezing theorem. In this article we
re-obtain the same result in the setting of generating functions, starting from
the homology groups studied in arXiv:0901.3112. As Milin showed, this result
implies orderability of lens spaces.
|
The (light but not-so-light) strange quark may play a special role in the
low-energy dynamics of QCD. The presence of strange quark pairs in the sea may
have a significant impact on the pattern of chiral symmetry breaking: in
particular large differences can occur between the chiral limits of two and
three massless flavours (i.e., whether m_s is kept at its physical value or
sent to zero). This may induce problems of convergence in three-flavour chiral
expansions. To cope with such difficulties, we introduce a new framework,
called Resummed Chiral Perturbation Theory. We exploit it to analyse pi-pi and
pi-K scatterings and match them with dispersive results in a frequentist
framework. Constraints on three-flavour chiral order parameters are derived.
|
One of the most interesting explanations for the non-Gaussian Cold Spot (CS)
detected in the WMAP data by Vielva et al. 2004, is that it arises from the
interaction of the CMB radiation with a cosmic texture (Cruz et al. 2007b). In
this case, a lack of polarization is expected in the region of the spot, as
compared to the typical values associated to large fluctuations of a GIRF. In
addition, other physical processes related to a non-linear evolution of the
gravitational field could lead to a similar scenario. However, some of these
alternative scenarios (e.g., a large void in the large scale structure) have
been shown to be very unlikely. In this work we characterise the polarization
properties of the Cold Spot under both hypotheses: a large Gaussian spot and an
anomalous feature generated, for instance, by a cosmic texture. We propose a
methodology to distinguish between them, and we discuss its discrimination
power as a function of the instrumental noise level. In particular, we address
the cases of current experiments, like WMAP and Planck, and others in
development as QUIJOTE. We find that for an ideal experiment the Gaussian
hypothesis could be rejected at a significance level better than 0.8%. While
WMAP is far from providing useful information in this respect, we find that
Planck will be able to reach a significance of around 7%; in addition, we show
that the ground-based experiment QUIJOTE could provide a significance of around
1%. If these results are combined with the significance level found for the CS
in temperature, the capability of QUIJOTE and Planck to reject the alternative
hypothesis becomes 0.025% and 0.124%, respectively.
|
We identify the lift to M theory of the four types of orientifold points, and
show that they involve a chiral fermion on an orbifold fixed circle. From this
lift, we compute the number of normalizable ground states for the SO(N) and
$Sp(N)$ supersymmetric quantum mechanics with sixteen supercharges. The results
agree with known results obtained by the mass deformation method. The mass of
the orientifold is identified with the Casimir energy.
|
The spin dynamics in single crystal, electron-doped Ba(Fe1-xCox)2As2 has been
investigated by inelastic neutron scattering over the full range from undoped
to the overdoped regime. We observe damped magnetic fluctuations in the normal
state of the optimally doped compound (x=0.06) that share a remarkable
similarity with those in the paramagnetic state of the parent compound (x=0).
In the overdoped superconducting compound (x=0.14), magnetic excitations show a
gap-like behavior, possibly related to a topological change in the hole Fermi
surface (Lifshitz transition), while the imaginary part of the spin
susceptibility prominently resembles that of the overdoped cuprates. For the
heavily overdoped, non-superconducting compound (x=0.24) the magnetic
scattering disappears, which could be attributed to the absence of a hole
Fermi-surface pocket observed by photoemission.
|
Charged multiplicities in nucleus--nucleus collisions are calculated in the
Dual Parton Model, taking into account shadowing corrections. Their dependence
on the number of collisions and participants is analyzed and found to be in
agreement with experiment at SPS and RHIC energies. Using these results, we
compute the
$J/\psi$ suppression at SPS as a function of the transverse energy and of the
energy of the zero degree calorimeter. Predictions for RHIC are presented.
|
In this paper, we present a modular methodology that combines
state-of-the-art methods in (stochastic) machine learning with traditional
methods in rule learning to provide efficient and scalable algorithms for the
classification of vast data sets, while remaining explainable. Apart from
evaluating our approach on the common large scale data sets MNIST,
Fashion-MNIST and IMDB, we present novel results on explainable classifications
of dental bills. The latter case study stems from an industrial collaboration
with Allianz Private Krankenversicherungs-Aktiengesellschaft which is an
insurance company offering diverse services in Germany.
|
Volleyball is a team sport with unique and specific characteristics. We
introduce a new two-level hierarchical Bayesian model which accounts for these
volleyball-specific characteristics. In the first level, we model the set
outcome with a simple logistic regression model. Conditionally on the winner of
the set, in the second level, we use a truncated negative binomial distribution
for the points earned by the losing team. An additional Poisson-distributed
inflation component is introduced to model the extra points played when the two
teams reach a point difference of less than two points.
points of the winner within each set is deterministically specified by the
winner of the set and the points of the inflation component. The team specific
abilities and the home effect are used as covariates on all layers of the model
(set, point, and extra inflated points). The implementation of the proposed
model on the Italian Superlega 2017/2018 data shows an exceptional
reproducibility of the final league table and a satisfactory predictive
ability.
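An illustrative simulation of a single set under this two-level structure can be sketched as follows; all parameter values are made up rather than fitted to the Superlega data, and rejection sampling stands in for a proper truncated sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_set(home_ability, away_ability, home_adv=0.1):
    # Level 1: logistic regression for the set winner.
    eta = home_ability - away_ability + home_adv
    home_wins = rng.random() < 1.0 / (1.0 + np.exp(-eta))

    # Level 2: losing team's points from a negative binomial truncated
    # to 0..23 (rejection sampling as a stand-in for a truncated sampler).
    while True:
        loser_pts = int(rng.negative_binomial(20, 0.5))
        if loser_pts <= 23:
            break

    # Poisson inflation: extra points are played only when the set is close
    # (here: the loser reached 23, so the margin would be below two points).
    extra = int(rng.poisson(2.0)) if loser_pts == 23 else 0
    winner_pts = 25 + extra   # winner's score is deterministic given extra
    loser_pts += extra
    return bool(home_wins), winner_pts, loser_pts

home_wins, winner_pts, loser_pts = simulate_set(0.3, 0.1)
assert winner_pts >= 25 and loser_pts < winner_pts
```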
|
This paper provides a variational treatment of the effect of external charges
on the free charges in an infinite free-standing graphene sheet within the
Thomas-Fermi theory. We establish existence, uniqueness and regularity of the
energy minimizers corresponding to the free charge densities that screen the
effect of an external electrostatic potential at the neutrality point. For the
potential due to one or several off-layer point charges, we also prove
positivity and a precise universal asymptotic decay rate for the screening
charge density, as well as an exact charge cancellation by the graphene sheet.
We also treat a simpler case of the non-zero background charge density and
establish similar results in that case.
|
Autonomous navigation of mobile robots is a well studied problem in robotics.
However, the navigation task becomes challenging when multi-robot systems have
to cooperatively navigate dynamic environments with deadlock-prone layouts. We
present a Distributed Timed Elastic Band (DTEB) Planner that combines
Prioritized Planning with the online TEB trajectory Planner, in order to extend
the capabilities of the latter to multi-robot systems. The proposed planner is
able to reactively avoid imminent collisions as well as predictively resolve
potential deadlocks among a team of robots, while navigating in a complex
environment. The results of our simulation demonstrate the reliable performance
and the versatility of the planner in different environment settings. The code
and tests for our approach are available online.
|
Integrated photonics is at the heart of many classical technologies, from
optical communications to biosensors, LIDAR, and data center fiber
interconnects. There is strong evidence that these integrated technologies will
play a key role in quantum systems as they grow from few-qubit prototypes to
tens of thousands of qubits. The underlying laser and optical quantum
technologies, with the required functionality and performance, can only be
realized through the integration of these components onto quantum photonic
integrated circuits (QPICs) with accompanying electronics. In the last decade,
remarkable advances in quantum photonic integration and a dramatic reduction in
optical losses have enabled benchtop experiments to be scaled down to prototype
chips with improvements in efficiency, robustness, and key performance metrics.
The reduction in size, weight, power, and improvement in stability that will be
enabled by QPICs will play a key role in increasing the degree of complexity
and scale in quantum demonstrations. In the next decade, with sustained
research, development, and investment in the quantum photonic ecosystem (i.e.
PIC-based platforms, devices and circuits, fabrication and integration
processes, packaging, and testing and benchmarking), we will witness the
transition from single- and few-function prototypes to the large-scale
integration of multi-functional and reconfigurable QPICs that will define how
information is processed, stored, transmitted, and utilized for quantum
computing, communications, metrology, and sensing. This roadmap highlights the
current progress in the field of integrated quantum photonics, future
challenges, and advances in science and technology needed to meet these
challenges.
|
The study of extragalactic planetary nebulae (EPN) is a rapidly expanding
field. The advent of powerful new instrumentation such as the PN spectrograph
has led to an avalanche of new EPN discoveries both within and between
galaxies. We now have thousands of EPN detections in a heterogeneous selection
of nearby galaxies and their local environments, dwarfing the combined galactic
detection efforts of the last century. Key scientific motivations driving this
rapid growth in EPN research and discovery have been the use of the PNLF as a
standard candle, as dynamical tracers of their host galaxies and dark matter
and as probes of Galactic evolution. This is coupled with the basic utility of
PN as laboratories of nebula physics and the consequent comparison with theory
where population differences, abundance variations and star formation history
within and between stellar systems informs both stellar and galactic evolution.
Here we pose some of the burning questions, discuss some of the observational
challenges and outline some of the future prospects of this exciting,
relatively new research area as we strive to go fainter, image finer, see
further and survey faster than ever before, and over a wider wavelength regime.
|
We establish a biophysical model for the dynamics of lipid vesicles exposed
to surfactants. The solubilization of the lipid membrane due to the insertion
of surfactant molecules induces a reduction of membrane surface area at almost
constant vesicle volume. This results in a rate-dependent increase of membrane
tension and leads to the opening of a micron-sized pore. We show that
solubilization kinetics due to surfactants can determine the regimes of pore
dynamics: either the pores open and reseal within a second (short-lived pore),
or the pore stays open up to a few minutes (long-lived pore). First, we
validate our model with previously published experimental measurements of pore
dynamics. Then, we investigate how the solubilization kinetics and membrane
properties affect the dynamics of the pore and construct a phase diagram for
short and long-lived pores. Finally, we examine the dynamics of sequential pore
openings and show that cyclic short-lived pores occur at a period inversely
proportional to the solubilization rate. By deriving a theoretical expression
for the cycle period, we provide an analytic tool to measure the solubilization
rate of lipid vesicles by surfactants. Our findings shed light on some
fundamental biophysical mechanisms that allow simple cell-like structures to
sustain their integrity against environmental stresses, and have the potential
to aid the design of vesicle-based drug delivery systems.
|
When a clinician refers a patient for an imaging exam, they include the
reason (e.g. relevant patient history, suspected disease) in the scan request;
this appears as the indication field in the radiology report. The
interpretation and reporting of the image are substantially influenced by this
request text, steering the radiologist to focus on particular aspects of the
image. We use the indication field to drive better image classification, by
taking a transformer network which is unimodally pre-trained on text (BERT) and
fine-tuning it for multimodal classification of a dual image-text input. We
evaluate the method on the MIMIC-CXR dataset, and present ablation studies to
investigate the effect of the indication field on the classification
performance. The experimental results show our approach achieves 87.8 average
micro AUROC, outperforming the state-of-the-art methods for unimodal (84.4) and
multimodal (86.0) classification. Our code is available at
https://github.com/jacenkow/mmbt.
|
Research on recommender systems has emerged over the last decade, and such
systems comprise valuable services that increase companies' revenue. Several
approaches exist for building recommender systems. While most existing systems
rely on either a content-based or a collaborative approach, hybrid approaches
can improve recommendation accuracy by combining the two. Even though many
algorithms using such methods have been proposed, further improvement is still
necessary. In this paper, we
propose a recommender system method using a graph-based model associated with
the similarity of users' ratings, in combination with users' demographic and
location information. By utilizing the advantages of Autoencoder feature
extraction, we extract new features based on all combined attributes. Using the
new set of features for clustering users, our proposed approach (GHRS) achieves
a significant improvement, outperforming other methods on the cold-start
problem. The experimental results on the MovieLens dataset show
that the proposed algorithm outperforms many existing recommendation algorithms
on recommendation accuracy.
|
Over the last decade more than five thousand gamma-ray sources were detected
by the Large Area Telescope (LAT) on board Fermi Gamma-ray Space Telescope.
Given the positional uncertainty of the telescope, nearly 30% of these sources
remain without an obvious counterpart at lower energies. This motivated the
release of new catalogs of gamma-ray counterpart candidates and several
follow-up campaigns in the last decade. Recently, two new catalogs of blazar
candidates were released: the improved and expanded version of the
WISE Blazar-Like Radio-Loud Sources (WIBRaLS2) catalog and the Kernel Density
Estimation selected candidate BL Lacs (KDEBLLACS) catalog, both selecting
blazar-like sources based on their infrared colors from the Wide-field Infrared
Survey Explorer (WISE). In this work we characterized these two catalogs,
clarifying the true nature of their sources based on their optical spectra from
SDSS data release 15, thus testing how efficient they are in selecting true
blazars. We first selected all WIBRaLS2 and KDEBLLACS sources with available
optical spectra in the footprint of Sloan Digital Sky Survey data release 15.
Then we analyzed these spectra to verify the nature of each selected candidate
and see which fraction of the catalogs is composed of spectroscopically
confirmed blazars. Finally, we evaluated the impact of selection effects,
especially those related to optical colors of WIBRaLS2/KDEBLLACS sources and
their optical magnitude distributions. We found that at least ~ 30% of each
catalog is composed of confirmed blazars, with quasars being the major
contaminants in the case of WIBRaLS2 (~ 58%) and normal galaxies in the case of
KDEBLLACS (~ 38.2%). The spectral analysis also allowed us to identify the
nature of 11 blazar candidates of uncertain type (BCUs) from the Fermi-LAT 4th
Point Source Catalog (4FGL) and to find 25 new BL Lac objects.
|
We systematically study magnetic correlations in graphene within the Hubbard
model on a honeycomb lattice by using quantum Monte Carlo simulations. In the
filling region below the Van Hove singularity, the system shows a short-range
ferromagnetic correlation, which is slightly strengthened by the on-site
Coulomb interaction and markedly by the next-nearest-neighbor hopping integral.
The ferromagnetic properties depend strongly on the electron filling, which may
be manipulated by an electric gate. Owing to this controllability of
ferromagnetism, graphene-based samples may facilitate the development of many
applications.
|
We examine critical properties of the quarter-filled one-dimensional Hubbard
model with dimerization and with the onsite and nearest-neighbor Coulomb
repulsion U and V. By utilizing the bosonization method, it is shown that the
system exhibits an Ising quantum phase transition from the Mott insulating
state to the charge-ordered insulating state. It is also shown that the
dielectric permittivity exhibits a strong, power-law enhancement with
decreasing temperature at the Ising critical point.
|
Worldwide exposure to fine atmospheric particles can exacerbate the risk of a
wide range of heart and respiratory diseases, due to their ability to penetrate
deep into the lungs and the bloodstream. Epidemiological studies in Europe and
elsewhere have established the evidence base pointing to the important role of
PM2.5 in causing over 4 million deaths per year. Traditional approaches to
model atmospheric transportation of particles suffer from high dimensionality
from both transport and chemical reaction processes, making multi-scale causal
inference challenging. We apply alternative model reduction methods: a
data-driven directed graph representation to infer spatial embeddedness and
causal directionality. Using PM2.5 concentrations in 14 UK cities over a 12
month period, we construct an undirected correlation and a directed Granger
causality network. We show that, for both reduced-order cases, the UK is
divided into two connected city communities, one northern and one southern,
with greater spatial
embedding in spring and summer. We go on to infer stability to disturbances via
the network trophic coherence parameter, finding that winter has the greatest
vulnerability. As a result of our novel graph-based reduced modeling, we are
able to distill high-dimensional knowledge into a causal inference and
stability framework.
|
A novel and efficient method for fiber transfer delay measurement is
demonstrated. The time-domain measurement of fiber transfer delay is converted
into a frequency-domain measurement of the modulation signal, accompanied by a
coarse, straightforward ambiguity-resolution process. This method
achieves a sub-picosecond resolution, with an accuracy of 1 picosecond, and a
large dynamic range up to 50 km as well as no measurement dead zone.
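The coarse-plus-fine scheme can be illustrated with a minimal numeric sketch. All values and the helper name `delay_from_frequency` are illustrative assumptions, not figures from the paper: once the modulation frequency at which an integer number of RF cycles spans the fiber delay is measured, a coarse delay estimate accurate to half a modulation period fixes the integer ambiguity, and the fine delay follows as N / f.

```python
# Hedged sketch of coarse/fine ambiguity resolution in a frequency-domain
# delay measurement. All numbers are illustrative, not from the paper.

def delay_from_frequency(f_zero_phase_hz, coarse_delay_s):
    """Recover the fiber transfer delay from the modulation frequency at
    which an integer number of RF cycles spans the delay. The coarse
    estimate only needs to be accurate to half a modulation period to
    fix the integer ambiguity N; the fine delay is then N / f."""
    n_cycles = round(coarse_delay_s * f_zero_phase_hz)
    return n_cycles / f_zero_phase_hz

true_delay = 245.1234567890e-6            # ~50 km of fiber, in seconds
f0 = 24000 / true_delay                   # frequency where 24000 cycles fit exactly
coarse = true_delay + 3e-9                # coarse estimate, off by 3 ns
recovered = delay_from_frequency(f0, coarse)
assert abs(recovered - true_delay) < 1e-12  # sub-picosecond residual
```

The coarse measurement here may be off by several nanoseconds; only the final division by the precisely measured frequency sets the resolution.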
|
Urban economists have put forward the idea that cities that are culturally
interesting tend to attract "the creative class" and, as a result, end up being
economically successful. Yet it is still unclear how economic and cultural
dynamics mutually influence each other. By contrast, that has been extensively
studied in the case of individuals. Over decades, the French sociologist Pierre
Bourdieu showed that people's success and their positions in society mainly
depend on how much they can spend (their economic capital) and what their
interests are (their cultural capital). For the first time, we adapt Bourdieu's
framework to the city context. We operationalize a neighborhood's cultural
capital in terms of the cultural interests that pictures geo-referenced in the
neighborhood tend to express. This is made possible by the mining of what users
of the photo-sharing site of Flickr have posted in the cities of London and New
York over 5 years. In so doing, we are able to show that economic capital alone
does not explain urban development. The combination of cultural capital and
economic capital, instead, is more indicative of neighborhood growth in terms
of house prices and improvements of socio-economic conditions. Culture pays,
but only up to a point as it comes with one of the most vexing urban
challenges: that of gentrification.
|
We develop an interference alignment (IA) technique for a downlink cellular
system. In the uplink, IA schemes need channel-state-information exchange
across base-stations of different cells, but our downlink IA technique requires
feedback only within a cell. As a result, the proposed scheme can be
implemented with a few changes to an existing cellular system where the
feedback mechanism (within a cell) is already being considered for supporting
multi-user MIMO. Not only is our proposed scheme implementable with little
effort, it can in fact provide substantial gain especially when interference
from a dominant interferer is significantly stronger than the remaining
interference: it is shown that in the two-isolated cell layout, our scheme
provides four-fold gain in throughput performance over a standard multi-user
MIMO technique. We show through simulations that our technique provides
respectable gain under a more realistic scenario: it gives approximately 20%
gain for a 19 hexagonal wrap-around-cell layout. Furthermore, we show that our
scheme has the potential to provide substantial gain for macro-pico cellular
networks where pico-users can be significantly interfered with by the nearby
macro-BS.
|
We present a magnitude-limited set of lightcurves for stars observed over the
TESS Extended Mission, as extracted from full-frame images (FFIs) by MIT's
Quick-Look Pipeline (QLP). QLP uses multi-aperture photometry to produce
lightcurves for ~1 million stars each 27.4-day sector, which are then searched
for exoplanet transits. The per-sector lightcurves for 9.1 million unique
targets observed over the first year of the Extended Mission (Sectors 27 - 39)
are available as High-Level Science Products (HLSP) on the Mikulski Archive for
Space Telescopes (MAST). As in our TESS Primary Mission QLP HLSP delivery
(Huang et al. 2020), our available data products include both raw and detrended
flux time series for all observed stars brighter than TESS magnitude T = 13.5,
providing the community with one of the largest sources of FFI-extracted
lightcurves to date.
|
A well-established technique for capturing database provenance as annotations
on data is to instrument queries to propagate such annotations. However, even
sophisticated query optimizers often fail to produce efficient execution plans
for instrumented queries. We develop provenance-aware optimization techniques
to address this problem. Specifically, we study algebraic equivalences targeted
at instrumented queries and alternative ways of instrumenting queries for
provenance capture. Furthermore, we present an extensible heuristic and
cost-based optimization framework utilizing these optimizations. Our
experiments confirm that these optimizations are highly effective, improving
performance by several orders of magnitude for diverse provenance tasks.
|
There exist certain intrinsic relations between the ultraviolet divergent
graphs and the convergent ones at the same loop order in renormalizable quantum
field theories. Building on these relations, we establish a new method, the intrinsic
regularization method, to regularize those divergent graphs. In this paper, we
apply this method to QCD at the one loop order. It turns out to be
satisfactory: gauge invariance is preserved manifestly, and the results are
the same as those derived by means of other regularization methods.
|
Heisenberg's intuition was that there should be a tradeoff between measuring
a particle's position with greater precision and disturbing its momentum.
Recent formulations of this idea have focused on the question of how well two
complementary observables can be jointly measured. Here, we provide an
alternative approach based on how enhancing the predictability of one
observable necessarily disturbs a complementary one. Our
measurement-disturbance relation refers to a clear operational scenario and is
expressed by entropic quantities with clear statistical meaning. We show that
our relation is perfectly tight for all measurement strengths in an existing
experimental setup involving qubit measurements.
|
We study the small scale distribution of the $L^2$ mass of eigenfunctions of
the Laplacian on the flat torus $\mathbb T^d$. Given an orthonormal basis of
eigenfunctions, we show the existence of a density one subsequence whose $L^2$
mass equidistributes at small scales. In dimension two our result holds all the
way down to the Planck scale. For dimensions $d=3,4$ we can restrict to
individual eigenspaces and show small scale equidistribution in that context.
We also study irregularities of quantum equidistribution: We construct
eigenfunctions whose $L^2$ mass does not equidistribute at all scales above the
Planck scale. Additionally, in dimension $d=4$ we show the existence of
eigenfunctions for which the proportion of $L^2$ mass in small balls blows up
at certain scales.
|
Using the mechano-optical stress sensor technique, we observe a
counter-intuitive reduction of the compressive stress when InAs is deposited on
GaAs (001) during growth of quantum posts. Through modelling of the strain
fields, we find that such anomalous behaviour can be related to the
strain-driven detachment of In atoms from the crystal and their surface
diffusion towards the self-assembled nanostructures.
|
We present Keck LRIS spectroscopy along with NICMOS F110W (~J) and F160W (~H)
images of the galaxy HDF4-473.0 (hereafter 4-473) in the Hubble Deep Field,
with a detection of an emission line consistent with Ly-alpha at a redshift of
z=5.60. Attention to this object as a high redshift galaxy was first drawn by
Lanzetta, Yahil and Fernandez-Soto and appeared in their initial list of
galaxies with redshifts estimated from the WFPC2 HDF photometry. It was
selected by us for spectroscopic observation, along with others in the Hubble
Deep Field, on the basis of the NICMOS F110W and F160W and WFPC2 photometry.
For H_0 = 65 and q_0 = 0.125, use of simple evolutionary models along with the
F814W (~I), F110W, and F160W magnitudes allow us to estimate the star formation
rate (~13 M(solar)/yr). The colors suggest a reddening of E(B-V) ~ 0.06. The
measured flux in the Ly-alpha line is approximately 1.0*10^(-17) ergs/cm^2/s and
the restframe equivalent width, correcting for the absorption caused by
intervening HI, is approximately 90 Angstroms. The galaxy is compact and regular, but
resolved, with an observed FWHM of ~0.44". Simple evolutionary models can
accurately reproduce the colors and these models predict the Ly-alpha flux to
within a factor of 2. Using this object as a template shifted to higher
redshifts, we calculate the magnitudes through the F814W and two NICMOS
passbands for galaxies at redshifts 6 < z < 10.
|
The entropy growth in a cosmological process of pair production is completely
determined by the associated squeezing parameter, and is insensitive to the
number of particles in the initial state. The total produced entropy may
represent a significant fraction of the entropy stored today in the cosmic
black-body radiation, provided pair production originates from a change in the
background metric at a curvature scale of the Planck order.
|
We study a semidefinite programming (SDP) relaxation of the maximum
likelihood estimation for exactly recovering a hidden community of cardinality
$K$ from an $n \times n$ symmetric data matrix $A$, where for distinct indices
$i,j$, $A_{ij} \sim P$ if $i, j$ are both in the community and $A_{ij} \sim Q$
otherwise, for two known probability distributions $P$ and $Q$. We identify a
sufficient condition and a necessary condition for the success of SDP for the
general model. For both the Bernoulli case ($P={{\rm Bern}}(p)$ and $Q={{\rm
Bern}}(q)$ with $p>q$) and the Gaussian case ($P=\mathcal{N}(\mu,1)$ and
$Q=\mathcal{N}(0,1)$ with $\mu>0$), which correspond to the problem of planted
dense subgraph recovery and submatrix localization respectively, the general
results lead to the following findings: (1) If $K=\omega( n /\log n)$, SDP
attains the information-theoretic recovery limits with sharp constants; (2) If
$K=\Theta(n/\log n)$, SDP is order-wise optimal, but strictly suboptimal by a
constant factor; (3) If $K=o(n/\log n)$ and $K \to \infty$, SDP is order-wise
suboptimal. The same critical scaling for $K$ is found to hold, up to constant
factors, for the performance of SDP on the stochastic block model of $n$
vertices partitioned into multiple communities of equal size $K$. A key
ingredient in the proof of the necessary condition is a construction of a
primal feasible solution based on random perturbation of the true cluster
matrix.
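For concreteness, the Bernoulli observation model can be generated in a few lines of NumPy. The function name and parameter values below are illustrative assumptions, and the sketch covers only the data model, not the SDP recovery itself:

```python
import numpy as np

def planted_bernoulli_matrix(n, K, p, q, seed=0):
    """Generate the symmetric data matrix A of the Bernoulli case:
    A_ij ~ Bern(p) if i and j are both in the hidden community and
    A_ij ~ Bern(q) otherwise. Illustrative data generation only; the
    paper's contribution is the SDP analysis, not this sampling step."""
    rng = np.random.default_rng(seed)
    community = rng.choice(n, size=K, replace=False)
    in_c = np.zeros(n, dtype=bool)
    in_c[community] = True
    probs = np.where(np.outer(in_c, in_c), p, q)   # pairwise Bern parameters
    upper = np.triu(rng.random((n, n)) < probs, 1)  # sample upper triangle
    A = (upper | upper.T).astype(int)               # symmetrize, zero diagonal
    return A, set(community.tolist())

A, comm = planted_bernoulli_matrix(200, 40, 0.8, 0.2)
inside = [A[i, j] for i in comm for j in comm if i < j]
assert sum(inside) / len(inside) > 0.7   # in-community density near p = 0.8
assert (A == A.T).all()
```

Exact recovery then asks for the hidden index set back from A alone, which is where the SDP relaxation and the critical scaling of K enter.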
|
The searches for CP violating effects in diatomic molecules, such as
$\text{HfF}^+$ and ThO, are typically interpreted as a probe of the electron's
electric dipole moment ($e\text{EDM}$), a new electron-nucleon interaction, and
a new electron-electron interaction. However, in the case of a non-vanishing
nuclear spin, a new CP violating nucleon-nucleon long range force will also
affect the measurement. Here, we use the $\text{HfF}^+$ $e\text{EDM}$ search
and derive a new bound on this hypothetical interaction, which is the most
stringent from terrestrial experiments in the 1 eV-10 keV mass range. These
multiple new physics sources motivate independent searches in different
molecular species for CP violation at low energy that result in
model-independent bounds, which are insensitive to cancellations among the
different sources.
|
Micropolar fluid mechanics and its transport coefficients are derived
from the linearized Boltzmann equation of rotating particles. In the dilute
limit, as expected, transport coefficients relating to microrotation are not
important, but the results are useful for the description of collisional
granular flow on an inclined slope.
(This paper will be published in Traffic and Granular Flow 2001 edited by
Y.Sugiyama and D. E. Wolf (Springer))
|
Using techniques of the theory of semigroups of linear operators we study the
question of approximating solutions to equations governing diffusion in thin
layers separated by a semi-permeable membrane. We show that as thickness of the
layers converges to $0$, the solutions, which by nature are functions of $3$
variables, gradually lose dependence on the vertical variable and thus may be
regarded as functions of $2$ variables. The limit equation describes diffusion
on the lower and upper sides of a two-dimensional surface (the membrane) with
jumps from one side to the other. The latter possibility is expressed as an
additional term in the generator of the limit semigroup, and this term is built
from permeability coefficients of the membrane featuring in the transmission
conditions of the approximating equations (i.e., in the description of the
domains of the generators of the approximating semigroups). We prove this
convergence result in the spaces of square integrable and continuous functions,
and study the way the choice of transmission conditions influences the limit.
|
In the two space dimensions of screens in optical systems, rotations,
gyrations, and fractional Fourier transformations form the Fourier subgroup of
the symplectic group of linear canonical transformations: U(2)_F $\subset$
Sp(4,R). Here we study the action of this Fourier group on pixellated images
within generic rectangular $N_x$ $\times$ $N_y$ screens; its elements here
compose properly and act unitarily, i.e., without loss of information.
|
In the previous papers, we studied the 't Hooft-Polyakov (TP) monopole
configurations in the U(2) gauge theory on the fuzzy 2-sphere, and showed that
they have nonzero topological charge in the formalism based on the
Ginsparg-Wilson (GW) relation. In this paper, we will show an index theorem in
the TP monopole background, which is defined in the projected space, and
provide a meaning of the projection operator. We also extend the index theorem
to general configurations which do not satisfy the equation of motion, and show
that the configuration space can be classified into the topological sectors. We
further calculate the spectrum of the GW Dirac operator in the TP monopole
backgrounds, and consider the index theorem in these cases.
|
Stabbing Planes (also known as Branch and Cut) is a proof system introduced
very recently which, informally speaking, extends the DPLL method by branching
on integer linear inequalities instead of single variables. The techniques
known so far to prove size and depth lower bounds for Stabbing Planes are
generalizations of those used for the Cutting Planes proof system. For size
lower bounds these are established by monotone circuit arguments, while for
depth these are found via communication complexity and protection. As such
these bounds apply for lifted versions of combinatorial statements. Rank lower
bounds for Cutting Planes are also obtained by geometric arguments called
protection lemmas.
In this work we introduce two new geometric approaches to prove size/depth
lower bounds in Stabbing Planes working for any formula: (1) the antichain
method, relying on Sperner's Theorem and (2) the covering method which uses
results on essential coverings of the boolean cube by linear polynomials, which
in turn relies on Alon's combinatorial Nullstellensatz.
We demonstrate their use on classes of combinatorial principles such as the
Pigeonhole principle, the Tseitin contradictions and the Linear Ordering
Principle. By the first method we prove almost linear size lower bounds and
optimal logarithmic depth lower bounds for the Pigeonhole principle and
analogous lower bounds for the Tseitin contradictions over the complete graph
and for the Linear Ordering Principle. By the covering method we obtain a
superlinear size lower bound and a logarithmic depth lower bound for Stabbing
Planes proofs of Tseitin contradictions over a grid graph.
|
We have performed monitoring observations of the flux density toward the
Galactic center compact radio source, Sagittarius A* (Sgr A*), which is a
supermassive black hole, from 1996 to 2005 using the Nobeyama Millimeter Array
of the Nobeyama Radio Observatory, Japan. These monitoring observations of Sgr
A* were carried out in the 3- and 2-mm (100 and 140 GHz) bands, and we have
detected several flares of Sgr A*. We found intraday variation of Sgr A* in the
2000 March flare. The twofold increase timescale is estimated to be about 1.5
hr at 140 GHz. This intraday variability suggests that the physical size of the
flare-emitting region is compact on a scale at or below about 12 AU (~150 Rs;
Schwarzschild radius). On the other hand, clear evidence of long-term periodic
variability was not found from a periodicity analysis of our current millimeter
data set.
|
Nonnegative matrix factorization (NMF) is the problem of decomposing a given
nonnegative $n \times m$ matrix $M$ into a product of a nonnegative $n \times
d$ matrix $W$ and a nonnegative $d \times m$ matrix $H$. A longstanding open
question, posed by Cohen and Rothblum in 1993, is whether a rational matrix $M$
always has an NMF of minimal inner dimension $d$ whose factors $W$ and $H$ are
also rational. We answer this question negatively, by exhibiting a matrix for
which $W$ and $H$ require irrational entries.
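For intuition about what an approximate NMF looks like in practice, here is a generic sketch using Lee-Seung multiplicative updates, a standard numerical method; it finds approximate real factorizations, in contrast to the exact (and, per this result, possibly necessarily irrational) factorizations the open question concerns:

```python
import numpy as np

def nmf(M, d, iters=2000, seed=0):
    """Lee-Seung multiplicative updates for M ~ W @ H with nonnegative
    factors W (n x d) and H (d x m). A generic numerical sketch only."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    W = rng.random((n, d)) + 0.1
    H = rng.random((d, m)) + 0.1
    for _ in range(iters):
        # Multiplicative updates keep W, H nonnegative by construction.
        H *= (W.T @ M) / (W.T @ W @ H + 1e-12)
        W *= (M @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# A rank-2 nonnegative matrix that admits an exact inner-dimension-2 NMF.
M = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W, H = nmf(M, 2)
assert np.allclose(W @ H, M, atol=1e-2)
assert (W >= 0).all() and (H >= 0).all()
```

Floating-point methods like this sidestep the rationality question entirely, which is precisely why the exact-arithmetic question posed by Cohen and Rothblum is subtle.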
|
We show that the gaseous halos of collapsed objects introduce a substantial
cumulative opacity to ionizing radiation, even after the smoothly distributed
hydrogen in the intergalactic medium has been fully reionized. This opacity
causes a delay of order unity in redshift between the time of the overlap of
ionized bubbles in the intergalactic medium and the lifting of complete
Gunn-Peterson Lyman alpha absorption. The minihalos responsible for this
screening effect are not resolved by existing numerical simulations of
reionization.
|
In this paper, we study the limit behavior of the conical K\"ahler-Ricci flow
as its cone angle tends to zero. More precisely, we prove that as the cone
angle tends to zero, the conical K\"ahler-Ricci flow converges to a unique
K\"ahler-Ricci flow, which is smooth outside the divisor and admits cusp
singularity along the divisor.
|
For a sample of 38 Galactic globular clusters (GCs), we confront the observed
distributions of blue straggler (BS) proper motions and masses (derived from
isochrone fitting) from the BS catalog of Simunovic & Puzia with theoretical
predictions for each of the two main competing BS formation mechanisms. These
are mass transfer from an evolved donor on to a main-sequence (MS) star in a
close binary system, and direct collisions involving MS stars during binary
encounters. We use the \texttt{FEWBODY} code to perform simulations of
single-binary and binary-binary interactions. This provides collisional
velocity and mass distributions for comparison to the observed distributions.
Most clusters are consistent with BSs derived from a dynamically relaxed
population, supportive of the binary mass-transfer scenario. In a few clusters,
including all the post-core collapse clusters in our sample, the collisional
velocities provide the best fit.
|
We investigate the effect of clustering on network observability transitions.
In the observability model introduced by Yang, Wang, and Motter [Phys. Rev.
Lett. 109, 258701 (2012)], a given fraction of nodes are chosen randomly, and
they and their neighbors are considered to be observable, while the other nodes
are unobservable. For the observability model on random clustered networks, we
derive the normalized sizes of the largest observable component (LOC) and
largest unobservable component (LUC). Considering the case where the numbers of
edges and triangles of each node are given by the Poisson distribution, we find
that both LOC and LUC are affected by the network's clustering: more
highly-clustered networks have lower critical node fractions for forming
macroscopic LOC and LUC, but this effect is small, becoming almost negligible
unless the average degree is small. We also evaluate bounds for these critical
points to confirm clustering's weak or negligible effect on the network
observability transition. The accuracy of our analytical treatment is confirmed
by Monte Carlo simulations.
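The model itself is easy to simulate. The sketch below is a minimal Monte Carlo version on an unclustered Erdos-Renyi graph (function and parameter names are illustrative; the paper's random clustered networks additionally place triangles, which this sketch omits):

```python
import random
from collections import defaultdict

def largest_observable_component(n, avg_deg, phi, seed=0):
    """Monte Carlo sketch of the observability model on an Erdos-Renyi
    graph: a fraction phi of nodes is observed directly, and they plus
    their neighbors are observable. Returns the normalized size of the
    largest connected observable component (LOC)."""
    rng = random.Random(seed)
    p = avg_deg / (n - 1)
    adj = defaultdict(set)
    for i in range(n):                     # sample the random graph
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    observed = {i for i in range(n) if rng.random() < phi}
    observable = set(observed)
    for v in observed:                     # neighbors become observable too
        observable |= adj[v]
    best, seen = 0, set()
    for s in observable:                   # DFS over the observable subgraph
        if s in seen:
            continue
        size, stack = 0, [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in observable and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / n

# A larger directly-observed fraction yields a larger observable component.
assert largest_observable_component(1000, 4, 0.4) > largest_observable_component(1000, 4, 0.02)
```

The clustered case in the paper replaces the Erdos-Renyi edge sampling with a joint degree-triangle distribution, shifting the critical node fraction for a macroscopic LOC slightly downward.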
|
We report experimental study of the secondary modulational instability of a
one-dimensional non-linear traveling wave in a long bounded channel. Two
qualitatively different instability regimes involving fronts of spatio-temporal
defects are linked to the convective and absolute nature of the instability.
Both transitions appear to be subcritical. The spatio-temporal defects control
the global mode structure.
|
Robot navigation in dynamic environments shared with humans is an important
but challenging task, which suffers from performance deterioration as the crowd
grows. In this paper, a multi-subgoal robot navigation approach based on deep
reinforcement learning is proposed, which can reason about more comprehensive
relationships among all agents (robot and humans). Specifically, the next
position point for the robot is planned by incorporating history information
and interactions. Firstly, based on a subgraph network, the history
information of all agents is aggregated before encoding interactions through a
graph neural network, so as to improve the ability of the robot to anticipate
the future scenarios implicitly. Furthermore, to reduce the probability of
unreliable next position points, a selection module is designed after the
policy network in the reinforcement learning framework; the next position
point it generates satisfies the task requirements better than that obtained
directly from the policy network. The experiments demonstrate that our
approach outperforms
state-of-the-art approaches in terms of both success rate and collision rate,
especially in crowded human environments.
|