We compactify the pure spinor formalism on a K3 surface. The pure spinor
splits into a six-dimensional pure spinor, a projective superspace harmonic,
and 6 non-covariant variables. A homological algebra argument reduces the
calculation of the cohomology of the Berkovits differential to a "small"
Hilbert space which is the string-theoretic analogue of projective superspace.
The description of the physical state conditions is facilitated by lifting to
the full harmonic superspace, which is accomplished by the introduction of the
missing harmonics as non-minimal variables. Finally, contact with the hybrid
formalism is made by returning to the small Hilbert space and fermionizing the
projective parameter.
|
We present a deterministic entanglement purification protocol (EPP) that works
with currently available experimental techniques. In this protocol, we resort to
robust time-bin entanglement to purify polarization entanglement
deterministically, which is quite different from previous EPPs. After
purification, one can obtain a completely pure maximally entangled pair with
a success probability of 100%, in principle. As maximal polarization
entanglement is of vital importance in long-distance quantum communication, this
protocol may find wide application.
|
Study of light-emitting silicon fabricated by ion implantation.
|
Initially designed to discover short-period planets, the N2K campaign has
since evolved to discover new worlds at large separations from their host
stars. Detecting such worlds will help determine the giant planet occurrence at
semi-major axes beyond the ice line, where gas giants are thought to mostly
form. Here we report four newly-discovered gas giant planets (with minimum
masses ranging from 0.4 to 2.1 MJup) orbiting stars monitored as part of the
N2K program. Two of these planets orbit stars already known to host planets: HD
5319 and HD 11506. The remaining discoveries reside in previously-unknown
planetary systems: HD 10442 and HD 75784. The refined orbital period of the
inner planet orbiting HD 5319 is 641 days. The newly-discovered outer planet
orbits in 886 days. The large masses combined with the proximity to a 4:3 mean
motion resonance make this system a challenge to explain with current formation
and migration theories. HD 11506 has one confirmed planet, and here we confirm
a second. The outer planet has an orbital period of 1627.5 days, and the
newly-discovered inner planet orbits in 223.6 days. A planet has also been
discovered orbiting HD 75784 with an orbital period of 341.7 days. There is
evidence for a longer period signal; however, several more years of
observations are needed to put tight constraints on the Keplerian parameters
for the outer planet. Lastly, an additional planet has been detected orbiting
HD 10442 with a period of 1043 days.
|
Our knowledge of the intrinsic properties of short duration Gamma-Ray Bursts
has relied, so far, only upon a few cases for which the estimate of the
distance and an extended, multiwavelength monitoring of the afterglow have been
obtained. We carried out multiwavelength observations of the short GRB 061201
aimed at estimating its distance and studying its properties. We performed a
spectral and timing analysis of the prompt and afterglow emission and discuss
the results in the context of the standard fireball model. A clear temporal
break was observed in the X-ray light curve about 40 minutes after the burst
trigger. We find that the spectral and timing behaviour of the X-ray afterglow
is consistent with a jet origin of the observed break, although the optical
data cannot definitively confirm this, and other scenarios are possible. No
underlying host galaxy down to R~26 mag was found after fading of the optical
afterglow. Thus, no secure redshift could be measured for this burst. The
nearest galaxy is at z=0.111 and shows evidence of star formation activity. We
discuss the association of GRB 061201 with this galaxy and with the ACO S 995
galaxy cluster, from which the source is at an angular distance of 17'' and
8.5', respectively. We also test the association with a possible undetected,
positionally consistent galaxy at z~1. In all these cases, in the jet
interpretation, we find a jet opening angle of 1-2 degrees.
|
We present a new approach to Eulerian computational fluid dynamics that is
designed to work at high Mach numbers encountered in astrophysical hydrodynamic
simulations. The Eulerian fluid conservation equations are solved in an
adaptive frame moving with the fluid where Mach numbers are minimized. The
moving frame approach uses a velocity decomposition technique to define local
kinetic variables while storing the bulk kinetic components in a smoothed
background velocity field that is associated with the grid velocity.
Gravitationally induced accelerations are added to the grid, thereby minimizing
the spurious heating problem encountered in cold gas flows. Separately tracking
local and bulk flow components allows thermodynamic variables to be accurately
calculated in both subsonic and supersonic regions. A main feature of the
algorithm, that is not possible in previous Eulerian implementations, is the
ability to resolve shocks and prevent spurious heating where both the preshock
and postshock Mach numbers are high. The hybrid algorithm combines the high
resolution shock capturing ability of the second-order accurate Eulerian TVD
scheme with a low-diffusion Lagrangian advection scheme. We have implemented a
cosmological code where the hydrodynamic evolution of the baryons is captured
using the moving frame algorithm while the gravitational evolution of the
collisionless dark matter is tracked using a particle-mesh N-body algorithm.
The MACH code is highly suited for simulating the evolution of the IGM where
accurate thermodynamic evolution is needed for studies of the Lyman alpha
forest, the Sunyaev-Zeldovich effect, and the X-ray background. Hydrodynamic
and cosmological tests are described and results presented. The current code is
fast, memory-friendly, and parallelized for shared-memory machines.
|
The Luttinger-Ward functional was proposed more than five decades ago to
provide a link between static and dynamic quantities in a quantum many-body
system. Despite its widespread usage, the derivation of the Luttinger-Ward
functional remains valid only in the formal sense, and even the very existence
of this functional has been challenged by recent numerical evidence. In a
simpler and yet highly relevant regime, namely the Euclidean lattice field
theory, we rigorously prove that the Luttinger-Ward functional is a
well-defined universal functional over all physical Green's functions. Using
the Luttinger-Ward functional, the free energy can be variationally minimized
with respect to Green's functions in its domain. We then derive the widely used
bold diagrammatic expansion rigorously, without relying on formal arguments
such as partial resummation of bare diagrams to infinite order.
|
Solid-state drives have a number of interesting characteristics. However,
there are numerous file system and storage design issues for SSDs that impact
performance and device endurance. Many flash-oriented and flash-friendly
file systems introduce significant write amplification and GC overhead,
resulting in shorter SSD lifetime and the need for NAND flash
overprovisioning. The SSDFS file system introduces several original concepts
and mechanisms: logical segments, logical extents, a per-segment PEB pool,
Main/Diff/Journal areas in the PEB's log, a Diff-On-Write approach, a PEB
migration scheme, hot/warm data self-migration, a segment bitmap, a hybrid
b-tree, a shared dictionary b-tree, and a shared extents b-tree. In
combination, these concepts are able to: (1) manage write amplification in a
smart way, (2) decrease GC overhead, (3) prolong SSD lifetime, and (4) provide
predictable file system performance.
|
We prove that any odd-dimensional modular category of rank at most 23 is
pointed. We also show that an odd-dimensional modular category of rank 25 is
either pointed, perfect, or equivalent to $\operatorname{Rep}(D^\omega(\mathbb
Z_7\rtimes \mathbb Z_3))$. Finally, we give partial classification results for
modular categories of rank up to 73.
|
More than any other species, humans form social ties to individuals who are
neither kin nor mates, and these ties tend to be with similar people. Here, we
show that this similarity extends to genotypes. Across the whole genome,
friends' genotypes at the SNP level tend to be positively correlated
(homophilic); however, certain genotypes are negatively correlated
(heterophilic). A focused gene set analysis suggests that some of the overall
correlation can be explained by specific systems; for example, an olfactory
gene set is homophilic and an immune system gene set is heterophilic. Finally,
homophilic genotypes exhibit significantly higher measures of positive
selection, suggesting that, on average, they may yield a synergistic fitness
advantage that has been helping to drive recent human evolution.
|
In this paper, we model dwarf galaxies as a two-component system of
gravitationally coupled stars and atomic hydrogen gas in the external force
field of a pseudo-isothermal dark matter halo, and numerically obtain the
radial distribution of {H\,{\sc i}} vertical scale heights. This is done for a
group of four dwarf galaxies (DDO\,154, Ho\,II, IC\,2574 and NGC\,2366) for
which most necessary input parameters are available from observations. The
formulation of the equations takes into account the rising rotation curves
generally observed in dwarf galaxies. The inclusion of self-gravity of the gas
into the model at par with that of the stars results in scale heights that are
smaller than what was obtained by previous authors. This is important as the
gas scale height is often used for deriving other physical quantities. The
inclusion of gas self-gravity is particularly relevant in the case of dwarf
galaxies where the gas cannot be considered a minor perturbation to the mass
distribution of the stars. We find that three out of four galaxies studied show
a flaring of their {H\,{\sc i}} disks with increasing radius, by a factor of a
few within several disk scale lengths. The fourth galaxy has a thick {H\,{\sc
i}} disk throughout. This arises as a result of the gas velocity dispersion
remaining constant or decreasing only slightly while the disk mass distribution
declines exponentially as a function of radius.
|
The \emph{Wiener index} is one of the most widely studied parameters in
chemical graph theory. It is defined as the sum of the lengths of the shortest
paths between all unordered pairs of vertices in a given graph. In 1991,
\v{S}olt\'es posed the following problem regarding the Wiener index: Find all
graphs whose Wiener index is preserved upon removal of any vertex. The
problem is far from solved, and to this day only one graph with this
property is known: the cycle graph on 11 vertices.
In this paper, we solve a relaxed version of the problem, proposed by Knor et
al.\ in 2018. For a given $k$, the problem is to find (infinitely many) graphs
having exactly $k$ vertices such that the Wiener index remains the same after
removing any of them. We call these vertices \emph{good} vertices and we show
that there are infinitely many cactus graphs with exactly $k$ cycles of length
at least 7 that contain exactly $2k$ good vertices and infinitely many cactus
graphs with exactly $k$ cycles of length $c \in \{5,6\}$ that contain exactly
$k$ good vertices. On the other hand, we prove that $G$ has no good vertex if
the length of the longest cycle in $G$ is at most $4$.
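As a quick illustration of the one known solution to Šoltés's original problem, the following self-contained sketch (plain Python, with our own hypothetical helper names) verifies that removing any vertex of the 11-cycle leaves the Wiener index unchanged at 165:

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs,
    computed by BFS from every vertex (graphs here are unweighted)."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2            # each pair was counted twice

def cycle(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def delete_vertex(adj, x):
    return {v: [u for u in adj[v] if u != x] for v in adj if v != x}

c11 = cycle(11)
w = wiener_index(c11)
preserved = all(wiener_index(delete_vertex(c11, v)) == w for v in range(11))
print(w, preserved)   # 165 True
```

Deleting any vertex of $C_{11}$ leaves the path $P_{10}$, whose Wiener index is also 165, so every vertex of $C_{11}$ is good in the sense above.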
|
Weak interactions between quarks give rise to hadronic parity violation which
can be observed in nuclear and few-nucleon systems. We study the QCD
renormalization of the isotensor component of parity violation at
next-to-leading order accuracy. The renormalization group is employed to evolve
the interaction down to hadronic scales. As the results are renormalization
scheme dependent, we compare various schemes, including 't Hooft-Veltman
dimensional regularization and several regularization-independent momentum
subtraction schemes.
|
Community Question Answering (CQA) has become a primary means for people to
acquire knowledge, where people are free to ask questions or submit answers. To
enhance the efficiency of the service, similar question identification becomes
a core task in CQA which aims to find a similar question from the archived
repository whenever a new question is asked. However, it has long been a
challenge to properly measure the similarity between two questions due to the
inherent variation of natural language, i.e., there could be different ways to
ask the same question, or different questions may share similar expressions. To
alleviate this problem, it is natural to involve the existing answers for the
enrichment of the archived questions. Traditional methods typically take a
one-side usage, which leverages the answer as some expanded representation of
the corresponding question. Unfortunately, this may introduce unexpected noise
into the similarity computation since answers are often long and diverse,
leading to inferior performance. In this work, we propose a two-side usage,
which leverages the answer as a bridge of the two questions. The key idea is
based on our observation that similar questions could be addressed by similar
parts of the answer while different questions may not. In other words, we can
compare the matching patterns of the two questions over the same answer to
measure their similarity. In this way, we propose a novel matching over
matching model, namely Match$^2$, which compares the matching patterns between
two question-answer pairs for similar question identification. Empirical
experiments on two benchmark datasets demonstrate that our model can
significantly outperform previous state-of-the-art methods on the similar
question identification task.
|
In a continuum model of the solvation of charged molecules in an aqueous
solvent, the classical Poisson-Boltzmann (PB) theory is generalized to include
the solute point charges and the dielectric boundary that separates the
high-dielectric solvent from the low-dielectric solutes. With such a setting,
we construct an effective electrostatic free-energy functional of ionic
concentrations, where the solute point charges are regularized by a reaction
field. We prove that such a functional admits a unique minimizer in a class of
admissible ionic concentrations and that the corresponding electrostatic
potential is the unique solution to the boundary-value problem of the
dielectric-boundary PB equation. The negative first variation of this minimum
free energy with respect to variations of the dielectric boundary defines the
normal component of the dielectric boundary force. Together with the
solute-solvent interfacial tension and van der Waals interaction forces, such
boundary force drives an underlying charged molecular system to a stable
equilibrium, as described by a variational implicit-solvent model. We develop
an $L^2$-theory for the continuity and differentiability of solutions to
elliptic interface problems with respect to boundary variations, and derive an
explicit formula of the dielectric boundary force. With a continuum
description, our result of the dielectric boundary force confirms a
molecular-level prediction that the electrostatic force points from the
high-dielectric and polarizable aqueous solvent to the charged molecules. Our
method of analysis is general as it does not rely on any variational
principles.
|
Given $n$ independent random marked $d$-vectors (points) $X_i$ distributed
with a common density, define the measure $\nu_n=\sum_i\xi_i$, where $\xi_i$ is
a measure (not necessarily a point measure) which stabilizes; this means that
$\xi_i$ is determined by the (suitably rescaled) set of points near $X_i$. For
bounded test functions $f$ on $R^d$, we give weak and strong laws of large
numbers for $\nu_n(f)$. The general results are applied to demonstrate that an
unknown set $A$ in $d$-space can be consistently estimated, given data on which
of the points $X_i$ lie in $A$, by the corresponding union of Voronoi cells,
answering a question raised by Khmaladze and Toronjadze. Further applications
are given concerning the Gamma statistic for estimating the variance in
nonparametric regression.
|
We study the impact of the ambient fluid on the evolution of collapsing false
vacuum bubbles by simulating the dynamics of a coupled bubble-particle system.
A significant increase in the mass of the particles across the bubble wall
leads to a buildup of those particles inside the false vacuum bubble. We show
that the backreaction of the particles on the bubble slows or even reverses the
collapse. Consequently, if the particles in the true vacuum become heavier than
in the false vacuum, the particle-wall interactions always decrease the
compactness that the false vacuum bubbles can reach making their collapse to
black holes less likely.
|
Recently, commissioning APOGEE observations of the Galactic bulge reported
that a significant fraction of stars ($\sim10\%$) are in a cold ($\sigma_{\rm V}
\approx 30$ km/s) high velocity peak (Galactocentric radial velocity $\approx
200$ km/s). These stars are speculated to reflect the stellar orbits in the
Galactic bar. In this study, we use two $N$-body models of the Milky Way-like
disk galaxy with different bar strengths to critically examine this
possibility. The general trends of the Galactocentric radial velocity
distribution in observations and simulations are similar, but neither our
models nor the BRAVA data reveal a statistically significant cold high velocity
peak. A Monte Carlo test further suggests that it is possible for a spurious
high velocity peak to appear if there are only a limited number of stars
observed. Thus, the reported cold high velocity peak, even if it is real, is
unlikely due to stars on the bar-supporting orbits. Our models do predict an
excess of stars with high radial velocity, but not in a distinct peak. In the
distance--velocity diagram, the high velocity particles in different fields
exist at a similar distance $\sim8.5 \pm 1$ kpc away from the Sun. This result
may be explained with geometric intersections between the line-of-sight and the
particle orbits; high velocity stars naturally exist approximately at the
tangent point, without constituting a distinct peak. We further demonstrate
that even without the presence of a bar structure, particle motions in an
axisymmetric disk can also exhibit an excess of high velocity stars.
|
We report the experimental demonstration of directly produced polarization
squeezing at 1064 nm from a type I optical parametric amplifier (OPA) based on
a periodically poled KTP crystal (PPKTP). The orthogonal polarization modes of
the polarization squeezed state are both defined by the OPA cavity mode, and
the birefringence induced by the PPKTP crystal is compensated for by a second,
but inactive, PPKTP crystal. Stokes parameter squeezing of 3.6 dB and
anti-squeezing of 9.4 dB are observed.
|
The restricted online Ramsey numbers were introduced by Conlon, Fox,
Grinshpun and He in 2019. In a recent paper, Briggs and Cox studied the
restricted online Ramsey numbers of matchings and determined a general upper
bound for them. They proved that for $n=3r-1=R_2(r K_2)$ we have
$\tilde{R}_{2}(r K_2;n) \leq n-1$ and asked whether this was tight. In this
short note, we provide a general lower bound for these Ramsey numbers. As a
corollary, we answer this question of Briggs and Cox, and confirm that for
$n=3r-1$ we have $\tilde{R}_{2}(r K_2;n) = n-1$. We also show that for
$n'=4r-2=R_3(r K_2)$ we have $\tilde{R}_{3}(r K_2;n') = 5r-4$.
|
The Muon (g-2) Experiment (E821) at Brookhaven National Laboratory (BNL) has
measured the anomalous magnetic moment of the positive muon to an unprecedented
precision of 1.3 parts per million. The result, a_{\mu^+} = (g-2)/2 = 11 659
202(14)(6) X 10^{-10}, is based on data recorded in 1999 and is in good
agreement with previous measurements. Upcoming analysis of data recorded in
2000 and 2001 will substantially reduce the uncertainty on this measurement.
Comparison of the new world average experimental value with the most
comprehensive Standard Model calculation, a_\mu(SM) = 11 659 159.6(6.7) X
10^{-10}, yields a difference of a_\mu(exp)-a_\mu(SM) = 43(16) X 10^{-10}.
|
In applied multivariate statistics, estimating the number of latent
dimensions or the number of clusters is a fundamental and recurring problem.
One common diagnostic is the scree plot, which shows the largest eigenvalues of
the data matrix; the user searches for a "gap" or "elbow" in the decreasing
eigenvalues; unfortunately, these patterns can hide beneath the bias of the
sample eigenvalues. This methodological problem is conceptually difficult
because, in many situations, there is only enough signal to detect a subset of
the $k$ population dimensions/eigenvectors. In this situation, one could argue
that the correct choice of $k$ is the number of detectable dimensions. We
alleviate these problems with cross-validated eigenvalues. Under a large class
of random graph models, without any parametric assumptions, we provide a
p-value for each sample eigenvector. It tests the null hypothesis that this
sample eigenvector is orthogonal to (i.e., uncorrelated with) the true latent
dimensions. This approach naturally adapts to problems where some dimensions
are not statistically detectable. In scenarios where all $k$ dimensions can be
estimated, we prove that our procedure consistently estimates $k$. In
simulations and a data example, the proposed estimator compares favorably to
alternative approaches in both computational and statistical performance.
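The sample-splitting idea behind cross-validated eigenvalues can be sketched on a toy two-block model: thin the edges into a training graph and a held-out graph, compute eigenvectors on the training half, and z-score each eigenvector's quadratic form on the held-out half. All parameters and the exact centering below are our own illustrative simplifications, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-block stochastic block model with k = 2 detectable dimensions
n = 400
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], 0.15, 0.03)

A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric adjacency, no loops

# Edge splitting: each edge lands in the held-out graph with probability eps
eps = 0.1
mask = np.triu(rng.random((n, n)) < eps, 1)
mask = mask | mask.T
A_test, A_train = A * mask, A * ~mask

vals, vecs = np.linalg.eigh(A_train)
order = np.argsort(-vals)                     # leading train eigenvectors first

def z_stat(u):
    """z-score of u' A_test u against the null that u is uncorrelated with
    the latent structure, centered/scaled by the estimated edge rate."""
    p = A_train.sum() / (n * (n - 1)) * eps / (1 - eps)   # held-out edge rate
    mean0 = p * (u.sum() ** 2 - (u ** 2).sum())
    var0 = 2 * p * ((u ** 2).sum() ** 2 - (u ** 4).sum())
    return (u @ A_test @ u - mean0) / np.sqrt(var0)

zs = [z_stat(vecs[:, j]) for j in order[:4]]
print(np.round(zs, 1))    # the block eigenvector stands out; later ones do not
```

Because the held-out edges are (nearly) independent of the training graph, a noise eigenvector gives an approximately standard-normal z, while an eigenvector aligned with a true latent dimension gives a large one; thresholding these z-scores yields the per-eigenvector p-values the abstract describes.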
|
The inspection-planning problem calls for computing motions for a robot that
allow it to inspect a set of points of interest (POIs) while considering plan
quality (e.g., plan length). This problem has applications across many domains
where robots can help with inspection, including infrastructure maintenance,
construction, and surgery. Incremental Random Inspection-roadmap Search (IRIS)
is an asymptotically-optimal inspection planner that was shown to compute
higher-quality inspection plans orders of magnitude faster than the prior
state-of-the-art method. In this paper, we significantly accelerate the
performance of IRIS to broaden its applicability to more challenging real-world
applications. A key computational challenge that IRIS faces is effectively
searching roadmaps for inspection plans -- a procedure that dominates its
running time. In this work, we show how to incorporate lazy edge-evaluation
techniques into IRIS's search algorithm and how to reuse search efforts when a
roadmap undergoes local changes. These enhancements, which do not compromise
IRIS's asymptotic optimality, enable us to compute inspection plans much faster
than the original IRIS. We apply IRIS with the enhancements to simulated bridge
inspection and surgical inspection tasks and show that, for some scenarios, our
new algorithm can compute similar-quality inspection plans 570x faster than
prior work.
|
The high cost of acquiring labels is one of the main challenges in deploying
supervised machine learning algorithms. Active learning is a promising approach
to control the learning process and address the difficulties of data labeling
by selecting labeled training examples from a large pool of unlabeled
instances. In this paper, we propose a new data-driven approach to active
learning by choosing a small set of labeled data points that are both
informative and representative. To this end, we present an efficient geometric
technique to select a diverse core-set in a low-dimensional latent space
obtained by training a Variational Autoencoder (VAE). Our experiments
demonstrate an improvement in accuracy over two related techniques and, more
importantly, signify the representation power of generative modeling for
developing new active learning methods in high-dimensional data settings.
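The diverse core-set selection step can be sketched with the classical greedy k-center heuristic applied in a latent space; the stand-in random latent codes below, and the assumption that this matches the paper's exact geometric technique, are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for VAE latent codes of an unlabelled pool; in the paper's setting
# these would come from a trained VAE encoder.
latent = rng.normal(size=(500, 2))

def greedy_core_set(X, k):
    """Greedy k-center selection: repeatedly add the point farthest from the
    current selection, yielding a diverse subset that covers the pool."""
    chosen = [0]
    d = np.linalg.norm(X - X[0], axis=1)      # distance to nearest chosen point
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen, float(d.max())             # indices and covering radius

core, radius = greedy_core_set(latent, 20)

# For comparison: covering radius of a random subset of the same size
rand = rng.choice(len(latent), size=20, replace=False)
d_rand = np.linalg.norm(latent[:, None, :] - latent[rand][None, :, :], axis=2)
radius_rand = float(d_rand.min(axis=1).max())
print(radius, radius_rand)
```

The greedy selection covers the pool with a smaller radius than a random subset of the same size, which is the sense in which the chosen points are both representative and diverse.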
|
We present results of magnetic neutron diffraction experiments on the
co-doped super-oxygenated La(2-x)Sr(x)CuO(4+y) (LSCO+O) system with x=0.09. The
spin-density wave has been studied and we find long-range incommensurate
antiferromagnetic order below T_N coinciding with the superconducting ordering
temperature T_c=40 K. The incommensurability value is consistent with a
hole-doping of n_h~1/8, but in contrast to non-superoxygenated
La(2-x)Sr(x)CuO(4) with hole-doping close to n_h ~ 1/8 the magnetic order
parameter is not field-dependent. We attribute this to the magnetic order being
fully developed in LSCO+O as in the other striped lanthanum-cuprate systems.
|
We prove that, under certain conditions, the topology of the event horizon of
a four dimensional asymptotically flat black hole spacetime must be a 2-sphere.
No stationarity assumption is made. However, in order for the theorem to apply,
the horizon topology must be unchanging for long enough to admit a certain kind
of cross section. We expect this condition is generically satisfied if the
topology is unchanging for much longer than the light-crossing time of the
black hole. More precisely, let $M$ be a four dimensional asymptotically flat
spacetime satisfying the averaged null energy condition, and suppose that the
domain of outer communication $\C_K$ to the future of a cut $K$ of $\Sm$ is
globally hyperbolic. Suppose further that a Cauchy surface $\Sigma$ for $\C_K$
is a topological 3-manifold with compact boundary $\partial\S$ in $M$, and
$\S'$ is a compact submanifold of $\bS$ with spherical boundary in $\S$ (and
possibly other boundary components in $M/\S$). Then we prove that the homology
group $H_1(\Sigma',Z)$ must be finite. This implies that either $\partial\S'$
consists of a disjoint union of 2-spheres, or $\S'$ is nonorientable and
$\partial\S'$ contains a projective plane. Further,
$\partial\S=\partial\Ip[K]\cap\partial\Im[\Sp]$, and $\partial \Sigma$ will be
a cross section of the horizon as long as no generator of $\partial\Ip[K]$
becomes a generator of $\partial\Im[\Sp]$. In this case, if $\S$ is orientable,
the horizon cross section must consist of a disjoint union of 2-spheres.
|
Building efficient large-scale quantum computers is a significant challenge
due to limited qubit connectivities and noisy hardware operations.
Transpilation is critical to ensure that quantum gates are on physically linked
qubits, while minimizing $\texttt{SWAP}$ gates and simultaneously finding
efficient decomposition into native $\textit{basis gates}$. The goal of this
multifaceted optimization step is typically to minimize circuit depth and to
achieve the best possible execution fidelity. In this work, we propose
$\textit{MIRAGE}$, a collaborative design and transpilation approach to
minimize $\texttt{SWAP}$ gates while improving decomposition using
$\textit{mirror gates}$. Mirror gates utilize the same underlying physical
interactions, but when their outputs are reversed, they realize a different or
$\textit{mirrored}$ quantum operation. Given the recent attention to
$\sqrt{\texttt{iSWAP}}$ as a powerful basis gate with decomposition advantages
over $\texttt{CNOT}$, we show how systems that implement the $\texttt{iSWAP}$
family of gates can benefit from mirror gates. Further, $\textit{MIRAGE}$ uses
mirror gates to reduce routing pressure and reduce true circuit depth instead
of just minimizing $\texttt{SWAP}$s. We explore the benefits of decomposition
for $\sqrt{\texttt{iSWAP}}$ and $\sqrt[4]{\texttt{iSWAP}}$ using mirror gates,
including both expanding Haar coverage and conducting a detailed fault rate
analysis trading off circuit depth against approximate gate decomposition. We
also describe a novel greedy approach accepting mirror substitution at
different aggression levels within MIRAGE. Finally, for $\texttt{iSWAP}$
systems that use square-lattice topologies, $\textit{MIRAGE}$ provides an
average of 29.6% reduction in circuit depth by eliminating an average of 59.9%
$\texttt{SWAP}$ gates, which ultimately improves the practical applicability of
our algorithm.
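A small numerical check illustrates the mirror-gate idea for the iSWAP family (our own illustration, not MIRAGE's code): composing iSWAP with a SWAP of its outputs yields CZ up to single-qubit S gates, so a single iSWAP interaction can realize a CZ while performing the routing of a SWAP for free:

```python
import numpy as np

# Two-qubit gates in the computational basis |00>, |01>, |10>, |11>
ISWAP = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
CZ = np.diag([1, 1, 1, -1])
S = np.diag([1, 1j])                  # single-qubit phase gate

# "Mirroring" a gate = swapping its output qubits, i.e. composing with SWAP
mirrored_iswap = SWAP @ ISWAP

# The mirrored iSWAP equals CZ up to single-qubit S corrections
ok = np.allclose(mirrored_iswap, np.kron(S, S) @ CZ)
print(ok)  # True
```

This is the routing benefit the abstract alludes to: accepting a mirrored operation lets the transpiler absorb a SWAP into an entangling gate it was going to apply anyway.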
|
Nonlinear ionization accompanied by photorecombination on a closely located
center is considered. The radiation spectrum is calculated in the framework of
the Lewenstein-Corkum approach. The dependence of the radiation on the distance
between the ionization point and the recombination center, and on the laser
frequency and field strength, is investigated.
|
We trace the evolution of research on extreme solar and solar-terrestrial
events from the 1859 Carrington event to the rapid development of the last
twenty years. Our focus is on the largest observed/inferred/theoretical cases
of sunspot groups, flares on the Sun and Sun-like stars, coronal mass
ejections, solar proton events, and geomagnetic storms. The reviewed studies
are based on modern observations, historical or long-term data including the
auroral and cosmogenic radionuclide record, and Kepler observations of Sun-like
stars. We compile a table of 100- and 1000-year events based on occurrence
frequency distributions for the space weather phenomena listed above. Questions
considered include the Sun-like nature of superflare stars and the existence of
impactful but unpredictable solar "black swans" and extreme "dragon king" solar
phenomena that can involve different physics from that operating in events
which are merely large.
|
Most existing person re-identification (re-id) methods require supervised
model learning from a separate large set of pairwise labelled training data for
every single camera pair. This significantly limits their scalability and
usability in real-world large scale deployments with the need for performing
re-id across many camera views. To address this scalability problem, we develop
a novel deep learning method for transferring the labelled information of an
existing dataset to a new unseen (unlabelled) target domain for person re-id
without any supervised learning in the target domain. Specifically, we
introduce a Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL)
framework for simultaneously learning an attribute-semantic and
identity-discriminative feature representation space transferable to any new
(unseen) target domain
for re-id tasks without the need for collecting new labelled training data from
the target domain (i.e. unsupervised learning in the target domain). Extensive
comparative evaluations validate the superiority of this new TJ-AIDL model for
unsupervised person re-id over a wide range of state-of-the-art methods on four
challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.
|
We introduce an exactly solvable model of a Fermi gas in one dimension and
compute the momentum distribution exactly. This is based on a generalisation of
the ideas of bosonization in one dimension. It is shown that in the RPA
limit (the ultra-high density limit) the answers we get are the exact answers
for a homogeneous Fermi gas interacting via a two-body repulsive Coulomb
interaction. Furthermore, the solution may be obtained exactly for arbitrary
functional forms of the interaction, so long as it is purely repulsive. No
linearization of the bare fermion dispersion is required. We find that for the
interaction considered, the Fermi surface is intact for weak repulsion and is
destroyed only for sufficiently strong repulsion. Comparison with other models
like the supersymmetric t-J model with inverse square interactions is made.
|
Let $\mathcal L=-\Delta+V$ be a Schr\"odinger operator, where $\Delta$ is the
Laplacian on $\mathbb R^d$ and the nonnegative potential $V$ belongs to the
reverse H\"older class $RH_q$ for $q\geq d$. The Riesz transform associated
with the operator $\mathcal L=-\Delta+V$ is denoted by $\mathcal
R=\nabla{(-\Delta+V)}^{-1/2}$ and the dual Riesz transform is denoted by
$\mathcal R^{\ast}=(-\Delta+V)^{-1/2}\nabla$. In this paper, we first introduce
some kinds of weighted Morrey spaces related to certain nonnegative potentials
belonging to the reverse H\"older class $RH_q$ for $q\geq d$. Then we will
establish the boundedness properties of the operators $\mathcal R$ and its
adjoint $\mathcal R^{\ast}$ on these new spaces. Furthermore, weighted
strong-type estimate and weighted endpoint estimate for the corresponding
commutators $[b,\mathcal R]$ and $[b,\mathcal R^{\ast}]$ are also obtained. The
classes of weights, the classes of symbol functions as well as weighted Morrey
spaces discussed in this paper are larger than $A_p$, $\mathrm{BMO}(\mathbb
R^d)$ and $L^{p,\kappa}(w)$ corresponding to the classical Riesz transforms
($V\equiv0$).
|
This paper reports the investigation of a matrix model via super Lie algebra,
following the proposal of L. Smolin. We consider the osp(1|32,R) nongauged
matrix model and gl(1|32,R) gauged matrix model, especially paying attention to
the supersymmetry and the relationship with the IKKT model. This paper is
based on collaboration with S. Iso, H. Kawai and Y. Ohwashi.
|
A new mechanism for mass generation of gauge fields is proposed in this
paper. By introducing two sets of gauge fields and making the variations of
these two sets of gauge fields compensate each other under local gauge
transformations, the mass term of gauge fields can be introduced into the
Lagrangian without violating its local gauge symmetry. Moreover, this model is
renormalizable.
|
Person identification based on eye movements is receiving more and more
attention, as it is resistant to spoofing and can be useful for continuous
authentication. Therefore, it is valuable for researchers to know who and
what is relevant in the field, including authors, journals, conferences, and
institutions. This paper presents a comprehensive quantitative overview of the
field of eye movement biometrics using a bibliometric approach. All data and
analyses are based on documents written in English published between 2004 and
2019. Scopus was used to perform information retrieval. This research focused
on temporal evolution, leading authors, most cited papers, leading journals,
competitions and collaboration networks.
|
In this work, evidence for an asymmetry, $\delta_L$, in the number densities
of left- and right-handed electrons in the Universe motivates us to calculate
the dominant contribution of this asymmetry to the generation of the B-mode
power spectrum $C_{B\,l}^{(S)}$. Note that, in the standard cosmological
scenario, Compton scattering in the presence of scalar matter perturbations
cannot generate a magnetic-like pattern in linear polarization, whereas in the
case of polarized Compton scattering we have shown $C_{B\,l}^{(S)}\propto
\delta_L^2$. We add the spectrum of the B-mode generated by polarized Compton
scattering to the spectra produced by weak-lensing effects and by Compton
scattering in the presence of tensor perturbations. The results show a
significant amplification of $C_{B\,l}$ on large scales $l<500$ for
$\delta_L>10^{-6}$, which will be observable in future high-resolution B-mode
polarization measurements. Finally, we have shown that the $C_{B\,l}^{(S)}$
generated by polarized Compton scattering can contaminate the tensor-to-scalar
ratio $r$, and this contamination can become comparable to a primordial
tensor-to-scalar ratio, especially for $\delta_L>10^{-5}$.
|
We test for the long-run relationship between stock prices, inflation and its
uncertainty for different U.S. sector stock indexes, over the period 2002M7 to
2015M10. For this purpose we use a cointegration analysis with one structural
break to capture the crisis effect, and we assess the inflation uncertainty
based on a time-varying unobserved-component model. In line with recent
empirical studies, we find that in the long run inflation and its uncertainty
negatively impact stock prices, in contrast to the well-known Fisher effect. In
addition, we show that for several sector stock indexes the negative effect of
inflation and its uncertainty vanishes after the crisis. However, in the short
run the results provide evidence in favor of a negative impact of uncertainty,
while inflation has no significant influence on stock prices, except for the
consumption indexes. The consideration of business-cycle effects confirms our
findings, which shows that the results are robust for both the long- and
short-run relationships.
|
The doubly charged Higgs bosons $\Delta^{--}$ that are present in exotic
Higgs representations can have lepton-number-violating couplings to $e^-e^-$.
We discuss general constraints and phenomenology for the $\Delta^{--}$ and
demonstrate that extremely small values for the $e^-e^-\rightarrow\Delta^{--}$
coupling (some 8 orders of magnitude smaller than the current limit) would
produce observable signals for $\Delta^{--}$ production in direct $s$-channel
production at an $e^-e^-$ collider.
|
This paper describes the use of the idea of natural time to propose a new
method for characterizing the seismic risk to the world's major cities at risk
of earthquakes. Rather than focus on forecasting, which is the computation of
probabilities of future events, we define the term seismic nowcasting, which is
the computation of the current state of seismic hazard in a defined geographic
region.
|
We present a calculation of the power spectrum generated in a classically
symmetry-breaking O(N) scalar field through inflationary quantum fluctuations,
using the large-N limit. The effective potential of the theory in de Sitter
space is obtained from a gap equation which is exact at large N. Quantum
fluctuations restore the O(N) symmetry in de Sitter space, but for the finite
values of N of interest, there is symmetry breaking and phase ordering after
inflation, described by the classical nonlinear sigma model. The main
difference with the usual cosmic texture scenario is that the initial
conditions possess long range correlations, with calculable exponents.
|
This is a draft of my brief note on the early history of $n\bar{n}$
oscillations written for the Project X at the request of Chris Quigg in March
2013.
|
We describe an application of computational techniques to the study of
instanton induced baryon decay in high energy electroweak interactions.
|
Transverse single-spin asymmetries $A_{\textrm{N}}$ of forward neutrons at
pseudorapidities larger than 6 had only been studied in the transverse momentum
range of $p_{\textrm{T}} < 0.4$ GeV/$c$. The RHICf Collaboration has extended
the previous measurements up to 1.0 GeV/$c$ in polarized $p+p$ collisions at
$\sqrt{s} = 510$ GeV, using an electromagnetic calorimeter installed in the
zero-degree area of the STAR detector at the Relativistic Heavy Ion Collider.
The resulting $A_{\textrm{N}}$s increase in magnitude with $p_{\textrm{T}}$ in
the high longitudinal momentum fraction $x_{\textrm{F}}$ range, but reach a
plateau at lower $p_{\textrm{T}}$ for lower $x_{\textrm{F}}$. For low
transverse momenta the $A_{\textrm{N}}$s show little $x_{\textrm{F}}$
dependence and level off at intermediate values. For higher transverse momenta
the $A_{\textrm{N}}$s also show an indication of reaching a plateau at larger
magnitudes. The results are consistent with previous measurements at
lower collision energies, suggesting no $\sqrt{s}$ dependence of the neutron
asymmetries. A theoretical model based on the interference of $\pi$ and $a_1$
exchange between two protons could partially reproduce the current results,
however an additional mechanism is necessary to describe the neutron
$A_{\textrm{N}}$s over the whole kinematic region measured.
|
Leaf area, LA, is a plant biometric index important to agroforestry and crop
production. Previous works have demonstrated the conservativeness of the
inverse of the product of the fresh leaf density and thickness, the so-called
Hughes constant, K. We use this fact to develop LAMM, an absolute method of LA
measurement, i.e. one requiring no regression fits or prior calibrations with
planimeters.
Nor does it require drying the leaves. The concept involves the in situ
determination of K using geometrical shapes and their weights obtained from a
subset of fresh leaves of the set whose areas are desired. Subsequently the
LAs, at any desired stratification level, are derived by utilizing K and the
previously measured masses of the fresh leaves. The concept was first tested in
the simulated ideal case of complete planarity and uniform thickness by using
plastic film covered card-paper sheets. Next the species-specific
conservativeness of K over individual leaf zones and different leaf types from
leaves of plants from two species, Mandevilla splendens and Spathiphyllum
wallisii, was quantitatively validated. Using the global average K values, the
LA of these and additional plants, were obtained. LAMM was found to be a rapid,
simple, economic technique with accuracies, as measured for the geometrical
shapes, that were comparable to those obtained by the planimetric method that
utilizes digital image analysis, DIA. For the leaves themselves, there were no
statistically significant differences between the LAs measured by LAMM and by
the DIA and the linear correlation between the two methods was excellent.
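The arithmetic at the heart of LAMM is simple enough to sketch in code: calibrate K as an area-per-fresh-mass ratio from geometrical shapes of known area cut from a subset of fresh leaves, then convert fresh leaf masses to areas via LA = K * mass. The function names and numbers below are illustrative, not from the paper:

```python
def calibrate_k(shape_areas, shape_masses):
    """Estimate the Hughes constant K (area per unit fresh mass) as the
    mean area/mass ratio over geometrical shapes of known area cut from
    a subset of fresh leaves."""
    ratios = [a / m for a, m in zip(shape_areas, shape_masses)]
    return sum(ratios) / len(ratios)

def leaf_areas(leaf_masses, k):
    """Convert fresh leaf masses to leaf areas via LA = K * mass."""
    return [k * m for m in leaf_masses]

# Hypothetical calibration: three 10 cm^2 shapes and their fresh masses in g
k = calibrate_k([10.0, 10.0, 10.0], [0.52, 0.48, 0.50])
areas = leaf_areas([1.0, 2.5], k)   # areas of two whole leaves, in cm^2
```

Stratified estimates would simply repeat the calibration per leaf zone or leaf type, as the abstract's species-specific validation suggests.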
|
We derive constraints on two-dimensional conformal field theories with higher
spin symmetry due to unitarity, modular invariance, and causality. We focus on
CFTs with $\mathcal{W}_N$ symmetry in the "irrational" regime, where $c>N-1$
and the theories have an infinite number of higher-spin primaries. The most
powerful constraints come from positivity of the Kac matrix, which (unlike the
Virasoro case) is non-trivial even when $c>N-1$. This places a lower bound on
the dimension of any non-vacuum higher-spin primary state, which is linear in
the central charge. At large $c$, this implies that the dual holographic
theories of gravity in AdS$_3$, if they exist, have no local, perturbative
degrees of freedom in the semi-classical limit.
|
Scavenging the idling computation resources at the enormous number of mobile
devices can provide a powerful platform for local mobile cloud computing. The
vision can be realized by peer-to-peer cooperative computing between edge
devices, referred to as co-computing. This paper considers a co-computing
system where a user offloads computation of input-data to a helper. The helper
controls the offloading process for the objective of minimizing the user's
energy consumption based on a predicted helper's CPU-idling profile that
specifies the amount of available computation resource for co-computing.
We consider the scenario in which the user has a one-shot input-data arrival
and the helper buffers offloaded bits. The problem of energy-efficient
co-computing is
formulated as two sub-problems: the slave problem corresponding to adaptive
offloading and the master one to data partitioning. Given a fixed offloaded
data size, the adaptive offloading aims at minimizing the energy consumption
for offloading by controlling the offloading rate under the deadline and buffer
constraints. By deriving the necessary and sufficient conditions for the
optimal solution, we characterize the structure of the optimal policies and
propose algorithms for computing the policies. Furthermore, we show that the
problem of optimal data partitioning for offloading and local computing at the
user is convex, admitting a simple solution using the sub-gradient method.
Lastly, the developed design approach for co-computing is extended to the
scenario of bursty data arrivals at the user accounting for data causality
constraints. Simulation results verify the effectiveness of the proposed
algorithms.
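Because the master problem of data partitioning is convex in a single variable (the number of offloaded bits), a projected sub-gradient descent suffices. The sketch below is purely illustrative: the quadratic energy functions and the finite-difference sub-gradients are stand-ins, not the paper's actual offloading model:

```python
def project(x, lo, hi):
    """Clip x to the feasible interval [lo, hi]."""
    return max(lo, min(hi, x))

def minimize_partition(e_off, e_loc, total, steps=5000, lr=1e-3, eps=1e-6):
    """Projected sub-gradient descent on E(l) = e_off(l) + e_loc(total - l),
    the total energy of offloading l bits and computing the rest locally.
    Sub-gradients are approximated by central finite differences."""
    l = total / 2.0
    for _ in range(steps):
        g = (e_off(l + eps) - e_off(l - eps)) / (2 * eps) \
            - (e_loc(total - l + eps) - e_loc(total - l - eps)) / (2 * eps)
        l = project(l - lr * g, 0.0, total)
    return l

# Toy convex energies (quadratic in the number of bits), not the paper's model
a, b, total_bits = 2.0, 1.0, 10.0
l_star = minimize_partition(lambda l: a * l * l, lambda x: b * x * x, total_bits)
# analytic optimum of a*l^2 + b*(total-l)^2 is l = b*total/(a+b)
```

For the toy quadratics the iterate converges to the closed-form optimum, which is how one would sanity-check any sub-gradient implementation of the master problem.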
|
Extending results in \cite{M} and \cite{MM} we characterize the classical
classes of weights that satisfy reverse H\"{o}lder inequalities in terms of
indices of suitable families of $K-$functionals of the weights. In particular,
we introduce a Samko type of index (cf. \cite{kara}) for families of functions,
that is based on quasi-monotonicity, and use it to provide an index
characterization of the $RH_{p}$ classes, as well as the limiting class
$RH=RH_{L\log L}=\bigcup\limits_{p>1}RH_{p}$ (cf. \cite{BMR}), which in the
abstract case involves extrapolation spaces. Reverse H\"{o}lder inequalities
associated to $L(p,q)$ norms, and non-doubling measures are also treated.
|
Classical linear discriminant analysis (LDA) is based on the squared
Frobenius norm and hence is sensitive to outliers and noise. To improve the
robustness of
LDA, in this paper, we introduce capped l_{2,1}-norm of a matrix, which employs
non-squared l_2-norm and "capped" operation, and further propose a novel capped
l_{2,1}-norm linear discriminant analysis, called CLDA. Due to the use of
capped l_{2,1}-norm, CLDA can effectively remove extreme outliers and suppress
the effect of noise data. In fact, CLDA can be also viewed as a weighted LDA.
CLDA is solved through a series of generalized eigenvalue problems with
guaranteed convergence. The experimental results on an artificial data set,
some UCI data sets and two image data sets demonstrate the effectiveness of
CLDA.
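A minimal sketch of the reweighting idea behind a capped-norm LDA (not the authors' exact algorithm): samples whose projected within-class residual exceeds a cap get zero weight, the remainder are weighted by the inverse residual norm, and each iteration solves a generalized eigenvalue problem:

```python
import numpy as np
from scipy.linalg import eigh

def capped_weighted_lda(X, y, n_dims, cap, n_iter=10):
    """Illustrative capped-l21-style weighted LDA: samples whose projected
    within-class residual norm exceeds `cap` receive zero weight (treated
    as extreme outliers); the rest are weighted by the inverse residual
    norm.  Each iteration solves S_b v = lambda S_w v and re-estimates
    the weights."""
    classes = np.unique(y)
    n, d = X.shape
    w = np.ones(n)                       # start from plain (unweighted) LDA
    V = None
    for _ in range(n_iter):
        mu = {c: np.average(X[y == c], axis=0, weights=w[y == c]) for c in classes}
        mu_all = np.average(X, axis=0, weights=w)
        Sw = sum(w[i] * np.outer(X[i] - mu[y[i]], X[i] - mu[y[i]]) for i in range(n))
        Sb = sum(w[y == c].sum() * np.outer(mu[c] - mu_all, mu[c] - mu_all)
                 for c in classes)
        # generalized eigenproblem; small ridge keeps S_w positive definite
        _, vec = eigh(Sb, Sw + 1e-6 * np.eye(d))
        V = vec[:, -n_dims:]             # eigenvectors of the largest eigenvalues
        r = np.array([np.linalg.norm(V.T @ (X[i] - mu[y[i]])) for i in range(n)])
        w = np.where(r > cap, 0.0, 1.0 / np.maximum(r, 1e-8))
    return V
```

Setting the cap to infinity recovers an ordinary inverse-residual weighted LDA, which is the "weighted LDA" viewpoint mentioned in the abstract.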
|
Peskin's Immersed Boundary (IB) model and method are among the most important
modeling tools and numerical methods. The IB method has been known to
be first order accurate in the velocity. However, almost no rigorous
theoretical proof can be found in the literature for Stokes equations with a
prescribed velocity boundary condition. In this paper, it has been shown that
the pressure of the Stokes equation has a convergence order $O(\sqrt{h} |\log
h| )$ in the $L^2$ norm while the velocity has an $O(h |\log h| )$ convergence
order in the infinity norm in two-space dimensions. The proofs are based on
splitting the singular source terms, discrete Green functions on finite
lattices with homogeneous and Neumann boundary conditions, a newly discovered
simple $L^2$ discrete delta function, and the convergence proof of the IB
method for elliptic interface problems \cite{li:mathcom}. The conclusion in
this paper applies to problems with different boundary conditions as long as
the problems are well-posed. The proof process also provides an efficient way to
decouple the system into three Helmholtz/Poisson equations without affecting
the order of convergence. A non-trivial numerical example is also provided to
confirm the theoretical analysis and the simple new discrete delta function.
|
The activity of the Sun alternates between a solar minimum and a solar
maximum, the former corresponding to a period of "quieter" status of the
heliosphere. During solar minimum, it is in principle more straightforward to
follow eruptive events and solar wind structures from their birth at the Sun
throughout their interplanetary journey. In this paper, we report analysis of
the origin, evolution, and heliospheric impact of a series of solar transient
events that took place during the second half of August 2018, i.e. in the midst
of the late declining phase of Solar Cycle 24. In particular, we focus on two
successive coronal mass ejections (CMEs) and a following high-speed stream
(HSS) on their way towards Earth and Mars. We find that the first CME impacted
both planets, whilst the second caused a strong magnetic storm at Earth and
went on to miss Mars, which nevertheless experienced space weather effects from
the stream interacting region (SIR) preceding the HSS. Analysis of
remote-sensing and in-situ data supported by heliospheric modelling suggests
that CME--HSS interaction resulted in the second CME rotating and deflecting in
interplanetary space, highlighting that accurately reproducing the ambient
solar wind is crucial even during "simpler" solar minimum periods. Lastly, we
discuss the upstream solar wind conditions and transient structures responsible
for driving space weather effects at Earth and Mars.
|
We give two characterizations of hyperquadrics: one as non-degenerate smooth
projective varieties swept out by large dimensional quadric subvarieties
passing through a point; the other as $LQEL$-manifolds with large secant
defects.
|
Color is a critical design factor for web pages, affecting important factors
such as viewer emotions and the overall trust and satisfaction of a website.
Effective coloring requires design knowledge and expertise, but if this process
could be automated through data-driven modeling, efficient exploration and
alternative workflows would be possible. However, this direction remains
underexplored due to the lack of a formalization of the web page colorization
problem, datasets, and evaluation protocols. In this work, we propose a new
dataset consisting of e-commerce mobile web pages in a tractable format, which
are created by simplifying the pages and extracting canonical color styles with
a common web browser. The web page colorization problem is then formalized as a
task of estimating plausible color styles for a given web page content with a
given hierarchical structure of the elements. We present several
Transformer-based methods that are adapted to this task by prepending
structural message passing to capture hierarchical relationships between
elements. Experimental results, including a quantitative evaluation designed
for this task, demonstrate the advantages of our methods over statistical and
image colorization methods. The code is available at
https://github.com/CyberAgentAILab/webcolor.
|
We have obtained mid-infrared optical absorption spectra of the S=1/2 quasi
one-dimensional CuO using polarized transmission measurement and interpreted
the spectra in terms of phonon assisted magnetic excitations. When the electric
field is parallel to the main antiferromagnetic direction, a Delta-shaped peak
is observed with its maximum at 0.23 eV, which is attributed to spinons along
Cu-O chains. At low temperatures, in the antiferromagnetic phase, another peak
appears at 0.16 eV, which is attributed to two-magnon absorption, but the
spinon peak remains. This behavior is interpreted as due to a dimensional crossover
where the low temperature three-dimensional magnetic phase keeps short range
characteristics of a one-dimensional magnet.
|
The signature of a surface bundle over a surface is known to be divisible by
4. It is also known that the signature vanishes if the fiber genus is less than
or equal to 2 or the base genus is less than or equal to 1. In this article, we
construct new smooth 4-manifolds with signature 4 which are surface bundles
over surfaces with small fiber and base genera. From these we derive improved
upper bounds for the minimal genus of surfaces representing the second homology
classes of a mapping class group.
|
We present the analysis of hot spot brightness in light curves of the
eclipsing dwarf nova IY Ursae Majoris during its normal outburst in March 2013
and in quiescence in April 2012 and in October 2015. Examination of four
reconstructed light curves of the hot spot eclipses showed directly that the
brightness of the hot spot changed significantly only during the outburst. The
brightness of the hot spot, before and after the outburst, was on the same
level. Thus, based on the behaviour of the hot spot, IY UMa during its normal
outburst follows the disk-instability model.
|
Accreting neutron stars in low-mass X-ray binaries show outflows -- and
sometimes jets -- in the general manner of accreting black holes. However, the
quantitative link between the accretion flow (traced by X-rays) and outflows
and/or jets (traced by radio emission) is much less well-understood for neutron
stars than for black holes, other than the general observation that neutron
stars are fainter in the radio at a given X-ray luminosity. We use data from
the deep MAVERIC radio continuum survey of Galactic globular clusters for a
systematic radio and X-ray study of six luminous (L_X > 10^34 erg/s) persistent
neutron star X-ray binaries in our survey, as well as two other transient
systems also captured by our data. We find that these neutron star X-ray
binaries show an even larger range in radio luminosity than previously
observed. In particular, in quiescence at L_X ~ 3x10^34 erg/s, the confirmed
neutron star binary GRS 1747--312 in Terzan 6 sits near the upper envelope of
the black hole radio/X-ray correlation, and the persistently accreting neutron
star systems AC 211 (in M15) and X1850--087 (in NGC 6712) show unusual radio
variability and luminous radio emission. We interpret AC 211 as an obscured "Z
source" that is accreting at close to the Eddington limit, while the properties
of X1850--087 are difficult to explain, and motivate future coordinated radio
and X-ray observations. Overall, our results show that neutron stars do not
follow a single relation between inflow and outflow, and confirm that their
accretion dynamics are more complex than for black holes.
|
In this paper a neural network heuristic dynamic programming (HDP) approach
is used for optimal control of the virtual-inertia-based control of
grid-connected three-phase inverters. It is shown that conventional virtual
inertia controllers are not suited to non-inductive grids. A neural network
based controller is proposed to adapt to any impedance angle. Applying an
adaptive dynamic programming controller instead of a supervised control method
enables the system to adjust itself to different conditions. The proposed HDP
consists of
two subnetworks, a critic network and an action network. These networks can be
trained during the same training cycle to decrease the training time. The
simulation results confirm that the proposed neural network HDP controller
performs better than the traditional direct fed voltage and reactive power
controllers in virtual inertia control schemes.
|
In this paper we prove a formula for fusion coefficients of affine Kac-Moody
algebras first conjectured by Walton [Wal2], and rediscovered in [Fe]. It is a
reformulation of the Frenkel-Zhu affine fusion rule theorem [FZ], written so
that it can be seen as a beautiful generalization of the classical
Parthasarathy-Ranga Rao-Varadarajan tensor product theorem [PRV].
|
The nonrelativistic bosonic string theory in a curved manifold is formulated
here using the gauging-of-symmetry approach (Galilean gauge theory). The
corresponding model in flat space has some global symmetries. By localizing
these symmetries as per Galilean gauge theory, the action for the
nonrelativistic string interacting with gravity is obtained. A canonical
analysis of the model has been performed, which demonstrates that the
transformations of the basic field variables under gauge transformations in
phase space are equivalent to the diffeomorphism parameters by an exact
mapping. Thus complete consistency of our results under both Lagrangian and
Hamiltonian procedures is established.
|
Human Activity Recognition (HAR) is a cornerstone of ubiquitous computing,
with promising applications in diverse fields such as health monitoring and
ambient assisted living. Despite significant advancements, sensor-based HAR
methods often operate under the assumption that training and testing data have
identical distributions. However, in many real-world scenarios, particularly in
sensor-based HAR, this assumption is invalidated by out-of-distribution
(o.o.d.) challenges, including differences from heterogeneous
sensors, change over time, and individual behavioural variability. This paper
centres on the latter, exploring the cross-user HAR problem where behavioural
variability across individuals results in differing data distributions. To
address this challenge, we introduce the Deep Temporal State Domain Adaptation
(DTSDA) model, an innovative approach tailored for time series domain
adaptation in cross-user HAR. Contrary to the common assumption of sample
independence in existing domain adaptation approaches, DTSDA recognizes and
harnesses the inherent temporal relations in the data. Therefore, we introduce
'Temporal State', a concept that defines the different sub-activities within
an activity, consistent across different users. We ensure these sub-activities
follow a logical time sequence through a 'Temporal Consistency' property and
propose the 'Pseudo Temporal State Labeling' method to identify the
user-invariant temporal relations. Moreover, the design principle of DTSDA
integrates adversarial learning for better domain adaptation. Comprehensive
evaluations on three HAR datasets demonstrate DTSDA's superior performance in
cross-user HAR applications by bridging individual behavioural variability
using temporal relations across sub-activities.
|
Multiple TSP ($\mathrm{mTSP}$) is an important variant of $\mathrm{TSP}$ in
which a set of $k$ salespersons together visit a set of $n$ cities. The
$\mathrm{mTSP}$ problem arises in many real-life settings such as vehicle
routing. Rothkopf introduced another variant of $\mathrm{TSP}$ called
many-visits TSP ($\mathrm{MV\mbox{-}TSP}$) where a request $r(v)\in
\mathbb{Z}_+$ is given for each city $v$ and a single salesperson needs to
visit each city $r(v)$ times and return to the starting point. A
combination of $\mathrm{mTSP}$ and $\mathrm{MV\mbox{-}TSP}$ called many-visits
multiple TSP $(\mathrm{MV\mbox{-}mTSP})$ was studied by B\'erczi, Mnich, and
Vincze where the authors give approximation algorithms for various variants of
$\mathrm{MV\mbox{-}mTSP}$.
In this work, we show a simple linear programming (LP) based reduction that
converts a $\mathrm{mTSP}$ LP-based algorithm to a LP-based algorithm for
$\mathrm{MV\mbox{-}mTSP}$ with the same approximation factor. We apply this
reduction to improve or match the current best approximation factors of several
variants of the $\mathrm{MV\mbox{-}mTSP}$. Our reduction shows that the
addition of visit requests $r(v)$ to $\mathrm{mTSP}$ does $\textit{not}$ make
the problem harder to approximate, even when $r(v)$ is exponential in the
number of vertices.
To apply our reduction, we either use existing LP-based algorithms for
$\mathrm{mTSP}$ variants or show that several existing combinatorial algorithms
for $\mathrm{mTSP}$ variants can be interpreted as LP-based algorithms. This
allows us to apply our reduction to these combinatorial algorithms as well,
achieving the improved guarantees.
|
We have recently proved the impossibility of imposing the condition of local
charge neutrality in a self-gravitating system of degenerate neutrons, protons
and electrons in $\beta$-equilibrium. The coupled system of the general
relativistic Thomas-Fermi equations and the Einstein-Maxwell equations have
been shown to supersede the traditional Tolman-Oppenheimer-Volkoff equations.
Here we present the Newtonian limit of the new equilibrium equations. We also
extend the treatment to the case of finite temperatures and finally we give the
explicit demonstration of the constancy of the Klein potentials in the case of
finite temperatures generalizing the condition of constancy of the general
relativistic Fermi energies in the case of zero temperatures.
|
Blind image quality assessment (BIQA) aims to automatically evaluate the
perceived quality of a single image, whose performance has been improved by
deep learning-based methods in recent years. However, the paucity of labeled
data somewhat restrains deep learning-based BIQA methods from unleashing their
full potential. In this paper, we propose to solve the problem by a pretext
task customized for BIQA in a self-supervised learning manner, which enables
learning representations from orders of magnitude more data. To constrain the
learning process, we propose a quality-aware contrastive loss based on a simple
assumption: the quality of patches from a distorted image should be similar,
but vary from patches from the same image with different degradations and
patches from different images. Further, we improve the existing degradation
process and form a degradation space with the size of roughly $2\times10^7$.
After pre-training on ImageNet using our method, models are more sensitive to
image quality and perform significantly better on downstream BIQA tasks.
Experimental results show that our method obtains remarkable improvements on
popular BIQA datasets.
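The quality-aware assumption can be sketched as an InfoNCE-style loss in which the two embeddings of patches from the same distorted image form the positive pair and patches of other images (or degradations) supply negatives. This is an illustrative NumPy sketch, not the paper's implementation:

```python
import numpy as np

def quality_contrastive_loss(z_a, z_b, tau=0.1):
    """InfoNCE-style sketch of the quality-aware assumption: z_a[i] and
    z_b[i] embed two patches of the SAME distorted image (the positive
    pair); patches of other images / degradations act as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / tau                       # scaled cosine similarities
    # row-wise log-softmax; the positive pairs sit on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

The loss is small when matched patch pairs are the most similar entries in each row and grows when some other image's patch is closer, which is the behaviour the pretext task rewards.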
|
Let M be a compact connected oriented Riemannian manifold. The purpose of
this paper is to investigate the long time behavior of a degenerate stochastic
differential equation on the state space $M\times \mathbb{R}^{n}$, which is
obtained via a natural change of variable from a self-repelling diffusion
taking the form $$dX_{t}= \sigma dB_{t}(X_t) -\int_{0}^{t}\nabla
V_{X_s}(X_{t})dsdt,\qquad X_{0}=x$$ where $\{B_t\}$ is a Brownian vector field
on $M$, $\sigma >0$ and $V_x(y) = V(x,y)$ is a diagonal Mercer kernel.
We prove that the induced semi-group enjoys the strong Feller property and
has a unique invariant probability $\mu$ given as the product of the normalized
Riemannian measure on M and a Gaussian measure on $\mathbb{R}^{n}$. We then
prove an exponential decay to this invariant probability in $L^{2}(\mu)$ and in
total variation.
|
We report the detection of a feature at 65um and a broad feature around 100um
in the far-infrared spectra of the diffuse emission from two active
star-forming regions, the Carina nebula and Sharpless 171. The features are
seen in the spectra over a wide area of the observed regions, indicating that
the carriers are fairly ubiquitous species in the interstellar medium. A
similar 65um feature has been detected in evolved stars and attributed to
diopside, a Ca-bearing crystalline silicate. The present observations indicate
the first detection of a crystalline silicate in the interstellar medium if
this identification holds true also for the interstellar feature. A similar
broad feature around 90um reported in the spectra of evolved stars has been
attributed to calcite, a Ca-bearing carbonate mineral. The interstellar feature
seems to be shifted to longer wavelengths and have a broader width although the
precise estimate of the feature profile is difficult. As a carrier for the
interstellar 100um feature, we investigate the possibility that the feature
originates from carbon onions, grains consisting of curved graphitic shells.
Because of the curved graphitic sheet structure, the optical properties in the
direction parallel to the graphitic plane interact with those in the vertical
direction in carbon onion grains. This effect enhances the interband transition
feature in the direction parallel to the graphitic plane in carbon onions,
which is suppressed in graphite particles. Simple calculations suggest that
carbon onion grains are a likely candidate for the observed 100um feature
carrier, but the appearance of the feature is sensitive to the assumed optical
properties.
|
Training of the neural autoregressive density estimator (NADE) can be viewed
as doing one step of probabilistic inference on missing values in data. We
propose a new model that extends this inference scheme to multiple steps,
arguing that it is easier to learn to improve a reconstruction in $k$ steps
rather than to learn to reconstruct in a single inference step. The proposed
model is an unsupervised building block for deep learning that combines the
desirable properties of NADE and multi-predictive training: (1) Its test
likelihood can be computed analytically, (2) it is easy to generate independent
samples from it, and (3) it uses an inference engine that is a superset of
variational inference for Boltzmann machines. The proposed NADE-k is
competitive with the state-of-the-art in density estimation on the two datasets
tested.
|
A fundamental challenge within the metric theory of continued fractions
involves quantifying sets of real numbers, when represented using continued
fractions, exhibit partial quotients that grow at specific rates. For any
positive function $\Phi$, the Wang-Wu theorem (2008) comprehensively describes the
Hausdorff dimension of the set \begin{equation*} \EE_1(\Phi):=\left\{x\in [0,
1): a_n(x)\geq \Phi(n) \ {\rm for \ infinitely \ many} \ n\in \N\right\}.
\end{equation*} Various generalisations of this set exist, such as substituting
one partial quotient with the product of consecutive partial quotients in the
aforementioned set which has connections with the improvements to Dirichlet's
theorem, and many other sets of similar nature. Establishing the upper bound of
the Hausdorff dimension of such sets is significantly easier than proving the
lower bound. In this paper, we present a unified approach to get an optimal
lower bound for many known setups, including results by Wang-Wu [Adv. Math.,
2008], Huang-Wu-Xu [Israel J. Math. 2020], Bakhtawar-Bos-Hussain [Nonlinearity
2020], and several others, and also provide a new theorem derived as an
application of our main result. We do this by finding an exact Hausdorff
dimension of the set $$S_m(A_0,\ldots,A_{m-1}) \defeq \left\{ x\in[0,1): \, c_i
A_i^n \le a_{n+i}(x) < 2c_i A_i^n,0 \le i \le m-1 \ \text{for infinitely many }
n\in\N \right\},$$ where each partial quotient grows exponentially and the base
is given by a parameter $A_i>1$. For proper choices of $A_i$'s, this set serves
as a subset of the sets under consideration, providing an optimal lower bound
on the Hausdorff dimension in all of them. The crux of the proof lies in the
introduction of multiple probability measures consistently distributed over the Cantor-type
subset of $S_m(A_0,\ldots,A_{m-1})$.
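As a concrete illustration (not from the paper), partial quotients $a_n(x)$ are easy to compute exactly, and one can rebuild a number whose quotients grow exponentially, as in the condition $c_i A_i^n \le a_{n+i}(x) < 2c_i A_i^n$ defining $S_m$. A minimal sketch:

```python
from fractions import Fraction

def partial_quotients(x, n):
    """First n continued-fraction partial quotients a_1, a_2, ... of x in (0, 1), exactly."""
    a = []
    for _ in range(n):
        if x == 0:
            break
        x = 1 / x
        q = x.numerator // x.denominator  # floor of 1/x
        a.append(q)
        x -= q
    return a

def from_quotients(quotients):
    """Value of the finite continued fraction [0; a_1, ..., a_m] by backward recurrence."""
    v = Fraction(0)
    for q in reversed(quotients):
        v = 1 / (q + v)
    return v

growing = [2 ** n for n in range(1, 9)]   # a_n = 2^n: exponential growth with base A = 2
x = from_quotients(growing)               # a rational point with these quotients
```

Expanding `x` recovers exactly the prescribed exponentially growing quotients, giving a point of the kind the Cantor-type construction packs into $S_m$.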
|
The idea, following Michel and O'Raifeartaigh, of assigning meaning to the
(gauge) \underline{group} and not only the Lie algebra for a Yang-Mills theory
is reviewed. Hints from the group and the fermion spectrum of the Standard
Model are used to suggest the putting forward of our AGUT-model, which gives a
very good fit to the orders of magnitudes of the quark and lepton masses and
the mixing angles including the CP-breaking phase. But for neutrino
oscillations modifications of the model are needed. Baryogenesis is not in
conflict with the model.
|
In this article a family of second order ODEs associated with inertial gradient
descent is studied. These ODEs are widely used to build trajectories converging
to a minimizer $x^*$ of a function $F$, possibly convex. This family includes
the continuous version of the Nesterov inertial scheme and the continuous heavy
ball method. Several damping parameters, not necessarily vanishing, and a
perturbation term $g$ are thus considered. The damping parameter is linked to
the inertia of the associated inertial scheme and the perturbation term $g$ is
linked to the error that can be made on the gradient of the function $F$. This
article presents new asymptotic bounds on $F(x(t))-F(x^*)$ where $x$ is a
solution of the ODE, when $F$ is convex and satisfies local geometrical
properties such as {\L}ojasiewicz properties and under integrability conditions
on $g$. Even if geometrical properties and perturbations were already studied
for most ODEs of these families, this is the first time they are studied jointly.
All these results give an insight on the behavior of these inertial and
perturbed algorithms if $F$ satisfies some {\L}ojasiewicz properties especially
in the setting of stochastic algorithms.
|
Recent research on the response of stock prices to trading activity revealed
long-lasting effects, even across stocks of different companies. These results
imply non-Markovian effects in price formation and, when many stocks are traded
at the same time, have consequences for trading costs and price correlations.
How the price response is measured depends on the data set and research focus.
However, it is important to clarify how the details of the price response
definition modify the results. Here, we evaluate different price response implementations
for the Trades and Quotes (TAQ) data set from the NASDAQ stock market and find
that the results are qualitatively the same for two different definitions of
time scale, but the response can vary by up to a factor of two. Further, we
show the key importance of the order between trade signs and returns,
displaying the changes in the signal strength. Moreover, we confirm the
dominating contribution of immediate price response directly after a trade, as
we find that delayed responses are suppressed. Finally, we test the impact of
the spread in the price response, detecting that large spreads have a stronger
impact.
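A minimal sketch of the self-response function on synthetic data (illustrative only; the paper works with real TAQ data): with $\varepsilon_t$ the trade signs and $m_t$ the midpoint just before trade $t$, the response is $R(\tau)=\langle \varepsilon_t\,(m_{t+\tau}-m_t)\rangle_t$, and reversing the order of trade signs and returns destroys the signal:

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam = 100_000, 0.01
eps = rng.choice([-1, 1], size=T)                  # trade signs
inc = lam * eps + 0.02 * rng.standard_normal(T)    # price change caused by each trade
m = np.concatenate(([0.0], np.cumsum(inc)))        # m[t]: midpoint just before trade t

def response(tau):
    """R(tau) = < eps_t * (m_{t+tau} - m_t) >_t  (sign first, return after)."""
    return np.mean(eps[:T - tau + 1] * (m[tau:] - m[:T - tau + 1]))

def response_wrong_order(tau):
    """Same formula but pairing each sign with the return that *precedes* it."""
    return np.mean(eps[tau:] * (m[tau:T] - m[:T - tau]))
```

With uncorrelated signs, `response(tau)` recovers the per-trade impact `lam` at every lag, while `response_wrong_order(tau)` fluctuates around zero, illustrating why the ordering convention matters for the measured signal strength.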
|
At very high energies the Earth's magnetic field, although in many other
connections considered rather weak, acts as a strong field on photons and
leptons. This paper discusses the intimate connection between this effect and
the corresponding `strong field effects' observed at accessible accelerator
energies in aligned single crystals. As such, these effects constitute an
experimental verification of the theoretical basis used for simulations of the
development of electromagnetic showers in magnetic fields, in particular the
geomagnetic field. A short discussion of more general aspects of the shower
development in the fields present at different distance scales is included.
|
We have performed terahertz time-domain magnetospectroscopy by combining a
rapid scanning terahertz time-domain spectrometer based on the electronically
coupled optical sampling method with a table-top mini-coil pulsed magnet
capable of producing magnetic fields up to 30 T. We demonstrate the capability
of this system by measuring coherent cyclotron resonance oscillations in a
high-mobility two-dimensional electron gas in GaAs and interference-induced
terahertz transmittance modifications in a magnetoplasma in lightly doped
n-InSb.
|
We calculate the homomorphism of the cohomology induced by the Krichever map
of moduli spaces of curves into infinite-dimensional Grassmannian. This
calculation can be used to compute the homology classes of cycles on moduli
spaces of curves that are defined in terms of Weierstrass points.
|
In this article, we first provide a taxonomy of dynamic spectrum access. We
then focus on opportunistic spectrum access, the overlay approach under the
hierarchical access model of dynamic spectrum access. We aim to provide an
overview of challenges and recent developments in both technological and
regulatory aspects of opportunistic spectrum access.
|
There exists a unique class of local Higher Spin Gravities with propagating
massless fields in $4d$ - Chiral Higher Spin Gravity. Originally, it was
formulated in the light-cone gauge. We construct a covariant form of this
theory as a Free Differential Algebra up to NLO, i.e. at the level of equations
of motion. It also contains the recently discovered covariant forms of the
higher spin extensions of SDYM and SDGR, as well as SDYM and SDGR themselves.
From the mathematical viewpoint the result is equivalent to taking the minimal
model (in the sense of $L_\infty$-algebras) of the jet-space extension of the
BV-BRST formulation of Chiral Higher Spin Gravity, thereby also containing
information about the (presymplectic AKSZ) action, counterterms, anomalies, etc.
|
The D0 Collaboration has measured the inclusive jet cross section in
proton-antiproton collisions at sqrt(s) = 630 GeV. The results for
pseudorapidities -0.5 to 0.5 are combined with our previous results at sqrt(s) =
1800 GeV to form a ratio of cross sections with smaller uncertainties than
either individual measurement. Next-to-leading-order QCD predictions show
excellent agreement with the measurement at 630 GeV; agreement is also
satisfactory for the ratio. Specifically, despite a 10% to 15% difference in
the absolute normalization, the dependence of the ratio on jet transverse
momentum is very similar for data and theory.
|
We investigate the implications of different forms of multi-group
connectivity. Four multi-group connectivity modalities are considered:
co-memberships, edge bundles, bridges, and liaison hierarchies. We propose
generative models to generate these four modalities. Our models are variants of
planted partition or stochastic block models conditioned under certain
topological constraints. We report findings of a comparative analysis in which
we evaluate these structures, controlling for their edge densities and sizes,
on mean rates of information propagation, convergence times to consensus, and
steady state deviations from the consensus value in the presence of noise as
network size increases.
|
In this work we study the non-parametric reconstruction of spatio-temporal
dynamical Gaussian processes (GPs) via GP regression from sparse and noisy
data. GPs have been mainly applied to spatial regression where they represent
one of the most powerful estimation approaches also thanks to their universal
representation properties. Their extension to dynamical processes has instead
been elusive so far, since classical implementations lead to unscalable
algorithms. We then propose a novel procedure to address this problem by
coupling GP regression and Kalman filtering. In particular, assuming space/time
separability of the covariance (kernel) of the process and rational time
spectrum, we build a finite-dimensional discrete-time state-space process
representation amenable to Kalman filtering. With sampling over a finite set of
fixed spatial locations, our major finding is that the Kalman filter state at
instant $t_k$ represents a sufficient statistic to compute the minimum variance
estimate of the process at any $t \geq t_k$ over the entire spatial domain.
This result can be interpreted as a novel Kalman representer theorem for
dynamical GPs. We then extend the study to situations where the set of spatial
input locations can vary over time. The proposed algorithms are finally tested
on both synthetic and real field data, also providing comparisons with standard
GP and truncated GP regression techniques.
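The key idea can be sketched in one temporal dimension with an Ornstein-Uhlenbeck (Matérn-1/2) kernel, whose exact discrete-time state-space form makes GP regression amenable to Kalman filtering; the filter state after the last sample then reproduces the batch GP posterior mean there. (Illustrative sketch only; the paper treats general spatio-temporal processes.)

```python
import numpy as np

rng = np.random.default_rng(2)
sig2, ell, r = 1.0, 2.0, 0.1 ** 2           # kernel variance, length scale, noise variance
t = np.sort(rng.uniform(0.0, 10.0, 40))     # irregular sample times
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)

# Batch GP posterior mean at the last sample time (OU / Matern-1/2 kernel).
K = sig2 * np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
gp_mean_last = K[-1] @ np.linalg.solve(K + r * np.eye(t.size), y)

# Kalman filter on the exact discrete-time state-space form of the same kernel.
m, P = 0.0, sig2                             # stationary prior
for k in range(t.size):
    if k > 0:                                # predict step over the gap t[k] - t[k-1]
        phi = np.exp(-(t[k] - t[k - 1]) / ell)
        m, P = phi * m, phi ** 2 * P + sig2 * (1.0 - phi ** 2)
    gain = P / (P + r)                       # update step with measurement y[k]
    m, P = m + gain * (y[k] - m), (1.0 - gain) * P
```

The filtered mean `m` agrees with `gp_mean_last` to machine precision: the $O(N)$ recursion carries a sufficient statistic for the $O(N^3)$ batch estimate, which is precisely what makes the coupling of GP regression and Kalman filtering scalable.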
|
We answer the question: who first proved that $C/d$ is a constant? We argue
that Archimedes proved that the ratio of the circumference of a circle to its
diameter is a constant independent of the circle and that the circumference
constant equals the area constant ($C/d=A/r^{2}$). He stated neither result
explicitly, but both are implied by his work. His proof required the addition
of two axioms beyond those in Euclid's \emph{Elements}; this was the first step
toward a rigorous theory of arc length. We also discuss how Archimedes's work
coexisted with the 2000-year belief -- championed by scholars from Aristotle to
Descartes -- that it is impossible to find the ratio of a curved line to a
straight line.
|
We explore the statistical behavior of the discrete nonlinear Schroedinger
equation. We find a parameter region where the system evolves towards a state
characterized by a finite density of breathers and a negative temperature. Such
a state is metastable but the convergence to equilibrium occurs on astronomical
time scales and becomes increasingly slow as a result of a coarsening
process. Stationary negative-temperature states can be experimentally
generated via boundary dissipation or from free expansions of wave packets
initially at positive temperature equilibrium.
|
A new concept of (asymptotic) qualitative robustness for plug-in estimators
based on identically distributed possibly dependent observations is introduced,
and it is shown that Hampel's theorem for general metrics $d$ still holds.
Since Hampel's theorem assumes the UGC property w.r.t. $d$, that is,
convergence in probability of the empirical probability measure to the true
marginal distribution w.r.t. $d$ uniformly in the class of all admissible laws
on the sample path space, this property is shown for a large class of strongly
mixing laws for three different metrics $d$. For real-valued observations, the
UGC property is established for both the Kolmogorov $\phi$-metric and the
L\'{e}vy $\psi$-metric, and for observations in a general locally compact and
second countable Hausdorff space the UGC property is established for a certain
metric generating the $\psi$-weak topology. The key is a new uniform weak LLN
for strongly mixing random variables. The latter is of independent interest and
relies on Rio's maximal inequality.
|
The relativistic analysis of stochastic kinematics is developed in order to
determine the transformation of the effective diffusivity tensor in inertial
frames. Poisson-Kac stochastic processes are initially considered. For
one-dimensional spatial models, the effective diffusion coefficient $D$
measured in a frame $\Sigma$ moving with velocity $w$ with respect to the rest
frame of the stochastic process can be expressed as $D= D_0 \, \gamma^{-3}(w)$.
Subsequently, higher dimensional processes are analyzed, and it is shown that
the diffusivity tensor in a moving frame becomes non-isotropic with
$D_\parallel = D_0 \, \gamma^{-3}(w)$, and $D_\perp = D_0 \, \gamma^{-1}(w)$,
where $D_\parallel$ and $D_\perp$ are the diffusivities parallel and orthogonal
to the velocity of the moving frame. The analysis of discrete Space-Time
Diffusion processes permits us to obtain a general transformation theory of the
tensor diffusivity, confirmed by several different simulation experiments.
Several implications of the theory are also addressed and discussed.
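The stated transformation is straightforward to evaluate; a minimal sketch (units with $c=1$ by default):

```python
import math

def gamma(w, c=1.0):
    """Lorentz factor of a frame moving with velocity w."""
    return 1.0 / math.sqrt(1.0 - (w / c) ** 2)

def moving_frame_diffusivities(D0, w, c=1.0):
    """(D_parallel, D_perp) = (D0 * gamma^-3, D0 * gamma^-1) in a frame moving at w."""
    g = gamma(w, c)
    return D0 * g ** -3, D0 * g ** -1

# Example: w = 0.8 c gives gamma = 5/3, so D_par = 0.216 D0 and D_perp = 0.6 D0.
Dpar, Dperp = moving_frame_diffusivities(1.0, 0.8)
```

As expected from the formulas, the tensor is isotropic only in the rest frame ($w=0$), and for $0 < w < c$ the contraction is stronger along the direction of motion, $D_\parallel < D_\perp < D_0$.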
|
We propose pix2pix3D, a 3D-aware conditional generative model for
controllable photorealistic image synthesis. Given a 2D label map, such as a
segmentation or edge map, our model learns to synthesize a corresponding image
from different viewpoints. To enable explicit 3D user control, we extend
conditional generative models with neural radiance fields. Given
widely-available monocular images and label map pairs, our model learns to
assign a label to every 3D point in addition to color and density, which
enables it to render the image and pixel-aligned label map simultaneously.
Finally, we build an interactive system that allows users to edit the label map
from any viewpoint and generate outputs accordingly.
|
Deep networks have brought significant advances in robot perception, improving
the capabilities of robots in several visual tasks, ranging from
object detection and recognition to pose estimation, semantic scene
segmentation and many others. Still, most approaches typically address visual
tasks in isolation, resulting in overspecialized models which achieve strong
performances in specific applications but work poorly in other (often related)
tasks. This is clearly sub-optimal for a robot which is often required to
perform simultaneously multiple visual recognition tasks in order to properly
act and interact with the environment. This problem is exacerbated by the
limited computational and memory resources typically available onboard to a
robotic platform. The problem of learning flexible models which can handle
multiple tasks in a lightweight manner has recently gained attention in the
computer vision community and benchmarks supporting this research have been
proposed. In this work we study this problem in the robot vision context,
proposing a new benchmark, the RGB-D Triathlon, and evaluating state-of-the-art
algorithms in this novel challenging scenario. We also define a new evaluation
protocol, better suited to the robot vision setting. Results shed light on the
strengths and weaknesses of existing approaches and on open issues, suggesting
directions for future research.
|
For the large-scale linear discrete ill-posed problem $\min\|Ax-b\|$ or
$Ax=b$ with $b$ contaminated by Gaussian white noise, the Lanczos
bidiagonalization based Krylov solver LSQR and its mathematically equivalent
CGLS, the Conjugate Gradient (CG) method implicitly applied to $A^TAx=A^Tb$,
are most commonly used, and CGME, the CG method applied to $\min\|AA^Ty-b\|$ or
$AA^Ty=b$ with $x=A^Ty$, and LSMR, which is equivalent to the minimal residual
(MINRES) method applied to $A^TAx=A^Tb$, have also been choices. These methods
exhibit a typical semi-convergence feature, and the iteration number $k$ plays
the role of the regularization parameter. However, there has been no definitive
answer to the long-standing fundamental question: {\em Can LSQR and CGLS find
2-norm filtering best possible regularized solutions}? The same question
applies to CGME and LSMR. At iteration $k$, LSQR, CGME and LSMR compute {\em
different} iterates from the {\em same} $k$ dimensional Krylov subspace. A
first and fundamental step towards answering the above question is to {\em
accurately} estimate the accuracy of the underlying $k$ dimensional Krylov
subspace approximating the $k$ dimensional dominant right singular subspace of
$A$. Assuming that the singular values of $A$ are simple, we present a general
$\sin\Theta$ theorem for the 2-norm distances between these two subspaces and
derive accurate estimates on them for severely, moderately and mildly ill-posed
problems. We also establish some relationships between the smallest Ritz values
and these distances. Numerical experiments justify the sharpness of our
results.
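The semi-convergence phenomenon itself is easy to reproduce. The sketch below (illustrative; a Gaussian-kernel test problem of our own choosing, not one from the paper) runs CGLS on a severely ill-posed system with noisy data and records the error history, which first decreases and then grows as noise components are amplified:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
s = (np.arange(n) + 0.5) / n
# Severely ill-posed test problem: discretized Gaussian convolution kernel.
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.05 ** 2)) / n
x_true = np.exp(-(s - 0.5) ** 2 / 0.02)
b = A @ x_true
b = b + 0.01 * np.linalg.norm(b) / np.sqrt(n) * rng.standard_normal(n)  # white noise

def cgls_errors(A, b, x_true, iters):
    """CGLS (CG on the normal equations A^T A x = A^T b); returns ||x_k - x_true||."""
    x = np.zeros(A.shape[1])
    r, s_ = b.copy(), A.T @ b
    p, gamma = s_.copy(), s_ @ s_
    errs = []
    for _ in range(iters):
        q = A @ p
        qq = q @ q
        if qq == 0.0:          # exact breakdown (not expected here)
            break
        alpha = gamma / qq
        x = x + alpha * p
        r = r - alpha * q
        s_ = A.T @ r
        gamma_new = s_ @ s_
        p = s_ + (gamma_new / gamma) * p
        gamma = gamma_new
        errs.append(np.linalg.norm(x - x_true))
    return errs

errs = cgls_errors(A, b, x_true, 40)
best = int(np.argmin(errs))    # the "regularizing" stopping index
```

The error curve dips well below its starting value at some intermediate iteration and then climbs again, so the iteration count acts as the regularization parameter, exactly the semi-convergence behavior described above.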
|
We argue that the description of meson-nucleon dynamics based on the
boson-exchange approach is compatible with the description of the nucleon as a
soliton in the nonrelativistic limit. Our arguments are based on an analysis of
the meson-soliton form factor and the exact meson-soliton and soliton-soliton
scattering amplitudes in the Sine-Gordon model.
|
Incorporation of high-rate internet of things (IoT) service into a massive
MIMO framework is investigated. It is revealed that massive MIMO possesses the
inherent potential to offer such service provided it knows the channels for all
devices. Our proposed method is to jointly estimate and track the channels of
all devices irrespective of their current activity. Using the dynamical model
for devices' channels evolution over time, optimal and sub-optimal trackers are
developed for the coordinated scenario. Furthermore, we introduce a new paradigm
where the BS need not know the pilot access patterns of devices in advance
which we refer to as uncoordinated setup. After motivating this scenario, we
derive the optimal tracker which is intractable. Then, target tracking
approaches are applied to address uncertainties in the measurements and derive
sub-optimal trackers. Our proposed approaches explicitly address the channel
aging problem and will not require downlink paging and uplink access request
control channels which can become bottlenecks in crowded scenarios. The
fundamental minimum mean square error (MMSE) gap between optimal coordinated
and uncoordinated trackers which is defined as price of anarchy is evaluated
and upper-bounded. Stability of optimal trackers is also investigated. Finally,
the performance of the various proposed trackers is numerically compared.
|
Under the basic assumption that the observed turbulent motions in molecular
clouds are Alfvenic waves or turbulence, we emphasize that the Doppler
broadening of molecular line profiles directly measures the velocity amplitudes
of the waves instead of the Alfven velocity. Assuming an equipartition between
the kinetic energy and the Alfvenic magnetic energy, we further propose the
hypothesis that observed standard scaling laws in molecular clouds imply a
roughly scale-independent fluctuating magnetic field, which might be understood
as a result of strong wave-wave interactions and subsequent energy cascade. We
predict that $\sigma_{v}\propto \rho^{-0.5}$ is a more basic and robust
relation in that it may approximately hold in any regions where the spatial
energy density distribution is primarily determined by wave-wave interactions,
including gravitationally unbound regions. We also discuss the fact that a
scale-independent $\sigma_{B}^{2}$ appears to contradict existing 1-D and 2-D
computer simulations of MHD turbulence in molecular clouds.
|
In their landmark paper Cover and El Gamal proposed different coding
strategies for the relay channel with a single relay supporting a communication
pair. These strategies are the decode-and-forward and compress-and-forward
approach, as well as a general lower bound on the capacity of a relay network
which relies on the mixed application of the previous two strategies. So far,
only parts of their work - the decode-and-forward and the compress-and-forward
strategy - have been applied to networks with multiple relays.
This paper derives a mixed strategy for multiple relay networks using a
combined approach of partial decode-and-forward with N+1 levels and the ideas
of successive refinement with different side information at the receivers.
After describing the protocol structure, we present the achievable rates for
the discrete memoryless relay channel as well as Gaussian multiple relay
networks. Using these results we compare the mixed strategy with some special
cases, e.g., multilevel decode-and-forward, distributed compress-and-forward
and a mixed approach where one relay node operates in decode-and-forward and
the other in compress-and-forward mode.
|
Quasi-Exactly Solvable Schr\"odinger Equations occupy an intermediate place
between exactly-solvable (e.g. the harmonic oscillator and Coulomb problems
etc) and non-solvable ones. Their major property is an explicit knowledge of
several eigenstates while the remaining ones are unknown. Many of these
problems are of the anharmonic oscillator type with a special type of
anharmonicity. The Hamiltonians of quasi-exactly-solvable problems are
characterized by the existence of a hidden algebraic structure but do not have
any hidden symmetry properties. In particular, all known one-dimensional
(quasi)-exactly-solvable problems possess a hidden $\mathfrak{sl}(2,\bf{R})-$
Lie algebra. They are equivalent to the $\mathfrak{sl}(2,\bf{R})$ Euler-Arnold
quantum top in a constant magnetic field. Quasi-Exactly Solvable problems are
highly non-trivial; they shed light on delicate analytic properties of the
Schr\"odinger Equations in the coupling constant. The Lie-algebraic formalism
allows us to make a link between the Schr\"odinger Equations and
finite-difference equations on uniform and/or exponential lattices, it implies
that the spectrum is preserved. This link takes the form of a quantum canonical
transformation. The corresponding isospectral problems for
finite-difference operators are described. The underlying Fock space formalism
giving rise to this correspondence is uncovered. For a quite general class of
perturbations of unperturbed problems with the hidden Lie algebra property we
can construct an algebraic perturbation theory, where the wavefunction
corrections are of polynomial nature, thus, can be found by algebraic means.
|
We construct a quasi-canonical lifting of a $K3$ surface of finite height
over a finite field of characteristic $p\geq3$. Such results are previously
obtained by Nygaard-Ogus when $p\geq5$. For this purpose, we use the
display-theoretic deformation theory developed by Langer, Zink, and Lau. We
study the display structure of the crystalline cohomology of deformations of a
$K3$ surface of finite height in terms of the Dieudonn\'e display of the
enlarged formal Brauer group.
|
The ultra-relativistic outflows powering gamma-ray bursts (GRBs) acquire
angular structure through their interaction with external material. They are
often characterized by a compact, nearly uniform narrow core (with half-opening
angle $\theta_{c,\{\epsilon,\Gamma\}}$) surrounded by material with energy per
unit solid angle ($\epsilon=\epsilon_c\Theta_{\epsilon}^{-a}$, where
$\Theta_{\{\epsilon,\Gamma\}}=[1+\theta^2/\theta_{c,\{\epsilon,\Gamma\}}^2]^{1/2}$)
and initial specific kinetic energy
($\Gamma_0-1=[\Gamma_c-1]\Theta_\Gamma^{-b}$) declining as power laws.
Multi-wavelength afterglow lightcurves of off-axis jets (with viewing angle
$\theta_{\rm obs} > \theta_c$) offer robust ways to constrain $a$, $b$ and the
external density radial profile ($\rho\propto R^{-k}$), even while other burst
parameters may remain highly degenerate. We extend our previous work on such
afterglows to include more realistic angular structure profiles derived from
three-dimensional hydrodynamic simulations of both long and short GRBs
(addressing also jets with shallow angular energy profiles, whose emission
exhibits unique evolution). We present afterglow lightcurves based on our
parameterized power-law jet angular profiles for different viewing angles
$\theta_{\rm obs}$ and $k=\{0,1,2\}$. We identify a unique evolutionary
power-law phase of the characteristic synchrotron frequencies ($\nu_m$ and
$\nu_c$) that manifests when the lightcurve is dominated by emission sensitive
to the angular structure of the outflow. We calculate the criterion for
obtaining single or double peaked light-curves in the general case when
$\theta_{c,\Gamma}\neq\theta_{c,\epsilon}$. We emphasize how the shape of the
lightcurve and the temporal evolution of $\nu_m$ and $\nu_c$ can be used to
constrain the outflow structure and potentially distinguish between magnetic
and hydrodynamic jets.
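The parameterized angular structure used above is straightforward to evaluate; a minimal sketch (symbols as in the abstract; the parameter values are arbitrary illustrations):

```python
import math

def Theta(theta, theta_c):
    """Theta = [1 + theta^2 / theta_c^2]^(1/2)."""
    return math.sqrt(1.0 + (theta / theta_c) ** 2)

def eps(theta, eps_c=1.0, theta_c=0.05, a=2.0):
    """Energy per unit solid angle: eps = eps_c * Theta^(-a)."""
    return eps_c * Theta(theta, theta_c) ** -a

def Gamma0(theta, Gamma_c=300.0, theta_c=0.05, b=1.5):
    """Initial Lorentz factor: Gamma_0 - 1 = (Gamma_c - 1) * Theta^(-b)."""
    return 1.0 + (Gamma_c - 1.0) * Theta(theta, theta_c) ** -b
```

Inside the core ($\theta \ll \theta_c$) both profiles are nearly uniform, while in the wings ($\theta \gg \theta_c$) they decline as the power laws $\theta^{-a}$ and $\theta^{-b}$, the structure an off-axis observer's lightcurve constrains.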
|
Some important features of graphene physics can be reproduced by loading
ultracold fermionic atoms in a two-dimensional optical lattice with honeycomb
symmetry, and we address here its experimental feasibility. We analyze in great
detail the optical lattice generated by the coherent superposition of three
coplanar running laser waves with respective angles $2\pi/3$. The corresponding
band structure displays Dirac cones located at the corners of the Brillouin
zone and close to half-filling this system is well described by massless Dirac
fermions. We characterize their properties by accurately deriving the
nearest-neighbor hopping parameter $t_0$ as a function of the optical lattice
parameters. Our semi-classical instanton method proves in excellent agreement
with an exact numerical diagonalization of the full Hamilton operator in the
tight-binding regime. We conclude that the temperature range needed to access
the Dirac fermions regime is within experimental reach. We also analyze
imperfections in the laser configuration as they lead to optical lattice
distortions which affect the Dirac fermions. We show that the Dirac cones do
survive up to some critical intensity or angle mismatches which are easily
controlled in actual experiments. In the tight-binding regime, we predict, and
numerically confirm, that these critical mismatches are inversely proportional
to the square-root of the optical potential strength. We also briefly discuss
the interesting possibility of fine-tuning the mass of the Dirac fermions by
controlling the laser phase in an optical lattice generated by the incoherent
superposition of three coplanar independent standing waves with respective
angles $2\pi/3$.
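A minimal tight-binding sketch (a generic graphene-style nearest-neighbour model, not the paper's full band-structure calculation) shows the Dirac cones: the dispersion $E_\pm(\mathbf{k}) = \pm t_0\,|\sum_j e^{i\mathbf{k}\cdot\boldsymbol\delta_j}|$ vanishes at a Brillouin-zone corner $K$ and is linear around it with slope $3 t_0 a / 2$:

```python
import numpy as np

a = 1.0  # nearest-neighbour distance
deltas = np.array([[0.0, a],
                   [a * np.sqrt(3) / 2, -a / 2],
                   [-a * np.sqrt(3) / 2, -a / 2]])   # the three nn vectors

def band_energy(k, t0=1.0):
    """Upper tight-binding band |E(k)| = t0 * |sum_j exp(i k . delta_j)|."""
    f = np.exp(1j * (deltas @ k)).sum()
    return t0 * abs(f)

# A corner of the hexagonal Brillouin zone, where the Dirac cone sits.
K = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])
```

At the zone center the band reaches its maximum $3t_0$, while at $K$ it touches zero, and expanding around $K$ gives the massless-Dirac slope $v \propto t_0$, which is why deriving $t_0$ from the lattice parameters fixes the Dirac-fermion physics.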
|
Visual odometry is important for many applications such as autonomous
vehicles and robot navigation. It is challenging to conduct visual odometry in
textureless scenes or environments with sudden illumination changes, where
popular feature-based methods or direct methods do not work well. To address
this challenge, some edge-based methods have been proposed, but they usually
struggle to balance efficiency and accuracy. In this work, we propose a novel
visual odometry approach called \textit{EdgeVO}, which is accurate, efficient,
and robust. By efficiently selecting a small set of edges with certain
strategies, we significantly improve the computational efficiency without
sacrificing accuracy. Compared to existing edge-based methods, ours
significantly reduces the computational complexity while maintaining similar or
even better accuracy. This is because our method removes useless or noisy
edges. Experimental results on the TUM datasets
indicate that EdgeVO significantly outperforms other methods in terms of
efficiency, accuracy and robustness.
|
Casimir energy changes are investigated for geometries obtained by small but
arbitrary deformations of a given geometry for which the vacuum energy is
already known for the massless scalar field. As a specific case, deformation of
a spherical shell is studied. From the deformation of the sphere we show that
the Casimir energy is a decreasing function of the surface to volume ratio. The
decreasing rate is higher for less smooth deformations.
|
We study fermionic pairing in an ultracold two-component gas of $^6$Li atoms
by observing an energy gap in the radio-frequency excitation spectra. With
control of the two-body interactions via a Feshbach resonance we demonstrate
the dependence of the pairing gap on coupling strength, temperature, and Fermi
energy. The appearance of an energy gap with moderate evaporative cooling
suggests that our full evaporation brings the strongly interacting system deep
into a superfluid state.
|
We prove the nonintegrability of the susceptible-exposed-infected-removed
(SEIR) epidemic model in the Bogoyavlenskij sense. This property of the SEIR
model is different from the more fundamental susceptible-infected-removed (SIR)
model, which is Bogoyavlenskij-integrable. Our basic tool for the proof is
an extension of the Morales-Ramis theory due to Ayoul and Zung. Moreover, we
extend the system to a six-dimensional system to treat transcendental first
integrals and commutative vector fields. We also use the fact that the
incomplete gamma function $\Gamma(\alpha,x)$ is not elementary for
$\alpha\not\in\mathbb{N}$, of which a proof is included.
|
Some robust point cloud registration approaches with controllable pose
refinement magnitude, such as ICP and its variants, are commonly used to
improve 6D pose estimation accuracy. However, the effectiveness of these
methods gradually diminishes with the advancement of deep learning techniques
and the enhancement of initial pose accuracy, primarily due to their lack of
specific design for pose refinement. In this paper, we propose Point Cloud
Completion and Keypoint Refinement with Fusion Data (PCKRF), a new pose
refinement pipeline for 6D pose estimation. The pipeline consists of two steps.
First, it completes the input point clouds via a novel pose-sensitive point
completion network. The network uses both local and global features with pose
information during point completion. Then, it registers the completed object
point cloud with the corresponding target point cloud by our proposed Color
supported Iterative KeyPoint (CIKP) method. The CIKP method introduces color
information into registration and registers a point cloud around each keypoint
to increase stability. The PCKRF pipeline can be integrated with existing
popular 6D pose estimation methods, such as the full flow bidirectional fusion
network, to further improve their pose estimation accuracy. Experiments
demonstrate that our method exhibits superior stability compared to existing
approaches when optimizing initial poses with relatively high precision.
Notably, the results indicate that our method effectively complements most
existing pose estimation techniques, leading to improved performance in most
cases. Furthermore, our method achieves promising results even in challenging
scenarios involving textureless and symmetrical objects. Our source code is
available at https://github.com/zhanhz/KRF.
|
Accessibility in visualization is an important yet challenging topic.
Sonification, in particular, is a valuable yet underutilized technique that can
enhance accessibility for people with low vision. However, the lower bandwidth
of the auditory channel makes it difficult to fully convey dense
visualizations. For this reason, interactivity is key in making full use of its
potential. In this paper, we present a novel approach for the sonification of
dense line charts. We utilize the metaphor of a string instrument, where
individual line segments can be "plucked". We propose an importance-driven
approach which encodes the directionality of line segments using frequency and
dynamically scales amplitude for improved density perception. We discuss the
potential of our approach based on a set of examples.
|
Recently, it was shown that the spectrum of anomalous dimensions and other
important observables in N = 4 SYM are encoded into a simple nonlinear
Riemann-Hilbert problem: the P\mu-system or Quantum Spectral Curve. In this
letter we present the P\mu-system for the spectrum of the ABJM theory. This may
be an important step towards the exact determination of the interpolating
function h(\lambda) characterising the integrability of the ABJM model. We also
discuss a surprising symmetry between the P\mu-system equations for N = 4 SYM
and ABJM.
|