This is a solicited whitepaper for the Snowmass 2021 community planning
exercise. The paper focuses on measurements and science with the Cosmic
Microwave Background (CMB). The CMB is foundational to our understanding of
modern physics and continues to be a powerful tool driving our understanding of
cosmology and particle physics. In this paper, we outline the broad and unique
impact of CMB science for the High Energy Cosmic Frontier in the upcoming
decade. We also describe the progression of ground-based CMB experiments, which
shows that the community is prepared to develop the key capabilities and
facilities needed to achieve these transformative CMB measurements.
|
This paper is concerned with two dual aspects of the regularity question of
the Navier-Stokes equations. First, we prove a local in time localized
smoothing effect for local energy solutions. More precisely, if the initial
data restricted to the unit ball belongs to the scale-critical space $L^3$,
then the solution is locally smooth in space for some short time, which is
quantified. This builds upon the work of Jia and \v{S}ver\'{a}k, who considered
the subcritical case. Second, we apply these localized smoothing estimates to
prove a concentration phenomenon near a possible Type I blow-up. Namely, we
show if $(0, T^*)$ is a singular point then
$$\|u(\cdot,t)\|_{L^{3}(B_{R}(0))}\geq \gamma_{univ},\qquad
R=O(\sqrt{T^*-t}).$$ This result is inspired by and improves concentration
results established by Li, Ozawa, and Wang and Maekawa, Miura, and Prange. We
also extend our results to other critical spaces, namely $L^{3,\infty}$ and the
Besov space $\dot B^{-1+\frac3p}_{p,\infty}$, $p\in(3,\infty)$.
|
The inverses of the metric matrices on the Siegel-Jacobi upper half space
${\mathcal{X}}^J_n$, invariant to the restricted real Jacobi group
$G^J_n(\mathbb{R})_0$, and on the extended Siegel-Jacobi upper half space
$\tilde{{\mathcal{X}}}^J_n$, invariant to the action of the real Jacobi group
$G^J_n(\mathbb{R})$, are presented. The results are relevant for Berezin
quantization of the manifolds ${\mathcal{X}}^J_n$ and
$\tilde{\mathcal{X}}^J_n$. Explicit calculations in the case $n=2$ are given.
|
As natural language processing (NLP) has recently seen an unprecedented level
of excitement, and more people are eager to enter the field, it is unclear
whether current research reproducibility efforts are sufficient for this group
of beginners to apply the latest developments. To understand their needs, we
conducted a study with 93 students in an introductory NLP course, where
students reproduced the results of recent NLP papers. Surprisingly, we find
that their programming skill and comprehension of research papers have a
limited impact on their effort spent completing the exercise. Instead, we find
accessibility efforts by research authors to be the key to success, including
complete documentation, better coding practice, and easier access to data
files. Going forward, we recommend that NLP researchers pay close attention to
these simple aspects of open-sourcing their work, and use insights from
beginners' feedback to provide actionable ideas on how to better support them.
|
The kinematics of the most metal-poor stars provide a window into the early
formation and accretion history of the Milky Way. Here, we use
5~high-resolution cosmological zoom-in simulations ($\sim~5\times10^6$ star
particles) of Milky Way-like galaxies taken from the NIHAO-UHD project, to
investigate the origin of low-metallicity stars ([Fe/H]$\leq-2.5$). The
simulations show a prominent population of low-metallicity stars confined to
the disk plane, as recently discovered in the Milky Way. The ubiquity of this
finding suggests that the Milky Way is not unique in this respect.
Independently of the accretion history, we find that $\gtrsim~90$ per cent of
the retrograde stars in this population are brought in during the initial
build-up of the galaxies, within the first few Gyr after the Big Bang. Our
results therefore highlight the great potential of the retrograde population as
a tracer of the early build-up of the Milky Way. The prograde planar
population, on the other hand, is accreted during the later assembly phase and
samples the full galactic accretion history. In the case of a quiet accretion
history, this prograde population is mainly brought in during the first half of
cosmic evolution ($t\lesssim7$~Gyr), while, in the case of an on-going active
accretion history, later mergers on prograde orbits are also able to contribute
to this population. Finally, we note that the Milky Way shows a rather large
population of eccentric, very metal-poor planar stars. This is a feature not
seen in most of our simulations, with the exception of one simulation with an
exceptionally active early building phase.
|
We consider optimization problems that are formulated and solved in the
framework of tropical mathematics. The problems consist in minimizing or
maximizing functionals defined on vectors of finite-dimensional semimodules
over idempotent semifields, and may involve constraints in the form of linear
equations and inequalities. The objective function can be either a linear
function or a nonlinear function calculated by means of multiplicative conjugate
transposition of vectors. We start with an overview of known tropical
optimization problems and solution methods. Then, we formulate certain new
problems and present direct solutions to the problems in a closed compact
vector form suitable for further analysis and applications. For many problems,
the results obtained are complete solutions.
|
For complete intersection Calabi-Yau manifolds in toric varieties, Gross and
Haase-Zharkov have given a conjectural combinatorial description of the special
Lagrangian torus fibrations whose existence was predicted by Strominger, Yau
and Zaslow. We present a geometric version of this construction, generalizing
an earlier conjecture of the first author.
|
We introduce a silicon metal-oxide-semiconductor quantum dot architecture
based on a single polysilicon gate stack. The elementary structure consists of
two enhancement gates separated spatially by a gap, one gate forming a
reservoir and the other a quantum dot. We demonstrate, in three devices based
on two different versions of this elementary structure, that a wide range of
tunnel rates is attainable while maintaining single-electron occupation. A
characteristic change in slope of the charge transitions as a function of the
reservoir gate voltage, attributed to screening from charges in the reservoir,
is observed in all devices, and is expected to play a role in the sizable
tuning orthogonality of the split enhancement gate structure. The all-silicon
process is expected to minimize strain gradients from electrode thermal
mismatch, while the single gate layer should avoid issues related to overlayers
(e.g., additional dielectric charge noise) and help improve yield. Finally,
reservoir gate control of the tunnel barrier has implications for
initialization, manipulation and readout schemes in multi-quantum dot
architectures.
|
We present the Chandra ACIS-S3 data of the old classical nova RR Pic (1925).
The source has a count rate of 0.067+/-0.002 c/s in the 0.3-5.0 keV energy
range. We detect the orbital period of the underlying binary system at X-ray
wavelengths. We also find that the neutral hydrogen column density
differs for orbital minimum and orbital maximum spectra with values
0.25(+0.23-0.18)x10^22 cm^-2 and 0.64(+0.13-0.14)x10^22 cm^-2 at 3sigma
confidence level. The X-ray spectrum of RR Pic can be represented by a
composite model of bremsstrahlung with a photoelectric absorption, two
absorption lines centered around 1.1-1.4 keV and 5 Gaussian lines centered at
emission lines around 0.3-1.1 keV corresponding to various transitions of S, N,
O, C, Ne and Fe. The bremsstrahlung temperature derived from the fits ranges
from 0.99 to 1.60 keV, and the unabsorbed X-ray flux is found to be
2.5(+0.4-1.2)x10^-13 erg cm^-2 s^-1 in the 0.3-5.0 keV range with a luminosity
of (1.1\pm0.2)x10^31 erg s^-1 at 600 pc. We also detect excess emission in the
spectrum possibly originating from the reverse shock in the ejecta.
|
We study mass transfers between debris discs during stellar encounters. We
carried out numerical simulations of close flybys of two stars, one of which
has a disc of planetesimals represented by test particles. We explored the
parameter space of the encounters, varying the mass ratio of the two stars,
the pericentre and eccentricity of the encounter, and its geometry. We find
that particles are transferred to the other star from a restricted radial range
in the disc and the limiting radii of this transfer region depend on the
parameters of the encounter. We derive an approximate analytic description of
the inner radius of the region. The efficiency of the mass transfer generally
decreases with increasing encounter pericentre and increasing mass of the star
initially possessing the disc. Depending on the parameters of the encounter,
the transferred particles have specific distributions in the space of orbital
elements (semimajor axis, eccentricity, inclination, and argument of
pericentre) around their new host star. The population of the transferred
particles can be used to constrain the encounter through which it was
delivered. We expect that many stars experienced transfer among their debris
discs and planetary systems in their birth environment. This mechanism presents
a formation channel for objects on wide orbits of arbitrary inclinations,
typically having high eccentricity but possibly also close-to-circular
(eccentricities of about 0.1). Depending on the geometry, such orbital elements
can be distinct from those of the objects formed around the star.
|
The Learnable Tree Filter presents a remarkable approach to model
structure-preserving relations for semantic segmentation. Nevertheless, the
intrinsic geometric constraint forces it to focus on the regions with close
spatial distance, hindering effective long-range interactions. To relax the
geometric constraint, we analyze the filter by reformulating it as a Markov
Random Field and introduce a learnable unary term. In addition, we propose a
learnable spanning tree algorithm to replace the original non-differentiable
one, which further improves flexibility and robustness. With the above
improvements, our method can better capture long-range dependencies and
preserve structural details with linear complexity, and we extend it to
several vision tasks as a more generic feature transform. Extensive experiments
on object detection/instance segmentation demonstrate the consistent
improvements over the original version. For semantic segmentation, we achieve
leading performance (82.1% mIoU) on the Cityscapes benchmark without
bells-and-whistles. Code is available at
https://github.com/StevenGrove/LearnableTreeFilterV2.
|
In this paper, we deal with the problem of automatic tag recommendation for
painting artworks. Diachronic descriptions containing deviations in the
vocabulary used to describe each painting usually occur when the work is done
by many experts over time. The objective of this work is to provide a framework
that produces a more accurate and homogeneous set of tags for each painting in
a large collection. To validate our method we build a model based on a
weakly-supervised neural network for over $5{,}300$ paintings with hand-labeled
descriptions made by experts for the paintings of the Brazilian painter Candido
Portinari. This work takes place with the Portinari Project which started in
1979 intending to recover and catalog the paintings of the Brazilian painter.
The Portinari paintings at that time were in private collections and museums
spread around the world and thus inaccessible to the public. The descriptions
of each painting were made by a large number of collaborators over 40 years as
the paintings were recovered, and these diachronic descriptions caused
deviations in the vocabulary used to describe each painting. Our proposed
framework consists of (i) a neural network that receives as input the image of
each painting and uses frequent itemsets as possible tags, and (ii) a
clustering step in which we group related tags based on the output of the
pre-trained classifiers.
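
As a rough illustration of the first component, the sketch below mines frequent tag itemsets from a handful of made-up expert descriptions; the tag sets and the support threshold are illustrative assumptions only, not the Portinari data or the paper's implementation.

```python
# Minimal sketch (not the paper's code): mining frequent tag itemsets from
# expert descriptions so that each frequent itemset can serve as a candidate
# label for the image classifier. Descriptions and threshold are made up.
from collections import Counter
from itertools import combinations

descriptions = [
    {"boat", "sea", "fisherman"},
    {"boat", "sea", "child"},
    {"child", "kite", "field"},
    {"boat", "fisherman", "net"},
]

min_support = 2  # an itemset must appear in at least this many descriptions

# Count single tags and tag pairs (the first two levels of an Apriori-style pass).
singles = Counter(tag for d in descriptions for tag in d)
pairs = Counter(frozenset(p) for d in descriptions for p in combinations(sorted(d), 2))

frequent = {frozenset([t]) for t, c in singles.items() if c >= min_support}
frequent |= {p for p, c in pairs.items() if c >= min_support}

print(sorted(tuple(sorted(s)) for s in frequent))
# Each frequent itemset can then be treated as one candidate tag (or tag group)
# that the weakly-supervised classifier learns to predict from the painting image.
```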
|
A comprehensive theory of superconductivity (SC) and superfluidity (SF) of new
types III and IV at temperatures up to millions of degrees is presented,
involving phase transitions of fermions in heat reservoirs to form general
relativistic triple quasi-particles of three fermions interacting to form
boson-fermion pairs. Types 0, I, and II SC/SF are deduced from such triples as: thermally
dressed, relativistic fermionic vortices; spin coupled, dressed, fermionic
vortical pairs (diamagnetic bosons); and spinrevorbitally coupled, dressed
fermionic, vortical pairs (ferromagnetic bosons). All known SC, SF and trends
in critical temperatures (Tc) are thereby explained. The recently observed
SC/SF in nano-graphene systems is explained. Above-room-temperature SC/SF
is predicted and modeled by transformations of intense thermal boson
populations of heat reservoirs to relativistic mass, weight, spin and magnetism
for further reasoning over compression to electricity, weak phenomena and
strong phenomena for connecting general relativism and quantum mechanics.
|
Uncertainty in renewable energy generation and load consumption is a great
challenge for microgrid operation, especially in islanded mode as the microgrid
may be small in size and have limited flexible resources. In this paper, a
multi-timescale, two-stage robust unit commitment and economic dispatch model
is proposed to optimize the microgrid operation. The first stage is a
combination of day-ahead hourly and real-time sub-hourly models, which means the
day-ahead dispatch result must also satisfy the real-time conditions. The
second stage verifies the feasibility of the day-ahead dispatch
result under worst-case conditions, considering high-level uncertainty in
renewable energy generation and load consumption. In the proposed model, battery energy
storage system (BESS) and solar PV units are integrated as a combined
solar-storage system. The BESS plays an essential role in balancing the variable
output of solar PV units, which keeps the combined solar-storage system output
unchanged on an hourly basis. In this way, it largely neutralizes the impact of
solar uncertainty and makes the microgrid operation grid friendly. Furthermore,
in order to enhance the flexibility and resilience of the microgrid, both BESS
and thermal units provide regulating reserve to manage solar and load
uncertainty. The model has been tested in a controlled hardware-in-the-loop (CHIL)
environment for the Bronzeville Community Microgrid system in Chicago. The
simulation results show that the proposed model works effectively in managing
the uncertainty in solar PV and load and can provide a flexible dispatch in
both grid-connected and islanded modes.
|
The aim of personalized medicine is to tailor treatment decisions to
individuals' characteristics. N-of-1 trials are within-person crossover trials
that hold the promise of targeting individual-specific effects. While the idea
behind N-of-1 trials might seem simple, analyzing and interpreting N-of-1
trials is not straightforward. Here we ground N-of-1 trials in a formal causal
inference framework and formalize intuitive claims from the N-of-1 trials
literature. We focus on causal inference from a single N-of-1 trial and define
a conditional average treatment effect (CATE) that represents a target in this
setting, which we call the U-CATE. We discuss assumptions sufficient for
identification and estimation of the U-CATE under different causal models where
the treatment schedule is assigned at baseline. A simple mean difference is an
unbiased, asymptotically normal estimator of the U-CATE in simple settings. We
also consider settings where carryover effects, trends over time, time-varying
common causes of the outcome, and outcome-outcome effects are present. In these
more complex settings, we show that a time-varying g-formula identifies the
U-CATE under explicit assumptions. Finally, we analyze data from N-of-1 trials
about acne symptoms and show how different assumptions about the data
generating process can lead to different analytical strategies.
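
As a minimal illustration of the simple mean-difference estimator mentioned above, the following sketch simulates one N-of-1 trial with a baseline-assigned alternating schedule; the outcome model, effect size, and schedule are illustrative assumptions, not the paper's acne data.

```python
# Minimal sketch, not the paper's code: the simple mean-difference estimator of
# the U-CATE from a single N-of-1 trial with a treatment schedule assigned at
# baseline. Outcomes below are simulated under one of the "simple settings"
# (no carryover, no trend, no time-varying common causes).
import numpy as np

rng = np.random.default_rng(0)

# Alternating treatment schedule assigned at baseline: 1 = treated, 0 = control.
schedule = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
# Simulated outcomes: a true individual effect of -2 plus independent noise.
outcomes = 10 - 2 * schedule + rng.normal(0, 1, size=schedule.size)

treated = outcomes[schedule == 1]
control = outcomes[schedule == 0]

estimate = treated.mean() - control.mean()
# A rough standard error under independence across periods.
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
print(f"mean-difference estimate: {estimate:.2f} (SE {se:.2f})")
```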
|
Learning-based Adaptive Bit Rate~(ABR) methods, which aim to learn outstanding
strategies without any presumptions, have become one of the research hotspots
for adaptive streaming. However, they typically suffer from several issues,
i.e., low sample efficiency and a lack of awareness of video quality
information. In this paper, we propose Comyco, a video quality-aware ABR
approach that enormously improves the learning-based methods by tackling the
above issues. Comyco trains the policy via imitating expert trajectories given
by the instant solver, which can not only avoid redundant exploration but also
make better use of the collected samples. Meanwhile, Comyco attempts to pick
the chunk with higher perceptual video qualities rather than video bitrates. To
achieve this, we construct Comyco's neural network architecture, video datasets
and QoE metrics with video quality features. Using trace-driven and real-world
experiments, we demonstrate significant improvements of Comyco's sample
efficiency in comparison to prior work, with 1700x improvements in terms of the
number of samples required and 16x improvements on training time required.
Moreover, results illustrate that Comyco outperforms previously proposed
methods, with improvements in average QoE of 7.5% - 16.79%. In particular,
Comyco also surpasses the state-of-the-art approach Pensieve by 7.37% in average
video quality under the same rebuffering time.
|
We study an SU(2) supersymmetric gauge model in a framework of gauge-Higgs
unification. A multi-Higgs spectrum appears in the model at low energy. We
develop a useful perturbative approximation scheme for evaluating the effective
potential to study the multi-Higgs mass spectrum. We find that both
tree-massless and massive Higgs scalars obtain mass corrections of similar size
from finite parts of the loop effects. The corrections modify the multi-Higgs mass
spectrum, and hence the loop effects are significant in view of future
verifications of the gauge-Higgs unification scenario in high-energy
experiments.
|
We analyze the star product induced on the algebra of functions on R^3 by a
suitable reduction of the Moyal product defined on F(R^4). This is obtained
through the identification of R^3 with the dual of a three dimensional Lie
algebra.
We consider the su(2) case, exhibit a matrix basis and realize the algebra of
functions on its dual in such a basis. The relation to the Duflo map is
discussed. As an application to quantum mechanics we compute the spectrum of
the hydrogen atom.
|
Donors in silicon, conceptually described as hydrogen atom analogues in a
semiconductor environment, have become a key ingredient of many
"More-than-Moore" proposals such as quantum information processing [1-5] and
single-dopant electronics [6, 7]. The level of maturity this field has reached
has enabled the fabrication and demonstration of transistors that base their
functionality on a single impurity atom [8, 9] allowing the predicted
single-donor energy spectrum to be checked by an electrical transport
measurement. Generalizing the concept, a donor pair may behave as a hydrogen
molecule analogue. However, the molecular quantum mechanical solution only
takes us so far and a detailed understanding of the electronic structure of
these molecular systems is a challenge to be overcome. Here we present a
combined experimental-theoretical demonstration of the energy spectrum of a
strongly interacting donor pair in the channel of a silicon nanotransistor and
show the first observation of measurable two-donor exchange coupling. Moreover,
the analysis of the three charge states of the pair shows evidence of a
simultaneous enhancement of the binding and charging energies with respect to
the single donor spectrum. The measured data are accurately matched by results
obtained in an effective mass theory incorporating the Bloch states
multiplicity in Si, a central cell corrected donor potential and a full
configuration interaction treatment of the 2-electron spectrum. Our data
describe the basic 2-qubit entanglement element in Kane's quantum processing
scheme [1], namely exchange coupling, implemented here in the range of
molecular hybridization.
|
In the Summer of 2020, as COVID-19 limited in-person research opportunities
and created additional barriers for many students, institutions either canceled
or remotely hosted their Research Experience for Undergraduates (REU) programs.
The present longitudinal qualitative phenomenographic study was designed to
explore some of the possible limitations, challenges and outcomes of this
remote experience. Overall, 94 interviews were conducted with paired
participants: mentees (N=10) and mentors (N=8) from six different REU programs.
By drawing on Cultural-Historical Activity Theory (CHAT) as a framework, our
study uncovers some of the challenges for mentees around the accomplishment of
their research objectives and academic goals. These challenges included
motivation, limited access to technologies at home, limited communication among
REU students, barriers in mentor-mentee relationships, and differing
expectations about doing research. Despite the challenges, all mentees reported
that this experience was highly beneficial. Comparisons between outcomes of
these remote REUs and published outcomes of in-person undergraduate research
programs reveal many similar benefits such as integrating students into STEM
culture. Our study suggests that remote research programs could be considered
as a means to expand access to research experiences for undergraduate students
even after COVID-19 restrictions have been lifted.
|
We describe progress applying the \textit{Worldline Formalism} of quantum
field theory to the fermion propagator dressed by $N$-photons to study
multi-linear Compton scattering processes, explaining how this approach --
whose calculational advantages are well-known at multi-loop order -- yields
compact and manifestly gauge invariant scattering amplitudes.
|
Magnetic fields govern the plasma dynamics in the outer layers of the solar
atmosphere, and electric fields acting on neutral atoms that move across the
magnetic field enable us to study the dynamical coupling between neutrals and
ions in the plasma. In order to measure the magnetic and electric fields of
chromospheric jets, the full Stokes spectra of the Paschen series of neutral
hydrogen in a surge and in some active region jets that took place at the solar
limb were observed on May 5, 2012, using the spectropolarimeter of the Domeless
Solar Telescope at Hida observatory, Japan. First, we inverted the Stokes
spectra taking into account only the effect of magnetic fields on the energy
structure and polarization of the hydrogen levels. Having found no definitive
evidence of the effects of electric fields in the observed Stokes profiles, we
then estimated an upper bound for these fields by calculating the polarization
degree under the magnetic field configuration derived in the first step, with
the additional presence of a perpendicular (Lorentz type) electric field of
varying strength. The inferred direction of the magnetic field on the plane of
the sky (POS) approximately aligns to the active region jets and the surge,
with magnetic field strengths in the range 10 G < B < 640 G for the surge.
Using magnetic field strengths of 70, 200, and 600 G, we obtained upper limits
for possible electric fields of 0.04, 0.3, and 0.8 V/cm, respectively. Because
the velocity of neutral atoms of hydrogen moving across the magnetic field
derived from these upper limits of the Lorentz electric field is far below the
bulk velocity of the plasma perpendicular to the magnetic field as measured by
the Doppler shift, we conclude that the neutral atoms must be highly frozen to
the magnetic field in the surge.
|
Recently, robust funnel Model Predictive Control (MPC) was introduced, which
consists of model-based funnel MPC and model-free funnel control for its
robustification w.r.t. model-plant mismatches, bounded disturbances, and
uncertainties. It achieves output-reference tracking within prescribed bounds
on the tracking error for a class of unknown nonlinear systems. We extend
robust funnel MPC by a learning component to adapt the underlying model to the
system data and hence to improve the contribution of MPC. Since robust funnel
MPC is inherently robust and the evolution of the tracking error in the
prescribed performance funnel is guaranteed, the additional learning component
is able to perform the learning task online - even without an initial model or
offline training.
|
In this article we study the asymptotically rigid mapping class groups of
infinitely-punctured surfaces obtained by thickening planar trees. We present a
family of CAT(0) cube complexes on which the latter groups act. Along the way,
we determine in which cases the cube complexes introduced earlier by Genevois,
Lonjou and Urech are CAT(0).
|
This paper presents the R package MCS which implements the Model Confidence
Set (MCS) procedure recently developed by Hansen et al. (2011). Hansen's
procedure consists of a sequence of tests which permits the construction of a set of
'superior' models, where the null hypothesis of Equal Predictive Ability (EPA)
is not rejected at a certain confidence level. The EPA test statistic is
calculated for an arbitrary loss function, meaning that we could test models on
various aspects, for example point forecasts. The relevance of the package
is shown using an example which aims at illustrating in detail the use of the
functions provided by the package. The example compares the ability of
different models belonging to the ARCH family to predict large financial
losses. We also discuss the implementation of the ARCH-type models and their
maximum likelihood estimation using the popular R package rugarch developed by
Ghalanos (2014).
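
The package itself is written in R; purely as a language-agnostic sketch of the elimination logic behind the MCS procedure, the simplified Python version below uses a maximum t-statistic with a naive bootstrap. It is an illustrative approximation under stated assumptions, not the MCS or rugarch implementation.

```python
# Illustrative sketch of the Model Confidence Set idea: repeatedly test Equal
# Predictive Ability over the surviving models and drop the worst one until the
# EPA hypothesis is no longer rejected. Simplified max-t statistic and naive
# bootstrap; loss data below are synthetic.
import numpy as np

def model_confidence_set(losses, alpha=0.10, n_boot=500, seed=0):
    """losses: (T, k) array of out-of-sample losses for k competing models."""
    rng = np.random.default_rng(seed)
    T, k = losses.shape
    surviving = list(range(k))
    while len(surviving) > 1:
        sub = losses[:, surviving]
        d = sub - sub.mean(axis=1, keepdims=True)       # loss relative to cross-model mean
        t_stats = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(T))
        stat = t_stats.max()                             # observed max t-statistic
        # Bootstrap the statistic under the EPA null (recentred losses).
        centred = d - d.mean(axis=0, keepdims=True)
        boot = np.empty(n_boot)
        for b in range(n_boot):
            db = centred[rng.integers(0, T, size=T)]
            boot[b] = (db.mean(axis=0) / (db.std(axis=0, ddof=1) / np.sqrt(T))).max()
        p_value = (boot >= stat).mean()
        if p_value >= alpha:                              # EPA not rejected: stop
            break
        surviving.pop(int(np.argmax(t_stats)))            # eliminate the worst model
    return surviving

# Toy example: model 2 has systematically larger losses and should be excluded.
rng = np.random.default_rng(1)
L = rng.normal(1.0, 0.2, size=(250, 3))
L[:, 2] += 0.3
print("models in the MCS:", model_confidence_set(L))
```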
|
We prove upper bounds on the $L^p$ norms of eigenfunctions of the discrete
Laplacian on regular graphs. We then apply these ideas to study the $L^p$ norms
of joint eigenfunctions of the Laplacian and an averaging operator over a
finite collection of algebraic rotations of the $2$-sphere. Under mild
conditions, such joint eigenfunctions are shown to satisfy for large $p$ the
same bounds as those known for Laplace eigenfunctions on a surface of
non-positive curvature.
|
We discuss the process of Higgs boson production in a $\gamma\gamma$ collider
on noncommutative spacetime and compare the results with the large extra dimension
model in the KK graviton channel. Summing all KK modes on the IR brane, the effects are of
the same order as the noncommutative model prediction. This process is
completely forbidden in the standard model by the unitarity condition and bosonic
distribution. In noncommutative theory, the effect is induced by the
noncommutation relation of the coordinates, $[x^{\mu}, x^{\nu}] = i\theta^{\mu\nu}$.
Because the constant background strength tensor does not contain any conserved
quantum number, this effect appears in the particle power spectrum.
The particle mass spectrum is corrected by the radiative and anisotropic
surroundings. The process $\gamma\gamma\to H^{0}H^{0}$ restricts the
unitarity condition in noncommutative field theory. Under the power-law
conservation condition, a neutral Higgs with mass below the gauge boson resonance will
produce an accelerated phenomenon as the central energy is higher than the $Z_{0}$
gauge boson creation scale. The effects generated from the vast number of light Higgs
particles lower the power-rate energy distribution as long as the ambient system is
in balance. The fractional rate on polarized polars is very small when
embedded into the unpolarized surroundings and depends on the background electric field
couplings.
|
A long-term sublimation model to explain how Phaethon could provide the
Geminid stream is proposed. We find that it would take $\sim6$ Myr or more for
Phaethon to lose all of its internal ice (if ever there was) in its present
orbit. Thus, if the asteroid moved from the region of a 5:2 or 8:3 mean motion
resonance with Jupiter to its present orbit less than $1$ Myr ago, it may have
retained much of its primordial ice. The dust mantle on the sublimating body
should have a thickness of at least $15$ m but the mantle could have been less
than $1$ m thick $1000$ years ago. We find that the total gas production rate
could have been as large as $10^{27}\rm~s^{-1}$ then, and the gas flow could
have been capable of lifting dust particles of up to a few centimeters in size.
Therefore, gas production during the past millennium could have been sufficient
to blow away enough dust particles to explain the entire Geminid stream. For
present-day Phaethon, the gas production is comparatively weak. But strong
transient gas release with a rate of $\sim4.5\times10^{19}\rm~m^{-2}s^{-1}$ is
expected for its south polar region when Phaethon moves from $0^\circ$ to
$2^\circ$ mean anomaly near perihelion. Consequently, dust particles with radii
of $<\sim260~\mu m$ can be blown away to form a dust tail. In addition, we find
that the large surface temperature variation of $>600$ K near perihelion can
generate sufficiently large thermal stress to cause fracture of rocks or
boulders and provide an efficient mechanism to produce dust particles on the
surface. The time scale for this process should be several times longer than
the seasonal thermal cycle, thereby dominating the cycle of appearance of the
dust tail.
|
Using tape or optical devices for scale-out storage is one option for storing
a vast amount of data. However, it is impossible or almost impossible to
rewrite data with such devices. Thus, scale-out storage using such devices
cannot use standard data-distribution algorithms because they rewrite data for
moving between servers constituting the scale-out storage when the server
configuration is changed. Even when rewritable devices are used for scale-out
storage, if the server capacity is huge, rewriting data is very hard when the server
configuration is changed. In this paper, a data-distribution algorithm called
Sequential Checking is proposed, which can be used for scale-out storage
composed of devices that are hardly able to rewrite data. Sequential Checking
1) does not need to move data between servers when the server configuration is
changed, 2) distributes data in amounts that depend on each server's
volume, 3) selects a unique server when a datum is written, and 4) selects candidate servers
when a datum is read (there are few such servers in most cases) and finds among them the
unique server that stores the newest datum. These basic
characteristics were confirmed through proofs and simulations. Data can be read
by accessing 1.98 servers on average from a storage comprising 256 servers
under a realistic condition. Evaluations in a real
environment also confirm that the access time is acceptable. Sequential Checking makes
scale-out storage using tape or optical devices, or using huge-capacity servers,
a realistic option.
|
Image matting refers to the estimation of the opacity of foreground objects.
It requires correct contours and fine details of foreground objects for the
matting results. To better accomplish human image matting tasks, we propose the
Cascade Image Matting Network with Deformable Graph Refinement, which can
automatically predict precise alpha mattes from single human images without any
additional inputs. We adopt a network cascade architecture to perform matting
from low-to-high resolution, which corresponds to coarse-to-fine optimization.
We also introduce the Deformable Graph Refinement (DGR) module based on graph
neural networks (GNNs) to overcome the limitations of convolutional neural
networks (CNNs). The DGR module can effectively capture long-range relations
and obtain more global and local information to help produce finer alpha
mattes. We also reduce the computation complexity of the DGR module by
dynamically predicting the neighbors and apply the DGR module to higher-resolution
features. Experimental results demonstrate the ability of our CasDGR to achieve
state-of-the-art performance on synthetic datasets and produce good results on
real human images.
|
Taking the triangle areas as independent variables in the theory of Regge
calculus can lead to ambiguities in the edge lengths, which can be interpreted
as discontinuities in the metric. We construct solutions to area Regge calculus
using a triangulated lattice and find that on a spacelike hypersurface no such
discontinuity can arise. On a null hypersurface however, we can have such a
situation and the resulting metric can be interpreted as a so-called refractive
wave.
|
The search for topological superconductors (TSCs) is one of the most urgent
contemporary problems in condensed matter systems. TSCs are characterized by a
full superconducting gap in the bulk and topologically protected gapless
surface (or edge) states. Within each vortex core of TSCs, there exist zero-energy
Majorana bound states, which are predicted to exhibit non-Abelian
statistics and to form the basis of fault-tolerant quantum computation. So
far, no stoichiometric bulk material exhibits the required topological surface
states (TSSs) at the Fermi level combined with fully gapped bulk superconductivity.
Here, we report atomic scale visualization of the TSSs of the
non-centrosymmetric fully-gapped superconductor, PbTaSe2. Using quasiparticle
scattering interference (QPI) imaging, we find two TSSs with a Dirac point at
E~1.0eV, of which the inner TSS and part of the outer TSS cross the Fermi level, on the
Pb-terminated surface of this fully gapped superconductor. This discovery
reveals PbTaSe2 as a promising candidate for a TSC.
|
We present a calculation of the hyperfine splittings in bottomonium using
lattice Nonrelativistic QCD. The calculation includes spin-dependent
relativistic corrections through O(v^6), radiative corrections to the leading
spin-magnetic coupling and, for the first time, non-perturbative 4-quark
interactions which enter at alpha_s^2 v^3. We also include the effect of u,d,s
and c quark vacuum polarisation. Our result for the 1S hyperfine splitting is
M(Upsilon,1S) - M(eta_b,1S)= 60.0(6.4) MeV. We find the ratio of 2S to 1S
hyperfine splittings (M(Upsilon,2S) - M(eta_b,2S))/ (M(Upsilon,1S) -
M(eta_b,1S)) = 0.445(28).
|
From the analysis of the near-UV spectrum of the QSO 2206-199, obtained with
a long series of exposures with STIS on the HST, we deduce a value (D/H) =
(1.65 +/- 0.35) x 10(-5) (1 sigma error) for the abundance of deuterium in the
z(abs) = 2.0762 damped Lyman alpha system (DLA) along this sight-line. The
velocity structure of this absorber is very simple and its neutral hydrogen
column density, N(H I), is accurately known; the error in D/H is mostly due to
the limited signal-to-noise ratio of the spectrum. Since this is also one of
the most metal-poor DLAs, with metal abundances of about 1/200 of solar, the
correction due to astration of D is expected to be insignificant and the value
we deduce should be essentially the primordial abundance of deuterium. When all
(six) available measurements of D/H in high redshift QSO absorbers are
considered, we find that the three DLAs---where N(H I) is measured most
reliably---give consistently lower values than the three Lyman limit systems.
We point out that the weighted mean of the DLA measurements, D/H = (2.2 +/-
0.2) x 10(-5), yields a baryon density Omega_B h^2 = 0.025 +/- 0.001 which is
within 1 sigma of the value deduced from the analysis of the CMB angular power
spectrum, and is still consistent with the present-day D/H and models of
Galactic chemical evolution. Future observations of D I absorption in other
DLAs are needed to establish whether our finding reflects a real advantage of
DLAs over other classes of QSO absorbers for the measurement of D, or is just a
statistical fluctuation.
|
Although structural maps such as subductions and inductions appear naturally
in diffeology, one of the challenges is providing suitable analogues for
submersions, immersions, and \'{e}tale maps (i.e., local diffeomorphisms)
consistent with the classical versions of these maps between manifolds. In this
paper, we consider diffeological submersions, immersions, and \'{e}tale maps as
an adaptation of these maps to diffeology by a nonlinear approach. In the case
of manifolds, there is no difference between the classical and diffeological
versions of these maps. Moreover, we study their diffeological properties from
different aspects in a systematic fashion with respect to the germs of plots.
We also discuss notions of embeddings of diffeological spaces and regard
diffeological embeddings similar to those of manifolds. In particular, we show
that diffeological embeddings are inductions.
In order to characterize the considered maps from their linear behaviors, we
introduce a class of diffeological spaces, so-called diffeological \'etale
manifolds, which not only contains the usual manifolds but also includes
irrational tori. We state and prove versions of the rank and implicit function
theorems, as well as the fundamental theorem on flows in this class. As an
application, we use the results of this work to facilitate the computations of
the internal tangent spaces and diffeological dimensions in a few interesting
cases.
|
The Hong Kong/AAO/Strasbourg Halpha (HASH) planetary nebula database is an
online research platform providing free and easy access to the largest and most
comprehensive catalogue of known Galactic planetary nebulae (PNe) and a
repository of observational data (imaging and spectroscopy) for these and
related astronomical objects. The main motivation for creating this system is
resolving some of the long-standing problems in the field, e.g. problems with mimics
and dubious and/or misidentified objects, errors in observational data, and
consolidation of the widely scattered data-sets. This facility allows
researchers quick and easy access to the archived and new observational data
and enables the creation and sharing of non-redundant PN samples and catalogues.
|
A Compton polarimeter has been installed in Hall A at Jefferson Laboratory.
This letter reports on the first electron beam polarization measurements
performed during the HAPPEX experiment at an electron energy of 3.3 GeV and an
average current of 40 $\mu$A. The heart of this device is a Fabry-Perot cavity
which increased the luminosity for Compton scattering in the interaction region
so much that a 1.4% statistical accuracy could be obtained within one hour,
with a 3.3% total error.
|
In this paper, we shall prove that a harmonic map from $\mathbb{C}^{n}$
($n\geq2$) to any K\"ahler manifold must be holomorphic under an assumption on the
energy density. It can be considered as a complex analogue of the Liouville
type theorem for harmonic maps obtained by Sealey.
|
We present a research study aimed at testing the applicability of machine
learning techniques for the prediction of permeability of digitized rock samples.
We prepare a training set containing 3D images of sandstone samples imaged with
X-ray microtomography and corresponding permeability values simulated with a Pore
Network approach. We also use Minkowski functionals and Deep Learning-based
descriptors of 3D images and 2D slices as input features for predictive model
training and prediction. We compare predictive power of various feature sets
and methods. The latter include Gradient Boosting and various architectures of
Deep Neural Networks (DNN). The results demonstrate applicability of machine
learning for image-based permeability prediction and open a new area of Digital
Rock research.
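
A minimal sketch of this kind of image-based regression pipeline, using Gradient Boosting on per-sample descriptors, might look as follows; the synthetic features and targets stand in for Minkowski functionals and simulated permeabilities and are not the paper's data.

```python
# Hedged sketch of the permeability-regression setup described above: Gradient
# Boosting on precomputed descriptors (e.g. Minkowski functionals) of each 3D
# sample. Features and targets are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_features = 300, 4          # e.g. 4 Minkowski functionals per sample
X = rng.normal(size=(n_samples, n_features))
# Synthetic "permeability" depending nonlinearly on the descriptors (always positive).
y = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2]
           + rng.normal(0, 0.05, n_samples))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, np.log(y_tr))            # permeability is often modelled on a log scale
pred = np.exp(model.predict(X_te))
print("R^2 on held-out samples:", round(r2_score(y_te, pred), 3))
```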
|
Cross-lingual word embeddings are becoming increasingly important in
multilingual NLP. Recently, it has been shown that these embeddings can be
effectively learned by aligning two disjoint monolingual vector spaces through
linear transformations, using no more than a small bilingual dictionary as
supervision. In this work, we propose to apply an additional transformation
after the initial alignment step, which moves cross-lingual synonyms towards a
middle point between them. By applying this transformation our aim is to obtain
a better cross-lingual integration of the vector spaces. In addition, and
perhaps surprisingly, the monolingual spaces also improve by this
transformation. This is in contrast to the original alignment, which is
typically learned such that the structure of the monolingual spaces is
preserved. Our experiments confirm that the resulting cross-lingual embeddings
outperform state-of-the-art models in both monolingual and cross-lingual
evaluation tasks.
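
A minimal numpy sketch of the two steps described above, orthogonal alignment followed by moving dictionary pairs towards their midpoint, is given below; the vectors, dictionary, and mixing weight alpha are illustrative assumptions rather than the paper's trained embeddings.

```python
# Minimal sketch of the idea: (1) align two monolingual spaces with an
# orthogonal (Procrustes) map learned from a seed dictionary, (2) apply a
# second transformation that moves each translation pair towards its midpoint.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 50, 200
X = rng.normal(size=(n_pairs, d))                       # source-language vectors (dictionary words)
true_Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ true_Q + 0.05 * rng.normal(size=(n_pairs, d))   # target-language counterparts

# Step 1: orthogonal alignment W minimising ||X W - Y||_F (Procrustes solution).
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
X_aligned = X @ W

# Step 2: move both sides of each dictionary pair towards the midpoint.
mid = 0.5 * (X_aligned + Y)
alpha = 1.0                                             # 1.0 = map pairs exactly onto the midpoint
X_final = X_aligned + alpha * (mid - X_aligned)
Y_final = Y + alpha * (mid - Y)

print("mean pair distance before:", np.linalg.norm(X_aligned - Y, axis=1).mean())
print("mean pair distance after: ", np.linalg.norm(X_final - Y_final, axis=1).mean())
```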
|
We study the fundamental problems of Gaussian mean estimation and linear
regression with Gaussian covariates in the presence of Huber contamination. Our
main contribution is the design of the first sample near-optimal and almost
linear-time algorithms with optimal error guarantees for both of these
problems. Specifically, for Gaussian robust mean estimation on $\mathbb{R}^d$
with contamination parameter $\epsilon \in (0, \epsilon_0)$ for a small
absolute constant $\epsilon_0$, we give an algorithm with sample complexity $n
= \tilde{O}(d/\epsilon^2)$ and almost linear runtime that approximates the
target mean within $\ell_2$-error $O(\epsilon)$. This improves on prior work
that achieved this error guarantee with polynomially suboptimal sample and time
complexity. For robust linear regression, we give the first algorithm with
sample complexity $n = \tilde{O}(d/\epsilon^2)$ and almost linear runtime that
approximates the target regressor within $\ell_2$-error $O(\epsilon)$. This is
the first polynomial sample and time algorithm achieving the optimal error
guarantee, answering an open question in the literature. At the technical
level, we develop a methodology that yields almost-linear time algorithms for
multi-directional filtering that may be of broader interest.
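
For intuition only, the following sketch shows the basic spectral filtering idea that underlies this line of work: repeatedly examine the top eigenvector of the empirical covariance and discard the most extreme projections. It is a simplified illustration, not the near-optimal, almost linear-time algorithm of the paper; the threshold and contamination model are assumptions.

```python
# Hedged illustration of the basic filtering idea behind robust Gaussian mean
# estimation (not the paper's algorithm): while some direction has inflated
# variance, project onto it and remove the points with the largest scores.
import numpy as np

def filtered_mean(X, threshold=1.3, max_iter=50):
    X = X.copy()
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= threshold:                    # no direction with inflated variance
            break
        v = eigvecs[:, -1]                              # top eigenvector
        scores = np.abs((X - mu) @ v)
        X = X[scores <= np.quantile(scores, 0.95)]      # drop the most extreme 5%
    return X.mean(axis=0)

rng = np.random.default_rng(0)
d, n, eps = 20, 5000, 0.1
inliers = rng.normal(size=(int(n * (1 - eps)), d))                  # N(0, I) samples
outliers = rng.normal(loc=5.0, scale=0.1, size=(int(n * eps), d))   # adversarial cluster
X = np.vstack([inliers, outliers])

print("sample mean error:  ", np.linalg.norm(X.mean(axis=0)))       # true mean is 0
print("filtered mean error:", np.linalg.norm(filtered_mean(X)))
```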
|
General analytical solutions of the Quantum Hamilton Jacobi Equation for
conservative one-dimensional or reducible motion are presented and discussed.
The quantum Hamilton's characteristic function and its derivative, i.e. the
quantum momentum function, are obtained in general, and it is shown that any
one-dimensional wave function can be exactly represented in a WKB-like form.
The formalism is applied to the harmonic oscillator and to the electron's
motion in the hydrogen atom, and the above mentioned functions are computed and
discussed for different quantum numbers. It is analyzed how the quantum
quantities investigated tend to the corresponding classical ones, in the limit
$\hbar\to 0$. These results demonstrate that the Quantum Hamilton Jacobi
Equation is not only completely equivalent to the Schr\"odinger Equation, but
also allows one to fully investigate the transition from quantum to classical
mechanics.
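
For reference, the textbook-level relations behind this statement can be summarized as follows (a generic summary, not the paper's specific derivation):

```latex
% Writing the wave function in a WKB-like form with the quantum characteristic
% function W(x), the quantum momentum function is p(x) = W'(x):
\[
  \psi(x) \;=\; \exp\!\Big(\tfrac{i}{\hbar}\,W(x)\Big),
  \qquad
  p(x) \;=\; W'(x) \;=\; -\,i\hbar\,\frac{\psi'(x)}{\psi(x)} .
\]
% Substituting into the time-independent Schr\"odinger equation yields the
% Quantum Hamilton-Jacobi Equation for conservative one-dimensional motion:
\[
  \frac{\big(W'(x)\big)^{2}}{2m} \;+\; V(x) \;-\; E
  \;=\; \frac{i\hbar}{2m}\,W''(x),
\]
% whose right-hand side vanishes as \hbar \to 0, recovering the classical
% Hamilton-Jacobi equation.
```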
|
The ubiquity of edge devices has led to a growing amount of unlabeled data
produced at the edge. Deep learning models deployed on edge devices are
required to learn from these unlabeled data to continuously improve accuracy.
Self-supervised representation learning has achieved promising performances
using centralized unlabeled data. However, the increasing awareness of privacy
protection limits centralizing the distributed unlabeled image data on edge
devices. While federated learning has been widely adopted to enable distributed
machine learning with privacy preservation, without a data selection method to
efficiently select streaming data, the traditional federated learning framework
fails to handle these huge amounts of decentralized unlabeled data with limited
storage resources on edge. To address these challenges, we propose a
Self-supervised On-device Federated learning framework with coreset selection,
which we call SOFed, to automatically select a coreset that consists of the
most representative samples into the replay buffer on each device. It preserves
data privacy as each client does not share raw data while learning good visual
representations. Experiments demonstrate the effectiveness and significance of
the proposed method in visual representation learning.
|
We report on a study of topological properties of Fibonacci quasicrystals.
Chern numbers, which label the dense set of spectral gaps, are shown to be
related to the underlying palindromic symmetry. Topological and spectral
features are related to the two independent phases of the scattering matrix:
the total phase shift describing the frequency spectrum and the chiral phase
sensitive to topological features. Conveniently designed gap modes with
spectral properties directly related to the Chern numbers allow these
phases to be scanned. An effective topological Fabry-Perot cavity is presented.
|
Speech enhancement deep learning systems usually require large amounts of
training data to operate in broad conditions or real applications. This makes
the adaptability of those systems to new, low-resource environments an
important topic. In this work, we present the results of adapting a speech
enhancement generative adversarial network by finetuning the generator with
small amounts of data. We investigate the minimum requirements to obtain a
stable behavior in terms of several objective metrics in two very different
languages: Catalan and Korean. We also study the variability of test
performance to unseen noise as a function of the amount of different types of
noise available for training. Results show that adapting a pre-trained English
model with 10 min of data already achieves a comparable performance to having
two orders of magnitude more data. They also demonstrate the relative stability
in test performance with respect to the number of training noise types.
|
In this paper we are concerned with the contact process on the square
lattice. The contact process intuitively describes the spread of an infectious
disease on a graph, where an infectious vertex becomes healthy at a constant
rate while a healthy vertex is infected at a rate proportional to the number of
infectious neighbors. As the dimension of the lattice grows to infinity, we
give a mean field limit for the survival probability of the process conditioned
on the event that only the origin of the lattice is infected at t=0. The binary
contact path process is a main auxiliary tool for our proof.
|
Let M be the Shimura variety associated with the group of spinor similitudes
of a rational quadratic space over $\mathbb{Q}$ of signature (n,2). We prove a conjecture of
Bruinier-Kudla-Yang, relating the arithmetic intersection multiplicities of
special divisors and big CM points on M to the central derivatives of certain
$L$-functions. As an application of this result, we prove an averaged version
of Colmez's conjecture on the Faltings heights of CM abelian varieties.
|
Assume that a multi-user multiple-input multiple-output (MIMO) communication
system must be designed to cover a given area with maximal energy efficiency
(bit/Joule). What are the optimal values for the number of antennas, active
users, and transmit power? By using a new model that describes how these three
parameters affect the total energy efficiency of the system, this work provides
closed-form expressions for their optimal values and interactions. In sharp
contrast to common belief, the transmit power is found to increase (not
decrease) with the number of antennas. This implies that energy efficient
systems can operate at high signal-to-noise ratio (SNR) regimes in which the
use of interference-suppressing precoding schemes is essential. Numerical
results show that the maximal energy efficiency is achieved by a massive MIMO
setup wherein hundreds of antennas are deployed to serve relatively many users
using interference-suppressing regularized zero-forcing precoding.
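
As a rough numerical illustration of the three-way trade-off (not the paper's model or its closed-form solution), one can grid-search a generic energy-efficiency expression; the rate and power models and all constants below are assumptions.

```python
# Rough illustration only: jointly choosing the number of antennas M, active
# users K and transmit power p to maximise a generic energy-efficiency model.
# ZF-like rate model and fixed + per-antenna + amplifier power model are assumed.
import itertools
import numpy as np

B = 20e6          # bandwidth [Hz]
sigma2 = 1e-13    # noise power [W]
beta = 1e-8       # average channel gain
P_fix, P_ant, eta = 10.0, 0.5, 0.4   # fixed power, per-antenna power, PA efficiency

def energy_efficiency(M, K, p):
    if M <= K:
        return 0.0
    snr = (M - K) * beta * p / (K * sigma2)        # zero-forcing style per-user SNR
    rate = K * B * np.log2(1.0 + snr)              # total throughput [bit/s]
    power = P_fix + M * P_ant + p / eta            # total consumed power [W]
    return rate / power                            # bit/Joule

grid_M = range(10, 401, 10)
grid_K = range(1, 101)
grid_p = np.logspace(-3, 1, 40)                    # 1 mW .. 10 W

best = max(((energy_efficiency(M, K, p), M, K, p)
            for M, K, p in itertools.product(grid_M, grid_K, grid_p)),
           key=lambda t: t[0])
print("best EE %.2e bit/J at M=%d, K=%d, p=%.3f W" % best)
```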
|
This paper presents and evaluates a new retrieval augmented generation (RAG)
and large language model (LLM)-based artificial intelligence (AI) technique:
rubric enabled generative artificial intelligence (REGAI). REGAI uses rubrics,
which can be created manually or automatically by the system, to enhance the
performance of LLMs for evaluation purposes. REGAI improves on the performance
of both classical LLMs and RAG-based LLM techniques. This paper describes
REGAI, presents data regarding its performance and discusses several possible
application areas for the technology.
|
The corrections to finite-size scaling in the critical two-point correlation
function G(r) of the 2D Ising model on a square lattice have been studied
numerically by means of exact transfer-matrix algorithms. The systems of square
geometry with periodic boundaries oriented either along <10> or along <11>
direction have been considered, including up to 800 spins. The calculation of
G(r) at a distance r equal to the half of the system size L shows the existence
of an amplitude correction proportional to 1/L^2. A nontrivial correction
proportional to 1/L^0.25 of a very small magnitude has also been detected in
agreement with predictions of our recently developed GFD (grouping of Feynman
diagrams) theory. A refined analysis of the recent MC data for 3D Ising, phi^4,
and XY lattice models has been performed. It includes an analysis of the
partition function zeros of 3D Ising model, an estimation of the
correction-to-scaling exponent omega from the Binder cumulant data near
criticality, as well as a study of the effective critical exponent eta and the
effective amplitudes in the asymptotic expansion of susceptibility at the
critical point. In all cases a refined analysis is consistent with our (GFD)
asymptotic values of the critical exponents (nu=2/3, omega=1/2, eta=1/8 for 3D
Ising model and omega=5/9 for 3D XY model), while the actually accepted
"conventional" exponents are, in fact, effective exponents which are valid for
approximation of the finite--size scaling behavior of not too large systems.
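
A fit of the stated correction form to finite-size data can be set up along the following lines; the data are synthetic, and apart from the leading L^(-1/4) decay (the exact 2D Ising result) and the 1/L^2 and 1/L^0.25 corrections quoted in the text, everything in the sketch is an assumption.

```python
# Sketch of fitting G(L/2) with the two correction terms mentioned above.
# Synthetic data; in the actual study G(L/2) comes from exact transfer-matrix
# calculations.
import numpy as np
from scipy.optimize import curve_fit

def g_half(L, a, b, c):
    # Leading 2D Ising decay ~ L^(-1/4) with 1/L^2 and 1/L^0.25 amplitude corrections.
    return a * L ** (-0.25) * (1.0 + b / L ** 2 + c / L ** 0.25)

L = np.array([8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256], dtype=float)
rng = np.random.default_rng(0)
truth = g_half(L, 0.70, -0.30, 0.01)                     # illustrative amplitudes
data = truth * (1.0 + 1e-4 * rng.normal(size=L.size))    # tiny numerical noise

params, cov = curve_fit(g_half, L, data, p0=[0.7, 0.0, 0.0])
errs = np.sqrt(np.diag(cov))
for name, p, e in zip("abc", params, errs):
    print(f"{name} = {p:.4f} +/- {e:.4f}")
```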
|
Modeling and analysis of timing constraints is crucial in automotive systems.
EAST-ADL is a domain specific architectural language dedicated to
safety-critical automotive embedded system design. In most cases, a bounded
number of violations of timing constraints in systems would not lead to system
failures when the results of the violations are negligible, called Weakly-Hard
(WH). We have previously specified EAST-ADL timing constraints in Clock
Constraint Specification Language (CCSL) and transformed timed behaviors in
CCSL into formal models amenable to model checking. Previous work is extended
in this paper by including support for probabilistic analysis of timing
constraints in the context of WH: Probabilistic extension of CCSL, called
PrCCSL, is defined and the EAST-ADL timing constraints with stochastic
properties are specified in PrCCSL. The semantics of the extended constraints
in PrCCSL is translated into Proof Objective Models that can be verified using
SIMULINK DESIGN VERIFIER. Furthermore, a set of mapping rules is proposed to
facilitate the guarantee of correctness of the translation. Our approach is demonstrated on an
autonomous traffic sign recognition vehicle case study.
|
We present additional details of the scanning tunneling microscopy (STM)
spectra predicted by the model of pinning of dynamic spin collective modes in
d-wave superconductor proposed by Polkovnikov et al. (cond-mat/0203176). Along
with modulations at twice the wavevector of the spin collective mode, the
local density of states (LDOS) displays features linked to the spectrum of the
Bogoliubov quasiparticles. The former is expected to depend more strongly on an
applied magnetic field or the doping concentration. The spin collective mode
and the quasiparticles are distinct, co-existing, low energy excitations of the
d-wave superconductor (strongly coupled only in some sectors of the Brillouin
zone), and should not be viewed as mutually exclusive sources of LDOS
modulation.
|
Recently, phase processing is attracting increasing interest in the speech
enhancement community. Some researchers integrate a phase estimation module into
speech enhancement models by using complex-valued short-time Fourier transform
(STFT) spectrogram based training targets, e.g. the Complex Ratio Mask
(cRM) [1]. However, masking on the spectrogram would violate its consistency
constraints. In this work, we prove that the inconsistency problem enlarges the
solution space of the speech enhancement model and causes unintended artifacts.
Consistency Spectrogram Masking (CSM) is proposed to estimate the complex
spectrogram of a signal with the consistency constraint in a simple but
not trivial way. The experiments comparing our CSM based end-to-end model with
other methods are conducted to confirm that the CSM accelerates the model
training and yields significant improvements in speech quality. From our
experimental results, we are assured that our method can enhance the speech quality.
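
The consistency constraint itself can be illustrated independently of any model: an arbitrarily masked complex spectrogram is generally not the STFT of any time-domain signal, and one projection onto the consistent set is iSTFT followed by STFT. The generic scipy sketch below makes this concrete; it is not the authors' CSM implementation.

```python
# Generic illustration of spectrogram (in)consistency, not the CSM model: a
# masked complex spectrogram is usually inconsistent, and iSTFT followed by
# STFT projects it onto the set of consistent spectrograms.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
x = rng.normal(size=fs)                       # 1 s of noise standing in for speech

_, _, S = stft(x, fs=fs, nperseg=512)         # complex STFT
mask = rng.uniform(0.0, 1.0, size=S.shape)    # an arbitrary (e.g. estimated) mask
S_masked = mask * S                           # masked spectrogram: generally inconsistent

_, x_hat = istft(S_masked, fs=fs, nperseg=512)
x_hat = x_hat[: x.size]                       # keep the original length before re-analysis
_, _, S_consistent = stft(x_hat, fs=fs, nperseg=512)

# The gap between the masked spectrogram and its consistent projection is the
# inconsistency that CSM-style training targets aim to avoid.
err = np.linalg.norm(S_consistent - S_masked) / np.linalg.norm(S_masked)
print(f"relative inconsistency of the masked spectrogram: {err:.3f}")
```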
|
A peculiar aspect of Cherenkov telescopes is that they are designed to detect
atmospheric light flashes on the time scale of nanoseconds, being almost blind
to stellar sources. As a consequence, the pointing calibration of these
instruments cannot be done in general exploiting the standard astrometry of the
focal plane. In this paper we validate a procedure to overcome this problem for
the case of the innovative ASTRI telescope, developed by INAF, exploiting sky
images produced as an ancillary output by its novel Cherenkov camera. In fact,
this instrument implements a statistical technique called "Variance method"
(VAR) that has the potential to image the star field (angular resolution $\sim
11'$). We demonstrate here that VAR images can be exploited to assess the
alignment of the Cherenkov camera with the optical axis of the telescope down
to $\sim 1''$. To this end, we evaluate the position of the stars with
sub-pixel precision thanks to a deep investigation of the convolution between
the point spread function and the pixel distribution of the camera, resulting
in a transformation matrix that we validated with simulations. After that, we
considered the rotation of the field of view during long observing runs,
obtaining light arcs that we exploited to investigate the alignment of the
Cherenkov camera with high precision, in a procedure that we have already
tested on real data. The strategy we have adopted, inherited from optical
astronomy, has never been performed on Variance images from a Cherenkov
telescope until now, and it can be crucial to optimize the scientific accuracy
of the incoming MiniArray of ASTRI telescopes.
|
Second derivatives of mathematical models for real-world phenomena are
fundamental ingredients of a wide range of numerical simulation methods
including parameter sensitivity analysis, uncertainty quantification, nonlinear
optimization and model calibration. The evaluation of such Hessians often
dominates the overall computational effort. The combinatorial {\sc Hessian
Accumulation} problem aiming to minimize the number of floating-point
operations required for the computation of a Hessian turns out to be
NP-complete. We propose a dynamic programming formulation for the solution of
{\sc Hessian Accumulation} over a sub-search space. This approach yields
improvements by factors of ten and higher over the state of the art based on
second-order tangent and adjoint algorithmic differentiation.
|
We present multiepoch spectropolarimetry of the superluminous interacting
Type IIn supernova SN2017hcc, covering 16 to 391 days after explosion. In our
first epoch we measure continuum polarization as high as 6%, making SN 2017hcc
the most intrinsically polarized SN ever reported. During the first 29 days of
coverage, when the polarization is strongest, the continuum polarization has a
wavelength dependence that rises toward blue wavelengths, and becomes
wavelength independent by day 45. The polarization strength drops rapidly
during the first month, even as the SN flux is still climbing to peak
brightness. Nonetheless, record-high polarization is maintained until day 68,
at which point the source polarization declines to 1.9%, comparable to peak
levels in previous well-studied SNe IIn. Thereafter the SN continues in
polarization decline, while exhibiting only minor changes in position angle on
the sky. The blue slope of the polarized continuum during the first month,
accompanied by short-lived polarized flux for Balmer emission, suggests that an
aspherical distribution of dust grains in pre-shock circumstellar material
(CSM) is echoing the SN IIn spectrum and strongly influencing the polarization,
while the subsequent decline during the wavelength-independent phase appears
broadly consistent with electron scattering near the SN/CSM interface. The
persistence of the polarization position angle between these two phases
suggests that the pre-existing CSM responsible for the dust scattering at early
times is part of the same geometric structure as the electron-scattering region
that dominates the polarization at later times. SN2017hcc appears to be yet
another, but much more extreme, case of aspherical yet well-ordered CSM in Type
IIn SNe, possibly resulting from pre-SN mass loss shaped by a binary progenitor
system.
|
Background: Egocentric video has recently emerged as a potential solution for
monitoring hand function in individuals living with tetraplegia in the
community, especially for its ability to detect functional use in the home
environment. Objective: To develop and validate a wearable vision-based system
for measuring hand use in the home among individuals living with tetraplegia.
Methods: Several deep learning algorithms for detecting functional hand-object
interactions were developed and compared. The most accurate algorithm was used
to extract measures of hand function from 65 hours of unscripted video recorded
at home by 20 participants with tetraplegia. These measures were: the
percentage of interaction time over total recording time (Perc); the average
duration of individual interactions (Dur); the number of interactions per hour
(Num). To demonstrate the clinical validity of the technology, egocentric
measures were correlated with validated clinical assessments of hand function
and independence (Graded Redefined Assessment of Strength, Sensibility and
Prehension - GRASSP, Upper Extremity Motor Score - UEMS, and Spinal Cord
Independent Measure - SCIM). Results: Hand-object interactions were
automatically detected with a median F1-score of 0.80 (0.67-0.87). Our results
demonstrated that higher UEMS and better prehension were related to greater
time spent interacting, whereas higher SCIM and better hand sensation were
associated with a higher number of interactions performed during the egocentric
video recordings. Conclusions: For the first time, measures of hand function
automatically estimated in an unconstrained environment in individuals with
tetraplegia have been validated against internationally accepted measures of
hand function. Future work will necessitate a formal evaluation of the
reliability and responsiveness of the egocentric-based performance measures for
hand use.
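As an illustration of how such usage measures could be derived from frame-level detections, here is a minimal Python sketch that computes Perc, Dur and Num from a per-frame interaction flag; the frame rate, variable names and bout definition are assumptions, not the validated pipeline.

import numpy as np

def hand_use_metrics(interaction, fps=30.0):
    # interaction: boolean array with one entry per video frame,
    # True when a functional hand-object interaction is detected
    interaction = np.asarray(interaction, dtype=int)
    total_seconds = len(interaction) / fps
    total_hours = total_seconds / 3600.0
    padded = np.concatenate(([0], interaction, [0]))
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    durations_s = (ends - starts) / fps
    perc = 100.0 * durations_s.sum() / total_seconds                # Perc (%)
    dur = durations_s.mean() if durations_s.size else 0.0           # Dur (s)
    num = durations_s.size / total_hours if total_hours else 0.0    # Num (1/h)
    return perc, dur, num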
|
We apply the free product construction to various local algebras in algebraic
quantum field theory.
If we take the free product of infinitely many identical half-sided modular
inclusions with ergodic canonical endomorphism, we obtain a half-sided modular
inclusion with ergodic canonical endomorphism and trivial relative commutant.
On the other hand, if we take M\"obius covariant nets with trace class
property, we are able to construct an inclusion of free product von Neumann
algebras with large relative commutant, by considering either a finite family
of identical inclusions or an infinite family of inequivalent inclusions. In
two dimensional spacetime, we construct Borchers triples with trivial relative
commutant by taking free products of infinitely many, identical Borchers
triples. Free products of finitely many Borchers triples are possibly
associated with a Haag-Kastler net whose S-matrix is nontrivial and not
asymptotically complete, yet the nontriviality of the double cone algebras
remains open.
|
This research aims at developing path and motion planning algorithms for a
tethered Unmanned Aerial Vehicle (UAV) to visually assist a teleoperated
primary robot in unstructured or confined environments. The emerging state of
the practice for nuclear operations, bomb squad, disaster robots, and other
domains with novel tasks or highly occluded environments is to use two robots,
a primary and a secondary that acts as a visual assistant to overcome the
perceptual limitations of the sensors by providing an external viewpoint.
However, the benefits of using an assistant have been limited for at least
three reasons: (1) users tend to choose suboptimal viewpoints, (2) only ground
robot assistants are considered, ignoring the rapid evolution of small unmanned
aerial systems for indoor flying, (3) introducing a whole crew for the second
teleoperated robot is not cost effective, may introduce further teamwork
demands, and therefore could lead to miscommunication. This dissertation
proposes to use an autonomous tethered aerial visual assistant to replace the
secondary robot and its operating crew. Along with a pre-established theory of
viewpoint quality based on affordances, this dissertation aims at defining and
representing robot motion risk in unstructured or confined environments. Based
on those theories, a novel high level path planning algorithm is developed to
enable risk-aware planning, which balances the tradeoff between viewpoint
quality and motion risk in order to provide safe and trustworthy visual
assistance flight. The planned flight trajectory is then realized on a tethered
UAV platform. The perception and actuation are tailored to fit the tethered
agent in the form of a low level motion suite, including a novel tether-based
localization model with negligible computational overhead, motion primitives
for the tethered airframe based on position and velocity control, and two
different
|
Any seismogenic area in the lithosphere is considered as an open physical
system. Following the energy balance analysis presented earlier (Part I,
Thanassoulas, 2008), the specific case in which the seismogenic area is under
normal seismogenic conditions (input energy equals released energy) is studied.
In this case the cumulative seismic energy release is a linear time function.
Starting from this linear function a method is postulated for the determination
of the maximum expected magnitude of a future earthquake. The proposed method
has been tested "a posteriori" on real EQs from the Greek territory, USA and
data obtained from the seismological literature. The obtained results validate
the methodology while an analysis is presented that justifies the obtained high
degree of accuracy compared to the corresponding calculated EQ magnitudes with
seismological methods.
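For concreteness, a hedged sketch of the kind of relation involved, assuming the standard Gutenberg-Richter energy-magnitude scaling (which may differ in detail from the author's formulation): if the cumulative released energy follows a linear trend $E_{\mathrm{cum}}(t) \approx a + bt$, then the energy deficit $\Delta E$ accumulated since the last large event bounds the magnitude of a future earthquake through
$$\log_{10} \Delta E \simeq 1.5\,M_{\max} + 4.8 \quad (\Delta E \text{ in joules}), \qquad M_{\max} \simeq \frac{\log_{10}\Delta E - 4.8}{1.5}.$$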
|
$M$-theory is believed to be described in various dimensions by large $N$
field theories. It has been further conjectured that at finite $N$, these
theories describe the discrete light cone quantization (DLCQ) of $M$-theory.
Even at low energies, this is not necessarily the same thing as the DLCQ of
supergravity. It is believed that this is only the case for quantities which
are protected by non-renormalization theorems. In 0+1 and 1+1 dimensions, we
provide further evidence of a non-renormalization theorem for the $v^4$ terms,
but also give evidence that there are no such theorems at order $v^8$ and
higher.
|
In this white paper, we present an experimental road map for spectroscopic
experiments beyond DESI. DESI will be a transformative cosmological survey in
the 2020s, mapping 40 million galaxies and quasars and capturing a significant
fraction of the available linear modes up to z=1.2. DESI-II will pilot
observations of galaxies both at much higher densities and extending to higher
redshifts. A Stage-5 experiment would build out those high-density and
high-redshift observations, mapping hundreds of millions of stars and galaxies
in three dimensions, to address the problems of inflation, dark energy, light
relativistic species, and dark matter. These spectroscopic data will also
complement the next generation of weak lensing, line intensity mapping and CMB
experiments and allow them to reach their full potential.
|
The objective of this introduction to Colombeau algebras of
generalized-functions (in which distributions can be freely multiplied) is to
explain in elementary terms the essential concepts necessary for their
application to basic non-linear problems in classical physics.
Examples are given in hydrodynamics and electrodynamics. The problem of the
self-energy of a point electric charge is worked out in detail: The Coulomb
potential and field are defined as Colombeau generalized-functions, and
integrals of nonlinear expressions corresponding to products of distributions
(such as the square of the Coulomb field and the square of the delta-function)
are calculated.
Finally, the methods introduced in Eur. J. Phys. 28 (2007) 267-275, 1021-1042,
and 1241 to deal with point-like singularities in classical electrodynamics are
confirmed.
|
Effective string field equations with zero-constrained torsion have been
studied extensively in the literature. However, one may think that the effects
of the vanishing of the non-metricity have not been explained in detail;
moreover, according to some recent literature [4],[5], the action density in my
previous paper [3] is not complete. For these reasons, in this erratum, taking
into account the effects of the vanishing of the non-metricity on the field
equations, the action density of my previous paper [3] will be completed in a
variational setting. Furthermore, up to now, an unambiguous derivation of the dilaton
potential has not been given. If one thinks that vanishing of the non-metricity
gives a spontaneous breakdown of local Weyl invariance then a dilaton potential
is obtained unambiguously in this framework.
|
We present examples of noncommutative four-spheres that are base spaces of
$SU(2)$-principal bundles with noncommutative seven-spheres as total spaces.
The noncommutative coordinate algebras of the four-spheres are generated by the
entries of a projection which is invariant under the action of $SU(2)$. We give
conditions for all components of the Connes--Chern character of the projection
to vanish except the second (the top) one. The latter is then a nonzero
Hochschild cycle that plays the role of the volume form for the noncommutative
four-spheres.
|
Compact silicon integrated devices, such as micro-ring resonators, have
recently been demonstrated as efficient sources of quantum correlated photon
pairs. The mass production of integrated devices demands the implementation of
fast and reliable techniques to monitor device performance. In the case of
time-energy correlations, this is particularly challenging, as it requires high
spectral resolution that is not currently achievable in coincidence
measurements. Here we reconstruct the joint spectral density of photon pairs
generated by spontaneous four-wave mixing in a silicon ring resonator by
studying the corresponding stimulated process, namely stimulated four-wave
mixing. We show that this approach, featuring high spectral resolution and
short measurement times, allows one to discriminate between nearly-uncorrelated
and highly-correlated photon pairs.
|
The one-dimensional harmonic oscillator wave functions are solutions to a
Sturm-Liouville problem posed on the whole real line. This problem generates
the Hermite polynomials. However, no other set of orthogonal polynomials can be
obtained from a Sturm-Liouville problem on the whole real line. In this paper
we show how to characterize an arbitrary set of polynomials orthogonal on
$(-\infty,\infty)$ in terms of a system of integro-differential equations of
Hartree-Fock type. This system replaces and generalizes the linear differential
equation associated with a Sturm-Liouville problem. We demonstrate our results
for the special case of Hahn-Meixner polynomials.
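For reference, the classical facts alluded to above can be summarized as follows (standard results, stated here only for orientation): the Hermite polynomials $H_n$ arise from the singular Sturm-Liouville problem on the whole real line
$$ y'' - 2x\,y' + 2n\,y = 0, \qquad \int_{-\infty}^{\infty} H_m(x)\,H_n(x)\,e^{-x^2}\,dx = 2^n\, n!\,\sqrt{\pi}\,\delta_{mn}, $$
and the harmonic oscillator eigenfunctions are $\psi_n(x) \propto H_n(x)\,e^{-x^2/2}$.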
|
In this paper we have applied the generalized Kerr-Schild transformation
finding a new family of stationary perfect-fluid solutions of the Einstein
field equations. The procedure used combines some well-known techniques of null
and timelike vector fields, from which some properties of the solutions are
studied in a coordinate-free way. These spacetimes are algebraically special,
their Petrov types being II and D. This family includes all the classical
vacuum Kerr-Schild spacetimes, except the plane-fronted gravitational waves, as
well as some other interesting solutions such as, for instance, the Kerr metric
in the background of the Einstein Universe. However, the family is much more general
and depends on an arbitrary function of one variable.
|
We study the jamming transition in a model of elastic particles under shear
at zero temperature. The key quantity is the relaxation time $\tau$ which is
obtained by stopping the shearing and letting energy and pressure decay to
zero. At many different densities and initial shear rates we do several such
relaxations to determine the average $\tau$. We establish that $\tau$ diverges
with the same exponent as the viscosity and determine another exponent from the
relation between $\tau$ and the coordination number. Though most of the
simulations are done for the model with dissipation due to the motion of
particles relative to an affinely shearing substrate (the RD$_0$ model), we
also examine the CD$_0$ model, where the dissipation is instead due to velocity
differences of disks in contact, and confirm that the above-mentioned exponent
is the same for these two models. We also consider finite size effects on both
$\tau$ and the coordination number.
|
It is shown that for deep neural networks, a single wide layer of width $N+1$
($N$ being the number of training samples) suffices to prove the connectivity
of sublevel sets of the training loss function. In the two-layer setting, the
same property may not hold even if one has just one neuron less (i.e. width $N$
can lead to disconnected sublevel sets).
|
Age estimation is a difficult task which requires the automatic detection and
interpretation of facial features. Recently, Convolutional Neural Networks
(CNNs) have made remarkable improvement on learning age patterns from benchmark
datasets. However, for a face "in the wild" (from a video frame or Internet),
the existing algorithms are not as accurate as for a frontal and neutral face.
In addition, with the increasing number of in-the-wild aging data, the
computation speed of existing deep learning platforms becomes another crucial
issue. In this paper, we propose a highly efficient age estimation system with
joint optimization of the age estimation algorithm and the deep learning
system. Integrated with the city surveillance network, this system can provide
age group analysis for intelligent demographics. First, we build a three-tier fog
computing architecture including an edge, a fog and a cloud layer, which
directly processes age estimation from raw videos. Second, we optimize the age
estimation algorithm based on CNNs with label distribution and K-L divergence
distance embedded in the fog layer and evaluate the model on the latest wild
aging dataset. Experimental results demonstrate that: 1. our system collects
demographic data dynamically at a distance without contact, and performs city
population analysis automatically; and 2. the age model training has been sped
up without losing training progress or model quality. To the best of our
knowledge, this is the first intelligent demographics system which has
potential applications in improving the efficiency of smart cities and urban
living.
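To illustrate the label-distribution idea with a KL-divergence distance mentioned above, here is a minimal Python/NumPy sketch; the Gaussian label width, the age range and all names are illustrative assumptions rather than the exact configuration used in the paper.

import numpy as np

def label_distribution(true_age, ages=np.arange(0, 101), sigma=2.0):
    # encode a scalar age as a Gaussian distribution over discrete age bins
    p = np.exp(-0.5 * ((ages - true_age) / sigma) ** 2)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between the target label distribution p and the predicted
    # (softmax) distribution q over the same age bins
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

# example: a prediction peaked near the true age gives a small loss
p = label_distribution(30)
q = label_distribution(31)
print(kl_divergence(p, q))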
|
This paper proposes the use of the binary primes sequence to strengthen
pseudorandom (PN) decimal sequences for cryptography applications. The binary
primes sequence is added to the PN decimal sequence (where one can choose from
many arbitrary shift values) and it is shown that the sum sequence has improved
autocorrelation properties besides being computationally hard. Also, an
analysis of the computational complexity is performed, and it is shown that the
complexity for the eavesdropper is exponential; therefore, the proposed method
is an attractive procedure for cryptographic applications.
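A minimal Python sketch of the general construction, for illustration only: the exact combination rule and the choice of PN decimal sequence (here the decimal expansion of 1/q with digit-wise addition modulo 10) are assumptions, not necessarily the scheme analyzed in the paper.

def decimal_pn(q, length):
    # digits of the decimal expansion of 1/q (a classical PN decimal sequence)
    digits, r = [], 1
    for _ in range(length):
        r *= 10
        digits.append(r // q)
        r %= q
    return digits

def binary_primes(length):
    # indicator sequence: 1 at prime indices (starting from index 1), else 0
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return [1 if is_prime(i) else 0 for i in range(1, length + 1)]

def combined_sequence(q, length, shift=0):
    # add a shifted binary primes sequence to the PN decimal sequence, mod 10
    pn = decimal_pn(q, length)
    bp = binary_primes(length + shift)[shift:]
    return [(a + b) % 10 for a, b in zip(pn, bp)]

print(combined_sequence(7, 20, shift=3))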
|
An exact canonical master equation of the Lindblad form is derived for a
central spin interacting uniformly with a sea of completely unpolarized spins.
The Kraus operators for the dynamical map are also derived. The
non-Markovianity of the dynamics in terms of the divisibility breaking of the
dynamical map and increase of the trace distance fidelity between quantum
states is shown. Moreover, it is observed that the irreversible entropy
production rate is always negative (for a fixed initial state) whenever the
dynamics exhibits non-Markovian behavior. In continuation with the study of
witnessing non-Markovianity, it is shown that the positive rate of change of
the purity of the central qubit is a faithful indicator of the non-Markovian
information back flow. Given the experimental feasibility of measuring the
purity of a quantum state, a possibility of experimental demonstration of
non-Markovianity and the negative irreversible entropy production rate is
addressed. This gives the present work considerable practical importance for
detecting the non-Markovianity and the negative irreversible entropy production
rate.
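For orientation, the general structures referred to above are (standard forms, not the specific operators derived in the paper): a master equation of the Lindblad form and a Kraus decomposition of the dynamical map,
$$\dot\rho = -\frac{i}{\hbar}\,[H,\rho] + \sum_k \gamma_k \Big( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k, \rho \} \Big), \qquad \rho \mapsto \sum_m K_m \rho K_m^\dagger, \quad \sum_m K_m^\dagger K_m = \mathbb{1}.$$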
|
Subject-verb agreement in the presence of an attractor noun located between
the main noun and the verb elicits complex behavior: judgments of
grammaticality are modulated by the grammatical features of the attractor. For
example, in the sentence "The girl near the boys likes climbing", the attractor
(boys) disagrees in grammatical number with the verb (likes), creating a
locally implausible transition probability. Here, we parametrically modulate
the distance between the attractor and the verb while keeping the length of the
sentence equal. We evaluate the performance of both humans and two artificial
neural network models: both make more mistakes when the attractor is closer to
the verb, but neural networks get close to the chance level while humans are
mostly able to overcome the attractor interference. Additionally, we report a
linear effect of attractor distance on reaction times. We hypothesize that a
possible reason for the proximity effect is the calculation of transition
probabilities between adjacent words. Nevertheless, classical models of
attraction such as the cue-based model might suffice to explain this
phenomenon, thus paving the way for new research. Data and analyses available
at https://osf.io/d4g6k
|
Let $G$ be a graph, and $H\colon V(G)\to 2^\mathbb{N}$ a set function
associated with $G$. A spanning subgraph $F$ of $G$ is called an $H$-factor if
the degree of any vertex $v$ in $F$ belongs to the set $H(v)$. This paper
contains two results on the existence of $H$-factors in regular graphs. First,
we construct an $r$-regular graph without some given $H^*$-factor. In
particular, this gives a negative answer to a problem recently posed by Akbari
and Kano. Second, by using Lov\'asz's characterization theorem on the existence
of $(g, f)$-factors, we find a sharp condition for the existence of general
$H$-factors in $\{r, r+1\}$-graphs, in terms of the maximum and minimum of $H$.
The result reduces to Thomassen's theorem for the case that $H(v)$ consists of
the same two consecutive integers for all vertices $v$, and to Tutte's theorem
if the graph is regular in addition.
|
Upon investigating whether the variation of the antineutron-nucleus
annihilation cross-sections at very low energies satisfies Bethe-Landau's power
law of $\sigma_{\rm ann} (p) \propto 1/p^{\alpha}$ behavior as a function of
the antineutron momentum $p$, we uncover unexpected regular oscillatory
structures in the low antineutron energy region from 0.001 to 10 MeV, with
small amplitudes and narrow periodicity in the logarithm of the antineutron
energies, for large-$A$ nuclei such as Pb and Ag. Subsequent semiclassical
analyses of the $S$ matrices reveal that these oscillations are pocket
resonances that arise from quasi-bound states inside the pocket and the
interference between the waves reflecting inside the optical potential pockets
with those from beyond the potential barriers, implicit in the nuclear Ramsauer
effect. They are the continuation of bound states in the continuum.
Experimental observations of these pocket resonances will provide vital
information on the properties of the optical model potentials and the nature of
the antineutron annihilation process.
|
GRB 221009A has been referred to as the Brightest Of All Time (the BOAT). We
investigate the veracity of this statement by comparing it with a half century
of prompt gamma-ray burst observations. This burst is the brightest ever
detected by the measures of peak flux and fluence. Unexpectedly, GRB 221009A
has the highest isotropic-equivalent total energy ever identified, while the
peak luminosity is at the $\sim99$th percentile of the known distribution. We
explore how such a burst can be powered and discuss potential implications for
ultra-long and high-redshift gamma-ray bursts. By geometric extrapolation of
the total fluence and peak flux distributions, GRB 221009A appears to be a
once-in-10,000-year event. Thus, while it is almost certainly not the BOAT over
all of cosmic history, it may be the brightest gamma-ray burst since human
civilization began.
|
In this letter we report on the computation of instanton-dominated
correlation functions in supersymmetric YM theories on ALE spaces. Following
the approach of Kronheimer and Nakajima, we explicitly construct the self-dual
connection on ALE spaces necessary to perform such computations. We restrict
our attention to the simplest case of an $SU(2)$ connection with lowest Chern
class on the Eguchi-Hanson gravitational background.
|
We developed an apparatus to couple a 50-micrometer diameter
whispering-gallery silica microtoroidal resonator in a helium-4 cryostat using
a straight optical tapered fiber at 1550 nm wavelength. On a top-loading probe
specifically adapted for increased mechanical stability, we use a purpose-built
"cryotaper" to optically probe the cavity, thus allowing us to record the
calibrated mechanical spectrum of the optomechanical system at low
temperatures. We then demonstrate excellent thermalization of a 63-MHz
mechanical mode of a toroidal resonator down to the cryostat's base temperature
of 1.65 K, thereby proving the viability of the cryogenic refrigeration via heat
conduction through static low-pressure exchange gas. In the context of
optomechanics, we therefore provide a versatile and powerful tool with
state-of-the-art performances in optical coupling efficiency, mechanical
stability and cryogenic cooling.
|
We study the impact of neutrino masses on the shape and height of the BAO
peak of the matter correlation function, both in real and redshift space. In
order to describe the nonlinear evolution of the BAO peak we run N-body
simulations and compare them with simple analytic formulae. We show that the
evolution with redshift of the correlation function and its dependence on the
neutrino masses is well reproduced in a simplified version of the Zel'dovich
approximation, in which the mode-coupling contribution to the power spectrum is
neglected. While in linear theory the BAO peak decreases for increasing
neutrino masses, the effect of nonlinear structure formation goes in the
opposite direction, since the peak broadening by large scale flows is less
effective. As a result of this combined effect, the peak decreases by $\sim 0.6
\%$ for $ \sum m_\nu = 0.15$ eV and increases by $\sim1.2 \%$ for $ \sum m_\nu
= 0.3$ eV, with respect to a massless neutrino cosmology with equal value of
the other cosmological parameters. We extend our analysis to redshift space and
to halos, and confirm the agreement between simulations and the analytic
formulae. We argue that all analytical approaches having the Zel'dovich
propagator in their lowest order approximation should give comparable
performance, irrespective of their formulation in Lagrangian or in Eulerian
space.
|
The diamond and zinc-blende semiconductors are well-known and have been
widely studied for decades. Yet, their electronic structure still surprises
with unexpected topological properties of the valence bands. In this joint
theoretical and experimental investigation we demonstrate for the benchmark
compounds InSb and GaAs that the electronic structure features topological
surface states below the Fermi energy. Our parity analysis shows that the
spin-orbit split-off band near the valence band maximum exhibits a strong
topologically non-trivial behavior characterized by the $\mathcal{Z}_2$
invariants $(1;000)$. The non-trivial character emerges instantaneously with
non-zero spin-orbit coupling, in contrast to the conventional topological phase
transition mechanism. \textit{Ab initio}-based tight-binding calculations
resolve topological surface states in the occupied electronic structure of InSb
and GaAs, further confirmed experimentally by soft X-ray angle-resolved
photoemission from both materials. Our findings are valid for all other
materials whose valence bands are adiabatically linked to those of InSb, i.e.,
many diamond and zinc-blende semiconductors, as well as other related
materials, such as half-Heusler compounds.
|
Here we show that, in the case when double-peaked emission lines originate
from the outer parts of an accretion disk, their variability could be caused by
perturbations in the disk emissivity. In order to test this hypothesis, we
introduced a model of a disk perturbing region in the form of a single bright
spot (or flare), obtained by modifying the power-law disk emissivity in an
appropriate way. The disk emission was then analyzed using numerical
simulations based on the ray-tracing method in the Kerr metric, and the
corresponding simulated line profiles were obtained. We applied this model to
the observed H-beta line profiles of 3C 390.3 (observed in the period
1995-1999), and estimated the parameters of both the accretion disk and the
perturbing region. Our results show that two large-amplitude outbursts of the
H-beta line observed in 3C 390.3 could be explained by successive occurrences
of two bright spots on the approaching side of the disk. These bright spots are
either moving, originating in the inner regions of the disk and spiralling
outwards over small distances during a period of several years, or stationary.
In both cases, their widths increase with time, indicating that they most
likely decay.
|
In this paper, the WKB approximation to the scattering problem is developed
without the divergences which usually appear at the classical turning points. A
detailed procedure of complexification is shown to generate results identical
to the usual WKB prescription but without the cumbersome connection formulas.
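For context, the standard WKB form whose breakdown at turning points is at issue reads (a textbook expression, stated here only for orientation)
$$\psi(x) \approx \frac{C}{\sqrt{p(x)}} \exp\!\left( \pm \frac{i}{\hbar} \int^x p(x')\, dx' \right), \qquad p(x) = \sqrt{2m\,(E - V(x))},$$
which diverges where $p(x) \to 0$, i.e. at the classical turning points $E = V(x)$.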
|
We investigate the interaction of two oncoming shock waves in spherical
symmetry for an ideal barotropic fluid. Our research problem is how to
establish a local in time solution after the interaction point and determine
the state behind the shock waves. This problem is a double free boundary
problem, as the position of the shock waves in space time is unknown. Our work
is based on a number of previous studies, including Lisibach's, who studied the
case of plane symmetry. To solve this problem, we will use an iterative scheme
to establish a local in time solution.
|
Electronic properties of molecular conductors exhibiting antiferromagnetic
(AFM) spin order and charge order (CO) owing to electron correlation are
studied using first-principles density functional theory calculations. We
investigate two systems, a quasi-two-dimensional Mott insulator
$\beta^\prime$-(BEDT-TTF)$_{2}$ICl$_{2}$ with an AFM ground state, and several
members of quasi-one-dimensional (TMTTF)$_2$$X$ showing CO. The stabilities of
the AFM and CO states are compared between the use of a standard
exchange-correlation functional based on the generalized gradient approximation
and that of a range-separated hybrid functional; we find that the latter
describes these states better. For $\beta^\prime$-(BEDT-TTF)$_{2}$ICl$_{2}$,
the AFM order is much stabilized with a wider band gap. For (TMTTF)$_2$$X$,
only by using the hybrid functional is the AFM insulating state realized and
are the CO states coexisting with AFM order stable under structural
optimization; their stability among different $X$ shows a tendency consistent
with experiments.
|
The connection between the evolution of an arbitrary configuration and the
evolution of its parts in the first generation is established. The equivalence
of Conway's evolution rules to the first-generation evolution laws of the
elementary configurations (containing one, two, three, and four pieces) has
been proved.
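For reference, Conway's evolution rules themselves can be stated compactly; the following minimal Python sketch advances an arbitrary configuration by one generation (the representation as a set of live-cell coordinates is an illustrative choice).

from collections import Counter

def step(live_cells):
    # live_cells: set of (x, y) coordinates of live cells
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # birth on exactly 3 live neighbours, survival on 2 or 3
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# example: a "blinker" oscillates with period 2
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)   # True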
|
The fractional Laplacian is an operator appearing in several evolution models
where diffusion coming from a L\'evy process is present, but also in the
analysis of fluid interfaces. We provide an extension of a pointwise inequality
that plays a r\^ole in their study. We begin by recalling two scenarios where
it has been used. After stating the results, for fractional Laplace-Beltrami
and Dirichlet-Neumann operators, we provide a sketch of their proofs,
unravelling the principle underlying such inequalities.
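The prototype of the pointwise inequalities in question is, in one standard form and under suitable smoothness and decay assumptions on $f$, the C\'ordoba--C\'ordoba inequality (stated here only for orientation): for $0 \le s \le 2$,
$$ 2 f(x)\, \Lambda^{s} f(x) \;\ge\; \Lambda^{s}\big(f^{2}\big)(x), \qquad \Lambda = (-\Delta)^{1/2},$$
and, more generally, $\varphi'(f)\,\Lambda^{s} f \ge \Lambda^{s}\varphi(f)$ for convex $\varphi$.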
|
Time-Sensitive Networking (TSN) is an emerging real-time Ethernet technology
that provides deterministic communication for time-critical traffic. At its
core, TSN relies on Time-Aware Shaper (TAS) for pre-allocating frames in
specific time intervals and Per-Stream Filtering and Policing (PSFP) for
mitigating the fatal disturbance of unavoidable frame drift. However, as first
identified in this work, PSFP incurs heavy memory consumption during policing,
hindering normal switching functionalities.
This work proposes a lightweight policing design called FooDog, which could
facilitate sub-microsecond jitter with ultra-low memory consumption. FooDog
employs a period-wise and stream-wise structure to realize the memory-efficient
PSFP without loss of determinism. Results using commercial FPGAs in typical
aerospace scenarios show that FooDog could keep end-to-end time-sensitive
traffic jitter <150 nanoseconds in the presence of abnormal traffic, comparable
to typical TSN performance without anomalies. Meanwhile, it consumes merely
hundreds of kilobits of memory, reducing on-chip memory overhead by more than
90% compared with an unoptimized PSFP design.
|
Let $\Gamma$ be the fundamental group of a finite connected graph $\mathcal
G$. Let $\mathfrak M$ be an abelian group. A {\it distribution} on the boundary
$\partial\Delta$ of the universal covering tree $\Delta$ is an $\mathfrak
M$-valued measure defined on clopen sets. If $\mathfrak M$ has no
$\chi(\mathcal G)$-torsion then the group of $\Gamma$-invariant distributions
on $\partial\Delta$ is isomorphic to $H_1(\mathcal G,\mathfrak M)$.
|
The purpose of the paper is to study Yamabe solitons on three-dimensional
para-Sasakian, paracosymplectic and para-Kenmotsu manifolds. Mainly, we prove
the following. *If the semi-Riemannian metric of a three-dimensional
para-Sasakian manifold is a Yamabe soliton, then it is of constant scalar
curvature and the flow vector field V is Killing; in the next step, we prove
that either the manifold has constant curvature -1 and reduces to an Einstein
manifold, or V is an infinitesimal automorphism of the paracontact metric
structure on the manifold. *If the semi-Riemannian metric of a
three-dimensional paracosymplectic manifold is a Yamabe soliton, then it has
constant scalar curvature; furthermore, the manifold is either $\eta$-Einstein
or Ricci flat. *If the semi-Riemannian metric on a three-dimensional
para-Kenmotsu manifold is a Yamabe soliton, then the manifold is of constant
sectional curvature -1 and reduces to an Einstein manifold; furthermore, the
Yamabe soliton is expanding with $\lambda$=-6 and the vector field V is
Killing. Finally, we construct examples to illustrate the results obtained in
previous sections.
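For the reader's convenience, the Yamabe soliton equation referred to throughout is usually written as (a standard convention, which may differ in sign from the authors')
$$ \tfrac{1}{2}\,\mathcal{L}_{V}\, g = (r - \lambda)\, g,$$
where $r$ is the scalar curvature, $\mathcal{L}_V$ the Lie derivative along the soliton vector field $V$, and $\lambda$ a constant; the soliton is called expanding, steady or shrinking according as $\lambda<0$, $\lambda=0$ or $\lambda>0$.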
|
In a recent work (ECCC, TR18-171, 2018), we introduced models of testing
graph properties in which, in addition to answers to the usual graph-queries,
the tester obtains {\em random vertices drawn according to an arbitrary
distribution $D$}. Such a tester is required to distinguish between graphs that
have the property and graphs that are far from having the property, {\em where
the distance between graphs is defined based on the unknown vertex distribution
$D$}. These ("vertex-distribution free" (VDF)) models generalize the standard
models in which $D$ is postulated to be uniform on the vertex-set, and they
were studied both in the dense graph model and in the bounded-degree graph
model.
The focus of the aforementioned work was on testers, called {\sf strong},
whose query complexity depends only on the proximity parameter $\epsilon$.
Unfortunately, in the standard bounded-degree graph model, some natural
properties such as Bipartiteness do not have strong testers, and others (like
cycle-freeness) do not have strong testers of one-sided error (whereas
one-sided error was shown inherent to the VDF model). Hence, it was suggested
to study general (i.e., non-strong) testers of "sub-linear" complexity.
In this work, we pursue the foregoing suggestion, but do so in a model that
augments the model presented in the aforementioned work. Specifically, we
provide the tester with an evaluation oracle to the unknown distribution $D$,
in addition to samples of $D$ and oracle access to the tested graph. Our main
results are testers for Bipartiteness and cycle-freeness, in this augmented
model, having complexity that is almost-linear in the square root of the
"effective support size" of $D$.
|
The theory of incoherent nuclear resonant scattering of synchrotron radiation
accompanied by absorption or emission of phonons in a crystal lattice is
developed. The theory is based on Maxwell's equations and time-dependent
quantum mechanics under the condition of incoherent scattering of radiation by
various nuclei. The concept of coherence in scattering processes, the
properties of the synchrotron radiation, and the conditions of measurement are
discussed. We show that employing a monochromator with a narrow bandwidth
plays a decisive role.
|
In this paper we give some basic results on blocking sets of minimum size for
a finite chain geometry.
|
In [RS1], we defined q-analogues of alien derivations and stated their basic
properties. In this paper, we prove the density theorem and the freeness
theorem announced in loc. cit.
[RS1] Ramis J.-P. and Sauloy J., 2007. The q-analogue of the wild fundamental
group (I)
|
Perturbative NLO and NNLO QCD evolutions of parton distributions are studied,
in particular in the (very) small-x region, where they are in very good
agreement with all recent precision measurements of F_2^p(x,Q^2). These
predictions turn out to be also rather insensitive to the specific choice of
the factorization scheme (MS or DIS). A characteristic feature of perturbative
QCD evolutions is a positive curvature of F_2^p which increases as x decreases.
This perturbatively stable prediction provides a sensitive test of the range of
validity of perturbative QCD.
|
The manipulation of visible light is important in science and technology
research. Metasurfaces can enable flexible and effective regulation of the
phase, polarization, and propagation modes of an electromagnetic wave.
Metasurfaces have become a research hotspot in optics and electromagnetics, and
cross-polarization conversion is an important application for visible-light
manipulation using a metasurface. A metasurface composed of nano-antenna arrays
and bilayer plasma can reportedly convert the direction of linear polarized
light efficiently. However, realizing a cross-polarization-conversion
metasurface operating in short-wavelength visible light remains problematic. In
addition, previous metasurfaces prepared using the top-down etching method are
unsuitable for practical applications because of the harsh experimental
conditions required and the high cost of preparation. In the present work, we
propose a dendritic cell-cluster metasurface that achieves cross-polarization
conversion in transmission mode at wavelengths of 550, 570, 590 and 610 nm.
Preparation is
accomplished using a bottom-up electrochemical deposition method, which is easy
and low cost. The dendritic cell-cluster metasurface is an important step in
cross-polarization conversion research and has broad application prospects and
development potential.
|
Open-vocabulary Temporal Action Detection (Open-vocab TAD) is an advanced
video analysis approach that expands Closed-vocabulary Temporal Action
Detection (Closed-vocab TAD) capabilities. Closed-vocab TAD is typically
confined to localizing and classifying actions based on a predefined set of
categories. In contrast, Open-vocab TAD goes further and is not limited to
these predefined categories. This is particularly useful in real-world
scenarios where the variety of actions in videos can be vast and not always
predictable. The prevalent methods in Open-vocab TAD typically employ a 2-stage
approach, which involves generating action proposals and then identifying those
actions. However, errors made during the first stage can adversely affect the
subsequent action identification accuracy. Additionally, existing studies face
challenges in handling actions of different durations owing to the use of fixed
temporal processing methods. Therefore, we propose a 1-stage approach
consisting of two primary modules: Multi-scale Video Analysis (MVA) and
Video-Text Alignment (VTA). The MVA module captures actions at varying temporal
resolutions, overcoming the challenge of detecting actions with diverse
durations. The VTA module leverages the synergy between visual and textual
modalities to precisely align video segments with corresponding action labels,
a critical step for accurate action identification in Open-vocab scenarios.
Evaluations on the widely recognized datasets THUMOS14 and ActivityNet-1.3
showed that the proposed method achieved superior results compared to the other
methods in both Open-vocab and Closed-vocab settings. This serves as a strong
demonstration of the effectiveness of the proposed method in the TAD task.
|
We develop the tomographic representation of wavefunctions which are solutions
of the generalized nonlinear Schrodinger equation (NLSE) and show its
connection with the Weyl--Wigner map. In particular, this theory is applied to
the envelope solitons, where tomograms for envelope bright solitons of a wide
family of modified NLSE are presented and numerically evaluated.
|
Direct laser writing method is a promising technique for the large-scale and
cost-effective fabrication of periodic nanostructure arrays exciting hybrid
lattice plasmons. This type of electromagnetic mode manifests a narrow and deep
resonance peak with a high dispersion whose precise controllability is crucial
for practical applications in photonic devices. Here, the formation of
differently shaped gold nanostructures using the direct laser writing method on
Au layers of different thicknesses is presented. The resonance peak is
demonstrated to be highly dependent on the shape of the structures in the
array, thus its position in the spectra, as well as the quality, can be
additionally modulated by changing the morphology. The shape of the structure
and the resonance itself depend not only on the laser pulse energy but also on
the grating period. This overlapping effect occurring at distances smaller than
the diameter of the focused laser beam is studied in detail. By taking
advantage of the highly controllable plasmonic resonance, the fabricated
gratings open up new opportunities for applications in sensing.
|
Eilenberg's holonomy decomposition is useful to ascertain the structural
properties of automata. Using this method, Egri-Nagy and Nehaniv characterized
the absence of certain types of cycles in automata. In the direction of
studying the structure of automata with cycles, this work focuses on a special
class of semi-flower automata and establishes the holonomy decompositions of
certain circular semi-flower automata.
|