Molybdenum disulfide (MoS$_2$) is one of the most broadly utilized solid
lubricants with a wide range of applications, including but not limited to
those in the aerospace/space industry. Here we present a focused review of
solid lubrication with MoS$_2$ by highlighting its structure, synthesis,
applications and the fundamental mechanisms underlying its lubricative
properties, together with a discussion of their environmental and temperature
dependence. An effort is made to cover the main theoretical and experimental
studies that constitute milestones in our scientific understanding. The review
also includes an extensive overview of the structure and tribological
properties of doped MoS$_2$, followed by a discussion of potential future
research directions.
|
We prove that for any two quasi-Banach spaces $X$ and $Y$ and any $\alpha>0$
there exists a constant $\gamma_\alpha>0$ such that $$ \sup_{1\le k\le
n}k^{\alpha}e_k(T)\le \gamma_\alpha \sup_{1\le k\le n} k^\alpha c_k(T) $$ holds
for all linear and bounded operators $T:X\to Y$. Here $e_k(T)$ is the $k$-th
entropy number of $T$ and $c_k(T)$ is the $k$-th Gelfand number of $T$. For
Banach spaces $X$ and $Y$ this inequality is widely used and well-known as
Carl's inequality. For general quasi-Banach spaces it is a new result.
|
We investigate a particular phase transition between two different tunneling
regimes, direct and injection (Fowler-Nordheim), experimentally observed in the
current-voltage characteristics of the light receptor bacteriorhodopsin (bR).
Here, the sharp increase of the current above about 3 V is theoretically
interpreted as the cross-over between the direct and injection
sequential-tunneling regimes. Theory also predicts a very special behaviour for
the associated current fluctuations around steady state. We find the remarkable
result that in a large range of bias around the transition between the two
tunneling regimes, the probability density functions can be traced back to the
generalization of the Gumbel distribution. This non-Gaussian distribution is
the universal standard to describe fluctuations under extreme conditions.
|
Self-assembly of proteins into amyloid aggregates is an important biological
phenomenon associated with human diseases such as Alzheimer's disease. Amyloid
fibrils also have potential applications in nano-engineering of biomaterials.
The kinetics of amyloid assembly show an exponential growth phase preceded by a
lag phase, variable in duration as seen in bulk experiments and experiments
that mimic the small volumes of cells. Here, to investigate the origins and the
properties of the observed variability in the lag phase of amyloid assembly
currently not accounted for by deterministic nucleation dependent mechanisms,
we formulate a new stochastic minimal model that is capable of describing the
characteristics of amyloid growth curves despite its simplicity. We then solve
the stochastic differential equations of our model and give mathematical proof
of a central limit theorem for the sample growth trajectories of the nucleated
aggregation process. These results give an asymptotic description for our
simple model, from which closed form analytical results capable of describing
and predicting the variability of nucleated amyloid assembly were derived. We
also demonstrate the application of our results to inform experiments in a
conceptually friendly and clear fashion. Our model offers a new perspective and
paves the way for a new and efficient approach to extracting vital information
regarding the key initial events of amyloid formation.
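As an illustration of the kind of stochastic minimal model discussed above, the following Python sketch simulates nucleated aggregation (primary nucleation plus elongation) with a Gillespie algorithm. The rate constants, nucleus size, and monomer count are arbitrary placeholders rather than values from the paper, and the scheme is a generic nucleation-elongation model, not the authors' exact formulation.

```python
import numpy as np

def gillespie_aggregation(m0=5000, kn=1e-9, kp=5e-5, nc=2, t_max=200.0, rng=None):
    """Stochastic nucleation-elongation of amyloid fibrils (generic sketch).

    m0 : initial number of free monomers
    kn : primary nucleation rate constant
    kp : elongation rate constant (per monomer per fibril end)
    nc : nucleus size (monomers consumed per nucleation event)
    Returns event times and total aggregated mass; the lag phase varies run to run.
    """
    rng = rng or np.random.default_rng()
    m, fibrils, mass, t = m0, 0, 0, 0.0
    times, masses = [0.0], [0]
    while t < t_max and m > 0:
        r_nuc = kn * m**nc                 # primary nucleation propensity
        r_elo = 2 * kp * m * fibrils       # elongation at both fibril ends
        r_tot = r_nuc + r_elo
        if r_tot == 0:
            break
        t += rng.exponential(1.0 / r_tot)  # waiting time to next event
        if rng.random() < r_nuc / r_tot:   # nucleation: new fibril of size nc
            fibrils += 1
            m -= nc
            mass += nc
        else:                              # elongation: one monomer adds to a fibril
            m -= 1
            mass += 1
        times.append(t)
        masses.append(mass)
    return np.array(times), np.array(masses)

# Repeated runs expose the run-to-run variability of the lag phase:
for seed in range(3):
    t, M = gillespie_aggregation(rng=np.random.default_rng(seed))
    print(seed, "time to 50% of final aggregated mass:", t[np.searchsorted(M, M[-1] // 2)])
```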
|
Since its inception about two centuries ago, thermodynamics has sparked
continuous interest and fundamental questions. According to the second law, no
heat engine can have an efficiency larger than Carnot's efficiency. The latter
can be achieved by the Carnot engine, which however ideally operates in
infinite time, hence delivers null power. A currently open question is whether
the Carnot efficiency can be achieved at finite power. Most of the previous
works addressed this question within the Onsager matrix formalism of linear
response theory. Here we pursue a different route based on finite-size-scaling
theory. We focus on quantum Otto engines and show that when the working
substance is on the verge of a second-order phase transition, diverging energy
fluctuations can enable approaching the Carnot point without sacrificing power.
The rate of such approach is dictated by the critical indices, thus showing the
universal character of our analysis.
|
Vectored IR drop analysis is a critical step in chip signoff that checks the
power integrity of an on-chip power delivery network. Due to the prohibitive
runtimes of dynamic IR drop analysis, the large number of test patterns must be
whittled down to a small subset of worst-case IR vectors. Unlike traditional
slow heuristic methods that select a few vectors with incomplete
coverage, MAVIREC uses machine learning techniques -- 3D convolutions and
regression-like layers -- for accurately recommending a larger subset of test
patterns that exercise worst-case scenarios. In under 30 minutes, MAVIREC
profiles 100K-cycle vectors and provides better coverage than a
state-of-the-art industrial flow. Further, MAVIREC's IR drop predictor shows
a 10x speedup with under 4 mV RMSE relative to an industrial flow.
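A minimal sketch of the kind of architecture the abstract alludes to (3D convolutions followed by regression-like layers that map spatiotemporal power-grid features to a per-cell IR-drop estimate) is shown below in PyTorch. The channel counts, kernel sizes, and input layout are illustrative assumptions, not the MAVIREC network itself.

```python
import torch
import torch.nn as nn

class IRDropRegressor(nn.Module):
    """Toy 3D-conv + regression model: input is a (batch, features, time, H, W)
    tensor of power/parasitic maps over a short time window; output is a
    per-tile IR-drop map of shape (batch, H, W)."""
    def __init__(self, in_features=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_features, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, None, None)),   # collapse the time axis
        )
        self.regressor = nn.Sequential(              # "regression-like" 1x1 convolutions
            nn.Conv2d(32, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        z = self.encoder(x).squeeze(2)               # (batch, 32, H, W)
        return self.regressor(z).squeeze(1)          # (batch, H, W)

x = torch.randn(2, 4, 8, 64, 64)                     # two random power-map windows
print(IRDropRegressor()(x).shape)                    # torch.Size([2, 64, 64])
```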
|
The macroscopic hydrodynamic equations are derived for many-body systems in
the local-equilibrium approach, using the Schr\"odinger picture of quantum
mechanics. In this approach, statistical operators are defined in terms of
microscopic densities associated with the fundamentally conserved quantities
and other slow modes possibly emerging from continuous symmetry breaking, as
well as macrofields conjugated to these densities. Functional identities can be
deduced, allowing us to identify the reversible and dissipative parts of the
mean current densities, to obtain general equations for the time evolution of
the conjugate macrofields, and to establish the relationship to
projection-operator methods. The entropy production is shown to be nonnegative
by applying the Peierls-Bogoliubov inequality to a quantum integral fluctuation
theorem. Using the expansion in the gradients of the conjugate macrofields, the
transport coefficients are given by Green-Kubo formulas and the entropy
production rate can be expressed in terms of quantum Einstein-Helfand formulas,
implying its nonnegativity in agreement with the second law of thermodynamics.
The results apply to multicomponent fluids and can be extended to condensed
matter phases with broken continuous symmetries.
|
Many machine learning algorithms have been developed in recent years to
enhance the performance of a model in different aspects of artificial
intelligence. However, performance remains limited by inadequate data and resources.
Integrating knowledge into a machine learning model can help to overcome these
obstacles to a certain degree. Incorporating knowledge is nevertheless a complex
task, because knowledge can be represented in many different forms. In this paper, we
will give a brief overview of these different forms of knowledge integration
and their performance in certain machine learning tasks.
|
Adiabatic quantum computers can solve difficult optimization problems (e.g.,
the quadratic unconstrained binary optimization problem), and they seem well
suited to train machine learning models. In this paper, we describe an
adiabatic quantum approach for training support vector machines. We show that
the time complexity of our quantum approach is an order of magnitude better
than the classical approach. Next, we compare the test accuracy of our quantum
approach against a classical approach that uses the Scikit-learn library in
Python across five benchmark datasets (Iris, Wisconsin Breast Cancer (WBC),
Wine, Digits, and Lambeq). We show that our quantum approach obtains accuracies
on par with the classical approach. Finally, we perform a scalability study in
which we compute the total training times of the quantum approach and the
classical approach with an increasing number of features and data points
in the training dataset. Our scalability results show that the quantum approach
obtains a 3.5--4.5 times speedup over the classical approach on datasets with
many (millions of) features.
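One common way to make kernel SVM training amenable to an annealer (quantum or classical) is to binary-encode the dual coefficients and cast the dual objective as a QUBO; the sketch below follows that generic recipe. The encoding base, number of bits, and penalty weight are illustrative assumptions, and the abstract does not state that this is the exact formulation used by the authors.

```python
import numpy as np

def svm_qubo(X, y, gamma=1.0, n_bits=3, base=2, penalty=5.0):
    """Build a QUBO for the kernel SVM dual.

    Each dual coefficient alpha_n is encoded as sum_k base**k * a_{n,k} with
    binary variables a_{n,k}. The QUBO energy is
      1/2 sum_{n,m} alpha_n alpha_m y_n y_m K_nm - sum_n alpha_n
      + penalty * (sum_n alpha_n y_n)**2,
    i.e. the negated dual objective plus a soft equality constraint.
    Returns a dict {((n,k), (m,j)): coefficient} usable by annealing solvers.
    """
    N = len(y)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel

    Q = {}
    for n in range(N):
        for k in range(n_bits):
            # linear term -alpha_n sits on the diagonal (a^2 = a for binaries)
            Q[((n, k), (n, k))] = -float(base**k)
    for n in range(N):
        for m in range(N):
            for k in range(n_bits):
                for j in range(n_bits):
                    coeff = base**(k + j) * y[n] * y[m] * (0.5 * K[n, m] + penalty)
                    key = ((n, k), (m, j))
                    Q[key] = Q.get(key, 0.0) + float(coeff)
    return Q

# Tiny example: 4 points, 2 classes; the resulting Q can be handed to an annealer.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])
print(len(svm_qubo(X, y)))   # number of QUBO entries
```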
|
We study the creep rupture of bundles of viscoelastic fibers occurring under
uniaxial constant tensile loading. A novel fiber bundle model is introduced
which combines the viscoelastic constitutive behaviour and the strain
controlled breaking of fibers. Analytical and numerical calculations showed
that above a critical external load the deformation of the system monotonically
increases in time resulting in global failure at a finite time $t_f$, while
below the critical load the deformation tends to a constant value giving rise
to an infinite lifetime. Our studies revealed that the nature of the transition
between the two regimes, i.e. the behaviour of $t_f$ at the critical load
$\sigma_c$, strongly depends on the range of load sharing: for global load
sharing $t_f$ has a power law divergence at $\sigma_c$ with a universal
exponent of 0.5, however, for local load sharing the transition becomes abrupt:
at the critical load $t_f$ jumps to a finite value, analogous to second and
first order phase transitions, respectively. The acoustic response of the
bundle during creep is also studied.
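A minimal Monte Carlo sketch of the global-load-sharing variant of such a model is given below: intact Kelvin-Voigt fibers share the external load equally and break once their common strain exceeds individual random thresholds, reproducing finite lifetime above a critical load and infinite lifetime below it. The parameter values and the uniform threshold distribution are illustrative assumptions, not the paper's.

```python
import numpy as np

def creep_lifetime(sigma0, n_fibers=10000, E=1.0, beta=1.0, dt=1e-3, t_max=200.0, seed=0):
    """Global-load-sharing bundle of Kelvin-Voigt fibers under constant load.

    Each intact fiber obeys beta * d(eps)/dt + E * eps = sigma, where
    sigma = sigma0 * n_fibers / n_intact (equal load sharing), and breaks
    when the common strain eps exceeds its random threshold (uniform on [0, 1]).
    Returns the failure time t_f, or None if the bundle survives until t_max."""
    rng = np.random.default_rng(seed)
    thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
    n_intact, eps, t = n_fibers, 0.0, 0.0
    while t < t_max:
        sigma = sigma0 * n_fibers / n_intact         # load redistributed to intact fibers
        eps += dt * (sigma - E * eps) / beta         # explicit Euler step for the strain
        n_broken = np.searchsorted(thresholds, eps)  # fibers whose threshold is exceeded
        n_intact = n_fibers - n_broken
        if n_intact == 0:
            return t                                 # global failure at finite time
        t += dt
    return None                                      # below the critical load: no failure

# With uniform thresholds and E = 1 the critical load is sigma_c = 0.25.
for s in (0.2, 0.24, 0.3):
    print("sigma0 =", s, "-> t_f =", creep_lifetime(s))
```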
|
Text-to-Image (T2I) and multimodal large language models (MLLMs) have been
adopted in solutions for several computer vision and multimodal learning tasks.
However, it has been found that such vision-language models lack the ability to
correctly reason over spatial relationships. To tackle this shortcoming, we
develop the REVISION framework which improves spatial fidelity in
vision-language models. REVISION is a 3D rendering based pipeline that
generates spatially accurate synthetic images, given a textual prompt. REVISION
is an extendable framework, which currently supports 100+ 3D assets and 11 spatial
relationships, all with diverse camera perspectives and backgrounds. Leveraging
images from REVISION as additional guidance in a training-free manner
consistently improves the spatial consistency of T2I models across all spatial
relationships, achieving competitive performance on the VISOR and T2I-CompBench
benchmarks. We also design RevQA, a question-answering benchmark to evaluate
the spatial reasoning abilities of MLLMs, and find that state-of-the-art models
are not robust to complex spatial reasoning under adversarial settings. Our
results and findings indicate that utilizing rendering-based frameworks is an
effective approach for developing spatially-aware generative models.
|
Several bright and massive galaxy candidates at high redshifts have been
recently observed by the James Webb Space Telescope. Such early massive
galaxies seem difficult to reconcile with standard $\Lambda$ Cold Dark Matter
model predictions. We discuss under which circumstances such observed massive
galaxy candidates can be explained by introducing primordial non-Gaussianity in
the initial conditions of the cosmological perturbations.
|
We formulate a hydrodynamic theory of confluent epithelia: i.e. monolayers of
epithelial cells adhering to each other without gaps. Taking advantage of
recent progress toward establishing a general hydrodynamic theory of p-atic
liquid crystals, we demonstrate that collectively migrating epithelia feature
both nematic (i.e. p=2) and hexatic (i.e. p=6) order, with the former being
dominant at large and the latter at small length scales. Such a remarkable
multiscale liquid crystal order leaves a distinct signature in the system's
structure factor, which exhibits two different power law scaling regimes,
reflecting both the hexagonal geometry of small cell clusters and the
uniaxial structure of the global cellular flow. We support these analytical
predictions with two different cell-resolved models of epithelia -- i.e. the
self-propelled Voronoi model and the multiphase field model -- and highlight
how momentum dissipation and noise influence the range of fluctuations at small
length scales, thereby affecting the degree of cooperativity between cells. Our
construction provides a theoretical framework to conceptualize the recent
observation of multiscale order in layers of Madin-Darby canine kidney cells
and paves the way for further theoretical developments.
|
A fully-asynchronous network with one target sensor and a few anchors (nodes
with known locations) is considered. Localization and synchronization are
traditionally treated as two separate problems. In this paper, localization and
synchronization are studied under a unified framework. We present a new model in
which time-stamps obtained either via two-way communication between the nodes
or with a broadcast based protocol can be used in a simple estimator based on
least-squares (LS) to jointly estimate the position of the target node as well
as all the unknown clock-skews and clock-offsets. The Cram\'er-Rao lower bound
(CRLB) is derived for the considered problem and is used as a benchmark to
analyze the performance of the proposed estimator.
|
Investigation of inhomogeneities has wide applications in different areas of
mechanics including the study of composite materials. Here, we analytically
study an arbitrarily-shaped isotropic inhomogeneity embedded in a finite-sized
heterogeneous medium. By modal decomposition of the influence of the
inhomogeneity on the deformation of the composite, a relation is presented that
determines the variation of effective elastic stiffness caused by the presence
of the inhomogeneity. This relation indicates that the effective elastic
stiffness of a composite is always a concave function of the properties of the
inhomogeneity, embedded inside the composite. Therefore, as the heterogeneity
of elastic random composites increases, the rate of increase in effective
stiffness caused by the stiffer constituents is smaller than the rate of its
decrease due to the softer constituents. Thus, weakly heterogeneous random
composites become softer and less conductive with increasing heterogeneity at
the same mean of constituent properties. We numerically evaluated the effective
properties of about ten thousand composites to empirically support these
results and extend them to conductive materials. This article presents a
generalization of our recent theoretical study on the influence of the
stiffness of a single fiber on the elastic stiffness of a network of fibers to
arbitrarily-shaped inhomogeneities and different transport phenomena.
|
We present the discovery of three new giant planets around three
metal-deficient stars: HD5388b (1.96M_Jup), HD181720b (0.37M_Jup), and
HD190984b (3.1M_Jup). All the planets have moderately eccentric orbits (ranging
from 0.26 to 0.57) and long orbital periods (from 777 to 4885 days). Two of the
stars (HD181720 and HD190984) were part of a program searching for giant
planets around a sample of ~100 moderately metal-poor stars, while HD5388 was
part of the volume-limited sample of the HARPS GTO program. Our discoveries
suggest that giant planets in long period orbits are not uncommon around
moderately metal-poor stars.
|
We present WineSensed, a large multimodal wine dataset for studying the
relations between visual perception, language, and flavor. The dataset
encompasses 897k images of wine labels and 824k reviews of wines curated from
the Vivino platform. It has over 350k unique bottlings, annotated with year,
region, rating, alcohol percentage, price, and grape composition. We obtained
fine-grained flavor annotations on a subset by conducting a wine-tasting
experiment with 256 participants who were asked to rank wines based on their
similarity in flavor, resulting in more than 5k pairwise flavor distances. We
propose a low-dimensional concept embedding algorithm that combines human
experience with automatic machine similarity kernels. We demonstrate that this
shared concept embedding space improves upon separate embedding spaces for
coarse flavor classification (alcohol percentage, country, grape, price,
rating) and aligns with the intricate human perception of flavor.
|
We perform a thorough phase-plane analysis of the flow defined by the
equations of motion of a FRW universe filled with a tachyonic fluid plus a
barotropic one. The tachyon potential is assumed to be of inverse square form,
thus allowing for a two-dimensional autonomous system of equations. The
Friedmann constraint, combined with a convenient choice of coordinates, renders
the physical state space compact. We find the fixed-point solutions, and discuss
whether they represent attractors or not. The way the two fluids contribute at
late times to the fractional energy density depends on how fast the barotropic
fluid redshifts. If it does so fast enough, the tachyonic fluid takes over at
late times, but if the opposite happens, the situation will not be completely
dominated by the barotropic fluid; instead there will be a residual
non-negligible contribution from the tachyon subject to restrictions coming
from nucleosynthesis.
|
Sampling-based algorithms solve the path planning problem by generating
random samples in the search-space and incrementally growing a connectivity
graph or a tree. Conventionally, the sampling strategy used in these algorithms
is biased towards exploration to acquire information about the search-space. In
contrast, this work proposes an optimization-based procedure that generates new
samples to improve the cost-to-come value of vertices in a neighborhood. The
application of the proposed algorithm adds an exploitative bias to sampling and
results in a faster convergence to the optimal solution compared to other
state-of-the-art sampling techniques. This is demonstrated using benchmarking
experiments performed for a variety of higher-dimensional robotic planning
tasks.
|
We present predictions for the suppression of B-mesons using AdS/CFT
techniques assuming a strongly coupled quark-gluon plasma at
$\sqrt{s_{NN}}=2.76$ TeV for central collisions and $\sqrt{s_{NN}}=5.5$ TeV for
various centrality classes. We provide estimates of the systematic theoretical
uncertainties due to 1) the mapping of QCD parameters to those in $\mathcal N =
4$ SYM and 2) the exact form of the momentum dependence of the diffusion
coefficient predicted by AdS/CFT. We show that coupling energy loss to flow
increases $v_2$ substantially out to surprisingly large momenta, on the order
of $\sim 25$ GeV/c, thus pointing to a possible resolution of the $R_{AA}$ and
$v_2$ puzzle for light hadrons.
|
Based on operations prescribed under the paradigm of Complex Transformation
Optics (CTO) [1-5], it was recently shown in [5] that a complex source point
(CSP) can be mimicked by a parity-time ($\mathcal{PT}$) transformation media.
Such coordinate transformation has a mirror symmetry for the imaginary part,
and results in a balanced loss/gain metamaterial slab. A CSP produces a
Gaussian beam and, consequently, a point source placed at the center of such
metamaterial slab produces a Gaussian beam propagating away from the slab.
Here, we extend the CTO analysis to non-symmetric complex coordinate
transformations as put forth in [6] and verify that, by using simply a
(homogeneous) doubly anisotropic gain-media metamaterial slab, one can still
mimic a CSP and produce a Gaussian beam. In addition, we show that
Gaussian-like beams can be produced by point sources placed {\it outside} the
slab as well [6]. By making use of the extra degrees of freedom (real and
imaginary part of the coordinate transformation) provided by CTO, the near-zero
requirement on the real part of the resulting constitutive parameters can be
relaxed to facilitate potential realization of Gaussian-like beams. We
illustrate how beam properties such as peak amplitude and waist location can be
controlled by a proper choice of (complex-valued) CTO Jacobian elements. In
particular, the beam waist location may be moved bidirectionally by allowing
for negative entries in the Jacobian (equivalent to inducing negative
refraction effects). These results are then interpreted in light of the ensuing
CSP location.
|
Rydberg atoms are the focus of intense research due to their peculiar
properties, which make them interesting candidates for quantum optics and
quantum information applications. In this work we study the ionization of
Rydberg atoms due to their interaction with a trapping laser field, and a
reaction microscope is used to measure photoelectron angular and energy
distributions. Reaction microscopes are excellent tools when applied to
atomic photoionization processes involving pulsed lasers; the timing tied to
each pulse is crucial in solving the subsequent equations of motion for the
atomic fragments in the spectrometer field. However, when used in pump-probe
schemes, which rely on continuous wave probe lasers, vital information linked
to the time of flight is lost. This study reports on a method in which the
standard ReMi technique is extended in time through coincidence measurements.
This is then applied to the photoionization of $^6$Li atoms initially prepared
in optically pumped $2^{2}S_{1/2}$ and $2^{2}P_{3/2}$ states. Multi-photon
excitation from a tunable femtosecond laser is exploited to produce Rydberg
atoms inside an infrared optical dipole trap; the structure and dynamics of the
subsequent cascade back towards the ground state are evaluated.
|
*Context The evolution of young massive protoplanetary disks toward planetary
systems is expected to include the formation of gaps and the depletion of dust
and gas. *Aims A special group of flaring disks around Herbig Ae/Be stars does
not show prominent silicate emission features. We focus our attention on four
key Herbig Ae/Be stars to understand the structural properties responsible for
the absence of silicate feature emission. *Methods We investigate Q- and N-band
images taken with Subaru/COMICS, Gemini South/T-ReCS and VLT/VISIR. Our
radiative transfer modeling solutions require a separation of inner- and outer-
disks by a large gap. From this we characterize the radial density structure of
dust and PAHs in the disk. *Results The inner edge of the outer disk has a high
surface brightness and a typical temperature between ~100-150 K and therefore
dominates the emission in the Q-band. We derive radii of the inner edge of the
outer disk of 34, 23, 30 and 63 AU for HD97048, HD169142, HD135344B and Oph IRS
48 respectively. For HD97048 this is the first detection of a disk gap. The
continuum emission in the N-band is not due to emission in the wings of PAHs.
This continuum emission can be due to VSGs or to thermal emission from the
inner disk. We find that PAH emission is not always dominated by PAHs on the
surface of the outer disk. *Conclusions The absence of silicate emission
features is due to the presence of large gaps in the critical temperature
regime. Many, if not all Herbig disks with Spectral Energy Distribution (SED)
classification `group I' are disks with large gaps and can be characterized as
(pre-) transitional. An evolutionary path from the observed group I to the
observed group II sources seems no longer likely. Instead, both might derive
from a common ancestor.
|
Magnetic fields generated by human and animal organs, such as the heart,
brain and nervous system carry information useful for biological and medical
purposes. These magnetic fields are most commonly detected using
cryogenically-cooled superconducting magnetometers. Here we present the first
detection of action potentials from an animal nerve using an optical atomic
magnetometer. Using an optimal design, we are able to achieve sensitivity
dominated by the quantum shot noise of light and quantum projection noise of
atomic spins. Such sensitivity allows us to measure the nerve impulse with a
miniature room-temperature sensor which is a critical advantage for biomedical
applications. Positioning the sensor at a distance of a few millimeters from
the nerve, corresponding to the distance between the skin and nerves in
biological studies, we detect the magnetic field generated by an action
potential of a frog sciatic nerve. From the magnetic field measurements we
determine the activity of the nerve and the temporal shape of the nerve
impulse. This work opens new ways towards implementing optical magnetometers as
practical devices for medical diagnostics.
|
In 2002, D. Hrencecin and L.H. Kauffman defined a filamentation invariant on
oriented chord diagrams that may determine whether the corresponding flat
virtual knot diagrams are non-trivial. A virtual knot diagram is non-classical
if its related flat virtual knot diagram is non-trivial. Hence filamentations
can be used to detect non-classical virtual knots. We extend these
filamentation techniques to virtual links with more than one component. We also
give examples of virtual links that these techniques detect as non-classical.
|
We claim that $M$(atroid) theory may provide a mathematical framework for an
underlying description of $M$-theory. Duality is the key symmetry which
motivates our proposal. The definition of an oriented matroid in terms of the
Farkas property plays a central role in our formalism. We outline how this
definition may be carried over to $M$-theory. As a consequence of our analysis we
find a new type of action for extended systems which combines dually the
$p$-brane and its dual $p^{\perp}$-brane.
|
The angular power spectra of the infrared maps obtained by the DIRBE (Diffuse
InfraRed Background Experiment) instrument on the COBE satellite have been
obtained by two methods: the Hauser-Peebles method previously applied to the
DMR maps, and by Fourier transforming portions of the all-sky maps projected
onto a plane. The two methods give consistent results, and the power spectrum
of the high-latitude dust emission is $C_\ell \propto \ell^{-3}$ in the range
$2 < \ell < 300$.
|
In recent years, using a network of autonomous and cooperative unmanned
aerial vehicles (UAVs) without command and communication from the ground
station has become more imperative, in particular in search-and-rescue
operations, disaster management, and other applications where human
intervention is limited. In such scenarios, UAVs can make more efficient
decisions if they acquire more information about the mobility, sensing and
actuation capabilities of their neighbor nodes. In this paper, we develop an
unsupervised online learning algorithm for joint mobility prediction and object
profiling of UAVs to facilitate control and communication protocols. The
proposed method not only predicts the future locations of the surrounding
flying objects, but also classifies them into different groups with similar
levels of maneuverability (e.g., rotary-wing and fixed-wing UAVs) without prior
knowledge about these classes. This method is flexible in admitting new object
types with unknown mobility profiles, thereby applicable to emerging flying
Ad-hoc networks with heterogeneous nodes.
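The clustering half of such a method can be illustrated with a very small online procedure: each neighbor's recent trajectory is summarized by mobility features (for example, speed and turn rate), assigned to the nearest existing profile, or used to spawn a new profile when no existing one is close enough. This threshold-based incremental clustering is a generic stand-in rather than the paper's algorithm, and the feature choice and threshold are assumptions.

```python
import numpy as np

class OnlineMobilityProfiler:
    """Incremental clustering of mobility feature vectors (generic sketch)."""
    def __init__(self, new_cluster_dist=1.0):
        self.centroids, self.counts = [], []
        self.new_cluster_dist = new_cluster_dist

    def update(self, feature):
        """Assign one feature vector (e.g. [mean speed, mean |turn rate|]) to a
        profile, creating a new profile if all existing ones are too far away."""
        feature = np.asarray(feature, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(feature - c) for c in self.centroids]
            i = int(np.argmin(d))
            if d[i] < self.new_cluster_dist:
                self.counts[i] += 1                      # running-mean centroid update
                self.centroids[i] += (feature - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(feature.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

profiler = OnlineMobilityProfiler(new_cluster_dist=2.0)
for f in ([12.0, 0.1], [11.5, 0.2], [4.0, 1.5], [3.8, 1.4]):  # fixed-wing-like vs rotary-like
    print(profiler.update(f))                                 # prints 0 0 1 1
```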
|
We give a new algorithm for performing the distinct-degree factorization of a
polynomial P(x) over GF(2), using a multi-level blocking strategy. The coarsest
level of blocking replaces GCD computations by multiplications, as suggested by
Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our
approach is that a finer level of blocking replaces multiplications by
squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval
polynomials when P(x) is sparse. As an application we give a fast algorithm to
search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2),
while producing a certificate that can be checked in less time than the full
search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over
all trinomials of given degree r. Under a plausible assumption about the
distribution of factors of trinomials, the new algorithm has complexity O(r^2
(log r)^{3/2}(log log r)^{1/2}) for the search over all trinomials of degree r.
Our implementation achieves a speedup of greater than a factor of 560 over the
naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our
program, we have found two new primitive trinomials of degree 24036583 over
GF(2) (the previous record degree was 6972593).
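For readers unfamiliar with the baseline being accelerated, the following Python sketch shows the classical distinct-degree factorization loop over GF(2)[x] (one squaring plus one GCD per degree), with polynomials stored as integer bitmasks; the multi-level blocking described above replaces most of those GCDs by multiplications and, at the finer level, multiplications by squarings. The helper names are our own and the input is assumed squarefree.

```python
def gf2_mul(a, b):
    """Carry-less multiplication of GF(2)[x] polynomials stored as int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, m):
    """Quotient and remainder of a by m in GF(2)[x]."""
    q, dm = 0, m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        shift = a.bit_length() - 1 - dm
        q ^= 1 << shift
        a ^= m << shift
    return q, a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_divmod(a, b)[1]
    return a

def ddf(P):
    """Distinct-degree factorization of a squarefree P in GF(2)[x].
    Returns (g_d, d) pairs, g_d being the product of all degree-d factors of P."""
    x, f, h, d, out = 0b10, P, 0b10, 0, []
    while f.bit_length() - 1 >= 2 * (d + 1):
        d += 1
        h = gf2_divmod(gf2_mul(h, h), f)[1]   # h = x^(2^d) mod f, via one squaring
        g = gf2_gcd(h ^ x, f)                 # gcd(x^(2^d) - x, f)
        if g != 1:
            out.append((g, d))
            f = gf2_divmod(f, g)[0]
            h = gf2_divmod(h, f)[1]
    if f != 1:
        out.append((f, f.bit_length() - 1))
    return out

# (x+1)(x^2+x+1)(x^3+x+1) = x^6+x^4+x+1 = 0b1010011
print(ddf(0b1010011))   # [(3, 1), (7, 2), (11, 3)], i.e. x+1, x^2+x+1, x^3+x+1
```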
|
We present Preference Flow Matching (PFM), a new framework for
preference-based reinforcement learning (PbRL) that streamlines the integration
of preferences into an arbitrary class of pre-trained models. Existing PbRL
methods require fine-tuning pre-trained models, which presents challenges such
as scalability, inefficiency, and the need for model modifications, especially
with black-box APIs like GPT-4. In contrast, PFM utilizes flow matching
techniques to directly learn from preference data, thereby reducing the
dependency on extensive fine-tuning of pre-trained models. By leveraging
flow-based models, PFM transforms less preferred data into preferred outcomes,
and effectively aligns model outputs with human preferences without relying on
explicit or implicit reward function estimation, thus avoiding common issues
like overfitting in reward models. We provide theoretical insights that support
our method's alignment with standard PbRL objectives. Experimental results
indicate the practical effectiveness of our method, offering a new direction in
aligning pre-trained models with preferences.
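The core flow-matching ingredient can be sketched as follows: a velocity field is regressed onto the straight-line displacement between paired samples, here from a less-preferred and a preferred distribution, so that integrating the learned field moves less-preferred outputs toward preferred ones. This is a generic conditional flow-matching objective written in PyTorch under our own pairing and architecture assumptions, not the authors' exact training procedure.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP v_theta(x, t) used as the flow-matching velocity field."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(v, x_less, x_pref):
    """Regress v(x_t, t) onto the constant velocity of the straight path
    from a less-preferred sample x_less to a preferred sample x_pref."""
    t = torch.rand(x_less.size(0), 1)
    x_t = (1 - t) * x_less + t * x_pref
    target = x_pref - x_less
    return ((v(x_t, t) - target) ** 2).mean()

# Toy training step on 2-D dummy data
v = VelocityField(dim=2)
opt = torch.optim.Adam(v.parameters(), lr=1e-3)
x_less, x_pref = torch.randn(64, 2), torch.randn(64, 2) + 3.0
loss = flow_matching_loss(v, x_less, x_pref)
loss.backward()
opt.step()
print(float(loss))
```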
|
The scotogenic mechanism generates neutrino masses through the one-loop
contribution of an inert scalar doublet and three sterile neutrinos. This work
argues that such an inert scalar doublet is a Goldstone boson mode associated with
a gauge symmetry breaking. Hence, the resultant scotogenic gauge mechanism is
very predictive, generating neutrino mass as contributed by a new gauge boson
doublet that eats such Goldstone bosons. The dark matter stability is
manifestly ensured by a matter parity as residual gauge symmetry for which a
vector dark matter candidate is hinted at.
|
This paper aims at building the theoretical foundations for manifold learning
algorithms in the space of absolutely continuous probability measures on a
compact and convex subset of $\mathbb{R}^d$, metrized with the Wasserstein-2
distance $\mathrm{W}$. We begin by introducing a construction of submanifolds
$\Lambda$ of probability measures equipped with metric $\mathrm{W}_\Lambda$,
the geodesic restriction of $\mathrm{W}$ to $\Lambda$. In contrast to other
constructions, these submanifolds are not necessarily flat, but still allow for
local linearizations in a similar fashion to Riemannian submanifolds of
$\mathbb{R}^d$. We then show how the latent manifold structure of
$(\Lambda,\mathrm{W}_{\Lambda})$ can be learned from samples
$\{\lambda_i\}_{i=1}^N$ of $\Lambda$ and pairwise extrinsic Wasserstein
distances $\mathrm{W}$ only. In particular, we show that the metric space
$(\Lambda,\mathrm{W}_{\Lambda})$ can be asymptotically recovered in the sense
of Gromov--Wasserstein from a graph with nodes $\{\lambda_i\}_{i=1}^N$ and edge
weights $\mathrm{W}(\lambda_i,\lambda_j)$. In addition, we demonstrate how the tangent
space at a sample $\lambda$ can be asymptotically recovered via spectral
analysis of a suitable "covariance operator" using optimal transport maps from
$\lambda$ to sufficiently close and diverse samples $\{\lambda_i\}_{i=1}^N$.
The paper closes with some explicit constructions of submanifolds $\Lambda$ and
numerical examples on the recovery of tangent spaces through spectral analysis.
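As a very small numerical illustration of working with the extrinsic pairwise distances $\mathrm{W}(\lambda_i,\lambda_j)$, the sketch below computes Wasserstein-2 distances between empirical measures with the POT library and feeds the distance matrix to metric MDS to obtain a low-dimensional embedding. This is a simple stand-in for the Gromov-Wasserstein recovery analyzed in the paper, and the toy family of measures is an arbitrary assumption.

```python
import numpy as np
import ot                                    # POT: Python Optimal Transport
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# A toy one-parameter family of measures: small point clouds translated along an arc.
def sample_measure(theta, n=100):
    return rng.normal(size=(n, 2)) * 0.2 + np.array([np.cos(theta), np.sin(theta)])

thetas = np.linspace(0, np.pi, 15)
clouds = [sample_measure(t) for t in thetas]
w = ot.unif(100)                             # uniform weights on each cloud

# Pairwise Wasserstein-2 distances W(lambda_i, lambda_j)
D = np.zeros((len(clouds), len(clouds)))
for i in range(len(clouds)):
    for j in range(i + 1, len(clouds)):
        M = ot.dist(clouds[i], clouds[j])    # squared Euclidean cost matrix
        D[i, j] = D[j, i] = np.sqrt(ot.emd2(w, w, M))

# Embed the samples from the pairwise distances alone (stand-in for GW recovery).
emb = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(emb.shape)                             # (15, 2): a curve-like embedding
```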
|
Due to difficulties in acquiring ground truth depth of equirectangular (360)
images, the quality and quantity of equirectangular depth data today is
insufficient to represent the various scenes in the world. Therefore, 360 depth
estimation studies, which relied solely on supervised learning, are destined to
produce unsatisfactory results. Although self-supervised learning methods
focusing on equirectangular images (EIs) are introduced, they often have
incorrect or non-unique solutions, causing unstable performance. In this paper,
we propose 360 monocular depth estimation methods which improve on the areas
that limited previous studies. First, we introduce a self-supervised 360 depth
learning method that only utilizes gravity-aligned videos, which has the
potential to eliminate the need for depth data during the training procedure.
Second, we propose a joint learning scheme realized by combining supervised and
self-supervised learning. The weakness of each learning is compensated, thus
leading to more accurate depth estimation. Third, we propose a non-local fusion
block, which can further retain the global information encoded by vision
transformer when reconstructing the depths. With the proposed methods, we
successfully apply the transformer to 360 depth estimation, which, to the best
of our knowledge, has not been tried before. On several benchmarks, our approach
achieves significant improvements over previous works and establishes a state
of the art.
|
The available data on $F_L$ suggest the existence of unexpected large higher
twist contributions. We use the $1/N_f$ expansion to analyze the renormalon
contribution to the coefficient function of the longitudinal structure function
$F_L^{p-n}$. The renormalon ambiguity is calculated for all moments of the
structure function, thus allowing us to estimate the contribution of ``genuine''
twist-4 corrections as a function of Bjorken-$x$. The predictions turn out to
be in surprisingly good agreement with the experimental data.
|
In this paper we show the equivalence of the notions of regularity,
transitivity and the ergodic principle for quadratic stochastic Volterra
operators acting on the finite-dimensional simplex.
|
We report the detection of periodic variations in the T_eff ~32 000 K DA
white dwarf star HE 1017-1352. We obtained time series photometry using the 4.1
m SOAR telescope on three separate nights for a total of 16.8 h. From the
frequency analysis we found four periods of 605 s, 556 s, 508 s and 869 s with
significant amplitudes above the 1/1000 false alarm probability detection
limit. The detected modes are compatible with low harmonic degree g-mode
non-radial pulsations with radial order higher than ~ 9. This detection
confirms the pulsation nature of HE 1017-1352 and thus the existence of the new
pulsating class of hot DA white dwarf stars. In addition, we detect a long
period of 1.52 h, compatible with the rotation periods of DA white dwarf stars.
|
We investigate negative tension branes as stable thin shell wormholes in
Reissner-Nordstr\"om-(anti-)de Sitter spacetimes in $d$-dimensional Einstein
gravity. Imposing Z2 symmetry, we construct and classify traversable static
thin shell wormholes in spherical, planar (or cylindrical) and hyperbolic
symmetries. In spherical geometry, we find the higher dimensional counterpart
of Barcelo and Visser's wormholes, which are stable against spherically
symmetric perturbations. We also find the classes of thin shell wormholes in
planar and hyperbolic symmetries with a negative cosmological constant, which
are stable against perturbations preserving symmetries. In most cases, stable
wormholes are found with the combination of an electric charge and a negative
cosmological constant. However, as special cases, we find stable wormholes even
with vanishing cosmological constant in spherical symmetry and with vanishing
electric charge in hyperbolic symmetry.
|
We extract an optimal subset of architectural parameters for the BERT
architecture from Devlin et al. (2018) by applying recent breakthroughs in
algorithms for neural architecture search. This optimal subset, which we refer
to as "Bort", is demonstrably smaller, having an effective (that is, not
counting the embedding layer) size of $5.5\%$ of the original BERT-large
architecture, and $16\%$ of the net size. Bort is also able to be pretrained in
$288$ GPU hours, which is $1.2\%$ of the time required to pretrain the
highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et
al., 2019), and about $33\%$ of that of the world-record, in GPU hours,
required to train BERT-large on the same hardware. It is also $7.9$x faster on
a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance
improvements of between $0.3\%$ and $31\%$, absolute, with respect to
BERT-large, on multiple public natural language understanding (NLU) benchmarks.
|
We propose a plasmonic modulator with semiconductor gain material for
optoelectronic integrated circuits. We analyze properties of a finite-thickness
metal-semiconductor-metal (F-MSM) waveguide to be utilized as an ultra-compact
and fast plasmonic modulator. The InP-based semiconductor core allows
electrical control of signal propagation. By pumping the core we can vary the
gain level and thus the transmittance of the whole system. The study of the
device was carried out using both analytical approaches for the planar
two-dimensional case and numerical simulations for finite-width waveguides. We analyze
the eigenmodes of the F-MSM waveguide, propagation constant, confinement
factor, Purcell factor, absorption coefficient, and extinction ratio of the
structure. We show that using thin metal layers instead of thick ones we can
obtain higher extinction ratio of the device.
|
[ABRIDGED] The cosmological 21cm signal is set to become the most powerful
probe of the early Universe, with first generation interferometers aiming to
make statistical detections of reionization. There is increasing interest also
in the pre-reionization epoch when the intergalactic medium was heated by an
early X-ray background. Here we perform parameter studies varying the halo
masses hosting galaxies, and their X-ray production efficiencies. We also
relate these to popular models of Warm Dark Matter cosmologies. For each
parameter combination we compute the signal-to-noise (S/N) of the large-scale
(k~0.1/Mpc) 21cm power for both reionization and X-ray heating for a 2000h
observation with several instruments: 128 tile Murchison Wide Field Array
(MWA128T), a 256 tile extension (MWA256T), the Low Frequency Array (LOFAR), the
128 element Precision Array for Probing the Epoch of Reionization (PAPER), and
the second generation Square Kilometre Array (SKA). We show that X-ray heating
and reionization in many cases are of comparable detectability. For fiducial
astrophysical parameters, MWA128T might detect X-ray heating thanks to its
extended bandpass. When it comes to reionization, both MWA128T and PAPER will
only achieve marginal detections, unless foregrounds on larger scales can
be mitigated. On the other hand, LOFAR should detect plausible models of
reionization at S/N > 10. The SKA will easily detect both X-ray heating and
reionization.
|
Implementing microelectromechanical system (MEMS) resonators calls for
detailed microscopic understanding of the devices, such as energy dissipation
channels, spurious modes, and imperfections from microfabrication. Here, we
report the nanoscale imaging of a freestanding super-high-frequency (3 ~ 30
GHz) lateral overtone bulk acoustic resonator with unprecedented spatial
resolution and displacement sensitivity. Using transmission-mode microwave
impedance microscopy (TMIM), we have visualized mode profiles of individual overtones
and analyzed higher-order transverse spurious modes and anchor loss. The
integrated TMIM signals are in good agreement with the stored mechanical energy
in the resonator. Quantitative analysis with finite-element modeling shows that
the noise floor is equivalent to an in-plane displacement of 10 fm/sqrt(Hz) at
room temperature, which can be further improved in cryogenic environments.
Our work contributes to the design and characterization of MEMS resonators with
better performance for telecommunication, sensing, and quantum information
science applications.
|
$\mathrm{Cu_2IrO_3}$ is among the newest layered honeycomb iridates and a
promising candidate to harbor a Kitaev quantum spin liquid state. Here, we
investigate the pressure and temperature dependence of its structure through a
combination of powder x-ray diffraction and x-ray absorption fine structure
measurements, as well as $ab$-$initio$ evolutionary structure search. At
ambient pressure, we revise the previously proposed $C2/c$ solution with a
related but notably more stable $P2_1/c$ structure. Pressures below 8 GPa drive
the formation of Ir-Ir dimers at both ambient and low temperatures, similar to
the case of $\mathrm{Li_2IrO_3}$. At higher pressures, the structural evolution
dramatically depends on temperature. A large discontinuous reduction of the Ir
honeycomb interplanar distance is observed around 15 GPa at room temperature,
likely driven by a collapse of the O-Cu-O dumbbells. At 15 K, pressures beyond
20 GPa first lead to an intermediate phase featuring a continuous reduction of
the interplanar distance, which then collapses at 30 GPa across yet another
phase transition. However, the resulting structure around 40 GPa is not the
same at room and low temperatures. Remarkably, the reduction in interplanar
distance leads to an apparent healing of the stacking faults at room
temperature, but not at 15 K. Possible implications for the evolution of the
electronic structure of $\mathrm{Cu_2IrO_3}$ with pressure are discussed.
|
We discuss the exclusive lepton flavor violating (LFV) decays modes based on
$b\to d\ell_i\ell_j$ and $b\to s\ell_i\ell_j$ by considering the ground state
mesons and baryons. After spelling out the expressions for such decay rates in
a low energy effective theory which includes generic contributions arising from
physics beyond the Standard Model (BSM), we show that the experimental bounds
on meson decays can be used to bound the corresponding modes involving baryons.
We find, for example, $\mathcal{B}(\Lambda_b\to \Lambda\mu\tau)\lesssim 4\times
10^{-5}$. We also consider two specific models and constrain the relevant LFV
couplings by using the low energy observables. In the first model we assume the
Higgs mediated LFV and find the resulting decay rates to be too small to be
experimentally detectable. We also emphasize that the regions favored by the
bounds $\mathcal{B}(h\to\mu\tau)^\mathrm{Atlas}$ and $\mathcal{B}(h\to
e\tau)^\mathrm{Atlas}$ are not compatible with $\mathcal{B}(\mu\to
e\gamma)^\mathrm{MEG}$ at $1\sigma$. In the second model we assume LFV mediated
by a heavy $Z'$ boson and find that the corresponding $b$-hadron branching
fractions can be $\mathcal{O}(10^{-6})$, thus possibly within experimental
reach at LHCb and Belle~II.
|
We propose a model for the origin of the isolated nonthermal filaments
observed at the Galactic center based on an analogy to cometary plasma tails.
We invoke the interaction between a large scale magnetized galactic wind and
embedded molecular clouds. As the advected wind magnetic field encounters a
dense molecular cloud, it is impeded and drapes around the cloud, ultimately
forming a current sheet in the wake. This draped field is further stretched by
the wind flow into a long, thin filament whose aspect ratio is determined by
the balance between the dynamical wind and amplified magnetic field pressures.
The key feature of this cometary model is that the filaments are dynamic
configurations, and not static structures. As such, they are local
amplifications of an otherwise weak field and not directly connected to any
static global field. The derived field strengths for the wind and wake are
consistent with observational estimates. Finally, the observed synchrotron
emission is naturally explained by the acceleration of electrons to high energy
by plasma and MHD turbulence generated in the cloud wake.
|
For the processes e+e-\to \mu+\mu-, \tau+\tau-, b\bar{b} and c\bar{c} at a
future e+e- collider with \sqrt{s}=0.5 TeV, we examine the sensitivity of the
helicity cross sections to four-fermion contact interactions. If longitudinal
polarization of the electron beam were available, two polarized integrated
cross sections would offer the opportunity to separate the helicity cross
sections and, in this way, to derive model-independent bounds on the relevant
parameters. The measurement of these polarized cross sections with optimal
kinematical cuts could significantly increase the sensitivity of helicity cross
sections to contact interaction parameters and could give crucial information
on the chiral structure of such new interactions.
|
We provide a simple criterion for an element of the mapping class group of a
closed surface to have normal closure equal to the whole mapping class group.
We apply this to show that every nontrivial periodic mapping class that is not
a hyperelliptic involution is a normal generator for the mapping class group
when the genus is at least 3. We also give many examples of pseudo-Anosov
normal generators, answering a question of D. D. Long. In fact we show that
every pseudo-Anosov mapping class with stretch factor less than $\sqrt{2}$ is a
normal generator. Even more, we give pseudo-Anosov normal generators with
arbitrarily large stretch factors and arbitrarily large translation lengths on
the curve graph, disproving a conjecture of Ivanov.
|
Character recognition techniques for printed documents are widely used for
the English language. However, the systems that are implemented to recognize Asian
languages struggle to increase the accuracy of recognition. Among other Asian
languages (such as Arabic, Tamil, Chinese), Sinhala characters are unique,
mainly because they are round in shape. This unique feature makes it a
challenge to extend the prevailing techniques to improve recognition of Sinhala
characters. Therefore, little attention has been given to improving the
accuracy of Sinhala character recognition. A novel method, which makes use of
this unique feature, could be advantageous over other methods. This paper
describes the use of a fuzzy inference system to recognize Sinhala characters.
Feature extraction is mainly focused on distance and intersection measurements
in different directions from the center of the letter making use of the round
shape of characters. The results showed an overall accuracy of 90.7% for 140
instances of letters tested, considerably better than that of similar systems.
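The distance-based features described above can be illustrated with a short sketch that measures, for a binary character image, the distance from the character centroid to the outermost foreground pixel along a set of directions; such a feature vector would then feed the fuzzy inference rules. The number of directions and the normalization are our own assumptions, and the fuzzy rule base itself is omitted.

```python
import numpy as np

def radial_distance_features(img, n_dirs=16):
    """Distance from the character centroid to the farthest foreground pixel
    along n_dirs equally spaced directions (img: 2-D boolean/0-1 array)."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()
    h, w = img.shape
    feats = np.zeros(n_dirs)
    for k in range(n_dirs):
        ang = 2 * np.pi * k / n_dirs
        dy, dx = np.sin(ang), np.cos(ang)
        for r in range(1, max(h, w)):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if 0 <= y < h and 0 <= x < w and img[y, x]:
                feats[k] = r                  # keep the farthest hit along this ray
    return feats / max(h, w)                  # normalize for size invariance

# A filled disc gives nearly equal distances in all directions (round shape).
yy, xx = np.mgrid[:32, :32]
disc = (yy - 16) ** 2 + (xx - 16) ** 2 <= 100
print(np.round(radial_distance_features(disc), 2))
```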
|
Purpose: Preliminarily evaluate the feasibility and efficacy of using
meditative virtual reality (VR) to improve the hospital experience of intensive
care unit (ICU) patients.
Methods: Effects of VR were examined in a non-randomized, single-center
cohort. Fifty-nine patients admitted to the surgical or trauma ICU of the
University of Florida Health Shands Hospital participated. A Google Daydream
headset was used to expose ICU patients to commercially available VR
applications focused on calmness and relaxation (Google Spotlight Stories and
RelaxVR). Sessions were conducted once daily for up to seven days. Outcome
measures included pain level, anxiety, depression, medication administration,
sleep quality, heart rate, respiratory rate, blood pressure, delirium status,
and patient ratings of the VR system. Comparisons were made using paired
t-tests and mixed models where appropriate.
Results: The VR meditative intervention was found to improve patients' ICU
experience with reduced levels of anxiety and depression; however, there was no
evidence suggesting that VR had any significant effects on physiological
measures, pain, or sleep.
Conclusion: The use of VR technology in the ICU was shown to be easily
implemented and well-received by patients.
|
Adversarial example is a rising way of protecting facial privacy security
from deepfake modification. To prevent massive facial images from being
illegally modified by various deepfake models, it is essential to design a
universal deepfake disruptor. However, existing works treat deepfake disruption
as an End-to-End process, ignoring the functional difference between feature
extraction and image reconstruction, which makes it difficult to generate a
cross-model universal disruptor. In this work, we propose a novel
Feature-Output ensemble UNiversal Disruptor (FOUND) against deepfake networks,
which builds on the view that attacking feature extractors is the more
critical and general task in deepfake disruption. We conduct an effective
two-stage disruption process. We first disrupt multi-model feature extractors
through multi-feature aggregation and individual-feature maintenance, and then
develop a gradient-ensemble algorithm to enhance the disruption effect by
simplifying the complex optimization problem of disrupting multiple End-to-End
models. Extensive experiments demonstrate that FOUND can significantly boost
the disruption effect against ensemble deepfake benchmark models. Besides, our
method can fast obtain a cross-attribute, cross-image, and cross-model
universal deepfake disruptor with only a few training images, surpassing
state-of-the-art universal disruptors in both success rate and efficiency.
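A bare-bones version of the feature-level, gradient-ensemble idea is sketched below: a norm-bounded perturbation is optimized to push the features of several frozen extractors away from those of the clean image, with the sign gradients averaged across models. The step sizes, budget, and sign-averaging rule are generic adversarial-attack choices assumed for illustration, not the exact FOUND procedure.

```python
import torch
import torch.nn.functional as F

def feature_ensemble_disruptor(x, extractors, steps=40, eps=8 / 255, alpha=2 / 255):
    """Craft one perturbation that disrupts several deepfake feature extractors.

    x          : clean images, tensor in [0, 1] of shape (B, C, H, W)
    extractors : list of frozen feature-extractor modules
    Maximizes the feature distance to the clean features, combining the
    per-model gradients by averaging their signs (a simple gradient ensemble)."""
    with torch.no_grad():
        clean_feats = [f(x) for f in extractors]
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grads = []
        for f, cf in zip(extractors, clean_feats):
            dist = F.mse_loss(f(x + delta), cf)          # feature disruption objective
            grads.append(torch.autograd.grad(dist, delta)[0].sign())
        with torch.no_grad():
            delta += alpha * torch.stack(grads).mean(0)  # ascend the averaged sign gradient
            delta.clamp_(-eps, eps)                      # stay within the perturbation budget
    return (x + delta.detach()).clamp(0, 1)
```

In use, `x` would be a batch of face crops and `extractors` any list of differentiable encoder modules taken from the deepfake models to be disrupted.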
|
In this study, we report on face-centered cubic structured CoCrFeNi
high-entropy alloy thin films with finely dispersed nano-oxide particles which
are formed by internal oxidation. Analytical scanning transmission electron
microscopy imaging found that the particles are Cr2O3. The oxide particles
contribute to the hardening of the film increasing its hardness by 14% compared
to that of the film without precipitates, through the Orowan-type strengthening
mechanism. Our novel approach paves the way to design medium- and high-entropy
alloys with high strength by making use of oxide phases.
|
Data centers are on the rise and scientists are re-thinking and re-designing
networks for data centers. The concept of central control which was not
effective in the Internet era is now gaining popularity and is used in many
data centers due to the lower scale of operation (compared to the Internet), structured
topologies, and the fact that the entire network's resources are under a single entity's
control. With new opportunities, data center networks also pose new problems.
Data centers require: high utilization, low median, tail latencies and
fairness. In the traditional systems, the bulk traffic generally stalls the
interactive flows thereby affecting their flow completion times adversely. In
this thesis, we deal with two problems relating to central controller assisted
prioritization of interactive flow in data center networks.
Fastpass is a centralized "zero-queue" data center network. But the central
arbiter of Fastpass doesn't scale well for more than 256 nodes (or 8 cores). In
our test runs, it supports only about 1.5 Terabits per second of network traffic. In
this work, we re-design the timeslot allocator of its central arbiter so
that it scales linearly up to 12 cores and supports about 1024 nodes and 7.1
Terabits per second of network traffic.
In the second part of the thesis, we deal with the problem of congestion
control in a software defined network. We propose a framework, where the
controller with its global view of the network actively participates in the
congestion control decisions of the end TCP hosts, by setting the ECN bits of
IPv4 packets appropriately. Our framework can be deployed very easily without
any change to the end node TCPs or the SDN switches. We also show 30x
improvement over TCP cubic and 1.7x improvement over RED in flow completion
times of interactive traffic for one implementation of this framework.
|
Underlying events dominate most of the hadronic activity in p$-$p collisions
and span from perturbative to non-perturbative QCD, with a sensitivity
ranging from multi-scale to very low-x physics. A detailed
understanding of such events plays a crucial role in the accurate understanding
of Standard Model (SM) and Beyond Standard Model physics. The underlying event
activity has been studied within the framework of the Pythia 8 Monte Carlo model,
considering the underlying-event observables mean charged particle
multiplicity density, $\langle d^{2}N /d\eta d\phi \rangle$, and mean scalar
$p_T$ sum, $\langle d^{2} \sum p_{T} /d\eta d\phi \rangle$, as a function of
the leading charged particle $p_T$ in the towards, away, and transverse regions of p$-$p
collisions at $\sqrt{s}$ = 2.76, 7 and 13 TeV. The towards, away, and
transverse regions are defined in the azimuthal plane relative to the leading
particle in p$-$p collisions. The energy dependence of underlying events and
their activities in the central and forward region has also been studied. The
effect of hadronic re-scattering, color reconnection, and rope hadronization
mechanisms implemented in Pythia 8 has been studied in detail to gain insight
into the different processes contributing to underlying events in the soft sector.
|
Forecasting future events is important for policy and decision making. In
this work, we study whether language models (LMs) can forecast at the level of
competitive human forecasters. Towards this goal, we develop a
retrieval-augmented LM system designed to automatically search for relevant
information, generate forecasts, and aggregate predictions. To facilitate our
study, we collect a large dataset of questions from competitive forecasting
platforms. Under a test set published after the knowledge cut-offs of our LMs,
we evaluate the end-to-end performance of our system against the aggregates of
human forecasts. On average, the system nears the crowd aggregate of
competitive forecasters, and in some settings surpasses it. Our work suggests
that using LMs to forecast the future could provide accurate predictions at
scale and help to inform institutional decision making.
|
We consider a version of the continuous-time multi-armed bandit problem where
decision opportunities arrive at Poisson arrival times, and study its Gittins
index policy. When driven by spectrally one-sided L\'evy processes, the Gittins
index can be written explicitly in terms of the scale function, and is shown to
converge to that in the classical L\'evy bandit of Kaspi and Mandelbaum (1995).
|
Many animal cells change their shape depending on the stiffness of the
substrate on which they are cultured: they assume small, rounded shapes in soft
extracellular matrices (ECMs), they elongate within stiffer ECMs, and flatten out on hard substrates.
Cells tend to prefer stiffer parts of the substrate, a phenomenon known as
durotaxis. Such mechanosensitive responses to ECM mechanics are key to
understanding the regulation of biological tissues by mechanical cues, as it
occurs, e.g., during angiogenesis and the alignment of cells in muscles and
tendons. Although it is well established that the mechanical cell-ECM
interactions are mediated by focal adhesions (FAs), the mechanosensitive molecular
complexes linking the cytoskeleton to the substrate, it is poorly understood
how the stiffness-dependent kinetics of the focal adhesions eventually produce
the observed interdependence of substrate stiffness and cell shape and cell
behavior. Here we show that the mechanosensitive behavior of single-focal
adhesions, cell contractility and substrate adhesivity together suffice to
explain the observed stiffness-dependent behavior of cells. We introduce a
multiscale computational model that is based upon the following assumptions:
(1) cells apply forces onto the substrate through FAs; (2) the FAs grow and
stabilize due to these forces; (3) within a given time-interval, the force that
the FAs experience is lower on soft substrates than on stiffer substrates due
to the time it takes to reach mechanical equilibrium; and (4) smaller FAs are
pulled from the substrate more easily than larger FAs. Our model combines the
cellular Potts model for the cells with a finite-element model for the
substrate, and describes each FA using differential equations. Together these
assumptions provide a unifying model for cell spreading, cell elongation and
durotaxis in response to substrate mechanics.
|
We find two chemically distinct populations separated relatively cleanly in
the [Fe/H] - [Mg/Fe] plane, but also distinguished in other chemical planes,
among metal-poor stars (primarily with metallicities [Fe/H] $< -0.9$) observed
by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and
analyzed for Data Release 13 (DR13) of the Sloan Digital Sky Survey. These two
stellar populations show the most significant differences in their [X/Fe]
ratios for the $\alpha$-elements, C+N, Al, and Ni. In addition to these
populations having differing chemistry, the low metallicity high-Mg population
(which we denote the HMg population) exhibits a significant net Galactic
rotation, whereas the low-Mg population (or LMg population) has halo-like
kinematics with little to no net rotation. Based on its properties, the LMg
population most likely originated as an accreted population of stars. The HMg
population shows chemistry (and to an extent kinematics) similar to the thick
disk, and is likely associated with {\it in situ} formation. The
distinction between the LMg and HMg populations mimics the differences between
the populations of low- and high-$\alpha$ halo stars found in previous studies,
suggesting that these are samples of the same two populations.
|
We prove that the Hilbert square $S^{[2]}$ of a very general primitively
polarized K3 surface S of degree $d(n) = 2(4n^2 + 8n + 5)$, $n \geq 1$ is
birational to a double Eisenbud-Popescu-Walter sextic. Our result implies a
positive answer, in the case when $r$ is even, to a conjecture of O'Grady: on
the Hilbert square of a very general K3 surface of genus $r^2 + 2$, $r \geq 1$,
there is an antisymplectic involution. We explicitly give this involution on
$S^{[2]}$ in terms of the corresponding EPW polarization on it.
|
Milky Way Cepheid variables with accurate {\it Hubble Space Telescope}
photometry have been established as standards for primary calibration of the
cosmic distance ladder to achieve a percent-level determination of the Hubble
constant ($H_0$). These 75 Cepheid standards are the fundamental sample for
investigation of possible residual systematics in the local $H_0$ determination
due to metallicity effects on their period-luminosity relations. We obtained
new high-resolution ($R\sim81,000$), high signal-to-noise ($S/N\sim50-150$)
multi-epoch spectra of 42 out of 75 Cepheid standards using the ESPaDOnS instrument
at the 3.6-m Canada-France-Hawaii Telescope. Our spectroscopic metallicity
measurements are in good agreement with the literature values with systematic
differences up to $0.1$ dex due to different metallicity scales. We homogenized
and updated the spectroscopic metallicities of all 75 Milky Way Cepheid
standards and derived their multiwavelength ($GVIJHK_s$)
period-luminosity-metallicity and period-Wesenheit-metallicity relations using
the latest {\it Gaia} parallaxes. The metallicity coefficients of these
empirically calibrated relations exhibit large uncertainties due to low
statistics and a narrow metallicity range ($\Delta\textrm{[Fe/H]}=0.6$~dex).
These metallicity coefficients are up to three times better constrained if we
include Cepheids in the Large Magellanic Cloud and range between $-0.21\pm0.07$
and $-0.43\pm0.06$ mag/dex. The updated spectroscopic metallicities of these
Milky Way Cepheid standards were used in the Cepheid-Supernovae distance ladder
formalism to determine $H_0 = 72.9 \pm 1.0$~km~s$^{-1}$~Mpc$^{-1}$,
suggesting little variation ($\sim 0.1$ ~km~s$^{-1}$~Mpc$^{-1}$) in the local
$H_0$ measurements due to different Cepheid metallicity scales.
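For reference, the generic form of a period-luminosity-metallicity relation in a photometric band is sketched below; the coefficient symbols are placeholders and do not correspond to the fitted values quoted above.

```latex
% Schematic period-luminosity-metallicity (PLZ) relation in a band $\lambda$; the
% coefficients are generic placeholders, not the values fitted in the paper:
\[
  M_\lambda \;=\; \alpha_\lambda + \beta_\lambda \,\log_{10} P + \gamma_\lambda\,\mathrm{[Fe/H]},
\]
% with $\gamma_\lambda$ in mag/dex, the type of coefficient whose quoted range above
% is $-0.21$ to $-0.43$ mag/dex.
```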
|
We investigate the Li2CuSb full-Heusler alloy using first-principles
electronic structure calculations and propose electrochemical lithiation in
this alloy. Band structure calculations suggest that this alloy is metallic, in
contrast to the half-metallic nature predicted for most members of the
full-Heusler alloy family. The alloy is found to be a promising anode material
for high-capacity lithium-ion rechargeable batteries. We find a removal voltage
of 2.48 V for lithium ions in the Li2CuSb/Cu cell, in good agreement with the
experimentally obtained result for the similar material Cu3Sb. During charge
and discharge cycles of the Li2CuSb/Cu cell, the formation of a
non-stoichiometric compound Li2-yCu1+xSb with a structure similar to that of
Li2CuSb suggests good performance and stability of this cell.
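As background, the standard first-principles estimate of an average (de)lithiation voltage is sketched below; the specific cell reaction and total energies entering the 2.48 V value reported above are not reproduced here.

```latex
% Standard DFT average-voltage estimate (schematic; the actual reaction step and
% total energies used for the Li2CuSb/Cu cell are placeholders here):
\[
  \bar V \;\approx\; -\,\frac{E[\mathrm{Li}_{x_2}\mathrm{Host}] - E[\mathrm{Li}_{x_1}\mathrm{Host}]
      - (x_2 - x_1)\,E[\mathrm{Li}]}{(x_2 - x_1)\,e},
\]
% with $E[\cdot]$ total energies per formula unit and $e$ the elementary charge.
```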
|
The quantum entangled $J/\psi \to \Sigma^{+}\bar{\Sigma}^{-}$ pairs from
$(1.0087\pm0.0044)\times10^{10}$ $J/\psi$ events taken by the BESIII detector
are used to study the non-leptonic two-body weak decays $\Sigma^{+} \to n
\pi^{+}$ and $\bar{\Sigma}^{-} \to \bar{n} \pi^{-}$. The $C\!P$-odd weak decay
parameters of the decays $\Sigma^{+} \to n \pi^{+}$ ($\alpha_{+}$) and
$\bar{\Sigma}^{-} \to \bar{n} \pi^{-}$ ($\bar{\alpha}_{-}$) are determined to
be $-0.0565\pm0.0047_{\rm stat}\pm0.0022_{\rm syst}$ and $0.0481\pm0.0031_{\rm
stat}\pm0.0019_{\rm syst}$, respectively. The decay parameter
$\bar{\alpha}_{-}$ is measured for the first time, and the accuracy of
$\alpha_{+}$ is improved by a factor of four compared to the previous results.
The simultaneously determined decay parameters allow the first precision $C\!P$
symmetry test for any hyperon decay with a neutron in the final state with the
measurement of
$A_{C\!P}=(\alpha_{+}+\bar{\alpha}_{-})/(\alpha_{+}-\bar{\alpha}_{-})=-0.080\pm0.052_{\rm
stat}\pm0.028_{\rm syst}$. Assuming $C\!P$ conservation, the average decay
parameter is determined as $\left< \alpha_{+}\right>=(\alpha_{+}-
\bar{\alpha}_{-})/2 = -0.0506\pm0.0026_{\rm stat}\pm0.0019_{\rm syst}$, while
the ratios $\alpha_{+}/\alpha_{0}$ and $\bar{\alpha}_{-}/\bar\alpha_{0}$ are
$-0.0490\pm0.0032_{\rm stat}\pm0.0021_{\rm syst}$ and $-0.0571\pm0.0053_{\rm
stat}\pm0.0032_{\rm syst}$, where $\alpha_{0}$ and $\bar\alpha_{0}$ are the
decay parameters of the decays $\Sigma^{+} \to p \pi^{0}$ and $\bar{\Sigma}^{-}
\to \bar{p} \pi^{0}$, respectively.
|
Let $k$ be a field of positive characteristic. Building on the work of the
second named author, we define a new class of $k$-algebras, called diagonally
$F$-regular algebras, for which the so-called Uniform Symbolic Topology
Property (USTP) holds effectively. We show that this class contains all
essentially smooth $k$-algebras. We also show that this class contains certain
singular algebras, such as the affine cone over $\mathbb{P}^r_{k} \times
\mathbb{P}^s_{k}$, when $k$ is perfect. By reduction to positive
characteristic, it follows that USTP holds effectively for the affine cone over
$\mathbb{P}^r_{\mathbb{C}} \times \mathbb{P}^s_{\mathbb{C}}$ and more generally
for complex varieties of diagonal $F$-regular type.
|
Intelligent reflecting surfaces (IRSs) were introduced to enhance the
performance of wireless communication systems. However, from a service
provider's viewpoint, a concern with the use of an IRS is its effect on
out-of-band (OOB) quality of service. Specifically, if two operators, say X and
Y, provide services in a given geographical area using non-overlapping
frequency bands, and if operator X uses an IRS to enhance the spectral
efficiency (SE) of its users (UEs), does it degrade the performance of UEs
served by operator Y? We answer this by analyzing the average and instantaneous
performances of the OOB operator considering both sub-6 GHz and mmWave bands.
Specifically, we derive the ergodic sum SE achieved by the operators under
round-robin scheduling. We also derive the outage probability and analyze the
change in the SNR caused by the IRS at an OOB UE using stochastic dominance
theory. Surprisingly, even though the IRS is randomly configured from operator
Y's point of view, the OOB operator still benefits from the presence of the
IRS, witnessing a performance enhancement for free in both sub-6 GHz and mmWave
bands. This is because the IRS introduces additional paths between the
transmitter and receiver, increasing the overall signal power arriving at the
UE and providing diversity benefits. Finally, we show that the use of
opportunistic scheduling schemes can further enhance the benefit of the
uncontrolled IRS at OOB UEs. We numerically illustrate our findings and
conclude that an IRS is always beneficial to every operator, even when the IRS
is deployed and controlled by only one operator.
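For orientation, a generic form of the round-robin ergodic sum spectral efficiency is sketched below; the exact definitions, channel models, and SNR expressions used in the analysis above may differ.

```latex
% Generic round-robin ergodic sum spectral efficiency for K users (schematic; the
% paper's exact SNR model, including the uncontrolled IRS-reflected paths seen by
% the out-of-band operator, is not reproduced here):
\[
  \mathrm{SE}_{\mathrm{sum}} \;=\; \frac{1}{K}\sum_{k=1}^{K}
      \mathbb{E}\!\left[\log_2\!\left(1+\gamma_k\right)\right],
\]
% where $\gamma_k$ is the received SNR of user $k$.
```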
|
We consider, on a trivial vector bundle over a Riemannian manifold with
boundary, the inverse problem of uniquely recovering time- and space-dependent
coefficients of the dynamic, vector-valued Schr\"odinger equation from the
knowledge of the Dirichlet-to-Neumann map. We show that the D-to-N map uniquely
determines both the connection form and the potential appearing in the
Schr\"odinger equation, under the assumption that the manifold is either a)
two-dimensional and simple, or b) of higher dimension with strictly convex
boundary and admits a smooth, strictly convex function.
|
Searching for spectroscopic families in the whole set of discovered diffuse
interstellar bands (DIBs) is an indirect way of attacking the problem of DIB
carriers. Based on optical high-resolution spectra covering the range from
5655 to 7020 \AA, we found a few relatively strong DIBs that are not well
correlated with one another and may therefore serve as representatives of
separate spectroscopic families. In the next step we identified DIBs that tend
to follow the behaviour of these representatives. As a result of our analysis
we propose a few, probably not yet complete, spectroscopic families of DIBs.
|
Artificial Intelligence is a central topic in the computer science
curriculum. Since 2011, a project-based learning methodology based on computer
games has been designed and implemented in the artificial intelligence course
at the University of the Bio-Bio. The project aims to develop
software-controlled agents (bots) that are programmed using the heuristic
algorithms covered during the course. This methodology yields good learning
results; however, several challenges have been encountered during its
implementation.
In this paper we show how linguistic descriptions of data can help to provide
students and teachers with technical and personalized feedback about the
learned algorithms. An algorithm behavior profile and a new Turing test for
computer-game bots, based on linguistic modelling of complex phenomena, are
also proposed in order to deal with these challenges.
To show and explore the possibilities of this new technology, a web platform
has been designed and implemented by one of the authors, and its incorporation
into the assessment process allows us to improve the teaching-learning
process.
|
We put forward and analyze an explicit finite difference scheme for the
Camassa-Holm shallow water equation that can handle general $H^1$ initial data
and thus peakon-antipeakon interactions. Assuming a specified condition
restricting the time step in terms of the spatial discretization parameter, we
prove that the difference scheme converges strongly in $H^1$ towards a
dissipative weak solution of the Camassa-Holm equation.
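To make the setting concrete, a generic explicit discretization of the Camassa-Holm equation in momentum form is sketched below. It is only an illustration on smooth data with periodic boundaries, not the $H^1$-convergent scheme analyzed above, and its parameters are invented.

```python
# Generic explicit finite-difference step for the Camassa-Holm equation in momentum
# form, m = u - u_xx, m_t + u m_x + 2 u_x m = 0, with periodic boundaries. This is
# only an illustration on smooth data, NOT the H^1-convergent scheme analyzed above;
# the time step must be kept small relative to dx for this naive scheme to behave.
import numpy as np

def camassa_holm_step(u, dx, dt):
    # central differences (periodic) for u_x, u_xx, and m_x
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    m = u - uxx
    mx = (np.roll(m, -1) - np.roll(m, 1)) / (2 * dx)
    m_new = m - dt * (u * mx + 2 * ux * m)        # explicit Euler on the momentum
    # recover u from m by solving (1 - d^2/dx^2) u = m with a periodic FFT solve
    k = 2 * np.pi * np.fft.fftfreq(len(u), d=dx)
    return np.real(np.fft.ifft(np.fft.fft(m_new) / (1 + k**2)))

if __name__ == "__main__":
    n, L = 256, 20.0
    x = np.linspace(0, L, n, endpoint=False)
    u = np.exp(-((x - L / 2) ** 2))               # smooth initial profile
    for _ in range(200):
        u = camassa_holm_step(u, dx=L / n, dt=1e-3)
    print("max |u| at t = 0.2:", float(np.abs(u).max()))
```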
|
In this work we introduce a new expression for the plasma Dielectronic
Recombination (DR) rate as a function of temperature, derived by assuming a
small deformation of the Maxwell-Boltzmann distribution and containing
corrective factors, in addition to the usual exponential behaviour, caused by
non-linear effects in slightly non-ideal plasmas. We then compare the
calculated DR rates with the experimental DR fits in the low-temperature
region.
|
We study the nature of and approach to thermal equilibrium in isolated
quantum systems. An individual isolated macroscopic quantum system in a pure or
mixed state is regarded as being in thermal equilibrium if all macroscopic
observables assume rather sharply the values obtained from thermodynamics. Of
such a system (or state) we say that it is in macroscopic thermal equilibrium
(MATE). A stronger requirement than MATE is that even microscopic observables
(i.e., ones referring to a small subsystem) have a probability distribution in
agreement with that obtained from the micro-canonical, or equivalently the
canonical, ensemble for the whole system. Of such a system we say that it is in
microscopic thermal equilibrium (MITE). The distinction between MITE and MATE
is particularly relevant for systems with many-body localization (MBL) for
which the energy eigenfunctions fail to be in MITE while necessarily most of
them, but not all, are in MATE. However, if we consider superpositions of
energy eigenfunctions (i.e., typical wave functions $\psi$) in an energy shell,
then for generic macroscopic systems, including those with MBL, most $\psi$ are
in both MATE and MITE. We explore here the properties of MATE and MITE and
compare the two notions, thereby elaborating on ideas introduced in [Goldstein
et al., Phys.Rev.Lett. 115: 100402 (2015)].
|
Quantum supermaps are a higher-order generalization of quantum maps, taking
quantum maps to quantum maps. It is known that any completely positive, trace
non-increasing (CPTNI) map can be performed as part of a quantum measurement.
By providing an explicit counterexample we show that, instead, not every
quantum supermap sending a quantum channel to a CPTNI map can be realized in a
measurement on quantum channels. We find that the supermaps that can be
implemented in this way are exactly those transforming quantum channels into
CPTNI maps even when tensored with the identity supermap. We link this result
to the fact that the principle of causality fails in the theory of quantum
supermaps.
|
Higher order nonclassical properties of fields propagating through a
codirectional asymmetric nonlinear optical coupler, which is prepared by
combining a linear waveguide and a nonlinear (quadratic) waveguide operated
by second harmonic generation, are studied. A completely quantum mechanical
description is used here to describe the system. Closed form analytic solutions
of Heisenberg's equations of motion for various modes are used to show the
existence of higher order antibunching, higher order squeezing, higher order
two-mode and multi-mode entanglement in the asymmetric nonlinear optical
coupler. It is also shown that nonclassical properties of light can transfer
from a nonlinear waveguide to a linear waveguide.
|
In this paper we investigate the occurrence of the Zeno and anti-Zeno effects
for quantum Brownian motion. We single out the parameters of both the system
and the reservoir governing the crossover between Zeno and anti-Zeno dynamics.
We demonstrate that, for high reservoir temperatures, the short time behaviour
of environment induced decoherence is the ultimate responsible for the
occurrence of either the Zeno or the anti-Zeno effect. Finally we suggest a way
to manipulate the decay rate of the system and to observe a controlled
continuous passage from decay suppression to decay acceleration using
engineered reservoirs in the trapped-ion context.
|
In this short note we prove the unirationality of Hurwitz spaces of 6-gonal
curves of genus $g$ with $5\leq g\leq 28$ or $g=30,31,33,35,36,40,45$. The key
ingredient is a liaison construction in $\PP^1 \times \PP^2$. By
semicontinuity, the proof of the dominance of this construction is reduced to a
computation of a single curve over a finite field.
|
We present a model-independent analysis of CP violation, inspired by recent
experimental observations, in charmed meson decays. The topological diagram
approach is used to study direct CP asymmetries for singly Cabibbo-suppressed
two-body hadronic decays of charmed mesons. We extract the magnitudes and
relative phases of the corresponding topological amplitudes from available
experimental information. In order to get more precise and reliable estimates
of direct CP asymmetries, we take into account contributions from all possible
strong penguin amplitudes, including the internal $b$-quark penguin
contributions. We also study flavor SU(3) symmetry breaking effects in these
decay modes and consequently, predict direct CP asymmetries of unmeasured
modes.
|
The abundant semi-structured data on the Web, such as HTML-based tables and
lists, provide commercial search engines with a rich information source for question
answering (QA). Different from plain text passages in Web documents, Web tables
and lists have inherent structures, which carry semantic correlations among
various elements in tables and lists. Many existing studies treat tables and
lists as flat documents with pieces of text and do not make good use of
semantic information hidden in structures. In this paper, we propose a novel
graph representation of Web tables and lists based on a systematic
categorization of the components in semi-structured data as well as their
relations. We also develop pre-training and reasoning techniques on the graph
model for the QA task. Extensive experiments on several real datasets collected
from a commercial engine verify the effectiveness of our approach. Our method
improves F1 score by 3.90 points over the state-of-the-art baselines.
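As an illustration of the general idea, a minimal graph construction over a toy table is sketched below; the node types and relation labels (in_column, same_row) are hypothetical placeholders, not the categorization actually proposed in the paper.

```python
# Minimal sketch of turning an HTML-style table into a graph of typed nodes and
# relations. Node types and edge labels are hypothetical placeholders.
import networkx as nx

def table_to_graph(header, rows):
    g = nx.Graph()
    # column-header nodes
    for j, col in enumerate(header):
        g.add_node(("col", j), text=col, ntype="header")
    # cell nodes, linked to their column header and to same-row neighbours
    for i, row in enumerate(rows):
        for j, cell in enumerate(row):
            g.add_node(("cell", i, j), text=cell, ntype="cell")
            g.add_edge(("cell", i, j), ("col", j), rel="in_column")
            if j > 0:
                g.add_edge(("cell", i, j), ("cell", i, j - 1), rel="same_row")
    return g

if __name__ == "__main__":
    header = ["City", "Population"]
    rows = [["Paris", "2.1M"], ["Berlin", "3.6M"]]
    g = table_to_graph(header, rows)
    print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```

A graph of this kind, unlike a flattened text sequence, preserves which cells share a row or a column, which is the structural signal the pre-training and reasoning components can then exploit.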
|
Entities lie at the heart of biomedical natural language understanding, and
the biomedical entity linking (EL) task remains challenging due to fine-grained
and diverse concept names. Generative methods achieve remarkable performance in
general-domain EL with lower memory usage, but require expensive pre-training.
Previous biomedical EL methods leverage synonyms from knowledge bases (KBs),
which are not trivial to inject into a generative method. In this work, we use
a generative approach to model biomedical EL and propose to inject synonym
knowledge into it. We propose KB-guided pre-training, which constructs
synthetic samples from synonyms and definitions in the KB and requires the
model to recover concept names. We also propose synonyms-aware fine-tuning to
select concept names for training, and propose a decoder prompt and a
multi-synonyms constrained prefix tree for inference. Our method achieves
state-of-the-art results on several biomedical EL tasks without candidate
selection, which demonstrates the effectiveness of the proposed pre-training
and fine-tuning strategies.
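To illustrate the inference-time constraint, a minimal prefix tree over concept names and synonyms is sketched below; the whitespace tokenizer and the allowed_next interface are simplified placeholders, not the paper's implementation.

```python
# Minimal sketch of a prefix tree (trie) used to constrain a generative decoder so
# that it can only emit token sequences corresponding to valid concept names or
# synonyms. Tokenization and the decoder interface are simplified placeholders.
class PrefixTree:
    def __init__(self, names):
        self.root = {}
        for name in names:
            node = self.root
            for tok in name.split():          # toy tokenizer: whitespace
                node = node.setdefault(tok, {})
            node["<eos>"] = {}                # mark a complete concept name

    def allowed_next(self, prefix_tokens):
        """Tokens the decoder is allowed to generate after prefix_tokens."""
        node = self.root
        for tok in prefix_tokens:
            if tok not in node:
                return set()
            node = node[tok]
        return set(node.keys())

if __name__ == "__main__":
    trie = PrefixTree(["heart attack", "heart failure", "myocardial infarction"])
    print(trie.allowed_next([]))              # {'heart', 'myocardial'}
    print(trie.allowed_next(["heart"]))       # {'attack', 'failure'}
```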
|
Boson stars are often described as macroscopic Bose-Einstein condensates. By
accommodating large numbers of bosons in the same quantum state, they
materialize macroscopically the intangible probability density cloud of a
single particle in the quantum world. We take this interpretation of boson
stars one step further. We show, by explicitly constructing the fully
non-linear solutions, that static (in terms of their spacetime metric,
$g_{\mu\nu}$) boson stars, composed of a single complex scalar field, $\Phi$,
can have a non-trivial multipolar structure, yielding the same morphologies for
their energy density as those that elementary hydrogen atomic orbitals have for
their probability density. This provides a close analogy between the elementary
solutions of the non-linear Einstein--Klein-Gordon theory, denoted
$\Phi_{(N,\ell,m)}$, which could be realized in the macrocosmos, and those of
the linear Schr\"odinger equation in a Coulomb potential, denoted
$\Psi_{(N,\ell,m)}$, that describe the microcosmos. In both cases, the
solutions are classified by a triplet of quantum numbers $(N,\ell,m)$. In the
gravitational theory, multipolar boson stars can be interpreted as individual
bosonic lumps in equilibrium; remarkably, the (generic) solutions with $m\neq
0$ describe gravitating solitons $[g_{\mu\nu},\Phi_{(N,\ell,m)}]$ without any
continuous symmetries. Multipolar boson stars analogous to hybrid orbitals
are also constructed.
|
Recently, machine learning based single-image super-resolution (SR)
approaches have focused on jointly learning representations for high-resolution
(HR) and low-resolution (LR) image patch pairs to improve the quality of the
super-resolved images. However, because they treat all image pixels equally
without considering salient structures, these approaches usually fail to
produce visually pleasant images with sharp edges and fine details. To address
this issue, in this work we present a novel SR approach that replaces the main
building blocks of the classical interpolation pipeline with flexible,
content-adaptive deep neural networks. In particular, two well-designed
structure-aware components, respectively capturing local and holistic image
contents, are naturally incorporated into the fully convolutional
representation learning to enhance image sharpness and naturalness. Extensive
evaluations on several standard benchmarks (e.g., Set5, Set14 and BSD200)
demonstrate that our approach achieves superior results, especially on images
with salient structures, over many existing state-of-the-art SR methods under
both quantitative and qualitative measures.
|
Given a finite set $X\subseteq\R$ we characterize the diagonals of
self-adjoint operators with spectrum $X$. Our result extends the Schur-Horn
theorem from a finite-dimensional setting to an infinite-dimensional Hilbert
space, analogous to Kadison's theorem for orthogonal projections and the second
author's result for operators with three-point spectrum.
|
We propose a Wigner quasiprobability distribution function for Hamiltonian
systems in spaces of constant curvature (in this paper, on hyperboloids),
which returns the correct marginals and has the covariance of the Shapiro
functions under SO(D,1) transformations. To the free systems obeying the
Laplace-Beltrami equation on the hyperboloid, we add a conic-oscillator
potential in the hyperbolic coordinate. As an example, we analyze the
1-dimensional case on a hyperbola branch, where this conic-oscillator is the
P\"oschl-Teller potential. We present the analytical solutions and plot the
computed results. The standard theory of quantum oscillators is regained in the
contraction limit to the space of zero curvature.
|
Models of X-ray reverberation from extended coronae are developed from
general relativistic ray tracing simulations. Reverberation lags between
correlated variability in the directly observed continuum emission and that
reflected from the accretion disc arise due to the additional light travel time
between the corona and reflecting disc. X-ray reverberation is detected from an
increasing sample of Seyfert galaxies and a number of common properties are
observed, including a transition from the characteristic reverberation
signature at high frequencies to a hard lag within the continuum component at
low frequencies, as well as a pronounced dip in the reverberation lag at 3 keV.
These features are not trivially explained by the reverberation of X-rays
originating from simple point sources. We therefore model reverberation from
coronae extended both over the surface of the disc and vertically. Causal
propagation through the corona's extent is included, both for the simple case
of constant-velocity propagation and for propagation linked to the viscous
timescale of the underlying accretion disc, together with stochastic
variability arising from local turbulence on the disc. We find that the
observed features of X-ray
reverberation in Seyfert galaxies can be explained if the long timescale
variability is dominated by the viscous propagation of fluctuations through the
corona. The corona extends radially at low height over the surface of the disc
but with a bright central region in which fluctuations propagate up the black
hole rotation axis driven by more rapid variability arising from the innermost
regions of the accretion flow.
|
We generalize Pearl's back-door criterion for directed acyclic graphs (DAGs)
to more general types of graphs that describe Markov equivalence classes of
DAGs and/or allow for arbitrarily many hidden variables. We also give easily
checkable necessary and sufficient graphical criteria for the existence of a
set of variables that satisfies our generalized back-door criterion, when
considering a single intervention and a single outcome variable. Moreover, if
such a set exists, we provide an explicit set that fulfills the criterion. We
illustrate the results in several examples. R-code is available in the
R-package pcalg.
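As a reminder of what a valid back-door set licenses, a minimal adjustment-formula sketch on toy data is given below; variable names and data are placeholders, and finding a valid adjustment set from a graph is what the (generalized) criterion itself addresses.

```python
# Minimal sketch of the back-door adjustment formula licensed by a valid set Z:
# P(Y | do(X=x)) = sum_z P(Y | X=x, Z=z) P(Z=z). Data and variable names are toy
# placeholders, unrelated to the pcalg implementation mentioned above.
import numpy as np
import pandas as pd

def backdoor_adjust(df, x_col, y_col, z_cols, x_val, y_val):
    total = 0.0
    for _, group in df.groupby(z_cols):
        p_z = len(group) / len(df)                       # P(Z = z)
        stratum = group[group[x_col] == x_val]
        if len(stratum) == 0:
            continue                                     # positivity violated in this stratum
        p_y_given_xz = (stratum[y_col] == y_val).mean()  # P(Y = y | X = x, Z = z)
        total += p_y_given_xz * p_z
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.integers(0, 2, 5000)                         # confounder
    x = (rng.random(5000) < 0.2 + 0.5 * z).astype(int)   # treatment depends on z
    y = (rng.random(5000) < 0.1 + 0.3 * x + 0.4 * z).astype(int)
    df = pd.DataFrame({"x": x, "y": y, "z": z})
    print("P(Y=1 | do(X=1)) ~", round(backdoor_adjust(df, "x", "y", ["z"], 1, 1), 3))
```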
|
I show that any complex manifold that resembles a rank two compact Hermitian
symmetric space (other than a quadric hypersurface) to order two at a general
point must be an open subset of such a space.
|
The interplay between superconductivity and Eu$ ^{2+}$ magnetic moments in
EuFe$_2$(As$_{1-x}$P$_x$)$_2$ is studied by electrical resistivity measurements
under hydrostatic pressure on $x=0.13$ and $x=0.18$ single crystals. We can map
hydrostatic pressure to chemical pressure $x$ and show, that superconductivity
is confined to a very narrow range $0.18\leq x \leq 0.23$ in the phase diagram,
beyond which ferromagnetic (FM) Eu ordering suppresses superconductivity. The
change from antiferro- to FM Eu ordering at the latter concentration coincides
with a Lifshitz transition and the complete depression of iron magnetic order.
|
We use Berezin's quantization procedure to obtain a formal $U_q
su_{1,1}$-invariant deformation of the quantum disc. Explicit formulae for the
associated q-bidifferential operators are produced.
|
Supervised machine learning applications in the health domain often face the
problem of insufficient training datasets. The quantity of labelled data is
small due to privacy concerns and the cost of data acquisition and labelling by
a medical expert. Furthermore, it is quite common that collected data are
unbalanced and getting enough data to personalize models for individuals is
very expensive or even infeasible. This paper addresses these problems by (1)
designing a recurrent Generative Adversarial Network to generate realistic
synthetic data and augment the original dataset, (2) enabling the generation of
balanced datasets from a heavily unbalanced dataset, and (3) controlling the
data generation in such a way that the generated data resemble data from
specific individuals. We apply these solutions to sleep apnea detection and
evaluate the performance of four well-known techniques, i.e., K-Nearest
Neighbour, Random Forest, Multi-Layer Perceptron, and Support Vector Machine.
In the experiments, all classifiers exhibit a consistent increase in
sensitivity and an increase in the kappa statistic of between 0.007 and 0.182.
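A minimal sketch of this evaluation protocol is given below, with noisy copies of minority-class samples standing in for the recurrent GAN output; the classifier settings and data are toy placeholders.

```python
# Minimal sketch of the evaluation protocol: train the four classifiers on original
# and on augmented data, then compare sensitivity (recall) and Cohen's kappa. The
# synthetic samples here are a stand-in (noisy copies), NOT the recurrent GAN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import recall_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy unbalanced dataset: 900 negatives, 100 positives
X = rng.normal(size=(1000, 8))
y = np.r_[np.zeros(900), np.ones(100)].astype(int)
X[y == 1] += 0.8                                      # make classes weakly separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# stand-in "synthetic" minority samples (a GAN generator would be used instead)
X_syn = X_tr[y_tr == 1] + rng.normal(scale=0.1, size=X_tr[y_tr == 1].shape)
X_aug = np.vstack([X_tr, X_syn])
y_aug = np.r_[y_tr, np.ones(len(X_syn), int)]

models = {"kNN": KNeighborsClassifier(), "RF": RandomForestClassifier(random_state=0),
          "MLP": MLPClassifier(max_iter=500, random_state=0), "SVM": SVC()}
for name, model in models.items():
    for tag, (Xt, yt) in {"orig": (X_tr, y_tr), "aug": (X_aug, y_aug)}.items():
        pred = model.fit(Xt, yt).predict(X_te)
        print(f"{name:3s} ({tag}): sensitivity={recall_score(y_te, pred):.2f} "
              f"kappa={cohen_kappa_score(y_te, pred):.2f}")
```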
|
We derive the formula for the stationary states of particle-number conserving
exclusion processes infinitesimally perturbed by inhomogeneous adsorption and
desorption. The formula not only proves but also generalises the conjecture
proposed in arXiv:1711.06949 to account for inhomogeneous adsorption and
desorption. As an application of the formula, we draw part of the phase
diagrams of the open asymmetric simple exclusion process with and without
Langmuir kinetics, correctly reproducing known results.
|
A condition is identified that implies that solutions to the stochastic
reaction-diffusion equation $\frac{\partial u}{\partial t} = \mathcal{A} u +
f(u) + \sigma(u) \dot{W}$ on a bounded spatial domain never explode. We
consider the case where $\sigma$ grows polynomially and $f$ is polynomially
dissipative, meaning that $f$ strongly forces solutions toward finite values.
This result demonstrates the role that the deterministic forcing term $f$ plays
in preventing explosion.
|
Beyond 14 GPa of pressure, bi-layered La$_3$Ni$_2$O$_7$ was recently found to
develop strong superconductivity above the liquid nitrogen boiling temperature.
An immediate essential question is the pressure-induced qualitative change of
electronic structure that enables the exciting high-temperature
superconductivity. We investigate this timely question via a numerical
multi-scale derivation of effective many-body physics. At the atomic scale, we
first clarify that the system has a strong charge transfer nature with
itinerant carriers residing mainly in the in-plane oxygen between spin-1
Ni$^{2+}$ ions. We then elucidate, at the eV and sub-eV scale, the key physical
effect of the applied pressure: it induces a cuprate-like electronic structure
through partially screening the Ni spin from 1 to 1/2. This suggests a
high-temperature superconductivity in La$_3$Ni$_2$O$_7$ with microscopic
mechanism and ($d$-wave) symmetry similar to that in the cuprates.
|
We derive an approximate analytic solution for a single fluxon in double
stacked Josephson junctions (SJJs) for arbitrary junction parameters and
coupling strengths. It is shown that the fluxon in double SJJs can be
characterized by two components, with different Swihart velocities and
Josephson penetration depths. Using perturbation theory we find the second
order correction to the solution and analyze its accuracy. Comparison with
direct numerical simulations shows a quantitative agreement between exact and
approximate analytic solutions. It is shown that, due to the presence of two
components, the fluxon in SJJs may have an unusual shape with an inverted
magnetic field in the second junction when the velocity of the fluxon is
approaching the lower Swihart velocity.
|
We present results of a 150 MHz survey of a field centered on Epsilon
Eridani, undertaken with the Giant Metrewave Radio Telescope (GMRT). The survey
covers an area with a diameter of 2 deg, has a spatial resolution of 30" and a
noise level of 3.1 mJy at the pointing centre. These observations provide a
deeper and higher resolution view of the 150 MHz radio sky than the 7C survey
(although the 7C survey covers a much larger area). A total of 113 sources were
detected; most are point-like, but 20 are extended. We present an analysis of
these sources, in conjunction with the NVSS (at 1.4 GHz) and VLSS (at 74 MHz).
This process allowed us to identify 5 Ultra Steep Spectrum (USS) radio sources
that are candidate high redshift radio galaxies (HzRGs). In addition, we have
derived the dN/dS distribution for these observations and compare our results
with other low frequency radio surveys.
|
I review recent work that goes beyond our model for the Low-Frequency
Quasi-Periodic Oscillation of microquasars, based on the Accretion-Ejection
Instability. I show that similar instabilities, which can be viewed as strongly
unstable versions of the diskoseismologic modes, provide explanations for both
the High-Frequency QPO and for the quasi-periodicity observed during the flares
of Sgr A*, the supermassive black hole at the Galactic Center.
|
Segmentation of skin lesions is considered an important step in
computer-aided diagnosis (CAD) for automated melanoma diagnosis. In recent years,
segmentation methods based on fully convolutional networks (FCN) have achieved
great success in general images. This success is primarily due to the
leveraging of large labelled datasets to learn features that correspond to the
shallow appearance as well as the deep semantics of the images. However, this
dependence on large datasets does not translate well to medical images. To
improve the FCN performance for skin lesion segmentations, researchers
attempted to use specific cost functions or add post-processing algorithms to
refine the coarse boundaries of the FCN results. However, the performance of
these methods is heavily reliant on the tuning of many parameters and
post-processing techniques. In this paper, we leverage the state-of-the-art
image feature learning method of generative adversarial networks (GANs) for
their inherent ability to produce consistent and realistic image features using
deep neural networks and the adversarial learning concept. We improve upon the
GAN such that skin lesion features can be learned at different levels of
complexity, in a controlled manner. The outputs from our method are then added
to the existing FCN training data, thus increasing the overall feature
diversity. We
evaluated our method on the ISIC 2018 skin lesion segmentation challenge
dataset and showed that it was more accurate and robust when compared to the
existing skin lesion segmentation methods.
|
A new method for solving numerically stochastic partial differential
equations (SPDEs) with multiple scales is presented. The method combines a
spectral method with the heterogeneous multiscale method (HMM) presented in [W.
E, D. Liu, and E. Vanden-Eijnden, Comm. Pure Appl. Math., 58(11):1544--1585,
2005]. The class of problems that we consider are SPDEs with quadratic
nonlinearities that were studied in [D. Blomker, M. Hairer, and G.A. Pavliotis,
Nonlinearity, 20(7):1721--1744, 2007.] For such SPDEs an amplitude equation
which describes the effective dynamics at long time scales can be rigorously
derived for both advective and diffusive time scales. Our method, based on
micro and macro solvers, allows us to capture the amplitude equation numerically
and accurately at a cost independent of the small scales in the problem. Numerical
experiments illustrate the behavior of the proposed method.
|
A semi-classical approach is used to calculate radiation emission in the
collision of an electron with an intense focused laser pulse. The results are
compared to predictions from the locally constant field and locally
monochromatic approximations. It is found that simulations employing the
semi-classical approach capture features in the energy spectra, such as
subharmonics and bandwidth structure, which are beyond the local approximations. The
formation length is introduced as a diagnostic to select between approaches as
the electron is propagated through the pulse.
|
As humans we possess an intuitive ability for navigation which we master
through years of practice; however, existing approaches to modelling this trait
for diverse tasks, including monitoring pedestrian flow and detecting abnormal
events, have been limited by their reliance on a variety of hand-crafted
features. Recent research in deep learning has demonstrated the power of
learning features directly from data, and related research in recurrent neural
networks has shown exemplary results in sequence-to-sequence problems such as
neural machine translation and neural image caption generation. Motivated by
these approaches, we propose a novel method to predict the future motion of a
pedestrian given a short history of their own, and their neighbours', past
behaviour. The novelty of the proposed method is the combined attention model
which utilises both "soft attention" as well as "hard-wired" attention in order
to map the trajectory information from the local neighbourhood to the future
positions of the pedestrian of interest. We illustrate how a simple
approximation of attention weights (i.e., hard-wired) can be merged with soft
attention weights in order to make our model applicable to challenging
real-world scenarios with hundreds of neighbours. The navigational capability
of the proposed method is tested on two challenging publicly available
surveillance databases, where our model outperforms the current state-of-the-art
methods. Additionally, we illustrate how the proposed architecture can be
directly applied for the task of abnormal event detection without handcrafting
the features.
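A minimal sketch of mixing learned soft-attention weights with fixed, distance-based hard-wired weights is given below; the inverse-distance form and the convex mixing rule are illustrative assumptions, not the proposed architecture.

```python
# Minimal sketch of combining learned "soft" attention weights with fixed,
# distance-based "hard-wired" weights over neighbouring pedestrians. The mixing
# rule and the inverse-distance form are illustrative placeholders.
import numpy as np

def soft_attention(query, neighbour_feats):
    scores = neighbour_feats @ query                     # dot-product scores
    scores -= scores.max()
    w = np.exp(scores)
    return w / w.sum()

def hard_wired_weights(own_pos, neighbour_pos):
    dist = np.linalg.norm(neighbour_pos - own_pos, axis=1)
    w = 1.0 / (dist + 1e-6)                              # closer neighbours matter more
    return w / w.sum()

def combined_context(query, own_pos, neighbour_feats, neighbour_pos, alpha=0.5):
    w = alpha * soft_attention(query, neighbour_feats) \
        + (1 - alpha) * hard_wired_weights(own_pos, neighbour_pos)
    return w @ neighbour_feats                           # weighted neighbourhood summary

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(5, 8))                      # 5 neighbours, 8-d features
    pos = rng.normal(size=(5, 2))
    ctx = combined_context(rng.normal(size=8), np.zeros(2), feats, pos)
    print(ctx.shape)                                     # (8,)
```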
|
Multi-messenger observations and theory of astrophysical objects are fast
becoming a critical research area in the astrophysics community. In particular,
point-like objects such as BL Lac objects, flat-spectrum radio quasars (FSRQs),
and blazar candidates of uncertain type (BCUs) are of distinct interest to
those who study the synchrotron, Compton, neutrino, and cosmic-ray emissions
sourced from compact objects. Notably, there is also much interest in the
correlation between multi-frequency observations of blazars and neutrino
surveys on source demographics. In this review we look at such multi-frequency
and multi-physics correlations of the radio, X-ray, and $\gamma$-ray fluxes of
different classes of blazars from a collection of survey catalogues. This
multi-physics survey of blazars shows that there are characteristic
cross-correlations in the spectra of blazars when considering their
multi-frequency and multi-messenger emission. Accompanying this will be a
review of cosmic ray and neutrino emissions from blazars and their
characteristics.
|
We derive the off-shell nilpotent Becchi-Rouet-Stora-Tyutin (BRST) and
anti-BRST symmetry transformations for {\it all} the fields of a free Abelian
2-form gauge theory by exploiting the geometrical superfield approach to BRST
formalism. The above (3 + 1)-dimensional (4D) theory is considered on a
(4, 2)-dimensional supermanifold parameterized by the four even spacetime
variables x^\mu (with \mu = 0, 1, 2, 3) and a pair of odd Grassmannian
variables \theta and \bar\theta (with \theta^2 = \bar\theta^2 = 0, \theta
\bar\theta + \bar\theta \theta = 0). One of the salient features of our present
investigation is that the above nilpotent (anti-)BRST symmetry transformations
turn out to be absolutely anticommuting due to the presence of a Curci-Ferrari
(CF) type of restriction. The latter condition emerges due to the application
of our present superfield formalism. The actual CF condition, as is well-known,
is the hallmark of a 4D non-Abelian 1-form gauge theory. We demonstrate that
our present 4D Abelian 2-form gauge theory imbibes some of the key signatures
of the 4D non-Abelian 1-form gauge theory. We briefly comment on the
generalization of our superfield approach to the case of Abelian 3-form gauge
theory in four (3 + 1)-dimensions of spacetime.
|
In general relativity, the energy conditions are invoked to restrict general
energy-momentum tensors on physical grounds. We show that in the standard
Friedmann-Lemaitre-Robertson-Walker approach to cosmological modelling where
the equation of state of the cosmological fluid is unknown, the energy
conditions provide model-independent bounds on the behavior of the distance
modulus of cosmic sources as a function of redshift. We use both the gold
and the legacy samples of current type Ia supernovae to carry out a
model-independent analysis of energy-condition violation in the context of
standard cosmology.
|
In this work, we demonstrate the open-loop control of chaotic systems by
means of optimized periodic signals. The use of such signals enables us to
reduce control power significantly in comparison to simple harmonic
perturbations. It is found that the stabilized periodic dynamics can be changed
by small, specific alterations of the control signal. Thus, low power switching
between different periodic states can be achieved without feedback. The
robustness of the proposed control method against noise is discussed.
|
The space ultraviolet (UV) is a critical astronomical observing window, where
a multitude of atomic, ionic, and molecular signatures provide crucial insight
into planetary, interstellar, stellar, intergalactic, and extragalactic
objects. The next generation of large space telescopes requires highly
sensitive, moderate-to-high-resolution UV spectrographs. However, sensitive
observations in the UV are difficult, as UV optical performance and imaging
efficiencies have lagged behind counterparts in the visible and infrared
regimes. This has historically resulted in simple, low-bounce instruments to
increase sensitivity. In this study, we present the design, fabrication, and
calibration of a simple, high resolution, high throughput far-UV spectrograph -
the Colorado High-resolution Echelle Stellar Spectrograph (CHESS). CHESS is a
sounding rocket payload built to demonstrate the instrument design for
next-generation UV space telescopes. We present tests and results on the
performance of several state-of-the-art diffraction grating and detector
technologies for far-UV astronomical applications that were flown aboard the
first two iterations of CHESS. The CHESS spectrograph was used to study the
atomic-to-molecular transitions within translucent cloud regions in the
interstellar medium (ISM) through absorption spectroscopy. The first two
flights looked at the sightlines towards alpha Virgo and epsilon Persei, and
flight results are presented.
|