Using radiological scans to identify liver tumors is crucial for proper
patient treatment. This is highly challenging, as top radiologists only achieve
F1 scores of roughly 80% (hepatocellular carcinoma (HCC) vs. others) with only
moderate inter-rater agreement, even when using multi-phase magnetic resonance
(MR) imagery. Thus, there is great impetus for computer-aided diagnosis (CAD)
solutions. A critical challenge is to robustly parse a 3D MR volume to localize
diagnosable regions of interest (ROI), especially for edge cases. In this
paper, we break down this problem using a key-slice parser (KSP), which
emulates physician workflows by first identifying key slices and then
localizing their corresponding key ROIs. To achieve robustness, the KSP also
uses curve-parsing and detection confidence re-weighting. We evaluate our
approach on the largest multi-phase MR liver lesion test dataset to date (430
biopsy-confirmed patients). Experiments demonstrate that our KSP can localize
diagnosable ROIs with high reliability: 87% of patients have an average 3D overlap of >= 40% with the ground truth, compared to only 79% using the best tested
detector. When coupled with a classifier, we achieve an HCC vs. others F1 score
of 0.801, providing a fully-automated CAD performance comparable to top human
physicians.
|
Blockchain-based federated learning is a distributed learning scheme that allows model training without participants sharing their local data sets, where the blockchain components eliminate the need for the trusted central server required by traditional federated learning algorithms. In this paper we propose a softmax-aggregation blockchain-based federated learning framework. First, we propose a new blockchain-based federated learning architecture that utilizes the well-tested proof-of-stake consensus mechanism on an existing blockchain network to select validators and miners to aggregate the participants' updates and compute the blocks. Second, to ensure the robustness of the aggregation process, we design a novel softmax aggregation method based on approximated population loss values that relies on our specific blockchain architecture. Additionally, we show that our softmax aggregation technique converges to the global minimum in the convex setting under non-restrictive assumptions. Our comprehensive experiments show that our framework outperforms existing robust aggregation algorithms in various settings by large margins.
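To convey the gist of the aggregation rule, here is a minimal Python sketch under our own assumptions: the function name, the temperature parameter, and the toy numbers are illustrative, not the paper's exact formulation. Each participant's update is weighted by a softmax over negated approximate population losses, so updates scored as high-loss are smoothly down-weighted.

```python
import numpy as np

def softmax_aggregate(updates, losses, temperature=1.0):
    """Weight each participant's update by a softmax over negated
    approximate population losses (lower loss -> larger weight)."""
    losses = np.asarray(losses, dtype=float)
    scores = -losses / temperature
    weights = np.exp(scores - scores.max())  # max-shift for numerical stability
    weights /= weights.sum()
    return np.average(np.stack(updates), axis=0, weights=weights)

# Three participants; the last submits a corrupted update whose high
# approximate loss drives its softmax weight toward zero.
updates = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([9.0, -5.0])]
losses = [0.21, 0.19, 3.4]
print(softmax_aggregate(updates, losses))  # dominated by the two good updates
```

A softmax rather than a hard cutoff degrades outlier influence smoothly, which fits the robustness goal described above.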
|
In the Hamiltonian picture, free spin-$1/2$ Dirac fermions on a bipartite
lattice have an $O(4)$ (spin-charge) symmetry. Here we construct an interacting
lattice model with an interaction $V$, which is similar to the Hubbard
interaction but preserves the spin-charge flip symmetry. By tuning the coupling
$V$, we show that we can study the phase transition between the massless
fermion phase at small-$V$ and a massive fermion phase at large-$V$. We
construct a fermion bag algorithm to study this phase transition and find evidence that it is second order. Our numerical study shows that the universality
class of the transition is different from the one studied earlier involving the
Hubbard coupling $U$. Here we obtain some critical exponents using lattices up
to $L=48$.
|
A method is presented that allows one to reduce a problem described by differential equations with initial and boundary conditions to a problem described only by differential equations. The advantage of using the modified
problem for physics-informed neural networks (PINNs) methodology is that it
becomes possible to represent the loss function in the form of a single term
associated with differential equations, thus eliminating the need to tune the
scaling coefficients for the terms related to boundary and initial conditions.
The weighted loss functions respecting causality are modified, and new weighted loss functions based on generalized functions are derived. Numerical
experiments have been carried out for a number of problems, demonstrating the
accuracy of the proposed methods.
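To make the single-term idea concrete, the following is a minimal PyTorch sketch for the toy problem $u'(t) = -u(t)$, $u(0) = 1$. It is our own illustration under a simplifying assumption (the initial condition is hard-coded into the ansatz), not the paper's generalized-function construction.

```python
import torch

# Toy problem: u'(t) = -u(t) on [0, 1] with u(0) = 1.

def standard_pinn_loss(model, t, lam_ic=10.0):
    """Usual PINN loss: equation residual plus a separately scaled
    initial-condition term whose coefficient lam_ic must be tuned."""
    t = t.clone().requires_grad_(True)
    u = model(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    ic = model(torch.zeros(1, 1)) - 1.0
    return ((du + u) ** 2).mean() + lam_ic * (ic ** 2).mean()

def single_term_loss(net, t):
    """Variant in the spirit of the abstract: the ansatz
    u(t) = 1 + t * net(t) satisfies u(0) = 1 exactly, so the loss
    reduces to the single differential-equation term."""
    t = t.clone().requires_grad_(True)
    u = 1.0 + t * net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return ((du + u) ** 2).mean()

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
t = torch.rand(256, 1)
print(standard_pinn_loss(net, t).item(), single_term_loss(net, t).item())
```

The second loss has no scaling coefficient to tune, which is precisely the advantage claimed above.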
|
We show that under Ricci curvature integral assumptions the dimension of the
first cohomology group can be estimated in terms of the Kato constant of the
negative part of the Ricci curvature. Moreover, this provides quantitative
statements about the cohomology group, in contrast to the results of Elworthy and Rosenberg.
|
The proton spin puzzle is a longstanding problem in high-energy nuclear physics: how the proton's spin is distributed among the spin and orbital angular momenta of the quarks and gluons inside it. Two of the unresolved pieces of the
puzzle are the contributions to quark and gluon spins from the region of small
Bjorken $x$. This dissertation fills the gap by constructing the evolution of
these quantities into the small-$x$ region using a modified dipole formalism.
The dominant contributions to the evolution equations resum powers of
$\alpha_s\ln^2(1/x)$, where $\alpha_s$ is the strong coupling constant. In
general, these evolution equations do not close. However, once the large-$N_c$
or large-$N_c\& N_f$ limit is taken, they turn into a system of linear integral
equations that can be solved iteratively. At large $N_c$, the evolution
equations are shown to be consistent with the gluon sector of the polarized
DGLAP evolution in the small-$x$ limit. We numerically solve the equations in
the large-$N_c$ and large-$N_c\& N_f$ limits and obtain the exponential growth
in $\ln(1/x)$ for $N_f \leq 5$, with the intercept decreasing with $N_f$. For
the large-$N_c$ limit, we have $\alpha_h = 3.66$, which agrees up to the
uncertainty with the earlier work by Bartels, Ermolaev and Ryskin. Furthermore,
at $N_f=6$, the asymptotic form attains an oscillation in $\ln(1/x)$ on top of
the exponential growth, with the period spanning several units of rapidity.
Finally, parts of the single-logarithmic corrections to the small-$x$ helicity evolution are also derived, resumming powers of $\alpha_s\ln(1/x)$. There, the
effects of the unpolarized small-$x$ evolution and the running coupling are
also included for consistency. The complete single-logarithmic corrections can
be derived based on the framework established here. Altogether, these equations
will provide the most precise small-$x$ helicity evolution to date.
|
Omnidirectional images, aka 360 images, can deliver immersive and interactive
visual experiences. As their popularity has increased dramatically in recent
years, evaluating the quality of 360 images has become a problem of interest
since it provides insights for capturing, transmitting, and consuming this new
media. However, directly adapting quality assessment methods proposed for
standard natural images for omnidirectional data poses certain challenges.
These models need to deal with very high-resolution data and implicit
distortions due to the spherical form of the images. In this study, we present
a method for no-reference 360 image quality assessment. Our proposed ST360IQ model extracts tangent viewports from the salient parts of the input omnidirectional image and employs a vision-transformer based module that processes saliency-selected patches/tokens and estimates a quality score for each viewport. It then aggregates these scores into a final quality score. Our experiments on two benchmark datasets, namely OIQA and CVIQ, demonstrate that, compared to the state of the art, our approach predicts the quality of an omnidirectional image in close agreement with human-perceived image quality. The code is available at
https://github.com/Nafiseh-Tofighi/ST360IQ
|
A possible method to reconstruct the cosmic equation of state using strong
gravitational lensing systems is proposed. The feasibility of the method is
investigated by carrying out the reconstruction on the basis of a simple Monte Carlo simulation. We show that the method can work and that the cosmic equation of state $w(z)$ can be determined within errors of $\Delta w \sim \pm 0.1$ -- $\pm 0.2$ when a sufficiently large number of lensing systems ($N \sim 20$) for $z \lesssim 1$ are precisely measured. Statistics of lensed sources in a
wide and deep survey like the SDSS are also briefly discussed.
|
We use the mean-field method, the quantum Monte Carlo method and the density-matrix renormalization-group method to study the trimer superfluid phase and the quantum phase diagram of the Bose-Hubbard model in an optical lattice, with an explicit trimer tunneling term.
Theoretically, we derive the explicit trimer hopping terms, such as $a_i^{\dagger 3}a_j^3$, by the Schrieffer-Wolff transformation. In practice, the trimer superfluid described by these terms is driven by photoassociation. The phase transitions between the trimer superfluid phase and other phases are also studied. Without the on-site interaction, the phase transition between the trimer superfluid phase and the Mott insulator phase is continuous. Turning on the on-site interaction, the phase transitions are first order with Mott insulators of atom filling $1$ and $2$. With nonzero atom tunneling, the phase transition is first order from the atom superfluid to the trimer superfluid. In the trimer superfluid phase, the winding numbers are divisible by three without any remainder. In the atom superfluid and pair superfluid, the vorticities are $1$ and $1/2$, respectively. However, the vorticity is $1/3$ for the trimer superfluid. The power-law decay exponent is $1/2$ for the off-diagonal correlation $a_i^{\dagger 3} a_j^{3}$, i.e., the same as the exponent of the correlation $a_i^{\dagger}a_j$ for hardcore bosons. The density-dependent atom-tunneling term $n_i^2a_i^{\dagger}a_j$ and pair-tunneling term $n_ia_i^{\dagger2}a_j^2$ are also studied. With these terms, the phase transition from the empty phase to the atom superfluid is first order, different from the case without the density-dependent terms. The effects of temperature are studied. Our results will be helpful in realizing the trimer superfluid in cold-atom experiments.
|
We investigate the influence of curvature and topology on crystalline
wrinkling patterns in generic elastic bilayers. Our numerical analysis predicts
that the total number of defects created by adiabatic compression exhibits
universal quadratic scaling for spherical, ellipsoidal and toroidal surfaces
over a wide range of system sizes. However, both the localization of individual
defects and the orientation of defect chains depend strongly on the local
Gaussian curvature and its gradients across a surface. Our results imply that
curvature and topology can be utilized to pattern defects in elastic materials,
thus promising improved control over hierarchical bending, buckling or folding
processes. Generally, this study suggests that bilayer systems provide an
inexpensive yet valuable experimental test-bed for exploring the effects of
geometrically induced forces on assemblies of topological charges.
|
Ante-hoc interpretability has become the holy grail of explainable artificial
intelligence for high-stakes domains such as healthcare; however, this notion
is elusive, lacks a widely-accepted definition and depends on the operational
context. It can refer to predictive models whose structure adheres to
domain-specific constraints, or ones that are inherently transparent. The
latter conceptualisation assumes observers who judge this quality, whereas the
former presupposes them to have technical and domain expertise (thus alienating
other groups of explainees). Additionally, the distinction between ante-hoc
interpretability and the less desirable post-hoc explainability, which refers
to methods that construct a separate explanatory model, is vague given that
transparent predictive models may still require (post-)processing to yield
suitable explanatory insights. Ante-hoc interpretability is thus an overloaded
concept that comprises a range of implicit properties, which we unpack in this
paper to better understand what is needed for its safe adoption across
high-stakes domains. To this end, we outline modelling and explaining
desiderata that allow us to navigate its distinct realisations in view of the
envisaged application and audience.
|
A sensor network generally has a large number of sensor nodes that are deployed at some audited site. In most sensor networks the nodes are static. Nevertheless, node connectivity is subject to changes because of disruptions in wireless communication, transmission power changes, or loss of synchronization between neighbouring nodes, so synchronization between neighbouring nodes must be maintained for efficient communication. Hence, even after a sensor is aware of its immediate neighbours, it must continuously maintain its view, a process we call continuous neighbour discovery. In this proposed work we maintain synchronization between neighbouring nodes so that the sensor network remains active at all times.
|
Between 1995 and 2009, electron temperature (Te) measurements of more than
15000 plasmas produced in the Joint European Torus (JET) have been carefully
reviewed using the two main diagnostics available over this time period:
Michelson interferometer and Thomson scattering systems. Long-term stability of JET Te is experimentally assessed by defining the ECE TS ratio as the ratio of central Te measured by the Michelson interferometer and by LIDAR.
This paper, based on a careful review of Te measurements from 15 years of JET plasmas, concludes that JET Te exhibits a 15-20% effective uncertainty mostly made of large-scale temporal drifts, and an overall uncertainty of 16-22%. Variations of 18 plasma parameters are checked in a separate "reference data set" of ohmic pulses kept as similar as possible between 1998 and 2009. Time drifts of ECE TS ratios appear to be mostly disconnected from
the variations observed on these 18 plasma parameters, except for the very low
amplitude variations of the field which are well correlated with off-plasma
variations of an 8-channel integrator module used for measuring many magnetic
signals from JET.
From mid-2002 to 2009, temporal drifts of ECE TS ratios are regarded as
calibration drifts possibly caused by unexpected sensitivity to unknown
parameters; the external temperature on JET site might be the best parameter
suspected so far.
Off-plasma monitoring of the Michelson interferometer, based on calibrations performed in the laboratory, is reported and does not appear to be clearly correlated with the drifts of the ECE TS ratio or the variations of the magnetics signal integrators. Comparison of estimations of plasma thermal energy for purely ohmic and NBI-only plasmas does not provide any definite information on the accuracy of the Michelson or LIDAR measurements.
Whatever causes these Te drifts, this experimental issue is regarded as
crucial for JET data quality.
|
We prove that the two-variable fragment of first-order logic has the weak
Beth definability property. This makes the two-variable fragment a natural
logic separating the weak and the strong Beth properties since it does not have
the strong Beth definability property.
|
We consider a class of Backward Stochastic Differential Equations with
superlinear driver process $f$ adapted to a filtration supporting at least a
$d$-dimensional Brownian motion and a Poisson random measure on ${\mathbb R}^m \setminus \{0\}$. We consider the following class of terminal conditions $\xi_1 = \infty
\cdot 1_{\{\tau_1 \le T\}}$ where $\tau_1$ is any stopping time with a bounded
density in a neighborhood of $T$ and $\xi_2 = \infty \cdot 1_{A_T}$ where
$A_t$, $t \in [0,T]$ is a decreasing sequence of events adapted to the
filtration ${\mathcal F}_t$ that is continuous in probability at $T$. A special
case for $\xi_2$ is $A_T = \{\tau_2 > T\}$ where $\tau_2$ is any stopping time
such that $P(\tau_2 =T) =0.$ In this setting we prove that the minimal
supersolutions of the BSDE are in fact solutions, i.e., they attain almost
surely their terminal values. We further show that the first exit time from a
time varying domain of a $d$-dimensional diffusion process driven by the
Brownian motion with strongly elliptic covariance matrix does have a continuous
density; therefore such exit times can be used as $\tau_1$ and $\tau_2$ to
define the terminal conditions $\xi_1$ and $\xi_2.$ The proof of existence of
the density is based on the classical Green's functions for the associated PDE.
|
We report on the preparation of dense monofilamentary MgB2/Ni and MgB2/Fe tapes with high critical current densities. In annealed MgB2/Ni tapes, we obtained transport critical current densities as high as 2.3x10^5 A/cm^2 at 4.2 K and 1.5 T, and for MgB2/Fe tapes 10^4 A/cm^2 at 4.2 K and 6.5 T. To the best of our knowledge, these are the highest transport j_c values at 4.2 K reported for MgB2-based tapes so far. An extrapolation to zero field of the MgB2/Fe data gives a critical current density of ~1 MA/cm^2, corresponding to a critical current value well above 1000 A. The high j_c values obtained after annealing are a consequence of sintering densification and grain reconnection. Fe does not react with MgB2 and is thus an excellent sheath material candidate for tapes with self-field j_c values at 4.2 K in excess of 1 MA/cm^2.
|
We present a comparison of the noncommutative field theories built using two
different star products: Moyal and Wick-Voros (or normally ordered). We compare
the two theories in the context of the noncommutative geometry determined by a
Drinfeld twist, and the comparison is made at the level of Green's functions
and S-matrix. We find that while the Green's functions are different for the
two theories, the S-matrix is the same in both cases, and is different from the
commutative case.
|
We show that the black hole binary (BHB) coalescence rates inferred from the
advanced LIGO (aLIGO) detection of GW150914 imply an unexpectedly loud GW sky
at milli-Hz frequencies accessible to the evolving Laser Interferometer Space
Antenna (eLISA), with several outstanding consequences. First, up to thousands
of BHBs will be individually resolvable by eLISA; second, millions of unresolvable BHBs will build a confusion noise detectable with a signal-to-noise ratio of a few to hundreds; third -- and perhaps most importantly -- up to
hundreds of BHBs individually resolvable by eLISA will coalesce in the aLIGO
band within ten years. eLISA observations will tell aLIGO and all
electromagnetic probes weeks in advance when and where these BHB coalescences
are going to occur, with uncertainties of <10s and <1deg^2. This will allow the
pre-pointing of telescopes to realize coincident GW and multi-wavelength
electromagnetic observations of BHB mergers. Time coincidence is critical
because prompt emission associated with a BHB merger will likely have a duration comparable to the dynamical time-scale of the system, and is only possible
with low frequency GW alerts.
|
The spiral structure of the Milky Way can be simulated by adopting
percolation theory, where the active zones are produced by the evolution of many supernovae (SN). Here we assume, conversely, that the percolative process is triggered by superbubbles (SB), the result of multiple SN. A first thermal
model takes into account a bursting phase which evolves in a medium with
constant density, and a subsequent adiabatic expansion which evolves in a
medium with decreasing density along the galactic height. A second cold model
follows the evolution of an SB in an auto-gravitating medium in the framework
of the momentum conservation in a thin layer. Both the thermal and cold models
are compared with the results of numerical hydro-dynamics. A simulation of
GW~46.4+5.5, the Gould Belt, and the Galactic Plane is reported. An elementary
theory of the image, which allows reproducing the hole visible at the center of
the observed SB, is provided.
|
Flow cytometry (FCM) is the standard multi-parameter assay for measuring
single cell phenotype and functionality. It is commonly used for quantifying
the relative frequencies of cell subsets in blood and disaggregated tissues. A
typical analysis of FCM data involves cell classification---that is, the
identification of cell subgroups in the sample---and comparisons of the cell
subgroups across samples or conditions. While modern experiments often
necessitate the collection and processing of samples in multiple batches,
analysis of FCM data across batches is challenging because differences across
samples may occur due to either true biological variation or technical reasons
such as antibody lot effects or instrument optics across batches. Thus a
critical step in comparative analyses of multi-sample FCM data---yet missing in
existing automated methods for analyzing such data---is cross-sample
calibration, whose goal is to align corresponding cell subsets across multiple
samples in the presence of technical variations, so that biological variations
can be meaningfully compared. We introduce a Bayesian nonparametric
hierarchical modeling approach for accomplishing both calibration and cell
classification simultaneously in a unified probabilistic manner. Three
important features of our method make it particularly effective for analyzing
multi-sample FCM data: a nonparametric mixture avoids prespecifying the number of cell clusters; a hierarchical skew normal kernel allows flexibility in the shapes of the cell subsets and cross-sample variation in their locations; and finally a "coarsening" strategy makes inference robust to departures from the model, such as heavy-tailedness not captured by the skew normal kernels. We
demonstrate the merits of our approach in simulated examples and carry out a
case study in the analysis of two multi-sample FCM data sets.
|
Generative Adversarial Networks (GANs) have been promising in the field of
image generation, however, they have been hard to train for language
generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them, which causes high levels of instability when training GANs. Consequently, past work has resorted to
pre-training with maximum-likelihood or training GANs without pre-training with
a WGAN objective with a gradient penalty. In this study, we present a
comparison of those approaches. Furthermore, we present the results of some
experiments that indicate better training and convergence of Wasserstein GANs (WGANs) when a weaker regularization term is used to enforce the Lipschitz constraint.
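As an illustration of what a "weaker regularization term" can look like, here is a PyTorch sketch of a Lipschitz penalty with a one-sided option (penalizing only gradient norms above 1, a weaker constraint than the original two-sided gradient penalty). The abstract does not specify its exact regularizer, so the function, its name, and its defaults are assumptions.

```python
import torch

def lipschitz_penalty(critic, real, fake, one_sided=True, lam=10.0):
    """WGAN Lipschitz penalty evaluated on random interpolates between
    real and fake samples."""
    eps = torch.rand(real.size(0), 1)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x).sum(), x, create_graph=True)[0]
    norms = grad.norm(2, dim=1)
    if one_sided:
        penalty = torch.clamp(norms - 1.0, min=0.0) ** 2  # weaker, one-sided
    else:
        penalty = (norms - 1.0) ** 2                      # standard two-sided
    return lam * penalty.mean()

# Usage in a critic step (toy shapes):
critic = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))
real, fake = torch.randn(32, 8), torch.randn(32, 8)
loss = (-critic(real).mean() + critic(fake).mean()
        + lipschitz_penalty(critic, real, fake, one_sided=True))
print(loss.item())
```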
|
We obtain a new self-similar solution to Einstein's equations in
four-dimensions, representing the collapse of a spherically symmetric,
minimally coupled, massless, scalar field. Depending on the value of certain
parameters, this solution represents the formation of naked singularities and
black holes. Since the black holes are identified as the Schwarzschild ones,
one may naturally see how these black holes are produced as remnants of the
scalar field collapse.
|
We study in this paper a compartmental SIR model for a population distributed in a bounded domain D of $\mathbb{R}^d$, $d = 1$, $2$, or $3$. We describe a spatial
model for the spread of a disease on a grid of D. We prove two laws of large
numbers. On the one hand, we prove that the stochastic model converges to the
corresponding deterministic patch model as the size of the population tends to
infinity. On the other hand, by letting both the size of the population tend to
infinity and the mesh of the grid go to zero, we obtain a law of large numbers
in the supremum norm, where the limit is a diffusion SIR model in D.
|
A convenient formalism for averaging the losses produced by gravitational
radiation backreaction over one orbital period was developed in an earlier
paper. In the present paper we generalize this formalism to include the case of
a closed system composed of two bodies of comparable masses, one of them having spin S.
We employ the equations of motion given by Barker and O'Connell, where terms
up to linear order in the spin (the spin-orbit interaction terms) are kept. To
obtain the radiative losses up to terms linear in the spin, the equations of
motion are taken to the same order. Then the magnitude L of the angular
momentum L, the angle kappa subtended by S and L and the energy E are
conserved. The analysis of the radial motion leads to a new parametrization of
the orbit.
From the instantaneous gravitational radiation losses computed by Kidder the
leading terms and the spin-orbit terms are taken. Following Apostolatos,
Cutler, Sussman and Thorne, the evolution of the vectors S and L in the
momentary plane spanned by these vectors is separated from the evolution of the
plane in space. The radiation-induced change in the spin is smaller than the
leading-order spin terms in the momentary angular momentum loss. This enables
us to compute the averaged losses in the constants of motion E, L and L_S=L cos
kappa. In the latter, the radiative spin loss terms average to zero. An
alternative description using the orbital elements a,e and kappa is given.
The finite-mass effects contribute terms, comparable in magnitude, to the basic test-particle spin terms in the averaged losses.
|
Advancements in unmanned aerial vehicle (UAV) technology have led to their
increased utilization in various commercial and military applications. One such
application is signal source search and localization (SSSL) using UAVs, which
offers significant benefits over traditional ground-based methods due to
improved RF signal reception at higher altitudes and inherent autonomous 3D
navigation capabilities. Nevertheless, practical considerations such as
propagation models and antenna patterns are frequently neglected in
simulation-based studies in the literature. In this work, we address these
limitations by using a two-ray channel model and a dipole antenna pattern to
develop a simulator that more closely represents real-world radio signal
strength (RSS) observations at a UAV. We then examine and compare the
performance of previously proposed linear least square (LLS) based localization
techniques using UAVs for SSSL. Localization of radio frequency (RF) signal
sources is assessed based on two main criteria: 1) achieving the highest
possible accuracy and 2) localizing the target as quickly as possible with
reasonable accuracy. Various mission types, such as those requiring precise
localization like identifying hostile troops, and those demanding rapid
localization like search and rescue operations during disasters, have been
previously investigated. In this paper, the efficacy of the proposed
localization approaches is examined based on these two main localization
requirements through computer simulations.
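For concreteness, below is a small Python sketch of the kind of two-ray ground-reflection RSS model mentioned above. All names and parameter values are illustrative assumptions, and the dipole-pattern gain factor is omitted for brevity (it would multiply each ray's amplitude by the element gain at that ray's angle).

```python
import numpy as np

def two_ray_rss_dbm(p_tx_dbm, d_ground, h_tx, h_rx, freq_hz=2.4e9, refl=-1.0):
    """Received signal strength under a flat-earth two-ray model: a direct
    path plus one ground-reflected path with reflection coefficient 'refl'
    (-1 for a perfectly reflecting ground at grazing incidence)."""
    c = 3e8
    lam = c / freq_hz
    d_los = np.hypot(d_ground, h_tx - h_rx)   # direct path length
    d_ref = np.hypot(d_ground, h_tx + h_rx)   # reflected path length
    k = 2 * np.pi / lam
    # Coherent sum of the two rays; free-space amplitude ~ lam / (4*pi*d).
    field = (lam / (4 * np.pi)) * (
        np.exp(-1j * k * d_los) / d_los + refl * np.exp(-1j * k * d_ref) / d_ref)
    return p_tx_dbm + 20 * np.log10(np.abs(field))

# RSS seen by a UAV at 50 m altitude from a 1.5 m high source, vs ground range.
for d in (100, 300, 1000):
    print(d, "m:", round(two_ray_rss_dbm(20, d, 1.5, 50.0), 1), "dBm")
```

The interference between the two rays produces the altitude- and range-dependent fading lobes that make such a simulator more realistic than a free-space model.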
|
Visual embellishments, as a form of non-linguistic rhetorical figures, are
used to help convey abstract concepts or attract readers' attention. Creating
data visualizations with appropriate and visually pleasing embellishments is
challenging since this process largely depends on the experience and the
aesthetic taste of designers. To help facilitate designers in the ideation and
creation process, we propose a design space, VizBelle, based on the analysis of
361 classified visualizations from online sources. VizBelle consists of four
dimensions, namely, communication goal to fit user intention, object to select
the target area, strategy and technique to offer potential approaches. We
further provide a website to present detailed explanations and examples of
various techniques. We conducted a within-subject study with 20 professional
and amateur design enthusiasts to evaluate the effectiveness of our design
space. Results show that our design space is illuminating and useful for
designers to create data visualizations with embellishments.
|
Quantum rate theory encompasses the electron-transfer rate constant concept
of electrochemical reactions as a particular setting, besides demonstrating
that the electrodynamics of these reactions obey relativistic quantum
mechanical rules. The theory predicts a frequency $\nu = E/h$ for
electron-transfer reactions, in which $E = e^2/C_q$ is the energy associated
with the density-of-states $C_q/e^2$ and $C_q$ is the quantum capacitance of
the electrochemical junctions. This work demonstrates that the $\nu = E/h$
frequency of the intermolecular charge transfer of push-pull heterocyclic
compounds, assembled over conducting electrodes, follows the above-stated
quantum rate electrodynamic principles. Astonishingly, the differences between
the molecular junction electronics formed by push-pull molecules and the
electrodynamics of electrochemical reactions observed in redox-active modified
electrodes are solely owing to an adiabatic setting (strictly following
Landauer's ballistic presumption) of the quantum conductance in the push-pull
molecular junctions. An appropriate electrolyte field-effect screening
environment accounts for the resonant quantum conductance dynamics of the
molecule-bridge-electrode structure, in which the intermolecular charge
transfer dynamics within the frontier molecular orbital of push-pull
heterocyclic molecules follow relativistic quantum mechanics in agreement with
the quantum rate theory.
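For a sense of scale, here is a worked instance of the stated relation $\nu = E/h$ with $E = e^2/C_q$; the 1 aF quantum capacitance is purely illustrative, not a value from the text:

```latex
\nu = \frac{E}{h} = \frac{e^2}{h\,C_q}
\quad\Longrightarrow\quad
\nu \approx \frac{(1.602\times 10^{-19}\,\mathrm{C})^2}
{(6.626\times 10^{-34}\,\mathrm{J\,s})\,(10^{-18}\,\mathrm{F})}
\approx 3.9\times 10^{13}\,\mathrm{Hz},
```

i.e. a charge-transfer frequency in the tens of terahertz for attofarad-scale junctions.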
|
In this thesis we focus on studying the physics of cosmological recombination
and how the details of recombination affect the Cosmic Microwave Background
(CMB) anisotropies. We present a detailed calculation of the spectral line
distortions on the CMB spectrum arising from the Lyman-alpha and the lowest
two-photon transitions in the recombination of hydrogen (H), and the
corresponding lines from helium (He). The peak of these distortions mainly
comes from the Lyman-alpha transition and occurs at about 170 microns, which is
the Wien part of the CMB. The major theoretical limitation for extracting
cosmological parameters from the CMB sky lies in the precision with which we
can calculate the cosmological recombination process. With this motivation, we
perform a multi-level calculation of the recombination of H and He with the
addition of the spin-forbidden transition for neutral helium (He I), plus the
higher order two-photon transitions for H and among singlet states of He I. We
find that the inclusion of the spin-forbidden transition results in more than a
percent change in the ionization fraction, while the other transitions give
much smaller effects. Lastly, we modify RECFAST by introducing one more parameter
to reproduce recent numerical results for the speed-up of helium recombination.
Together with the existing hydrogen `fudge factor', we vary these two
parameters to account for the remaining dominant uncertainties in cosmological
recombination. By using a Markov Chain Monte Carlo method with Planck forecast
data, we find that we need to determine the parameters to better than 10% for
He I and 1% for H, in order to obtain negligible effects on the cosmological
parameters.
|
Internet of Things (IoT) systems are bundles of networked sensors and
actuators that are deployed in an environment and act upon the sensory data
that they receive. These systems, especially consumer electronics, have two
main cooperating components: a device and a mobile app. The unique combination
of hardware and software in IoT systems presents challenges that are less familiar to mainstream software developers and might require innovative solutions to support the development and integration of such systems. In this
paper, we analyze more than 90,000 reviews of ten IoT devices and their
corresponding apps and extract the issues that users encountered while using
these systems. Our results indicate that issues with connectivity, timing, and
updates are particularly prevalent in the reviews. Our results call for a new
software-hardware development framework to assist the development of reliable
IoT systems.
|
Short $\mathbb{C}^2$'s were constructed in [F] as attracting basins of a
sequence of holomorphic automorphisms whose rate of attraction increases
superexponentially. The goal of this paper is to show that such domains also
arise naturally as autonomous attracting basins: we construct a transcendental
H\'enon map with an oscillating wandering Fatou component that is a Short
$\mathbb{C}^2$. The superexponential rate of attraction is not obtained at
single iterations, but along consecutive oscillations.
|
We derive the Bosonic Dynamical Mean-Field equations for bosonic atoms in
optical lattices with arbitrary lattice geometry. The equations are presented
as a systematic expansion in 1/z, z being the number of lattice neighbors.
Hence the theory is applicable in sufficiently high-dimensional lattices. We
apply the method to a two-component mixture, for which a rich phase diagram
with spin-order is revealed.
|
We deal with decay and boundedness properties of radial functions belonging
to Besov and Lizorkin-Triebel spaces. In detail we investigate the surprising
interplay of regularity and decay. Our tools are atomic decompositions in
combination with trace theorems.
|
We present a comprehensive analysis of the contributions to K->pi nu nu
decays not described by the leading dimension-six effective Hamiltonian. These
include both dimension-eight four-fermion operators generated at the charm
scale, and genuine long-distance contributions which can be described within
the framework of chiral perturbation theory. We show that a consistent
treatment of the latter contributions, which turn out to be the dominant
effect, requires the introduction of new chiral operators already at O(GF^2
p^2). Using this new chiral Lagrangian, we analyze the long-distance structure
of K->pi nu nu amplitudes at the one-loop level, and discuss the role of the
dimension-eight operators in the matching between short- and long-distance
components. From the numerical point of view, we find that these O(GF^2
LambdaQCD^2) corrections enhance the SM prediction of Br(K+->pi+ nu nu) by
about 6%.
|
We present a result on topologically equivalent integral metrics (Rachev, 1991; Muller, 1997) that metrize weak convergence of laws with common
marginals. This result is relevant for applications, as shown in a few simple
examples.
|
We have demonstrated that GaN Schottky diodes can be used for high energy
(64.8 MeV) proton detection. Such proton beams are used for tumor treatment,
for which accurate and radiation resistant detectors are needed. Schottky
diodes have been measured to be highly sensitive to protons, to have a linear response with beam intensity, and to be fast enough for the application. Some
photoconductive gain was found in the diode leading to a good compromise
between responsivity and response time. The imaging capability of GaN diodes in
proton detection is also demonstrated.
|
We establish an explicit expression for the smallest non-zero eigenvalue of
the Laplace--Beltrami operator on every homogeneous metric on the 3-sphere, or
equivalently, on SU(2) endowed with left-invariant metric. For the subfamily of
3-dimensional Berger spheres, we obtain a full description of their spectra. We
also give several consequences of the mentioned expression. One of them
improves known estimates for the smallest non-zero eigenvalue in terms of the
diameter for homogeneous 3-spheres. Another application shows that the spectrum
of the Laplace--Beltrami operator distinguishes up to isometry any
left-invariant metric on SU(2). We also prove the non-existence of constant scalar curvature metrics conformal and arbitrarily close to any non-round homogeneous metric on the 3-sphere. All of the above results are extended to
left-invariant metrics on SO(3), that is, homogeneous metrics on the
3-dimensional real projective space.
|
Deep neural networks are gaining in popularity as they are used to generate
state-of-the-art results for a variety of computer vision and machine learning
applications. At the same time, these networks have grown in depth and
complexity in order to solve harder problems. Given the limitations in power
budgets dedicated to these networks, the importance of low-power, low-memory
solutions has been stressed in recent years. While many dedicated hardware designs using different precisions have recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic in both inputs and network parameters. In this work, we address this issue and perform a study
of different bit-precisions in neural networks (from floating-point to
fixed-point, powers of two, and binary). In our evaluation, we consider and
analyze the effect of precision scaling on both network accuracy and hardware
metrics including memory footprint, power and energy consumption, and design
area. We also investigate training-time methodologies to compensate for the
reduction in accuracy due to limited bit precision and demonstrate that in most
cases, precision scaling can deliver significant benefits in design metrics at
the cost of very modest decreases in network accuracy. In addition, we propose
that a small portion of the benefits achieved when using lower precisions can
be forfeited to increase the network size and therefore the accuracy. We
carry out our experiments on three well-recognized networks and datasets to show the generality of our findings. We investigate the trade-offs and highlight the benefits
of using lower precisions in terms of energy and memory footprint.
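To ground the precision formats being compared, here is a small NumPy sketch (our own, with assumed bit widths) of fixed-point and power-of-two weight quantization of the kind studied in such evaluations.

```python
import numpy as np

def quantize_fixed_point(w, int_bits=2, frac_bits=6):
    """Simulate signed fixed-point quantization (int_bits.frac_bits):
    round-to-nearest on a 2^-frac_bits grid, with saturation."""
    scale = 2.0 ** frac_bits
    limit = 2.0 ** int_bits            # representable range ~ [-limit, limit)
    q = np.round(w * scale) / scale
    return np.clip(q, -limit, limit - 1.0 / scale)

def quantize_power_of_two(w):
    """Round each weight to the nearest signed power of two, so that
    multiplications become bit shifts in hardware."""
    sign = np.sign(w)
    mag = np.abs(w)
    exp = np.round(np.log2(np.where(mag > 0, mag, 1e-12)))
    return sign * 2.0 ** exp

w = np.random.randn(5).astype(np.float32)
print(w)
print(quantize_fixed_point(w))
print(quantize_power_of_two(w))
```

Binary quantization is the limiting case (sign only), which is where the accuracy-compensating training-time methodologies mentioned above matter most.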
|
We study linearized perturbations of Myers-Perry black holes in d=7, with two
of the three angular momenta set to be equal, and show that instabilities
always appear before extremality. Analogous results are expected for all higher
odd d. We determine numerically the stationary perturbations that mark the
onset of instability for the modes that preserve the isometries of the
background. The onset is continuously connected between the previously studied
sectors of solutions with a single angular momentum and solutions with all
angular momenta equal. This shows that the near-extremality instabilities are
of the same nature as the ultraspinning instability of d>5 singly-spinning
solutions, for which the angular momentum is unbounded. Our results raise the
question of whether there are any extremal Myers-Perry black holes which are
stable in d>5.
|
In the present study, we address the relationship between the emotions
perceived in pop and rock music (mainly in Euro-American styles with English
lyrics) and the language spoken by the listener. Our goal is to understand the
influence of lyrics comprehension on the perception of emotions and use this
information to improve Music Emotion Recognition (MER) models. Two main
research questions are addressed: 1. Are there differences and similarities
between the emotions perceived in pop/rock music by listeners raised with
different mother tongues? 2. Do personal characteristics have an influence on
the perceived emotions for listeners of a given language? Personal
characteristics include the listeners' general demographics, familiarity and
preference for the fragments, and music sophistication. Our hypothesis is that
inter-rater agreement (as defined by Krippendorff's alpha coefficient) from
subjects is directly influenced by the comprehension of lyrics.
|
The conversion of protons to positrons at the horizon of a black hole (BH) is
considered. It is shown that the process may efficiently proceed for BHs with
masses in the range $\sim 10^{18}$ -- $10^{21}$ g. It is argued that the
electric charge acquired by the BH through proton accretion could create an electric field near the BH horizon close to the critical Schwinger field. This leads to efficient electron-positron pair production, in which electrons are captured back
by the BH while positrons are emitted into outer space. Annihilation of these
positrons with electrons in the interstellar medium may at least partially
explain the origin of the observed 511 keV line.
|
We prove that an $L^\infty$ potential in the Schr\"odinger equation in three
and higher dimensions can be uniquely determined from a finite number of
boundary measurements, provided it belongs to a known finite dimensional
subspace $\mathcal W$. As a corollary, we obtain a similar result for
Calder\'on's inverse conductivity problem. Lipschitz stability estimates and a
globally convergent nonlinear reconstruction algorithm for both inverse
problems are also presented. These are the first results on global uniqueness,
stability and reconstruction for nonlinear inverse boundary value problems with
finitely many measurements. We also discuss a few relevant examples of finite
dimensional subspaces $\mathcal W$, including bandlimited and piecewise
constant potentials, and explicitly compute the number of required measurements
as a function of $\dim \mathcal W$.
|
We report on an experimental study of the dynamics of the reflection of
ultracold atoms from a periodic one-dimensional magnetic lattice potential. The
magnetic lattice potential of period 10 $\mu$m is generated by applying a
uniform bias magnetic field to a microfabricated periodic structure on a
silicon wafer coated with a multilayered TbGdFeCo/Cr magneto-optical film. The
effective thickness of the magnetic film is about 960 nm. A detailed study of
the profile of the reflected atoms as a function of externally induced periodic
corrugation in the potential is described. The effect of angle of incidence is
investigated in detail. The experimental observations are supported by
numerical simulations.
|
Based on a criterion of mathematical simplicity and consistency with
empirical market data, a stochastic volatility model has been obtained with the
volatility process driven by fractional noise. Depending on whether the
stochasticity generators of log-price and volatility are independent or are the
same, two versions of the model are obtained with different leverage behavior.
Here, the no-arbitrage and incompleteness properties of the model are studied.
Some risk measures are also discussed in this framework.
|
Anatomy of the human brain constrains the formation of large-scale functional
networks. Here, given measured brain activity in gray matter, we interpolate
these functional signals into the white matter on a structurally-informed
high-resolution voxel-level brain grid. The interpolated volumes reflect the
underlying anatomical information, revealing white matter structures that
mediate functional signal flow between temporally coherent gray matter regions.
Functional connectivity analyses of the interpolated volumes reveal an enriched
picture of the default mode network (DMN) and its subcomponents, including how
white matter bundles support their formation, thus transcending currently known
spatial patterns that are limited within the gray matter only. These
subcomponents have distinct structure-function patterns, each of which is
differentially recruited during tasks, demonstrating plausible structural
mechanisms for functional switching between task-positive and -negative
components. This work opens new avenues for integration of brain structure and
function and demonstrates how global patterns of activity arise from a
collective interplay of signal propagation along different white matter
pathways.
|
Universal solutions to deformation quantization problems can be conveniently
classified by the cohomology of suitable graph complexes. In particular, the
deformation quantizations of (finite-dimensional) Poisson manifolds and Lie
bialgebras are characterised by an action of the Grothendieck-Teichm\"uller
group via one-colored directed and oriented graphs, respectively. In this note,
we study the action of multi-oriented graph complexes on Lie bialgebroids and
their "quasi" generalisations. Using results due to T. Willwacher and M.
Zivkovi\'c on the cohomology of (multi)-oriented graphs, we show that the
action of the Grothendieck-Teichm\"uller group on Lie bialgebras and quasi-Lie
bialgebras can be generalised to quasi-Lie bialgebroids via graphs with two
colors, one of them being oriented. However, this action generically fails to
preserve the subspace of Lie bialgebroids. By resorting to graphs with two
oriented colors, we instead show the existence of an obstruction to the
quantization of a generic Lie bialgebroid in the guise of a new
$\mathsf{Lie}_\infty$-algebra structure non-trivially deforming the "big
bracket" for Lie bialgebroids. This exotic $\mathsf{Lie}_\infty$-structure can
be interpreted as the equivalent in $d=3$ of the Kontsevich-Shoikhet
obstruction to the quantization of infinite-dimensional Poisson manifolds (in
$d=2$). We discuss the implications of these results with respect to a
conjecture due to P. Xu regarding the existence of a quantization map for Lie
bialgebroids.
|
Recent results from the KASCADE experiment on measurements of cosmic rays in
the energy range of the knee are presented. Emphasis is placed on energy
spectra of individual mass groups as obtained from a two-dimensional unfolding
applied to the reconstructed electron and truncated muon numbers of each
individual EAS. The data show a knee-like structure in the energy spectra of
light primaries (p, He, C) and an increasing dominance of heavy ones (A > 20)
towards higher energies. This basic result is robust against uncertainties of
the applied interaction models QGSJET and SIBYLL which are used in the shower
simulations to analyse the data. Slight differences observed between
experimental data and EAS simulations provide important clues for further
improvements of the interaction models. The data are complemented by new limits
on global anisotropies in the arrival directions of CRs and by upper limits on
point sources. Astrophysical implications for discriminating models of maximum
acceleration energy vs galactic diffusion/drift models of the knee are
discussed based on this data.
|
We present the results of a reweighting calculation to compute the
contribution of the charged quark sea to the neutron electric polarizability.
The chief difficulty is the stochastic estimation of weight factors, and we
present a hopping parameter expansion-based technique for reducing the
stochastic noise, along with a discussion of why this particular reweighting is
so difficult. We used this technique to estimate weight factors for 300
configurations of nHYP-clover fermions and compute the neutron polarizability,
but the reweighting greatly inflates the overall statistical error, driven by
the stochastic noise in the weight factors.
|
Robot social navigation is influenced by human preferences and
environment-specific scenarios such as elevators and doors, thus necessitating
end-user adaptability. State-of-the-art approaches to social navigation fall
into two categories: model-based social constraints and learning-based
approaches. While effective, these approaches have fundamental limitations --
model-based approaches require constraint and parameter tuning to adapt to
preferences and new scenarios, while learning-based approaches require reward
functions, significant training data, and are hard to adapt to new social
scenarios or new domains with limited demonstrations. In this work, we propose
Iterative Dimension Informed Program Synthesis (IDIPS) to address these
limitations by learning and adapting social navigation in the form of
human-readable symbolic programs. IDIPS works by combining program synthesis,
parameter optimization, predicate repair, and iterative human demonstration to
learn and adapt model-free action selection policies from orders of magnitude
less data than learning-based approaches. We introduce a novel predicate repair
technique that can accommodate previously unseen social scenarios or
preferences by growing existing policies. We present experimental results
showing that IDIPS: 1) synthesizes effective policies that model user
preference, 2) can adapt existing policies to changing preferences, 3) can
extend policies to handle novel social scenarios such as locked doors, and 4)
generates policies that can be transferred from simulation to real-world robots
with minimal effort.
|
This article introduces Kerson Huang's theory of the superfluid universe in the following aspects: I. choose the asymptotically free Halpern-Huang scalar field(s) to
drive inflation; II. use quantum turbulence to create matter; III. consider
dark energy as the energy density of the cosmic superfluid and dark matter the
deviation of the superfluid density from its equilibrium value; IV. use quantum
vorticity to explain phenomena such as the non-thermal filaments at the
galactic center, the large voids in the galactic distribution, and the
gravitational collapse of stars to fast-rotating black holes.
|
This note gives a proof that a connected coaugmented cofiltered coalgebra is
a conilpotent coalgebra and thus a connected coaugmented cofiltered bialgebra
is a Hopf algebra. This applies in particular to a connected coaugmented
cograded coalgebra and a connected coaugmented cograded bialgebra.
|
We show that for certain hyperbolic 3-manifolds, all boundary slopes are
slopes of immersed incompressible surfaces, covered by incompressible
embeddings in some finite cover. The manifolds include hyperbolic punctured
torus bundles and hyperbolic two-bridge knots.
|
The passivation by diffusing H_2 of silicon dangling bond defects (E' centers), induced by laser irradiation in amorphous SiO_2 (silica), is investigated in situ at several temperatures. It is found that the reaction between the E' center and H_2 requires an activation energy of 0.38 eV, and that its kinetics is not diffusion-limited. The results are compared with previous findings on the other fundamental paramagnetic point defect in silica, the non-bridging oxygen hole center, which features completely different reaction properties with H_2.
Besides, a comparison is proposed with literature data on the reaction
properties of surface E' centers, of E' centers embedded in silica films, and
with theoretical calculations. In particular, the close agreement with the
reaction properties of surface E' centers with H_2 leads to the conclusion that the bulk and surface E' varieties are indistinguishable in their reaction
properties with molecular hydrogen.
|
Calculations are presented for incoherent $J/\psi$ electroproduction from the
deuteron at JLab energies, including the effects of $J/\psi$-nucleon
rescattering in the final state, in order to determine the feasibility of
measuring the $J/\psi$-nucleon scattering length, or the $J/\psi$-nucleon
scattering amplitude at lower relative energies than in previous measurements.
It is shown that for a scattering length of the size predicted by existing
theoretical calculations, it would not be possible to determine the scattering
length. However, it may be possible to determine the scattering amplitude at
significantly lower relative energies than the only previous measurements.
|
Diffusions are a fundamental class of models in many fields, including
finance, engineering, and biology. Simulating diffusions is challenging as
their sample paths are infinite-dimensional and their transition functions are
typically intractable. In statistical settings such as parameter inference for
discretely observed diffusions, we require simulation techniques for diffusions
conditioned on hitting a given endpoint, which introduces further complication.
In this paper we introduce a Markov chain Monte Carlo algorithm for simulating
bridges of ergodic diffusions which (i) is exact in the sense that there is no
discretisation error, (ii) has computational cost that is linear in the
duration of the bridges, and (iii) provides bounds on local maxima and minima
of the simulated trajectory. Our approach works directly on diffusion path
space, by constructing a proposal (which we term a confluence) that is then
corrected with an accept/reject step in a pseudo-marginal algorithm. Our method
requires only the simulation of unconditioned diffusion sample paths. We apply
our approach to the simulation of Langevin diffusion bridges, a practical
problem arising naturally in many situations, such as statistical inference in
distributed settings.
|
We present a set of formulae to extract the longitudinal deep inelastic
structure function $F_L$ from the transverse structure function $F_2$ and its
derivative $dF_2/d\ln Q^2$ at small $x$. Our expressions are valid for any value of $\delta$, where $x^{-\delta}$ is the behavior of the parton densities at low $x$. Using $F_2$ HERA data we obtain $F_L$ in the range $10^{-4} \leq x \leq
10^{-2}$ at $Q^2=20$ GeV$^2$. Some other applications of the formulae are
pointed out.
|
E-commerce business is revolutionizing our shopping experiences by providing
convenient and straightforward services. One of the most fundamental problems
is how to balance the demand and supply in market segments to build an
efficient platform. While conventional machine learning models have achieved great success on data-sufficient segments, they may fail in a large portion of
segments in E-commerce platforms, where there are not sufficient records to
learn well-trained models. In this paper, we tackle this problem in the context
of market segment demand prediction. The goal is to facilitate the learning
process in the target segments by leveraging the learned knowledge from
data-sufficient source segments. Specifically, we propose a novel algorithm,
RMLDP, to incorporate a multi-pattern fusion network (MPFN) with a
meta-learning paradigm. The multi-pattern fusion network considers both local
and seasonal temporal patterns for segment demand prediction. In the
meta-learning paradigm, transferable knowledge is regarded as the model parameter initialization of MPFN, which is learned from diverse source segments. Furthermore, we capture the segment relations by combining
data-driven segment representation and segment knowledge graph representation
and tailor the segment-specific relations to customize transferable model
parameter initialization. Thus, even with limited data, the target segment can
quickly find the most relevant transferred knowledge and adapt to the optimal
parameters. We conduct extensive experiments on two large-scale industrial
datasets. The results show that our RMLDP outperforms a set of
state-of-the-art baselines. Besides, RMLDP has been deployed in Taobao, a
real-world E-commerce platform. The online A/B testing results further
demonstrate the practicality of RMLDP.
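Since RMLDP's transferable knowledge is a parameter initialization learned across source segments, a minimal first-order MAML-style sketch conveys the mechanism. Everything below (toy linear model, synthetic "segments", hyperparameters) is an assumption for illustration and omits MPFN and the segment-relation tailoring.

```python
import torch

def forward(params, x):
    w, b = params
    return x @ w + b

def mse(params, x, y):
    return ((forward(params, x) - y) ** 2).mean()

def fomaml_step(params, tasks, inner_lr=0.05, meta_lr=0.01):
    """One first-order MAML step: adapt a copy of the shared initialization
    on each task's support set, then apply the averaged query-set gradient
    (taken at the adapted weights) to the initialization itself."""
    meta_grads = [torch.zeros_like(p) for p in params]
    for (xs, ys), (xq, yq) in tasks:
        fast = [p.detach().clone().requires_grad_(True) for p in params]
        g = torch.autograd.grad(mse(fast, xs, ys), fast)  # inner adaptation
        fast = [(p - inner_lr * gi).detach().requires_grad_(True)
                for p, gi in zip(fast, g)]
        gq = torch.autograd.grad(mse(fast, xq, yq), fast)  # query gradient
        for mg, gi in zip(meta_grads, gq):
            mg += gi / len(tasks)
    with torch.no_grad():
        for p, mg in zip(params, meta_grads):
            p -= meta_lr * mg

# Two synthetic "segments", each a linear regression task with its own slope.
params = [torch.zeros(1, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
tasks = []
for slope in (1.5, -0.7):
    x = torch.randn(20, 1)
    y = slope * x
    tasks.append(((x[:10], y[:10]), (x[10:], y[10:])))
for _ in range(100):
    fomaml_step(params, tasks)
```

The learned initialization is what a data-poor target segment would adapt from with only a few gradient steps.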
|
Species diversity in ecosystems is often accompanied by the self-organisation
of the population into fascinating spatio-temporal patterns. Here, we consider
a two-dimensional three-species population model and study the spiralling
patterns arising from the combined effects of generic cyclic dominance,
mutation, pair-exchange and hopping of the individuals. The dynamics is
characterised by nonlinear mobility and a Hopf bifurcation around which the
system's phase diagram is inferred from the underlying complex Ginzburg-Landau
equation derived using a perturbative multiscale expansion. While the dynamics
is generally characterised by spiralling patterns, we show that spiral waves
are stable in only one of the four phases. Furthermore, we characterise a phase
where nonlinearity leads to the annihilation of spirals and to the spatially
uniform dominance of each species in turn. Away from the Hopf bifurcation, when
the coexistence fixed point is unstable, the spiralling patterns are also
affected by nonlinear diffusion.
|
There has been a surge in the interest of using machine learning techniques
to assist in the scientific process of formulating knowledge to explain
observational data. We demonstrate the use of Bayesian Hidden Physics Models to
first uncover the physics governing the propagation of acoustic impulses in
metallic specimens using data obtained from a pristine sample. We then use the
learned physics to characterize the microstructure of a separate specimen with
a surface-breaking crack flaw. Remarkably, we find that the physics learned
from the first specimen allows us to understand the backscattering observed in
the latter sample, a qualitative feature that is wholly absent from the
specimen from which the physics were inferred. The backscattering is explained
through inhomogeneities of a latent spatial field that can be recognized as the
speed of sound in the media.
|
The consistency formula for set theory can be stated in terms of the free-variable theory of primitive recursive maps. The main result here is that free-variable p.r. predicates are decidable by set theory; it is built on recursive evaluation of p.r. map codes and the soundness of that evaluation in the set-theoretical frame: internal p.r. map code equality is evaluated into set-theoretical equality. So the free-variable consistency predicate of set theory is decided by set theory, assuming $\omega$-consistency. By G\"odel's second incompleteness theorem on the undecidability of set theory's consistency formula by set theory under the assumption of this $\omega$-consistency, classical set theory turns out to be $\omega$-inconsistent.
|
It is argued that the familiar algebra of the non-commutative space-time with
$c$-number $\theta^{\mu\nu}$ is inconsistent from a theoretical point of view.
Consistent algebras are obtained by promoting $\theta^{\mu\nu}$ to an
anti-symmetric tensor operator ${\hat\theta}^{\mu\nu}$. The simplest among them
is Doplicher-Fredenhagen-Roberts (DFR) algebra in which the triple commutator
among the coordinate operators is assumed to vanish. This allows us to define
the Lorentz-covariant operator fields on the DFR algebra as operators diagonal
in the 6-dimensional $\theta$-space of the hermitian operators,
${\hat\theta}^{\mu\nu}$. It is shown that we then recover Carlson-Carone-Zobin
(CCZ) formulation of the Lorentz-invariant non-commutative gauge theory with no
need of compactification of the extra 6 dimensions. It is also pointed out that
a general argument concerning the normalizability of the weight function in the
Lorentz metric leads to a division of the $\theta$-space into two disjoint
spaces not connected by any Lorentz transformation so that the CCZ covariant
moment formula holds true in each space, separately. A non-commutative
generalization of Connes' two-sheeted Minkowski space-time is also proposed.
Two simple models of quantum field theory are reformulated on $M_4\times Z_2$
obtained in the commutative limit.
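Schematically, the DFR relations invoked above read
$$[\hat q^{\mu}, \hat q^{\nu}] = i\,\hat\theta^{\mu\nu}, \qquad
[\hat q^{\lambda}, \hat\theta^{\mu\nu}] = 0, \qquad
[\hat\theta^{\mu\nu}, \hat\theta^{\rho\sigma}] = 0,$$
where the second relation encodes the vanishing triple commutator
$[[\hat q^{\mu}, \hat q^{\nu}], \hat q^{\lambda}] = 0$ assumed in the text
(further DFR constraints on the invariants of $\hat\theta^{\mu\nu}$ are
omitted here).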
|
Constructing new and more challenging tasksets is a fruitful methodology to
analyse and understand few-shot classification methods. Unfortunately, existing
approaches to building those tasksets are somewhat unsatisfactory: they either
assume train and test task distributions to be identical -- which leads to
overly optimistic evaluations -- or take a "worst-case" philosophy -- which
typically requires additional human labor such as obtaining semantic class
relationships. We propose ATG, a principled clustering method for defining train
and test tasksets without additional human knowledge. ATG models train and test
task distributions while requiring them to share a predefined amount of
information. We empirically demonstrate the effectiveness of ATG in generating
tasksets that are easier, in-between, or harder than existing benchmarks,
including those that rely on semantic information. Finally, we leverage our
generated tasksets to shed new light on few-shot classification:
gradient-based methods -- previously believed to underperform -- can outperform
metric-based ones when transfer is most challenging.
|
If $(M,g)$ is a compact real analytic Riemannian manifold, we give a
necessary and sufficient condition for there to be a sequence of quasimodes of
order $o(\lambda)$ saturating sup-norm estimates. In particular, it gives
optimal conditions for existence of eigenfunctions satisfying maximal sup norm
bounds. The condition is that there exists a self-focal point $x_0\in M$ for
the geodesic flow at which the associated Perron-Frobenius operator $U_{x_0}:
L^2(S_{x_0}^*M) \to L^2(S_{x_0}^*M)$ has a nontrivial invariant $L^2$ function.
The proof is based on an explicit Duistermaat-Guillemin-Safarov pre-trace
formula and von Neumann's ergodic theorem.
|
We study fluctuations around the warped conifold supergravity solution of
Klebanov and Tseytlin [hep-th/0002159], known to be dual to a cascading N=1
gauge theory. Although this supergravity background is not asymptotically AdS,
corresponding to a non-conformal field theory, it is possible to apply the
usual methods of AdS/CFT duality to extract the high energy behavior of field
theory correlators by solving linearized equations of motion for fluctuations
around the background. We consider the Goldstone vector dual to the anomalous
R-symmetry current and compute its mass, which exactly matches the general
prediction of [hep-th/0009156]. We find the high energy 2-point functions for
the R-current and two other vectors. As expected, the R-current 2-point
function has a longitudinal part because R-symmetry is broken. We also
calculate the high energy 2-point function of the energy-momentum tensor from
fluctuations of modes in the graviton sector. This 2-point function has a trace
part corresponding to broken conformal symmetry.
|
Viscous streaming has emerged as an effective method to transport, trap, and
cluster inertial particles in a fluid. Previous work has shown that this
transport is well described by the Maxey-Riley equation augmented with a term
representing Saffman lift. However, in its straightforward application to
viscous streaming flows, the equation suffers from severe numerical stiffness
due to the wide disparity between the time scales of viscous response,
oscillation period, and slow mean transport, posing a severe challenge for
drawing physical insight on mean particle trajectories. In this work, we
develop equations that directly govern the mean transport of particles in
oscillatory viscous flows. The derivation of these equations relies on a
combination of three key techniques. In the first, we develop an inertial
particle velocity field via a small Stokes number expansion of the particle's
deviation from that of the fluid. This expansion clearly reveals the primary
importance of Fax\'en correction and Saffman lift in effecting the trapping of
particles in streaming cells. Then, we apply Generalized Lagrangian Mean theory
to unambiguously decompose the transport into fast and slow scales, and
ultimately, develop the Lagrangian mean velocity field to govern mean
transport. Finally, we carry out an expansion in small oscillation amplitude to
simplify the governing equations and to clarify the hierarchy of first- and
second-order influences, and particularly, the crucial role of Stokes drift in
the mean transport. We demonstrate the final set of equations on the transport
of both fluid and inertial particles in configurations involving one and two
weakly oscillating cylinders. Notably, the new equations allow numerical time
steps that are $O(10^3)$ larger than the existing approach with little
sacrifice in accuracy, allowing more efficient predictions of transport.
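Schematically, the small-Stokes-number expansion referred to above slaves the
inertial particle velocity $\mathbf{v}$ to the fluid velocity field
$\mathbf{u}$,
$$\mathbf{v} = \mathbf{u} + St\,\mathbf{v}_1 + O(St^2), \qquad
\mathbf{v}_1 \propto (\beta - 1)\,\frac{D\mathbf{u}}{Dt}
+ \text{(Fax\'en and Saffman terms)},$$
with $St$ the Stokes number and $\beta$ a particle-fluid density ratio; the
exact form of the correction terms is derived in the text.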
|
Perturbative solutions for unpolarized QED parton distribution and
fragmentation functions are presented explicitly in the next-to-leading
logarithmic approximation. The scheme of iterative solution of QED evolution
equations is described in detail. Terms up to $\mathcal{O}(\alpha^3L^2)$ are
calculated analytically, where $L=\ln(\mu_F^2/m_e^2)$ is the large logarithm
which depends on the factorization energy scale $\mu_F\gg m_e$. The results are
process independent and relevant for future high-precision experiments.
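For orientation, the evolution equations solved iteratively here are of the
familiar DGLAP-like form (shown schematically at leading order),
$$\frac{\partial D(x, \mu_F^2)}{\partial \ln\mu_F^2}
= \frac{\alpha}{2\pi} \int_x^1 \frac{dz}{z}\, P(z)\,
D\!\left(\frac{x}{z}, \mu_F^2\right),$$
whose iteration generates the tower of $\mathcal{O}(\alpha^n L^m)$
contributions quoted above.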
|
Graphene and topological insulators (TI) possess two-dimensional Dirac
fermions with distinct physical properties. Integrating these two Dirac
materials in a single device creates interesting opportunities for exploring
new physics of interacting massless Dirac fermions. Here we report on a
practical route to the experimental fabrication of a graphene-Sb2Te3 heterostructure.
The graphene-TI heterostructures are prepared by using a dry transfer of
chemical-vapor-deposition grown graphene film. ARPES measurements confirm the
coexistence of topological surface states of Sb2Te3 and Dirac $\pi$ bands of
graphene, and identify the twist angle in the graphene-TI heterostructure. The
results suggest a potential tunable electronic platform in which two different
Dirac low-energy states dominate the transport behavior.
|
A BRST construction of $D$-branes in the SU(2) WZW model is proposed.
|
We investigate the late time acceleration with a Chaplygin type of gas in a
spherically symmetric inhomogeneous model. In the early phase we get an
Einstein-de Sitter type of solution generalised to inhomogeneous spacetime,
but at the late stage of the evolution our solutions admit the accelerating
nature of the universe. For a large scale factor our model behaves like a
$\Lambda$CDM model. We calculate the deceleration parameter for this
anisotropic model, which, unlike its homogeneous counterpart, shows that the
flip is not synchronous, occurring earlier at the outer shells. This is in
line with other physical processes in inhomogeneous models. Depending upon
initial conditions our solution also gives a bouncing universe. In the absence
of inhomogeneity our solution reduces to well-known solutions in the
homogeneous case. We have also calculated the effective deceleration parameter
in terms of the Hubble parameter. The whole situation is later discussed with
the help of the well-known Raychaudhuri equation and the results are compared
with the previous case. This work is an extension of our recent communication
where an attempt was made to see if the presence of extra dimensions and/or
inhomogeneity can trigger an inflation in a matter dominated Lemaitre Tolman
Bondi model.
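For reference, a Chaplygin-type gas obeys an equation of state of the form
$p = -A/\rho$ with $A > 0$; in a homogeneous background the conservation
equation then integrates to
$$\rho(a) = \sqrt{A + \frac{B}{a^6}},$$
with $B$ an integration constant, so the gas mimics pressureless matter at
small scale factor (the Einstein-de Sitter phase) and a cosmological constant
at large $a$ (the $\Lambda$CDM-like late-time phase).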
|
It is important to detect anomalies in power systems before they expand and
cause serious faults such as power failures or system blackouts. With the
deployments of phasor measurement units (PMUs), massive amounts of
synchrophasor measurements are collected, which makes it possible for the
real-time situation awareness of the entire system. In this paper, based on
random matrix theory (RMT), a data-driven approach is proposed for anomaly
detection in power systems. First, a spatio-temporal data set is formulated by
arranging high-dimensional synchrophasor measurements in chronological order.
Based on the Ring Law in RMT for the empirical spectral analysis of
`signal+noise' matrix, the mean spectral radius (MSR) is introduced to indicate
the system states from the macroscopic perspective. In order to declare
anomalies automatically, an anomaly indicator based on the MSR is designed and
the corresponding confidence level $1-\alpha$ is calculated. The proposed
approach is capable of detecting the anomaly in an early phase and robust
against random fluctuations and measuring errors. Cases on the synthetic data
generated from IEEE 300-bus, 118-bus and 57-bus test systems validate the
effectiveness and advantages of the approach.
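A minimal numpy sketch of the mean-spectral-radius statistic is given below;
it follows one common Ring-Law recipe (row standardization, a singular-value
equivalent matrix, mean modulus of the eigenvalues), and the normalization
conventions and any declaration threshold are placeholders rather than the
paper's exact choices.

    import numpy as np
    from scipy.linalg import sqrtm

    def mean_spectral_radius(X, rng=None):
        # X: N x T window of synchrophasor measurements (N <= T).
        rng = rng or np.random.default_rng(0)
        N, _ = X.shape
        # Standardize each measurement channel (row).
        Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
        # Singular-value equivalent: sqrt(Z Z^T) times a Haar unitary.
        U, _ = np.linalg.qr(rng.normal(size=(N, N))
                            + 1j * rng.normal(size=(N, N)))
        Zu = sqrtm(Z @ Z.T).astype(complex) @ U
        Zu = (Zu - Zu.mean(axis=1, keepdims=True)) / Zu.std(axis=1, keepdims=True)
        lam = np.linalg.eigvals(Zu / np.sqrt(N))
        return np.abs(lam).mean()  # drops when an anomaly breaks the Ring Law

    X = np.random.default_rng(1).normal(size=(60, 240))  # N=60 PMU channels
    print(mean_spectral_radius(X))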
|
In this paper, we propose a scalable Bayesian method for sparse covariance
matrix estimation by incorporating a continuous shrinkage prior with a
screening procedure. In the first step of the procedure, the off-diagonal
elements with small correlations are screened based on their sample
correlations. In the second step, the posterior of the covariance with the
screened elements fixed at $0$ is computed with the beta-mixture prior. The
screened elements of the covariance significantly increase the efficiency of
the posterior computation. The simulation studies and real data applications
show that the proposed method can be used for high-dimensional problems in
the `large p, small n' setting. In some examples in this paper, the proposed
method can be computed in a reasonable amount of time, while no other existing
Bayesian methods can be. The proposed method also has sound theoretical
properties. The screening procedure has the sure screening property and the
selection consistency, and the posterior has the optimal minimax or nearly
minimax convergence rate under the Frobenius norm.
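The screening step is simple to state in code; the sketch below flags the
off-diagonal entries that survive a sample-correlation threshold (the
threshold value is a placeholder, and the beta-mixture posterior step is only
indicated in a comment).

    import numpy as np

    def screen_support(X, threshold=0.3):
        # Step 1: keep off-diagonal entries whose absolute sample
        # correlation exceeds the threshold; everything else is fixed at 0.
        R = np.corrcoef(X, rowvar=False)        # p x p sample correlations
        keep = np.abs(R) >= threshold
        np.fill_diagonal(keep, True)            # diagonal is never screened
        return keep

    # Step 2 (not sketched): compute the posterior of the covariance under
    # the beta-mixture shrinkage prior with the screened entries fixed at 0.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 200))              # `large p, small n'
    support = screen_support(X)
    print(support.mean())                       # fraction surviving screening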
|
The mechanism of fluid slip on a solid surface has been linked to surface
diffusion, by which mobile adsorbed fluid molecules perform hops between
adsorption sites. However, slip velocity arising from this surface hopping
mechanism has been estimated to be significantly lower than that observed
experimentally. In this paper, we propose a re-adsorption mechanism for fluid
slip. Slip velocity predictions via this mechanism show improved agreement
with experimental measurements.
|
The nonlinear climbing sine map is a nonhyperbolic dynamical system
exhibiting both normal and anomalous diffusion under variation of a control
parameter. We show that on a suitable coarse scale this map generates an
oscillating parameter-dependent diffusion coefficient, similarly to hyperbolic
maps, whose asymptotic functional form can be understood in terms of simple
random walk approximations. On finer scales we find fractal hierarchies of
normal and anomalous diffusive regions as functions of the control parameter.
By using a Green-Kubo formula for diffusion, the origin of these different
regions is systematically traced back to strong dynamical correlations.
Starting from the equations of motion of the map these correlations are
formulated in terms of fractal generalized Takagi functions obeying generalized
de Rham-type functional recursion relations. We finally analyze the measure of
the normal and anomalous diffusive regions in the parameter space showing that
in both cases it is positive, and that for normal diffusion it increases by
increasing the parameter value.
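A coarse numerical estimate of the parameter-dependent diffusion coefficient
can be obtained directly from the ensemble mean-square displacement; the
sketch below assumes the map in the form $x_{n+1} = x_n + a\sin(2\pi x_n)$ and
normal diffusion at the chosen parameter value.

    import numpy as np

    def diffusion_coefficient(a, n_steps=10_000, n_particles=5_000, seed=0):
        # Ensemble MSD estimate of D(a) for the climbing sine map.
        rng = np.random.default_rng(seed)
        x0 = rng.uniform(0.0, 1.0, size=n_particles)  # uniform initial density
        x = x0.copy()
        for _ in range(n_steps):
            x = x + a * np.sin(2.0 * np.pi * x)
        return np.mean((x - x0) ** 2) / (2.0 * n_steps)

    # Scanning a on a fine grid exposes the fractal structure of D(a)
    # described above (anomalous windows need a different scaling in n).
    print(diffusion_coefficient(a=3.0))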
|
In 1921 Bach and Weyl derived the method of superposition to construct new
axially symmetric vacuum solutions of General Relativity. In this paper we
extend the Bach-Weyl approach to non-vacuum configurations with massless scalar
fields. Considering a phantom scalar field with the negative kinetic energy, we
construct a multi-wormhole solution describing an axially symmetric
superposition of $N$ wormholes. The solution found is static, everywhere
regular and has no event horizons. These features sharply distinguish the
multi-wormhole configuration from other axially symmetric vacuum solutions,
which inevitably contain gravitationally inert singular structures, such as
`struts' and `membranes', that keep the two bodies apart and make the
configuration stable. However, the multi-wormholes are static without any
singular struts. Instead, the stationarity of the multi-wormhole configuration
is provided by the phantom scalar field with the negative kinetic energy.
Another
unusual property is that the multi-wormhole spacetime has a complicated
topological structure. Namely, in the spacetime there exist $2^N$
asymptotically flat regions connected by throats.
|
We analyse linear maps of operator algebras $\mathcal{B}_H(\mathcal{H})$
mapping the set of rank-$k$ projectors onto the set of rank-$l$ projectors
surjectively. We give a complete characterisation of such maps for prime $n =
\dim\mathcal{H}$. For $k=l=1$ the solution is known as Wigner's theorem.
|
Object recognition in the presence of background clutter and distractors is a
central problem both in neuroscience and in machine learning. However, the
performance level of the models that are inspired by cortical mechanisms,
including deep networks such as convolutional neural networks and deep belief
networks, is shown to significantly decrease in the presence of noise and
background objects [19, 24]. Here we develop a computational framework that is
hierarchical, relies heavily on key properties of the visual cortex including
mid-level feature selectivity in visual area V4 and Inferotemporal cortex (IT)
[4, 9, 12, 18], high degrees of selectivity and invariance in IT [13, 17, 18]
and the prior knowledge that is built into cortical circuits (such as the
emergence of edge detector neurons in primary visual cortex before the onset of
the visual experience) [1, 21], and addresses the problem of object recognition
in the presence of background noise and distractors. Our approach is
specifically designed to address large deformations, allows flexible
communication between different layers of representation and learns highly
selective filters from a small number of training examples.
|
Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI)
services due to their exceptional proficiency in understanding and generating
human-like text. LLM chatbots, in particular, have seen widespread adoption,
transforming human-machine interactions. However, these LLM chatbots are
susceptible to "jailbreak" attacks, where malicious users manipulate prompts to
elicit inappropriate or sensitive responses, contravening service policies.
Despite existing attempts to mitigate such threats, our research reveals a
substantial gap in our understanding of these vulnerabilities, largely due to
the undisclosed defensive measures implemented by LLM service providers.
In this paper, we present Jailbreaker, a comprehensive framework that offers
an in-depth understanding of jailbreak attacks and countermeasures. Our work
makes a dual contribution. First, we propose an innovative methodology inspired
by time-based SQL injection techniques to reverse-engineer the defensive
strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat.
This time-sensitive approach uncovers intricate details about these services'
defenses, facilitating a proof-of-concept attack that successfully bypasses
their mechanisms. Second, we introduce an automatic generation method for
jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of
automated jailbreak generation across various commercial LLM chatbots. Our
method achieves a promising average success rate of 21.58%, significantly
outperforming the effectiveness of existing techniques. We have responsibly
disclosed our findings to the concerned service providers, underscoring the
urgent need for more robust defenses. Jailbreaker thus marks a significant step
towards understanding and mitigating jailbreak threats in the realm of LLM
chatbots.
|
Time modulations at the per mil level have been reported to take place in the
decay constants of about 15 nuclei with a period of one year (most cases) but also
of about one month or one day. In this paper we give the results of the
activity measurement of a 40K source and a 232Th one. The two experiments have
been done at the Gran Sasso Laboratory during a period of about 500 days, above
ground (40K) and underground (232Th) with a target sensitivity of a few parts
over 10^5. We also give the results of the activity measurement at the time of
the X-class solar flares which took place in May 2013. Briefly, our
measurements do not show any evidence of unexpected time dependence in the
decay rate of 40K and 232Th.
|
The model of a rigid linear heat conductor with memory is reconsidered,
focusing on the heat relaxation function. Thus, the definitions of heat flux
and thermal work are revised to understand where changes are required when the
heat flux relaxation function $k$ is assumed to be unbounded at the initial
time $t=0$. That is, it is represented by a regular integrable function,
namely $k\in L^1(\mathbb{R}^+)$, but its time derivative is not integrable,
that is $\dot k\notin L^1(\mathbb{R}^+)$. Notably, even under these relaxed
assumptions on $k$, whenever the heat flux is the same, the related thermal
work is also the same. Thus, in the case under investigation too, the notion
of equivalence is introduced and its physical relevance is pointed out.
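Schematically, the heat flux in such a conductor is of memory type,
$$\mathbf{q}(x, t) = -\int_0^{\infty} k(s)\, \nabla u(x, t - s)\, ds, \qquad
k \in L^1(\mathbb{R}^+),\ \dot k \notin L^1(\mathbb{R}^+),$$
and the thermal work is the associated time integral of
$\mathbf{q}\cdot\nabla u$; the revision discussed above concerns precisely
which of these expressions remain meaningful under the weakened assumptions
on $k$.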
|
A question that comes up repeatedly is how to combine the results of two
experiments if all that is known is that one experiment had an n-sigma effect
and another experiment had a m-sigma effect. This question is not well-posed:
depending on what additional assumptions are made, the preferred answer is
different. The note lists some of the more prominent papers on the topic, with
some brief comments and excerpts.
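As a concrete illustration of why the question is ill-posed, the sketch below
combines a 3-sigma and a 2.5-sigma effect under two of the more common
conventions: Stouffer's method (independent, equally weighted Gaussian
measurements of one common effect) and Fisher's method (combining one-sided
p-values), which give different answers.

    import numpy as np
    from scipy import stats

    n_sig, m_sig = 3.0, 2.5     # the two reported significances, in sigmas

    # Stouffer's method: average the z-values of independent measurements.
    z_stouffer = (n_sig + m_sig) / np.sqrt(2.0)

    # Fisher's method: combine the one-sided p-values instead.
    p = stats.norm.sf([n_sig, m_sig])
    chi2 = -2.0 * np.log(p).sum()
    p_fisher = stats.chi2.sf(chi2, df=2 * len(p))
    z_fisher = stats.norm.isf(p_fisher)

    print(z_stouffer, z_fisher)  # two different combined significances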
|
Reactive synthesis automatically derives a strategy that satisfies a given
specification. However, requiring a strategy to meet the specification in every
situation is, in many cases, too hard of a requirement. Particularly in
compositional synthesis of distributed systems, individual winning strategies
for the processes often do not exist. Remorsefree dominance, a weaker notion
than winning, accounts for such situations: dominant strategies are only
required to be as good as any alternative strategy, i.e., they are allowed to
violate the specification if no other strategy would have satisfied it in the
same situation. The composition of dominant strategies is only guaranteed to be
dominant for safety properties, though, preventing the use of dominance in
compositional synthesis for liveness specifications. Yet, safety properties are
often not expressive enough. In this paper, we thus introduce a new winning
condition for strategies, called delay-dominance, that overcomes this weakness
of remorsefree dominance: we show that it is compositional for many safety and
liveness specifications, enabling a compositional synthesis algorithm based on
delay-dominance for general specifications. Furthermore, we introduce an
automaton construction for recognizing delay-dominant strategies and prove its
soundness and completeness. The resulting automaton is of single-exponential
size in the squared length of the specification and can immediately be used for
safraless synthesis procedures. Thus, synthesis of delay-dominant strategies
is, as synthesis of winning strategies, in 2EXPTIME.
|
In quantum networks, effective entanglement routing facilitates remote
entanglement communication between quantum source and quantum destination
nodes. Unlike routing in classical networks, entanglement routing in quantum
networks must consider the quality of entanglement qubits (i.e., entanglement
fidelity), presenting a challenge in ensuring entanglement fidelity over
extended distances. To address this issue, we propose a resource allocation
model for entangled pairs and an entanglement routing model with a fidelity
guarantee. This approach jointly optimizes entangled resources (i.e., entangled
pairs) and entanglement routing to support applications in quantum networks.
Our proposed model is formulated using two-stage stochastic programming, taking
into account the uncertainty of quantum application requirements. Aiming to
minimize the total cost, our model ensures efficient utilization of entangled
pairs and energy conservation for quantum repeaters under uncertain fidelity
requirements. Experimental results demonstrate that our proposed model can
reduce the total cost by at least 20\% compared to the baseline model.
|
MAGIS-100 is a next-generation instrument that uses light-pulse atom
interferometry to search for physics beyond the standard model, to be built and
installed at Fermilab. We propose to search for dark matter and new forces, and
to test quantum mechanics at new distance scales. The detector will use the
existing 100 m vertical NuMI access shaft to make it the world's longest
baseline atom interferometer. To maximize the sensitivity of the experiment, we
will use the latest advances in atomic clock technologies. The experiment will
be a significant step towards developing a 1000 m baseline detector, with
sufficient sensitivity to detect gravitational waves in the `mid-band' from
0.1 Hz to 10 Hz, between the Advanced LIGO and LISA experiments. Here we
describe an
overview of the experiment and its physics reach.
|
We consider open strings in an external constant magnetic field $H$. For an
(infinite) sequence of critical values of $H$ an increasing number of (highest
spin component) states lying on the first Regge trajectory becomes tachyonic.
In the limit of infinite $H$ all these states are tachyons (with a common
tachyonic mass) both in the case of the bosonic string and for the
Neveu-Schwarz sector of the fermionic string. This result generalizes to
extended objects the same instability which occurs in ordinary non-Abelian gauge
theories. The Ramond states have always positive square masses as is the case
for ordinary QED. The weak field limit of the mass spectrum is the same as for
a field theory with gyromagnetic ratio $g_S=2$ for all charged spin states.
This behavior suggests a phase transition of the string as it has been argued
for the ordinary electroweak theory.
|
The theoretical evaluation of major nuclear structure effects on the
asymmetry of allowed Gamow-Teller beta-decay rates in light mirror nuclei is
presented. The calculations are performed within the shell model, using
empirical isospin-nonconserving interaction and realistic Woods-Saxon radial
wave functions. The revised treatment of p-shell nuclei is supplemented by
systematic calculations for sd-shell nuclei and compared to experimental
asymmetries when available. The results are important in connection with the
possible existence of second-class currents in the weak interaction.
|
We introduce a pseudo entropy extension of topological entanglement entropy
called topological pseudo entropy. Various examples of the topological pseudo
entropies are examined in three-dimensional Chern-Simons gauge theory with
Wilson loop insertions. Partition functions with knotted Wilson loops are
directly related to topological pseudo (R\'enyi) entropies. We also show that
the pseudo entropy in a certain setup is equivalent to the interface entropy in
two-dimensional conformal field theories (CFTs), and leverage the equivalence
to calculate the pseudo entropies in particular examples. Furthermore, we
define a pseudo entropy extension of the left-right entanglement entropy in
two-dimensional boundary CFTs and derive a universal formula for a pair of
arbitrary boundary states. As a byproduct, we find that the topological
interface entropy for rational CFTs has a contribution identical to the
topological entanglement entropy on a torus.
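For reference, given two states $|\psi\rangle$ and $|\varphi\rangle$, the
pseudo entropy used above is defined from the reduced transition matrix,
$$\tau^{\psi|\varphi} = \frac{|\psi\rangle\langle\varphi|}{\langle\varphi|\psi\rangle},
\qquad \tau_A^{\psi|\varphi} = \mathrm{Tr}_B\,\tau^{\psi|\varphi}, \qquad
S\big(\tau_A^{\psi|\varphi}\big)
= -\mathrm{Tr}\big[\tau_A^{\psi|\varphi}\log\tau_A^{\psi|\varphi}\big],$$
which is generically complex-valued and reduces to the ordinary entanglement
entropy when $|\psi\rangle = |\varphi\rangle$.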
|
The accelerated growth in synthetic visual media generation and manipulation
has now reached the point of raising significant concerns and posing enormous
threats to society. To contend with this threat, there is an imperative need
for automatic detection networks that identify false digital content and
prevent the spread of dangerous artificial information. In this paper, we
utilize and compare two kinds of handcrafted features (SIFT and HoG) and two
kinds of deep features (Xception and CNN+RNN) for the deepfake detection task.
We also check the performance of these features when there are mismatches
between training sets and test sets. Evaluation is performed on the famous
FaceForensics++ dataset, which contains four sub-datasets, Deepfakes,
Face2Face, FaceSwap and NeuralTextures. The best results are from Xception,
where the accuracy could surpass over 99\% when the training and test set are
both from the same sub-dataset. In comparison, the results drop dramatically
when the training set mismatches the test set. This phenomenon reveals the
challenge of creating a universal deepfake detection system.
|
The first light from stars and quasars ended the ``dark ages'' of the
universe and led to the reionization of hydrogen by redshift 7. Current
observations are at the threshold of probing this epoch. The study of
high-redshift sources is likely to attract major attention in observational and
theoretical cosmology over the next decade.
|
This paper studies the problem of allocating bandwidth and computation
resources to data analytics tasks in Internet of Things (IoT) networks. IoT
nodes are powered by batteries, can process (some of) the data locally, and the
quality grade or performance of how data analytics tasks are carried out
depends on where these are executed. The goal is to design a resource
allocation algorithm that jointly maximizes the network lifetime and the
performance of the data analytics tasks subject to energy constraints. This
joint maximization problem is challenging with coupled resource constraints
that induce non-convexity. We first show that the problem can be mapped to an
equivalent convex problem, and then propose an online algorithm that provably
solves the problem and does not require any a priori knowledge of the
time-varying wireless link capacities and data analytics arrival process
statistics. The algorithm's optimality properties are derived using an analysis
which, to the best of our knowledge, proves for the first time the convergence
of the dual subgradient method with time-varying sets. Our simulations, seeded
by real IoT device energy measurements, show that the network connectivity
plays a crucial role in network lifetime maximization, that the algorithm can
obtain both maximum network lifetime and maximum data analytics performance in
addition to maximizing the joint objective, and that the algorithm increases
the network lifetime by approximately 50% compared to an algorithm that
minimizes the total energy consumption.
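The dual subgradient template underlying the online algorithm is easy to state
on a toy instance; the sketch below maximizes a concave objective under one
coupling constraint with fixed sets (the paper's contribution is precisely the
extension of this convergence analysis to time-varying sets and stochastic
arrivals, which are not modeled here).

    import numpy as np

    # Toy instance: maximize f(x) = -||x - c||^2 subject to x1 + x2 <= b,
    # x >= 0, via dual (price) updates on the coupling constraint.
    c, b = np.array([3.0, 2.0]), 4.0
    lam, alpha = 0.0, 0.1
    for t in range(2000):
        # Primal step: maximize the Lagrangian at the current price lam.
        x = np.maximum(c - lam / 2.0, 0.0)
        # Dual step: raise the price while the constraint is violated.
        lam = max(lam + alpha * (x.sum() - b), 0.0)
    print(x, lam)   # approaches x = [2.5, 1.5], lam = 1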
|
It is well-known that for a one dimensional stochastic differential equation
driven by Brownian noise, with coefficient functions satisfying the assumptions
of the Yamada-Watanabe theorem \cite{yamada1,yamada2} and the Feller test for
explosions \cite{feller51,feller54}, there exists a unique stationary
distribution with respect to the Markov semigroup of transition probabilities.
We consider systems on a restricted domain $D$ of the phase space $\mathbb{R}$
and study the rate of convergence to the stationary distribution. Using a
geometrical approach that uses the so-called free energy function on the
density function space, we prove that the density functions, which are
solutions of the Fokker-Planck equation, converge to the stationary density
function exponentially under the Kullback-Leibler divergence, thus also in
the total variation norm. The results show that there is a relation between the
Bakry-Emery curvature dimension condition and the dissipativity condition of
the transformed system under the Fisher-Lamperti transformation. Several
applications are discussed, including the Cox-Ingersoll-Ross model and the
Ait-Sahalia model in finance and the Wright-Fisher model in population
genetics.
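Schematically, writing $H(\rho\,\|\,\rho_\infty) = \int_D
\rho\,\log(\rho/\rho_\infty)\,dx$ for the relative entropy (the free energy up
to an additive constant), the convergence result is of the form
$$H(\rho_t\,\|\,\rho_\infty) \le e^{-2\lambda t}\, H(\rho_0\,\|\,\rho_\infty),
\qquad \|\rho_t - \rho_\infty\|_{L^1} \le \sqrt{2\,H(\rho_t\,\|\,\rho_\infty)},$$
where $\lambda > 0$ is tied to the Bakry-Emery curvature-dimension condition
and the second (Csisz\'ar-Kullback-Pinsker) inequality transfers the
exponential decay to the total variation norm; normalization conventions may
differ from those in the text.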
|
Patient monitoring is vital in all stages of care. We here report the
development and validation of ICU length of stay and mortality prediction
models. The models will be used in an intelligent ICU patient monitoring module
of an Intelligent Remote Patient Monitoring (IRPM) framework that monitors the
health status of patients, and generates timely alerts, maneuver guidance, or
reports when adverse medical conditions are predicted. We utilized the publicly
available Medical Information Mart for Intensive Care (MIMIC) database to
extract ICU stay data for adult patients to build two prediction models: one
for mortality prediction and another for ICU length of stay. For the mortality
model, we applied six commonly used machine learning (ML) binary classification
algorithms for predicting the discharge status (survived or not). For the
length of stay model, we applied the same six ML algorithms for binary
classification using the median patient population ICU stay of 2.64 days. For
the regression-based classification, we used two ML algorithms for predicting
the number of days. We built two variations of each prediction model: one using
12 baseline demographic and vital sign features, and the other based on our
proposed quantiles approach, in which we use 21 extra features engineered from
the baseline vital sign features, including their modified means, standard
deviations, and quantile percentages. We could perform predictive modeling with
minimal features while maintaining reasonable performance using the quantiles
approach. The best accuracy achieved in the mortality model was approximately
89% using the random forest algorithm. The highest accuracy achieved in the
length of stay model, based on the population median ICU stay (2.64 days), was
approximately 65% using the random forest algorithm.
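The quantiles approach amounts to simple feature engineering on the vital-sign
series; the sketch below is illustrative only, and the exact feature list,
quantile levels, and the `modified means' of the paper are not reproduced.

    import pandas as pd

    def quantile_features(vitals: pd.DataFrame) -> pd.Series:
        # Summarize each vital-sign series by moments and quantiles.
        feats = {}
        for col in vitals.columns:
            s = vitals[col].dropna()
            feats[f"{col}_mean"] = s.mean()
            feats[f"{col}_std"] = s.std()
            for q in (0.05, 0.25, 0.5, 0.75, 0.95):
                feats[f"{col}_q{int(q * 100)}"] = s.quantile(q)
        return pd.Series(feats)

    vitals = pd.DataFrame({"heart_rate": [72, 80, 95, 88],
                           "spo2": [98, 97, 99, 96]})
    print(quantile_features(vitals))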
|
We consider the $T\bar{T}$ deformation of two dimensional Yang--Mills theory
on general curved backgrounds. We compute the deformed partition function
through an integral transformation over frame fields weighted by a Gaussian
kernel. We show that this partition function satisfies a flow equation which
has been derived previously in the literature, which now holds on general
backgrounds. We connect ambiguities associated to first derivative terms in the
flow equation to the normalization of the functional integral over frame
fields. We then compute the entanglement entropy for a general state in the
theory. The connection to the string theoretic description of the theory is
also investigated.
|
Results of the measurements of the 125 GeV Higgs boson properties with
proton-proton collision data at $\sqrt{s}=13$ TeV collected by CMS detector are
presented. The Higgs boson decay channels used include the five major decay
modes, $\mathrm{H}\rightarrow\gamma\gamma$, $\mathrm{H}\rightarrow{\rm Z}{\rm
Z}\rightarrow4\ell$, $\mathrm{H}\rightarrow{\rm W}{\rm
W}\rightarrow\ell\nu\ell\nu$, $\mathrm{H}\rightarrow\tau^{+}\tau^{-}$ and
$\mathrm{H}\rightarrow b\bar{b}$, and two rare decay modes,
$\mathrm{H}\rightarrow\mu^{+}\mu^{-}$ and $\mathrm{H}\rightarrow{\rm
Z}/\gamma^{*}+\gamma\rightarrow\ell\ell\gamma$, with $\ell={\rm e},\mu$. The
measured Higgs boson properties include its mass, signal strength relative to
the standard model prediction, signal strength modifiers for different Higgs
boson production modes, coupling modifiers to fermions and bosons, effective
coupling modifiers to photons and gluons, simplified template cross sections,
and fiducial cross sections. All results are consistent, within their
uncertainties, with the expectations for the Standard Model Higgs boson.
|
The structure and function of biological molecules are strongly influenced by
the water and dissolved ions that surround them. This aqueous solution
(solvent) exerts significant electrostatic forces in response to the
biomolecule's ubiquitous atomic charges and polar chemical groups. In this
work, we investigate a simple approach to the numerical calculation of this
electrostatic model using boundary-integral equation (BIE) methods and
boundary-element methods
(BEM). Traditional BEM discretizes the protein--solvent boundary into a set of
boundary elements, or panels, and the approximate solution is defined as a
weighted combination of basis functions with compact support. The resulting BEM
matrix then requires integrating singular or near-singular functions, which can
be slow and challenging to compute. Here we investigate the accuracy and
convergence of a simpler representation, namely modeling the unknown surface
charge distribution as a set of discrete point charges on the surface. We find
that at low resolution, point-based BEM is more accurate than panel-based
methods, due to the fact that the protein surface is sampled directly, and can
be of significant value for numerous important calculations that require only
moderate accuracy, such as the preliminary stages of rational drug design and
protein engineering.
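A toy numpy sketch of the point-charge representation is given below for a
unit conducting sphere at unit potential; the patch-based regularization of
the self-term and the boundary condition are simplified placeholders for the
actual BIE formulation, chosen so the known total charge can be checked.

    import numpy as np

    def fibonacci_sphere(n):
        # Quasi-uniform point sample of the unit sphere ("the surface").
        k = np.arange(n)
        phi = np.pi * (3.0 - np.sqrt(5.0)) * k
        z = 1.0 - 2.0 * (k + 0.5) / n
        r = np.sqrt(1.0 - z * z)
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    n = 200
    pts = fibonacci_sphere(n)
    area = 4.0 * np.pi / n                  # surface patch per point charge

    # Coulomb matrix between the surface point charges; the diagonal
    # self-term is regularized by an effective patch radius.
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.sqrt(area / np.pi))
    A = 1.0 / (4.0 * np.pi * d)

    # Discrete surface charges reproducing a prescribed unit potential.
    q = np.linalg.solve(A, np.ones(n))
    print(q.sum())   # close to 4*pi, the exact value for this geometry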
|
The predicted value of the Higgs mass $m_H$ is analyzed assuming the
existence of the fourth generation of leptons ($N, E$) and quarks ($U, D$).
The steep and flat directions are found in the five-dimensional parameter
space: $m_H$, $m_U$, $m_D$, $m_N$, $m_E$. The LEPTOP fit of the precision
electroweak data is compatible (in particular) with $m_H \sim 300$ GeV, $m_N
\sim 50$ GeV, $m_E \sim 100$ GeV, $m_U +m_D \sim 500$ GeV, and $|m_U -m_D| \sim
75$ GeV. The quality of fits drastically improves when the data on b- and
c-quark asymmetries and new NuTeV data on deep inelastic scattering are
ignored.
|
With the inclusion of cameras in daily life, an automatic no-reference image
quality evaluation index is required for automatic classification of images.
The present manuscript proposes a new no-reference Regional Mutual Information
based technique for evaluating the quality of an image. We use regional mutual
information on subsets of the complete image. The proposed technique is tested
on four benchmark natural image databases and one benchmark synthetic
database. A comparative analysis with classical and state-of-the-art methods
indicates the superiority of the present technique for high-quality images and
comparable performance for the other images of the respective databases.
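A toy version of a regional mutual information computation is sketched below,
scoring each region against its one-pixel-shifted copy with a histogram MI
estimate; this is a stand-in for the manuscript's exact construction, whose
regions and reference signals may differ.

    import numpy as np

    def mutual_information(a, b, bins=32):
        # Histogram estimate of MI between two equal-size gray regions.
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    def regional_mi_score(img, shift=1, region=64):
        # Average regional MI over non-overlapping regions of the image.
        scores = []
        H, W = img.shape
        for i in range(0, H - region, region):
            for j in range(0, W - region, region):
                r = img[i:i + region, j:j + region]
                r2 = img[i + shift:i + shift + region, j:j + region]
                scores.append(mutual_information(r, r2))
        return float(np.mean(scores))

    img = np.random.default_rng(0).normal(size=(256, 256))
    print(regional_mi_score(img))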
|
New Foundations ($\mathrm{NF}$) is a set theory obtained from naive set
theory by putting a stratification constraint on the comprehension schema; for
example, it proves that there is a universal set $V$. $\mathrm{NFU}$
($\mathrm{NF}$ with atoms) is known to be consistent through its close
connection with models of conventional set theory that admit automorphisms.
A first-order theory, $\mathrm{ML}_\mathrm{CAT}$, in the language of
categories is introduced and proved to be equiconsistent to $\mathrm{NF}$
(analogous results are obtained for intuitionistic and classical $\mathrm{NF}$
with and without atoms). $\mathrm{ML}_\mathrm{CAT}$ is intended to capture the
categorical content of the predicative class theory of $\mathrm{NF}$.
$\mathrm{NF}$ is interpreted in $\mathrm{ML}_\mathrm{CAT}$ through the
categorical semantics. Thus, the result enables application of category
theoretic techniques to meta-mathematical problems about $\mathrm{NF}$-style
set theory. For example, an immediate corollary is that $\mathrm{NF}$ is
equiconsistent to $\mathrm{NFU} + |V| = |\mathcal{P}(V)|$. This is already
proved by Crabb\'e, but becomes more transparent in light of the results of
this paper.
Just like a category of classes has a distinguished subcategory of small
morphisms, a category modelling $\mathrm{ML}_\mathrm{CAT}$ has a distinguished
subcategory of type-level morphisms. This corresponds to the distinction
between sets and proper classes in $\mathrm{NF}$. With this in place, the axiom
of power objects familiar from topos theory can be appropriately formulated for
$\mathrm{NF}$. It turns out that the subcategory of type-level morphisms
contains a topos as a natural subcategory.
|
Trajectory Representation Learning (TRL) is a powerful tool for
spatial-temporal data analysis and management. TRL aims to convert complicated
raw trajectories into low-dimensional representation vectors, which can be
applied to various downstream tasks, such as trajectory classification,
clustering, and similarity computation. Existing TRL works usually treat
trajectories as ordinary sequence data, while some important spatial-temporal
characteristics, such as temporal regularities and travel semantics, are not
fully exploited. To fill this gap, we propose a novel Self-supervised
trajectory representation learning framework with TemporAl Regularities and
Travel semantics, namely START. The proposed method consists of two stages. The
first stage is a Trajectory Pattern-Enhanced Graph Attention Network (TPE-GAT),
which converts the road network features and travel semantics into
representation vectors of road segments. The second stage is a Time-Aware
Trajectory Encoder (TAT-Enc), which encodes representation vectors of road
segments in the same trajectory as a trajectory representation vector,
meanwhile incorporating temporal regularities with the trajectory
representation. Moreover, we also design two self-supervised tasks, i.e.,
span-masked trajectory recovery and trajectory contrastive learning, to
introduce spatial-temporal characteristics of trajectories into the training
process of our START framework. The effectiveness of the proposed method is
verified by extensive experiments on two large-scale real-world datasets for
three downstream tasks. The experiments also demonstrate that our method can be
transferred across different cities to adapt to heterogeneous trajectory datasets.
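The trajectory contrastive task can be illustrated with a standard InfoNCE
objective; the sketch below assumes generic trajectory representation vectors
and omits the encoder, the augmentations, and the span-masked recovery task.

    import numpy as np

    def info_nce(z_anchor, z_positive, temperature=0.07):
        # Row i of z_positive is an augmented view of trajectory i in
        # z_anchor; all other rows in the batch act as negatives.
        za = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
        zp = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
        logits = za @ zp.T / temperature
        logits -= logits.max(axis=1, keepdims=True)      # stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -float(np.mean(np.diag(log_prob)))        # matched pairs

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(16, 128))                 # trajectory vectors
    z2 = z1 + 0.1 * rng.normal(size=(16, 128))      # augmented views
    print(info_nce(z1, z2))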
|
Application developers often place executable assertions -- equipped with
program-specific predicates -- in their system, targeting programming errors.
However, these detectors can detect data errors resulting from transient
hardware faults in main memory as well. But while assertions reduce silent
data corruptions (SDCs) in the program state they check, they add runtime to
the target program, which increases the attack surface for the remaining
state.
This article outlines an approach to find an optimal subset of assertions that
minimizes the SDC count, without the need to run fault-injection experiments
for every possible assertion subset.
|
The ability to reliably estimate physiological signals from video is a
powerful tool in low-cost, pre-clinical health monitoring. In this work we
propose a new approach to remote photoplethysmography (rPPG) - the measurement
of blood volume changes from observations of a person's face or skin. Similar
to current state-of-the-art methods for rPPG, we apply neural networks to learn
deep representations with invariance to nuisance image variation. In contrast
to such methods, we employ a fully self-supervised training approach, which has
no reliance on expensive ground truth physiological training data. Our proposed
method uses contrastive learning with a weak prior over the frequency and
temporal smoothness of the target signal of interest. We evaluate our approach
on four rPPG datasets, showing that comparable or better results can be
achieved compared to recent supervised deep learning methods but without using
any annotation. In addition, we incorporate a learned saliency resampling
module into both our unsupervised approach and supervised baseline. We show
that by allowing the model to learn where to sample the input image, we can
reduce the need for hand-engineered features while providing some
interpretability into the model's behavior and possible failure modes. We
release code for our complete training and evaluation pipeline to encourage
reproducible progress in this exciting new direction.
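One way to encode a weak frequency prior without labels is to penalize
predicted-signal power outside a plausible pulse band; the sketch below shows
such a penalty (the band edges and the use of a plain periodogram are
illustrative choices, not the paper's exact loss).

    import numpy as np

    def irrelevant_power_ratio(signal, fps=30.0, band=(0.66, 3.0)):
        # Fraction of spectral power outside a plausible pulse band
        # (0.66-3 Hz, roughly 40-180 bpm); no ground truth pulse needed.
        sig = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(sig)) ** 2
        freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[~in_band].sum() / spectrum.sum()

    t = np.arange(0, 10, 1.0 / 30.0)
    pred = (np.sin(2 * np.pi * 1.2 * t)
            + 0.3 * np.random.default_rng(0).normal(size=t.size))
    print(irrelevant_power_ratio(pred))   # lower for pulse-like predictions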
|
Interfaces between stratified epithelia and their supporting stromas commonly
exhibit irregular shapes. Undulations are particularly pronounced in dysplastic
tissues and typically evolve into long, finger-like protrusions in carcinomas.
In a previous work (Basan et al., Phys. Rev. Lett. 106, 158101 (2011)), we
demonstrated that an instability arising from viscous shear stresses caused by
the constant flow due to cell turnover in the epithelium could drive this
phenomenon. While interfacial tension between the two tissues as well as
mechanical resistance of the stroma tend to maintain a flat interface, an
instability occurs for sufficiently large viscosity, cell-division rate and
thickness of the dividing region in the epithelium. Here, extensions of this
work are presented, where cell division in the epithelium is coupled to the
local concentration of nutrients or growth factors diffusing from the stroma.
This enhances the instability by a mechanism similar to that of the
Mullins-Sekerka instability in single-diffusion processes of crystal growth. We
furthermore present the instability for the generalized case of a viscoelastic
stroma.
|