A small drop that splashes into a deep liquid sometimes reappears as a small
rising jet, for example when a water drop splashes into a pool or when coffee
drips into a cup. Here we show that the growing and rising jet continuously
redistributes its fluid to maintain a universal shape originating from a
surface-tension-based deceleration of the jet; the shape is universal in the
sense that the shape of the rising jet is the same at all times; only the
scaling depends on the fluid parameters and the deceleration. An inviscid equation of
motion for the jet is proposed assuming a time-dependent but uniform
deceleration; the equation of motion is made dimensionless by using a
generalized time-dependent capillary length ${\lambda_c}$ and is solved
numerically. As a solution a concave shape function is found that is fully
determined by three measurable physical parameters: deceleration, mass density
and surface tension; it is found that the surface-tension-based deceleration of
the jet scales quadratically with the size of the jet base. Deceleration values
derived from the jet shape are in good agreement with deceleration values
calculated from the time plot of the height of the rising jet.
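For a concrete feel of the scaling, here is a minimal sketch assuming the generalized capillary length takes the standard form $\lambda_c = \sqrt{\sigma/(\rho a)}$, i.e. the familiar capillary length with gravity replaced by the jet deceleration $a(t)$ (the exact definition used by the authors may differ):

```python
import math

def capillary_length(sigma, rho, a):
    """Generalized capillary length lambda_c = sqrt(sigma / (rho * a)).

    sigma : surface tension [N/m]
    rho   : mass density [kg/m^3]
    a     : (time-dependent) deceleration of the jet [m/s^2]
    """
    return math.sqrt(sigma / (rho * a))

# Sanity check with water under normal gravity:
# sigma ~ 0.072 N/m, rho ~ 1000 kg/m^3, a = g = 9.81 m/s^2
lc = capillary_length(0.072, 1000.0, 9.81)  # ~2.7 mm, the familiar value
```

A larger deceleration shrinks $\lambda_c$, so the universal shape is simply rescaled as the jet decelerates.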
|
One of the most important challenges in robotics is producing accurate
trajectories and controlling their dynamic parameters so that the robots can
perform different tasks. The ability to provide such motion control is closely
related to how such movements are encoded. Advances in deep learning have had a
strong impact on the development of novel approaches for Dynamic Movement
Primitives. In this work, we survey the scientific literature related to Neural
Dynamic Movement Primitives, to complement existing surveys on Dynamic Movement
Primitives.
|
Benchmarking a high-precision quantum operation is a major challenge for many
quantum systems in the presence of various noise sources as well as control
errors. Here we propose an $O(1)$ benchmarking of a dynamically corrected
rotation by exploiting the quantum advantage of a squeezed spin state in a
spin-1 Bose-Einstein
condensate. Our analytical and numerical results show that tiny rotation
infidelity, defined by $1-F$ with $F$ the rotation fidelity, can be calibrated
on the order of $1/N^2$ by only a few measurements of the rotation error for
$N$ atoms in an optimally squeezed spin state. Such an $O(1)$ benchmarking is
possible not only in a spin-1 BEC but also in other many-spin or many-qubit
systems if a squeezed or entangled state is available.
|
Rapid impact assessment in the immediate aftermath of a natural disaster is
essential to provide adequate information to international organisations, local
authorities, and first responders. Social media can support emergency response
with evidence-based content posted by citizens and organisations during ongoing
events. In this paper, we propose TriggerCit: an early flood alerting tool with
a multilanguage approach focused on timeliness and geolocation. The paper
focuses on assessing the reliability of the approach as a triggering system,
comparing it with alternative sources for alerts, and evaluating the quality
and amount of complementary information gathered. Geolocated visual evidence
extracted from Twitter by TriggerCit was analysed in two case studies on floods
in Thailand and Nepal in 2021.
|
Superfluid 3He-A shares the properties of a spin nematic and a chiral orbital
ferromagnet. Its order parameter is characterized by two vectors, d and l. This
doubly anisotropic superfluid, when it is confined in aerogel, represents the
most interesting example of a system with continuous symmetry in the presence
of random anisotropy disorder. We discuss the Larkin-Imry-Ma state, which is
characterized by the short-range orientational order of the vector l, while the
long-range orientational order is destroyed by the collective action of the
randomly oriented aerogel strings. On the other hand, sufficiently large
regular anisotropy produced either by the deformation of the aerogel or by
applied superflow suppresses the Larkin-Imry-Ma effect leading to the uniform
orientation of the vector l. This interplay of regular and random anisotropy
allows us to study many different effects.
|
In a previous report [10] it was shown that emulsion stability simulations
are able to reproduce the lifetime of micrometer-size drops of hexadecane
pressed by buoyancy against a planar water-hexadecane interface. It was
confirmed that small drops ($r_i<10\ \mu$m) stabilized with $\beta$-casein behave
as nondeformable particles, moving with a combination of Stokes and Taylor
tensors as they approach the interface. Here, a similar methodology is used to
parametrize the interaction potential of drops of soybean oil stabilized
with bovine serum albumin. The potential obtained is then employed to study the
lifetime of deformable drops in the range $10 \leq r_i \leq 1000\ \mu$m. It is
established that the average lifetime of these drops can be adequately
replicated using the model of truncated spheres. However, the results depend
sensitively on the expressions for the initial distance of deformation and the
maximum film radius used in the calculations. The set of equations adequate for
large drops is not satisfactory for medium-size drops ($10 \leq r_i \leq 100\
\mu$m), and vice versa. In the case of large particles, the increase in the
interfacial area as a consequence of the deformation of the drops generates a
very large repulsive barrier which opposes coalescence. Nevertheless, the
buoyancy force prevails. As a consequence, it is the hydrodynamic tensor of the
drops which determines the characteristic behavior of the lifetime as a function
of the particle size. While the average values of the coalescence time of the
drops can be justified by the mechanism of film thinning, the scattering of the
experimental data of large drops cannot be rationalized using the methodology
previously described. A possible explanation of this phenomenon requires
elaborate simulations which combine deformable drops, capillary waves,
repulsive interaction forces, and a time-dependent surfactant adsorption.
|
The working principle of axion helioscopes may lie behind unexpected solar
X-ray emission associated with solar magnetic fields, which act as the
catalyst. Solar axion signals can appear as transient brightenings as well as
continuous radiation. The energy range below 1 keV is a window of opportunity
for direct axion searches. (In)direct signatures support axions, or the like,
as an explanation of the striking behaviour of X-rays from the Sun.
|
Recently a new type of cosmological singularity has been postulated for
infinite barotropic index $w$ in the equation of state $p=w \rho$ of the
cosmological fluid, but vanishing pressure and density at the singular event.
Apparently the barotropic index $w$ would be the only physical quantity to blow
up at the singularity. In this talk we would like to discuss the strength of
such singularities and compare them with other types. We show that they are
weak singularities.
|
We investigate the physical structure of the gas component of the disk around
the pre-main-sequence star HD169142. The 13CO and C18O J=2-1 line emission is
observed from the disk with 1.4'' resolution using the Submillimeter Array. We
adopt the disk physical structure derived from a model which fits the spectral
energy distribution of HD169142. We obtain the full three-dimensional
information on the CO emission with the aid of a molecular excitation and
radiative transfer code. This information is used for the analysis of our
observations and previous 12CO J=2-1 and 1.3 mm continuum data. The disk is in
Keplerian rotation and seen at an inclination close to 13 deg from face-on. We
conclude that the regions traced by different CO isotopologues are distinct in
terms of their vertical location within the disk, their temperature and their
column densities. With the given disk structure, we find that freeze-out is not
efficient enough to remove a significant amount of CO from gas phase. Both
observed lines match the model prediction both in flux and in the spatial
structure of the emission. Therefore we use our data to derive the 13CO and
C18O mass and consequently the 12CO mass using standard isotopic ratios. We
constrain the total disk gas mass to $(0.6-3.0)\times 10^{-2}$ Msun. Adopting a
maximum dust opacity of 2 cm$^2$ per gram of dust we derive a minimum dust mass
of $2.16\times 10^{-4}$ Msun from the fit to the 1.3 mm data. Comparison of the
derived gas and dust mass shows that a gas-to-dust mass ratio of 100 is only
possible under the assumption of a dust opacity of 2 cm$^2$/g and a 12CO
abundance of $10^{-4}$ with respect to H2. However, our data are also
compatible with a gas-to-dust ratio of 25, with a dust opacity of 1 cm$^2$/g
and a 12CO abundance of $2\times 10^{-4}$.
|
Parsec-scale VLBA images of BL Lac at 15 GHz show that the jet contains a
permanent quasi-stationary emission feature 0.26 mas (0.34 pc projected) from
the core, along with numerous moving features. In projection, the tracks of the
moving features cluster around an axis at position angle -166.6 deg that
connects the core with the standing feature. The moving features appear to
emanate from the standing feature in a manner strikingly similar to the results
of numerical 2-D relativistic magneto-hydrodynamic (RMHD) simulations in which
moving shocks are generated at a recollimation shock. Because of this, and the
close analogy to the jet feature HST-1 in M87, we identify the standing feature
in BL Lac as a recollimation shock. We assume that the magnetic field dominates
the dynamics in the jet, and that the field is predominantly toroidal. From
this we suggest that the moving features are compressions established by slow
and fast mode magneto-acoustic MHD waves. We illustrate the situation with a
simple model in which the slowest moving feature is a slow-mode wave, and the
fastest feature is a fast-mode wave. In the model the beam has Lorentz factor
about 3.5 in the frame of the host galaxy, and the fast mode wave has Lorentz
factor about 1.6 in the frame of the beam. This gives a maximum apparent speed
of 10c for the moving features. In this model the Lorentz factor of the pattern in
the galaxy frame is approximately 3 times larger than that of the beam itself.
|
We consider the question of whether it is worth building an experiment with
the sole purpose of bringing the detectable limit on the tensor-to-scalar
ratio, r, down to 10^{-3}. We look at the inflationary models which give a
prediction in this region and recap the current situation with the tensor mode,
showing that there are only three known models of inflation which give
definitive predictions in the region 10^{-3}<r<10^{-2}.
|
Sparse model selection by structural risk minimization leads to a set of a
few predictors, ideally a subset of the true predictors. This selection clearly
depends on the underlying loss function $\tilde L$. For linear regression with
square loss, the particular (functional) Gradient Boosting variant
$L_2$-Boosting excels in computational efficiency even for very large
predictor sets, while still providing suitable estimation consistency. For more
general loss functions, functional gradients are not always easily accessible
or, as in the case of continuous ranking, need not even exist. To close this
gap, starting from column selection frequencies obtained from $L_2$-Boosting,
we introduce a loss-dependent "column measure" $\nu^{(\tilde L)}$ which
mathematically describes variable selection. The fact that certain variables
relevant for a particular loss $\tilde L$ never get selected by $L_2$-Boosting
is reflected by a respective singular part of $\nu^{(\tilde L)}$ w.r.t.
$\nu^{(L_2)}$. With this concept at hand, a suitable change of measure
(accounting for singular parts) makes $L_2$-Boosting select variables
according to a different loss $\tilde L$. As a consequence, this opens the
bridge to simulational techniques, such as various resampling techniques or
rejection sampling, to achieve this change of measure in an algorithmic way.
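The change of measure can be realized algorithmically. Below is a minimal rejection-sampling sketch on a common support; the dict-based representation of the column measures and the variable names are illustrative assumptions, and a singular part of the target measure (variables never selected by $L_2$-Boosting) would need separate treatment:

```python
import random

def resample_columns(nu_l2, nu_target, n_samples, seed=0):
    """Rejection-sampling sketch: turn draws from the L2-Boosting column
    measure nu_l2 into draws from a target column measure nu_target.

    Both arguments are dicts {variable: probability} on a common support.
    """
    rng = random.Random(seed)
    variables = list(nu_l2)
    weights = [nu_l2[v] for v in variables]
    # envelope constant M >= nu_target(v) / nu_l2(v) over the support
    M = max(nu_target[v] / nu_l2[v] for v in variables)
    accepted = []
    while len(accepted) < n_samples:
        v = rng.choices(variables, weights=weights)[0]   # propose from nu_l2
        if rng.random() <= nu_target[v] / (M * nu_l2[v]):
            accepted.append(v)   # accept with ratio nu_target / (M * nu_l2)
    return accepted
```

Accepted draws then follow the target measure, so downstream selection frequencies mimic boosting under the loss $\tilde L$.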
|
Highly specific datasets of scientific literature are important for both
research and education. However, it is difficult to build such datasets at
scale. A common approach is to build these datasets reductively by applying
topic modeling on an established corpus and selecting specific topics. A more
robust but time-consuming approach is to build the dataset constructively in
which a subject matter expert (SME) handpicks documents. This method does not
scale and is prone to error as the dataset grows. Here we showcase a new tool,
based on machine learning, for constructively generating targeted datasets of
scientific literature. Given a small initial "core" corpus of papers, we build
a citation network of documents. At each step of the citation network, we
generate text embeddings and visualize the embeddings through dimensionality
reduction. Papers are kept in the dataset if they are "similar" to the core or
are otherwise pruned through human-in-the-loop selection. Additional insight
into the papers is gained through sub-topic modeling using SeNMFk. We
demonstrate our new tool for literature review by applying it to two different
fields in machine learning.
|
We study two-dimensional eigenvalue ensembles close to certain types of
singular points in the bulk of the droplet. We prove existence of a microscopic
density which quickly approaches the classical equilibrium density, as the
distance from the singularity increases beyond the microscopic scale. As a
consequence we obtain asymptotics for the Bergman function of certain
Fock-Sobolev spaces of entire functions.
|
We study the cosmological properties of a codimension two brane world that
sits at the intersection between two four branes, in the framework of six
dimensional Einstein-Gauss-Bonnet gravity. Due to contributions of the
Gauss-Bonnet terms, the junction conditions require the presence of localized
energy density on the codimension two defect. The induced metric on this
surface assumes a FRW form, with a scale factor associated to the position of
the brane in the background; we can embed on the codimension two defect the
preferred form of energy density. We present the cosmological evolution
equations for the three brane, showing that, for the case of pure AdS$_6$
backgrounds, they acquire the same form of the ones for the Randall-Sundrum II
model. When the background is different from pure AdS$_6$, the cosmological
behavior is potentially modified in respect to the typical one of codimension
one brane worlds. We discuss, in a particular model embedded in an AdS$_6$
black hole, the conditions one should satisfy in order to obtain standard
cosmology at late epochs.
|
We study interlacing properties of the zeros of two types of linear
combinations of Laguerre polynomials with different parameters, namely
$R_n=L_n^{\alpha}+aL_{n}^{\alpha'}$ and $S_n=L_n^{\alpha}+bL_{n-1}^{\alpha'}$.
Proofs and numerical counterexamples are given in situations where the zeros of
$R_n$, and $S_n$, respectively, interlace (or do not in general) with the zeros
of $L_k^{\alpha}$, $L_k^{\alpha'}$, $k=n$ or $n-1$. The results we prove hold
for continuous, as well as integral, shifts of the parameter $\alpha$.
|
A certain kernel (sometimes called the Pick kernel) associated to Schur
functions on the disk is always positive semi-definite. A generalization of
this fact is well-known for Schur functions on the polydisk. In this article,
we show that the Pick kernel on the polydisk has a great deal of structure
beyond being positive semi-definite. It can always be split into two kernels
possessing certain shift invariance properties.
|
This paper investigates a critical access control issue in the Internet of
Things (IoT). In particular, we propose a smart contract-based framework, which
consists of multiple access control contracts (ACCs), one judge contract (JC)
and one register contract (RC), to achieve distributed and trustworthy access
control for IoT systems. Each ACC provides one access control method for a
subject-object pair, and implements both static access right validation based
on predefined policies and dynamic access right validation by checking the
behavior of the subject. The JC implements a misbehavior-judging method to
facilitate the dynamic validation of the ACCs by receiving misbehavior reports
from the ACCs, judging the misbehavior and returning the corresponding penalty.
The RC registers the information of the access control and misbehavior-judging
methods as well as their smart contracts, and also provides functions (e.g.,
register, update and delete) to manage these methods. To demonstrate the
application of the framework, we provide a case study in an IoT system with one
desktop computer, one laptop and two Raspberry Pi single-board computers, where
the ACCs, JC and RC are implemented based on the Ethereum smart contract
platform to achieve the access control.
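The validation logic can be sketched off-chain. The following Python mock-up of one ACC and the JC is only illustrative: the method names, the request-frequency misbehavior rule, and the time-based penalty are assumptions for the sketch, not the paper's Solidity interface:

```python
import time

class JudgeContract:
    """Sketch of the JC: receives misbehavior reports and returns a penalty
    (here a blocking period that grows with repeated misbehavior)."""
    def __init__(self, base_penalty=60.0):
        self.base_penalty = base_penalty
        self.history = {}                       # subject -> misbehavior count

    def judge(self, subject):
        n = self.history.get(subject, 0) + 1
        self.history[subject] = n
        return self.base_penalty * n

class AccessControlContract:
    """Sketch of one ACC for a single subject-object pair: static policy
    check plus dynamic validation of the subject's request frequency."""
    def __init__(self, policy, judge, min_interval=1.0):
        self.policy = policy                    # {action: allowed?}
        self.judge = judge
        self.min_interval = min_interval
        self.last_request = None
        self.blocked_until = 0.0

    def access(self, subject, action, now=None):
        now = time.monotonic() if now is None else now
        if now < self.blocked_until:            # still serving a penalty
            return False
        if self.last_request is not None and now - self.last_request < self.min_interval:
            # dynamic validation: too-frequent requests count as misbehavior
            self.blocked_until = now + self.judge.judge(subject)
            self.last_request = now
            return False
        self.last_request = now
        return bool(self.policy.get(action, False))   # static validation
```

An RC would simply map (subject, object) pairs to registered ACC instances and allow them to be updated or deleted.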
|
The brightest giant flare from the soft $\gamma$-ray repeater (SGR) 1806-20
was detected on 2004 December 27. The isotropic-equivalent energy release of
this burst is at least one order of magnitude more energetic than those of the
two other SGR giant flares. Starting from about one week after the burst, a
very bright ($\sim 80$ mJy), fading radio afterglow was detected. Follow-up
observations revealed the multi-frequency light curves of the afterglow and the
temporal evolution of the source size. Here we show that these observations can
be understood in a two-component explosion model. In this model, one component
is a relativistic collimated outflow responsible for the initial giant flare
and the early afterglow, and another component is a subrelativistic wider
outflow responsible for the late afterglow. We also discuss triggering
mechanisms of these two components within the framework of the magnetar model.
|
We present a Monte Carlo study of a model protein with 54 amino acids that
folds directly to its native three-helix-bundle state without forming any
well-defined intermediate state. The free-energy barrier separating the native
and unfolded states of this protein is found to be weak, even at the folding
temperature. Nevertheless, we find that melting curves to a good approximation
can be described in terms of a simple two-state system, and that the relaxation
behavior is close to single exponential. The motion along individual reaction
coordinates is roughly diffusive on timescales beyond the reconfiguration time
for an individual helix. A simple estimate based on diffusion in a square-well
potential predicts the relaxation time within a factor of two.
|
With continual miniaturization, ever more applications of deep learning can be
found in embedded systems, where it is common to encounter data with a natural
complex-domain representation. To this end we extend Sparse Variational Dropout
to complex-valued neural networks and verify the proposed Bayesian technique by
conducting a large numerical study of the performance-compression trade-off of
C-valued networks on two tasks: image recognition on MNIST-like and CIFAR10
datasets and music transcription on MusicNet. We replicate the state-of-the-art
result by Trabelsi et al. [2018] on MusicNet with a complex-valued network
compressed by 50-100x at a small performance penalty.
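To get a rough feel for the compression factor, here is a sketch using plain magnitude pruning of a complex-valued weight matrix as a stand-in; Sparse Variational Dropout itself learns per-weight dropout rates, and this toy code only mimics the resulting sparsity level:

```python
import numpy as np

def prune_complex_layer(W, keep_ratio=0.02):
    """Keep only the largest-|w| entries of a complex weight matrix.

    A keep_ratio of 0.01-0.02 corresponds to 50-100x compression.
    Returns the sparsified matrix and the boolean keep-mask.
    """
    k = max(1, int(keep_ratio * W.size))
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]   # k-th largest magnitude
    mask = np.abs(W) >= thresh
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
W_sparse, mask = prune_complex_layer(W, keep_ratio=0.02)   # ~50x compression
```

Only the surviving complex weights (and their indices) need to be stored, which is where the memory savings on embedded hardware come from.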
|
Mesoscopic solid state Aharonov-Bohm interferometers have been used to
measure the "intrinsic" phase, $\alpha_{QD}$, of the resonant quantum
transmission amplitude through a quantum dot (QD). For a two-terminal "closed"
interferometer, which conserves the electron current, Onsager's relations
require that the measured phase shift $\beta$ only "jumps" between 0 and $\pi$.
Additional terminals open the interferometer but then $\beta$ depends on the
details of the opening. Using a theoretical model, we present quantitative
criteria (which can be tested experimentally) for $\beta$ to be equal to the
desired $\alpha_{QD}$: the "lossy" channels near the QD should have both a
small transmission and a small reflection.
|
Inspired by recent trends in vision and language learning, we explore
applications of attention mechanisms for visio-lingual fusion within an
application to story-based video understanding. Like other video-based QA
tasks, video story understanding requires agents to grasp complex temporal
dependencies. However, as it focuses on the narrative aspect of video it also
requires understanding of the interactions between different characters, as
well as their actions and their motivations. We propose a novel co-attentional
transformer model to better capture long-term dependencies seen in visual
stories such as dramas and measure its performance on the video question
answering task. We evaluate our approach on the recently introduced DramaQA
dataset which features character-centered video story understanding questions.
Our model outperforms the baseline model by 8 percentage points overall, by at
least 4.95 and up to 12.8 percentage points across the difficulty levels, and
manages to beat the winner of the DramaQA challenge.
|
This paper presents converse theorems for safety in terms of barrier
functions for unconstrained continuous-time systems modeled as differential
inclusions. Via a counterexample, we show the lack of existence of autonomous
and continuous barrier functions certifying safety for a nonlinear system that
is not only safe but also has a smooth right-hand side. Guided by converse
Lyapunov theorems for (non-asymptotic) stability, time-varying barrier functions
and appropriate infinitesimal conditions are shown to be both necessary and
sufficient under mild regularity conditions on the right-hand side of the
system. More precisely, we propose a general construction of a time-varying
barrier function in terms of a marginal function involving the finite-horizon
reachable set. Using techniques from set-valued and nonsmooth analysis, we show
that such a function guarantees safety when the system is safe. Furthermore, we
show that the proposed barrier function construction inherits the regularity
properties of the reachable set. In addition, when the system is safe
and smooth, we build upon the constructed barrier function to show the
existence of a smooth barrier function guaranteeing safety. Comparisons and
relationships to results in the literature are also presented.
|
One of the fundamental problems in network analysis is detecting community
structure in multi-layer networks, of which each layer represents one type of
edge information among the nodes. We propose integrative spectral clustering
approaches based on effective convex layer aggregations. Our aggregation
methods are strongly motivated by a delicate asymptotic analysis of the
spectral embedding of weighted adjacency matrices and the downstream $k$-means
clustering, in a challenging regime where community detection consistency is
impossible. In fact, the methods are shown to estimate the optimal convex
aggregation, which minimizes the mis-clustering error under some specialized
multi-layer network models. Our analysis further suggests that clustering using
Gaussian mixture models is generally superior to the commonly used $k$-means in
spectral clustering. Extensive numerical studies demonstrate that our adaptive
aggregation techniques, together with Gaussian mixture model clustering, make
the new spectral clustering remarkably competitive compared to several
popularly used methods.
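The pipeline can be sketched in a few lines. The sketch below uses a fixed convex weight vector rather than the adaptive aggregation the paper estimates, and plain Lloyd's k-means in place of the Gaussian-mixture step the analysis favors:

```python
import numpy as np

def convex_aggregate_spectral(layers, weights, k, n_iter=50):
    """Spectral clustering of a multi-layer network via convex layer
    aggregation: `layers` is a list of symmetric adjacency matrices and
    `weights` a convex combination (nonnegative, summing to one).

    Clusters the top-k eigenvector embedding of the aggregated matrix
    with Lloyd's k-means using deterministic farthest-point initialization.
    """
    A = sum(w * L for w, L in zip(weights, layers))
    vals, vecs = np.linalg.eigh(A)
    X = vecs[:, np.argsort(np.abs(vals))[::-1][:k]]    # top-k embedding
    centers = [X[0]]                                   # farthest-point init
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):                            # Lloyd's iterations
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

With two layers that each reveal one of two communities, equal weights already recover both blocks, which is the intuition behind aggregating before embedding.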
|
We present a large-scale empirical study of catastrophic forgetting (CF) in
modern Deep Neural Network (DNN) models that perform sequential (or:
incremental) learning. A new experimental protocol is proposed that enforces
typical constraints encountered in application scenarios. As the investigation
is empirical, we evaluate CF behavior on the hitherto largest number of visual
classification datasets, from each of which we construct a representative
number of Sequential Learning Tasks (SLTs) in close alignment to previous works
on CF. Our results clearly indicate that there is no model that avoids CF for
all investigated datasets and SLTs under application conditions. We conclude
with a discussion of potential solutions and workarounds to CF, notably for the
EWC and IMM models.
|
We demonstrate a machine learning-based approach which predicts the
properties of crystal structures following relaxation based on the unrelaxed
structure. Use of crystal graph singular values reduces the number of features
required to describe a crystal by more than an order of magnitude compared to
the full crystal graph representation. We construct machine learning models
using the crystal graph singular value representations in order to predict the
volume, enthalpy per atom, and metal versus semiconducting phase of DFT-relaxed
organic salt crystals based on randomly generated unrelaxed crystal structures.
Initial base models are trained to relate 89,949 randomly generated structures
of salts formed by varying ratios of 1,3,5-triazine and HCl with the
corresponding volumes, enthalpies per atom, and phase of the DFT-relaxed
structures. We further demonstrate that the base model is able to extrapolate
to new chemical systems with the inclusion of 2,000 to 10,000 crystal
structures from the new system. After training a single model with a large
number of data points, extension can be done at significantly lower cost. The
constructed machine learning models can be used to rapidly screen large sets of
randomly generated organic salt crystal structures and efficiently downselect
the structures most likely to be experimentally realizable. The models can be
used either as a stand-alone crystal structure predictor or incorporated into
more sophisticated workflows as a filtering step.
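The dimensionality reduction can be illustrated with a toy sketch. The distance-weighted adjacency and the cutoff below are simplifying assumptions; the paper's crystal graphs also encode periodicity and chemical species:

```python
import numpy as np

def singular_value_descriptor(positions, cutoff=3.0, n_values=8):
    """Reduced crystal-graph feature: singular values of a distance-weighted
    adjacency matrix built from atomic positions (a simplified stand-in for
    the full crystal graph representation).

    Returns a fixed-length descriptor that is invariant under rotations of
    the structure and permutations of the atoms: the leading singular
    values, zero-padded to `n_values`.
    """
    pos = np.asarray(positions, dtype=float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    A = np.where((d > 0) & (d < cutoff), 1.0 / d, 0.0)   # weighted adjacency
    s = np.linalg.svd(A, compute_uv=False)               # singular values
    out = np.zeros(n_values)
    out[:min(n_values, len(s))] = s[:n_values]
    return out
```

Because the descriptor length is fixed regardless of the number of atoms, it can feed a standard regression model for volume or enthalpy, which is the role the singular-value representation plays in the workflow.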
|
We experimentally demonstrate that the decoherence of a spin by a spin bath
can be completely eliminated by fully polarizing the spin bath. We use electron
paramagnetic resonance at 240 gigahertz and 8 Tesla to study the spin coherence
time $T_2$ of nitrogen-vacancy centers and nitrogen impurities in diamond from
room temperature down to 1.3 K. A sharp increase of $T_2$ is observed below the
Zeeman energy (11.5 K). The data are well described by a suppression of the
flip-flop induced spin bath fluctuations due to thermal spin polarization.
$T_2$ saturates at $\sim 250 \mu s$ below 2 K, where the spin bath polarization
is 99.4%.
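The quoted numbers are mutually consistent under a simple thermal-polarization estimate for a spin-1/2 bath; this is a back-of-envelope sketch, not the analysis of the paper:

```python
import math

def spin_bath_polarization(zeeman_kelvin, temperature_kelvin):
    """Thermal polarization of a spin-1/2 bath, p = tanh(E_Z / (2 k_B T)),
    with the Zeeman splitting expressed in kelvin."""
    return math.tanh(zeeman_kelvin / (2.0 * temperature_kelvin))

# Zeeman energy of 11.5 K (240 GHz at 8 T): near 2 K the bath is
# already ~99.4% polarized, so flip-flop fluctuations are frozen out.
p = spin_bath_polarization(11.5, 2.0)
```

Below the Zeeman energy the polarization approaches unity exponentially fast, which is why $T_2$ rises sharply in that regime.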
|
We derive a new lower bound for the ground state energy $E^{\rm F}(N,S)$ of N
fermions with total spin S in terms of binding energies $E^{\rm F}(N-1,S \pm
1/2)$ of (N-1) fermions. Numerical examples are provided for some simple
short-range or confining potentials.
|
Symmetry contributes to processes of perceptual organization in biological
vision and influences the quality and time of goal directed decision making in
animals and humans, as discussed in recent work on the examples of symmetry of
things in a thing and bilateral shape symmetry. The present study was designed
to show that selective chromatic variations in geometric shape configurations
with mirror symmetry can be exploited to highlight functional properties of
symmetry of things in a thing in human vision. The experimental procedure uses
a psychophysical two alternative forced choice technique, where human observers
have to decide as swiftly as possible whether two shapes presented
simultaneously on a computer screen are symmetrical or not. The stimuli are
computer generated 2D shape configurations consisting of multiple elements,
with and without systematic variations in local color, color saturation, or
achromatic contrast producing variations in symmetry of things in a thing. All
stimulus pairs presented had perfect geometric mirror symmetry. The results
show that varying the color of local shape elements selectively in
multichromatic and monochromatic shapes significantly slows down perceptual
response times, which are a direct measure of stimulus uncertainty. It is
concluded that local variations in hue or contrast produce functionally
meaningful variations in symmetry of things in thing, revealed here as a
relevant perceptual variable in symmetry detection. Disturbance of the latter
increases stimulus uncertainty and thereby affects the perceptual salience of
mirror symmetry in the time course for goal relevant human decisions.
|
We investigate the local times of a continuous-time Markov chain on an
arbitrary discrete state space. For fixed finite range of the Markov chain, we
derive an explicit formula for the joint density of all local times on the
range, at any fixed time. We use standard tools from the theory of stochastic
processes and finite-dimensional complex calculus. We apply this formula in the
following directions: (1) we derive large deviation upper estimates for the
normalized local times beyond the exponential scale, (2) we derive the upper
bound in Varadhan's lemma for any measurable functional of the local
times, and (3) we derive large deviation upper bounds for continuous-time
simple random walk on large subboxes of $\Z^d$ tending to $\Z^d$ as time
diverges. We finally discuss the relation of our density formula to the
Ray-Knight theorem for continuous-time simple random walk on $\Z$, which is
analogous to the well-known Ray-Knight description of Brownian local times. In
this extended version, we prove that the Ray-Knight theorem follows from our
density formula.
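For intuition, local times of a finite-state chain are easy to simulate directly; here is a minimal Gillespie-style sketch (the joint density formula itself is analytic and not reproduced here):

```python
import random

def local_times(rates, x0, t_end, seed=0):
    """Simulate a continuous-time Markov chain and record its local times
    (total time spent in each state) up to time t_end.

    `rates[i][j]` is the jump rate from state i to state j (zero diagonal).
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    ell = {i: 0.0 for i in range(len(rates))}       # local times
    while True:
        total = sum(rates[x])                        # total exit rate from x
        hold = rng.expovariate(total) if total > 0 else float("inf")
        if t + hold >= t_end:
            ell[x] += t_end - t                      # truncate at the horizon
            return ell
        ell[x] += hold
        t += hold
        # choose the next state with probability proportional to its rate
        x = rng.choices(range(len(rates)), weights=rates[x])[0]
```

By construction the local times sum to the elapsed time, the normalization underlying the large deviation statements for the rescaled local times.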
|
We find that convective regions of collapsing massive stellar cores possess
sufficient stochastic angular momentum to form intermittent accretion disks
around the newly born neutron star (NS) or black hole (BH), as required by the
jittering-jets model for core-collapse supernova (CCSN) explosions. To reach
this conclusion we derive an approximate expression for stochastic specific
angular momentum in convection layers of stars, and using the mixing-length
theory apply it to four stellar models at core-collapse epoch. In all models,
evolved using the stellar evolution code MESA, the convective helium layer has
sufficient angular momentum to form an accretion disk. The mass available for
disk formation around the NS or BH is 0.1-1.2 Msun; stochastic accretion of this
mass can form intermittent accretion disks that launch jets powerful enough to
explode the star according to the jittering-jets model. Our results imply that
even if no explosion occurs after accretion of the inner ~2-5 Msun of the core
onto the NS or BH (the mass depends on the stellar model), accretion of outer
layers of the core will eventually lead to an energetic supernova explosion.
|
We propose an iterative method for approximating the capacity of
classical-quantum channels with a discrete input alphabet and a finite
dimensional output, possibly under additional constraints on the input
distribution. Based on duality of convex programming, we derive explicit upper
and lower bounds for the capacity. To provide an $\varepsilon$-close estimate
to the capacity, the presented algorithm requires $O(\tfrac{(N \vee M) M^3
\log(N)^{1/2}}{\varepsilon})$ steps, where $N$ denotes the input alphabet size and
$M$ the output dimension. We then generalize the method for the task of
approximating the capacity of classical-quantum channels with a bounded
continuous input alphabet and a finite dimensional output. For channels with a
finite dimensional quantum mechanical input and output, the idea of a universal
encoder allows us to approximate the Holevo capacity using the same method. In
particular, we show that the problem of approximating the Holevo capacity can
be reduced to a multidimensional integration problem. For families of quantum
channels fulfilling a certain assumption we show that the complexity to derive
an $\varepsilon$-close solution to the Holevo capacity is subexponential or
even polynomial in the problem size. We provide several examples to illustrate
the performance of the approximation scheme in practice.
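For orientation, the fully classical special case can be computed with the standard Blahut-Arimoto alternating optimization. This is shown only as a classical analogue; the paper's duality-based algorithm for classical-quantum channels is not reproduced here:

```python
import math

def blahut_arimoto(W, n_iter=200):
    """Classical Blahut-Arimoto iteration for the capacity of a discrete
    memoryless channel with transition matrix rows W[x][y] = p(y|x).

    Returns the capacity estimate in bits.
    """
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n                                # uniform input to start
    for _ in range(n_iter):
        # output distribution induced by the current input distribution
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # relative entropies D(W(.|x) || q), in nats
        D = [sum(W[x][y] * math.log(W[x][y] / q[y]) for y in range(m) if W[x][y] > 0)
             for x in range(n)]
        z = [p[x] * math.exp(D[x]) for x in range(n)]
        s = sum(z)
        p = [v / s for v in z]                       # p(x) ∝ p(x) exp(D_x)
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(p[x] * W[x][y] * math.log(W[x][y] / q[y], 2)
               for x in range(n) for y in range(m) if W[x][y] > 0 and p[x] > 0)
```

For a binary symmetric channel with crossover probability 0.1 this converges to the textbook value $1 - H(0.1) \approx 0.531$ bits.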
|
We present the failure of the standard coupled-channels method in explaining
the inelastic scattering together with other observables such as elastic
scattering, excitation function and fusion data. We use both microscopic
double-folding and phenomenological deep potentials with shallow imaginary
components. We argue that the solution of the problems for the inelastic
scattering data is not related to the central nuclear potential, but to the
coupling potential between excited states. We present that these problems can
be addressed in a systematic way by using a different shape for the coupling
potential instead of the usual one based on Taylor expansion.
|
We first review the three known chiral anomalies in four dimensions and then
use the anomaly free conditions to study the uniqueness of quark and lepton
representations and charge quantizations in the standard model. We also extend
our results to a theory with an arbitrary number of colors. Finally, we discuss
the family problem.
|
In this paper, we study the dynamical properties of thermodynamic phase
transition (PT) for the charged AdS black hole (BH) with a global monopole via
the Gibbs free energy landscape and reveal the effects of a global monopole on
the kinetics of the AdS BH thermodynamic PT. First, we briefly review the
thermodynamics of the charged AdS BH with a global monopole. Then, we introduce
the Gibbs free energy landscape to study the thermodynamic stability of the BH
state. Because of thermal fluctuations, the small black hole (SBH) state can
transit to the large black hole (LBH) state, and vice versa. We use the
Fokker-Planck equation with the reflecting boundary condition to study the
probability evolution of the BH state with and without a global monopole
separately. We find that for both the SBH and LBH states, the global monopole
could slow down the evolution of the BH state. In addition, we obtain the
relationship between the first passage time and the monopole parameter $\eta$.
The result shows that as the monopole parameter $\eta$ increases, the mean
first passage time will be longer for both the SBH and LBH states.
|
For vertical velocity field $v_{\rm z} (r,z;R)$ of granular flow through an
aperture of radius $R$, we propose a size scaling form $v_{\rm z}(r,z;R)=v_{\rm
z} (0,0;R)f (r/R_{\rm r}, z/R_{\rm z})$ in the region above the aperture. The
length scales $R_{\rm r}=R- 0.5 d$ and $R_{\rm z}=R+k_2 d$, where $k_2$ is a
parameter to be determined and $d$ is the granule diameter. The effective
acceleration, which is derived from $v_{\rm z}$, also follows a size scaling
form $a_{\rm eff} = v_{\rm z}^2(0,0;R)R_{\rm z}^{-1} \theta (r/R_{\rm r},
z/R_{\rm z})$. For granular flow under gravity $g$, there is a boundary
condition $a_{\rm eff} (0,0;R)=-g$ which gives rise to $v_{\rm z} (0,0;R)=
\sqrt{ \lambda g R_{\rm z}}$ with $\lambda=-1/\theta (0,0)$. Using the size
scaling form of vertical velocity field and its boundary condition, we can
obtain the flow rate $W =C_2 \rho \sqrt{g } R_{\rm r}^{D-1} R_{\rm z}^{1/2} $,
which agrees with the Beverloo law when $R \gg d$. The vertical velocity fields
$v_z (r,z;R)$ in three-dimensional (3D) and two-dimensional (2D) hoppers have
been simulated using the discrete element method (DEM) and GPU program.
Simulation data confirm the size scaling form of $v_{\rm z} (r,z;R)$ and the
$R$-dependence of $v_{\rm z} (0,0;R)$.
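A quick numerical check of the limiting behaviour (with placeholder values for $C_2$, $k_2$ and the remaining constants, since fitted values are not reproduced here): for $R \gg d$ both length scales approach $R$, so the proposed flow rate recovers the Beverloo exponent $D - 1/2$.

```python
import numpy as np

# Size-scaling flow rate W = C2 * rho * sqrt(g) * R_r^(D-1) * R_z^(1/2),
# with R_r = R - 0.5 d and R_z = R + k2 d. C2 and k2 are placeholder values.
def flow_rate(R, d=1.0, D=3, rho=1.0, g=9.81, C2=1.0, k2=1.0):
    R_r = R - 0.5 * d
    R_z = R + k2 * d
    return C2 * rho * np.sqrt(g) * R_r ** (D - 1) * np.sqrt(R_z)

# Effective power-law exponent between two large apertures (R >> d):
R1, R2 = 50.0, 100.0
exponent = np.log(flow_rate(R2) / flow_rate(R1)) / np.log(R2 / R1)
```

Here `exponent` comes out close to $D - 1/2 = 2.5$, the Beverloo scaling in three dimensions.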
|
If the n-dimensional unit sphere is covered by finitely many spherically
convex bodies, then the sum of the inradii of these bodies is at least $\pi$.
This bound is sharp, and the equality case is characterized.
|
Low-rank matrix completion has been studied extensively under various
categories. The problem can be categorized as noisy or exact completion, and
the algorithms as active or passive. In this paper we focus on adaptive matrix
completion with a bounded type of noise. We assume that the matrix
$\mathbf{M}$ we aim to recover is composed of a low-rank matrix plus a small
bounded noise. The problem has been previously studied by \cite{nina} in a
fixed sampling model. Here, we study this problem in an adaptive setting in
which we continuously estimate an upper bound for the angle between the
underlying low-rank subspace and the noise-added subspace. Moreover, the
method suggested here can be shown to require far fewer observations than the
aforementioned method.
|
We study the correlations between center vortices and Abelian monopoles for
SU($3$) gauge group. Combining fractional fluxes of monopoles, center vortex
fluxes are constructed in the thick center vortex model. By calculating the
potentials induced by the fractional fluxes that constitute the center vortex
flux in a thick-center-vortex-like model, and comparing them with the potential
induced by center vortices, we observe an attraction between the fractional
monopole fluxes that constitute the center vortex flux. We conclude that the
center vortex flux is stable, as expected. In addition, we show that adding a
contribution of the monopole-antimonopole pairs to the potentials induced by
center vortices ruins the Casimir scaling in the intermediate regime.
|
This paper proposes and analyzes arbitrarily high-order discontinuous
Galerkin (DG) and finite volume methods which provably preserve the positivity
of density and pressure for the ideal MHD on general meshes. Unified auxiliary
theories are built for rigorously analyzing the positivity-preserving (PP)
property of MHD schemes with a HLL type flux on polytopal meshes in any space
dimension. The main challenges overcome here include establishing the relation
between the PP property and the discrete divergence of the magnetic field on
general meshes, and estimating proper wave speeds in the HLL flux to ensure the
PP property. In the 1D case, we prove that the standard DG and finite volume
methods with the proposed HLL flux are PP, under a condition accessible by a PP
limiter. For the multidimensional conservative MHD system, standard DG methods
with a PP limiter are not PP in general, due to the effect of unavoidable
divergence error. We construct provably PP high-order DG and finite volume
schemes by proper discretization of the symmetrizable MHD system, with two
divergence-controlling techniques: locally divergence-free elements and a
penalty term. The former leads to zero divergence within each cell, while the
latter controls the divergence error across cell interfaces. Our analysis
reveals that a coupling of the two is important for positivity preservation, as
they contribute exactly the discrete divergence terms absent in standard DG
schemes but crucial for ensuring the PP property. Numerical tests confirm the
PP property and the effectiveness of the proposed PP schemes. Unlike the
conservative MHD system, the exact smooth solutions of the symmetrizable MHD
system are proved to retain positivity even if the divergence-free condition is
not satisfied. Our analysis and findings further the understanding, at both the
discrete and continuous levels, of the relation between the PP property and the
divergence-free constraint.
|
An effective approach in meta-learning is to utilize multiple "train tasks"
to learn a good initialization for model parameters that can help solve unseen
"test tasks" with very few samples by fine-tuning from this initialization.
Although successful in practice, theoretical understanding of such methods is
limited. This work studies an important aspect of these methods: splitting the
data from each task into train (support) and validation (query) sets during
meta-training. Inspired by recent work (Raghu et al., 2020), we view such
meta-learning methods through the lens of representation learning and argue
that the train-validation split encourages the learned representation to be
low-rank without compromising on expressivity, as opposed to the non-splitting
variant that encourages high-rank representations. Since sample efficiency
benefits from low-rankness, the splitting strategy will require very few
samples to solve unseen test tasks. We present theoretical results that
formalize this idea for linear representation learning on a subspace
meta-learning instance, and experimentally verify this practical benefit of
splitting in simulations and on standard meta-learning benchmarks.
|
Biological infants are naturally curious and try to comprehend their physical
surroundings by interacting, in myriad multisensory ways, with different
objects - primarily macroscopic solid objects - around them. Through their
various interactions, they build hypotheses and predictions, and eventually
learn, infer and understand the nature of the physical characteristics and
behavior of these objects. Thus inspired, we propose a model for
curiosity-driven learning and inference for real-world AI agents. This model is
based on the arousal of curiosity, deriving from observations along
discontinuities in the fundamental macroscopic solid-body physics parameters,
i.e., shape constancy, spatial-temporal continuity, and object permanence. We
use the term body-budget to represent the perceived fundamental properties of
solid objects. The model aims to support the emulation of learning from scratch
followed by substantiation through experience, irrespective of domain, in
real-world AI agents.
|
The hypermetric cone $HYP_n$ is the set of vectors $(d_{ij})_{1\leq i< j\leq
n}$ satisfying the inequalities $\sum_{1\leq i<j\leq n} b_ib_jd_{ij}\leq 0$
with $b_i\in\Z$ and $\sum_{i=1}^{n}b_i=1$. A Delaunay polytope of a lattice is
called extremal if the only affine bijective transformations of it into a
Delaunay polytope are the homotheties; there is a correspondence between such
Delaunay polytopes and extreme rays of $HYP_n$. We show that the unique
Delaunay polytopes of the root lattices $A_1$ and $E_6$ are the only extreme
Delaunay polytopes of dimension at most 6. We also describe the skeletons and
adjacency properties of $HYP_7$ and of its dual.
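The defining inequalities are easy to test numerically. The sketch below checks them, for small integer vectors $b$ with $\sum_i b_i = 1$, on a cut semimetric, a standard example of a hypermetric (the bound on the entries of $b$ is an illustrative truncation, since the definition quantifies over all integer vectors):

```python
import itertools

def hypermetric_lhs(d, b):
    """Left-hand side sum_{i<j} b_i b_j d[i][j] of a hypermetric inequality."""
    n = len(b)
    return sum(b[i] * b[j] * d[i][j] for i in range(n) for j in range(i + 1, n))

# Cut semimetric on 4 points: d(i, j) = |1_S(i) - 1_S(j)| for S = {0, 1}.
n, S = 4, {0, 1}
d = [[abs((i in S) - (j in S)) for j in range(n)] for i in range(n)]

values = [hypermetric_lhs(d, b)
          for b in itertools.product(range(-2, 3), repeat=n)
          if sum(b) == 1]
```

Every value is at most zero: writing $s = \sum_{i \in S} b_i$, the sum equals $s(1-s)$, which is nonpositive for every integer $s$.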
|
A fundamental dynamical constraint -- that fluctuation induced
charge-weighted particle flux must vanish -- can prevent instabilities from
accessing the free energy in the strong gradients characteristic of Transport
Barriers (TBs). Density gradients, when larger than a certain threshold, lead
to a violation of the constraint and emerge as a stabilizing force. This
mechanism, then, broadens the class of configurations (in magnetized plasmas)
where these high confinement states can be formed and sustained. The need for
velocity shear, the conventional agent for TB formation, is obviated. The most
important ramification of the constraint is to permit a charting out of the
domains conducive to TB formation and hence to optimally confined, fusion-worthy
states; the detailed investigation is conducted through new analytic methods
and extensive gyrokinetic simulations.
|
We investigate the influence of hydrogen on the electronic structure of a
binary transition metallic glass of V$_{80}$Zr$_{20}$. We examine the
hybridization between the hydrogen and metal atoms with the aid of hard x-ray
photoelectron spectroscopy. Combined with ab initio density functional theory,
we are able to show and predict the formation of $s$-$d$ hybridized energy
states. With optical transmission and resistivity measurements, we investigate
the emergent electronic properties formed out of those altered energy states,
and together with the theoretical calculations of the frequency-dependent
conductivity tensor, we qualitatively support the observed strong
wavelength dependence of the hydrogen-induced changes in the optical absorption
and a positive parabolic change in resistivity with hydrogen concentration.
|
We discuss loss of derivatives for degenerate vector fields obtained from
infinite type exponentially non-degenerate hypersurfaces of $\C^2$.
|
RGB-D object tracking has attracted considerable attention recently,
achieving promising performance thanks to the symbiosis between visual and
depth channels. However, given a limited amount of annotated RGB-D tracking
data, most state-of-the-art RGB-D trackers are simple extensions of
high-performance RGB-only trackers, without fully exploiting the underlying
potential of the depth channel in the offline training stage. To address the
dataset deficiency issue, a new RGB-D dataset named RGBD1K is released in this
paper. The RGBD1K contains 1,050 sequences with about 2.5M frames in total. To
demonstrate the benefits of training on a larger RGB-D data set in general, and
RGBD1K in particular, we develop a transformer-based RGB-D tracker, named SPT,
as a baseline for future visual object tracking studies using the new dataset.
The results of extensive experiments using the SPT tracker demonstrate the
potential of the RGBD1K dataset to improve the performance of RGB-D tracking,
inspiring future developments of effective tracker designs. The dataset and
codes will be available on the project homepage:
https://github.com/xuefeng-zhu5/RGBD1K.
|
The notion of a successful coupling of Markov processes, based on the idea
that both components of the coupled system ``intersect'' in finite time with
probability one, is extended to cover situations when the coupling is
unnecessarily Markovian and its components are only converging (in a certain
sense) to each other with time. Under these assumptions the unique ergodicity
of the original Markov process is proven. A price for this generalization is
the weak convergence to the unique invariant measure instead of the strong one.
Applying these ideas to infinite interacting particle systems we consider even
more involved situations when the unique ergodicity can be proven only for a
restriction of the original system to a certain class of initial distributions
(e.g. translation-invariant ones). Questions about the existence of invariant
measures with a given particle density are discussed as well.
|
We study the effect of regime switches on finite size Lyapunov exponents
(FSLEs) in determining the error growth rates and predictability of multiscale
systems. We consider a dynamical system involving slow and fast regimes and
switches between them. The surprising result is that due to the presence of
regimes the error growth rate can be a non-monotonic function of initial error
amplitude. In particular, troughs in the large scales of FSLE spectra are shown
to be a signature of slow regimes, whereas fast regimes are shown to cause
large peaks in the spectra where error growth rates far exceed those estimated
from the maximal Lyapunov exponent. We present analytical results explaining
these signatures and corroborate them with numerical simulations. We show
further that these peaks disappear in stochastic parametrizations of the fast
chaotic processes, and the associated FSLE spectra reveal that large scale
predictability properties of the full deterministic model are well approximated
whereas small scale features are not properly resolved.
|
The evolution of several physical and biological systems, ranging from
neutron transport in multiplying media to epidemics or population dynamics, can
be described in terms of branching exponential flights, a stochastic process
which couples a Galton-Watson birth-death mechanism with random spatial
displacements. Within this context, one is often called to assess the length
$\ell_V$ that the process travels in a given region $V$ of the phase space, or
the number of visits $n_V$ to this same region. In this paper, we address this
issue by resorting to the Feynman-Kac formalism, which allows characterizing
the full distribution of $\ell_V$ and $n_V$ and in particular deriving explicit
moment formulas. Some other significant physical observables associated with
$\ell_V$ and $n_V$, such as the survival probability, are discussed as well,
and results are illustrated by revisiting the classical example of the rod
model in nuclear reactor physics.
|
Systems of interacting fermions can give rise to ground states whose
correlations become effectively free-fermion-like in the thermodynamic limit,
as shown by Baxter for a class of integrable models that include the
one-dimensional XYZ spin-$\frac{1}{2}$ chain. Here, we quantitatively analyse
this behaviour by establishing the relation between system size and correlation
length required for the fermionic Gaussianity to emerge. Importantly, we
demonstrate that this behaviour can be observed through the applicability of
Wick's theorem and thus it is experimentally accessible. To establish the
relevance of our results to possible experimental realisations of XYZ-related
models, we demonstrate that the emergent Gaussianity is insensitive to weak
variations in the range of interactions, coupling inhomogeneities and local
random potentials.
|
The nuclear recoil effect on the $^2 P_{3/2}$-state $g$ factor of B-like ions
is calculated to first order in the electron-to-nucleus mass ratio $m/M$ in the
range $Z=18$--$92$. The calculations are performed by means of the $1/Z$
perturbation theory. Within the independent-electron approximation, the one-
and two-electron recoil contributions are evaluated to all orders in the
parameter $\alpha Z$ by employing a fully relativistic approach. The
interelectronic-interaction correction of first order in $1/Z$ is treated
within the Breit approximation. Higher orders in $1/Z$ are partially taken into
account by incorporating the screening potential into the zeroth-order
Hamiltonian. The most accurate to date theoretical predictions for the nuclear
recoil contribution to the bound-electron $g$ factor are obtained.
|
We consider the mechanism of elastic strains and stresses as the main
controlling factor of structure change under the influence of temperature,
magnetic field, and hydrostatic pressure. We take into account that the energy
of elastic deformation is commensurate with the energy of electric
interactions, which is much higher than that of the remaining, lower-energy
bonds. Moreover, elastic stress energies are long-range, which produces the
linearity in magnetization and bulk change. These regularities require a
fundamental understanding of the laws of interaction with respect to the
accepted interpretation of the quantum mechanical short-range forces that are
attributes of magnetism formation. Owing to the high sensitivity of electronic
and resonance properties to small changes of the structure, we were able to
establish a direct relation between elastic stresses and field-frequency
dependencies, as well as to analyze the evolution of the dynamics of phase
transitions and phase states. A cycle of studies of the influence of
hydrostatic pressure on the resonance properties is also presented. The
analysis of the interplay of magnetic, magneto-elastic and elastic energies
allowed us to identify the combinations of magneto-elastic interactions. The
role of elastic stresses in the linear changes of the magnetostriction,
magnetization, and magnetoelasticity of single-crystal magnetic semiconductors
is described in detail.
|
We define and prove the existence of the Quantum $A_{\infty}$-relations on
the Fukaya category of the elliptic curve, using the notion of the Feynman
transform of a modular operad, as defined by Getzler and Kapranov. Following
Barannikov, these relations may be viewed as defining a solution to the quantum
master equation of Batalin-Vilkovisky geometry.
|
Here we review empirical evidence for the possible existence of tachyons,
superluminal particles having m^2 < 0. The review considers searches for new
particles that might be tachyons, as well as evidence that neutrinos are
tachyons from data which may have been gathered for other purposes. Much of the
second half of the paper is devoted to the 3 + 3 neutrino model including a
tachyonic mass state, which has empirical support from a variety of areas.
Although this is primarily a review article, it contains several new results.
|
We report a first, complete lattice QCD calculation of the long-distance
contribution to the $K^+\to\pi^+\nu\bar{\nu}$ decay within the standard model.
This is a second-order weak process involving two four-Fermi operators that is
highly sensitive to new physics and is being studied by the NA62 experiment at
CERN. While much of this decay comes from perturbative, short-distance physics
there is a long-distance part, perhaps as large as the planned experimental
error, which involves nonperturbative phenomena. The calculation presented
here, with unphysical quark masses, demonstrates that this contribution can be
computed using lattice methods by overcoming three technical difficulties: (i)
a short-distance divergence that results when the two weak operators approach
each other, (ii) exponentially growing, unphysical terms that appear in
Euclidean, second-order perturbation theory, and (iii) potentially large
finite-volume effects. A follow-on calculation with physical quark masses and
controlled systematic errors will be possible with the next generation of
computers.
|
Let $\Omega$ be an unbounded, pseudoconvex domain in $\Bbb C^n$ and let
$\varphi$ be a $\mathcal C^2$-weight function plurisubharmonic on $\Omega$. We
show both necessary and sufficient conditions for existence and compactness of
a weighted $\bar\partial$-Neumann operator $N_\varphi$ on the space
$L^2_{(0,1)}(\Omega,e^{-\varphi})$ in terms of the eigenvalues of the complex
Hessian $(\partial ^2\varphi/\partial z_j\partial\bar z_k)_{j,k}$ of the
weight. We also give some applications to the unweighted $\bar\partial$-Neumann
problem on unbounded domains.
|
We study the fluctuational behavior of overdamped elastic filaments (e.g.,
strings or rods) driven by active matter which induces irreversibility. The
statistics of discrete normal modes are translated into the continuum of the
position representation which allows discernment of the spatial structure of
dissipation and fluctuational work done by the active forces. The mapping of
force statistics onto filament statistics leads to a generalized
fluctuation-dissipation relation which predicts the components of the
stochastic area tensor and its spatial proxy, the irreversibility field. We
illustrate the general theory with explicit results for a tensioned string
between two fixed endpoints. Plots of the stochastic area tensor components in
the discrete plane of mode pairs reveal how the active forces induce spatial
correlations of displacement along the filament. The irreversibility field
provides additional quantitative insight into the relative spatial
distributions of fluctuational work and dissipative response.
|
The Restricted Isometry Property (RIP) introduced by Cand\`es and Tao is a
fundamental property in compressed sensing theory. It says that if a sampling
matrix satisfies the RIP of a certain order proportional to the sparsity of the
signal, then the original signal can be reconstructed even if the sampling
matrix provides a sample vector which is much smaller in size than the original
signal. This short note addresses the problem of how a linear transformation
will affect the RIP. This problem arises from the consideration of extending
the sensing matrix and the use of compressed sensing in different bases. As an
application, the result is applied to the redundant dictionary setting in
compressed sensing.
|
In highly distributed environments such as cloud, edge and fog computing, the
application of machine learning for automating and optimizing processes is on
the rise. Machine learning jobs are frequently applied in streaming conditions,
where models are used to analyze data streams originating from e.g. video
streams or sensory data. Often the results for particular data samples need to
be provided in time, before the arrival of the next data. Thus, enough resources
must be provided to ensure just-in-time processing for the specific data
stream. This paper focuses on proposing a runtime modeling strategy for
containerized machine learning jobs, which enables the optimization and
adaptive adjustment of resources per job and component. Our black-box approach
assembles multiple techniques into an efficient runtime profiling method, while
making no assumptions about underlying hardware, data streams, or applied
machine learning jobs. The results show that our method is able to capture the
general runtime behaviour of different machine learning jobs already after a
short profiling phase.
|
The possibility of observing large signatures of new CP-violating and
flavor-changing Higgs-top couplings in future e^+e^- collider experiments
such as e^+e^- -> t bar-t h, t bar-t Z and e^+e^- -> t bar-c \nu_e bar-\nu_e, t
bar-c e^+ e^- is discussed. Such beyond-the-Standard-Model couplings can
occur already at tree level within a class of Two Higgs Doublet Models.
An extremely interesting feature of these reactions is therefore that the
CP-violating and flavor-changing effects are governed by tree-level dynamics.
These reactions may serve as unique avenues for searching for new
phenomena associated with Two Higgs Doublet Models and, as is shown here,
could yield statistically significant signals of new physics. We find that the
CP asymmetries in e^+e^- -> t bar-t h, t bar-t Z can reach tens of percent,
and the flavor-changing cross-section of e^+e^- -> t bar-c \nu_e bar-\nu_e is
typically a few fb, for a light Higgs mass around the electroweak scale.
|
Neural implicit surface representations have recently emerged as a popular
alternative to explicit 3D object encodings, such as polygonal meshes,
tabulated points, or voxels. While significant work has improved the geometric
fidelity of these representations, much less attention is given to their final
appearance. Traditional explicit object representations commonly couple the 3D
shape data with auxiliary surface-mapped image data, such as diffuse color
textures and fine-scale geometric details in normal maps that typically require
a mapping of the 3D surface onto a plane, i.e., a surface parameterization;
implicit representations, on the other hand, cannot be easily textured due to
the lack of a configurable surface parameterization. Inspired by this digital content
authoring methodology, we design a neural network architecture that implicitly
encodes the underlying surface parameterization suitable for appearance data.
As such, our model remains compatible with existing mesh-based digital content
with appearance data. Motivated by recent work that overfits compact networks
to individual 3D objects, we present a new weight-encoded neural implicit
representation that extends the capability of neural implicit surfaces to
enable various common and important applications of texture mapping. Our method
outperforms reasonable baselines and state-of-the-art alternatives.
|
A heat engine operating in the one-shot finite-size regime, where systems
composed of a small number of quantum particles interact with hot and cold
baths and are restricted to one-shot measurements, delivers fluctuating work.
Further, engines with smaller fluctuations produce less
deterministic work. Hence, the heat-to-work conversion efficiency stays well
below the Carnot efficiency. Here we overcome this limitation and attain Carnot
efficiency in the one-shot finite-size regime, where the engines allow the
working systems to simultaneously interact with two baths via the semi-local
thermal operations and reversibly operate in a one-step cycle. These engines
are superior to the ones considered earlier in work extraction efficiency, and
are even capable of converting heat into work by exclusively utilizing
inter-system correlations. We formulate a resource theory for quantum heat
engines to prove the results.
|
We introduce a family of quantum R\'enyi fidelities and discuss their
symmetry resolution. We express the symmetry-resolved fidelities as Fourier
transforms of charged fidelities, for which we derive exact formulas for
Gaussian states. These results also yield a formula for the total fidelities of
Gaussian states, which we expect to have applications beyond the scope of this
paper. We investigate the total and symmetry-resolved fidelities in the XX spin
chain, and focus on (i) fidelities between thermal states, and (ii) fidelities
between reduced density matrices at zero temperature. Both thermal and reduced
fidelities can detect the quantum phase transition of the XX spin chain.
Moreover, we argue that symmetry-resolved fidelities are sensitive to the inner
structure of the states. In particular, they can detect the phase transition
through the reorganisation of the charge sectors at the critical point. This is
a main feature of symmetry-resolved fidelities which we expect to be general.
also highlight that reduced fidelities can detect quantum phase transitions in
the thermodynamic limit.
|
Following a brief introduction we show that the observations obtained so far
with the Swift satellite begin to shed light on a variety of problems that
were left open following the excellent performance and related discoveries of
the Italian-Dutch BeppoSAX satellite. The XRT light curves show common
characteristics that are reasonably understood within the framework of the
fireball model. Unforeseen flares are however detected in a large fraction of
the GRBs observed, and the energy emitted by the brightest ones may be as much
as 85% of the total soft X-ray emission measured by XRT. These characteristics
seem to be common to long and short bursts.
|
We consider a Stratonovich heat equation in $(0,1)$ with a nonlinear
multiplicative noise driven by a trace-class Wiener process. First, the
equation is shown to have a unique mild solution. Secondly, convolutional rough
paths techniques are used to provide an almost sure continuity result for the
solution with respect to the solution of the 'smooth' equation obtained by
replacing the noise with an absolutely continuous process. This continuity
result is then exploited to prove weak convergence results based on Donsker and
Kac-Stroock type approximations of the noise.
|
We present high performance implementations of the QR and the singular value
decomposition of a batch of small matrices hosted on the GPU with applications
in the compression of hierarchical matrices. The one-sided Jacobi algorithm is
used for its simplicity and inherent parallelism as a building block for the
SVD of low rank blocks using randomized methods. We implement multiple kernels
based on the level of the GPU memory hierarchy in which the matrices can reside
and show substantial speedups against streamed cuSOLVER SVDs. The resulting
batched routine is a key component of hierarchical matrix compression, opening
up opportunities to perform H-matrix arithmetic efficiently on GPUs.
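The building block is straightforward to prototype on the CPU. Below is a hedged, unoptimized sketch of the one-sided Jacobi SVD (singular values and the right factor only); the GPU batching, memory-hierarchy kernels and randomized low-rank machinery of the paper are not reproduced:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi: rotate column pairs of U = A V until all columns
    are mutually orthogonal; their norms are then the singular values."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if off < tol:
            break
    return np.linalg.norm(U, axis=0), V

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 4))
sigma, V = one_sided_jacobi_svd(A)
```

Each pair rotation within a sweep is independent of the others, which is exactly the parallelism that batched GPU kernels can exploit.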
|
We investigate the accuracy of the parametric recovery of the line-of-sight
velocity distribution (LOSVD) of the stars in a galaxy, while working in pixel
space. Problems appear when the data have a low signal-to-noise ratio, or the
observed LOSVD is not well sampled by the data. We propose a simple solution
based on maximum penalized likelihood and we apply it to the common situation
in which the LOSVD is described by a Gauss-Hermite series. We compare different
techniques by extracting the stellar kinematics from observations of the barred
lenticular galaxy NGC 3384 obtained with the SAURON integral-field
spectrograph.
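For concreteness, a Gauss-Hermite LOSVD of the kind recovered by such methods can be written down directly. In the sketch below the coefficient values are arbitrary, and the $H_3$, $H_4$ normalization follows the van der Marel & Franx convention as quoted from memory, so treat it as an assumption:

```python
import numpy as np

def losvd_gauss_hermite(v, V, sigma, h3=0.0, h4=0.0):
    """L(v) = Gaussian(v; V, sigma) * [1 + h3*H3(y) + h4*H4(y)], y = (v-V)/sigma."""
    y = (v - V) / sigma
    H3 = (2.0 * np.sqrt(2.0) * y**3 - 3.0 * np.sqrt(2.0) * y) / np.sqrt(6.0)
    H4 = (4.0 * y**4 - 12.0 * y**2 + 3.0) / np.sqrt(24.0)
    gauss = np.exp(-0.5 * y**2) / (sigma * np.sqrt(2.0 * np.pi))
    return gauss * (1.0 + h3 * H3 + h4 * H4)

v = np.linspace(-2000.0, 2000.0, 4001)     # velocity grid in km/s
L = losvd_gauss_hermite(v, V=100.0, sigma=200.0, h3=0.05, h4=0.03)
norm = np.trapz(L, v)
```

Note that a nonzero $h_4$ shifts the integral of the series away from unity (here to $1 + h_4\sqrt{3/8}$), one of the reasons fitting the expansion in pixel space needs care.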
|
Using a different approach, we derive integral representations for the
Riemann zeta function and its generalizations (the Hurwitz zeta, $\zeta(-k,b)$,
the polylogarithm, $\mathrm{Li}_{-k}(e^m)$, and the Lerch transcendent,
$\Phi(e^m,-k,b)$), that coincide with their Abel-Plana expressions. A slight
variation of the approach leads to different formulae. We also present the
relations between each of these functions and their partial sums. This allows
one to obtain, for example, the Taylor series expansion of $H_{-k}(n)$ about $n=0$
(when $k$ is a positive integer, we obtain a finite Taylor series, which is
nothing but the Faulhaber formula). The method used requires evaluating the
limit of $\Phi\left(e^{2\pi i\,x},-2k+1,n+1\right)+\pi i\,x\,\Phi\left(e^{2\pi
i\,x},-2k,n+1\right)/k$ when $x$ goes to $0$, which in itself already makes for
an interesting problem.
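The finite Taylor series referred to above is the Faulhaber formula; as a quick numerical check, it can be evaluated exactly with rational arithmetic (a sketch using the $B_1 = +1/2$ Bernoulli convention; the helper names are ours):

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(n):
    """Bernoulli numbers B_0..B_n with the B_1 = +1/2 convention."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        # standard recurrence: sum_{j<=m} C(m+1, j) B_j = 0
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    if n >= 1:
        B[1] = Fraction(1, 2)
    return B

def faulhaber(n, k):
    """Closed form for the power sum H_{-k}(n) = 1^k + 2^k + ... + n^k."""
    B = bernoulli_plus(k)
    return sum(comb(k + 1, i) * B[i] * n**(k + 1 - i)
               for i in range(k + 1)) / (k + 1)
```

For instance, `faulhaber(100, 1)` reproduces Gauss's $1+2+\dots+100 = 5050$.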
|
This paper concerns the Cauchy problem of the barotropic compressible
Navier-Stokes equations on the whole two-dimensional space with vacuum as far
field density. In particular, the initial density can have compact support.
When the shear and the bulk viscosities are a positive constant and a power
function of the density respectively, it is proved that the two-dimensional
Cauchy problem of the compressible Navier-Stokes equations admits a unique
local strong solution provided the initial density decays not too slow at
infinity. Moreover, if the initial data satisfy some additional regularity and
compatibility conditions, the strong solution becomes a classical one.
|
We investigate the mass spectrum of the $ss \bar s \bar s$ tetraquark states
within the relativized quark model. By solving the Schr\"{o}dinger-like
equation with the relativized potential, the masses of the $S-$ and $P-$wave
$ss \bar s \bar s$ tetraquarks are obtained. The screening effects are also
taken into account. It is found that the observed resonant structure $X(2239)$
in the $e^+e^- \to K^+K^-$ process by BESIII Collaboration can be assigned as a
$P-$wave $1^{--}$ $ss \bar s \bar s$ tetraquark state. Furthermore, the
radiative transition and strong decay behaviors of this structure are also
estimated, which can provide helpful information for future experimental
searches.
|
Solitons formed through the one-dimensional mass-kink mechanism on the edges
of two-dimensional systems with non-trivial topology play an important role in
the emergence of higher-order (HO) topological phases. In this connection, the
existing work in time-reversal symmetric systems has focused on gapping the
edge Dirac cones in the presence of particle-hole symmetry, which is not suited
to the common spin-Chern insulators. Here, we address the emergence of edge
solitons in insulators with spin-Chern number $2$, in which the edge Dirac cones
are gapped by perturbations preserving time-reversal symmetry but breaking
spin-$U(1)$ symmetry. Through the mass-kink mechanism, we thus explain the
appearance of pairwise corner modes and predict the emergence of extra charges
around the corners. By tracing the evolution of the mass term along the edge,
we demonstrate that the in-gap corner modes and the associated extra charges
can be generated through the $S_z$-mixing spin-orbit coupling via the mass-kink
mechanism. We thus provide strong evidence that an even spin-Chern-number
insulator is an HO topological insulator with protected corner charges.
|
In this paper, we develop an elasto-viscoplastic (EVP) model for clay using
the non-associated flow rule. This is accomplished by using a modified form of
the Perzyna's overstressed EVP theory, the critical state soil mechanics, and
the multi-surface theory. The new model includes six parameters, five of which
are identical to those in the critical state soil mechanics model. The other
parameter is the generalized nonlinear secondary compression index. The EVP
model was implemented in a nonlinear coupled consolidation code using a
finite-element numerical algorithm (AFENA). We then tested the model for
different clays, such as the Osaka clay, the San Francisco Bay Mud clay, the
Kaolin clay, and the Hong Kong Marine Deposit clay. The numerical results show
good agreement with the experimental data.
|
We present an analysis of the static properties of heavy baryons at
next-to-leading order in the perturbative expansion of QCD. We obtain analytical
next-to-leading order three-loop results for the two-point correlators of
baryonic currents with one finite mass quark field for a variety of quantum
numbers of the baryonic currents. We consider both the massless limit and the
HQET limit of the correlator as special cases of the general finite mass
formula and find agreement with previous results. We present closed form
expressions for the moments of the spectral density. We determine the residues
of physical baryon states using sum rule techniques.
|
In a practical dialogue system, users may input out-of-domain (OOD) queries.
The Generalized Intent Discovery (GID) task aims to discover OOD intents from
OOD queries and extend them to the in-domain (IND) classifier. However, GID
only considers one stage of OOD learning, and needs to utilize the data in all
previous stages for joint training, which limits its wide application in
reality. In this paper, we introduce a new task, Continual Generalized Intent
Discovery (CGID), which aims to continuously and automatically discover OOD
intents from dynamic OOD data streams and then incrementally add them to the
classifier with almost no previous data, thus moving towards dynamic intent
recognition in an open world. Next, we propose a method called Prototype-guided
Learning with Replay and Distillation (PLRD) for CGID, which bootstraps new
intent discovery through class prototypes and balances new and old intents
through data replay and feature distillation. Finally, we conduct detailed
experiments and analysis to verify the effectiveness of PLRD and understand the
key challenges of CGID for future research.
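The prototype idea at the heart of such methods is simple: represent each intent by the mean of its feature vectors and classify by nearest prototype, so new intents can be added without retraining on old data. A minimal sketch (the replay and distillation components of PLRD are omitted; the class name is ours):

```python
import numpy as np

class PrototypeClassifier:
    """Nearest-prototype intent classifier supporting incremental addition
    of newly discovered intents."""

    def __init__(self):
        self.prototypes = {}

    def add_intent(self, name, features):
        # the class prototype is the mean embedding of its examples
        self.prototypes[name] = np.mean(features, axis=0)

    def predict(self, x):
        # assign to the intent whose prototype is closest in feature space
        return min(self.prototypes,
                   key=lambda k: np.linalg.norm(x - self.prototypes[k]))
```

Because adding an intent only stores one vector, the classifier extends to a new OOD stage with almost no data from previous stages, which is the setting CGID targets.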
|
Accuracy and interpretability are two essential properties for a crime
prediction model. Because of the adverse effects that the crimes can have on
human life, economy and safety, we need a model that can predict future
occurrence of crime as accurately as possible so that early steps can be taken
to avoid the crime. On the other hand, an interpretable model reveals the
reason behind a model's prediction, ensures its transparency and allows us to
plan the crime prevention steps accordingly. The key challenge in developing
the model is to capture the non-linear spatial dependency and temporal patterns
of a specific crime category while keeping the underlying structure of the
model interpretable. In this paper, we develop AIST, an Attention-based
Interpretable Spatio-Temporal Network for crime prediction. AIST models the
dynamic spatio-temporal correlations for a crime category based on past crime
occurrences, external features (e.g., traffic flow and point of interest (POI)
information) and recurring trends of crime. Extensive experiments show the
superiority of our model in terms of both accuracy and interpretability using
real datasets.
|
While quantum mechanics precludes the perfect knowledge of so-called
"conjugate" variables, such as time and frequency, we discuss the importance of
compromising to retain a fair knowledge of their combined values. In the case
of light, we show how time and frequency photon correlations allow us to
identify a new type of photon emission, which can be used to design quantum
sources in which we can choose the distribution in time and energy of the
emitted photons.
|
Many low-light enhancement methods ignore intensive noise in original images.
As a result, they often simultaneously enhance the noise as well. Furthermore,
extra denoising procedures adopted by most methods ruin the details. In this
paper, we introduce a joint low-light enhancement and denoising strategy, aimed
at obtaining well-enhanced low-light images while getting rid of the inherent
noise issue simultaneously. The proposed method performs Retinex model based
decomposition in a successive sequence, which sequentially estimates a
piece-wise smoothed illumination and a noise-suppressed reflectance. After
getting the illumination and reflectance map, we adjust the illumination layer
and generate our enhancement result. In this noise-suppressed sequential
decomposition process we enforce the spatial smoothness on each component and
skillfully make use of weight matrices to suppress the noise and improve the
contrast. Results of extensive experiments demonstrate the effectiveness and
practicability of our method. It performs well for a wide variety of images,
and achieves better or comparable quality compared with the state-of-the-art
methods.
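The decomposition-then-adjustment pipeline can be illustrated with a much-simplified Retinex sketch: estimate a smooth illumination, divide it out to obtain a reflectance, and recombine with a gamma-adjusted illumination. The box-filter smoothing and parameters are ours for illustration; the paper's sequential noise-suppressed estimation with weight matrices is not reproduced:

```python
import numpy as np

def retinex_enhance(img, gamma=0.5, k=15):
    """Toy Retinex-style enhancement of an RGB image with values in [0, 1]."""
    eps = 1e-3
    illum = img.max(axis=2)                    # initial illumination guess
    kernel = np.ones(k) / k                    # separable box smoothing
    illum = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, illum)
    illum = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, illum)
    illum = np.clip(illum, eps, 1.0)           # piece-wise smooth illumination
    reflect = img / illum[..., None]           # reflectance = image / illumination
    enhanced = reflect * (illum ** gamma)[..., None]  # brighten the illumination
    return np.clip(enhanced, 0.0, 1.0)
```

Adjusting only the illumination layer is what lets such methods brighten dark regions without amplifying the reflectance detail uniformly.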
|
Oscillometric monitors are the most common automated blood pressure (BP)
measurement devices used in non-specialist settings. However, their accuracy
and reliability vary under different settings and for different age groups and
health conditions. A main limitation of the existing oscillometric monitors is
their underlying analysis algorithms that are unable to fully capture the BP
information encoded in the pattern of the recorded oscillometric pulses. In
this paper, we propose a new 2D oscillometric data representation that enables
a full characterization of arterial system and empowers the application of deep
learning to extract the most informative features correlated with BP. A hybrid
convolutional-recurrent neural network was developed to capture the
oscillometric pulses morphological information as well as their temporal
evolution over the cuff deflation period from the 2D structure, and estimate
BP. The performance of the proposed method was verified on three oscillometric
databases collected from the wrist and upper arms of 245 individuals. It was
found that it achieves a mean error and a standard deviation of error of as low
as 0.08 mmHg and 2.4 mmHg in the estimation of systolic BP, and 0.04 mmHg and
2.2 mmHg in the estimation of diastolic BP, respectively. Our proposed method
outperformed the state-of-the-art techniques and satisfied the current
international standards for BP monitors by a wide margin. The proposed method
shows promise toward robust and objective BP estimation in a variety of
patients and monitoring situations.
|
A parabolic subalgebra $\mathfrak{p}$ of a complex semisimple Lie algebra
$\mathfrak{g}$ is called a parabolic subalgebra of abelian type if its
nilpotent radical is abelian. In this paper, we provide a complete
characterization of the parameters for scalar generalized Verma modules
attached to parabolic subalgebras of abelian type such that the modules are
reducible. The proofs use Jantzen's simplicity criterion, as well as the
Enright-Howe-Wallach classification of unitary highest weight modules.
|
The paper describes practical work for students that visually clarifies the
mechanism of the Monte Carlo method as applied to approximating the value of Pi.
Considering a traditional quadrant (circular sector) inscribed in a square, we
demonstrate an original algorithm for generating random points on paper: a paper
blank is arbitrarily torn into small pieces (the first experiment). In a similar
way, a second experiment (with a preliminary staining procedure using bright
colors) can be used to demonstrate the quadratic dependence of the area of a
circle on its radius. Tearing up paper as a random-sampling procedure can also
be applied to other teaching problems in physics.
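The statistics behind the paper-tearing experiment are the classic quadrant estimator, which can be sketched in a few lines (the function name and point count are ours):

```python
import random

def estimate_pi(n_points, seed=0):
    """Approximate Pi by scattering random points over the unit square.

    The fraction landing inside the inscribed quadrant (x^2 + y^2 <= 1)
    approaches the area ratio Pi/4 -- the same statistics as counting torn
    paper pieces that land inside the circular sector.
    """
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_points)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * inside / n_points
```

The error shrinks like $1/\sqrt{N}$, so quadrupling the number of pieces roughly halves the uncertainty.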
|
We present new BeppoSAX LECS, MECS and PDS observations of five
lobe-dominated, broad-line active galactic nuclei selected from the 2-Jy sample
of southern radio sources. These include three radio quasars and two broad-line
radio galaxies. ROSAT PSPC data, available for all the objects, are also used
to better constrain the spectral shape in the soft X-ray band. The collected
data cover the energy range 0.1 - 10 keV, reaching ~ 50 keV for one source
(Pictor A). The main result from the spectral fits is that all sources have a
hard X-ray spectrum with energy index alpha_x ~ 0.75 in the 2 - 10 keV range.
This is at variance with the situation at lower energies where these sources
exhibit steeper spectra. Spectral breaks ~ 0.5 at 1 - 2 keV characterize in
fact the overall X-ray spectra of our objects. The flat, high-energy slope is
very similar to that displayed by flat-spectrum/core-dominated quasars, which
suggests that the same emission mechanism (most likely inverse Compton)
produces the hard X-ray spectra in both classes. Finally, a (weak) thermal
component is also present at low energies in the two broad-line radio galaxies
included in our study.
|
Visual reasoning, as a prominent research area, plays a crucial role in AI by
facilitating concept formation and interaction with the world. However, current
works are usually carried out separately on small datasets, thus lacking
generalization ability. Through rigorous evaluation of diverse benchmarks, we
demonstrate the shortcomings of existing ad-hoc methods in achieving
cross-domain reasoning and their tendency to data bias fitting. In this paper,
we revisit visual reasoning with a two-stage perspective: (1) symbolization and
(2) logical reasoning given symbols or their representations. We find that the
reasoning stage is better at generalization than symbolization. Thus, it is
more efficient to implement symbolization via separated encoders for different
data domains while using a shared reasoner. Given our findings, we establish
design principles for visual reasoning frameworks following the separated
symbolization and shared reasoning. The proposed two-stage framework achieves
impressive generalization ability on various visual reasoning tasks, including
puzzles, physical prediction, and visual question answering (VQA), encompassing
both 2D and 3D modalities. We believe our insights will pave the way for
generalizable visual reasoning.
|
A search is presented for pair production of a new heavy quark ($Q$) that
decays into a $W$ boson and a light quark ($q$) in the final state where one
$W$ boson decays leptonically (to an electron or muon plus a neutrino) and the
other $W$ boson decays hadronically. The analysis is performed using an
integrated luminosity of 20.3 fb$^{-1}$ of $pp$ collisions at $\sqrt{s} = 8$
TeV collected by the ATLAS detector at the LHC. No evidence of $Q\bar{Q}$
production is observed. New chiral quarks with masses below 690 GeV are
excluded at 95% confidence level, assuming BR$(Q\to Wq)=1$. Results are also
interpreted in the context of vectorlike quark models, resulting in the limits
on the mass of a vectorlike quark in the two-dimensional plane of BR$(Q\to Wq)$
versus BR$(Q\to Hq)$.
|
What happens when Alice falls into a black hole? In spite of recent
challenges by Almheiri et al. -- the "firewall" hypothesis -- the consensus on
this question tends to remain "nothing special". Here I argue that something
rather special can happen near the horizon, already at the semiclassical level:
besides the standard Hawking outgoing modes, Alice can record a quasi-thermal
spectrum of ingoing modes, whose temperature and intensity diverge as Alice's
Killing energy $E$ goes to zero. I suggest that this effect can be thought of
in terms of a "horizon-infinity duality", which relates the perception of
near-horizon and asymptotic geodesic observers -- the two faces of Hawking
radiation.
|
A Cantor action is a minimal equicontinuous action of a countably generated
group G on a Cantor space X. Such actions are also called generalized odometers
in the literature. In this work, we introduce two new conjugacy invariants for
Cantor actions, the stabilizer limit group and the centralizer limit group. An
action is wild if the stabilizer limit group is an increasing sequence of
stabilizer groups without bound, and otherwise is said to be stable if this
group chain is bounded. For Cantor actions by a finitely generated group G, we
prove that stable actions satisfy a rigidity principle, and furthermore show
that the wild property is an invariant of the continuous orbit equivalence
class of the action.
A Cantor action is said to be dynamically wild if it is wild, and the
centralizer limit group is a proper subgroup of the stabilizer limit group.
This property is also a conjugacy invariant, and we show that a Cantor action
with a non-Hausdorff element must be dynamically wild. We then give examples of
wild Cantor actions with non-Hausdorff elements, using recursive methods from
Geometric Group Theory to define actions on the boundaries of trees.
|
We investigate the spatial Public Goods Game in the presence of
fitness-driven and conformity-driven agents. This framework usually considers
only the former type of agents, i.e., agents that tend to imitate the strategy
of their fittest neighbors. However, whenever we study social systems, the
evolution of a population might be affected also by social behaviors as
conformism, stubbornness, altruism, and selfishness. Although the term
evolution can assume different meanings depending on the considered domain,
here it corresponds to the set of processes that lead a system towards an
equilibrium or a steady-state. We map fitness to the agents' payoff so that
richer agents are those most imitated by fitness-driven agents, while
conformity-driven agents tend to imitate the strategy assumed by the majority
of their neighbors. Numerical simulations aim to identify the nature of the
transition as the relative density of conformity-driven agents in the
population varies, and to study the nature of the related equilibria.
Remarkably, we find that conformism generally fosters ordered cooperative
phases and may also lead to bistable behaviors.
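The two update rules can be sketched in a toy synchronous simulation. The payoff scheme, lattice size, and enhancement factor below are illustrative assumptions, not the paper's model; only the two imitation rules follow the description above:

```python
import numpy as np

def simulate_pgg(L=20, r=3.5, frac_conform=0.5, steps=100, seed=0):
    """Toy spatial Public Goods Game on an L x L torus with two agent types.

    Fitness-driven agents imitate their fittest neighbour; conformity-driven
    agents adopt the local majority strategy. Returns the final fraction of
    cooperators (strategy 1).
    """
    rng = np.random.default_rng(seed)
    strat = rng.integers(0, 2, size=(L, L))          # 1 = cooperator
    conform = rng.random((L, L)) < frac_conform      # agent-type mask

    def rolls(M):  # the four von Neumann neighbours on the torus
        return [np.roll(M, 1, 0), np.roll(M, -1, 0),
                np.roll(M, 1, 1), np.roll(M, -1, 1)]

    for _ in range(steps):
        nsum = sum(rolls(strat))
        # payoff of the 5-member group centred on each site, shared equally
        group_pay = r * (strat + nsum) / 5.0
        # each agent joins its own group and its 4 neighbours' groups,
        # and a cooperator pays a unit cost in each of the 5 groups
        payoff = group_pay + sum(rolls(group_pay)) - 5.0 * strat
        # fitness-driven rule: copy the fittest neighbour if it does better
        npay = np.stack(rolls(payoff))
        nstr = np.stack(rolls(strat))
        best = npay.argmax(axis=0)
        best_strat = np.take_along_axis(nstr, best[None], axis=0)[0]
        fit_next = np.where(npay.max(axis=0) > payoff, best_strat, strat)
        # conformity-driven rule: adopt the 5-site neighbourhood majority
        conf_next = ((strat + nsum) >= 3).astype(strat.dtype)
        strat = np.where(conform, conf_next, fit_next)
    return strat.mean()
```

Sweeping `frac_conform` in such a simulation is the kind of experiment used to locate the transition described above.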
|
In today's data and information-rich world, summarization techniques are
essential in harnessing vast text to extract key information and enhance
decision-making and efficiency. In particular, topic-focused summarization is
important due to its ability to tailor content to specific aspects of an
extended text. However, this usually requires extensive labelled datasets and
considerable computational power. This study introduces a novel method,
Augmented-Query Summarization (AQS), for topic-focused summarization without
the need for extensive labelled datasets, leveraging query augmentation and
hierarchical clustering. This approach facilitates the transferability of
machine learning models to the task of summarization, circumventing the need
for topic-specific training. Through real-world tests, our method demonstrates
the ability to generate relevant and accurate summaries, showing its potential
as a cost-effective solution in data-rich environments. This innovation paves
the way for broader application and accessibility in the field of topic-focused
summarization technology, offering a scalable, efficient method for
personalized content extraction.
|
Phase covariant qubit dynamics describes an evolution of a two-level system
under simultaneous action of pure dephasing, energy dissipation, and energy
gain with time-dependent rates $\gamma_z(t)$, $\gamma_-(t)$, and $\gamma_+(t)$,
respectively. Non-negative rates correspond to completely positive divisible
dynamics, which can still exhibit such peculiarities as non-monotonicity of
populations for any initial state. We find a set of quantum channels attainable
in the completely positive divisible phase covariant dynamics and show that
this set coincides with the set of channels attainable in semigroup phase
covariant dynamics. We also construct new examples of eternally indivisible
dynamics with $\gamma_z(t) < 0$ for all $t > 0$ that is neither unital nor
commutative. Using the quantum Sinkhorn theorem, we derive for the first time a
restriction on the decoherence rates under which the dynamics is positive
divisible, namely, $\gamma_{\pm}(t) \geq 0$, $\sqrt{\gamma_+(t) \gamma_-(t)} +
2 \gamma_z(t) > 0$. Finally, we consider phase covariant convolution master
equations and find a class of admissible memory kernels that guarantee complete
positivity of the dynamical map.
|
Let $C_\varphi$ be a composition operator acting on the Hardy space of the
unit disc $H^p$ ($1\leq p < \infty$), which is embedded in a $C_0$-semigroup of
composition operators $\mathcal{T}=(C_{\varphi_t})_{t\geq 0}.$ We investigate
whether the commutant or the bicommutant of $C_\varphi$, or the commutant of
the semigroup $\mathcal{T}$, are isomorphic to subalgebras of continuous
functions defined on a connected set. In particular, it allows us to derive
results about the existence of non-trivial idempotents (and non-trivial
orthogonal projections if $p=2$) lying in such sets. Our methods also provide
results concerning the minimality of the commutant and the double commutant
property, in the sense that they coincide with the closure in the weak operator
topology of the unital algebra generated by the operator. Moreover, some
consequences regarding the extended eigenvalues and the strong compactness of
such operators are derived. This extends previous results of Lacruz,
Le\'on-Saavedra, Petrovic and Rodr\'iguez-Piazza, Fern\'andez-Valles and Lacruz
and Shapiro on linear fractional composition operators acting on $H^2$.
|
Medium resolution ($\Delta \nu$ ~ 3 GHz) laser-induced fluorescence (LIF)
excitation spectra of a rotationally cold sample of YbOH in the 17300-17950
cm$^{-1}$ range have been recorded using two-dimensional (excitation and
dispersed fluorescence) spectroscopy. High resolution ($\Delta \lambda$ ~ 0.65
nm) dispersed laser induced fluorescence (DLIF) spectra and radiative decay
curves of numerous bands detected in the medium resolution LIF excitation
spectra were recorded. The vibronic energy levels of the $\tilde{X} \,
^2\Sigma^+$ state were predicted using a discrete variable representation
approach and compared with observations. The radiative decay curves were
analyzed to produce fluorescence lifetimes. DLIF spectra resulting from high
resolution ($\Delta \nu$ < 10 MHz) LIF excitation of individual low-rotational
lines in the $\tilde{A} \, ^2\Pi_{1/2}(0,0,0) - \tilde{X} \,
^2\Sigma^+(0,0,0)$, $\tilde{A} \, ^2\Pi_{1/2}(1,0,0) - \tilde{X} \,
^2\Sigma^+(0,0,0)$, and $[17.73]\Omega=0.5(0,0,0) - \tilde{X} \, ^2\Sigma^+(0,0,0)$
bands were also recorded. The DLIF spectra were analyzed to determine branching
ratios which were combined with radiative lifetimes to obtain transition dipole
moments. The implications for laser cooling and trapping of YbOH are discussed.
|
We relate the anomalous noise found experimentally in spin ice to the
subdiffusion of magnetic monopoles. Because monopoles are emergent particles,
they do not move in a structureless vacuum. Rather, the underlying spin
ensemble filters the thermal white noise, leading to non-trivial coevolution.
Thus, monopoles can be considered as random walkers under the effect of
stochastic forces only as long as those are not trivially white, but instead
subsume the evolution of the spin vacuum. Via this conceptualization, we
conjecture relations between the color of the noise and other observables, such
as relaxation time, monopole density, the dynamic exponent, and the order of
the annihilation reaction, which lead us to introduce spin ice specific
critical exponents in a neighborhood of the ice manifold.
|
A bang-bang optimal control method is proposed for glow discharge plasma
actuators, taking into account practical issues such as limited actuation
states with instantaneously varied aerodynamic control performance. Hence, the main
contribution of this Note is to integrate flight control with active flow
control in particular for plasma actuators. Flow control effects were examined
in wind tunnel experiments, which show that the plasma authority for flow
control is limited. Flow control effects are only obvious at pitch angles near
stall. However, flight control simulations suggest that even those small
plasma-induced roll moments can satisfactorily fulfill the maneuver tasks and
meet flight quality specifications. In addition, the disturbance from volatile
plasma-induced roll moments can be rejected. Hence, the proposed bang-bang
control method is a promising candidate control design methodology for
plasma actuators.
|
In many applications, image denoising and enhancement are essential processes
in the presence of colored noise, such as in underwater imaging. The power
spectral density of the noise varies within a definite frequency range, and the
noise autocorrelation function is not a delta function, so underwater noise is
characterized as colored noise. In this paper, a novel image denoising method
is proposed using multi-level noise power estimation in the discrete wavelet
transform with different basis functions. Peak signal-to-noise ratio (PSNR) and
mean squared error are the performance measures on which the results of this
study are based. The results for various wavelet bases, such as Daubechies
(db), biorthogonal (bior.) and symlet (sym.), show that the denoising process
used in this method produces more prominent images and higher PSNR values than
other methods.
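The multi-level estimate-and-threshold idea can be sketched in NumPy alone. This sketch uses a Haar wavelet rather than the db/bior/sym bases discussed above, and the standard median-absolute-deviation noise-power estimate per level; all function names are ours:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform of a 1-D signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, levels=3, k=3.0):
    """Multi-level soft thresholding: estimate the noise power at each level
    from the median absolute detail coefficient, then shrink toward zero.
    Signal length must be divisible by 2**levels."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        sigma = np.median(np.abs(d)) / 0.6745          # robust noise estimate
        details.append(np.sign(d) * np.maximum(np.abs(d) - k * sigma, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

Estimating the noise power separately at each level is what lets the scheme adapt to colored noise, whose power varies with frequency band.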
|
In this paper we discuss a causal network approach to describing relativistic
quantum mechanics. Each vertex on the causal net represents a possible point
event or particle observation. By constructing the simplest causal net based on
Reichenbach-like conjunctive forks in proper time we can exactly derive the 1+1
dimension Dirac equation for a relativistic fermion and correctly model quantum
mechanical statistics. Symmetries of the net provide various quantum mechanical
effects such as quantum uncertainty and wavefunction, phase, spin, negative
energy states and the effect of a potential. The causal net can be embedded in
3+1 dimensions and is consistent with the conventional Dirac equation. In the
low velocity limit the causal net reduces to the Schr\"{o}dinger equation and
Pauli equation for an electromagnetic field. Extending to different momentum
states the net is compatible with the Feynman path integral approach to quantum
mechanics that allows calculation of well known quantum phenomena such as
diffraction.
|
We introduce a model system of stochastic entities, called 'rowers', which
captures some essential features of the behavior of real cilia. We introduce
and discuss the problem of symmetry breaking for these objects and its
connection with the onset of macroscopic, directed flow in the fluid. We
perform a mean-field-like calculation showing that hydrodynamic interaction may
provide the symmetry breaking mechanism and the onset of fluid flow. Finally, we
discuss the problem of the metachronal wave in a stochastic context.
|
Nowadays, smartphones are not used for communication only. Smartphones are
equipped with many sensors that can be used for different purposes. For
example, inertial sensors have been used extensively in recent years for
measuring and monitoring performance in many different applications.
Typically, data from these sensors are used to estimate the smartphone's
orientation, and there are many applications that can utilize these data. This
paper presents an algorithm that uses inertial sensor data to assess vehicle
passenger comfort.
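A common building block for such comfort assessment is the running RMS of the acceleration trace, which ISO 2631-style ratings are built on. A minimal sketch, assuming an unweighted vertical-acceleration signal and the approximate ISO 2631-1 comfort bands (the paper's actual algorithm is not specified here):

```python
import numpy as np

def running_rms(accel, fs, window_s=1.0):
    """Running RMS of an acceleration trace (m/s^2) over a sliding window."""
    n = max(1, int(window_s * fs))
    sq = np.convolve(accel**2, np.ones(n) / n, mode="valid")
    return np.sqrt(sq)

def comfort_label(rms_peak):
    """Map a peak RMS value to coarse ISO 2631-1 comfort bands (approximate;
    the standard's bands overlap, so these cut points are a simplification)."""
    bands = [(0.315, "not uncomfortable"), (0.63, "a little uncomfortable"),
             (1.0, "fairly uncomfortable"), (1.6, "uncomfortable"),
             (2.5, "very uncomfortable")]
    for limit, label in bands:
        if rms_peak < limit:
            return label
    return "extremely uncomfortable"
```

In practice the raw accelerometer signal would first be rotated into the vehicle frame using the estimated smartphone orientation, then frequency-weighted before the RMS step.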
|
We show that duals of certain low-density parity-check (LDPC) codes, when
used in a standard coset coding scheme, provide strong secrecy over the binary
erasure wiretap channel (BEWC). This result hinges on a stopping set analysis
of ensembles of LDPC codes with block length $n$ and girth $\geq 2k$, for some
$k \geq 2$. We show that if the minimum left degree of the ensemble is
$l_\mathrm{min}$, the expected probability of block error is
$\mathcal{O}(\frac{1}{n^{\lceil l_\mathrm{min} k /2 \rceil - k}})$ when the erasure
probability $\epsilon < \epsilon_\mathrm{ef}$, where $\epsilon_\mathrm{ef}$
depends on the degree distribution of the ensemble. As long as $l_\mathrm{min}
> 2$ and $k > 2$, the dual of this LDPC code provides strong secrecy over a
BEWC of erasure probability greater than $1 - \epsilon_\mathrm{ef}$.
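The coset coding scheme itself is simple to sketch: the secret message is the syndrome, and the transmitter sends a uniformly random vector from the corresponding coset. The following toy GF(2) example assumes a small full-row-rank parity-check matrix, not the LDPC duals analyzed above; all helper names are ours:

```python
import numpy as np

def gf2_row_reduce(M):
    """Gauss-Jordan elimination over GF(2); returns (reduced M, pivot columns)."""
    M = M.copy() % 2
    pivots, r = [], 0
    for c in range(M.shape[1]):
        hits = np.nonzero(M[r:, c])[0]
        if hits.size == 0:
            continue
        M[[r, r + hits[0]]] = M[[r + hits[0], r]]   # bring a pivot into row r
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                        # eliminate column c elsewhere
        pivots.append(c)
        r += 1
        if r == M.shape[0]:
            break
    return M, pivots

def gf2_solve(H, s):
    """One particular x with H x = s (mod 2); assumes H has full row rank."""
    R, piv = gf2_row_reduce(np.concatenate([H, s.reshape(-1, 1)], axis=1))
    x = np.zeros(H.shape[1], dtype=int)
    for row, col in enumerate(piv):
        x[col] = R[row, -1]
    return x

def gf2_nullspace(H):
    """Basis (as rows) of the null space of H over GF(2)."""
    R, piv = gf2_row_reduce(H)
    basis = []
    for f in [c for c in range(H.shape[1]) if c not in piv]:
        v = np.zeros(H.shape[1], dtype=int)
        v[f] = 1
        for row, col in enumerate(piv):
            v[col] = R[row, f]
        basis.append(v)
    return np.array(basis)

def coset_encode(H, s, rng):
    """Hide message s in the coset choice: transmit a random x with H x = s."""
    x0 = gf2_solve(H, s)
    ns = gf2_nullspace(H)
    return (x0 + rng.integers(0, 2, size=ns.shape[0]) @ ns) % 2

def coset_decode(H, x):
    """The legitimate receiver recovers the message as the syndrome of x."""
    return (H @ x) % 2
```

The eavesdropper's uncertainty comes from the random codeword added inside the coset; the secrecy analysis above quantifies when this randomization suffices on the BEWC.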
|
The accelerating universe is closely related to today's version of the
cosmological constant problem: the fine-tuning and coincidence problems. We
show how successfully the scalar-tensor theory, a rather rigid theoretical
idea, provides a simple and natural way to understand why today's observed
cosmological constant is small simply because we are cosmologically old,
without extreme fine-tuning of theoretical parameters.
|