As a promising technique, reconfigurable intelligent surfaces (RISs) exhibit
tremendous potential for high-accuracy positioning. In this paper, we
investigate the multi-user localization and tracking problem in a
multi-RIS-assisted system. In particular, we incorporate the statistical
spatiotemporal correlation of multi-user locations and develop a general
spatiotemporal Markov random field (ST-MRF) model to capture multi-user
dynamic motion states. To achieve superior performance, a novel multi-user
tracking algorithm based on Bayesian inference is proposed to effectively
exploit the correlation among users. In addition, since the RIS configuration
is critical to tracking performance, we further propose a predictive RIS
beamforming optimization scheme via semidefinite relaxation (SDR). Finally,
we confirm that the proposed strategy, which alternates between the tracking
algorithm and RIS optimization, achieves significant performance gains over
benchmark schemes from pioneering work.
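As background for the SDR step, the sketch below is not the paper's exact formulation; the channel matrix R, the element count N and the Gaussian-randomization recovery are illustrative assumptions. It lifts the unit-modulus RIS reflection vector w into W = w w^H, drops the rank-one constraint, solves the resulting semidefinite program with cvxpy, and recovers a feasible phase configuration by randomization.

import numpy as np
import cvxpy as cp

# Hypothetical passive-beamforming SDR for N RIS elements.
# R is a made-up Hermitian "channel" matrix; w holds the unit-modulus
# RIS reflection coefficients we want to optimize.
rng = np.random.default_rng(0)
N = 16
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T                      # placeholder positive semidefinite matrix

# Lift w w^H -> W and drop the rank-1 constraint (the relaxation step).
W = cp.Variable((N, N), hermitian=True)
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(R @ W))),
                  [W >> 0, cp.diag(W) == 1])
prob.solve(solver=cp.SCS)

# Gaussian randomization: draw candidates with covariance W and project
# onto the unit-modulus constraint, keeping the best one.
L = np.linalg.cholesky(W.value + 1e-6 * np.eye(N))
best_val = -np.inf
for _ in range(100):
    z = L @ (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    w = np.exp(1j * np.angle(z))
    best_val = max(best_val, np.real(w.conj() @ R @ w))
print("SDR upper bound:", prob.value, "randomized value:", best_val)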
|
In this paper we consider a generalization of zwei-dreibein gravity with a
Chern-Simons term associated with a constraint term that fixes the torsion. We
count the local degrees of freedom of this model using a Hamiltonian analysis
and show that, in contrast to the usual GZDG, which has 2 bulk local degrees of
freedom, our model has 3 propagating modes. Then, by inspecting the quadratic
Lagrangian, we determine that these propagating modes are 3 massive gravitons
with different masses. Finally, we obtain an AdS wave solution as an example
solution for this model.
|
An inverse elastic source problem with sparse measurements is considered. A
generic mathematical framework is proposed which incorporates a
low-dimensional manifold regularization in the conventional source
reconstruction algorithms, thereby enhancing their performance with sparse
datasets. It is rigorously established that the proposed framework is
equivalent to the so-called \emph{deep convolutional framelet expansion} in the
machine learning literature for inverse problems. Apposite numerical examples
are furnished to substantiate the efficacy of the proposed framework.
|
We study rank $r$ cohomological Donaldson-Thomas theory on a toric Calabi-Yau
orbifold of $\mathbb{C}^4$ by a finite abelian subgroup $\mathsf\Gamma$ of
$\mathsf{SU}(4)$, from the perspective of instanton counting in cohomological
gauge theory on a noncommutative crepant resolution of the quotient
singularity. We describe the moduli space of noncommutative instantons on
$\mathbb{C}^4/\mathsf{\Gamma}$ and its generalized ADHM parametrization. Using
toric localization, we compute the orbifold instanton partition function as a
combinatorial series over $r$-vectors of $\mathsf\Gamma$-coloured solid
partitions. When the $\mathsf\Gamma$-action fixes an affine line in
$\mathbb{C}^4$, we exhibit the dimensional reduction to rank $r$
Donaldson-Thomas theory on the toric Kahler three-orbifold
$\mathbb{C}^3/\mathsf{\Gamma}$. Based on this reduction and explicit
calculations, we conjecture closed infinite product formulas, in terms of
generalized MacMahon functions, for the instanton partition functions on the
orbifolds $\mathbb{C}^2/\mathbb{Z}_n\times\mathbb{C}^2$ and
$\mathbb{C}^3/(\mathbb{Z}_2\times\mathbb{Z}_2)\times\mathbb{C}$, finding
perfect agreement with new mathematical results of Cao, Kool and Monavari.
|
Since the discovery of the first radio pulsar fifty years ago, the population
of neutron stars in our Galaxy has grown to over 2,600. A handful of these
sources, exclusively seen in X-rays, show properties that are not observed in
normal pulsars. Despite their scarcity, they are key to understanding aspects
of the neutron star phenomenology and evolution. The forthcoming all-sky survey
of eROSITA will unveil the X-ray faint end of the neutron star population at
unprecedented sensitivity; therefore, it has the unique potential to constrain
evolutionary models and advance our understanding of the sources that are
especially silent in the radio and $\gamma$-ray regimes. In this contribution I
discuss the expected role of eROSITA, and the challenges it will face, in
probing the Galactic neutron star population.
|
Using a Zariski topology associated with finite field extensions, we give new
proofs of, and generalize, the primitive and normal basis theorems.
|
NASA's Interface Region Imaging Spectrograph (IRIS) space mission will study
how the solar atmosphere is energized. IRIS contains an imaging spectrograph
that covers the Mg II h&k lines as well as a slit-jaw imager centered at Mg II
k. Understanding the observations will require forward modeling of Mg II h&k
line formation from 3D radiation-MHD models. This paper is the first in a
series where we undertake this forward modeling. We discuss the atomic physics
pertinent to h&k line formation, present a quintessential model atom that can
be used in radiative transfer computations and discuss the effect of partial
redistribution (PRD) and 3D radiative transfer on the emergent line profiles.
We conclude that Mg II h&k can be modeled accurately with a 4-level plus
continuum Mg II model atom. Ideally radiative transfer computations should be
done in 3D including PRD effects. In practice this is currently not possible. A
reasonable compromise is to use 1D PRD computations to model the line profile
up to and including the central emission peaks, and use 3D transfer assuming
complete redistribution to model the central depression.
|
The two-loop Higgs mass upper bounds are reanalyzed. Previous results for a
cutoff scale $\Lambda\approx$ few TeV are found to be too stringent. For
$\Lambda=10^{19}$ GeV we find $M_H < 180 \pm 4\pm 5$ GeV, the first error
indicating the theoretical uncertainty, the second error reflecting the
experimental uncertainty due to $ m_t = 175 \pm 6 $ GeV. We also summarize the
lower bounds on $M_H$. We find that a SM Higgs mass in the range of 160 to 170
GeV will certainly allow for a perturbative and well-behaved SM up to the
Planck-mass scale $\Lambda_{Pl}\simeq 10^{19}$ GeV, with no need for new
physics to set in below this scale.
|
In this paper, structural controllability of a leader-follower multi-agent
system with multiple leaders is studied from a graph-theoretic point of view.
The problem of preservation of structural controllability under simultaneous
failures in both the communication links and the agents is investigated. The
effects of the loss of agents and communication links on the controllability of
an information flow graph have been studied previously. In this work, the
corresponding results are exploited to introduce some useful indices and
importance measures that help characterize and quantify the role of individual
links and agents in the controllability of the overall network. Existing
results are then extended by considering the effects of losses in both links
and agents at the same time. To this end, the concepts of joint
(r,s)-controllability and joint t-controllability are introduced as
quantitative measures of reliability for a multi-agent system, and their
important properties are investigated. Lastly, the class of jointly critical
digraphs is introduced and it is stated that if a digraph is jointly critical,
then joint t-controllability is a necessary and sufficient condition for
remaining controllable following the failure of any set of links and agents,
with cardinality less than t. Various examples are exploited throughout the
paper to elaborate on the analytical findings.
|
We consider experimental signatures of WIMPless dark matter. We focus on
models where the WIMPless dark matter candidate is a Majorana fermion, and dark
matter scattering is predominantly spin-dependent. These models can be probed
by IceCube/DeepCore, which can potentially find $3\sigma$ evidence with ~ 5
years of data.
|
Normalizing flows model probability distributions by learning invertible
transformations that map a simple distribution to a complex one.
Since the architecture of ResNet-based normalizing flows is more flexible than
that of coupling-based models, ResNet-based normalizing flows have been widely
studied in recent years. Despite their architectural flexibility, it is
well-known that the current ResNet-based models suffer from constrained
Lipschitz constants. In this paper, we propose the monotone formulation to
overcome the issue of the Lipschitz constants using monotone operators and
provide an in-depth theoretical analysis. Furthermore, we construct an
activation function called Concatenated Pila (CPila) to improve gradient flow.
The resulting model, Monotone Flows, exhibits excellent performance on
multiple density estimation benchmarks (MNIST, CIFAR-10, ImageNet32,
ImageNet64). Code is available at https://github.com/mlvlab/MonotoneFlows.
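For context on the constrained-Lipschitz issue mentioned above, here is a minimal sketch of an i-ResNet-style residual block y = x + g(x) whose inverse exists only because Lip(g) < 1. The layer sizes, the contraction coefficient c and the fixed-point inversion are illustrative assumptions, not the Monotone Flows construction itself.

import torch
import torch.nn as nn

class LipschitzResidualBlock(nn.Module):
    """Invertible residual block with a Lipschitz-constrained residual branch."""
    def __init__(self, dim=2, hidden=64, c=0.9):
        super().__init__()
        self.c = c
        self.g = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, hidden)),   # ||W||_2 <= 1
            nn.ELU(),                                          # 1-Lipschitz activation
            nn.utils.spectral_norm(nn.Linear(hidden, dim)),
        )

    def forward(self, x):
        return x + self.c * self.g(x)

    def inverse(self, y, n_iter=100):
        # Banach fixed-point iteration x <- y - c*g(x); converges because
        # c*g is a contraction (this is exactly what the Lipschitz bound buys).
        x = y.clone()
        for _ in range(n_iter):
            x = y - self.c * self.g(x)
        return x

block = LipschitzResidualBlock()
x = torch.randn(4, 2)
print((block.inverse(block(x)) - x).abs().max())   # ~0 up to iteration error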
|
An unscented Kalman filter for matrix Lie groups is proposed where the time
propagation of the state is formulated on the Lie algebra. This is done with
the kinematic differential equation of the logarithm, where the inverse of the
right Jacobian is used. The sigma points can then be expressed as logarithms in
vector form, and time propagation of the sigma points and the computation of
the mean and the covariance can be done on the Lie algebra. The resulting
formulation is to a large extent based on logarithms in vector form, and is
therefore closer to the UKF for systems in $\mathbb{R}^n$. This gives an
elegant and well-structured formulation which provides additional insight into
the problem, and which is computationally efficient. The proposed method is in
particular formulated and investigated on the matrix Lie group $SE(3)$. A
discussion on right and left Jacobians is included, and a novel closed form
solution for the inverse of the right Jacobian on $SE(3)$ is derived, which
gives a compact representation involving fewer matrix operations. The proposed
method is validated in simulations.
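To make the "sigma points as logarithms in vector form" idea concrete, the following sketch works on SO(3) rather than SE(3) for brevity and omits the right-Jacobian correction used in the paper; the motion model, the sigma-point weights and all numbers are illustrative assumptions.

import numpy as np

def hat(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def exp_so3(v):
    th = np.linalg.norm(v)
    if th < 1e-10:
        return np.eye(3) + hat(v)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def log_so3(R):
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if th < 1e-10:
        return np.zeros(3)
    return th / (2 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def sigma_points(x, P, kappa=1.0):
    # Standard unscented sigma points, formed directly on the Lie algebra.
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

x0 = np.array([0.1, -0.2, 0.3])        # logarithm (vector form) of the attitude
P0 = 0.01 * np.eye(3)
omega, dt = np.array([0.0, 0.0, 0.5]), 0.1

pts, w = sigma_points(x0, P0)
# Propagate each sigma point through a simple rigid-body motion model on the
# group, then map back to the algebra; mean and covariance are then ordinary
# weighted vector statistics, as in the Euclidean UKF.
prop = np.array([log_so3(exp_so3(p) @ exp_so3(omega * dt)) for p in pts])
mean = w @ prop
cov = sum(wi * np.outer(p - mean, p - mean) for wi, p in zip(w, prop))
print(mean, np.diag(cov))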
|
Stokes-Mueller polarimetry is generalized to include nonlinear optical
processes such as second- and third-harmonic generation and sum- and
difference-frequency generation. The overall algebraic form of the polarimetry
is preserved, where the incoming and outgoing radiation are represented by
column vectors and the intervening medium is represented by a matrix.
Expressions for the generalized nonlinear Stokes vector and the Mueller matrix
are provided in terms of coherency and correlation matrices, expanded by
higher-dimensional analogues of the Pauli matrices. In all cases, the outgoing
radiation is represented by the conventional $4\times 1$ Stokes vector, while
the dimensions of the incoming radiation Stokes vector and Mueller matrix
depend on the order of the process being examined. In addition, relations
between the nonlinear susceptibilities and the measured Mueller matrices are
explicitly provided. Finally, the approach of combining linear and nonlinear
optical elements is discussed within the context of polarimetry.
|
The space-based gravitational wave detector LISA will observe in the
low-frequency gravitational-wave band (0.1 mHz up to 1 Hz). LISA will search
for a variety of expected signals, and when it detects a signal it will have to
determine a number of parameters, such as the location of the source on the sky
and the signal's polarisation. This requires pattern-matching, called matched
filtering, which uses the best available theoretical predictions about the
characteristics of waveforms. All the estimates of the sensitivity of LISA to
various sources assume that the data analysis is done in the optimum way.
Because these techniques are unfamiliar to many young physicists, I use the
first part of this lecture to give a very basic introduction to time-series
data analysis, including matched filtering. The second part of the lecture
applies these techniques to LISA, showing how estimates of LISA's sensitivity
can be made, and briefly commenting on aspects of the signal-analysis problem
that are special to LISA.
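As a toy illustration of the matched filtering mentioned above (white noise, an arbitrary template and made-up numbers; real LISA analysis weights by the detector noise power spectrum), one can correlate the data stream against the template and read off the signal-to-noise ratio:

import numpy as np

rng = np.random.default_rng(1)
fs, T = 1024, 4.0
t = np.arange(0, T, 1 / fs)
# Hypothetical template: a 30 Hz wave packet centred at t = 2 s, unit norm.
template = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 2.0) ** 2) / 0.1)
template /= np.sqrt(np.sum(template ** 2))

sigma = 1.0
data = 5.0 * template + sigma * rng.standard_normal(len(t))   # signal + noise

# Matched-filter output at each lag; the peak estimates amplitude and time.
mf = np.correlate(data, template, mode="same")
snr = mf / sigma
print("peak SNR:", snr.max(), "at t =", t[snr.argmax()])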
|
The science and origins of asteroids are deemed a high priority in the Planetary
Science Decadal Survey. Major scientific goals for the study of planetesimals
are to decipher geological processes in SSSBs not determinable from
investigation via in-situ experimentation, and to understand how planetesimals
contribute to the formation of planets. Ground-based observations are not
sufficient to examine SSSBs, as they can only measure what is on the surface of
the body; in-situ analysis, by contrast, allows closer investigation of the
surface characteristics and the inner composition of the body. To this end, the
Asteroid Mobile Imager and Geologic Observer (AMIGO), an autonomous
semi-inflatable robot, will operate in a swarm to efficiently characterize the
surface of an asteroid. The stowed package is 10x10x10 cm (equivalent to a 1U
CubeSat) and deploys an inflatable sphere of ~1 m in diameter. Three mobility
modes are identified and designed: ballistic hopping, rotation during hops, and
up-righting maneuvers. Ballistic hops provide the AMIGO robot the ability to
explore a larger portion of the asteroid's surface and to sample a larger area
than a stationary lander. Rotation during the hop entails attitude control of
the robot, utilizing propulsion and reaction wheel actuation. In the event of
the robot tipping or not landing upright, a combination of thrusters and
reaction wheels will correct the robot's attitude. The AMIGO propulsion system
utilizes sublimate-based micro-electromechanical systems (MEMS) technology as a
means of lightweight, low-thrust ballistic hopping and coarse attitude control.
Each deployed AMIGO will hop across the surface of the asteroid multiple times.
|
In this paper, we study groups of automorphisms of algebraic systems over a
set of $p$-adic integers with different sets of arithmetic and coordinate-wise
logical operations and congruence relations modulo $p^k,$ $k\ge 1.$ The main
result of this paper is the description of groups of automorphisms of $p$-adic
integers with one or two arithmetic or coordinate-wise logical operations on
$p$-adic integers. To describe groups of automorphisms, we use the apparatus of
$p$-adic analysis and $p$-adic dynamical systems. The motivation for the study
of groups of automorphisms of algebraic systems over the $p$-adic integers is
the question of the existence of fully homomorphic encryption in a given family
of ciphers. The relationship between these problems is based on the possibility
of constructing a "continuous" $p$-adic model for some families of ciphers (in
this context, these ciphers can be considered as "discrete" systems). As a
consequence, we can apply the "continuous" methods of $p$-adic analysis to
solve the "discrete" problem of the existence of fully homomorphic ciphers.
|
In this thesis we deal with a specific collective phenomenon in condensed
matter: the formation of striped structures. Such structures are observed in
different branches of condensed matter physics, such as surface physics and the
physics of high-temperature superconductors. These quasi-one-dimensional
objects appear in theoretical analyses as well as in computer simulations of
various theoretical models. Here, the main topic of interest is the stability
of striped structures in certain quantum models, where a tendency towards
crystallization competes with a tendency towards phase separation, together
with some basic properties of these structures.
|
We compute the lattice spacing corrections to the spectral density of the
Hermitian Wilson Dirac operator using Wilson Chiral Perturbation Theory at NLO.
We consider a regime where the quark mass $m$ and the lattice spacing $a$ obey
the relative power counting $m\sim a \Lambda_{\rm QCD}^2$: in this situation
discretisation effects can be treated as a perturbation of the continuum
behaviour. While this framework fails to describe the lattice spectral density
close to the threshold, it nevertheless allows us to investigate important
properties of the spectrum of the Wilson Dirac operator. We discuss the range
of validity of our results and the possible implications for understanding the
phase diagram of Wilson fermions.
|
In a single-parameter mechanism design problem, a provider is looking to sell
a service to a group of potential buyers. Each buyer $i$ has a private value
$v_i$ for receiving the service and a feasibility constraint restricts which
sets of buyers can be served simultaneously. Recent work in economics
introduced clock auctions as a superior class of auctions for this problem, due
to their transparency, simplicity, and strong incentive guarantees. Subsequent
work focused on evaluating the social welfare approximation guarantees of these
auctions, leading to strong impossibility results: in the absence of prior
information regarding the buyers' values, no deterministic clock auction can
achieve a bounded approximation, even for simple feasibility constraints with
only two maximal feasible sets.
We show that these negative results can be circumvented by using prior
information or by leveraging randomization. We provide clock auctions that give
an $O(\log\log k)$ approximation for general downward-closed feasibility
constraints with $k$ maximal feasible sets for three different information
models, ranging from full access to the value distributions to complete absence
of information. The more information the seller has, the simpler these auctions
are. Under full access, we use a particularly simple deterministic clock
auction, called a single-price clock auction, which is only slightly more
complex than posted price mechanisms. In this auction, each buyer is offered a
single price and a feasible set is selected among those who accept their
offers. In the other extreme, where no prior information is available, this
approximation guarantee is obtained using a complex randomized clock auction.
In addition to our main results, we propose a parameterization that
interpolates between single-price clock auctions and general clock auctions,
paving the way for an exciting line of future research.
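A minimal sketch of the single-price idea follows, with made-up values, prices and feasibility constraint and a stand-in selection rule; the paper's actual auction chooses the served set differently.

# Each buyer i is quoted a single price; buyers whose value meets the price
# accept, and a feasible set is then chosen among the acceptors.
values = {1: 9.0, 2: 4.0, 3: 7.0, 4: 2.5}           # private values v_i
prices = {1: 5.0, 2: 5.0, 3: 3.0, 4: 3.0}           # posted single prices
maximal_feasible_sets = [{1, 2}, {3, 4}]            # two maximal feasible sets

acceptors = {i for i, v in values.items() if v >= prices[i]}

# Serve the acceptors inside the maximal feasible set with the most revenue
# (a stand-in rule); any subset of a feasible set is feasible because the
# constraint is downward-closed.
best_set = max((F & acceptors for F in maximal_feasible_sets),
               key=lambda S: sum(prices[i] for i in S))
print("served:", best_set, "revenue:", sum(prices[i] for i in best_set))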
|
We present the ultraviolet (UV) luminosity function of galaxies from the
GALEX Medium Imaging Survey with measured spectroscopic redshifts from the
first data release of the WiggleZ Dark Energy Survey. This sample selects
galaxies with high star formation rates: at 0.6 < z < 0.9 the median star
formation rate is at the upper 95th percentile of optically-selected (r<22.5)
galaxies and the sample contains about 50 per cent of all NUV < 22.8, 0.6 < z <
0.9 starburst galaxies within the volume sampled.
The most luminous galaxies in our sample (-21.0>M_NUV>-22.5) evolve very
rapidly with a number density declining as (1+z)^{5\pm 1} from redshift z = 0.9
to z = 0.6. These starburst galaxies (M_NUV<-21 is approximately a star
formation rate of 30 $M_\odot\,{\rm yr}^{-1}$) contribute about 1 per cent of cosmic star
formation over the redshift range z=0.6 to z=0.9. The star formation rate
density of these very luminous galaxies evolves rapidly, as (1+z)^{4\pm 1}.
Such a rapid evolution implies the majority of star formation in these large
galaxies must have occurred before z = 0.9.
We measure the UV luminosity function in 0.05 redshift intervals spanning
0.1<z<0.9, and provide analytic fits to the results. At all redshifts greater
than z=0.55 we find that the bright end of the luminosity function is not well
described by a pure Schechter function due to an excess of very luminous
(M_NUV<-22) galaxies. These luminosity functions can be used to create a radial
selection function for the WiggleZ survey or test models of galaxy formation
and evolution. Here we test the AGN feedback model in Scannapieco et al.
(2005), and find that this AGN feedback model requires AGN feedback efficiency
to vary with one or more of the following: stellar mass, star formation rate
and redshift.
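For reference, the pure Schechter form (in absolute magnitudes) against which the bright end is compared is
\begin{equation*}
\phi(M)\,dM = 0.4\ln(10)\,\phi^{*}\left[10^{\,0.4(M^{*}-M)}\right]^{\alpha+1}
\exp\!\left[-10^{\,0.4(M^{*}-M)}\right]dM,
\end{equation*}
with characteristic magnitude $M^{*}$, faint-end slope $\alpha$ and normalization $\phi^{*}$; the excess of M_NUV<-22 galaxies quoted above is an excess relative to this form.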
|
We propose an efficient dual boson scheme, which extends the DMFT paradigm to
collective excitations in correlated systems. The theory is fully
self-consistent both on the one- and on the two-particle level, thus describing
the formation of collective modes as well as the renormalization of electronic
and bosonic spectra on equal footing. The method employs an effective impurity
model comprising both fermionic and bosonic hybridization functions. Only
single- and two-electron Green's functions of the reference problem enter the
theory, due to the optimal choice of the self-consistency condition for the
effective bosonic bath. We show that the theory is naturally described by a
dual Luttinger-Ward functional and obeys the relevant conservation laws.
|
This paper presents nonlinear tracking control systems for a quadrotor
unmanned aerial vehicle under the influence of uncertainties. Assuming that
there exist unstructured disturbances in the translational dynamics and the
attitude dynamics, a geometric nonlinear adaptive controller is developed
directly on the special Euclidean group. In particular, a new form of an
adaptive control term is proposed to guarantee stability while compensating for
the effects of uncertainties in the quadrotor dynamics. A rigorous mathematical
stability proof is given. The desirable features are illustrated by a numerical
example and experimental results of aggressive maneuvers.
|
Let $f$ and $g$ be weakly holomorphic modular functions on $\Gamma_0(N)$ with
the trivial character. For an integer $d$, let $\Tr_d(f)$ denote the modular
trace of $f$ of index $d$. Let $r$ be a rational number equivalent to $i\infty$
under the action of $\Gamma_0(4N)$. In this paper, we prove that, when $z$ goes
radially to $r$, the limit $Q_{\hat{H}(f)}(r)$ of the sum $H(f)(z) =
\sum_{d>0}\Tr_d(f)e^{2\pi idz}$ is a special value of a regularized twisted
$L$-function defined by $\Tr_d(f)$ for $d\leq0$. It is proved that the
regularized $L$-function is meromorphic on $\mathbb{C}$ and satisfies a certain
functional equation. Finally, under the assumption that $N$ is square free, we
prove that if $Q_{\hat{H}(f)}(r)=Q_{\hat{H}(g)}(r)$ for all $r$ equivalent to
$i \infty$ under the action of $\Gamma_0(4N)$, then $\Tr_d(f)=\Tr_d(g)$ for all
integers $d$.
|
The physics of gravitational waves and their detection in the recent experiment
by the LIGO collaboration is discussed in simple terms for a general audience.
The main article is devoid of any mathematics, but an appendix is included for
inquisitive readers in which the essential mathematics of the general theory of
relativity and of gravitational waves is given.
|
The Davis-Chandrasekhar-Fermi (DCF) method using the Angular Dispersion
Function (ADF), the Histogram of Relative Orientations (HRO) and the
Polarization-Intensity Gradient Relation (P-IGR) are the most common tools used
to analyse maps of linearly polarized emission by thermal dust grains at
submillimeter wavelengths in molecular clouds and star-forming regions. A short
review of these methods is given. The combination of these methods will provide
valuable tools to shed light on the impact of magnetic fields on the formation
and evolution of subparsec-scale hub-filaments that will be mapped with the
NIKA2 camera and future experiments.
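For orientation, the classical DCF estimate relates the plane-of-sky field strength to the gas density $\rho$, the line-of-sight velocity dispersion $\sigma_v$ and the dispersion of polarization angles $\sigma_\phi$ as
\begin{equation*}
B_{\rm pos} \simeq \xi \sqrt{4\pi\rho}\,\frac{\sigma_v}{\sigma_\phi},
\end{equation*}
where $\xi\approx 0.5$ is a commonly adopted correction factor; the ADF, HRO and P-IGR techniques refine or complement this basic estimate.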
|
This paper has a twofold goal. The first aim is to provide a deeper
understanding of the family of the Real Elliptically Symmetric (RES)
distributions by investigating their intrinsic semiparametric nature. The
second aim is to derive a semiparametric lower bound for the estimation of the
parametric component of the model. The RES distributions represent a
semiparametric model where the parametric part is given by the mean vector and
by the scatter matrix while the non-parametric, infinite-dimensional, part is
represented by the density generator. Since, in practical applications, we are
often interested only in the estimation of the parametric component, the
density generator can be considered as a nuisance. The first part of the paper
is dedicated to conveniently placing the RES distributions in the framework of
semiparametric group models. In the second part of the paper, building on the
mathematical tools previously introduced, the Constrained Semiparametric
Cram\'{e}r-Rao Bound (CSCRB) for the estimation of the mean vector and of the
constrained scatter matrix of a RES-distributed random vector is introduced.
The CSCRB provides a lower bound on the Mean Squared Error (MSE) of any robust
$M$-estimator of mean vector and scatter matrix when no a-priori information on
the density generator is available. A closed form expression for the CSCRB is
derived. Finally, in simulations, we assess the statistical efficiency of the
Tyler's and Huber's scatter matrix $M$-estimators with respect to the CSCRB.
|
The results obtained by the Working Group on Supersymmetry at the 1999 Les
Houches Workshop on Collider Physics are summarized. Separate chapters treat
"general" supersymmetry, R-parity violation, gauge mediated supersymmetry
breaking, and anomaly mediated supersymmetry breaking.
|
We examine numerically the post-merger regime of two Schwarzschild black
holes in a non-head-on collision. Our treatment is made in the realm of
non-axisymmetric Robinson-Trautman spacetimes which are appropriate for the
description of the system. Characteristic initial data for the system are
constructed and the Robinson-Trautman equation is integrated using a numerical
code based on the Galerkin spectral method. The collision is planar, restricted
to the plane determined by the directions of the two initial colliding black
holes, with the net momentum fluxes of gravitational waves confined to this
plane. We evaluate the efficiency of mass-energy extraction, the total energy
and momentum carried away by gravitational waves and the momentum distribution
of the remnant black hole. Our analysis is based on the Bondi-Sachs four
momentum conservation laws. Head-on collisions and orthogonal collisions
constitute, respectively, upper and lower bounds to the power emission and to
the efficiency of mass-energy extraction by gravitational waves. The momentum
extraction and the pattern of the momentum fluxes, as a function of the
incidence angle, are examined. The momentum extraction characterizes a regime
of strong deceleration of the system. The angular pattern of gravitational wave
signals is also examined; the early-time emission is typically of
bremsstrahlung type. Gravitational waves are also emitted outside the plane of collision
but this component has a zero net momentum flux. The relation between the
incidence angle of collision and the exit angle of the remnant closely
approximates a relation for inelastic collisions of classical particles in
Newtonian dynamics.
|
We consider the following evolutionary Hamilton-Jacobi equation with initial
condition: \begin{equation*} \begin{cases}
\partial_tu(x,t)+H(x,u(x,t),\partial_xu(x,t))=0,\\ u(x,0)=\phi(x), \end{cases}
\end{equation*} where $\phi(x)\in C(M,\mathbb{R})$. Under some assumptions on
the convexity of $H(x,u,p)$ with respect to $p$ and the uniform Lipschitz
continuity of $H(x,u,p)$ with respect to $u$, we establish a variational principle and
provide an intrinsic relation between viscosity solutions and certain minimal
characteristics. By introducing an implicitly defined {\it fundamental
solution}, we obtain a variational representation formula of the viscosity
solution of the evolutionary Hamilton-Jacobi equation. Moreover, we discuss the
large time behavior of the viscosity solution of the evolutionary
Hamilton-Jacobi equation and provide a dynamical representation formula of the
viscosity solution of the stationary Hamilton-Jacobi equation with strictly
increasing $H(x,u,p)$ with respect to $u$.
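For comparison, when $H=H(x,p)$ does not depend on $u$, the variational representation reduces to the classical Lax-Oleinik formula
\begin{equation*}
u(x,t)=\inf_{\gamma(t)=x}\left\{\phi(\gamma(0))+\int_0^t L\big(\gamma(s),\dot\gamma(s)\big)\,ds\right\},
\end{equation*}
where $L$ is the Legendre transform of $H$ in $p$; the implicitly defined fundamental solution above generalizes this representation to the $u$-dependent case.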
|
In a recent article, Alon, Hanneke, Holzman, and Moran (FOCS '21) introduced
a unifying framework to study the learnability of classes of partial concepts.
One of the central questions studied in their work is whether the learnability
of a partial concept class is always inherited from the learnability of some
``extension'' of it to a total concept class.
They showed this is not the case for PAC learning but left the problem open
for the stronger notion of online learnability.
We resolve this problem by constructing a class of partial concepts that is
online learnable, but no extension of it to a class of total concepts is online
learnable (or even PAC learnable).
|
We extend abstract interpretation for the purpose of verifying hybrid
systems. Abstraction has been playing an important role in many verification
methodologies for hybrid systems, but some special care is needed for
abstraction of continuous dynamics defined by ODEs. We apply Cousot and
Cousot's framework of abstract interpretation to hybrid systems, almost as it
is, by regarding continuous dynamics as an infinite iteration of infinitesimal
discrete jumps. This extension follows the recent line of work by Suenaga,
Hasuo and Sekine, where deductive verification is extended for hybrid systems
by 1) introducing a constant dt for an infinitesimal value; and 2) employing
Robinson's nonstandard analysis (NSA) to define mathematically rigorous
semantics. Our theoretical results include soundness and termination via
uniform widening operators; and our prototype implementation successfully
verifies some benchmark examples.
|
Although the LHC experiments have searched for and excluded many proposed new
particles up to masses close to 1 TeV, there are many scenarios that are
difficult to address at a hadron collider. This talk will review a number of
these scenarios and present the expectations for searches at an
electron-positron collider such as the International Linear Collider. The cases
discussed include SUSY in strongly or moderately compressed models, heavy
neutrinos, heavy vector bosons coupling to the s-channel in $e^+e^-$
annihilation, and new scalars.
|
In Braginskii extended magneto-hydrodynamics (ExMHD), applicable to
collisional astrophysical and high energy density plasmas, the electric field
and heat flow are described by the $\alpha$, $\beta$ and $\kappa$ transport
coefficients. We show that magnetic transport relies primarily on
$\beta_\parallel-\beta_\perp$ and $\alpha_\perp-\alpha_\parallel$, rather than
$\alpha_\perp$ and $\beta_\perp$ themselves. However, commonly used coefficient
fit functions [Epperlein and Haines, Phys. Fluids 29, 1029 (1986)] cannot
accurately calculate these quantities. This means that many ExMHD simulations
have significantly over-estimated the cross-gradient Nernst advection,
resulting in artificial magnetic dissipation and discontinuities. We repeat the
kinetic analysis to provide fits that rectify this problem. Use of these in the
Gorgon ExMHD code resolves the known discrepancies with kinetic simulations in
the literature. Recognizing the fundamental importance of
$\alpha_\perp-\alpha_\parallel$ and $\beta_\parallel-\beta_\perp$, we re-cast
the set of coefficients to find that each of them now shares the same
underlying properties. This makes explicit the symmetry of the magnetic and
thermal transport equations, as well as the symmetry of the coefficients
themselves.
|
The general model of coagulation is considered.
For basic classes of unbounded coagulation kernels the central limit theorem
(CLT) is obtained for the fluctuations around the dynamic law of large numbers
(LLN). A rather precise rate of convergence is given both for LLN and CLT.
|
In this paper the log surfaces without $\mathbb{Q}$-complement are classified.
In particular, they are always non-rational. This result removes a restriction
in the theory of complements and allows one to apply it to the widest class of
log surfaces.
|
We demonstrate generation of 7.6 fs near-UV pulses centered at 400 nm via
8-fold soliton-effect self-compression in an Ar-filled hollow-core
kagom\'e-style photonic crystal fiber with ultrathin core walls. Analytical
calculations of the effective compression length and soliton order permit
adjustment of the experimental parameters, and numerical modelling of the
nonlinear pulse dynamics in the fiber accurately predicts the spectro-temporal
profiles of the self-compressed pulses. After compensation of phase distortion
introduced by the optical elements along the beam path from the fiber to the
diagnostics, 71% of the pulse energy was in the main temporal lobe, with peak
powers in excess of 0.2 GW. The convenient set-up opens up new opportunities
for time-resolved studies in spectroscopy, chemistry and materials science.
|
We investigate constraints that the requirements of perturbativity and gauge
coupling unification impose on extensions of the Standard Model and of the
MSSM. In particular, we discuss the renormalization group running in several
SUSY left-right symmetric and Pati-Salam models and show how the various scales
appearing in these models have to be chosen in order to achieve unification. We
find that unification in the considered models occurs typically at scales below
$M^{\rm min}_{B\,{\rm violation}} = 10^{16}$ GeV, implying potential conflicts with the
non-observation of proton decay. We emphasize that extending the particle
content of a model in order to push the GUT scale higher or to achieve
unification in the first place will very often lead to non-perturbative
evolution. We generalize this observation to arbitrary extensions of the
Standard Model and of the MSSM and show that the requirement of perturbativity
up to $M^{\rm min}_{B\,{\rm violation}}$, if considered a valid guideline for model
building, severely limits the particle content of any such model, especially in
the supersymmetric case. However, we also discuss several mechanisms to
circumvent perturbativity and proton decay issues, for example in certain
classes of extra dimensional models.
|
It is shown that the study of correlations in the associated production of
B_c and D mesons at the LHC allows one to obtain essential information about
the B_c production mechanism.
|
The rupture of a polymer chain maintained at temperature $T$ under fixed
tension is prototypical of a wide array of systems failing under constant
external strain and random perturbations. Past research focused on analytic and
numerical studies of the mean rate of collapse of such a chain. Surprisingly,
an analytic calculation of the probability distribution function (PDF) of
collapse rates appears to be lacking. Since rare events of rapid collapse can
be important and even catastrophic, we present here a theory of this
distribution, with a stress on its tail of fast rates. We show that the tail of
the PDF is a power law with a {\em universal} exponent that is theoretically
determined. Extensive numerics validate the offered theory. Lessons pertaining
to other problems of the same type are drawn.
|
We present a sensitive $\lambda$20cm VLA continuum survey of the Galactic
center region using new and archival data based on multi-configuration
observations taken with relatively uniform {\it uv} coverage. The high dynamic
range images cover the regions within $-2^\circ < l < 5^\circ$ and $-40' < b <
40'$ with a spatial resolution of $\approx30''$ and 10$''$. The wide field
imaging technique is used to construct a low-resolution mosaic of 40
overlapping pointings. The mosaic image includes the Effelsberg observations
filling the low spatial frequency {\it uv} data. We also present high
resolution images of twenty-three overlapping fields using DnC and CnB array
configurations. The survey has resulted in a catalog of 345 discrete sources as
well as 140 images revealing structural details of HII regions, SNRs, pulsar
wind nebulae and more than 80 linear filaments distributed toward the complex
region of the Galactic center. These observations show evidence for an
order-of-magnitude increase in the number of faint linear filaments with
typical lengths of a few arcminutes. Many of the filaments show morphological
characteristics similar to the Galactic center nonthermal radio filaments
(NRFs). The linear filaments are not isolated but are generally clustered in
star forming regions where prominent NRFs had been detected previously. The
extensions of many of these linear filaments appear to terminate at either a
compact source or a resolved shell-like thermal source. A relationship between
the filaments, the compact and extended thermal sources as well as a lack of
preferred orientation for many RFs should constrain models that are proposed to
explain the origin of nonthermal radio filaments in the Galactic center.
|
The processes leading to dust formation, and the subsequent role dust plays in
driving mass loss in cool evolved stars, constitute an area of intense study. Here we
present high resolution ALMA Science Verification data of the continuum
emission around the highly evolved oxygen-rich red supergiant VY CMa. These
data enable us to study the dust in its inner circumstellar environment at a
spatial resolution of 129 mas at 321 GHz and 59 mas at 658 GHz, thus allowing
us to trace dust on spatial scales down to 11 R$_{\star}$ (71 AU). Two
prominent dust components are detected and resolved. The brightest dust
component, C, is located 334 mas (61 R$_{\star}$) South East of the star and
has a dust mass of at least $2.5\times 10^{-4}$ M$_{\odot}$. It has a dust
emissivity spectral index of $\beta =-0.1$ at its peak, implying that it is
optically thick at these frequencies with a cool core of $T_{d}\lesssim 100$ K.
Interestingly, not a single molecule in the ALMA data has emission close to the
peak of this massive dust clump. The other main dust component, VY, is located
at the position of the star and contains a total dust mass of $4.0 \times
10^{-5} $M$_{\odot}$. It also contains a weaker dust feature extending over
$60$ R$_{\star}$ to the North with the total component having a typical dust
emissivity spectral index of $\beta =0.7$. We find that at least $17\%$ of the
dust mass around VY CMa is located in clumps ejected within a more quiescent
roughly spherical stellar wind, with a quiescent dust mass loss rate of $5
\times 10^{-6}$ M$_{\odot} $yr$^{-1}$. The anisotropic morphology of the dust
indicates a continuous, directed mass loss over a few decades, suggesting that
this mass loss cannot be driven by large convection cells alone.
|
The form of energy termed heat, which typically derives from lattice
vibrations, i.e. the phonons, is usually considered as waste energy and,
moreover, deleterious to information processing. However, with this colloquium
we attempt to rebut this common view: by use of tailored models we demonstrate
that phonons can be manipulated as electrons and photons can, thus enabling
controlled heat transport. Moreover, we explain that phonons can be put to
beneficial use to carry and process information. In the first part we present
ways to control heat transport and to process information for physical systems
which are driven by a temperature bias. In particular, we put forward a toolkit
of familiar electronic analogs for exercising phononics, i.e. phononic devices
which act as thermal diodes, thermal transistors, thermal logic gates, thermal
memories, etc. These concepts are then put to work to transport, control and
rectify heat in physically realistic nanosystems by devising practical designs
of hybrid nanostructures that permit the operation of functional phononic
devices; we also report first experimental realizations. Next, we discuss yet
richer possibilities to manipulate heat flow by use of time-varying thermal
bath temperatures or various other external fields. These give rise to a wealth
of intriguing phononic nonequilibrium phenomena, for example the directed
shuttling of heat, geometrical-phase-induced heat pumping, or the phonon Hall
effect, all of which may find their way into operation with electronic analogs.
|
This paper reviews results about discrete physics and non-commutative worlds
and explores further the structure and consequences of constraints linking
classical calculus and discrete calculus formulated via commutators. In
particular we review how the formalism of generalized non-commutative
electromagnetism follows from a first order constraint and how, via the
Kilmister equation, relationships with general relativity follow from a second
order constraint. It is remarkable that a second order constraint, based on
interlacing the commutative and non-commutative worlds, leads to an equivalent
tensor equation at the pole of geodesic coordinates for general relativity.
|
Massless QED(1+1) - the Schwinger model - is studied in a covariant gauge.
The main new ingredient is an operator solution of the Dirac equation expressed
directly in terms of the fields present in the Lagrangian. This allows us to
study in detail the residual symmetry of the covariant gauge. For comparison,
we analyze first an analogous solution in the Thirring-Wess model and its
implication for the axial anomaly arising from the necessity to correctly
define products of fermion operators via point-splitting. In the Schwinger
model, one has to define the currents in a gauge invariant (GI) way. Certain
problems with their usual derivation are identified that obscure the origin of
the massive vector boson. We show how to define the truly GI interacting
currents, reformulate the theory in a finite volume and clarify the role of the
gauge zero mode in the axial anomaly and in the Schwinger mechanism. A
transformation to the Coulomb gauge representation is suggested along with
ideas about how to correctly obtain other properties of the model.
|
We study nonnegative, measure-valued solutions to nonlinear drift type
equations modelling concentration phenomena related to Bose-Einstein particles.
In one spatial dimension, we prove existence and uniqueness for measure
solutions. Moreover, we prove that all solutions blow up in finite time leading
to a concentration of mass only at the origin, and the concentrated mass
absorbs increasingly the mass converging to the total mass as time goes to
infinity. Our analysis makes a substantial use of independent variable scalings
and pseudo-inverse functions techniques.
|
The shear free condition is studied for dissipative relativistic
self-gravitating fluids in the quasi-static approximation. It is shown that, in
the Newtonian limit, this condition implies the linear homology law for the
velocity of a fluid element, only if homology conditions are further imposed on
the temperature and the emission rate. It is also shown that the shear-free
plus the homogeneous expansion rate conditions are equivalent (in the Newtonian
limit) to the homology conditions. Deviations from homology and their
prospective applications to some astrophysical scenarios are discussed, and a
model is worked out.
|
We present a systematic analysis on coherent states of composite bosons
consisting of two distinguishable particles. By defining an effective composite
boson (coboson) annihilation operator, we derive its eigenstate and commutator.
Depending on the elementary particles comprising the composite particles, we
gauge the resemblance between this eigenstate and traditional coherent states
through typical measures of nonclassicality, such as quadrature variances and
Mandel's Q parameter. Furthermore, we show that the eigenstate of the coboson
annihilation operator is useful in estimating the maximum eigenvalue of the
coboson number operator.
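For reference, the Mandel parameter used here is
\begin{equation*}
Q=\frac{\langle(\Delta\hat n)^2\rangle-\langle\hat n\rangle}{\langle\hat n\rangle},
\end{equation*}
so that $Q=0$ for Poissonian (ideal coherent-state) statistics and $Q<0$ signals sub-Poissonian, nonclassical statistics; departures of the coboson eigenstate from $Q=0$ quantify how far it is from a traditional coherent state.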
|
Statistical mechanics is an important tool for understanding polymer
electroelasticity because the elasticity of polymers is primarily due to
entropy. However, a common approach for the statistical mechanics of polymer
chains, the Gaussian chain approximation, misses key physics. By considering
the nonlinearities of the problem, we show a strong coupling between the
deformation of a polymer chain and its dielectric response; that is, its net
dipole. When chains with this coupling are cross-linked in an elastomer network
and an electric field is applied, the field breaks the symmetry of the
elastomer's elastic properties, and, combined with electrostatic torque and
incompressibility, leads to intrinsic electrostriction. Conversely, deformation
can break the symmetry of the dielectric response leading to volumetric torque
(i.e., a couple stress or torque per unit volume) and asymmetric actuation.
Both phenomena have important implications for designing high-efficiency soft
actuators and soft electroactive materials; and the presence of mechanisms for
volumetric torque, in particular, can be used to develop higher degree of
freedom actuators and to achieve bioinspired locomotion.
|
Image denoising is a long-standing challenge in the fields of computer vision
and image processing. In this paper, we propose an encoder-decoder model with
direct attention, which is capable of denoising and reconstructing highly
corrupted images. Our model consists of an encoder and a decoder, where the
encoder is a convolutional neural network and the decoder is a multilayer Long
Short-Term Memory (LSTM) network. In the proposed model, the encoder reads an
image and captures an abstraction of that image in a vector, and the decoder
takes that vector together with the corrupted image to reconstruct a clean
image. We trained our model on the MNIST handwritten digit database after
blacking out the lower half of every image and adding noise on top of that.
Even after massive corruption of the images, where it is hard for a human to
understand their content, our model can recover the image with minimal error.
Our proposed model has been compared with a convolutional encoder-decoder, and
it performs better at generating the missing parts of the images than the
convolutional autoencoder.
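A minimal sketch of the described architecture follows; the layer sizes, the row-by-row decoding scheme and the context-vector broadcast are guesses for illustration, not the authors' exact design.

import torch
import torch.nn as nn

class DenoisingEncoderDecoder(nn.Module):
    """A convolutional encoder summarizes the corrupted 28x28 image into a
    context vector; an LSTM decoder reads the corrupted image row by row
    (together with that vector) and emits the reconstructed rows."""
    def __init__(self, ctx=128, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, ctx),
        )
        self.decoder = nn.LSTM(input_size=28 + ctx, hidden_size=hidden,
                               num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, 28)

    def forward(self, corrupted):                  # corrupted: (B, 1, 28, 28)
        context = self.encoder(corrupted)          # (B, ctx)
        rows = corrupted.squeeze(1)                # (B, 28, 28): one row per step
        ctx_seq = context.unsqueeze(1).expand(-1, 28, -1)
        out, _ = self.decoder(torch.cat([rows, ctx_seq], dim=-1))
        return torch.sigmoid(self.readout(out)).unsqueeze(1)      # (B, 1, 28, 28)

model = DenoisingEncoderDecoder()
x = torch.rand(8, 1, 28, 28)                       # stand-in corrupted batch
print(model(x).shape)                              # torch.Size([8, 1, 28, 28])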
|
The exact solution for a static spherically symmetric field outside a charged
point particle is found in a non-linear $U(1)$ gauge theory with a logarithmic
Lagrangian. The electromagnetic self-mass is finite, and for a particular
relation between mass, charge, and the value of the non-linearity coupling
constant, $\lambda$, the electromagnetic contribution to the Schwarzschild mass
is equal to the total mass. If we also require that the singularity at the
origin be hidden behind a horizon, the mass is fixed to be slightly less than
the charge. This object is a {\em black point.}
|
The weighted singular value decomposition (WSVD) of a quaternion matrix and,
with its help, determinantal representations of the quaternion weighted
Moore-Penrose inverse have recently been derived by the author. In this paper, using these
determinantal representations, explicit determinantal representation formulas
for the solution of the restricted quaternion matrix equations, ${\bf A}{\bf
X}{\bf B}={\bf D}$, and consequently, ${\bf A}{\bf X}={\bf D}$ and ${\bf X}{\bf
B}={\bf D}$ are obtained within the framework of the theory of column-row
determinants. We consider all possible cases depending on weighted matrices.
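As a plain real-matrix illustration of the restricted equation ${\bf A}{\bf X}{\bf B}={\bf D}$ (the paper itself works over the quaternions with weighted Moore-Penrose inverses and column-row determinants, which this sketch does not reproduce), one can solve it through the vec/Kronecker identity vec(AXB) = (B^T kron A) vec(X) and the ordinary pseudoinverse:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X_true = rng.standard_normal((3, 5))
D = A @ X_true @ B                                # consistent right-hand side

K = np.kron(B.T, A)                               # (B^T kron A), column-major vec
x = np.linalg.pinv(K) @ D.flatten(order="F")      # minimum-norm LS solution
X = x.reshape(3, 5, order="F")

print(np.allclose(A @ X @ B, D))                  # True: X solves A X B = D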
|
The adiabatic self-consistent collective coordinate (ASCC) method is applied
to the pairing-plus-quadrupole (P + Q) model Hamiltonian including the
quadrupole pairing, and the oblate-prolate shape coexistence phenomena in
proton-rich nuclei, 68Se and 72Kr, are investigated. It is shown that the
collective path connecting the oblate and prolate local minima runs along a
triaxial valley in the beta-gamma plane. A quantum collective Hamiltonian is
constructed and low-lying energy spectra and E2 transition probabilities are
calculated for the first time using the ASCC method. Basic properties of the
shape coexistence/mixing are well reproduced. We also clarify the effects of
the time-odd pair field on the collective mass (inertial function) for the
large-amplitude vibration and on the rotational moments of inertia about three
principal axes.
|
Recent literature including our past work provide analysis and solutions for
using (i) erasure coding, (ii) parallelism, or (iii) variable slicing/chunking
(i.e., dividing an object of a specific size into a variable number of smaller
chunks) to speed up the I/O performance of storage clouds. However, a
comprehensive approach that considers all three dimensions together to achieve
the best throughput-delay trade-off curve has been lacking. This paper presents
the first set of solutions that can pick the best combination of coding rate
and object chunking/slicing options as the load dynamically changes. Our
specific contributions are as follows: (1) We establish via measurement that
combining variable coding rate and chunking is mostly feasible over a popular
public cloud. (2) We relate the delay optimal values for chunking level and
code rate to the queue backlogs via an approximate queueing analysis. (3) Based
on this analysis, we propose TOFEC that adapts the chunking level and coding
rate against the queue backlogs. Our trace-driven simulation results show that
TOFEC's adaptation mechanism converges to an appropriate code that provides the
optimal throughput-delay trade-off without reducing system capacity. Compared
to a non-adaptive strategy optimized for throughput, TOFEC delivers $2.5\times$
lower latency under light workloads; compared to a non-adaptive strategy
optimized for latency, TOFEC can scale to support over $3\times$ as many
requests. (4) We propose a simpler greedy solution that performs on a par with
TOFEC in average delay performance, but exhibits significantly more performance
variations.
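A toy Monte-Carlo sketch of the mechanism exploited above (the delay model, numbers and parameter names are invented; TOFEC's actual adaptation is driven by measured queue backlogs): an object is cut into k chunks, expanded to n coded chunks downloaded in parallel, and the request completes once any k chunks arrive.

import numpy as np

rng = np.random.default_rng(0)

def mean_delay(n, k, S=1.0, overhead=0.05, rate=1.0, tail=0.2, trials=20000):
    # Per-chunk delay = fixed overhead + service time + exponential tail.
    chunk = S / k
    d = overhead + chunk / rate + rng.exponential(tail, size=(trials, n))
    return np.sort(d, axis=1)[:, k - 1].mean()    # time until the k-th chunk finishes

for n, k in [(1, 1), (4, 4), (6, 4), (8, 4)]:
    print(f"n={n} k={k}: mean delay {mean_delay(n, k):.3f}, redundancy {n / k:.2f}x")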
|
An irreducible Hamiltonian BRST-anti-BRST treatment of reducible first-class
systems based on homological arguments is proposed. The general formalism is
exemplified on the Freedman-Townsend model.
|
We propose to study market efficiency from a computational viewpoint.
Borrowing from theoretical computer science, we define a market to be
\emph{efficient with respect to resources $S$} (e.g., time, memory) if no
strategy using resources $S$ can make a profit. As a first step, we consider
memory-$m$ strategies whose action at time $t$ depends only on the $m$ previous
observations at times $t-m,...,t-1$. We introduce and study a simple model of
market evolution, where strategies impact the market by their decision to buy
or sell. We show that the effect of optimal strategies using memory $m$ can
lead to "market conditions" that were not present initially, such as (1) market
bubbles and (2) the possibility for a strategy using memory $m' > m$ to make a
bigger profit than was initially possible. We suggest ours as a framework to
rationalize the technological arms race of quantitative trading firms.
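A minimal simulation sketch of the model class described above (random strategy tables and an ad-hoc price-impact rule are illustrative assumptions; the paper analyses optimal memory-$m$ strategies):

import numpy as np

rng = np.random.default_rng(0)
m, n_strategies, T = 3, 10, 200

# Each strategy maps the last m up/down moves (2^m histories) to buy(+1)/sell(-1).
tables = rng.choice([-1, 1], size=(n_strategies, 2 ** m))

moves = list(rng.choice([-1, 1], size=m))          # initial history of price moves
price = [100.0]
for t in range(T):
    hist = sum((1 if s > 0 else 0) << i for i, s in enumerate(moves[-m:]))
    net = tables[:, hist].sum()                    # aggregate buy/sell decision
    moves.append(1 if net >= 0 else -1)            # demand moves the price
    price.append(price[-1] + 0.1 * net)

print("final price:", price[-1])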
|
Understanding the phonon behavior in semiconductors from a topological
physics perspective provides more opportunities to uncover extraordinary
physics related to phonon transport and electron-phonon interactions. While
various kinds of topological phonons have been reported in different
crystalline solids, their microscopic origin has not been quantitatively
uncovered. In this work, four typical analytical interatomic force constant
(IFC) models are employed for wurtzite GaN and AlN to help establish the
relationships between phonon topology and real-space IFCs. In particular,
various nearest neighbor IFCs, i.e., different levels of nonlocality, and IFC
strength controlled by characteristic coefficients, can be achieved in these
models. The results demonstrate that changes in the strength of both the IFCs
and nonlocal interactions can induce phonon phase transitions in GaN and AlN,
leading to the disappearance of existing Weyl phonons and the appearance of new
Weyl phonons. These new Weyl phonons are the result of a band reversal and have
a Chern number of 1. Most of them are located in the $k_z=0$ plane in pairs, while
some of them are inside or at the boundary of the irreducible Brillouin zone.
Among the various Weyl points observed, certain ones remain identical in both
materials, while others exhibit variability depending on the particular case.
Compared to the strength of the IFC, nonlocal interactions show much more
significant effects in inducing the topological phonon phase transition,
especially in cases modeled by the IFC model and SW potential. The larger
number of 3NN atoms provides more space for variations in the topological
phonon phase of wurtzite AlN than in GaN, resulting in a greater abundance of
changes in AlN.
|
We demonstrate dynamic stabilisation of axisymmetric Fourier modes
susceptible to the classical Rayleigh-Plateau (RP) instability on a liquid
cylinder by subjecting it to a radial oscillatory body force. Viscosity is
found to play a crucial role in this stabilisation. Linear stability
predictions are obtained via Floquet analysis demonstrating that RP unstable
modes can be stabilised using radial forcing. We also solve the linearised,
viscous initial-value problem for free-surface deformation obtaining an
equation governing the amplitude of a three-dimensional Fourier mode. This
equation generalises the Mathieu equation governing Faraday waves on a cylinder
derived earlier in Patankar et al. (2018), is non-local in time and represents
the cylindrical analogue of its Cartesian counterpart (Beyer & Friedrich 1995).
The memory term in this equation is physically interpreted and it is shown that
for highly viscous fluids, its contribution can be sizeable. Predictions from
the numerical solution to this equation demonstrate RP mode stabilisation up to
several hundred forcing cycles and are in excellent agreement with numerical
simulations of the incompressible Navier-Stokes equations.
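As a minimal illustration of the Floquet step (using a local-in-time damped Mathieu-type equation with made-up coefficients rather than the non-local amplitude equation derived in the paper), one can integrate over one forcing period and examine the monodromy-matrix eigenvalues:

import numpy as np
from scipy.integrate import solve_ivp

def monodromy_eigs(p, q, mu=0.0):
    # a'' + 2*mu*a' + (p + 2*q*cos(2*t)) a = 0, forcing period pi.
    def rhs(t, y):
        a, v = y
        return [v, -2 * mu * v - (p + 2 * q * np.cos(2 * t)) * a]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):            # two fundamental solutions
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.column_stack(cols)                       # monodromy matrix over one period
    return np.abs(np.linalg.eigvals(M))             # |multiplier| > 1 => unstable

print(monodromy_eigs(p=-0.1, q=0.0))                # unforced, RP-like unstable mode
print(monodromy_eigs(p=-0.1, q=0.8, mu=0.1))        # forcing + damping can stabilise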
|
This fun polemical piece was written several months ago on a tip that the
\emph{Chronicle of Higher Education} might be interested in publishing
something like it. Sadly (both for me and, I think, for the \emph{Chronicle}'s
readership) the editors didn't think it was of sufficient interest to the wider
academic community. I am posting it here on the arXiv so that it can,
nevertheless, be publicly available. If anyone out there wants to (suggest a
place to) publish the piece, I'm all ears.
|
Given an alphabet $S$, we consider the size of the subsets of the full
sequence space $S^{\rm {\bf Z}}$ determined by the additional restriction that
$x_i\not=x_{i+f(n)},\ i\in {\rm {\bf Z}},\ n\in {\rm {\bf N}}.$ Here $f$ is a
positive, strictly increasing function. We review another, graph-theoretic,
formulation and then the known results covering various combinations of $f$ and
the alphabet size. In the second part of the paper we turn to the fine
structure of the allowed sequences in the particular case where $f$ is a
suitable polynomial. The generation of sequences leads naturally to consider
the problem of their maximal length, which turns out to be highly random
asymptotically in the alphabet size.
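A small greedy sketch of the sequence generation discussed above, with $f(n)=n^2$ as a sample polynomial; the trial count and length cap are arbitrary:

import random

def f(n):
    return n * n

def max_greedy_length(s, trials=200, cap=10000):
    # Extend a word over {0,...,s-1} one symbol at a time while respecting
    # x_i != x_{i+f(n)}; record the longest word reached over random trials.
    best = 0
    for _ in range(trials):
        word = []
        while len(word) < cap:
            i = len(word)                      # position of the new symbol
            forbidden, n = set(), 1
            while f(n) <= i:
                forbidden.add(word[i - f(n)])  # must differ at lag f(n)
                n += 1
            allowed = [a for a in range(s) if a not in forbidden]
            if not allowed:
                break
            word.append(random.choice(allowed))
        best = max(best, len(word))
    return best

for s in (2, 3, 4):
    print(s, max_greedy_length(s))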
|
Sco X-1, the brightest low mass X-ray binary, is likely to be a source for
gravitational wave emission. In one mechanism, emission of a gravitational wave
arrests the increase in spin frequency due to the accretion torque in a low
mass X-ray binary. Since the gravitational waveform is unknown, a detection
method assuming no a priori knowledge of the signal is preferable. In this
paper, we propose to search for a gravitational wave from Sco X-1 using a {{\it
source tracking}} method based on a coherent network analysis. In the method,
we combine data from several interferometric gravitational wave detectors
taking into account of the direction to Sco X-1, and reconstruct two
polarization waveforms at the location of Sco X-1 in the sky as Sco X-1 is
moving. The source tracking method opens up the possibility of searching for a
wide variety of signals. We perform Monte Carlo simulations and show results
for bursts and for modeled, short-duration periodic sources, using a simple
excess power method and a matched filter method on the reconstructed signals.
|
We compute the proper time Lyapunov exponent for charged Myers Perry black
hole spacetime and investigate the instability of the equatorial circular
geodesics (both timelike and null) via this exponent. We also show that for
more than four spacetime dimensions $(N \geq 3)$, there are \emph{no} Innermost
Stable Circular Orbits (ISCOs) in charged Myers Perry black hole spacetime. We
further show that among all possible circular orbits, timelike circular orbits
have \emph{longer} orbital periods than null circular orbits (photon spheres)
as measured by asymptotic observers. Thus, timelike circular orbits provide the
\emph{slowest way} to orbit around the charged Myers Perry black hole.
|
For an arbitrary left Artinian ring $R$, explicit descriptions are given of
all the left denominator sets $S$ of $R$ and left localizations $S^{-1}R$ of
$R$. It is proved that, up to $R$-isomorphism, there are only finitely many
left localizations and each of them is an idempotent localization, i.e.
$S^{-1}R\simeq S_e^{-1}R$ and ${\rm ass} (S) = {\rm ass} (S_e)$ where
$S_e=\{1,e\}$ is a left denominator set of $R$ and $e$ is an idempotent.
Moreover, the idempotent $e$ is unique up to a conjugation. It is proved that
the number of maximal left denominator sets of $R$ is finite and does not
exceed the number of isomorphism classes of simple left $R$-modules. The set of
maximal left denominator sets of $R$ and the left localization radical of $R$
are described.
|
It seems not to be well known that the metrics of general relativity (GR) can
be obtained without integrating the Einstein equations. To that end, we need
only define a unit for the GR interval $\Delta s$ and observe 10 geodesics (of
which at least one must be non-null). Even without using any unit, we can
obtain $\kappa g_{\mu\nu}(x^\rho)$, where $\kappa=$ const. Our notes attempt to
simplify the articles of E. Kretschmann (1917) and of H.A. Lorentz (1923) on
this last subject. The text of this article in English will soon be available
in LaTeX; please ask the author.
-----
It seems to be little known that the metrics of general relativity (GR) can be
obtained without integrating the Einstein equations. For that, we need only
define a unit for the GR time $\Delta s$, and observe 10 geodesics (of which at
least one must be non-null). Even without using any unit, we can obtain
$\kappa g_{\mu\nu}(x^\rho)$, where $\kappa=$ const. Our notes attempt to
simplify the articles of E. Kretschmann (1917) and of H.A. Lorentz (1923) on
this last matter.
|
Bead packs of up to 150,000 mono-sized spheres with packing densities ranging
from 0.58 to 0.64 have been studied by means of X-ray Computed Tomography.
These studies represent the largest and the most accurate description of the
structure of disordered packings at the grain-scale ever attempted. We
investigate the geometrical structure of such packings looking for signatures
of disorder. We discuss ways to characterize and classify these systems and the
implications that local geometry can have on densification dynamics.
|
Short-range quark-quark correlations are introduced into the quark-meson
coupling (QMC) model in a simple way. The effect of these correlations on the
structure of the nucleon in dense nuclear matter is studied. We find that the
short-range correlations may serve to reduce a serious problem associated with
the modified quark-meson coupling model (within which the bag constant is
allowed to decrease with increasing density), namely the tendency for the size
of the bound nucleon to increase rapidly as the density rises. We also find
that, with the addition of correlations, both QMC and modified QMC are
consistent with the phenomenological equation of state at high density.
|
Recent works on machine learning for combinatorial optimization have shown
that learning based approaches can outperform heuristic methods in terms of
speed and performance. In this paper, we consider the problem of finding an
optimal topological order on a directed acyclic graph with focus on the memory
minimization problem which arises in compilers. We propose an end-to-end
machine learning based approach for topological ordering using an
encoder-decoder framework. Our encoder is a novel attention based graph neural
network architecture called \emph{Topoformer} which uses different topological
transforms of a DAG for message passing. The node embeddings produced by the
encoder are converted into node priorities which are used by the decoder to
generate a probability distribution over topological orders. We train our model
on a dataset of synthetically generated graphs called layered graphs. We show
that our model outperforms, or is on par with, several topological ordering
baselines, while being significantly faster, on synthetic graphs with up to 2k
nodes. We also train and test our model on a set of real-world computation
graphs, showing performance improvements.
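The decoding step can be illustrated with a small sketch (assumed details, not the paper's code): given per-node priorities produced by an encoder, a topological order is obtained greedily by always expanding the currently available node with the highest priority.

```python
# Illustrative sketch: priority-driven topological ordering of a DAG.
import heapq
from typing import Dict, List

def decode_topological_order(adj: Dict[int, List[int]], priority: Dict[int, float]) -> List[int]:
    indeg = {u: 0 for u in adj}
    for u, vs in adj.items():
        for v in vs:
            indeg[v] += 1
    # max-heap via negated priorities over currently schedulable nodes
    ready = [(-priority[u], u) for u in adj if indeg[u] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, u = heapq.heappop(ready)
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(ready, (-priority[v], v))
    return order  # a valid topological order when the graph is a DAG

# Example: a small layered DAG with hypothetical priorities.
adj = {0: [2], 1: [2, 3], 2: [4], 3: [4], 4: []}
print(decode_topological_order(adj, {0: 0.9, 1: 0.2, 2: 0.5, 3: 0.8, 4: 0.1}))
```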
|
In this paper we give an upper bound for the number of integral points on an
elliptic curve E over F_q[T] in terms of its conductor N and q. We proceed by
applying the lower bounds for the canonical height that are analogous to those
given by Silverman and extend the technique developed by Helfgott-Venkatesh to
express the number of integral points on E in terms of its algebraic rank. We
also use sphere packing results to optimize the size of an implied constant. In
the end, we use the partial Birch--Swinnerton-Dyer conjecture, which is known
to be true over function fields, to bound the algebraic rank by the analytic
one and apply the explicit formula for the analytic rank of E.
|
Synthetic visual data can provide practically infinite diversity and rich
labels, while avoiding ethical issues with privacy and bias. However, for many
tasks, current models trained on synthetic data generalize poorly to real data.
The task of 3D human pose estimation is a particularly interesting example of
this sim2real problem, because learning-based approaches perform reasonably
well given real training data, yet labeled 3D poses are extremely difficult to
obtain in the wild, limiting scalability. In this paper, we show that standard
neural-network approaches, which perform poorly when trained on synthetic RGB
images, can perform well when the data is pre-processed to extract cues about
the person's motion, notably as optical flow and the motion of 2D keypoints.
Therefore, our results suggest that motion can be a simple way to bridge a
sim2real gap when video is available. We evaluate on the 3D Poses in the Wild
dataset, the most challenging modern benchmark for 3D pose estimation, where we
show full 3D mesh recovery that is on par with state-of-the-art methods trained
on real 3D sequences, despite training only on synthetic humans from the
SURREAL dataset.
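A minimal sketch of the kind of motion pre-processing described above (my illustration, not the paper's pipeline; the Farneback method and its parameter values are my assumptions):

```python
# Sketch: convert consecutive video frames into dense optical flow fields with
# OpenCV, to be used as a motion cue in place of raw RGB.
import cv2
import numpy as np

def flow_sequence(frames):
    """frames: list of HxWx3 uint8 BGR images; returns a list of HxWx2 flow fields."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    flows = []
    for prev, nxt in zip(grays[:-1], grays[1:]):
        # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
    return flows
```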
|
We obtained high-resolution infrared spectroscopy and short-cadence
photometry of the 600-800 Myr Praesepe star K2-100 during transits of its
1.67-day planet. This Neptune-size object, discovered by the NASA K2 mission,
is an interloper in the "desert" of planets with similar radii on short period
orbits. Our observations can be used to understand its origin and evolution by
constraining the orbital eccentricity by transit fitting, measuring the
spin-orbit obliquity by the Rossiter-McLaughlin effect, and detecting any
extended, escaping hydrogen-helium envelope with the 10830A line of neutral
helium in the 2s3S triplet state. Transit photometry with 1-min cadence was
obtained by the K2 satellite during Campaign 18 and transit spectra were
obtained with the IRD spectrograph on the Subaru telescope. While the elevated
activity of K2-100 prevented us from detecting the Rossiter-McLaughlin effect,
the new photometry combined with revised stellar parameters allowed us to
constrain the eccentricity to e < 0.15/0.28 with 90%/99% confidence. We modeled
atmospheric escape as an isothermal, spherically symmetric Parker wind, with
photochemistry driven by UV radiation that we estimate by combining the
observed spectrum of the active Sun with calibrations from observations of
K2-100 and similar young stars in the nearby Hyades cluster. Our non-detection
(<5.7mA) of a transit-associated He I line limits mass loss of a
solar-composition atmosphere through a T<10000K wind to <0.3Me/Gyr. Either
K2-100b is an exceptional desert-dwelling planet, or its mass loss is occurring
at a lower rate over a longer interval, consistent with a core
accretion-powered scenario for escape.
|
We compute the Hausdorff dimension of the set of simultaneously
$q^{-\lambda}$-well approximable points on the Veronese curve in $\mathbb{R}^n$
for $\lambda$ between $\frac{1}{n}$ and $\frac{2}{2n-1}$. For $n=3$, the same
result is given for a wider range of $\lambda$ between $\frac13$ and $\frac12$.
We also provide a nontrivial upper bound for this Hausdorff dimension in the
case $\lambda\le \frac{2}{n}$. In the course of the proof we establish that the
number of cubic polynomials of height at most $H$ and non-zero discriminant at
most $D$ is bounded from above by $c(\epsilon) H^{2/3 + \epsilon} D^{5/6}$.
|
This paper introduces a new end-to-end text-to-speech (E2E-TTS) toolkit named
ESPnet-TTS, which is an extension of the open-source speech processing toolkit
ESPnet. The toolkit supports state-of-the-art E2E-TTS models, including
Tacotron~2, Transformer TTS, and FastSpeech, and also provides recipes inspired
by the Kaldi automatic speech recognition (ASR) toolkit. The recipes follow a
design unified with the ESPnet ASR recipes, providing high reproducibility. The
toolkit also provides pre-trained models and samples for all of the recipes so
that users can use them as baselines. Furthermore, the
unified design enables the integration of ASR functions with TTS, e.g.,
ASR-based objective evaluation and semi-supervised learning with both ASR and
TTS models. This paper describes the design of the toolkit and experimental
evaluation in comparison with other toolkits. The experimental results show
that our models can achieve state-of-the-art performance comparable to the
other latest toolkits, resulting in a mean opinion score (MOS) of 4.25 on the
LJSpeech dataset. The toolkit is publicly available at
https://github.com/espnet/espnet.
|
This paper briefly outlines the motivations, the mathematical ideas in use,
pre-formalization and assumptions, the object-as-functor construction, `soft'
types and concept constructions, a case study for concepts based on variable
domains, the extraction of a computational background, and examples of
evaluations.
|
In this article we study Semi-abelian analogues of the Schanuel conjecture. As
shown by the first author, the Schanuel Conjecture is equivalent to the
Generalized Period Conjecture applied to 1-motives without abelian part.
Extending her methods, the second, the third and the fourth authors have
introduced the Abelian analogue of Schanuel Conjecture as the Generalized
Period Conjecture applied to 1-motives without toric part. As a first result of
this paper, we define the Semi-abelian analogue of Schanuel Conjecture as the
Generalized Period Conjecture applied to 1-motives. C. Cheng et al. proved that
Schanuel conjecture implies the algebraic independence of the values of the
iterated exponential and the values of the iterated logarithm, answering a
question of M. Waldschmidt. The second, the third and the fourth authors have
investigated a similar question in the setup of abelian varieties: the Weak
Abelian Schanuel conjecture implies the algebraic independence of the values of
the iterated abelian exponential and the values of an iterated generalized
abelian logarithm. The main result of this paper is that a Relative
Semi-abelian conjecture implies the algebraic independence of the values of the
iterated semi-abelian exponential and the values of an iterated generalized
semi-abelian logarithm.
|
We study the spin mixing dynamics of ultracold spin-1 atoms in a weak
non-uniform magnetic field with field gradient $G$, which can flip the spin
from +1 to -1, so that the magnetization $m=\rho_{+}-\rho_{-}$ is no longer a
constant. The dynamics of the $m_F=0$ Zeeman component $\rho_{0}$, as well as
of the system magnetization $m$, are illustrated for both the ferromagnetic and
the polar interaction cases in mean-field theory. We find that the dynamics of
the system magnetization can be tuned between Josephson-like oscillations,
similar to the case of a double well, and a self-trapping regime in which the
spin mixing dynamics sustains a spontaneous magnetization. Meanwhile, the
dynamics of $\rho_0$ may be strongly suppressed for an initially imbalanced
number distribution in the case of polar interaction. A "beat-frequency"
oscillation of the magnetization emerges in the case of balanced initial
distribution for polar interaction, which vanishes for ferromagnetic
interaction.
|
In the continual learning setting, tasks are encountered sequentially. The
goal is to learn whilst i) avoiding catastrophic forgetting, ii) efficiently
using model capacity, and iii) employing forward and backward transfer
learning. In this paper, we explore how the Variational Continual Learning
(VCL) framework achieves these desiderata on two benchmarks in continual
learning: split MNIST and permuted MNIST. We first report significantly
improved results on what was already a competitive approach. The improvements
are achieved by establishing a new best practice approach to mean-field
variational Bayesian neural networks. We then look at the solutions in detail.
This allows us to obtain an understanding of why VCL performs as it does, and
we compare the solution to what an `ideal' continual learning solution might
be.
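For concreteness, here is a minimal sketch (my own, with assumed initializations) of the kind of mean-field Gaussian variational layer that underlies such Bayesian neural networks: weights carry learnable means and log-variances, are sampled via the reparameterization trick, and are regularized by a KL term against a prior, which in the continual setting is the previous task's posterior.

```python
# Sketch of a mean-field variational linear layer (assumed hyper-parameters).
import torch
import torch.nn as nn

class MeanFieldLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.w_logvar = nn.Parameter(torch.full((d_out, d_in), -6.0))
        self.b_mu = nn.Parameter(torch.zeros(d_out))
        self.b_logvar = nn.Parameter(torch.full((d_out,), -6.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterization trick: sample weights from the factorized Gaussian posterior.
        w = self.w_mu + torch.randn_like(self.w_mu) * torch.exp(0.5 * self.w_logvar)
        b = self.b_mu + torch.randn_like(self.b_mu) * torch.exp(0.5 * self.b_logvar)
        return x @ w.t() + b

    def kl_to(self, prior_mu: torch.Tensor, prior_logvar: torch.Tensor) -> torch.Tensor:
        # KL(q || p) between diagonal Gaussians, summed over the weight matrix only for brevity.
        var_q, var_p = torch.exp(self.w_logvar), torch.exp(prior_logvar)
        return 0.5 * torch.sum(
            prior_logvar - self.w_logvar
            + (var_q + (self.w_mu - prior_mu) ** 2) / var_p
            - 1.0
        )
```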
|
The regular observation of the solar magnetic field is available only for
about last five cycles. Thus, to understand the origin of the variation of the
solar magnetic field, it is essential to reconstruct the magnetic field for the
past cycles, utilizing other datasets. Long-term uniform observations for the
past 100 years as recorded at the Kodaikanal Solar Observatory (KoSO) provide
such an opportunity. We develop a method for the reconstruction of the solar
magnetic field using the synoptic observations of the Sun's emission in the Ca
II K and H$\alpha$ lines from KoSO for the first time. The reconstruction
method is based on the facts that the Ca II K intensity correlates well with
the unsigned magnetic flux, while the sign of the flux is derived from the
corresponding H$\alpha$ map which provides the information of the dominant
polarities. Based on this reconstructed magnetic map, we study the evolution of
the magnetic field in Cycles 15--19. We also study bipolar magnetic regions
(BMRs) and their remnant flux surges, and the causal relation between them.
Time-latitude
analysis of the reconstructed magnetic flux provides an overall view of
magnetic field evolution: emergent magnetic flux, its further transformations
with the formation of unipolar magnetic regions (UMRs) and remnant flux surges.
We identify the reversals of the polar field and critical surges of following
and leading polarities. We find that the poleward transport of opposite
polarities led to multiple changes of the dominant magnetic polarity at the
poles. Furthermore, the remnant flux surges that occur between adjacent 11-year
cycles reveal physical connections between them.
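A schematic sketch of the sign-assignment idea (the power-law calibration constants a and b below are hypothetical placeholders, not the paper's calibration):

```python
# Sketch: unsigned flux from the Ca II K map, polarity sign from the H-alpha map.
import numpy as np

def reconstruct_signed_flux(ca_k: np.ndarray, halpha_polarity: np.ndarray,
                            a: float = 1.0, b: float = 1.5) -> np.ndarray:
    """ca_k: Ca II K intensity-contrast synoptic map; halpha_polarity: co-aligned map
    of +1/-1 dominant polarities inferred from the H-alpha observations."""
    unsigned_flux = a * np.clip(ca_k, 0.0, None) ** b   # assumed Ca II K <-> |B| relation
    return unsigned_flux * np.sign(halpha_polarity)      # attach the H-alpha polarity sign
```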
|
India has a maternal mortality ratio of 113 and child mortality ratio of 2830
per 100,000 live births. Lack of access to preventive care information is a
major contributing factor for these deaths, especially in low resource
households. We partner with ARMMAN, a non-profit based in India employing a
call-based information program to disseminate health-related information to
pregnant women and women with recent child deliveries. We analyze call records
of over 300,000 women registered in the program created by ARMMAN and try to
identify women who might not engage with these call programs that are proven to
result in positive health outcomes. We build machine-learning-based models to
predict long-term engagement patterns from call logs and beneficiaries'
demographic information, and discuss the applicability of this method in the
real world through a pilot validation. Through a pilot service quality
improvement study, we show that using our model's predictions to make
interventions boosts engagement metrics by 61.37%. We then formulate the
intervention planning problem as restless multi-armed bandits (RMABs), and
present preliminary results using this approach.
|
In the recent work of Nakariakov et al. (2004), it has been shown that the
time dependences of density and velocity in a flaring loop contain pronounced
quasi-harmonic oscillations associated with the 2nd harmonic of a standing slow
magnetoacoustic wave. That model used a symmetric heating function (heat
deposition was strictly at the apex). This left outstanding questions: A) is
the generation of the 2nd harmonic a consequence of the fact that the heating
function was symmetric? B) Would the generation of these oscillations occur if
we break symmetry? C) What is the spectrum of these oscillations? Is it
consistent with a 2nd spatial harmonic? The present work (and partly Tsiklauri
et al. (2004b)) attempts to answer these important outstanding questions.
Namely, we investigate the physical nature of these oscillations in greater
detail: we study their spectrum (using the periodogram technique) and how the
heat positioning affects the mode excitation. We found that the excitation of
such
oscillations is practically independent of location of the heat deposition in
the loop. Because of the change of the background temperature and density, the
phase shift between the density and velocity perturbations is not exactly a
quarter of the period, it varies along the loop and is time dependent,
especially in the case of one-footpoint (asymmetric) heating. We were also able
to successfully model SUMER oscillations observed in hot coronal loops.
|
We give a characterization of the geometric distribution by the independence of
linear forms with random coefficients. The result is a discrete analog of the
corresponding theorem on the exponential distribution. The independence of
linear statistics also characterizes the Poisson law.
Keywords: geometric distribution; exponential distribution; Poisson
distribution; linear forms; random coefficients
|
We propose an AE-based transceiver for a WDM system impaired by hardware
imperfections. We design our AE following the architecture of conventional
communication systems. This makes it possible to initialize the AE-based
transceiver to have performance similar to its conventional counterpart prior
to training, and it improves the training convergence rate. We first train the
AE in a
single-channel system, and show that it achieves performance improvements by
putting energy outside the desired bandwidth, and therefore cannot be used for
a WDM system. We then train the AE in a WDM setup. Simulation results show that
the proposed AE significantly outperforms the conventional approach. More
specifically, it increases the spectral efficiency of the considered system by
reducing the guard band by 37\% and 50\% for a root-raised-cosine filter-based
matched filter with 10\% and 1\% roll-off, respectively. An ablation study
indicates that the performance gain can be ascribed to the optimization of the
symbol mapper, the pulse-shaping filter, and the symbol demapper. Finally, we
use reinforcement learning to learn the pulse-shaping filter under the
assumption that the channel model is unknown. Simulation results show that the
reinforcement-learning-based algorithm achieves similar performance to the
standard supervised end-to-end learning approach assuming perfect channel
knowledge.
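A toy sketch of the general idea (my own; the paper's architecture, pulse shaping, and WDM channel model differ): a symbol mapper and demapper trained end-to-end over an AWGN surrogate channel.

```python
# Sketch: autoencoder-style mapper/demapper over a toy AWGN channel (assumed sizes).
import torch
import torch.nn as nn

M = 16  # constellation size (assumed)

class Mapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, onehot):
        iq = self.net(onehot)                                  # 2D (I/Q) symbol
        return iq / iq.pow(2).sum(-1, keepdim=True).sqrt().mean()  # rough magnitude normalization

class Demapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, M))
    def forward(self, iq):
        return self.net(iq)                                    # logits over the M symbols

mapper, demapper = Mapper(), Demapper()
opt = torch.optim.Adam([*mapper.parameters(), *demapper.parameters()], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    labels = torch.randint(0, M, (256,))
    x = torch.nn.functional.one_hot(labels, M).float()
    y = mapper(x) + 0.05 * torch.randn(256, 2)                 # AWGN surrogate channel
    loss = loss_fn(demapper(y), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```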
|
In this contribution we reconsider the calculation at next-to-leading order
of forward inclusive single hadron production in $pA$ collisions within the
hybrid approach. We conclude that the proper framework to compute this cross
section beyond leading order is not collinear factorization as assumed so far,
but the TMD factorized framework.
|
The Ponzano-Regge model of three-dimensional quantum gravity is well-defined
when the observables satisfy a certain condition involving the twisted
cohomology. In this case, the partition function is defined in terms of the
Reidemeister torsion. Some consequences for the special cases of planar graphs
and knots are given.
|
We propose Styleformer, a style-based generator for GAN architectures that is
convolution-free and transformer-based. In our paper, we explain how a
transformer can generate high-quality images, overcoming the difficulty that
convolution operations have in capturing global features in an image.
Furthermore, we change the demodulation of StyleGAN2 and modify the existing
transformer structure (e.g., residual connection, layer normalization) to
create a strong style-based generator with a convolution-free structure. We
also make Styleformer lighter by applying Linformer, enabling Styleformer to
generate higher-resolution images with improved speed and memory usage. We
experiment with low-resolution image datasets such as CIFAR-10, as well as
high-resolution image datasets such as LSUN-church. Styleformer records an FID
of 2.82 and an IS of 9.94 on CIFAR-10, a benchmark dataset, which is comparable
to the current state of the art; it outperforms all GAN-based generative
models, including StyleGAN2-ADA, with fewer parameters in the unconditional
setting. We also achieve new state-of-the-art results on STL-10 (FID 15.17, IS
11.01) and CelebA (FID 3.66). We release our code at
https://github.com/Jeeseung-Park/Styleformer.
|
An electron paramagnetic resonance (EPR) study of air-physisorbed defective
carbon nano-onions provides evidence for the microwave-assisted formation of
weakly bound paramagnetic complexes comprising negatively charged O2- ions and
edge carbon atoms carrying pi-electronic spins. These complexes, located on the
graphene edges, are stable at low temperatures but irreversibly dissociate at
temperatures above 50-60 K. These EPR findings are supported by density
functional theory (DFT) calculations demonstrating transfer of an electron from
the zigzag edge of the graphene-like material to an oxygen molecule physisorbed
on the graphene sheet edge. This charge transfer changes the spin state of the
adsorbed oxygen molecule from S = 1 to S = 1/2. DFT calculations show
significant changes in the adsorption energy of the oxygen molecule, and the
robustness of the charge transfer to variations of the graphene-like substrate
morphology (flat and corrugated mono- and bi-layered graphene) as well as edge
passivation. The presence of H- and COOH-terminated edge carbon sites with such
corrugated substrate morphology allows the formation of ZE-O2- paramagnetic
complexes characterized by small (<50 meV) binding energies and also explains
their irreversible dissociation as revealed by EPR.
|
In this work, we explore different approaches to combine modalities for the
problem of automated age-suitability rating of movie trailers. First, we
introduce a new dataset containing videos of movie trailers in English
downloaded from IMDB and YouTube, along with their corresponding
age-suitability rating labels. Secondly, we propose a multi-modal deep learning
pipeline addressing the movie trailer age suitability rating problem. This is
the first attempt to combine video, audio, and speech information for this
problem, and our experimental results show that multi-modal approaches
significantly outperform the best unimodal and bimodal models in this task.
|
Electrical power system calculations rely heavily on the $Y_{bus}$ matrix,
which is the Laplacian matrix of the network under study, weighted by the
complex-valued admittance of each branch. It is often useful to partition the
$Y_{bus}$ into four submatrices, to separately quantify the connectivity
between and among the load and generation nodes in the network. Simple
manipulation of these submatrices gives the $F_{LG}$ matrix, which offers
useful insights on how voltage deviations propagate through a power system and
on how energy losses may be minimized. Various authors have observed that in
practice the elements of $F_{LG}$ are real-valued and its rows sum close to
one: the present paper explains and proves these properties.
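A sketch under an assumed, standard convention (the abstract itself does not spell out the algebra): partition the $Y_{bus}$ by load (L) and generator (G) buses and form $F_{LG} = -Y_{LL}^{-1} Y_{LG}$, which maps generator voltages to load voltages when the load-bus current injections are zero.

```python
# Sketch: compute F_LG from a partitioned Ybus (assumed definition).
import numpy as np

def compute_flg(ybus: np.ndarray, load_idx, gen_idx) -> np.ndarray:
    Y_LL = ybus[np.ix_(load_idx, load_idx)]
    Y_LG = ybus[np.ix_(load_idx, gen_idx)]
    return -np.linalg.solve(Y_LL, Y_LG)   # F_LG; rows sum close to one in practice

# Tiny 3-bus example with hypothetical admittances (bus 0: generator, buses 1-2: loads).
y01, y12, y02 = 2 - 4j, 1 - 2j, 1.5 - 3j
Ybus = np.array([[y01 + y02, -y01, -y02],
                 [-y01, y01 + y12, -y12],
                 [-y02, -y12, y02 + y12]])
F_LG = compute_flg(Ybus, load_idx=[1, 2], gen_idx=[0])
print(F_LG, F_LG.sum(axis=1))
```

In this shunt-free toy example the rows of F_LG sum exactly to one, consistent with the property discussed above.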
|
Transverse thermoelectric power generation has emerged as a topic of immense
interest in recent years owing to the orthogonal geometry which enables better
scalability and fabrication of devices. Here, we investigate the thickness
dependence of longitudinal and transverse responses in film-substrate systems,
i.e., the Seebeck coefficient, Hall coefficient, Nernst coefficient and
anomalous Nernst coefficient, in a unified and general manner based on a
circuit model that describes the system as a parallel setup. By solving the
parallel circuit model, we show that the transverse responses exhibit a
significant peak, indicating the importance of a cooperative effect between the
film and the substrate, arising from circulating currents that occur in these
multilayer systems in the presence of a temperature gradient. Finally, on the
basis of realistic material parameters, we predict that the Nernst effect in
bismuth thin films on doped silicon substrates is boosted to unprecedented
values if the thickness ratio is tuned accordingly, motivating experimental
validation.
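A textbook-style parallel-circuit estimate (my simplification; the paper's model additionally treats the transverse responses and the circulating currents) of the effective longitudinal Seebeck coefficient of a film on a conducting substrate:

```python
# Sketch: conductance-weighted average of the two layers' Seebeck coefficients.
def effective_seebeck(S_film, sigma_film, t_film, S_sub, sigma_sub, t_sub):
    g_film, g_sub = sigma_film * t_film, sigma_sub * t_sub   # sheet conductances
    return (S_film * g_film + S_sub * g_sub) / (g_film + g_sub)

# Hypothetical numbers: a 100 nm film on a 0.5 mm doped-Si substrate.
print(effective_seebeck(S_film=-70e-6, sigma_film=8e5, t_film=100e-9,
                        S_sub=-1000e-6, sigma_sub=1e2, t_sub=0.5e-3))
```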
|
We survey recent work and announce new results concerning two singular
integral operators whose kernels are holomorphic functions of the output
variable, specifically the Cauchy-Leray integral and the Cauchy-Szeg\H o
projection associated to various classes of bounded domains in $\mathbb C^n$
with $n\geq 2$.
|
In 1999 Allan Swett checked (in 150 hours) the Erd\H{o}s-Straus conjecture up
to $N=10^{14}$ with a sieve based on a single modular equation. After proving
the existence of a "complete" set of seven modular equations (including three
new ones), this paper offers an optimized sieve based on these equations. A
program written in C++ (and given elsewhere) then allows a check whose running
time, on a typical computer, ranges from a few minutes for $N=10^{14}$ to about
16 hours for $N=10^{17}$.
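For orientation, here is a deliberately naive check of the conjecture for small n by exact rational search (my illustration; it is not the paper's optimized modular-equation sieve, which is what makes ranges up to $10^{17}$ feasible):

```python
# Sketch: verify 4/n = 1/x + 1/y + 1/z with positive integers x <= y <= z.
from fractions import Fraction
from math import ceil, floor

def has_decomposition(n: int) -> bool:
    four_over_n = Fraction(4, n)
    for x in range(n // 4 + 1, 3 * n // 4 + 1):       # 1/x is the largest term
        r = four_over_n - Fraction(1, x)
        if r <= 0:
            continue
        y_lo = max(x, ceil(1 / r))
        y_hi = floor(2 / r)
        for y in range(y_lo, y_hi + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:            # s = 1/z for an integer z
                return True
    return False

assert all(has_decomposition(n) for n in range(2, 100))
print("4/n = 1/x + 1/y + 1/z verified for 2 <= n < 100")
```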
|
In this short note, we propose a concrete analogue of the space
$\mathcal{L}(H)$ for local operator spaces, the multinormed $C^*$-algebra
$\prod_{\alpha} \mathcal{L}(H_{\alpha})$.
|
We calculate ground-state properties of a many-quark system in the
string-flip model using variational Monte Carlo methods. The many-body
potential energy of the system is determined by finding the optimal grouping of
quarks into hadrons. This (optimal) assignment problem is solved by using the
stochastic optimization technique of simulated annealing. Results are presented
for the energy and length-scale for confinement as a function of density. These
results show how quark clustering decreases with density and characterize the
nuclear- to quark-matter transition. We compare our results to previously
published work with a similar model which instead uses a pairing approach to
the optimization problem.
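A minimal simulated-annealing sketch for an assignment problem (my illustration; the paper's cost function is the string-flip many-body potential, replaced here by a generic per-pair cost):

```python
# Sketch: anneal a permutation by swapping assignments, accepting uphill moves
# with a Boltzmann probability that shrinks as the temperature is lowered.
import math
import random

def anneal_assignment(cost, n, steps=20000, t0=1.0, t1=1e-3):
    """cost(i, j): cost of assigning item i to slot j; returns a permutation and its total cost."""
    perm = list(range(n))
    random.shuffle(perm)
    total = sum(cost(i, perm[i]) for i in range(n))
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)          # geometric cooling schedule
        i, j = random.sample(range(n), 2)
        delta = (cost(i, perm[j]) + cost(j, perm[i])
                 - cost(i, perm[i]) - cost(j, perm[j]))
        if delta < 0 or random.random() < math.exp(-delta / t):
            perm[i], perm[j] = perm[j], perm[i]
            total += delta
    return perm, total
```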
|
We study the conditions for successful Affleck-Dine baryogenesis and the
origin of gravitino dark matter in GMSB models. AD baryogenesis in GMSB models
is ruled out by neutron star stability unless Q-balls are unstable and decay
before nucleosynthesis. Unstable Q-balls can form if the messenger mass scale
is larger than the flat-direction field Phi when the condensate fragments. We
provide an example based on AD baryogenesis along a d = 6 flat direction for
the case where m_{3/2} \approx 2 GeV, as predicted by gravitino dark matter
from Q-ball decay. Using a phenomenological GMSB potential which models the Phi
dependence of the SUSY breaking terms, we numerically solve for the evolution
of Phi and show that the messenger mass can be sufficiently close to the
flat-direction field when the condensate fragments. We compute the
corresponding reheating temperature and the baryonic charge of the condensate
fragments and show that the charge is large enough to produce late-decaying
Q-balls which can be the origin of gravitino dark matter.
|
I consider general reflection coefficients for arbitrary one-dimensional
whole line differential or difference operators of order $2$. These reflection
coefficients are semicontinuous functions of the operator: their absolute value
can only go down when limits are taken. This implies a corresponding
semicontinuity result for the absolutely continuous spectrum, which applies to
a very large class of maps. In particular, we can consider shift maps (thus
recovering and generalizing a result of Last-Simon) and flows of the Toda and
KdV hierarchies (this is new). Finally, I evaluate an attempt at finding a
similar general setup that gives the much stronger conclusion of reflectionless
limit operators in more specialized situations.
|
Planets can affect debris disk structure by creating gaps, sharp edges,
warps, and other potentially observable signatures. However, there is currently
no simple way for observers to deduce a disk-shepherding planet's properties
from the observed features of the disk. Here we present a single equation that
relates a shepherding planet's maximum mass to the debris ring's observed width
in scattered light, along with a procedure to estimate the planet's
eccentricity and minimum semimajor axis. We accomplish this by performing
dynamical N-body simulations of model systems containing a star, a single
planet, and a disk of parent bodies and dust grains to determine the resulting
debris disk properties over a wide range of input parameters. We find that the
relationship between planet mass and debris disk width is linear, with
increasing planet mass producing broader debris rings. We apply our methods to
five imaged debris rings to constrain the putative planet masses and orbits in
each system. Observers can use our empirically-derived equation as a guide for
future direct imaging searches for planets in debris disk systems. In the
fortuitous case of an imaged planet orbiting interior to an imaged disk, the
planet's maximum mass can be estimated independent of atmospheric models.
|
The diaphragmatic electromyogram (EMGdi) contains crucial information about
human respiration and can therefore be used to monitor respiratory condition.
Although it is practical to record EMGdi noninvasively and conveniently by
placing surface electrodes over the chest skin, extraction of such weak surface
EMGdi (sEMGdi) from a highly noisy environment is a challenging task, limiting
its clinical use compared with esophageal EMGdi. In this paper, a novel method
is presented for extracting weak sEMGdi signals from a high-noise environment
based on fast independent component analysis (FastICA), constrained FastICA and
a peel-off strategy. It is a modified version of the progressive FastICA
peel-off (PFP) framework, where the constrained FastICA helps to extract and
refine
respiration-related sEMGdi signals, while the peel-off strategy ensures the
complete extraction of weaker sEMGdi components. The method was validated using
both synthetic and clinical signals. It was demonstrated that our method was
able to extract clean sEMGdi signals efficiently with little distortion. It
outperformed state-of-the-art comparison methods in terms of SIR and CORR at
all noise levels when tested on synthetic data, and achieved an accuracy of
95.06% and an F2-score of 96.73% for breath identification on clinical data.
The study presents a valuable solution for the noninvasive extraction of sEMGdi
signals, providing a convenient route to ventilator synchrony with significant
potential for aiding respiratory rehabilitation and health.
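A rough sketch of the ICA step only (my illustration using scikit-learn's plain FastICA; the paper's method additionally uses constrained FastICA and a peel-off strategy):

```python
# Sketch: separate a multi-channel surface recording into independent components
# and keep the one most correlated with a respiration reference signal.
import numpy as np
from sklearn.decomposition import FastICA

def extract_semgdi(mixed: np.ndarray, respiration_ref: np.ndarray, n_components: int = 8):
    """mixed: (n_samples, n_channels) surface recording; respiration_ref: (n_samples,)."""
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    sources = ica.fit_transform(mixed)                       # (n_samples, n_components)
    corrs = [abs(np.corrcoef(sources[:, k], respiration_ref)[0, 1])
             for k in range(sources.shape[1])]
    return sources[:, int(np.argmax(corrs))]                 # best respiration-related component
```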
|
We consider a wireless network with a set of transmitter-receiver pairs, or
links, that share a common channel, and address the problem of emptying finite
traffic volume from the transmitters in minimum time. This so-called
minimum-time scheduling problem has been proved to be NP-hard in general. In
this paper, we study a class of minimum-time scheduling problems in which the
link rates have a particular structure consistent with the assumed environment
and topology. We show that global optimality can be reached in polynomial time
and derive optimality conditions. Then we consider a more general case in which
we apply the same approach and thus obtain approximation as well as lower and
upper bounds to the optimal solution. Simulation results confirm and validate
our approach.
|
Despite having the potential to provide significant insights into tactical
preparations for future matches, very few studies have considered the spatial
trends of team attacking possessions in rugby league. Those which have
considered these trends have used grid based aggregation methods, which provide
a discrete understanding of rugby league match play but may fail to provide a
complete understanding of the spatial trends of attacking possessions due to
the dynamic nature of the sport. In this study, we use Kernel Density
Estimation (KDE) to provide a continuous understanding of the spatial trends of
attacking possessions in rugby league on a team by team basis. We use the
Wasserstein distance to understand the differences between teams (i.e. using
all of each team's data) and within teams (i.e. using a single team's data
against different opponents). Our results show that KDEs are able to provide
interesting tactical insights at the between team level. Furthermore, at the
within team level, the results are able to show patterns of spatial trends for
attacking teams, which are present against some opponents but not others. The
results could help sports practitioners to understand opposition teams'
previous performances and prepare tactical strategies for matches against them.
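A simplified sketch of the pipeline (my own; the distance used here is a per-axis 1-D Wasserstein proxy rather than a full 2-D Wasserstein distance between the estimated densities):

```python
# Sketch: KDE of pitch locations for one team, and a crude distance between two teams.
import numpy as np
from scipy.stats import gaussian_kde, wasserstein_distance

def possession_kde(xy: np.ndarray) -> gaussian_kde:
    """xy: (2, n_events) array of pitch coordinates for one team's possessions."""
    return gaussian_kde(xy)

def team_distance(xy_a: np.ndarray, xy_b: np.ndarray) -> float:
    # Per-axis 1-D Wasserstein distances, averaged; a stand-in for a 2-D metric.
    return float(np.mean([wasserstein_distance(xy_a[d], xy_b[d]) for d in range(2)]))

rng = np.random.default_rng(0)
team_a = rng.normal([30, 40], [15, 20], size=(500, 2)).T   # hypothetical event locations
team_b = rng.normal([50, 35], [20, 25], size=(500, 2)).T
kde_a = possession_kde(team_a)                              # evaluate with kde_a(points)
print(team_distance(team_a, team_b))
```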
|
Non-Markovian effects can speed up the dynamics of quantum systems while the
limits of the evolution time can be derived by quantifiers of quantum
statistical speed. We introduce a witness for characterizing the
non-Markovianity of quantum evolutions through the Hilbert-Schmidt speed (HSS),
which is a special type of quantum statistical speed. This witness has the
advantage of not requiring diagonalization of evolved density matrix. Its
sensitivity is investigated by considering several paradigmatic instances of
open quantum systems, such as one qubit subject to phase-covariant noise and
Pauli channel, two independent qubits locally interacting with leaky cavities,
V-type and $\Lambda $-type three-level atom (qutrit) in a dissipative cavity.
We show that the proposed HSS-based non-Markovianity witness detects memory
effects in agreement with the well-established trace distance-based witness,
being sensitive to system-environment information backflows.
|
Terahertz frequency wakefields can be excited by ultra-short relativistic
electron bunches travelling through dielectric lined waveguide (DLW)
structures. These wakefields can either accelerate a witness bunch with high
gradient, or modulate the energy of the driving bunch. In this paper, we study
a passive dechirper based on the DLW to compensate the correlated energy spread
of the bunches accelerated by the laser plasma wakefield accelerator (LWFA). A
rectangular waveguide structure was employed taking advantage of its
continuously tunable gap during operation. The assumed 200 MeV driving bunch
had a Gaussian distribution with a bunch length of 3.0 {\mu}m, a relative
correlated energy spread of 1%, and a total charge of 10 pC. Both the CST
Wakefield Solver and the PIC Solver were used to simulate and optimize such a
dechirper. The effect of the time-dependent self-wake on the driving bunch was
analyzed in terms of the energy modulation and the transverse phase space.
|
Incremental text-to-speech, also known as streaming TTS, has been
increasingly applied to online speech applications that require ultra-low
response latency to provide an optimal user experience. However, most of the
existing speech synthesis pipelines deployed on GPUs are still non-incremental,
which exposes limitations in high-concurrency scenarios, especially when the
pipeline is built with end-to-end neural network models. To address this issue,
we present a highly efficient approach to perform real-time incremental TTS on
GPUs with Instant Request Pooling and Module-wise Dynamic Batching.
Experimental results demonstrate that the proposed method is capable of
producing high-quality speech with a first-chunk latency lower than 80ms under
100 QPS on a single NVIDIA A10 GPU and significantly outperforms the
non-incremental twin in both concurrency and latency. Our work reveals the
effectiveness of high-performance incremental TTS on GPUs.
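A toy sketch of the request pooling and dynamic batching idea (my own, not the paper's system): incoming requests are pooled instantly into a queue, and a worker repeatedly drains whatever is pending, up to a cap, into one batch per module invocation.

```python
# Sketch: asyncio-based request pooling with per-batch module execution.
import asyncio

MAX_BATCH = 16

async def batching_worker(queue: asyncio.Queue, run_module):
    """Drain pending requests into one batch and run the (synchronous) module on it."""
    while True:
        item = await queue.get()                    # wait for at least one request
        batch = [item]
        while not queue.empty() and len(batch) < MAX_BATCH:
            batch.append(queue.get_nowait())        # pool everything already waiting
        outputs = run_module([text for text, _ in batch])   # e.g. one module of the TTS pipeline
        for (_, fut), out in zip(batch, outputs):
            fut.set_result(out)

async def submit(queue: asyncio.Queue, text: str):
    fut = asyncio.get_running_loop().create_future()
    await queue.put((text, fut))
    return await fut                                # resolves once the batch is processed

async def main():
    queue: asyncio.Queue = asyncio.Queue()
    worker = asyncio.create_task(batching_worker(queue, lambda xs: [x.upper() for x in xs]))
    results = await asyncio.gather(*(submit(queue, t) for t in ["hello", "world", "tts"]))
    print(results)
    worker.cancel()

asyncio.run(main())
```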
|