Within the type-I seesaw mechanism it is possible to have large (order one)
light-heavy neutrino mixing even in the case of a low right-handed neutrino
mass scale (of the order of GeV). This implies large lepton flavor violation.
As an example we consider the process $\mu \to e \gamma$, whose branching
ratio can reach $10^{-8}$ within the type-I seesaw (in contrast with the tiny
expected value of $10^{-54}$). Such an enhancement of lepton flavor violation
can be used to constrain the parameter space of long-lived particle
experiments.
|
The aim of this paper is to determine the non-abelian tensor square and Schur
multiplier of groups of square free order and of groups of orders $p^2q$,
$pq^2$ and $p^2qr$, where $p$, $q$ and $r$ are primes and $p<q<r$.
|
The type II seesaw mechanism is an attractive way to generate the observed
light neutrino masses. It postulates an SU(2)$_\mathrm{L}$-triplet scalar field,
which develops an induced vacuum expectation value after electroweak symmetry
breaking, giving masses to the neutrinos via its couplings to the lepton
SU(2)$_\mathrm{L}$-doublets. When the components of the triplet field have
masses around the electroweak scale, the model features a rich phenomenology.
We discuss the current allowed parameter space of the minimal low scale type II
seesaw model, taking into account all relevant constraints, including charged
lepton flavour violation as well as collider searches. We point out that the
symmetry protected low scale type II seesaw scenario, where an approximate
"lepton number"-like symmetry suppresses the Yukawa couplings of the triplet to
the lepton doublets, is still largely untested by the current LHC results. In
part of this parameter space the triplet components can be long-lived,
potentially leading to a characteristic displaced vertex signature where the
doubly-charged component decays into same-sign charged leptons. By performing a
detailed analysis at the reconstructed level we find that already at the
current run of the LHC a discovery would be possible for the considered
parameter point, via dedicated searches for displaced vertex signatures. The
discovery prospects are further improved at the HL-LHC and the FCC-hh/SppC.
|
We use exact diagonalization and cluster perturbation theory to address the
role of strong interactions and quantum fluctuations for spinless fermions on
the honeycomb lattice. We find quantum fluctuations to be very pronounced both
at weak and strong interactions. A weak second-neighbor Coulomb repulsion $V_2$
induces a tendency toward an interaction-generated quantum anomalous Hall
phase, as borne out in mean-field theory. However, quantum fluctuations prevent
the formation of a stable quantum Hall phase before the onset of the
charge-modulated phase predicted at large $V_2$ by mean-field theory.
Consequently, the system undergoes a direct transition from the semimetal to
the charge-modulated phase. For the latter, charge fluctuations also play a key
role. While the phase, which is related to pinball liquids, is stabilized by
the repulsion $V_2$, the energy of its low-lying charge excitations scales with
the electronic hopping $t$, as in a band insulator.
|
Research-based assessments represent a valuable tool for both instructors and
researchers interested in improving undergraduate physics education. However,
the historical model for disseminating and propagating conceptual and
attitudinal assessments developed by the physics education research (PER)
community has not resulted in widespread adoption of these assessments within
the broader community of physics instructors. Within this historical model,
assessment developers create high quality, validated assessments, make them
available for a wide range of instructors to use, and provide minimal (if any)
support to assist with administration or analysis of the results. Here, we
present and discuss an alternative model for assessment dissemination, which is
characterized by centralized data collection and analysis. This model provides
a greater degree of support for both researchers and instructors in order to
more explicitly support adoption of research-based assessments. Specifically,
we describe our experiences developing a centralized, automated system for an
attitudinal assessment we previously created to examine students'
epistemologies and expectations about experimental physics. This system
provides a proof-of-concept that we use to discuss the advantages associated
with centralized administration and data collection for research-based
assessments in PER. We also discuss the challenges that we encountered while
developing, maintaining, and automating this system. Ultimately, we argue that
centralized administration and data collection for standardized assessments is
a viable and potentially advantageous alternative to the default model
characterized by decentralized administration and analysis. Moreover, with the
help of online administration and automation, this model can support the
long-term sustainability of centralized assessment systems.
|
We give an alternative proof of a result by N. Gantert, Y. Hu and Z. Shi on
the asymptotic behavior of the survival probability of the branching random
walk killed below a linear boundary, in the special case of deterministic
binary branching and bounded random walk steps. Connections with the
Brunet-Derrida theory of stochastic fronts are discussed.
|
Theoretical and numerical studies are performed for nonlinear structures
(explosive structures, solitons, and shocks) in quantum electron-positron-ion
magnetoplasmas. For this purpose, the reductive perturbation method is applied
to the quantum hydrodynamical equations and the Poisson equation, obtaining an
extended quantum Zakharov-Kuznetsov equation. The latter is solved using the
generalized expansion method to obtain a set of analytical solutions, which
reflect the possibility of the propagation of various nonlinear structures.
The relevance of the present investigation to white dwarfs is highlighted.
|
Diffusion generative models have recently become a robust technique for
producing and modifying coherent, high-quality video. This survey offers a
systematic overview of critical elements of diffusion models for video
generation, covering applications, architectural choices, and the modeling of
temporal dynamics. Recent advancements in the field are summarized and grouped
into development trends. The survey concludes with an overview of remaining
challenges and an outlook on the future of the field. Website:
https://github.com/ndrwmlnk/Awesome-Video-Diffusion-Models
|
A manifestly Lorentz and diffeomorphism invariant form for the abelian gauge
field action with local duality symmetry of Schwarz and Sen is given. Some of
the underlying symmetries of the covariant action are further considered. The
Noether conserved charge under continuous local duality rotations is found. The
covariant couplings with gravity and the axidilaton field are discussed.
|
According to the Wiener-Hopf factorization, the characteristic function
$\varphi$ of any probability distribution $\mu$ on $\mathbb{R}$ can be
decomposed in a unique way as
\[1-s\varphi(t)=[1-\chi_-(s,it)][1-\chi_+(s,it)]\,,\;\;\;|s|\le1,\,t\in\mathbb{R}\,,\]
where $\chi_-(e^{iu},it)$ and $\chi_+(e^{iu},it)$ are the characteristic
functions of possibly defective distributions in
$\mathbb{Z}_+\times(-\infty,0)$ and $\mathbb{Z}_+\times[0,\infty)$,
respectively.
We prove that $\mu$ can be characterized by the sole data of the upward
factor $\chi_+(s,it)$, $s\in[0,1)$, $t\in\mathbb{R}$ in many cases including
the cases where:
1) $\mu$ has some exponential moments;
2) the function $t\mapsto\mu(t,\infty)$ is completely monotone on
$(0,\infty)$;
3) the density of $\mu$ on $[0,\infty)$ admits an analytic continuation on
$\mathbb{R}$.
We conjecture that any probability distribution is actually characterized by
its upward factor. This conjecture is equivalent to the following: {\it Any
probability measure $\mu$ on $\mathbb{R}$ whose support is not included in
$(-\infty,0)$ is determined by its convolution powers $\mu^{*n}$, $n\ge1$
restricted to $[0,\infty)$}. We show that in many instances, the sole knowledge
of $\mu$ and $\mu^{*2}$ restricted to $[0,\infty)$ is actually sufficient to
determine $\mu$. Then we investigate the analogous problem in the framework of
infinitely divisible distributions.
|
Charge order in underdoped and optimally doped high-$T_\mathrm{c}$
superconductors Bi$_{2}$Sr$_{2-x}$La$_x$CuO$_{6+\delta}$ (Bi2201) is
investigated by Cu $L_3$ edge resonant inelastic x-ray scattering (RIXS). We
have directly observed charge density modulation in the optimally doped Bi2201
at momentum transfer $Q_{\|} \simeq 0.23$ rlu, with smaller intensity and
correlation length with respect to the underdoped sample. This demonstrates
that short-range charge order in Bi2201 persists up to optimal doping, as in
other hole-doped cuprates. We explored the nodal (diagonal) direction and found
no charge order peak, confirming that charge order modulates only along the
Cu-O bond directions. We measured the out-of-plane dependence of charge order,
finding a flat response and no maxima at half integer \emph{L} values. This
suggests there is no out-of-plane phase correlation in single layer Bi2201, at
variance with YBa$_2$Cu$_3$O$_{6+x}$ and La$_{2-x}$(Ba,Sr)$_x$CuO$_4$.
Combining our results with data from the literature we assess that charge order
in Bi2201 exists in a large doping range across the phase diagram, i.e. $0.07
\lesssim p \lesssim 0.16$, demonstrating thereby that it is intimately
entangled with the antiferromagnetic background, the pseudogap and
superconductivity.
|
We investigate the application of syzygies for efficiently computing (finite)
Pommaret bases. For this purpose, we first describe a non-trivial variant of
Gerdt's algorithm to construct an involutive basis for the input ideal as well
as an involutive basis for the syzygy module of the output basis. Then we apply
this new algorithm in the context of Seiler's method to transform a given ideal
into quasi-stable position to ensure the existence of a finite Pommaret basis.
This new approach allows us to avoid superfluous reductions in the iterative
computation of Janet bases required by this method. We conclude the paper by
proposing an involutive variant of the signature based algorithm of Gao et al.
to compute simultaneously a Gr\"obner basis for a given ideal and for the syzygy
module of the input basis. All the presented algorithms have been implemented
in Maple and their performance is evaluated via a set of benchmark ideals.
|
Video anomaly detection (VAD) has been extensively studied. However, research
on egocentric traffic videos with dynamic scenes lacks large-scale benchmark
datasets as well as effective evaluation metrics. This paper proposes traffic
anomaly detection with a \textit{when-where-what} pipeline to detect, localize,
and recognize anomalous events from egocentric videos. We introduce a new
dataset called Detection of Traffic Anomaly (DoTA) containing 4,677 videos with
temporal, spatial, and categorical annotations. A new spatial-temporal area
under curve (STAUC) evaluation metric is proposed and used with DoTA.
State-of-the-art methods are benchmarked for two VAD-related tasks.Experimental
results show STAUC is an effective VAD metric. To our knowledge, DoTA is the
largest traffic anomaly dataset to-date and is the first supporting traffic
anomaly studies across when-where-what perspectives. Our code and dataset can
be found in: https://github.com/MoonBlvd/Detection-of-Traffic-Anomaly
|
The machinery of industrial environments was connected to the Internet years
ago with the aim of increasing its performance. However, this change made such
environments vulnerable to cyber-attacks that can compromise their correct
functioning, resulting in economic or social problems. Moreover, implementing
cryptosystems in the communications between operational technology (OT)
devices is a more challenging task than in information technology (IT)
environments, since OT networks are generally composed of legacy elements
characterized by low computational capabilities. Consequently, implementing
cryptosystems in industrial communication networks faces a trade-off between
the security of the communications and the amortization of the industrial
infrastructure. Critical Infrastructure (CI) refers to the industries which
provide key resources for daily social and economic development, e.g.
electricity. Furthermore, a new threat to cybersecurity has arisen with the
theoretical proposal of quantum computers, due to their potential ability to
break state-of-the-art cryptography protocols, such as RSA or ECC. Many global
agents have become aware that transitioning their secure communications to a
quantum-secure paradigm is a priority that should be addressed before the
arrival of fault tolerance. In this paper, we aim to describe the challenges
of implementing post-quantum cryptography (PQC) in CI environments. To do so,
we describe the requirements of these scenarios and how they differ from IT.
We also introduce classical cryptography and how quantum computers pose a
threat to such security protocols. Furthermore, we introduce state-of-the-art
proposals for PQC protocols and present their characteristics. We conclude by
discussing the challenges of integrating PQC in industrial environments.
|
We study the stability of pencils of plane sextics in the sense of geometric
invariant theory. In particular, we obtain a complete and geometric description
of the stability of Halphen pencils of index two.
|
A central part of population genomics consists of finding genomic regions
implicated in local adaptation. Population genomic analyses are based on
genotyping numerous molecular markers and looking for outlier loci in terms of
patterns of genetic differentiation. One of the most common approaches for
selection scans is based on statistics that measure population
differentiation, such as $F_{ST}$. However, there are important caveats with
$F_{ST}$-related approaches: they require grouping individuals into
populations and additionally assume a particular model of population
structure. Here we
implement a more flexible individual-based approach based on Bayesian factor
models. Factor models capture population structure with latent variables called
factors, which can describe clustering of individuals into populations or
isolation-by-distance patterns. Using hierarchical Bayesian modeling, we both
infer population structure and identify outlier loci that are candidates for
local adaptation. As outlier loci, the hierarchical factor model searches for
loci that are atypically related to population structure as measured by the
latent factors. In a model of population divergence, we show that the factor
model can achieve a 2-fold or more reduction of false discovery rate compared
to the software BayeScan or compared to a $F_{ST}$ approach. We analyze the
data of the Human Genome Diversity Panel to provide an example of how factor
models can be used to detect local adaptation with a large number of SNPs. The
Bayesian factor model is implemented in the open-source PCAdapt software.
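The idea of flagging loci that are atypically related to the latent factors can be illustrated with a minimal PCA-based sketch. This is an illustrative simplification, not the paper's hierarchical Bayesian model nor the actual PCAdapt implementation; the function name and all numerical details are assumptions.

```python
import numpy as np

def outlier_scan(G, K=2):
    """Simplified factor-model-style selection scan (illustrative only).
    G: (n_individuals, n_loci) genotype matrix of minor-allele counts (0/1/2).
    K: number of latent factors retained to model population structure.
    Returns per-locus squared Mahalanobis distances (large = outlier)."""
    Gc = G - G.mean(axis=0)                       # center each locus
    # Latent factors: top-K left singular vectors capture population structure
    U, _, _ = np.linalg.svd(Gc, full_matrices=False)
    F = U[:, :K]                                  # (n, K) factor scores
    # Regress each locus on the factors; standardize the loadings
    B = F.T @ Gc                                  # (K, n_loci) loadings
    resid = Gc - F @ B                            # part unexplained by structure
    sigma = resid.std(axis=0, ddof=K) + 1e-12     # per-locus residual scale
    Z = B / sigma                                 # standardized loadings
    # Loci whose z-vector is far (Mahalanobis) from the bulk are candidates
    cov_inv = np.linalg.inv(np.cov(Z))
    return np.einsum('ki,kl,li->i', Z, cov_inv, Z)
```

On simulated data with two populations and a handful of strongly differentiated loci, those loci receive the largest distances, mimicking how the latent factors replace explicit population labels in an $F_{ST}$-style scan.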
|
In this short note we observe that the recent examples of derived-equivalent
Calabi-Yau 3-folds with different fundamental groups also have different Brauer
groups, using a little topological K-theory.
|
In this paper we characterize the approximation schemes that satisfy
Shapiro's theorem and we use this result for several classical approximation
processes. In particular, we study approximation of operators by finite rank
operators and n-term approximation for several dictionaries and norms.
Moreover, we compare our main theorem with a classical result by Yu. Brudnyi
and we show two examples of approximation schemes that do not satisfy Shapiro's
theorem.
|
We elucidate how color neutrality is spoiled in the Polyakov-Nambu-Jona-Lasinio
(PNJL) model at finite density within the adopted mean-field approximation. We
also point out, on several grounds, how the usual assumption about the diagonal
form of the Wilson loop may fail in the presence of a diquark condensate.
|
Language models can be trained to recognize the moral sentiment of text,
creating new opportunities to study the role of morality in human life. As
interest in language and morality has grown, several ground truth datasets with
moral annotations have been released. However, these datasets vary in the
method of data collection, domain, topics, instructions for annotators, etc.
Simply aggregating such heterogeneous datasets during training can yield models
that fail to generalize well. We describe a data fusion framework for training
on multiple heterogeneous datasets that improves performance and
generalizability. The model uses domain adversarial training to align the
datasets in feature space and a weighted loss function to deal with label
shift. We show that the proposed framework achieves state-of-the-art
performance on different datasets compared to prior works on morality
inference.
|
The search for topological systems has recently broadened to include random
substitutional alloys, which lack the specific crystalline symmetries that
protect topological phases, raising the question of whether topological
properties can be preserved, or are modified by disorder. To address this
question, we avoid methods that assume high (averaged) symmetry at the outset,
using instead a fully atomistic, topological description of the alloy.
Application to the PbSe-SnSe alloy reveals that topology survives in an
interesting fashion: (a) spatial randomness removes the valley degeneracy
(with splittings larger than 150 meV), leading to a sequential inversion of the
split valley components over a range of compositions; (b) the absence of
inversion symmetry lifts the spin degeneracy, leading to a Weyl semimetal
phase without the need for an external magnetic field, an unexpected result
given that the alloy's constituent compounds are inversion-symmetric. (a) and
(b) underpin the topological physics at low symmetry and fill in the missing
understanding of possible topological phases across the normal-to-topological
insulator transition.
|
Embedding methods transform the knowledge graph into a continuous,
low-dimensional space, facilitating inference and completion tasks. Existing
methods are mainly divided into two types: translational distance models and
semantic matching models. A key challenge in translational distance models is
their inability to effectively differentiate between 'head' and 'tail' entities
in graphs. To address this problem, a novel location-sensitive embedding (LSE)
method has been developed. LSE innovatively modifies the head entity using
relation-specific mappings, conceptualizing relations as linear transformations
rather than mere translations. The theoretical foundations of LSE, including
its representational capabilities and its connections to existing models, have
been thoroughly examined. A more streamlined variant, LSE-d, which employs a
diagonal matrix for transformations to enhance practical efficiency, is also
proposed. Experiments conducted on four large-scale KG datasets for link
prediction show that LSE-d either outperforms or is competitive with
state-of-the-art related works.
|
This paper explores digital privacy challenges for migrants, analyzing trends
from 2013 to 2023. Migrants face heightened risks such as government
surveillance and identity theft. Understanding these threats is vital for
raising awareness and guiding research towards effective solutions and policies
to protect migrant digital privacy.
|
The isomerization of hydrogen cyanide to hydrogen isocyanide on icy grain
surfaces is investigated by an accurate composite method (jun-Cheap) rooted in
the coupled cluster ansatz and by density functional approaches. After
benchmarking density functional predictions of both geometries and reaction
energies against jun-Cheap results for the relatively small model system HCN --
(H2O)2, the best-performing DFT methods are selected. A large cluster containing
20 water molecules is then employed within a QM/QM$'$ approach to include a
realistic environment mimicking the surface of icy grains. Our results indicate
that four water molecules are directly involved in a proton relay mechanism,
which strongly reduces the activation energy with respect to the direct
hydrogen transfer occurring in the isolated molecule. Further extension of the
size of the cluster up to 192 water molecules in the framework of a three-layer
QM/QM'/MM model has a negligible effect on the energy barrier ruling the
isomerization. Computation of reaction rates by transition state theory
indicates that on icy surfaces the isomerization of HNC to HCN could occur
quite easily even at low temperatures thanks to the reduced activation energy
that can be effectively overcome by tunneling.
|
We generalize the Littlewood subordination principle to proper holomorphic
functions and give many applications.
|
Let $A\subset \mathbb{R}^{n+r}$ be a set definable in an o-minimal expansion
$\mathcal{S}$ of the real field, let $A' \subset \mathbb{R}^r$ be its
projection, and assume that the non-empty fibers $A_a \subset \mathbb{R}^n$
are compact for all $a \in A'$ and uniformly bounded, {\em i.e.} all fibers
are contained in a ball $B(0,R)$ of fixed radius. If $L$ is the Hausdorff
limit of a sequence of fibers $A_{a_i},$ we give an upper bound for the Betti
numbers $b_k(L)$ in terms of definable sets explicitly constructed from a
fiber $A_a.$ In particular, this allows us to establish effective complexity
bounds in the semialgebraic case and in the Pfaffian case. In the Pfaffian
setting, Gabrielov introduced the {\em relative closure} to construct the
o-minimal structure $\mathcal{S}_{\mathrm{pfaff}}$ generated by Pfaffian
functions in a way that is adapted to complexity problems. Our results can be
used to estimate the Betti numbers of a relative closure $(X,Y)_0$ in the
special case where $Y$ is empty.
|
This paper describes an automatic isophotal fitting procedure that succeeds,
without any visual inspection of either the images or the ellipticity/P.A.
radial profiles, at extracting a fairly pure sample of barred LTGs among
thousands of optical images from the SDSS. The procedure relies on the methods
described in Consolandi et al. (2016) to robustly extract the photometric
properties of a large sample of local SDSS galaxies and is tailored to extract
bars on the basis of the well-known peculiarities in their P.A. and
ellipticity profiles. It has been run on a sample of 5853 galaxies in the Coma
and Local superclusters. The procedure extracted for each galaxy a color, an
ellipticity, and a position angle radial profile of the ellipses fitted to the
isophotes. Automatically examining the profiles of 922 face-on late-type
galaxies (B/A > 0.7), the procedure found that ~36% are barred. The local bar
fraction strongly increases with stellar mass. The sample of barred galaxies
is used to construct a set of template radial color profiles in order to test
the impact of the barred galaxy population on the average color profiles shown
by Consolandi et al. (2016) and to test the bar-quenching scenario proposed in
Gavazzi et al. (2015). The radial color profiles of barred galaxies show that
bars are on average redder than their surrounding disk, producing an
outside-in gradient toward red at their corotation radius. The distribution of
the deprojected lengths of the bars suggests that bars have a strong impact on
the gradients of the averaged color profiles. The dependence of the profiles
on mass is consistent with the bar-quenching scenario, i.e. more massive
barred galaxies have redder colors (hence older stellar populations and
suppressed star formation) inside their corotation radius with respect to
their lower-mass counterparts.
|
In this paper we unify two families of topological Tutte polynomials. The
first family is that coming from the surface Tutte polynomial, a polynomial
that arises in the theory of local flows and tensions. The second family arises
from the canonical Tutte polynomials of Hopf algebras. Each family includes the
Las Vergnas, Bollob\'as-Riordan, and Krushkal polynomials. As a consequence we
determine a deletion-contraction definition of the surface Tutte polynomial and
recursion relations for the number of local flows and tensions in an embedded
graph.
|
A general method for deriving closed reduced models of Hamiltonian dynamical
systems is developed using techniques from optimization and statistical
estimation. As in standard projection operator methods, a set of resolved
variables is selected to capture the slow, macroscopic behavior of the system,
and the family of quasi-equilibrium probability densities on phase space
corresponding to these resolved variables is employed as a statistical model.
The macroscopic dynamics of the mean resolved variables is determined by
optimizing over paths of these probability densities. Specifically, a cost
function is introduced that quantifies the lack-of-fit of such paths to the
underlying microscopic dynamics; it is an ensemble-averaged, squared-norm of
the residual that results from submitting a path of trial densities to the
Liouville equation. The evolution of the macrostate is estimated by minimizing
the time integral of the cost function. The value function for this
optimization satisfies the associated Hamilton-Jacobi equation, and it
determines the optimal relation between the statistical parameters and the
irreversible fluxes of the resolved variables, thereby closing the reduced
dynamics. The resulting equations for the macroscopic variables have the
generic form of governing equations for nonequilibrium thermodynamics, and they
furnish a rational extension of the classical equations of linear irreversible
thermodynamics beyond the near-equilibrium regime. In particular, the value
function is a thermodynamic potential that extends the classical dissipation
function and supplies the nonlinear relation between thermodynamic forces and
fluxes.
|
Let m,n be positive integers, v a multilinear commutator word and w=v^m. We
prove that if G is an orderable group in which all w-values are n-Engel, then
the verbal subgroup v(G) is locally nilpotent. We also show that in the
particular case where v=x the group G is nilpotent (rather than merely locally
nilpotent).
|
In this paper we investigate the relation between the detailed isophotal
shape of elliptical galaxies and the strength of the H beta absorption in their
spectra. We find that disky galaxies have higher H beta indices. Stellar
population synthesis models show that the H beta line is a good age indicator,
hence disky galaxies tend to have younger mean ages than boxy galaxies. We show
that the observed trend can be brought about by a contaminating young
population, which we associate with the disky component. This population need
only account for a small fraction of the total mass: for example, if a
contaminating population of age 2 Gyr is superimposed on an old (12 Gyr)
elliptical galaxy, the observed trend can be explained if it contributes only
10% to the total mass. The size of this effect is consistent with the
estimates of disk-to-total light ratios from surface photometry.
|
We theoretically investigate the terahertz dielectric response of a
semiconductor slab hosting an infrared photoinduced grating. The periodic
structure is due to the charge carriers photo-excited by the interference of
two tilted infrared plane waves, so that the grating depth and period can be
tuned by modifying the beam intensities and incidence angles, respectively. In
the case where the grating period is much smaller than the terahertz
wavelength, we numerically evaluate the ordinary and extraordinary components
of the effective permittivity tensor by resorting to electromagnetic full-wave
simulations coupled to the dynamics of the charge carriers excited by the
infrared radiation. We show that the photoinduced metamaterial optical
response can be tailored by varying the grating, and that it ranges from
birefringent to hyperbolic to anisotropic negative dielectric, without
resorting to microfabrication.
|
This paper is devoted to the study of minimal immersions of flat $n$-tori
into spheres, especially those immersed by the first eigenfunctions (such
immersion is called $\lambda_1$-minimal immersion), which also play important
roles in spectral geometry. It is known that there are only two non-congruent
$\lambda_1$-minimal $2$-tori in spheres, which are both flat. For higher
dimensional case, the Clifford $n$-torus in $\mathbb{S}^{2n-1}$ might be the
only known example in the literature. In this paper, by discussing the general
construction of homogeneous minimal flat $n$-tori in spheres, we construct many
new examples of $\lambda_1$-minimal flat $3$-tori and $4$-tori. In contrast to
the rigidity in the case of $2$-tori, we show that there exists a $2$-parameter
family of non-congruent $\lambda_1$-minimal flat $4$-tori. It turns out that
the examples we constructed exhaust all $\lambda_1$-minimal immersions of
conformally flat $3$-tori and $4$-tori in spheres. The classification involves
some detailed investigations of shortest vectors in lattices, which can also be
used to solve Berger's problem on flat $3$-tori and $4$-tori. The
dilation-invariant functional $\lambda_1(g)V(g)^{\frac{2}{n}}$ involving the
first eigenvalue is proved to attain its maximal value among all flat
$3$-tori and $4$-tori.
|
We approximate the homogenization of fully nonlinear, convex, uniformly
elliptic partial differential equations in the periodic setting, using a
variational formula for the optimal invariant measure, which may be derived via
Legendre-Fenchel duality. The variational formula expresses $\bar H$ as an
average of the operator against the optimal invariant measure, generalizing the
linear case. Several nontrivial analytic formulas for $\bar H$ are obtained.
These formulas are compared to numerical simulations, using both PDE and
variational methods. We also perform a numerical study of convergence rates for
homogenization in the periodic and random setting and compare these to
theoretical results.
|
We study convergence to equilibrium for a large class of Markov chains in
random environment. The chains are sparse in the sense that in every row of the
transition matrix $P$ the mass is essentially concentrated on few entries.
Moreover, the random environment is such that rows of $P$ are independent and
such that the entries are exchangeable within each row. This includes various
models of random walks on sparse random directed graphs. The models are
generally non-reversible and the equilibrium distribution is itself unknown. In
this general setting we establish the cutoff phenomenon for the total variation
distance to equilibrium, with mixing time given by the logarithm of the number
of states times the inverse of the average row entropy of $P$. As an
application, we consider the case where the rows of $P$ are i.i.d. random
vectors in the domain of attraction of a Poisson-Dirichlet law with index
$\alpha\in(0,1)$. Our main results are based on a detailed analysis of the
weight of the trajectory followed by the walker. This approach offers an
interpretation of cutoff as an instance of the concentration of measure
phenomenon.
|
The observed microlensing events towards the LMC do not yet have a coherent
explanation. If they are due to Galactic Halo objects, the nature of these
objects is puzzling --- half the halo in dark 0.5 Msol objects. On the other
hand, traditional models of the LMC predict a self-lensing optical depth about
an order of magnitude too low, although characteristics of some of the observed
events favor a self-lensing explanation. We present here two models of the LMC
taking into account the correlation between the mass of the stars and their
velocity dispersion: a thin Mestel disk, and an ellipsoidal model. Both yield
optical depths, event rates, and event duration distributions compatible with
the observations. The grounds for such models are discussed, as well as their
observational consequences.
|
The formalism for analysing the magnetic field distribution in Pauli limited
superconductors developed earlier is applied to the field dependence of the
vortex lattice static linewidth measured in Muon Spin Rotation ($\mu$SR)
experiments. In addition to writing analytical formulae for the static
linewidth for the vortex structure in the limit of independent vortices (i.e.
moderate magnetic fields), we use Abrikosov's analysis to describe the field
variations of the static linewidth at the approach of the superconductor to
metal transition in the limit where the critical field is determined by Pauli
depairing.
|
We describe a simple method for generating new string solutions for which the
brane worldvolume is a curved space. As a starting point we use solutions with
NS-NS charges combined with 2-d CFT's representing different parts of
space-time. We illustrate our method with many examples, some of which are
associated with conformally invariant sigma models. Using U-duality, we also
obtain supergravity solutions with RR charges which can be interpreted as
D-branes with non-trivial worldvolume geometry. In particular, we discuss the
case of a D5-brane wrapped on AdS_3 x S^3, a solution interpolating between
AdS_3 x S^3 x R^5 and AdS_3 x S^3 x S^3 x R, and a D3-brane wrapped over S^3 x
R or AdS_2 x S^2. Another class of solutions we discuss involves NS5-branes
intersecting over a 3-space and NS5-branes intersecting over a line. These
solutions are similar to D7-brane or cosmic string backgrounds.
|
We review how D-brane instantons can generate open string couplings of
stringy hierarchy in the superpotential which violate global abelian symmetries
and are therefore perturbatively forbidden. We discuss the main ingredients of
this mechanism, focussing for concreteness on Euclidean $D2$-branes in Type IIA
orientifold compactifications. Special emphasis is put on a careful analysis of
instanton zero modes and a classification of situations leading to
superpotential or higher fermionic F-terms. This includes the discussion of
chiral and non-chiral instanton recombination, viewed as a multi-instanton
effect. As phenomenological applications we discuss the generation of
perturbatively forbidden Yukawa couplings in SU(5) GUT models and Majorana
masses for right-handed neutrinos. Finally we analyse the mirror dual
description of $D1$-instantons in Type I compactifications with $D9$-branes and
stable holomorphic bundles. We present globally defined semi-realistic string
vacua on an elliptically fibered Calabi-Yau realising the non-perturbative
generation of Majorana masses.
|
Counterfactuals -- expressing what might have been true under different
circumstances -- have been widely applied in statistics and machine learning to
help understand causal relationships. More recently, counterfactuals have begun
to emerge as a technique being applied within visualization research. However,
it remains unclear to what extent counterfactuals can aid with visual data
communication. In this paper, we primarily focus on assessing the quality of
users' understanding of data when provided with counterfactual visualizations.
We propose a preliminary model of causality comprehension by connecting
theories from causal inference and visual data communication. Leveraging this
model, we conducted an empirical study to explore how counterfactuals can
improve users' understanding of data in static visualizations. Our results
indicate that visualizing counterfactuals had a positive impact on
participants' interpretations of causal relations within datasets. These
results motivate a discussion of how to more effectively incorporate
counterfactuals into data visualizations.
|
We propose DiffuStereo, a novel system using only sparse cameras (8 in this
work) for high-quality 3D human reconstruction. At its core is a novel
diffusion-based stereo module, which introduces diffusion models, a powerful
type of generative model, into the iterative stereo matching network. To this
end, we design a new diffusion kernel and additional stereo constraints to
facilitate stereo matching and depth estimation in the network. We further
present a multi-level stereo network architecture to handle high-resolution (up
to 4k) inputs without requiring an unaffordable memory footprint. Given a set of
sparse-view color images of a human, the proposed multi-level diffusion-based
stereo network can produce highly accurate depth maps, which are then converted
into a high-quality 3D human model through an efficient multi-view fusion
strategy. Overall, our method enables automatic reconstruction of human models
with quality on par with high-end dense-view camera rigs, and this is achieved
using a much more lightweight hardware setup. Experiments show that our method
outperforms state-of-the-art methods by a large margin both qualitatively and
quantitatively.
|
We review the geometric structure of the IL$^0$PE model, a rotating
shallow-water model with variable buoyancy, thus sometimes called ``thermal''
shallow-water model. We start by discussing the Euler--Poincar\'e equations for
rigid body dynamics and the generalized Hamiltonian structure of the system. We
then reveal similar geometric structure for the IL$^0$PE. We show, in
particular, that the model equations and its (Lie--Poisson) Hamiltonian
structure can be deduced from Morrison and Greene's (1980) system upon ignoring
the magnetic field ($\vec{\mathrm B} = 0$) and setting $U(\rho,s) =
\frac{1}{2}\rho s$, where $\rho$ is mass density and $s$ is entropy per unit
mass. These variables play the role of layer thickness ($h$) and buoyancy
($\theta$) in the IL$^0$PE, respectively. Included in an appendix is an explicit
proof of the Jacobi identity satisfied by the Poisson bracket of the system.
|
Primary {\gamma}' phase, rather than carbides and borides, plays an important
role in suppressing grain growth during solution treatment at 1433 K of FGH98
nickel-based polycrystalline alloys. Results illustrate that as-fabricated
FGH98 has an equiaxed grain structure and that, after heat treatment, grains
remain equiaxed but grow larger. To clarify the effects of the size and volume
fraction of the primary {\gamma}' phase on grain growth during heat treatment,
this paper establishes a 2D Cellular Automata (CA) model based on thermal
activation and the lowest-energy principle. The CA results are compared with
the experimental results and show a good fit, with an error of less than 10%.
Grain growth kinetics are depicted, and real-time simulations for various sizes
and volume fractions of primary {\gamma}' particles agree well with the Zener
relation. The exponent n in the Zener relation is theoretically calculated; its
minimum value is 0.23 when the radius of the primary {\gamma}' is 2.8{\mu}m.
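A Zener-type pinning estimate of the form R = k * r / f^n can be sketched as follows; the prefactor k is an assumed illustrative value (4/3 is the classical Zener choice, which has n = 1), while n = 0.23 and r = 2.8 um come from the abstract:

```python
def zener_limit_radius(r_p, f, k=4.0/3.0, n=0.23):
    """Zener-type limiting grain radius R = k * r_p / f**n for pinning
    particles of radius r_p (here primary gamma' precipitates) and volume
    fraction f.  k is an assumed illustrative prefactor; n = 0.23 is the
    minimum exponent reported for r_p = 2.8 um."""
    return k * r_p / f ** n

# Larger volume fractions of primary gamma' pin the grains more strongly:
limit = zener_limit_radius(2.8, 0.4)
```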
|
We present an efficient, elastic 3D LiDAR reconstruction framework which can
reconstruct up to maximum LiDAR ranges (60 m) at multiple frames per second,
thus enabling robot exploration in large-scale environments. Our approach only
requires a CPU. We focus on three main challenges of large-scale
reconstruction: integration of long-range LiDAR scans at high frequency, the
capacity to deform the reconstruction after loop closures are detected, and
scalability for long-duration exploration. Our system extends a
state-of-the-art efficient RGB-D volumetric reconstruction technique, called
supereight, to support LiDAR scans and a newly developed submapping technique
to allow for dynamic correction of the 3D reconstruction. We then introduce a
novel pose graph clustering and submap fusion feature to make the proposed
system more scalable for large environments. We evaluate the performance using
two public datasets including outdoor exploration with a handheld device and a
drone, and with a mobile robot exploring an underground room network.
Experimental results demonstrate that our system can reconstruct at 3 Hz with
60 m sensor range and ~5 cm resolution, while state-of-the-art approaches can
only reconstruct to 25 cm resolution or 20 m range at the same frequency.
|
The latest trend in studies of modern electronically and/or optically active
materials is to provoke phase transformations induced by high electric fields
or by short (femtosecond) powerful optical pulses. The systems of choice are
cooperative electronic states whose broken symmetries give rise to topological
defects. For typical quasi-one-dimensional architectures, those are the
microscopic solitons taking from electrons the major roles as carriers of
charge or spin. Because of the long-range ordering, the solitons experience
unusual super-long-range forces leading to a sequence of phase transitions in
their ensembles: the higher-temperature transition of the confinement and the
lower one of aggregation into macroscopic walls. Here we present results of an
extensive numerical modeling for ensembles of both neutral and charged solitons
in both two- and three-dimensional systems. We suggest a specific Monte Carlo
algorithm preserving the number of solitons, which substantially facilitates
the calculations, allows us to extend them to the three-dimensional case, and to
include the important long-range Coulomb interactions. The results confirm the
first confinement transition, except for a very strong Coulomb repulsion, and
demonstrate a pattern formation at the second transition of aggregation.
|
In this article, we present an efficient deep learning method called coupled
deep neural networks (CDNNs) for coupled physical problems. Our method compiles
the interface conditions of the coupled PDEs into the networks properly and can
serve as an efficient alternative for complex coupled problems. To impose
energy conservation constraints, the CDNNs use simple fully connected layers
and a custom loss function that enforces this physical property of the exact
solution during training. The approach is beneficial for the following reasons.
Firstly, we sample randomly and input only spatial coordinates, without being
restricted by the nature of the samples. Secondly, our method is meshfree,
which makes it more efficient than traditional methods. Finally, our method is
parallel and can solve multiple
variables independently at the same time. We give the theory to guarantee the
convergence of the loss function and the convergence of the neural networks to
the exact solution. Some numerical experiments are performed and discussed to
demonstrate the performance of the proposed method.
|
This work is concerned with the time optimal control problem for evolution
equations in Hilbert spaces. The attention is focused on the maximum principle
for time optimal controllers having dimension smaller than that of the state
system, in particular for minimal time sliding mode controllers, which is one
of the novelties of this paper. We provide the characterization of the
controllers by the optimality conditions determined for some general cases. The
proofs rely on a set of hypotheses meant to cover a large class of
applications. Examples of control problems governed by parabolic equations with
potential and drift terms, porous media equation or reaction-diffusion systems
with linear and nonlinear perturbations, describing real world processes, are
presented at the end.
|
The Seyfert galaxies with Z-shaped emission filaments in the Narrow Line
Region (NLR) are considered. We assume that observable Z-shaped structures and
velocity pattern of NLR may be explained as tridimensional helical waves in the
ionization cone.
|
This paper brings space weather prediction close to earthquake (EQ)
prediction research. The results of this paper support conclusions of
previously presented statistical studies that solar activity influences the
seismic activity, this influence is mediated through rapid geomagnetic
disturbances, and the geomagnetic disturbances are related to increases of
solar wind speed. Our study concerns an example of 40 days with a direct response
of a series of 7 strong-to-giant (M=6.8-9.3) EQs (including the Andaman-Sumatra
EQ) to solar wind speed increases and subsequent geomagnetic fast disturbances.
Our analysis for 10 M>6 EQs from November 23 to December 28, 2004 suggests a
mean time response delay of EQs to fast geomagnetic disturbances of ~1.5 days.
The two giant EQs during this period occurred after the two fastest geomagnetic
variations, as revealed by the ratio of the daily Kp index variation over a day
{\Delta}Kp/{\Delta}t (12 and 15, respectively). It suggests that the fast
disturbance of the magnetosphere, as a result of the solar wind speed increase,
is a key parameter in a related space weather-earthquake prediction research.
The solar-magnetosphere-lithosphere coupling and its possible special
characteristics during the period examined need further investigation, since
it could provide significant information on the underlying physical relation
processes of strong earthquakes.
|
New final results from the CMD-2 and SND e+e- annihilation experiments,
together with radiative return measurements from BaBar, lead to recent
improvements in the standard model prediction for the muon anomaly. The
uncertainty at 0.48 ppm--a largely data-driven result--is now slightly below
the experimental uncertainty of 0.54 ppm. The difference, a_mu(expt)- a_mu(SM)
= (27.6 +/- 8.4) x 10^-10, represents a 3.3 standard deviation effect. At this
level, it is one of the most compelling indicators of physics beyond the
standard model and, at the very least, a major constraint for speculative new
theories such as SUSY or extra dimensions. Others at this Workshop detailed
further planned standard model theory improvements to a_mu. Here I outline how
BNL E969 will achieve a factor of 2 or more reduction in the experimental
uncertainty. The new experiment is based on a proven technique and track
record. I argue that this work must be started now to have maximal impact on
the interpretation of the new physics anticipated to be unearthed at the LHC.
|
Context. Since the discovery of the first Accreting Millisecond X-ray Pulsar
SAX J1808.4-3658 in 1998, the family of these sources has kept growing.
Currently, it counts 22 members. All AMXPs are transients with usually very
long quiescence periods, implying that mass accretion rate in these systems is
quite low and not constant. Moreover, for at least three sources, a
non-conservative evolution was also proposed.
Aims. Our purpose is to study the long term averaged mass-accretion rates in
all the Accreting Millisecond X-ray Pulsars discovered so far, to investigate a
non-conservative mass-transfer scenario.
Methods. We calculated the expected mass-transfer rate under the hypothesis
of a conservative evolution based on their orbital periods and on the (minimum)
mass of the secondary (as derived from the mass function), driven by
gravitational radiation and/or magnetic braking. Using this theoretical
mass-transfer rate, we determined the expected accretion luminosity of the
systems. We thus obtained a lower limit to the distance of each source by
comparing the computed theoretical luminosity with the observed flux averaged
over a time period of 20 years. This lower limit to the distance has then been
compared to the value of the distance reported in the literature, to evaluate
how reasonable the hypothesis of a conservative mass-transfer is.
Results. Based on a sample of 18 sources, we found strong evidence of
non-conservative mass-transfer for five sources, for which the estimated
distance lower limits are higher than their known distances. We also report
hints of mass outflows in six other sources. The discrepancy can be fixed
under the hypothesis of a non-conservative mass-transfer in which a fraction of
the mass transferred onto the compact object is swept away from the system,
likely due to the (rotating magnetic dipole) radiation pressure of the pulsar.
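The distance lower limit follows from equating the conservative accretion luminosity L = G M mdot / R with the observed long-term mean flux. A minimal sketch in cgs units (the mass-transfer rate and flux below are hypothetical illustration values, not a fit to any specific AMXP):

```python
import math

G = 6.674e-8            # gravitational constant, cgs
MSUN = 1.989e33         # solar mass, g
YEAR = 3.156e7          # s
M_NS, R_NS = 1.4 * MSUN, 1.0e6   # canonical neutron star mass (g) and radius (cm)

def accretion_luminosity(mdot):
    """L = G * M * mdot / R for matter accreted onto the neutron star (erg/s)."""
    return G * M_NS * mdot / R_NS

def distance_lower_limit(mdot, mean_flux):
    """Conservative evolution implies d >= sqrt(L / (4 pi F)); a literature
    distance below this limit points to non-conservative mass-transfer."""
    return math.sqrt(accretion_luminosity(mdot) / (4.0 * math.pi * mean_flux))

# Hypothetical illustration values:
mdot = 1e-10 * MSUN / YEAR                   # g/s
d_min = distance_lower_limit(mdot, 1e-10)    # cm, for F = 1e-10 erg/s/cm^2
```

For these illustrative inputs the limit comes out near 10 kpc; a source known to lie much closer than its limit would be a non-conservative candidate.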
|
Electron-spin resonance carried out with scanning tunneling microscopes
(ESR-STM) is a recently developed experimental technique that is attracting
enormous interest on account of its potential to carry out single-spin
on-surface resonance with subatomic resolution. Here we carry out a theoretical
study of the role of tip-adatom interactions and provide guidelines for
choosing the experimental parameters in order to optimize spin resonance
measurements. We consider the case of the Fe adatom on a MgO surface and its
interaction with the spin-polarized STM tip. We address three problems: first,
how to optimize the tip-sample distance to cancel the effective magnetic field
created by the tip on the surface spin, in order to carry out proper magnetic
field sensing. Second, how to reduce the voltage dependence of the surface-spin
resonant frequency, in order to minimize tip-induced decoherence due to voltage
noise. Third, we propose an experimental protocol to infer the detuning angle
between the applied field and the tip magnetization, which plays a crucial role
in the modeling of the experimental results.
|
A parameterized string (p-string) is a string over an alphabet $(\Sigma_{s}
\cup \Sigma_{p})$, where $\Sigma_{s}$ and $\Sigma_{p}$ are disjoint alphabets
for static symbols (s-symbols) and for parameter symbols (p-symbols),
respectively. Two p-strings $x$ and $y$ are said to parameterized match
(p-match) if and only if $x$ can be transformed into $y$ by applying a
bijection on $\Sigma_{p}$ to every occurrence of p-symbols in $x$. The indexing
problem for p-matching is to preprocess a p-string $T$ of length $n$ so that we
can efficiently find the occurrences of substrings of $T$ that p-match with a
given pattern. Extending the Burrows-Wheeler Transform (BWT) based index for
exact string pattern matching, Ganguly et al. [SODA 2017] proposed the first
compact index (named pBWT) for p-matching, and posed an open problem on how to
construct it in compact space, i.e., in $O(n \lg |\Sigma_{s} \cup \Sigma_{p}|)$
bits of space. Hashimoto et al. [SPIRE 2022] partially solved this problem by
showing how to construct some components of pBWTs for $T$ in $O(n
\frac{|\Sigma_{p}| \lg n}{\lg \lg n})$ time in an online manner while reading
the symbols of $T$ from right to left. In this paper, we improve the time
complexity to $O(n \frac{\lg |\Sigma_{p}| \lg n}{\lg \lg n})$. We remark that
removing the multiplicative factor of $|\Sigma_{p}|$ from the complexity is of
great interest because it has not been achieved for over a decade in the
construction of related data structures like parameterized suffix arrays even
in the offline setting. We also show that our data structure can support
backward search, a core procedure of BWT-based indexes, at any stage of the
online construction, making it the first compact index for p-matching that can
be constructed in compact space and even in an online manner.
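The p-match relation itself can be decided via Baker's prev-encoding, the standard tool underlying parameterized pattern-matching indexes such as parameterized suffix arrays and the pBWT. A minimal sketch:

```python
def prev_encode(s, p_symbols):
    """Baker's prev-encoding: each p-symbol becomes the distance to its
    previous occurrence (0 for the first one); s-symbols are kept as-is."""
    last, out = {}, []
    for i, c in enumerate(s):
        if c in p_symbols:
            out.append(i - last[c] if c in last else 0)
            last[c] = i
        else:
            out.append(c)
    return out

def p_match(x, y, p_symbols):
    """Two p-strings p-match iff their prev-encodings coincide."""
    return len(x) == len(y) and prev_encode(x, p_symbols) == prev_encode(y, p_symbols)
```

For example, with static alphabet {X} and parameter alphabet {a, b}, "aXab" p-matches "bXba" (apply the bijection a <-> b) but not "aXaa".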
|
We construct analogues of the classical Heisenberg spin chain model (or the
discrete Neumann system) on pseudo-spheres and light-like cones in the
pseudo-Euclidean spaces and show their complete Hamiltonian integrability.
Further, we prove that the Heisenberg model on a light--like cone leads to a
new example of integrable discrete contact system.
|
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware
has the potential to significantly reduce the energy consumption of artificial
neural network training. SNNs trained with Spike Timing-Dependent Plasticity
(STDP) benefit from gradient-free and unsupervised local learning, which can be
easily implemented on ultra-low-power neuromorphic hardware. However,
classification tasks cannot be performed solely with unsupervised STDP. In this
paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP
learning rule to train the classification layer of an SNN equipped with
unsupervised STDP for feature extraction. S2-STDP integrates error-modulated
weight updates that align neuron spikes with desired timestamps derived from
the average firing time within the layer. Then, we introduce a training
architecture called Paired Competing Neurons (PCN) to further enhance the
learning capabilities of our classification layer trained with S2-STDP. PCN
associates each class with paired neurons and encourages neuron specialization
toward target or non-target samples through intra-class competition. We
evaluate our methods on image recognition datasets, including MNIST,
Fashion-MNIST, and CIFAR-10. Results show that our methods outperform
state-of-the-art supervised STDP learning rules, for comparable architectures
and numbers of neurons. Further analysis demonstrates that the use of PCN
enhances the performance of S2-STDP, regardless of the hyperparameter set and
without introducing any additional hyperparameters.
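A toy version of an error-modulated, timing-based update conveys the idea; this simplified rule and its gap and learning-rate parameters are assumptions for illustration, not the paper's exact S2-STDP formulation:

```python
import numpy as np

def s2_stdp_step(W, trace, t_spike, target, gap=1.0, eta=0.05):
    """One error-modulated update.  Desired timestamps are anchored to the
    layer's average firing time: the target neuron should fire `gap` before
    the mean, non-targets `gap` after (gap and eta are assumed values).
    A positive timing error (fired too late) strengthens the neuron's
    weights through the eligibility trace, pulling its next spike earlier."""
    t_des = np.full_like(t_spike, t_spike.mean() + gap)
    t_des[target] = t_spike.mean() - gap
    err = t_spike - t_des
    return W + eta * err[:, None] * trace
```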
|
We present new ideas for computing elliptic Gau{\ss} sums, which constitute
an analogue of the classical cyclotomic Gau{\ss} sums and whose use has been
proposed in the context of counting points on elliptic curves and primality
tests. By means of certain well-known modular functions we define the universal
elliptic Gau{\ss} sums and prove they admit an efficiently computable
representation in terms of the $j$-invariant and another modular function.
After that, we show how this representation can be used for obtaining the
elliptic Gau{\ss} sum associated to an elliptic curve over a finite field
$\mathbb{F}_p$, which may then be employed for counting points or primality
proving.
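For orientation, the classical cyclotomic analogue is easy to evaluate numerically: the quadratic Gauss sum satisfies g(p) = sqrt(p) for primes p = 1 (mod 4) and i*sqrt(p) for p = 3 (mod 4):

```python
import cmath, math

def quadratic_gauss_sum(p):
    """Classical cyclotomic (quadratic) Gauss sum: sum_a exp(2 pi i a^2 / p)."""
    return sum(cmath.exp(2j * math.pi * a * a / p) for a in range(p))
```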
|
The first year of the COVID-19 pandemic put considerable strain on the
national healthcare system in England. In order to predict the effect of the
local epidemic on hospital capacity in England, we used a variety of data
streams to inform the construction and parameterisation of a hospital
progression model, which was coupled to a model of the generalised epidemic. We
named this model EpiBeds. Data from a partially complete patient-pathway
line-list was used to provide initial estimates of the mean duration that
individuals spend in the different hospital compartments. We then fitted
EpiBeds using complete data on hospital occupancy and hospital deaths, enabling
estimation of the proportion of individuals that follow different clinical
pathways, and the reproduction number of the generalised epidemic. The
construction of EpiBeds makes it straightforward to adapt to different patient
pathways and settings beyond England. As part of the UK response to the
pandemic, EpiBeds has provided weekly forecasts to the NHS for hospital bed
occupancy and admissions in England, Wales, Scotland, and Northern Ireland.
|
How does immigrant integration in a country change with immigration density?
Guided by a statistical mechanics perspective we propose a novel approach to
this problem. The analysis focuses on classical integration quantifiers such as
the percentage of jobs (temporary and permanent) given to immigrants, mixed
marriages, and newborns with parents of mixed origin. We find that the average
values of different quantifiers may exhibit either linear or non-linear growth
with immigrant density, and we suggest that social action, a concept identified
by Max Weber, causes the observed non-linearity. Using the statistical mechanics
notion of interaction to quantitatively emulate social action, a unified
mathematical model for integration is proposed and it is shown to explain both
growth behaviors observed. The linear theory, by contrast, ignoring the
possibility of interaction effects, would underestimate the quantifiers by up
to 30% when immigrant densities are low, and overestimate them by as much when
densities are
high. The capacity to quantitatively isolate different types of integration
mechanisms makes our framework a suitable tool in the quest for more efficient
integration policies.
|
This paper investigates the asymptotic behaviour of solutions of periodic
evolution equations. Starting with a general result concerning the quantified
asymptotic behaviour of periodic evolution families we go on to consider a
special class of dissipative systems arising naturally in applications. For
this class of systems we analyse in detail the spectral properties of the
associated monodromy operator, showing in particular that it is a so-called
Ritt operator under a natural 'resonance' condition. This allows us to deduce
from our general result a precise description of the asymptotic behaviour of
the corresponding solutions. In particular, we present conditions for rational
rates of convergence to periodic solutions in the case where the convergence
fails to be uniformly exponential. We illustrate our general results by
applying them to concrete problems including the one-dimensional wave equation
with periodic damping.
|
Visual Speech Recognition (VSR) aims to transcribe speech into text based on
lip movements alone. As it relies on visual information to model the speech,
its performance is inherently sensitive to personal lip appearances and
movements, and this makes the VSR models show degraded performance when they
are applied to unseen speakers. In this paper, to remedy the performance
degradation of the VSR model on unseen speakers, we propose prompt tuning
methods of Deep Neural Networks (DNNs) for speaker-adaptive VSR. Specifically,
motivated by recent advances in Natural Language Processing (NLP), we finetune
prompts on adaptation data of target speakers instead of modifying the
pre-trained model parameters. Different from the previous prompt tuning methods
mainly limited to Transformer variant architecture, we explore different types
of prompts, the addition, the padding, and the concatenation form prompts that
can be applied to the VSR model which is composed of CNN and Transformer in
general. With the proposed prompt tuning, we show that the performance of the
pre-trained VSR model on unseen speakers can be largely improved by using a
small amount of adaptation data (e.g., less than 5 minutes), even if the
pre-trained model is already developed with large speaker variations. Moreover,
by analyzing the performance and parameters of different types of prompts, we
investigate when the prompt tuning is preferred over the finetuning methods.
The effectiveness of the proposed method is evaluated on both word- and
sentence-level VSR databases, LRW-ID and GRID.
|
Lack of knowledge about the background expansion history of the Universe from
independent observations makes it problematic to obtain a precise and accurate
estimation of the Hubble constant $H_0$ from gravitational wave standard
sirens, even with electromagnetic counterpart redshifts. Simply fitting
simultaneously for the matter density in a flat \lcdm\ model can reduce the
precision on $H_0$ from 1\% to 5\%, while not knowing the actual background
expansion model of the universe (e.g.\ form of dark energy) can introduce
substantial bias in estimation of the Hubble constant. When the statistical
precision is at the level of 1\% uncertainty on $H_0$, biases in non-\lcdm\
cosmologies that are consistent with current data could reach the 3$\sigma$
level. To avoid model-dependent biases, statistical techniques that are
appropriately agnostic about model assumptions need to be employed.
|
Systems that produce crackling noises such as Barkhausen pulses are
statistically similar and can be compared with one another. In this project,
the Barkhausen noise of three ferroelectric lead zirconate titanate (PZT)
samples were demonstrated to be compatible with avalanche statistics. The peaks
of the slew-rate (time derivative of current $dI/dt$) squared, defined as
jerks, were statistically analysed and shown to obey power-laws. The critical
exponents obtained for three PZT samples (B, F and S) were 1.73, 1.64 and 1.61,
respectively, with a standard deviation of 0.04. This power-law behaviour is in
excellent agreement with recent theoretical predictions of 1.65 in avalanche
theory. If these critical exponents do resemble energy exponents, they lie
above the energy exponent of 1.33 derived from mean-field theory. Based on the
power-law distribution of the jerks, we demonstrate that domain switching
displays self-organised criticality and that Barkhausen jumps measured as
electrical noise follow avalanche theory.
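Power-law exponents of this kind can be estimated from jerk data with the standard maximum-likelihood (Hill/Clauset-type) estimator. A sketch on synthetic samples (the data here are simulated, not the PZT measurements):

```python
import numpy as np

def powerlaw_mle_exponent(x, x_min):
    """Hill/Clauset ML estimate for p(x) ~ x^(-alpha), x >= x_min:
    alpha = 1 + n / sum(ln(x_i / x_min))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

# Synthetic jerk-like data with a known exponent (1.73, as for sample B):
rng = np.random.default_rng(1)
alpha_true = 1.73
jerks = rng.pareto(alpha_true - 1.0, size=50_000) + 1.0  # density ~ x^-1.73, x >= 1
alpha_hat = powerlaw_mle_exponent(jerks, x_min=1.0)
```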
|
Recent developments in classical simulation of quantum circuits make use of
clever decompositions of chunks of magic states into sums of efficiently
simulable stabiliser states. We show here how, by considering certain
non-stabiliser entangled states which have more favourable decompositions, we
can speed up these simulations. This is made possible by using the ZX-calculus,
which allows us to easily find instances of these entangled states in the
simplified diagram representing the quantum circuit to be simulated. We
additionally find a new technique of partial stabiliser decompositions that
allow us to trade magic states for stabiliser terms. With this technique we
require only $2^{\alpha t}$ stabiliser terms, where $\alpha\approx 0.396$, to
simulate a circuit with T-count $t$. This matches the $\alpha$ found by Qassim
et al., but whereas they only get this scaling in the asymptotic limit, ours
applies for a circuit of any size. Our method builds upon a recently proposed
scheme for simulation combining stabiliser decompositions and optimisation
strategies implemented in the software QuiZX. With our techniques we manage to
reliably simulate 50-qubit 1400 T-count hidden shift circuits in a couple of
minutes on a consumer laptop.
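The gain over a naive decomposition (two stabiliser terms per T magic state, hence 2^t terms in total) is easy to quantify:

```python
import math

def stabiliser_terms(t, alpha=0.396):
    """Stabiliser terms needed with the partial decomposition: 2^(alpha * t)."""
    return 2.0 ** (alpha * t)

naive = 2.0 ** 50               # brute force: 2 terms per T state -> 2^t total
ours = stabiliser_terms(50)     # roughly 2^19.8, under a million terms
```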
|
We evaluate the possible deviation (from the conventional Cahn's result) of
the phase between the one-photon-exchange and the `nuclear' high energy $pp$
scattering amplitudes in a small $t\to 0$ region caused by a more complicated
(not just $\exp(Bt)$) behaviour of the nuclear amplitude. Furthermore, we look at
the possible role of the $t$-dependence of the $\rho(t) \equiv$ Real/Imaginary
amplitude ratio. It turns out that both effects are rather small - much smaller
than to have any influence on the experimental accuracy of $\rho(t=0)$
extracted from the elastic proton-proton scattering data.
|
Conductance signatures that signal the presence of Majorana zero modes in a
three terminal nanowire-topological superconductor hybrid system are analyzed
in detail, in both the clean nanowire limit and in the presence of non-coherent
dephasing interactions. In the coherent transport regime for a clean wire, we
point out contributions of the local Andreev reflection and the non-local
transmissions toward the total conductance lineshapes while clarifying the role
of contact broadening on the Majorana conductance lineshapes at the magnetic
field parity crossings. Interestingly, at larger $B$-field parity crossings,
the contribution of the Andreev reflection process decreases which is
compensated by the non-local processes in order to maintain the conductance
quantum regardless of contact coupling strength. In the non-coherent transport
regime, we include dephasing that is introduced by momentum randomization
processes, that allows one to smoothly transition to the diffusive limit. Here,
as expected, we note that while the Majorana character of the zero modes is
unchanged, there is a reduction in the conductance peak magnitude that scales
with the strength of the impurity scattering potentials. Important distinctions
between the effect of non-coherent dephasing processes and contact-induced
tunnel broadenings in the coherent regime on the conductance lineshapes are
elucidated. Most importantly our results reveal that the addition of dephasing
in the set up does not lead to any notable length dependence to the conductance
of the zero modes, contrary to what one would expect in a gradual transition to
the diffusive limit. We believe this work paves a way for a systematic
introduction of scattering processes into the realistic modeling of Majorana
nanowire hybrid devices and assessing topological signatures in such systems in
the presence of non-coherent scattering processes.
|
Gravitational wave astronomy has placed strong constraints on fundamental
physics, and there is every expectation that future observations will continue
to do so. In this work we quantify this expectation for future binary merger
observations to constrain hidden sectors, such as scalar-tensor gravity or dark
matter, which induce a Yukawa-type modification to the gravitational potential.
We explicitly compute the gravitational waveform, and perform a Fisher
information matrix analysis to estimate the sensitivity of next generation
gravitational wave detectors to these modifications. We find an optimal
sensitivity to the Yukawa interaction strength of $10^{-5}$ and to the
associated dipole emission parameter of $10^{-7}$, with the best constraints
arising from the Einstein Telescope. When applied to a minimal model of dark
matter, this provides an exquisite probe of dark matter accumulation by neutron
stars, and for sub-TeV dark matter gravitational waves are able to detect mass
fractions $m_{DM}/m_{NS}$ of less than 1 part in $10^{15}$.
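The Yukawa-type modification constrained here is conventionally written as $V(r) = -\frac{G m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)$, with strength $\alpha$ and range $\lambda$. A minimal numerical sketch of this standard parametrization (the function name and values are illustrative, not taken from the paper):

```python
import math

G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2


def yukawa_potential(r, m1, m2, alpha, lam):
    """Newtonian potential with a Yukawa-type correction:
    V(r) = -G m1 m2 / r * (1 + alpha * exp(-r / lam))."""
    return -G * m1 * m2 / r * (1.0 + alpha * math.exp(-r / lam))
```

For $\alpha = 0$ the Newtonian potential is recovered, and for $r \gg \lambda$ the correction is exponentially suppressed.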
|
We apply the transfer-matrix DMRG (TMRG) to a stochastic model, the
Domany-Kinzel cellular automaton, which exhibits a non-equilibrium phase
transition in the directed percolation universality class. Estimates for the
stochastic time evolution, phase boundaries and critical exponents can be
obtained with high precision. This is possible using only modest numerical
effort since the thermodynamic limit can be taken analytically in our approach.
We also point out further advantages of the TMRG over other numerical
approaches, such as classical DMRG or Monte-Carlo simulations.
|
In Network-on-Chip (NoC) based systems, energy consumption is affected by the
task scheduling and allocation schemes, which in turn affect the performance of
the system. In this paper we test the existing proposed algorithms and
introduce a new energy-efficient algorithm for 3D NoC architectures. Efficient
dynamic and cluster-based approaches are proposed, along with optimization
using a bio-inspired algorithm. The proposed algorithm has been implemented and
evaluated on randomly generated benchmarks and real-life applications such as
MMS, Telecom and VOPD. The algorithm has also been tested with the E3S
benchmark and compared with the existing Spiral and Crinkle mapping algorithms,
showing a larger reduction in communication energy consumption and improved
system performance. Experimental analysis shows that the proposed algorithm
achieves an average energy reduction of 49%, a communication cost reduction of
48%, and an average latency reduction of 34%. The cluster-based approach is
mapped onto the NoC using the Dynamic Diagonal Mapping (DDMap), Crinkle and
Spiral algorithms, with DDMap providing the best results. Compared with Crinkle
and Spiral, cluster mapping with DDMap yields average energy reductions of 14%
and 9%, respectively.
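As a concrete illustration of the kind of objective such mapping algorithms minimize, the sketch below uses a generic NoC bit-energy model: energy per unit of traffic grows with the number of routers and links traversed along a minimal path in a 3D mesh. The model, default constants, and function names are illustrative and not taken from the paper:

```python
def hops(a, b):
    """Manhattan hop distance between two tiles in a 3D mesh."""
    return sum(abs(x - y) for x, y in zip(a, b))


def comm_energy(traffic, placement, e_router=1.0, e_link=0.5):
    """Total communication energy of a task-to-tile mapping.

    traffic   -- {(src_task, dst_task): volume} communication volumes
    placement -- {task: (x, y, z)} tile coordinates of each task
    A flit crossing h links traverses h + 1 routers.
    """
    total = 0.0
    for (src, dst), volume in traffic.items():
        h = hops(placement[src], placement[dst])
        total += volume * ((h + 1) * e_router + h * e_link)
    return total
```

A mapping heuristic (Spiral, Crinkle, or the proposed DDMap) would then search over `placement` to minimize this quantity.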
|
In replay-based methods for continual learning, replaying input samples in
episodic memory has shown its effectiveness in alleviating catastrophic
forgetting. However, the potential key factor of cross-entropy loss with
softmax in causing catastrophic forgetting has been underexplored. In this
paper, we analyze the effect of softmax and revisit softmax masking with
negative infinity to shed light on its ability to mitigate catastrophic
forgetting. Based on these analyses, we find that negative infinity masked
softmax is not always compatible with dark knowledge. To improve the
compatibility, we propose a general masked softmax that controls the stability
by adjusting the gradient scale to old and new classes. We demonstrate that
utilizing our method on other replay-based methods results in better
performance, primarily by enhancing model stability in continual learning
benchmarks, even when the buffer size is set to an extremely small value.
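The negative-infinity masking analyzed above can be sketched in a few lines: masked-out logits are set to $-\infty$ so the corresponding classes receive exactly zero probability (and hence zero gradient). This is a minimal NumPy illustration of the masking itself, not the paper's full gradient-scaled variant:

```python
import numpy as np


def masked_softmax(logits, mask):
    """Softmax over `logits` where entries with mask == False are assigned
    a logit of -inf, giving them exactly zero probability."""
    masked = np.where(mask, logits, -np.inf)
    shifted = masked - masked.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)
```

The general masked softmax proposed in the paper would replace the hard $-\infty$ with a tunable gradient scale between old and new classes.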
|
We propose a protocol that achieves fast adiabatic transfer between two
orthogonal states of a qubit by coupling with an ancilla. The qubit undergoes
Landau-Zener dynamics, whereas the coupling realizes a time-dependent
Hamiltonian, which is diagonal in the spin's instantaneous Landau-Zener
eigenstates. The ancilla (or meter), in turn, couples to a thermal bath, such
that the overall dynamics is incoherent. We analyse the protocol's fidelity as
a function of the strength of the coupling and of the relaxation rate of the
meter. When the meter's decay rate is the largest frequency scale of the
dynamics, the spin dynamics is encompassed by a master equation describing
dephasing of the spin in the instantaneous eigenbasis. In this regime the
fidelity of adiabatic transfer improves as the bath temperature is increased.
Surprisingly, the adiabatic transfer is significantly more efficient in the
opposite regime, where the time scale of the ancilla dynamics is comparable to
the characteristic spin time scale. Here, for low temperatures the coupling
with the ancilla tends to suppress diabatic transitions via effective cooling.
The protocol can be efficiently implemented by means of a pulsed, stroboscopic
coupling with the ancilla and is robust against moderate fluctuations of the
experimental parameters.
|
We summarize results for the processes $e^-\gamma \to \nu W^-, e^-Z,
e^-\gamma$ within the electroweak Standard Model. We discuss the essential
features of the corresponding lowest-order cross-sections and present numerical
results for the unpolarized cross-sections including the complete $O(\alpha)$
virtual, soft-photonic, and hard-photonic corrections. While at low energies
the weak corrections are dominated by the leading universal corrections, at
high energies we find large, non-universal corrections, which arise from vertex
and box diagrams involving non-Abelian gauge couplings.
|
In recent years, vision Transformers and MLPs have demonstrated remarkable
performance in image understanding tasks. However, their inherently dense
computational operators, such as self-attention and token-mixing layers, pose
significant challenges when applied to spatio-temporal video data. To address
this gap, we propose PosMLP-Video, a lightweight yet powerful MLP-like backbone
for video recognition. Instead of dense operators, we use efficient relative
positional encoding (RPE) to build pairwise token relations, leveraging
small-sized parameterized relative position biases to obtain each relation
score. Specifically, to enable spatio-temporal modeling, we extend the image
PosMLP's positional gating unit to temporal, spatial, and spatio-temporal
variants, namely PoTGU, PoSGU, and PoSTGU, respectively. These gating units can
be feasibly combined into three types of spatio-temporal factorized positional
MLP blocks, which not only decrease model complexity but also maintain good
performance. Additionally, we enrich relative positional relationships by using
channel grouping. Experimental results on three video-related tasks demonstrate
that PosMLP-Video achieves competitive speed-accuracy trade-offs compared to
the previous state-of-the-art models. In particular, PosMLP-Video pre-trained
on ImageNet1K achieves 59.0%/70.3% top-1 accuracy on Something-Something V1/V2
and 82.1% top-1 accuracy on Kinetics-400 while requiring much fewer parameters
and FLOPs than other models. The code is released at
https://github.com/zhouds1918/PosMLP_Video.
|
We examine the electronic properties of 2D electron gas in black phosphorus
multilayers in the presence of a perpendicular magnetic field, highlighting the
role of in-plane anisotropy on various experimental quantities such as ac
magneto-conductivity, screening, and magneto-plasmons. We find that resonant
structures in the ac conductivity exhibit a red-shift with increasing doping
due to inter-band coupling, $\gamma$. This arises from an extra correction term
in the Landau energy spectrum proportional to $n^2\gamma^2$ ($n$ is the Landau
index), up to second order in $\gamma$. We also find that the Coulomb
interaction leads to highly anisotropic magneto-excitons.
|
Density functional theory (DFT) has become a basic tool for the study of
electronic structure of matter, in which the Hohenberg-Kohn theorem plays a
fundamental role in the development of DFT. Unfortunately, the existing proofs
are incomplete or even incorrect; besides, the statement of the Hohenberg-Kohn
theorem for many-electron Coulomb systems is not perfect. In this paper, we
shall restate the Hohenberg-Kohn theorem for Coulomb type systems and present a
rigorous proof by using the Fundamental Theorem of Algebra.
|
Constraints on the Yukawa-type corrections to Newtonian gravitational law are
obtained resulting from the measurement of the Casimir force between two
crossed cylinders. The new constraints are stronger than those previously
derived in the interaction range between 1.5 nm and 11 nm. The maximal
strengthening, by a factor of 300, is achieved at 4.26 nm. Possible
applications of the obtained results to elementary particle physics are
discussed.
|
We develop a fully quantum theoretical approach which describes the dynamics
of Frenkel excitons and bi-excitons induced by few photon quantum light in a
quantum well or wire (atomic chain) of finite size. The eigenenergies and
eigenfunctions of the coupled exciton-photon states in a multiatomic system are
found and the role of spatial confinement as well as the energy quantization
effects in 1D and 2D cases is analyzed. Due to the spatial quantization, the
excitation process is found to consist of Rabi-like oscillations between the
collective symmetric states characterized by discrete energy levels and
arising in the picture of the ladder bosonic operators. At the same time, an
enhanced excitation of additional states with energy close to the upper
polariton branch is revealed. This new effect, which we refer to as the
formation of Rabi-shifted resonances, is analyzed in detail. Such states are
shown to dramatically influence the dynamics of excitation, especially in the
limit of large times.
|
In this paper, we present two video processing techniques for contact-less
estimation of the Respiratory Rate (RR) of framed subjects. Due to the modest
extent of movements related to respiration in both infants and adults, specific
algorithms to efficiently detect breathing are needed. For this reason,
motion-related variations in video signals are exploited to identify
respiration of the monitored patient and simultaneously estimate the RR over
time. Our estimation methods rely on two motion magnification algorithms that
are exploited to enhance the subtle respiration-related movements. In
particular, amplitude- and phase-based algorithms for motion magnification are
considered to extract reliable motion signals. The proposed estimation systems
perform both spatial decomposition of the video frames combined with proper
temporal filtering to extract breathing information. After periodic (or
quasi-periodic) respiratory signals are extracted and jointly analysed, we
apply the Maximum Likelihood (ML) criterion to estimate the fundamental
frequency, corresponding to the RR. The performance of the presented methods is
first assessed by comparison with reference data. Videos framing different
subjects, i.e., newborns and adults, are tested. Finally, the RR estimation
accuracy of both methods is measured in terms of normalized Root Mean Squared
Error (RMSE).
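For a single (quasi-)periodic component in white Gaussian noise, the ML estimate of the fundamental frequency coincides with the peak of the periodogram restricted to the physiological band. A minimal sketch of that final estimation step (band limits and function name are illustrative, not the paper's full pipeline):

```python
import numpy as np


def estimate_rr(motion_signal, fs, f_lo=0.1, f_hi=1.5):
    """Estimate respiratory rate (breaths/min) as the periodogram peak of a
    zero-mean motion signal within a plausible breathing band [f_lo, f_hi] Hz."""
    sig = np.asarray(motion_signal, dtype=float)
    sig = sig - sig.mean()                       # remove DC offset
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2        # periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f0 = freqs[band][np.argmax(power[band])]
    return 60.0 * f0
```

In the full systems described above, `motion_signal` would be the respiration-related signal extracted after motion magnification and temporal filtering.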
|
We study the evolution of quantum fluctuations of gravity around an
inflationary solution in renormalizable quantum gravity, in which the initial
scalar-fluctuation dominance is shown by the background-free nature expressed
by a special conformal invariance. Inflation ignites at the Planck scale and
continues until spacetime phase transition occurs at a dynamical scale about
$10^{17}$GeV. We can show that during inflation, the initially large
scale-invariant fluctuations reduce in amplitude to the appropriate magnitude
suggested by tiny CMB anisotropies. The goal of this research is to derive the
spectra of scalar fluctuations at the phase transition point, that is, the
primordial spectra. A system of nonlinear evolution equations for the
fluctuations is derived from the quantum gravity effective action. The running
coupling constant is then expressed by a time-dependent average following the
spirit of the mean field approximation. In this paper, we determine and examine
various nonlinear terms, not treated in previous studies such as the
exponential factor of the conformal mode. These contributions occur during the
early stage of inflation when the amplitude is still large. Moreover, in order
to verify their effects concretely, we numerically solve the evolution equation
by making a simplification to extract the most contributing parts of the terms
in comoving momentum space. The result indicates that they serve to maintain
the initial scale invariance over a wide range beyond the comoving Planck
scale. This is a challenge toward the derivation of the precise primordial
spectra, and we expect in the future that it will lead to the resolution of the
tensions that have arisen in cosmology.
|
In this work, we explore the existence of traversable wormhole solutions
supported by double gravitational layer thin-shells and satisfying the Null
Energy Condition (NEC) throughout the whole spacetime, in a quadratic-linear
form of the generalized hybrid metric-Palatini gravity. We start by showing
that for a particular quadratic-linear form of the action, the junction
conditions on the continuity of the Ricci scalar $R$ and the Palatini Ricci
scalar $\mathcal R$ of the theory can be discarded without the appearance of
undefined distribution terms in the field equations. As a consequence, a double
gravitational layer thin-shell arises at the separation hypersurface. We then
outline a general method to find traversable wormhole solutions satisfying the
NEC at the throat and provide an example. Finally, we use the previously
derived junction conditions to match the interior wormhole solution to an
exterior vacuum and asymptotically flat solution, thus obtaining a full traversable
wormhole solution supported by a double gravitational layer thin-shell and
satisfying the NEC. Unlike the wormhole solutions previously obtained in the
scalar-tensor representation of this theory, which were scarce and required
fine-tuning, the solutions obtained through this method are numerous and exist
for a wide variety of metrics and actions.
|
We study finite-temperature phase transition and equation of state for
two-flavor QCD at $N_t=4$ using an RG-improved gauge action and a
meanfield-improved clover quark action. The pressure is computed using the
integral method. The O(4) scaling of chiral order parameter is also examined.
|
Self-supervised learning (SSL) has led to great strides in speech processing.
However, the resources needed to train these models have become prohibitively
large as they continue to scale. Currently, only a few groups with substantial
resources are capable of creating SSL models, which harms reproducibility. In
this work, we optimize HuBERT SSL to fit in academic constraints. We reproduce
HuBERT independently from the original implementation, with no performance
loss. Our code and training optimizations make SSL feasible with only 8 GPUs,
instead of the 32 used in the original work. We also explore a semi-supervised
route, using an ASR model to skip the first pre-training iteration. Within one
iteration of pre-training, our models improve over HuBERT on several tasks.
Furthermore, our HuBERT Large variant requires only 8 GPUs, achieving similar
performance to the original trained on 128. As our contribution to the
community, all models, configurations, and code are made open-source in ESPnet.
|
The presence of certain elements within a star, and by extension its planet,
strongly impacts the formation and evolution of the planetary system. The
positive correlation between a host star's iron-content and the presence of an
orbiting giant exoplanet has been confirmed; however, the importance of other
elements in predicting giant planet occurrence is less certain despite their
central role in shaping internal planetary structure. We designed and applied a
machine learning algorithm to the Hypatia Catalog (Hinkel et al. 2014) to
analyze the stellar abundance patterns of known host stars to determine those
elements important in identifying potential giant exoplanet host stars. We
analyzed a variety of different element ensembles, namely volatiles,
lithophiles, siderophiles, and Fe. We show that the relative abundances of
oxygen, carbon, and sodium, in addition to iron, are influential indicators of
the presence of a giant planet. We demonstrate the predictive power of our
algorithm by analyzing stars with known giant planets, finding that they have a
median prediction score of 75%. We present a list of ~350 stars with no
currently discovered planets that have a $\geq$90% predicted probability of
hosting a giant exoplanet. We investigated archival HARPS data and found
significant trends that HIP62345, HIP71803, and HIP10278 host long-period giant
planet companions with estimated minimum $M_p\sin(i)$ values of 3.7, 6.8, and
8.5 M$_{J}$, respectively. We anticipate that our findings will revolutionize
future target selection, the role that elements play in giant planet formation,
and the determination of giant planet interior structure models.
|
We present ROBO, a model and its companion code for the study of the
interstellar medium (ISM). The aim is to provide an accurate description of the
physical evolution of the ISM and to lay the groundwork for an ancillary tool to be
inserted in NBody-Tree-SPH (NB-TSPH) simulations of large scale structures in
cosmological context or of the formation and evolution of individual galaxies.
The ISM model consists of gas and dust. The gas chemical composition is
regulated by a network of reactions that includes a large number of species
(hydrogen and deuterium based molecules, helium, and metals). New reaction
rates for the charge transfer in $\mathrm H^+$ and $\mathrm H_2$ collisions are
presented. The dust contains the standard mixture of carbonaceous grains
(graphite grains and PAHs) and silicates of which the model follows the
formation and destruction by several processes. The model takes into account an
accurate treatment of the cooling process, based on several physical
mechanisms, and cooling functions recently reported in the literature. The
model is applied to a wide range of the input parameters and the results for
important quantities describing the physical state of the gas and dust are
presented. The results are organized in a database suited to the artificial
neural networks (ANNs). Once trained, the ANNs yield the same results obtained
by ROBO, with great accuracy. We plan to develop ANNs suitably tailored for
applications to NB-TSPH simulations of cosmological structures and/or galaxies.
|
We describe a way to optimize the chiral behavior of Wilson-type lattice
fermion actions by studying the low energy real eigenmodes of the Dirac
operator. We find a candidate action, the clover action with fat links with a
tuned clover term. The action shows good scaling behavior at Wilson gauge
coupling beta=5.7.
|
Large-scale high-quality training data is important for improving the
performance of models. After being trained on data that contains rationales
(reasoning steps), models gain reasoning capability. However, datasets with
high-quality rationales are relatively scarce due to the high annotation cost.
To address this issue, we propose \textit{Self-motivated Learning} framework.
The framework motivates the model itself to automatically generate rationales
on existing datasets. Based on the inherent rank from correctness across
multiple rationales, the model learns to generate better rationales, leading to
higher reasoning capability. Specifically, we train a reward model with the
rank to evaluate the quality of rationales, and improve the performance of
reasoning through reinforcement learning. Experimental results of Llama2 7B on
multiple reasoning datasets show that our method significantly improves the
reasoning ability of models, even outperforming text-davinci-002 in some
datasets.
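The "inherent rank from correctness" used to train the reward model can be illustrated by pairing rationales that led to correct answers against those that did not; the function and data layout below are illustrative, not the paper's implementation:

```python
def preference_pairs(rationales):
    """Build (preferred, rejected) rationale pairs from correctness labels.

    rationales -- list of (rationale_text, answer_was_correct) tuples.
    Rationales yielding the correct answer are ranked above those that fail,
    giving training pairs for a pairwise reward model.
    """
    correct = [r for r, ok in rationales if ok]
    wrong = [r for r, ok in rationales if not ok]
    return [(c, w) for c in correct for w in wrong]
```

A reward model trained on such pairs can then score newly sampled rationales during the reinforcement learning stage.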
|
Recent advances in generative artificial intelligence have enabled the
creation of high-quality synthetic data that closely mimics real-world data.
This paper explores the adaptation of the Stable Diffusion 2.0 model for
generating synthetic datasets, using Transfer Learning, Fine-Tuning and
generation parameter optimisation techniques to improve the utility of the
dataset for downstream classification tasks. We present a class-conditional
version of the model that exploits a Class-Encoder and optimisation of key
generation parameters. Our methodology led to synthetic datasets that, in a
third of cases, produced models that outperformed those trained on real
datasets.
|
The present work deals with the study of d-wave superconductor in presence of
a single vortex placed at the centre of the 2D lattice using the $t-t'-J$ model
within the renormalized mean field theory. It is found that in the absence of
the vortex the ground state has a d-wave configuration. In the presence of the
vortex, the superconducting order parameter, above the critical doping, drops
to a low non-zero value within a few lattice points from the vortex (that is,
within the vortex core) and beyond it converges to its constant vortex-free
value. We observe that above the critical doping the results are consistent
with experiment, while there is an anomalous rise in the superconducting order
parameter at very low doping within the vortex core. We attribute this to
antiferromagnetic ordering taking place within the vortex core at low doping.
|
We investigate the possibility of replacing the topology of convergence in
probability with convergence in $L^1$. A characterization of continuous linear
functionals on the space of measurable functions is also obtained.
|
The DA$\Phi$NE $e^+ e^-$ collider is an abundant source of low energy $K \bar K$
pairs suitable to explore different fields of the non-perturbative QCD regime.
Two different experiments, DEAR and FINUDA, using different experimental
techniques, are trying to shed new light on the strong interaction at the
nucleon scale by producing high precision results in this energy range. The
DEAR experiment is studying kaonic atoms in order to determine antikaon-nucleon
scattering lengths. FINUDA aims to produce hypernuclei to study nuclear
structure and the $\Lambda$-N interaction.
|
We show that exclusive double-diffractive Higgs production, pp -> p+H+p,
followed by the H -> bbbar decay, could play an important role in identifying a
`light' Higgs boson at the LHC, provided that the forward outgoing protons are
tagged. We predict the cross sections for the signal and for all possible bbbar
backgrounds.
|
By tagging one or two intact protons in the forward direction, it is possible
to select and measure exclusive photon-fusion processes at the LHC. The same
processes can also be measured in heavy ion collisions, and are often denoted
as ultraperipheral collisions (UPC) processes. Such measurements open up the
possibility of probing certain dimension-8 operators and their positivity
bounds at the LHC. As a demonstration, we perform a phenomenological study on
the $\gamma\gamma\to \ell^+\ell^-$ processes, and find that the
measurements of this process at the HL-LHC provide reaches on a set of
dimension-8 operator coefficients that are comparable to the ones at future
lepton colliders. We also point out that the $\gamma q\to \gamma q$ process
could potentially have better reaches on similar types of operators due to its
larger cross section, but a more detailed experimental study is needed to
estimate the signal and background rates of this process. The validity of
effective field theory (EFT) and the robustness of the positivity
interpretation are also discussed.
|
We investigated the thermodynamic property of the heavy fermion
superconductor UTe$_2$ in pulsed high magnetic fields. The superconducting
transition in zero field was observed at $T_{\rm c}$=1.65 K as a sharp heat
capacity jump. Magnetocaloric effect measurements in pulsed magnetic fields
clearly detected a thermodynamic anomaly accompanied by a first-order
metamagnetic transition at $\mu$$_{0}$$H_{\rm m}$=36.0 T when the field is
applied nearly along the hard-magnetization $b$-axis. From the results of heat
capacity measurements in magnetic fields, we found a drastically diverging
electronic heat capacity coefficient of the normal state $\gamma$$_{\rm N}$
on approaching $H_{\rm m}$. Comparing with previous works via the
magnetic Clausius-Clapeyron relation, we unveil the thermodynamic details of
the metamagnetic transition. The enhancement of the effective mass observed as
the development of $\gamma_{\rm N}$ indicates that quantum fluctuations strongly
evolve around $H_{\rm m}$; they assist the superconductivity emerging even in
extremely high fields.
|
The first stars in the universe are thought to be massive, forming in dark
matter halos with masses around 10^6 solar masses. Recent simulations suggest
that these metal-free (Population III) stars may form in binary or multiple
systems. Because of their high stellar masses and small host halos, their
feedback ionizes the surrounding 3 kpc of intergalactic medium and drives the
majority of the gas from the potential well. The next generation of stars then
must form in this gas-poor environment, creating the first galaxies that
produce the majority of ionizing radiation during cosmic reionization. I will
review the latest developments in the field of Population III star formation
and feedback and its impact on galaxy formation prior to reionization. In
particular, I will focus on the numerical simulations that have demonstrated
this sequence of events, ultimately leading to cosmic reionization.
|
Using results of a recent calculation of the $\Lambda(1520)$ in the nuclear
medium, which show that the medium width is about five times the free width, we
study the A dependence of the $\Lambda(1520)$ production cross section in the
reactions $\gamma ~A \to K^+ \Lambda(1520) A^\prime$ and $p~ A \to p~ K^+
\Lambda(1520) A^\prime$. We find a sizable A dependence in the ratio of the
nuclear cross sections for heavy nuclei with respect to a light one due to the
large value of the $\Lambda(1520)$ width in the medium, showing that dedicated
experiments, easily within reach at present facilities, can provide good
information on this magnitude by measuring the cross sections studied here.
|
This article is divided into two chapters. The first chapter describes the
failure rate as a KPI and studies its properties. The second one goes over ways
to compare this KPI across two groups using the concepts of statistical
hypothesis testing.
In section 1, we will motivate the failure rate as a KPI (in Azure, it is
dubbed `Annual Interruption Rate' or AIR). In section 1.1, we will discuss
measuring failure rate from the logs machines typically generate. In section
1.2, we will discuss the problem of measuring it from real-world data.
In section 2.1, we will discuss the general concepts of hypothesis testing.
In section 2.2, we will go over some general count distributions for modeling
Azure reboots. In section 2.3, we will go over some experiments on applying
various hypothesis tests to simulated data. In section 2.4, we will discuss
some applications of this work like using these statistical methods to catch
regressions in failure rate and how long we need to let changes to the system
`bake' before we are reasonably sure they didn't regress failure rate.
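As one concrete example of the hypothesis tests discussed, reboot counts over known observation times are often modeled as Poisson, and equality of two failure rates can be checked with a pooled-rate normal approximation (a score test). This is an illustrative sketch, not the exact procedure used internally:

```python
import math


def poisson_rate_z(c1, t1, c2, t2):
    """Two-sided z-test for H0: the two Poisson rates are equal.

    c1, c2 -- event (e.g. reboot) counts in each group
    t1, t2 -- corresponding exposure times (e.g. machine-days)
    Returns (z statistic, two-sided p-value); assumes c1 + c2 > 0.
    """
    pooled = (c1 + c2) / (t1 + t2)               # common rate under H0
    se = math.sqrt(pooled / t1 + pooled / t2)
    z = (c1 / t1 - c2 / t2) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

For small counts an exact conditional (binomial) test is preferable; the normal approximation is adequate when both counts are reasonably large.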
|
The population of known extrasolar planets includes giant and terrestrial
planets that closely orbit their host star. Such planets experience significant
tidal distortions that can force the planet into synchronous rotation. The
combined effects of tidal deformation and centripetal acceleration induces
significant asphericity in the shape of these planets, compared to the mild
oblateness of Earth, with maximum gravitational acceleration at the poles. Here
we show that this latitudinal variation in gravitational acceleration is
relevant for modeling the climate of oblate planets including Jovian planets
within the solar system, closely-orbiting hot Jupiters, and planets within the
habitable zone of white dwarfs. We compare first- and third-order
approximations for gravitational acceleration on an oblate spheroid and
calculate the geostrophic wind that would result from this asphericity on a
range of solar system planets and exoplanets. Third-order variations in
gravitational acceleration are negligible for Earth but become significant for
Jupiter, Saturn, and Jovian exoplanets. This latitudinal variation in
gravitational acceleration can be measured remotely, and the formalism
presented here can be implemented for use in general circulation climate
modeling studies of exoplanet atmospheres.
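The first-order latitudinal variation described above can be sketched as follows: the surface radius of the spheroid shrinks toward the poles, while the centrifugal term vanishes there, so effective gravity peaks at the poles. The check uses Earth-like parameter values; the formula keeps only the leading flattening and centrifugal terms (omitting, e.g., the $J_2$ contribution), so it is illustrative rather than the paper's third-order expression:

```python
import math


def g_effective(lat_deg, gm, r_eq, flattening, omega):
    """Leading-order effective surface gravity on a rotating oblate spheroid.

    gm         -- gravitational parameter G*M (m^3/s^2)
    r_eq       -- equatorial radius (m)
    flattening -- (r_eq - r_pole) / r_eq
    omega      -- rotation rate (rad/s)
    """
    lat = math.radians(lat_deg)
    r = r_eq * (1.0 - flattening * math.sin(lat) ** 2)   # first-order radius
    g_newton = gm / r**2                                  # monopole term
    g_centrifugal = omega**2 * r * math.cos(lat) ** 2     # radial component
    return g_newton - g_centrifugal
```

With Earth-like values this reproduces the familiar pole-to-equator gravity difference of roughly 0.5%; for rapidly rotating Jovian planets the contrast is much larger.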
|
We find an asymptotic solution for two- and three-point correlators of local
gauge-invariant operators, in a lower-spin sector of massless large-$N$ $QCD$,
in terms of glueball and meson propagators, by means of a new purely
field-theoretical technique that we call the asymptotically-free bootstrap. The
asymptotically-free bootstrap exploits the lowest-order conformal invariance of
connected correlators of gauge invariant composite operators in perturbation
theory, the renormalization-group improvement, and a recently-proved asymptotic
structure theorem for glueball and meson propagators, that involves the unknown
particle spectrum and the anomalous dimension of operators for fixed spin. In
principle the asymptotically-free bootstrap extends to all the higher-spin two-
and three-point correlators whose lowest-order conformal limit is non-vanishing
in perturbation theory, and by means of the operator product expansion to the
corresponding asymptotic multi-point correlators as well. Besides, the
asymptotically-free bootstrap provides asymptotic $S$-matrix amplitudes in
massless large-$N$ $QCD$ in terms of glueball and meson propagators as opposed
to perturbation theory. Remarkably, the asymptotic $S$-matrix depends only on
the unknown particle spectrum, but not on the anomalous dimensions. Moreover,
the asymptotically-free bootstrap applies to large-$N$ $\mathcal{N}=1$ $SUSY$
$YM$ as well. Practically, as just a few examples among many, it yields the
structure of the light-by-light scattering amplitude, of the pion form
factor, and of the associated vector dominance. Theoretically, the asymptotic
solution sets the strongest constraints on any actual solution of large-$N$
$QCD$ (and of large-$N$ $\mathcal{N}=1$ $SUSY$ $YM$), and in particular on any
string solution.
|
We use a collection of 14 well-measured neutron star masses to strengthen the
case that a substantial fraction of these neutron stars was formed via
electron-capture supernovae (SNe) as opposed to Fe-core collapse SNe. The
e-capture SNe are characterized by lower resultant gravitational masses and
smaller natal kicks, leading to lower orbital eccentricities when the e-capture
SN has led to the formation of the second neutron star in a binary system.
Based on the measured masses and eccentricities, we identify four neutron
stars, which have a mean post-collapse gravitational mass of ~1.25 solar
masses, as the product of e-capture SNe. We associate the remaining ten neutron
stars, which have a mean mass of 1.35 solar masses, with Fe-core collapse SNe.
If the e-capture supernova occurs during the formation of the first neutron
star, then this should substantially increase the formation probability for
double neutron stars, given that more systems will remain bound with the
smaller kicks. However, this does not appear to be the case for any of the
observed systems, and we discuss possible reasons for this.
|
The hypergeometric function method naturally provides the analytic
expressions of scalar integrals for the Feynman diagrams concerned in some
connected regions of the independent kinematic variables, and also yields the
systems of homogeneous linear partial differential equations satisfied by the
corresponding scalar integrals. Taking the one-loop $B_{_0}$ and massless
$C_{_0}$ functions, as well as the scalar integrals of the two-loop vacuum
and sunset diagrams, as examples, we verify that our expressions coincide with
the well-known results in the literature. Based on the multiple hypergeometric functions of
independent kinematic variables, the systems of homogeneous linear partial
differential equations satisfied by the mentioned scalar integrals are
established. Using the calculus of variations, one recognizes the system of
linear partial differential equations as the stationarity conditions of a
functional under some given restrictions, which is the cornerstone for
continuing the scalar integrals numerically to the whole kinematic domain
with finite element methods. In principle this method can be used to
evaluate the scalar integrals of any Feynman diagram.
|
We extend the Type I and Type III seesaw mechanisms to generate neutrino
masses within the left-right symmetric theories where parity is spontaneously
broken. We construct a next to minimal left-right symmetric model where
neutrino masses are generated through a variant $double$ seesaw mechanism. In
our model at least one of the triplet fermions and one of the right-handed
neutrinos are at the TeV scale, while the others are heavy. The phenomenological aspects and
testability of the TeV scale particles at collider experiments are discussed.
The decays of heavy fermions leading to leptogenesis are pointed out.
|