A production function $f$ is called quasi-sum if there are strictly monotone
functions $F, h_1,...,h_n$ with $F'>0$ such that $$f(x)= F(h_1 (x_1)+...+h_n
(x_n)).$$ The justification for studying quasi-sum production functions is that
these functions appear as solutions of the general bisymmetry equation and they
are related to the problem of consistent aggregation.
In this article, first we present the classification of quasi-sum production
functions satisfying the constant elasticity of substitution property. Then we
prove that if a quasi-sum production function satisfies the constant elasticity
of substitution property, then its graph has vanishing Gauss-Kronecker
curvature (or its graph is a flat space) if and only if the production function
is either a linearly homogeneous generalized ACMS function or a linearly
homogeneous generalized Cobb-Douglas function.
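As an illustration (using one standard normalization of these families; the paper's exact conventions may differ), both classes named in the theorem are quasi-sum:

```latex
% Generalized ACMS (CES): quasi-sum with h_i(x_i) = a_i x_i^{\rho},  F(u) = A u^{1/\rho};
% homogeneous of degree one for any \rho \neq 0.
f(x) = A\Big(\sum_{i=1}^{n} a_i x_i^{\rho}\Big)^{1/\rho}
% Generalized Cobb-Douglas: quasi-sum with h_i(x_i) = \alpha_i \ln x_i,  F(u) = A e^{u};
% linearly homogeneous when \sum_i \alpha_i = 1.
f(x) = A\prod_{i=1}^{n} x_i^{\alpha_i}
```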
|
This paper proposes a new method that combines check-pointing methods with
error-controlled lossy compression for large-scale high-performance
Full-Waveform Inversion (FWI), an inverse problem commonly used in geophysical
exploration. This combination can significantly reduce data movement, allowing
a reduction in run time as well as peak memory. In the Exascale computing era,
frequent data transfer (limited by, e.g., memory bandwidth, PCIe bandwidth for
GPUs, or network bandwidth) is the performance bottleneck rather than the peak FLOPS of the
processing unit. Like many other adjoint-based optimization problems, FWI is
costly in terms of the number of floating-point operations, large memory
footprint during backpropagation, and data transfer overheads. Past work for
adjoint methods has developed checkpointing methods that reduce the peak memory
requirements during backpropagation at the cost of additional floating-point
computations. Combining this traditional checkpointing with error-controlled
lossy compression, we explore the three-way tradeoff between memory, precision,
and time to solution. We investigate how approximation errors introduced by
lossy compression of the forward solution impact the objective function
gradient and final inverted solution. Empirical results from these numerical
experiments indicate that high lossy-compression rates (compression factors
ranging up to 100) have a relatively minor impact on convergence rates and the
quality of the final solution.
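A minimal sketch of the combination described above, with a toy error-bounded quantizer and a toy time step standing in for a real compressor (such as ZFP or SZ) and a wave propagator; all names and parameters here are illustrative, not the paper's implementation:

```python
import math

def compress(state, abs_err):
    """Toy error-controlled lossy compression: uniform quantization with a
    guaranteed absolute error bound (stand-in for a real compressor)."""
    step = 2.0 * abs_err
    return [round(v / step) for v in state]  # small ints compress well downstream

def decompress(qstate, abs_err):
    step = 2.0 * abs_err
    return [q * step for q in qstate]

def forward_step(state):
    """Toy nonlinear update standing in for one forward-modeling time step."""
    return [math.cos(v) + 0.1 * v for v in state]

def forward_with_checkpoints(state, nsteps, every, abs_err):
    """Forward sweep storing compressed checkpoints every `every` steps."""
    ckpts = {0: compress(state, abs_err)}
    for t in range(1, nsteps + 1):
        state = forward_step(state)
        if t % every == 0:
            ckpts[t] = compress(state, abs_err)
    return state, ckpts

def restore(ckpts, t, every, abs_err):
    """Recompute the forward state at time t from the nearest earlier
    checkpoint: extra flops traded for reduced peak memory, as in classic
    adjoint checkpointing, plus compression error bounded by abs_err."""
    t0 = (t // every) * every
    state = decompress(ckpts[t0], abs_err)
    for _ in range(t - t0):
        state = forward_step(state)
    return state
```

The quantizer guarantees a reconstruction error of at most `abs_err` per value, which is the "error-controlled" property the gradient-accuracy study depends on.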
|
Graphene's near-field radiative heat transfer is determined from its
electrical conductivity, commonly modeled using the local Kubo and Drude
formulas. In this letter, we analyze the non-locality of graphene's electrical
conductivity using the Lindhard model combined with the Mermin relaxation time
approximation. We also study how the variation of electrical conductivity with
wavevector affects near-field radiative conductance between two graphene sheets
separated by a vacuum gap. It is shown that the variation of electrical
conductivity with wavevector, $k_{\rho}$, is appreciable for values of $k_{\rho}$
greater than $100k_0$, where $k_0$ is the magnitude of the wavevector in
free space. The Kubo electrical conductivity provides an accurate estimate of
the spectral radiative conductance between two graphene sheets, except around
the surface-plasmon-polariton frequency of graphene and at separation
gaps smaller than 20 nm, where there is a non-negligible contribution from modes
with $k_{\rho}>100k_0$ to the radiative conductance. The Drude formula proves
to be inaccurate for modeling the electrical conductivity and radiative
conductance of graphene except at temperatures much below the Fermi
temperature and frequencies much smaller than $2\mu_c/\hbar$, where
$\mu_c$ and $\hbar$ are the chemical potential and the reduced Planck
constant, respectively. It is also shown that the electronic scattering
processes should be considered in the Lindhard model properly, such that the
local electron number is conserved. A substitution of ${\omega}$ by
${\omega}+i{\gamma}$ (${\omega}$, $i$, and ${\gamma}$ being the angular
frequency, imaginary unit, and scattering rate, respectively) in the
collisionless Lindhard model does not satisfy the conservation of the local
electron number and results in significant errors in computing graphene's
electrical conductivity and radiative conductance.
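For reference, the local intraband (Drude) conductivity referred to above is commonly written $\sigma(\omega) = i e^2 \mu_c / [\pi\hbar^2(\omega + i\gamma)]$, valid for $\mu_c \gg k_B T$. A minimal sketch in natural units (the specific numbers are illustrative, not from the paper):

```python
import math

def drude_sigma(omega, mu_c, gamma, e=1.0, hbar=1.0):
    """Intraband (Drude) sheet conductivity of graphene,
    sigma = i e^2 mu_c / (pi hbar^2 (omega + i gamma)),
    in natural units (e = hbar = 1 by default). As the abstract notes, it is
    reliable only for temperatures well below the Fermi temperature and
    frequencies well below 2*mu_c/hbar."""
    return 1j * e**2 * mu_c / (math.pi * hbar**2 * (omega + 1j * gamma))
```

The real part (dissipation) is proportional to the scattering rate $\gamma$, and the whole expression is linear in the chemical potential $\mu_c$.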
|
In this paper the Feynman Green function for Maxwell's theory in curved
space-time is studied by using the Fock-Schwinger-DeWitt asymptotic expansion;
the point-splitting method is then applied, since it is a valuable tool for
regularizing divergent observables. Among these, the stress-energy tensor is
expressed in terms of second covariant derivatives of the Hadamard Green
function, which is also closely linked to the effective action; therefore one
obtains a series expansion for the stress-energy tensor. Its divergent part can
be isolated, and a concise formula is here obtained: by dimensional analysis
and combinatorics, there are two kinds of terms: quadratic in curvature tensors
(Riemann, Ricci tensors and scalar curvature) and linear in their second
covariant derivatives. This formula holds for every space-time metric; it is
made even more explicit in the physically relevant particular cases of
Ricci-flat and maximally symmetric spaces, and fully evaluated for some
examples of physical interest: Kerr and Schwarzschild metrics and de Sitter
space-time.
|
Moduli spaces of (polarized) Enriques surfaces can be described as open
subsets of modular varieties of orthogonal type. It was shown by Gritsenko and
Hulek that there are, up to isomorphism, only finitely many different moduli
spaces of polarized Enriques surfaces. Here we investigate the possible
arithmetic groups and show that there are exactly $87$ such groups up to
conjugacy. We also show that all moduli spaces are dominated by a moduli space
of polarized Enriques surfaces of degree $1240$. Ciliberto, Dedieu, Galati, and
Knutsen have also investigated moduli spaces of polarized Enriques surfaces in
detail. We discuss how our enumeration relates to theirs. We further compute
the Tits building of the groups in question. Our computation is based on groups
and indefinite quadratic forms, and the algorithms used are explained.
|
We study asymptotic decay rates of viscosity solutions to some doubly
nonlinear parabolic equations, including Trudinger's equation. We also prove a
Phragm\'en-Lindel\"of type result and show its optimality.
|
A switch-based hybrid network is a promising implementation for beamforming in
large-scale millimetre wave (mmWave) antenna arrays. By fully exploiting the
sparse nature of the mmWave channel, such hybrid beamforming reduces complexity
and power consumption when compared with a structure based on phase shifters.
However, the difficulty of designing an optimum beamformer in the analog domain
is prohibitive due to the binary nature of such a switch-based structure. Thus,
here we propose a new method for designing a switch-based hybrid beamformer for
massive MIMO communications in mmWave bands. We first propose a method for
decoupling the joint optimization of analog and digital beamformers by
confining the problem to a rank-constrained subspace. We then approximate the
solution through two approaches: norm maximization (SHD-NM), and majorization
(SHD-QRQU). In the norm maximization method, we propose a modified sequential
convex programming (SCP) procedure that maximizes the mutual information while
addressing the mismatch incurred from approximating the log-determinant by a
Frobenius norm. In the second method, we employ a lower bound on the mutual
information by QR factorization. We also introduce linear constraints in order
to include frequently-used partially-connected structures. Finally, we show the
feasibility, and effectiveness of the proposed methods through several
numerical examples. The results demonstrate the ability of the proposed methods
to closely track the spectral efficiency provided by the unconstrained optimal
beamformer and the phase-shifting hybrid beamformer, and to outperform a
competing switch-based hybrid beamformer.
|
Large-scale data centers and cloud computing have turned system configuration
into a challenging problem. Several widely-publicized outages have been blamed
not on software bugs, but on configuration bugs. To cope, thousands of
organizations use system configuration languages to manage their computing
infrastructure. Of these, Puppet is the most widely used with thousands of
paying customers and many more open-source users. The heart of Puppet is a
domain-specific language that describes the state of a system. Puppet already
performs some basic static checks, but they only prevent a narrow range of
errors. Furthermore, testing is ineffective because many errors are only
triggered under specific machine states that are difficult to predict and
reproduce. With several examples, we show that a key problem with Puppet is
that configurations can be non-deterministic.
This paper presents Rehearsal, a verification tool for Puppet configurations.
Rehearsal implements a sound, complete, and scalable determinacy analysis for
Puppet. To develop it, we (1) present a formal semantics for Puppet, (2) use
several analyses to shrink our models to a tractable size, and (3) frame
determinism-checking as decidable formulas for an SMT solver. Rehearsal then
leverages the determinacy analysis to check other important properties, such as
idempotency. Finally, we apply Rehearsal to several real-world Puppet
configurations.
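The non-determinism problem can be illustrated with a toy check that brute-forces resource orderings; Rehearsal itself encodes determinism symbolically as decidable SMT formulas rather than enumerating, and the resource model below is our own simplification:

```python
from itertools import permutations

def apply_ops(ops):
    """Apply resource operations (resource_type, key, value) to an initially
    empty machine state, in the given order."""
    state = {}
    for _rtype, key, value in ops:
        state[key] = value
    return state

def is_deterministic(ops):
    """A configuration is deterministic if every ordering of its resources
    yields the same final machine state (toy version of Rehearsal's check)."""
    finals = {tuple(sorted(apply_ops(p).items())) for p in permutations(ops)}
    return len(finals) == 1
```

Two resources writing different values to the same file are flagged, while resources touching disjoint keys commute.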
|
We investigate SU(2) lattice gauge theory in four dimensions in the maximally
abelian projection. Studying the effects on different lattice sizes we show
that the deconfinement transition of the fields and the percolation transition
of the monopole currents in the three space dimensions are precisely related.
To arrive at this result properly, a mathematically sound characterization
of the occurring networks of monopole currents and an appropriate method of
gauge fixing turn out to be crucial. In addition we
investigate detailed features of the monopole structure in time direction.
|
We report a three-variable simplified model of excitation fronts in human
atrial tissue. The model is derived by novel asymptotic techniques from
the biophysically realistic model of Courtemanche et al (1998), extending
our previous similar models. An iterative analytical solution of the model is
presented which is in excellent quantitative agreement with the realistic
model. It opens new possibilities for analytical studies as well as for
efficient numerical simulation of this and other cardiac models of similar
structure.
|
As the performance of autonomous systems increases, safety concerns arise,
especially when operating in non-structured environments. To deal with these
concerns, this work presents a safety layer for mechanical systems that detects
and responds to unstable dynamics caused by external disturbances. The safety
layer is implemented independently and on top of already present nominal
controllers, like pose or wrench tracking, and limits power flow when the
system's response would lead to instability. This approach is based on the
computation of the Largest Lyapunov Exponent (LLE) of the system's error
dynamics, which represent a measure of the dynamics' divergence or convergence
rate. By actively computing this metric, divergent and possibly dangerous
system behaviors can be promptly detected. The LLE is then used in combination
with Control Barrier Functions (CBFs) to impose power limit constraints on a
jerk controlled system. The proposed architecture is experimentally validated
on an Omnidirectional Micro Aerial Vehicle (OMAV) both in free flight and
interaction tasks.
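The LLE idea can be illustrated on a toy scalar system rather than the OMAV's error dynamics: for the logistic map at $r=4$ the exact exponent is $\ln 2$, and a positive estimate signals divergent (unstable) behavior, which is what the safety layer reacts to. All parameters below are illustrative:

```python
import math

def lle_logistic(r=4.0, x0=0.1, burn=1000, n=200000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r(1-2x)| along the orbit.
    A positive result indicates exponential divergence of nearby trajectories."""
    x = x0
    for _ in range(burn):          # discard transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        # tiny offset guards against log(0) if x ever hits exactly 0.5
        acc += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)
        x = r * x * (1.0 - x)
    return acc / n
```

In the safety layer described above, the analogous quantity is computed online from measured error dynamics and fed into the CBF power constraint.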
|
We extend an approach of Beliakova for computing knot Floer homology and
implement it in a publicly available computer program. We review the main
programming and optimization methods used. Our program is then used to check
that the Floer homology of a prime non-alternating knot with less than 12
crossings has no torsion.
|
We constructed characteristic identities for the 3-split (polarized) Casimir
operators of simple Lie algebras in the adjoint representations $\mathsf{ad}$
and deduced a certain class of subrepresentations in $\mathsf{ad}^{\otimes 3}$.
The projectors onto invariant subspaces for these subrepresentations were
directly constructed from the characteristic identities for the 3-split Casimir
operators. For all simple Lie algebras, universal expressions for the traces of
higher powers of the 3-split Casimir operators were found and dimensions of the
subrepresentations in $\mathsf{ad}^{\otimes 3}$ were calculated. All our
formulas are in agreement with the universal description of (irreducible)
subrepresentations in $\mathsf{ad}^{\otimes 3}$ for simple Lie algebras in
terms of the Vogel parameters.
|
We compare two non-perturbative techniques for calculating the
single-particle Green's function of interacting Fermi systems with dominant
forward scattering: our recently developed functional integral approach to
bosonization in arbitrary dimensions, and the eikonal expansion. In both
methods the Green's function is first calculated for a fixed configuration of a
background field, and then averaged with respect to a suitably defined
effective action. We show that, after linearization of the energy dispersion at
the Fermi surface, both methods yield for Fermi liquids exactly the same
non-perturbative expression for the quasi-particle residue. However, in the
case of non-Fermi liquid behavior the low-energy behavior of the Green's
function predicted by the eikonal method can be erroneous. In particular, for
the Tomonaga-Luttinger model the eikonal method neither reproduces the correct
scaling behavior of the spectral function, nor predicts the correct location of
its singularities.
|
We investigate the chemical and structural configuration of acetophenone on
Si(001) using synchrotron radiation core-level spectroscopy techniques and
density functional theory calculations. Samples were prepared by vapour phase
dosing of clean Si(001) surfaces with acetophenone in ultrahigh vacuum. Near
edge X-ray absorption fine structure spectroscopy and photoelectron
spectroscopy measurements were made at room temperature as a function of
coverage density and post-deposition anneal temperature. We show that the
dominant room temperature adsorption structure lies flat on the substrate,
while moderate thermal annealing induces the breaking of Si-C bonds between the
phenyl ring and the surface resulting in the reorientation of the adsorbate
into an upright configuration.
|
Open-source development has revolutionized the software industry by promoting
collaboration, transparency, and community-driven innovation. Today, a vast
amount of open-source software of various kinds, forming networks of
repositories, is often hosted on GitHub - a popular software development
groups of similar repositories, GitHub introduced repository topics in 2017
that enable users to more easily explore relevant projects by type, technology,
and more. It is thus crucial to accurately assign topics for each GitHub
repository. Current methods for automatic topic recommendation rely heavily on
TF-IDF for encoding textual data, presenting challenges in understanding
semantic nuances. This paper addresses the limitations of existing techniques
by proposing Legion, a novel approach that leverages Pre-trained Language
Models (PTMs) for recommending topics for GitHub repositories. The key novelty
of Legion is three-fold. First, Legion leverages the extensive capabilities of
PTMs in language understanding to capture contextual information and semantic
meaning in GitHub repositories. Second, Legion overcomes the challenge of
long-tailed distribution, which results in a bias toward popular topics in
PTMs, by proposing a Distribution-Balanced Loss (DB Loss) to better train the
PTMs. Third, Legion employs a filter to eliminate vague recommendations,
thereby improving the precision of PTMs. Our empirical evaluation on a
benchmark dataset of real-world GitHub repositories shows that Legion can
improve vanilla PTMs by up to 26% on recommending GitHub topics. Legion also
can suggest GitHub topics more precisely and effectively than the
state-of-the-art baseline with an average improvement of 20% and 5% in terms of
Precision and F1-score, respectively.
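Legion's DB Loss is specific to the paper; as a generic illustration of the underlying idea only (down-weighting head topics and up-weighting tail topics so training is not biased toward popular labels), an inverse-frequency weighting sketch:

```python
def rebalanced_weights(label_counts, n_samples):
    """Generic long-tail rebalancing (NOT Legion's exact DB Loss): weight each
    label inversely to its empirical frequency, then rescale so the mean
    weight is 1. Rare topics then contribute comparably to common ones."""
    weights = {}
    for label, cnt in label_counts.items():
        freq = cnt / n_samples
        weights[label] = 1.0 / freq
    total = sum(weights.values())
    return {lab: w * len(weights) / total for lab, w in weights.items()}
```

In a multi-label loss these weights would multiply each label's per-example term.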
|
A novel copula-based multivariate panel ordinal model is developed to
estimate structural relations among components of well-being. Each ordinal
time-series is modelled using a copula-based Markov model to relate the
marginal distributions of the response at each time of observation and then, at
each observation time, the conditional distributions of each ordinal
time-series are joined using a multivariate t copula. Maximum simulated
likelihood based on evaluating the multidimensional integrals of the likelihood
with randomized quasi Monte Carlo methods is used for the estimation.
Asymptotic calculations show that our method is nearly as efficient as maximum
likelihood for fully specified multivariate copula models. Our findings
highlight the importance of one's relative position in evaluating their
well-being with no direct effects of socio-economic characteristics on
well-being but strong indirect effects through their impact on components of
well-being. Temporal resilience, habit formation and behavioural traits can
explain the dependence in the joint tails over time and across well-being
components.
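The coupling step can be sketched with a bivariate Gaussian copula (the paper uses a multivariate t copula; a Gaussian one keeps the sketch stdlib-only). Correlated normals are mapped to uniforms via $\Phi$ and then to ordinal categories via marginal cutpoints; the correlation and cutpoints below are illustrative:

```python
import random
from math import sqrt
from statistics import NormalDist

def gaussian_copula_ordinal(rho, cutpoints, n, seed=0):
    """Sample n pairs of dependent ordinal responses: correlated standard
    normals -> uniforms via the normal CDF -> ordinal categories by counting
    how many marginal cutpoints each uniform exceeds."""
    rng = random.Random(seed)
    nd = NormalDist()
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + sqrt(1 - rho**2) * rng.gauss(0, 1)  # Cholesky step
        u1, u2 = nd.cdf(z1), nd.cdf(z2)
        o1 = sum(u1 > c for c in cutpoints)
        o2 = sum(u2 > c for c in cutpoints)
        out.append((o1, o2))
    return out
```

With a strong copula correlation the sampled ordinals are visibly associated even though each marginal is fixed by its cutpoints alone.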
|
The Kubo-Greenwood (KG) formula is often used in conjunction with Kohn-Sham
(KS) density functional theory (DFT) to compute the optical conductivity,
particularly for warm dense matter. For applying the KG formula, all KS
eigenstates and eigenvalues up to an energy cutoff are required and thus the
approach becomes expensive, especially for high temperatures and large systems,
scaling cubically with both system size and temperature. Here, we develop an
approach to calculate the KS conductivity within the stochastic DFT (sDFT)
framework, which requires knowledge only of the KS Hamiltonian but not its
eigenstates and values. We show that the computational effort associated with
the method scales linearly with system size and reduces in proportion to the
temperature, in contrast to the cubic scaling of traditional deterministic
approaches. In addition, we find that the method allows an accurate description
of the entire spectrum, including the high-frequency range, unlike the
deterministic method which is compelled to introduce a high-frequency cut-off
due to memory and computational time constraints. We apply the method to
helium-hydrogen mixtures in the warm dense matter regime at temperatures of
$\sim$60 kK and find that the system displays two conductivity phases,
where a transition from non-metal to metal occurs when hydrogen atoms
constitute $\sim$0.3 of the total atoms in the system.
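The reason sDFT needs only the KS Hamiltonian's action (never its eigenstates) is that it evaluates traces stochastically. A minimal Hutchinson-style trace estimator conveys the idea (a generic illustration, not the paper's implementation):

```python
import random

def hutchinson_trace(matvec, dim, n_samples, seed=0):
    """Estimate Tr(A) ~ (1/N) sum_k chi_k^T A chi_k using random +-1
    (Rademacher) vectors chi_k. Only the action of A (matvec) is required,
    which is the key property exploited by stochastic DFT."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        chi = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        achi = matvec(chi)
        acc += sum(c * a for c, a in zip(chi, achi))
    return acc / n_samples
```

The estimator is unbiased, and its cost per sample is one matrix-vector product, which is what yields the linear scaling with system size mentioned above.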
|
The speed of firing pattern propagation in a synfire chain, composed of
non-leaky integrate-and-fire neurons, and assuming homogeneous connection
delays, is studied. An explicit relation, relating the propagation speed to the
connecting weights distribution and other network parameters, is derived. The
analytic results are then checked with a computer simulation. When the network
is fed with a fully synchronized input pattern, the pattern propagation speed
is independent of the weight parameters. When the input is asynchronous, the
propagation speed is, depending on the weight parameters, faster or slower than
in the synchronous case. In this case the propagation speed increases by
increasing the mean or standard deviation of connecting weights. The biological
relevance of these findings and their relevance to the notion of synfire chains
are discussed.
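The synchronous-input result can be reproduced in a toy version of the model: with delta synapses and a homogeneous delay, a non-leaky neuron fires at the common arrival time regardless of how the weights are distributed, as long as their sum reaches threshold, while asynchronous inputs make the firing time weight-dependent. The numbers below are illustrative:

```python
def layer_fire_time(spike_times, weights, threshold, delay):
    """Firing time of a non-leaky integrate-and-fire neuron with delta
    synapses: each presynaptic spike arrives at spike_time + delay, the
    potential jumps by the weight, and the neuron fires the moment the
    running sum reaches threshold. Returns None if it never fires."""
    arrivals = sorted((t + delay, w) for t, w in zip(spike_times, weights))
    v = 0.0
    for t, w in arrivals:
        v += w
        if v >= threshold:
            return t
    return None
```

For a fully synchronized input all jumps land at the same instant, so the firing time is just input time plus delay for any weight vector whose sum crosses threshold; staggered inputs break this invariance.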
|
We briefly review why the non-linear realisation of the semi-direct product
of a group with one of its representations leads to a field theory defined on a
generalised space-time equipped with a generalised vielbein. We give formulae,
which only involve matrix multiplication, for the generalised vielbein, the
Cartan forms and their transformations. We consider the generalised space-time
introduced in 2003 in the context of the non-linear realisation of the
semi-direct product of E(11) and its first fundamental representation. For this
latter theory we give explicit expressions for the generalised vielbein up to
and including the levels associated with the dual graviton in four, five and
eleven dimensions and for the IIB theory in ten dimensions. We also compute the
generalised vielbein, up to the analogous level, for the non-linear realisation
of the semi-direct product of very extended SL(2) with its first fundamental
representation, which is a theory associated with gravity in four dimensions.
|
Melittin, a natural antimicrobial peptide comprising 26 amino acid residues,
can kill bacteria by inducing pores in cell membranes. Clinical applications of
melittin as an antibiotic require a thorough understanding of its poration
mechanism and mutations that enhance its antimicrobial activity. Previous
experiments showed that Melp5, a variant of melittin with five mutations, exhibits a
higher poration ability. However, the mechanism of the enhanced poration
ability is not fully understood. Here, we investigated the mechanism by
comparing the poration of melittin and Melp5 using coarse-grained (CG) and
all-atom (AA) molecular dynamics (MD) simulations. We observe that Melp5 is
likely to form a pore with 5 peptides (pentameric), while melittin is likely to
form a pore with 4 peptides (tetrameric). Our atomistic MD simulations show
that the pentameric pore of Melp5 has a higher water permeability than the
tetrameric pore of melittin. We also analyze the stability of the pores of
melittin and Melp5 by calculating the interaction energies of the pores. In
particular, we investigate the effects of mutant residues on pore stability by
calculating electrostatic and LJ interactions. These results should provide
insight into the enhanced poration ability of Melp5 and help push it toward
clinical applications.
|
This paper exhibits fundamental structure underlying Lie algebra homology
with coefficients in tensor products of the adjoint representation, mostly
focusing upon the case of free Lie algebras.
The main result yields a DG category that is constructed from the PROP
associated to the Lie operad. Underlying this is a two-term complex of
bimodules over this PROP; it is a quotient of the universal Chevalley-Eilenberg
complex.
The homology of this DG category is intimately related to outer functors over
free groups (introduced in earlier joint work with Vespa). This uses the
author's previous results relating functors on free groups to representations
of the PROP associated to the Lie operad.
This gives a direct algebraic explanation as to why the degree one homology
should correspond to an outer functor. Hitherto, the only known argument relied
upon the relationship with the higher Hochschild homology functors that arise
from the work of Turchin and Willwacher.
|
We study the logic obtained by endowing the language of first-order
arithmetic with second-order measure quantifiers. This new kind of
quantification allows us to express that the argument formula is true in a
certain portion of all possible interpretations of the quantified variable. We
show that first-order arithmetic with measure quantifiers is capable of
formalizing simple results from probability theory and, most importantly, of
representing every recursive random function. Moreover, we introduce a
realizability interpretation of this logic in which programs have access to an
oracle from the Cantor space.
|
In this paper we continue the analysis of the two-scale method for the
Monge-Amp\`ere equation for dimension $d \geq 2$ introduced in [10]. We prove
continuous dependence of discrete solutions on data that in turn hinges on a
discrete version of the Alexandroff estimate. They are both instrumental to
prove pointwise error estimates for classical solutions with H\"older and
Sobolev regularity. We also derive convergence rates for viscosity solutions
with bounded Hessians which may be piecewise smooth or degenerate.
|
Self-supervised learning has attracted increasing attention as it learns
data-driven representation from data without annotations. Vision
transformer-based autoencoder (ViT-AE) by He et al. (2021) is a recent
self-supervised learning technique that employs a patch-masking strategy to
learn a meaningful latent space. In this paper, we focus on improving ViT-AE
(nicknamed ViT-AE++) for a more effective representation of 2D and 3D medical
images. We propose two new loss functions to enhance the representation during
training. The first loss term aims to improve self-reconstruction by
considering the structured dependencies and indirectly improving the
representation. The second loss term leverages contrastive loss to optimize the
representation from two randomly masked views directly. As an independent
contribution, we also extend ViT-AE++ to 3D for volumetric medical images. We
extensively evaluate ViT-AE++ on both natural images and medical images,
demonstrating consistent improvement over vanilla ViT-AE and its superiority
over other contrastive learning approaches. Code is available at:
https://github.com/chinmay5/vit_ae_plus_plus.git.
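The second loss term can be illustrated with a generic InfoNCE-style contrastive loss over embeddings of two masked views; this is a sketch of the idea, not the paper's exact formulation, and the temperature value is an assumption:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each anchor embedding should be most similar to
    its own positive (the other masked view of the same image) among all
    positives in the batch. Lower loss = better-aligned views."""
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [cosine(a, p) / temperature for p in positives]
        m = max(sims)  # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        loss += log_denom - sims[i]
    return loss / len(anchors)
```

Matched anchor/positive pairs yield a near-zero loss, while mismatched pairings are penalized heavily.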
|
Let $K$ be the closure of a bounded region in the complex plane with simply
connected complement whose boundary is a piecewise analytic curve with at least
one outward cusp. The asymptotics of zeros of Faber polynomials for $K$ are not
understood in this general setting. Joukowski airfoils provide a particular
class of such sets. We determine the (unique) weak-* limit of the full sequence
of normalized counting measures of the Faber polynomials for Joukowski
airfoils; it is never equal to the potential-theoretic equilibrium measure of
$K$. This implies that many of these airfoils admit an electrostatic skeleton
and also explains an interesting class of examples of Ullman related to
Chebyshev quadrature.
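For concreteness, a Joukowski airfoil is the image of a circle passing through $z=1$ under the map $w = z + 1/z$; the trailing-edge cusp at $w=2$ is exactly the outward cusp mentioned above. A small sketch (the circle's centre is an illustrative choice, not from the paper):

```python
import cmath
import math

def joukowski(z):
    """The Joukowski map w = z + 1/z."""
    return z + 1.0 / z

def airfoil(center=-0.1 + 0.1j, n=360):
    """Boundary points of a Joukowski airfoil: image under the Joukowski map
    of the circle through z = 1 centred at `center`, parametrized so that
    the k = 0 point is z = 1 (hence w = 2, the trailing-edge cusp)."""
    return [joukowski(center + (1.0 - center) * cmath.exp(2j * math.pi * k / n))
            for k in range(n)]
```

Varying the centre changes camber and thickness of the resulting airfoil.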
|
We investigate the relation between the Connes-Kreimer Hopf algebra approach to
renormalization and deformation quantization. Both approaches rely on
factorization, the correspondence being established at the level of Wiener-Hopf
algebras, and double Lie algebras/Lie bialgebras, via r-matrices.
It is suggested that the QFTs obtained via deformation quantization and
renormalization correspond to each other in the sense of
Kontsevich/Cattaneo-Felder.
|
Different ways of linguistically expressing the same real-world event can
lead to different perceptions of what happened. Previous work has shown that
different descriptions of gender-based violence (GBV) influence the reader's
perception of who is to blame for the violence, possibly reinforcing
stereotypes which see the victim as partly responsible, too. As a contribution
to raise awareness on perspective-based writing, and to facilitate access to
alternative perspectives, we introduce the novel task of automatically
rewriting GBV descriptions as a means to alter the perceived level of
responsibility on the perpetrator. We present a quasi-parallel dataset of
sentences with low and high perceived responsibility levels for the
perpetrator, and experiment with unsupervised (mBART-based), zero-shot and
few-shot (GPT3-based) methods for rewriting sentences. We evaluate our models
using a questionnaire study and a suite of automatic metrics.
|
We propose a rigorous approach to semi-infinite lattice systems, illustrated
with the study of surface transitions of the semi-infinite Potts model.
|
We consider the problem of estimating a spectral risk measure (SRM) from
i.i.d. samples, and propose a novel method that is based on numerical
integration. We show that our SRM estimate concentrates exponentially when the
underlying distribution has bounded support. Further, we also consider the case
when the underlying distribution is either Gaussian or exponential, and derive
a concentration bound for our estimation scheme. We validate the theoretical
findings on a synthetic setup, and in a vehicular traffic routing application.
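A spectral risk measure has the form $\rho_\varphi(X) = \int_0^1 \varphi(p)\,F^{-1}(p)\,dp$ for a risk spectrum $\varphi$. A minimal order-statistics/midpoint-rule estimator conveys the numerical-integration idea (our own toy discretization, not the paper's exact scheme):

```python
def srm_estimate(samples, phi):
    """Estimate rho(X) = int_0^1 phi(p) F^{-1}(p) dp from i.i.d. samples by
    a midpoint rule over order statistics: the empirical quantile function is
    the i-th order statistic on ((i-1)/n, i/n], so each order statistic
    receives weight phi(midpoint of its interval) / n."""
    xs = sorted(samples)
    n = len(xs)
    return sum(phi((i + 0.5) / n) * x for i, x in enumerate(xs)) / n
```

With a flat spectrum $\varphi \equiv 1$ the estimator reduces to the sample mean; an increasing spectrum puts more weight on the upper tail, as expected for a risk-averse measure.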
|
Shannon entropies of one- and two-electron atomic structure factors in the
position and momentum representations are used to examine the behavior of the
off-diagonal elements of density matrices with respect to the uncertainty
principle and to analyze the effects of electron correlation on off-diagonal
order. We show that electron correlation induces off-diagonal order in position
space which is characterized by larger entropic values. Electron correlation in
momentum space is characterized by smaller entropic values as information is
forced into regions closer to the diagonal. Related off-diagonal correlation
functions are also discussed.
|
We demonstrate that the law of the rectilinear coexistence diameter in
two-dimensional (2D) mixtures of non-spherical colloids and non-adsorbing
polymers is violated. Upon approach of the critical point, the diameter shows
logarithmic singular behavior governed by a term $t \ln t$, with $t$ the relative
distance from the critical point. No sign of a term $t^{2\beta}$ could be detected,
with $\beta$ the critical exponent of the order parameter, indicating a very weak or
absent Yang-Yang anomaly. Our analysis thus reveals that non-spherical particle
shape alone is not sufficient for the formation of a pronounced Yang-Yang
anomaly in the critical behavior of fluids.
|
The normal-mode analysis of the Reynolds-Orr energy equation governing the
stability of viscous motion for general three-dimensional disturbances has been
revisited. The energy equation has been solved as an unconstrained minimization
problem for the Couette-Poiseuille flow. The minimum Reynolds number for every
Couette-Poiseuille velocity profile has been computed and compared with those
available in the literature. For fully three-dimensional disturbances, it is
shown that the minimum Reynolds number is in general smaller than the
corresponding two-dimensional counterpart for all the Couette-Poiseuille
profiles except plane Couette flow.
|
Capturing and re-animating the 3D structure of articulated objects present
significant barriers. On one hand, methods requiring extensively calibrated
multi-view setups are prohibitively complex and resource-intensive, limiting
their practical applicability. On the other hand, while single-camera Neural
Radiance Fields (NeRFs) offer a more streamlined approach, they have excessive
training and rendering costs. 3D Gaussian Splatting would be a suitable
alternative but for two reasons: first, existing methods for 3D dynamic
Gaussians require synchronized multi-view cameras, and second, dynamic
scenarios lack controllability. We present CoGS, a method for
Controllable Gaussian Splatting that enables the direct manipulation of scene
elements, offering real-time control of dynamic scenes without the prerequisite
of pre-computing control signals. We evaluated CoGS using both synthetic and
real-world datasets that include dynamic objects that differ in degree of
difficulty. In our evaluations, CoGS consistently outperformed existing dynamic
and controllable neural representations in terms of visual fidelity.
|
We consider the solution to the parabolic Anderson model with homogeneous
initial condition in large time-dependent boxes. We derive stable limit
theorems, ranging over all possible scaling parameters, for the rescaled sum
over the solution depending on the growth rate of the boxes. Furthermore, we
give sufficient conditions for a strong law of large numbers.
|
We study the dynamics of a particle in a space that is non-differentiable.
Non-smooth geometrical objects have an inherently probabilistic nature and,
consequently, introduce stochasticity in the motion of a body that lives in
their realm. We use the mathematical concept of fiber bundle to characterize
the multivalued nature of geodesic trajectories going through a point that is
non-differentiable. Then, we generalize our concepts to everywhere non-smooth
structures. The resulting theoretical framework can be considered a
hybridization of the theory of surfaces and the theory of stochastic processes.
We keep the concepts as general as possible, in order to allow for the
introduction of other fundamental processes capable of modeling the fractality
or the fluctuations of any conceivable continuous, but non-differentiable
space.
|
We study the collision rates and velocities for point-particles of different
sizes in turbulent flows. We construct fits for the collision rates at
specified velocities (effectively a collisional velocity probability
distribution) for particle stopping time ratios up to four; already by that
point the collisional partners are very poorly correlated and so the results
should be robust for even larger stopping time ratios. Significantly, we find
that while particles of very different masses have approximately Maxwellian
collisional statistics, as the mass ratio shrinks the distribution changes
dramatically. At small stopping time ratios, the collisional partners are
highly correlated and we find a population of high number density (clustered),
low relative-velocity particle pairs. Unlike in the case of identical stopping
time collisional partners, this low relative-velocity clustered population is
collisional, but the clustering is barely adequate to trigger bulk effects such
as the streaming instability. We conclude our analysis by constructing a master
fit to the collisional statistics as a function only of the stopping time
ratio. Together with our previous work for identical stopping time particle
pairs, this provides a recipe for including collisional velocity probability
distributions in dust coagulation models for protoplanetary disks. We also
include our recipe for determining particle collisional diagnostics from
numerical simulations.
|
The sandpile group Pic^0(G) of a finite graph G is a discrete analogue of the
Jacobian of a Riemann surface which was rediscovered several times in the
contexts of arithmetic geometry, self-organized criticality, random walks, and
algorithms. Given a ribbon graph G, Holroyd et al. used the "rotor-routing"
model to define a free and transitive action of Pic^0(G) on the set of spanning
trees of G. However, their construction depends a priori on a choice of
basepoint vertex. Ellenberg asked whether this action does in fact depend on
the choice of basepoint. We answer this question by proving that the action of
Pic^0(G) is independent of the basepoint if and only if G is a planar ribbon
graph.
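As an aside, the basic rotor-routing (rotor walk) mechanism behind this action can be sketched in a few lines; the graph, cyclic orderings, and function name below are illustrative choices, and the full Pic^0(G) action on spanning trees involves bookkeeping not shown here.

```python
def rotor_walk(neighbors, rotors, start, sink):
    """Route a single chip from `start` to `sink` by rotor-routing.

    neighbors: dict vertex -> cyclic list of neighbors (the ribbon structure).
    rotors: dict vertex -> index into that vertex's neighbor list.
    Each visit to v advances the rotor at v one step in the cyclic order
    and sends the chip to the neighbor the rotor now points to.
    """
    v = start
    while v != sink:
        rotors[v] = (rotors[v] + 1) % len(neighbors[v])
        v = neighbors[v][rotors[v]]
    return rotors

# 4-cycle with the basepoint (sink) at vertex 0.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final_rotors = rotor_walk(nbrs, {v: 0 for v in nbrs}, start=2, sink=0)
```

On a finite connected graph the chip always reaches the sink; the final rotor configuration is the data the rotor-routing action records.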
|
Topologically non-trivial electronic structures can give rise to a range of
unusual physical phenomena, and the interplay of band topology with other
effects such as electronic correlations and magnetism requires further
exploration. The rare earth monopnictides $X$(Sb,Bi) ($X$ = lanthanide) are a
large family of semimetals where these different effects may be tuned by the
substitution of rare-earth elements. Here we observe anomalous behavior in the
quantum oscillations of one member of this family, antiferromagnetic SmSb. The
analysis of Shubnikov-de Haas (SdH) oscillations provides evidence for a
non-zero Berry phase, indicating a non-trivial topology of the $\alpha$-band.
Furthermore, striking differences are found between the temperature dependence
of the amplitudes of de Haas-van Alphen effect oscillations, which are well
fitted by the Lifshitz-Kosevich (LK) formula across the measured temperature
range, and those from SdH measurements which show a significant disagreement
with LK behavior at low temperatures. Our findings of unusual quantum
oscillations in an antiferromagnetic, mixed valence semimetal with a possible
non-trivial band topology can provide an opportunity for studying the interplay
between topology, electronic correlations and magnetism.
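For orientation, the Lifshitz-Kosevich thermal damping factor referred to above takes the standard form

```latex
R_T = \frac{X}{\sinh X}, \qquad X = \frac{2\pi^2 k_B T\, m^*}{\hbar e B},
```

so oscillation amplitudes whose temperature dependence departs from this factor, as the SdH data here do at low temperatures, signal behavior beyond the standard quasiparticle description.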
|
Federated multi-view clustering offers the potential to develop a global
clustering model using data distributed across multiple devices. However,
current methods face challenges due to the absence of label information and the
paramount importance of data privacy. A significant issue is the feature
heterogeneity across multi-view data, which complicates the effective mining of
complementary clustering information. Additionally, the inherent incompleteness
of multi-view data in a distributed setting can further complicate the
clustering process. To address these challenges, we introduce a federated
incomplete multi-view clustering framework with heterogeneous graph neural
networks (FIM-GNNs). In the proposed FIM-GNNs, autoencoders built on
heterogeneous graph neural network models are employed for feature extraction
of multi-view data at each client site. At the server level, heterogeneous
features from overlapping samples of each client are aggregated into a global
feature representation. Global pseudo-labels are generated at the server to
enhance the handling of incomplete view data, where these labels serve as a
guide for integrating and refining the clustering process across different data
views. Comprehensive experiments have been conducted on public benchmark
datasets to verify the performance of the proposed FIM-GNNs in comparison with
state-of-the-art algorithms.
|
Rapidity distributions for $\Lambda$ and $\bar{\Lambda}$ hyperons in central
Pb-Pb collisions at 40, 80 and 158 A$\cdot$GeV and for ${\rm K}_{s}^{0}$ mesons
at 158 A$\cdot$GeV are presented. The lambda multiplicities are studied as a
function of collision energy together with AGS and RHIC measurements and
compared to model predictions. A different energy dependence of the
$\Lambda/\pi$ and $\bar{\Lambda}/\pi$ ratios is observed. The $\bar{\Lambda}/\Lambda$
ratio shows a steep increase with collision energy. Evidence for a
$\bar{\Lambda}/\bar{\rm p}$ ratio greater than 1 is found at 40 A$\cdot$GeV.
|
Domain adaptation refers to the problem of leveraging labeled data in a
source domain to learn an accurate model in a target domain where labels are
scarce or unavailable. A recent approach for finding a common representation of
the two domains is via domain adversarial training (Ganin & Lempitsky, 2015),
which attempts to induce a feature extractor that matches the source and target
feature distributions in some feature space. However, domain adversarial
training faces two critical limitations: 1) if the feature extraction function
has high-capacity, then feature distribution matching is a weak constraint, 2)
in non-conservative domain adaptation (where no single classifier can perform
well in both the source and target domains), training the model to do well on
the source domain hurts performance on the target domain. In this paper, we
address these issues through the lens of the cluster assumption, i.e., decision
boundaries should not cross high-density data regions. We propose two novel and
related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model,
which combines domain adversarial training with a penalty term that punishes
violations of the cluster assumption; 2) the Decision-boundary Iterative
Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model
as initialization and employs natural gradient steps to further minimize the
cluster assumption violation. Extensive empirical results demonstrate that the
combination of these two models significantly improves state-of-the-art
performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation
benchmarks.
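The cluster-assumption penalty can be made concrete: a common instantiation in this line of work is the conditional entropy of the classifier's predictions on unlabeled target data (VADA additionally uses a virtual-adversarial smoothness term not shown here). A minimal numpy sketch, with an illustrative helper name:

```python
import numpy as np

def conditional_entropy(logits):
    """Mean Shannon entropy of softmax predictions.

    Low entropy means confident predictions, i.e. decision boundaries
    kept away from high-density data regions (the cluster assumption).
    """
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Confident predictions incur a smaller penalty than uncertain ones.
confident = np.array([[10.0, 0.0], [0.0, 10.0]])
uncertain = np.array([[0.1, 0.0], [0.0, 0.1]])
assert conditional_entropy(confident) < conditional_entropy(uncertain)
```

Minimizing this quantity on target data pushes decision boundaries into low-density regions, which is the mechanism both VADA and DIRT-T exploit.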
|
We describe a lightweight RISC-V ISA extension for AES and SM4 block ciphers.
Sixteen instructions (and a subkey load) are required to implement an AES round
with the extension, instead of 80 without. An SM4 step (quarter-round) has 6.5
arithmetic instructions, a similar reduction. Perhaps even more importantly, the
ISA extension helps to eliminate slow, secret-dependent table lookups and to
protect against cache timing side-channel attacks. Having only one S-box, the
extension has a minimal hardware size and is well suited for ultra-low power
applications. AES and SM4 implementations using the ISA extension also have a
much-reduced software footprint. The AES and SM4 instances can share the same
data paths but are independent in the sense that a chip designer can implement
SM4 without AES and vice versa. Full AES and SM4 assembler listings, HDL source
code for the instructions' combinatorial logic, and C code for emulation are
provided to the community under a permissive open source license. The
implementation contains depth- and size-optimized joint AES and SM4 S-Box logic
based on the Boyar-Peralta construction with a shared non-linear middle layer,
demonstrating additional avenues for logic optimization. The instruction logic
has been experimentally integrated into the single-cycle execution path of the
"Pluto" RV32 core and has been tested on an FPGA system.
|
Augmentation is an effective way to exploit the small amount of
labeled protein data. However, most of the existing work focuses on designing
new architectures or pre-training tasks, and relatively little work has studied
data augmentation for proteins. This paper extends data augmentation techniques
previously used for images and texts to proteins and then benchmarks these
techniques on a variety of protein-related tasks, providing the first
comprehensive evaluation of protein augmentation. Furthermore, we propose two
novel semantic-level protein augmentation methods, namely Integrated Gradients
Substitution and Back Translation Substitution, which enable protein
semantic-aware augmentation through saliency detection and biological
knowledge. Finally, we integrate extended and proposed augmentations into an
augmentation pool and propose a simple but effective framework, namely
Automated Protein Augmentation (APA), which can adaptively select the most
suitable augmentation combinations for different tasks. Extensive experiments
have shown that APA enhances the performance of five protein-related tasks by
an average of 10.55% across three architectures compared to vanilla
implementations without augmentation, highlighting its potential to make a
great impact on the field.
|
The purpose of this research report is to present our learning curve and
exposure to the Machine Learning life cycle, using a Kaggle binary
classification data set to explore various techniques from pre-processing to
final optimization and model evaluation. We also highlight the data imbalance
issue and discuss methods of handling that imbalance at the data level, by
over-sampling and under-sampling, not only to reach a balanced class
representation but also to improve overall performance. This work also leaves
open some directions for future work.
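Random over-sampling of the kind discussed can be sketched as follows; the helper name and toy data are illustrative, not from the report:

```python
import random

def random_oversample(X, y, minority_label):
    """Duplicate randomly chosen minority-class rows until classes balance."""
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    data = majority + minority + extra
    random.shuffle(data)
    return [x for x, _ in data], [t for _, t in data]

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]
Xb, yb = random_oversample(X, y, minority_label=1)
assert yb.count(0) == yb.count(1) == 5
```

Under-sampling is the mirror image: majority rows are dropped instead of minority rows being duplicated, trading information loss against training cost.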
|
The leaves in singular holomorphic foliation theory are examples of
quasi-analytic layers. In the first part of our publication we are concerned
with a theory of these subjects. A quasi-analytic decomposition of a complex
manifold is a decomposition into pairwise disjoint connected quasi-analytic
layers. These are holomorphic foliations in the sense of P. Stefan and K.
Spallek. A very different but more usual conception of holomorphic foliations
was developed by P. Baum and R. Bott. It is based on holomorphic sheaf theory.
In the second part we study the relation between quasi-analytic decompositions
and singular holomorphic foliations in the sense of Baum and Bott.
|
We describe a model-independent approach for the extraction of spin-wave
dispersion curves from neutron total scattering data. The method utilises a
statistical analysis of real-space spin configurations to calculate
spin-dynamical quantities. The RMCProfile implementation of the reverse Monte
Carlo refinement process is used to generate a large ensemble of supercell spin
configurations from powder diffraction data. Our analysis of these
configurations gives spin-wave dispersion curves that agree well with those
determined independently using neutron triple-axis spectroscopic techniques.
|
Using the canonical approach, we construct for the kappa-symmetric super IIA
D-brane action an equivalent effective action which is characterized by an
auxiliary scalar field. By analyzing the canonical equations of motion for the
kappa-symmetry-gauge-fixed action we find a suitable conformal-like covariant
gauge fixing of reparametrization symmetry to obtain a simplified effective
action where the non-linear square root structure is removed. We discuss how
the two effective actions are connected.
|
Within the framework of the average approach and direct 3D PIC
(particle-in-cell) simulations we demonstrate that the gyrotrons operating in
the regime of developed turbulence can sporadically emit "giant" spikes with
intensities a factor of 100-150 greater than the average radiation power and a
factor of 6-9 exceeding the power of the driving electron beams. Together with
the statistical features such as a long-tail probability distribution, this
allows the interpretation of generated spikes as microwave rogue waves. The
mechanism of spike formation is related to the simultaneous cyclotron
interaction of a gyrating electron beam with forward and backward waves near
the waveguide cutoff frequency as well as with the longitudinal deceleration of
electrons.
|
Using first-principles calculations combined with a semi-empirical van der
Waals dispersion correction, we have investigated structural parameters, mixing
enthalpies, and band gaps of buckled and planar few-layer In$_x$Ga$_{1-x}$N
alloys. We predict that the free-standing buckled phases are less stable than
the planar ones. However, with hydrogen passivation, the buckled
In$_x$Ga$_{1-x}$N alloys become more favorable. Their band gaps can be tuned
from 6 eV to 1 eV with preservation of direct band gap and well-defined Bloch
character, making them promising candidate materials for future light-emitting
applications. Unlike their bulk counterparts, the phase separation could be
suppressed in these two-dimensional systems due to reduced geometrical
constraints. In contrast, the disordered planar thin films undergo severe
lattice distortion, nearly losing the Bloch character for valence bands;
whereas the ordered planar ones maintain the Bloch character yet with the
highest mixing enthalpies.
|
We report on the first observation of electroluminescence amplification with
a Microstrip Plate immersed in liquid xenon. The electroluminescence of the
liquid, induced by alpha-particles, was observed in an intense non-uniform
electric field in the vicinity of 8-$\mu$m narrow anode strips interlaced with
wider cathode ones, deposited on the same side of a glass substrate. The
electroluminescence yield in the liquid reached a value of $(35.5 \pm 2.6)$ VUV
photons/electron. We propose ways of enhancing this response with more
appropriate microstructures towards their potential incorporation as sensing
elements in single-phase noble-liquid detectors.
|
We construct knot invariants from the radical part of projective modules of
restricted quantum groups. We also show a relation between these invariants and
the colored Alexander invariants.
|
A divide-and-conquer cryptanalysis can often be mounted against some
keystream generators composed of several (nonlinear) independent devices
combined by a Boolean function. In particular, any parity-check relation
derived from the periods of some constituent sequences usually leads to a
distinguishing attack whose complexity is determined by the bias of the
relation. However, estimating this bias is a difficult problem since the
piling-up lemma cannot be used. Here, we give two exact expressions for this
bias. Most notably, these expressions lead to a new algorithm for computing the
bias of a parity-check relation, and they also provide some simple formulae for
this bias in some particular cases which are commonly used in cryptography.
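For reference, the independent-variable baseline that the piling-up lemma does cover (and which fails for the correlated parity-check relations studied here) can be checked empirically; names and parameters below are illustrative:

```python
import random

def xor_bias(biases, trials=200_000, seed=1):
    """Empirical bias of the XOR of independent bits, where
    biases[i] = P(b_i = 0) - 1/2."""
    rng = random.Random(seed)
    zeros = 0
    for _ in range(trials):
        x = 0
        for eps in biases:
            x ^= 0 if rng.random() < 0.5 + eps else 1
        zeros += (x == 0)
    return zeros / trials - 0.5

biases = [0.25, 0.25, 0.1]
# Piling-up lemma prediction for *independent* bits: 2^(n-1) * prod(eps_i).
predicted = 2 ** (len(biases) - 1) * 0.25 * 0.25 * 0.1
assert abs(xor_bias(biases) - predicted) < 0.01
```

The point of the paper is precisely that constituent sequences entering a parity-check relation are not independent, so this product formula cannot be used and the exact expressions derived there are needed instead.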
|
We report the results of spatially resolved X-ray spectroscopy of 8 different
ASCA pointings distributed symmetrically around the center of the Perseus
cluster. The outer region of the intracluster gas is roughly isothermal, with
temperature ~ 6-7 keV, and metal abundance ~ 0.3 Solar. Spectral analysis of
the central pointing is consistent with the presence of a cooling flow and a
central metal abundance gradient.
A significant velocity gradient is found along an axis at a position angle of
~135 deg, which is ~ 45 deg discrepant with the major axis of the X-ray
elongation. The radial velocity difference is found to be greater than 1000
km/s/Mpc at the 90% confidence level. Simultaneous fittings of GIS 2 & 3
indicate that the velocity gradient is significant at the 95% confidence level
and the F-test rules out constant velocities at the 99% level. Intrinsic short
and long term variations of gain are unlikely (P < 0.03) to explain the
velocity discrepancies.
|
Optical absorption in amorphous tungsten oxide ($\textit{a}\mathrm{WO}_{3}$),
for photon energies below that of the band gap, can be rationalized in terms of
electronic transitions between localized states. For the study of this
phenomenon, we employed the differential coloration efficiency concept, defined
as the derivative of the optical density with respect to the inserted charge.
We also made use of its extension to a complex quantity in the context of
frequency-resolved studies. Combined $\textit{in situ}$ electrochemical and
optical experiments were performed on electrochromic
$\textit{a}\mathrm{WO}_{3}$ thin films for a wide lithium intercalation range
using an optical wavelength of $810~\mathrm{nm}$ ($1.53~\mathrm{eV}$).
Quasi-equilibrium measurements were made by chronopotentiometry (CP). Dynamic
frequency-dependent measurements were carried out by simultaneous
electrochemical and color impedance spectroscopy (SECIS). The differential
coloration efficiency obtained from CP changes sign at a critical intercalation
level. Its response exhibits an excellent agreement with a theoretical model
that considers electronic transitions between $\mathrm{W}^{4+}$,
$\mathrm{W}^{5+}$, and $\mathrm{W}^{6+}$ sites. For the SECIS experiment, the
low-frequency limit of the differential coloration efficiency shows a general
trend similar to that from CP. However, it does not change sign at a critical
ion insertion level. This discrepancy could be due to degradation effects
occurring in the films at high $\mathrm{Li}^+$ insertion levels. The
methodology and results presented in this work can be of great interest both
for the study of optical absorption in disordered materials and for
applications in electrochromism.
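The differential coloration efficiency, defined above as the derivative of optical density with respect to inserted charge, can be estimated numerically from a measured curve; the OD(q) curve below is synthetic and purely illustrative:

```python
import numpy as np

# Sketch: differential coloration efficiency as dOD/dq (the definition used
# in the text); the toy OD(q) curve is chosen so that the slope changes sign,
# mimicking the sign change reported for the CP data.
q = np.linspace(0.0, 1.0, 201)   # inserted charge (arbitrary units)
od = q**2 * (1.0 - q)            # synthetic optical density curve
ce = np.gradient(od, q)          # differential coloration efficiency

assert ce[10] > 0 and ce[-10] < 0
```

For the frequency-resolved (SECIS) case, the same derivative becomes a complex quantity evaluated per modulation frequency, as described above.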
|
It is well known that recognizers personalized to each user are much more
effective than user-independent recognizers. With the popularity of smartphones
today, although it is not difficult to collect a large set of audio data for
each user, it is difficult to transcribe it. However, it is now possible to
automatically discover acoustic tokens from unlabeled personal data in an
unsupervised way. We therefore propose a multi-task deep learning framework
called a phoneme-token deep neural network (PTDNN), jointly trained from
unsupervised acoustic tokens discovered from unlabeled data and very limited
transcribed data for personalized acoustic modeling. We term this scenario
"weakly supervised". The underlying intuition is that the high degree of
similarity between the HMM states of acoustic token models and phoneme models
may help them learn from each other in this multi-task learning framework.
Initial experiments performed over a personalized audio data set recorded from
Facebook posts demonstrated that very good improvements can be achieved in both
frame accuracy and word accuracy over popularly-considered baselines such as
fDLR, speaker code and lightly supervised adaptation. This approach complements
existing speaker adaptation approaches and can be used jointly with such
techniques to yield improved results.
|
We examine an unexpected but significant source of positive public health
messaging during the COVID-19 pandemic -- K-pop fandoms. Leveraging more than 7
million tweets related to mask-wearing and K-pop between March 2020 and
December 2021, we analyzed the online spread of the hashtag \#WearAMask and
vaccine-related tweets amid anti-mask sentiments and public health
misinformation. Analyses reveal the South Korean boyband BTS as one of the most
significant drivers of health discourse. Tweets from health agencies and
prominent figures that mentioned K-pop generated 111 times more online responses
compared to tweets that did not. These tweets also elicited strong responses
from South America, Southeast Asia, and rural States -- areas often neglected
in Twitter-based messaging by mainstream social media campaigns. Network and
temporal analysis show increased use from right-leaning elites over time.
Mechanistically, strong levels of parasocial engagement and connectedness allow
sustained activism in the community. Our results suggest that public health
institutions may leverage pre-existing audience markets to synergistically
diffuse and target under-served communities both domestically and globally,
especially during health crises such as COVID-19.
|
The main purpose of this paper is to show that Bridgeland's moduli space of
perverse point sheaves for certain flopping contractions gives the flops, and
the Fourier-Mukai transform given by the birational correspondence of the flop
is an equivalence between bounded derived categories.
|
Compactness properties of interpolation sets of algebras of generalized
analytic functions are investigated, and convenient sufficient conditions for
interpolation are given.
|
In this paper, we show generation and propagation of polynomial and
exponential moments, as well as global well-posedness of the homogeneous
binary-ternary Boltzmann equation. We also show that the co-existence of binary
and ternary collisions yields better generation properties and time decay, than
when only binary or ternary collisions are considered. To address these
questions, we develop for the first time angular averaging estimates for
ternary interactions. This is the first paper to discuss this type of
question for the binary-ternary Boltzmann equation, and it opens the door to
studying moment properties of gases with higher collisional density.
|
Systems as diverse as genetic networks or the world wide web are best
described as networks with complex topology. A common property of many large
networks is that the vertex connectivities follow a scale-free power-law
distribution. This feature is found to be a consequence of the two generic
mechanisms that networks expand continuously by the addition of new vertices,
and new vertices attach preferentially to already well connected sites. A model
based on these two ingredients reproduces the observed stationary scale-free
distributions, indicating that the development of large networks is governed by
robust self-organizing phenomena that go beyond the particulars of the
individual systems.
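The two generic mechanisms named above, continuous growth and preferential attachment, can be sketched in a short simulation; this is an informal variant of the model with illustrative parameters, and duplicate-target handling is simplified relative to careful implementations:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a network: each new vertex attaches up to m edges to existing
    vertices chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # `repeated` lists each vertex once per incident edge, so sampling
    # uniformly from it realizes degree-proportional attachment.
    repeated = []
    targets = list(range(m))
    for v in range(m, n):
        for t in targets:
            repeated.extend([v, t])
        # Dedup may leave fewer than m targets; acceptable for a sketch.
        targets = list({rng.choice(repeated) for _ in range(m)})
    degree = {}
    for u in repeated:
        degree[u] = degree.get(u, 0) + 1
    return degree

deg = barabasi_albert(2000, 2)
# Heavy tail: the largest hub far exceeds the mean degree.
assert max(deg.values()) > 5 * (sum(deg.values()) / len(deg))
```

Uniform (non-preferential) attachment in the same loop would instead give an exponential degree distribution with no comparable hubs, which is the contrast the model is built on.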
|
The (T) and [T] perturbative corrections are derived for multicomponent
coupled-cluster theory with single and double excitations (CCSD). Benchmarking
shows that multicomponent CCSD methods that include the perturbative
corrections are more accurate than multicomponent CCSD for the calculation of
proton affinities and absolute energies. An approximation is introduced that
includes only (T) or [T] contributions from mixed electron-nuclear excitations,
which significantly reduces computational effort with only small changes in
protonic properties.
|
We review some features and results of the calculations performed with the
program SIXPHACT for six-fermion final states at a Linear Collider.
|
A fluctuation law of the energy in freely-decaying, homogeneous and isotropic
turbulence is derived within standard closure hypotheses for 3D incompressible
flow. In particular, a fluctuation-dissipation relation is derived which
relates the strength of a stochastic backscatter term in the energy decay
equation to the mean of the energy dissipation rate. The theory is based on the
so-called "effective action" of the energy history and illustrates a
Rayleigh-Ritz method recently developed to evaluate the effective action
approximately within probability density-function (PDF) closures. These
effective actions generalize the Onsager-Machlup action of nonequilibrium
statistical mechanics to turbulent flow. They yield detailed, concrete
predictions for fluctuations, such as multi-time correlation functions of
arbitrary order, which cannot be obtained by direct PDF methods. They also
characterize the mean histories by a variational principle.
|
Recurrent neural networks (RNNs) have been widely adopted in temporal
sequence analysis, where realtime performance is often in demand. However, RNNs
suffer from heavy computational workload as the model often comes with large
weight matrices. Pruning schemes have been proposed for RNNs to eliminate the
redundant (close-to-zero) weight values. On the one hand, non-structured
pruning methods achieve a high pruning rate but introduce computational
irregularity (random sparsity), which is unfriendly to parallel hardware. On
the other hand, hardware-oriented structured pruning suffers from a low pruning
rate due to restrictive constraints on the allowable pruning structure. This paper
presents CSB-RNN, an optimized full-stack RNN framework with a novel compressed
structured block (CSB) pruning technique. The CSB pruned RNN model comes with
both fine pruning granularity that facilitates a high pruning rate and regular
structure that benefits the hardware parallelism. To address the challenges in
parallelizing the CSB pruned model inference with fine-grained structural
sparsity, we propose a novel hardware architecture with a dedicated compiler.
Gaining from the architecture-compilation co-design, the hardware not only
supports various RNN cell types, but is also able to address the challenging
workload imbalance issue and therefore significantly improves the hardware
efficiency.
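The flavor of structured-block pruning can be sketched as follows; the function name, block geometry, and column-norm criterion are illustrative assumptions, not the paper's exact CSB scheme:

```python
import numpy as np

def block_prune(W, block, keep):
    """Split the weight matrix into column blocks and, within each block,
    keep only the `keep` columns with largest L2 norm, zeroing the rest.
    The surviving columns form a regular pattern a parallel kernel can
    exploit, while per-block selection keeps the granularity fine."""
    W = W.copy()
    for start in range(0, W.shape[1], block):
        cols = W[:, start:start + block]
        norms = np.linalg.norm(cols, axis=0)
        if keep < cols.shape[1]:
            drop = np.argsort(norms)[:-keep]   # indices of weakest columns
            cols[:, drop] = 0.0
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
Wp = block_prune(W, block=4, keep=1)
# Exactly one nonzero column survives per 4-column block.
assert int((np.abs(Wp).sum(axis=0) > 0).sum()) == 4
```

The regularity (fixed number of survivors per block) is what lets hardware schedule the sparse computation without the load imbalance of random sparsity.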
|
In this paper we study computationally feasible bounds for relative free
energies between two many-particle systems. Specifically, we consider systems
out of equilibrium that do not necessarily satisfy a fluctuation-dissipation
relation, but that nevertheless admit a nonequilibrium steady state that is
reached asymptotically in the long-time limit. The bounds that we suggest are
based on the well-known Bogoliubov inequality and variants of Gibbs' and
Donsker-Varadhan variational principles. As a general paradigm, we consider
systems of oscillators coupled to heat baths at different temperatures. For
such systems, we define the free energy of the system relative to any given
reference system (that may or may not be in thermal equilibrium) in terms of
the Kullback-Leibler divergence between steady states. By employing a two-sided
Bogoliubov inequality and a mean-variance approximation of the free energy (or
cumulant generating function), we can efficiently estimate the free energy cost
needed in passing from the reference system to the system out of equilibrium
(characterised by a temperature gradient). A specific test case to validate
our bounds is given by harmonic oscillator chains with ends coupled to Langevin
thermostats at different temperatures; such a system is simple enough to allow
for analytic calculations and general enough to be used as a prototype to
estimate, e.g., heat fluxes or interface effects in a larger class of
nonequilibrium particle systems.
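For orientation, the classical equilibrium form of the two-sided Bogoliubov inequality sandwiches the free energy difference between energy averages taken in the two ensembles:

```latex
\langle H - H_0 \rangle_{H} \;\le\; F - F_0 \;\le\; \langle H - H_0 \rangle_{H_0},
```

where $\langle\cdot\rangle_{H}$ and $\langle\cdot\rangle_{H_0}$ denote Gibbs averages for the target and reference Hamiltonians; the steady-state version used here replaces Gibbs measures by nonequilibrium steady states, with the relative free energy defined through the Kullback-Leibler divergence as described above.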
|
In this paper, we study the colorability of link diagrams by the Alexander
quandles. We show that if the reduced Alexander polynomial $\Delta_{L}(t)$ is
vanishing, then $L$ admits a non-trivial coloring by any non-trivial Alexander
quandle $Q$; that if $\Delta_{L}(t)=1$, then $L$ admits only the trivial
coloring by any Alexander quandle $Q$; and that if $\Delta_{L}(t)\not=0,
1$, then $L$ admits a non-trivial coloring by the Alexander quandle
$\Lambda/(\Delta_{L}(t))$.
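For reference, the Alexander quandle structure underlying these colorings is the standard one: a module $M$ over $\Lambda=\mathbb{Z}[t,t^{-1}]$ equipped with the operation

```latex
x * y = t\,x + (1 - t)\,y, \qquad x, y \in M,
```

and a coloring assigns elements of $M$ to the arcs of a diagram so that this relation holds at every crossing.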
|
For a large class of time-dependent non-Hermitian Hamiltonians expressed in
terms of linear and bilinear combinations of the generators of a Euclidean
Lie algebra respecting different types of PT-symmetries, we find explicit
solutions to the time-dependent Dyson equation. A specific Hermitian model with
explicit time-dependence is analyzed further and shown to be quasi-exactly
solvable. Technically, we construct the Lewis-Riesenfeld invariants making
use of the metric picture, which is an equivalent alternative to the
Schr\"{o}dinger, Heisenberg and interaction pictures, with the
time-dependence contained in the metric operator that relates the
time-dependent Hermitian Hamiltonian to a static non-Hermitian Hamiltonian.
|
In science, macro level descriptions of the causal interactions within
complex, dynamical systems are typically deemed convenient, but ultimately
reducible to a complete causal account of the underlying micro constituents.
Yet, such a reductionist perspective is hard to square with several issues
related to autonomy and agency: (1) agents require (causal) borders that
separate them from the environment, (2) at least in a biological context,
agents are associated with macroscopic systems, and (3) agents are supposed to
act upon their environment. Integrated information theory (IIT) (Oizumi et al.,
2014) offers a quantitative account of causation based on a set of causal
principles, including notions such as causal specificity, composition, and
irreducibility, that challenges the reductionist perspective in multiple ways.
First, the IIT formalism provides a complete account of a system's causal
structure, including irreducible higher-order mechanisms constituted of
multiple system elements. Second, a system's amount of integrated information
($\Phi$) measures the causal constraints a system exerts onto itself and can
peak at a macro level of description (Hoel et al., 2016; Marshall et al.,
2018). Finally, the causal principles of IIT can also be employed to identify
and quantify the actual causes of events ("what caused what"), such as an
agent's actions (Albantakis et al., 2019). Here, we demonstrate this framework
by example of a simulated agent, equipped with a small neural network, that
forms a maximum of $\Phi$ at a macro scale.
|
We show that if $v\in A_\infty$ and $u\in A_1$, then there is a constant $c$
depending on the $A_1$ constant of $u$ and the $A_{\infty}$ constant of $v$
such that $$\Big\|\frac{ T(fv)} {v}\Big\|_{L^{1,\infty}(uv)}\le c\,
\|f\|_{L^1(uv)},$$ where $T$ can be the Hardy-Littlewood maximal function or
any Calder\'on-Zygmund operator. This result was conjectured in [IMRN,
(30)2005, 1849--1871] and constitutes the most singular case of some extensions
of several problems proposed by E. Sawyer and by Muckenhoupt and Wheeden. We
also improve and extend several quantitative estimates.
|
We prove that a sufficiently large subset of the $d$-dimensional vector space
over a finite field with $q$ elements, $ {\Bbb F}_q^d$, contains a copy of
every $k$-simplex. Fourier analytic methods, Kloosterman sums, and
bootstrapping play an important role.
|
Two timescale stochastic approximation (SA) has been widely used in
value-based reinforcement learning algorithms. In the policy evaluation
setting, it can model the linear and nonlinear temporal difference learning
with gradient correction (TDC) algorithms as linear SA and nonlinear SA,
respectively. In the policy optimization setting, two timescale nonlinear SA
can also model the greedy gradient-Q (Greedy-GQ) algorithm. In previous
studies, the non-asymptotic analysis of linear TDC and Greedy-GQ has been
studied in the Markovian setting, with diminishing or accuracy-dependent
stepsize. For the nonlinear TDC algorithm, only the asymptotic convergence has
been established. In this paper, we study the non-asymptotic convergence rate
of two timescale linear and nonlinear TDC and Greedy-GQ under Markovian
sampling and with accuracy-independent constant stepsize. For linear TDC, we
provide a novel non-asymptotic analysis and show that it attains an
$\epsilon$-accurate solution with the optimal sample complexity of
$\mathcal{O}(\epsilon^{-1}\log(1/\epsilon))$ under a constant stepsize. For
nonlinear TDC and Greedy-GQ, we show that both algorithms attain an
$\epsilon$-accurate stationary solution with sample complexity
$\mathcal{O}(\epsilon^{-2})$. This is the first non-asymptotic convergence
result established for nonlinear TDC under Markovian sampling, and our result
for Greedy-GQ improves on the previous best result by a factor of
$\mathcal{O}(\epsilon^{-1}\log(1/\epsilon))$.
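The linear TDC update analyzed above can be sketched in a few lines; the toy Markov chain, features, and constant stepsizes below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

# Minimal two-timescale linear TDC sketch on a synthetic 3-state Markov chain.
rng = np.random.default_rng(0)

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])   # transition matrix
r = np.array([1.0, 0.0, -1.0])    # state rewards
phi = np.array([[1.0, 0.0],
                [0.5, 0.5],
                [0.0, 1.0]])      # linear features
gamma = 0.9
alpha, beta = 0.01, 0.05          # theta uses the slower stepsize, w the faster

theta = np.zeros(2)               # value-function weights (slow timescale)
w = np.zeros(2)                   # correction weights (fast timescale)

s = 0
for _ in range(20000):
    s_next = rng.choice(3, p=P[s])
    delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta  # TD error
    # TDC update: a TD(0) step plus the gradient-correction term
    theta = theta + alpha * (delta * phi[s] - gamma * (phi[s] @ w) * phi[s_next])
    w = w + beta * (delta - phi[s] @ w) * phi[s]
    s = s_next

print(theta)
```

Here `w` runs on the faster timescale (larger stepsize) and `theta` on the slower one, which is the structural feature the two-timescale analysis exploits.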
|
We propose a model for realizing exotic paired states in cold atomic Fermi
gases. By using a {\it spin dependent} optical lattice it is possible to
engineer spatially anisotropic Fermi surfaces for each hyperfine species, that
are rotated 90 degrees with respect to one another. We consider a balanced
population of the fermions with an attractive interaction. We explore the BCS
mean field phase diagram as a function of the anisotropy, density, and
interaction strength, and find the existence of an unusual paired superfluid
state with coexisting pockets of momentum space with gapless unpaired carriers.
This state is a relative of the Sarma or breached pair states in polarized
mixtures, but in our case the Fermi gas is unpolarized. We also propose the
possible existence of an exotic paired "Cooper-pair Bose-Metal" (CPBM) phase,
which has a gap for single fermion excitations but gapless and uncondensed
"Cooper pair" excitations residing on a "Bose-surface" in momentum space.
|
It is a classical result from Diophantine approximation that the set of badly
approximable numbers has Lebesgue measure zero. In this paper we generalise
this result to more general sequences of balls.
Given a countable set of closed $d$-dimensional Euclidean balls
$\{B(x_{i},r_{i})\}_{i=1}^{\infty},$ we say that $\alpha\in \mathbb{R}^{d}$ is
a badly approximable number with respect to $\{B(x_{i},r_{i})\}_{i=1}^{\infty}$
if there exist $\kappa(\alpha)>0$ and $N(\alpha)\in\mathbb{N}$ such that
$\alpha\notin B(x_{i},\kappa(\alpha)r_{i})$ for all $i\geq N(\alpha)$. Under
natural conditions on the set of balls, we prove that the set of badly
approximable numbers with respect to $\{B(x_{i},r_{i})\}_{i=1}^{\infty}$ has
Lebesgue measure zero. Moreover, our approach yields a new proof that the set
of badly approximable numbers has Lebesgue measure zero.
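For intuition, the classical special case takes the balls to be $B(p/q, 1/q^2)$ over the rationals; a quadratic irrational such as the golden ratio then satisfies the definition. The constant $\kappa=0.3$ and the cutoff below are choices made for this finite demonstration, not values from the paper:

```python
from math import sqrt

# Check alpha ∉ B(p/q, kappa / q^2) for every rational p/q with q <= q_max.
# For the golden ratio, min over q of q^2 |alpha - p/q| is about 0.382
# (attained at 2/1), so kappa = 0.3 succeeds while kappa = 0.5 fails.
alpha = (1 + sqrt(5)) / 2

def avoids_all_shrunken_balls(alpha, kappa, q_max):
    for q in range(1, q_max + 1):
        p = round(alpha * q)          # only the nearest p matters for each q
        if abs(alpha - p / q) < kappa / q**2:
            return False
    return True

print(avoids_all_shrunken_balls(alpha, 0.3, 200))  # expect True
```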
|
We propose a new random process to construct the eigenvectors of some random
operators which make a short and clean connection with the resolvent. In this
process the center of localization has to be chosen randomly.
|
We study the stability of gap solitons of the super-Tonks-Girardeau bosonic
gas in a one-dimensional periodic potential. The linear stability analysis
indicates that, by increasing the amplitude of the periodic potential or
decreasing the nonlinear interactions, unstable gap solitons can become stable.
In particular, the theoretical analysis and numerical calculations show that,
compared to the lower families of gap solitons, the higher families form more
easily near the bottoms of the linear Bloch band gaps. The
numerical results also verify that the composition relations between various
gap solitons and nonlinear Bloch waves are general and can exist in the
super-Tonks-Girardeau phase.
|
We present families of space-time finite element methods (STFEMs) for a
coupled hyperbolic-parabolic system of poro- or thermoelasticity.
Well-posedness of the discrete problems is proved. Higher order approximations
inheriting most of the rich structure of solutions to the continuous problem on
computationally feasible grids are naturally embedded. However, the block
structure and solution of the algebraic systems become increasingly complex for
these members of the families. We present and analyze a robust geometric
multigrid (GMG) preconditioner for GMRES iterations. The GMG method uses a
local Vanka-type smoother. Its action is defined in an exact mathematical way.
Due to nonlocal coupling mechanisms of unknowns, the smoother is applied on
patches of elements. This ensures the damping of error frequencies. In a
sequence of numerical experiments, including a challenging three-dimensional
benchmark of practical interest, the efficiency of the solver for STFEMs is
illustrated and confirmed. Its parallel scalability is analyzed. Beyond this
study of classical performance engineering, the solver's energy efficiency is
investigated as an additional and emerging dimension in the design and tuning
of algorithms and their implementation on the hardware.
|
Despite the simplicity of the original perovskite crystal structure, this
family of compounds shows an enormous variety of structural modifications and
variants. In the following, we will describe several examples of perovskites,
their structural variants and discuss the implications of distortions and
non-stoichiometry on their electronic and magnetic properties.
|
The recent evolution of induced seismicity in the Central United States calls for
exhaustive catalogs to improve seismic hazard assessment. Over the last
decades, the volume of seismic data has increased exponentially, creating a
need for efficient algorithms to reliably detect and locate earthquakes.
Today's most elaborate methods scan through the plethora of continuous seismic
records, searching for repeating seismic signals. In this work, we leverage the
recent advances in artificial intelligence and present ConvNetQuake, a highly
scalable convolutional neural network for earthquake detection and location
from a single waveform. We apply our technique to study the induced seismicity
in Oklahoma (USA). We detect 20 times more earthquakes than previously
cataloged by the Oklahoma Geological Survey. Our algorithm is orders of
magnitude faster than established methods.
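The "established methods" that scan continuous records for repeating signals are typically template matchers; a minimal synthetic sketch of that baseline (all waveform parameters are illustrative) is:

```python
import numpy as np

# Template matching: slide a known waveform over a noisy record and pick the
# offset with maximal correlation. Signals here are synthetic stand-ins.
rng = np.random.default_rng(42)

template = np.sin(np.linspace(0, 8 * np.pi, 100)) * np.hanning(100)
record = 0.1 * rng.standard_normal(2000)
true_onset = 750
record[true_onset:true_onset + 100] += template   # bury the event in noise

# correlate the template against every window of the record
scores = np.array([
    template @ record[i:i + 100] for i in range(len(record) - 100)
])
detected_onset = int(np.argmax(scores))
print(detected_onset)   # should recover a value near true_onset
```

This per-template scan is what makes classical detection slow at scale; a trained network amortizes the cost into a single forward pass per window.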
|
Acoustic impedance mismatches between soft tissues and bones are known to
result in strong aberrations in optoacoustic and ultrasound images. Of
particular importance are the severe distortions introduced by the human skull,
impeding transcranial brain imaging with these modalities. While modelling of
ultrasound propagation through the skull may in principle help correcting for
some of the skull-induced aberrations, these approaches are commonly challenged
by the highly heterogeneous and dispersive acoustic properties of the skull and
lack of exact knowledge on its geometry and internal structure. Here we
demonstrate that the spatio-temporal properties of the acoustic distortions
induced by the skull are preserved for signal sources generated at neighboring
intracranial locations by means of optoacoustic excitation. This optoacoustic
memory effect is exploited for building a three-dimensional model accurately
describing the generation, propagation and detection of time-resolved broadband
optoacoustic waveforms traversing the skull. Model-based inversion exploiting
this memory effect is then shown to accurately recover the optical absorption
distribution inside the skull with spatial resolution and image quality
comparable to those attained in a skull-free medium.
|
Laboratory models are often used to understand the interaction of related
pathogens via host immunity. For example, recent experiments where ferrets were
exposed to two influenza strains within a short period of time have shown how
the effects of cross-immunity vary with the time between exposures and the
specific strains used. On the other hand, studies of the workings of different
arms of the immune response, and their relative importance, typically use
experiments involving a single infection. However, inferring the relative
importance of different immune components from this type of data is
challenging. Using simulations and mathematical modelling, here we investigate
whether the sequential infection experiment design can be used not only to
determine immune components contributing to cross-protection, but also to gain
insight into the immune response during a single infection.
We show that virological data from sequential infection experiments can be
used to accurately extract the timing and extent of cross-protection. Moreover,
the broad immune components responsible for such cross-protection can be
determined. Such data can also be used to infer the timing and strength of some
immune components in controlling a primary infection, even in the absence of
serological data. By contrast, single infection data cannot be used to reliably
recover this information. Hence, sequential infection data enhances our
understanding of the mechanisms underlying the control and resolution of
infection, and generates new insight into how previous exposure influences the
time course of a subsequent infection.
|
In this paper I shall consider a scalar-scalar field theory with scalar field
phi on a four-dimensional manifold M, and a Lorentzian Cofinsler function f on
T*M. A particularly simple Lagrangian is chosen to govern this theory, and when
f is chosen to generate FLRW metrics on M the Lagrangian becomes a function of
phi and its first two time derivatives. The associated Hamiltonian is
third-order, and admits infinitely many vacuum solutions. These vacuum
solutions can be pieced together to generate a multiverse. This is done for
those FLRW spaces with k>0. So when time, t, is less than zero we have a
universe in which the t=constant spaces are 3-spheres with constant curvature
k. As time passes through zero the underlying 4-space splits into an infinity
of spaces (branches) with metric tensors that describe piecewise de Sitter
spaces until some cutoff time, which will, in general, be different for
different branches. After passing through the cutoff time all branches will
return to their original 4-space in which the t=constant spaces are of constant
curvature k, but will remain separate from all of the other branch universes.
The metric tensor for this multiverse is everywhere continuous, but experiences
discontinuous derivatives as the universe branches change between different de
Sitter spaces. Some questions I address using this formalism are: what is the
nature of matter when t<0; what happens to matter as time passes through t=0;
and what was the universe doing before the multiple universes came into
existence at t=0? The answers to these questions will help to explain the
paper's title. I shall also briefly discuss a possible means of quantizing
space, how inflation influences the basic cells that constitute space, and how
gravitons might act.
|
As data-driven intelligent systems advance, the need for reliable and
transparent decision-making mechanisms has become increasingly important.
Therefore, it is essential to integrate uncertainty quantification and model
explainability approaches to foster trustworthy business and operational
process analytics. This study explores how model uncertainty can be effectively
communicated in global and local post-hoc explanation approaches, such as
Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE)
plots. In addition, this study examines appropriate visualization analytics
approaches to facilitate such methodological integration. By combining these
two research directions, decision-makers can not only justify the plausibility
of explanation-driven actionable insights but also validate their reliability.
Finally, the study includes expert interviews to assess the suitability of the
proposed approach and designed interface for a real-world predictive process
monitoring problem in the manufacturing domain.
|
The {\em wavelet tree} is a flexible data structure that permits representing
sequences $S[1,n]$ of symbols over an alphabet of size $\sigma$, within
compressed space and supporting a wide range of operations on $S$. When
$\sigma$ is significant compared to $n$, current wavelet tree representations
incur noticeable space or time overheads. In this article we introduce the
{\em wavelet matrix}, an alternative representation for large alphabets that
retains all the properties of wavelet trees but is significantly faster. We
also show how the wavelet matrix can be compressed up to the zero-order entropy
of the sequence without sacrificing, and actually improving, its time
performance. Our experimental results show that the wavelet matrix outperforms
all the wavelet tree variants along the space/time tradeoff map.
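For readers unfamiliar with the baseline structure, a didactic pointer-based wavelet tree supporting the rank operation looks as follows; this is the classic uncompressed tree, not the wavelet-matrix layout proposed in the article:

```python
# Minimal wavelet tree supporting rank(c, i): the number of occurrences of
# symbol c in S[0:i]. Bit vectors are plain lists with prefix sums, so each
# rank query descends one level per alphabet halving.
class WaveletTree:
    def __init__(self, seq, alphabet=None):
        self.alphabet = sorted(set(seq)) if alphabet is None else alphabet
        self.leaf = len(self.alphabet) == 1
        if self.leaf:
            return
        mid = len(self.alphabet) // 2
        left_set = set(self.alphabet[:mid])
        bits = [0 if c in left_set else 1 for c in seq]
        self.ones = [0]                       # prefix sums of the bit vector
        for b in bits:
            self.ones.append(self.ones[-1] + b)
        self.left = WaveletTree([c for c in seq if c in left_set],
                                self.alphabet[:mid])
        self.right = WaveletTree([c for c in seq if c not in left_set],
                                 self.alphabet[mid:])

    def rank(self, c, i):
        """Occurrences of c among the first i symbols."""
        if self.leaf:
            return i
        mid = len(self.alphabet) // 2
        ones = self.ones[i]
        if c in self.alphabet[:mid]:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

wt = WaveletTree("abracadabra")
print(wt.rank("a", 8))  # → 4 (four 'a's in "abracada")
```

The wavelet matrix keeps the same per-level bit vectors but concatenates each level into a single bitmap, removing the pointer overhead that hurts for large $\sigma$.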
|
In recent years, triggered by their successful realizations in
electromagnetics, invisibility cloaks have experienced rapid development and
have been widely pursued in many different fields, though so far only for a
single physical system. In this letter we make an unprecedented experimental
attempt to demonstrate a multidisciplinary framework designed on the basis of
two different physical equations. The proposed structure has the exceptional
capability to simultaneously control two different physical phenomena according
to predetermined evolution scenarios. As a proof of concept, we implemented an
electric-thermal bifunctional device that can guide both electric current and
heat flux "across" a strong 'scatterer' (an air cavity) and restore their
original diffusion directions as if nothing existed along the paths, thus rendering dual
cloaking effects for objects placed inside the cavity. This bifunctional
cloaking performance is also numerically verified for a point-source nonuniform
excitation. Our results and the fabrication technique presented here will help
broaden the current research scope for multiple disciplines and may pave a
prominent way to manipulate multiple flows and create new functional devices,
e.g., for on-chip applications.
|
Docker provides an ecosystem for packaging, distributing, and managing
applications within containers. However, the Docker platform has not yet
matured: presently, Docker is less secure than virtual machines (VMs) and most
other cloud technologies. The root of Docker's weaker security is that
containers share the host's Linux kernel, which can lead to the risk of
privilege escalation. This research outlines some significant security
vulnerabilities in Docker and counter-solutions to neutralize such attacks.
Security attacks come in several varieties, including insider and outsider
attacks; this research outlines both types and their mitigation strategies,
since taking some precautionary measures can avert massive disasters. This
research also presents Docker secure-deployment guidelines, which suggest
different configurations for deploying Docker containers in a more secure way.
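Guidelines of the kind described usually recommend restricting kernel capabilities and resources at container launch; an illustrative hardened `docker run` invocation (the image name is a placeholder) might look like:

```shell
# Illustrative hardened container launch using documented docker run flags.
docker run \
  --read-only \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --pids-limit 100 \
  --memory 256m \
  --cpus 0.5 \
  --user 1000:1000 \
  myapp:latest
# --read-only: immutable root filesystem
# --cap-drop/--cap-add: keep only the capabilities the application needs
# --security-opt no-new-privileges: blocks privilege escalation via setuid
# --pids-limit / --memory / --cpus: resource limits against abuse
# --user: run the process as a non-root user
```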
|
Choosing a suitable multiwinner voting rule is a hard and ambiguous task:
what constitutes an ``optimal'' subset of alternatives varies widely with the
context. In this paper, we provide a quantitative
analysis of multiwinner voting rules using methods from the theory of
approximation algorithms---we estimate how well multiwinner rules approximate
two extreme objectives: a representation criterion defined via the Approval
Chamberlin--Courant rule and a utilitarian criterion defined via Multiwinner
Approval Voting. With both theoretical and experimental methods, we classify
multiwinner rules in terms of their quantitative alignment with these two
opposing objectives. Our results provide fundamental information about the
nature of multiwinner rules and, in particular, about the necessary tradeoffs
when choosing such a rule.
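The two objectives can be made concrete on a toy approval profile; the ballots below are invented for illustration:

```python
from itertools import combinations

# Multiwinner Approval Voting (AV) sums each voter's number of approved
# committee members; Approval Chamberlin--Courant (CC) only asks whether each
# voter has at least one approved representative.
ballots = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"c"}, {"d"}]
candidates = {"a", "b", "c", "d"}
k = 2

def av_score(committee):
    return sum(len(b & committee) for b in ballots)

def cc_score(committee):
    return sum(1 for b in ballots if b & committee)

best_av = max(combinations(sorted(candidates), k), key=lambda c: av_score(set(c)))
best_cc = max(combinations(sorted(candidates), k), key=lambda c: cc_score(set(c)))
# AV concentrates on the large bloc ({'a','b'}), while CC sacrifices total
# utility to cover one of the minority voters as well.
print(best_av, best_cc)
```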
|
The purpose of this paper is to investigate the spectral nature of the
Neumann-Poincar\'e operator on the intersecting disks, which is a domain with
the Lipschitz boundary. The complete spectral resolution of the operator is
derived, which shows in particular that it admits only the absolutely
continuous spectrum, no singularly continuous spectrum and no pure point
spectrum. We then quantitatively analyze using the spectral resolution the
plasmon resonance at the absolutely continuous spectrum.
|
A recent study has shown that diffusion models are well-suited for modeling
the generative process of user-item interactions in recommender systems due to
their denoising nature. However, existing diffusion model-based recommender
systems do not explicitly leverage high-order connectivities that contain
crucial collaborative signals for accurate recommendations. Addressing this
gap, we propose CF-Diff, a new diffusion model-based collaborative filtering
(CF) method, which is capable of making full use of collaborative signals along
with multi-hop neighbors. Specifically, the forward-diffusion process adds
random noise to user-item interactions, while the reverse-denoising process
accommodates our own learning model, named cross-attention-guided multi-hop
autoencoder (CAM-AE), to gradually recover the original user-item interactions.
CAM-AE consists of two core modules: 1) the attention-aided AE module,
responsible for precisely learning latent representations of user-item
interactions while preserving the model's complexity at manageable levels, and
2) the multi-hop cross-attention module, which judiciously harnesses high-order
connectivity information to capture enhanced collaborative signals. Through
comprehensive experiments on three real-world datasets, we demonstrate that
CF-Diff is (a) Superior: outperforming benchmark recommendation methods,
achieving remarkable gains up to 7.29% compared to the best competitor, (b)
Theoretically-validated: reducing computations while ensuring that the
embeddings generated by our model closely approximate those from the original
cross-attention, and (c) Scalable: proving the computational efficiency that
scales linearly with the number of users or items.
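The forward-diffusion step mentioned above follows the standard variance-preserving form $x_t=\sqrt{\bar\alpha_t}\,x_0+\sqrt{1-\bar\alpha_t}\,\epsilon$; the schedule, dimensions, and variable names in this sketch are illustrative, not CF-Diff's actual hyperparameters:

```python
import numpy as np

# Variance-preserving forward noising of a (stand-in) interaction vector.
rng = np.random.default_rng(0)

T = 100
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
abar = np.cumprod(1.0 - betas)          # cumulative signal retention

x0 = rng.standard_normal(10000)         # unit-variance stand-in for x_0
def forward(x0, t):
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

xT = forward(x0, T - 1)
print(round(float(xT.std()), 2))        # total variance stays ~1.0
```

The reverse-denoising network (CAM-AE in the paper) is trained to invert these steps, which is where the multi-hop cross-attention injects high-order collaborative signals.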
|
Construct theory in social psychology, developed by George Kelly, holds that
humans use mental constructs to predict and anticipate events. Constructs are
how humans interpret, curate, predict and validate data and information. AI
today is biased because it is trained with a narrow construct as defined by the
training-data labels. Machine learning algorithms for facial recognition
discriminate against darker skin colors, and in the groundbreaking research
paper (Buolamwini, Joy and Timnit Gebru. Gender Shades: Intersectional Accuracy
Disparities in Commercial Gender Classification. FAT (2018)), the inclusion of
phenotypic labeling is proposed as a viable solution. In Construct theory, phenotype is
just one of the many subelements that make up the construct of a face. In this
paper, we present 15 main elements of the construct of face, with 50
subelements, and tested the Google Cloud Vision API and the Microsoft Cognitive
Services API using the FairFace dataset, which currently has data for 7 races,
genders and ages; we then retested against the FairFace Plus dataset curated by
us. Our results show exactly where these APIs have gaps in inclusivity. Based
on our experimental
results, we propose that validated, inclusive constructs become industry
standards for AI ML models going forward.
|
Research on sound event detection (SED) with weak labeling has mostly focused
on presence/absence labeling, which provides no temporal information at all
about the event occurrences. In this paper, we consider SED with sequential
labeling, which specifies the temporal order of the event boundaries. The
conventional connectionist temporal classification (CTC) framework, when
applied to SED with sequential labeling, does not localize long events well due
to a "peak clustering" problem. We adapt the CTC framework and propose
connectionist temporal localization (CTL), which successfully solves the
problem. Evaluation on a subset of Audio Set shows that CTL closes a third of
the gap between presence/absence labeling and strong labeling, demonstrating
the usefulness of the extra temporal information in sequential labeling. CTL
also makes it easy to combine sequential labeling with presence/absence
labeling and strong labeling.
|
We consider products of uniform random variables from the Stiefel manifold of
orthonormal $k$-frames in $\mathbb{R}^n$, $k \le n$, and random vectors from
the $n$-dimensional $\ell_p^n$-ball $\mathbb{B}_p^n$ with certain $p$-radial
distributions, $p\in[1,\infty)$. The distribution of this product geometrically
corresponds to the projection of the $p$-radial distribution on
$\mathbb{B}^n_p$ onto a random $k$-dimensional subspace. We derive large
deviation principles (LDPs) on the space of probability measures on
$\mathbb{R}^k$ for sequences of such projections.
|
This letter presents measurements of the differential cross-sections for
inclusive electron and muon production in proton-proton collisions at a
centre-of-mass energy of sqrt(s) = 7 TeV, using data collected by the ATLAS
detector at the LHC. The muon cross-section is measured as a function of pT in
the range 4 < pT < 100 GeV and within pseudorapidity |eta| < 2.5. In addition
the electron and muon cross-sections are measured in the range 7 < pT < 26 GeV
and within |eta| <2.0, excluding 1.37<|eta|<1.52. Integrated luminosities of
1.3 pb-1 and 1.4 pb-1 are used for the electron and muon measurements,
respectively. After subtraction of the W/Z/gamma* contribution, the
differential cross-sections are found to be in good agreement with theoretical
predictions for heavy-flavour production obtained from Fixed Order NLO
calculations with NLL high-pT resummation, and to be sensitive to the effects
of NLL resummation.
|
We consider a few types of bounded homomorphisms on a topological group.
These classes of bounded homomorphisms are, in a sense, weaker than the class
of continuous homomorphisms. We show that with appropriate topologies each
class of these homomorphisms on a complete topological group forms a complete
topological group.
|
LS I+61 303 has been detected by the Cherenkov telescope MAGIC at very high
energies, presenting a variable flux along the orbital motion with a maximum
clearly separated from the periastron passage. In the light of the new
observational constraints, we revisit the discussion of the production of
high-energy gamma rays from particle interactions in the inner jet of this
system. The hadronic contribution could represent a major fraction of the TeV
emission detected from this source. The spectral energy distribution resulting
from p-p interactions is recalculated. Opacity effects introduced by the photon
fields of the primary star and the stellar decretion disk are shown to be
essential in shaping the high-energy gamma-ray light curve at energies close to
200 GeV. We also present results of Monte Carlo simulations of the
electromagnetic cascades developed very close to the periastron passage. We
conclude that a hadronic microquasar model for the gamma-ray emission in LS I
+61 303 can reproduce the main features of its observed high-energy gamma-ray
flux.
|
I criticize the widely-defended view that the quantum measurement problem is
an example of underdetermination of theory by evidence: more specifically, the
view that the unmodified, unitary quantum formalism (interpreted following
Everett) is empirically indistinguishable from Bohmian Mechanics and from
dynamical-collapse theories like the GRW or CSL theories. I argue that there is
as yet no empirically successful generalization of either theory to interacting
quantum field theory, and so the apparent underdetermination is broken by a very
large class of quantum experiments that require field theory somewhere in their
description. The class of quantum experiments reproducible by either is much
smaller than is commonly recognized and excludes many of the most iconic
successes of quantum mechanics, including the quantitative account of Rayleigh
scattering that explains the color of the sky. I respond to various arguments
to the contrary in the recent literature.
|
We investigate shot noise at {\it finite temperatures} induced by the
quasi-particle tunneling between fractional quantum Hall (FQH) edge states. The
resulting Fano factor has the peak structure at a certain bias voltage. Such a
structure indicates that quasi-particles are weakly {\it glued} due to thermal
fluctuation. We show that the effect makes it possible to probe the difference
of statistics between the $\nu=1/5$ and $2/5$ FQH states, where quasi-particles
have the same unit charge. Finally, we propose a way to indirectly obtain the
statistical angle in hierarchical FQH states.
|
Enabling additive manufacturing to employ a wide range of novel, functional
materials can be a major boost to this technology. However, making such
materials printable requires painstaking trial-and-error by an expert operator,
as they typically tend to exhibit peculiar rheological or hysteresis
properties. Even in the case of successfully finding the process parameters,
there is no guarantee of print-to-print consistency due to material differences
between batches. These challenges make closed-loop feedback an attractive
option where the process parameters are adjusted on-the-fly. There are several
challenges for designing an efficient controller: the deposition parameters are
complex and highly coupled, artifacts occur after long time horizons,
simulating the deposition is computationally costly, and learning on hardware
is intractable. In this work, we demonstrate the feasibility of learning a
closed-loop control policy for additive manufacturing using reinforcement
learning. We show that approximate, but efficient, numerical simulation is
sufficient as long as it allows learning the behavioral patterns of deposition
that translate to real-world experiences. In combination with reinforcement
learning, our model can be used to discover control policies that outperform
baseline controllers. Furthermore, the recovered policies have a minimal
sim-to-real gap. We showcase this by applying our control policy in-vivo on a
single-layer, direct ink writing printer.
|
MXene transition-metal carbides and nitrides are of growing interest for
energy storage applications. These compounds are especially promising for use
as pseudocapacitive electrodes due to their ability to convert energy
electrochemically at fast rates. Using voltage-dependent cluster expansion
models, we predict the charge storage performance of MXene pseudocapacitors for
a range of electrode compositions. $M_3C_2O_2$ electrodes based on group-VI
transition metals have up to 80% larger areal energy densities than
prototypical titanium-based (e.g., $Ti_3C_2O_2$) MXene electrodes. We attribute
this high pseudocapacitance to the Faradaic voltage windows of group-VI MXene
electrodes, which are predicted to be 1.2 to 1.8 times larger than those of
titanium-based MXenes. The size of the pseudocapacitive voltage window
increases with the range of oxidation states that is accessible to the MXene
transition metals. By similar mechanisms, the presence of multiple ions in the
solvent (Li$^+$ and H$^+$) leads to sharp changes in the transition-metal
oxidation states and can significantly increase the charge capacity of MXene
pseudocapacitors.
|
Background: Recently, we introduced solar related geomagnetic disturbances
(GMD) as a potential environmental risk factor for multiple sclerosis (MS). The
aim of this study was to test probable correlation between solar activities and
GMD with long-term variations of MS incidence.
Methods: After a systematic review, we studied the association between
alterations in solar wind velocity (Vsw) and planetary A index (Ap, a GMD
index) with MS incidence in Tehran and western Greece, during the 23rd solar
cycle (1996-2008), by an ecological-correlational study.
Results: We found moderate to strong correlations among MS incidence of
Tehran with Vsw (Rs=0.665, p=0.013), with one year delay, and also with Ap
(Rs=0.864, p=0.001) with 2 year delay. There were very strong correlations
among MS incidence data of Greece with Vsw (R=0.906, p<0.001) and with Ap
(R=0.844, p=0.001), both with one year lag.
Conclusion: This is the first time that a hypothesis has introduced an
environmental factor that may describe alterations in MS incidence; however, it
should be remembered that correlation does not necessarily imply a causal
relationship. The important message of these findings for researchers is to
provide MS incidence reports with higher resolution for consecutive years,
based on the time of disease onset and relapses, not just the time of
diagnosis. Then, it would be possible to further investigate the validity of
GMD hypothesis or any other probable environmental risk factors.
Keywords: Correlation analysis, Multiple sclerosis, Incidence, Geomagnetic
disturbance, Geomagnetic activity, Solar wind velocity, Environmental risk
factor.
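The lagged rank-correlation computation underlying the reported $R_s$ values can be sketched as follows; the series are synthetic stand-ins, not the actual Vsw or incidence data, and the simple rank computation assumes no tied values:

```python
import numpy as np

# Spearman's rho via ranks, then a lagged variant: correlate driver[t]
# with response[t + lag] over the overlapping years.
def spearman(x, y):
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def lagged_spearman(driver, response, lag):
    if lag > 0:
        return spearman(driver[:-lag], response[lag:])
    return spearman(driver, response)

vsw = np.array([400, 450, 520, 480, 430, 410, 500, 560, 530, 470, 440, 420, 400])
ms = np.roll(vsw, 1).astype(float)   # response follows the driver by 1 year
print(lagged_spearman(vsw, ms, lag=1))   # ≈ 1.0 at the correct lag
```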
|