Time-domain thermoreflectance (TDTR) and frequency-domain thermoreflectance
(FDTR) have been widely used for non-contact measurement of anisotropic thermal
conductivity of materials with high spatial resolution. However, the
requirement of high thermoreflectance coefficient restricts the choice of metal
coating and laser wavelength. The accuracy of the measurement is often limited
by the high sensitivity to the radii of the laser beams. We describe an
alternative frequency-domain pump-probe technique based on probe beam
deflection. The beam deflection is primarily caused by thermoelastic
deformation of the sample surface, with a magnitude determined by the thermal
expansion coefficient of the bulk material under measurement. We derive an analytical
solution to the coupled elasticity and heat diffusion equations for periodic
heating of a multilayer sample with anisotropic elastic constants, thermal
conductivity, and thermal expansion coefficients. In most cases, a simplified
model can reliably describe the frequency dependence of the beam deflection
signal without knowledge of the elastic constants and thermal expansion
coefficients of the material. The magnitude of the probe beam deflection signal
is larger than the maximum magnitude achievable by thermoreflectance detection
of surface temperatures if the thermal expansion coefficient is greater than
5x10^(-6) /K. The sensitivity to laser beam radii is suppressed when a larger
beam offset is used. We find nearly perfect agreement between the measured signal
and the model prediction, and measure thermal conductivities within 6% of accepted
values for materials spanning the range of polymers to gold, 0.1 - 300 W/(m K).
|
Though reinforcement learning has greatly benefited from the incorporation of
neural networks, the inability to verify the correctness of such systems limits
their use. Current work in explainable deep learning focuses on explaining only
a single decision in terms of input features, making it unsuitable for
explaining a sequence of decisions. To address this need, we introduce
Abstracted Policy Graphs, which are Markov chains of abstract states. This
representation concisely summarizes a policy so that individual decisions can
be explained in the context of expected future transitions. Additionally, we
propose a method to generate these Abstracted Policy Graphs for deterministic
policies given a learned value function and a set of observed transitions,
potentially off-policy transitions used during training. Since no restrictions
are placed on how the value function is generated, our method is compatible
with many existing reinforcement learning methods. We prove that the worst-case
time complexity of our method is quadratic in the number of features and linear
in the number of provided transitions, $O(|F|^2 |tr\_samples|)$. By applying
our method to a family of domains, we show that our method scales well in
practice and produces Abstracted Policy Graphs which reliably capture
relationships within these domains.
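The construction above can be illustrated on a toy example. The sketch below is our own simplification, not the authors' implementation: in particular, the real method derives the abstraction from a learned value function, whereas here a hand-picked feature abstraction stands in for it. Observed transitions are grouped by abstract state and the counts normalized into a Markov chain.

```python
# Toy Abstracted Policy Graph: group observed transitions by an abstraction
# function and estimate Markov-chain probabilities between abstract states.
# (Illustrative only; the paper derives the abstraction from a value function.)
from collections import defaultdict

def build_apg(transitions, abstraction):
    """transitions: list of (state, next_state); abstraction: state -> label."""
    counts = defaultdict(lambda: defaultdict(int))
    for s, s_next in transitions:
        counts[abstraction(s)][abstraction(s_next)] += 1
    # Normalize counts into transition probabilities between abstract states.
    graph = {}
    for a, nexts in counts.items():
        total = sum(nexts.values())
        graph[a] = {b: n / total for b, n in nexts.items()}
    return graph

# Toy domain: states are integers, abstracted by the parity of one feature.
transitions = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
apg = build_apg(transitions, abstraction=lambda s: s % 2)
# Each abstract state (0 = even, 1 = odd) always transitions to the other.
```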
|
The magnetic penetration depth ($\lambda$) as a function of applied magnetic
field and temperature in SrPt$_3$P($T_c\simeq8.4$ K) was studied by means of
muon-spin rotation ($\mu$SR). The dependence of $\lambda^{-2}$ on temperature
suggests the existence of a single $s-$wave energy gap with the
zero-temperature value $\Delta=1.58(2)$ meV. At the same time $\lambda$ was
found to be strongly field dependent, which is a characteristic feature of
nodal-gap and/or multi-gap systems. The multi-gap nature of the
superconducting state is further confirmed by the observation of an upward
curvature of the upper critical field. This apparent contradiction can be
resolved if SrPt$_3$P is a two-band superconductor with equal gaps but
different coherence lengths within the two Fermi surface sheets.
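For context, the single-gap $s$-wave fit commonly used in $\mu$SR analyses of $\lambda^{-2}(T)$ is the standard BCS expression below (a textbook formula stated in generic notation, not quoted from this abstract):

```latex
\frac{\lambda^{-2}(T)}{\lambda^{-2}(0)}
  = 1 + 2\int_{\Delta(T)}^{\infty}
    \left(\frac{\partial f}{\partial E}\right)
    \frac{E\,\mathrm{d}E}{\sqrt{E^{2}-\Delta(T)^{2}}},
\qquad
f = \left[1 + \exp\!\left(E/k_{\mathrm B}T\right)\right]^{-1}.
```

Since $\partial f/\partial E < 0$, the superfluid density decreases monotonically from 1 at $T=0$ to 0 at $T_c$, with the zero-temperature gap $\Delta(0)$ as the fit parameter.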
|
DPLL and resolution are two popular methods for solving the problem of
propositional satisfiability. Rather than algorithms, they are families of
algorithms, as their behavior depends on choices they face during
execution: DPLL depends on the choice of the literal to branch on; resolution
depends on the choice of the pair of clauses to resolve at each step. The
complexity of making the optimal choice is analyzed in this paper. Extending
previous results, we prove that choosing the optimal literal to branch on in
DPLL is Delta[log]^2-hard, and becomes NP^PP-hard if branching is only allowed
on a subset of variables. Optimal choice in regular resolution is both NP-hard
and CoNP-hard. The problem of determining the size of the optimal proofs is
also analyzed: it is CoNP-hard for DPLL, and Delta[log]^2-hard if a conjecture
we make is true. This problem is CoNP-hard for regular resolution.
|
We analyze CP violation in supersymmetric extensions of the Standard Model
with heavy scalar fermions of the first two generations. Neglecting
intergenerational mixing in the sfermion mass matrices, and thus considering only
chargino, charged Higgs, and W-boson diagrams, we show that it is possible to
fully account for CP violation in the kaon system even in the absence of the
standard CKM phase. This opens new possibilities for large supersymmetric
contributions to CP violation in the B system.
|
In the celebrated Stern-Gerlach experiment an inhomogeneous static magnetic
field separates a beam of charge-neutral atoms with opposite spins, thereby
driving a ``spin current" normal to the propagation direction. Here we
generalize it to the dynamic scenario by demonstrating a spin transfer between
an AC inhomogeneous magnetic field and intraband electrons or charge-neutral
excitons and phonons. We predict that parametric pumping can efficiently
radiate their DC spin currents from local AC magnetic sources, taking van der
Waals semiconductors as prototypes. This mechanism provides a unified and
efficient paradigm for the spin transport of distinct mobile carriers.
|
Higher-order topological phases give rise to new bulk and boundary physics,
as well as new classes of topological phase transitions. While the realization
of higher-order topological phases has been confirmed in many platforms by
detecting the existence of gapless boundary modes, a direct determination of
the higher-order topology and related topological phase transitions through the
bulk in experiments has still been lacking. To bridge this gap, in this work we
carry out the simulation of a two-dimensional second-order topological phase in
a superconducting qubit. Owing to the great flexibility and controllability of
the quantum simulator, we observe the realization of higher-order topology
directly through the measurement of the pseudo-spin texture in momentum space
of the bulk for the first time, in sharp contrast to previous experiments based
on the detection of gapless boundary modes in real space. Also through the
measurement of the evolution of pseudo-spin texture with parameters, we further
observe novel topological phase transitions from the second-order topological
phase to the trivial phase, as well as to the first-order topological phase
with nonzero Chern number. Our work sheds new light on the study of
higher-order topological phases and topological phase transitions.
|
A living cell's interior is one of the most complex and intrinsically dynamic
systems, providing an elaborate interplay between cytosolic crowding and
ATP-driven motion, which controls cellular functionality. Here, we investigated
two distinct fundamental features of purely passive material transport (i.e.,
not shuttled by molecular motors) within the cytoplasm of Dictyostelium
discoideum cells: the anomalous non-linear scaling of the mean-squared
displacement of a 150 nm-diameter particle and the non-Gaussian distribution of
increments. Relying on
single-particle tracking data of 320,000 data points, we performed a systematic
analysis of four possible origins for non-Gaussian transport: (1) sample-based
variability, (2) rare occurring strong motion events, (3) ergodicity
breaking/ageing, and (4) spatio-temporal heterogeneities of the intracellular
medium. After excluding the first three reasons, we investigated the remaining
hypothesis of a heterogeneous cytoplasm as the cause of non-Gaussian transport.
A novel fit model with randomly distributed diffusivities, implementing medium
heterogeneities, fits the experimental data well. Strikingly, the non-Gaussian
feature is independent of the cytoskeleton condition and lag time. This reveals
that efficiency and consistency of passive intracellular transport and the
related anomalous scaling of the mean-squared displacement are regulated by
cytoskeleton components, while cytoplasmic heterogeneities are responsible for
the generic, non-Gaussian distribution of increments.
|
We are interested in reconstructing the initial condition of a non-linear
partial differential equation (PDE), namely the Fokker-Planck equation, from
the observation of a Dyson Brownian motion at a given time $t>0$. The
Fokker-Planck equation describes the evolution of electrostatic repulsive
particle systems, and can be seen as the large particle limit of correctly
renormalized Dyson Brownian motions. The solution of the Fokker-Planck equation
can be written as the free convolution of the initial condition and the
semi-circular distribution. We propose a nonparametric estimator for the
initial condition obtained by performing the free deconvolution via the
subordination functions method. This statistical estimator is original in that it
involves solving a fixed-point equation, followed by a classical
deconvolution by a Cauchy distribution. This is because, in free
probability, the analogue of the Fourier transform is the R-transform, related
to the Cauchy transform. In past literature, there has been a focus on the
estimation of the initial conditions of linear PDEs such as the heat equation,
but to the best of our knowledge, this is the first time that the problem is
tackled for a non-linear PDE. The convergence of the estimator is proved and
the integrated mean square error is computed, providing rates of convergence
similar to the ones known for non-parametric deconvolution methods. Finally, a
simulation study illustrates the good performance of our estimator.
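Concretely, the two standard facts from free probability used above can be stated as follows (generic notation, not the paper's): with $\mu_0$ the initial condition and $\sigma_t$ the semicircular distribution of variance $t$, the solution at time $t$ and the fixed-point (subordination) equation for its Cauchy transform $G$ read

```latex
\mu_t = \mu_0 \boxplus \sigma_t,
\qquad
G_{\mu_t}(z) = G_{\mu_0}\!\bigl(z - t\,G_{\mu_t}(z)\bigr),
\quad z \in \mathbb{C}^{+}.
```

Solving the fixed-point equation recovers $G_{\mu_0}$ on a curve in the upper half-plane, which is precisely why the estimator combines a fixed-point resolution with a Cauchy deconvolution.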
|
We combine interactive zero-knowledge protocols and weak physical layer
randomness properties to construct a protocol which allows bootstrapping an
IT-secure and PF-secure channel from a memorizable shared secret. The protocol
also tolerates failures of its components, still preserving most of its
security properties, which makes it accessible to regular users.
|
Antiferromagnetic spintronics is a promising emerging paradigm to develop
high-performance computing and communications devices. From a theoretical point
of view, it is important to implement simulation tools that can support a
data-driven development of materials having specific properties for particular
applications. Here, we present a study focusing on antiferromagnetic materials
having an easy-plane anisotropy and interfacial Dzyaloshinskii-Moriya
interaction (IDMI). An analytical theory is developed and benchmarked against
full numerical micromagnetic simulations, describing the main properties of the
ground state in antiferromagnets and how it is possible to estimate the IDMI
from experimental measurements. The effect of the IDMI on the electrical
switching dynamics of the antiferromagnetic element is also analyzed. Our
theoretical results can be used for the design of multi-terminal heavy
metal/antiferromagnet memory devices.
|
The accretion of minor satellites is currently proposed as the most likely
mechanism to explain the significant size evolution of the massive galaxies
during the last ~10 Gyr. In this paper we investigate the rest-frame colors and
the average stellar ages of satellites found around massive galaxies (Mstar ~
10^11 Msun) since z~2. We find that the satellites have bluer colors than their
central galaxies. When exploring the stellar ages of the galaxies, we find that
the satellites have similar ages to the massive galaxies that host them at high
redshifts, while at lower redshifts they are, on average, ~1.5 Gyr younger. If
our satellite galaxies create the envelope of nearby massive galaxies, our
results would be compatible with the idea that the outskirts of those galaxies
are slightly younger, metal-poorer and with lower [alpha/Fe] abundance ratios
than their inner regions.
|
Algorithm selection is a well-known problem where researchers investigate how
to construct useful features representing the problem instances and then apply
feature-based machine learning models to predict which algorithm works best
with a given instance. However, even for simple optimization problems such as
the Euclidean Traveling Salesman Problem (TSP), a general and effective feature
representation for problem instances is still lacking. The important features of TSP are
relatively well understood in the literature, based on extensive domain
knowledge and post-analysis of the solutions. In recent years, Convolutional
Neural Network (CNN) has become a popular approach to select algorithms for
TSP. Compared to traditional feature-based machine learning models, CNN has an
automatic feature-learning ability and demands less domain expertise. However,
it still requires generating intermediate representations, i.e., multiple
images, to represent TSP instances. In this paper, we revisit the
algorithm selection problem for TSP, and propose a novel Graph Neural Network
(GNN), called GINES. GINES takes the coordinates of cities and distances
between cities as input. It is composed of a new message-passing mechanism and
a local neighborhood feature extractor to learn spatial information of TSP
instances. We evaluate GINES on two benchmark datasets. The results show that
GINES outperforms CNN and the original GINE models. It is better than the
traditional handcrafted feature-based approach on one dataset. The code and
dataset will be released in the final version of this paper.
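The kind of spatial message passing described above can be sketched in a few lines. The toy layer below is our own construction, not the GINES architecture: it aggregates neighboring city coordinates weighted by inverse pairwise distance, one simple way to inject TSP spatial structure into node features.

```python
# Toy message-passing step over a TSP instance: each city receives the
# distance-weighted average of its neighbors' coordinates.
# (Our illustration; GINES uses a learned message-passing mechanism.)
import numpy as np

def message_pass(coords, eps=1e-9):
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)      # pairwise city distances
    w = 1.0 / (dist + eps)
    np.fill_diagonal(w, 0.0)                  # no self-messages
    w = w / w.sum(axis=1, keepdims=True)      # normalize over neighbors
    return w @ coords                         # aggregated neighbor features

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
h = message_pass(coords)
# City 0 has two neighbors at equal distance, so h[0] = (0.5, 0.5).
```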
|
We present an update to seven stars with long-period planets or planetary
candidates using new and archival radial velocities from Keck-HIRES and
literature velocities from other telescopes. Our updated analysis better
constrains orbital parameters for these planets, four of which are known
multi-planet systems. HD 24040 b and HD 183263 c are super-Jupiters with
circular orbits and periods longer than 8 yr. We present a previously unseen
linear trend in the residuals of HD 66428 indicative of an additional planetary
companion. We confirm that GJ 849 is a multi-planet system and find a good
orbital solution for the c component: it is a $1 M_{\rm Jup}$ planet in a 15 yr
orbit (the longest known for a planet orbiting an M dwarf). We update the HD
74156 double-planet system. We also announce the detection of HD 145934 b, a $2
M_{\rm Jup}$ planet in a 7.5 yr orbit around a giant star. Two of our stars, HD
187123 and HD 217107, at present host the only known examples of systems
comprising a hot Jupiter and a planet with a well constrained period $> 5$ yr,
and with no evidence of giant planets in between. Our expanded and improved set
of long-period planet parameters will aid future analyses of the origins,
diversity, and evolution of planetary systems.
|
We present KERMIT, a simple insertion-based approach to generative modeling
for sequences and sequence pairs. KERMIT models the joint distribution and its
decompositions (i.e., marginals and conditionals) using a single neural network
and, unlike much prior work, does not rely on a prespecified factorization of
the data distribution. During training, one can feed KERMIT paired data $(x,
y)$ to learn the joint distribution $p(x, y)$, and optionally mix in unpaired
data $x$ or $y$ to refine the marginals $p(x)$ or $p(y)$. During inference, we
have access to the conditionals $p(x \mid y)$ and $p(y \mid x)$ in both
directions. We can also sample from the joint distribution or the marginals.
The model supports both serial fully autoregressive decoding and parallel
partially autoregressive decoding, with the latter exhibiting an empirically
logarithmic runtime. We demonstrate through experiments in machine translation,
representation learning, and zero-shot cloze question answering that our
unified approach is capable of matching or exceeding the performance of
dedicated state-of-the-art systems across a wide range of tasks without the
need for problem-specific architectural adaptation.
|
This paper reviews developments in statistics for spatial point processes
obtained within roughly the last decade. These developments include new classes
of spatial point process models such as determinantal point processes, models
incorporating both regularity and aggregation, and models where points are
randomly distributed around latent geometric structures. Regarding parametric
inference, the main focus is on various types of estimating functions derived
from so-called innovation measures. Optimality of such estimating functions is
discussed as well as computational issues. Maximum likelihood inference for
determinantal point processes and Bayesian inference are briefly considered
too. Concerning non-parametric inference, we consider extensions of functional
summary statistics to the case of inhomogeneous point processes, as well as new
approaches to simulation-based inference.
|
Within the framework of Relativistic Schroedinger Theory (an alternative form
of quantum mechanics for relativistic many-particle systems) it is shown that a
general N-particle system must occur in one of two forms: either as a
``positive'' or as a ``negative'' mixture, in analogy to the fermion-boson
dichotomy of matter in the conventional theory. The pure states represent a
limiting case between the two types of mixtures which themselves are considered
as the RST counterparts of the entangled (fermionic or bosonic) states of the
conventional quantum theory. Both kinds of mixtures are kept separate for
dynamical as well as topological reasons. The 2-particle configurations
(N=2) are studied in great detail with respect to their geometric and
topological properties which are described in terms of the Euler class of an
appropriate bundle connection. If the underlying space-time manifold (as the
base space of the fibre bundles applied) is parallelisable, the 2-particle
configurations can be thought to be generated geometrically by an appropriate
(2+2) splitting of the local tangent space.
|
We consider light-fermion three-loop corrections to $gg\to HH$ using forward
scattering kinematics in the limit of a vanishing Higgs boson mass, which
covers a large part of the physical phase space. We compute the form factors
and discuss the technical challenges. The approach outlined in this letter can
be used to obtain the full virtual corrections to $gg\to HH$ at
next-to-next-to-leading order.
|
In the Reverse Engineering and Hardware Assurance domain, a majority of the
data acquisition is done through electron microscopy techniques such as
Scanning Electron Microscopy (SEM). However, unlike its counterparts in optical
imaging, only a limited number of techniques are available to enhance and
extract information from the raw SEM images. In this paper, we introduce an
algorithm to segment out Integrated Circuit (IC) structures from the SEM image.
Unlike existing algorithms discussed in this paper, this algorithm is
unsupervised and parameter-free, and does not require prior information on the
noise model or features in the target image, making it effective in low-quality
image acquisition scenarios as well. Furthermore, the results from the
application of the algorithm on various structures and layers in the IC are
reported and discussed.
|
The intensity of Smith-Purcell radiation from metallic and dielectric
gratings (silicon, silica) is compared in a frequency-domain simulation. The
numerical model is discussed and verified with the Frank-Tamm formula for
Cherenkov radiation. For 30 keV electrons, rectangular dielectric gratings are
less efficient than their metallic counterpart, by an order of magnitude for
silicon, and two orders of magnitude for silica. For all gratings studied,
radiation intensity oscillates with grating tooth height due to electromagnetic
resonances in the grating. 3D and 2D numerical models are compared.
|
Radio-loud active galactic nuclei (RLAGNs) are rare among AGN populations.
Owing to the lack of high-resolution and high-frequency observations, their
structure and evolution stages are not well understood at high redshifts. In this work, we
report ALMA 237 GHz continuum observation at $0.023''$ resolution and VLA 44
GHz continuum observation at $0.08''$ resolution of the radio continuum
emission from a high-redshift radio and hyper-luminous infrared galaxy at
$z=1.92$. The new observations confirm the South-East (SE) and North-West (NW)
hotspots identified by previous low-resolution VLA observations at 4.7 and 8.2
GHz and identify a radio core undetected in all previous observations. The SE
hotspot has a flux density higher than that of the NW one by a factor of 6,
suggesting a Doppler boosting effect in the SE hotspot. In this
scenario, we estimate the advance speed of the jet head to range from
$\sim$0.1c to 0.3c, i.e., a mildly relativistic case. The projected
linear distance between the two hotspots is $\sim13$ kpc, yielding a linear
size ($\leq20$ kpc) of a Compact-Steep-Spectrum (CSS) source. Combined with new
high-frequency ($\nu_\text{obs}\geq44$ GHz) and archived low-frequency
observations ($\nu_\text{obs}\leq8.2$ GHz), we find that the injection spectra of
both NW and SE hotspots can be fitted with a continuous injection (CI) model.
Based on the CI model, the synchrotron ages of the NW and SE hotspots are of the
order of $10^5$ yr, consistent with the range $10^3 - 10^5$ yr observed
in CSS sources associated with radio AGNs at an early evolution stage. The CI
model also favors the scenario in which the double hotspots have experienced a
quiescent phase, suggesting that this RLAGN may have transient or intermittent
activities.
|
Visual Inertial Odometry (VIO) algorithms estimate the accurate camera
trajectory by using camera and Inertial Measurement Unit (IMU) sensors. The
applications of VIO span a diverse range, including augmented reality and
indoor navigation. VIO algorithms hold the potential to facilitate navigation
for visually impaired individuals in both indoor and outdoor settings.
Nevertheless, state-of-the-art VIO algorithms encounter substantial challenges
in dynamic environments, particularly in densely populated corridors. Existing
VIO datasets, e.g., ADVIO, typically fail to capture these challenges
effectively. In this paper, we introduce the Amirkabir campus dataset (AUT-VI)
to address the mentioned problem and improve the navigation systems. AUT-VI is
a novel and super-challenging dataset with 126 diverse sequences in 17
different locations. This dataset contains dynamic objects, challenging
loop-closure/map-reuse, different lighting conditions, reflections, and sudden
camera movements to cover all extreme navigation scenarios. Moreover, in
support of ongoing development efforts, we have released the Android
application for data capture to the public. This allows fellow researchers to
easily capture their customized VIO dataset variations. In addition, we
evaluate state-of-the-art Visual Inertial Odometry (VIO) and Visual Odometry
(VO) methods on our dataset, emphasizing the essential need for this
challenging dataset.
|
We propose a new scenario for generating a relic density of non-relativistic
dark matter in the context of heterotic string theory. Contrary to standard
thermal freeze-out scenarios, dark-matter particles are abundantly produced
while still relativistic, and then decouple from the thermal bath due to the
sudden increase of their mass above the universe temperature. This mass
variation is sourced by the condensation of an order-parameter modulus, which
is triggered when the temperature T(t) drops below the supersymmetry breaking
scale M(t), which are both time-dependent. A cosmological attractor mechanism
forces this phase transition to take place, in an explicit class of heterotic
string models with spontaneously broken supersymmetry, and at finite
temperature.
|
Deep generative models have recently yielded encouraging results in producing
subjectively realistic samples of complex data. Far less attention has been
paid to making these generative models interpretable. In many scenarios,
ranging from scientific applications to finance, the observed variables have a
natural grouping. It is often of interest to understand systems of interaction
amongst these groups, and latent factor models (LFMs) are an attractive
approach. However, traditional LFMs are limited by assuming a linear
correlation structure. We present an output interpretable VAE (oi-VAE) for
grouped data that models complex, nonlinear latent-to-observed relationships.
We combine a structured VAE composed of group-specific generators with a
sparsity-inducing prior. We demonstrate that oi-VAE yields meaningful notions
of interpretability in the analysis of motion capture and MEG data. We further
show that in these situations, the regularization inherent to oi-VAE can
actually lead to improved generalization and learned generative processes.
|
We argue that strong dynamics at the Planck scale can solve the cosmological
moduli problem. We discuss its implications for inflation models, and find that
a certain type of multi-field inflation model is required for this mechanism to
work, since otherwise it would lead to a serious eta-problem. Combined with
the inflaton-induced gravitino problem, we show that a chaotic inflation with a
discrete symmetry naturally avoids both problems. Interestingly, the focus
point supersymmetry is predicted when this mechanism is applied to the Polonyi
model.
|
Deep neural networks are prone to overconfident predictions on outliers.
Bayesian neural networks and deep ensembles have both been shown to mitigate
this problem to some extent. In this work, we aim to combine the benefits of
the two approaches by proposing to predict with a Gaussian mixture model
posterior that consists of a weighted sum of Laplace approximations of
independently trained deep neural networks. The method can be used post hoc
with any set of pre-trained networks and only requires a small computational
and memory overhead compared to regular ensembles. We theoretically validate
that our approach mitigates overconfidence "far away" from the training data
and empirically compare against state-of-the-art baselines on standard
uncertainty quantification benchmarks.
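A minimal sketch of the prediction step, under assumptions of ours rather than the paper's code: linear "networks" with known Gaussian posteriors stand in for trained deep networks with Laplace approximations. Each member contributes Monte Carlo weight samples, and the predictive distributions are averaged into the mixture.

```python
# Predict with a uniformly weighted mixture of per-member Gaussian (Laplace)
# posteriors, approximated by Monte Carlo sampling of the weights.
# (Sketch only; real members would be Laplace approximations of trained nets.)
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_predict(x, members, n_samples=20):
    """members: list of (w_map, cov) pairs for a linear 'network' f(x) = W x."""
    probs = []
    for w_map, cov in members:
        for _ in range(n_samples):
            # Sample weights from this member's Gaussian posterior.
            w = rng.multivariate_normal(w_map.ravel(), cov).reshape(w_map.shape)
            probs.append(softmax(w @ x))
    return np.mean(probs, axis=0)  # mixture = average over all samples

# Two toy 'ensemble members' mapping 2-d inputs to 3 classes.
members = [(rng.standard_normal((3, 2)), 0.1 * np.eye(6)) for _ in range(2)]
p = mixture_predict(np.array([1.0, -0.5]), members)
```

Because the mixture averages many posterior samples, inputs far from the training data tend toward a flatter (less confident) predictive distribution, which is the overconfidence mitigation the abstract refers to.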
|
We present here the results of calculations of photoelectrons' angular
anisotropy and spin-polarization parameters for a number of semi-filled shell
atoms. We consider ionization of outer or in some cases next to the outer
electrons in a number of elements from I, V, and VI groups of the Periodic
Table. All calculations are performed taking into account multi-electron
correlations in the framework of the Spin-Polarized version of the Random Phase
Approximation with Exchange (SP RPAE). We consider the dipole angular
distribution and spin polarization of photoelectrons from semi-filled subshells
and from closed shells that are neighbors to the semi-filled shells. We have
considered also angular anisotropy and spin-polarization of photoelectrons from
some excited atoms that are formed by spin-flip of one of the outer electrons.
To check the accuracy and consistency of the applied SP RPAE approach and to
see the role of the nuclear charge variation alone, we have calculated the
dipole angular anisotropy and spin-polarization parameters of 3p electrons in
K and compared them to those of Ar and K+, which have the same configuration. In total, we
have calculated the angular anisotropy and spin-polarization parameters for
following subshells of atoms N (2p), P (3p), Ar (3p), K+(3p), K(3p), Cr(3p,
3d), Cr*(3d), Mn(3p, 3d), As(3d, 4p), Mo(4p, 4d), Mo*(4d), Tc(4p, 4d), Sb(4d,
5p), Eu(4f). The peculiarities of the obtained parameters as functions of photon
frequency are discussed, as well as some specific features of the considered
semi-filled shell objects.
|
We study the Neumann and Dirichlet problems for the total variation flow in
metric measure spaces. We prove existence and uniqueness of weak solutions and
study their asymptotic behaviour. Furthermore, in the Neumann problem we
provide a notion of solutions which is valid for $L^1$ initial data, as well as
prove their existence and uniqueness. Our main tools are the first-order linear
differential structure due to Gigli and a version of the Gauss-Green formula.
|
Triangulated surfaces are compact Riemann surfaces equipped with a conformal
triangulation by equilateral triangles. In 2004, Brooks and Makover asked how
triangulated surfaces are distributed in the moduli space of Riemann surfaces
as the genus tends to infinity. Mirzakhani raised this question in her 2010 ICM
address. We show that in the large genus case, triangulated surfaces are well
distributed in moduli space in a fairly strong sense. We do this by proving
upper and lower bounds for the number of triangulated surfaces lying in a
Teichm\"uller ball in moduli space. In particular, we show that the number of
triangulated surfaces lying in a Teichm\"uller unit ball is at most exponential
in the number of triangles, independent of the genus.
|
RESTful APIs based on HTTP are one of the most important ways to make data
and functionality available to applications and software services. However, the
quality of the API design strongly impacts API understandability and usability,
and many rules have been specified for this. While we have evidence for the
effectiveness of many design rules, it is still difficult for practitioners to
identify rule violations in their design. We therefore present RESTRuler, a
Java-based open-source tool that uses static analysis to detect design rule
violations in OpenAPI descriptions. The current prototype supports 14 rules
that go beyond simple syntactic checks and partly rely on natural language
processing. The modular architecture also makes it easy to implement new rules.
To evaluate RESTRuler, we conducted a benchmark with over 2,300 public OpenAPI
descriptions and asked 7 API experts to construct 111 complicated rule
violations. For robustness, RESTRuler successfully analyzed 99% of the used
real-world OpenAPI definitions, with some failing due to excessive size. For
performance efficiency, the tool performed well for the majority of files and
could analyze 84% in less than 23 seconds with low CPU and RAM usage. Lastly,
for effectiveness, RESTRuler achieved a precision of 91% (ranging from 60% to
100% per rule) and recall of 68% (ranging from 46% to 100%). Based on these
variations between rule implementations, we identified several opportunities
for improvements. While RESTRuler is still a research prototype, the evaluation
suggests that the tool is quite robust to errors, resource-efficient for most
APIs, and shows good precision and decent recall. Practitioners can use it to
improve the quality of their API design.
|
We provide base change theorems, projection formulae and Verdier duality for
both cohomology and homology in the context of finite topological spaces.
|
For each non-constant Boolean function $q$, Klapper introduced the notion of
$q$-transforms of Boolean functions. The {\em $q$-transform} of a Boolean
function $f$ is related to the Hamming distances from $f$ to the functions
obtainable from $q$ by nonsingular linear change of basis.
In this work we discuss the existence of $q$-nearly bent functions, a new
family of Boolean functions characterized by the $q$-transform. Let $q$ be a
non-affine Boolean function. We prove that any balanced Boolean function
(linear or non-linear) is $q$-nearly bent if $q$ has weight one, which gives a
positive answer to an open question (whether there exist non-affine $q$-nearly
bent functions) proposed by Klapper. We also prove a necessary condition for
checking when a function is not $q$-nearly bent.
|
In 1955 Dye proved that two von Neumann factors not of type I_2n are
isomorphic (via a linear or a conjugate linear *-isomorphism) if and only if
their unitary groups are isomorphic as abstract groups. We consider an analogue
for C*-algebras. We show that the topological general linear group is a
classifying invariant for simple, unital AH-algebras of slow dimension growth
and of real rank zero, and the abstract general linear group is a classifying
invariant for unital Kirchberg algebras in the UCT class.
|
We characterise Geometric Property (T) by the existence of a certain
projection in the maximal uniform Roe algebra $C_{u,\max}^*(X)$, extending the
notion of Kazhdan projection for groups to the realm of metric spaces. We also
describe this projection in terms of the decomposition of the metric space into
coarsely connected components.
|
In anomaly detection, a prominent task is to induce a model that identifies
anomalies, learned solely from normal data. Generally, one is interested in
finding an anomaly detector that correctly identifies anomalies, i.e., data
points that do not belong to the normal class, without raising too many false
alarms. Which anomaly detector is best suited depends on the dataset at hand
and thus needs to be tailored. The quality of an anomaly detector may be
assessed via confusion-based metrics such as the Matthews correlation
coefficient (MCC). However, since during training only normal data is available
in a semi-supervised setting, such metrics are not accessible. To facilitate
automated machine learning for anomaly detectors, we propose to employ
meta-learning to predict MCC scores based on metrics that can be computed with
normal data only. First promising results can be obtained considering the
hypervolume and the false positive rate as meta-features.
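For reference, the MCC used as the prediction target above is computed directly from the confusion matrix; a minimal sketch (the counts below are illustrative):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # common convention: 0 when undefined

# A detector with 40 true positives, 5 false alarms, 50 true negatives,
# and 5 missed anomalies:
print(round(mcc(tp=40, fp=5, tn=50, fn=5), 3))
```

Since computing these counts requires labeled anomalies, the meta-learning approach described above must predict this score from normal-data-only metrics.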
|
A dynamical systems approach to competition of Saffman-Taylor fingers in a
channel is developed. This is based on the global study of the phase space
structure of the low-dimensional ODE's defined by the classes of exact
solutions of the problem without surface tension. Some simple examples are
studied in detail, and general proofs concerning properties of fixed points and
existence of finite-time singularities for broad classes of solutions are
given. The existence of a continuum of multifinger fixed points and its
dynamical implications are discussed. The main conclusion is that exact
zero-surface tension solutions taken in a global sense as families of
trajectories in phase space spanning a sufficiently large set of initial
conditions, are unphysical because the multifinger fixed points are
nonhyperbolic, and an unfolding of them does not exist within the same class of
solutions. Hyperbolicity (saddle-point structure) of the multifinger fixed
points is argued to be essential to the physically correct qualitative
description of finger competition. The restoring of hyperbolicity by surface
tension is discussed as the key point for a generic Dynamical Solvability
Scenario which is proposed for a general context of interfacial pattern
selection.
|
Precision measurements of the number of effective relativistic neutrino
species and the primordial element abundances require accurate theoretical
predictions for early Universe observables in the Standard Model and beyond.
Given the complexity of accurately modelling the thermal history of the early
Universe, in this work, we extend a previous method presented by the author to
obtain simple, fast and accurate early Universe thermodynamics. The method is
based upon the approximation that all relevant species can be described by
thermal equilibrium distribution functions characterized by a temperature and a
chemical potential. We apply the method to neutrino decoupling in the Standard
Model and find $N_{\rm eff}^{\rm SM} = 3.045$ -- a result in excellent
agreement with previous state-of-the-art calculations. We apply the method to
study the thermal history of the Universe in the presence of a very light
($1\,\text{eV}<m_\phi < 1\,\text{MeV}$) and weakly coupled ($\lambda \lesssim
10^{-9}$) neutrinophilic scalar. We find our results to be in excellent
agreement with the solution to the exact Liouville equation. Finally, we
release a code: NUDEC_BSM (available in both Mathematica and Python formats),
with which neutrino decoupling can be accurately and efficiently solved in the
Standard Model and beyond: https://github.com/MiguelEA/nudec_BSM .
|
The phase diagram of a polydisperse mixture of uniaxial rod-like and
plate-like hard parallelepipeds is determined for aspect ratios $\kappa=5$ and
15. All particles have equal volume and polydispersity is introduced in a
highly symmetric way. The corresponding binary mixture is known to have a
biaxial phase for $\kappa=15$, but to be unstable against demixing into two
uniaxial nematics for $\kappa=5$. We find that the phase diagram for
$\kappa=15$ is qualitatively similar to that of the binary mixture, regardless
of the amount of polydispersity, while for $\kappa=5$ a sufficient amount of
polydispersity stabilizes the biaxial phase. This provides some clues for the
design of an experiment in which this long searched biaxial phase could be
observed.
|
From face recognition systems installed in phones to self-driving cars, the
field of AI is witnessing rapid transformations and is being integrated into
our everyday lives at an incredible pace. Any major failure in these systems'
predictions could be devastating, leaking sensitive information or even costing
lives (as in the case of self-driving cars). However, deep neural networks,
which form the basis of such systems, are highly susceptible to a specific type
of attack, called adversarial attacks. A hacker can, even with minimal
computation, generate adversarial examples (images or data points that belong
to another class but consistently fool the model into misclassifying them as
genuine) and undermine the basis of such algorithms. In this paper, we compile
and test numerous approaches to defend against such adversarial attacks. Out of
the ones explored, we found two effective techniques, namely Dropout and
Denoising Autoencoders, and show their success in preventing such attacks from
fooling the model. We demonstrate that these techniques are also resistant to
both higher noise levels as well as different kinds of adversarial attacks
(although not tested against all). We also develop a framework for selecting a
suitable defense technique to use against attacks, based on the nature of the
application and resource constraints of the Deep Neural Network.
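One of the simplest attacks such defenses are evaluated against is the fast gradient sign method (FGSM), which perturbs an input by a small step in the direction of the sign of the loss gradient. A self-contained sketch on a fixed logistic model; the weights and inputs are illustrative, not from any trained network:

```python
import math

# FGSM on a fixed logistic model: perturb the input by epsilon in the
# direction of the sign of the loss gradient w.r.t. the input.
# Weights and inputs below are illustrative assumptions.
w, b = [2.0, -1.0], 0.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(x, y, eps):
    """y is +1/-1; the logistic-loss gradient w.r.t. x is -y*sigmoid(-y*z)*w."""
    z = score(x)
    s = 1.0 / (1.0 + math.exp(y * z))  # sigmoid(-y*z)
    grad = [-y * s * wi for wi in w]
    sign = [1.0 if g > 0 else -1.0 for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]

x = [1.0, 1.0]                 # classified as +1 (score = 1.0 > 0)
x_adv = fgsm(x, y=1, eps=0.5)  # small perturbation flips the decision
print(score(x), score(x_adv))
```

Defenses such as denoising autoencoders aim to remove exactly this kind of structured perturbation before classification.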
|
In this paper, we propose an approach for solving PDEs on evolving surfaces
using a combination of the trace finite element method and a fast marching
method. The numerical approach is based on the Eulerian description of the
surface problem and employs a time-independent background mesh that is not
fitted to the surface. The surface and its evolution may be given implicitly,
for example, by the level set method. Extension of the PDE off the surface is
not required. The method introduced in this paper naturally allows a surface to
undergo topological changes and experience local geometric singularities. In
the simplest setting, the numerical method is second order accurate in space
and time. Higher order variants are feasible, but not studied in this paper. We
show results of several numerical experiments, which demonstrate the
convergence properties of the method and its ability to handle the case of the
surface with topological changes.
|
Asynchronous programming is widely adopted for building responsive and
efficient software, and modern languages such as C# provide async/await
primitives to simplify the use of asynchrony. In this paper, we propose an
approach for refactoring a sequential program into an asynchronous program that
uses async/await, called asynchronization. The refactoring process is
parametrized by a set of methods to replace with asynchronous versions, and it
is constrained to avoid introducing data races. We investigate the delay
complexity of enumerating all data race free asynchronizations, which
quantifies the delay between outputting two consecutive solutions. We show that
this is polynomial time modulo an oracle for solving reachability in sequential
programs. We also describe a pragmatic approach based on an interprocedural
data-flow analysis with polynomial-time delay complexity. The latter approach
has been implemented and evaluated on a number of non-trivial C# programs
extracted from open-source repositories.
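The paper targets C#'s async/await; the shape of the refactoring can be illustrated with Python's asyncio (a sketch with hypothetical method names, assuming the two calls share no mutable state, so running them concurrently introduces no data race):

```python
import asyncio

# Sequential original: two independent lookups, run one after the other.
def fetch_user_sync():
    return {"id": 1}

def fetch_orders_sync():
    return [101, 102]

# Asynchronized version: each selected method gets an async counterpart,
# and the caller awaits both concurrently. Because the two coroutines
# touch no shared mutable state, the refactoring is data-race free.
async def fetch_user():
    await asyncio.sleep(0)  # stands in for real I/O latency
    return {"id": 1}

async def fetch_orders():
    await asyncio.sleep(0)
    return [101, 102]

async def main():
    user, orders = await asyncio.gather(fetch_user(), fetch_orders())
    return user["id"], len(orders)

print(asyncio.run(main()))
```

Enumerating which subsets of methods can be safely asynchronized in this way, and with what delay between solutions, is the complexity question studied in the paper.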
|
The claims of a recent Science Advances paper by Schilling et al., of "flow of
heat from cold to hot without intervention" with "oscillatory thermal inertia,"
are fundamentally misplaced and dramatized as miraculous, even though
compliance with the Second Law of thermodynamics is acknowledged. There is
nothing "magical and beyond the proof-of-concept" as claimed. The same effect
could have been achieved by any work-generating device, stored by any suitable
device (a superconductive inductor was beneficial but not essential as
claimed), and such stored work used subsequently in any refrigeration device to
sub-cool the body. Cooling devices work by transforming temperature to a
desired level by work transfer (thermal transformer and temperature
oscillator), that is, by non-thermal, adiabatic processes. However, "direct
heat transfer" is always from higher to lower temperature in all refrigeration
components, without exception; it is not to be confused with "net transport of
thermal energy by work" from cold to hot ambients. The unjustified claims are
critically analyzed and demystified here.
|
Solidarity is a crucial concept to understand social relations in societies.
In this paper, we explore fine-grained solidarity frames to study solidarity
towards women and migrants in German parliamentary debates between 1867 and
2022. Using 2,864 manually annotated text snippets (with a cost exceeding 18k
Euro), we evaluate large language models (LLMs) like Llama 3, GPT-3.5, and
GPT-4. We find that GPT-4 outperforms other LLMs, approaching human annotation
quality. Using GPT-4, we automatically annotate more than 18k further instances
(with a cost of around 500 Euro) across 155 years and find that solidarity with
migrants outweighs anti-solidarity but that frequencies and solidarity types
shift over time. Most importantly, group-based notions of (anti-)solidarity
fade in favor of compassionate solidarity, focusing on the vulnerability of
migrant groups, and exchange-based anti-solidarity, focusing on the lack of
(economic) contribution. Our study highlights the interplay of historical
events, socio-economic needs, and political ideologies in shaping migration
discourse and social cohesion. We also show that powerful LLMs, if carefully
prompted, can be cost-effective alternatives to human annotation for hard
social scientific tasks.
|
We prove a universal lower bound for the $L^{n/2}$-norm of the Weyl tensor in
terms of the Betti numbers for compact $n$-dimensional Riemannian manifolds
that are conformally immersed as hypersurfaces in the Euclidean space. As a
consequence, we determine the homology of almost conformally flat
hypersurfaces. Furthermore, we provide a necessary condition for a compact
Riemannian manifold to admit an isometric minimal immersion as a hypersurface
in the sphere and extend a result due to Shiohama and Xu \cite{SX} for compact
hypersurfaces in any space form.
|
We study ultra-broadband slow light in a warm Rubidium vapor cell. By working
between the D1 and D2 transitions, we find a several-nm window centered at
788.4 nm in which the group index is highly uniform and the absorption is small
(<1%). We demonstrate that we can control the group delay by varying the
temperature of the cell, and observe a tunable fractional delay of 18 for
pulses as short as 250 fs (6.9 nm bandwidth) with a fractional broadening of
only 0.65 and a power leakage of 55%. We find that a simple theoretical model
is in excellent agreement with the experimental results. Using this model, we
discuss the impact of the pulse's spectral characteristics on the distortion it
incurs during propagation through the vapor.
|
A stochastic theory for a branching process in a neutron population with two
energy levels is used to assess the applicability of the differential
self-interrogation Feynman-alpha method, using reaction intensities estimated
numerically from Monte Carlo simulations. More specifically, the
variance-to-mean or Feynman-alpha formula is applied to investigate the
appearing exponentials using the numerically obtained reaction intensities.
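The variance-to-mean statistic at the heart of the method is simple to state: the Feynman-Y value is the variance-to-mean ratio of gated detector counts minus one, which vanishes for uncorrelated (Poisson) counts and is positive for correlated branching events. A toy sketch with synthetic counts, not the paper's two-group model:

```python
import random

def feynman_y(counts):
    """Variance-to-mean ratio of gated detector counts, minus one.
    Y = 0 for pure Poisson counts; Y > 0 signals correlated events."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean - 1.0

# Synthetic check: near-Poisson counts give Y close to 0, while
# duplicating every event (perfect pair correlations) pushes Y towards 1.
random.seed(0)
poisson = [sum(random.random() < 0.01 for _ in range(1000)) for _ in range(5000)]
pairs = [2 * c for c in poisson]  # every count arrives as a correlated pair
print(round(feynman_y(poisson), 2), round(feynman_y(pairs), 2))
```

In the method above, the exponentials entering Y(T) as a function of gate width T carry the physics; this sketch only shows the estimator itself.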
|
The study of the chemical abundances of metal-poor stars in dwarf galaxies
provides a venue to constrain paradigms of chemical enrichment and galaxy
formation. Here we present metallicity and carbon abundance measurements of 100
stars in Sculptor from medium-resolution (R ~ 2000) spectra taken with the
Magellan/Michigan Fiber System mounted on the Magellan-Clay 6.5m telescope at
Las Campanas Observatory. We identify 24 extremely metal-poor star candidates
([Fe/H] < -3.0) and 21 carbon-enhanced metal-poor (CEMP) star candidates. Eight
carbon-enhanced stars are classified with at least 2$\sigma$ confidence and
five are confirmed as such with follow-up R~6000 observations using the
Magellan Echellette Spectrograph on the Magellan-Baade 6.5m telescope. We
measure a CEMP fraction of 36% for stars below [Fe/H] = -3.0, indicating that
the prevalence of carbon-enhanced stars in Sculptor is similar to that of the
halo (~43%) after excluding likely CEMP-s and CEMP-r/s stars from our sample.
However, we do not detect any CEMP stars that are strongly enhanced in carbon
(e.g., [C/Fe] > 1.0). The existence of a large number of CEMP stars both in the
halo and in Sculptor suggests that some halo CEMP stars may have originated
from accreted early analogs of dwarf galaxies.
|
The semiclassical Wigner treatment of bimolecular collisions, proposed by Lee
and Scully on a partly intuitive basis [J. Chem. Phys. 73, 2238 (1980)], is
derived here from first principles. The derivation combines E. J. Heller's
ideas [J. Chem. Phys. 62, 1544 (1975); 65, 1289 (1976); 75, 186 (1981)], the
backward picture of molecular collisions [L. Bonnet, J. Chem. Phys. 133, 174108
(2010)] and the microreversibility principle.
|
Achieving significant performance gains in terms of both system throughput
and massive connectivity, non-orthogonal multiple access (NOMA) has been
considered as a very promising candidate for future wireless communications
technologies. It has already received serious consideration for implementation
in the fifth generation (5G) and beyond wireless communication systems. This is
mainly due to NOMA allowing more than one user to utilise one transmission
resource simultaneously at the transmitter side and successive interference
cancellation (SIC) at the receiver side. However, in order to take optimal
advantage of the benefits NOMA provides, power allocation needs to be
considered to maximise the system throughput. This problem is non-deterministic
polynomial-time (NP)-hard, which is the main motivation for applying deep
learning techniques to power allocation. In this paper, a state-of-the-art
review on cutting-edge solutions to the power allocation optimisation problem
using deep learning is provided. It is shown that the use of deep learning
techniques to obtain effective solutions to the power allocation problem in
NOMA is paramount for the future of NOMA-based wireless communication systems.
Furthermore, several possible research directions based on the use of deep
learning in NOMA systems are presented.
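To make the power-allocation objective concrete, here is a two-user downlink NOMA sketch using the textbook achievable-rate expressions (the power split, channel gains, and noise level are illustrative assumptions):

```python
import math

def noma_rates(p, alpha, g_near, g_far, n0=1.0):
    """Achievable rates (bits/s/Hz) for two-user downlink NOMA.
    The far (weak) user gets power fraction alpha and decodes while
    treating the near user's signal as noise; the near (strong) user
    removes the far user's signal via SIC and decodes interference-free."""
    r_far = math.log2(1 + alpha * p * g_far / ((1 - alpha) * p * g_far + n0))
    r_near = math.log2(1 + (1 - alpha) * p * g_near / n0)
    return r_near, r_far

# Illustrative channel gains: the near user has the stronger channel.
r_near, r_far = noma_rates(p=10.0, alpha=0.8, g_near=1.0, g_far=0.1)
print(round(r_near, 3), round(r_far, 3))
```

The optimisation problem surveyed above is to choose the power fractions (here, alpha) across many users and resources so as to maximise a throughput objective under fairness and power constraints.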
|
"The Center is Everywhere" is a sculpture by Josiah McElheny, currently
(through October 14, 2012) on exhibit at the Institute of Contemporary Art,
Boston. The sculpture is based on data from the Sloan Digital Sky Survey
(SDSS), using hundreds of glass crystals and lamps suspended from brass rods to
represent the three-dimensional structure mapped by the SDSS through one of its
2000+ spectroscopic plugplates. This article describes the scientific ideas
behind this sculpture, emphasizing the principle of the statistical homogeneity
of cosmic structure in the presence of local complexity. The title of the
sculpture is inspired by the work of the French revolutionary Louis Auguste
Blanqui, whose 1872 book "Eternity Through The Stars: An Astronomical
Hypothesis" was the first to raise the spectre of the infinite replicas
expected in an infinite, statistically homogeneous universe. Puzzles of
infinities, probabilities, and replicas continue to haunt modern fiction and
contemporary discussions of inflationary cosmology.
|
Dempster-Shafer theory of imprecise probabilities has proved useful to
incorporate both nonspecificity and conflict uncertainties in an inference
mechanism. The traditional Bayesian approach cannot differentiate between the
two, and is unable to handle non-specific, ambiguous, and conflicting
information without making strong assumptions. This paper presents a
generalization of a recent Bayesian-based method of quantifying information
flow in Dempster-Shafer theory. The generalization concretely enhances the
original method by removing all of its weaknesses, which are highlighted in
this paper. In short, our generalized method can handle any number of secret inputs
to a program, it enables the capturing of an attacker's beliefs in all kinds of
sets (singleton or not), and it supports a new and precise quantitative
information flow measure whose reported flow results are plausible in that they
are bounded by the size of a program's secret input, and can be easily
associated with the exhaustive search effort needed to uncover a program's
secret information, unlike the results reported by the original metric.
|
The substrate material of monolayer graphene influences the charge carrier
mobility by various mechanisms. At room temperature, the scattering of
conduction electrons by phonon modes localized at the substrate surface can
severely limit the charge carrier mobility. We here show that for substrates
made of the piezoelectric hexagonal boron nitride (hBN), in comparison to the
widely used SiO$_2$, this mechanism of remote phonon scattering is --at room
temperature-- weaker by almost an order of magnitude, and causes a resistivity
of approximately 3\,$\Omega$. This makes hBN an excellent candidate material
for future graphene based electronic devices operating at room temperature.
|
We consider the most general form of soft and collinear factorization for
hard-scattering amplitudes to all orders in perturbative Quantum
Chromodynamics. Specifically, we present the generalization of collinear
factorization to configurations with several collinear directions, where the
most singular behaviour is encoded by generalized collinear splitting
amplitudes that manifestly embed the breaking of strict collinear factorization
in space-like collinear configurations. We also extend the analysis to the
simultaneous soft-collinear factorization with multiple collinear directions
and show how na\"{\i}ve multiplicative factorization does not hold.
|
The glass transition is a long-standing problem in physics. Identifying the
structural origin of the transition may lead to the ultimate solution to the
problem. Here, for the first time, we discover such a structural origin by
proposing a novel method to analyze structure-dynamics relation in glasses. An
interesting two-step glass transition, with rotational glass transition
preceding translational one, is identified experimentally in 2D colloidal rod
systems. During the transition, parallel and perpendicularly packed rods are
found to form local free energy minima in configurational space, separated by
an activation barrier. This barrier increases significantly when rotational
glass transition is approached; thereby the rotational motion is frozen while
the translational one remains diffusive. We argue that the activation barrier
for rotation is the origin of the two-step glass transition. Such an activation
barrier between well-defined local configurations holds the key to understand
the two-step glass transition in general.
|
We explore the applicability of MATLAB for 3D computational fluid dynamics
(CFD) of shear-driven indoor airflows. A new scale-resolving, large-eddy
simulation (LES) solver titled DNSLABIB is proposed for MATLAB utilizing
graphics processing units (GPUs). The solver is first validated against another
CFD software (OpenFOAM). Next, we demonstrate the solver performance in three
isothermal indoor ventilation configurations and the results are discussed in
the context of airborne transmission of COVID-19. Ventilation in these cases is
studied at both low (0.1 m/s) and high (1 m/s) airflow rates corresponding to
$Re=5000$ and $Re=50000$. An analysis of the indoor CO$_2$ concentration is
carried out as the room is emptied from stale, high CO$_2$ content air. We
estimate the air changes per hour (ACH) values for three different room
geometries and show that the numerical estimates from 3D CFD simulations may
differ by 80-150 % ($Re=50000$) and 75-140 % ($Re=5000$) from the theoretical
ACH value based on the perfect mixing assumption. Additionally, the analysis of
the CO$_2$ probability distributions (PDFs) indicates a relatively non-uniform
distribution of fresh air indoors. Finally, utilizing a time-dependent
Wells-Riley analysis, an example is provided on the growth of the cumulative
infection risk being reduced rapidly after the ventilation is started. The
average infection risk is shown to reduce by a factor of 2 for lower
ventilation rates (ACH=3.4-6.3) and 10 for the higher ventilation rates
(ACH=37-64). The results indicate a high potential for DNSLABIB in various
future developments on airflow prediction.
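The Wells-Riley estimate referred to above has a standard closed form, P = 1 - exp(-I q p t / Q), with I infectors emitting q quanta per hour, breathing rate p, exposure time t, and fresh-air supply Q. A sketch with illustrative parameter values (not those of the paper):

```python
import math

def wells_riley(n_infectors, quanta_rate, breathing_rate, t_hours, room_m3, ach):
    """Steady-state Wells-Riley infection probability
    P = 1 - exp(-I q p t / Q), with Q the fresh-air supply (m^3/h)."""
    q_flow = room_m3 * ach  # ventilation flow from air changes per hour
    return 1.0 - math.exp(
        -n_infectors * quanta_rate * breathing_rate * t_hours / q_flow
    )

# Illustrative scenario: one infector, 25 quanta/h, 0.5 m^3/h breathing,
# 2 h exposure in a 50 m^3 room. Higher ACH lowers the risk.
low = wells_riley(1, 25.0, 0.5, 2.0, 50.0, ach=3.4)
high = wells_riley(1, 25.0, 0.5, 2.0, 50.0, ach=37.0)
print(round(low, 3), round(high, 3))
```

The paper's time-dependent analysis generalizes this steady-state form by letting the quanta concentration evolve as the room air is exchanged.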
|
We address the problem of learning in an online setting where the learner
repeatedly observes features, selects among a set of actions, and receives a
reward for the action taken. We provide the first efficient algorithm with an
optimal regret. Our algorithm uses a cost sensitive classification learner as
an oracle and has a running time $\mathrm{polylog}(N)$, where $N$ is the number
of classification rules among which the oracle might choose. This is
exponentially faster than all previous algorithms that achieve optimal regret
in this setting. Our formulation also enables us to create an algorithm with
regret that is additive rather than multiplicative in feedback delay as in all
previous work.
|
An $N$-qubit quantum state requires a vector of length $2^N$, leading to an
exponential increase in the required memory with $N$ in conventional
statevector-based quantum simulators. A proposed solution to this issue is the
decision diagram-based quantum simulator, which can significantly decrease the
necessary memory and is expected to operate faster for specific quantum
circuits. However, decision diagram-based quantum simulators are not easily
parallelizable because data must be manipulated dynamically, and most
implementations run on one thread. This paper introduces ring
communication-based optimal parallelization and automatic swap insertion
techniques for multi-node implementation of decision diagram-based quantum
simulators. The ring communication approach is designed so that each node
communicates with its neighboring nodes, which can facilitate faster and more
parallel communication than broadcasting where one node needs to communicate
with all nodes simultaneously. The automatic swap insertion method, an approach
to minimize inter-node communication, has been employed in existing multi-node
state vector-based simulators, but this paper proposes two methods specifically
designed for decision diagram-based quantum simulators. These techniques were
implemented and evaluated using the Shor algorithm and random circuits with up
to 38 qubits using a maximum of 256 nodes. The experimental results have
revealed that the multi-node implementation can reduce run-time by a factor of up to 26.
For example, Shor circuits that need 38 qubits can finish simulation in 147
seconds. Additionally, it was shown that ring communication yields a larger
speed-up than broadcast communication, and that selecting the appropriate
automatic swap insertion method is important.
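The ring pattern itself can be sketched independently of the simulator: each node forwards its current chunk to its right neighbor for N-1 steps, after which every node has accumulated every partial result. A toy single-process simulation (a stand-in for the multi-node scheme, not the paper's MPI implementation):

```python
# Toy simulation of ring communication among N "nodes": in each of the
# N-1 steps every node passes its in-flight chunk to its right neighbor
# and accumulates what it receives from its left neighbor.
def ring_allreduce_sum(chunks):
    n = len(chunks)
    totals = list(chunks)     # each node starts with its own value
    in_flight = list(chunks)  # what each node will send next
    for _ in range(n - 1):
        # every node i receives what node (i-1) % n sent, simultaneously
        in_flight = [in_flight[(i - 1) % n] for i in range(n)]
        totals = [t + x for t, x in zip(totals, in_flight)]
    return totals

print(ring_allreduce_sum([1, 2, 3, 4]))
```

Each step involves only neighbor-to-neighbor transfers, which is why this pattern parallelizes better than having one node broadcast to all others at once.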
|
We study the block counting process and the fixation line of exchangeable
coalescents. Formulas for the infinitesimal rates of both processes are
provided. It is shown that the block counting process is Siegmund dual to the
fixation line. For exchangeable coalescents restricted to a sample of size n
and with dust we provide a convergence result for the block counting process as
n tends to infinity. The associated limiting process is related to the
frequencies of singletons of the coalescent. Via duality we obtain an analogous
convergence result for the fixation line of exchangeable coalescents with dust.
The Dirichlet coalescent and the Poisson-Dirichlet coalescent are studied in
detail.
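For reference, the general form of Siegmund duality between two Markov processes $(X_t)$ and $(Y_t)$ is the standard relation below (a textbook statement, not the paper's specific formulation for the block counting process and fixation line):

```latex
% Siegmund duality: for all states x, y and all times t,
P\bigl(X_t \ge y \mid X_0 = x\bigr) \;=\; P\bigl(Y_t \le x \mid Y_0 = y\bigr).
```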
|
The base station (BS) in a multi-channel cognitive radio (CR) network has to
broadcast to secondary (or unlicensed) receivers/users on more than one
broadcast channels via channel hopping (CH), because a single broadcast channel
can be reclaimed by the primary (or licensed) user, leading to broadcast
failures. Meanwhile, a secondary receiver needs to synchronize its clock with
the BS's clock to avoid broadcast failures caused by the possible clock drift
between the CH sequences of the secondary receiver and the BS. In this paper,
we propose a CH-based broadcast protocol called SASS, which enables a BS to
successfully broadcast to secondary receivers over multiple broadcast channels
via channel hopping. Specifically, the CH sequences are constructed on the basis of
a mathematical construct---the Self-Adaptive Skolem sequence. Moreover, each
secondary receiver under SASS is able to adaptively synchronize its clock with
that of the BS without any information exchanges, regardless of any amount of
clock drift.
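The paper's construction builds on Skolem sequences; while we do not reproduce the Self-Adaptive variant here, the defining property of a classical Skolem sequence of order n (each k in 1..n appears exactly twice, at positions exactly k apart) can be checked in a few lines. The example sequence is a standard order-4 instance:

```python
def is_skolem(seq):
    """Check the classical Skolem property: each k in 1..n appears exactly
    twice, and its two positions differ by exactly k."""
    n = len(seq) // 2
    if sorted(seq) != sorted(list(range(1, n + 1)) * 2):
        return False
    for k in range(1, n + 1):
        i = seq.index(k)
        j = seq.index(k, i + 1)
        if j - i != k:
            return False
    return True

print(is_skolem([4, 2, 3, 2, 4, 3, 1, 1]))  # a standard order-4 Skolem sequence
print(is_skolem([1, 2, 1, 2]))
```

In a CH protocol, the guaranteed spacing between the two occurrences of each symbol is what lets sender and receiver overlap on a channel despite clock drift.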
|
In the present paper we compute the non-ergodicity parameter from Molecular
Dynamics (MD) simulation data following the mode-coupling theory (MCT) of the
glass transition. MCT of dense liquids marks the dynamic glass transition
through a critical temperature $T_c$ that is reflected in the
temperature dependence of various physical quantities. Here, molecular
dynamics simulation data of a model adapted to Ni$_{0.2}$Zr$_{0.8}$ are
analyzed to deduce $T_c$ from the temperature dependence of the corresponding
quantities and to check the consistency of the statements. We analyze the
diffusion coefficients. The resulting values agree well with the critical
temperature of the non-vanishing non-ergodicity parameter determined from the
structure factors in the asymptotic solution of the mode-coupling theory with
memory kernels in ``one-loop'' approximation.
|
Human-robot collaboration is on the rise. Robots need to increasingly improve
the efficiency and smoothness with which they assist humans by properly
anticipating a human's intention. To do so, prediction models need to increase
their accuracy and responsiveness. This work builds on top of Interaction
Movement Primitives with phase estimation and re-formulates the framework to
use dynamic human-motion observations which constantly update anticipatory
motions. The original framework only considers a single fixed-duration static
human observation which is used to perform only one anticipatory motion.
Dynamic observations, with built-in phase estimation, yield a series of updated
robot motion distributions. Co-activation is performed between the existing and
the newest most probable robot motion distribution. This results in smooth
anticipatory robot motions that are highly accurate and with enhanced
responsiveness.
|
We investigate the electronic specific heat of overdoped
BaFe$_{2}$(As$_{1-x}$P$_{x}$)$_{2}$ single crystals in the superconducting
state using high-resolution nanocalorimetry. From the measurements, we extract
the doping dependence of the condensation energy, superconducting gap $\Delta$,
and related microscopic parameters. We find that the anomalous scaling of the
specific heat jump $\Delta C \propto T_{\mathrm{c}}^3$, found in many
iron-based superconductors, in this system originates from a
$T_\mathrm{c}$-dependent ratio $\Delta/k_\mathrm{B}T_\mathrm{c}$ in combination
with a doping-dependent density of states $N(\varepsilon_\mathrm{F})$. A clear
enhancement is seen in the effective mass $m^{*}$ as the composition approaches
the value that has been associated with a quantum critical point at optimum
doping. However, a simultaneous increase in the superconducting carrier
concentration $n_\mathrm{s}$ maintains the superfluid density, yielding an
apparent penetration depth $\lambda$ that decreases with increasing
$T_\mathrm{c}$ without sharp divergence at the quantum critical point. Uemura
scaling indicates that $T_\mathrm{c}$ is governed by the Fermi temperature
$T_\mathrm{F}$ for this multi-band system.
|
In this paper we investigate a variational discretization for the class of
mechanical systems in presence of symmetries described by the action of a Lie
group which reduces the phase space to a (non-trivial) principal bundle. By
introducing a discrete connection we are able to obtain the discrete
constrained higher-order Lagrange-Poincar\'e equations. These equations
describe the dynamics of a constrained Lagrangian system when the Lagrangian
function and the constraints depend on higher-order derivatives such as the
acceleration, jerk or jounces. The equations, under some mild regularity
conditions, determine a well-defined (local) flow which can be used to define a
numerical scheme to integrate the constrained higher-order Lagrange-Poincar\'e
equations.
Optimal control problems for underactuated mechanical systems can be viewed
as higher-order constrained variational problems. We study how a variational
discretization can be used in the construction of variational integrators for
optimal control of underactuated mechanical systems where control inputs act
solely on the base manifold of a principal bundle (the shape space). Examples
include the energy minimum control of an electron in a magnetic field and two
coupled rigid bodies attached at a common center of mass.
|
Machine-to-Machine (M2M) communications is one of the key enablers of the
Internet of Things (IoT). Billions of devices are expected to be deployed in
the near future for novel M2M applications demanding ubiquitous access and
global connectivity. In order to cope with the massive number of machines,
there is a need for new techniques to coordinate the access and allocate the
resources. Although the majority of the proposed solutions are focused on the
adaptation of the traditional cellular networks to the M2M traffic patterns,
novel approaches based on the direct communication among nearby devices may
represent an effective way to avoid access congestion and cell overload. In
this paper, we propose a new strategy inspired by the classical Trunked Radio
Systems (TRS), exploiting the Device-to-Device (D2D) connectivity between
cellular users and Machine-Type Devices (MTDs). The aggregation of the locally
generated packets is performed by a user device, which aggregates the
machine-type data, supplements it with its own data and transmits all of them
to the Base Station. We observe a fundamental trade-off between latency and the
transmit power needed to deliver the aggregate traffic, in the sense that lower
latency requires an increase in transmit power.
|
As observations of the Epoch of Reionization (EoR) in redshifted 21cm
emission begin, we assess the accuracy of the early catalog results from the
Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison
Widefield Array. The MWA EoR approach derives much of its sensitivity from
subtracting foregrounds to <1% precision while the PAPER approach relies on the
stability and symmetry of the primary beam. Both require an accurate flux
calibration to set the amplitude of the measured power spectrum. The two
instruments are very similar in resolution, sensitivity, sky coverage and
spectral range and have produced catalogs from nearly contemporaneous data. We
use a Bayesian MCMC fitting method to estimate that the two instruments are on
the same flux scale to within 20% and find that the images are mostly in good
agreement. We then investigate the source of the errors by comparing two
overlapping MWA facets where we find that the differences are primarily related
to an inaccurate model of the primary beam but also correlated errors in bright
sources due to CLEAN. We conclude with suggestions for mitigating and better
characterizing these effects.
|
We prove that within the space of ergodic Lebesgue-preserving C1 expanding
maps of the circle, unbounded distortion is C1-generic.
|
Inertial effects play an important role in classical mechanics but have been
largely overlooked in quantum mechanics. Nevertheless, the analogy between
inertial forces on mass particles and electromagnetic forces on charged
particles is not new. In this paper, we consider a rotating non-interacting
planar two-dimensional electron gas with a perpendicular uniform magnetic field
and investigate the effects of the rotation on the Hall conductivity.
|
The domains of mesh functions are strict subsets of the underlying space of
continuous independent variables. Spaces of partial maps between topological
spaces admit topologies which do not depend on any metric. Such topologies
geometrically generalize the usual numerical analysis definitions of
convergence.
|
Excerpts are presented from a graduate course on Classical Electrodynamics
held during the spring semester of 2000 at the Institute of Physics, Guanajuato
State University, Mexico.
|
Starting from correlation identities for the Blume-Capel spin 1 systems and
using correlation inequalities, we obtain rigorous upper bounds for the
critical temperature. The obtained results improve over effective-field-type
results.
|
A classical inventory problem is studied from the perspective of embedded
options, reducing inventory-management to the design of optimal contracts for
forward delivery of stock (commodity). Financial option techniques \`{a} la
Black-Scholes are invoked to value the additional `option to expand stock'. A
simplified approach which ignores distant time effects identifies an optimal
`time to deliver' and an optimal `amount to deliver' for a production process
run in continuous time modelled by a Cobb-Douglas revenue function. Commodity
prices, quoted in initial value terms, are assumed to evolve as a geometric
Brownian process with positive drift. Expected revenue maximization identifies
an optimal `strike price' for the expansion option to be exercised, and
uncovers the underlying martingale in a truncated (censored) commodity price.
The paper establishes comparative statics of the censor in terms of drift and
volatility, and uses asymptotic approximation for a tractable analysis of the
optimal timing.
|
We provide a general mathematical framework based on the theory of graphical
models to study admixture graphs. Admixture graphs are used to describe the
ancestral relationships between past and present populations, allowing for
population merges and migration events by means of gene flow. We give various
mathematical properties of admixture graphs with particular focus on properties
of the so-called $F$-statistics. The Wright-Fisher model is also studied, and a
general expression for the loss of heterozygosity is derived.
|
We give a simple proof of the isoperimetric inequality for quermassintegrals
of non-convex starshaped domains, using a result of Gerhardt \cite{G} and Urbas
\cite{U} on an expanding geometric curvature flow.
|
We present an analytical formalism, within the Effective-One-Body framework,
which predicts gravitational-wave signals from inspiralling and coalescing
black-hole binaries that agree, within numerical errors, with the results of
the currently most accurate numerical relativity simulations for several
different mass ratios. In the equal-mass case, the gravitational wave energy
flux predicted by our formalism agrees, within numerical errors, with the most
accurate numerical-relativity energy flux. We think that our formalism opens a
realistic possibility of constructing a sufficiently accurate, large bank of
gravitational wave templates, as needed both for detection and data analysis of
(non spinning) coalescing binary black holes.
|
Weak cosmic censorship conjecture (WCCC) is a basic principle that guarantees
the predictability of spacetime and should be valid in any classical theory.
One critical scientific question is whether the WCCC can serve as a constraint
on gravitational theories. To explore this question, we perform the
first-order Sorce-Wald's gedanken experiments to test the WCCC in the
higher-order gravitational theories and find that there exists a destruction
condition $S_\text{ext}'(r_h)<0$ for the extremal black holes. To show the
power of this condition, we evaluate the constraints given by WCCC in the
quadratic and cubic gravitational theories. Our investigation makes an
essential step toward applying WCCC to constrain the modified gravitational
theories, and opens a new avenue to judge which theory is reasonable.
|
We demonstrate that optical data from SDSS, X-ray data from ROSAT and
Chandra, and SZ data from Planck, can be modeled in a fully self-consistent
manner. After accounting for systematic errors and allowing for property
covariance, we find that scaling relations derived from optical and X-ray
selected cluster samples are consistent with one another. Moreover, these
cluster scaling relations satisfy several non-trivial spatial abundance
constraints and closure relations. Given the good agreement between optical and
X-ray samples, we combine the two and derive a joint set of LX-M and YSZ-M
relations. Our best fit YSZ-M relation is in good agreement with the observed
amplitude of the thermal SZ power spectrum for a WMAP7 cosmology, and is
consistent with the masses for the two CLASH galaxy clusters published thus
far. We predict the halo masses of the remaining z \leq 0.4 CLASH clusters, and
use our scaling relations to compare our results with a variety of X-ray and
weak lensing cluster masses from the literature.
|
Monochromatic gamma-ray lines are thought to be the smoking gun signal of the
annihilation or decay of dark matter since they do not suffer from deflection
or absorption on galactic scales. A recent claim on strong evidence for two
gamma-ray lines from the inner galaxy suggests that two-body final states might
be one photon plus a Z boson or one photon plus a Higgs boson. In this study,
we investigate which final state is more likely by analyzing the energy
resolution of the Fermi-LAT. We conclude that the former case, i.e. one
photon plus a Z boson, is more plausible than the latter, i.e. one photon
plus a Higgs boson, since in the latter case the mass of the dark matter
particle is in tension with a constraint coming from the energy resolution of
the Fermi-LAT.
|
This is a review of applications of the Color Glass Condensate to the
phenomenology of relativistic heavy ion collisions. The initial stages of the
collision can be understood in terms of the nonperturbatively strong nonlinear
glasma color fields. We discuss how the CGC framework can and has been used to
compute properties of the initial conditions of AA collisions. In particular
this has led to recent progress in understanding multiparticle correlations,
which can provide a directly observable signal of the properties of the initial
stage of the collision process.
|
Large language models have achieved remarkable success on general NLP tasks,
but they may fall short for domain-specific problems. Recently, various
Retrieval-Augmented Large Language Models (RALLMs) are proposed to address this
shortcoming. However, existing evaluation tools only provide a few baselines
and evaluate them on various domains without mining the depth of domain
knowledge. In this paper, we address the challenges of evaluating RALLMs by
introducing the R-Eval toolkit, a Python toolkit designed to streamline the
evaluation of different RAG workflows in conjunction with LLMs. Our toolkit,
which supports popular built-in RAG workflows and allows for the incorporation
of customized testing data on the specific domain, is designed to be
user-friendly, modular, and extensible. We conduct an evaluation of 21 RALLMs
across three task levels and two representative domains, revealing significant
variations in the effectiveness of RALLMs across different tasks and domains.
Our analysis emphasizes the importance of considering both task and domain
requirements when choosing a RAG workflow and LLM combination. We are committed
to continuously maintaining our platform at https://github.com/THU-KEG/R-Eval
to facilitate both the industry and the researchers.
|
Nonconvex and nonsmooth optimization problems are frequently encountered in
much of statistics, business, science and engineering, but they are not yet
widely recognized as a technology in the sense of scalability. A reason for
this relatively low degree of popularity is the lack of a well developed system
of theory and algorithms to support the applications, as is the case for its
convex counterpart. This paper aims to take one step in the direction of
disciplined nonconvex and nonsmooth optimization. In particular, we consider in
this paper some constrained nonconvex optimization models in block decision
variables, with or without coupled affine constraints. In the case without
coupled constraints, we show a sublinear rate of convergence to an
$\epsilon$-stationary solution in the form of variational inequality for a
generalized conditional gradient method, where the convergence rate is shown to
be dependent on the H\"olderian continuity of the gradient of the smooth part
of the objective. For the model with coupled affine constraints, we introduce
corresponding $\epsilon$-stationarity conditions, and apply two proximal-type
variants of the ADMM to solve such a model, assuming the proximal ADMM updates
can be implemented for all the block variables except for the last block, for
which either a gradient step or a majorization-minimization step is
implemented. We show an iteration complexity bound of $O(1/\epsilon^2)$ to
reach an $\epsilon$-stationary solution for both algorithms. Moreover, we show
that the same iteration complexity of a proximal BCD method follows
immediately. Numerical results are provided to illustrate the efficacy of the
proposed algorithms for tensor robust PCA.
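In the smooth, uncoupled case the generalized conditional gradient method reduces to Frank-Wolfe-type updates. The following is a minimal sketch (not the paper's algorithm): a plain conditional gradient method on the probability simplex applied to an indefinite quadratic, using the Frank-Wolfe gap as a surrogate stationarity measure. The step-size rule and the test problem are illustrative assumptions.

```python
import numpy as np

def frank_wolfe(grad, x0, steps=200):
    """Plain conditional gradient on the probability simplex.

    The Frank-Wolfe gap max_s <grad(x), x - s> (s over the simplex) is a
    standard stationarity surrogate, analogous in spirit to the
    epsilon-stationarity conditions discussed above.
    """
    x = x0.copy()
    gaps = []
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # linear minimization oracle: a vertex
        gaps.append(float(g @ (x - s)))  # FW gap, nonnegative on the simplex
        gamma = 2.0 / (k + 2)            # open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x, gaps

# Illustrative nonconvex objective f(x) = 0.5 x^T Q x with indefinite Q.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = 0.5 * (A + A.T)                      # symmetric, generally indefinite
x, gaps = frank_wolfe(lambda z: Q @ z, np.full(5, 0.2))
print(round(min(gaps), 6))               # best FW gap seen over the run
```

For nonconvex objectives it is the best gap seen so far, rather than the last iterate's gap, that is guaranteed to decay sublinearly.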
|
We apply a recently developed 2+1+1 decomposition of spacetime, based on a
nonorthogonal double foliation for the study of spherically symmetric, static
black hole solutions of Horndeski scalar-tensor theory. Our discussion proceeds
in an effective field theory (EFT) of modified gravity approach, with the
action depending on metric and embedding scalars adapted to the nonorthogonal
2+1+1 decomposition. We prove that the most generic class of Horndeski
Lagrangians compatible with observations can be expressed in this EFT form. By
studying the first order perturbation of the EFT action we derive three
equations of motion, which reduce to those derived earlier in an orthogonal
2+1+1 decomposition, and a fourth equation for the metric parameter N related
to the nonorthogonality of the foliation. For the Horndeski class of theories
with vanishing $G_3$ and $G_5$, but generic functions $G_2(\phi,X)$ (k-essence)
and $G_4(\phi)$ (nonminimal coupling to the metric) we prove the unicity
theorem that no action beyond Einstein--Hilbert allows for the Schwarzschild
solution. Next we integrate the EFT field equations for the case with only one
independent metric function obtaining new solutions characterized by a
parameter interpreted as either mass or tidal charge, the cosmological constant
and a third parameter. These solutions represent naked singularities, black
holes with scalar hair or have the double horizon structure of the
Schwarzschild--de Sitter spacetime. Solutions with homogeneous Kantowski--Sachs
type regions also emerge. Finally, one of the solutions obtained for the
function $G_4$ linear in the curvature coordinate, in a certain parameter range
exhibits an intriguing logarithmic singularity lying outside the horizon. The
newly derived hairy black hole solutions evade previously known unicity
theorems by being asymptotically nonflat, even in the absence of the
cosmological constant.
|
We investigate hard radiation emission in small-angle transplanckian
scattering. We show how to reduce this problem to a quantum field theory
computation in a classical background (gravitational shock wave). In momentum
space, the formalism is similar to the flat-space light cone perturbation
theory, with shock wave crossing vertices added. In the impact parameter
representation, the radiating particle splits into a multi-particle virtual
state, whose wavefunction is then multiplied by individual eikonal factors. As
a phenomenological application, we study QCD radiation in transplanckian
collisions of TeV-scale gravity models. We derive the distribution of initial
state radiation gluons, and find a suppression at large transverse momenta with
respect to the standard QCD result. This is due to rescattering events, in
which the quark and the emitted gluon scatter coherently. Interestingly, the
suppression factor depends on the number of extra dimensions and provides a new
experimental handle to measure this number. We evaluate the leading-log
corrections to partonic cross-sections due to the initial state radiation, and
prove that they can be absorbed into the hadronic PDF. The factorization scale
should then be chosen in agreement with an earlier proposal of Emparan, Masip,
and Rattazzi. In the future, our methods can be applied to the gravitational
radiation in transplanckian scattering, where they can go beyond the existing
approaches limited to the soft radiation case.
|
The Laplace operator acting on antisymmetric tensor fields in a
$D$--dimensional Euclidean ball is studied. Gauge-invariant local boundary
conditions (absolute and relative ones, in the language of Gilkey) are
considered. The eigenfunctions of the operator are found explicitly for all
values of $D$. Using a number of basic techniques in succession, such as Mellin
transforms, deformation and shifting of the complex integration contour, and
pole compensation, the zeta function of the operator is obtained. From its
expression, in particular, $\zeta (0)$ and $\zeta'(0)$ are evaluated exactly. A
table is given in the paper for $D=3, 4, ...,8$. The functional determinants
and Casimir energies are obtained for $D=3, 4, ...,6$.
|
Measurements of single-mode phase observables are studied in the spirit of
the quantum theory of measurement. We determine the minimal measurement models
of phase observables and consider methods of measuring such observables by
using a double homodyne detector. We show that, in principle, the canonical
phase distribution of the signal state can be measured via double homodyne
detection by first processing the state using a two-mode unitary channel.
|
Reinforcement learning has enabled agents to solve challenging tasks in
unknown environments. However, manually crafting reward functions can be
time-consuming, expensive, and prone to human error. Competing objectives have
been proposed for agents to learn without external supervision, but it has been
unclear how well they reflect task rewards or human behavior. To accelerate the
development of intrinsic objectives, we retrospectively compute potential
objectives on pre-collected datasets of agent behavior, rather than optimizing
them online, and compare them by analyzing their correlations. We study input
entropy, information gain, and empowerment across seven agents, three Atari
games, and the 3D game Minecraft. We find that all three intrinsic objectives
correlate more strongly with a human behavior similarity metric than with task
reward. Moreover, input entropy and information gain correlate more strongly
with human similarity than task reward does, suggesting the use of intrinsic
objectives for designing agents that behave similarly to human players.
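The retrospective comparison described above can be illustrated with a toy computation on synthetic data (not the paper's agents, environments, or metrics): an input-entropy objective is estimated per episode from discretized observations and then correlated with episode returns.

```python
import numpy as np

def input_entropy(obs, bins=16, lo=-10.0, hi=10.0):
    """Shannon entropy (nats) of one episode's observations,
    discretized on a fixed global binning."""
    hist, _ = np.histogram(obs, bins=bins, range=(lo, hi))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
# Synthetic stand-in for pre-collected agent episodes: observation traces
# whose spread (and hence entropy) grows along with a toy task return.
episodes = [rng.normal(scale=1.0 + 0.1 * i, size=500) for i in range(30)]
returns = np.array([0.5 * i + rng.normal() for i in range(30)])

entropies = np.array([input_entropy(e) for e in episodes])
r = np.corrcoef(entropies, returns)[0, 1]  # one possible comparison metric
print(round(float(r), 3))
```

The same pattern, swapping in information gain or empowerment estimators and a behavior-similarity metric, yields the kind of correlation table the study compares.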
|
We calculate the radio-frequency spectrum of balanced and imbalanced
ultracold Fermi gases in the normal phase at unitarity.
For the homogeneous case the spectrum of both the majority and minority
components always has a single peak even in the pseudogap regime.
We furthermore show how the double-peak structures observed in recent
experiments arise due to the inhomogeneity of the trapped gas.
The main experimental features observed above the critical temperature in the
recent experiment of Schunck et al. [Science 316, 867 (2007)] are recovered
with no fitting parameters.
|
An interphase boundary may be immobilized due to nonlinear diffractional
interactions in a feedback optical device. This effect is reminiscent of the Turing
mechanism, with the optical field playing the role of a diffusive inhibitor.
Two examples of pattern formation are considered in detail: arrays of kinks in
1d, and solitary spots in 2d. In both cases, a large number of equilibrium
solutions is possible due to the oscillatory character of diffractional
interaction.
|
We consider framed chord diagrams, i.e. chord diagrams with chords of two
types. It is well known that chord diagrams modulo 4T-relations admit a Hopf
algebra structure, where the multiplication is given by any connected sum with
respect to the orientation. But in the case of framed chord diagrams a natural
way to define a multiplication is not known yet. In the present paper, we first
define a new module $\mathcal{M}_2$ which is generated by chord diagrams on two
circles and factored by $4$T-relations. Then we construct a "covering" map from
the module of framed chord diagrams into $\mathcal{M}_2$ and a weight system on
$\mathcal{M}_2$. Using the map and weight system we show that a connected sum
for framed chord diagrams is not a well-defined operation. At the end of the
paper we touch on linear diagrams, in which the circle is replaced by a directed line.
|
The Apache Point Observatory Galactic Evolution Experiment (APOGEE) has
observed $\sim$600 transiting exoplanets and exoplanet candidates from
\textit{Kepler} (Kepler Objects of Interest, KOIs), most with $\geq$18 epochs.
The combined multi-epoch spectra are of high signal-to-noise (typically
$\geq$100) and yield precise stellar parameters and chemical abundances. We
first confirm the ability of the APOGEE abundance pipeline, ASPCAP, to derive
reliable [Fe/H] and effective temperatures for FGK dwarf stars -- the primary
\textit{Kepler} host stellar type -- by comparing the ASPCAP-derived stellar
parameters to those from independent high-resolution spectroscopic
characterizations for 221 dwarf stars in the literature. With a sample of 282
close-in ($P<100$ days) KOIs observed in the APOGEE KOI goal program, we find a
correlation between orbital period and host star [Fe/H] characterized by a
critical period, $P_\mathrm{crit}$= $8.3^{+0.1}_{-4.1}$ days, below which small
exoplanets orbit statistically more metal-enriched host stars. This effect may
trace a metallicity dependence of the protoplanetary disk inner-radius at the
time of planet formation or may be a result of rocky planet ingestion driven by
inward planetary migration. We also consider that this may trace a metallicity
dependence of the dust sublimation radius, but find no statistically
significant correlation with host $T_\mathrm{eff}$ and orbital period to
support such a claim.
|
We present multi-wavelength observations and modeling of the exceptionally
bright long $\gamma$-ray burst GRB 160625B. The optical and X-ray data are
well-fit by synchrotron emission from a collimated blastwave with an opening
angle of $\theta_j\approx 3.6^\circ$ and kinetic energy of $E_K\approx
2\times10^{51}$ erg, propagating into a low density ($n\approx 5\times10^{-5}$
cm$^{-3}$) medium with a uniform profile. The forward shock is sub-dominant in
the radio band; instead, the radio emission is dominated by two additional
components. The first component is consistent with emission from a reverse
shock, indicating an initial Lorentz factor of $\Gamma_0\gtrsim 100$ and an
ejecta magnetization of $R_B\approx 1-100$. The second component exhibits
peculiar spectral and temporal evolution and is most likely the result of
scattering of the radio emission by the turbulent Milky Way interstellar medium
(ISM). Such scattering is expected in any sufficiently compact extragalactic
source and has been seen in GRBs before, but the large amplitude and long
duration of the variability seen here are qualitatively more similar to extreme
scattering events previously observed in quasars, rather than normal
interstellar scintillation effects. High-cadence, broadband radio observations
of future GRBs are needed to fully characterize such effects, which can
sensitively probe the properties of the ISM and must be taken into account
before variability intrinsic to the GRB can be interpreted correctly.
|
A new method for continuing the usual Dirichlet series that defines the
Riemann zeta function ${\zeta}(s)$ is presented. Numerical experiments
demonstrating the computational efficacy of the resulting continuation are
discussed.
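The note's new method is not reproduced here; as a baseline for comparison, one standard way to continue the Dirichlet series to Re(s) > 0 is through the alternating (Dirichlet eta) series, sketched below for real arguments.

```python
import math

def zeta(s, terms=100000):
    """Continue the Dirichlet series to real s > 0, s != 1, via the eta series:
    zeta(s) = (1 - 2**(1 - s))**(-1) * sum_{n>=1} (-1)**(n+1) / n**s.
    Plain alternating summation: slow, but adequate for a sanity check."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

print(abs(zeta(2) - math.pi ** 2 / 6) < 1e-8)   # True: matches zeta(2) = pi^2/6
```

Any genuinely efficient continuation (such as the one the note proposes) should reproduce these values with far fewer terms.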
|
Since its introduction in 1952, Turing's (pre-)pattern theory ("the chemical
basis of morphogenesis") has been widely applied to a number of areas in
developmental biology. The related pattern formation models normally comprise a
system of reaction-diffusion equations for interacting chemical species
("morphogens"), whose heterogeneous distribution in some spatial domain acts as
a template for cells to form some kind of pattern or structure through, for
example, differentiation or proliferation induced by the chemical pre-pattern.
Here we develop a hybrid discrete-continuum modelling framework for the
formation of cellular patterns via the Turing mechanism. In this framework, a
stochastic individual-based model of cell movement and proliferation is
combined with a reaction-diffusion system for the concentrations of some
morphogens. As an illustrative example, we focus on a model in which the
dynamics of the morphogens are governed by an activator-inhibitor system that
gives rise to Turing pre-patterns. The cells then interact with morphogens in
their local area through either of two forms of chemically-dependent cell
action: chemotaxis and chemically-controlled proliferation. We begin by
considering such a hybrid model posed on static spatial domains, and then turn
to the case of growing domains. In both cases, we formally derive the
corresponding deterministic continuum limit and show that there is an
excellent quantitative match between the spatial patterns produced by the
stochastic individual-based model and its deterministic continuum counterpart,
when sufficiently large numbers of cells are considered. This paper is intended
to present a proof of concept for the ideas underlying the modelling framework,
with the aim to then apply the related methods to the study of specific
patterning and morphogenetic processes in the future.
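The continuum half of such a framework can be conveyed by a classical activator-inhibitor simulation. The sketch below integrates a 1D Schnakenberg system with textbook parameters; this is an illustrative choice, not the paper's model, and the stochastic individual-based layer is omitted.

```python
import numpy as np

# Schnakenberg activator-inhibitor system on a periodic 1D domain:
#   u_t = Du*u_xx + a - u + u^2*v,   v_t = Dv*v_xx + b - u^2*v.
# With Dv >> Du the homogeneous steady state is Turing-unstable.
a, b, Du, Dv = 0.1, 0.9, 1.0, 40.0
N, L = 200, 100.0
dx = L / N
dt, steps = 0.002, 40000          # explicit Euler; dt < dx**2 / (2*Dv) for stability

rng = np.random.default_rng(0)
u_star, v_star = a + b, b / (a + b) ** 2        # homogeneous steady state
u = u_star + 0.01 * rng.standard_normal(N)      # small random perturbation
v = v_star + 0.01 * rng.standard_normal(N)

def lap(w):
    """Periodic 1D Laplacian."""
    return (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx ** 2

for _ in range(steps):
    f = a - u + u * u * v
    g = b - u * u * v
    u = u + dt * (Du * lap(u) + f)
    v = v + dt * (Dv * lap(v) + g)

print(round(float(np.std(u)), 3))   # O(1) spatial modulation: a Turing pattern
```

In the hybrid framework, cells would then sample this morphogen field locally, moving chemotactically or proliferating according to the chemical pre-pattern.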
|
We study high-harmonic generation (HHG) in the one-dimensional Hubbard model
in order to understand its relation to elementary excitations as well as the
similarities and differences to semiconductors. The simulations are based on
the infinite time-evolving block decimation (iTEBD) method and exact
diagonalization. We clarify that the HHG originates from the doublon-holon
recombination, and the scaling of the cutoff frequency is consistent with a
linear dependence on the external field. We demonstrate that the subcycle
features of the HHG can be reasonably described by a phenomenological three
step model for a doublon-holon pair. We argue that the HHG in the
one-dimensional Mott insulator is closely related to the dispersion of the
doublon-holon pair with respect to its relative momentum, which is not
necessarily captured by the single-particle spectrum due to the many-body
nature of the elementary excitations. For the comparison to semiconductors, we
introduce effective models obtained from the Schrieffer-Wolff transformation,
i.e. a strong-coupling expansion, which allows us to disentangle the different
processes involved in the Hubbard model: intraband dynamics of doublons and
holons, interband dipole excitations, and spin exchanges. These demonstrate the
formal similarity of the Mott system to the semiconductor models in the dipole
gauge, and reveal that the spin dynamics, which does not directly affect the
charge dynamics, can reduce the HHG intensity. We also show that the long-range
component of the intraband dipole moment has a substantial effect on the HHG
intensity, while the correlated hopping terms for the doublons and holons
essentially determine the shape of the HHG spectrum. A new numerical method to
evaluate single-particle spectra within the iTEBD method is also introduced.
|
Given the fact that Earth is so far the only place in the Milky Way galaxy
known to harbor life, the question arises of whether the solar system is in any
way special. To address this question, I compare the solar system to the many
recently discovered exoplanetary systems. I identify two main features that
appear to distinguish the solar system from the majority of other systems: (i)
the lack of super-Earths, (ii) the absence of close-in planets. I examine
models for the formation of super-Earths, as well as models for the evolution
of asteroid belts, the rate of asteroid impacts on Earth, and of snow lines,
all of which may have some implications for the emergence and evolution of life
on a terrestrial planet.
Finally, I revisit an argument by Brandon Carter on the rarity of intelligent
civilizations, and I review a few of the criticisms of this argument.
|
A recurrence formula for absolute central moments of Poisson distribution is
suggested.
|
We give a detailed account of Agol's theorem and his proof concerning
two-meridional-generator subgroups of hyperbolic 2-bridge link groups, which is
included in the slide of his talk at the Bolyai conference 2001. We also give a
generalization of the theorem to two-parabolic-generator subgroups of
hyperbolic 3-manifold groups, which gives a refinement of a result due to
Boileau-Weidmann.
|
The purpose of this note is to describe some algebraic conditions on a Banach
algebra which force it to be finite dimensional. One of the main results is
Theorem~2, which states that for a locally compact group $G$, $G$ is compact if
there exists a measure $\mu$ in $\hbox{Soc}(L^{1}(G))$ such that $\mu(G) \neq
0$. We also prove that $G$ is finite if $\hbox{Soc}(M(G))$ is closed and every
nonzero left ideal in $M(G)$ contains a minimal left ideal.
|
In this paper, we deal with the convergence of an iterative scheme for the
2-D stochastic Navier-Stokes Equations on the torus suggested by the
Lie-Trotter product formulas for stochastic differential equations of parabolic
type. The stochastic system is split into two problems which are simpler for
numerical computations. An estimate of the approximation error is given under
periodic boundary conditions. In particular, we prove that the strong
speed of the convergence in probability is almost $1/2$. This is shown by means
of an $L^2(\Omega,P)$ convergence localized on a set of arbitrary large
probability. The assumptions on the diffusion coefficient depend on whether
some multiple of the Laplace operator accompanies the multiplicative
stochastic term. Note that if one of the splitting steps only
contains the stochastic integral, then the diffusion coefficient may not
contain any gradient of the solution.
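The flavor of such a Lie-Trotter splitting can be conveyed on a much simpler toy problem (a scalar Ornstein-Uhlenbeck SDE rather than the 2-D stochastic Navier-Stokes system): the drift and stochastic parts are advanced in alternation, and the strong error against a fine-grid reference on the same Brownian paths shrinks as the step is refined. The parameters below are illustrative assumptions.

```python
import numpy as np

# Lie-Trotter splitting for the scalar OU SDE  dX = -a*X dt + sigma*dW:
#   step 1: exact drift flow      X -> exp(-a*h) * X
#   step 2: stochastic step only  X -> X + sigma*dW
a, sigma, T, x0 = 1.0, 0.5, 1.0, 1.0
n_fine, n_paths = 2 ** 12, 200
rng = np.random.default_rng(0)
dW_fine = rng.standard_normal((n_paths, n_fine)) * np.sqrt(T / n_fine)

def splitting(dW, h):
    x = np.full(dW.shape[0], x0)
    for k in range(dW.shape[1]):
        x = np.exp(-a * h) * x       # drift sub-flow, solved exactly
        x = x + sigma * dW[:, k]     # stochastic sub-step
    return x

x_ref = splitting(dW_fine, T / n_fine)      # fine-grid reference, same paths

def strong_error(n_coarse):
    # aggregate fine Brownian increments onto the coarse grid (same paths)
    dW = dW_fine.reshape(n_paths, n_coarse, -1).sum(axis=2)
    return float(np.sqrt(np.mean((splitting(dW, T / n_coarse) - x_ref) ** 2)))

errs = [strong_error(n) for n in (8, 16, 32, 64)]
print([round(e, 4) for e in errs])          # errors shrink as the step refines
```

Measuring the root-mean-square error over shared paths is exactly the pathwise, localized $L^2(\Omega, P)$ notion of strong convergence used in the abstract, here estimated empirically.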
|
The classification of the representations of the generalized deformed
oscillator algebra is given together with several comments about the
possibility of introducing a coproduct structure in some types of deformed
oscillator algebras.
|
This report summarises the activity of the E3 working group "Experimental
Approaches at Linear Colliders". The group was charged with critically
examining the physics case for a linear collider of energy of order 1 TeV as
well as the cases for higher energy machines, assessing the performance
requirements and exploring the viability of several special options. In
addition it was asked to identify the critical areas where R&D is required.
|