We present a novel multistream network that learns robust eye representations
for gaze estimation. We first create a synthetic dataset containing eye region
masks detailing the visible eyeball and iris using a simulator. We then perform
eye region segmentation with a U-Net type model which we later use to generate
eye region masks for real-world eye images. Next, we pretrain an eye image
encoder in the real domain with self-supervised contrastive learning to learn
generalized eye representations. Finally, this pretrained eye encoder, along
with two additional encoders for the visible eyeball region and the iris, is
used in parallel in our multistream framework to extract salient features for
gaze estimation from real-world images. We demonstrate the performance of our method
on the EYEDIAP dataset in two different evaluation settings and achieve
state-of-the-art results, outperforming all the existing benchmarks on this
dataset. We also conduct additional experiments to validate the robustness of
our self-supervised network with respect to different amounts of labeled data
used for training.
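As an illustration of the multistream idea, the following is a minimal, hypothetical PyTorch sketch of three parallel encoders (full eye image, visible-eyeball mask, iris mask) whose features are concatenated and regressed to a gaze direction; the ResNet-18 backbones, feature sizes, and two-angle output are assumptions for illustration, not the architecture of the paper.

```python
# Hedged sketch (assumptions: ResNet-18 backbones, 2-angle gaze output);
# not the exact architecture described in the abstract.
import torch
import torch.nn as nn
import torchvision.models as models

def make_encoder(in_channels: int) -> nn.Module:
    """ResNet-18 trunk adapted to the given number of input channels."""
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Identity()  # expose the 512-d pooled feature
    return net

class MultiStreamGaze(nn.Module):
    def __init__(self):
        super().__init__()
        self.eye_enc = make_encoder(3)    # full eye image (possibly contrastively pretrained)
        self.ball_enc = make_encoder(1)   # visible-eyeball mask
        self.iris_enc = make_encoder(1)   # iris mask
        self.head = nn.Sequential(nn.Linear(3 * 512, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, eye, ball_mask, iris_mask):
        feats = torch.cat([self.eye_enc(eye), self.ball_enc(ball_mask),
                           self.iris_enc(iris_mask)], dim=1)
        return self.head(feats)  # (yaw, pitch)

# Example: a single dummy batch.
model = MultiStreamGaze()
gaze = model(torch.randn(4, 3, 64, 96), torch.randn(4, 1, 64, 96), torch.randn(4, 1, 64, 96))
```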
|
In this paper we introduce a method that significantly reduces the character
error rates for OCR text obtained from OCRopus models trained on early printed
books. The method uses a combination of cross-fold training and
confidence-based voting. After partitioning the available ground truth into
different subsets, several training processes are performed, each resulting in
a specific OCR model. The OCR texts generated by these models are then combined
by voting to determine the final output, taking the recognized characters,
their alternatives, and the confidence values assigned to each character into
consideration. Experiments on seven early printed books show that the proposed
method outperforms the standard approach considerably, reducing the number of
errors by 50% and more.
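A minimal sketch of confidence-based character voting, assuming the per-fold OCR outputs for a line have already been aligned position by position (the alignment step itself is omitted); the data layout is a hypothetical illustration, not the OCRopus interface.

```python
# Hedged sketch: vote per character position over K aligned OCR hypotheses.
# Each hypothesis provides, per position, a list of (character, confidence) alternatives.
from collections import defaultdict

def vote_line(aligned_hypotheses):
    """aligned_hypotheses: list over K models; each entry is a list over positions;
    each position is a list of (char, confidence) alternatives, best first."""
    n_pos = len(aligned_hypotheses[0])
    result = []
    for pos in range(n_pos):
        scores = defaultdict(float)
        for hyp in aligned_hypotheses:
            for char, conf in hyp[pos]:
                scores[char] += conf          # accumulate confidence mass per candidate
        result.append(max(scores, key=scores.get))  # highest summed confidence wins
    return "".join(c for c in result if c != "")    # drop empty (epsilon) votes

# Example with three models and two positions.
hyps = [
    [[("e", 0.9), ("c", 0.1)], [("i", 0.6), ("l", 0.4)]],
    [[("e", 0.8), ("o", 0.2)], [("l", 0.7), ("i", 0.3)]],
    [[("c", 0.5), ("e", 0.5)], [("l", 0.9), ("", 0.1)]],
]
print(vote_line(hyps))  # -> "el"
```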
|
Here we present the results obtained from the analysis of 75 ksec of
XMM-Newton observations of a sample of EROs selected from one MUNICS field
(K'<19.5 mag). We find 6 EROs with an X-ray counterpart down to a 2--10 keV flux
limit of ~10^{-15} cgs. For all of them the X-ray--to--optical flux ratios and
the 2--10 keV luminosities suggest the presence of AGN. In particular, a
complete X-ray spectral analysis shows that high luminosity, obscured AGNs
(i.e. QSO2 candidates) are present in 3 of them.
|
This paper investigates the decay rate of the probability that the row sum of
a triangular array of truncated heavy-tailed random variables is larger than an
integer k times the truncating threshold, as both the number of summands and
the threshold go to infinity. The method of attack for this problem is
significantly different from the one used when k is not an integer, and
requires much sharper estimates.
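For concreteness, one way to formalize the quantity in question (the paper's exact truncation scheme may differ, so this is an illustrative reading rather than the paper's definition) is, for i.i.d. heavy-tailed variables $X_1,X_2,\dots$ and truncation thresholds $b_n\to\infty$,
$$
p_n(k)=\mathbb{P}\Big(\sum_{i=1}^{n} X_i\,\mathbf{1}_{\{X_i\le b_n\}} > k\,b_n\Big),\qquad n\to\infty,
$$
with the paper concerned with the rate at which $p_n(k)$ decays when $k$ is an integer.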
|
We present a generalised architecture for reactive mobile manipulation while
a robot's base is in motion toward the next objective in a high-level task. By
performing tasks on-the-move, overall cycle time is reduced compared to methods
where the base pauses during manipulation. Reactive control of the manipulator
enables grasping objects with unpredictable motion while improving robustness
against perception errors, environmental disturbances, and inaccurate robot
control compared to open-loop, trajectory-based planning approaches. We present
an example implementation of the architecture and investigate the performance
on a series of pick and place tasks with both static and dynamic objects and
compare the performance to baseline methods. Our method demonstrated a
real-world success rate of over 99%, failing in only a single trial from 120
attempts with a physical robot system. The architecture is further demonstrated
on other mobile manipulator platforms in simulation. Our approach reduces task
time by up to 48%, while also improving reliability, gracefulness, and
predictability compared to existing architectures for mobile manipulation. See
https://benburgesslimerick.github.io/ManipulationOnTheMove for supplementary
materials.
|
In this note we prove a new \epsilon-regularity theorem for the Ricci flow.
Let (M^n,g(t)) with t\in [-T,0] be a Ricci flow and H_{x} the conjugate heat
kernel centered at a point (x,0) in the final time slice. Substituting H_{x}
into Perelman's W-functional produces a monotone function W_{x}(s) of s \in
[-T,0], the pointed entropy, with W_{x}(s) <= 0, and W_{x}(s) = 0 iff (M,g(t))
is isometric to the trivial flow on R^n. Our main theorem asserts the
following: There exists an \epsilon>0, depending only on T and on lower scalar
curvature and \mu-entropy bounds for (M,g(-T)), such that W_{x}(s) >
-\epsilon implies |Rm| < r^{-2} on P_{\epsilon r}(x,0), where r^2 = |s| and
P_r(x,t) \equiv B_r(x,t)\times (t-r^2,t] is the parabolic ball.
The main technical challenge of the theorem is to prove an effective
Lipschitz bound in x for the s-average of W_x(s). To accomplish this, we
require a new log-Sobolev inequality. It is well known by Perelman that the
metric measure spaces (M,g(t),dv_{g(t)}) satisfy a log-Sobolev; however we
prove that this is also true for the conjugate heat kernel weighted spaces
(M,g(t),H_{x}(-,t)\,dv_{g(t)}). Our log-Sobolev constants for these weighted
spaces are in fact universal and sharp.
The weighted log-Sobolev has other consequences as well, including an average
Gaussian upper bound on the conjugate heat kernel that only depends on a
two-sided scalar curvature bound.
|
Let $(W,H,\mu)$ be the classical Wiener space, where $H$ is the Cameron-Martin
space consisting of the primitives of the elements of
$L^2([0,1],dt)\otimes \mathbb{R}^d$. We denote by $L^2_a(\mu,H)$ the equivalence
classes w.r.t. $dt\times d\mu$ whose Lebesgue densities $s\to\dot{u}(s,w)$ are
almost surely adapted to the canonical Brownian filtration. If $f$ is a Wiener
functional such that $\frac{1}{E_\mu[e^{-f}]}e^{-f}d\mu$ is of finite relative
entropy w.r.t. $\mu$, we prove that
$$
J_\star = \inf\left\{E_\mu\left[f\circ U+\tfrac{1}{2}|u|_H^2\right]:\, u\in L^2_a(\mu,H),\ U=I_W+u\right\}
\;\geq\; -\log E_\mu[e^{-f}] = \inf\left\{\int_W f\,d\gamma+H(\gamma|\mu):\,\gamma\in P(W)\right\},
$$
where $P(W)$ is the set of probability measures on $(W,\mathcal{B}(W))$ and
$H(\gamma|\mu)$ is the relative entropy of $\gamma$ w.r.t. $\mu$.
We call $f$ a tamed functional if the inequality above can be replaced with
equality, and we characterize the class of tamed functionals, which is much
larger than the set of essentially bounded Wiener functionals. We show that for
a tamed functional the minimization problem on the left-hand side has a
solution $u_0$ if and only if $U_0=I_W+u_0$ is almost surely invertible and
$$
\frac{dU_0\mu}{d\mu}=\frac{e^{-f}}{E_\mu[e^{-f}]},
$$
in which case $u_0$ is unique. To do this we prove a theorem which says that
the relative entropy of $U_0\mu$ is equal to the energy of $u_0$ if and only if
$U_0$ has a $\mu$-a.s. left inverse. We use these results to prove the strong
existence of solutions of stochastic differential equations with singular
(functional) drifts, and also to prove the non-existence of strong solutions of
some stochastic differential equations.
Keywords: Invertibility, entropy, Girsanov theorem, variational calculus,
Malliavin calculus, large deviations
|
Ultrasound super-localization microscopy techniques presented in the last few
years enable non-invasive imaging of vascular structures at the capillary level
by tracking the flow of ultrasound contrast agents (gas microbubbles). However,
these techniques are currently limited by low temporal resolution and long
acquisition times. Super-resolution optical fluctuation imaging (SOFI) is a
fluorescence microscopy technique enabling sub-diffraction limit imaging with
high temporal resolution by calculating high order statistics of the
fluctuating optical signal. The aim of this work is to achieve fast acoustic
imaging with enhanced resolution by applying the tools used in SOFI to
contrast-enhanced ultrasound (CEUS) plane-wave scans. The proposed method was
tested using numerical simulations and evaluated using two in-vivo rabbit
models: scans of healthy kidneys and VX-2 tumor xenografts. Improved spatial
resolution was observed, with a reduction of up to 50% in the full width at
half maximum of the point spread function. In addition, a substantial reduction
in the background level was achieved compared to standard mean-amplitude
persistence images, revealing small vascular structures within tumors. The scan
duration of the proposed method is less than a second, while current
super-localization techniques require acquisition durations of several minutes.
As a result, the
proposed technique may be used to obtain scans with enhanced spatial resolution
and high temporal resolution, facilitating flow-dynamics monitoring. Our method
can also be applied during a breath-hold, reducing the sensitivity to motion
artifacts.
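To illustrate the SOFI-style processing the abstract refers to, here is a minimal, hypothetical NumPy sketch that computes a second-order auto-cumulant (temporal variance) image from a stack of frames; the actual method presumably involves higher orders, cross-cumulants, and ultrasound-specific preprocessing not shown here.

```python
# Hedged sketch: 2nd-order SOFI-like image = per-pixel temporal variance of the
# fluctuating (mean-removed) signal over a stack of frames.
import numpy as np

def sofi2(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (n_frames, H, W) of beamformed CEUS intensities."""
    fluct = frames - frames.mean(axis=0, keepdims=True)  # remove static background
    return (fluct ** 2).mean(axis=0)                     # 2nd-order auto-cumulant per pixel

# Example on a synthetic stack: a 'vessel' pixel with fluctuating bubble signal stands out
# against a static background of the same mean amplitude.
rng = np.random.default_rng(0)
stack = np.full((200, 32, 32), 1.0)
stack[:, 16, 16] += rng.normal(scale=0.5, size=200)      # fluctuating contrast-agent signal
img = sofi2(stack)
print(img[16, 16] > 10 * img.mean())                     # the fluctuating pixel dominates
```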
|
The experimental status of charged lepton flavor violation searches is
briefly reviewed, with particular emphasis on the three classical searches
involving muon transitions: $\mu \to e \gamma$, $\mu \to e$ conversion and $\mu
\to 3e$.
|
Model-independent or non-parametric approaches for modeling a data set have
been widely used in cosmology. In these scenarios, the data are used directly
to reconstruct an underlying function. In this work, we introduce a novel
semi-model-independent method for this task. The new approach not only removes
some drawbacks of previous methods but also has some remarkable advantages. We
combine the well-known Gaussian linear model with a neural network and
introduce a procedure for the reconstruction of an arbitrary function. In this
scenario, the neural network produces some arbitrary base functions which are
subsequently fed to the Gaussian linear model. Given a prior distribution on
the free parameters, the Gaussian linear model provides a closed form for the
posterior distribution as well as the Bayesian evidence. In addition, contrary
to other methods, it is straightforward to compute the uncertainty.
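The Gaussian linear model step admits the textbook closed form; below is a minimal NumPy sketch, assuming a zero-mean isotropic Gaussian prior with precision alpha and Gaussian noise with precision beta. The network-produced base functions are stubbed by a hypothetical `basis(x)`, so this illustrates only the closed-form posterior and evidence, not the paper's full pipeline.

```python
# Hedged sketch of the Gaussian linear model on top of arbitrary basis functions.
# Prior w ~ N(0, alpha^-1 I), likelihood y ~ N(Phi w, beta^-1 I).
import numpy as np

def basis(x):
    """Stand-in for the network-produced base functions (here: a few fixed features)."""
    return np.stack([np.ones_like(x), x, np.sin(x), np.exp(-x**2)], axis=1)

def fit_gaussian_linear_model(x, y, alpha=1.0, beta=25.0):
    Phi = basis(x)                                        # design matrix, shape (N, M)
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)                              # posterior covariance
    m = beta * S @ Phi.T @ y                              # posterior mean
    N, M = Phi.shape
    E = 0.5 * beta * np.sum((y - Phi @ m) ** 2) + 0.5 * alpha * m @ m
    log_evidence = (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta) - E
                    - 0.5 * np.linalg.slogdet(S_inv)[1] - 0.5 * N * np.log(2 * np.pi))
    return m, S, log_evidence

def predict(x_new, m, S, beta=25.0):
    Phi = basis(x_new)
    mean = Phi @ m
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", Phi, S, Phi)  # predictive uncertainty
    return mean, var

x = np.linspace(-2, 2, 50)
y = np.sin(x) + 0.2 * np.random.default_rng(1).normal(size=x.size)
m, S, logZ = fit_gaussian_linear_model(x, y)
mu, var = predict(np.array([0.0, 1.0]), m, S)
```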
|
We consider the convex geometry of the cone of nonnegative quadratics over
Stanley-Reisner varieties. Stanley-Reisner varieties (which are unions of
coordinate planes) are amongst the simplest real projective varieties, so this
is potentially a starting point that can generalize to more complicated real
projective varieties. This subject has some surprising connections to algebraic
topology and category theory, which we exploit heavily in our work.
These questions are also valuable in applied math, because they directly
translate to questions about positive semidefinite (PSD) matrices. In
particular, this relates to a long line of work concerning the extent to which
it is possible to approximately check that a matrix is PSD by checking that
some principal submatrices are PSD, or to check whether a partial matrix can be
approximately completed to a full PSD matrix.
We systematize both these practical and theoretical questions using a
framework based on algebraic topology, category theory, and convex geometry. As
applications of this framework we are able to classify the extreme nonnegative
quadratics over many Stanley-Reisner varieties. We plan to follow these
structural results with a paper that is more focused on quantitative questions
about PSD matrix completion, which have applications in sparse semidefinite
programming.
|
Precise measurements of electroweak processes at the International Linear
Collider (ILC) will provide unique opportunities to explore new physics beyond
the Standard Model. Fermion pair production events are sensitive to new
interactions involving a new heavy gauge boson or an electroweak interacting
massive particle (EWIMP). We studied the mass reach of new particles at the ILC
with $\sqrt{s}=250$ GeV by using $e^+ e^-\to e^+ e^-$ and $e^+ e^-\to \mu^+
\mu^-$ events. We show that a mass reach for BSM particles can be determined
at the 90% confidence level using a toy Monte Carlo technique.
|
Assuming 0# does not exist, we present a combinatorial approach to Jensen's
method of coding by a real. The forcing uses combinatorial consequences of fine
structure (including the Covering Lemma, in various guises), but makes no
direct appeal to fine structure itself.
|
Massive Open Online Courses (MOOCs) use peer assessment to grade open ended
questions at scale, allowing students to provide feedback. Relative to
teacher-based grading, peer assessment on MOOCs traditionally delivers
lower-quality feedback and fewer learner interactions. We present the
identified peer review (IPR) framework, which provides non-blind peer
assessment and incentives that drive high-quality feedback. We show that, compared to traditional peer
assessment methods, IPR leads to significantly longer and more useful feedback
as well as more discussion between peers.
|
Contrary to the quark mixing matrix, the lepton mixing matrix could be
symmetric. We study the phenomenological consequences of this possibility. In
particular, we find that symmetry would imply that |U_{e3}| is larger than
0.16, i.e., above its current 2 sigma limit. The other mixing angles are also
constrained and CP violating effects in neutrino oscillations are suppressed,
even though |U_{e3}| is sizable. Maximal atmospheric mixing is only allowed if
the other observables are outside their current 3 sigma ranges, and sin^2
theta_{23} lies typically below 0.5. The Majorana phases are not affected, but
the implied values of the solar neutrino mixing angle have some effect on the
predictions for neutrinoless double beta decay. We further discuss some formal
properties of a symmetric mixing matrix.
|
Temporal sentence grounding in videos (TSGV), which aims to localize one
target segment from an untrimmed video with respect to a given sentence query,
has drawn increasing attention in the research community over the past few
years. Different from the task of temporal action localization, TSGV is more
flexible since it can locate complicated activities via natural language,
without restrictions from predefined action categories. Meanwhile, TSGV is more
challenging since it requires both textual and visual understanding for
semantic alignment between the two modalities (i.e., text and video). In this
survey, we give a comprehensive overview of TSGV, which i) summarizes the
taxonomy of existing methods, ii) provides a detailed description of the
evaluation protocols (i.e., datasets and metrics) used in TSGV, and iii)
discusses in depth potential problems of current benchmark designs and
research directions for further investigation. To the best of our knowledge,
this is the first systematic survey on temporal sentence grounding. More
specifically, we first discuss existing TSGV approaches by grouping them into
four categories, i.e., two-stage methods, end-to-end methods, reinforcement
learning-based methods, and weakly supervised methods. Then we present the
benchmark datasets and evaluation metrics used to assess current research
progress. Finally, we discuss some limitations of TSGV by pointing out
potential problems improperly resolved in the current evaluation protocols,
which may push forward more cutting-edge research in TSGV. Besides, we also
share our insights on several promising directions, including three typical
tasks with new and practical settings based on TSGV.
|
We calculate the three-loop master integrals of Ref. [1] [arXiv:1709.02160]
in analytic form. This allows us to present the fermionic contributions to the
$\Delta B=2$ Wilson coefficients of the $B$-$\bar B$ decay matrix in
next-to-next-to-leading order of QCD with full analytic dependence on the mass
of the charm quark in the fermionic loops.
|
Time-resolved optically detected magnetic resonance (ODMR) is a valuable
technique to study the local deformation of the crystal lattice around a
magnetic ion as well as the ion spin relaxation time. Here we utilize selective
Mn-doping to additionally enhance the inherent locality of the ODMR technique.
We present the time-resolved ODMR studies of single {(Cd,Mg)Te/(Cd,Mn)Te}
quantum wells (QWs) with manganese ions located at different positions along
the growth axis -- in the center or on the sides of the quantum well. We
observe that the spin-lattice relaxation of Mn$^{2+}$ depends significantly on
the ion-carrier wavefunction overlap at low magnetic fields. Interestingly, the
effect is clearly observed in spite of the very low carrier density, which suggests
the potential for control of the Mn$^{2+}$ ion relaxation rate by means of the
electric field in future experiments.
|
We consider the scenario of supervised learning in Deep Learning (DL)
networks, and exploit the arbitrariness of choice in the Riemannian metric
relative to which the gradient descent flow can be defined (a general fact of
differential geometry). In the standard approach to DL, the gradient flow on
the space of parameters (weights and biases) is defined with respect to the
Euclidean metric. Here instead, we choose the gradient flow with respect to the
Euclidean metric in the output layer of the DL network. This naturally induces
two modified versions of the gradient descent flow in the parameter space, one
adapted for the overparametrized setting, and the other for the
underparametrized setting. In the overparametrized case, we prove that,
provided that a rank condition holds, all orbits of the modified gradient
descent drive the ${\mathcal L}^2$ cost to its global minimum at a uniform
exponential convergence rate; one thereby obtains an a priori stopping time for
any prescribed proximity to the global minimum. We point out relations of the
latter to sub-Riemannian geometry. Moreover, we generalize the above framework
to the situation in which the rank condition does not hold; in particular, we
show that local equilibria can only exist if a rank loss occurs, and that
generically, they are not isolated points, but elements of a critical
submanifold of parameter space.
|
The Harder-Narasimhan types are a family of discrete isomorphism invariants
for representations of finite quivers. Previously (arXiv:2303.16075), we
evaluated their discriminating power in the context of persistence modules over
a finite poset, including multiparameter persistence modules (over a finite
grid). In particular, we introduced the skyscraper invariant and proved it was
strictly finer than the rank invariant. In order to study the stability of the
skyscraper invariant, we extend its definition from the finite to the infinite
setting and consider multiparameter persistence modules over $\mathbb Z ^n$ and
$\mathbb R^n$. We then establish an erosion-type stability result for this
version of the skyscraper invariant.
|
Health-related data is noisy and stochastic in implying the true
physiological states of patients, limiting information contained in
single-moment observations for sequential clinical decision making. We model
patient-clinician interactions as partially observable Markov decision
processes (POMDPs) and optimize sequential treatment based on belief states
inferred from the history sequence. To facilitate inference, we build a variational
generative model and boost state representation with a recurrent neural network
(RNN), incorporating an auxiliary loss from sequence auto-encoding. Meanwhile,
we optimize a continuous policy of drug levels with an actor-critic method
where policy gradients are obtained from a stabilized off-policy estimate of
advantage function, with the value of belief state backed up by parallel
best-first suffix trees. We exploit our methodology in optimizing dosages of
vasopressor and intravenous fluid for sepsis patients using a retrospective
intensive care dataset and evaluate the learned policy with off-policy policy
evaluation (OPPE). The results demonstrate that modelling as POMDPs yields
better performance than MDPs, and that incorporating heuristic search improves
sample efficiency.
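A minimal, hypothetical PyTorch sketch of the belief-state idea: a GRU encodes the observation-action history into a belief vector, an auxiliary decoder reconstructs the observation sequence (the auto-encoding loss), and actor/critic heads read the belief. The dimensions, the continuous dose action, and the loss layout are assumptions, not the paper's exact model (the variational generative model and suffix-tree search are omitted).

```python
# Hedged sketch: recurrent belief encoder + sequence auto-encoding auxiliary loss
# + actor-critic heads for a continuous (e.g., drug dosage) action.
import torch
import torch.nn as nn

class BeliefActorCritic(nn.Module):
    def __init__(self, obs_dim=32, act_dim=2, belief_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, belief_dim, batch_first=True)
        self.decoder = nn.Linear(belief_dim, obs_dim)       # auto-encoding head
        self.actor = nn.Sequential(nn.Linear(belief_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        self.critic = nn.Sequential(nn.Linear(belief_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, obs_seq, act_seq):
        x = torch.cat([obs_seq, act_seq], dim=-1)           # (B, T, obs+act)
        beliefs, _ = self.rnn(x)                            # belief state at every step
        recon = self.decoder(beliefs)                       # reconstruct observations
        action_mean = self.actor(beliefs[:, -1])            # continuous doses for current step
        value = self.critic(beliefs[:, -1])                 # value of current belief
        return action_mean, value, recon

model = BeliefActorCritic()
obs = torch.randn(8, 10, 32)   # batch of 8 histories, 10 steps, 32 features
act = torch.randn(8, 10, 2)    # past vasopressor / IV-fluid doses (hypothetical layout)
action_mean, value, recon = model(obs, act)
aux_loss = nn.functional.mse_loss(recon, obs)               # sequence auto-encoding loss
```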
|
In this paper we present an immersed weak Galerkin method for solving
second-order elliptic interface problems on polygonal meshes, where the meshes
do not need to be aligned with the interface. The discrete space consists of
constants on each edge and broken linear polynomials satisfying the interface
conditions in each element. For triangular meshes, such broken linear
polynomials coincide with the basis functions in immersed finite element methods
[26]. We establish some approximation properties of the broken linear
polynomials and the discrete weak gradient of a certain projection of the
solution on polygonal meshes. We then prove an optimal error estimate of our
scheme in the discrete $H^1$-seminorm under some assumptions on the exact
solution. Numerical experiments are provided to confirm our theoretical
analysis.
|
We present a comparison of CN bandstrength variations in the high-metallicity
globular clusters NGC 6356 and NGC 6528 with those measured in the old open
clusters NGC 188, NGC 2158 and NGC 7789. Star-to-star abundance variations, of
which CN differences are a readily observable sign, are commonplace in
moderate-metallicity halo globular clusters but are unseen in the field or in
open clusters. We find that the open clusters have narrow, unimodal
distributions of CN bandstrength, as expected from the literature, while the
globular clusters have broad, bimodal distributions of CN bandstrength, similar
to moderate-metallicity halo globular clusters. This result has interesting
implications for the various mechanisms proposed to explain the origin of
globular cluster abundance inhomogeneities, and suggests that the local
environment at the epoch of cluster formation plays a vital role in regulating
intracluster enrichment processes.
|
We investigate the phase diagrams of two-dimensional lattice dipole systems
with variable geometry. For bipartite square and triangular lattices with
tunable vertical sublattice separation, we find rich phase diagrams featuring a
sequence of easy-plane magnetically ordered phases separated by incommensurate
spin-wave states.
|
From 't Hooft's argument, one expects that the analyticity domain of an
asymptotically free quantum field theory is horn-shaped. In the usual Borel
summation, the function is obtained through a Laplace transform and thus has a
much larger analyticity domain. However, if the summation process goes through
the process called acceleration by Ecalle, one obtains such a horn-shaped
analyticity domain. We therefore argue that acceleration, which allows one to
go beyond standard Borel summation, must be an integral part of the toolkit for
the study of exactly renormalisable quantum field theories. We sketch how this
procedure works and what its consequences are.
|
The objective of this investigation is to evaluate and contrast the
effectiveness of four state-of-the-art pre-trained models, ResNet-34, VGG-19,
DenseNet-121, and Inception V3, in classifying traffic and road signs with the
utilization of the GTSRB public dataset. The study focuses on evaluating the
accuracy of these models' predictions as well as their ability to employ
appropriate features for image categorization. To gain insights into the
strengths and limitations of the models' predictions, the study employs the
local interpretable model-agnostic explanations (LIME) framework. The findings
of this experiment indicate that LIME is a crucial tool for improving the
interpretability and dependability of machine learning models for image
identification, even though the models achieve an F1 score of 0.99 on
classifying traffic and road signs. The conclusion of this study has important
ramifications for how these models are used in practice, as it is crucial to
ensure that model predictions are founded on the pertinent image features.
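For reference, a minimal sketch of how LIME is typically applied to an image classifier; the classifier stub, image, and class count here are hypothetical placeholders standing in for the fine-tuned models and GTSRB images of the study.

```python
# Hedged sketch: explaining one prediction of an image classifier with LIME.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder for the real model (e.g. a fine-tuned ResNet-34); here it just
    returns normalized random scores so the sketch runs end-to-end."""
    scores = np.random.default_rng(0).random((len(images), 43))   # GTSRB has 43 classes
    return scores / scores.sum(axis=1, keepdims=True)

image = (np.random.default_rng(1).random((128, 128, 3)) * 255).astype(np.uint8)  # stand-in image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=3, hide_color=0, num_samples=1000)

# Highlight the superpixels supporting the top predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)
```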
|
We endow a topological group $(G, \tau)$ with a coarse structure defined by
the smallest group ideal $S_{\tau} $ on $G$ containing all converging sequences
with their limits and denote the obtained coarse group by $(G, S_{\tau})$. If
$G$ is discrete then $(G, S_{\tau})$ is a finitary coarse group studied in
Geometric Group Theory. The main result: if a topological abelian group $(G,
\tau)$ contains a non-trivial converging sequence then $asdim \ (G, S_{\tau})=
\infty $.
|
The ability to animate photo-realistic head avatars reconstructed from
monocular portrait video sequences represents a crucial step in bridging the
gap between the virtual and real worlds. Recent advancements in head avatar
techniques, including explicit 3D morphable meshes (3DMM), point clouds, and
neural implicit representation have been exploited for this ongoing research.
However, 3DMM-based methods are constrained by their fixed topologies,
point-based approaches suffer from a heavy training burden due to the extensive
quantity of points involved, and neural implicit representations suffer from
limitations in deformation flexibility and rendering efficiency. In response to these
challenges, we propose MonoGaussianAvatar (Monocular Gaussian Point-based Head
Avatar), a novel approach that harnesses 3D Gaussian point representation
coupled with a Gaussian deformation field to learn explicit head avatars from
monocular portrait videos. We define our head avatars with Gaussian points
characterized by adaptable shapes, enabling flexible topology. These points
exhibit movement with a Gaussian deformation field in alignment with the target
pose and expression of a person, facilitating efficient deformation.
Additionally, the Gaussian points have controllable shape, size, color, and
opacity combined with Gaussian splatting, allowing for efficient training and
rendering. Experiments demonstrate the superior performance of our method,
which achieves state-of-the-art results among previous methods.
|
We report on a calculation of cross sections for charged-current quasielastic
antineutrino scattering off $^{12}$C in the energy range of interest for the
MiniBooNE experiment. We adopt the impulse approximation (IA) and use the
nonrelativistic continuum random phase approximation (CRPA) to model the
nuclear dynamics. An effective nucleon-nucleon interaction of the Skyrme type
is used. We compare our results with the recent MiniBooNE antineutrino
cross-section data and confront them with alternate calculations. The CRPA
predictions reproduce the gross features of the shape of the measured
double-differential cross sections. The CRPA cross sections are typically
larger than those of other reported IA calculations but tend to underestimate
the magnitude of the MiniBooNE data. We observe that an enhancement of the
nucleon axial mass in CRPA calculations is an effective way of improving on the
description of the shape and magnitude of the double-differential cross
sections. The rescaling of $M_{A}$ is illustrated to affect the shape of the
double-differential cross sections differently than multinucleon effects beyond
the IA.
|
The most up to date femto- and micro-lensing constraints indicate that
primordial black holes of $\sim 10^{-16} M_\odot$ and $\sim 10^{-12} M_\odot$,
respectively, may constitute a large fraction of the dark matter. We describe
analytically and numerically the dynamics by which inflationary fluctuations
featuring a time-varying propagation speed or an effective Planck mass can lead
to abundant primordial black hole production. As an example, we provide an ad
hoc DBI-like model. A very large primordial spectrum originating from a small
speed of sound typically leads to strong coupling within the vanilla effective
theory of inflationary perturbations. However, we point out that ghost
inflation may be able to circumvent this problem. We consider as well black
hole formation in solid inflation, for which, in addition to an analogous
difficulty, we stress the importance of the reheating process. In addition, we
review the basic formalism for the collapse of large radiation density
fluctuations, emphasizing the relevance of an adequate choice of gauge
invariant variables.
|
We conduct a detailed investigation of the polaron self-interaction (pSI)
error in standard approximations to the exchange-correlation (XC) functional
within density-functional theory (DFT). The pSI leads to delocalization error
in the polaron wave function and energy, as calculated from the Kohn-Sham (KS)
potential in the native charge state of the polaron. This constitutes the
origin of the systematic failure of DFT to describe polaron formation in band
insulators. It is shown that the delocalization error in these systems is,
however, largely absent in the KS potential of the closed-shell neutral charge
state. This leads to a modification of the DFT total-energy functional that
corrects the pSI in the XC functional. The resulting pSIC-DFT method
constitutes an accurate parameter-free {\it ab initio} methodology for
calculating polaron properties in insulators at a computational cost that is
orders of magnitude smaller than hybrid XC functionals. Unlike approaches that
rely on parametrized localized potentials such as DFT+$U$, the pSIC-DFT method
properly captures both site and bond-centered polaron configurations. This is
demonstrated by studying formation and migration of self-trapped holes in
alkali halides (bond-centered) as well as self-trapped electrons in an
elpasolite compound (site-centered). The pSIC-DFT approach consistently
reproduces the results obtained by hybrid XC functionals parametrized by
DFT+$G_0W_0$ calculations. Finally, we generalize the pSIC approach to hybrid
functionals, and show that in stark contrast to conventional hybrid
calculations of polaron energies, the pSIC-hybrid method is insensitive to the
parametrization of the hybrid XC functional. On this basis, we further
rationalize the success of the pSIC-DFT approach.
|
We present and validate a semi-analytical quasi-normal mode (QNM) theory for
the local density of states (LDOS) in coupled photonic crystal (PhC)
cavity-waveguide structures. By means of an expansion of the Green's function
on one or a few QNMs, a closed-form expression for the LDOS is obtained, and
for two types of two-dimensional PhCs, with one and two cavities side-coupled
to an extended waveguide, the theory is validated against numerically exact
computations. For the single cavity, a slightly asymmetric spectrum is found,
which the QNM theory reproduces, and for two cavities a non-trivial spectrum
with a peak and a dip is found, which is reproduced only when including both
the two relevant QNMs in the theory. In both cases, we find relative errors
below 1% in the bandwidth of interest.
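Schematically (up to normalization conventions, which vary across the QNM literature, and not necessarily in the exact form used here), the expansion underlying such a closed-form LDOS reads
$$
\mathbf{G}(\mathbf{r},\mathbf{r}';\omega)\approx\sum_{\mu}\frac{\tilde{\mathbf{f}}_{\mu}(\mathbf{r})\,\tilde{\mathbf{f}}_{\mu}(\mathbf{r}')}{\tilde{\omega}_{\mu}-\omega},\qquad
\rho(\mathbf{r},\omega)\propto\operatorname{Im}\big[\mathbf{n}\cdot\mathbf{G}(\mathbf{r},\mathbf{r};\omega)\cdot\mathbf{n}\big],
$$
so that keeping one or two QNMs $(\tilde{\mathbf{f}}_{\mu},\tilde{\omega}_{\mu})$ turns the LDOS spectrum into a sum of (possibly interfering) Lorentzian-like terms, consistent with the asymmetric single-cavity line and the peak-and-dip two-cavity line described above.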
|
We revisit the inequality relating the gravitational field energy and the
Komar charge, both quantities evaluated at the event horizon, for static and
spherically symmetric regular black hole solutions obtained with nonlinear
electrodynamics. We find a way to characterize these regular black hole
solutions by the energy conditions that they satisfy. In particular, we show
the relation between the direction of the inequality and the energy condition
that the regular black hole solutions satisfy.
|
We calculate the power spectrum of density fluctuations in the statistical
non-equilibrium field theory for classical, microscopic degrees of freedom to
first order in the interaction potential. We specialise our result to cosmology
by choosing appropriate initial conditions and propagators and show that the
non-linear growth of the density power spectrum found in numerical simulations
of cosmic structure evolution is reproduced well to redshift zero and for
arbitrary wave numbers. The main difference of our approach to ordinary
cosmological perturbation theory is that we do not perturb a dynamical equation
for the density contrast. Rather, we transport the initial phase-space
distribution of a canonical particle ensemble forward in time and extract any
collective information from it at the time needed. Since even small
perturbations of particle trajectories can lead to large fluctuations in
density, our approach allows us to reach high density contrast already at first
order in the perturbations of the particle trajectories. We argue why the
expected asymptotic behaviour of the non-linear power spectrum at large wave
numbers can be reproduced in our approach at any order of the perturbation
series.
|
Recently it has been recognized that an electromotive force (emf) can be
induced just by spin precession, where the generation of the emf has been
interpreted as a real-space topological pumping effect. It has been shown that
the magnitude of the emf is independent of the functional form of the localized
moments. It was also demonstrated that rigid domain wall (DW) motion cannot
generate an emf in the system. Based on the real-space topological pumping
approach, in the current study we show that an emf can be induced by the rigid
motion of a deformed DW. We also demonstrate that the generated emf strongly
depends on the DW bulging. Meanwhile, the results show that the DW bulging
leads to the generation of an emf both along the axis of the DW motion and
normal to the direction of motion.
|
Metals are the most common materials used in space technology. Metal
structures used in space are subjected to the full spectrum of electromagnetic
radiation together with particle irradiation. Hence, they
undergo degradation. Future space missions are planned to proceed in the
interplanetary space, where the protons of the solar wind play a very
destructive role on metallic surfaces. Unfortunately, their real degradation
behavior is to a great extent unknown.
Our aim is to predict materials' behavior in such a destructive environment.
Therefore, both theoretical and experimental studies are performed at the
German Aerospace Center (DLR) in Bremen, Germany.
Here, we report the theoretical results of those studies. We examine the
process of H2-bubble formation on metallic surfaces. H2-bubbles are metal caps
filled with molecular hydrogen gas resulting from recombination processes of
the metal's free electrons and the solar protons. A thermodynamic model of the
bubble growth is presented. Our model predicts e.g. the velocity of that growth
and the reflectivity of foils populated by bubbles.
Formation of bubbles irreversibly changes the surface quality of irradiated
metals. Thin metallic films are especially sensitive for such degradation
processes. They are used e.g. in the solar sail propulsion technology. The
efficiency of that technology depends on the thermo-optical properties of the
sail materials. Therefore, bubble formation processes have to be taken into
account for the planning of long-term solar sail missions.
|
The chemical evolution of a nascent quark matter core in a newborn compact
neutron star is studied in the presence of a strong magnetic field. The
effective rate of strange quark production in a degenerate quark matter core in
the presence of strong magnetic fields is obtained. The investigations show
that in the presence of strong magnetic fields a quark matter core becomes
energetically unstable, and hence a deconfinement transition to quark matter at
the centre of a compact neutron star under such circumstances is not possible.
The critical strength of the magnetic field at the central core that makes the
system energetically unstable with respect to dense nuclear matter is found to
be $\sim 4.4\times 10^{13}$G. This is the typical strength at which the Landau
levels for electrons are populated. The other possible phase transitions in
such a high density and ultra strong magnetic field environment are discussed.
|
We present a determination of the pion-nucleon sigma-term based on a novel
analysis of the $\pi N$ scattering amplitude in Lorentz covariant baryon chiral
perturbation theory renormalized in the extended-on-mass-shell scheme. This
amplitude, valid up to next-to-next-to-leading order in the chiral expansion,
systematically includes the effects of the $\Delta(1232)$, giving a reliable
description of the phase shifts of different partial wave analyses up to
energies just below the resonance region. We obtain predictions on some
observables that are within experimental bounds and phenomenological
expectations. In particular, we use the center-of-mass energy dependence of the
amplitude adjusted with the data above threshold to extract accurately the
value of $\sigma_{\pi N}$. Our study indicates that the inclusion of modern
meson-factory and pionic-atom data favors relatively large values of the sigma
term. We report the value $\sigma_{\pi N}=59(7)$ MeV.
|
Geomagnetically-aligned density structures with a range of sizes exist in the
near-Earth plasma environment, including 10-100 km-wide VLF/HF wave-ducting
structures. Their small diameters and modest density enhancements make them
difficult to observe, and there is limited evidence for any of the several
formation mechanisms proposed to date. We present a case study of an event on
26 August 2014 where a travelling ionospheric disturbance (TID) shortly
precedes the formation of a complex collection of field-aligned ducts, using
data obtained by the Murchison Widefield Array (MWA) radio telescope. Their
spatiotemporal proximity leads us to suggest a causal interpretation.
Geomagnetic conditions were quiet at the time, and no obvious triggers were
noted. The structures grow rapidly, within 0.5 hr of the passage of the TID,
attaining their peak prominence 1-2 hr later and persisting for several more
hours until observations ended at local dawn. Analyses of the next
two days show field-aligned structures to be preferentially detectable under
quiet rather than active geomagnetic conditions. We used a raster scanning
strategy facilitated by the speed of electronic beamforming to expand the
quasi-instantaneous field of view of the MWA by a factor of three. These
observations represent the broadest angular coverage of the ionosphere by a
radio telescope to date.
|
Charge carriers that execute multi-phonon hopping generally interact strongly
enough with phonons to form polarons. A polaron's sluggish motion is linked to
slowly shifting atomic displacements that severely reduce the intrinsic width
of its transport band. Here a means to estimate hopping polarons' bandwidths
from Seebeck-coefficient measurements is described. The magnitudes of
semiconductors' Seebeck coefficients are usually quite large (greater than 86
microvolts/K) near room temperature. However, in accord with the third law of
thermodynamics, Seebeck coefficients must vanish at absolute zero. Here the
transition of the Seebeck coefficient of hopping polarons to its
low-temperature regime is investigated. The temperature and sharpness of this
transition depend on the concentration of carriers and on the width of their
transport band. This feature provides a means of estimating the width of a
polaron's transport band. Since the intrinsic broadening of polaron bands is
very small, less than the characteristic phonon energy, the net widths of
polaron transport bands in disordered semiconductors approach the energetic
disorder experienced by their hopping carriers, their disorder energy.
|
Vassiliev's spectral sequence for long knots is discussed. Briefly speaking
we study what happens if the strata of non-immersions are ignored. Various
algebraic structures on the spectral sequence are introduced. General theorems
about these structures imply, for example, that the bialgebra of chord diagrams
is polynomial for any field of coefficients.
|
Solutions to gravity with quadratic Lagrangians are found for the simple case
where the only nonconstant metric component is the lapse $N$ and the Riemann
tensor takes the form $R^{t}_{.itj}=-k_{i}k_{j}, i,j=1,2,3$; thus these
solutions depend on cross terms in the Riemann tensor and therefore complement
the linearized theory where it is the derivatives of the Riemann tensor that
matter. The relationship of this metric to the null gravitational radiation
metric of Peres is given. Gravitational energy Poynting vectors are constructed
for the solutions and one of these, based on the Lanczos tensor, supports the
indication in the linearized theory that nonnull gravitational radiation can
occur.
|
In this paper, an accurate direction-of-arrival (DOA) estimator is developed
based on the real-valued singular value decomposition (SVD) of the covariance
matrix. A unitary transform is first applied to the complex-valued covariance
matrix, and the SVD is then performed on the resulting real-valued data matrix.
The singular vector is then utilized with a weighted least squares (WLS) method
to achieve DOA estimation. The performance of the proposed algorithm is
compared with several state-of-the-art methods as well as the CRB. The results
indicate the accuracy and effectiveness of the proposed method.
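To illustrate the real-valued transformation step, here is a NumPy sketch using a standard unitary-transform construction (forward-backward averaging followed by the usual sparse unitary matrix); the exact transform and the WLS DOA step used in the paper may differ, and the array scenario below is purely illustrative.

```python
# Hedged sketch: real-valued covariance via forward-backward averaging and a
# unitary transform, followed by SVD (the subsequent WLS DOA step is not shown).
import numpy as np

def unitary_matrix(n: int) -> np.ndarray:
    """Standard sparse unitary matrix Q_n mapping centro-Hermitian matrices to real ones."""
    k = n // 2
    I, J = np.eye(k), np.fliplr(np.eye(k))
    if n % 2 == 0:
        Q = np.vstack([np.hstack([I, 1j * I]), np.hstack([J, -1j * J])])
    else:
        z = np.zeros((k, 1))
        Q = np.vstack([np.hstack([I, z, 1j * I]),
                       np.hstack([z.T, [[np.sqrt(2)]], z.T]),
                       np.hstack([J, z, -1j * J])])
    return Q / np.sqrt(2)

def real_valued_svd(R: np.ndarray):
    n = R.shape[0]
    J = np.fliplr(np.eye(n))
    R_fb = 0.5 * (R + J @ R.conj() @ J)        # forward-backward averaging -> centro-Hermitian
    Q = unitary_matrix(n)
    C = (Q.conj().T @ R_fb @ Q).real           # real-valued transformed covariance
    return np.linalg.svd(C)

# Example: sample covariance of a 2-source ULA snapshot model (hypothetical scenario).
rng = np.random.default_rng(0)
n, snapshots = 8, 200
angles = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(n), np.sin(angles)))   # half-wavelength spacing
S = (rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(n, snapshots)) + 1j * rng.normal(size=(n, snapshots)))
R = X @ X.conj().T / snapshots
U, s, Vt = real_valued_svd(R)
print(s[:3])   # two dominant singular values correspond to the two sources
```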
|
In this paper, the problem of finding a generalized Nash equilibrium (GNE) of
a networked game is studied. Players are only able to choose their decisions
from a feasible action set. The feasible set is considered to be a private
linear equality constraint that is coupled through decisions of the other
players. We consider that each player has his own private constraint, which
need not be shared with the other players. This general case also embodies the
one with shared constraints between players, and it can also be simply extended
to the case with inequality constraints. Since the players do not have access to
other players' actions, they need to exchange estimates of others' actions and
a local copy of the Lagrangian multiplier with their neighbors over a connected
communication graph. We develop a relatively fast algorithm by reformulating
the conservative GNE problem within the framework of inexact-ADMM. The
convergence of the algorithm is guaranteed under a few mild assumptions on cost
functions. Finally, the algorithm is simulated for a wireless ad-hoc network.
|
The goal of this work was to study the role of GC alternative dimers in the
binding of DNA with Ni(II) ions. The method of ultraviolet difference
spectroscopy has been applied to investigate the interaction of Ni(II) ions
with DNA extracted from Clostridium perfringens, mouse liver (C3HA line), calf
thymus, salmon sperm, herring sperm, E. coli, Micrococcus luteus and the
polynucleotides Poly(dA-dT)xPoly(dA-dT), Poly(dG)xPoly(dC),
Poly(dG-dC)xPoly(dG-dC). It is shown that Ni(II) ions, at outer-spherical
binding with the DNA double helix from the side of the major groove, choose the
more stable dimers 3'-C-G-5'/5'-G-C-3' and bind to the N7 atoms of both
guanines in the dimer, forming a G-G interstrand crosslink. This directly
correlates with the process of forming point defects of the Watson-Crick
wrong-pair type (creation of rare keto-enolic and amino-imino tautomeric forms)
and depurinization.
|
We introduce "$t$-LC triangulated manifolds" as those triangulations
obtainable from a tree of $d$-simplices by recursively identifying two boundary
$(d-1)$-faces whose intersection has dimension at least $d-t-1$. The $t$-LC
notion interpolates between the class of LC manifolds introduced by
Durhuus--Jonsson (corresponding to the case $t=1$), and the class of all
manifolds (case $t=d$). Benedetti--Ziegler proved that there are at most
$2^{d^2 \, N}$ triangulated $1$-LC $d$-manifolds with $N$ facets. Here we prove
that there are at most $2^{\frac{d^3}{2}N}$ triangulated $2$-LC $d$-manifolds
with $N$ facets. This extends to all dimensions an intuition by Mogami for
$d=3$.
We also introduce "$t$-constructible complexes", interpolating between
constructible complexes (the case $t=1$) and all complexes (case $t=d$). We
show that all $t$-constructible pseudomanifolds are $t$-LC, and that all
$t$-constructible complexes have (homotopical) depth larger than $d-t$. This
extends the famous result by Hochster that constructible complexes are
(homotopy) Cohen--Macaulay.
|
A pair $(A,B)$ of square $(0,1)$-matrices is called a \emph{Lehman pair} if
$AB^T=J+kI$ for some integer $k\in\{-1,1,2,3,\ldots\}$. In this case $A$ and
$B$ are called \emph{Lehman matrices}. This terminology arises because Lehman
showed that the rows with the fewest ones in any non-degenerate minimally
nonideal (mni) matrix $M$ form a square Lehman submatrix of $M$. Lehman
matrices with $k=-1$ are essentially equivalent to \emph{partitionable graphs}
(also known as $(\alpha,\omega)$-graphs), so have been heavily studied as part
of attempts to directly classify minimal imperfect graphs. In this paper, we
view a Lehman matrix as the bipartite adjacency matrix of a regular bipartite
graph, focusing in particular on the case where the graph is cubic. From this
perspective, we identify two constructions that generate cubic Lehman graphs
from smaller Lehman graphs. The most prolific of these constructions involves
repeatedly replacing suitable pairs of edges with a particular $6$-vertex
subgraph that we call a $3$-rung ladder segment. Two decades ago, L\"{u}tolf \&
Margot initiated a computational study of mni matrices and constructed a
catalogue containing (among other things) a listing of all cubic Lehman
matrices with $k =1$ of order up to $17 \times 17$. We verify their catalogue
(which has just one omission), and extend the computational results to $20
\times 20$ matrices. Of the $908$ cubic Lehman matrices (with $k=1$) of order
up to $20 \times 20$, only two do not arise from our $3$-rung ladder
construction. However these exceptions can be derived from our second
construction, and so our two constructions cover all known cubic Lehman
matrices with $k=1$.
|
For overlay networks, the ability to recover from a variety of problems like
membership changes or faults is a key element to preserve their functionality.
In recent years, various self-stabilizing overlay networks have been proposed
that have the advantage of being able to recover from any illegal state.
However, the vast majority of these networks cannot give any guarantees on
their functionality while the recovery process is going on. We are especially
interested in searchability, i.e., the functionality that search messages for a
specific identifier are answered successfully if a node with that identifier
exists in the network. We investigate overlay networks that are not only
self-stabilizing but that also ensure that monotonic searchability is
maintained while the recovery process is going on, as long as there are no
corrupted messages in the system. More precisely, once a search message from
node $u$ to another node $v$ is successfully delivered, all future search
messages from $u$ to $v$ succeed as well. Monotonic searchability was recently
introduced in OPODIS 2015, in which the authors provide a solution for a simple
line topology.
We present the first universal approach to maintain monotonic searchability
that is applicable to a wide range of topologies. As the base for our approach,
we introduce a set of primitives for manipulating overlay networks that allows
us to maintain searchability and show how existing protocols can be transformed
to use these primitives.
We complement this result with a generic search protocol that together with
the use of our primitives guarantees monotonic searchability.
As an additional feature, searching existing nodes with the generic search
protocol is as fast as searching a node with any other fixed routing protocol
once the topology has stabilized.
|
We present a detailed high-resolution weak-lensing (WL) study of SPT-CL
J2106-5844 at z=1.132, claimed to be the most massive system discovered at z >
1 in the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. Based on the
deep imaging data from the Advanced Camera for Surveys and Wide Field Camera 3
on-board the Hubble Space Telescope, we find that the cluster mass distribution
is asymmetric, composed of a main clump and a subclump ~640 kpc west thereof.
The central clump is further resolved into two smaller northwestern and
southeastern substructures separated by ~150 kpc. We show that this rather
complex mass distribution is more consistent with the cluster galaxy
distribution than a unimodal distribution as previously presented. The
northwestern substructure coincides with the BCG and X-ray peak while the
southeastern one agrees with the location of the number density peak. These
morphological features and the comparison with the X-ray emission suggest that
the cluster might be a merging system. We estimate the virial mass of the
cluster to be $M_{200c} = (10.4^{+3.3}_{-3.0}\pm1.0)~\times~10^{14}~M_{\odot}$,
where the second error bar is the systematic uncertainty. Our result confirms
that the cluster SPT-CL J2106-5844 is indeed the most massive cluster at z>1
known to date. We demonstrate the robustness of this mass estimate by
performing a number of tests with different assumptions on the centroids,
mass-concentration relations, and sample variance.
|
A topological computation method, called the MGSTD method, is applied to
time-series data obtained from meteorological measurement. The method gives
decomposition of the dynamics into invariant sets and gradient-like transitions
between them, by dividing the phase space into grids and representing the
time-series as a combinatorial multi-valued map over the grids. Since the
time-series is highly stochastic, the multi-valued map is statistically
determined by taking preferable transitions between the grids into account. The
time-series data are principal components of pressure pattern in troposphere
and stratosphere in the northern hemisphere. The application yields some
particular transitions between invariant sets, which leads to circular motion
on the phase space spanned by the principal components. The Morse sets and the
circular motion are consistent with the characteristic pressure patterns and
the change between them that have been shown in preceding meteorological
studies.
|
It is an interesting and open problem to trace the origin of the pseudospin
symmetry in nuclear single-particle spectra and its symmetry breaking mechanism
in actual nuclei. In this report, we mainly focus on our recent progress on
this topic by combining the similarity renormalization group technique,
supersymmetric quantum mechanics, and perturbation theory. We found that it is
a promising direction to understand the pseudospin symmetry in a quantitative
way.
|
Exocytosis is a common transport mechanism via which cells transport
non-essential macro-molecules (cargo) out into the extracellular space.
ESCRT-III proteins are known to help in this. They polymerize into a conical
spring-like structure and help deform the cell membrane locally into a bud
which wraps the outgoing cargo. We model this process using a continuum energy
functional. It consists of the elastic energies of the membrane and the
semi-rigid ESCRT-III filament, the favorable adhesion energy between the cargo
and the membrane, and the affinity among the ESCRT-III filaments. We take the
free energy minimization route to identify the sequence of composite structures
which form during the process. We show that membrane adhesion of the cargo is
the driving force for this budding process, and not the buckling of ESCRT-III
filaments from a flat spiral to a conical spring shape. However, ESCRT-III
stabilizes the bud once it forms. Further, we conclude that a non-equilibrium
process is needed to pinch off/separate the stable bud (containing the cargo)
from the cell body.
|
We show that the order on probability measures, inherited from the dominance
order on the Young diagrams, is preserved under natural maps reducing the
number of boxes in a diagram by $1$. As a corollary we give a new proof of the
Thoma theorem on the structure of characters of the infinite symmetric group.
We present several conjectures generalizing our result. One of them (if it is
true) would imply Kerov's conjecture on the classification of all
homomorphisms from the algebra of symmetric functions into $\mathbb R$ which
are non-negative on Hall--Littlewood polynomials.
|
Using exact diagonalizations and Green's function Monte Carlo simulations, we
have studied the zero-temperature properties of the quantum dimer model on the
triangular lattice on clusters with up to 588 sites. A detailed comparison of
the properties in different topological sectors as a function of the cluster
size and for different cluster shapes has allowed us to identify different
phases, to show explicitly the presence of topological degeneracy in a phase
close to the Rokhsar-Kivelson point, and to understand finite-size effects
inside this phase. The nature of the various phases has been further
investigated by calculating dimer-dimer correlation functions. The present
results confirm and complement the phase diagram proposed by Moessner and
Sondhi on the basis of finite-temperature simulations [Phys. Rev. Lett. {\bf
86}, 1881 (2001)].
|
3D human pose estimation from monocular images is a highly ill-posed problem
due to depth ambiguities and occlusions. Nonetheless, most existing works
ignore these ambiguities and only estimate a single solution. In contrast, we
generate a diverse set of hypotheses that represents the full posterior
distribution of feasible 3D poses. To this end, we propose a normalizing flow
based method that exploits the deterministic 3D-to-2D mapping to solve the
ambiguous inverse 2D-to-3D problem. Additionally, uncertain detections and
occlusions are effectively modeled by incorporating uncertainty information of
the 2D detector as a condition. Further keys to success are a learned 3D pose
prior and a generalization of the best-of-M loss. We evaluate our approach on
the two benchmark datasets Human3.6M and MPI-INF-3DHP, outperforming all
comparable methods in most metrics. The implementation is available on GitHub.
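For intuition, a small, hypothetical PyTorch sketch of a conditional normalizing flow built from affine coupling layers that maps Gaussian latents to multiple 3D-pose hypotheses given a 2D-detection conditioning vector; the layer count, dimensions, conditioning layout, and the sampling-only interface are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: conditional affine-coupling flow producing diverse 3D pose hypotheses.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))   # predicts scale and shift

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                                # keep scales well-behaved
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=1), s.sum(dim=1)  # output and log-det term

class PoseFlow(nn.Module):
    def __init__(self, pose_dim=17 * 3, cond_dim=17 * 3, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [ConditionalCoupling(pose_dim, cond_dim) for _ in range(n_layers)])
        self.pose_dim = pose_dim

    def sample(self, cond, n_hypotheses=10):
        """Draw diverse 3D pose hypotheses for one conditioning input."""
        z = torch.randn(n_hypotheses, self.pose_dim)
        cond = cond.expand(n_hypotheses, -1)
        for layer in self.layers:
            z, _ = layer(z, cond)
            z = z.flip(dims=[1])                         # mix the two halves between layers
        return z.view(n_hypotheses, -1, 3)

flow = PoseFlow()
cond = torch.randn(1, 17 * 3)   # e.g., 2D keypoints plus per-joint uncertainty (hypothetical)
hypotheses = flow.sample(cond)  # (10, 17, 3) candidate 3D poses
```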
|
In this report, we will show a detector which can be used to search for
proton decay in the lifetime region beyond 10$^{35}$ years.
We will briefly review the current experimental status and discuss the
sensitivity of the future proton decay detectors, and we specifically present a
possibility of a scalable multi-megaton water Cherenkov detector immersed in
the shallow water.
|
In this note we give a new sufficient condition for the boundedness of the
composition operator on the Dirichlet-type space on the disc, via a two
dimensional change of variables formula. With the same formula, we characterise
the bounded composition operators on the anisotropic Dirichlet-type spaces
$\mathfrak{D}_{\vec{a}}(\mathbb{D}^2)$ induced by holomorphic self maps of the
bi-disc $\mathbb{D}^2$ of the form $\Phi(z_1,z_2)=(\phi_1(z_1),\phi_2(z_2))$.
We also consider the problem of boundedness of composition operators
$C_{\Phi}:A^2(\mathbb{D}^2) \to \mathfrak{D}(\mathbb{D}^2)$ for general self
maps of the bi-disc, applying some recent results about Carleson measures on
the Dirichlet space of the bi-disc.
|
This paper surveys the representation theory of rational Cherednik algebras.
We also discuss the representations of the spherical subalgebras. We describe
in particular the results on category O. For type A, we explain relations with
the Hilbert scheme of points on C^2. We emphasize the analogy with the
representation theory of complex semi-simple Lie algebras.
|
We give an explicit raising operator formula for the modified Macdonald
polynomials $\tilde{H}_{\mu }(X;q,t)$, which follows from our recent formula
for $\nabla$ on an LLT polynomial and the Haglund-Haiman-Loehr formula
expressing modified Macdonald polynomials as sums of LLT polynomials. Our
method just as easily yields a formula for a family of symmetric functions
$\tilde{H}^{1,n}(X;q,t)$ that we call $1,n$-Macdonald polynomials, which reduce
to a scalar multiple of $\tilde{H}_{\mu}(X;q,t)$ when $n=1$. We conjecture that
the coefficients of $1,n$-Macdonald polynomials in terms of Schur functions
belong to $\mathbb{N}[q,t]$, generalizing Macdonald positivity.
|
The idea of Universal Grammar (UG) as the hypothetical linguistic structure
shared by all human languages harkens back at least to the 13th century. The
best known modern elaborations of the idea are due to Chomsky. Following a
devastating critique from theoretical, typological and field linguistics, these
elaborations, the idea of UG itself, and the more general idea of language
universals are untenable and largely abandoned. The present proposal tackles the
hypothetical contents of UG using dependent and polymorphic type theory in a
framework very different from the Chomskyan ones. We introduce a type logic for
a precise, universal and parsimonious representation of natural language
morphosyntax and compositional semantics. The logic handles grammatical
ambiguity (with polymorphic types), selectional restrictions and diverse kinds
of anaphora (with dependent types), and features a partly universal set of
morphosyntactic types (by the Curry-Howard isomorphism).
|
This paper reports on the state of the art in the application of
multidimensional scaling (MDS) techniques to create semantic maps in linguistic
research. MDS
refers to a statistical technique that represents objects (lexical items,
linguistic contexts, languages, etc.) as points in a space so that close
similarity between the objects corresponds to close distances between the
corresponding points in the representation. We focus on the use of MDS in
combination with parallel corpus data as used in research on cross-linguistic
variation.
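As an illustration of the technique just described, the following minimal sketch maps four lexical items to a two-dimensional semantic map with non-metric MDS. The item names and the dissimilarity values are invented for illustration and are not taken from any study reviewed here.

```python
# Minimal MDS sketch: four lexical items and a hypothetical dissimilarity
# matrix (e.g. 1 - share of common translation contexts); values are invented.
import numpy as np
from sklearn.manifold import MDS

items = ["must", "should", "may", "can"]
dissim = np.array([
    [0.0, 0.3, 0.8, 0.9],
    [0.3, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.2],
    [0.9, 0.8, 0.2, 0.0],
])

# Non-metric (ordinal) MDS on a precomputed dissimilarity matrix,
# the variant most commonly used for semantic maps.
mds = MDS(n_components=2, dissimilarity="precomputed", metric=False,
          random_state=0)
coords = mds.fit_transform(dissim)

for item, (x, y) in zip(items, coords):
    print(f"{item:>7s}: ({x:+.2f}, {y:+.2f})")
```

Items with small dissimilarities (here "may" and "can") end up close together on the map, which is exactly the property exploited when such configurations are read as semantic maps.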
We first introduce the mathematical foundations of MDS and then give an
exhaustive overview of past research that employs MDS techniques in combination
with parallel corpus data. We propose a set of terminology to succinctly
describe the key parameters of a particular MDS application. We then show that
this computational methodology is theory-neutral, i.e. it can be employed to
answer research questions in a variety of linguistic theoretical frameworks.
Finally, we show how this leads to two lines of future developments for MDS
research in linguistics.
|
We consider the time evolution of simple quantum systems under the influence
of random fluctuations of the control parameters. We show that when the
parameters fluctuate sufficiently fast, there is a cancellation effect of the
noise. We propose that such an effect could be experimentally observed by
performing a simple experiment with trapped ions. As a byproduct of our
analysis, we provide an explanation of the robustness against random
perturbations of adiabatic population transfer techniques in atom optics.
|
We give a new proof of a result of Lazarev, that the dual of the circle
$S^1_+$ in the category of spectra is equivalent to a strictly square-zero
extension as an associative ring spectrum. As an application, we calculate the
topological cyclic homology of $DS^1$ and rule out a Koszul-dual reformulation
of the Novikov conjecture.
|
Motivated by the need to control the exponential growth of constraint
violations in numerical solutions of the Einstein evolution equations, two
methods are studied here for controlling this growth in general hyperbolic
evolution systems. The first method adjusts the evolution equations
dynamically, by adding multiples of the constraints, in a way designed to
minimize this growth. The second method imposes special constraint preserving
boundary conditions on the incoming components of the dynamical fields. The
efficacy of these methods is tested by using them to control the growth of
constraints in fully dynamical 3D numerical solutions of a particular
representation of the Maxwell equations that is subject to constraint
violations. The constraint preserving boundary conditions are found to be much
more effective than active constraint control in the case of this Maxwell
system.
|
Some time ago, Atiyah showed that there exists a natural identification
between the k-instantons of a Yang-Mills theory with gauge group $G$ and the
holomorphic maps from $CP_1$ to $\Omega G$. Since then, Nair and Mazur have
associated the $\Theta$ vacua structure in QCD with self-intersecting Riemann
surfaces immersed in four dimensions. From this they concluded that these 2D
surfaces correspond to the non-perturbative phase of QCD and carry the
topological information of the $\Theta$ vacua. In this paper we would like to
elaborate on this point by making use of Atiyah's identification. We will argue
that an effective description of QCD may be more like a $WZW$ model coupled to
the induced metric of an immersion of a 2-D Riemann surface in $R^4$. We make
some further comments on the relationship between the coadjoint orbits of the
Kac-Moody group on $G$ and instantons with axial symmetry and monopole charge.
|
We study the statistical properties of an ensemble of weak gravitational
waves interacting nonlinearly in a flat space-time. We show that the resonant
three-wave interactions are absent and develop a theory for four-wave
interactions in a reduced case of a diagonal metric tensor. In this limit,
where only one type of gravitational wave is present, we derive the
interaction Hamiltonian and consider the asymptotic regime of weak
gravitational wave turbulence. Both direct and inverse cascades are found for
the energy and the wave action respectively, and the corresponding wave spectra
are derived. The inverse cascade is characterized by a finite-time propagation
of the metric excitations - a process similar to an explosive non-equilibrium
Bose-Einstein condensation, which provides an efficient mechanism for ironing
out small-scale inhomogeneities. The direct cascade leads to an accumulation of
the radiation energy in the system. These processes might be important for
understanding the early Universe where a background of weak nonlinear
gravitational waves is expected.
|
Arabic is a complex language; it differs from Western languages especially in
its morphological and spelling variations. Indeed, the performance of
information retrieval systems for Arabic remains a problem. For this reason, we
study the performance of a widely used search engine, Google Desktop, when
searching Arabic-language documents. We then propose an update to Google
Desktop so that the search takes into account Arabic words that share the same
root, and we evaluate its performance in this context. We are also interested
in evaluating the performance of a peer-to-peer application in two ways. The
first uses a simple indexing scheme that indexes Arabic documents without
taking the roots of words into consideration; the second takes the roots into
consideration when indexing Arabic documents. The evaluation uses a corpus of
ten thousand documents and one hundred different queries.
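A rough sketch of the two indexing strategies compared above is given below; the root() lookup is only a placeholder for a real Arabic root extractor or light stemmer, and the three words are chosen purely for illustration.

```python
# Surface-form indexing vs. root-based indexing (toy sketch).
from collections import defaultdict

def root(word: str) -> str:
    # Placeholder for a real Arabic root extractor: maps a few surface forms
    # (book, writer, library) to their shared root k-t-b.
    lookup = {"كتاب": "كتب", "كاتب": "كتب", "مكتبة": "كتب"}
    return lookup.get(word, word)

def build_index(docs, use_roots: bool):
    index = defaultdict(set)
    for doc_id, words in docs.items():
        for w in words:
            key = root(w) if use_roots else w
            index[key].add(doc_id)
    return index

docs = {1: ["كتاب"], 2: ["كاتب"], 3: ["مكتبة"]}
surface_index = build_index(docs, use_roots=False)
root_index = build_index(docs, use_roots=True)

# Querying "كتاب": the surface index returns only document 1,
# while the root-based index returns documents 1, 2 and 3.
print(sorted(surface_index["كتاب"]))        # [1]
print(sorted(root_index[root("كتاب")]))     # [1, 2, 3]
```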
|
The general concept of the Fano resonance is considered in order to show that
this resonance can also occur in space. Using a recently found solution for a
Bessel wave beam impinging on a dielectric sphere, we analyze the
electromagnetic fields near a microsphere for different optical sizes and
permittivity values. We theoretically reveal a spatial Fano resonance that
arises when a resonant mode of the sphere interferes with a number of
non-resonant modes. This resonance results in a giant jump of the electric
field behind the sphere impinged by the first-order Bessel beam. The local
minimum of the electromagnetic field turns out to be noticeably distanced from
the rear edge of the microsphere; we prove that this is nevertheless a
near-field effect. We also show that this effect can be utilized for
engineering a submicron optical trap with unusual and useful properties.
|
In order to understand the magnetic behavior observed in CmO$_2$ despite its
non-magnetic ground state, we numerically evaluate the magnetic susceptibility
on the basis of a seven-orbital Anderson model with spin-orbit coupling.
Naively we do not expect magnetic behavior in CmO$_2$, since Cm is considered
to be a tetravalent ion with six $5f$ electrons and the ground state is
characterized by $J$=0, where $J$ is the total angular momentum. However, there
exists a magnetic excited state, and its excitation energy is smaller than the
value predicted by the Land\'e interval rule owing to the effect of the
crystalline electric field potential. This opens a way to explain the magnetic
behavior in CmO$_2$.
|
In this paper it is shown that, in the case of trace-class perturbations, the
singular part of the Pushnitski $\mu$-invariant does not depend on the angle
variable. This gives an alternative proof of the integer-valuedness of the
singular part of the spectral shift function. As a consequence, the
Birman-Krein formula for trace-class perturbations follows.
|
Coronal Mass Ejections (CMEs) are key drivers of space weather activity but
most predictions have been limited to the expected arrival time of a CME,
rather than the internal properties that affect the severity of an impact. Many
properties, such as the magnetic field density and mass density, follow
conservation laws and vary systematically with changes in the size of a CME. We
present ANTEATR-PARADE, the newest version of the ANTEATR arrival time model,
which now includes physics-driven changes in the size and shape of both the
CME's central axis and its cross section. Internal magnetic and thermal forces
and external drag affect the acceleration of the CME in different
directions, inducing asymmetries between the radial and perpendicular
directions. These improvements should lead to more realistic CME velocities,
both bulk and expansion, sizes and shapes, and internal properties. We present
the model details, an initial illustration of the general behavior, and a study
of the relative importance of the different forces. The model shows a pancaking
of both the cross section and central axis of the CME so that their radial
extent becomes smaller than their extent in the perpendicular direction. We
find that the initial velocities, drag, any form of cross section expansion,
and the precise form of thermal expansion have strong effects. The results are
less sensitive to axial forces and the specific form of the cross section
expansion.
|
Flatness -- the absence of spacetime curvature -- is a well-understood
property of macroscopic, classical spacetimes in general relativity. The same
cannot be said about the concepts of curvature and flatness in nonperturbative
quantum gravity, where the microscopic structure of spacetime is not
describable in terms of small fluctuations around a fixed background geometry.
An interesting case is that of two-dimensional models of quantum gravity, which
lack a classical limit and are therefore maximally "quantum". We investigate the
recently introduced quantum Ricci curvature in CDT quantum gravity on a
two-dimensional torus, whose quantum geometry could be expected to behave like
a flat space on suitably coarse-grained scales. On the basis of the Monte Carlo
simulations we have performed, with system sizes of up to 600,000 building
blocks, this does not seem to be the case. Instead, we find a scale-independent
"quantum flatness", without an obvious classical analogue. As part of our
study, we develop a criterion that allows us to distinguish between local and
global, topological properties of the toroidal quantum system.
|
In this paper we show that the entropy of a cosmological horizon in
topological Reissner-Nordstr\"om-de Sitter and Kerr-Newman-de Sitter spaces
can be described by the Cardy-Verlinde formula, which is supposed to be an
entropy formula of conformal field theory in any number of dimensions.
Furthermore, we find that the entropy of a black hole horizon can also be
rewritten in terms of the Cardy-Verlinde formula for these black holes in de
Sitter spaces, if we use the definition due to Abbott and Deser for conserved
charges in asymptotically de Sitter spaces. Such results presume a well-defined
dS/CFT correspondence, which has not yet attained the credibility of its AdS
analogue.
|
The 1-2-3 Conjecture asks whether almost all graphs can be (edge-)labelled
with $1,2,3$ so that no two adjacent vertices are incident to the same sum of
labels. In recent decades, several aspects of this problem have been studied
in the literature, including more general versions and slight variations. Notable
such variations include the List 1-2-3 Conjecture variant, in which edges must
be assigned labels from dedicated lists of three labels, and the Multiplicative
1-2-3 Conjecture variant, in which labels~$1,2,3$ must be assigned to the edges
so that adjacent vertices are incident to different products of labels. Several
results obtained towards these two variants led to observe some behaviours that
are distant from those of the original conjecture.
In this work, we consider the list version of the Multiplicative 1-2-3
Conjecture, proposing the first study dedicated to this very problem. In
particular, given any graph $G$, we ask for the minimum~$k$ such that $G$
can be labelled as desired when its edges must be assigned labels from
dedicated lists of size~$k$. Exploiting a relationship between our problem and
the List 1-2-3 Conjecture, we provide upper bounds on~$k$ when $G$ belongs to
particular classes of graphs. We further improve some of these bounds through
dedicated arguments.
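For illustration, the following brute-force sketch checks the basic (non-list) multiplicative condition on a small graph: it enumerates labellings of the edges with labels from {1,2,3} and keeps those for which adjacent vertices receive different products of incident labels. The list version studied here asks the analogous question when each edge must draw its label from its own list of size~$k$.

```python
# Brute-force check of the multiplicative 1-2-3 condition on a small graph.
from itertools import product as cartesian

def is_product_proper(edges, labelling):
    # Product of incident edge labels at each vertex.
    prod = {}
    for (u, v), lab in zip(edges, labelling):
        prod[u] = prod.get(u, 1) * lab
        prod[v] = prod.get(v, 1) * lab
    # Proper iff adjacent vertices get different products.
    return all(prod[u] != prod[v] for (u, v) in edges)

# Example: the 4-cycle C4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
good = [lab for lab in cartesian((1, 2, 3), repeat=len(edges))
        if is_product_proper(edges, lab)]
print(len(good), good[0] if good else None)
```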
|
Generalized class discovery (GCD) aims to infer known and unknown categories
in an unlabeled dataset leveraging prior knowledge of a labeled set comprising
known classes. Existing research implicitly/explicitly assumes that the
frequency of occurrence for each category, whether known or unknown, is
approximately the same in the unlabeled data. However, in nature, we are more
likely to encounter known/common classes than unknown/uncommon ones, according
to the long-tailed property of visual classes. Therefore, we present a
challenging and practical problem, Imbalanced Generalized Category Discovery
(ImbaGCD), where the distribution of unlabeled data is imbalanced, with known
classes being more frequent than unknown ones. To address this problem, we
propose ImbaGCD, a novel optimal transport-based expectation-maximization
framework that accomplishes generalized category discovery by aligning the
marginal class prior distribution. ImbaGCD also incorporates a systematic
mechanism for estimating the imbalanced class prior distribution under the GCD
setup. Our comprehensive experiments reveal that ImbaGCD surpasses previous
state-of-the-art GCD methods by achieving an improvement of approximately 2 -
4% on CIFAR-100 and 15 - 19% on ImageNet-100, indicating its superior
effectiveness in solving the Imbalanced GCD problem.
|
We examine the validity of the hydrostatic equilibrium (HSE) assumption for
galaxy clusters using one of the highest-resolution cosmological hydrodynamical
simulations. We define and evaluate several effective mass terms corresponding
to the Euler equations of the gas dynamics, and quantify the degree of the
validity of HSE in terms of the mass estimate. We find that the mass estimated
under the HSE assumption (the HSE mass) deviates from the true mass by up to ~
30 %. This level of departure from HSE is consistent with the previous claims,
but our physical interpretation is rather different. We demonstrate that the
inertial term in the Euler equations makes a negligible contribution to the
total mass, and the overall gravity of the cluster is balanced by the thermal
gas pressure gradient and the gas acceleration term. Indeed, the deviation of
the HSE mass from the true mass is well explained by the acceleration term at
almost all radii. We
also clarify the confusion of previous work due to the inappropriate
application of the Jeans equations in considering the validity of HSE from the
gas dynamics extracted from cosmological hydrodynamical simulations.
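For reference, a standard form of the hydrostatic mass estimator whose bias is being quantified here is (written in common notation as a sketch, not quoted from the paper) $M_{\rm HSE}(<r) = -\frac{r^{2}}{G\,\rho_{\rm gas}(r)}\frac{dP(r)}{dr} = -\frac{k_{\rm B}T(r)\,r}{G\,\mu m_{\rm p}}\left(\frac{d\ln n_{\rm gas}}{d\ln r}+\frac{d\ln T}{d\ln r}\right)$; any unbalanced gas acceleration then shows up directly as a difference between $M_{\rm HSE}$ and the true mass.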
|
Ultracold atomic Fermi gases present an opportunity to study strongly
interacting Fermi systems in a controlled and uncomplicated setting. The
ability to tune attractive interactions has led to the discovery of
superfluidity in these systems with an extremely high transition temperature,
near T/T_F = 0.2. This superfluidity is the electrically neutral analog of
superconductivity; however, superfluidity in atomic Fermi gases occurs in the
limit of strong interactions and defies a conventional BCS description. For
these strong interactions, it is predicted that the onset of pairing and
superfluidity can occur at different temperatures. This gives rise to a
pseudogap region where, for a range of temperatures, the system retains some of
the characteristics of the superfluid phase, such as a BCS-like dispersion and
a partially gapped density of states, but does not exhibit superfluidity. By
making two independent measurements: the direct observation of pair
condensation in momentum space and a measurement of the single-particle
spectral function using an analog to photoemission spectroscopy, we directly
probe the pseudogap phase. Our measurements reveal a BCS-like dispersion with
back-bending near the Fermi wave vector k_F that persists well above the
transition temperature for pair condensation.
|
Second-order optimizers, maintaining a matrix termed a preconditioner, are
superior to first-order optimizers in both theory and practice. The states
forming the preconditioner and its inverse root restrict the maximum size of
models trained by second-order optimizers. To address this, compressing 32-bit
optimizer states to lower bitwidths has shown promise in reducing memory usage.
However, current approaches only pertain to first-order optimizers. In this
paper, we propose the first 4-bit second-order optimizers, exemplified by 4-bit
Shampoo, maintaining performance similar to that of 32-bit ones. We show that
quantizing the eigenvector matrix of the preconditioner in 4-bit Shampoo is
remarkably better than quantizing the preconditioner itself both theoretically
and experimentally. By rectifying the orthogonality of the quantized
eigenvector matrix, we enhance the approximation of the preconditioner's
eigenvector matrix, which also benefits the computation of its inverse fourth
root. In addition, we find that linear square quantization slightly outperforms
dynamic tree quantization when quantizing second-order optimizer states.
Evaluation on various networks for image classification demonstrates that our
4-bit Shampoo achieves comparable test accuracy to its 32-bit counterpart while
being more memory-efficient. The source code will be made available.
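The following sketch illustrates block-wise 4-bit quantization of an optimizer state matrix. The code book below (squares of equally spaced points, applied per sign) is only one plausible reading of a "linear square" mapping and is not claimed to be the scheme used in the paper; bit packing is also omitted for clarity.

```python
# Block-wise 4-bit quantization sketch: 3-bit magnitude index plus a separate
# sign bit, with a per-block absmax scale (illustrative, not the paper's code).
import numpy as np

LEVELS = np.array([(i / 7.0) ** 2 for i in range(8)])  # "linear square" levels

def quantize_4bit(x, block=64):
    flat = x.ravel()
    pad = (-len(flat)) % block
    flat = np.concatenate([flat, np.zeros(pad)])
    blocks = flat.reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) + 1e-12
    normed = blocks / scales                              # values in [-1, 1]
    signs = np.sign(normed).astype(np.int8)
    idx = np.abs(np.abs(normed)[..., None] - LEVELS).argmin(axis=-1).astype(np.uint8)
    return idx, signs, scales, x.shape, pad

def dequantize_4bit(idx, signs, scales, shape, pad):
    blocks = signs * LEVELS[idx] * scales
    flat = blocks.ravel()
    flat = flat[:flat.size - pad] if pad else flat
    return flat.reshape(shape)

rng = np.random.default_rng(0)
P = rng.standard_normal((128, 128))      # e.g. an eigenvector-matrix factor
Q = dequantize_4bit(*quantize_4bit(P))
print("relative error:", np.linalg.norm(P - Q) / np.linalg.norm(P))
```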
|
In this paper we classify the solutions to the geometric Neumann problem for
the Liouville equation in the upper half-plane or an upper half-disk, with the
energy condition given by finite area. As a result, we classify the conformal
Riemannian metrics of constant curvature and finite area on a half-plane that
have a finite number of boundary singularities, not assumed a priori to be
conical, and constant geodesic curvature along each boundary arc.
|
The symmetry algebra of massless fields living on the 3-dimensional conformal
boundary of AdS(4) is shown to be isomorphic to 3d conformal higher spin
algebra (AdS(4) higher spin algebra). A simple realization of this algebra on
the free flat 3d massless matter fields is given in terms of an auxiliary Fock
module.
|
A stability analysis of a spherically symmetric star in scalar-tensor
theories of gravity is given in terms of the frequencies of quasi-normal modes.
The scalar-tensor theories contain a scalar field that takes part in the gravitational interaction.
There is an arbitrary function, the so-called coupling function, which
determines the strength of the coupling between the gravitational scalar field
and matter. Instability is induced by the scalar field for some ranges of the
value of the first derivative of the coupling function. This instability leads
to significant discrepancies with the results of binary-pulsar-timing
experiments and hence, by the stability analysis, we can exclude the ranges of
the first derivative of the coupling function in which the instability sets in.
In this article, the constraint on the first derivative of the coupling
function from the stability of relativistic stars is found. Analysis in terms
of the quasi-normal mode frequencies accounts for the parameter dependence of
the wave form of the scalar gravitational waves emitted from the
Oppenheimer-Snyder collapse. The spontaneous scalarization is also discussed.
|
We investigate the large-N critical behavior of 2-d lattice chiral models by
Monte Carlo simulations of U(N) and SU(N) groups at large N. Numerical results
confirm strong coupling analyses, i.e. the existence of a large-N second order
phase transition at a finite $\beta_c$.
|
We compute the Casimir energy for a system consisting of a fermion and a
pseudoscalar field in the form of a prescribed kink. This model is not exactly
solvable and we use the phase shift method to compute the Casimir energy. We
use the relaxation method to find the bound states and the Runge-Kutta-Fehlberg
method to obtain the scattering wavefunctions of the fermion in the whole
interval of $x$. The resulting phase shifts are consistent with the weak and
strong forms of the Levinson theorem. Then, we compute and plot the Casimir
energy as a function of the parameters of the pseudoscalar field, i.e. the
slope of $\phi(x)$ at $x=0$ ($\mu$) and the value of $\phi(x)$ at infinity
($\theta_0$). In the graph of the Casimir energy as a function of $\mu$ there
is a sharp maximum occurring when the fermion bound state energy crosses the
line $E=0$. Furthermore, this graph shows that the Casimir energy goes to zero
for $\mu\rightarrow 0$, and also for $\mu\rightarrow \infty$ when $\theta_0$ is
an integer multiple of $\pi$. Moreover, the graph of the Casimir energy as a
function of $\theta_0$ shows that this energy is on the average an increasing
function of $\theta_0$ and has a cusp whenever there is a zero fermionic mode.
We finally compute the total energy of a system consisting of a valence fermion
in the ground state. Most importantly, we show that this energy (the sum of the
Casimir energy and the energy of the fermion) is minimum when the background
field has winding number one, independent of the details of the background
profile. Throughout the paper we compare our results with those of a simple
exactly solvable model, where a piece-wise linear profile approximates the
kink. We find that the kink is an almost reflectionless barrier for the
fermions, within the context of our model.
|
It is expected on general grounds that the moduli space of 4d $\mathcal{N}=3$
theories is of the form $\mathbb{C}^{3r}/\Gamma$, with $r$ the rank and
$\Gamma$ a crystallographic complex reflection group (CCRG). As in the case of
Lie algebras, the space of CCRGs consists of several infinite families,
together with some exceptionals. To date, no 4d $\mathcal{N}=3$ theory with
moduli space labelled by an exceptional CCRG (excluding Weyl groups) has been
identified. In this work we show that the 4d $\mathcal{N}=3$ theories proposed
in \cite{Garcia-Etxebarria:2016erx}, constructed via non-geometric quotients of
type-$\mathfrak{e}$ 6d (2,0) theories, realize nearly all such exceptional
moduli spaces. In addition, we introduce an extension of this construction to
allow for twists and quotients by outer automorphism symmetries. This gives new
examples of 4d $\mathcal{N}=3$ theories going beyond simple S-folds.
|
A standard result by Smale states that n-dimensional strongly cooperative
dynamical systems can have arbitrary dynamics when restricted to unordered
invariant hyperspaces. In this paper this result is extended to the case when
all solutions of the strongly cooperative system are bounded and converge
towards one of only two equilibria outside of the hyperplane.
An application is given in the context of strongly cooperative systems of
reaction diffusion equations. It is shown that such a system can have a
continuum of spatially inhomogeneous steady states, even when all solutions of
the underlying reaction system converge to one of only three equilibria.
|
With the success of the graph embedding model in both academic and industry
areas, the robustness of graph embedding against adversarial attack inevitably
becomes a crucial problem in graph learning. Existing works usually perform the
attack in a white-box fashion: they need to access the predictions/labels to
construct their adversarial loss. However, the inaccessibility of
predictions/labels makes the white-box attack impractical to a real graph
learning system. This paper extends current frameworks to a more general and
flexible setting: we aim to attack various kinds of graph embedding models
in a black-box manner. We investigate the theoretical connections between graph
signal processing and graph embedding models and formulate the graph embedding
model as a general graph signal process with a corresponding graph filter.
Therefore, we design a generalized adversarial attacker: GF-Attack. Without
accessing any labels and model predictions, GF-Attack can perform the attack
directly on the graph filter in a black-box fashion. We further prove that
GF-Attack can perform an effective attack without knowing the number of layers
of graph embedding models. To validate the generalization of GF-Attack, we
construct the attacker on four popular graph embedding models. Extensive
experiments validate the effectiveness of GF-Attack on several benchmark
datasets.
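A toy black-box sketch in the spirit of this approach is given below: the embedding model is abstracted as the low-pass graph filter $S^K X$ (with $S$ the symmetrically normalized adjacency with self-loops), and candidate edge flips are scored purely by how much they perturb the filtered features, without any labels or model predictions. This is only an illustration of the general idea, not the exact GF-Attack objective.

```python
# Label-free, filter-based edge-flip attack sketch (greedy, toy scale).
import numpy as np

def sym_norm_adj(A):
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def filter_output(A, X, K=2):
    S = sym_norm_adj(A + np.eye(len(A)))      # normalized adjacency + self loops
    out = X.copy()
    for _ in range(K):
        out = S @ out
    return out

def greedy_edge_flip(A, X, candidates, budget=1, K=2):
    A = A.copy()
    base = filter_output(A, X, K)
    for _ in range(budget):
        best, best_score = None, -1.0
        for (i, j) in candidates:
            B = A.copy()
            B[i, j] = B[j, i] = 1.0 - B[i, j]  # flip the candidate edge
            score = np.linalg.norm(filter_output(B, X, K) - base)
            if score > best_score:
                best, best_score = (i, j), score
        i, j = best
        A[i, j] = A[j, i] = 1.0 - A[i, j]
    return A

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.standard_normal((n, 8))
candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
A_attacked = greedy_edge_flip(A, X, candidates, budget=2)
print("edges changed:", int(np.abs(A_attacked - A).sum() // 2))
```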
|
BPS invariants are computed, capturing topological invariants of moduli
spaces of semi-stable sheaves on rational surfaces. For a suitable stability
condition, it is proposed that the generating function of BPS invariants of a
Hirzebruch surface takes the form of a product formula. BPS invariants for
other stability conditions and other rational surfaces are obtained using
Harder-Narasimhan filtrations and the blow-up formula. Explicit expressions are
given for sheaves of rank <4 on a Hirzebruch surface or the projective plane.
These techniques can be applied iteratively to compute invariants for higher
rank.
|
We show within the framework of relativistic quantum tasks that the doability
of any task is fully determined by a small subset of its parameters that we
call its "coarse causal structure", as well as the distributed computation it
aims to accomplish. We do this by making rigorous the notion of a protocol
using a structure known as a spacetime circuit, which describes how a
computation is performed across a region of spacetime. Using spacetime circuits
we show that any protocol that can accomplish a given task can, without
changing its doability, undergo significant geometric modifications such as
changing the background spacetime and moving the location of input and output
points, so long as the coarse causal structure of the task is maintained.
Besides giving a powerful tool for determining the doability of a task, our
results strengthen the no-go theorem for position based quantum cryptography to
include arbitrary sending and receiving of signals by verifier agents outside
the authentication region. They also serve as a consistency check for the
holographic principle by showing that discrepancies between bulk and boundary
causal structure cannot cause a task to be doable in one but not the other.
|
Since the advent of pairwise non-diffeomorphic smooth structures on manifolds,
it has been asked whether two topologically identical manifolds could admit
different geometries. Not surprisingly, physicists have wondered whether a
smooth-structure assumption different from some classical known models could
produce different physical meanings. In this paper, we introduce a highly
computational way to produce physical models on classical and exotic
spheres that can be built equivariantly, such as the classical Gromoll--Meyer
exotic spheres. As first applications, we produce Lorentzian metrics on
homeomorphic but not diffeomorphic manifolds that enjoy the same physical
properties, such as geodesic completeness, positive Ricci curvature, and
compatible time orientation. These constructions can be pulled back to higher
models, such as exotic ten spheres bounding spin manifolds, to be approached in
forthcoming papers.
|
The Kibble-Zurek mechanism (KZM) describes the non-equilibrium dynamics and
topological defect formation in systems undergoing second-order phase
transitions. KZM has found applications in fields such as cosmology and
condensed matter physics. However, it is generally not suitable for describing
first-order phase transitions. It has been demonstrated that transitions in
systems like superconductors or charged superfluids, typically classified as
second-order, can exhibit weakly first-order characteristics when the influence
of fluctuations is taken into account. Moreover, the order of the phase
transition (i.e., the extent to which it becomes first rather than second
order) can be tuned. We explore quench-induced formation of topological defects
in such tunable phase transitions and propose that their density can be
predicted by combining KZM with nucleation theory.
|
Hermite processes are self-similar processes with stationary increments
which appear as limits of normalized sums of random variables with long range
dependence. The Hermite process of order $1$ is fractional Brownian motion and
the Hermite process of order $2$ is the Rosenblatt process. We consider here
the sum of two Hermite processes of order $q\geq 1$ and $q+1$ and of different
Hurst parameters. We then study its quadratic variations at different scales.
This is akin to a wavelet decomposition. We study both the cases where the
Hermite processes are dependent and where they are independent. In the
dependent case, we show that the quadratic variation, suitably normalized,
converges either to a normal or to a Rosenblatt distribution, whatever the
order of the original Hermite processes.
|
The Telegraph equation $(\partial_{t}^{\rho })^{2}u(x,t)+2\alpha
\partial_{t}^{\rho }u(x,t)-u_{xx}(x,t)=f(x,t)$, where $0<t\leq T$ and
$0<\rho<1$, with the Riemann-Liouville derivative is considered. An existence
and uniqueness theorem for the solution of the problem under consideration is
proved, and stability inequalities are obtained. The method applied allows us
to study a similar problem with $d^2/dx^2$ replaced by an arbitrary elliptic
differential operator $A(x, D)$ having a compact inverse.
|
We consider the horizon problem in a homogeneous but anisotropic universe
(Bianchi type I). We show that the problem cannot be solved if (1) the matter
obeys the strong energy condition with the positive energy density and (2) the
Einstein equations hold. The strong energy condition is violated during
cosmological inflation.
|
Toeplitz matrices for the study of the fractional Laplacian on a bounded
interval. In this work we establish a deep link between
$(-\Delta)^{\alpha}_{]0,1[}$, the fractional Laplacian on the interval $]0,1[$,
and $T_N(\Phi_{\alpha})$, the Toeplitz matrices with symbol
$\Phi_{\alpha}:\theta \mapsto |1-e^{i\theta}|^{2\alpha}$, as $N$ goes to
infinity and for $\alpha \in \,]0,\frac{1}{2}[\,\cup\,]\frac{1}{2},1[$. In the
second part of the paper we provide a Green function for the fractional
equation $(-\Delta)^{\alpha}_{]0,1[}(\psi)=f$ for $\alpha \in\,]0,\frac{1}{2}[$
and $f$ a sufficiently smooth function on $[0,1]$. The interest is that this
Green function is of the same kind as that of the Laplacian operator of order
$2n$, $n \in \mathbb{N}$. Mathematics Subject Classification (2000): Primary
35S05, 35S10, 35S11; Secondary 47G30.
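For concreteness, the finite sections $T_N(\Phi_\alpha)$ can be assembled numerically from the Fourier coefficients of the symbol; the following sketch does exactly this (the parameter values are illustrative, and the paper's analysis is asymptotic in $N$).

```python
# Build the N x N Toeplitz matrix with symbol |1 - e^{i theta}|^{2 alpha}.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_symbol(N, alpha, n_grid=2 ** 14):
    theta = 2 * np.pi * np.arange(n_grid) / n_grid
    phi = np.abs(1.0 - np.exp(1j * theta)) ** (2 * alpha)
    # k-th Fourier coefficient c_k ~ (1/2pi) * integral of phi(theta) e^{-ik theta}.
    c = np.fft.fft(phi) / n_grid
    col = np.real(c[:N])       # phi is real and even, so c_{-k} = c_k
    return toeplitz(col)       # symmetric Toeplitz matrix from first column

T = toeplitz_symbol(N=6, alpha=0.3)
print(np.round(T, 4))
```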
|
Shift current and ballistic current have been proposed to explain the bulk
photovoltaic effect (BPVE), and there have been experiments designed to
separate the two mechanisms. These experiments are based on the assumption that
under a magnetic field, the ballistic current can exhibit a Hall effect while
the shift current cannot, an assumption based on energy-scale arguments that
has never been proven. A recent work [Phys. Rev. B 103, 195203 (2021)] using a
quantum transport formalism reaches the conclusion that the shift current does
have a Hall component, seemingly contradicting the previous assumption and
making the situation more
confusing. Moreover, the behavior of BPVE under strong magnetic field is still
unexplored. In this Letter, using a minimal 2D tight-binding model, we carry
out a systematic numerical study of the BPVE under weak and strong magnetic
field by treating the field in a non-perturbative way. Our model clearly shows
the appearance of the magnetically-induced ballistic current along the
transverse direction, which agrees with the previous predictions, and
interestingly a sizable longitudinal response of the shift current is also
observed, a phenomenon that is not captured by any existing theories where the
magnetic field is treated perturbatively. More surprisingly, a drastically
different shift current is found in the strong-field regime, and the evolution
from weak to strong field resembles a phase transition. We hope that our work
could resolve the debate over the behavior of BPVE under magnetic field, and
the strong-field behavior of shift current is expected to inspire more studies
on the relation between nonlinear optics and quantum geometry.
|
This paper analyzes the outage performance in finite wireless networks.
Unlike most prior works, which either assumed a specific network shape or
considered a special location of the reference receiver, we propose two general
frameworks for analytically computing the outage probability at any arbitrary
location of an arbitrarily-shaped finite wireless network: (i) a moment
generating function-based framework which is based on the numerical inversion
of the Laplace transform of a cumulative distribution and (ii) a reference link
power gain-based framework which exploits the distribution of the fading power
gain between the reference transmitter and receiver. The outage probability is
spatially averaged over both the fading distribution and the possible locations
of the interferers. The boundary effects are accurately accounted for using the
probability distribution function of the distance of a random node from the
reference receiver. For the case of the node locations modeled by a Binomial
point process and Nakagami-$m$ fading channel, we demonstrate the use of the
proposed frameworks to evaluate the outage probability at any location inside
either a disk or polygon region. The analysis illustrates the location
dependent performance in finite wireless networks and highlights the importance
of accurately modeling the boundary effects.
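A simple Monte Carlo sketch of the quantity that these frameworks compute analytically is given below: the outage probability at a chosen receiver location inside a disk, with interferers drawn from a Binomial point process and Nakagami-$m$ power fading. All parameter values are illustrative assumptions, not taken from the paper.

```python
# Monte Carlo estimate of the SIR outage probability at a given location
# inside a disk-shaped network with a Binomial point process of interferers.
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(rx, n_interf=10, radius=10.0, m=2.0, alpha=4.0,
                sir_threshold=1.0, d_link=1.0, n_trials=20_000):
    outages = 0
    for _ in range(n_trials):
        # Binomial point process: n_interf points uniform in the disk.
        r = radius * np.sqrt(rng.random(n_interf))
        phi = 2 * np.pi * rng.random(n_interf)
        pts = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
        d_i = np.linalg.norm(pts - rx, axis=1)
        # Nakagami-m power fading is Gamma(m, 1/m) distributed (unit mean).
        g_sig = rng.gamma(m, 1.0 / m)
        g_int = rng.gamma(m, 1.0 / m, size=n_interf)
        signal = g_sig * d_link ** (-alpha)
        interference = np.sum(g_int * d_i ** (-alpha))
        if signal / interference < sir_threshold:
            outages += 1
    return outages / n_trials

print(outage_prob(rx=np.array([0.0, 0.0])))   # receiver at the disk centre
print(outage_prob(rx=np.array([9.0, 0.0])))   # receiver near the boundary
```

Moving the receiver from the centre towards the boundary changes the estimate, illustrating the location-dependent performance and the boundary effect discussed above.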
|
A status report of the microlensing search by the pixel method in the
direction of M31, on the 2 meter telescope at Pic du Midi is given. Pixels are
stable to a level better than 0.5%. Pixel variations as small as 0.02 magnitude
can clearly be detected.
|
We perform a two-dimensional numerical study on the thermal effect of porous
media on global heat transport and flow structure in Rayleigh-B\'enard (RB)
convection, focusing on the role of thermal conductivity $\lambda$ of porous
media, which ranges from $0.1$ to $50$ relative to the fluid. The simulation is
carried out in a square RB cell with the Rayleigh number $Ra$ ranging from
$10^7$ to $10^9$ and the Prandtl number $Pr$ fixed at $4.3$. The porosity of
the system is fixed at $\phi=0.812$, with the porous media modeled by a set of
randomly placed circular obstacles. For a fixed $Ra$, increasing the
conductivity has only a small effect on the total heat transfer, slightly
depressing the Nusselt number. The limited influence comes from the small
number of obstacles in contact with thermal plumes in the system, as well as
the counteraction between the increased plume area and the depressed plume
strength. The
study shows that the global heat transfer is insensitive to the conduction
effect of separated porous media in the bulk region, which may have
implications for industrial designs.
|
Stellar models generally use simple parametrizations to treat convection. The
most widely used parametrization is the so-called "Mixing Length Theory" where
the convective eddy sizes are described using a single number, \alpha, the
mixing-length parameter. This is a free parameter, and the general practice is
to calibrate \alpha using the known properties of the Sun and apply that to all
stars. Using data from NASA's Kepler mission we show that using the
solar-calibrated \alpha is not always appropriate, and that in many cases it
would lead to estimates of initial helium abundances that are lower than the
primordial helium abundance. Kepler data allow us to calibrate \alpha for many
other stars and we show that for the sample of stars we have studied, the
mixing-length parameter is generally lower than the solar value. We studied the
correlation between \alpha and stellar properties, and we find that \alpha
increases with metallicity. We therefore conclude that results obtained by
fitting stellar models or by using population-synthesis models constructed with
solar values of \alpha are likely to have large systematic errors. Our results
also confirm theoretical expectations that the mixing-length parameter should
vary with stellar properties.
|
The thermal conductivity of a $d=1$ lattice of ferromagnetically coupled
planar rotators is studied through molecular dynamics. Two different types of
anisotropies (local and in the coupling) are assumed in the inertial XY model.
In the limit of extreme anisotropy, both models approach the Ising model and
its thermal conductivity $\kappa$, which, at high temperatures, scales like
$\kappa\sim T^{-3}$. This behavior reinforces the result obtained in various
$d$-dimensional models, namely $\kappa \propto L\,
e_{q}^{-B(L^{\gamma}T)^{\eta}}$ where $e_q^z
\equiv[1+(1-q)z]^{\frac{1}{1-q}}\;(e_1^z=e^z)$, $L$ being the linear size of
the $d$-dimensional macroscopic lattice. The scaling law $\frac{\eta
\,\gamma}{q-1}=1$ guarantees the validity of Fourier's law, $\forall d$.
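As a quick numerical illustration of the quoted scaling form (parameter values chosen by us so that $\eta\gamma/(q-1)=1$ holds, not taken from the simulations), the following sketch verifies that $\kappa$ becomes independent of $L$ at large $LT$, i.e. that Fourier's law is recovered.

```python
# Numerical check of kappa ∝ L e_q^{-B (L^gamma T)^eta} with eta*gamma/(q-1)=1.
import numpy as np

def e_q(z, q):
    """q-exponential [1 + (1-q) z]^{1/(1-q)}; reduces to exp(z) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(z)
    base = 1.0 + (1.0 - q) * z
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def kappa(L, T, q=1.5, gamma=1.0, eta=0.5, B=1.0):
    # Illustrative values satisfying eta * gamma / (q - 1) = 1.
    return L * e_q(-B * (L ** gamma * T) ** eta, q)

for L in (1e2, 1e4, 1e6, 1e8):
    print(f"L = {L:8.0e}  kappa = {float(kappa(L, T=1.0)):.4f}")
# kappa approaches a finite, L-independent value (here 4/(B^2 T) = 4),
# consistent with the validity of Fourier's law.
```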
|