We show that lattice polytopes cut out by root systems of classical type are
normal and Koszul, generalizing a well-known result of Bruns, Gubeladze, and
Trung in type A. We prove similar results for Cayley sums of collections of
polytopes whose Minkowski sums are cut out by root systems. The proofs are
based on a combinatorial characterization of diagonally split toric varieties.
|
Traditionally, calcium dynamics in neurons are modeled using partial
differential equations (PDEs) and ordinary differential equations (ODEs). The
PDE component focuses on reaction-diffusion processes, while the ODE component
addresses transmission via ion channels on the cell's or organelle's membrane.
However, analytically determining the underlying equations for ion channels is
highly challenging due to the complexity and unknown factors inherent in
biological processes. Therefore, we employ deep neural networks (DNNs) to model
the open probability of ion channels, a task that can be intricate when
approached with ODEs. This technique also reduces the number of unknowns
required to model the open probability. When trained with valid data, the same
neural network architecture can be used for different ion channels, such as
sodium, potassium, and calcium. Furthermore, based on the given data, we can
build more physiologically reasonable DNN models that can be customized.
Subsequently, we integrate the DNN model into a model of calcium dynamics in neurons with endoplasmic reticulum, resulting in a hybrid model that combines PDEs and DNNs.
Numerical results are provided to demonstrate the flexibility and advantages of
the PDE-DNN model.
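As an illustration of the idea, here is a minimal sketch of a DNN of the kind described, mapping local state to an ion channel's open probability; the architecture, the choice of inputs, and the class name `OpenProbabilityNet` are our own illustrative assumptions, not the authors' model.

```python
# A minimal sketch, assuming a two-input state (membrane voltage, calcium
# concentration); not the authors' architecture.
import torch
import torch.nn as nn

class OpenProbabilityNet(nn.Module):
    """Maps (V, [Ca2+]) to an open probability in [0, 1]."""
    def __init__(self, n_inputs=2, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 1), nn.Sigmoid(),  # output is a probability
        )

    def forward(self, x):
        return self.net(x)

# The same architecture can be retrained per channel type (Na+, K+, Ca2+);
# inside a PDE solver the predicted probability would scale the channel flux.
model = OpenProbabilityNet()
p_open = model(torch.tensor([[-65.0, 1e-4]]))  # e.g. V in mV, [Ca2+] in mM
```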
|
Mesoscopic molecular dynamics simulations are used to determine the large
scale structure of several binary polymer mixtures of various chemical
architecture, concentration, and thermodynamic conditions. By implementing an
analytical formalism, which is based on the solution to the Ornstein-Zernike
equation, each polymer chain is mapped onto the level of a single soft colloid.
From the appropriate closure relation, the effective, soft-core potential
between coarse-grained units is obtained and used as input to our mesoscale
simulations. The potential derived in this manner is analytical and explicitly
parameter dependent, making it general and transferable to numerous systems of
interest. From computer simulations performed under various thermodynamic
conditions the structure of the polymer mixture, through pair correlation
functions, is determined over the entire miscible region of the phase diagram.
In the athermal regime mesoscale simulations exhibit quantitative agreement
with united atom simulations. Furthermore, they provide information at larger scales than can be attained by united atom simulations, as well as in the thermal regime approaching the phase transition.
|
We present a finite-volume, genuinely 4th-order accurate numerical method for
solving the equations of resistive relativistic magnetohydrodynamics (Res-RMHD)
in Cartesian coordinates. In our formulation, the magnetic field is evolved in
time in terms of face-average values via the constrained-transport method while
the remaining variables (density, momentum, energy and electric fields) are
advanced as cell volume-averages. Spatial accuracy is achieved via 5th-order WENO-Z reconstruction from point values (as described in a companion paper) to obtain left and right states at zone interfaces. Explicit flux evaluation is carried out by solving a Riemann problem at cell interfaces, using the Maxwell-Harten-Lax-van Leer solver with contact wave resolution (MHLLC). Time stepping
is based on the implicit-explicit (IMEX) Runge-Kutta (RK) methods, of which we
consider both the 3rd-order strong stability preserving SSP3(4,3,3) and a
recent 4th-order additive RK scheme, to cope with the stiffness introduced by
the source term in Ampere's law. Numerical benchmarks are presented in order to
assess the accuracy and robustness of our implementation.
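For concreteness, the sketch below shows the 5th-order WENO-Z reconstruction step at a single interface (following Borges et al. 2008); it is illustrative only, not the implementation described in the companion paper.

```python
# A minimal sketch of 5th-order WENO-Z reconstruction at one interface.
import numpy as np

def weno_z(v, eps=1e-40):
    """Left-biased reconstruction at x_{i+1/2} from v = (v_{i-2},...,v_{i+2})."""
    v0, v1, v2, v3, v4 = v
    # smoothness indicators of the three 3-point candidate stencils
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    tau5 = abs(b0 - b2)                    # global smoothness (the "Z" part)
    d = np.array([0.1, 0.6, 0.3])          # ideal (linear) weights
    alpha = d * (1 + tau5 / (np.array([b0, b1, b2]) + eps))
    w = alpha / alpha.sum()
    # candidate 3rd-order reconstructions at the interface
    q = np.array([(2*v0 - 7*v1 + 11*v2) / 6,
                  (-v1 + 5*v2 + 2*v3) / 6,
                  (2*v2 + 5*v3 - v4) / 6])
    return w @ q

print(weno_z(np.sin(0.1 * np.arange(5))))  # smooth data: ~5th-order accurate
```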
|
Cosmological models involving an interaction between dark matter and dark
energy have been proposed in order to solve the so-called coincidence problem.
Different forms of coupling have been studied, but there have been claims that
observational data seem to narrow (some of) them down to something annoyingly
close to the $\Lambda$CDM model, thus greatly reducing their ability to deal
with the problem in the first place. The smallness problem of the initial
energy density of dark energy has also been a target of cosmological models in
recent years. Making use of a moderately general coupling scheme, this paper aims to unite these different approaches and to shed some light on whether this class of models has any real prospect of alleviating, in a quantitative and unambiguous way, the aforementioned issues that plague our current understanding of the universe.
|
Using high resolution VLT spectra, we study the multi-component outflow
systems of two quasars exhibiting intrinsic Fe II absorption (QSO 2359-1241 and
SDSS J0318-0600). From the extracted ionic column densities and using
photoionization modeling we determine the gas density, total column density,
and ionization parameter for several of the components. For each object the
largest column density component is also the densest, and all other components
have densities of roughly 1/4 of that of the main component. We demonstrate
that all the absorbers lie roughly at the same distance from the source.
Further, we calculate the total kinetic luminosities and mass outflow rates of
all components and show that these quantities are dominated by the main
absorption component.
|
Electroencephalography (EEG) estimates the cortical distribution of electrical-activity signals over time and can locate the zones of primary sensory projection. It records, every millisecond, the variations of potential generated by electrical activity in the brain. Localizing the sources of brain activity requires the solution of an inverse problem, and many different imaging methods are used to solve it. The aim of the present study is to provide comparison criteria for choosing the least bad method. To this end, transcranial magnetic stimulation (TMS) and electroencephalography (EEG) are combined in order to study the dynamics of the brain at rest following a disturbance. The study focuses on the comparison of the following methods for EEG following stimulation by TMS: sLORETA (standardized Low Resolution Electromagnetic Tomography), MNE (Minimum Norm Estimate), dSPM (dynamic Statistical Parametric Mapping) and wMEM (wavelet-based Maximum Entropy on the Mean), in order to study the impact of TMS relative to rest and to study inter- and intra-zone connectivity. The contribution of the comparison is demonstrated via simulations.
|
Convolutional neural networks can be trained to perform histology slide
classification using weak annotations with multiple instance learning (MIL).
However, given the paucity of labeled histology data, direct application of MIL
can easily suffer from overfitting and the network is unable to learn rich
feature representations due to the weak supervisory signal. We propose to
overcome such limitations with a two-stage semi-supervised approach that
combines the power of data-efficient self-supervised feature learning via
contrastive predictive coding (CPC) and the interpretability and flexibility of
regularized attention-based MIL. We apply our two-stage CPC + MIL
semi-supervised pipeline to the binary classification of breast cancer
histology images. Across five random splits, we report state-of-the-art
performance with a mean validation accuracy of 95% and an area under the ROC
curve of 0.968. We further evaluate the quality of features learned via CPC
relative to simple transfer learning and show that strong classification
performance using CPC features can be efficiently leveraged under the MIL
framework even with the feature encoder frozen.
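To make the second stage concrete, the following is a minimal sketch of attention-based MIL pooling in the spirit of Ilse et al. (2018), as it could sit on top of frozen CPC features; the dimensions and class name are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of regularization-free attention-based MIL pooling.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, h):                  # h: [num_instances, feat_dim]
        a = torch.softmax(self.attention(h), dim=0)   # instance weights
        z = (a * h).sum(dim=0)             # attention-weighted bag embedding
        return self.classifier(z), a       # bag logit + interpretable weights

bag = torch.randn(200, 512)                # e.g. 200 patch features from CPC
logit, weights = AttentionMIL()(bag)
```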
|
We have measured the center-of-mass structure factor S(k) of liquid
para-hydrogen by neutron diffraction, using the D4C diffractometer at the Institut Laue-Langevin, Grenoble, France. The present determination is at
variance with previous results obtained from inelastic neutron scattering data,
but agrees with path integral Monte Carlo simulations.
|
We consider the possibility of using solar neutrinos for studies of small-scale structures of the Earth and for geological research. Effects of thin layers of matter with density contrast on oscillations of beryllium neutrinos inside the Earth are studied. We find that the change of the $^7Be$ neutrino flux can reach 0.1% for layers with the density of oil and a size of 20 km. Problems of detection are discussed. A hypothetical method would consist of measuring the $^7Be$ flux by, {\it e.g.}, a large deep-underwater detector (a submarine) which could change its location.
|
This study considers advective and diffusive transport of passive scalar
fields by spatially-varying incompressible flows. Prior studies have shown that
the eddy diffusivities governing the mean field transport in such systems can
generally be nonlocal in space and time. While for many flows nonlocal eddy
diffusivities are more accurate than commonly-used Boussinesq eddy
diffusivities, nonlocal eddy diffusivities are often computationally
cost-prohibitive to obtain and difficult to implement in practice. We develop a
systematic and more cost-effective approach for modeling nonlocal eddy
diffusivities using matched moment inverse (MMI) operators. These operators are
constructed using only a few leading-order moments of the exact nonlocal eddy
diffusivity kernel, which can be easily computed using the inverse macroscopic
forcing method (IMFM) (Mani and Park (2021)). The resulting reduced-order
models for the mean fields that incorporate the modeled eddy diffusivities
often improve Boussinesq-limit models since they capture leading-order nonlocal
effects. But more importantly, these models can be expressed as partial
differential equations that are readily solvable using existing computational
fluid dynamics capabilities rather than as integro-partial differential
equations.
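Schematically, in our own notation (not necessarily that of Mani and Park (2021)): the exact nonlocal closure for the mean scalar flux and its moment expansion can be written as
$$-\overline{u'c'}(x) \;=\; \int D(x,y)\,\frac{\partial \overline{c}}{\partial y}\,dy \;\approx\; \sum_{n\ge 0} D^{(n)}(x)\,\frac{\partial^{\,n+1}\overline{c}}{\partial x^{\,n+1}}, \qquad D^{(n)}(x) \;=\; \frac{1}{n!}\int (y-x)^n\, D(x,y)\,dy.$$
An MMI operator is then a local differential operator constructed so that its inverse reproduces the first few moments $D^{(n)}$, which is why the resulting mean-field model is a PDE rather than an integro-differential equation.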
|
Enhancing the generalization capability of deep neural networks to unseen
domains is crucial for safety-critical applications in the real world such as
autonomous driving. To address this issue, this paper proposes a novel instance
selective whitening loss to improve the robustness of the segmentation networks
for unseen domains. Our approach disentangles the domain-specific style and
domain-invariant content encoded in higher-order statistics (i.e., feature
covariance) of the feature representations and selectively removes only the
style information causing domain shift. As shown in Fig. 1, our method provides
reasonable predictions for (a) low-illuminated, (b) rainy, and (c) unseen
structures. These types of images are not included in the training dataset,
where the baseline shows a significant performance drop, contrary to ours.
Being simple yet effective, our approach improves the robustness of various
backbone networks without additional computational cost. We conduct extensive
experiments in urban-scene segmentation and show the superiority of our
approach to existing work. Our code is available at
https://github.com/shachoi/RobustNet.
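As a rough illustration of penalizing style information stored in feature covariance, the sketch below implements a plain instance-whitening penalty; RobustNet's instance *selective* whitening additionally masks which covariance entries to suppress, so this is a simplified sketch rather than the paper's loss.

```python
# A minimal sketch: decorrelate channels of each instance's feature map by
# penalizing off-diagonal covariance entries (style suppression).
import torch

def instance_whitening_loss(f):
    """f: feature map [B, C, H, W]."""
    b, c, h, w = f.shape
    x = f.view(b, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)          # center per instance
    cov = x @ x.transpose(1, 2) / (h * w - 1)    # [B, C, C] covariance
    off_diag = cov - torch.diag_embed(torch.diagonal(cov, dim1=1, dim2=2))
    return off_diag.pow(2).mean()

loss = instance_whitening_loss(torch.randn(4, 64, 32, 32))
```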
|
Rapid advancement of antenna technology catalyses the popularization of
extremely large-scale multiple-input multiple-output (XL-MIMO) antenna arrays,
which pose unique challenges for localization with the inescapable near-field
effect. In this paper, we propose an efficient near-field localization
algorithm by leveraging a sectored uniform circular array (sUCA). In
particular, we first customize a backprojection algorithm in the polar
coordinate for sUCA-enabled near-field localization, which facilitates the
target detection procedure. We then analyze the resolutions in both angular and
distance domains via deriving the interval of zero-crossing points, and further
unravel the minimum required number of antennas to eliminate grating lobes. The
proposed localization method is finally implemented using fast Fourier
transform (FFT) to reduce computational complexity. Simulation results verify
the resolution analysis and demonstrate that the proposed method remarkably
outperforms conventional localization algorithms in terms of localization
accuracy. Moreover, the low-complexity FFT implementation achieves an average
runtime that is hundreds of times faster when large numbers of antenna elements
are employed.
|
The fluctuations in nonequilibrium systems are under intense theoretical and
experimental investigation. Topical ``fluctuation relations'' describe
symmetries of the statistical properties of certain observables, in a variety
of models and phenomena. They have been derived in deterministic and, later, in
stochastic frameworks. Other results first obtained for stochastic processes,
and later considered in deterministic dynamics, describe the temporal evolution
of fluctuations. The field has grown beyond expectation: research works and
different perspectives are proposed at an ever faster pace. Indeed,
understanding fluctuations is important for the emerging theory of
nonequilibrium phenomena, as well as for applications, such as those of
nanotechnological and biophysical interest. However, the links among the
different approaches and the limitations of these approaches are not fully
understood. We focus on these issues, providing: a) analysis of the theoretical
models; b) discussion of the rigorous mathematical results; c) identification
of the physical mechanisms underlying the validity of the theoretical
predictions, for a wide range of phenomena.
|
We revisit a class of Z' explanations of the anomalies found by the LHCb
collaboration in $B$ decays, and show that the scenario is tightly constrained
by a combination of constraints: (i) LHC searches for di-muon resonances, (ii) perturbativity of the Z' couplings, (iii) the $B_s$ mass difference, and (iv) electroweak precision data. Solutions are found by suppressing the Z'
coupling to electrons and to light quarks and/or by allowing for a Z' decay
width into dark matter. We also present a simplified framework where a
TeV-scale Z' gauge boson that couples to standard leptons as well as to new
heavy vector-like leptons, can simultaneously accommodate the LHCb anomalies
and the muon g-2 anomaly.
|
We investigate models in which inflation is driven by a single geometrical
tachyon. We assume that the D-brane as a probe brane in the background of
NS5-branes has non-zero angular momentum, which is shown to play a role similar to that of the number of scalar fields in assisted inflation. We demonstrate that the angular-momentum-corrected effective potential allows us to account for the observational constraints on the COBE normalization, the spectral index $n_S$, and the tensor-to-scalar ratio of perturbations, consistent with the WMAP seven-year data.
|
In this article, we present MuMuPy, a computational library and cloud-based
tool for calculating cross sections for the interaction of dimuonium (true
muonium) with matter. MuMuPy calculates corresponding form factors and allows
one to find the probabilities of dimuonium transitions in the electric field of
the nucleus.
MuMuPy was developed in the context of the $\mu\mu$-tron facility, the
project of a low-energy electron-positron collider for production and
experimental study of dimuonium, proposed in our home institute, Budker
Institute of Nuclear Physics.
The reliability of MuMuPy was verified by three independent methods, one of
which was developed by the authors earlier.
|
In extreme classification problems, learning algorithms are required to map
instances to labels from an extremely large label set. We build on a recent
extreme classification framework with logarithmic time and space, and on a
general approach for error correcting output coding (ECOC) with loss-based
decoding, and introduce a flexible and efficient approach accompanied by
theoretical bounds. Our framework employs output codes induced by graphs, for
which we show how to perform efficient loss-based decoding to potentially
improve accuracy. In addition, our framework offers a tradeoff between
accuracy, model size and prediction time. We show how to find the sweet spot of
this tradeoff using only the training data. Our experimental study demonstrates
the validity of our assumptions and claims, and shows that our method is
competitive with state-of-the-art algorithms.
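A minimal sketch of ECOC with loss-based decoding follows; for simplicity it uses a random code rather than the graph-induced codes of the paper, and the logistic loss stands in for whatever loss the binary learners were trained with.

```python
# A minimal sketch: each class gets a binary codeword; decoding picks the
# codeword with the smallest total bit-wise loss on the learners' margins.
import numpy as np

rng = np.random.default_rng(0)
n_classes, code_len = 1000, 30
codes = rng.integers(0, 2, size=(n_classes, code_len)) * 2 - 1  # in {-1, +1}

def decode(bit_scores):
    """bit_scores: real-valued margins from the code_len binary learners.
    Loss-based decoding with the logistic loss l(z) = log(1 + exp(-z))."""
    margins = codes * bit_scores             # [n_classes, code_len]
    losses = np.logaddexp(0.0, -margins).sum(axis=1)
    return int(np.argmin(losses))

y_hat = decode(rng.normal(size=code_len))    # predicted class index
```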
|
We discuss the decomposition of the tensorial relaxation function for
isotropic and transversely isotropic Modified Quasi-Linear Viscoelastic models.
We show how to formulate the constitutive equation by using a convenient
decomposition of the relaxation tensor into scalar components and tensorial
bases. We show that the bases must be symmetrically additive, i.e., they must sum
up to the symmetric fourth-order identity tensor. This is a fundamental
property both for isotropic and anisotropic bases that ensures the constitutive
equation is consistent with the elastic limit. We provide two robust methods to
obtain such bases. Furthermore, we show that, in the transversely isotropic
case, the bases are naturally deformation-dependent for deformation modes that
induce rotation or stretching of the fibres. Therefore, the Modified
Quasi-Linear Viscoelastic framework allows one to capture the non-linear phenomenon
of strain-dependent relaxation, which has always been a criticised limitation
of the original Quasi-Linear Viscoelastic theory. We illustrate this intrinsic
non-linear feature, unique to the Modified Quasi-Linear Viscoelastic model,
with two examples (uni-axial extension and perpendicular shear).
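Schematically, in our own notation (a sketch, not the authors' exact formulation): the decomposition takes the form
$$\mathbb{G}(t) \;=\; \sum_i g_i(t)\,\mathbb{B}_i, \qquad \sum_i \mathbb{B}_i \;=\; \mathbb{I},$$
where the $g_i(t)$ are scalar relaxation functions, the $\mathbb{B}_i$ are the tensorial bases, and $\mathbb{I}$ is the symmetric fourth-order identity tensor; with $g_i(0)=1$, the symmetric additivity condition guarantees that the instantaneous response reduces to the elastic limit.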
|
The question "Which abelian permutation groups arise as groups of simple currents in Rational Conformal Field Theory?" is investigated using the formalism of weighted permutation actions (WPAs). After a review of the relevant properties of simple current symmetries, the general theory of WPAs and admissibility conditions is described, and classification results are illustrated by a couple of examples.
|
The talk summarises the case for Higgs physics in $e^+e^-$ collisions and
explains how Higgs parameters can be extracted in a model-independent way at
the International Linear Collider (ILC). The expected precision will be
discussed in the context of projections for the experiments at the Large Hadron
Collider (LHC).
|
In this work we study a class of $f(R,\phi)$ gravity models which, during the inflationary era (the large-curvature regime), result in an effective inflationary Lagrangian that contains a rescaled Einstein-Hilbert term $\alpha R$ in the presence of a canonical minimally coupled scalar field. The dimensionless parameter $\alpha$ is chosen to take values in the range $0<\alpha<1$, and the main motivation for studying these rescaled Einstein-Hilbert $f(R,\phi)$ gravities is the fact that the rescaled action may render a canonical scalar field theory that is otherwise incompatible with the Swampland criteria compatible with them. As we show, by studying a large number of inflationary potentials appearing in the 2018 Planck collaboration article on the constraints on inflation, simultaneous compatibility with both the Planck constraints and the Swampland criteria is achieved for some models, and the main characteristic of the models for which this is possible is the small values that the parameter $\alpha$ must take.
|
Let $H_n$ be the minimal number of smaller homothetic copies of an
$n$-dimensional convex body required to cover the whole body. Equivalently,
$H_n$ can be defined via illumination of the boundary of a convex body by
external light sources. The best known upper bound in the three-dimensional case is
$H_3\le 16$ and is due to Papadoperakis. We use Papadoperakis' approach to show
that $H_4\le 96$, $H_5\le 1091$ and $H_6\le 15373$, which significantly improve
the previously known upper bounds on $H_n$ in these dimensions.
|
The properties of the MoSr2RCu2O8 (R=rare earth) system are found to
systematically change with the contraction of the R ions. For the light R ions
(La-Nd) the samples are paramagnetic down to 5 K, whereas in the intermediate
range (Sm-Tb), the Mo sublattice orders antiferromagnetically at TN, ranging
from 11 to 24 K. For the heavy R ions, Ho-Tm and Y, superconductivity appears
at TC in the range 19-27 K and antiferromagnetism sets in at TN < TC. This
latter behavior resembles most of the magneto-superconductors, but is in sharp
contrast to the iso-structural RuSr2RCu2O8 system where TN > TC.
|
The goals of this paper are two-fold. The first goal is to serve as an
expository tutorial on the working of deep learning models which emphasizes
geometrical intuition about the reasons for success of deep learning. The
second goal is to complement the current results on the expressive power of
deep learning models and their loss surfaces with novel insights and results.
In particular, we describe how deep neural networks carve out manifolds
especially when the multiplication neurons are introduced. Multiplication is
used in dot products and the attention mechanism and it is employed in capsule
networks and self-attention based transformers. We also describe how random
polynomial, random matrix, spin glass and computational complexity perspectives
on the loss surfaces are interconnected.
|
A simple quantitative example of a reflexive feedback process and the
resulting price dynamics after an exogenous price shock to a financial network
is presented. Furthermore, an outline of a theory that connects financial
reflexivity, which stems from cross-ownership and delayed or incomplete
information, and no-arbitrage pricing theory under systemic risk is provided.
|
The asymptotic solution of the inviscid Burgers equations with initial
potential $\psi$ is closely related to the convex hull of the graph of $\psi$.
In this paper, we study this convex hull, and more precisely its extremal
points, if $\psi$ is a stochastic process. The times where those extremal
points are reached, called extremal times, form a negligible set for L\'evy
processes, their integrated processes, and It\^o processes. We examine more
closely the case of a L\'evy process with bounded variation. Its extremal
points are almost surely countable, with accumulation only around the extremal
values. These results are derived from the general study of the extremal times
of $\psi+f$, where $\psi$ is a L\'evy process and $f$ a smooth deterministic
drift.
These results allow us to show that, for an inviscid Burgers turbulence with
a compactly supported initial potential $\psi$, the only point capable of being
Lagrangian regular is the time $T$ where $\psi$ reaches its maximum, and that
is indeed a regular point iff 0 is regular for both half-lines. As a
consequence, if the turbulence occurs on a non-compact interval, there are a.s.
no Lagrangian regular points.
|
The behavior of neutral pseudoscalar mesons $\pi^0, \eta$ and $\eta'$ in hot
and dense matter is investigated, in the framework of the three flavor
Nambu-Jona-Lasinio model. Three different scenarios are considered: zero
density and finite temperature, zero temperature and finite density in a flavor
asymmetric medium with and without strange valence quarks, and finite
temperature and density. The behavior of mesons is analyzed in connection with
possible signatures of restoration of symmetries. In the high density region
and at zero temperature it is found that the mass of the $\eta'$ increases, the
deviation from the mass of the $\eta$ being more pronounced in matter without
strange valence quarks.
|
A major barrier to deploying healthcare AI models is their trustworthiness.
One form of trustworthiness is a model's robustness across different subgroups:
while existing models may exhibit expert-level performance on aggregate
metrics, they often rely on non-causal features, leading to errors in hidden
subgroups. To take a step closer towards trustworthy seizure onset detection
from EEG, we propose to leverage annotations that are produced by healthcare
personnel in routine clinical workflows -- which we refer to as workflow notes
-- that include multiple event descriptions beyond seizures. Using workflow
notes, we first show that by scaling training data to an unprecedented level of
68,920 EEG hours, seizure onset detection performance significantly improves
(+12.3 AUROC points) compared to relying on smaller training sets with
expensive manual gold-standard labels. Second, we reveal that our binary
seizure onset detection model underperforms on clinically relevant subgroups
(e.g., up to a margin of 6.5 AUROC points between pediatrics and adults), while
having significantly higher false positives on EEG clips showing
non-epileptiform abnormalities compared to any EEG clip (+19 FPR points). To
improve model robustness to hidden subgroups, we train a multilabel model that
classifies 26 attributes other than seizures, such as spikes, slowing, and
movement artifacts. We find that our multilabel model significantly improves
overall seizure onset detection performance (+5.9 AUROC points) while greatly
improving performance among subgroups (up to +8.3 AUROC points), and decreases
false positives on non-epileptiform abnormalities by 8 FPR points. Finally, we
propose a clinical utility metric based on false positives per 24 EEG hours and
find that our multilabel model improves this clinical utility metric by a
factor of 2x across different clinical settings.
|
The interpretation of seismic data is vital for characterizing the shape of sediments in areas of geological study. In seismic interpretation, deep learning is useful for reducing the dependence on handcrafted facies segmentation geometry and the time required to study geological areas. This work presents a
Deep Neural Network for Facies Segmentation (DNFS) to obtain state-of-the-art
results for seismic facies segmentation. DNFS is trained using a combination of
cross-entropy and Jaccard loss functions. Our results show that DNFS obtains
highly detailed predictions for seismic facies segmentation using fewer
parameters than StNet and U-Net.
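As a sketch of the training objective described, the snippet below combines cross-entropy with a differentiable (soft) Jaccard loss; the weighting `w` and smoothing `eps` are illustrative assumptions, not the authors' published code.

```python
# A minimal sketch: weighted sum of cross-entropy and a soft Jaccard loss.
import torch
import torch.nn.functional as F

def ce_jaccard_loss(logits, target, n_classes, w=0.5, eps=1e-6):
    """logits: [B, C, H, W]; target: [B, H, W] integer facies labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, n_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = (probs + onehot - probs * onehot).sum(dim=(0, 2, 3))
    soft_jaccard = ((inter + eps) / (union + eps)).mean()
    return w * ce + (1 - w) * (1 - soft_jaccard)

loss = ce_jaccard_loss(torch.randn(2, 6, 64, 64),
                       torch.randint(0, 6, (2, 64, 64)), n_classes=6)
```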
|
We define $g(n)$ to be the maximal order of an element of the symmetric group
on $n$ elements. Results about the prime factorization of $g(n)$ allow a
reduction of the upper bound on the largest prime divisor of $g(n)$ to
$1.328\sqrt{n\log n}$.
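For readers who want to experiment, here is a short sketch (not from the paper) that computes $g(n)$ exactly by a knapsack over prime powers, using the fact that an element's order is the lcm of its cycle lengths:

```python
# A minimal sketch: g(n) maximizes prod p^e over prime powers, one power per
# prime, subject to sum p^e <= n.
def landau_g(n):
    primes = [p for p in range(2, n + 1)
              if all(p % q for q in range(2, int(p**0.5) + 1))]
    best = [1] * (n + 1)                  # best[m]: max order with budget m
    for p in primes:
        for m in range(n, 1, -1):         # descend so each prime is used once
            pe = p
            while pe <= m:
                best[m] = max(best[m], best[m - pe] * pe)
                pe *= p
    return best[n]

print([landau_g(n) for n in range(1, 11)])  # 1, 2, 3, 4, 6, 6, 12, 15, 20, 30
```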
|
The rigidity theorems of Alexandrov (1950) and Stoker (1968) are classical
results in the theory of convex polyhedra. In this paper we prove analogues of
them for normal (resp., standard) ball-polyhedra. Here, a ball-polyhedron means
an intersection of finitely many congruent balls in Euclidean 3-space.
|
We give topological lower bounds on the number of periodic and closed
trajectories in strictly convex smooth billiards. We use variational reduction
admitting a finite group of symmetries and apply a topological approach based on equivariant Morse and Lusternik-Schnirelmann theories.
The paper continues results published in math.DG/9911226 and math.DG/0006049.
|
The singular chain complex of the iterated loop space is expressed in terms
of the cobar construction. After that we consider the spectral sequence of the
cobar construction and calculate its first term over Z/p-coefficients and over
a field of characteristic zero. Finally we apply these results to calculate the
homology of the iterated loop spaces of the stunted real and complex projective
spaces. The Appendix, written by F. Sergeraert, considers computer methods for calculating the homology of iterated loop spaces.
|
We describe a generalization of the Lellouch-L\"uscher formula to the case of
multiple strongly-coupled decay channels. As in the original formula, our final
result is a relation between weak matrix elements in finite and infinite
volumes. Our extension is limited to final states with two scalar particles,
with center of mass energies below the lowest three- or four-particle
threshold. Otherwise the extension is general, accommodating any number of
channels, arbitrary strong coupling between channels, as well as any form of
weak decay operators in the matrix elements. Among many possible applications,
we emphasize that this is a necessary first step on the way to a lattice-QCD
calculation of weak decay rates for D -> pi pi and D -> K K-bar. Our results
allow for arbitrary total momentum and hold for degenerate or non-degenerate
particles.
|
Given the ever increasing bandwidth of the visual information available to
many intelligent systems, it is becoming essential to endow them with a sense
of what is worthy of their attention and what can be safely disregarded. This
article presents a general mathematical framework to efficiently allocate the
available computational resources to process the parts of the input that are
relevant to solve a given perceptual problem. By this we mean to find the
hypothesis H (i.e., the state of the world) that maximizes a function L(H),
representing how well each hypothesis "explains" the input. Given the large
bandwidth of the sensory input, fully evaluating L(H) for each hypothesis H is
computationally infeasible (e.g., because it would imply checking a large
number of pixels). To address this problem we propose a mathematical framework
with two key ingredients. The first one is a Bounding Mechanism (BM) to compute
lower and upper bounds of L(H), for a given computational budget. These bounds
are much cheaper to compute than L(H) itself, can be refined at any time by
increasing the budget allocated to a hypothesis, and are frequently enough to
discard a hypothesis. To compute these bounds, we develop a novel theory of
shapes and shape priors. The second ingredient is a Focus of Attention
Mechanism (FoAM) to select which hypothesis' bounds should be refined next,
with the goal of discarding non-optimal hypotheses with the least amount of
computation. The proposed framework: 1) is very efficient since most hypotheses
are discarded with minimal computation; 2) is parallelizable; 3) is guaranteed
to find the globally optimal hypothesis; and 4) its running time depends on the
problem at hand, not on the bandwidth of the input. We instantiate the proposed
framework for the problem of simultaneously estimating the class, pose, and a
noiseless version of a 2D shape in a 2D image.
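The following is a generic sketch of the bound-and-focus loop (our simplification, not the authors' BM/FoAM): keep lower/upper bounds per hypothesis, always refine the hypothesis whose upper bound is largest, and stop once the leader's lower bound dominates every rival's upper bound.

```python
# A minimal sketch, assuming user-supplied bound computation via refine().
import heapq, itertools

def focused_search(bounds, refine):
    """bounds: dict H -> (lo, hi) with lo <= L(H) <= hi;
    refine(H) spends one more budget increment on H and returns tighter bounds."""
    counter = itertools.count()            # tie-breaker for the heap
    heap = [(-hi, next(counter), lo, H) for H, (lo, hi) in bounds.items()]
    heapq.heapify(heap)
    while True:
        neg_hi, _, lo, H = heapq.heappop(heap)   # leader: largest upper bound
        runner_up = -heap[0][0] if heap else float("-inf")
        if lo >= runner_up:                      # no rival's hi exceeds lo(H)
            return H                             # H is globally optimal
        lo, hi = refine(H)                       # focus computation on leader
        heapq.heappush(heap, (-hi, next(counter), lo, H))
```

Hypotheses whose upper bound sinks below the eventual winner's lower bound never return to the top of the heap, so they are discarded with essentially no further computation, which is what makes this kind of search efficient.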
|
We prove an equivalence, in the large N limit, between certain U(N) gauge
theories containing adjoint representation matter fields and their orbifold
projections. Lattice regularization is used to provide a non-perturbative
definition of these theories; our proof applies in the strong coupling, large
mass phase of the theories. Equivalence is demonstrated by constructing and
comparing the loop equations for a parent theory and its orbifold projections.
Loop equations for both expectation values of single-trace observables, and for
connected correlators of such observables, are considered; hence the
demonstrated non-perturbative equivalence applies to the large N limits of both
string tensions and particle spectra.
|
We consider the oscillating sign of the drag resistivity and its anomalous
temperature dependence discovered experimentally in a bi-layer system in the
regime of the integer quantum Hall effect. We attribute the oscillating sign to
the effect of disorder on the relation between an adiabatic momentum transfer
to an electron and the displacement of its position. While in the absence of
any Landau level mixing a momentum transfer $\hbar \bf q$ implies a
displacement of $ql_H^2$ (with $l_H$ being the magnetic length), Landau level
mixing induced by short range disorder adds a potentially large displacement
that depends on the electron's energy, with the sign being odd with respect to
the distance of that energy from the center of the Landau level. We show how
the oscillating sign of drag disappears when the disorder is smooth and when
the electronic states are localized.
|
Modeling the dynamics of interacting entities using an evolving graph is an
essential problem in fields such as financial networks and e-commerce.
Traditional approaches focus primarily on pairwise interactions, limiting their
ability to capture the complexity of real-world interactions involving multiple
entities and their intricate relationship structures. This work addresses the
problem of forecasting higher-order interaction events in multi-relational
recursive hypergraphs. This is done using a dynamic graph representation
learning framework that can capture complex relationships involving multiple
entities. The proposed model, \textit{Relational Recursive Hyperedge Temporal
Point Process} (RRHyperTPP) uses an encoder that learns a dynamic node
representation based on the historical interaction patterns and then a
hyperedge link prediction based decoder to model the event's occurrence. These
learned representations are then used for downstream tasks involving
forecasting the type and time of interactions. The main challenge in learning
from hyperedge events is that the number of possible hyperedges grows
exponentially with the number of nodes in the network. This makes the computation of the negative log-likelihood of the temporal point process expensive, as the calculation of the survival function requires a summation over all possible hyperedges. In our work, we use noise contrastive estimation to learn the
parameters of our model, and we have experimentally shown that our models
perform better than previous state-of-the-art methods for interaction
forecasting.
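As a sketch of how noise-contrastive estimation sidesteps the expensive normalization, the snippet below implements a generic NCE objective for an event model; the function signature and the choice of k noise samples per event are our illustrative assumptions, not RRHyperTPP's exact objective.

```python
# A minimal sketch: classify the observed hyperedge against k hyperedges
# drawn from a known noise distribution q, instead of normalizing over the
# exponentially large hyperedge space.
import torch
import torch.nn.functional as F

def nce_loss(log_lam_pos, log_lam_neg, log_q_pos, log_q_neg):
    """log_lam_*: model log-intensities of the observed hyperedge (scalar)
    and of k noise hyperedges (shape [k]); log_q_*: log-probs under q."""
    k = log_lam_neg.numel()
    log_k = torch.log(torch.tensor(float(k)))
    pos_logit = log_lam_pos - log_q_pos - log_k   # log odds "data vs noise"
    neg_logit = log_lam_neg - log_q_neg - log_k
    # -log sigma(pos) - sum log(1 - sigma(neg)), via stable softplus
    return F.softplus(-pos_logit) + F.softplus(neg_logit).sum()
```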
|
We show that the first law for the rotating Taub-NUT is straightforwardly
established with the surface charge method. The entropy is explicitly found as
a charge, and its value is not proportional to the horizon area. We conclude
that there are unavoidable contributions from the Misner strings to the
charges; still, the mass and angular momentum take their standard values. However,
there are no independent charges associated with the Misner strings.
|
Recent advances in image data processing through machine learning and
especially deep neural networks (DNNs) allow for new optimization and
performance-enhancement schemes for radiation detectors and imaging hardware
through data-endowed artificial intelligence. We give an overview of data
generation at photon sources, deep learning-based methods for image processing
tasks, and hardware solutions for deep learning acceleration. Most existing
deep learning approaches are trained offline, typically using large amounts of
computational resources. However, once trained, DNNs can achieve fast inference
speeds and can be deployed to edge devices. A new trend is edge computing with
less energy consumption (hundreds of watts or less) and real-time analysis
potential. While popularly used for edge computing, electronic-based hardware
accelerators ranging from general purpose processors such as central processing
units (CPUs) to application-specific integrated circuits (ASICs) are constantly
reaching performance limits in latency, energy consumption, and other physical
constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
|
Unsupervised cross-domain person re-identification (Re-ID) faces two key
issues. One is the data distribution discrepancy between source and target
domains, and the other is the lack of labelling information in target domain.
They are addressed in this paper from the perspective of representation
learning. For the first issue, we highlight the presence of camera-level
sub-domains as a unique characteristic of person Re-ID, and develop
camera-aware domain adaptation to reduce the discrepancy not only between
source and target domains but also across these sub-domains. For the second
issue, we exploit the temporal continuity in each camera of target domain to
create discriminative information. This is implemented by dynamically
generating online triplets within each batch, in order to maximally take
advantage of the steadily improved feature representation in the training process.
Together, the above two methods give rise to a novel unsupervised deep domain
adaptation framework for person Re-ID. Experiments and ablation studies on
benchmark datasets demonstrate its superiority and interesting properties.
|
Quasi-elastic scattering of the vector bosons W and Z is a sensitive probe of
the details of electroweak symmetry breaking, and a key process at future
lepton colliders. We discuss the limitations of a model-independent
effective-theory approach and describe the extension to a class of Simplified
Models that is applicable to all energies in a quantitative way, and enables
realistic Monte-Carlo simulations. The framework has been implemented in the
Monte-Carlo event generator WHIZARD.
|
The main goal of this study is to investigate the luminosity function (LF) of a sample of 142 X-ray selected clusters, with spectroscopic redshift confirmation and a well-defined selection function, spanning a wide redshift and mass range, and to test the dependence of the LF on cluster global properties in a homogeneous and unbiased way. Our study is based on the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) photometric galaxy catalogue, associated with photometric redshifts. We
constructed LFs inside a scaled radius using a selection in photometric
redshift around the cluster spectroscopic redshift in order to reduce
projection effects. The width of the photometric redshift selection was
carefully determined to avoid biasing the LF and depended on both the cluster
redshift and the galaxy magnitudes. The purity was then enhanced by applying a
precise background subtraction. We constructed composite luminosity functions
(CLFs) by stacking the individual LFs and studied their evolution with redshift
and richness, analysing separately the brightest cluster galaxy (BCG) and
non-BCG members. We fitted the dependences of the CLF and BCG distribution parameters on redshift and richness jointly in order to distinguish
between these two effects. We find that the usual photometric redshift
selection methods can bias the LF estimate if the redshift and magnitude
dependence of the photometric redshift quality is not taken into account. Our
main findings concerning the evolution of the galaxy luminosity distribution
with redshift and richness are that, in the inner region of clusters and in the
redshift-mass range we probe (about $0<z<1$ and $10^{13}
M_{\odot}<M_{500}<5\times10^{14}M_{\odot}$), the bright part of the LF (BCG
excluded) does not depend much on mass or redshift except for its amplitude,
whereas the BCG luminosity increases both with redshift and richness, and its
scatter decreases with redshift.
|
Using density functional molecular dynamics simulations, we analyze the
broken chemical order in a GeS$_2$ glass and its impact on the dynamical
properties of the glass through the in-depth study of the vibrational
eigenvectors. We find homopolar bonds, and the frequencies of the corresponding
modes are in agreement with experimental data. Localized S-S modes and 3-fold
coordinated sulfur atoms are found to be at the origin of specific Raman peaks
whose origin was not previously clear. Through the ring size statistics we
find, during the glass formation, a conversion of 3-membered rings into larger
units but also into 2-membered rings whose vibrational signature is in
agreement with experiments.
|
Advancing ultrafast high-repetition-rate lasers to shortest pulse durations
comprising only a few optical cycles while pushing their energy into the
multi-millijoule regime opens a route towards terawatt-class peak powers at
unprecedented average power. We explore this route via efficient
post-compression of high-energy 1.2 ps pulses from an Ytterbium InnoSlab laser
to 9.6 fs duration using gas-filled multi-pass cells (MPCs) at a repetition
rate of 1 kHz. Employing dual-stage compression with a second MPC stage
supporting a close-to-octave-spanning bandwidth enabled by dispersion-matched
dielectric mirrors, a record compression factor of 125 is reached at 70%
overall efficiency, delivering 6.7 mJ pulses with a peak power of about 0.3 TW.
Moreover, we show that post-compression can improve the temporal contrast at
picosecond delay by at least one order of magnitude. Our results demonstrate
efficient conversion of multi-millijoule picosecond lasers to high-peak-power
few-cycle sources, opening up new parameter regimes for laser plasma physics,
high energy physics, biomedicine and attosecond science.
|
We look for elliptic curves featuring rational points whose coordinates form
two arithmetic progressions, one for each coordinate. A constructive method for
creating such curves is shown, for lengths up to 5.
|
We present a multiple stellar population study of the metal-poor globular
cluster (GC) M92 (NGC 6341), which has long been known for its substantial
metallicity dispersion, using our own photometric system. We find two groups
with slightly different mean metallicities: the metal-poor (MP) stars with [Fe/H] = $-$2.412$\pm$0.03 and the metal-rich (MR) ones with $-$2.282$\pm$0.002. The MP group constitutes about 23\% of the total mass and is more centrally concentrated. Our populational tagging based on the [C/Fe] and [N/Fe]
provides the mean n(P):n(I):n(E) = 32.2:31.6:36.2 ($\pm$2.4), where P, I, and E
denote the primordial, intermediate, and extreme populations, respectively. Our
populational number ratio is consistent with those of other studies. However, the MP
has a significantly different populational number ratio than the mean value,
and the domination of the primordial population in the MP is consistent with
observations of Galactic GCs that less massive GCs contain larger fractions of
the primordial population. Structural and constituent differences between the
MP and MR may indicate that M92 is a merger remnant in a dwarf galaxy
environment, consistent with recent suggestions that M92 is a GC in a dwarf
galaxy or a remnant nucleus of the progenitor galaxy. A discrepancy between our method and those widely used for HST photometry exists in the primordial
population. Significant magnesium and oxygen depletions of $-$0.8 and $-$0.3
dex, respectively, and helium enhancement of $\Delta Y$ $\gtrsim$ 0.03 are
required to explain the presence of this abnormal primordial group. No clear
explanation is available given the limited information on detailed elemental abundances.
|
We present here the observation of the Cygnus Superbubble (CSB) using the
Solid-state slit camera (SSC) aboard the Monitor of All-sky X-ray Image (MAXI). The
CSB is a large diffuse structure in the Cygnus region with enhanced soft X-ray
emission. By utilizing the CCD spectral resolution of the SSC, we detect Fe,
Ne, Mg emission lines from the CSB for the first time. The best fit model
implies thin hot plasma of kT ~ 0.3 keV with depleted abundance of 0.26 +/- 0.1
solar. Joint spectrum fitting of the ROSAT PSPC data and MAXI/SSC data enables
us to measure precise values of NH and temperature inside the CSB. The results
show that all of the regions in the CSB have similar NH and temperature,
indicating that the CSB is a single entity. The energy budget calculation suggests that 2-3 Myr of stellar wind from Cyg OB2 is enough to power the CSB; however, due to its off-center position, the origin of the CSB is most likely a hypernova.
|
With the first detection of gravitational waves from a binary system of
neutron stars, GW170817, a new window was opened to study the properties of
matter at and above nuclear-saturation density. Reaching densities a few times
that of nuclear matter and temperatures up to $100\,\rm{MeV}$, such mergers
also represent potential sites for a phase transition (PT) from confined
hadronic matter to deconfined quark matter. While the lack of a postmerger
signal in GW170817 has prevented us from assessing experimentally this
scenario, two theoretical studies have explored the postmerger
gravitational-wave signatures of PTs in mergers of binary systems of neutron
stars. We here extend and complete the picture by presenting a novel signature
of the occurrence of a PT. More specifically, using fully general-relativistic
hydrodynamic simulations and employing a suitably constructed equation of state
that includes a PT, we present the occurrence of a "delayed PT", i.e. a PT that
develops only some time after the merger and produces a metastable object with
a quark-matter core, i.e. a hypermassive hybrid star. Because in this scenario,
the postmerger signal exhibits two distinct fundamental gravitational-wave
frequencies -- before and after the PT -- the associated signature promises to
be the strongest and cleanest among those considered so far, and one of the
best signatures of the production of quark matter in the present Universe.
|
Gradient boosting decision trees (GBDTs) have seen widespread adoption in
academia, industry and competitive data science due to their state-of-the-art
performance in many machine learning tasks. One relative downside to these
models is the large number of hyper-parameters that they expose to the
end-user. To maximize the predictive power of GBDT models, one must either
manually tune the hyper-parameters, or utilize automated techniques such as
those based on Bayesian optimization. Both of these approaches are
time-consuming since they involve repeatedly training the model for different
sets of hyper-parameters. A number of software GBDT packages have started to
offer GPU acceleration which can help to alleviate this problem. In this paper,
we consider three such packages: XGBoost, LightGBM and Catboost. Firstly, we
evaluate the performance of the GPU acceleration provided by these packages
using large-scale datasets with varying shapes, sparsities and learning tasks.
Then, we compare the packages in the context of hyper-parameter optimization,
both in terms of how quickly each package converges to a good validation score,
and in terms of generalization performance.
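For reference, here is a minimal sketch of GPU-accelerated training with one of the three packages (our example, not the paper's benchmark code); the parameter names follow the xgboost >= 2.0 API, where older versions used `tree_method="gpu_hist"` instead.

```python
# A minimal sketch of GPU training with XGBoost on synthetic data.
import numpy as np
import xgboost as xgb

X = np.random.rand(100_000, 50).astype(np.float32)
y = (X[:, 0] + np.random.randn(100_000) * 0.1 > 0.5).astype(np.int32)

dtrain = xgb.DMatrix(X, label=y)
params = {
    "objective": "binary:logistic",
    "tree_method": "hist",   # histogram-based split finding
    "device": "cuda",        # run split finding on the GPU (xgboost >= 2.0)
    "max_depth": 8,
    "eta": 0.1,
}
booster = xgb.train(params, dtrain, num_boost_round=200)
```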
|
This paper presents a numerical method to calculate the value function for a
general discounted impulse control problem for piecewise deterministic Markov
processes. Our approach is based on a quantization technique for the underlying
Markov chain defined by the post jump location and inter-arrival time.
Convergence results are obtained and more importantly we are able to give a
convergence rate of the algorithm. The paper is illustrated by a numerical
example.
|
Inspired by the formation of geological structures as earth's crust deforms
by magmatic intrusions, we investigate the elastohydrodynamic growth of a
viscoplastic blister under an elastic sheet. By combining experiments, scaling
analysis and numerical simulations we reveal a new regime for the growth of the
blister's height $\sim t^{5/9}$ and radius $\sim t^{2/9}$. A plug-like flow
inside the blister dictates its dynamics, whereas the blister takes a
quasi-static self-similar shape given by a balance in the pressure gradient
induced by bending of the elastic sheet and the fluid's yield stress.
|
Imaging the change in the magnetization vector in real time by spin-polarized
low-energy electron microscopy, we observed a hydrogen-induced, reversible
spin-reorientation transition in a cobalt bilayer on Ru(0001). Initially,
hydrogen sorption reduces the size of out-of-plane magnetic domains and leads
to the formation of a magnetic stripe domain pattern, which can be understood
as a consequence of reducing the out-of-plane magnetic anisotropy. Further
hydrogen sorption induces a transition to an in-plane easy-axis. Desorbing the
hydrogen by heating the film to 400 K recovers the original out-of-plane
magnetization. By means of ab-initio calculations we determine that the origin
of the transition is the local effect of the hybridization of the hydrogen
orbital and the orbitals of the Co atoms bonded to the adsorbed hydrogen.
|
The central engine that powers gamma-ray bursts (GRBs), the most powerful
explosions in the universe, is still not identified. Besides hyper-accreting
black holes, rapidly spinning and highly magnetized neutron stars, known as
millisecond magnetars, have been suggested to power both long and short GRBs.
The presence of a magnetar engine following compact star mergers is of
particular interest as it would provide essential constraints on the poorly
understood equation of state for neutron stars. Indirect indications of a
magnetar engine in these merger sources have been observed in the form of
plateau features present in the X-ray afterglow light curves of some short
GRBs. Additionally, some X-ray transients lacking gamma-ray bursts (GRB-less)
have been identified as potential magnetar candidates originating from compact
star mergers. Nevertheless, smoking gun evidence is still lacking for a
magnetar engine in short GRBs, and the associated theoretical challenges have
been addressed. Here we present a comprehensive analysis of the broad-band
prompt emission data of a peculiar, very bright GRB 230307A. Despite its
apparently long duration, the prompt emission and host galaxy properties point
toward a compact star merger origin, being consistent with its association with
a kilonova. More intriguingly, an extended X-ray emission component emerges as
the $\gamma$-ray emission dies out, signifying the emergence of a magnetar
central engine. We also identify an achromatic temporal break in the
high-energy band during the prompt emission phase, which has never been observed in previous bursts and reveals a narrow jet with a half-opening angle of approximately $3.4^\circ$.
|
We present the first systematic calculations based on the angular-momentum
projection of cranked Slater determinants. We propose the Iy --> I scheme, by
which one projects the angular momentum I from the 1D cranked state constrained
to the average spin projection of <I_y>=I. Calculations performed for the
rotational band in 46Ti show that the AMP Iy --> I scheme offers a natural
mechanism for correcting the cranking moment of inertia at low-spins and
shifting the terminating state up by ~2 MeV, in accordance with data. We also
apply this scheme to high-spin states near the band termination in A~44 nuclei,
and compare results thereof with experimental data, shell-model calculations,
and results of the approximate analytical symmetry-restoration method proposed
previously.
|
Despite recent advancements in detecting disinformation generated by large
language models (LLMs), current efforts overlook the ever-evolving nature of
this disinformation. In this work, we investigate a challenging yet practical
research problem of detecting evolving LLM-generated disinformation.
Disinformation evolves constantly through the rapid development of LLMs and
their variants. As a consequence, the detection model faces significant
challenges. First, it is inefficient to train separate models for each
disinformation generator. Second, performance decreases in scenarios where
evolving LLM-generated disinformation is encountered in sequential order. To
address this problem, we propose DELD (Detecting Evolving LLM-generated
Disinformation), a parameter-efficient approach that jointly leverages the
general fact-checking capabilities of pre-trained language models (PLM) and the
independent disinformation generation characteristics of various LLMs. In
particular, the learned characteristics are concatenated sequentially to
facilitate knowledge accumulation and transformation. DELD addresses the issue
of label scarcity by integrating the semantic embeddings of disinformation with
trainable soft prompts to elicit model-specific knowledge. Our experiments show
that \textit{DELD} significantly outperforms state-of-the-art methods.
Moreover, our method provides critical insights into the unique patterns of
disinformation generation across different LLMs, offering valuable perspectives
in this line of research.
|
A balanced speech corpus is the basic need for any speech processing task. In this report we describe our effort on the development of an Assamese speech corpus. We mainly focus on some issues and challenges faced during the development of the corpus. As Assamese is a computationally under-resourced language, this is the first effort to develop a speech corpus for it. As corpus development is an ongoing process, in this paper we report only the initial work.
|
Recently, deep neural networks (DNNs) have been widely and successfully used
in Object Detection, e.g. Faster RCNN, YOLO, CenterNet. However, recent studies
have shown that DNNs are vulnerable to adversarial attacks. Adversarial attacks
against object detection can be divided into two categories: whole-pixel attacks and patch attacks. Whereas these attacks add perturbations to a large number of pixels in images, we propose a diffused patch attack (\textbf{DPAttack}) that successfully fools object detectors with diffused patches of asteroid or grid shape, which change only a small number of pixels. Experiments show that our DPAttack can successfully fool most object detectors with diffused patches, and we took second place in the Alibaba Tianchi
competition: Alibaba-Tsinghua Adversarial Challenge on Object Detection. Our
code can be obtained from https://github.com/Wu-Shudeng/DPAttack.
|
We establish an infinitesimal variant of Guo-Jacquet trace formula for the
case of a central simple algebra over a number field $F$ containing a quadratic
field extension $E/F$. It is an equality between a sum of geometric
distributions on the tangent space of some symmetric space and its Fourier
transform. To prove this, we need to define an analogue of Arthur's truncation
and then use the Poisson summation formula. We describe the terms attached to
regular semi-simple orbits as explicit weighted orbital integrals. To compare
them to those for another case studied in our previous work, we state and prove
the weighted fundamental lemma at the infinitesimal level by using Labesse's
work on the base change for $GL_n$.
|
To avoid the complicated topology of surviving clusters induced by standard
Strong Disorder RG in dimension $d>1$, we introduce a modified procedure called
'Boundary Strong Disorder RG' where the order of decimations is chosen a
priori. We apply numerically this modified procedure to the Random Transverse
Field Ising model in dimension $d=2$. We find that the location of the critical
point, the activated exponent $\psi \simeq 0.5$ of the Infinite Disorder
scaling, and the finite-size correlation exponent $\nu_{FS} \simeq 1.3$ are
compatible with the values obtained previously by standard Strong Disorder
RG. Our conclusion is thus that Strong Disorder RG is very robust with respect to changes in the order of decimations. In addition, we analyze in more detail the RG flows within the two phases to show explicitly the presence of various correlation length exponents: we measure the typical correlation exponent
$\nu_{typ} \simeq 0.64$ in the disordered phase (this value is very close to
the correlation exponent $\nu^Q_{pure}(d=2) \simeq 0.63$ of the {\it pure}
two-dimensional quantum Ising Model), and the typical exponent $\nu_h \simeq 1$
within the ordered phase. These values satisfy the relations between critical
exponents imposed by the expected finite-size scaling properties at Infinite
Disorder critical points. Within the disordered phase, we also measure the
fluctuation exponent $\omega \simeq 0.35$ which is compatible with the Directed
Polymer exponent $\omega_{DP}(1+1)=1/3$ in $(1+1)$ dimensions.
|
We consider various probabilistic games with piles for one player or two
players. In each round of the game, a player randomly chooses to add $a$ or $b$
chips to his pile, where $a$ and $b$ are not necessarily positive. If a player would have a negative number of chips after making his play, then his number of chips stays at $0$ and the game continues. All the games we consider satisfy these rules. The game ends when
one collects $n$ chips for the first time. Each player is allowed to start with
$s$ chips where $s\geq 0$. We consider various cases of $(a,b)$ including the
pairs $(1,-1)$ and $(2,-1)$ in particular. We investigate the probability
generating functions of the number of turns required to end the games. We
derive interesting recurrence relations for the sequences of such functions in
$n$ and write these generating functions as rational functions. As an
application, we derive other statistics for the games which include the average
number of turns required to end the game and other higher moments.
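As a numerical illustration (our own sketch, not from the paper), the expected number of turns can be obtained by solving the linear system for the mean hitting times of the underlying Markov chain; the function and its defaults are our conventions:

```python
import numpy as np

def expected_turns(n, a=1, b=-1, p=0.5):
    # Mean hitting times E[s] of the absorbing set {s >= n} for the chain
    # s -> max(s + a, 0) with probability p and s -> max(s + b, 0) otherwise.
    A = np.zeros((n, n))
    for s in range(n):
        A[s, s] += 1.0
        for step, w in ((a, p), (b, 1.0 - p)):
            t = max(s + step, 0)
            if t < n:
                A[s, t] -= w          # transitions to transient states
    return np.linalg.solve(A, np.ones(n))  # E[s] for s = 0, ..., n-1

print(expected_turns(10)[0])  # (a, b) = (1, -1): n(n+1) = 110 turns from s = 0
```

For $(a,b)=(1,-1)$ this reproduces the closed form $E_s=(n-s)(n+s+1)$; the probability generating functions studied in the paper refine such means to higher moments.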
|
We adopt the beam splitter model for losses to analyse the performance of a
recent compact continuous-variable entanglement distillation protocol [Phys.
Rev. Lett. 108, 060502, (2012)] implemented using realistic quantum memories.
We show that the decoherence undergone by a two-mode squeezed state while
stored in a quantum memory can strongly modify the results of the preparatory
step of the protocol. We find that the well-known method for locally increasing
entanglement, phonon subtraction, may not result in entanglement gain when
losses are taken into account. Thus, we investigate the critical number $m_c$
of phonon subtraction attempts from the matter modes of the quantum memory. If
the initial state is not de-Gaussified within $m_c$ attempts, the protocol
should be restarted to obtain any entanglement increase. Moreover, the
condition $m_c>1$ implies an additional constraint on the subtraction beam
splitter interaction transmissivity, viz. it should be about 50% for a wide
range of protocol parameters. Additionally, we consider the average
entanglement rate, which takes into account both the unavoidable probabilistic
nature of the protocol and its possible failure as a result of a large number
of unsuccessful subtraction attempts. We find that a higher value of the
average entanglement can be achieved by increasing the subtraction beam
splitter interaction transmissivity. We conclude that the compact distillation
protocol, with the practical constraints coming from realistic quantum
memories, admits a feasible experimental realization with existing
technologies.
|
We present Space Telescope Imaging Spectrograph (STIS) spectral images of the
HH~30 stellar jet taken through a wide slit over two epochs. The jet is
unresolved spectrally, so the observations produce emission-line images for
each line in the spectrum. This rich dataset shows how physical conditions in
the jet vary with distance and time, produces precise proper motions of knots
within the jet, resolves the jet width close to the star, and gives a spectrum
of the reflected light from the disk over a large wavelength range at several
positions. We introduce a new method for analyzing a set of line ratios based
on minimizing a quadratic form between models and data. The method generates
images of the density, temperature and ionization fraction computed using all
the possible line ratios appropriately weighted. In HH 30, the density declines
with distance from the source in a manner consistent with an expanding flow,
and is larger by a factor of two along the axis of the jet than it is at the
periphery. Ionization in the jet ranges from ~ 5% to 40%, and high
ionization/excitation knots form at about 100 AU from the star and propagate
outward with the flow. These high-excitation knots are not accompanied by
corresponding increases in the density, so if formed by velocity variations the
knots must have a strong internal magnetic pressure to smooth out density
increases while lengthening recombination times.
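A minimal sketch of the quadratic-form minimization described above (our own illustration; `model_ratios` stands in for a hypothetical user-supplied function mapping density, temperature, and ionization fraction to predicted line ratios):

```python
import numpy as np
from scipy.optimize import minimize

def fit_conditions(r_obs, cov, model_ratios, theta0):
    """Minimize (r_obs - r_mod)^T C^{-1} (r_obs - r_mod) over the physical
    parameters theta = (density, temperature, ionization fraction)."""
    cinv = np.linalg.inv(cov)

    def quad_form(theta):
        d = r_obs - model_ratios(theta)
        return d @ cinv @ d

    return minimize(quad_form, theta0, method="Nelder-Mead")
```

Applied position by position, this yields maps of density, temperature and ionization fraction, with the covariance matrix supplying the appropriate weighting of each line ratio.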
|
Recently, substantial progress has been made in text ranking based on
pretrained language models such as BERT. However, there are limited studies on
how to leverage more powerful sequence-to-sequence models such as T5. Existing
attempts usually formulate text ranking as classification and rely on
postprocessing to obtain a ranked list. In this paper, we propose RankT5 and
study two T5-based ranking model structures, an encoder-decoder and an
encoder-only one, so that they not only can directly output ranking scores for
each query-document pair, but also can be fine-tuned with "pairwise" or
"listwise" ranking losses to optimize ranking performances. Our experiments
show that the proposed models with ranking losses can achieve substantial
ranking performance gains on different public text ranking data sets. Moreover,
when fine-tuned with listwise ranking losses, the ranking model appears to have
better zero-shot ranking performance on out-of-domain data sets compared to the
model fine-tuned with classification losses.
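For illustration, a common listwise loss of the kind referenced here is the softmax cross-entropy over the candidate documents of a query; this PyTorch sketch is our own and is not claimed to be the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def listwise_softmax_ce(scores, labels):
    """Listwise ranking loss: cross-entropy between the softmax over predicted
    scores and normalized graded relevance labels, per query.
    scores, labels: tensors of shape (batch, list_size)."""
    log_probs = F.log_softmax(scores, dim=-1)
    targets = labels / labels.sum(dim=-1, keepdim=True).clamp(min=1e-9)
    return -(targets * log_probs).sum(dim=-1).mean()
```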
|
A substantial enhancement of the superconducting gap was recently reported in
clean, large ~30nm, and close to hemispherical Sn grains. A satisfactory
explanation of this behaviour is still missing as shell effects caused by
fluctuations of the spectral density or surface phonons are negligible in this
region. Here we show that this enhancement is caused by spatial inhomogeneities
of the Cooper-pair probability density. In the mean-field approach that we
employ, these inhomogeneities are closely related to the eigenstates of the
one-body problem, namely, a particle in a hemisphere-shaped potential. The
parameter-free theoretical prediction agrees well with the experimental
results. A similar enhancement is predicted for other weakly coupled
superconductors.
|
Multi-frequency interferometry (MFI) is well known as an accurate phase-based
measurement scheme. This paper reveals the inherent relationship between the
unambiguous measurement range (UMR), the outlier probability, the MSE
performance, and the frequency pattern of an MFI system, and then provides the
corresponding criterion for choosing the frequency pattern. We point out that
the theoretically rigorous UMR of MFI deduced in the literature is usually
optimistic for practical application, and we derive a more practical
expression. It is found that the least-squares (LS) estimator of MFI has a
distinctive "double threshold effect": a distinct difference is observed for
the MSE in
moderate and high signal-to-noise ratio (SNR) region (denoted by MMSE and HMSE
respectively) and the second threshold effect occurs during the rapid
transition from MMSE to HMSE with increasing SNR. The closed-form expressions
for the MMSE, HMSE and Cramer-Rao bound (CRB) are further derived, with HMSE
coinciding with CRB. Since the HMSE is insensitive to frequency pattern, we
focus on MMSE minimization by proper frequency optimization. We show that a
prime-based frequency interval can be exploited for the purpose of both outlier
suppression and UMR extension and design a special optimal rearrangement for
any set of frequency interval, in the sense of MMSE minimization. An extremely
simple frequency design method is finally developed. Simulations and a field
experiment verify that the proposed scheme considerably outperforms the
existing method in UMR as well as MSE performance, especially in the transition
from MMSE to HMSE, for both Gaussian and non-Gaussian channels.
|
We present new results of a program aimed at studying the physical
properties, origin and evolution of those phenomena which go under the somewhat
generic definition of "low-ionization, small-scale structures in PNe". We have
obtained morphological and kinematical data for 10 PNe, finding low-ionization
structures with very different properties relative to each other, in terms of
expansion velocities, shapes, sizes and locations relative to the main
nebular components. It is clear that several physical processes have to be
considered in order to account for the formation and evolution of the different
structures observed. We present here some results that are illustrative of our
work - on IC 4593, NGC 3918, K 1-2, Wray 17-1, NGC 6337, He 2-186 and K 4-47 -
and some of the questions that we try to address.
|
We present the first sample-optimal sublinear time algorithms for the sparse
Discrete Fourier Transform over a two-dimensional sqrt{n} x sqrt{n} grid. Our
algorithms are analyzed for /average case/ signals. For signals whose spectrum
is exactly sparse, our algorithms use O(k) samples and run in O(k log k) time,
where k is the expected sparsity of the signal. For signals whose spectrum is
approximately sparse, our algorithm uses O(k log n) samples and runs in O(k
log^2 n) time; the latter algorithm works for k=Theta(sqrt{n}). The number of
samples used by our algorithms matches the known lower bounds for the
respective signal models.
By a known reduction, our algorithms give similar results for the
one-dimensional sparse Discrete Fourier Transform when n is a power of a small
composite number (e.g., n = 6^t).
|
We present a generalization of the theory of quantum symmetric pairs as
developed by Kolb and Letzter. We introduce a class of generalized Satake
diagrams that give rise to (not necessarily involutive) automorphisms of the
second kind of symmetrizable Kac-Moody algebras $\mathfrak{g}$. These lead to
right coideal subalgebras $B_{\mathbf{c},\mathbf{s}}$ of quantized enveloping
algebras $U_q(\mathfrak{g})$. In the case that $\mathfrak{g}$ is a twisted or
untwisted affine Lie algebra of classical type Jimbo found intertwiners
(equivariant maps) of the vector representation of $U_q(\mathfrak{g})$ yielding
trigonometric solutions to the parameter-dependent quantum Yang-Baxter
equation. In the present paper we compute intertwiners of the vector
representation restricted to the subalgebras $B_{\mathbf{c},\mathbf{s}}$ when
$\mathfrak{g}$ is of type ${\rm A}^{(1)}_n$, ${\rm B}^{(1)}_n$, ${\rm
C}^{(1)}_n$ and ${\rm D}^{(1)}_n$. These intertwiners are matrix solutions to
the parameter-dependent quantum reflection equation known as trigonometric
reflection matrices. They are symmetric up to conjugation by a diagonal matrix
and in many cases satisfy a certain sparseness condition: there are at most two
nonzero entries in each row and column. Conjecturally, this classifies all such
solutions in vector spaces carrying this representation. A group of Hopf
algebra automorphisms of $U_q(\mathfrak{g})$ acts on these reflection matrices,
allowing us to show that each reflection matrix found is equivalent to one with
at most two additional free parameters. Additional characteristics of the
reflection matrices such as eigendecompositions and affinization relations are
also obtained. The eigendecompositions suggest that for all these matrices
there should be a natural interpretation in terms of representations of
Hecke-type algebras.
|
Nonlinear evolution of circularly polarized Alfv\'en waves is discussed by
using the recently developed Vlasov-MHD code, which is a generalized
Landau-fluid model. The numerical results indicate that, as long as the
nonlinearity in the system is not too large, the Vlasov-MHD model can validly
solve the time evolution of the Alfv\'enic turbulence in both the linear and
nonlinear stages. The present Vlasov-MHD model is well suited to discussing the
solar coronal heating and solar wind acceleration by Alfv\'en waves propagating
from the photosphere.
|
The alpha and cluster decay properties of the $^{132-138}$Nd, $^{144-158}$Gd,
$^{176-196}$Hg and $^{192-198}$Pb even-even isotopes in the two mass regions A =
130-158 and A = 180-198 are analysed using the Coulomb and Proximity Potential
Model. On examining the clusters at corresponding points in the cold valleys
(points with the same $A_2$) of the various isotopes of a particular nucleus, we find
that at certain mass numbers of the parent nuclei, the clusters emitted are
getting shifted to the next lower atomic number. It is interesting to see that
the change in clusters appears at those isotopes where a change in shape is
occurring correspondingly. Such a change of clusters with shape change is
studied for the first time in cluster decay. The alpha decay half lives of
these nuclei are computed and these are compared with the available
experimental alpha decay data. It is seen that the two are in good agreement.
On making a comparison of the alpha half lives of the normal deformed and super
deformed nuclei, it can be seen that the normal deformed $^{132}$Nd, $^{176-188}$Hg
and $^{192}$Pb nuclei are found to be better alpha emitters than the super
deformed (in excited state) $^{134,136}$Nd, $^{190-196}$Hg and $^{194}$Pb nuclei. The
cluster decay studies reveal that as the atomic number of the parent nuclei
increases the N \neq Z cluster emissions become equally or more probable than
the N=Z emissions. On the whole the alpha and cluster emissions are more
probable from the parents in the heavier mass region (A=180-198) than from the
parents in the lighter mass region (A= 130-158). The effect of quadrupole
({\beta}_2) and hexadecapole ({\beta}_4) deformations of parent and fragments
on the half-lives is also studied.
|
We consider parametric Markov decision processes (pMDPs) that are augmented
with unknown probability distributions over parameter values. The problem is to
compute the probability to satisfy a temporal logic specification with any
concrete MDP that corresponds to a sample from these distributions. As solving
this problem precisely is infeasible, we resort to sampling techniques that
exploit the so-called scenario approach. Based on a finite number of samples of
the parameters, the proposed method yields high-confidence bounds on the
probability of satisfying the specification. The number of samples required to
obtain a high confidence on these bounds is independent of the number of states
and the number of random parameters. Experiments on a large set of benchmarks
show that several thousand samples suffice to obtain tight and high-confidence
lower and upper bounds on the satisfaction probability.
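As a simple stand-in for the scenario-approach bounds (our own hedged sketch, not the authors' method), an exact binomial confidence interval already illustrates how sampled MDPs alone bound the satisfaction probability, with a sample count independent of the model size:

```python
from scipy.stats import beta

def satisfaction_bounds(n_samples, n_satisfying, confidence=0.99):
    """Clopper-Pearson interval for the probability that a sampled concrete
    MDP satisfies the specification."""
    alpha = 1.0 - confidence
    k, n = n_satisfying, n_samples
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi
```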
|
The purpose of this paper is to give a self-contained exposition of the
Atiyah-Bott picture for the Yang-Mills equation over Riemann surfaces with an
emphasis on the analogy to finite dimensional geometric invariant theory. The
main motivation is to provide a careful study of the semistable and unstable
orbits: this includes the analogue of the Ness uniqueness theorem for
Yang-Mills connections, the Kempf-Ness theorem, the Hilbert-Mumford criterion
and a new proof of the moment-weight inequality following an approach outlined
by Donaldson. A central ingredient in our discussion is the Yang-Mills flow for
which we assume longtime existence and convergence.
|
We study the commuting graph on elements of odd prime order in finite simple
groups. The results are used in a forthcoming paper describing the structure of
Bruck loops and Bol loops of exponent 2.
|
We performed a time-resolved spectral analysis of 53 bright gamma-ray bursts
(GRBs) observed by \textit{Fermi}/GBM. Our sample consists of 908 individual
spectra extracted from the finest time slices in each GRB. We fitted them with
the synchrotron radiation model by considering the electron distributions in
five different cases: mono-energetic, single power-law, Maxwellian, traditional
fast cooling, and broken power-law. Our results were further qualified using
the Bayesian Information Criterion (BIC), comparing against fits with empirical
models, namely the so-called Band function and cut-off power-law models. Our
study showed that the synchrotron models, except for the fast-cooling case, can
successfully fit most observed spectra, with the single power-law case being
the most preferred. We also found that the electron distribution indices for
the single power-law synchrotron fit in more than half of our spectra exhibit
flux-tracking behavior, i.e., the index increases/decreases as the flux
increases/decreases, implying that the distribution of the radiating
electrons narrows with time before the flux peaks and broadens afterward.
Our results indicate that the synchrotron radiation
is still feasible as a radiation mechanism of the GRB prompt emission phase.
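For concreteness, the BIC comparison used to qualify the fits has the standard form below (a generic sketch with our own names; a lower BIC indicates the preferred model):

```python
import numpy as np

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: k * ln(n) - 2 * ln(L_max)."""
    return n_params * np.log(n_data) - 2.0 * log_likelihood

# A negative difference favors the synchrotron fit over the Band function:
# delta = bic(ll_synchrotron, k_synchrotron, n) - bic(ll_band, k_band, n)
```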
|
In this work we consider brightness and mass conservation laws for motion
estimation on evolving Riemannian 2-manifolds that allow for a radial
parametrisation from the 2-sphere. While conservation of brightness constitutes
the foundation for optical flow methods and has been generalised to said
scenario, we formulate in this article the principle of mass conservation for
time-varying surfaces which are embedded in Euclidean 3-space and derive a
generalised continuity equation. The main motivation for this work is efficient
cell motion estimation in time-lapse (4D) volumetric fluorescence microscopy
images of a living zebrafish embryo. The increasing spatial and temporal
resolution of modern microscopes requires efficient analysis of such data. With
this
application in mind we address this need and follow an emerging paradigm in
this field: dimensional reduction. In light of the ill-posedness of considered
conservation laws we employ Tikhonov regularisation and propose the use of
spatially varying regularisation functionals that recover motion only in
regions with cells. For the efficient numerical solution we devise a Galerkin
method based on compactly supported (tangent) vectorial basis functions.
Furthermore, for the fast and accurate estimation of the evolving sphere-like
surface from scattered data we utilise surface interpolation with
spatio-temporal regularisation. We present numerical results based on
aforementioned zebrafish microscopy data featuring fluorescently labelled
cells.
|
Several closely related ab initio thermal mean-field theories for fermions,
both well-established and new ones, are compared with one another at the
formalism level and numerically. The theories considered are Fermi-Dirac
theory, thermal Hartree-Fock (HF) theory, two modifications of the thermal
single-determinant approximation of Kaplan and Argyres, and first-order
finite-temperature many-body perturbation theory based on zero-temperature or
thermal HF reference. The thermal full-configuration-interaction theory is used
as the benchmark.
|
We solve the time-independent Gross-Pitaevskii equation modeling a
Bose-Einstein condensate trapped in an anisotropic harmonic potential using a
pseudospectral method. Numerically obtained values of the energy and the
chemical potential for the condensate with positive and negative scattering
length are compared with those from the literature. The results are in good
agreement when the atomic interaction is not too strong.
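A minimal sketch of such a pseudospectral solver, reduced to one dimension in dimensionless units and using imaginary-time propagation (the grid size, interaction strength, and time step below are our own illustrative choices, not the setup of the paper):

```python
import numpy as np

# Ground state of -0.5 psi'' + 0.5 x^2 psi + g |psi|^2 psi = mu psi
N, L, g, dt, steps = 256, 16.0, 10.0, 1e-3, 20000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi = np.exp(-x**2 / 2).astype(complex)          # Gaussian initial guess

for _ in range(steps):
    psi *= np.exp(-0.5 * dt * (0.5 * x**2 + g * np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-dt * 0.5 * k**2) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * dt * (0.5 * x**2 + g * np.abs(psi)**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # restore unit norm

dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))     # spectral derivative
mu = np.sum(0.5 * np.abs(dpsi)**2 + 0.5 * x**2 * np.abs(psi)**2
            + g * np.abs(psi)**4).real * dx      # chemical potential
```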
|
Stimulated Raman adiabatic passage (STIRAP) is a quantum protocol that can be used for
robust state preparation in a three-level system. It has been commonly employed
in quantum optics, but recently this technique has drawn attention also in
circuit quantum electrodynamics. The protocol relies on two slowly varying
drive pulses that couple the initial and the target state via an intermediate
state, which remains unpopulated. Here we study the detrimental effect of the
parasitic couplings of the drives into transitions other than those required by
the protocol. The effect is most prominent in systems with almost harmonic
energy level structure, such as the transmon. We show that under these
conditions in the presence of decoherence there exists an optimal STIRAP
amplitude for population transfer.
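For reference, the counterintuitive pulse ordering the protocol relies on can be written down directly (a textbook-style sketch with our own parameter names, not the paper's drive configuration): the Stokes pulse, coupling the intermediate and target states, precedes the pump pulse.

```python
import numpy as np

def stirap_pulses(t, delay=1.0, width=1.0, omega0=1.0):
    """Gaussian STIRAP drive envelopes: the Stokes pulse peaks before the
    pump pulse by `delay`, keeping the intermediate state unpopulated."""
    omega_stokes = omega0 * np.exp(-((t + delay / 2) / width) ** 2)
    omega_pump = omega0 * np.exp(-((t - delay / 2) / width) ** 2)
    return omega_pump, omega_stokes
```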
|
We introduce a convolutional neural network model for unsupervised learning
of depth and ego-motion from cylindrical panoramic video. Panoramic depth
estimation is an important technology for applications such as virtual reality,
3D modeling, and autonomous robotic navigation. In contrast to previous
approaches for applying convolutional neural networks to panoramic imagery, we
use the cylindrical panoramic projection which allows for the use of the
traditional CNN layers such as convolutional filters and max pooling without
modification. Our evaluation of synthetic and real data shows that unsupervised
learning of depth and ego-motion on cylindrical panoramic images can produce
high-quality depth maps and that an increased field-of-view improves ego-motion
estimation accuracy. We create two new datasets to evaluate our approach: a
synthetic dataset created using the CARLA simulator, and Headcam, a novel
dataset of panoramic video collected from a helmet-mounted camera while biking
in an urban setting. We also apply our network to the problem of converting
monocular panoramas to stereo panoramas.
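As an illustration of the cylindrical projection the method relies on (a hedged sketch with our own coordinate conventions, not code from the paper), 3D points map to the panorama with azimuth varying linearly along the columns, which is why standard convolution and pooling layers apply without modification:

```python
import numpy as np

def project_to_cylinder(points, width, height, fov_v=np.pi / 2):
    """Project 3D points (N, 3) in camera coordinates onto a cylindrical
    panorama: azimuth theta maps to columns, cylinder height h to rows."""
    X, Y, Z = points.T
    theta = np.arctan2(X, Z)              # azimuth in (-pi, pi]
    h = Y / np.hypot(X, Z)                # height on the unit cylinder
    h_max = np.tan(fov_v / 2)
    u = (theta + np.pi) / (2 * np.pi) * (width - 1)
    v = (h / h_max + 1) * 0.5 * (height - 1)
    return np.stack([u, v], axis=1)
```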
|
Decentralized Finance (DeFi) refers to financial services that are not
necessarily related to crypto-currencies. By employing blockchain for security
and integrity, DeFi creates new possibilities that attract retail and
institutional users, including central banks. Given its novel applications and
sophisticated designs, distinguishing between DeFi services and understanding
the risks involved is often complex. This work systematically presents the major
categories of DeFi protocols that cover over 90\% of total value locked (TVL)
in DeFi. It establishes a structured methodology to differentiate between DeFi
protocols based on their design and architecture. Every DeFi protocol is
classified into one of three groups: liquidity pools, pegged and synthetic
tokens, and aggregator protocols, followed by risk analysis. In particular, we
classify stablecoins, liquid staking tokens, and bridged (wrapped) assets as
pegged tokens, as they bear similar risks. The full risk exposure of DeFi users is
derived not only from the DeFi protocol design but also from how it is used and
with which tokens.
|
We will classify physically admissible manifold structures by the use of
Waldhausen categories. These categories give rise to algebraic K-Theory.
Moreover, we will show that a universal K-spectrum is necessary for a physical
manifold to be admissible. Applications to the generalized structure of
D-branes are also provided. This might give novel insights into what the
manifold structure in String and M-Theory looks like.
|
In this paper, we study the application of deep reinforcement learning (DRL)
algorithms in the context of
local navigation problems, in which a robot moves towards a goal location in
unknown and cluttered workspaces equipped only with limited-range exteroceptive
sensors, such as LiDAR. Collision avoidance policies based on DRL present some
advantages, but they are quite susceptible to local minima, since their capacity
to learn suitable actions is limited to the sensor range. Because most robots
perform tasks in unstructured environments, it is of great interest to seek
generalized local navigation policies capable of avoiding local minima,
especially in untrained scenarios. To do so, we propose a novel reward function
that incorporates map information gained in the training stage, increasing the
agent's capacity to deliberate about the best course of action. Also, we use
the soft actor-critic (SAC) algorithm to train our artificial neural network
(ANN), which proves more effective than others in the state-of-the-art
literature. A set of sim-to-sim and sim-to-real
experiments illustrate that our proposed reward combined with the SAC
outperforms the compared methods in terms of local minima and collision
avoidance.
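A hedged sketch of what a map-aware shaping reward could look like (entirely our own illustration; the potential map, weights, and grid indexing are hypothetical and not the paper's exact formulation):

```python
import numpy as np

def reward(pos, prev_pos, goal, collided, potential_map,
           w_prog=1.0, w_map=0.5, w_coll=100.0):
    """Progress toward the goal plus a shaping term from a potential map
    precomputed on the training map (e.g. geodesic distance to the goal),
    minus a collision penalty."""
    progress = np.linalg.norm(prev_pos - goal) - np.linalg.norm(pos - goal)
    shaping = (potential_map[tuple(prev_pos.astype(int))]
               - potential_map[tuple(pos.astype(int))])
    return w_prog * progress + w_map * shaping - (w_coll if collided else 0.0)
```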
|
We study Dirac-harmonic maps from surfaces to manifolds with torsion, which
is motivated from the superstring action considered in theoretical physics. We
discuss analytic and geometric properties of such maps and outline an existence
result for uncoupled solutions.
|
Content-aware visual-textual presentation layout aims at arranging pre-defined
elements, including text, logo, and underlay, in the spatial space of a given
canvas, which is key to automatic template-free creative graphic design. In
practical applications, e.g., poster designs, the canvas is originally
non-empty, and both inter-element relationships as well as inter-layer
relationships should be concerned when generating a proper layout. A few recent
works deal with them simultaneously, but they still suffer from poor graphic
performance, such as a lack of layout variety or spatial non-alignment. Since
content-aware visual-textual presentation layout is a novel task, we first
construct a new dataset named PosterLayout, which consists of 9,974
poster-layout pairs and 905 images, i.e., non-empty canvases. It is more
challenging and useful for greater layout variety, domain diversity, and
content diversity. Then, we propose design sequence formation (DSF) that
reorganizes elements in layouts to imitate the design processes of human
designers, and a novel CNN-LSTM-based conditional generative adversarial
network (GAN) is presented to generate proper layouts. Specifically, the
discriminator is design-sequence-aware and will supervise the "design" process
of the generator. Experimental results verify the usefulness of the new
benchmark and the effectiveness of the proposed approach, which achieves the
best performance by generating suitable layouts for diverse canvases.
|
We use extreme value theory to estimate the probability of successive
exceedances of a threshold value of a time-series of an observable on several
classes of chaotic dynamical systems. The observables have either a Fr\'echet
(fat-tailed) or Weibull (bounded) distribution. The motivation for this work
was to give estimates of the probabilities of sustained periods of weather
anomalies such as heat-waves, cold spells or prolonged periods of rainfall in
climate models. Our predictions are borne out by numerical simulations and also
analysis of rainfall and temperature data.
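An empirical counterpart of the estimated quantity, useful for checking such predictions against simulated or observed series (our own sketch; the run-length convention is an assumption):

```python
import numpy as np

def successive_exceedance_prob(series, threshold, k):
    """Empirical probability that an exceedance of `threshold` starts a run
    of at least k consecutive exceedances."""
    exceed = np.asarray(series) > threshold
    runs = np.convolve(exceed.astype(int), np.ones(k, dtype=int),
                       mode="valid") == k            # run of length k at i
    starts = exceed[: len(runs)]
    return runs[starts].mean() if starts.any() else np.nan
```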
|
Protecting sensitive information is crucial in today's world of Large
Language Models (LLMs) and data-driven services. One common method of
preserving privacy is to use data perturbation techniques to reduce the
overreaching utility of (sensitive) Personally Identifiable Information (PII)
data while maintaining its statistical and semantic properties. However, data
perturbation methods often result in significant information loss, making them
impractical to use. In this paper, we propose 'Life of PII', a novel
Obfuscation Transformer framework for transforming PII into faux-PII while
preserving the original information, intent, and context as much as possible.
Our approach includes an API to interface with the given document, a
configuration-based obfuscator, and a model based on the Transformer
architecture, which has shown high context preservation and performance in
natural language processing tasks and LLMs.
Our Transformer-based approach learns a mapping between the original PII and
its transformed faux-PII representation, which we call "obfuscated" data. Our
experiments demonstrate that our method, called Life of PII, outperforms
traditional data perturbation techniques in terms of both utility preservation
and privacy protection. We show that our approach can effectively reduce
utility loss while preserving the original information, offering greater
flexibility in the trade-off between privacy protection and data utility. Our
work provides a solution for protecting PII in various real-world applications.
|
The Mizar Mathematical Library (MML) is a rich database of formalized
mathematical proofs (see http://mizar.org). Owing to its large size (it
contains more than 1100 "articles" summing to nearly 2.5 million lines of text,
expressing more than 50000 theorems and 10000 definitions using more than 7000
symbols), the nature of its contents (the MML is slanted toward pure
mathematics), and its classical foundations (first-order logic, set theory,
natural deduction), the MML is an especially attractive target for research on
foundations of mathematics. We have implemented a system, mizar-items, on which
a variety of such foundational experiments can be based. The heart of
mizar-items is a method for decomposing the contents of the MML into
fine-grained "items" (e.g., theorem, definition, notation, etc.) and computing
dependency relations among these items. mizar-items also comes equipped with a
website for exploring these dependencies and interacting with them.
|
We study data processing inequalities that are derived from a certain class
of generalized information measures, where a series of convex functions and
multiplicative likelihood ratios are nested alternately. While these
information measures can be viewed as a special case of the most general
Zakai-Ziv generalized information measure, this special nested structure calls
for attention and motivates our study. Specifically, a certain choice of the
convex functions leads to an information measure that extends the notion of the
Bhattacharyya distance (or the Chernoff divergence): While the ordinary
Bhattacharyya distance is based on the (weighted) geometric mean of two
replicas of the channel's conditional distribution, the more general
information measure allows an arbitrary number of such replicas. We apply the
data processing inequality induced by this information measure to a detailed
study of lower bounds of parameter estimation under additive white Gaussian
noise (AWGN) and show that in certain cases, tighter bounds can be obtained by
using more than two replicas. While the resulting lower bound may not compete
favorably with the best bounds available for the ordinary AWGN channel, the
advantage of the new lower bound, relative to the other bounds, becomes
significant in the presence of channel uncertainty, like unknown fading. This
different behavior in the presence of channel uncertainty is explained by the
convexity property of the information measure.
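For reference, the ordinary two-replica quantity that this measure generalizes can be computed directly (a standard formula; the discrete-distribution setting is our simplification):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance -ln(sum_i sqrt(p_i * q_i)) between two discrete
    distributions; the generalized measure replaces the weighted geometric
    mean of two replicas by one over an arbitrary number of replicas."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return -np.log(np.sum(np.sqrt(p * q)))
```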
|
The positivity property for canonical bases asserts that the structure
constants of the multiplication for the canonical basis are in ${\mathbb
N}[v,v^{-1}]$. Let $\mathbf U$ be the quantum group over ${\mathbb Q}(v)$
associated with a symmetric Cartan datum. The positivity property for the
positive part ${\mathbf U}^+$ of ${\mathbf U}$ was proved by Lusztig. He
conjectured that the positivity property holds for the modified form
$\dot{\mathbf U}$ of ${\mathbf U}$. In this paper, we prove that the structure
constants for the canonical basis of $\dot{{\mathbf U}}(\widehat{\frak{sl}}_n)$
coincide with certain structure constants for the canonical basis of ${\mathbf
U}(\widehat{\frak{sl}}_N)^+$ for $n<N$. In particular, the positivity property
for $\dot{{\mathbf U}}(\widehat{\frak{sl}}_n)$ follows from the positivity
property for ${\mathbf U}(\widehat{\frak{sl}}_N)^+$.
|
The goal of this work is to probe the total mass distribution of early-type
galaxies with globular clusters (GCs) as kinematic tracers, by constraining the
parameters of the profile with a flexible modelling approach. To that end, we
leverage the extended spatial distribution of GCs from the SLUGGS survey
($\langle R_{\rm GC,\ max} \rangle \sim 8R_{\rm e}$) in combination with
discrete dynamical modelling. We use discrete Jeans anisotropic modelling in
cylindrical coordinates to determine the velocity moments at the location of
the GCs in our sample. We use a Bayesian framework to determine the best-fit
parameters of the total mass density profile and orbital properties of the GC
systems. We find that the orbital properties (anisotropy and rotation of the
dispersion-dominated GC systems) minimally impact the measurements of the inner
slope and enclosed mass, while a strong presence of dynamically-distinct
subpopulations or low numbers of kinematic tracers can bias the results. Owing
to the large spatial extent of the tracers, our method is sensitive to the
intrinsic inner slope of the total mass profile and we find $\overline{\alpha}
= -1.88\pm 0.01$ for 12 galaxies with robust measurements. To compare our
results with literature values we fit a single power-law profile to the
resulting total mass density. In the radial range 0.1-4~$R_{\rm e}$ our
measured slope has a value of $\langle \gamma_{\rm tot}\rangle = -2.22\pm0.14$
and is in good agreement with the literature.
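The final comparison step, fitting a single power-law slope to the recovered density profile, amounts to a log-log least-squares fit (our own minimal sketch; the radial range is the one quoted above):

```python
import numpy as np

def power_law_slope(r, rho):
    """Least-squares slope gamma of rho proportional to r^gamma, fitted in
    log-log space over the sampled radii (e.g. 0.1-4 R_e)."""
    slope, _ = np.polyfit(np.log(r), np.log(rho), 1)
    return slope
```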
|
We study how the curvature of spacetime, in conjunction with solar radiation
pressure (SRP), affects the bound orbital motion of solar sails. While neither
the curvature of spacetime nor the SRP alter the form of Kepler's third law by
themselves, their simultaneous effects lead to deviations from this law. We
also study deviations from Keplerian motion due to frame dragging, the
gravitational multipole moments of the sun, a possible net electric charge on
the sun, and a positive cosmological constant. The presence of the SRP tends to
increase these deviations by several orders of magnitude, possibly rendering
some of them detectable. As for non-circular bound orbits, the SRP dampens the
rate at which the perihelion is shifted due to curved spacetime, while the
perihelion shift due to the oblateness of the sun is increased. With regards to
the Lense-Thirring effect, the SRP increases the angle of precession of polar
orbits during one orbital period, although the precession frequency is not
actually altered. We also consider non-Keplerian orbits, which lie outside of
the plane of the sun. In particular, we investigate how the pitch angle of the
solar sail is affected by the partial absorption of light by the sail, general
relativistic effects, and the oblateness of the sun. Non-Keplerian orbits
exhibit an analog of the Lense-Thirring effect, in that the orbital plane
precesses around the sun. A near-solar mission for observations of these
effects could provide an interesting confirmation of these phenomena.
|
Fix an integer $\kappa\geqslant 2$. Let $P$ be prime and let $k> \kappa$ be
an even integer. For $f$ a holomorphic cusp form of weight $k$ and full level
and $g$ a primitive holomorphic cusp form of weight $2 \kappa$ and level $P$,
we prove hybrid subconvexity bounds for $L \left(\tfrac{1}{2}, \text{Sym}^2 f
\otimes g\right)$ in the $k$ and $P$ aspects when $P^{\frac {13} {64} + \delta}
< k < P^{\frac 3 8 - \delta}$ for any $0 < \delta < \frac {11} {128}$. These
bounds are achieved through a first moment method (with amplification when
$P^{\frac {13} {64}} < k \leqslant P^{\frac 4 {13}}$).
|
Multi-Modal Large Language Models (MLLMs), despite being successful, exhibit
limited generality and often fall short when compared to specialized models.
Recently, LLM-based agents have been developed to address these challenges by
selecting appropriate specialized models as tools based on user inputs.
However, such advancements have not been extensively explored within the
medical domain. To bridge this gap, this paper introduces the first agent
explicitly designed for the medical field, named \textbf{M}ulti-modal
\textbf{Med}ical \textbf{Agent} (MMedAgent). We curate an instruction-tuning
dataset comprising six medical tools solving seven tasks, enabling the agent to
choose the most suitable tools for a given task. Comprehensive experiments
demonstrate that MMedAgent achieves superior performance across a variety of
medical tasks compared to state-of-the-art open-source methods and even the
closed-source model, GPT-4o. Furthermore, MMedAgent exhibits efficiency in
updating and integrating new medical tools.
|
Multi-epoch observations with ACS on HST provide a unique and comprehensive
probe of stellar dynamics within NGC 6397. We are able to confront analytic
models of the globular cluster with the observed stellar proper motions. The
measured proper motions probe well along the main sequence from 0.8 to below
0.1 M$_\odot$ as well as white dwarfs younger than one gigayear. The observed
field lies just beyond the half-light radius where standard models of globular
cluster dynamics (e.g. based on a lowered Maxwellian phase-space distribution)
make very robust predictions for the stellar proper motions as a function of
mass. The observed proper motions show no evidence for anisotropy in the
velocity distribution; furthermore, the observations agree in detail with a
straightforward model of the stellar distribution function. We do not find any
evidence that the young white dwarfs have received a natal kick in
contradiction with earlier results. Using the observed proper motions of the
main-sequence stars, we obtain a kinematic estimate of the distance to NGC 6397
of $2.2^{+0.5}_{-0.7}$ kpc and a mass of the cluster of $1.1 \pm 0.1 \times
10^5 \mathrm{M}_\odot$ at the photometric distance of 2.53 kpc. One of the
main-sequence stars appears to travel on a trajectory that will escape the
cluster, yielding an estimate of the evaporation timescale, over which the
number of stars in the cluster decreases by a factor of e, of about 3 Gyr. The
proper motions of the youngest white dwarfs appear to resemble those of the
most massive main-sequence stars, providing the first direct constraint on the
relaxation time of the stars in a globular cluster: greater than or about 0.7
Gyr.
|
In this article we investigate the electrical conductance of an insulating
porous medium (e.g., a sedimentary rock) filled with an electrolyte (e.g.,
brine), usually described using the Archie cementation exponent. We show how
the electrical conductance depends on changes in the drift velocity and the
length of the electric field lines, in addition to the porosity and the
conductance of the electrolyte. We characterized the length of the electric
field lines by a tortuosity and the changes in drift velocity by a constriction
factor. Both the tortuosity and the constriction factor are descriptors of the
pore microstructure. We define a conductance reduction factor to measure the
local contributions of the pore microstructure to the global conductance. It is
shown that the global conductance reduction factor equals the tortuosity
squared divided by the constriction factor, thereby proving that the
combined effect of tortuosity and constriction, in addition to the porosity and
conductance of the electrolyte, fully describes the effective electrical
conductance of a porous medium. We show that our tortuosity, constriction
factor, and conductance reduction factor reproduce the electrical conductance
for idealized porous media. They are also applied to Bentheimer sandstone,
where we describe a microstructure-related correlation between porosity and
conductivity using both the global conductance reduction factor and the
distinct contributions from tortuosity and constriction. Overall, this work
shows how the empirical Archie cementation exponent can be substituted by more
descriptive, physical parameters, either by the global conductance reduction
factor or by tortuosity and constriction.
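One consistent reading of this relation in formulas (our own notation; the symbols are assumptions, not fixed by the text):

```latex
% With porosity \phi, electrolyte conductivity \sigma_w, tortuosity \tau and
% constriction factor C, the global conductance reduction factor f and the
% effective conductivity would read
f = \frac{\tau^{2}}{C},
\qquad
\sigma_{\mathrm{eff}} = \frac{\phi\,\sigma_w}{f} = \frac{\phi\,C\,\sigma_w}{\tau^{2}},
% in place of Archie's empirical law \sigma_{\mathrm{eff}} = \phi^{m}\sigma_w.
```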
|
Bulk motion Comptonization utilizes properties of both matter and radiation
close to the horizon of a black hole. Computations with these considerations
produce hard tails of energy spectral slope $\sim 1.5-1.7$. These are the most
direct evidence of the horizon of a black hole. We argue that even in presence
of winds and outflows this property is not likely to change as winds are
negligible in soft states.
|
A brane model of the universe is considered for a zero-mass particle. An
equation of Wheeler-DeWitt type is obtained using a variational principle from
the well-known conservation laws inside the brane. This equation includes a
term accounting for the variation of the brane topology. Solutions are obtained
analytically under some simplifications, and dispersion relations are derived
for the frequency of the wave associated with the particle.
|
We compute physical properties across the phase diagram of the $t$-$J_\perp$
chain with long-range dipolar interactions, which describe ultracold polar
molecules on optical lattices. Our results obtained by the density-matrix
renormalization group (DMRG) indicate that superconductivity is enhanced when
the Ising component $J_z$ of the spin-spin interaction and the charge component
$V$ are tuned to zero, and even further by the long-range dipolar interactions.
At low densities, a substantially larger spin gap is obtained. We provide
evidence that long-range interactions lead to algebraically decaying
correlation functions despite the presence of a gap. Although this has recently
been observed in other long-range interacting spin and fermion models, the
correlations in our case have the peculiar property of having a small and
continuously varying exponent. We construct simple analytic models and
arguments to understand the most salient features.
|