Topological connections in the single-streaming voids and multistreaming
filaments and walls reveal a cosmic web structure different from traditional
mass density fields. A single void structure not only percolates the
multistream field in all directions, but also occupies over 99 per cent of
all the single-streaming regions. Sub-grid analyses on scales smaller than
simulation resolution reveal tiny pockets of voids that are isolated by
membranes of the structure. For the multistreaming excursion sets, the
percolating structure is significantly thinner than the filaments in
the over-density excursion approach.
Hessian eigenvalues of the multistream field are used as local geometrical
indicators of dark matter structures. Single-streaming regions have mostly
zero eigenvalues. Parameter-free conditions on the eigenvalues in the
multistream region may be used to delineate primitive geometries with
concavities corresponding to filaments, walls and haloes.
|
Network embedding, which maps graphs to distributed representations, is a
unified framework for various graph inference tasks. According to the topology
properties (e.g., structural roles and community memberships of nodes) to be
preserved, it can be categorized into identity embedding and position embedding.
However, existing methods can only capture one type of property. Some
approaches can support the inductive inference that generalizes the embedding
model to new nodes or graphs, but they rely on the availability of attributes. Due
to the complicated correlations between topology and attributes, it is unclear
for some inductive methods which type of property they can capture. In this
study, we explore a unified framework for the joint inductive inference of
identity and position embeddings without attributes. An inductive random walk
embedding (IRWE) method is proposed, which combines multiple attention units to
handle the random walk on graph topology and simultaneously derives identity
and position embeddings that are jointly optimized. In particular, we
demonstrate that some random walk statistics can be informative features to
characterize node identities and positions while supporting the inductive
embedding inference. Experiments validate the superior performance of IRWE
beyond various baselines for the transductive and inductive inference of
identity and position embeddings.
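To make the idea of attribute-free random walk statistics concrete, here is a minimal sketch (not the paper's IRWE implementation; the walk length, walk count, and toy graph are assumptions) of two walk-derived signatures: a per-node return-probability profile, which behaves like an identity feature, and a visit distribution over nodes, which behaves like a position feature.

```python
import numpy as np

# Sketch of random-walk statistics usable as attribute-free node features:
# (i) return frequencies by step, an identity-like structural signature, and
# (ii) visit counts over nodes, a position-like signature. Walk length and
# count are illustrative assumptions.

def walk_statistics(adj: np.ndarray, n_walks=200, walk_len=6, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    returns = np.zeros((n, walk_len))    # identity-like: return probabilities
    visits = np.zeros((n, n))            # position-like: visit distribution
    for v in range(n):
        for _ in range(n_walks):
            u = v
            for step in range(walk_len):
                nbrs = np.flatnonzero(adj[u])
                u = rng.choice(nbrs)
                returns[v, step] += (u == v)
                visits[v, u] += 1
    return returns / n_walks, visits / (n_walks * walk_len)

# toy graph: two triangles joined by a bridge
adj = np.zeros((6, 6), int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
ident, pos = walk_statistics(adj)
print(ident.round(2))   # structurally similar nodes get similar profiles
```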
|
As of today, the main business application of onomastics is naming, or
branding: finding the proper name for your company or your product to stand out
in the world. Meaningfully, Onoma, the Greek root for name, is also a
registered trademark of Nomen, the naming agency founded by Marcel Botton in
1981. Nomen initially licensed one of Roland Moreno's inventions, the Radoteur
name generator, and created many distinctive and global brand names such as:
Vinci, Clio or Amundi. But once your business has a name, should you forget
about onomastics? Not anymore. Globalization, digitalization and Big Data
open new fields for experimenting with disruptive applications in Sales and Marketing,
Communication, HR and Risk Management. Though discrimination based on names
carries a high risk of abuse, it can also open new, unexpected avenues for
developing poor areas.
|
We present a novel deep learning-based framework: Embedded Feature Similarity
Optimization with Specific Parameter Initialization (SOPI) for 2D/3D medical
image registration, a most challenging problem due to difficulties
such as dimensional mismatch, heavy computational load and the lack of a gold
evaluation standard. The framework we design includes a parameter specification
module to efficiently choose the initial pose parameters and a
fine-registration module to align images. The proposed framework takes
extracting multi-scale features into consideration using a novel composite
connection encoder with special training techniques. We compare the method with
both learning-based methods and optimization-based methods on an in-house
CT/X-ray dataset as well as simulated data to further evaluate performance. Our
experiments demonstrate that the method in this paper has improved the
registration performance, and thereby outperforms the existing methods in terms
of accuracy and running time. We also show the potential of the proposed method
as an initial pose estimator. The code is available at
https://github.com/m1nhengChen/SOPI
|
A restless multi-armed bandit problem that arises in multichannel
opportunistic communications is considered, where channels are modeled as
independent and identical Gilbert-Elliot channels and channel state
observations are subject to errors. A simple structure of the myopic policy is
established under a certain condition on the false alarm probability of the
channel state detector. It is shown that the myopic policy has a semi-universal
structure that reduces channel selection to a simple round-robin procedure and
obviates the need to know the underlying Markov transition probabilities. The
optimality of the myopic policy is proved for the case of two channels and
conjectured for the general case based on numerical examples.
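As an illustration of the semi-universal round-robin structure described above, the following minimal simulation sketch keeps sensing the current channel while it is observed good and moves it to the back of a round-robin queue otherwise. The transition probabilities, detector model, and horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the round-robin myopic policy for identical Gilbert-Elliot
# channels with imperfect state detection. All parameter values below are
# illustrative assumptions.

rng = np.random.default_rng(0)
n_channels, horizon = 4, 10_000
p11, p01 = 0.8, 0.2            # P(good->good), P(bad->good): positively correlated
p_false_alarm = 0.05           # detector false alarm probability

states = rng.random(n_channels) < 0.5   # true channel states (True = good)
order = list(range(n_channels))          # round-robin ordering
successes = 0

for t in range(horizon):
    ch = order[0]                        # myopic policy: sense head of queue
    good = states[ch]
    # imperfect detection: a good channel may be flagged busy (false alarm)
    observed_good = good and (rng.random() > p_false_alarm)
    if observed_good:
        successes += 1                   # stay on a channel observed good
    else:
        order = order[1:] + [ch]         # observed bad: rotate to the end
    # Markov evolution of every channel state
    stay = np.where(states, p11, p01)
    states = rng.random(n_channels) < stay

print(f"throughput: {successes / horizon:.3f}")
```

Note that the policy never uses p11 or p01 directly, which is the point of the semi-universal structure: channel selection reduces to queue bookkeeping.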
|
Local maxima of random processes are useful for finding important regions and
are routinely used for summarising features of interest (e.g. in
neuroimaging). In this work we provide confidence regions for the location of
local maxima of the mean and standardized effect size (i.e. Cohen's d) given
multiple realisations of a random process. We prove central limit theorems for
the location of the maximum of mean and t-statistic random fields and use these
to provide asymptotic confidence regions for the location of peaks of the mean
and Cohen's d. Under the assumption of stationarity we develop Monte Carlo
confidence regions for the location of peaks of the mean that have better
finite sample coverage than regions derived based on classical asymptotic
normality. We illustrate our methods on 1D MEG data and 2D fMRI data from the
UK Biobank.
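The paper develops CLT-based and Monte Carlo confidence regions; as a rough illustration of the goal only, here is a bootstrap stand-in (not the paper's method), where the signal shape, noise level, and number of realisations are all assumptions.

```python
import numpy as np

# Illustrative bootstrap region for the location of the peak of the mean of a
# 1D random process, given multiple realisations. This is a stand-in for the
# paper's CLT/Monte Carlo constructions; all modelling choices are assumptions.

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 201)
mu = np.exp(-(x - 0.2) ** 2 / 0.1)                 # true mean, peak at x = 0.2
data = mu + 0.5 * rng.normal(size=(50, x.size))    # 50 noisy realisations

boot_peaks = []
for _ in range(2000):
    idx = rng.integers(0, data.shape[0], data.shape[0])   # resample subjects
    boot_peaks.append(x[np.argmax(data[idx].mean(axis=0))])
lo, hi = np.quantile(boot_peaks, [0.025, 0.975])
print(f"~95% region for the peak location: [{lo:.3f}, {hi:.3f}]")
```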
|
A recent preprint [arxiv:1807.08572] has reported the observation of room
temperature superconductivity in a nanostructured solid composed of gold and
silver nanocrystals. Given the extraordinary and exciting nature of this claim,
it is worth examining the reported data closely. In this short comment I point
out a very surprising feature in the data: an identical pattern of noise for
two presumably independent measurements of the magnetic susceptibility as a
function of temperature.
|
Working within the post-Minkowskian approach to General Relativity, we prove
that the radiation-reaction to the emission of gravitational waves during the
large-impact-parameter scattering of two (classical) point masses modifies the
conservative scattering angle by an additional contribution of order $G^3$
which involves a high-energy (or massless) logarithmic divergence of opposite
sign to the one contained in the third-post-Minkowskian result of Bern et al.
[Phys. Rev. Lett. {\bf 122}, 201603 (2019)]. The high-energy limit of the
resulting radiation-reaction-corrected (classical) scattering angle is finite,
and is found to agree with the one following from the (quantum) eikonal-phase
result of Amati, Ciafaloni and Veneziano [Nucl. Phys. B {\bf 347}, 550
(1990)].
|
We consider the properties of the ground state of bottomonium. The $\Upsilon$
mass is evaluated to two loops, including leading higher-order
[$O(\alpha_s^5\log\alpha_s)$] and $m_c^2/m_b^2$ corrections. This allows us to
present updated values for the pole mass and $\bar{MS}$ mass of the $b$ quark:
$m_b=5022\pm58$ MeV, for the pole mass, and $\bar{m}_b(\bar{m}_b)=4286\pm36$
MeV for the $\bar{MS}$ one. The value for the $\bar{MS}$ mass is accurate including
$O(\alpha_s^3)$ corrections and leading orders in the ratio $m_c^2/m_b^2$.
We then consider the wave function for the ground state of $\bar{b}b$, which is
calculated to two loops in the nonrelativistic approximation. Taking into
account the evaluation of the matching coefficients by Beneke and Signer one
can calculate, in principle, the width for the decay $\Upsilon\to e^+e^-$ to
order $\alpha_s^5$. Unfortunately, given the size of the corrections it is
impossible to produce reliable numbers. The situation is slightly better for
the ground state of toponium, where a decay width into $e^+e^-$ of 11 -- 14 keV
is predicted.
|
Brain tumor segmentation is a critical task for patients' disease management.
In order to automate and standardize this task, we trained multiple U-net like
neural networks, mainly with deep supervision and stochastic weight averaging,
on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training
dataset. Two independent ensembles of models from two different training
pipelines were trained, and each produced a brain tumor segmentation map. These
two labelmaps per patient were then merged, taking into account the performance
of each ensemble for specific tumor subregions. Our performance on the online
validation dataset with test time augmentation was as follows: Dice of 0.81,
0.91 and 0.85; Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing tumor,
whole tumor and tumor core, respectively. Similarly, our solution achieved a
Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4, 6.7 and 19.5 mm
on the final test dataset, ranking us among the top ten teams. More complicated
training schemes and neural network architectures were investigated without
significant performance gain at the cost of greatly increased training time.
Overall, our approach yielded good and balanced performance for each tumor
subregion. Our solution is open sourced at
https://github.com/lescientifik/open_brats2020.
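A minimal sketch of the labelmap-merging step described above: each tumor subregion is taken from the ensemble assumed to perform better on it. The BraTS-style label codes and the per-region assignment used here are hypothetical illustrations, not the authors' validated choices (see their repository for the actual merging rule).

```python
import numpy as np

# Sketch of merging two brain-tumor labelmaps by taking each subregion from
# the ensemble that validated better on it. Label conventions (1 = necrotic
# core, 2 = edema, 4 = enhancing tumor, as in BraTS) and the per-region
# "winner" assignment are illustrative assumptions.

def merge_labelmaps(map_a: np.ndarray, map_b: np.ndarray) -> np.ndarray:
    merged = np.zeros_like(map_a)
    # hypothetical assignment: ensemble A better on edema, B on core/enhancing
    merged[map_a == 2] = 2
    merged[map_b == 1] = 1
    merged[map_b == 4] = 4
    return merged

labels = [0, 1, 2, 4]
a = np.random.default_rng(1).choice(labels, size=(8, 8, 8))
b = np.random.default_rng(2).choice(labels, size=(8, 8, 8))
print(np.unique(merge_labelmaps(a, b)))
```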
|
Deep neural networks (DNNs) have shown vulnerability to adversarial attacks,
i.e., carefully perturbed inputs designed to mislead the network at inference
time. Recently introduced localized attacks, Localized and Visible Adversarial
Noise (LaVAN) and Adversarial patch, pose a new challenge to deep learning
security by adding adversarial noise only within a specific region without
affecting the salient objects in an image. Driven by the observation that such
attacks introduce concentrated high-frequency changes at a particular image
location, we have developed an effective method to estimate noise location in
gradient domain and transform those high activation regions caused by
adversarial noise in image domain while having minimal effect on the salient
object that is important for correct classification. Our proposed Local
Gradients Smoothing (LGS) scheme achieves this by regularizing gradients in the
estimated noisy region before feeding the image to a DNN for inference. We have
shown the effectiveness of our method in comparison to other defense methods
including Digital Watermarking, JPEG compression, Total Variance Minimization
(TVM) and Feature Squeezing on the ImageNet dataset. In addition, we systematically
study the robustness of the proposed defense mechanism against Backward Pass
Differentiable Approximation (BPDA), a state-of-the-art attack recently
developed to break defenses that transform an input sample to minimize the
adversarial effect. Compared to other defense mechanisms, LGS is by far the
most resistant to BPDA in the localized adversarial attack setting.
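A rough sketch of the idea behind LGS, under simplifying assumptions of ours (window size, threshold, and the smoothing scheme; the paper's exact estimation procedure may differ): locate blocks of concentrated gradient activity and attenuate high-frequency content only there.

```python
import numpy as np
from scipy import ndimage

# Sketch of gradient-domain noise localization + local smoothing in the
# spirit of LGS. Window size, threshold and blending weight are illustrative
# assumptions, not the paper's settings.

def local_gradients_smoothing(img: np.ndarray, win: int = 15,
                              thresh: float = 0.1, lam: float = 0.9) -> np.ndarray:
    gx, gy = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-12)                  # normalized gradient magnitude
    local = ndimage.uniform_filter(mag, size=win)    # block-wise activation level
    mask = local > thresh                            # estimated noisy region
    out = img.astype(float).copy()
    smoothed = ndimage.uniform_filter(out, size=3)   # suppress high frequencies
    out[mask] = (1 - lam) * out[mask] + lam * smoothed[mask]
    return out

img = np.random.default_rng(0).random((64, 64))
img[5:20, 5:20] += 2 * np.random.default_rng(1).random((15, 15))  # patch-like noise
print(local_gradients_smoothing(img).shape)
```

The salient point is that pixels outside the estimated region are untouched, so classification-relevant content is largely preserved.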
|
In this study, we present EventRL, a reinforcement learning approach
developed to enhance event extraction for large language models (LLMs). EventRL
utilizes outcome supervision with specific reward functions to tackle prevalent
challenges in LLMs, such as instruction following and hallucination, manifested
as the mismatch of event structure and the generation of undefined event types.
We evaluate EventRL against existing methods like Few-Shot Prompting (FSP)
(based on GPT-4) and Supervised Fine-Tuning (SFT) across various LLMs, including
GPT-4, LLaMa, and CodeLLaMa models. Our findings show that EventRL
significantly outperforms these conventional approaches by improving the
performance in identifying and structuring events, particularly in handling
novel event types. The study emphasizes the critical role of reward function
selection and demonstrates the benefits of incorporating code data for better
event extraction. While increasing model size leads to higher accuracy,
maintaining the ability to generalize is essential to avoid overfitting.
|
Infrastructure systems play a critical role in providing essential products
and services for the functioning of modern society; however, they are
vulnerable to disasters and their service disruptions can cause severe societal
impacts. To protect infrastructure from disasters and reduce potential impacts,
great achievements have been made in modeling interdependent infrastructure
systems in past decades. In recent years, scholars have gradually shifted their
research focus to understanding and modeling societal impacts of disruptions
considering the fact that infrastructure systems are critical because of their
role in societal functioning, especially in modern societies.
Exploring how infrastructure disruptions impair society, with the aim of
enhancing urban resilience, has become a key field of study. By comprehensively reviewing relevant
studies, this paper presented the definition and types of societal impact of
infrastructure disruptions, and summarized the modeling approaches into four
types: extended infrastructure modeling approaches, empirical approaches,
agent-based approaches, and big data-driven approaches. For each approach, this
paper organized relevant literature in terms of modeling ideas, advantages, and
disadvantages. Furthermore, the four approaches were compared according to
several criteria, including the input data, types of societal impact, and
application scope. Finally, this paper illustrated the challenges and future
research directions in the field.
|
Complete positivity of quantum dynamics is often viewed as a litmus test for
physicality, yet it is well known that correlated initial states need not give
rise to completely positive evolutions. This observation spurred numerous
investigations over the past two decades attempting to identify necessary and
sufficient conditions for complete positivity. Here we describe a complete and
consistent mathematical framework for the discussion and analysis of complete
positivity for correlated initial states of open quantum systems. This
formalism is built upon a few simple axioms and is sufficiently general to
contain all prior methodologies going back to Pechukas, PRL (1994). The key
observation is that initial system-bath states with the same reduced state on
the system must evolve under all admissible unitary operators to system-bath
states with the same reduced state on the system, in order to ensure that the
induced dynamical maps on the system are well-defined. Once this consistency
condition is imposed, related concepts like the assignment map and the
dynamical maps are uniquely defined. In general, the dynamical maps may not be
applied to arbitrary system states, but only to those in an appropriately
defined physical domain. We show that the constrained nature of the problem
gives rise to not one but three inequivalent types of complete positivity.
Using this framework we elucidate the limitations of recent attempts to provide
conditions for complete positivity using quantum discord and the quantum
data-processing inequality. The problem remains open, and may require fresh
perspectives and new mathematical tools. The formalism presented herein may be
one step in that direction.
|
Optically active artificial structures have attracted tremendous research
attention. Such structures must meet two requirements: they must lack spatial
inversion symmetries and, a condition usually not explicitly considered, they
must preserve the helicity of light, which implies that there must
be a vanishing coupling between states of opposite polarization handedness
among incident and scattered plane waves. Here, we put forward and demonstrate
that a unit cell made from chirally arranged, electromagnetically dual scatterers
serves exactly this purpose. We prove this by demonstrating optical activity of
such a unit cell in general scattering directions.
|
In the paper we propose a theoretical model that takes into account Vegard
strains and perform a detailed quantitative comparison of the theoretical
results with experimental ones for quasispherical nanoparticles, which reveal
a substantial (about 100 K) increase of the transition temperature in spherical
nanoparticles in comparison with bulk crystals. The average radius of
nanoparticles was about 25 nm, they consist of K(Ta,Nb)O3 solid solution, where
KTaO3 is a quantum paraelectric, while KNbO3 is a ferroelectric. From the
comparison between the theory and experiment we unambiguously established the
leading contribution of Vegard strains into the extrinsic size effect in
ferroelectric nanoparticles. We determined the dependence of Vegard strains on
the content of Nb and reconstructed the Curie temperature dependence on the
content of Nb using this dependence. It turned out that the dependence of the
Curie temperature on the Nb content becomes nonmonotonic for the small (< 20 nm)
elongated K(Ta,Nb)O3 nanoparticles. We established that the accumulation of
intrinsic and extrinsic defects near the surface can play the key role in the
physical origin of the extrinsic size effect in ferroelectric nanoparticles and
govern its main features.
|
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization
effects in practice, which has been hypothesized to play an important role in
the generalization of modern machine learning approaches. In this work, we seek
to understand these issues in the simpler setting of linear regression
(including both underparameterized and overparameterized regimes), where our
goal is to make sharp instance-based comparisons of the implicit regularization
afforded by (unregularized) average SGD with the explicit regularization of
ridge regression. For a broad class of least squares problem instances (that
are natural in high-dimensional settings), we show: (1) for every problem
instance and for every ridge parameter, (unregularized) SGD, when provided with
logarithmically more samples than that provided to the ridge algorithm,
generalizes no worse than the ridge solution (provided SGD uses a tuned
constant stepsize); (2) conversely, there exist instances (in this wide problem
class) where optimally-tuned ridge regression requires quadratically more
samples than SGD in order to have the same generalization performance. Taken
together, our results show that, up to the logarithmic factors, the
generalization performance of SGD is always no worse than that of ridge
regression in a wide range of overparameterized problems, and, in fact, could
be much better for some problem instances. More generally, our results show how
algorithmic regularization has important consequences even in simpler
(overparameterized) convex settings.
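A minimal numerical sketch of the comparison, assuming an illustrative Gaussian least-squares instance, an arbitrary ridge parameter, and a hand-tuned constant stepsize: averaged SGD is given logarithmically more samples than ridge, mirroring claim (1).

```python
import numpy as np

# Sketch contrasting (unregularized) constant-stepsize averaged SGD with
# ridge regression on one synthetic least-squares instance. Stepsize, ridge
# parameter and sample sizes are illustrative assumptions.

rng = np.random.default_rng(0)
d, n_ridge = 50, 200
n_sgd = int(n_ridge * np.log(n_ridge))          # "logarithmically more samples"
w_star = rng.normal(size=d) / np.sqrt(d)

def sample(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_star + 0.1 * rng.normal(size=n)

# ridge solution on n_ridge samples
X, y = sample(n_ridge)
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# averaged SGD with a tuned constant stepsize on n_sgd samples
w, w_bar, eta = np.zeros(d), np.zeros(d), 0.01
Xs, ys = sample(n_sgd)
for i in range(n_sgd):
    g = (Xs[i] @ w - ys[i]) * Xs[i]              # stochastic gradient
    w -= eta * g
    w_bar += (w - w_bar) / (i + 1)               # running iterate average

risk = lambda v: np.mean((sample(10_000)[0] @ (v - w_star)) ** 2)
print(f"ridge excess risk: {risk(w_ridge):.4f}, avg-SGD excess risk: {risk(w_bar):.4f}")
```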
|
With the advent of space-based precision photometry missions the quantity and
quality of starspot light curves has greatly increased. This paper presents a
large number of starspot models and their resulting light curves to: 1) better
determine light curve metrics and methods that convey useful physical
information, 2) understand how the underlying degeneracies of the translation
from physical starspot distributions to the resulting light curves obscure that
information. We explore models of relatively active stars at several
inclinations while varying the number of (dark) spots, random spot
distributions in position and time, timescales of growth and decay, and
differential rotation. We examine the behavior of absolute and differential
variations of individual intensity dips and overall light curves, and
demonstrate how complex spot distributions and behaviors result in light curves
that typically exhibit only one or two dips per rotation. Unfortunately,
simplistic "one or two spot" or "active longitude" descriptions or modeling of
light curves can often be highly misleading. We also show that short "activity
cycles" can easily arise purely from random processes.
It turns out to be quite difficult to disentangle the competing effects of
spot lifetime and differential rotation, but under most circumstances spot
lifetime is the more influential of the two. Many of the techniques tried to
date only work when spots live for many rotations. These include
autocorrelation degradation for spot lifetimes and periodograms for both global
and differential rotation. Differential rotation may be nearly impossible to
accurately infer from light curves alone unless spots live for many rotations.
The Sun and solar-type stars of its age or older are unfortunately the most
difficult type of case. Further work is needed to have increased confidence in
light curve inferences.
|
We predict that a novel bias-voltage assisted magnetization reversal process
will occur in Mn doped II-VI semiconductor quantum wells or heterojunctions
with carrier induced ferromagnetism. The effect is due to strong
exchange-coupling induced subband mixing that leads to electrically tunable
hysteresis loops. Our model calculations are based on the mean-field theory of
carrier induced ferromagnetism in Mn-doped quantum wells and on a
semi-phenomenological description of the host II-VI semiconductor valence
bands.
|
Location-aware networks are of great importance and interest in both civil
and military applications. This paper determines the localization accuracy of
an agent, which is equipped with an antenna array and localizes itself using
wireless measurements with anchor nodes, in a far-field environment. In view of
the Cram\'er-Rao bound, we first derive the localization information for static
scenarios and demonstrate that such information is a weighted sum of Fisher
information matrices from each anchor-antenna measurement pair. Each matrix can
be further decomposed into two parts: a distance part with intensity
proportional to the squared baseband effective bandwidth of the transmitted
signal and a direction part with intensity associated with the normalized
anchor-antenna visual angle. Moreover, in dynamic scenarios, we show that the
Doppler shift contributes additional direction information, with intensity
determined by the agent velocity and the root mean squared time duration of the
transmitted signal. In addition, two measures are proposed to evaluate the
localization performance of wireless networks with different anchor-agent and
array-antenna geometries, and both formulae and simulations are provided for
typical anchor deployments and antenna arrays.
|
In this paper we study the theoretical properties of the simultaneous
multiscale change point estimator (SMUCE) proposed by Frick et al. (2014) in
regression models with dependent error processes. Empirical studies show that
in this case the change point estimate is inconsistent, but it is not known if
alternatives suggested in the literature for correlated data are consistent. We
propose a modification of SMUCE scaling the basic statistic by the long run
variance of the error process, which is estimated by a difference-type variance
estimator calculated from local means from different blocks. For this
modification we prove model consistency for physically dependent error processes
and illustrate the finite sample performance by means of a simulation study.
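A minimal sketch of a difference-type long-run variance estimator built from local block means, of the kind that could be used to rescale the SMUCE statistic for dependent errors; the block length and the AR(1) test process below are illustrative assumptions.

```python
import numpy as np

# Difference-type long-run variance estimator from local block means.
# Differencing adjacent block means makes the estimator insensitive to a
# piecewise-constant signal, which is the relevant setting for change points.

def long_run_variance(x: np.ndarray, block_len: int) -> float:
    n_blocks = len(x) // block_len
    means = x[: n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)
    # For weakly dependent errors, Var(block mean) ~ sigma^2_LRV / block_len
    # and adjacent block means are nearly independent, so
    # E[(m_{j+1} - m_j)^2] ~ 2 sigma^2_LRV / block_len.
    return 0.5 * block_len * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
e = np.zeros(20_000)
for t in range(1, len(e)):                      # AR(1) errors, phi = 0.5
    e[t] = 0.5 * e[t - 1] + rng.normal()
true_lrv = 1.0 / (1 - 0.5) ** 2                 # = 4 for this AR(1) process
print(f"estimate: {long_run_variance(e, 100):.2f}, truth: {true_lrv:.2f}")
```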
|
Time-dependent protocols that perform irreversible logical operations, such
as memory erasure, cost work and produce heat, placing bounds on the efficiency
of computers. Here we use a prototypical computer model of a physical memory to
show that it is possible to learn feedback-control protocols to do fast memory
erasure without input of work or production of heat. These protocols, which are
enacted by a neural-network ``demon'', do not violate the second law of
thermodynamics because the demon generates more heat than the memory absorbs.
The result is a form of nonlocal heat exchange in which one computation is
rendered energetically favorable while a compensating one produces heat
elsewhere, a tactic that could be used to rationally design the flow of energy
within a computer.
|
I present a review of Smoothed Particle Hydrodynamics (SPH), with the aim of
providing a mathematically rigorous, clear derivation of the algorithms from
first principles. The method of discretising a continuous field into particles
using a smoothing kernel is considered, and also the errors associated with
this approach. A fully conservative form of SPH is then derived from the
Lagrangian, demonstrating the explicit conservation of mass, linear and angular
momenta and energy/entropy. The method is then extended to self-consistently
include spatially varying smoothing lengths, (self) gravity and various forms
of artificial viscosity, required for the correct treatment of shocks. Finally
two common methods of time integration are discussed, the Runge-Kutta-Fehlberg
and leapfrog integrators, along with an overview of time-stepping criteria.
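As a concrete instance of the kernel discretisation discussed above, the following sketch evaluates the standard SPH density sum with the cubic spline (M4) kernel; a uniform smoothing length and equal particle masses are simplifying assumptions for illustration.

```python
import numpy as np

# Standard SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h),
# here with the cubic spline (M4) kernel and its 3D normalisation.

def cubic_spline_W(r: np.ndarray, h: float) -> np.ndarray:
    q = r / h
    sigma = 1.0 / (np.pi * h**3)                 # 3D normalisation constant
    w = np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def sph_density(pos: np.ndarray, m: float, h: float) -> np.ndarray:
    diff = pos[:, None, :] - pos[None, :, :]     # pairwise separations
    r = np.linalg.norm(diff, axis=-1)
    return m * cubic_spline_W(r, h).sum(axis=1)  # kernel-weighted mass sum

pos = np.random.default_rng(0).random((200, 3))  # particles in a unit box
rho = sph_density(pos, m=1.0 / 200, h=0.2)
print(f"mean density ~ {rho.mean():.2f} (roughly 1, lowered by edge effects)")
```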
|
Despite progress developing experimentally-consistent models of insect
in-flight sensing and feedback for individual agents, a lack of systematic
understanding of the multi-agent and group performance of the resulting
bio-inspired sensing and feedback approaches remains a barrier to robotic swarm
implementations. This study introduces the small-target motion reactive (STMR)
swarming approach by designing a concise engineering model of the small target
motion detector (STMD) neurons found in insect lobula complexes. The STMD
neuron model identifies the bearing angle at which peak optic flow magnitude
occurs, and this angle is used to design an output feedback switched control
system. A theoretical stability analysis provides bi-agent stability and state
boundedness in group contexts. The approach is simulated and implemented on
ground vehicles for validation and behavioral studies. The results indicate
that, despite having the lowest connectivity of contemporary approaches (each
agent instantaneously regards only a single neighbor), collective group motion
can be achieved. STMR group-level metric analysis also highlights continuously varying
polarization and decreasing heading variance.
|
In our comprehensive experiments and evaluations, we show that it is possible
to generate multiple contrasts (even entirely synthetically) and use synthetically
generated images to train an image segmentation engine. We showed promising
segmentation results tested on real multi-contrast MRI scans when delineating
muscle, fat, bone and bone marrow, all trained on synthetic images. Based on
synthetic image training, our segmentation results were as high as 93.91\%,
94.11\%, 91.63\%, 95.33\%, for muscle, fat, bone, and bone marrow delineation,
respectively. Results were not significantly different from the ones obtained
when real images were used for segmentation training: 94.68\%, 94.67\%,
95.91\%, and 96.82\%, respectively.
|
We present further arguments that the Hipparcos parallaxes for some of the
clusters and associations represented in the Hipparcos catalog should be used
with caution in the study of the Galactic structure. It has been already shown
that the discrepancy between the Hipparcos and ground based parallaxes for
several clusters including the Pleiades, Coma Ber and NGC 6231 can be resolved
by recomputing the Hipparcos astrometric solutions with an improved algorithm
diminishing correlated errors in the attitude parameters. Here we present new
parallaxes obtained with this algorithm for another group of stars with
discrepant data - the galactic cluster Cr 121. The original Hipparcos
parallaxes led de Zeeuw et al. to conclude that Cr 121 and the surrounding
association of OB stars form a relatively compact and coherent moving group at
a distance of 550 -- 600 pc. Our corrected parallaxes reveal a different
spatial distribution of the young stellar population in this area. Both the cluster
Cr 121 and the extended OB association are considerably more distant (750 --
1000 pc), and the latter has a large depth probably extending beyond 1 kpc.
Therefore, not only are the recalculated parallaxes in complete agreement with
the photometric $uvby\beta$ parallaxes, but the structure of the field they reveal
is no longer in discrepancy with that found by the photometric method.
|
We apply the notion of 2-extensions of algebras to the deformation theory of
algebras. After standard results on butterflies between 2-extensions, we use
this (2, 0)-category to give three perspectives on the deformation theory of
algebras. We conclude by fixing an error in the literature.
|
Thermal escape out of a metastable well is considered in the weak friction
regime, where the bottleneck for decay is energy diffusion, and at lower
temperatures, where quantum tunneling becomes relevant. Within a systematic
semiclassical formalism an extension of the classical diffusion equation is
derived starting from a quantum mechanical master equation. In contrast to
previous approaches, finite barrier transmission also affects transition
probabilities. The decay rate is obtained from the stationary non-equilibrium
solution and captures the intimate interplay between thermal and quantum
fluctuations above the crossover to the deep quantum regime.
|
One may define a complex system as a system in which phenomena emerge as a
consequence of multiscale interaction among the system's components and their
environments. The field of Complex Systems is the study of such
systems--usually naturally occurring, either biological or social. Systems
Engineering may be understood to include the conceptualising and building of
systems that consist of a large number of concurrently operating and
interacting components--usually including both human and non-human elements. It
has become increasingly apparent that the kinds of systems that systems
engineers build have many of the same multiscale characteristics as those of
naturally occurring complex systems. In other words, systems engineering is the
engineering of complex systems. This paper and the associated panel will
explore some of the connections between the fields of complex systems and
systems engineering.
|
Three-dimensional (3D) topological Weyl semimetals (TWSs) represent a novel
state of quantum matter with unusual electronic structures that resemble both a
"3D graphene" and a topological insulator by possessing pairs of Weyl points
(through which the electronic bands disperse linearly along all three momentum
directions) connected by topological surface states, forming the unique
"Fermi-arc" type Fermi-surface (FS). Each Weyl point is chiral and contains
half of the degrees of freedom of a Dirac point, and can be viewed as a
magnetic monopole in the momentum space. Here, by performing angle-resolved
photoemission spectroscopy on non-centrosymmetric compound TaAs, we observed
its complete band structures including the unique "Fermi-arc" FS and linear
bulk band dispersion across the Weyl points, in excellent agreement with the
theoretical calculations. This discovery not only confirms TaAs as the first 3D
TWS, but also provides an ideal platform for realizing exotic physical
phenomena (e.g. negative magnetoresistance, chiral magnetic effects and quantum
anomalous Hall effect) which may also lead to novel future applications.
|
Random matrix models based on an integral over supermatrices are proposed as
a natural extension of bosonic matrix models. The subtle nature of superspace
integration allows these models to have very different properties from the
analogous bosonic models. Two choices of integration slice are investigated.
One leads to a perturbative structure which is reminiscent of, and perhaps
identical to, the usual Hermitian matrix models. Another leads to an eigenvalue
reduction which can be described by a two component plasma in one dimension. A
stationary point of the model is described.
|
In this paper we consider minimal Lagrangian submanifolds in $n$-dimensional
complex space forms. More precisely, we study such submanifolds which, endowed
with the induced metrics, write as a Riemannian product of two Riemannian
manifolds, each having constant sectional curvature. As the main result, we
give a complete classification of these submanifolds.
|
We prove that any nonzero inertia, however small, is able to change the
nature of the synchronization transition in Kuramoto-like models, either from
continuous to discontinuous, or from discontinuous to continuous. This result
is obtained through an unstable manifold expansion in the spirit of J.D.
Crawford, which features singularities in the vicinity of the bifurcation. Far
from being unwanted artifacts, these singularities actually control the
qualitative behavior of the system. Our numerical tests fully support this
picture.
|
The India-based Neutrino Observatory (INO) is a project aimed at building a
large underground laboratory to explore the Earth's matter effects on the
atmospheric neutrinos in the multi-GeV range. INO will host a 50 kton magnetized
iron calorimeter detector (ICAL) in which Resistive Plate Chambers(RPCs) will
be the active detector elements. In ICAL, 28,800 glass RPCs of 2 m $\times$ 2 m
size will be operated in the avalanche mode. A small variation in the
composition of the ionizing gaseous medium in the RPC affects its performance.
Study of the charge distribution of the RPC at different gas compositions is
necessary to optimize the gas mixture.
An RPC made with glass plates of dimension 30 cm $\times$ 30 cm was operated
in avalanche mode with a gas mixture of $C_2H_2F_4$/$iC_4H_{10}$/$SF_6$. We
have studied the performance of these RPCs under the same ambient conditions. The
percentages of $iC_4H_{10}$ or $SF_6$ were varied and their effect on the
performance of the RPC was studied. The study of the charge distribution and time
resolution of the RPC signals at different gas compositions is presented in
this paper.
|
We present a multi-document summarizer, called MEAD, which generates
summaries using cluster centroids produced by a topic detection and tracking
system. We also describe two new techniques, based on sentence utility and
subsumption, which we have applied to the evaluation of both single and
multiple document summaries. Finally, we describe two user studies that test
our models of multi-document summarization.
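A toy sketch in the spirit of centroid-based scoring (not MEAD's actual implementation: the real system uses TF*IDF centroids from a topic detection and tracking system together with tuned feature weights): sentences are scored by overlap with the cluster's most frequent terms plus a simple position bonus.

```python
import numpy as np
from collections import Counter

# Toy centroid-based sentence scoring. The centroid here is a crude
# frequency-based word set, and all weights/thresholds are illustrative
# assumptions rather than MEAD's actual features.

def centroid_summary(docs: list[list[str]], n_pick: int = 2) -> list[str]:
    sentences = [s for d in docs for s in d]
    tokenized = [s.lower().split() for s in sentences]
    tf = Counter(w for toks in tokenized for w in toks)
    centroid = {w for w, _ in tf.most_common(10)}   # cluster "centroid words"
    scores = []
    for i, toks in enumerate(tokenized):
        c_score = sum(1 for w in toks if w in centroid)   # centroid overlap
        p_score = 1.0 / (1 + i)                           # position bonus
        scores.append(c_score + p_score)
    best = np.argsort(scores)[::-1][:n_pick]
    return [sentences[i] for i in sorted(best)]           # original order

docs = [["the storm hit the coast", "power was lost in the city"],
        ["the storm caused flooding", "officials urged evacuation"]]
print(centroid_summary(docs))
```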
|
We review here the main contributions of Einstein to the quantum theory. To
put them in perspective we first give an account of Physics as it was before
him. It is followed by a brief account of the problem of black body radiation
which provided the context for Planck to introduce the idea of quantum.
Einstein's revolutionary paper of 1905 on light-quantum hypothesis is then
described as well as an application of this idea to the photoelectric effect.
We next take up a discussion of Einstein's other contributions to old quantum
theory. These include (i) his theory of specific heat of solids, which was the
first application of quantum theory to matter, (ii) his discovery of
wave-particle duality for light and (iii) Einstein's A and B coefficients
relating to the probabilities of emission and absorption of light by atomic
systems and his discovery of radiation stimulated emission of light which
provides the basis for laser action. We then describe Einstein's contribution
to quantum statistics, viz. Bose-Einstein statistics, and his prediction of
Bose-Einstein condensation of a boson gas. Einstein played a pivotal role in
the discovery of Quantum Mechanics and this is briefly mentioned. After 1925
Einstein contributed mainly to the foundations of Quantum Mechanics. We
choose to discuss here (i) his Ensemble (or Statistical) Interpretation of
Quantum Mechanics and (ii) the discovery of Einstein-Podolsky-Rosen (EPR)
correlations and the EPR theorem on the conflict between Einstein-Locality and
the completeness of the formalism of Quantum Mechanics. We end with some
comments on later developments.
|
The detailed characterization of scaling laws relating the observables of
clusters of galaxies to their mass is crucial for obtaining accurate
cosmological constraints with clusters. In this paper, we present a comparison
between the hydrostatic and lensing mass profiles of the cluster \psz\ at
$z=0.59$. The hydrostatic mass profile is obtained from the combination of high
resolution NIKA2 thermal Sunyaev-Zel'dovich (tSZ) and \xmm\ X-ray observations
of the cluster. The lensing mass profile, in turn, is obtained from an analysis
of the CLASH lensing data based on the lensing convergence map. We find
significant variation in the cluster mass estimate depending on the observable,
the modelling of the data and the knowledge of the cluster dynamical state.
This might lead to significant systematic effects on cluster cosmological
analyses for which only a single observable is generally used. From this pilot
study, we conclude that the combination of high resolution SZ, X-ray and
lensing data could allow us to identify and correct for these systematic
effects. This would constitute a very interesting extension of the NIKA2 SZ
Large Program.
|
A comprehensive numerical investigation has been conducted on the angular
distribution and spectrum of radiation emitted by 855 MeV electron and positron
beams while traversing a 'quasi-mosaic' bent silicon (111) crystal. This
interaction of charged particles with a bent crystal gives rise to various
phenomena such as channeling, dechanneling, volume reflection, and volume
capture. The crystal's geometry, emittance of the collimated particle beams, as
well as their alignment with respect to the crystal, have been taken into
account as they are essential for an accurate quantitative description of the
processes. The simulations have been performed using a specialized relativistic
molecular dynamics module implemented in the MBN Explorer package. The angular
distribution of the particles after traversing the crystal has been calculated
for beams of different emittances as well as for different anticlastic
curvatures of the bent crystals. For the electron beam, the angular
distributions of the deflected particles and the spectrum of radiation obtained
in the simulations are compared with the experimental data collected at the
Mainz Microtron facility. For the positron beam such calculations have been
performed for the first time. We predict significant differences in the angular
distributions and the radiation spectra for positrons versus electrons.
|
Let $(R,\frak m)$ be a commutative Noetherian local ring and let $M$ and $N$
be finitely generated $R$-modules of finite injective dimension and finite
Gorenstein injective dimension, respectively. In this paper we prove a
generalization of the Ischebeck Formula, that is,
$\depth_R M+\sup\{i \mid \Ext_R^i(M,N)\neq 0\}=\depth R.$
|
This paper presents a multilevel convergence framework for
multigrid-reduction-in-time (MGRIT) as a generalization of previous two-grid
estimates. The framework provides a priori upper bounds on the convergence of
MGRIT V- and F-cycles, with different relaxation schemes, by deriving the
respective residual and error propagation operators. The residual and error
operators are functions of the time stepping operator, analyzed directly and
bounded in norm, both numerically and analytically. We present various upper
bounds of different computational cost and varying sharpness. These upper
bounds are complemented by proposing analytic formulae for the approximate
convergence factor of V-cycle algorithms that take the number of fine grid time
points, the temporal coarsening factors, and the eigenvalues of the time
stepping operator as parameters.
The paper concludes with supporting numerical investigations of parabolic
(anisotropic diffusion) and hyperbolic (wave equation) model problems. We
assess the sharpness of the bounds and the quality of the approximate
convergence factors. Observations from these numerical investigations
demonstrate the value of the proposed multilevel convergence framework for
estimating MGRIT convergence a priori and for the design of a convergent
algorithm. We further highlight that observations in the literature are
captured by the theory, including that two-level Parareal and multilevel MGRIT
with F-relaxation do not yield scalable algorithms and the benefit of a
stronger relaxation scheme. An important observation is that with increasing
numbers of levels MGRIT convergence deteriorates for the hyperbolic model
problem, while constant convergence factors can be achieved for the diffusion
equation. The theory also indicates that L-stable Runge-Kutta schemes are more
amenable to multilevel parallel-in-time integration with MGRIT than A-stable
Runge-Kutta schemes.
|
Integration testing is one of the important phases in the software testing
life cycle (STLC). With the fast growth of the internet and web services,
web-based applications are also growing rapidly, and their importance and
complexity are increasing as well. The heterogeneous and diverse nature of
distributed components and applications, along with their multi-platform
support and cooperativeness, makes these applications more complex and ever
larger in size. Quality assurance of these applications is becoming more
crucial and important. Testing is one of the key processes for achieving and
ensuring the quality of these software or Web-based products. There are many
testing challenges involved in Web-based applications, and integration is the
most critical kind of testing associated with them. A number of challenging
factors are involved in integration testing efforts; these factors have almost
a 70 to 80 percent impact on the overall quality of Web-based applications. In
the software industry, practitioners use different kinds of testing approaches
to solve the issues associated with integration, which arise from the
ever-increasing complexity of Web-based applications.
|
We construct the Green current for a random iteration of "horizontal-like"
mappings in two complex dimensions. This is applied to the study of a
polynomial map $f:\mathbb{C}^2\to\mathbb{C}^2$ with the following properties:
1. infinity is $f$-attracting,
2. $f$ contracts the line at infinity to a point not in the indeterminacy
set.
Then the Green current of $f$ can be decomposed into pieces associated with
an itinerary determined by the indeterminacy points.
We also study the set of escape rates near infinity, i.e. the possible values
of the function $\limsup \frac{1}{n}\log^+\log^+ \|f^n\|$. We exhibit
examples for which this set contains an interval.
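A minimal numerical sketch of the escape-rate function near infinity for an arbitrary sample polynomial map (not one of the paper's examples): iterate the map and track $(1/n)\log^+\log^+\|f^n\|$ as a finite-$n$ proxy for the limsup.

```python
import numpy as np

# Numerical sketch of the escape-rate function near infinity,
#   limsup_n (1/n) log+ log+ ||f^n(z)||,
# for an arbitrary degree-2 polynomial map of C^2 chosen for illustration.

def f(z, w):
    return z**2 + w, w**2          # sample polynomial map of C^2

def escape_rate(z, w, n_iter=40):
    rates = []
    for n in range(1, n_iter + 1):
        z, w = f(z, w)
        norm = max(abs(z), abs(w))
        # log+ x = log(max(x, 1)); rate at step n is (1/n) log+ log+ ||f^n||
        rates.append(np.log(max(np.log(max(norm, 1.0)), 1.0)) / n)
        if norm > 1e150:           # stop before floating-point overflow
            break
    return max(rates)              # finite-n proxy for the limsup

print(escape_rate(2.0 + 0j, 3.0 + 0j))   # approaches log 2 for this map
```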
|
The noncovariant duality symmetric action put forward by Schwarz-Sen is
quantized by means of the Dirac bracket quantization procedure. The resulting
quantum theory is shown to be, nevertheless, relativistically invariant.
|
We fit an isothermal oscillatory density model of Neptune's protoplanetary
disk to the surviving regular satellites and its innermost ring and we
determine the radial scale length of the disk, the equation of state and the
central density of the primordial gas, and the rotational state of the
Neptunian nebula. Neptune's regular moons suffered from the retrograde capture
of Triton that disrupted the system. Some moons may have been ejected, while
others may have survived inside their potential minima. For this reason, the
Neptunian nebula does not look like any of the nebulae that we modeled
previously. In particular, there must be two density maxima deep inside the
core of the nebula where no moons or rings are found nowadays. Even with this
strong assumption, the recent discovery of the minor moon N XIV complicates
further the modeling effort. With some additional assumptions, the Neptunian
nebula still shares many similarities with the Uranian nebula, as was expected
from the relative proximity and similar physical conditions of the two systems.
For Neptune's primordial disk, we find a steep power-law index ($k=-3.0$),
needed to accommodate the arrangement of the outer moons Larissa, N XIV, and
Proteus. The rotation parameter that measures centrifugal support against
self-gravity is quite small ($\beta_0=0.00808$), as is its radial scale length
(13.6 km). The extent of the disk ($R_{\rm max}=0.12$ Gm) is a lot smaller than
that of Uranus ($R_{\rm max}=0.60$ Gm) and Triton appears to be responsible for
the truncation of the disk. The central density of the compact Neptunian core
and its angular velocity are higher than but comparable to those of Uranus'
core. In the end, we compare the models of the protoplanetary disks of the four
gaseous giants.
|
We study the properties of the relativistic, steady, axisymmetric, low
angular momentum, inviscid, advective, geometrically thin accretion flow in a
Kerr-Taub-NUT (KTN) spacetime which is characterized by the Kerr parameter
($a_{\rm k}$) and NUT parameter ($n$). Depending on $a_{\rm k}$ and $n$ values,
KTN spacetime represents either a black hole or a naked singularity. We solve the
governing equations that describe the relativistic accretion flow in KTN
spacetime and obtain all possible global transonic accretion solutions around
KTN black hole in terms of the energy $({\cal E})$ and angular momentum
$(\lambda)$ of the flow. We identify the region of the parameter space in
$\lambda-{\cal E}$ plane that admits the flow to possess multiple critical
points for KTN black hole. We examine the modification of the parameter space
due to $a_{\rm k}$ and $n$ and find that the role of $a_{\rm k}$ and $n$ in
determining the parameter space is opposite to each other. This clearly
indicates that the NUT parameter $n$ effectively mitigates the effect of black
hole rotation in deciding the accretion flow structure. Further, we calculate
the maximum disc luminosity ($L_{\rm max}$) corresponding to the accretion
solutions around the KTN black hole and for a given set of $a_{\rm k}$ and $n$.
In addition, we also investigate all possible flow topologies around the naked
singularity and find that there exists a region around the naked singularity
which remains inaccessible to the flow. We study the critical point properties
for naked singularities and find that the flow possesses a maximum of four
critical points. Finally, we obtain the parameter space for multiple critical
points for the naked singularity and find that the parameter space shrinks and
shifts towards lower $\lambda$ and higher ${\cal E}$ as $a_{\rm k}$ is
increased, ultimately disappearing.
|
We demonstrate an atom laser using all-optical techniques. A Bose-Einstein
condensate of rubidium atoms is created by direct evaporative cooling in a
quasistatic dipole trap realized with a single, tightly focused CO$_{2}$-laser
beam. An applied magnetic field gradient allows formation of the condensate in
a field-insensitive $m_{F} = 0$ spin projection only, which suppresses
fluctuations of the chemical potential from stray magnetic fields. A collimated
and monoenergetic beam of atoms is extracted from the Bose-Einstein condensate
by continuously lowering the dipole trapping potential in a controlled way to
form a novel type of atom laser.
|
Many young, massive stars are found in close binaries. Using population
synthesis simulations we predict the likelihood of a companion star being
present when these massive stars end their lives as core-collapse supernovae
(SNe). We focus on stripped-envelope SNe, whose progenitors have lost their
outer hydrogen and possibly helium layers before explosion. We use these
results to interpret new Hubble Space Telescope observations of the site of the
broad-lined Type Ic SN 2002ap, 14 years post-explosion. For a subsolar
metallicity consistent with SN 2002ap, we expect a main-sequence companion
present in about two thirds of all stripped-envelope SNe and a compact
companion (likely a stripped helium star or a white dwarf/neutron star/black
hole) in about 5% of cases. About a quarter of progenitors are single at
explosion (originating from initially single stars, mergers or disrupted
systems). All the latter scenarios require a massive progenitor, inconsistent
with earlier studies of SN 2002ap. Our new, deeper upper limits exclude the
presence of a main-sequence companion star $>8$-$10$ Msun, ruling out about 40%
of all stripped-envelope SN channels. The most likely scenario for SN 2002ap
includes nonconservative binary interaction of a primary star initially
$\lesssim 23$ Msun. Although unlikely ($<$1% of the scenarios), we also discuss
the possibility of an exotic reverse merger channel for broad-lined Type Ic
events. Finally, we explore how our results depend on the metallicity and the
model assumptions and discuss how additional searches for companions can
constrain the physics that governs the evolution of SN progenitors.
|
We analyse forward-jet production at HERA in the framework of the
Golec-Biernat and Wusthoff saturation models. We obtain a good description of
the forward jet cross sections measured by the H1 and ZEUS collaborations in
the two-hard-scale region kT ~ Q >> Lambda_QCD with two different
parametrisations with either significant or weak saturation effects. The weak
saturation parametrization gives a scale compatible with the one found for the
proton structure function F_2. We argue that Mueller-Navelet jets at the
Tevatron and the LHC could help distinguish between the two options.
|
A paradoxist Smarandache geometry combines Euclidean, hyperbolic, and
elliptic geometry into one space along with other non-Euclidean behaviors of
lines that would seem to require a discrete space. A class of continuous spaces
is presented here together with specific examples that exhibit almost all of
these phenomena and suggest the prospect of a continuous paradoxist geometry.
|
In this paper, we consider the following problem $$ -\Delta u -\zeta
\frac{u}{|x|^{2}} = \sum_{i=1}^{k} \left( \int_{\mathbb{R}^{N}}
\frac{|u|^{2^{*}_{\alpha_{i}}}}{|x-y|^{\alpha_{i}}} \mathrm{d}y \right)
|u|^{2^{*}_{\alpha_{i}}-2}u + |u|^{2^{*}-2}u , \mathrm{~in~} \mathbb{R}^{N}, $$
where $N\geqslant3$, $\zeta\in(0,\frac{(N-2)^{2}}{4})$, $2^{*}=\frac{2N}{N-2}$
is the critical Sobolev exponent, and
$2^{*}_{\alpha_{i}}=\frac{2N-\alpha_{i}}{N-2}$ ($i=1,\ldots,k$) are the
critical Hardy--Littlewood--Sobolev upper exponents. The parameters
$\alpha_{i}$ ($i=1,\ldots,k$) satisfy some suitable assumptions. By using
Coulomb--Sobolev space, endpoint refined Sobolev inequality and variational
methods, we establish the existence of nontrivial solutions. Our result
generalizes the result obtained by Yang and Wu [Adv. Nonlinear Stud. (2017)].
|
Exclusive neutral-pion electroproduction ($ep\to e^\prime p^\prime \pi^0$)
was measured at Jefferson Lab with a 5.75-GeV electron beam and the CLAS
detector. Differential cross sections $d^4\sigma/dtdQ^2dx_Bd\phi_\pi$ and
structure functions $\sigma_T+\epsilon\sigma_L, \sigma_{TT}$ and $\sigma_{LT}$
as functions of $t$ were obtained over a wide range of $Q^2$ and $x_B$. The
data are compared with Regge and handbag theoretical calculations. Analyses in
both frameworks find that a large dominance of transverse processes is
necessary to explain the experimental results. For the Regge analysis it is
found that the inclusion of vector meson rescattering processes is necessary to
bring the magnitude of the calculated and measured structure functions into
rough agreement. In the handbag framework, there are two independent
calculations, both of which appear to roughly explain the magnitude of the
structure functions in terms of transversity generalized parton distributions.
|
The influence of magic numbers on nuclear radii is investigated via
Hartree-Fock-Bogolyubov calculations and available experimental data. With the
$\ell s$ potential including additional density-dependence suggested from the
chiral effective field theory, kinks are universally predicted at the
$jj$-closed magic numbers and anti-kinks (\textit{i.e.} inverted kinks) are
newly predicted at the $\ell s$-closed magic numbers, both in the charge radii
and in the matter radii along the isotopic and isotonic chains where nuclei
stay spherical. These results seem consistent with the kinks of the charge
radii observed in Ca, Sn and Pb and the anti-kink in Ca. The kinks and the
anti-kinks could be a peculiar indicator for magic numbers, discriminating
$jj$-closure and $\ell s$-closure.
|
In Multi-objective Reinforcement Learning (MORL), agents are tasked with
optimising decision-making behaviours that trade-off between multiple, possibly
conflicting, objectives. MORL based on decomposition is a family of solution
methods that employ a number of utility functions to decompose the
multi-objective problem into individual single-objective problems solved
simultaneously in order to approximate a Pareto front of policies. We focus on
the case of linear utility functions parameterised by weight vectors w. We
introduce a method based on Upper Confidence Bound to efficiently search for
the most promising weight vectors during different stages of the learning
process, with the aim of maximising the hypervolume of the resulting Pareto
front. The proposed method is shown to outperform various MORL baselines on
Mujoco benchmark problems across different random seeds. The code is online at:
https://github.com/SYCAMORE-1/ucb-MOPPO.
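A minimal sketch of UCB-style selection over candidate weight vectors; the bandit payoff below is a mocked stand-in for the hypervolume-improvement signal, and all constants are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# UCB over candidate weight vectors for decomposition-based MORL. The
# "training improvement" returned by each pull is mocked; in the real method
# it would come from hypervolume gains of policies trained under the chosen
# weights. All constants are illustrative assumptions.

rng = np.random.default_rng(0)
weights = np.array([[w, 1 - w] for w in np.linspace(0, 1, 11)])  # candidate w's
true_gain = np.exp(-((weights[:, 0] - 0.7) ** 2) / 0.02)         # mocked payoff
counts, means = np.zeros(len(weights)), np.zeros(len(weights))

for t in range(1, 501):
    # untried arms get a huge bonus, so every arm is explored first
    ucb = means + np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(ucb))
    reward = true_gain[arm] + 0.1 * rng.normal()   # noisy improvement signal
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

best = weights[int(np.argmax(counts))]
print(f"most-pulled weight vector: {best}")       # near w = (0.7, 0.3) here
```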
|
The problem of establishing not only the asymptotic distribution results for
statistical estimators but also the moment convergence of the estimators has
been recognized as an important issue in advanced theories of statistics. One
of the main goals of this paper is to present a method to derive the moment
convergence of $Z$-estimators as it has been done for $M$-estimators. Another
goal of this paper is to develop a general, unified approach, based on some
partial estimation functions which we call "$Z$-process", to the change point
problems for ergodic models as well as some models where the Fisher information
matrix is random and inhomogeneous in time. Applications to some diffusion
process models and Cox's regression model are also discussed.
|
Various materials are made of long thin fibers that are randomly oriented to
form a complex network in which drops of wetting liquid tend to accumulate at
the nodes. The capillary force exerted by the liquid can bend flexible fibers,
which in turn influences the morphology adopted by the liquid. In this paper,
we investigate the role of fiber flexibility on the shape of a small
volume of liquid on a pair of crossed flexible fibers, through a model
situation. We characterize the liquid morphologies as we vary the volume of
liquid, the angle between the fibers, and the length of the fibers. The drop
morphologies previously reported for rigid crossed fibers, i.e., a drop, a
column and a mixed morphology, are also observed on flexible crossed fibers
with modified domains of existence. In addition, at small tilting angles
between the fibers, a new behavior is observed: the fibers bend and collapse.
Depending on the volume of liquid, a thin column with or without a drop is
reported on the collapsed fibers. Our study suggests that the fiber flexibility
adds a rich variety of behaviors that may be important for some applications.
|
Gaussian beams are often used in optical systems. The fundamental Gaussian
TEM00 mode is the most common of the Gaussian modes present in various optical
devices, systems and equipment. Within an optical system, it is common that
this Gaussian TEM00 beam passes through a circular aperture of a finite
diameter. Such circular apertures include irises, spatial filters, circular
Photo-Detectors (PDs) and optical mounts with circular rims. The magnitude of
optical power passing through a finite-sized circular aperture is
well-documented for cases where the Gaussian beam passes through the center of
the clear circular aperture, and is chopped off symmetrically in all radial
directions on a given plane. More often than not, a non-axially incident Gaussian
beam is not blocked in a radially uniform manner by a circular aperture. Such
situations arise due to a lateral displacement of the beam from tilted glass
blocks, manufacturing errors and imperfect surface flatness or parallelness of
surfaces. The fraction of optical power of a laterally-shifted Gaussian beam
passing through a circular aperture is calculated in this paper through
conventional integration techniques.
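The computation described above can be reproduced numerically; a minimal sketch (beam waist, aperture radius, and offset values are illustrative) integrates the TEM00 intensity profile over a laterally offset circular aperture and checks the centered case against the textbook closed form.

```python
import numpy as np
from scipy import integrate

# Fraction of a TEM00 Gaussian beam's power through a circular aperture whose
# center is laterally offset from the beam axis, via direct integration of
#   I(x, y) = (2P / (pi w^2)) exp(-2 ((x - dx)^2 + y^2) / w^2)  with P = 1.

def transmitted_fraction(w: float, a: float, dx: float) -> float:
    """Power fraction through an aperture of radius a, beam offset by dx."""
    intensity = lambda y, x: (2 / (np.pi * w**2)) * np.exp(
        -2 * ((x - dx) ** 2 + y**2) / w**2)
    # integrate over the aperture disk x^2 + y^2 <= a^2
    frac, _ = integrate.dblquad(
        intensity, -a, a,
        lambda x: -np.sqrt(a**2 - x**2), lambda x: np.sqrt(a**2 - x**2))
    return frac

w, a = 1.0, 1.0
# sanity check: centered aperture recovers the textbook 1 - exp(-2 a^2 / w^2)
print(transmitted_fraction(w, a, dx=0.0), 1 - np.exp(-2 * a**2 / w**2))
print(transmitted_fraction(w, a, dx=0.5))   # reduced transmission when offset
```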
|
We measure empirical relationships between the local star formation rate
(SFR) and properties of the star-forming molecular gas on 1.5 kpc scales across
80 nearby galaxies. These relationships, commonly referred to as "star
formation laws," aim at predicting the local SFR surface density from various
combinations of molecular gas surface density, galactic orbital time, molecular
cloud free-fall time, and the interstellar medium dynamical equilibrium
pressure. Leveraging a multiwavelength database built for the PHANGS survey, we
measure these quantities consistently across all galaxies and quantify
systematic uncertainties stemming from choices of SFR calibrations and the
CO-to-H$_2$ conversion factors. The star formation laws we examine show 0.3-0.4
dex of intrinsic scatter, among which the molecular Kennicutt-Schmidt relation
shows a $\sim$10% larger scatter than the other three. The slope of this
relation spans $\beta\approx0.9{-}1.2$, implying that the molecular gas
depletion time remains roughly constant across the environments probed in our
sample. The other relations have shallower slopes ($\beta\approx0.6{-}1.0$),
suggesting that the star formation efficiency (SFE) per orbital time, the SFE
per free-fall time, and the pressure-to-SFR surface density ratio (i.e., the
feedback yield) may vary systematically with local molecular gas and SFR
surface densities. Last but not least, the shapes of the star formation laws
depend sensitively on methodological choices. Different choices of SFR
calibrations can introduce systematic uncertainties of at least 10-15% in the
star formation law slopes and 0.15-0.25 dex in their normalization, while the
CO-to-H$_2$ conversion factors can additionally produce uncertainties of 20-25%
for the slope and 0.10-0.20 dex for the normalization.
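For reference, the molecular Kennicutt-Schmidt relation and the depletion time
discussed above take the forms
$$\Sigma_{\rm SFR} \propto \Sigma_{\rm mol}^{\beta}, \qquad t_{\rm dep} \equiv \Sigma_{\rm mol}/\Sigma_{\rm SFR},$$
so a near-unity slope $\beta$ translates directly into a depletion time that
is nearly independent of the local gas surface density.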
|
Using a new state-dependent, $\lambda$-deformable, linear functional
operator, ${\cal Q}_{\psi}^{\lambda}$, which presents a natural $C^{\infty}$
deformation of quantization, we obtain a uniquely selected non--linear,
integro--differential Generalized Schr\"odinger equation. The case ${\cal
Q}_{\psi}^{1}$ reproduces linear quantum mechanics, whereas ${\cal
Q}_{\psi}^{0}$ admits an exact dynamic, energetic and measurement theoretic
{\em reproduction} of classical mechanics. All solutions to the resulting
classical wave equation are given and we show that functionally chaotic
dynamics exists.
|
We study the effect of symmetry breaking perturbations in the one-dimensional
SU(4) spin-orbital model. We allow the exchange in spin ($J_1$) and orbital
($J_2$) channel to be different and thus reduce the symmetry to SU(2) $\otimes$
SU(2). A magnetic field $h$ along the $S^z$ direction is also applied. Using
the formalism developed by Azaria et al., we extend their analysis of the
isotropic $J_1=J_2$, h=0 case and obtain the low-energy effective theory near
the SU(4) point in the asymmetric case. An accurate analysis of the
renormalization group flow is presented with a particular emphasis on the
effect of the anisotropy. In zero magnetic field, we recover the same
qualitative low-energy physics as in the isotropic case. In particular, the
massless behavior found on the line $J_1=J_2>K/4$ extends into a large
anisotropic region. We find, though, that the anisotropy asserts itself by
allowing nontrivial scaling behaviors of the physical quantities. When a
magnetic field is present the effect of the anisotropy is striking. In addition
to the usual commensurate-incommensurate phase transition that occurs in the
spin sector of the theory, we find that the field may induce a second
transition of the KT type in the remaining degrees of freedom to which it does
not couple directly. In this sector, we find that the effective theory is that
of an SO(4) Gross-Neveu model with an h-dependent coupling that may change its
sign as h varies.
|
We investigate the properties of the standard perturbative expansions which
describe the early stages of the dynamics of gravitational clustering. We show
that for hierarchical scenarios with no small-scale cutoff perturbation theory
always breaks down beyond a finite order $q_+$. Besides, the degree of
divergence increases with the order of the perturbative terms so that
renormalization procedures cannot be applied. Nevertheless, we explain that
despite the divergence of these subleading terms the results of perturbation
theory are correct at leading order because they can be recovered through a
steepest-descent method which does not use such perturbative expansions.
Finally, we investigate the simpler cases of the Zel'dovich and Burgers
dynamics. In particular, we show that the standard Burgers equation exhibits
similar properties. This analogy suggests that the results of the standard
perturbative expansions are valid up to the order $q_+$ (i.e. until they are
finite). Moreover, the first ``non-regular'' term of a large-scale expansion of
the two-point correlation function should be of the form $R^{-2} \sigma^2(R)$.
At higher orders the large-scale expansion should no longer be over powers of
$\sigma^2$ but over a different combination of powers of 1/R. However, its
calculation requires new non-perturbative methods.
|
A team of identical and oblivious ant-like agents - a(ge)nts - that leave
pheromone traces is programmed to jointly patrol an area modeled as a graph.
They perform this task using simple local interactions, while also achieving
the important byproduct of partitioning the graph into roughly equal-sized
disjoint sub-graphs. Each a(ge)nt begins to operate at an arbitrary initial
location, and throughout its work does not acquire any information on either
the shape or size of the graph, or the number or whereabouts of other a(ge)nts.
Graph partitioning occurs spontaneously, as each of the a(ge)nts patrols and
expands its own pheromone-marked sub-graph, or region. This graph partitioning
algorithm is inspired by molecules hitting the borders of air-filled elastic
balloons: an a(ge)nt that hits a border edge from the interior of its region
more frequently than an external a(ge)nt hits the same edge from an adjacent
vertex in the neighboring region, may conquer that adjacent vertex, expanding
its region at the expense of the neighbor. Since the rule of patrolling a
region ensures that each vertex is visited with a frequency inversely
proportional to the size of the region, in terms of vertex count, a smaller
region will effectively exert higher "pressure" at its borders, and conquer
adjacent vertices from a larger region, thereby increasing the smaller region
and shrinking the larger. The algorithm, therefore, tends to equalize the sizes
of the regions patrolled, resembling a set of perfectly elastic physical
balloons, confined to a closed volume and filled with an equal amount of air.
The pheromone based local interactions of agents eventually cause the system to
evolve into a partition that is close to balanced rather quickly, and if the
graph and the number of a(ge)nts remain unchanged, it is guaranteed that the
system settles into a stable and balanced partition.
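The conquest rule can be sketched in a few lines; the toy model below is a
hypothetical simplification in which the 1/|region| visit frequency is imposed
directly rather than emerging from pheromone-guided patrols, and region
connectivity is not enforced as it is in the full algorithm.

    import random
    import networkx as nx

    def balloon_partition(G, k, steps=100000, seed=0):
        rng = random.Random(seed)
        owner = {v: None for v in G}
        frontier = rng.sample(list(G), k)      # arbitrary initial locations
        for i, v in enumerate(frontier):
            owner[v] = i
        while frontier:                        # BFS so every vertex starts owned
            v = frontier.pop(0)
            for u in G.neighbors(v):
                if owner[u] is None:
                    owner[u] = owner[v]
                    frontier.append(u)
        size = [list(owner.values()).count(i) for i in range(k)]
        edges = list(G.edges)
        for _ in range(steps):
            u, v = rng.choice(edges)           # a border edge gets "hit"
            a, b = owner[u], owner[v]
            if a == b:
                continue
            if size[a] < size[b]:              # smaller region, higher pressure
                owner[v], size[a], size[b] = a, size[a] + 1, size[b] - 1
            elif size[b] < size[a]:
                owner[u], size[b], size[a] = b, size[b] + 1, size[a] - 1
        return owner, size

    _, sizes = balloon_partition(nx.grid_2d_graph(12, 12), k=4)
    print(sizes)                               # sizes drift towards 144/4 each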
|
We generalize the ZX calculus to quantum systems of dimension higher than
two. The resulting calculus is sound and universal for quantum mechanics. We
define the notion of a mutually unbiased qudit theory and study two particular
instances of these theories in detail: qudit stabilizer quantum mechanics and
Spekkens-Schreiber toy theory for dits. The calculus allows us to analyze the
structure of qudit stabilizer quantum mechanics and provides a geometrical
picture of qudit stabilizer theory using D-toruses, which generalizes the Bloch
sphere picture for qubit stabilizer quantum mechanics. We also use our
framework to describe generalizations of Spekkens toy theory to higher
dimensional systems. This gives a novel proof that qudit stabilizer quantum
mechanics and Spekkens-Schreiber toy theory for dits are operationally
equivalent in three dimensions. The qudit pictorial calculus is a useful tool
to study quantum foundations, understand the relationship between qubit and
qudit quantum mechanics, and provide a novel, high level description of quantum
information protocols.
|
A probabilistic expert system emulates the decision-making ability of a human
expert through a directed graphical model. The first step in building such
systems is to understand the data generation mechanism. To this end, one may
try to decompose a multivariate distribution into a product of several
conditionals, evolving black-box machine learning predictive models towards
transparent cause-and-effect discovery. Most causal models assume a single
homogeneous
population, an assumption that may fail to hold in many applications. We show
that when the homogeneity assumption is violated, causal models developed based
on such an assumption can fail to identify the correct causal direction. We
propose an adjustment to a commonly used causal direction test statistic,
based on a $k$-means-type clustering algorithm in which both the labels and the
number of components are estimated from the collected data. Our simulation
results show that the proposed adjustment significantly improves the
performance of the causal direction test statistic for heterogeneous data. We
study the large-sample behaviour of our proposed test
statistic and demonstrate the application of the proposed method using real
data.
|
Prior attacks on graph neural networks have mostly focused on graph poisoning
and evasion, neglecting the network's weights and biases. Traditional
weight-based fault injection attacks, such as bit flip attacks used for
convolutional neural networks, do not consider the unique properties of graph
neural networks. We propose the Injectivity Bit Flip Attack, the first bit flip
attack designed specifically for graph neural networks. Our attack targets the
learnable neighborhood aggregation functions in quantized message passing
neural networks, degrading their ability to distinguish graph structures and
causing them to lose the expressivity of the Weisfeiler-Lehman test. Our
findings suggest
that exploiting mathematical properties specific to certain graph neural
network architectures can significantly increase their vulnerability to bit
flip attacks. Injectivity Bit Flip Attacks can degrade the maximal expressive
Graph Isomorphism Networks trained on various graph property prediction
datasets to random output by flipping only a small fraction of the network's
bits, demonstrating its higher destructive power compared to a bit flip attack
transferred from convolutional neural networks. Our attack is transparent and
motivated by theoretical insights which are confirmed by extensive empirical
results.
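The single-bit fault model underlying such attacks is easy to state in code.
The helper below is a hypothetical illustration of flipping one bit of an
int8-quantized weight; the attack described above additionally selects which
bits to flip so that the aggregation function loses injectivity.

    import numpy as np

    def flip_bit(weights_int8, index, bit):
        # Reinterpret two's-complement bytes so XOR toggles exactly one bit.
        weights_int8.view(np.uint8)[index] ^= np.uint8(1 << bit)
        return weights_int8

    w = np.array([12, -3, 77], dtype=np.int8)
    print(flip_bit(w.copy(), index=2, bit=6))  # 77 -> 13: one high bit flipped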
|
Deep Neural Networks (DNNs) are prone to learning spurious features that
correlate with the label during training but are irrelevant to the learning
problem. This hurts model generalization and poses problems when deploying them
in safety-critical applications. This paper aims to better understand the
effects of spurious features through the lens of the learning dynamics of the
internal neurons during the training process. We make the following
observations: (1) While previous works highlight the harmful effects of
spurious features on the generalization ability of DNNs, we emphasize that not
all spurious features are harmful. Spurious features can be "benign" or
"harmful" depending on whether they are "harder" or "easier" to learn than the
core features for a given model. This definition is model and
dataset-dependent. (2) We build upon this premise and use instance difficulty
methods (like Prediction Depth (Baldock et al., 2021)) to quantify "easiness"
for a given model and to identify this behavior during the training phase. (3)
We empirically show that the harmful spurious features can be detected by
observing the learning dynamics of the DNN's early layers. In other words, easy
features learned by the initial layers of a DNN early during the training can
(potentially) hurt model generalization. We verify our claims on medical and
vision datasets, both simulated and real, and justify the empirical success of
our hypothesis by showing the theoretical connections between Prediction Depth
and information-theoretic concepts like V-usable information (Ethayarajh et
al., 2021). Lastly, our experiments show that monitoring only accuracy during
training (as is common in machine learning pipelines) is insufficient to detect
spurious features. We, therefore, highlight the need for monitoring early
training dynamics using suitable instance difficulty metrics.
|
We present a joint theoretical and experimental study to investigate
polymorphism in $\alpha$-sexithiophene (6T) crystals. By means of
density-functional theory calculations, we clarify that the low-temperature
phase is favorable over the high-temperature one, with higher relative
stability by about 50 meV/molecule. This result is in agreement with our
thermal desorption measurements. We also propose a transition path between the
high- and low-temperature 6T polymorphs, estimating an upper bound for the
energy barrier of about 1 eV/molecule. The analysis of the electronic
properties of the investigated 6T crystal structures complements our study.
|
The advancement of large language models (LLMs) brings notable improvements
across various applications, while simultaneously raising concerns about
potential private data exposure. One notable capability of LLMs is their
ability to form associations between different pieces of information, but this
raises concerns when it comes to personally identifiable information (PII).
This paper delves into the association capabilities of language models, aiming
to uncover the factors that influence their proficiency in associating
information. Our study reveals that as models scale up, their capacity to
associate entities/information intensifies, particularly when target pairs
demonstrate shorter co-occurrence distances or higher co-occurrence
frequencies. However, there is a distinct performance gap when associating
commonsense knowledge versus PII, with the latter showing lower accuracy.
Despite the proportion of accurately predicted PII being relatively small, LLMs
still demonstrate the capability to predict specific instances of email
addresses and phone numbers when provided with appropriate prompts. These
findings underscore the potential risk to PII confidentiality posed by the
evolving capabilities of LLMs, especially as they continue to expand in scale
and power.
|
We examine the weak noise limit of an overdamped dissipative system within a
semiclassical description and show how quantization influences the growth and
decay of fluctuations of the thermally equilibrated systems. We trace its
origin in a semiclassical counterpart of the generalized potential for the
dissipative system.
|
We count the number of occurrences of restricted patterns of length 3 in
permutations with respect to length and the number of cycles. The main tool is
a bijection between permutations in standard cycle form and weighted Motzkin
paths.
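The statistic being refined can be checked by brute force on small cases; the
snippet below (a direct enumeration, not the paper's bijection) tabulates the
joint distribution of cycle count and 321-pattern occurrences over $S_4$.

    from itertools import permutations, combinations
    from collections import Counter

    def pattern_occurrences(p, pat):
        # An occurrence is a 3-element subsequence order-isomorphic to pat.
        rank = lambda t: tuple(sorted(t).index(x) for x in t)
        return sum(1 for c in combinations(p, 3) if rank(c) == rank(pat))

    def cycle_count(p):
        # Number of cycles of p, given in one-line notation on {1, ..., n}.
        seen, cycles = set(), 0
        for i in range(1, len(p) + 1):
            if i not in seen:
                cycles += 1
                while i not in seen:
                    seen.add(i)
                    i = p[i - 1]
        return cycles

    stats = Counter((cycle_count(p), pattern_occurrences(p, (3, 2, 1)))
                    for p in permutations(range(1, 5)))
    print(sorted(stats.items()))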
|
A polynomial $f(x)$ over a field $K$ is said to be stable if all its iterates
are irreducible over $K$. L. Danielson and B. Fein have shown that over a large
class of fields $K$, if $f(x)$ is an irreducible monic binomial, then it is
stable over $K$. In this paper it is proved that this result no longer holds
over finite fields. Necessary and sufficient conditions are given in order that
a given binomial is stable over $\mathbb{F}_q$. These conditions are used to
construct a table listing the stable binomials over $\mathbb{F}_q$ of the form
$f(x)=x^d-a$, $a\in\mathbb{F}_q\setminus\{0,1\}$, for $q \leq 27$ and $d \leq
10$. The paper ends with a brief link with Mersenne primes.
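A finite computation cannot prove stability outright, but irreducibility of
the first few iterates is a fast necessary check. The sketch below (restricted
to prime $q$, since sympy's modulus arithmetic covers prime fields only) is an
illustration, not the paper's criterion.

    from sympy import symbols, Poly

    x = symbols('x')

    def first_iterates_irreducible(d, a, p, k=3):
        # Checks irreducibility of f, f o f, ..., f^k over F_p -- necessary,
        # but not sufficient, for stability of f(x) = x^d - a.
        f = Poly(x**d - a, x, modulus=p)
        g = f
        for _ in range(k):
            if not g.is_irreducible:
                return False
            g = g.compose(f)        # g(f(x)): the next iterate
        return True

    # Candidates for stable binomials x^2 - a over F_7:
    print([a for a in range(2, 7) if first_iterates_irreducible(2, a, 7)])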
|
We construct a 3-dimensional cell complex that is the 3-skeleton for an
Eilenberg--MacLane classifying space for the symmetric group $\mathfrak{S}_n$.
Our complex starts with the presentation for $\mathfrak{S}_n$ with $n-1$
adjacent transpositions with squaring, commuting, and braid relations, and adds
seven classes of 3-cells that fill in certain 2-spheres bounded by these
relations. We use a rewriting system and a combinatorial method of K. Brown to
prove the correctness of our construction. Our main application is a
computation of the second cohomology of $\mathfrak{S}_n$ in certain twisted
coefficient modules; we use this computation in a companion paper to study
splitting of extensions related to braid groups. As another application, we
give a concrete description of the third homology of $\mathfrak{S}_n$ with
untwisted coefficients in $\mathbb{Z}$.
|
The goal in this paper is to demonstrate a new method for constructing
global-in-time approximate (asymptotic) solutions of (pseudodifferential)
parabolic equations with a small parameter. We show that, in the leading term,
such a solution can be constructed by using characteristics, more precisely, by
using solutions of the corresponding Hamiltonian system and without using any
integral representation. For completeness, we also briefly describe the
well-known scheme developed by V. P. Maslov for constructing global-in-time
solutions.
|
The growth of a finitely generated group is an important geometric invariant
which has been studied for decades. It can be either polynomial, for a
well-understood class of groups, or exponential, for most groups studied by
geometers, or intermediate, that is between polynomial and exponential. Despite
recent spectacular progresses, the class of groups with intermediate growth
remains largely mysterious. Many examples of such groups are constructed using
Mealy automata. The aim of this paper is to give an algorithmic procedure to
study the growth of such automata groups, and more precisely to provide
numerical upper bounds on their exponents. Our functions retrieve known optimal
bounds on the famous first Grigorchuk group. They also improve known upper
bounds on other automata groups and permitted us to discover several new
examples of automata groups of intermediate growth. All the algorithms
described are implemented in GAP, a language dedicated to computational group
theory.
|
Crystal seeding enables a deeper understanding of phase behavior, leading to
the development of methods for controlling and manipulating phase transitions
in various applications such as materials synthesis, crystallization processes,
and phase transformation engineering. How to seed a crystal in the time
domain is an open question of great significance, and answering it may provide
an avenue to understand and control time-dependent quantum many-body physics.
Here, we
utilize a microwave pulse as a seed to induce the formation of a discrete time
crystal in Floquet driven Rydberg atoms. In the experiment, the periodic
driving on Rydberg states acts as a seeded crystalline order in subspace, which
triggers the time-translation symmetry breaking across the entire ensemble. The
behavior of the emergent time crystal is intricately linked to alterations in
the seed, such as the relative phase shift and the frequency difference, which
result in phase-dependent seeding and a corresponding shift in the periodicity
of the time crystal, leading to embryonic synchronization. This result opens up
new
possibilities for studying and harnessing time-dependent quantum many-body
phenomena, offering insights into the behavior of complex many-body systems
under seeding.
|
We study the general evolution of spherical over-densities for thawing class
of dark energy models. We model dark energy with scalar fields having canonical
as well as non-canonical kinetic energy. For non-canonical case, we consider
models where the kinetic energy is of the Born-Infeld form. We study various
potentials like linear, inverse-square, exponential as well as PNGB-type. We
also consider the case when dark energy is homogeneous as well as the case when
it is inhomogeneous and virializes together with matter. Our study shows that
models with linear potential in particular with Born-Infeld type kinetic term
can have significant deviation from the $\Lambda$CDM model in terms of density
contrast at the time of virialization. Although our approach is a simplified
one to study the nonlinear evolution of matter overdensities inside the cluster
and is not applicable to an actual physical situation, it gives some interesting
insights into the nonlinear clustering of matter in the presence of thawing
class of dark energy models.
|
This paper introduces an enhanced meta-heuristic (ML-ACO) that combines
machine learning (ML) and ant colony optimization (ACO) to solve combinatorial
optimization problems. To illustrate the underlying mechanism of our ML-ACO
algorithm, we start by describing a test problem, the orienteering problem. In
this problem, the objective is to find a route that visits a subset of vertices
in a graph within a time budget to maximize the collected score. In the first
phase of our ML-ACO algorithm, an ML model is trained using a set of small
problem instances where the optimal solution is known. Specifically,
classification models are used to classify an edge as being part of the optimal
route, or not, using problem-specific features and statistical measures. The
trained model is then used to predict the probability that an edge in the graph
of a test problem instance belongs to the corresponding optimal route. In the
second phase, we incorporate the predicted probabilities into the ACO component
of our algorithm, i.e., using the probability values as heuristic weights or to
warm start the pheromone matrix. Here, the probability values bias sampling
towards favoring those predicted high-quality edges when constructing feasible
routes. We have tested multiple classification models including graph neural
networks, logistic regression and support vector machines, and the experimental
results show that our solution prediction approach consistently boosts the
performance of ACO. Further, we empirically show that our ML model trained on
small synthetic instances generalizes well to large synthetic and real-world
instances. Our approach integrating ML with a meta-heuristic is generic and can
be applied to a wide range of optimization problems.
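How the predicted probabilities bias route construction can be sketched as
follows; this minimal Python fragment (names and the n-by-n `prob` matrix of
predicted edge probabilities are assumptions for illustration) omits the score
collection, pheromone updates, and evaporation of the full algorithm.

    import numpy as np

    def construct_route(dist, prob, start, budget, rng, alpha=1.0, beta=2.0):
        n = len(dist)
        tau = prob.copy()                  # pheromone warm-started from ML output
        route, used, cur, visited = [start], 0.0, start, {start}
        while True:
            cand = [j for j in range(n)
                    if j not in visited and used + dist[cur][j] <= budget]
            if not cand:
                break
            # Predicted high-quality edges get proportionally more weight.
            w = np.array([tau[cur, j]**alpha * prob[cur, j]**beta for j in cand])
            w = w + 1e-12                  # floor avoids a zero-probability dead end
            nxt = rng.choice(cand, p=w / w.sum())
            used += dist[cur][nxt]
            visited.add(nxt)
            route.append(nxt)
            cur = nxt
        return route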
|
Starting from a finite-dimensional representation of the Yangian
$Y(\mathfrak{g})$ for a simple Lie algebra $\mathfrak{g}$ in Drinfeld's
original presentation, we construct a Hopf algebra
$X_\mathcal{I}(\mathfrak{g})$, called the extended Yangian, whose defining
relations are encoded in a ternary matrix relation built from a specific
$R$-matrix $R(u)$. We prove that there is a surjective Hopf algebra morphism
$X_\mathcal{I}(\mathfrak{g})\twoheadrightarrow Y(\mathfrak{g})$ whose kernel is
generated as an ideal by the coefficients of a central matrix $\mathcal{Z}(u)$.
When the underlying representation is irreducible, we show that this matrix
becomes a grouplike central series, thereby making available a proof of a
well-known theorem stated by Drinfeld in the 1980's. We then study in detail
the algebraic structure of the extended Yangian, and prove several
generalizations of results which are known to hold for Yangians associated to
classical Lie algebras in their $R$-matrix presentations.
|
We introduce a method to synthesize animator guided human motion across 3D
scenes. Given a set of sparse (3 or 4) joint locations (such as the location of
a person's hand and two feet) and a seed motion sequence in a 3D scene, our
method generates a plausible motion sequence starting from the seed motion
while satisfying the constraints imposed by the provided keypoints. We
decompose the continual motion synthesis problem into walking along paths and
transitioning in and out of the actions specified by the keypoints, which
enables long generation of motions that satisfy scene constraints without
explicitly incorporating scene information. Our method is trained only using
scene-agnostic mocap data. As a result, our approach is deployable across 3D
scenes with various geometries. For achieving plausible continual motion
synthesis without drift, our key contribution is to generate motion in a
goal-centric canonical coordinate frame where the next immediate target is
situated at the origin. Our model can generate long sequences of diverse
actions such as grabbing, sitting and leaning chained together in arbitrary
order, demonstrated on scenes of varying geometry: HPS, Replica, Matterport,
ScanNet and scenes represented using NeRFs. Several experiments demonstrate
that our method outperforms existing methods that navigate paths in 3D scenes.
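The goal-centric canonicalization can be illustrated with a small transform;
the sketch below assumes a 2D ground plane and a scalar heading angle, which
are simplifying assumptions rather than the paper's exact frame conventions.

    import numpy as np

    def to_goal_centric(points_xy, goal_xy, heading):
        # Translate so the next target sits at the origin, then rotate so
        # the approach direction aligns with +x.
        c, s = np.cos(-heading), np.sin(-heading)
        R = np.array([[c, -s], [s, c]])
        return (points_xy - np.asarray(goal_xy)) @ R.T

    traj = np.array([[3.0, 1.0], [2.5, 0.5], [2.0, 0.0]])
    print(to_goal_centric(traj, goal_xy=[2.0, 0.0], heading=0.0))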
|
From a perspective of feature matching, optical flow estimation for event
cameras involves identifying event correspondences by comparing feature
similarity across accompanying event frames. In this work, we introduce an
effective and robust high-dimensional (HD) feature descriptor for event frames,
utilizing Vector Symbolic Architectures (VSA). The topological similarity among
neighboring variables within VSA contributes to the enhanced representation
similarity of feature descriptors for flow-matching points, while its
structured symbolic representation capacity facilitates feature fusion from
both event polarities and multiple spatial scales. Based on this HD feature
descriptor, we propose a novel feature matching framework for event-based
optical flow, encompassing both model-based (VSA-Flow) and self-supervised
learning (VSA-SM) methods. In VSA-Flow, accurate optical flow estimation
validates the effectiveness of HD feature descriptors. In VSA-SM, a novel
similarity maximization method based on the HD feature descriptor is proposed
to learn optical flow in a self-supervised way from events alone, eliminating
the need for auxiliary grayscale images. Evaluation results demonstrate that
our VSA-based method achieves superior accuracy in comparison to both
model-based and self-supervised learning methods on the DSEC benchmark, while
remaining competitive with both on the MVSEC benchmark. This
contribution marks a significant advancement in event-based optical flow within
the feature matching methodology.
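The "topological similarity" property can be reproduced with standard level
hypervectors, in which neighboring levels share most components; the following
sketch uses illustrative parameters and is not the paper's encoder.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10000                                  # hypervector dimensionality

    def level_hypervectors(n_levels, flips=D // 64):
        # Adjacent levels differ in only `flips` components, so similarity
        # decays smoothly with level distance.
        hvs = [rng.choice([-1, 1], size=D)]
        for _ in range(n_levels - 1):
            hv = hvs[-1].copy()
            hv[rng.choice(D, size=flips, replace=False)] *= -1
            hvs.append(hv)
        return np.array(hvs)

    L = level_hypervectors(16)
    cos = lambda u, v: (u @ v) / D
    print(cos(L[0], L[1]), cos(L[0], L[8]))    # nearby levels are more similar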
|
Conceptual reasoning, the ability to reason in abstract and high-level
perspectives, is key to generalization in human cognition. However, limited
study has been done on large language models' capability to perform conceptual
reasoning. In this work, we bridge this gap and propose a novel
conceptualization framework that forces models to perform conceptual reasoning
on abstract questions and generate solutions in a verifiable symbolic space.
Using this framework as an analytical tool, we show that existing large
language models fall short on conceptual reasoning, dropping 9% to 28% on
various benchmarks compared to direct inference methods. We then discuss how
models can improve since high-level abstract reasoning is key to unbiased and
generalizable decision-making. We propose two techniques to add trustworthy
induction signals by generating familiar questions with similar underlying
reasoning paths and asking models to perform self-refinement. Experiments show
that our proposed techniques improve models' conceptual reasoning performance
by 8% to 11%, achieving a more robust reasoning system that relies less on
inductive biases.
|
We consider a model combining technicolor with the top quark condensation. As
a concrete model for Technicolor we use the Minimal Walking Technicolor, and
this will result in the appearance of a novel fourth generation whose leptons
constitute a usual weak doublet while the QCD quarks are vectorlike singlets
under the weak interactions. We carry out an analysis of the mass spectra and
precision measurement constraints, and find the model viable. We contrast the
model with present LHC data and discuss the future prospects.
|
In this paper, we study bijections on strictly convex sets of $\mathbf R
\mathbf P^n$ for $n \geq 2$ and closed convex projective surfaces equipped with
the Hilbert metric that map complete geodesics to complete geodesics as sets.
Hyperbolic $n$-space with its standard metric is a special example of the
spaces we consider, and it is known that these bijections in this context are
precisely the isometries. We first prove that this result generalizes to an
arbitrary strictly convex set. For the surfaces setting, we prove the
equivalence of mapping simple closed geodesics to simple closed geodesics and
mapping closed geodesics to closed geodesics. We also outline some future
directions and questions to further explore these topics.
|
This paper reviews the experimental and theoretical state of the art in
ballistic hot electron transistors that utilize two-dimensional base contacts
made from graphene, i.e. graphene base transistors (GBTs). Early performance
predictions that indicated potential for THz operation still hold true today,
even with improved models that take non-idealities into account. Experimental
results clearly demonstrate the basic functionality, with on/off current
switching over several orders of magnitude, but further developments are
required to exploit the full potential of the GBT device family. In particular,
interfaces between graphene and semiconductors or dielectrics are far from
perfect and thus limit experimental device integrity, reliability and
performance.
|
The article discusses carbocatalysis provided with amorphous carbons. The
discussion is conducted from the standpoint of the spin chemistry of graphene
molecules, in the framework of which the amorphous carbocatalysts are a
conglomerate of graphene-oxynitrothiohydride stable radicals presenting the
basic structural units (BSUs) of the species. The chemical activity of the BSUs
atoms is reliably determined computationally, which allows mapping the
distribution of active sites in these molecular catalysts. The presented maps
reliably evidence the BSUs radicalization provided with carbon atoms only, the
non-terminated edge part of which presents a set of active sites. Spin mapping
of carbocatalyst active sites is suggested as the first step towards the spin
carbocatalysis of the species.
|
Zipf's law predicts a power-law relationship between word rank and frequency
in language communication systems, and is widely reported in texts yet remains
enigmatic as to its origins. Computer simulations have shown that language
communication systems emerge at an abrupt phase transition in the fidelity of
mappings between symbols and objects. Since the phase transition approximates
the Heaviside or step function, we show that Zipfian scaling emerges
asymptotically at high rank based on the Laplace transform. We thereby
demonstrate that Zipf's law gradually emerges from the moment of phase
transition in communicative systems. We show that this power-law scaling
behavior explains the emergence of natural languages at phase transitions. We
find that the emergence of Zipf's law during language communication suggests
that the use of rare words in a lexicon is critical for the construction of an
effective communicative system at the phase transition.
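A minimal sketch of the asymptotic step, assuming the mapping fidelity is
idealized as a Heaviside function $\theta(t - t_c)$ at the transition point
$t_c$: its Laplace transform is
$$\mathcal{L}\{\theta(t - t_c)\}(s) = \int_0^{\infty} \theta(t - t_c)\, e^{-st}\, dt = \frac{e^{-s t_c}}{s} \sim \frac{1}{s} \quad (s \to 0),$$
a pure power law in the transform variable, matching the exponent $-1$
rank-frequency scaling that Zipf's law predicts at high rank.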
|
The essence of quadruped movement is the motion of the center of gravity,
which follows characteristic patterns across quadruped actions. However, gait
motion planning for quadruped robots is time-consuming. Animals in nature can
provide a large amount of gait information for robots to learn and imitate.
Common methods learn animal posture with a motion capture system or numerous
motion data points. In this paper, we propose a video imitation adaptation
network (VIAN) that can imitate the action of animals and adapt it to the robot
from a few seconds of video. The deep learning model extracts key points during
animal motion from videos. The VIAN eliminates noise and extracts key
information of motion with a motion adaptor, and then feeds the extracted
movements as the motion pattern into deep reinforcement learning
(DRL). To ensure similarity between the learning result and the animal motion
in the video, we introduce rewards that are based on the consistency of the
motion. DRL explores and learns to maintain balance from movement patterns from
videos, imitates the action of animals, and eventually, allows the model to
learn the gait or skills from short motion videos of different animals and to
transfer the motion pattern to the real robot.
|
Although the Sun's polar magnetic fields are thought to provide important
clues for understanding the 11-year sunspot cycle, including the observed
variations of its amplitude and period, the current database of high-quality
polar-field measurements spans relatively few sunspot cycles. In this paper we
address this deficiency by consolidating Mount Wilson Observatory polar faculae
data from four data reduction campaigns, validating it through a comparison
with facular data counted automatically from MDI intensitygrams, and
calibrating it against polar field measurements taken by the Wilcox Solar
Observatory and average polar field and total polar flux calculated using MDI
line-of-sight magnetograms. Our results show that the consolidated polar
facular measurements are in excellent agreement with both polar field and polar
flux estimates, making them an ideal proxy to study the evolution of the polar
magnetic field. Additionally, we combine this database with sunspot area
measurements to study the role of the polar magnetic flux in the evolution of
the heliospheric magnetic field (HMF). We find that there is a strong
correlation between HMF and polar flux at solar minimum and that, taken
together, polar flux and sunspot area are better at explaining the evolution of
the HMF during the last century than sunspot area alone.
|
We analyze X-ray spectra and images of a sample of Seyfert 2 galaxies that
unambiguously contain starbursts, based on their optical and UV
characteristics. Although all sample members contain active galactic nuclei
(AGNs), supermassive black holes or other related processes at the galactic
centers alone cannot account for the total X-ray emission in all instances.
Eleven of the twelve observed galaxies are significantly resolved with the
ROSAT HRI, while six of the eight sources observed with the lower-resolution
PSPC also appear extended on larger scales. The X-ray emission is extended on
physical scales of 10 kpc and greater, which we attribute to starburst-driven
outflows and supernova-heating of the interstellar medium. Spectrally, a
physically-motivated composite model of the X-ray emission that includes a
heavily absorbed (N_H > 10^{23} cm^{-2}) nuclear component (the AGN), power-law
like scattered AGN flux, and a thermal starburst describes this sample well.
Half the sample exhibit iron K alpha lines, which are typical of AGNs.
|
Reduced dimensionality has long been regarded as an important strategy for
increasing thermoelectric performance, for example in superlattices and other
engineered structures. Here we point out and illustrate by examples that three
dimensional bulk materials can be made to behave as if they were two
dimensional from the point of view of thermoelectric performance. Implications
for the discovery of new practical thermoelectrics are discussed.
|
Aims: We examine the recoverability and completeness limits of the dense core
mass functions (CMFs) derived for a molecular cloud using extinction data and a
core identification scheme based on two-dimensional thresholding.
Methods: We performed simulations where a population of artificial cores was
embedded into the variable background extinction field of the Pipe nebula. We
extracted the cores from the simulated extinction maps, constructed the CMFs,
and compared them to the input CMFs. The simulations were repeated using a
variety of extraction parameters and several core populations with differing
input mass functions and differing degrees of crowding.
Results: The fidelity of the observed CMF depends on the parameters selected
for the core extraction algorithm for our background. More importantly, it
depends on how crowded the core population is. We find that the observed CMF
recovers the true CMF reliably when the mean separation of cores is larger than
their mean diameter (f>1). If this condition holds, the derived CMF is accurate
and complete above M > 0.8-1.5 Msun, depending on the parameters used for the
core extraction. In the simulations, the best fidelity was achieved with the
detection threshold of 1 or 2 times the rms-noise of the extinction data, and
with the contour level spacings of 3 times the rms-noise. Choosing larger
threshold and wider level spacings increases the limiting mass. The simulations
show that when f>1.5, the masses of individual cores are recovered with a
typical uncertainty of 25-30 %. When f=1 the uncertainty is ~60 %. In very
crowded cases where f<1 the core identification algorithm is unable to recover
the masses of the cores adequately. For the cores of the Pipe nebula f~2.0 and
therefore the use of the method in that region is justified.
|
Bode integrals of sensitivity and sensitivity-like functions along with
complementary sensitivity and complementary sensitivity-like functions are
conventionally used for describing performance limitations of a feedback
control system. In this paper, we show that in the case when the disturbance is
a wide sense stationary process the (complementary) sensitivity Bode integral
and the (complementary) sensitivity-like Bode integral are identical. A lower
bound of the continuous-time complementary sensitivity-like Bode integral is
also derived and examined with the linearized flight-path angle tracking
control problem of an F-16 aircraft.
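For reference, the classical continuous-time sensitivity Bode integral that
such results build on reads, for an open-loop transfer function of relative
degree at least two with unstable poles $p_k$,
$$\int_0^{\infty} \ln\lvert S(j\omega)\rvert \, d\omega = \pi \sum_k \operatorname{Re}(p_k),$$
which vanishes for stable open loops; the contribution above is to show that
the sensitivity-like analogue coincides with this quantity when the
disturbance is a wide-sense stationary process.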
|
Track functions describe the collective effect of the fragmentation of quarks
and gluons into charged hadrons, making them a key ingredient for jet
substructure measurements at hadron colliders, where track-based measurements
offer superior angular resolution. The first moment of the track function,
describing the average energy deposited in charged particles, is a simple and
well-studied object. However, measurements of higher-point correlations of
energy flow necessitate a characterization of fluctuations in the hadronization
process, described theoretically by higher moments of the track function. In
this paper we derive the structure of the renormalization group (RG) evolution
equations for track function moments. We show that energy conservation gives
rise to a shift symmetry that allows the evolution equations to be written in
terms of cumulants, $\kappa(N)$, and the difference between the first moment of
quark and gluon track functions, $\Delta$. The uniqueness of the first three
cumulants then fixes their all-order evolution to be DGLAP, up to corrections
involving powers of $\Delta$ that are numerically suppressed by an effective
order in the perturbative expansion for phenomenological track functions.
However, at the fourth cumulant and beyond there is non-trivial RG mixing into
products of cumulants such as $\kappa(4)$ into $\kappa(2)^2$. We analytically
compute the evolution equations up to the sixth moment at
$\mathcal{O}(\alpha_s^2)$, and study the associated RG flows. These results
allow for the study of up to six-point correlations in energy flow using
tracks, paving the way for precision jet substructure at the LHC.
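For concreteness, writing $T(N)$ for the $N$-th track function moment
(notation assumed here), the first cumulants referenced above are the standard
combinations
$$\kappa(1) = T(1), \qquad \kappa(2) = T(2) - T(1)^2, \qquad \kappa(3) = T(3) - 3\,T(2)\,T(1) + 2\,T(1)^3,$$
and cumulants with $N \ge 2$ are invariant under a constant shift of the
underlying variable, which is why the shift symmetry induced by energy
conservation singles them out.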
|
We study spin glass clusters ("shards") in a random transverse magnetic
field, and determine the regime where quantum chaos and random matrix level
statistics emerge from the integrable limits of weak and strong field.
Relations with quantum phase transition are also discussed.
|
We study higher critical points of the variational functional associated with
a free boundary problem related to plasma confinement. Existence and regularity
of minimizers in elliptic free boundary problems have already been studied
extensively. But because the functionals are not smooth, standard variational
methods cannot be used directly to prove the existence of higher critical
points. Here we find a nontrivial critical point of mountain pass type and
prove many of the same estimates known for minimizers, including Lipschitz
continuity and nondegeneracy. We then show that the free boundary is smooth in
dimension 2 and prove partial regularity in higher dimensions.
|
Monte Carlo integration is typically interpreted as an estimator of the
expected value using stochastic samples. There exists an alternative
interpretation in calculus where Monte Carlo integration can be seen as
estimating a \emph{constant} function -- from the stochastic evaluations of the
integrand -- that integrates to the original integral. The integral mean value
theorem states that this \emph{constant} function should be the mean (or
expectation) of the integrand. Since both interpretations result in the same
estimator, little attention has been devoted to the calculus-oriented
interpretation. We show that the calculus-oriented interpretation actually
implies the possibility of using a more \emph{complex} function than a
\emph{constant} one to construct a more efficient estimator for Monte Carlo
integration. We build a new estimator based on this interpretation and relate
our estimator to control variates with least-squares regression on the
stochastic samples of the integrand. Unlike prior work, our resulting estimator
is \emph{provably} better than or equal to the conventional Monte Carlo
estimator. To demonstrate the strength of our approach, we introduce a
practical estimator that can act as a simple drop-in replacement for
conventional Monte Carlo integration. We experimentally validate our framework
on various light transport integrals. The code is available at
\url{https://github.com/iribis/regressionmc}.
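The regression view can be sketched in a few lines on $[0,1]$; note that this
naive same-sample fit can introduce bias that the estimator above is
specifically constructed to avoid, so the snippet illustrates the idea rather
than the paper's method.

    import numpy as np

    rng = np.random.default_rng(1)

    def regression_mc(f, n, degree=1):
        x = rng.random(n)
        y = f(x)
        coef = np.polyfit(x, y, degree)      # least-squares fit g ~ f
        g = np.polyval(coef, x)
        # Exact integral of the fitted polynomial over [0, 1]:
        G = sum(c / (degree - i + 1) for i, c in enumerate(coef))
        return G + np.mean(y - g)            # control-variate correction

    f = lambda x: np.exp(x)
    print(regression_mc(f, 64), np.e - 1.0)  # compare with the exact integral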
|
Using covariant quantization of the electromagnetic field, the Casimir force
per unit area experienced by a long conducting cylindrical shell, under both
Dirichlet and Neumann boundary conditions, is calculated. The renormalization
procedure is based on the plasma cut-off frequency for real conductors. The
real case of a gold (silver) cylindrical shell is considered and the
corresponding electromagnetic Casimir pressure is computed. It is discussed
that the Dirichlet and Neumann problems should be considered separately without
adding their corresponding results.
|
Deep clustering has exhibited remarkable performance; however, the
over-confidence problem, i.e., the estimated confidence for a sample belonging
to a particular cluster greatly exceeds its actual prediction accuracy, has
been overlooked in prior research. To tackle this critical issue, we pioneer
the development of a calibrated deep clustering framework. Specifically, we
propose a novel dual-head (calibration head and clustering head) deep
clustering model that can effectively align the estimated confidence with the
actual accuracy. The calibration head adjusts the overconfident predictions
of the clustering head, generating prediction confidences that match the
model's learning status. Then, the clustering head dynamically selects reliable
high-confidence samples estimated by the calibration head for pseudo-label
self-training. Additionally, we introduce an effective network initialization
strategy that enhances both training speed and network robustness. The
effectiveness of the proposed calibration approach and initialization strategy
are both endorsed with solid theoretical guarantees. Extensive experiments
demonstrate that the proposed calibrated deep clustering model not only
improves on state-of-the-art deep clustering methods by roughly a factor of
ten in terms of expected calibration error but also significantly outperforms
them in terms of
clustering accuracy.
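The expected calibration error quoted above is a standard metric; a routine
implementation (not the paper's code) bins predictions by confidence and
averages the accuracy-confidence gap.

    import numpy as np

    def expected_calibration_error(conf, correct, n_bins=15):
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            m = (conf > lo) & (conf <= hi)
            if m.any():
                # Bin weight times |mean accuracy - mean confidence|.
                ece += m.mean() * abs(correct[m].mean() - conf[m].mean())
        return ece

    conf = np.array([0.99, 0.95, 0.90, 0.60])
    correct = np.array([1.0, 0.0, 1.0, 1.0])
    print(expected_calibration_error(conf, correct))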
|
Shell galaxies make a class of tidally distorted galaxies, characterised by
wide concentric arc(s), extending out to large galactocentric distances with
sharp outer edges. Recent observations of young massive star clusters in the
prominent outer shell of NGC 474 suggest that such systems host extreme
conditions of star formation. In this paper, we present a hydrodynamic
simulation of a galaxy merger and its transformation into a shell galaxy. We
analyse how the star formation activity evolves with time, location-wise within
the system, and what are the physical conditions for star formation. During the
interaction, an excess of dense gas appears, triggering a starburst, i.e. an
enhanced star formation rate and a reduced depletion time. Star formation
coincides with regions of high molecular gas fraction, such as the galactic
nucleus, spiral arms, and occasionally the tidal debris during the early stages
of the merger. Tidal interactions scatter stars into a stellar spheroid, while
the gas cools down and reforms a disc. The morphological transformation after
coalescence stabilises the gas and thus quenches star formation, without the
need for feedback from an active galactic nucleus. This evolution shows
similarities with a compaction scenario for compact quenched spheroids at
high-redshift, yet without a long red nugget phase. Shells appear after
coalescence, during the quenched phase, implying that they do not host the
conditions necessary for in situ star formation. The results suggest that
shell-forming mergers might be part of the process of turning blue late-type
galaxies into red and dead early-types.
|
Clinical trials in specific indications require the administration of rescue
medication in case a patient does not sufficiently respond to investigational
treatment. The application of additional treatment on an as needed basis causes
problems to the analysis and interpretation of the results of these studies
since the effect of the investigational treatment can be confounded by the
additional medication. Following up all patients until study end and capturing
all data does not fully address the issue. We present an analysis that accounts
for the fact that rescue is a study outcome, not a covariate, when rescue
medication is administered according to a deterministic rule. This approach
allows a biological effect to be clearly defined. For normally distributed
longitudinal data a practically unbiased estimator of the biological effect can
be obtained. The results are compared to an ITT analysis and an analysis on all
patients not receiving rescue.
|
We construct a triangulation of a compactification of the Moduli space of a
surface with at least one puncture that is closely related to the
Deligne-Mumford compactification. Specifically, there is a surjective map from
the compactification we construct to the Deligne-Mumford compactification so
that the inverse image of each point is contractible. In particular our
compactification is homotopy equivalent to the Deligne-Mumford
compactification.
|