Graph clustering involves the task of dividing nodes into clusters, so that
the edge density is higher within clusters as opposed to across clusters. A
natural, classic and popular statistical setting for evaluating solutions to
this problem is the stochastic block model, also referred to as the planted
partition model.
In this paper we present a new algorithm--a convexified version of Maximum
Likelihood--for graph clustering. We show that, in the classic stochastic block
model setting, it outperforms existing methods by polynomial factors when the
cluster size is allowed to have general scalings. In fact, it is within
logarithmic factors of known lower bounds for spectral methods, and there is
evidence suggesting that no polynomial time algorithm would do significantly
better.
We then show that this guarantee carries over to a more general extension of
the stochastic block model. Our method can handle the settings of semi-random
graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated
nodes, partially observed graphs, and planted clique/coloring. In
particular, our results provide the best exact recovery guarantees to date for
the planted partition, planted k-disjoint-cliques and planted noisy coloring
models with general cluster sizes; in other settings, we match the best
existing results up to logarithmic factors.
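The planted partition setting can be made concrete with a small simulation. The sketch below is an illustrative spectral baseline, not the convexified maximum-likelihood algorithm of the paper: it generates a two-cluster stochastic block model and recovers the partition from the sign pattern of the second eigenvector of the adjacency matrix. All parameter values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Planted partition: n nodes in two equal clusters,
# edge probability p within clusters and q across clusters.
n, p, q = 200, 0.5, 0.05
labels = np.array([0] * (n // 2) + [1] * (n // 2))
same = labels[:, None] == labels[None, :]
probs = np.where(same, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
A = (upper | upper.T).astype(float)  # symmetric adjacency, zero diagonal

# Spectral recovery: the eigenvector of the second-largest eigenvalue
# of A approximates the cluster indicator vector.
vals, vecs = np.linalg.eigh(A)
v = vecs[:, np.argsort(vals)[-2]]
pred = (v > 0).astype(int)

# Agreement with the planted labels, up to label permutation.
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```

With this strong signal (p much larger than q), the spectral baseline recovers the planted clusters essentially exactly; the regime the abstract targets is the harder one of small clusters and weak density gaps.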
|
Understanding the isotopic composition of cosmic rays (CRs) observed near
Earth represents a milestone towards the identification of their origin. Local
fluxes contain all the known stable and long-lived isotopes, reflecting the
complex history of primaries and secondaries as they traverse the interstellar
medium. For that reason, a numerical code which aims at describing the CR
transport in the Galaxy must unavoidably rely on accurate modelling of the
production of secondary particles. In this work we provide a detailed
description of the nuclear cross sections and decay network as implemented in
the forthcoming release of the galactic propagation code DRAGON2. We present
the secondary production models implemented in the code and we apply the
different prescriptions to compute quantities of interest to interpret local CR
fluxes (e.g., nuclear fragmentation timescales, secondary and tertiary source
terms). In particular, we develop a nuclear secondary production model aimed at
accurately computing the light secondary fluxes (namely: Li, Be, B) above 1
GeV/n. This result is achieved by fitting existing empirical or semi-empirical
formalisms to a large sample of measurements in the energy range 100 MeV/n to
100 GeV/n and by considering the contribution of the most relevant decaying
isotopes up to iron. Concerning secondary antiparticles (positrons and
antiprotons), we describe a collection of models taken from the literature, and
provide a detailed quantitative comparison.
|
Gamow-Teller strength functions in full $(pf)^{8}$ spaces are calculated with
sufficient accuracy to ensure that all the states in the resonance region have
been populated. Many of the resulting peaks are weak enough to become
unobservable. The quenching factor necessary to bring into agreement the low
lying observed states with shell model predictions is shown to be due to
nuclear correlations. To within experimental uncertainties, it is the same as
that found in one-particle transfer and (e,e') reactions. Perfect consistency
between the observed $^{48}Ca(p,n)^{48}Sc$ peaks and the calculation is
achieved by assuming an observation threshold of 0.75\% of the total strength,
a value that seems typical in several experiments.
|
Learning meaningful representations of data is an important aspect of machine
learning and has recently been successfully applied to many domains like
language understanding or computer vision. Instead of training a model for one
specific task, representation learning is about training a model to capture all
useful information in the underlying data and make it accessible for a
predictor. For predictive process analytics, it is essential to have all
explanatory characteristics of a process instance available when making
predictions about the future, as well as for clustering and anomaly detection.
Due to the large variety of perspectives and types within business process
data, generating a good representation is a challenging task. In this paper, we
propose a novel approach for representation learning of business process
instances which can process and combine most perspectives in an event log. In
conjunction with a self-supervised pre-training method, we show the
capabilities of the approach through a visualization of the representation
space and case retrieval. Furthermore, the pre-trained model is fine-tuned to
multiple process prediction tasks and demonstrates its effectiveness in
comparison with existing approaches.
|
We performed first-principles relativistic full-potential linearized
augmented plane wave calculations for strained tetragonal ferromagnetic
La(Ba)MnO$_3$ with an assumed experimental structure of thin strained
tetragonal La$_{0.67}$Ca$_{0.33}$MnO$_3$ (LCMO) films grown on SrTiO$_3$[001]
and LaAlO$_3$[001] substrates. The calculated uniaxial magnetic anisotropy
energy (MAE) values are in good quantitative agreement with experiment for
LCMO films on SrTiO$_3$ substrate. We also analyze the applicability of linear
magnetoelastic theory for describing the strain dependence of the MAE, and
estimate the magnetostriction coefficient $\lambda_{001}$.
|
We introduce the notion of Gauss-Landau-Hall magnetic field on a Riemannian
surface. The corresponding Landau-Hall problem is shown to be equivalent to the
dynamics of a massive boson. This allows one to view that problem as a globally
stated, variational one. In this framework, flowlines appear as critical points
of an action with density depending on the proper acceleration. Moreover, we
can study the global stability of flowlines. In this equivalence, the massless
particle model corresponds to a limit case obtained when the strength of the
Gauss-Landau-Hall field increases arbitrarily. We also obtain new properties
related to the completeness of flowlines for general magnetic fields. The paper
also contains new results relative to the Landau-Hall problem associated with a
uniform magnetic field. For example, we characterize those revolution surfaces
whose parallels are all normal flowlines of a uniform magnetic field.
|
It is well known that the Helstrom bound can be improved by generalizing the
form of a coherent state. Thus, designing a quantum measurement achieving the
improved Helstrom bound is important for novel quantum communication. In the
present article, we analytically show that the improved Helstrom bound can be
achieved by a projective measurement composed of orthogonal non-standard
Schr\"{o}dinger cat states. Moreover, we numerically show that the improved
Helstrom bound can be nearly achieved by an indirect measurement based on the
Jaynes-Cummings model. As the Jaynes-Cummings model describes an interaction
between a light field and a two-level atom, we emphasize that the indirect
measurement considered in this article has potential to be experimentally
implemented.
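For context, the (unimproved) Helstrom bound for discriminating two pure states has a simple closed form. The snippet below evaluates this generic textbook formula for binary phase-shift keying with standard coherent states $|\alpha\rangle$ and $|-\alpha\rangle$, whose squared overlap is $e^{-4|\alpha|^2}$; the parameter values are illustrative, and this is not the generalized-coherent-state construction studied in the article.

```python
import math

def helstrom_error(overlap_sq, p0=0.5):
    """Minimum average error probability for discriminating two pure states
    with prior probabilities p0 and 1 - p0, where overlap_sq = |<psi0|psi1>|^2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p0 * (1.0 - p0) * overlap_sq))

# Standard coherent states |alpha> and |-alpha>: |<alpha|-alpha>|^2 = exp(-4|alpha|^2).
alpha = 0.5
p_err = helstrom_error(math.exp(-4.0 * alpha ** 2))
```

Orthogonal states (zero overlap) are discriminated perfectly, while identical states give the random-guessing error of 1/2; improvements of the bound come from generalizing the coherent-state alphabet itself.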
|
Morphological Segmentation involves decomposing words into morphemes, the
smallest meaning-bearing units of language. This is an important NLP task for
morphologically-rich agglutinative languages such as the Southern African Nguni
language group. In this paper, we investigate supervised and unsupervised
models for two variants of morphological segmentation: canonical and surface
segmentation. We train sequence-to-sequence models for canonical segmentation,
where the underlying morphemes may not be equal to the surface form of the
word, and Conditional Random Fields (CRF) for surface segmentation.
Transformers outperform LSTMs with attention on canonical segmentation,
obtaining an average F1 score of 72.5% across 4 languages. Feature-based CRFs
outperform bidirectional LSTM-CRFs to obtain an average of 97.1% F1 on surface
segmentation. In the unsupervised setting, an entropy-based approach using a
character-level LSTM language model fails to outperform a Morfessor baseline,
while on some of the languages neither approach performs much better than a
random baseline. We hope that the high performance of the supervised
segmentation models will help to facilitate the development of better NLP tools
for Nguni languages.
|
Counterion distribution around an isolated flexible polyelectrolyte in the
presence of a divalent salt is evaluated using the adsorption model [M.
Muthukumar, J. Chem. Phys. {\bf 120}, 9343 (2004)] that considers Bjerrum
length, salt concentration, and local dielectric heterogeneity as physical
variables in the system. Self-consistent calculations of the effective charge
and size of the polymer show that divalent counterions replace condensed monovalent
counterions in competitive adsorption. The theory further predicts that at
modest physical conditions, polymer charge is compensated and reversed with
increasing divalent salt. Consequently, the polyelectrolyte collapses and
reswells, respectively. Lower temperatures and higher degrees of dielectric
heterogeneity enhance the condensation of all species of ions. The complete
diagram of states for the effective charge, calculated as a function of Coulomb
strength and salt concentration, suggests that (a) overcharging requires a
minimum Coulomb strength, and (b) a progressively higher presence of salt recharges the polymer
due to either electrostatic screening (low Coulomb strength) or negative coion
condensation (high Coulomb strength). A simple theory of ion-bridging is also
presented which predicts a first-order collapse of polyelectrolytes. The
theoretical predictions are in agreement with generic results from experiments
and simulations.
|
Using Tyablikov's decoupling approximation, we calculate the initial
suppression rate of the N\'eel temperature, $R_{IS}=-\lim_{x\to 0}
T_{N}^{-1}\,dT_{N}/dx$, in a quasi-two-dimensional diluted Heisenberg antiferromagnet with
nonmagnetic impurities of concentration $x$. To explain the experimental fact
that $R^{(Zn)}_{IS}=3.4$ for Zn substitution differs from $R^{(Mg)}_{IS}=3.0$
for Mg substitution, we propose a model in which impurity substitution not only
dilutes the spin system but also reduces the intra-plane exchange couplings
surrounding the impurities. Decreases in the exchange coupling constants of 12%
for Zn substitution and 6% for Mg substitution explain the two experimental
results, when an appropriate value
of the interplane coupling is used.
|
We review how the Square Kilometre Array (SKA) will address fundamental
questions in cosmology, focussing on its use for neutral Hydrogen (HI) surveys.
A key enabler of its unique capabilities will be large (but smart) receptors in
the form of aperture arrays. We outline the likely contributions of Phase-1 of
the SKA (SKA1), Phase-2 SKA (SKA2) and pathfinding activities (SKA0). We
emphasise the important role of cross-correlation between SKA HI results and
those at other wavebands such as: surveys for objects in the EoR with VISTA and
the SKA itself; and huge optical and near-infrared redshift surveys, such as
those with HETDEX and Euclid. We note that the SKA will contribute in other
ways to cosmology, e.g. through gravitational lensing and $H_{0}$ studies.
|
Real-time railway rescheduling is an important technique to enable
operational recovery in response to unexpected and dynamic conditions in a
timely and flexible manner. Current research relies mostly on
origin-destination (OD) data and model-based methods for estimating train
passenger demand. These approaches
primarily focus on averaged disruption patterns, often overlooking the
immediate uneven distribution of demand over time. In reality, passenger demand
deviates significantly from predictions, especially during a disaster.
Disastrous situations, such as the 2022 flood in Zhengzhou, China, have had an
unprecedented effect not only on Zhengzhou railway station itself, a major
railway hub in China, but also on other major hubs connected to Zhengzhou,
e.g., Xi'an, the closest hub west of Zhengzhou. In this study, we define a
real-time demand-responsive (RTDR) railway rescheduling problem focusing on two
specific aspects, namely the volatility of demand and the management of station
crowdedness. For the first time, we propose a data-driven approach using
real-time mobile data (MD) to deal with this RTDR problem. A hierarchical deep
reinforcement learning (HDRL) framework is designed to perform real-time
rescheduling in a demand-responsive manner. The use of MD enables the modelling
of passenger dynamics in response to train delays and station crowdedness, as
well as the real-time optimisation of train rescheduling in view of demand
changes arising from passengers' behavioural response to disruption. Results
show that the agent can steadily satisfy over 62% of the
demand with only 61% of the original rolling stock, ensuring continuous
operations without overcrowding. Moreover, the agent exhibits adaptability when
transferred to a new environment with increased demand, highlighting its
effectiveness in addressing unforeseen disruptions in real-time settings.
|
Let $G$ be a connected reductive group over a $p$-adic local field $F$. We
propose and study the notions of $G$-$\varphi$-modules and
$G$-$(\varphi,\nabla)$-modules over the Robba ring, which are exact faithful
$F$-linear tensor functors from the category of $G$-representations on
finite-dimensional $F$-vector spaces to the categories of $\varphi$-modules and
$(\varphi,\nabla)$-modules over the Robba ring, respectively, commuting with
the respective fiber functors. We study Kedlaya's slope filtration theorem in
this context, and show that $G$-$(\varphi,\nabla)$-modules over the Robba ring
are "$G$-quasi-unipotent", which is a generalization of the $p$-adic local
monodromy theorem proven independently by Y. Andr\'e, K. S. Kedlaya, and Z.
Mebkhout.
|
We give a polynomial time algorithm for computing the Igusa local zeta
function $Z(s,f)$ attached to a polynomial $f(x)\in \mathbb{Z}[x]$, in one
variable, with splitting field $\mathbb{Q}$, and a prime number $p$. We also
propose a new class of Linear Feedback Shift Registers based on the computation
of Igusa's local zeta function.
|
A fruitful way of obtaining meaningful, possibly concrete, algorithmically
random numbers is to consider a potential behaviour of a Turing machine and its
probability with respect to a measure (or semi-measure) on the input space of
binary codes. For example, Chaitin's Omega is a well-known Martin-L\"{o}f random
number that is obtained by considering the halting probability of a universal
prefix-free machine. In the last decade, similar examples have been obtained
for higher forms of randomness, i.e. randomness relative to strong oracles. In
this work we obtain characterizations of the algorithmically random reals in
higher randomness classes, as probabilities of certain events that can happen
when an oracle universal machine runs probabilistically on a random oracle.
Moreover we apply our analysis to different machine models, including oracle
Turing machines, prefix-free machines, and models for infinite online
computation. We find that in many cases the arithmetical complexity of a
property is directly reflected in the strength of the algorithmic randomness of
the probability with which it occurs, on any given universal machine. On the
other hand, we point to many examples where this does not happen and the
probability is a number whose algorithmic randomness is not the maximum
possible (with respect to its arithmetical complexity). Finally we find that,
unlike the halting probability of a universal machine, the probabilities of
more complex properties like totality, cofinality, computability or
completeness do not necessarily have the same Turing degree when they are
defined with respect to different universal machines.
|
We revisit the well-known aqueous ferrous-ferric electron transfer reaction
in order to address recent suggestions that nuclear tunnelling can lead to
significant deviation from the linear response assumption inherent in the
Marcus picture of electron transfer. A recent study of this reaction by
Richardson and coworkers has found a large difference between their new
path-integral method, GR-QTST, and the saddle point approximation of Wolynes
(Wolynes theory). They suggested that this difference could be attributed to
the existence of multiple tunnelling pathways, leading Wolynes theory to
significantly overestimate the rate. This was used to argue that the linear
response assumptions of Marcus theory may break down for liquid systems when
tunnelling is important. If true, this would imply that the commonly used
method for studying such systems, where the problem is mapped onto a spin-boson
model, is invalid. However, we have recently shown that size inconsistency in
GR-QTST can lead to poor predictions of the rate in systems with many degrees
of freedom. We have also suggested an improved method, the path-integral linear
golden-rule (LGR) approximation, which fixes this problem. Here we demonstrate
that the GR-QTST results for ferrous-ferric electron transfer are indeed
dominated by its size consistency error. Furthermore, by comparing the LGR and
Wolynes theory results, we confirm the established picture of nuclear
tunnelling in this system. Finally, by comparing our path-integral results to
those obtained by mapping onto the spin-boson model, we reassess the importance
of anharmonic effects and the accuracy of this commonly used mapping approach.
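As background, the classical Marcus golden-rule rate against which such path-integral results are usually benchmarked has a simple closed form, $k = (2\pi/\hbar)\,|H|^2\,(4\pi\lambda k_B T)^{-1/2}\exp[-(\Delta G+\lambda)^2/(4\lambda k_B T)]$. The sketch below evaluates it in reduced units with illustrative parameters (not the actual ferrous-ferric values) and shows the activationless maximum at $-\Delta G = \lambda$.

```python
import math

def marcus_rate(coupling, reorg, dG, kT, hbar=1.0):
    """Classical Marcus golden-rule electron-transfer rate (reduced units):
    k = (2*pi/hbar) |H|^2 (4*pi*reorg*kT)^(-1/2) exp(-(dG + reorg)^2 / (4*reorg*kT))."""
    activation = (dG + reorg) ** 2 / (4.0 * reorg * kT)
    return (2.0 * math.pi / hbar) * coupling ** 2 \
        / math.sqrt(4.0 * math.pi * reorg * kT) * math.exp(-activation)

# Illustrative parameters: weak diabatic coupling, reorganisation energy 1, kT = 0.025.
k_normal   = marcus_rate(coupling=1e-3, reorg=1.0, dG=0.0,  kT=0.025)
k_optimal  = marcus_rate(coupling=1e-3, reorg=1.0, dG=-1.0, kT=0.025)  # activationless
k_inverted = marcus_rate(coupling=1e-3, reorg=1.0, dG=-2.0, kT=0.025)  # inverted region
```

The symmetric ($\Delta G = 0$) and mirror-image inverted-region cases give identical rates because the activation energy depends only on $(\Delta G+\lambda)^2$; nuclear tunnelling corrections of the kind probed by Wolynes theory and LGR modify this classical picture at low temperature.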
|
Activation functions play a key role in neural networks, so it is fundamental
to understand their advantages and disadvantages in order to achieve better
performance. This paper first introduces common types of nonlinear activation
functions that are alternatives to the well-known sigmoid function and then
evaluates their characteristics. Moreover, deeper neural networks are analysed
because they positively influence final performance compared to shallower
networks. Since deep networks also depend strongly on weight initialisation,
the effect of drawing weights from Gaussian and uniform distributions is
analysed, paying particular attention to how the number of incoming and
outgoing connections to a node influences the whole network.
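A minimal sketch of the fan-in/fan-out idea: Glorot-style initialisation scales the weight variance by the number of incoming and outgoing connections so that the pre-activation variance stays roughly constant from layer to layer. Layer sizes, seed, and activation choices below are arbitrary illustrative assumptions, not settings from the paper.

```python
import numpy as np

def sigmoid(x):
    # The classic saturating activation the paper takes as a baseline.
    return 1.0 / (1.0 + np.exp(-x))

def glorot_uniform(fan_in, fan_out, rng):
    # Uniform(-limit, limit) with limit chosen so Var(W) = 2 / (fan_in + fan_out).
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 256))   # unit-variance inputs
W = glorot_uniform(256, 256, rng)
pre = x @ W                            # pre-activations of one layer
h = np.tanh(pre)                       # a nonlinear alternative to sigmoid
var_ratio = pre.var() / x.var()        # stays near 1 under Glorot scaling
```

Repeating the matrix product across many layers with an unscaled initialisation would make `var_ratio` drift far from 1, which is precisely the signal-propagation problem the fan-in/fan-out scaling is designed to avoid.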
|
This paper presents DeepKalPose, a novel approach for enhancing temporal
consistency in monocular vehicle pose estimation applied on video through a
deep-learning-based Kalman Filter. By integrating a Bi-directional Kalman
filter strategy utilizing forward and backward time-series processing, combined
with a learnable motion model to represent complex motion patterns, our method
significantly improves pose accuracy and robustness across various conditions,
particularly for occluded or distant vehicles. Experimental validation on the
KITTI dataset confirms that DeepKalPose outperforms existing methods in both
pose accuracy and temporal consistency.
|
Unsupervised image-to-image translation is an important and challenging
problem in computer vision. Given an image in the source domain, the goal is to
learn the conditional distribution of corresponding images in the target
domain, without seeing any pairs of corresponding images. While this
conditional distribution is inherently multimodal, existing approaches make an
overly simplified assumption, modeling it as a deterministic one-to-one
mapping. As a result, they fail to generate diverse outputs from a given source
domain image. To address this limitation, we propose a Multimodal Unsupervised
Image-to-image Translation (MUNIT) framework. We assume that the image
representation can be decomposed into a content code that is domain-invariant,
and a style code that captures domain-specific properties. To translate an
image to another domain, we recombine its content code with a random style code
sampled from the style space of the target domain. We analyze the proposed
framework and establish several theoretical results. Extensive experiments with
comparisons to the state-of-the-art approaches further demonstrate the
advantage of the proposed framework. Moreover, our framework allows users to
control the style of translation outputs by providing an example style image.
Code and pretrained models are available at https://github.com/nvlabs/MUNIT
|
The SL(2,R) invariant ten-dimensional type IIB superstring effective action
is compactified on a torus to D spacetime dimensions. The transformation
properties of scalar, vector and tensor fields, appearing after the dimensional
reduction, are obtained in order to maintain the SL(2,R) invariance of the
reduced effective action. The symmetry of the action enables one to generate
new string vacua from known configurations. As illustrative examples, new black
hole solutions are obtained in five and four dimensions from a given set of
solutions of the equations of motion.
|
Observations with RXTE (Rossi X-ray Timing Explorer) revealed the presence of
High Frequency Quasi-Periodic Oscillations (HFQPOs) of the X-ray flux from
several accreting stellar mass Black Holes. HFQPOs (and their counterparts at
lower frequencies) may allow us to study general relativity in the strong
gravity regime. However, the observational evidence today does not yet allow us
to distinguish between different HFQPO models. In this paper we use a general
relativistic ray-tracing code to investigate X-ray timing-spectroscopy and
polarization properties of HFQPOs in the orbiting Hotspot model. We study
observational signatures for the particular case of the 166 Hz quasi-periodic
oscillation (QPO) in the galactic binary GRS 1915+105. We conclude with a
discussion of the observability of spectral signatures with a
timing-spectroscopy experiment like the LOFT (Large Observatory for X-ray
Timing) and polarization signatures with space-borne X-ray polarimeters such as
IXPE (Imaging X-ray Polarimetry Explorer), PolSTAR (Polarization Spectroscopic
Telescope Array), PRAXyS (Polarimetry of Relativistic X-ray Sources), or XIPE
(X-ray Imaging Polarimetry Explorer). A high count-rate mission like LOFT would
make it possible to get a QPO phase for each photon, enabling the study of the
QPO-phase-resolved spectral shape and the correlation between this and the flux
level. Owing to the short periods of the HFQPOs, first-generation X-ray
polarimeters would not be able to assign a QPO phase to each photon. The study
of QPO-phase-resolved polarization energy spectra would thus require
simultaneous observations with a first-generation X-ray polarimeter and a
LOFT-type mission.
|
We study numerically the Casimir interaction between dielectrics in both two
and three dimensions. We demonstrate how sparse matrix factorizations enable
one to study torsional interactions in three dimensions. In two dimensions we
study the full cross-over between non-retarded and retarded interactions as a
function of separation. We use constrained factorizations in order to measure
the interaction of a particle with a rough dielectric surface and compare with
a scaling argument.
|
We theoretically study the dephasing of an Andreev spin qubit (ASQ) due to
electric and magnetic noise. Using a tight-binding model, we calculate the
Andreev states formed in a Josephson junction where the link is a semiconductor
with strong spin-orbit interaction. As a result of both the spin-orbit
interaction and induced superconductivity, the local charge and spin of these
states vary as a function of externally controllable parameters: the phase
difference between the superconducting leads, an applied magnetic field, and
filling of the underlying semiconductor. Concomitantly, coupling to
fluctuations of the electric or magnetic environment will vary, which informs
the rate of dephasing. We qualitatively predict the dependence of dephasing on
the nature of the environment, the magnetic field, the phase difference across
the junction, and the filling of the semiconductor. Comparing the simulated electric-
and magnetic-noise-induced dephasing rate to experiment suggests that the
dominant source of noise is magnetic. Moreover, by appropriately tuning these
external parameters, we find sweet-spots at which we predict an enhancement in
ASQ coherence times.
|
Searches for lepton flavour and lepton number violation in kaon decays by the
NA48/2 and NA62 experiments at CERN are presented. A new measurement of the
helicity-suppressed ratio of charged kaon leptonic decay rates
$R_K = BR(K_{e2})/BR(K_{\mu 2})$ to sub-percent relative precision is
discussed. An improved upper limit on the lepton-number-violating
$K^\pm \to \pi^\mp \mu^\pm \mu^\pm$ decay rate is also presented.
|
In this paper, we consider the classical wave equation with time-dependent,
spatially multiscale coefficients. We propose a fully discrete computational
multiscale method in the spirit of the localized orthogonal decomposition in
space with a backward Euler scheme in time. We show optimal convergence rates
in space and time beyond the assumptions of spatial periodicity or scale
separation of the coefficients. Further, we propose an adaptive update strategy
for the time-dependent multiscale basis. Numerical experiments illustrate the
theoretical results and showcase the practicability of the adaptive update
strategy.
|
For randomized clinical trials where a single, primary, binary endpoint would
require unfeasibly large sample sizes, composite endpoints are widely chosen as
the primary endpoint. Despite being commonly used, composite endpoints entail
challenges in designing and interpreting results. Given that the components may
be of different relevance and have different effect sizes, the choice of
components must be made carefully. Especially, sample size calculations for
composite binary endpoints depend not only on the anticipated effect sizes and
event probabilities of the composite components, but also on the correlation
between them. However, information on the correlation between endpoints is
usually not reported in the literature, which can be an obstacle when planning
a sound future trial design. We consider two-arm randomized controlled trials
with a primary composite binary endpoint and an endpoint that consists only of
the clinically more important component of the composite endpoint. We propose a
trial design that allows an adaptive modification of the primary endpoint based
on blinded information obtained at an interim analysis. We consider a decision
rule to select between a composite endpoint and its most relevant component as
primary endpoint. The decision rule chooses the endpoint with the lower
estimated required sample size. Additionally, the sample size is reassessed
using the estimated event probabilities and correlation, and the expected
effect sizes of the composite components. We investigate the statistical power
and significance level under the proposed design through simulations. We show
that the adaptive design is equally or more powerful than designs without
adaptive modification on the primary endpoint. The targeted power is achieved
even if the correlation is misspecified while maintaining the type 1 error. We
illustrate the proposal by means of two case studies.
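The endpoint-selection rule can be illustrated with the standard normal-approximation sample size formula for comparing two proportions. The event probabilities below are hypothetical interim estimates (not from the case studies), and this simple per-endpoint formula omits the correlation adjustment between composite components described in the paper.

```python
from statistics import NormalDist

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test
    (normal approximation, unpooled variance)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return (z_a + z_b) ** 2 * variance / (p_control - p_treat) ** 2

# Hypothetical interim estimates: the composite endpoint has higher event
# probabilities but a diluted effect; the relevant component has a larger effect.
n_composite = n_per_arm(0.30, 0.22)
n_component = n_per_arm(0.15, 0.08)

# Decision rule: choose the endpoint with the lower estimated sample size.
primary = "composite" if n_composite <= n_component else "component"
```

With these illustrative numbers the component endpoint requires the smaller trial and would be selected as primary; in the adaptive design this comparison is made at the interim analysis using blinded estimates.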
|
Knowledge leakage poses a critical risk to the competitive advantage of
knowledge-intensive organisations. Although knowledge leakage is a
human-centric security issue, little is known about leakage resulting from
individual behaviour and the protective strategies and controls that could be
effective in mitigating leakage risk. Therefore, this research explores the
perspectives of security practitioners on the key factors that influence
knowledge leakage risk in the context of knowledge-intensive organisations. We
conduct two focus groups to explore these perspectives. The research highlights
three types of behavioural controls that mitigate the risk of knowledge
leakage: human resource management practices, knowledge security training and
awareness practices, and compartmentalisation practices.
|
This is the user's manual of MC@NLO 2.0. This package is a practical
implementation, based upon the HERWIG event generator, of the MC@NLO formalism,
which allows one to incorporate NLO QCD matrix elements consistently into a
parton shower framework. The processes available in this version are those of
vector boson pair and heavy quark pair production in hadron collisions. This
document is self-contained, but we emphasise the main differences with respect
to version 1.0.
|
We describe a method for incorporating ambipolar diffusion in the strong
coupling approximation into a multidimensional magnetohydrodynamics code based
on the total variation diminishing scheme. Contributions from ambipolar
diffusion terms are included by explicit finite difference operators in a fully
unsplit way, maintaining second order accuracy. The divergence-free condition
of magnetic fields is exactly ensured at all times by a flux-interpolated
constrained transport scheme. The super time stepping method is used to
accelerate the timestep in high resolution calculations and/or in strong
ambipolar diffusion. We perform two test problems, the steady-state oblique
C-type shocks and the decay of Alfv\'en waves, confirming the accuracy and
robustness of our numerical approach. Results from the simulations of the
compressible MHD turbulence with ambipolar diffusion show the flexibility of
our method as well as its ability to follow complex MHD flows in the presence
of ambipolar diffusion. These simulations show that the dissipation rate of MHD
turbulence is strongly affected by the strength of ambipolar diffusion.
|
In this paper, we carry out a systematic study of the prospect of testing
general relativity with the inspiral signals of black hole binaries that could
be detected with TianQin. The study is based on the parameterized
post-Einsteinian (ppE) waveform, so that many modified gravity theories can be
covered simultaneously. We consider black hole binaries with total masses
ranging from $10\,{\rm M}_\odot$ to $10^7\,{\rm M}_\odot$ and ppE corrections at
post-Newtonian (PN) orders ranging from $-4$PN to $2$PN. Compared to the
current ground-based detectors, TianQin can improve the constraints on the ppE
phase parameter $\beta$ by orders of magnitude. For example, the improvement at
the $-4$PN and $2$PN orders can be about $13$ and $3$ orders of magnitude
(compared to the results from GW150914), respectively. Compared to future
ground-based detectors, such as ET, TianQin is expected to be superior below
the $-1$PN order, and for corrections above the $-0.5$PN order, TianQin is
still competitive near the large mass end of the low mass range $[10 \rm
M_\odot, \,10^3 \rm M_\odot]\,$. Compared to the future space-based detector
LISA, TianQin can be competitive in the lower mass end as the PN order is
increased. For example, at the $-4$PN order, LISA is always superior for
sources more massive than about $30\rm M_\odot\,$, while at the $2$PN order,
TianQin becomes competitive for sources less massive than about $10^4\rm
M_\odot$. We also study the scientific potentials of detector networks
involving TianQin, LISA and ET, and discuss the constraints on specific
theories such as the dynamic Chern-Simons theory and the Einstein-dilaton
Gauss-Bonnet theory.
|
The heat kernel expansion can be used as a tool to obtain the effective
geometric quantities in fuzzy spaces. Generalizing the efficient method
presented in the previous work on the global quantities, it is applied to the
effective local geometric quantities in compact fuzzy spaces. Some simple fuzzy
spaces corresponding to singular spaces in continuum theory are studied as
specific examples. A fuzzy space with a non-associative algebra is also
studied.
|
An optimized sideband cooling in the presence of initial system correlations
is investigated for a standard optomechanical system coupled to a general
mechanical non-Markovian reservoir. We study the evolution of phonon number by
incorporating the effects of initial correlations into the time-dependent
coefficients in the Heisenberg equation. We introduce the concept of cooling
rate and define an average phonon reduction function to describe the sideband
cooling effect in the non-Markovian regime. Our results show that the instantaneous
phonon number can be significantly reduced by introducing either the
parametric-amplification type or the beam-splitter type initial correlations.
In addition, the ground state cooling rate can be accelerated by enhancing the
initial correlation of beam-splitter type. By optimizing the initial state of
the system and utilizing Q-modulation technology, a stable mechanical ground
state can be obtained in a very short time. Our optimized cooling protocol
provides an appealing platform for phonon manipulation and quantum information
processing in solid-state systems.
|
A high-energy muon collider can act as an emitter of electroweak gauge bosons
and thus gives rise to substantial vector boson scattering (VBS) processes. In this
work, we investigate the production of heavy neutral lepton (HNL) $N$ and
lepton number violation (LNV) signature through VBS at high-energy muon
colliders. VBS induces LNV processes $W^\pm Z/\gamma\to \ell^\pm N \to \ell^\pm
\ell^\pm W^\mp\to \ell^\pm \ell^\pm q\bar{q}'$ with an on-shell HNL $N$ at
$\mu^+\mu^-$ colliders. In analogy to neutrinoless double-beta decay with the
HNL in t-channel, the LNV signature $W^+W^+\to \ell^+\ell^+$ can also happen
via VBS at a same-sign muon collider. They provide clean and robust LNV
signatures that probe the Majorana nature of HNLs and are thus more
advantageous than direct $\mu\mu$ annihilation. We analyze the potential of
searching for Majorana HNL and obtain the exclusion limits on mixing $V_{\ell
N}$. Based on this same-sign lepton signature, we also obtain the sensitivity
of muon collider to the Weinberg operator.
|
Nonparametric Bayesian models are often based on the assumption that the
objects being modeled are exchangeable. While appropriate in some applications
(e.g., bag-of-words models for documents), exchangeability is sometimes assumed
simply for computational reasons; non-exchangeable models might be a better
choice for applications based on subject matter. Drawing on ideas from
graphical models and phylogenetics, we describe a non-exchangeable prior for a
class of nonparametric latent feature models that is nearly as efficient
computationally as its exchangeable counterpart. Our model is applicable to the
general setting in which the dependencies between objects can be expressed
using a tree, where edge lengths indicate the strength of relationships. We
demonstrate an application to modeling probabilistic choice.
|
In this paper, we propose a stochastic process, which is a Cox-Ingersoll-Ross
process with Hawkes jumps. It can be seen as a generalization of the classical
Cox-Ingersoll-Ross process and the classical Hawkes process with exponential
exciting function. Our model is a special case of the affine point processes.
Laplace transforms and limit theorems are obtained, including a law of
large numbers, central limit theorems and large deviations.
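To fix ideas, here is a minimal Euler-scheme simulation sketch of such a process. The discretization, the exponentially decaying intensity dynamics, the jump size and all parameter values are illustrative assumptions, not the paper's specification.

```python
import math, random

def simulate_cir_hawkes(x0=1.0, a=2.0, b=1.0, sigma=0.3,
                        lam0=0.5, alpha=0.8, beta=1.5, jump=0.2,
                        T=1.0, n=1000, seed=0):
    """Euler scheme for a CIR-type process with self-exciting jumps:
        dX_t = a(b - X_t) dt + sigma sqrt(X_t) dW_t + jump * dN_t,
    where N is a Hawkes process whose intensity decays exponentially
    toward lam0 and increases by alpha at each jump. Illustrative only."""
    rng = random.Random(seed)
    dt = T / n
    x, lam = x0, lam0
    path = [x]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + a * (b - x) * dt + sigma * math.sqrt(max(x, 0.0)) * dw
        # first-order approximation: a jump occurs with probability lam*dt
        if rng.random() < lam * dt:
            x += jump
            lam += alpha  # self-excitation: each jump raises the intensity
        lam += beta * (lam0 - lam) * dt  # intensity reverts toward lam0
        x = max(x, 0.0)  # keep the CIR state nonnegative
        path.append(x)
    return path
```

The clamp at zero mimics the nonnegativity of the exact CIR dynamics, which a plain Euler step does not preserve.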
|
Vehicular communication networks are rapidly emerging as vehicles become
smarter. However, these networks are increasingly susceptible to various
attacks. The situation is exacerbated by the rise of automated vehicles,
emphasizing the need for security and authentication measures to
ensure safe and effective traffic management. In this paper, we propose a novel
hybrid physical layer security (PLS)-machine learning (ML) authentication
scheme by exploiting the position of the transmitter vehicle as a device
fingerprint. We use a time-of-arrival (ToA) based localization mechanism where
the ToA is estimated at roadside units (RSUs), and the coordinates of the
transmitter vehicle are extracted at the base station (BS). Furthermore, to
track the mobility of the moving legitimate vehicle, we use an ML model trained
on several system parameters. We try two ML models for this purpose, i.e., support
vector regression and decision tree. To evaluate our scheme, we conduct binary
hypothesis testing on the estimated positions with the help of the ground
truths provided by the ML model, which classifies the transmitter node as
legitimate or malicious. Moreover, we consider the probability of false alarm
and the probability of missed detection as performance metrics resulting from
the binary hypothesis testing, and mean absolute error (MAE), mean square error
(MSE), and coefficient of determination $\text{R}^2$ to further evaluate the ML
models. We also compare our scheme with a baseline scheme that exploits the
angle of arrival at RSUs for authentication. We observe that our proposed
position-based mechanism outperforms the baseline scheme significantly in terms
of missed detections.
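The decision step of the scheme can be sketched as follows. The function names, the distance threshold, and the constant-velocity predictor (a stand-in for the trained SVR/decision-tree tracker) are hypothetical simplifications, not the paper's implementation.

```python
import math

def authenticate(estimated_pos, predicted_pos, threshold=5.0):
    """Binary hypothesis test sketch: decide H0 (legitimate) if the
    ToA-based position estimate lies within `threshold` metres of the
    ML-predicted ground truth, H1 (malicious) otherwise. In practice the
    threshold is tuned via the false-alarm/missed-detection trade-off."""
    d = math.dist(estimated_pos, predicted_pos)
    return "legitimate" if d <= threshold else "malicious"

def constant_velocity_predictor(track, dt=1.0):
    """Toy stand-in for the learned tracker: extrapolate the last observed
    velocity. `track` is a list of (x, y) positions of the vehicle."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (x1 + (x1 - x0) * dt, y1 + (y1 - y0) * dt)
```

For example, `authenticate((0.0, 0.0), constant_velocity_predictor([(0.0, 0.0), (1.0, 1.0)]))` flags a transmitter far from its predicted track as malicious.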
|
The mathematical theory of quantum feedback networks has recently been
developed by Gough and James \cite{QFN1} for general open quantum dynamical
systems interacting with bosonic input fields. In this article we show that
their feedback reduction formula for the coefficients of the closed-loop
quantum stochastic differential equation can be formulated in terms of Belavkin
matrices. We show that the reduction formula leads to a non-commutative Mobius
transformation based on Belavkin matrices, and establish a $\star$-unitary
version of the Siegel identities.
|
We discuss progress and prospects in the application of bootstrap methods to
string theory.
|
I review the observational prospects to constrain the equation of state
parameter of dark energy and I discuss the potential of future imaging and
redshift surveys.
Bayesian model selection is used to address the question of the level of
accuracy on the equation of state parameter that is required before
explanations alternative to a cosmological constant become very implausible. I
discuss results in the prediction space of dark energy models. If no
significant departure from w=-1 is detected, a precision on w of order 1% will
translate into strong evidence against fluid-like dark energy, while decisive
evidence will require a precision of order 10^-3.
|
These notes are an expanded version of two series of lectures given at the
winter school in mathematical physics at Les Houches and at the Vietnamese
Institute for Mathematical Sciences. They are an introduction to factorization
algebras, factorization homology and some of their applications, notably for
studying $E_n$-algebras. We give an account of homology theories for manifolds
(and spaces), which give invariants of manifolds but also invariants of
$E_n$-algebras. We particularly emphasize the point of view of factorization
algebras (a structure originating from quantum field theory) which plays, with
respect to homology theory for manifolds, the role of sheaves with respect to
singular cohomology. We mention some applications to the study of mapping
spaces and study several examples, including some over stratified spaces.
|
Since 1979, many new classes of superconductors have been discovered,
including heavy-fermion compounds, organic conductors, high-Tc cuprates, and
Sr2RuO4. Most of these superconductors are unconventional and/or nodal.
Therefore it is of central importance to determine the symmetry of the order
parameter in each of these superconductors. In particular, the
angular-controlled thermal conductivity in the vortex state provides a unique
means of investigating the nodal structure of the superconducting energy gap
when high-quality single crystals in the extremely clean limit are available.
Using this method, Izawa et al. have recently succeeded in identifying the
energy gap symmetry of superconductivity in Sr2RuO4, CeCoIn5,
kappa-(ET)2Cu(NCS)2, YNi2B2C, and PrOs4Sb12.
|
The COVID-19 pandemic in 2020 has caused sudden shocks in transportation
systems, specifically the subway ridership patterns in New York City.
Understanding the temporal pattern of subway ridership through statistical
models is crucial during such shocks. However, many existing statistical
frameworks may not be a good fit to analyze the ridership data sets during the
pandemic since some of the modeling assumptions might be violated during this
time. In this paper, utilizing change point detection procedures, we propose a
piece-wise stationary time series model to capture the nonstationary structure
of subway ridership. Specifically, the model consists of several independent
station based autoregressive integrated moving average (ARIMA) models
concatenated together at certain time points. Further, data-driven algorithms
are utilized to detect the changes of ridership patterns as well as to estimate
the model parameters before and during the COVID-19 pandemic. The data sets of
focus are daily ridership of subway stations in New York City for randomly
selected stations. Fitting the proposed model to these data sets enhances our
understanding of ridership changes during external shocks, both in terms of
mean (average) changes as well as the temporal correlations.
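As a toy illustration of the data-driven change-point step, the sketch below locates a single break by least squares on segment means; the actual analysis would use dedicated detection procedures and fit a separate ARIMA model to each resulting segment.

```python
def detect_change_point(series):
    """Estimate a single change point by minimising the within-segment sum
    of squared deviations from the segment means. A least-squares analogue
    of the data-driven procedures described above, for illustration only."""
    n = len(series)
    best_k, best_cost = None, float("inf")
    for k in range(2, n - 1):  # require at least two points per segment
        left, right = series[:k], series[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        cost = (sum((v - ml) ** 2 for v in left)
                + sum((v - mr) ** 2 for v in right))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k  # index where the second stationary regime begins
```

Applied to daily ridership, a sharp drop such as the March 2020 shock would appear as a pronounced minimum of the segmentation cost.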
|
Stochastic simulation algorithms (SSAs) are widely used to numerically
investigate the properties of stochastic, discrete-state models. The Gillespie
Direct Method is the pre-eminent SSA, and is widely used to generate sample
paths of so-called agent-based or individual-based models. However, the
simplicity of the Gillespie Direct Method often renders it impractical where
large-scale models are to be analysed in detail. In this work, we carefully
modify the Gillespie Direct Method so that it uses a customised binary decision
tree to trace out sample paths of the model of interest. We show that a
decision tree can be constructed to exploit the specific features of the chosen
model. Specifically, the events that underpin the model are placed in
carefully-chosen leaves of the decision tree in order to minimise the work
required to keep the tree up-to-date. The computational efficiencies that we
realise can provide the apparatus necessary for the investigation of
large-scale, discrete-state models that would otherwise be intractable. Two
case studies are presented to demonstrate the efficiency of the method.
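A standard way to realise such a decision tree is a binary sum-tree over the reaction propensities, so that both sampling the next event and updating a propensity cost O(log M) instead of the Direct Method's O(M) linear scan. The sketch below is a generic balanced version; the paper's contribution is to place events in carefully chosen, model-specific leaves.

```python
class PropensityTree:
    """Binary sum-tree over M reaction propensities (illustrative sketch)."""

    def __init__(self, props):
        self.m = len(props)
        self.size = 1
        while self.size < self.m:
            self.size *= 2
        self.tree = [0.0] * (2 * self.size)
        for i, p in enumerate(props):
            self.tree[self.size + i] = p
        for i in range(self.size - 1, 0, -1):  # fill internal sums bottom-up
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def total(self):
        return self.tree[1]  # root holds the total propensity

    def update(self, i, p):
        """Set propensity i to p, refreshing O(log M) ancestors."""
        i += self.size
        self.tree[i] = p
        i //= 2
        while i:
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sample(self, u):
        """Return the reaction index selected by u, uniform in [0, total)."""
        i = 1
        while i < self.size:
            left = self.tree[2 * i]
            if u < left:
                i = 2 * i
            else:
                u -= left
                i = 2 * i + 1
        return i - self.size
```

In a full SSA loop one would draw the waiting time as `tau = -log(u1) / tree.total()` and then call `tree.sample(u2 * tree.total())` to pick the next reaction.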
|
We report on the discovery of Swift J010902.6-723710, a rare eclipsing
Be/X-ray Binary system by the Swift SMC Survey (S-CUBED). Swift
J010902.6-723710 was discovered via weekly S-CUBED monitoring observations when
it was observed to enter a state of X-ray outburst on 10 October 2023. X-ray
emission was found to be modulated by a 182 s period. Optical spectroscopy is
used to confirm the presence of a highly-inclined circumstellar disk
surrounding a B0-0.5Ve optical companion. Historical UV and IR photometry are
then used to identify strong eclipse-like features re-occurring in both light
curves with a 60.623 day period, which is adopted as the orbital period of the
system. Eclipsing behavior is found to be the result of a large accretion disk
surrounding the neutron star. Eclipses are produced when the disk passes in
front of the OBe companion, blocking light from both the stellar surface and
circumstellar disk. This is only the third Be/X-ray Binary to have confirmed
eclipses. We note that this rare behavior provides an important opportunity to
constrain the physical parameters of a Be/X-ray Binary with greater accuracy
than is possible in non-eclipsing systems.
|
We study the dynamics of an array of nearest-neighbor coupled spatially
distributed systems each generating a periodic sequence of short pulses. We
demonstrate that unlike a solitary system generating a train of equidistant
pulses, an array of such systems can produce a sequence of clusters of closely
packed pulses, with the distance between individual pulses depending on the
coupling phase. This regime associated with the formation of locally coupled
pulse trains bounded due to a balance of attraction and repulsion between them
is different from the pulse bound states reported earlier in different laser,
plasma, chemical, and biological systems. We propose a simplified analytical
description of the observed phenomenon, which is in good agreement with the
results of direct numerical simulations of a model system describing an array
of coupled mode-locked lasers.
|
The vast parallelism, exceptional energy efficiency and extraordinary
information density inherent in DNA molecules are being explored for computing,
data storage and cryptography. DNA cryptography is an emerging field of
cryptography. In this paper, a novel encryption algorithm is devised based on
number conversion, DNA digital coding and PCR amplification, which can
effectively resist attack. Data treatment is used to transform the plain text
into cipher text, providing excellent security.
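The DNA digital coding layer mentioned above is commonly realised as a two-bits-per-nucleotide code. The sketch below shows only this binary-to-nucleotide layer, with the code table as an assumption, and omits the number-conversion and PCR-amplification steps of the proposed algorithm.

```python
# One common DNA digital code: two bits per nucleotide (A, C, G, T).
ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
DEC = {v: k for k, v in ENC.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide string, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    """Invert to_dna: recover the original bytes from a strand."""
    bits = "".join(DEC[c] for c in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

The code table itself is public; in a DNA cryptosystem the secrecy comes from the surrounding steps (e.g. secret primers for PCR amplification), not from this mapping.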
|
We investigate the possibility of a dark matter candidate emerging from a
minimal walking technicolor theory. In this case techniquarks as well as
technigluons transform under the adjoint representation of SU(2) of
technicolor. It is therefore possible to have technicolor neutral bound states
between a techniquark and a technigluon. We investigate this scenario by
assuming that such a particle can have a Majorana mass and we calculate the
relic density. We identify the parameter space where such an object can account
for the full dark matter density avoiding constraints imposed by the CDMS and
the LEP experiments.
|
Based on the results of Chen & Li (2009) and Pakmor et al. (2010), we carried
out a series of binary population synthesis calculations and considered two
treatment of common envelope (CE) evolution, i.e. $\alpha$-formalism and
$\gamma$-algorithm. We found that the evolution of birth rate of these peculiar
SNe Ia is heavily dependent on how to treat the CE evolution. The over-luminous
SNe Ia may only occur for $\alpha$-formalism with low CE ejection efficiency
and the delay time of the SNe Ia is between 0.4 and 0.8 Gyr. The upper limit of
the contribution rate of these supernovae to all SNe Ia is less than 0.3%. The
delay time of sub-luminous SNe Ia from equal-mass DD systems is between 0.1 and
0.3 Gyr for $\alpha$-formalism with $\alpha=3.0$, while longer than 9 Gyr for
$\alpha=1.0$. The range of the delay time for $\gamma$-algorithm is very wide,
i.e. longer than 0.22 Gyr, even as long as 15 Gyr. The sub-luminous SNe Ia from
equal-mass DD systems may only account for no more than 1% of all SNe Ia
observed. The super-Chandrasekhar mass model of Chen & Li (2009) may account
for a part of 2003fg-like supernovae and the equal-mass DD model developed by
Pakmor et al. (2010) may explain some 1991bg-like events, too. In addition,
based on the comparison between theories and observations, including the birth
rate and delay time of the 1991bg-like events, we found that the
$\gamma$-algorithm is more likely to be an appropriate prescription of the CE
evolution of DD systems than the $\alpha$-formalism if equal-mass DD
systems are the progenitors of 1991bg-like SNe Ia.
|
In this paper we define a set of numerical criteria for a handlebody link to
be irreducible. It provides an effective, easy-to-implement method to determine
the irreducibility of handlebody links; particularly, it recognizes the
irreducibility of all handlebody knots in the Ishii-Kishimoto-Moriuchi-Suzuki
knot table and most handlebody links in the Bellettini-Paolini-Paolini-Wang
link table.
|
The central object of the quantum algebraic approach to the study of quantum
integrable models is the universal $R$-matrix, which is an element of a
completed tensor product of two copies of quantum algebra. Various
integrability objects are constructed by choosing representations for the
factors of this tensor product. There are two approaches to constructing
explicit expressions for the universal $R$-matrix. One is based on the quantum
double construction, and the other is based on the concept of the
$q$-commutator. In the case of a quantum superalgebra, we cannot use the first
approach, since we do not know an explicit expression for the Lusztig
automorphisms. One can use the second approach, but it requires some
modifications related to the presence of isotropic roots. In this article, we
provide the necessary modification of the method and use it to find an
$R$-operator for quantum integrable systems related to the quantum superalgebra
$\mathrm U_q(\mathcal{L}(\mathfrak{sl}_{M | N}))$.
|
Most existing neural methods for multi-objective combinatorial
optimization (MOCO) problems solely rely on decomposition, which often leads to
repetitive solutions for the respective subproblems, thus a limited Pareto set.
Beyond decomposition, we propose a novel neural heuristic with diversity
enhancement (NHDE) to produce more Pareto solutions from two perspectives. On
the one hand, to hinder duplicated solutions for different subproblems, we
propose an indicator-enhanced deep reinforcement learning method to guide the
model, and design a heterogeneous graph attention mechanism to capture the
relations between the instance graph and the Pareto front graph. On the other
hand, to excavate more solutions in the neighborhood of each subproblem, we
present a multiple Pareto optima strategy to sample and preserve desirable
solutions. Experimental results on classic MOCO problems show that our NHDE is
able to generate a Pareto front with higher diversity, thereby achieving
superior overall performance. Moreover, our NHDE is generic and can be applied
to different neural methods for MOCO.
|
Organized by Working Group 6 "Computational Dosimetry" of the European
Radiation Dosimetry Group (EURADOS), a group of intercomparison exercises was
conducted in which participants were asked to solve predefined problems in
computational dosimetry. The results of these comparisons were published in a
series of articles in this virtual special issue of Radiation Measurements.
This paper reviews the experience gained from the various exercises and
highlights the resulting conclusions for future exercises, as well as regarding
the state of the art and the need for development in terms of quality assurance
for computational dosimetry techniques.
|
The galactic center excess of gamma ray photons can be naturally explained by
light Majorana fermions in combination with a pseudoscalar mediator. The NMSSM
provides exactly these ingredients. We show that for neutralinos with a
significant singlino component the galactic center excess can be linked to
invisible decays of the Standard-Model-like Higgs at the LHC. We find
predictions for invisible Higgs branching ratios in excess of 50 percent,
easily accessible at the LHC. Constraining the NMSSM through GUT-scale boundary
conditions only slightly affects this expectation. Our results complement
earlier NMSSM studies of the galactic center excess, which link it to heavy
Higgs searches at the LHC.
|
We present a new program synthesis approach that combines an encoder-decoder
based synthesis architecture with a differentiable program fixer. Our approach
is inspired by the fact that human developers seldom get their program
correct on the first attempt, and perform iterative testing-based program
fixing to get to the desired program functionality. Similarly, our approach
first learns a distribution over programs conditioned on an encoding of a set
of input-output examples, and then iteratively performs fix operations using
the differentiable fixer. The fixer takes as input the original examples and
the current program's outputs on example inputs, and generates a new
distribution over the programs with the goal of reducing the discrepancies
between the current program outputs and the desired example outputs. We train
our architecture end-to-end on the RobustFill domain, and show that the
addition of the fixer module leads to a significant improvement on synthesis
accuracy compared to using beam search.
|
In Karatzas and Kardaras's paper on semimartingale financial models, it is
proved that the NUPBR condition is a property of the local characteristic of
the asset process alone. In Takaoka's paper on NUPBR, it is proved that the
NUPBR condition is equivalent to the existence of a sigma-martingale deflator.
However, Takaoka's paper bases its proof on Delbaen and Schachermayer's
fundamental theorem of asset pricing, i.e. the NFLVR condition, which is not a
pure property of the local characteristic of the asset process. In this paper
we give an alternative proof of the result of Takaoka, which makes use only of
the properties of the local characteristic of the asset process.
|
Stochastic averaging for a class of stochastic differential equations (SDEs)
with fractional Brownian motion, of the Hurst parameter H in the interval (1/2,
1), is investigated. An averaged SDE for the original SDE is proposed, and
their solutions are quantitatively compared. It is shown that the solution of
the averaged SDE converges to that of the original SDE in the sense of mean
square and also in probability. It is further demonstrated that a similar
averaging principle holds for SDEs under stochastic integral of pathwise
backward and forward types. Two examples are presented and numerical
simulations are carried out to illustrate the averaging principle.
|
Stringy canonical forms are a class of integrals that provide
$\alpha'$-deformations of the canonical form of any polytopes. For generalized
associahedra of finite-type cluster algebra, there exist completely rigid
stringy integrals, whose configuration spaces are the so-called binary
geometries, and for classical types are associated with (generalized)
scattering of particles and strings. In this paper we propose a large class of
rigid stringy canonical forms for another class of polytopes, generalized
permutohedra, which also include associahedra and cyclohedra as special cases
(type $A_n$ and $B_n$ generalized associahedra). Remarkably, we find that the
configuration spaces of such integrals are also binary geometries, which were
suspected to exist for generalized associahedra only. For any generalized
permutohedron that can be written as Minkowski sum of coordinate simplices, we
show that its rigid stringy integral factorizes into products of lower
integrals for massless poles at finite $\alpha'$, and the configuration space
is binary although the $u$ equations take a more general form than those
"perfect" ones for cluster cases. Moreover, we provide an infinite class of
examples obtained by degenerations of type $A_n$ and $B_n$ integrals, which
have perfect $u$ equations as well. Our results provide yet another family of
generalizations of the usual string integral and moduli space, whose physical
interpretations remain to be explored.
|
A common technique for producing a new model category structure is to lift
the fibrations and weak equivalences of an existing model structure along a
right adjoint. Formally dual but technically much harder is to lift the
cofibrations and weak equivalences along a left adjoint. For either technique
to define a valid model category, there is a well-known necessary "acyclicity"
condition. We show that for a broad class of "accessible model structures" - a
generalization introduced here of the well-known combinatorial model structures
- this necessary condition is also sufficient in both the right-induced and
left-induced contexts, and the resulting model category is again accessible. We
develop new and old techniques for proving the acyclicity condition and apply
these observations to construct several new model structures, in particular on
categories of differential graded bialgebras, of differential graded comodule
algebras, and of comodules over corings in both the differential graded and the
spectral setting. We observe moreover that (generalized) Reedy model category
structures can also be understood as model categories of "bialgebras" in the
sense considered here.
|
Synergies between advanced communications, computing and artificial
intelligence are unraveling new directions of coordinated operation and
resiliency in microgrids. On one hand, coordination among sources is
facilitated by distributed, privacy-minded processing at multiple locations,
whereas on the other hand, it also creates exogenous data arrival paths for
adversaries that can lead to cyber-physical attacks amongst other reliability
issues in the communication layer. This long-standing problem necessitates new
intrinsic ways of exchanging information between converters through power lines
to optimize the system's control performance. Going beyond the existing power
and data co-transfer technologies that are limited by efficiency and
scalability concerns, this paper proposes neuromorphic learning to implant
communicative features using spiking neural networks (SNNs) at each node, which
is trained collaboratively in an online manner simply using the power exchanges
between the nodes. As opposed to the conventional neuromorphic sensors that
operate with spiking signals, we employ an event-driven selective process to
collect sparse data for training of SNNs. Finally, its multi-fold effectiveness
and reliable performance are validated under simulation conditions with
different microgrid topologies and components to establish a new direction in
the sense-actuate-compute cycle for power electronic dominated grids and
microgrids.
|
In natural resource management, decision-makers often aim at maintaining the
state of the system within a desirable set for all times. For instance,
fisheries management procedures include keeping the spawning stock biomass over
a critical threshold. Another example is given by the peak control of an
epidemic outbreak that encompasses maintaining the number of infected
individuals below medical treatment capacities. In mathematical terms, one
controls a dynamical system. Then, keeping the state of the system within a
desirable set for all times is possible when the initial state belongs to the
so-called viability kernel. We introduce the notion of conic quasimonotonicity
reducibility. With this property, we provide a comparison theorem by inclusion
between two viability kernels, corresponding to two control systems in the
infinite horizon case. We also derive conditions for equality. We illustrate
the method with a model for the biocontrol of a vector-transmitted epidemic.
|
After launch, the Advanced CCD Imaging Spectrometer (ACIS), a focal plane
instrument on the Chandra X-ray Observatory, suffered radiation damage from
exposure to soft protons during passages through the Earth's radiation belts.
An effect of the damage was to increase the charge transfer inefficiency (CTI)
of the front illuminated CCDs. As part of the initial damage assessment, the
focal plane was warmed from the operating temperature of -100C to +30C which
unexpectedly further increased the CTI. We report results of ACIS CCD
irradiation experiments in the lab aimed at better understanding this reverse
annealing process. Six CCDs were irradiated cold by protons ranging in energy
from 100 keV to 400 keV, and then subjected to simulated bakeouts in one of
three annealing cycles. We present results of these lab experiments, compare
them to our previous experiences on the ground and in flight, and derive limits
on the annealing time constants.
|
A reinforcement learning (RL) based method that enables the robot to
accomplish the assembly-type task with safety regulations is proposed. The
overall strategy consists of grasping and assembly, and this paper mainly
considers the assembly strategy. Force feedback is used instead of visual
feedback to perceive the shape and direction of the hole in this paper.
Furthermore, since the emergency stop is triggered when the force output is too
large, a force-based dynamic safety lock (DSL) is proposed to limit the
pressing force of the robot. Finally, we train and test the robot model with a
simulator and conduct ablation experiments to illustrate the effectiveness of our
method. The models are independently tested 500 times in the simulator, and we
get an 88.57% success rate with a 4mm gap. These models are transferred to the
real world and deployed on a real robot. We conducted independent tests and
obtained a 79.63% success rate with a 4mm gap. Simulation environments:
https://github.com/0707yiliu/peg-in-hole-with-RL.
|
A recent generalization of Gerstenhaber's theorem on spaces of nilpotent
matrices is shown to yield a new proof of the classification of linear
subspaces of diagonalizable real matrices with the maximal dimension.
|
Relativistic plasma jets are observed in many accreting black holes.
According to theory, coiled magnetic fields close to the black hole accelerate
and collimate the plasma, leading to a jet being launched. Isolating emission
from this acceleration and collimation zone is key to measuring its size and
understanding jet formation physics. But this is challenging because emission
from the jet base cannot be easily disentangled from other accreting
components. Here, we show that rapid optical flux variations from a Galactic
black-hole binary are delayed with respect to X-rays radiated from close to the
black hole by ~0.1 seconds, and that this delayed signal appears together with
a brightening radio jet. The origin of these sub-second optical variations has
hitherto been controversial. Not only does our work strongly support a jet
origin for the optical variations, it also sets a characteristic elevation of
<~10$^3$ Schwarzschild radii for the main inner optical emission zone above the
black hole, constraining both internal shock and magnetohydrodynamic models.
Similarities with blazars suggest that jet structure and launching physics
could potentially be unified under mass-invariant models. Two of the
best-studied jetted black hole binaries show very similar optical lags, so this
size scale may be a defining feature of such systems.
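A sub-second lag of this kind is typically estimated by locating the peak of a cross-correlation function between the two light curves. The sketch below is a bare-bones discrete estimator, not the timing analysis used in the study.

```python
def cross_correlation_lag(x, y, dt, max_lag):
    """Estimate the delay of series y relative to x (sampled at step dt)
    by maximising the discrete cross-correlation over lags up to max_lag.
    Illustrative only; real X-ray/optical timing studies use interpolated
    cross-correlation functions with uncertainty estimates."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    best_lag, best_c = 0, float("-inf")
    kmax = round(max_lag / dt)
    for k in range(-kmax, kmax + 1):
        # correlate x[i] against y[i + k] wherever both samples exist
        pairs = [(x[i] - mx) * (y[i + k] - my)
                 for i in range(n) if 0 <= i + k < n]
        c = sum(pairs) / len(pairs)
        if c > best_c:
            best_lag, best_c = k, c
    return best_lag * dt  # positive: y lags x
```

A positive returned lag corresponds to the optical band trailing the X-rays, as in the ~0.1 s delay reported above.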
|
In action domains where agents may have erroneous beliefs, reasoning about
the effects of actions involves reasoning about belief change. In this paper,
we use a transition system approach to reason about the evolution of an agents
beliefs as actions are executed. Some actions cause an agent to perform belief
revision while others cause an agent to perform belief update, but the
interaction between revision and update can be non-elementary. We present a set
of rationality properties describing the interaction between revision and
update, and we introduce a new class of belief change operators for reasoning
about alternating sequences of revisions and updates. Our belief change
operators can be characterized in terms of a natural shifting operation on
total pre-orderings over interpretations. We compare our approach with related
work on iterated belief change due to action, and we conclude with some
directions for future research.
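The contrast between revision and update can be illustrated with standard distance-based constructions (Dalal-style revision versus pointwise Katsuno-Mendelzon update) over finite propositional worlds; this is a textbook sketch, not the shifting-based operators characterized in the paper.

```python
def hamming(u, v):
    """Number of propositional variables on which two worlds disagree."""
    return sum(a != b for a, b in zip(u, v))

def revise(belief_set, obs_set):
    """Revision sketch: keep the observation worlds globally closest to
    the current belief set (the new information corrects beliefs about a
    static world)."""
    d = min(hamming(b, o) for b in belief_set for o in obs_set)
    return {o for o in obs_set
            if any(hamming(b, o) == d for b in belief_set)}

def update(belief_set, obs_set):
    """Update sketch: each belief world moves to its own closest
    observation worlds (the world itself has changed), pointwise rather
    than globally."""
    out = set()
    for b in belief_set:
        d = min(hamming(b, o) for o in obs_set)
        out |= {o for o in obs_set if hamming(b, o) == d}
    return out
```

On the same inputs the two operators can return different belief sets, which is exactly the interaction that makes alternating sequences of revisions and updates non-elementary.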
|
We propose a set-indexed family of capacities $\{\cap_G \}_{G \subseteq
\R_+}$ on the classical Wiener space $C(\R_+)$. This family interpolates
between the Wiener measure ($\cap_{\{0\}}$) on $C(\R_+)$ and the standard
capacity ($\cap_{\R_+}$) on Wiener space. We then apply our capacities to
characterize all quasi-sure lower functions in $C(\R_+)$. In order to do this
we derive the following capacity estimate which may be of independent interest:
There exists a constant $a > 1$ such that for all $r > 0$,
\[
\frac {1}{a} \K_G(r^6) e^{-\pi^2/(8r^2)} \le \cap_G \{f^* \le r\}
\le a \K_G(r^6) e^{-\pi^2/(8r^2)}.
\]
Here, $\K_G$ denotes the Kolmogorov $\epsilon$-entropy of $G$, and $f^* :=
\sup_{[0,1]}|f|$.
|
Information theoretic secret key agreement is impossible without making
initial assumptions. One type of initial assumption is correlated random
variables that are generated by using a noisy channel that connects the
terminals. Terminals use the correlated random variables and communication over
a reliable public channel to arrive at a shared secret key. Previous channel
models assume that each terminal either controls one input to the channel, or
receives one output variable of the channel. In this paper, we propose a new
channel model of transceivers where each terminal simultaneously controls an
input variable and observes an output variable of the (noisy) channel. We give
upper and lower bounds for the secret key capacity (i.e., highest achievable
key rate) of this transceiver model, and prove the secret key capacity under
the conditions that the public communication is noninteractive and the input
variables of the noisy channel are independent.
|
Energy preservation is one of the most important challenges in wireless
sensor networks. In most applications, sensor networks consist of hundreds or
thousands of nodes dispersed over a wide field. Hierarchical architectures
and data aggregation methods are increasingly gaining more popularity in such
large-scale networks. In this paper, we propose a novel adaptive
Energy-Efficient Multi-layered Architecture (EEMA) protocol for large-scale
sensor networks, wherein both hierarchical architecture and data aggregation
are efficiently utilized. EEMA divides the network into layers, and each layer
into clusters: data are gathered in the first layer and recursively aggregated
in upper layers until they reach the base station. Many
criteria are wisely employed to elect head nodes, including the residual
energy, centrality, and proximity to bottom-layer heads. The routing delay is
mathematically analyzed. Performance evaluation is performed via simulations
which confirms the effectiveness of the proposed EEMA protocol in terms of the
network lifetime and reduced routing delay.
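The head-election idea can be sketched as a toy scoring rule (the weights, field names and linear form below are hypothetical illustrations of the criteria named above, not EEMA's actual rule):

```python
def head_score(node, weights=(0.5, 0.3, 0.2)):
    """Hypothetical weighted score over the criteria named in the text:
    residual energy and centrality raise the score, distance to the
    nearest bottom-layer head lowers it. Weights are illustrative."""
    w_energy, w_central, w_prox = weights
    return (w_energy * node["residual_energy"]
            + w_central * node["centrality"]
            - w_prox * node["distance_to_bottom_head"])

def elect_head(cluster_nodes):
    """Elect the highest-scoring node of a cluster as its head."""
    return max(cluster_nodes, key=head_score)
```

In a real protocol the scores would be recomputed each round, so the head role rotates as residual energy drains.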
|
Whether the 3D incompressible Navier-Stokes equations can develop a finite
time singularity from smooth initial data is one of the most challenging
problems in nonlinear PDEs. In this paper, we present some new numerical
evidence that the incompressible axisymmetric Navier-Stokes equations with
smooth initial data of finite energy seem to develop potentially singular
behavior at the origin. This potentially singular behavior is induced by a
potential finite time singularity of the 3D Euler equations that we reported in
the companion paper (arXiv:2107.05870). We present numerical evidence that the
3D Navier--Stokes equations develop nearly self-similar singular scaling
properties with maximum vorticity increased by a factor of 10^7. We have
applied several blow-up criteria to study the potentially singular behavior of
the Navier--Stokes equations. The Beale-Kato-Majda blow-up criterion and the
blow-up criteria based on the growth of enstrophy and negative pressure seem to
imply that the Navier--Stokes equations using our initial data develop a
potential finite time singularity. We have also examined the
Ladyzhenskaya-Prodi-Serrin regularity criteria. Our numerical results for the
cases of (p,q) = (4,8), (6,4), (9,3) and (p,q)=(\infty,2) provide strong
evidence for the potentially singular behavior of the Navier--Stokes equations.
Our numerical study shows that while the global L^3 norm of the velocity grows
very slowly, the localized version of the L^3 norm of the velocity experiences
rapid dynamic growth relative to the localized L^3 norm of the initial
velocity. This provides further evidence for the potentially singular behavior
of the Navier--Stokes equations.
|
We study angular momentum radiation from electrically-biased chiral single
molecular junctions using the nonequilibrium Green's function method. Using
single helical chains as examples, we make connections between the ability of a
chiral molecule to emit photons with angular momentum to the geometrical
factors of the molecule. We point out that the mechanism studied here does not
involve the magnetic moment. Rather, it relies on inelastic transitions between
scattering states originated from two electrodes with different chiral
properties and chemical potentials. The required time-reversal symmetry
breaking is provided by nonequilibrium electron transport. Our work sheds light
on the relationship between geometrical and optoelectrical chiral properties at
the single-molecule limit.
|
A variant of continuous nonequilibrium thermodynamic theory based on the
postulate of the scale invariance of the local relation between generalized
fluxes and forces has been proposed. This single postulate replaces the
assumptions on local equilibrium and on the known relation between
thermodynamic fluxes and forces, which are widely used in classical
nonequilibrium thermodynamics. It has been shown that such a modification not
only makes it possible to deductively obtain the main results of classical
linear nonequilibrium thermodynamics, but also provides a number of statements
for a nonlinear case (maximum entropy production principle, macroscopic
reversibility principle, and generalized reciprocity relations) that are under
discussion in the literature.
|
The three-operator splitting algorithm is a popular operator splitting
method for finding the zeros of the sum of three maximally monotone operators,
one of which is cocoercive. In this paper, we propose a class of
inertial three-operator splitting algorithms. The convergence of the proposed
algorithm is proved by applying the inertial Krasnoselskii-Mann iteration under
certain conditions on the iterative parameters in real Hilbert spaces. As
applications, we develop an inertial three-operator splitting algorithm to
solve the convex minimization problem of the sum of three convex functions,
where one of them is differentiable with Lipschitz continuous gradient.
Finally, we conduct numerical experiments on a constrained image inpainting
problem with nuclear norm regularization. Numerical results demonstrate the
advantage of the proposed inertial three-operator splitting algorithms.
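The kind of iteration described here can be sketched as follows: a minimal Davis-Yin-type three-operator scheme with a fixed inertial extrapolation, applied to a toy problem (nonnegativity indicator + l1 penalty + smooth least squares). The step size, inertia parameter and test problem are assumptions for illustration, not the paper's exact algorithm or convergence conditions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_three_operator(A, b, lam=0.1, theta=0.3, iters=500):
    """Sketch of an inertial Davis-Yin-type splitting for
    min f(x) + g(x) + h(x), with f = indicator of {x >= 0},
    g = lam*||x||_1, and h = 0.5*||Ax - b||^2 (Lipschitz gradient)."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad h
    gamma = 1.0 / L                        # step size below 2/L
    z = np.zeros(n)
    z_prev = np.zeros(n)
    x_f = np.zeros(n)
    for _ in range(iters):
        w = z + theta * (z - z_prev)                       # inertial extrapolation
        x_g = soft_threshold(w, gamma * lam)               # prox of gamma*g
        grad_h = A.T @ (A @ x_g - b)
        x_f = np.maximum(2.0 * x_g - w - gamma * grad_h, 0.0)  # prox of gamma*f
        z_prev, z = z, w + (x_f - x_g)                     # Krasnoselskii-Mann update
    return x_f
```

Setting theta to zero recovers the plain (non-inertial) three-operator iteration.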
|
There exist a number of models in the literature in which the weak
interactions are derived from a chiral gauge theory based on a larger group
than SU(2)_L x U(1)_Y. Such theories can be constructed so as to be
anomaly-free and consistent with precision electroweak measurements, and may be
interpreted as a deconstruction of an extra dimension. They also provide
interesting insights into the issues of flavor and dynamical electroweak
symmetry breaking, and can help to raise the mass of the Higgs boson in
supersymmetric theories. In this work we show that these theories can also give
rise to baryon and lepton number violating processes, such as nucleon decay and
spectacular multijet events at colliders, via the instanton transitions
associated with the extended gauge group. For a particular model based on
SU(2)_1 x SU(2)_2, we find that the $B+L$ violating scattering cross sections
are too small to be observed at the LHC, but that the lower limit on the
lifetime of the proton implies an upper bound on the gauge couplings.
|
We perform a semiclassical calculation of the magnetoresistance of spinless
two-dimensional fermions in a long-range correlated random magnetic field. In
the regime relevant for the problem of the half filled Landau level the
perturbative Born approximation fails and we develop a new method of solving
the Boltzmann equation beyond the relaxation time approximation. In the absence of
interactions, electron density modulations, in-plane fields, and Fermi surface
anisotropy we obtain a quadratic negative magnetoresistance in the weak field
limit.
|
The Kuznetsov and Petersson trace formulae for $GL(2)$ forms may collectively
be derived from Poincar\'e series in the space of Maass forms with weight.
Having already developed the spherical spectral Kuznetsov formula for $GL(3)$,
the goal of this series of papers is to derive the spectral Kuznetsov formulae
for non-spherical Maass forms and use them to produce the corresponding Weyl
laws; this appears to be the first proof of the existence of such forms not
coming from the symmetric-square construction. Aside from general interest in
new types of automorphic forms, this is a necessary step in the development of
a theory of exponential sums on $GL(3)$. We take the opportunity to demonstrate
a sort of minimal method for developing Kuznetsov-type formulae, and produce
auxiliary results in the form of generalizations of Stade's formula and
Kontorovich-Lebedev inversion. This first paper is limited to the non-spherical
principal series forms, as there are some significant technical details
associated with the generalized principal series forms, which will be handled
in a separate paper. The best analogs of this type of form on $GL(2)$ are the
forms of weight one, which sometimes occur on congruence subgroups.
|
In this paper we introduce a class of stochastic population models based on
"patch dynamics". The size of the patch may be varied, and this allows one to
quantify the departures of these stochastic models from various mean field
theories, which are generally valid as the patch size becomes very large. These
models may be used to formulate a broad range of biological processes in both
spatial and non-spatial contexts. Here, we concentrate on two-species
competition. We present both a mathematical analysis of the patch model, in
which we derive the precise form of the competition mean field equations (and
their first order corrections in the non-spatial case), and simulation results.
These mean field equations differ, in some important ways, from those which are
normally written down on phenomenological grounds. Our general conclusion is
that mean field theory is more robust for spatial models than for a single
isolated patch. This is due to the dilution of stochastic effects in a spatial
setting resulting from repeated rescue events mediated by inter-patch
diffusion. However, discrete effects due to modest patch sizes lead to striking
deviations from mean field theory even in a spatial setting.
|
Fluid limit techniques have become a central tool to analyze queueing
networks over the last decade, with applications to performance analysis,
simulation and optimization. In this paper, some of these techniques are
extended to a general class of skip-free Markov chains. As in the case of
queueing models, a fluid approximation is obtained by scaling time, space and
the initial condition by a large constant. The resulting fluid limit is the
solution of an ordinary differential equation (ODE) in "most" of the state
space. Stability and finer ergodic properties for the stochastic model then
follow from stability of the set of fluid limits. Moreover, similarly to the
queueing context where fluid models are routinely used to design control
policies, the structure of the limiting ODE in this general setting provides an
understanding of the dynamics of the Markov chain. These results are
illustrated through application to Markov chain Monte Carlo methods.
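A concrete toy instance of the scaling described above, using the M/M/1 queue length (a standard skip-free example chosen here purely for illustration): scaling time, space and the initial condition by a large n makes the scaled path track the ODE solution q(t) = max(1 + (lambda - mu) t, 0).

```python
import numpy as np

def mm1_path(lam, mu, x0, t_max, rng):
    """Continuous-time sample path of an M/M/1 queue length (skip-free chain)."""
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_max:
        rate = lam + (mu if x > 0 else 0.0)   # total jump rate
        t += rng.exponential(1.0 / rate)
        x += 1 if rng.random() < lam / rate else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

def fluid_scaled(lam, mu, n, t_max, rng):
    """Scale time, space and the initial condition by n: q_n(t) = X(n t)/n, X(0) = n."""
    times, states = mm1_path(lam, mu, x0=n, t_max=n * t_max, rng=rng)
    return times / n, states / n

def fluid_limit(lam, mu, t):
    """ODE fluid limit: dq/dt = lam - mu while q > 0, absorbed at 0."""
    return max(1.0 + (lam - mu) * t, 0.0)
```

For moderate n the scaled path already hugs the straight-line ODE solution, which is the sense in which stability of the fluid limits transfers to the stochastic model.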
|
Within a BCS-type mean-field approach to the extended Hubbard model, a
nontrivial dependence of T_c on the hole content per unit CuO_2 is recovered,
in good agreement with the celebrated non-monotonic universal behaviour at
normal pressure. Evaluation of T_c at higher pressures is then made possible by
the introduction of an explicit dependence of the tight-binding band and of the
carrier concentration on pressure P. Comparison with the known experimental
data for underdoped Bi2212 allows one to single out an 'intrinsic' contribution
to d T_c / d P from that due to the carrier concentration, and provides a
remarkable estimate of the dependence of the inter-site coupling strength on
the lattice scale.
|
Image segmentation relies heavily on neural networks which are known to be
overconfident, especially when making predictions on out-of-distribution (OOD)
images. This is a common scenario in the medical domain due to variations in
equipment, acquisition sites, or image corruptions. This work addresses the
challenge of OOD detection by proposing Laplacian Segmentation Networks (LSN):
methods which jointly model epistemic (model) and aleatoric (data) uncertainty
for OOD detection. In doing so, we propose the first Laplace approximation of
the weight posterior that scales to large neural networks with skip connections
that have high-dimensional outputs. We demonstrate on three datasets that the
LSN-modeled parameter distributions, in combination with suitable uncertainty
measures, give superior OOD detection.
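For context, a Laplace approximation fits a Gaussian N(w_MAP, H^{-1}) around the MAP weights, with H the Hessian of the negative log posterior. The sketch below does this naively by finite differences on a tiny model; LSN's contribution is making such an approximation scale to large segmentation networks with skip connections, which this toy code does not attempt.

```python
import numpy as np

def laplace_approx(neg_log_post, w_map, eps=1e-4):
    """Toy Laplace approximation: return (mean, covariance) of the Gaussian
    N(w_map, H^{-1}), with H the Hessian of the negative log posterior at
    the MAP, estimated by forward finite differences."""
    n = w_map.size
    H = np.zeros((n, n))
    f0 = neg_log_post(w_map)
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            # mixed second difference for H[i, j]
            H[i, j] = (neg_log_post(w_map + e_i + e_j)
                       - neg_log_post(w_map + e_i)
                       - neg_log_post(w_map + e_j)
                       + f0) / eps ** 2
    return w_map, np.linalg.inv(H)
```

On a quadratic negative log posterior this recovers the exact posterior covariance; for real networks the Hessian must be approximated in a structured, scalable way.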
|
Existing text scaling methods often require a large corpus, struggle with
short texts, or require labeled data. We develop a text scaling method that
leverages the pattern recognition capabilities of generative large language
models (LLMs). Specifically, we propose concept-guided chain-of-thought
(CGCoT), which uses prompts designed to summarize ideas and identify target
parties in texts to generate concept-specific breakdowns, in many ways similar
to guidance for human coder content analysis. CGCoT effectively shifts pairwise
text comparisons from a reasoning problem to a pattern recognition problem. We
then pairwise compare concept-specific breakdowns using an LLM. We use the
results of these pairwise comparisons to estimate a scale using the
Bradley-Terry model. We use this approach to scale affective speech on Twitter.
Our measures correlate more strongly with human judgments than alternative
approaches like Wordfish. Besides a small set of pilot data to develop the
CGCoT prompts, our measures require no additional labeled data and produce
binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands
of human-labeled tweets. We demonstrate how combining substantive knowledge
with LLMs can create state-of-the-art measures of abstract concepts.
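The final scaling step can be illustrated in isolation. This sketches only a standard Bradley-Terry fit via the classical MM (Zermelo) iteration, not the CGCoT prompting pipeline; the win counts are made up, and every item is assumed to appear in at least one comparison.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """MM algorithm for Bradley-Terry strengths.
    wins[i, j] = number of times item i beat item j in pairwise comparisons.
    Returns log-strengths, usable as a one-dimensional scale."""
    n = wins.shape[0]
    p = np.ones(n)
    W = wins.sum(axis=1)                      # total wins of each item
    for _ in range(iters):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    n_ij = wins[i, j] + wins[j, i]
                    if n_ij:
                        denom[i] += n_ij / (p[i] + p[j])
        p = W / denom
        p /= p.sum()                          # fix the arbitrary scale
    return np.log(p)
```

In the approach above, the pairwise "wins" would come from LLM comparisons of concept-specific breakdowns rather than human judgments.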
|
The quasi-periodic oscillations (QPOs) in black hole (BH) systems of
different scales are interpreted based on the magnetic reconnection of the
large-scale magnetic fields generated by the toroidal electric currents flowing
in the inner region of accretion disk, where the current density is assumed to
be proportional to the mass density of the accreting plasma. The magnetic
connection (MC) is taken into account in resolving the dynamic equations of the
accretion disk, in which the MC between the inner and outer disk regions, the
MC between the plunging region and the disk, and the MC between the BH horizon
and the disk are involved. It turns out that the single QPO frequency of
several BH systems of different scales can be fitted by invoking the magnetic
reconnection due to the MC between the inner and outer regions of the disk,
where the BH binaries XTE J1859+226, XTE J1650-500 and GRS 1915+105 and the
massive BHs in NGC 5408 X-1 and RE J1034+396 are included. In addition, the
X-ray spectra corresponding to the QPOs are fitted for these sources based on
the typical disk-corona model.
|
We present a simple, closed form expression for the potential of an
axisymmetric disk of stars interacting through gravitational potentials of the
form $V(r)=-\beta /r+\gamma r/2$, the potential associated with fundamental
sources in the conformal invariant fourth order theory of gravity which has
recently been advanced by Mannheim and Kazanas as a candidate alternative to
the standard second order Einstein theory. Using the model we obtain a
reasonable fit to some representative galactic rotation curve data without the
need for any non-luminous or dark matter. Our study suggests that the observed
flatness of rotation curves might only be an intermediate phenomenon rather
than an asymptotic one.
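As a quick numerical illustration (a test-particle sketch under the quoted potential, not the paper's full closed-form disk solution): the circular speed follows from v^2/r = dV/dr.

```python
import numpy as np

def circular_velocity(r, beta, gamma):
    """Circular speed for V(r) = -beta/r + gamma*r/2:
    v^2 = r * dV/dr = beta/r + gamma*r/2.
    The linear gamma*r/2 term keeps v from falling off Keplerian-style."""
    r = np.asarray(r, dtype=float)
    return np.sqrt(beta / r + gamma * r / 2.0)
```

At large r the gamma term dominates and v grows like sqrt(gamma*r/2), so an apparently flat rotation curve appears as an intermediate regime between the Keplerian decline and the linear-potential rise, consistent with the closing remark above.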
|
We implement a bottom-up multiscale approach for the modeling of defect
localization in $C_{6n^2}H_{6n}$ islands, i.e. graphene quantum dots with a
hexagonal symmetry, by means of density functional and semiempirical
approaches. Using the ab initio calculations as a reference, we
recognize the theoretical framework under which semiempirical methods describe
adequately the electronic structure of the studied systems and thereon proceed
to the calculation of quantum transport within the non-equilibrium Green's
function formalism. The computational data reveal an impurity-like behavior of
vacancies in these clusters and evidence the role of parameterization even
within the same semiempirical context. In terms of conduction, failure to
capture the proper chemical aspects in the presence of generic local
alterations of the ideal atomic structure results in an improper description of
the transport features. As an example, we show wavefunction localization
phenomena induced by the presence of vacancies and discuss the importance of
their modeling for the conduction characteristics of the studied structures.
|
We review the existing set of optical/UV/IR observations of Supernova 1993J,
concentrating heavily on optical data because these are by far the most
plentiful. Some results from theoretical modeling of the observations are also
discussed. SN 1993J has provided the best observational evidence for the
transformation of a SN from one spectral type to another, thereby providing a
link between Type II and Type Ib supernovae (SNe). This has strengthened the
argument that SNe Ib (and, by extension, SNe Ic) are core-collapse events. SN
1993J has remained relatively bright for 10 years; its late-time emission comes
from the collision of supernova ejecta with circumstellar gas that was released
by the progenitor prior to the explosion. The circumstellar material shows
strong evidence of CNO processing.
|
Accessing the thermal transport properties of glasses is a major issue for
the design of production strategies of glass industry, as well as for the
plethora of applications and devices where glasses are employed. From the
computational standpoint, the chemical and morphological complexity of glasses
calls for atomistic simulations where the interatomic potentials are able to
capture the variety of local environments, composition, and (dis)order that
typically characterize glassy phases. Machine-learning potentials (MLPs) are
emerging as a valid alternative to computationally expensive ab initio
simulations, inevitably run on very small samples which cannot account for
disorder at different scales, as well as to empirical force fields, fast but
often reliable only in a narrow portion of the thermodynamic and composition
phase diagrams. In this article, we assess the use of MLPs to
compute the thermal conductivity of glasses, through a review of recent
theoretical and computational tools and a series of numerical applications on
vitreous silica and vitreous silicon, both pure and intercalated with lithium.
|
I survey the use of the Haag expansion as a technique to solve quantum field
theories. After an exposition of the asymptotic condition and the Haag
expansion, I report the results of applying the Haag expansion to several
quantum field theories, including galilean-invariant theories, matter at finite
temperature (using the BCS model of superconductivity as an illustrative
example), the Nambu--Jona-Lasinio model and the Schwinger model. I conclude
with the outlook for further development of this method.
|
A fermionic disordered one-dimensional wire in the presence of attractive
interactions is known to have two distinct phases: a localized and a
superconducting one depending on the strength of interaction and disorder. The
localized region may also exhibit a metallic behaviour if the system size is
shorter than the localization length. Here we show that the superconducting
phase has a distribution of the entanglement entropy and entanglement
spectrum distinct from the metallic regime. The entanglement entropy
distribution is strongly asymmetric with L\'evy alpha stable distribution
(compared to the Gaussian metallic distribution), and the entanglement level
spacing distribution is unitary (compared to orthogonal). Thus, entanglement
properties may reveal features that cannot be detected by other methods.
|
We report on the efficient design of quantum optimal control protocols to
manipulate the motional states of an atomic Bose-Einstein condensate (BEC) in a
one-dimensional optical lattice. Our protocols operate on the momentum comb
associated with the lattice. In contrast to previous works also dealing with
control in discrete and large Hilbert spaces, our control schemes allow us to
reach a wide variety of targets by varying a single parameter, the lattice
position. With this technique, we experimentally demonstrate a precise, robust
and versatile control: we optimize the transfer of the BEC to a single or
multiple quantized momentum states with full control on the relative phase
between the different momentum components. This also allows us to prepare the
BEC in a given eigenstate of the lattice band structure, or superposition
thereof.
|
We consider an important generalization of the Dicke model in which
multi-level atoms, instead of two-level atoms as in conventional Dicke model,
interact with a single photonic mode. We explore the phase diagram of a broad
class of atom-photon coupling schemes and show that, under this generalization,
the Dicke model can become multicritical. For a subclass of experimentally
realizable schemes, multicritical conditions of arbitrary order can be
expressed analytically in compact forms. We also calculate the atom-photon
entanglement entropy for both critical and non-critical cases. We find that the
order of the criticality strongly affects the critical entanglement entropy:
higher order yields stronger entanglement. Our work provides deep insight into
quantum phase transitions and multicriticality.
|
Realistic networks display heterogeneous transmission delays. We analyze here
the limits of large stochastic multi-populations networks with stochastic
coupling and random interconnection delays. We show that depending on the
nature of the delays distributions, a quenched or averaged propagation of chaos
takes place in these networks, and that the network equations converge towards
a delayed McKean-Vlasov equation with distributed delays. Our approach is
well suited to neuroscience applications. We instantiate in particular a
classical neuronal model, the Wilson and Cowan system, and show that the
obtained limit equations have Gaussian solutions whose mean and standard
deviation satisfy a closed set of coupled delay differential equations in which
the distribution of delays and the noise levels appear as parameters. This
allows us to uncover precisely the effects of noise, delays and coupling on the
dynamics of such heterogeneous networks, in particular their role in the
emergence of synchronized oscillations. We show in several examples that not
only the averaged delay, but also the dispersion, govern the dynamics of such
networks.
|
This study addresses the growing standardization of airline fleets,
highlighting that frequent passengers are increasingly likely to fly on the
same aircraft model. The objective is to analyze the fleet management of
airlines and the impact of a reduced variety of models on company operations.
The benefits of standardization, such as operational efficiency, and the risks,
such as vulnerability to specific model failures, are discussed. The work
reviews international scientific literature on the subject, identifying
consensus and disagreements that suggest areas for future research. It also
includes a study on the Brazilian market, examining how standardization affects
operational costs and profitability in terms of model, family, and aircraft
manufacturer. Furthermore, the relationship between fleet standardization and
the business model of the companies is investigated, concluding that the
advantages of standardization are not exclusive to low-cost companies but can
also be leveraged by other airlines.
|
This study examines the relationship between road infrastructure and crime
rate in rural India using a nationally representative survey. On the one hand,
building roads in villages may increase connectivity, boost employment, and
lead to better living standards, reducing criminal activities. On the other
hand, if the benefits of roads are non-uniformly distributed among villagers,
it may lead to higher inequality and possibly higher crime. We empirically test
the relationship using the two waves of the Indian Human Development Survey. We
use an instrumental variable estimation strategy and observe that building
roads in rural parts of India has reduced crime. The findings are robust to
relaxing the strict instrument exogeneity condition and using alternate
measures. On exploring the pathways, we find that improved street lighting,
better public bus services and higher employment are a few of the direct
potential channels through which road infrastructure impedes crime. We also
find a negative association between villages with roads and various types of
inequality measures, confirming the broad economic benefits of roads. Our study
also highlights that the negative impact of roads on crime is more pronounced
in states with weaker institutions and higher income inequality.
|
We provide a complete proof of an optimal version of the Marcinkiewicz
multiplier theorem.
|
In the past decades, short baseline neutrino oscillation studies around
experimental or commercial reactor cores have revealed two anomalies. The first
one is linked to the absolute flux and the second one to the spectral shape.
The first anomaly, called Reactor Antineutrino Anomaly (RAA), could be
explained by the introduction of a new oscillation of antineutrinos towards a
sterile state at the eV mass scale. The STEREO detector has been taking data since
the end of 2016 at 10~m from the core of the Institut Laue-Langevin research
reactor, Grenoble, France. The separation of its Target volume along the
neutrino propagation axis allows for measurements of the neutrino spectrum at
multiple baselines, providing a clear test of an oscillation at short baseline.
In this contribution, a special focus is put on the data analysis and the
neutrino extraction using the Pulse Shape Discrimination observable. The
results from 119 days of reactor turned on and 210 days of reactor turned off
are then reported. The resulting antineutrino rate is (365.7 $\pm$ 3.2)
$\bar{\nu}_e$/day. The test of a new oscillation towards a sterile neutrino is found to be
compatible with the non-oscillation hypothesis and the best fit of the RAA is
excluded at 99\% C.L.
|
Universal adversarial perturbations (UAPs), a.k.a. input-agnostic
perturbations, have been shown to exist and to fool cutting-edge deep
learning models on most data samples. Existing UAP methods mainly focus
on attacking image classification models. Nevertheless, little attention has
been paid to attacking image retrieval systems. In this paper, we make the
first attempt at attacking image retrieval systems. Concretely, an image
retrieval attack aims to make the retrieval system return irrelevant images to
the query at the top of the ranking list. Corrupting the neighbourhood
relationships among features plays a key role in such an attack. To this end, we
propose a novel method to generate UAPs against retrieval that break the
neighbourhood relationships of image features by degrading the corresponding
ranking metric.
To expand the attack method to scenarios with varying input sizes or
untouchable network parameters, a multi-scale random resizing scheme and a
ranking distillation strategy are proposed. We evaluate the proposed method on
four widely-used image retrieval datasets, and report a significant performance
drop in terms of different metrics, such as mAP and mP@10. Finally, we test our
attack methods on the real-world visual search engine, i.e., Google Images,
which demonstrates the practical potential of our methods.
|
It is pointed out that if the molecular interpretation of the recently
observed resonance X(3872) is valid, then nature may have prepared a good
laboratory for us to examine the phenomenon of superradiance and subradiance of
Dicke. The superradiance and subradiance factors are evaluated and the effects
on the electromagnetic radiative decay of X(3872) are discussed. Our results
obtained in the coordinate-space representation are similar to the momentum-space
results of Voloshin.
|
We consider matrices with entries in a local ring, Mat(m,n,R). Fix a group
action, G on Mat(m,n,R), and a subset of allowed deformations, \Sigma\subseteq
Mat(m,n,R). The standard question of Singularity Theory is the
finite-(\Sigma,G)-determinacy of matrices. Finite determinacy implies
algebraizability and is equivalent to a stronger notion: stable
algebraizability.
In our previous work this determinacy question was reduced to the study of
the tangent spaces to \Sigma and to the orbit, T_{(\Sigma,A)}, T_{(GA,A)} , and
their quotient, the tangent module to the miniversal deformation. In
particular, the order of determinacy is controlled by the annihilator of this
tangent module.
In this work we study this tangent module for the group action GL(m,R)\times
GL(n,R) on Mat(m,n,R) and various natural subgroups of it. We obtain
ready-to-use criteria of determinacy for deformations of (embedded) modules,
(skew-)symmetric forms, filtered modules, filtered morphisms of filtered
modules, chains of modules etc.
|
The initial mass function (IMF), binary fraction and distributions of binary
parameters (mass ratios, separations and eccentricities) are indispensable
input for simulations of stellar populations. It is often claimed that these
are poorly constrained, significantly affecting evolutionary predictions.
Recently, dedicated observing campaigns provided new constraints on the initial
conditions for massive stars. Findings include a larger close binary fraction
and a stronger preference for very tight systems. We investigate the impact on
the predicted merger rates of neutron stars and black holes.
Despite the changes relative to previous assumptions, we only find an increase of
less than a factor 2 (insignificant compared with evolutionary uncertainties of
typically a factor 10-100). We further show that the uncertainties in the new
initial binary properties do not significantly affect (within a factor of 2)
our predictions of double compact object merger rates. An exception is the
uncertainty in the IMF (variations by a factor of 6 up and down). No significant
changes in the distributions of final component masses, mass ratios, chirp
masses and delay times are found.
We conclude that the predictions are, for practical purposes, robust against
uncertainties in the initial conditions concerning binary parameters, with the
exception of the IMF. This eliminates an important layer of the many uncertain
assumptions affecting the predictions of merger detection rates with the
gravitational wave detectors aLIGO/aVirgo.
|
The motivations for studying dynamical scenarios of electroweak and flavor
symmetry breaking are reviewed and the latest ideas, especially
topcolor-assisted technicolor, are summarized. Several technicolor signatures
at the Tevatron and Large Hadron Collider are described and it is emphasized
that all of them are well within the reach of these colliders. (This is the
written version of a plenary talk of this title at the 28th International
Conference on High Energy Physics, Warsaw (1996).)
|
Recently, a surge of high-quality 3D-aware GANs has been proposed, which
leverage the generative power of neural rendering. It is natural to associate
3D GANs with GAN inversion methods to project a real image into the generator's
latent space, allowing free-view consistent synthesis and editing, referred to as
3D GAN inversion. Although with the facial prior preserved in pre-trained 3D
GANs, reconstructing a 3D portrait with only one monocular image is still an
ill-posed problem. The straightforward application of 2D GAN inversion methods
focuses on texture similarity only while ignoring the correctness of 3D
geometry shapes. It may cause geometry collapse, especially when
reconstructing a side face under an extreme pose. Besides, the synthetic
results in novel views are prone to be blurry. In this work, we propose a novel
method to promote 3D GAN inversion by introducing facial symmetry prior. We
design a pipeline and constraints to make full use of the pseudo auxiliary view
obtained via image flipping, which helps obtain a robust and reasonable
geometry shape during the inversion process. To enhance texture fidelity in
unobserved viewpoints, pseudo labels from depth-guided 3D warping can provide
extra supervision. We design constraints aimed at filtering out conflict areas
for optimization in asymmetric situations. Comprehensive quantitative and
qualitative evaluations on image reconstruction and editing demonstrate the
superiority of our method.
|