We give the exact critical frontier of the Potts model on bowtie lattices.
For the case of $q=1$, the critical frontier yields the thresholds of bond
percolation on these lattices, which are exactly consistent with the results
given by Ziff et al [J. Phys. A 39, 15083 (2006)]. For the $q=2$ Potts model on
the bowtie-A lattice, the critical point is in agreement with that of the Ising
model on this lattice, which has been exactly solved. Furthermore, we perform
extensive Monte Carlo simulations of the Potts model on the bowtie-A lattice with
noninteger $q$. Our numerical results, which are accurate up to 7 significant
digits, are consistent with the theoretical predictions. We also simulate the
site percolation on the bowtie-A lattice, and the threshold is
$s_c=0.5479148(7)$. In the simulations of bond percolation and site
percolation, we find that the shape-dependent properties of the percolation
model on the bowtie-A lattice are somewhat different from those of an isotropic
lattice, which may be caused by the anisotropy of the lattice.
|
Starting from the modified Maxwell equations in Carroll-Field-Jackiw
electrodynamics we study the electromagnetic radiation in a chiral medium
characterized by an axion coupling $\theta(x)=b_\mu x^\mu$, with $b_\mu=
(0,\mathbf{b})$, which gives rise to the magnetoelectric effect. Employing the
stationary phase approximation we construct the Green's matrix in the radiation
zone which allows the calculation of the corresponding electromagnetic
potentials and fields for arbitrary sources. We obtain a general expression for
the angular distribution of the radiated energy per unit frequency. As an
application we consider a charge moving at constant velocity parallel to
$\mathbf{b}$ in the medium and discuss the resulting Cherenkov radiation. We
recover the vacuum Cherenkov radiation. For the case of a material with
refractive index $n > 1$ we find that zero, one or two Cherenkov cones can
appear. The spectral distribution of the radiation together with the comparison
of the radiation output of each cone are presented, as well as some angular
plots showing the appearance of the cones.
|
The X-ray signal from hydrogen-rich supernovae (SNe II) in the first tens to
hundreds of days after the shock breakout encodes important information about
the circumstellar material (CSM) surrounding their progenitors before
explosion. In this study, we describe a way to generate the SN II X-ray light
curves from hydrodynamical simulations performed with the code Athena++, using
the X-ray package XSPEC. In addition, we employ the radiation-diffusion
hydrodynamic code SNEC for generating the optical light curves in different
bands. In this numerical setup, we model the X-ray and optical emission from a
set of progenitor models, consisting of either two (red supergiant + low
density steady wind), or three (red supergiant + dense CSM + low density steady
wind) components. We vary the density in the wind and the slope in the CSM to
see how these parameters influence the resulting X-ray and optical light
curves. Among our models, we identify one that is able to roughly reproduce
both optical and X-ray data of the well observed SN 2013ej. In order to achieve
this, the slope of the dense CSM in this model should be steeper than the one
of a steady wind ($\rho\propto r^{-2}$), and closer to $\rho\propto r^{-5}$. On
the other hand, we show that too steep and extended CSM profiles may produce
excessive X-ray emission in the first few tens of days, up to a few orders of
magnitude larger than observed. We conclude that the ability to reproduce the
observed X-ray signal from SNe~II together with their optical light curves is
crucial in establishing the validity of different CSM models.
|
We have studied the Globular Cluster System of the merger galaxy NGC 1316 in
Fornax, using CCD BVI photometry. A clear bimodality is not detected from the
broadband colours. However, dividing the sample into red (presumably metal-
rich) and blue (metal-poor) subpopulations at B-I=1.75, we find that they
follow strikingly different angular distributions. The red clusters show a
strong correlation with the galaxy elongation, but the blue ones are circularly
distributed. No systematic difference is seen in their radial profile and both
are equally concentrated.
We derive an astonishingly low Specific Frequency for NGC 1316 of only
Sn=0.9, which confirms with a larger field a previous finding by Grillmair et
al. (1999). Assuming a "normal" Sn of ~4 for early-type galaxies, we use
stellar population synthesis models to estimate the age of this galaxy to be
2 Gyr, if an intermediate-age population is to explain the low Sn we
observe. This value agrees with the luminosity-weighted mean age of NGC 1316
derived by Kuntschner & Davies (1998) and Mackie & Fabbiano (1998).
By fitting t5 functions to the Globular Cluster Luminosity Function (GCLF),
we derived the following turnover magnitudes: B=24.69 +/- 0.15, V=23.87 +/-
0.20 and I=22.72 +/- 0.14. They confirm that NGC 1316, in spite of its outlying
location, is at the same distance as the core of the Fornax cluster.
|
Principal Component Analysis (PCA) is the most widely used tool for linear
dimensionality reduction and clustering. Still, it is highly sensitive to
outliers and does not scale well with respect to the number of data samples.
Robust PCA solves the first issue with a sparse penalty term. The second issue
can be handled with the matrix factorization model, which is however
non-convex. Besides, PCA-based clustering can also be enhanced by using a graph
of data similarity. In this article, we introduce a new model called "Robust
PCA on Graphs" which incorporates spectral graph regularization into the Robust
PCA framework. Our proposed model benefits from 1) the robustness of principal
components to occlusions and missing values, 2) enhanced low-rank recovery, 3)
improved clustering property due to the graph smoothness assumption on the
low-rank matrix, and 4) convexity of the resulting optimization problem.
Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with
corruptions clearly reveal that our model outperforms 10 other state-of-the-art
models in its clustering and low-rank recovery tasks.
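For orientation, the kind of objective such a model combines (in our schematic
notation; the exact weighting and graph term may differ from the paper) is
$$ \min_{L,\,S}\; \|L\|_{*} + \lambda\,\|S\|_{1} + \gamma\,\operatorname{tr}\!\left(L\,\Phi\,L^{\top}\right) \quad \text{subject to} \quad X = L + S, $$
where $X$ is the data matrix, $L$ the low-rank component, $S$ the sparse
outlier component, and $\Phi$ the Laplacian of the data-similarity graph; every
term is convex, which is what makes the overall problem convex.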
|
This note gives a correction to the proof of the main result of "Harmonic
representatives for cuspidal cohomology classes" by J. Dodziuk, J. McGowan and
Peter Perry, an article that appeared in the Serge Lang memorial volume.
|
A kernel method is proposed to estimate the condensed density of the
generalized eigenvalues of pencils of Hankel matrices whose elements have a
joint noncentral Gaussian distribution with nonidentical covariance. These
pencils arise when the complex exponentials approximation problem is considered
in Gaussian noise. Several moments problems can be formulated in this framework
and the estimation of the condensed density above is the main critical step for
their solution. It is shown that the condensed density approximately satisfies
a diffusion equation, which allows an optimal bandwidth to be estimated.
Simulations show that good results can be obtained even when the
signal-to-noise ratio is so small that other methods fail.
|
SPARQL is the W3C candidate recommendation query language for RDF. In this
paper we address systematically the formal study of SPARQL, concentrating on
its graph pattern facility. We consider for this study a fragment without
literals and a simple version of filters which encompasses all the main issues
yet is simple to formalize. We provide a compositional semantics, prove the
existence of normal forms, prove complexity bounds (among them that the
evaluation of SPARQL patterns is PSPACE-complete), compare our semantics to an
alternative operational semantics, give simple and natural conditions under
which both semantics coincide, and discuss optimization procedures.
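For intuition, a compositional semantics of this kind evaluates graph patterns
to sets of partial variable-to-term mappings; the sketch below (illustrative
Python with made-up data, not the paper's formalism) shows the standard algebra
over such mapping sets.

```python
# Minimal sketch of the algebra over sets of partial mappings used by
# compositional SPARQL semantics (illustrative only; data are made up).

def compatible(m1, m2):
    """Two mappings are compatible if they agree on every shared variable."""
    return all(m1[v] == m2[v] for v in m1.keys() & m2.keys())

def join(M1, M2):                 # AND
    return [{**m1, **m2} for m1 in M1 for m2 in M2 if compatible(m1, m2)]

def union(M1, M2):                # UNION
    return M1 + M2

def minus(M1, M2):                # difference used to define OPTIONAL
    return [m1 for m1 in M1 if not any(compatible(m1, m2) for m2 in M2)]

def left_outer_join(M1, M2):      # OPTIONAL
    return join(M1, M2) + minus(M1, M2)

def filter_(M, condition):        # FILTER
    return [m for m in M if condition(m)]

# Example: every ?x is kept, with an ?e binding only where one exists.
people = [{"?x": "alice"}, {"?x": "bob"}]
emails = [{"?x": "alice", "?e": "alice@example.org"}]
print(left_outer_join(people, emails))
# [{'?x': 'alice', '?e': 'alice@example.org'}, {'?x': 'bob'}]
```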
|
Motivated by recently reported experimental phase diagrams, we study the
effects of CoO6 distortion on the electronic structure in NaxCoO2.yH2O. We
construct the multiband tight-binding model by employing the LDA result.
Analyzing this model, we show the deformation of band dispersions and
Fermi-surface topology as functions of CoO2-layer thickness. Considering these
results together with previous theoretical ones, we propose a possible
schematic phase diagram with three successive phases: the extended s-wave
superconductivity (SC), the magnetic order, and the spin-triplet SC phases when
the Co valence number s is +3.4. A phase diagram with only one phase of
spin-triplet SC is also proposed for the s=+3.5 case.
|
In this study, we examine the role of the repulsive Casimir force in
counteracting the gravitational contraction of a thin spherically symmetric
shell. Our main focus is to explore the possibility of achieving a stable
balanced configuration within the theoretically reliable weak field limit. To
this end, we consider different types of Casimir forces, including those
generated by massless scalar fields, massive scalar fields, electromagnetic
fields, and temperature-dependent fields.
|
We describe all linear operators on spaces of multivariate polynomials
preserving the property of being non-vanishing in open circular domains. This
completes the multivariate generalization of the classification program
initiated by P\'olya-Schur for univariate real polynomials and provides a
natural framework for dealing in a uniform way with Lee-Yang type problems in
statistical mechanics, combinatorics, and geometric function theory.
This is an announcement with some of the main results in arXiv:0809.0401 and
arXiv:0809.3087.
|
We consider higher-dimensional effective field theory (EFT) operators, up to
dimension six, connecting fermion dark matter (DM) to Standard Model (SM)
leptons. Considering all operators together and assuming the DM to undergo
thermal freeze-out, we find the relic-density-allowed parameter space in terms
of the DM mass ($m_\chi$) and the New Physics (NP) scale ($\Lambda$), subject
to one-loop direct search constraints from the XENON1T experiment. The allowed
parameter space of the model is probed at the proposed International Linear
Collider (ILC) via the monophoton signal for both the Dirac and Majorana cases,
limited by the centre-of-mass energy $\sqrt s = 1$ TeV: the DM mass can be
probed within $m_\chi<\frac{\sqrt{s}}{2}$ for pair production to occur, and
$\Lambda>\sqrt s$ is required for the validity of the EFT framework.
|
This work deals with quantum graphs, focusing on the transmission properties
they engender. We first select two simple diamond graphs, and two hexagonal
graphs in which the vertices are all of degree 3, and investigate their
transmission coefficients. In particular, we identify regions in which the
transmission is fully suppressed. We also consider the transmission
coefficients of some series and parallel arrangements of the two basic graphs,
with the vertices still preserving the degree-3 condition, and identify
specific series and parallel compositions that allow for windows of no
transmission. Inside some of these windows, we find very narrow peaks of full
transmission, which are consequences of constructive quantum interference.
Possible practical uses, such as the experimental construction of devices of
current interest to control and manipulate quantum transmission, are also
discussed.
|
A description of all normal Hopf subalgebras of a semisimple Drinfeld double
is given. This is obtained by considering an analogue of Goursat's lemma
concerning fusion subcategories of Deligne products of two fusion categories.
As an application we show that the Drinfeld double of any abelian extension is
also an abelian extension.
|
We consider random filtered complexes built over marked point processes on
Euclidean spaces. Examples of our filtered complexes include a filtration of
$\check{\textrm{C}}$ech complexes of a family of sets with various sizes,
growths, and shapes. We establish the law of large numbers for persistence
diagrams as the size of the convex window observing a marked point process
tends to infinity.
|
(Abridged) We report results of a campaign to image the stellar populations in
the halos of highly inclined spiral galaxies, with the fields roughly 10 kpc
(projected) from the nuclei. We use the F814W (I) and F606W (V) filters in the
Wide Field Planetary Camera 2, on board the Hubble Space Telescope. Extended
halo populations are detected in all galaxies. The color-magnitude diagrams
appear to be completely dominated by giant-branch stars, with no evidence for
the presence of young stellar populations in any of the fields. We find that
the metallicity distribution functions are dominated by metal-rich populations,
with a tail extending toward the metal poor end. To first order, the overall
shapes of the metallicity distribution functions are similar to what is
predicted by a simple, single-component model of chemical evolution with the
effective yields increasing with galaxy luminosity. However, metallicity
distributions significantly narrower than the simple model are observed for a
few of the most luminous galaxies in the sample. It appears clear that more
luminous spiral galaxies also have more metal-rich stellar halos. The
increasingly significant departures from the closed-box model for the more
luminous galaxies indicate that a parameter in addition to a single yield is
required to describe chemical evolution. This parameter, which could be related
to gas infall or outflow either in situ or in progenitor dwarf galaxies that
later merge to form the stellar halo, tends to act to make the metallicity
distributions narrower at high metallicity.
|
This paper surveys a range of methods to collect necessary performance data
on Intel CPUs and NVIDIA GPUs for hierarchical Roofline analysis. As of
mid-2020, two vendor performance tools, Intel Advisor and NVIDIA Nsight
Compute, have integrated Roofline analysis into their supported feature set.
This paper fills the gap for when these tools are not available, or when users
would like a more customized workflow for certain analyses. Specifically, we
will discuss how to use Intel Advisor, RRZE LIKWID, Intel SDE and Intel
Amplifier on Intel architectures, and nvprof, Nsight Compute metrics, and
Nsight Compute section files on NVIDIA architectures. These tools will be used
to collect information for as many memory/cache levels in the memory hierarchy
as possible in order to provide insights into an application's data reuse and
cache locality characteristics.
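As background (independent of any particular tool), a hierarchical Roofline
bounds the attainable performance at each memory level $\ell$ by
$$ P_{\ell} = \min\left(P_{\mathrm{peak}},\; I_{\ell}\times B_{\ell}\right), \qquad I_{\ell} = \frac{W}{Q_{\ell}}, $$
where $W$ is the work performed (e.g. FLOPs), $Q_{\ell}$ the data traffic
measured at level $\ell$, $B_{\ell}$ the bandwidth of that level, and
$I_{\ell}$ the resulting arithmetic intensity; the tools above are used to
collect $W$ and the per-level traffic $Q_{\ell}$.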
|
The contact angle that a liquid drop makes on a soft substrate does not obey
the classical Young's relation, since the solid is deformed elastically by the
action of the capillary forces. The finite elasticity of the solid also renders
the contact angles different from those predicted by Neumann's law, which
applies when the drop is floating on another liquid. Here we derive an
elasto-capillary model for contact angles on a soft solid, by coupling a
mean-field model for the molecular interactions to elasticity. We demonstrate
that the limit of vanishing elastic modulus yields Neumann's law or a slight
variation thereof, depending on the force transmission in the solid surface
layer. The change in contact angle from the rigid limit (Young) to the soft
limit (Neumann) appears when the length scale defined by the ratio of surface
tension to elastic modulus $\gamma/E$ reaches a few molecular sizes.
|
The recent explosion of performance of large language models (LLMs) has
changed the field of Natural Language Processing (NLP) more abruptly and
seismically than any other shift in the field's 80-year history. This has
resulted in concerns that the field will become homogenized and
resource-intensive. The new status quo has put many academic researchers,
especially PhD students, at a disadvantage. This paper aims to define a new NLP
playground by proposing 20+ PhD-dissertation-worthy research directions,
covering theoretical analysis, new and challenging problems, learning
paradigms, and interdisciplinary applications.
|
The effects on the local structure due to self-irradiation damage of Ga
stabilized $\delta$-Pu stored at cryogenic temperatures have been examined
using extended x-ray absorption fine structure (EXAFS) experiments. Extensive
damage, seen as a loss of local order, was evident after 72 days of storage
below 15 K. The effect was observed from both the Pu and Ga sites, although
less pronounced around Ga. Isochronal annealing was performed on this sample to
study the annealing processes that occur between cryogenic and room temperature
storage conditions, where damage is mostly reversed. Damage fractions at
various points along the annealing curve have been determined using an
amplitude-ratio method, standard EXAFS fitting, and a spherical crystallite
model, and provide information complementary to previous electrical
resistivity- and susceptibility-based isochronal annealing studies. The use of
a spherical crystallite model accounts for the changes in EXAFS spectra using
just two parameters, namely, the crystalline fraction and the particle radius.
Together, these results are discussed in terms of changes to the local
structure around Ga and Pu throughout the annealing process and highlight the
unusual role of Ga in the behavior of the lowest temperature anneals.
|
The paradigm shift from shallow classifiers with hand-crafted features to
end-to-end trainable deep learning models has shown significant improvements on
supervised learning tasks. Despite the promising power of deep neural networks
(DNN), how to alleviate overfitting during training has been a research topic
of interest. In this paper, we present a Generative-Discriminative Variational
Model (GDVM) for visual classification, in which we introduce a latent variable
inferred from inputs for exhibiting generative abilities towards prediction. In
other words, our GDVM casts the supervised learning task as a generative
learning process, with data discrimination to be jointly exploited for improved
classification. In our experiments, we consider the tasks of multi-class
classification, multi-label classification, and zero-shot learning. We show
that our GDVM performs favorably against the baselines or recent generative DNN
models.
|
We design a calculus for true concurrency called CTC, including its syntax
and operational semantics. CTC has good properties modulo several kinds of
strongly truly concurrent bisimulations and weakly truly concurrent
bisimulations, such as monoid laws, static laws, a new expansion law for
strongly truly concurrent bisimulations, $\tau$ laws for weakly truly
concurrent bisimulations, full congruences for strongly and weakly truly
concurrent bisimulations, and a unique solution for recursion.
|
We present a novel algorithm for computing collision-free navigation for
heterogeneous road-agents such as cars, tricycles, bicycles, and pedestrians in
dense traffic. Our approach currently assumes the positions, shapes, and
velocities of all vehicles and pedestrians are known and computes smooth
trajectories for each agent by taking into account the dynamic constraints. We
describe an efficient optimization-based algorithm for each road-agent based on
reciprocal velocity obstacles that takes into account kinematic and dynamic
constraints. Our algorithm uses tight-fitting shape representations based on
the medial axis to compute collision-free trajectories in dense traffic
situations.
We evaluate the performance of our algorithm in real-world dense traffic
scenarios and highlight the benefits over prior reciprocal collision avoidance
schemes.
|
We show that the surface density of states of the discrete almost-periodic
Schrodinger operator converges weakly to the continuous surface density of
states.
|
Motion of a classical particle in 3-dimensional Lobachevsky and Riemann
spaces is studied in the presence of an external magnetic field which is
analogous to a constant uniform magnetic field in Euclidean space. In both
cases three integrals of motion are constructed and the equations of motion
are solved exactly in special cylindrical coordinates by the method of
separation of variables. In Lobachevsky space there exist trajectories of two
types, finite and infinite in the radial variable; in Riemann space all motions
are finite and periodic. The invariance of the uniform magnetic field in the
tensor description and the gauge invariance of the corresponding 4-potential
description are demonstrated explicitly. The role of symmetry in the
classification of all possible solutions is clarified, based on the geometric
symmetry groups SO(3,1) and SO(4), respectively.
|
In this talk we discuss symmetry preserving D-branes on a line of a
marginally deformed SU(2) WZW model. A semiclassical and a quantum theoretical
approach are presented.
|
Expert search and team formation systems operate on collaboration networks,
with nodes representing individuals, labeled with their skills, and edges
denoting collaboration relationships. Given a keyword query corresponding to
the desired skills, these systems identify experts that best match the query.
However, state-of-the-art solutions to this problem lack transparency. To
address this issue, we propose ExES, a tool designed to explain expert search
and team formation systems using factual and counterfactual methods from the
field of explainable artificial intelligence (XAI). ExES uses factual
explanations to highlight important skills and collaborations, and
counterfactual explanations to suggest new skills and collaborations to
increase the likelihood of being identified as an expert. Towards a practical
deployment as an interactive explanation tool, we present and experimentally
evaluate a suite of pruning strategies to speed up the explanation search. In
many cases, our pruning strategies make ExES an order of magnitude faster than
exhaustive search, while still producing concise and actionable explanations.
|
We show that the moduli space of nonnegatively curved metrics on each member
of a large class of 2-connected 7-manifolds, including each smooth manifold
homeomorphic to $S^7$, has infinitely many connected components. The components
are distinguished using the Kreck-Stolz $s$-invariant computed for metrics
constructed by Goette, Kerin and Shankar. The invariant is computed by
extending each metric to the total space of an orbifold disc bundle and
applying generalizations of the Atiyah-Patodi-Singer index theorem for
orbifolds with boundary. We develop methods for computing characteristic
classes and integrals of characteristic forms appearing in index theorems for
orbifolds, in particular orbifolds constructed using Lie group actions of
cohomogeneity one.
|
Quantization of gravitation theory as a gauge theory of general covariant
transformations in the framework of the Batalin-Vilkovisky (BV) formalism is
considered. Its gauge-fixed Lagrangian is constructed.
|
Single crystals of MgB2 have been grown at high pressure via the peritectic
decomposition of MgNB9. The crystals are of a size up to 1.5x0.9x0.2 mm3 with a
weight up to 0.23 mg, and typically have transition temperatures between 37
and 39 K with a transition width of 0.3-0.5 K. Investigations of the P-T phase
diagram prove that the MgB2 phase is stable at least up to 2190 C under high
hydrostatic pressure in the presence of Mg vapor. Small variations of
Tc are caused by doping with metal elements from the precursor or annealing of
defects during the crystal growth process.
|
In this thesis, we investigate quantum ergodicity for two classes of
Hamiltonian systems satisfying intermediate dynamical hypotheses between the
well understood extremes of ergodic flow and quantum completely integrable
flow. These two classes are mixed Hamiltonian systems and KAM Hamiltonian
systems.
Hamiltonian systems with mixed phase space decompose into finitely many
invariant subsets, only some of which are of ergodic character. It has been
conjectured by Percival that the eigenfunctions of the quantisation of this
system decompose into associated families of analogous character. The first
project in this thesis proves a weak form of this conjecture for a class of
dynamical billiards, namely the mushroom billiards of Bunimovich for a full
measure subset of a shape parameter $t\in (0,2]$.
KAM Hamiltonian systems arise as perturbations of completely integrable
Hamiltonian systems. The dynamics of these systems are well understood and have
near-integrable character. The classical-quantum correspondence suggests that
the quantisation of KAM systems will not have quantum ergodic character. The
second project in this thesis proves an initial negative quantum ergodicity
result for a class of positive Gevrey perturbations of a Gevrey Hamiltonian
that satisfy a mild "slow torus" condition.
|
The mode specific dissociative chemisorption dynamics of ammonia on the
Ru(0001) surface is investigated using a quasi-classical trajectory (QCT)
method on a new global potential energy surface (PES) with twelve dimensions.
The PES is constructed by fitting 92524 density functional theory points using
the permutation invariant polynomial-neural network method, which rigorously
enforces the permutation symmetry of the three hydrogen atoms as well as the
surface periodicity. The PES enables highly efficient QCT simulations as well
as future quantum dynamical studies of the scattering/dissociation dynamics.
The QCT calculations yield satisfactory agreement with experiment and suggest
strong mode specificity, in general agreement with the predictions of the
Sudden Vector Projection model.
|
We study the problem of estimating time-varying coefficients in ordinary
differential equations. Current theory only applies to the case when the
associated state variables are observed without measurement errors as presented
in \cite{chenwu08b,chenwu08}. The difficulty arises from the quadratic
functional of observations that one needs to deal with instead of the linear
functional that appears when state variables contain no measurement errors. We
derive the asymptotic bias and variance for the previously proposed two-step
estimators using quadratic regression functional theory.
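Schematically (our notation, not necessarily the cited estimators in full
detail), a two-step procedure for the model $X'(t)=F(t,X(t),\theta(t))$
observed as $Y_i = X(t_i)+\epsilon_i$ first forms smoothed estimates
$\hat X(t)$ and $\hat X'(t)$ by local polynomial regression of the $Y_i$ on the
$t_i$, and then sets
$$ \hat\theta(t) = \arg\min_{\theta}\sum_i K_h(t_i - t)\left\{\hat X'(t_i) - F\big(t_i,\hat X(t_i),\theta\big)\right\}^{2}, $$
so the measurement errors enter through $\hat X$ and $\hat X'$ and the
criterion becomes a quadratic functional of the observations.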
|
Several cosmologically distant astrophysical sources may produce high-energy
cosmic neutrinos ($E \geq 10^6$ GeV) of all flavors above the atmospheric
neutrino background. We study the effects of vacuum neutrino mixing in three
flavor framework on this cosmic neutrino flux. We also consider the effects of
possible mixing between the three active neutrinos and the (fourth) sterile
neutrino with or without Big-Bang nucleosynthesis constraints and estimate the
resulting final high-energy cosmic neutrino flux ratios on Earth, compatible
with the different currently existing neutrino oscillation hints, in a
model-independent way. Further, we discuss the case where the intrinsic cosmic
neutrino flux does not have the standard ratio.
|
We present the complete set of thirty-four ASCA observations of non-magnetic
cataclysmic variables. Timing analysis reveals large X-ray flux variations in
dwarf novae in outburst (Z Cam, SS Cyg and SU UMa) and orbital modulation in
high inclination systems (including OY Car, HT Cas, U Gem, T Leo). We also
found episodes of unusually low accretion rate during quiescence (VW Hyi and SS
Cyg). Spectral analysis reveals broad temperature distributions in individual
systems, with emission weighted to lower temperatures in dwarf novae in
outburst. Absorption in excess of interstellar values is required in dwarf
novae in outburst, but not in quiescence. We also find evidence for sub-solar
abundances and X-ray reflection in the brightest systems. LS Peg, V426 Oph and
EI UMa have X-ray spectra that are distinct from the rest of the sample and all
three exhibit candidate X-ray periodicities. We argue that they should be
reclassified as intermediate polars. In the case of V345 Pav we found that the
X-ray source had been previously misidentified.
|
We consider the problem of estimating species trees from unrooted gene tree
topologies in the presence of incomplete lineage sorting, a common phenomenon
that creates gene tree heterogeneity in multilocus datasets. One popular class
of reconstruction methods in this setting is based on internode distances, i.e.
the average graph distance between pairs of species across gene trees. While
statistical consistency in the limit of large numbers of loci has been
established in some cases, little is known about the sample complexity of such
methods. Here we make progress on this question by deriving a lower bound on
the worst-case variance of internode distance which depends linearly on the
corresponding graph distance in the species tree. We also discuss some
algorithmic implications.
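As a concrete illustration of the quantity being analyzed (not of the
reconstruction methods themselves), the sketch below computes internode
distances by averaging the graph (edge-count) distance between each pair of
taxa over a collection of gene trees; the adjacency-dict tree representation is
an assumption made for the example.

```python
# Sketch: average internode (graph) distance between taxa across gene trees.
# Each gene tree is given as an adjacency dict {node: [neighbours]}.
from collections import deque
from itertools import combinations

def graph_distance(adj, u, v):
    """Number of edges on the unique path between u and v in a tree."""
    dist, queue = {u: 0}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    raise ValueError("nodes are not connected")

def internode_distances(gene_trees, taxa):
    """Average pairwise graph distance over the given gene trees."""
    totals = {pair: 0.0 for pair in combinations(sorted(taxa), 2)}
    for adj in gene_trees:
        for (a, b) in totals:
            totals[(a, b)] += graph_distance(adj, a, b)
    return {pair: s / len(gene_trees) for pair, s in totals.items()}
```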
|
A low-complexity 8-point orthogonal approximate DCT is introduced. The
proposed transform requires no multiplications or bit-shift operations. The
derived fast algorithm requires only 14 additions, fewer than any existing DCT
approximation. Moreover, in several image compression scenarios, the proposed
transform could outperform the well-known signed DCT, as well as
state-of-the-art algorithms.
|
We consider the decision problem asking whether a partial rational symmetric
matrix with an all-ones diagonal can be completed to a full positive
semidefinite matrix of rank at most $k$. We show that this problem is
$\mathrm{NP}$-hard for any fixed integer $k\ge 2$. Equivalently, for $k\ge 2$,
it is $\mathrm{NP}$-hard to test membership in the rank-constrained elliptope
$\mathcal{E}_k(G)$, i.e., the set of all partial matrices with off-diagonal
entries specified at the edges of $G$ that can be completed to a positive
semidefinite matrix of rank at most $k$. Additionally, we show that deciding
membership in the convex hull of $\mathcal{E}_k(G)$ is also $\mathrm{NP}$-hard
for any fixed integer $k\ge 2$.
|
We propose a simple yet efficient scheme for a set of energy-harvesting
sensors to establish secure communication with a common destination (a master
node). An eavesdropper attempts to decode the data sent from the sensors to
their common destination. We assume a single modulation scheme that can be
implemented efficiently for energy-limited applications. We design a
multiple-access scheme for the sensors under secrecy and limited-energy
constraints. In a given time slot, each energy-harvesting sensor chooses
between sending its packet or remaining idle. The destination assigns a set of
data time slots to each sensor. The optimization problem is formulated to
maximize the secrecy sum-throughput.
|
We present a novel mathematical model that seeks to capture the key design
feature of generative adversarial networks (GANs). Our model consists of two
interacting spin glasses, and we conduct an extensive theoretical analysis of
the complexity of the model's critical points using techniques from Random
Matrix Theory. The result is insights into the loss surfaces of large GANs that
build upon prior insights for simpler networks, but also reveal new structure
unique to this setting.
|
The momentum distributions of the constituent quarks inside the nucleon and
the prominent electroproduced nucleon resonances are investigated in the two
most sophisticated available quark potential models, based respectively on the
assumption of the valence + gluon dominance and on the exchange of the
pseudoscalar Goldstone-bosons arising from the spontaneous breaking of chiral
symmetry. It is shown that both models predict a large, similar content of
high-momentum components, due to the short-range part of the interquark
interaction, which affect the behaviour of both elastic and transition
electromagnetic form factors at large values of the momentum transfer. The
electromagnetic form factors are calculated within a relativistic approach
formulated on the light-front, adopting a one-body current with constituent
quark form factors. The results suggest that soft, non-perturbative effects can
play a relevant role for explaining the existing data on elastic as well as
transition form factors (at least) for Q**2 ~ 10 - 20 (GeV/c)**2.
|
We give a new combinatorial characterization of the big height of a
squarefree monomial ideal leading to a new bound for the projective dimension
of a monomial ideal.
|
How to aggregate information from multiple instances is a key question in
multiple instance learning. Prior neural models implement different variants of
the well-known encoder-decoder strategy, according to which all input features
are encoded into a single, high-dimensional embedding which is then decoded to
generate an output. In this work, inspired by Choquet capacities, we propose
Capacity networks. Unlike encoder-decoders, Capacity networks generate multiple
interpretable intermediate results which can be aggregated in a semantically
meaningful space to obtain the final output. Our experiments show that
implementing this simple inductive bias leads to improvements over different
encoder-decoder architectures in a wide range of experiments. Moreover, the
interpretable intermediate results make Capacity networks interpretable by
design, which allows a semantically meaningful inspection, evaluation, and
regularization of the network internals.
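For reference, the aggregation operator that Choquet capacities give rise to is
the discrete Choquet integral; the sketch below (a generic illustration, not
the paper's network architecture) shows how per-instance scores are aggregated
with respect to a capacity, reducing to a weighted mean when the capacity is
additive.

```python
# Discrete Choquet integral: aggregate per-instance scores with a capacity,
# i.e. a monotone set function with capacity(frozenset()) == 0.

def choquet_integral(values, capacity):
    """values: dict instance -> score; capacity: frozenset -> weight in [0, 1]."""
    items = sorted(values, key=values.get)       # instances by increasing score
    total, prev = 0.0, 0.0
    for i, item in enumerate(items):
        upper = frozenset(items[i:])             # instances scoring at least as high
        total += (values[item] - prev) * capacity(upper)
        prev = values[item]
    return total

# With an additive capacity this is just a weighted mean:
weights = {"a": 0.2, "b": 0.3, "c": 0.5}
additive = lambda S: sum(weights[i] for i in S)
print(choquet_integral({"a": 1.0, "b": 2.0, "c": 4.0}, additive))  # 2.8
```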
|
Heart rate variability results from the combined activity of several
physiological systems, including the cardiac, vascular, and respiratory systems
which have their own internal regulation, but also interact with each other to
preserve the homeostatic function. These control mechanisms operate across
multiple temporal scales, resulting in the simultaneous presence of short-term
dynamics and long-range correlations. The Network Physiology framework provides
statistical tools based on information theory able to quantify structural
aspects of multivariate and multiscale interconnected mechanisms driving the
dynamics of complex physiological networks. In this work, the multiscale
representation of Transfer Entropy from Systolic Arterial Pressure (S) and
Respiration (R) to Heart Period (H) and of its decomposition into unique,
redundant and synergistic contributions is obtained using a Vector
AutoRegressive Fractionally Integrated (VARFI) framework for Gaussian
processes. This novel approach makes it possible to quantify the directed information flow
accounting for the simultaneous presence of short-term dynamics and long-range
correlations among the analyzed processes. The approach is first illustrated in
simulated VARFI processes and then applied to H, S and R time series measured
in healthy subjects monitored at rest and during mental and postural stress.
Our results highlight the dependence of the information transfer on the balance
between short-term and long-range correlations in coupled dynamical systems,
which cannot be observed using standard methods that do not consider long-range
correlations. The proposed methodology shows that postural stress induces
larger redundant effects at short time scales and mental stress induces larger
cardiovascular information transfer at longer time scales.
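In the notation commonly used for this decomposition (our summary, not
necessarily the paper's exact definitions), the joint transfer entropy from S
and R to H is the conditional mutual information
$$ T_{S,R\to H} = I\big(H_n;\, S_n^{-}, R_n^{-} \,\big|\, H_n^{-}\big) = U_S + U_R + R_{SR} + S_{SR}, $$
where $X_n^{-}$ denotes the past of process $X$ and the four terms are the
unique contributions of S and of R, the redundant contribution and the
synergistic contribution; the VARFI framework provides these quantities as
functions of the time scale.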
|
To a subshift over a finite alphabet, one can naturally associate an infinite
family of finite graphs, called its Rauzy graphs. We show that for a subshift
of subexponential complexity the Rauzy graphs converge to the line $\mathbf{Z}$
in the sense of Benjamini-Schramm convergence if and only if its complexity
function $p(n)$ is unbounded and satisfies $\lim_n\frac{p(n+1)}{p(n)} = 1$. We
then apply this criterion to many examples of well-studied dynamical systems.
If the subshift is moreover uniquely ergodic, then we show that the limit of
labelled Rauzy graphs, if it exists, can be identified with the unique
invariant measure. In addition, we consider an example of a non-uniquely
ergodic system recently studied by Cassaigne and Kabor\'e and identify a
continuum of invariant measures with subsequential limits of labelled Rauzy
graphs.
|
A previously published analytical magnetohydrodynamic model for the local
interstellar magnetic field in the vicinity of the heliopause (R\"oken et al.
2015) is extended from incompressible to compressible, yet predominantly
subsonic flow, considering both isothermal and adiabatic equations of state.
Exact expressions and suitable approximations for the density and the flow
velocity are derived and discussed. In addition to the stationary induction
equation, these expressions also satisfy the momentum balance equation along
stream lines. The practical usefulness of the corresponding, still exact
analytical magnetic field solution is assessed by comparing it quantitatively
to results from a fully self-consistent magnetohydrodynamic simulation of the
interstellar magnetic field draping around the heliopause.
|
The method of the nonequilibrium statistical operator (NSO) developed by D. N.
Zubarev is employed to analyse and derive generalized transport and kinetic
equations. The degrees of freedom in solids can often be represented as a few
interacting subsystems (electrons, spins, phonons, nuclear spins, etc.).
Perturbation of one subsystem may produce a nonequilibrium state which is then
relaxed to an equilibrium state due to the interaction between particles or
with a thermal bath. The generalized kinetic equations were derived for a
system weakly coupled to a thermal bath to elucidate the nature of transport
and relaxation processes. It was shown that the "collision term" had the same
functional form as for the generalized kinetic equations for the system with
small interactions among particles. The applicability of the general formalism
to physically relevant situations is investigated. It is shown that some known
generalized kinetic equations (e.g. the kinetic equation for magnons, the
Peierls equation for phonons) naturally emerge within the NSO formalism. The
relaxation of a small dynamic subsystem in contact with a thermal bath is
considered on the basis of the derived equations. The Schrodinger-type equation
for the average amplitude describing the energy shift and damping of a particle
in a thermal bath and the coupled kinetic equation describing the dynamic and
statistical aspects of the motion are derived and analysed. The equations
derived can help in the understanding of the origin of irreversible behavior in
quantum phenomena.
|
We consider the Lie group PSL(2) (the group of orientation preserving
isometries of the hyperbolic plane) and a left-invariant Riemannian metric on
this group with two equal eigenvalues that correspond to space-like
eigenvectors (with respect to the Killing form). For such metrics we find a
parametrization of geodesics, the conjugate time, the cut time and the cut
locus. The injectivity radius is computed. We show that the cut time and the
cut locus in such Riemannian problem converge to the cut time and the cut locus
in the corresponding sub-Riemannian problem as the third eigenvalue of the
metric tends to infinity. Similar results are also obtained for SL(2).
|
In this article, we propose a space-time Multi-Index Monte Carlo (MIMC)
estimator for a one-dimensional parabolic stochastic partial differential
equation (SPDE) of Zakai type. We compare the complexity with the Multilevel
Monte Carlo (MLMC) method of Giles and Reisinger (2012), and find, by means of
Fourier analysis, that the MIMC method: (i) has suboptimal complexity of
$O(\varepsilon^{-2}|\log\varepsilon|^3)$ for a root mean square error (RMSE)
$\varepsilon$ if the same spatial discretisation as in the MLMC method is used;
(ii) has a better complexity of $O(\varepsilon^{-2}|\log\varepsilon|)$ if a
carefully adapted discretisation is used; (iii) has to be adapted for
non-smooth functionals. Numerical tests confirm these findings empirically.
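For context, the MIMC construction replaces the single scalar level of MLMC by
a multi-index $\boldsymbol{\alpha}=(\alpha_1,\alpha_2)$ refining time and space
separately, and sums sample averages of mixed first-order differences over an
index set $\mathcal{I}$ (a schematic form of the general estimator, not the
specific one analysed here):
$$ \mathbb{E}[F] \approx \sum_{\boldsymbol{\alpha}\in\mathcal{I}} E_{M_{\boldsymbol{\alpha}}}\big[\Delta_1\Delta_2 F_{\boldsymbol{\alpha}}\big], \qquad \Delta_i F_{\boldsymbol{\alpha}} = F_{\boldsymbol{\alpha}} - F_{\boldsymbol{\alpha}-e_i} \ \ (\Delta_i F_{\boldsymbol{\alpha}} = F_{\boldsymbol{\alpha}} \text{ if } \alpha_i = 0), $$
where $E_{M_{\boldsymbol{\alpha}}}$ is the Monte Carlo average over
$M_{\boldsymbol{\alpha}}$ independent samples.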
|
We extend and apply a model-independent analysis method developed earlier by
Daly & Djorgovski to new samples of supernova standard candles, radio galaxy
and cluster standard rulers, and use it to constrain physical properties of the
dark energy as functions of redshift. Similar results are obtained for the
radio galaxy and supernova data sets. The first and second derivatives of the
distance are compared directly with predictions in a standard model based on
General Relativity. The good agreement indicates that General Relativity
provides an accurate description of the data on look-back time scales of about
ten billion years. The first and second derivatives are combined to obtain the
acceleration parameter, assuming only the validity of the Robertson-Walker
metric, independent of a theory of gravity and of the physical nature of the
dark energy. The acceleration of the universe at the current epoch is indicated
by the analysis. The effect of non-zero space curvature on q(z) is explored. We
solve for the pressure, energy density, equation of state, and potential and
kinetic energy of the dark energy as functions of redshift assuming that
General Relativity is the correct theory of gravity, and the results indicate
that a cosmological constant in a spatially flat universe provides a good
description of each of these quantities over the redshift range from zero to
about one. We define a new function, the dark energy indicator, in terms of the
first and second derivatives of the coordinate distance and show how this can
be used to measure deviations of w from -1 and to obtain a new and independent
measure of Omega.
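In the spatially flat case, the acceleration parameter obtained from the first
and second derivatives of the dimensionless coordinate distance $y(z)$ can be
written as
$$ q(z) = -1 - (1+z)\,\frac{d^{2}y/dz^{2}}{dy/dz}, $$
which follows from the Robertson-Walker metric alone and hence assumes neither
a theory of gravity nor a model for the dark energy (our paraphrase of the type
of relation used in this kind of analysis; the non-flat case carries an
additional curvature term).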
|
An efficient sampling method, the pmmLang+RBM, is proposed to compute the
quantum thermal average in the interacting quantum particle system. Benefiting
from the random batch method (RBM), the pmmLang+RBM reduces the complexity due
to the interaction forces per timestep from $O(NP^2)$ to $O(NP)$, where $N$ is
the number of beads and $P$ is the number of particles. Although the RBM
introduces a random perturbation of the interaction forces at each timestep,
the long time effects of the random perturbations along the sampling process
only result in a small bias in the empirical measure of the pmmLang+RBM from
the target distribution, which also implies a small error in the thermal
average calculation. We numerically study the convergence of the pmmLang+RBM,
and quantitatively investigate the dependence of the error in computing the
thermal average on the parameters including the batch size, the timestep, etc.
We also propose an extension of the pmmLang+RBM, which is based on the
splitting Monte Carlo method and is applicable when the interacting potential
contains a singular part.
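To illustrate the random-batch idea in isolation (a minimal sketch under
simplifying assumptions: plain pairwise forces, no path-integral bead
structure, and a batch size dividing the particle number), each timestep
reshuffles the particles into small batches and sums forces only within a
batch, with a rescaling that keeps the estimate of the full pairwise sum
unbiased.

```python
# Random batch method (RBM) sketch: O(P * batch_size) interaction cost per step
# instead of O(P^2), for P particles in d dimensions.
import numpy as np

def rbm_forces(x, pair_force, batch_size=2, rng=np.random.default_rng(0)):
    """x: (P, d) positions; pair_force(xi, xj): force on particle i due to j."""
    P = len(x)
    forces = np.zeros_like(x)
    perm = rng.permutation(P)                      # new random batches each step
    for start in range(0, P, batch_size):          # assumes batch_size divides P
        batch = perm[start:start + batch_size]
        for i in batch:
            for j in batch:
                if i != j:
                    # rescale so the within-batch sum is an unbiased estimator
                    # of the full sum over all j != i
                    forces[i] += (P - 1) / (batch_size - 1) * pair_force(x[i], x[j])
    return forces
```

Each pair is visited with probability (batch_size - 1)/(P - 1) per step, which
is exactly what the rescaling factor compensates for; the residual fluctuation
is the random perturbation whose long-time effect is the small bias discussed
above.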
|
We present an optimized rerandomization design procedure for a non-sequential
treatment-control experiment. Randomized experiments are the gold standard for
finding causal effects in nature. But sometimes random assignments result in
unequal partitions of the treatment and control groups, visible as imbalance in
observed covariates. There can additionally be imbalance on unobserved
covariates. Imbalance in either observed or unobserved covariates increases
treatment effect estimator error, inflating the width of confidence regions and
reducing experimental power. "Rerandomization" is a strategy that discards
poorly balanced assignments by limiting imbalance in the observed covariates to
a prespecified threshold. However, tightening this threshold too much can increase
the risk of contracting error from unobserved covariates. We introduce a
criterion that combines observed imbalance while factoring in the risk of
inadvertently imbalancing unobserved covariates. We then use this criterion to
locate the optimal rerandomization threshold based on the practitioner's level
of desired insurance against high estimator error. We demonstrate the gains of
our designs in simulation and in a dataset from a large randomized experiment
in education. We provide an open source R package available on CRAN named
OptimalRerandExpDesigns which generates designs according to our algorithm.
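For concreteness, plain rerandomization with the usual Mahalanobis imbalance
criterion looks as follows (a minimal sketch; the optimal threshold selection
that the package implements is not shown, and the function name is ours).

```python
# Plain rerandomization: redraw balanced assignments until the observed
# covariate imbalance (Mahalanobis distance) falls below a threshold.
import numpy as np

def rerandomize(X, threshold, rng=np.random.default_rng(0)):
    """X: (n, p) matrix of observed covariates, n even; returns a 0/1 vector."""
    n, _ = X.shape
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    while True:
        w = rng.permutation(np.repeat([0, 1], n // 2))   # half treated, half control
        diff = X[w == 1].mean(axis=0) - X[w == 0].mean(axis=0)
        mahalanobis = (n / 4) * diff @ S_inv @ diff      # observed imbalance
        if mahalanobis <= threshold:
            return w
```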
|
This is the second of a series of popular lectures on quantum chromodynamics.
The first (introductory) lecture can be found here:
https://scfh.ru/blogs/O_fizike_i_fizikah/polet-nad-kvantovoy-khromodinamikoy/
The lecture deals with one of the main pillars of quantum chromodynamics ---
quantum mechanics. Non-physicists usually consider quantum mechanics an
extremely weird subject, far from everyday common sense. Partly this is true.
However, we will try to argue in this lecture that in its essence quantum
mechanics is, in some sense, even more natural than classical mechanics, and
that it is not as far from common sense as a layman usually assumes.
|
Inspired by the Basilica group $\mathcal B$, we describe a general
construction which allows us to associate to any group of automorphisms $G \leq
\operatorname{Aut}(T)$ of a rooted tree $T$ a family of Basilica groups
$\operatorname{Bas}_s(G), s \in \mathbb{N}_+$. For the dyadic odometer
$\mathcal{O}_2$, one has $\mathcal B = \operatorname{Bas}_2(\mathcal{O}_2)$. We
study which properties of groups acting on rooted trees are preserved under
this operation. Introducing some techniques for handling
$\operatorname{Bas}_s(G)$, in case $G$ fulfills some branching conditions, we
are able to calculate the Hausdorff dimension of the Basilica groups associated
to certain $\mathsf{GGS}$-groups and of generalisations of the odometer,
$\mathcal{O}_m^d$. Furthermore, we study the structure of groups of type
$\operatorname{Bas}_s(\mathcal{O}_m^d)$ and prove an analogue of the congruence
subgroup property in the case $m = p$, a prime.
|
Considering a three-dimensional $C$-metric, we obtain the exact accelerating
black holes in the $F(R)$ theory of gravity coupled with and without a matter
field. First, we extract uncharged accelerating AdS black hole solutions in
$F(R)$ gravity. Then, we study the effects of various parameters on metric
function, roots, and the temperature of these black holes. The temperature is
always positive for the radii less than $\frac{1}{\alpha }$, and it is negative
for the radii more than $\frac{1}{\alpha }$. We extend our study by coupling
nonlinear electrodynamics as a matter field to $F(R)$ gravity to obtain charged
black holes in this theory. Next, we evaluate the effects of different
parameters such as the electrical charge, accelerating parameter, angular,
$F(R)$ gravity, and scalar curvature on the obtained solutions, roots, and
temperature of three-dimensional charged accelerating AdS black holes. The
results indicate that there is a root which depends on the various
parameters. The temperature of these black holes is positive after this root.
|
The motion of a stone skimming over a water surface is considered. A
simplified description of the collisional process of the stone with water is
proposed. The maximum number of bounces is estimated by considering both the
slowing down of the stone and its angular stability. The conditions for a
successful throw are discussed.
|
We show that the mesh mutations are the minimal relations among the
$\boldsymbol{g}$-vectors with respect to any initial seed in any finite type
cluster algebra. We then use this algebraic result to derive geometric
properties of the $\boldsymbol{g}$-vector fan: we show that the space of all
its polytopal realizations is a simplicial cone, and we then observe that this
property implies that all its realizations can be described as the intersection
of a high dimensional positive orthant with well-chosen affine spaces. This
sheds new light on, and extends, earlier results of N. Arkani-Hamed, Y. Bai, S.
He, and G. Yan in type $A$ and of V. Bazier-Matte, G. Douville, K. Mousavand,
H. Thomas and E. Yildirim for acyclic initial seeds.
Moreover, we use a similar approach to study the space of polytopal
realizations of the $\boldsymbol{g}$-vector fans of another generalization of
the associahedron: non-kissing complexes (a.k.a. support $\tau$-tilting
complexes) of gentle algebras. We show that the space of realizations of the
non-kissing fan is simplicial when the gentle bound quiver is brick and
$2$-acyclic, and we describe in this case its facet-defining inequalities in
terms of mesh mutations.
Along the way, we prove algebraic results on $2$-Calabi-Yau triangulated
categories, and on extriangulated categories that are of independent interest.
In particular, we prove, in those two setups, an analogue of a result of M.
Auslander on minimal relations for Grothendieck groups of module categories.
|
Automated analysis of recursive derivations in logic programming is known to
be a hard problem. Both termination and non-termination are undecidable
problems in Turing-complete languages. However, some declarative languages
offer a practical work-around for this problem, by making a clear distinction
between whether a program is meant to be understood inductively or
coinductively. For programs meant to be understood inductively, termination
must be guaranteed, whereas for programs meant to be understood coinductively,
productive non-termination (or "productivity") must be ensured. In practice,
such classification helps to better understand and implement some
non-terminating computations.
Logic programming was one of the first declarative languages to make this
distinction: in the 1980s, Lloyd and van Emden's "computations at infinity"
captured the big-step operational semantics of derivations that produce
infinite terms as answers. In modern terms, computations at infinity describe
"global productivity" of computations in logic programming. Most programming
languages featuring coinduction also provide an observational, or small-step,
notion of productivity as a computational counterpart to global productivity.
This kind of productivity is ensured by checking that finite initial fragments
of infinite computations can always be observed to produce finite portions of
their infinite answer terms.
In this paper we introduce a notion of observational productivity for logic
programming as an algorithmic approximation of global productivity, give an
effective procedure for semi-deciding observational productivity, and offer an
implemented automated observational productivity checker for logic programs.
|
In this paper, we obtain an analytical expression for the vapor pressure of a
paramagnetic solid for high temperatures. We have considered the behavior of
magnetic materials in the presence of an external magnetic field using the
thermodynamical analysis and the elements of statistical mechanics in
microscopic systems. We found that the vapor pressure depends on the magnetic
susceptibility of material and the external field applied.
|
The paper presents the bosonic and fermionic supersymmetric extensions of the
structural equations describing conformally parametrized surfaces immersed in a
Grassmann superspace, based on the authors' earlier results. A detailed analysis
of the symmetry properties of both the classical and supersymmetric versions of
the Gauss-Weingarten equations is performed. A supersymmetric generalization of
the conjecture establishing the necessary conditions for a system to be
integrable in the sense of soliton theory is formulated and illustrated by the
examples of supersymmetric versions of the sine-Gordon equation and the
Gauss-Codazzi equations.
|
Majorana fermions (MFs) are exotic particles that are their own
anti-particles. Recently, the search for the MFs occurring as quasi-particle
excitations in solid-state systems has attracted widespread interest, because
of their importance in fundamental physics and their potential
applications in topological quantum computation based on solid-state devices.
Here we study the quantum correlations between two spatially separate quantum
dots induced by a pair of MFs emerging at the two ends of a semiconductor
nanowire, in order to develop a new method for probing the MFs. We find that
without the tunnel coupling between these paired MFs, quantum entanglement
cannot be induced from an unentangled (i.e., product) state, but quantum
discord is observed due to the intrinsic nonlocal correlations of the paired
MFs. This finding reveals that quantum discord can indeed demonstrate the
intrinsic non-locality of the MFs formed in the nanowire. Also, quantum discord
can be employed to discriminate the MFs from the regular fermions. Furthermore,
we propose an experimental setup to measure the onset of quantum discord due to
the nonlocal correlations. Our approach provides a new, and experimentally
accessible, method to study the Majorana bound states by probing their
intrinsic non-locality signature.
|
We introduce two families of inequalities. Large ensemble decoupling is
connected to the continuous restriction phenomenon. Tight decoupling is
connected to the discrete Restriction conjecture for the sphere.
Our investigation opens new ground and answers some questions.
|
(Abridged) Existing models invoking AGN activity to resolve the cooling flow
conundrum in galaxy clusters focus exclusively on the role of the central
galaxy. Such models require fine-tuning of highly uncertain microscopic
transport properties to distribute the thermal energy over the entire cluster
cooling core. We propose that the ICM is instead heated by multiple, spatially
distributed AGNs. There is mounting observational evidence for multiple AGNs in
cluster environments. Active AGNs drive bubbles into the ICM. We identify three
distinct interactions between the bubble and the ICM: (1) Upon injection, the
bubbles expand rapidly in situ to reach pressure equilibrium with their
surroundings, generating shocks and waves whose dissipation is the principal
source of ICM heating. (2) Once inflated, the bubbles rise buoyantly at a rate
determined by a balance with the viscous drag force, which itself results in
some additional heating. (3) Rising bubbles expand and compress their
surroundings. This process is adiabatic and does not contribute to any
additional heating; rather, the increased ICM density due to compression
enhances cooling. Our model sidesteps the ``transport'' issue by relying on the
spatially distributed galaxies to heat the cluster core. We include self
regulation in our model by linking AGN activity in a galaxy to cooling
characteristics of the surrounding ICM. We use a spherically symmetric
one-dimensional hydrodynamical code to carry out a preliminary study
illustrating the efficacy of the model. Our self-regulating scenario predicts
that there should be enhanced AGN activity of galaxies inside the cooling
regions compared to galaxies in the outer parts of the cluster. This prediction
remains to be confirmed or refuted by observations.
|
In this paper, we propose a model enabling the creation of a social graph
corresponding to real society. The procedure uses data describing the real
social relations in the community, like marital status or number of kids.
Results show the power-law behavior of the distribution of links and, as is
typical for small worlds, the independence of the clustering coefficient from
the size of the graph.
|
We study the heat kernel for the Laplace type partial differential operator
acting on smooth sections of a complex spin-tensor bundle over a generic
$n$-dimensional Riemannian manifold. Assuming that the curvature of the U(1)
connection (that we call the electromagnetic field) is constant we compute the
first two coefficients of the non-perturbative asymptotic expansion of the heat
kernel which are of zero and the first order in Riemannian curvature and of
arbitrary order in the electromagnetic field. We apply these results to the
study of the effective action in non-perturbative electrodynamics in four
dimensions and derive a generalization of Schwinger's result for the creation
of scalar and spinor particles in an electromagnetic field induced by the
gravitational field. We discover a new infrared divergence in the imaginary
part of the effective action due to the gravitational corrections, which seems
to be a new physical effect.
|
We study the nature of the phase transition of lattice gauge theories at high
temperature and high density by focusing on the probability distribution
function, which represents the probability that a certain density will be
realized in a heat bath. The probability distribution function is obtained by
creating a canonical partition function fixing the number of particles from the
grand partition function. However, if the Z_3 center symmetry, which is
important for understanding the finite temperature phase transition of SU(3)
lattice gauge theory, is maintained on a finite lattice, the probability
distribution function is always zero, except when the number of particles is a
multiple of 3. For U(1) lattice gauge theory, this problem is more serious. The
probability distribution becomes zero when the particle number is nonzero. This
problem is essentially the same as the problem that the expectation value of
the Polyakov loop is always zero when calculating with finite volume. In this
study, we propose a solution to this problem. We also propose a method to avoid
the sign problem, which is an important problem at finite density, using the
center symmetry. In the case of U(1) lattice gauge theory with heavy fermions,
numerical simulations are actually performed, and we demonstrate that the
probability distribution function at a finite density can be calculated by the
method proposed in this study. Furthermore, the application of this method to
QCD is discussed.
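For reference, the projection used to fix the particle number is the standard Fourier (fugacity) projection of the grand partition function evaluated at imaginary chemical potential; in a sketch whose normalization may differ from the paper's conventions,
\[
Z_C(n,T,V)=\frac{1}{2\pi}\int_0^{2\pi} d\theta\, e^{-in\theta}\, Z_{GC}(T,V,\mu=i\theta T),
\qquad
P(n)=\frac{e^{n\mu/T}\,Z_C(n,T,V)}{Z_{GC}(T,V,\mu)},
\]
which makes explicit why an exact Z_3 (or U(1)) center symmetry forces $Z_C(n)$, and hence $P(n)$, to vanish unless $n$ is a multiple of 3 (or zero).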
|
Image harmonization has been significantly advanced by large-scale
harmonization datasets. However, the current way to build such datasets is
still labor-intensive, which limits their extendability. To address this
problem, we propose to construct a rendered harmonization dataset with less
human effort to augment the existing real-world dataset. To
leverage both real-world images and rendered images, we propose a cross-domain
harmonization network to bridge the domain gap between two domains. Moreover,
we also employ well-designed style classifiers and losses to facilitate
cross-domain knowledge transfer. Extensive experiments demonstrate the
potential of using rendered images for image harmonization and the
effectiveness of our proposed network.
|
Accurate depth maps are essential in various applications, such as autonomous
driving, scene reconstruction, point-cloud creation, etc. However,
monocular-depth estimation (MDE) algorithms often fail to provide sufficient
texture and sharpness, and are also inconsistent for homogeneous scenes. These
algorithms mostly use CNN or vision transformer-based architectures requiring
large datasets for supervised training. However, MDE algorithms trained on
available depth datasets do not generalize well and hence fail to perform
accurately in diverse real-world scenes. Moreover, the ground-truth depth maps
are either low resolution or sparse, leading to relatively inconsistent depth
maps. In general, acquiring a high-resolution ground-truth dataset with
pixel-level precision for accurate depth prediction is an expensive and
time-consuming challenge.
In this paper, we generate a high-resolution synthetic depth dataset (HRSD)
of dimension 1920 x 1080 from Grand Theft Auto (GTA-V), which contains 100,000
color images and corresponding dense ground-truth depth maps. The generated
datasets are diverse, with scenes ranging from indoor to outdoor and from
homogeneous surfaces to textured ones. For experiments and analysis, we train
the DPT algorithm, a state-of-the-art transformer-based MDE algorithm, on the
proposed synthetic dataset, which significantly increases the accuracy of depth
maps on different scenes by 9%. Since the synthetic datasets are of higher
resolution, we propose adding a feature extraction module in the transformer
encoder and incorporating an attention-based loss, further improving the
accuracy by 15%.
|
Hidden convex optimization is a class of nonconvex optimization problems that
can be globally solved in polynomial time via equivalent convex programming
reformulations. In this paper, we focus on checking local optimality in hidden
convex optimization. We first introduce a class of hidden convex optimization
problems by joining the classical nonconvex trust-region subproblem (TRS) with
convex optimization (CO), and then present a
comprehensive study on local optimality conditions. In order to guarantee the
existence of a necessary and sufficient condition for local optimality, we need
more restrictive assumptions. To our surprise, while (TRS) has at most one
local non-global minimizer and (CO) has no local non-global minimizer, their
joint problem could have more than one local non-global minimizer.
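For reference, the classical nonconvex trust-region subproblem mentioned above, stated here in its standard form (the exact joint formulation with (CO) used in the paper may differ), is
\[
\mathrm{(TRS)}:\quad \min_{x\in\mathbb{R}^n}\ \tfrac{1}{2}\,x^{T}Ax+b^{T}x \quad\text{subject to}\quad \|x\|^{2}\le\delta,
\]
with $A$ symmetric and possibly indefinite; despite its nonconvexity it can be solved globally in polynomial time, which makes it the prototypical example of hidden convexity.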
|
We match continuum and lattice heavy-light four-fermion operators at one loop
in perturbation theory. For the heavy quarks we use nonrelativistic QCD and for
the massless light quarks the highly improved staggered quark action. We
include the full set of $\Delta B=2$ operators relevant to neutral $B$ mixing
both within and beyond the Standard Model and match through order $\alpha_s$,
$\Lambda_{\mathrm{QCD}}/M_b$, and $\alpha_s/(aM_b)$.
|
Prompting inputs with natural language task descriptions has emerged as a
popular mechanism to elicit reasonably accurate outputs from large-scale
generative language models with little to no in-context supervision. This also
helps gain insight into how well language models capture the semantics of a
wide range of downstream tasks purely from self-supervised pre-training on
massive corpora of unlabeled text. Such models have naturally also been exposed
to a lot of undesirable content, like racist and sexist language, and there is
limited work on models' awareness along these dimensions. In this paper, we
define and comprehensively evaluate how well such language models capture the
semantics of four tasks for bias: diagnosis, identification, extraction and
rephrasing. We define three broad classes of task descriptions for these tasks:
statement, question, and completion, with numerous lexical variants within each
class. We study the efficacy of prompting for each task using these classes and
the null task description across several decoding methods and few-shot
examples. Our analyses indicate that language models are capable of performing
these tasks to widely varying degrees across different bias dimensions, such as
gender and political affiliation. We believe our work is an important step
towards unbiased language models by quantifying the limits of current
self-supervision objectives at accomplishing such sociologically challenging
tasks.
|
This paper examines the interplay between desegregation, institutional bias,
and individual behavior in education. Using a game-theoretic model that
considers race-heterogeneous social incentives, the study investigates the
effects of between-school desegregation on within-school disparities in
coursework. The analysis incorporates a segregation measure based on entropy
and proposes an optimization-based approach to evaluate the impact of student
reassignment policies. The results highlight that Black and Hispanic students
in predominantly White schools, despite receiving less encouragement to apply
to college, exhibit higher enrollment in college-prep coursework due to
stronger social incentives from their classmates' coursework decisions.
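As a concrete example of an entropy-based segregation measure, Theil's information index is the standard choice (the measure used in the paper may differ in details): with district-wide group proportions $\pi_m$, school enrollments $t_j$, total enrollment $T$, and school-level entropies $E_j$,
\[
E=-\sum_m \pi_m\ln\pi_m,\qquad H=\sum_j \frac{t_j}{T}\,\frac{E-E_j}{E},
\]
so that $H=0$ when every school mirrors the district composition and $H=1$ under complete segregation.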
|
We provide an optimal sufficient condition, relating minimum degree and
bandwidth, for a graph to contain a spanning subdivision of the complete
bipartite graph $K_{2,\ell}$. This includes the containment of Hamilton paths
and cycles, and has applications in the random geometric graph model. Our proof
provides a greedy algorithm for constructing such structures.
|
Let $\Omega \subset \mathbb R^3$ be a broken sheared waveguide, i.e., it is
built by translating a cross-section in a constant direction along a broken
line in $\mathbb R^3$. We prove that the discrete spectrum of the Dirichlet
Laplacian operator in $\Omega$ is non-empty and finite. Furthermore, we exhibit
a particular geometry for $\Omega$ which implies that the total multiplicity of
the discrete spectrum equals 1.
|
We seek to introduce a mathematical method to derive the relativistic wave
equations for a two-particle system. According to this method, if we define
stationary wave functions as special solutions like
$\Psi(\mathbf{r}_1,\mathbf{r}_2,t)=\psi(\mathbf{r}_1,\mathbf{r}_2)e^{-iEt/\hbar},\,
\psi(\mathbf{r}_1,\mathbf{r}_2)\in\mathscr{S}
(\mathbb{R}^3\times\mathbb{R}^3)$, and properly define the relativistic reduced
mass $\mu_0$, then some new relativistic two-body wave equations can be
derived. On this basis, we obtain the two-body Sommerfeld fine-structure
formula for relativistic atomic two-body bound states such as pionium and
pionic hydrogen, which we use to discuss the pair production and annihilation
of $\pi^+$ and $\pi^-$.
|
We provide an axiomatic characterization of the Logarithmic Least Squares
Method (sometimes called row geometric mean), used for deriving a preference
vector from a pairwise comparison matrix. This procedure is shown to be the
only one satisfying two properties, correctness in the consistent case, which
requires the reproduction of the inducing vector for any consistent matrix, and
invariance to a specific transformation on a triad, that is, the weight vector
is not influenced by an arbitrary multiplication of matrix elements along a
3-cycle by a positive scalar.
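As a minimal computational sketch of the characterized procedure (the row geometric mean; the function and variable names below are illustrative, not taken from the paper):

    import numpy as np

    def row_geometric_mean(A):
        # Logarithmic Least Squares weight vector: the geometric mean of each
        # row of the pairwise comparison matrix, normalized to sum to one.
        A = np.asarray(A, dtype=float)
        w = np.exp(np.log(A).mean(axis=1))  # i-th entry: (prod_j a_ij)^(1/n)
        return w / w.sum()

    # A consistent 3x3 matrix induced by the vector (1, 2, 4)
    A = np.array([[1.0, 0.5, 0.25],
                  [2.0, 1.0, 0.5],
                  [4.0, 2.0, 1.0]])
    print(row_geometric_mean(A))  # approximately (1/7, 2/7, 4/7)

For a consistent matrix with entries $a_{ij}=w_i/w_j$ this reproduces the inducing vector up to normalization, which is exactly the correctness property required above.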
|
We present a spectroscopic catalogue of 40 luminous starburst galaxies at
z=0.7--1.7 (median z=1.3). 19 of these are submillimetre galaxies (SMGs) and 21
are submillimetre-faint radio galaxies (SFRGs). This sample helps to fill in
the redshift desert at z=1.2--1.7 in previous studies as well as probing a
lower luminosity population of galaxies. Radio fluxes are used to determine
star-formation rates for our sample which range from around 50 to 500 M$_\odot$
yr$^{-1}$ and are generally lower than those in z$\sim$2 SMGs. We identify
nebular [OII] 3727 emission in the rest-UV spectra and use the linewidths to
show that SMGs and SFRGs in our sample have larger linewidths and therefore
dynamical masses than optically selected star-forming galaxies at similar
redshifts. The linewidths are indistinguishable from those measured in the
z$\sim$2 SMG populations suggesting little evolution in the dynamical masses of
the galaxies between redshift 1--2. [NeV] and [NeIII] emission lines are
identified in a subset of the spectra indicating the presence of an active
galactic nucleus (AGN). In addition, a host of interstellar absorption lines
corresponding to transitions of MgII and FeII ions are also detected. These
features show up prominently in composite spectra and we use these composites
to demonstrate that the absorption lines are present at an average blueshift of
$-240\pm50$ km s$^{-1}$ relative to the systemic velocities of the galaxies
derived from [OII]. This indicates the presence of large-scale outflowing
interstellar gas in these systems (Abridged)
|
Let $K\Delta$ be the incidence algebra associated with a finite poset
$(\Delta,\preceq)$ over the algebraically closed field $K$. We present a study
of incidence algebras $K\Delta$ that are piecewise hereditary, which we
denominate PHI algebras. We investigate the strong global dimension, the simple
connectedness, and the one-point extension algebras of PHI algebras.
We also give a positive answer to the so-called Skowro\'nski problem for
$K\Delta$ a PHI algebra which is not of wild quiver type; that is, for this
kind of algebra we show that $HH^1(K\Delta)$ is trivial if, and only if,
$K\Delta$ is a simply connected algebra. We determine an upper bound for the
strong global dimension of PHI algebras; furthermore, we extend this result to
sincere algebras, proving that the strong global dimension of a sincere
piecewise hereditary algebra is less than or equal to three.
|
We describe a recently developed algebraic framework for proving first-order
statements about linear operators by computations with noncommutative
polynomials. Furthermore, we present our new SageMath package operator_gb,
which offers functionality for automatising such computations. We aim to
provide a practical understanding of our approach and the software through
examples, while also explaining the completeness of the method in the sense
that it allows one to find algebraic proofs for every true first-order operator
statement. We illustrate the capability of the framework in combination with
our software by a case study on statements about the Moore-Penrose inverse,
including classical facts and recent results, presented in an online notebook.
|
Hybrid organic-inorganic halide perovskites have shown remarkable
optoelectronic properties (1-3), believed to originate from correlated motion
of charge carriers and the polar lattice forming large polarons (4-7). Few
experimental techniques are capable of probing these correlations directly,
requiring simultaneous sub-meV energy and femtosecond temporal resolution after
absorption of a photon (8). Here we use transient multi-THz spectroscopy,
sensitive to the internal motions of charges within the polaron, to temporally
and energetically resolve the coherent coupling of charges to longitudinal
optical phonons in single crystal CH3NH3PbI3 (MAPI). We observe room
temperature quantum beats arising from the coherent displacement of charge from
the coupled phonon cloud. Our measurements provide unambiguous evidence of the
existence of polarons in MAPI.
|
Magnetic fields in galaxy halos are in general very difficult to observe.
Most recently, the CHANG-ES collaboration (Continuum HAlos in Nearby Galaxies -
an EVLA Survey) investigated in detail the radio halos of 35 nearby edge-on
spiral galaxies and detected large scale magnetic fields in 16 of them. We used
the CHANG-ES radio polarization data to create Rotation Measure (RM) maps for
all galaxies in the sample and stack them with the aim of amplifying any
underlying universal toroidal magnetic field pattern in the halo above and
below the disk of the galaxy. We discovered a large-scale magnetic field in the
central region of the stacked galaxy profile, attributable to an axial electric
current that universally outflows from the center both above and below the
plane of the disk. A similar symmetry-breaking has also been observed in
astrophysical jets but never before in galaxy halos. This is an indication that
galaxy halo magnetic fields are probably not generated by pure ideal
magnetohydrodynamic (MHD) processes in the central regions of galaxies. One
such promising physical mechanism is the Cosmic Battery operating in the
innermost accretion disk around the central supermassive black hole. We
anticipate that our discovery will stimulate a more general discussion on the
origin of astrophysical magnetic fields.
|
We have tested complementarity for the ensemble-averaged spin states of the
$^{13}$C nuclei in the molecule $^{13}$CHCl$_{3}$, using the spin states of
the $^{1}$H nuclei as the path marker. It turns out that the
wave-particle duality holds when one merely measures the probability density of
quantum states, and that the wave- and particle-like behavior is simultaneously
observed with the help of measuring populations and coherence in a single
nuclear magnetic resonance (NMR) experiment. Effects of path-marking schemes
and the causes of the appearance and disappearance of the wave behavior are
analysed.
|
This review paper highlights research findings from the authors'
participation in the SUMMIT-P project, which studied how to build and sustain
multi-institutional interdisciplinary partnerships to design and implement
curricular change in mathematics courses in the first two years of college,
using the Curriculum Foundations Project (CFP) as a launchpad. The CFP
interviewed partner discipline faculty to learn about the mathematical needs of
their students and how they use mathematics in their courses. This paper
summarizes research findings from the CFP and the SUMMIT-P project, and
presents a detailed example of how these findings were implemented in the
calculus sequence at Augsburg University to improve course focus, increase the
relevance of course content, and provide opportunities for students to practice
transferring calculus to disciplinary contexts. This paper is based on
the talk "Applied and Active Calculus Built Through Interdisciplinary
Partnerships" presented at the 2022 AWM Research Symposium in the Session on
"Research on the First Two Years of College Mathematics".
|
We propose a protocol to encode classical bits in the measurement statistics
of many-body Pauli observables, leveraging quantum correlations for a random
access code. Measurement contexts built with these observables yield outcomes
with intrinsic redundancy, something we exploit by encoding the data into a set
of convenient context eigenstates. This allows the encoded data to be randomly
accessed with few resources. The eigenstates used are highly entangled and can be
generated by a discretely-parametrized quantum circuit of low depth.
Applications of this protocol include algorithms requiring large-data storage
with only partial retrieval, as is the case of decision trees. Using $n$-qubit
states, this Quantum Random Access Code has greater success probability than
its classical counterpart for $n\ge 14$ and than previous Quantum Random Access
Codes for $n \ge 16$. Furthermore, for $n\ge 18$, it can be amplified into a
nearly-lossless compression protocol with success probability $0.999$ and
compression ratio $O(n^2/2^n)$. The data it can store is equal to Google-Drive
server capacity for $n= 44$, and to a brute-force solution for chess (what to
do on any board configuration) for $n= 100$.
|
Recently, the peak structure of the sound velocity was observed in the
lattice simulation of two-color and two-flavor QCD at the finite quark chemical
potential. A comparison with the chiral perturbation theory (ChPT) result was
undertaken; however, ChPT failed to reproduce the peak structure. In this
study, to extend the ChPT framework, we incorporate contributions of the
$\sigma$ meson, which is identified as the chiral partner of the pions, on top
of the low-energy pion dynamics by using the linear sigma model (LSM). Based on
the LSM we derive analytic expressions of the thermodynamic quantities as well
as the sound velocity within a mean-field approximation. As a result, we find
that those quantities are given by the ChPT results plus corrections, where the
latter are characterized by the mass difference between the chiral
partners, the $\sigma$ meson and pion. The chiral partner contributions are
found to yield a peak in the sound velocity successfully. We furthermore show
that the sound velocity peak emerges only when $m_\sigma >\sqrt{3}m_\pi$ and
$\mu_q > m_\pi$, with $m_{\sigma(\pi)}$ and $\mu_q$ being the $\sigma$ meson
(pion) mass and the quark chemical potential, respectively. The correlation
between the sound velocity peak and the sign of the trace anomaly is also
addressed.
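For reference, the sound velocity discussed here is the thermodynamic one; at zero temperature it follows from the pressure $p$ and energy density $\varepsilon$ as
\[
c_s^2=\frac{dp}{d\varepsilon}=\frac{n_q}{\mu_q}\,\frac{d\mu_q}{dn_q},
\]
using $dp=n_q\,d\mu_q$ and $d\varepsilon=\mu_q\,dn_q$ at $T=0$, with $n_q$ the quark number density.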
|
Broad absorption line quasars (BALQSOs) are key objects for studying the
structure and emission/absorption properties of AGN. However, despite their
fundamental importance, the properties of BALQSOs are still not well
understood. In order to investigate the X-ray nature of these sources, as well
as the correlations between X-ray and rest-frame UV properties, we compile a
large sample of 88 BALQSOs observed by XMM-Newton. We performed a full X-ray
spectral analysis on a sample of 39 sources with higher X-ray spectral quality,
and an approximate HR analysis on the remaining sources. Using available
optical spectra, we calculate the BALnicity index and investigate the
dependence between this optical parameter and different X-ray properties.
Using the neutral absorption model, we found that 36% of our BALQSOs have NH
< 5x10^21 cm^-2, lower than the expected X-ray absorption for such objects.
However, when we used a physically-motivated model for the X-ray absorption in
BALQSOs, i.e. ionized absorption, $\sim$90% of the objects are absorbed. The
absorption properties also suggest that LoBALs may be physically different
objects from HiBALs. In addition, we report on a correlation between the
ionized absorption column density and BAL parameters. There is evidence (at the
98% level) that the amount of X-ray absorption is correlated with the strength of
high-ionization UV absorption. This correlation, not previously reported, can
be naturally understood in virtually all BALQSO models, as driven by the total
amount of gas mass flowing towards the observer.
|
Here, we show that electrostatic solitons in a plasma with turbulent heating
of the electrons through an accelerating electric field can form with very
high velocities, reaching up to several orders of magnitude larger than the
ion-sound speed. We call these solitons hypersonic solitons. The possible
parameter regime where this work may be relevant can be found in the so-called
``dead zones'' of a protoplanetary disk. These zones are stable to the
magnetorotational instability, but the resultant turbulence can in effect heat
the electrons and make them follow a highly non-Maxwellian velocity distribution.
We show that these hypersonic solitons can also reach very high velocities.
With the electron velocity distribution described by a Davydov distribution function,
we argue that these solitons can be an effective mechanism for energy
equilibration in such a situation through soliton decay and radiation.
|
In an ever more connected world, awareness has grown of the hazards and
vulnerabilities that the networking of sensitive digitized information poses
for all parties involved. This vulnerability rests on a number of factors, both
human and technical. From an ethical perspective, this means people seeking to
maximise their own gain and accomplish their goals by exploiting
information existing in cyber space at the expense of other individuals and
parties. One matter that is yet to be fully explored is the eventuality of not
only financial information and other sensitive material being globally
connected on the information highways, but also the people themselves as
physical beings. Humans are natural born cyborgs who have integrated technology
into their being throughout history. Issues of cyber security are extended to
cybernetic security, which not only has severe ethical implications for how we,
policy makers, academics, scientists, designers etc., define ethics in relation
to humanity and human rights, but also the security and safety of merged
organic and artificial systems and ecosystems.
|
The Feynman-Kac formulae (FKF) express local solutions of partial
differential equations (PDEs) as expectations with respect to some
complementary stochastic differential equation (SDE). Repeatedly sampling paths
from the complementary SDE enables the construction of Monte Carlo estimates of
local solutions, which are more naturally suited to statistical inference than
the numerical approximations obtained via finite difference and finite element
methods. Until recently, simulating from the complementary SDE would have
required the use of a discrete-time approximation, leading to biased estimates.
In this paper we utilize recent developments in two areas to demonstrate that
it is now possible to obtain unbiased solutions for a wide range of PDE models
via the FKF. The first is the development of algorithms that simulate diffusion
paths exactly (without discretization error), and so make it possible to obtain
Monte Carlo estimates of the FKF directly. The second is the development of
debiasing methods for SDEs, enabling the construction of unbiased estimates
from a sequence of biased estimates.
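As a toy illustration of such Monte Carlo estimates (not the exact-simulation or debiasing algorithms developed in the paper; the function name below is illustrative), consider the heat equation $\partial_t u=\tfrac12\partial_{xx}u$ with initial data $u(x,0)=f(x)$, whose Feynman-Kac representation is $u(x,t)=\mathbb{E}[f(x+W_t)]$; since $W_t$ is Gaussian with variance $t$, this particular estimator has no discretization bias:

    import numpy as np

    def heat_fk_estimate(f, x, t, n_samples=100_000, seed=None):
        # Monte Carlo estimate of u(x, t) for du/dt = 0.5 * u_xx, u(x, 0) = f(x),
        # via the Feynman-Kac representation u(x, t) = E[f(x + W_t)].
        # W_t ~ N(0, t) is sampled exactly, so there is no time-discretization error.
        rng = np.random.default_rng(seed)
        samples = f(x + rng.normal(0.0, np.sqrt(t), size=n_samples))
        return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

    # Check against the exact solution exp(-t/2) * sin(x) for f = sin
    estimate, std_err = heat_fk_estimate(np.sin, x=0.3, t=1.0, seed=0)
    print(estimate, np.exp(-0.5) * np.sin(0.3))

For PDEs whose complementary SDE cannot be sampled in closed form, the exact-simulation and debiasing techniques referred to above take the place of the simple Gaussian draw used in this sketch.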
|
Structured light is routinely used in free space optical communication
channels, both classical and quantum, where information is encoded in the
spatial structure of the mode for increased bandwidth. Unlike polarisation, the
spatial structure of light is perturbed through such channels by atmospheric
turbulence, and consequently, much attention has focused on whether one mode
type is more robust than another, but with seemingly inconclusive and
contradictory results. Both real-world and experimentally simulated turbulence
conditions have revealed that free-space structured light modes are perturbed
in some manner by turbulence, resulting in both amplitude and phase
distortions. Here, we present complex forms of structured light which are
invariant under propagation through the atmosphere: the true eigenmodes of
atmospheric turbulence. We provide a theoretical procedure for obtaining these
eigenmodes and confirm their invariance both numerically and experimentally.
Although we have demonstrated the approach on atmospheric turbulence, its
generality allows it to be extended to other channels too, such as underwater
and in optical fibre.
|
We have developed an end-to-end retrosynthesis system, named ChemiRise, that
can propose complete retrosynthesis routes for organic compounds rapidly and
reliably. The system was trained on a processed patent database of over 3
million organic reactions. Experimental reactions were atom-mapped, clustered,
and extracted into reaction templates. We then trained a graph convolutional
neural network-based one-step reaction proposer using template embeddings and
developed a guiding algorithm on the directed acyclic graph (DAG) of chemical
compounds to find the best candidate to explore. The atom-mapping algorithm and
the one-step reaction proposer were benchmarked against previous studies and
showed better results. The final product was demonstrated by retrosynthesis
routes reviewed and rated by human experts, showing satisfying functionality
and a potential productivity boost in real-life use cases.
|
Higher-order modes up to LP$_{33}$ are controllably excited in water-filled
kagom\'{e}- and bandgap-style hollow-core photonic crystal fibers (HC-PCF). A
spatial light modulator is used to create amplitude and phase distributions
that closely match those of the fiber modes, resulting in typical launch
efficiencies of 10-20% into the liquid-filled core. Modes, excited across the
visible wavelength range, closely resemble those observed in air-filled
kagom\'{e} HC-PCF and match numerical simulations. Mode indices are obtained by
launching plane-waves at specific angles onto the fiber input-face and
comparing the resulting intensity pattern to that of a particular mode. These
results provide a framework for spatially-resolved sensing in HC-PCF
microreactors and fiber-based optical manipulation.
|
The combination of high spatial and spectral resolution in optical astronomy
enables new observational approaches to many open problems in stellar and
circumstellar astrophysics. However, constructing a high-resolution
spectrograph for an interferometer is a costly and time-intensive undertaking.
Our aim is to show that, by coupling existing high-resolution spectrographs to
existing interferometers, one could observe in the domain of high spectral and
spatial resolution, and avoid the construction of a new complex and expensive
instrument. We investigate in this article the different challenges which arise
from combining an interferometer with a high-resolution spectrograph. The
requirements for the different sub-systems are determined, with special
attention given to the problems of fringe tracking and dispersion. A concept
study for the combination of the VLTI (Very Large Telescope Interferometer)
with UVES (UV-Visual Echelle Spectrograph) is carried out, and several other
specific instrument pairings are discussed. We show that the proposed
combination of an interferometer with a high-resolution spectrograph is indeed
feasible with current technology, for a fraction of the cost of building a
whole new spectrograph. The impact on the existing instruments and their
ongoing programs would be minimal.
|
A new framework for exploiting information about the renormalization group
(RG) behavior of gravity in a dynamical context is discussed. The
Einstein-Hilbert action is RG-improved by replacing Newton's constant and the
cosmological constant by scalar functions in the corresponding Lagrangian
density. The position dependence of $G$ and $\Lambda$ is governed by a RG
equation together with an appropriate identification of RG scales with points
in spacetime. The dynamics of the fields $G$ and $\Lambda$ does not admit a
Lagrangian description in general. Within the Lagrangian formalism for the
gravitational field they have the status of externally prescribed
``background'' fields. The metric satisfies an effective Einstein equation
similar to that of Brans-Dicke theory. Its consistency imposes severe
constraints on allowed backgrounds. In the new RG-framework, $G$ and $\Lambda$
carry energy and momentum. It is tested in the setting of homogeneous-isotropic
cosmology and is compared to alternative approaches where the fields $G$ and
$\Lambda$ do not carry gravitating 4-momentum. The fixed point regime of the
underlying RG flow is studied in detail.
|
Manifold learning is a hot research topic in the field of computer science. A
crucial issue with current manifold learning methods is that they lack a
natural quantitative measure to assess the quality of learned embeddings, which
greatly limits their applications to real-world problems. In this paper, a new
embedding quality assessment method for manifold learning, named Normalization
Independent Embedding Quality Assessment (NIEQA), is proposed.
Compared with current assessment methods which are limited to isometric
embeddings, the NIEQA method has a much larger application range due to two
features. First, it is based on a new measure which can effectively evaluate
how well local neighborhood geometry is preserved under normalization, hence it
can be applied to both isometric and normalized embeddings. Second, it can
provide both local and global evaluations to output an overall assessment.
Therefore, NIEQA can serve as a natural tool in model selection and evaluation
tasks for manifold learning. Experimental results on benchmark data sets
validate the effectiveness of the proposed method.
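The following is not the NIEQA measure itself, but a generic sketch of the kind of local quality score such assessments build on: the average overlap between the $k$-nearest-neighbour sets computed in the original data and in the embedding (function and variable names are illustrative):

    import numpy as np

    def knn_sets(X, k):
        # Indices of the k nearest neighbours of each point (excluding itself).
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :k]

    def neighborhood_preservation(X_high, X_low, k=10):
        # Fraction of k-nearest neighbours in the original space that remain
        # k-nearest neighbours in the embedding, averaged over all points.
        nn_high, nn_low = knn_sets(X_high, k), knn_sets(X_low, k)
        overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_high, nn_low)]
        return float(np.mean(overlaps))

Scores close to 1 indicate that local neighbourhood membership is well preserved; NIEQA's measure goes further by quantifying how well local neighbourhood geometry, not just membership, is preserved under normalization.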
|
Current unsupervised anomaly detection approaches perform well on public
datasets but struggle with specific anomaly types due to the domain gap between
pre-trained feature extractors and target-specific domains. To tackle this
issue, this paper presents a two-stage training strategy, called
\textbf{ToCoAD}. In the first stage, a discriminative network is trained by
using synthetic anomalies in a self-supervised learning manner. This network is
then utilized in the second stage to provide a negative feature guide, aiding
in the training of the feature extractor through bootstrap contrastive
learning. This approach enables the model to progressively learn the
distribution of anomalies specific to industrial datasets, effectively
enhancing its generalizability to various types of anomalies. Extensive
experiments are conducted to demonstrate the effectiveness of our proposed
two-stage training strategy, and our model produces competitive performance,
achieving pixel-level AUROC scores of 98.21\%, 98.43\% and 97.70\% on MVTec AD,
VisA and BTAD respectively.
|
In this paper, I respond to a critique of one of my papers previously
published in this journal, entitled `Dr. Bertlmann's socks in a quaternionic
world of ambidextral reality.' The geometrical framework presented in my paper
is based on a quaternionic 3-sphere, or S^3, taken as a model of the physical
space in which we are inescapably confined to perform all our experiments. The
framework intrinsically circumvents Bell's theorem by reproducing the singlet
correlations local-realistically, without resorting to backward causation,
superdeterminism, or any other conspiracy loophole. In this response, I
demonstrate point by point that, contrary to its claims, the critique has not
found any mistakes in my paper, either in the analytical model of the singlet
correlations or in its event-by-event numerical simulation based on Geometric
Algebra.
|
High-throughput sequencing technology provides unprecedented opportunities to
quantitatively explore human gut microbiome and its relation to diseases.
Microbiome data are compositional, sparse, noisy, and heterogeneous, which pose
serious challenges for statistical modeling. We propose an identifiable
Bayesian multinomial matrix factorization model to infer overlapping clusters
on both microbes and hosts. The proposed method represents the observed
over-dispersed zero-inflated count matrix as Dirichlet-multinomial mixtures on
which latent cluster structures are built hierarchically. Under the Bayesian
framework, the number of clusters is automatically determined and available
information from a taxonomic rank tree of microbes is naturally incorporated,
which greatly improves the interpretability of our findings. We demonstrate the
utility of the proposed approach by comparing to alternative methods in
simulations. An application to a human gut microbiome dataset involving
patients with inflammatory bowel disease reveals interesting clusters, which
contain bacteria families Bacteroidaceae, Bifidobacteriaceae,
Enterobacteriaceae, Fusobacteriaceae, Lachnospiraceae, Ruminococcaceae,
Pasteurellaceae, and Porphyromonadaceae that are known to be related to the
inflammatory bowel disease and its subtypes according to biological literature.
Our findings can help generate potential hypotheses for future investigation of
the heterogeneity of the human gut microbiome.
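To make the observation model concrete, a generative sketch (illustrative only, not the full hierarchical model with latent cluster structures) draws one over-dispersed, compositional count vector from a Dirichlet-multinomial:

    import numpy as np

    def sample_dirichlet_multinomial(alpha, total_count, seed=None):
        # Proportions p ~ Dirichlet(alpha), then counts ~ Multinomial(total_count, p).
        # Small concentration parameters alpha yield sparse, zero-inflated counts.
        rng = np.random.default_rng(seed)
        p = rng.dirichlet(alpha)
        return rng.multinomial(total_count, p)

    # Example: 5 taxa with small concentrations -> over-dispersed, mostly-zero counts
    counts = sample_dirichlet_multinomial(np.full(5, 0.3), total_count=1000, seed=1)
    print(counts)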
|
We point out that the relative Heisenberg uncertainty relations vanish for
non-compact spaces in homogeneous loop quantum cosmology. As a consequence, for
sharply peaked states quantum fluctuations in the scale factor never become
important, even near the bounce point. This shows why quantum back-reaction
effects remain negligible and explains the surprising accuracy of the effective
equations in describing the dynamics of sharply peaked wave packets. This also
underlines the fact that minisuperspace models, where it is global variables
that are quantized, do not capture the local quantum fluctuations of the
geometry.
|
We derive for Bohmian mechanics topological factors for quantum systems with
a multiply-connected configuration space Q. These include nonabelian factors
corresponding to what we call holonomy-twisted representations of the
fundamental group of Q. We employ wave functions on the universal covering
space of Q. As a byproduct of our analysis, we obtain an explanation, within
the framework of Bohmian mechanics, of the fact that the wave function of a
system of identical particles is either symmetric or anti-symmetric.
|