In AI-assisted decision-making, humans often passively review the AI's suggestion
and decide whether to accept or reject it as a whole. Under this paradigm,
humans rarely trigger analytical thinking and find it difficult to
communicate the nuances of conflicting opinions to the AI when disagreements
occur. To tackle this challenge, we propose Human-AI Deliberation, a novel
framework to promote human reflection and discussion on conflicting human-AI
opinions in decision-making. Based on theories in human deliberation, this
framework engages humans and AI in dimension-level opinion elicitation,
deliberative discussion, and decision updates. To empower AI with deliberative
capabilities, we designed Deliberative AI, which leverages large language
models (LLMs) as a bridge between humans and domain-specific models to enable
flexible conversational interactions and faithful information provision. An
exploratory evaluation on a graduate admissions task shows that Deliberative AI
outperforms conventional explainable AI (XAI) assistants in improving humans'
appropriate reliance and task performance. Based on a mixed-methods analysis of
participant behavior, perception, user experience, and open-ended feedback, we
draw implications for future AI-assisted decision tool design.
|
A magnetohydrodynamic model of a steady, transverse C-type shock in a dense
molecular cloud is presented. A complete gas-grain chemical network is taken
into account: the gas-phase chemistry, the adsorption of gas species on dust
grains, various desorption mechanisms, the grain surface chemistry, the ion
neutralization on dust grains, and the sputtering of grain mantles. The population
densities of energy levels of ions CI, CII and OI and molecules H$_2$, CO,
H$_2$O are computed in parallel with the dynamical and chemical rate equations.
The large velocity gradient approximation is used in the line radiative
transfer calculations. The simulations consist of two steps: (i) modelling of
the chemical and thermal evolution of a static molecular cloud and (ii) shock
simulations. A comparison is made with the results of publicly available models
of similar physical systems.
The focus of the paper is on the chemical processing of gas material and ice
mantles of dust grains by the shock. Sputtering of ice mantles takes place in
the shock region close to the temperature peak of the neutral gas. At high
shock speeds, molecules ejected from ice mantles are effectively destroyed in
hot gas, and their survival time is short, of the order of tens of years.
After the passage of a high-speed C-type shock, a zone of high abundance of atomic
hydrogen appears in the cooling postshock gas that triggers formation of
complex organic species such as methanol. It is shown that abundances of some
complex organic molecules (COMs) in the postshock region can be much higher
than in the preshock gas. These results are important for interpretation of
observations of COMs in protostellar outflows.
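As a minimal illustration of the gas-grain rate equations described above, the sketch below integrates a single adsorption/desorption pair for one species with an explicit Euler step. The function name, rate coefficients, and initial abundances are hypothetical; the full network couples many such terms per species, plus gas-phase reactions, surface chemistry, and sputtering.

```python
def evolve_gas_grain(n_gas, n_ice, k_ads, k_des, dt, steps):
    """Explicit-Euler integration of one adsorption/desorption pair.

    n_gas, n_ice: gas-phase and ice-mantle abundances of a single species;
    k_ads, k_des: adsorption and desorption rate coefficients (1/s).
    """
    for _ in range(steps):
        flux = k_ads * n_gas - k_des * n_ice  # net adsorption flux
        n_gas -= flux * dt
        n_ice += flux * dt
    return n_gas, n_ice
```

The scheme conserves the total abundance exactly and relaxes to the steady state n_ice/n_gas = k_ads/k_des; shock heating effectively switches the balance by boosting the desorption (sputtering) term.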
|
In this paper we prove the Tamagawa number conjecture of Bloch and Kato for
CM elliptic curves using a new explicit description of the specialization of
the elliptic polylogarithm. The Tamagawa number conjecture describes the
special values of the L-function of a CM elliptic curve in terms of the
regulator maps of the K-theory of the variety into Deligne and etale
cohomology. The regulator map to Deligne cohomology was computed by Deninger
with the help of the Eisenstein symbol. For the Tamagawa number conjecture one
needs an understanding of the $p$-adic regulator on the subspace of K-theory
defined by the Eisenstein symbol. This is accomplished by giving a new explicit
computation of the specialization of the elliptic polylogarithm sheaf. It turns
out that this sheaf is an inverse limit of $p^r$-torsion points of a certain
one-motive. The cohomology classes of the elliptic polylogarithm sheaf can then
be described by classes of sections of certain line bundles. These sections are
elliptic units and going carefully through the construction one finds an analog
of the elliptic Soul\'e elements. Finally Rubin's ``main conjecture'' of
Iwasawa theory is used to compare these elements with etale cohomology.
|
Bayesian optimization (BO), while proven highly effective for many black-box
function optimization tasks, requires practitioners to carefully select priors
that model their functions of interest well. Rather than specifying priors by hand,
researchers have investigated transfer learning based methods to automatically
learn the priors, e.g. multi-task BO (Swersky et al., 2013), few-shot BO
(Wistuba and Grabocka, 2021) and HyperBO (Wang et al., 2022). However, those
prior learning methods typically assume that the input domains are the same for
all tasks, weakening their ability to use observations on functions with
different domains or generalize the learned priors to BO on different search
spaces. In this work, we present HyperBO+: a pre-training approach for
hierarchical Gaussian processes that enables the same prior to work universally
for Bayesian optimization on functions with different domains. We propose a
two-step pre-training method and analyze its appealing asymptotic properties
and benefits to BO both theoretically and empirically. On real-world
hyperparameter tuning tasks that involve multiple search spaces, we demonstrate
that HyperBO+ is able to generalize to unseen search spaces and achieves lower
regrets than competitive baselines.
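A minimal sketch of the two-step pre-training idea, under stated assumptions: step 1 fits a per-task maximum-likelihood GP hyperparameter (here just a lengthscale over a grid), and step 2 fits a Gaussian hyper-prior over those per-task estimates. This is an illustrative toy, not the paper's hierarchical-GP algorithm; `gp_log_marginal` and `pretrain` are hypothetical names.

```python
import numpy as np

def gp_log_marginal(X, y, lengthscale, noise=1e-2):
    # Log marginal likelihood of a zero-mean GP with a squared-exponential kernel.
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / lengthscale**2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

def pretrain(tasks, grid):
    # Step 1: per-task maximum-likelihood lengthscale over a shared grid.
    fits = np.array([grid[int(np.argmax([gp_log_marginal(X, y, ls) for ls in grid]))]
                     for X, y in tasks])
    # Step 2: fit a Gaussian hyper-prior over the per-task estimates.
    return fits.mean(), fits.std() + 1e-6
```

Because the hyper-prior is over hyperparameters rather than function values on a fixed grid of inputs, the learned prior is not tied to any one search space, which is the sense in which such pre-training can transfer across domains.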
|
Moderators and automated methods enforce bans on malicious users who engage
in disruptive behavior. However, malicious users can easily create a new
account to evade such bans. Previous research has focused on other forms of
online deception, like the simultaneous operation of multiple accounts by the
same entities (sockpuppetry), impersonation of other individuals, and studying
the effects of de-platforming individuals and communities. Here we conduct the
first data-driven study of ban evasion, i.e., the act of circumventing bans on
an online platform, leading to temporally disjoint operation of accounts by the
same user.
We curate a novel dataset of 8,551 ban evasion pairs (parent, child)
identified on Wikipedia and contrast their behavior with benign users and
non-evading malicious users. We find that evasion child accounts demonstrate
similarities to their banned parent accounts along several behavioral
axes, from similarity in usernames and edited pages to similarity in content
added to the platform and its psycholinguistic attributes. We reveal key
behavioral attributes of accounts that are likely to evade bans. Based on the
insights from the analyses, we train logistic regression classifiers to detect
and predict ban evasion at three different points in the ban evasion lifecycle.
Results demonstrate the effectiveness of our methods in predicting future
evaders (AUC = 0.78), early detection of ban evasion (AUC = 0.85), and matching
child accounts with parent accounts (MRR = 0.97). Our work can aid moderators
by reducing their workload and identifying evasion pairs faster and more
efficiently than current manual and heuristic-based approaches. The dataset is
available at https://github.com/srijankr/ban_evasion.
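The detection step can be sketched with a plain logistic-regression classifier over pairwise similarity features. The two features and their distributions below are hypothetical stand-ins; the study's actual feature set (username, page, content, and psycholinguistic similarities) is much richer.

```python
import numpy as np

def train_logreg(X, y, lr=0.5, steps=3000):
    # Batch gradient descent on the logistic loss; bias via an appended ones column.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        z = np.clip(Xb @ w, -30.0, 30.0)
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0.0).astype(int)

# Hypothetical two-feature toy data: [username similarity, edited-page overlap].
rng = np.random.default_rng(1)
pos = rng.normal([0.8, 0.7], 0.1, size=(50, 2))  # evasion (parent, child) pairs
neg = rng.normal([0.2, 0.2], 0.1, size=(50, 2))  # unrelated account pairs
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)
w = train_logreg(X, y)
accuracy = (predict(w, X) == y).mean()
```

The same classifier shape can serve each lifecycle stage (predicting future evaders, early detection, parent-child matching) by swapping in the features available at that stage.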
|
The amount of data has exploded over the last ten years. Data is captured and
shared from personal devices, transactional operations, sensors, social media
and other sources. Firms should thus be able to explore these new opportunities
and seize them rapidly by developing the corresponding capabilities. In our
work, we focus on two emerging dynamic capabilities: absorptive capacity and
organizational agility. We propose a new theoretical framework, based on the
previous literature, linking the use of knowledge management systems and
organizational agility by highlighting the mediating role of absorptive
capacity. In addition, we carried out an empirical study based on a survey to
support and validate the proposed framework. The main findings of this study
are presented.
|
For a binary code $\Gamma$ of length $v$, a $v$-word $w$ is produced by a set of
codewords $\{w^1,...,w^r\} \subseteq \Gamma$ if for all $i=1,...,v$, we have
$w_i\in \{w_i^1, ..., w_i^r\}$. We call a code $r$-secure frameproof of size
$t$ if $|\Gamma|=t$ and, for any $v$-word produced by two sets $C_1$ and
$C_2$ of size at most $r$, the intersection of these sets is nonempty. A
$d$-biclique cover of size $v$ of a graph $G$ is a collection of $v$-complete
bipartite subgraphs of $G$ such that each edge of $G$ belongs to at least $d$
of these complete bipartite subgraphs. In this paper, we show that for $t\geq
2r$, an $r$-secure frameproof code of size $t$ and length $v$ exists if and
only if there exists a 1-biclique cover of size $v$ for the Kneser graph ${\rm
KG}(t,r)$ whose vertices are all $r$-subsets of a $t$-element set and two
$r$-subsets are adjacent if their intersection is empty. Then we investigate
some connections between the minimum size of $d$-biclique covers of Kneser
graphs and cover-free families, where an $(r,w; d)$ cover-free family is a
family of subsets of a finite set such that the intersection of any $r$ members
of the family contains at least $d$ elements that are not in the union of any
other $w$ members. Also, we present an upper bound for 1-biclique covering
number of Kneser graphs.
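The combinatorial objects above are easy to build explicitly for small parameters. The sketch below (helper names are ours) constructs the Kneser graph ${\rm KG}(t,r)$ and checks the $d$-biclique-cover condition; for ${\rm KG}(4,2)$ the graph is a perfect matching of 3 edges, covered by 3 single-edge bicliques.

```python
from itertools import combinations

def kneser_graph(t, r):
    # Vertices are the r-subsets of a t-element set; edges join disjoint subsets.
    verts = [frozenset(c) for c in combinations(range(t), r)]
    edges = [(i, j) for i in range(len(verts)) for j in range(i + 1, len(verts))
             if not (verts[i] & verts[j])]
    return verts, edges

def is_d_biclique_cover(verts, edges, bicliques, d=1):
    # Each edge must lie in at least d of the complete bipartite subgraphs (A, B).
    def covers(e, ab):
        a, b = ab
        u, v = verts[e[0]], verts[e[1]]
        return (u in a and v in b) or (u in b and v in a)
    return all(sum(covers(e, ab) for ab in bicliques) >= d for e in edges)
```

By the equivalence stated in the abstract, a valid 1-biclique cover of ${\rm KG}(t,r)$ of size $v$ certifies an $r$-secure frameproof code of size $t$ and length $v$.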
|
Axion-like particles (ALPs) may be abundantly produced in core-collapse (CC)
supernovae (SNe), hence the cumulative signal from all past SN events can
create a diffuse flux peaked at energies of about 25~MeV. We improve upon the
modeling of the ALPs flux by including a set of CC SN models with different
progenitor masses, as well as the effects of failed CC SNe -- which yield the
formation of black holes instead of explosions. Relying on the coupling
strength of ALPs to photons and the related Primakoff process, the diffuse SN
ALP flux is converted into gamma rays while traversing the magnetic field of
the Milky Way. The spatial morphology of this signal is expected to follow the
shape of the Galactic magnetic field lines. We make use of this via a
template-based analysis that utilizes 12 years of $Fermi$-LAT data in the
energy range from 50 MeV to 500 GeV. In our benchmark case of the realization
of astrophysical and cosmological parameters, we find an upper limit of
$g_{a\gamma} \lesssim 3.76\times10^{-11}\;\mathrm{GeV}^{-1}$ at 95$\%$
confidence level for $m_a \ll 10^{-11}$ eV, while we find that systematic
deviations from this benchmark scenario induce an uncertainty as large as about
a factor of two. Our result slightly improves the CAST bound, while still being
a factor of six (baseline scenario) weaker than the SN1987A gamma-ray burst
limit.
|
Subjecting an initially isotropic medium to either a magnetic field or a
coherent field can induce anisotropy in the medium and cause the
polarization of a probe field to rotate. The rotation of the probe
polarization due to the magnetic field alone can therefore be controlled
efficiently with a coherent control field. We demonstrate this enhancement of
the magneto-optical rotation (MOR) of linearly polarized light by doing detailed
calculations on a system with relevant transitions $j=0\leftrightarrow
j=1\leftrightarrow j=0$.
|
The existence of solitons -- stable, long-lived, and localized field
configurations -- is a generic prediction for ultralight dark matter. These
solitons, known by various names such as boson stars, axion stars, oscillons,
and Q-balls depending on the context, are typically treated as distinct
entities in the literature. This study aims to provide a unified perspective on
these solitonic objects for real or complex, scalar or vector dark matter,
considering self-interactions and nonminimal gravitational interactions. We
demonstrate that these solitons share universal nonrelativistic properties,
such as conserved charges, mass-radius relations, stability and profiles.
Without accounting for alternative interactions or relativistic effects,
distinguishing between real and complex scalar dark matter is challenging.
However, self-interactions differentiate real and complex vector dark matter
due to their different dependencies on the macroscopic spin density of dark
matter waves. Furthermore, gradient-dependent nonminimal gravitational
interactions impose an upper bound on soliton amplitudes, influencing their
mass distribution and phenomenology in the present-day universe.
|
Temporal grounding is crucial in multimodal learning, but it poses challenges
when applied to animal behavior data due to the sparsity and uniform
distribution of moments. To address these challenges, we propose a novel
Positional Recovery Training framework (Port), which prompts the model with the
start and end times of specific animal behaviors during training. Specifically,
Port enhances the baseline model with a Recovering part to predict flipped
label sequences and align distributions with a Dual-alignment method. This
allows the model to focus on specific temporal regions prompted by ground-truth
information. Extensive experiments on the Animal Kingdom dataset demonstrate
the effectiveness of Port, achieving a score of 38.52. It emerges as one of
the top performers in the sub-track of MMVRAC in ICME 2024 Grand Challenges.
|
We extend the validity of Brill's axisymmetric positive energy theorem to all
asymptotically flat initial data sets with positive scalar curvature on simply
connected manifolds.
|
We examine information theory using the steady-state Boltzmann equation. In a
nonequilibrium steady-state system under steady heat conduction, the
thermodynamic quantities from information theory are calculated and compared
with those from the steady-state Boltzmann equation. We have found that
information theory is inconsistent with the steady-state Boltzmann equation.
|
We present an analysis and comparison of the 30 micron dust features seen in
the Spitzer Space Telescope spectra of 207 carbon-rich asymptotic giant branch
(AGB) stars, post-AGB objects, and planetary nebulae located in the Milky Way,
the Magellanic Clouds (MCs), or the Sagittarius dwarf spheroidal galaxy (Sgr
dSph), which are characterised by different average metallicities. We
investigated whether the formation of the 30 micron feature carrier may be a
function of the metallicity. Through this study we expect to better understand
the late stages of stellar evolution of carbon-rich stars in these galaxies.
Our analysis uses the `Manchester method' as a basis for estimating the
temperature of dust for the carbon-rich AGB stars and the planetary nebulae in
our sample. We used a black-body function with a single temperature deduced
from the Manchester method or its modification to approximate the continuum
under the 30 micron feature. The most important conclusion of our work is the
fact that the formation of the 30 micron feature is affected by metallicity.
Specifically, as opposed to the more metal-poor samples of AGB stars in the
MCs, the feature in Galactic carbon stars is seen at lower mass-loss rates and
higher temperatures, and is seen to be more prominent. The averaged feature
(profile) in AGB stars, post-AGB objects, and PNe seems unaffected by metallicity
at least between a fifth and solar metallicity, but in the case of PNe it is
shifted to significantly longer wavelengths.
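The single-temperature black-body continuum used under the 30 micron feature is just the Planck function at the dust temperature inferred from the Manchester method. The sketch below (function names are ours) evaluates that continuum; for dust near 100 K, the Wien displacement law puts the peak of $B_\lambda$ close to 29 micron.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_lambda(wavelength_m, T):
    # Planck spectral radiance B_lambda(T) in W m^-3 sr^-1.
    x = H * C / (wavelength_m * KB * T)
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

def continuum_under_feature(T, wavelengths_um):
    # Single-temperature black-body continuum sampled across the 30 micron region.
    wl = np.asarray(wavelengths_um) * 1e-6
    return planck_lambda(wl, T)
```

Subtracting this continuum from the observed spectrum isolates the feature profile whose strength and central wavelength are then compared across metallicity samples.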
|
In the experiment that first demonstrated gyrotactic behaviour of
bottom-heavy swimming microalgae (e.g. Chlamydomonas), Kessler (Nature, vol.
313, 1985, pp. 218-220) showed that a beam-like structure, often referred to as
a gyrotactic plume, would spontaneously appear from a suspension of gyrotactic
swimmers in a downflowing pipe. Such a plume is prone to an instability to form
blips. This work models the gyrotactic plume as a steady parallel basic state
and its subsequent breakdown into blips as an instability, employing both the
Generalised Taylor Dispersion (GTD) theory and the Fokker-Planck model for
comparison. Upon solving for the basic state, it is discovered that the steady
plume solution undergoes sophisticated bifurcations. When there is no net flow,
there exists a non-trivial solution of the plume structure other than the
stationary uniform suspension, stemming from a transcritical bifurcation in
the average cell concentration. When a net downflow is prescribed, there exists
a cusp bifurcation. Furthermore, there is a critical concentration, at which
the cell concentration at the centre would blow up for the GTD model. The
subsequent stability analysis using the steady plume solution shows that the
Fokker-Planck model is inconsistent with what was experimentally observed, as
it predicts stabilisation of axisymmetric blips at high concentration of the
plume and destabilisation of the first non-axisymmetric mode at low flow rates.
|
Heavy meson decays with missing energy in the final state offer interesting
avenues to search for light invisible new physics (NP) such as dark matter (DM). In
this context, we show that such NP interactions also affect the lifetime
difference in neutral meson-antimeson mixing. We consider general dimension-six
effective quark interactions involving a pair of DM particles and calculate their
contributions to the lifetime difference in beauty and charm meson systems. We use
the latest data on mixing observables to constrain the relevant effective
operators. We find that lifetime differences provide novel and complementary
flavor constraints compared to those obtained from heavy meson decays.
|
We use algebraic geometry over pointed monoids to give an intrinsic
interpretation for the compactification of the spectrum of the ring of integers
of a number field $K$, for the projective line over algebraic extensions of
$\mathbb{F}_1$ and for maps between them induced by elements of $K$, as
introduced by Alexander Smirnov in his approach to the ABC conjecture.
|
Age and gender are complementary soft biometric traits for face recognition.
Successful estimation of age and gender from facial images taken under
real-world conditions can contribute to improving the identification results in
the wild. In this study, in order to achieve robust age and gender
classification in the wild, we have benefited from Deep Convolutional Neural
Networks based representation. We have explored transferability of existing
deep convolutional neural network (CNN) models for age and gender
classification. The generic AlexNet-like architecture and domain specific
VGG-Face CNN model are employed and fine-tuned with the Adience dataset
prepared for age and gender classification in uncontrolled environments. In
addition, task specific GilNet CNN model has also been utilized and used as a
baseline method in order to compare with transferred models. Experimental
results show that both transferred deep CNN models outperform the GilNet CNN
model, which is the state-of-the-art age and gender classification approach on
the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy,
respectively. This outcome indicates that transferring a deep CNN model can
provide better classification performance than a task-specific CNN model that
has a limited number of layers and is trained from scratch on a limited amount
of data, as in the case of GilNet. The domain-specific VGG-Face CNN model was
found to be more useful and provided better performance for both age and gender
classification tasks than the generic AlexNet-like model, which shows that
transferring from a closer domain is more useful.
|
A suitable extra differential on graph complexes can lead to a pairing of its
cohomological classes. Many such extra differentials are known for various
graph complexes, including Kontsevich's graph complex $GC_n$ for odd $n$. In
this paper we introduce another extra differential on the same graph complex,
leading to another way of pairing its cohomological classes. The two pairings
together lead to a further understanding of graph cohomology through a
"waterfall mechanism".
|
This paper presents final results of the Out-Of-Vocabulary 2022 (OOV)
challenge. The OOV contest introduces an important aspect that is not commonly
studied by Optical Character Recognition (OCR) models, namely, the recognition
of unseen scene text instances at training time. The competition compiles a
collection of public scene text datasets comprising 326,385 images with
4,864,405 scene text instances, thus covering a wide range of data
distributions. A new and independent validation and test set is formed with
scene text instances that are out of vocabulary at training time. The
competition was structured in two tasks: end-to-end and cropped scene text
recognition. A thorough analysis of results from baselines and
different participants is presented. Interestingly, current state-of-the-art
models show a significant performance gap under the newly studied setting. We
conclude that the OOV dataset proposed in this challenge will be an essential
area to be explored in order to develop scene text models that achieve more
robust and generalized predictions.
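Forming the out-of-vocabulary evaluation split reduces to a set-membership partition against the training vocabulary. A minimal sketch (function and variable names are ours; the actual challenge pipeline operates on annotated scene-text instances, not bare strings):

```python
def split_by_vocabulary(train_words, test_instances):
    # Partition test scene-text instances into in-vocabulary words (seen at
    # training time) and out-of-vocabulary words (never seen).
    vocab = {w.lower() for w in train_words}
    in_voc = [w for w in test_instances if w.lower() in vocab]
    oov = [w for w in test_instances if w.lower() not in vocab]
    return in_voc, oov
```

Scoring models separately on the two partitions is what exposes the performance gap between memorized and genuinely recognized text.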
|
The eigenvalue problem for the Sen--Witten operator on closed spacelike
hypersurfaces is investigated. The squares of its eigenvalues are shown to be
given exactly by the 3-surface integral appearing in the expression of the
total energy-momentum of the matter+gravity systems in Witten's energy
positivity proof. A sharp lower bound for the eigenvalues is derived in terms
of the constraint parts of the spacetime Einstein tensor, i.e. the energy and
momentum densities of the matter fields.
|
We investigate some aspects of the problem of the estimation of birth
distributions (BD) in multi-type Galton-Watson trees (MGW) with unobserved
types. More precisely, we consider two-type MGW called spinal-structured trees.
This kind of tree is characterized by a spine of special individuals whose BD
$\nu$ is different from that of the other individuals in the tree (called
normal, with BD denoted $\mu$). In this work, we show that even in such a very
structured
two-type population, our ability to distinguish the two types and estimate
$\mu$ and $\nu$ is constrained by a trade-off between the growth-rate of the
population and the similarity of $\mu$ and $\nu$. Indeed, if the growth-rate is
too large, large-deviation events are likely to be observed in the sampling of
the normal individuals, preventing us from distinguishing them from the special ones.
Roughly speaking, our approach succeeds if $r<\mathfrak{D}(\mu,\nu)$ where $r$
is the exponential growth-rate of the population and $\mathfrak{D}$ is a
divergence measuring the dissimilarity between $\mu$ and $\nu$.
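The condition $r<\mathfrak{D}(\mu,\nu)$ can be made concrete numerically. The sketch below computes the growth rate $r=\log m$ from the mean offspring number of $\mu$ and, as a stand-in for the unspecified divergence $\mathfrak{D}$, uses the Kullback-Leibler divergence; that choice is our assumption, made purely for illustration.

```python
import math

def kl_divergence(mu, nu):
    # D(mu || nu) for two birth distributions given as lists over {0, 1, 2, ...}.
    return sum(p * math.log(p / nu[k]) for k, p in enumerate(mu) if p > 0)

def identifiability_margin(mu, nu):
    # Positive margin means r < D(mu, nu): the growth rate is small enough,
    # relative to the dissimilarity of mu and nu, for the types to be separable.
    m = sum(k * p for k, p in enumerate(mu))  # mean offspring number of mu
    r = math.log(m)                           # exponential growth rate
    return kl_divergence(mu, nu) - r
```

The trade-off in the abstract reads off directly: making $\mu$ and $\nu$ more similar shrinks the divergence term, while a more prolific population inflates $r$, and either effect can flip the margin negative.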
|
Writing the Poisson equation for the pressure in the vorticity-strain form,
we show that the pressure has a finite inertial range spectrum for high
Reynolds number isotropic turbulence only if the anomalous scaling exponents
$\mu$ and $\mu_{\omega}$ for the dissipation and enstrophy (squared vorticity)
are equal. Since a finite inertial range pressure spectrum requires only very
weak assumptions about high Reynolds number turbulence, we conclude that the
inference from experiment and direct numerical simulation that these exponents
are different must be a finite range scaling result which will not survive
taking the high Reynolds number limit.
|
We expand our results in \cite{Astefanesei:2019ehu} to investigate a general
class of exact hairy black hole solutions in Einstein-Maxwell-dilaton gravity.
The dilaton is endowed with a potential that originates from an electromagnetic
Fayet-Iliopoulos term in $\mathcal{N} = 2$ extended supergravity in four
spacetime dimensions. We present the usual thermodynamics by using the
counterterm method supplemented with boundary terms for a scalar field with
mixed boundary conditions. We then extend our analysis by considering a
dynamical cosmological constant and verify the isoperimetric inequality. We
obtain a very rich phase diagram and criticality in both the canonical and
grand canonical ensembles. Within string theory, the cosmological constant is
related to the radius of the external sphere (of the compactification) and can
be interpreted as a modulus. In this context, the existence of a critical value
hints at the fact that the thermodynamic properties of black holes in lower
dimensions depend on the size of the compactification.
|
We study the Generalized Brans-Dicke cosmology in the presence of matter and
dark energy. Of particular interest, the de Sitter space has also been
investigated for a constant Brans-Dicke parameter.
|
We find several classes of exact classical solutions of critical bosonic
string theory, constructed as twisted products of one Euclidean and one
Minkowskian 2D black hole coset. One class of these solutions leads (after
tensoring with free scalars and supersymmetrizing) to a rotating version of the
recently discovered exact black fivebrane. Another class represents a
one-parameter family of axisymmetric stationary four-dimensional targets with
horizons. Global properties and target duality of the 4D solutions are briefly
analyzed.
|
The dynamical Hall conductivity $\sigma_H(\omega)$ of a 2D electron gas with
impurities in a perpendicular magnetic field is analyzed. Plateau-like
behavior is predicted at low as well as at high frequencies, provided the
Landau levels are completely filled. The broadening of a Landau
level separates two frequency regions with different behaviour. The imaginary
part of the dynamical Hall conductivity reveals oscillations in the
localized-states region. A comparison with experiment is carried out.
|
We construct special Lagrangian 3-spheres in non-K\"ahler compact threefolds
equipped with the Fu-Li-Yau geometry. These non-K\"ahler geometries emerge from
topological transitions of compact Calabi-Yau threefolds. From this point of
view, a conifold transition exchanges holomorphic 2-cycles for special
Lagrangian 3-cycles.
|
Within model A of Hohenberg's dynamical universality classification,
we investigate the critical slowing down effects on the critical fluctuations
driven by the expanding quark-gluon plasma, using a trajectory and cooling rate
obtained from hydrodynamics. We numerically solve the Langevin dynamics of the
non-conserved order parameter field and find that, compared with the commonly used
Hubble-like expansion, the cooling rate of a realistic hydrodynamic system is
pretty large and the associated critical slowing down effects strongly suppress
the higher-order cumulants of the order parameter field ({\it e.g.,} $C_4$).
Furthermore, for an evolving system that approaches the critical point, such
critical slowing down suppression overcomes the enhancement of the critical
fluctuations, which indicates that the largest fluctuations of the order
parameter field ({\it i.e.,} $C_2$) do not necessarily associate with the
evolving trajectory closest to the critical point.
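A minimal sketch of the model A (non-conserved order parameter) Langevin dynamics, reduced to a single mode with a quartic potential and an Euler-Maruyama update; the function and parameter names are ours, and the full calculation evolves a field along a hydrodynamic trajectory rather than at fixed couplings.

```python
import numpy as np

def model_a_cumulants(steps=20000, dt=0.01, gamma=1.0, m2=1.0, lam=1.0, T=1.0, seed=0):
    # Euler-Maruyama for one non-conserved order-parameter mode (model A):
    #   d(phi) = -gamma * (m2*phi + lam*phi**3) dt + sqrt(2*gamma*T*dt) * xi
    rng = np.random.default_rng(seed)
    phi, samples = 0.0, []
    for _ in range(steps):
        drift = -gamma * (m2 * phi + lam * phi**3)
        phi += drift * dt + np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
        samples.append(phi)
    s = np.array(samples[steps // 2:])             # discard the transient
    c2 = s.var()                                   # second cumulant
    c4 = np.mean((s - s.mean()) ** 4) - 3.0 * c2**2  # fourth cumulant
    return c2, c4
```

Critical slowing down enters through the relaxation rate $\gamma$ relative to the cooling rate: when the system cools faster than the field can relax, the sampled cumulants lag behind their equilibrium values, which is the suppression mechanism the abstract describes.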
|
In this work, we present a new and general method for measuring the
astrophysical S-factor of nuclear reactions in laser-induced plasmas and we
apply it to d(d,n)$^{3}$He. The experiment was performed with the Texas
Petawatt laser, which delivered 150-270 fs pulses of energy ranging from 90 to
180 J to D$_{2}$ or CD$_{4}$ molecular clusters. After removing the background
noise, we used the measured time-of-flight data of energetic deuterium ions to
obtain their energy distribution. We derive the S-factor using the measured
energy distribution of the ions, the measured volume of the fusion plasma and
the measured fusion yields. This method is model-independent in the sense that
no assumption on the state of the system is required, but it requires an
accurate measurement of the ion energy distribution especially at high energies
and of the relevant fusion yields. In the d(d,n)$^{3}$He and
$^{3}$He(d,p)$^{4}$He cases discussed here, it is very important to apply the
background subtraction for the energetic ions and to measure the fusion yields
with high precision. While the available data on both ion distribution and
fusion yields allow us to determine with good precision the S-factor in the d+d
case (lower Gamow energies), for the d+$^3$He case the data are not precise
enough to obtain the S-factor using this method. Our results agree with other
experiments within the experimental error, even though smaller values of the
S-factor were obtained. This might be due to the plasma environment differing
from the beam target conditions in a conventional accelerator experiment.
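The role of the S-factor can be illustrated with the standard low-energy parametrization $\sigma(E) = S(E)/E \cdot \exp(-\sqrt{E_G/E})$, where the Gamow energy $E_G = 2\mu c^2 (\pi\alpha Z_1 Z_2)^2$ carries the Coulomb-barrier physics. The sketch below evaluates this for d+d; the function names and the illustrative S-factor value passed in the test are ours, not fitted results from the experiment.

```python
import math

ALPHA = 1.0 / 137.035999      # fine-structure constant
MU_DD_KEV = 1875.612e3 / 2.0  # reduced mass energy of the d+d system, keV

def gamow_energy(z1, z2, mu_c2_kev):
    # E_G = 2 * mu*c^2 * (pi * alpha * Z1 * Z2)^2, in keV.
    return 2.0 * mu_c2_kev * (math.pi * ALPHA * z1 * z2) ** 2

def dd_cross_section(E_kev, S_kev_barn):
    # sigma(E) = S(E)/E * exp(-sqrt(E_G/E)): the S-factor isolates the nuclear
    # physics from the steeply energy-dependent Coulomb penetration factor.
    eg = gamow_energy(1, 1, MU_DD_KEV)
    return (S_kev_barn / E_kev) * math.exp(-math.sqrt(eg / E_kev))
```

Inverting this relation, measured yields and the ion energy distribution give $\sigma(E)$, from which $S(E)$ follows; the exponential penetration factor is why precise high-energy tails of the ion distribution matter so much.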
|
In this work, it is proved that a set of numbers that is closed under addition
and whose representations in a rational base numeration system form a rational
language is not a finitely generated additive monoid.
A key to the proof is the definition of a strong combinatorial property of
languages: the bounded left iteration property. It is both an unnatural
property in usual formal language theory (as it contradicts any kind of pumping
lemma) and an ideal fit for the languages defined through rational base number
systems.
|
A major challenge in the field of nanosciences is the assembly of anisotropic
nano objects into aligned structures. The way the objects are aligned
determines the physical properties of the final material. In this work, we take
a closer look at the shapes of orientation distributions of aligned anisotropic
nano and macro objects by examining previously published works. The data shows
that the orientation distribution shape of anisotropic objects aligned by
shearing and other commonly used methods varies size-independently between
Laplace and Gaussian depending on the distribution width and on the cohesivity
of the particles.
|
We have extended our chemical and cosmological galaxy evolution model to
calculate the abundance evolution for altogether 16 different elements in
spiral galaxies in a chemically consistent way, which is a considerable step
towards more realistic galaxy modeling. Our models reproduce well the observed
average HII region abundances in all spiral types. All observed element
abundances in DLA systems have been compiled. The conformity between observed
and calculated abundances over the redshift range from z=4.5 through z=0.4
indicates that DLA galaxies may well evolve into the full range of present-day
spiral galaxies from Sa through Sd.
Comparison of our chemically consistent models with models using only solar
metallicity input physics shows that differences in the redshift evolution are
small for some elements but large for others. For those elements with large
differences, the chemically consistent models provide significantly better
agreement with observed DLA abundances.
For typical spiral galaxies the star formation histories of our models
clearly bridge the gap between high redshift DLA systems and the nearby spiral
galaxy population. The slow redshift evolution of DLA abundances is understood
in terms of the long star formation timescales in galactic and proto-galactic
disks. Towards lower redshift z < 1.5 our models indicate that early type
spirals drop out of the DLA samples as their gas content falls below ~50%.
Implications for optical identification are discussed.
|
We apply the GiBUU model to questions relevant for current and future
neutrino long-baseline experiments; in particular, we address the relevance of
charged-current reactions for neutrino disappearance experiments. A correct
identification of charged-current quasielastic (CCQE) events - which is the
signal channel in oscillation experiments - is relevant for the neutrino energy
reconstruction and thus for the oscillation result. We show that about 20% of
the quasielastic cross section is misidentified in present-day experiments and
has to be corrected for by means of event generators. Furthermore, we show that
also a significant part of 1pi+ (> 40%) events is misidentified as CCQE mainly
caused by the pion absorption in the nucleus. We also discuss the dependence of
both of these numbers on experimental detection thresholds. We further
investigate the influence of final-state interactions on the neutrino energy
reconstruction.
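The energy-reconstruction step mentioned above typically relies on two-body quasielastic kinematics: the neutrino energy is inferred from the outgoing muon alone, assuming scattering off a neutron at rest. The sketch below implements this standard CCQE formula with assumed mass and binding-energy values; it is illustrative only and not taken from the GiBUU model itself.

```python
import math

# Standard CCQE two-body kinematics (illustrative; E_B is an assumed
# effective binding energy, all values in GeV).
M_N = 0.93957   # neutron mass
M_P = 0.93827   # proton mass
M_MU = 0.10566  # muon mass
E_B = 0.025     # assumed effective binding energy

def reconstructed_energy(e_mu, cos_theta):
    """Quasielastic neutrino-energy estimate from muon energy and angle."""
    p_mu = math.sqrt(e_mu**2 - M_MU**2)   # muon momentum
    m_eff = M_N - E_B                     # bound-neutron effective mass
    num = 2.0 * m_eff * e_mu - (m_eff**2 + M_MU**2 - M_P**2)
    den = 2.0 * (m_eff - e_mu + p_mu * cos_theta)
    return num / den

# A forward-going 1 GeV muon implies a neutrino of roughly similar energy:
print(round(reconstructed_energy(1.0, 0.95), 3))
```

If the event is actually a misidentified pion-production event, this formula systematically mis-reconstructs the energy, which is the effect quantified in the abstract.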
|
Effects of size and charge asymmetry between oppositely charged ions or
particles on spatial inhomogeneities are studied for a large range of charge
and size ratios. We perform a stability analysis of the primitive model (PM) of
ionic systems with respect to periodic ordering using the collective variables
based theory. We extend previous studies [A. Ciach et al., Phys. Rev.E
\textbf{75}, 051505 (2007)] in several ways. First, we employ a non-local
approximation for the reference hard-sphere fluid which leads to the
Percus-Yevick pair direct correlation functions for the uniform case. Second,
we use the Weeks-Chandler-Andersen regularization scheme for the Coulomb
potential inside the hard core. We determine the relevant order parameter
connected with the periodic ordering and analyze the character of the dominant
fluctuations along the $\lambda$-lines. We show that the above-mentioned
modifications produce large quantitative and partly qualitative changes in the
phase diagrams obtained previously. We discuss possible scenarios of the
periodic ordering for the whole range of size- and charge ratios of the two
ionic species, covering electrolytes, ionic liquids, charged globular proteins
or nanoparticles in aqueous solutions and charge-stabilized colloids.
|
We derive electromagnetomotive force fields for charged particles moving in a
rotating Hall sample, satisfying a twofold U(1) gauge invariance principle. It
is then argued that the phase coherence property of quantization of the line
integral of total collective particle momentum into multiples of Planck's
quantum of action is solely responsible for quantization in the Hall state. As
a consequence, the height of the Hall quantization steps should remain
invariant in a rapidly rotating Hall probe. Quantum Hall particle
conductivities do not depend on charge and mass of the electron, and are
quantized in units of the inverse of Planck's action quantum.
|
The high parton density regime of the Quantum Chromodynamics (QCD), where the
physics of parton saturation is expected to be dominant, is briefly discussed.
Some phenomenological aspects of saturation are described, mainly focusing on
possible signatures of the non-linear QCD dynamics in the heavy quark
production in electron-proton/nucleus collisions. Implications of these effects
in the heavy quark production in ultraperipheral heavy-ion collisions are also
presented.
|
We present model light curves, Power Spectral Densities (PSD) and time lags
of accreting Black Hole Candidates (BHC) based on a recent model of these
sources. According to our model the observed variability is due, to a large
extent, to the stochastic nature of Comptonization, the basic process
responsible also for the emission of high energy radiation. Our additional
assumption is that the Comptonization process takes place in an extended but
non-uniform atmosphere around the compact object. Our model reproduces the
observed light curves well, in that it provides a good fit to the PSDs and the
overall light curve morphology, indicating, in accordance with observation,
that most of the power resides at time scales $\gtrsim$ a few seconds while at
the same time one can distinguish shots of a few msec in duration. We suggest
that refinement of this type of model along with spectral and phase lag
information can be used to probe the structure of this class of high energy
sources.
|
This is the second of two companion papers on computing the self-force in a
radiation gauge; more precisely, the method uses a radiation gauge for the
radiative part of the metric perturbation, together with an arbitrarily chosen
gauge for the parts of the perturbation associated with changes in black-hole
mass and spin and with a shift in the center of mass. We compute the
conservative part of the self-force for a particle in circular orbit around a
Schwarzschild black hole. The gauge vector relating our radiation gauge to a
Lorenz gauge is helically symmetric, implying that the quantity h_{\alpha\beta}
u^\alpha u^\beta (= h_{uu}) must have the same value for our radiation gauge as
for a Lorenz gauge; and we confirm this numerically to one part in 10^{13}. As
outlined in the first paper, the perturbed metric is constructed from a Hertz
potential that is in turn obtained algebraically from the retarded
perturbed spin-2 Weyl scalar, \psi_0. We use a mode-sum renormalization and
find the renormalization coefficients by matching a series in L = \ell + 1/2 to
the large-L behavior of the expression for the self-force in terms of the
retarded field h_{\alpha\beta}^{ret}; we similarly find the leading
renormalization coefficients of h_{uu} and the related change in the angular
velocity of the particle due to its self-force. We show numerically that the
singular part of the self-force has the form f_{\alpha} \propto < \nabla_\alpha
\rho^{-1}>, the part of \nabla_\alpha \rho^{-1} that is axisymmetric about a
radial line through the particle. This differs only by a constant from its form
for a Lorenz gauge. It is because we do not use a radiation gauge to describe
the change in black-hole mass that the singular part of the self-force has no
singularity along a radial line through the particle and, at least in this
example, is spherically symmetric to subleading order in \rho.
|
The Moore-Read state, one of the leading candidates for describing the
fractional quantum Hall effect at filling factor $\nu{=}5/2$, is a paradigmatic
$p$-wave superconductor with non-Abelian topological order. Among its many
exotic properties, the state hosts two collective modes: a bosonic density wave
and a neutral fermion mode that arises from an unpaired electron in the
condensate. It has recently been proposed that the descriptions of the two
modes can be unified by postulating supersymmetry (SUSY) that relates them in
the long-wavelength limit. Here we extend the SUSY description to construct
wave functions of the two modes on closed surfaces, such as the sphere and
torus, and we test the resulting states in large-scale numerical simulations.
We demonstrate the equivalence in the long-wavelength limit between SUSY wave
functions and previous descriptions of collective modes based on the
Girvin-MacDonald-Platzman ansatz, Jack polynomials, and bipartite composite
fermions. Leveraging the first-quantized form of the SUSY wave functions, we
study their energies using the Monte Carlo method and show that realistic
$\nu{=}5/2$ systems are close to the putative SUSY point, where the two
collective modes become degenerate in energy.
|
The poset Ramsey number $R(Q_m,Q_n)$ is the smallest integer $N$ such that
any blue-red coloring of the elements of the Boolean lattice $Q_N$ has a blue
induced copy of $Q_m$ or a red induced copy of $Q_n$. The weak poset Ramsey
number $R_w(Q_m,Q_n)$ is defined analogously, with weak copies instead of
induced copies. It is easy to see that $R(Q_m,Q_n) \ge R_w(Q_m,Q_n)$.
Axenovich and Walzer showed that $n+2 \le R(Q_2,Q_n) \le 2n+2$. Recently, Lu
and Thompson improved the upper bound to $\frac{5}{3}n+2$. In this paper, we
solve this problem asymptotically by showing that $R(Q_2,Q_n)=n+O(n/\log n)$.
In the diagonal case, Cox and Stolee proved $R_w(Q_n,Q_n) \ge 2n+1$ using a
probabilistic construction. In the induced case, Bohman and Peng showed
$R(Q_n,Q_n) \ge 2n+1$ using an explicit construction. Improving these results,
we show that $R_w(Q_m,Q_n) \ge n+m+1$ for all $m \ge 2$ and large $n$ by giving
an explicit construction; in particular, we prove that $R_w(Q_2,Q_n)=n+3$.
|
The 2.5-generation (2.5G) ground-based gravitational wave (GW) detectors LIGO
Voyager and NEMO are expected to be operational in the late 2020s and early
2030s. In this work, we explore the potential of GW standard sirens observed by
the 2.5G GW detectors in measuring cosmological parameters, especially for the
Hubble constant. Using GWs to measure cosmological parameters is inherently
challenging, especially for 2.5G detectors, given their limited capability,
which results in weaker constraints on cosmological parameters from the
detected standard sirens. However, the measurement of the Hubble constant using
standard siren observations from Voyager and NEMO is still promising. For
example, using bright sirens from Voyager and NEMO can measure the Hubble
constant with an accuracy of about $2\%$ and $6\%$ respectively, and using the
Voyager-NEMO network can improve the accuracy to about $1.6\%$. Moreover,
bright sirens can be used to break the degeneracy of cosmological parameters
generated by CMB data, and to a certain extent, 2.5G detectors can also play a
role in this aspect. Observations of dark sirens by 2.5G detectors can achieve
relatively good results in measuring the Hubble constant, with an accuracy of
within $2\%$, and if combining observations of bright and dark sirens, the
accuracy of the Hubble constant measurement can reach about $1.3\%$. Finally,
we also discuss the impact of the uncertainty in the binary neutron star
merger rate on the estimation of cosmological parameters. We conclude that the
prospects for addressing the Hubble tension are promising in the era of the
2.5G ground-based GW detectors.
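At low redshift, the bright-siren idea sketched above reduces to a one-line estimate: the GW waveform yields the luminosity distance directly, the electromagnetic counterpart yields the redshift, and $H_0 \approx cz/d_L$. A toy illustration with GW170817-like numbers (illustrative values, not this paper's analysis):

```python
# Bright-siren Hubble-constant estimate at low redshift (toy sketch).
C_KM_S = 299792.458   # speed of light [km/s]

def hubble_constant(z, d_l_mpc):
    """Low-redshift H0 estimate [km/s/Mpc] from redshift and GW distance."""
    return C_KM_S * z / d_l_mpc

# Assumed GW170817-like values: z ~ 0.0098, d_L ~ 43 Mpc
print(round(hubble_constant(0.0098, 43.0), 1))
```

A full analysis must of course account for peculiar velocities, the distance-inclination degeneracy, and selection effects, which is why the forecasted accuracies quoted above require many events.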
|
We report the first observation of $B^0 \to X(3872) (K^{+}\pi^{-})$ and
evidence for $B^+ \to X(3872) (K^{0}\pi^{+})$. The product of branching
fractions for the former decay mode is measured to be ${\cal B}(B^0 \to X(3872)
(K^+ \pi^-)) \times {\cal B}(X(3872) \to J/\psi \pi^+ \pi^-) = (7.9 \pm
1.3(\mbox{stat.})\pm 0.4(\mbox{syst.})) \times 10^{-6}$ and also find that
$B^{0}\to X(3872) K^{*}(892)^{0}$ does not dominate the $B^{0}\to
X(3872)K^{+}\pi^{-}$ decay mode in contrast to other charmonium states like
$\psi'$. The product of branching fractions for the latter decay mode is
measured to be ${\cal B}(B^+ \to X(3872) (K^0 \pi^+)) \times {\cal B}(X(3872)
\to J/\psi \pi^+ \pi^-) = (10.6 \pm 3.0(\mbox{stat.}) \pm 0.9(\mbox{syst.}))
\times 10^{-6}$. This study is based on the full and final data sample of
711~fb$^{-1}$ ($772\times 10^6 B\bar B$ pairs) collected at the $\Upsilon(4S)$
resonance with the Belle detector at the KEKB collider.
|
We have studied the time evolution of the heavy ion luminosity and bunch
intensities in the Relativistic Heavy Ion Collider (RHIC), at BNL, and in the
Large Hadron Collider (LHC), at CERN. First, we present measurements from a
large number of RHIC stores (from Run 7), colliding 100 GeV/nucleon Au beams
without stochastic cooling. These are compared with two different calculation
methods. The first is a simulation based on multi-particle tracking taking into
account collisions, intrabeam scattering, radiation damping, and synchrotron
and betatron motion. In the second, faster, method, a system of ordinary
differential equations with terms describing the corresponding effects on
emittances and bunch populations is solved numerically. Results of the tracking
method agree very well with the RHIC data. With the faster method, significant
discrepancies are found since the losses of particles diffusing out of the RF
bucket due to intrabeam scattering are not modeled accurately enough. Finally,
we use both methods to make predictions of the time evolution of the future Pb
beams in the LHC at injection and collision energy. For this machine, the two
methods agree well.
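The faster of the two calculation methods solves a small system of ODEs for bunch populations and emittances. As a schematic illustration (a toy burn-off plus intrabeam-scattering model with made-up coefficients, not the authors' actual equations), a forward-Euler sketch might look like:

```python
# Toy model: collisions burn off bunch intensity while IBS blows up the
# emittance; coefficients k_lumi and k_ibs are assumed, not fitted.

def evolve(n0=1.0, eps0=1.0, k_lumi=0.3, k_ibs=0.05, dt=0.01, t_end=10.0):
    """Forward-Euler integration of a toy burn-off + IBS model."""
    n, eps, t = n0, eps0, 0.0
    history = []
    while t < t_end:
        lumi = n * n / eps           # luminosity ~ N^2 / emittance
        n += -k_lumi * lumi * dt     # particles burned off in collisions
        eps += k_ibs * eps * dt      # IBS emittance growth
        t += dt
        history.append((t, n, eps))
    return history

t, n, eps = evolve()[-1]
print(f"after t={t:.1f}: intensity {n:.3f}, emittance {eps:.3f}")
```

The discrepancy noted above arises because such smooth ODE terms cannot capture particles diffusing out of the RF bucket, a loss channel the tracking simulation resolves directly.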
|
Generative models able to synthesize layouts of different kinds (e.g.
documents, user interfaces or furniture arrangements) are a useful tool to aid
design processes and as a first step in the generation of synthetic data, among
other tasks. We exploit the properties of self-attention layers to capture high
level relationships between elements in a layout, and use these as the building
blocks of the well-known Variational Autoencoder (VAE) formulation. Our
proposed Variational Transformer Network (VTN) is capable of learning margins,
alignments and other global design rules without explicit supervision. Layouts
sampled from our model have a high degree of resemblance to the training data,
while demonstrating appealing diversity. In an extensive evaluation on publicly
available benchmarks for different layout types VTNs achieve state-of-the-art
diversity and perceptual quality. Additionally, we show the capabilities of
this method as part of a document layout detection pipeline.
|
A new infinite series of Einstein metrics is constructed explicitly on S^2 x
S^3, and the non-trivial S^3-bundle over S^2, containing infinite numbers of
inhomogeneous ones. They appear as a certain limit of a nearly extreme
5-dimensional AdS Kerr black hole. In the special case, the metrics reduce to
the homogeneous Einstein metrics studied by Wang and Ziller. We also construct
an inhomogeneous Einstein metric on the non-trivial S^{d-2}-bundle over S^2
from a d-dimensional AdS Kerr black hole. Our construction is a higher
dimensional version of the method of Page, which gave an inhomogeneous Einstein
metric on CP^2\sharp\bar{CP^2}.
|
The complex energies of the three-body resonances for one infinitely heavy
particle and two non-interacting light particles are the sum of the two
contributing two-body complex resonance energies. The bound state of a
Borromean system originates from a resonance when the third interaction is
introduced, a finite mass is allowed and proper angular momentum coupling is
included. The relative importance of these contributions is investigated and
the resulting structure of Borromean systems is traced back to the two-body
continuum properties. The $0^+$ and $2^+$ states in $^{6}$He result from
neutron-core p-states and the ground and first excited state of $^{11}$Li
originate from neutron-core $s^2$ and $sp$-states.
|
The high degree of control available over individual atoms enables precision
tests of fundamental physical concepts. In this Letter, we experimentally study
how precision measurements can be improved by preparing entangled states immune
to the dominant source of decoherence. Using \Ca ions, we explicitly
demonstrate the advantage from entanglement on a precision test of local
Lorentz invariance for the electron. Reaching the quantum projection noise
limit set by quantum mechanics, we observe for bipartite entangled states the
expected gain of a factor of two in the precision. Under specific conditions,
multipartite entangled states may yield substantial further improvements. Our
measurements improve the previous best limit for local Lorentz invariance of
the electron using \Ca ions by a factor of two to four, to about
$5\times10^{-19}$.
|
We derive formulae which lend themselves to TQFT interpretations of the
Milnor torsion, the Lescop invariant, the Casson invariant, and the
Casson-Morita cocyle of a 3-manifold, and, furthermore, relate them to the
Reshetikhin-Turaev theory.
|
We study the decay of the lightest neutral Higgs boson to a charm quark pair
at full one-loop level in the MSSM with non-minimal quark flavour violation
(QFV). In the numerical analysis we consider mixing between the second and the
third squark generation and all relevant constraints from B meson data are
taken into account. It is shown that the full one-loop corrected decay width
can be quite sensitive to the MSSM QFV parameters due to large $\tilde c -
\tilde t$ mixing and large trilinear couplings. After summarising the
theoretical and experimental errors, we conclude that an observation of these
SUSY QFV effects is possible at the ILC.
|
Sheet-beam X-ray fluorescence computed tomography can save a huge amount of
time in acquiring a whole set of projections at a synchrotron. However, a
synchrotron source is clearly impractical for most biomedical research
laboratories. In this
paper, polychromatic X-ray fluorescence computed tomography with sheet-beam
geometry is tested by Monte Carlo simulation. First, two phantoms (A and B)
filled with PMMA are used to simulate imaging process through GEANT 4. Phantom
A contains several GNP-loaded regions with the same size (10 mm) in height and
diameter but different Au weight concentration ranging from 0.3% to 1.8%.
Phantom B contains twelve GNP-loaded regions with the same Au weight
concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second,
discretized presentation of imaging model is established to reconstruct more
accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed
by filtered backprojection (FBP) and maximum likelihood expectation maximization
(MLEM) with and without correction, respectively. Contrast to noise ratio (CNR)
is calculated to evaluate all the reconstructed images. Our results show that
a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and
that the discretized imaging model can be used to reconstruct more accurate
images.
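The MLEM reconstruction mentioned above has a compact multiplicative update: each iteration scales every pixel by the normalized back-projection of the measured-to-predicted count ratio. A minimal sketch on an assumed 2-pixel toy system (not the paper's XFCT forward model):

```python
# MLEM for y ~ Poisson(A x); A is given as a list of rows (toy example).

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization reconstruction."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                        # positive start image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        fp = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / fp[i] if fp[i] > 0 else 0.0 for i in range(m)]
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x

# Noiseless toy data generated from the true image [2, 3]:
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, 3.0, 5.0]
print([round(v, 2) for v in mlem(A, y)])   # converges toward [2.0, 3.0]
```

The correction terms discussed in the abstract enter through the forward model A (attenuation, detector response), which is where the discretized imaging model improves accuracy.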
|
We consider a general two-component plasma of classical pointlike charges
$+e$ ($e$ is say the elementary charge) and $-Z e$ (valency $Z=1,2,\ldots$),
living on the surface of a sphere of radius $R$. The system is in thermal
equilibrium at the inverse temperature $\beta$, in the stability region against
collapse of oppositely charged particle pairs $\beta e^2 < 2/Z$. We study the
effect of the system excess charge $Q e$ on the finite-size expansion of the
(dimensionless) grand potential $\beta\Omega$. By combining the stereographic
projection of the sphere onto an infinite plane, the linear response theory and
the planar results for the second moments of the species density correlation
functions we show that for any $\beta e^2 < 2/Z$ the large-$R$ expansion of the
grand potential is of the form $\beta\Omega \sim A_V R^2 + [\chi/6 - \beta
(Qe)^2/2] \ln R$, where $A_V$ is the non-universal coefficient of the volume
(bulk) part and the Euler number of the sphere $\chi=2$. The same formula,
containing also a non-universal surface term proportional to $R$, was obtained
previously for the disc domain ($\chi=1$), in the case of the symmetric $(Z=1)$
two-component plasma at the collapse point $\beta e^2=2$ and the jellium model
$(Z\to 0)$ of identical $e$-charges in a fixed neutralizing background charge
density at any coupling $\beta e^2$ being an even integer. Our result thus
indicates that the prefactor to the logarithmic finite-size expansion does not
depend on the composition of the Coulomb fluid and its non-universal part
$-\beta (Qe)^2/2$ is independent of the geometry of the confining domain.
|
In 1997 Soker laid out a framework for understanding the formation and
shaping of planetary nebulae (PN). Starting from the assumption that
non-spherical PN cannot be formed by single stars, he linked PN morphologies to
the binary mechanisms that may have formed them, basing these connections
almost entirely on observational arguments. In light of the last decade of
discovery in the field of PN, we revise this framework, which, although
simplistic, can still serve as a benchmark against which to test theories of PN
origin and shaping. Within the framework, we revisit the role of planets in
shaping PN. Soker invoked a planetary role in shaping PN because there are not
enough close binaries to shape the large fraction of non-spherical PN. In this
paper we adopt a model whereby only ~20% of all 1-8 solar mass stars make a PN.
This reduces the need for planetary shaping. Through a propagation of
percentages argument, and starting from the assumption that planets can only
shape mildly elliptical PN, we conclude, as in Soker, that ~20% of all PN
were shaped via planetary and other substellar interactions but we add that
this corresponds to only ~5% of all 1-8 solar mass stars. This may be in line
with findings of planets around main sequence stars. PN shaping by planets is
made plausible by the recent discovery of planets that have survived
interactions with red giant branch (RGB) stars. Finally, we conclude that of
the ~80% of 1-8 solar mass stars that do not make a PN, about one quarter do
not even ascend the AGB due to interactions with stellar and substellar
companions, while three quarters ascend the AGB but do not make a PN. Once
these stars leave the AGB they evolve normally and can be confused with
post-RGB, extreme horizontal branch stars. We propose tests to identify them.
|
Multi-agent reinforcement learning for incomplete information environments
has attracted extensive attention from researchers. However, due to the slow
sample collection and poor sample exploration, there are still some problems in
multi-agent reinforcement learning, such as unstable model iteration and low
training efficiency. Moreover, most existing distributed frameworks are
proposed for single-agent reinforcement learning and are not suitable for the
multi-agent setting. In this paper, we design a distributed MARL framework
based on the actor-work-learner architecture. In this framework, multiple
asynchronous
environment interaction modules can be deployed simultaneously, which greatly
improves the sample collection speed and sample diversity. Meanwhile, to make
full use of computing resources, we decouple the model iteration from
environment interaction, and thus accelerate the policy iteration. Finally, we
verify the effectiveness of the proposed framework in the MaCA military
simulation environment and in the SMAC 3D real-time strategy gaming
environment, both of which feature incomplete information.
|
Recent work has shown that stabilizing an affine control system while
optimizing a quadratic cost subject to state and control constraints can be
mapped to a sequence of Quadratic Programs (QPs) using Control Barrier
Functions (CBFs) and Control Lyapunov Functions (CLFs). One of the main
challenges in this method is that the QPs could easily become infeasible under
safety and spatio-temporal constraints with tight control bounds. In our own
recent work, we defined Auxiliary-Variable Adaptive CBFs (AVCBFs) to improve
the feasibility of the CBF-based QP, while avoiding extensive parameter tuning.
In this paper, we consider spatio-temporal constraints as finite-time
reachability requirements. In order to satisfy these requirements, we
generalize AVCBFs to Auxiliary-Variable Adaptive Control Lyapunov Barrier
Functions (AVCLBFs) that work for systems and constraints with arbitrary
relative degrees. We show that our method has fewer conflicts with safety and
input constraints, and outperforms the state of the art in terms of adaptivity
and feasibility in solving the QP. We illustrate our approach on an optimal
control problem for a unicycle.
|
A recent characterisation of Fock-adapted contraction operator stochastic
cocycles on a Hilbert space, in terms of their associated semigroups, yields a
general principle for the construction of such cocycles by approximation of
their stochastic generators. This leads to new existence results for quantum
stochastic differential equations. We also give necessary and sufficient
conditions for a cocycle to satisfy such an equation.
|
We propose simultaneous confidence bands of the hyperbolic-type for the
contrasts between several nonlinear (curvilinear) regression curves. The
critical value of a confidence band is determined from the distribution of the
maximum of a chi-square random process defined on the domain of explanatory
variables. We use the volume-of-tube method to derive an upper tail probability
formula of the maximum of a chi-square random process, which is asymptotically
exact and sufficiently accurate in commonly used tail regions. Moreover, we
prove that the formula obtained is equivalent to the expectation of the
Euler-Poincare characteristic of the excursion set of the chi-square random
process, and hence conservative. This result is therefore a generalization of
Naiman's inequality for Gaussian random processes. As an illustrative example,
growth curves of consomic mice are analyzed.
|
We present a thorough analysis of symmetry breaking observed in Hartree-Fock
(HF) solutions of fullerenes C$_{60}$, C$_{36}$, and C$_{20}$ in order to
characterize the nature of electron correlation in them. Our analysis is based
on (1) the critical regularization strength to restore symmetry breaking in the
recently developed regularized orbital optimized second-order M{\o}ller-Plesset
perturbation theory ($\kappa$-OOMP2), (2) singlet-triplet gaps from various MP2
methods, and (3) natural orbital occupation numbers from restricted
coupled-cluster with singles and doubles (RCCSD) and coupled-cluster valence
bond with singles and doubles (CCVB-SD). Based on these three independent
probes, we conclude that C$_{36}$ (D$_\text{6h}$) exhibits genuine strong
correlation and symmetry breaking whereas C$_{60}$ exhibits {\it artificial} HF
symmetry breaking and is not strongly correlated. Investigating the critical
regularization strength, we discuss strong correlation in C$_{20}$ at the
Jahn-Teller distorted geometries (C$_\text{2h}$, D$_\text{2h}$, C$_\text{i}$,
and D$_\text{3h}$) and the I$_\text{h}$ geometry. Only C$_{20}$ (I$_\text{h}$)
was found to be strongly correlated while others exhibit {\it artificial} HF
symmetry breaking. This analysis highlights a useful feature of the recommended
$\kappa$-OOMP2 method. It is an electronic structure method that describes
dynamic correlation, and attenuates strong correlation in MP2 towards zero by
regularization. Therefore, $\kappa$-OOMP2 will exhibit symmetry breaking in its
reference determinant only when correlation is strong (i.e., essential symmetry
breaking). Artificial symmetry breaking (arising in HF due to neglect of
dynamic correlation) is thus removed in $\kappa$-OOMP2.
|
Artificial intelligence (AI) coupled with the existing Internet of Things (IoT)
enables more streamlined and autonomous operations across various economic
sectors. Consequently, the paradigm of Artificial Intelligence of Things
(AIoT), with AI techniques at its core, implies additional energy and carbon
costs
that may become significant with more complex neural architectures. To better
understand the energy and Carbon Footprint (CF) of some AIoT components, very
recent studies employ conventional metrics. However, these metrics are not
designed to capture energy efficiency aspects of inference. In this paper, we
propose a new metric, the Energy Cost of AIoT Lifecycle (eCAL) to capture the
overall energy cost of inference over the lifecycle of an AIoT system. We
devise a new methodology for determining eCAL of an AIoT system by analyzing
the complexity of data manipulation in individual components involved in the
AIoT lifecycle and derive the overall and per-bit energy consumption. With eCAL
we show that the better a model is and the more it is used, the more energy
efficient an inference is. For an example AIoT configuration, eCAL for making
$100$ inferences is $1.43$ times higher than for $1000$ inferences. We also
evaluate the CF of the AIoT system by calculating the equivalent CO$_{2}$
emissions based on the energy consumption and the Carbon Intensity (CI) across
different countries. Using 2023 renewable data, our analysis reveals that
deploying an AIoT system in Germany results in emitting $4.62$ times higher
CO$_2$ than in Finland, due to the latter using more low-CI energy sources.
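The qualitative behavior of a lifecycle metric like eCAL can be sketched with toy numbers (assumed values, not the paper's methodology): a fixed lifecycle energy cost is amortized over the number of inferences, and total energy converts to CO$_2$ via the grid carbon intensity (CI) of the deployment country.

```python
# Toy amortization sketch; both energy figures below are assumptions.
E_FIXED_KWH = 10.0      # assumed one-off lifecycle energy [kWh]
E_PER_INF_KWH = 0.001   # assumed marginal energy per inference [kWh]

def energy_per_inference(n_inferences):
    """Amortized energy cost of one inference [kWh]."""
    return E_FIXED_KWH / n_inferences + E_PER_INF_KWH

def co2_grams(n_inferences, ci_g_per_kwh):
    """Equivalent CO2 emissions for the whole lifecycle [g]."""
    total_kwh = E_FIXED_KWH + E_PER_INF_KWH * n_inferences
    return total_kwh * ci_g_per_kwh

ratio = energy_per_inference(100) / energy_per_inference(1000)
print(f"per-inference energy ratio, 100 vs 1000 inferences: {ratio:.2f}")
```

With these toy numbers the ratio is much larger than the paper's measured 1.43, but the qualitative point is the same: the more a model is used, the lower the amortized energy cost per inference.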
|
Trapped ions driven by electromagnetic radiation constitute one of the most
developed quantum technologies to date. The scenarios range from
proof-of-principle experiments to on-chip integration for quantum information
units. In most cases, these systems have operated in a regime where the
magnitude of the ion-radiation coupling constant is much smaller than the trap
and electronic transition frequencies. This regime allows the use of simple
effective Hamiltonians based on the validity of the rotating wave
approximation. However, novel trap and cavity designs now permit regimes in
which the trap frequency and the ion-radiation coupling constant are
commensurate. This opens up new avenues for faster quantum gates and state
transfers from the ion to a photon, and other quantum operations. From the
theoretical side, however, there is not yet much known in terms of models and
applications that go beyond the weak driving scenario. In this work, we will
present two main results in the scenario of stronger drivings. First, we
revisit a known protocol to reconstruct the motional Wigner function and expand
it to stronger driving lasers. This extension is not trivial because the
original protocol makes use of effective Hamiltonians valid only for weak
drivings. The use of stronger fields or faster operations is desirable since
experimental reconstruction methods of that kind are usually hindered by
decoherence. We then present a model that allows the analytical treatment of
stronger drivings and that works well for non-resonant interactions, which are
generally out of the reach of the previous models.
|
The linear polarization of the characteristic photon emission from
few-electron ions is studied for its sensitivity with regard to the nuclear
spin and magnetic moment of the ions. Special attention is paid, in particular,
to the K$\alpha_1$ ($1s 2p_{3/2} ^{1,3}P_{1,2} \to 1s^2 ^1S_0$) decay of
selected helium-like ions following the radiative electron capture into
initially hydrogen-like species. Based on the density matrix theory, a unified
description is developed that includes both, the many-electron and hyperfine
interactions as well as the multipole-mixing effects arising from the expansion
of the radiation field. It is shown that the polarization of the K$\alpha_1$
line can be significantly affected by the multipole mixing between the leading
$M2$ and hyperfine-induced $E1$ components of $1s2p ^3P_2, F_i \to 1s^2 ^1S_0,
F_f$ transitions. This $E1$-$M2$ mixing strongly depends on the nuclear
properties of the considered isotopes and can be addressed experimentally at
existing heavy-ion storage rings.
|
A common assumption in analyses of error thresholds and quantum computing in
general is that one applies fault-tolerant quantum error correction (FTQEC)
after every gate. This, however, is known not to always be optimal if the FTQEC
procedure itself can introduce errors. We investigate the effect of varying the
number of logical gates between FTQEC operations, and in particular the case
where failure of a postselection condition in FTQEC may cause FTQEC to be
skipped with high probability.
By using a simplified model of errors induced in FTQEC, we derive an
expression for the logical error rate as a function of error-correction
frequency, and show that in this model the optimal frequency is relatively
insensitive to postselection failure probability for a large range of such
probabilities. We compare the model to data derived from Monte Carlo simulation
for the $[[7,1,3]]$ Steane code.
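The trade-off in error-correction frequency can be illustrated with an assumed functional form (a simplification in the spirit of, but not identical to, the paper's model): for a distance-3 code such as the Steane code, two faults between correction rounds cause a logical error, so with n gates per FTQEC round, per-gate fault rate p, and FTQEC-induced error q, the logical rate per gate behaves roughly as r(n) = (c(np)^2 + q)/n.

```python
# Toy logical-error-rate model; p, q, c are assumed illustrative values.

def logical_rate(n, p=1e-4, q=1e-6, c=1.0):
    """Approximate logical error rate per gate with FTQEC every n gates."""
    return (c * (n * p) ** 2 + q) / n

best_n = min(range(1, 200), key=logical_rate)
print("optimal number of gates between FTQEC rounds:", best_n)   # 10
```

Minimizing c*n*p**2 + q/n analytically gives n* = sqrt(q/(c*p**2)), which for these assumed values is 10; the shallowness of the minimum is one way to see why the optimum is insensitive to the postselection failure probability.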
|
Computing low-rank approximations of kernel matrices is an important problem
with many applications in scientific computing and data science. We propose
methods to efficiently approximate and store low-rank approximations to kernel
matrices that depend on certain hyperparameters. The main idea behind our
method is to use multivariate Chebyshev function approximation along with the
tensor train decomposition of the coefficient tensor. The computations are in
two stages: an offline stage, which dominates the computational cost and is
parameter-independent, and an online stage, which is inexpensive and
instantiated for specific hyperparameters. A variation of this method addresses
the case that the kernel matrix is symmetric and positive semi-definite. The
resulting algorithms have linear complexity in terms of the sizes of the kernel
matrices. We investigate the efficiency and accuracy of our method on
parametric kernel matrices induced by various kernels, such as the Mat\'ern
kernel, through a series of numerical experiments. Our methods have speedups up to
$200\times$ in the online time compared to other methods with similar
complexity and comparable accuracy.
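The offline/online split described above can be illustrated in one dimension (a toy Gaussian kernel entry and a made-up length-scale range, not the paper's tensor-train algorithm): offline, a kernel entry is interpolated as a function of the hyperparameter at Chebyshev nodes; online, any hyperparameter value is evaluated cheaply from the stored coefficients.

```python
import math

def cheb_fit(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1] at n nodes."""
    fx = [f(math.cos(math.pi * (j + 0.5) / n)) for j in range(n)]
    coeffs = [2.0 / n * sum(fx[j] * math.cos(math.pi * k * (j + 0.5) / n)
                            for j in range(n)) for k in range(n)]
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, x):
    """Clenshaw recurrence for sum_k coeffs[k] * T_k(x)."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

dist = 1.0                                    # fixed distance between points
kernel = lambda ell: math.exp(-dist * dist / (2.0 * ell * ell))

# Offline stage: fit over length scale ell in [0.5, 2.0], mapped to [-1, 1].
coeffs = cheb_fit(lambda x: kernel(1.25 + 0.75 * x), 20)

# Online stage: evaluate cheaply for a specific hyperparameter value.
ell = 1.0
approx = cheb_eval(coeffs, (ell - 1.25) / 0.75)
exact = kernel(ell)
print(f"interpolation error: {abs(approx - exact):.2e}")
```

The paper's method extends this idea to several hyperparameters at once, with the tensor train decomposition compressing the resulting multivariate coefficient tensor.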
|
Recent works such as VisProg and ViperGPT have smartly composed foundation
models for visual reasoning, using large language models (LLMs) to produce
programs that can be executed by pre-trained vision-language models. However,
they operate in limited domains, such as 2D images, not fully exploiting the
generalization of language: abstract concepts like "left" can also be grounded
in 3D, temporal, and action data, as in moving to your left. This limited
generalization stems from these inference-only methods' inability to learn or
adapt pre-trained models to a new domain. We propose the Logic-Enhanced
Foundation Model (LEFT), a unified framework that learns to ground and reason
with concepts across domains with a differentiable, domain-independent,
first-order logic-based program executor. LEFT has an LLM interpreter that
outputs a program represented in a general, logic-based reasoning language,
which is shared across all domains and tasks. LEFT's executor then executes the
program with trainable domain-specific grounding modules. We show that LEFT
flexibly learns concepts in four domains: 2D images, 3D scenes, human motions,
and robotic manipulation. It exhibits strong reasoning ability in a wide
variety of tasks, including those that are complex and not seen during
training, and can be easily applied to new domains.
|
This paper is concerned with pulsating waves for multi-dimensional
reaction-diffusion equations in spatially periodic media. First, assuming the
existence of pulsating waves connecting two linearly stable steady states, we
study the continuity of wave speeds with respect to the direction of
propagation. The continuity was proved in [15] under the extra condition that
the speeds are nonzero in all directions. Here, we revisit this continuity
result without the extra condition. Secondly, we provide some sufficient
conditions ensuring the existence of pulsating waves in rapidly oscillating
media, which allow the equations to have multiple stable steady states.
|
The Yarkovsky effect is a thermal process acting upon the orbits of small
celestial bodies, which can cause these orbits to slowly expand or contract
with time. The effect is subtle (da/dt ~ 10^-4 au/My for a 1 km diameter
object) and is thus generally difficult to measure. We analyzed both optical
and radar astrometry for 600 near-Earth asteroids (NEAs) for the purpose of
detecting and quantifying the Yarkovsky effect. We present 247 NEAs with
measured drift rates, which is the largest published set of Yarkovsky
detections. This large sample size provides an opportunity to examine the
Yarkovsky effect in a statistical manner. In particular, we describe two
independent population-based tests that verify the measurement of Yarkovsky
orbital drift. First, we provide observational confirmation for the Yarkovsky
effect's theoretical size dependence of 1/D, where D is diameter. Second, we
find that the observed ratio of negative to positive drift rates in our sample
is 2.34, which, accounting for bias and sampling uncertainty, implies an actual
ratio of $2.7^{+0.3}_{-0.7}$. This ratio has a vanishingly small probability of
occurring due to chance or statistical noise. The observed ratio of retrograde
to prograde rotators is two times lower than the ratio expected from numerical
predictions from NEA population studies and traditional assumptions about the
sense of rotation of NEAs originating from various main belt escape routes. We
also examine the efficiency with which solar energy is converted into orbital
energy and find a median efficiency in our sample of 12%. We interpret this
efficiency in terms of NEA spin and thermal properties.
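The claim that the observed negative-to-positive ratio cannot arise from chance can be checked with a simple binomial calculation. The split of 173 negative vs. 74 positive drift rates below is inferred from the quoted ratio of 2.34 among 247 detections and is an assumption, not a figure taken from the paper.

```python
from math import comb

n = 247                       # total Yarkovsky detections (from the abstract)
n_neg = 173                   # assumed split reproducing the quoted 2.34 ratio
ratio = n_neg / (n - n_neg)   # ~2.34

# two-sided binomial test against a 50/50 null (equal retrograde/prograde)
p_one_sided = sum(comb(n, k) for k in range(n_neg, n + 1)) / 2 ** n
p_two_sided = 2 * p_one_sided
print(f"ratio = {ratio:.2f}, p = {p_two_sided:.2e}")  # vanishingly small p
```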
|
In this paper, we address the problem of data reconstruction from
privacy-protected templates, based on the recent concept of sparse ternary
coding with ambiguization (STCA). STCA is a generalization of randomization
techniques which includes random projections, lossy quantization, and addition
of ambiguization noise to satisfy the privacy-utility trade-off requirements.
The theoretical privacy-preserving properties of STCA have been validated on
synthetic data. However, the applicability of STCA to real data and potential
threats linked to reconstruction based on recent deep reconstruction algorithms
are still open problems. Our results demonstrate that STCA still achieves the
claimed theoretical performance when facing deep reconstruction attacks for the
synthetic i.i.d. data, while for real images special measures are required to
guarantee proper protection of the templates.
|
In this poster paper we present a data dissemination transmission abstraction
for over the air programming (OAP) protocol which is fundamentally different
from previous hop-by-hop transmission protocols. Instead of imposing the
greedy requirement that at least one node in the i-th hop receive all packets
before transmitting them to the next hop and its neighbours, we take
advantage of the spatial diversity and broadcast nature of wireless
transmission to adopt a cooperative approach in which each node broadcasts
whatever packets it has received, with the expectation that it will recover the
lost packets with high probability by overhearing the broadcast transmissions
of its neighbours. The use of coded transmissions ensures that this does not
lead to the broadcast storm problem. We validate the improved performance of
our proposed transmission scheme with respect to previous state-of-the-art OAP
protocols on a proof-of-concept two-hop TelosB wireless sensor network
testbed.
|
It is of critical importance to understand how the mechanical properties of
electrode materials change during lithium intercalation in the mechanical
design of Li-ion batteries, so as to ensure high reliability and safety in
their applications. Here, we investigated the mechanical properties of both
bulk and single-layer phosphorus during the lithium intercalation process using
first-principles calculations. Our results show that the Young's modulus of
bulk and layered phosphorus strongly depends on the lithium intercalation. The
mechanical bearing capacities, such as critical strain and stress, are
reduced several-fold after lithium intercalation in both bulk and single-layer
phosphorus, which may reduce the reliability of Li-ion batteries. Our findings
suggest that this remarkable deterioration of mechanical properties during Li
intercalation should be considered carefully in the mechanical design of Li-ion
batteries, in order to keep them working reliably and safely during the
charge-discharge process.
|
Objective: To develop a rule-based algorithm that detects temporal
information of clinical events during pregnancy for women with COVID-19 by
inferring gestational weeks and delivery dates from Electronic Health Records
(EHR) from the National COVID Cohort Collaborate (N3C). Materials and Methods:
The EHR are normalized by the Observational Medical Outcomes Partnership (OMOP)
Clinical Data Model (CDM). EHR phenotyping resulted in 270,897 pregnant women
(2018-06-01 to 2021-05-31). We developed a rule-based algorithm and performed a
multi-level evaluation to test content validity and clinical validity of the
algorithm; and extreme value analysis for individuals with <150 or >300 days of
gestation. Results: The algorithm identified 296,194 pregnancies (16,659
with COVID-19 and 174,744 without COVID-19 peri-pandemic) in 270,897 pregnant women.
For inferring gestational age, 95% of cases (n=40) have moderate-to-high
accuracy (Cohen Kappa = 0.62); 100% of cases (n=40) have moderate-to-high
granularity of temporal information (Cohen Kappa = 1). For inferring delivery dates, the
accuracy is 100% (Cohen Kappa = 1). Accuracy of gestational age detection for
extreme length of gestation is 93.3% (Cohen Kappa = 1). Mothers with COVID-19
showed higher prevalence in obesity (35.1% vs. 29.5%), diabetes (17.8% vs.
17.0%), chronic obstructive pulmonary disease (COPD) (0.2% vs. 0.1%),
acute respiratory distress syndrome (ARDS) (1.8% vs. 0.2%). Discussion: We
explored the characteristics of pregnant women by different timing of COVID-19
with our algorithm, which is the first to infer temporal information from
complete antenatal care and to detect the timing of SARS-CoV-2 infection for
pregnant women using N3C.
Conclusion: The algorithm shows excellent validity in inferring gestational age
and delivery dates, which supports national EHR cohorts on N3C studying the
impact of COVID-19 on pregnancy.
|
A detection scheme for uplink massive MIMO, dubbed massive-BLAST or M-BLAST,
is proposed. The derived algorithm is an enhancement of the well-known soft
parallel interference cancellation. Using computer simulations in massive MIMO
application scenarios, M-BLAST is shown to yield a substantially better error
performance with reduced complexity, compared to the benchmark alternative of a
one-shot linear detector, as well as the original sequential V-BLAST. Hence,
M-BLAST may serve as a computationally efficient means to exploit the large
number of antennas in massive MIMO.
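The abstract does not spell out the M-BLAST algorithm, but the parallel interference cancellation it enhances can be sketched as follows. This is a minimal hard-decision BPSK version on an illustrative noiseless 2x2 channel, not the proposed soft detector.

```python
import numpy as np

def pic_detect(H, y, n_iter=5):
    """Hard-decision parallel interference cancellation (PIC) for BPSK.
    Each stream re-estimates its symbol after subtracting the current
    estimates of all other streams from the received vector."""
    x_hat = np.sign(H.T @ y)              # matched-filter initialization
    for _ in range(n_iter):
        x_new = np.empty_like(x_hat)
        for i in range(H.shape[1]):
            # remove the estimated contribution of all streams except i
            residual = y - H @ x_hat + H[:, i] * x_hat[i]
            x_new[i] = np.sign(H[:, i] @ residual)
        x_hat = x_new
    return x_hat

# illustrative noiseless channel with cross-stream interference
H = np.array([[1.0, 0.3], [0.3, 1.0]])
x = np.array([1.0, -1.0])
print(pic_detect(H, H @ x))   # recovers the transmitted symbols
```

The soft variant replaces the hard sign decisions with soft symbol estimates; M-BLAST is described as an enhancement of that soft scheme.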
|
The Knights Landing (KNL) is the codename for the latest generation of Intel
processors based on Intel Many Integrated Core (MIC) architecture. It relies on
massive thread and data parallelism, and fast on-chip memory. This processor
operates in standalone mode, booting an off-the-shelf Linux operating system.
The KNL peak performance is very high - approximately 3 Tflops in double
precision and 6 Tflops in single precision - but sustained performance depends
critically on how well all parallel features of the processor are exploited by
real-life applications. We assess the performance of this processor for Lattice
Boltzmann codes, widely used in computational fluid-dynamics. In our OpenMP
code we consider several memory data-layouts that meet the conflicting
computing requirements of distinct parts of the application, and sustain a
large fraction of peak performance. We make some performance comparisons with
other processors and accelerators, and also discuss the impact of the various
memory layouts on energy efficiency.
|
Below scales of about 100/h Mpc our universe displays a complex inhomogeneous
structure dominated by voids, with clusters of galaxies in sheets and
filaments. The coincidence that cosmic expansion appears to start accelerating
at the epoch when such structures form has prompted a number of researchers to
question whether dark energy is a signature of a failure of the standard
cosmology to properly account, on average, for the distribution of matter we
observe. Here I discuss the timescape scenario, in which cosmic acceleration is
understood as an apparent effect, due to gravitational energy gradients that
grow when spatial curvature gradients become significant with the nonlinear
growth of cosmic structure. I discuss conceptual issues related to the
averaging problem, and their impact on the calibration of local geometry to the
solutions of the volume-average evolution equations corrected by backreaction,
and the question of nonbaryonic dark matter in the timescape framework. I
further discuss recent work on defining observational tests for average
geometric quantities which can distinguish the timescape model from a
cosmological constant or other models of dark energy.
|
Strain engineering is used to obtain desirable materials properties in a
range of modern technologies. Direct nanoscale measurement of the
three-dimensional strain tensor field within these materials has however been
limited by a lack of suitable experimental techniques and data analysis tools.
Scanning electron diffraction has emerged as a powerful tool for obtaining
two-dimensional maps of strain components perpendicular to the incident
electron beam direction. Extension of this method to recover the full
three-dimensional strain tensor field has, however, been restricted by the
absence of a formal framework for tensor tomography using such data. Here, we show that
it is possible to reconstruct the full non-symmetric strain tensor field as the
solution to an ill-posed tensor tomography inverse problem. We then demonstrate
the properties of this tomography problem both analytically and
computationally, highlighting why incorporating precession to perform scanning
precession electron diffraction may be important. We establish a general
framework for non-symmetric tensor tomography and demonstrate computationally
its applicability for achieving strain tomography with scanning precession
electron diffraction data.
|
We consider the hypodissipative Navier-Stokes equations on
$[0,T]\times\mathbb{T}^{d}$ and seek to construct non-unique,
H\"older-continuous solutions with epochs of regularity (smooth almost
everywhere outside a small singular set in time), using convex integration
techniques. In particular, we give quantitative relationships between the power
of the fractional Laplacian, the dimension of the singular set, and the
regularity of the solution. In addition, we also generalize the usual vector
calculus arguments to higher dimensions with Lagrangian coordinates.
|
Compressive Sensing (CS) method is a burgeoning technique being applied to
diverse areas including wireless sensor networks (WSNs). In WSNs, it has been
studied in the context of data gathering and aggregation, particularly aimed at
reducing data transmission cost and improving power efficiency. Existing CS
based data gathering work in WSNs assumes a fixed and uniform compression
threshold across the network, regardless of the data field characteristics.
In this paper, we present a novel data aggregation architecture model that
combines a multi-resolution structure with compressed sensing. The compression
thresholds vary over the aggregation hierarchy, reflecting the underlying data
field. Compared with previous relevant work, the proposed model shows
significant energy savings in theoretical analysis. We have also implemented
the proposed CS-based data aggregation framework on the SIDnet SWANS platform,
a discrete event simulator commonly used for WSN simulations. Our experiments
show substantial energy savings, ranging from 37% to 77% for different nodes in
the network, depending on their position in the hierarchy.
|
We numerically investigate the hydrodynamics and membrane dynamics of
multicomponent vesicles in two strongly confined geometries. This serves as a
simplified model for red blood cells undergoing large deformations while
traversing narrow constrictions. We propose a new parameterization for the
bending modulus that remains positive for all lipid phase parameter values. For
a multicomponent vesicle passing through a stenosis, we establish connections
between various properties: lipid phase coarsening, size and flow profile of
the lubrication layers, excess pressure, and the tank-treading velocity of the
membrane. For a multicomponent vesicle passing through a contracting channel,
we find that the lipid always phase separates so that the vesicle is stiffer in
the front as it passes through the constriction. For both cases of confinement,
we find that lipid coarsening is arrested under strong confinement and proceeds
at a high rate upon relief from extreme confinement. The results may be useful
for efficiently sorting lipid domains using microfluidic flows via controlled
release of vesicles passing through strong confinement.
|
Adversarial training (AT) is a popular method for training robust deep neural
networks (DNNs) against adversarial attacks. Yet, AT suffers from two
shortcomings: (i) the robustness of DNNs trained by AT is highly intertwined
with the size of the DNNs, posing challenges in achieving robustness in smaller
models; and (ii) the adversarial samples employed during the AT process exhibit
poor generalization, leaving DNNs vulnerable to unforeseen attack types. To
address these dual challenges, this paper introduces adversarial training via
adaptive knowledge amalgamation of an ensemble of teachers (AT-AKA). In
particular, we generate a diverse set of adversarial samples as the inputs to
an ensemble of teachers; then, we adaptively amalgamate the logits of these
teachers to train a generalized-robust student. Through comprehensive
experiments, we illustrate the superior efficacy of AT-AKA over existing AT
methods and adversarial robustness distillation techniques against cutting-edge
attacks, including AutoAttack.
|
Cross-lingual summarization (CLS) is the task of producing a summary in one
particular language for a source document in a different language. We introduce
WikiMulti - a new dataset for cross-lingual summarization based on Wikipedia
articles in 15 languages. As a set of baselines for further studies, we
evaluate the performance of existing cross-lingual abstractive summarization
methods on our dataset. We make our dataset publicly available here:
https://github.com/tikhonovpavel/wikimulti
|
We propose a deep metric learning model to create embedded sub-spaces with a
well-defined structure. A new loss function that imposes Gaussian structures on
the output space is introduced to create these sub-spaces, thus shaping the
distribution of the data. Having a mixture-of-Gaussians solution space is
advantageous given its simplified and well-established structure. It allows
fast discovery of classes within classes and the identification of mean
representatives at the centroids of individual classes. We also propose a new
semi-supervised method to create sub-classes. We illustrate our methods on the
facial expression recognition problem and validate results on the FER+,
AffectNet, Extended Cohn-Kanade (CK+), BU-3DFE, and JAFFE datasets. We
experimentally demonstrate that the learned embedding can be successfully used
for various applications including expression retrieval and emotion
recognition.
|
In this letter we report the first room temperature gravimetric measurements
of hydrogen absorption in nanoscale titanium-benzene complexes formed by laser
ablation in a benzene atmosphere in a UHV chamber. We are able to obtain a 6%
by weight absorption as predicted by recent density functional theory based
calculations under conditions of low benzene pressure (35 mTorr) and
for sub-monolayer samples. For samples synthesized under higher benzene
pressures we find a systematic degradation of the hydrogen absorption.
|
We study numerical simulations of large (N~10^4) two-dimensional quasi-static
granular assemblies subjected to a slowly increasing deviator stress. We report
some peculiarities in the behavior of these packings that have not yet been
addressed. The number of sliding contacts is not necessarily related to
stability: first the number of sliding contacts rises linearly and smoothly
with the applied stress. Then, at approximately half the peak stress, the
increase slows down, a plateau develops, and a decrease follows.
|
The acoustic behavior of micro-perforated panels (MPP) is studied
theoretically and experimentally at high levels of pressure excitation. A model
based on Forcheimer's regime of flow velocity in the perforations is proposed.
This model is valid at relatively high Reynolds numbers and low Mach numbers.
The experimental method consists in measuring the acoustical pressure at three
different positions in an impedance tube, the two measurement positions usually
considered in an impedance tube and one measurement in the vicinity of the rear
surface of the MPP. The impedance tube is equipped with a pressure driver
instead of the usual loudspeaker and is capable of delivering sound pressure
levels of up to 160 dB. Several MPP specimens made of steel and
polypropylene were tested. Measurements using random noise or sinusoidal
excitation in a frequency range between 200 and 1600 Hz were carried out on
MPPs backed by air cavities. It was observed that the absorption can be an
increasing or a decreasing function of the flow velocity in the perforations.
This suggests the existence of a maximum of absorption as a function of flow
velocity. This behavior was predicted by the model and confirmed
experimentally.
|
Medical doctors rely on images of the human anatomy, such as magnetic
resonance imaging (MRI), to localize regions of interest in the patient during
diagnosis and treatment. Despite advances in medical imaging technology, the
information conveyance remains unimodal. This visual representation fails to
capture the complexity of the real, multisensory interaction with human tissue.
However, perceiving multimodal information about the patient's anatomy and
disease in real-time is critical for the success of medical procedures and
patient outcome. We introduce a Multimodal Medical Image Interaction (MMII)
framework to allow medical experts a dynamic, audiovisual interaction with
human tissue in three-dimensional space. In a virtual reality environment, the
user receives physically informed audiovisual feedback to improve the spatial
perception of anatomical structures. MMII uses a model-based sonification
approach to generate sounds derived from the geometry and physical properties
of tissue, thereby eliminating the need for hand-crafted sound design. Two user
studies involving 34 general and nine clinical experts were conducted to
evaluate the proposed interaction framework's learnability, usability, and
accuracy. Our results showed excellent learnability of audiovisual
correspondence as the rate of correct associations significantly improved (p <
0.001) over the course of the study. MMII resulted in superior brain tumor
localization accuracy (p < 0.05) compared to conventional medical image
interaction. Our findings substantiate the potential of this novel framework to
enhance interaction with medical images, for example, during surgical
procedures where immediate and precise feedback is needed.
|
$S$ algebra is an infinite dimensional Lie algebra which is known to be the
symmetry algebra of some gauge theories. It is a "coloured version" of the
$w_{1+\infty}$ algebra. In this paper we write down all possible $S$ invariant
(celestial) OPEs between two positive helicity outgoing gluons and also find
the Knizhnik-Zamolodchikov type null states for these theories. Our analysis
hints at the existence of an infinite number of $S$ invariant gauge theories
which include the (tree-level) MHV-sector and the self-dual Yang-Mills theory.
|
A viscous dusty plasma containing Kappa-distributed electrons, positive warm
viscous ions, and negatively charged dust grains of constant charge with
viscosity has been considered to study the modes of dust ion acoustic waves (DIAWs)
theoretically and numerically. The derivations and basic features of shock and
solitary waves with different plasma parameters like Mach number, finite
temperature coefficient, unperturbed dust streaming velocity, kinematic
viscosity of dust, etc., of this DIAW mode have been performed. Treating the
Korteweg-de Vries (KdV) equation as a dynamical system, a phase portrait has
been drawn, and the positions of the saddle point (or col) and the center have
also been discussed. This type of dusty plasma can be found in celestial bodies. The
results of this research work can be applied to study the properties of DIAWs
in various astrophysical situations where Kappa distributive electrons are
present and careful modification of the same model can help us to understand
the nature of the DIAWs of laboratory plasma as well.
|
We consider Wood anomalies in diffraction spectrum from two-dimensional
dielectric periodic grid embedded in a surrounding media. The grid is of
subwavelength thickness, and diffraction of wave having S-polarization is
investigated in the vicinity of the emergence of first diffraction maximum. We
reduce Maxwell equations to coupled-mode theory with three parameters, which
are integral characteristics of material arrangement in the grid. In
particular, we show that such grids are capable of full reflectance in a
parametrically narrow frequency bandwidth. The effect is accompanied by a
parametric evanescent field enhancement in the region near the grid. In
particular, we consider grids with a sinusoidal profile and show that this type
of grid possesses unique diffraction properties due to the absence of guided
mode coupling. For such grids, there is a thin transparency window against a
background of near-zero transmission at slightly nonnormal incidence. We
estimate the grid size and incident beam flatness sufficient to resolve
these singularities.
|
We study the mirrors of five-parameter Calabi-Yau threefolds first studied by
Hulek and Verrill in the context of observed modular behaviour of the zeta
functions for Calabi-Yau manifolds. Toric geometry allows for a simple explicit
construction of these mirrors, which turn out to be familiar manifolds. These
are elliptically fibred in multiple ways. By studying the singular fibres, we
are able to identify the rational curves of low degree on the mirror manifolds.
This verifies the mirror symmetry prediction obtained by studying the mirror
map near large complex structure points. We undertake also an extensive study
of the periods of the Hulek-Verrill manifolds and their monodromies. On the
mirror, we compute the genus-zero and -one instanton numbers, which are
labelled by 5 indices, as $h^{1,1}=5$. There is an obvious permutation symmetry
on these indices, but in addition there is a surprising repetition of values.
We trace this back to an $S_{6}$ symmetry made manifest by certain
constructions of the complex structure moduli space of the Hulek-Verrill
manifold. Among other consequences, we see in this way that the moduli space
has six large complex structure limits. It is the freedom to expand the
prepotential about any one of these points that leads to this symmetry in the
instanton numbers. An intriguing fact is that the group that acts on the
instanton numbers is larger than $S_6$ and is in fact an infinite hyperbolic
Coxeter group, that we study. The group orbits have a 'web' structure, and with
certain qualifications the instanton numbers are only nonzero if they belong to
what we term 'positive webs'. This structure has consequences for instanton
numbers at all genera.
|
We show that quantum mechanics can be given a Lorentz-invariant realistic
interpretation by applying our recently proposed relativistic extension of the
de Broglie-Bohm theory to deduce non-locally correlated, Lorentz-invariant
individual particle motions for the Einstein-Podolsky-Rosen experiment and the
double-interferometer experiment proposed by Horne, Shimony and Zeilinger.
|
Approximate Bayesian Computation (ABC) methods are used to approximate
posterior distributions in models with unknown or computationally intractable
likelihoods. Both the accuracy and computational efficiency of ABC depend on
the choice of summary statistic, but outside of special cases where the optimal
summary statistics are known, it is unclear which guiding principles can be
used to construct effective summary statistics. In this paper we explore the
possibility of automating the process of constructing summary statistics by
training deep neural networks to predict the parameters from artificially
generated data: the resulting summary statistics are approximately posterior
means of the parameters. With minimal model-specific tuning, our method
constructs summary statistics for the Ising model and the moving-average model,
which match or exceed theoretically-motivated summary statistics in terms of
the accuracies of the resulting posteriors.
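The pipeline described above (simulate parameter-data pairs, fit a predictor of the parameters, use its output as the ABC summary statistic) can be sketched on a toy Gaussian-mean model. A linear least-squares regressor stands in for the deep network purely to keep the sketch small; the model and all settings are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_sim = 50, 4000

# 1) simulate (theta, data) pairs from the prior and the model
theta = rng.uniform(-3, 3, n_sim)
data = rng.normal(theta[:, None], 1.0, (n_sim, n_obs))

# 2) fit a regressor data -> theta; its prediction is the learned summary
#    (a linear stand-in for the deep network of the abstract)
X = np.hstack([data, np.ones((n_sim, 1))])
w, *_ = np.linalg.lstsq(X, theta, rcond=None)
summary = lambda d: np.append(d, 1.0) @ w   # approximately the posterior mean

# 3) ABC rejection: keep the simulations whose summary is closest to observed
theta_true = 1.5
obs = rng.normal(theta_true, 1.0, n_obs)
dist = np.abs(X @ w - summary(obs))
accepted = theta[np.argsort(dist)[:200]]
print(accepted.mean())   # approximate posterior mean, close to theta_true
```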
|
The rigorous diagnostics of experiments with warm dense matter (WDM) is
notoriously difficult. A key method is given by X-ray Thomson scattering
(XRTS), but the interpretation of XRTS measurements is usually based on
theoretical models that entail various approximations. Recently, Dornheim et
al. [arXiv:2206.12805] have introduced a new framework for temperature
diagnostics of XRTS experiments that is based on imaginary-time correlation
functions (ITCF). On the one hand, switching from the frequency- to the
imaginary-time domain gives one direct access to a number of physical
properties, which facilitates the extraction of the temperature of arbitrarily
complex materials without any models or approximations. On the other hand, the
bulk of theoretical works in dynamic quantum many-body theory is devoted to the
frequency domain, and, to our knowledge, the manifestation of physical
properties within the ITCF remains poorly understood. In the present work, we
aim to change this unsatisfactory situation by introducing a simple,
semi-analytical model for the imaginary-time dependence of two-body
correlations within the framework of imaginary-time path integrals. As a
practical example, we compare our new model to extensive ab initio path
integral Monte Carlo results for the ITCF of a uniform electron gas, and find
excellent agreement over a broad range of wave numbers, densities, and
temperatures.
|
Importance sampling has been known as a powerful tool to reduce the variance
of Monte Carlo estimator for rare event simulation. Based on the criterion of
minimizing the variance of Monte Carlo estimator within a parametric family, we
propose a general account for finding the optimal tilting measure. To this end,
when the moment generating function of the underlying distribution exists, we
obtain a simple and explicit expression of the optimal alternative
distribution. The proposed algorithm is quite general to cover many interesting
examples, such as normal distribution, noncentral $\chi^2$ distribution, and
compound Poisson processes. To illustrate the broad applicability of our
method, we study value-at-risk (VaR) computation in financial risk management
and bootstrap confidence regions in statistical inferences.
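For the normal-distribution example mentioned above, the optimal exponential tilt reduces to the familiar mean shift. A minimal sketch for estimating a Gaussian tail probability (the threshold and sample size are illustrative, not the paper's settings):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
a, n = 4.0, 100_000                     # rare event: P(Z > 4) for Z ~ N(0, 1)

# exponentially tilted proposal N(a, 1): the mean is shifted to the threshold
x = rng.normal(a, 1.0, n)
weights = np.exp(-a * x + a * a / 2)    # likelihood ratio phi(x) / phi(x - a)
est = np.mean(weights * (x > a))

exact = 0.5 * erfc(a / sqrt(2))         # true tail probability, ~3.17e-5
print(est, exact)
```

Naive Monte Carlo with the same n would see only a handful of exceedances, while the tilted estimator attains sub-percent relative error.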
|
This contribution is a survey of the available experimental radiation data
measured in the VUV range related to hypersonic atmospheric entry. The
objective is to identify the experimental datasets already gathered during
aerothermodynamics studies in preparation for sample return and future
exploration missions. The final goal is to identify the most valuable VUV
datasets for comparison with future measurements to be performed in the
European shock-tube ESTHER. Due to the limited number of studies covering VUV
radiation in relation to space exploration missions and manned Moon
exploration, the review has been extended to domains such as nuclear fusion,
exobiology, and chemical and process engineering.
|
In this article, we give a complete and self-contained account of Chernysh's
strengthening of the Gromov--Lawson surgery theorem for metrics of positive
scalar curvature. No claim of originality is made.
|
We present Fermi-LAT and multi-frequency, multi-epoch VLBA data for the TeV
blazar Mrk421. We collected the data during a long and intensive
multi-frequency campaign in 2011. We study the gamma-ray light curve, the
photon index evolution and their connection to the radio data on sub-parsec
scales, including total intensity, polarized flux density, polarization angle,
spectral index, and rotation measure both in the core and the jet region. The
VLBA data were obtained at 15 and 24 GHz for 12 epochs and at 43 GHz for 23
epochs, thus providing the best temporal and spatial coverage in the radio band
ever achieved for a TeV blazar. We provide significant constraints on the jet
Doppler factor, the presence of proper motion, the magnetic field
configuration, and an intriguing connection between variability in the radio
data and the gamma-ray light curve: the total intensity and polarized core
emission reach a peak simultaneously to the main gamma-ray peak, followed by a
rotation of the polarization angle at low frequency. Opacity-related, long
wavelength polarization swings are also detected later in the year, possibly
related to secondary peaks in the gamma-ray light curve, setting constraints on
the physics of the gamma-ray zone.
|
We report a photoluminescence study of electron-hole complexes in specially
designed semiconductor heterostructures. Placing a remote dilute layer of
donors at different distances $d$ from the quantum well leads to the
transformation of the luminescence spectra of neutral ($X$) and negatively
charged ($X^{-}$) excitons. The onset of an additional spectral line and its
energy dependence on $d$ allow us to unambiguously relate the so-called
$X^{-}$ trion state to charged excitons bound on charged donors in the
barrier. The results indicate that free-trion binding energies were
overestimated in previous studies of GaAs/Al$_{0.3}$Ga$_{0.7}$As quantum
wells, and give their corrected values for QWs of width 200 and 300 Å in the
limiting case of infinitely distant donors.
|
We study the Gibbs statistics of high-density hard-core configurations on a
unit square lattice $\mathbb{Z}^2$, for a general Euclidean exclusion distance
$D$. As a by-product, we solve the disk-packing problem on $\mathbb{Z}^2$ for
disks of diameter $D$. The key point is an analysis of solutions to norm
equations in $\mathbb{Z}[{\sqrt[6]{-1}}]$. We describe the ground states in
terms of M-triangles, i.e., non-obtuse $\mathbb{Z}^2$-triangles of a minimal
area with the side-lengths $\geq D$.
There is a finite class (Class S) formed by values $D^2$ generating sliding,
a phenomenon leading to countable families of periodic ground states. We
identify all $D^2$ with sliding. Each of the remaining classes is proven to be
infinite; they are characterized by uniqueness or non-uniqueness of a minimal
triangle for a given $D^2$, up to $\mathbb{Z}^2$-congruencies. For values of
$D^2$ with uniqueness (Class A) we describe the periodic ground states as
admissible sub-lattices in $\mathbb{Z}^2$ of maximum density. Pirogov-Sinai
theory then allows us to identify the extreme Gibbs measures (pure phases) for
large values of fugacity and to describe the symmetries between them.
Next, we analyze the values $D^2$ with non-uniqueness. For some $D^2$ all
M-triangles are ${\mathbb{R}}^2$-congruent but not $\mathbb{Z}^2$-congruent
(Class B0). For other values of $D^2$ there exist
non-${\mathbb{R}}^2$-congruent M-triangles, with different collections of
side-lengths (Class B1). Moreover, there are values $D^2$ for which both cases
occur (Class B2). The large-fugacity phase diagram for Classes B0, B1, B2 is
determined by dominant ground states.
Classes A, B0-B2 are described in terms of cosets in
$\mathbb{Z}[{\sqrt[6]{-1}}]$ by the group of units.
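The notion of an M-triangle above can be illustrated numerically. The sketch
below is a naive brute-force search (not the paper's method, which analyses
norm equations in $\mathbb{Z}[\sqrt[6]{-1}]$): it enumerates lattice triangles
with vertices $0$, $u$, $v$, keeps the non-obtuse ones with all squared side
lengths $\geq D^2$, and returns twice the minimal area.

```python
from math import isqrt

def min_m_triangle_area2(D2):
    """Brute-force search for an M-triangle: a non-obtuse lattice triangle
    with every squared side length >= D2 and minimal area.
    Returns twice the minimal area (an integer for lattice triangles).
    Vertices are 0, u, v; illustrative only."""
    R = 2 * isqrt(D2) + 3                      # heuristic search radius
    pts = [(a, b) for a in range(-R, R + 1) for b in range(-R, R + 1)]
    best = None
    for u in pts:
        if u[0] ** 2 + u[1] ** 2 < D2:
            continue
        for v in pts:
            w = (v[0] - u[0], v[1] - u[1])     # third side vector
            if (v[0] ** 2 + v[1] ** 2 < D2 or
                    w[0] ** 2 + w[1] ** 2 < D2):
                continue
            area2 = abs(u[0] * v[1] - u[1] * v[0])
            if area2 == 0:
                continue                       # degenerate (collinear)
            # non-obtuse: interior angles at 0, u, v all have cos >= 0
            if (u[0] * v[0] + u[1] * v[1] < 0 or       # angle at origin
                    u[0] * w[0] + u[1] * w[1] > 0 or   # angle at u
                    v[0] * w[0] + v[1] * w[1] < 0):    # angle at v
                continue
            if best is None or area2 < best:
                best = area2
    return best
```

For example, $D^2=1$ gives doubled area $1$ (the unit right triangle) and
$D^2=2$ gives $2$ (achieved by $u=(1,1)$, $v=(-1,1)$). The $O(R^4)$ search is
only practical for small $D^2$.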
|
Particle Metropolis-Hastings (PMH) allows for Bayesian parameter inference in
nonlinear state space models by combining Markov chain Monte Carlo (MCMC) and
particle filtering. The latter is used to estimate the intractable likelihood.
In its original formulation, PMH makes use of a marginal MCMC proposal for the
parameters, typically a Gaussian random walk. However, this can lead to a poor
exploration of the parameter space and an inefficient use of the generated
particles.
We propose a number of alternative versions of PMH that incorporate gradient
and Hessian information about the posterior into the proposal. This information
is largely obtained as a byproduct of the likelihood estimation. Indeed,
we show how to estimate the required information using a fixed-lag particle
smoother, with a computational cost growing linearly in the number of
particles. We conclude that the proposed methods can: (i) decrease the length
of the burn-in phase, (ii) increase the mixing of the Markov chain at the
stationary phase, and (iii) make the proposal distribution scale-invariant,
which simplifies tuning.
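As a toy illustration of the marginal PMH baseline described above, the sketch
below runs a bootstrap particle filter to estimate the likelihood and a
Gaussian random-walk Metropolis-Hastings step on a single parameter $\phi$ of
a hypothetical linear-Gaussian model (the gradient/Hessian-based proposals and
the fixed-lag smoother of the abstract are not implemented here):

```python
import numpy as np

def log_lik_pf(y, phi, sigma_v=1.0, sigma_e=1.0, N=200, rng=None):
    """Bootstrap particle filter estimate of the log-likelihood for the
    toy model x_t = phi * x_{t-1} + v_t,  y_t = x_t + e_t
    (the abstract's setting is general nonlinear state space models)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, N)                       # initial particles
    ll = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, sigma_v, N)     # propagate particles
        logw = -0.5 * ((yt - x) / sigma_e) ** 2       # unnormalised log-weights
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * sigma_e ** 2)
        x = rng.choice(x, N, p=w / w.sum())           # multinomial resampling
    return ll

def pmh(y, iters=200, step=0.05, seed=1):
    """Marginal PMH: Gaussian random-walk proposal on phi, flat prior."""
    rng = np.random.default_rng(seed)
    phi = 0.0
    ll = log_lik_pf(y, phi, rng=np.random.default_rng(seed))
    chain = []
    for i in range(iters):
        phi_p = phi + step * rng.normal()             # random-walk proposal
        ll_p = log_lik_pf(y, phi_p, rng=np.random.default_rng(seed + i + 1))
        if np.log(rng.uniform()) < ll_p - ll:         # accept/reject
            phi, ll = phi_p, ll_p
        chain.append(phi)
    return np.array(chain)
```

The random-walk step size `step` is exactly the tuning parameter that the
proposed gradient- and Hessian-informed proposals aim to make less critical.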
|
We summarize the development of visible-sensitive gaseous photomultipliers,
combining a semitransparent bi-alkali photocathode with a state-of-the-art
cascaded electron multiplier. The latter has high photoelectron collection
efficiency and a record ion blocking capability. We describe in detail the
system and methods of photocathode production and characterization, their
coupling with the electron multiplier and the gaseous-photomultiplier operation
and characterization in a continuous mode. We present results on the properties
of laboratory-produced K$_2$CsSb, Cs$_3$Sb and Na$_2$KSb photocathodes and
report on their stability and QE in gas; K$_2$CsSb photocathodes yielded QE
values in Ar/CH$_4$(95/5) above 30% at wavelengths of 360-400 nm. The novel
gaseous photomultiplier yielded stable operation at gains of 10$^5$, in
continuous operation mode, in 700 Torr of this gas; its sensitivity to single
photons was demonstrated. Other properties are described. The successful
detection of visible light with this gaseous photomultiplier paves the way
towards the further development of large-area sealed imaging detectors of flat
geometry, insensitive to magnetic fields, which might have a significant
impact on light detection in numerous fields.
|
The effects of edge chemistry on the relative stability and electronic
properties of zigzag boron nitride nanoribbons (ZBNNRs) are investigated. Among
all functional groups considered, fully hydroxylated ZBNNRs are found to be the
most energetically stable. When an in-plane external electric field is applied
perpendicular to the axis of both hydrogenated and hydroxylated ZBNNRs, a
spin-polarized half-metallic state is induced, whose character is different
from that predicted for zigzag graphene nanoribbons (ZGNRs). The onset field
for achieving the half-metallic
state is found to mainly depend on the width of the ribbon. Our results
indicate that edge functionalization of ZBNNRs may open the way for the design
of new nano-electronic and nano-spintronic devices.
|