We show that, up to strong cocycle conjugacy, every countable exact group
admits a unique equivariantly $\mathcal{O}_2$-absorbing, pointwise outer action
on the Cuntz algebra $\mathcal{O}_2$ with the quasi-central approximation
property (QAP). In particular, we establish the equivariant analogue of the
Kirchberg $\mathcal{O}_2$-absorption theorem for these groups.
|
The Nielsen-Thurston theory of surface diffeomorphisms shows that useful
dynamical information can be obtained about a surface diffeomorphism from a
finite collection of periodic orbits. In this paper, we extend these results to
homoclinic and heteroclinic orbits of saddle points. These orbits are most
readily computed and studied as intersections of unstable and stable manifolds
comprising homoclinic or heteroclinic tangles in the surface. We show how to
compute a map of a one-dimensional space similar to a train-track which
represents the isotopy-stable dynamics of the surface diffeomorphism relative
to a tangle. All orbits of this one-dimensional representative are globally
shadowed by orbits of the surface diffeomorphism, and periodic, homoclinic and
heteroclinic orbits of the one-dimensional representative are shadowed by
similar orbits in the surface. By constructing suitable surface diffeomorphisms,
we prove that these results are optimal in the sense that the topological
entropy of the one-dimensional representative is the greatest lower bound for
the entropies of diffeomorphisms in the isotopy class.
|
A means to take advantage of molecular similarity to lower the computational
cost of electronic structure theory is proposed, in which parameters are
embedded into a low-cost, low-level (LL) ab initio theory and adjusted to
obtain agreement with a higher level (HL) ab initio theory. This approach is
explored by training such a model on data for ethane and testing the resulting
model on methane, propane and butane. The electronic distribution of the
molecules is varied by placing them in strong electrostatic environments
consisting of random charges placed on the corners of a cube. The results find
that parameters embedded in HF/STO-3G theory can be adjusted to obtain
agreement, to within about 2 kcal/mol, with results of HF/6-31G theory.
Obtaining this level of agreement requires the use of parameters that are
functions of the bond lengths, atomic charges, and bond orders within the
molecules. The argument is made that this approach provides a well-controlled
means to take advantage of molecular similarity in quantum chemistry.
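The parameter-adjustment step described above can be sketched as a least-squares fit. Everything below is an illustrative stand-in under stated assumptions: the linear feature model and the synthetic data replace real ab initio output, where the features would be the bond lengths, atomic charges, and bond orders mentioned in the text.

```python
import numpy as np

# Sketch: embed parameters p into a low-level (LL) theory and adjust them so
# LL energies reproduce high-level (HL) reference energies on training data.
# The linear feature model and synthetic data are assumptions; in practice the
# features and energies come from ab initio calculations.

def ll_energy(features, p):
    """LL energy with embedded parameters entering linearly in the features."""
    return features @ p

def fit_embedded_parameters(features, hl_energies):
    """Least-squares adjustment of the embedded parameters to HL data."""
    p, *_ = np.linalg.lstsq(features, hl_energies, rcond=None)
    return p

# Synthetic training set standing in for ethane in random-charge environments.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 3))   # 3 geometry/charge features (assumed)
p_true = np.array([1.5, -0.7, 0.3])
E_hl = X_train @ p_true              # "HL" reference energies
p_fit = fit_embedded_parameters(X_train, E_hl)
```

With exact linear data the fit recovers the embedded parameters; on real ab initio data the residual would quantify the ~2 kcal/mol agreement quoted in the text.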
|
Polarized models of relativistically hot astrophysical plasmas require
transport coefficients as input: synchrotron absorption and emission
coefficients in each of the four Stokes parameters, as well as three Faraday
rotation coefficients. Approximations are known for all coefficients for a
small set of electron distribution functions, such as the Maxwell-Jüttner
relativistic thermal distribution, and a general procedure has been obtained by
Huang & Shcherbakov for an isotropic distribution function. Here we provide an
alternative general procedure, with a full derivation, for calculating
absorption and rotation coefficients for an arbitrary isotropic distribution
function. Our method involves the computation of the full plasma susceptibility
tensor, which in addition to absorption and rotation coefficients may be used
to determine plasma modes and the dispersion relation. We implement the scheme
in a publicly available library with a simple interface, thus allowing for easy
incorporation into radiation transport codes. We also provide a comprehensive
survey of the literature and comparison with earlier results.
|
Those best-positioned to profit from the proliferation of artificial
intelligence (AI) systems are those with the most economic power. Extant global
inequality has motivated Western institutions to involve more diverse groups in
the development and application of AI systems, including hiring foreign labour
and establishing extra-national data centers and laboratories. However, given
both the propensity of wealth to abet its own accumulation and the lack of
contextual knowledge in top-down AI solutions, we argue that more focus should
be placed on the redistribution of power, rather than just on including
underrepresented groups. Unless more is done to ensure that opportunities to
lead AI development are distributed justly, the future may hold only AI
systems which are unsuited to their conditions of application and which
exacerbate inequality.
|
We compute the effective good divisibility of a rational homogeneous variety,
extending an earlier result for complex Grassmannians by Naldi and Occhetta.
Non-existence results for nonconstant morphisms to rational homogeneous
varieties of classical Lie type are obtained as applications.
|
Accurate estimation of nuclear masses and their prediction beyond the
experimentally explored domains of the nuclear landscape are crucial to an
understanding of the fundamental origin of nuclear properties and to many
applications of nuclear science, most notably in quantifying the $r$-process of
stellar nucleosynthesis. Neural networks have been applied with some success to
the prediction of nuclear masses, but they are known to have shortcomings in
application to extrapolation tasks. In this work, we propose and explore a
novel type of neural network for mass prediction in which the usual neuron-like
processing units are replaced by complex-valued product units that permit
multiplicative couplings of inputs to be learned from the input data. This
generalized network model is tested on both interpolation and extrapolation
data sets drawn from the Atomic Mass Evaluation. Its performance is compared
with that of several neural-network architectures, substantiating its
suitability for nuclear mass prediction. Additionally, a prediction-uncertainty
measure for such complex-valued networks is proposed that serves to identify
regions of expected low prediction error.
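The multiplicative coupling of the product units described above can be sketched in a few lines. This is a minimal sketch, not the paper's architecture: only the unit itself is shown, computed via complex logarithms so that negative inputs remain well-defined, which is the reason such networks are complex-valued.

```python
import numpy as np

def product_unit(x, w):
    """A single product unit: prod_i x_i ** w_i, via exp(sum_i w_i log x_i).

    Casting to complex uses the principal branch of the logarithm, so
    negative inputs (common in physics features) stay well-defined.
    Illustrative sketch; layer layout and training are not shown.
    """
    return np.exp(w @ np.log(x.astype(complex)))
```

With exponents w = [1, 1] the unit computes the plain product of its two inputs, and a negative input yields the correctly signed product rather than a NaN.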
|
For a smooth surface X over an algebraically closed field of positive
characteristic, we consider the ramification of an Artin-Schreier extension of
X. Ramification at a codimension-1 point of X is measured by the Swan
conductor. Ramification at a closed point of X is measured by the invariant
r_x defined by Kato [2]. The main theme of this paper is to give a simple
formula to compute r_x' defined in [4], which is equal to r_x for good
Artin-Schreier extensions. We also prove Kato's conjecture on the upper bound
of r_x.
|
The Kane-Mele (KM) model was proposed to describe the quantum spin Hall effect
of electrons on the two-dimensional honeycomb lattice. Here, we show that,
in a certain parameter region, the London equation is obtained from the
effective field theory of the layered KM model with an electronic correlation.
|
Case-Based Reasoning (CBR) is an artificial intelligence approach to
problem-solving with a good record of success. This article proposes using
Quantum Computing to improve some of the key processes of CBR, such that a
Quantum Case-Based Reasoning (qCBR) paradigm can be defined. The focus is set
on designing and implementing a qCBR based on the variational principle that
improves its classical counterpart in terms of average accuracy, scalability
and tolerance to overlapping. A comparative study of the proposed qCBR with a
classic CBR is performed for the case of the Social Workers' Problem as a
sample of a combinatorial optimization problem with overlapping. The
algorithm's quantum feasibility is modelled with docplex, tested on IBMQ
computers, and further evaluated with the Qibo framework.
|
In this paper, we discuss the collection of a corpus associated to tropical
storm Harvey, as well as its analysis from both spatial and topical
perspectives. From the spatial perspective, our goal here is to get a first
estimation of the quality and precision of the geographical information
featured in the collected corpus. From a topical perspective, we discuss the
representation of Twitter posts, and strategies to process an initially
unlabeled corpus of tweets.
|
A search for pair production of the supersymmetric partner of the top quark,
the top squark, in proton-proton collisions at $\sqrt{s}$ = 13 TeV is presented
in final states containing at least one hadronically decaying tau lepton and
large missing transverse momentum. This final state is highly sensitive to
scenarios of supersymmetry in which the decay of the top squark to tau leptons
is enhanced. The search uses a data sample corresponding to an integrated
luminosity of 138 fb$^{-1}$, which was recorded with the CMS detector during
2016-2018. No significant excess is observed with respect to the standard model
predictions. Exclusion limits at 95% confidence level on the masses of the top
squark and the lightest neutralino are presented under the assumptions of
simplified models. The results probe top squark masses up to 1150 GeV for a
nearly massless neutralino. This search covers a relatively less explored
parameter space in the context of supersymmetry, and the exclusion limit is the
most stringent to date for the model considered here.
|
Short-period (<50 days), low-mass (<10 Mearth) exoplanets are abundant, and
the few whose radii and masses have both been measured already reveal a
diversity in composition. Some of these exoplanets are found on eccentric orbits and are
subjected to strong tides affecting their rotation and resulting in significant
tidal heating. Within this population, some planets are likely to be depleted
in volatiles and have no atmosphere. We model the thermal emission of these
"Super Mercuries" to study the signatures of rotation and tidal dissipation on
their infrared light curve. We compute the time-dependent temperature map at
the surface and in the subsurface of the planet and the resulting
disk-integrated emission spectrum received by a distant observer for any
observation geometry. We calculate the illumination of the planetary surface
for any Keplerian orbit and rotation. We include the internal tidal heat flow,
vertical heat diffusion in the subsurface and generate synthetic light curves.
We show that the different rotation periods predicted by tidal models
(spin-orbit resonances, pseudo-synchronization) produce different photometric
signatures, which are observable provided that the thermal inertia of the
surface is high, like that of solid or melted rocks (but not regolith). Tidal
dissipation can also directly affect the light curves and make the inference of
the rotation more difficult or easier depending on the existence of hot spots
on the surface. Infrared light curve measurement with the James Webb Space
Telescope and EChO can be used to infer exoplanets' rotation periods and
dissipation rates and thus to test tidal models. These data will also
constrain the nature of the (sub)surface through its thermal inertia.
|
We report the results of measurements of the dc magnetic susceptibility
chi(T) and of the 23Na nuclear magnetic resonance (NMR) response of NaVGe2O6, a
material in which the V ions form a network of interacting one-dimensional spin
S=1 chains. The experiments were made at temperatures between 2.5 and 300 K.
The chi(T) data suggest that the formation of the expected low-temperature
Haldane phase is intercepted by an antiferromagnetic phase transition at 18 K.
The transition is also reflected in the 23Na NMR spectra and the corresponding
spin-lattice relaxation rate 1/T1(T). In the ordered phase, 1/T1(T) decreases
by orders of magnitude with decreasing temperature, indicating the formation of
a gap of the order of 12 K in the magnetic excitation spectrum.
|
We present mean horizontal branch absolute magnitudes and iron abundances for
a sample of 39 globular clusters. These quantities were calculated in an
unprecedented homogeneous fashion based on Fourier decomposition of light curves
of RR Lyrae cluster members. Zero points for the luminosity calibrations are
discussed. Our photometrically derived metallicities and distances compare very
well with spectroscopic determinations of [Fe/H] and accurate distances
obtained using {\sl Gaia} and {\sl Hubble Space Telescope} data. The need to
distinguish between the results for RRab and RRc stars for a correct evaluation
of the $M_V$--[Fe/H] relation is discussed. For RRab stars, the relation is
non-linear, and the horizontal branch structure plays a significant role. For
RRc stars, the relation remains linear and tight, and the slope is very
shallow. Hence, the RRc stars seem better indicators of the parental cluster
distances. Systematic time-series CCD imaging performed over the last 20
years has enabled the discovery and classification of 330 variables in our
sample of globular clusters.
|
We consider a perturbed KdV equation:
\[\dot{u}+u_{xxx} - 6uu_x = \epsilon f(x,u(\cdot)), \quad x\in \mathbb{T},
\quad \int_\mathbb{T} u\,dx=0.\]
For any periodic function $u(x)$, let
$I(u)=(I_1(u),I_2(u),...)\in\mathbb{R}_+^{\infty}$ be the vector, formed by the
KdV integrals of motion, calculated for the potential $u(x)$. Assuming that the
perturbation $\epsilon f(x,u(\cdot))$ is a smoothing mapping (e.g. it is a
smooth function $\epsilon f(x)$, independent from $u$), and that solutions of
the perturbed equation satisfy some mild a-priori assumptions, we prove that
for solutions $u(t,x)$ with typical initial data and for $0\leqslant t\lesssim
\epsilon^{-1}$, the vector $I(u(t))$ may be well approximated by a solution of
the averaged equation.
|
Proper motion studies of stars in the centre of the Milky Way have been
typically limited to the Arches and Quintuplet clusters and to the central
parsec. Here, we present the first results of a large-scale proper motion study
of stars within several tens of parsecs of Sagittarius A* based on our $0.2''$
angular resolution GALACTICNUCLEUS survey (epoch 2015) combined with NICMOS/HST
data from the Paschen-$\alpha$ survey (epoch 2008). This study will be the
first extensive proper motion study of the central $\sim 36' \times 16'$ of the
Galaxy, which is not covered adequately by any of the existing astronomical
surveys such as Gaia because of its extreme interstellar extinction ($A_{V}
\gtrsim 30$ mag). Proper motions can help us to disentangle the different
stellar populations along the line-of-sight and interpret their properties in
combination with multi-wavelength photometry from GALACTICNUCLEUS and other
sources. It also allows us to infer the dynamics and interrelationship between
the different stellar components of the Galactic Centre (GC). In particular, we
use proper motions to detect co-moving groups of stars which can trace low mass
or partially dissolved young clusters in the GC that can hardly be discovered
by any other means. Our pilot study in this work is on a field in the nuclear
bulge associated with HII regions that show the presence of young stars.
detect the first group of co-moving stars coincident with an HII region. Using
colour-magnitude diagrams, we infer that the co-moving stars are consistent
with being post-main-sequence stars with ages of a few Myr. Simulations show
that this group of stars is a real group that can indicate the existence of a
dissolving or low to intermediate mass young cluster. A census of these
undiscovered clusters will ultimately help us to constrain star formation at
the GC over the past few tens of Myr.
|
In this note we prove that the maximum length of a $d$-dimensional circuit
code of spread $k$ equals $2^{d+O_k(\log^2d)}$, with the implied constant
depending only on $k$.
|
We propose a method to integrate dissipative PDEs rigorously forward in time
with the use of Finite Element Method (FEM). The technique is based on the
Galerkin projection on the FEM space and estimates on the residual terms. The
proposed approach is illustrated on a periodically forced one-dimensional
Burgers equation with Dirichlet conditions. For two particular choices of the
forcing we prove the existence of the periodic globally attracting trajectory
and give precise bounds on its shape.
|
Biquandle brackets are a type of quantum enhancement of the biquandle
counting invariant for oriented knots and links, defined by a set of skein
relations with coefficients which are functions of biquandle colors at a
crossing. In this paper we use biquandle brackets to enhance the biquandle
counting matrix invariant defined by the first two authors in arXiv:1803.11308.
We provide examples to illustrate the method of calculation and to show that the
new invariants are stronger than the previous ones.
|
Asymptotic formulae for the mechanical and electric fields in a piezoelectric
body with a small void are derived and justified. Such results are new and
useful for applications in the field of design of smart materials. In this way
the topological derivatives of shape functionals are obtained for
piezoelectricity. The asymptotic formulae are given in terms of the so-called
polarization tensors (matrices) which are determined by the integral
characteristics of voids. The distinguishing feature of the piezoelectricity
boundary value problems under consideration is the absence of positive
definiteness of a differential operator which is not self-adjoint. Two
specific Gibbs functionals of the problem are defined by the energy and the
electric enthalpy. The topological derivatives are defined in different manners
for each of the governing functionals. Actually, the topological derivative of
the enthalpy functional is local, i.e., defined by the pointwise values of the
governing fields, in contrast to the energy functional and some other suitable
shape functionals which admit non-local topological derivatives, i.e.,
depending on the whole problem data. An example with the weak interaction
between mechanical and electric fields provides the explicit asymptotic
expansions and can be directly used in numerical procedures of optimal design
for smart materials.
|
We report first results on the calculation of NNLO corrections to event shape
distributions in electron-positron annihilation. The corrections are sizeable
for all variables, however their magnitude is substantially different for
different observables. We observe that inclusion of the NNLO corrections yields
a considerably better agreement between theory and experimental data both in
shape and normalisation of the event shape distributions.
|
Arone and Lesh constructed and studied spectrum level filtrations that
interpolate between connective (topological or algebraic) K-theory and the
Eilenberg-MacLane spectrum for the integers. In this paper we consider (global)
equivariant generalizations of these filtrations and of another closely related
class of filtrations, the modified rank filtrations of the K-theory spectra
themselves. We lift Arone and Lesh's description of the filtration subquotients
to the equivariant context and apply it to compute algebraic filtrations on
representation rings that arise on equivariant homotopy groups. It turns out
that these representation ring filtrations are considerably easier to express
globally than over a fixed compact Lie group. Furthermore, they have formal
similarities to the filtration on Burnside rings induced by the symmetric
products of spheres, which was computed by Schwede.
|
For more than two decades, the Navarro, Frenk, and White (NFW) model has
stood the test of time; it has been used to describe the distribution of mass
in galaxy clusters out to their outskirts. Stacked weak lensing measurements of
clusters are now revealing the distribution of mass out to and beyond their
virial radii, where the NFW model is no longer applicable. In this study we
assess how well the parameterised Diemer & Kravtsov (DK) density profile
describes the characteristic mass distribution of galaxy clusters extracted
from cosmological simulations. This is determined from stacked synthetic
lensing measurements of the 50 most massive clusters extracted from the
Cosmo-OWLS simulations, using the Dark Matter Only run and also the run that
most closely matches observations. The characteristics of the data reflect the
Weighing the Giants survey and data from the future Large Synoptic Survey
Telescope (LSST). In comparison with the NFW model, the DK model is favored by
the stacked data, in particular for the future LSST data, where the number
density of background galaxies is higher. The DK profile depends on the
accretion history of clusters, which is specified in the current study.
Eventually, however, subsamples of galaxy clusters with properties indicative
of disparate accretion histories could be studied.
|
Simple, self-similar, analytic solutions of relativistic hydrodynamics are
presented for cylindrically symmetric, three dimensionally expanding fireballs
corresponding to central collisions of heavy ions at relativistic bombarding
energies.
|
By matching infrared-selected, massive young stellar objects (MYSOs) and
compact HII regions in the RMS survey to massive clumps found in the
submillimetre ATLASGAL survey, we have identified ~1000 embedded young massive
stars between 280\degr < $\ell$ < 350\degr and 10\degr < $\ell$ < 60\degr with
|b| < 1.5\degr. Combined with an existing sample of radio-selected methanol masers
and compact HII regions, the result is a catalogue of ~1700 massive stars
embedded within ~1300 clumps located across the inner Galaxy, containing three
observationally distinct subsamples, methanol-maser, MYSO and HII-region
associations, covering the most important tracers of massive star formation,
thought to represent key stages of evolution. We find that massive star
formation is strongly correlated with the regions of highest column density in
spherical, centrally condensed clumps. We find no significant differences
between the three samples in clump structure or the relative location of the
embedded stars, which suggests that the structure of a clump is set before the
onset of star formation, and changes little as the embedded object evolves
towards the main sequence. There is a strong linear correlation between clump
mass and bolometric luminosity, with the most massive stars forming in the most
massive clumps. We find that the MYSO and HII-region subsamples are likely to
cover a similar range of evolutionary stages and that the majority are near the
end of their main accretion phase. We find few infrared-bright MYSOs associated
with the most massive clumps, probably due to very short pre-main sequence
lifetimes in the most luminous sources.
|
We aim to examine the relative cross-calibration accuracy of the on-axis
effective areas of the XMM-Newton EPIC pn and MOS instruments. Spectra from a
sample of 46 bright, high-count, non-piled-up isolated on-axis point sources
are stacked together, and model residuals are examined to characterize the EPIC
MOS-to-pn inter-calibration. The MOS1-to-pn and MOS2-to-pn results are broadly
very similar. The cameras show the closest agreement below 1 keV, with MOS
excesses over pn of 0-2% (MOS1/pn) and 0-3% (MOS2/pn). Above 3 keV, the MOS/pn
ratio is consistent with energy-independent (or only mildly increasing)
excesses of 7-8% (MOS1/pn) and 5-8% (MOS2/pn). In addition, between 1 and 2
keV there is a `silicon bump', an enhancement at a level of 2-4% (MOS1/pn) and
3-5% (MOS2/pn). Tests suggest that the methods employed here are stable and
robust. The results presented here provide the most accurate cross-calibration
of the effective areas of the XMM-Newton EPIC pn and MOS instruments to date.
They suggest areas of further research where causes of the MOS-to-pn
differences might be found, and allow the potential for corrections to and
possible rectification of the EPIC cameras to be made in the future.
|
In quantum information processing, quantum operations are often processed
alongside measurements which result in classical data. Owing to the
information gain from classical measurement outputs, non-unitary dynamical
processes can take place on the system, whose time evolution common quantum
channel descriptions fail to capture. Quantum measurements are correctly
treated by means of so-called quantum instruments, capturing both classical
outputs and post-measurement quantum states. Here we present a general recipe
to characterize quantum instruments, together with their experimental implementation and
analysis. Thereby, the full dynamics of a quantum instrument can be captured,
exhibiting details of the quantum dynamics that would be overlooked with common
tomography techniques. For illustration, we apply our characterization
technique to a quantum instrument used for the detection of qubit loss and
leakage, which was recently implemented as a building block in a quantum error
correction (QEC) experiment (Nature 585, 207-210 (2020)). Our analysis reveals
unexpected and in-depth information about the failure modes of the
implementation of the quantum instrument. We then numerically study the
implications of these experimental failure modes on QEC performance, when the
instrument is employed as a building block in QEC protocols on a logical qubit.
Our results highlight the importance of careful characterization and modelling
of failure modes in quantum instruments, as compared to simplistic
hardware-agnostic phenomenological noise models, which fail to predict the
undesired behavior of faulty quantum instruments. The presented methods and
results are directly applicable to generic quantum instruments.
|
Mixup augmentation has been widely used to generate adversarial
examples with superior adversarial transferability when transferring from a
surrogate model to other models. However, the underlying mechanism influencing
the mixup's effect on transferability remains unexplored. In this work, we
posit that the adversarial examples located at the convergence of decision
boundaries across various categories exhibit better transferability and
identify that Admix tends to steer the adversarial examples towards such
regions. However, we find that the constraint on the added image in Admix
weakens its capability, resulting in limited transferability. To address this
issue, we
propose a new input transformation-based attack called Mixing the Image but
Separating the gradienT (MIST). Specifically, MIST randomly mixes the input
image with a randomly shifted image and separates the gradient of each loss
item for each mixed image. To counteract the imprecise gradient, MIST
calculates the gradient on several mixed images for each input sample.
Extensive experimental results on the ImageNet dataset demonstrate that MIST
outperforms existing SOTA input transformation-based attacks with a clear
margin on both Convolutional Neural Networks (CNNs) and Vision Transformers
(ViTs) w/wo defense mechanisms, supporting MIST's high effectiveness and
generality.
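The core loop described above, mixing the input with a randomly shifted copy, taking a gradient per mixed image, and averaging to counteract the imprecise gradient, can be sketched generically. This is a minimal sketch, not the authors' implementation: the mixing-weight range, the shift bound, and the `grad_fn` callback (standing in for the attacker's autodiff framework) are all assumptions, and the paper's separation of per-loss-item gradients is abstracted away.

```python
import numpy as np

def mist_gradient(x, grad_fn, n_mix=5, shift_max=8, rng=None):
    """Sketch of a MIST-style gradient estimate for one input image.

    x        : image array, shape (H, W, C)
    grad_fn  : callback returning dLoss/dInput for an image; stands in for
               the attacker's autodiff framework (an assumption here)
    n_mix    : number of mixed copies averaged to counteract the
               imprecise per-copy gradient
    """
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_mix):
        dy, dx = rng.integers(-shift_max, shift_max + 1, size=2)
        shifted = np.roll(x, (dy, dx), axis=(0, 1))  # randomly shifted copy
        lam = rng.uniform(0.4, 0.6)                  # mixing weight (assumed range)
        g += grad_fn(lam * x + (1 - lam) * shifted)  # gradient per mixed image
    return g / n_mix                                 # averaged gradient
```

The averaged gradient would then feed a standard iterative attack step in place of the plain input gradient.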
|
The fact that most extrasolar planets found to date are orbiting metal-rich
stars lends credence to the core accretion mechanism of gas giant planet
formation over its competitor, the disc instability mechanism. However, the
core accretion mechanism is not refined to the point of explaining orbital
parameters such as their unexpected semi-major axes and eccentricities. We
propose a model, which correlates the metallicity of the host star with the
original semi-major axis of its most massive planet, prior to migration,
considering that the core accretion scenario governs giant gas planet
formation. The model predicts that the optimum regions for planetary formation
shift inward as stellar metallicity decreases, providing an explanation for the
observed absence of long period planets in metal-poor stars. We compare our
predictions with the available data on extrasolar planets for stars with masses
similar to the mass of the Sun. A fitting procedure produces an estimate of
what we define as the Zero Age Planetary Orbit (ZAPO) curve as a function of
the metallicity of the star. The model also hints that the lack of planets
circling metal-poor stars may be partly caused by an enhanced destruction
probability during the migration process, since the planets lie initially
closer to the central stars.
|
It is shown that a recent result regarding the average rate of evolution of a
dynamical system at equilibrium, in combination with the quantization of
geometric areas coming from LQG, implies the validity of Kepler's Second Law of
planetary motion.
|
Progressive acquisition of slowly-scanned images is desirable for drift
correction and real-time visualization. Interlacing methods are common
approaches to storing and transmitting data on rectilinear grids, and here we
propose using them for acquisition in scanning-mode image modalities.
Especially in these cases, it is essential to make optimal use of sample points
to speed up the scan and reduce damage to the subject. It has long been known
that optimal sampling of band-limited signals is achieved using hexagonal
scanning grids. In this note, we demonstrate two new methods for interlacing
hexagonal grids, which enable early full field-of-view imaging with optimal
sampling and resolution doubling.
|
Generative AI, in particular text-based "foundation models" (large models
trained on a huge variety of information including the internet), can generate
speech that could be problematic under a wide range of liability regimes.
Machine learning practitioners regularly "red team" models to identify and
mitigate such problematic speech: from "hallucinations" falsely accusing people
of serious misconduct to recipes for constructing an atomic bomb. A key
question is whether these red-teamed behaviors actually present any liability
risk for model creators and deployers under U.S. law, incentivizing investments
in safety mechanisms. We examine three liability regimes, tying them to common
examples of red-teamed model behaviors: defamation, speech integral to criminal
conduct, and wrongful death. We find that any Section 230 immunity analysis or
downstream liability analysis is intimately wrapped up in the technical details
of algorithm design. And there are many roadblocks to truly finding models (and
their associated parties) liable for generated speech. We argue that AI should
not be categorically immune from liability in these scenarios and that as
courts grapple with the already fine-grained complexities of platform
algorithms, the technical details of generative AI loom above with thornier
questions. Courts and policymakers should think carefully about what technical
design incentives they create as they evaluate these issues.
|
The dynamics of fake news and rumor spreading is investigated using a model
with three kinds of agents who are respectively the Seeds, the Agnostics and
the Others. While Seeds are the ones who start spreading the rumor being
adamantly convinced of its truth, Agnostics reject any kind of rumor and do not
believe in conspiracy theories. In between, the Others constitute the main part
of the community. While Seeds are always Believers and Agnostics are always
Indifferents, Others can switch between being Believer and Indifferent
depending on who they are discussing with. The underlying driving dynamics is
implemented via local updates of randomly formed groups of agents. In each
group, an Other turns into a Believer as soon as $m$ or more Believers are
present in the group. However, since some Believers may lose interest in the
rumor as time passes by, we add a flipping fixed rate $0<d<1$ from Believers
into Indifferents. Rigorous analysis of the associated dynamics reveals that
switching from $m=1$ to $m\ge2$ triggers a drastic qualitative change in the
spreading process. When $m=1$ even a small group of Believers may manage to
convince a large part of the community very quickly. In contrast, for $m\ge 2$,
even a substantial fraction of Believers does not prevent the rumor dying out
after a few update rounds. Our results provide an explanation of why a given
rumor spreads within one social group and not in another, and also why some
rumors do not spread in either group.
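The threshold dynamics described above can be illustrated with a toy simulation. This is a minimal sketch under assumed parameter values (group size, decay rate, seed fraction), and for simplicity Agnostics are folded into permanently Indifferent agents; the point is only the qualitative gap between $m=1$ and $m\ge 2$.

```python
import random

def spread(n_agents=1000, n_seeds=20, m=1, d=0.3, group_size=4,
           rounds=60, seed=0):
    """Toy simulation of the threshold rumor model (1 = Believer, 0 = Indifferent).

    The first n_seeds agents are Seeds and remain Believers forever; the
    rest are Others. All parameter values here are illustrative assumptions.
    """
    rng = random.Random(seed)
    state = [1] * n_seeds + [0] * (n_agents - n_seeds)
    for _ in range(rounds):
        order = list(range(n_agents))
        rng.shuffle(order)
        # local updates of randomly formed groups of agents
        for i in range(0, n_agents - group_size + 1, group_size):
            group = order[i:i + group_size]
            if sum(state[j] for j in group) >= m:
                for j in group:
                    state[j] = 1  # Others turn Believer at the threshold
        # non-seed Believers lose interest at the fixed flipping rate d
        for j in range(n_seeds, n_agents):
            if state[j] == 1 and rng.random() < d:
                state[j] = 0
    return sum(state) / n_agents

f1 = spread(m=1)   # a small seed group invades a large part of the community
f2 = spread(m=2)   # the rumor stays confined near the seed fraction
```

With $m=1$ the believer fraction settles well above the 2% seed fraction, while with $m=2$ it hovers near it, mirroring the qualitative change described in the abstract.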
|
We investigate the relative time scales associated with finite future
cosmological singularities, especially those classified as Big Rip cosmologies,
and the maximum predictability time of a coupled FRW-KG scalar cosmology with
chaotic regimes. Our approach is to show that by starting with a FRW-KG scalar
cosmology with a potential that admits an analytical solution resulting in a
finite time future singularity there exists a Lyapunov time scale that is
earlier than the formation of the singularity. For this singularity both the
cosmological scale parameter a(t) and the Hubble parameter H(t) become infinite
at a finite future time, the Big Rip time. We compare this time scale to the
predictability time scale for a chaotic FRW-KG scalar cosmology. We find that
there are cases where the chaotic time scale is earlier than the Big Rip
singularity, calling for special care in interpreting and predicting the
formation of the future cosmological singularity.
|
Natural language (NL) toolkits enable visualization developers, who may not
have a background in natural language processing (NLP), to create natural
language interfaces (NLIs) for end-users to flexibly specify and interact with
visualizations. However, these toolkits currently only support one-off
utterances, with minimal capability to facilitate a multi-turn dialog between
the user and the system. Developing NLIs with such conversational interaction
capabilities remains a challenging task, requiring implementations of low-level
NLP techniques to process a new query as an intent to follow up on an older
query. We extend an existing Python-based toolkit, NL4DV, that processes an NL
query about a tabular dataset and returns an analytic specification containing
data attributes, analytic tasks, and relevant visualizations, modeled as a JSON
object. Specifically, NL4DV now enables developers to facilitate multiple
simultaneous conversations about a dataset and resolve associated ambiguities,
augmenting new conversational information into the output JSON object. We
demonstrate these capabilities through three examples: (1) an NLI to learn
aspects of the Vega-Lite grammar, (2) a mind mapping application to create
free-flowing conversations, and (3) a chatbot to answer questions and resolve
ambiguities.
|
We study the aging property for stationary models in the KPZ universality
class. In particular, we show aging for the stationary KPZ fixed point, the
Cole-Hopf solution to the stationary KPZ equation, the height function of the
stationary TASEP, last-passage percolation with boundary conditions and
stationary directed polymers in the intermediate disorder regime. All of these
models are shown to display a universal aging behavior characterized by the
rate of decay of their correlations. As a comparison, we show aging for models
in the Edwards-Wilkinson universality class where a different decay exponent is
obtained. A key ingredient in our proofs is a characteristic of space-time
stationarity, covariance-to-variance reduction, which allows one to deduce the
asymptotic behavior of the correlations between two space-time points from that
of the variances at a single point. We formulate several open problems.
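At its core, this reduction rests on the polarization identity $\mathrm{Cov}(X,Y)=\tfrac{1}{2}\big(\mathrm{Var}(X)+\mathrm{Var}(Y)-\mathrm{Var}(X-Y)\big)$, so two-point correlations follow from one-point variances once the variance of the increment is controlled. A generic numerical illustration of the identity (not the KPZ-specific argument; the distributions below are arbitrary choices):

```python
import random
from statistics import mean, variance

rng = random.Random(1)
n = 5000
x = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [0.6 * xi + rng.gauss(0.0, 0.8) for xi in x]  # correlated with x

# direct sample covariance (n - 1 denominator, matching statistics.variance)
mx, my = mean(x), mean(y)
cov_direct = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)

# covariance recovered purely from variances, via the polarization identity
cov_from_variances = (variance(x) + variance(y)
                      - variance([xi - yi for xi, yi in zip(x, y)])) / 2
```

The identity is exact for sample statistics computed with a common denominator, so the two quantities agree up to floating-point error.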
|
Theories on the bosonic nature of dark matter are a promising alternative to
the cold dark matter model. Here we consider a dark matter halo in the state of
a Bose-Einstein condensate, subject to the gravitation of a black hole. In the
low energy limit, we bring together general relativity in the Schwarzschild
metric and the quantum description of the Bose-Einstein condensate. The model
is solvable in Fermi normal coordinates with the so-called highly nonlocal
approximation and describes tidal deformations in the condensate wave function.
The black hole deforms the localized condensate until the attraction of the
compact object overcomes the self-gravitation and destabilizes the solitonic
dark matter. Moreover, the model can be implemented as a gravitational analog
in the laboratory; the time-dependent potential generated by the galactic black
hole can be mimicked by an optical trap acting on a conventional condensate.
The results open the way to new laboratory simulators for quantum gravitational
effects.
|
Nonequilibrium flows have been frequently encountered in various aerospace
engineering applications. To understand nonequilibrium physics, multiscale
effects, and the dynamics in these applications, an effective and reliable
multiscale scheme for all flow regimes is required. Following the direct
modeling methodology, the adaptive unified gas-kinetic scheme employs discrete
velocity space (DVS) to accurately capture the non-equilibrium physics,
recovering the original unified gas-kinetic scheme (UGKS), and adaptively
employs continuous distribution functions based on the Chapman-Enskog expansion
to achieve better efficiency. Different regions are dynamically coupled at the
cell interface through the fluxes from the discrete and continuous gas
distribution functions, thereby avoiding any buffer zone between them. In the
current study, an implicit adaptive unified gas-kinetic scheme (IAUGKS) is
constructed to further enhance the efficiency of steady-state solutions. The
current scheme employs implicit macroscopic governing equations and couples
them with implicit microscopic governing equations within the non-equilibrium
region, resulting in high convergence efficiency in all flow regimes. A series
of numerical tests was conducted for high Mach number flows around diverse
geometries such as a cylinder, a sphere, an X-38-like vehicle, and a space
station. The current scheme can capture the non-equilibrium physics and provide
accurate predictions of surface quantities. In comparison with the original
UGKS, the velocity space adaptation, unstructured DVS, and implicit iteration
significantly improve the efficiency by one or two orders of magnitude. Given
its exceptional efficiency and accuracy, the IAUGKS serves as an effective tool
for nonequilibrium flow simulations.
|
This is a survey paper on the theory of scattered spaces in Galois geometry
and its applications.
|
We show that a finite dimensional algebra $A$ has dominant dimension at least
$n \geq 2$ if and only if the regular bimodule $A$ is $n$-torsionfree if and
only if $A \cong \Omega^{n}(\text{Tr}(\Omega^{n-2}(V)))$ as $A$-bimodules,
where $V=\text{Hom}_A(D(A),A)$ is the canonical $A$-bimodule in the sense of
\cite{FKY}. We apply this to give new formulas for the Hochschild homology and
cohomology for algebras with dominant dimension at least two and show a new
relation between the first Tachikawa conjecture, the Nakayama conjecture and
Gorenstein homological algebra.
|
The lifting of the two-fold degeneracy of the conduction valleys in a
strained silicon quantum well is critical for spin quantum computing. Here, we
obtain an accurate measurement of the splitting of the valley states in the
low-field region of interest, using the microwave spectroscopy technique of
electron valley resonance (EVR). We compare our results with conventional
methods, observing a linear magnetic field dependence of the valley splitting,
and a strong low-field suppression, consistent with recent theory. The
resonance linewidth shows a marked enhancement above $T\simeq 300$ mK.
|
Recent developments in the theory of amorphous plasticity point to the
central role played by the concept of an effective disorder temperature
$T_{eff}$. An athermal dynamics for $T_{eff}$ is proposed in the framework of
a deformation theory and discussed in light of the recent steady state
simulations by Haxton and Liu [Phys. Rev. Lett. {\bf 99}, 195701 (2007)]. The
structure of the resulting theory, its parameters and transient dynamics are
discussed and compared to available data.
|
Harmonically modulated complex solitary waves, which are a generalized type of
envelope soliton (herein coined oscillatory solitons), are studied for the two
U(1)-invariant integrable generalizations of the modified Korteweg-de Vries
equation, given by the Hirota equation and the Sasa-Satsuma equation. A
bilinear formulation of these two equations is used to derive the oscillatory
1-soliton and 2-soliton solutions, which are then written out in a physical
form parameterized in terms of their speed, modulation frequency, and phase.
Depending on the modulation frequency, the speeds of oscillatory waves
(1-solitons) can be positive, negative, or zero, in contrast to the strictly
positive speed of ordinary solitons. When the speed is zero, an oscillatory
wave is a time-periodic standing wave. Properties of the amplitude and phase of
oscillatory 1-solitons are derived. Oscillatory 2-solitons are graphically
illustrated to describe collisions between two oscillatory 1-solitons in the
case when the speeds are distinct. In the special case of equal speeds,
oscillatory 2-solitons are shown to reduce to harmonically modulated breather
waves.
|
We propose a stochastic map model of economic dynamics. In the last decade,
an array of observations in economics has been investigated in the econophysics
literature, a major example being the universal features of inequality in terms
of income and wealth. Another area of inquiry is the formation of opinion in a
society. The proposed model attempts to produce the positively skewed and
power-law distributions observed in real income and wealth data. It also shows
a non-trivial phase transition in the opinion of a society (opinion formation).
A number of physical models also generate similar
results. In particular, the kinetic exchange models have been especially
successful in this regard. Therefore, we compare the results obtained from
these two approaches and discuss a number of new features and drawbacks of this
model.
|
In this study we present a simple model of elliptical galaxies aimed at
interpreting the gradients in colours and narrow band indices observed across
these systems. Salient features of the model are the gradients in mass density
and star formation, and the infall of primordial gas, which simulates the
collapse of a galaxy into the potential well of dark matter. Adopting a
multi-zone model we follow in detail the history of star formation, gas
consumption, and chemical enrichment of the galaxy and also allow for the
occurrence of galactic winds according to the classical supernova (and stellar
winds) energy deposit. The outline of the model, the time scale of gas
accretion and rate of star formation as a function of the galacto-centric
distance in particular, seek to closely mimic the results from Tree-SPH
dynamical models. Although some specific ingredients of the model can be
questioned from many points of view (of which we are well aware), the model
predictions should be considered a rough tool for exploring the consequences
of different recipes of gas accretion and star formation in which the simple
one-zone scheme is abandoned. With the aid of this model we discuss
the observational data on the gradients in metallicity, colour, and narrow band
indices across elliptical galaxies.
|
We investigate two source coding problems with secrecy constraints. In the
first problem we consider real-time fully secure transmission of a memoryless
source. We show that although classical variable-rate coding is not an option
since the lengths of the codewords leak information on the source, the key rate
can be as low as the average Huffman codeword length of the source. In the
second problem we consider causal source coding with a fidelity criterion and
side information at the decoder and the eavesdropper. We show that when the
eavesdropper has degraded side information, it is optimal to first use a causal
rate distortion code and then encrypt its output with a key.
|
Smile veracity classification is a task of interpreting social interactions.
Broadly, it distinguishes between spontaneous and posed smiles. Previous
approaches used hand-engineered features from facial landmarks or considered
raw smile videos in an end-to-end manner to perform smile classification tasks.
Feature-based methods require intervention from human experts on feature
engineering and heavy pre-processing steps. On the contrary, raw smile video
inputs fed into end-to-end models bring more automation to the process with the
cost of considering many redundant facial features (beyond landmark locations)
that are mainly irrelevant to smile veracity classification. It remains unclear
how to establish discriminative features from landmarks in an end-to-end manner. We
present a MeshSmileNet framework, a transformer architecture, to address the
above limitations. To eliminate redundant facial features, our landmarks input
is extracted from Attention Mesh, a pre-trained landmark detector. Again, to
discover discriminative features, we consider the relativity and trajectory of
the landmarks. For the relativity, we aggregate facial landmarks that
conceptually form a curve at each frame to establish local spatial features.
For the trajectory, we estimate the movements of landmark-composed features
across time by a self-attention mechanism, which captures pairwise dependency on
the trajectory of the same landmark. This idea allows us to achieve
state-of-the-art performances on UVA-NEMO, BBC, MMI Facial Expression, and SPOS
datasets.
|
The hybrid plasmonic waveguide consists of a high-permittivity dielectric
nanofiber embedded in a low-permittivity dielectric near a metal surface. This
architecture is considered one of the most promising candidates for
long-range subwavelength guiding. We present qualitative analysis and numerical
results which reveal advantages of the special waveguide design when dielectric
constant of the cylinder is greater than the absolute value of the dielectric
constant of the metal. In this case an arbitrary subwavelength mode size can
be achieved by controlling the gap width. Our qualitative analysis is based on
consideration of sandwich-like conductor-gap-dielectric system. The numerical
solution is obtained by expansion of the hybrid plasmonic mode over single
cylinder modes and the surface plasmon-polariton modes of the metal screen and
matching the boundary conditions.
|
We use the in-in or Schwinger-Keldysh formalism to explore the construction
and interpretation of effective field theories for time-dependent systems
evolving out of equilibrium. Starting with a simple model consisting of a heavy
and a light scalar field taken to be in their free vacuum states at a finite
initial time, we study the effects from the heavy field on the dynamics of the
light field by analyzing the equation of motion for the expectation value of
the light background field. New terms appear which cannot arise from a local
action of an effective field theory in terms of the light field, though they
disappear in the adiabatic limit. We discuss the origins of these terms as well
as their possible implications for time dependent situations such as inflation.
|
We analyze the decomposition rank (a notion of covering dimension for nuclear
$C^*$-algebras introduced by E. Kirchberg and the author) of subhomogeneous
$C^*$-algebras. In particular we show that a subhomogeneous $C^*$-algebra has
decomposition rank $n$ if and only if it is recursive subhomogeneous of
topological dimension $n$ and that $n$ is determined by the primitive ideal
space. As an application, we use recent results of Q. Lin and N. C. Phillips to
show the following: Let $A$ be the crossed product $C^*$-algebra coming from a
compact smooth manifold and a minimal diffeomorphism. Then the decomposition
rank of $A$ is dominated by the covering dimension of the underlying manifold.
|
Convolutional Neural Networks (CNNs) have dominated computer vision for
years, due to their ability to capture locality and translation invariance.
Recently, many vision transformer architectures have been proposed and they
show promising performance. A key component in vision transformers is the
fully-connected self-attention which is more powerful than CNNs in modelling
long range dependencies. However, since the current dense self-attention uses
all image patches (tokens) to compute the attention matrix, it may neglect the
locality of image patches and involve noisy tokens (e.g., cluttered background and
occlusion), leading to a slow training process and potential degradation of
performance. To address these problems, we propose the $k$-NN attention for
boosting vision transformers. Specifically, instead of involving all the tokens
for attention matrix calculation, we only select the top-$k$ similar tokens
from the keys for each query to compute the attention map. The proposed $k$-NN
attention naturally inherits the local bias of CNNs without introducing
convolutional operations, as nearby tokens tend to be more similar than others.
In addition, the $k$-NN attention allows for the exploration of long range
correlation and at the same time filters out irrelevant tokens by choosing the
most similar tokens from the entire image. Despite its simplicity, we verify,
both theoretically and empirically, that $k$-NN attention is powerful in
speeding up training and distilling noise from input tokens. Extensive
experiments are conducted by using 11 different vision transformer
architectures to verify that the proposed $k$-NN attention can work with any
existing transformer architectures to improve their prediction performance. The
codes are available at \url{https://github.com/damo-cv/KVT}.
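The top-$k$ selection step can be sketched for a single query in plain Python (the function name and the toy scores are our own illustration, not the paper's code):

```python
import math

def knn_attention_weights(scores, k):
    """Softmax restricted to the k largest query-key scores.

    All tokens outside the top-k receive exactly zero attention weight,
    which is the filtering effect described above.
    """
    topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    m = max(scores[i] for i in topk)            # subtract max for stability
    exp = {i: math.exp(scores[i] - m) for i in topk}
    z = sum(exp.values())
    return [exp.get(i, 0.0) / z for i in range(len(scores))]

# the two most similar keys (indices 3 and 0) share all the attention mass
w = knn_attention_weights([2.0, -1.0, 0.5, 3.0, -2.0], k=2)
```

In a full implementation the same selection is applied to each query row of the attention matrix before the softmax; dense attention is recovered by setting $k$ equal to the number of tokens.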
|
Most research aimed at measuring biomarkers on the skin is only concerned
with sensing chemicals in sweat using electrical signals, but these methods are
neither truly non-invasive nor non-intrusive because they require substantial
amounts of sweat to get a reading. This project aims to create a truly
non-invasive wearable sensor that continuously detects the gaseous acetone (a
biomarker related to metabolic disorders) that ambiently comes out of the skin.
Composite films of polyaniline and cellulose acetate, exhibiting
chemo-mechanical actuation upon exposure to gaseous acetone, were tested in the
headspaces above multiple solutions containing acetone, ethanol, and water to
gauge response sensitivity, selectivity, and repeatability. The bending of the
films in response to exposures to these environments was tracked by an
automatic video processing code, which was found to outperform an
off-the-shelf deep neural network-based tracker. Using principal component
analysis, we showed that the film bending is low dimensional with over 90% of
the shape changes being captured with just two parameters. We constructed
forward models to predict shape changes from the known exposure history and
found that a linear model can explain 40% of the observed variance in film tip
angle changes. We constructed inverse models, going from third order fits of
shape changes to acetone concentrations where about 45% of the acetone
variation and about 30% of ethanol variation are captured by linear models, and
non-linear models did not perform substantially better. This suggests there is
sufficient sensitivity and inherent selectivity of the films. These models,
however, provide evidence for substantial hysteretic or long-time-scale
responses of the PANI films, seemingly due to the presence of water. Further
experiments will allow more accurate discrimination of unknown exposure
environments.
|
The relativistic field theory model of the deuteron (RFMD) is reformulated
from the first principles of QCD. The deuteron appears as a neutron-proton
collective excitation, i.e. a Cooper np-pair, induced by a phenomenological
local four-nucleon interaction in the nuclear phase of QCD. The RFMD describes
the deuteron coupled to hadrons through one-nucleon loop exchanges providing a
minimal transfer of nucleon flavours from initial to final nuclear states and
accounting for contributions of nucleon-loop anomalies which are completely
determined by one-nucleon loop diagrams. The dominance of contributions of
nucleon-loop anomalies to effective Lagrangians of low-energy nuclear
interactions is justified in the large N expansion, where N is the number of
quark colours.
|
The perturbative integral method was applied to quantify the contribution of
external forces during a specific interval of time in trajectories of
spacecraft around asteroids and under the Luni-solar influence. However, this
method has not been used to quantify the contributions of drag in aerocapture
and aerobraking. For this reason, the planet Mars is selected to apply this
method during an aerogravity-assisted maneuver. Several trajectories are
analyzed, making use of a drag device with area-to-mass ratios varying from 0.0
to 20.0 m^2/kg, simulating solar sails or de-orbit devices. The mathematical
model is based on the restricted three-body problem. The use of this maneuver
makes it possible to obtain the variations of energy in the trajectory,
replacing expensive maneuvers based on fuel consumption. To observe the effects
of the maneuvers, different values of pericenter velocity and altitude were
selected for prograde and retrograde orbits. The innovation of this research is
the application of an integral method to quantify the delta-V of the
aerogravity-assisted maneuver, comparing its cost with the traditional
methods of space propulsion. The results allow the identification of orbits
with conditions to capture, and the perturbative maps show the velocity
variations.
|
Let $X(\mathbb{R})$ be a separable Banach function space such that the
Hardy-Littlewood maximal operator $M$ is bounded on $X(\mathbb{R})$ and on its
associate space $X'(\mathbb{R})$. Suppose $a$ is a Fourier multiplier on the
space $X(\mathbb{R})$. We show that the Fourier convolution operator $W^0(a)$
with symbol $a$ is compact on the space $X(\mathbb{R})$ if and only if $a=0$.
This result implies that nontrivial Fourier convolution operators on Lebesgue
spaces with Muckenhoupt weights are never compact.
|
Existing statistical approaches to natural language problems are very coarse
approximations to the true complexity of language processing. As such, no
single technique will be best for all problem instances. Many researchers are
examining ensemble methods that combine the output of successful, separately
developed modules to create more accurate solutions. This paper examines three
merging rules for combining probability distributions: the well known mixture
rule, the logarithmic rule, and a novel product rule. These rules were applied
with state-of-the-art results to two problems commonly used to assess human
mastery of lexical semantics -- synonym questions and analogy questions. All
three merging rules result in ensembles that are more accurate than any of
their component modules. The differences among the three rules are not
statistically significant, but it is suggestive that the popular mixture rule
is not the best rule for either of the two problems.
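For discrete distributions over answer choices, the three merging rules can be sketched as follows. This is a hedged reading: the weighting details of the paper's product rule are an assumption here (each module's distribution is mixed with the uniform distribution before multiplying), and the weights are taken to sum to one.

```python
import math

def normalize(p):
    z = sum(p)
    return [pi / z for pi in p]

def mixture(dists, weights):
    # weighted arithmetic mean of the module distributions
    n = len(dists[0])
    return normalize([sum(w * d[i] for w, d in zip(weights, dists))
                      for i in range(n)])

def logarithmic(dists, weights):
    # weighted geometric mean, renormalized
    n = len(dists[0])
    return normalize([math.prod(d[i] ** w for w, d in zip(weights, dists))
                      for i in range(n)])

def product(dists, weights):
    # assumed form: smooth each module toward uniform, then multiply
    n = len(dists[0])
    return normalize([math.prod(w * d[i] + (1 - w) / n
                                for w, d in zip(weights, dists))
                      for i in range(n)])

p1 = [0.7, 0.2, 0.1]   # two modules scoring three candidate answers
p2 = [0.3, 0.4, 0.3]
merged = {r.__name__: r([p1, p2], [0.5, 0.5])
          for r in (mixture, logarithmic, product)}
```

All three rules return proper distributions, but they weight module agreement differently: the logarithmic and product rules punish candidates that any module scores near zero, whereas the mixture rule averages such disagreements away.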
|
Training a text-to-image generator in the general domain (e.g., Dall.e,
CogView) requires huge amounts of paired text-image data, which is too
expensive to collect. In this paper, we propose a self-supervised scheme named
CLIP-GEN for general text-to-image generation with the language-image priors
extracted with a pre-trained CLIP model. In our approach, we only require a set
of unlabeled images in the general domain to train a text-to-image generator.
Specifically, given an image without text labels, we first extract the
embedding of the image in the unified language-vision embedding space with the
image encoder of CLIP. Next, we convert the image into a sequence of discrete
tokens in the VQGAN codebook space (the VQGAN model can be trained with the
unlabeled image dataset in hand). Finally, we train an autoregressive
transformer that maps the image tokens from its unified language-vision
representation. Once trained, the transformer can generate coherent image
tokens based on the text embedding extracted from the text encoder of CLIP upon
an input text. Such a strategy enables us to train a strong and general
text-to-image generator with a large text-free image dataset such as ImageNet.
Qualitative and quantitative evaluations verify that our method significantly
outperforms optimization-based text-to-image methods in terms of image quality
while not compromising the text-image matching. Our method can even achieve
comparable performance as flagship supervised models like CogView.
|
Traditionally, 802.11-based networks that relied on Wired Equivalent Privacy
(WEP) were especially vulnerable to packet sniffing. Today, wireless networks
are more prolific, and the monitoring devices used to find them are mobile and
easy to access. Securing wireless networks can be difficult because these
networks consist of radio transmitters and receivers, and anybody can listen,
capture data and attempt to compromise it. In recent years, a range of
technologies and mechanisms have helped make networking more secure. This paper
holistically evaluated various enhanced protocols proposed to solve WEP related
authentication, confidentiality and integrity problems. It discovered that the
strength of each solution depends on how well the encryption, authentication
and integrity techniques work. The work suggested using a Defence-in-Depth
Strategy and integration of biometric solution in 802.11i. Comprehensive
in-depth comparative analysis of each of the security mechanisms is driven by
review of related work in WLAN security solutions.
|
We consider the additional entropy production (EP) incurred by a fixed
quantum or classical process on some initial state $\rho$, above the minimum EP
incurred by the same process on any initial state. We show that this additional
EP, which we term the "mismatch cost of $\rho$", has a universal
information-theoretic form: it is given by the contraction of the relative
entropy between $\rho$ and the least-dissipative initial state $\varphi$ over
time. We derive versions of this result for integrated EP incurred over the
course of a process, for trajectory-level fluctuating EP, and for instantaneous
EP rate. We also show that mismatch cost for fluctuating EP obeys an integral
fluctuation theorem. Our results demonstrate a fundamental relationship between
"thermodynamic irreversibility" (generation of EP) and "logical
irreversibility" (inability to know the initial state corresponding to a given
final state). We use this relationship to derive quantitative bounds on the
thermodynamics of quantum error correction and to propose a
thermodynamically-operationalized measure of the logical irreversibility of a
quantum channel. Our results hold for both finite and infinite dimensional
systems, and generalize beyond EP to many other thermodynamic costs, including
nonadiabatic EP, free energy loss, and entropy gain.
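In the classical finite-dimensional case, the integrated version of this statement can be made concrete: the mismatch cost of $\rho$ is $D(\rho\|\varphi)-D(T\rho\|T\varphi)$, the contraction of relative entropy under the process $T$, which is non-negative by the data-processing inequality. A toy sketch (the channel and the states below are arbitrary illustrative choices, not taken from the paper):

```python
import math

def kl(p, q):
    """Classical relative entropy D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def apply_channel(T, p):
    """Apply a column-stochastic matrix: out[j] = sum_i T[j][i] * p[i]."""
    return [sum(T[j][i] * p[i] for i in range(len(p))) for j in range(len(T))]

T = [[0.9, 0.2],        # a noisy two-state process
     [0.1, 0.8]]
rho = [0.7, 0.3]        # actual initial state
phi = [0.4, 0.6]        # least-dissipative initial state for the process

before = kl(rho, phi)
after = kl(apply_channel(T, rho), apply_channel(T, phi))
mismatch_cost = before - after   # extra EP incurred by starting in rho
```

The noisier the channel, the more the relative entropy contracts and the larger the mismatch cost; a reversible (permutation) channel would give zero.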
|
The Hyades constitute a homogeneous sample of stars ideal for investigating
the dependence of planet formation on the mass of the central star. Due to
their youth, Hyades members are much more chromospherically active than stars
traditionally surveyed for planets using high precision radial velocity (RV)
techniques. Therefore, we have conducted a detailed investigation of whether
magnetic activity of our Hyades target stars will interfere with our ability to
make precise RV searches for substellar companions. We measure chromospheric
activity (which we take as a proxy for magnetic activity) by computing the
equivalent of the R'HK activity index from the Ca II K line. <R'HK> is not
constant in the Hyades: we confirm that it decreases with increasing
temperature in the F stars, and also find it decreases for stars cooler than
mid-K. We examine correlations between simultaneously measured R'HK and RV
using both a classical statistical test and a Bayesian odds ratio test. We find
that there is a significant correlation between R'HK and the RV in only 5 of
the 82 stars in this sample. Thus, simple R'HK-RV correlations will
generally not be effective in correcting the measured RV values for the effects
of magnetic activity in the Hyades. We argue that this implies long timescale
activity variations (of order a few years; i.e., magnetic cycles or growth and
decay of plage regions) will not significantly hinder our search for planets in
the Hyades if the stars are closely monitored for chromospheric activity. The
trends in the RV scatter (sigma'_v) with <R'HK>, vsini, and P_rot for our stars
is generally consistent with those found in field stars in the Lick planet
search data, with the notable exception of a shallower dependence of sigma'_v
on <R'HK> for F stars.
|
Fourier transform power spectra of major axis cuts in V and Halpha images
were made for a sample of 9 irregular galaxies. These power spectra reveal
structure over a wide range of scales. For 6 of the galaxies the power spectrum
slopes at intermediate scales (1-400 pc) in the V-band images range from -1.3
to -1.5. The similarity of slopes suggests that the same processes are
structuring these systems. These slopes are slightly shallower than what is
observed in other galaxies in HI, molecular emission, dust extinction, and
optical light. Three of the galaxies have flat power spectra like noise from
the sky; these three galaxies are relatively indistinct in the direct images.
The power spectrum slope for Halpha steepens with increasing star formation
rate, ranging from a shallow value comparable to the noise at low rates to a
steep value with a slope of -1.5 at high rates. This change reflects the
increasing areal filling factor of Halpha emission with increasing star
formation rate, and an apparently universal slope inside the Halpha regions
that is comparable to that for Kolmogorov turbulence. The power spectrum of HI
in one galaxy has a steeper power law, with a slope of -2.9. The fact that the
power laws of star formation are about the same for dwarf galaxies and giant
spiral galaxies suggests the microscopic processes are the same, independent of
spiral density waves and galaxy size.
|
The thermal stability in nanostructured magnetic systems is an important
issue for applications in information storage. From a theoretical and
simulation perspective, an accurate prediction of thermally-activated
transitions is a challenging problem because desired retention times are on the
order of 10 years, while the characteristic time scale for precessional
magnetization dynamics is of the order of nanoseconds. Here, we present a
theoretical study of the thermal stability of magnetic elements in the form of
perpendicularly-magnetized ferromagnetic disks using the forward flux sampling
method, which is useful for simulating rare events. We demonstrate how rates of
thermally-activated switching between the two uniformly-magnetized ``up'' and
``down'' states, which occurs through domain wall nucleation and propagation,
vary with the interfacial Dzyaloshinskii-Moriya interaction, which affects the
energy barrier separating these states. Moreover, we find that the average
lifetimes differ by several orders of magnitude from estimates based on the
commonly assumed value of 1 GHz for the attempt frequency.
|
The ability to capture different levels of abstraction in a system model is
especially important for remote integration, testing/verification, and
manufacturing of cyber-physical systems (CPSs). However, the complexity of
modelling and testing of CPSs makes these processes extremely prone to human
error. In this paper we present our ongoing work on introducing human-centred
considerations into modelling and testing of CPSs, which allow for agile
iterative refinement processes of different levels of abstraction when errors
are discovered or missing information is completed.
|
A distinguishing characteristic of wireless sensor networks is the
opportunity to exploit characteristics of the application at lower layers. This
paper reports on the results of a simulation comparison, using the J-Sim
simulator, of four proposed data dissemination protocols for WSNs: Forwarding
Diffusion Data Dissemination (FDDDP), Decentralized Data Dissemination (DDDP),
Credit Broadcast Data Dissemination (CBDDP), and Energy Aware & Geographical
Data Dissemination (EAGDDP). Our performance study provides useful insights for
the network designer, such as which protocols (and design choices) scale
control traffic well, improve data delivery, reduce overall energy consumption
and routing overhead, and maximize bandwidth utilization. The static
pre-configuration of the cell size in DDDP is one of the reasons why DDDP
exhibits larger routing overhead than FDDDP, by 74.2% on average. Although
CBDDP produces approximately 94.6% smaller overhead than DDDP and 90.7% smaller
than FDDDP, because of its statically configured credit amount CBDDP delivers
on average 7.5 times more redundant data packets than DDDP and FDDDP. EAGDDP
improves delivery by 80% on average and balances energy consumption. We suggest
that making these protocols truly self-learning could significantly improve
their performance.
|
The composition of cometary ices provides key information on the thermal and
chemical properties of the outer parts of the protoplanetary disk where they
formed 4.6 Gy ago. This chapter reviews our knowledge of composition of
cometary comae based on remote spectroscopy and in-situ investigation
techniques. Cometary comae can be dominated by water vapour, CO or CO2. The
abundances of several dozen molecules, with a growing number of complex
organics, have been measured in comets. Many species that are not directly
sublimating from the nucleus ices have also been observed and traced out into
the coma in order to determine their production mechanisms. Chemical diversity
in the comet population and compositional heterogeneity of the coma are
discussed. With the completion of the Rosetta mission, isotopic ratios, which
hold additional clues on the origin of cometary material, have been measured in
several species. Finally, important pending questions (e.g., the nitrogen
deficiency in comets) and the need for further work in certain critical areas
are discussed in order to answer questions and resolve discrepancies between
techniques.
|
We consider the problem of optimizing the design of a heat sink used for
cooling an insulated gate bipolar transistor (IGBT) power module. The thermal
behavior of the heat sink is originally estimated using a high-fidelity
computational fluid dynamics (CFD) simulation, which renders numerical
optimization too computationally demanding. To enable optimization studies, we
substitute the CFD simulation model with an inexpensive polynomial surrogate
model that approximates the relation between the device's design features and a
relevant thermal quantity of interest. The surrogate model of choice is a
data-driven polynomial chaos expansion (DD-PCE), which learns the
aforementioned relation by means of polynomial regression. Advantages of the
DD-PCE include its applicability in small-data regimes and its easily adaptable
model structure. To address the issue of model-form uncertainty and model
robustness in view of limited training and test data, ensembles of DD-PCEs are
generated based on data re-shuffling. Then, using the full ensemble of
surrogate models, the surrogate-based predictions are accompanied by
uncertainty metrics such as mean value and variance. Once trained and tested in
terms of accuracy and robustness, the ensemble of DD-PCE surrogates replaces
the high-fidelity simulation model in optimization algorithms aiming to
identify heat sink designs that optimize the thermal behavior of the IGBT under
geometrical and operational constraints. Optimized heat sink designs are
obtained for a computational cost much smaller than utilizing the original
model in the optimization procedure. Due to ensemble modeling, the optimization
results can also be assessed in terms of uncertainty and robustness.
Comparisons against alternative surrogate modeling techniques illustrate why
the DD-PCE should be preferred in the considered setting.
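The ensemble-of-surrogates idea can be sketched in a few lines. Below, a hypothetical one-dimensional simulator stands in for the CFD model, and plain least-squares polynomial regression stands in for the DD-PCE fitting step; only the re-shuffling/ensemble logic and the mean/variance uncertainty metrics mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "simulator": design feature x -> thermal quantity of interest y.
def simulator(x):
    return 1.0 + 2.0 * x + 0.5 * x**2

x_train = rng.uniform(-1.0, 1.0, size=40)
y_train = simulator(x_train) + 0.05 * rng.normal(size=x_train.size)

def fit_surrogate(x, y, degree=2):
    """Fit a polynomial surrogate by least-squares regression."""
    return np.polynomial.polynomial.polyfit(x, y, degree)

# Ensemble of surrogates generated by re-shuffling the training data.
n_members, coefs = 20, []
for _ in range(n_members):
    idx = rng.permutation(x_train.size)[:30]      # re-shuffled 75 % subset
    coefs.append(fit_surrogate(x_train[idx], y_train[idx]))

# Ensemble prediction with uncertainty metrics at a new design point.
x_new = 0.3
preds = np.array([np.polynomial.polynomial.polyval(x_new, c) for c in coefs])
mean, var = preds.mean(), preds.var()
```

In an optimization loop, `mean` replaces the expensive simulation output while `var` flags design regions where the surrogate ensemble disagrees and predictions should be trusted less.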
|
The exact nature of the lowest $K^\pi =2_\gamma ^+$ rotational bands in all
deformed nuclei remains obscure. Traditionally they are assumed to be
collective vibrations of the nuclear shape in the $\gamma$ degree of freedom
perpendicular to the nuclear symmetry axis. Very few such $\gamma$-bands have
been traced past the usual back-bending rotational alignments of high-j
nucleons. We have investigated the structure of positive-parity bands in the
N=90 nucleus 156Dy, using the 148Nd(12C,4n)156Dy reaction at 65 MeV, observing
the resulting ${\gamma}$-ray transitions with the Gammasphere array. The even-
and odd-spin members of the $K^\pi =2_\gamma^+$ $\gamma$-band are observed to
32+ and 31+ respectively. This rotational band faithfully tracks the
ground-state configuration to the highest spins. The members of a possible
$\gamma$-vibration built on the aligned yrast S-band are observed to spins 28+
and 27+. An even-spin positive-parity band, observed to spin 24+, is a
candidate for an aligned S-band built on the seniority-zero configuration of
the $0_2^+$ state at 676 keV. The crossing of this band with the $0_2^+$ band
is at $\hbar\omega$= 0.28(1) MeV and is consistent with the configuration of
the $0_2^+$ band not producing any blocking of the monopole pairing.
|
Optomechanical systems provide a pathway for the bidirectional
optical-to-microwave interconversion in (quantum) networks. We demonstrate the
implementation of this functionality and non-adiabatic optomechanical control
in a single, $\mu$m-sized potential trap for phonons and exciton-polariton
condensates in a structured semiconductor microcavity. The exciton-enhanced
optomechanical coupling leads to self-oscillations (phonon lasing) -- thus
proving reversible photon-to-phonon conversion. We show that these oscillations
are a signature of the optomechanical strong coupling, signaling the emergence
of elusive phonon-exciton-photon quasiparticles -- the phonoritons. We then
demonstrate full control of the phonoriton spectrum as well as coherent
microwave-to-photon interconversion using electrically generated GHz-vibrations
and a resonant optical laser beam. These findings establish the
zero-dimensional polariton condensates as a scalable coherent interface between
microwave and optical domains with enhanced microwave-to-mechanical and
mechanical-to-optical coupling rates.
|
We trace several dusty infrared sources on their orbit around the
supermassive black hole (SMBH) SgrA* in the center of our galaxy. We give an
overview of known and unknown sources in the direct vicinity of our SMBH in a
radius of around 0.04pc. For that, we are using NACO (K- and L'-band) and
SINFONI (H+K-band) data (VLT, Chile/Paranal) between 2002 and 2018. Our
spectroscopic analysis reveals a Doppler-shifted line emission of Br_gamma and
HeI. Additionally, we report the detection of [FeIII] lines that are found
exclusively in the investigated dusty sources west of SgrA*. We speculate that
the known [FeIII] emission in the GC is partially generated by the line
emission of the dusty sources investigated in this work. However, we extend our
analysis of the GC by taking the bright Br_gamma-bar close (< 120 mas) to SgrA*
into account. The finding of this feature is in line with a reported SgrA*
X-ray bubble that consists of an open side towards G359.945-0.044
(North-West-West direction). The location of the open side of this X-ray bubble
coincides with the emission of the bright Br_gamma-bar detected in our SINFONI
data-cubes.
|
We discuss the prospects for indirect detection of dark matter (DM) with the
Cherenkov Telescope Array (CTA), a future ground-based gamma-ray observatory
that will be sensitive to gamma rays in the energy range from a few tens of GeV
to 100 TeV. We consider the detectability of DM annihilation in different
astrophysical targets with a focus on the Galactic Center (GC) region. With a
deep observation of the GC, CTA will be sensitive to DM particles with mass
greater than 100 GeV and an annihilation cross section close to the thermal
relic value.
|
We present an overview of the analysis of the multiloop topologies that
appear for the first time at four loops and the assembly of them in a general
expression, the N$^4$MLT universal topology. Based on the fact that the
Loop-Tree Duality enables any scattering amplitude to be opened into
convolutions of known subtopologies, we go through the dual representation of
the universal N$^4$MLT topology and the manifestly causal representation.
Additionally, we present the application of a quantum algorithm as an
alternative methodology to identify the causal singular configurations of
multiloop Feynman diagrams.
|
Open-vocabulary instance segmentation aims at segmenting novel classes
without mask annotations. It is an important step toward reducing laborious
human supervision. Most existing works first pretrain a model on captioned
images covering many novel classes and then finetune it on limited base classes
with mask annotations. However, the high-level textual information learned from
caption pretraining alone cannot effectively encode the details required for
pixel-wise segmentation. To address this, we propose a cross-modal
pseudo-labeling framework, which generates training pseudo masks by aligning
word semantics in captions with visual features of object masks in images.
Thus, our framework is capable of labeling novel classes in captions via their
word semantics to self-train a student model. To account for noise in pseudo
masks, we design a robust student model that selectively distills mask
knowledge by estimating the mask noise levels, hence mitigating the adverse
impact of noisy pseudo masks. Through extensive experiments, we show the
effectiveness of our framework, where we significantly improve mAP score by
4.5% on MS-COCO and 5.1% on the large-scale Open Images & Conceptual Captions
datasets compared to the state-of-the-art.
|
Optimal beamforming designs under imperfect successive interference
cancellation (SIC) decoding for a symbiotic network of non-orthogonal multiple
access (NOMA) primary users and a secondary ambient tag have been lacking. We
address that issue here. The primary base station (BS) serves NOMA users and a
passive tag simultaneously in this network. We develop two transmit beamforming
designs to meet the user and tag requirements while mitigating the effect of
imperfect SIC. Specifically, we design optimal BS transmit beamforming and
power allocation to either maximize the weighted sum rate of NOMA users and the
tag or minimize the BS transmit power under the minimum rate requirements while
satisfying the tag minimum energy requirement. Because both these problems are
non-convex, we propose algorithms using alternating optimization, fractional
programming, and semi-definite relaxation techniques. We also analyze their
computational complexity. Finally, we present extensive numerical results to
validate the proposed schemes and to show significant performance gains while
keeping the tag design intact. For example, the proposed digital beamforming
increases the harvested power and data rate by 2.16e3 % and 314.5 % compared to
random beamforming.
|
The self-interaction force of dislocation curves in metals depends on the
local arrangement of the atoms and on the nonlocal interaction between
dislocation curve segments. While these nonlocal segment-segment interactions
can be accurately described by linear elasticity when the segments are further
apart than the atomic scale of size $\varepsilon$, this model breaks down and
blows up when the segments are $O(\varepsilon)$ apart. To separate the nonlocal
interactions from the local contribution, various models depending on
$\varepsilon$ have been constructed to account for the nonlocal term. However,
there are no quantitative comparisons available between these models. This
paper makes such comparisons possible by expanding the self-interaction force
in these models in $\varepsilon$ beyond the $O(1)$-term. Our derivation of
these expansions relies on asymptotic analysis. The practical use of these
expansions is demonstrated by developing numerical schemes for them, and by --
for the first time -- bounding the corresponding discretization error.
|
We used resonant inelastic x-ray scattering (RIXS) with and without analysis
of the scattered photon polarization, to study dispersive spin excitations in
the high temperature superconductor YBa2Cu3O6+x over a wide range of doping
levels (0.1 < x < 1). The excitation profiles were carefully monitored as the
incident photon energy was detuned from the resonant condition, and the spin
excitation energy was found to be independent of detuning for all x. These
findings demonstrate that the largest fraction of the spin-flip RIXS profiles
in doped cuprates arises from magnetic collective modes, rather than from
incoherent particle-hole excitations as recently suggested theoretically
[Benjamin et al. Phys. Rev. Lett. 112, 247002(2014)]. Implications for the
theoretical description of the electron system in the cuprates are discussed.
|
We present microscopic, multiple Landau level, (frustration-free and positive
semi-definite) parent Hamiltonians whose ground states, realizing different
quantum Hall fluids, are parton-like and whose excitations display either
Abelian or non-Abelian braiding statistics. We prove ground state energy
monotonicity theorems for systems with different particle numbers in multiple
Landau levels, demonstrate S-duality in the case of toroidal geometry, and
establish complete sets of zero modes of special Hamiltonians stabilizing
parton-like states. The emergent Entangled Pauli Principle (EPP), introduced in
Phys. Rev. B 98, 161118(R) (2018) and which defines the ``DNA'' of the quantum
Hall fluid, is behind the exact determination of the topological
characteristics of the fluid, including charge and braiding statistics of
excitations, and effective edge theory descriptions. When the closed-shell
condition is satisfied, the densest (i.e., the highest density and lowest total
angular momentum) zero-energy mode is a unique parton state. We conjecture that
parton-like states generally span the subspace of many-body wave functions with
the two-body $M$-clustering property within any given number of Landau levels.
General arguments are supplemented by rigorous considerations for the $M=3$
case of fermions in four Landau levels. For this case, we establish that the
zero mode counting can be done by enumerating certain patterns consistent with
an underlying EPP. We apply the coherent state approach to show that the
elementary (localized) bulk excitations are Fibonacci anyons. This demonstrates
that the DNA associated with fractional quantum Hall states encodes all
universal properties. Specifically, for parton-like states, we establish a link
with tensor network structures of finite bond dimension that emerge via root
level entanglement.
|
We show that the difference between the genus and the stable topological
4-genus of alternating knots is either zero or at least 1/3.
|
Non-reciprocal photonic devices are essential components of classical optical
information processing. It is interesting and important to investigate their
feasibility in the quantum world. In this work, the quantum properties of an
on-chip silicon nitride (SiN)-based magneto-optical (MO) isolator were studied
using a single-photon non-reciprocal dynamical transmission experiment. The
measured isolation ratio achieved for single photons was 12.33 dB, which proved
the functionality of our on-chip isolator. The quantum coherence of the passing
single photons was further verified using high-visibility quantum interference.
Our work will promote on-chip isolators within the integrated quantum circuits
and help introduce novel phenomena in quantum information processes.
|
Let $f$ and $g$ be functions, not identically zero, in the Fock space $F^2$
of $C_n$. We show that the product $T_fT_{\bar g}$ of Toeplitz operators on
$F^2$ is bounded if and only if $f(z)=e^{q(z)}$ and $g(z)=ce^{-q(z)}$, where
$c$ is a nonzero constant and $q$ is a linear polynomial.
|
Quantum critical behavior in 2+1 dimensions is established via holographic
methods in a 5+1-dimensional Einstein gravity theory with gauge potential form
fields of rank 1 and 2. These fields are coupled to one another via a
tri-linear Chern-Simons term with strength k. The quantum phase transition is
physically driven by the expulsion of the electric charge from inside the black
brane horizon to the outside, where it gets carried by the gauge fields which
acquire charge thanks to the Chern-Simons interaction. At a critical value
k=k_c, zero temperature, and any finite value of the magnetic field, the IR
behavior is governed by a near-horizon Lifshitz geometry. The associated
dynamical scaling exponent depends on the magnetic field. For k<k_c, the flow
towards low temperature is governed by a Reissner-Nordstrom-like black brane
whose charge and entropy density are non-vanishing at zero temperature. For k >
k_c, the IR flow is towards the purely magnetic brane in AdS_6. Its
near-horizon geometry is AdS_4 \times R^2, so that the entropy density vanishes
quadratically with temperature, and all charge is carried by the gauge fields
outside of the horizon.
|
In this paper we propose a systematic method to solve the inverse dynamical
problem for a quantum system governed by the von Neumann equation: to find a
class of Hamiltonians reproducing a prescribed time evolution of a pure or
mixed state of the system. Our approach exploits the equivalence between an
action of the group of evolution operators over the state space and an adjoint
action of the unitary group over Hermitian matrices. The method is illustrated
by two examples involving a pure and a mixed state.
|
In this paper we aim to push the analogy between thermodynamics and quantum
resource theories one step further. Previous inspirations were based
predominantly on thermodynamic considerations concerning scenarios with a
single heat bath, neglecting an important part of thermodynamics that studies
heat engines operating between two baths at different temperatures. Here, we
investigate the performance of resource engines, which replace the access to
two heat baths at different temperatures with two arbitrary constraints on
state transformations. The idea is to imitate the action of a two-stroke heat
engine, where the system is sent to two agents (Alice and Bob) in turns, and
they can transform it using their constrained sets of free operations. We raise
and address several questions, including whether or not a resource engine can
generate a full set of quantum operations or all possible state
transformations, and how many strokes are needed for that. We also explain how
the resource engine picture provides a natural way to fuse two or more resource
theories, and we discuss in detail the fusion of two resource theories of
thermodynamics with two different temperatures, and two resource theories of
coherence with respect to two different bases.
|
Many data sets contain an inherent multilevel structure, for example, because
of repeated measurements of the same observational units. Taking this structure
into account is critical for the accuracy and calibration of any statistical
analysis performed on such data. However, the large number of possible model
configurations hinders the use of multilevel models in practice. In this work,
we propose a flexible framework for efficiently assessing differences between
the levels of given grouping variables in the data. The assessed group
heterogeneity is valuable in choosing the relevant group coefficients to
consider in a multilevel model. Our empirical evaluations demonstrate that the
framework can reliably identify relevant multilevel components in both
simulated and real data sets.
|
Young stars are formed within dusty discs. The grains in the disc are
originally of the same size as interstellar dust. Models predict that these
grains will grow in size through coagulation. Observations of the silicate
features at micron wavelengths are consistent with growth to micron sizes
whereas the slope of the SED at longer wavelengths traces growth up to mm
sizes. Here we look for a correlation between these two grain growth
indicators. A large sample of T-Tauri and Herbig-Ae/Be stars was observed with
the Spitzer Space Telescope at 5-13 micron; a subsample was observed at mm
wavelengths. We complement this subsample with data from the literature to
maximise the overlap between micron and mm observations and search for
correlations. Synthetic spectra are produced to determine which processes may
produce the dust evolution. Dust disc masses in the range <1 to 7 x 10^-4 MSun
are obtained. Most sources have a mm spectral slope consistent with grain
growth. There is a tentative correlation between the 10-micron silicate feature
and the mm slope of the SED. The observed sources seem to be grouped per
star-forming region in the micron-vs-mm diagram. The modelling results show
that the 10-micron feature becomes flatter and subsequently the mm slope
becomes shallower. Grain size distributions shallower than that of the ISM
and/or bright central stars are required to explain specific features. Settling
of larger grains towards the disc midplane affects the 10-micron feature, but
hardly the mm slope. The tentative correlation between the strength of the
10-micron feature and the mm slope suggests that the inner and outer disc
evolve simultaneously. Dust with a mass dominated by mm-sized grains is
required to explain the shallowest mm slopes. Other processes besides grain
growth may also be responsible for the removal of small grains.
|
BGP is the de facto protocol used for inter-autonomous system routing in the
Internet. Generally speaking, BGP has been proven to be secure, efficient,
scalable, and robust. However, with the rapid evolution of the Internet over
the past few decades, there are increasing concerns about BGP's ability to meet
the needs of Internet routing. BGP has two major limitations: its failure to
address several key security issues, and some operational problems. The design
and ubiquity of BGP have complicated past efforts at securing inter-domain
routing. This paper surveys past work related to BGP security and operational
issues, and explores the limitations and advantages of the solutions proposed
in these two areas.
|
We present results for Higgs boson pair production in gluon fusion at
next-to-leading order in QCD, including effects of anomalous couplings within
Standard Model Effective Field Theory (SMEFT). In particular, we investigate
truncation effects of the SMEFT series, comparing different ways to treat
powers of dimension-six operators and double operator insertions.
|
We have obtained spatially resolved spectra of the z=3.911 triply imaged QSO
APM08279+5255 using the Space Telescope Imaging Spectrograph (STIS) on board
the Hubble Space Telescope (HST). We study the line of sight equivalent width
(EW) differences and velocity shear of high and low ionization absorbers
(including a damped Lyman alpha [DLA] system identified in a spatially
unresolved ground based spectrum) in the three lines of sight. We find that
high ionization systems (primarily CIV absorbers) do not exhibit strong EW
variations on scales <0.4 kpc; their fractional EW differences are typically
less than 30%. When combined with previous work on other QSO pairs, we find
that the fractional variation increases steadily with separation out to at
least ~100 kpc. Conversely, low ionization systems (primarily MgII absorbers)
show strong variations (often > 80%) over kpc scales. A minimum radius for
strong (EW > 0.3 A) MgII systems of > 1.4 kpc is inferred from absorption
coincidences in all lines of sight. For weak MgII absorbers (EW < 0.3 A), a
maximum likelihood analysis indicates a most probable coherence scale of 2.0
kpc for a uniform spherical geometry, with 95% confidence limits ranging
between 1.5 and 4.4 kpc. Finally, for systems with weak absorption that can be
confidently converted to column densities, we find constant N(CIV)/N(SiIV)
across the three lines of sight. Similarly, the [Al/Fe] ratios in the z = 2.974
DLA are consistent with solar relative abundances over a transverse distance of
~0.35 kpc. (abridged)
|
Many popular first order algorithms for convex optimization, such as
forward-backward splitting, Douglas-Rachford splitting, and the alternating
direction method of multipliers (ADMM), can be formulated as averaged iteration
of a nonexpansive mapping. In this paper we propose a line search for averaged
iteration that preserves the theoretical convergence guarantee, while often
accelerating practical convergence. We discuss several general cases in which
the additional computational cost of the line search is modest compared to the
savings obtained.
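A minimal sketch of an averaged iteration with a residual-based line search is given below. The acceptance rule — take a longer step along the same direction only if the fixed-point residual does not increase, otherwise fall back to the nominal averaged step — is a schematic stand-in for the paper's actual line search, but it illustrates how the safeguard preserves convergence while allowing acceleration.

```python
import numpy as np

def averaged_iteration_ls(T, x0, alpha=0.5, n_iter=100, trial_steps=(4.0, 2.0, 1.0)):
    """Averaged iteration x+ = x + alpha*(T(x) - x) with a simple line search:
    try longer steps along the residual direction and accept the first one
    whose fixed-point residual ||T(y) - y|| does not exceed that of the
    nominal averaged step; otherwise keep the nominal step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d = T(x) - x                          # fixed-point residual direction
        base = x + alpha * d                  # nominal averaged step
        res_base = np.linalg.norm(T(base) - base)
        x_next = base
        for s in trial_steps:                 # try extrapolated steps first
            cand = x + s * alpha * d
            if np.linalg.norm(T(cand) - cand) <= res_base:
                x_next = cand
                break
        x = x_next
    return x

# Example: T is a nonexpansive affine map; its fixed point solves T(x) = x.
A = np.array([[0.5, 0.2], [0.2, 0.5]])
b = np.array([1.0, -1.0])
T = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(2) - A, b)    # exact fixed point for reference
x = averaged_iteration_ls(T, np.zeros(2))
```

For operator-splitting methods such as forward-backward or ADMM, `T` would be the corresponding splitting operator, and the extra cost per iteration is a few evaluations of `T`.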
|
Lindel\"of topological groups $G_1$ , $H_1$, $G_2$, $H_2$ are constructed in
such a way that the products of $G_1 \times H_1$ and $G_2 \times H_2$ are not
$\mathbb R$-factorizable groups and (1) the group $G_1 \times H_1$ is not
pseudo-$\aleph_1$-compact; (2) the group $G_2 \times H_2$ is a separable,
non-normal group and contains a closed discrete subset of cardinality
continuum.
|
We propose and analyze a unified structure-preserving parametric finite
element method (SP-PFEM) for the anisotropic surface diffusion of curves in two
dimensions $(d=2)$ and surfaces in three dimensions $(d=3)$ with an arbitrary
anisotropic surface energy density $\gamma(\boldsymbol{n})$, where
$\boldsymbol{n}\in \mathbb{S}^{d-1}$ represents the outward unit vector. By
introducing a novel unified surface energy matrix
$\boldsymbol{G}_k(\boldsymbol{n})$ depending on $\gamma(\boldsymbol{n})$, the
Cahn--Hoffman $\boldsymbol{\xi}$-vector and a stabilizing function
$k(\boldsymbol{n}):\ \mathbb{S}^{d-1}\to {\mathbb R}$, we obtain a unified and
conservative variational formulation for the anisotropic surface diffusion via
different surface differential operators including the surface gradient
operator, the surface divergence operator and the surface Laplace--Beltrami
operator. A SP-PFEM discretization is presented for the variational problem. In
order to establish the unconditional energy stability of the proposed SP-PFEM
under a very mild condition on $\gamma(\boldsymbol{n})$, we propose a new
framework via {\sl local energy estimate} for proving energy
stability/structure-preserving properties of the parametric finite element
method for the anisotropic surface diffusion. This framework sheds light on how
to prove unconditional energy stability of other numerical methods for
geometric partial differential equations. Extensive numerical results are
reported to demonstrate the efficiency and accuracy as well as
structure-preserving properties of the proposed SP-PFEM for the anisotropic
surface diffusion with arbitrary anisotropic surface energy density
$\gamma(\boldsymbol{n})$ arising from different applications.
|
We calculate numerically the density of states n(S) for SU(2) lattice gauge
theory on $L^4$ lattices. Small volume dependences are resolved for small values
of S. We compare $ln(n(S))$ with weak and strong coupling expansions.
Intermediate order expansions show a good overlap for values of S corresponding
to the crossover. We relate the convergence of these expansions to those of the
average plaquette. We show that when known logarithmic singularities are
subtracted from $ln(n(S))$, expansions in Legendre polynomials appear to
converge and could be suitable to determine the Fisher zeros of the partition
function.
|
Peptides are formed by the dehydration condensation of multiple amino acids.
The primary structure of a peptide can be represented either as an amino acid
sequence or as a molecular graph consisting of atoms and chemical bonds.
Previous studies have indicated that deep learning routes specific to
sequential and graphical peptide forms exhibit comparable performance on
downstream tasks. Despite the fact that these models learn representations of
the same modality of peptides, we find that they explain their predictions
differently. Considering sequential and graphical models as two experts making
inferences from different perspectives, we work on fusing expert knowledge to
enrich the learned representations for improving the discriminative
performance. To achieve this, we propose a peptide co-modeling method, RepCon,
which employs a contrastive learning-based framework to enhance the mutual
information of representations from decoupled sequential and graphical
end-to-end models. It considers representations from the sequential encoder and
the graphical encoder for the same peptide sample as a positive pair and learns
to enhance the consistency of representations between positive sample pairs and
to repel representations between negative pairs. Empirical studies of RepCon
and other co-modeling methods are conducted on open-source discriminative
datasets, including aggregation propensity, retention time, antimicrobial
peptide prediction, and family classification from the Peptide Database. Our
results demonstrate the superiority of the co-modeling approach over
independent modeling, as well as the superiority of RepCon over other methods
under the co-modeling framework. In addition, the attribution on RepCon further
corroborates the validity of the approach at the level of model explanation.
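The consistency objective between the two encoders can be illustrated with a symmetric InfoNCE-style contrastive loss. The function below is a schematic stand-in for RepCon's actual objective, operating on pre-computed embeddings: matching rows of the two matrices are positive pairs, all other rows are negatives.

```python
import numpy as np

def info_nce(z_seq, z_graph, tau=0.1):
    """Symmetric InfoNCE-style loss between sequence-encoder and graph-encoder
    embeddings of the same peptide batch. Row i of each matrix is one peptide;
    matching rows are positives, all other pairings are negatives."""
    z1 = z_seq / np.linalg.norm(z_seq, axis=1, keepdims=True)
    z2 = z_graph / np.linalg.norm(z_graph, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                  # cosine similarities / temperature
    labels = np.arange(len(z1))

    def xent(l):
        # cross-entropy of the diagonal (positive) entries against all columns
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # symmetric: sequence->graph and graph->sequence directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Sanity check: matched embeddings give a lower loss than mismatched ones.
rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
loss_matched = info_nce(z, z)
loss_shuffled = info_nce(z, z[::-1].copy())
```

Minimizing such a loss pushes the two experts' representations of the same peptide together while repelling those of different peptides, which is the mutual-information-enhancing behavior the text describes.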
|
A cluster of spins $1/2$ of a finite size can be regarded as a basic building
block of a spin texture in high-temperature cuprate superconductors. If this
texture has the character of a network of weakly coupled spin clusters, then
spin excitation spectra of finite clusters are expected to capture the
principal features of the experimental spin response. We calculate spin
excitation spectra of several clusters of spins $1/2$ coupled by Heisenberg
interaction. We find that the calculated spectra exhibit a high degree of
variability representative of the actual phenomenology of cuprates, while, at
the same time, reproducing a number of important features of the experimentally
measured spin response. Among such features are the spin gap, the broad peak
around $\hbar \omega\simeq (40 - 70)$ meV and the sharp peak at zero frequency.
The latter feature emerges due to transitions inside the ground-state multiplet
of the so-called "uncompensated" clusters with an odd number of spins.
|
We consider the recovery of a low rank and jointly sparse matrix from under
sampled measurements of its columns. This problem is highly relevant in the
recovery of dynamic MRI data with high spatio-temporal resolution, where each
column of the matrix corresponds to a frame in the image time series; the
matrix is low-rank since the frames are highly correlated. Similarly, the
non-zero locations of the matrix in appropriate transform/frame domains (e.g.
wavelet, gradient) are roughly the same in different frames. The superset of the
support can be safely assumed to be jointly sparse. Unlike the classical
multiple measurement vector (MMV) setup that measures all the snapshots using
the same matrix, we consider each snapshot to be measured using a different
measurement matrix. We show that this approach reduces the total number of
measurements, especially when the rank of the matrix is much smaller than its
sparsity. Our experiments in the context of dynamic imaging show that this
approach is very useful in realizing free-breathing cardiac MRI.
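The measurement setup, with a different sensing matrix per snapshot, can be sketched as follows. All dimensions and the Gaussian sensing matrices are illustrative assumptions; the point is only the contrast with classical MMV, which would reuse a single matrix for every column.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank, jointly sparse matrix X: r = 2 components supported on 10 shared
# rows (the columns play the role of frames in a dynamic image series).
n, T, r, m = 100, 30, 2, 25
support = rng.choice(n, size=10, replace=False)
U = np.zeros((n, r))
U[support] = rng.normal(size=(10, r))
X = U @ rng.normal(size=(r, T))               # rank-r, 10 nonzero rows

# Each snapshot (column) is measured with a DIFFERENT Gaussian matrix,
# unlike the classical MMV setup that reuses one matrix for all snapshots.
A = rng.normal(size=(T, m, n)) / np.sqrt(m)
Y = np.stack([A[t] @ X[:, t] for t in range(T)])   # measurements, shape (T, m)
```

A recovery algorithm would then exploit both the shared row support and the rank-r structure of `X` to reconstruct it from the stacked measurements `Y`, with m per column well below both n and the naive sparsity-based requirement.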
|
Frontier Large Language Models (LLMs) are increasingly being deployed for
high-stakes decision-making. On the other hand, these models are still
consistently making predictions that contradict users' or society's
expectations, e.g., hallucinating, or discriminating. Thus, it is important
that we develop test-time strategies to improve their trustworthiness. Inspired
by prior work, we leverage causality as a tool to formally encode two aspects
of trustworthiness in LLMs: fairness and robustness. Under this perspective,
existing test-time solutions explicitly instructing the model to be fair or
robust implicitly depend on the LLM's causal reasoning capabilities. In this
work, we explore the opposite approach. Instead of explicitly asking the LLM
for trustworthiness, we design prompts to encode the underlying causal
inference algorithm that will, by construction, result in more trustworthy
predictions. Concretely, we propose out-of-context prompting as a test-time
solution to encourage fairness and robustness in LLMs. Out-of-context prompting
leverages the user's prior knowledge of the task's causal model to apply
(random) counterfactual transformations and improve the model's
trustworthiness. Empirically, we show that out-of-context prompting
consistently improves the fairness and robustness of frontier LLMs across five
different benchmark datasets without requiring additional data, finetuning or
pre-training.
|
The statistical mechanical calculation of the thermodynamical properties of
non-rotating isolated horizons is studied in the loop quantum gravity
framework. By employing the Hawking temperature and horizon mass of isolated
horizons as physical inputs, the microcanonical ensemble associated with the
system is well established. As a result, the black hole entropy and other
thermodynamical quantities can be computed and are consistent with Hawking's
well-known semiclassical analysis. Moreover, the value of the Immirzi parameter
of loop quantum gravity for both the higher-dimensional case and the
4-dimensional U(1) case is also obtained.
|
A prototype ultrahigh-resolution spectrograph has been built for an adaptive
optics telescope. It provides a resolving power of $250{,}000$, 300 \AA\ of
wavelength coverage, and 0.8\% efficiency.
|
We reappraise the viability of asymmetric dark matter (ADM) realized as a
Dirac fermion coupling dominantly to the Standard Model fermions. Treating the
interactions of such a DM particle with quarks/leptons in an
effective-interactions framework, we derive updated constraints using mono-jet
searches from the Large Hadron Collider (LHC) and mono-photon searches at the
Large Electron-Positron (LEP) collider. We carefully model the detectors used
in these experiments, which is found to have a significant impact. The constraint
of efficient annihilation of the symmetric part of the ADM, as well as other
observational constraints are synthesized to produce a global picture.
Consistent with previous work, we find that ADM with mass in the range $1-100$
GeV is strongly constrained, thus ruling out its best motivated mass range.
However, we find that leptophilic ADM remains allowed for DM masses $\gtrsim
10$ GeV, even after accounting for bounds from colliders, direct detection, and
stellar heating. We
forecast that the Future Circular Collider for electron-positron collisions
(FCC-ee) will improve sensitivity to DM-lepton interactions by almost an order
of magnitude.
|
In this work we investigate the growth of $\beta$-Ga2O3 homoepitaxial layers
on (100)-oriented substrates via indium-assisted metal-exchange-catalyzed
molecular beam epitaxy (MEXCAT-MBE), an orientation that has exhibited
prohibitively low growth rates by non-catalyzed MBE in the past. We demonstrate
that the proper tuning of the MEXCAT growth parameters and the choice of a
proper substrate offcut allow for the deposition of thin films with high
structural quality via a step-flow growth mechanism at relatively high growth
rates for $\beta$-Ga2O3 homoepitaxy (i.e., around 1.5 nm/min, $\approx$45%
incorporation of the incoming Ga flux), making MBE growth on this orientation
feasible. Moreover, by investigating four different (100) substrate offcuts
along the [00-1] direction (i.e., 0$^\circ$, 2$^\circ$, 4$^\circ$, 6$^\circ$)
we give experimental evidence of the fundamental role of
the (-201) step edges as nucleation sites for growth of (100)-oriented Ga2O3
films by MBE.
|