We describe a sample of thermally emitting neutron stars discovered in the
ROSAT All-Sky Survey. We discuss the basic observational properties of these
objects and conclude that they are nearby, middle-aged pulsars with moderate
magnetic fields that we see through their cooling radiation. While these
objects are potentially very useful as probes of matter at very high densities
and magnetic fields, our lack of understanding of their surface emission limits
their current utility. We discuss this and other outstanding problems: the
spectral evolution of one source and the relation of this population to the
overall pulsar population.
|
The recently proposed definition of complexity for static and spherically
symmetric self-gravitating systems [1] is extended to the fully dynamic
situation. In this latter case we have to consider not only the complexity
factor of the structure of the fluid distribution, but also the condition of
minimal complexity of the pattern of evolution. As we shall see these two
issues are deeply intertwined. For the complexity factor of the structure we
choose the same as for the static case, whereas for the simplest pattern of
evolution we assume the homologous condition. The dissipative and
non-dissipative cases are considered separately. In the latter case the fluid
distribution, satisfying the vanishing complexity factor condition and evolving
homologously, corresponds to a homogeneous (in the energy density), geodesic
and shear-free, isotropic (in the pressure) fluid. In the dissipative case the
fluid is still geodesic, but shearing, and there exists (in principle) a large
class of solutions. Finally, we discuss the stability of the vanishing
complexity condition.
|
Deployment and operation of autonomous underwater vehicles is expensive and
time-consuming. High-quality realistic sonar data simulation could be of
benefit to multiple applications, including training of human operators for
post-mission analysis, as well as tuning and validation of autonomous target
recognition (ATR) systems for underwater vehicles. Producing realistic
synthetic sonar imagery is a challenging problem as the model has to account
for specific artefacts of real acoustic sensors, vehicle altitude, and a
variety of environmental factors. We propose a novel method for generating
realistic-looking sonar side-scans of full-length missions, called Markov
Conditional pix2pix (MC-pix2pix). Quantitative assessment results confirm that
the quality of the produced data is almost indistinguishable from that of real data.
Furthermore, we show that bootstrapping ATR systems with MC-pix2pix data can
improve the performance. Synthetic data is generated 18 times faster than real
acquisition speed, with full user control over the topography of the generated
data.
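To make the Markov conditioning concrete, the following minimal sketch shows how such a generator could be driven to synthesize a full-length mission tile by tile, each tile conditioned only on its predecessor and on user-supplied topography. The tile size, interfaces, and the noise stand-in for the trained generator are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

TILE = 256  # assumed waterfall tile height in pixels

def generator(prev_tile, seabed_map, rng):
    # Stand-in for the trained MC-pix2pix generator (a conditional GAN);
    # returns noise here so the stitching loop is runnable end-to-end.
    return rng.standard_normal(prev_tile.shape)

def generate_mission(n_tiles, width=512, seed=0):
    # Markov property: each tile depends only on the previously generated
    # tile (plus topography), so arbitrarily long missions can be produced
    # with constant memory and no global consistency model.
    rng = np.random.default_rng(seed)
    prev = np.zeros((TILE, width))                # blank start-of-mission tile
    tiles = []
    for _ in range(n_tiles):
        seabed = rng.uniform(size=(TILE, width))  # user-controlled topography
        prev = generator(prev, seabed, rng)
        tiles.append(prev)
    return np.concatenate(tiles, axis=0)          # stitched waterfall image

mission = generate_mission(n_tiles=10)
print(mission.shape)  # (2560, 512)
```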
|
We study the $T=0$ frustrated phase of the $1D$ quantum spin-$\frac 12$
system with nearest-neighbour and next-nearest-neighbour isotropic exchange
known as the Majumdar-Ghosh Hamiltonian. We first apply the coupled-cluster
method of quantum many-body theory based on a spiral model state to obtain the
ground state energy and the pitch angle. These results are compared with
accurate numerical results using the density matrix renormalisation group
method, which also gives the correlation functions. We also investigate the
periodicity of the phase using the Marshall sign criterion. We discuss
particularly the behaviour close to the phase transitions at each end of the
frustrated phase.
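For reference, the model under study is the standard frustrated $J_1$-$J_2$ chain (conventions may differ from the paper in normalization),
\begin{equation*}
H=\sum_{i}\left(J_{1}\,\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}+J_{2}\,\mathbf{S}_{i}\cdot\mathbf{S}_{i+2}\right),
\end{equation*}
whose Majumdar-Ghosh point $J_{2}/J_{1}=1/2$ admits an exact ground state built from nearest-neighbour singlet dimers.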
|
Face Recognition is one of the prominent problems in the computer vision
domain. Witnessing advances in deep learning, significant work has been
observed in face recognition, which touched upon various parts of the
recognition framework like Convolutional Neural Network (CNN), Layers, Loss
functions, etc. Various loss functions such as Cross-Entropy, Angular-Softmax
and ArcFace have been introduced to learn the weights of the network for face
recognition. However, these loss functions do not give high priority to the
hard samples as compared to the easy samples. Moreover, their learning process
is biased due to the large number of easy examples compared to hard examples. In this
paper, we address this issue by considering hard examples with more priority.
In order to do so, we propose a Hard-Mining loss by increasing the loss for
harder examples and decreasing the loss for easy examples. The proposed concept
is generic and can be used with any existing loss function. We test the
Hard-Mining loss with different losses such as Cross-Entropy, Angular-Softmax
and ArcFace. The proposed Hard-Mining loss is tested over widely used Labeled
Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. The training is
performed over CASIA-WebFace and MS-Celeb-1M datasets. We use the residual
network (i.e., ResNet18) for the experimental analysis. The experimental
results suggest that the performance of existing loss functions is boosted when
used in the framework of the proposed Hard-Mining loss.
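The abstract does not give the exact functional form of the Hard-Mining loss, so the following is only a sketch of the generic mechanism it describes: scale each sample's contribution by a factor that grows with its own loss, so hard examples dominate the gradient. The exponential weighting and the gamma knob are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hard_mining_loss(logits, targets, gamma=1.0):
    # Per-sample base loss (any base loss could be substituted here,
    # e.g. Angular-Softmax or ArcFace logits feeding cross-entropy).
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Up-weight hard (high-loss) samples; detach so the weights themselves
    # carry no gradient, and renormalize to keep the overall loss scale.
    weights = torch.exp(gamma * per_sample.detach())
    weights = weights / weights.mean()
    return (weights * per_sample).mean()

# usage with any backbone, e.g. ResNet18 features mapped to class logits
logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
hard_mining_loss(logits, targets).backward()
```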
|
Although numerical studies modeling the quantum Hall effect at filling
fraction $5/2$ predict either the Pfaffian (Pf) or its particle-hole conjugate,
the anti-Pfaffian (aPf) state, recent experiments appear to favor a quantized
thermal Hall conductivity with quantum number $K=5/2$, rather than the value
$K=7/2$ or $K=3/2$ expected for the Pf or aPf state, respectively. While a
particle-hole symmetric topological order (the PH-Pfaffian) would be consistent
with the experiments, this state is believed to be energetically unfavorable in
a homogeneous system. Here we study the effects of disorder that are assumed to
locally nucleate domains of Pf and aPf. When the disorder is relatively weak
and the size of domains is relatively large, we find that when the electrical
Hall conductance is on the quantized plateau with $\sigma_{xy} = (5/2)(e^2/h)$,
the value of $K$ can be only 7/2 or 3/2, with a possible first-order-like
transition between them as the magnetic field is varied. However, for
sufficiently strong disorder an intermediate state might appear, which we
analyze within a network model of the domain walls. Predominantly, we find a
thermal metal phase, where $K$ varies continuously and the longitudinal thermal
conductivity is non-zero, while the electrical Hall conductivity remains
quantized at $(5/2)e^2/h$. However, in a restricted parameter range we find a
thermal insulator with $K=5/2$, a disorder-stabilized phase which is
adiabatically connected to the PH-Pfaffian. We discuss a possible scenario to
rationalize these special values of parameters.
|
We discuss the description of a many-body nuclear system using Hamiltonians
that contain the nucleon relativistic kinetic energy and potentials with
relativistic corrections. Through the Foldy-Wouthuysen transformation, the
field theoretical problem of interacting nucleons and mesons is mapped to an
equivalent one in terms of relativistic potentials, which are then expanded to
some order in 1/m_N. The formalism is applied to the Hartree problem in nuclear
matter, showing how the results of the relativistic mean field theory can be
recovered over a wide range of densities.
|
Reinforcement learning has been applied to a wide variety of robotics
problems, but most such applications involve collecting data from scratch
for each new task. Since the amount of robot data we can collect for any single
task is limited by time and cost considerations, the learned behavior is
typically narrow: the policy can only execute the task in a handful of
scenarios that it was trained on. What if there were a way to incorporate a
large amount of prior data, either from previously solved tasks or from
unsupervised or undirected environment interaction, to extend and generalize
learned behaviors? While most prior work on extending robotic skills using
pre-collected data focuses on building explicit hierarchies or skill
decompositions, we show in this paper that we can reuse prior data to extend
new skills simply through dynamic programming. We show that even when the prior
data does not actually succeed at solving the new task, it can still be
utilized for learning a better policy, by providing the agent with a broader
understanding of the mechanics of its environment. We demonstrate the
effectiveness of our approach by chaining together several behaviors seen in
prior datasets for solving a new task, with our hardest experimental setting
involving composing four robotic skills in a row: picking, placing, drawer
opening, and grasping, where a +1/0 sparse reward is provided only on task
completion. We train our policies in an end-to-end fashion, mapping
high-dimensional image observations to low-level robot control commands, and
present results in both simulated and real world domains. Additional materials
and source code can be found on our project website:
https://sites.google.com/view/cog-rl
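A tabular sketch of the underlying mechanism (the paper uses deep Q-functions on images; the toy MDP and all names below are illustrative): Bellman backups over the union of prior and task transitions let value from the sparse task reward propagate backward through behaviors that appear only in the prior data.

```python
import numpy as np

def q_learning_on_mixed_data(prior, task, n_states, n_actions,
                             gamma=0.99, lr=0.1, sweeps=200):
    # Dynamic programming over (s, a, r, s') tuples. Prior transitions
    # carry r = 0 for the new task, but still teach environment mechanics.
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s2 in prior + task:
            Q[s, a] += lr * (r + gamma * Q[s2].max() - Q[s, a])
    return Q

# toy chain: prior data only covers 0 -> 1 -> 2; task data rewards reaching 3
prior = [(0, 1, 0.0, 1), (1, 1, 0.0, 2)]
task = [(2, 1, 1.0, 3), (3, 0, 0.0, 3)]
Q = q_learning_on_mixed_data(prior, task, n_states=4, n_actions=2)
print(Q[0])  # state 0 acquires value purely through prior transitions
```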
|
We study experimentally the electron transport properties of gated quantum
dots formed in InGaAs/InP and InAsP/InP quantum well structures grown by
chemical-beam epitaxy. For the case of the InGaAs quantum well, quantum dots
form directly underneath narrow gate electrodes due to potential fluctuations.
We measure the Coulomb-blockade diamonds in the few-electron regime of a single
quantum dot and observe photon-assisted tunneling peaks under microwave
irradiation. A singlet-triplet transition at high magnetic field and
Coulomb-blockade effects in the quantum Hall regime are also observed. For the
InAsP quantum well, an incidental triple quantum dot also forms due to
potential fluctuations within a single dot layout. Tunable quadruple points are
observed via transport measurements.
|
Multiwavelength observations of blazars such as Mrk 421 and Mrk 501 show that
they exhibit strong short-time variability in flare-like phenomena. Based on
the homogeneous synchrotron self-Compton (SSC) model and assuming that time
variability of the emission is initiated by changes in the injection of
nonthermal electrons, we perform detailed temporal and spectral studies of a
purely cooling plasma system. One important parameter is the total injected
energy E and we show how the synchrotron and Compton components respond as E
varies. We discuss in detail how one could infer important physical parameters
using the observed spectra. In particular, we could infer the size of the
emission region by looking for exponential decay in the light curves. We could
also test the basic assumption of SSC by measuring the difference in the rate
of peak energy changes of synchrotron and SSC peaks. We also show that the
trajectory in the photon-index and flux plane evolves clockwise or
counter-clockwise depending on the value of E and observed energy bands.
|
We present the detection and analysis of a weak low-ionization absorber at $z
= 0.12122$ along the blazar sightline PG~$1424+240$, using spectroscopic data
from both $HST$/COS and STIS. The absorber is a weak Mg II analogue, with
accompanying weak C II and Si II, along with multi-component C IV and O VI. The
low ions are tracing a dense ($n_{H} \sim 10^{-3}$ cm$^{-3}$) parsec scale
cloud of solar or higher metallicity. The kinematically coincident higher ions
are either from a more diffuse ($n_{H} \sim 10^{-5} - 10^{-4}$ cm$^{-3}$)
photoionized phase of kiloparsec scale dimensions, or are tracing a warm (T
$\sim 2 \times 10^{5}$ K) collisionally ionized transition temperature plasma
layer. The absorber resides in a galaxy-overdense region, with 18 luminous ($>
L^*$) galaxies within a projected radius of $5$ Mpc and $750$ km s$^{-1}$ of
the absorber. The multi-phase properties, high metallicity and proximity to a
$1.4$ $L^*$ galaxy, at $\rho \sim 200$ kpc and $|\Delta v| = 11$ km s$^{-1}$
separation, favor the possibility of the absorption tracing circumgalactic
gas. The absorber serves as an example of weak Mg II - O VI systems as a means
to study multiphase high velocity clouds in external galaxies.
|
A key stepping stone in the development of an artificial general intelligence
(a machine that can perform any task) is the production of agents that can
perform multiple tasks at once instead of just one. Unfortunately, canonical
methods are very prone to catastrophic forgetting (CF) - the act of overwriting
previous knowledge about a task when learning a new task. Recent efforts have
developed techniques for overcoming CF in learning systems, but no attempt has
been made to apply these new techniques to evolutionary systems. This research
presents a novel technique, weight protection, for reducing CF in evolutionary
systems by adapting a method from learning systems. It is used in conjunction
with other evolutionary approaches for overcoming CF and is shown to be
effective at alleviating CF when applied to a suite of reinforcement learning
tasks. It is speculated that this work could indicate the potential for a wider
application of existing learning-based approaches to evolutionary systems and
that evolutionary techniques may be competitive with or better than learning
systems when it comes to reducing CF.
|
During the normal operation of a Cloud solution, no one usually pays
attention to the logs except the technical department, which may periodically check
them to ensure that the performance of the platform conforms to the Service
Level Agreements. However, the moment the status of a component changes from
acceptable to unacceptable, or a customer complains about accessibility or
performance of a platform, the importance of logs increases significantly.
Depending on the scope of the issue, all departments, including management,
customer support, and even the actual customer, may turn to logs to find out
what has happened, how it has happened, and who is responsible for the issue.
The party at fault may be motivated to tamper with the logs to hide their fault.
Given the number of logs that are generated by the Cloud solutions, there are
many tampering possibilities. While a tamper-detection solution can be used to
detect any changes in the logs, we argue that the critical nature of logs calls for
immutability. In this work, we propose a blockchain-based log system, called
Logchain, that collects the logs from different providers and avoids log
tampering by sealing the logs cryptographically and adding them to a
hierarchical ledger, hence, providing an immutable platform for log storage.
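As a minimal sketch of the sealing idea (a flat hash chain; Logchain's actual ledger is hierarchical and blockchain-based, which is not reproduced here), each record can be cryptographically bound to its predecessor so that editing any past record invalidates every later hash:

```python
import hashlib
import json
import time

def seal(log_entry, prev_hash):
    # Hash the record together with the previous block's hash,
    # forming an append-only chain.
    block = {"timestamp": time.time(), "log": log_entry, "prev": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    # Recompute every hash; any tampering breaks the chain.
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "log", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain, prev = [], "0" * 64  # genesis hash
for line in ["vm42 started", "disk quota exceeded", "vm42 stopped"]:
    chain.append(seal(line, prev))
    prev = chain[-1]["hash"]
print(verify(chain))  # True; becomes False if any stored block is edited
```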
|
We prove local energy decay estimates for solutions to the inhomogeneous
Maxwell system on a generic class of spherically symmetric black holes.
|
We have performed a three-dimensional magnetohydrodynamic simulation to study
the emergence of a twisted magnetic flux tube from a depth of 20,000 km in the
solar convection zone to the corona through the photosphere and the chromosphere. The
middle part of the initial tube is endowed with a density deficit to instigate
a buoyant emergence. As the tube approaches the surface, it extends
horizontally and makes a flat magnetic structure due to the photosphere ahead
of the tube. Further emergence to the corona breaks out via the
interchange-mode instability of the photospheric fields, and eventually several
magnetic domes build up above the surface. What is new in this
three-dimensional experiment is that multiple separation events of the vertical
magnetic elements are observed in the photospheric magnetogram, reflecting the
interchange instability. Separated elements are found to gather at
the edges of the active region. These gathered elements then show shearing
motions. These characteristics are highly reminiscent of active region
observations. On the basis of the simulation results above, we propose a
theoretical picture of the flux emergence and the formation of an active region
that explains the observational features, such as multiple separations of
faculae and the shearing motion.
|
In this publication we consider particle-production processes at a future
circular hadron collider with 100 TeV centre-of-mass energy within the Standard
Model, focusing in particular on their QCD aspects. Accurate predictions for
these processes pose severe theoretical challenges related to large hierarchies
of scales and possible large multiplicities of final-state particles. We
investigate scaling patterns in multijet-production rates that allow
predictions to be extrapolated to
very high final-state multiplicities. Furthermore, we consider large-area QCD
jets and study the expectation for the mean number of subjets to be
reconstructed from their constituents and confront these with analytical
resummed predictions and with the expectation for boosted hadronic decays of
top-quarks and W-bosons. We also discuss the validity of
Higgs-Effective-Field-Theory in making predictions for Higgs-boson production
in association with jets. Finally, we consider the case of New Physics searches
at such a 100 TeV hadron-collider machine and discuss the expectations for
corresponding Standard-Model background processes.
|
Using a 1-MA, 100-ns rise-time pulsed power generator, radial foil
configurations can produce strongly collimated plasma jets. The resulting jets
have electron densities on the order of 10^20 cm^-3, temperatures above 50 eV
and plasma velocities on the order of 100 km/s, giving Reynolds numbers of the
order of 10^3, magnetic Reynolds and P\'eclet numbers on the order of 1. While
Hall physics does not dominate jet dynamics due to the large particle density
and flow inside, it strongly impacts flows in the jet periphery where plasma
density is low. As a result, Hall physics indirectly affects the geometrical
shape of the jet and its density profile. The comparison between experiments
and numerical simulations demonstrates that the Hall term enhances the jet
density when the plasma current flows away from the jet compared to the case
where the plasma current flows towards it.
|
The implications of the SU(2) gauge fixing associated with the choice of
invariant triads in Loop Quantum Cosmology are discussed for a Bianchi I model.
In particular, via the analysis of Dirac brackets, it is outlined how the
holonomy-flux algebra coincides with the one of Loop Quantum Gravity if paths
are parallel to fiducial vectors only. This way the quantization procedure for
the Bianchi I model is performed by applying the techniques developed in Loop
Quantum Gravity but restricting the admissible paths. Furthermore, the local
character retained by the reduced variables provides a relic diffeomorphism
constraint, whose imposition implies homogeneity on a quantum level. The
resulting picture for the fundamental spatial manifold is that of a cubical
knot with attached SU(2) irreducible representations. The discretization of
geometric operators is outlined and a new perspective for the super-Hamiltonian
regularization in Loop Quantum Cosmology is proposed.
|
Motivated by recent experimental activities on surface critical phenomena, we
present a detailed theoretical study of the near-surface behavior of the local
order parameter m(z) in Ising-like spin systems. Special attention is paid to
the crossover regime between ``ordinary'' and ``normal'' transition in the
three-dimensional semi-infinite Ising model, where a finite magnetic field H_1
is imposed on the surface which itself exhibits a reduced tendency to order
spontaneously. As the theoretical foundation, the spatial behavior of m(z) is
discussed by means of phenomenological scaling arguments, and a finite-size
scaling analysis is performed. Then we present Monte Carlo results for m(z)
obtained with the Swendsen-Wang algorithm. In particular the power-law increase
of the magnetization, predicted for a small H_1 by previous work of the
authors, is corroborated by the numerical results. The relevance of these
findings for experiments on critical adsorption in systems where a small
effective surface field occurs is pointed out.
|
Biomedical photoacoustic tomography, which can provide high resolution 3D
soft tissue images based on the optical absorption, has advanced to the stage
at which translation from the laboratory to clinical settings is becoming
possible. The need for rapid image formation and the practical restrictions on
data acquisition that arise from the constraints of a clinical workflow are
presenting new image reconstruction challenges. There are many classical
approaches to image reconstruction, but ameliorating the effects of incomplete
or imperfect data through the incorporation of accurate priors is challenging
and leads to slow algorithms. Recently, the application of Deep Learning, or
deep neural networks, to this problem has received a great deal of attention.
This paper reviews the literature on learned image reconstruction, summarising
the current trends, and explains how these new approaches fit within, and to
some extent have arisen from, a framework that encompasses classical
reconstruction methods. In particular, it shows how these new techniques can be
understood from a Bayesian perspective, providing useful insights. The paper
also provides a concise tutorial demonstration of three prototypical approaches
to learned image reconstruction. The code and data sets for these
demonstrations are available to researchers. It is anticipated that it is in in
vivo applications - where data may be sparse, fast imaging critical and priors
difficult to construct by hand - that Deep Learning will have the most impact.
With this in mind, the paper concludes with some indications of possible future
research directions.
|
In this paper, based on the limited memory techniques and subspace
minimization conjugate gradient (SMCG) methods, a regularized limited memory
subspace minimization conjugate gradient method is proposed, which contains two
types of iterations. In the SMCG iteration, we obtain the search direction by
minimizing an approximate quadratic model or an approximate regularization
model. In the RQN iteration, a modified regularized quasi-Newton method that
combines a regularization technique with the BFGS method is used in the
subspace to improve orthogonality. Moreover, some simple acceleration criteria
and an improved tactic for selecting the initial stepsize are designed to
enhance the efficiency of the algorithm. Additionally, a generalized
nonmonotone line search is utilized, and the global convergence of the proposed
algorithm is established under mild conditions. Finally, numerical results show
that the proposed
algorithm has a significant improvement over ASMCG_PR and is superior to the
particularly well-known limited memory conjugate gradient software packages
CG_DESCENT (6.8) and CGOPT (2.0) on the CUTEr library.
|
We explore new types of binomial sums with Fibonacci and Lucas numbers. The
binomial coefficients under consideration are $\frac{n}{n+k}\binom{n+k}{n-k}$
and $\frac{k}{n+k}\binom{n+k}{n-k}$. The identities are derived by relating the
underlying sums to Chebyshev polynomials. Finally, some combinatorial sums are
studied and a connection to a recent paper by Chu and Guo from 2022 is derived.
|
Modeling communication channels as thermal systems results in Hamiltonians
which are an explicit function of the temperature. The first two authors have
recently generalized the second law of thermodynamics to encompass systems with
temperature-dependent energy levels, $dQ = T\,dS + \langle d\mathcal{E}/dT\rangle\, dT$, where
$\langle\cdot\rangle$ denotes averaging over the Boltzmann distribution, and recomputed the
mutual information and other main properties of the popular Gaussian channel.
Here the mutual information for the binary symmetric channel as well as for the
discrete symmetric channel consisting of 4 input/output (I/O) symbols is
explicitly calculated using the generalized second law of thermodynamics. For
equiprobable I/O the mutual information of the examined channels has a very
simple form, $-\gamma U(\gamma)\big|_{0}^{\beta}$, where $U$ denotes the internal
energy of the channel. We prove that this simple form of the mutual information
governs the class of discrete memoryless symmetric communication channels with
equiprobable I/O symbols.
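For orientation, this result must reproduce the textbook value for the binary symmetric channel with equiprobable inputs and crossover probability $p$,
\begin{equation*}
I_{\mathrm{BSC}}=1-H_{2}(p),\qquad H_{2}(p)=-p\log_{2}p-(1-p)\log_{2}(1-p),
\end{equation*}
so evaluating $-\gamma U(\gamma)\big|_{0}^{\beta}$ for the two-symbol channel should recover this expression once the thermal parametrization of $p$ is inserted (the precise mapping is specified in the paper, not here).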
|
Orthogonal parameterization is a compelling solution to the vanishing
gradient problem (VGP) in recurrent neural networks (RNNs). With orthogonal
parameters and non-saturated activation functions, gradients in such models are
constrained to unit norms. On the other hand, although the traditional vanilla
RNNs are seen to have higher memory capacity, they suffer from the VGP and
perform badly in many applications. This work proposes Adaptive-Saturated RNNs
(asRNN), a variant that dynamically adjusts its saturation level between the
two mentioned approaches. Consequently, asRNN enjoys both the capacity of a
vanilla RNN and the training stability of orthogonal RNNs. Our experiments show
encouraging results of asRNN on challenging sequence learning benchmarks
compared to several strong competitors. The research code is accessible at
https://github.com/ndminhkhoi46/asRNN/.
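The exact asRNN cell is not reproduced here; the sketch below only illustrates the interpolation idea with a hypothetical scalar saturation parameter a: since tanh(a z)/a approaches z as a goes to 0, small a behaves almost linearly (as in norm-preserving orthogonal RNNs), while large a recovers the saturating tanh of a vanilla RNN, and a can be made trainable.

```python
import numpy as np

def as_rnn_step(x, h, Wx, Wh, a=0.5):
    # Tunable saturation: a -> 0 gives a nearly linear (gradient-friendly)
    # update, large a gives vanilla-RNN-style saturation.
    pre = Wx @ x + Wh @ h
    return np.tanh(a * pre) / a

rng = np.random.default_rng(0)
d = 4
Wx = rng.standard_normal((d, d))
Wh = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthogonal recurrence
h = np.zeros(d)
for _ in range(5):
    h = as_rnn_step(rng.standard_normal(d), h, Wx, Wh)
print(h)
```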
|
Fine-grained visual recognition is to classify objects with visually similar
appearances into subcategories, which has made great progress with the
development of deep CNNs. However, handling subtle differences between
different subcategories still remains a challenge. In this paper, we propose to
solve this issue in one unified framework from two aspects, i.e., constructing
feature-level interrelationships, and capturing part-level discriminative
features. This framework, namely PArt-guided Relational Transformers (PART), is
proposed to learn the discriminative part features with an automatic part
discovery module, and to explore the intrinsic correlations with a feature
transformation module by adapting the Transformer models from the field of
natural language processing. The part discovery module efficiently discovers
the discriminative regions, which correspond closely to the gradient
descent procedure. Then the second feature transformation module builds
correlations within the global embedding and multiple part embeddings,
enhancing spatial interactions among semantic pixels. Moreover, our proposed
approach does not rely on additional part branches at inference time and reaches
state-of-the-art performance on 3 widely-used fine-grained object recognition
benchmarks. Experimental results and explainable visualizations demonstrate the
effectiveness of our proposed approach. The code can be found at
https://github.com/iCVTEAM/PART.
|
In this paper, we first introduce the notion of controlled weaving K-g-frames
in Hilbert spaces. Then, we present sufficient conditions for controlled
weaving K-g-frames in separable Hilbert spaces. Also, a characterization of
controlled weaving K-g-frames is given in terms of an operator. Finally, we
show that if bounds of frames associated with atomic spaces are positively
confined, then controlled K-g-woven frames give ordinary weaving K-frames and
vice versa.
|
Modern Generative Adversarial Networks (GANs) generate realistic images
remarkably well. Previous work has demonstrated the feasibility of
"GAN-classifiers" that are distinct from the co-trained discriminator, and
operate on images generated from a frozen GAN. That such classifiers work at
all affirms the existence of "knowledge gaps" (out-of-distribution artifacts
across samples) present in GAN training. We iteratively train GAN-classifiers
and train GANs that "fool" the classifiers (in an attempt to fill the knowledge
gaps), and examine the effect on GAN training dynamics, output quality, and
GAN-classifier generalization. We investigate two settings, a small DCGAN
architecture trained on low dimensional images (MNIST), and StyleGAN2, a SOTA
GAN architecture trained on high dimensional images (FFHQ). We find that the
DCGAN is unable to effectively fool a held-out GAN-classifier without
compromising the output quality. However, StyleGAN2 can fool held-out
classifiers with no change in output quality, and this effect persists over
multiple rounds of GAN/classifier training which appears to reveal an ordering
over optima in the generator parameter space. Finally, we study different
classifier architectures and show that the architecture of the GAN-classifier
has a strong influence on the set of its learned artifacts.
|
Repeating X-ray bursts from the Galactic magnetar SGR 1806-20 have been
observed with a period of 398 days. Similarly, periodic X-ray bursts from SGR
1935+2154 with a period of 238 days have also been observed. Here we argue that
these X-ray bursts could be produced by the interaction of a neutron star (NS)
with its planet in a highly elliptical orbit. The periastron of the planet is
very close to the NS, so it would be partially disrupted by the tidal force
every time it passes through the periastron. Major fragments generated in the
process will fall onto the NS under the influence of gravitational
perturbation. The collision of the in-falling fragments with the NS produces
repeating X-ray bursts. The main features of the observed X-ray bursts, such as
their energy, duration, periodicity, and activity window, can all be explained
in our framework.
|
Three-nucleon forces, an active subject of study in few-nucleon
systems, provide a good account of the triton binding energy and the cross
section minimum in proton-deuteron elastic scattering, but do not succeed in
explaining spin observables such as the nucleon and deuteron analyzing powers,
suggesting serious defects in their spin dependence. We study the spin
structure of nucleon-deuteron elastic amplitudes by decomposing them into
spin-space tensors and examine effects of three-nucleon forces on each
component of the amplitudes obtained by solving the Faddeev equation. Assuming
that the spin-scalar amplitudes dominate the others, we derive simple
expressions for spin observables in the nucleon-deuteron elastic scattering.
The expressions suggest that a particular combination of spin observables in
the scattering provides direct information on the scalar, vector, or tensor
component of the three-nucleon forces. These effects are numerically
investigated by the Faddeev calculation.
|
In this two-part paper we prove an existence result for affine buildings
arising from exceptional algebraic reductive groups. Combined with earlier
results on classical groups, this gives a complete and positive answer to the
conjecture concerning the existence of affine buildings arising from such
groups defined over a (skew) field with a complete valuation, as proposed by
Jacques Tits.
This first part lays the foundations for our approach and deals with the
`large minimal angle' case.
|
This paper explores a variant of automatic headline generation methods, where
a generated headline is required to include a given phrase such as a company or
a product name. Previous methods using Transformer-based models generate a
headline including a given phrase by providing the encoder with additional
information corresponding to the given phrase. However, these methods cannot
always include the phrase in the generated headline. Inspired by previous
RNN-based methods generating token sequences in backward and forward directions
from the given phrase, we propose a simple Transformer-based method that is
guaranteed to include the given phrase in a high-quality generated headline.
We also consider a new headline generation strategy that takes advantage of the
controllable generation order of Transformer. Our experiments with the Japanese
News Corpus demonstrate that our methods, which are guaranteed to include the
phrase in the generated headline, achieve ROUGE scores comparable to previous
Transformer-based methods. We also show that our generation strategy performs
better than previous strategies.
|
It has been suggested by various authors that a significant anticorrelation
exists between the Homestake solar neutrino data and the sunspot cycle. Some of
these claims rest on smoothing the data by taking running averages, a method
that has recently undergone criticism. We demonstrate that no significant
anticorrelation can be found in the Homestake data, or in standard 2- and
4-point averages of that data. However, when 3-, 5-, and 7-point running
averages are taken, an anticorrelation seems to emerge whose significance grows
as the number of points in the average increases. Our analysis indicates that
the apparently high significance of these anticorrelations is an artifact of
the failure to consider the loss of independence introduced in the running
average process. When this is considered, the significance is reduced to that
of the unaveraged data. Furthermore, when evaluated via parametric subsampling,
no statistically significant anticorrelation is found. We conclude that the
Homestake data cannot be used to substantiate any claim of an anticorrelation
with the sunspot cycle.
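The statistical point is easy to reproduce numerically: smoothing two independent series with a k-point running average widens the null distribution of their sample correlation, so |r| values that look significant under an independence assumption are in fact unremarkable. A minimal Monte Carlo sketch (all parameters illustrative):

```python
import numpy as np

def running_mean(x, k):
    return np.convolve(x, np.ones(k) / k, mode="valid")

def null_corr_spread(n=120, k=5, trials=5000, seed=1):
    # Sample correlation between two INDEPENDENT series, raw vs smoothed.
    # Smoothing induces serial dependence, shrinking the effective number
    # of independent points and widening the null spread of r.
    rng = np.random.default_rng(seed)
    r_raw, r_smooth = [], []
    for _ in range(trials):
        a, b = rng.standard_normal(n), rng.standard_normal(n)
        r_raw.append(np.corrcoef(a, b)[0, 1])
        r_smooth.append(np.corrcoef(running_mean(a, k),
                                    running_mean(b, k))[0, 1])
    return np.std(r_raw), np.std(r_smooth)

print(null_corr_spread())  # the smoothed null spread is markedly wider
```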
|
One-loop amplitudes of gluons in N=4 gauge theory can be written as linear
combinations of known scalar box integrals with coefficients that are rational
functions. In this paper we show how to use generalized unitarity to basically
read off the coefficients. The generalized unitarity cuts we use are quadruple
cuts. These can be directly applied to the computation of four-mass scalar
integral coefficients, and we explicitly present results in next-to-next-to-MHV
amplitudes. For scalar box functions with at least one massless external leg we
show that by doing the computation in signature (--++) the coefficients can
also be obtained from quadruple cuts, which are not useful in Minkowski
signature. As examples, we reproduce the coefficients of some one-, two-, and
three-mass scalar box integrals of the seven-gluon next-to-MHV amplitude, and
we compute several classes of three-mass and two-mass-hard coefficients of
next-to-MHV amplitudes to all multiplicities.
|
Classes of graphs with bounded expansion are a generalization of both proper
minor closed classes and degree bounded classes. Such classes are based on a
new invariant, the greatest reduced average density (grad) of G with rank r,
∇r(G). These classes are also characterized by the existence of several
partition results such as the existence of low tree-width and low tree-depth
colorings. These results lead to several new linear time algorithms, such as an
algorithm for counting all the isomorphs of a fixed graph in an input graph or
an algorithm for checking whether there exists a subset of vertices of a priori
bounded size such that the subgraph induced by this subset satisfies some
arbitrary but fixed first order sentence. We also show that for fixed p,
computing the distances between two vertices up to distance p may be performed
in constant time per query after a linear time preprocessing. We also show,
extending several earlier results, that a class of graphs has sublinear
separators if it has sub-exponential expansion. This result is best
possible in general.
|
Close to one half of the LHC events are expected to be due to elastic or
inelastic diffractive scattering. Still, predictions based on extrapolations of
experimental data at lower energies differ by large factors in estimating the
relative rate of diffractive event categories at the LHC energies. By
identifying diffractive events, detailed studies on proton structure can be
carried out.
The combined forward physics objects: rapidity gaps, forward multiplicity and
transverse energy flows can be used to efficiently classify proton-proton
collisions. Data samples recorded by the forward detectors, with a simple
extension, will allow first estimates of the single diffractive (SD), double
diffractive (DD), central diffractive (CD), and non-diffractive (ND) cross
sections. The approach, which uses the measurement of inelastic activity in
forward and central detector systems, is complementary to the detection and
measurement of leading beam-like protons.
In this investigation, three different multivariate analysis approaches are
assessed in classifying forward physics processes at the LHC. It is shown that
with gene expression programming, neural networks and support vector machines,
diffraction can be efficiently identified within a large sample of simulated
proton-proton scattering events. The event characteristics are visualized by
using the self-organizing map algorithm.
|
We prove that the focusing cubic wave equation in three spatial dimensions
has a countable family of self-similar solutions which are smooth inside the
past light cone of the singularity. These solutions are labeled by an integer
index $n$ which counts the number of oscillations of the solution. The
linearized operator around the $n$-th solution is shown to have $n+1$ negative
eigenvalues (one of which corresponds to the gauge mode) which implies that all
$n>0$ solutions are unstable. It is also shown that all $n>0$ solutions have a
singularity outside the past light cone which casts doubt on whether these
solutions may participate in the Cauchy evolution, even for non-generic initial
data.
|
How to steer a given joint state probability density function to another over
finite horizon subject to a controlled stochastic dynamics with hard state
(sample path) constraints? In applications, state constraints may encode safety
requirements such as obstacle avoidance. In this paper, we perform the feedback
synthesis for minimum control effort density steering (a.k.a. Schr\"{o}dinger
bridge) problem subject to state constraints. We extend the theory of
Schr\"{o}dinger bridges to account the reflecting boundary conditions for the
sample paths, and provide a computational framework building on our previous
work on proximal recursions, to solve the same.
|
In this paper, we consider a risk-averse control problem for diffusion
processes, in which there is a partition of the admissible control strategy
into two decision-making groups (namely, the {\it leader} and {\it follower})
with different cost functionals and risk-averse satisfactions. Our approach,
based on a hierarchical optimization framework, requires that a certain level
of risk-averse satisfaction be achieved for the {\it leader} as a priority over
that of the {\it follower's} risk-averseness. In particular, we formulate such
a risk-averse control problem involving a family of time-consistent dynamic
convex risk measures induced by conditional $g$-expectations (i.e.,
filtration-consistent nonlinear expectations associated with the generators of
certain backward stochastic differential equations). Moreover, under suitable
conditions, we establish the existence of optimal risk-averse solutions, in the
sense of viscosity solutions, for the corresponding risk-averse dynamic
programming equations. Finally, we briefly comment on the implication of our
results.
|
The need to address representation biases and sentencing disparities in legal
case data has long been recognized. Here, we study the problem of identifying
and measuring biases in large-scale legal case data from an algorithmic
fairness perspective. Our approach utilizes two regression models: A baseline
that represents the decisions of a "typical" judge as given by the data and a
"fair" judge that applies one of three fairness concepts. Comparing the
decisions of the "typical" judge and the "fair" judge allows for quantifying
biases across demographic groups, as we demonstrate in four case studies on
criminal data from Cook County (Illinois).
|
We propose to learn a fully-convolutional network model that consists of a
Chain of Identity Mapping Modules and residual on the residual architecture for
image denoising. Our network structure possesses three distinctive features
that are important for the noise removal task. Firstly, each unit employs
identity mappings as the skip connections and receives pre-activated input to
preserve the gradient magnitude propagated in both the forward and backward
directions. Secondly, by utilizing dilated kernels for the convolution layers
in the residual branch, each neuron in the last convolution layer of each
module can observe the full receptive field of the first layer. Lastly, we
employ the residual on the residual architecture to ease the propagation of the
high-level information. Contrary to current state-of-the-art real denoising
networks, we also present a straightforward and single-stage network for real
image denoising. The proposed network produces remarkably higher numerical
accuracy and better visual image quality than the classical state-of-the-art
and CNN algorithms when evaluated on three conventional benchmark datasets and
three real-world datasets.
|
We present experimental data and computational analysis of the formation of
GaN nanowires on graphene virtual substrates. We show that GaN nanowires on
graphene exhibit nitrogen polarity. We employ the DFT-based computational
analysis to demonstrate that among different possible configurations of Ga and
N atoms only the N-polar one is stable. We suggest that polarity discrimination
occurs due to the dipole interaction between the GaN nanocrystal and
$\pi$-orbitals of the graphene sheet.
|
Survey statisticians make use of the available auxiliary information to
improve estimates. One important example is given by calibration estimation,
which seeks new weights that are close (in some sense) to the basic design
weights and that, at the same time, match benchmark constraints on available
auxiliary information. Recently, multiple frame surveys have gained much
attention and become widely used by statistical agencies and private
organizations to decrease sampling costs or to reduce frame undercoverage
errors that could occur with the use of only a single sampling frame. Much
attention has been devoted to the introduction of different ways of combining
estimates coming from the different frames. We will extend the calibration
paradigm, developed so far for one frame surveys, to the estimation of the
total of a variable of interest in dual frame surveys as a general tool to
include auxiliary information, also available at different levels. In fact,
calibration allows us to handle different types of auxiliary information and
can be shown to encompass as special cases some of the methods already
proposed in the literature. The theoretical properties of the proposed class of
estimators are derived and discussed, a set of simulation studies is conducted
to compare the efficiency of the procedure in the presence of different sets of
auxiliary variables. Finally, the proposed methodology is applied to data from
the Barometer of Culture of Andalusia survey.
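For a single frame, the chi-square-distance version of calibration has a closed-form solution; the sketch below shows it (the paper's dual-frame extension, with its frame-combination weights, is not reproduced). Minimizing $\sum_i (w_i-d_i)^2/d_i$ subject to $X^{T}w=t_x$ gives $w_i=d_i(1+x_i^{T}\lambda)$:

```python
import numpy as np

def linear_calibration(d, X, t_x):
    # lambda solves (X^T diag(d) X) lambda = t_x - X^T d
    lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(0)
n = 50
d = np.full(n, 4.0)                                      # design weights
X = np.column_stack([np.ones(n), rng.uniform(1, 3, n)])  # auxiliaries
t_x = np.array([200.0, 410.0])                           # known totals
w = linear_calibration(d, X, t_x)
print(np.allclose(X.T @ w, t_x))  # benchmark constraints hold exactly
```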
|
A spreadsheet usually starts as a simple and single-user software artifact,
but, as frequently happens with other software systems, quickly evolves into a complex
system developed by many actors. Often, different users work on different
aspects of the same spreadsheet: while a secretary may be only involved in
adding plain data to the spreadsheet, an accountant may define new business
rules, while an engineer may need to adapt the spreadsheet content so it can be
used by other software systems. Unfortunately, spreadsheet systems do not offer
modular mechanisms, and as a consequence, some of the previous tasks may be
defined by adding intrusive "code" to the spreadsheet.
In this paper we go through the design and implementation of an
aspect-oriented language for spreadsheets so that users can work on different
aspects of a spreadsheet in a modular way. For example, aspects can be defined
in order to introduce new business rules to an existing spreadsheet, or to
manipulate the spreadsheet data to be ported to another system. Aspects are
defined as aspect-oriented program specifications that are dynamically woven
into the underlying spreadsheet by an aspect weaver. In this aspect-oriented
style of spreadsheet development, different users develop, or reuse, aspects
without adding intrusive code to the original spreadsheet. Such code is
added/executed by the spreadsheet weaving mechanism proposed in this paper.
|
Possible assignments of $D_{s1}(2700)^\pm$ and $D_{sJ}(2860)$ in the
conventional quark model are analyzed. The study indicates that both the
orbitally excited $c\bar s$ and the radially excited $c\bar s$ are possible.
Some implications of these assignments are explored. If $D_{s1}(2700)^\pm$ and
$D_{sJ}(2860)$ are the orbitally excited D-wave $1^- (j^P={3\over 2}^-)$ and
$3^- (j^P={5\over 2}^-)$, respectively, another orbitally excited D-wave $2^-$
$D_s$, $D_{s2}(2800)$, is expected. If $D_{s1}(2700)^\pm$ and $D_{sJ}(2860)$
are the first radially excited $1^- (j^P={1\over 2}^-)$ and $0^+ (j^P={1\over
2}^+)$, respectively, two other radially excited $0^- D^\prime_s(2582)$ and
$1^+ D^\prime_{s1}(2973)$ are expected. $D_{s2}(2800)$ and
$D^\prime_{s1}(2973)$ are mixing states. The chiral doubling relation may exist
in radially excited $D_s$; the splitting between the parity partners (the
$(0^-,1^-)$ and the $(0^+,1^+)$) is $\approx 280$ MeV.
|
Let $\left\{ Z_{n},n=0,1,2,...\right\} $ be a critical branching process in
random environment and let $\left\{ S_{n},n=0,1,2,...\right\} $ be its
associated random walk. It is known that if the increments of this random walk
belong (without centering) to the domain of attraction of a stable law, then
there exists a sequence $a_{1},a_{2},...,$ slowly varying at infinity such that
the conditional distributions \begin{equation*} \mathbf{P}\left(
\frac{S_{n}}{a_{n}}\leq x\Big|Z_{n}>0\right) ,\quad x\in (-\infty ,+\infty ),
\end{equation*} weakly converge, as $n\rightarrow \infty$, to the
distribution of a strictly positive and proper random variable. In this paper
we supplement this result with a description of the asymptotic behavior of the
probability \begin{equation*} \mathbf{P}\left( S_{n}\leq \varphi
(n);Z_{n}>0\right), \end{equation*} if $\varphi (n)\rightarrow \infty$ as
$n\rightarrow \infty $ in such a way that $\varphi (n)=o(a_{n})$.
|
An experimental investigation of laser-produced colliding plasma of an
aluminium target in the presence of an external magnetic field in vacuum is
performed. Characteristic parameters and line emission of the plasma plume in
the presence of the magnetic field are compared with those for the field-free
case. Axial expansion of the plasma is slowed down in the presence of the
magnetic field as compared to the field-free case. Contrary to the field-free
case, no sharp interaction zone is
observed. Higher electron temperature and increased ionic line emission from
singly as well as doubly ionized aluminium can be attributed to the Joule
heating phenomenon.
|
We present a solution to the problem of spatio-temporal calibration for event
cameras mounted on an omnidirectional vehicle. Different from traditional
methods that typically determine the camera's pose with respect to the
vehicle's body frame using alignment of trajectories, our approach leverages
the kinematic correlation of two sets of linear velocity estimates from event
data and wheel odometers, respectively. The overall calibration task consists
of estimating the underlying temporal offset between the two heterogeneous
sensors, and furthermore, recovering the extrinsic rotation that defines the
linear relationship between the two sets of velocity estimates. The first
sub-problem is formulated as an optimization one, which looks for the optimal
temporal offset that maximizes a correlation measurement invariant to arbitrary
linear transformation. Once the temporal offset is compensated, the extrinsic
rotation can be worked out with an iterative closed-form solver that
incrementally registers associated linear velocity estimates. The proposed
algorithm is shown to be effective on both synthetic data and real data,
outperforming traditional methods based on alignment of trajectories.
|
Retail checkout systems employed at supermarkets primarily rely on barcode
scanners, with some utilizing QR codes, to identify the items being purchased.
These methods are time-consuming in practice, require a certain level of human
supervision, and involve waiting in long queues. In this regard, we propose a
system, called ARC, which aims at making the process of check-out at
retail store counters faster, autonomous, and more convenient, while reducing
dependency on a human operator. The approach makes use of a computer
vision-based system, with a Convolutional Neural Network at its core, which
scans objects placed beneath a webcam for identification. To evaluate the
proposed system, we curated an image dataset of one-hundred local retail items
of various categories. Within the given assumptions and considerations, the
system achieves a reasonable test-time accuracy, pointing towards an ambitious
future for the proposed setup. The project code and the dataset are made
publicly available.
|
We show that the partition function of many classical models with continuous
degrees of freedom, e.g. abelian lattice gauge theories and statistical
mechanical models, can be written as the partition function of an (enlarged)
four-dimensional lattice gauge theory (LGT) with gauge group U(1). This result
is so general that it includes models in different dimensions with different
symmetries. In particular, we show that a U(1) LGT defined in a curved
spacetime can be mapped to a U(1) LGT with a flat background metric. The result
is achieved by expressing the U(1) LGT partition function as an inner product
between two quantum states.
|
We link observational parameters such as the deceleration parameter, the
jerk, the kerk (snap) and higher-order derivatives of the scale factor, called
statefinders, to the conditions which allow sudden future singularities of
pressure with finite energy density to develop. In this context, and
within the framework of Friedmann cosmology, we also propose higher-order
energy conditions which relate time derivatives of the energy density and
pressure, and which may be useful in general relativity.
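For concreteness, the statefinders referred to above are the dimensionless derivatives of the scale factor $a(t)$ (sign and normalization conventions vary across the literature):
\begin{equation*}
q=-\frac{\ddot{a}\,a}{\dot{a}^{2}},\qquad j=\frac{\dddot{a}\,a^{2}}{\dot{a}^{3}},\qquad s=\frac{\ddddot{a}\,a^{3}}{\dot{a}^{4}},\;\ldots
\end{equation*}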
|
We introduce the moduli space of quasi-parabolic $SL(2,\mathbb{C})$-Higgs
bundles over a compact Riemann surface $\Sigma$ and consider a natural
involution, studying its fixed point locus when $\Sigma$ is $\mathbb{C}
\mathbb{P}^1$ and establishing an identification with a moduli space of null
polygons in Minkowski $3$-space.
|
Although DALL-E has shown an impressive ability of composition-based
systematic generalization in image generation, it requires a dataset of
text-image pairs and the compositionality is provided by the text. In contrast,
object-centric representation models like the Slot Attention model learn
composable representations without the text prompt. However, unlike DALL-E its
ability to systematically generalize for zero-shot generation is significantly
limited. In this paper, we propose a simple but novel slot-based autoencoding
architecture, called SLATE, for combining the best of both worlds: learning
object-centric representations that allow systematic generalization in
zero-shot image generation without text. As such, this model can also be seen
as an illiterate DALL-E model. Unlike the pixel-mixture decoders of existing
object-centric representation models, we propose to use the Image GPT decoder
conditioned on the slots for capturing complex interactions among the slots and
pixels. In experiments, we show that this simple and easy-to-implement
architecture not requiring a text prompt achieves significant improvement in
in-distribution and out-of-distribution (zero-shot) image generation and
qualitatively comparable or better slot-attention structure than the models
based on mixture decoders.
|
The average pedestrian flow through an exit is one of the most important
indices in evaluating pedestrian dynamics. In order to study the flow in detail, the
floor field model, which is a crowd model based on a cellular automaton, is
extended by taking into account a realistic behavior of pedestrians around the
exit. The model is studied by both numerical simulations and cluster analysis
to obtain a theoretical expression of an average pedestrian flow through the
exit. It is found quantitatively that the exit door width, the presence of a
wall, and the pedestrians' mood of competition or cooperation significantly
influence the average flow. The results show that there is a suitable exit
width and position depending on the pedestrians' mood.
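A minimal sketch of a static floor-field CA in this spirit (the geometry, update rule, and friction parameter below are simplifications, not the extended model of the paper): pedestrians hop toward the exit along the gradient of a precomputed distance field, and conflicts over a cell are dropped with a "friction" probability that models the competitive mood.

```python
import numpy as np

def mean_exit_flow(width=15, height=15, n_ped=80, door=3,
                   friction=0.3, seed=0, max_steps=10_000):
    # Exit = `door` cells in the middle of row 0; `field` holds each
    # cell's distance to the nearest exit cell (the static floor field).
    rng = np.random.default_rng(seed)
    exit_cols = range((width - door) // 2, (width + door) // 2)
    field = np.fromfunction(
        lambda r, c: r + np.minimum.reduce([np.abs(c - e) for e in exit_cols]),
        (height, width))
    start = rng.choice(np.arange(width, height * width), n_ped, replace=False)
    peds = {(int(i) // width, int(i) % width) for i in start}
    t = out = 0
    while peds and t < max_steps:
        t += 1
        desired = {}
        for r, c in peds:  # each pedestrian picks the best empty neighbour
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= r + dr < height and 0 <= c + dc < width
                    and (r + dr, c + dc) not in peds]
            if nbrs:
                best = min(nbrs, key=lambda q: field[q])
                if field[best] < field[r, c]:
                    desired.setdefault(best, []).append((r, c))
        new_peds = set(peds)
        for cell, movers in desired.items():
            if len(movers) > 1 and rng.random() < friction:
                continue                      # competitive conflict: freeze
            mover = movers[int(rng.integers(len(movers)))]
            new_peds.discard(mover)
            if field[cell] == 0:
                out += 1                      # stepped through the exit
            else:
                new_peds.add(cell)
        peds = new_peds
    return out / t

print(mean_exit_flow(friction=0.1), mean_exit_flow(friction=0.9))
```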
|
We consider the class of exchangeable fragmentation-coagulation (EFC)
processes where coagulations are multiple and not simultaneous, as in a
$\Lambda$-coalescent, and fragmentation dislocates at finite rate an individual
block into sub-blocks of infinite size. We call these partition-valued
processes simple EFC processes, and study the question of whether such a process,
when started with infinitely many blocks, can visit partitions with a finite
number of blocks or not. When this occurs, one says that the process comes down
from infinity. We introduce two sharp parameters $\theta_{\star}\leq
\theta^{\star}\in [0,\infty]$, so that if $\theta^{\star}<1$, the process comes
down from infinity and if $\theta_\star>1$, then it stays infinite. We
illustrate our result with regularly varying coagulation and fragmentation
measures. In this case, the parameters $\theta_{\star},\theta^{\star}$ coincide
and are explicit.
|
In this paper, we propose a 4-parameter family of coupled Painlev\'e III
systems in dimension four with affine Weyl group symmetry of type $D_4^{(1)}$.
We also propose its symmetric form in which the $D_4^{(1)}$-symmetries become
clearly visible.
|
The quasiparticle approach to the dynamics of dark solitons is applied to the
case of ring solitons. It is shown that the energy conservation law provides
the effective equations of motion of ring dark solitons for a general form of
the nonlinear term in the generalized nonlinear Schroedinger or Gross-Pitaevskii
equation. Analytical theory is illustrated by examples of dynamics of ring
solitons in light beams propagating through a photorefractive medium and in
non-uniform condensates confined in axially symmetric traps. Analytical results
agree very well with the results of our numerical simulations.
|
We study the Kondo chain in the regime of high spin concentration where the
low energy physics is dominated by the Ruderman-Kittel-Kasuya-Yosida (RKKY)
interaction. As has been recently shown (A. M. Tsvelik and O. M. Yevtushenko,
Phys. Rev. Lett 115, 216402 (2015)), this model has two phases with drastically
different transport properties depending on the anisotropy of the exchange
interaction. In particular, the helical symmetry of the fermions is
spontaneously broken when the anisotropy is of the easy plane type (EP). This
leads to a parametrical suppression of the localization effects. In the present
paper we substantially extend the previous theory, in particular by analyzing
the competition of forward and backward scattering, including short-range
electron interactions in the theory, and calculating spin correlation
functions. We discuss the applicability of our theory and possible experiments
which could
support the theoretical findings.
|
Developing artificial intelligence (AI) tools for healthcare is a
collaborative effort, bringing data scientists, clinicians, patients and other
disciplines together. In this paper, we explore the collaborative data
practices of research consortia tasked with applying AI tools to understand and
manage multiple long-term conditions in the UK. Through an inductive thematic
analysis of 13 semi-structured interviews with participants of these consortia,
we aimed to understand how collaboration happens based on the tools used,
communication processes and settings, as well as the conditions and obstacles
for collaborative work. Our findings reveal the adaptation of tools that are
used for sharing knowledge and the tailoring of information based on the
audience, particularly those from a clinical or patient perspective.
Limitations on the ability to do this were also found to be imposed by the use
of electronic healthcare records and access to datasets. We identified meetings
as the key setting for facilitating exchanges between disciplines and allowing
for the blending and creation of knowledge. Finally, we bring to light the
conditions needed to facilitate collaboration and discuss how some of the
challenges may be navigated in future work.
|
We introduce the Colombeau Quaternion Algebra and study its algebraic
structure. We also study the dense ideal, dense in the algebraic sense, of the
algebra of Colombeau generalized numbers and use this to show the existence of
a maximal ring of quotients which is von Neumann regular. Recall that it is
already known that the algebra of Colombeau generalized numbers is not von
Neumann regular. We also use the study of the dense ideals to give a criterion
for a generalized holomorphic function to satisfy the identity theorem.
Aragona, Fernandez, and Juriaans showed that a generalized holomorphic
function has a power series expansion. This is one of the ingredients used to
prove the identity theorem.
|
We develop an alternative derivation of the renormalized expression for the
one-loop soliton quantum mass corrections in (1+1)-dimensional scalar field
theories. We regularize this quantity implicitly by subtracting and adding its
corresponding tadpole graph contribution, and adopt the renormalization
prescription that the added term vanishes with adequate counterterms. As a
result, we obtain a finite, unambiguous formula for the soliton quantum mass
corrections up to one-loop order, which turns out to be independent of the
chosen regularization scheme.
|
We prove the existence of an upper bound on the critical volume of a large class
of toric Sasaki-Einstein manifolds with respect to the first Chern class of the
resolutions of the Gorenstein singularities in the corresponding toric
Calabi-Yau varieties. We examine the canonical metrics obtained by the Delzant
construction on these varieties and characterise cases when the bound is
attained. We comment on computational tools used in the investigation, in
particular Neural Networks and the gradient saliency method.
|
New multiband CCD photometry is presented for the eclipsing binary GW Gem;
the $RI$ light curves are the first ever compiled. Four new minimum timings
have been determined. Our analysis of eclipse timings observed during the past
79 years indicates a continuous period increase at a fractional rate of
+(1.2$\pm$0.1)$\times10^{-10}$, in excellent agreement with the value
$+1.1\times10^{-10}$ calculated from the Wilson-Devinney binary code. The new
light curves display an inverse O'Connell effect increasing toward longer
wavelengths. Hot and cool spot models are developed to describe these
variations but we prefer a cool spot on the secondary star. Our light-curve
synthesis reveals that GW Gem is in a semi-detached, but near-contact,
configuration. It appears to consist of a near-main-sequence primary star with
a spectral type of about A7 and an evolved early K-type secondary star that
completely fills its inner Roche lobe. Mass transfer from the secondary to the
primary component is responsible for the observed secular period change.
|
Depletion of natural and artificial resources is a fundamental problem and a
potential cause of economic crises, ecological catastrophes, and death of
living organisms. Understanding the depletion process is crucial for its
further control and optimized replenishment of resources. In this paper, we
investigate stock depletion by a population of species that undergo ordinary
diffusion and consume resources upon each encounter with the stock. We derive
the exact form of the probability density of the random depletion time at
which the stock is exhausted. The dependence of this distribution on the
number of species, the initial amount of resources, and the geometric setting
is analyzed. Future perspectives and related open problems are discussed.
|
Let $\alpha,\beta$ be real parameters and let $a>0$. We study radially
symmetric solutions of \begin{equation*} S_k(D^2v)+\alpha v+\beta
\xi\cdot\nabla v=0,\, v>0\;\; \mbox{in}\;\; \mathbb{R}^n,\; v(0)=a,
\end{equation*} where $S_k(D^2v)$ denotes the $k$-Hessian operator of $v$. For
$\alpha\leq\frac{\beta(n-2k)}{k}\;\;\mbox{and}\;\;\beta>0$, we prove the
existence of a unique solution to this problem, without using the phase plane
method. We also prove existence and properties of the solutions of the above
equation for other ranges of the parameters $\alpha$ and $\beta$. These results
are then applied to construct different types of explicit solutions, in
self-similar forms, to a related evolution equation. In particular, for the
heat equation, we find a new family of type-II self-similar solutions which
blow up in finite time. These solutions are represented by a power series known
as the Kummer function.
|
Wireless Body Area Networks (WBANs) have gained a lot of research attention
in recent years since they offer tremendous benefits for remote health
monitoring and continuous, real-time patient care. However, as with any
wireless communication, data security in WBANs is a challenging design issue.
Since such networks consist of small sensors placed on the human body, they
impose resource and computational restrictions, thereby making the use of
sophisticated and advanced encryption algorithms infeasible. This calls for the
design of algorithms with a robust key generation / management scheme, which
are reasonably resource optimal. This paper presents a security suite for
WBANs, comprised of IAMKeys, an independent and adaptive key management scheme
for improving the security of WBANs, and KEMESIS, a key management scheme for
security in inter-sensor communication. The novelty of these schemes lies in
the use of a randomly generated key for encrypting each data frame that is
generated independently at both the sender and the receiver, eliminating the
need for any key exchange. The simplicity of the encryption scheme, combined
with the adaptability in key management makes the schemes simple, yet secure.
The proposed algorithms are validated by performance analysis.
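The per-frame keying idea can be illustrated with a small sketch: if sender
and receiver share a seed and agree on frame indices, each can derive the same
frame key locally, so no key material ever travels over the air. This is an
illustrative sketch only, not the IAMKeys/KEMESIS construction itself; the
function name and the choice of HMAC-SHA256 are our assumptions.

```python
import hmac
import hashlib

def frame_key(shared_seed: bytes, frame_index: int) -> bytes:
    """Derive a per-frame key locally from a shared seed and the frame index.

    Both sender and receiver run this with the same inputs, so the key never
    has to be exchanged over the wireless link. (Illustrative sketch; the
    HMAC-SHA256 primitive is an assumption, not the paper's scheme.)
    """
    return hmac.new(shared_seed, frame_index.to_bytes(8, "big"),
                    hashlib.sha256).digest()

# Both ends derive identical keys for frame 42, and a fresh key per frame.
seed = b"pre-shared-sensor-seed"
assert frame_key(seed, 42) == frame_key(seed, 42)
assert frame_key(seed, 42) != frame_key(seed, 43)
```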
|
Spin systems are an attractive candidate for quantum-enhanced metrology. Here
we develop a variational method to generate metrological states in small
dipolar-interacting ensembles with limited qubit controls and unknown spin
locations. The generated states enable sensing beyond the standard quantum
limit (SQL) and approaching the Heisenberg limit (HL). Depending on the circuit
depth and the level of readout noise, the resulting states resemble
Greenberger-Horne-Zeilinger (GHZ) states or Spin Squeezed States (SSS). Sensing
beyond the SQL holds in the presence of finite spin polarization and a
non-Markovian noise environment.
|
We study some structural aspects of the evolution equations with Pomeron
loops recently derived in QCD at high energy and for a large number of colors,
with the purpose of clarifying their probabilistic interpretation. We show
that, in spite of their appealing dipolar structure and of the self-duality of
the underlying Hamiltonian, these equations cannot be given a meaningful
interpretation in terms of a system of dipoles which evolves through
dissociation (one dipole splitting into two) and recombination (two dipoles
merging into one). The problem comes from the saturation effects, which cannot
be described as dipole recombination, not even effectively. We establish this
by showing that a (probabilistically meaningful) dipolar evolution in either
the target or the projectile wavefunction cannot reproduce the actual evolution
equations in QCD.
|
Data from satellites or aerial vehicles are most of the time unlabelled.
Annotating such data accurately is difficult, requires expertise, and is costly
in terms of time. Even if Earth Observation (EO) data were correctly labelled,
labels might change over time. Learning from unlabelled data within a
semi-supervised learning framework for segmentation of aerial images is
challenging. In this paper, we develop a new model for semantic segmentation of
unlabelled images, the Non-annotated Earth Observation Semantic Segmentation
(NEOS) model. NEOS performs domain adaptation as the target domain does not
have ground truth semantic segmentation masks. The distribution inconsistencies
between the target and source domains are due to differences in acquisition
scenes, environment conditions, sensors, and times. Our model aligns the
learned representations of the different domains to make them coincide. The
evaluation results show that NEOS is successful and outperforms other models
for semantic segmentation of unlabelled data.
|
Image restoration and enhancement is the process of improving image quality
by removing degradations such as noise, blur, and resolution loss. Deep
learning (DL) has recently been applied to image restoration and enhancement.
Because these problems are ill-posed, many works have explored priors to
facilitate the training of deep neural networks (DNNs). However, the
importance of priors has not yet been systematically studied and analyzed in
the research community. Therefore, this paper serves as the first study that
provides a
comprehensive overview of recent advancements in priors for deep image
restoration and enhancement. Our work covers five primary contents: (1) A
theoretical analysis of priors for deep image restoration and enhancement; (2)
A hierarchical and structural taxonomy of priors commonly used in the DL-based
methods; (3) An insightful discussion on each prior regarding its principle,
potential, and applications; (4) A summary of crucial problems by highlighting
the potential future directions, especially adopting the large-scale foundation
models as prior, to spark more research in the community; (5) An open-source
repository that provides a taxonomy of all mentioned works and code links.
|
There is a well-known analogy between statistical and quantum mechanics. In
statistical mechanics, Boltzmann realized that the probability for a system in
thermal equilibrium to occupy a given state is proportional to exp(-E/kT) where
E is the energy of that state. In quantum mechanics, Feynman realized that the
amplitude for a system to undergo a given history is proportional to exp(-S/i
hbar) where S is the action of that history. In statistical mechanics we can
recover Boltzmann's formula by maximizing entropy subject to a constraint on
the expected energy. This raises the question: what is the quantum mechanical
analogue of entropy? We give a formula for this quantity, which we call
"quantropy". We recover Feynman's formula from assuming that histories have
complex amplitudes, that these amplitudes sum to one, and that the amplitudes
give a stationary point of quantropy subject to a constraint on the expected
action. Alternatively, we can assume the amplitudes sum to one and that they
give a stationary point of a quantity we call "free action", which is analogous
to free energy in statistical mechanics. We compute the quantropy, expected
action and free action for a free particle, and draw some conclusions from the
results.
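To make the analogy concrete, the statistical-mechanics half of the
correspondence is the standard maximum-entropy derivation of Boltzmann's
formula; the quantropy computation described above mirrors it with complex
amplitudes in place of probabilities and action in place of energy. The
following is a sketch of that classical step only.

```latex
% Maximum-entropy derivation of Boltzmann's formula (the classical half of
% the analogy; the quantropy computation mirrors it with amplitudes):
\begin{align*}
  &\text{maximize } S = -\sum_i p_i \ln p_i
   \quad \text{subject to} \quad \sum_i p_i = 1, \quad
   \sum_i p_i E_i = \langle E \rangle, \\
  &0 = \frac{\partial}{\partial p_i}\Big[ S - \lambda \Big(\sum_j p_j - 1\Big)
      - \beta \Big(\sum_j p_j E_j - \langle E \rangle\Big) \Big]
    = -\ln p_i - 1 - \lambda - \beta E_i, \\
  &\Longrightarrow \quad p_i = \frac{e^{-\beta E_i}}{Z},
   \qquad Z = \sum_i e^{-\beta E_i}, \qquad \beta = \frac{1}{kT}.
\end{align*}
% Replacing p_i by amplitudes a_i, E_i by actions S_i, and beta by 1/(i hbar)
% turns the stationary point into Feynman's weight e^{-S_i/(i hbar)}.
```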
|
A general formalism for joint weak measurements of a pair of complementary
observables is given. The standard process of optical three-wave mixing in a
nonlinear crystal (such as in parametric down-conversion) is suitable for such
tasks. To obtain the weak value of a variable $A$ one performs weak
measurements twice, with different initial states of the "meter" field. This
seems to be a drawback, but as a compensation we get for free the weak value of
a complementary variable $B$. The scheme is tunable and versatile: one has
access to a continuous set of possible weak measurements of pairs of
observables. The scheme increases signal-to-noise ratio with respect to the
case without postselection.
|
This work introduces \emph{sharding} and \emph{Poissonization} as a unified
framework for analyzing prophet inequalities. Sharding involves splitting a
random variable into several independent random variables, shards, that
collectively mimic the original variable's behavior. We combine this with
Poissonization, where these shards are modeled using a Poisson distribution.
Despite the simplicity of our framework, we improve the competitive ratio
analysis of a dozen well-studied prophet inequalities in the literature, some
of which have been studied for decades. This includes the
\textsc{Top-$1$-of-$k$} prophet inequality, prophet secretary inequality, and
semi-online prophet inequality, among others. This approach not only refines
the constants but also offers a more intuitive and streamlined analysis for
many prophet inequalities in the literature. Furthermore, it simplifies proofs
of several known results and may be of independent interest for other variants
of the prophet inequality, such as order-selection.
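For readers unfamiliar with the objects being analyzed, the classic
single-choice prophet inequality already illustrates the flavor of these
competitive-ratio statements: a single median threshold guarantees half the
prophet's expected value. The simulation below illustrates that classical
fact only, not the sharding/Poissonization framework itself; the
distributions chosen are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 200_000

def sample_values(size):
    # arbitrary heterogeneous distributions, one per position
    scales = np.linspace(0.2, 2.0, n)
    return rng.exponential(scales, size=(size, n))

# Threshold tau with P(max >= tau) = 1/2, estimated from a pilot batch.
pilot_max = sample_values(trials).max(axis=1)
tau = np.median(pilot_max)

X = sample_values(trials)
above = X >= tau
first = above.argmax(axis=1)          # index of first value above threshold
hit = above.any(axis=1)
reward = np.where(hit, X[np.arange(trials), first], 0.0)

# Samuel-Cahn: the gambler gets at least half the prophet's value.
print("gambler / prophet =", reward.mean() / X.max(axis=1).mean())
```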
|
Non-reciprocity is an important topic in fundamental physics and
quantum-device design, as much effort has been devoted to its engineering and
manipulation. Here we experimentally demonstrate non-reciprocal transport in a
two-dimensional quantum walk of photons, where the directional propagation is
highly tunable through dissipation and synthetic magnetic flux. The
non-reciprocal dynamics hereof is a manifestation of the non-Hermitian skin
effect, with its direction continuously adjustable through the photon-loss
parameters. By contrast, the synthetic flux originates from an engineered
geometric phase, which competes with the non-Hermitian skin effect through
magnetic confinement. We further demonstrate how the non-reciprocity and
synthetic flux impact the dynamics of the Floquet topological edge modes along
an engineered boundary. Our results exemplify an intriguing strategy for
achieving tunable non-reciprocal transport, highlighting the interplay of
non-Hermiticity and gauge fields in quantum systems of higher dimensions.
|
Community search is a derivative of community detection that enables online
and personalized discovery of communities and has found extensive applications
in massive real-world networks. Community search in directed graphs, however,
has received far less attention than its well-studied counterpart on
undirected graphs. The recently proposed D-truss model
has achieved good results in the quality of retrieved communities. However,
existing D-truss-based work cannot perform efficient community searches on
large graphs because it consumes too many computing resources to retrieve the
maximal D-truss. To overcome this issue, we introduce an innovative merge
relation known as D-truss-connected to capture the inherent density and
cohesiveness of edges within D-truss. This relation allows us to partition all
the edges in the original graph into a series of D-truss-connected classes.
Then, we construct a concise and compact index, ConDTruss, based on
D-truss-connected. Using ConDTruss, the efficiency of maximum D-truss retrieval
will be greatly improved, making it a theoretically optimal approach.
Experimental evaluations conducted on large directed graphs confirm the
effectiveness of our proposed method.
|
We consider a general problem of inelastic collision of particles interacting
with power-law potentials. Using quantum defect theory we derive an analytical
formula for the energy-dependent complex scattering length, valid for arbitrary
collision energy, and use it to analyze the elastic and reactive collision
rates. Our theory is applicable for both universal and non-universal
collisions. The former corresponds to the unit reaction probability at short
range, while in the latter case the reaction probability is smaller than one.
In the high-energy limit we present a method that allows us to incorporate
quantum corrections to the classical reaction rate due to shape resonances and
quantum tunneling.
|
Exploration in multi-agent reinforcement learning is a challenging problem,
especially in environments with sparse rewards. We propose a general method for
efficient exploration by sharing experience amongst agents. Our proposed
algorithm, called Shared Experience Actor-Critic (SEAC), applies experience
sharing in an actor-critic framework. We evaluate SEAC in a collection of
sparse-reward multi-agent environments and find that it consistently
outperforms two baselines and two state-of-the-art algorithms by learning in
fewer steps and converging to higher returns. In some harder environments,
experience sharing makes the difference between learning to solve the task and
not learning at all.
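The core of the approach can be sketched as an importance-weighted
policy-gradient loss: each agent learns from its own on-policy batch plus the
reweighted experience of the other agents. Below is a minimal, single-step
sketch with precomputed advantages; the tensor names and the weighting
constant lam are illustrative, not the paper's exact implementation.

```python
import torch

def seac_actor_loss(logp_own, adv_own,
                    logp_other_under_own, logp_other_behavior, adv_other,
                    lam=1.0):
    """Minimal sketch of a shared-experience actor loss.

    logp_own:              log pi_i(a|s) on agent i's own transitions
    adv_own:               advantages on those transitions
    logp_other_under_own:  log pi_i(a|s) evaluated on agent k's transitions
    logp_other_behavior:   log pi_k(a|s) that actually generated them
    adv_other:             advantages on agent k's transitions
    """
    own_term = -(logp_own * adv_own).mean()
    # importance weight pi_i / pi_k, treated as a constant w.r.t. gradients
    rho = torch.exp(logp_other_under_own - logp_other_behavior).detach()
    shared_term = -(rho * logp_other_under_own * adv_other).mean()
    return own_term + lam * shared_term

# Toy usage with random tensors of matching shapes.
B = 32
loss = seac_actor_loss(torch.randn(B), torch.randn(B),
                       torch.randn(B), torch.randn(B), torch.randn(B))
print(loss)
```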
|
Does bound entanglement naturally appear in quantum many-body systems? We
address this question by showing the existence of bound-entangled thermal
states for harmonic oscillator systems consisting of an arbitrary number of
particles. By explicit calculations of the negativity for different partitions,
we find a range of temperatures for which no entanglement can be distilled by
means of local operations, despite the system being globally entangled. We
offer an interpretation of this result in terms of entanglement-area laws,
typical of these systems. Finally, we discuss generalizations of this result to
other systems, including spin chains.
|
Let $\mathfrak{z}$ be a stochastic exponential, i.e.,
$\mathfrak{z}_t=1+\int_0^t\mathfrak{z}_{s-}dM_s$, of a local martingale $M$
with jumps $\triangle M_t>-1$. Then $\mathfrak{z}$ is a nonnegative local
martingale with $\mathbb{E}\mathfrak{z}_t\le 1$. If $\mathbb{E}\mathfrak{z}_T=1$,
then $\mathfrak{z}$ is a martingale on the time interval $[0,T]$. The
martingale property plays an important role in many applications, so it is of
interest to give natural and easily verifiable conditions for it. In this
paper, the property $\mathbb{E}\mathfrak{z}_T=1$ is verified via the so-called
linear growth conditions on the parameters of $M$, proposed by Girsanov
\cite{Girs}. These conditions generalize the Bene\v{s} idea \cite{Benes} and
avoid the technology of piecewise approximation. They are applicable even when
the Novikov \cite{Novikov} and Kazamaki \cite{Kaz} conditions fail, and are
effective for Markov processes that explode, Markov processes with jumps, and
also non-Markov processes. Our approach differs from that of the recently
published papers \cite{CFY} and \cite{MiUr}.
|
We show that the probability densities of accelerations of Lagrangian test
particles in turbulent flows as measured by Bodenschatz et al. [Nature 409,
1017 (2001)] are in excellent agreement with the predictions of a stochastic
model introduced in [C. Beck, PRL 87, 180601 (2001)] if the fluctuating
friction parameter is assumed to be log-normally distributed. In a generalized
statistical mechanics setting, this corresponds to a superstatistics of
log-normal type. We analytically evaluate all hyperflatness factors for this
model and obtain a flatness prediction in good agreement with the experimental
data. There is also good agreement with DNS data of Gotoh et al. We relate the
model to a generalized Sawford model with fluctuating parameters, and discuss a
possible universality of the small-scale statistics.
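The log-normal superstatistics referred to here is easy to reproduce
numerically: accelerations are conditionally Gaussian with an inverse variance
beta drawn from a log-normal distribution, in which case the flatness works
out to 3 e^{s^2}. A small Monte Carlo check of that relation, with an
arbitrary log-normal width s, might look as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 1.0                      # log-normal width parameter (arbitrary choice)
N = 2_000_000

# superstatistics: beta ~ lognormal, a | beta ~ Normal(0, 1/beta)
beta = rng.lognormal(mean=0.0, sigma=s, size=N)
a = rng.standard_normal(N) / np.sqrt(beta)

flatness = np.mean(a**4) / np.mean(a**2) ** 2
print("Monte Carlo flatness:", flatness)          # fluctuates around ...
print("analytic 3*exp(s^2):", 3 * np.exp(s**2))   # ... 3 e^{s^2} ~ 8.15
```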
|
In this note, we prove the existence of weak solutions of a Poisson type
equation in the weighted Hilbert space $L^2(\mathbb{R}^n,e^{-|x|^2})$.
|
Let T be a random field invariant under the action of a compact group G. We
give conditions ensuring that independence of the random Fourier coefficients
is equivalent to Gaussianity. As a consequence, in general it is not possible
to simulate a non-Gaussian invariant random field through its Fourier expansion
using independent coefficients.
|
We study the large order and infinite order solitons of the coupled nonlinear
Schrodinger equation with the Riemann-Hilbert method. By using the
Riemann-Hilbert representation for the high order Darboux dressing matrix, the
large order and infinite order solitons can be analyzed directly without using
inverse scattering transform. We first disclose the asymptotics of the large
order solitons, which are divided into four different regions: the elliptic
function region, the non-oscillatory region, the exponential decay region and
the algebraic decay region. We verify the consistency between the asymptotic
expressions and the exact solutions numerically by the Darboux dressing
method. Moreover, we consider the properties and dynamics of the infinite
order solitons, a special limit of the large order solitons. It is shown that
the elliptic function and exponential regions disappear for the infinite order
solitons.
|
Iris centre localization in low-resolution visible images is a challenging
problem in the computer vision community due to noise, shadows, occlusions,
pose variations, eye blinks, etc. This paper proposes an efficient method for
determining iris centre in low-resolution images in the visible spectrum. Even
low-cost consumer-grade webcams can be used for gaze tracking without any
additional hardware. A two-stage algorithm is proposed for iris centre
localization. The proposed method uses geometrical characteristics of the eye.
In the first stage, a fast convolution based approach is used for obtaining the
coarse location of iris centre (IC). The IC location is further refined in the
second stage using boundary tracing and ellipse fitting. The algorithm has been
evaluated on public databases such as BioID and Gi4E, and is found to
outperform state-of-the-art methods.
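The first, convolution-based stage can be pictured with a small sketch:
correlating the inverted grayscale eye image with a dark-disc template yields
a peak near the iris centre. This illustrates only the general idea, with an
assumed disc radius; it is not the paper's exact kernel or its second
refinement stage.

```python
import numpy as np
from scipy.signal import fftconvolve

def coarse_iris_centre(gray, radius=12):
    """Coarse iris-centre estimate via convolution with a disc kernel.

    gray: 2-D float array, dark iris on a brighter sclera/skin background.
    Returns (row, col) of the strongest response. Illustrative sketch: the
    disc radius would normally be tied to the expected eye size.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x**2 + y**2 <= radius**2).astype(float)
    disc /= disc.sum()
    response = fftconvolve(1.0 - gray, disc, mode="same")  # iris is dark
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: a synthetic dark disc at (40, 60) in a bright image.
img = np.ones((80, 120))
yy, xx = np.mgrid[:80, :120]
img[(yy - 40)**2 + (xx - 60)**2 <= 12**2] = 0.1
print(coarse_iris_centre(img))   # approximately (40, 60)
```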
|
The effects of magnetic interaction between vector mesons and nucleons on the
propagation (mass and width) of the $\rho$-meson, in particular one moving
through very dense nuclear matter, are studied, and the modifications,
qualitative and quantitative, due to the relevant collective modes (zero-sound
and plasma frequencies) of the medium are discussed. It is shown that
$\rho$-mesons produced in high-energy nuclear collisions will be
longitudinally polarized in the region of sufficiently dense nuclear matter,
in the presence of such an interaction.
|
Quantum computing provides a promising approach for solving the real-time
dynamics of systems consisting of quarks and gluons from first-principles
calculations that are intractable with classical computers. In this work, we
start with an initial problem of the ultra-relativistic quark-nucleus
scattering and present an efficient and precise approach to quantum simulate
the dynamics on the light front. This approach employs the eigenbasis of the
asymptotic scattering system and implements the compact scheme for basis
encoding. It exploits the operator structure of the light-front Hamiltonian of
the scattering system, which enables the Hamiltonian input scheme that utilizes
the quantum Fourier transform for efficiency. It utilizes the truncated Taylor
series for the dynamics simulations. The qubit cost of our approach scales
logarithmically with the Hilbert space dimension of the scattering system. The
gate cost has optimal scaling with the simulation error and near optimal
scaling with the simulation time. These scalings make our approach advantageous
for large-scale dynamics simulations on future fault-tolerant quantum
computers. We demonstrate our approach with a simple scattering problem and
benchmark the results with those from the Trotter algorithm and the classical
calculations, where good agreement between the results is found.
|
We consider the possibility of an oscillating scalar field accounting for
dark matter and dynamically controlling the spontaneous breaking of the
electroweak symmetry through a Higgs-portal coupling. This requires a late
decay of the inflaton field, such that thermal effects do not restore the
electroweak symmetry after reheating, and so inflation is followed by an
inflaton matter-dominated epoch. During inflation, the dark scalar field
acquires a large expectation value due to a negative non-minimal coupling to
curvature, thus stabilizing the Higgs field by holding it at the origin. After
inflation, the dark scalar oscillates in a quartic potential, behaving as dark
radiation, and only when its amplitude drops below a critical value does the
Higgs field acquire a non-zero vacuum expectation value. The dark scalar then
becomes massive and starts behaving as cold dark matter until the present day.
We further show that consistent scenarios require dark scalar masses in the few
GeV range, which may be probed with future collider experiments.
|
Monte Carlo methods cannot probe far into the QCD phase diagram with a real
chemical potential, due to the famous sign problem. Complex Langevin
simulations, using adaptive step-size scaling and gauge cooling, are suited for
sampling path integrals with complex weights. We report here on tests of the
deconfinement transition in pure Yang-Mills SU(3) simulations and present an
update on the QCD phase diagram in the limit of heavy and dense quarks.
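The method itself is easy to demonstrate on a toy model: for a one-variable
"path integral" with complex Gaussian action S(z) = a z^2/2, the complexified
Langevin process reproduces <z^2> = 1/a even though the weight e^{-S} is
complex. The minimal sketch below uses a fixed step size rather than the
adaptive step-size scaling (and gauge cooling) used in the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(7)
a = 1.0 + 1.0j          # complex coupling: action S(z) = a z^2 / 2
dt, n_steps, n_therm = 1e-3, 500_000, 5_000

z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    # complex Langevin update: drift -dS/dz = -a z, real Gaussian noise
    z += -a * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if step >= n_therm:
        samples.append(z * z)

print("CL estimate of <z^2>:", np.mean(samples))
print("exact value 1/a:     ", 1.0 / a)   # (0.5 - 0.5j) for a = 1 + 1j
```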
|
Results of searches for heavy stable charged particles produced in pp
collisions at sqrt(s) = 7 and 8 TeV are presented corresponding to an
integrated luminosity of 5.0 inverse femtobarns and 18.8 inverse femtobarns,
respectively. Data collected with the CMS detector are used to study the
momentum, energy deposition, and time-of-flight of signal candidates. Leptons
with an electric charge between e/3 and 8e, as well as bound states that can
undergo charge exchange with the detector material, are studied. Analysis
results are presented for various combinations of signatures in the inner
tracker only, inner tracker and muon detector, and muon detector only. Detector
signatures utilized are long time-of-flight to the outer muon system and
anomalously high (or low) energy deposition in the inner tracker. The data are
consistent with the expected background, and upper limits are set on the
production cross section of long-lived gluinos, scalar top quarks, and scalar
tau leptons, as well as pair produced long-lived leptons. Corresponding lower
mass limits, ranging up to 1322 GeV for gluinos, are the most stringent to
date.
|
We compute that extrasolar minor planets can retain much of their internal
H_2O during their host star's red giant evolution. The eventual accretion of a
water-rich body or bodies onto a helium white dwarf might supply an observable
amount of atmospheric hydrogen, as seems likely for GD 362. More generally, if
hydrogen pollution in helium white dwarfs typically results from accretion of
large parent bodies rather than interstellar gas as previously supposed, then
H_2O probably constitutes at least 10% of the aggregate mass of extrasolar
minor planets. One observational test of this possibility is to examine the
atmospheres of externally-polluted white dwarfs for oxygen in excess of that
likely contributed by oxides such as SiO_2. The relatively high oxygen
abundance previously reported in GD 378 plausibly but not uniquely can be
explained by accretion of an H_2O-rich parent body or bodies. Future
ultraviolet observations of polluted white dwarfs can serve to investigate the
hypothesis that environments with liquid water that are suitable habitats for
extremophiles are widespread in the Milky Way.
|
Aspect based Sentiment Analysis is a major subarea of sentiment analysis.
Many supervised and unsupervised approaches have been proposed in the past for
detecting and analyzing the sentiment of aspect terms. In this paper, a
graph-based semi-supervised learning approach for aspect term extraction is
proposed. In this approach, every identified token in the review document is
classified as aspect or non-aspect term from a small set of labeled tokens
using label spreading algorithm. The k-Nearest Neighbor (kNN) for graph
sparsification is employed in the proposed approach to make it more time and
memory efficient. The proposed work is further extended to determine the
polarity of the opinion words associated with the identified aspect terms in
review sentence to generate visual aspect-based summary of review documents.
The experimental study is conducted on benchmark and crawled datasets of the
restaurant and laptop domains with varying numbers of labeled instances. The
results show that the proposed approach achieves good results in terms of
precision, recall and accuracy with limited availability of labeled data.
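The semi-supervised core described above maps directly onto off-the-shelf
tooling: scikit-learn's LabelSpreading supports a kNN kernel, giving exactly
the kind of sparse similarity graph the approach relies on. A minimal sketch
with random stand-in token features follows; the real feature extraction for
tokens is not shown.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Token feature vectors (stand-ins for real embeddings / handcrafted features).
X = rng.random((200, 16))

# Labels: 1 = aspect term, 0 = non-aspect, -1 = unlabeled (the vast majority).
y = np.full(200, -1)
y[:6] = 1
y[6:12] = 0

# The kNN kernel keeps the graph sparse, hence time- and memory-efficient.
model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
model.fit(X, y)

predicted = model.transduction_          # labels spread to all 200 tokens
print(np.bincount(predicted))
```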
|
We study dark matter-helium scattering in the early Universe and its impact
on constraints from cosmic microwave background (CMB) anisotropy measurements.
We describe possible theoretical frameworks for dark matter-nucleon
interactions via a scalar, pseudoscalar, or vector mediator; such interactions
give rise to hydrogen and helium scattering, with cross sections that have a
power-law dependence on relative velocity. Within these frameworks, we consider
three scenarios: dark matter coupling to only neutrons, to only protons, and to
neutrons and protons with equal strength. For these various cases, we use
\textit{Planck} 2018 temperature, polarization, and lensing anisotropy data to
place constraints on dark matter scattering with hydrogen and/or helium for
dark matter masses between 10 keV and 1 TeV. For any model that permits both
helium and hydrogen scattering with a non-negative power-law velocity
dependence, we find that helium scattering dominates the constraint for dark
matter masses well above the proton mass. Furthermore, we place the first CMB
constraints on dark matter that scatters dominantly/exclusively with helium in
the early Universe.
|
EUSO-TA is a ground-based experiment, placed at Black Rock Mesa of the
Telescope Array site as a part of the JEM-EUSO (Joint Experiment Missions for
the Extreme Universe Space Observatory) program. The UV fluorescence imaging
telescope with a field of view of about 10.6 deg x 10.6 deg consisting of 2304
pixels (36 Multi-Anode Photomultipliers, 64 pixels each) works with
2.5-microsecond time resolution. An experimental setup with two Fresnel lenses
allows for measurements of Ultra High Energy Cosmic Rays in parallel with the
TA experiment, as well as of other sources such as lightning flashes,
artificial signals from UV calibration lasers, meteors and stars. Stars, as
point-like sources, increase the counts on pixels while crossing the field of
view. In this work, we discuss a method for the calibration of EUSO
fluorescence detectors based on signals from stars registered by the EUSO-TA
experiment during several campaigns. As the star position is known, the
analysis of signals gives an opportunity to determine the pointing accuracy of
the detector. This can be applied to space-borne or balloon-borne EUSO
missions. We describe in detail the analysis method, which provides
information about detector parameters such as the shape of the point spread
function and offers a way to perform absolute calibration of EUSO cameras.
|
A light baryon is viewed as $N_c$ valence quarks bound by meson mean fields
in the large $N_c$ limit. In much the same way a singly heavy baryon is
regarded as $N_c-1$ valence quarks bound by the same mean fields, which makes
it possible to use the properties of light baryons to investigate those of the
heavy baryons. A heavy quark being regarded as a static color source in the
limit of the infinitely heavy quark mass, the magnetic moments of the heavy
baryon are determined entirely by the chiral soliton consisting of a
light-quark pair. The magnetic moments of the baryon sextet are obtained by
using the parameters fixed in the light-baryon sector. In this mean-field
approach, the numerical values of the magnetic moments of the baryon sextet
with spin $3/2$ are just $3/2$ times those with spin $1/2$. The magnetic
moments of the bottom baryons are the same as those of the corresponding
charmed baryons.
|
Several approaches to mitigating the Forwarding Information Base (FIB)
overflow problem were developed and software solutions using FIB aggregation
are of particular interest. One of the greatest concerns to deploy these
algorithms to real networks is their high running time and heavy computational
overhead to handle thousands of FIB updates every second. In this work, we
use a single tree traversal to implement a faster aggregation and update
handling algorithm with a much lower memory footprint than existing work. We
utilize six years of realistic IPv4 and IPv6 routing tables, from 2011 to
2016 to evaluate the performance of our algorithm with various metrics. To the
best of our knowledge, it is the first time that IPv6 FIB aggregation has been
performed. Our new solution is 2.53 and 1.75 times as fast as
the-state-of-the-art FIB aggregation algorithm for IPv4 and IPv6 FIBs,
respectively, while achieving a near-optimal FIB aggregation ratio.
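For intuition about what FIB aggregation does, a much simpler (and weaker)
variant than the paper's single-traversal algorithm is "Level-1" aggregation:
drop any prefix whose next hop matches that of its longest covering prefix,
which preserves forwarding behavior. A minimal sketch over prefixes written as
bit strings:

```python
def level1_aggregate(fib):
    """Drop prefixes whose next hop equals that of their longest covering
    prefix already kept. fib maps bit-string prefixes (e.g. '1011') to next
    hops; forwarding behavior is preserved. (Simplified illustration, not
    the single-traversal algorithm developed in the paper.)"""
    kept = {}
    for prefix in sorted(fib, key=len):           # shortest prefixes first
        cover_nh = None
        for l in range(len(prefix) - 1, -1, -1):  # longest kept cover wins
            if prefix[:l] in kept:
                cover_nh = kept[prefix[:l]]
                break
        if fib[prefix] != cover_nh:               # redundant entry is dropped
            kept[prefix] = fib[prefix]
    return kept

fib = {"1": "A", "10": "A", "101": "B", "1011": "B", "0": "C"}
print(level1_aggregate(fib))   # {'1': 'A', '0': 'C', '101': 'B'}
```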
|
Recent excesses across different search modes of the collaborations at the
LHC seem to indicate the presence of a Higgs-like scalar particle at 125 GeV.
Using the current data sets, we review and update analyses addressing the
extent to which this state is compatible with the Standard Model, and provide
two contextual answers for how it might instead fit into alternative scenarios
with enlarged electroweak symmetry breaking sectors.
|
A near-infrared imaging study of the evolved stellar populations in the dwarf
spheroidal galaxy Leo I is presented. Based on JHK observations obtained with
the WFCAM wide-field array at the UKIRT telescope, we build a near-infrared
photometric catalogue of red giant branch (RGB) and asymptotic giant branch
(AGB) stars in Leo I over a 13.5 arcmin square area. The V-K colours of RGB
stars, obtained by combining the new data with existing optical observations,
allow us to derive a distribution of global metallicity [M/H] with average
[M/H] = -1.51 (uncorrected) or [M/H] = -1.24 +/- 0.05 (int) +/- 0.15 (syst)
after correction for the mean age of Leo I stars. This is consistent with the
results from spectroscopy once stellar ages are taken into account. Using a
near-infrared two-colour diagram, we discriminate between carbon- and
oxygen-rich AGB stars and obtain a clean separation from Milky Way foreground
stars. We reveal a concentration of C-type AGB stars relative to the red giant
stars in the inner region of the galaxy, which implies a radial gradient in the
intermediate-age (1-3 Gyr) stellar populations. The numbers and luminosities of
the observed carbon- and oxygen-rich AGB stars are compared with those
predicted by evolutionary models including the thermally-pulsing AGB phase, to
provide new constraints to the models for low-metallicity stars. We find an
excess in the predicted number of C stars fainter than the RGB tip, associated
with a paucity of brighter ones. The number of O-rich AGB stars is roughly
consistent with the models, yet their predicted luminosity function extends to
brighter luminosities. It appears likely that the adopted evolutionary models
overestimate the C star lifetime and underestimate their K-band luminosity.
|
This paper presents experimental verification of below-cutoff transmission
through miniaturized waveguides whose interior is coated with a thin
anisotropic metamaterial liner possessing epsilon-negative and near-zero (ENNZ)
properties. These liners are realized using a simple, printed-circuit
implementation based on inductively loaded wires, and introduce an HE$_{11}$
mode well below the natural cutoff frequency. The inclusion of the liner is
shown to substantially improve the transmission between two embedded
shielded-loop sources. A homogenization scheme is developed to characterize the
liner's anisotropic effective-medium parameters, which is shown to accurately
describe a set of frequency-reduced cutoffs. The fabrication of the lined
waveguide is discussed, and the experimental and simulated transmission results
are shown to be in agreement.
|
We give large families of Shimura curves defined by congruence conditions,
all of whose twists lack $p$-adic points for some $p$. For each such curve we
give analytically large families of counterexamples to the Hasse principle via
the descent (or equivalently \'etale Brauer-Manin) obstruction to rational
points applied to \'etale coverings coming from the level structure. More
precisely, we find infinitely many quadratic fields defined using congruence
conditions such that a twist of a related Shimura curve by each of those fields
violates the Hasse principle. As a minimal example, we find the twist of the
genus 11 Shimura curve $X^{143}$ by $\mathbf{Q}(\sqrt{-67})$ and its
bi-elliptic involution to violate the Hasse principle.
|
Critical to the development of improved solid oxide fuel cell (SOFC)
technology are novel compounds with high oxygen reduction reaction (ORR)
catalytic activity and robust stability under cathode operating conditions.
Approximately 2145 distinct perovskite compositions are screened for potential
use as high activity, stable SOFC cathodes, and it is verified that the
screening methodology qualitatively reproduces the experimental activity,
stability, and conduction properties of well-studied cathode materials. The
calculated oxygen p-band center is used as a first-principles-based descriptor
of the surface exchange coefficient (k*), which in turn correlates with cathode
ORR activity. Convex hull analysis is used under operating conditions in the
presence of oxygen, hydrogen, and water vapor to determine thermodynamic
stability. This search has yielded 52 potential cathode materials with good
predicted stability in typical SOFC operating conditions and predicted k* on
par with leading ORR perovskite catalysts. The established trends in predicted
k* and stability are used to suggest methods of improving the performance of
known promising compounds. The material design strategies and new materials
discovered in the computational search help enable the development of high
activity, stable compounds for use in future solid oxide fuel cells and related
applications.
|
Strange particle enhancement in relativistic ion collisions is discussed with
particular attention to the dependence on the size of the volume and/or the
baryon number of the system.
|