In this paper, we present a novel method named RECON that automatically
identifies relations in a sentence (sentential relation extraction) and aligns
them to a knowledge graph (KG). RECON uses a graph neural network to learn
representations of both the sentence and the facts stored in a KG, improving
the overall extraction quality. These facts, including entity attributes
(label, alias, description, instance-of) and factual triples, have not been
collectively used in state-of-the-art methods. We evaluate the effect of
various forms of representing the KG context on the performance of RECON. The
empirical evaluation on two standard relation extraction datasets shows that
RECON significantly outperforms all state-of-the-art methods on the NYT
Freebase and Wikidata datasets. RECON reports an F1 score of 87.23 (vs. the
82.29 baseline) on the Wikidata dataset, whereas on NYT Freebase the reported
values are 87.5 (P@10) and 74.1 (P@30), compared to the previous baseline
scores of 81.3 (P@10) and 63.1 (P@30).
|
Reliable extraction of cosmological information from observed cosmic
microwave background (CMB) maps may require removal of strongly foreground
contaminated regions from the analysis. In this article, we employ an
artificial neural network (ANN) to predict the full sky CMB angular power
spectrum on intermediate to large angular scales from the partial sky
spectrum obtained from a masked CMB temperature anisotropy map. We use a simple
ANN architecture with one hidden layer containing $895$ neurons. Using $1.2
\times 10^{5}$ training samples of full sky and corresponding partial sky CMB
angular power spectra at Healpix pixel resolution parameter $N_{side} = 256$,
we show that the spectrum predicted by our ANN agrees well with the target
spectrum at each realization for the multipole range $2 \leq l \leq 512$. The predicted
spectra are statistically unbiased and they preserve the cosmic variance
accurately. Statistically, the differences between the mean predicted and
underlying theoretical spectra are within approximately $3\sigma$. Moreover,
the probability densities obtained from predicted angular power spectra agree
very well with those obtained from `actual' full sky CMB angular power spectra
for each multipole. Interestingly, our work shows that the significant
correlations in the input cut-sky spectra, due to the mode-mode coupling
introduced on the partial sky, are effectively removed, since the ANN learns
the hidden pattern between the partial sky and full sky spectra while
preserving all of the statistical properties. The excellent agreement of
statistical properties between the predicted and ground-truth spectra
demonstrates the importance of using artificial intelligence systems in
cosmological analysis more widely.
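As an illustrative sketch (not taken from the paper), the architecture
described above is small enough to state in a few lines of Keras. The hidden
width (895) and the multipole range are from the abstract; the activation,
optimizer, and loss below are our assumptions.

```python
import tensorflow as tf

L_MIN, L_MAX = 2, 512
N_ELL = L_MAX - L_MIN + 1  # 511 multipoles per spectrum, l = 2..512

# One-hidden-layer regression network: partial-sky C_l -> full-sky C_l.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_ELL,)),                 # masked (partial-sky) spectrum
    tf.keras.layers.Dense(895, activation="relu"),  # single hidden layer, 895 neurons
    tf.keras.layers.Dense(N_ELL),                   # predicted full-sky spectrum
])
model.compile(optimizer="adam", loss="mse")

# Training would use ~1.2e5 simulated (partial-sky, full-sky) spectrum pairs:
# model.fit(X_partial, Y_full, epochs=..., validation_split=0.1)
```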
|
Let $G$ be a filtered Lie conformal algebra whose associated graded conformal
algebra is isomorphic to that of the general conformal algebra $gc_1$. In this
paper, we prove that $G\cong gc_1$ or ${\rm gr\,}gc_1$ (the associated graded
conformal algebra of $gc_1$), by making use of some results on the second
cohomology groups of the conformal algebra $\mathfrak{g}$ with coefficients in
its module $M_{b,0}$ of rank 1, where $\mathfrak{g}={\rm Vir}\ltimes M_{a,0}$
is the semi-direct sum of the Virasoro conformal algebra ${\rm Vir}$ with its
module $M_{a,0}$. Furthermore, we prove that ${\rm gr\,}gc_1$ does not have a
nontrivial representation on a finite $\mathbb{C}[\partial]$-module; this
provides an example of a finitely freely generated simple Lie conformal
algebra of linear growth that cannot be embedded into the general conformal
algebra $gc_N$ for any $N$.
|
MP Gem has long been suspected to be a long-period eclipsing binary, but it
had not been seen in a faint state for 71 years since its discovery in 1944,
and its nature remained a mystery. Using the Public Data Release of the Zwicky
Transient Facility observations, I finally reached the conclusion that this
object is a VY Scl-type cataclysmic variable, based on the detection of the
deep faint state and the rising phase in 2018.
|
The effect of gravitational waves (GWs) has been observed indirectly, by
monitoring the change in the orbital frequency of neutron stars in a binary
system as they lose energy via gravitational radiation. However, GWs have not
yet been observed directly. The initial LIGO apparatus has not yet observed
GWs. The Advanced LIGO (AdLIGO) will use a combination of improved techniques
in order to increase the sensitivity. Along with power recycling and a higher
power laser source, the AdLIGO will employ signal recycling (SR). While SR
would increase sensitivity, it would also reduce the bandwidth significantly.
Previously, we and others have investigated, theoretically and experimentally,
the feasibility of using a White Light Cavity (WLC) to circumvent this
constraint. However, in the previous work, it was not clear how one would
incorporate the white light cavity effect. Here, we first develop a general
model for Michelson-interferometer-based GW detectors that can be easily
adapted to include the effects of incorporating a WLC into the design. We then
describe a concrete design of a WLC constructed as a compound mirror, to
replace the signal recycling mirror. This design is simple, robust, completely
non-invasive, and can be added to the AdLIGO system without changing any other
optical elements. We show a choice of parameters for which the signal
sensitivity as well as the bandwidth are enhanced significantly over what is
planned for the AdLIGO, covering the entire spectrum of interest for
gravitational waves.
|
We show that an acoustic crystalline wave gives rise to an effect on an
electron gas similar to that of a gravitational wave. Applying this idea to a
two-dimensional electron gas in the fractional quantum Hall regime allows for
the experimental study of its dynamical gravitational response. To study such
response we generalize Haldane's geometrical description of fractional quantum
Hall states to situations where the external metric is time-dependent. We show
that such a time-dependent metric (generated by an acoustic or effective
gravitational wave) couples to collective modes of the system, including a
quadrupolar mode similar to a graviton at long wavelength, and the
magneto-roton at finite wavelength. The energies of these modes can be
revealed in spectroscopic measurements. We argue that such a gravitational
probe provides a potentially highly useful alternative probe of quantum Hall
liquids, in addition to the usual electromagnetic response.
|
We study lattice QCD at non-vanishing chemical potential using the complex
Langevin equation. We compare the results with multi-parameter reweighting both
from $\mu=0$ and phase quenched ensembles. We find good agreement for lattice
spacings below $\approx$0.15 fm. On coarser lattices the complex Langevin
approach breaks down. Four flavors of staggered fermions are used on $N_t=4, 6$
and 8 lattices. For one ensemble we also use two flavors to investigate the
effects of rooting.
|
Standard federated learning (FL) algorithms typically require multiple rounds
of communication between the server and the clients, which has several
drawbacks, including requiring constant network connectivity, repeated
investment of computational resources, and susceptibility to privacy attacks.
One-Shot FL is a new paradigm that aims to address this challenge by enabling
the server to train a global model in a single round of communication. In this
work, we present FedFisher, a novel algorithm for one-shot FL that makes use of
Fisher information matrices computed on local client models, motivated by a
Bayesian perspective of FL. First, we theoretically analyze FedFisher for
two-layer over-parameterized ReLU neural networks and show that the error of
our one-shot FedFisher global model becomes vanishingly small as the width of
the neural networks and the amount of local training at the clients increase.
Next, we
propose practical variants of FedFisher using the diagonal Fisher and K-FAC
approximation for the full Fisher and highlight their communication and compute
efficiency for FL. Finally, we conduct extensive experiments on various
datasets, which show that these variants of FedFisher consistently improve over
competing baselines.
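As an illustrative sketch (our reading of Fisher-weighted one-shot
aggregation, not necessarily the exact FedFisher update rule), the
diagonal-Fisher variant can be summarized as fusing local models with their
per-parameter Fisher values used as precision weights:

```python
import numpy as np

def diag_fisher_aggregate(client_weights, client_fishers, eps=1e-8):
    """One-shot aggregation sketch: fuse local models in a single round,
    weighting each parameter by its diagonal Fisher information.

    client_weights: list of 1-D parameter vectors, one per client
    client_fishers: list of matching diagonal-Fisher vectors
    """
    fisher_sum = np.sum(client_fishers, axis=0) + eps       # total precision
    weighted = np.sum([f * w for f, w in zip(client_fishers, client_weights)],
                      axis=0)                               # precision-weighted sum
    return weighted / fisher_sum                            # fused global model
```

The Bayesian reading is that each local model is treated as a Gaussian
posterior with precision given by its Fisher information, and the global model
is the product-of-Gaussians mean.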
|
Using multiscaling analysis, we compare the characteristic roughening of
ferroelectric domain walls in PZT thin films with numerical simulations of
weakly pinned one-dimensional interfaces. Although at length scales up to
roughly 5 microns the ferroelectric domain walls behave similarly to the
numerical interfaces, showing a simple mono-affine scaling (with a
well-defined roughness exponent), we demonstrate more complex scaling at
larger length scales, making the walls globally multi-affine (with a roughness
exponent that varies with the observation length scale). The
dominant contributions to this multi-affine scaling appear to be very localized
variations in the disorder potential, possibly related to dislocation defects
present in the substrate.
|
We present results on the system size dependence of high transverse momentum
di-hadron correlations at $\sqrt{s_{NN}}$ = 200 GeV as measured by STAR at
RHIC. Measurements in d+Au, Cu+Cu and Au+Au collisions reveal similar jet-like
correlation yields at small angular separation ($\Delta\phi\sim0$,
$\Delta\eta\sim0$) for all systems and centralities. Previous measurements have
shown that the away-side yield is suppressed in heavy-ion collisions. We
present measurements of the away-side suppression as a function of transverse
momentum and centrality in Cu+Cu and Au+Au collisions. The suppression is found
to be similar in Cu+Cu and Au+Au collisions at a similar number of
participants. The results are compared to theoretical calculations based on the
parton quenching model and the modified fragmentation model. The observed
differences between data and theory indicate that the correlated yields
presented here will provide important constraints on the medium density
profile and on energy loss model parameters.
|
The exponential factor of Arrhenius satisfactorily quantifies the energetic
restriction of chemical reactions but is still awaiting a rigorous basis.
Assuming that the Arrhenius equation should be based on statistical mechanics
and is probabilistic in nature, two structures for this equation are compared,
depending on whether the reactant energies are viewed as the mean values of
specific energy distributions or as particular levels in a global energy
distribution. In the first version, the Arrhenius exponential factor would be a
probability that depends once on temperature, while in the second it is a ratio
of probabilities that depends twice on temperature. These competing equations
are tested using experimental data for the isomerization of 2-butene. This
comparison reveals the fundamental structure of the Arrhenius law in isothermal
systems and overlooked properties resulting from the introduction of reactant
energies into the equation.
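For concreteness, the two structures can be illustrated as follows (our
notation, offered as an illustration rather than the authors' exact
formulation):

\[
k_{1} = A\,e^{-E_{a}/RT},
\qquad
k_{2} = A\,\frac{P(E^{\ddagger},T)}{P(E_{r},T)}
      = A\,\frac{e^{-E^{\ddagger}/RT}}{e^{-E_{r}/RT}},
\]

where in $k_{1}$ the exponential factor is read as a single probability, so
temperature enters once, while in $k_{2}$ it is a ratio of two Boltzmann
populations (barrier energy $E^{\ddagger}$ and reactant level $E_{r}$), each
carrying its own temperature dependence, so temperature enters twice.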
|
We study convergence of return- and hitting-time distributions of small sets
$E_{k}$ with $\mu(E_{k})\rightarrow0$ in recurrent ergodic dynamical systems
preserving an infinite measure $\mu$. Some properties which are easy in finite
measure situations break down in this null-recurrent setup. However, in the
presence of a uniform set $Y$ with wandering rate regularly varying of index
$1-\alpha$ with $\alpha\in(0,1]$, there is a scaling function suitable for all
subsets of $Y$. In this case, we show that return distributions for the $E_{k}$
converge iff the corresponding hitting time distributions do, and we derive an
explicit relation between the two limit laws. Some consequences of this result
are discussed. In particular, this leads to improved sufficient conditions for
convergence to $\mathcal{E}^{1/\alpha}\,\mathcal{G}_{\alpha}$, where
$\mathcal{E}$ and $\mathcal{G}_{\alpha}$ are independent random variables, with
$\mathcal{E}$ exponentially distributed and $\mathcal{G}_{\alpha}$ following
the one-sided stable law of order $\alpha$ (and $\mathcal{G}_{1}:=1$). The same
principle also reveals the limit laws (different from the above) which occur at
hyperbolic periodic points of prototypical null-recurrent interval maps. We
also derive similar results for the barely recurrent $\alpha=0$ case.
|
X-ray diffraction and Raman-scattering measurements on cerium vanadate have
been performed up to 12 and 16 GPa, respectively. The experiments reveal the
onset at 5.3 GPa of a pressure-induced irreversible phase transition from the
zircon to the monazite structure. Beyond this pressure, diffraction peaks and
Raman-active modes of the monazite phase are measured. The zircon-to-monazite
transition in CeVO4 is distinctive among the rare-earth orthovanadates. We
also observe a softening of the external translational Eg and internal B2g
bending modes, which we attribute to mechanical instabilities of the zircon
phase against the pressure-induced distortion. We additionally report
lattice-dynamical and total-energy calculations which are in agreement with the
experimental results. Finally, the effect of non-hydrostatic stresses on the
structural sequence is studied and the equations of state of different phases
are reported.
|
The development of habitable conditions on Earth is tightly connected to the
evolution of its atmosphere which is strongly influenced by atmospheric escape.
We investigate the evolution of the polar ion outflow from the open field line
bundle which is the dominant escape mechanism for the modern Earth. We perform
Direct Simulation Monte Carlo (DSMC) simulations and estimate the upper limits
on escape rates from the Earth's open field line bundle from three gigayears
ago (Ga) to the present, assuming the present-day composition of the
atmosphere. We perform two additional simulations with lower oxygen mixing
ratios of 1% and 15% to account for the conditions shortly after the Great
Oxidation Event (GOE). We estimate maximum loss rates due to the polar outflow
three gigayears ago of $3.3 \times10^{27}$ s$^{-1}$ and $2.4 \times 10^{27}$
s$^{-1}$ for oxygen and nitrogen, respectively. The total integrated mass loss
equals 39% and 10% of the modern atmosphere's mass, for oxygen and nitrogen,
respectively. According to our results, the main factors that governed the
polar outflow in the considered time period are the evolution of the XUV
radiation of the Sun and the atmosphere's composition. The evolution of the
Earth's magnetic field plays a less important role. We conclude that although
the atmosphere with the present-day composition can survive the escape due to
polar outflow, a higher level of CO$_2$ between 3.0 and 2.0~Ga is likely
necessary to reduce the escape.
|
Planetesimals form in gas-rich protoplanetary disks around young stars.
However, protoplanetary disks fade in about 10 Myr. The planetesimals (and also
many of the planets) left behind are too dim to study directly. Fortunately,
collisions between planetesimals produce dusty debris disks. These debris disks
trace the processes of terrestrial planet formation for 100 Myr and of
exoplanetary system evolution out to 10 Gyr. This chapter begins with a summary
of planetesimal formation as a prelude to the epoch of planetesimal
destruction. Our review of debris disks covers the key issues, including dust
production and dynamics, needed to understand the observations. Our discussion
of extrasolar debris keeps an eye on similarities to and differences from Solar
System dust.
|
In space-air-ground integrated networks (SAGIN), receivers experience diverse
interference from both the satellite and terrestrial transmitters. The
heterogeneous structure of SAGIN poses challenges for traditional interference
management (IM) schemes to effectively mitigate interference. To address this,
a novel UAV-RIS-aided IM scheme is proposed for SAGIN, where different types of
channel state information (CSI) including no CSI, instantaneous CSI, and
delayed CSI, are considered. According to the types of CSI, interference
alignment, beamforming, and space-time precoding are designed at the satellite
and terrestrial transmitter side, and meanwhile, the UAV-RIS is introduced to
cooperate in the interference elimination process. Additionally, the degrees of
freedom (DoF) obtained by the proposed IM scheme are discussed in depth when
the number of antennas on the satellite side is insufficient. Simulation
results show that the proposed IM scheme improves the system capacity in
different CSI scenarios, and the performance is better than the existing IM
benchmarks without UAV-RIS.
|
Millimeter Waves (mmWave) systems have the potential of enabling
multi-gigabit-per-second communications in future Intelligent Transportation
Systems (ITSs). Unfortunately, because of the increased vehicular mobility,
they require frequent antenna beam realignments - thus significantly increasing
the in-band Beamforming (BF) overhead. In this paper, we propose Smart
Motion-prediction Beam Alignment (SAMBA), a MAC-layer algorithm that exploits
the information broadcast via DSRC beacons by all vehicles. Based on this
information, overhead-free BF is achieved by estimating the position of the
vehicle and predicting its motion. Moreover, adapting the beamwidth with
respect to the estimated position can further enhance the performance. Our
investigation shows that SAMBA outperforms the IEEE 802.11ad BF strategy, more
than doubling the data rate for sparse vehicle density while enhancing the
network throughput proportionally to the number of vehicles. Furthermore,
SAMBA proved more efficient than the legacy BF algorithm under highly dynamic
vehicular environments and is hence a viable solution for future ITS services.
|
In the integrative analysis of omics data, it is often of interest to extract
a data representation from one data type that best reflects its relations with
another data type. This task is traditionally fulfilled by linear methods such
as canonical correlation analysis (CCA) and partial least squares (PLS).
However, information contained in one data type pertaining to the other data
type may be complex and in nonlinear form. Deep learning provides a convenient
alternative to extract low-dimensional nonlinear data embedding. In addition,
the deep learning setup can naturally incorporate the effects of clinical
confounding factors into the integrative analysis. Here we report a deep
learning setup, named Autoencoder-based Integrative Multi-omics data Embedding
(AIME), to extract data representation for omics data integrative analysis. The
method can adjust for confounder variables, achieve informative data embedding,
rank features in terms of their contributions, and find pairs of features from
the two data types that are related to each other through the data embedding.
In simulation studies, the method was highly effective in the extraction of
major contributing features between data types. Using a real microRNA-gene
expression dataset, and a real DNA methylation-gene expression dataset, we show
that AIME excluded the influence of confounders including batch effects, and
extracted biologically plausible novel information. The R package based on
Keras and the TensorFlow backend is available at
https://github.com/tianwei-yu/AIME.
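A minimal sketch of one way to wire such an architecture in Keras is given
below. This is our illustrative guess at the wiring, not the authors' exact
design (the released package should be consulted for that): an encoder embeds
data type X, and a decoder predicts data type Y from the embedding
concatenated with the clinical confounders Z, so the embedding captures
X-information related to Y beyond what the confounders explain.

```python
import tensorflow as tf

def build_aime_like(dim_x, dim_y, dim_z, dim_embed=32):
    """Illustrative AIME-style model: embed omics type X and reconstruct
    omics type Y from the embedding plus confounders Z."""
    x = tf.keras.Input(shape=(dim_x,), name="omics_x")
    z = tf.keras.Input(shape=(dim_z,), name="confounders")
    h = tf.keras.layers.Dense(128, activation="relu")(x)
    embed = tf.keras.layers.Dense(dim_embed, activation="tanh",
                                  name="embedding")(h)   # nonlinear embedding
    hz = tf.keras.layers.Concatenate()([embed, z])       # adjust for confounders
    h2 = tf.keras.layers.Dense(128, activation="relu")(hz)
    y_hat = tf.keras.layers.Dense(dim_y, name="omics_y")(h2)
    model = tf.keras.Model([x, z], y_hat)
    model.compile(optimizer="adam", loss="mse")
    return model
```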
|
The popularity of social media platforms such as Twitter has led to the
proliferation of automated bots, creating both opportunities and challenges in
information dissemination, user engagement, and quality of service. Past work
on profiling bots has focused largely on malicious bots, with the assumption
that these bots should be removed. In this work, however, we find many bots
that are benign, and propose a new, broader categorization of bots based on
their behaviors. This includes broadcast, consumption, and spam bots. To
facilitate comprehensive analyses of bots and how they compare to human
accounts, we develop a systematic profiling framework that includes a rich set
of features and a classifier bank. We conduct extensive experiments to
evaluate the performance of different classifiers under varying time windows,
identify the key features of bots, and draw inferences about bots in the
larger Twitter population.
Our analysis encompasses more than 159K bot and human (non-bot) accounts in
Twitter. The results provide interesting insights on the behavioral traits of
both benign and malicious bots.
|
Reinforcement learning (RL) is widely used in autonomous driving tasks, and
training RL models typically involves a multi-step process: pre-training RL
models on simulators, uploading the pre-trained model to real-life robots, and
fine-tuning the weight parameters on robot vehicles. This sequential process
is extremely time-consuming and, more importantly, knowledge from the
fine-tuned model stays local and cannot be re-used or leveraged
collaboratively. To
tackle this problem, we present an online federated RL transfer process for
real-time knowledge extraction where all the participant agents make
corresponding actions with the knowledge learned by others, even when they are
acting in very different environments. To validate the effectiveness of the
proposed approach, we constructed a real-life collision avoidance system with
Microsoft Airsim simulator and NVIDIA JetsonTX2 car agents, which cooperatively
learn from scratch to avoid collisions in an indoor environment with obstacle
objects. We demonstrate that with the proposed framework, the simulator car
agents can transfer knowledge to the RC cars in real time, yielding a 27%
increase in the average distance from obstacles and a 42% decrease in the
collision count.
|
We report on the impact of magnetoelastic coupling on the magnetocaloric
properties of LaFe$_{11.4}$Si$_{1.6}$H$_{1.6}$ in terms of the vibrational
density of states (VDOS), which we determined with $^{57}$Fe nuclear resonant
inelastic X-ray scattering measurements and with density-functional-theory
(DFT) based first-principles calculations in the ferromagnetic (FM)
low-temperature and paramagnetic (PM) high-temperature phases. In experiments
and calculations, we observe pronounced differences in the shape of the
Fe-partial VDOS between
non-hydrogenated and hydrogenated samples. This shows that hydrogen not only
shifts the temperature of the first-order phase transition, but also
significantly affects the elastic response of the Fe subsystem. Moreover, the
anomalous redshift of the Fe VDOS, observed on going to the low-volume PM
phase, survives hydrogenation. As a consequence, the change in the Fe specific
vibrational entropy $\Delta S_\mathrm{lat}$ across the phase transition has the
same sign as the magnetic and electronic contribution. DFT calculations show
that the same mechanism, which is a consequence of the itinerant electron
metamagnetism associated with the Fe subsystem, is effective in both the
hydrogenated and the hydrogen-free compounds. Although reduced by 50% as
compared to the hydrogen-free system, the measured change $\Delta
S_\mathrm{lat}$ of $3.2\pm1.9$ J/kgK across the FM to PM transition contributes
significantly and cooperatively, at the level of 35%, to the total isothermal
entropy change $\Delta S_\mathrm{iso}$. Hydrogenation is observed to induce an
overall
blueshift of the Fe-VDOS with respect to the H-free compound; this effect,
together with the enhanced Debye temperature observed, is a fingerprint of the
hardening of the Fe sublattice by hydrogen incorporation. In addition, the mean
Debye velocity of sound of LaFe$_{11.4}$Si$_{1.6}$H$_{1.6}$ was determined from
the NRIXS and the DFT data.
|
We report on the X-ray emission from the radio jet of 3C 17 from Chandra
observations and compare the X-ray emission with radio maps from the VLA
archive and with the optical-IR archival images from the Hubble Space
Telescope. X-ray detections of two knots in the 3C 17 jet are found and both of
these features have optical counterparts. We derive the spectral energy
distribution for the knots in the jet and give source parameters required for
the various X-ray emission models, finding that both IC/CMB and synchrotron are
viable to explain the high energy emission. A curious optical feature (with no
radio or X-ray counterparts) possibly associated with the 3C 17 jet is
described. We also discuss the use of curved jets for the problem of
identifying inverse Compton X-ray emission via scattering on CMB photons.
|
We continue our study of the noncommutative deformation of the relative
$p$-adic Hodge structures of Kedlaya-Liu, following our previous work. In this
paper, we initiate the study of the descent of pseudocoherent modules carrying
large noncommutative coefficients. We also study more systematically the
noncommutative geometric aspects of the noncommutative deformation of Hodge
structures, which should provide insights not only for noncommutative Iwasawa
theory but also for noncommutative analytic geometry. The noncommutative
Hodge-Iwasawa theory is improved along a well-defined direction (we expect
many well-targeted applications to noncommutative Tamagawa number conjectures
from the modern perspectives of Burns-Flach-Fukaya-Kato), while the
Kedlaya-Liu gluing of pseudocoherent Banach modules with certain stability is
also generalized to the large noncommutative coefficient case.
|
We introduce novel high order well-balanced finite volume methods for the
full compressible Euler system with gravity source term. They require no a
priori knowledge of the hydrostatic solution which is to be well-balanced and
are not restricted to certain classes of hydrostatic solutions. In one spatial
dimension we construct a method that exactly balances a high order
discretization of any hydrostatic state. The method is extended to two spatial
dimensions using a local high order approximation of a hydrostatic state in
each cell. The proposed simple, flexible, and robust methods are not restricted
to a specific equation of state. Numerical tests verify that the proposed
method improves the capability to accurately resolve small perturbations on
hydrostatic states.
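For reference, the one-dimensional compressible Euler system with gravity
source term, and the hydrostatic states it admits, take the standard form
(standard textbook notation, not taken from this paper):

\[
\partial_t\!\begin{pmatrix}\rho\\ \rho v\\ E\end{pmatrix}
+ \partial_x\!\begin{pmatrix}\rho v\\ \rho v^2 + p\\ (E+p)v\end{pmatrix}
= \begin{pmatrix}0\\ -\rho\,\partial_x\phi\\ -\rho v\,\partial_x\phi\end{pmatrix},
\qquad
\text{hydrostatic states: } v = 0,\quad \partial_x p = -\rho\,\partial_x\phi,
\]

so a well-balanced scheme must discretize the pressure gradient and the
gravity source so that they cancel exactly on such states.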
|
We introduce the concept of pseudotwistor (with particular cases called
twistor and braided twistor) for an algebra $(A, \mu, u)$ in a monoidal
category, as a morphism $T:A\otimes A\to A\otimes A$ satisfying a list of
axioms ensuring that $(A, \mu \circ T, u)$ is also an algebra in the category.
This concept provides a unifying framework for various deformed (or twisted)
algebras from the literature, such as twisted tensor products of algebras,
twisted bialgebras and algebras endowed with Fedosov products. Pseudotwistors
appear also in other topics from the literature, e.g. Durdevich's braided
quantum groups and ribbon algebras. We also focus on the effect of twistors on
the universal first order differential calculus, as well as on lifting twistors
to braided twistors on the algebra of universal differential forms.
|
An important phase transition in black hole thermodynamics is associated with
the divergence of the specific heat with fixed charge and angular momenta, yet
one can demonstrate that neither Ruppeiner's entropy metric nor Weinhold's
energy metric reveals this phase transition. In this paper, we introduce a new
thermodynamical metric based on the Hessian matrix of several free energies. We
demonstrate, by studying various charged and rotating black holes, that the
divergence of the specific heat corresponds to the curvature singularity of
this new metric. We further investigate metrics on all thermodynamical
potentials generated by Legendre transformations and study correspondences
between curvature singularities and phase transition signals. We show in
general that for a system with $n$ pairs of intensive/extensive variables, all
thermodynamical potential metrics can be embedded into a flat
$(n,n)$-dimensional
space. We also generalize the Ruppeiner metrics and they are all conformal to
the metrics constructed from the relevant thermodynamical potentials.
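For orientation, the two classical metrics mentioned above are Hessians of
thermodynamic potentials (standard definitions; the precise free energies and
variables used for the new metric are specified in the paper):

\[
g^{\mathrm{Weinhold}}_{ij} = \frac{\partial^2 U}{\partial x^i\,\partial x^j},
\qquad
g^{\mathrm{Ruppeiner}}_{ij} = -\frac{\partial^2 S}{\partial x^i\,\partial x^j},
\]

where $U$ is the internal energy as a function of the entropy and the other
extensive variables $x^i$, and $S$ is the entropy as a function of the
internal energy and the other extensive variables; the new metric replaces
these generating potentials with a suitable free energy.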
|
Using angle-resolved photoelectron spectroscopy and ab initio GW
calculations, we unambiguously show that the widely investigated
three-dimensional topological insulator Bi2Se3 has a direct band gap at the
Gamma point. Experimentally, this is shown by a three-dimensional band mapping
in large fractions of the Brillouin zone. Theoretically, we demonstrate that
the valence band maximum is located at the Brillouin zone center only if
many-body effects are included in the calculation. Otherwise, it is found in a
high-symmetry mirror plane away from the zone center.
|
We present a new oblivious RAM that supports variable-sized storage blocks
(vORAM), which is the first ORAM to allow varying block sizes without trivial
padding. We also present a new history-independent data structure (a HIRB tree)
that can be stored within a vORAM. Together, this construction provides an
efficient and practical oblivious data structure (ODS) for a key/value map, and
goes further to provide an additional privacy guarantee as compared to prior
ODS maps: even upon client compromise, deleted data and the history of old
operations remain hidden from the attacker.
We implement and measure the performance of our system using Amazon Web
Services, and the single-operation time for a realistic database (up to
$2^{18}$ entries) is less than 1 second. This represents a 100x speed-up
compared to the current best oblivious map data structure (which provides
neither secure deletion nor history independence) by Wang et al. (CCS 14).
|
Pathology deals with the practice of discovering the causes of disease by
analyzing body samples. The most widely used approach in this field is
histology, which is essentially the study and viewing of the microscopic
structures of cells and tissues. The slide viewing method is widely used, and
slides are converted into digital form to produce high-resolution images. This
has enabled deep learning and machine learning to enter this field of medical
science. In the present study, a neural-network-based model is proposed for
the classification of blood cell images into various categories. When an input
image is passed through the proposed architecture, with all hyperparameters
and dropout ratio values set in accordance with the proposed algorithm, the
model classifies the blood cell images with an accuracy of 95.24%. The
performance of the proposed model is better than existing standard
architectures and the work done by various researchers. The model will thus
enable the development of pathology systems that reduce human error and the
daily workload of laboratory staff, in turn helping pathologists carry out
their work more efficiently and effectively.
|
The concept of correlated two-photon spiral imaging is introduced. We begin
by analyzing the joint orbital angular momentum (OAM) spectrum of correlated
photon pairs. The mutual information carried by the photon pairs is evaluated,
and it is shown that when an object is placed in one of the beam paths the
value of the mutual information is strongly dependent on object shape and is
closely related to the degree of rotational symmetry present. After analyzing
the effect of the object on the OAM correlations, the method of correlated
spiral imaging is described. We first present a version using parametric
downconversion, in which entangled pairs of photons with opposite OAM values
are produced, placing an object in the path of one beam. We then present a
classical (correlated, but non-entangled) version. The relative problems and
benefits of the classical versus entangled configurations are discussed. The
prospect is raised of carrying out compressive imaging via two-photon OAM
detection to reconstruct sparse objects with few measurements.
|
Real-world applications could benefit from the ability to automatically
generate a fine-grained ranking of photo aesthetics. However, previous methods
for image aesthetics analysis have primarily focused on the coarse, binary
categorization of images into high- or low-aesthetic categories. In this work,
we propose to learn a deep convolutional neural network to rank photo
aesthetics, in which the relative ranking of photo aesthetics is directly
modeled in the loss function. Our model incorporates joint learning of
meaningful photographic attributes and image content information, which can
help regularize the complicated photo aesthetics rating problem.
To train and analyze this model, we have assembled a new aesthetics and
attributes database (AADB) which contains aesthetic scores and meaningful
attributes assigned to each image by multiple human raters. Anonymized rater
identities are recorded across images allowing us to exploit intra-rater
consistency using a novel sampling strategy when computing the ranking loss of
training image pairs. We show the proposed sampling strategy is very effective
and robust in the face of the subjective judgement of image aesthetics by
individuals with different aesthetic tastes. Experiments demonstrate that our
unified model can generate aesthetic rankings that are more consistent with
human ratings. To further validate our model, we show that by simply
thresholding the estimated aesthetic scores, we are able to achieve
state-of-the-art classification performance on the existing AVA dataset
benchmark.
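For illustration, a pairwise hinge loss of the kind commonly used for such
relative-ranking objectives might look like the sketch below (our example of
the idea, not the paper's exact loss function):

```python
import numpy as np

def pairwise_ranking_loss(scores_hi, scores_lo, margin=1.0):
    """Hinge-style ranking loss sketch: for image pairs where the first image
    is rated more aesthetic than the second, penalize the model unless the
    predicted score gap exceeds the margin."""
    return np.mean(np.maximum(0.0, margin - (scores_hi - scores_lo)))
```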
|
Although Convolutional Neural Networks (CNNs) have high accuracy in image
recognition, they are vulnerable to adversarial examples and
out-of-distribution data, and the difference from human recognition has been
pointed out. In order to improve the robustness against out-of-distribution
data, we present a frequency-based data augmentation technique that replaces
the frequency components with other images of the same class. When the training
data are CIFAR10 and the out-of-distribution data are SVHN, the Area Under
Receiver Operating Characteristic (AUROC) curve of the model trained with the
proposed method increases from 89.22\% to 98.15\%, and increases further to
98.59\% when combined with another data augmentation method. Furthermore, we
experimentally demonstrate that the robust model for out-of-distribution data
uses a lot of high-frequency components of the image.
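A minimal sketch of this kind of frequency-domain augmentation follows. This
is our illustrative recipe: the circular cutoff and the low/high split are
assumptions, and the paper's exact procedure may differ.

```python
import numpy as np

def freq_mix(img_a, img_b, keep_fraction=0.5):
    """Replace the high-frequency components of img_a with those of img_b
    (an image from the same class), keeping img_a's low frequencies."""
    fa = np.fft.fftshift(np.fft.fft2(img_a, axes=(0, 1)), axes=(0, 1))
    fb = np.fft.fftshift(np.fft.fft2(img_b, axes=(0, 1)), axes=(0, 1))
    h, w = img_a.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_pass = radius <= keep_fraction * min(h, w) / 2   # circular low-freq mask
    if img_a.ndim == 3:
        low_pass = low_pass[..., None]                   # broadcast over channels
    mixed = np.where(low_pass, fa, fb)
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return out.real
```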
|
The topic of statistical inference for dynamical systems has been studied
extensively across several fields. In this survey we focus on the problem of
parameter estimation for non-linear dynamical systems. Our objective is to
place results across distinct disciplines in a common setting and highlight
opportunities for further research.
|
Internet of Things (IoT) and cloud computing together give us the ability to
sense, collect, process, and analyse data, so we can use these data to better
understand the behaviours, habits, preferences, and life patterns of users,
and lead them to consume resources more efficiently. In such knowledge
discovery
activities, privacy becomes a significant challenge due to the extremely
personal nature of the knowledge that can be derived from the data and the
potential risks involved. Therefore, understanding the privacy expectations and
preferences of stakeholders is an important task in the IoT domain. In this
paper, we review how privacy knowledge has been modelled and used in the past
in different domains. Our goal is not only to analyse, compare and consolidate
past research work but also to appreciate their findings and discuss their
applicability towards the IoT. Finally, we discuss major research challenges
and opportunities.
|
Stochastic linear bandits are a natural and simple generalisation of
finite-armed bandits with numerous practical applications. Current approaches
focus on generalising existing techniques for finite-armed bandits, notably the
optimism principle and Thompson sampling. While prior work has mostly been in
the worst-case setting, we analyse the asymptotic instance-dependent regret and
show matching upper and lower bounds on what is achievable. Surprisingly, our
results show that no algorithm based on optimism or Thompson sampling will ever
achieve the optimal rate, and indeed, can be arbitrarily far from optimal, even
in very simple cases. This is a disturbing result because these techniques are
standard tools that are widely used for sequential optimisation, for example
in generalised linear bandits and reinforcement learning.
|
The electromagnetic emissivity from QCD media away from equilibrium is
studied in the framework of closed time path thermal field theory. For the
dilepton rate a nonequilibrium mesonic medium is considered applying finite
temperature perturbation theory for $\pi -\rho$ interactions. The dilepton rate
is derived up to the order $g_\rho^2$. For hard photon production a quark gluon
plasma is assumed and calculations are performed in leading order in the strong
coupling constant. These examples are chosen in order to investigate the role
of possible pinch terms in boson and in fermion propagators, respectively. The
implications of the results for phenomenology are also discussed.
|
Observations of accretion disks around young brown dwarfs have led to the
speculation that they may form planetary systems similar to normal stars. While
there have been several detections of planetary-mass objects around brown
dwarfs (2MASS 1207-3932 and 2MASS 0441-2301), these companions have relatively
large mass ratios and projected separations, suggesting that they formed in a
manner analogous to stellar binaries. We present the discovery of a
planetary-mass object orbiting a field brown dwarf via gravitational
microlensing, OGLE-2012-BLG-0358Lb. The system is a low secondary/primary mass
ratio (0.080 +- 0.001), relatively tightly-separated (~0.87 AU) binary composed
of a planetary-mass object with 1.9 +- 0.2 Jupiter masses orbiting a brown
dwarf with a mass of 0.022 M_Sun. The relatively small mass ratio and separation
suggest that the companion may have formed in a protoplanetary disk around the
brown dwarf host, in a manner analogous to planets.
|
A new generation of wide-field emission-line surveys based on integral field
units (IFU) is allowing us to obtain spatially resolved information of the
gas-phase emission in nearby late-type galaxies, based on large samples of HII
regions and full two-dimensional coverage. These observations allow us to
discover and characterise abundance differentials between galactic
substructures and new scaling relations with global physical properties. Here
I review some highlights of our current studies employing this technique: (1)
the case study of NGC 628, the largest galaxy ever sampled with an IFU; (2) a
statistical approach to the abundance gradients of spiral galaxies, which
indicates a universal radial gradient for oxygen abundance; and (3) the
discovery of a new scaling relation of HII regions in spiral galaxies, the
local mass-metallicity relation of star-forming galaxies. The observational
properties and constraints found in local galaxies using this new technique
will allow us to interpret the gas-phase abundances of analogous high-z
systems.
|
In the case of Dynkin quivers we establish a formula for the Grothendieck
class of a quiver cycle as the iterated residue of a certain rational function,
for which we provide an explicit combinatorial construction. Moreover, we
utilize a new definition of the double stable Grothendieck polynomials due to
Rimanyi and Szenes in terms of iterated residues to exhibit how the computation
of quiver coefficients can be reduced to computing coefficients in Laurent
expansions of certain rational functions.
|
The discovery made at the Large Hadron Collider (LHC) has revealed that the
spontaneous symmetry breaking mechanism is realised in a gauge theory such as
the Standard Model (SM) by at least one Higgs doublet. However, the possible
existence of other scalar bosons cannot be excluded. We analyze signatures of
extensions of the SM characterized by extra scalar representations, in view of
the recent and previous Higgs data. We show that such representations can be
probed and distinguished, mostly with multileptonic final states, with a
relatively low luminosity at the LHC.
|
The aim of this research is to transform images of roadside traffic panels
into speech to assist the vehicle driver, which is a new approach in the state
of the art of advanced driver assistance systems. The designed system
comprises three modules: the first module captures and detects the text area
in traffic panels, the second module is responsible for converting the image
of the detected text area into editable text, and the last module is
responsible for transforming the text into speech. Additionally, during the
experiments, we developed a corpus of 250 images of traffic panels for two
Indian languages.
|
Modern fiber-optic coherent communications employ advanced
spectrally-efficient modulation formats that require sophisticated narrow
linewidth local oscillators (LOs) and complex digital signal processing (DSP).
Here, we establish a novel approach to carrier recovery harnessing large-gain
stimulated Brillouin scattering (SBS) on a photonic chip for up to 116.82
Gbit/sec self-coherent optical signals, eliminating the need for a separate LO.
In contrast to SBS processing in fiber, our solution provides phase and
polarization stability, while the narrow SBS linewidth allows for a
record-breaking small guardband of ~265 MHz, resulting in higher
spectral-efficiency than benchmark self-coherent schemes. This approach reveals
comparable performance to state-of-the-art coherent optical receivers without
requiring advanced DSP. Our demonstration develops a low-noise and
frequency-preserving filter that synchronously regenerates a low-power
narrowband optical tone that could relax the requirements on very-high-order
modulation signaling and be useful in long-baseline interferometry for
precision optical timing or reconstructing a reference tone for quantum-state
measurements.
|
Detecting and analyzing potential anomalous performances in cloud computing
systems is essential for avoiding losses to customers and ensuring the
efficient operation of the systems. To this end, a variety of automated
techniques have been developed to identify anomalies in cloud computing
performance. These techniques are usually adopted to track the performance
metrics of the system (e.g., CPU, memory, and disk I/O), represented by a
multivariate time series. However, given the complex characteristics of cloud
computing data, the effectiveness of these automated methods is affected. Thus,
substantial human judgment on the automated analysis results is required for
anomaly interpretation. In this paper, we present a unified visual analytics
system named CloudDet to interactively detect, inspect, and diagnose anomalies
in cloud computing systems. A novel unsupervised anomaly detection algorithm is
developed to identify anomalies based on the specific temporal patterns of the
given metrics data (e.g., the periodic pattern), the results of which are
visualized in our system to indicate the occurrences of anomalies. Rich
visualization and interaction designs are used to help understand the anomalies
in the spatial and temporal context. We demonstrate the effectiveness of
CloudDet through a quantitative evaluation, two case studies with real-world
data, and interviews with domain experts.
|
The last decade has seen the parallel emergence in computational neuroscience
and machine learning of neural network structures which spread the input signal
randomly to a higher dimensional space; perform a nonlinear activation; and
then solve for a regression or classification output by means of a mathematical
pseudoinverse operation. In the field of neuromorphic engineering, these
methods are increasingly popular for synthesizing biologically plausible neural
networks, but the "learning method" - computation of the pseudoinverse by
singular value decomposition - is problematic both for biological plausibility
and because it is not an online or an adaptive method. We present an online or
incremental method of computing the pseudoinverse, which we argue is
biologically plausible as a learning method, and which can be made adaptable
for non-stationary data streams. The method is significantly more
memory-efficient than the conventional computation of pseudoinverses by
singular value decomposition.
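One concrete way to realize such an online pseudoinverse readout is recursive
least squares via the Sherman-Morrison identity, which converges to the
ridge-regularized pseudoinverse solution while processing one sample at a
time. The sketch below illustrates the idea, not necessarily the authors'
exact update rule; the forgetting factor provides the adaptation to
non-stationary streams mentioned above.

```python
import numpy as np

class OnlinePseudoinverseReadout:
    """Incrementally maintains W = (X^T X + lam*I)^{-1} X^T Y, the
    ridge-regularized least-squares (pseudoinverse) readout, one sample
    at a time, without ever forming an SVD."""

    def __init__(self, dim_in, dim_out, lam=1e-3, forget=1.0):
        self.P = np.eye(dim_in) / lam         # running inverse covariance
        self.W = np.zeros((dim_in, dim_out))  # readout weights
        self.forget = forget                  # <1.0 tracks non-stationary data

    def update(self, x, y):
        Px = self.P @ x
        gain = Px / (self.forget + x @ Px)        # Sherman-Morrison gain
        self.W += np.outer(gain, y - x @ self.W)  # correct prediction error
        self.P = (self.P - np.outer(gain, Px)) / self.forget

    def predict(self, x):
        return x @ self.W
```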
|
There has been a long string of efforts to understand the source of the
variability observed in microquasars, but no model has yet gained wide
acceptance, especially concerning the elusive High-Frequency Quasi-Periodic
Oscillation (HFQPO). We first list the constraints arising from observations
and how that translates for an HFQPO model. Then we present how a model based
on having the Rossby Wave Instability (RWI) active in the disk could answer
those constraints.
|
We develop a relativistic model to describe the bound states of positive
energy and negative energy in finite nuclei at the same time. Instead of
searching for the negative-energy solution of the nucleon's Dirac equation, we
solve the Dirac equations for the nucleon and the anti-nucleon simultaneously.
The single-particle energies of negative-energy nucleons are obtained through
changing the sign of the single-particle energies of positive-energy
anti-nucleons. The contributions of the Dirac sea to the source terms of the
meson fields are evaluated by means of the derivative expansion up to the
leading derivative order for the one-meson loop and one-nucleon loop. After
refitting the parameters of the model to the properties of spherical nuclei,
the results for the positive-energy sector are similar to those calculated
within the commonly used relativistic mean field theory under the no-sea
approximation. However, the bound levels of negative-energy nucleons vary
drastically when the vacuum contributions are taken into account. This implies
that the negative-energy spectra provide a sensitive probe of the effective
interactions, in addition to the positive-energy spectra.
|
Consider a scaled Nevanlinna-Pick interpolation problem and let $\Pi$ be the
Blaschke product whose zeros are the nodes of the problem. It is proved that if
$\Pi$ belongs to a certain class of inner functions, then the extremal
solutions of the problem or most of them, are in the same class. Three
different classical classes are considered: inner functions whose derivative is
in a certain Hardy space, exponential Blaschke products and also the well known
class of $\alpha$-Blaschke products, for $0<\alpha<1$.
|
We consider integral functionals with slow growth and explicit dependence of
the Lagrangian on $u$; this includes many relevant examples, for instance in
elastoplastic torsion problems or in image restoration problems. Our aim is to
prove that the local minimizers are locally Lipschitz continuous. The proof
makes use of recent results concerning the Bounded Slope Condition.
|
The Banzhaf power and interaction indexes for a pseudo-Boolean function (or a
cooperative game) appear naturally as leading coefficients in the standard
least squares approximation of the function by a pseudo-Boolean function of a
specified degree. We first observe that this property still holds if we
consider approximations by pseudo-Boolean functions depending only on specified
variables. We then show that the Banzhaf influence index can also be obtained
from the latter approximation problem. Considering certain weighted versions of
this approximation problem, we introduce a class of weighted Banzhaf influence
indexes, analyze their most important properties, and point out similarities
between the weighted Banzhaf influence index and the corresponding weighted
Banzhaf interaction index. We also discuss the issue of reconstructing a
pseudo-Boolean function from prescribed influences and point out very different
behaviors in the weighted and non-weighted cases.
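For orientation, the (non-weighted) Banzhaf power index referred to above has
the standard form

\[
B_i(f) \;=\; \frac{1}{2^{n-1}} \sum_{S \subseteq N \setminus \{i\}}
\bigl( f(S \cup \{i\}) - f(S) \bigr),
\]

for a pseudo-Boolean function (game) $f$ on the player set $N$ with $|N| = n$;
the property recalled in the first sentence is that $B_i(f)$ arises as the
coefficient of the variable $x_i$ in the best least squares approximation of
$f$ by a pseudo-Boolean function of degree at most one, with the interaction
indexes arising analogously at higher degrees.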
|
Anomaly detection is of great interest in fields where abnormalities need to
be identified and corrected (e.g., medicine and finance). Deep learning methods
for this task often rely on autoencoder reconstruction error, sometimes in
conjunction with other errors. We show that this approach exhibits intrinsic
biases that lead to undesirable results. Reconstruction-based methods are
sensitive to training-data outliers and simple-to-reconstruct points. Instead,
we introduce a new unsupervised Lipschitz anomaly discriminator that does not
suffer from these biases. Our anomaly discriminator is trained, similar to the
ones used in GANs, to detect the difference between the training data and
corruptions of the training data. We show that this procedure successfully
detects unseen anomalies with guarantees on those that have a certain
Wasserstein distance from the data or corrupted training set. These additions
allow us to show improved performance on MNIST, CIFAR10, and health record
data.
|
Recently, Heusler ferromagnets have been found to exhibit unconventional
anomalous electric, thermal, and thermoelectric transport properties. In this
study, we employed first-principles density functional theory calculations to
systematically investigate both intrinsic and extrinsic contributions to the
anomalous Hall effect (AHE), anomalous Nernst effect (ANE), and anomalous
thermal Hall effect (ATHE) in two Heusler ferromagnets: Fe$_2$CoAl and
Fe$_2$NiAl. Our analysis reveals that the extrinsic mechanism originating from
disorder dominates the AHE and ATHE in Fe$_2$CoAl, primarily due to the steep
band dispersions across the Fermi energy and the correspondingly high
longitudinal electronic conductivity. Conversely, the intrinsic Berry phase
mechanism, physically linked to nearly flat bands around the Fermi energy and
band crossings gapped by the spin-orbit interaction, governs the AHE and ATHE
in Fe$_2$NiAl.
With respect to the ANE, intrinsic and extrinsic mechanisms compete in both
Fe$_2$CoAl and Fe$_2$NiAl. Furthermore, Fe$_2$CoAl and Fe$_2$NiAl
exhibit tunable and remarkably pronounced anomalous transport properties. For
instance, the anomalous Nernst and anomalous thermal Hall conductivities in
Fe$_2$NiAl attain giant values of 8.29 A/Km and 1.19 W/Km, respectively, at
room temperature. To provide a useful comparison, we also thoroughly
investigated the anomalous transport properties of Co$_2$MnGa. Our findings
suggest that Heusler ferromagnets Fe$_2$CoAl and Fe$_2$NiAl are promising
candidates for spintronics and spin-caloritronics applications.
|
Stochastic switched systems are a relevant class of stochastic hybrid systems
with probabilistic evolution over a continuous domain and control-dependent
discrete dynamics over a finite set of modes. In the past few years several
different techniques have been developed to assist in the stability analysis of
stochastic switched systems. However, more complex and challenging objectives
related to the verification of and the controller synthesis for logic
specifications have not been formally investigated for this class of systems as
of yet. By logic specifications we mean properties expressed as formulae in
linear temporal logic or as automata on infinite strings. This paper addresses
these complex objectives by constructively deriving approximately equivalent
(bisimilar) symbolic models of stochastic switched systems. More precisely,
this paper provides two different symbolic abstraction techniques: one requires
state space discretization, but the other one does not require any space
discretization which can be potentially more efficient than the first one when
dealing with higher dimensional stochastic switched systems. Both techniques
provide finite symbolic models that are approximately bisimilar to stochastic
switched systems under some stability assumptions on the concrete model. This
allows formally synthesizing controllers (switching signals) that are valid for
the concrete system over the finite symbolic model, by means of mature
automata-theoretic techniques in the literature. The effectiveness of the
results is illustrated by synthesizing switching signals enforcing logic
specifications for two case studies including temperature control of a six-room
building.
|
We show that the existence of nonzero nilpotent elements in the
$\mathbb{Z}$-module $\mathbb{Z}/(p_1^{k_1}\times \cdots \times
p_n^{k_n})\mathbb{Z}$ inhibits the module from possessing good structural
properties. In particular, it stops it from being semisimple and from
admitting certain good homological properties.
|
In the contemporary landscape of technological advancements, the automation
of manual processes is crucial, compelling the demand for huge datasets to
effectively train and test machines. This research paper is dedicated to the
exploration and implementation of an automated approach to generate test cases
specifically using Large Language Models (LLMs). The methodology integrates
the use of OpenAI models to enhance the efficiency and effectiveness of test
case generation for training and evaluating LLMs. This formalized approach with
LLMs simplifies the testing process, making it more efficient and
comprehensive. Leveraging natural language understanding, LLMs can
intelligently formulate test cases that cover a broad range of REST API
properties, ensuring comprehensive testing. The model that is developed during
the research is trained using manually collected postman test cases or
instances for various Rest APIs. LLMs enhance the creation of Postman test
cases by automating the generation of varied and intricate test scenarios.
Postman test cases offer streamlined automation, collaboration, and dynamic
data handling, providing a user-friendly and efficient approach to API testing
compared to traditional test cases. Thus, the model developed not only conforms
to current technological standards but also holds the promise of evolving into
an idea of substantial importance in future technological advancements.
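As a minimal sketch of the kind of pipeline described (illustrative only: the
prompt, model name, and wrapper function are our assumptions, not the paper's
actual implementation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_postman_tests(api_description: str) -> str:
    """Ask an LLM to draft Postman-style test cases for a described REST API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You write Postman test scripts (pm.test) for REST APIs."},
            {"role": "user",
             "content": "Generate Postman test cases covering status codes, "
                        "headers, and response schema for this API:\n"
                        + api_description},
        ],
    )
    return response.choices[0].message.content
```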
|
We study the phase diagram of the three-dimensional SU(3)+adjoint Higgs
theory with lattice Monte Carlo simulations. A critical line consisting of a
first order line, a tricritical point and a second order line, divides the
phase diagram into two parts distinguished by <Tr A0^3> = 0 and <Tr A0^3> ≠ 0. The location
and the type of the critical line are determined by measuring the condensates
<Tr A0^2> and <Tr A0^3>, and the masses of scalar and vector excitations.
Although in principle there can be different types of broken phases,
corresponding perturbatively to unbroken SU(2)xU(1) or U(1)xU(1) symmetries, we
find that dynamically only the broken phase with SU(2)xU(1)-like properties is
realized. The relation of the phase diagram to 4d finite temperature QCD is
discussed.
|
Synthetic Aperture Radar (SAR) data and Interferometric SAR (InSAR) products
in particular, are one of the largest sources of Earth Observation data. InSAR
provides unique information on diverse geophysical processes and geology, and
on the geotechnical properties of man-made structures. However, there are only
a limited number of applications that exploit the abundance of InSAR data and
deep learning methods to extract such knowledge. The main barrier has been the
lack of a large curated and annotated InSAR dataset, which would be costly to
create and would require an interdisciplinary team of experts experienced on
InSAR data interpretation. In this work, we put in the effort to create and make
available the first of its kind, manually annotated dataset that consists of
19,919 individual Sentinel-1 interferograms acquired over 44 different
volcanoes globally, which are split into 216,106 InSAR patches. The annotated
dataset is designed to address different computer vision problems, including
volcano state classification, semantic segmentation of ground deformation,
detection and classification of atmospheric signals in InSAR imagery,
interferogram captioning, text to InSAR generation, and InSAR image quality
assessment.
|
We solve, numerically, the massless spin-2 equations, written in terms of a
gauge based on the properties of conformal geodesics, in a neighbourhood of
spatial infinity using spectral methods in both space and time. This strategy
allows us to compute the solutions to these equations up to the critical sets
where null infinity intersects with spatial infinity. Moreover, we use the
convergence rates of the numerical solutions to read off their regularity
properties.
|
In order to investigate the inhomogeneity of the superconducting (SC) state
in the overdoped high-$T_{\rm c}$ cuprates, we have measured the magnetic
susceptibility, $\chi$, of La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) single crystals in
the overdoped regime in magnetic fields parallel to the c-axis up to 7 T on
warming after zero-field cooling. It has been found for $x$ = 0.198 and 0.219
that the temperature dependence of $\chi$ in 1 T shows a plateau, that is,
$\chi$ is almost independent of temperature in a moderate temperature-range in
the SC state. Moreover, a so-called second peak in the magnetization curve has
markedly appeared in these crystals. These results indicate an anomalous
enhancement of the vortex pinning and strongly suggest the occurrence of a
$microscopic$ phase separation into SC and normal-state regions in the
overdoped high-$T_{\rm c}$ cuprates.
|
We refine previous results concerning the Renewal Contact Processes. We
significantly widen the family of distributions for the interarrival times for
which the critical value can be shown to be strictly positive. The result now
holds for any dimension $d \ge 1$ and requires only a moment condition slightly
stronger than finite first moment. For heavy-tailed interarrival times, we
prove a Complete Convergence Theorem and examine when the contact process,
conditioned on survival, can be asymptotically predicted knowing the renewal
processes. We close with an example of distribution attracted to a stable law
of index 1 for which the critical value vanishes.
|
This paper outlines how the new GaLactic and Extragalactic All-sky MWA Survey
(GLEAM, Wayth et al. 2015), observed by the Murchison Widefield Array covering
the frequency range 72 - 231 MHz, allows identification of a new large,
complete, sample of more than 2000 bright extragalactic radio sources selected
at 151 MHz. With a flux density limit of 4 Jy this sample is significantly
larger than the canonical fully-complete sample, 3CRR (Laing, Riley & Longair
1983). In analysing this small bright subset of the GLEAM survey we are also
providing a first user check of the GLEAM catalogue ahead of its public release
(Hurley-Walker et al. in prep). Whilst significant work remains to fully
characterise our new bright source sample, in time it will provide important
constraints on evolutionary behaviour across a wide redshift and intrinsic
radio power range, as well as being highly complementary to results from
targeted, small area surveys.
|
An algebraic quantization procedure for discretized spacetime models is
suggested based on the duality between finitary substitutes and their incidence
algebras. The provided limiting procedure that yields conventional manifold
characteristics of spacetime structures is interpreted in the algebraic quantum
framework as a correspondence principle.
|
The Scalar Field Dark Matter model has been known in various ways throughout
its history; Fuzzy, BEC, Wave, Ultralight, Axion-like Dark Matter, etc. All of
them propose that the dark matter of the universe is a spinless field $\Phi$
that follows the Klein-Gordon (KG) equation of motion $\Box\Phi-dV/d\Phi=0$
for a given scalar field potential $V$; the models differ mainly in the choice
of $V$. In the literature, authors usually work in the nonrelativistic,
weak-field limit of the KG equation where it transforms into the Schr\"odinger
equation and the Einstein equations into the Poisson equation, reducing the
KG-Einstein system, to the Schr\"odinger-Poisson system. In this paper, we
review some of the most interesting achievements of this model from the
historical point of view and its comparison with observations, showing that
this model could be the definitive answer to the question of the nature of
dark matter in the universe.
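
For reference, the nonrelativistic, weak-field limit mentioned above is
commonly written as the Schroedinger-Poisson system (standard textbook form;
sign and unit conventions may differ between references):

```latex
i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + m\,\Phi_N\,\psi,
\qquad
\nabla^2\Phi_N = 4\pi G\,m\,|\psi|^2,
```

where $\psi$ is the nonrelativistic field amplitude and $\Phi_N$ the Newtonian
potential.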
|
The purpose of this paper is to construct a crepant resolution of quotient
singularities by trihedral groups (finite subgroups of SL(3,C) of a certain
type), and to prove that the Euler number of the minimal model is equal to the
number of conjugacy classes. Trihedral singularities are a 3-dimensional
version of D_n-singularities; they are non-isolated, and many of them are not
complete intersections. The resolution is similar to that of
D_n-singularities, nicely combining the toric resolution and the Calabi-Yau
resolution.
|
The Mermin-Peres magic square game is a cooperative two-player nonlocal game
in which shared quantum entanglement allows the players to win with certainty,
while players limited to classical operations cannot do so, a phenomenon dubbed
"quantum pseudo-telepathy". The game has a referee separately ask each player
to color a subset of a 3x3 grid. The referee checks that their colorings
satisfy certain parity constraints that can't all be simultaneously realized.
We define a generalization of these games to be played on an arbitrary
arrangement of intersecting sets of elements. We characterize exactly which of
these games exhibit quantum pseudo-telepathy, and give quantum winning
strategies for those that do. In doing so, we show that it suffices for the
players to share three Bell pairs of entanglement even for games on arbitrarily
large arrangements. Moreover, it suffices for Alice and Bob to use measurements
from the three-qubit Pauli group. The proof technique uses a novel connection
of Mermin-style games to graph planarity.
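
The classical impossibility behind the magic square can be checked by brute
force. The sketch below uses one common sign convention (each row multiplies
to +1, each column to -1); the constraints in the paper may be phrased
differently, but the parity obstruction is the same.

```python
from itertools import product

# Exhaustively verify that no classical 3x3 assignment of +/-1 entries
# satisfies the magic-square parity constraints.
def satisfies_constraints(m):
    rows_ok = all(m[r][0] * m[r][1] * m[r][2] == +1 for r in range(3))
    cols_ok = all(m[0][c] * m[1][c] * m[2][c] == -1 for c in range(3))
    return rows_ok and cols_ok

solutions = sum(
    satisfies_constraints([bits[0:3], bits[3:6], bits[6:9]])
    for bits in product((-1, +1), repeat=9)
)
print(solutions)  # 0: the constraints are classically unsatisfiable
```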
|
A new approach to Jiu-Kang Yu's construction of tame supercuspidal
representations of $p$-adic reductive groups is presented. Connections with the
theory of cuspidal Deligne-Lusztig representations of finite groups of Lie type
are also discussed.
|
The formal expression for the most general polarization observable in elastic
electromagnetic lepton hadron scattering at low energies is derived for the
nonrelativistic regime. For the explicit evaluation the influence of Coulomb
distortion on various polarization observables is calculated in a distorted
wave Born approximation. Besides the hyperfine interaction, the spin-orbit
interactions of the lepton and hadron are also included. For like charges, the
Coulomb repulsion strongly reduces the size of polarization observables
compared to the
plane wave Born approximation whereas for opposite charges the Coulomb
attraction leads to a substantial increase of these observables for hadron lab
kinetic energies below about 20 keV.
|
The Standard Model of particle physics is governed by Poincar\'e symmetry,
while all other symmetries, exact or approximate, are essentially dictated by
theoretical consistency with the particle spectrum. On the other hand, many
models of dark matter exist that rely upon newly added global symmetries in
order to stabilize the dark matter particle and/or achieve the
correct abundance. In this work we begin a systematic exploration into truly
natural models of dark matter, organized by only relativity and quantum
mechanics, without the appeal to any additional global symmetries, no
fine-tuning, and no small parameters. We begin by reviewing how singlet dark
sectors based on spin 0 or spin ${1\over2}$ should readily decay, while pure
strongly coupled spin 1 models have an overabundance problem. This inevitably
leads us to construct chiral models with spin ${1\over2}$ particles charged
under confining spin 1 particles, yielding stable dark matter candidates that
are analogs of baryons, with a confinement scale that can naturally be
$\mathcal{O}(100)$ TeV. The right freeze-out abundance is obtained by
annihilating into massless unconfined dark fermions. The minimal model involves
a dark copy of $SU(3)\times SU(2)$ with 1 generation of chiral dark quarks and
leptons. The presence of massless dark leptons can potentially give rise to a
somewhat large value of $\Delta N_{\text{eff}}$ during BBN. In order to not
upset BBN one may either appeal to a large number of heavy degrees of freedom
beyond the Standard Model, or to assume the dark sector has a lower reheat
temperature than the visible sector, which is also natural in this framework.
This reasoning provides a robust set of dark matter models that are entirely
natural. Some are concrete realizations of the nightmare scenario in which dark
matter may be very difficult to detect, which may impact future search
techniques.
|
Sequential lateration is a class of methods for multidimensional scaling
where a suitable subset of nodes is first embedded by some method, e.g., a
clique embedded by classical scaling, and then the remaining nodes are
recursively embedded by lateration. A graph is a lateration graph when it can
be embedded by such a procedure. We provide a stability result for a particular
variant of sequential lateration. We do so in a setting where the
dissimilarities represent noisy Euclidean distances between nodes in a
geometric lateration graph. We then deduce, as a corollary, a perturbation
bound for stress minimization. To argue that our setting applies broadly, we
show that a (large) random geometric graph is a lateration graph with high
probability under mild conditions, extending a previous result of Aspnes et
al. (2006).
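
The lateration step itself reduces to a small linear least-squares solve. The
sketch below is a generic illustration (not the paper's specific variant):
differencing the squared-distance equations against one anchor linearizes the
problem.

```python
import numpy as np

def laterate(anchors, dists):
    """anchors: (k, d) already-embedded positions; dists: (k,) noisy distances."""
    a0, d0 = anchors[0], dists[0]
    # Subtracting the first squared-distance equation linearizes the rest:
    # 2 (a_i - a_0) . x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2
    A = 2.0 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 2))    # e.g. a clique embedded by classical scaling
true_x = rng.normal(size=2)
d = np.linalg.norm(anchors - true_x, axis=1) + 1e-3 * rng.normal(size=5)
print(laterate(anchors, d), true_x)  # the two should nearly coincide
```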
|
Accretion disks are ubiquitous in the universe and it is generally accepted
that magnetic fields play a pivotal role in accretion-disk physics. The spin
history of millisecond pulsars, which are usually classified as magnetized
neutron stars spun up by an accretion disk, depends sensitively on the magnetic
field structure, and yet highly idealized models from the 1980s are still being
used for calculating the magnetic field components. We present a possible way
of improving the currently used models with a semi-analytic approach. The
resulting magnetic field profile of both the poloidal and the toroidal
component can be very different from the one suggested previously. This might
dramatically change our picture of which parts of the disk tend to spin the
star up or down.
|
Cryptographic protocols are often implemented at upper layers of
communication networks, while error-correcting codes are employed at the
physical layer. In this paper, we consider utilizing readily-available physical
layer functions, such as encoders and decoders, together with shared keys to
provide a threshold-type security scheme. To this end, we first consider a
scenario where the effect of the physical layer is omitted and all the channels
between the involved parties are assumed to be noiseless. We introduce a model
for threshold-secure coding, where the legitimate parties communicate using a
shared key such that an eavesdropper does not get any information, in an
information-theoretic sense, about the key as well as about any subset of the
input symbols of size up to a certain threshold. Then, a framework is provided
for constructing threshold-secure codes from linear block codes while
characterizing the requirements to satisfy the reliability and security
conditions. Moreover, we propose a threshold-secure coding scheme, based on
Reed-Muller (RM) codes, that meets security and reliability conditions. It is
shown that the encoder and the decoder of the scheme can be implemented
efficiently with quasi-linear time complexity. In particular, a successive
cancellation decoder is shown for the RM-based coding scheme. Then we extend
the setup to the scenario where the channel between the legitimate parties is
no longer noiseless. The reliability condition for noisy channels is then
modified accordingly, and a method is described to construct codes attaining
threshold security as well as desired reliability. Also, we propose a coding
scheme based on RM codes for threshold security and robustness designed for
binary erasure channels along with a unified successive cancellation decoder.
The proposed threshold-secure coding schemes are flexible and can be adapted
for different key lengths.
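
As a toy illustration of the Reed-Muller building block only (not the
threshold-secure construction or its key mixing), the sketch below encodes
with an RM(1, m) generator matrix and decodes by brute-force minimum distance;
RM(1,3) corrects a single bit flip.

```python
import numpy as np
from itertools import product

def rm1_generator(m):
    # Rows: the all-ones function plus the m coordinate functions on {0,1}^m.
    pts = np.array(list(product((0, 1), repeat=m)), dtype=int)   # (2^m, m)
    return np.vstack([np.ones(2**m, dtype=int), pts.T])          # (m+1, 2^m)

def encode(msg, G):
    return msg @ G % 2

def decode(word, G):
    # Brute-force minimum-distance decoding; fine for small m.
    best, best_dist = None, None
    for msg in product((0, 1), repeat=G.shape[0]):
        cand = np.array(msg) @ G % 2
        dist = int(np.sum(cand != word))
        if best_dist is None or dist < best_dist:
            best, best_dist = np.array(msg), dist
    return best

G = rm1_generator(3)                      # RM(1,3): 4 message bits -> length 8
cw = encode(np.array([1, 0, 1, 1]), G)
cw[2] ^= 1                                # one channel error
print(decode(cw, G))                      # [1 0 1 1] recovered
```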
|
Previous work on dynamical black hole instability is further elucidated
within the Hamilton-Jacobi method for horizon tunneling and the reconstruction
of the classical action by means of the null-expansion method. Everything is
based on two natural requirements, namely that the tunneling rate is an
observable and therefore it must be based on invariantly defined quantities,
and that coordinate systems which do not cover the horizon should not be
admitted. These simple observations can help to clarify some ambiguities, like
the doubling of the temperature occurring in the static case when using
singular coordinates, and the role, if any, of the temporal contribution of the
action to the emission rate. The formalism is also applied to FRW cosmological
models, where it is observed that it predicts the positivity of the temperature
naturally, without further assumptions on the sign of the energy.
|
The electronic structure and chemical bonding of the recently discovered
inverse perovskite Sc3AlN, in comparison to ScN and Sc metal have been
investigated by bulk-sensitive soft x-ray emission spectroscopy. The measured
Sc L, N K, Al L1, and Al L2,3 emission spectra are compared with calculated
spectra using first-principles density-functional theory including dipole
transition matrix elements. The main Sc 3d - N 2p and Sc 3d - Al 3p chemical
bond regions are identified at -4 eV and -1.4 eV below the Fermi level,
respectively. A strongly modified spectral shape of 3s states in the Al L2,3
emission from Sc3AlN in comparison to pure Al metal is found, which reflects
the Sc 3d - Al 3p hybridization observed in the Al L1 emission. The differences
between the electronic structure of Sc3AlN, ScN, and Sc metal are discussed in
relation to the change of the conductivity and elastic properties.
|
A weak invariant of a stochastic system is defined in such a way that its
expectation value with respect to the distribution function as a solution of
the associated Fokker-Planck equation is constant in time. A general formula is
given for the time evolution of fluctuations of the invariants. An application
to the problem of share prices in finance is presented, and it is shown how
this theory makes it possible to reduce the growth rate of the fluctuations.
|
In this paper we study the question of residual gauge fixing in the path
integral approach for a general class of axial-type gauges including the
light-cone gauge. We show that the two cases -- axial-type gauges and the
light-cone gauge -- lead to very different structures for the explicit forms of
the propagator. In the case of the axial-type gauges, fixing the residual
symmetry determines the propagator of the theory completely. On the other hand,
in the light-cone gauge there is still a prescription dependence even after
fixing the residual gauge symmetry, which is related to the existence of an
underlying global symmetry.
|
In isogeometric analysis, it is frequently required to handle the geometric
models enclosed by four-sided or non-four-sided boundary patches, such as
trimmed surfaces. In this paper, we develop a Gregory solid based method to
parameterize those models. First, we extend the Gregory patch representation to
the trivariate Gregory solid representation. Second, the trivariate Gregory
solid representation is employed to interpolate the boundary patches of a
geometric model, thus generating the polyhedral volume parametrization. To
improve the regularity of the polyhedral volume parametrization, we formulate
the construction of the trivariate Gregory solid as a sparse optimization
problem, where the optimization objective function is a linear combination of
some terms, including a sparse term aiming to reduce the negative Jacobian area
of the Gregory solid. Then, the alternating direction method of multipliers
(ADMM) is used to solve the sparse optimization problem. Numerous experimental
examples presented in this paper demonstrate the effectiveness and efficiency
of the developed method.
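
The solver pattern is standard. Below is a generic ADMM sketch for a
lasso-type sparse objective, minimize 0.5*||Ax-b||^2 + lam*||x||_1; the
paper's actual objective combines several Gregory-solid-specific terms, so
this is only illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached ridge solve
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # x-update (quadratic part)
        z = soft_threshold(x + u, lam / rho)   # z-update (prox of the L1 term)
        u = u + x - z                          # dual ascent
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[[3, 7]] = [2.0, -1.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2))           # sparse, peaks near indices 3, 7
```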
|
In this work, we provide energy-efficient architectural support for floating
point accuracy. Our goal is to provide accuracy that is far greater than that
provided by the processor's hardware floating point unit (FPU). Specifically,
for each floating point addition performed, we "recycle" that operation's
error: the difference between the finite-precision result produced by the
hardware and the result that would have been produced by an infinite-precision
FPU. We make this error architecturally visible such that it can be used, if
desired, by software. Experimental results on physical hardware show that
software that exploits architecturally recycled error bits can achieve accuracy
comparable to a 2B-bit FPU with performance and energy that are comparable to a
B-bit FPU.
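
In software, the same addition error can be recovered with Knuth's TwoSum
error-free transformation: for floats a and b it returns s = fl(a+b) and the
exact error e with a + b == s + e. This is a software analogue of the recycled
error bits, computed with extra instructions rather than exposed by the FPU.

```python
def two_sum(a: float, b: float):
    # Knuth's TwoSum: s is the rounded sum, e the exact rounding error,
    # so a + b == s + e holds exactly in IEEE-754 arithmetic.
    s = a + b
    bp = s - a                     # the part of b that made it into s
    e = (a - (s - bp)) + (b - bp)  # what rounding discarded
    return s, e

s, e = two_sum(1e16, 1.0)
print(s, e)   # 1e+16 1.0 -- the lost addend recovered as the "recycled" error
```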
|
Context. Hypervelocity stars move fast enough to leave the gravitational
field of their home galaxies and venture into intergalactic space. The most
extreme examples known have estimated speeds in excess of 1000 km/s. These can
be easily induced at the centres of galaxies via close encounters between
binary stars and supermassive black holes; however, a number of other
mechanisms operating elsewhere can produce them as well.
Aims. Recent studies suggest that hypervelocity stars are ubiquitous in the
local Universe. In the Milky Way, the known hypervelocity stars are
anisotropically distributed, but it is unclear why. Here, we used Gaia Data
Release 2 (DR2) data to perform a systematic exploration aimed at confirming or
refuting these findings.
Methods. We assume that the farther the candidate hypervelocity stars are,
the more likely they are to be unbound from the Galaxy. We used the statistical
analysis of both the spatial distribution and kinematics of these objects to
achieve our goals.
Results. Focussing on nominal Galactocentric distances greater than 30 kpc,
which are the most distant candidates, we isolated a sample with speeds in
excess of 500 km/s that exhibits a certain degree of anisotropy but remains
compatible with possible systematic effects. We find that the effect of the
Eddington-Trumpler-Weaver bias is important in our case: over 80% of our
sources are probably located further away than implied by their parallaxes;
therefore, most of our velocity estimates are lower limits. If this bias is as
strong as suggested here, the contamination by disc stars may not affect our
overall conclusions.
Conclusions. The subsample with the lowest uncertainties shows stronger, but
obviously systematic, anisotropies and includes a number of candidates of
possible extragalactic origin and young age with speeds of up to 2000 km/s.
|
Classical sequential recommendation models generally adopt ID embeddings to
store knowledge learned from user historical behaviors and represent items.
However, these unique IDs are difficult to transfer to new domains.
With the thriving of pre-trained language model (PLM), some pioneer works adopt
PLM for pre-trained recommendation, where modality information (e.g., text) is
considered universal across domains via PLM. Unfortunately, the behavioral
information in ID embeddings has been shown to remain dominant over modality
information in PLM-based recommendation models, which limits these models'
performance. In this work, we propose a novel ID-centric recommendation
pre-training paradigm (IDP), which directly transfers informative ID embeddings
learned in pre-training domains to item representations in new domains.
Specifically, in the pre-training stage, besides the ID-based sequential model for
recommendation, we also build a Cross-domain ID-matcher (CDIM) learned by both
behavioral and modality information. In the tuning stage, modality information
of new domain items is regarded as a cross-domain bridge built by CDIM. We
first leverage the textual information of downstream domain items to retrieve
behaviorally and semantically similar items from pre-training domains using
CDIM. Next, these retrieved pre-trained ID embeddings, rather than certain
textual embeddings, are directly adopted to generate downstream new items'
embeddings. Through extensive experiments on real-world datasets, both in cold
and warm settings, we demonstrate that our proposed model significantly
outperforms all baselines. Codes will be released upon acceptance.
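
A minimal sketch of the bridging idea follows; all names and shapes are
hypothetical, and the paper's CDIM is a learned matcher rather than this
simple cosine retrieval.

```python
import numpy as np

def embed_new_item(new_text_vec, pretrain_text_vecs, pretrain_id_embs, k=5):
    """new_text_vec: (d,) text embedding of a downstream item;
    pretrain_text_vecs: (N, d); pretrain_id_embs: (N, h)."""
    sims = pretrain_text_vecs @ new_text_vec / (
        np.linalg.norm(pretrain_text_vecs, axis=1)
        * np.linalg.norm(new_text_vec) + 1e-8)
    top = np.argsort(-sims)[:k]                       # k most similar items
    w = np.exp(sims[top]) / np.exp(sims[top]).sum()   # similarity weights
    return w @ pretrain_id_embs[top]                  # (h,) new item embedding

rng = np.random.default_rng(0)
emb = embed_new_item(rng.normal(size=16),
                     rng.normal(size=(100, 16)),
                     rng.normal(size=(100, 32)))
print(emb.shape)   # (32,)
```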
|
While deep learning techniques have provided the state-of-the-art performance
in various clinical tasks, explainability regarding their decision-making
process can greatly enhance the credibility of these methods for safer and quicker
clinical adoption. With high flexibility, Gradient-weighted Class Activation
Mapping (Grad-CAM) has been widely adopted to offer intuitive visual
interpretation of various deep learning models' reasoning processes in
computer-assisted diagnosis. However, despite the popularity of the technique,
there is still a lack of systematic study on Grad-CAM's performance on
different deep learning architectures. In this study, we investigate its
robustness and effectiveness across different popular deep learning models,
with a focus on the impact of the networks' depths and architecture types, by
using a case study of automatic pneumothorax diagnosis in X-ray scans. Our
results show that deeper neural networks do not necessarily contribute to a
strong improvement in pneumothorax diagnosis accuracy, and the effectiveness of
Grad-CAM also varies across different network architectures.
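
For concreteness, a standard Grad-CAM sketch in PyTorch is shown below,
hooking the last convolutional block of a ResNet (the layer choice is one
common option, not necessarily the paper's setup).

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
acts, grads = {}, {}

def fwd_hook(module, inp, out):
    acts["a"] = out.detach()
    # Capture the gradient flowing back through this activation.
    out.register_hook(lambda g: grads.__setitem__("g", g.detach()))

handle = model.layer4.register_forward_hook(fwd_hook)

x = torch.randn(1, 3, 224, 224)
logits = model(x)
logits[0, logits.argmax()].backward()                # top-class score

weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)   # torch.Size([1, 1, 224, 224])
handle.remove()
```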
|
Analysis of the Superkamiokande atmospheric neutrino data is presented in the
framework of four neutrinos without imposing constraints from Big Bang
Nucleosynthesis. Implications for long-baseline experiments are briefly
discussed.
|
Works, briefly surveyed here, are concerned with two basic methods: Maximum
Probability and Bayesian Maximum Probability; as well as with their asymptotic
instances: Relative Entropy Maximization and Maximum Non-parametric Likelihood.
Parametric and empirical extensions of the latter methods - Empirical Maximum
Maximum Entropy and Empirical Likelihood - are also mentioned. The methods are
viewed as tools for solving certain ill-posed inverse problems, called the
Pi-problem and the Phi-problem, respectively. Within the two classes of problems,
probabilistic justification and interpretation of the respective methods are
discussed.
|
We investigate a long-perceived shortcoming in the typical use of BLEU: its
reliance on a single reference. Using modern neural paraphrasing techniques, we
study whether automatically generating additional diverse references can
provide better coverage of the space of valid translations and thereby improve
its correlation with human judgments. Our experiments on the into-English
language directions of the WMT19 metrics task (at both the system and sentence
level) show that using paraphrased references does generally improve BLEU's
correlation with human judgments, and
when it does, the more diverse the better. However, we also show that better
results could be achieved if those paraphrases were to specifically target the
parts of the space most relevant to the MT outputs being evaluated. Moreover,
the gains remain slight even when human paraphrases are used, suggesting
inherent limitations to BLEU's capacity to correctly exploit multiple
references. Surprisingly, we also find that adequacy appears to be less
important, as shown by the high results of a strong sampling approach, which
even beats human paraphrases when used with sentence-level BLEU.
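
With sacrebleu, adding a paraphrased reference stream is a one-line change, as
in the sketch below (the sentences are illustrative placeholders).

```python
import sacrebleu

hyps = ["the cat sat on the mat", "he went to the store"]
refs_human = ["the cat sat on the mat", "he walked to the shop"]
refs_paraphrased = ["a cat was sitting on the mat", "he headed to the store"]

# Each reference stream is a list aligned with the hypotheses; a paraphrased
# stream is just another list.
score = sacrebleu.corpus_bleu(hyps, [refs_human, refs_paraphrased])
print(score.score)
```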
|
The representation learning problem in the oil & gas industry aims to
construct a model that provides a representation based on logging data for a
well interval. Previous attempts are mainly supervised and focus on the
similarity task, which estimates the closeness between intervals. We aim to build
informative representations without using supervised (labelled) data. One of
the possible approaches is self-supervised learning (SSL). In contrast to the
supervised paradigm, this one requires little or no labels for the data.
Nowadays, most SSL approaches are either contrastive or non-contrastive.
Contrastive methods pull representations of similar (positive) objects closer
while pushing apart different (negative) ones. Due to possible mislabelling of
positive and negative pairs, these methods can deliver inferior performance.
Non-contrastive methods don't rely on such labelling and are widespread in
computer vision. They learn using only pairs of similar objects that are easier
to identify in logging data.
We are the first to introduce non-contrastive SSL for well-logging data. In
particular, we exploit Bootstrap Your Own Latent (BYOL) and Barlow Twins
methods that avoid using negative pairs and focus only on matching positive
pairs. The crucial part of these methods is an augmentation strategy. Our
augmentation strategies and adaptation of BYOL and Barlow Twins together allow
us to achieve superior quality on clustering and, in most cases, the best
performance on different classification tasks. Our results demonstrate the
usefulness of the
proposed non-contrastive self-supervised approaches for representation learning
and interval similarity in particular.
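
For reference, a compact sketch of the Barlow Twins objective used above: the
two views' embeddings are standardized and their cross-correlation matrix is
pushed toward the identity (lambd is the usual off-diagonal weight).

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # per-dimension standardize
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n                            # (d, d) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag

z1, z2 = torch.randn(128, 64), torch.randn(128, 64)  # two augmented views
print(barlow_twins_loss(z1, z2).item())
```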
|
Quantum theory does not provide a unique definition for the joint probability
of two non-commuting observables, which is the next important question after
Born's probability rule for a single observable. Instead, various definitions
were suggested, e.g. via quasi-probabilities or via hidden-variable theories.
After reviewing open issues of the joint probability, we relate it to quantum
imprecise probabilities, which are non-contextual and are consistent with all
constraints expected from a quantum probability. We study two non-commuting
observables in a two-dimensional Hilbert space and show that there is no
precise joint probability that applies for any quantum state and is consistent
with imprecise probabilities. This contrasts with theorems by Bell and
Kochen-Specker that exclude joint probabilities for more than two non-commuting
observables in Hilbert spaces with dimension larger than two. If measurement
contexts are included in the definition, joint probabilities are no longer
excluded, but they are still constrained by imprecise probabilities.
|
As a common approach to learning English, reading comprehension primarily
entails reading articles and answering related questions. However, the
complexity of designing effective exercises results in students encountering
standardized questions, making it challenging to match individual learners'
reading comprehension abilities. By leveraging the advanced
capabilities offered by large language models, exemplified by ChatGPT, this
paper presents a novel personalized support system for reading comprehension,
referred to as ChatPRCS, based on the Zone of Proximal Development theory.
ChatPRCS employs methods including reading comprehension proficiency
prediction, question generation, and automatic evaluation, among others, to
enhance reading comprehension instruction. First, we develop a new algorithm
that can predict learners' reading comprehension abilities using their
historical data as the foundation for generating questions at an appropriate
level of difficulty. Second, a series of new ChatGPT prompt patterns is
proposed to address two key aspects of reading comprehension objectives:
question generation, and automated evaluation. These patterns further improve
the quality of generated questions. Finally, by integrating personalized
ability and reading comprehension prompt patterns, ChatPRCS is systematically
validated through experiments. Empirical results demonstrate that it provides
learners with high-quality reading comprehension questions that are broadly
aligned with expert-crafted questions at a statistical level.
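
A hypothetical prompt pattern in the spirit of ChatPRCS is sketched below; the
template, field names, and ability thresholds are illustrative, not the
paper's exact prompts.

```python
def build_question_prompt(article: str, ability: float) -> str:
    # Map the predicted ability score onto a difficulty band (thresholds
    # are illustrative, not the paper's calibration).
    level = ("basic" if ability < 0.4
             else "intermediate" if ability < 0.7 else "advanced")
    return (
        "You are an English reading-comprehension tutor.\n"
        f"Article:\n{article}\n\n"
        f"Write one {level}-level comprehension question about the article, "
        "followed by its reference answer. Keep the question answerable "
        "strictly from the text."
    )

print(build_question_prompt("The Amazon rainforest produces ...", ability=0.55))
```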
|
Robots hold promise in many scenarios involving outdoor use, such as
search-and-rescue, wildlife management, and collecting data to improve
environmental, climate, and weather forecasting. However, autonomous navigation
of outdoor trails remains a challenging problem. Recent work has sought to
address this issue using deep learning. Although this approach has achieved
state-of-the-art results, the deep learning paradigm may be limited due to a
reliance on large amounts of annotated training data. Collecting and curating
training datasets may not be feasible or practical in many situations,
especially as trail conditions may change due to seasonal weather variations,
storms, and natural erosion. In this paper, we explore an approach to address
this issue through virtual-to-real-world transfer learning using a variety of
deep learning models trained to classify the direction of a trail in an image.
Our approach utilizes synthetic data gathered from virtual environments for
model training, bypassing the need to collect a large amount of real images of
the outdoors. We validate our approach in three main ways. First, we
demonstrate that our models achieve classification accuracies upwards of 95% on
our synthetic data set. Next, we utilize our classification models in the
control system of a simulated robot to demonstrate feasibility. Finally, we
evaluate our models on real-world trail data and demonstrate the potential of
virtual-to-real-world transfer learning.
|
Breast cancer is the most widespread neoplasm among women and early detection
of this disease is critical. Deep learning techniques have become of great
interest to improve diagnostic performance. However, distinguishing between
malignant and benign masses in whole mammograms poses a challenge, as they
appear nearly identical to an untrained eye, and the region of interest (ROI)
constitutes only a small fraction of the entire image. In this paper, we
propose a framework, parameterized hypercomplex attention maps (PHAM), to
overcome these problems. Specifically, we deploy an augmentation step based on
computing attention maps. Then, the attention maps are used to condition the
classification step by constructing a multi-dimensional input comprised of the
original breast cancer image and the corresponding attention map. In this step,
a parameterized hypercomplex neural network (PHNN) is employed to perform
breast cancer classification. The framework offers two main advantages. First,
attention maps provide critical information regarding the ROI and allow the
neural model to concentrate on it. Second, the hypercomplex architecture has
the ability to model local relations between input dimensions thanks to
hypercomplex algebra rules, thus properly exploiting the information provided
by the attention map. We demonstrate the efficacy of the proposed framework on
both mammography images as well as histopathological ones. We surpass
attention-based state-of-the-art networks and the real-valued counterpart of
our approach. The code of our work is available at
https://github.com/ispamm/AttentionBCS.
|
We conducted spectroscopic and photometric observations of the optical
counterpart of the X-ray transient RX J0117.6-7330 in the Small Magellanic
Cloud, during a quiescent state. The primary star is identified as a B0.5 IIIe,
with mass M = (18 +/- 2) M(sun) and bolometric magnitude M(bol) = -7.4 +/- 0.2.
The main spectral features are strong H-alpha emission, H-beta and H-gamma
emission cores with absorption wings, and narrow HeI and OII absorption lines.
Equivalent width and FWHM of the main lines are listed. The average systemic
velocity over our observing run is v(r) = (184 +/- 4) km/s; measurements over a
longer period of time are needed to determine the binary period and the K
velocity of the primary. We determine a projected rotational velocity v sin i =
(145 +/- 10) km/s for the Be star, and we deduce that the inclination angle of
the system is i = (21 +/- 3)deg.
|
Exchange Traded Funds (ETFs) have been gaining increasing popularity in the
investment community as is evidenced by the high growth both in the number of
ETFs and their net assets since 2000. As ETFs are in nature similar to index
mutual funds, in this paper we examined if this growing demand for ETFs can be
explained through their outperformance as compared to index mutual funds. We
considered the population of all ETFs with inception dates prior to 2002 and
then for each ETF found all the passive index mutual funds that had the same
investment style as the selected ETF and had inception date prior to 2002.
Within each investment style we matched every ETF with all the passive index
funds in that investment style and compared the performances of the matched
pairs in terms of Sharpe ratios and risk-adjusted buy-and-hold total returns
for the period 2002-2010. We then applied the Wilcoxon signed-rank test to examine
if ETFs had better performances than index mutual funds during the sample
period. Out of the 230 paired matches across all styles, ETFs outperformed
index mutual funds 134 times in terms of Sharpe ratio; however, the hypothesis
test showed no statistically significant difference between ETF and index fund
performances on this measure. In terms of risk-adjusted buy-and-hold total
return, ETFs outperformed index mutual funds in 125 of the 230 paired matches;
again, the hypothesis test showed no statistically significant difference.
These findings indicate that there is no statistically significant difference
between the performances of ETFs and passive index mutual funds at the fund
level, and that investors' choice between the two is related to product
characteristics and tax advantages.
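
The matched-pairs comparison maps directly onto scipy's Wilcoxon signed-rank
test; the numbers below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
etf_sharpe = rng.normal(0.50, 0.15, size=230)                # 230 matched pairs
index_sharpe = etf_sharpe - rng.normal(0.0, 0.10, size=230)  # no true gap

stat, p = wilcoxon(etf_sharpe, index_sharpe)
print(f"W={stat:.1f}, p={p:.3f}")   # large p -> no significant difference
```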
|
The Auger decay is a relevant recombination channel during the first few
femtoseconds of molecular targets impinged by attosecond XUV or soft X-ray
pulses. Including this mechanism in time-dependent simulations of
charge-migration processes is a difficult task, and Auger scatterings are
often ignored altogether. In this work we present an advance over the current
state of the art by putting forward a real-time approach based on
nonequilibrium Green's functions suitable for first-principles calculations of
molecules with tens of active electrons. To demonstrate the accuracy of the
method we report comparisons against accurate grid simulations of
one-dimensional systems. We also predict a highly asymmetric profile of the
Auger wavepacket, with a long tail exhibiting ripples temporally spaced by the
inverse of the Auger energy.
|
In a future where humanity seeks more resources, nations turn to the stars,
opening an era of large-scale interstellar exploration. According to the Outer
Space Treaty, any exploration of celestial bodies should promote global
equality and benefit all nations. Firstly, we defined global equity and built a
Unified Equity Index (UEI) model to measure it. We merged highly correlated
factors into 6 elements and then used the entropy method (TEM) to find the
dispersion of these elements across different countries. We then used
principal component analysis (PCA) to reduce the dimensionality of the
dispersion and a standardized index to obtain the global equity.
Secondly, we simulated a future with asteroid mining and evaluated its impact
on Unified Equity Index (UEI). Then, we divided the mineable asteroids into
three classes with different mining difficulties and values, identified 28
mining entities including private companies, national and international
organizations. We considered changes in the asteroid classes, mining
capabilities and mining scales to determine the changes in the value of
minerals mined between 2025 and 2085. We convert mining output value into
mineral transaction value through an allocation matrix based on grey relational
analysis (GRA). Finally, we present three possible versions of the future of
asteroid mining by changing the conditions. We propose two sets of
corresponding policies to address future trends in global equity under
asteroid mining. We test the separate and combined effects of these policies
and find that they are positive, strongly supporting the effectiveness of our
model.
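
For concreteness, a standard entropy-weight sketch of the kind used in the TEM
step is shown below; the data are placeholders, and the exact element
definitions are assumptions here.

```python
import numpy as np

def entropy_weights(X):
    """X: (countries, elements) matrix of non-negative indicator values."""
    P = X / X.sum(axis=0, keepdims=True)                # column-normalize
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)                  # entropy per element
    d = 1.0 - e                                         # degree of divergence
    return d / d.sum()                                  # weights sum to 1

X = np.random.default_rng(0).uniform(1, 10, size=(50, 6))  # 50 countries, 6 elements
print(entropy_weights(X).round(3))
```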
|
The power corrections in the Operator Product Expansion (OPE) of QCD
correlators can be viewed mathematically as an illustration of the transseries
concept, which allows one to recover a function from its asymptotic divergent
expansion. Alternatively, starting from the divergent behavior of the
perturbative QCD encoded in the singularities in the Borel plane, a modified
expansion can be defined by means of the conformal mapping of this plane. A
comparison of the two approaches concerning their ability to recover
nonperturbative properties of the true correlator was not explored up to now.
In the present paper, we make a first attempt to investigate this problem. We
use for illustration the Adler function and observables expressed as integrals
of this function along contours in the complex energy plane. We show that the
expansions based on the conformal mapping of the Borel plane go beyond
finite-order perturbation theory, containing an infinite number of terms when
reexpanded in powers of the coupling. Moreover, the expansion functions exhibit
nonperturbative features of the true function, while the expansions have a
tamed behavior at large orders and are expected even to be convergent. Using
these properties, we argue that there are no mathematical reasons for
supplementing the expansions based on the conformal mapping of the Borel plane
by additional arbitrary power corrections. Therefore, we make the conjecture
that they provide an alternative to the standard OPE in approximating the QCD
correlator. This conjecture allows us to slightly improve the accuracy of the
strong coupling extracted from the hadronic $\tau$ decay width. Using the
optimal expansions based on conformal mapping and the contour-improved
prescription of renormalization-group resummation, we obtain
$\alpha_s(m_\tau^2)=0.314 \pm 0.006$, which implies $\alpha_s(m_Z^2)=0.1179 \pm
0.0008$.
|
We extend the notion of generalised Cesaro summation/convergence developed
previously to the more natural setting of what we call "remainder" Cesaro
summation/convergence and, after illustrating the utility of this approach in
deriving certain classical results, use it to develop a notion of generalised
root identities. These extend elementary root identities for polynomials both
to more general functions and to a family of identities parametrised by a
complex parameter \mu. In so doing they equate one expression (the derivative
side) which is defined via Fourier theory, with another (the root side) which
is defined via remainder Cesaro summation. For \mu a non-positive integer these
identities are naturally adapted to investigating the asymptotic behaviour of
the given function and the geometric distribution of its roots. For the Gamma
function we show that it satisfies the generalised root identities and use them
to constructively deduce Stirling's theorem. For the Riemann zeta function the
implications of the generalised root identities for \mu=0,-1 and -2 are
explored in detail; in the case of \mu=-2 a symmetry of the non-trivial roots
is broken and allows us to conclude, after detailed computation, that the
Riemann hypothesis must be false. In light of this, some final direct
discussion is given of areas where the arguments used throughout the paper are
deficient in rigour and require more detailed justification. The conclusion of
section 1 gives guidance on the most direct route through the paper to the
claim regarding the Riemann hypothesis.
|
The main goal of this work is to compare the effects induced in ices of
astrophysical relevance by high-energy ions, simulating cosmic rays, and by
vacuum ultraviolet (UV) photons. This comparison relies on in situ infrared
spectroscopy of irradiated CH3OH:NH3 ice. Swift heavy ions were provided by the
GANIL accelerator. The source of UV was a microwave-stimulated hydrogen flow
discharge lamp. The deposited energy doses were similar for ion beams and UV
photons to allow a direct comparison. A variety of organic species was detected
during irradiation and later during ice warm-up. These products are common to
ion and UV irradiation for doses up to a few tens of eV per molecule. Only the
relative abundance of the CO product, after ice irradiation, was clearly higher
in the ion irradiation experiments. For some ice mixture compositions, the
irradiation products formed depend only weakly on the type of irradiation,
swift heavy ions, or UV photons. This simplifies the chemical modeling of
energetic ice processing in space.
|
Relevance Vector Machine (RVM) is a supervised learning algorithm extended
from Support Vector Machine (SVM) based on the Bayesian sparsity model.
Compared with the regression problem, RVM classification is difficult to
conduct because there is no closed-form solution for the weight parameter
posterior. The original RVM classification algorithm used Newton's method to
obtain the mode of the weight parameter posterior and then approximated it by
a Gaussian distribution via Laplace's method. This works, but it merely
applies frequentist methods within a Bayesian framework. This paper proposes a
Generic Bayesian approach for the RVM classification. We conjecture that our
algorithm achieves convergent estimates of the quantities of interest compared
with the nonconvergent estimates of the original RVM classification algorithm.
Furthermore, a Fully Bayesian approach with the hierarchical hyperprior
structure for RVM classification is proposed, which improves the classification
performance, especially for the imbalanced data problem. In numerical studies,
our proposed algorithms obtain high classification accuracy rates. The Fully
Bayesian hierarchical hyperprior method outperforms the Generic one for the
imbalanced data classification.
|
We report a characterization of the high-pressure behavior of zinc-iodate,
Zn(IO3)2. By the combination of x-ray diffraction, Raman spectroscopy, and
first-principles calculations we have found evidence of two subtle isosymmetric
structural phase transitions. We present arguments relating these transitions
to a non-linear behavior of phonons and changes induced by pressure on the
coordination sphere of the iodine atoms. This fact is explained as a
consequence of the formation of metavalent bonding at high-pressure which is
favored by the lone-electron pairs of iodine. In addition, the pressure
dependence of unit-cell parameters, volume, and bond is reported. An equation
of state to describe the pressure dependence of the volume is presented,
indicating that Zn(IO3)2 is the most compressible iodate among those studied up
to now. Finally, phonon frequencies are reported together with their symmetry
assignment and pressure dependence.
|
We review known results on the relations between conformal field theory, the
quantization of moduli spaces of flat PSL(2,R)-connections on Riemann surfaces,
and the quantum Teichmueller theory.
|
Deep reinforcement learning (RL) has achieved outstanding results in recent
years, which has led to a dramatic increase in the number of methods and
applications. Recent works explore learning beyond single-agent scenarios and
consider multi-agent settings. However, they face many challenges and seek
help from traditional game-theoretic algorithms, which, in turn, show great
application promise when combined with modern algorithms and growing computing
power. In this survey, we first introduce basic concepts
and algorithms in single agent RL and multi-agent systems; then, we summarize
the related algorithms from three aspects. Solution concepts from game theory
give inspiration to algorithms which try to evaluate the agents or find better
solutions in multi-agent systems. Fictitious self-play becomes popular and has
a great impact on the algorithm of multi-agent reinforcement learning.
Counterfactual regret minimization is an important tool to solve games with
incomplete information, and has shown great strength when combined with deep
learning.
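
As a minimal illustration of fictitious self-play, the sketch below runs
fictitious play on rock-paper-scissors; in zero-sum matrix games the empirical
mixtures converge to the Nash equilibrium (uniform here).

```python
import numpy as np

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])         # row player's payoff matrix

counts1 = np.ones(3)               # empirical action counts (uniform prior)
counts2 = np.ones(3)
for _ in range(20000):
    a1 = np.argmax(A @ (counts2 / counts2.sum()))    # BR to column mixture
    a2 = np.argmax(-(counts1 / counts1.sum()) @ A)   # BR to row mixture
    counts1[a1] += 1
    counts2[a2] += 1

print((counts1 / counts1.sum()).round(3))  # -> approx [0.333 0.333 0.333]
```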
|
We present a detailed analytical and numerical study for the spreading of
infections in complex population networks with acquired immunity. We show that
the large connectivity fluctuations usually found in these networks strengthen
considerably the incidence of epidemic outbreaks. Scale-free networks, which
are characterized by diverging connectivity fluctuations, exhibit the lack of
an epidemic threshold and always show a finite fraction of infected
individuals. This particular weakness, observed also in models without
immunity, defines a new epidemiological framework characterized by a highly
heterogeneous response of the system to the introduction of infected
individuals with different connectivity. The understanding of epidemics in
complex networks might deliver new insights in the spread of information and
diseases in biological and technological networks that often appear to be
characterized by complex heterogeneous architectures.
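
A simple discrete-time SIR simulation on a scale-free (Barabasi-Albert) graph
illustrates the mechanism; the parameters are illustrative, and the paper's
analysis concerns SIR-type dynamics on heterogeneous networks rather than this
exact simulation.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)
beta, gamma = 0.05, 0.2          # per-contact infection / recovery probability

state = {v: "S" for v in G}
for v in random.sample(list(G), 10):   # seed a few infected nodes
    state[v] = "I"

while any(s == "I" for s in state.values()):
    new_state = dict(state)
    for v, s in state.items():
        if s == "I":
            for u in G[v]:
                if state[u] == "S" and random.random() < beta:
                    new_state[u] = "I"
            if random.random() < gamma:
                new_state[v] = "R"
    state = new_state

print(sum(s == "R" for s in state.values()) / len(G))  # final outbreak size
```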
|
A lattice Boltzmann scheme associated with flexible Prandtl number and
specific heat ratio is proposed, based on the polyatomic ellipsoidal
statistics model (ES-BGK). The Prandtl number can be adjusted via a parameter
of the Gaussian distribution, and the specific heat ratio via additional
degrees of freedom. To construct the proposed scheme, the
Gaussian distribution is expanded on the Hermite polynomials and the general
term formula for the Hermite coefficients of the Gaussian distribution is
deduced. Benchmarks are carried out to verify the proposed scheme. The
numerical results are in good agreement with the analytical solutions.
|