|
Idea Density (ID) measures the rate at which ideas or elementary predications
are expressed in an utterance or in a text. Lower ID is found to be associated
with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al.,
1996; Engelman et al., 2010). ID has been used in two different versions:
propositional idea density (PID) counts the expressed ideas and can be applied
to any text while semantic idea density (SID) counts pre-defined information
content units and is naturally more applicable to normative domains, such as
picture description tasks. In this paper, we develop DEPID, a novel
dependency-based method for computing PID, and its variant DEPID-R, which
enables the exclusion of repeated ideas, a feature characteristic of AD speech. We conduct
the first comparison of automatically extracted PID and SID in the diagnostic
classification task on two different AD datasets covering both closed-topic and
free-recall domains. While SID performs better on the normative dataset, adding
PID leads to a small but significant improvement (+1.7 F-score). On the
free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in
F-score) but adding the features derived from the word embedding clustering
underlying the automatic SID increases the results considerably, leading to an
F-score of 84.8.
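Although the paper's DEPID operates on dependency parses, the long-standing POS-based approximation of PID (count verbs, adjectives, adverbs, prepositions and conjunctions as propositions, divide by the word count) can be sketched in a few lines. The tag set and the hand-tagged example below are illustrative assumptions, not the authors' implementation.

```python
# Toy POS-based approximation of propositional idea density (PID), not DEPID:
# propositions are approximated by verbs, adjectives, adverbs, adpositions,
# and conjunctions; PID = propositions / words.
PROPOSITION_TAGS = {"VERB", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}

def idea_density(tagged_tokens):
    """tagged_tokens: list of (word, universal POS tag) pairs."""
    if not tagged_tokens:
        return 0.0
    propositions = sum(1 for _, tag in tagged_tokens if tag in PROPOSITION_TAGS)
    return propositions / len(tagged_tokens)

# A hand-tagged example sentence: "The old man walked slowly to the door"
sentence = [("The", "DET"), ("old", "ADJ"), ("man", "NOUN"), ("walked", "VERB"),
            ("slowly", "ADV"), ("to", "ADP"), ("the", "DET"), ("door", "NOUN")]
print(round(idea_density(sentence), 3))  # 4 propositions / 8 words = 0.5
```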
|
We study sufficient conditions for local asymptotic mixed normality. We
weaken the sufficient conditions in Theorem 1 of Jeganathan (Sankhya Ser. A
1982) so that they can be applied to a wider class of statistical models
including a jump-diffusion model. Moreover, we show that local asymptotic mixed
normality of a statistical model generated by approximated transition density
functions carries over to the original model. Together with density
approximation by means of thresholding techniques, we show local asymptotic
normality for a statistical model of discretely observed jump-diffusion
processes where the drift coefficient, diffusion coefficient, and jump
structure are parametrized. As a consequence, the quasi-maximum-likelihood and
Bayes-type estimators proposed in Shimizu and Yoshida (Stat. Inference Stoch.
Process. 2006) and Ogihara and Yoshida (Stat. Inference Stoch. Process. 2011)
are shown to be asymptotically efficient in this model. Moreover, we can
construct asymptotically uniformly most powerful tests for the parameters.
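For orientation, local asymptotic mixed normality of a family $(P^n_\theta)$ at $\theta$ with rate matrices $r_n \to 0$ amounts, in the standard formulation, to the log-likelihood ratio expansion

```latex
\log \frac{dP^{n}_{\theta + r_n u}}{dP^{n}_{\theta}}
  = u^{\top} \Delta_n(\theta) - \frac{1}{2}\, u^{\top} \Gamma_n(\theta)\, u + o_p(1),
\qquad
\big(\Delta_n(\theta), \Gamma_n(\theta)\big) \xrightarrow{\ d\ }
\big(\Gamma(\theta)^{1/2}\, \mathcal{N},\ \Gamma(\theta)\big),
```

with $\mathcal{N}$ a standard normal vector independent of the (possibly random, positive definite) matrix $\Gamma(\theta)$; local asymptotic normality (LAN) is the special case where $\Gamma(\theta)$ is deterministic.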
|
We search for periodic variations in the radial velocity of the young Herbig
star HD 135344B with the aim of determining its rotation period. We analyzed 44
high-resolution optical spectra taken over a time range of 151 days. The
spectra were acquired with FEROS at the 2.2m MPG/ESO telescope in La Silla. The
stellar parameters of HD 135344B are determined by fitting synthetic spectra to
the stellar spectrum. In order to obtain radial velocity measurements, the
stellar spectra were cross-correlated with a theoretical template computed
from the determined stellar parameters. We report the first direct measurement of
the rotation period of a Herbig star from radial-velocity measurements. The
rotation period is found to be 0.16 d (3.9 hr), which makes HD 135344B a rapid
rotator at or close to its break-up velocity. The rapid rotation could explain
some of the properties of the circumstellar environment of HD 135344B, such as
the presence of an inner disk whose properties (composition, inclination) differ
significantly from those of the outer disk.
|
We analyse quantum properties of ${\cal N}=2$ and ${\cal N}=4$ supersymmetric
gauge theories formulated in terms of ${\cal N}=1$ superfields and investigate
the conditions imposed on a renormalization prescription under which the
non-renormalization theorems are valid. For this purpose in these models we
calculate the two-loop contributions to the anomalous dimensions of all chiral
matter superfields and the three-loop contributions to the $\beta$-functions
for an arbitrary ${\cal N}=1$ supersymmetric subtraction scheme supplementing
the higher covariant derivative regularization. We demonstrate that, in
general, the results do not vanish due to the scheme dependence, which becomes
essential in the considered approximations. However, the two-loop anomalous
dimensions vanish if a subtraction scheme is compatible with the structure of
quantum corrections and does not break the relation between the Yukawa and
gauge couplings which follows from ${\cal N}=2$ supersymmetry. Nevertheless,
even under these conditions the three-loop contribution to the $\beta$-function
does not in general vanish for ${\cal N}=2$ supersymmetric theories. To
obtain a purely one-loop $\beta$-function, one should also choose an NSVZ
renormalization prescription. Similar statements for the higher-loop
contributions are proved in all orders.
|
We study the capacity of entanglement as an alternative to entanglement
entropies in estimating the degree of entanglement of quantum bipartite systems
over fermionic Gaussian states. In particular, we derive exact and
asymptotic formulas for the average capacity in two different cases: with and
without particle number constraints. For the latter case, the obtained formulas
generalize some partial results on average capacity in the literature. The key
ingredient in deriving the results is a set of new tools for simplifying finite
summations developed very recently in the study of entanglement entropy of
fermionic Gaussian states.
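For reference, the capacity of entanglement is commonly defined as the variance of the modular Hamiltonian $K_A = -\ln \rho_A$ of the reduced density matrix $\rho_A$, in analogy with the heat capacity:

```latex
C_E = \operatorname{Tr}\!\left(\rho_A K_A^{2}\right)
    - \left[\operatorname{Tr}\!\left(\rho_A K_A\right)\right]^{2},
\qquad K_A = -\ln \rho_A ,
```

so the entanglement entropy $S = \operatorname{Tr}(\rho_A K_A)$ is the mean of the same modular spectrum whose variance $C_E$ measures.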
|
We have evaluated a universal ratio between diffusion constants of the ring
polymer with a given knot $K$ and a linear polymer with the same molecular
weight in solution through the Brownian dynamics under hydrodynamic
interaction. The ratio is found to be constant with respect to the number of
monomers, $N$, and hence the estimate at some $N$ should be valid practically
over a wide range of $N$ for various polymer models. Interestingly, the ratio
is determined by the average crossing number ($N_{AC}$) of an ideal
conformation of knotted curve $K$, i.e. that of the ideal knot. The $N_{AC}$ of
ideal knots should therefore be fundamental in the dynamics of knots.
|
The angular power spectrum of the cosmic microwave background (CMB)
temperature anisotropies is a good probe to look into the primordial density
fluctuations at large scales in the universe. Here we re-examine the angular
power spectrum of the Wilkinson Microwave Anisotropy Probe data, paying
particular attention to the fine structures (oscillations) at $\ell=100 \sim
150$ reported by several authors. Using Monte-Carlo simulations, we confirm
that the deviation from a simple power-law spectrum is a rare event, at about
the 2.5--3$\sigma$ level, if these fine structures are generated by experimental
noise and cosmic variance. Next, in order to investigate the origin of the
structures, we examine frequency and direction dependencies of the fine
structures by dividing the observed QUV frequency maps into four sky regions.
We find that the structures around $\ell \sim 120$ show no significant
dependence on either frequency or direction. For the structure around $\ell
\sim 140$, however, we find that the characteristic signature found in the all
sky power spectrum is attributed to the anomaly only in the South East region.
|
Understanding the microscopic origin of the gate-controlled supercurrent
(GCS) in superconducting nanobridges is crucial for engineering superconducting
switches suitable for a variety of electronic applications. The origin of GCS
is controversial, and various mechanisms have been proposed to explain it. In
this work, we have investigated the GCS in a Ta layer deposited on the surface
of InAs nanowires. Comparison between switching current distributions at
opposite gate polarities and between the gate dependence of two opposite side
gates with different nanowire-gate spacings shows that the GCS is determined
by the power dissipated by the gate leakage. We also found a substantial
difference between the influence of the gate and elevated bath temperature on
the magnetic field dependence of the supercurrent. Detailed analysis of the
switching dynamics at high gate voltages shows that the device is driven into
the multiple phase slips regime by high-energy fluctuations arising from the
leakage current.
|
A TQFT is a functor from a cobordism category to the category of vector
spaces, satisfying certain properties. An important property is that the vector
spaces should be finite dimensional. For the WRT TQFT, the relevant
2+1-cobordism category is built from manifolds which are equipped with an extra
structure such as a p_1-structure, or an extended manifold structure. We
perform the universal construction of Blanchet, Habegger, Masbaum and Vogel on
a cobordism category without this extra structure and show that the resulting
quantization functor assigns an infinite dimensional vector space to the torus.
|
Exact algorithms for learning Bayesian networks guarantee to find provably
optimal networks. However, they may fail in difficult learning tasks due to
limited time or memory. In this research we adapt several anytime heuristic
search-based algorithms to learn Bayesian networks. These algorithms find
high-quality solutions quickly, and continually improve the incumbent solution
or prove its optimality before resources are exhausted. Empirical results show
that the anytime window A* algorithm usually finds higher-quality, often
optimal, networks more quickly than other approaches. The results also show
that, surprisingly, although generating networks with few parents per variable
are structurally simpler, they are harder to learn than more complex generating
networks with more parents per variable.
|
In a Bruhat-Tits building of split classical type (that is, of type $A_n$,
$B_n$, $C_n$, $D_n$, and any combination of them) over a local field, the
simplicial volume counts the vertices within the given simplicial distance from
a special vertex. This paper aims to study the asymptotic growth of the
simplicial volume. A formula of the simplicial volume is deduced from the
theory of concave functions. Then the dominant term in its asymptotic growth is
found using the theory of $q$-exponential polynomials developed in this paper.
|
We present a self-consistent, absolute isochronal age scale for young (< 200
Myr), nearby (< 100 pc) moving groups in the solar neighbourhood based on
homogeneous fitting of semi-empirical pre-main-sequence model isochrones using
the $\tau^2$ maximum-likelihood fitting statistic of Naylor & Jeffries in the
$M_V$, $V-J$ colour-magnitude diagram. The final adopted ages for the groups are:
$149^{+51}_{-19}$ Myr for the AB Dor moving group, $24 \pm 3$ Myr for the $\beta$ Pic
moving group (BPMG), $45^{+11}_{-7}$ Myr for the Carina association, $42^{+6}_{-4}$ Myr
for the Columba association, $11 \pm 3$ Myr for the $\eta$ Cha cluster, $45 \pm 4$ Myr
for the Tucana-Horologium moving group (Tuc-Hor), $10 \pm 3$ Myr for the TW Hya
association, and $22^{+4}_{-3}$ Myr for the 32 Ori group. At this stage we are
uncomfortable assigning a final, unambiguous age to the Argus association as
our membership list for the association appears to suffer from a high level of
contamination, and therefore it remains unclear whether these stars represent a
single population of coeval stars.
Our isochronal ages for both the BPMG and Tuc-Hor are consistent with recent
lithium depletion boundary (LDB) ages, which unlike isochronal ages, are
relatively insensitive to the choice of low-mass evolutionary models. This
consistency between the isochronal and LDB ages instills confidence that our
self-consistent, absolute age scale for young, nearby moving groups is robust,
and hence we suggest that these ages be adopted for future studies of these
groups.
Software implementing the methods described in this study is available from
http://www.astro.ex.ac.uk/people/timn/tau-squared/.
|
We propose and demonstrate a scalable scheme for the simultaneous
determination of internal and motional states in trapped ions with single-site
resolution. The scheme is applied to the study of polaritonic excitations in
the Jaynes-Cummings Hubbard model with trapped ions, in which the internal and
motional states of the ions are strongly correlated. We observe quantum phase
transitions of polaritonic excitations in two ions by directly evaluating their
variances per ion site. Our work establishes an essential technological method
for large-scale quantum simulations of polaritonic systems.
|
We are concerned with the mathematical study of the Mean Field Games system
(MFGS). In the conventional setup, the MFGS is a system of two coupled
nonlinear parabolic PDEs of the second order in a backward-forward manner,
namely a terminal condition and an initial condition are prescribed respectively
for the value function and the population density. In this paper, we show that
uniqueness of solutions to the MFGS can be guaranteed if, among all four
possible terminal and initial conditions, either only two terminal or only two
initial conditions are given. In both cases H\"older stability estimates are
proven. This means that the accuracies of the solutions are estimated in terms
of the given data. Moreover, these estimates readily imply uniqueness of
corresponding problems for the MFGS. The main mathematical apparatus to
establish those results is two new Carleman estimates, which may find
application in other contexts associated with coupled parabolic PDEs.
|
A scenario has recently been reported in which in order to stabilize complete
synchronization of an oscillator network---a symmetric state---the symmetry of
the system itself has to be broken by making the oscillators nonidentical. But
how often does such behavior---which we term asymmetry-induced synchronization
(AISync)---occur in oscillator networks? Here we present the first general
scheme for constructing AISync systems and demonstrate that this behavior is
the norm rather than the exception in a wide class of physical systems that can
be seen as multilayer networks. Since a symmetric network in complete synchrony
is the basic building block of cluster synchronization in more general
networks, AISync should be common also in facilitating cluster synchronization
by breaking the symmetry of the cluster subnetworks.
|
Interpretability of machine learning (ML) models becomes more relevant with
their increasing adoption. In this work, we address the interpretability of
ML-based question answering (QA) models on a combination of knowledge bases (KB)
and text documents. We adapt post hoc explanation methods such as LIME and
input perturbation (IP) and compare them with the self-explanatory attention
mechanism of the model. For this purpose, we propose an automatic evaluation
paradigm for explanation methods in the context of QA. We also conduct a study
with human annotators to evaluate whether explanations help them identify
better QA models. Our results suggest that IP provides better explanations than
LIME or attention, according to both automatic and human evaluation. We obtain
the same ranking of methods in both experiments, which supports the validity of
our automatic evaluation paradigm.
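The input-perturbation idea can be sketched generically: remove one token at a time from the input and record the drop in the model's answer score. The scoring function below is a hypothetical stand-in for a trained QA model, not the paper's system.

```python
# Generic input-perturbation (IP) explanation sketch: the importance of a
# token is the drop in the model's score when that token is removed.
def perturbation_importance(tokens, score_fn):
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

# Hypothetical scorer: confident only when the answer-bearing word is present.
def toy_score(tokens):
    return 1.0 if "Paris" in tokens else 0.25

tokens = ["capital", "of", "France", "is", "Paris"]
print(perturbation_importance(tokens, toy_score))
# → [0.0, 0.0, 0.0, 0.0, 0.75]
```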
|
A wide variety of apparently contradictory piezoresistance (PZR) behaviors
have been reported in p-type silicon nanowires (SiNW), from the usual positive
bulk effect to anomalous (negative) PZR and giant PZR. The origin of such a
range of diverse phenomena is unclear, and consequently so too is the
importance of a number of parameters including SiNW type (top down or bottom
up), stress concentration, electrostatic field effects, or surface chemistry.
Here we observe all these PZR behaviors in a single set of nominally p-type,
$\langle 110 \rangle$ oriented, top-down SiNWs at uniaxial tensile stresses up
to 0.5 MPa. Longitudinal $\pi$-coefficients varying from $-800\times10^{-11}$
Pa$^{-1}$ to $3000\times10^{-11}$ Pa$^{-1}$ are measured. Micro-Raman
spectroscopy on chemically treated nanowires reveals that stress concentration
is the principal source of giant PZR. The sign and an excess PZR similar in
magnitude to the bulk effect are related to the chemical treatment of the SiNW.
|
Given a distance matrix consisting of pairwise distances between species, a
distance-based phylogenetic reconstruction method returns a tree metric or
equidistant tree metric (ultrametric) that best fits the data. We investigate
distance-based phylogenetic reconstruction using the $l^\infty$-metric. In
particular, we analyze the set of $l^\infty$-closest ultrametrics and tree
metrics to an arbitrary dissimilarity map to determine its dimension and the
tree topologies it represents. In the case of ultrametrics, we decompose the
space of dissimilarity maps on 3 elements and on 4 elements relative to the
tree topologies represented.
Our approach is to first address uniqueness issues arising in
$l^\infty$-optimization over linear spaces. We show that the $l^\infty$-closest
point in a linear space is unique if and only if the underlying matroid of the
linear space is uniform. We also give a polyhedral decomposition of $\mathbb{R}^m$
based on the dimension of the set of $l^\infty$-closest points in a linear
space.
|
Continuous-time event data are common in applications such as individual
behavior data, financial transactions, and medical health records. Modeling
such data can be very challenging, in particular for applications with many
different types of events, since it requires a model to predict the event types
as well as the time of occurrence. Recurrent neural networks that parameterize
time-varying intensity functions are the current state-of-the-art for
predictive modeling with such data. These models typically assume that all
event sequences come from the same data distribution. However, in many
applications event sequences are generated by different sources, or users, and
their characteristics can be very different. In this paper, we extend the broad
class of neural marked point process models to mixtures of latent embeddings,
where each mixture component models the characteristic traits of a given user.
Our approach relies on augmenting these models with a latent variable that
encodes user characteristics, represented by a mixture model over user behavior
that is trained via amortized variational inference. We evaluate our methods on
four large real-world datasets and demonstrate systematic improvements from our
approach over existing work for a variety of predictive metrics such as
log-likelihood, next event ranking, and source-of-sequence identification.
|
Inspired by the recent near-threshold $J/\psi$ photoproduction measurements,
we discuss gluon gravitational form factors (GFFs) and internal properties of
the proton. This work presents a complete analysis of the proton gluon GFFs
connecting the gluon part of the energy-momentum tensor and the heavy
quarkonium photoproduction. In particular, a global fitting of the $J/\psi$
differential and total cross section experimental data is used to determine the
gluon GFFs as functions of the squared momentum transfer $t$. Combined with the
quark contributions to the $D$-term form factor extracted from the deeply
virtual Compton scattering experiment, the total $D$-term is obtained to
investigate its applications in describing the proton's mechanical properties.
These studies provide a unique perspective on investigating the proton gluon
GFFs and important information for enhancing QCD constraints on the gluon GFFs.
|
The free propagator for the scalar $\lambda \phi^4$-theory is calculated
exactly up to the second derivative of a background field. Using this
propagator I compute the one-loop effective action, which then contains all
powers of the field but with at most two derivatives acting on each field. The
standard derivative expansion, which only has a finite number of derivatives in
each term, breaks down for small fields when the mass is zero, while the
expression obtained here has a well-defined expansion in $\phi$. In this way
the resummation of derivatives cures the naive IR divergence. The extension to
finite temperature is also discussed.
|
This work presents a model that successfully describes the tensile properties
of macroscopic fibres of carbon nanotubes (CNTs). The core idea is to treat
such a fibre as a network of CNT bundles, similar to the structure of
high-performance polymer fibres, with tensile properties defined by the CNT
bundle orientation distribution function (ODF), shear modulus and shear
strength. Synchrotron small-angle X-ray scattering measurements on individual
fibres are used to determine the initial ODF and its evolution during in-situ
tensile testing. This enables prediction of tensile modulus, strength and
fracture envelope, with remarkable agreement with experimental data for fibres
produced in-house with different constituent CNTs and for different draw
ratios, as well as with literature data. The parameters extracted from the
model include: CNT bundle shear strength, shear modulus and tensile strength.
These are in agreement with data for commercial high-performance fibres,
although high compared with values for single-crystal graphite and short
individual CNTs. The manuscript also discusses the unusually high fracture
energy of CNT fibres and their exceptionally high figure of merit for ballistic
protection. The model predicts that small improvements in orientation would
lead to ballistic performance superior to that of any synthetic
high-performance fibre, with values of strain wave velocity ($U^{1/3}$)
exceeding $1000\ \mathrm{m/s}$.
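The ballistic figure of merit written as $U^{1/3}$ is, we assume here, the Cunniff velocity formed from the fibre's tensile strength $\sigma$, failure strain $\varepsilon$, tensile modulus $E$ and density $\rho$:

```latex
U^{*} = \frac{\sigma\,\varepsilon}{2\rho}\,\sqrt{\frac{E}{\rho}},
\qquad c^{*} = \left(U^{*}\right)^{1/3},
```

which carries units of velocity and sets the scale of a fibre's transverse ballistic response.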
|
A new form of the equations for geodesic lines in Riemannian geometry is
presented. The method is based on using differential forms as the arguments of
differentiation in the differential equations. These forms are not required to
be complete, which extends the possibilities for transforming the equations,
makes their formulation very flexible, and broadens their scope of
application.
|
The distributed stochastic gradient descent (SGD) approach has been widely used
in large-scale deep learning, and collective gradient communication is vital to
the training scalability of a distributed deep learning system.
Collective communication such as AllReduce has been widely adopted for the
distributed SGD process to reduce the communication time. However, AllReduce
consumes large bandwidth, even though in many cases most gradient values are
zero; such sparse gradients should be efficiently compressed to save
bandwidth. To reduce the sparse gradient communication overhead, we
propose Sparse-Sketch Reducer (S2 Reducer), a novel sketch-based sparse
gradient aggregation method with convergence guarantees. S2 Reducer reduces the
communication cost by only compressing the non-zero gradients with count-sketch
and bitmap, and enables the efficient AllReduce operators for parallel SGD
training. We perform extensive evaluation against four state-of-the-art methods
over five training models. Our results show that S2 Reducer converges to the
same accuracy, reduces sparse communication overhead by 81\%, and achieves a
1.8$\times$ speedup compared to state-of-the-art approaches.
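A minimal sketch of the bitmap-plus-count-sketch idea: the bitmap records which gradient entries are non-zero, while the count-sketch compresses their values. The hash function, sketch width, and single-row sketch are illustrative simplifications, not the actual S2 Reducer implementation.

```python
import hashlib

def bucket_sign(i, width):
    # Derive a bucket index and a +/-1 sign for coordinate i (illustrative hash).
    d = hashlib.md5(str(i).encode()).digest()
    return d[0] % width, 1 if d[1] % 2 == 0 else -1

def compress(grad, width):
    bitmap = [1 if g != 0 else 0 for g in grad]
    sketch = [0.0] * width
    for i, g in enumerate(grad):
        if g != 0:
            b, s = bucket_sign(i, width)
            sketch[b] += s * g            # only non-zero entries are sketched
    return bitmap, sketch

def decompress(bitmap, sketch, width):
    # Zeros are restored from the bitmap; non-zeros are estimated from the
    # sketch (exact when sketched coordinates do not collide in a bucket).
    return [bucket_sign(i, width)[1] * sketch[bucket_sign(i, width)[0]]
            if bit else 0.0 for i, bit in enumerate(bitmap)]

grad = [0.0] * 16
grad[5] = 3.0                             # single non-zero entry: exact recovery
bitmap, sketch = compress(grad, width=8)
print(decompress(bitmap, sketch, width=8) == grad)  # True
```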
|
We study a model for two lasers that are mutually coupled opto-electronically
by modulating the pump of one laser with the intensity deviations from the
steady-state output of the other. Such a model is analogous to the equations
describing the spread of certain diseases such as dengue epidemics in human
populations coupled by migration, transportation, etc. In this report we
consider the possibility of both in-phase (complete) and anti-phase (inverse)
synchronization between the above-mentioned laser models. Depending on the
coupling rates between the systems, a transition from in-phase (complete)
synchronization to anti-phase (inverse) synchronization might occur. The
results are important for disrupting the spread of certain
infectious diseases in human populations.
|
Reinforcement learning has shown great promise in the training of robot
behavior due to its sequential decision-making nature. However, the
enormous amount of interactive and informative training data required poses a
major stumbling block for progress. In this study, we focus on accelerating
reinforcement learning (RL) training and improving the performance of
multi-goal reaching tasks. Specifically, we propose a precision-based
continuous curriculum learning (PCCL) method in which the requirements are
gradually adjusted during the training process, instead of fixing the parameter
in a static schedule. To this end, we explore various continuous curriculum
strategies for controlling a training process. This approach is tested using a
Universal Robot 5e in both simulation and real-world multi-goal reach
experiments. Experimental results support the hypothesis that a static training
schedule is suboptimal, and using an appropriate decay function for curriculum
learning provides superior results in a faster way.
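A decaying-precision schedule of the kind described can be sketched as follows; the exponential form, rates, and tolerances are illustrative assumptions rather than the paper's exact schedule.

```python
import math

def tolerance(step, start=0.10, final=0.01, rate=1e-3):
    """Required goal-reaching tolerance at a training step: starts loose,
    decays smoothly towards the final target precision."""
    return final + (start - final) * math.exp(-rate * step)

# Early episodes accept coarse reaches; later episodes demand precision.
for step in (0, 1000, 5000):
    print(step, round(tolerance(step), 4))
```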
|
In this work we investigate the stellar population, metallicity distribution
and ionized gas in the elliptical galaxy NGC 5044, using long-slit spectroscopy
and a stellar population synthesis method. We found differences in the slopes of
metal-line profiles along the galaxy, which suggest an enhancement of alpha
elements, particularly towards the central region. The presence of a
non-thermal ionization source, such as a low-luminosity AGN and/or shock
ionization, is implied by the large values of the [N II]/Ha ratio observed in
all sampled regions. However, the emission lines observed in the external
regions indicate the presence of an additional ionization source, probably hot,
post-AGB stars.
|
Learning precise distributions of traffic features (e.g., burst sizes, packet
inter-arrival time) is still a largely unsolved problem despite being critical
for management tasks such as capacity planning or anomaly detection. A key
limitation nowadays is the lack of feedback between the control plane and the
data plane. Programmable data planes offer the opportunity to create systems
that let the data and control planes work together, compensating for their
respective shortcomings.
We present FitNets, an adaptive network monitoring system leveraging feedback
between the data- and the control plane to learn accurate traffic
distributions. In the control plane, FitNets relies on Kernel Density
Estimators, which can provably learn distributions of any shape. In the
data plane, FitNets tests the accuracy of the learned distributions while
dynamically adapting data collection to the observed distribution fitness,
prioritizing under-fitted features.
We have implemented FitNets in Python and P4 (including on commercially
available programmable switches) and tested it on real and synthetic traffic
traces. FitNets is practical: it is able to estimate hundreds of distributions
from up to 60 million samples per second, while providing accurate error
estimates and adapting to complex traffic patterns.
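The control-plane estimator can be illustrated with a from-scratch Gaussian KDE; the real system's bandwidth selection and feature set are more elaborate, and the data below are made up.

```python
import math

def kde(samples, bandwidth):
    """Return a Gaussian kernel density estimate built from 1-D samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# E.g. packet inter-arrival times (ms) clustered around 1 ms and 5 ms.
f = kde([0.9, 1.0, 1.1, 4.9, 5.0, 5.1], bandwidth=0.3)
print(round(f(1.0), 3), round(f(5.0), 3), round(f(3.0), 6))
```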
|
Classic statistical techniques (like the multi-dimensional likelihood and the
Fisher discriminant method) together with Multi-layer Perceptron and Learning
Vector Quantization Neural Networks have been systematically used in order to
find the best sensitivity when searching for $\nu_\mu \to \nu_{\tau}$
oscillations. We discovered that for a general direct $\nu_\tau$ appearance
search based on kinematic criteria: a) An optimal discrimination power is
obtained using only three variables ($E_{visible}$, $P_{T}^{miss}$ and
$\rho_{l}$) and their correlations. Increasing the number of variables (or
combinations of variables) only increases the complexity of the problem, but
does not result in an appreciable change of the expected sensitivity. b) The
multi-layer perceptron approach offers the best performance. As an example to
assert numerically those points, we have considered the problem of $\nu_\tau$
appearance at the CNGS beam using a Liquid Argon TPC detector.
|
Tactile skins made from textiles enhance robot-human interaction by
localizing contact points and measuring contact forces. This paper presents a
solution for rapidly fabricating, calibrating, and deploying these skins on
industrial robot arms. The novel automated skin calibration procedure maps skin
locations to robot geometry and calibrates contact force. Through experiments
on a FANUC LR Mate 200id/7L industrial robot, we demonstrate that tactile skins
made from textiles can be effectively used for human-robot interaction in
industrial environments, and can provide unique opportunities in robot control
and learning, making them a promising technology for enhancing robot perception
and interaction.
|
The present article explores the application of randomized control techniques
in empirical asset pricing and performance evaluation. It introduces geometric
random walks, a class of Markov chain Monte Carlo methods, to construct
flexible control groups in the form of random portfolios adhering to investor
constraints. The sampling-based methods enable an exploration of the
relationship between academically studied factor premia and performance in a
practical setting. In an empirical application, the study assesses the
potential to capture premia associated with size, value, quality, and momentum
within a strongly constrained setup, exemplified by the investor guidelines of
the MSCI Diversified Multifactor index. Additionally, the article highlights
issues with the more traditional use case of random portfolios for drawing
inferences in performance evaluation, showcasing challenges related to the
intricacies of high-dimensional geometry.
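As a toy stand-in for the geometric random walk samplers the article advocates, one can draw long-only, fully invested portfolios uniformly from the simplex and reject those violating a weight cap. Real constraint sets (as in the MSCI example) are far tighter, which is exactly where MCMC samplers become necessary; the cap and asset count below are invented.

```python
import math
import random

def random_portfolio(n_assets, max_weight, rng):
    """Uniform sample from the simplex (long-only, fully invested),
    rejected until the per-asset weight cap is satisfied."""
    while True:
        # Exponential spacings give a uniform draw on the simplex.
        draws = [-math.log(1.0 - rng.random()) for _ in range(n_assets)]
        total = sum(draws)
        weights = [d / total for d in draws]
        if max(weights) <= max_weight:
            return weights

rng = random.Random(7)
w = random_portfolio(8, 0.30, rng)
print([round(x, 3) for x in w], round(sum(w), 6))
```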
|
Graph neural networks (GNNs) have received remarkable success in link
prediction (GNNLP) tasks. Existing efforts first predefine the subgraph for the
whole dataset and then apply GNNs to encode edge representations by leveraging
the neighborhood structure induced by the fixed subgraph. The prominence of
GNNLP methods relies heavily on the ad hoc subgraph. Since node
connectivity in real-world graphs is complex, one shared subgraph is
inadequate for all edges. Thus, the choice of subgraph should be personalized for
different edges. However, performing personalized subgraph selection is
nontrivial since the potential selection space grows exponentially with the
number of edges. Besides, the inference edges are not available during training in
link prediction scenarios, so the selection process needs to be inductive. To
bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a
plug-and-play framework to automatically, personally, and inductively identify
optimal subgraphs for different edges when performing GNNLP. PS2 is
instantiated as a bi-level optimization problem that can be solved
efficiently. Coupling GNNLP models with PS2, we suggest a brand-new angle
towards GNNLP training: by first identifying the optimal subgraphs for edges;
and then focusing on training the inference model by using the sampled
subgraphs. Comprehensive experiments endorse the effectiveness of our proposed
method across various GNNLP backbones (GCN, GraphSage, NGCF, LightGCN, and
SEAL) and diverse benchmarks (Planetoid, OGB, and Recommendation datasets). Our
code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}
|
We derive an expression for the relation between two scattering transition
amplitudes which reflect the same dynamics, but which differ in the description
of their initial and final state vectors. In one version, the incident and
scattered states are elements of a perturbative Fock space, and solve the
eigenvalue problem for the `free' part of the Hamiltonian --- the part that
remains after the interactions between particle excitations have been `switched
off'. Alternatively, the incident and scattered states may be coherent states
that are transforms of these Fock states. In earlier work, we reported on the
scattering amplitudes for QED, in which a unitary transformation relates
perturbative and non-perturbative sets of incident and scattered states. In
this work, we generalize this earlier result to the case of transformations
that are not necessarily unitary and that may not have unique inverses. We
discuss the implication of this relationship for Abelian and non-Abelian gauge
theories in which the `transformed', non-perturbative states implement
constraints, such as Gauss's law.
|
In 2016, an exposure meter was installed on the Lijiang Fiber-fed
High-Resolution Spectrograph to monitor the coupling of starlight to the
science fiber during observations. Based on this exposure meter, we
investigated a method to estimate the exposure flux of the CCD in real time
from the counts of its photomultiplier tubes (PMT), and developed software to
optimize the control of the exposure time. First, by using
flat-field lamp observations, we determined that there is a linear and
proportional relationship between the total counts of the PMT and the exposure
flux of the CCD. Second, using historical observations of different spectral
types, the corresponding conversion factors were determined separately for
each type. Third, the method was validated using actual observation
data, which showed that all values of the coefficient of determination were
greater than 0.92. Finally, software was developed to display the counts of the
PMT and the estimated exposure flux of the CCD in real-time during the
observation, providing a visual reference for optimizing the exposure time
control.
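The proportional calibration described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual pipeline, and the function names are ours:

```python
def fit_conversion_factor(pmt_counts, ccd_flux):
    """Least-squares fit of the proportional model ccd_flux ~ k * pmt_counts
    (a line through the origin); returns the conversion factor k and the
    coefficient of determination R^2."""
    sxy = sum(x * y for x, y in zip(pmt_counts, ccd_flux))
    sxx = sum(x * x for x in pmt_counts)
    k = sxy / sxx
    mean_y = sum(ccd_flux) / len(ccd_flux)
    ss_res = sum((y - k * x) ** 2 for x, y in zip(pmt_counts, ccd_flux))
    ss_tot = sum((y - mean_y) ** 2 for y in ccd_flux)
    return k, 1.0 - ss_res / ss_tot

def estimate_flux(pmt_counts, k):
    """Real-time CCD flux estimate from accumulated PMT counts."""
    return k * pmt_counts
```

A conversion factor would be fitted per spectral type from historical data, then applied live during an exposure.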
|
We consider the difference Schr{\"o}dinger equation
$\psi(z+h)+\psi(z-h)+v(z)\psi(z)=0$, where $z$ is a complex variable, $h>0$ is
a parameter, and $v$ is an analytic function. As $h\rightarrow 0$, analytic
solutions to this equation have a standard quasiclassical behavior near the
points where $v(z)=\pm 2$. We study analytic solutions near the points $z_0$
satisfying $v(z_0)=\pm 2$ and $v'(z_0)\neq 0$. For the finite difference
equation, these points are the natural analogues of the simple turning points
defined for the differential equation $-\psi''(z)+v(z)\psi(z)=0$. In an
$h$-independent neighborhood of such a point, we derive uniform asymptotic
expansions for analytic solutions to the difference equation.
|
In this paper, we introduce a horizontally-oriented, photophoretic `boat'
trap that is capable of capturing and self-loading large (radius ${\ge}1$
$\mu$m) solid gold particles in air for more than one hour. Once trapped,
particles are held stably, even as the trap is modified to scan axially or
expanded to a larger size to increase the capture cross-section. We
theoretically present and experimentally demonstrate each of these affordances.
We describe the utility of such a trap for investigating large, metallic, and
plasmonic particles for display applications.
|
Deep neural networks (DNNs) achieve promising performance in visual
recognition under the independent and identically distributed (IID) hypothesis.
However, the IID hypothesis is not universally guaranteed in numerous
real-world applications, especially in medical image analysis. Medical image
segmentation is typically formulated as a pixel-wise classification task in
which each pixel is classified into a category. However, this formulation
ignores hard-to-classify pixels, e.g., pixels near boundary areas, which
usually confuse DNNs. In this paper, we first show that hard-to-classify
pixels are associated with high uncertainty. Based on this, we propose a novel
framework that utilizes uncertainty estimation to highlight hard-to-classify
pixels for DNNs, thereby improving their generalization. We evaluate our
method on two popular benchmarks: prostate and fundus datasets. Experimental
results demonstrate that our method outperforms state-of-the-art methods.
|
We present a machine-readable movement writing for sleight-of-hand moves with
cards -- a "Labanotation of card magic." This scheme of movement writing
contains 440 categories of motion, and appears to taxonomize all card sleights
that have appeared in over 1500 publications. The movement writing is
axiomatized in $\mathcal{SROIQ}$(D) Description Logic, and collected formally
as an Ontology of Card Sleights, a computational ontology that extends the
Basic Formal Ontology and the Information Artifact Ontology. The Ontology of
Card Sleights is implemented in OWL DL, a Description Logic fragment of the Web
Ontology Language. While ontologies have historically been used to classify at
a less granular level, the algorithmic nature of card tricks allows us to
transcribe a performer's actions step by step. We conclude by discussing design
criteria we have used to ensure the ontology can be accessed and modified with
a simple click-and-drag interface. This may allow database searches and
performance transcriptions by users with card magic knowledge, but no ontology
background.
|
State-of-the-art audio event detection (AED) systems rely on supervised
learning using strongly labeled data. However, this dependence severely limits
scalability to large-scale datasets where fine resolution annotations are too
expensive to obtain. In this paper, we propose a small-footprint multiple
instance learning (MIL) framework for multi-class AED using weakly annotated
labels. The proposed MIL framework uses audio embeddings extracted from a
pre-trained convolutional neural network as input features. We show that by
using audio embeddings the MIL framework can be implemented using a simple DNN
with performance comparable to recurrent neural networks.
We evaluate our approach by training an audio tagging system using a subset
of AudioSet, which is a large collection of weakly labeled YouTube video
excerpts. Combined with a late-fusion approach, we improve the F1 score of a
baseline audio tagging system by 17%. We show that audio embeddings extracted
by the convolutional neural networks significantly boost the performance of all
MIL models. This framework reduces the model complexity of the AED system and
is suitable for applications where computational resources are limited.
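A minimal sketch of the MIL aggregation idea behind such a system (weak, clip-level labels supervising segment-level instances). The max-pooling rule shown here is one standard MIL choice, not necessarily the exact pooling used in the paper, and the function names are ours:

```python
import math

def bag_predict(instance_probs):
    """Max-pooling MIL aggregation: under the standard MIL assumption, a bag
    (e.g. an audio clip) is positive for a class iff at least one instance
    (e.g. a short segment) is."""
    return max(instance_probs)

def bag_loss(instance_probs, bag_label, eps=1e-12):
    """Binary cross-entropy between the pooled bag prediction and the weak
    clip-level label (0 or 1)."""
    p = min(max(bag_predict(instance_probs), eps), 1.0 - eps)
    return -(bag_label * math.log(p) + (1 - bag_label) * math.log(1.0 - p))
```

In the weakly supervised setting, only `bag_label` is available; the instance-level scores (here, outputs of a DNN on per-segment audio embeddings) are learned implicitly through the pooled loss.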
|
We discuss the isolation of prompt photons in hadronic collisions by means of
narrow isolation cones and the QCD computation of the corresponding cross
sections. We reconsider the occurrence of large perturbative terms with
logarithmic dependence on the cone size and their impact on the fragmentation
scale dependence. We cure the apparent perturbative violation of unitarity for
small cone sizes, which had been noticed earlier in next-to-leading-order (NLO)
calculations, by resumming the leading logarithmic dependence on the cone size.
We discuss possible implications regarding the implementation of some hollow
cone variants of the cone criterion, which simulate the experimental difficulty
to impose isolation inside the region filled by the electromagnetic shower that
develops in the calorimeter.
|
This paper considers a downlink (DL) system where non-orthogonal multiple
access (NOMA) beamforming and dynamic user pairing are jointly optimized to
maximize the minimum throughput of all DL users. The resulting problem belongs
to a class of mixed-integer non-convex optimization. To solve the problem, we
first relax the binary variables to continuous ones, and then devise an
iterative algorithm based on the inner approximation method which provides at
least a local optimal solution. Numerical results verify that the proposed
algorithm outperforms alternatives such as conventional beamforming and NOMA
with random-pairing and heuristic-search strategies.
|
Entanglement is a central feature of many-body quantum systems and plays a
unique role in quantum phase transitions.
In many cases, the entanglement spectrum, i.e., the spectrum of the reduced
density matrix of a subsystem in a bipartition, contains valuable information
beyond the entanglement entropy alone.
Here we investigate the entanglement spectrum of the long-range XXZ model. We
show that within the critical phase it exhibits a remarkable self-similarity.
The breakdown of self-similarity and the transition away from a Luttinger
liquid is consistent with renormalization group theory.
Combining the two, we are able to determine the quantum phase diagram of the
model and locate the corresponding phase transitions. Our results are confirmed
by numerically-exact calculations using tensor-network techniques.
Moreover, we show that the self-similar rescaling extends to the geometrical
entanglement as well as the Luttinger parameter in the critical phase.
Our results pave the way to further studies of entanglement properties in
long-range quantum models.
|
This work proposes a small pattern- and polarization-diversity multi-sector
annular antenna with electrical size and profile of ${ka=1.2}$ and
${0.018\lambda}$, respectively. The antenna is planar and comprises annular
sectors that are fed using different ports to enable digital beamforming
techniques, with efficiency and gain of up to 78% and 4.62 dBi, respectively.
The cavity mode analysis is used to describe the design concept and the antenna
diversity. The proposed method can produce different polarization states (e.g.
linearly and circularly polarized patterns), and pattern diversity
characteristics covering the elevation plane. Owing to its small electrical
size, low-profile and diversity properties, the solution shows good promise to
enable advanced radio applications like wireless physical layer security in
many emerging and size-constrained Internet of Things (IoT) devices.
|
An asymptotic expansion for the generalised quadratic Gauss sum
$$S_N(x,\theta)=\sum_{j=1}^{N} \exp (\pi ixj^2+2\pi ij\theta),$$ where $x$,
$\theta$ are real and $N$ is a positive integer, is obtained as $x\rightarrow
0$ and $N\rightarrow\infty$ such that $Nx$ is finite. The form of this
expansion holds for all values of $Nx+\theta$ and, in particular, in the
neighbourhood of integer values of $Nx+\theta$. A simple bound for the
remainder in the expansion is derived. Numerical results are presented to
demonstrate the accuracy of the expansion and the sharpness of the bound.
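For reference, the sum itself is straightforward to evaluate directly, which is how an asymptotic expansion of this kind can be checked numerically (a minimal sketch; the function name is ours):

```python
import cmath

def gauss_sum(N, x, theta):
    """Direct evaluation of the generalised quadratic Gauss sum
    S_N(x, theta) = sum_{j=1}^{N} exp(pi*i*x*j**2 + 2*pi*i*j*theta)."""
    return sum(cmath.exp(cmath.pi * 1j * (x * j * j + 2.0 * j * theta))
               for j in range(1, N + 1))
```

Sanity checks: at $x=\theta=0$ every term equals 1, so $S_N=N$; at $x=0$, $\theta=1/2$ the terms alternate in sign and cancel pairwise for even $N$.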
|
Two-spin asymmetries $A_{LL}^{\pi^0}$ are calculated for various types of
spin-dependent gluon distributions. It is concluded that the E581/704 data on
$A_{LL}^{\pi^0}$ do not necessarily rule out the large gluon polarization but
restrict severely the $x$ dependence of its distribution. Moreover,
$A_{LL}^{J/\psi}$ are calculated for the forthcoming test of spin-dependent
gluon distributions.
|
Evolving secret sharing schemes do not require prior knowledge of the number
of parties $n$, which may be countably infinite. It is known that evolving
$2$-threshold secret sharing schemes and prefix codings of integers have a
one-to-one correspondence. However, it is not known which prefix coding of
integers yields a better scheme. In this paper, we propose a new
metric $K_{\Sigma}$ for evolving $2$-threshold secret sharing schemes $\Sigma$.
We prove that the metric $K_{\Sigma}\geq 1.5$ and construct a new prefix coding
of integers, termed $\lambda$ code, to achieve the metric
$K_{\Lambda}=1.59375$. Thus, it is proved that the range of the metric
$K_{\Sigma}$ for the optimal $(2,\infty)$-threshold secret sharing scheme is
$1.5\leq K_{\Sigma}\leq1.59375$. In addition, the reachable lower bound of the
sum of share sizes for $(2,n)$-threshold secret sharing schemes is proved.
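For concreteness, a classic prefix coding of integers is the Elias gamma code, sketched below. This is an illustrative standard code, not the $\lambda$ code proposed in the paper:

```python
def elias_gamma_encode(n):
    """Elias gamma code: a prefix-free code for positive integers.
    Emit len(bin(n)) - 1 zeros, then n in binary."""
    assert n >= 1
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits):
    """Decode one Elias-gamma codeword from the front of a bit string;
    return (value, remaining bits)."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2), bits[2 * zeros + 1:]
```

Because no codeword is a prefix of another, a concatenated stream of codewords decodes unambiguously; this prefix-free property is what the correspondence with evolving $2$-threshold schemes relies on.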
|
The rapid progress of machine learning interatomic potentials over the past
couple of years produced a number of new architectures. Particularly notable
among these are the Atomic Cluster Expansion (ACE), which unified many of the
earlier ideas around atom density-based descriptors, and Neural Equivariant
Interatomic Potentials (NequIP), a message passing neural network with
equivariant features that showed state-of-the-art accuracy. In this work, we
construct a mathematical framework that unifies these models: ACE is
generalised so that it can be recast as one layer of a multi-layer
architecture. From another point of view, the linearised version of NequIP is
understood as a particular sparsification of a much larger polynomial model.
Our framework also provides a practical tool for systematically probing
different choices in the unified design space. We demonstrate this by an
ablation study of NequIP via a set of experiments looking at in- and
out-of-domain accuracy and smooth extrapolation very far from the training
data, and shed some light on which design choices are critical for achieving
high accuracy. Finally, we present BOTNet (Body-Ordered-Tensor-Network), a
much-simplified version of NequIP, which has an interpretable architecture and
maintains accuracy on benchmark datasets.
|
In this work, compositions of CeFe11X and CeFe10X2 with all 3d, 4d, and 5d
transition metal substitutions are considered. Since many previous studies have
focused on the CeFe11Ti compound, this particular compound became the starting
point of our considerations and we gave it special attention. We first
determined the optimal symmetry of the simplest CeFe11Ti structure model. We
then observed that the calculated magnetocrystalline anisotropy energy (MAE)
correlates with the magnetic moment, which in turn strongly depends on the
choice of the exchange-correlation potential. MAE, magnetic moments, and
magnetic hardness were determined for all compositions considered. Moreover,
the calculated dependence of the MAE on the spin magnetic moment allowed us to
predict the upper limits of the MAE. We also showed that this upper limit does
not depend on the choice of the exchange-correlation potential form. The economically
justifiable compositions with the highest magnetic hardness values are CeFe11W,
CeFe10W2, CeFe11Mn, CeFe10Mn2, CeFe11Mo, CeFe10Mo2, and CeFe10Nb2. However,
calculations suggest that, like CeFe12, these compounds are not chemically
stable and could require additional treatments to stabilize the composition.
Further alloying of the selected compositions with elements embedded in
interstitial positions confirms the positive effect of such dopants on hard
magnetic properties. Subsequent calculations performed for comparison for
selected isostructural La-based compounds lead to similar MAE results as for
Ce-based compounds, suggesting a secondary effect of 4f electrons. Calculations
were performed using the full-potential local-orbital electronic structure code
FPLO18, whose unique fully relativistic implementation of the fixed spin moment
method allowed us to calculate the MAE dependence of the magnetic moment.
|
The properties of a class of quasi-realistic three-family perturbative
heterotic string vacua are addressed. String models in this class generically
contain an anomalous U(1), such that the nonzero Fayet-Iliopoulos term triggers
certain fields to acquire string scale VEV's along flat directions. This vacuum
shift reduces the rank of the gauge group and generates effective mass terms
and effective trilinear interactions. Techniques are discussed which yield a
systematic classification of the flat directions of a given string model which
can be proven to be F-flat to all orders. The effective superpotential along
such flat directions can then be calculated to all orders in the string (genus)
expansion.
|
It is always a challenging task to service sudden events in non-convex and
uncertain environments, and multi-agent coverage control provides a powerful
theoretical framework to investigate the deployment problem of mobile robotic
networks for minimizing the cost of handling random events. Inspired by the
divide-and-conquer methodology, this paper proposes a novel coverage
formulation to control multi-agent systems in the non-convex region while
equalizing the workload among subregions. Thereby, a distributed coverage
controller is designed to drive each agent towards the desired configurations
that minimize the service cost by integrating with the rotational partition
strategy. In addition, a circular search algorithm is proposed to identify
optimal solutions to the problem of lowering the service cost. Moreover, it is
proved that this search algorithm can approximate the optimal configuration of
multi-agent systems with arbitrarily small tolerance. Finally, numerical
simulations are implemented to substantiate the efficacy of the proposed
coverage control approach.
|
In cosmology, it has been a long-standing problem to establish a
\emph{parameter insensitive} evolution from an anisotropic phase to an
isotropic phase. On the other hand, it is of great importance to construct a
theory having extra dimensions as its intrinsic ingredients. We show that these
two problems are closely related and can naturally be solved simultaneously in
double field theory cosmology. Our derivations are based on general arguments
without any fine-tuning parameters. In addition, we find that when we begin
with the FRW metric, the full spacetime metric of DFT totally agrees with
\emph{Kaluza-Klein theory}. Visible and invisible dimensions are exchanged
between the pre- and post-big-bang phases. Our results indicate that double field
theory has profound physical consequences and the continuous
$O\left(D,D\right)$ is a very fundamental symmetry. This observation reinforces
the viewpoint that symmetries dictate physics.
|
We present a control and measurement setup for superconducting qubits based
on the Xilinx 16-channel radio-frequency system-on-chip (RFSoC) device. The
proposed setup consists of four parts: multiple RFSoC boards, a setup to
synchronise every digital to analog converter (DAC), and analog to digital
converter (ADC) channel across multiple boards, a low-noise direct current (DC)
supply for tuning the qubit frequency and cloud access for remotely performing
experiments. We also design the setup to be free of physical mixers. The RFSoC
boards directly generate microwave pulses using sixteen DAC channels up to the
third Nyquist zone which are directly sampled by its eight ADC channels between
the fifth and the ninth zones.
|
Ultraviolet and optical spectra of the hydrogen-dominated atmosphere white
dwarf star G238-44 obtained with FUSE, Keck/HIRES, HST/COS, and HST/STIS reveal
ten elements heavier than helium: C, N, O, Mg, Al, Si, P, S, Ca, and Fe.
G238-44 is only the third white dwarf with nitrogen detected in its atmosphere
from polluting planetary system material. Keck/HIRES data taken on eleven
nights over 24 years show no evidence for variation in the equivalent width of
measured absorption lines, suggesting stable and continuous accretion from a
circumstellar reservoir. From measured abundances and limits on other elements
we find an anomalous abundance pattern and evidence for the presence of
metallic iron. If the pollution is from a single parent body, then it would
have no known counterpart within the solar system. If we allow for two distinct
parent bodies, then we can reproduce the observed abundances with a mix of
iron-rich Mercury-like material and an analog of an icy Kuiper Belt object with
a respective mass ratio of 1.7:1. Such compositionally disparate objects would
provide chemical evidence for both rocky and icy bodies in an exoplanetary
system and would be indicative of a planetary system so strongly perturbed that
G238-44 is able to capture both asteroid- and Kuiper Belt-analog bodies
near-simultaneously within its $<$100 Myr cooling age.
|
We propose a method for inferring \emph{parameterized regular types} for
logic programs as solutions for systems of constraints over sets of finite
ground Herbrand terms (set constraint systems). Such parameterized regular
types generalize \emph{parametric} regular types by extending the scope of the
parameters in the type definitions so that such parameters can relate the types
of different predicates. We propose a number of enhancements to the procedure
for solving the constraint systems that improve the precision of the type
descriptions inferred. The resulting algorithm, together with a procedure to
establish a set constraint system from a logic program, yields a program
analysis that infers tighter safe approximations of the success types of the
program than previous comparable work, offering a new and useful efficiency vs.
precision trade-off. This is supported by experimental results, which show the
feasibility of our analysis.
|
In this study, we propose a novel adversarial reprogramming (AR) approach for
low-resource spoken command recognition (SCR), and build an AR-SCR system. The
AR procedure aims to modify the acoustic signals (from the target domain) to
repurpose a pretrained SCR model (from the source domain). To solve the label
mismatches between source and target domains, and further improve the stability
of AR, we propose a novel similarity-based label mapping technique to align
classes. In addition, the transfer learning (TL) technique is combined with the
original AR process to improve the model adaptation capability. We evaluate the
proposed AR-SCR system on three low-resource SCR datasets, including Arabic,
Lithuanian, and dysarthric Mandarin speech. Experimental results show that,
with a pretrained acoustic model trained on a large-scale English dataset, the
proposed AR-SCR system outperforms the current state-of-the-art results on
Arabic and
Lithuanian speech commands datasets, with only a limited amount of training
data.
|
Axisymmetric disks of eccentric orbits in near-Keplerian potentials are
unstable to an out-of-plane buckling. Recently, Zderic et al. (2020) showed
that an idealized disk saturates to a lopsided mode. Here we show that this
apsidal clustering also occurs in a primordial scattered disk in the outer
solar system which includes the orbit-averaged gravitational influence of the
giant planets. We explain the dynamics using Lynden-Bell (1979)'s mechanism for
bar formation in galaxies. We also show surface density and line of sight
velocity plots at different times during the instability, highlighting the
formation of concentric circles and spiral arms in velocity space.
|
Squeezed states in harmonic systems can be generated through a variety of
techniques, including varying the oscillator frequency or using nonlinear
two-photon Raman interaction. We focus on these two techniques to drive an
initial thermal state into a final squeezed thermal state with controlled
squeezing parameters -- amplitude and phase -- in arbitrary time. The protocols
are designed through reverse engineering for both unitary and open dynamics.
Control of the dissipation is achieved using stochastic processes, readily
implementable via, e.g., continuous quantum measurements. Importantly, this
allows controlling the state entropy and can be used for fast thermalization.
The developed protocols are thus suited to generate squeezed thermal states at
controlled temperature in arbitrary time.
|
Convolutions have long been regarded as fundamental to applied mathematics,
physics and engineering. Their mathematical elegance allows for common tasks
such as numerical differentiation to be computed efficiently on large data
sets. Efficient computation of convolutions is critical to artificial
intelligence in real-time applications, like machine vision, where convolutions
must be continuously and efficiently computed on tens to hundreds of kilobytes
per second. In this paper, we explore how convolutions are used in fundamental
machine vision applications. We present an accelerated n-dimensional
convolution package in the high performance computing language, Julia, and
demonstrate its efficacy in solving the time to contact problem for machine
vision. Results are measured against synthetically generated videos and
quantitatively assessed according to their mean squared error from the ground
truth. We achieve over an order of magnitude decrease in compute time and
allocated memory for comparable machine vision applications. All code is
packaged and integrated into the official Julia Package Manager to be used in
various other scenarios.
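As an illustration of the kind of task such a package accelerates, numerical differentiation can be written as a 1-D convolution with a central-difference kernel. This is a language-agnostic sketch in Python, not the Julia package's API, and the function names are ours:

```python
def convolve_valid(signal, kernel):
    """1-D 'valid' convolution: slide the flipped kernel across the signal."""
    k = kernel[::-1]
    m = len(k)
    return [sum(signal[i + j] * k[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

def derivative(samples, dx):
    """Numerical differentiation as convolution with the central-difference
    kernel [1, 0, -1] / (2*dx); returns estimates at the interior points."""
    kernel = [1.0 / (2.0 * dx), 0.0, -1.0 / (2.0 * dx)]
    return convolve_valid(samples, kernel)
```

For example, differentiating samples of $f(x)=x^2$ at $x=0,1,2,3$ recovers the exact derivative $2x$ at the interior points.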
|
We prove long time Anderson localization for the nonlinear random Schroedinger
equation in $\ell^2$ by making a Birkhoff normal form type transform to create
an energy barrier where there is essentially no mode propagation. One of the new
features is that this transform is in a small neighborhood enabling us to treat
"rough" data, where there are no moment conditions. The formulation of the
present result is inspired by the RAGE theorem.
|
Integrating epitaxial and ferromagnetic Europium Oxide (EuO) directly on
silicon is a perfect route to enrich silicon nanotechnology with spin filter
functionality.
To date, the inherent chemical reactivity between EuO and Si has prevented a
heteroepitaxial integration without significant contaminations of the interface
with Eu silicides and Si oxides.
We present a solution to this long-standing problem by applying two
complementary passivation techniques for the reactive EuO/Si interface:
($i$) an $in\:situ$ hydrogen-Si $(001)$ passivation and ($ii$) the
application of oxygen-protective Eu monolayers --- without using any additional
buffer layers.
By careful chemical depth profiling of the oxide-semiconductor interface via
hard x-ray photoemission spectroscopy, we show how to systematically minimize
both Eu silicide and Si oxide formation to the sub-monolayer regime --- and how
to ultimately interface-engineer chemically clean, heteroepitaxial and
ferromagnetic EuO/Si $(001)$ in order to create a strong spin filter contact to
silicon.
|
The Lakshmanan equivalent counterparts of some Myrzakulov equations are
found.
|
In this paper we present an exact general analytic expression
$Z(sSFR)=y/\Lambda(sSFR)+I(sSFR)$ linking the gas metallicity Z to the specific
star formation rate (sSFR), that validates and extends the approximate relation
put forward by Lilly et al. (2013, L13), where $y$ is the yield per stellar
generation, $\Lambda(sSFR)$ is the instantaneous ratio between inflow and star
formation rate expressed as a function of the sSFR, and $I$ is the integral of
the past enrichment history. We then demonstrate that the
instantaneous metallicity of a self-regulating system, such that its sSFR
decreases with decreasing redshift, can be well approximated by the first term
on the right-hand side in the above formula, which provides an upper bound to
the metallicity. The metallicity is well approximated also by the L13 ideal
regulator case, which provides a lower bound to the actual metallicity. We
compare these approximate analytic formulae to numerical results and infer a
discrepancy <0.1 dex in a range of metallicities and almost three orders of
magnitude in the sSFR. We explore the consequences of the L13 model on the
mass-weighted metallicity in the stellar component of the galaxies. We find
that the stellar average metallicity lags 0.1-0.2 dex behind the gas-phase
metallicity relation, in agreement with the data. (abridged)
|
The relation between the globular cluster luminosity function (GCLF,
dN/dlogL) and globular cluster mass function (GCMF, dN/dlogM) is considered.
Due to low-mass star depletion, dissolving GCs have mass-to-light (M/L) ratios
that are lower than expected from their metallicities. This has been shown to
lead to an M/L ratio that increases with GC mass and luminosity. We model the
GCLF and GCMF and show that the power law slopes inherently differ (1.0 versus
0.7, respectively) when accounting for the variability of M/L. The observed
GCLF is found to be consistent with a Schechter-type initial cluster mass
function and a mass-dependent mass-loss rate.
|
Due to their sparsity, 60GHz channels are characterized by a few dominant
paths. Knowing the angular information of their dominant paths, we can develop
various applications, such as the prediction of link performance and the
tracking of an 802.11ad device. Although they are equipped with phased arrays,
the angular inference for 802.11ad devices is still challenging due to their
limited number of RF chains and limited phase control capabilities. Considering
the beam sweeping operation and the high communication bandwidth of 802.11ad
devices, we propose variation-based angle estimation (VAE), called VAE-CIR, by
utilizing beam-specific channel impulse responses (CIRs) measured under
different beams and the directional gains of the corresponding beams to infer
the angular information of dominant paths. Unlike state-of-the-art approaches,
VAE-CIR exploits the variations between different beam-specific CIRs, instead
of their absolute values, for angular inference. To evaluate the performance of VAE-CIR,
we generate the beam-specific CIRs by simulating the beam sweeping of 802.11ad
devices with the beam patterns measured on off-the-shelf 802.11ad devices. The
60GHz channel is generated via a ray-tracing simulator and the CIRs are
extracted via channel estimation based on Golay sequences. Through experiments
in various scenarios, we demonstrate the effectiveness of VAE-CIR and its
superiority to existing angular inference schemes for 802.11ad devices.
|
We introduce a hull operator on Poisson point processes, the easiest example
being the convex hull of the support of a point process in Euclidean space.
Assuming that the intensity measure of the process is known on the set
generated by the hull operator, we discuss estimation of an expected linear
statistic built on the Poisson process. In special cases, our general scheme
yields an estimator of the volume of a convex body or an estimator of an
integral of a H\"older function. We show that the estimation error is given by
the Kabanov--Skorohod integral with respect to the underlying Poisson process.
A crucial ingredient of our approach is a spatial strong Markov property of the
underlying Poisson process with respect to the hull. We derive the rate of
normal convergence for the estimation error, and illustrate it on an
application to estimators of integrals of a H\"older function. We also discuss
estimation of higher order symmetric statistics.
|
We study the gravitational effects of two celestial bodies on a typical
object of the Kuiper Belt. The first body is a Kuiper Belt object itself with
fairly large eccentricity and perihelion but with a large mass, about 16 times
the mass of the Earth. The second body is a star whose mass is 30%-50% of the
mass of the Sun that passes by our solar system at a speed between 25 km/sec
and 100 km/sec and at a distance of closest approach between 0.05 and 0.5
light years. As a measure of the perturbations caused by these bodies on the
light Kuiper Belt object, we analyse its eccentricity. We find that the
effects due to the passage of the wandering star are permanent, in the sense
that the eccentricity of the Kuiper Belt object remains anomalous long after
the passage of the star. The same is true of the heavy Kuiper Belt object: it
can greatly perturb the orbit of the lighter object, leading to a permanent,
anomalous eccentricity.
|
A bipartite entanglement between two nearest-neighbor Heisenberg spins of a
spin-1/2 Ising-Heisenberg model on a triangulated Husimi lattice is quantified
using a concurrence. It is shown that the concurrence equals zero in a
classical ferromagnetic and a quantum disordered phase, while it becomes
sizable though unsaturated in a quantum ferromagnetic phase. A
thermally-assisted reentrance of the concurrence is found above a classical
ferromagnetic phase, whereas a quantum ferromagnetic phase displays a striking
cusp of the concurrence at a critical temperature.
|
This article introduces a novel family of decentralised caching policies,
applicable to wireless networks with finite storage at the edge-nodes
(stations). These policies are based on the Least-Recently-Used replacement
principle, and are, here, referred to as spatial multi-LRU. Based on these,
cache inventories are updated in a way that provides content diversity to users
who are covered by, and thus have access to, more than one station. Two
variations are proposed, namely the multi-LRU-One and -All, which differ in the
number of replicas inserted in the involved caches. By introducing spatial
approximations, we propose a Che-like method to predict the hit probability,
which gives very accurate results under the Independent Reference Model (IRM).
It is shown that the performance of multi-LRU increases the more the
multi-coverage areas increase, and it approaches the performance of other
proposed centralised policies, when multi-coverage is sufficient. For IRM
traffic multi-LRU-One outperforms multi-LRU-All, whereas when the traffic
exhibits temporal locality the -All variation can perform better.
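The difference between the two variants is in the insertion rule on a miss: multi-LRU-One inserts one replica into a single covering cache, multi-LRU-All into every covering cache. A toy sketch (our own illustration, not the paper's simulator; the choice of "first station" for the One variant is an assumption):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def hit(self, item):
        if item in self.store:
            self.store.move_to_end(item)  # refresh recency on a hit
            return True
        return False

    def insert(self, item):
        self.store[item] = True
        self.store.move_to_end(item)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def request(item, covering_caches, variant="One"):
    # spatial multi-LRU: check every station covering the user; on a miss,
    # insert into one cache (multi-LRU-One) or all of them (multi-LRU-All)
    if any(c.hit(item) for c in covering_caches):
        return True
    targets = covering_caches[:1] if variant == "One" else covering_caches
    for c in targets:
        c.insert(item)
    return False
```

Under multi-coverage, the One variant preserves content diversity across neighbouring caches, which is why it wins under IRM traffic, while All duplicates content and can win when traffic has temporal locality.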
|
The Milky Way has undergone significant transformations in its early history,
characterised by violent mergers and the accretion of satellite galaxies. Among
these events, the infall of the satellite galaxy Gaia-Enceladus/Sausage is
recognised as the last major merger event, fundamentally altering the evolution
of the Milky Way and shaping its chemo-dynamical structure. However, recent
observational evidence suggests that the Milky Way has undergone notable
episodes of star formation in the past 4 Gyr, thought to have been triggered by
perturbations from the Sagittarius dwarf galaxy (Sgr). Here we report, for the
first time, chemical signatures of the Sgr accretion event over the past 4 Gyr,
using the [Fe/H] and [O/Fe] ratios in the thin disc. We find that the
previously discovered V-shape structure of the age-[Fe/H] relation varies
across different Galactic locations and has rich substructure.
Interestingly, we discover a discontinuous structure at z$_{\rm max}$ $<$ 0.3
kpc, interrupted by a recent burst of star formation from 4 Gyr to 2 Gyr ago.
In this episode, we find a significant rise in oxygen abundance leading to a
distinct [O/Fe] gradient, contributing to the formation of young O-rich stars.
Combined with the simulated star formation history and chemical abundance of
Sgr, we suggest that Sgr is an important actor in the discontinuous
chemical evolution of the Milky Way disc.
|
We derive a simple formula for the real-space chirality of twisted bilayer
graphene that can be related to the cross-product of its sheet currents. This
quantity shows well-defined plateaus for the first remote band as function of
the gate voltage which are approximately quantized for commensurate twist
angles. The zeroth plateau corresponds to the first magic angle where a sign
change occurs due to an emergent $C_6$-symmetry. Our observation offers a new
definition of the magic angle based on a macroscopic observable which is
accessible in typical transport experiments.
|
We consider several related problems of estimating the 'sparsity' or number
of nonzero elements $d$ in a length $n$ vector $\mathbf{x}$ by observing only
$\mathbf{b} = M \odot \mathbf{x}$, where $M$ is a predesigned test matrix
independent of $\mathbf{x}$, and the operation $\odot$ varies between problems.
We aim to provide a $\Delta$-approximation of sparsity for some constant
$\Delta$ with a minimal number of measurements (rows of $M$). This framework
generalizes multiple problems, such as estimation of sparsity in group testing
and compressed sensing. We use techniques from coding theory as well as
probabilistic methods to show that $O(D \log D \log n)$ rows are sufficient
when the operation $\odot$ is logical OR (i.e., group testing), and nearly this
many are necessary, where $D$ is a known upper bound on $d$. When instead the
operation $\odot$ is multiplication over $\mathbb{R}$ or a finite field
$\mathbb{F}_q$, we show that respectively $\Theta(D)$ and $\Theta(D \log_q
\frac{n}{D})$ measurements are necessary and sufficient.
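For the group-testing case the measurement model itself is simple to state in code: each row of the binary test matrix $M$ yields the logical OR of the coordinates of $\mathbf{x}$ it selects. The sketch below illustrates only this model, not the coding-theoretic constructions used in the bounds:

```python
import numpy as np

def or_measurements(M, x):
    # b_i = OR over j of (M_ij AND x_j): a pooled test is positive
    # iff it includes at least one nonzero coordinate of x
    return (np.asarray(M) @ np.asarray(x) > 0).astype(int)
```

Estimating the sparsity $d$ then amounts to inferring the support size from the pattern of positive tests across suitably designed rows.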
|
Early diagnosis of brain tumors is essential for improving patient survival
and treatment outcomes. Evaluating magnetic resonance imaging (MRI) images
manually is difficult, so digital methods for tumor diagnosis with better
accuracy are needed. However, assessing tumor shape, volume, boundaries,
size, detection, segmentation, and classification remains very challenging.
In this work, we propose a hybrid ensemble method using Random Forest (RF),
K-Nearest Neighbour (KNN), and Decision Tree (DT) (KNN-RF-DT), based on the
Majority Voting method. It aims to calculate the area of the tumor region and
to classify brain tumors as benign or malignant. First, segmentation is done
using Otsu's Threshold method. Feature extraction is done using Stationary
Wavelet Transform (SWT), Principal Component Analysis (PCA), and Gray Level
Co-occurrence Matrix (GLCM),
which gives thirteen features for classification. The classification is done by
hybrid ensemble classifier (KNN-RF-DT) based on the Majority Voting method.
Overall, we aim to improve performance using traditional classifiers
instead of deep learning. Traditional classifiers have an advantage
over deep learning algorithms because they require small datasets for training
and have low computational time complexity, low cost to the users, and can be
easily adopted by less skilled people. Our proposed method is tested
on a dataset of 2556 images, split 85:15 for training and testing
respectively, and achieves a good accuracy of 97.305%.
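The majority-voting step that combines the three base classifiers can be sketched in pure Python (an illustration of the voting rule only; the training of the KNN, RF, and DT models follows the text):

```python
from collections import Counter

def majority_vote(per_classifier_labels):
    # per_classifier_labels: one prediction list per base model (KNN, RF, DT);
    # the ensemble label for each sample is the most common vote
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_labels)]
```

With an odd number of base classifiers, as here, every sample receives a strict-majority label and no tie-breaking is needed.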
|
The recent introduction of machine learning techniques, especially
normalizing flows, for the sampling of lattice gauge theories has raised
hopes of improving the sampling efficiency of the traditional HMC algorithm.
Naive use of normalizing flows, however, has been shown to scale badly with
the volume. In this talk we propose using local normalizing flows at a scale
given by the correlation length. Even though these transformations naively
have a small acceptance, combining them with the HMC algorithm leads to
algorithms with high acceptance and reduced autocorrelation times compared
with HMC.
Several scaling tests are performed in the $\phi^{4}$ theory in 2D.
|
It is well known that the labeling problems of graphs arise in many (but not
limited to) networking and telecommunication contexts. In this paper we
introduce the anti-$k$-labeling problem of graphs, in which we seek to minimize the
similarity (or distance) of neighboring nodes. For example, in the fundamental
frequency assignment problem in wireless networks where each node is assigned a
frequency, it is usually desirable to limit or minimize the frequency gap
between neighboring nodes so as to limit interference.
Let $k\geq1$ be an integer and let $\psi$ be a labeling function
(anti-$k$-labeling) from $V(G)$ to $\{1,2,\cdots,k\}$ for a graph $G$. A {\em
no-hole anti-$k$-labeling} is an anti-$k$-labeling using all labels between 1
and $k$. We define $w_{\psi}(e)=|\psi(u)-\psi(v)|$ for an edge $e=uv$ and
$w_{\psi}(G)=\min\{w_{\psi}(e):e\in E(G)\}$ for an anti-$k$-labeling $\psi$ of
the graph $G$. {\em The anti-$k$-labeling number} of a graph $G$, $mc_k(G)$ is
$\max\{w_{\psi}(G): \psi\}$. In this paper, we first show that $mc_k(G)=\lfloor
\frac{k-1}{\chi-1}\rfloor$, and that the problem of determining $mc_k(G)$
is NP-hard. We mainly obtain lower bounds on the no-hole anti-$n$-labeling
number for trees, grids and $n$-cubes.
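The quantities $w_{\psi}(G)$ and $mc_k(G)$ defined above are easy to evaluate for small graphs; a brute-force sketch (our own illustration, practical only for tiny instances) enumerates all labelings:

```python
from itertools import product

def labeling_width(edges, psi):
    # w_psi(G) = min over edges uv of |psi(u) - psi(v)|
    return min(abs(psi[u] - psi[v]) for u, v in edges)

def mc_k(vertices, edges, k):
    # anti-k-labeling number: maximize the minimum edge gap
    # over all labelings V(G) -> {1, ..., k}
    return max(labeling_width(edges, dict(zip(vertices, labels)))
               for labels in product(range(1, k + 1), repeat=len(vertices)))
```

For the triangle $K_3$ (with $\chi = 3$) and $k = 5$, the formula $\lfloor\frac{k-1}{\chi-1}\rfloor$ predicts 2, achieved e.g. by the labeling $(1, 3, 5)$.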
|
We address the problem of Fock space representations of (free) multiplet
component fields encountered in supersymmetric quantum field theory, insisting
on positivity and causality. We look in detail at the scalar and Majorana
components of the chiral supersymmetric multiplet. Several Fock space
representations are introduced. The last section contains a short application
to the supersymmetric Epstein-Glaser method. The present paper is written in
the vein of axiomatic quantum field theory, with applications to the causal
approach to supersymmetry.
|
In the usual treatment of electronic structure, all matter has cusps in the
electronic density at nuclei. Cusps can produce non-analytic behavior in time,
even in response to perturbations that are time-analytic. We analyze these
non-analyticities in a simple case from many perspectives. We describe a
method, the s-expansion, that can be used in several such cases, and illustrate
it with a variety of examples. These include both the sudden appearance of
electric fields and disappearance of nuclei, in both one and three dimensions.
When successful, the s-expansion yields the dominant short-time behavior, no
matter how strong the external electric field, but agrees with linear response
theory in the weak limit. We discuss the relevance of these results to
time-dependent density functional theory.
|
We present a model for the equilibrium of solid planetary cores embedded in a
gaseous nebula. From this model we are able to extract an idealized roadmap of
all hydrostatic states of the isothermal protoplanets. The complete
classification of the isothermal protoplanetary equilibria should improve the
understanding of the general problem of giant planet formation, within the
framework of the nucleated instability hypothesis. We approximate the
protoplanet as a spherically symmetric, isothermal, self-gravitating classical
ideal gas envelope in equilibrium, around a rigid body of given mass and
density, with the gaseous envelope required to fill the Hill-sphere. Starting
only with a core of given mass and an envelope gas density at the core surface,
the equilibria are calculated without prescribing the total protoplanetary mass
or nebula density. The static critical core masses of the protoplanets for the
typical orbits of 1, 5.2, and 30 AU, around a parent star of 1 solar mass are
found to be 0.1524, 0.0948, and 0.0335 Earth masses, respectively, for standard
nebula conditions (Kusaka et al. 1970). These values are much lower than
currently admitted ones primarily because our model is isothermal and the
envelope is in thermal equilibrium with the nebula. For a given core, multiple
solutions (at least two) are found to fit into the same nebula. We extend the
concept of the static critical core mass to the local and global critical core
mass. We conclude that the 'global static critical core mass' marks the meeting
point of all four qualitatively different envelope regions.
|
A nano-system in which electrons interact and in contact with Fermi leads
gives rise to an effective one-body scattering which depends on the presence of
other scatterers in the attached leads. This non local effect is a pure
many-body effect that one neglects when one takes non interacting models for
describing quantum transport. This enhances the non-local character of the
quantum conductance by exchange interactions of a type similar to the
RKKY-interaction between local magnetic moments. A theoretical study of this
effect is given assuming the Hartree-Fock approximation for spinless fermions
in an infinite chain embedding two scatterers separated by a segment of length
$L_c$. The fermions interact only inside the two scatterers. The dependence of
one scatterer on the other exhibits oscillations which decay as $1/L_c$ and
which are suppressed when $L_c$ exceeds the thermal length $L_T$. The
Hartree-Fock results are compared with exact numerical results obtained with
the embedding method and the DMRG algorithm.
|
Large scale discrete uniform and homogeneous $P$-values often arise in
applications with multiple testing. For example, this occurs in genome wide
association studies whenever a nonparametric one-sample (or two-sample) test is
applied throughout the gene loci. In this paper we consider $q$-values for such
scenarios based on several existing estimators for the proportion of true null
hypotheses, $\pi_0$, which take the discreteness of the $P$-values into
account. The theoretical guarantees of the several approaches with respect to
the estimation of $\pi_0$ and the false discovery rate control are reviewed.
The performance of the discrete $q$-values is investigated through intensive
Monte Carlo simulations, including location, scale and omnibus nonparametric
tests, and possibly dependent $P$-values. The methods are also applied to
genetic and financial data for illustration. Since the particular
estimator of $\pi_0$ used to compute the $q$-values may influence the power,
relative advantages and disadvantages of the reviewed procedures are discussed.
Practical recommendations are given.
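As a point of reference, the classical (continuous-case) Storey estimator of $\pi_0$, which the discrete-aware estimators reviewed here refine, is a one-liner; this sketch does not include the adjustment for the attainable $P$-value support that the discrete estimators add:

```python
def storey_pi0(pvalues, lam=0.5):
    # classical Storey estimate: fraction of P-values above the tuning
    # point lambda, rescaled by the null tail mass (1 - lambda), capped at 1
    m = len(pvalues)
    return min(1.0, sum(p > lam for p in pvalues) / ((1.0 - lam) * m))
```

With discrete, homogeneous $P$-values the attainable support may put little or no mass above $\lambda$, which is exactly why discreteness-aware estimators of $\pi_0$ are needed in this setting.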
|
Mandarin Chinese is characterized by being a tonal language; the pitch (or
$F_0$) of its utterances carries considerable linguistic information. However,
speech samples from different individuals are subject to changes in amplitude
and phase which must be accounted for in any analysis which attempts to provide
a linguistically meaningful description of the language. A joint model for
amplitude, phase and duration is presented which combines elements from
Functional Data Analysis, Compositional Data Analysis and Linear Mixed Effects
Models. By decomposing functions via a functional principal component analysis,
and connecting registration functions to compositional data analysis, a joint
multivariate mixed effect model can be formulated which gives insights into the
relationship between the different modes of variation as well as their
dependence on linguistic and non-linguistic covariates. The model is applied to
the COSPRO-1 data set, a comprehensive database of spoken Taiwanese Mandarin,
containing approximately 50 thousand phonetically diverse sample $F_0$ contours
(syllables), and reveals that phonetic information is jointly carried by both
amplitude and phase variation.
|
Molecular line lists are important for modelling absorption and emission
processes in atmospheres of different astronomical objects, such as cool stars
and exoplanets. In order to be applicable for high temperatures, line lists for
molecules like methane must contain billions of transitions, which makes their
direct (line-by-line) application in radiative transfer calculations
impracticable. Here we suggest a new, hybrid line list format to mitigate this
problem, based on the idea of a temperature-dependent absorption continuum.
The line list is partitioned into a large set of relatively weak lines and a
small set of important, stronger lines. The weaker lines are then used either
to construct a temperature-dependent (but pressure-independent) set of
intensity cross sections or are blended into a greatly reduced set of
super-lines. The strong lines are kept in the form of temperature independent
Einstein A coefficients.
A line list for methane is constructed as a combination of 17 million strong
absorption lines relative to the reference absorption spectra and a background
methane continuum in two temperature-dependent forms, of cross sections and
super-lines. This approach eases the use of large high temperature line lists
significantly, as the computationally expensive calculation of
pressure-dependent profiles needs to be performed only for a relatively small number of
lines. Both the line list and cross sections were generated using a new 34
billion methane line list (34to10), which extends the 10to10 line list to
higher temperatures (up to 2000 K). The new hybrid scheme can be applied to any
large line lists containing billions of transitions. We recommend using
super-lines generated on a high resolution grid based on resolving power (R =
1,000,000) to model the molecular continuum as a more flexible alternative to
the temperature dependent cross sections.
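Blending weak lines into super-lines amounts to summing line intensities into fixed frequency bins; a minimal numpy sketch (the grid here is illustrative, not the paper's R = 1,000,000 resolving-power grid):

```python
import numpy as np

def blend_super_lines(line_pos, line_intensity, grid_edges):
    # sum the intensities of all weak lines falling in each grid bin,
    # producing one "super-line" per bin of the grid
    idx = np.digitize(line_pos, grid_edges)
    out = np.zeros(len(grid_edges) + 1)
    np.add.at(out, idx, line_intensity)  # unbuffered accumulation per bin
    return out[1:-1]  # drop the two out-of-grid overflow bins
```

Because the super-lines are temperature dependent but pressure independent, only the retained strong lines need expensive pressure-broadened profiles, which is the source of the computational saving described above.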
|
We propose a general method for deriving one-dimensional models for nonlinear
structures. It captures the contribution to the strain energy arising not only
from the macroscopic elastic strain as in classical structural models, but also
from the strain gradient. As an illustration, we derive one-dimensional
strain-gradient models for a hyper-elastic cylinder that necks, an axisymmetric
membrane that produces bulges, and a two-dimensional block of elastic material
subject to bending and stretching. The method offers three key advantages.
First, it is nonlinear and accounts for large deformations of the
cross-section, which makes it well suited for the analysis of localization in
slender structures. Second, it does not require any a priori assumption on the
form of the elastic solution in the cross-section, i.e., it is Ansatz-free.
Thirdly, it produces one-dimensional models that are asymptotically exact when
the macroscopic strain varies on a much larger length scale than the
cross-section diameter.
|
A model of market dynamics is presented to emphasize the effects of
increasing returns to scale, including a description of the birth and death of
adaptive producers. The evolution of the market structure and its behavior
under technological shocks are discussed. The dynamics is in good agreement
with some empirical stylized facts of industrial evolution. Together with the
diversity of demand and the adaptive growth strategies of firms, the
generalized model reproduces the power-law distribution of firm size. Three
factors mainly determine the competitive dynamics and the skewed size
distributions of firms: 1. a self-reinforcing mechanism; 2. adaptive firm
growth strategies; 3. demand diversity, or widespread heterogeneity in the
technological capabilities of different firms. Key words: Econophysics,
Increasing returns, Industry dynamics, Size distribution of firms
|
The fast development of quantum technologies over the last decades has
offered a glimpse to a future where the quantum properties of multi-particle
systems might be more fully understood. In particular, quantum computing might
prove crucial to explore many aspects of high energy physics inaccessible to
classical methods. In this talk, we will describe how one can use digital
quantum computers to study the evolution of QCD jets in quark-gluon plasmas.
We construct a quantum circuit to study single particle evolution in a dense
QCD medium. Focusing on the jet quenching parameter $\hat q $, we present some
early numerical results for a small quantum circuit. Future extensions of this
strategy are also addressed.
|
Recently, traffic-related problems have become strategically important
due to the continuously increasing number of vehicles. As a result, microscopic
simulation software has become an efficient tool in traffic engineering for
its cost-effectiveness and safety characteristics. In this paper, a new fuzzy
logic based simulation software (FLOWSIM) is introduced, which can reflect the
mixed traffic flow phenomenon in China better. The fuzzy logic based
car-following model and lane-changing model are explained in detail.
Furthermore, its applications for mixed traffic flow management in mid-size
cities and for signalized intersection management assessment in large cities
are illustrated by examples in China. Finally, further study objectives are
discussed.
|
Appearance-based gaze estimation has achieved significant improvement by
using deep learning. However, many deep learning-based methods suffer from the
vulnerability property, i.e., perturbing the raw image using noise confuses the
gaze estimation models. Although the perturbed image visually looks similar to
the original image, the gaze estimation models output the wrong gaze direction.
In this paper, we investigate the vulnerability of appearance-based gaze
estimation. To our knowledge, this is the first time the vulnerability of
gaze estimation has been demonstrated. We systematically characterize the
vulnerability property from multiple aspects: the pixel-based adversarial
attack, the patch-based adversarial attack, and the defense strategy. Our
experimental results demonstrate that CA-Net shows superior performance
against attack among the four popular appearance-based gaze estimation
networks: Full-Face, Gaze-Net, CA-Net, and RT-GENE. This study draws the
attention of researchers in the appearance-based gaze estimation community to
defending against adversarial attacks.
|
Adopting a joint approach towards state estimation and integrity monitoring
results in unbiased integrity monitoring unlike traditional approaches. So far,
a joint approach was used in Particle RAIM [1] for GNSS measurements only. In
our work, we extend Particle RAIM to a GNSS-camera fused system for joint state
estimation and integrity monitoring. To account for vision faults, we derive a
probability distribution over position from camera images using map-matching.
We formulate a Kullback-Leibler Divergence metric to assess the consistency of
GNSS and camera measurements and mitigate faults during sensor fusion. The
derived integrity risk upper bounds the probability of Hazardously Misleading
Information (HMI). Experimental validation on a real-world dataset shows that
our algorithm produces less than 11 m position error and the integrity risk
overbounds the probability of HMI with a 0.11 failure rate for an 8 m Alert
Limit in an urban scenario.
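The consistency check between GNSS and camera measurements can be illustrated with a discrete Kullback-Leibler divergence over candidate-position distributions; this is a sketch of the metric only, and the smoothing constant and threshold logic are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) between two discrete distributions over candidate
    # positions; eps guards against zeros before renormalizing
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```

A large divergence between the GNSS-derived and map-matching-derived position distributions flags an inconsistent (possibly faulty) measurement, which can then be down-weighted during sensor fusion.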
|
The magnetic field generated by a bound muon in heavy muonic atoms results in
an induced nuclear magnetic dipole moment even for otherwise spinless nuclei.
This dipole moment interacts with the muon, altering the binding energy of the
muonic state. We investigate the relation of this simple,
semi-classically inspired approach to nuclear polarisation (NP) calculations.
Motivated by the relative closeness of this simple estimate to evaluations of
NP, we extract effective values for the nuclear magnetic polarisability, a
quantity otherwise unknown, and put forward a simple back-of-the-envelope way
to estimate the magnetic part of NP.
|
We redevelop persistent homology (topological persistence) from a categorical
point of view. The main objects of study are diagrams, indexed by the poset of
real numbers, in some target category. The set of such diagrams has an
interleaving distance, which we show generalizes the previously-studied
bottleneck distance. To illustrate the utility of this approach, we greatly
generalize previous stability results for persistence, extended persistence,
and kernel, image and cokernel persistence. We give a natural construction of a
category of interleavings of these diagrams, and show that if the target
category is abelian, so is this category of interleavings.
|
Transformers have revolutionized deep learning and generative modeling,
enabling unprecedented advancements in natural language processing tasks.
However, the size of transformer models is increasing continuously, driven by
enhanced capabilities across various deep-learning tasks. This trend of
ever-increasing model size has given rise to new challenges in terms of memory
and computing requirements. Conventional computing platforms, including GPUs,
suffer from suboptimal performance due to the memory demands imposed by models
with millions/billions of parameters. The emerging chiplet-based platforms
provide a new avenue for compute- and data-intensive machine learning (ML)
applications enabled by a Network-on-Interposer (NoI). However, designing
suitable hardware accelerators for executing Transformer inference workloads is
challenging due to a wide variety of complex computing kernels in the
Transformer architecture. In this paper, we leverage chiplet-based
heterogeneous integration (HI) to design a high-performance and
energy-efficient multi-chiplet platform to accelerate transformer workloads. We
demonstrate that the proposed NoI architecture caters to the data access
patterns inherent in a transformer model. The optimized placement of the
chiplets and the associated NoI links and routers enable superior performance
compared to the state-of-the-art hardware accelerators. The proposed NoI-based
architecture demonstrates scalability across varying transformer models and
improves latency and energy efficiency by up to 22.8x and 5.36x respectively.
|
We give a provisional construction of the Kac-Moody Lie algebra module
structure on the hyperbolic restriction of the intersection cohomology complex
of the Coulomb branch of a framed quiver gauge theory, as a refinement of the
conjectural geometric Satake correspondence for Kac-Moody algebras proposed in
an earlier paper with Braverman and Finkelberg in 2019. This construction assumes
several geometric properties of the Coulomb branch under the torus action.
These properties are checked in affine type A, via the identification of the
Coulomb branch with a Cherkis bow variety established in a joint work with
Takayama.
|
We estimate the degree to which the baryon density, $\Omega_{b}$, can be
determined from the galaxy power spectrum measured from large scale galaxy
redshift surveys, and in particular, the Sloan Digital Sky Survey. A high
baryon density will cause wiggles to appear in the power spectrum, which should
be observable at the current epoch. We assume linear theory on scales $\geq
20h^{-1}Mpc$ and do not include the effects of redshift distortions, evolution,
or biasing. With an optimum estimate of $P(k)$ to $k\sim 2\pi/(20 h^{-1} Mpc)$,
the $1 \sigma$ uncertainties in $\Omega_{b}$ are roughly 0.07 and 0.016 in flat
and open ($\Omega_{0}=0.3$) cosmological models, respectively. This result
suggests that it should be possible to test for consistency with big bang
nucleosynthesis estimates of $\Omega_{b}$ if we live in an open universe.
|
We prove a topological version of the section conjecture for the profinite
completion of the fundamental group of finite CW-complexes equipped with the
action of a group of prime order $p$ whose $p$-torsion cohomology can be killed
by finite covers. As an application we derive the section conjecture for the
real points of a large class of varieties defined over the field of real
numbers and the natural analogue of the section conjecture for fixed points of
finite group actions on projective curves of positive genus defined over the
field of complex numbers.
|
The ability to manipulate two-dimensional (2D) electrons with external
electric fields provides a route to synthetic band engineering. By imposing
artificially designed and spatially periodic superlattice (SL) potentials, 2D
electronic properties can be further engineered beyond the constraints of
naturally occurring atomic crystals. Here we report a new approach to fabricate
high mobility SL devices by integrating surface dielectric patterning with
atomically thin van der Waals materials. By separating the device assembly and
SL fabrication processes, we address the intractable tradeoff between device
processing and mobility degradation that constrains SL engineering in
conventional systems. The improved electrostatics of atomically thin materials
moreover allows smaller wavelength SL patterns than previously achieved.
Replica Dirac cones in ballistic graphene devices with sub-40 nm wavelength SLs
are demonstrated, while under large magnetic fields we report the fractal
Hofstadter spectra from SLs with designed lattice symmetries vastly different
from that of the host crystal. Our results establish a robust and versatile
technique for band structure engineering of graphene and related van der Waals
materials with dynamic tunability.
|
The theoretical status of the `proton spin' effect is reviewed. The
conventional QCD parton model analysis of polarised DIS is compared with a
complementary approach, the composite operator propagator-vertex (CPV) method,
each of which provides its own insight into the origin of the observed
suppression in the first moment of $g_1^p$. The current status of both
experiment and non-perturbative calculations is summarised. The future role of
semi-inclusive DIS experiments, in both the current and target fragmentation
regions, is described.
|
We design a heat engine with multiple heat reservoirs, an ancillary system,
and a quantum memory. We then derive an inequality related to the second law
of thermodynamics, and give a new limit on the work gained from the engine
by analyzing the entropy change and the change in quantum mutual information
during the process. Remarkably, by combining two independent engines and
using the entropic uncertainty relation with quantum memory, we find that the
total maximum work gained from the two heat engines should be larger than a
quantity related to the quantum entanglement between the ancillary state and
the quantum memory. This result provides a lower bound for the maximum work
extracted, in contrast with the upper bound in the conventional second law of
thermodynamics. However, the validity of this inequality depends on whether
the maximum work can achieve the upper bound.
|
Strong-flavor and parity a priori mixing in hadrons are shown to describe
well the experimental evidence on weak radiative decays of hyperons. An
independent determination of the a priori mixing angles is performed. The
values obtained for them are seen to have a universality-like property, when
compared to their values in non-leptonic decays of hyperons.
|
The universal critical point ratio $Q$ is exploited to determine positions of
the critical Ising transition lines on the phase diagram of the Ashkin-Teller
(AT) model on the square lattice. A leading-order expansion of the ratio $Q$ in
the presence of a non-vanishing thermal field is found from finite-size scaling
and the corresponding expression is fitted to the accurate perturbative
transfer-matrix data calculations for the $L\times L$ square clusters with
$L\leq 9$.
|
The motion of a satellite can experience secular resonances between the
precession frequencies of its orbit and the mean motion of the host planet
around the star. Some of these resonances can significantly modify the
eccentricity (evection resonance) and the inclination (eviction resonance) of
the satellite. In this paper, we study in detail the secular resonances that
can disturb the orbit of a satellite, in particular the eviction-like ones.
Although the inclination is always disturbed while crossing one eviction-like
resonance, capture can only occur when the semi-major axis is decreasing. This
is, for instance, the case of Phobos, the largest satellite of Mars, that will
cross some of these resonances in the future because its orbit is shrinking
owing to tidal effects. We estimate the impact of resonance crossing in the
orbit of the satellite, including the capture probabilities, as a function of
several parameters, such as the eccentricity and the inclination of the
satellite, and the obliquity of the planet. Finally, we use the method of the
frequency map analysis to study the resonant dynamics based on stability maps,
and we show that some of the secular resonances may overlap, which leads to
chaotic motion for the inclination of the satellite.
|
An electromagnetic analysis is presented for experiments with strong
permanent disc magnets. The analysis is based on the well known experiment that
demonstrates the effect of circulating eddy currents by dropping a strong
magnet through a vertically placed metal cylinder and observing how the magnet
is slowly falling through the cylinder with a constant velocity. This
experiment is quite spectacular with a super strong neodymium magnet and a
thick metal cylinder made of copper or aluminum. A rigorous theory for this
experiment is provided based on the quasi-static approximation of the Maxwell
equations, an infinitely long cylinder (no edge effects) and a homogeneous
magnetization of the disc magnet. The results are useful for teachers and
students in electromagnetics who wish to obtain a deeper insight into the
analysis and experiments regarding this phenomenon, or with industrial
applications such as the grading and calibration of strong permanent magnets or
with measurements of the conductivity of various metals, etc. Several
experiments and numerical computations are included to validate and to
illustrate the theory.
|