A major challenge for autonomous vehicles is interacting with other traffic
participants safely and smoothly. A promising approach to handle such traffic
interactions is equipping autonomous vehicles with interaction-aware
controllers (IACs). These controllers predict how surrounding human drivers
will respond to the autonomous vehicle's actions, based on a driver model.
However, the predictive validity of driver models used in IACs is rarely
validated, which can limit the interactive capabilities of IACs outside the
simple simulated environments in which they are demonstrated. In this paper, we
argue that besides evaluating the interactive capabilities of IACs, their
underlying driver models should be validated on natural human driving behavior.
We propose a workflow for this validation that includes scenario-based data
extraction and a two-stage (tactical/operational) evaluation procedure based on
human factors literature. We demonstrate this workflow in a case study on an
inverse-reinforcement-learning-based driver model replicated from an existing
IAC. This model showed the correct tactical behavior in only 40% of the
predictions. The model's operational behavior was inconsistent with observed
human behavior. The case study illustrates that a principled evaluation
workflow is useful and needed. We believe that our workflow will support the
development of appropriate driver models for future automated vehicles.
|
We give a basic explanation for the oscillating properties of some physical
quantities of a two-electron quantum dot in the presence of a static magnetic
field. This behaviour was discussed in a previous work of ours [AM Maniero,
{\it et al}. J. Phys. B: At. Mol. Opt. Phys. 53:185001, 2020] and was
identified as a manifestation of the {\it de Haas-van Alphen} effect,
originally observed in the framework of diamagnetism of metals in the 1930s. We
show that this behaviour is a consequence of different eigenstates of the
system becoming, in certain intervals of the magnetic field, the lowest-energy
singlet and triplet states.
|
In this paper, we compare the scalar field dynamics in axion-like and power
law potentials for both positive and negative values of the exponents. We find
that, for positive exponents, both the potentials exhibit similar scalar field
dynamics and it can be difficult to distinguish them at least at the background
level. Even though the potentials are oscillatory in nature for positive
exponents, scaling solutions can be achieved for larger values of the exponent,
for which the dynamics can differ during early times. Because of this scaling
nature there is a turnaround in the values of the scalar field equation of
state as we increase the exponent in both potentials, indicating a deviation
from oscillatory behaviour for larger exponents. For negative values of the
exponent, the dynamics of the scalar field is distinguishable and axion-like
potential can give rise to cosmologically viable tracker solutions unlike the
power law potentials. For negative values of the exponent, axion-like potential
can behave like a cosmological constant around its minima and the dark energy
scale can be related to the potential scale. Due to this cosmological-constant-like
behavior of the axion-like potential around its minima, the late-time dynamics
can be similar to $\Lambda$CDM, and we obtain similar observational constraints
on the parameters for both $\Lambda$CDM and the axion-like potential with
negative exponent. So, while for positive exponents we may not distinguish the
two potentials, for negative exponents the dynamics of the scalar field is
distinguishable.
|
We study structure formation in phenomenological models in which the
Friedmann equation receives a correction of the form
$H^{\alpha}/r_c^{2-\alpha}$, which realize an accelerated expansion without
dark energy. In order to address structure formation in these models, we
construct simple covariant gravitational equations which give the modified
Friedmann equation with $\alpha=2/n$ where $n$ is an integer. For $n=2$, the
underlying theory is known as a 5D braneworld model (the DGP model). Thus the
models interpolate between the DGP model ($n=2, \alpha=1$) and the LCDM model
in general relativity ($n \to \infty, \alpha \to 0$). Using the covariant
equations, cosmological perturbations are analyzed. It is shown that in order
to satisfy the Bianchi identity at a perturbative level, we need to introduce a
correction term $E_{\mu \nu}$ in the effective equations. In the DGP model,
$E_{\mu \nu}$ comes from 5D gravitational fields and correct conditions on
$E_{\mu \nu}$ can be derived by solving the 5D perturbations. In the general
case $n>2$, we have to assume the structure of a modified theory of gravity to
determine $E_{\mu \nu}$. We show that structure formation is different from a
dark energy model in general relativity with identical expansion history and
that quantitative features of the difference crucially depend on the conditions
on $E_{\mu \nu}$, that is, the structure of the underlying theory of modified
gravity. This implies that it is essential to identify underlying theories in
order to test these phenomenological models against observational data and,
once we identify a consistent theory, structure formation tests become
essential to distinguish modified gravity models from dark energy models in
general relativity.
|
We present a detailed study of the ground-state magnetic structure of
ultrathin Fe films on the surface of fcc Ir(001). We use the spin-cluster
expansion technique in combination with the relativistic disordered local
moment scheme to obtain parameters of spin models and then determine the
favored magnetic structure of the system by means of a mean field approach and
atomistic spin dynamics simulations. For the case of a single monolayer of Fe
we find that layer relaxations very strongly influence the ground-state spin
configurations, while Dzyaloshinskii-Moriya (DM) interactions and biquadratic
couplings also have notable effects. To characterize the latter effect we
introduce and analyze spin collinearity maps of the system. While for two
monolayers of Fe we find a single-q spin spiral as ground state due to DM
interactions, for the case of four monolayers the system shows a noncollinear
spin structure with nonzero net magnetization. These findings are consistent
with experimental measurements indicating ferromagnetic order in films of four
monolayers and thicker.
|
We introduce the notion of a pre-spectral triple, which is a generalisation
of a spectral triple $(\mathcal{A}, H, D)$ where $D$ is no longer required to
be self-adjoint, but closed and symmetric. Despite having weaker assumptions,
pre-spectral triples allow us to introduce noncompact noncommutative geometry
with boundary. In particular, we derive the Hochschild character theorem in
this setting. We give a detailed study of Dirac operators with Dirichlet
boundary conditions on open subsets of $\mathbb{R}^d$, $d \geq 2$.
|
In this set of papers we formulate a stand-alone method to derive the maximal
number of linearizing transformations for nonlinear ordinary differential
equations (ODEs) of any order, including coupled ones, from knowledge of a
smaller number of integrals of motion. The proposed algorithm is simple,
straightforward and efficient and helps to unearth several new types of
linearizing transformations besides the known ones in the literature. To make
our studies systematic we divide our analysis into two parts. In the first part
we confine our investigations to scalar ODEs and in the second part we
focus our attention on a system of two coupled second order ODEs. In the case
of scalar ODEs, we consider second and third order nonlinear ODEs in detail and
discuss the method of deriving the maximal number of linearizing
transformations, irrespective of whether they are of local or nonlocal type,
and illustrate the
underlying theory with suitable examples. As a by-product of this investigation
we unearth a new type of linearizing transformation in third order nonlinear
ODEs. Finally the study is extended to the case of general scalar ODEs. We then
move on to the study of two coupled second order nonlinear ODEs in the next
part and show that the algorithm brings out a wide variety of linearization
transformations. The extraction of maximal number of linearizing
transformations in every case is illustrated with suitable examples.
|
We explore the role of the modified Eddington limit due to rapid rotation
(the so-called $\Omega\Gamma-$limit) in the formation of Population III stars.
We performed one-dimensional stellar evolution simulations of mass-accreting
zero-metallicity protostars at a very high rate ($\dot{M} \sim
10^{-3}~\mathrm{M_\odot~yr^{-1}}$) and dealt with stellar rotation as a
separate post-process. The protostar would reach the Keplerian rotation very
soon after the onset of mass accretion, but mass accretion would continue as
stellar angular momentum is transferred outward to the accretion disk by
viscous stress. The protostar envelope expands rapidly when the stellar mass
reaches $5$--$7~\mathrm{M_\odot}$ and the Eddington factor sharply increases.
This makes the protostar rotate critically at a rate that is significantly
below the Keplerian value (i.e., the $\Omega\Gamma-$limit). The resultant
positive gradient of the angular velocity in the boundary layer between the
protostar and the Keplerian disk prohibits angular momentum transport from the
star to the disk, and consequently further rapid mass accretion. This would
prevent the protostar from growing significantly beyond $20 -
40~\mathrm{M_\odot}$. Another important consequence of the $\Omega\Gamma-$limit
is that the protostar can remain fairly compact ($R \lesssim
50~\mathrm{R_\odot}$) and avoid a fluffy structure ($R \gtrsim
500~\mathrm{R_\odot}$) that is usually found with a very high mass accretion
rate. This effect would make the protostar less prone to binary interactions
during the protostar phase. Although our analysis is based on Pop III protostar
models, this role of the $\Omega\Gamma-$limit would be universal in the
formation process of massive stars, regardless of metallicity.
|
We study the electromagnetic radiation by a fermion carrying an electric
charge $q$ embedded in a medium rotating with constant angular velocity
$\bf\Omega$ parallel or anti-parallel to an external constant magnetic field
$\bf B$. We assume that the rotation is "relatively slow"; namely, that the
angular velocity $\Omega$ is much smaller than the inverse magnetic length
$\sqrt{qB}$. In practice, such angular velocity can be extremely high. The
fermion motion is a superposition of two circular motions: one due to its rigid
rotation caused by forces exerted by the medium, another due to the external
magnetic field. We derive an exact analytical expression for the spectral rate
and the total intensity of this type of synchrotron radiation. Our numerical
calculations indicate very high sensitivity of the radiation to the angular
velocity of rotation. We show that the radiation intensity is strongly enhanced
if $q\bf B$ and $\bf \Omega$ point in the opposite directions and is suppressed
otherwise.
|
We show that ``ergodic regime'' appears for generic dispersion relations in
the semiclassical motion of electrons in a metal and we prove that, in the
fixed energy picture, the measure of the set of such directions is zero.
|
First preliminary results of the balloon-borne experiment SPHERE-2 on the
all-nuclei primary cosmic rays (PCR) spectrum and primary composition are
presented. The primary spectrum in the energy range $10^{16}$--$5\cdot10^{17}$
eV was reconstructed using characteristics of Vavilov-Cherenkov radiation of
extensive air showers (EAS), reflected from a snow surface. Several sources of
systematic uncertainties of the spectrum were analysed. A method for separation
of the primary nuclei groups based on the lateral distribution function (LDF)
steepness parameter is presented. A preliminary estimate of the mean light-nuclei
fraction $f_{30-150}$ at energies $3\cdot10^{16}$--$1.5\cdot10^{17}$ eV
was performed and yielded $f_{30-150}$= (21$\pm$11)%.
|
Suppose M is a connected PL 2-manifold and X is a compact connected
subpolyhedron of M which is neither a single point nor a closed 2-manifold. Let
E(X, M) denote the
space of topological embeddings of X into M with the compact-open topology and
let E(X, M)_0 denote the connected component of the inclusion i_X : X \subset M
in E(X, M). In this paper we classify the homotopy type of E(X, M)_0 in terms of
the subgroup G = Im[{i_X}_\ast : \pi_1(X) \to \pi_1(M)]. We show that if G is
not a cyclic group and M \neq T^2, K^2 then E(X, M)_0 \simeq \ast, if G is a
nontrivial cyclic group and M \neq P^2, T^2, K^2 then E(X, M)_0 \simeq S^1, and
when G = 1, if X is an arc or M is orientable then E(X, M)_0 \simeq ST(M) and
if X is not an arc and M is nonorientable then E(X, M)_0 \simeq ST(\tilde{M}).
Here S^1 is the circle, T^2 is the torus, P^2 is the projective plane and K^2
is the Klein bottle. The symbol ST(M) denotes the tangent unit circle bundle of
M with respect to any Riemannian metric of M and \tilde{M} denotes the
orientation double cover of M.
|
As spin-based quantum processors grow in size and complexity, maintaining
high fidelities and minimizing crosstalk will be essential for the successful
implementation of quantum algorithms and error-correction protocols. In
particular, recent experiments have highlighted pernicious transient qubit
frequency shifts associated with microwave qubit driving. Workarounds for small
devices, including prepulsing with an off-resonant microwave burst to bring a
device to a steady-state, wait times prior to measurement, and qubit-specific
calibrations all bode ill for device scalability. Here, we make substantial
progress in understanding and overcoming this effect. We report a surprising
non-monotonic relation between mixing chamber temperature and spin Larmor
frequency which is consistent with observed frequency shifts induced by
microwave and baseband control signals. We find that purposefully operating the
device at 200 mK greatly suppresses the adverse heating effect while not
compromising qubit coherence or single-qubit fidelity benchmarks. Furthermore,
systematic non-Markovian crosstalk is greatly reduced. Our results provide a
straightforward means of improving the quality of multi-spin control while
simplifying calibration procedures for future spin-based quantum processors.
|
Darwin is a genomics co-processor that achieved a 15,000x acceleration on long
read assembly through innovative hardware and algorithm co-design. Darwin's
algorithms and hardware implementation were specifically designed for DNA
analysis pipelines. This paper analyzes the feasibility of applying Darwin's
algorithms to the problem of protein sequence alignment. In addition to a
behavioral analysis of Darwin when aligning proteins, we propose an algorithmic
improvement to Darwin's alignment algorithm, GACT, in the form of a multi-pass
variant that increases its accuracy on protein sequence alignment. Concretely,
our proposed multi-pass variant of GACT achieves on average 14\% better
alignment scores.
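GACT itself is a tiled, hardware-oriented heuristic; as a point of reference, the dynamic programming it approximates tile by tile is classic Smith-Waterman local alignment. A minimal sketch (scoring values are illustrative, not Darwin's):

```python
# Minimal Smith-Waterman local alignment: fills the full DP matrix and
# returns the best local alignment score.  Scoring parameters are
# illustrative defaults, not those used by Darwin/GACT.
def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # (mis)match
                          H[i - 1][j] + gap,    # deletion
                          H[i][j - 1] + gap)    # insertion
            best = max(best, H[i][j])
    return best

print(smith_waterman("TGTTACGG", "GGTTGACTA"))
```

For protein alignment one would replace the match/mismatch constants with a substitution matrix such as BLOSUM62; the DP recurrence is unchanged.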
|
We consider the regular model of formula generation in conjunctive normal
form (CNF) introduced by Boufkhad et al. We derive an upper bound on the
satisfiability threshold and NAE-satisfiability threshold for regular random
$k$-SAT for any $k \geq 3$. We show that these bounds match the corresponding
bounds for the uniform model of formula generation.
We derive a lower bound on the threshold by applying the second moment method
to the number of satisfying assignments. For large $k$, we note that the
obtained lower bounds on the threshold of a regular random formula converge to
the lower bound obtained for the uniform model. Thus, we answer the question
posed in \cite{AcM06} regarding the performance of the second moment method for
regular random formulas.
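The uniform-model comparison can be made concrete with the textbook first-moment calculation: the expected number of satisfying assignments of a uniform random $k$-CNF with $m = \alpha n$ clauses is $2^n(1-2^{-k})^{\alpha n}$, which vanishes once $\alpha$ exceeds $-\ln 2/\ln(1-2^{-k}) \approx 2^k \ln 2$. A small sketch of this uniform-model bound (the paper's regular-model bounds are not reproduced here):

```python
import math

# First-moment (union bound) upper bound on the uniform random k-SAT
# threshold: E[#satisfying assignments] = 2^n * (1 - 2^-k)^(alpha*n)
# drops below 1 once alpha exceeds -ln 2 / ln(1 - 2^-k) ~ 2^k ln 2.
def first_moment_threshold(k):
    return -math.log(2) / math.log(1 - 2.0 ** (-k))

for k in (3, 4, 5):
    print(k, round(first_moment_threshold(k), 3))
```

For $k=3$ this gives the well-known upper bound of about 5.19 clauses per variable.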
|
Rotating bodies in General Relativity produce frame dragging, also known as
the {\it gravitomagnetic effect} in analogy with classical electromagnetism. In
this work, we study the effect of magnetic field on the gravitomagnetic effect
in neutron stars with poloidal geometry, which is produced as a result of its
rotation. We show that the magnetic field has a non-negligible impact on frame
dragging. The maximum effect of the magnetic field appears along the polar
direction, where the frame-dragging frequency decreases with increase in
magnetic field, and along the equatorial direction, where its magnitude
increases. For intermediate angles, the effect of the magnetic field decreases,
and goes through a minimum at a particular angular value at which the magnetic
field has no effect on gravitomagnetism. Beyond that angle, the gravitomagnetic
effect increases with increasing magnetic field. We try to
identify this `null region' for the case of magnetized neutron stars, both
inside and outside, as a function of the magnetic field, and suggest a thought
experiment to find the null region of a particular pulsar using the frame
dragging effect.
|
It has been conjectured that at distances smaller than the confinement scale
but large enough to allow for nonperturbative effects, QCD is described by an
effective $SU(N_c {\times} N_f)_L\times SU(N_c {\times} N_f)_R$ chiral
Lagrangian. The soliton solutions of such a Lagrangian are extended objects
with spin ${1\over 2}$. For $N_c{=}3$, $N_f{=}3$ they are triplets of color and
flavor and have baryon number ${1\over3}$, to be identified as constituent
quarks. We investigate in detail the static properties of such
constituent-quark solitons for the simplest case $N_f{=}1, N_c{=}3$. The mass
of these objects comes from the energy of the static soliton and from quantum
effects, described semiclassically by rotation of collective coordinates around
the classical solution. The quantum corrections tend to be large, but can be
controlled by exploring the Lagrangian's parameter space so as to maximize the
inertia tensor. We comment on the acceptable parameter space and discuss the
model's further predictive power.
|
The perturbations of chirped dissipative solitons are analyzed in the
spectral domain. It is shown that the structure of the perturbed chirped
dissipative soliton is highly nontrivial and has a tendency to an enhancement
of the spectral perturbations especially at the spectrum edges, where the
irregularities develop. Even spectrally localized perturbations spread over a
whole soliton spectrum. As a result of spectral irregularity, the chaotic
dynamics develops due to the spectral loss action. In particular, the
dissipative soliton can become fragmented though it remains localized.
|
In this work, we demonstrate the importance of considering correlations
between degenerate Zeeman sublevels that develop in dense atomic ensembles. In
order to do this, we develop a set of equations capable of simulating large
numbers of atoms while still incorporating correlations between degenerate
Zeeman sublevels. This set of equations is exact in the single-photon limit,
and may be interpreted as a generalization of the coupled harmonic oscillator
equations typically used in the literature. Using these equations, we demonstrate
that in sufficiently dense systems, correlations between Zeeman sublevels can
cause non-trivial differences in the photon scattering lineshape in arrays and
clouds of atoms.
|
While the theoretical analysis of evolutionary algorithms (EAs) has made
significant progress for pseudo-Boolean optimization problems in the last 25
years, only sporadic theoretical results exist on how EAs solve
permutation-based problems.
To overcome the lack of permutation-based benchmark problems, we propose a
general way to transfer the classic pseudo-Boolean benchmarks into benchmarks
defined on sets of permutations. We then conduct a rigorous runtime analysis of
the permutation-based $(1+1)$ EA proposed by Scharnow, Tinnefeld, and Wegener
(2004) on the analogues of the \textsc{LeadingOnes} and \textsc{Jump}
benchmarks. The latter shows that, different from bit-strings, it is not only
the Hamming distance that determines how difficult it is to mutate a
permutation $\sigma$ into another one $\tau$, but also the precise cycle
structure of $\sigma \tau^{-1}$. For this reason, we also regard the more
symmetric scramble mutation operator. We observe that it not only leads to
simpler proofs, but also reduces the runtime on jump functions with odd jump
size by a factor of $\Theta(n)$. Finally, we show that a heavy-tailed version
of the scramble operator, as in the bit-string case, leads to a speed-up of
order $m^{\Theta(m)}$ on jump functions with jump size~$m$.
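As a toy illustration (our own simplification, not the paper's exact benchmarks), a permutation-based (1+1) EA in the spirit of Scharnow, Tinnefeld, and Wegener applies Poisson(1)+1 random transpositions as mutation and keeps the child under an elitist rule; the fitness below is a LeadingOnes analogue:

```python
import math
import random

# Toy permutation-based (1+1) EA: mutation applies Poisson(1)+1 random
# transpositions, and the fitness is a LeadingOnes analogue counting the
# prefix on which the permutation agrees with the identity.  Names and
# parameters are illustrative assumptions.
def leading_fixed(perm):
    count = 0
    for pos, val in enumerate(perm):
        if val != pos:
            break
        count += 1
    return count

def poisson1(rng):
    # Knuth's method for a Poisson sample with mean 1.
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def one_plus_one_ea(n, steps, seed=0):
    rng = random.Random(seed)
    parent = list(range(n))
    rng.shuffle(parent)
    for _ in range(steps):
        child = list(parent)
        for _ in range(poisson1(rng) + 1):  # at least one transposition
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]
        if leading_fixed(child) >= leading_fixed(parent):  # elitist selection
            parent = child
    return parent
```

The scramble operator discussed in the abstract would replace the transposition loop with a shuffle of a randomly chosen subset of positions.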
|
A current challenge for many Bayesian analyses is determining when to
terminate high-dimensional Markov chain Monte Carlo simulations. To this end,
we propose using an automated sequential stopping procedure that terminates the
simulation when the computational uncertainty is small relative to the
posterior uncertainty. Such a stopping rule has previously been shown to work
well in settings with posteriors of moderate dimension. In this paper, we
illustrate its utility in high-dimensional simulations while overcoming some
current computational issues. Further, we investigate the relationship between
the stopping rule and effective sample size. As examples, we consider two
complex Bayesian analyses on spatially and temporally correlated datasets. The
first involves a dynamic space-time model on weather station data and the
second a spatial variable selection model on fMRI brain imaging data. Our
results show the sequential stopping rule is easy to implement, provides
uncertainty estimates, and performs well in high-dimensional settings.
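A one-dimensional sketch of such a fixed-width sequential stopping rule (illustrative only; the paper's multivariate rule and variance estimators differ): stop once the batch-means Monte Carlo standard error of the posterior mean is small relative to the posterior standard deviation.

```python
import math

# Sequential stopping sketch: keep sampling until the batch-means Monte
# Carlo standard error (MCSE) of the posterior mean falls below eps
# times the estimated posterior standard deviation.
def batch_means_mcse(chain, batch_size=None):
    n = len(chain)
    if batch_size is None:
        batch_size = max(1, int(math.sqrt(n)))
    k = n // batch_size  # number of full, non-overlapping batches
    means = [sum(chain[i * batch_size:(i + 1) * batch_size]) / batch_size
             for i in range(k)]
    grand = sum(means) / k
    var_bm = batch_size * sum((m - grand) ** 2 for m in means) / (k - 1)
    return math.sqrt(var_bm / n)

def keep_sampling(chain, eps=0.05):
    mean = sum(chain) / len(chain)
    sd = math.sqrt(sum((v - mean) ** 2 for v in chain) / (len(chain) - 1))
    return batch_means_mcse(chain) >= eps * sd
```

The ratio of the squared posterior sd to the squared MCSE is (up to constants) an effective sample size, which is the connection to ESS investigated in the abstract.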
|
Quaternions first appeared through the Lagrangian formulation of mechanics in
symplectic vector space. Their general form was obtained from the Clifford
algebra, and Frobenius' theorem, which says that "the only finite-dimensional
real division algebras are the real field ${\bf R}$, the complex field ${\bf C}$
and the algebra ${\bf H}$ of quaternions", was derived. They also appear through
the Hamiltonian formulation of mechanics, as elements of rotation groups in
symplectic vector spaces. Quaternions were used in the solution of the
4-dimensional Dirac equation in QED, and also in solutions of the Yang-Mills
equation in QCD as elements of noncommutative geometry. We present how
quaternions are formulated in Clifford algebra, and how they are used in explaining
rotation group in symplectic vector space and parallel transformation in
holonomic dynamics. When a dynamical system has hysteresis, pre-symplectic
manifolds and nonholonomic dynamics appear. Quaternions represent rotation of
3-dimensional sphere ${\bf S}^3$. Artin's generalized quaternions and
Rohlin-Pontryagin's embedding of quaternions on 4-dimensional manifolds, and
Kodaira's embedding of quaternions on ${\bf S}^1\times {\bf S}^3$ manifolds are
also discussed.
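The defining relations $i^2 = j^2 = k^2 = ijk = -1$ can be checked with a few lines of Hamilton-product arithmetic (an illustrative aside, not taken from the source):

```python
# Minimal quaternion arithmetic: q = (w, x, y, z) represents
# w + x i + y j + z k, with the Hamilton product encoding
# i^2 = j^2 = k^2 = ijk = -1.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == (-1, 0, 0, 0)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)  # noncommutative
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)             # ijk = -1
```

Unit quaternions form the 3-sphere ${\bf S}^3$ mentioned above, which is why this product implements rotations.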
|
Photoluminescence of graphene quantum dots (GQDs) of shungite, attributed to
individual fragments of reduced graphene oxide (rGO), has been studied for the
frozen rGO colloidal dispersions in water, carbon tetrachloride, and toluene.
Morphological study shows a steady trend of GQDs to form fractals, and a drastic
change in the colloids' fractal structure caused by the solvent was reliably
established. Spectral study reveals a dual character of the emitting centers:
individual GQDs are responsible for the spectra position while fractal
structure of GQD colloids provides high broadening of the spectra due to
structural inhomogeneity of the colloidal dispersions and a peculiar dependence
on excitation wavelength. For the first time, photoluminescence spectra of
individual GQDs were observed in frozen toluene dispersions which pave the way
for a theoretical treatment of GQD photonics.
|
We extend the recently proposed mechanism for inducing low energy nuclear
reactions (LENR) to compute the reaction rate of deuteron with a heavy nucleus.
The process gets dominant contribution at second order in the time dependent
perturbation theory and is assisted by a resonance. The reaction proceeds by
breakdown of deuteron into a proton and a neutron due to the action of the
first perturbation. In the second, nuclear perturbation, the neutron gets
captured by the heavy nucleus. Both perturbations are assumed to be
electromagnetic and lead to the emission of two photons, one at each vertex.
The heavy nucleus is taken to be ${}^{58}$Ni although many others may be
considered. The reaction rate is found to be very small unless assisted by some
special conditions. In the present case we assume the presence of a nuclear
resonant state. In the presence of such a state we find that the reaction rate
is sufficiently large to be observable in laboratory even at low energies.
|
In the quest for high temperature superconductors, the interface between a
metal and a dielectric was proposed to possibly achieve very high
superconducting transition temperature ($T_c$) through interface-assisted
pairing. Recently, in single layer FeSe (SLF) films grown on SrTiO$_3$
substrates, signs for $T_c$ up to 65~K have been reported. However, besides
doping electrons and imposing strain, whether and how the substrate facilitates
the superconductivity are still unclear. Here we report the growth of various
SLF films on thick BaTiO$_3$ films atop KTaO$_3$ substrates, with signs for
$T_c$ up to $75$~K, close to the liquid nitrogen boiling temperature. SLF of
similar doping and lattice is found to exhibit high $T_c$ only if it is on the
substrate, and its band structure strongly depends on the substrate. Our
results highlight the profound role of the substrate in the high-$T_c$ of SLF, and
provide new clues for understanding its mechanism.
|
Particle spin polarization is known to be linked both to rotation (angular
momentum) and magnetization of a many particle system. However, in the most
common formulation of relativistic kinetic theory, the spin degrees of freedom
appear only as degeneracy factors multiplying phase-space distributions. Thus,
it is important to develop theoretical tools that allow to make predictions
regarding the spin polarization of particles, which can be directly confronted
with experimental data. Herein, we discuss a link between the relativistic spin
tensor and particle spin polarization, and elucidate the connections between
the Wigner function and average polarization. Our results may be useful for
theoretical interpretation of heavy-ion data on spin polarization of the
produced hadrons.
|
In this work we develop the mathematical framework of !FTL, a new gesture
recognition algorithm, and we prove its convergence. This convergence suggests
adopting a notion of shape for smooth gestures as a complex-valued function.
However, the idea inspiring that notion came to us from Clifford numbers rather
than from complex numbers. Moreover, the Clifford vector algebra can be used to
extend the notion of the shape of a gesture to higher dimensions, whereas
complex numbers are of no use for that purpose.
|
We present a proposal and a feasibility study for the creation and quantum
state tomography of a single polariton state of an atomic ensemble. The
collective non-classical and non-Gaussian state of the ensemble is generated by
detection of a single forward scattered photon. The state is subsequently
characterized by atomic state tomography performed using strong dispersive
light-atom interaction followed by a homodyne measurement on the transmitted
light. The proposal is backed by preliminary experimental results showing
projection noise limited sensitivity and a simulation demonstrating the
feasibility of the proposed method for detection of a non-classical and
non-Gaussian state of the mesoscopic atomic ensemble. This work represents the
first attempt at hybrid discrete-continuous variable quantum state processing
with atomic ensembles.
|
Ten years later, astronomers are still puzzled by the stellar evolution that
produced SN 1987A --- a blue supergiant. In single star models, the new OPAL
opacities make blue solutions more difficult to achieve, though still possible
for certain choices of convection physics. We also consider rotation, which has
the desirable effect of producing large surface enhancements of nitrogen and
helium, but the undesirable effect of increasing the helium-core mass at the
expense of the envelope. The latter makes blue solutions more difficult. Still,
we seek a model that occurs with high probability in the LMC and for which the
time-scale for making the last transition from red to blue, $\sim$ 20,000
years, has a physical interpretation --- the Kelvin-Helmholtz time of the
helium core. Single star models satisfy both criteria and might yet prove to be
the correct explanation for Sk -69 202, provided new rotational or convection
physics can simultaneously give a blue star and explain the ring structure.
Some speculations on how this might be achieved are presented and some aspects
of binary models briefly discussed.
|
Context. Colliding wind binaries are massive systems featuring strong,
interacting stellar winds which may act as particle accelerators. Therefore,
such binaries are good candidates for detection at high energies. However, only
the massive binary Eta Carinae has been firmly associated with a gamma-ray
signal. A second system, gamma^2 Velorum, is positionally coincident with a
gamma-ray source, but unambiguous identification remains lacking. Aims.
Observing orbital modulation of the flux would establish an unambiguous
identification of the binary gamma^2 Velorum as the gamma-ray source detected
by the Fermi Large Area Telescope (Fermi-LAT). Methods. We have used more than
10 years of observations with Fermi-LAT. Events are folded with the orbital
period of the binary to search for variability. Systematic errors that might
arise from the strong emission of the nearby Vela pulsar are studied by
comparing with a more conservative pulse-gated analysis. Results. Hints of
orbital variability are found, indicating maximum flux from the binary during
apastron passage. Conclusions. Our analysis strengthens the possibility that
gamma-rays are produced in gamma^2 Velorum, most likely as a result of particle
acceleration in the wind collision region. The observed orbital variability is
consistent with predictions from recent MHD simulations, but contrasts with the
orbital variability from Eta Carinae, where the peak of the light curve is
found at periastron.
|
Biomedical systems are regulated by interacting mechanisms that operate
across multiple spatial and temporal scales and produce biosignals containing
both linear and non-linear information. In this sense, entropy can provide a
useful measure of disorder in the system, lack of information in time series,
and/or irregularity of the signals. Essential tremor (ET) is the most common movement
disorder, being 20 times more common than Parkinson's disease, and 50-70% of
its cases are estimated to be genetic in origin. Archimedes spiral
drawing is one of the most used standard tests for clinical diagnosis. This
work, on selection of nonlinear biomarkers from drawings and handwriting, is
part of a wide-ranging cross study for the diagnosis of essential tremor in
BioDonostia Health Institute. Several entropy algorithms are used to generate
nonlinear features. The automatic analysis system consists of several Machine
Learning paradigms.
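As one concrete instance of the entropy algorithms mentioned above (our assumption; the abstract does not name its exact feature set), sample entropy (SampEn) quantifies signal irregularity as the negative log of the conditional probability that subsequences matching for m points also match for m+1:

```python
import math

# Sample entropy (SampEn): -log of the conditional probability that
# windows of length m matching within tolerance r (Chebyshev distance)
# still match when extended to length m+1.  Here r is an absolute
# tolerance; 0.2 * std(signal) is a common choice in practice.
def sample_entropy(x, m=2, r=0.2):
    n = len(x)
    a = b = 0  # template matches of length m+1 and of length m
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + t] - x[j + t]) for t in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular signal (constant or strictly periodic) yields SampEn 0, while tremor-corrupted drawings would be expected to yield larger values.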
|
An accelerating boundary (mirror) acts as a horizon and black hole analog,
radiating energy with some particle spectrum. We demonstrate that a M\"obius
transformation of the mirror trajectory in null-coordinate advanced time uniquely
keeps invariant not only the energy flux but also the particle spectrum. We clarify
how the geometric entanglement entropy is also invariant. The transform allows
the generation of families of dynamically distinct trajectories, including
$\mathcal{PT}$-symmetric ones, mapping from the eternally thermal mirror to the
de Sitter horizon, and different boundary motions corresponding to Kerr or
Schwarzschild black holes.
|
Getting access to labelled datasets in certain sensitive application domains
can be challenging. Hence, one often resorts to transfer learning to transfer
knowledge learned from a source domain with sufficient labelled data to a
target domain with limited labelled data. However, most existing transfer
learning techniques only focus on one-way transfer which brings no benefit to
the source domain. In addition, there is the risk of a covert adversary
corrupting a number of domains, which can consequently result in inaccurate
prediction or privacy leakage. In this paper we construct a secure and
Verifiable collaborative Transfer Learning scheme, VerifyTL, to support two-way
transfer learning over potentially untrusted datasets by improving knowledge
transfer from a target domain to a source domain. Further, we equip VerifyTL
with a cross transfer unit and a weave transfer unit employing SPDZ computation
to provide privacy guarantee and verification in the two-domain setting and the
multi-domain setting, respectively. Thus, VerifyTL is secure against a covert
adversary that can compromise up to n-1 out of n data domains. We analyze the
security of VerifyTL and evaluate its performance over two real-world datasets.
Experimental results show that VerifyTL achieves significant performance gains
over existing secure learning schemes.
|
We present the results of a comparison between the optical morphologies of a
complete sample of 46 southern 2Jy radio galaxies at intermediate redshifts
(0.05<z<0.7) and those of two control samples of quiescent early-type galaxies:
55 ellipticals at redshifts z<0.01 from the Observations of Bright Ellipticals
at Yale (OBEY) survey, and 107 early-type galaxies at redshifts 0.2<z<0.7 in
the Extended Groth Strip (EGS). Based on these comparisons, we discuss the role
of galaxy interactions in the triggering of powerful radio galaxies (PRGs). We
find that a significant fraction of quiescent ellipticals at low and
intermediate redshifts show evidence for disturbed morphologies at relatively
high surface brightness levels, which are likely the result of past or on-going
galaxy interactions. However, the morphological features detected in the galaxy
hosts of the PRGs (e.g. tidal tails, shells, bridges, etc.) are up to 2
magnitudes brighter than those present in their quiescent counterparts. Indeed,
if we consider the same surface brightness limits, the fraction of disturbed
morphologies is considerably smaller in the quiescent population (53% at z<0.2
and 48% at 0.2<z<0.7) than in the PRGs (93% at z<0.2 and 95% at 0.2<z<0.7
considering strong-line radio galaxies only). This supports a scenario in which
PRGs represent a fleeting active phase of a subset of the elliptical galaxies
that have recently undergone mergers/interactions. However, we demonstrate that
only a small proportion (<20%) of disturbed early-type galaxies are capable of
hosting powerful radio sources.
|
By means of a mean-field model extended to include magnetovolumic effects we
study the effect of external fields on the thermal response, characterized by
the isothermal entropy change and/or the adiabatic temperature change. The
model includes two different situations induced by the
magnetovolumic coupling. (i) A first-order para-ferromagnetic phase transition
that entails a volume change. (ii) An inversion of the effective exchange
interaction that promotes the occurrence of an antiferromagnetic phase at low
temperatures. In both cases, we study the magneto- and baro-caloric effects as
well as the corresponding cross caloric responses. By comparing the present
theoretical results with available experimental data for several materials we
conclude that the present thermodynamical model reproduces the general trends
associated with the considered caloric and cross caloric responses.
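The isothermal entropy change can be obtained numerically from the Maxwell relation $\Delta S(T,H)=\int_0^H (\partial M/\partial T)\,dH'$; a toy sketch with an assumed smooth magnetization surface (an illustrative ansatz, not the authors' mean-field model):

```python
import numpy as np

T = np.linspace(200.0, 400.0, 201)      # temperature grid (K)
H = np.linspace(0.0, 5.0, 101)          # field grid (T)
Tc = 300.0
# Toy smooth magnetization surface M(T, H), arbitrary units
M = np.tanh((Tc - T[:, None]) / 30.0 + H[None, :] / 2.0)

dM_dT = np.gradient(M, T, axis=0)
dH = H[1] - H[0]
# Cumulative trapezoidal integral over H': dS(T, H) = int_0^H dM/dT dH'
inc = 0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1]) * dH
dS = np.concatenate([np.zeros((len(T), 1)), np.cumsum(inc, axis=1)], axis=1)
dS_full = dS[:, -1]                     # entropy change for the full field sweep
```

For this ferromagnet-like toy surface the entropy change is negative and peaks in magnitude near the transition, as expected for a conventional magnetocaloric response.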
|
Using high resolution, high-S/N archival UVES spectra, we have performed a
detailed spectroscopic analysis of 4 chemically peculiar HgMn stars (HD 71066,
HD 175640, HD 178065 and HD 221507). Using spectrum synthesis, mean
photospheric chemical abundances are derived for 22 ions of 16 elements. We
find good agreement between our derived abundances and those published
previously by other authors. For the 5 elements that present a sufficient
number of suitable lines, we have attempted to detect vertical chemical
stratification by analyzing the dependence of derived abundance as a function
of optical depth. For most elements and most stars we find no evidence of
chemical stratification with typical 3\sigma upper limits of \Delta\log
N_elem/N_tot~0.1-0.2 dex per unit optical depth. However, for Mn in the
atmosphere of HD 178065 we find convincing evidence of stratification. Modeling
of the line profiles using a two-step model for the abundance of Mn yields a
local abundance varying approximately linearly by ~0.7 dex through the optical
depth range log \tau_5000=-3.6 to -2.8.
|
ECML PKDD is the main European conference on machine learning and data
mining. Since its foundation, it has implemented the publication model common in
computer science: there was one conference deadline; conference submissions
were reviewed by a program committee; papers were accepted with a low
acceptance rate. Proceedings were published in several Springer Lecture Notes
in Artificial Intelligence (LNAI) volumes, while selected papers were invited
to special
issues of the Machine Learning and Data Mining and Knowledge Discovery
journals. In recent years, this model has however come under stress. Problems
include: reviews are of highly variable quality; the purpose of bringing the
community together is lost; reviewing workloads are high; the information
content of conferences and journals decreases; there is confusion among
scientists in interdisciplinary contexts. In this paper, we present a new
publication model, which will be adopted for the ECML PKDD 2013 conference, and
aims to solve some of the problems of the traditional model. The key feature of
this model is the creation of a journal track, which is open to submissions all
year long and allows for revision cycles.
|
A central feature in the Hilbert space formulation of classical mechanics is
the quantisation of classical Liouville densities, leading to what may be
termed Groenewold operators. We investigate the spectra of the Groenewold
operators that correspond to Gaussian and to certain uniform Liouville
densities. We show that when the classical coordinate-momentum uncertainty
product falls below Heisenberg's limit, the Groenewold operators in the
Gaussian case develop negative eigenvalues and eigenvalues larger than 1.
However, in the uniform case, negative eigenvalues are shown to persist for
arbitrarily large values of the classical uncertainty product.
|
Rapidly decreasing tempered stable distributions are useful models for
financial applications. However, no exact simulation method has been available
in the literature. We remedy this by introducing an exact simulation
method in the finite variation case. Our methodology works for the wider class
of $p$-RDTS distributions.
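For the simpler classical (one-sided) tempered stable case, exact simulation by exponential tilting of a positive stable variate is a standard rejection scheme; a sketch of that textbook method (not the paper's method for $p$-RDTS distributions):

```python
import numpy as np

def positive_stable(alpha, size, rng):
    """Zolotarev/Kanter sampler: S with E[exp(-s*S)] = exp(-s**alpha), 0<alpha<1."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = ((np.sin(alpha * u) / np.sin(u)) ** (1.0 / (1.0 - alpha))
         * np.sin((1.0 - alpha) * u) / np.sin(alpha * u))
    return (a / e) ** ((1.0 - alpha) / alpha)

def tempered_stable(alpha, theta, size, rng):
    """Exact rejection sampler: tilt the stable density by exp(-theta * x)."""
    out = np.empty(0)
    while out.size < size:
        s = positive_stable(alpha, size, rng)
        keep = rng.uniform(size=size) < np.exp(-theta * s)
        out = np.concatenate([out, s[keep]])
    return out[:size]

rng = np.random.default_rng(0)
ts = tempered_stable(0.7, 1.0, 5_000, rng)
# Mean of the tilted law is alpha * theta**(alpha - 1) = 0.7 here.
```

The acceptance probability is $e^{-\theta^\alpha}$, so the scheme is efficient for moderate tempering parameters.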
|
We propose a system of coupled microring resonators for the generation of
frequency combs and dissipative Kerr solitons in silicon at telecommunication
frequencies. By taking advantage of structural slow-light, the effective
non-linearity of the material is enhanced, thus relaxing the requirement of
ultra-high quality factors that currently poses a major obstacle to the
realization of silicon comb devices. We demonstrate a variety of frequency comb
solutions characterized by threshold power in the 10-milliwatt range and a
small footprint of $0.1$ mm$^2$, and study their robustness to structural
disorder. The results open the way to the realization of low-power compact comb
devices in silicon at the telecom band.
|
Key properties of physical systems can be described by the eigenvalues of
matrices that represent the system. Computational algorithms that determine the
eigenvalues of these matrices exist, but they generally suffer from a loss of
performance as the matrix grows in size. This process can be expanded to
quantum computation to find the eigenvalues with better performance than the
classical algorithms. One application of such an eigenvalue solver is to
determine energy levels of a molecule given a matrix representation of its
Hamiltonian using the variational principle. Using a variational quantum
eigensolver, we determine the ground state energies of different molecules. We
focus on the choice of optimization strategy for a Qiskit simulator on low-end
hardware. The benefits of several different optimizers were weighed in terms of
accuracy in comparison to an analytic classical solution as well as code
efficiency.
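The variational loop can be illustrated without quantum hardware; a toy sketch in NumPy/SciPy with a 2x2 Hamiltonian and a one-parameter ansatz (a stand-in for the Qiskit workflow described, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2x2 Hamiltonian, e.g. H = Z + 0.5 X in Pauli notation
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter real trial state |psi> = cos(t)|0> + sin(t)|1>."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(params):
    psi = ansatz(params[0])
    return float(psi @ H @ psi)   # Rayleigh quotient; psi is normalized

# COBYLA is one of the gradient-free optimizers commonly compared in such studies
res = minimize(energy, x0=[0.1], method="COBYLA")
exact = np.linalg.eigvalsh(H).min()   # -sqrt(1.25), the true ground energy
```

By the variational principle the optimized energy upper-bounds the exact ground energy, and for this real symmetric toy problem the one-parameter ansatz reaches it.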
|
Let $P_N$ be a uniform random $N\times N$ permutation matrix and let
$\chi_N(z)=\det(zI_N- P_N)$ denote its characteristic polynomial. We prove a
law of large numbers for the maximum modulus of $\chi_N$ on the unit circle,
specifically, \[ \sup_{|z|=1}|\chi_N(z)|= N^{x_0 + o(1)} \] with probability
tending to one as $N\to \infty$, for a numerical constant $x_0\approx 0.652$.
The main idea of the proof is to uncover a logarithmic correlation structure
for the distribution of (the logarithm of) $\chi_N$, viewed as a random field
on the circle, and to adapt a well-known second moment argument for the maximum
of the branching random walk. Unlike the well-studied \emph{CUE field} in which
$P_N$ is replaced with a Haar unitary, the distribution of $\chi_N(e^{2\pi
it})$ is sensitive to Diophantine properties of the point $t$. To deal with
this we borrow tools from the Hardy--Littlewood circle method in analytic
number theory.
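The object of study is easy to probe at small $N$: for a permutation with cycle lengths $\ell_j$, $\chi_N(z)=\prod_j(z^{\ell_j}-1)$, so the maximum modulus on the circle can be evaluated on a grid. A numerical illustration of the quantity only (the theorem concerns $N\to\infty$, so no convergence to $x_0$ is asserted here):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
perm = rng.permutation(N)

def cycle_lengths(p):
    """Cycle type of a permutation given in one-line notation."""
    seen = np.zeros(len(p), dtype=bool)
    out = []
    for i in range(len(p)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            out.append(length)
    return out

# chi_N(z) = prod over cycles of (z^l - 1): cheaper and better
# conditioned than forming det(zI - P) directly.
ls = np.array(cycle_lengths(perm))
t = np.linspace(0.0, 1.0, 20_000, endpoint=False)
z = np.exp(2j * np.pi * t)
log_abs = np.sum(np.log(np.abs(z[:, None] ** ls[None, :] - 1.0) + 1e-300), axis=1)
x_hat = log_abs.max() / np.log(N)   # finite-N proxy for the exponent x_0 ~ 0.652
```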
|
A representation of the quantum affine algebra $U_{q}(\widehat{sl}_3)$ of an
arbitrary level $k$ is constructed in the Fock module of eight boson fields.
This realization reduces to the Wakimoto representation in the $q \rightarrow 1$
limit. The analogues of the screening currents are also obtained. They commute
with the action of $U_{q}(\widehat{sl}_3)$ modulo total differences of some
fields.
|
Taking the quantum Kitaev chain as an example, we have studied the universal
dynamical behaviors resulting from quantum criticality under the condition of
environmental temperature quench. Our findings reveal that when the quantum
parameter is at its critical value, both the excess excitation density at the
end of linear quench and the subsequent free relaxation behavior exhibit
universal scaling behaviors. The scaling laws observed upon quenching to the
zero-temperature quantum critical point and non-zero temperature points exhibit
distinct scaling exponents, which are all intimately related to the dynamical
critical exponents of the quantum phase transition. Additionally, for the case
of linear quench to finite temperatures, we have also discovered an intrinsic
universal dynamical behavior that is independent of quantum criticality. Our
research offers profound insights into the relationship between quantum
criticality and nonequilibrium dynamics from two perspectives:
Kibble-Zurek-like scaling behavior and free relaxation dynamics. Notably, the
Kibble-Zurek-like scaling behavior in this context differs from the standard
Kibble-Zurek mechanism. These two aspects jointly open up a new avenue for us
to understand quantum criticality through real-time dynamical behavior, even at
finite temperatures.
|
In personalized Federated Learning, each member of a potentially large set of
agents aims to train a model minimizing its loss function averaged over its
local data distribution. We study this problem under the lens of stochastic
optimization. Specifically, we introduce information-theoretic lower bounds on
the number of samples required from all agents to approximately minimize the
generalization error of a fixed agent. We then provide strategies matching
these lower bounds, in the all-for-one and all-for-all settings where
respectively one or all agents desire to minimize their own local function. Our
strategies are based on a gradient filtering approach: provided prior knowledge
on some notions of distances or discrepancies between local data distributions
or functions, a given agent filters and aggregates stochastic gradients
received from other agents, in order to achieve an optimal bias-variance
trade-off.
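The gradient-filtering idea can be sketched on toy quadratic losses; the exponential distance-based weights below are an illustrative choice of ours, not the paper's optimal bias-variance rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
targets = 0.3 * rng.normal(size=(n_agents, dim))   # each agent's local optimum
targets[0] = 0.0                                   # the fixed agent of interest

def stoch_grad(agent, w):
    """Noisy gradient of f_a(w) = 0.5 * ||w - target_a||^2."""
    return (w - targets[agent]) + 0.1 * rng.normal(size=dim)

# Prior knowledge: discrepancy of each agent from agent 0 -> filter weights
disc = np.linalg.norm(targets - targets[0], axis=1)
weights = np.exp(-disc / 0.2)
weights /= weights.sum()

w = np.ones(dim)
for _ in range(300):
    grads = np.stack([stoch_grad(a, w) for a in range(n_agents)])
    w -= 0.1 * (weights @ grads)   # filtered, aggregated stochastic gradient
```

Up-weighting agents with similar distributions reduces variance at the cost of a small bias toward the weighted mixture of optima, which is exactly the trade-off described above.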
|
A preparation theorem for compositions of restricted log-exp-analytic
functions and power functions of the form $$h: \mathbb{R} \to \mathbb{R}, x
\mapsto \left\{\begin{array}{ll} x^r, & x > 0, \\ 0, & \textnormal{ else, }
\end{array}\right.$$ for $r \in \mathbb{R}$ is given. Consequently we obtain a
parametric version of Tamm's theorem for this class of functions which is
indeed a full generalisation of the parametric version of Tamm's theorem for
$\mathbb{R}_{\textnormal{an}}^{\mathbb{R}}$-definable functions.
|
In this paper we suggest a moment matching method for quadratic-bilinear
dynamical systems. Most system-theoretic reduction methods for nonlinear
systems rely on multivariate frequency representations. Our approach instead
uses univariate frequency representations tailored towards user-pre-defined
families of inputs. Moment matching then corresponds to a one-dimensional
interpolation problem rather than the multi-dimensional interpolation of the
multivariate approaches, and thus involves fewer interpolation frequencies
to be chosen. Compared to former contributions towards nonlinear model
reduction with univariate frequency representations, our approach shows
profound differences: Our derivation is more rigorous and general and reveals
additional tensor-structured approximation conditions, which should be
incorporated. Moreover, the proposed implementation exploits the inherent
low-rank tensor structure, which enhances its efficiency. In addition, our
approach allows for the incorporation of more general input relations in the
state equations - not only affine-linear ones as in existing system-theoretic
methods - in an elegant way. As a byproduct of the latter, a novel
modification of the multivariate methods also follows, which is able to handle
more general input relations.
|
The mobility edge (ME), representing the critical energy that distinguishes
between extended and localized states, is a key concept in understanding the
transition between extended (metallic) and localized (insulating) states in
disordered and quasiperiodic systems. Here we explore the impact of dissipation
on a quasiperiodic system featuring MEs by calculating the steady-state density
matrix and analyzing quench dynamics with a sudden introduction of dissipation,
and demonstrate that dissipation can lead the system into specific states
predominantly characterized by either extended or localized states,
irrespective of the initial state. Our results establish the use of dissipation
as a new avenue for inducing transitions between extended and localized states,
and for manipulating dynamic behaviors of particles.
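Dissipation aside, the extended/localized distinction itself is easy to exhibit numerically. A sketch using the simpler Aubry-André model (which, unlike the system considered in the paper, has no mobility edge: all states (de)localize together at $\lambda = t$), with the inverse participation ratio (IPR) as diagnostic:

```python
import numpy as np

# Aubry-Andre chain: nearest-neighbor hopping t, on-site 2*lam*cos(2*pi*beta*n)
t, N = 1.0, 610
beta = (np.sqrt(5.0) - 1.0) / 2.0      # inverse golden mean (quasiperiodic)
n = np.arange(N)

def ipr_spectrum(lam):
    """Mean IPR over all eigenstates; ~1/N if extended, O(1) if localized."""
    H = np.diag(2.0 * lam * np.cos(2.0 * np.pi * beta * n))
    H += np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))

ipr_ext = ipr_spectrum(0.5)   # lam < t: extended phase
ipr_loc = ipr_spectrum(2.0)   # lam > t: localized phase
```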
|
Facial Expression Recognition (FER) suffers from data uncertainties caused by
ambiguous facial images and annotators' subjectiveness, resulting in an
excursive semantic and feature covariate shift problem. Existing works usually
correct
mislabeled data by estimating noise distribution, or guide network training
with knowledge learned from clean data, neglecting the associative relations of
expressions. In this work, we propose an Adaptive Graph-based Feature
Normalization (AGFN) method to protect FER models from data uncertainties by
normalizing feature distributions with the association of expressions.
Specifically, we propose a Poisson graph generator to adaptively construct
topological graphs for samples in each mini-batch via a sampling process, and
correspondingly design a coordinate descent strategy to optimize the proposed
network. Our method outperforms state-of-the-art works with accuracies of
91.84% and 91.11% on the benchmark datasets FERPlus and RAF-DB, respectively,
and when the percentage of mislabeled data increases (e.g., to 20%), our
network surpasses existing works significantly by 3.38% and 4.52%.
|
We construct a two-orbital effective model for a ferromagnetic Kagome-lattice
shandite, $\rm{{Co}_3{Sn}_2{S}_2}$, a candidate material of magnetic Weyl
semimetals, by considering one $d$ orbital from Co, and one $p$ orbital from
interlayer Sn. The energy spectrum near the Fermi level, and the configurations
of the Weyl points, computed by using our model, are similar to those obtained
by first-principles calculations. We also show that nodal rings appear even
with spin-orbit coupling when the magnetization points in an in-plane
direction.
Additionally, magnetic properties of $\rm{{Co}_3{Sn}_2{S}_2}$ and other
shandite materials are discussed.
|
The majority of real-world processes are spatiotemporal, and the data
generated by them exhibits both spatial and temporal evolution. Weather is one
of the most essential processes in this domain, and weather forecasting has
become a crucial part of our daily routine. Weather data analysis is considered
among the most complex and challenging tasks. Although numerical weather
prediction
models are currently state-of-the-art, they are resource-intensive and
time-consuming. Numerous studies have proposed time series-based models as a
viable alternative to numerical forecasts. Recent research in the area of time
series analysis indicates significant advancements, particularly regarding the
use of state-space-based models (white box) and, more recently, the integration
of machine learning and deep neural network-based models (black box). The most
famous examples of such models are RNNs and transformers. These models have
demonstrated remarkable results in the field of time-series analysis and have
proven effective in modelling temporal correlations. It is crucial to
capture both temporal and spatial correlations for a spatiotemporal process, as
the values at nearby locations and time affect the values of a spatiotemporal
process at a specific point. This self-contained paper explores various
regional data-driven weather forecasting methods, i.e., forecasting over
multiple latitude-longitude points (matrix-shaped spatial grid) to capture
spatiotemporal correlations. The results showed that spatiotemporal prediction
models reduced computational costs while improving accuracy. In particular, the
proposed tensor train dynamic mode decomposition-based forecasting model has
comparable accuracy to the state-of-the-art models without the need for
training. We provide convincing numerical experiments to show that the proposed
approach is practical.
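As a baseline for the approach named above, plain (non-tensor-train) dynamic mode decomposition already gives a training-free one-step forecaster; a sketch on synthetic traveling-wave data:

```python
import numpy as np

# Synthetic spatiotemporal field: two traveling sine modes on a 1-D grid
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = 0.1 * np.arange(120)
X = (np.sin(x[:, None] - 0.5 * t[None, :])
     + 0.5 * np.sin(2.0 * x[:, None] + 0.8 * t[None, :]))

X1, X2 = X[:, :-1], X[:, 1:]                 # snapshot pairs x_k -> x_{k+1}
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                        # the field above has exact rank 4
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T / s  # reduced one-step operator

def dmd_step(xk):
    """One-step forecast: x_{k+1} ~ U A_tilde U^H x_k."""
    return U @ (A_tilde @ (U.conj().T @ xk))

pred = dmd_step(X[:, -2])
err = np.linalg.norm(pred - X[:, -1]) / np.linalg.norm(X[:, -1])
```

Because the synthetic dynamics are exactly linear on a rank-4 subspace, the one-step forecast here is essentially exact; real weather fields require the larger ranks and tensor-train compression discussed above.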
|
Entanglement is the key feature of many-body quantum systems, and the
development of new tools to probe it in the laboratory is an outstanding
challenge. Measuring the entropy of different partitions of a quantum system
provides a way to probe its entanglement structure. Here, we present and
experimentally demonstrate a new protocol for measuring entropy, based on
statistical correlations between randomized measurements. Our experiments,
carried out with a trapped-ion quantum simulator, prove the overall coherent
character of the system dynamics and reveal the growth of entanglement between
its parts - both in the absence and presence of disorder. Our protocol
represents a universal tool for probing and characterizing engineered quantum
systems in the laboratory, applicable to arbitrary quantum states of up to
several tens of qubits.
|
An explicit check of the AGT relation between the W_N-symmetry controlled
conformal blocks and U(N) Nekrasov functions requires knowledge of the
Shapovalov matrix and various triple correlators for W-algebra descendants. We
collect the simplest expressions of this type for N=3 and for the two lowest
descendant levels, together with the detailed derivations, which can be now
computerized and used in more general studies of conformal blocks and AGT
relations at higher levels.
|
We tackle the problem of estimating correspondences from a general marker,
such as a movie poster, to an image that captures such a marker.
Conventionally, this problem is addressed by fitting a homography model based
on sparse feature matching. However, such methods can only handle plane-like
markers, and the sparse features do not sufficiently utilize appearance
information. In this paper, we propose a novel framework NeuralMarker, training
a neural network estimating dense marker correspondences under various
challenging conditions, such as marker deformation, harsh lighting, etc.
Besides, we also propose a novel marker correspondence evaluation method
circumventing annotations on real marker-image pairs and create a new
benchmark. We show that NeuralMarker significantly outperforms previous methods
and enables new interesting applications, including Augmented Reality (AR) and
video editing.
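The conventional pipeline that NeuralMarker is contrasted with fits a homography to sparse correspondences; a minimal direct-linear-transform (DLT) sketch of that baseline:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: 3x3 H with dst ~ H @ src (homogeneous coords)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vh = np.linalg.svd(np.array(rows, dtype=float))
    return Vh[-1].reshape(3, 3)          # null vector of the DLT system

def apply_h(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# Synthetic check: recover a known homography from 4 exact correspondences
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = apply_h(H_true, src)
H_est = fit_homography(src, dst)
H_est /= H_est[2, 2]
```

An 8-DoF homography is exact only for planar markers, which is precisely the limitation motivating dense learned correspondences.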
|
Using numerical, theoretical, and general methods, we construct evaluation
formulas for the Jacobi $\theta$ functions. Some of our results are
conjectural, but they are verified numerically.
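Numerical verification of such formulas is straightforward because the $\theta$ series converge rapidly; e.g. $\theta_3(z,q)=1+2\sum_{n\ge 1}q^{n^2}\cos(2nz)$, checked below against the classical value $\theta_3(0,e^{-\pi})=\pi^{1/4}/\Gamma(3/4)$ (a generic sanity check, not one of the paper's formulas):

```python
import numpy as np
from math import gamma, pi

def theta3(z, q, n_terms=50):
    """theta_3(z, q) = 1 + 2 * sum_{n>=1} q^(n^2) * cos(2 n z), for |q| < 1."""
    n = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum(q ** (n ** 2) * np.cos(2.0 * n * z))

val = theta3(0.0, np.exp(-pi))
ref = pi ** 0.25 / gamma(0.75)   # classical special value at the lemniscatic point
```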
|
A set $R \subseteq V(G)$ is a resolving set of a graph $G$ if for all
distinct vertices $v,u \in V(G)$ there exists an element $r \in R$ such that
$d(r,v) \neq d(r,u)$. The metric dimension $\dim(G)$ of the graph $G$ is the
minimum cardinality of a resolving set of $G$. A resolving set with cardinality
$\dim(G)$ is called a metric basis of $G$. We consider vertices that are in all
metric bases, and we call them basis forced vertices. We give several
structural properties of sparse and dense graphs where basis forced vertices
are present. In particular, we give bounds for the maximum number of edges in a
graph containing basis forced vertices. Our bound is optimal whenever the
number of basis forced vertices is even. Moreover, we provide a method of
constructing fairly sparse graphs with basis forced vertices. We also study
vertices which are in no metric basis in connection to cut-vertices and
pendants. Furthermore, we show that deciding whether a vertex is in all metric
bases is co-NP-hard, and deciding whether a vertex is in no metric basis is
NP-hard.
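The definitions translate directly into a brute-force computation on small graphs (exponential in $|V|$, for illustration only, consistent with the hardness results above):

```python
from itertools import combinations

def distances(adj):
    """All-pairs shortest paths by BFS on an adjacency-list graph."""
    n = len(adj)
    d = [[None] * n for _ in range(n)]
    for s in range(n):
        d[s][s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if d[s][v] is None:
                        d[s][v] = d[s][u] + 1
                        nxt.append(v)
            frontier = nxt
    return d

def is_resolving(R, d, n):
    """R resolves G iff the distance vectors to R are pairwise distinct."""
    codes = {tuple(d[r][v] for r in R) for v in range(n)}
    return len(codes) == n

def metric_dimension(adj):
    d, n = distances(adj), len(adj)
    for k in range(1, n + 1):
        for R in combinations(range(n), k):
            if is_resolving(R, d, n):
                return k

# Example: the 5-cycle C_5 (every cycle has metric dimension 2)
c5 = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
dim_c5 = metric_dimension(c5)
```

Enumerating all minimum resolving sets this way also identifies basis forced vertices on small instances: they are the vertices appearing in every metric basis.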
|
Human-object interaction detection is an important and relatively new class
of visual relationship detection tasks, essential for deeper scene
understanding. Most existing approaches decompose the problem into object
localization and interaction recognition. Despite showing progress, these
approaches only rely on the appearances of humans and objects and overlook the
available context information, crucial for capturing subtle interactions
between them. We propose a contextual attention framework for human-object
interaction detection. Our approach leverages context by learning
contextually-aware appearance features for human and object instances. The
proposed attention module then adaptively selects relevant instance-centric
context information to highlight image regions likely to contain human-object
interactions. Experiments are performed on three benchmarks: V-COCO, HICO-DET
and HCVRD. Our approach outperforms the state-of-the-art on all datasets. On
the V-COCO dataset, our method achieves a relative gain of 4.4% in terms of
role mean average precision ($mAP_{role}$), compared to the existing best
approach.
|
We suggest that most nearby active galactic nuclei are fed by a series of
small--scale, randomly--oriented accretion events. Outside a certain radius
these events promote rapid star formation, while within it they fuel the
supermassive black hole. We show that the events have a characteristic time
evolution. This picture agrees with several observational facts. The expected
luminosity function is broadly in agreement with that observed for
moderate--mass black holes. The spin of the black hole is low, and aligns with
the inner disc in each individual feeding event. This implies radio jets
aligned with the axis of the obscuring torus, and uncorrelated with the
large--scale structure of the host galaxy. The ring of young stars observed
about the Galactic Centre is close to where our picture predicts that star
formation should occur.
|
In diffusion-based molecular communication (DMC), one important functionality
of a transmitter nano-machine is signal modulation. In particular, the
transmitter has to be able to control the release of signaling molecules for
modulation of the information bits. An important class of control mechanisms in
natural cells for releasing molecules is based on ion channels which are
pore-forming proteins across the cell membrane whose opening and closing may be
controlled by a gating parameter. In this paper, a modulator for DMC based on
ion channels is proposed which controls the rate at which molecules are
released from the transmitter by modulating a gating parameter signal.
Exploiting the capabilities of the proposed modulator, an on-off keying
modulation scheme is introduced and the corresponding average modulated signal,
i.e., the average release rate of the molecules from the transmitter, is
derived in the Laplace domain. By making a simplifying assumption, a
closed-form expression for the average modulated signal in the time domain is
obtained which constitutes an upper bound on the total number of released
molecules regardless of this assumption. The derived average modulated signal
is compared to results obtained with a particle based simulator. The numerical
results show that the derived upper bound is tight if the number of ion
channels distributed across the transmitter (cell) membrane is small.
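The on-off keying idea can be caricatured in a few lines: the gating signal sets the channel open probability per bit interval, and molecule release is a modulated Poisson process. This is a crude sketch with made-up parameter values; it ignores the ligand-gating dynamics and the Laplace-domain model derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T_bit, dt, n_ch = 1.0, 0.01, 50          # bit time (s), step (s), channel count
bits = [1, 0, 1, 1, 0]
rate_open = 10.0                         # molecules/s through one open channel

# Gating signal: bit 1 -> open probability 0.8, bit 0 -> nearly closed (0.02)
p_open = np.concatenate(
    [np.full(int(T_bit / dt), 0.8 if b else 0.02) for b in bits])

# Average modulated signal (mean release rate) and one stochastic realization
mean_rate = n_ch * p_open * rate_open
released = rng.poisson(mean_rate * dt)
```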
|
Quantum interference effects in inter-conversion between cold atoms and
diatomic molecules are analysed. Within the framework of Fano's theory,
continuum-bound anisotropic dressed state formalism of atom-molecule quantum
dynamics is presented. This formalism is applicable in photo- and
magneto-associative strong-coupling regimes. The significance of Fano effect in
ultracold atom-molecule transitions is discussed. Quantum effects at low energy
atom-molecule interface are important for exploring coherent phenomena in
hitherto unexplored parameter regimes.
|
In this work, we propose an approach to determine the dosages of antithyroid
agents to treat hyperthyroid patients. Instead of relying on a trial-and-error
approach as it is commonly done in clinical practice, we suggest to determine
the dosages by means of a model predictive control (MPC) scheme. To this end,
we extend a mathematical model of the pituitary-thyroid feedback loop such that
the intake of methimazole, a common antithyroid agent, can be considered. Based
on this extension, we develop an MPC scheme to determine suitable dosages. In
numerical simulations, we consider scenarios in which (i) patients are affected
by Graves' disease and take the medication orally, (ii) patients are
additionally affected by high intrathyroidal iodide concentrations and take the
medication orally, and (iii) patients suffer from a life-threatening
thyrotoxicosis, in which case the medication is usually given intravenously.
Our
results suggest that determining the medication dosages by means of an MPC
scheme is a promising alternative to the currently applied trial-and-error
approach.
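The receding-horizon logic can be sketched generically; below, a toy scalar model (elevated untreated steady state, dose lowers the level) stands in for the pituitary-thyroid model, which the paper develops in detail:

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: x_{k+1} = a*x_k + b*u_k + c; untreated steady state c/(1-a) = 3.0
a, b, c = 0.9, -0.05, 0.3
x_ref, horizon, n_steps = 1.0, 10, 40

def rollout(x0, u_seq):
    xs, x = [], x0
    for u in u_seq:
        x = a * x + b * u + c
        xs.append(x)
    return np.array(xs)

def mpc_cost(u_seq, x0):
    """Tracking error over the horizon plus a small dose penalty."""
    xs = rollout(x0, u_seq)
    return np.sum((xs - x_ref) ** 2) + 1e-3 * np.sum(u_seq ** 2)

x = 3.0                                  # start at the untreated steady state
for _ in range(n_steps):
    res = minimize(mpc_cost, np.zeros(horizon), args=(x,),
                   bounds=[(0.0, 20.0)] * horizon)   # non-negative, capped dose
    x = a * x + b * res.x[0] + c         # apply only the first planned dose
```

Re-solving at every step and applying only the first dose is what makes the scheme closed-loop, in contrast to a fixed trial-and-error dosing plan.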
|
The geo-localization and navigation technology of unmanned aerial vehicles
(UAVs) in denied environments is currently a prominent research area. Prior
approaches mainly employed a two-stream network with non-shared weights to
extract features from UAV and satellite images separately, followed by related
modeling to obtain the response map. However, the two-stream network extracts
UAV and satellite features independently. This approach significantly affects
the efficiency of feature extraction and increases the computational load. To
address these issues, we propose a novel coarse-to-fine one-stream network
(OS-FPI). Our approach allows information exchange between UAV and satellite
features during early image feature extraction. To improve the model's
performance, the framework retains feature maps generated at different stages
of the feature extraction process for the feature fusion network, and
establishes additional connections between UAV and satellite feature maps in
the feature fusion network. Additionally, the framework introduces offset
prediction to further refine and optimize the model's prediction results based
on the classification tasks. Our proposed model, boasts a similar inference
speed to FPI while significantly reducing the number of parameters. It can
achieve better performance with fewer parameters under the same conditions.
Moreover, it achieves state-of-the-art performance on the UL14 dataset.
Compared to previous models, our model achieved a significant 10.92-point
improvement on the RDS metric, reaching 76.25. Furthermore, its performance in
meter-level localization accuracy is impressive, with 182.62% improvement in
3-meter accuracy, 164.17% improvement in 5-meter accuracy, and 137.43%
improvement in 10-meter accuracy.
|
FASER is one of the promising experiments that search for long-lived
particles beyond the Standard Model. In this paper, we consider charged lepton
flavor violation (CLFV) via a light and weakly interacting boson and discuss
the detectability by FASER. We focus on four types of CLFV interactions, i.e.,
the scalar-, pseudoscalar-, vector-, and dipole-type interaction, and calculate
the sensitivity of FASER to each CLFV interaction. We show that, with the setup
of FASER2, a wide region of the parameter space can be explored. Particularly,
it is found that FASER2 has a sensitivity to very small coupling regions in
which the rare muon decays, such as $\mu \rightarrow e\gamma$, cannot place
bounds, and that there is a possibility to detect CLFV decays of the new light
bosons.
|
Measurement and astrophysical interpretation of characteristic gamma-ray
lines from nucleosynthesis was one of the prominent science goals of the
INTEGRAL mission and in particular its spectrometer SPI. Emission from 26Al and
from 60Fe decay lines originates from accumulated ejecta of nucleosynthesis
sources, and appears diffuse in nature. 26Al and 60Fe are believed to originate
mostly from massive star clusters. Gamma-ray observations open an interesting
window to trace the fate and flow of nucleosynthesis ejecta, after they have
left the immediate sources and their birth sites, and on their path to mix with
ambient interstellar gas. The INTEGRAL 26Al emission image confirms earlier
findings of clumpiness and an extent along the entire plane of the Galaxy,
supporting its origin from massive-star groups. INTEGRAL spectroscopy resolved
the line and found Doppler broadenings and systematic shifts from large-scale
galactic rotation. But an excess velocity of ~200 km/s suggests that 26Al
decays preferentially within large superbubbles that extend in forward
directions between spiral arms. The detection of 26Al line emission from nearby
Orion and the Eridanus superbubble supports this interpretation. Positrons from
beta+ decays of 26Al and other nucleosynthesis ejecta have been found to not
explain the morphology of positron annihilation gamma-rays at 511 keV that have
been measured by INTEGRAL. The 60Fe signal measured by INTEGRAL is diffuse but
too weak for an imaging interpretation; an origin from point-like/concentrated
sources is excluded. The 60Fe/26Al ratio is constrained to the range 0.2-0.4.
Beyond improving precision of these results, diffuse nucleosynthesis
contributions from novae (through 22Na radioactivity) and from past neutron
star mergers in our Galaxy (from r-process radioactivity) are exciting new
prospects for the remaining mission extensions.
|
In this article we consider the Cauchy problem for the cubic focusing
nonlinear Schr\"odinger (NLS) equation on the line with initial datum close
to a particular $N$-soliton. Using inverse scattering and the $\bar{\partial}$
method we establish the decay of the $L^{\infty}$ norm of the residual term in
time.
|
Pulsar timing arrays (PTAs) and the Laser Interferometer Space Antenna (LISA)
will open complementary observational windows on massive black-hole binaries
(MBHBs), i.e., with masses in the range $\sim 10^6 - 10^{10}\,$ M$_{\odot}$.
While PTAs may detect a stochastic gravitational-wave background from a
population of MBHBs, during operation LISA will detect individual merging
MBHBs. To demonstrate the profound interplay between LISA and PTAs, we estimate
the number of MBHB mergers that one can expect to observe with LISA by
extrapolating direct observational constraints on the MBHB merger rate inferred
from PTA data. For this, we postulate that the common signal observed by PTAs
(and consistent with the increased evidence recently reported) is an
astrophysical background sourced by a single MBHB population. We then constrain
the LISA detection rate, $\mathcal{R}$, in the mass-redshift space by combining
our Bayesian-inferred merger rate with LISA's sensitivity to spin-aligned,
inspiral-merger-ringdown waveforms. Using an astrophysically-informed formation
model, we predict a 95$\%$ upper limit on the detection rate of $\mathcal{R} <
134\,{\rm yr}^{-1}$ for binaries with total masses in the range $10^7 - 10^8\,$
M$_{\odot}$. For higher masses, i.e., $>10^8\,$ M$_{\odot}$, we find
$\mathcal{R} < 2\,(1)\,\mathrm{yr}^{-1}$ using an astrophysically-informed
(agnostic) formation model, rising to $11\,(6)\,\mathrm{yr}^{-1}$ if the LISA
sensitivity bandwidth extends down to $10^{-5}$ Hz. Forecasts of LISA science
potential with PTA background measurements should improve as PTAs continue
their search.
|
We consider the computation of syzygies of multivariate polynomials in a
finite-dimensional setting: for a $\mathbb{K}[X_1,\dots,X_r]$-module
$\mathcal{M}$ of finite dimension $D$ as a $\mathbb{K}$-vector space, and given
elements $f_1,\dots,f_m$ in $\mathcal{M}$, the problem is to compute syzygies
between the $f_i$'s, that is, polynomials $(p_1,\dots,p_m)$ in
$\mathbb{K}[X_1,\dots,X_r]^m$ such that $p_1 f_1 + \dots + p_m f_m = 0$ in
$\mathcal{M}$. Assuming that the multiplication matrices of the $r$ variables
with respect to some basis of $\mathcal{M}$ are known, we give an algorithm
which computes the reduced Gr\"obner basis of the module of these syzygies, for
any monomial order, using $O(m D^{\omega-1} + r D^\omega \log(D))$ operations
in the base field $\mathbb{K}$, where $\omega$ is the exponent of matrix
multiplication. Furthermore, assuming that $\mathcal{M}$ is itself given as
$\mathcal{M} = \mathbb{K}[X_1,\dots,X_r]^n/\mathcal{N}$, under some assumptions
on $\mathcal{N}$ we show that these multiplication matrices can be computed
from a Gr\"obner basis of $\mathcal{N}$ within the same complexity bound. In
particular, taking $n=1$, $m=1$ and $f_1=1$ in $\mathcal{M}$, this yields a
change of monomial order algorithm along the lines of the FGLM algorithm with a
complexity bound which is sub-cubic in $D$.
|
Background: Solving nuclear many-body problems with an ab initio approach is
widely recognized as a computationally challenging problem. Quantum computers
offer a promising path to address this challenge. There is an urgent need to
develop quantum algorithms for this purpose.
Objective: In this work, we explore the application of the quantum algorithm
of adiabatic state preparation with quantum phase estimation in ab initio
nuclear structure theory. We focus on solving the low-lying spectra (including
both the ground and excited states) of simple nuclear systems.
Ideas: The efficiency of this algorithm is hindered by the emergence of small
energy gaps (level crossings) during the adiabatic evolution. In order to
improve the efficiency, we introduce techniques to avoid level crossings: 1) by
suitable design of the reference Hamiltonian; 2) by insertions of perturbation
terms to modify the adiabatic path.
Results: We illustrate this algorithm by solving for the deuteron ground
state energy and the spectrum of the deuteron bound in a harmonic oscillator
trap, implemented on the IBM Qiskit quantum simulator. The quantum results
agree well with the classical results obtained by matrix diagonalization.
Outlook: With our improvements to the efficiency, this algorithm provides a
promising tool for investigating the low-lying spectra of complex nuclei on
future quantum computers.
|
1. From long-term, spatial capture-recapture (SCR) surveys we infer a
population's dynamics over time and distribution over space. It is becoming
more computationally feasible to fit these open population SCR (openSCR) models
to large datasets and include complex model components, e.g., spatially-varying
density surfaces and time-varying population dynamics. Yet, there is limited
knowledge on how these methods perform.
2. As a case study, we analyze a multi-year, photo-ID survey on bottlenose
dolphins (Tursiops truncatus) in Barataria Bay, Louisiana, USA. This population
has been monitored due to the impacts of the nearby Deepwater Horizon oil spill
in 2010. Over 2000 capture histories have been collected between 2010 and 2019.
Our aim is to identify the challenges in applying openSCR methods to real data
and to describe a workflow for other analysts using these methods.
3. We show that inference on survival, recruitment, and density over time
since the oil spill provides insight into increased mortality after the spill,
possible redistribution of the population thereafter, and continued population
decline. Issues in the application are highlighted throughout: possible model
misspecification, sensitivity of parameters to model selection, and difficulty
in interpreting results due to model assumptions and irregular surveying in
time and space. For each issue, we present practical solutions including
assessing goodness-of-fit, model-averaging, and clarifying the difference
between quantitative results and their qualitative interpretation.
4. Overall, this case study serves as a practical template other analysts can
follow and extend; it also highlights the need for further research on the
applicability of these methods as we demand richer inference from them.
|
Specific heat and magnetization measurements have been performed on
high-quality single crystals of filled-skutterudite PrFe_4P_{12} in order to
study the high-field heavy-fermion state (HFS) and low-field ordered state
(ODS). From a broad hump observed in C/T vs T in HFS for magnetic fields
applied along the <100> direction, the Kondo temperature of ~ 9 K and the
existence of ferromagnetic Pr-Pr interactions are deduced. The ^{141}Pr
nuclear Schottky contribution, which works as a highly-sensitive on-site probe
for the Pr magnetic moment, sets an upper bound for the ordered moment of
~ 0.03 \mu_B/Pr-ion. This fact, combined with other experimental facts,
strongly indicates that the primary order parameter in the ODS is nonmagnetic
and most probably of quadrupolar origin. Significantly suppressed heavy-fermion
behavior in the ODS suggests the possibility that the quadrupolar degrees of
freedom are
essential for the heavy quasiparticle band formation in the HFS. Possible
crystalline-electric-field level schemes estimated from the anisotropy in the
magnetization are consistent with this conjecture.
|
Network coding-based link failure recovery techniques provide near-hitless
recovery and offer high capacity efficiency. Diversity coding is the first
technique to incorporate coding in this field and is easy to implement over
small arbitrary networks. However, its capacity efficiency is restricted by its
systematic coding and high design complexity even though its design complexity
is lower than the other coding-based recovery techniques. Alternative
techniques mitigate some of these limitations, but they are difficult to
implement over arbitrary networks. In this paper, we propose a simple column
generation-based design algorithm and a novel advanced diversity coding
technique to achieve near-hitless recovery over arbitrary networks. The design
framework consists of two parts: a main problem and a subproblem. The main
problem is realized with Linear Programming (LP) and Integer Linear
Programming (ILP), whereas the subproblem can be realized with different
methods. The simulation
results suggest that both the novel coding structure and the novel design
algorithm lead to higher capacity efficiency for near-hitless recovery. The
novel design algorithm simplifies the capacity placement problem which enables
implementing diversity coding-based techniques on very large arbitrary
networks.
|
Robustness evaluation against adversarial examples has become increasingly
important to unveil the trustworthiness of the prevailing deep models in
natural language processing (NLP). However, in contrast to the computer vision
domain where the first-order projected gradient descent (PGD) is used as the
benchmark approach to generate adversarial examples for robustness evaluation,
a principled first-order gradient-based robustness evaluation framework has
been lacking in NLP. The emerging optimization challenges lie in 1) the
discrete
nature of textual inputs together with the strong coupling between the
perturbation location and the actual content, and 2) the additional constraint
that the perturbed text should be fluent and achieve a low perplexity under a
language model. These challenges make the development of PGD-like NLP attacks
difficult. To bridge the gap, we propose TextGrad, a new attack generator using
gradient-driven optimization, supporting high-accuracy and high-quality
assessment of adversarial robustness in NLP. Specifically, we address the
aforementioned challenges in a unified optimization framework, and we develop
an effective convex relaxation method to co-optimize the continuously-relaxed
site selection and perturbation variables and leverage an effective sampling
method to establish an accurate mapping from the continuous optimization
variables to the discrete textual perturbations. Moreover, as a first-order
attack generation method, TextGrad can be baked into adversarial training to
further improve the robustness of NLP models. Extensive experiments are
provided to demonstrate the effectiveness of TextGrad not only in attack
generation for robustness evaluation but also in adversarial defense.
|
Recently, I reported the discovery of a new fundamental relationship among
the major elements (Fe, Mg, Si) of chondrites that admits the possibility that
ordinary chondrite meteorites are derived from two components, a relatively
oxidized and undifferentiated, primitive component and a somewhat
differentiated, planetary component, with oxidation state like the highly
reduced enstatite chondrites, which I suggested was identical to Mercury's
complement of lost elements. Subsequently, on the basis of that relationship, I
derived expressions, as a function of the mass of planet Mercury and the mass
of its core, to estimate the mass of Mercury's lost elements, the mass of
Mercury's alloy and rock protoplanetary core, and the mass of Mercury's gaseous
protoplanet. Here, on the basis of the supposition that Mercury's complement of
lost elements is in fact identical to the planetary component of ordinary
chondrite formation, I estimate, as a function of Mercury's core mass, the
total mass of ordinary chondrite matter originally present in the Solar System.
Although Mercury's mass is well known, its core mass is not, being widely
believed to be in the range of 70 to 80 percent of the planet mass. For a core
mass of 75 percent, the calculated total mass of ordinary chondrite matter
originally present in the Solar System amounts to 1.83E24 kg, about 5.5 times
the mass of Mercury. That amount of mass is insufficient in itself to form a
planet as massive as the Earth, but may have contributed significantly to the
formation of Mars, as well as adding to the veneer of other planets, including
the Earth. Presently, only about 0.1 percent of that mass remains in the
asteroid belt.
|
Computation with quantum harmonic oscillators, or qumodes, is a promising and
rapidly evolving approach to quantum computing. In contrast to qubits,
which are two-level quantum systems, bosonic qumodes can in principle have
infinite discrete levels, and can also be represented with continuous variable
bases. One of the most promising applications of quantum computing is
simulating many-fermion problems such as molecular electronic structure.
Although there has been much recent progress in simulating many-fermion
systems on qubit-based quantum hardware, these approaches cannot be easily
extended to bosonic quantum devices due to the fundamental difference in the
physics represented by qubits and qumodes. In this work, we show how an
electronic
structure Hamiltonian can be transformed into a system of qumodes with a
fermion-to-boson mapping scheme and apply it to simulate the electronic
structure of the dihydrogen molecule as a system of two qumodes. Our work
opens the
door for simulating many-fermion systems by harnessing the power of bosonic
quantum devices.
|
Aesthetic assessment is subjective, and the distribution of the aesthetic
levels is imbalanced. In order to realize automatic assessment of photo
aesthetics, we focus on using repetitive self-revised learning (RSRL) to train
a CNN-based aesthetics classification network on an imbalanced data set. In
RSRL, the network is trained repetitively by dropping out, from the training
data set, the low-likelihood photo samples at the middle levels of aesthetics,
based on the previously trained network. Further, the two retained networks
are used to extract highlight regions of the photos related to the aesthetic
assessment. Experimental results show that CNN-based repetitive self-revised
learning is effective for improving the performance of imbalanced
classification.
|
There exists a correlation between geospatial activity temporal patterns and
the type of land use. A novel self-supervised approach is proposed to stratify
landscape based on mobility activity time series. First, the time series
signal
is transformed to the frequency domain and then compressed into task-agnostic
temporal embeddings by a contractive autoencoder, which preserves cyclic
temporal patterns observed in time series. The pixel-wise embeddings are
converted to image-like channels that can be used for task-based, multimodal
modeling of downstream geospatial tasks using deep semantic segmentation.
Experiments show that temporal embeddings are semantically meaningful
representations of time series data and are effective across different tasks
such as classifying residential and commercial areas. Temporal embeddings
transform sequential, spatiotemporal motion trajectory data into semantically
meaningful image-like tensor representations that can be combined (multimodal
fusion) with other data modalities that are or can be transformed into
image-like tensor representations (e.g., RGB imagery, graph embeddings of
road networks, passively collected imagery like SAR, etc.) to facilitate
multimodal learning in geospatial computer vision. Multimodal computer vision
is critical for training machine learning models for geospatial feature
detection to keep a geospatial mapping service up-to-date in real-time and can
significantly improve user experience and above all, user safety.
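A minimal sketch of the first stage, under our own simplifying assumptions (a
truncated FFT magnitude stands in for the contractive autoencoder, and a
synthetic daily activity cycle stands in for real mobility data):

```python
import numpy as np

def temporal_embedding(series, k=8):
    """Frequency-domain compression of an activity time series: keep the
    magnitudes of the first k FFT bins as a task-agnostic embedding."""
    spec = np.fft.rfft(series)
    return np.abs(spec[:k])

t = np.arange(24 * 7)                    # one week of hourly activity counts
daily = 1 + np.cos(2 * np.pi * t / 24)   # strong daily cycle (e.g. residential)
emb = temporal_embedding(daily)
# The dominant non-DC bin is 7 = 168 hours / 24-hour period, so the embedding
# preserves the cyclic temporal pattern, as described in the abstract.
print(emb.shape, int(np.argmax(emb[1:])) + 1)
```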
|
We introduce a discrete deformation of Rieffel type for finite (quantum)
groups. Using this, we give a non-trivial example of a finite quantum group of
order 18. We also give a deformation of finite groups of Lie type by using
their maximal abelian subgroups.
|
We present the complete NLO electroweak contribution to the production of
diagonal squark--anti-squark pairs in proton--proton collisions. We discuss
their effects for the production of squarks different from top squarks, in the
SPS1a' scenario.
|
Recent studies have found higher galaxy metallicities in richer environments.
It is not yet clear, however, whether metallicity-environment dependencies are
merely an indirect consequence of environmentally dependent formation
histories, or of environment related processes directly affecting metallicity.
Here, we present a first detailed study of metallicity-environment correlations
in a cosmological hydrodynamical simulation, in particular the Illustris
simulation. Illustris galaxies display similar relations to those observed.
Utilizing our knowledge of simulated formation histories, and leveraging the
large simulation volume, we construct galaxy samples of satellites and centrals
that are matched in formation histories. This allows us to find that ~1/3 of
the metallicity-environment correlation is due to different formation histories
in different environments. This is a combined effect of satellites (in
particular, in denser environments) having on average lower z=0 star formation
rates (SFRs), and of their older stellar ages, even at a given z=0 SFR. Most of
the difference, ~2/3, however, is caused by the higher concentration of
star-forming disks of satellite galaxies, as this biases their SFR-weighted
metallicities toward their inner, more metal-rich parts. With a newly defined
quantity, the 'radially averaged' metallicity, which captures the metallicity
profile but is independent of the SFR profile, the metallicities of satellites
and centrals become environmentally independent once they are matched in
formation history. We find that circumgalactic metallicity (defined as rapidly
inflowing gas around the virial radius), while sensitive to environment, has no
measurable effect on the metallicity of the star-forming gas inside the
galaxies.
|
We investigate the density dependence of the symmetry energy in a
relativistic description by decomposing the iso-vector mean field into
contributions with different Lorentz properties. We find important effects of
the iso-vector, scalar $\delta$ channel on the density behavior of the symmetry
energy. Finite nuclei studies show only moderate effects originating from the
virtual $\delta$ meson. In heavy ion collisions from Fermi to relativistic
energies up to $1-2 AGeV$ one finds important contributions on the dynamics
arising from the different treatment of the microscopic Lorentz structure of
the symmetry energy. We discuss a variety of possible signals which could set
constraints on the still unknown density dependence of the symmetry energy
once experimental data become available. Examples of such observables are
isospin collective flow, threshold production of pions and kaons, isospin
equilibration and stopping in asymmetric systems like $Au+Au$, $Sn+Sn$ and
$Ru(Zr)+Zr(Ru)$.
|
The representation of numbers by tensor product states of composite quantum
systems is examined. Consideration is limited to k-ary representations of
length L and arithmetic modulo k^{L}. An abstract representation on an L fold
tensor product Hilbert space H^{arith} of number states and operators for the
basic arithmetic operations is described. Unitary maps onto a physical
parameter based tensor product space H^{phy} are defined and the relations
between these two spaces and the dependence of algorithm dynamics on the
unitary maps are discussed. The important condition of efficient implementation
by physically realizable Hamiltonians of the basic arithmetic operations is
also discussed.
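As a concrete classical sketch of the arithmetic being represented
(illustrative only; the abstract concerns its quantum implementation), here is
k-ary, length-L digit arithmetic modulo k^L, with a little-endian digit order
as our convention:

```python
def to_digits(n, k, L):
    """Represent n modulo k^L as L base-k digits, least significant first."""
    n %= k ** L
    return [(n // k**i) % k for i in range(L)]

def from_digits(digits, k):
    return sum(d * k**i for i, d in enumerate(digits))

def add_mod(a_digits, b_digits, k):
    """Digit-wise addition with carry; dropping the final carry is exactly
    arithmetic modulo k^L."""
    out, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        out.append(s % k)
        carry = s // k
    return out

k, L = 3, 4                               # ternary digits, length 4, mod 81
a, b = to_digits(70, k, L), to_digits(25, k, L)
print(from_digits(add_mod(a, b, k), k))   # (70 + 25) mod 81 = 14
```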
|
The algebraic approach to the operator perturbation method has been applied
to two quantum-mechanical systems: ``The Stark Effect in the Harmonic
Oscillator'' and ``The Generalized Zeeman Effect''. To that end, two
realizations of the superoperators involved in the formalism have been carried
out. The first is based on the Heisenberg--Dirac algebra of the
$\hat{a}^\dagger$, $\hat{a}$, $\hat{1}$ operators; the second on the angular
momentum algebra of the $\hat{L}_+$, $\hat{L}_-$ and $\hat{L}_0$ operators.
The successful results achieved in predicting the discrete spectra of both
systems demonstrate the reliability and accuracy of the theory.
|
We examine the dispersive properties of linear fast standing modes in
transversely nonuniform solar coronal slabs with finite gas pressure, or,
equivalently, finite plasma beta. We derive a generic dispersion relation
governing fast waves in coronal slabs for which the continuous transverse
distributions of the physical parameters comprise a uniform core, a uniform
external medium, and a transition layer (TL) in between. The profiles in the TL
are allowed to be essentially arbitrary. Restricting ourselves to the first
several branches of fast modes, which are of most interest from the
observational standpoint, we find that a finite plasma beta plays an at most
marginal role in influencing the periods ($P$), damping times ($\tau$), and
critical longitudinal wavenumbers ($k_{\rm c}$), when both $P$ and $\tau$ are
measured in units of the transverse fast time. However, these parameters are in
general significantly affected by how the TL profiles are described. We
conclude that, for typical coronal structures, the dispersive properties of the
first several branches of fast standing modes can be evaluated with the much
simpler theory for cold slabs provided that the transverse profiles are
properly addressed and the transverse Alfv\'en time in cold MHD is replaced
with the transverse fast time.
|
Distance matrices are matrices whose elements are the relative distances
between points located on a certain manifold. In all cases considered here all
their eigenvalues except one are non-positive. When the points are uncorrelated
and randomly distributed we investigate the average density of their
eigenvalues and the structure of their eigenfunctions. The spectrum exhibits
delocalized and strongly localized states which possess different power-law
average behaviour. The exponents depend only on the dimensionality of the
manifold.
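The spectral claim is easy to check numerically for the Euclidean case (a
minimal illustration of our own, not the paper's analysis):

```python
import numpy as np

# For uncorrelated random points in R^3, the matrix of pairwise Euclidean
# distances has exactly one positive eigenvalue; the rest are non-positive.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
eig = np.linalg.eigvalsh(D)          # D is symmetric; eigenvalues ascending
print(int(np.sum(eig > 1e-9)))       # count of positive eigenvalues
```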
|
It is well-known that no local model - in theory - can simulate the outcome
statistics of a Bell-type experiment as long as the detection efficiency is
higher than a threshold value. For the Clauser-Horne-Shimony-Holt (CHSH) Bell
inequality this theoretical threshold value is $\eta_{\text{T}} = 2
(\sqrt{2}-1) \approx 0.8284$. On the other hand, Phys.\ Rev.\ Lett.\ 107,
170404 (2011) outlined an explicit practical model that can fake the CHSH
inequality for a detection efficiency of up to $0.5$. In this work, we close
this gap. More specifically, we propose a method to emulate a Bell inequality
at the threshold detection efficiency using existing optical detector control
techniques. For a Clauser-Horne-Shimony-Holt inequality, it emulates the CHSH
violation predicted by quantum mechanics up to $\eta_{\text{T}}$. For the
Garg-Mermin inequality - re-calibrated by incorporating non-detection events -
our method emulates its exact local bound at any efficiency above the
threshold. This confirms that attacks on secure quantum communication
protocols based on Bell violation are a real threat if the detection
efficiency loophole is not closed.
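The quoted threshold is elementary to reproduce (a one-line check of the
number in the abstract):

```python
import math

# Threshold detection efficiency below which a local model can in principle
# fake the CHSH statistics: eta_T = 2(sqrt(2) - 1), equivalently 2/(1 + sqrt 2).
eta_T = 2 * (math.sqrt(2) - 1)
print(round(eta_T, 4))   # -> 0.8284
```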
|
Entanglement, a phenomenon that has puzzled scientists since its discovery,
has been extensively studied by many researchers through both theoretical and
experimental aspects of quantum information processing (QIP) and quantum
mechanics (QM). But how can entanglement be most effectively taught to
computer science students compared to applied physics students? In this
educational pursuit, we propose using Yao.jl, a quantum computing framework
written in Julia, for teaching entanglement to graduate computer science
students attending a quantum computing class at Johns Hopkins University.
David Mermin's idea of teaching just enough QM for students to understand and
develop algorithms in quantum computation [Mer98, Mer03] aligns with the
purpose of this work. Additionally, the authors of the study Improving
students' understanding of QM via the Stern-Gerlach experiment (SGE) argue
that this experiment should be a key part of any QM education. Here, we
explore the concept of entanglement and its quantification in various quantum
information processing experiments, including one inequality-free form of
Bell's theorem: (1) superposition via the Hadamard gate, (2) Bell-state
generation, and (3) GHZ state generation. The use of circuit diagrams and code
fragments is a central theme in this work's philosophy.
|
We show that with every separable classical Stackel system of Benenti type on
a Riemannian space one can associate, by a proper deformation of the metric
tensor, a multi-parameter family of non-Hamiltonian systems on the same space,
sharing the same trajectories and related to the seed system by appropriate
reciprocal transformations. These systems are known as bi-cofactor systems and
are integrable by quadratures, as the seed Hamiltonian system is. We show that
with each class of bi-cofactor systems a pair of separation curves can be
related. We also investigate conditions under which a given flat bi-cofactor
system can be deformed to a family of geodesically equivalent flat bi-cofactor
systems.
|
The Oeljeklaus-Toma (OT-) manifolds are compact, complex, non-Kahler
manifolds constructed by Oeljeklaus and Toma that generalize the Inoue
surfaces. Their construction uses number-theoretic data: a number field $K$
and a torsion-free subgroup $U$ in the group of units of the ring of integers
of $K$, with rank of $U$ equal to the number of real embeddings of $K$. We
prove that any complex subvariety of smallest possible positive dimension in an
OT-manifold is also flat affine. This is used to show that if all non-trivial
elements in $U$ are primitive in $K$, then the OT-manifold $X$ contains no
proper complex subvarieties.
|
We present a compositional analysis of the 10 micron silicate spectra for
brown dwarf disks in the Taurus and Upper Scorpius (UppSco) star-forming
regions, using archival Spitzer/IRS observations. A variety in the silicate
features is observed, ranging from a narrow profile with a peak at 9.8 micron,
to nearly flat, low-contrast features. For most objects, we find nearly equal
fractions for the large-grain and crystalline mass fractions, indicating both
processes to be active in these disks. The median crystalline mass fraction for
the Taurus brown dwarfs is found to be 20%, a factor of ~2 higher than the
median reported for the higher mass stars in Taurus. The large-grain mass
fractions are found to increase with an increasing strength in the X-ray
emission, while the opposite trend is observed for the crystalline mass
fractions. A small 5% of the Taurus brown dwarfs are still found to be
dominated by pristine ISM-like dust, with an amorphous sub-micron grain mass
fraction of ~87%. For 15% of the objects, we find a negligible large-grain mass
fraction, but a >60% small amorphous silicate fraction. These may be the cases
where substantial grain growth and dust sedimentation has occurred in the
disks, resulting in a high fraction of amorphous sub-micron grains in the disk
surface. Among the UppSco brown dwarfs, only usd161939 has an S/N high enough
to properly model its silicate spectrum. We find a 74% small amorphous grain
and a ~26% crystalline mass fraction for this object.
|
Machine learning (ML) methods are becoming integral to scientific inquiry in
numerous disciplines, such as materials science. In this manuscript, we
demonstrate how ML can be used to predict several properties in solid-state
chemistry, in particular the heat of formation of a given complex
crystallographic phase (here the $\sigma$-phase, $tP30$, $D8_{b}$). Based on
an independent and unprecedentedly large first-principles dataset containing
about 10,000 $\sigma$-compounds with $n=14$ different elements, we used a
supervised
learning approach, to predict all the $\sim$500,000 possible configurations
within a mean absolute error of 23 meV/at ($\sim$2 kJ.mol$^{-1}$) on the heat
of formation and $\sim$0.06 Ang. on the tetragonal cell parameters. We showed
that neural network regression algorithms provide a significant improvement in
accuracy of the predicted output compared to traditional regression techniques.
Adding descriptors having physical nature (atomic radius, number of valence
electrons) improves the learning precision. Based on our analysis, the
training database, composed of binary compositions only, plays a major role in
predicting the higher-degree system configurations. Our result opens a broad
avenue to efficient high-throughput investigations, from combinatorial binary
calculations to multicomponent prediction of a complex phase.
|
Given positive numbers p_1 < p_2 < ... < p_n and a real number r, let L_r be
the n by n matrix with (i,j) entry equal to (p_i^r-p_j^r)/(p_i-p_j). A
well-known theorem of C. Loewner says that L_r is positive definite when 0 < r
< 1. In contrast, R. Bhatia and J. Holbrook, (Indiana Univ. Math. J, 49 (2000)
1153-1173) showed that when 1 < r < 2, the matrix L_r has only one positive
eigenvalue, and made a conjecture about the signatures of eigenvalues of L_r
for other r. That conjecture is proved in this paper.
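Both regimes are easy to verify numerically on example data of our own
choosing (the diagonal entries are taken as the limit r p_i^(r-1) of the
difference quotient):

```python
import numpy as np

def loewner(p, r):
    """Matrix with (i,j) entry (p_i^r - p_j^r)/(p_i - p_j), with the
    limiting value r * p_i^(r-1) on the diagonal."""
    n = len(p)
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            L[i, j] = (r * p[i] ** (r - 1) if i == j
                       else (p[i] ** r - p[j] ** r) / (p[i] - p[j]))
    return L

p = [1.0, 2.0, 3.5, 7.0, 11.0]
pos = {r: int(np.sum(np.linalg.eigvalsh(loewner(p, r)) > 0)) for r in (0.5, 1.5)}
# 0 < r < 1: all 5 eigenvalues positive (Loewner's theorem);
# 1 < r < 2: exactly one positive eigenvalue (Bhatia-Holbrook).
print(pos)
```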
|
In view of the huge success of convolutional neural networks (CNN) for image
classification and object recognition, there have been attempts to generalize
the method to general graph-structured data. One major direction is based on
spectral graph theory and graph signal processing. In this paper, we study the
problem from a completely different perspective, by introducing parallel flow
decomposition of graphs. The essential idea is to decompose a graph into
families of non-intersecting one dimensional (1D) paths, after which, we may
apply a 1D CNN along each family of paths. We demonstrate that our method,
which we call GraphFlow, is able to transfer CNN architectures to general
graphs. To show the effectiveness of our approach, we test our method on the
classical MNIST dataset, synthetic datasets on network information propagation
and a news article classification dataset.
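The decomposition idea admits a very small sketch (our illustration, not the
authors' code): a 4x4 grid graph decomposes into two families of
non-intersecting 1D paths, its rows and its columns, and a shared 1D filter is
convolved along each family before aggregation.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-size 1D convolution with zero padding."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(w)] @ w for i in range(len(x))])

signal = np.arange(16.0).reshape(4, 4)   # node features on a 4x4 grid graph
w = np.array([0.25, 0.5, 0.25])          # a shared 1D filter

row_out = np.stack([conv1d(row, w) for row in signal])        # row paths
col_out = np.stack([conv1d(col, w) for col in signal.T]).T    # column paths
out = row_out + col_out                  # aggregate the two path families
print(out.shape)                         # one output value per grid node
```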
|
We investigate a graphene quantum pump, adiabatically driven by two thin
potential barriers vibrating around their equilibrium positions. For the highly
doped leads, the pumped current per mode diverges at the Dirac point due to the
more efficient contribution of the evanescent modes in the pumping process. The
pumped current shows an oscillatory behavior with an increasing amplitude as a
function of the carrier concentration. This effect is in contrast to the
decreasing oscillatory behavior of a similar normal pump. The graphene pump
driven by two vibrating thin barriers operates more efficiently than the
graphene pump driven by two oscillating thin barriers.
|
In this paper, we present the case for a declarative foundation for
data-intensive machine learning systems. Instead of creating a new system for
each specific flavor of machine learning task, or hardcoding new optimizations,
we argue for the use of recursive queries to program a variety of machine
learning systems. By taking this approach, database query optimization
techniques can be utilized to identify effective execution plans, and the
resulting runtime plans can be executed on a single unified data-parallel query
processing engine. As a proof of concept, we consider two programming
models--Pregel and Iterative Map-Reduce-Update--from the machine learning
domain, and show how they can be captured in Datalog, tuned for a specific
task, and then compiled into an optimized physical plan. Experiments performed
on a large computing cluster with real data demonstrate that this declarative
approach can provide very good performance while offering both increased
generality and programming ease.
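The Iterative Map-Reduce-Update pattern mentioned above can be sketched in a
few lines (a plain-Python stand-in for the Datalog formulation, on synthetic
linear-regression data; names and data are ours, not the paper's):

```python
import numpy as np

# Map per-record gradients, reduce by summation, update the global model,
# and iterate to a fixpoint -- the recursive step the abstract captures.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])           # noiseless linear data
w = np.zeros(3)

for _ in range(200):
    grads = [(xi @ w - yi) * xi for xi, yi in zip(X, y)]   # map
    g = np.sum(grads, axis=0) / len(y)                     # reduce
    w = w - 0.1 * g                                        # update
print(np.round(w, 2))   # converges to the true weights [1, -2, 0.5]
```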
|
Current imitation learning techniques are too restrictive because they
require the agent and expert to share the same action space. However,
oftentimes agents that act differently from the expert can solve the task just
as well. For example, a person lifting a box can be imitated by a
ceiling-mounted robot or a desktop-based robotic arm. In both cases, the end
goal of
lifting the box is achieved, perhaps using different strategies. We denote this
setup as \textit{Inspiration Learning} - knowledge transfer between agents that
operate in different action spaces. Since state-action expert demonstrations
can no longer be used, Inspiration Learning requires novel methods to guide the
agent towards the end goal. In this work, we rely on ideas from Preference-based
Reinforcement Learning (PbRL) to design Advantage Actor-Critic algorithms
for solving inspiration learning tasks. Unlike classic actor-critic
architectures, the critic we use consists of two parts: a) a state-value
estimation, as in common actor-critic algorithms, and b) a single-step reward
function derived from an expert/agent classifier. We show that our method is
capable of extending the current imitation framework to new horizons. This
includes continuous-to-discrete action imitation, as well as primitive-to-macro
action imitation.
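The two-part critic described above can be sketched as follows. Names and the exact reward form are assumptions, not the paper's specification: a common choice for a reward derived from an expert/agent classifier D(s) is the GAIL-style r(s) = log D(s) - log(1 - D(s)), combined here with the usual state-value baseline.

```python
import math

def classifier_reward(p_expert, eps=1e-6):
    """Single-step reward from the classifier's probability that a state
    looks expert-generated (hypothetical GAIL-style form)."""
    p = min(max(p_expert, eps), 1.0 - eps)  # avoid log(0)
    return math.log(p) - math.log(1.0 - p)

def advantage(p_expert, value_s, value_next, gamma=0.99):
    """One-step advantage combining the classifier-derived reward (part b)
    with the learned state-value baseline (part a)."""
    return classifier_reward(p_expert) + gamma * value_next - value_s

# An undecided classifier (D = 0.5) contributes zero reward:
a = advantage(p_expert=0.5, value_s=1.0, value_next=1.0, gamma=1.0)
```

Because the reward depends only on states, not actions, nothing ties the agent to the expert's action space.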
|
An ecological flow network is a weighted directed graph in which nodes are
species, edges are "who eats whom" relationships and weights are rates of
energy or nutrients transfer between species. Allometric scaling is a
ubiquitous feature for flow systems like river basins, vascular networks and
food webs. Using the "ecological network analysis" method, we can reveal the hidden
allometry directly on the original flow networks without cutting edges. On the
other hand, the dissipation law, which is another significant scaling relationship
between the energy dissipation (respiration) and the throughflow of any species,
is also discovered in the collected flow networks. Interestingly, the exponents
of allometric law ($\eta$) and the dissipation law ($\gamma$) have a strong
connection for both empirical and simulated flow networks. The dissipation law
exponent $\gamma$, rather than the topology of the network, is the most important
ingredient for the allometric exponent $\eta$. By reinterpreting $\eta$ as the
inequality of species' impacts (direct and indirect influences) on the whole
network along all energy-flow pathways, rather than the energy transportation
efficiency, we found that as $\gamma$ increases, the relative energy loss of
large nodes (with high throughflow) increases, $\eta$ decreases, and the
inequality of the whole flow network as well as the relative importance of
large species decreases. Therefore, flow structure and thermodynamic constraint
are connected.
|
We study the effect of disorder on the anomalous Hall effect (AHE) in
two-dimensional ferromagnets. The topological nature of AHE leads to the
integer quantum Hall effect emerging from a metal, i.e., the quantization of
$\sigma_{xy}$ induced by localization, except for the few extended states
carrying a Chern number. An extensive numerical study of a model reveals that
Pruisken's two-parameter scaling theory holds even when the system has no gap
with the overlapping multibands and without the uniform magnetic field.
Therefore the condition for the quantized AHE is given only by the Hall
conductivity $\sigma_{xy}$ without the quantum correction, i.e., $|\sigma_{xy}|
> e^2/(2h)$.
|
Advances in machine learning methods provide tools that have broad
applicability in scientific research. These techniques are being applied across
the diversity of nuclear physics research topics, leading to advances that will
facilitate scientific discoveries and societal applications. This Review gives
a snapshot of nuclear physics research that has been transformed by machine
learning techniques.
|
The effect of shear on the growth of large scale magnetic fields in helical
turbulence is investigated. The resulting large-scale magnetic field is also
helical and continues to evolve, after saturation of the small scale field, on
a slow resistive time scale. This is a consequence of magnetic helicity
conservation. Because of shear, the time scale needed to reach an
equipartition-strength large scale field is shortened proportionally to the
ratio of the resulting toroidal to poloidal large scale fields.
|
The functional determinant of a special second order quantum-mechanical
operator is calculated with its zero mode gauged out by the method of the
Faddeev-Popov gauge fixing procedure. This operator subject to periodic
boundary conditions arises in applications of the early Universe theory and, in
particular, determines the one-loop statistical sum in quantum cosmology
generated by a conformal field theory (CFT). The calculation is done for a
special case of a periodic zero mode of this operator having two roots (nodes)
within the period range, which corresponds to the class of cosmological
instantons in the CFT driven cosmology with one oscillation of the cosmological
scale factor of its Euclidean Friedmann-Robertson-Walker metric.
|