Post-market medical device surveillance is a challenge facing manufacturers,
regulatory agencies, and health care providers. Electronic health records are
valuable sources of real world evidence to assess device safety and track
device-related patient outcomes over time. However, distilling this evidence
remains challenging, as information is fractured across clinical notes and
structured records. Modern machine learning methods for machine reading promise
to unlock increasingly complex information from text, but face barriers due to
their reliance on large and expensive hand-labeled training sets. To address
these challenges, we developed and validated state-of-the-art deep learning
methods that identify patient outcomes from clinical notes without requiring
hand-labeled training data. Using hip replacements as a test case, our methods
accurately extracted implant details and reports of complications and pain from
electronic health records with up to 96.3% precision, 98.5% recall, and 97.4%
F1, improved classification performance by 12.7-53.0% over rule-based methods,
and detected over 6 times as many complication events compared to using
structured data alone. Using these events to assess complication-free
survivorship of different implant systems, we found significant variation
between implants, including for risk of revision surgery, which could not be
detected using coded data alone. Patients with revision surgeries had more hip
pain mentions in the post-hip replacement, pre-revision period compared to
patients with no evidence of revision surgery (mean hip pain mentions 4.97 vs.
3.23; t = 5.14; p < 0.001). Some implant models were associated with higher or
lower rates of hip pain mentions. Our methods complement existing surveillance
mechanisms by requiring orders of magnitude less hand-labeled training data,
offering a scalable solution for national medical device surveillance.
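The label-free training described above lends itself to a sketch of weak supervision. Below is a minimal Python sketch, assuming a Snorkel-style pipeline in which heuristic labeling functions vote on unlabeled notes and the votes are aggregated into training labels; every rule, pattern, and function name here is a hypothetical illustration, not the authors' actual rules.

```python
import re
from collections import Counter

POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_mentions_revision(note: str) -> int:
    # Hypothetical rule: vote "complication" when revision surgery is mentioned.
    return POSITIVE if re.search(r"\brevision\b", note, re.I) else ABSTAIN

def lf_denies_pain(note: str) -> int:
    # Hypothetical rule: vote "no complication" on an explicit denial of pain.
    return NEGATIVE if re.search(r"denies .*pain", note, re.I) else ABSTAIN

LABELING_FUNCTIONS = [lf_mentions_revision, lf_denies_pain]

def weak_label(note: str) -> int:
    """Aggregate labeling-function votes by majority; abstain if none fire."""
    votes = [v for lf in LABELING_FUNCTIONS if (v := lf(note)) != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("Patient scheduled for revision arthroplasty."))  # 1
```

The weak labels produced this way would then train the deep model in place of hand annotations.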
|
It is well-known that the space of derivations of $n$-dimensional evolution
algebras with non-singular matrices is zero. On the other hand, the space of
derivations of evolution algebras with matrices of rank $n-1$ has also been
completely described in the literature. In this work we provide a complete
description of the space of derivations of evolution algebras associated to
graphs, depending on the twin partition of the graph. For graphs without twin
classes with at least three elements we prove that the space of derivations of
the associated evolution algebra is zero. Moreover, we describe the spaces of
derivations for evolution algebras associated to the remaining families of
finite graphs. It is worth pointing out that our analysis includes examples of
finite dimensional evolution algebras with matrices of any rank.
|
Flocking behavior of multiple agents can be widely observed in nature such as
schooling fish and flocking birds. Recent literature has proposed that
flocking is possible even if only a small fraction of agents are informed of
the desired position and velocity. However, it is still a
challenging problem to determine which agents should be informed or have the
ability to detect the desired information. This paper aims to address this
problem. By combining the ideas of virtual force and pseudo-leader mechanism,
where a pseudo-leader represents an agent who can detect the desired
information, we propose a scheme for choosing pseudo-leaders in a multi-agent
group. The presented scheme can be applied to a multi-agent group even with an
unconnected or switching neighbor graph. Experiments show that the methods
presented in this paper are accurate and perform well.
|
We introduce a sub-Riemannian analogue of the Bence-Merriman-Osher diffusion
driven algorithm and show that it leads to weak solutions of the horizontal
mean curvature flow of graphs over sub-Riemannian Carnot groups. The proof
follows the nonlinear semi-group theory approach originally introduced by L. C.
Evans in the Euclidean setting and is based on new results on the relation
between sub-Riemannian heat flows of characteristic functions of subgraphs and
the horizontal mean curvature of the corresponding graphs.
|
An ellipsoid is the image of a ball under an affine transformation. If this
affine transformation is over the complex numbers, we refer to it as a complex
ellipsoid. Characterizations of real ellipsoids have received much attention
over the years; however, characterizations of complex ellipsoids have been
studied very little. This paper is a review of what is known about complex
ellipsoids from the point of view of convex geometry. In particular, we review
the proof of the Complex Banach Conjecture.
|
We explicitly construct the rank one primitive Stark (equivalently,
Kolyvagin) system extending a constant multiple of Flach's zeta elements for
semi-stable elliptic curves. As its arithmetic applications, we obtain the
equivalence between a specific behavior of the Stark system and the minimal
modularity lifting theorem, and we also discuss the cyclicity of the adjoint
Selmer groups. This Stark system construction yields a more refined
interpretation of the collection of Flach's zeta elements than the "geometric
Euler system" approach due to Flach, Wiles, Mazur, and Weston.
|
For the problem of solving the Reynolds equation under natural boundary
conditions, a candidate solution can be obtained by assuming a free boundary.
If this solution satisfies the natural boundary conditions, then the assumed
boundary is the boundary we are looking for. Let S be the set of all assumed
boundaries for which the corresponding solution is positive. We prove the
equivalence between the maximal element of S and the natural boundary
condition, and we prove that S is closed under addition. Therefore, S must
have a unique greatest element, and we thus obtain the existence and
uniqueness of solutions of the Reynolds equation under natural boundary
conditions.
In engineering, the zero setting method is often used to find the boundary of
the Reynolds equation, and we also prove that the zero setting method has
only one iterative error solution. We give an algorithm for solving the
Reynolds equation in the one-dimensional case, which attains the theoretical
upper bound.
We discuss the physical meaning of this method at the end.
|
The end compactification |\Gamma| of the locally finite graph \Gamma is the
union of the graph and its ends, endowed with a suitable topology. We show that
\pi_1(|\Gamma|) embeds into a nonstandard free group with hyperfinitely many
generators, i.e. an ultraproduct of finitely generated free groups, and that
the embedding we construct factors through an embedding into an inverse limit
of free groups, recovering a result of Diestel and Spr\"ussel.
|
Within the self-energy embedding theory (SEET) framework, we study the coupled
cluster Green's function (GFCC) method in two different contexts: as a method
to treat either the system or the environment present in the embedding
construction. Our study reveals that when GFCC is used to treat the
environment, we do not see an improvement in total energies in comparison to
the coupled cluster method itself. To rationalize this puzzling result, we
analyze the performance of GFCC as an impurity solver on a series of
transition metal oxides. These studies shed light on the strengths and
weaknesses of such a solver and demonstrate that it gives very accurate
results when the size of the impurity is small. We investigate whether it is
possible to achieve a systematic accuracy of the embedding solution when we
increase the size of the impurity problem. We find that in such a case the
performance of the solver worsens, both in terms of finding the ground state
solution of the impurity problem and in the self-energies produced. We
conclude that increasing the rank of the GFCC solver is necessary to enlarge
impurity problems and achieve a reliable accuracy. We also show that natural
orbitals from weakly correlated perturbative methods are better suited than
symmetrized atomic orbitals (SAO) when the total energy of the system is the
target quantity.
|
We characterize the water repartition within the partially saturated
(two-phase) zone (PSZ) during evaporation out of mixed-wettability porous
media by controlling the wettability of glass beads, their sizes, as well as
the surrounding relative humidity. Here, capillary numbers are low, and under
these conditions the percolating front is stabilized by gravity. Using
experimental and numerical analyses, we find that the PSZ saturation
decreases with the Bond number, where packings of smaller particles have
higher saturation values than packings of larger particles. Results also
reveal that the extent (height) of the PSZ, as well as the water saturation
in the PSZ, both increase with wettability. We also numerically calculate the
saturation exclusively contained in connected liquid films, and the results
show that the values are less than the expected PSZ saturation. These results
strongly suggest that the two-phase zone is not solely made up of connected
capillary networks, but also of disconnected water clusters or pockets.
Moreover, we also find that the global saturation (PSZ + fully wet zone)
decreases with wettability, confirming that a greater quantity of water is
lost via evaporation with increasing hydrophilicity. These results show that
connected liquid films are favored in more hydrophilic systems while
disconnected water pockets are favored in less hydrophilic systems.
|
We develop the celebrated semigroup approach \`a la Bakry et al on Finsler
manifolds, where the natural Laplacian and heat semigroup are nonlinear, based
on the Bochner-Weitzenb\"ock formula established by Sturm and the author. We show
the $L^1$-gradient estimate on Finsler manifolds (under some additional
assumptions in the noncompact case), which is equivalent to a lower weighted
Ricci curvature bound and the improved Bochner inequality. As a geometric
application, we prove Bakry-Ledoux's Gaussian isoperimetric inequality, again
under some additional assumptions in the noncompact case. This extends
Cavalletti-Mondino's inequality on reversible Finsler manifolds to
non-reversible metrics, and improves the author's previous estimate, both based
on the localization (also called needle decomposition) method.
|
This paper joins a series compiling consistent emission line measurements of
large AGN spectral databases, useful for reliable statistical studies of
emission line properties. It is preceded by emission line measurements of 993
spectra from the Large Bright Quasar Survey (Forster et al. 2001) and 174
spectra of AGN obtained from the Faint Object Spectrograph (FOS) on HST prior
to the installation of COSTAR (Kuraszkiewicz et al. 2002). This time we
concentrate on 220 spectra obtained with the FOS after the installation of
COSTAR, completing the emission line analysis of all FOS archival spectra. We
use the same automated technique as in previous papers, which accounts for
Galactic extinction, models blended optical and UV iron emission, includes
Galactic and intrinsic absorption lines and models emission lines using
multiple Gaussians. We present UV and optical emission line parameters
(equivalent widths, fluxes, FWHM, line positions) for a large number (28) of
emission lines including upper limits for undetected lines. Further scientific
analyses will be presented in subsequent papers.
|
We investigate the asymptotic symmetry group of a SU(2)-Yang-Mills theory
coupled to a Higgs field in the Hamiltonian formulation. This extends previous
work on the asymptotic structure of pure electromagnetism by Henneaux and
Troessaert, and on electromagnetism coupled to scalar fields and pure
Yang-Mills fields by Tanzi and Giulini. We find that there are no obstructions
to global electric and magnetic charges, though that is rather subtle in the
magnetic case. Again it is the Hamiltonian implementation of boost symmetries
that needs a careful and technically subtle discussion of the fall-off and
parity conditions of all fields involved.
|
At BME (Budapest University of Technology and Economics) NTI (Institute of
Nuclear Technics), a 7 pin rod bundle test section has been built in order to
investigate the hydraulic behavior of the coolant in such design and to develop
CFD models that could properly simulate the flow conditions in the ALLEGRO
core. PIROUETTE (PIv ROd bUndlE Test faciliTy at bmE) is a test facility, which
was designed to investigate the emerging flow conditions in various nuclear
fuel assembly rod bundles. The measurement method is based on Particle Image
Velocimetry (PIV) combined with the Matched Index of Refraction (MIR) method.
test loop, it was necessary to install a flow straightener that was able to
condition the velocity field before the rod bundle. The results of CFD
simulations could be used to improve the understanding of the inlet conditions
in the rod bundle test section. The second part of the benchmark deals with the
3D CFD modeling of the velocity field within the 7 pin rod bundle placed in the
test section. The geometry of the test section will be given to the
participants in an easy-to-use 3D format (.obj, .stp or .stl).
|
We show, for the first time, that ${\rm H_2}$ formation on dust grains can be
enhanced in disk galaxies under strong ram-pressure (RP). We numerically
investigate how the time evolution of the ${\rm H}$ {\sc i} and ${\rm H_2}$
components in disk galaxies orbiting a group/cluster of galaxies can be
influenced by hydrodynamical interaction between the gaseous components of the
galaxies and the hot intra-cluster medium (ICM). We find that compression of
${\rm H}$ {\sc i} caused by RP increases ${\rm H_2}$ formation in disk
galaxies, before RP rapidly strips ${\rm H}$ {\sc i}, cutting off the fuel
supply and causing a drop in ${\rm H_2}$ density. We also find that the level
of this ${\rm H_2}$ formation enhancement in a disk galaxy under RP depends on
the mass of its host cluster dark matter (DM) halo, initial positions and
velocities of the disk galaxy, and disk inclination angle with respect to the
orbital plane. We demonstrate that dust growth is a key factor in the evolution
of the ${\rm H}$ {\sc i} and ${\rm H_2}$ mass in disk galaxies under strong RP.
We discuss how the correlation between ${\rm H_2}$ fractions and surface gas
densities of disk galaxies evolves with time in the galaxies under RP. We also
discuss whether or not galaxy-wide star formation rates (SFRs) in cluster disk
galaxies can be enhanced by RP if the SFRs depend on ${\rm H_2}$ densities.
|
Using recent measurements of the spectrum and chemical composition of the
highest energy cosmic rays, we consider the sources of these particles. We find
that the data strongly prefers models in which the sources of the ultra-high
energy cosmic rays inject predominantly intermediate mass nuclei, with
comparatively few protons or heavy nuclei, such as iron or silicon. If the
number density of sources per comoving volume does not evolve with redshift,
the injected spectrum must be very hard ($\alpha\simeq 1$) in order to fit the
spectrum observed at Earth. Such a hard spectral index would be surprising and
difficult to accommodate theoretically. In contrast, much softer spectral
indices, consistent with the predictions of Fermi acceleration ($\alpha\simeq
2$), are favored in models with negative source evolution. With this
theoretical bias, these observations thus favor models in which the sources of
the highest energy cosmic rays are preferentially located within the
low-redshift universe.
|
Grid computing is a computation methodology that uses a group of clusters
connected over high-speed networks and involves coordinating and sharing
computational power, data storage, and network resources. Integrating a set of
clusters of workstations into one large computing environment can improve the
availability of computing power. The goal of scheduling is to achieve the
highest possible system throughput and to match the application need with the
available computing resources. A secure scheduling model is presented that
performs job grouping activity at runtime. In a grid environment, security is
necessary because the grid is a dynamic environment and the participants are
independent bodies with different policies, objectives, and requirements.
Authentication should be verified for grid resource owners as well as resource
requesters before they are allowed to join scheduling activities. In order to
achieve secure resource and job scheduling with minimum processing time and
maximum resource utilization, a secure resource and job scheduling model on
networking, using the RSA algorithm, with a job grouping strategy (JGS) in
grid computing has been proposed. The results show a significant improvement
in the processing time of jobs and in resource utilization as compared to
dynamic job grouping (DJG) based scheduling on smart grids (SG).
|
In a recent paper we conjectured that for ferromagnetic Heisenberg models the
smallest eigenvalues in the invariant subspaces of fixed total spin are
monotone decreasing as a function of the total spin and called this property
ferromagnetic ordering of energy levels (FOEL). We have proved this conjecture
for the Heisenberg model with arbitrary spins and coupling constants on a
chain. In this paper we give a pedagogical introduction to this result and also
discuss some extensions and implications. The latter include the property that
the relaxation time of symmetric simple exclusion processes on a graph for
which FOEL can be proved, equals the relaxation time of a random walk on the
same graph. This equality of relaxation times is known as Aldous' Conjecture.
|
We revisit the simplest model of Higgs portal fermionic dark matter. The dark
matter in this scenario is thermally produced in the early universe due to the
interactions with the Higgs boson, which are described by a non-renormalisable
dimension-5 operator. The dark matter-Higgs scattering amplitude grows as
$\propto \sqrt{s}$, signalling a breakdown of the effective description of the
Higgs-dark matter interactions at large enough (compared to the mass scale
$\Lambda$ of the dimension-5 operator) energies. Therefore, in order to
reliably compute Higgs-dark matter scattering cross sections, we employ the
K-matrix unitarisation procedure. To account for the desired dark matter
abundance, the unitarised theory requires an appreciably smaller $\Lambda$ than
the non-unitarised version, especially for dark matter masses around and below
the Higgs resonance, $m_{\chi}\lesssim 65$ GeV, and $m_{\chi}\gtrsim $ few TeV.
Consequently, we find that the pure scalar CP-conserving model is fully
excluded by current direct dark matter detection experiments.
|
High entry speed (> 25 km/s) and low density (< 2500 kg/m3) are two factors
that lower the chance of a meteoroid to drop meteorites. The 26 g carbonaceous
(CM2) meteorite Maribo recovered in Denmark in 2009 was delivered by a
superbolide observed by several instruments across northern and central Europe.
By reanalyzing the available data, we confirmed the previously reported high
entry speed of (28.3 +/- 0.3) km/s and trajectory with slope of 31 degrees to
horizontal. In order to understand how such a fragile material survived, we
applied three different models of meteoroid atmospheric fragmentation to the
detailed bolide light curve obtained by radiometers located in the Czech Republic.
The Maribo meteoroid was found to be quite inhomogeneous with different parts
fragmenting at different dynamic pressures. While 30 - 40% of the (2000 +/-
1000) kg entry mass was destroyed already at 0.02 MPa, another 25 - 40%,
according to different models, survived without fragmentation until relatively
large dynamic pressures of 3 - 5 MPa. These pressures are only slightly lower
than the measured tensile strengths of hydrated carbonaceous chondrite (CC)
meteorites and are comparable with usual atmospheric fragmentation pressures of
ordinary chondritic (OC) meteoroids. While internal cracks weaken OC meteoroids
in comparison with meteorites, this effect seems to be absent in CC, enabling
meteorite delivery even at high speeds, though in the form of only small
fragments.
|
Single lateral InGaAs quantum dot molecules have been embedded in a planar
micro-cavity in order to increase the luminescence extraction efficiency. Using
a combination of metal-organic vapor phase epitaxy and molecular beam epitaxy,
samples could be produced that exhibit a 30-fold enhanced single-photon
emission rate.
We also show that the single-photon emission is fully switchable between two
different molecular excitonic recombination energies by applying a lateral
electric field. Furthermore, the presence of a polarization fine-structure
splitting of the molecular neutral excitonic states is reported which leads to
two polarization-split classically correlated biexciton exciton cascades. The
fine-structure splitting is found to be on the order of 10 micro-eV.
|
Effective caching is crucial for the performance of modern-day computing
systems. A key optimization problem arising in caching -- which item to evict
to make room for a new item -- cannot be optimally solved without knowing the
future. There are many classical approximation algorithms for this problem, but
more recently researchers started to successfully apply machine learning to
decide what to evict by discovering implicit input patterns and predicting the
future. While machine learning typically does not provide any worst-case
guarantees, the new field of learning-augmented algorithms proposes solutions
that leverage classical online caching algorithms to make the machine-learned
predictors robust. We are the first to comprehensively evaluate these
learning-augmented algorithms on real-world caching datasets and
state-of-the-art machine-learned predictors. We show that a straightforward
method -- blindly following either a predictor or a classical robust algorithm,
and switching whenever one becomes worse than the other -- has only a low
overhead over a well-performing predictor, while competing with classical
methods when the coupled predictor fails, thus providing a cheap worst-case
insurance.
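A minimal sketch of the switching method described above, assuming Python; LRU stands in for the classical robust algorithm and FIFO is a placeholder for the machine-learned predictor's policy, both simulated in parallel on the request stream. The sketch tracks only which policy would currently be followed; migrating the physical cache contents on a switch is where the real algorithm pays its bounded overhead.

```python
from collections import OrderedDict, deque

class LRU:
    """Classical robust policy: evict the least recently used item."""
    def __init__(self, capacity):
        self.capacity, self.cache, self.misses = capacity, OrderedDict(), 0
    def access(self, item):
        if item in self.cache:
            self.cache.move_to_end(item)
            return
        self.misses += 1
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)
        self.cache[item] = True

class FIFO:
    """Placeholder for the ML predictor's eviction policy."""
    def __init__(self, capacity):
        self.capacity, self.cache, self.misses = capacity, deque(), 0
    def access(self, item):
        if item in self.cache:
            return
        self.misses += 1
        if len(self.cache) >= self.capacity:
            self.cache.popleft()
        self.cache.append(item)

def switching(requests, robust, predictor):
    """Follow whichever simulated policy has fewer misses so far."""
    followed = predictor
    for item in requests:
        robust.access(item)          # both policies see every request
        predictor.access(item)
        # Switch allegiance when the followed policy falls behind.
        if followed.misses > min(robust.misses, predictor.misses):
            followed = robust if robust.misses <= predictor.misses else predictor
    return followed

switching([1, 2, 3, 1, 2, 4, 1, 5, 2], LRU(2), FIFO(2))
```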
|
We study the Rendezvous problem for 2 autonomous mobile robots in
asynchronous settings with persistent memory called light. It is well known
that Rendezvous is impossible in a basic model when robots have no lights, even
if the system is semi-synchronous. On the other hand, Rendezvous is possible if
robots have lights of various types with a constant number of colors. If robots
can observe not only their own lights but also other robots' lights, their
lights are called full-light. If robots can only observe the state of other
robots' lights, the lights are called external-light.
In this paper, we focus on robots with external-lights in asynchronous
settings and a particular class of algorithms (called L-algorithms), where an
L-algorithm computes a destination based only on the current colors of
observable lights. When considering L-algorithms, Rendezvous can be solved by
robots with full-lights and 3 colors in general asynchronous settings (called
ASYNC) and the number of colors is optimal under these assumptions. In
contrast, there exist no L-algorithms in ASYNC with external-lights regardless
of the number of colors. In this paper, we consider a fairly large subclass of
ASYNC in which Rendezvous can be solved by L-algorithms using external-lights
with a finite number of colors, and we show that the algorithms are optimal in
the number of colors they use.
|
In the present paper we investigate the conservative conditions in Quadratic
Gravity. It is shown explicitly that the Bianchi identities lead to the
conservative condition of the left-hand-side of the (gravitational) field
equation. Therefore, the total energy-momentum tensor is conserved in the
bulk (as in General Relativity). However, in Quadratic Gravity it is possible
to have singular hypersurfaces separating the bulk regions with different
behavior of the matter energy-momentum tensor or different vacua. These require
special consideration. We derive the conservative conditions on such singular
hypersurfaces and demonstrate the possibility of matter creation. In the
remaining part of the paper we consider some applications illustrating the
obtained results.
|
We compare nuclear and neutron matter predictions based on two different ab
initio approaches to nuclear forces and the nuclear many-body problem. The
first consists of a realistic meson-theoretic nucleon-nucleon potential
together with the relativistic counterpart of the Brueckner-Hartree-Fock theory
of nuclear matter. The second is based on chiral effective field theory, with
density-dependent interactions derived from leading order chiral three-nucleon
forces. We find the results to be very close and conclude that both approaches
contain important features governing the physics of nuclear and neutron matter.
|
The unmanned aerial vehicle (UAV)-based wireless mesh networks can
economically provide wireless services for the areas with disasters. However,
the capacity of air-to-air communications is limited due to the multi-hop
transmissions. In this paper, the spectrum sharing between UAV-based wireless
mesh networks and ground networks is studied to improve the capacity of the UAV
networks. Considering the distribution of UAVs as a three-dimensional (3D)
homogeneous Poisson point process (PPP) within a vertical range, stochastic
geometry is applied to analyze the impact of the height, transmit power, and
density of the UAVs, as well as the vertical range, on the coverage
probabilities of ground network users and UAV network users, respectively. The
optimal height of the UAVs that maximizes the capacity of the UAV networks,
subject to a constraint on the coverage probability of the ground network
users, is obtained numerically. This paper provides a basic guideline for the
deployment of UAV-based wireless mesh networks.
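A Monte Carlo sketch of this stochastic-geometry setup, assuming Python with NumPy: UAVs are drawn as a 3D homogeneous PPP within a vertical range, the ground user sits at the origin, and coverage is declared when the strongest UAV's signal-to-interference ratio exceeds a threshold. The density, path-loss exponent, and SIR threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_probability(density=1e-7, radius=2000.0, h_lo=50.0, h_hi=150.0,
                         alpha=3.0, sir_threshold=1.0, trials=2000):
    """Estimate the ground user's coverage probability under a 3D PPP of UAVs."""
    volume = np.pi * radius**2 * (h_hi - h_lo)
    covered = 0
    for _ in range(trials):
        n = rng.poisson(density * volume)    # number of UAVs in the region
        if n == 0:
            continue                         # no UAV at all: not covered
        r = radius * np.sqrt(rng.random(n))  # horizontal distance, uniform in disk
        z = rng.uniform(h_lo, h_hi, n)       # altitude within the vertical range
        power = np.sqrt(r**2 + z**2) ** (-alpha)  # path-loss-only received power
        signal = power.max()                 # served by the strongest UAV
        interference = power.sum() - signal
        if interference == 0 or signal / interference > sir_threshold:
            covered += 1
    return covered / trials

print(coverage_probability())
```

Sweeping `h_lo` and `h_hi` in such a simulation is one way to reproduce the kind of optimal-height trade-off the paper derives analytically.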
|
In this article, we study the geometry of plane curves obtained by three
sections and another section given as their sum on certain rational elliptic
surfaces. We make use of Mumford representations of semi-reduced divisors in
order to study the geometry of sections. As a result, we are able to give new
proofs for some classical results on singular plane quartics and their
bitangent lines.
|
In this study, we introduce JarviX, a sophisticated data analytics framework.
JarviX is designed to employ Large Language Models (LLMs) to provide automated
guidance and execute high-precision data analyses on tabular datasets.
This framework emphasizes the significance of varying column types,
capitalizing on state-of-the-art LLMs to generate concise data insight
summaries, propose relevant analysis inquiries, visualize data effectively, and
provide comprehensive explanations for results drawn from an extensive data
analysis pipeline. Moreover, JarviX incorporates an automated machine learning
(AutoML) pipeline for predictive modeling. This integration forms a
comprehensive and automated optimization cycle, which proves particularly
advantageous for optimizing machine configuration. The efficacy and
adaptability of JarviX are substantiated through a series of practical use case
studies.
|
We give an expression for the Garsia entropy of Bernoulli convolutions in
terms of products of matrices. This gives an explicit rate of convergence of
the Garsia entropy and shows that one can calculate the Hausdorff dimension of
the Bernoulli convolution $\nu_\beta$ to arbitrary given accuracy whenever
$\beta$ is algebraic. In particular, if the Garsia entropy $H(\beta)$ is not
equal to $\log(\beta)$ then we have a finite time algorithm to determine
whether or not $\mathrm{dim}_\mathrm{H} (\nu_\beta)=1$.
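For context, a hedged statement of the relation this computation rests on: for algebraic $\beta \in (1,2)$, Hochman's theorem expresses the Hausdorff dimension of the Bernoulli convolution through the Garsia entropy, so computing $H(\beta)$ to sufficient accuracy settles the dimension question whenever $H(\beta) \neq \log\beta$.

```latex
% Dimension of the Bernoulli convolution for algebraic \beta (Hochman):
\[
  \mathrm{dim}_\mathrm{H}(\nu_\beta)
    \;=\; \min\!\Bigl(1,\ \frac{H(\beta)}{\log\beta}\Bigr),
\]
% so a finite-accuracy computation of H(\beta) decides whether
% \mathrm{dim}_\mathrm{H}(\nu_\beta) = 1 as soon as H(\beta) \neq \log\beta.
```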
|
Light sterile neutrinos can be probed in a number of ways, including
electroweak decays, cosmology and neutrino oscillation experiments. At
long-baseline experiments, the neutral-current data is directly sensitive to
the presence of light sterile neutrinos: once the active neutrinos have
oscillated into a sterile state, a depletion in the neutral-current data sample
is expected since they do not interact with the $Z$ boson. This channel offers
a direct avenue to probe the mixing between a sterile neutrino and the tau
neutrino, which remains largely unconstrained by current data. In this work, we
study the potential of the DUNE experiment to constrain the mixing angle which
parametrizes this mixing, $\theta_{34}$, through the observation of
neutral-current events at the far detector. We find that DUNE will be able to
improve significantly over current constraints thanks to its large statistics
and excellent discrimination between neutral- and charged-current events.
|
In this paper, we propose a novel one-pass and tree-shaped tableau method for
Timed Propositional Temporal Logic and for a bounded variant of its extension
with past operators. Timed Propositional Temporal Logic (TPTL) is a real-time
temporal logic, with an EXPSPACE-complete satisfiability problem, which has
been successfully applied to the verification of real-time systems. In contrast
to LTL, adding past operators to TPTL makes the satisfiability problem for the
resulting logic (TPTL+P) non-elementary. In this paper, we devise a one-pass
and tree-shaped tableau for both TPTL and bounded TPTL+P (TPTLb+P), a syntactic
restriction introduced to encode timeline-based planning problems, which
recovers the EXPSPACE-complete complexity. The tableau systems for TPTL and
TPTLb+P are presented in a unified way, being very similar to each other,
providing a common skeleton that is then specialised to each logic. In doing
that, we characterise the semantics of TPTLb+P in terms of a purely syntactic
fragment of TPTL+P, giving a translation that embeds the former into the
latter. Soundness and completeness of the system are proved fully. In
particular, we give a greatly simplified model-theoretic completeness proof,
which sidesteps the complex combinatorial argument used by known proofs for the
one-pass and tree-shaped tableau systems for LTL and LTL+P.
|
New experimental measurements of charge state distributions produced by a
^20Ne^10+ beam at 15 MeV/u colliding on various thin solid targets are
presented. The use of the MAGNEX magnetic spectrometer enabled measurements of
the 8+ charge state down to fractions of a few 10^-5. The use of different
post-stripper foils located downstream of the main target is explored, showing
that low Z materials are particularly effective to shift the charge state
distributions towards fully stripped conditions. The dependence on the foil
thickness is also studied and discussed.
|
We study a continuum model of directed polymer in random environment. The law
of the polymer is defined as the Brownian motion conditioned to survive among
space-time Poissonian disasters. This model is well-studied in the positive
temperature regime. However, at zero temperature, even the existence of the
free energy had not been proved. In this article, we prove that the free energy
exists and is continuous at zero temperature.
|
Decoherence largely limits the physical realization of qubits and its
mitigation is critical to quantum science. Here, we construct a robust qubit
embedded in a decoherence-protected subspace, obtained by hybridizing an
applied microwave drive with the ground-state electron spin of a silicon
carbide divacancy defect. The qubit is protected from magnetic, electric, and
temperature fluctuations, which account for nearly all relevant decoherence
channels in the solid state. This culminates in an increase of the qubit's
inhomogeneous dephasing time by over four orders of magnitude (to > 22
milliseconds), while its Hahn-echo coherence time approaches 64 milliseconds.
Requiring few key platform-independent components, this result suggests that
substantial coherence improvements can be achieved in a wide selection of
quantum architectures.
|
This paper summarizes the challenges identified at the MAMI Management and
Measurement Summit (M3S) for network management with the increased deployment
of encrypted traffic based on a set of use cases and deployed techniques (for
network monitoring, performance enhancing proxies, firewalling as well as
network-supported DDoS protection and mitigation), and provides recommendations
for future use cases and the development of new protocols and mechanisms. In
summary, network architecture and protocol design efforts should 1) provide for
independent measurability when observations may be contested, 2) support
different security associations at different layers, and 3) replace transparent
middleboxes with middlebox transparency in order to increase visibility,
rebalance control and enable cooperation.
|
High to ultrahigh energy neutrino detectors can uniquely probe the properties
of dark matter $\chi$ by searching for the secondary products produced through
annihilation and/or decay processes. We evaluate the sensitivities to dark
matter thermally averaged annihilation cross section $\langle\sigma v\rangle$
and partial decay width into neutrinos $\Gamma_{\chi\rightarrow\nu\bar{\nu}}$
(in the mass scale $10^7 \leq m_\chi/{\rm GeV} \leq 10^{15}$) for next
generation observatories like POEMMA and GRAND. We show that in the range $
10^7 \leq m_\chi/{\rm GeV} \leq 10^{11}$, space-based Cherenkov detectors like
POEMMA have the advantage of full-sky coverage and rapid slewing, enabling an
optimized dark matter observation strategy focusing on the Galactic center. We
also show that ground-based radio detectors such as GRAND can achieve high
sensitivities and high duty cycles in radio quiet areas. We compare the
sensitivities of next generation neutrino experiments with existing constraints
from IceCube and updated 90\% C.L. upper limits on $\langle\sigma v\rangle$ and
$\Gamma_{\chi\rightarrow\nu\bar{\nu}}$ using results from the Pierre Auger
Collaboration and ANITA. We show that in the range $ 10^7 \leq m_\chi/{\rm GeV}
\leq 10^{11}$ POEMMA and GRAND10k will improve the neutrino sensitivity to
particle dark matter by factors of 2 to 10 over existing limits, whereas
GRAND200k will improve this sensitivity by two orders of magnitude. In the
range $10^{11} \leq m_\chi/{\rm GeV} \leq 10^{15}$, POEMMA's fluorescence
observation mode will achieve an unprecedented sensitivity to dark matter
properties. Finally, we highlight the importance of the uncertainties related
to the dark matter distribution in the Galactic halo, using the latest fit and
estimates of the Galactic parameters.
|
In this work, vertical tunnel field-effect transistors (v-TFETs) based on
vertically stacked heterojunctions from 2D transition metal dichalcogenide
(TMD) materials are studied by atomistic quantum transport simulations. The
switching mechanism of v-TFET is found to be different from previous
predictions. As a consequence of this switching mechanism, the extension
region, where the materials are not stacked, is found to be critical for
turning off the v-TFET. This extension region makes the scaling of v-TFETs
challenging. In addition, due to the presence of both positive and negative
charges inside the channel, v-TFETs also exhibit negative capacitance. As a
result, v-TFETs have good energy-delay products and are one of the promising
candidates for low power applications.
|
Common clustering algorithms require multiple scans of all the data to
achieve convergence, and this is prohibitive when large databases, with data
arriving in streams, must be processed. Some algorithms to extend the popular
K-means method to the analysis of streaming data have been present in the
literature since 1998 (Bradley et al. in Scaling clustering algorithms to large databases.
In: KDD. p. 9-15, 1998; O'Callaghan et al. in Streaming-data algorithms for
high-quality clustering. In: Proceedings of IEEE international conference on
data engineering. p. 685, 2001), based on the memorization and recursive update
of a small number of summary statistics, but they either don't take into
account the specific variability of the clusters, or assume that the random
vectors which are processed and grouped have uncorrelated components.
Unfortunately this is not the case in many practical situations. We here
propose a new algorithm to process data streams, with data having correlated
components and coming from clusters with different covariance matrices. Such
covariance matrices are estimated via an optimal double shrinkage method, which
provides positive definite estimates even in presence of a few data points, or
of data having components with small variance. This is needed to invert the
matrices and compute the Mahalanobis distances that we use for the data
assignment to the clusters. We also estimate the total number of clusters from
the data.
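A minimal sketch of the assignment step described above, assuming Python with NumPy; the simple shrinkage toward a scaled identity below is a stand-in for the paper's optimal double-shrinkage estimator, chosen only to keep the covariance invertible with few data points.

```python
import numpy as np

def shrink_covariance(sample_cov, lam=0.1):
    """Shrink toward a scaled identity so the estimate stays positive definite."""
    p = sample_cov.shape[0]
    target = (np.trace(sample_cov) / p) * np.eye(p)
    return (1 - lam) * sample_cov + lam * target

def assign(point, means, covs):
    """Index of the cluster minimizing the Mahalanobis distance to `point`."""
    best, best_d = -1, np.inf
    for k, (mu, cov) in enumerate(zip(means, covs)):
        diff = point - mu
        d = diff @ np.linalg.solve(shrink_covariance(cov), diff)
        if d < best_d:
            best, best_d = k, d
    return best

means = [np.zeros(2), np.array([5.0, 5.0])]
covs = [np.eye(2), np.diag([2.0, 0.5])]
print(assign(np.array([4.0, 4.5]), means, covs))  # -> 1
```

In a streaming setting the per-cluster means and covariances would be updated recursively from summary statistics rather than recomputed from stored data.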
|
The second largest HII region in the Large Magellanic Cloud, N11B, has been
surveyed in the near IR. We present JHKs images of the N11B nebula. These
images are combined with CO(1-0) emission line data and with archival NTT and
HST/WFPC2 optical images to address the star formation activity of the
region. IR photometry of all the IR sources detected is given. We confirm that
a second generation of stars is currently forming in the N11B region. Our IR
images show the presence of several bright IR sources which appear located
towards the molecular cloud as seen from the CO emission in the area. Several
of these sources show IR colours with YSO characteristics and they are prime
candidates to be intermediate-mass Herbig Ae/Be stars. For the first time an
extragalactic methanol maser is directly associated with IR sources embedded
in a molecular core. Two IR sources are found within 2" (0.5 pc) of the
reported methanol maser position. Additionally, we present the association of
the N11A compact HII region with the molecular gas, where we find that the
young massive O stars have eroded a cavity in the parental molecular cloud,
typical of a champagne flow. The N11 region turns out to be a very good
laboratory for studying the interaction of winds, UV radiation and molecular
gas. Several photodissociation regions are found.
|
The physics of atomic quantum gases is currently taking advantage of a
powerful tool, the possibility to fully adjust the interaction strength between
atoms using a magnetically controlled Feshbach resonance. For fermions with two
internal states, formally two opposite spin states, this allows to prepare long
lived strongly interacting three-dimensional gases and to study the BEC-BCS
crossover. Of particular interest along the BEC-BCS crossover is the so-called
unitary gas, where the atomic interaction potential between the opposite spin
states has virtually an infinite scattering length and a zero range. This
unitary gas is the main subject of the present chapter: It has fascinating
symmetry properties, from a simple scaling invariance, to a more subtle
dynamical symmetry in an isotropic harmonic trap, which is linked to a
separability of the N-body problem in hyperspherical coordinates. Other
analytical results, valid over the whole BEC-BCS crossover, are presented,
establishing a connection between three recently measured quantities, the tail
of the momentum distribution, the short range part of the pair distribution
function and the mean number of closed channel molecules.
|
We present a deep machine learning (ML)-based technique for accurately
determining $\sigma_8$ and $\Omega_m$ from mock 3D galaxy surveys. The mock
surveys are built from the AbacusCosmos suite of $N$-body simulations, which
comprises 40 cosmological volume simulations spanning a range of cosmological
models, and we account for uncertainties in galaxy formation scenarios through
the use of generalized halo occupation distributions (HODs). We explore a trio
of ML models: a 3D convolutional neural network (CNN), a power-spectrum-based
fully connected network, and a hybrid approach that merges the two to combine
physically motivated summary statistics with flexible CNNs. We describe best
practices for training a deep model on a suite of matched-phase simulations and
we test our model on a completely independent sample that uses previously
unseen initial conditions, cosmological parameters, and HOD parameters. Despite
the fact that the mock observations are quite small
($\sim0.07h^{-3}\,\mathrm{Gpc}^3$) and the training data span a large parameter
space (6 cosmological and 6 HOD parameters), the CNN and hybrid CNN can
constrain $\sigma_8$ and $\Omega_m$ to $\sim3\%$ and $\sim4\%$, respectively.
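A minimal sketch of the hybrid architecture described above, assuming PyTorch: a small 3D CNN over the galaxy density field whose features are concatenated with a binned power-spectrum summary before regressing $(\sigma_8, \Omega_m)$. Layer widths, pooling choices, and the number of power-spectrum bins are illustrative assumptions, not the trained model.

```python
import torch
import torch.nn as nn

class HybridRegressor(nn.Module):
    """3D CNN features + power-spectrum bins -> (sigma_8, Omega_m)."""
    def __init__(self, n_pk_bins=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (batch, 16)
        )
        self.head = nn.Sequential(
            nn.Linear(16 + n_pk_bins, 64), nn.ReLU(),
            nn.Linear(64, 2),                        # (sigma_8, Omega_m)
        )

    def forward(self, field, pk):
        # field: (batch, 1, N, N, N) density grid; pk: (batch, n_pk_bins)
        return self.head(torch.cat([self.cnn(field), pk], dim=1))

model = HybridRegressor()
print(model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 32)).shape)  # (2, 2)
```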
|
Due to a resource-constrained environment, network compression has become an
important part of deep neural network research. In this paper, we propose a
new compression method, \textit{Inter-Layer Weight Prediction} (ILWP), with a
quantization method that quantizes the predicted residuals between the weights
in all convolution layers, based on the inter-frame prediction method in
conventional video coding schemes. Furthermore, we observe a phenomenon, the
\textit{Smoothly Varying Weight Hypothesis} (SVWH): the weights in adjacent
convolution layers share strong similarity in shapes and values, i.e., the
weights tend to vary smoothly along the layers. Based on SVWH, we propose a
second ILWP and quantization method that quantizes the predicted residuals
between the weights in adjacent convolution layers. Since the predicted weight
residuals tend to follow Laplace distributions with very low variance, the
weight quantization can be applied more effectively, thus producing more zero
weights and enhancing the weight compression ratio. In addition, we propose a
new \textit{inter-layer loss} for eliminating non-texture bits, which enables
us to more effectively store only texture bits. That is, the proposed loss
regularizes the weights such that the collocated weights between the adjacent
two layers have the same values. Finally, we
propose an ILWP with an inter-layer loss and quantization method. Our
comprehensive experiments show that the proposed method achieves a much higher
weight compression rate at the same accuracy level compared with the previous
quantization-based compression methods in deep neural networks.
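A minimal sketch of the second ILWP variant described above, assuming same-shaped adjacent layers and plain uniform quantization as a stand-in for the paper's scheme; the encoder reconstructs each layer the way a decoder would, so prediction errors do not accumulate.

```python
import numpy as np

def quantize(residual, step=0.01):
    return np.round(residual / step).astype(np.int32)

def dequantize(q, step=0.01):
    return q * step

def compress(layer_weights, step=0.01):
    """Store the first layer plus quantized inter-layer weight residuals."""
    stored, prev = [layer_weights[0].copy()], layer_weights[0]
    for w in layer_weights[1:]:
        q = quantize(w - prev, step)        # SVWH: residuals are small
        prev = prev + dequantize(q, step)   # decoder-side reconstruction
        stored.append(q)
    return stored

rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))
layers = [base + 0.02 * i * rng.normal(size=base.shape) for i in range(4)]
stored = compress(layers)
residuals = np.concatenate([q.ravel() for q in stored[1:]])
print(f"zero residuals: {np.mean(residuals == 0):.0%}")
```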
|
We present the first results from our on-going program to estimate black hole
masses [M(BH)] of nearby BL Lac objects. The estimates are based on stellar
velocity dispersion (sigma) of the BL Lac host galaxies from optical
spectroscopy, and the recently found tight correlation between M(BH) and sigma
in nearby early-type galaxies. For the first three BL Lacs, we find log M(BH) =
7.5 - 8.7 and M(BH)/M(host) = 0.03 - 0.1.
|
Contention resolution addresses the challenge of coordinating access by
multiple processes to a shared resource such as memory, disk storage, or a
communication channel. Originally spurred by challenges in database systems and
bus networks, contention resolution has endured as an important abstraction for
resource sharing, despite decades of technological change. Here, we survey the
literature on resolving worst-case contention, where the number of processes
and the time at which each process may start seeking access to the resource is
dictated by an adversary. We highlight the evolution of contention resolution,
where new concerns -- such as security, quality of service, and energy
efficiency -- are motivated by modern systems. These efforts have yielded
insights into the limits of randomized and deterministic approaches, as well as
the impact of different model assumptions such as global clock synchronization,
knowledge of the number of processors, feedback from access attempts, and
attacks on the availability of the shared resource.
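As a concrete instance of the randomized approach the survey covers, here is a toy slotted-time simulation of binary exponential backoff in Python, where an attempt succeeds only when exactly one process transmits in a slot; the model and parameters are illustrative, not drawn from the survey.

```python
import random

def exponential_backoff(n_processes, max_slots=10_000, seed=0):
    """Slots until all processes transmit successfully under binary backoff."""
    rng = random.Random(seed)
    window = [1] * n_processes       # current backoff window per process
    waiting = [0] * n_processes      # slots until the next attempt
    done, slot = set(), 0
    while len(done) < n_processes and slot < max_slots:
        attempts = [p for p in range(n_processes)
                    if p not in done and waiting[p] == 0]
        if len(attempts) == 1:
            done.add(attempts[0])    # a lone attempt succeeds
        elif len(attempts) > 1:
            for p in attempts:       # collision: double the window, redraw
                window[p] = min(2 * window[p], 1 << 16)
                waiting[p] = 1 + rng.randrange(window[p])
        for p in range(n_processes): # time advances by one slot
            if p not in done and waiting[p] > 0:
                waiting[p] -= 1
        slot += 1
    return slot

print(exponential_backoff(8))
```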
|
The lithosphere activity during the seismogenic stage or occurrence of an
earthquake may emit electromagnetic waves which propagate to the ionosphere
and radiation belt, and then induce disturbances of the electric and magnetic
fields and the precipitation of high energy charged particles. This paper,
based on the data detected by the DEMETER satellite, presents high energy
charged particle bursts (PBs) with a 4 to 6 times enhancement over the average
value, observed about ten days before the Chile earthquake. An obvious
particle burst was also observed in the northern hemisphere mirror-point
conjugate of the epicenter, and no PB events in different years over the same
epicenter region were found. The energy spectra of the PBs differ from the
spectrum averaged over the first three months of 2010. At the same time,
disturbances of the VLF electric spectrum in the ionosphere over the
epicenter, detected by the DEMETER satellite, were also observed in the same
two orbits. These observations of energetic PBs and VLF electric spectrum
disturbances demonstrate the coupling relation among the electromagnetic waves
emitted by seismic activity, energetic particles, and the electric field in
the ionosphere. We eliminate possible origins of the PBs, including magnetic
storms and solar activity. Finally, we conclude that the PBs are likely to be
related to the Chile earthquake and can be taken as a precursor of this
earthquake.
|
The nonlocality apparently exhibited by entanglement is one of the most
striking features of quantum theory. We examine the locality assumption in
Bell-type proofs for entangled qubits, i.e. that the outcome of a qubit at one
end is independent of the basis choice at the other end. It has recently been claimed
independent of the basis choice at the other end. It has recently been claimed
that in order to properly incorporate the phenomenon of self-observation, the
Heisenberg picture with time going backwards provides a consistent description.
We show that, if this claim holds true, the assumption in nonlocality proofs
that basis choices at two ends are independent of each other may no longer be
true, and may pose a threat to the validity of Bell-type proofs.
|
Context. Zeta Pup is the X-ray brightest O-type star in the sky. This object
was regularly observed with the RGS instrument aboard XMM-Newton for
calibration purposes, leading to an unprecedented set of high-quality spectra.
Aims. We have previously reduced and extracted this data set and combined it
into the most detailed high-resolution X-ray spectrum of any early-type star so
far. Here we present the analysis of this spectrum accounting for the presence
of structures in the stellar wind. Methods. For this purpose, we use our new
modeling tool that allows fitting the entire spectrum with a multi-temperature
plasma. We illustrate the impact of a proper treatment of the radial dependence
of the X-ray opacity of the cool wind on the best-fit radial distribution of
the temperature of the X-ray plasma. Results. The best fit of the RGS spectrum
of Zeta Pup is obtained assuming no porosity. Four plasma components at
temperatures between 0.10 and 0.69 keV are needed to adequately represent the
observed spectrum. Whilst the hardest emission is concentrated between ~3 and 4
R*, the softer emission starts already at 1.5 R* and extends to the outer
regions of the wind. Conclusions. The inferred radial distribution of the
plasma temperatures agrees rather well with theoretical expectations. The
mass-loss rate and CNO abundances corresponding to our best-fit model also agree
quite well with the results of recent studies of Zeta Pup in the UV and optical
domain.
|
Although recent scientific output focuses on multiple shortest-path problem
definitions for road networks, none of the existing solutions efficiently
answers all different types of SP queries. This work proposes SALT, a novel
framework that not only efficiently answers SP related queries but also
k-nearest neighbor queries not handled by previous approaches. Our solution
offers all the benefits needed for practical use-cases, including excellent
query performance and very short preprocessing times, thus making it also a
viable option for dynamic road networks, i.e., edge weights changing frequently
due to traffic updates. The proposed SALT framework is a deployable software
solution capturing a range of network-related query problems under one
"algorithmic hood".
|
We prove that if $G$ is a $2r$-regular edge graceful $(p,q)$ graph with
$(r,kp)=1$, then $kG$ is edge graceful for odd $k$. We also prove that for
certain specific classes of $2r$-regular edge graceful graphs it is possible to
drop the requirement that $(r,kp)=1$.
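For readers unfamiliar with the notion, a brute-force check of edge-gracefulness in Python, assuming the standard definition: a $(p,q)$ graph is edge graceful if the edges can be labeled $1,\dots,q$ so that the induced vertex sums modulo $p$ are pairwise distinct.

```python
from itertools import permutations

def is_edge_graceful(p, edges):
    """Try every edge labeling 1..q; succeed if vertex sums mod p are distinct."""
    q = len(edges)
    for labels in permutations(range(1, q + 1)):
        sums = [0] * p
        for (u, v), lab in zip(edges, labels):
            sums[u] = (sums[u] + lab) % p
            sums[v] = (sums[v] + lab) % p
        if len(set(sums)) == p:
            return True
    return False

# C_5, a 2-regular graph (odd cycles are known to be edge graceful).
cycle5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_edge_graceful(5, cycle5))  # True
```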
|
Combined synchrotron angle-dispersive powder diffraction and micro-Raman
spectroscopy are used to investigate the pressure-induced lattice instabilities
that are accompanied by T$_{\rm c}$ anomalies in YBa$_{\rm 2}$Cu$_{\rm
4}$O$_{\rm 8}$, in comparison with the optimally doped YBa$_{\rm 2}$Cu$_{\rm
3}$O$_{\rm 7-\delta}$ and the non-superconducting PrBa$_{\rm 2}$Cu$_{\rm
3}$O$_{\rm 6.92}$. In the first two superconducting systems there is a clear
anomaly in the evolution of the lattice parameters and an increase of lattice
disorder with pressure, starting at $\approx 3.7$ GPa, as well as an
irreversibility that induces a hysteresis. On the contrary, in the Pr-compound
the lattice parameters follow very well the expected equation of state (EOS) up
to 7 GPa. In complete agreement with the structural data, the micro-Raman data
of the superconducting compounds show that the energy and width of the A$_{\rm
g}$ phonons exhibit anomalies in the same pressure range where the lattice
parameters deviate from the EOS and the average Cu2-O$_{pl}$ bond length shows
a strong contraction; these anomalies correlate with the non-linear pressure
dependence of T$_{\rm c}$. This is not the case for the non-superconducting Pr
sample, clearly indicating a connection with the charge carriers. It appears
that the cuprates close to optimal doping are at the edge of lattice
instability.
|
Given two equally long, uniformly random binary strings, the expected length
of their longest common subsequence (LCS) is asymptotically proportional to the
strings' length. Finding the proportionality coefficient $\gamma$, i.e. the
limit of the normalised LCS length for two random binary strings of length $n
\to \infty$, is a very natural problem, first posed by Chv\'atal and Sankoff in
1975, and as yet unresolved. This problem has relevance to diverse fields
ranging from combinatorics and algorithm analysis to coding theory and
computational biology. Using methods of statistical mechanics, as well as some
existing results on the combinatorial structure of LCS, we link constant
$\gamma$ to the parameters of a certain stochastic particle process. These
parameters are determined by a specific (large) system of polynomial equations
with integer coefficients, which implies that $\gamma$ is an algebraic number.
Short of finding an exact closed-form solution for such a polynomial system,
which appears to be unlikely, our approach essentially resolves the
Chv\'atal-Sankoff problem, albeit in a somewhat unexpected way with a rather
negative flavour.
|
We propose a novel symmetrization procedure to beat decoherence for
oscillator-assisted quantum gate operations. The enacted symmetry is related to
the global geometric features of qubits transformation based on ancillary
oscillator modes, e.g. phonons in an ion-trap system. It is shown that the
devised multi-circuit symmetrized evolution endows the system with a two-fold
resilience against decoherence: insensitivity to thermal fluctuations and
quantum dissipation.
|
We consider a family of non-compact manifolds $X_\eps$ (``graph-like
manifolds'') approaching a metric graph $X_0$ and establish convergence results
of the related natural operators, namely the (Neumann) Laplacian $\laplacian
{X_\eps}$ and the generalised Neumann (Kirchhoff) Laplacian $\laplacian {X_0}$
on the metric graph. In particular, we show the norm convergence of the
resolvents, spectral projections and eigenfunctions. As a consequence, the
essential and the discrete spectrum converge as well. Neither the manifolds nor
the metric graph need to be compact; we only need some natural uniformity
assumptions. We provide examples of manifolds having spectral gaps in the
essential spectrum, discrete eigenvalues in the gaps or even manifolds
approaching a fractal spectrum. The convergence results will be given in a
completely abstract setting dealing with operators acting in different spaces,
applicable also in other geometric situations.
|
We determine the frequency ratios $\tau\equiv \omega_z/\omega_{\rho}$ for
which the Hamiltonian system with a potential \[
V=\frac{1}{r}+\frac{1}{2}\Big({\omega_{\rho}}^2(x^2+y^2)+{\omega_z}^2 z^2\Big)
\] is completely integrable. We relate this result to the existence of
conformal Killing tensors of the associated Eisenhart metric on $\mathbb{R}^{1,
4}$. Finally we show that trajectories of a particle moving under the influence
of the potential $V$ are not unparametrised geodesics of any Riemannian metric
on $\mathbb{R}^3$.
|
A way to improve the accuracy of the spectral properties in density
functional theory (DFT) is to impose constraints on the effective, Kohn-Sham
(KS), local potential [J. Chem. Phys. {\bf 136}, 224109 (2012)]. As
illustrated, a convenient variational quantity in that approach is the
``screening'' or ``electron repulsion'' density, $\rho_{\rm rep}$,
corresponding to the local, KS Hartree, exchange and correlation potential
through Poisson's equation. Two constraints, applied to this minimization,
largely remove self-interaction errors from the effective potential: (i)
$\rho_{\rm rep}$ integrates to $N-1$, where $N$ is the number of electrons, and
(ii) $\rho_{\rm rep}\geq 0$ everywhere. In the present work, we introduce an
effective ``screening'' amplitude, $f$, as the variational quantity, with the
screening density being $\rho_{\rm rep}=f^2$. In this way, the positivity
condition for $\rho_{\rm rep}$ is automatically satisfied and the minimization
problem becomes more efficient and robust. We apply this technique to molecular
calculations employing several approximations in DFT and in reduced density
matrix functional theory. We find that the proposed development is an accurate,
yet robust, variant of the constrained effective potential method.
|
We present a generalization of the classical Schur modules of $GL(N)$
exhibiting the same interplay among algebra, geometry, and combinatorics. A
generalized Young diagram $D$ is an arbitrary finite subset of $\NN \times
\NN$. For each $D$, we define the Schur module $S_D$ of $GL(N)$. We introduce a
projective variety $\FF_D$ and a line bundle $\LL_D$, and describe the Schur
module in terms of sections of $\LL_D$. For diagrams with the ``northeast''
property,
$$(i_1,j_1),\ (i_2, j_2) \in D \to (\min(i_1,i_2),\max(j_1,j_2)) \in D ,$$
which includes the skew diagrams, we resolve the singularities of $\FF_D$ and
show analogs of Bott's and Kempf's vanishing theorems. Finally, we apply the
Atiyah-Bott Fixed Point Theorem to establish a Weyl-type character formula of
the form:
$$ {\Char}_{S_D}(x) = \sum_t {x^{\wt(t)} \over \prod_{i,j} (1-x_i
x_j^{-1})^{d_{ij}(t)}} \ ,$$
where $t$ runs over certain standard tableaux of $D$.
Our results are valid over fields of arbitrary characteristic.
|
Recent advancements in autonomous driving have relied on data-driven
approaches, which are widely adopted but face challenges including dataset
bias, overfitting, and uninterpretability. Drawing inspiration from the
knowledge-driven nature of human driving, we explore the question of how to
instill similar capabilities into autonomous driving systems and summarize a
paradigm that integrates an interactive environment, a driver agent, as well as
a memory component to address this question. Leveraging large language models
(LLMs) with emergent abilities, we propose the DiLu framework, which combines a
Reasoning and a Reflection module to enable the system to perform
decision-making based on common-sense knowledge and evolve continuously.
Extensive experiments prove DiLu's capability to accumulate experience and
demonstrate a significant advantage in generalization ability over
reinforcement learning-based methods. Moreover, DiLu is able to directly
acquire experiences from real-world datasets which highlights its potential to
be deployed on practical autonomous driving systems. To the best of our
knowledge, we are the first to leverage knowledge-driven capability in
decision-making for autonomous vehicles. Through the proposed DiLu framework,
LLM is strengthened to apply knowledge and to reason causally in the autonomous
driving domain. Project page: https://pjlab-adg.github.io/DiLu/
|
We explore effects of the Shakura-Sunyaev alpha-viscosity on the dynamics and
oscillations of slender tori. We start with a slow secular evolution of the
torus. We show that the angular-momentum profile approaches the Keplerian one
on a timescale longer than the dynamical one by a factor of the order of
1/\alpha. Then we focus our attention on the oscillations of the torus. We
discuss effects of various angular momentum distributions. Using a perturbation
theory, we have found a rather general result that the high-order acoustic
modes are damped by the viscosity, while the high-order inertial modes are
enhanced. We calculate a viscous growth rates for the lowest-order modes and
show that already lowest-order inertial mode is unstable for less steep angular
momentum profiles or very close to the central gravitating object.
|
In our previous works, we have analyzed the evolution of bulk viscous matter
dominated universe with a more general form for bulk viscous coefficient,
$\zeta=\zeta_{0}+\zeta_{1}\frac{\dot{a}}{a}+\zeta_{2}\frac{\ddot{a}}{\dot{a}}$
and also carried out the dynamical system analysis. We found that the model
reasonably describes the evolution of the universe if the viscous coefficient
is a constant. In the present work we are contrasting this model with the
standard $\Lambda$CDM model of the universe using the Bayesian method. We have
shown that, even though the viscous model gives a reasonable back ground
evolution of the universe, the Bayes factor of the model indicates that, it is
not so superior over the $\Lambda$CDM model, but have a slight advantage over
it.
|
A graph is 1-planar if it can be drawn in the plane so that each edge is
crossed at most once. However, there are 1-planar graphs which do not admit a
straight-line 1-planar drawing. We show that every 1-planar graph has a
straight-line drawing with a two-coloring of the edges, so that edges of the
same color do not cross. Hence, 1-planar graphs have geometric thickness two.
In addition, if an edge is crossed more than twice, then its crossing edges
share a common vertex. The drawings use high-precision arithmetic with numbers
of O(n log n) digits and can be computed in linear time from a 1-planar
drawing.
|
We have used the VLA to study radio variability among a sample of 18 low
luminosity active galactic nuclei (LLAGNs), on time scales of a few hours to 10
days. The goal was to measure or limit the sizes of the LLAGN radio-emitting
regions, in order to use the size measurements as input to models of the radio
emission mechanisms in LLAGNs. We detect variability on typical time scales of
a few days, at a confidence level of 99%, in half of the target galaxies.
Either variability that is intrinsic to the radio emitting regions, or that is
caused by scintillation in the Galactic interstellar medium, is consistent with
the data. For either interpretation, the brightness temperature of the emission
is below the inverse-Compton limit for all of our LLAGNs, and has a mean value
of about 1E10 K. The variability measurements plus VLBI upper limits imply that
the typical angular size of the LLAGN radio cores at 8.5 GHz is 0.2
milliarcseconds, plus or minus a factor of two. The ~ 1E10 K brightness
temperature strongly suggests that a population of high-energy nonthermal
electrons must be present, in addition to a hypothesized thermal population in
an accretion flow, in order to produce the observed radio emission.
|
If $(G,V)$ is a multiplicity free space with a one-dimensional quotient, we
give generators and relations for the non-commutative algebra $D(V)^{G'}$ of
invariant differential operators under the semi-simple part $G'$ of the
reductive group $G$. More precisely, we show that $D(V)^{G'}$ is the quotient
of a Smith algebra by a completely described two-sided ideal.
|
Although Generative Adversarial Networks (GANs) are successfully applied to
diverse fields, training GANs on synthetic aperture radar (SAR) data is a
challenging task, mostly due to speckle noise. On the one hand, from the
perspective of human learning, it is natural to learn a task by using various
kinds of information from multiple sources. However, in previous GAN works
on SAR target image generation, the information on target classes has only been
used. Due to the backscattering characteristics of SAR image signals, the
shapes and structures of SAR target images are strongly dependent on their pose
angles. Nevertheless, the pose angle information has not been incorporated into
such generative models for SAR target images. In this paper, we first propose
a novel GAN-based multi-task learning (MTL) method for SAR target image
generation, called PeaceGAN that uses both pose angle and target class
information, which makes it possible to produce SAR target images of desired
target classes at intended pose angles. For this, the PeaceGAN has two
additional structures, a pose estimator and an auxiliary classifier, at the
side of its discriminator to combine the pose and class information more
efficiently. In addition, the PeaceGAN is jointly learned in an end-to-end
manner as MTL with both pose angle and target class information, thus enhancing
the diversity and quality of the generated SAR target images. Extensive
experiments show that taking advantage of both pose angle and target class
learning via the proposed pose estimator and auxiliary classifier helps the
PeaceGAN's generator effectively learn the distributions of SAR target images
in the MTL framework, so that it can generate SAR target images more flexibly
and faithfully at intended pose angles for desired target classes compared to
recent state-of-the-art methods.
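As a rough illustration of the two auxiliary heads described above (a minimal
PyTorch sketch with hypothetical layer sizes; this is not the authors' PeaceGAN
architecture):

import torch.nn as nn

class MTLDiscriminator(nn.Module):
    """Discriminator with adversarial, class, and pose heads (hypothetical sizes)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)            # real/fake score
        self.cls_head = nn.Linear(128, num_classes)  # auxiliary target-class classifier
        self.pose_head = nn.Linear(128, 2)           # pose angle as (sin, cos)

    def forward(self, x):
        h = self.features(x)
        return self.adv_head(h), self.cls_head(h), self.pose_head(h)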
|
An NJL Lagrangian extended to six- and eight-quark interactions is applied to
study temperature effects in the SU(3) flavor limit (massless case) and in the
realistic massive case. The transition temperature can be considerably reduced as
compared to the standard approach, in accordance with recent lattice
calculations. The mesonic spectra built on the spontaneously broken vacuum
induced by the 't Hooft interaction strength, as opposed to the commonly
considered case driven by the four-quark coupling, undergo a rapid crossover
to the unbroken phase, with a slope and at a temperature which is regulated by
the strength of the OZI violating eight-quark interactions. This strength can
be adjusted in consonance with the four-quark coupling and leaves the spectra
unchanged, except for the sigma meson mass, which decreases. A first order
transition behavior is also a possible solution within the present approach.
|
Very sensitive 21cm HI measurements have been made at several locations
around the Local Group galaxy M31 using the Green Bank Telescope (GBT) at an
angular resolution of 9.1', with a 5$\sigma$ detection level of $\rm{N_{HI} =
3.9 \times 10^{17}~cm^{-2}}$ for a 30 $\rm{km~s^{-1}}$ line. Most of the HI in
a 12 square degree area almost equidistant between M31 and M33 is contained in
nine discrete clouds that have a typical size of a few kpc and HI mass of
$10^5$ M$_{\odot}$. Their velocities in the Local Group Standard of Rest lie
between -100 and +40 $\rm{km~s^{-1}}$, comparable to the systemic velocities of
M31 and M33. The clouds appear to be isolated kinematically and spatially from
each other. The total HI mass of all nine clouds is $1.4 \times 10^6$
M$_{\odot}$ for an adopted distance of 800 kpc with perhaps another $0.2 \times
10^6$ M$_{\odot}$ in smaller clouds or more diffuse emission. The HI mass of
each cloud is typically three orders of magnitude less than the dynamical
(virial) mass needed to bind the cloud gravitationally. Although they have the
size and HI mass of dwarf galaxies, the clouds are unlikely to be part of the
satellite system of the Local Group as they lack stars. To the north of M31,
sensitive HI measurements on a coarse grid find emission that may be associated
with an extension of the M31 high-velocity cloud population to projected
distances of $\sim 100$ kpc. An extension of the M31 high-velocity cloud
population at a similar distance to the south, toward M33, is not observed.
|
By using the same algorithm in the Baade-Wesselink analyses of Galactic RR
Lyrae and Cepheid variables, it is shown that, within 0.03 mag (1 sigma)
statistical error, they yield the same distance modulus for the Large
Magellanic Cloud. By fixing the zero point of the color-temperature calibration
to those of the current infrared flux methods and using updated
period-luminosity-color relations, we get an average value of 18.55 for the
true distance modulus of the LMC.
|
This paper studies co-segmenting the common semantic object in a set of
images. Existing works either rely on carefully engineered networks to mine the
implicit semantic information in visual features or require extra data (i.e.,
classification labels) for training. In this paper, we leverage the contrastive
language-image pre-training framework (CLIP) for the task. With a backbone
segmentation network that independently processes each image from the set, we
introduce semantics from CLIP into the backbone features, refining them in a
coarse-to-fine manner with three key modules: i) an image set feature
correspondence module, encoding global consistent semantic information of the
image set; ii) a CLIP interaction module, using CLIP-mined common semantics of
the image set to refine the backbone feature; iii) a CLIP regularization
module, drawing CLIP towards this co-segmentation task, identifying the best
CLIP semantic and using it to regularize the backbone feature. Experiments on
four standard co-segmentation benchmark datasets show that our method
outperforms state-of-the-art methods.
|
Semantic annotation is fundamental to deal with large-scale lexical
information, mapping the information to an enumerable set of categories over
which rules and algorithms can be applied, and foundational ontology classes
can be used as a formal set of categories for such tasks. A previous alignment
between WordNet noun synsets and DOLCE provided a starting point for
ontology-based annotation, but in NLP tasks verbs are also of substantial
importance. This work presents an extension to the WordNet-DOLCE noun mapping,
aligning verbs according to their links to nouns denoting perdurants,
transferring to the verb the DOLCE class assigned to the noun that best
represents that verb's occurrence. To evaluate the usefulness of this resource,
we implemented a foundational ontology-based semantic annotation framework
that assigns a high-level foundational category to each word or phrase in a
text, and compared it to a similar annotation tool, obtaining an increase of
9.05% in accuracy.
|
The notion of a qubit is ubiquitous in quantum information processing. In
spite of the simple abstract definition of qubits as two-state quantum systems,
identifying qubits in physical systems is often unexpectedly difficult. There
are an astonishing variety of ways in which qubits can emerge from devices.
What essential features are required for an implementation to properly
instantiate a qubit? We give three typical examples and propose an operational
characterization of qubits based on quantum observables and subsystems.
|
Federated Learning (FL) exhibits privacy vulnerabilities under gradient
inversion attacks (GIAs), which can extract private information from individual
gradients. To enhance privacy, FL incorporates Secure Aggregation (SA) to
prevent the server from obtaining individual gradients, thus effectively
resisting GIAs. In this paper, we propose a stealthy label inference attack to
bypass SA and recover individual clients' private labels. Specifically, we
conduct a theoretical analysis of label inference from the aggregated gradients
that are exclusively obtained after implementing SA. The analysis results
reveal that the inputs (embeddings) and outputs (logits) of the final fully
connected layer (FCL) contribute to gradient disaggregation and label
restoration. To preset the embeddings and logits of FCL, we craft a fishing
model by solely modifying the parameters of a single batch normalization (BN)
layer in the original model. By distributing client-specific fishing models, the
server can derive the individual gradients regarding the bias of FCL by
resolving a linear system with expected embeddings and the aggregated gradients
as coefficients. Then the labels of each client can be precisely computed based
on preset logits and gradients of FCL's bias. Extensive experiments show that
our attack achieves large-scale label recovery with 100\% accuracy on various
datasets and model architectures.
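As a toy numerical illustration of the disaggregation step above (hypothetical
shapes and coefficients; not the authors' implementation), recovering
individual bias gradients from the aggregated gradient amounts to solving a
linear system:

import numpy as np

K = 4                                       # number of clients (hypothetical)
A = np.eye(K) + 0.1 * np.random.rand(K, K)  # assumed-known coefficients from preset embeddings
g_true = np.random.randn(K)                 # per-client FCL-bias gradients (hidden by SA)
g_agg = A @ g_true                          # only the aggregate is visible to the server

g_rec = np.linalg.solve(A, g_agg)           # disaggregation: solve the linear system
print(np.allclose(g_rec, g_true))           # True (up to numerical error)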
|
We study the quantum localization phenomena for a random matrix model
belonging to the Gaussian orthogonal ensemble (GOE). An oscillating external
field is applied to the system. After the transient time evolution, the energy
saturates at various values depending on the frequency. We investigate the
frequency dependence of the saturated energy. This dependence cannot be
explained by a naive picture of successive independent Landau-Zener transitions
at avoided level crossing points. The effect of quantum interference is
essential. We define the number of Floquet states which have large overlap with
the initial state, and calculate its frequency dependence. The number of
Floquet states shows approximately linear dependence on the frequency, when the
frequency is small. Comparing the localization length in Floquet states and
that in energy states from the viewpoint of the Anderson localization, we
conclude that the Landau-Zener picture works for the local transition processes
between levels.
|
In this paper, we define Mannheim partner curves in a three-dimensional Lie
group G with a bi-invariant metric. The main result of this paper (Theorem 3.3)
is: a curve $\alpha$ with the Frenet apparatus $\{T,N,B,\kappa,\tau\}$ in G is
a Mannheim partner curve if and only if $\lambda\kappa(1+H^2)=1$, where
$\lambda$ is a constant and $H$ is the harmonic curvature function of the curve
$\alpha$.
|
Dense sub-graphs of sparse graphs (communities), which appear in most
real-world complex networks, play an important role in many contexts. Most
existing community detection algorithms produce a hierarchical community
structure and seek a partition into communities that optimizes a given quality
function. We propose new methods to improve the results of any of these
algorithms. First we show how to optimize a general class of additive quality
functions (containing the modularity, the performance, and a new similarity
based quality function we propose) over a larger set of partitions than the
classical methods. Moreover, we define new multi-scale quality functions which
make it possible to detect the different scales at which meaningful community
structures appear, while classical approaches find only one partition.
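For instance, one member of this class of additive quality functions, the
modularity, can be evaluated on any candidate partition (a minimal sketch with
networkx; the optimization and multi-scale methods of the paper are not
reproduced here):

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                # classic benchmark network
parts = greedy_modularity_communities(G)  # a baseline hierarchical method
print(modularity(G, parts))               # additive quality function to be optimized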
|
The family of topologies that induce the Euclidean metric space on every time
axis and every space axis exhibits no maximal element when partially ordered by
the relation ``finer than'', as demonstrated in this article. One conclusion
and two reflections emerge and are addressed herein. Conclusion: Zeeman's fine
topology [1] and G\"{o}bel's extension to arbitrary spacetimes [2] do not
exist. Reflections: (a) both authors' attempts may be classified as type-2
strategies within the taxonomy of [3]; (b) how could these nonexistent
topologies have been used for decades?
|
A large sample of cosmic ray events collected by the CMS detector is
exploited to measure the specific energy loss of muons in the lead tungstate of
the electromagnetic calorimeter. The measurement spans a momentum range from 5
GeV/c to 1 TeV/c. The results are consistent with the expectations over the
entire range. The calorimeter energy scale, set with 120 GeV/c electrons, is
validated down to the sub-GeV region using energy deposits, of order 100 MeV,
associated with low-momentum muons. The muon critical energy in lead tungstate
is measured to be 160 +5/-6 (stat.) +/- 8 (syst.) GeV, in agreement with
expectations.
This is the first experimental determination of muon critical energy.
|
Penalized (or regularized) regression, as represented by Lasso and its
variants, has become a standard technique for analyzing high-dimensional data
when the number of variables substantially exceeds the sample size. The
performance of penalized regression relies crucially on the choice of the
tuning parameter, which determines the amount of regularization and hence the
sparsity level of the fitted model. The optimal choice of tuning parameter
depends on both the structure of the design matrix and the unknown random error
distribution (variance, tail behavior, etc). This article reviews the current
literature of tuning parameter selection for high-dimensional regression from
both theoretical and practical perspectives. We discuss various strategies that
choose the tuning parameter to achieve prediction accuracy or support recovery.
We also review several recently proposed methods for tuning-free
high-dimensional regression.
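As a concrete example of one strategy reviewed above, cross-validation selects
the tuning parameter by minimizing estimated prediction error (a scikit-learn
sketch with simulated data, purely illustrative):

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 100, 500                        # high-dimensional regime: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0     # sparse true coefficient vector
y = X @ beta + rng.standard_normal(n)

fit = LassoCV(cv=5).fit(X, y)          # tuning parameter chosen by 5-fold cross-validation
print(fit.alpha_)                      # selected regularization level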
|
The dynamics of macroscopically homogeneous sheared suspensions of neutrally
buoyant, non-Brownian spheres is investigated in the limit of vanishingly small
Reynolds numbers using Stokesian dynamics. We show that the complex dynamics of
sheared suspensions can be characterized as a chaotic motion in phase space and
determine the dependence of the largest Lyapunov exponent on the volume
fraction $\phi$. The loss of memory at the microscopic level of individual
particles is also shown in terms of the autocorrelation functions for the two
transverse velocity components. Moreover, a negative correlation in the
transverse particle velocities is seen to exist at the lower concentrations, an
effect which we explain on the basis of the dynamics of two isolated spheres
undergoing simple shear. In addition, we calculate the probability distribution
function of the velocity fluctuations and observe, with increasing $\phi$, a
transition from exponential to Gaussian distributions.
The simulations include a non-hydrodynamic repulsive interaction between the
spheres which qualitatively models the effects of surface roughness and other
irreversible effects, such as residual Brownian displacements, that become
particularly important whenever pairs of spheres are nearly touching. We
investigate the effects of such a non-hydrodynamic interparticle force on the
scaling of the particle tracer diffusion coefficient $D$ for very dilute
suspensions, and show that, when this force is very short-ranged, $D$ becomes
proportional to $\phi^2$ as $\phi \to 0$. In contrast, when the range of the
non-hydrodynamic interaction is increased, we observe a crossover in the
dependence of $D$ on $\phi$, from $\phi^2$ to $\phi$ as $\phi \to 0$.
|
We consider a six-parameter family of the square integrable wave functions
for the simple harmonic oscillator, which cannot be obtained by the standard
separation of variables. They are given by the action of the corresponding
maximal kinematical invariance group on the standard solutions. In addition,
the phase space oscillations of the electron position and linear momentum
probability distributions are computer animated and some possible applications
are briefly discussed. A visualization of the Heisenberg Uncertainty Principle
is presented.
|
The quantum evolution after a metallic lead is suddenly connected to an
electron system contains information about the excitation spectrum of the
combined system. We exploit this type of "quantum quench" to probe the presence
of Majorana fermions at the ends of a topological superconducting wire. We
obtain an algebraically decaying overlap (Loschmidt echo) ${\cal L}(t)=| <
\psi(0) | \psi(t) > |^2\sim t^{-\alpha}$ for large times after the quench, with
a universal critical exponent $\alpha$=1/4 that is found to be remarkably
robust against details of the setup, such as interactions in the normal lead,
the existence of additional lead channels or the presence of bound levels
between the lead and the superconductor. As in recent quantum dot experiments,
this exponent could be measured by optical absorption, offering a new signature
of Majorana zero modes that is distinct from interferometry and tunneling
spectroscopy.
|
We investigate the dynamics of two Jordan Wigner solvable models, namely, the
one dimensional chain of hard-core bosons (HCB) and the one-dimensional
transverse field Ising model under a coin-toss-like aperiodically driven
staggered on-site potential and transverse field, respectively. It is
demonstrated that both the models heat up to the infinite temperature ensemble
for a minimal aperiodicity in driving. Consequently, in the case of the HCB
chain, we show that the initial current generated by the application of a twist
vanishes in the asymptotic limit for any driving frequency. For the transverse
Ising chain, we establish that the system not only reaches the diagonal
ensemble but the entanglement also attains the thermal value in the asymptotic
limit following initial ballistic growth. All these findings, contrasted with
that of the perfectly periodic situation, are analytically established in the
asymptotic limit within an exact disorder matrix formalism developed using the
uncorrelated binary nature of the coin-toss aperiodicity.
|
We introduce a model-complete theory which completely axiomatizes the
structure $Z_{\alpha}=(Z, +, 0, 1, f)$ where $f : x \to \lfloor{\alpha} x
\rfloor $ is a unary function with $\alpha$ a fixed transcendental number. When
$\alpha$ is computable, our theory is recursively enumerable, and hence
decidable as a result of completeness. Therefore, this result fits into the
more general theme of adding traces of multiplication to integers without
losing decidability.
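For concreteness, the unary function of the structure can be evaluated directly
(a sketch using a floating-point stand-in for a transcendental $\alpha$; the
decidability result itself concerns the exact function):

import math

alpha = math.pi                      # transcendental alpha (float approximation)
f = lambda x: math.floor(alpha * x)  # f(x) = floor(alpha * x)
print([f(x) for x in range(-3, 4)])  # [-10, -7, -4, 0, 3, 6, 9]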
|
Human collective intelligence has proved itself as an important factor in a
society's ability to accomplish large-scale behavioral feats. As societies have
grown in population size, individuals have seen a decrease in their ability to
actively participate in the problem-solving processes of the group.
Representative decision-making structures have been used as a modern solution
to society's inadequate information-processing infrastructure. With computer
and network technologies being further embedded within the fabric of society,
the implementation of a general-purpose societal-scale human-collective
problem-solving engine is envisioned as a means of furthering the
collective-intelligence potential of society. This paper provides both a novel
framework for creating collective intelligence systems and a method for
implementing a representative and expertise system based on social-network
theory.
|
In this work, we investigate some extensions of the Kiselev black hole
solutions in the context of $f(\mathbb{T},\CMcal{T})$ gravity. By mapping the
components of the Kiselev energy-momentum tensor into the anisotropic
energy-momentum tensor and assuming a particular form of
$f(\mathbb{T},\CMcal{T})$, we obtain exact solutions for the field equation in
this theory that carries dependence on the coupling constant and on the
parameter of the equation of state of the fluid. We show that in this scenario
of modified gravity some new structure is added to the geometry of spacetime as
compared to the Kiselev black hole. We analyse the energy conditions, mass,
horizons and the Hawking temperature considering particular values for the
parameter of the equation of state.
|
This talk discusses the formation of primordial intermediate-mass black
holes, in a double-inflationary theory, of sufficient abundance possibly to
provide all of the cosmological dark matter. There follows my, hopefully
convincing, explanation of the dark energy problem, based on the observation
that the visible universe is well approximated by a black hole. Finally, I
argue that Gell-Mann is among the five greatest theoreticians of the twentieth
century.
|
Fingerprint recognition plays an important role in many commercial
applications and is used by millions of people every day, e.g. for unlocking
mobile phones. Fingerprint image segmentation is typically the first processing
step of most fingerprint algorithms and it divides an image into foreground,
the region of interest, and background. Two types of error can occur during
this step which both have a negative impact on the recognition performance:
'true' foreground can be labeled as background and features like minutiae can
be lost, or conversely 'true' background can be misclassified as foreground and
spurious features can be introduced. The contribution of this paper is
threefold: firstly, we propose a novel factorized directional bandpass (FDB)
segmentation method for texture extraction based on the directional Hilbert
transform of a Butterworth bandpass (DHBB) filter interwoven with
soft-thresholding. Secondly, we provide a manually marked ground truth
segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a
systematic performance comparison between the FDB method and four of the most
often cited fingerprint segmentation algorithms showing that the FDB
segmentation method clearly outperforms these four widely used methods. The
benchmark and the implementation of the FDB method are made publicly available.
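To illustrate just the soft-thresholding ingredient mentioned above (a generic
one-liner; the DHBB filter design itself is specified in the paper):

import numpy as np

def soft_threshold(x, t):
    # Shrink coefficients toward zero; responses below t are suppressed entirely.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

print(soft_threshold(np.array([-2.5, -0.3, 0.1, 0.8, 3.0]), t=0.5))
# [-2.  -0.   0.   0.3  2.5]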
|
Using a streak camera, we directly measure time- and space-resolved dynamics
of N2+ emission from a self-seeded filament. We observe characteristic
signatures of superfluorescence even under ambient conditions and show that the
timing of the emitted light varies along the length of the filament. These
effects must be taken into consideration for accurate modelling of light
filaments in air, and can be exploited to engineer the temporal profile of
light emission in air lasing.
|
Performance in natural language processing, and specifically for the
question-answer task, is typically measured by comparing a model's most
confident (primary) prediction to golden answers (the ground truth). We are
making the case that it is also useful to quantify how close a model came to
predicting a correct answer even for examples that failed. We define the Golden
Rank (GR) of an example as the rank of its most confident prediction that
exactly matches a ground truth, and show why such a match always exists. For
the 16 transformer models we analyzed, the majority of exactly matched golden
answers in secondary prediction space hover very close to the top rank. We
refer to secondary predictions as those with rank greater than 0 when
predictions are sorted in descending order of confidence probability. We
demonstrate how the GR can be used to classify
questions and visualize their spectrum of difficulty, from persistent near
successes to persistent extreme failures. We derive a new aggregate statistic
over entire test sets, named the Golden Rank Interpolated Median (GRIM) that
quantifies the proximity of failed predictions to the top choice made by the
model. To develop some intuition and explore the applicability of these metrics
we use the Stanford Question Answering Dataset (SQuAD-2) and a few popular
transformer models from the Hugging Face hub. We first demonstrate that the
GRIM is not directly correlated with the F1 and exact match (EM) scores. We
then calculate and visualize these scores for various transformer
architectures, probe their applicability in error analysis by clustering failed
predictions, and compare how they relate to other training diagnostics such as
the EM and F1 scores. We finally suggest various research goals, such as
broadening data collection for these metrics and their possible use in
adversarial training.
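A minimal sketch of how the Golden Rank might be computed from a model's ranked
predictions (hypothetical data; a plain median stands in here for the
interpolated median used by the GRIM):

import statistics

def golden_rank(ranked_preds, gold):
    # Rank (0 = most confident) of the first prediction exactly matching a gold
    # answer; the paper shows such a match always exists in the full prediction space.
    return next(r for r, p in enumerate(ranked_preds) if p in gold)

examples = [
    (["Paris", "Lyon", "Nice"], {"Paris"}),  # success: GR = 0
    (["Lyon", "Paris", "Nice"], {"Paris"}),  # near miss: GR = 1
    (["Nice", "Lyon", "Paris"], {"Paris"}),  # farther miss: GR = 2
]
grs = [golden_rank(p, g) for p, g in examples]
print(statistics.median(r for r in grs if r > 0))  # 1.5 over the failed examples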
|
Software Defined Networking (SDN) offers a flexible and scalable architecture
that abstracts decision making away from individual devices and provides a
programmable network platform. However, implementing a centralized SDN
architecture within the constraints of a low-power wireless network faces
considerable challenges. Not only is controller traffic subject to jitter due
to unreliable links and network contention, but the overhead generated by SDN
can severely affect the performance of other traffic. This paper addresses the
challenge of bringing high-overhead SDN architecture to IEEE 802.15.4 networks.
We explore how traditional SDN needs to evolve in order to overcome the
constraints of low-power wireless networks, and discuss protocol and
architectural optimizations necessary to reduce SDN control overhead - the main
barrier to successful implementation. We argue that interoperability with the
existing protocol stack is necessary to provide a platform for controller
discovery and coexistence with legacy networks. We consequently introduce
{\mu}SDN, a lightweight SDN framework for Contiki, with both IPv6 and
underlying routing protocol interoperability, as well as optimizing a number of
elements within the SDN architecture to reduce control overhead to practical
levels. We evaluate {\mu}SDN in terms of latency, energy, and packet delivery.
Through this evaluation we show how the cost of SDN control overhead (both
bootstrapping and management) can be reduced to a point where comparable
performance and scalability are achieved against an IEEE 802.15.4-2012 RPL-based
network. Additionally, we demonstrate {\mu}SDN through simulation: providing a
use-case where the SDN configurability can be used to provide Quality of
Service (QoS) for critical network flows experiencing interference, and we
achieve considerable reductions in delay and jitter in comparison to a scenario
without SDN.
|
The dynamics of magnetic flux distributions across a YBaCuO strip carrying
transport current is measured using magneto-optical imaging at 20 K. The
current is applied in pulses of 40-5000 ms duration and magnitude close to the
critical one, 5.5 A. During the pulse some extra flux usually penetrates the
strip, so the local field increases in magnitude. When the strip is initially
penetrated by flux, the local field either increases or decreases depending
both on the spatial coordinate and the current magnitude. Meanwhile, the
current density always tends to redistribute more uniformly. Despite the
relaxation, all distributions remain qualitatively similar to the Bean model
predictions.
|
We combine Wooley's efficient congruencing method with earlier work of
Vinogradov and Hua to get effective bounds on Vinogradov's mean value theorem.
|
The complexity of today's robot control systems implies difficulty in
developing them efficiently and reliably. Systems engineering (SE) and
frameworks come to help. The framework metamodels are needed to support the
standardisation and correctness of the created application models. Although the
use of frameworks is widespread nowadays, for the most popular of them, Robot
Operating System (ROS), a contemporary metamodel has been missing so far. This
article proposes a new metamodel for ROS called MeROS, which addresses the
running system and the developer workspace. ROS comes in two versions, ROS 1
and ROS 2, and the metamodel includes both. In particular, the latest ROS
1 concepts are considered, such as nodelet, action, and metapackage. An
essential addition to the original ROS concepts is the grouping of these
concepts, which provides an opportunity to illustrate the system's
decomposition and varying degrees of detail in its presentation. The metamodel
is derived from the requirements and verified on the practical example of the
Rico assistive robot. The metamodel is described in a standardised way in SysML
(Systems Modeling Language). Hence, common development tools that support SysML
can help develop robot controllers in the spirit of SE.
|
The weight distribution of error correction codes is a critical determinant
of their error-correcting performance, making enumeration of utmost importance.
In the case of polar codes, the minimum weight $\wm$ (which equals the minimum
distance $d$) is the only weight for which an explicit enumerator formula is
currently available. Having closed-form weight enumerators for polar codewords
with weights greater than the minimum weight not only simplifies the
enumeration process but also provides valuable insights towards constructing
better polar-like codes. In this paper, we contribute towards understanding the
algebraic structure underlying higher weights by analyzing Minkowski sums of
orbits. Our approach builds upon the lower triangular affine (LTA) group of
decreasing monomial codes. Specifically, we propose a closed-form expression
for the enumeration of codewords with weight $1.5\wm$. Our simulations
demonstrate the potential for extending this method to higher weights.
|
We explore the properties of the low-temperature phase of the O($n$) loop
model in two dimensions by means of transfer-matrix calculations and
finite-size scaling. We determine the stability of this phase with respect to
several kinds of perturbations, including cubic anisotropy, attraction between
loop segments, double bonds and crossing bonds. In line with Coulomb gas
predictions, cubic anisotropy and crossing bonds are found to be relevant and
introduce crossover to different types of behavior. Whereas perturbations in
the form of loop-loop attractions and double bonds are irrelevant, sufficiently
strong perturbations of these types induce a phase transition of the Ising
type, at least in the cases investigated. This Ising transition leaves the
underlying universal low-temperature O($n$) behavior unaffected.
|
We address the issue of fluctuations, about an exponential lineshape, in a
pair of one-dimensional kicked quantum systems exhibiting dynamical
localization. An exact renormalization scheme establishes the fractal character
of the fluctuations and provides a new method to compute the localization
length in terms of the fluctuations. In the case of a linear rotor, the
fluctuations are independent of the kicking parameter $k$ and exhibit
self-similarity for certain values of the quasienergy. For given $k$, the
asymptotic localization length is a good characteristic of the localized
lineshapes for all quasienergies. This is in stark contrast to the quadratic
rotor, where the fluctuations depend upon the strength of the kicking and
exhibit local "resonances". These resonances result in strong deviations of the
localization length from the asymptotic value. The consequences are
particularly pronounced when considering the time evolution of a packet made up
of several quasienergy states.
|
The idea behind FIRST (Fibered Imager foR a Single Telescope) is to use
single-mode fibers to combine multiple apertures in a pupil plane so as to
synthesize a bigger aperture. The advantages with respect to a pure imager are
i) relaxed tolerance on the pointing and cophasing, ii) higher accuracy in
phase measurement, and iii) availability of compact, precise, and active
single-mode optics like Lithium Niobate. The latter point is a huge asset in
the context of a space mission. One of the problems of DARWIN- or SIM-like
projects was the difficulty of finding low-cost pathfinder missions. But the
fact that Lithium Niobate optics are small and compact makes them easy to test
through small nanosat missions. Moreover, they are commonly used in the telecom
industry and have already been tested on communication satellites. The idea of
the FIRST-S demonstrator is to fly a Lithium Niobate nulling interferometer on
a 3U CubeSat. The technical challenges of the project are star
tracking, beam combination, and nulling capabilities. The optical baseline of
the interferometer would be 30 cm, giving a 2.2 AU spatial resolution at a
distance of 10 pc. The scientific objective of this mission would be to study
the visible emission of exozodiacal light in the habitable zone around the
closest stars.
|
We introduce a set of constraint preserving boundary conditions for the
Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation of the Einstein evolution
equations in spherical symmetry, based on its hyperbolic structure. While the
outgoing eigenfields are left to propagate freely off the numerical grid,
boundary conditions are set to enforce that the incoming eigenfields do not
introduce spurious reflections and, more importantly, that there are no fields
introduced at the boundary that violate the constraint equations. In order to
do this we adopt two different approaches to set boundary conditions for the
extrinsic curvature, by expressing either the radial or the time derivative of
its associated outgoing eigenfield in terms of the constraints. We find that
these boundary conditions are very robust in practice, allowing us to perform
long lasting evolutions that remain accurate and stable, and that converge to a
solution that satisfies the constraints all the way to the boundary.
|
With the use of tensor product of Hilbert space, and a diagonalization
procedure from operator theory, we derive an approximation formula for a
general class of stochastic integrals. Further we establish a generalized
Fourier expansion for these stochastic integrals. In our extension, we
circumvent some of the limitations of the more widely used stochastic integral
due to Wiener and Ito, i.e., stochastic integration with respect to Brownian
motion. Finally we discuss the connection between the two approaches, as well
as a priori estimates and applications.
|
We compute the master integrals relevant for the two-loop corrections to
pseudo-scalar quarkonium and leptonium production and decay. We present both
analytic and high-precision numerical results. The analytic expressions are
given in terms of multiple polylogarithms (MPLs), elliptic multiple
polylogarithms (eMPLs) and iterated integrals of Eisenstein series. As an
application of our results, we obtain for the first time an analytic expression
for the two-loop amplitude for para-positronium decay to two photons.
|
We use the semi-analytical program RCFORGV to evaluate radiative corrections
to one-photon radiative emission in the high-energy scattering of pions in the
Coulomb field of a nucleus with atomic number Z. It is shown that radiative
corrections can simulate a pion polarizability effect. The average effect was
estimated for pion energies 40-600 GeV. We also study the range of
applicability of the equivalent photon approximation in describing one-photon
radiative emission.
|
We show that all smooth solutions of model non-linear sums of squares of
vector fields are locally real analytic. A global result for more general
operators is presented in a paper by Makhlouf Derridj and the first author
under the title "Global Analytic Hypoellipticity for a Class of Quasilinear
Sums of Squares of Vector Fields".
|