A specialized algorithm for quadratic optimization (QO, formerly known as QP)
with disjoint linear constraints is presented. In the considered class of
problems, a subset of the variables is subject to linear equality constraints,
while the variables in a different subset are constrained to remain in a convex
set. The proposed algorithm exploits this structure by combining steps in the
nullspace of the equality-constraint matrix with projections onto the convex
set. The algorithm is motivated by applications in weather forecasting.
Numerical results on a simple model designed for predicting rain show that the
algorithm is an improvement on current practice and that it reduces the
computational burden compared to a more general interior point QO method. In
particular, if constraints are disjoint and the rank of the set of linear
equality constraints is small, further reduction in computational costs can be
achieved, making it possible to apply this algorithm in high dimensional
weather forecasting problems.
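The nullspace-step-plus-projection structure described above can be sketched as a simple projected-gradient loop. This is a minimal illustration under an assumed problem form (quadratic objective, one equality-constrained block, one box-constrained block); the function name, step size, and iteration count are our own choices, not the paper's algorithm:

```python
import numpy as np

def solve_qp_disjoint(Q, c, A, b, lo, hi, n_eq, alpha=0.01, iters=5000):
    """Minimise 0.5 x'Qx + c'x where the first n_eq variables satisfy
    A x[:n_eq] = b and the remaining variables lie in the box [lo, hi].
    Sketch: gradient steps restricted to the nullspace of A for the
    equality block, box projection for the convex block."""
    n = Q.shape[0]
    A_pinv = np.linalg.pinv(A)
    u_p = A_pinv @ b                      # particular solution of A u = b
    P_null = np.eye(n_eq) - A_pinv @ A    # projector onto null(A)
    x = np.zeros(n)
    x[:n_eq] = u_p
    x[n_eq:] = np.clip(x[n_eq:], lo, hi)
    for _ in range(iters):
        g = Q @ x + c
        # Step in the nullspace of A, so A x[:n_eq] = b stays exact...
        x[:n_eq] -= alpha * (P_null @ g[:n_eq])
        # ...and a projected step for the box-constrained block.
        x[n_eq:] = np.clip(x[n_eq:] - alpha * g[n_eq:], lo, hi)
    return x
```

Because the nullspace step never leaves the affine set, the equality constraints hold exactly at every iterate; only the convex block needs projection.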
|
In the current work we consider the minimization problems for the number of
nonzero or negative values of vectors from the first and second eigenspaces of
the Johnson scheme, respectively. The topic is a meeting point for
generalizations of the Manickam-Mikl\'{o}s-Singhi conjecture, proven by
Blinovsky, and the minimum support problem for the eigenspaces of the Johnson
graph, asymptotically solved by the authors in a recent paper.
|
Third-order nonlinear processes require phase matching between the
interacting fields to achieve high efficiencies. Typically in guided-wave
$\chi^{(3)}$ platforms this is achieved by engineering the dispersion of the
modes through the transverse profile of the device. However, this limits the
flexibility of the phase matching that can be achieved. Instead, we analyze
four-wave mixing in a pair of asymmetric waveguides and show that phase
matching may be achieved in any $\chi^{(3)}$ waveguide by coupling of a nondegenerate
pump from an adjacent waveguide. We demonstrate the additional flexibility that
this approach yields in the case of photon-pair generation by spontaneous FWM,
where the supermode dispersion may be modified to produce pure heralded single
photons -- a critical capability required for example by silicon platforms for
chip-scale quantum photonics.
|
As a part of future developments of beam diagnostics, a low energy
experimental bench (LEEx-B) has been recently designed, built and commissioned
at IPHC-CNRS of Strasbourg. The bench is composed of a Cs+ ion gun, installed
on an HV platform, providing beams up to 25 keV. A beam profiler and an
Allison-type emittance-meter allow both the qualification of the setup and the
characterization of the beam. During the commissioning process, the
electronics and the control system were upgraded in order to push the limits
towards the low beam currents measured by the emittance-meter.
|
This article reconsiders the relative-velocity-dependent approach to modeling
electromagnetism that was proposed initially by Weber before data from
cathode-ray-tube (CRT) experiments were available. It is shown that identifying
the nonlinear, relative-velocity terms using CRT data results in a model that
not only captures standard relativistic effects in optics, high-energy
particles, and gravitation, but also explains apparent discrepancies between
predicted and measured energy (i) in high-energy-particle absorption
experiments and (ii) in the classical beta-ray spectrum of radium-E.
|
In this work, the cohomology theory for partial actions of co-commutative
Hopf algebras over commutative algebras is formulated. This theory generalizes
the cohomology theory for Hopf algebras introduced by Sweedler and the
cohomology theory for partial group actions, introduced by Dokuchaev and
Khrypchenko. Some nontrivial examples, not coming from groups, are constructed.
Given a partial action of a co-commutative Hopf algebra $H$ over a commutative
algebra $A$, we prove that there exists a new Hopf algebra $\widetilde{A}$,
over a commutative ring $E(A)$, upon which $H$ still acts partially and which
gives rise to the same cochain complex as the original algebra $A$. We also
study the partially cleft extensions of commutative algebras by partial actions
of cocommutative Hopf algebras and prove that these partially cleft extensions
can be viewed as cleft extensions by Hopf algebroids.
|
The specific energy loss (dE/dx) resolution is critical for particle
identification and separation in a TPC. In connection with the R&D for the TPC
at the Circular Electron Positron Collider (CEPC), we find that the laser
ionisation obeys a Gaussian distribution both theoretically and
experimentally. Based on this, we develop a novel method using a 266 nm UV
laser to measure the dE/dx resolution. Compared to the traditional methods
with particle beams or cosmic rays, the method with a 266 nm UV laser is more
convenient and flexible. We verify the method's feasibility with a TPC
prototype and extrapolate the dE/dx resolution to the CEPC TPC. The dE/dx
resolution is estimated without a magnetic field.
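Why a Gaussian ionisation distribution makes such a resolution measurement straightforward can be shown with a toy calculation (all numbers are hypothetical; `dedx_resolution` and its parameters are our own illustration, not the paper's procedure):

```python
import numpy as np

def dedx_resolution(samples_per_track, n_tracks, mean=1.0, sigma=0.3, seed=0):
    """Toy estimate of the relative dE/dx resolution, assuming each
    ionisation sample is Gaussian (as the abstract reports for the UV
    laser) and each track averages `samples_per_track` samples."""
    rng = np.random.default_rng(seed)
    # One dE/dx estimate per track: the mean of its Gaussian samples.
    tracks = rng.normal(mean, sigma,
                        size=(n_tracks, samples_per_track)).mean(axis=1)
    return tracks.std() / tracks.mean()   # relative resolution sigma/mu
```

With Gaussian samples the resolution scales as 1/sqrt(N): quadrupling the number of samples per track halves the relative resolution, which is what makes a controllable laser source convenient for mapping this dependence.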
|
We explore threshold vocabulary trimming in Byte-Pair Encoding subword
tokenization, a postprocessing step that replaces rare subwords with their
component subwords. The technique is available in popular tokenization
libraries but has not been subjected to rigorous scientific scrutiny. While the
removal of rare subwords is suggested as best practice in machine translation
implementations, both as a means to reduce model size and as a way to improve
model performance through robustness, our experiments indicate that, across a large
space of hyperparameter settings, vocabulary trimming fails to improve
performance, and is even prone to incurring heavy degradation.
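The trimming step under study can be sketched as follows (an illustrative reconstruction; `trim_vocab` and its inputs are assumptions, not the interface of any particular tokenization library):

```python
def trim_vocab(token_freqs, merges, threshold):
    """Threshold vocabulary trimming for BPE (illustrative sketch).
    token_freqs: corpus frequency of each subword.
    merges: maps a merged subword to its two parents,
            e.g. {"low": ("lo", "w")}.
    Merged subwords rarer than `threshold` are dropped from the
    vocabulary and replaced by their component subwords."""
    kept = {t for t, f in token_freqs.items()
            if f >= threshold or t not in merges}

    def decompose(token):
        # Recursively split a trimmed subword into kept components.
        if token in kept:
            return [token]
        a, b = merges[token]
        return decompose(a) + decompose(b)

    return kept, decompose
```

For example, with frequencies `{"lo": 20, "low": 2}` and a threshold of 5, the rare subword "low" is trimmed and tokenizes as its parents `["lo", "w"]`, so the model sees a smaller vocabulary at the cost of longer sequences.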
|
We have identified 2XMM J125556.57+565846.4, at a distance of 600 pc, as a
binary system consisting of a normal star and a probable dormant neutron star.
Optical spectra reveal a slightly evolved single F-type star, displaying
periodic Doppler shifts consistent with a 2.76-day circular Keplerian orbit, with no
indication of light from a secondary component. Optical and UV photometry
reveal ellipsoidal variations with half the orbital period, due to the tidal
deformation of the F star. The mass of the unseen companion is constrained to
the range $1.1$--$2.1\,M_{\odot}$ at $3\sigma$ confidence, with the median of
the mass distribution at $1.4\,M_{\odot}$, the typical mass of known neutron
stars. A main-sequence star cannot masquerade as the dark companion. The
distribution of possible companion masses still allows for the possibility of a
very massive white dwarf. The companion itself could also be a close pair
consisting of a white dwarf and an M star, or two white dwarfs, although the
binary evolution that would lead to such a close triple system is unlikely.
Similar ambiguities regarding the certain identification of a dormant neutron
star are bound to affect most future discoveries of this type of
non-interacting system. If the system indeed contains a dormant neutron star,
it will become, in the future, a bright X-ray source and afterwards might even
host a millisecond pulsar.
|
We have theoretically investigated the possibility of using any of several
continuous-variable Bell-type inequalities - for which the dichotomic
measurements are achieved with coarse-grained quadrature (homodyne)
measurements - in a multi-party configuration where each participant is given a
section, in the frequency domain, of the output of an optical parametric
oscillator which has been synchronously pumped with a frequency comb. Such
light sources are undergoing intense study due to their novel properties,
including the potential for production of light entangled across many hundreds
of physical modes - a critical component of many proposals for optical or
hybrid-optical quantum computation. The situation we study notably
uses only highly-efficient optical homodyne detection, meaning that in such
systems the fair-sampling loophole would be relatively easy to avoid.
|
Contextual information is vital in visual understanding problems, such as
semantic segmentation and object detection. We propose a Criss-Cross Network
(CCNet) for obtaining full-image contextual information in a very effective and
efficient way. Concretely, for each pixel, a novel criss-cross attention module
harvests the contextual information of all the pixels on its criss-cross path.
By taking a further recurrent operation, each pixel can finally capture the
full-image dependencies. In addition, a category-consistent loss is proposed to
encourage the criss-cross attention module to produce more discriminative
features. Overall, CCNet has the following merits: 1) GPU memory friendly.
Compared with the non-local block, the proposed recurrent criss-cross attention
module requires 11x less GPU memory. 2) High computational efficiency. The
recurrent criss-cross attention reduces the FLOPs of the non-local block by
about 85%. 3) State-of-the-art performance. We conduct extensive experiments
on semantic segmentation benchmarks including Cityscapes and ADE20K, the human
parsing benchmark LIP, the instance segmentation benchmark COCO, and the video
segmentation benchmark CamVid. In particular, our CCNet achieves mIoU scores
of 81.9%, 45.76% and 55.47% on the Cityscapes test set, the ADE20K validation
set and the LIP validation set respectively, which are new state-of-the-art
results. The source code is available at
\url{https://github.com/speedinghzl/CCNet}.
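The criss-cross harvesting step can be sketched in plain numpy. This is a deliberately simplified illustration: queries, keys, and values are taken equal to the input features, with no learned projections or scaling, unlike the actual CCNet module:

```python
import numpy as np

def criss_cross_attention(feat):
    """One criss-cross attention pass over an (H, W, C) feature map.
    Each pixel aggregates the H + W - 1 pixels on its own row and
    column; applying the pass twice (the 'recurrent' operation) lets
    every pixel receive information from the full image."""
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            # Criss-cross path of (i, j): its row plus its column,
            # with the duplicate (i, j) entry removed from the column.
            col = np.delete(feat[:, j, :], i, axis=0)
            path = np.concatenate([feat[i, :, :], col], axis=0)
            scores = path @ feat[i, j]          # affinity with the query pixel
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            out[i, j] = weights @ path          # weighted sum over the path
    return out
```

Because each pixel attends to only H + W - 1 positions instead of all H x W, the attention maps are far smaller than a non-local block's, which is the source of the memory saving; the second pass propagates information along the crossing paths to the rest of the image.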
|
We discuss and implement experimentally a method for characterizing quantum
gates operating on superpositions of coherent states. The peculiarity of this
encoding of qubits is to work with a non-orthogonal basis, and therefore some
technical limitations prevent us from using standard methods, such as process
tomography. We adopt a different technique that relies on some a priori
knowledge about the physics underlying the functioning of the device. A
parameter characterizing the global quality of the quantum gate is obtained by
"virtually" processing an entangled state.
|
Let $f: X\to Y$ be a surjective morphism of smooth $n$-dimensional projective
varieties, with $Y$ of maximal Albanese dimension. Hacon and Pardini studied
the structure of $f$ assuming $P_m(X)=P_m(Y)$ for some $m\geq 2$. We extend
their result by showing that, under the above assumptions, $f$ is birationally
equivalent to a quotient by a finite abelian group. We also study the
pluricanonical maps of varieties of maximal Albanese dimension. The main result
is that the linear series $|5K_X|$ induces the Iitaka model of $X$.
|
This is a set of four lectures devoted to simple ideas about turbulent
transport, a ubiquitous non-equilibrium phenomenon. In a course similar to
that given by the author in Warwick in 2006 [45], we discuss lessons which have
been learned from naive models of turbulent mixing that employ simple random
velocity ensembles and study related stochastic processes. In the first
lecture, after a brief reminder of the turbulence phenomenology, we describe
the intimate relation between the passive advection of particles and fields by
hydrodynamical flows. The second lecture is devoted to some useful tools of the
multiplicative ergodic theory for random dynamical systems. In the third
lecture, we apply these tools to the example of particle flows in the Kraichnan
ensemble of smooth velocities that mimics turbulence at intermediate Reynolds
numbers. In the fourth lecture, we extend the discussion of particle flows to
the case of non-smooth Kraichnan velocities that model fully developed
turbulence. We stress the unconventional aspects of particle flows that appear
in this regime and lead to phase transitions in the presence of
compressibility. The intermittency of scalar fields advected by fully turbulent
velocities and the scenario linking it to hidden statistical conservation laws
of multi-particle flows are briefly explained.
|
A variety of approaches have been developed to deal with uncertain
optimization problems. Often, they start with a given set of uncertainties and
then try to minimize the influence of these uncertainties. Depending on the
approach used, the corresponding price of robustness is different. The reverse
view is to first set a budget for the price one is willing to pay and then find
the most robust solution.
In this article, we aim to unify these inverse approaches to robustness. We
provide a general problem definition and a proof of the existence of its
solution. We study properties of this solution such as closedness, convexity,
and boundedness. We also provide a comparison with existing robustness concepts
such as the stability radius, the resilience radius, and the robust feasibility
radius. We show that the general definition unifies these approaches. We
conclude with examples that demonstrate the flexibility of the introduced
concept.
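One way to write the "reverse view" formally (an illustrative formalization with assumed notation, not necessarily the paper's definition): fix a budget $B$ on the objective $f$ and a family of uncertainty sets $U(r)$ growing with $r$, and solve

```latex
\max_{r \ge 0} \; r
\quad \text{s.t.} \quad
\exists\, x \in \mathcal{X} :\;
f(x) \le B
\;\text{ and }\;
x \text{ is feasible for every scenario } u \in U(r).
```

The stability radius, resilience radius, and robust feasibility radius then correspond to different choices of the feasibility requirement and of the sets $U(r)$.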
|
Exact solutions are constructed for the quantum-mechanical equation for a spin
S=1 particle in a 2-dimensional Riemannian space of constant positive
curvature, the spherical plane, in the presence of an external magnetic field
analogous to the homogeneous magnetic field in Minkowski space. A generalized
formula for the energy levels describing the quantization of the motion of the
vector particle in a magnetic field on the 2-dimensional space S_{2} has been
found; both nonrelativistic and relativistic equations have been solved.
|
We revisit the random tree model with nearest-neighbour interaction described
in previous work, in which the interaction enhances growth. When the underlying free
Bienaym\'e-Galton-Watson (BGW) model is sub-critical, we show that the
(non-Markov) model with interaction exhibits a phase transition between sub-
and super-critical regimes. In the critical regime, using tools from dynamical
systems, we show that the partition function of the model approaches a limit at
rate $n^{-1}$ in the generation number $n$. In the critical regime with almost
sure extinction, we also prove that the mean number of external nodes in the
tree at generation $n$ decays like $n^{-2}$. Finally, we give a spin
representation of the random tree, opening the way to tools from the theory of
Gibbs states, including FKG inequalities. We extend the construction of
previous work to the case where the law of the branching mechanism of the free
BGW process has unbounded support.
|
We study extreme mass-ratio binary systems in which a stellar mass compact
object spirals into a supermassive black hole surrounded by a scalar cloud.
Scalar clouds can form through superradiant instabilities of massive scalar
fields around spinning black holes and can also serve as a proxy for dark
matter halos. Our framework is fully relativistic and assumes that the impact
of the cloud on the geometry can be treated perturbatively. As a proof of
concept, here we consider a point particle in circular, equatorial motion
around a non-spinning black hole surrounded either by a spherically symmetric
or a dipolar non-axisymmetric scalar cloud, but the framework can in principle
be generalized to generic black hole spins and scalar cloud geometries. We
compute the leading-order power lost by the point particle due to scalar
radiation and show that, in some regimes, it can dominate over
gravitational-wave emission. We confirm striking signatures of the presence of
a scalar cloud that had been predicted using Newtonian
approximations, such as resonances that can give rise to sinking and floating
orbits, as well as "sharp features" in the power lost by the particle at given
orbital radii. Finally, for a spherically symmetric scalar cloud, we also
compute the leading-order corrections to the black-hole geometry and to the
gravitational-wave energy flux, focusing only on axial metric perturbations for
the latter. We find that, for non-compact clouds, the corrections to the
(axial) gravitational-wave fluxes at high frequencies can be understood in
terms of a gravitational-redshift effect, in agreement with previous works.
|
We have enlarged the sample of stars in the planet search at Lick
Observatory. Doppler measurements of 82 new stars observed at Lick Observatory,
with additional velocities from Keck Observatory, have revealed two new planet
candidates.
The G3V/IV star, HD 195019, exhibits Keplerian velocity variations with a
period of 18.27 d, an orbital eccentricity of 0.03 +/- 0.03, and M sin i = 3.51
M_Jup. Based on a measurement of Ca II H&K emission, this star is
chromospherically inactive. We estimate the metallicity of HD 195019 to be
approximately solar from uvby photometry.
The second planet candidate was detected around HD 217107, a G7V star. This
star exhibits a 7.12 d Keplerian period with eccentricity 0.14 +/- 0.05 and M
sin i = 1.27 M_Jup. HD 217107 is also chromospherically inactive. The
photometric metallicity is found to be [Fe/H] = +0.29 +/- 0.1 dex. Given the
relatively short orbital period, the absence of tidal spin-up of HD 217107
provides a theoretical upper limit on the companion mass of about 11 M_Jup.
|
We numerically construct gravitational duals of theories deformed by
localized Dirac delta sources for scalar operators both at zero and at finite
temperature. We find that requiring that the backreacted geometry preserves the
original scale invariance of the source uniquely determines the potential for
the scalar field to be the one found in a certain Kaluza-Klein compactification
of $11D$ supergravity. This result is obtained using an efficient perturbative
expansion of the backreacted background at zero temperature and is confirmed by
a direct numerical computation. Numerical solutions at finite temperatures are
obtained and a detailed discussion of the numerical approach to the treatment
of the Dirac delta sources is presented. The physics of defect configurations
is illustrated with a calculation of entanglement entropy.
|
Quantum optical cluster states have been increasingly explored, in the light
of their importance for measurement-based quantum computing. Here we set forth
a new method for generating quantum controlled cluster states: pumping an
optical parametric oscillator with spatially structured light. We show that
state-of-the-art techniques for producing clusters in the spectral and temporal
domains are improved by a structured light pump, which manipulates the spatial
mode couplings in the parametric interaction. We illustrate the method
considering a second-order pump structure, and show that a simple mode rotation
yields different cluster states, including, but not limited to, the
two-dimensional square and hexagonal lattices. We also introduce a novel
technique for generating non-uniform cluster states, and propose a simple setup
for its implementation.
|
A number of compactifications familiar in complex-analytic geometry, in
particular, the Baily-Borel compactification and its toroidal variants, as well
as the Deligne-Mumford compactifications, can be covered by open subsets whose
nonempty intersections are Eilenberg-MacLane spaces. We exploit this fact to
describe the (rational) homotopy type of these spaces and the natural maps
between them in terms of the simplicial sets attached to certain categories. We
thus generalize an old result of Charney-Lee on the Baily-Borel
compactification of A_g and recover (and rephrase) a more recent one of
Ebert-Giansiracusa on the Deligne-Mumford compactifications. We also describe
an extension of the period map for Riemann surfaces (going from the
Deligne-Mumford compactification to the Baily-Borel compactification of the
moduli space of principally polarized abelian varieties) in these terms.
|
Modelling of the point spread function of the UKIRT IRCAM3 array was
conducted in order to test for any extended emission around the X-ray binary
Cyg X-3. We found that the point spread function cannot be represented by a
simple Gaussian; modelling of the stars required additional functions, namely
Lorentzian and exponential components. After modelling the PSF, we found
that Cyg X-3 could be represented by two stellar-type profiles, 0.56" apart.
|
All sets of lines providing a partition of the set of internal points to a
conic C in PG(2,q), q odd, are determined. There exist only three such line
sets up to projectivities, namely the set of all nontangent lines to C through an
external point to C, the set of all nontangent lines to C through a point in C,
and, for square q, the set of all nontangent lines to C belonging to a Baer
subplane with at least 5 common points with C. This classification theorem is
the analogue of a classical result by Segre and Korchmaros characterizing the
pencil of lines through an internal point to C as the unique set of lines, up
to projectivities, which provides a partition of the set of all noninternal
points to C. However, the proof is not analogous, since it does not rely on the
famous Lemma of Tangents of Segre. The main tools in the present paper are
certain partitions in conics of the set of all internal points to C, together
with some recent combinatorial characterizations of blocking sets of non-secant
lines, and of blocking sets of external lines.
|
By using a new test function and the gradient estimate technique, we obtain a
better Bernstein-type result for translating solitons.
|
We start from a new theory (discussed earlier) in which the arena for physics
is not spacetime, but its straightforward extension, the so-called Clifford
space ($C$-space), a manifold of points, lines, areas, etc.; physical
quantities are Clifford algebra valued objects, called polyvectors. This
provides a natural framework for description of supersymmetry, since spinors
are just left or right minimal ideals of Clifford algebra. The geometry of
curved $C$-space is investigated. It is shown that the curvature in $C$-space
contains higher orders of the curvature in the underlying ordinary space. A
$C$-space is parametrized not only by 1-vector coordinates $x^\mu$ but also by
the 2-vector coordinates $\sigma^{\mu \nu}$, 3-vector coordinates $\sigma^{\mu
\nu \rho}$, etc., called also {\it holographic coordinates}, since they
describe the holographic projections of 1-lines, 2-loops, 3-loops, etc., onto
the coordinate planes. A remarkable relation between the "area" derivative
$\partial/\partial \sigma^{\mu \nu}$ and the curvature and torsion is found: if
a scalar-valued quantity depends on the coordinates $\sigma^{\mu \nu}$, this
indicates the presence of torsion, and if a vector-valued quantity so depends,
this implies nonvanishing curvature. We argue that such a deeper understanding of the
$C$-space geometry is a prerequisite for a further development of this new
theory which in our opinion will lead us towards a natural and elegant
formulation of $M$-theory.
|
We used Raman and terahertz spectroscopies to investigate lattice and
magnetic excitations and their cross-coupling in the hexagonal YMnO3
multiferroic. Two phonon modes are strongly affected by the magnetic order.
Magnon excitations have been identified thanks to comparison with neutron
measurements and spin wave calculations but no electromagnon has been observed.
In addition, we observed two additional Raman-active peaks. We have compared
this observation with the anti-crossing between magnon and acoustic phonon
branches measured by neutron scattering. These optical measurements underscore
the unusually strong spin-phonon coupling.
|
We present time-resolved measurements of ion heating due to collisional
plasma shocks and interpenetrating supersonic plasma flows, which are formed by
the oblique merging of two coaxial-gun-formed plasma jets. Our study was
repeated using four jet species: N, Ar, Kr, and Xe. In conditions with small
interpenetration between jets, the observed peak ion temperature Ti is
consistent with the predictions of collisional plasma-shock theory, showing a
substantial elevation of Ti above the electron temperature Te and also the
subsequent decrease of Ti on the classical ion-electron
temperature-equilibration time scale. In conditions of significant
interpenetration between jets, such that shocks do not apparently form, the
observed peak Ti is still appreciable and greater than Te, but much lower than
that predicted by collisional plasma-shock theory. Experimental results are
compared with multi-fluid plasma simulations.
|
The present, partly expository, monograph consists of three parts. The first
part treats Spin- and Pin-structures from three different perspectives and
shows them to be suitably equivalent. It also introduces an intrinsic
perspective on the relative Spin- and Pin-structures of Fukaya-Oh-Ohta-Ono and
Solomon, establishes properties of these structures in both perspectives, and
again shows them to be suitably equivalent. The second part uses the intrinsic
perspective on (relative) Spin- and Pin-structures to detail constructions of
orientations on the determinants of real Cauchy-Riemann operators and study
their properties. The final part applies the results of the first two parts to
the enumerative geometry of real curves and obtains an explicit comparison
between the curve signs in the intrinsic definition of Welschinger and later
Pin-structure dependent definitions. This comparison makes use of both the
classical and intrinsic perspectives on Pin-structures and thus of the
equivalence between them established in this monograph. The preface and the
introductions to the three parts describe the present work in more detail.
|
E-commerce search systems such as Taobao Search, the largest e-commerce
searching system in China, aim at providing users with the most preferred items
(e.g., products). Due to the massive data and limited time for response, a
typical industrial ranking system consists of three or more modules, including
matching, pre-ranking, and ranking. The pre-ranking is widely considered a
mini-ranking module, as it needs to rank hundreds of times more items than the
ranking under limited latency. Existing research focuses on building a lighter
model that imitates the ranking model. As such, the metric of a pre-ranking
model follows the ranking model using Area Under ROC (AUC) for offline
evaluation. However, such a metric is inconsistent with online A/B tests in
practice, so engineers have to perform costly online tests to reach a
convincing conclusion. In our work, we rethink the role of the pre-ranking. We
argue that the primary goal of the pre-ranking stage is to return an optimal
unordered set rather than an ordered list of items because it is the ranking
that determines the final exposures. Since AUC measures the quality of an
ordered item list, it is not suitable for evaluating the quality of the output
unordered set. This paper proposes a new evaluation metric called All-Scenario
Hitrate (ASH) for pre-ranking. ASH is proven effective in the offline
evaluation and consistent with online A/B tests based on numerous experiments
in Taobao Search. We also introduce an all-scenario-based multi-objective
learning framework (ASMOL), which improves the ASH significantly. Surprisingly,
the new pre-ranking model can outperform the ranking model when outputting
thousands of items. This phenomenon validates that the pre-ranking stage should
not imitate the ranking blindly. With the improvements in ASH consistently
translating into online gains, ASMOL achieves a 1.2% GMV improvement on Taobao
Search.
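A set-based hitrate of the kind the abstract argues for can be sketched as follows (an assumed simplified form; the actual ASH definition aggregates targets across all scenarios and is specified in the paper):

```python
def hitrate(candidate_set, target_items):
    """Set-based hitrate sketch: the fraction of ground-truth targets
    (e.g. items the user eventually engaged with, across scenarios)
    that appear in the pre-ranking's output set. Unlike AUC, the
    internal order of `candidate_set` is irrelevant, matching the
    view of the pre-ranking output as an unordered set."""
    candidates = set(candidate_set)
    hits = sum(1 for t in target_items if t in candidates)
    return hits / len(target_items)
```

Because only set membership matters, two pre-rankings that return the same items in different orders score identically, whereas AUC would distinguish them even though the downstream ranking determines the final exposure order anyway.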
|
Pulsars that have left their parent SNR interact directly with the ISM,
producing so-called Bow-Shock Pulsar Wind Nebulae, the relativistic
equivalents of the heliosphere/heliotail system. These have been directly
observed from radio to X-rays, and are also found associated with TeV halos,
with a large variety of morphologies. They offer a unique environment where
the pulsar wind can be studied by modelling its interaction with the
surrounding ambient medium, in a fashion that is different from, and
complementary to, the canonical plerions. These systems have also been
suggested as a possible origin of the positron excess
detected by AMS and PAMELA, in contrast to dark matter. I will present results
from 3D Relativistic MHD simulations of such nebulae. On top of these
simulations we computed the expected emission signatures, the properties of
high energy particle escape, the role of current sheets in channeling cosmic
rays, the level of turbulence and magnetic amplification, and how they depend
on the wind structure and magnetisation.
|
This study investigates the strain-induced structural transitions of $\eta
\leftrightarrow \theta$ and the changes in electronic band structures of
Au$_2$X (X=S, Se, Te, Si, Ge) and Au$_4$SSe. We focus on Au$_2$S monolayers,
which can form multiple meta-stable monolayers theoretically, including
$\eta$-Au$_2$S, a buckled penta-monolayer composed of a square Au lattice and S
adatoms. The $\theta$-Au$_2$S is regarded as a distorted structure of
$\eta$-Au$_2$S. Density functional theory (DFT) calculations using a
generalized gradient approximation show that the conduction and valence bands
of $\theta$-Au$_2$S intersect at the $\Gamma$ point, leading to linear
dispersion, whereas $\eta$-Au$_2$S has a band gap of 1.02 eV. The conduction band minimum
depends on the specific Au-Au bond distance, while the valence band maximum
depends on both Au-S and Au-Au interactions. The band gap undergoes significant
changes during the $\eta \leftrightarrow \theta$ phase transition of Au$_2$S
induced by applying tensile or compressive in-plane biaxial strain to the
lattice. Moreover, substituting S atoms with other elements alters the
electronic band structures, resulting in a variety of physical properties
without disrupting the fundamental Au lattice network. Therefore, the family of
Au$_2$X monolayers holds potential as materials for atomic scale network
devices.
|
Due to various green initiatives, renewable energy will be massively
incorporated into the future smart grid. However, the intermittency of the
renewables may result in power imbalance, thus adversely affecting the
stability of a power system. Frequency regulation may be used to maintain the
power balance at all times. As electric vehicles (EVs) become popular, they may
be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation
of EVs can be coordinated to provide frequency regulation services. However,
V2G is a dynamic system where the participating EVs come and go independently.
Thus it is not easy to estimate the regulation capacities for V2G. In a
preliminary study, we modeled an aggregation of EVs with a queueing network,
whose structure allows us to estimate the capacities for regulation-up and
regulation-down, separately. The estimated capacities from the V2G system can
be used for establishing a regulation contract between an aggregator and the
grid operator, and facilitating a new business model for V2G. In this paper, we
extend our previous development by designing a smart charging mechanism which
can adapt to given characteristics of the EVs and make the performance of the
actual system follow the analytical model.
|
We describe a variational calculation for the problem of screening of a point
charge in a layered correlated metal for dopings close to the Mott transition
where the screening is non-linear due to the proximity to the incompressible
insulating state. We find that an external charge can induce locally
incompressible regions and that the non-linear dependence of the screening on
density can induce overscreening in the nearest layers while preserving
overall charge neutrality.
|
In this paper we introduce a general framework for casting fully dynamic
transitive closure into the problem of reevaluating polynomials over matrices.
With this technique, we improve the best known bounds for fully dynamic
transitive closure. In particular, we devise a deterministic algorithm for
general directed graphs that achieves $O(n^2)$ amortized time for updates,
while preserving unit worst-case cost for queries. In the case of deletions
only, our algorithm performs updates faster, in $O(n)$ amortized time.
Our matrix-based approach yields an algorithm for directed acyclic graphs
that breaks through the $O(n^2)$ barrier on the single-operation complexity of
fully dynamic transitive closure. We can answer queries in $O(n^\epsilon)$ time
and perform updates in $O(n^{\omega(1,\epsilon,1)-\epsilon}+n^{1+\epsilon})$
time, for any $\epsilon\in[0,1]$, where $\omega(1,\epsilon,1)$ is the exponent
of the multiplication of an $n\times n^{\epsilon}$ matrix by an
$n^{\epsilon}\times n$ matrix. The current best bounds on
$\omega(1,\epsilon,1)$ imply an $O(n^{0.58})$ query time and an $O(n^{1.58})$
update time. Our subquadratic algorithm is randomized and has one-sided error.
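For contrast with the bounds above, a naive incremental scheme already achieves $O(n^2)$ time per edge insertion with $O(1)$ queries by maintaining the full reachability matrix. The sketch below illustrates only this classical query/update trade-off, not the paper's matrix-polynomial technique:

```python
class DynamicClosure:
    """Toy incremental transitive closure (insertions only): the full
    reachability matrix is maintained, each edge insertion costs O(n^2),
    and queries are O(1)."""

    def __init__(self, n):
        self.n = n
        self.reach = [[i == j for j in range(n)] for i in range(n)]

    def insert_edge(self, u, v):
        R = self.reach
        if R[u][v]:
            return
        # i reaches j afterwards iff i already reached u and v reached j
        srcs = [i for i in range(self.n) if R[i][u]]
        dsts = [j for j in range(self.n) if R[v][j]]
        for i in srcs:
            for j in dsts:
                R[i][j] = True

    def reachable(self, i, j):
        return self.reach[i][j]
```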
|
We derive conservation laws for Dirac-harmonic maps and their extensions to
manifolds that have isometries, where we mostly focus on the spherical case. In
addition, we discuss several geometric and analytic applications of the latter.
|
We propose a predictive Density Functional Theory (DFT) for the calculation
of solvation free energies. Our approach is based on a Helmholtz free-energy
functional that is consistent with the perturbed-chain SAFT (PC-SAFT) equation
of state. This allows a coarse-grained description of the solvent, based on an
inhomogeneous density of PC-SAFT segments. The solute, on the other hand, is
described in full detail by atomistic Lennard-Jones interaction sites. The
approach is entirely predictive, as it only takes the PC-SAFT parameters of the
solvent and the force-field parameters of the solute as input. No adjustable
parameters or empirical corrections are involved. The framework is applied to
study self-solvation of n-alkanes and to the calculation of residual chemical
potentials in binary solvent mixtures. Our DFT approach accurately predicts
solvation free energies of small molecular solutes in three different solvents.
Additionally, we show that the calculated solvation free energies agree well with
those obtained by molecular dynamics simulations and with the residual chemical
potential calculated by the bulk PC-SAFT equation of state. We observe higher
deviations for the solvation free energy of systems with significant
solute-solvent Coulomb interactions.
|
A brief review of BRAHMS measurements of bulk particle production in RHIC
Au+Au collisions at $\sqrt{s_{NN}} = 200$ GeV is presented, together with some
discussion of baryon number transport. Intermediate $p_{T}$ measurements in
different collision systems (Au+Au, d+Au and p+p) are also discussed in the
context of jet quenching and saturation of the gluon density in Au ions at RHIC
energies. This report also includes preliminary results for identified
particles at forward rapidities in d+Au and Au+Au collisions.
|
From a known result on Diophantine equations of the first degree in two
unknowns, we derive in a simple way the distribution function of the sequences
of positive integers generated by the functions at the origin of the 3x+1 and
5x+1 problems.
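For concreteness, the maps at the origin of these problems can be sketched as follows (a minimal illustration; the abstract concerns the distribution of the generated sequences, not this code):

```python
def qx_plus_one_step(x, q=3):
    """One step of the qx+1 map on positive integers: x/2 if x is even,
    otherwise q*x + 1 (q = 3 gives the 3x+1 / Collatz problem)."""
    return x // 2 if x % 2 == 0 else q * x + 1

def trajectory(x, q=3, max_steps=10_000):
    """Sequence of iterates until reaching 1 (or the step budget)."""
    seq = [x]
    while x != 1 and len(seq) <= max_steps:
        x = qx_plus_one_step(x, q)
        seq.append(x)
    return seq
```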
|
We present STADEE, a \textbf{STA}tistics-based \textbf{DEE}p detection method
to identify machine-generated text, addressing the limitations of current
methods that rely heavily on fine-tuning pre-trained language models (PLMs).
STADEE integrates key statistical text features with a deep classifier,
focusing on aspects like token probability and cumulative probability, crucial
for handling nucleus sampling. Tested across diverse datasets and scenarios
(in-domain, out-of-domain, and in-the-wild), STADEE demonstrates superior
performance, achieving an 87.05% F1 score in-domain and outperforming both
traditional statistical methods and fine-tuned PLMs, especially in
out-of-domain and in-the-wild settings, highlighting its effectiveness and
generalizability.
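A minimal illustration of the kind of per-token statistics such a detector can feed to its classifier (the exact feature set is defined in the paper; the probability table here is a made-up toy distribution):

```python
import math

def token_features(probs, token_id):
    """Statistics of an observed token under a model's next-token
    distribution: log-probability, rank, and the cumulative probability
    mass of all tokens at least as likely as the observed one (the
    quantity relevant for nucleus sampling)."""
    p = probs[token_id]
    logp = math.log(p)
    rank = 1 + sum(1 for q in probs if q > p)
    cum = sum(q for q in probs if q >= p)
    return logp, rank, cum
```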
|
We present new photometric observations for twelve asteroids ((122) Gerda,
(152) Atala, (260) Huberta, (665) Sabine, (692) Hippodamia, (723) Hammonia,
(745) Mauritia, (768) Struveana, (863) Benkoela, (1113) Katja, (1175) Margo,
(2057) Rosemary) from the outer part of the main belt aimed to obtain the
magnitude-phase curves and to verify geometric albedo and taxonomic class based
on their magnitude-phase behaviors. The measured magnitude-phase relations
confirm previously determined composition types of (260) Huberta (C-type),
(692) Hippodamia (S-type) and (1175) Margo (S-type). Asteroids (665) Sabine and
(768) Struveana previously classified as X-type show phase-curve behavior
typical for moderate-albedo asteroids and may belong to the M-type. The
phase-curve of (723) Hammonia is typical for the S-type, which contradicts the
previously determined C-type. We confirmed the moderate albedo of asteroids
(122) Gerda and (152) Atala, but their phase-curves differ from those typical
of the S-type and may indicate rarer compositional types. Based on
magnitude-phase behaviors and V-R colors, (2057) Rosemary most probably belongs
to M-type, while asteroids (745) Mauritia and (1113) Katja belong to S-complex.
The phase curve of the A-type asteroid (863) Benkoela does not cover the
opposition effect range and further observations are needed to understand
typical features of the phase-curves of A-type asteroids in comparison with
other types. We have also determined lightcurve amplitudes of the observed
asteroids and obtained new or improved values of the rotation periods for most
of them.
|
Image compositing plays a vital role in photo editing. After inserting a
foreground object into another background image, the composite image may look
unnatural and inharmonious. When the foreground is photorealistic and the
background is an artistic painting, painterly image harmonization aims to
transfer the style of background painting to the foreground object, which is a
challenging task due to the large domain gap between foreground and background.
In this work, we employ adversarial learning to bridge the domain gap between
foreground feature map and background feature map. Specifically, we design a
dual-encoder generator, in which the residual encoder produces the residual
features that are added to the foreground feature map from the main encoder. Then, a
pixel-wise discriminator plays against the generator, encouraging the refined
foreground feature map to be indistinguishable from background feature map.
Extensive experiments demonstrate that our method could achieve more harmonious
and visually appealing results than previous methods.
|
Several approaches have been proposed to the problem of provisioning traffic
engineering between core network nodes in Internet Service Provider (ISP)
networks. Such approaches aim to minimize network delay, increase capacity, and
enhance security services between two core (relay) network nodes, an ingress
node and an egress node. MATE (Multipath Adaptive Traffic Engineering) has been
proposed for multipath adaptive traffic engineering between an ingress node
(source) and an egress node (destination) to distribute the network flow among
multiple disjoint paths. Its novel idea is to avoid network congestion and
attacks that might exist in edge and node disjoint paths between two core
network nodes.
This paper proposes protection schemes over asymmetric channels. Precisely,
the paper aims to develop an adaptive, robust, and reliable traffic engineering
scheme to improve performance and reliability of communication networks. This
scheme will also provision Quality of Service (QoS) and protection of traffic
engineering to maximize network efficiency. Specifically, S-MATE (secure MATE)
is proposed to protect the network traffic between two core nodes (routers,
switches, etc.) in a cloud network. S-MATE secures against a single link
attack/failure by adding redundancy in one of the operational redundant paths
between the sender and receiver nodes. It is also extended to secure against
multiple attacked links. The proposed scheme can be applied to secure core
networks such as optical and IP networks.
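The single-link protection idea can be illustrated with a simple parity code across disjoint paths (a sketch of the redundancy principle only; S-MATE's actual code construction may differ):

```python
def encode_with_parity(path_payloads):
    """Protect k equal-length payloads sent over k disjoint paths against
    a single link failure/attack by sending their XOR on one extra path."""
    parity = bytes(len(path_payloads[0]))
    for p in path_payloads:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return path_payloads + [parity]

def recover(packets, lost_index):
    """XOR of all surviving packets (data + parity) rebuilds the lost one."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    out = bytes(len(survivors[0]))
    for p in survivors:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out
```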
|
We consider the perturbations of a relativistic star as an initial-value
problem. Having discussed the formulation of the problem (the perturbation
equations and the appropriate boundary conditions at the centre and the surface
of the star) in detail we evolve the equations numerically from several
different sets of initial data. In all the considered cases we find that the
resulting gravitational waves carry the signature of several of the star's
pulsation modes. Typically, the fluid $f$-mode, the first two $p$-modes and the
slowest damped gravitational $w$-mode are present in the signal. This indicates
that the pulsation modes may be an interesting source for detectable
gravitational waves from colliding neutron stars or supernovae. We also survey
the literature and find several indications of mode presence in numerical
simulations of rotating core collapse and coalescing neutron stars. If such
mode-signals can be detected by future gravitational-wave antennae one can hope
to infer detailed information about neutron stars. Since a perturbation
evolution should adequately describe the late-time behaviour of a dynamically
excited neutron star, the present work can also be used as a benchmark test
for future fully nonlinear simulations.
|
If gamma-ray bursts (GRBs) produce high-energy cosmic rays, neutrinos are
expected to be generated in GRBs due to photo-pion production. However, we
stress that the same process also generates electromagnetic (EM) emission
induced by the production of secondary electrons and photons, and that the EM
emission is expected to be correlated to the neutrino flux. Using the Fermi/LAT
observational results on gamma-ray flux from GRBs, the GRB neutrino emission is
limited to be below ~20 GeV/m^2 per GRB event on average, which is independent
of the unknown GRB proton luminosity. This neutrino limit suggests that the
full IceCube needs stacking more than 130 GRBs in order to detect one GRB muon
neutrino.
|
The sterile neutrino is a weakly-interacting particle that emerges in the
framework of certain extensions of standard particle physics and that fits
naturally with the properties of a warm dark matter particle candidate. We
present an analysis of a deep archival Chandra observation of Willman 1, one of
the darkest purported ultra-faint dwarf galaxies, that excludes the presence of
sterile neutrinos in the 1.6-16 keV mass range within 55 pc of its centre down
to the limits of the observation. Spectral analysis of the Chandra data fails
to find any non-instrumental spectral feature possibly connected with the
radiative decay of a dark matter particle. Accordingly, we establish upper
bounds on the sterile neutrino parameter space and discuss it in the context of
previous measurements. Regarding the point source population, we identify a
total of 26 sources within the central 5 arcminutes to a limiting 0.5-2.0 keV
X-ray flux of 6 x 10^{-16} erg cm^{-2} s^{-1}. While some of these sources
could be formal members of Willman 1, we find no outstanding evidence for
either an unusual population of bright X-ray sources or a densely populated
cluster core. In fact, the entire X-ray population could be explained by
background active galactic nuclei and/or foreground stars unrelated to Willman
1. Finally, possible associations of the X-ray point like population with
optical sources from the SDSS DR7 catalogue are discussed.
|
We consider the 2D incompressible Euler equation on a corner domain $\Omega$
with angle $\nu\pi$, where $\frac{1}{2}<\nu<1$. We prove that if the initial
vorticity $\omega_0 \in L^{1}(\Omega)\cap L^{\infty}(\Omega)$ and if $\omega_0$
is non-negative and supported on one side of the angle bisector of the domain,
then the weak solutions are unique. This is the first result which proves
uniqueness when the velocity is far from Lipschitz and the initial vorticity is
nontrivial around the boundary.
|
Using accurate dissipative DFT-NEGF atomistic-simulation techniques within
the Wannier-Function formalism, we give a fresh look at the possibility of
sub-10 nm scaling for high-performance CMOS applications. We show that a
combination of good electrostatic control together with a high mobility is
paramount to meet the stringent roadmap targets. Such requirements typically
play against each other at sub-10 nm gate length for MOS transistors made of
conventional semiconductor materials like Si, Ge or III-V and dimensional
scaling is expected to end around 12 nm gate-length. We demonstrate that using
alternative 2D channel materials, such as the less explored HfS2 or ZrS2,
high-drive current down to about 6 nm is, however, achievable. We also propose
a novel transistor concept, the Dynamically-Doped Field-Effect Transistor, that
scales better than its MOSFET counterpart. Used in combination with a
high-mobility material such as HfS2, it allows for keeping the stringent
high-performance CMOS on current and competitive energy-delay performance, when
scaling down to 1 nm gate length using a single-gate architecture and an
ultra-compact design. The Dynamically-Doped Field-Effect Transistor further
addresses the grand challenge of doping in ultra-scaled devices and 2D
materials in particular.
|
The performance potential for simulating quantum electron transport on
graphical processing units (GPUs) is studied. Using graphene ribbons of
realistic sizes as an example, it is shown that GPUs provide significant
speed-ups in comparison to central processing units as the transverse dimension
of the ribbon grows. The recursive Green's function algorithm is employed and
implementation details on GPUs are discussed. Calculated conductances were
found to accumulate significant numerical error due to single-precision
floating-point arithmetic at energies close to the charge neutrality point of
graphene.
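A compact NumPy sketch of the recursive Green's function algorithm for a block-tridiagonal Hamiltonian (the standard two-sweep recursion; block sizes, energy, and broadening here are arbitrary assumptions for illustration):

```python
import numpy as np

def rgf_diagonal_blocks(E, H_diag, H_off, eta=1e-6):
    """Diagonal blocks of G = [(E + i*eta) I - H]^{-1} for a
    block-tridiagonal Hamiltonian, via the recursive Green's function
    algorithm: a forward sweep of left-connected Green's functions
    followed by a backward recursion. H_off[i] couples block i to i+1."""
    z = E + 1j * eta
    N = len(H_diag)
    gL = [None] * N
    gL[0] = np.linalg.inv(z * np.eye(len(H_diag[0])) - H_diag[0])
    for i in range(1, N):
        sigma = H_off[i - 1].conj().T @ gL[i - 1] @ H_off[i - 1]
        gL[i] = np.linalg.inv(z * np.eye(len(H_diag[i])) - H_diag[i] - sigma)
    G = [None] * N
    G[-1] = gL[-1]                       # fully connected last block
    for i in range(N - 2, -1, -1):
        G[i] = gL[i] + gL[i] @ H_off[i] @ G[i + 1] @ H_off[i].conj().T @ gL[i]
    return G
```

The recursion touches only the nonzero blocks, which is what makes it cheaper than a full inversion as the ribbon grows.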
|
In earlier work, we set up an effective potential approach at zero
temperature for the Gribov-Zwanziger model that takes into account not only the
restriction to the first Gribov region as a way to deal with the gauge fixing
ambiguity, but also the effect of dynamical dimension-two vacuum condensates.
Here, we investigate the model at finite temperature in the presence of a
background gauge field that allows access to the Polyakov loop expectation
value and the Yang-Mills (de)confinement phase structure. This necessitates
paying attention to BRST and background gauge invariance of the whole
construct. We employ two such methods as proposed elsewhere in the literature: one
based on using an appropriate dressed, BRST invariant, gluon field by the
authors and one based on a Wilson-loop dressed Gribov-Zwanziger auxiliary field
sector by Kroff and Reinosa. The latter approach outperforms the former in
estimating the critical temperature for N=2, 3, as well as in correctly
predicting the order of the transition in both cases.
|
The log Gaussian Cox process is a flexible class of Cox processes, with a
stochastic intensity surface, for incorporating complex spatial and temporal
structure of point patterns. Straightforward inference based on Markov chain
Monte Carlo is computationally heavy because the cost of inverting, or
computing the Cholesky decomposition of, the high-dimensional covariance
matrices of the Gaussian latent variables is cubic in their dimension. Furthermore, since
hyperparameters for Gaussian latent variables have high correlations with
sampled Gaussian latent processes themselves, standard Markov chain Monte Carlo
strategies are inefficient. In this paper, we propose an efficient and scalable
computational strategy for spatial log Gaussian Cox processes. The proposed
algorithm is based on the pseudo-marginal Markov chain Monte Carlo approach. Based
on this approach, we propose estimation of an approximate marginal posterior for
the parameters and comprehensive model validation strategies. We provide details
for all of the above along with some simulation investigation for univariate
and multivariate settings and analysis of a point pattern of tree data
exhibiting positive and negative interaction between different species.
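A minimal sketch of the pseudo-marginal Metropolis-Hastings idea underlying the proposed strategy (the toy target and noise model are illustrative assumptions, not the paper's Cox-process sampler): the intractable target density is replaced by a non-negative unbiased estimate, and the estimate for the current state is recycled across iterations.

```python
import math
import random

def pseudo_marginal_mh(p_hat, theta0, n_iter=20000, step=1.0, seed=1):
    """Pseudo-marginal Metropolis-Hastings: accept/reject uses the ratio
    of *estimated* densities, keeping the estimate for the current state
    fixed until a proposal is accepted. The chain still targets the exact
    marginal posterior as long as the estimator is unbiased and positive."""
    rng = random.Random(seed)
    theta, ph = theta0, p_hat(theta0, rng)
    out = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ph_prop = p_hat(prop, rng)
        if rng.random() < min(1.0, ph_prop / ph):
            theta, ph = prop, ph_prop
        out.append(theta)
    return out

def noisy_gaussian_density(theta, rng):
    """Unbiased, strictly positive estimate of the (unnormalised) N(0,1)
    density: the exact value times positive noise with mean one."""
    return math.exp(-0.5 * theta * theta) * rng.uniform(0.5, 1.5)
```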
|
We consider a singlet fermionic dark matter (DM) $\chi$ in a gauged
$U(1)_{B-3L_i}$ extension of the Standard Model (SM), with $i\in
e\,,\mu\,,\tau$, and derive bounds on the allowed parameter space, considering
its production via freeze-in mechanism. The DM communicates with the SM only
through flavorful vector-portal $Z_\text{B3L}$ due to its non-trivial charge
$x$ under $U(1)_{B-3L_{i}}$, which also guarantees the stability of the DM over
the age of the Universe for $x\neq\{\pm 3/2,\pm 3\}$. Considering
$Z_\text{B3L}$ to lie within the mass range of a few MeV up to a few GeV, we
obtain constraints on the gauge coupling $g_\text{B3L}$ from the requirement of
producing right relic abundance. Taking limits from various (present and
future) experimental facilities, e.g., NuCal, NA64, FASER, SHiP into account,
we show that the relic density allowed parameter space for the frozen in DM can
be probed with $g_\text{B3L}\gtrsim 10^{-8}$ for both $m_\chi<m_\text{ZB3L}/2$
and $m_\chi\gtrsim m_\text{ZB3L}$, while $g_\text{B3L}\lesssim 10^{-8}$ remains
mostly unconstrained. We also briefly comment on the implications of neutrino
mass generation via Type-I seesaw and anomalous $(g-2)_\mu$ in context with
$B-3L_\mu$ gauged symmetry.
|
We quantify for the first time the gravitational wave (GW) phase shift
appearing in the waveform of eccentric binary black hole (BBH) mergers formed
dynamically in three-body systems. For this, we have developed a novel
numerical method where we construct a reference binary, by evolving the
post-Newtonian (PN) evolution equations backwards from a point near merger
without the inclusion of the third object, that can be compared to the real
binary that evolves under the influence from the third BH. From this we
quantify how the interplay between dynamical tides, PN-effects, and the
time-dependent Doppler shift of the eccentric GW source results in unique
observable GW phase shifts that can be mapped to the gravitational dynamics
taking place at formation. We further find a new analytical expression for the
GW phase shift, which surprisingly has a universal functional form that only
depends on the time-evolving BBH eccentricity. The normalization scales with
the BH masses and initial separation, which can be linked to the underlying
astrophysical environment. GW phase shifts from a chaotic 3-body BH scattering
taking place in a cluster, and from a BBH inspiraling in a disk migration trap
near a super-massive BH, are also shown for illustration. When current and
future GW detectors start to observe eccentric GW sources with high enough
signal-to-noise-ratio, we propose this to be among the only ways of directly
probing the dynamical origin of individual BBH mergers using GWs alone.
|
We present data on the modulation of the dark port power at the free spectral
range frequency of the LIGO 4 km interferometers. It is found that the power is
modulated exactly at the tidal frequencies to a precision of 6e-9 Hz.
|
From a combinatorial point of view, we consider the Earth Mover's Distance
(EMD) associated with a metric measure space. The specific case considered is
deceptively simple: Let the finite set [n] = {1,...,n} be regarded as a metric
space by restricting the usual Euclidean distance on the real numbers. The EMD
is defined on ordered pairs of probability distributions on [n]. We provide an
easy method to compute a generating function encoding the values of EMD in its
coefficients, which is related to the Segre embedding from projective algebraic
geometry. As an application we use the generating function to compute the
expected value of EMD in this one-dimensional case. The EMD is then used in
clustering analysis for a specific data set.
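In this one-dimensional setting the EMD has a particularly simple closed form, which the generating-function computation can be checked against: it is the $L^1$ distance between the two cumulative distribution functions.

```python
from itertools import accumulate

def emd_line(p, q):
    """Earth Mover's Distance between two probability distributions on
    {1,...,n} with ground distance |i - j|: the L1 distance between the
    cumulative distribution functions."""
    return sum(abs(a - b) for a, b in zip(accumulate(p), accumulate(q)))
```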
|
The lattice Boltzmann method has become a standard for efficiently solving
problems in fluid dynamics. While unstructured grids allow for a more efficient
geometrical representation of complex boundaries, the lattice Boltzmann method
is often implemented using regular grids. Here we analyze two implementations
of the lattice Boltzmann method on unstructured grids, the standard forward
Euler method and the operator splitting method. We derive the evolution of the
macroscopic variables by means of the Chapman-Enskog expansion, and we prove
that it yields the Navier-Stokes equation and is first order accurate in terms
of the temporal discretization and second order in terms of the spatial
discretization. Relations between the kinematic viscosity and the integration
time step are derived for both the Euler method and the operator splitting
method. Finally we suggest an improved version of the bounce-back boundary
condition. We test our implementations in both standard benchmark geometries
and in the pore network of a real sample of a porous rock.
|
We propose a model order reduction technique integrating the Shifted Boundary
Method (SBM) with a POD-Galerkin strategy. This approach allows us to treat more
complex parametrized domains in an efficient and straightforward way. The
impact of the proposed approach is threefold.
First, problems involving parametrizations of complex geometrical shapes
and/or large domain deformations can be efficiently solved at full-order by
means of the SBM, an unfitted boundary method that avoids remeshing and the
tedious handling of cut cells by introducing an approximate surrogate boundary.
Second, the computational effort is further reduced by the development of a
reduced order model (ROM) technique based on a POD-Galerkin approach.
Third, the SBM provides a smooth mapping from the true to the surrogate
domain, and for this reason, the stability and performance of the reduced order
basis are enhanced. This feature is the net result of the combination of the
proposed ROM approach and the SBM. Similarly, the combination of the SBM with a
projection-based ROM gives the great advantage of an easy and fast to implement
algorithm considering geometrical parametrization with large deformations. The
transformation of each geometry to a reference geometry (morphing) is in fact
not required.
These combined advantages will allow the solution of PDE problems more
efficiently. We illustrate the performance of this approach on a number of
two-dimensional Stokes flow problems.
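The POD-Galerkin ingredient can be sketched independently of the SBM full-order solver (a generic linear-algebra illustration; the matrices here are random stand-ins for a discretized PDE operator):

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD/reduced basis of rank r from a snapshot matrix whose columns
    are full-order solutions, computed via the SVD."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

def galerkin_project(A, b, V):
    """Project the full-order linear system A u = b onto the POD basis V,
    solve the small reduced system, and lift the result back."""
    Ar = V.T @ A @ V
    br = V.T @ b
    return V @ np.linalg.solve(Ar, br)
```

When the true solution lies in the span of the snapshots (as below), the Galerkin projection recovers it exactly; in general one trades accuracy for the much smaller reduced system.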
|
Videogames have been a catalyst for advances in many research fields, such as
artificial intelligence, human-computer interaction or virtual reality. Over
the years, research in fields such as artificial intelligence has enabled the
design of new types of games, while games have often served as a powerful tool
for testing and simulation. Can this also happen with neuroscience? What is the
current relationship between neuroscience and games research? What can we
expect in the future? In this article, we try to answer these questions,
analysing the current state of the art at the crossroads between neuroscience
and games, and envisioning future directions.
|
This paper presents an initial survey of the properties of accretion flows in
the Kerr metric from three-dimensional, general relativistic
magnetohydrodynamic simulations of accretion tori. We consider three fiducial
models of tori around rotating, both prograde and retrograde, and nonrotating
black holes; these three fiducial models are also contrasted with axisymmetric
simulations and a pseudo-Newtonian simulation with equivalent initial
conditions to delineate the limitations of these approximations.
|
An alternative version of the Hamiltonian formalism for higher-derivative
theories is proposed. Compared with the standard Ostrogradski approach, it
has the following advantages: (i) the Lagrangian, when expressed in terms of
new variables, yields proper equations of motion; no additional Lagrange
multipliers are necessary; (ii) the Legendre transformation can be performed
in a straightforward way, provided the Lagrangian is nonsingular in the
Ostrogradski sense. Generalizations to singular Lagrangians as well as to
field theory are presented.
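For reference, the standard Ostrogradski construction that the proposed formalism is compared against, for a nondegenerate Lagrangian $L(q,\dot q,\ddot q)$, reads:

```latex
% Ostrogradski variables and Hamiltonian for L(q, \dot q, \ddot q):
Q_1 = q, \qquad Q_2 = \dot q, \qquad
p_2 = \frac{\partial L}{\partial \ddot q}, \qquad
p_1 = \frac{\partial L}{\partial \dot q}
      - \frac{d}{dt}\frac{\partial L}{\partial \ddot q},
\qquad
H = p_1 Q_2 + p_2\, \ddot q(Q_1, Q_2, p_2) - L .
```

Nondegeneracy in the Ostrogradski sense means $\partial^2 L/\partial \ddot q^2 \neq 0$, so that $\ddot q$ can be solved for in terms of $p_2$.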
|
Distribution-free prediction sets play a pivotal role in uncertainty
quantification for complex statistical models. Their validity hinges on
reliable calibration data, which may not be readily available as real-world
environments often undergo unknown changes over time. In this paper, we propose
a strategy for choosing an adaptive window and use the data therein to
construct prediction sets. The window is selected by optimizing an estimated
bias-variance tradeoff. We provide sharp coverage guarantees for our method,
showing its adaptivity to the underlying temporal drift. We also illustrate its
efficacy through numerical experiments on synthetic and real data.
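A toy version of the windowed construction (the window-selection rule below is a crude bias-variance proxy invented for illustration; the paper optimizes an estimated bias-variance tradeoff, not this exact criterion):

```python
import math

def conformal_quantile(scores, alpha):
    """Split-conformal quantile: the ceil((1-alpha)(m+1))-th smallest
    nonconformity score among m calibration scores."""
    s = sorted(scores)
    k = min(len(s) - 1, math.ceil((1 - alpha) * (len(s) + 1)) - 1)
    return s[k]

def adaptive_window_quantile(scores, alpha=0.1, windows=(25, 50, 100, 200)):
    """Choose a trailing calibration window and return the conformal
    quantile computed on it. Bias proxy: quantile drift between the two
    halves of the window; variance proxy: 1/sqrt(window size)."""
    best_w, best_crit = None, float("inf")
    for w in windows:
        if w > len(scores):
            continue
        recent = scores[-w:]
        half = w // 2
        bias = abs(conformal_quantile(recent[:half], alpha)
                   - conformal_quantile(recent[half:], alpha))
        crit = bias + 1.0 / math.sqrt(w)
        if crit < best_crit:
            best_w, best_crit = w, crit
    return best_w, conformal_quantile(scores[-best_w:], alpha)
```

The prediction set at a new point is then all labels whose nonconformity score falls below the returned quantile.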
|
Spin coating technique was used to synthesize pure ZnO and Cu doped ZnO films
on amorphous and conducting glass substrates. The doped amount of Cu in ZnO was
varied up to 5% in atomic percentage. Speed of spin coating system, coating
time, initial chemical solution and annealing conditions were varied to
optimize the properties of samples. Transmittance of samples was measured for
ZnO doped with 1, 2, 3, 4 and 5% of Cu. Absorbance, reflectance and refractive
index were derived from the measured transmittance. Film thickness of each film
was calculated using the graphs of refractive index versus wavelength. Film
thickness varies in a random manner depending on the amount of ZnO or Cu doped
ZnO solutions spread on the substrate. The energy gap of each film was
calculated using the graph of square of absorption coefficient time photon
energy versus photon energy. The calculated energy gap values of Cu doped ZnO
film samples decrease with the Cu concentration in ZnO. This means that the
conductivity of ZnO can be increased by adding a trace amount of conducting
material such as Cu.
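The band-gap extraction described above is a Tauc-plot extrapolation, which can be sketched as a plain least-squares fit (synthetic data; `fit_from` marks the assumed start of the linear region):

```python
def tauc_band_gap(photon_energy, alpha_hv_sq, fit_from):
    """Estimate the optical band gap from a Tauc plot: fit the linear
    region of (alpha * h * nu)^2 versus photon energy and extrapolate to
    the energy axis; the x-intercept is Eg (in the same units as the
    photon energies, typically eV)."""
    xs = [x for x in photon_energy if x >= fit_from]
    ys = [y for x, y in zip(photon_energy, alpha_hv_sq) if x >= fit_from]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -intercept / slope            # where (alpha*h*nu)^2 hits zero
```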
|
We consider branes $N=I\times\mathcal{S}_0$, where $\mathcal{S}_0$ is an
$n$-dimensional space form, not necessarily compact, in a
Schwarzschild-AdS$_{(n+2)}$ bulk $\mathcal{N}$. The branes have a big crunch
singularity. If a brane is an ARW space, then, under certain conditions, there
exists a smooth natural transition flow through the singularity to a reflected
brane $\hat N$, which has a big bang singularity and which can be viewed as a
brane in a reflected Schwarzschild-AdS$_{(n+2)}$ bulk $\hat{\mathcal{N}}$. The
joint branes $N\cup \hat N$ can thus be naturally embedded in
$\mathbb{R}^2\times \mathcal{S}_0$, hence there exists a second possibility of
defining a smooth transition from big crunch to big bang by requiring that
$N\cup\hat N$ forms a $C^\infty$-hypersurface in $\mathbb{R}^2\times
\mathcal{S}_0$. This last notion of a smooth transition also applies to branes
that are not ARW spaces, allowing a wide range of possible equations of state.
|
Any state r = (x,y,z) of a qubit, written in the Pauli basis and initialized
in the pure state r = (0,0,1), can be prepared by composing three quantum
operations: two unitary rotation gates to reach a pure state on the Bloch
sphere, followed by a depolarization gate to decrease |r|. Here we discuss the
complementary state-preparation protocol for qubits initialized at the center
of the Bloch ball, r=0, based on increasing or amplifying |r| to its desired
value, then rotating. Bloch vector amplification may or may not increase qubit
energy, but it necessarily increases purity and decreases entropy.
Amplification can be achieved with a linear Markovian CPTP channel by placing
the channel's fixed point away from r=0, making it nonunital, but the resulting
gate suffers from a critical slowing down as that fixed point is approached.
Here we consider alternative designs based on linear and nonlinear Markovian
PTP channels, which offer benefits relative to linear CPTP channels, namely
fast Bloch vector amplification without deceleration. These operations simulate
a reversal of the thermodynamic arrow of time for the qubit and would provide
striking experimental demonstrations of non-CP dynamics.
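The critical slowing down of the linear nonunital channel can be seen in a toy model of the Bloch $z$-component alone (a caricature for illustration; the paper's PTP designs are precisely what avoids this deceleration):

```python
def steps_to_reach(z_target, z_fixed, gamma=0.1, z0=0.0, max_steps=10**6):
    """Iterate z -> z + gamma * (z_fixed - z) starting from the maximally
    mixed state (z0 = 0) and count applications until z >= z_target.
    The channel converges geometrically to its fixed point, so the step
    count diverges as z_fixed approaches z_target from above."""
    z, n = z0, 0
    while z < z_target and n < max_steps:
        z += gamma * (z_fixed - z)
        n += 1
    return n
```

Moving the fixed point from well beyond the target to just beyond it makes the same amplification dramatically slower, which is the slowing down referred to above.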
|
Linear super-resolution microscopy via the synthetic aperture approach permits
fast acquisition because of its wide-field implementations. However, its
resolution has been limited: a missing spatial-frequency band occurs when the
shift magnitude surpasses the cutoff frequency of the detection system by more
than a factor of two, which causes ghosting to appear. Here,
we propose a method of chip-based 3D nanoscopy through a large and tunable
spatial-frequency-shift effect, capable of covering the full extent of the
spatial-frequency components within a wide passband. The missing band can be
effectively recovered by developing an actively tuned spatial-frequency-shift
approach through wave-vector
manipulation and operation of optical modes propagating along multiple
azimuthal directions on a waveguide chip to interfere. In addition, the method
includes a chip-based sectioning capability, which is enabled by saturated
absorption of fluorophores. By introducing ultra-large propagation effective
refractive index, nanoscale resolution is possible, without sacrificing the
temporal resolution and the field-of-view. Imaging on GaP waveguide material
demonstrates a lateral resolution of lambda/10, which is 5.4-fold beyond the
Abbe diffraction limit, and an axial resolution of lambda/19 using a 0.9 NA
detection objective. Simulation with an assumed propagation effective
refractive index of 10 demonstrates a lateral resolution of lambda/22, in
which the huge gap between
the directly shifted and the zero-order components is completely filled to
ensure the deep-subwavelength resolvability. It means that, a fast wide-field
3D deep-subdiffraction visualization could be realized using a standard
microscope by adding a mass-producible and cost-effective
spatial-frequency-shift illumination chip.
|
In an electromagnetic cloak based on a transformation approach, reduced sets
of material properties are generally favored due to their easier implementation
in reality, although a seemingly inevitable drawback of undesired reflection
exists in such cloaks. Here we suggest using high-order transformations to
create smooth moduli at the outer boundary of the cloak, therefore completely
eliminating the detrimental scattering within the limit of geometric optics. We
apply this scheme to a non-magnetic cylindrical cloak and demonstrate that the
scattered field is reduced substantially in a cloak with optimal quadratic
transformation as compared to its linear counterpart.
|
Let $S$ be a polynomial ring over a field and $I\subseteq S$ a homogeneous
ideal containing a regular sequence of forms of degrees $d_1, \ldots, d_c$. In
this paper we prove the Lex-plus-powers Conjecture when the field has
characteristic 0 for all regular sequences such that $d_i \geq \sum_{j=1}^{i-1}
(d_j-1)+1$ for each $i$; that is, we show that the Betti table of $I$ is
bounded above by the Betti table of the lex-plus-powers ideal of $I$. As an
application, when the characteristic is 0, we obtain bounds for the Betti
numbers of any homogeneous ideal containing a regular sequence of known
degrees, which are sharper than the previously known ones from the
Bigatti-Hulett-Pardue Theorem.
|
What makes people 'click' on a first date and become mutually attracted to
one another? While understanding and predicting the dynamics of romantic
interactions used to be exclusive to human judgment, we show that Large
Language Models (LLMs) can detect romantic attraction during brief
getting-to-know-you interactions. Examining data from 964 speed dates, we show
that ChatGPT (and Claude 3) can predict both objective and subjective
indicators of speed dating success (r=0.12-0.23). ChatGPT's predictions of
actual matching (i.e., the exchange of contact information) were not only on
par with those of human judges who had access to the same information but also
incremental to speed daters' own predictions. While some of the variance in
ChatGPT's predictions can be explained by common content dimensions (such as
the valence of the conversations), the fact that there remains a substantial
proportion of unexplained variance suggests that ChatGPT also picks up on
conversational dynamics. In addition, ChatGPT's judgments showed substantial
overlap with those made by the human observers (mean r=0.29), highlighting
similarities in their representations of romantic attraction that are
partially independent of accuracy.
|
Internet-based businesses and products (e.g. e-commerce, music streaming) are
becoming increasingly sophisticated, with a strong focus on improving customer
satisfaction. A core way they achieve this is by providing customers with easy
access to their products, structuring them in catalogues using navigation bars
and providing recommendations. We refer to these catalogues as
product concepts, e.g. product categories on e-commerce websites, public
playlists on music streaming platforms. These product concepts typically
contain products that are linked with each other through some common features
(e.g. a playlist of songs by the same artist). How they are defined in the
backend of the system can be different for different products. In this work, we
represent product concepts using database queries and tackle two learning
problems. First, given sets of products that all belong to the same unknown
product concept, we learn a database query that is a representation of this
product concept. Second, we learn product concepts and their corresponding
queries when the given sets of products are associated with multiple product
concepts. To achieve these goals, we propose two approaches that combine the
concepts of PU learning with Decision Trees and Clustering. Our experiments
demonstrate, via a simulated setup for a music streaming service, that our
approach is effective in solving these problems.
|
We present a comprehensive study of event-by-event multiplicity fluctuations
in nucleon-nucleon and nucleus-nucleus interactions from AGS/FAIR to RHIC
energies within the UrQMD transport approach. The scaled variances of negative,
positive, and all charged hadrons are analysed. The scaled variance in central
Pb+Pb collisions increases with energy and behaves similarly to that in
inelastic p+p interactions. We find a non-trivial dependence of multiplicity
fluctuations on
the rapidity and transverse momentum interval used for the analysis and on the
centrality selection procedure. Quantitative predictions for the NA49
experiment are given, taking into account the acceptance of the detector and
the selection procedure of central events.
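The scaled variance analysed here is the standard event-by-event fluctuation measure omega = Var(N)/<N>. A minimal sketch of its computation (using Poisson toy data as the baseline, for which omega = 1; not UrQMD output):

```python
import numpy as np

def scaled_variance(mults):
    """Scaled variance omega = Var(N)/<N> of an event-by-event
    multiplicity sample."""
    mults = np.asarray(mults, dtype=float)
    return mults.var() / mults.mean()

# A Poisson multiplicity distribution has omega = 1 by construction,
# which is the usual reference point for "trivial" fluctuations.
rng = np.random.default_rng(0)
poisson_events = rng.poisson(20.0, size=200_000)
print(round(scaled_variance(poisson_events), 2))  # ~1.0
```

Deviations of omega from 1 in real or transport-model events then signal non-Poissonian correlations.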
|
The generic linear evolution of the density matrix of a system with a
finite-dimensional state space is given by stochastic maps, which take a
density matrix linearly into the set of density matrices. These dynamical
stochastic
maps form a linear convex set that may be viewed as supermatrices. The property
of hermiticity of density matrices renders an associated supermatrix hermitian
and hence diagonalizable; but the positivity of the density matrix does not
make this associated supermatrix positive. If it is positive, the map is called
completely positive, and such maps have a simple parametrization. This is
extended to all positive (but not completely positive) maps. All dynamical maps
can be obtained by contracting a norm-preserving map of a combined (extended)
system. The reconstruction of the extended dynamics is given.
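The distinction drawn above — hermiticity of the associated supermatrix versus its positivity — can be illustrated numerically via the Choi matrix: a map built from Kraus operators is completely positive (positive semidefinite Choi matrix), while the transpose map is positive but not completely positive. A sketch under standard conventions; the depolarizing-style Kraus set is an illustrative choice, not taken from the paper:

```python
import numpy as np

def choi_matrix(kraus_ops, d):
    """Choi supermatrix J(Phi) = sum_{ij} |i><j| (x) Phi(|i><j|)."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            out = sum(K @ E @ K.conj().T for K in kraus_ops)
            J += np.kron(E, out)
    return J

# A qubit map given by Kraus operators (depolarizing-style, illustrative).
d, p = 2, 0.3
kraus = [np.sqrt(1 - p) * np.eye(d),
         np.sqrt(p / 3) * np.array([[0, 1], [1, 0]]),      # Pauli X
         np.sqrt(p / 3) * np.array([[0, -1j], [1j, 0]]),   # Pauli Y
         np.sqrt(p / 3) * np.array([[1, 0], [0, -1]])]     # Pauli Z
J = choi_matrix(kraus, d)
assert np.allclose(J, J.conj().T)            # hermitian, hence diagonalizable
assert np.linalg.eigvalsh(J).min() > -1e-12  # PSD: map is completely positive

# The transpose map is positive but NOT completely positive:
# its Choi matrix (the SWAP matrix for d=2) has a negative eigenvalue.
J_T = np.zeros((4, 4), dtype=complex)
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2)); E[i, j] = 1.0
        J_T += np.kron(E, E.T)
assert np.linalg.eigvalsh(J_T).min() < 0
```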
|
The cross section for exclusive psi(2S) ultraperipheral production at the LHC
is calculated using gluon parametrisations extracted from exclusive J/psi
measurements performed at HERA and the LHC. Predictions are given at leading
and next-to-leading order for pp centre-of-mass energies of 7, 8 and 14 TeV,
assuming the non-relativistic approximation for the psi(2S) wave function.
|
We construct Brownian Sachdev-Ye-Kitaev (SYK) chains subjected to continuous
monitoring and explore possible entanglement phase transitions therein. We
analytically derive the effective action in the large-$N$ limit and show that
an entanglement transition is caused by the symmetry breaking in the enlarged
replica space. In the noninteracting case with SYK$_2$ chains, the model
features a continuous $O(2)$ symmetry between two replicas and a transition
corresponding to spontaneous breaking of that symmetry upon varying the
measurement rate. In the symmetry broken phase at low measurement rate, the
emergent replica criticality associated with the Goldstone mode leads to a
log-scaling entanglement entropy that can be attributed to the free energy of
vortices. In the symmetric phase at higher measurement rate, the entanglement
entropy obeys area-law scaling. In the interacting case, the continuous $O(2)$
symmetry is explicitly lowered to a discrete $C_4$ symmetry, giving rise to
volume-law entanglement entropy in the symmetry-broken phase due to the
enhanced linear free energy cost of domain walls compared to vortices. The
interacting transition is described by $C_4$ symmetry breaking. We also verify
the large-$N$ critical exponents by numerically solving the Schwinger-Dyson
equation.
|
Broadband visible light emitting, three-dimensional hexagonal annular
microstructures with InGaN/GaN multiple quantum wells (MQWs) are fabricated via
selective-area epitaxial growth. The single hexagonal annular structure is
composed not only of the polar (0001) facet on the top surface but also of
semi-polar {10-11} and {11-22} facets on the inner and outer sidewalls,
exhibiting multi-color visible light emission from the InGaN/GaN MQWs formed
on the different facets. The InGaN MQWs on the (0001) facet emit at longer
wavelengths (green) due to their larger well thickness and higher In
composition, while those on the semi-polar {10-11} and {11-22} facets exhibit
highly efficient shorter-wavelength (violet to blue) emission owing to their
smaller well thickness and lower In composition. By combining the multiple
color emissions from the
different facets, high efficiency broadband visible light emission could be
achieved. The emission color can be changed with excitation power density owing
to the built-in electric field on the (0001) facet, which is confirmed by
time-resolved luminescence experiments. The hexagonal annular structures can be
a critical building block for highly efficient broadband visible light emitting
sources, providing a solution to previous problems related to the fabrication
issues for phosphor-free white light emitting devices.
|
Multimode fibers hold great promise to advance data rates in optical
communications but come with the challenge to compensate for modal crosstalk
and mode-dependent losses, resulting in strong distortions. The holographic
measurement of the transmission matrix enables not only correcting distortions
but also harnessing these effects for creating a confidential data connection
between legitimate communication parties, Alice and Bob. The feasibility of
this physical-layer-security-based approach is demonstrated experimentally for
the first time on a multimode fiber link to which the eavesdropper Eve is
physically coupled. Once the proper structured light field is launched at
Alice's side, the message can be delivered to Bob, and, simultaneously, the
decipherment for an illegitimate wiretapper Eve is destroyed. Within a real
communication scenario, we implement wiretap codes and demonstrate
confidentiality by quantifying the level of secrecy. Compared to an uncoded
data transmission, the amount of securely exchanged data is enhanced by a
factor of 538. The complex light transportation phenomena that have long been
considered limiting and have restricted the widespread use of multimode fiber
are exploited for opening new perspectives on information security in spatial
multiplexing communication systems.
|
Learning subtle yet discriminative features (e.g., beak and eyes for a bird)
plays a significant role in fine-grained image recognition. Existing
attention-based approaches localize and amplify significant parts to learn
fine-grained details, which often suffer from a limited number of parts and
heavy computational cost. In this paper, we propose to learn such fine-grained
features from hundreds of part proposals by Trilinear Attention Sampling
Network (TASN) in an efficient teacher-student manner. Specifically, TASN
consists of 1) a trilinear attention module, which generates attention maps by
modeling the inter-channel relationships, 2) an attention-based sampler which
highlights attended parts with high resolution, and 3) a feature distiller,
which distills part features into a global one by weight sharing and feature
preserving strategies. Extensive experiments verify that TASN yields the best
performance under the same settings compared with the most competitive
approaches on the iNaturalist-2017, CUB-Bird, and Stanford-Cars datasets.
|
This is a review of the properties of spectral fluctuations in disordered
metals, their relation with Random Matrix Theory, and the semiclassical
picture. We
also review the physics of persistent currents in mesoscopic isolated rings,
the parametric correlations and curvature distributions.
|
Photon detectors are an elementary tool to measure electromagnetic waves at
the quantum limit and are heavily demanded in the emerging quantum technologies
such as communication, sensing, and computing. Of particular interest is a
quantum non-demolition (QND) type detector, which projects the quantum state of
a photonic mode onto the photon-number basis without affecting the temporal or
spatial properties. This is in stark contrast to conventional photon detectors
which absorb a photon to trigger a `click' and thus inevitably destroy the
photon. The long-sought QND detection of a flying photon was recently
demonstrated in the optical domain using a single atom in a cavity. However,
the counterpart for microwaves has been elusive despite the recent progress in
microwave quantum optics using superconducting circuits. Here, we implement a
deterministic entangling gate between a superconducting qubit and a propagating
microwave pulse mode reflected by a cavity containing the qubit. Using the
entanglement and the high-fidelity qubit readout, we demonstrate a QND
detection of a single photon with a quantum efficiency of 0.84, a photon
survival probability of 0.87, and a dark-count probability of 0.0147. Our
scheme can be a building block for quantum networks connecting distant qubit
modules as well as a microwave photon counting device for multiple-photon
signals.
|
We investigate the compositeness condition in four dimensions at the
non-trivial order (next-to-leading order in $1/N$ with the number $N$ of the
colors), taking the Nambu-Jona-Lasinio model as an example. In spite of its
non-renormalizability, the compositeness condition can be successfully derived
and solved in compact form. The resultant problems are discussed.
|
If "seed" central black holes were common in the subgalactic building blocks
that merged to form present-day massive galaxies, then relic intermediate-mass
black holes (IMBHs) should be present in the Galactic bulge and halo. We use a
particle tagging technique to dynamically populate the N-body Via Lactea II
high-resolution simulation with black holes, and assess the size, properties,
and detectability of the leftover population. The method assigns a black hole
to the most tightly bound central particle of each subhalo at infall according
to an extrapolation of the M_BH-sigma_* relation, and self-consistently follows
the accretion and disruption of Milky Way progenitor dwarfs and their holes in
a cosmological "live" host from high redshift to today. We show that, depending
on the minimum stellar velocity dispersion, sigma_m, below which central black
holes are assumed to be increasingly rare, as many as ~2000 (sigma_m=3 km/s) or
as few as ~70 (sigma_m=12 km/s) IMBHs may be left wandering in the halo of the
Milky Way today. The fraction of IMBHs kicked out of their host by
gravitational recoil is < 20%. We identify two main Galactic subpopulations,
"naked" IMBHs, whose host subhalos were totally destroyed after infall, and
"clothed" IMBHs residing in dark matter satellites that survived tidal
stripping. Naked IMBHs typically constitute 40-50% of the total and are more
centrally concentrated. We show that, in the sigma_m=12 km/s scenario, the
clusters of tightly bound stars that should accompany naked IMBHs would be
fainter than m_V=16 mag, spatially resolvable, and have proper motions of
0.1-10 milliarcsec per year. Their detection may provide an observational tool
to constrain the formation history of massive black holes in the early
Universe.
|
The TRAPPIST-1 system is unique in that it has a chain of seven terrestrial
Earth-like planets located close to or in its habitable zone. In this paper, we
study the effect of potential cometary impacts on the TRAPPIST-1 planets and
how they would affect the primordial atmospheres of these planets. We consider
both atmospheric mass loss and volatile delivery with a view to assessing
whether any sort of life has a chance to develop. We ran N-body simulations to
investigate the orbital evolution of potential impacting comets, to determine
which planets are more likely to be impacted and the distributions of impact
velocities. We consider three scenarios that could potentially throw comets
into the inner region (i.e., within 0.1 au, where the seven planets are
located) from an (as yet undetected) outer belt similar to the Kuiper belt or
an Oort cloud: planet scattering, the Kozai-Lidov mechanism, and Galactic
tides. For the
different scenarios, we quantify, for each planet, how much atmospheric mass is
lost and what mass of volatiles can be delivered over the age of the system
depending on the mass scattered out of the outer belt. We find that the
resulting high velocity impacts can easily destroy the primordial atmospheres
of all seven planets, even if the mass scattered from the outer belt is as low
as that of the Kuiper belt. However, we find that the atmospheres of the
outermost planets f, g and h can also easily be replenished with cometary
volatiles (e.g. $\sim$ an Earth ocean mass of water could be delivered). These
scenarios would thus imply that the atmospheres of these outermost planets
could be more massive than those of the innermost planets, and have a
volatile-enriched composition.
|
In this paper, we construct consistent statistical estimators of the Hurst
index, volatility coefficient, and drift parameter for Bessel processes driven
by fractional Brownian motion with $H<1/2$. As an auxiliary result, we also
prove the continuity of the fractional Bessel process. The results are
illustrated with simulations.
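As background for the kind of estimation discussed here, a standard quadratic-variation estimator recovers the Hurst index from a sampled path: the ratio of quadratic variations at two lags satisfies E[V2]/E[V1] ~ 2^(2H-1). This is a hedged sketch for plain fractional Brownian motion, not the paper's estimators for Bessel processes:

```python
import numpy as np

def fbm_sample(n, H, seed=0):
    """Exact fBm sample on t_k = k/n via Cholesky of the covariance matrix
    r(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

def hurst_estimate(x):
    """Ratio of quadratic variations at lags 1 and 2: E[V2]/E[V1] = 2^(2H-1)."""
    v1 = np.sum(np.diff(x) ** 2)
    v2 = np.sum(np.diff(x[::2]) ** 2)
    return 0.5 * (np.log2(v2 / v1) + 1.0)

x = fbm_sample(1000, H=0.3, seed=42)
print(round(hurst_estimate(x), 2))  # close to the true H = 0.3
```

For H < 1/2 the increments are negatively correlated, which is exactly the regime the abstract addresses.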
|
Large low-Earth orbit (LEO) satellite networks are being built to provide
low-latency broadband Internet access to a global subscriber base. In addition
to network transmissions, researchers have proposed embedding compute resources
in satellites to support LEO edge computing. To make software systems ready for
the LEO edge, they need to be adapted for its unique execution environment,
e.g., to support handovers in the face of satellite mobility.
So far, research around LEO edge software systems has focused on the
predictable behavior of satellite networks, such as orbital movements.
Additionally, we must also consider failure patterns, e.g., effects of
radiation on compute hardware in space. In this paper, we present a taxonomy of
failures that may occur in LEO edge computing and how they could affect
software systems. From there, we derive considerations for LEO edge software
systems and lay out avenues for future work.
|
We introduce the class of plane valuations at infinity and prove an analogue
to the Abhyankar-Moh (semigroup) Theorem for it.
|
We study bond percolation on the Hamming hypercube {0,1}^m around the
critical probability p_c. It is known that if p=p_c(1+O(2^{-m/3})), then with
high probability the largest connected component C_1 is of size Theta(2^{2m/3})
and that this quantity is non-concentrated. Here we show that for any sequence
eps_m such that eps_m=o(1) but eps_m >> 2^{-m/3}, percolation on the hypercube
at p_c(1+eps_m) has
|C_1| = (2+o(1)) eps_m 2^m and |C_2| = o(eps_m 2^m) with high probability,
where C_2 is the second largest component. This resolves a conjecture of Borgs,
Chayes, the first author, Slade and Spencer [17].
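The regime above can be explored with a small Monte Carlo sketch of bond percolation on {0,1}^m using union-find. The parameters (m, eps, the p_c ~ 1/m approximation) are illustrative; the stated asymptotics only emerge for large m:

```python
import random

def percolate_hypercube(m, p, seed=0):
    """Bond percolation on {0,1}^m: keep each edge independently with
    probability p; return component sizes, largest first (union-find)."""
    rng = random.Random(seed)
    n = 1 << m
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for v in range(n):
        for k in range(m):
            u = v ^ (1 << k)  # flipping one coordinate gives a hypercube edge
            if v < u and rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

m = 12
eps = 0.15  # slightly supercritical; p_c is approximately 1/m for large m
comps = percolate_hypercube(m, (1 + eps) / m, seed=1)
print(comps[0], comps[1])  # giant component vs. second component
```

In the supercritical window the largest component should dominate the second, in line with |C_1| = (2+o(1)) eps_m 2^m and |C_2| = o(eps_m 2^m).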
|
The emerging beyond 5G and envisioned 6G wireless networks are considered as
key enablers in supporting a diversified set of applications for industrial
mobile robots (MRs). The scenario under investigation in this paper relates to
mobile robots that autonomously roam in an industrial floor and perform a
variety of tasks at different locations whilst utilizing high directivity
beamformers in mmWave small cells. In such scenarios, the potential close
proximity of mobile robots connected to different base stations may cause
excessive levels of interference, with the net result of a decrease in the
overall achievable data rate in the network. To resolve this issue, a novel
mixed integer linear programming formulation is proposed in which the
trajectory of the mobile robots is considered jointly with the interference
level at different beam sectors, thereby creating a low-interference path for
each mobile robot on the industrial floor. A wide set of numerical
investigations reveals that the proposed path planning optimization approach
for mmWave-connected mobile robots can improve the overall achievable
throughput by up to 31% compared to an interference-oblivious scheme, without
penalizing the overall travelling time.
|
Sensors are embedded in security-critical applications from medical devices
to nuclear power plants, but their outputs can be spoofed through
electromagnetic and other types of signals transmitted by attackers at a
distance. To address the lack of a unifying framework for evaluating the
effects of such transmissions, we introduce a system and threat model for
signal injection attacks. We further define the concepts of existential,
selective, and universal security, which address attacker goals from mere
disruptions of the sensor readings to precise waveform injections. Moreover, we
introduce an algorithm which allows circuit designers to concretely calculate
the security level of real systems. Finally, we apply our definitions and
algorithm in practice using measurements of injections against a smartphone
microphone, and analyze the demodulation characteristics of commercial
Analog-to-Digital Converters (ADCs). Overall, our work highlights the
importance of evaluating the susceptibility of systems against signal injection
attacks, and introduces both the terminology and the methodology to do so.
|
In this paper, a Reconfigurable Intelligent Surface (RIS) assisted dual-hop
multicast wireless communication network is proposed with two source nodes and
two destination nodes. RIS boosts received signal strength through an
intelligent software-controlled array of discrete phase-shifting metamaterials.
The multicast communication from the source nodes is enabled using a Decode and
Forward (DF) relay node. In the relay node, the Physical Layer Network Coding
(PLNC) concept is applied and the PLNC symbol is transmitted to the destination
nodes. The joint RIS-Multicast channels between source nodes and the relay node
are modeled as the sum of two scaled non-central Chi-Square distributions.
Analytical expressions are derived for Bit Error Rate (BER) at relay node and
destination nodes using Moment Generating Function (MGF) approach and the
results are validated using Monte-Carlo simulations. It is observed that the
BER performance of the proposed RIS assisted network is significantly better
than that of conventional non-RIS channel links.
|
We consider the evolution of populations under the joint action of mutation
and differential reproduction, or selection. The population is modelled as a
finite-type Markov branching process in continuous time, and the associated
genealogical tree is viewed both in the forward and the backward direction of
time. The stationary type distribution of the reversed process, the so-called
ancestral distribution, turns out as a key for the study of mutation-selection
balance. This balance can be expressed in the form of a variational principle
that quantifies the respective roles of reproduction and mutation for any
possible type distribution. It shows that the mean growth rate of the
population results from a competition for a maximal long-term growth rate, as
given by the difference between the current mean reproduction rate, and an
asymptotic decay rate related to the mutation process; this tradeoff is won by
the ancestral distribution.
Our main application is the quasispecies model of sequence evolution with
mutation coupled to reproduction but independent across sites, and a fitness
function that is invariant under permutation of sites. Here, the variational
principle is worked out in detail and yields a simple, explicit result.
|
Natural Language Processing (NLP) models optimized for predictive performance
often make high confidence errors and suffer from vulnerability to adversarial
and out-of-distribution data. Existing work has mainly focused on mitigation of
such errors using either humans or an automated approach. In this study, we
explore the usage of large language models (LLMs) for data augmentation as a
potential solution to the issue of NLP models making wrong predictions with
high confidence during classification tasks. We compare the effectiveness of
synthetic data generated by LLMs with that of human data obtained via the same
procedure. For mitigation, humans or LLMs provide natural language
characterizations of high confidence misclassifications to generate synthetic
data, which are then used to extend the training set. We conduct an extensive
evaluation of our approach on three classification tasks and demonstrate its
effectiveness in reducing the number of high confidence misclassifications
present in the model, all while maintaining the same level of accuracy.
Moreover, we find that the cost gap between humans and LLMs surpasses an order
of magnitude, as LLMs attain human-like performance while being more scalable.
|
We investigate the post-bounce background dynamics in a certain class of
single bounce scenarios studied in the literature, in which the cosmic bounce
is driven by a scalar field with negative exponential potential such as the
ekpyrotic potential. We show that those models can actually lead to cyclic
evolutions with repeated bounces. These cyclic evolutions, however, do not
account for the currently observed late-time accelerated expansion and hence
are not cosmologically viable. In this respect we consider a new kind of cyclic
model proposed recently and derive some cosmological constraints on this model.
|
The behavior of complex networks under attack depends strongly on the
specific attack scenario. Of special interest are scale-free networks, which
are usually seen as robust under random failure or attack but appear to be
especially vulnerable to targeted attacks. In a recent study of public
transport networks of 14 major cities of the world we have shown that these
networks may exhibit scale-free behaviour [Physica A 380, 585 (2007)]. Our
further analysis, subject of this report, focuses on the effects that defunct
or removed nodes have on the properties of public transport networks.
Simulating different attack strategies, we elaborate vulnerability criteria
that allow one to find minimal strategies with high impact on these systems.
|
Large language models (LLMs) often benefit from intermediate steps of
reasoning to generate answers to complex problems. When these intermediate
steps of reasoning are used to monitor the activity of the model, it is
essential that this explicit reasoning is faithful, i.e. that it reflects what
the model is actually reasoning about. In this work, we focus on one potential
way intermediate steps of reasoning could be unfaithful: encoded reasoning,
where an LLM could encode intermediate steps of reasoning in the generated text
in a way that is not understandable to human readers. We show that language
models can be trained to make use of encoded reasoning to get higher
performance without the user understanding the intermediate steps of reasoning.
We argue that, as language models get stronger, this behavior becomes more
likely to appear naturally. Finally, we describe a methodology that enables the
evaluation of defenses against encoded reasoning, and show that, under the
right conditions, paraphrasing successfully prevents even the best encoding
schemes we built from encoding more than 3 bits of information per KB of text.
|
Given a normal matrix $A$ and an arbitrary square matrix $B$ (not necessarily
of the same size), what relationships between $A$ and $B$, if any, guarantee
that $B$ is also a normal matrix? We provide an answer to this question in
terms of pseudospectra and norm behavior. In doing so, we prove that a certain
distance formula, known to be a necessary condition for normality, is in fact
sufficient, and we demonstrate that the spectrum of a matrix can be used to
recover the spectral norm of its resolvent precisely when the matrix is normal.
These results lead to new normality criteria and other interesting
consequences.
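The fact underlying the distance formula — for a normal matrix, the spectral norm of the resolvent equals the reciprocal of the distance from z to the spectrum, while non-normal matrices can violate this badly — is easy to check numerically. A sketch with illustrative matrices (not the paper's examples):

```python
import numpy as np

def resolvent_norm(A, z):
    """Spectral norm of the resolvent (zI - A)^{-1}."""
    n = A.shape[0]
    return np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2)

def dist_to_spectrum(A, z):
    """Distance from z to the spectrum of A."""
    return np.abs(np.linalg.eigvals(A) - z).min()

rng = np.random.default_rng(0)

# Normal (here: real symmetric) A: ||(zI - A)^{-1}|| = 1 / dist(z, spec(A)).
H = rng.standard_normal((5, 5))
A = H + H.T
z = 1.0 + 2.0j
assert np.isclose(resolvent_norm(A, z), 1.0 / dist_to_spectrum(A, z))

# Non-normal B: the resolvent norm can greatly exceed 1 / dist(z, spec(B)).
B = np.array([[0.0, 100.0], [0.0, 0.0]])  # Jordan-type block, spec = {0}
assert resolvent_norm(B, 0.1) > 10.0 / dist_to_spectrum(B, 0.1)
```

For the 2x2 block above the resolvent norm at z = 0.1 is about 10^4, a thousand times larger than 1/dist(z, spec) = 10, illustrating why the formula characterizes normality.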
|
We present a brief overview of the basic concepts of the soliton stability
theory and discuss some characteristic examples of the instability-induced
soliton dynamics, in application to spatial optical solitons described by the
NLS-type nonlinear models and their generalizations. In particular, we
demonstrate that the soliton internal modes are responsible for the appearance
of the soliton instability, and outline an analytical approach based on a
multi-scale asymptotic technique that allows one to analyze the soliton dynamics
near the marginal stability point. We also discuss some results of the rigorous
linear stability analysis of fundamental solitary waves and nonlinear impurity
modes. Finally, we demonstrate that multi-hump vector solitary waves may become
stable in some nonlinear models, and discuss the examples of stable
(1+1)-dimensional composite solitons and (2+1)-dimensional dipole-mode solitons
in a model of two incoherently interacting optical beams.
|
The genuine twist-3 quark-gluon contributions to the Generalized Parton
Distributions (GPDs) are estimated in the model of the instanton vacuum. These
twist-3 effects are found to be parametrically suppressed relative to the
``kinematical'' twist-3 ones due to the small packing fraction of the instanton
vacuum. We derive exact sum rules for the twist-3 GPDs.
|
We report the detection of CO(1-0) and CO(2-1) emission from the central
region of nearby 3CR radio galaxies (z$<$ 0.03). Out of 21 galaxies, 8 have
been detected in at least one of the two CO transitions. The total molecular
gas content is below 10$^9$ \msun. In 5 cases, the CO emission exhibits a
double-horned line profile that is characteristic of an inclined rotating disk
with a central depression at the rising part of its rotation curve. The
inferred disk or ring distributions of the molecular gas are
consistent with the observed presence of dust disks or rings detected optically
in the cores of the galaxies. We reason that if their gas originates from the
mergers of two gas-rich disk galaxies, as has been invoked to explain the
molecular gas in other radio galaxies, then these galaxies must have merged a
long time ago (a few Gyr or more) but their remnant elliptical galaxies only
recently (within the last 10$^7$ years) became active radio galaxies. Instead,
we argue that the cannibalism of gas-rich galaxies provides a simpler
explanation for the origin of the molecular gas in the elliptical hosts of
radio galaxies (Lim et al. 2000). Given the transient nature of their observed
disturbances, these galaxies probably became active in the radio soon after
the accretion event, when sufficient molecular gas agglomerated in their
nuclei.
|
Adult neurogenesis has long been documented in the vertebrate brain, and
recently even in humans. Although it has been conjectured for many years that
its functional role is related to the renewing of memories, no clear mechanism
as to how this can be achieved has been proposed. We present a scheme in which
incorporation of new neurons proceeds at a constant rate, while their survival
is activity-dependent and thus contingent upon new neurons establishing
suitable connections. We show that a simple mathematical model following these
rules organizes its activity so as to maximize the difference between its
responses, and can adapt to changing environmental conditions in an
unsupervised fashion.
|
We shall show that the rotation of some irrational rotation number on the
circle admits suspensions which are kinematic expansive.
|
We investigate properties of the spherically averaged atomic one-electron
density rho~(r). For a rho~ which stems from a physical ground state we prove
that rho~ > 0. We also give exponentially decreasing lower bounds to rho~ in
the case when the eigenvalue is below the corresponding essential spectrum.
|