We present a $\approx 11.5$ year adaptive optics (AO) study of stellar
variability and search for eclipsing binaries in the central $\sim 0.4$ pc
($\sim 10''$) of the Milky Way nuclear star cluster. We measure the photometry
of 563 stars using the Keck II NIRC2 imager ($K'$-band, $\lambda_0 =
2.124~\mu\text{m}$). We achieve a photometric uncertainty floor of $\Delta
m_{K'} \sim 0.03$ ($\approx 3\%$), comparable to the highest precision achieved
in other AO studies. Approximately half of our sample ($50 \pm 2 \%$) shows
variability. $52 \pm 5\%$ of known early-type young stars and $43 \pm 4 \%$ of
known late-type giants are variable. These variability fractions are higher
than those of other young, massive star populations or late-type giants in
globular clusters, and can be largely explained by two factors. First, the
long time baseline of our experiment makes it sensitive to long-term intrinsic
stellar variability. Second, the proper motion of stars behind spatial
inhomogeneities
in the foreground extinction screen can lead to variability. We recover the two
known Galactic center eclipsing binary systems: IRS 16SW and S4-258 (E60). We
constrain the Galactic center eclipsing binary fraction of known early-type
stars to be at least $2.4 \pm 1.7\%$. We find no evidence of an eclipsing
binary among the young S-stars nor among the young stellar disk members. These
results are consistent with the local OB eclipsing binary fraction. We identify
a new periodic variable, S2-36, with a 39.43 day period. Further observations
are necessary to determine the nature of this source.
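As a quick sanity check, the quoted ±2% uncertainty on the overall variability fraction is consistent with a simple binomial standard error on 563 stars (an assumption about how the quoted uncertainty was derived, used here purely for illustration):

```python
import math

def fraction_error(f, n):
    # Binomial standard error on a fraction f measured from n objects
    # (an assumption about how the quoted uncertainties were derived).
    return math.sqrt(f * (1.0 - f) / n)

n_stars = 563
sigma = fraction_error(0.50, n_stars)
print(f"0.50 +/- {sigma:.3f}")  # prints 0.50 +/- 0.021
```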
|
A subset of a model of ${\sf PA}$ is called neutral if it does not change the
$\mathrm{dcl}$ relation. A model with undefinable neutral classes is called
neutrally expandable. We study the existence and non-existence of neutral sets
in various models of ${\sf PA}$. We show that cofinal extensions of prime
models are neutrally expandable, and $\omega_1$-like neutrally expandable
models exist, while no recursively saturated model is neutrally expandable. We
also show that neutrality is not a first-order property. In the last section,
we study a local version of neutral expandability.
|
In the last decade, neutrino astronomy has taken off with two major
breakthroughs, the first observation of high-energy astrophysical neutrinos in
2013 and the first evidence for gamma-rays and neutrinos from a single object
published in summer 2018. In this talk, we will review these important
milestones as well as the other noteworthy achievements reached by the
community. We will emphasize the important role of neutrino searches in the
multi-messenger era and describe the current efforts carried out in the large
neutrino telescope community. We will conclude with an outlook for the coming
decade.
|
Within entanglement theory there are criteria which certify that some quantum
states cannot be distilled into pure entanglement. An example is the positive
partial transposition criterion. Here we present, for the first time, an
analogous result for secret correlations. We introduce a computable criterion
which certifies that a probability distribution between two honest parties and
an eavesdropper cannot be (asymptotically) distilled into a secret key. The
existence of non-distillable correlations with positive secrecy cost, also
known as bound information, is an open question. This criterion may be the key
for finding bound information. However, if it turns out that this criterion
does not detect bound information, then a very interesting consequence
follows: any distribution with positive secrecy cost can increase the secrecy
content of another distribution. In other words, all correlations with positive
secrecy cost constitute a useful resource.
|
Present day quantum field theory (QFT) is founded on canonical quantization,
which has served quite well, but also has led to several issues. The free field
describing a free particle (with no interaction term) can suddenly become
nonrenormalizable the instant a suitable interaction term appears. For example,
using canonical quantization, $\varphi^4_4$ has been deemed a ``free'' theory
with no difference from a truly free field [1], [2], [3]. Using the same model,
affine quantization has led to a truly interacting theory [4]. This fact
alone suggests that the canonical and affine quantization procedures deserve
to be considered together, as a significant enlargement of QFT.
|
For a family of weight functions that include the general Jacobi weight
functions as special cases, an exact condition for the convergence of the Fourier
orthogonal series in the weighted $L^p$ space is given. The result is then used
to establish a Marcinkiewicz-Zygmund type inequality and to study weighted mean
convergence of various interpolating polynomials based on the zeros of the
corresponding orthogonal polynomials.
|
Complex potential transformations which add imaginary parts to chosen energy
levels are given and qualitatively explained. Unexpected shape similarity of
potential perturbations for real and imaginary E-shifts of bound states is
exhibited. The imaginary E-shifts in the continuous spectrum lead to a
surprising quasi-periodic field raking up initial propagating waves into
localized states. Remarkably, complex periodic potentials without lacunae are
constructed. The fission of quasi-bound states when neighbour complex
eigenvalues approach one another is demonstrated. E-shift algorithms represent
wide classes of exactly solvable quantum models for non-self-adjoint operators.
|
Background: Heavy-ion fusion reactions at energies near the Coulomb barrier
are influenced by couplings between the relative motion and nuclear intrinsic
degrees of freedom of the colliding nuclei. The time-dependent Hartree-Fock
(TDHF) theory, incorporating the couplings at the mean-field level, as well as
the coupled-channels (CC) method are standard approaches to describe low energy
nuclear reactions.
Purpose: To investigate the effect of couplings to inelastic and transfer
channels on the fusion cross sections for the reactions $^{40}$Ca+$^{58}$Ni and
$^{40}$Ca+$^{64}$Ni.
Methods: Fusion cross sections around and below the Coulomb barrier have been
obtained from coupled-channels (CC) calculations, using the bare
nucleus-nucleus potential calculated with the frozen Hartree-Fock method and
coupling parameters taken from known nuclear structure data. The fusion
thresholds and neutron transfer probabilities have been calculated with the
TDHF method.
Results: For $^{40}$Ca+$^{58}$Ni, the TDHF fusion threshold is in agreement
with the most probable barrier obtained in the CC calculations including the
couplings to the low-lying octupole $3_1^{-}$ state for $^{40}$Ca and to the
low-lying quadrupole $2_1^{+}$ state for $^{58}$Ni. This indicates that the
octupole and quadrupole states are the dominant excitations while neutron
transfer is shown to be weak. For $^{40}$Ca+$^{64}$Ni, the TDHF barrier is
lower than predicted by the CC calculations including the same inelastic
couplings as those for $^{40}$Ca+$^{58}$Ni. TDHF calculations show large
neutron transfer probabilities in $^{40}$Ca+$^{64}$Ni which could result in a
lowering of the fusion threshold.
Conclusions: Inelastic channels play an important role in $^{40}$Ca+$^{58}$Ni
and $^{40}$Ca+$^{64}$Ni reactions. The role of neutron transfer channels has
been highlighted in $^{40}$Ca+$^{64}$Ni.
|
We review colossal magnetoresistance in single phase manganites, as related
to the field sensitive spin charge interactions and phase separation; the
rectifying property and negative/positive magnetoresistance in
manganite/Nb:SrTiO3 pn junctions in relation to the special interface
electronic structure; magnetoelectric coupling in manganite/ferroelectric
structures that takes advantage of strain, carrier density, and magnetic field
sensitivity; tunneling magnetoresistance in tunnel junctions with dielectric,
ferroelectric, and organic semiconductor spacers using the fully spin polarized
nature of manganites; and the effect of particle size on magnetic properties in
manganite nanoparticles.
|
The recently discovered kagome superconductors AV3Sb5 exhibit tantalizing
high-pressure phase diagrams, in which a new dome-like superconducting phase
emerges under moderate pressure. However, its origin is as yet unknown. Here,
we carried out high-pressure electrical measurements up to 150 GPa, together
with high-pressure X-ray diffraction measurements and
first-principles calculations on CsV3Sb5. We find the new superconducting phase
to be rather robust and inherently linked to the interlayer Sb2-Sb2
interactions. The formation of Sb2-Sb2 bonds at high pressure tunes the system
from two-dimensional to three-dimensional and pushes the p_z orbital of Sb2
upward across the Fermi level, resulting in an enhanced density of states and
an increase of T_c. Our work demonstrates that the dimensional crossover at high
pressure can induce a topological phase transition and is related to the
anomalous high-pressure T_c evolution. Our findings should also apply to other
layered materials.
|
In e^+e^- annihilation at \sqrt{s}=10.6 GeV, the Belle Collaboration found
that J/psi mesons are predominantly produced in association with an extra
\bar{c}c pair. The possible mechanisms of J/psi production are discussed and
the probability of the associated production of a \bar{c}c pair is estimated.
The choice between these mechanisms can be made by measuring the J/psi
polarization. It is suggested that, in the case of heavy-ion collisions, one
may expect a remarkable transverse polarization of the produced J/psi if a
quark-gluon plasma is formed. The measurement of the asymmetry of the
e^+e^-(\mu^+\mu^-) angular distribution in the J/psi -> e^+e^-(\mu^+\mu^-)
decay is a useful tool for detecting quark-gluon plasma formation in heavy-ion
collisions.
|
We study the minimization of convex, variational integrals of linear growth
among all functions in the Sobolev space $W^{1,1}$ with prescribed boundary
values (or its equivalent formulation as a boundary value problem for a
degenerately elliptic Euler--Lagrange equation). Due to insufficient
compactness properties of these Dirichlet classes, the existence of solutions
does not follow in a standard way by the direct method in the calculus of
variations and in fact might fail, as is well known already for the
non-parametric minimal surface problem. Assuming radial structure, we establish
a necessary and sufficient condition on the integrand such that the Dirichlet
problem is in general solvable, in the sense that a Lipschitz solution exists
for any regular domain and all prescribed regular boundary values, via the
construction of appropriate barrier functions in the tradition of Serrin's
paper [19].
|
A criterion for effective irrelevancy of the spin-orbit coupling in the
heavy-fermion superconductivity is discussed on the basis of the impurity
Anderson model with two sets of Kramers doublets. Using Wilson's numerical
renormalization-group method, we demonstrate the formation of the quasiparticle
as well as the renormalization of the rotational symmetry-breaking interaction
in the lower Kramers doublet (quasispin) space. A comparison with the quasispin
conserving interaction exhibits the effective irrelevancy of the
symmetry-breaking interaction for the splitting of two doublets Delta larger
than the characteristic energy of the local spin fluctuation T_K. The formula
for the ratio of two interactions is also determined.
|
Some general features of the Bethe-Weizsacker mass formula recently extended
to light nuclei have been explored. Though this formula improves fits to the
properties of light nuclei and does seem to work well in delineating the
positions of all old and new magic numbers found in that region, it is not
well tuned for predicting finer details. The mass predictions have also been
found to be less accurate than those of macroscopic-microscopic
calculations. It is concluded that such semi-empirical mass formulae cannot be
a substitute for more fundamental mass formulae based upon
the basic nucleon-nucleon effective interaction.
|
In this paper, we describe and establish iteration-complexity of two
accelerated composite gradient (ACG) variants to solve a smooth nonconvex
composite optimization problem whose objective function is the sum of a
nonconvex differentiable function $ f $ with a Lipschitz continuous gradient
and a simple nonsmooth closed convex function $ h $. When $f$ is convex, the
first ACG variant reduces to the well-known FISTA for a specific choice of the
input, and hence the first one can be viewed as a natural extension of the
latter one to the nonconvex setting. The first variant requires an input pair
$(M,m)$ such that $f$ is $m$-weakly convex, $\nabla f$ is $M$-Lipschitz
continuous, and $m \le M$ (possibly $m<M$), which is usually hard to obtain or
poorly estimated. The second variant on the other hand can start from an
arbitrary input pair $(M,m)$ of positive scalars and its complexity is shown to
be not worse, and better in some cases, than that of the first variant for a
large range of the input pairs. Finally, numerical results are provided to
illustrate the efficiency of the two ACG variants.
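Since the first ACG variant reduces to FISTA in the convex case, a minimal sketch of standard FISTA may help fix ideas. This is not the authors' ACG method; the LASSO instance, the problem sizes, and all parameters below are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(grad_f, prox_h, x0, lip, n_iter=200):
    """Standard FISTA for min_x f(x) + h(x): f smooth convex with
    lip-Lipschitz gradient, h simple convex given through its prox."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_h(y - grad_f(y) / lip, 1.0 / lip)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Illustrative LASSO instance: f(x) = 0.5*||Ax - b||^2, h(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f
x_hat = fista(lambda x: A.T @ (A @ x - b),
              lambda z, s: soft_threshold(z, lam * s),
              np.zeros(5), L)
print(np.round(x_hat, 2))  # close to x_true, with slight l1 shrinkage
```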
|
We investigate a fundamental question regarding a benchmark class of shapes
in one of the simplest, yet most widely utilized abstract models of algorithmic
tile self-assembly. Specifically, we study the directed tile complexity of a $k
\times N$ thin rectangle in Winfree's abstract Tile Assembly Model, assuming
that cooperative binding cannot be enforced (temperature-1 self-assembly) and
that tiles are allowed to be placed at most one step into the third dimension
(just-barely 3D). While the directed tile complexities of a square and a
scaled-up version of any algorithmically specified shape at temperature 1 in
just-barely 3D are both asymptotically the same as they are (respectively) at
temperature 2 in 2D, the bounds on the directed tile complexity of a thin
rectangle at temperature 2 in 2D are not known to hold at temperature 1 in
just-barely 3D. Motivated by this discrepancy, we establish new lower and upper
bounds on the directed tile complexity of a thin rectangle at temperature 1 in
just-barely 3D. We develop a new, more powerful type of Window Movie Lemma that
lets us upper bound the number of "sufficiently similar" ways to assign glues
to a set of fixed locations. Consequently, our lower bound,
$\Omega\left(N^{\frac{1}{k}}\right)$, is an asymptotic improvement over the
previous best lower bound and is more aesthetically pleasing since it
eliminates the $k$ that used to divide $N^{\frac{1}{k}}$. The proof of our
upper bound is based on a just-barely 3D, temperature-1 counter, organized
according to "digit regions", which affords it roughly fifty percent more
digits for the same target rectangle compared to the previous best counter.
This increase in digit density results in an upper bound of
$O\left(N^{\frac{1}{\left\lfloor\frac{k}{2}\right\rfloor}}+\log N\right)$, that
is an asymptotic improvement over the previous best upper bound and roughly the
square of our lower bound.
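Ignoring constants, the two asymptotic bounds can be compared numerically; the sketch below (illustrative only, with arbitrary sample values of N and k) also verifies that for even k the leading term of the upper bound is exactly the square of the lower bound:

```python
import math

def lower_bound(n, k):
    # Omega(N**(1/k)): asymptotic lower bound, constants ignored.
    return n ** (1.0 / k)

def upper_bound(n, k):
    # O(N**(1/floor(k/2)) + log N): asymptotic upper bound, constants ignored.
    return n ** (1.0 / (k // 2)) + math.log2(n)

n = 10 ** 6
for k in (4, 6, 8):
    print(k, round(lower_bound(n, k), 1), round(upper_bound(n, k), 1))
# For even k the leading term of the upper bound, N**(2/k), is exactly
# the square of the lower bound N**(1/k).
```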
|
Outside-knowledge visual question answering (OK-VQA) requires the agent to
comprehend the image, make use of relevant knowledge from the entire web, and
digest all the information to answer the question. Most previous works address
the problem by first fusing the image and question in the multi-modal space,
which is inflexible for further fusion with a vast amount of external
knowledge. In this paper, we call for a paradigm shift for the OK-VQA task,
which transforms the image into plain text, so that we can enable knowledge
passage retrieval, and generative question-answering in the natural language
space. This paradigm takes advantage of the sheer volume of gigantic knowledge
bases and the richness of pre-trained language models. A
Transform-Retrieve-Generate (TRiG) framework is proposed, which can be used
in a plug-and-play fashion with alternative image-to-text models and textual
knowledge
bases. Experimental results show that our TRiG framework outperforms all
state-of-the-art supervised methods by at least 11.1% absolute margin.
|
The RangL project hosted by The Alan Turing Institute aims to encourage the
wider uptake of reinforcement learning by supporting competitions relating to
real-world dynamic decision problems. This article describes the reusable code
repository developed by the RangL team and deployed for the 2022 Pathways to
Net Zero Challenge, supported by the UK Net Zero Technology Centre. The winning
solutions to this particular Challenge seek to optimize the UK's energy
transition policy to net zero carbon emissions by 2050. The RangL repository
includes an OpenAI Gym reinforcement learning environment and code that
supports both submission to, and evaluation in, a remote instance of the open
source EvalAI platform as well as all winning learning agent strategies. The
repository is an illustrative example of RangL's capability to provide a
reusable structure for future challenges.
|
We study ideals generated by $2$--minors of generic Hankel matrices.
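The paper works with generic (symbolic) entries; as a purely illustrative sketch, one can enumerate the 2x2 minors of a numeric Hankel matrix and check, for instance, that a rank-one Hankel matrix (built from a geometric sequence) has all 2-minors equal to zero:

```python
from itertools import combinations
import random

def hankel(seq, rows, cols):
    """Hankel matrix H with H[i][j] = seq[i + j] (constant anti-diagonals)."""
    return [[seq[i + j] for j in range(cols)] for i in range(rows)]

def two_minors(M):
    """All 2x2 minors M[i1][j1]*M[i2][j2] - M[i1][j2]*M[i2][j1]."""
    rows, cols = len(M), len(M[0])
    return [M[i1][j1] * M[i2][j2] - M[i1][j2] * M[i2][j1]
            for i1, i2 in combinations(range(rows), 2)
            for j1, j2 in combinations(range(cols), 2)]

# A random numeric (non-generic) 3x3 example: C(3,2)^2 = 9 minors.
random.seed(1)
a = [random.randint(1, 9) for _ in range(5)]
H = hankel(a, 3, 3)
print(two_minors(H))

# Rank-one check: for the geometric sequence a_i = 2**i every 2-minor vanishes.
geo = hankel([1, 2, 4, 8, 16], 3, 3)
print(all(m == 0 for m in two_minors(geo)))  # True
```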
|
We study the lift-and-project procedures of Lov\'asz and Schrijver for 0-1
integer programming problems. We prove that the procedure using the positive
semidefiniteness constraint is not better than the one without it, in the worst
case. Various examples are considered. We also provide geometric conditions
characterizing when the positive semidefiniteness constraint does not help.
|
This article introduces a new class of socio-technical systems, interspecies
information systems (IIS), by describing several examples of these systems
emerging through the use of commercially available data-driven animal-centered
technology. When animal-centered technology, such as pet wearables, cow health
monitoring, or even wildlife drones, captures animal data and informs humans of
actions to take towards animals, interspecies information systems emerge. I
discuss the importance of understanding them as information systems rather than
isolated technology or technology-mediated interactions, and propose a
conceptual model capturing the key components and information flow of a general
interspecies information system. I conclude by proposing multiple practical
challenges that are faced in the successful design, engineering and use of any
interspecies information systems where animal data informs human actions.
|
We present results for higher-order corrections to exclusive
$\mathrm{J}/\psi$ production. This includes the first relativistic correction
of order $v^2$ in quark velocity, and next-to-leading order corrections in
$\alpha_s$ for longitudinally polarized production. The relativistic
corrections are found to be important for a good description of the HERA data,
especially at small values of the photon virtuality. The next-to-leading order
results for longitudinal production are evaluated numerically. We also
demonstrate how the vector meson production provides complementary information
to the structure functions for extracting the initial condition for the
small-$x$ evolution of the dipole-proton scattering amplitude.
|
We report on the development of an induction based low temperature high
frequency ac susceptometer capable of measuring at frequencies up to 3.5 MHz
and at temperatures between 2 K and 300 K. Careful balancing of the detection
coils and calibration have allowed a sample magnetic moment resolution of
$5\times10^{-10}\,\mathrm{A\,m^2}$ at 1 MHz. We will discuss the design and
characterization of the susceptometer, and explain the calibration process. We
also include some example measurements on the spin ice material CdEr$_2$S$_4$
and iron oxide based nanoparticles to illustrate functionality.
|
This paper describes an automated, formal and rigorous analysis of the Ad hoc
On-Demand Distance Vector (AODV) routing protocol, a popular protocol used in
wireless mesh networks.
We give a brief overview of a model of AODV implemented in the UPPAAL model
checker. It is derived from a process-algebraic model which reflects precisely
the intention of AODV and accurately captures the protocol specification.
Furthermore, we describe experiments carried out to explore AODV's behaviour in
all network topologies up to 5 nodes. We were able to automatically locate
problematic and undesirable behaviours. This is in particular useful to
discover protocol limitations and to develop improved variants. This use of
model checking as a diagnostic tool complements other formal-methods-based
protocol modelling and verification techniques, such as process algebra.
|
A social group is represented by a graph, where each pair of nodes is
connected by two oppositely directed links. At the beginning, a given amount
$p(i)$ of resources is assigned randomly to each node $i$. Also, each link
$r(i,j)$ is initially represented by a random positive value, which means the
percentage of resources of node $i$ which is offered to node $j$. Initially
then, the graph is fully connected, i.e. all non-diagonal matrix elements
$r(i,j)$ are different from zero. During the simulation, the amounts of
resources $p(i)$ change according to the balance equation. Also, nodes
reorganise their activity with time, giving more resources to those
which give them more. This is the rule for varying the coefficients $r(i,j)$.
The result is that after some transient time, only some pairs $(m,n)$ of nodes
survive with non-zero $p(m)$ and $p(n)$, each pair with symmetric and positive
$r(m,n)=r(n,m)$. Other coefficients $r(m,i\ne n)$ vanish. Unpaired nodes remain
with no resources, i.e. their $p(i)=0$, and they cease to be active, as they
have nothing to offer. The percentage of survivors (i.e. those with $p(i)$
positive) increases with the velocity of varying the numbers $r(i,j)$, and it
slightly decreases with the size of the group. The picture and the results can
be interpreted as a description of a social algorithm leading to marriages.
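A toy implementation of the dynamics sketched above may make the setup concrete. The abstract does not state the balance equation or the update rule explicitly, so the reinvestment and reinforcement rules below are assumptions made purely for illustration:

```python
import random

def simulate(n=10, steps=1000, rate=0.1, seed=0):
    """Toy sketch of the resource-exchange dynamics described above."""
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n)]                      # resources
    r = [[0.0 if i == j else rng.random() for j in range(n)]  # offer fractions
         for i in range(n)]
    for i in range(n):                                        # normalize rows
        s = sum(r[i])
        r[i] = [x / s for x in r[i]]
    for _ in range(steps):
        # Balance step (assumed): each node's new resources are what it receives.
        p = [sum(p[i] * r[i][j] for i in range(n)) for j in range(n)]
        # Reinforcement (assumed): offer more to those who give you more.
        for i in range(n):
            w = [r[i][j] + rate * p[j] * r[j][i] for j in range(n)]
            w[i] = 0.0
            s = sum(w)
            r[i] = [x / s for x in w]
    return p, r

p, r = simulate()
print([round(x, 3) for x in p])  # final resource distribution
```

Since each row of r is kept normalized, the total amount of resources is conserved by the balance step in this toy version.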
|
Using both ground-based transit photometry and high-precision radial velocity
(RV) spectroscopy, we confirm the planetary nature of TOI-3785 b. This
transiting Neptune orbits an M2-Dwarf star with a period of ~4.67 days, a
planetary radius of 5.14 +/- 0.16 Earth Radii, a mass of 14.95 +4.10, -3.92
Earth Masses, and a density of 0.61 +0.18, -0.17 g/cm^3. TOI-3785 b belongs to
a rare population of Neptunes (4 Earth Radii < Rp < 7 Earth Radii) orbiting
cooler, smaller M-dwarf host stars, of which only ~10 have been confirmed. By
increasing the number of confirmed planets, TOI-3785 b offers an opportunity to
compare similar planets across varying planetary and stellar parameter spaces.
Moreover, with a high transmission spectroscopy metric (TSM) of ~150 combined
with a relatively cool equilibrium temperature of 582 +/- 16 K and an inactive
host star, TOI-3785 b is one of the more promising low-density M-dwarf Neptune
targets for atmospheric follow-up. Future investigation into atmospheric mass
loss rates of TOI-3785 b may yield new insights into the atmospheric evolution
of these low-mass gas planets around M-dwarfs.
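The quoted bulk density can be checked directly against the quoted central values of mass and radius via rho = M / (4/3 pi R^3) (a consistency check of the abstract's numbers, not a new result):

```python
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_earth, radius_earth):
    # Bulk density in g/cm^3 from mass and radius in Earth units.
    m = mass_earth * M_EARTH
    r = radius_earth * R_EARTH
    return m / (4.0 / 3.0 * math.pi * r ** 3) / 1000.0

rho = bulk_density(14.95, 5.14)
print(f"{rho:.2f} g/cm^3")  # 0.61 g/cm^3, matching the quoted value
```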
|
We suggest that compactifications on Anti-de-Sitter (AdS) spaces of type IIA,
IIB, heterotic strings and eleven-dimensional vacua of M-theory are related by
a combination of $T$ and strong/weak dualities. The Maldacena conjecture then
relates all these vacua to a conformal theory on the boundaries. Furthermore,
acting with discrete groups on part of the internal spaces of these theories
will lead to further dual theories with less or no supersymmetry.
|
We consider a discrete-time, generically incomplete market model and a
behavioural investor with power-like utility and distortion functions. The
existence of optimal strategies in this setting has been shown in a previous
paper under certain conditions on the parameters of these power functions.
In the present paper we prove the existence of optimal strategies under a
different set of conditions on the parameters, identical to the ones which were
shown to be necessary and sufficient in the Black-Scholes model.
Although there exists no natural dual problem for optimisation under
behavioural criteria (due to the lack of concavity), we will rely on techniques
based on the usual duality between attainable contingent claims and equivalent
martingale measures.
|
We show that the sum over planar trees formula of Kontsevich and Soibelman
transfers C-infinity structures along a contraction. Applying this result to a
cosimplicial commutative algebra A^* over a field of characteristic zero, we
exhibit a canonical C-infinity structure on Tot(A^*), which is unital if
A^* is; in particular, we obtain a canonical C-infinity structure on the
cochain complex of a simplicial set.
|
The topological Tverberg theorem states that for any prime power q and
continuous map from a (d+1)(q-1)-simplex to R^d, there are q disjoint faces
F_i of the simplex whose images intersect. It is possible to put conditions on
which pairs of vertices of the simplex are allowed to be in the same face
F_i. A graph with the same vertex set as the simplex, and with two vertices
adjacent if they should not be in the same F_i, is called a Tverberg graph if
the topological Tverberg theorem still holds.
These graphs have been studied by Hell, Schoneborn and Ziegler, and it is
known that disjoint unions of small paths, cycles, and complete graphs are
Tverberg graphs. We find many new examples by establishing a local criterion
for a graph to be Tverberg. An easily stated corollary of our main theorem is
that if the maximal degree of a graph is D, and D(D+1)<q, then it is a Tverberg
graph.
We state the affine versions of our results and also describe how they can be
used to enumerate Tverberg partitions.
|
Graphene nanowiggles (GNW) are graphene-based nanostructures obtained by
making alternated regular cuts in pristine graphene nanoribbons. GNW were
recently synthesized and it was demonstrated that they exhibit tunable
electronic and magnetic properties by just varying their shape. Here, we have
investigated the mechanical properties and fracture patterns of a large number
of GNW of different shapes and sizes using fully atomistic reactive molecular
dynamics simulations. Our results show that the GNW mechanical properties are
strongly dependent on their shape and size; as a general trend, narrow sheets
have larger ultimate strength and Young's modulus than wide ones. The estimated
Young's modulus values were found to be in a range of ~ 100-1000 GPa and the
ultimate strength in a range of ~ 20-110 GPa, depending on GNW shape. Also,
super-ductile behaviour under strain was observed for some structures.
|
Most data-intensive applications access only a few fields of the
objects they are operating on. Since NVM provides fast, byte-addressable access
to durable memory, it is possible to access various fields of an object stored
in NVM directly without incurring any serialization and deserialization cost.
This paper proposes a novel tiered object storage model that modifies a data
structure such that only a chosen subset of fields of the data structure are
stored in NVM, while the remaining fields are stored in a cheaper (and a
traditional) storage layer such as HDDs/SSDs. We introduce a novel
linear-programming based optimization framework for deciding the field
placement. Our proof of concept demonstrates that a tiered object storage model
improves the execution time of standard operations by up to 50\% by avoiding
the cost of serialization/deserialization and by reducing the memory footprint
of operations.
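The paper formulates field placement as a linear program; as a simplified, hypothetical stand-in, placement under an NVM capacity budget can be sketched as a 0/1 knapsack over per-field sizes and access frequencies. All field names and numbers below are made up for illustration:

```python
def place_fields(fields, nvm_budget):
    """Choose which fields to keep in NVM, maximizing total access
    frequency subject to a capacity budget (bytes). A 0/1 knapsack,
    used here as a simplified stand-in for the paper's LP formulation.
    fields: list of (name, size_bytes, access_frequency)."""
    n = len(fields)
    best = [[0] * (nvm_budget + 1) for _ in range(n + 1)]
    for i, (_, size, freq) in enumerate(fields, 1):
        for cap in range(nvm_budget + 1):
            best[i][cap] = best[i - 1][cap]
            if size <= cap:
                best[i][cap] = max(best[i][cap], best[i - 1][cap - size] + freq)
    # Backtrack to recover the chosen (NVM-resident) fields.
    chosen, cap = [], nvm_budget
    for i in range(n, 0, -1):
        if best[i][cap] != best[i - 1][cap]:
            name, size, _ = fields[i - 1]
            chosen.append(name)
            cap -= size
    return set(chosen)

fields = [("id", 8, 90), ("score", 8, 70), ("blob", 64, 10), ("tags", 24, 30)]
print(place_fields(fields, nvm_budget=40))  # {'id', 'score', 'tags'}
```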
|
We analytically study diffusive particle acceleration in relativistic,
collisionless shocks. We find a simple relation between the spectral index s
and the anisotropy of the momentum distribution along the shock front. Based on
this relation, we obtain s = (3beta_u - 2beta_u*beta_d^2 + beta_d^3) / (beta_u
- beta_d) for isotropic diffusion, where beta_u (beta_d) is the upstream
(downstream) fluid velocity normalized to the speed of light. This result is in
agreement with previous numerical determinations of s for all (beta_u,beta_d),
and yields s=38/9 in the ultra-relativistic limit. The spectrum-anisotropy
connection is useful for testing numerical studies and for constraining
non-isotropic diffusion results. It implies that the spectrum is highly
sensitive to the form of the diffusion function for particles travelling along
the shock front.
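The closed-form relation above is easy to evaluate directly; the snippet below also checks the quoted ultra-relativistic value s = 38/9, assuming the standard limiting velocities beta_u -> 1 and beta_d -> 1/3:

```python
def spectral_index(beta_u, beta_d):
    # s = (3*bu - 2*bu*bd**2 + bd**3) / (bu - bd), isotropic diffusion.
    return (3 * beta_u - 2 * beta_u * beta_d ** 2 + beta_d ** 3) / (beta_u - beta_d)

# Ultra-relativistic limit: beta_u -> 1, beta_d -> 1/3 gives s = 38/9.
s = spectral_index(1.0, 1.0 / 3.0)
print(s)  # 4.222... = 38/9
```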
|
We investigate all static spherically symmetric solutions in the context of
general relativity surrounded by a minimally-coupled quintessence field, using
dynamical system analysis. Applying the 1+1+2 formalism and introducing
suitable normalized variables involving the Gaussian curvature, we were able to
reformulate the field equations as first order differential equations. In the
case of a massless canonical scalar field we recovered all known black hole
results, such as the Fisher solution, and we found that apart from the
Schwarzschild solution all other solutions are naked singularities.
Additionally, we identified the symmetric phase space which corresponds to the
white hole part of the solution and in the case of a phantom field, we were
able to extract the conditions for the existence of wormholes and define all
possible classes of solutions, such as cold black holes, singular spacetimes
and wormholes like the Ellis wormhole. For an exponential potential, we
found that the black hole solution which is asymptotically flat is unique and
it is the Schwarzschild spacetime, while all other solutions are naked
singularities. Furthermore, we found solutions connecting to a white hole
through a maximum radius, and not a minimum radius (throat) such as wormhole
solutions, therefore violating the flare-out condition. Finally, we have found
a necessary and sufficient condition on the form of the potential to have an
asymptotically AdS spacetime along with a necessary condition for the existence
of asymptotically flat black holes.
|
We present spectroscopic observations of the massive early type system
V745\,Cas, embedded in a multiple star system. The brightest star of the system
is the eclipsing binary V745\,Cas with an orbital period of 1.41 days. The
radial velocities of both components and light curves obtained by $INTEGRAL$
and $Hipparcos$ missions were analysed. The components of V745\,Cas are shown
to be a B0\,V primary with a mass M$_p$=18.31$\pm$0.51 M$_{\odot}$ and radius
R$_p$=6.94$\pm$0.07 R$_{\odot}$ and a B(1-2)\,V secondary with a mass
M$_s$=10.47$\pm$0.28 M$_{\odot}$ and radius R$_s$=5.35$\pm$0.05 R$_{\odot}$.
Our analysis shows that both components fill their corresponding $Roche$ lobes,
indicating a double-contact configuration. Using the UBVJHK magnitudes and
interstellar absorption we estimated the mean distance to the system as
1700$\pm$50\,pc. The locations of the component stars in the mass-luminosity,
mass-radius, effective temperature-mass and surface gravity-mass planes are in
agreement with those of main-sequence massive stars. We also obtained $UBV$
photometry of the three visual companions and we estimate that all are B-type
stars based upon their de-reddened colours. We suspect that this multiple system
is probably a member of the Cas OB4 association in the $Perseus$ arm of the
$Galaxy$.
|
We simulate the spin torque-induced reversal of the magnetization in thin
disks with perpendicular anisotropy at zero temperature. Disks typically
smaller than 20 nm in diameter exhibit coherent reversal. A domain wall is
involved in larger disks. We derive the critical diameter of this transition.
Using a proper definition of the critical voltage, a macrospin model can
account perfectly for the reversal dynamics when the reversal is coherent. The
same critical voltage appears to match with the micromagnetics switching
voltage regardless of the switching path.
|
The anomaly for the fermion number current is calculated on the lattice in a
simple prototype model with an even number of fermion doublets.
|
We study the effects of external offset charges on the phase diagram of
Josephson junction arrays. Using the path integral approach, we provide a
pedagogical derivation of the equation for the phase boundary line between the
insulating and the superconducting phase within the mean-field theory
approximation. For a uniform offset charge q=e the superconducting phase
increases with respect to q=0 and a characteristic lobe structure appears in
the phase diagram when the critical line is plotted as a function of q at fixed
temperature. We review our analysis of the physically relevant situation where
a Josephson network feels the effect of random offset charges. We observe that
the Mott-insulating lobe structure of the phase diagram disappears for large
variance (\sigma > e) of the offset charges probability distribution; with
nearest-neighbor interactions, the insulating lobe around q=e is destroyed even
for small values of \sigma. Finally, we study the case of random
self-capacitances: here we observe that, until the variance of the distribution
reaches a critical value, the superconducting phase increases in comparison to
the situation in which all self-capacitances are equal.
|
We report the first successful extraction of accumulated ultracold neutrons
(UCN) from a converter of superfluid helium, in which they were produced by
downscattering neutrons of a cold beam from the Munich research reactor.
Windowless UCN extraction is performed in the vertical direction through a
mechanical cold valve. This prototype of a versatile UCN source comprises a
novel cryostat designed to keep the source portable and to allow for rapid
cooldown. We measured time constants for UCN storage and extraction into a
detector at room temperature, with the converter held at various temperatures
between 0.7 and 1.3 K. The UCN production rate inferred from the count rate of
extracted UCN is close to the theoretical expectation.
|
We investigate relations between elliptic multiple zeta values and describe a
method to derive the number of indecomposable elements of given weight and
length. Our method is based on representing elliptic multiple zeta values as
iterated integrals over Eisenstein series and exploiting the connection with a
special derivation algebra. Its commutator relations give rise to constraints
on the iterated integrals over Eisenstein series relevant for elliptic multiple
zeta values and thereby allow us to count the indecomposable representatives.
Conversely, the above connection suggests apparently new relations in the
derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations
for elliptic multiple zeta values over a wide range of weights and lengths.
|
We investigate how a Higgs mechanism could be responsible for the emergence
of gravity in extensions of Einstein theory. In this scenario, at high
energies, symmetry restoration could "turn off" gravity, with dramatic
implications for cosmology and quantum gravity. The sense in which gravity is
muted depends on the details of the implementation. In the most extreme case
gravity's dynamical degrees of freedom would only be unleashed after the Higgs
field acquires a non-trivial vacuum expectation value, with gravity reduced to
a topological field theory in the symmetric phase. We might also identify the
Higgs and the Brans-Dicke fields in such a way that in the unbroken phase
Newton's constant vanishes, decoupling matter and gravity. We discuss the broad
implications of these scenarios.
|
We associate an integer to any simple polytope and we study its properties.
|
The e+e- --> pi+pi-pi0pi0 reaction cross section as a function of the
incident energy is calculated using a model that is an extension of our
recently published model of the e+e- annihilation into four charged pions. The
latter considered the intermediate states with the pi, rho, and a_1 mesons and
fixed the mixing angle of the a_1-rho-pi Lagrangian and other parameters by
fitting the cross section data. Here we supplement the original intermediate
states with those containing omega(782) and h_1(1170), but keep unchanged the
values of those parameters that enter both charged and mixed channel
calculations. The inclusion of omega is vital for obtaining a good fit to the
cross section data, while the intermediate states with h_1 further improve it.
Finally, we merge our models of the e+e- --> pi+pi-pi0pi0 and e+e- -->
pi+pi-pi+pi- reactions and obtain a simultaneous good fit.
|
According to the standard paradigm, the strong and compact luminosity of
active galactic nuclei (AGN) is due to multi-temperature black body emission
originating from an accretion disk formed around a supermassive black hole.
This central engine is thought to be surrounded by a dusty region along the
equatorial plane and by ionized winds along the poles. The innermost regions
cannot yet be resolved in either the optical or the infrared, and it is
fair to say that we still lack a satisfactory understanding of the physical
processes, geometry and composition of the central (sub-parsec) components of
AGN. Like spectral or polarimetric observations, the reverberation data needs
to be modeled in order to infer constraints on the AGN geometry (such as the
inner radius or the half-opening angle of the dusty torus). In this research
note, we present preliminary modeling results using a time-dependent Monte
Carlo method to solve the radiative transfer in a simplified AGN set up. We
investigate different model configurations using both polarization and time
lags and find that the time-lag response depends strongly on the geometry. For
all models there is a clear distinction between edge-on and face-on viewing
angles for fluxes and time lags, the latter showing a higher
wavelength-dependence than the former. Time lags, polarization and fluxes point
toward a clear dichotomy between the different inclinations of AGN, a method
that could help us to determine the true orientation of the nucleus in Seyfert
galaxies.
|
Recent attoclock experiments using the attosecond angular streaking technique
have enabled the measurement of the tunneling time delay during laser-induced
strong field ionization. Theoretically, the tunneling time delay is commonly
modelled by the Wigner time delay, defined as the derivative of the phase of
the electron wave function with respect to energy. Here, we present an
alternative method for the calculation of the Wigner time delay by using the
fixed energy propagator. The developed formalism is applied to the
nonrelativistic as well as to the relativistic regime of the tunnel-ionization
process from a zero-range potential, where in the latter regime the propagator
can be given by means of the proper-time method.
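As a small numerical illustration of the Wigner time delay itself (not of the propagator formalism developed in the paper), one can differentiate the transmission phase of a 1D delta potential, the simplest zero-range model, with respect to energy; the units $\hbar = m = 1$ and the coupling value used below are assumptions for illustration:

```python
import cmath
import math

def transmission_phase(k, lam):
    # 1D delta potential V(x) = lam * delta(x) in units hbar = m = 1:
    # matching conditions give the transmission amplitude t(k) = ik / (ik - lam)
    return cmath.phase(1j * k / (1j * k - lam))

def wigner_delay(energy, lam, d_e=1e-6):
    # Wigner time delay: derivative of the transmission phase with respect to
    # energy, evaluated by a central finite difference (E = k^2 / 2)
    kp = math.sqrt(2 * (energy + d_e))
    km = math.sqrt(2 * (energy - d_e))
    return (transmission_phase(kp, lam) - transmission_phase(km, lam)) / (2 * d_e)
```

In these units the analytic result for the delta potential is $\tau_W = \lambda / \big(k(k^2 + \lambda^2)\big)$, which the finite difference reproduces.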
|
In this paper, we give the structure of free n-Lie algebras. Next, we
introduce basic commutators in n-Lie algebras and generalize the Witt formula
to calculate the number of the basic commutators. Also, we prove that the set
of all of the basic commutators of weight w and length n+(w-2)(n-1) is a basis
for Fw, where Fw is the wth term of the lower central series in the free n-Lie
algebra F.
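For orientation, the classical Witt formula, i.e. the n = 2 (ordinary Lie algebra) case that the paper generalizes, counts basic commutators of weight w on d free generators by Möbius inversion; a minimal sketch:

```python
def mobius(n):
    # Moebius function via trial-division factorization
    if n == 1:
        return 1
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def witt(d, w):
    # classical Witt formula: (1/w) * sum over divisors k of w of mu(k) * d^(w/k)
    return sum(mobius(k) * d ** (w // k) for k in range(1, w + 1) if w % k == 0) // w
```

For d = 2 generators this gives 2, 1, 2, 3, 6, ... basic commutators of weights 1 through 5.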
|
We perform statistical analyses to study the infall of galaxies onto groups
and clusters in the nearby Universe. The study is based on the UZC and SSRS2
group catalogs and peculiar velocity samples. We find a clear signature of
infall of galaxies onto groups over a wide range of scales 5 h^{-1} Mpc<r<30
h^{-1} Mpc, with an infall amplitude on the order of a few hundred kilometers
per second. We obtain a significant increase in the infall amplitude with group
virial mass (M_{V}) and luminosity of group member galaxies (L_{g}). Groups
with M_{V}<10^{13} M_{\odot} show infall velocities V_{infall} \simeq 150 km
s^{-1} whereas for M_{V}>10^{13} M_{\odot} a larger infall is observed,
V_{infall} \simeq 200 km s^{-1}. Similarly, we find that galaxies surrounding
groups with L_{g}<10^{15} L_{\odot} have V_{infall} \simeq 100 km s^{-1},
whereas for L_{g}>10^{15} L_{\odot} groups, the amplitude of the galaxy infall
can be as large as V_{infall} \simeq 250 km s^{-1}. The observational results
are compared with the results obtained from mock group and galaxy samples
constructed from numerical simulations, which include galaxy formation through
semianalytical models. We obtain a general agreement between the results from
the mock catalogs and the observations. The infall of galaxies onto groups is
suitably reproduced in the simulations and, as in the observations, larger
virial mass and luminosity groups exhibit the largest galaxy infall amplitudes.
We derive estimates of the integrated mass overdensities associated with groups
by applying linear theory to the infall velocities after correcting for the
effects of distance uncertainties obtained using the mock catalogs. The
resulting overdensities are consistent with a power law with \delta \sim 1 at r
\sim 10 h^{-1}Mpc.
|
The fusion of multispectral (MS) and panchromatic images is commonly referred
to as pan-sharpening. Most of the available deep learning-based pan-sharpening
methods sharpen the multispectral images through a one-step scheme, which
strongly depends on the reconstruction ability of the network. However, remote
sensing images often exhibit large variations; as a result, these one-step
methods are vulnerable to error accumulation and thus incapable of preserving
spatial details as well as the spectral information. In this paper, we propose a novel
two-step model for pan-sharpening that sharpens the MS image through the
progressive compensation of the spatial and spectral information. Firstly, a
deep multiscale guided generative adversarial network is used to preliminarily
enhance the spatial resolution of the MS image. Starting from the pre-sharpened
MS image in the coarse domain, our approach then progressively refines the
spatial and spectral residuals over a couple of generative adversarial networks
(GANs) that have reverse architectures. The whole model is composed of triple
GANs, and based on the specific architecture, a joint compensation loss
function is designed to enable the triple GANs to be trained simultaneously.
Moreover, the spatial-spectral residual compensation structure proposed in this
paper can be extended to other pan-sharpening methods to further enhance their
fusion results. Extensive experiments are performed on different datasets and
the results demonstrate the effectiveness and efficiency of our proposed
method.
|
In the first half of the paper, we study in detail NS-branes, including the
NS5-brane, the Kaluza-Klein monopole and the exotic $5_2^2$- or Q-brane,
together with Bianchi identities for NSNS (non)-geometric fluxes.
Four-dimensional Bianchi identities are generalized to ten dimensions with
non-constant fluxes, and get corrected by a source term in the presence of an
NS-brane. The latter allows them to reduce to the expected Poisson equation.
Without sources, our Bianchi identities are also recovered by squaring a
nilpotent $Spin(D,D) \times \mathbb{R}^+$ Dirac operator. Generalized Geometry
allows us in addition to express the equations of motion explicitly in terms of
fluxes. In the second half, we perform a general analysis of ten-dimensional
geometric backgrounds with non-geometric fluxes, in the context of
$\beta$-supergravity. We determine a well-defined class of such vacua, that are
non-geometric in standard supergravity: they involve $\beta$-transforms, a
manifest symmetry of $\beta$-supergravity with isometries. We show as well that
these vacua belong to a geometric T-duality orbit.
|
This is a continuation of our recent paper. We continue studying properties
of the triangular projection ${\mathscr P}_n$ on the space of $n\times n$
matrices. We establish sharp estimates for the $p$-norms of ${\mathscr P}_n$ as
an operator on the Schatten--von Neumann class $\boldsymbol{S}_p$ for $0<p<1$.
Our estimates are uniform in $n$ and $p$ as soon as $p$ is separated away from
0. The main result of the paper shows that for $p\in(0,1)$, the $p$-norms of
${\mathscr P}_n$ on $\boldsymbol{S}_p$ behave as $n\to\infty$ and $p\to1$ as
$n^{1/p-1}\min\big\{(1-p)^{-1},\log n\big\}$.
|
We analyze a fiber-optic gyroscope design enhanced by the injection of
quantum-optical squeezed vacuum into a fiber-based Sagnac interferometer. In
the presence of fiber loss, we compute the maximum attainable enhancement over
a classical, laser-driven fiber-optic gyroscope in terms of the angular
velocity estimate variance from a homodyne measurement. We find a constant
enhancement factor that depends on the degree of squeezing introduced into the
system but has diminishing returns beyond $10$--$15$ dB of squeezing. Under a
realistic constraint of fixed total fiber length, we show that segmenting the
available fiber into multiple Sagnac interferometers fed with a
multi-mode-entangled squeezed vacuum, thereby establishing quantum entanglement
across the individual interferometers, improves the rotation estimation
variance by a factor of $e\approx2.718$.
|
Recent literature has explored various ways to improve soft sensors by
utilizing learning algorithms with transferability. A performance gain is
generally attained when knowledge is transferred among strongly related soft
sensor learning tasks. A particularly relevant case for transferability is when
developing soft sensors of the same type for similar, but physically different
processes or units. Then, the data from each unit presents a soft sensor
learning task, and it is reasonable to expect strongly related tasks. Applying
methods that exploit transferability in this setting leads to what we call
multi-unit soft sensing.
This paper formulates multi-unit soft sensing as a probabilistic,
hierarchical model, which we implement using a deep neural network. The
learning capabilities of the model are studied empirically on a large-scale
industrial case by developing virtual flow meters (a type of soft sensor) for
80 petroleum wells. We investigate how the model generalizes with the number of
wells/units. Interestingly, we demonstrate that multi-unit models learned from
data from many wells permit few-shot learning of virtual flow meters for new
wells. Surprisingly, given the difficulty of the tasks, few-shot learning
from only 1-3 data points often yields high performance on new wells.
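The hierarchical idea can be sketched with a toy 1-D linear soft sensor (the paper's model is a deep neural network; the shared-plus-offset slope decomposition, learning rates and regularization below are illustrative assumptions): each unit's slope is a shared component plus a regularized unit-specific offset, and a new unit is few-shot adapted by fitting only its offset.

```python
def fit_multi_unit(data, epochs=500, lr=0.05, reg=0.1):
    # data: {unit_id: [(x, y), ...]}; per-unit model y = (w_shared + w_unit) * x.
    # The L2 penalty on w_unit pushes common structure into w_shared.
    w_shared = 0.0
    w_unit = {u: 0.0 for u in data}
    for _ in range(epochs):
        for u, pts in data.items():
            for x, y in pts:
                err = (w_shared + w_unit[u]) * x - y
                w_shared -= lr * err * x
                w_unit[u] -= lr * (err * x + reg * w_unit[u])
    return w_shared, w_unit

def few_shot(w_shared, pts, epochs=500, lr=0.05):
    # adapt to a new unit from a handful of points by fitting only its offset
    w_new = 0.0
    for _ in range(epochs):
        for x, y in pts:
            err = (w_shared + w_new) * x - y
            w_new -= lr * err * x
    return w_shared + w_new
```

The shared slope pools information across units, so the few-shot fit for a new unit only has to recover a small offset from very few points.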
|
A well-known conjecture of Erd\H{o}s and S\'os states that every graph with
average degree exceeding $m-1$ contains every tree with $m$ edges as a
subgraph. We propose a variant of this conjecture, which states that every
graph of maximum degree exceeding $m$ and minimum degree at least $\lfloor
\frac{2m}{3}\rfloor$ contains every tree with $m$ edges.
As evidence for our conjecture we show (i) for every $m$ there is a $g(m)$
such that the weakening of the conjecture obtained by replacing $m$ by $g(m)$
holds, and (ii) there is a $\gamma>0$ such that the weakening of the conjecture
obtained by replacing $\lfloor \frac{2m}{3}\rfloor$ by $(1-\gamma)m$ holds.
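The containment statement can be checked by brute force on small instances (a verification aid only, not evidence for the conjecture; the example graphs in the test are illustrative):

```python
def contains_tree(adj, tree_edges):
    # Backtracking embedding of a tree (given by its edge list over vertices
    # 0..k-1) into a graph given as an adjacency list.
    k = max(max(e) for e in tree_edges) + 1
    tadj = {v: [] for v in range(k)}
    for a, b in tree_edges:
        tadj[a].append(b)
        tadj[b].append(a)
    # order tree vertices so each new one attaches to an already placed parent
    order, parent, seen = [0], {0: None}, {0}
    i = 0
    while i < len(order):
        for w in tadj[order[i]]:
            if w not in seen:
                seen.add(w)
                parent[w] = order[i]
                order.append(w)
        i += 1
    n = len(adj)
    used = [False] * n
    assign = {}

    def place(idx):
        if idx == len(order):
            return True
        tv = order[idx]
        cands = range(n) if parent[tv] is None else adj[assign[parent[tv]]]
        for g in cands:
            if not used[g]:
                used[g] = True
                assign[tv] = g
                if place(idx + 1):
                    return True
                used[g] = False
                del assign[tv]
        return False

    return place(0)
```

Exhausting over all graphs satisfying the degree hypotheses quickly becomes infeasible, but the checker is useful for probing small candidate counterexamples.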
|
We compute the Weierstrass semigroup at one totally ramified place for Kummer
extensions defined by $y^m=f(x)^{\lambda}$ where $f(x)$ is a separable
polynomial over $\mathbb{F}_q$. In addition, we compute the Weierstrass
semigroup at two certain totally ramified places. We then apply our results to
construct one- and two-point algebraic geometric codes with good parameters.
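In the simplest two-generator situation (e.g. when the semigroup at the totally ramified place is generated by two coprime integers; the semigroups computed in the paper are generally richer), the gaps can be enumerated directly and their count matches Sylvester's (a-1)(b-1)/2:

```python
import math

def semigroup_gaps(a, b):
    # gaps of the numerical semigroup <a, b> with gcd(a, b) = 1;
    # all gaps lie at or below the Frobenius number a*b - a - b
    assert math.gcd(a, b) == 1
    frobenius = a * b - a - b

    def representable(g):
        # g is in <a, b> iff g = i*a + j*b for some i, j >= 0
        return any((g - i * a) % b == 0 for i in range(g // a + 1))

    return [g for g in range(1, frobenius + 1) if not representable(g)]
```

The gap count equals the genus of the corresponding curve in this two-generator case, which is what makes such semigroups useful for bounding code parameters.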
|
Many problems intractable on classical devices could be solved by algorithms
explicitly based on quantum mechanical laws, i.e. exploiting quantum
information processing. As a result, increasing efforts from different fields
are nowadays directed to the actual realization of quantum devices. Here we
provide an introduction to Quantum Information Processing, focusing on a
promising setup for its implementation, represented by molecular spin clusters
known as Molecular Nanomagnets. We introduce the basic tools to understand and
design quantum algorithms, always referring to their actual realization on a
molecular spin architecture. We then examine the most important sources of
noise in this class of systems and one of their most peculiar features,
i.e. the possibility to exploit many (more than two) available states to encode
information and to self-correct it from errors via proper design of quantum
error correction codes. Finally, we present some examples of quantum algorithms
proposed and implemented on a molecular spin qudit hardware.
|
We investigate a model of evolutionary dynamics on a smooth landscape which
features a ``mutator'' allele whose effect is to increase the mutation rate. We
show that the expected proportion of mutators far from equilibrium, when the
fitness is steadily increasing in time, is governed solely by the transition
rates into and out of the mutator state. This results in a much faster rate of
fitness increase than would be the case without the mutator allele. Near the
fitness equilibrium, however, the mutators are severely suppressed, due to the
detrimental effects of a large mutation rate near the fitness maximum. We
discuss the results of a recent experiment on natural selection of E. coli in
the light of our model.
|
A semiclassical approach to the universal ergodic spectral statistics in
quantum star graphs is presented for all ten known symmetry classes of quantum
systems. The approach is based on periodic orbit theory, the exact
semiclassical trace formula for star graphs and on diagrammatic techniques. The
appropriate spectral form factors are calculated up to one order beyond the
diagonal and self-dual approximations. The results are in accordance with the
corresponding random-matrix theories, which supports a properly generalized
Bohigas-Giannoni-Schmit conjecture.
|
There is an increasing need for low-background active materials as ton-scale,
rare-event and cryogenic detectors are developed.
Poly(ethylene-2,6-naphthalate) (PEN) has been considered for these applications
because of its robust structural characteristics, and its scintillation light
in the blue wavelength region. Radioluminescent properties of PEN have been
measured to aid in the evaluation of this material. In this article we present
a measurement of PEN's quenching factor using three different neutron sources:
neutrons emitted from spontaneous fission in \cf, neutrons generated by a DD
generator, and neutrons emitted from the \Can and the \Lipn nuclear reactions.
The fission source used time-of-flight to determine the neutron energy, and the
neutron energy from the nuclear reactions was defined using thin targets and
reaction kinematics. The Birks factor and scintillation efficiency were found
to be $kB = 0.12 \pm 0.01$ mm MeV$^{-1}$ and $S = 1.31\pm0.09$ MeV$_{ee}$
MeV$^{-1}$ from a simultaneous analysis of the data obtained from the three
different sources. With these parameters, it is possible to evaluate PEN as a
viable material for large-scale, low background physics experiments.
|
Iterative Gaussianization is a fixed-point iteration procedure that can
transform any continuous random vector into a Gaussian one. Based on iterative
Gaussianization, we propose a new type of normalizing flow model that enables
both efficient computation of likelihoods and efficient inversion for sample
generation. We demonstrate that these models, named Gaussianization flows, are
universal approximators for continuous probability distributions under some
regularity conditions. Because of this guaranteed expressivity, they can
capture multimodal target distributions without compromising the efficiency of
sample generation. Experimentally, we show that Gaussianization flows achieve
better or comparable performance on several tabular datasets compared to other
efficiently invertible flow models such as Real NVP, Glow and FFJORD. In
particular, Gaussianization flows are easier to initialize, demonstrate better
robustness with respect to different transformations of the training data, and
generalize better on small training sets.
|
High-speed imaging was used to capture the fast dynamics of two injection
methods. The first, and perhaps the oldest known, is based on solid needles
and is used for dermal pigmentation, or tattooing. The second is a novel
needle-free micro-jet injector based on thermocavitation. We performed
injections in agarose gel skin surrogates and studied both methods using ink
formulations with different fluidic properties to better understand the
end point of the injection. Both methods were used to inject water and a
glycerin-water mixture. Commercial inks were used with the tattoo machine and
compared with the other liquids injected. The agarose gel was kept stationary
or in motion at a constant speed, along a plane perpendicular to the needle.
The agarose deformation process due to the solid needle injection was also
studied. The advantages and limitations of both methods are discussed, and we
conclude that micro-jet injection has better performance than solid injection
when comparing several quantities for three different liquids, such as the
energy and volumetric delivery efficiencies per injection, depth and width of
penetrations. A newly defined dimensionless quantity, the penetration strength,
is used to indicate potential excessive damage to skin surrogates. Needle-free
methods, such as the micro-jet injector presented here, could reduce the
environmental impact of used needles, and benefit the health of millions of
people that use needles on a daily basis for medical and cosmetic use.
|
Large-payload deep space missions are impractical with the rocket propulsion
technologies currently in use. Chemical thrusters yield high thrust but low
efficiency, while ion thrusters are efficient but provide too little thrust for
large satellites and manned spacecraft. Plasma propulsion is a viable
alternative with a higher thrust than electric ion thrusters and specific
impulse far exceeding those of chemical rocket engines. In this paper, a hybrid
thruster is explored which affords the high mass flow rate of plasma thrusters
while maximizing the specific impulse. The two primary processes of this system
are the ion cyclotron resonance heating of plasma and subsequent electrostatic
acceleration of ions with gridded electrodes. Through a particle-in-cell
simulation of these two components, the exhaust velocities of Xenon, Argon, and
Helium are compared. It has been found that while the combination of systems
results in a far greater exhaust velocity, the acceleration is largely from the
gridded electrodes, and thus Xenon is the most suitable propellant with a
specific impulse upward of 4200 s. Advancements in nuclear fusion and fission
technologies will facilitate the deployment of high-power plasma thrusters that
will enable spacecraft to travel farther and faster in the solar system.
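The gridded electrostatic stage alone can be sketched with the standard energy balance qV = ½mv² and Isp = v/g₀ (the ~1.2 kV accelerating voltage in the usage below is an assumed figure for illustration, not a value taken from the simulation):

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg
G0 = 9.80665                          # m/s^2, standard gravity

def exhaust_velocity(voltage, mass_amu, charge_state=1):
    # electrostatic acceleration: q * V = (1/2) * m * v^2
    q = charge_state * ELEMENTARY_CHARGE
    m = mass_amu * ATOMIC_MASS_UNIT
    return math.sqrt(2 * q * voltage / m)

def specific_impulse(voltage, mass_amu, charge_state=1):
    return exhaust_velocity(voltage, mass_amu, charge_state) / G0
```

At an assumed 1.2 kV, singly charged xenon (131.29 u) comes out near 4300 s, in the range quoted above; lighter species such as helium reach higher exhaust velocities but carry less momentum per ion.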
|
We discuss preliminary results from our programme to map the fields of
high-redshift AGN. In the context of hierarchical models, such fields are
predicted to contain an over-density of young, luminous galaxies destined to
evolve into the core of a rich cluster by the present epoch. We have thus
imaged from submillimetre to X-ray wavelengths the few-arcmin scale fields of a
small sample of high-redshift QSOs. We find that submillimetre wavelength data
from SCUBA show striking over-densities of luminous star-forming galaxies over
scales of ~500 kpc. Whilst many of these galaxies are undetected even in deep
near-IR imaging, almost all of them are detected by Spitzer at 4.5, 8.0 and 24
um, showing that they have extremely red colours. However, they are not
detected in our XMM-Newton observations, suggesting that any AGN must be highly
obscured. Optical-through-mid-IR SEDs show the redshifted 1.6 um bump from
star-light, giving preliminary evidence that the galaxies lie at the same
redshift, and thus in the same structure, as the QSO, although this finding must
be confirmed with photometric and/or spectroscopic redshifts.
|
We discuss the presence of a geometrical phase in the evolution of a qubit
state and its gauge structure. The time evolution operator is found to be the
free energy operator, rather than the Hamiltonian operator.
|
Light-cone gauge formulation of relativistic dynamics of a continuous-spin
field propagating in flat space is developed. Cubic interaction vertices of
continuous-spin massless fields and totally symmetric arbitrary spin massive
fields are studied. We consider parity invariant cubic vertices that involve
one continuous-spin massless field and two arbitrary spin massive fields and
parity invariant cubic vertices that involve two continuous-spin massless
fields and one arbitrary spin massive field. We construct the complete list of
such vertices explicitly. Also we demonstrate that there are no cubic vertices
describing consistent interaction of continuous-spin massless fields with
arbitrary spin massless fields.
|
In this work we use the formalism of chord functions (\emph{i.e.}
characteristic functions) to analytically solve quadratic non-autonomous
Hamiltonians coupled to a reservoir composed of an infinite set of oscillators,
with Gaussian initial state. We analytically obtain a solution for the
characteristic function under dissipation, and therefore for the determinant of
the covariance matrix and the von Neumann entropy, where the latter is the
physical quantity of interest. We study in detail two examples that are known
to show dynamical squeezing and instability effects: the inverted harmonic
oscillator and an oscillator with time-dependent frequency. We show that in
both cases a clear competition between instability and dissipation appears. If
the dissipation is small when compared to the instability, the squeezing
generation is dominant and one sees an increase in the von Neumann entropy.
When the dissipation is large enough, the dynamical squeezing generation in one
of the quadratures is restrained, and hence the growth in the von Neumann
entropy is contained.
|
Dynamics of the 1D electron transport between two reservoirs are studied
based on the inhomogeneous Tomonaga-Luttinger liquid (ITLL) model in the case
when the effect of electron backscattering on the impurities is negligible.
The inhomogeneities of the interaction lead to a charge-wave reflection. This
effect implies a special behavior of the transport characteristics at
microwave frequencies. New features are predicted in the current noise
spectrum and in the a.c. current-frequency dependence. Knowledge of them may
be very useful for identifying an experimental setup with the one specified by
the ITLL model.
|
This is a survey of the "Fourier symmetry" of measures and distributions on
the circle in relation to the size of their support. Mostly it is based on
our paper arxiv:1004.3631 and a talk given by the second author in the 2012
Abel symposium.
|
Approaching the long-time dynamics of non-Markovian open quantum systems
presents a challenging task if the bath is strongly coupled. Recent proposals
address this problem through a representation of the so-called process tensor
in terms of a tensor network. We show that for Gaussian environments highly
efficient contraction to matrix product operator (MPO) form can be achieved
with infinite MPO evolution methods, leading to significant computational
speed-up over existing proposals. The result structurally resembles open system
evolution with carefully designed auxiliary degrees of freedom, as in
hierarchical or pseudomode methods. Here, however, these degrees of freedom are
generated automatically by the MPO evolution algorithm. Moreover, the
semi-group form of the resulting propagator enables us to explore steady-state
physics, such as phase transitions.
|
Single particle cryo-electron microscopy has become a critical tool in
structural biology over the last decade, able to achieve atomic scale
resolution in three dimensional models from hundreds of thousands of (noisy)
two-dimensional projection views of particles frozen at unknown orientations.
This is accomplished by using a suite of software tools to (i) identify
particles in large micrographs, (ii) obtain low-resolution reconstructions,
(iii) refine those low-resolution structures, and (iv) finally match the
obtained electron scattering density to the constituent atoms that make up the
macromolecule or macromolecular complex of interest.
Here, we focus on the second stage of the reconstruction pipeline: obtaining
a low resolution model from picked particle images. Our goal is to create an
algorithm that is capable of ab initio reconstruction from small data sets (on
the order of a few thousand selected particles). More precisely, we propose an
algorithm that is robust, automatic, and fast enough that it can potentially be
used to assist in the assessment of particle quality as the data is being
generated during the microscopy experiment.
|
Semantic reconstruction of indoor scenes refers to both scene understanding
and object reconstruction. Existing works either address one part of this
problem or focus on independent objects. In this paper, we bridge the gap
between understanding and reconstruction, and propose an end-to-end solution to
jointly reconstruct room layout, object bounding boxes and meshes from a single
image. Instead of separately resolving scene understanding and object
reconstruction, our method builds upon a holistic scene context and proposes a
coarse-to-fine hierarchy with three components: 1. room layout with camera
pose; 2. 3D object bounding boxes; 3. object meshes. We argue that
understanding the context of each component can assist the task of parsing the
others, which enables joint understanding and reconstruction. The experiments
on the SUN RGB-D and Pix3D datasets demonstrate that our method consistently
outperforms existing methods in indoor layout estimation, 3D object detection
and mesh reconstruction.
|
We show that a digital sphere, constructed by the circular sweep of a digital
semicircle (generatrix) around its diameter, exhibits some holes
(absentee-voxels), which appear on its spherical surface of revolution.
incompleteness calls for a proper characterization of the absentee-voxels whose
restoration will yield a complete spherical surface without any holes. In this
paper, we present a characterization of such absentee-voxels using certain
techniques of digital geometry and show that their count varies quadratically
with the radius of the semicircular generatrix. Next, we design an algorithm to
fill these absentee-voxels so as to generate a spherical surface of revolution,
which is more realistic from the viewpoint of visual perception. We further
show that covering a solid sphere by a set of complete spheres also results in
an asymptotically larger count of absentees, which is cubic in the radius of
the sphere. The characterization and generation of complete solid spheres
without any holes can also be accomplished in a similar fashion. We furnish
test results to substantiate our theoretical findings.
|
High-dose-rate brachytherapy is a tumor treatment method where a highly
radioactive source is brought in close proximity to the tumor. In this paper we
develop a simulated annealing algorithm to optimize the dwell times at
preselected dwell positions to maximize tumor coverage under dose-volume
constraints on the organs at risk. Compared to existing algorithms, our
algorithm has advantages in terms of speed and objective value and does not
require an expensive general purpose solver. Its success mainly depends on
exploiting the efficiency of matrix multiplication and a careful selection of
the neighboring states. In this paper we outline its details and make an
in-depth comparison with existing methods using real patient data.
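A toy sketch of the annealing loop (the dose-influence matrix, the coverage-only objective and all tuning constants below are invented; the clinical algorithm additionally enforces the dose-volume constraints on the organs at risk):

```python
import math
import random

def dose(influence, times):
    # dose per voxel as a matrix-vector product: d_i = sum_j D_ij * t_j;
    # this product is the hot loop the paper's algorithm exploits
    return [sum(dij * tj for dij, tj in zip(row, times)) for row in influence]

def coverage(doses, prescription):
    # fraction of tumor voxels receiving at least the prescription dose
    return sum(d >= prescription for d in doses) / len(doses)

def anneal(influence, prescription, steps=2000, t_start=1.0, seed=0):
    rng = random.Random(seed)
    times = [1.0] * len(influence[0])
    current = best = coverage(dose(influence, times), prescription)
    best_times = times[:]
    for k in range(steps):
        temp = t_start * (1 - k / steps) + 1e-9  # linear cooling schedule
        candidate = times[:]
        j = rng.randrange(len(candidate))
        candidate[j] = max(0.0, candidate[j] + rng.uniform(-0.5, 0.5))
        value = coverage(dose(influence, candidate), prescription)
        # accept improvements always, worsenings with Boltzmann probability
        if value >= current or rng.random() < math.exp((value - current) / temp):
            times, current = candidate, value
            if current > best:
                best, best_times = current, times[:]
    return best_times, best
```

The neighboring state here is a single perturbed dwell time; the paper's selection of neighbors is more careful, which is part of what makes its variant fast.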
|
We study the low-energy quantum dynamics of vortex strings in the Higgs phase
of N=2 supersymmetric QCD. The exact BPS spectrum of the stretched string is
shown to coincide with the BPS spectrum of the four-dimensional parent gauge
theory. Perturbative string excitations correspond to bound W-bosons and quarks
while the monopoles appear as kinks on the vortex string. This provides a
physical explanation for an observation by N. Dorey relating the quantum
spectra of theories in two and four dimensions.
|
In order to understand the observed physical and orbital diversity of
extrasolar planetary systems, a full investigation of these objects and of
their host stars is necessary. Within this field, one of the main purposes of
the GAPS observing project with HARPS-N@TNG is to provide a more detailed
characterisation of already known systems. In this framework we monitored
HD108874, a star hosting two giant planets, with HARPS-N for three years in
order to refine the orbits, improve the dynamical study and search for
additional low-mass planets in close orbits. We subtracted the radial velocity
(RV) signal due to the known outer planets and found a clear modulation with a
period of 40.2 d. We analysed the correlation between RV residuals and the activity
indicators and modelled the magnetic activity with a dedicated code. Our
analysis suggests that the 40.2 d periodicity is a signature of the rotation
period of the star. A refined orbital solution is provided, revealing that the
system is close to a mean motion resonance of about 9:2, in a stable
configuration over 1 Gyr. Stable orbits for low-mass planets are limited to
regions very close to the star or far from it. Our data exclude super-Earths
with Msin i \gtrsim 5 M_Earth within 0.4 AU and objects with Msin i \gtrsim 2
M_Earth with orbital periods of a few days. Finally we put constraints on the
habitable zone of the system, assuming the presence of an exomoon orbiting the
inner giant planet.
|
Rotation is typically assumed to induce strictly symmetric rotational
splitting into the rotational multiplets of pure p- and g-modes. However, for
evolved stars exhibiting mixed modes, avoided crossings between different
multiplet components are known to yield asymmetric rotational splitting,
particularly for near-degenerate mixed-mode pairs, where notional pure p-modes
are fortuitously in resonance with pure g-modes. These near-degeneracy effects
have been described in subgiants, but their consequences for the
characterisation of internal rotation in red giants have not previously been
investigated in detail, in part owing to theoretical intractability. We employ
new developments in the analytic theory of mixed-mode coupling to study these
near-resonance phenomena. In the vicinity of the most p-dominated mixed modes,
the near-degenerate intrinsic asymmetry from pure rotational splitting
increases dramatically over the course of stellar evolution, and depends
strongly on the mode mixing fraction $\zeta$. We also find that a linear
treatment of rotation remains viable for describing the underlying p- and
g-modes, even when it does not for the resulting mixed modes undergoing these
avoided crossings. We explore observational consequences for potential
measurements of asymmetric mixed-mode splitting, which has been proposed as a
magnetic-field diagnostic. Finally, we propose improved measurement techniques
for rotational characterisation, exploiting the linearity of rotational effects
on the underlying p/g modes, while still accounting for these mixed-mode
coupling effects.
|
Here we study the problem of matched record clustering in unsupervised entity
resolution. We build upon a state-of-the-art probabilistic framework named the
Data Washing Machine (DWM). We introduce a graph-based hierarchical 2-step
record clustering method (GDWM) that first identifies large, connected
components or, as we call them, soft clusters in the matched record pairs using
a graph-based transitive closure algorithm utilized in the DWM. That is
followed by breaking down the discovered soft clusters into more precise entity
clusters in a hierarchical manner using an adapted graph-based modularity
optimization method. Our approach provides several advantages over the original
implementation of the DWM, mainly a significant speed-up, increased precision,
and overall increased F1 scores. We demonstrate the efficacy of our approach
using experiments on multiple synthetic datasets. Our results also provide
evidence of the utility of graph theory-based algorithms despite their sparsity
in the literature on unsupervised entity resolution.
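The first step described above, grouping matched record pairs into soft clusters via a graph-based transitive closure, can be illustrated with a union-find sketch over the pairwise match graph. This is a hypothetical minimal example, not the DWM/GDWM implementation; record identifiers and pairs are made up.

```python
# Illustrative sketch: soft clusters as connected components of the
# match graph, computed by union-find (a transitive closure over pairs).
def soft_clusters(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in pairs:
        union(a, b)

    clusters = {}
    for x in parent:
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

# Example: matched pairs imply two components {r1, r2, r3} and {r4, r5}.
pairs = [("r1", "r2"), ("r2", "r3"), ("r4", "r5")]
print(sorted(map(sorted, soft_clusters(pairs))))  # [['r1','r2','r3'], ['r4','r5']]
```

In the two-step scheme, each such soft cluster would then be refined into precise entity clusters, e.g. by modularity optimization on the subgraph it induces.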
|
In these lectures, we discuss different types of renormalization problems in
QCD and their non-perturbative solution in the framework of the lattice
formulation. In particular, the recursive finite-size methods to compute the
scale dependence of renormalized quantities are explained. An important
ingredient in the practical applications is the Schr\"odinger functional. It is
introduced and its renormalization properties are discussed.
Concerning applications, the computation of the running coupling and the
running quark mass are covered in detail and it is shown how the
$\Lambda$-parameter and renormalization group invariant quark mass can be
obtained. Further topics are the renormalization of isovector currents and
non-perturbative Symanzik improvement.
|
We consider a missing data problem in the context of automatic segmentation
methods for Magnetic Resonance Imaging (MRI) brain scans. Usually, automated
MRI scan segmentation is based on multiple scans (e.g., T1-weighted,
T2-weighted, T1CE, FLAIR). However, quite often a scan is blurry, missing or
otherwise unusable. We investigate whether a missing scan can be
synthesized. We demonstrate that this is in principle possible by synthesizing a
T2-weighted scan from a given T1-weighted scan. Our first aim is to compute a
picture that resembles the missing scan closely, measured by average mean
squared error (MSE). We develop/use several methods for this, including a
random baseline approach, a clustering-based method and pixel-to-pixel
translation method by Isola et al. (Pix2Pix) which is based on conditional
GANs. The lowest MSE is achieved by our clustering-based method. Our second aim
is to compare the methods with respect to the effect that using the synthesized
scan has on the segmentation process. For this, we use a DeepMedic model
trained with the four input scan modalities named above. We replace the
T2-weighted scan by the synthesized picture and evaluate the segmentations with
respect to the tumor identification, using Dice scores as numerical evaluation.
The evaluation shows that the segmentation works well with synthesized scans
(in particular, with Pix2Pix methods) in many cases.
|
We show that the comment by A.F. Volkov ignores a delicate issue in the
conductance measurement for a Hall bar system. In such a system, $\rho
_{xx}\approx \rho_{xy}^{2}\sigma_{xx}$ while $\sigma_{xy}\gg \sigma_{xx}$, as
correctly pointed out in Ref. 3. We clarify that the so-called "zero resistance
state" is actually a "zero conductance state". A discussion concerning the
phase transition induced by the negative conductance is presented.
|
During the last few years, a number of works in computer simulation have
focused on the clustering and percolation properties of simple fluids based on
an energetic connectivity criterion proposed long ago by T.L. Hill [J. Chem.
Phys. 23, 617 (1955)]. This connectivity criterion appears to be the most
appropriate for the study of the gas-liquid phase transition. So far, integral
equation theories have relied on a velocity-averaged version of this
criterion. We show, by using molecular dynamics simulations, that this average
strongly overestimates percolation densities in the Lennard-Jones fluid, making
any prediction based on it unreliable. Additionally, we use a recently
developed integral equation theory [Phys. Rev. E 61, R6067 (2000)] to show how
this velocity-average can be overcome.
|
Text-conditioned diffusion models can generate impressive images, but fall
short when it comes to fine-grained control. Unlike direct-editing tools like
Photoshop, text-conditioned models require the artist to perform "prompt
engineering," constructing special text sentences to control the style or
amount of a particular subject present in the output image. Our goal is to
provide fine-grained control over the style and substance specified by the
prompt, for example to adjust the intensity of styles in different regions of
the image (Figure 1). Our approach is to decompose the text prompt into
conceptual elements, and apply a separate guidance term for each element in a
single diffusion process. We introduce guidance scale functions to control when
in the diffusion process and \emph{where} in the image to intervene. Since the
method is based solely on adjusting diffusion guidance, it does not require
fine-tuning or manipulating the internal layers of the diffusion model's neural
network, and can be used in conjunction with LoRA- or DreamBooth-trained models
(Figure 2). Project page: https://mshu1.github.io/dreamwalk.github.io/
|
Despite recent impressive results on single-object and single-domain image
generation, the generation of complex scenes with multiple objects remains
challenging. In this paper, we start with the idea that a model must be able to
understand individual objects and relationships between objects in order to
generate complex scenes well. Our layout-to-image-generation method, which we
call Object-Centric Generative Adversarial Network (or OC-GAN), relies on a
novel Scene-Graph Similarity Module (SGSM). The SGSM learns representations of
the spatial relationships between objects in the scene, which lead to our
model's improved layout-fidelity. We also propose changes to the conditioning
mechanism of the generator that enhance its object instance-awareness. Apart
from improving image quality, our contributions mitigate two failure modes in
previous approaches: (1) spurious objects being generated without corresponding
bounding boxes in the layout, and (2) overlapping bounding boxes in the layout
leading to merged objects in images. Extensive quantitative evaluation and
ablation studies demonstrate the impact of our contributions, with our model
outperforming previous state-of-the-art approaches on both the COCO-Stuff and
Visual Genome datasets. Finally, we address an important limitation of
evaluation metrics used in previous works by introducing SceneFID -- an
object-centric adaptation of the popular Fr{\'e}chet Inception Distance metric,
that is better suited for multi-object images.
|
The transition between the class-B and class-A dynamical behaviors of a
semiconductor laser is directly observed by continuously controlling the
lifetime of the photons in a cavity of sub-millimetric to centimetric length.
It is experimentally and theoretically proved that the transition from a
resonant to an overdamped behavior occurs progressively, without any
discontinuity. In particular, the intermediate regime is found to exhibit
features typical of both the class-A and class-B regimes. The laser intensity
noise is proved to be a powerful probe of the laser dynamical behavior.
|
This large-scale study, consisting of 24.5 million hand hygiene opportunities
spanning 19 distinct facilities in 10 different states, uses linear predictive
models to expose factors that may affect hand hygiene compliance. We examine
the use of features such as temperature, relative humidity, influenza severity,
day/night shift, federal holidays and the presence of new residents in
predicting daily hand hygiene compliance. The results suggest that colder
temperatures and federal holidays have an adverse effect on hand hygiene
compliance rates, and that individual cultures and attitudes regarding hand
hygiene seem to exist among facilities.
|
This article extends the theory of dual-consistent summation-by-parts (SBP)
and generalized SBP (GSBP) time-marching methods by showing that they are
implicit Runge-Kutta schemes. Through this connection, the accuracy theory for
the pointwise solution, as well as the solution projected to the end of each
time step, is extended for nonlinear problems. Furthermore, it is shown that
these minimum guaranteed order results can be superseded by leveraging the full
nonlinear order conditions of Runge-Kutta methods. The connection to
Runge-Kutta methods is also exploited to derive conditions under which SBP and
GSBP time-marching methods associated with dense norms are nonlinearly stable.
A few known and novel Runge-Kutta methods with associated GSBP operators are
presented. The novel methods, all of which are L-stable and
algebraically-stable, include a four-stage seventh-order fully-implicit method,
a three-stage third-order diagonally-implicit method, and a fourth-order
four-stage diagonally-implicit method.
|
We develop an algorithm for the asymptotically fast evaluation of layer
potentials close to and on the source geometry, combining Geometric Global
Accelerated QBX (`GIGAQBX') and target-specific expansions. GIGAQBX is a fast
high-order scheme for evaluation of layer potentials based on Quadrature by
Expansion (`QBX') using local expansions formed via the Fast Multipole Method
(FMM). Target-specific expansions serve to lower the cost of the formation and
evaluation of QBX local expansions, reducing the associated computational
effort from $O((p+1)^{2})$ to $O(p+1)$ in three dimensions, without any
accuracy loss compared with conventional expansions, but with the loss of
source/target separation in the expansion coefficients. GIGAQBX is a `global'
QBX scheme, meaning that the potential is mediated entirely through expansions
for points close to or on the boundary. In our scheme, this single global
expansion is decomposed into two parts that are evaluated separately: one part
incorporating near-field contributions using target-specific expansions, and
one part using conventional spherical harmonic expansions of far-field
contributions, noting that convergence guarantees only exist for the sum of the
two sub-expansions. By contrast, target-specific expansions were originally
introduced as an acceleration mechanism for `local' QBX schemes, in which the
far-field does not contribute to the QBX expansion. Compared with the
unmodified GIGAQBX algorithm, we show through a reproducible, time-calibrated
cost model that the combined scheme yields a considerable cost reduction for
the near-field evaluation part of the computation. We support the effectiveness
of our scheme through numerical results demonstrating performance improvements
for Laplace and Helmholtz kernels.
|
Unsupervised domain adaptation (UDA) for cross-modality medical image
segmentation has shown great progress by domain-invariant feature learning or
image appearance translation. Adapted feature learning usually cannot detect
domain shifts at the pixel level and is not able to achieve good results in
dense semantic segmentation tasks. Image appearance translation, e.g. CycleGAN,
translates images into different styles with good appearance; despite its
popularity, semantic consistency is hard to maintain, which results in poor
cross-modality segmentation. In this paper, we propose intra- and
cross-modality semantic consistency (ICMSC) for UDA and our key insight is that
the segmentation of synthesised images in different styles should be
consistent. Specifically, our model consists of an image translation module and
a domain-specific segmentation module. The image translation module is a
standard CycleGAN, while the segmentation module contains two domain-specific
segmentation networks. The intra-modality semantic consistency (IMSC) forces
the reconstructed image after a cycle to be segmented in the same way as the
original input image, while the cross-modality semantic consistency (CMSC)
encourages the synthesized images after translation to be segmented exactly the
same as before translation. Comprehensive experimental results on
cross-modality hip joint bone segmentation show the effectiveness of our
proposed method, which achieves an average DICE of 81.61% on the acetabulum and
88.16% on the proximal femur, outperforming other state-of-the-art methods. It
is worth noting that without UDA, a model trained on CT for hip joint bone
segmentation is non-transferable to MRI and yields almost zero DICE scores.
|
It is a well-established fact in the scientific literature that Picard
iterations of backward stochastic differential equations with globally
Lipschitz continuous nonlinearity converge at least exponentially fast to the
solution. In this paper we prove that this convergence is in fact at least
square-root factorially fast. We show for one example that no higher
convergence speed is possible in general. Moreover, if the nonlinearity is
$z$-independent, then the convergence is even factorially fast. Thus we reveal
a phase transition in the speed of convergence of Picard iterations of backward
stochastic differential equations.
|
We prove that the wave operators of scattering theory for the fourth order
Schr\"odinger operators $\Delta^2 + V(x)$ in ${\mathbb R}^4$ are bounded in
$L^p({\mathbb R}^4)$ for a subset of $p \in (1,\infty)$ that depends on the kind
of spectral singularities of $H$ at zero which can be described by the space of
bounded solutions of $(\Delta^2 + V(x))u(x)=0$.
|
The ALFALFA blind HI survey will enable a census of the distribution of
gas-rich galaxies in the local Universe. Sensitive to an HI mass of 10**7 solar
masses at the distance of the Virgo cluster, ALFALFA will probe the smallest
objects locally and provide a new consideration of near-field cosmology.
Additionally, with a larger, cosmologically significant sample volume and wider
bandwidth than previous blind surveys, a much larger number of detections in
each mass bin is possible, with adequate angular resolution to eliminate the
need for extensive follow-up observations. This increased sensitivity will
greatly enhance the utility of cosmological probes in HI. ALFALFA will
eventually measure the correlation function of HI selected galaxies in a large
local volume. The larger sample and volume size of the ALFALFA dataset will
also robustly measure the HI mass function (HIMF). Here, we present the
preliminary results on the distribution of local gas-rich galaxies from a first
ALFALFA catalog covering 540 deg**2.
|
The finite-amplitude method (FAM) is one of the most promising methods for
optimizing the computational performance of the random-phase approximation
(RPA) calculations in deformed nuclei. In this report, we will mainly focus on
our recent progress in the self-consistent relativistic RPA established by
using the FAM. It is found that the effects of the Dirac sea can be taken into
account implicitly in the coordinate-space representation and the rearrangement
terms due to the density-dependent couplings can be treated without extra
computational costs.
|
Let $\Lambda = \mathrm{SL}_2(\Bbb Z)$ be the modular group and let
$c_n(\Lambda)$ be the number of congruence subgroups of $\Lambda$ of index at
most $n$. We prove that $\lim\limits_{n\to \infty} \frac{\log
c_n(\Lambda)}{(\log n)^2/\log\log n} = \frac{3-2\sqrt{2}}{4}.$ The proof is
based on the Bombieri-Vinogradov `Riemann hypothesis on the average' and on the
solution of a new type of extremal problem in combinatorial number theory.
Similar surprisingly sharp estimates are obtained for the subgroup growth of
lattices in higher rank semisimple Lie groups. If $G$ is such a Lie group and
$\Gamma$ is an irreducible lattice of $G$ it turns out that the subgroup growth
of $\Gamma$ is independent of the lattice and depends only on the Lie type of
the direct factors of $G$. It can be calculated easily from the root system.
The most general case of this result relies on the Generalized Riemann
Hypothesis but many special cases are unconditional. The proofs use techniques
from number theory, algebraic groups, finite group theory and combinatorics.
|
Using transmission electron microscopy (TEM) we studied CaCu3Ti4O12, an
intriguing material that exhibits a huge dielectric response, up to kilohertz
frequencies, over a wide range of temperature. Neither in single crystals, nor
in polycrystalline samples, including sintered bulk- and thin-films, did we
observe the twin domains suggested in the literature. Nevertheless, in the
single crystals, we saw a very high density of dislocations with a Burger
vector of [110], as well as regions with cation disorder and planar defects
with a displacement vector 1/4[110]. In the polycrystalline samples, we
observed many grain boundaries with oxygen deficiency, in comparison with the
grain interior. The defect-related structural disorders and inhomogeneity,
serving as an internal barrier layer capacitance (IBLC) in a semiconducting
matrix, might explain the very large dielectric response of the material. Our
TEM study of the structure defects in CaCu3Ti4O12 supports a recently proposed
morphological model with percolating conducting regions and blocking regions.
|
Solving the optimal power flow (OPF) problem in real-time electricity market
improves the efficiency and reliability in the integration of low-carbon energy
resources into the power grids. To address the scalability and adaptivity
issues of existing end-to-end OPF learning solutions, we propose a new graph
neural network (GNN) framework for predicting the electricity market prices
from solving OPFs. The proposed GNN-for-OPF framework innovatively exploits the
locality property of prices and introduces physics-aware regularization, while
attaining reduced model complexity and fast adaptivity to varying grid
topology. Numerical tests have validated the learning efficiency and adaptivity
improvements of our proposed method over existing approaches.
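The locality prior mentioned above, that a bus's price depends mostly on its own features and those of electrically nearby buses, is exactly what a message-passing layer encodes. The following is an illustrative toy under stated assumptions, not the paper's architecture: the adjacency matrix, features, and weights are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy grid graph: 5 buses, adjacency A (hypothetical topology).
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
n_bus, n_feat = A.shape[0], 3
X = rng.normal(size=(n_bus, n_feat))   # per-bus features (loads, limits, ...)

# Normalized aggregation over each bus and its neighbors (self-loops added):
# this is the locality-exploiting step of one graph-convolution layer.
A_hat = A + np.eye(n_bus)
A_hat /= A_hat.sum(axis=1, keepdims=True)

W = rng.normal(size=(n_feat, 1))       # hypothetical learned weights
prices = np.tanh(A_hat @ X @ W)        # one GNN layer -> one price per bus
print(prices.shape)                    # (5, 1)
```

Because the layer only mixes information between graph neighbors, the parameter count is independent of the number of buses, which is one reason GNN predictors adapt more readily to varying grid topology than end-to-end dense models.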
|
Rim and back geometry determine much of the behavior of sound inside the pot,
whose effect on the total produced sound is subtle but discernible. The theory of
sound inside a cylinder is reviewed and demonstrated, and previous work on the
Helmholtz resonance and the interplay between the Helmholtz resonance and the
lowest head mode is revisited using some improved techniques.
|
Transport properties of liquid methanol and ethanol are predicted by
molecular dynamics simulation. The molecular models for the alcohols are rigid,
non-polarizable and of united-atom type. They were developed in preceding work
using experimental vapor-liquid equilibrium data only. Self- and Maxwell-Stefan
diffusion coefficients as well as the shear viscosity of methanol, ethanol and
their binary mixture are determined using equilibrium molecular dynamics and
the Green-Kubo formalism. Non-equilibrium molecular dynamics is used for
predicting the thermal conductivity of the two pure substances. The transport
properties of the fluids are calculated over a wide temperature range at
ambient pressure and compared with experimental and simulation data from the
literature. Overall, a very good agreement with the experiment is found. For
instance, the self-diffusion coefficient and the shear viscosity are predicted
with average deviations of less than 8% for the pure alcohols and 12% for the
mixture. The predicted thermal conductivity agrees on average within 5% with
the experimental data. Additionally, some velocity and shear viscosity
autocorrelation functions are presented and discussed. Radial distribution
functions for ethanol are also presented. The predicted excess volume, excess
enthalpy and the vapor-liquid equilibrium of the binary mixture methanol +
ethanol are assessed, and the vapor-liquid equilibrium agrees well with
experimental data.
|
In this study, we investigate the shapes of starless and protostellar cores
using hydrodynamic, self-gravitating adaptive mesh refinement simulations of
turbulent molecular clouds. We simulate observations of these cores in dust
emission, including realistic noise and telescope resolution, and compare to
the observed core shapes measured in Orion by Nutter & Ward-Thompson (2007).
The simulations and the observations have generally high statistical
similarity, with particularly good agreement between simulations and Orion B.
Although protostellar cores tend to have semi-major axis to semi-minor axis
ratios closer to one, the distributions of axis ratios for starless and
protostellar cores are not significantly different for either the actual
observations of Orion or the simulated observations. Because of the high level
of agreement between the non-magnetic hydrodynamic simulations and observation,
contrary to a number of previous authors, one cannot infer the presence of
magnetic fields from core shape distributions.
|
Pronouns are frequently omitted in pro-drop languages, such as Chinese,
generally leading to significant challenges with respect to the production of
complete translations. Recently, Wang et al. (2018) proposed a novel
reconstruction-based approach to alleviating dropped pronoun (DP) translation
problems for neural machine translation models. In this work, we improve the
original model from two perspectives. First, we employ a shared reconstructor
to better exploit encoder and decoder representations. Second, we jointly learn
to translate and predict DPs in an end-to-end manner, to avoid the errors
propagated from an external DP prediction model. Experimental results show that
our approach significantly improves both translation performance and DP
prediction accuracy.
|
The perturbative double copy is by now a highly established correspondence
between gravity and gauge theories. Non-perturbatively, information ranging
from classical solutions to topological quantities on both sides have been
related to each other via the double copy correspondence. In this paper, we add
another result, where we show that the single copy of the Ricci flow is the
Yang-Mills flow on the space of connections of a principal
$\text{U}(1)$-bundle.
|
Some members of the vegetal kingdom can achieve surprisingly fast movements
making use of a clever combination of evaporation, elasticity and cavitation.
In this process, enthalpic energy is transformed into elastic energy and
suddenly released in a cavitation event which produces kinetic energy. Here we
study this uncommon energy transformation by a model system: a droplet in an
elastic medium shrinks slowly by diffusion and eventually transforms into a
bubble by a rapid cavitation event. The experiments reveal the cavity dynamics
over the extremely disparate timescales of the process, spanning 9 orders of
magnitude. We model the initial shrinkage as a classical diffusive process,
while the sudden bubble growth and oscillations are described using an
inertial-(visco)elastic model, in excellent agreement with the experiments.
Such a model system could serve as a new paradigm for motile synthetic
materials.
|