We develop a variant of the methods of Coleman and Perrin-Riou, giving, for a de
Rham $p$-adic Galois representation, a construction of $p$-adic $L$-functions
from a compatible system of global elements. As a result, we construct analytic
functions on an open set of the $p$-adic weight space containing all locally
algebraic characters of large enough conductor. Applied to Kato's Euler system,
this gives $p$-adic $L$-functions for elliptic curves with additive bad
reduction and, more generally, for modular forms which are supercuspidal at
$p$. In the case of dimension $2$, we prove a functional equation for our
$p$-adic $L$-functions.
|
Recently, the Tevatron experiments released measurements of the invariant mass
spectrum of electron-positron pairs, as well as of the di-jet system arising from
WW+WZ production with one W decaying leptonically. Though the statistics are not
significant, there are two bumps, around 240 GeV and 120-160 GeV respectively.
We propose that the two bumps correspond to extra light gauge bosons $Z^\prime$
and $W^\prime$, which couple to quarks with deci-weak strength. In this brief
report, we also simulate the di-jet invariant mass distribution at the currently
running LHC.
|
We study cosmic evolution based on the fixed points in the dynamical analysis
of the Degenerate Higher-Order Scalar-Tensor (DHOST) theories. We consider the
DHOST theory in which the propagation speed of gravitational waves is equal to
the speed of light, the tensor perturbations do not decay to dark energy
perturbations, and the scaling solutions exist. The scaling fixed point
associated with late-time acceleration of the universe can be either stable or a
saddle point, depending on the parameters of the theory. For some ranges of the
parameters, this scaling fixed point and field dominated fixed point can be
simultaneously stable. Cosmic evolution reaches either the scaling attractor
or the field-dominated attractor, depending on the sign of the time derivative of
the scalar field during matter domination. The density parameter
of dark matter can be larger than unity before reaching the scaling attractor
if the deviation from the Einstein theory of gravity is too large. For this
DHOST theory, the stabilities of the $\phi$-matter-dominated epoch ($\phi$MDE) and
of the field-dominated solutions are similar to those of coupled dark energy models in
Einstein gravity, even though gravity is described by different theories. In our
setup, the universe can only evolve from the $\phi$MDE regime to the
field-dominated regime. Ghost and gradient instabilities have been investigated
up to linear order in cosmological perturbations. There is no gradient
instability, and the ghost instability can be avoided for some range of the
parameters of the model.
|
In the simplest realization of Brownian motion, a colloidal sphere moves
randomly in an isotropic fluid; its mean squared displacement (MSD) grows
linearly with time $\tau$. Brownian motion in an orientationally
ordered fluid, a nematic, is anisotropic, with the MSD being larger along the
axis of molecular orientation, called the director. We show that at short time
scales, the anisotropic diffusion in a nematic also becomes anomalous, with the
MSD growing both slower (subdiffusion) and faster (superdiffusion) than linearly in
$\tau$. The anomalous diffusion occurs at time scales that correspond
to the relaxation times of director deformations around the sphere. Once the
nematic melts, the diffusion becomes normal and isotropic. The experiment shows
that the deformations and fluctuations of long-range orientational order
profoundly influence diffusive regimes.
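To make the distinction between normal and anomalous diffusion concrete, the following minimal sketch (illustrative only, not the experimental analysis) estimates the exponent $\alpha$ in MSD$(\tau)\propto\tau^{\alpha}$ from a simulated trajectory; $\alpha<1$ signals subdiffusion, $\alpha>1$ superdiffusion, and $\alpha=1$ normal Brownian motion.

```python
import numpy as np

def msd(x, max_lag):
    """Mean squared displacement of a 1D trajectory x for lags 1..max_lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=100_000))      # ordinary Brownian walk

lags = np.arange(1, 101)
alpha = np.polyfit(np.log(lags), np.log(msd(x, 100)), 1)[0]
print(f"fitted exponent alpha = {alpha:.2f}")  # ~1 here; <1 subdiffusive, >1 superdiffusive
```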
|
The Polyakov loop variable serves as an order parameter to characterize the
confined and deconfined phases of Yang-Mills theory. By integrating out the
vector fields in the SU(2) Yang-Mills partition function in one-loop
approximation, an effective action is obtained for the Polyakov loop to second
order in a derivative expansion. The resulting effective potential for the
Polyakov loop is capable of describing a second-order deconfinement transition
as a function of temperature.
|
We study the Maximum Independent Set of Rectangles (MISR) problem: given a
set of $n$ axis-parallel rectangles, find a largest-cardinality subset of the
rectangles, such that no two of them overlap. MISR is a basic geometric
optimization problem with many applications that has been studied extensively.
Until recently, the best approximation algorithm for it achieved an $O(\log
\log n)$-approximation factor. In a recent breakthrough, Adamaszek and Wiese
provided a quasi-polynomial time approximation scheme: a
$(1-\epsilon)$-approximation algorithm with running time
$n^{O(\operatorname{poly}(\log n)/\epsilon)}$. Despite this result, obtaining a
PTAS or even a polynomial-time constant-factor approximation remains a
challenging open problem. In this paper we make progress towards this goal by
providing an algorithm for MISR that achieves a $(1 - \epsilon)$-approximation
in time $n^{O(\operatorname{poly}(\log\log{n} / \epsilon))}$. We introduce
several new technical ideas that we hope will lead to further progress on this
and related problems.
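As a point of reference for the problem definition only (not the approximation algorithm of the paper), a brute-force sketch that checks pairwise overlap of axis-parallel rectangles and returns an exact maximum independent set for tiny inputs:

```python
from itertools import combinations

def overlap(a, b):
    """Axis-parallel rectangles a=(x1, y1, x2, y2), b likewise; True if interiors intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def misr_bruteforce(rects):
    """Exact maximum independent set of rectangles by exhaustive search
    (exponential time; only to illustrate the problem definition)."""
    for k in range(len(rects), 0, -1):
        for subset in combinations(rects, k):
            if all(not overlap(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 2, 4, 4), (5, 0, 6, 1)]
print(len(misr_bruteforce(rects)))   # 3: the first, third and fourth rectangles
```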
|
The transition from the diffuse warm neutral medium (WNM) to the dense cold
neutral medium (CNM) is what sets the initial conditions for the formation of
molecular clouds. The properties of the turbulent cascade in the WNM, essential
to describe this radiative condensation process, have remained elusive, in part
due to the difficulty of mapping out the structure and kinematics of each H I
thermal phase. Here we present an analysis of a 21 cm hyper-spectral data cube
from the GHIGLS HI survey where the contribution of the WNM is extracted using
ROHSA, a Gaussian decomposition tool that includes spatial regularization. The
distance and volume of the WNM emission are estimated using 3D dust extinction
map information. The thermal and turbulent contributions to the Doppler line
width of the WNM were disentangled using two techniques, one based on the
statistical properties of the column density and centroid velocity fields, and
another on the relative motions of CNM structures as a probe of turbulent
motions. We found that the volume of WNM sampled here, located at the outer
edge of the Local Bubble, shows thermal properties in accordance with expected
values for heating and cooling processes typical of the Solar neighbourhood.
The WNM has the properties of sub/trans-sonic turbulence, with a turbulent Mach
number at the largest scale probed here (l = 130 pc) of Ms = 0.87 +- 0.15, a
density contrast of 0.6 +- 0.2, and velocity and density power spectra
compatible with k^{-11/3}. The low Mach number of the WNM provides dynamical
conditions that allow the condensation mode of thermal instability (TI) to
grow freely and form CNM structures, as predicted by theory.
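For reference, the textbook decomposition commonly used to separate thermal and turbulent contributions to an H I line width (the paper's two estimation techniques are more elaborate) is

$$\sigma_{\rm obs}^{2}=\sigma_{\rm th}^{2}+\sigma_{\rm turb}^{2},\qquad \sigma_{\rm th}=\sqrt{\frac{k_{B}T_{k}}{m_{\rm H}}},\qquad \mathcal{M}_{s}=\frac{\sqrt{3}\,\sigma_{\rm turb}}{c_{s}},\quad c_{s}=\sqrt{\frac{\gamma k_{B}T_{k}}{\mu m_{\rm H}}},$$

where $T_{k}$ is the kinetic temperature and the factor $\sqrt{3}$ converts the line-of-sight dispersion into a 3D velocity dispersion.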
|
The emission times of laser-triggered electrons from a sharp tungsten tip are
directly characterized under ultrafast, near-infrared laser excitation at
Keldysh parameters $6.6< \gamma < 19.1$. Emission delays up to 10 fs are
observed, which are inferred from the energy gain of photoelectrons emitted
into a synchronously driven microwave cavity. Femtosecond-scale timing resolution is
achieved in a configuration capable of measuring timing shifts up to 55 ps. The
technique can also be used to measure the microwave phase inside the cavity
with a precision below 70 fs upon the energy resolved detection of a single
electron.
|
We consider a Wright-Fisher diffusion (x(t)) whose current state cannot be
observed directly. Instead, at times t1 < t2 < ..., the observations y(ti)
are such that, given the process (x(t)), the random variables (y(ti)) are
independent and the conditional distribution of y(ti) only depends on x(ti).
When this conditional distribution has a specific form, we prove that the model
((x(ti), y(ti)), i >= 1) is a computable filter in the sense that all
distributions involved in filtering, prediction and smoothing are exactly
computable. These distributions are expressed as finite mixtures of parametric
distributions. Thus, the number of statistics to compute at each iteration is
finite, but this number may vary along iterations.
|
Technology has changed both our way of life and the way in which we learn.
Students now attend lectures with laptops and mobile phones, and this situation
is accentuated in the case of students on Computer Science degrees, since they
require their computers in order to participate in both theoretical and
practical lessons. Problems, however, arise when the students' social networks
are opened on their computers and they receive notifications that interrupt
their work. We set up a workshop regarding time, thoughts and attention
management with the objective of teaching our students techniques that would
allow them to manage interruptions, concentrate better and, ultimately, make
better use of their time. Those who took part in the workshop were then
evaluated to discover its effects. The results obtained are quite optimistic
and are described in this paper with the objective of encouraging other
universities to undertake similar initiatives.
|
In this paper, a recently conducted measurement campaign for
unmanned-aerial-vehicle (UAV) channels is introduced. The downlink signals of
an in-service long-term-evolution (LTE) network which is deployed in a suburban
scenario were acquired. Five horizontal and five vertical flight routes were
considered. The channel impulse responses (CIRs) are extracted from the
received data by exploiting the cell-specific reference signals (CRSs). Based on the
CIRs, the parameters of multipath components (MPCs) are estimated by using a
high-resolution algorithm derived according to the space-alternating
generalized expectation-maximization (SAGE) principle. Based on the SAGE
results, channel characteristics including the path loss, shadow fading, fast
fading, delay spread and Doppler frequency spread are thoroughly investigated
for different heights and horizontal distances, which constitute a stochastic
model.
|
Sparse reduced rank regression is an essential statistical learning method.
In the contemporary literature, estimation is typically formulated as a
nonconvex optimization problem that often yields a local optimum in numerical
computation. Yet, the theoretical analysis is always centered on the global
optimum, resulting in a discrepancy between the statistical guarantee and the
numerical computation. In this research, we offer a new algorithm to address
the problem and establish an almost optimal rate for the algorithmic solution.
We also demonstrate that the algorithm achieves the estimation with a
polynomial number of iterations. In addition, we present a generalized
information criterion to simultaneously ensure the consistency of support set
recovery and rank estimation. Under the proposed criterion, we show that our
algorithm can achieve the oracle reduced rank estimation with a significant
probability. Numerical studies and an application to ovarian cancer
genetic data demonstrate the effectiveness and scalability of our approach.
|
We develop a theory of ergodicity for a class of random dynamical systems
where the driving noise is not white. The two main tools of our analysis are
the strong Feller property and topological irreducibility, introduced in this
work for a class of non-Markovian systems. They allow us to obtain a criterion
for ergodicity which is similar in nature to the Doob--Khas'minskii theorem.
The second part of this article shows how it is possible to apply these results
to the case of stochastic differential equations driven by fractional Brownian
motion. It follows that under a nondegeneracy condition on the noise, such
equations admit a unique adapted stationary solution.
|
We report extensive calculations of the decay properties of fine-structure
K-vacancy levels in Fe X-Fe XVII. A large set of level energies, wavelengths,
radiative and Auger rates, and fluorescence yields has been computed using
three different standard atomic codes, namely Cowan's HFR, AUTOSTRUCTURE and
the Breit-Pauli R-matrix package. This multi-code approach is used to study
the effects of core relaxation, configuration interaction and the Breit
interaction, and enables an estimate of statistical accuracy ratings. The
K-alpha and KLL Auger widths have been found to be nearly independent of both
the outer-electron configuration and electron occupancy, keeping a constant
ratio of 1.53+/-0.06. By comparing with previous theoretical and measured
wavelengths, the accuracy of the present set is determined to be within 2 mA.
Also, the good agreement found between the different computed radiative and
Auger data sets allows us to propose with confidence an accuracy rating of 20%
for line fluorescence yields greater than 0.01. Emission and absorption
spectral features are predicted, showing good correlation with
measurements in both laboratory and astrophysical plasmas.
|
We study quantum tunneling for the de Sitter radiation in the planar
coordinates and global coordinates, which are nonstationary coordinates and
describe the expanding geometry. Using the phase-integral approximation for the
Hamilton-Jacobi action in the complex plane of time, we obtain the
particle-production rate in both coordinates and derive the additional
sinusoidal factor depending on the dimensionality of spacetime and the quantum
number for spherical harmonics in the global coordinates. This approach
resolves the factor of two problem in the tunneling method.
|
We analyze thoroughly the mean-field dynamics of a linear chain of three
coupled Bose-Einstein condensates, where both the tunneling and the
central-well relative depth are adjustable parameters. Owing to its
nonintegrability, entailing a complex dynamics with chaos occurrence, this
system is a paradigm for longer arrays whose simplicity still allows a thorough
analytical study. We identify the set of dynamical fixed points, along with the
associated proper modes, and establish their stability character depending on
the significant parameters. As an example of the remarkable operational value
of our analysis, we point out some macroscopic effects that appear to be within
reach of current experiments.
|
This article studies an infinite dimensional analog of Milstein's scheme for
finite dimensional stochastic ordinary differential equations (SODEs). The
Milstein scheme is known to be impressively efficient for SODEs which fulfill a
certain commutativity type condition. This article introduces the infinite
dimensional analog of this commutativity type condition and observes that a
certain class of semilinear stochastic partial differential equations (SPDEs)
with multiplicative trace class noise naturally fulfills the resulting infinite
dimensional commutativity condition. In particular, a suitable infinite
dimensional analog of Milstein's algorithm can be simulated efficiently for
such SPDEs and requires less computational operations and random variables than
previously considered algorithms for simulating such SPDEs. The analysis is
supported by numerical results for a stochastic heat equation and stochastic
reaction diffusion equations, showing significant computational savings.
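As a point of reference, the finite-dimensional Milstein scheme that the article generalizes can be sketched for a scalar SODE $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$ (a minimal illustration, not the SPDE algorithm of the paper):

```python
import numpy as np

def milstein(a, b, db, x0, T, n, rng):
    """Milstein scheme for dX = a(X) dt + b(X) dW on [0, T] with n steps.
    db is the derivative b'(x); the extra 0.5*b*b'*(dW^2 - dt) term is what
    distinguishes Milstein from Euler-Maruyama."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(scale=np.sqrt(dt))
        x[k + 1] = (x[k] + a(x[k]) * dt + b(x[k]) * dw
                    + 0.5 * b(x[k]) * db(x[k]) * (dw**2 - dt))
    return x

# Geometric Brownian motion: a(x) = mu*x, b(x) = sigma*x, b'(x) = sigma
rng = np.random.default_rng(1)
path = milstein(lambda x: 0.05 * x, lambda x: 0.2 * x, lambda x: 0.2,
                x0=1.0, T=1.0, n=1000, rng=rng)
print(path[-1])
```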
|
We consider continuously monitored quantum systems and introduce definitions
of work and heat along individual quantum trajectories that are valid for
coherent superpositions of energy eigenstates. We use these quantities to
extend the first and second laws of stochastic thermodynamics to the quantum
domain. We illustrate our results with the case of a weakly measured driven
two-level system and show how to distinguish between quantum work and heat
contributions. We finally employ quantum feedback control to suppress detector
backaction and determine the work statistics.
|
Time evolution of a black hole lattice universe with a positive cosmological
constant $\Lambda$ is simulated. The vacuum Einstein equations are numerically
solved in a cubic box with a black hole in the center. Periodic boundary
conditions on all pairs of opposite faces are imposed. Configurations of
marginally trapped surfaces are analyzed. We describe the time evolution of not
only black hole horizons, but also cosmological horizons. Defining the
effective scale factor by using the area of a surface of the cubic box, we
compare it with that in the spatially flat dust dominated FLRW universe with
the same value of $\Lambda$. It is found that the behaviour of the effective
scale factor is well approximated by that in the FLRW universe. Our result
suggests that local inhomogeneities do not significantly affect the global
expansion law of the universe irrespective of the value of $\Lambda$.
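For reference, the spatially flat, dust-dominated FLRW universe with a positive cosmological constant, used above as the comparison model, has the well-known scale factor

$$a(t)\propto\left[\sinh\!\left(\tfrac{1}{2}\sqrt{3\Lambda}\,t\right)\right]^{2/3},$$

which interpolates between matter-like expansion $a\propto t^{2/3}$ at early times and de Sitter expansion $a\propto e^{\sqrt{\Lambda/3}\,t}$ at late times.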
|
MOLSCAT is a general-purpose program for quantum-mechanical calculations on
nonreactive atom-atom, atom-molecule and molecule-molecule collisions. It
constructs the coupled-channel equations of atomic and molecular scattering
theory, and solves them by propagating the wavefunction or log-derivative
matrix outwards from short range to the asymptotic region. It then applies
scattering boundary conditions to extract the scattering matrix (S matrix).
Built-in coupling cases include atom + rigid linear molecule, atom + vibrating
diatom, atom + rigid symmetric top, atom + asymmetric or spherical top, rigid
diatom + rigid diatom, rigid diatom + asymmetric top, and diffractive
scattering of an atom from a crystal surface. Interaction potentials may be
specified either in program input (for simple cases) or with user-supplied
routines. For the built-in coupling cases, MOLSCAT can loop over partial waves
(or total angular momentum) to calculate elastic and inelastic integral cross
sections and spectroscopic line-shape cross sections. Post-processors are
available to calculate differential cross sections, transport, relaxation and
Senftleben-Beenakker cross sections, and to fit the parameters of scattering
resonances. MOLSCAT also provides an interface for plug-in routines to specify
coupling cases (Hamiltonians and basis sets) that are not built in; plug-in
routines are supplied to handle collisions of a pair of alkali-metal atoms with
hyperfine structure in an applied magnetic field. For low-energy scattering,
MOLSCAT can calculate scattering lengths and effective ranges and can locate
and characterize scattering resonances as a function of an external variable
such as the magnetic field.
|
Let $\bm p_0,...,\bm p_{m-1}$ be points in ${\mathbb R}^d$, and let
$\{f_j\}_{j=0}^{m-1}$ be a one-parameter family of similitudes of ${\mathbb
R}^d$: $$ f_j(\bm x) = \lambda\bm x + (1-\lambda)\bm p_j, \quad j=0,\dots,m-1, $$ where
$\lambda\in(0,1)$ is our parameter. Then, as is well known, there exists a
unique self-similar attractor $S_\lambda$ satisfying
$S_\lambda=\bigcup_{j=0}^{m-1} f_j(S_\lambda)$. Each $\bm x\in S_\lambda$ has
at least one address $(i_1,i_2,...)\in\prod_1^\infty\{0,1,...,m-1\}$, i.e.,
$\lim_n f_{i_1}f_{i_2}... f_{i_n}({\bf 0})=\bm x$.
We show that for $\lambda$ sufficiently close to 1, each $\bm x\in
S_\lambda\setminus\{\bm p_0,...,\bm p_{m-1}\}$ has $2^{\aleph_0}$ different
addresses. If $\lambda$ is not too close to 1, then we can still have an
overlap, but there exist $\bm x$'s which have a unique address. However, we
prove that almost every $\bm x\in S_\lambda$ has $2^{\aleph_0}$ addresses,
provided $S_\lambda$ contains no holes and at least one proper overlap. We
apply these results to the case of expansions with deleted digits.
Furthermore, we give sharp sufficient conditions for the Open Set Condition
to fail and for the attractor to have no holes.
These results are generalisations of the corresponding one-dimensional
results; however, most proofs are different.
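A minimal numerical sketch (with hypothetical sample points $\bm p_j$) of how an address determines a point of the attractor, by iterating the maps $f_j(\bm x)=\lambda\bm x+(1-\lambda)\bm p_j$ along a digit sequence:

```python
import numpy as np

def point_from_address(address, points, lam):
    """Approximate lim_n f_{i_1} f_{i_2} ... f_{i_n}(0) by a finite composition,
    where f_j(x) = lam * x + (1 - lam) * points[j]."""
    x = np.zeros(points.shape[1])
    # Apply the innermost map f_{i_n} first, then work outwards to f_{i_1}.
    for j in reversed(address):
        x = lam * x + (1 - lam) * points[j]
    return x

# Hypothetical example: three vertices of a triangle in R^2, lam = 0.7.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
addr = [0, 1, 2] * 20                      # a (truncated) address
print(point_from_address(addr, p, lam=0.7))
```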
|
In hep-lat/0701018, Creutz claims that the rooting trick used in simulations
of staggered fermions to reduce the number of tastes misses key physics
whenever the desired theory has an odd number of continuum flavors, and uses
this argument to call into question the rooting trick in general. Here we show
that his argument fails as the continuum limit is approached, and therefore
does not imply any problem for staggered simulations. We also show that the
cancellations necessary to restore unitarity in physical correlators in the
continuum limit are a straightforward consequence of the restoration of taste
symmetry.
|
In recent years, singing voice separation systems have shown increased
performance due to the use of supervised training. The design of the training
dataset is known to be a crucial factor in the performance of such systems. We
investigate how the characteristics of the training dataset impact the
separation performance of state-of-the-art singing voice separation
algorithms. We show that quality and diversity are two important
and complementary assets of a good training dataset. We also provide insights
on possible transforms to perform data augmentation for this task.
|
We study the spectrum of the Volterra composition operator in the space
$L_2[0,1]$.
|
A new- and old-form theory for Bessel periods of Saito-Kurokawa
representations is given. We introduce arithmetic subgroups so that a local
Bessel vector fixed by the subgroup indexed by the conductor of the
representation is unique up to scalars. The global Langlands L-function of a
holomorphic Saito-Kurokawa representation coincides with a canonically settled
Piatetski-Shapiro zeta integral of the newform.
|
Neutron scattering from high-quality YBCO6.334 single crystals with a T$_c$
of 8.4 K shows that there is no coexistence with long-range antiferromagnetic
order at this very low, near-critical doping of $\sim$0.055, in contrast to
claims based on local probe techniques. We find that the neutron resonance seen
in optimally doped YBCO7 and underdoped YBCO6.5, has undergone large softening
and damping. It appears that the overdamped resonance, with a relaxation rate
of 2 meV, is coupled to a zero-energy central mode that grows with cooling and
eventually saturates with no change at or below T$_c$. Although a similar
qualitative behaviour is found for YBCO6.35, our study shows that the central
mode is stronger in YBCO6.334 than in YBCO6.35. The system remains subcritical
with short-ranged three dimensional correlations.
|
We show that various systematics related to certain instrumental effects and
data reduction anomalies in wide field variability surveys can be efficiently
corrected by a Trend Filtering Algorithm (TFA) applied to the photometric time
series produced by standard data pipelines. Statistical tests, performed on the
database of the HATNet project, show that by the application of this filtering
method the cumulative detection probability of periodic transits increases by
up to 0.4 for variables brighter than 11 mag with a trend of increasing
efficiency toward brighter magnitudes. We also show that TFA can be used for
the reconstruction of periodic signals by iteratively filtering out systematic
distortions.
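A schematic of the core idea behind TFA-style detrending, as commonly described (least-squares removal of a linear combination of template light curves built from other stars in the field); the actual algorithm and its iterative signal-reconstruction mode involve additional details:

```python
import numpy as np

def tfa_filter(target, templates):
    """Schematic trend filtering: subtract from `target` the least-squares
    combination of `templates` (shape: n_templates x n_points)."""
    A = templates.T                                   # design matrix
    coef, *_ = np.linalg.lstsq(A, target - target.mean(), rcond=None)
    return target - A @ coef                          # detrended light curve

rng = np.random.default_rng(2)
trend = np.sin(np.linspace(0, 20, 500))               # shared systematic
templates = trend + 0.1 * rng.normal(size=(10, 500))  # other stars' light curves
target = 1.0 + trend + 0.05 * rng.normal(size=500)    # target star
clean = tfa_filter(target, templates)
print(clean.std(), target.std())                      # filtered scatter is smaller
```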
|
We investigate the resonance type behaviour of an overdamped Brownian
particle in a bistable potential driven by external periodic signal. It has
been shown previously that the input energy pumped into the system by the
external drive shows resonance type behaviour as a function of noise strength. We
further extend this idea to study the behaviour as a function of the frequency of the
external driving force and show the occurrence of similar nonmonotonic
behaviour, which can be regarded as a signature of bona fide stochastic
resonance. Both the weak and strong driving limits have been explored, indicating
the occurrence of marginal supra-threshold stochastic resonance in a bistable
potential system.
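A minimal Euler-Maruyama sketch of the driven overdamped bistable system discussed here, with the input energy per period estimated as the work done by the periodic force (one common convention; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A, Omega, D = 0.2, 0.1, 0.12          # drive amplitude/frequency, noise strength
dt, n_per = 0.01, 50                  # time step, number of drive periods
steps = int(2 * np.pi / Omega / dt) * n_per

x, work = 0.0, 0.0
for k in range(steps):
    t = k * dt
    force = x - x**3 + A * np.cos(Omega * t)        # -V'(x) + drive, V = -x^2/2 + x^4/4
    dx = force * dt + np.sqrt(2 * D * dt) * rng.normal()
    work += A * np.cos(Omega * t) * dx              # work done by the drive
    x += dx

print("input energy per period:", work / n_per)
```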
|
We show that self-interaction effects in massive quantum
electrodynamics can lead to the formation of bound states of quark-antiquark
pairs. A current-current fermion coupling term is introduced, which induces a
well in the potential energy profile. Explicit expressions of the effective
potential and renormalized parameters are provided.
|
Mobile edge computing (MEC) has been regarded as a promising technique to
support latency-sensitive and computation-intensive services. However, the low
offloading rate caused by the random channel fading characteristic becomes a
major bottleneck restricting the performance of MEC. Fortunately, a
reconfigurable intelligent surface (RIS) can alleviate this problem, since it
can boost both spectrum and energy efficiency. Different from the
existing works adopting either fully active or fully passive RIS, we propose a
novel hybrid RIS in which reflecting units can flexibly switch between active
and passive modes. To achieve a tradeoff between the latency and energy
consumption, an optimization problem is formulated by minimizing the total
cost. In light of the intractability of the problem, we develop an alternating
optimization-based iterative algorithm by combining the successive convex
approximation method, the variable substitution, and the singular value
decomposition (SVD) to obtain sub-optimal solutions. Furthermore, in order to
gain more insight into the problem, we consider two special cases involving a
latency minimization problem and an energy consumption minimization problem,
and respectively analyze the tradeoff between the number of active and passive
units. Simulation results verify that the proposed algorithm can achieve
flexible mode switching and significantly outperforms existing algorithms.
|
In this paper we introduce a method to resolve transient excitations in
time-frequency space from molecular dynamics simulations. Our technique is
based on continuous wavelet transform of velocity time series coupled to a
threshold-dependent filtering procedure to isolate excitation events from
background noise in a given spectral region. By following in time the center of
mass of the reference frequency interval, the data can be easily exploited to
investigate the statistics of the burst excitation dynamics, by computing, for
instance, the distribution of the burst lifetimes, excitation times, amplitudes
and energies. As an illustration of our method, we investigate transient
excitations in the gap of NaI crystals at thermal equilibrium at different
temperatures. Our results reveal complex ensembles of transient nonlinear
bursts in the gap, whose lifetime and excitation rate increase with
temperature. The method described in this paper is a powerful tool to
investigate transient excitations in many-body systems at thermal equilibrium.
Our procedure gives access to both the equilibrium and the kinetics of
transient excitation processes, allowing one in principle to reconstruct the
full picture of the dynamical process under examination.
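A minimal, self-contained sketch of the wavelet-plus-threshold detection idea described above (not the authors' implementation); the Morlet parameter, the scale and the threshold are illustrative choices:

```python
import numpy as np

def morlet_cwt(signal, scales, dt, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet,
    computed by direct convolution (minimal, unoptimized sketch)."""
    t = np.arange(-4 * max(scales), 4 * max(scales) + dt, dt)
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        wavelet = (np.pi ** -0.25 / np.sqrt(s)
                   * np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2))
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same") * dt
    return out

# Detect a transient burst exceeding a noise threshold in a chosen band (~3 Hz).
dt = 0.01
time = np.arange(0, 50, dt)
sig = 0.1 * np.random.default_rng(5).normal(size=time.size)
sig[2000:2300] += np.sin(2 * np.pi * 3.0 * time[2000:2300])    # transient burst
power = np.abs(morlet_cwt(sig, scales=[1 / 3.0], dt=dt))[0] ** 2
burst = power > 5 * np.median(power)                           # threshold filter
print("detected burst duration ~", burst.sum() * dt, "s")
```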
|
Assuming a large-scale homogeneous magnetic field, we follow the covariant
and gauge-invariant approach used by Tsagas and Barrow to describe the
evolution of density and magnetic field inhomogeneities and curvature
perturbations in a matter-radiation universe. We use a two parameter
approximation scheme to linearize their exact non-linear general-relativistic
equations for magneto-hydrodynamic evolution. Using a two-fluid approach we set
up the governing equations as a fourth order autonomous dynamical system.
Analysis of the equilibrium points for the radiation-dominated era leads to
solutions similar to the super-horizon modes found analytically by Tsagas and
Maartens. We find that a study of the dynamical system in the dust-dominated
era leads naturally to a magnetic critical length scale closely related to the
Jeans Length. Depending on the size of wavelengths relative to this scale,
these solutions show three distinct behaviours: large-scale stable growing
modes, intermediate decaying modes, and small-scale damped oscillatory
solutions.
|
Traditional smart meter measurements lack the granularity needed for
real-time decision-making. To address this practical problem, we create a
generative adversarial networks (GAN) model that enforces temporal consistency
on its high-resolution outputs via hard inequality constraints using a convex
optimization layer. A unique feature of our GAN model is that it is trained
solely on slow timescale aggregated power information obtained from historical
smart meter data. The results demonstrate that the model can successfully
create temporally correlated instantaneous power injection profiles at one-minute
intervals from 15-minute average power consumption information. This innovative
approach, emphasizing inter-neuron constraints, offers a promising avenue for
improved high-speed state estimation in distribution systems and enhances the
applicability of data-driven solutions for monitoring such systems.
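The temporal-consistency requirement, that the generated fast-timescale profile must agree with the slow-timescale smart-meter average, can be illustrated in its simplest form as a Euclidean projection onto the affine constraint that the minutely values average to the observed 15-minute reading (the paper enforces such constraints through a convex optimization layer inside the GAN; this sketch only shows the constraint itself):

```python
import numpy as np

def project_to_average(x, target_avg):
    """Project a candidate high-resolution profile x (e.g. 15 minutely values)
    onto the set {x : mean(x) = target_avg}, the simplest hard constraint
    linking fast-timescale output to a slow-timescale smart-meter reading."""
    return x + (target_avg - x.mean())

rng = np.random.default_rng(4)
raw = rng.normal(loc=1.2, scale=0.3, size=15)   # hypothetical generator output (kW)
feasible = project_to_average(raw, target_avg=1.0)
print(feasible.mean())                          # exactly 1.0 by construction
```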
|
We study N=2(4) superstring backgrounds which are four-dimensional
non-Kählerian with non-trivial dilaton and torsion fields. In particular we
consider the case that the backgrounds possess at least one $U(1)$ isometry and
are characterized by the continual Toda equation and the Laplace equation. We
obtain a string background associated with a non-trivial solution of the
continual Toda equation, which is mapped, under the T-duality transformation,
to the hyper-Kähler Taub-NUT instanton background. It is shown that the
integrability properties of the non-Kählerian spaces have their direct origin in the
real heavens: real, self-dual, Euclidean, Einstein spaces. The Laplace equation
and the continual Toda equation imposed on quasi-Kähler geometry for
consistent string propagation are related to the self-duality conditions of the
real heavens with ``translational'' and ``rotational'' Killing symmetry,
respectively.
|
Let $\Omega$ and $\Omega'$ be open subsets of a flat $(2,3,5)$-distribution.
We show that a $C^1$-smooth contact mapping $f : \Omega \to \Omega'$ is a
$C^\infty$-smooth contact mapping. Ultimately, this is a consequence of the
rigidity of the associated stratified Lie group (the Tanaka prolongation of the
Lie algebra is of finite-type). The conclusion is reached through a careful
study of some differential identities satisfied by components of the
Pansu-derivative of a $C^1$-smooth contact mapping.
|
I propose that physical theories defined over finite places (including
$p$-adic fields) can be used to construct conventional theories over the reals,
or conversely, that certain theories over the reals "decompose" over the finite
places, and that this decomposition applies to quantum mechanics, field theory,
gravity, and string theory, in both Euclidean and Lorentzian signatures. I
present two examples of the decomposition: quantum mechanics of a free
particle, and Euclidean two-dimensional Einstein gravity. For the free
particle, the finite place theory is the usual free particle $p$-adic quantum
mechanics, with the Hamiltonian obtained from the real one by replacing the
usual derivatives with Vladimirov derivatives, and numerical coefficients with
$p$-adic norms. For Euclidean two-dimensional gravity, the finite place objects
mimicking the role of the spacetime are $\mathrm{SL}(2,\mathbb{Q}_p)$ Bruhat-Tits
trees. I furthermore propose quadratic extension Bruhat-Tits trees as the
finite place objects into which Lorentzian $\mathrm{AdS}_2$ decomposes, and
Bruhat-Tits buildings as a natural generalization to higher dimensions, with
the same symmetry group on the finite and real sides for the manifolds and
buildings corresponding to the vacuum state.
|
The $T^{\prime}$ infinite layered nickelates have recently garnered
significant attention owing to the discovery of superconductivity in hole-doped
RNiO$_2$ (R $=$ La, Pr, or Nd), which is the $n = \infty$ member of the series
R$_{n+1}$Ni$_n$O$_{2n+2}$. Here, we investigate the $n = 3$ member, namely
R$_4$Ni$_3$O$_8$ (R $=$ La, Pr, or Nd) of this family. The compound
La$_4$Ni$_3$O$_8$ exhibits simultaneous charge/spin-stripe ordering at
T$_N^\ast$ $=$ 105 K, which is also concomitant with the onset of
a metal-to-insulator transition (MIT) upon lowering the temperature below
T$_N^\ast$. We investigate the conspicuous absence of this transition in the Pr
and Nd analogues of La$_4$Ni$_3$O$_8$. To achieve this purpose, we synthesized
solid-solutions of the form (La, Pr)$_4$Ni$_3$O$_8$ and (La, Nd)$_4$Ni$_3$O$_8$
and examined the behavior of T$_N^\ast$ as a function of the average R-site
ionic radius ($r_{\overline{R}}$). We show that after an initial quasilinear
decrease with decreasing $r_{\overline{R}}$, T$_N^\ast$ suddenly vanishes in
the narrow range 1.134 $\AA$ $\leq$ $r_{\overline{R}}$ $\leq$ 1.143 $\AA$. In
the same range, we observed the emergence of a new transition below T$^\ast$,
whose onset temperature increases as $r_{\overline{R}}$ further decreases. We,
therefore, argue that the sudden vanishing of charge/spin-stripe/MIT ordering
upon decreasing $r_{\overline{R}}$ is due to the appearance of a new competing
phase. The point $r_{\overline{R}}$ $\approx$ $r_c$, where T$_N^\ast$ vanishes
and T$^\ast$ appears -- a quantum critical point -- should be investigated
further. In this regard, Pr$_4$Ni$_3$O$_8$ and Pr-rich samples should be useful
due to the weak magnetization response associated with the Pr-sublattice, as
shown here.
|
Equilibrium bifurcation in natural systems can sometimes be explained as a
route to stress shielding for preventing failure. Although compressive buckling
has been known for a long time, its less-intuitive tensile counterpart was only
recently discovered and yet never identified in living structures or organisms.
Through the analysis of an unprecedented all-in-one paradigm of elastic
instability, it is theoretically and experimentally shown that coexistence of
two curvatures in human finger joints is the result of an optimal design by
nature that exploits both compressive and tensile buckling for inducing
luxation in the case of trauma, thus realizing a unique mechanism for protecting
tissues and preventing more severe damage under extreme loads. Our findings
might pave the way to conceive complex architectured and bio-inspired
materials, as well as next generation artificial joint prostheses and robotic
arms for bio-engineering and healthcare applications.
|
We study the structure of the ground state wave functions of bosonic Symmetry
Protected Topological (SPT) insulators in 3 space dimensions. We demonstrate
that the differences with conventional insulators are captured simply in a dual
vortex description. As an example we show that a previously studied bosonic
topological insulator with both global U(1) and time-reversal symmetry can be
described by a rather simple wave function written in terms of dual "vortex
ribbons". The wave function is a superposition of all the vortex ribbon
configurations of the boson, and a factor (-1) is associated with each
self-linking of the vortex ribbons. This wave function can be conveniently
derived using an effective field theory of the SPT in the strong coupling
limit, and it naturally explains all the phenomena of this SPT phase discussed
previously. The ground state structures of other 3d bosonic SPT phases are also
discussed similarly in terms of vortex loop gas wave functions. We show that
our methods reproduce known results on the ground state structure of some 2d
SPT phases.
|
A proof of the adiabatic theorem is given for quantum systems whose time
evolution proceeds in discrete time, e.g., quantum maps and quantum circuits.
|
Microservice resilience, the ability of microservices to recover from
failures and continue providing reliable and responsive services, is crucial
for cloud vendors. However, the current practice relies on manually configured
rules specific to a certain microservice system, resulting in labor-intensiveness
and inflexibility, given the large scale and high dynamics of
microservices. A more labor-efficient and versatile solution is desired. Our
insight is that resilient deployment can effectively prevent the dissemination
of degradation from system performance metrics to user-aware metrics, and the
latter directly affects service quality. In other words, failures in a
non-resilient deployment can impact both types of metrics, leading to user
dissatisfaction. With this in mind, we propose MicroRes, the first versatile
resilience profiling framework for microservices via degradation dissemination
indexing. MicroRes first injects failures into microservices and collects
available monitoring metrics. Then, it ranks the metrics according to their
contributions to the overall service degradation. It produces a resilience
index by how much the degradation is disseminated from system performance
metrics to user-aware metrics. Higher degradation dissemination indicates lower
resilience. We evaluate MicroRes on two open-source and one industrial
microservice system. The experiments show MicroRes' efficient and effective
resilience profiling of microservices. We also showcase MicroRes' practical
usage in production.
|
We generalize the scale-free network model of Barab\'asi and Albert [Science
286, 509 (1999)] by proposing a class of stochastic models for scale-free
interdependent networks in which interdependent nodes are not randomly
connected but rather are connected via preferential attachment (PA). Each
network grows through the continuous addition of new nodes, and new nodes in
each network attach preferentially and simultaneously to (a) well-connected
nodes within the same network and (b) well-connected nodes in other networks.
We present analytic solutions for the power-law exponents as functions of the
number of links both between networks and within networks. We show that the
cross-clustering coefficient versus network size $N$ follows a power law. We
illustrate the models using selected examples from the Internet and finance.
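For orientation, a minimal sketch of the single-network preferential-attachment growth rule that the interdependent model extends; in the two-network version, each new node would additionally draw degree-proportional targets from the other network's pool (illustrative code, not the paper's model):

```python
import random

def grow_ba(n_nodes, m, seed=0):
    """Grow a Barabasi-Albert-style graph: each new node attaches m edges to
    existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = []
    degree_pool = list(range(m))      # a node appears once per unit of degree
    for new in range(m, n_nodes):
        targets = set()
        while len(targets) < m:       # m distinct degree-proportional targets
            targets.add(rng.choice(degree_pool))
        for t in targets:
            edges.append((new, t))
            degree_pool.append(t)
        degree_pool.extend([new] * m)
    return edges

print(len(grow_ba(1000, m=3)))        # (1000 - 3) * 3 edges
```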
|
A classic route for destroying long-lived electronic quasiparticles in a
weakly interacting Fermi liquid is to couple them to other low-energy degrees
of freedom that effectively act as a bath. We consider here the problem of
electrons scattering off the spin fluctuations of a geometrically frustrated
antiferromagnet, whose non-linear Landau-Lifshitz dynamics, which remains
non-trivial at all temperatures, we model in detail. At intermediate
temperatures and in the absence of any magnetic ordering, the fluctuating
local-moments lead to a non-trivial angular anisotropy of the scattering-rate
along the Fermi surface, which disappears with increasing temperature,
elucidating the role of "hot-spots". Over a remarkably broad window of
intermediate and high temperatures, the electronic properties can be described
by employing a local approximation for the dynamical spin-response. This we
contrast with the more familiar setup of electrons scattering off classical
phonons, whose high-temperature limit differs fundamentally on account of their
unbounded Hilbert space. We place our results in the context of layered
magnetic delafossite compounds.
|
Cluster formation of microscopic swimmers is key to the formation of biofilms
and colonies, efficient motion and nutrient uptake, but, in the absence of
other interactions, requires high swimmer concentrations to occur. Here we
experimentally and numerically show that cluster formation can be dramatically
enhanced by an anisotropic swimmer shape. We analyze a class of model
microswimmers with a shape that can be continuously tuned from spherical to
bent and straight rods. In all cases, clustering can be described by
Michaelis-Menten kinetics governed by a single scaling parameter that depends
on particle density and shape only. We rationalize these shape-dependent
dynamics from the interplay between interlocking probability and cluster
stability. The bent rod shape promotes assembly even at vanishingly low
particle densities and we identify the most efficient shape to be a semicircle.
Our work provides key insights into how shape can be used to rationally design
out-of-equilibrium self-organization, key to creating active functional
materials and designing targeted two-component drug delivery.
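For reference, the generic Michaelis-Menten saturation law mentioned above reads

$$v(\rho)=\frac{v_{\max}\,\rho}{K+\rho},$$

which grows linearly in the density $\rho$ when $\rho\ll K$ and saturates at $v_{\max}$ when $\rho\gg K$; how the single shape- and density-dependent scaling parameter of the paper enters these constants is not specified in the abstract and is left open here.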
|
A symmetric 1 to 2 quantum cloning machine (QCM) is presented that provides
high-fidelity copies with $0.90 \le F \le 0.95$ for all pure (single-qubit)
input states from a given meridian of the Bloch sphere. Emphasis is placed
especially on the states of the (so-called) Eastern meridian, which includes
the computational basis states $\ketm{0}, \ketm{1}$ together with the
diagonal state $\ketm{+} = \frac{1}{\sqrt{2}} (\ketm{0} + \ketm{1})$, for which
the suggested cloning transformation is shown to be optimal. In addition, we
also show how this QCM can be utilized for
eavesdropping in Bennett's B92 protocol for quantum key distribution with a
substantially higher success rate than obtained for universal or equatorial
quantum copying.
|
We initiate the study of fairness for ordinal regression. We adapt two
fairness notions previously considered in fair ranking and propose a strategy
for training a predictor that is approximately fair according to either notion.
Our predictor has the form of a threshold model, composed of a scoring function
and a set of thresholds, and our strategy is based on a reduction to fair
binary classification for learning the scoring function and local search for
choosing the thresholds. We provide generalization guarantees on the error and
fairness violation of our predictor, and we illustrate the effectiveness of our
approach in extensive experiments.
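A minimal sketch of the threshold-model predictor described above (a scoring function composed with ordered thresholds); the fair-binary-classification reduction and the local search over thresholds are not shown:

```python
import numpy as np

def threshold_predict(scores, thresholds):
    """Map real-valued scores to ordinal labels 0..K using K sorted thresholds:
    the label is the number of thresholds lying strictly below the score."""
    return np.searchsorted(np.asarray(thresholds), scores)

scores = np.array([-1.3, 0.2, 0.9, 2.4])
thetas = [-0.5, 0.5, 1.5]                 # 3 thresholds -> 4 ordinal classes
print(threshold_predict(scores, thetas))  # [0 1 2 3]
```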
|
The transfer Hamiltonian tunneling current is derived in a time-dependent
density matrix formulation and is used to examine photon-assisted tunneling.
Bardeen's tunneling expression arises as the result of first order perturbation
theory in a mean-field expansion of the density matrix. Photon-assisted
tunneling from confined electromagnetic fields in the forbidden barrier region
occurs due to time-varying polarization and wavefunction overlap in the gap,
which leads to a non-zero tunneling current in asymmetric device structures,
even in an unbiased state. The photon energy is seen to act as an effective
temperature dependent bias in a uniform barrier asymmetric tunneling example
problem. Higher order terms in the density matrix expansion give rise to
multi-photon enhanced tunneling currents that can be considered an extension of
non-linear optics where the non-linear conductance plays a similar role as the
non-linear susceptibilities in the continuity equations.
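For reference, the standard Bardeen transfer-Hamiltonian result recovered at first order is (conventions for signs and occupation factors vary)

$$M_{\mu\nu}=\frac{\hbar^{2}}{2m}\int_{S_{0}}\left(\chi_{\nu}^{*}\nabla\psi_{\mu}-\psi_{\mu}\nabla\chi_{\nu}^{*}\right)\cdot d\mathbf{S},\qquad I=\frac{2\pi e}{\hbar}\sum_{\mu,\nu}f(E_{\mu})\left[1-f(E_{\nu}+eV)\right]\left|M_{\mu\nu}\right|^{2}\delta(E_{\mu}-E_{\nu}),$$

where $\psi_{\mu}$ and $\chi_{\nu}$ are states of the two electrodes and $S_{0}$ is a separation surface inside the barrier.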
|
We show that induced representations for a pair of \textit{diffeological Lie
groups} exist, in the form of an indexed colimit in the category of
diffeological spaces.
|
In previous work, RAF theory has been developed as a tool for making
theoretical progress on the origin of life question, providing insight into the
structure and occurrence of self-sustaining and collectively autocatalytic sets
within catalytic polymer networks. We present here an extension in which there
are two "independent" polymer sets, where catalysis occurs within and between
the sets, but there are no reactions combining polymers from both sets. Such an
extension reflects the interaction between nucleic acids and peptides observed
in modern cells and proposed forms of early life.
|
The effect of water exposure on MgB2 is studied by submerging an 800 nm thick
MgB2 film into deionized water at room temperature for 1 hour, 4 hours, 10
hours, and 15 hours, and by analyzing the resulting material using scanning
electron microscopy and resistance vs. temperature measurements. It is clearly
observed that the Tc onset of these films (obtained by an ex-situ reaction of an
e-beam evaporated boron layer) remains unchanged throughout this process,
indicating that at least a portion of the sample retains its original bulk-like
properties. The data is consistent with an interpretation in which a portion of
the exposed film - likely the region closest to the substrate - becomes
superconducting only at about 25 K. It is possible that this low-Tc region
already exists in the as-prepared film, and we observe that its Tc coincides
with that of MgB2 films obtained by annealing precursor films prepared by
pulsed laser deposition. Therefore the data presented here not only illustrate
the degradation of MgB2 in water but also shed light onto the differences and
similarities between films obtained via different routes.
|
Searches for new physics push experiments to look for increasingly rare
interactions. As a result, detectors require increasing sensitivity and
specificity, and materials must be screened for naturally occurring,
background-producing radioactivity. Furthermore, the detectors used for
screening must approach the sensitivities of the physics-search detectors
themselves, thus motivating iterative development of detectors capable of both
physics searches and background screening. We report on the design,
installation, and performance of a novel, low-background, fourteen-element
high-purity germanium detector named the CAGe (CUP Array of Germanium),
installed at the Yangyang underground laboratory in Korea.
|
As a critical cue for understanding human intention, human gaze provides a
key signal for Human-Computer Interaction (HCI) applications. Appearance-based
gaze estimation, which directly regresses the gaze vector from eye images, has
made great progress recently based on Convolutional Neural Network (ConvNet)
architectures and open-source large-scale gaze datasets. However, encoding
model-based knowledge into the CNN model to further improve the gaze estimation
performance remains a topic that needs to be explored. In this paper, we
propose HybridGazeNet(HGN), a unified framework that encodes the geometric
eyeball model into the appearance-based CNN architecture explicitly. Composed
of a multi-branch network and an uncertainty module, HybridGazeNet is trained
using a hybridized strategy. Experiments on multiple challenging gaze datasets
show that HybridGazeNet has better accuracy and generalization ability
compared with existing SOTA methods. The code will be released later.
|
We introduce a novel matching algorithm, called DeepMatching, to compute
dense correspondences between images. DeepMatching relies on a hierarchical,
multi-layer, correlational architecture designed for matching images and was
inspired by deep convolutional approaches. The proposed matching algorithm can
handle non-rigid deformations and repetitive textures and efficiently
determines dense correspondences in the presence of significant changes between
images. We evaluate the performance of DeepMatching, in comparison with
state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al
2005), the MPI-Sintel (Butler et al 2012) and the Kitti (Geiger et al 2013)
datasets. DeepMatching outperforms the state-of-the-art algorithms and shows
excellent results in particular for repetitive textures. We also propose a
method for estimating optical flow, called DeepFlow, by integrating
DeepMatching in the large displacement optical flow (LDOF) approach of Brox and
Malik (2011). Compared to existing matching algorithms, additional robustness
to large displacements and complex motion is obtained thanks to our matching
approach. DeepFlow obtains competitive performance on public benchmarks for
optical flow estimation.
|
In this paper, a radius of (absolute) convergence is proposed for power series
in several (real or complex) variables. This radius can be evaluated by a formula
similar to the Cauchy-Hadamard formula, and in the one-variable case the two coincide.
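For comparison, the classical one-variable Cauchy-Hadamard formula, to which the proposed multivariate radius reduces, is

$$\frac{1}{R}=\limsup_{n\to\infty}|a_{n}|^{1/n}\qquad\text{for } f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}.$$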
|
Understanding the informative structures of scenes is essential for low-level
vision tasks. Unfortunately, it is difficult to obtain a concrete visual
definition of the informative structures because influences of visual features
are task-specific. In this paper, we propose a single general neural network
architecture for extracting task-specific structure guidance for scenes. To do
this, we first analyze traditional spectral clustering methods, which compute
a set of eigenvectors to model a segmented graph forming small compact
structures on image domains. We then unfold the traditional graph-partitioning
problem into a learnable network, named \textit{Scene Structure Guidance
Network (SSGNet)}, to represent the task-specific informative structures. The
SSGNet yields a set of coefficients of eigenvectors that produces explicit
feature representations of image structures. In addition, our SSGNet is
light-weight ($\sim$ 55K parameters), and can be used as a plug-and-play module
for off-the-shelf architectures. We optimize the SSGNet without any supervision
by proposing two novel training losses that enforce task-specific scene
structure generation during training. Our main contribution is to show that
such a simple network can achieve state-of-the-art results for several
low-level vision applications including joint upsampling and image denoising.
We also demonstrate that our SSGNet generalizes well on unseen datasets,
compared to existing methods which use structural embedding frameworks. Our
source codes are available at https://github.com/jsshin98/SSGNet.
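For context, a minimal numpy sketch of the classical spectral-clustering step referred to above: the leading nontrivial eigenvectors of the normalized graph Laplacian of an affinity matrix (the learnable SSGNet replaces this eigendecomposition; the toy affinity matrix below is only illustrative):

```python
import numpy as np

def spectral_embedding(W, k):
    """First k nontrivial eigenvectors of the normalized graph Laplacian
    L = I - D^{-1/2} W D^{-1/2}, as used in classical spectral clustering."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1:k + 1]               # skip the trivial lowest eigenvector

# Toy affinity matrix of two loosely connected groups of nodes.
W = np.block([[np.ones((3, 3)), 0.01 * np.ones((3, 3))],
              [0.01 * np.ones((3, 3)), np.ones((3, 3))]])
np.fill_diagonal(W, 0)
print(spectral_embedding(W, 1).round(2).ravel())   # sign separates the two groups
```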
|
The pole positions of the various baryon resonances are known to reveal
well-pronounced clustering, so-called Hoehler clusters. For nonstrange baryons
the Hoehler clusters are shown to be identical to Lorentz multiplets of the
type (j,j)*[(1/2,0)+(0,1/2)] with j being a half-integer. For the Lambda
hyperons below 1800 MeV these clusters are shown to be of the type [(1,0)+
(0,1)]*[(1/2,0)+(0,1/2)] while above 1800 MeV they are parity duplicated
(J,0)+(0,J) (Weinberg-Ahluwalia) states. Therefore, for Lambda hyperons the
restoration of chiral symmetry takes place above 1800 MeV. Finally, it is
demonstrated that the description of spin-3/2 particles in terms of a 2nd rank
antisymmetric Lorentz tensor with Dirac spinor components does not contain any
off-shell parameters and avoids the main difficulties of the Rarita-Schwinger
description based upon a 4-vector with Dirac spinor components.
|
After giving a short review on the impact picture approach for the elastic
scattering amplitude, we will discuss the importance of some issues related to
its real and imaginary parts. This will be illustrated in the context of recent
data from RHIC, Tevatron and LHC.
|
We study singular stochastic control of a two dimensional stochastic
differential equation, where the first component is linear with random and
unbounded coefficients. We derive existence of an optimal relaxed control and
necessary conditions for optimality in the form of a mixed relaxed-singular
maximum principle in a global form. A motivating example is given in the form
of an optimal investment and consumption problem with transaction costs, where
we consider a portfolio with a continuum of bonds and where the portfolio
weights are modeled as measure-valued processes on the set of times to
maturity.
|
The similarity of local atomic environments is an important concept in many
machine-learning techniques which find applications in computational chemistry
and material science. Here, we present and discuss a connection between the
information entropy and the similarity matrix of a molecule. The resulting
entropy can be used as a measure of the complexity of a molecule. Exemplarily,
we introduce and evaluate two specific choices for defining the similarity: one
is based on a SMILES representation of local substructures and the other is
based on the SOAP kernel. By tuning the sensitivity of the latter, we can
achieve a good agreement between the respective entropies. Finally, we consider
the entropy of two molecules in a mixture. The gain of entropy due to the
mixing can be used as a similarity measure of the molecules. We compare this
measure to the average and the best-match kernel. The results indicate a
connection between the different approaches and demonstrate the usefulness and
broad applicability of the similarity-based entropy approach.
|
The decays ~nu_e -> e- ~X_1^+, ~nu_e* -> e+ ~X_1^-, when kinematically
allowed, constitute a source of 100% polarised charginos. We study the process
e+ e- -> ~nu_e* ~nu_e -> e+ ~X_1^- e- ~X_1^+, with subsequent chargino decays
~X_1^- -> nu_mu mu- ~X_1^0, ~X_1^+ -> q qbar' ~X_1^0, or their charge conjugates
~X_1^- -> qbar q' ~X_1^0, ~X_1^+ -> nu_mu mu+ ~X_1^0. The kinematics of this
process allows the reconstruction of the sneutrino and chargino rest frames,
and thus a clean analysis of the angular distributions of the visible final
state products in these reference systems. Furthermore, a triple product CP
asymmetry can be built in ~X_1^+- hadronic decays, involving the chargino spins
and the momenta of the quark and antiquark. With a good c tagging efficiency,
this CP asymmetry is quite sensitive to the phase of the bino mass term M_1 and
can test CP violation in the neutralino sector.
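In the notation of this abstract, the triple product and its CP-odd asymmetry can be written (sign and frame conventions are an assumption of this note) as T = s_{~X_1^+} . (p_q x p_qbar') and A_T = [N(T>0) - N(T<0)] / [N(T>0) + N(T<0)], where s_{~X_1^+} is the chargino spin vector and p_q, p_qbar' are the quark and antiquark momenta in the chargino rest frame.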
|
In this paper, we tackle the problem of RGB-D semantic segmentation of indoor
images. We take advantage of deconvolutional networks which can predict
pixel-wise class labels, and develop a new structure for deconvolution of
multiple modalities. We propose a novel feature transformation network to
bridge the convolutional networks and deconvolutional networks. In the feature
transformation network, we correlate the two modalities by discovering common
features between them, as well as characterize each modality by discovering
modality specific features. With the common features, we not only closely
correlate the two modalities, but also allow them to borrow features from each
other to enhance the representation of shared information. With specific
features, we capture the visual patterns that are only visible in one modality.
The proposed network achieves competitive segmentation accuracy on NYU depth
dataset V1 and V2.
|
Automatic laparoscope motion control is fundamentally important for surgeons
to efficiently perform operations. However, its traditional control methods
based on tool tracking without considering information hidden in surgical
scenes are not intelligent enough, while the latest supervised imitation
learning (IL)-based methods require expensive sensor data and suffer from
distribution mismatch issues caused by limited demonstrations. In this paper,
we propose a novel Imitation Learning framework for Laparoscope Control (ILLC)
with reinforcement learning (RL), which can efficiently learn the control
policy from limited surgical video clips. Specifically, we first extract surgical
laparoscope trajectories from unlabeled videos as the demonstrations and
reconstruct the corresponding surgical scenes. To fully learn from limited
motion trajectory demonstrations, we propose Shape Preserving Trajectory
Augmentation (SPTA) to augment these data, and build a simulation environment
that supports parallel RGB-D rendering to reinforce the RL policy for
interacting with the environment efficiently. With adversarial training for IL,
we obtain the laparoscope control policy based on the generated rollouts and
surgical demonstrations. Extensive experiments are conducted in unseen
reconstructed surgical scenes, and our method outperforms the previous IL
methods, which proves the feasibility of our unified learning-based framework
for laparoscope control.
|
In this thesis we discuss how the brane world scenario can be realized
dynamically within the field theoretical framework using topological solitons.
As a playground we consider a bosonic sector of a (4+1)-dimensional
supersymmetric gauge theory, which naturally supports a soliton of codimension
one, a domain wall. We first discuss separate localization of matter fields and
gauge fields on the world-volume of the domain wall and then we present two
explicit five-dimensional models, where both matter fields and gauge fields are
localized together with minimal interactions. We show that matter fields
localize in the adjoint representation of the non-Abelian gauge group and we
calculate the effective interaction Lagrangian of these matter fields up to the
second order in derivatives. We discuss similarities of our models with
effective models describing pions in QCD and with D-branes from string theory.
|
We study the effects of the degree-degree correlations on the pressure
congestion J when we apply a dynamical process on scale free complex networks
using the gradient network approach. We find that the pressure congestion for
disassortative (assortative) networks is lower (higher) than that for
uncorrelated networks, which allows us to affirm that disassortative networks
enhance transport through them. This result agrees with the fact that many
real-world transportation networks naturally evolve toward this kind of correlation. We
explain our results showing that for the disassortative case the clusters in
the gradient network turn out to be as much elongated as possible, reducing the
pressure congestion J and observing the opposite behavior for the assortative
case. Finally we apply our model to real world networks, and the results agree
with our theoretical model.
|
The decay of an inner-shell vacancy in an atom through radiative and
non-radiative transitions leads to final charged ions. The de-excitation decay
of 3s, 3p and 3d vacancies in Kr atoms is calculated using a Monte Carlo
simulation method. The vacancy cascade pathway resulting from the de-excitation
decay of a deep core hole in the 3s subshell of Kr atoms is discussed. The
generation of spectator vacancies during the vacancy cascade development gives
rise to Auger satellite spectra. The last transitions of the de-excitation decay
of 3s, 3p and 3d holes lead to specific charged ions. Dirac-Fock-Slater wave
functions are adapted to calculate radiative and non-radiative transition
probabilities. The intensity of Kr^{4+} ions is high for the 3s hole state,
whereas Kr^{3+} and Kr^{2+} ions have the highest intensities for the 3p and 3d
hole states, respectively. The present results for the ion charge state
distributions agree well with the experimental data.
|
The analysis of the intraday dynamics of correlations among high-frequency
returns is challenging due to the presence of asynchronous trading and market
microstructure noise. Both effects may lead to significant data reduction and
may severely underestimate correlations if traditional methods for
low-frequency data are employed. We propose to model intraday log-prices
through a multivariate local-level model with score-driven covariance matrices
and to treat asynchronicity as a missing value problem. The main advantages of
this approach are: (i) all available data are used when filtering correlations,
(ii) market microstructure noise is taken into account, (iii) estimation is
performed through standard maximum likelihood methods. Our empirical analysis,
performed on 1-second NYSE data, shows that opening hours are dominated by
idiosyncratic risk and that a market factor progressively emerges in the second
part of the day. The method can be used as a nowcasting tool for high-frequency
data, allowing one to study the real-time response of covariances to macro-news
announcements and to build intraday portfolios with very short optimization
horizons.
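As an illustration of the missing-value treatment described above, here is a
minimal univariate sketch of a local-level model filtered with a Kalman
recursion, in which asynchronous ticks appear as NaN observations and simply
skip the update step. The paper's model is multivariate and score-driven; the
function and parameter names below are illustrative only.

```python
import numpy as np

def local_level_filter(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for a local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t.
    NaN entries in y (asynchronous ticks) are treated as missing: the update
    step is skipped and only the prediction step is carried out."""
    m, p = m0, p0
    filtered = np.empty(len(y), dtype=float)
    for t, obs in enumerate(y):
        # prediction step: state variance grows by the state noise q
        p = p + q
        if not np.isnan(obs):
            # update step, performed only when a price is actually observed
            k = p / (p + r)          # Kalman gain
            m = m + k * (obs - m)
            p = (1.0 - k) * p
        filtered[t] = m
    return filtered

# toy usage: a random-walk log-price observed with noise, with ~60% missing ticks
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(scale=0.01, size=500))
y = x + rng.normal(scale=0.02, size=500)
y[rng.random(500) < 0.6] = np.nan
print(local_level_filter(y, q=1e-4, r=4e-4)[-5:])
```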
|
In the former part of this paper, we summarize our previous results on
infinite series involving the hyperbolic sine function, with a particular
focus on the hyperbolic sine analogue of Eisenstein series. These are based on
classical results given by Cauchy, Mellin and Kronecker. In the latter
part, we give new formulas for some infinite series involving the hyperbolic
cosine function.
|
Einstein's 1905 derivation of E = mc^2 has been criticized for being
circular. Although such criticisms have been challenged, it is certainly true
that the reasoning in Einstein's original derivation is not at all obvious and
could have been made clearer. This article describes a way of doing so.
|
The SHINE program is a large high-contrast near-infrared survey of 600 young,
nearby stars. It is aimed at searching for and characterizing new planetary
systems using VLT/SPHERE's unprecedented high-contrast and high-angular
resolution imaging capabilities. It also aims to place statistical
constraints on the occurrence and orbital properties of the giant planet
population at large orbits as a function of the stellar host mass and age, in
order to test planet formation theories. We use the IRDIS dual-band imager and
the IFS integral field spectrograph of SPHERE to acquire high-contrast coronagraphic
differential near-infrared images and spectra of the young A2 star HIP65426. It
is a member of the ~17 Myr old Lower Centaurus-Crux association. At a
separation of 830 mas (92 au projected) from the star, we detect a faint red
companion. Multi-epoch observations confirm that it shares common proper motion
with HIP65426. Spectro-photometric measurements extracted with IFS and IRDIS
between 0.95 and 2.2um indicate a warm, dusty atmosphere characteristic of
young low surface-gravity L5-L7 dwarfs. Hot-start evolutionary models predict a
luminosity consistent with a 6-12 MJup, Teff=1300-1600 K and R=1.5 RJup giant
planet. Finally, the comparison with Exo-REM and PHOENIX BT-Settl synthetic
atmosphere models gives consistent effective temperatures but with slightly
higher surface gravity solutions of log(g)=4.0-5.0 with smaller radii (1.0-1.3
RJup). Given its physical and spectral properties, HIP65426b occupies a rather
unique position in terms of age, mass and spectral type among the currently
known imaged planets. It represents a particularly interesting case to study
the presence of clouds as a function of particle size, composition, and
location in the atmosphere, to search for signatures of non-equilibrium
chemistry, and finally to test the theory of planet formation and evolution.
|
Multi-access edge computing provides local resources in mobile networks as
the essential means for meeting the demands of emerging ultra-reliable
low-latency communications. At the edge, dynamic computing requests require
advanced resource management for adaptive network slicing, including resource
allocations, function scaling and load balancing to utilize only the necessary
resources in resource-constrained networks. Recent solutions are designed for a
static number of slices, so the costly optimization process must be repeated
whenever the number of slices changes. In addition, these solutions aim to
maximize instantaneous rewards, neglecting long-term resource scheduling. Unlike
these efforts, we propose an algorithmic approach based on
multi-agent deep deterministic policy gradient (MADDPG) for optimizing resource
management for edge network slicing. Our objective is two-fold: (i) maximizing
long-term network slicing benefits in terms of delay and energy consumption,
and (ii) adapting to slice number changes. Through simulations, we demonstrate
that MADDPG outperforms benchmark solutions including a static slicing-based
one from the literature, achieving stable and high long-term performance.
Additionally, we leverage incremental learning to facilitate a dynamic number
of edge slices, with enhanced performance compared to pre-trained base models.
Remarkably, this approach yields superior reward performance while saving
approximately 90% of training time costs.
|
The brain's self-monitoring of activities, including internal activities -- a
functionality that we refer to as awareness -- has been suggested as a key
element of consciousness. Here we investigate whether the presence of an
inner-eye-like process (monitor) that supervises the activities of a number of
subsystems (operative agents) engaged in the solution of a problem can improve
the problem-solving efficiency of the system. The problem is to find the global
maximum of a NK fitness landscape and the performance is measured by the time
required to find that maximum. The operative agents explore blindly the fitness
landscape and the monitor provides them with feedback on the quality (fitness)
of the proposed solutions. This feedback is then used by the operative agents
to bias their searches towards the fittest regions of the landscape. We find
that a weak feedback between the monitor and the operative agents improves the
performance of the system, regardless of the difficulty of the problem, which
is gauged by the number of local maxima in the landscape. For easy problems
(i.e., landscapes without local maxima), the performance improves monotonically
as the feedback strength increases, but for difficult problems, there is an
optimal value of the feedback strength beyond which the system performance
degrades very rapidly.
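A minimal sketch of the kind of experiment described above, assuming a standard
NK landscape and a pool of blind bit-flip searchers whose moves are vetted by
the monitor with probability equal to the feedback strength. The names, the
population update rule, and the acceptance rule are illustrative
simplifications, not the authors' exact model.

```python
import itertools
import numpy as np

def nk_fitness(genome, contrib, K):
    """Average of N lookup-table contributions, each depending on a locus and
    its K right-hand neighbours (circular genome)."""
    N = len(genome)
    total = 0.0
    for i in range(N):
        idx = 0
        for j in range(K + 1):
            idx = (idx << 1) | int(genome[(i + j) % N])
        total += contrib[i, idx]
    return total / N

def time_to_solve(N=10, K=2, n_agents=10, feedback=0.5, max_steps=20000, seed=0):
    """Number of steps the agent pool needs to hit the global maximum. Each step
    one agent flips a random bit; with probability `feedback` the monitor's
    evaluation is used and downhill moves are rejected, otherwise the move is blind."""
    rng = np.random.default_rng(seed)
    contrib = rng.random((N, 2 ** (K + 1)))
    best = max(nk_fitness(np.array(g), contrib, K)
               for g in itertools.product((0, 1), repeat=N))
    pop = rng.integers(0, 2, size=(n_agents, N))
    fit = np.array([nk_fitness(g, contrib, K) for g in pop])
    for step in range(1, max_steps + 1):
        a = rng.integers(n_agents)
        trial = pop[a].copy()
        trial[rng.integers(N)] ^= 1
        f_trial = nk_fitness(trial, contrib, K)
        if rng.random() > feedback or f_trial >= fit[a]:  # blind vs monitored move
            pop[a], fit[a] = trial, f_trial
        if np.isclose(fit[a], best):
            return step
    return max_steps

print([time_to_solve(feedback=w) for w in (0.0, 0.3, 0.9)])
```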
|
Treatment decisions for brain metastatic disease rely on knowledge of the
primary organ site, which is currently determined with biopsy and histology.
Here we develop a novel deep learning approach for accurate non-invasive digital
histology with whole-brain MRI data. Our IRB-approved single-site retrospective
study comprised patients (n=1,399) referred for MRI treatment-planning
and gamma knife radiosurgery over 21 years. Contrast-enhanced T1-weighted and
T2-weighted Fluid-Attenuated Inversion Recovery brain MRI exams (n=1,582) were
preprocessed and input to the proposed deep learning workflow for tumor
segmentation, modality transfer, and primary site classification into one of
five classes. Ten-fold cross-validation generated overall AUC of 0.878
(95%CI:0.873,0.883), lung class AUC of 0.889 (95%CI:0.883,0.895), breast class
AUC of 0.873 (95%CI:0.860,0.886), melanoma class AUC of 0.852
(95%CI:0.842,0.862), renal class AUC of 0.830 (95%CI:0.809,0.851), and other
class AUC of 0.822 (95%CI:0.805,0.839). These data establish that whole-brain
imaging features are sufficiently discriminative to allow accurate diagnosis of the primary
organ site of malignancy. Our end-to-end deep radiomic approach has great
potential for classifying metastatic tumor types from whole-brain MRI images.
Further refinement may offer an invaluable clinical tool to expedite primary
cancer site identification for precision treatment and improved outcomes.
|
In this paper, we develop an in-depth analysis of non-reversible Markov
chains on denumerable state space from a similarity orbit perspective. In
particular, we study the class of Markov chains whose transition kernel is in
the similarity orbit of a normal transition kernel, such as the one of
birth-death chains or reversible Markov chains. We start by identifying a set
of sufficient conditions for a Markov chain to belong to the similarity orbit
of a birth-death one. As by-products, we obtain a spectral representation in
terms of non-self-adjoint resolutions of identity in the sense of Dunford [21]
and offer a detailed analysis on the convergence rate, separation cutoff and
${\rm{L}}^2$-cutoff of this class of non-reversible Markov chains. We also look
into the problem of estimating the integral functionals from discrete
observations for this class. In the last part of this paper, we investigate a
particular similarity orbit of reversible Markov kernels, that we call the pure
birth orbit, and analyze various possibly non-reversible variants of classical
birth-death processes in this orbit.
|
We have investigated the electronic structure of the $p$-type diluted
magnetic semiconductor In$_{1-x}$Mn$_x$As by photoemission spectroscopy. The Mn
3$d$ partial density of states is found to be basically similar to that of
Ga$_{1-x}$Mn$_x$As. However, the impurity-band like states near the top of
the valence band have not been observed by angle-resolved photoemission
spectroscopy, unlike in Ga$_{1-x}$Mn$_x$As. This difference would explain the
differences in the transport, magnetic and optical properties of
In$_{1-x}$Mn$_x$As and Ga$_{1-x}$Mn$_x$As. The different electronic
structures are attributed to the weaker Mn 3$d$ - As 4$p$ hybridization in
In$_{1-x}$Mn$_x$As than in Ga$_{1-x}$Mn$_x$As.
|
Following previous works on A. Pr\'astaro's formulation of the algebraic
topology of quantum (super) PDE's, it is proved that a canonical Heyting
algebra ({\em integral Heyting algebra}) can be associated to any quantum PDE.
This is directly related to the structure of its global solutions, and it allows
us to gain a new insight into the concept of quantum logic for microworlds.
Furthermore, Pr\'astaro's geometric theory of quantum PDE's is applied to the
new category of {\em quantum hypercomplex manifolds}, related to the well-known
Cayley-Dickson construction for algebras. Theorems of existence for local and
global solutions are obtained for (singular) PDE's in this new category of
noncommutative manifolds. Finally, the concept of exotic PDE's, recently
introduced by A. Pr\'astaro, is extended to quantum PDE's, and a smooth quantum
version of the (generalized) Poincar\'e conjecture is given as well. These
results extend previous ones on the quantum (generalized) Poincar\'e conjecture
given by A. Pr\'astaro.
|
Given a graph $G$ with $n$ nodes and two nodes $u,v\in G$, the {\em
CoSimRank} value $s(u,v)$ quantifies the similarity between $u$ and $v$ based
on graph topology. Compared to SimRank, CoSimRank is shown to be more accurate
and effective in many real-world applications, including synonym expansion,
lexicon extraction, and entity relatedness in knowledge graphs. The computation
of all pairwise CoSimRanks in $G$ is highly expensive and challenging. Existing
solutions all focus on devising approximate algorithms for the computation of
all pairwise CoSimRanks. To attain a desired absolute accuracy guarantee
$\epsilon$, the state-of-the-art approximate algorithm for computing all
pairwise CoSimRanks requires $O(n^3\log_2(\ln(\frac{1}{\epsilon})))$ time,
which is prohibitively expensive even when $\epsilon$ is large. In this
paper, we propose \rsim, a fast randomized algorithm for computing all pairwise
CoSimRank values. The basic idea of \rsim is to approximate the $n\times n$
matrix multiplications in CoSimRank computation via random projection.
Theoretically, \rsim runs in
$O(\frac{n^2\ln(n)}{\epsilon^2}\ln(\frac{1}{\epsilon}))$ time and meanwhile
ensures an absolute error of at most $\epsilon$ in each CoSimRank value in $G$
with a high probability. Extensive experiments using six real graphs
demonstrate that \rsim is orders of magnitude faster than the state
of the art. In particular, on a million-edge Twitter graph, \rsim answers the
$\epsilon$-approximate ($\epsilon=0.1$) all pairwise CoSimRank query within 4
hours, using a single commodity server, while existing solutions fail to
terminate within a day.
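The random-projection idea can be illustrated with a generic sketch of matrix
product approximation. This is only the basic building block, not the \rsim
algorithm itself; the function name and the choice of a Rademacher sketch are
assumptions made for the example.

```python
import numpy as np

def approx_matmul(A, B, k, seed=0):
    """Approximate A @ B via random projection: insert a k x n sketching matrix
    S with E[S.T @ S] = I, so A @ B ~ (A @ S.T) @ (S @ B). The cost drops from
    O(n^3) to O(k n^2) at the price of an error shrinking like 1/sqrt(k)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    S = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)
    return (A @ S.T) @ (S @ B)

# toy usage: compare the sketch against the exact product
n, k = 500, 200
rng = np.random.default_rng(1)
A, B = rng.random((n, n)) / n, rng.random((n, n)) / n
exact, approx = A @ B, approx_matmul(A, B, k)
print(np.max(np.abs(exact - approx)))   # small absolute error, shrinking as k grows
```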
|
Future galaxy redshift surveys aim to measure cosmological quantities from
the galaxy power spectrum. A prime example is the detection of baryonic
acoustic oscillations (BAOs), providing a standard ruler to measure the dark
energy equation of state, w(z), to high precision. The strongest practical
limitation for these experiments is how quickly accurate redshifts can be
measured for sufficient galaxies to map the large-scale structure. A promising
strategy is to target emission-line (i.e. star-forming) galaxies at
high-redshift (z~0.5-2); not only is the space density of this population
increasing out to z~2, but also emission-lines provide an efficient method of
redshift determination. Motivated by the prospect of future dark energy surveys
targeting H-alpha emitters at near-infrared wavelengths (i.e. z>0.5), we use
the latest empirical data to model the evolution of the H-alpha luminosity
function out to z~2, and thus provide predictions for the abundance of H-alpha
emitters for practical limiting fluxes. We caution that the estimates presented
in this work must be tempered by an efficiency factor, epsilon, giving the
redshift success rate from these potential targets. For a range of practical
efficiencies and limiting fluxes, we provide an estimate of nP_{0.2}, where n
is the 3D galaxy number density and P_{0.2} is the galaxy power spectrum
evaluated at k=0.2h/Mpc. Ideal surveys must provide nP_{0.2}>1 in order to
balance shot-noise and cosmic variance errors. We show that a realistic
emission-line survey (epsilon=0.5) could achieve nP_{0.2}=1 out to z~1.5 with a
limiting flux of 10^{-16} erg/s/cm^2. If the limiting flux is a factor of 5
brighter, then this goal can only be achieved out to z~0.5, highlighting the
importance of survey depth and efficiency in cosmological redshift surveys.
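As a brief aside on why $nP_{0.2}>1$ is the relevant threshold (a standard
mode-counting argument, not a result specific to this abstract): the fractional
error on the power spectrum per Fourier mode scales as
\[
\frac{\sigma_P}{P} \;\propto\; \frac{P + 1/n}{P} \;=\; 1 + \frac{1}{nP},
\]
so once $nP \gtrsim 1$ at the scale of interest ($k = 0.2\,h\,{\rm Mpc}^{-1}$
here), the shot-noise term $1/n$ no longer dominates over the cosmic-variance
term $P$, and surveying additional volume becomes more valuable than surveying
more objects per unit volume.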
|
Accurate estimation of stereo camera extrinsic parameters is key to
guaranteeing the performance of stereo matching algorithms. In prior works, the
online self-calibration of stereo cameras has commonly been formulated as a
specialized visual odometry problem, without taking into account the principles
of stereo rectification. In this paper, we first delve deeply into the concept
of rectifying homography, which serves as the cornerstone for the development
of our novel stereo camera online self-calibration algorithm, for cases where
only a single pair of images is available. Furthermore, we introduce a simple
yet effective solution for global optimum extrinsic parameter estimation in the
presence of stereo video sequences. Additionally, we emphasize the
impracticality of using three Euler angles and three components in the
translation vectors for performance quantification. Instead, we introduce four
new evaluation metrics to quantify the robustness and accuracy of extrinsic
parameter estimation, applicable to both single-pair and multi-pair cases.
Extensive experiments conducted across indoor and outdoor environments using
various experimental setups validate the effectiveness of our proposed
algorithm. The comprehensive evaluation results demonstrate its superior
performance in comparison to the baseline algorithm. Our source code, demo
video, and supplement are publicly available at mias.group/StereoCalibrator.
|
A fast and efficient numerical-analytical approach is proposed for modeling
complex collective behaviour in accelerator/plasma physics models based on the
BBGKY hierarchy of kinetic equations. Our calculations are based on variational
and multiresolution approaches in the bases of polynomial tensor algebras of
generalized coherent states/wavelets. We construct the representation for the
hierarchy of reduced distribution functions via a multiscale decomposition in
highly localized eigenmodes. Numerical modeling shows the creation of different
internal coherent structures from localized modes, which are related to
stable/unstable types of behaviour and to the formation of the corresponding
patterns (waveletons).
|
We propose an analytical framework able to investigate discussions about
polarized topics in online social networks from many different angles. The
framework supports the analysis of social networks along several dimensions:
time, space and sentiment. We show that the proposed analytical framework and
the methodology can be used to mine knowledge about the perception of complex
social phenomena. We selected the refugee crisis discussions over Twitter as
the case study. This difficult and controversial topic is an increasingly
important issue for the EU. The raw stream of tweets is enriched with space
information (user and mentioned locations), and sentiment (positive vs.
negative) w.r.t. refugees. Our study shows differences in positive and negative
sentiment across EU countries, in particular in the UK, and, by matching events,
locations and perceptions, it highlights opinion dynamics and common prejudices
regarding refugees.
|
The large columns of dusty gas enshrouding and fuelling star-formation in
young, massive stellar clusters may render such systems optically thick to
radiation well into the infrared. This raises the prospect that both "direct"
radiation pressure produced by absorption of photons leaving stellar surfaces
and "indirect" radiation pressure from photons absorbed and then re-emitted by
dust grains may be important sources of feedback in such systems. Here we
evaluate this possibility by deriving the conditions under which a spheroidal,
self-gravitating, mixed gas-star cloud can avoid catastrophic disruption by the
combined effects of direct and indirect radiation pressure. We show that
radiation pressure sets a maximum star cluster formation efficiency of
$\epsilon_{\rm max} \sim 0.9$ at a (very large) gas surface density of $\sim
10^5 M_\odot$ pc$^{-2} (Z_\odot/Z) \simeq 20$ g cm$^{-2} (Z_\odot/Z)$, but that
gas clouds above this limit undergo significant radiation-driven expansion
during star formation, leading to a maximum stellar surface density very near
this value for all star clusters. Data on the central surface mass density of
compact stellar systems, while sparse and partly confused by dynamical effects,
are broadly consistent with the existence of a metallicity-dependent
upper limit comparable to this value. Our results imply that this limit may
preclude the formation of the progenitors of intermediate-mass black holes for
systems with $Z \gtrsim 0.2 Z_\odot$.
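For reference, a quick unit check (using $M_\odot \simeq 1.99\times10^{33}$ g
and $1\ {\rm pc} \simeq 3.09\times10^{18}$ cm) confirms the equivalence of the
two quoted surface densities:
\[
10^{5}\,M_\odot\,{\rm pc}^{-2}
\;=\; \frac{10^{5}\times 1.99\times10^{33}\ {\rm g}}{\left(3.09\times10^{18}\ {\rm cm}\right)^{2}}
\;\simeq\; 21\ {\rm g\,cm^{-2}},
\]
consistent with the $\simeq 20$ g cm$^{-2}$ $(Z_\odot/Z)$ value quoted above.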
|
Long waves pose many important challenges in ocean and coastal
engineering, including but not limited to harbor resonance and run-up.
Therefore, understanding and modeling their dynamics is crucially important.
Although their dynamics over various types of geometries are well-studied in
the literature, the study of the geometries with power-law variations remains
an open problem in this setting. With this motivation, in this paper, we derive
the exact analytical solutions of the long-wave equation over nonlinear depth
and breadth profiles having power-law forms given by $h(x)=c_1 x^a$ and
$b(x)=c_2 x^c$, where the parameters $c_1, c_2, a, c$ are some constants. We
show that for these types of power-law forms of depth and breadth profiles, the
long-wave equation admits solutions in terms of Bessel functions and
Cauchy-Euler series. We also derive the seiching periods and resonance
conditions for these forms of depth and breadth variations. Our results can be
used to investigate the long-wave dynamics and their envelope characteristics
over equilibrium beach profiles, the effects of nonlinear harbor entrances and
angled nonlinear seawall breadth variations in the power-law forms on these
dynamics, and the effects of reconstruction, geomorphological changes,
sedimentation, and dredging to harbor resonance, to the shift in resonance
periods and to the seiching characteristics in lakes and barrages.
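For concreteness, one common linearized form of the long-wave equation over a
channel of variable depth $h(x)$ and breadth $b(x)$ (a sketch consistent with
the abstract, not necessarily the exact equation used in the paper) is
\[
\frac{\partial^{2}\eta}{\partial t^{2}}
\;=\; \frac{g}{b(x)}\,\frac{\partial}{\partial x}\!\left[b(x)\,h(x)\,\frac{\partial \eta}{\partial x}\right].
\]
Substituting $h(x)=c_1 x^a$, $b(x)=c_2 x^c$ and a harmonic ansatz
$\eta(x,t)=F(x)\,e^{-i\omega t}$ reduces this to the equidimensional ordinary
differential equation
\[
c_1\,x^{a+c}F'' + c_1(a+c)\,x^{a+c-1}F' + \frac{\omega^{2}}{g}\,x^{c}F = 0,
\]
whose solutions can be written in terms of Bessel functions for $a\neq 2$ and,
in the degenerate case $a=2$, as Cauchy-Euler power series, in line with the
solution classes mentioned above.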
|
Separatrices divide the phase space of some holomorphic dynamical systems
into separate basins of attraction or 'stability regions' for distinct fixed
points. 'Bundling' (high density) and mutual 'repulsion' of trajectories are
often observed at separatrices in phase portraits, but their global
mathematical characterisation is a difficult problem. For 1-D complex
polynomial dynamical systems we prove the existence of a separatrix for each
critical point at infinity via transformation to the Poincar\'e sphere. We show
that introduction of complex time allows a significantly extended view with the
study of corresponding Riemann surface solutions, their topology, geometry and
their bifurcations/ramifications related to separatrices. We build a bridge to
the Riemann $\xi$-function and present a polynomial approximation of its Newton
flow solution manifold with precision depending on the polynomial degree.
|
Interactive exploration of large, multidimensional datasets plays a very
important role in various scientific fields. It makes it possible not only to
identify important structural features and forms, such as clusters of vertices
and their connection patterns, but also to evaluate their interrelationships in
terms of position, distance, shape and connection density. We argue that the
visualization of multidimensional data is well approximated by the problem of
two-dimensional embedding of undirected nearest-neighbor graphs. The size of
complex networks is a major challenge for today's computer systems and still
requires more efficient data embedding algorithms. Existing reduction methods
are too slow and do not allow interactive manipulation. We show that
high-quality embeddings are produced with minimal time and memory complexity.
We present very efficient IVHD algorithms (CPU and GPU) and compare them with
the latest and most popular dimensionality reduction methods. We show that the
memory and time requirements are dramatically lower than for the baseline codes. At the
cost of a slight degradation in embedding quality, IVHD preserves the main
structural properties of the data well with a much lower time budget. We also
present a meta-algorithm that allows the use of any unsupervised data embedding
method in a supervised manner.
|
Many datasets are biased, namely they contain easy-to-learn features that are
highly correlated with the target class only in the dataset but not in the true
underlying distribution of the data. For this reason, learning unbiased models
from biased data has become a highly relevant research topic in recent years.
In this work, we tackle the problem of learning representations that are robust
to biases. We first present a margin-based theoretical framework that allows us
to clarify why recent contrastive losses (InfoNCE, SupCon, etc.) can fail when
dealing with biased data. Based on that, we derive a novel formulation of the
supervised contrastive loss (epsilon-SupInfoNCE), providing more accurate
control of the minimal distance between positive and negative samples.
Furthermore, thanks to our theoretical framework, we also propose FairKL, a new
debiasing regularization loss, that works well even with extremely biased data.
We validate the proposed losses on standard vision datasets including CIFAR10,
CIFAR100, and ImageNet, and we assess the debiasing capability of FairKL with
epsilon-SupInfoNCE, reaching state-of-the-art performance on a number of biased
datasets, including real instances of biases in the wild.
|
Two-dimensional (2D) metal halides have received increasing attention because
of their electronic and optoelectronic properties. Recently, researchers have
become interested in the thermoelectric properties of metal halide monolayers
because of their ultralow lattice thermal conductivity, high Seebeck coefficient
and figure of merit. Here, we have investigated the thermoelectric and
optoelectronic properties of XI$_2$ (X=Sn and Si) monolayers with the help of
density functional theory and the Boltzmann transport equation. The structural
parameters have been optimized with relaxation of the atomic positions.
Excellent thermoelectric and optical properties have been obtained for both the
SnI$_2$ and SiI$_2$ monolayers. For SnI$_2$, an indirect bandgap of 2.06 eV was
observed and the absorption peak was found at 4.68 eV; for this material, the
highest ZT value of 0.84 was calculated for p-type doping at 600 K. Similarly,
for SiI$_2$ a comparatively low indirect bandgap of 1.63 eV was observed, and
the absorption peak was obtained at 4.86 eV. The calculated ZT for SiI$_2$ was
0.87 at 600 K. The high absorbance and ZT values of both crystals suggest that
they are promising candidates for optoelectronic and thermoelectric devices.
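For context, the dimensionless figure of merit quoted above is the standard one
used in such DFT + Boltzmann-transport studies,
\[
ZT \;=\; \frac{S^{2}\sigma\,T}{\kappa_{e}+\kappa_{l}},
\]
where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $T$
the absolute temperature, and $\kappa_{e}$, $\kappa_{l}$ the electronic and
lattice thermal conductivities; the ultralow lattice thermal conductivity
highlighted above is what drives the large $ZT$ values at 600 K.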
|
Recent highlights from the anisotropic flow and the azimuthal correlation
measurements in heavy-ion collisions at the LHC are presented. Various flow
harmonics measured for the charged and identified particles versus transverse
momentum, pseudo-rapidity, and the collision centrality are reported. New
experimental probes of local parity violation at LHC energies using
charge-dependent azimuthal correlations are also discussed.
|
The theoretical model by Sorasio et al. (2006) for the steady state Mach
number of electrostatic shocks formed in the interaction of two plasma slabs of
arbitrary density and temperature is generalized for relativistic electron and
non-relativistic ion temperatures. We find that the relativistic correction
leads to lower Mach numbers, and as a consequence, ions are reflected with
lower energies. The steady state bulk velocity of the downstream population is
introduced as an additional parameter to describe the transition between the
minimum and maximum Mach numbers as a function of the initial density and
temperature ratios. In order to transform the soliton-like solution in the
upstream region into a shock, a population of reflected ions is considered and
differences to a zero-ion temperature model are discussed.
|
The ARGO-YBJ experiment is a full coverage air shower detector operated at
the Yangbajing International Cosmic Ray Observatory. The detector was in
stable data taking in its full configuration from November 2007 to February
2013. The high altitude, high segmentation and fine space-time resolution
offer the possibility of exploring the cosmic ray energy spectrum over a very wide
range, from a few TeV up to the PeV region. The high segmentation allows a
detailed measurement of the lateral distribution, which can be used in order to
discriminate showers produced by light and heavy elements. In this work we
present the measurement of the cosmic ray light component spectrum in the
energy range 3-3000 TeV. The analysis has been carried out by using a
two-dimensional unfolding method based on the Bayes' theorem.
|
The main contribution of this paper is to prove the subexponential tail
equivalence of the stationary queue length distributions in the BMAP/GI/1
queues with and without retrials. We first present a
stochastic-decomposition-like result of the stationary queue length in the
BMAP/GI/1 retrial queue, which is an extension of the stochastic decomposition
of the stationary queue length in the M${}^X$/GI/1 retrial queue. The
stochastic-decomposition-like result shows that the stationary queue length
distribution in the BMAP/GI/1 retrial queue is decomposed into two parts: the
stationary conditional queue length distribution given that the server is idle;
and a certain matrix sequence associated with the stationary queue length
distribution in the corresponding standard BMAP/GI/1 queue (without retrials).
Using the stochastic-decomposition-like result and matrix analytic methods, we
prove the subexponential tail equivalence of the stationary queue length
distributions in the BMAP/GI/1 queues with and without retrials. This tail
equivalence result does not necessarily require that the size of an arriving
batch is light-tailed, unlike Yamamuro's result for the M${}^X$/GI/1 retrial
queue (Queueing Syst. 70:187--205, 2012). As a by-product, the key lemma for the
proof of the main theorem presents a subexponential asymptotic formula for the
stationary distribution of a level-dependent M/G/1-type Markov chain, which is
the first reported result on the subexponential asymptotics of level-dependent
block-structured Markov chains.
|
This paper addresses the management of water flow in a rectangular open
channel, considering the dynamic nature of both the channel's bathymetry and
the suspended sediment particles caused by entrainment and deposition effects.
The control-oriented model under study is a set of coupled nonlinear partial
differential equations (PDEs) describing conservation of mass and momentum
while accounting for constitutive relations that govern sediment erosion and
deposition phenomena. The proposed boundary control problem presents a fresh
perspective in water canal management and expands Saint-Venant Exner (SVE)
control frameworks by integrating dynamics related to the transport of fine
particles. After linearization, PDE backstepping design is employed to
stabilize the bathymetry and the water dynamics together with the
concentration of suspended sediment particles. Two underflow sluice gates are
used for flow control at the upstream and downstream boundaries with only the
downstream component being actuated. An observer-based backstepping control
design is carried out for the downstream gate using state measurement at the
upstream gate to globally exponentially stabilize the linearized system to a
desired equilibrium point in $\mathscr{L}^2$ sense. The stability analysis is
performed on the linearized model which is a system of four coupled PDEs, three
of which are rightward convecting and one leftward. The proposed control design
has the potential to facilitate efficient reservoir flushing operations.
Consistent simulation results are presented to illustrate the feasibility of
the designed control law.
|
Cores and filamentary structures are the prime birthplaces of stars, and play
key roles in the process of star formation. Latest advances in the methods of
multi-scale source and filament extraction, and in making high-resolution
column density map from $Herschel$ multi-wavelength observations enable us to
detect the filamentary network structures in highly complex molecular cloud
environments. The statistics for physical parameters shows that core mass
strongly correlates with core dust temperature, and $M/L$ strongly correlates
with $M/T$, which is in line with the prediction of the blackbody radiation,
and can be used to trace evolutionary sequence from unbound starless cores to
robust prestellar cores. Crest column densities of the filamentary structures
are clearly related with mass per unit length ($M_{\rm line}$), but are
uncorrelated by three orders ranging from $\sim 10^{20}$ to $\sim 10^{22}$ $
\rm cm^{-2}$ with widths. Full width at half maximum (FWHM) have a median value
of 0.15 pc, which is consistent with the 0.1 pc typical inner width of the
filamentary structures reported by previous research. We find $\sim $70\% of
robust prestellar cores (135/199) embedded in supercritical filaments with
$M_{\rm line}>16~M_{\odot}/{\rm pc}$, which implies that the gravitationally
bound cores come from the fragmentation of supercritical filaments. On the
basis of the observational evidence that the power-law part of the probability
distribution function (PDF) in the south of Perseus is flatter than in the
north, that the number of YSOs is significantly smaller than in the north, and
that the dust temperatures differ, we infer that the southern region is more
gravitationally bound than the northern region.
|
The smallness of the neutrino masses can be well understood within the seesaw
mechanism. We analyse two cases of the minimal extension of the standard model
when one or two right-handed fields are added to the three left-handed fields.
A second Higgs doublet is included in our model. We calculate the one-loop
radiative corrections to the mass parameters which produce mass terms for the
neutral leptons. In both cases we numerically analyse light neutrino masses as
functions of the heavy neutrino masses. Parameters of the model are varied to
find light neutrino masses that are compatible with experimental data of solar
$\Delta m^2_\odot$ and atmospheric $\Delta m^2_\mathrm{atm}$ neutrino
oscillations for normal and inverted hierarchy.
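For orientation, the tree-level type-I seesaw relation that underlies this kind
of analysis (written here generically, not in the paper's specific notation) is
\[
m_\nu \;\simeq\; -\,m_D\,M_R^{-1}\,m_D^{T},
\]
where $m_D$ is the Dirac mass matrix connecting the left- and right-handed
fields and $M_R$ is the heavy Majorana mass matrix; the light neutrino masses
are thus suppressed by the ratio $m_D^{2}/M_R$, and the one-loop corrections
discussed above modify this leading-order relation.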
|
We address the online combinatorial choice of weight multipliers for
multi-objective optimization of many loss terms parameterized by neural networks
via a probabilistic graphical model (PGM) for the joint model parameter and
multiplier evolution process, with a hypervolume based likelihood promoting
multi-objective descent. The corresponding parameter and multiplier estimation
as a sequential decision process is then cast into an optimal control problem,
where the multi-objective descent goal is dispatched hierarchically into a
series of constraint optimization sub-problems. The subproblem constraint
automatically adapts itself according to Pareto dominance and serves as the
setpoint for the low level multiplier controller to schedule loss landscapes
via output feedback of each loss term. Our method is multiplier-free and
operates at the timescale of epochs, thus saving substantial computational
resources compared to full-training-cycle multiplier tuning. It also
circumvents the excessive memory requirements and heavy computational burden of
existing multi-objective deep learning methods. We applied it to domain
invariant variational auto-encoding with 6 loss terms on the PACS domain
generalization task, and observed robust performance across a range of
controller hyperparameters, as well as different multiplier initial conditions,
outperforming other multiplier scheduling methods. We offer a modular
implementation of our method, which admits extension to custom definitions of
many loss terms.
|
We have examined the occurrence of Extremely Red Objects (EROs) in the fields
of 13 luminous quasars (11 radio-loud and two radio-quiet) at 1.8 < z < 3.0.
The average surface density of K_s<=19 mag EROs is two to three times higher than
in large, random-field surveys, and the excess is significant at the $\approx
3$ sigma level even after taking into account that the ERO distribution is
highly inhomogeneous. This is the first systematic investigation of the surface
density of EROs in the fields of radio-loud quasars above z=2, and shows that a
large number of the fields contain clumps of EROs, similar to what is seen only
in the densest areas in random-field surveys. The high surface densities and
angular distribution of EROs suggest that the excess originates in high-z
galaxy concentrations, possibly young clusters of galaxies. The fainter EROs at
K_s>19 mag show some evidence of being more clustered in the immediate 20
arcsec region around the quasars, suggesting an association with the
quasars. Comparing with predictions from spectral synthesis models, we find that
if the $K_s\approx19$ mag ERO excess is associated with the quasars at
$z\approx2$, their magnitudes are typical of >~ L* passively evolving galaxies
formed at z~3.5 (Omega_m=0.3, Omega_l=0.7, and H0=70 km/s/Mpc). Another
interpretation of our results is that the excess originates in concentrations
of galaxies at $z\approx1$ lying along the line of sight to the quasars. If
this is the case, the EROs may be tracing massive structures responsible for a
magnification bias of the quasars.
|
We investigate the evolution of line of sight (LOS) blockage over both time
and space for vehicle-to-vehicle (V2V) channels. Using realistic vehicular
mobility and building and foliage locations from maps, we first perform LOS
blockage analysis to extract LOS probabilities in real cities and on highways
for varying vehicular densities. Next, to model the time evolution of LOS
blockage for V2V links, we employ a three-state discrete-time Markov chain
comprised of the following states: i) LOS; ii) non-LOS due to static objects
(e.g., buildings, trees, etc.); and iii) non-LOS due to mobile objects
(vehicles). We obtain state transition probabilities based on the evolution of
LOS blockage. Finally, we perform curve fitting and obtain a set of
distance-dependent equations for both LOS and transition probabilities. These
equations can be used to generate time-evolved V2V channel realizations for
representative urban and highway environments. Our results can be used to
perform highly efficient and accurate simulations without the need to employ
complex geometry-based models for link evolution.
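A minimal sketch of how such a fitted three-state chain can be used to generate
time-evolved link states. The transition matrix below is a made-up placeholder;
in the paper the probabilities are distance-dependent and obtained from the
map-based blockage analysis.

```python
import numpy as np

# Illustrative three-state chain: 0 = LOS, 1 = NLOS (static objects), 2 = NLOS (vehicles).
P = np.array([[0.90, 0.04, 0.06],
              [0.05, 0.90, 0.05],
              [0.10, 0.05, 0.85]])

def simulate_blockage(P, steps=1000, start=0, seed=0):
    """Generate one time-evolved LOS/NLOS state sequence for a V2V link."""
    rng = np.random.default_rng(seed)
    states = np.empty(steps, dtype=int)
    s = start
    for t in range(steps):
        states[t] = s
        s = rng.choice(3, p=P[s])   # draw the next state from the current row
    return states

seq = simulate_blockage(P)
print("empirical state occupancy:", np.bincount(seq, minlength=3) / len(seq))
```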
|
Nonlinear differential equations are derived that describe the time evolution
of the test particle coordinates during finite motions in the gravitational
field of oscillating dark matter. It is shown that in the weak field
approximation, the radial oscillations of a test particle and oscillations in
orbital motion are described by the Hill equation and the nonhomogeneous Hill
equation, respectively. In the case of scalar dark matter with a logarithmic
self-interaction, these equations are integrated numerically, and the
solutions are compared with the corresponding solutions of the original
nonlinear system to identify possible resonance effects.
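As a reminder of the structure referred to above (generic form, not the paper's
specific coefficients), the Hill equation and its nonhomogeneous counterpart read
\[
\ddot{r}(t) + \Omega^{2}(t)\,r(t) = 0,
\qquad
\ddot{r}(t) + \Omega^{2}(t)\,r(t) = f(t),
\qquad
\Omega^{2}(t+T) = \Omega^{2}(t),
\]
with a periodically varying coefficient $\Omega^{2}(t)$ inherited here from the
oscillating dark-matter background. Parametric (Mathieu-type) resonance bands
arise when the driving period $T$ is commensurate with the particle's natural
oscillation period, which is the kind of resonance effect the numerical
comparison above probes.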
|
It is possible to achieve an arbitrary amount of entanglement between two
atoms using only spontaneously emitted photons, linear optics, single photon
sources and projective measurements. This is in contrast to all current
experimental proposals for entangling two atoms, which are fundamentally
restricted to one entanglement bit or ebit.
|
With the establishment of the AUT University 12m radio telescope at
Warkworth, New Zealand has now become a part of the international Very Long
Baseline Interferometry (VLBI) community. A major product of VLBI observations
are images in the radio domain of astronomical objects such as Active Galactic
Nuclei (AGN). Using large geographical separations between radio antennas, very
high angular resolution can be achieved. Detailed images can be created using
the technique of VLBI Earth Rotation Aperture Synthesis. We review the current
process of VLBI radio imaging. In addition we model VLBI configurations using
the Warkworth telescope, AuScope (a new array of three 12m antennas in
Australia) and the Australian Square Kilometre Array Pathfinder (ASKAP) array
currently under construction in Western Australia, and discuss how the
configuration of these arrays affects the quality of images. Recent imaging
results that demonstrate the modeled improvements from inclusion of the AUT and
first ASKAP telescope in the Australian Long Baseline Array (LBA) are
presented.
|
Multispectral imaging plays an important role in many applications, from
astronomical imaging and earth observation to biomedical imaging. However,
current technologies are complex, with multiple alignment-sensitive components
and spatial and spectral parameters predetermined by the manufacturer. Here, we
demonstrate a single-shot multispectral imaging technique that gives
flexibility to end-users with a very simple optical setup, thanks to the
spatial correlation and spectral decorrelation of speckle patterns. These
seemingly random speckle patterns are point spread functions (PSFs) generated
by light from point sources propagating through a strongly scattering medium.
The spatial correlation of the PSFs allows image recovery with deconvolution
techniques, while their spectral decorrelation allows them to play the role of
tunable spectral filters in the deconvolution process. Our demonstrations,
which combine the optical physics of strongly scattering media with
computational imaging, present a highly cost-effective approach to
multispectral imaging.
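A minimal sketch of the deconvolution step described above, using a regularized
(Wiener-type) inverse filter in which the PSF measured at a given wavelength
acts as that channel's spectral filter. The toy objects, the random PSFs, and
the regularization constant are illustrative assumptions, not the paper's
experimental configuration.

```python
import numpy as np

def wiener_deconvolve(frame, psf, eps=1e-2):
    """Recover the object at one wavelength from a single speckle camera frame,
    using the PSF measured at that wavelength. `eps` regularizes frequencies
    where the PSF carries little power."""
    H = np.fft.fft2(psf, s=frame.shape)
    Y = np.fft.fft2(frame)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

# toy usage: two spectral channels share one camera frame; because the PSFs are
# decorrelated, each channel is demixed by deconvolving with its own PSF
rng = np.random.default_rng(0)
obj_a = np.zeros((64, 64)); obj_a[20, 20] = 1.0
obj_b = np.zeros((64, 64)); obj_b[40, 45] = 1.0
psf_a, psf_b = rng.random((64, 64)), rng.random((64, 64))
frame = (np.real(np.fft.ifft2(np.fft.fft2(obj_a) * np.fft.fft2(psf_a)))
         + np.real(np.fft.ifft2(np.fft.fft2(obj_b) * np.fft.fft2(psf_b))))
rec_a = wiener_deconvolve(frame, psf_a)
print(np.unravel_index(rec_a.argmax(), rec_a.shape))   # should recover ~(20, 20)
```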
|