In this paper, we aim to use the DECi-hertz Interferometer
Gravitational-wave Observatory (DECIGO), a future Japanese space
gravitational-wave antenna sensitive to the frequency range between LISA and
ground-based detectors, to provide gravitational-wave constraints on the cosmic
curvature at $z\sim 5$. In the framework of the well-known distance sum rule,
the excellent redshift coverage of the standard sirens observed by DECIGO,
combined with lensing observations of sources and lenses from LSST,
makes such a cosmological-model-independent test more natural and general.
Focusing on three kinds of spherically symmetric mass distributions for the
lensing galaxies, we find that the cosmic curvature is expected to be
constrained to a precision of $\Delta \Omega_K \sim 10^{-2}$ in the early
universe ($z\sim5.0$), improving on the sensitivity of Einstein Telescope (ET)
constraints by about a factor of 10. However, to investigate this further, the
mass density profiles of early-type galaxies should be properly taken into account.
Specifically, our analysis demonstrates a strong degeneracy between the spatial
curvature and the lens parameters, especially the redshift evolution of the
power-law lens index. When the extended power-law mass density profile is
assumed, the constraint on the cosmic curvature is weakest, whereas the
addition of DECIGO to LSST significantly improves the constraints on the
luminosity density slope and the anisotropy of the stellar velocity
dispersion. Therefore, our paper
highlights the benefits of synergies between DECIGO and LSST in constraining
new physics beyond the standard model, which could manifest itself through
accurate determination of the cosmic curvature.
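For context, a minimal statement of the distance sum rule used here, written in
dimensionless comoving distances $d \equiv (H_0/c)\,D_C$ (the notation is ours,
not the paper's):
\[
d_{ls} \;=\; d_s\sqrt{1+\Omega_K d_l^2} \;-\; d_l\sqrt{1+\Omega_K d_s^2},
\]
so that measured distance ratios to the lens and to the source directly
constrain $\Omega_K$ without assuming a specific cosmological model.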
|
With the development of audio playback devices and fast data transmission,
the demand for high sound quality is rising for both entertainment and
communications. In this quest for better sound quality, challenges emerge from
distortions and interferences originating at the recording side or caused by an
imperfect transmission pipeline. To address this problem, audio restoration
methods aim to recover clean sound signals from the corrupted input data. We
present here audio restoration algorithms based on diffusion models, with a
focus on speech enhancement and music restoration tasks. Traditional
approaches, often grounded in handcrafted rules and statistical heuristics,
have shaped our understanding of audio signals. In the past decades, there has
been a notable shift towards data-driven methods that exploit the modeling
capabilities of deep neural networks (DNNs). Deep generative models, and among them diffusion models,
have emerged as powerful techniques for learning complex data distributions.
However, relying solely on DNN-based learning approaches carries the risk of
reducing interpretability, particularly when employing end-to-end models.
Nonetheless, data-driven approaches allow more flexibility in comparison to
statistical model-based frameworks, whose performance depends on distributional
and statistical assumptions that can be difficult to guarantee. Here, we aim to
show that diffusion models can combine the best of both worlds and offer the
opportunity to design audio restoration algorithms with a good degree of
interpretability and a remarkable performance in terms of sound quality. We
explain the diffusion formalism and its application to the conditional
generation of clean audio signals. We believe that diffusion models open an
exciting field of research with the potential to spawn new audio restoration
algorithms that are natural-sounding and remain robust in difficult acoustic
situations.
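To make the conditional-generation idea concrete, here is a minimal sketch of
one DDPM-style reverse (denoising) step conditioned on the corrupted audio;
`model`, the noise schedule `betas`, and the conditioning interface are our
illustrative assumptions, not the architecture of any specific method surveyed:

```python
import torch

def reverse_step(model, x_t, y, t, betas):
    """One ancestral-sampling step: refine the noisy estimate x_t of the
    clean audio signal, conditioned on the corrupted observation y."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    eps_hat = model(x_t, y, t)                       # network predicts the noise
    mean = (x_t - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps_hat) \
           / torch.sqrt(alphas[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise       # sample x_{t-1}
```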
|
Here we study the impact of non-Markovian evolution on prominent
characteristics of quantum thermodynamics, such as ergotropy and power. These
are benchmarked by the behavior of the quantum speed limit time. We make use of
both geometry-based measures, in particular the quantum Fisher and Wigner-Yanase
information metrics, and physical-properties-based measures, in particular the
relative purity and the relative entropy of coherence, to compute the quantum speed
limit time. A simple non-Markovian model of a qubit in a bosonic bath
exhibiting non-Markovian amplitude damping evolution is considered, which, from
the quantum thermodynamic perspective with finite initial ergotropy, can be
envisaged as a quantum battery. To this end, we explore the connections between
the physical properties-based measures of quantum speed limit time and the
coherent component of ergotropy. The non-Markovian evolution is shown to impact
the recharging process of the quantum battery. Further, a connection between
the discharging-charging cycle of the quantum battery and the geometric
measures of quantum speed limit time is observed.
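For reference, the standard definition of ergotropy (the maximal work
extractable by unitary operations), whose coherent component the abstract
refers to; this is the textbook definition, not a result specific to this paper:
\[
\mathcal{W}(\rho, H) \;=\; \operatorname{Tr}(\rho H) \;-\; \min_{U}\operatorname{Tr}\!\left(U \rho\, U^{\dagger} H\right) \;=\; \operatorname{Tr}(\rho H) - \operatorname{Tr}(\tilde{\rho} H),
\]
where $\tilde{\rho}$ is the passive state associated with $\rho$.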
|
Dominance and subordinate behaviours are important ingredients in the social
organizations of group living animals. Behavioural observations on the two
eusocial species \textit{Ropalidia marginata} and \textit{Ropalidia
cyathiformis} suggest varying complexities in their social systems. The queen
of R. cyathiformis is an aggressive individual who usually holds the top
position in the dominance hierarchy although she does not necessarily show the
maximum number of acts of dominance, while the R. marginata queen rarely shows
aggression and usually does not hold the top position in the dominance
hierarchy of her colony. These differences are reflected in the distribution of
dominance-subordinate interactions among the hierarchically ranked individuals
in both species. The percentage of dominance interactions decreases
gradually with hierarchical rank in R. marginata, while in R. cyathiformis it
first increases and then decreases. We use an agent-based model to investigate
the underlying mechanism that could give rise to the observed patterns for both
the species. The model assumes, besides some non-interacting individuals, that
the interaction probabilities of the agents depend on their pre-differentiated
winning abilities. Our simulations show that if the queen adopts a strategy
of engaging in a moderate number of dominance interactions, one obtains a
pattern similar to that of R. cyathiformis, while a strategy of very few
interactions by the queen leads to the pattern of R. marginata. We
infer that both species follow a common interaction pattern, and that the
differences in their social organization are due to slight changes in queen
and worker strategies. These changes in strategies are expected to
accompany the evolution of more complex societies from simpler ones.
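A minimal sketch of such an agent-based simulation, under our own simplifying
assumptions (fixed pre-differentiated winning abilities, partner choice
proportional to ability, and the queen's participation probability as the
tunable strategy; parameter names are illustrative, not the paper's):

```python
import numpy as np

def simulate(n_agents=20, n_steps=100000, queen_activity=0.05, seed=0):
    """Toy dominance-interaction model: returns the percentage of dominance
    acts shown by each hierarchically ranked individual (rank 0 = queen)."""
    rng = np.random.default_rng(seed)
    ability = np.sort(rng.random(n_agents))[::-1]      # pre-differentiated
    prob = ability / ability.sum()
    prob[0] = queen_activity                           # the queen's strategy
    prob /= prob.sum()
    acts = np.zeros(n_agents)
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False, p=prob)
        acts[i if ability[i] >= ability[j] else j] += 1   # winner dominates
    return 100.0 * acts / acts.sum()
```

Here `queen_activity` is the knob distinguishing the moderate-interaction and
very-low-interaction queen strategies contrasted above.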
|
We study quantum field models in indefinite metric. We introduce the modified
Wightman axioms of Morchio and Strocchi as a general framework of indefinite
metric quantum field theory (QFT) and present concrete interacting relativistic
models obtained by analytical continuation from some stochastic processes with
Euclidean invariance. As a first step towards scattering theory in indefinite
metric QFT, we give a proof of the spectral condition on the translation group
for the relativistic models.
|
The general space-time evolution of the scattering of an incident acoustic
plane-wave pulse by an arbitrary configuration of targets is treated by
employing a recently developed non-singular boundary integral method to solve
the Helmholtz equation in the frequency domain; the fast Fourier
transform is then used to obtain the full space-time solution of the wave equation.
The non-singular boundary integral solution can enforce the radiation boundary
condition at infinity exactly and can account for multiple scattering effects
at all spacings between scatterers without adverse effects on the numerical
precision. More generally, the absence of singular kernels in the non-singular
integral equation confers high numerical stability and precision for smaller
numbers of degrees of freedom. The use of the fast Fourier transform to obtain the
time dependence is not constrained to discrete time steps and is particularly
efficient for studying the response to different incident pulses by the same
configuration of scatterers. The precision that can be attained using a smaller
number of Fourier components is also quantified.
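A sketch of the frequency-to-time synthesis step described above, under our own
assumptions (`transfer` holds the frequency-domain Helmholtz solutions at one
observation point and `pulse_spectrum` is the incident pulse's spectrum; both
names are illustrative):

```python
import numpy as np

def time_response(frequencies, transfer, pulse_spectrum, t):
    """Weight each single-frequency Helmholtz solution by the incident
    pulse's spectrum and sum Fourier components at arbitrary times t,
    so the result is not tied to a discrete time grid."""
    weights = pulse_spectrum * transfer               # per-frequency amplitude
    phases = np.exp(-2j * np.pi * np.outer(t, frequencies))
    return np.real(phases @ weights)                  # real time-domain signal
```

Because `transfer` depends only on the scatterer configuration, responses to
different incident pulses reuse the same frequency-domain solutions.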
|
We consider one-dimensional classical time-dependent Hamiltonian systems with
quasi-periodic orbits. It is well-known that such systems possess an adiabatic
invariant which coincides with the action variable of the Hamiltonian
formalism. We present a new proof of the adiabatic invariance of this quantity
and illustrate our arguments by means of explicit calculations for the harmonic
oscillator.
The new proof makes essential use of the Hamiltonian formalism. The key step
is the introduction of a slowly-varying quantity closely related to the action
variable. This new quantity arises naturally within the Hamiltonian framework
as follows: a canonical transformation is first performed to convert the system
to action-angle coordinates; then the new quantity is constructed as an action
integral (effectively a new action variable) using the new coordinates. The
integration required for this construction provides, in a natural way, the
averaging procedure introduced in other proofs, though here it is an average in
phase space rather than over time.
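For concreteness, the adiabatic invariant in question is the action variable;
in standard notation (ours, not taken from the paper):
\[
I = \frac{1}{2\pi}\oint p \, dq,
\]
which for the harmonic oscillator with slowly varying frequency $\omega(t)$
reduces to $I = E/\omega$, the quantity held approximately constant in the
explicit calculations mentioned above.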
|
Typically, quantum superpositions, and thus measurement projections of
quantum states involving interference, decrease (or increase) monotonically as
a function of increased distinguishability. Distinguishability, in turn, can be
a consequence of decoherence, for example caused by the (simultaneous) loss of
excitation or due to inadequate mode matching (either deliberate or
indeliberate). It is known that for some cases of multi-photon interference,
non-monotonic decay of projection probabilities occurs, which has so far been
attributed to interference among four or more photons. We show that such
non-monotonic behaviour of projection probabilities is not unnatural, and can
also occur for single-photon and even semiclassical states. Thus, while the
effect traces its roots to indistinguishability and thus interference, the
states for which this can be observed do not need to have particular quantum
features.
|
We show that Feynman's proof applies to Newtonian gravitation, thus implying
the existence of gravitational analogues of the electric and magnetic fields
and a corresponding Lorentz-like force. Consistency of the formalism requires
particular properties of the electric- and magnetic-like fields under Galilei
transformations, which coincide with those obtained in previous analyses of
Galilean electromagnetism.
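In standard notation (ours, by analogy with electromagnetism), the resulting
Lorentz-like force law on a test mass $m$ reads
\[
m\,\ddot{\vec r} \;=\; m\left(\vec E_g + \dot{\vec r}\times\vec B_g\right),
\]
with $\vec E_g$ and $\vec B_g$ the gravitoelectric and gravitomagnetic fields
whose Galilean transformation properties are constrained as described above.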
|
We classify all edge-to-edge spherical isohedral 4-gonal tilings such that
the skeletons are pseudo-double wheels. For this, we characterize these
spherical tilings by a quadratic equation for the cosine of an edge-length. From
the classification, we see that there are indeed two non-congruent, edge-to-edge
spherical isohedral 4-gonal tilings whose skeletons are the same
pseudo-double wheel and whose cyclic lists of the four inner angles of the tiles
are the same. This contrasts with the fact that every edge-to-edge spherical
tiling by congruent 3-gons is determined by its skeleton and the inner angles of
the skeleton. We show that for a particular spherical isohedral tiling over the
pseudo-double wheel of twelve faces, the quadratic equation has a double
solution and the copies of the tile also organize a spherical non-isohedral
tiling over the same skeleton.
|
The structure of the moduli spaces $\M := \A/\G$ of (all, not just flat)
$SL(2,C)$ and $SU(1,1)$ connections on an n-manifold is analysed. For any
topology on the corresponding spaces $\A$ of all connections which satisfies
the weak requirement of compatibility with the affine structure of $\A$, the
moduli space $\M$ is shown to be non-Hausdorff. It is then shown that the
Wilson loop functionals --i.e., the traces of holonomies of connections around
closed loops-- are complete in the sense that they suffice to separate all
separable points of $\M$. The methods are general enough to allow the
underlying n-manifold to be topologically non-trivial and for connections to be
defined on non-trivial bundles. The results have implications for canonical
quantum general relativity in 4 and 3 dimensions.
|
In the relatively short history of machine learning, the subtle balance
between engineering and theoretical progress has proven critical at
various stages. The most recent wave of AI has brought the IR community
powerful techniques, particularly for pattern recognition. While many benefit
from the burst of ideas as numerous tasks become algorithmically feasible, the
balance is tilting toward the application side. The existing theoretical tools
in IR can no longer explain, guide, and justify the newly-established
methodologies.
The consequences can be severe: in stark contrast to how the IR industry
has envisioned modern AI making life easier, many are experiencing increased
confusion and costs in data manipulation, model selection, monitoring,
censoring, and decision making. This reality is not surprising: without handy
theoretical tools, we often lack principled knowledge of the pattern
recognition model's expressivity, optimization property, generalization
guarantee, and our decision-making process has to rely on over-simplified
assumptions and human judgments from time to time.
The time is now to bring the community a systematic tutorial on how we
successfully adapt those tools and make significant progress in understanding,
designing, and eventually productionizing impactful IR systems. We emphasize
systematicity because IR is a comprehensive discipline that touches upon
particular aspects of learning, causal inference analysis, interactive (online)
decision-making, etc. It thus requires systematic calibrations to render the
actual usefulness of the imported theoretical tools to serve IR problems, as
they usually exhibit unique structures and definitions. Therefore, we plan this
tutorial to systematically demonstrate our learning and successful experience
of using advanced theoretical tools for understanding and designing IR systems.
|
A number of measurements in decays induced by the semileptonic $b\to s$ and
$b\to c$ transitions hint towards a possible role of new physics in both
sectors. Motivated by these anomalies, we investigate the lepton flavor
violating $B\to K^*_2 (1430)\mu^{\pm}\tau^{\mp}$ decays. We calculate the
two-fold angular distribution of the $B\to K^*_2\ell_1\ell_2$ decay in the presence of
vector, axial-vector, scalar and pseudo-scalar new physics interactions. We
then compute the branching fraction and lepton forward-backward asymmetry in
the framework of $U^{2/3}_1$ vector leptoquark which is a viable solution to
the current $B$ anomalies. We find that the upper limits are $\mathcal{B}(B\to
K^*_2\mu^-\tau^+)\leq 1.64\times 10^{-7}$ and $\mathcal{B}(B\to
K^*_2\mu^+\tau^-)\leq 0.60\times 10^{-7}$ at $90\%$ C.L.
|
We employ the first fully three-dimensional simulation to study the role of
magnetic fields and ion-neutral friction in regulating gravitationally-driven
fragmentation of molecular clouds. The cores in an initially subcritical cloud
develop gradually over an ambipolar diffusion time while the cores in an
initially supercritical cloud develop in a dynamical time. The infall speeds
onto cores are subsonic in the case of an initially subcritical cloud, while an
extended (\ga 0.1 pc) region of supersonic infall exists in the case of an
initially supercritical cloud. These results are consistent with previous
two-dimensional simulations. We also find that a snapshot of the relation
between density (rho) and the strength of the magnetic field (B) at different
spatial points of the cloud coincides with the evolutionary track of an
individual core. When the density becomes large, both relations tend to B
\propto \rho^{0.5}.
|
First calculated results with the new HIJING++ are presented for identified
hadron production in high-energy heavy-ion collisions. The recently developed
HIJING++ is based on the latest version of PYTHIA8 and contains all the
nuclear effects included in HIJING 2.552, which will be improved by
a new version of the shadowing parametrization and the jet quenching module. Here,
we summarize the major changes in the new program code alongside comparisons
with experimental data for some specific high-energy nucleus-nucleus
collisions.
|
Electric vehicles (EVs) are an important part of future sustainable
transportation. The increasing integration of EV charging stations (EVCSs) into
existing power grids requires new scalable control algorithms that maintain
the stability and resilience of the grid. Here, we present such a control
approach using an averaged port-Hamiltonian model. In this approach, the
underlying switching behavior of the power converters is approximated by an
averaged non-linear system. The averaged models are used to derive various
types of stabilizing controllers, including the typically used PI controllers.
The pH modeling is showcased by means of a generic setup of an EVCS, where the
battery of the vehicle is connected to an AC grid via power lines, converters,
and filters. Finally, the control design methods are compared for the averaged
pH system and validated using a simulation model of the switched charging
station.
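As a toy illustration of the averaging idea (not the paper's port-Hamiltonian
model), here is a duty-cycle-averaged DC/DC converter stage with a PI voltage
controller; all parameter values are made up for the sketch:

```python
import numpy as np

def simulate_pi(v_ref=400.0, kp=0.05, ki=2.0, dt=1e-5, n_steps=200000):
    """Averaged converter model: the switching behavior is replaced by a
    continuous duty cycle d in [0, 1], and a PI controller regulates the
    output voltage toward v_ref."""
    L, C, R, v_in = 1e-3, 1e-3, 10.0, 800.0       # filter, load, DC link
    i_L = v_C = integ = 0.0
    for _ in range(n_steps):
        err = v_ref - v_C
        integ += err * dt
        d = np.clip(kp * err + ki * integ, 0.0, 1.0)  # averaged duty cycle
        i_L += (d * v_in - v_C) / L * dt              # inductor dynamics
        v_C += (i_L - v_C / R) / C * dt               # capacitor dynamics
    return v_C                                        # settles near v_ref
```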
|
Texture is an important visual attribute used to describe images. There are
many methods available for texture analysis. However, they do not capture the
richness of detail of the image surface. In this paper, we propose a new method
to describe textures using the artificial crawler model. This model assumes
that each agent can interact with the environment and with each other. Since this
swarm system alone does not achieve good discrimination, we combine it with
fractal dimension theory to increase the discriminatory power of artificial
crawlers. Here, we estimate the fractal dimension by
the Bouligand-Minkowski method due to its precision in quantifying structural
properties of images. We validate our method on two texture datasets and the
experimental results reveal that our method leads to highly discriminative
textural features. The results indicate that our method can be used in
different texture applications.
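A minimal sketch of the Bouligand-Minkowski fractal-dimension estimate for a
gray-level image, under our own implementation choices (3D embedding of
(x, y, intensity) points, dilation volumes via a Euclidean distance transform);
details such as boundary padding are simplified:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def bouligand_minkowski_fd(image, max_radius=8):
    """Embed pixels as 3D surface points (x, y, intensity), dilate the
    surface with growing radius r, and fit log V(r) vs log r; the fractal
    dimension is 3 minus the fitted slope."""
    img = image.astype(int)
    h, w = img.shape
    depth = img.max() + 2 * max_radius + 2
    occupied = np.ones((h, w, depth), dtype=bool)
    xs, ys = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    occupied[xs, ys, img + max_radius] = False       # mark surface voxels
    dist = distance_transform_edt(occupied)          # distance to surface
    radii = np.arange(1, max_radius + 1)
    volumes = [(dist <= r).sum() for r in radii]     # dilation volumes V(r)
    slope, _ = np.polyfit(np.log(radii), np.log(volumes), 1)
    return 3.0 - slope
```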
|
The first observing run of Advanced LIGO spanned 4 months, from September 12,
2015 to January 19, 2016, during which gravitational waves were directly
detected from two binary black hole systems, namely GW150914 and GW151226.
Confident detection of gravitational waves requires an understanding of
instrumental transients and artifacts that can reduce the sensitivity of a
search. Studies of the quality of the detector data yield insights into the
cause of instrumental artifacts and data quality vetoes specific to a search
are produced to mitigate the effects of problematic data. In this paper, the
systematic removal of noisy data from analysis time is shown to improve the
sensitivity of searches for compact binary coalescences. The output of the
PyCBC pipeline, which is a python-based code package used to search for
gravitational wave signals from compact binary coalescences, is used as a
metric for improvement. GW150914 was a loud enough signal that removing noisy
data did not improve its significance. However, the removal of data with excess
noise decreased the false alarm rate of GW151226 by more than two orders of
magnitude, from 1 in 770 years to less than 1 in 186000 years.
|
In this letter, we study collider phenomenology in the supersymmetric
Standard Model with a certain type of non-universal gaugino masses at the gauge
coupling unification scale, motivated by the little hierarchy problem. In this
scenario, the wino mass in particular is relatively large compared to the gluino
mass at the unification scale, and the heavy wino can relax the fine-tuning of
the higgsino mass parameter, the so-called $\mu$-parameter. Besides, it
enhances the lightest Higgs boson mass due to the relatively large left-right
mixing of top squarks through the renormalization group (RG) effect. A $125$
GeV Higgs boson could then be accommodated, even if the top squarks are lighter
than $1$ TeV and the $\mu$ parameter is within a few hundred GeV. The
right-handed top squark tends to be lighter than the other sfermions due to the
RG running, so we focus on the top squark search at the LHC. Since the top squark is
almost right-handed and the higgsinos are nearly degenerate, $2b + E_T^{\rm
miss}$ channel is the most sensitive to this scenario. We figure out current
and expected experimental bounds on the lightest top squark mass and model
parameters at the gauge coupling unification scale.
|
Current phylogenetic comparative methods generally employ the
Ornstein-Uhlenbeck (OU) process for modeling trait evolution. Being able to
track the optimum of a trait within a group of related species, the OU
process provides information about stabilizing selection, whereby the
population mean adopts a particular trait value. The optimum of a trait may
follow certain stochastic dynamics along the evolutionary history. In this
paper, we extend the current framework by adopting a rate of evolution which
behaves according to pertinent stochastic dynamics. The novel model is applied
to analyze about 225 datasets collected from the existing literature. Results
validate that the new framework provides a better fit for the majority of these
datasets.
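For reference, the OU process in its standard SDE form (standard notation, not
taken from the paper), with $\theta$ the trait optimum, $\alpha$ the strength of
stabilizing selection, and $\sigma$ the rate of evolution that the extended
framework allows to vary stochastically:
\[
dX_t = \alpha\left(\theta - X_t\right)dt + \sigma\, dW_t .
\]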
|
We theoretically study the generation of terahertz (THz) radiation by
two-color filamentation of ultrashort laser pulses with different wavelengths.
We consider wavelengths in the range from 0.6 to 10.6 $\mu$m, thus covering the
whole range of existing and future powerful laser sources in the near, mid and
far-infrared. We show how different parameters of two-color filaments and
generated THz pulses depend on the laser wavelength. We demonstrate that there
is an optimal laser wavelength for two-color filamentation that provides the
highest THz conversion efficiency and results in the generation of extremely
intense single-cycle THz fields.
|
Being able to model and forecast international migration as precisely as
possible is crucial for policymaking. Recently, Google Trends data, in addition
to other economic and demographic data, have been shown to improve the
forecasting quality of a linear gravity model for one-year-ahead
forecasts. In this work, we replace the linear model with a long short-term
memory (LSTM) approach and compare it with two existing approaches: the linear
gravity model and an artificial neural network (ANN) model. Our LSTM approach
combined with Google Trends data outperforms both these models on various
metrics in the task of forecasting the one-year ahead incoming international
migration to 35 Organization for Economic Co-operation and Development (OECD)
countries: for example, the root mean square error (RMSE) and the mean absolute
error (MAE) have been divided by 5 and 4, respectively, on the test set. This
positive result demonstrates that machine learning techniques constitute a
serious alternative to traditional approaches for studying migration mechanisms.
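A minimal sketch of the kind of LSTM regressor described (architecture and
sizes are our illustrative choices, not the paper's):

```python
from tensorflow import keras

def build_lstm(n_timesteps, n_features):
    """Past yearly features (economics, demographics, Google Trends)
    -> next-year incoming migration for one country (scalar output)."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_timesteps, n_features)),
        keras.layers.LSTM(32),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_lstm(n_timesteps=5, n_features=10)
# model.fit(X_train, y_train, epochs=200, verbose=0)
```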
|
We study the ground state properties and the excitation spectrum of bosons
which, in addition to a short-range repulsive two body potential, interact
through the exchange of some dispersionless bosonic modes. The latter induces a
time dependent (retarded) boson-boson interaction which is attractive in the
static limit. Moreover, the coupling with dispersionless modes introduces a
reference frame for the moving boson system and hence breaks the Galilean
invariance of this system. The ground state of such a system is depleted {\it
linearly} in the boson density due to the zero point fluctuations driven by the
retarded part of the interaction. Both quasiparticle (microscopic) and
compressional (macroscopic) sound velocities of the system are studied. The
microscopic sound velocity is calculated up to second order in the effective
two body interaction in a perturbative treatment, similar to that of Beliaev
for the dilute weakly interacting Bose gas. The hydrodynamic equations are used
to obtain the macroscopic sound velocity. We show that these velocities are
identical within our perturbative approach. We present analytical results for
them in terms of two dimensionless parameters -- an effective interaction
strength and an adiabaticity parameter -- which characterize the system. We
find that, due to the presence of several competing effects, which determine the
speed of sound of the system, three qualitatively different regimes can in
principle be realized in the parameter space, and we discuss them on physical
grounds.
|
The interpretation of data in terms of multi-parameter models of new physics,
using the Bayesian approach, requires the construction of multi-parameter
priors. We propose a construction that uses elements of Bayesian reference
analysis. Our idea is to initiate the chain of inference with the reference
prior for a likelihood function that depends on a single parameter of interest
that is a function of the parameters of the physics model. The reference
posterior density of the parameter of interest induces on the parameter space
of the physics model a class of posterior densities. We propose to continue the
chain of inference with a particular density from this class, namely, the one
for which indistinguishable models are equiprobable and use it as the prior for
subsequent analysis. We illustrate our method by applying it to the constrained
minimal supersymmetric Standard Model and two non-universal variants of it.
|
Let $ \mathcal{H}(\mathbb{D}) $ be the class of all holomorphic functions in
the unit disk $ \mathbb{D} $. We aim to explore the complex symmetry exhibited
by generalized weighted composition-differentiation operators, denoted by
$L_{n, \psi, \phi}$ and defined by \begin{align*}
L_{n, \psi, \phi}:=\sum_{k=1}^{n}c_k D_{k, \psi_k, \phi}, \; \mbox{where}\;
c_k\in\mathbb{C}\; \mbox{for}\; k=1, 2, \ldots, n, \end{align*} where $ D_{k,
\psi_k, \phi}f(z):=\psi_k(z)f^{(k)}(\phi(z)),\; f\in
\mathcal{A}^2_{\alpha}(\mathbb{D}), $ on the reproducing kernel Hilbert space
$\mathcal{A}^2_{\alpha}(\mathbb{D})$ of analytic functions on the unit disk
$\mathbb{D}$. By deriving a necessary and sufficient condition, we provide
insights into the $ C_{\mu, \eta} $-symmetry exhibited by $L_{n, \psi, \phi}$.
Explicit conditions under which the operator $L_{n, \psi, \phi}$ is Hermitian
and normal are obtained through our investigation.
Additionally, we conduct an in-depth analysis of the spectral properties of $
L_{n, \psi, \phi} $ under the assumption of $ C_{\mu, \eta} $-symmetry and
thoroughly examine the kernel of the adjoint operator of $L_{n, \psi, \phi}$.
|
We place constraints on the average density (Omega_m) and clustering
amplitude (sigma_8) of matter using a combination of two measurements from the
Sloan Digital Sky Survey: the galaxy two-point correlation function, w_p, and
the mass-to-galaxy-number ratio within galaxy clusters, M/N, analogous to
cluster M/L ratios. Our w_p measurements are obtained from DR7 while the sample
of clusters is the maxBCG sample, with cluster masses derived from weak
gravitational lensing. We construct non-linear galaxy bias models using the
Halo Occupation Distribution (HOD) to fit both w_p and M/N for different
cosmological parameters. HOD models that match the same two-point clustering
predict different numbers of galaxies in massive halos when Omega_m or sigma_8
is varied, thereby breaking the degeneracy between cosmology and bias. We
demonstrate that this technique yields constraints that are consistent and
competitive with current results from cluster abundance studies, even though
this technique does not use abundance information. Using w_p and M/N alone, we
find Omega_m^0.5*sigma_8=0.465+/-0.026, with individual constraints of
Omega_m=0.29+/-0.03 and sigma_8=0.85+/-0.06. Combined with current CMB data,
these constraints are Omega_m=0.290+/-0.016 and sigma_8=0.826+/-0.020. All
errors are 1-sigma. The systematic uncertainties to which the M/N technique is
most sensitive are the amplitude of the bias function of dark matter halos
and the possibility of redshift evolution between the SDSS Main sample and the
maxBCG sample. Our derived constraints are insensitive to the current level of
uncertainties in the halo mass function and in the mass-richness relation of
clusters and its scatter, making the M/N technique complementary to cluster
abundances as a method for constraining cosmology with future galaxy surveys.
|
It is known that monopoles can be confined by vortex-strings in d=3+1 while
vortices can be confined by domain-lines in d=2+1. Here, as a higher
dimensional generalization of these, we show that Yang-Mills instantons can be
confined by monopole-strings in d=4+1. We achieve this by putting the system
into the Higgs phase in which the configuration can be constructed inside a
non-Abelian vortex sheet.
|
High-temperature ($q\to1$) asymptotics of 4d superconformal indices of
Lagrangian theories have been recently analyzed up to exponentially suppressed
corrections. Here we use RG-inspired tools to extend the analysis to the
exponentially suppressed terms in the context of Schur indices of $N=2$ SCFTs.
In particular, our approach explains the curious patterns of logarithms
(polynomials in $1/\log q$) found by Dedushenko and Fluder in their numerical
study of the high-temperature expansion of rank-$1$ theories. We also
demonstrate compatibility of our results with the conjecture of Beem and
Rastelli that Schur indices satisfy finite-order, possibly twisted, modular
linear differential equations (MLDEs), and discuss the interplay between our
approach and the MLDE approach to the high-temperature expansion. The
expansions for $q$ near roots of unity are also treated. A byproduct of our
analysis is a proof (for Lagrangian theories) of rationality of the conformal
dimensions of all characters of the associated VOA, that mix with the Schur
index under modular transformations.
|
We consider a two-dimensional Turing-like system with two diffusing species
which interact with each other. Considering the species to be charged, we
include the effect of an electric field along a given direction, which can lead
to the drift-induced instability found by A. B. Rovinsky and M. Menzinger \cite{9}.
This allows one to study the competition between diffusion and drift, as was
done numerically by Riaz et al. We show here that an analytic formula can be
found on the basis of a linear stability analysis that incorporates all the
effects that are known for the system and also allows for some detailed
predictions.
|
In this paper we continue the study of the subalgebra lattice of a Leibniz
algebra. In particular, we show that solvable Leibniz algebras with an
upper semi-modular lattice are either almost-abelian or have an abelian ideal
spanned by the elements with square zero. We also study Leibniz algebras in
which every subalgebra is a weak quasi-ideal, as well as modular symmetric
Leibniz algebras.
|
Fe3GeTe2 has emerged as one of the most fascinating van der Waals crystals
due to its two-dimensional (2D) itinerant ferromagnetism, topological nodal
lines and Kondo lattice behavior. However, lattice dynamics, chirality of
phonons and spin-phonon coupling in this material, which set the foundation for
these exotic phenomena, have remained unexplored. Here we report the first
experimental investigation of the phonons and mutual interactions between spin
and lattice degrees of freedom in few-layer Fe3GeTe2. Our results elucidate
three prominent Raman modes at room temperature: two A1g({\Gamma}) and one
E2g({\Gamma}) phonons. The doubly degenerate E2g({\Gamma}) mode reverses the
helicity of the incident photon, indicating phonon pseudo-angular momentum and
chirality. Through analysis of temperature-dependent phonon energies and
lifetimes, which strongly diverge from the anharmonic model below Curie
temperature, we determine the spin-phonon coupling in Fe3GeTe2. Such
interaction between lattice oscillations and spin significantly enhances the
Raman susceptibility, allowing us to observe two additional Raman modes in
the cryogenic temperature range. In addition, we reveal laser-radiation-induced
degradation of Fe3GeTe2 in ambient conditions and the corresponding Raman
fingerprint. Our results provide the first experimental analysis of phonons in
this novel 2D itinerant ferromagnet and demonstrate their relevance for further
fundamental studies and application development.
|
We discuss heat conductivity from the point of view of a variational
multi-fluid model, treating entropy as a dynamical entity. We demonstrate that
a two-fluid model with a massive fluid component and a massless entropy can
reproduce a number of key results from extended irreversible thermodynamics. In
particular, we show that the entropy entrainment is intimately linked to the
thermal relaxation time that is required to make heat propagation in solids
causal. We also discuss non-local terms that arise naturally in a dissipative
multi-fluid model, and relate these terms to those of phonon hydrodynamics.
Finally, we formulate a complete heat conducting two-component model and
discuss briefly the new dissipative terms that arise.
|
After hydrogen, oxygen, and carbon, nitrogen is one of the most chemically
active species in the interstellar medium (ISM). Nitrogen bearing molecules
have great importance as they are actively involved in the formation of
biomolecules. Therefore, it is essential to look for nitrogen-bearing species
in various astrophysical sources, specifically around high-mass star-forming
regions where the evolutionary history is comparatively poorly understood. In
this paper, we report the observation of three potential pre-biotic molecules,
namely, isocyanic acid (HNCO), formamide (NH2CHO), and methyl isocyanate
(CH3NCO), which contain peptide-like bonds (-NH-C(=O)-) in a hot molecular
core, G10.47+0.03 (hereafter, G10). Along with the identification of these
three complex nitrogen-bearing species, we examine their spatial distribution
in the source and discuss their possible formation pathways under such
conditions. The rotational diagram method, under the LTE condition, has been
employed to estimate the excitation temperature and the column density of the
observed species. The Markov chain Monte Carlo method was used to obtain the
best-suited physical parameters of G10 as well as line properties of some
species. We also determined the hydrogen column density and the optical depth
for the different continua observed in various frequency ranges. Finally, based on
these observational results, we have constructed a chemical model to explain
the observational findings. We found that HNCO, NH2CHO, and CH3NCO are
chemically linked with each other.
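For reference, the rotational diagram relation used to extract the excitation
temperature and column density (the standard LTE formula, in common notation
rather than the paper's):
\[
\ln\frac{N_u}{g_u} \;=\; \ln\frac{N_{\rm tot}}{Q(T_{\rm ex})} \;-\; \frac{E_u}{k_B T_{\rm ex}},
\]
so that a straight-line fit of $\ln(N_u/g_u)$ against upper-level energy $E_u$
yields $T_{\rm ex}$ from the slope and $N_{\rm tot}$ from the intercept.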
|
One of the main factors driving object-oriented software development in the
Web age is the need for systems to evolve as user requirements change. A
crucial factor in the creation of adaptable systems dealing with changing
requirements is the suitability of the underlying technology in allowing the
evolution of the system. A reflective system utilizes an open architecture
where implicit system aspects are reified to become explicit first-class
(meta-data) objects. These implicit system aspects are often fundamental
structures which are inaccessible and immutable, and their reification as
meta-data objects can serve as the basis for changes and extensions to the
system, making it self-describing. To address the evolvability issue, this
paper proposes a reflective architecture based on two orthogonal abstractions -
model abstraction and information abstraction. In this architecture the
modeling abstractions allow for the separation of the description meta-data
from the system aspects they represent so that they can be managed and
versioned independently, asynchronously and explicitly. A practical example of
this philosophy, the CRISTAL project, is used to demonstrate the use of
meta-data objects to handle system evolution.
|
Deep neural networks (DNNs) form the backbone of almost every
state-of-the-art technique in fields such as computer vision, speech
processing, and text analysis. The recent advances in computational technology
have made the use of DNNs more practical. Despite the overwhelming performances
by DNNs and the advances in computational technology, it is seen that very few
researchers try to train their models from scratch. Training of DNNs still
remains a difficult and tedious job. The main challenges that researchers face
during training of DNNs are the vanishing/exploding gradient problem and the
highly non-convex nature of the objective function, which can have millions of
variables. The approaches suggested by He and Xavier address the vanishing
gradient problem by providing sophisticated initialization techniques. These
approaches have been quite effective and have achieved good results on standard
datasets, but they do not work very well on more practical
datasets. We think the reason for this is that they do not make use of data
statistics when initializing the network weights. Optimizing such a high-dimensional loss
function requires careful initialization of network weights. In this work, we
propose a data dependent initialization and analyze its performance against the
standard initialization techniques such as He and Xavier. We performed our
experiments on some practical datasets and the results show our algorithm's
superior classification accuracy.
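A minimal sketch of a data-dependent initialization in the LSUV style (our
illustrative stand-in for the idea, not the paper's exact algorithm): each
layer's weights are rescaled so that its responses to a batch of real data have
unit variance.

```python
import torch

@torch.no_grad()
def data_dependent_init(linear_layers, x):
    """Rescale each layer's weights using statistics of real data flowing
    through the network, instead of the fixed variance assumptions behind
    He/Xavier initialization. Assumes layers are nn.Linear with bias."""
    for layer in linear_layers:
        y = layer(x)
        layer.weight.mul_(1.0 / (y.std() + 1e-8))   # unit output variance
        layer.bias.sub_(layer(x).mean())            # zero output mean
        x = torch.relu(layer(x))                    # feed the next layer
    return x
```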
|
Given a sequence A of 2n real numbers, the Even-Rank-Sum problem asks for the
sum of the n values that are at the even positions in the sorted order of the
elements in A. We prove that, in the algebraic computation-tree model, this
problem has time complexity \Theta(n log n). This solves an open problem posed
by Michael Shamos at the Canadian Conference on Computational Geometry in 2008.
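The matching $O(n \log n)$ upper bound is immediate by sorting; a one-line
sketch (the paper's contribution is the matching lower bound in the algebraic
computation-tree model):

```python
def even_rank_sum(a):
    """Sum the values at even positions (1-indexed) of the sorted order
    of the 2n input numbers; sorting gives the O(n log n) upper bound."""
    return sum(sorted(a)[1::2])   # indices 1, 3, ... are positions 2, 4, ...
```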
|
In this paper, we propose another characterization of the generalized mirror
transformation on the quantum cohomology rings of general-type projective
hypersurfaces. This characterization is useful for explicitly determining the
form of the generalized mirror transformation. As applications, we rederive the
generalized mirror transformation up to $d=3$ rational Gromov-Witten invariants
obtained in our previous article, and determine explicitly the generalized
mirror transformation for the $d=4, 5$ rational Gromov-Witten invariants in the
case when the first Chern class of the hypersurface equals $-H$ (i.e.,
$k-N=1$).
|
In this short note we explain in detail the construction of an
$O(n)$-equivariant isomorphism of topological operads $F_n \cong WF_n$, where
$F_n$ is the Fulton-MacPherson operad and $W$ is the Boardman-Vogt
construction.
|
Ultrasound image degradation in the human body is complex and occurs due to
the distortion of the wave as it propagates to and from the target. Here, we
establish a simulation-based framework that deconstructs the sources of image
degradation into a separable parameter space that includes phase aberration
from speed variation, multiple reverberations, and trailing reverberation.
These separable parameters are then used to reconstruct images with known and
independently modulable amounts of degradation using methods that depend on the
additive or multiplicative nature of the degradation. Experimental measurements
and Fullwave simulations in the human abdomen demonstrate this calibrated
process in abdominal imaging by matching relevant imaging metrics such as phase
aberration, reverberation strength, speckle brightness and coherence length.
Applications of the reconstruction technique are illustrated for beamforming
strategies (phase aberration correction, spatial coherence imaging), in a
standard abdominal environment, as well as in impedance ranges much higher than
those naturally occurring in the body.
|
We consider, in the context of a 331 model with a single neutral right-handed
singlet, the generation of lepton masses. At zeroth order two neutrinos and one
charged lepton are massless, while the other leptons, two neutrinos and two
charged leptons, are massive. However, the charged ones are still mass
degenerate. The massless fields acquire a mass through radiative corrections,
which also break the degeneracy in the charged leptons.
|
We first construct a derived equivalence between a small crepant resolution
of an affine toric Calabi-Yau 3-fold and a certain quiver with a
superpotential. Under this derived equivalence we establish a wall-crossing
formula for the generating function of the counting invariants of perverse
coherent systems. As an application, we provide certain equations relating
Donaldson-Thomas, Pandharipande-Thomas and Szendroi's invariants. Finally, we
show that moduli spaces associated with a quiver given by successive mutations
are realized as the moduli spaces associated with the original quiver by
changing the stability conditions.
|
We prove several theorems concerning Tutte polynomials $T(G,x,y)$ for
recursive families of graphs. In addition to its interest in mathematics, the
Tutte polynomial is equivalent to an important function in statistical physics,
the Potts model partition function of the $q$-state Potts model, $Z(G,q,v)$,
where $v$ is a temperature-dependent variable. We determine the structure of
the Tutte polynomial for a cyclic clan graph $G[(K_r)_m,L=jn]$ comprised of a
chain of $m$ copies of the complete graph $K_r$ such that the linkage $L$
between each successive pair of $K_r$'s is a join $jn$, and $r$ and $m$ are
arbitrary. The explicit calculation of the case $r=3$ (for arbitrary $m$) is
presented. The continuous accumulation set of the zeros of $Z$ in the limit $m
\to \infty$ is considered. Further, we present calculations of two special
cases of Tutte polynomials, namely, flow and reliability polynomials, for
cyclic clan graphs and discuss the respective continuous accumulation sets of
their zeros in the limit $m \to \infty$. Special valuations of Tutte
polynomials give enumerations of spanning trees and acyclic orientations. Two
theorems are presented that determine the number of spanning trees on
$G[(K_r)_m,jn]$ and $G[(K_r)_m,id]$, where $L=id$ denotes the identity
linkage. We report calculations of the number of acyclic orientations for
strips of the square lattice and use these to obtain an improved lower bound on
the exponential growth rate of the number of these acyclic orientations.
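For reference, the standard equivalence between the Tutte polynomial and the
Potts partition function invoked above (standard notation; $k(G)$ is the number
of connected components and $n(G)=|V|$):
\[
Z(G,q,v) \;=\; q^{\,k(G)}\, v^{\,n(G)-k(G)}\; T\!\left(G,\; \frac{q+v}{v},\; v+1\right).
\]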
|
The functorial mathematical definition of conformal field theory was first
formulated approximately 30 years ago. The underlying geometric category is
based on the moduli space of Riemann surfaces with parametrized boundary
components and the sewing operation. We survey the recent and careful study of
these objects, which has led to significant connections with quasiconformal
Teichmuller theory and geometric function theory.
In particular we propose that the natural analytic setting for conformal
field theory is the moduli space of Riemann surfaces with so-called
Weil-Petersson class parametrizations. A collection of rigorous analytic
results is advanced here as evidence. This class of parametrizations has the
required regularity for CFT on the one hand, and on the other hand is natural
and of interest in its own right in geometric function theory.
|
In chemical engineering, process data are expensive to acquire, and complex
phenomena are difficult to fully model. We explore the use of physics-informed
neural networks (PINNs) for dynamic processes with incomplete mechanistic
semi-explicit differential-algebraic equation systems and scarce process data.
In particular, we focus on estimating states for which neither direct
observational data nor constitutive equations are available. We propose an
easy-to-apply heuristic to assess whether estimation of such states may be
possible. As numerical examples, we consider a continuously stirred tank
reactor and a liquid-liquid separator. We find that PINNs can infer unmeasured
states with reasonable accuracy, and they generalize better in low-data
scenarios than purely data-driven models. We thus show that PINNs are capable
of modeling processes when relatively few experimental data and only partially
known mechanistic descriptions are available, and conclude that they constitute
a promising avenue that warrants further investigation.
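A minimal PyTorch-style sketch of a PINN loss for a semi-explicit DAE system
$\dot{x} = f(x, z)$, $0 = g(x, z)$; here `f` and `g` stand for the known parts
of the mechanistic model and are placeholders, as are the tensor shapes. This
illustrates the general idea, not the paper's implementation:

```python
import torch

def pinn_loss(net, f, g, t_colloc, t_data, x_meas):
    """Data misfit on measured differential states plus residuals of the
    partially known mechanistic DAE at collocation points; unmeasured
    states (e.g. algebraic variables z) are constrained only physically."""
    t = t_colloc.requires_grad_(True)
    out = net(t)                              # columns: [x, z]
    x, z = out[:, :1], out[:, 1:]
    dxdt = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    res_ode = dxdt - f(x, z)                  # differential equations
    res_alg = g(x, z)                         # algebraic constraints
    data_fit = net(t_data)[:, :1] - x_meas    # scarce measurements of x
    return (res_ode**2).mean() + (res_alg**2).mean() + (data_fit**2).mean()
```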
|
In recent years, the rise of Large Language Models (LLMs) has also
encouraged the computer vision community to work on substantial multimodal
datasets and train models at scale in a self-/semi-supervised manner,
resulting in Vision Foundation Models (VFMs) such as Contrastive
Language-Image Pre-training (CLIP). These models generalize well and perform
outstandingly on everyday objects and scenes, even on downstream tasks
the model has not been trained on, while their application in specialized
domains, as in an industrial context, is still an open research question. Here,
fine-tuning the models or transfer learning on domain-specific data is
unavoidable to achieve adequate performance. In this work, we, on the
one hand, introduce a pipeline to generate the Industrial Language-Image
Dataset (ILID) based on web-crawled data; on the other hand, we demonstrate
effective self-supervised transfer learning and discuss downstream tasks
after training on the cheaply acquired ILID, which does not necessitate human
labeling or intervention. With the proposed approach, we contribute by
transferring approaches from state-of-the-art research around foundation
models, transfer learning strategies, and applications to the industrial
domain.
|
Website reliability labels underpin almost all research in misinformation
detection. However, misinformation sources often exhibit transient behavior,
which makes many such labeled lists obsolete over time. We demonstrate that
Search Engine Optimization (SEO) attributes provide strong signals for
predicting news site reliability. We introduce a novel attributed webgraph
dataset with labeled news domains and their connections to outlinking and
backlinking domains. We demonstrate the success of graph neural networks in
detecting news site reliability using these attributed webgraphs, and show that
our baseline news site reliability classifier outperforms current SoTA methods
on the PoliticalNews dataset, achieving an F1 score of 0.96. Finally, we
introduce and evaluate a novel graph-based algorithm for discovering previously
unknown misinformation news sources.
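A minimal sketch of the kind of GNN classifier described, using PyTorch
Geometric's `GCNConv` as a generic stand-in (layer choice, sizes, and feature
semantics are our assumptions, not necessarily the paper's architecture):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ReliabilityGNN(torch.nn.Module):
    """Two-layer GCN over the attributed webgraph: nodes are domains with
    SEO-derived features, edges are outlink/backlink connections, and the
    output is a reliable/unreliable logit per news domain."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, 2)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)
```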
|
Let $\{ X_{\bf n}, {\bf n}\in \mathbb{N}^d \}$ be a random field, i.e., a
family of random variables indexed by $\mathbb{N}^d$, $d\ge 2$. Complete
convergence and convergence rates for non-identically distributed, negatively
dependent and martingale random fields are studied by application of the
Fuk-Nagaev inequality. The results are proved in the asymmetric convergence
case, i.e., for the norming sequence $n_1^{\alpha_1}\cdot
n_2^{\alpha_2}\cdot\ldots\cdot n_d^{\alpha_d}$, where $(n_1,n_2,\ldots,
n_d)=\mathbf{n} \in \mathbb{N}^d$ and $\min\limits_{1\leq i \leq d}\alpha_i
\geq \frac{1}{2}.$
|
Quantum correlations between two parties are essential for the argument of
Einstein, Podolsky, and Rosen in favour of the incompleteness of quantum
mechanics. Schr\"odinger noted that an essential point is the fact that one
party can influence the wave function of the other party by performing suitable
measurements. He called this phenomenon quantum steering and studied its
properties, but only in recent years has this kind of quantum correlation
attracted significant interest in quantum information theory. In this paper the
theory of quantum steering is reviewed. First, the basic concepts of steering
and local hidden state models are presented and their relation to entanglement
and Bell nonlocality is explained. Then various criteria for characterizing
steerability and structural results on the phenomenon are described. A detailed
discussion is given on the connections between steering and incompatibility of
quantum measurements. Finally, applications of steering in quantum information
processing and further related topics are reviewed.
|
We propose a new strategy for the experimental search of the QCD phase
transition in heavy ion collisions: One may tune collision energy around the
point where the lifetime of the fireball is expected to be longest. We
demonstrate that the hydrodynamic evolution of excited nuclear matter does
change dramatically as the initial energy density goes through the "softest
point" (where the pressure to energy density ratio reaches its minimum). For
our choice of equation of state, this corresponds to epsilon_i approx. = 1.5
GeV/fm^3 and collision energy E_lab/A approx. = 30 GeV (for Au+Au). Various
observables seem to show distinct changes near the softest point.
|
The transition to Terahertz (THz) frequencies, providing an ultra-wide
bandwidth, is a key driver for future wireless communication networks. However,
the specific properties of the THz channel, such as severe path loss and
vulnerability to blockage, pose a significant challenge in balancing data rate
and reliability. This work considers reconfigurable intelligent surface
(RIS)-aided THz communication, where the effective exploitation of a strong,
but intermittent line-of-sight (LOS) path versus a reliable, yet weaker
RIS-path is studied. We introduce a mixed-criticality superposition coding
scheme that addresses this tradeoff from a data significance perspective. The
results show that the proposed scheme enables reliable transmission for a
portion of high-criticality data without significantly impacting the overall
achievable sum rate and queuing delay. Additionally, we gain insights into how
the LOS blockage probability and the channel gain of the RIS-link influence the
rate performance of our scheme.
|
We show that solitonic solutions of the classical string action on the AdS_5
x S^5 background that carry charges (spins) of the Cartan subalgebra of the
global symmetry group can be classified in terms of periodic solutions of the
Neumann integrable system. We derive equations which determine the energy of
these solitons as a function of spins. In the limit of large spins J, the first
subleading 1/J coefficient in the expansion of the string energy is expected to
be non-renormalised to all orders in the inverse string tension expansion and
thus can be directly compared to the 1-loop anomalous dimensions of the
corresponding composite operators in N=4 super YM theory. We obtain a closed
system of equations that determines this subleading coefficient and, therefore,
the 1-loop anomalous dimensions of the dual SYM operators. We expect that an
equivalent system of equations should follow from the thermodynamic limit of
the algebraic Bethe ansatz for the SO(6) spin chain derived from SYM theory. We
also identify a particular string solution whose classical energy exactly
reproduces the one-loop anomalous dimension of a certain set of SYM operators
with two independent R charges J_1, J_2.
|
In recent years, researchers have paid growing attention to the few-shot
learning (FSL) task to address the data-scarcity problem. A standard FSL framework is
composed of two components: i) Pre-train. Employ the base data to generate a
CNN-based feature extraction model (FEM). ii) Meta-test. Apply the trained FEM
to the novel data (category is different from base data) to acquire the feature
embeddings and recognize them. Although researchers have made remarkable
breakthroughs in FSL, there still exists a fundamental problem. Since the
trained FEM with base data usually cannot adapt to the novel class flawlessly,
the novel data's feature may lead to the distribution shift problem. To address
this challenge, we hypothesize that even if most of the decisions based on
different FEMs are viewed as weak decisions, which are not available for all
classes, they still perform decently in some specific categories. Inspired by
this assumption, we propose a novel method Multi-Decision Fusing Model (MDFM),
which comprehensively considers the decisions based on multiple FEMs to enhance
the efficacy and robustness of the model. MDFM is a simple, flexible,
non-parametric method that can directly apply to the existing FEMs. Besides, we
extend the proposed MDFM to two FSL settings (i.e., supervised and
semi-supervised settings). We evaluate the proposed method on five benchmark
datasets and achieve significant improvements of 3.4%-7.3% compared with the
state of the art.
|
Monocular depth estimation is a fundamental task in computer vision and has
drawn increasing attention. Recently, some methods reformulate it as a
classification-regression task to boost the model performance, where continuous
depth is estimated via a linear combination of predicted probability
distributions and discrete bins. In this paper, we present a novel framework
called BinsFormer, tailored for the classification-regression-based depth
estimation. It mainly focuses on two crucial components in the specific task:
1) proper generation of adaptive bins and 2) sufficient interaction between
probability distributions and bin predictions. Specifically, we employ a
Transformer decoder to generate bins, taking the novel view of this as a direct
set-to-set prediction problem. We further integrate a multi-scale decoder structure to
achieve a comprehensive understanding of spatial geometry information and
estimate depth maps in a coarse-to-fine manner. Moreover, an extra scene
understanding query is proposed to improve the estimation accuracy, which turns
out that models can implicitly learn useful information from an auxiliary
environment classification task. Extensive experiments on the KITTI, NYU, and
SUN RGB-D datasets demonstrate that BinsFormer surpasses state-of-the-art
monocular depth estimation methods with prominent margins. Code and pretrained
models will be made publicly available at
\url{https://github.com/zhyever/Monocular-Depth-Estimation-Toolbox}.
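The classification-regression decoding described above, as a short sketch
(tensor shapes and the softmax-normalized bin widths follow the common
AdaBins-style recipe; names are ours, not the released code's):

```python
import torch

def depth_from_bins(bin_logits, prob_logits, d_min=1e-3, d_max=10.0):
    """Continuous depth as a probability-weighted linear combination of
    adaptive bin centers predicted per image."""
    widths = torch.softmax(bin_logits, dim=1) * (d_max - d_min)  # (B, N)
    edges = d_min + torch.cumsum(widths, dim=1)                  # right edges
    centers = edges - 0.5 * widths                               # (B, N)
    probs = torch.softmax(prob_logits, dim=1)                    # (B, N, H, W)
    return torch.einsum("bnhw,bn->bhw", probs, centers)          # (B, H, W)
```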
|
We present the first results of a pilot program to conduct an Atacama Large
Millimeter/submillimeter Array (ALMA) Band 6 (211-275 GHz) spectral line study
of young stellar objects (YSO) that are undergoing rapid accretion episodes,
i.e. FU Ori objects (FUors). Here, we report on molecular emission line
observations of the FUor system, V883 Ori. In order to image the FUor object
with full coverage from ~0.5 arcsec to the map size of ~30 arcsec, i.e. from
disc to outflow scales, we combine the ALMA main array (the 12-m array) with
the Atacama Compact Array (7-m array) and the total power (TP) array. We detect
HCN, HCO$^{+}$, CH$_{3}$OH, SO, DCN, and H$_{2}$CO emissions with most of these
lines displaying complex kinematics. From PV diagrams, the detected molecules
HCN, HCO$^{+}$, CH$_{3}$OH, DCN, SO, and H$_{2}$CO probe a Keplerian rotating
disc in a direction perpendicular to the large-scale outflow detected
previously with the $^{12}$CO and $^{13}$CO lines. Additionally, HCN and
HCO$^{+}$ reveal kinematic signatures of infall motion. The north outflow is
seen in HCO$^{+}$, H$_{2}$CO, and SO emissions. Interestingly, HCO$^{+}$
emission reveals a pronounced inner depression or "hole" with a size comparable
to the radial extension estimated for the CH$_{3}$OH and 230 GHz continuum. The
inner depression in the integrated HCO$^{+}$ intensity distribution of V883 Ori
is most likely the result of optical depth effects, wherein the optically thick
nature of the HCO$^{+}$ and continuum emission towards the innermost parts of
V883 Ori can result in a continuum subtraction artifact in the final HCO$^{+}$
flux level.
|
We discuss the general theory of D-branes on Calabi-Yaus, recent results from
the theory of boundary states, and new results on the spectrum of branes on the
quintic CY. (Contribution to the proceedings of Strings '99 in Potsdam,
Germany.)
|
We present an individual-based model for the coevolutionary dynamics between
CD8+ cytotoxic T lymphocytes (CTLs) and tumour cells. In this model, every cell
is viewed as an individual agent whose phenotypic state is modelled by a
discrete variable. For tumour cells this variable represents a parameterisation
of the antigen expression profiles, while for CTLs it represents a
parameterisation of the target antigens of T-cell receptors (TCRs). We formally
derive the deterministic continuum limit of this individual-based model, which
comprises a non-local partial differential equation for the phenotype
distribution of tumour cells coupled with an integro-differential equation for
the phenotype distribution of CTLs. The biologically relevant homogeneous
steady-state solutions of the continuum model equations are found. The
linear-stability analysis of these steady-state solutions is then carried out
in order to identify possible conditions on the model parameters that may lead
to different outcomes of immune competition and to the emergence of patterns of
phenotypic coevolution between tumour cells and CTLs. We report on
computational results of the individual-based model, and show that there is a
good agreement between them and analytical and numerical results of the
continuum model. These results shed light on the way in which different
parameters affect the coevolutionary dynamics between tumour cells and CTLs.
Moreover, they support the idea that TCR-tumour antigen binding affinity may be
a good intervention target for immunotherapy and offer a theoretical basis for
the development of anti-cancer therapy aiming at engineering TCRs so as to
shape their affinity for cancer targets.
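To make the individual-based setup concrete, here is a deliberately crude agent-based sketch; the discrete phenotype indices, killing and division rates and the affinity kernel are all invented for illustration, and the paper's model is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20                              # number of discrete phenotypic states
tumour = rng.integers(0, M, 500)    # antigen index of each tumour cell
ctl    = rng.integers(0, M, 300)    # TCR target index of each CTL

def affinity(u, v, eta=2.0):
    # hypothetical TCR-antigen binding affinity, peaked when indices match
    return np.exp(-eta * (u - v) ** 2 / M ** 2)

for step in range(200):
    if len(tumour) == 0:
        break
    # each CTL meets a random tumour cell and kills it with prob ~ affinity
    targets = rng.integers(0, len(tumour), len(ctl))
    kill = rng.random(len(ctl)) < 0.05 * affinity(ctl, tumour[targets])
    tumour = np.delete(tumour, np.unique(targets[kill]))
    # tumour division with a small phenotypic mutation
    born = tumour[rng.random(len(tumour)) < 0.03]
    tumour = np.concatenate(
        [tumour, np.clip(born + rng.integers(-1, 2, len(born)), 0, M - 1)])
print(len(tumour), "tumour cells remaining")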
|
There is a growing interest in developing data-driven reduced-order models
for atmospheric and oceanic flows that are trained on data obtained either from
high-resolution simulations or satellite observations. The data-driven models
are non-intrusive in nature and offer significant computational savings
compared to large-scale numerical models. These low-dimensional models can be
utilized to reduce the computational burden of generating forecasts and
estimating model uncertainty without losing the key information needed for data
assimilation to produce accurate state estimates. This paper aims at exploring
an equation-free surrogate modeling approach at the intersection of machine
learning and data assimilation in Earth system modeling. With this objective,
we introduce an end-to-end non-intrusive reduced-order modeling (NIROM)
framework equipped with contributions in modal decomposition, time series
prediction, optimal sensor placement, and sequential data assimilation.
Specifically, we use proper orthogonal decomposition (POD) to identify the
dominant structures of the flow, and a long short-term memory network to model
the dynamics of the POD modes. The NIROM is integrated within the deterministic
ensemble Kalman filter (DEnKF) to incorporate sparse and noisy observations at
optimal sensor locations obtained through QR pivoting. The feasibility and the
benefit of the proposed framework are demonstrated for the NOAA Optimum
Interpolation Sea Surface Temperature (SST) V2 dataset. Our results indicate
that the NIROM is stable for long-term forecasting and can model dynamics of
SST with a reasonable level of accuracy. Furthermore, the prediction accuracy
of the NIROM gets improved by one order of magnitude by the DEnKF algorithm.
This work provides a way forward toward transitioning these methods to fuse
information from Earth system models and observations to achieve accurate
forecasts.
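The POD and QR-pivot sensor-placement steps of such a pipeline can be sketched compactly with numpy/scipy; the snapshot matrix below is synthetic low-rank data standing in for the SST fields, and the LSTM forecast and DEnKF update are omitted:

```python
import numpy as np
from scipy.linalg import qr

# X: snapshot matrix (n_grid, n_time); synthetic rank-8 stand-in for SST data
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 200))

# POD: dominant spatial modes from the SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 8
Phi = U[:, :r]                      # POD basis
a = Phi.T @ X                       # modal coefficients (to be forecast by an LSTM)

# Sensor placement by column-pivoted QR on Phi^T (QR pivoting, as in the paper)
_, _, piv = qr(Phi.T, pivoting=True)
sensors = piv[:r]                   # grid indices of the r most informative sensors

# Reconstruct a state from sparse sensor readings y = X[sensors, t]
y = X[sensors, 100]
a_hat = np.linalg.solve(Phi[sensors, :], y)   # invert the r x r measurement matrix
x_hat = Phi @ a_hat
print("relative reconstruction error:",
      np.linalg.norm(x_hat - X[:, 100]) / np.linalg.norm(X[:, 100]))
```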
|
While heating of current-carrying Ohmic conductors is an obvious consequence
of the diffusive nature of conduction in such systems, current-induced cooling
has recently been reported in some molecular conduction junctions. In this
paper we demonstrate, using simple models, the possibility of cooling molecular
junctions under applied bias, and discuss several mechanisms for such an
effect. Our model is characterized by single-electron tunneling between
electrodes, represented by free-electron reservoirs, through a system
characterized by its electron levels, nuclear vibrations and their structures.
We consider cooling mechanisms resulting from (a) cooling of one electrode
surface by tunneling-induced depletion of high-energy electrons; (b) cooling by
coherent sub-resonance electronic transport, analogous to atomic laser-induced
cooling; and (c) the incoherent analog of process (b) - cooling by driven
activated transport. The non-equilibrium Green function formulation of junction
transport is used in the first two cases, while a master equation approach is
applied in the analysis of the third.
|
Online bipartite matching (OBM) is a fundamental model underpinning many
important applications, including search engine advertisement, website banner
and pop-up ads, and ride-hailing. We study the i.i.d. OBM problem, where one
side of the bipartition is fixed and known in advance, while nodes from the
other side appear sequentially as i.i.d. realizations of an underlying
distribution, and must immediately be matched or discarded. We introduce
dynamic relaxations of the set of achievable matching probabilities, show how
they theoretically dominate lower-dimensional, static relaxations from previous
work, and perform a polyhedral study to theoretically examine the new
relaxations' strength. We also discuss how to derive heuristic policies from
the relaxations' dual prices, in a similar fashion to dynamic resource prices
used in network revenue management. We finally present a computational study to
demonstrate the empirical quality of the new relaxations and policies.
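A toy simulation clarifies the i.i.d. OBM setting (the compatibility graph, arrival distribution, horizon and policies below are made up for illustration; the paper's relaxations and dual-price policies are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_offline, n_types, T = 30, 10, 60            # offline nodes, arrival types, horizon
A = rng.random((n_offline, n_types)) < 0.3    # random bipartite compatibility
p = rng.dirichlet(np.ones(n_types))           # arrival-type distribution

def simulate(policy, runs=2000):
    matched = 0
    for _ in range(runs):
        free = np.ones(n_offline, bool)
        for t in rng.choice(n_types, size=T, p=p):   # i.i.d. online arrivals
            cand = np.flatnonzero(free & A[:, t])
            if len(cand):                             # match immediately or discard
                free[policy(cand)] = False
                matched += 1
    return matched / runs

print("greedy matches/run:", simulate(lambda cand: cand[0]))
print("random matches/run:", simulate(lambda cand: rng.choice(cand)))
```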
|
We present a preliminary measurement of time-dependent CP-violating
asymmetries in B0 -> J/psi K0S and B0 -> psi(2S) K0S decays recorded by the
BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC. The data
sample consists of 9.0 fb-1 collected at the Y(4S) resonance and 0.8 fb-1
off-resonance. One of the neutral B mesons, produced in pairs at the Y(4S), is
fully reconstructed. The flavor of the other neutral B meson is tagged at the
time of its decay, mainly with the charge of identified leptons and kaons. A
neural network tagging algorithm is used to recover events without a clear
lepton or kaon tag. The time difference between the decays is determined by
measuring the distance between the decay vertices. Wrong-tag probabilities and
the time resolution function are measured with samples of fully-reconstructed
semileptonic and hadronic neutral B final states. The value of the asymmetry
amplitude, sin2beta, is determined from a maximum likelihood fit to the time
distribution of 120 tagged B0 -> J/psi K0S and B0 -> psi(2S) K0S candidates to
be sin2beta = 0.12+/-0.37 (stat) +/- 0.09 (syst) (preliminary).
|
The ALICE Collaboration is planning a major upgrade of its central barrel
detectors to be able to cope with the increased LHC luminosity beyond 2020. For
the TPC, this implies a replacement of the currently used gated MWPCs
(Multi-Wire Proportional Chamber) by GEM (Gas Electron Multiplier) based
readout chambers. In order to prove that the present particle identification
capabilities via measurement of the specific energy loss are retained after the
upgrade, a prototype of the ALICE IROC (Inner Readout Chamber) has been
evaluated in a test beam campaign at the CERN PS. The d$E$/d$x$ resolution of
the prototype has been proven to be fully compatible with the current MWPCs.
|
Recent advances in text-to-speech have significantly improved the
expressiveness of synthesized speech. However, it is still challenging to
generate speech with contextually appropriate and coherent speaking style for
multi-sentence text in audiobooks. In this paper, we propose a context-aware
coherent speaking style prediction method for audiobook speech synthesis. To
predict the style embedding of the current utterance, a hierarchical
transformer-based context-aware style predictor with a mixture attention mask
is designed, considering both text-side context information and speech-side
style information of previous speeches. Based on this, we can generate
long-form speech with coherent style and prosody sentence by sentence.
Objective and subjective evaluations on a Mandarin audiobook dataset
demonstrate that our proposed model can generate speech with more expressive
and coherent speaking style than baselines, for both single-sentence and
multi-sentence tests.
|
In this paper, we consider an accelerated method for solving nonconvex and
nonsmooth minimization problems. We propose a Bregman Proximal Gradient
algorithm with extrapolation (BPGe). This algorithm extends and accelerates the
Bregman Proximal Gradient algorithm (BPG), which circumvents the restrictive
global Lipschitz gradient continuity assumption needed in Proximal Gradient
algorithms (PG). The BPGe algorithm is more general than the recently
introduced Proximal Gradient algorithm with extrapolation (PGe) and, owing to
the extrapolation step, converges faster than the BPG algorithm. Analyzing the
convergence, we prove that any limit point of the sequence generated by BPGe is
a stationary point of the problem, provided the parameters are chosen properly.
Furthermore, assuming the Kurdyka-{\L}ojasiewicz property, we prove that the
whole sequence generated by BPGe converges to a stationary point. Finally, to
illustrate the potential of the new method, we apply BPGe to two important
practical problems that arise in many fundamental applications and that do not
satisfy the global Lipschitz gradient continuity assumption: Poisson linear
inverse problems and quadratic inverse problems. In the tests, the accelerated
BPGe algorithm shows faster convergence, yielding an interesting new
algorithm.
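To fix ideas, here is a bare-bones sketch of a Bregman gradient iteration with extrapolation for a Poisson linear inverse problem, using the Burg entropy kernel $h(x)=-\sum_i \log x_i$; the step size and extrapolation weight are ad hoc, and the paper's parameter rules and convergence safeguards are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20
A = rng.random((m, n)) + 0.1
x_true = rng.random(n) + 0.5
b = rng.poisson(A @ x_true).astype(float)     # Poisson count data

# Kullback-Leibler-type objective for the Poisson linear inverse problem
f  = lambda x: np.sum(A @ x - b * np.log(A @ x))
gf = lambda x: A.T @ (1 - b / (A @ x))

# Bregman step with Burg entropy h(x) = -sum(log x): grad h(x) = -1/x, so the
# update solves 1/x_new = 1/y + t * grad f(y) componentwise.
def bregman_step(y, t):
    denom = 1.0 / y + t * gf(y)
    return 1.0 / np.maximum(denom, 1e-12)

x_prev = x = np.ones(n)
t, theta = 1.0 / m, 0.3                        # heuristic step and extrapolation weight
for k in range(300):
    y = np.clip(x + theta * (x - x_prev), 1e-8, None)  # extrapolation, kept in domain
    x_prev, x = x, bregman_step(y, t)
print("objective:", f(x))
```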
|
In this note we study supergravity models with constrained superfields. We
construct a supergravity framework in which all (super)symmetry breaking
dynamics happen in vacuum with naturally (or otherwise asymptotically)
vanishing energy. Supersymmetry is generically broken in multiple sectors, each
of which is parametrized by a nilpotent goldstino superfield. Dynamical fields
(the Higgs, inflaton, etc) below the supersymmetry breaking scale are
constrained superfields of various types. In this framework, there is a
dominant supersymmetry breaking sector which uplifts the potential to zero
value. Other sources of supersymmetry breaking have (asymptotically) vanishing
contribution to vacuum energy such that supersymmetry is locally restored.
Demanding vanishing vacuum energy constrains the structure of the
superpotential and Kahler potential; there is a superpotential term for each
secluded sector directly interacting with a nilpotent superfield and the Kahler
potential must have a shift symmetry along Higgs field directions. This
structure is inspired by elements that appear in string theory. We also study
the Higgs dynamics during inflation and show that the swampland Festina Lente
bound could be realized in this framework.
|
Recent work has shown that Just-In-Time (JIT) compilation can introduce
timing side-channels to constant-time programs, which would otherwise be a
principled and effective means to counter timing attacks. In this paper, we
propose a novel approach to eliminate JIT-induced leaks from these programs.
Specifically, we present an operational semantics and a formal definition of
constant-time programs under JIT compilation, laying the foundation for
reasoning about programs with JIT compilation. We then propose to eliminate
JIT-induced leaks via a fine-grained JIT compilation for which we provide an
automated approach to generate policies and a novel type system to show its
soundness. We develop a tool DeJITLeak for Java based on our approach and
implement the fine-grained JIT compilation in HotSpot. Experimental results
show that DeJITLeak can effectively and efficiently eliminate JIT-induced leaks
on three datasets used in side-channel detection.
|
When designing infographics, general users usually struggle with getting
desired color palettes using existing infographic authoring tools, which
sometimes sacrifice customizability, require design expertise, or neglect the
influence of elements' spatial arrangement. We propose a data-driven method
that provides flexibility by considering users' preferences, lowers the
expertise barrier via automation, and tailors suggested palettes to the spatial
layout of elements. We build a recommendation engine by utilizing deep learning
techniques to characterize good color design practices from data, and further
develop InfoColorizer, a tool that allows users to obtain color palettes for
their infographics in an interactive and dynamic manner. To validate our
method, we conducted a comprehensive four-part evaluation, including case
studies, a controlled user study, a survey study, and an interview study. The
results indicate that InfoColorizer can provide compelling palette
recommendations with adequate flexibility, allowing users to effectively obtain
high-quality color design for input infographics with low effort.
|
The Gr\"obner basis detection (GBD) is defined as follows: Given a set of
polynomials, decide whether there exists -and if "yes" find- a term order such
that the set of polynomials is a Gr\"obner basis. This problem was shown to be
NP-hard by Sturmfels and Wiegelmann. We show that GBD when studied in the
context of zero dimensional ideals is also NP-hard. An algorithm to solve GBD
for zero dimensional ideals is also proposed which runs in polynomial time if
the number of indeterminates is a constant.
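For intuition, a brute-force illustration with sympy is sketched below; it searches only the $n!$ lex orders induced by variable permutations (a tiny slice of all term orders, so this is an illustration of the decision question, not a GBD procedure), and the polynomial set is a hypothetical example. The check uses the criterion that a subset of an ideal is a Gröbner basis iff its leading monomials divide those of a reduced Gröbner basis:

```python
from itertools import permutations
from sympy import symbols, groebner, Poly

x, y, z = symbols('x y z')
F = [x**2 + y, y**2 + z, z**2]          # hypothetical input set

def lm(f, gens):
    # leading-monomial exponent tuple under lex in the given variable order
    return Poly(f, *gens).monoms(order='lex')[0]

def divides(m1, m2):                     # monomial divisibility on exponent tuples
    return all(a <= b for a, b in zip(m1, m2))

def detect_lex(polys, vars_):
    for perm in permutations(vars_):
        lms = [lm(f, perm) for f in polys]
        G = groebner(polys, *perm, order='lex')   # reduced GB of the ideal
        if all(any(divides(m, lm(g, perm)) for m in lms) for g in G.exprs):
            return perm                  # F is a GB under lex with this order
    return None

print(detect_lex(F, (x, y, z)))          # (x, y, z): LMs x^2, y^2, z^2 are coprime
```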
|
In this short note we define a new cohomology for a Lie algebroid
$\mathcal{A}$, that we call the \emph{twisted cohomology} of $\mathcal{A}$ by
an odd cocycle $\theta$ in the Lie algebroid cohomology of $\mathcal{A}$. We
prove that this cohomology depends only on the Lie algebroid cohomology class
$[\theta]$ of the odd cocycle $\theta$. We give a few examples showing that
this new cohomology encompasses various well-known cohomology theories.
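For orientation, one standard way such a twisted differential can be set up (our guess at the construction, consistent with the abstract but not verified against the paper) is the following:

```latex
% Sketch: \theta an odd cocycle, d_{\mathcal{A}}\theta = 0, acting on Lie algebroid forms
d_\theta \colon \Omega^{\bullet}(\mathcal{A}) \to \Omega^{\bullet}(\mathcal{A}),
\qquad
d_\theta \alpha \;=\; d_{\mathcal{A}}\alpha \;+\; \theta \wedge \alpha .
% Then d_\theta^2 \alpha = d_{\mathcal{A}}\theta \wedge \alpha
%                        + \theta \wedge \theta \wedge \alpha = 0,
% since \theta is closed and of odd degree, so the twisted cohomology
% H^{\bullet}_{\theta}(\mathcal{A}) := \ker d_\theta / \operatorname{im} d_\theta
% is well defined.
```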
|
We examine the variations in the spectral characteristics and intensities of
PAHs in two different scenarios of PAH processing (or formation): (1) small
PAHs are being destroyed (or equivalently large PAHs are being formed, referred
to as SPR i.e. small PAHs removed), and (2) large PAHs are being destroyed (or
equivalently small PAHs are being formed referred to as LPR i.e. large PAHs
removed). PAH emission was measured considering both the presence or absence of
plateau components. The variation in the PAH band intensities as a function of
the average number of carbon atoms <N$_{C}$> has the highest dynamic range in
the SPR case, suggesting that smaller PAHs have a higher impact on the PAH band
strengths. The plateaus show overall declining emission with <N$_{C}$>, and
their higher dynamic range in the SPR case also suggests that smaller PAHs are
mainly contributing to the plateau emission. The 7.7/(11.0+11.2) $\mu$m PAH
band ratio presents the least amount of variance with the lowest dynamic range,
rendering this ratio as the better choice for tracing PAH charge. The
3.3/(11.2+11.0) $\mu$m PAH band ratio is the only ratio that both varies
monotonically and has fully separated values among the SPR and LPR scenarios,
highlighting its efficiency as PAH size tracer but also allowing the
characterization of the dominant scenario of processing or formation in a given
region or source. We present new PAH charge $-$ size diagnostic diagrams, which
can provide insights on the average, maximum, or minimum N$_{C}$ within
astrophysical sources.
|
Internal waves are believed to be of primary importance as they affect ocean
mixing and energy transport. Several processes can lead to the breaking of
internal waves, and they usually involve nonlinear interactions between waves.
In this work, we study experimentally the parametric subharmonic instability
(PSI), which provides an efficient mechanism to transfer energy from large to
smaller scales. It corresponds to the destabilization of a primary plane wave
and the spontaneous emission of two secondary waves, of lower frequencies and
different wave vectors. Using a time-frequency analysis, we observe the time
evolution of the secondary waves, thus measuring the growth rate of the
instability. In addition, a Hilbert transform method allows the measurement of
the different wave vectors. We compare these measurements with theoretical
predictions, and study the dependence of the instability on primary wave
frequency and amplitude, revealing a possible effect of the confinement, due to
the finite size of the beam, on the selection of the unstable mode.
|
Using 3D radiation-hydrodynamic simulations and analytic theory, we study the
orbital evolution of asymptotic-giant-branch (AGB) binary systems for various
initial orbital separations and mass ratios, and thus different initial
accretion modes. The time evolution of binary separations and orbital periods
are calculated directly from the averaged mass loss rate, accretion rate and
angular momentum loss rate. We separately consider spin-orbit synchronized and
zero-spin AGB cases. We find that the angular momentum carried away by the
mass loss together with the mass transfer can effectively shrink the orbit when
accretion occurs via wind-Roche-lobe overflow. In contrast, the larger fraction
of mass lost in Bondi-Hoyle-Lyttleton accreting systems acts to enlarge the
orbit. Synchronized binaries tend to experience stronger orbital period decay
in close binaries. We also find that orbital period decay is faster when we
account for the nonlinear evolution of the accretion mode as the binary starts
to tighten. This can increase the fraction of binaries that result in common
envelope, luminous red novae, Type Ia supernovae and planetary nebulae with
tight central binaries. The results also imply that planets in the habitable
zone around white dwarfs are unlikely to be found.
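As a back-of-the-envelope illustration of how orbits respond to mass loss, the toy integration below uses the textbook Jeans-mode (isotropic wind, no accretion) limit, in which $\dot{a}/a = -\dot{M}/(M_1+M_2)$ widens the orbit; the rates and masses are invented, and this is not the simulation-calibrated evolution of the paper:

```python
import numpy as np

M1, M2, a = 1.5, 1.0, 5.0           # Msun, Msun, AU
mdot = -1e-6                        # Msun/yr lost by the AGB star, none accreted
dt, t_end = 100.0, 5e5              # yr
for _ in range(int(t_end / dt)):
    M = M1 + M2
    a += -a * (mdot / M) * dt       # da/a = -dM/M  =>  orbit expands as mass is lost
    M1 += mdot * dt
print(f"final separation: {a:.2f} AU, M1 = {M1:.2f} Msun")
```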
|
The transient execution attack is a type of attack leveraging the
vulnerability of modern CPU optimization technologies. New attacks surface
rapidly. The side-channel is a key part of transient execution attacks to leak
data. In this work, we discover a vulnerability that the change of the EFLAGS
register in transient execution may have a side effect on the Jcc (jump on
condition code) instruction after it in Intel CPUs. Based on our discovery, we
propose a new side-channel attack that leverages the timing of both transient
execution and Jcc instructions to deliver data. This attack encodes secret data
into changes of the EFLAGS register, which make the execution time of the
following context slightly slower; the attacker can measure this timing
difference to decode the data. This attack
doesn't rely on the cache system and doesn't need to reset the EFLAGS register
manually to its initial state before the attack, which may make it more
difficult to detect or mitigate. We implemented this side-channel on machines
with Intel Core i7-6700, i7-7700, and i9-10980XE CPUs. In the first two
processors, we combined it with the Meltdown attack as its side-channel, which
could achieve a 100\% leakage success rate. We evaluate and discuss potential
defenses against the attack. Our contributions include discovering security
vulnerabilities in the implementation of Jcc instructions and EFLAGS register
and proposing a new side-channel attack that does not rely on the cache system.
|
Many experiments have been carried out to study the beta-decay rates of a
variety of nuclides, and many - but not all - of these experiments yield
evidence of variability of these rates. While there is as yet no accepted
theory to explain patterns in the results, a number of conjectures have been
proposed. We discuss three prominent conjectures (which are not mutually
exclusive) - that variability of beta-decay rates may be due to (a)
environmental influences, (b) solar neutrinos, and (c) cosmic neutrinos. We
find evidence in support of each of these conjectures.
|
Point cloud registration sits at the core of many important and challenging
3D perception problems including autonomous navigation, SLAM, object/scene
recognition, and augmented reality. In this paper, we present a new
registration algorithm that is able to achieve state-of-the-art speed and
accuracy through its use of a hierarchical Gaussian Mixture Model (GMM)
representation. Our method constructs a top-down multi-scale representation of
point cloud data by recursively running many small-scale data likelihood
segmentations in parallel on a GPU. We leverage the resulting representation
using a novel PCA-based optimization criterion that adaptively finds the best
scale to perform data association between spatial subsets of point cloud data.
Compared to previous Iterative Closest Point and GMM-based techniques, our
tree-based point association algorithm performs data association in
logarithmic-time while dynamically adjusting the level of detail to best match
the complexity and spatial distribution characteristics of local scene
geometry. In addition, unlike other GMM methods that restrict covariances to be
isotropic, our new PCA-based optimization criterion well-approximates the true
MLE solution even when fully anisotropic Gaussian covariances are used.
Efficient data association, multi-scale adaptability, and a robust MLE
approximation produce an algorithm that is up to an order of magnitude both
faster and more accurate than current state-of-the-art on a wide variety of 3D
datasets captured from LiDAR to structured light.
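A minimal (non-hierarchical) sketch of the GMM-registration idea is given below: fit a GMM to the target cloud, then alternate soft data association with a Kabsch/Procrustes rigid update; component count, iteration count and the synthetic clouds are assumptions, and the paper's hierarchical GPU construction and anisotropic PCA criterion are not reproduced:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
target = rng.standard_normal((500, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
source = target @ R_true.T + np.array([0.5, -0.2, 0.1])

gmm = GaussianMixture(n_components=20, covariance_type='full').fit(target)
mu = gmm.means_

R, t = np.eye(3), np.zeros(3)
for _ in range(30):
    X = source @ R.T + t
    W = gmm.predict_proba(X)              # E-step: soft data association
    virtual = W @ mu                      # per-point responsibility-weighted centroid
    cs, cv = X.mean(0), virtual.mean(0)   # M-step: Kabsch alignment to virtual matches
    U, _, Vt = np.linalg.svd((X - cs).T @ (virtual - cv))
    dR = Vt.T @ U.T
    if np.linalg.det(dR) < 0: Vt[-1] *= -1; dR = Vt.T @ U.T
    R, t = dR @ R, dR @ (t - cs) + cv
print(np.round(R @ R_true, 2))            # ~ identity if registration succeeded
```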
|
We derive the anomalous transformation law of the quantum stress tensor for a
2D massless scalar field coupled to an external dilaton. This provides a
generalization of the Virasoro anomaly which turns out to be consistent with
the trace anomaly. We apply these results to compute vacuum polarization of a
spherical star based on the equivalence principle.
|
Terahertz (THz) communications are regarded as a pillar technology for 6G
systems, offering multi-ten-GHz bandwidth. To overcome the huge propagation
loss while reducing the hardware complexity, THz ultra-massive (UM) MIMO
systems with hybrid beamforming are proposed to offer high array gain. Notably,
the adjustable phase shifters considered in most existing hybrid beamforming
studies are power-hungry and difficult to realize in the THz band. Moreover,
due to the ultra-massive antennas, full channel-state-information (CSI) is
challenging to obtain. To address these practical concerns, in this paper, an
energy-efficient dynamic-subarray with fixed-phase-shifters (DS-FPS)
architecture is proposed for THz hybrid beamforming. To compensate for the
spectral efficiency loss caused by the fixed-phase of FPS, a switch network is
inserted to enable dynamic connections. In addition, by considering the partial
CSI, we propose a row-successive-decomposition (RSD) algorithm to design the
hybrid beamforming matrices for DS-FPS. A row-by-row (RBR) algorithm is further
proposed to reduce computational complexity. Extensive simulation results show
that, the proposed DS-FPS architecture with the RSD and RBR algorithms achieves
much higher energy efficiency than the existing architectures. Moreover, the
DS-FPS architecture with partial CSI achieves 97% spectral efficiency of that
with full CSI.
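A toy numpy sketch of the fixed-phase-shifter idea follows: each antenna of a uniform linear array is switched to whichever phase from a small fixed set best matches the ideal steering phase (the FPS set, array size and angle are assumptions; the RSD/RBR designs of the paper are not reproduced):

```python
import numpy as np

N = 64                                                     # ULA antennas
phases_fixed = np.array([0, np.pi/2, np.pi, 3*np.pi/2])    # assumed 4 fixed shifters
theta = np.deg2rad(25)                                     # target steering angle
ideal = np.pi * np.arange(N) * np.sin(theta)               # ideal phases, d = lambda/2

# dynamic switch network: connect each antenna to the nearest fixed phase
idx = np.argmin(np.abs(np.exp(1j*ideal)[:, None]
                       - np.exp(1j*phases_fixed)[None, :]), axis=1)
w = np.exp(1j * phases_fixed[idx]) / np.sqrt(N)

a = np.exp(1j * ideal) / np.sqrt(N)                        # array response vector
print("array gain vs ideal:", abs(np.vdot(w, a)))          # 1.0 for ideal phases
```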
|
Ray flow methods are an efficient tool to estimate vibro-acoustic or
electromagnetic energy transport in complex domains at high frequencies. Here,
a Petrov-Galerkin discretization of a phase-space boundary integral equation
for transporting wave energy densities on two-dimensional surfaces is proposed.
The directional dependence of the energy density is approximated at each point
on the boundary in terms of a finite local set of directions propagating into
the domain. The direction of propagation can be preserved for transport across
multi-component domains when the directions within the local set are inherited
from a global direction set. The range of applicability and computational cost
of the method will be explored through a series of numerical experiments,
including wave problems from both acoustics and elasticity in both single and
multi-component domains. The domain geometries considered range from both
regular and irregular polygons to curved surfaces, including a cast aluminium
shock tower from a Range Rover car.
|
The paper presents a generalization and further development of our recent
publications where solutions of the Klein-Fock-Gordon equation defined on a few
particular $D=(2+1)$-dim static space-time manifolds were considered. The
latter involve toy models of 2-dim spaces with axial symmetry, including
dimension reduction to the 1-dim space as a singular limiting case.
Here the non-static models of space geometry with axial symmetry are under
consideration. To make these models closer to physical reality, we define a set
of "admissible" shape functions $\rho(t,z)$ as the $(2+1)$-dim Einstein
equations solutions in the vacuum space-time, in the presence of the
$\Lambda$-term, and for the space-time filled with the standard "dust". It is
curious that in the last case the Einstein equations reduce to the well-known
Monge-Amp\`{e}re equation, thus enabling one to obtain the general solution of
the Cauchy problem, as well as a set of other specific solutions involving one
arbitrary function. A few explicit solutions of the Klein-Fock-Gordon equation
in this set are given.
An interesting qualitative feature of these solutions relates to the
dimension reduction points, their classification, and time behavior. In
particular, these new entities could provide us with novel insight into the
nature of P- and T-violation, and of the Big Bang. A short comparison with
other
attempts to utilize dimensional reduction of the space-time is given.
|
We propose a new action principle to be associated with a noncommutative space
$(\mathcal{A}, \mathcal{H}, D)$. The universal formula for the spectral action
is $(\psi, D\psi) + \mathrm{Tr}(\chi(D/\Lambda))$ where $\psi$ is a spinor on
the Hilbert space, $\Lambda$ is a scale and $\chi$ a positive function. When
this principle is applied to the noncommutative space defined by the spectrum
of the standard model one obtains the standard model action coupled to Einstein
plus Weyl gravity. There are relations between the gauge coupling constants
identical to those of $SU(5)$ as well as the Higgs self-coupling, to be taken
at a fixed high energy scale.
|
We use a constrained Monte Carlo technique to analyze ultrametric features of
a 4-dimensional Edwards-Anderson spin glass with quenched couplings $J=\pm 1$. We
find that in the large volume limit an ultrametric structure emerges quite
clearly in the overlap of typical equilibrium configurations.
|
In this paper we explore the properties of a 1-dimensional spin chain in the
presence of chiral interactions, focusing on the system's transition to
distinct chiral phases for various values of the chiral coupling. By employing
the mean field theory approximation we establish a connection between this
chiral system and a Dirac particle in the curved spacetime of a black hole.
Surprisingly, the black hole horizon coincides with the interface between
distinct chiral phases. We examine the chiral properties of the system for
homogeneous couplings and in scenarios involving position dependent couplings
that correspond to black hole geometries. To determine the significance of
interactions in the chiral chain we employ bosonization techniques and derive
the corresponding Luttinger liquid model. Furthermore, we investigate the
classical version of the model to understand the impact of the chiral operator
on the spins and gain insight into the observed chirality. Our findings shed
light on the behavior of the spin chain under the influence of the chiral
operator, elucidating the implications of chirality in various contexts,
including black hole physics.
|
We study relationships between the neutron-rich skin of a heavy nucleus and
the properties of neutron-star crusts. Relativistic effective field theories
with a thicker neutron skin in $^{208}$Pb have a larger electron fraction and a
lower liquid-to-solid transition density for neutron-rich matter. These
properties are determined by the density dependence of the symmetry energy
which we vary by adding nonlinear couplings between isoscalar and isovector
mesons. An accurate measurement of the neutron radius in $^{208}$Pb---via
parity violating electron scattering---may have important implications for the
structure of neutron stars.
|
In this work, we formulate NEWRON: a generalization of the McCulloch-Pitts
neuron structure. This new framework aims to explore additional desirable
properties of artificial neurons. We show that some specializations of NEWRON
allow the network to be interpretable with no change in their expressiveness.
By just inspecting the models produced by our NEWRON-based networks, we can
understand the rules governing the task. Extensive experiments show that the
quality of the generated models is better than that of traditional
interpretable models, and in line with or better than that of standard neural
networks.
|
In the electroweak standard model we observe two remarkable empirical mass
relations, $m_W + m_B = v/2$ and $m_W - m_B = ev/2$, where
$m_Z^2 = m_W^2 + m_B^2$, $e$ is the positron electric charge and $v$ the
strength of the Higgs condensate.
|
We present a swarm model of Brownian particles with harmonic interactions,
where the individuals undergo canonical active Brownian motion, i.e. each
Brownian particle can convert internal energy to mechanical energy of motion.
We assume the existence of a single global internal energy of the system.
Numerical simulations show amorphous swarming behavior as well as static
configurations. Analytic understanding of the system is provided by studying
stability properties of equilibria.
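A minimal simulation sketch of such a swarm is given below, using a Rayleigh-type velocity pumping term as a stand-in for the internal-energy conversion and a global harmonic attraction to the swarm centre; all parameter values are invented, and the single-global-energy bookkeeping of the paper is simplified away:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 100, 1e-3, 20000
alpha, beta, k, D = 1.0, 1.0, 1.0, 0.05   # pumping, saturation, spring, noise
x = rng.standard_normal((N, 2))
v = np.zeros((N, 2))
for _ in range(steps):
    speed2 = np.sum(v**2, axis=1, keepdims=True)
    f_active = (alpha - beta * speed2) * v        # energy take-up vs dissipation
    f_spring = -k * (x - x.mean(axis=0))          # harmonic coupling to the centre
    v += (f_active + f_spring) * dt + np.sqrt(2*D*dt) * rng.standard_normal((N, 2))
    x += v * dt
# stationary speed should approach sqrt(alpha/beta) = 1
print("mean speed:", np.mean(np.sqrt(np.sum(v**2, axis=1))))
```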
|
Bent functions of the form $\mathbb{F}_2^n\rightarrow\mathbb{Z}_q$, where
$q\geqslant2$ is a positive integer, are known as generalized bent (gbent)
functions. Gbent functions for which it is possible to define a dual gbent
function are called regular. A regular gbent function is said to be self-dual
if it coincides with its dual. In this paper we explore self-dual gbent
functions for even $q$. We consider several primary and secondary constructions
of such functions. It is proved that the numbers of self-dual and
anti-self-dual gbent functions coincide. We give necessary and sufficient
conditions for the self-duality of Maiorana--McFarland gbent functions and find
the Hamming and Lee distance spectra between them. We find all self-dual gbent
functions that are symmetric with respect to two variables, and prove that a
self-dual gbent function cannot be affine. The properties of sign functions of
self-dual gbent
functions are considered. Symmetries that preserve self-duality are also
discussed.
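A small numeric check makes the definitions tangible: the generalized Walsh-Hadamard transform $H_f(u)=\sum_x \zeta^{f(x)}(-1)^{\langle u,x\rangle}$ with $\zeta=e^{2\pi i/q}$ has flat modulus $2^{n/2}$ exactly when $f$ is gbent, and for regular gbent functions the dual can be read off from the phases. The example function below (a classical bent function embedded in $\mathbb{Z}_4$) is our choice for illustration:

```python
import numpy as np
from itertools import product

n, q = 2, 4
zeta = np.exp(2j * np.pi / q)
X = np.array(list(product([0, 1], repeat=n)))       # all vectors of F_2^n

f = (2 * X[:, 0] * X[:, 1]) % q                     # bent x1*x2 embedded via q/2

signs = (-1) ** (X @ X.T % 2)                       # (-1)^{<u,x>} for all pairs
H = signs @ zeta ** f                               # generalized WHT spectrum
assert np.allclose(np.abs(H), 2 ** (n / 2))         # flat spectrum => f is gbent

# regular case: H_f(u) = 2^{n/2} * zeta^{f*(u)}; recover the dual f*
f_dual = np.round(np.angle(H / 2 ** (n / 2)) / (2 * np.pi / q)).astype(int) % q
print("self-dual:", np.array_equal(f_dual, f))      # True for this example
```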
|
In this paper, we describe a method for estimating the joint probability
density from data samples by assuming that the underlying distribution can be
decomposed as a mixture of product densities with few mixture components. Prior
works have used such a decomposition to estimate the joint density from
lower-dimensional marginals, which can be estimated more reliably with the same
number of samples. We combine two key ideas: dictionaries to represent 1-D
densities, and random projections to estimate the joint distribution from 1-D
marginals, explored separately in prior work. Our algorithm benefits from
improved sample complexity over the previous dictionary-based approach by using
1-D marginals for reconstruction. We evaluate the performance of our method on
estimating synthetic probability densities and compare it with the previous
dictionary-based approach and Gaussian Mixture Models (GMMs). Our algorithm
outperforms these other approaches in all the experimental settings.
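The low-rank mixture-of-products structure being exploited can be seen in a discretized toy version: a 2-D histogram of samples from a two-component product mixture is (up to sampling noise) a rank-2 nonnegative matrix, recoverable by NMF. This is only a structural illustration, not the paper's dictionary/random-projection algorithm:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
z = rng.random(5000) < 0.5                      # latent mixture component
x = np.where(z, rng.normal(-2, 0.5, 5000), rng.normal(2, 0.7, 5000))
y = np.where(z, rng.normal(1, 0.6, 5000), rng.normal(-1, 0.5, 5000))
H, _, _ = np.histogram2d(x, y, bins=40, density=True)

# rank-2 factorization: each term is an outer product of two 1-D densities
nmf = NMF(n_components=2, max_iter=1000).fit(H)
W, Ht = nmf.transform(H), nmf.components_
print("rank-2 relative error:", np.linalg.norm(H - W @ Ht) / np.linalg.norm(H))
```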
|
We follow up on a previous finding that AGB Mira variables containing the
3DUP indicator technetium (Tc) in their atmosphere form a different sequence of
K-[22] colour as a function of pulsation period than Miras without Tc. A near-
to mid-infrared colour such as K-[22] is a good probe for the dust mass-loss
rate of the stars. Contrary to what might be expected, Tc-poor Miras show
redder K-[22] colours (i.e. higher dust mass-loss rates) than Tc-rich Miras at
a given period. Here, the previous sample is extended and the analysis is
expanded towards other colours and dust spectra. The most important aim is to
investigate if the same two sequences can be revealed in the gas mass-loss
rate. We analysed new optical spectra and expanded the sample by including more
stars from the literature. Near- and mid-IR photometry and ISO dust spectra of
our stars were investigated. Literature data of gas mass-loss rates of Miras
and semi-regular variables were collected and analysed. Our results show that
Tc-poor Miras are redder than Tc-rich Miras in a broad range of the mid-IR,
suggesting that the previous finding based on the K-[22] colour is not due to a
specific dust feature in the 22 micron band. We establish a linear relation
between K-[22] and the gas mass-loss rate. We also find that the 13 micron
feature disappears above K-[22]~2.17 mag, corresponding to $\dot{M}_{\rm
g}\sim2.6\times10^{-7}M_{\sun}yr^{-1}$. No similar sequences of Tc-poor and
Tc-rich Miras in the gas mass-loss rate vs. period diagram are found, most
probably owing to limitations in the available data. Different hypotheses to
explain the observation of two sequences in the P vs. K-[22] diagram are
discussed and tested, but so far none of them convincingly explains the
observations. Nevertheless, we might have found a hitherto unknown but
potentially important process influencing mass loss on the TP-AGB.
|
Smartphone technology has drastically improved over the past decade. These
improvements have seen the creation of specialized health applications, which
offer consumers a range of health-related activities such as tracking and
checking symptoms of health conditions or diseases through their smartphones.
We term these applications as Symptom Checking apps or simply SymptomCheckers.
Due to the sensitive nature of the private data they collect, store and manage,
leakage of user information could result in significant consequences. In this
paper, we use a combination of techniques from both static and dynamic analysis
to detect, trace and categorize security and privacy issues in 36 popular
SymptomCheckers on Google Play. Our analyses reveal that SymptomCheckers
request a significantly higher number of sensitive permissions and embed a
higher number of third-party tracking libraries for targeted advertisements
and analytics; these libraries exploit the privileged access of the
SymptomCheckers in which they exist as a means of collecting and sharing
critically sensitive data about the user and their device. We find that these
apps share the data they collect in unencrypted plain text with third-party
advertisers and, in some cases, with malicious domains.
SymptomCheckers is present in popular apps, still readily available on Google
Play.
|
We introduce a new general class of metric f-manifolds, which we call (nearly)
trans-S-manifolds and which includes the S-manifolds, C-manifolds, s-th
Sasakian manifolds and generalized Kenmotsu manifolds studied previously. We
prove their main properties and present many examples which justify their
study.
|
Hydrogen-bonded mixtures with varying concentration are a complicated
networked system that demands a detection technique with both time and
frequency resolutions. Hydrogen-bonded pyridine-water mixtures are studied by a
time-frequency resolved coherent Raman spectroscopic technique. Femtosecond
broadband dual-pulse excitation and delayed picosecond probing provide
sub-picosecond time resolution in the mixtures' temporal evolution. For
different pyridine concentrations in water, asymmetric blue versus red shifts
(relative to pure pyridine spectral peaks) were observed by simultaneously
recording both the coherent anti-Stokes and Stokes Raman spectra. Macroscopic
coherence dephasing times for the perturbed pyridine ring modes were observed
in ranges of 0.9 - 2.6 picoseconds for both 18 and 10 cm-1 broad probe pulses.
For high pyridine concentrations in water, an additional spectral broadening
(or escalated dephasing) for a triangular ring vibrational mode was observed.
This can be understood as a result of ultrafast collective emissions from
coherently excited ensemble of pairs of pyridine molecules bound to water
molecules.
|
This paper considers the quantum collapse of infinitesimally thin dust shells
in 2+1 gravity. In 2+1 gravity a shell is no longer a sphere but a ring of
matter. The classical equation of motion has been considered by Peleg and Steif
and Cristosomo and Olea. The minisuperspace quantum problem can be reduced to
that of a harmonic oscillator in terms of the curvature radius of the shell,
allowing the use of well-known methods to find the motion of coherent wave
packets that give the quantum collapse of the shell. Classically, as the radius
of the shell falls below a certain point, a horizon forms. In the quantum
problem one can define various quantities that give "indications" of horizon
formation. Without proper definitions of a "horizon" in quantum gravity, these
can be nothing but indications.
|
We study a model for two-species hard-core bosons in one dimension. In this
model, the same-species bosons have a hard-core condition at the same site,
while different-species bosons are allowed to occupy the same site with a local
interaction $U$. At half-filling, by Jordan-Wigner transformation, the model
can be exactly mapped to a fermionic Hubbard model. Due to this correspondence,
the phase transition from superfluid ($U=0$) to Mott insulator ($U>0$) can be
explained by simple one-band theory at half-filling. By using an exact
diagonalization method adopting a modified Lanczos method, we obtain the ground
states as a function of $U$ for the lattice size upto $L=16$. We calculate
directional current-current correlation functions in this model, which indicate
that there are some remaining counter-flow in the Mott insulating region
($U>0$) and co-flow in the charge-density-wave region ($U<0$) for the finite
lattices.
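As a pocket-sized illustration of the exact-diagonalization route, the sketch below builds the equivalent fermionic Hubbard chain at half filling on a toy open chain of $L=6$ sites and finds the ground state with scipy's Lanczos-based eigsh (plain Lanczos, not the paper's modified variant, and far from $L=16$):

```python
import numpy as np
from itertools import combinations
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

L, t, U = 6, 1.0, 2.0
Nup = Ndn = L // 2
states = [sum(1 << i for i in occ) for occ in combinations(range(L), Nup)]
index = {s: k for k, s in enumerate(states)}
D = len(states)

def hop(mask):
    """All single hops i <-> i+1 on an open chain (adjacent: no fermionic sign)."""
    for i in range(L - 1):
        if ((mask >> i) & 1) != ((mask >> (i + 1)) & 1):
            yield mask ^ (0b11 << i)

H = lil_matrix((D * D, D * D))
for ku, su in enumerate(states):
    for kd, sd in enumerate(states):
        row = ku * D + kd
        H[row, row] = U * bin(su & sd).count('1')     # on-site U n_up n_dn
        for s2 in hop(su):
            H[index[s2] * D + kd, row] -= t           # up-spin hopping
        for s2 in hop(sd):
            H[ku * D + index[s2], row] -= t           # down-spin hopping

E0 = eigsh(H.tocsr(), k=1, which='SA')[0][0]          # Lanczos ground state
print(f"L={L} ground-state energy per site: {E0 / L:.4f}")
```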
|
We study a set of topological roots of the local Bernstein-Sato polynomial of
arbitrary plane curve singularities. These roots are characterized in terms of
certain divisorial valuations and the numerical data of the minimal log
resolution. In particular, this set of roots strictly contains both the
opposites of the jumping numbers in $(0, 1)$ and the poles of the motivic zeta
function counted with multiplicity. As a consequence, we prove the multiplicity
part of the Strong Monodromy Conjecture for $n = 2$.
|
We present the dust properties and star-formation histories of local
submillimetre-selected galaxies in Herschel-ATLAS, classified by optical
morphology. The early-type galaxies (ETGs) that are detected contain as much
dust as typical spirals, and form a unique sample that has been blindly
selected at submillimetre wavelengths.
Comparing H-ATLAS galaxies to a control sample of optically selected
galaxies, we find 5.5% of luminous ETGs are detected in H-ATLAS. The H-ATLAS
ETGs contain a significant mass of cold dust: the mean dust mass is 5.5x10^7
Msun, with individual galaxies ranging from 9x10^5-4x10^8 Msun. This is
comparable to that of spirals in our sample, and is an order of magnitude more
dust than that found for the control ETGs, which have a median dust mass
inferred from stacking of (0.8-4.0)x10^6 Msun. The ETGs detected in H-ATLAS
have bluer NUV-r colours, higher specific star-formation rates and younger
stellar populations than ETGs which are optically selected, and may be
transitioning from the blue cloud to the red sequence. We also find that
H-ATLAS and control ETGs inhabit similar low-density environments. We conclude
that the dust in H-ATLAS and control ETGs cannot be solely from stellar
sources, and a large contribution from dust formed in the ISM or external
sources is required. Alternatively, dust destruction may not be as efficient as
predicted.
We also explore the properties of the most passive spiral galaxies in our
sample with SSFR<10^-11/yr. We find these passive spirals have lower
dust-to-stellar mass ratios, higher stellar masses and older stellar population
ages than normal spirals. The passive spirals inhabit low density environments
similar to those of the normal spiral galaxies in our sample. This shows that
the processes which turn spirals passive do not occur solely in the
intermediate density environments of group and cluster outskirts. (Abridged)
|
This report covers an intelligent decision support system (IDSS), which
handles an efficient and effective way to rapidly inspect containerized cargos
for defection. Defection means either cargo exposure to radiation or physical
damage such as holes, punctured surfaces, iron surface oxidation, etc. The
system uses a sorting array triangulation technique (SAT) and surface damage
detection (SDD) to conduct the inspection. This new technique saves time and
money on finding damaged goods during transportation such that, instead of
running $n$ inspections on $n$ containers, only 3 inspections per triangulation
or a ratio of $3:n$ is required, assuming $n > 3$ containers. A damaged stack
in the array is virtually detected as contiguous to an actually damaged cargo
by calculating the distances between nearby cargos, delivering reliable
estimates for the whole local stack population. The estimated values on
damaged, somewhat
damaged and undamaged cargo stacks, are listed and profiled after being sorted
by the program, thereby submitted to the manager for a final decision. The
report describes the problem domain and the implementation of the simulator
prototype, showing how the system operates via software, hardware with/without
human agents, conducting real-time inspections and management per se.
|
A proper edge coloring of a graph $G$ with colors $1,2,\dots,t$ is called a
\emph{cyclic interval $t$-coloring} if for each vertex $v$ of $G$ the edges
incident to $v$ are colored by consecutive colors, under the condition that
color $1$ is considered as consecutive to color $t$. We prove that a bipartite
graph $G$ with even maximum degree $\Delta(G)\geq 4$ admits a cyclic interval
$\Delta(G)$-coloring if for every vertex $v$ the degree $d_G(v)$ satisfies
either $d_G(v)\geq \Delta(G)-2$ or $d_G(v)\leq 2$. We also prove that every
Eulerian bipartite graph $G$ with maximum degree at most $8$ has a cyclic
interval coloring. Some results are obtained for $(a,b)$-biregular graphs, that
is, bipartite graphs with the vertices in one part all having degree $a$ and
the vertices in the other part all having degree $b$; it has been conjectured
that all these have cyclic interval colorings. We show that all
$(4,7)$-biregular graphs as well as all $(2r-2,2r)$-biregular ($r\geq 2$)
graphs have cyclic interval colorings. Finally, we prove that all complete
multipartite graphs admit cyclic interval colorings; this settles in the
affirmative, a conjecture of Petrosyan and Mkhitaryan.
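The defining property is easy to verify mechanically: at each vertex the incident colors must be pairwise distinct and form one consecutive run modulo $t$. The checker below (our own small utility, with a hypothetical 4-cycle example) tests exactly this:

```python
def is_cyclic_interval(edge_colors, t):
    """edge_colors: dict vertex -> list of colors (1..t) on incident edges.
    Checks that incident colors are distinct and cyclically consecutive mod t."""
    for v, cols in edge_colors.items():
        c = sorted(set(cols))
        if len(c) != len(cols):                 # proper coloring: no repeats at v
            return False
        # cyclic gaps between successive used colors; a consecutive run has
        # at most one gap larger than 1 (the "wrap-around" gap)
        gaps = [(c[(i + 1) % len(c)] - c[i]) % t for i in range(len(c))]
        if len(c) > 1 and sum(g > 1 for g in gaps) > 1:
            return False
    return True

# hypothetical example: a 4-cycle with t = 2 and alternating colors 1, 2
K = {0: [1, 2], 1: [2, 1], 2: [1, 2], 3: [2, 1]}
print(is_cyclic_interval(K, 2))                 # True
```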
|
We explore how the lens of fictional superpowers can help characterize how
visualizations empower people and provide inspiration for new visualization
systems. Researchers and practitioners often tout visualizations' ability to
"make the invisible visible" and to "enhance cognitive abilities." Meanwhile
superhero comics and other modern fiction often depict characters with
similarly fantastic abilities that allow them to see and interpret the world in
ways that transcend traditional human perception. We investigate the
intersection of these domains, and show how the language of superpowers can be
used to characterize existing visualization systems and suggest opportunities
for new and empowering ones. We introduce two frameworks: The first
characterizes seven underlying mechanisms that form the basis for a variety of
visual superpowers portrayed in fiction. The second identifies seven ways in
which visualization tools and interfaces can instill a sense of empowerment in
the people who use them. Building on these observations, we illustrate a
diverse set of "visualization superpowers" and highlight opportunities for the
visualization community to create new systems and interactions that empower new
experiences with data.
|
Recent advances in artificial intelligence have been strongly driven by the
use of game environments for training and evaluating agents. Games are often
accessible and versatile, with well-defined state-transitions and goals
allowing for intensive training and experimentation. However, agents trained in
a particular environment are usually tested on the same or slightly varied
distributions, and solutions do not necessarily imply any understanding. If we
want AI systems that can model and understand their environment, we need
environments that explicitly test for this. Inspired by the extensive
literature on animal cognition, we present an environment that keeps all the
positive elements of standard gaming environments, but is explicitly designed
for the testing of animal-like artificial cognition.
|
For $\mathbf{a} \in \mathbb{R}_{\geq 0}^{n}$, the Tesler polytope
$\mathrm{Tes}_{n}(\mathbf{a})$ is the set of upper triangular matrices with
non-negative entries whose hook sum vector is $\mathbf{a}$. Motivated by a
conjecture of Morales, we study the question of whether the coefficients of the
Ehrhart polynomial of $\mathrm{Tes}_n(1,1,\dots,1)$ are positive. We attack
this problem by studying a certain function constructed by Berline-Vergne and
its values on faces of a unimodularly equivalent copy of
$\mathrm{Tes}_n(1,1,\dots,1)$. We develop a method of obtaining the dot
products appearing in formulas for computing Berline-Vergne's function directly
from facet normal vectors. Using this method together with known formulas, we
are able to show that Berline-Vergne's function has positive values on
codimension-$2$ and $3$ faces of the polytopes we consider. As a consequence,
we prove that the $3$rd and $4$th coefficients of the Ehrhart polynomial of
$\mathrm{Tes}_{n}(1,\dots,1)$ are positive. Using the Reduction Theorem of
Castillo and the second author, we generalize the above result to all
deformations of $\mathrm{Tes}_{n}(1,\dots,1)$, including all the integral
Tesler polytopes.
|