Full batch training of Graph Convolutional Network (GCN) models is not
feasible on a single GPU for large graphs containing tens of millions of
vertices or more. Recent work has shown that, for the graphs used in the
machine learning community, communication becomes a bottleneck and scaling
stalls beyond the single-machine regime. Thus, we propose MG-GCN, a
multi-GPU GCN training framework taking advantage of the high-speed
communication links between the GPUs present in multi-GPU systems. MG-GCN
employs multiple High-Performance Computing optimizations, including efficient
re-use of memory buffers to reduce the memory footprint of training GNN models,
as well as communication and computation overlap. These optimizations enable
execution on larger datasets that generally do not fit into the memory of a
single GPU in state-of-the-art implementations. Furthermore, they contribute to
achieving a superior speedup over the state of the art. For example, MG-GCN
achieves super-linear speedup with respect to DGL, on the Reddit graph on both
DGX-1 (V100) and DGX-A100.
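To illustrate the kind of computation being distributed, the core of each GCN layer is the propagation A @ H @ W over the normalized adjacency matrix. Below is a minimal serial numpy sketch of 1D row partitioning across devices; this is our illustration of the general idea, not MG-GCN's CUDA implementation.

```python
import numpy as np

def gcn_layer_partitioned(A, H, W, n_parts=2):
    """Serial sketch of 1D row-partitioned GCN propagation: each "device"
    owns a block of rows of the normalized adjacency A and computes its
    slice of A @ H @ W after an all-gather of the feature matrix H."""
    parts = np.array_split(np.arange(A.shape[0]), n_parts)
    H_full = H  # stands in for the all-gather communication step
    return np.vstack([A[rows] @ H_full @ W for rows in parts])
```

Stacking the per-device slices reproduces the full-batch result, which is what makes the row partitioning exact rather than an approximation.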
|
Properties of the hypernuclei $_{\Lambda \Lambda}^5$H and $_{\Lambda \Lambda
}^5$He are studied in a two-channel approach with explicit treatment of the
coupling of the channels $^3\text{Z}+\Lambda+\Lambda$ and $\alpha+\Xi$. Diagonal
$\Lambda\Lambda$ and coupling $\Lambda\Lambda$-$\Xi N$ interactions are derived
within the G-matrix procedure from Nijmegen meson-exchange models. The bond
energy $\Delta B_{\Lambda\Lambda}$ in $_{\Lambda \Lambda}^5$He significantly
exceeds that in $_{\Lambda \Lambda}^5$H due to the channel coupling. Diagonal
$\Xi\alpha$ attraction amplifies the effect, which is also sensitive to the
$\Lambda$-core interaction. The difference of the $\Delta B_{\Lambda\Lambda}$
values can be an unambiguous signature of the $\Lambda\Lambda$-$\Xi N$ coupling
in $\Lambda\Lambda$ hypernuclei. However, improved knowledge of the
hyperon-nucleus potentials is needed for quantitative extraction of the coupling
strength from future data on $\Lambda\Lambda$ hypernuclear binding energies.
|
Time-dependent systems have recently been shown to support novel types of
topological order that cannot be realised in static systems. In this paper, we
consider a range of time-dependent, interacting systems in one dimension that
are protected by an Abelian symmetry group. We classify the distinct
topological phases that can exist in this setting and find that they may be
described by a bulk invariant associated with the unitary evolution of the
closed system. In the open system, nontrivial phases correspond to the
appearance of edge modes in the many-body quasienergy spectrum, which relate to
the bulk invariant through a form of bulk-edge correspondence. We introduce
simple models which realise nontrivial dynamical phases in a number of cases,
and outline a loop construction that can be used to generate such phases more
generally.
|
We propose a minimal model to simultaneously account for a realistic neutrino
spectrum through a type-I seesaw mechanism and a viable dark matter relic
density. The model is an extension of the Littlest Seesaw model in which the
two right-handed neutrinos of the model are coupled to a $Z_2$-odd dark sector
via right-handed neutrino portal couplings. In this model, a highly constrained
and direct link between dark matter and neutrino physics is achieved by
considering the freeze-in production mechanism of dark matter. We show that the
neutrino Yukawa couplings which describe neutrino mass and mixing may also play
a dominant role in the dark matter production. We investigate the allowed
regions in the parameter space of the model that provide the correct neutrino
masses and mixing and simultaneously give the correct dark matter relic
abundance. In certain cases the right-handed neutrino mass may be arbitrarily
large, for example in the range $10^{10}-10^{11}$ GeV required for vanilla
leptogenesis, with a successful relic density arising from frozen-in dark
matter particles with masses around this scale, which we refer to as
"fimpzillas".
|
Upcoming deep optical surveys such as the Vera C. Rubin Observatory Legacy
Survey of Space and Time will scan the sky to unprecedented depths and detect
billions of galaxies. Such a large number of detections will, however, cause
the apparent superposition of galaxies on the images, called blending, and
generate a new systematic error due to the confusion of sources. As a
consequence, measurements of individual galaxy properties such as their
redshifts or shapes will be affected, and some galaxies will not be detected. However,
galaxy shapes are key quantities, used to estimate masses of large scale
structures, such as galaxy clusters, through weak gravitational lensing. This
work presents a new catalog matching algorithm, called friendly, for the
detection and characterization of blends in simulated LSST data for the DESC
Data Challenge 2. By identifying a specific type of blends, we show that
removing them from the data may partially correct the amplitude of the
$\Delta\Sigma$ weak lensing profile that could be biased low by around 20% due
to blending. This would in turn affect cluster weak-lensing mass estimates and
the inferred cosmology.
|
In this paper we prove the universal property of skew $PBW$ extensions,
thereby generalizing the well-known universal property of skew polynomial
rings. To this end, we first show a result on the existence of this class
of non-commutative rings. Skew $PBW$ extensions include as particular examples
Weyl algebras, enveloping algebras of finite-dimensional Lie algebras (and
their quantizations), Artamonov quantum polynomials, diffusion algebras, and
the Manin algebra of quantum matrices, among many others. As a corollary we give a new short
proof of the Poincar\'e-Birkhoff-Witt theorem about the bases of enveloping
algebras of finite-dimensional Lie algebras.
|
Generating photo-realistic video portrait with arbitrary speech audio is a
crucial problem in film-making and virtual reality. Recently, several works
explore the usage of neural radiance field in this task to improve 3D realness
and image fidelity. However, the generalizability of previous NeRF-based
methods to out-of-domain audio is limited by the small scale of training data.
In this work, we propose GeneFace, a generalized and high-fidelity NeRF-based
talking face generation method, which can generate natural results
corresponding to various out-of-domain audio. Specifically, we learn a
variational motion generator on a large lip-reading corpus, and introduce a
domain-adaptive post-net to calibrate the result. Moreover, we learn a
NeRF-based renderer conditioned on the predicted facial motion. A head-aware
torso-NeRF is proposed to eliminate the head-torso separation problem.
Extensive experiments show that our method achieves more generalized and
high-fidelity talking face generation compared to previous methods.
|
We present some applications of the notion of numerosity to measure theory,
including the construction of a non-Archimedean model for the probability of
infinite sequences of coin tosses.
|
The ability to recognise emotions lends a conversational artificial
intelligence a human touch. While emotions in chit-chat dialogues have received
substantial attention, emotions in task-oriented dialogues remain largely
unaddressed. This is despite emotions and dialogue success having equally
important roles in a natural system. Existing emotion-annotated task-oriented
corpora are limited in size, label richness, and public availability, creating
a bottleneck for downstream tasks. To lay a foundation for studies on emotions
in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually
emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on
MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than
11K dialogues with more than 83K emotion annotations of user utterances. In
addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine
dialogues within the same set of domains to sufficiently cover the space of
various emotions that can happen during the lifetime of a data-driven dialogue
system. To the best of our knowledge, this is the first large-scale open-source
corpus of its kind. We propose a novel emotion labelling scheme, which is
tailored to task-oriented dialogues. We report a set of experimental results to
show the usability of this corpus for emotion recognition and state tracking in
task-oriented dialogues.
|
Examples of "doubly robust" estimators for missing data include augmented
inverse probability weighting (AIPWT) models (Robins et al., 1994) and
penalized splines of propensity prediction (PSPP) models (Zhang and Little,
2009). Doubly-robust estimators have the property that, if either the response
propensity or the mean is modeled correctly, a consistent estimator of the
population mean is obtained. However, doubly-robust estimators can perform
poorly when modest misspecification is present in both models (Kang and
Schafer, 2007). Here we consider extensions of the AIPWT and PSPP models that
use Bayesian Additive Regression Trees (BART; Chipman et al., 2010) to provide
highly robust propensity and mean model estimation. We term these
"robust-squared" in the sense that the propensity score, the means, or both can
be estimated with minimal model misspecification, and applied to the
doubly-robust estimator. We consider their behavior via simulations where
propensities and/or mean models are misspecified. We apply our proposed method
to impute missing instantaneous velocity (delta-v) values from the 2014
National Automotive Sampling System Crashworthiness Data System dataset and
missing Blood Alcohol Concentration values from the 2015 Fatality Analysis
Reporting System dataset. We found that BART applied to PSPP and AIPWT
provides more robust and efficient estimates than standard PSPP and AIPWT, with
the BART-estimated propensity score combined with PSPP providing the most
efficient estimator with close-to-nominal coverage.
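For reference, the double-robustness property enters through the form of the basic AIPW estimator; below is a minimal numpy sketch (our illustration: the paper's "robust-squared" variants plug BART fits in for the propensity and mean models).

```python
import numpy as np

def aipw_mean(y, r, e_hat, m_hat):
    """Augmented IPW estimate of the population mean of y.
    r: missingness indicator (1 = observed); e_hat: estimated response
    propensities; m_hat: predicted outcome means. Consistent if either
    the propensity model or the mean model is correctly specified."""
    y_obs = np.where(r == 1, y, 0.0)  # unobserved y never enters the sum
    return np.mean(r * y_obs / e_hat + (1.0 - r / e_hat) * m_hat)
```

Rewriting the summand as m_hat + r * (y - m_hat) / e_hat makes the augmentation explicit: the mean-model prediction is corrected by inverse-propensity-weighted residuals on the observed cases.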
|
We examine the minimal supergravity (mSUGRA) model under the assumption that
the strong CP problem is solved by the Peccei-Quinn mechanism. In this case,
the relic dark matter (DM) abundance consists of three components: {\it i})
cold axions, {\it ii}) warm axinos from neutralino decay, and {\it iii}) cold
or warm thermally produced axinos. To sustain a high enough re-heat temperature
($T_R \gtrsim 10^6$ GeV) for many baryogenesis mechanisms to function, we find
that the bulk of the DM should consist of cold axions, while the admixture of
cold and warm axinos should be rather slight, with a very light axino of mass
$\sim 100$ keV. For mSUGRA with mainly axion cold DM (CDM), the most DM-preferred
parameter space regions are precisely those which are least preferred in the
case of neutralino DM. Thus, rather different SUSY signatures are expected at
the LHC in the case of mSUGRA with mainly axion CDM, as compared to mSUGRA with
neutralino CDM.
|
Microwave absorption by metallic conductors is extremely inefficient, though
natively broadband, due to the huge impedance mismatch between metal and free
space. Reducing the thickness to an ultrathin conductive film may improve the
absorption efficiency, but it is still bounded by a maximal 50% limit imposed
by field continuity. Here, we show that broadband perfect (100%)
absorption of microwave can be realized on a single layer of ultrathin
conductive film when it is illuminated coherently by two oppositely incident
beams. Such an effect of breaking the 50% limit maintains the intrinsic
broadband feature from the free carrier dissipation, and is
frequency-independent in an ultrawide spectrum, ranging typically from
kilohertz to gigahertz and exhibiting an unprecedented bandwidth close to 200%.
In particular, it occurs on extremely subwavelength scales set by the film
thickness, $\sim\lambda/10000$ or even thinner. Our work proposes a way to achieve
total electromagnetic wave absorption in a broadband spectrum of radio waves
and microwaves with a simple conductive film.
|
Inspired by the narrow Feshbach resonance in systems with the two-body
interaction, we propose the two-channel model of three-component fermions with
the three-body interaction that takes into account the finite-range effects in
low dimensions. Within this model, the $p$-wave Efimov-like effect in the
four-body sector is predicted in fractional dimensions above 1D. The impact of
the finite-range interaction on the formation of the four-body bound states in
$d=1$ is also discussed in detail.
|
In this paper we determine the weak-interaction corrections of order
$\alpha_s^2\alpha$ to hadronic top-quark pair production. First we compute the
one-loop weak corrections to $t \bar t$ production due to gluon fusion and the
order $\alpha_s^2\alpha$ corrections to $t \bar t$ production due to
(anti)quark gluon scattering in the Standard Model. With our previous result
this yields the complete corrections of order $\alpha_s^2\alpha$ to $gg$, $q
\bar q$, $q g$, and ${\bar q} g$ induced hadronic $t \bar t$ production with
$t$ and $\bar t$ polarizations and spin-correlations fully taken into account.
For the Tevatron and the LHC we determine the weak contributions to the
transverse top-momentum and to the $t \bar t$ invariant mass distributions. At
the LHC these corrections can be of the order of 10 percent compared with the
leading-order results, for large $p_T$ and $m_{t\bar t}$, respectively. Apart from
parity-even $t \bar t$ spin correlations we analyze also parity-violating
double and single spin asymmetries, and show how they are related if CP
invariance holds. For $t$ (and $\bar t$) quarks which decay semileptonically,
we compute a resulting charged-lepton forward-backward asymmetry $A_{PV}$ with
respect to the $t$ ($\bar t$) direction, which is of the order of one percent
at the LHC for suitable invariant-mass cuts.
|
We formulate a scattering theory of polarization and heat transport through a
ballistic ferroelectric point contact. We predict a polarization current under
either an electric field or a temperature difference that depends strongly on
the direction of the ferroelectric order and can be detected by its magnetic
stray field and associated thermovoltage and Peltier effect.
|
We report on the observation of weakly-bound dimers of bosonic Dysprosium
with a strong universal s-wave halo character, associated with broad magnetic
Feshbach resonances. These states surprisingly decouple from the chaotic
background of narrow resonances, persisting across many such resonances.
In addition they show the highest reported magnetic moment
$\mu\simeq20\,\mu_{\rm B}$ of any ultracold molecule. We analyze our findings
using a coupled-channel theory taking into account the short range van der
Waals interaction and a correction due to the strong dipole moment of
Dysprosium. We are able to extract the scattering length as a function of
magnetic field associated with these resonances and obtain a background
scattering length $a_{\rm bg}=91(16)\,a_0$. These results offer prospects of a
tunability of the interactions in Dysprosium, which we illustrate by observing
the saturation of three-body losses.
|
Industrial environments contain a lot of waste that can harm both products
and workers, resulting in product defects, eye irritation, chronic obstructive
pulmonary disease, and so on. While automated cleaning robots could be used,
such environments are often too large for a single robot to clean alone, and
one robot typically lacks adequate dirt-storage capacity. We present a
multi-robot dirt-cleaning algorithm in which multiple automated iRobot Creates
team up to clean an environment efficiently. Moreover, since some areas of the
environment are clean while others are dirty, our multi-robot system uses a
path-planning algorithm that lets the robot team clean efficiently by spending
more time on areas with higher dirt levels. Overall, our multi-robot system
outperforms a single-robot system in time efficiency while achieving almost the
same total battery usage and cleaning efficiency.
|
We report the realization of a new iterative Fourier-transform algorithm for
creating holograms that can diffract light into an arbitrary two-dimensional
intensity profile. We show that the predicted intensity distributions are
smooth with a fractional error from the target distribution at the percent
level. We demonstrate that this new algorithm outperforms the most frequently
used alternatives typically by one and two orders of magnitude in accuracy and
roughness, respectively. The techniques described in this paper outline a path
to creating arbitrary holographic atom traps in which the only remaining hurdle
is physical implementation.
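As background, the most common baseline in this family is the Gerchberg-Saxton iterative Fourier-transform loop; below is a minimal numpy sketch of phase-only hologram computation (our illustration of the baseline, not the paper's improved algorithm).

```python
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50, seed=0):
    """Iterative Fourier-transform phase retrieval: find a phase-only
    hologram whose far-field intensity approximates the target."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_intensity.shape)
    for _ in range(n_iter):
        # Hologram plane: uniform illumination with the current phase.
        far_field = np.fft.fft2(np.exp(1j * phase))
        # Image plane: impose the target amplitude, keep the phase.
        far_field = target_amp * np.exp(1j * np.angle(far_field))
        # Back-propagate and keep only the phase (phase-only constraint).
        phase = np.angle(np.fft.ifft2(far_field))
    return phase
```

The smoothness and accuracy metrics discussed in the abstract quantify how far the resulting far-field intensity deviates from the target after such alternating projections.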
|
A novel type of plasmonic waveguide is proposed, featuring an "open" design
that is easy to manufacture, simple to excite, and offers convenient access to
the plasmonic mode. Optical properties of photonic bandgap
(PBG) plasmonic waveguides are investigated experimentally by leakage radiation
microscopy and numerically using the finite element method confirming photonic
bandgap guidance in a broad spectral range. Propagation and localization
characteristics of a PBG plasmonic waveguide have been discussed as a function
of the wavelength of operation, waveguide core size and the number of ridges in
the periodic reflector for fundamental and higher order plasmonic modes of the
waveguide.
|
We report measurements of the cluster X-ray luminosity function out to z=0.8
based on the final sample of 201 galaxy systems from the 160 Square Degree
ROSAT Cluster Survey. There is little evidence for any measurable change in
cluster abundance out to z~0.6 at luminosities less than a few times 10^44
ergs/s (0.5-2.0 keV). However, between 0.6 < z < 0.8 and at luminosities above
10^44 ergs/s, the observed volume densities are significantly lower than those
of the present-day population. We quantify this cluster deficit using
integrated number counts and a maximum-likelihood analysis of the observed
luminosity-redshift distribution fit with a model luminosity function. The
negative evolution signal is >3 sigma regardless of the adopted local
luminosity function or cosmological framework. Our results and those from
several other surveys independently confirm the presence of evolution. Whereas
the bulk of the cluster population does not evolve, the most luminous and
presumably most massive structures evolve appreciably between z=0.8 and the
present. Interpreted in the context of hierarchical structure formation, we are
probing sufficiently large mass aggregations at sufficiently early times in
cosmological history where the Universe has yet to assemble these clusters to
present-day volume densities.
|
Novel Ni-Co-Mn-Ti all-d-metal Heusler alloys are exciting due to large
multicaloric effects combined with enhanced mechanical properties. An optimized
heat treatment for a series of these compounds leads to very sharp phase
transitions in bulk alloys with isothermal entropy changes of up to 38 J
kg$^{-1}$ K$^{-1}$ for a magnetic field change of 2 T. The differences of
as-cast and annealed samples are analyzed by investigating microstructure and
phase transitions in detail by optical microscopy. We identify different grain
structures as well as stoichiometric (in)homogeneities as reasons for
differently sharp martensitic transitions after ideal and non-ideal annealing.
We develop alloy design rules for tuning the magnetostructural phase transition
and evaluate specifically the sensitivity of the transition temperature towards
the externally applied magnetic fields ($\frac{dT_t}{\mu_0dH}$) by analyzing
the different stoichiometries. We then set up a phase diagram including
martensitic transition temperatures and austenite Curie temperatures depending
on the e/a ratio for varying Co and Ti content. The evolution of the Curie
temperature with changing stoichiometry is compared to other Heusler systems.
Density Functional Theory calculations reveal a correlation of T$_C$ with the
stoichiometry as well as with the order state of the austenite. This combined
approach of experiment and theory allows for an efficient development of new
systems towards promising magnetocaloric properties. Direct adiabatic
temperature change measurements show here the largest change of -4 K in a
magnetic field change of 1.93 T for Ni$_{35}$Co$_{15}$Mn$_{37}$Ti$_{13}$.
|
Wireless sensor networks (WSN) are fundamental to the Internet of Things
(IoT) by bridging the gap between the physical and the cyber worlds. Anomaly
detection is a critical task in this context as it is responsible for
identifying various events of interests such as equipment faults and
undiscovered phenomena. However, this task is challenging because of the
elusive nature of anomalies and the volatility of the ambient environments. In
a resource-scarce setting like WSN, this challenge is further elevated,
weakening the suitability of many existing solutions. In this paper, for the
first time, we introduce autoencoder neural networks into WSN to solve the
anomaly detection problem. We design a two-part algorithm that resides on
sensors and the IoT cloud respectively, such that (i) anomalies can be detected
at sensors in a fully distributed manner without the need for communicating
with any other sensors or the cloud, and (ii) the relatively more
computation-intensive learning task can be handled by the cloud with a much
lower (and configurable) frequency. In addition to the minimal communication
overhead, the computational load on sensors is also very low (of polynomial
complexity) and readily affordable by most COTS sensors. Using a real WSN
indoor testbed and sensor data collected over 4 consecutive months, we
demonstrate via experiments that our proposed autoencoder-based anomaly
detection mechanism achieves high detection accuracy and low false alarm rate.
It is also able to adapt to unforeseeable and new changes in a non-stationary
environment, thanks to the unsupervised learning feature of our chosen
autoencoder neural networks.
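The detection rule itself is simple: score each sample by its reconstruction error under an autoencoder trained on normal data, and flag high-error samples. A minimal numpy sketch follows (a toy one-hidden-layer model of our own, not the paper's exact architecture).

```python
import numpy as np

def train_autoencoder(X, hidden=4, epochs=2000, lr=0.05, seed=0):
    """Tiny one-hidden-layer autoencoder trained by gradient descent on the
    mean squared reconstruction error. Returns a scoring function mapping
    samples to reconstruction errors, the anomaly score used on-sensor."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        H = np.tanh(X @ W1)          # encoder
        X_hat = H @ W2               # linear decoder
        err = X_hat - X
        gW2 = H.T @ err / n          # backprop of mean squared error
        gH = err @ W2.T * (1.0 - H ** 2)
        gW1 = X.T @ gH / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    def score(x):
        h = np.tanh(x @ W1)
        return np.sum((h @ W2 - x) ** 2, axis=-1)
    return score
```

In the two-part setting described above, the (cheap) scoring function would run on the sensors, while the (heavier) training loop would run in the cloud and periodically push updated weights down.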
|
An ordered hypergraph is a hypergraph whose vertex set is linearly ordered,
and a convex geometric hypergraph is a hypergraph whose vertex set is
cyclically ordered. Extremal problems for ordered and convex geometric graphs
have a rich history with applications to a variety of problems in combinatorial
geometry. In this paper, we consider analogous extremal problems for uniform
hypergraphs, and determine the order of magnitude of the extremal function for
various ordered and convex geometric paths and matchings. Our results
generalize earlier works of Bra{\ss}-K\'{a}rolyi-Valtr, Capoyleas-Pach and
Aronov-Dujmovi\v{c}-Morin-Ooms-da Silveira. We also provide a new
generalization of the Erd\H os-Ko-Rado theorem in the ordered setting.
|
In recent years, mesh-based peer-to-peer (P2P) live streaming has become a
promising way for service providers to offer high-quality live video streaming
services to Internet users. In this paper, we present a detailed study of the
modeling and performance analysis of pull-based P2P streaming systems. We establish the
analytical framework for the pull-based streaming schemes in P2P network, give
accurate models of the chunk selection and peer selection strategies, and
organize them into three categories, i.e., the chunk first scheme, the peer
first scheme and the epidemic scheme. Through numerical performance evaluation,
the impacts of some important parameters, such as size of neighbor set, reply
number, buffer size and so on are investigated. For the peer first and chunk
first scheme, we show that the pull-based schemes do not perform as well as the
push-based schemes when peers are limited to reply only one request in each
time slot. When the reply number increases, the pull-based streaming schemes
will reach close to optimal playout probability. As to the pull-based epidemic
scheme, we find it has unexpected poor performance, which is significantly
different from the push-based epidemic scheme. Therefore we propose a simple,
efficient and easily deployed push-pull scheme which can significantly improve
the playout probability.
|
Introducing common sense to natural language understanding systems has
received increasing research attention. It remains a fundamental question how
to evaluate whether a system has the capability of sense making. Existing
benchmarks measure commonsense knowledge indirectly and without explanation.
In this paper, we release a benchmark to directly test whether a system can
differentiate natural language statements that make sense from those that do
not make sense. In addition, a system is asked to identify the most crucial
reason why a statement does not make sense. We evaluate models trained over
large-scale language modeling tasks as well as human performance, showing that
there are different challenges for system sense making.
|
The dynamics of a two-component dilute Bose gas of atoms at zero temperature
is described in the mean field approximation by a two-component
Gross-Pitaevskii Equation. We solve this equation assuming a Gaussian shape for
the wavefunction, where the free parameters of the trial wavefunction are
determined using a moment method. We derive equilibrium states and stability
phase diagrams for positive and negative s-wave scattering lengths,
and obtain the low energy excitation frequencies corresponding to the
collective motion of the two Bose condensates.
|
This paper has been withdrawn by the corresponding author because the newest
version is now published in Discrete Applied Mathematics.
|
The biological processes involved in a drug's mechanisms of action are
oftentimes dynamic, complex and difficult to discern. Time-course gene
expression data is a rich source of information that can be used to unravel
these complex processes, identify biomarkers of drug sensitivity and predict
the response to a drug. However, the majority of previous work has not fully
utilized this temporal dimension. In these studies, the gene expression data is
either considered at one time-point (before the administration of the drug) or
two timepoints (before and after the administration of the drug). This is
clearly inadequate in modeling dynamic gene-drug interactions, especially for
applications such as long-term drug therapy.
In this work, we present a novel REcursive Prediction (REP) framework for
drug response prediction by taking advantage of time-course gene expression
data. Our goal is to predict drug response values at every stage of a long-term
treatment, given the expression levels of genes collected in the previous
time-points. To this end, REP employs a built-in recursive structure that
exploits the intrinsic time-course nature of the data and integrates past
values of drug responses for subsequent predictions. It also incorporates
tensor completion that can not only alleviate the impact of noise and missing
data, but also predict unseen gene expression levels (GELs). These advantages
enable REP to estimate drug response at any stage of a given treatment from
some GELs measured in the beginning of the treatment. Extensive experiments on
a dataset corresponding to 53 multiple sclerosis patients treated with
interferon are included to showcase the effectiveness of REP.
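The recursive feedback structure can be sketched with a toy linear predictor (our simplification; REP couples this recursion with tensor completion over the gene-expression data).

```python
import numpy as np

def recursive_predict(w, gels, y0, horizon):
    """Sketch of recursive drug-response prediction: the response at step t
    is predicted from that step's gene-expression levels (GELs) and the
    previous predicted response, which is then fed back for step t+1.
    Toy linear model, not REP's full tensor-completion pipeline."""
    y, preds = y0, []
    for t in range(horizon):
        x = np.concatenate([gels[t], [y]])  # features + previous response
        y = float(w @ x)
        preds.append(y)
    return preds
```

Because each prediction is fed back as an input, errors can compound over a long treatment, which is one motivation for denoising the inputs via tensor completion.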
|
We introduce a novel strategy for controlling the temporal evolution of a
quantum system at the nanoscale. Our method relies on the use of graphene
plasmons, which can be electrically tuned in frequency by external gates.
Quantum emitters (e.g., quantum dots) placed in the vicinity of a graphene
nanostructure are subject to the strong interaction with the plasmons of this
material, thus undergoing time variations in their mutual interaction and
quantum evolution that are dictated by the externally applied gating voltages.
This scheme opens a new path towards the realization of quantum-optics devices
in the robust solid-state environment of graphene.
|
The (super) Schottky uniformization of compact (super) Riemann surfaces is
briefly reviewed. Deformations of super Riemann surface by gravitinos and
Beltrami parameters are recast in terms of super Schottky group cohomology. It
is checked that the super Schottky group formula for the period matrix of a
non-split surface matches its expression in terms of a gravitino and Beltrami
parameter on a split surface. The relationship between (super) Schottky groups
and the construction of surfaces by gluing pairs of punctures is discussed in
an appendix.
|
We propose a model for Quantum Chromodynamics, obtained by ignoring the
angular dependence of the gluon fields, which could qualitatively describe
systems containing one heavy quark. This leads to a two dimensional gauge
theory which has chiral symmetry and heavy quark symmetry. We show that in a
light cone formalism, the Hamiltonian of this spherical QCD can be expressed
entirely in terms of color singlet variables. Furthermore, in the large $N_c$
limit, it tends to a classical hadron theory. We derive an integral equation
for the masses and wavefunctions of a heavy meson. This can be interpreted as a
relativistic potential model. The integral equation is scale invariant, but
renormalization of the coupling constant generates a scale. We compute the
approximate beta function of the coupling constant, which has an ultraviolet
stable fixed point at the origin.
|
In this paper, we prove that the Banach contraction principle proved by S. G.
Matthews in 1994 on 0--complete partial metric spaces can be extended to
cyclical mappings. However, the generalized contraction principle proved by D.
Ili\'{c}, V. Pavlovi\'{c} and V. Rako\v{c}evi\'{c} in "Some new extensions of
Banach's contraction principle to partial metric spaces, Appl. Math. Lett. 24
(2011), 1326--1330" on complete partial metric spaces can not be extended to
cyclical mappings. Some examples are given to illustrate the effectiveness of
our results. Moreover, we generalize some of the results obtained by W. A.
Kirk, P. S. Srinivasan and P. Veeramani in "Fixed points for mappings
satisfying cyclical contractive conditions, Fixed Point Theory 4 (1)
(2003), 79--89". Finally, an Edelstein-type theorem is also extended in case
one of the sets in the cyclic decomposition is 0-compact.
|
Random binning is an efficient, yet complex, coding technique for the
symmetric L-description source coding problem. We propose an alternative
approach that uses the quantized samples of a bandlimited source as
"descriptions". By the Nyquist condition, the source can be reconstructed if
enough samples are received. We examine a coding scheme that combines sampling
and noise-shaped quantization for a scenario in which only K < L descriptions
or all L descriptions are received. Some of the received K-sets of descriptions
correspond to uniform sampling while others to non-uniform sampling. This
scheme achieves the optimum rate-distortion performance for uniform-sampling
K-sets, but suffers noise amplification for nonuniform-sampling K-sets. We then
show that by increasing the sampling rate and adding a random-binning stage,
the optimal operation point is achieved for any K-set.
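The reconstruction step underlying the scheme can be illustrated with least squares on the low-frequency DFT coefficients: any K-set of sample positions, uniform or non-uniform, recovers a bandlimited signal exactly once enough samples are received (a toy illustration of ours, without the quantization and noise shaping).

```python
import numpy as np

def reconstruct_bandlimited(samples, positions, n, bandwidth):
    """Recover a length-n real signal whose DFT is supported on the
    `bandwidth` lowest frequencies, from its values at `positions`
    (uniform or non-uniform), via least squares on the DFT coefficients."""
    k = np.arange(-(bandwidth // 2), bandwidth // 2 + 1)
    A = np.exp(2j * np.pi * np.outer(positions, k) / n)  # partial DFT matrix
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    t = np.arange(n)
    return (np.exp(2j * np.pi * np.outer(t, k) / n) @ coeffs).real
```

Non-uniform K-sets still determine the signal, but the partial DFT matrix is worse conditioned, which is the numerical face of the noise amplification the abstract mentions.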
|
We theoretically investigate the effects of Coulomb interaction, at the level
of unscreened Hartree-Fock approximation, on third harmonic generation of
undoped graphene in an equation of motion framework. The unperturbed electronic
states are described by a widely used two-band tight binding model, and the
Coulomb interaction is described by the Ohno potential. The ground state is
renormalized by taking into account the Hartree-Fock term, and the optical
conductivities are obtained by numerically solving the equations of motion. The
absolute values of conductivity for third harmonic generation depend on the
photon frequency $\Omega$ as $\Omega^{-n}$ for $\hbar\Omega<1$~eV, and then show a
peak as $3\hbar\Omega$ approaches the renormalized energy of the $M$ point.
Taking into account the Coulomb interaction, $n$ is found to be $5.5$, which is
significantly greater than the value of $4$ found with the neglect of the
Coulomb interaction. Therefore the Coulomb interaction enhances third harmonic
generation at low photon energies -- for our parameters $\hbar\Omega<0.8$~eV --
and then reduces it until the photon energy reaches about $2.1$~eV. The effect
of the background dielectric constant is also considered.
|
The $\mu(I)$-rheology has been recently proposed as a potential candidate to
model the flow of frictional grains in a dense inertial regime. However, this
rheology was shown to be ill-posed in the mathematical sense for a large range
of parameters, notably in the slow and fast flow limits \citep{Barker2015}. In
this rapid communication, we extend the stability analysis to compressible
flows. We show that compressibility mostly regularizes the equations, making
them well-posed for all parameters, provided that sufficient dissipation is
associated with volume changes. In addition to the usual Coulomb
shear friction coefficient $\mu$, we introduce a bulk friction coefficient
$\mu_b$, associated with volume changes, and show that the equations are
well-posed in two dimensions if $\mu_b>2-2\mu$ ($\mu_b>3-7\mu/2$ in three
dimensions). Moreover, we show that the ill-posed domain defined in
\citep{Barker2015} transforms into a domain where the equations are unstable
but stay well-posed when compressibility is taken into account. These results
thus suggest the importance of compressibility in dense granular flows.
|
Let $M$ be a $d$-dimensional connected compact Riemannian manifold with
boundary $\partial M$, let $V\in C^2(M)$ such that $\mu(dx):=e^{V(x)} d x$ is a
probability measure, and let $X_t$ be the diffusion process generated by
$L:=\Delta+\nabla V$ with $\tau:=\inf\{t\ge 0: X_t\in\partial M\}$. Consider
the conditional empirical measure
$\mu_t^\nu:= \mathbb E^\nu\big(\frac 1 t \int_0^t \delta_{X_s}d
s\big|t<\tau\big)$ for the diffusion process with initial distribution $\nu$
such that $\nu(\partial M)<1$. Then $$\lim_{t\to\infty} \big\{t\mathbb
W_2(\mu_t^\nu,\mu_0)\big\}^2 = \frac 1 {\{\mu(\phi_0)\nu(\phi_0)\}^2}
\sum_{m=1}^\infty \frac{\{\nu(\phi_0)\mu(\phi_m)+ \mu(\phi_0)
\nu(\phi_m)\}^2}{(\lambda_m-\lambda_0)^3},$$ where $\nu(f):=\int_Mf {d} \nu$
for a measure $\nu$ and $f\in L^1(\nu)$, $\mu_0:=\phi_0^2\mu$,
$\{\phi_m\}_{m\ge 0}$ is the eigenbasis of $-L$ in $L^2(\mu)$ with the
Dirichlet boundary,
$\{\lambda_m\}_{m\ge 0}$ are the corresponding Dirichlet eigenvalues, and
$\mathbb W_2$ is the $L^2$-Wasserstein distance induced by the Riemannian
metric.
|
The scaling nature of absorbing critical phenomena is well understood for the
directed percolation (DP) and the directed Ising (DI) systems. However, a full
analysis of the crossover behavior is still lacking, and it is the focus of
this study. There are three different routes from the DI to the DP classes
by introducing a symmetry breaking field (SB), breaking a modulo 2 conservation
(CB), or making channels connecting two equivalent absorbing states (CC). Each
route can be characterized by a crossover exponent, which is found numerically
as $\phi=2.1\pm 0.1$ (SB), $4.6\pm 0.2$ (CB), and $2.9\pm 0.1$ (CC),
respectively. The difference between the SB and CB crossovers can be
understood easily in the domain-wall language, while the CC crossover involves
an additional critical singularity in the auxiliary field density, together
with a memory effect, which identifies it as an independent route.
|
Research on sound event detection (SED) in environmental settings has seen
increased attention in recent years. The large amounts of (private) domestic or
urban audio data needed raise significant logistical and privacy concerns. The
inherently distributed nature of these tasks makes federated learning (FL) a
promising approach to take advantage of large-scale data while mitigating
privacy issues. While FL has also seen increased attention recently, to the
best of our knowledge there is no research towards FL for SED. To address this
gap and foster further research in this field, we create and publish novel FL
datasets for SED in domestic and urban environments. Furthermore, we provide
baseline results on the datasets in an FL context for three deep neural network
architectures. The results indicate that FL is a promising approach for SED,
but faces challenges with divergent data distributions inherent to distributed
client edge devices.
|
Given a subset A of vertices of an undirected graph G, the cut-improvement
problem asks us to find a subset S that is similar to A but has smaller
conductance. A very elegant algorithm for this problem has been given by
Andersen and Lang [AL08] and requires solving a small number of
single-commodity maximum flow computations over the whole graph G. In this
paper, we introduce LocalImprove, the first cut-improvement algorithm that is
local, i.e. that runs in time dependent on the size of the input set A rather
than on the size of the entire graph. Moreover, LocalImprove achieves this
local behaviour while essentially matching the same theoretical guarantee as
the global algorithm of Andersen and Lang.
The main application of LocalImprove is to the design of better
local-graph-partitioning algorithms. All previously known local algorithms for
graph partitioning are random-walk based and can only guarantee an output
conductance of O(\sqrt{OPT}) when the target set has conductance OPT \in [0,1].
Very recently, Zhu, Lattanzi and Mirrokni [ZLM13] improved this to O(OPT /
\sqrt{CONN}) where the internal connectivity parameter CONN \in [0,1] is
defined as the reciprocal of the mixing time of the random walk over the graph
induced by the target set. In this work, we show how to use LocalImprove to
obtain a constant approximation O(OPT) as long as CONN/OPT = Omega(1). This
yields the first flow-based algorithm. Moreover, its performance strictly
outperforms the ones based on random walks and surprisingly matches that of the
best known global algorithm, which is SDP-based, in this parameter regime
[MMV12].
Finally, our results show that spectral methods are not the only viable
approach to the construction of local graph partitioning algorithms and open
the door to the study of algorithms with even better approximation and locality
guarantees.
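For reference, the conductance being improved is the ratio of the cut size to the smaller of the two volumes. A minimal sketch (the graph and seed set below are illustrative, not from the paper):

```python
def conductance(adj, S):
    """phi(S) = cut(S, V-S) / min(vol(S), vol(V-S)) for an undirected graph
    given as an adjacency-list dict."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    denom = min(vol_S, vol_rest)
    return cut / denom if denom else 0.0

# two triangles joined by a single edge (2-3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
phi_A = conductance(adj, {0, 1})      # a rough input set A
phi_S = conductance(adj, {0, 1, 2})   # an improved set S similar to A
```

Here phi(A) = 2/4 = 0.5 while phi(S) = 1/7: the kind of improvement a cut-improvement routine is asked to find, and which LocalImprove finds while touching only a neighbourhood of A.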
|
For each $p>1$ and each positive integer $m$ we use divided differences to
give intrinsic characterizations of the restriction of the Sobolev space
$W^m_p(R)$ to an arbitrary closed subset of the real line.
|
We investigate the evolution of the afterglow produced by the deceleration of
the non-relativistic material due to its surroundings. The ejecta mass is
launched into the circumstellar medium with equivalent kinetic energy expressed
as a power-law velocity distribution $E\propto (\Gamma\beta)^{-\alpha}$. The
density profile of this medium follows a power law $n(r)\propto r^{-k}$ with
$k$ the stratification parameter, which accounts for the usual cases of a
constant medium ($k=0$) and a wind-like medium ($k=2$). A long-lasting central
engine, which injects energy into the ejected material as $E\propto t^{1-q}$,
was also assumed. With our model, we show the predicted light curves associated
with this emission for different sets of initial conditions and notice the
effect of the variation of these parameters on the frequencies, timescales and
intensities. The results are discussed in the kilonova scenario.
|
Given cell-average data values of a piecewise smooth bivariate function $f$
within a domain $\Omega$, we look for a piecewise adaptive approximation to
$f$. We are interested in an explicit and global (smooth) approach. Bivariate
approximation techniques, such as trigonometric or spline approximations, achieve
reduced approximation orders near the boundary of the domain and near curves of
jump singularities of the function or its derivatives. Whereas the boundary of
$\Omega$ is assumed to be known, the subdivision of $\Omega$ to subdomains on
which $f$ is smooth is unknown. The first challenge of the proposed
approximation algorithm would be to find a good approximation to the curves
separating the smooth subdomains of $f$. In the second stage, we simultaneously
look for approximations to the different smooth segments of $f$, where on each
segment we approximate the function by a linear combination of basis functions
$\{p_i\}_{i=1}^M$, considering the corresponding cell-averages. A discrete
Laplacian operator applied to the given cell-average data intensifies the
structure of the singularity of the data across the curves separating the
smooth subdomains of $f$. We refer to these derived values as the signature of
the data, and we use it for approximating the singularity curves
separating the different smooth regions of $f$. The main contributions here are
improved convergence rates to both the approximation of the singularity curves
and the approximation of $f$, an explicit and global formula, and, in
particular, the derivation of a piecewise smooth high order approximation to
the function.
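The signature idea can be sketched numerically: a 5-point discrete Laplacian applied to cell-average data vanishes where the function is smooth and spikes across a jump curve. A toy piecewise-constant example with an illustrative vertical jump (the paper's setting is far more general):

```python
def laplacian_signature(F):
    """5-point discrete Laplacian of a 2-D grid of cell averages; the interior
    values form the 'signature' that highlights singularity curves."""
    n, m = len(F), len(F[0])
    S = [[0.0] * m for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            S[i][j] = F[i+1][j] + F[i-1][j] + F[i][j+1] + F[i][j-1] - 4*F[i][j]
    return S

# cell averages of f(x, y) = 0 for x < 1/2 and 1 for x > 1/2 on a 32x32 grid
n = 32
F = [[0.0 if (j + 0.5) / n < 0.5 else 1.0 for j in range(n)] for i in range(n)]
S = laplacian_signature(F)
```

The signature is nonzero only in the two cell columns adjacent to the jump at x = 1/2, so thresholding it localizes the singularity curve to which a smooth curve approximation can then be fitted.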
|
The quantum even-dimensional balls are defined as the $C^*$-algebras
generated by certain graphs. We exhibit a polynomial algebra for each
even-dimensional quantum ball, and classify its irreducible representations.
|
We develop a technique to construct analytical solutions of the linear
perturbations of inflation with a nonlinear dispersion relation, due to quantum
effects of the early universe. Error bounds are given and studied in detail.
The analytical solutions describe the exact evolution of the perturbations
extremely well even when only the first-order approximation is used.
|
Wavefront stabilization is a fundamental challenge to high contrast imaging
of exoplanets. For both space and ground observations, wavefront control
performance is ultimately limited by the finite amount of starlight available
for sensing, so wavefront measurements must be as efficient as possible. To
meet this challenge, we propose to sense residual errors using bright
focal-plane speckles at wavelengths outside the high contrast spectral
bandwidth. We show that a linear relationship exists between the intensity of
the bright out-of-band speckles and residual wavefront aberrations. An
efficient linear control loop can exploit this relationship. The proposed
scheme, referred to as Spectral Linear Dark Field Control (spectral LDFC), is
more sensitive than conventional approaches for ultra-high contrast imaging.
Spectral LDFC is closely related to, and can be combined with, the recently
proposed spatial LDFC which uses light at the observation wavelength but
located outside of the high contrast area in the focal plane image. Neither
LDFC technique requires starlight to be mixed with the high contrast speckle
field, so full-sensitivity uninterrupted high contrast observations can be
conducted simultaneously with wavefront correction iterations. We also show
that LDFC is robust against deformable mirror calibration errors and drifts, as
it relies on detector response stability instead of deformable mirror
stability. LDFC is particularly advantageous when science acquisition is
performed at a non-optimal wavefront sensing wavelength, such as near-IR
observations of planets around solar-type stars, for which visible-light
speckle sensing is ideal. We describe the approach at a fundamental level and
provide an algorithm for its implementation. We demonstrate, through numerical
simulation, that spectral LDFC is well-suited for picometer-level cophasing of
a large segmented space telescope.
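The linear relationship exploited by LDFC follows from expanding the speckle intensity: for a bright field E0 perturbed by a small aberration term a, |E0 + a|^2 - |E0|^2 is approximately 2*E0*a. A scalar, real-valued numeric check with illustrative values (not a full wavefront model):

```python
E0 = 5.0                        # bright out-of-band speckle field amplitude
I0 = E0 ** 2                    # unaberrated speckle intensity
rel_errs = []
for a in (0.01, 0.02, 0.04):    # small residual wavefront aberrations
    dI = (E0 + a) ** 2 - I0     # measured intensity change
    linear = 2 * E0 * a         # linear model used by the control loop
    rel_errs.append(abs(dI - linear) / linear)
```

The departure from linearity is a/(2*E0), below 0.5% here, which is why an efficient linear control loop can servo directly on the bright out-of-band speckles.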
|
In this study, we find the continued fraction expansion of sqrt(d) when d=a^2+2a,
where a is a positive integer. We consider the integer solutions of the Pell
equation x^2-(a^2+2a)y^2=N when N={-1,+1,-4,+4}. We formulate the n-th solution
(x_{n},y_{n}) by using the continued fraction expansion. We also formulate the
n-th solution (x_{n},y_{n}) via the generalized Fibonacci and Lucas sequences.
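Since d = a^2+2a = (a+1)^2 - 1, the fundamental solution of x^2 - d*y^2 = 1 is (x, y) = (a+1, 1), and further solutions follow the standard Pell recurrence. A quick sketch that generates and checks solutions (the paper's closed forms use generalized Fibonacci and Lucas sequences; this brute recurrence is only a consistency check):

```python
def pell_solutions(a, count):
    """First `count` positive solutions of x^2 - (a^2 + 2a) y^2 = 1,
    generated from the fundamental solution (a+1, 1)."""
    d = a * a + 2 * a
    x, y = a + 1, 1          # (a+1)^2 - d = 1, so this solves the equation
    sols = []
    for _ in range(count):
        sols.append((x, y))
        x, y = (a + 1) * x + d * y, x + (a + 1) * y   # Pell recurrence
    return sols
```

For a = 2 (d = 8) this yields (3, 1), (17, 6), (99, 35), ..., the classical solutions of x^2 - 8*y^2 = 1.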
|
We introduce a minimally interacting pure gauge compact U(1)xU(1) model
consistent with abelian projection symmetries. This paradigm, whose
interactions are entirely due to compactness, illustrates how compactness can
contribute to interspecies interactions. Furthermore, it has a much richer
phase structure (including a magnetically confining phase) than is obtained by
naively tensoring together two compact U(1) copies.
|
The Sun was recently predicted to be an extended source of gamma-ray
emission, produced by inverse-Compton scattering of cosmic-ray electrons with
the solar radiation. The emission was predicted to contribute to the diffuse
extragalactic background even at large angular distances from the Sun. While
this emission is expected to be readily detectable in future by GLAST, the
situation for available EGRET data is more challenging. We present a detailed
study of the EGRET database, using a time dependent analysis, accounting for
the effect of the emission from 3C 279, the moon, and other sources, which
interfere with the solar signal. The technique has been tested on the moon
signal, with results consistent with previous work. We find clear evidence for
emission from the Sun and its vicinity. The observations are compared with our
model for the extended emission.
|
This paper focuses on multiscale dynamics occurring in steam supply systems.
The dynamics of interest are originally described by a distributed-parameter
model for fast steam flows over a pipe network coupled with a lumped-parameter
model for slow internal dynamics of boilers. We derive a lumped-parameter model
for the dynamics through physically-relevant approximations. The derived model
is then analyzed theoretically and numerically in terms of the existence of a
normally hyperbolic invariant manifold in the phase space of the model. The
existence of the manifold is dynamical evidence that the derived model
preserves the slow-fast dynamics, and suggests a separation principle of
short-term and long-term operations of steam supply systems, analogous to that
in electric power systems. We also quantitatively verify the correctness of the
derived model by comparison with brute-force simulation of the original model.
|
JWST has revealed a population of low-luminosity AGN at $z>4$ in compact, red
hosts (the "Little Red Dots", or LRDs), which are largely undetected in X-rays.
We investigate this phenomenon using GRRMHD simulations of super-Eddington
accretion onto a SMBH with $M_\bullet=10^7 \,\rm M_\odot$ at $z\sim6$,
representing the median population; the SEDs that we obtain are intrinsically
X-ray weak. The highest levels of X-ray weakness occur in SMBHs accreting at
mildly super-Eddington rates ($1.4<f_{\rm Edd}<4$) with zero spin, viewed at
angles $>30^\circ$ from the pole. X-ray bolometric corrections in the observed
$2-10$ keV band reach $\sim10^4$ at $z=6$, $\sim5$ times higher than the
highest constraint from X-ray stacking. Most SEDs exhibit $\alpha_{\rm ox}$
values outside standard ranges, with X-ray weakness increasing with optical-UV
luminosity; they are also extraordinarily steep and soft in the X-rays (median
photon index $\Gamma=3.1$, mode of $\Gamma=4.4$). SEDs strong in the X-rays
have harder spectra with a high-energy bump when viewed near the hot ($>10^8$
K) and highly-relativistic jet, whereas X-ray weak SEDs lack this feature.
Viewing a SMBH within $10^\circ$ of its pole, where beaming enhances the X-ray
emission, has a $\sim1.5\%$ probability, matching the LRD X-ray detection rate.
Next-generation observatories like AXIS will detect X-ray weak LRDs at $z\sim6$
from any viewing angle. Although many SMBHs in the LRDs are already estimated
to accrete at super-Eddington rates, our model explains $50\%$ of their
population by requiring that their masses are overestimated by a mere factor of
$\sim3$. In summary, we suggest that LRDs host slowly spinning SMBHs accreting
at mildly super-Eddington rates, with large covering factors and broad emission
lines enhanced by strong winds, providing a self-consistent explanation for
their X-ray weakness and complementing other models.
|
We present a new fragment of axiomatic set theory for pure sets and for the
iteration of power sets within given transitive sets. It turns out that this
formal system admits an interesting hierarchy of models with true membership
relation and with only finite or countably infinite ordinals. Still a
considerable part of mathematics can be formalized within this system.
|
This article describes the theory of cosmological perturbations around a
homogeneous and anisotropic universe of the Bianchi I type. Starting from a
general parameterisation of the perturbed spacetime a la Bardeen, a complete
set of gauge invariant variables is constructed. Three physical degrees of
freedom are identified and it is shown that, in the case where matter is
described by a scalar field, they generalize the Mukhanov-Sasaki variables. In
order to show that they are canonical variables, the action for the
cosmological perturbations at second order is derived. Two major physical
imprints of the primordial anisotropy are identified: (1) a scalar-tensor
``see-saw'' mechanism arising from the fact that scalar, vector and tensor
modes do not decouple and (2) an explicit dependence of the statistical
properties of the density perturbations and gravity waves on the wave-vector
instead of its norm. This analysis extends, but also sheds some light on, the
quantization procedure that was developed under the assumption of a
Friedmann-Lemaitre background spacetime, and allows one to investigate the
robustness of the predictions of the standard inflationary scenario with
respect to the hypothesis on the symmetries of the background spacetime. These
effects of a primordial anisotropy may be related to some anomalies of the
cosmic microwave background anisotropies on large angular scales.
|
The strong coupling regime in a ZnO microcavity is investigated through room
temperature photoluminescence and reflectivity experiments. The simultaneous
strong coupling of excitons to the cavity mode and the first Bragg mode is
demonstrated at room temperature. The polariton relaxation is followed as a
function of the excitation density. A relaxation bottleneck is evidenced in the
Bragg-mode polariton branch. It is partly broken under strong excitation
density, so that the emission from this branch dominates that from
cavity-mode polaritons.
|
In this paper, we show that conditional inference trees and ensembles are
suitable methods for modeling linguistic variation. In contrast to earlier
linguistic applications, however, we claim that their suitability is strongly
increased if we combine prediction and interpretation. To that end, we have
developed a statistical method, PrInDT (Prediction and Interpretation with
Decision Trees), which we introduce and discuss in the present paper.
|
The theorems of M. Ratner, describing the finite ergodic invariant measures
and the orbit closures for unipotent flows on homogeneous spaces of Lie groups,
are extended for actions of subgroups generated by unipotent elements. More
precisely: Let G be a Lie group (not necessarily connected) and Gamma a closed
subgroup of G. Let W be a subgroup of G such that Ad(W) is contained in the
Zariski closure (in the group of automorphisms of the Lie algebra of G) of the
subgroup generated by the unipotent elements of Ad(W). Then any finite ergodic
invariant measure for the action of W on G/Gamma is a homogeneous measure
(i.e., it is supported on a closed orbit of a subgroup preserving the measure).
Moreover, if G/Gamma has finite volume (i.e., has a finite G-invariant
measure), then the closure of any orbit of W on G/Gamma is a homogeneous set
(i.e., a finite volume closed orbit of a subgroup containing W). Both the above
results hold if W is replaced by any subgroup Lambda of W such that W/Lambda
has finite volume.
|
State-of-the-art video deblurring methods cannot handle blurry videos
recorded in dynamic scenes, since they are built under a strong assumption that
the captured scenes are static. Contrary to the existing methods, we propose a
video deblurring algorithm that can deal with general blurs inherent in dynamic
scenes. To handle general and locally varying blurs caused by various sources,
such as moving objects, camera shake, depth variation, and defocus, we estimate
pixel-wise non-uniform blur kernels. We infer bidirectional optical flows to
handle motion blurs, and also estimate Gaussian blur maps to remove optical
blur from defocus in our new blur model. Therefore, we propose a single energy
model that jointly estimates optical flows, defocus blur maps and latent
frames. We also provide a framework and efficient solvers to minimize the
proposed energy model. By optimizing the energy model, we achieve significant
improvements in removing general blurs, estimating optical flows, and extending
depth-of-field in blurry frames. Moreover, in this work, to evaluate the
performance of non-uniform deblurring methods objectively, we have constructed
a new realistic dataset with ground truths. In addition, extensive experiments
on publicly available challenging video data demonstrate that the proposed
method produces performance qualitatively superior to that of state-of-the-art
methods, which often fail in either deblurring or optical flow estimation.
|
The non-stationary nature of real-world Multivariate Time Series (MTS) data
presents forecasting models with the formidable challenge of a time-variant
distribution of time series, referred to as distribution shift. Existing
studies on the distribution shift mostly adhere to adaptive normalization
techniques for alleviating temporal mean and covariance shifts or time-variant
modeling for capturing temporal shifts. Despite improving model generalization,
these normalization-based methods often assume a time-invariant transition
between outputs and inputs but disregard specific intra-/inter-series
correlations, while time-variant models overlook the intrinsic causes of the
distribution shift. This limits model expressiveness and interpretability of
tackling the distribution shift for MTS forecasting. To mitigate such a
dilemma, we present a unified Probabilistic Graphical Model to jointly
capture intra-/inter-series correlations and model the time-variant
transitional distribution, and instantiate a neural framework called JointPGM
for non-stationary MTS forecasting. Specifically, JointPGM first employs
multiple Fourier basis functions to learn dynamic time factors and designs two
distinct learners: intra-series and inter-series learners. The intra-series
learner effectively captures temporal dynamics by utilizing temporal gates,
while the inter-series learner explicitly models spatial dynamics through
multi-hop propagation, incorporating Gumbel-softmax sampling. These two types
of series dynamics are subsequently fused into a latent variable, which is
inversely employed to infer time factors, generate final prediction, and
perform reconstruction. We validate the effectiveness and efficiency of
JointPGM through extensive experiments on six highly non-stationary MTS
datasets, achieving state-of-the-art forecasting performance.
|
In this paper we present a single-microphone speech enhancement algorithm. A
hybrid approach is proposed merging the generative mixture of Gaussians (MoG)
model and the discriminative neural network (NN). The proposed algorithm is
executed in two phases, the training phase, which does not recur, and the test
phase. First, the noise-free speech power spectral density (PSD) is modeled as
a MoG, representing the phoneme based diversity in the speech signal. An NN is
then trained with phoneme labeled database for phoneme classification with
mel-frequency cepstral coefficients (MFCC) as the input features. Given the
phoneme classification results, a speech presence probability (SPP) is obtained
using both the generative and discriminative models. Soft spectral subtraction
is then executed while simultaneously, the noise estimation is updated. The
discriminative NN maintains the continuity of the speech and the generative
phoneme-based MoG preserves the speech spectral structure. Extensive
experimental study using real speech and noise signals is provided. We also
compare the proposed algorithm with alternative speech enhancement algorithms.
We show that we obtain a significant improvement over previous methods in terms
of both speech quality measures and speech recognition results.
|
A reparametrization (of a continuous path) is given by a surjective weakly
increasing self-map of the unit interval. We show that the monoid of
reparametrizations (with respect to compositions) can be understood via
``stop-maps'' that allow one to investigate compositions and factorizations, and we
compare it to the distributive lattice of countable subsets of the unit
interval. The results obtained are used to analyse the space of traces in a
topological space, i.e., the space of continuous paths up to reparametrization
equivalence. This space is shown to be homeomorphic to the space of regular
paths (without stops) up to increasing reparametrizations. Directed versions of
the results are important in directed homotopy theory.
|
We derive an analytic formula at three loops for the cusp anomalous dimension
Gamma_cusp(phi) in N=4 super Yang-Mills. This is done by exploiting the
relation of the latter to the Regge limit of massive amplitudes. We comment on
the corresponding three-loop quark-antiquark potential. Our result also
determines a considerable part of the three-loop cusp anomalous dimension in
QCD. Finally, we consider a limit in which only ladder diagrams contribute to
physical observables. In that limit, a precise agreement with strong coupling
is observed.
|
In this paper, we provide the proof of $L^2$ consistency for the $k$th
nearest neighbour distance estimator of the Shannon entropy for an arbitrary
fixed $k\geq 1.$ We construct a non-parametric test of goodness-of-fit for a
class of introduced generalized multivariate Gaussian distributions based on a
maximum entropy principle. The theoretical results are followed by numerical
studies on simulated samples.
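A minimal 1-D sketch of the kth nearest neighbour (Kozachenko-Leonenko-type) entropy estimator can make the object of the consistency result concrete; the constants below are the standard ones for dimension one, and the data in the usage example are simulated, not from the paper's study:

```python
import math
import random

def knn_entropy(samples, k=1):
    """Kozachenko-Leonenko-type k-th nearest neighbour estimate of the
    differential (Shannon) entropy for 1-D data, in nats."""
    n = len(samples)
    euler_gamma = 0.5772156649015329
    def digamma_int(m):           # psi(m) = -gamma + H_{m-1} for integer m >= 1
        return -euler_gamma + sum(1.0 / j for j in range(1, m))
    total = 0.0
    for i, xi in enumerate(samples):
        # brute-force distance to the k-th nearest neighbour of xi
        dists = sorted(abs(xi - xj) for j, xj in enumerate(samples) if j != i)
        total += math.log(2.0 * dists[k - 1])   # 1-D ball of radius eps: volume 2*eps
    return digamma_int(n) - digamma_int(k) + total / n
```

On simulated U(0,1) data the estimate should be near 0, and near 0.5*log(2*pi*e), about 1.419, for standard normal data, in line with the consistency result as the sample size grows.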
|
We obtain new elliptic function identities, which are an elliptic analogue of
Fukuhara's trigonometric identities. We show that the coefficients of Laurent
expansions at $z=0$ of our elliptic identities give rise to some reciprocity
laws for elliptic Dedekind sums.
|
We propose several adaptive algorithmic methods for problems of non-smooth
convex optimization. The first of them is based on a special artificial
inexactness. Namely, the concept of inexact ($ \delta, \Delta, L$)-model of
objective functional in optimization is introduced and some gradient-type
methods with adaptation of inexactness parameters are proposed. A similar
concept of an inexact model is introduced for variational inequalities as well
as for saddle point problems. Analogues of switching sub-gradient schemes are
proposed for convex programming problems under some general assumptions.
|
The rapid evolution of autonomous vehicles (AVs) has significantly influenced
global transportation systems. In this context, we present ``Snow Lion'', an
autonomous shuttle meticulously designed to revolutionize on-campus
transportation, offering a safer and more efficient mobility solution for
students, faculty, and visitors. The primary objective of this research is to
enhance campus mobility by providing a reliable, efficient, and eco-friendly
transportation solution that seamlessly integrates with existing infrastructure
and meets the diverse needs of a university setting. To achieve this goal, we
delve into the intricacies of the system design, encompassing sensing,
perception, localization, planning, and control aspects. We evaluate the
autonomous shuttle's performance in real-world scenarios, involving a
1146-kilometer road haul and the transportation of 442 passengers over a
two-month period. These experiments demonstrate the effectiveness of our system
and offer valuable insights into the intricate process of integrating an
autonomous vehicle within campus shuttle operations. Furthermore, a thorough
analysis of the lessons derived from this experience furnishes a valuable
real-world case study, accompanied by recommendations for future research and
development in the field of autonomous driving.
|
The Noetherian type of a space is the least k for which the space has a
k^op-like base, i.e., a base in which no element has k-many supersets. We prove
some results about Noetherian types of (generalized) ordered spaces and
products thereof. For example: the density of a product of not-too-many compact
linear orders never exceeds its Noetherian type, with equality possible only
for singular Noetherian types; we prove a similar result for products of
Lindelof GO-spaces. A countable product of compact linear orders has an
omega_1^op-like base if and only if it is metrizable, and every metrizable
space has an omega^op-like base. An infinite cardinal k is the Noetherian type
of a compact LOTS if and only if k is not omega_1 and k is not weakly
inaccessible. There is a Lindelof LOTS with Noetherian type omega_1 and there
consistently is a Lindelof LOTS with weakly inaccessible Noetherian type.
|
Online reinforcement learning agents are currently able to process an
increasing amount of data by converting it into higher-order value functions.
This expansion of the information collected from the environment increases the
agent's state space, enabling it to scale up to more complex problems but also
increases the risk of forgetting by learning on redundant or conflicting data.
To improve the approximation of a large amount of data, a random mini-batch of
the past experiences that are stored in the replay memory buffer is often
replayed at each learning step. The proposed work takes inspiration from a
biological mechanism which acts as a protective layer of the human brain's
higher cognitive functions: active memory consolidation mitigates the effect of
forgetting of previous memories by dynamically processing the new ones. Similar
dynamics are implemented by the proposed augmented memory replay (AMR), capable
of optimizing the replay of the experiences from the agent's memory
structure by altering or augmenting their relevance. Experimental results show
that an evolved AMR augmentation function capable of increasing the
significance of the specific memories is able to further increase the stability
and convergence speed of the learning algorithms dealing with the complexity of
continuous action domains.
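The augmentation of memory relevance can be sketched as a replay buffer whose sampling weights pass through an augmentation function; in the paper that function is evolved, so the simple weighting below is only an illustrative stand-in:

```python
import random

class AugmentedMemoryReplay:
    """Replay buffer that samples experiences with probability proportional
    to an augmentation of their stored relevance (e.g. TD error)."""
    def __init__(self, capacity, augment=lambda r: r):
        self.capacity = capacity
        self.augment = augment            # maps raw relevance -> sampling weight
        self.data, self.relevance = [], []

    def add(self, transition, relevance):
        if len(self.data) >= self.capacity:   # evict the oldest experience
            self.data.pop(0)
            self.relevance.pop(0)
        self.data.append(transition)
        self.relevance.append(relevance)

    def sample(self, batch_size):
        weights = [self.augment(r) for r in self.relevance]
        return random.choices(self.data, weights=weights, k=batch_size)
```

An augmentation such as r -> r**2 sharpens the preference for significant memories, the kind of effect an evolved AMR function can provide while leaving the underlying learning algorithm unchanged.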
|
The magnetic translation algebra plays an important role in the quantum Hall
effect. Murthy and Shankar, arXiv:1207.2133, have shown how to realize this
algebra using fermionic bilinears defined on a two-dimensional square lattice.
We show that, in any dimension $d$, it is always possible to close the magnetic
translation algebra using fermionic bilinears, whether in the continuum or on
the lattice. We also show that these generators are complete in even, but not
odd, dimensions, in the sense that any fermionic Hamiltonian in even dimensions
that conserves particle number can be represented in terms of the generators of
this algebra, whether or not time-reversal symmetry is broken. As an example,
we reproduce the $f$-sum rule of interacting electrons at vanishing magnetic
field using this representation. We also show that interactions can
significantly change the bare bandwidth of lattice Hamiltonians when
represented in terms of the generators of the magnetic translation algebra.
|
Results are presented of a search for a "natural" supersymmetry scenario with
gauge mediated symmetry breaking. It is assumed that only the supersymmetric
partners of the top quark (the top squark) and the Higgs boson (higgsino) are
accessible. Events are examined in which there are two photons forming a Higgs
boson candidate, and at least two b-quark jets. In 19.7 inverse femtobarns of
proton-proton collision data at sqrt(s) = 8 TeV, recorded in the CMS
experiment, no evidence of a signal is found and lower limits at the 95%
confidence level are set, excluding the top squark mass below 360 to 410 GeV,
depending on the higgsino mass.
|
We analyze the efficiency of markets with friction, particularly power
markets. We model the market as a dynamic system with $(d_t;\,t\geq 0)$ the
demand process and $(s_t;\,t\geq 0)$ the supply process. Using stochastic
differential equations to model the dynamics with friction, we investigate the
efficiency of the market under an integrated expected undiscounted cost
function solving the optimal control problem. Then, we extend the setup to a
game theoretic model where multiple suppliers and consumers interact
continuously by setting prices in a dynamic market with friction. We
investigate the equilibrium, and analyze the efficiency of the market under an
integrated expected social cost function. We provide an intriguing
efficiency-volatility no-free-lunch trade-off theorem.
|
Double peaked broad emission lines in active galactic nuclei are generally
considered to be formed in an accretion disc. In this paper, we compute the
profiles of reprocessing emission lines from a relativistic, warped accretion
disc around a black hole in order to explore the possibility that certain
asymmetries in the double-peaked emission line profile which cannot be
explained by a circular Keplerian disc may be induced by disc warping. The disc
warping also provides a solution for the energy budget in the emission line
region because it increases the solid angle of the outer disc portion subtended
to the inner portion of the disc. We adopted a parametrized disc geometry and a
central point-like source of ionizing radiation to capture the main
characteristics of the emission line profile from such discs. We find that a
blue-to-red peak ratio of less than unity is naturally predicted by a twisted
warped disc, and that a third peak can
be produced in some cases. We show that disc warping can reproduce the main
features of multi-peaked line profiles of four active galactic nuclei from the
Sloan Digital Sky Survey.
|
Nonlinear action of the group of spatial rotations on commuting components of
a position operator of a massless particle (Hawton operator) is studied. Using
the method of Callan, Coleman, Wess and Zumino, it is shown that coordinates which
linearize this action correspond to the Pryce operator with non-commuting
components.
|
We investigate infinite sets that witness the failure of certain
Ramsey-theoretic statements, such as Ramsey's or (appropriately phrased)
Hindman's theorem; such sets may exist if one does not assume the Axiom of
Choice. We obtain very precise information as to where such sets are located
within the hierarchy of infinite Dedekind-finite sets.
|
In this article, we generalize Duflo's conjecture to understand the branching
laws of non-discrete series. We give a unified description on the geometric
side about the restriction of an irreducible unitary representation $\pi$ of
$\mathrm{GL}_n(k)$, $k=\mathbb{R}$ or $\mathbb{C}$, to the mirabolic subgroup,
where $\pi$ is attached to a certain kind of coadjoint orbit.
|
Let $\Gamma_1,\dots,\Gamma_n$ be hyperbolic, property (T) groups, for some
$n\ge 1$. We prove that if a product $\Gamma_1\times\dots\times\Gamma_n
\curvearrowright X_1\times\dots\times X_n$ of measure preserving actions is
stably orbit equivalent to a measure preserving action $\Lambda\curvearrowright
Y$, then $\Lambda\curvearrowright Y$ is induced from an action
$\Lambda_0\curvearrowright Y_0$ such that there exists a direct product
decomposition $\Lambda_0=\Lambda_1\times\dots\times\Lambda_n$ into $n$ infinite
groups. Moreover, there exists a measure preserving action
$\Lambda_i\curvearrowright Y_i$ that is stably orbit equivalent to
$\Gamma_i\curvearrowright X_i$, for any $1\leq i\leq n$, and the product action
$\Lambda_1\times\dots\times\Lambda_n\curvearrowright Y_1\times\dots\times Y_n$
is isomorphic to $\Lambda_0\curvearrowright Y_0$.
|
Linear isometries between weighted Banach spaces of continuous functions
are considered. Some well-known theorems on isometries between spaces of
continuous functions are stated and proved, all in an appropriate form. We
also present some new results, as well as results that extend several theorems
from the author's 1993 PhD dissertation. We hope this letter will be useful
for obtaining some email friendships.
|
We establish some uniform limit results in the setting of additive regression
model estimation. Our results allow us to give asymptotic 100% confidence bands
for the additive components. These results are stated in the framework of
i.i.d. random vectors when the marginal integration estimation method is used.
|
We reveal by first-principles calculations that the interlayer binding in a
twisted MoS2/MoTe2 heterobilayer decreases with increasing twist angle, due to
the increase of the interlayer overlapping degree, a geometric quantity
describing well the interlayer steric effect. The binding energy is found to be
a Gaussian-like function of twist angle. The resistance to rotation, an
analogue to the interlayer sliding barrier, can also be defined accordingly. In
sharp contrast to the case of MoS2 homobilayer, here the energy band gap
reduces with increasing twist angle. We find a remarkable interlayer charge
transfer from MoTe2 to MoS2 which enlarges the band gap, but this charge
transfer weakens with greater twisting and interlayer overlapping degree. Our
discovery provides a solid basis in twistronics and practical instruction in
band structure engineering of van der Waals heterostructures.
|
We characterize the dynamical states of a piezoelectric
microelectromechanical system (MEMS) using several numerical quantifiers,
including the maximal Lyapunov exponent, the Poincaré surface of section and a
chaos detection method called the Smaller Alignment Index (SALI). The analysis
makes use of the MEMS Hamiltonian. We start our study by considering the case
of a conservative piezoelectric MEMS model and describe the behavior of some
representative phase space orbits of the system. We show that the dynamics of
the piezoelectric MEMS becomes considerably more complex as the natural
frequency of the system's mechanical part decreases. This corresponds to a
reduction of the stiffness of the piezoelectric transducer. Then, taking into
account the effects of damping and time dependent forces on the piezoelectric
MEMS, we derive the corresponding non-autonomous Hamiltonian and investigate
its dynamical behavior. We find that the non-conservative system exhibits a
rich dynamics, which is strongly influenced by the values of the parameters
that govern the piezoelectric MEMS energy gain and loss. Our results provide
further evidence of the ability of the SALI to efficiently characterize the
chaoticity of dynamical systems.
|
The visualization of hierarchically structured data over time is an ongoing
challenge and several approaches exist trying to solve it. Techniques such as
animated or juxtaposed tree visualizations are not capable of providing a good
overview of the time series and lack expressiveness in conveying changes over
time. Nested streamgraphs provide a better understanding of the data evolution,
but lack the clear outline of hierarchical structures at a given timestep.
Furthermore, these approaches are often limited to static hierarchies or
exclude complex hierarchical changes in the data, limiting their use cases. We
propose a novel visual metaphor capable of providing a static overview of all
hierarchical changes over time, as well as clearly outlining the hierarchical
structure at each individual time step. Our method allows for smooth
transitions between tree maps and nested streamgraphs, enabling the exploration
of the trade-off between dynamic behavior and hierarchical structure. As our
technique handles topological changes of all types, it is suitable for a wide
range of applications. We demonstrate the utility of our method on several use
cases, evaluate it with a user study, and provide its full source code.
|
Cosymplectic geometry has been proven to be a very useful geometric
background to describe time-dependent Hamiltonian dynamics. In this work, we
address the globalization problem of locally cosymplectic Hamiltonian dynamics
that fails to be globally defined. We investigate both the geometry of locally
conformally cosymplectic (abbreviated as LCC) manifolds and the Hamiltonian
dynamics constructed on such LCC manifolds. Further, we provide a geometric
Hamilton-Jacobi theory on this geometric framework.
|
We propose an approach to learning agents for active robotic mapping, where
the goal is to map the environment as quickly as possible. The agent learns to
map efficiently in simulated environments by receiving rewards corresponding to
how fast it constructs an accurate map. In contrast to prior work, this
approach learns an exploration policy based on a user-specified prior over
environment configurations and sensor model, allowing it to specialize to the
specifications. We evaluate the approach through a simulated Disaster Mapping
scenario and find that it achieves performance slightly better than a
near-optimal myopic exploration scheme, suggesting that it could be useful in
more complicated problem scenarios.
|
The surge in political information, discourse, and interaction has been one
of the most important developments in social media over the past several years.
There is rich structure in the interaction among different viewpoints on the
ideological spectrum. However, we still have only a limited analytical
vocabulary for expressing the ways in which these viewpoints interact.
In this paper, we develop network-based methods that operate on the ways in
which users share content; we construct \emph{invocation graphs} on Web domains
showing the extent to which pages from one domain are invoked by users to reply
to posts containing pages from other domains. When we locate the domains on a
political spectrum induced from the data, we obtain an embedded graph showing
how these interaction links span different distances on the spectrum. The
structure of this embedded network, and its evolution over time, helps us
derive macro-level insights about how political interaction unfolded through
2016, leading up to the US Presidential election. In particular, we find that
the domains invoked in replies spanned increasing distances on the spectrum
over the months approaching the election, and that there was clear asymmetry
between the left-to-right and right-to-left patterns of linkage.
|
We present a numerical procedure allowing one to extract Feshbach resonance
parameters from numerical calculations without relying on approximate fitting
procedures. Our approach is based on a simple decomposition of the reactance
matrix in terms of poles and residual background contribution, and can be
applied to the general situation of inelastic overlapping resonances. A simple
lineshape for overlapping inelastic resonances, equivalent to known results in
the particular cases of isolated and overlapping elastic features, is also
rigorously derived.
|
Recent developments in Artificial Intelligence (AI) have fueled the emergence
of human-AI collaboration, a setting where AI is a coequal partner. Especially
in clinical decision-making, it has the potential to improve treatment quality
by assisting overworked medical professionals. Even though research has started
to investigate the utilization of AI for clinical decision-making, its
potential benefits do not imply its adoption by medical professionals. While
several studies have started to analyze adoption criteria from a technical
perspective, research providing a human-centered perspective with a focus on
AI's potential for becoming a coequal team member in the decision-making
process remains limited. Therefore, in this work, we identify factors for the
adoption of human-AI collaboration by conducting a series of semi-structured
interviews with experts in the healthcare domain. We identify six relevant
adoption factors and highlight existing tensions between them and effective
human-AI collaboration.
|
Markov-modulated fluids have a long history. They form a simple class of
Markov additive processes, and were initially developed in the 1950s as models
for dams and reservoirs, before gaining much popularity in the 1980s as models
for buffers in telecommunication systems, when they became known as fluid
queues. More recent applications are in risk theory and in environmental
studies. In telecommunication systems modelling, the attention focuses on
determining the stationary distribution of the buffer content. Early ODE
resolution techniques have progressively given way to approaches grounded in
the analysis of the physical evolution of the system, and one only needs now to
solve a Riccati equation in order to obtain several quantities of interest. To
the early algorithms proposed in the Applied Probability literature, numerical
analysts have added new algorithms, improved in terms of convergence speed,
numerical accuracy, and domain of applicability. We give here a high-level
presentation of the matrix-analytic approach to the analysis of fluid queues,
briefly address computational issues, and conclude by indicating how this has
been extended to more general processes.
|
We present a geometric description of lepton flavor mixing and CP violation
in matter by using the language of leptonic unitarity triangles. The exact
analytical relations for both sides and inner angles are established between
every unitarity triangle in vacuum and its effective counterpart in matter. The
typical shape evolution of six triangles with the terrestrial matter density is
illustrated for a realistic long-baseline neutrino oscillation experiment.
|
Neural Networks have been shown to be sensitive to common perturbations such
as blur, Gaussian noise, rotations, etc. They are also vulnerable to some
artificial malicious corruptions called adversarial examples. The study of
adversarial examples has recently become so popular that the term "robustness"
is sometimes reduced to "adversarial robustness". Yet, we do not know to what
extent adversarial robustness is related to global robustness. Similarly, we do
not know whether robustness to common perturbations, such as translations or
contrast losses, could help against adversarial corruptions. We study the links
between the robustness of neural networks to both kinds of perturbations. With
our experiments, we provide one of the
first benchmark designed to estimate the robustness of neural networks to
common perturbations. We show that increasing the robustness to carefully
selected common perturbations, can make neural networks more robust to unseen
common perturbations. We also prove that adversarial robustness and robustness
to common perturbations are independent. Our results make us believe that
neural network robustness should be addressed in a broader sense.
|
We present a superconducting circuit in which non-Abelian geometric
transformations can be realized using an adiabatic parameter cycle. In contrast
to previous proposals, we employ quantum evolution in the ground state. We
propose an experiment in which the transition from non-Abelian to Abelian
cycles can be observed by measuring the pumped charge as a function of the
period of the cycle. Alternatively, the non-Abelian phase can be detected using
a single-electron transistor working as a charge sensor.
|
We report our experiences implementing standards-based grading at scale in an
Algorithms course, which serves as the terminal required CS Theory course in
our department's undergraduate curriculum. The course had 200-400 students and
was taught by two instructors and eight graduate teaching assistants, supported
by two additional graders and several undergraduate course assistants. We
highlight the role of standards-based grading in supporting our students during
the COVID-19 pandemic. We conclude by detailing the successes and adjustments
we would make to the course structure.
|
CMS-HF Calorimeters have been undergoing a major upgrade for the last couple
of years to alleviate the problems encountered during Run I, especially in the
PMT and the readout systems. In this poster, the problems caused by the old
PMTs installed in the detectors and their solutions will be explained.
Initially, regular PMTs with thicker windows, causing large Cherenkov
radiation, were used. Instead of the light coming through the fibers from the
detector, stray muons passing through the PMT itself produce Cherenkov
radiation in the PMT window, resulting in erroneously large signals. Usually,
large signals are the result of very high-energy particles in the calorimeter
and are tagged as important. As a result, these so-called window events
generate false triggers. Four-anode PMTs with thinner windows were selected to
reduce these window events. Additional channels also help eliminate such
remaining events through algorithms comparing the output of different PMT
channels. During the EYETS 16/17 period of LHC operations, the final
components of the modifications to the readout system, namely the two-channel
front-end electronics cards, were installed. The complete upgrade of the HF
calorimeter, including the preparations for Run II, will be discussed in this
poster, along with possible effects on the eventual data taking.
|
Social virtual reality is an emerging medium of communication. In this
medium, a user's avatar (virtual representation) is controlled by the tracked
motion of the user's headset and hand controllers. This tracked motion is a
rich data stream that can leak characteristics of the user or can be
effectively matched to previously-identified data to identify a user. To better
understand the boundaries of motion data identifiability, we investigate how
varying training data duration and train-test delay affects the accuracy at
which a machine learning model can correctly classify user motion in a
supervised learning task simulating re-identification. The dataset we use has a
unique combination of a large number of participants, long duration per
session, large number of sessions, and a long time span over which sessions
were conducted. We find that training data duration and train-test delay affect
identifiability; that minimal train-test delay leads to very high accuracy; and
that train-test delay should be controlled in future experiments.
|
Architectural Technical Debt (ATD) is considered as the most significant type
of TD in industrial practice. In this study, we interview 21 software engineers
and architects to investigate a specific type of ATD, namely architectural
smells (AS). Our goal is to understand the phenomenon of AS better and support
practitioners to better manage it and researchers to offer relevant support.
The findings of this study provide insights into how practitioners perceive AS
and how they introduce them, the maintenance and evolution issues they have
experienced and associated with the presence of AS, and the practices and tools
they adopt to manage AS.
|
Using some combinatorial techniques, in this note, it is proved that if
$\alpha\geq 0.28866$, then any digraph on $n$ vertices with minimum outdegree
at least $\alpha n$ contains a directed cycle of length at most 4.
|
In this work we establish a correspondence between the tachyon, K-essence and
dilaton scalar field models with the interacting entropy-corrected holographic
dark energy (ECHDE) model in a non-flat FRW universe. The potentials and
dynamics of these scalar fields are reconstructed according to the evolutionary
behavior of the interacting ECHDE model. It is shown that the phantom divide
cannot be crossed in the ECHDE tachyon model, while it is achieved for the
ECHDE K-essence and ECHDE dilaton scenarios. Finally, we calculate the limiting
case of the interacting ECHDE model without entropy correction.
|
We report the discovery of planetary companions orbiting four low-luminosity
giant stars with M$_\star$ between 1.04 and 1.39 M$_\odot$. All four host stars
have been independently observed by the EXoPlanets aRound Evolved StarS
(EXPRESS) program and the Pan-Pacific Planet Search (PPPS). The companion
signals were revealed by multi-epoch precision radial velocities obtained
during nearly a decade. The planetary companions exhibit orbital periods
between $\sim$ 1.2 and 7.1 years, minimum masses of m$_{\rm p}$sini $\sim$
1.8-3.7 M$_{jup}$ and eccentricities between 0.08 and 0.42. Including these
four new systems, we have detected planetary companions to 11 out of the 37
giant stars that are common targets between the EXPRESS and PPPS. After
excluding four compact binaries from the common sample, we obtained a fraction
of giant planets (m$_{\rm p} \gtrsim$ 1-2 M$_{jup}$) orbiting within 5 AU from
their parent star of $f = 33.3^{+9.0}_{-7.1} \%$. This fraction is
significantly higher than that previously reported in the literature by
different radial velocity surveys. Similarly, planet formation models
underpredict the fraction of gas giants around stars more massive than the Sun.
|
In this paper we study the following nonlocal Dirichlet equation of double
phase type
\begin{align*}
-\psi \left [ \int_\Omega \left ( \frac{|\nabla u |^p}{p} + \mu(x)
\frac{|\nabla u|^q}{q}\right)\,\mathrm{d} x\right] \mathcal{G}(u) = f(x,u)\quad
\text{in } \Omega, \quad u = 0\quad \text{on } \partial\Omega,
\end{align*}
where $\mathcal{G}$ is the double phase operator given by
\begin{align*}
\mathcal{G}(u)=\operatorname{div} \left(|\nabla u|^{p-2}\nabla u + \mu(x)
|\nabla u|^{q-2}\nabla u \right)\quad u\in W^{1,\mathcal{H}}_0(\Omega),
\end{align*}
$\Omega\subseteq \mathbb{R}^N$, $N\geq 2$, is a bounded domain with Lipschitz
boundary $\partial\Omega$, $1<p<N$, $p<q<p^*=\frac{Np}{N-p}$, $0 \leq
\mu(\cdot)\in L^\infty(\Omega)$, $\psi(s) = a_0 + b_0 s^{\vartheta-1}$ for
$s\in\mathbb{R}$, with $a_0 \geq 0$, $b_0>0$ and $\vartheta \geq 1$, and
$f\colon\Omega\times\mathbb{R}\to\mathbb{R}$ is a Carath\'{e}odory function
that grows superlinearly and subcritically. We prove the existence of two
constant sign solutions (one is positive, the other one negative) and of a
sign-changing solution which turns out to be a least energy sign-changing
solution of the problem above. Our proofs are based on variational tools in
combination with the quantitative deformation lemma and the
Poincar\'{e}-Miranda existence theorem.
|
Optical metasurfaces have been shown to be a powerful approach to planar
optical elements, enabling unprecedented control over light phase and
amplitude. At this stage, where a wide variety of static functionalities have
been accomplished, most efforts are being directed towards achieving reconfigurable
optical elements. Here, we present our approach to an electrically controlled
varifocal metalens operating in the visible frequency range. It relies on
dynamically controlling the refractive index environment of a silicon metalens
by means of an electric resistor embedded into a thermo-optical polymer. We
demonstrate precise and continuous tuneability of the focal length and achieve
focal length variation larger than the Rayleigh length for voltages as small as
12 V. The system time response is of the order of 100 ms, with the
potential to be reduced with further integration. Finally, the imaging
capability of our varifocal metalens is successfully validated in an optical
microscopy setting. Compared to conventional bulky reconfigurable lenses, the
presented technology is a lightweight and compact solution, offering new
opportunities for miniaturized smart imaging devices.
|
To apply eyeshadow without a brush, should I use a cotton swab or a
toothpick? Questions requiring this kind of physical commonsense pose a
challenge to today's natural language understanding systems. While recent
pretrained models (such as BERT) have made progress on question answering over
more abstract domains - such as news articles and encyclopedia entries, where
text is plentiful - in more physical domains, text is inherently limited due to
reporting bias. Can AI systems learn to reliably answer physical common-sense
questions without experiencing the physical world? In this paper, we introduce
the task of physical commonsense reasoning and a corresponding benchmark
dataset Physical Interaction: Question Answering or PIQA. Though humans find
the dataset easy (95% accuracy), large pretrained models struggle (77%). We
provide analysis about the dimensions of knowledge that existing models lack,
which offers significant opportunities for future research.
|
Inspired by examples of Katok and Milnor \cite{Milnor1997}, we construct a
simple example of a volume-preserving skew-product diffeomorphism whose center
foliation is pathological, in the sense that there is a full measure set
whose intersection with any center leaf contains at most one point.
|
Modern neural-network-based speech processing systems are typically required
to be robust against reverberation, and the training of such systems thus needs
a large amount of reverberant data. During the training of such systems, an
on-the-fly simulation pipeline is nowadays preferred, as it allows the model to
train on an effectively infinite number of data samples without pre-generating
and saving them on hard disk. An RIR simulation method thus needs to not only generate more
realistic artificial room impulse response (RIR) filters, but also generate
them in a fast way to accelerate the training process. Existing RIR simulation
tools have proven effective in a wide range of speech processing tasks and
neural network architectures, but their usage in on-the-fly simulation pipeline
remains questionable due to their computational complexity or the quality of
the generated RIR filters. In this paper, we propose FRAM-RIR, a fast random
approximation method of the widely-used image-source method (ISM), to
efficiently generate realistic multi-channel RIR filters. FRAM-RIR bypasses the
explicit calculation of sound propagation paths in ISM-based algorithms by
randomly sampling the location and number of reflections of each virtual sound
source based on several heuristic assumptions, while still maintaining accurate
direction-of-arrival (DOA) information of all sound sources. Visualization of
oracle beampatterns and directional features shows that FRAM-RIR can generate
more realistic RIR filters than existing widely-used ISM-based tools, and
experiment results on multi-channel noisy speech separation and dereverberation
tasks with a wide range of neural network architectures show that models
trained with FRAM-RIR can also achieve on par or better performance on real
RIRs compared to other RIR simulation tools with a significantly accelerated
training procedure. A Python implementation of FRAM-RIR is released.
|