We propose using recognition networks for approximate inference in Bayesian
networks (BNs). A recognition network is a multilayer perceptron (MLP) trained
to predict posterior marginals given observed evidence in a particular BN. The
input to the MLP is a vector of the states of the evidential nodes. The activity
of an output unit is interpreted as a prediction of the posterior marginal of
the corresponding variable. The MLP is trained using samples generated from the
corresponding BN. We evaluate a recognition network that was trained to do
inference in a large Bayesian network, similar in structure and complexity to
the Quick Medical Reference, Decision Theoretic (QMR-DT). Our network is a
binary, two-layer, noisy-OR network containing over 4000 potentially observable
nodes and over 600 unobservable, hidden nodes. In real medical diagnosis, most
observables are unavailable, and there is a complex and unknown bias that
selects which ones are provided. We incorporate a very basic type of selection
bias in our network: a known preference that available observables are positive
rather than negative. Even this simple bias has a significant effect on the
posterior. We compare the performance of our recognition network to
state-of-the-art approximate inference algorithms on a large set of test
cases. In order to evaluate the effect of our simplistic model of the selection
bias, we evaluate algorithms using a variety of incorrectly modeled observation
biases. Recognition networks perform well using both correct and incorrect
observation biases.
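
As a hedged illustration of the recognition-network idea (our sketch, not the paper's model or code), the following trains an MLP on samples from a toy one-cause noisy-OR network; regressing the sampled hidden state on the evidence makes the output approximate the posterior marginal P(h=1 | evidence). All sizes and probabilities below are assumed toy values.

```python
# Minimal sketch (not the paper's code): train an MLP on samples from a toy
# noisy-OR network with a single hidden cause h and several findings f_i.
# Squared-error regression of h on f makes the output approximate E[h | f],
# i.e. the posterior marginal P(h = 1 | evidence).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_findings = 20000, 8
prior, leak, strength = 0.2, 0.01, 0.8      # assumed toy parameters

h = (rng.random(n_samples) < prior).astype(float)
p_f = 1.0 - (1.0 - leak) * (1.0 - strength) ** h[:, None]   # noisy-OR activation
f = (rng.random((n_samples, n_findings)) < p_f).astype(float)

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
net.fit(f, h)

evidence = np.zeros((1, n_findings))
evidence[0, :3] = 1.0                        # three positive findings observed
print("estimated P(h=1 | evidence):", float(net.predict(evidence)[0]))
```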
|
A search for the resonant production of high-mass photon pairs is presented.
The analysis is based on samples of proton-proton collision data collected by
the CMS experiment at center-of-mass energies of 8 and 13 TeV, corresponding to
integrated luminosities of 19.7 and 3.3 inverse femtobarns, respectively. The
search focuses on spin-0 and spin-2 resonances with masses between 0.5 and 4
TeV and with widths, relative to the mass, between 1.4E-4 and 5.6E-2. Limits
are set on scalar resonances produced through gluon-gluon fusion, and on
Randall-Sundrum gravitons. A modest excess of events compatible with a narrow
resonance with a mass of about 750 GeV is observed. The local significance of
the excess is approximately 3.4 standard deviations. The significance is
reduced to 1.6 standard deviations once the effect of searching under multiple
signal hypotheses is considered. More data are required to determine the origin
of this excess.
|
Let $n\geq 2$ and $1\leq q<p<\infty$. We prove that if $\Omega\subset\mathbb
R^n$ is a Sobolev $(p, q)$-extension domain, with additional capacitory
restrictions on the boundary in the case $q\leq n-1$, $n>2$, then
$|\partial\Omega|=0$. In the case $1\leq q<n-1$, we give an example of a
Sobolev $(p,q)$-extension domain with $|\partial\Omega|>0$.
|
This paper focuses on the teaming aspects and the role of heterogeneity in a
multi-robot system applied to robot-aided urban search and rescue (USAR)
missions. We propose a needs-driven multi-robot cooperation mechanism
represented through a Behavior Tree structure and evaluate the system's
performance in terms of the group utility and energy cost to achieve the rescue
mission in a limited time. From the theoretical analysis, we prove that the
needs-driven cooperation in a heterogeneous robot system enables higher group
utility than a homogeneous robot system. We also perform simulation experiments
to verify the proposed needs-driven collaboration and show that the
heterogeneous multi-robot cooperation can achieve better performance and
increase system robustness by reducing uncertainty in task execution. Finally,
we discuss the application to human-robot teaming.
|
Given a pair of finite groups $F, G$ and a normalized 3-cocycle $\omega$ of
$G$, where $F$ acts on $G$ as automorphisms, we consider quasi-Hopf algebras
defined as a cleft extension $\Bbbk^G_\omega\#_c\,\Bbbk F$ where $c$ denotes
some suitable cohomological data. When $F\rightarrow \overline{F}:=F/A$ is a
quotient of $F$ by a central subgroup $A$ acting trivially on $G$, we give
necessary and sufficient conditions for the existence of a surjection of
quasi-Hopf algebras and cleft extensions of the type $\Bbbk^G_\omega\#_c\,
\Bbbk F\rightarrow \Bbbk^G_\omega\#_{\overline{c}} \, \Bbbk \overline{F}$. Our
construction is particularly natural when $F=G$ acts on $G$ by conjugation, and
$\Bbbk^G_\omega\#_c \Bbbk G$ is a twisted quantum double $D^{\omega}(G)$. In
this case, we give necessary and sufficient conditions for
Rep($\Bbbk^G_\omega\#_{\overline{c}} \, \Bbbk \overline{G}$) to be a modular
tensor category.
|
The problem of the possible creation of mixed hadron-quark-gluon matter, which
can arise in nuclear or heavy-ion collisions, is addressed. It is shown that
there can exist several different kinds of such mixed matter. The main types
of this matter can be classified into macroscopic, mesoscopic, and microscopic
mixtures. Different types of these mixtures require fundamentally different
descriptions. Before comparing theoretical results with experiments, one has
to analyze the thermodynamic stability of all these mixed states, classifying
them into unstable, metastable, and stable. Only the most stable mixed state
should be compared with experiment. Mixed states also need to be checked with
regard to stratification instability. In addition to the static stratification
instability, a dynamic instability can occur in a mixture of components moving
with respect to each other. This effect, called counterflow instability, also
has to be taken into account, since it can lead to the stratification of mixed
matter.
|
A simple method to disperse individual single-walled carbon nanotubes (SWCNTs)
on an atomically flat substrate is presented. Proper tuning of the ac modes of
atomic force microscopes (AFM) is discussed. This is needed to discriminate
between individual nanotubes and very small bundles. The distribution of
lengths of the nanotubes measured by these methods is reported.
|
Weight-sharing neural architecture search (NAS) is an effective technique for
automating efficient neural architecture design. Weight-sharing NAS builds a
supernet that assembles all the architectures as its sub-networks and jointly
trains the supernet with the sub-networks. The success of weight-sharing NAS
heavily relies on distilling the knowledge of the supernet to the sub-networks.
However, we find that the widely used distillation divergence, i.e., KL
divergence, may lead to student sub-networks that over-estimate or
under-estimate the uncertainty of the teacher supernet, leading to inferior
performance of the sub-networks. In this work, we propose to improve the
supernet training with a more generalized alpha-divergence. By adaptively
selecting the alpha-divergence, we simultaneously prevent the over-estimation
or under-estimation of the uncertainty of the teacher model. We apply the
proposed alpha-divergence based supernets training to both slimmable neural
networks and weight-sharing NAS, and demonstrate significant improvements.
Specifically, our discovered model family, AlphaNet, outperforms prior-art
models on a wide range of FLOPs regimes, including BigNAS, Once-for-All
networks, and AttentiveNAS. We achieve ImageNet top-1 accuracy of 80.0% with
only 444M FLOPs. Our code and pretrained models are available at
https://github.com/facebookresearch/AlphaNet.
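
As a rough sketch of what an alpha-divergence distillation term can look like (our illustration; the paper's exact clipped formulation, adaptive alpha selection, and supernet sampling are not reproduced here), one can use Amari's alpha-divergence between the teacher and student softmax distributions:

```python
# Hedged sketch of an alpha-divergence distillation loss between teacher and
# student logits (illustrative only, not the paper's exact objective).
import torch
import torch.nn.functional as F

def alpha_divergence(teacher_logits, student_logits, alpha=1.5, eps=1e-8):
    """D_alpha(p||q) = 1/(alpha*(alpha-1)) * sum_i p_i * ((p_i/q_i)^(alpha-1) - 1),
    which recovers KL(p||q) in the limit alpha -> 1."""
    p = F.softmax(teacher_logits, dim=-1).clamp_min(eps)
    q = F.softmax(student_logits, dim=-1).clamp_min(eps)
    ratio = (p / q) ** (alpha - 1.0)
    return (p * (ratio - 1.0)).sum(dim=-1).mean() / (alpha * (alpha - 1.0))

# alpha > 1 heavily penalizes the student for placing too little mass where the
# teacher has mass (under-estimating teacher uncertainty); alpha < 1 penalizes
# the opposite failure mode.
t = torch.randn(4, 10)
s = torch.randn(4, 10, requires_grad=True)
loss = alpha_divergence(t, s, alpha=1.5)
loss.backward()
```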
|
A nuclear structure model based on linear response theory (i.e., Random Phase
Approximation) and which includes pairing correlations and anharmonicities
(coupling with collective vibrations), has been implemented in such a way that
it can be applied on the same footing to magic as well as open-shell nuclei. As
applications, we have chosen to study the dipole excitations both in
well-known, stable isotopes like $^{208}$Pb and $^{120}$Sn as well as in the
neutron-rich, unstable $^{132}$Sn nucleus, by addressing in the latter case the
question about the nature of the low-lying strength. Our results suggest that
the model is reliable and predicts in all cases low-lying strength of
non-collective nature.
|
We present some fragments of a teaching experiment realized in a first grade
classroom, to sow the seeds for a mathematical definition of rectangles that
includes squares. Within the paradigm of semiotic mediation, we studied the
emergence of pivot signs, which were exploited by the teacher to pave the way
towards an inclusive definition of rectangles and squares. This was done to
favor overcoming children's spontaneous distinction of these figures into
distinct categories, reinforced by everyday language. The experiment is an
example of an approach towards the theoretical dimension of mathematics in
early childhood.
|
We report the detection at 850um of the central source in SSA22-LAB1, the
archetypal Lyman-alpha Blob (LAB), a 100kpc-scale radio-quiet emission-line
nebula at z=3.1. The flux density of the source, $S_{850}=4.6\pm1.1$mJy implies
the presence of a galaxy, or group of galaxies, with a total luminosity of
$L_{\rm IR}\approx10^{12}L_\odot$. The position of an active source at the
center of a ~50kpc-radius ring of linearly polarized Ly-alpha emission detected
by Hayes et al. (2011) suggests that the central source is leaking Ly-alpha
photons preferentially in the plane of the sky, which undergo scattering in HI
clouds at large galactocentric radius. The Ly-alpha morphology around the
submillimeter detection is reminiscent of a biconical outflow, and the average
Ly-alpha line profiles of the two `lobes' are dominated by a red peak, expected
for a resonant line emerging from a medium with a bulk velocity gradient that
is outflowing relative to the line center. Taken together, these observations
provide compelling evidence that the central active galaxy (or galaxies) is
responsible for a large fraction of the extended Ly-alpha emission and
morphology. Less clear is the history of the cold gas in the circumgalactic
medium being traced by Ly-alpha: is it mainly pristine material accreting into
the halo that has not yet been processed through an interstellar medium (ISM),
now being blown back as it encounters an outflow, or does it mainly comprise
gas that has been swept-up within the ISM and expelled from the galaxy?
|
The existence of weak solutions to the "viscous incompressible fluid + rigid
body" system with Navier slip-with-friction conditions in a 3D bounded domain
has been recently proved by G\'{e}rard-Varet and Hillairet in \cite{exi:GeH}.
In 2D for a fluid alone (without any rigid body) it is well-known since Leray
that weak solutions are unique, continuous in time with $ L^{2} $ regularity in
space and satisfy the energy equality. In this paper we prove that these
properties also hold for the 2D "viscous incompressible fluid + rigid body"
system.
|
The paper is devoted to studying the performance of a computational pipeline
in which the number of simultaneously executing stages at each moment is
bounded from above by a fixed number. Viewing this restriction as a structural
hazard makes it possible to construct an analytical model for calculating the
time needed to process a given amount of input data. This model leads to a
formula for the optimal depth of a bounded pipeline for a given volume of
input data. The formula shows that the optimal depth can change greatly under
small changes in the amount of data. To eliminate this disadvantage and to
obtain a more convenient formula for the optimal depth, a pipeline with a
single random hazard is constructed whose expected processing time
approximates the analytical model of the bounded pipeline. In addition, a
pipeline with two hazards is constructed, whose analytical model yields
formulas for the optimal depth of a bounded pipeline with restart for a given
amount of data. To check whether the proposed analytical models are consistent
with experiments measuring the processing time, two methods of computer
simulation of bounded pipelines are used: the first is constructed as a
multi-threaded application, and the second is based on the theory of free
partially commutative monoids.
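
The paper's analytical model is not reproduced here, but the setup can be made concrete with a toy tick-by-tick simulation of a depth-k pipeline in which at most m stages may fire per tick (our sketch, assuming in-order item processing):

```python
# Toy simulation (our illustration): a pipeline of depth k where at most m
# stages may be active simultaneously; returns total ticks to process n items.
def bounded_pipeline_time(n, k, m):
    stage_of = [0] * n            # next stage each item still needs
    done, ticks = 0, 0
    while done < n:
        fired, busy = 0, set()
        for i in range(n):        # older items get priority
            s = stage_of[i]
            # fire if: item unfinished, stage free this tick, budget m not
            # exhausted, and the item stays strictly behind its predecessor
            if s < k and s not in busy and fired < m and (i == 0 or stage_of[i - 1] > s):
                busy.add(s)
                fired += 1
                stage_of[i] += 1
                if stage_of[i] == k:
                    done += 1
        ticks += 1
    return ticks

print(bounded_pipeline_time(n=100, k=8, m=8))   # unbounded: k + n - 1 = 107
print(bounded_pipeline_time(n=100, k=8, m=4))   # bounded: roughly n * k / m
```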
|
The diffraction of a plane wave by a transversely inhomogeneous isotropic
nonmagnetic linearly polarized dielectric layer filled with a Kerr-type
nonlinear medium is considered. The analytical and numerical solution
techniques are developed. The diffraction problem is reduced to a singular
boundary value problem for a semilinear second-order ordinary differential
equation with a cubic nonlinearity and then to a cubic-nonlinear integral
equation of the second kind and to a system of nonlinear operator equations of
the second kind solved using iterations. Sufficient conditions of the unique
solvability are obtained using the contraction principle.
|
Precious metal alloys enable new possibilities to tailor materials for
specific optical functions. Here we present a systematic study of the effects
of nanoscale alloying on the permittivity of Au-Ag-Cu metals at 38 different
atomic mixing ratios. The permittivity was measured and analyzed numerically by
applying the Drude model. X-ray diffraction (XRD) revealed the face-centered
cubic lattice of the alloys. Both optical spectra and XRD results point
towards an equivalent composition-dependent electron scattering behavior. The
correlation between the fundamental structural parameters of the alloys and the
resulting optical properties is elucidated. Plasmonic properties of the
Au-Ag-Cu alloy nanoparticles were investigated by numerical simulations.
Guidelines for designing the plasmonic response of nanostructures and their
patterns are presented from the materials science perspective.
|
Reversible pump turbines (RPTs) are praised for their operational flexibility,
which has led to their recent wide adoption within pumped storage hydropower
plants. However, the off-design operating conditions frequently imposed on
these plants give rise to large flow instabilities within RPT flow zones,
among which the vaneless space (VS) between the runner and guide vanes is
claimed to be the principal source. Recent studies have pointed out that these
instabilities may stretch to other flow zones, causing more losses and
subsequent degradation of the machine's operational performance. This study
therefore investigates the VS flow instability, its propagation
characteristics, and the effect of machine influx and runner blade number on
both. CFD-backed simulations are conducted on ten flow conditions spanning
from the turbine zone through the runaway vicinity to turbine brake conditions
(OC1 to OC15), using three runner models with different blade counts (7BL,
8BL, and 9BL). While VS pressure pulsation amplitudes increased as the number
of runner blades decreased, the continuously decreasing flow led to a gradual
drop in the VS pressure pulsation level within the turbine zone, before rising
towards runaway and dropping back in the deep turbine brake zone. The effect
of the same parameters on the transmission mode to the VS upstream flow zones
is more remarkable than on the downstream flow zones.
|
We define a dynamic model of random networks, where new vertices are
connected to old ones with a probability proportional to a sublinear function
of their degree. We first give a strong limit law for the empirical degree
distribution, and then have a closer look at the temporal evolution of the
degrees of individual vertices, which we describe in terms of large and
moderate deviation principles. Using these results, we expose an interesting
phase transition: in cases of strong preference of large degrees, eventually a
single vertex emerges forever as vertex of maximal degree, whereas in cases of
weak preference, the vertex of maximal degree is changing infinitely often.
Loosely speaking, the transition between the two phases occurs in the case when
a new edge is attached to an existing vertex with a probability proportional to
the root of its current degree.
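
A toy simulation of the model makes the setup concrete (our sketch; the paper's analysis is rigorous rather than numerical). The exponent gamma = 1/2, i.e. attachment weight equal to the root of the current degree, is exactly the critical case mentioned at the end of the abstract:

```python
# Illustrative simulation (not from the paper): each new vertex attaches to an
# old vertex v with probability proportional to f(deg(v)) = deg(v) ** gamma,
# a sublinear preference function for gamma < 1.
import random

def grow(n, gamma=0.5, seed=1):
    rng = random.Random(seed)
    deg = [1, 1]                                  # start from a single edge
    for _ in range(2, n):
        weights = [d ** gamma for d in deg]
        target = rng.choices(range(len(deg)), weights=weights, k=1)[0]
        deg[target] += 1
        deg.append(1)                             # the newcomer has degree 1
    return deg

deg = grow(10_000, gamma=0.5)
print("max degree:", max(deg), "held by vertex", deg.index(max(deg)))
```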
|
An integral domain is called atomic if every nonzero nonunit element factors
into irreducibles. On the other hand, an integral domain is said to satisfy the
ascending chain condition on principal ideals (ACCP) if every ascending chain
of principal ideals terminates. It was asserted by Cohn back in the sixties
that every atomic domain satisfies the ACCP, but such an assertion was refuted
by Grams in the seventies with an explicit construction of a neat example.
Still, atomic domains without the ACCP are notoriously elusive, and just a few
classes have been found since Grams' first construction. In the first part of
this paper, we generalize Grams' construction to provide new classes of atomic
domains without the ACCP. In the second part of this paper, we construct what
seems to be the first atomic semigroup ring without the ACCP in the existing
literature.
|
A generalization of the Gibbs-von Neumann relative entropy is proposed based
on the quantum BBGKY [Bogolyubov-Born-Green-Kirkwood-Yvon] hierarchy as the
nonequilibrium entropy for an N-body system. By using a generalization of the
Liouville-von Neumann equation describing the evolution of a density
superoperator, it is demonstrated that the entropy production for an isolated
system is non-negative, which provides an arrow of time. Moreover, following
the procedure of non-equilibrium thermodynamics, a master matrix is introduced
for which a microscopic expression is obtained. Then, the quantum Boltzmann
equation is derived in terms of a transition superoperator related to that
master matrix.
|
The origin of non-baryonic dark matter remains elusive despite ongoing
sensitive searches for heavy, thermally produced dark matter particles.
Recently, it has been shown that non-thermally produced vector bosons
(sometimes called hidden photons) related to a broken U(1) gauge symmetry are
among the possible WISP (weakly interacting slim particles) dark matter
candidates. The WISP Dark Matter eXperiment (WISPDMX) is the first direct
hidden photon dark matter search experiment probing particle masses within the
0.8-2.07 $\mu$eV range with four resonant modes of a tunable radio frequency
cavity, and down to 0.4 neV outside of resonance. In this paper, we present
the results from the first science run of WISPDMX, comprising 22000 broadband
spectra with a 500 MHz bandwidth and a 50 Hz spectral resolution, obtained
during 10-second integrations made at each individual tuning step of the
measurements. No plausible dark matter candidate signal is found, either in
the individual spectra, which reach a minimum detectable power of
$8\times10^{-19}$ W, or in the averaged spectrum of all the measurements, with
a minimum detectable power of $5\times10^{-22}$ W attained for a total of 61
hours of data taking. Using these spectra, we derive upper limits on the
coupling constant of the hidden photon at the level of $10^{-13}$ for the
resonant frequency ranges and $10^{-12}$ for the broadband mass range 0.2-2.07
$\mu$eV, steadily increasing at masses below 0.2 $\mu$eV.
|
Detection of curvilinear structures in images has long been of interest. One
of the most challenging aspects of this problem is inferring the graph
representation of the curvilinear network. Most existing delineation approaches
first perform binary segmentation of the image and then refine it using either
a set of hand-designed heuristics or a separate classifier that assigns
likelihood to paths extracted from the pixel-wise prediction. In our work, we
bridge the gap between segmentation and path classification by training a deep
network that performs those two tasks simultaneously. We show that this
approach is beneficial because it enforces consistency across the whole
processing pipeline. We apply our approach to road and neuron datasets.
|
This paper deals with analyzing structural breaks in the covariance operator
of sequentially observed functional data. For this purpose, procedures are
developed to segment an observed stretch of curves into periods for which
second-order stationarity may be reasonably assumed. The proposed methods are
based on measuring the fluctuations of sample eigenvalues, either individually
or jointly, and traces of the sample covariance operator computed from segments
of the data. To implement the tests, new limit results are introduced that deal
with the large-sample behavior of vector-valued processes built from partial
sample eigenvalue estimates. These results in turn enable the calibration of
the tests to a prescribed asymptotic level. A simulation study and an
application to Australian annual minimum temperature curves confirm that the
proposed methods work well in finite samples. The application suggests that the
variation in annual minimum temperature underwent a structural break in the
1950s, after which typical fluctuations from the generally increasing trend
started to be significantly smaller.
|
In this paper, we define NC complex spaces as complex spaces together with a
structure sheaf of associative algebras in such a way that the abelianization of
the structure sheaf is the sheaf of holomorphic functions.
|
We report the discovery of ancient massive merger events in the early-type
galaxies NGC 1380 and NGC 1427, members of the Fornax galaxy cluster. Both
galaxies have been observed by the MUSE IFU instrument on the VLT, as part of
the Fornax3D project. By fitting recently-developed population-orbital
superposition models to the observed surface brightness as well as stellar
kinematic, age, and metallicity maps, we obtain the stellar orbits, age and
metallicity distributions of each galaxy. We then decompose each galaxy into
multiple orbital-based components, including a dynamically hot inner stellar
halo component which is identified as the relic of past massive mergers. By
comparing to analogues from cosmological galaxy simulations, chiefly TNG50, we
find that the formation of such a hot inner stellar halo requires the merger
with a now-destroyed massive satellite galaxy of $3.7_{-1.5}^{+2.7} \times
10^{10}$ Msun (about $1/5$ of its current stellar mass) in the case of NGC 1380
and of $1.5_{-0.7}^{+1.6} \times10^{10}$ Msun (about $1/4$ of its current
stellar mass) in the case of NGC 1427. Moreover, we infer that the last massive
merger in NGC 1380 happened $\sim10$ Gyr ago based on the stellar age
distribution of the re-grown dynamically cold disk, whereas the merger in NGC
1427 ended $t\lesssim 8$ Gyr ago based on the stellar populations in its hot
inner stellar halo. The major merger event in NGC 1380 is the first one with
both merger mass and merger time quantitatively inferred in a galaxy beyond the
Local Volume. Moreover, it is the oldest and most massive merger uncovered in
nearby galaxies so far.
|
A connected graph is 2K2-free if it does not contain a pair of independent
edges as an induced subgraph. In this paper, we present a structural
characterization of minimal vertex separators and show that there is a
polynomial number of minimal vertex separators in 2K2-free graphs. Further,
using this enumeration, we show that finding a minimum connected vertex
separator in 2K2-free graphs is polynomial-time solvable. We highlight that
finding a minimum connected vertex separator is NP-complete in chordality-5
graphs, which form a supergraph class of 2K2-free graphs. Our study also
includes the enumeration of all distinct maximal independent sets and the
recognition of 2K2-free graphs. Finally, we present a polynomial-time
algorithm for the feedback vertex set problem in a subclass of 2K2-free
graphs.
|
We consider the spin-orbit-induced spin Hall effect and spin swapping in
diffusive superconductors. By employing the non-equilibrium Keldysh Green's
function technique in the quasiclassical approximation, we derive coupled
transport equations for the spectral spin and particle distributions and for
the energy density in the elastic scattering regime. We compute four
contributions to the spin Hall conductivity, namely, skew scattering,
side-jump, anomalous velocity, and the Yafet contribution. The reduced density
of states in the superconductor causes a renormalization of the spin Hall
angle. We demonstrate that all four of these contributions to the spin Hall
conductivity are renormalized in the same way in the superconducting state. In
its simplest manifestation, spin swapping transforms a primary spin current
into a secondary spin current with swapped current and polarization directions.
We find that the spin-swapping coefficient is not explicitly but only
implicitly affected by superconducting correlations through the renormalized
diffusion coefficients. We discuss experimental consequences for measurements
of the (inverse) spin Hall effect and spin swapping in four-terminal
geometries. In our geometry, below the superconducting transition temperature,
the spin-swapping signal is increased an order of magnitude while changes in
the (inverse) spin Hall signal are moderate.
|
We introduce a new class of event shapes to characterize the jet-like
structure of an event. Like traditional event shapes, our observables are
infrared/collinear safe and involve a sum over all hadrons in an event, but
like a jet clustering algorithm, they incorporate a jet radius parameter and a
transverse momentum cut. Three of the ubiquitous jet-based observables---jet
multiplicity, summed scalar transverse momentum, and missing transverse
momentum---have event shape counterparts that are closely correlated with their
jet-based cousins. Due to their "local" computational structure, these jet-like
event shapes could potentially be used for trigger-level event selection at the
LHC. Intriguingly, the jet multiplicity event shape typically takes on
non-integer values, highlighting the inherent ambiguity in defining jets. By
inverting jet multiplicity, we show how to characterize the transverse momentum
of the n-th hardest jet without actually finding the constituents of that jet.
Since many physics applications do require knowledge about the jet
constituents, we also build a hybrid event shape that incorporates (local) jet
clustering information. As a straightforward application of our general
technique, we derive an event-shape version of jet trimming, allowing
event-wide jet grooming without explicit jet identification. Finally, we
briefly mention possible applications of our method for jet substructure
studies.
|
A growing body of literature is aimed at designing private mempools in
blockchains. The ultimate goal of this research is addressing several phenomena
broadly classed under maximal extractable value (MEV), with sandwich attacks as
the canonical example.
literature has primarily viewed MEV as a problem arising from oversights in
distributed systems and cryptographic protocol design and has attempted to
address it with the standard tool sets from those disciplines. This paper
argues that the impact of private mempools on markets and agent incentives
renders analyses that do not consider the economic lens incomplete. The paper
presents several observations across blockchains and traditional finance to
justify this argument and highlight specific dynamics for future study.
|
Fermi-edge absorption theory predicting the spectrum, A(\omega)\propto
\omega^{-2\delta_0/\pi+\delta^2_0/\pi^2}, relies on the assumption that
scattering phase, \delta_0, is frequency-independent. Dependence of \delta_0 on
\omega becomes crucial near the resonant condition, where the phase changes
abruptly by \pi. In this limit, due to the finite time spent by the electron
on a resonant level, the scattering is dynamic. We incorporate this time delay
into the theory, solve the Dyson equation with a modified kernel, and find
that, near the resonance, A(\omega) behaves as \omega^{-3/4} |\ln \omega|.
Resonant scattering off the core hole takes place in 1D and 2D in the presence
of an empty subband above the Fermi level; attraction to the hole then splits
off a resonant level from the bottom of the empty subband. Fermi-edge
absorption in the regime when the resonant level transforms into a Kondo peak
is discussed.
|
We discuss two approaches to searches for gravitational-wave (GW) and
electromagnetic (EM) counterparts of binary neutron star mergers. The first
approach relies on triggering archival searches of GW detector data based on
detections of EM transients. We introduce a quantitative approach to evaluate
the improvement to GW detector reach due to the extra information gained from
the EM transient and the increased confidence in the presence of a signal from
a binary merger. We also advocate utilizing other transients in addition to
short gamma ray bursts. The second approach involves following up GW candidates
with targeted EM observations. We argue for the use of slower but optimal
parameter-estimation techniques to localize the source on the sky, and for a
more sophisticated use of astrophysical prior information, including galaxy
catalogs, to find preferred followup locations.
|
We discuss two types of neutrino mass matrices which both give $\theta_{23} =
45^\circ$, i.e., a maximal atmospheric mixing angle. We review three models,
based on the seesaw mechanism and on simple extensions of the scalar sector of
the Standard Model, where those mass matrices are obtained from symmetries.
|
The knowledge concept recommendation in Massive Open Online Courses (MOOCs)
is a significant issue that has garnered widespread attention. Existing methods
primarily rely on the explicit relations between users and knowledge concepts
on the MOOC platforms for recommendation. However, there are numerous implicit
relations (e.g., shared interests or same knowledge levels between users)
generated within the users' learning activities on the MOOC platforms. Existing
methods fail to consider these implicit relations, and these relations
themselves are difficult to learn and represent, causing poor performance in
knowledge concept recommendation and an inability to meet users' personalized
needs. To address this issue, we propose a novel framework based on contrastive
learning, which can represent and balance the explicit and implicit relations
for knowledge concept recommendation in MOOCs (CL-KCRec). Specifically, we
first construct a MOOCs heterogeneous information network (HIN) by modeling the
data from the MOOC platforms. Then, we utilize a relation-updated graph
convolutional network and stacked multi-channel graph neural network to
represent the explicit and implicit relations in the HIN, respectively.
Considering that explicit relations are relatively fewer than implicit
relations in MOOCs, we propose contrastive learning with a prototypical graph
to enhance the representations of both kinds of relations and capture their
rich inherent relational knowledge, which can guide the propagation of
students' preferences within the HIN. Based on these enhanced
representations, to ensure the balanced contribution of both towards the final
recommendation, we propose a dual-head attention mechanism for balanced fusion.
Experimental results demonstrate that CL-KCRec outperforms several
state-of-the-art baselines on real-world datasets in terms of HR, NDCG and MRR.
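
The paper's prototypical-graph contrastive objective is not spelled out in the abstract; as a generic reference point only, a standard InfoNCE-style contrastive loss over relation representations looks like the following sketch (tensor shapes are assumed for illustration):

```python
# Generic InfoNCE-style contrastive loss (our illustration; CL-KCRec's actual
# prototypical-graph objective is more elaborate).
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.2):
    """anchor, positive: (d,); negatives: (k, d). Returns a scalar loss that
    pulls the positive pair together and pushes negatives apart."""
    a = F.normalize(anchor, dim=0)
    pos = torch.exp(a @ F.normalize(positive, dim=0) / tau)
    neg = torch.exp(F.normalize(negatives, dim=1) @ a / tau).sum()
    return -torch.log(pos / (pos + neg))

a, p = torch.randn(64), torch.randn(64)
n = torch.randn(16, 64)
print(info_nce(a, p, n))
```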
|
The resource quality and the temporal generation pattern of variable
renewable energy sources vary significantly across Europe. In this paper
spatial distributions of renewable assets are explored which exploit this
heterogeneity to lower the total system costs for a high level of renewable
electricity in Europe. Several intuitive heuristic algorithms, optimal
portfolio theory and a local search algorithm are used to find optimal
distributions of renewable generation capacities that minimise the total costs
of backup, transmission and renewable capacity simultaneously. Using current
cost projections, an optimal heterogeneous distribution favours onshore wind,
particularly in countries bordering the North Sea, which results in average
electricity costs that are up to 11% lower than for a homogeneous reference
distribution of renewables proportional to each country's mean load. The
reduction becomes even larger, namely 18%, once the transmission capacities are
put to zero in the homogeneous reference distribution. Heuristic algorithms to
distribute renewable capacity based on each country's wind and solar capacity
factors are shown to provide a satisfactory approximation to fully optimised
renewable distributions, while maintaining the benefits of transparency and
comprehensibility. The sensitivities of the results to changing costs of solar
generation and gas supply as well as to the possible cross-sectoral usage of
unavoidable curtailment energy are also examined.
|
We present a Chern-Simons theory of the fractional quantum Hall effect in
which flux attachment is followed by a transformation that effectively attaches
the correlation holes. We extract the correlated wavefunctions, compute the
drift and cyclotron currents (due to inhomogeneous density), exhibit the Read
operator, and operators that create quasi-particles and holes. We show how the
bare kinetic energy can get quenched and replaced by one due to interactions.
We find that for $\nu =1/2$ the low energy theory has neutral quasiparticles
and give the effective Hamiltonian and constraints.
|
In this work, new equivalences of topological statements and axioms weaker
than ${\bf AC}$ are proven. These equivalences include the use of
anti-properties. All these equivalences have been checked with a computer using
the theorem proving system Isabelle/Isar and are available in the IsarMathLib
repository.
|
We propose a Multi-vAlue Rule Set (MRS) model for predicting in-hospital
patient mortality. Compared to rule sets built from single-valued rules, MRS
adopts a more generalized form of association rules that allows multiple values
in a condition. Rules of this form are more concise than classical
single-valued rules in capturing and describing patterns in data. Our
formulation also pursues a higher efficiency of feature utilization, which
reduces the possible cost of data collection and storage. We propose a Bayesian
framework for formulating an MRS model and an efficient inference method for
learning a maximum \emph{a posteriori} solution, incorporating theoretically
grounded bounds to iteratively reduce the search space and improve the search
efficiency. Experiments show that our model achieves better performance than
baseline methods, including the current system used by the hospital.
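
To make the rule form concrete, here is a toy multi-value rule (our illustration with hypothetical feature names, not taken from the paper): a single condition may admit several values at once, which single-valued rules would need a disjunction to express.

```python
# Toy multi-value rule: each condition maps a feature to a SET of admissible
# values (feature names and values are hypothetical).
rule = {
    "age_group": {"60-70", "70-80", "80+"},      # multiple values in one condition
    "admission_type": {"emergency"},
}

def rule_fires(patient, rule):
    """A rule fires when every condition's feature value lies in its set."""
    return all(patient.get(feat) in values for feat, values in rule.items())

patient = {"age_group": "70-80", "admission_type": "emergency"}
print(rule_fires(patient, rule))   # True -> the rule votes for the positive class
```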
|
In this article, we consider a small rigid body moving in a viscous fluid
filling the whole plane. We assume that the diameter of the rigid body goes to
0, that the initial velocity has bounded energy and that the density of the
rigid body goes to infinity. We prove that the rigid body has no influence on
the limit equation by showing convergence of the solutions towards a solution
of the Navier-Stokes equations in the full plane.
|
We report electronic transport experiments on a graphene single electron
transistor. The device consists of a graphene island connected to source and
drain electrodes via two narrow graphene constrictions. It is electrostatically
tunable by three lateral graphene gates and an additional back gate. The
tunneling coupling is a strongly nonmonotonic function of gate voltage
indicating the presence of localized states in the barriers. We investigate
energy scales for the tunneling gap, the resonances in the constrictions and
for the Coulomb blockade resonances. From Coulomb diamond measurements in
different device configurations (i.e. barrier configurations) we extract a
charging energy of 3.4 meV and estimate a characteristic energy scale for the
constriction resonances of 10 meV.
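
As a quick consistency check (our addition, using the common single-electron-transistor convention $E_C = e^2/C_\Sigma$), the measured charging energy translates into a total island capacitance:

```latex
% Back-of-the-envelope estimate (our addition), assuming E_C = e^2/C_Sigma:
\[
  C_\Sigma = \frac{e^2}{E_C}
           = \frac{1.602\times 10^{-19}\,\mathrm{C}}{3.4\times 10^{-3}\,\mathrm{V}}
           \approx 47\,\mathrm{aF}.
\]
```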
|
We introduce a notion of extraction-contraction coproduct on twisted
bialgebras, that is to say bialgebras in the category of linear species. If $P$
is a twisted bialgebra, an extraction-contraction coproduct sends $P[X]$ to
$P[X/\sim]\otimes P[X]$ for any finite set $X$ and any equivalence relation
$\sim$ on $X$, with a coassociativity constraint and compatibilities with the
product and coproduct of $P$. We prove that if $P$ is a twisted bialgebra with
an extraction-contraction coproduct, then $P\circ Com$ is a bialgebra in the
category of coalgebraic species, that is to say species in the category of
coalgebras. We then introduce a coloured version of the bosonic Fock functor.
This induces a bifunctor which associates to any bialgebra $(V,\cdot,\delta_V)$
and to any twisted bialgebra $P$ with an extraction-contraction coproduct a
comodule-bialgebra $F_V[P]$: this object inherits a product $m$ and two
coproducts $\Delta$ and $\delta$, such that $(F_V[P],m,\Delta)$ is a bialgebra
in the category of right $(F_V[P],m,\delta)$-comodules. As an example, this is
applied to the twisted bialgebra of graphs. The coloured Fock functors then
allow us to extend the construction of the double bialgebra of graphs to double
bialgebras of graphs whose vertices are decorated by elements of any bialgebra
$V$. Other examples (on mixed graphs, hypergraphs, noncrossing partitions...)
will be given in a series of forthcoming papers.
|
Astrometry is less sensitive to stellar activity than the radial velocity
technique when attempting to detect Earth mass planets in the habitable zone of
solar-type stars. This is due to a smaller number of physical processes
affecting the signal, and a larger ratio of the amplitude of the planetary
signal to the stellar signal. A few high-precision astrometric missions have
therefore been proposed over the past two decades. We aim to re-estimate the
detection limits in astrometry for the nearby stars which are the main targets
proposed for the THEIA astrometric mission, the most elaborate mission to
search for planets, and to characterise its performance. This analysis is
performed for the 55 F-G-K stars in the THEIA sample. We used realistic
simulations of stellar activity and selected those that correspond best to each
star in terms of spectral type and average activity level. Then, we performed
blind tests to estimate the performance. We find worse detection limits
compared to those previously obtained for that sample based on a careful
analysis of the false positive rate, with values in the Earth-mass regime for
most stars of the sample. The difference is attributed to the fact that we
analysed full time series, adapted to each star in the sample, rather than
using the expected solar jitter only. Although these detection limits have a
relatively low signal-to-noise ratio, the fitted parameters have small
uncertainties. We confirm the low impact of stellar activity on exoplanet
detectability for solar-type stars, although it plays a significant role for
the closest stars such as alpha Cen A and B. However, for the few stars in the
sample with a habitable zone corresponding to long periods, namely subgiants,
the THEIA observational strategy is not well adapted and should prevent the
detection of planets in the habitable zone, unless a longer mission can be
proposed.
|
The gravitational-wave (GW) detection of GW190521 has provided new insights
on the mass distribution of black holes and new constraints for astrophysical
formation channels. With independent claims of GW190521 having significant
pre-merger eccentricity, we investigate what this implies for GW190521-like
binaries that form dynamically. The Laser Interferometer Space Antenna (LISA)
will also be sensitive to GW190521-like binaries if they are circular from an
isolated formation channel. We show, however, that GW190521-like binaries that
form dynamically may skip the LISA band entirely. To this end, we simulate
GW190521 analogues that dynamically form via post-Newtonian binary-single
scattering. From these scattering experiments, we find that GW190521-like
binaries may enter the LIGO-Virgo band with significant eccentricity as
suggested by recent studies, though well below an eccentricity of $e_{\rm 10Hz}
\lesssim 0.7$. Eccentric GW190521-like binaries further motivate the
astrophysical science case for a decihertz GW observatory, such as the
kilometer-scale version of the Midband Atomic Gravitational-wave
Interferometric Sensor (MAGIS). Pre-merger observations of GW190521-like
binaries with such a decihertz GW detector would be able to constrain the
eccentricity of GW190521-like binaries to greater precision than with just
LIGO-Virgo alone. These eccentricity constraints would also provide additional
insights into the possible environments that GW190521-like binaries form in.
|
Intermediate redshifts between galaxy surveys and the cosmic microwave
background (CMB) remain unexplored territory. Line intensity mapping (LIM)
offers a way to probe the $z\gtrsim 1$ Universe, including the epoch of
reionization and the dark ages. Via exact nulling of the lensing kernel, we
show that LIM lensing, in combination with galaxy (resp., CMB) lensing, can
uniquely probe the $z\gtrsim 1$ (resp., pre-reionization) Universe. However,
LIM foregrounds are a key hurdle to this futuristic technique. While continuum
foregrounds can be controlled by discarding modes perpendicular to the line of
sight (low $k_\parallel$ modes), interloper foregrounds have not been addressed
in the context of LIM lensing. In this paper, we quantify the interloper bias
to LIM lensing for the first time, and derive a ''LIM-pair'' estimator which
avoids it exactly after cross-correlating with CMB lensing. This new quadratic
lensing estimator works by combining two intensity maps in different lines,
from the same redshift, whose interlopers are uncorrelated. As a result, this
foreground avoidance method is robust to even large changes in the amplitude of
the interloper power and non-Gaussianity. The cross-spectrum of the LIM-pair
estimator with CMB lensing is thus robust to the currently large theoretical
uncertainties in LIM modeling at high redshift.
|
We study the convergence of the false discovery proportion (FDP) of the
Benjamini-Hochberg procedure in the Gaussian equi-correlated model, when the
correlation $\rho_m$ converges to zero as the hypothesis number $m$ grows to
infinity. By contrast with the standard convergence rate $m^{1/2}$ holding
under independence, this study shows that the FDP converges to the false
discovery rate (FDR) at rate $\{\min(m,1/\rho_m)\}^{1/2}$ in this
equi-correlated model.
|
We review the observed demographics and inferred evolution of supermassive
black holes (BHs) found by dynamical modeling of spatially resolved kinematics.
Most influential was the discovery of a tight correlation between BH mass and
the velocity dispersion of the host-galaxy bulge. It and other correlations led
to the belief that BHs and bulges coevolve by regulating each other's growth.
New results are now replacing this simple story with a richer and more
plausible picture in which BHs correlate differently with different galaxy
components. BHs are found in pure-disk galaxies, so classical
(elliptical-galaxy-like) bulges are not necessary to grow BHs. But BHs do not
correlate with galaxy disks. And any correlations with disk-grown pseudobulges
or halo dark matter are so weak as to imply no close coevolution. We suggest
that there are four regimes of BH feedback. 1- Local, stochastic feeding of
small BHs in mainly bulgeless galaxies involves too little energy to result in
coevolution. 2- Global feeding in major, wet galaxy mergers grows giant BHs in
short, quasar-like "AGN" events whose feedback does affect galaxies. This makes
classical bulges and coreless-rotating ellipticals. 3- At the highest BH
masses, maintenance-mode feedback into X-ray gas has the negative effect of
helping to keep baryons locked up in hot gas. This happens in giant,
core-nonrotating ellipticals. They inherit coevolution magic from smaller
progenitors. 4- Independent of any feedback physics, the averaging that results
from successive mergers helps to engineer tight BH correlations.
|
Monolithic pixel sensors for charged particle detection and imaging
applications have been designed and fabricated using commercially available,
deep-submicron Silicon-On-Insulator (SOI) processes, which insulate a thin
layer of integrated full CMOS electronics from a high-resistivity substrate by
means of a buried oxide. The substrate is contacted from the electronics layer
through vias etched in the buried oxide, allowing pixel implanting and reverse
biasing. This paper summarizes the performances achieved with a first prototype
manufactured in the OKI 0.15 micrometer FD-SOI process, featuring analog and
digital pixels on a 10 micrometer pitch. The design and preliminary results on
the analog section of a second prototype manufactured in the OKI 0.20
micrometer FD-SOI process are briefly discussed.
|
The recent decade has seen an enormous rise in the popularity of deep
learning and neural networks. These algorithms have broken many previous
records and achieved remarkable results. Their outstanding performance has
significantly sped up the progress of AI, and so far various milestones have
been achieved earlier than expected. However, in the case of relatively small
datasets, the performance of Deep Neural Networks (DNN) may suffer from reduced
accuracy compared to other Machine Learning models. Furthermore, it is
difficult to construct prediction intervals or evaluate the uncertainty of
predictions when dealing with regression tasks. In this paper, we propose an
ensemble method that attempts to estimate the uncertainty of predictions,
increase their accuracy and provide an interval for the expected variation.
Compared with traditional DNNs that only provide a prediction, our proposed
method can output a prediction interval by combining DNNs, extreme gradient
boosting (XGBoost), and dissimilarity computation techniques. Despite its
simple design, this approach significantly increases accuracy on small datasets
and does not introduce much complexity to the architecture of the neural
network.
The proposed method is tested on various datasets, and a significant
improvement in the performance of the neural network model is seen. The model's
prediction interval can include the ground truth value at an average rate of
71% and 78% across training sizes of 90% and 55%, respectively. Finally, we
highlight other aspects and applications of the approach in experimental error
estimation and in transfer learning.
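
The exact combination used in the paper is more elaborate, but the flavor of an ensemble-based interval can be sketched as follows (our illustration: GradientBoostingRegressor stands in for XGBoost, and the two-sigma band is a heuristic, not the paper's dissimilarity-based interval):

```python
# Hedged sketch: combine a neural network and gradient boosting, and use the
# spread of their predictions to form a crude prediction interval.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for XGBoost

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

models = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=i)
          for i in range(3)] + [GradientBoostingRegressor(random_state=0)]
preds = np.stack([m.fit(X, y).predict(X) for m in models])

center, spread = preds.mean(axis=0), preds.std(axis=0)
lower, upper = center - 2 * spread, center + 2 * spread   # heuristic interval
print("coverage on training data:", np.mean((y >= lower) & (y <= upper)))
```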
|
While many zeros of the Riemann zeta function are located on the critical
line $\Re(s)=1/2$, the non-existence of zeros in the remaining part of the
critical strip $\Re(s) \in \, ]0, 1[$ is the main statement to be proven for
the Riemann hypothesis. The Riemann zeta functional equation leads to a
relation between the zeros on either side of the critical line. Given a complex
number $s$ and its complex conjugate $\bar{s}$, if $s$ is a zero of the Riemann
zeta function in the critical strip $\Re(s) \in \, ]0, 1[$, then $\zeta(s) =
\zeta(1-\bar{s})$; this is a key proposition towards proving the Riemann
hypothesis.
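
For the reader's convenience, the stated relation follows from two standard facts (our addition, not part of the abstract):

```latex
% (i) Schwarz reflection: $\overline{\zeta(s)} = \zeta(\bar{s})$, so zeros of
%     $\zeta$ come in conjugate pairs;
% (ii) the functional equation
\[
  \zeta(s) = 2^{s}\,\pi^{s-1}\,\sin\!\Bigl(\frac{\pi s}{2}\Bigr)\,
             \Gamma(1-s)\,\zeta(1-s),
\]
% whose prefactor has no zeros in the open strip $\Re(s)\in\,]0,1[$, so
% $\zeta(s)=0$ there forces $\zeta(1-s)=0$. Combining (i) and (ii): if
% $\zeta(s)=0$ in the strip, then $\zeta(1-\bar{s}) = 0 = \zeta(s)$.
```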
|
Let K be a p-adic field. We explore Igusa's p-adic zeta function, which is
associated to a K-analytic function on an open and compact subset of K^n. First
we deduce a formula for an important coefficient in the Laurent series of this
meromorphic function at a candidate pole. Afterwards we use this formula to
determine all values less than -1/2 for n=2 and less than -1 for n=3 which
occur as the real part of a pole.
|
We present a specialized network simplex algorithm for the budget-constrained
minimum cost flow problem, which is an extension of the traditional minimum
cost flow problem by a second kind of costs associated with each edge, whose
total value in a feasible flow is constrained by a given budget B. We present a
fully combinatorial description of the algorithm that is based on a novel
incorporation of two kinds of integral node potentials and three kinds of
reduced costs. We prove optimality criteria and combine two methods that are
commonly used to avoid cycling in traditional network simplex algorithms into
new techniques that are applicable to our problem. With these techniques and
our definition of the reduced costs, we are able to prove a pseudo-polynomial
running time of the overall procedure, which can be further improved by
incorporating Dantzig's pivoting rule. Moreover, we present computational
results that compare our procedure with Gurobi.
|
We present a theoretical study of the formation of self-trapped excitons
(STEs) and the associated broadband emission in metal-halide perovskites
Cs$_4$SnBr$_6$ and Cs$_2$AgInCl$_6$, using time-dependent density functional
theory (TDDFT) with the dielectric-dependent hybrid (DDH) functional. Our
approach allows for an accurate description of the excitonic effect and
geometry relaxation in the electronic excited states and yields optical gap,
STE emission energy, and emission spectra in reasonable agreement with
experiments. We point out the significance of considering geometry relaxations
in the electronic excited state by showing that the exciton-phonon coupling
computed in the ground-state atomic geometry is insufficient to describe the
physical properties of STEs. Overall, we find that TDDFT with the DDH hybrid
functional is a suitable approach for the study of the formation of STEs in
perovskite and provides insights for designing metal-halide perovskites with
tailored emission properties.
|
We consider the matrix representation of the Eisenstein numbers and, in this
context, we discuss the theory of the pseudo-hyperbolic functions. We develop
a geometrical interpretation and show the usefulness of the method in physical
problems related to the anomalous scattering of light by crystals.
|
We propose a multi-agent system that enables groups of agents to collaborate
and work autonomously to execute tasks. Groups can work in a decentralized
manner and can adapt to dynamic changes in the environment. Groups of agents
solve assigned tasks by exploring the solution space cooperatively based on the
highest reward first. The tasks have a dependency structure associated with
them. We rigorously evaluated the performance of the system and the individual
group performance using centralized and decentralized control approaches for
task distribution. Based on the results, the centralized approach is more
efficient for a less-dependent task structure $G_{18}$ (a well-known program
graph that contains $18$ nodes with few links), while the decentralized
approach performs better for a highly-dependent task structure $G_{40}$ (a
program graph that contains $40$ highly interlinked nodes).
task allocation to groups that do not have interdependence. Our findings reveal
that there was significantly less difference in the number of tasks allocated
to each group in a less-dependent system than in a highly-dependent one. The
experimental results showed that a large number of small-size cooperative
groups of agents unequivocally improved the system's performance compared to a
small number of large-size cooperative groups of agents. Therefore, it is
essential to identify the optimal group size for a system to enhance its
performance.
|
This article reviews the physics and technology of producing large quantities
of highly spin-polarized, or hyperpolarized, $^3$He nuclei using spin-exchange
(SEOP) and metastability-exchange (MEOP) optical pumping, and surveys
applications of polarized $^3$He. Several recent developments are emphasized
for each method. For SEOP, the use of spectrally narrowed lasers and Rb/K
mixtures has substantially increased the achievable polarization and polarizing
rate. MEOP in high magnetic fields has likewise significantly increased the
pressure at which this method can be performed, and has led to the observation
of a light-induced relaxation mechanism. In both methods the increased
capabilities have led to more extensive study and modeling of the basic
underlying physics. New unexplained dependences of relaxation on temperature
and magnetic field have been discovered in SEOP cells. Applications of both
methods are also reviewed, including targets for charged particle and photon
beams, neutron spin filters, magnetic resonance imaging, and precision
measurements.
|
Raman spectroscopy is a promising technique used for noninvasive analysis of
samples in various fields of application due to its ability for fingerprint
probing of samples at the molecular level. Chemometrics methods are widely used
nowadays for better understanding of the recorded spectral fingerprints of
samples and differences in their chemical composition. This review considers a
number of manuscripts published in the Spectrochimica Acta, Part A: Molecular
and Biomolecular Spectroscopy Journal that presented findings regarding the
application of Raman spectroscopy in combination with chemometrics to study
samples and their changes caused by different factors. In 57 reviewed
manuscripts, we analyzed the application of chemometrics algorithms,
statistical modeling parameters, the utilization of cross-validation, sample
sizes, as well as the performance of the proposed classification and
regression models. We
summarized the best strategies for creating classification models and
highlighted some common drawbacks when it comes to the application of
chemometrics techniques. According to our estimations, about 70% of the papers
are likely to contain unsupported or invalid data due to insufficient
description of the utilized methods or drawbacks of the proposed classification
models. These drawbacks include: (1) insufficient experimental sample size for
classification/regression to achieve significant and reliable results; (2)
lack of cross-validation (or a test set) for verification of the
classifier/regression performance; (3) incorrect division of the spectral data
into the training and test/validation sets; (4) improper selection of the
number of PCs used to reduce the dimension of the analyzed spectral data.
|
Multi-Object Tracking (MOT) has gained extensive attention in recent years
due to its potential applications in traffic and pedestrian detection. We note
that tracking by detection may suffer from errors generated by noisy
detectors, such as an imprecise bounding box before an occlusion, and we
observe that in most tracking scenarios objects tend to move and get lost
within specific regions. To counter this, we present a novel tracker to deal
with bad detections and occlusions. First, we propose a location-wise
sub-region recognition method that equally divides the frame into sub-regions,
which we call a mesh. We then propose corresponding location-wise loss
management strategies and different matching strategies. Ablation studies
demonstrate the effectiveness of the resulting Mesh-SORT, which reduces
fragmentation by 3% and ID switches by 7.2%, and improves MOTA by 0.4%
compared to the baseline on the MOT17 dataset. Finally, we analyze its
limitations in specific scenes and discuss directions for future work.
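
A minimal sketch of the mesh idea (our illustration, not the paper's code): divide the frame into equal sub-regions so that losses and matching can be handled per region.

```python
# Map a detection center (cx, cy) to its mesh cell in an equally divided frame.
def mesh_cell(cx, cy, frame_w, frame_h, rows=3, cols=3):
    """Return the (row, col) mesh cell containing a detection center."""
    col = min(int(cx / frame_w * cols), cols - 1)
    row = min(int(cy / frame_h * rows), rows - 1)
    return row, col

print(mesh_cell(950.0, 80.0, 1920, 1080))   # -> (0, 1): top band, middle column
```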
|
We study containment and uniqueness problems concerning matrix convex sets.
First, to what extent is a matrix convex set determined by its first level? Our
results in this direction quantify the disparity between two product
operations, namely the product of the smallest matrix convex sets over $K_i
\subseteq \mathbb{C}^d$, and the smallest matrix convex set over the product of
$K_i$. Second, if a matrix convex set is given as the matrix range of an
operator tuple $T$, when is $T$ determined uniquely? We provide counterexamples
to results in the literature, showing that a compact tuple meeting a minimality
condition need not be determined uniquely, even if its matrix range is a
particularly friendly set. Finally, our results may be used to improve dilation
scales, such as the norm bound on the dilation of (non self-adjoint)
contractions to commuting normal operators, both concretely and abstractly.
|
We show that the homotopy category of commutative algebra spectra over the
Eilenberg-Mac Lane spectrum of the integers is equivalent to the homotopy
category of E-infinity-monoids in unbounded chain complexes. We do this by
establishing a chain of Quillen equivalences between the corresponding model
categories. We also provide a Quillen equivalence to commutative monoids in the
category of functors from the category of finite sets and injections to
unbounded chain complexes.
|
Direct reaction techniques are powerful tools to study the single-particle
nature of nuclei. Performing direct reactions on short-lived nuclei requires
radioactive ion beams produced either via fragmentation or the Isotope
Separation OnLine (ISOL) method. Some of the most interesting regions to study
with direct reactions are close to the magic numbers where changes in shell
structure can be tracked. These changes can impact the final abundances of
explosive nucleosynthesis. The structure of the chain of tin isotopes is
strongly influenced by the Z=50 proton shell closure, as well as the neutron
shell closures lying in the neutron-rich, N=82, and neutron-deficient, N=50,
regions. Here we present two examples of direct reactions on exotic tin
isotopes. The first uses a one-neutron transfer reaction and a low-energy
reaccelerated ISOL beam to study states in 131Sn from across the N=82 shell
closure. The second example utilizes a one-neutron knockout reaction on
fragmentation beams of neutron-deficient 106,108Sn. In both cases, measurements
of gamma rays in coincidence with charged particles proved to be invaluable.
|
This paper presents a quantum algorithm for the solution of prototypical
second-order linear elliptic partial differential equations discretized by
$d$-linear finite elements on Cartesian grids of a bounded $d$-dimensional
domain. An essential step in the construction is a BPX preconditioner, which
transforms the linear system into a sufficiently well-conditioned one, making
it amenable to quantum computation. We provide a constructive proof
demonstrating that our quantum algorithm can compute suitable functionals of
the solution to a given tolerance $\texttt{tol}$ with a complexity linear in
$\texttt{tol}^{-1}$ for a fixed dimension $d$, neglecting logarithmic terms.
This complexity is proportional to that of its one-dimensional counterpart and
improves previous quantum algorithms by a factor of order $\texttt{tol}^{-2}$.
We also detail the design and implementation of a quantum circuit capable of
executing our algorithm, and present simulator results that support the quantum
feasibility of the finite element method in the near future, paving the way for
quantum computing approaches to a wide range of PDE-related challenges.
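As a classical toy illustration of the preconditioning step (a one-dimensional
sketch under our own simplifying assumptions, using the multilevel
diagonal-scaling form C = sum_l P_l diag(A_l)^{-1} P_l^T, which is spectrally
equivalent to BPX; the paper's d-dimensional construction and its quantum
implementation are far more involved):

import numpy as np

def stiffness(n):
    # 1D Laplace stiffness matrix for linear elements on n interior nodes, h = 1/(n+1).
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h

def prolongation(n_coarse):
    # Linear interpolation from n_coarse interior nodes to 2*n_coarse+1.
    P = np.zeros((2 * n_coarse + 1, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = 0.5      # fine node left of coarse node j
        P[2 * j + 1, j] = 1.0  # fine node coinciding with coarse node j
        P[2 * j + 2, j] = 0.5  # fine node right of coarse node j
    return P

levels = 6
n = 2 ** levels - 1
A = stiffness(n)

C = np.diag(1.0 / np.diag(A))        # finest-level diagonal contribution
P_to_fine = np.eye(n)
n_l = n
for _ in range(levels - 1):
    n_l = (n_l - 1) // 2             # coarsen by one level
    P_to_fine = P_to_fine @ prolongation(n_l)
    C += P_to_fine @ np.diag(1.0 / np.diag(stiffness(n_l))) @ P_to_fine.T

ev_A = np.linalg.eigvalsh(A)
ev_CA = np.real(np.linalg.eigvals(C @ A))   # C A is similar to C^{1/2} A C^{1/2}
print("kappa(A)  =", ev_A.max() / ev_A.min())    # grows like h^{-2}
print("kappa(CA) =", ev_CA.max() / ev_CA.min())  # stays small as levels grow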
|
Instruction tuning significantly enhances the performance of large language
models (LLMs) across various tasks. However, the procedure for optimizing the
mixture of instruction datasets for LLM fine-tuning is still poorly understood.
This study categorizes instructions into three primary types: NLP downstream
tasks, coding, and general chat. We explore the effects of instruction tuning
with different combinations of these datasets on LLM performance, and find that certain
instruction types are more advantageous for specific applications but can
negatively impact other areas. This work provides insights into instruction
mixtures, laying the foundations for future research.
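A minimal sketch of how such a mixture might be sampled for fine-tuning (the
mixing weights and dataset contents below are illustrative assumptions, not
ratios reported by the study):

import random

# Hypothetical pools of instruction examples, one per category from the study.
datasets = {
    "nlp_tasks": [f"nlp-{i}" for i in range(1000)],
    "coding":    [f"code-{i}" for i in range(1000)],
    "chat":      [f"chat-{i}" for i in range(1000)],
}
# Illustrative mixing weights; the study's point is that the best weights
# depend on the target application.
weights = {"nlp_tasks": 0.5, "coding": 0.2, "chat": 0.3}

def sample_mixture(n, rng=random.Random(0)):
    names = list(datasets)
    probs = [weights[k] for k in names]
    return [rng.choice(datasets[rng.choices(names, probs)[0]]) for _ in range(n)]

print(sample_mixture(8))   # one mixed mini-batch of instruction examples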
|
The experimental study of the modulation of the envelope of spin-echo signals
due to internal and external fields is an important spectroscopic tool to
detect very small internal magnetic fields. We derive the free induction decay
and the frequency spectrum and amplitude of spin-echo signals for arbitrary
orientation of the fields with respect to the crystalline axes for nuclei in a
crystal of orthorhombic symmetry. Our results reproduce the known result that no modulation
should be observed in tetragonal crystals for fields either along the c-axis or
any direction in the basal plane and give details of the signal as a function
of the orthorhombicity parameter. They are used to discuss recent experimental
results and provide guidelines for future experiments.
|
In this paper we characterize generalized quasi-arithmetic means, that is,
means of the form $M(x_1,...,x_n):=(f_1+...+f_n)^{-1}(f_1(x_1)+...+f_n(x_n))$,
where $f_1,...,f_n:I\to\mathbb{R}$ are strictly increasing and continuous
functions. Our characterization involves the Gauss composition of the cyclic
mean-type mapping induced by $M$ and a generalized bisymmetry equation.
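A hedged numerical sketch of evaluating such a mean (the choice of generators
$f_i$ is our own illustrative assumption; any strictly increasing continuous
functions would do): since each $f_i$ is strictly increasing, so is
$f_1+\dots+f_n$, and the inverse can be computed by root-finding.

import numpy as np
from scipy.optimize import brentq

# Illustrative strictly increasing generators (an assumption for the demo).
fs = [np.log, lambda t: t, lambda t: t**3]

def generalized_mean(xs, fs, lo=1e-9, hi=1e6):
    """M(x_1,...,x_n) = (f_1+...+f_n)^{-1}(f_1(x_1)+...+f_n(x_n))."""
    target = sum(f(x) for f, x in zip(fs, xs))
    g = lambda t: sum(f(t) for f in fs) - target
    return brentq(g, lo, hi)   # root of the strictly increasing function g

print(generalized_mean([1.0, 2.0, 3.0], fs))   # lies between min and max of xs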
|
We consider a formulation of the algebraic Bethe ansatz for the six vertex
model with non-diagonal open boundaries. Specifically, we study the case where
both left and right $K$-matrices have an upper triangular form. We show that
the main difficulty entailed by this form of the $K$-matrices is the
construction of the excited states. However, it is possible to treat this
problem with aid of an auxiliary transfer matrix and by means of a generalized
creation operator.
|
Lawn area measurement is an application of image processing and deep
learning. Researchers have used hierarchical networks, segmented images, and
many other methods to measure lawn area, with varying effectiveness and
accuracy. In this project, image processing and deep learning methods have been
compared to find the best way to measure lawn area. Three image processing
methods using OpenCV have been compared to a convolutional neural network
(CNN), one of the most popular and effective deep learning methods; we used
Keras and TensorFlow to estimate the lawn area. The CNN shows very high
accuracy (94-97%) for this purpose. Among the image processing methods,
thresholding (80-87% accuracy) and edge detection are effective ways to measure
the lawn area, but contouring (26-31% accuracy) does not calculate the lawn
area successfully. We conclude that deep learning methods, especially CNNs, may
be the most effective approach compared to image processing and other deep
learning techniques.
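A hedged sketch of the thresholding approach (the HSV green range and the
synthetic test image are our own illustrative assumptions, not the project's
exact settings):

import cv2
import numpy as np

# Synthetic test image: left half "lawn" green, right half gray pavement.
img = np.zeros((100, 200, 3), dtype=np.uint8)
img[:, :100] = (40, 160, 60)    # BGR green-ish
img[:, 100:] = (128, 128, 128)  # gray

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Illustrative hue/saturation/value range for grass-like green.
mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

lawn_fraction = np.count_nonzero(mask) / mask.size
print(f"estimated lawn area: {lawn_fraction:.0%} of the frame")  # ~50%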
|
In this paper, we solve the massive scalar field in the Reissner-Nordstr\"om
spacetime. The scalar field in the Reissner-Nordstr\"om spacetime has both
bound states and scattering states. For bound states, we solve the bound-state
wave function and the eigenvalue spectrum. For scattering states, we solve the
scattering wave function and give an explicit expression for scattering phase
shift by the integral equation method. In particular, we introduce the tortoise
coordinate for the Reissner-Nordstr\"om spacetime.
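For reference (a standard textbook relation, stated here in our own notation
rather than taken from the paper), with metric function $f(r)=1-2M/r+Q^2/r^2$
and horizons $r_\pm = M \pm \sqrt{M^2-Q^2}$, the tortoise coordinate satisfies
$dr_*/dr = 1/f(r)$ and integrates to
\[
r_* = r + \frac{r_+^2}{r_+-r_-}\ln\left|\frac{r-r_+}{r_+}\right|
        - \frac{r_-^2}{r_+-r_-}\ln\left|\frac{r-r_-}{r_-}\right| + \mathrm{const}.
\]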
|
Given a probability space $(X, {\cal B}, m)$, measure preserving
transformations $g_1, \dots , g_k$ of $X$, and a colour set $C$, a colouring
rule is a way to colour the space with $C$ such that the colours allowed for a
point $x$ are determined by that point's location and the colours of the
finitely many points $g_1 (x), \dots , g_k(x)$, with $g_i(x) \not= x$ for all $i$ and almost
all $x$. We represent a colouring rule as a correspondence $F$ defined on
$X\times C^k$ with values in $C$. A function $f: X\rightarrow C$ satisfies the
rule at $x$ if $f(x) \in F( x, f(g_1 x), \dots , f(g_k x))$. A colouring rule
is paradoxical if it can be satisfied in some way almost everywhere with
respect to $m$, but not in {\bf any} way that is measurable with respect to a
finitely additive measure that extends the probability measure $m$ and for
which the finitely many transformations $g_1, \dots , g_k$ remain measure
preserving. We show that a colouring rule can be paradoxical when the $g_1,
\dots, g_k$ are members of a group $G$, the probability space $X$ and the
colour set $C$ are compact sets, $C$ is convex and finite dimensional, and the
colouring rule says if $c: X\rightarrow C$ is the colouring function then the
colour $c(x)$ must lie ($m$ a.e.) in $F(x, c(g_1(x) ), \dots , c(g_k(x)))$ for
a non-empty upper-semi-continuous convex-valued correspondence $F$ defined on
$X\times C^k$. We show that any colouring that approximates the correspondence
by $\epsilon$ for small enough positive $\epsilon$ cannot be measurable in the
same finitely additive way. Furthermore any function satisfying the colouring
rule illustrates a paradox through finitely many measure preserving shifts
defining injective maps from the whole space to subsets of measure summing up
to less than one.
|
Percolation is an emblematic model to assess the robustness of interconnected
systems when some of their components are corrupted. It is usually investigated
in simple scenarios, such as the removal of the system's units in random order,
or sequentially ordered by specific topological descriptors. However, in the
vast majority of empirical applications, it is required to dismantle the
network following more sophisticated protocols, for instance, by combining
topological properties and non-topological node metadata. We propose a novel
mathematical framework to fill this gap: networks are enriched with features
and their nodes are removed according to their importance in the feature space.
We consider features of different nature, from ones related to the network
construction to ones related to dynamical processes such as epidemic spreading.
Our framework not only provides a natural generalization of percolation but,
more importantly, offers an accurate way to test the robustness of networks in
realistic scenarios.
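A hedged sketch of feature-enriched percolation (the graph, the feature, and
the removal order below are illustrative assumptions): nodes are removed in
decreasing order of a feature score while the relative size of the largest
connected component is tracked.

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(500, 0.02, seed=0)
# Illustrative node feature: a mix of a topological descriptor (degree)
# and non-topological metadata (a random score standing in for e.g. viral load).
feature = {v: 0.5 * G.degree(v) + 0.5 * rng.random() for v in G}

order = sorted(G, key=feature.get, reverse=True)   # most "important" first
H = G.copy()
giant = []
for v in order:
    H.remove_node(v)
    size = max((len(c) for c in nx.connected_components(H)), default=0)
    giant.append(size / G.number_of_nodes())

print(giant[:10])   # giant-component fraction after each removal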
|
Automatic eye gaze estimation has interested researchers for a while now. In
this paper, we propose an unsupervised learning based method for estimating the
eye gaze region. To train the proposed network, "Ize-Net", in a self-supervised
manner, we collect a large `in the wild' dataset containing 154,251 images
from the web. For the images in the database, we divide the gaze into three
regions using an automatic technique based on pupil-center localization,
and then use a feature-based technique to determine the gaze region. The
performance is evaluated on the Tablet Gaze and CAVE datasets by fine-tuning
results of Ize-Net for the task of eye gaze estimation. The feature
representation learned is also used to train traditional machine learning
algorithms for eye gaze estimation. The results demonstrate that the proposed
method learns a rich data representation, which can be efficiently fine-tuned
for any eye gaze estimation dataset.
|
Suffix trees are fundamental data structures for various kinds of string
processing. The suffix tree of a text string $T$ of length $n$ has $O(n)$ nodes
and edges, and the string label of each edge is encoded by a pair of positions
in $T$. Thus, even after the tree is built, the input string $T$ needs to be
kept stored and random access to $T$ is still needed. The \emph{linear-size
suffix tries} (\emph{LSTs}), proposed by Crochemore et al. [Linear-size suffix
tries, TCS 638:171-178, 2016], are a "stand-alone" alternative to the suffix
trees. Namely, the LST of an input text string $T$ of length $n$ occupies
$O(n)$ total space, and supports pattern matching and other tasks with the same
efficiency as the suffix tree without the need to store the input text string
$T$. Crochemore et al. proposed an \emph{offline} algorithm which transforms
the suffix tree of $T$ into the LST of $T$ in $O(n \log \sigma)$ time and
$O(n)$ space, where $\sigma$ is the alphabet size. In this paper, we present
two types of \emph{online} algorithms which "directly" construct the LST, from
right to left, and from left to right, without constructing the suffix tree as
an intermediate structure. Both algorithms construct the LST incrementally when
a new symbol is read, and do not access the previously read symbols. Both the
right-to-left and the left-to-right construction algorithms work in $O(n \log
\sigma)$ time and $O(n)$ space. The main feature of
our algorithms is that the input text string does not need to be stored.
|
Reliably detecting anomalies in a given set of images is a task of high
practical relevance for visual quality inspection, surveillance, or medical
image analysis. Autoencoder neural networks learn to reconstruct normal images,
and hence can classify as anomalies those images whose reconstruction error
exceeds some threshold. Here we analyze a fundamental problem of this
approach when the training set is contaminated with a small fraction of
outliers. We find that continued training of autoencoders inevitably reduces
the reconstruction error of outliers, and hence degrades the anomaly detection
performance. In order to counteract this effect, an adversarial autoencoder
architecture is adapted, which imposes a prior distribution on the latent
representation, typically placing anomalies into low likelihood-regions.
Utilizing the likelihood model, potential anomalies can be identified and
rejected already during training, which results in an anomaly detector that is
significantly more robust to the presence of outliers during training.
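A hedged sketch of the reconstruction-error criterion the approach builds on
(using PCA as a stand-in linear autoencoder; the data and the threshold rule
are illustrative assumptions, not the paper's architecture):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 50))  # low-rank "normal" data
outliers = rng.normal(scale=4.0, size=(10, 50))                  # contaminating outliers

model = PCA(n_components=20).fit(normal)                         # train on normal data

def recon_error(X):
    return np.linalg.norm(X - model.inverse_transform(model.transform(X)), axis=1)

threshold = np.quantile(recon_error(normal), 0.99)
print("flagged outliers:", int((recon_error(outliers) > threshold).sum()), "of 10")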
|
In the present paper, we give harmonic weight enumerators and Jacobi
polynomials for the first-order Reed--Muller codes and the extended Hamming
codes. As a corollary, we show the nonexistence of combinatorial $4$-designs in
these codes.
|
We determine the most general scalar field theories which have an action that
depends on derivatives of order two or less, and have equations of motion that
stay second order and lower on flat space-time. We show that those theories can
all be obtained from linear combinations of Lagrangians made by multiplying a
particular form of the Galileon Lagrangian by an arbitrary scalar function of
the scalar field and its first derivatives. We also obtain curved space-time
extensions of those theories which have second order field equations for both
the metric and the scalar field. This provides the most general extension, under
the condition that field equations stay second order, of k-essence, Galileons,
k-Mouflage as well as of the kinetically braided scalars. It also gives the
most general action for a scalar classicalizer, which has second order field
equations. We discuss the relation between our construction and the Euler
hierarchies of Fairlie et al., showing in particular that Euler hierarchies allow
one to obtain the most general theory when the latter is shift symmetric. As a
simple application of our formalism, we give the covariantized version of the
conformal Galileon.
|
Traditional link adaptation (LA) schemes in cellular networks must be revised
for networks beyond the fifth generation (b5G), to guarantee the strict latency
and reliability requirements advocated by ultra reliable low latency
communications (URLLC). In particular, a poor error rate prediction potentially
increases retransmissions, which in turn increase latency and reduce
reliability. In this paper, we present an interference prediction method to
enhance LA for URLLC. To develop our prediction method, we propose a kernel
based probability density estimation algorithm, and provide an in depth
analysis of its statistical performance. We also provide a low-complexity
version, suitable for practical scenarios. The proposed scheme is compared with
state-of-the-art LA solutions over fully compliant 3rd generation partnership
project (3GPP) calibrated channels, showing the validity of our proposal.
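A hedged sketch of the kind of kernel-based density estimation the method
relies on (the Gaussian kernel, the synthetic interference samples, and the
critical level are our own illustrative assumptions):

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic past interference-power samples in dB (stand-in for measurements).
samples = np.concatenate([rng.normal(-100, 3, 400), rng.normal(-90, 2, 100)])

kde = gaussian_kde(samples)                      # kernel density estimate
grid = np.linspace(-115, -80, 500)
pdf = kde(grid)                                  # estimated density on a grid

# Predict the probability that interference exceeds a critical level,
# which feeds the error-rate prediction used by link adaptation.
critical_dB = -92.0
p_exceed = kde.integrate_box_1d(critical_dB, np.inf)
print(f"P(interference > {critical_dB} dB) ~ {p_exceed:.3f}")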
|
We develop a Born-Oppenheimer type formalism for the description of quantum
thermal transport along hybrid nanoscale objects. Our formalism is suitable for
treating heat transfer in the off-resonant regime, where e.g., the relevant
vibrational modes of the interlocated molecule are high relative to typical
bath frequencies, and at low temperatures when tunneling effects dominate. A
general expression for the thermal energy current is obtained, in the form
of a generalized Landauer formula. In the harmonic limit this expression
reduces to the standard Landauer result for heat transfer, while in the
presence of nonlinearities multiphonon tunneling effects are realized.
|
In this paper we investigate the photoproduction of QED bound states in
proton -- proton, proton -- nucleus and nucleus -- nucleus collisions at RHIC,
LHC and FCC energies, considering an accurate treatment of the absorptive
corrections and of the nuclear form factor. The total cross sections for the
production of singlet and triplet $(l^+ l^-)$ states, with $l = e,\, \mu, \,
\tau$, are estimated and a detailed analysis of the photoproduction of singlet
QED bound states is performed considering the rapidity ranges covered by
central and forward detectors. The impact of the Coulomb corrections on the
parapositronium production in heavy ion collisions is estimated. We predict a
large number of events associated to the production of ($e^+ e^-$) and ($\mu^+
\mu^-$) states in hadronic collisions.
|
Inspired by Labov's seminal work on stylistic variation as a function of
social stratification, we develop and compare neural models that predict a
person's presumed socio-economic status, obtained through distant
supervision, from their writing style on social media. The focus of our work is
on identifying the most important stylistic parameters to predict
socio-economic group. In particular, we show the effectiveness of
morpho-syntactic features as stylistic predictors of socio-economic group, in
contrast to lexical features, which are good predictors of topic.
|
We propose and study a model for the equilibrium statistical mechanics of a
pressurised semiflexible polymer ring. The Hamiltonian has a term which couples
to the algebraic area of the ring and a term which accounts for bending
(semiflexibility). The model allows for self-intersections. Using a combination
of Monte Carlo simulations, Flory-type scaling theory, mean-field
approximations and lattice enumeration techniques, we obtain a phase diagram in
which collapsed and inflated phases are separated by a continuous transition.
The scaling properties of the averaged area as a function of the number of
units of the ring are derived. For large pressures, the asymptotic behaviour of
the area is calculated for both discrete and lattice versions of the model. For
small pressures, the area is obtained through a mapping onto the quantum
mechanical problem of an electron moving in a magnetic field. The simulation
data agree well with the analytic and mean-field results.
|
Many commonly used statistical estimators are derived from optimization
problems. This includes maximum likelihood estimation, empirical risk
minimization, and so on. In many cases, the resulting estimators can be written
as solutions to estimating equations, sometimes referred to as $Z$-estimators.
Asymptotic normality for $Z$-estimators is a well-known result, albeit only when the
dimension of the parameter is asymptotically smaller than the square root of
the sample size. This hinders statistical inference when the dimension is
"large." In this paper, we propose a self-normalization-based confidence set
bypassing the asymptotic normality results. The proposed method is valid in the
full range of dimensions growing smaller than the sample size (ignoring
logarithmic factors) and asymptotically matches the asymptotic normality based
confidence sets when asymptotic normality holds. Our proposal represents the
first such general construction of confidence sets in the full range of
consistency of $Z$-estimators.
|
Motivated by the resurgence of neural networks in being able to solve complex
learning tasks we undertake a study of high depth networks using ReLU gates
which implement the function $x \mapsto \max\{0,x\}$. We try to understand the
role of depth in such neural networks by showing size lower bounds against such
network architectures in parameter regimes hitherto unexplored. In particular
we show the following two main results about neural nets computing Boolean
functions of input dimension $n$,
1. We use the method of random restrictions to show almost linear,
$\Omega(\epsilon^{2(1-\delta)}n^{1-\delta})$, lower bound for completely weight
unrestricted LTF-of-ReLU circuits to match the Andreev function on at least
$\frac{1}{2} +\epsilon$ fraction of the inputs for $\epsilon >
\sqrt{2\frac{\log^{\frac {2}{2-\delta}}(n)}{n}}$ for any $\delta \in (0,\frac 1
2)$
2. We use the method of sign-rank to show exponential in dimension lower
bounds for ReLU circuits ending in an LTF gate and of depths up to $O(n^{\xi})$
with $\xi < \frac{1}{8}$ with some restrictions on the weights in the bottom
most layer. All other weights in these circuits are kept unrestricted. This in
turn also implies the same lower bounds for LTF circuits with the same
architecture and the same weight restrictions on their bottom most layer.
Along the way we also show that there exists a $\mathbb{R}^ n\rightarrow
\mathbb{R}$ Sum-of-ReLU-of-ReLU function which Sum-of-ReLU neural nets can
never represent no matter how large they are allowed to be.
|
In an Internet of Things (IoT) environment (e.g., smart home), several IoT
devices may be available that are interconnected with each other. In such
interconnected environments, a faulty or compromised IoT device could impact
the operation of other IoT devices. In other words, anomalous behavior
exhibited by an IoT device could propagate to other devices in an IoT
environment. In this paper, we argue that mitigating the propagation of the
anomalous behavior exhibited by a device to other devices is equally important
to detecting this behavior in the first place. In line with this observation,
we present a framework, called IoT Anomaly Detector (IoT-AD), that can not only
detect the anomalous behavior of IoT devices, but also limit and recover from
anomalous behavior that might have affected other devices. We implemented a
prototype of IoT-AD, which we evaluated based on open-source IoT device
datasets as well as through real-world deployment on a small-scale IoT testbed
we have built. We have further evaluated IoT-AD in comparison to prior relevant
approaches. Our evaluation results show that IoT-AD can identify anomalous
behavior of IoT devices in less than 2.12 milliseconds and with up to 98%
accuracy.
|
Let $L/K$ be a $G$-Galois extension of fields with an $H$-Hopf Galois
structure of type $N$. We study the ratio $GC(G, N)$, which is the number of
intermediate fields $E$ with $K \subseteq E \subseteq L$ that are in the image
of the Galois correspondence for the $H$-Hopf Galois structure on $L/K$,
divided by the number of intermediate fields. By Galois descent, $L \otimes_K H
= LN$ where $N$ is a $G$-invariant regular subgroup of $\mathrm{Perm}(G)$, and
then $GC(G, N)$ is the number of $G$-invariant subgroups of $N$, divided by the
number of subgroups of $G$. We look at the Galois correspondence ratio for a
Hopf Galois structure by translating the problem into counting certain
subgroups of the corresponding skew brace. We look at skew braces arising from
finite radical algebras $A$ and from Zappa-Sz\'ep products of finite groups,
and in particular when $A^3 = 0$ or the Zappa-Sz\'ep product is a semidirect
product, in which cases the corresponding skew brace is a bi-skew brace, that
is, a set $G$ with two group operations $\circ$ and $\star$ in such a way that
$G$ is a skew brace with either group structure acting as the additive group of
the skew brace. We obtain the Galois correspondence ratio for several examples.
In particular, if $(G, \circ, \star)$ is a bi-skew brace of squarefree order
$2m$ where $(G, \circ) \cong Z_{2m}$ is cyclic and $(G, \star) = D_m$ is
dihedral, then for large $m$, $GC(Z_{2m}, D_m)$ is close to 1/2 while $GC(D_m,
Z_{2m})$ is near 0.
|
The increasing interaction of industrial control systems (ICSs) with public
networks and digital devices introduces new cyber threats to power systems and
other critical infrastructure. Recent cyber-physical attacks such as Stuxnet
and Irongate revealed unexpected ICS vulnerabilities and a need for improved
security measures. Intrusion detection systems constitute a key security
technology, which typically monitors cyber network data for detecting malicious
activities. However, a central characteristic of modern ICSs is the increasing
interdependency of physical and cyber network processes. Thus, the integration
of network and physical process data is seen as a promising approach to improve
predictability in real-time intrusion detection for ICSs by accounting for
physical constraints and underlying process patterns. This work systematically
assesses machine learning-based cyber-physical intrusion detection and
multi-class classification through a comparison to its purely network
data-based counterpart and evaluation of misclassifications and detection
delay. Multiple supervised detection and classification pipelines are applied
on a recent cyber-physical dataset, which describes various cyber attacks and
physical faults on a generic ICS. A key finding is that the integration of
physical process data improves detection and classification of all considered
attack types. In addition, it enables simultaneous processing of attacks and
faults, paving the way for holistic cross-domain root cause identification.
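A hedged sketch of the comparison methodology (the feature names, classifier,
and synthetic data are illustrative assumptions; the work applies multiple
supervised pipelines to a real cyber-physical dataset):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
net = rng.normal(size=(n, 8))      # network-traffic features
phys = rng.normal(size=(n, 4))     # physical process features
# Synthetic labels: 0 normal, 1 cyber attack, 2 physical fault; made to
# depend partly on the physical features so the comparison is visible.
y = (net[:, 0] + 2 * phys[:, 0] > 1).astype(int) + (phys[:, 1] > 1.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_net = cross_val_score(clf, net, y, cv=5).mean()
acc_all = cross_val_score(clf, np.hstack([net, phys]), y, cv=5).mean()
print(f"network-only accuracy: {acc_net:.3f}, cyber-physical accuracy: {acc_all:.3f}")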
|
Why study astronomy, why teach astronomy? We give answers to these
fundamental questions based on our experience with the Astronomical Camp "Beli
Brezi" (White Aspens; Kardzhali, Bulgaria). It has been a place for teaching
astronomy to high-school kids for nearly half a century. We briefly describe
the history of the camp and draw some conclusions based on nearly five decades
of experience. Chief among them is that the camp has gone further than just
distributing astronomical knowledge - while this is an important and worthy
task, the main achievement has been the cultivation of critical thinking among
the pupils, and we think that this is the main reason to give positively
reassuring answers to the questions we asked at the beginning.
|
The entanglement of graph states of up to eight qubits is calculated by
iterative methods. The entanglement measures considered are the relative
entropy of entanglement, the logarithmic robustness, and the geometric measure.
All 146 locally inequivalent graphs are classified into two categories: graphs
whose upper LOCC entanglement bound coincides with their lower bipartite
entanglement bound, and graphs with unequal bounds. The latter may display
non-integer entanglement. The error of the iterative calculation of the
entanglement is less than $10^{-14}$.
|
The thermal decoupling description of dark matter (DM) and co-annihilating
partners is reconsidered. If DM is realized at around the TeV-mass region or
above, even the heaviest electroweak force carriers could act as long-range
forces, leading to the existence of meta-stable DM bound states. The formation
and subsequent decay of the latter further deplete the relic density during the
freeze-out process on top of the Sommerfeld enhancement, allowing for larger DM
masses. While so far the bound-state formation was described via the emission
of an on-shell mediator ($W^{\pm}$, $Z$, $H$, $g$, photon or exotic), we point
out that this particular process does not have to be the dominant
scattering-bound state conversion channel in general. If the mediator is
coupled in a direct way to any relativistic species present in the Early
Universe, the bound-state formation can efficiently occur through particle
scattering, where a mediator is exchanged virtually. To demonstrate that such a
virtually stimulated conversion process can dominate the on-shell emission at
all temperatures, we analyze a simplified model where DM is coupled to only
one relativistic species in the primordial plasma through an electroweak-scale
mediator. We find that the bound-state formation cross section via particle
scattering can exceed the on-shell emission by up to several orders of
magnitude.
|
Using Totaro-Bloch-Kriz's linear fractional cycles Gangl and Muller-Stach
recently proved the 5-term relations for the dilogarithm in Bloch's higher Chow
group CH^2(F,3) and the Kummer-Spence relations in some group G(F) over an
arbitrary field F where G(F) is isomorphic to CH^3(F,5) up to torsions under
the Beilinson-Soule vanishing conjecture that CH^2(F,n)=0 for n>3. In this
paper we show that Goncharov's 22-term relations for the trilogarithm also hold
in G(F).
|
We present the discovery of the X-ray and optical afterglows of the
short-duration GRB 150101B, pinpointing the event to an early-type host galaxy
at z=0.1343 +/- 0.0030. This makes GRB 150101B the most nearby short GRB with
an early-type host galaxy discovered to date. Fitting the spectral energy
distribution of the host galaxy results in an inferred stellar mass of ~7x10^10
M_sol, stellar population age of ~2-2.5 Gyr, and star formation rate of <0.4
M_sol yr^-1. The host of GRB 150101B is one of the largest and most luminous
short GRB host galaxies, with a B-band luminosity of ~4.3L* and half-light
radius of ~8 kpc. GRB 150101B is located at a projected distance of 7.35 +/-
0.07 kpc from its host center, and lies on a faint region of its host
rest-frame optical light. Its location, combined with the lack of associated
supernova, is consistent with a NS-NS/NS-BH merger progenitor. From modeling
the evolution of the broad-band afterglow, we calculate isotropic-equivalent
gamma-ray and kinetic energies of ~1.3x10^49 erg and ~(6-14)x10^51 erg,
respectively, a circumburst density of ~(0.8-4)x10^-5 cm^-3, and a jet opening
angle of >9 deg. Using observations extending to ~30 days, we place upper
limits of <(2-4)x10^41 erg s^-1 on associated kilonova emission. We compare
searches following previous short GRBs to existing kilonova models, and
demonstrate the difficulty of performing effective kilonova searches from
cosmological short GRBs using current ground-based facilities. We show that at
the Advanced LIGO/VIRGO horizon distance of 200 Mpc, searches reaching depths
of ~23-24 AB mag are necessary to probe a meaningful range of kilonova models.
|
Multi-modal MR imaging is routinely used in clinical practice to diagnose and
investigate brain tumors by providing rich complementary information. Previous
multi-modal MRI segmentation methods usually perform modal fusion by
concatenating multi-modal MRIs at an early/middle stage of the network, which
hardly explores non-linear dependencies between modalities. In this work, we
propose a novel Nested Modality-Aware Transformer (NestedFormer) to explicitly
explore the intra-modality and inter-modality relationships of multi-modal MRIs
for brain tumor segmentation. Built on the transformer-based multi-encoder and
single-decoder structure, we perform nested multi-modal fusion for high-level
representations of different modalities and apply modality-sensitive gating
(MSG) at lower scales for more effective skip connections. Specifically, the
multi-modal fusion is conducted in our proposed Nested Modality-aware Feature
Aggregation (NMaFA) module, which enhances long-term dependencies within
individual modalities via a tri-orientated spatial-attention transformer, and
further complements key contextual information among modalities via a
cross-modality attention transformer. Extensive experiments on BraTS2020
benchmark and a private meningiomas segmentation (MeniSeg) dataset show that
the NestedFormer clearly outperforms state-of-the-art methods. The code is
available at https://github.com/920232796/NestedFormer.
|
Quantum dense coding is one of the most important protocols in quantum
communication. It derives from the idea of using quantum resources to boost the
communication capacity and now serves as a key primitive across a variety of
quantum information protocols. Here, we focus on the basic theoretical ideas
behind quantum dense coding, discussing its development history from discrete
and continuous variables to quantum networks, then to its variant protocols and
applications in quantum secure communication. With this basic background in
hand, we then review the main experimental achievements, from photonic qubits
and qudits to optical modes, nuclear magnetic resonance, and atomic systems.
Besides the state of the art, we finally discuss potential future steps.
|
Modern applications of atomic physics, including the determination of
frequency standards, and the analysis of astrophysical spectra, require
prediction of atomic properties with exquisite accuracy. For complex atomic
systems, high-precision calculations are a major challenge due to the
exponential scaling of the involved electronic configuration sets. This
exacerbates the problem of required computational resources for these
computations, and makes indispensable the development of approaches to select
the most important configurations out of otherwise intractably huge sets. We
have developed a neural network (NN) tool for running high-precision atomic
configuration interaction (CI) computations with iterative selection of the
most important configurations. Integrated with the established pCI atomic
codes, our approach results in computations with significantly reduced
computational requirements in comparison with those without NN support. We
showcase a number of NN-supported computations for the energy levels of
Fe$^{16+}$ and Ni$^{12+}$, and demonstrate that our approach can be reliably
used and automated for solving specific computational problems for a wide
variety of systems.
|
Motives or goals are recognized in psychology literature as the most
fundamental drive that explains and predicts why people do what they do,
including when they browse the web. Although providing enormous value, these
higher-ordered goals are often unobserved, and little is known about how to
leverage such goals to assist people's browsing activities. This paper proposes
to take a new approach to address this problem, which is fulfilled through a
novel neural framework, Goal-directed Web Browsing (GoWeB). We adopt a
psychologically-sound taxonomy of higher-ordered goals and learn to build their
representations in a structure-preserving manner. Then we incorporate the
resulting representations for enhancing the experiences of common activities
people perform on the web. Experiments on large-scale data from Microsoft Edge
web browser show that GoWeB significantly outperforms competitive baselines for
in-session web page recommendation, re-visitation classification, and
goal-based web page grouping. A follow-up analysis further characterizes how
the variety of human motives can affect the difference observed in human
behavioral patterns.
|
Video Moment Retrieval (VMR) requires precise modelling of fine-grained
moment-text associations to capture intricate visual-language relationships.
Due to the lack of a diverse and generalisable VMR dataset to facilitate
learning scalable moment-text associations, existing methods resort to joint
training on both source and target domain videos for cross-domain applications.
Meanwhile, recent developments in vision-language multimodal models pre-trained
on large-scale image-text and/or video-text pairs are only based on coarse
associations (weakly labelled). They are inadequate to provide fine-grained
moment-text correlations required for cross-domain VMR. In this work, we solve
the problem of unseen cross-domain VMR, where certain visual and textual
concepts do not overlap across domains, by only utilising target domain
sentences (text prompts) without accessing their videos. To that end, we
explore generative video diffusion for fine-grained editing of source videos
controlled by the target sentences, enabling us to simulate target domain
videos. We address two problems in video editing for optimising unseen domain
VMR: (1) generation of high-quality simulation videos of different moments with
subtle distinctions, (2) selection of simulation videos that complement
existing source training videos without introducing harmful noise or
unnecessary repetitions. On the first problem, we formulate a two-stage video
diffusion generation controlled simultaneously by (1) the original video
structure of a source video, (2) subject specifics, and (3) a target sentence
prompt. This ensures fine-grained variations between video moments. On the
second problem, we introduce a hybrid selection mechanism that combines two
quantitative metrics for noise filtering and one qualitative metric for
leveraging VMR prediction on simulation video selection.
|
We propose improving the cross-target and cross-scene generalization of
visual navigation through learning an agent that is guided by conceiving the
next observations it expects to see. This is achieved by learning a variational
Bayesian model, called NeoNav, which generates the next expected observations
(NEO) conditioned on the current observations of the agent and the target view.
Our generative model is learned through optimizing a variational objective
encompassing two key designs. First, the latent distribution is conditioned on
current observations and the target view, leading to a model-based,
target-driven navigation. Second, the latent space is modeled with a Mixture of
Gaussians conditioned on the current observation and the next best action. Our
use of mixture-of-posteriors prior effectively alleviates the issue of
over-regularized latent space, thus significantly boosting the model
generalization for new targets and in novel scenes. Moreover, the NEO
generation models the forward dynamics of agent-environment interaction, which
improves the quality of approximate inference and hence benefits data
efficiency. We have conducted extensive evaluations on both real-world and
synthetic benchmarks, and show that our model consistently outperforms the
state-of-the-art models in terms of success rate, data efficiency, and
generalization.
|
In this article we propose a modification of the Monte Carlo method for
computing multiple parametric integrals and for solving linear Fredholm
integral equations of the second kind (a well-posed problem).
We prove that, under natural conditions, the rate of convergence of the
proposed method is optimal even in the uniform norm, and we construct
asymptotic and non-asymptotic confidence regions, again in the uniform norm.
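A hedged sketch of plain Monte Carlo for a parametric integral (the integrand
and the use of common random samples across parameter values are illustrative
assumptions), estimating $F(t)=\int_0^1 f(x,t)\,dx$ simultaneously for all $t$
on a grid so that the error can be inspected in the uniform (sup) norm:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x, t: np.exp(-t * x)                 # illustrative integrand
ts = np.linspace(0.0, 2.0, 41)                  # parameter grid

N = 100_000
x = rng.random(N)                               # common samples for all t
estimate = f(x[:, None], ts[None, :]).mean(axis=0)

# Closed form for comparison: (1 - e^{-t}) / t, with value 1 at t = 0.
exact = np.where(ts > 0, (1 - np.exp(-ts)) / np.where(ts > 0, ts, 1), 1.0)
print("sup-norm error:", np.abs(estimate - exact).max())   # ~ N^{-1/2}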
|
We study a new type of one-dimensional chiral states that can be created in
bilayer graphene (BLG) by electrostatic lateral confinement. These states
appear on the domain walls separating insulating regions experiencing the
opposite gating polarity. While the states are similar to conventional
solitonic zero-modes, their properties are defined by the unusual chiral BLG
quasiparticles, from which they derive. The number of zero-mode branches is
fixed by the topological vacuum charge of the insulating BLG state. We discuss
how these chiral states can manifest experimentally, and emphasize their
relevance for valleytronics.
|
We prove that there exists a phase space of canonical variables for the
initial value problem for axially symmetric Maxwell fields propagating in
Kerr-de Sitter black hole spacetimes such that their motion is restricted to
the level sets of a positive-definite Hamiltonian, despite the ergo-region.
|
The interconversion between left- and right-handed helical folds of a
polypeptide defines a dual-funneled free energy landscape. In this context, the
funnel minima are connected through a continuum of unfolded conformations,
evocative of the classical helix-coil transition. Physical intuition and recent
conjectures suggest that this landscape can be mapped by assigning a left- or
right-handed helical state to each residue. We explore this possibility using
all-atom replica exchange molecular dynamics and an Ising-like model,
demonstrating that the energy landscape architecture is at odds with a
two-state picture. A three-state model - left, right, and unstructured - can
account for most key intermediates during chiral interconversion. Competing
folds and excited conformational states still impose limitations on the scope
of this approach. However, the improvement is stark: Moving from a two-state to
a three-state model decreases the fit error from 1.6 $k_B T$ to 0.3 $k_B T$
along the left-to-right interconversion pathway.
|
Although considerable efforts have been devoted to transformer-based ranking
models for document search, the relevance-efficiency tradeoff remains a
critical problem for ad-hoc ranking. To overcome this challenge, this paper
presents BECR (BERT-based Composite Re-Ranking), a composite re-ranking scheme
that combines deep contextual token interactions and traditional lexical
term-matching features. In particular, BECR exploits a token encoding mechanism
to decompose the query representations into pre-computable uni-grams and
skip-n-grams. By applying token encoding on top of a dual-encoder architecture,
BECR separates the attentions between a query and a document while capturing
the contextual semantics of a query. In contrast to previous approaches, this
framework does not perform expensive BERT computations during online inference.
Thus, it is significantly faster, yet still able to achieve high
competitiveness in ad-hoc ranking relevance. Finally, an in-depth comparison
between BECR and other state-of-the-art neural ranking baselines is described
using the TREC datasets, thereby further demonstrating the enhanced relevance
and efficiency of BECR.
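A hedged sketch of decomposing a query into uni-grams and skip-n-grams as
described above (the tokenization and the skip-bigram window are our own
illustrative assumptions; BECR's actual token encoding operates on learned
representations):

from itertools import combinations

def decompose(query_tokens, max_skip=2):
    """Return the uni-grams and skip-bigrams of a query.

    Skip-bigrams pair tokens at most `max_skip` positions apart, so their
    encodings can be precomputed offline and looked up at query time."""
    unigrams = list(query_tokens)
    skip_bigrams = [(a, b) for (i, a), (j, b)
                    in combinations(enumerate(query_tokens), 2) if j - i <= max_skip]
    return unigrams, skip_bigrams

print(decompose(["cheap", "flights", "to", "boston"]))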
|
Deep clustering (DC), a fusion of deep representation learning and
clustering, has recently demonstrated positive results in data science,
particularly text processing and computer vision. However, joint optimization
of feature learning and data distribution in the multi-dimensional space is
domain-specific, so existing DC methods struggle to generalize to other
application domains (such as data integration and cleaning). In data management
tasks, where high-density embeddings and overlapping clusters dominate, a data
management-specific DC algorithm should be able to interact better with the
data properties for supporting data cleaning and integration tasks. This paper
presents a deep clustering algorithm for tabular data (TableDC) that reflects
the properties of data management applications, particularly schema inference,
entity resolution, and domain discovery. To address overlapping clusters,
TableDC integrates Mahalanobis distance, which considers variance and
correlation within the data, offering a similarity method suitable for tables,
rows, or columns in high-dimensional latent spaces. TableDC provides
flexibility for the final clustering assignment and shows higher tolerance to
outliers through its heavy-tailed Cauchy distribution as the similarity kernel.
The proposed similarity measure is particularly beneficial where the embeddings
of raw data are densely packed and exhibit high degrees of overlap. Data
cleaning tasks may involve a large number of clusters, which affects the
scalability of existing DC methods. TableDC's self-supervised module
efficiently learns data embeddings with a large number of clusters compared to
existing benchmarks, which scale in quadratic time. We evaluated TableDC with
several existing DC, Standard Clustering (SC), and state-of-the-art bespoke
methods over benchmark datasets. TableDC consistently outperforms existing DC,
SC, and bespoke methods.
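A hedged sketch of the similarity computation described above (a Mahalanobis
distance between embeddings and cluster centers, passed through a heavy-tailed
Cauchy kernel; the data, the single shared covariance, and the normalization
are our own illustrative assumptions, not TableDC's exact formulation):

import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))                   # row/column embeddings
centers = Z[rng.choice(200, 5, replace=False)]   # 5 cluster centers

cov = np.cov(Z, rowvar=False)
VI = np.linalg.inv(cov + 1e-6 * np.eye(16))      # regularized inverse covariance

diff = Z[:, None, :] - centers[None, :, :]       # (n, k, d) pairwise differences
d2 = np.einsum("nkd,de,nke->nk", diff, VI, diff) # squared Mahalanobis distances

# Heavy-tailed Cauchy (Student-t, 1 dof) similarity kernel, row-normalized
# into soft cluster assignments, tolerant of outliers and overlapping clusters.
q = 1.0 / (1.0 + d2)
q /= q.sum(axis=1, keepdims=True)
print(q[0])                                      # soft assignment of the first row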
|
Crypto-currency market uncertainty drives the need to find adaptive solutions
to maximise gain or at least to avoid loss throughout the periods of trading
activity. Given the high dimensionality and complexity of the state-action
space in this domain, it can be treated as a "Narrow AGI" problem with the
scope of goals and environments bound to financial markets. Adaptive
Multi-Strategy Agent approach for market-making introduces a new solution to
maximise positive "alpha" in long-term handling limit order book (LOB)
positions by using multiple sub-agents implementing different strategies with a
dynamic selection of these agents based on changing market conditions. AMSA
provides no specific strategy of its own while being responsible for segmenting
the periods of market-making activity into smaller execution sub-periods,
performing internal backtesting on historical data on each of the sub-periods,
evaluating sub-agent performance and re-selecting the sub-agents at the end of
each sub-period, and collecting returns and losses incrementally. With this
approach, the return becomes a function of hyper-parameters such as market data
granularity (refresh rate), the execution sub-period duration, number of active
sub-agents, and their individual strategies. Sub-agent selection for the next
trading sub-period is made based on return/loss and alpha values obtained
during internal backtesting as well as real trading. Experiments with the AMSA
have been performed under different market conditions relying on historical
data, and demonstrate a high probability of positive alpha throughout the
periods of trading activity when the hyper-parameters are properly selected.
|