This paper studies a sparse signal recovery task in time-varying
(time-adaptive) environments. The contribution of the paper to sparsity-aware
online learning is threefold: first, a Generalized Thresholding (GT) operator,
which relates to both convex and non-convex penalty functions, is introduced.
This operator embodies, in a unified way, the majority of well-known
thresholding rules which promote sparsity. Second, a non-convexly constrained,
sparsity-promoting, online learning scheme, namely the Adaptive
Projection-based Generalized Thresholding (APGT), is developed that
incorporates the GT operator with a computational complexity that scales
linearly with the number of unknowns. Third, the novel family of partially
quasi-nonexpansive mappings is introduced as a functional analytic tool for
treating the GT operator. Building upon the rich fixed point theory, this
class of mappings also helps us establish a link between the GT operator and a
union of linear subspaces, the non-convex object which lies at the heart of any
sparsity-promoting technique, batch or online. Based on such a
functional analytic framework, a convergence analysis of the APGT is provided.
Furthermore, extensive experiments suggest that the APGT exhibits competitive
performance when compared to computationally more demanding alternatives, such
as the sparsity-promoting Affine Projection Algorithm (APA)- and Recursive
Least Squares (RLS)-based techniques.
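As an illustration of the kind of rules a GT operator unifies, the following
minimal sketch implements two classical thresholding rules (soft and hard)
behind one interface. It is a hypothetical textbook example, not the paper's
GT operator or the APGT scheme.

```python
import numpy as np

def generalized_threshold(x, lam, rule="soft"):
    """Apply a sparsity-promoting thresholding rule elementwise.

    Illustrative stand-in for a GT-style operator: both rules zero out
    small entries, promoting sparse estimates.
    """
    x = np.asarray(x, dtype=float)
    if rule == "soft":   # proximal operator of the l1 penalty (convex)
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    if rule == "hard":   # proximal operator of the l0 penalty (non-convex)
        return np.where(np.abs(x) > lam, x, 0.0)
    raise ValueError(f"unknown rule: {rule}")

# Example: shrink a noisy coefficient vector.
w = np.array([0.05, -1.3, 0.4, 2.1, -0.02])
print(generalized_threshold(w, lam=0.5, rule="soft"))
```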
|
Despite its widespread adoption as the prominent neural architecture, the
Transformer has spurred several independent lines of work to address its
limitations. One such approach is selective state space models, which have
demonstrated promising results for language modelling. However, their
feasibility for learning self-supervised, general-purpose audio representations
is yet to be investigated. This work proposes Audio Mamba, a selective state
space model for learning general-purpose audio representations from randomly
masked spectrogram patches through self-supervision. Empirical results on ten
diverse audio recognition downstream tasks show that the proposed models,
pretrained on the AudioSet dataset, consistently outperform comparable
self-supervised audio spectrogram transformer (SSAST) baselines by a
considerable margin, and show better performance in comparisons across dataset
size, sequence length, and model size.
|
Modern computing platforms are highly configurable, with thousands of
interacting configuration options. However, configuring these systems is
challenging.
Erroneous configurations can cause unexpected non-functional faults. This paper
proposes CADET (short for Causal Debugging Toolkit) that enables users to
identify, explain, and fix the root cause of non-functional faults early and in
a principled fashion. CADET builds a causal model by observing the performance
of the system under different configurations. Then, it uses causal path
extraction followed by counterfactual reasoning over the causal model to: (a)
identify the root causes of non-functional faults, (b) estimate the effects of
various configurable parameters on the performance objective(s), and (c)
prescribe candidate repairs to the relevant configuration options to fix the
non-functional fault. We evaluated CADET on 5 highly-configurable systems
deployed on 3 NVIDIA Jetson systems-on-chip. We compare CADET with
state-of-the-art configuration optimization and ML-based debugging approaches.
The experimental results indicate that CADET can find effective repairs for
faults in multiple non-functional properties with up to 17% higher accuracy,
28% higher gain, and $40\times$ speed-up compared with other ML-based performance
debugging methods. Compared to multi-objective optimization approaches, CADET
can find fixes (at most) $9\times$ faster with comparable or better performance
gain. Our case study of non-functional faults reported in NVIDIA's forum shows
that CADET can find 14% better repairs than the experts' advice in less than
30 minutes.
|
It is shown that a Coulomb potential using a running coupling slightly
modified from the perturbative form can produce an interquark potential that
appears nearly linear over a large distance range. Recent high-statistics SU(2)
lattice gauge theory data fit well to this potential without the need for a
linear string-tension term. This calls into question the accuracy of string
tension measurements which are based on the assumption of a constant
coefficient for the Coulomb term. It also opens up the possibility of obtaining
an effectively confining potential from gluon exchange alone.
|
In an experiment performed at the LISE3 facility of GANIL, we studied the
decay of 22Al produced by the fragmentation of a 36Ar primary beam. A
beta-decay half-life of 91.1 +- 0.5 ms was measured. The beta-delayed one- and
two-proton emission as well as beta-alpha and beta-delayed gamma decays were
measured and allowed us to establish a partial decay scheme for this nucleus.
New levels were determined in the daughter nucleus 22Mg. The comparison with
model calculations strongly favours a spin-parity of 4+ for the ground state of
22Al.
|
The goal of the international Muon Ionization Cooling Experiment (MICE) is to
demonstrate muon beam ionization cooling for the first time. It constitutes a
key part of the R&D towards a future neutrino factory or muon collider. The
intended MICE precision requires the development of analysis tools that can
account for any effects (e.g., optical aberrations) which may lead to
inaccurate cooling measurements. Non-parametric density estimation techniques,
in particular, kernel density estimation (KDE), allow very precise calculations
of the muon beam phase-space density and its increase as a result of cooling.
In this study, the kernel density estimation technique and its application to
measuring the reduction in MICE muon beam phase-space volume are investigated.
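For readers unfamiliar with the technique, the sketch below estimates a
two-dimensional phase-space density with SciPy's Gaussian KDE and compares the
core density before and after a mock cooling step. All data and numbers are
hypothetical stand-ins, not MICE analysis code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Mock 2D phase space (x, px) for a muon beam, before and after "cooling".
before = rng.normal(0.0, 1.0, size=(2, 5000))
after = rng.normal(0.0, 0.8, size=(2, 5000))  # smaller spread = cooled beam

# Fit a nonparametric density to each sample.
kde_before = gaussian_kde(before)
kde_after = gaussian_kde(after)

# Density at the beam core rises when the occupied phase-space volume shrinks.
core = np.zeros((2, 1))
print("core density before:", kde_before(core)[0])
print("core density after: ", kde_after(core)[0])
```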
|
Fluctuations of global additive quantities, like total energy or
magnetization for instance, can in principle be described by statistics of sums
of (possibly correlated) random variables. Yet, it turns out that extreme
values (the largest value among a set of random variables) may also play a role
in the statistics of global quantities, in a direct or indirect way. This
review discusses different connections that may appear between problems of sums
and of extreme values of random variables, and emphasizes physical situations
in which such connections are relevant. Along this line of thought, standard
convergence theorems for sums and extreme values of independent and identically
distributed random variables are recalled, and some rigorous results as well as
more heuristic arguments are presented for correlated or non-identically
distributed random variables. More specifically, the role of extreme values
within sums of broadly distributed variables is addressed, and a general
mapping between extreme values and sums is presented, allowing us to identify a
class of correlated random variables whose sum follows (generalized) extreme
value distributions. Possible applications of this specific class of random
variables are illustrated with two simple physical models. A few
extensions to other related classes of random variables sharing similar
qualitative properties are also briefly discussed, in connection with the
so-called BHP distribution.
|
We study two nearby, early-type galaxies, NGC4342 and NGC4291, that host
unusually massive black holes relative to their low stellar mass. The observed
black hole-to-bulge mass ratios of NGC4342 and NGC4291 are ~6.9% and ~1.9%,
respectively, which significantly exceed the typical observed ratio of ~0.2%.
As a consequence of the exceedingly large black hole-to-bulge mass ratios,
NGC4342 and NGC4291 are ~5.1 sigma and ~3.4 sigma outliers from the M_BH -
M_bulge scaling relation, respectively. In this paper, we explore the origin of
these unusually high black hole-to-bulge mass ratios. Based on Chandra X-ray
observations of the hot gas content of NGC4342 and NGC4291, we compute
gravitating mass profiles, and conclude that both galaxies reside in massive
dark matter halos, which extend well beyond the stellar light. The presence of
dark matter halos around NGC4342 and NGC4291 and a deep optical image of the
environment of NGC4342 indicate that tidal stripping, in which >90% of the
stellar mass was lost, cannot explain the observed high black hole-to-bulge
mass ratios. Therefore, we conclude that these galaxies formed with low stellar
masses, implying that the bulge and black hole did not grow in tandem. We also
find that the black hole mass correlates well with the properties of the dark
matter halo, suggesting that dark matter halos may play a major role in
regulating the growth of the supermassive black holes.
|
Given closed convex sets $C_i$, $i=1,\ldots,\ell$, and some nonzero linear
maps $A_i$, $i = 1,\ldots,\ell$, of suitable dimensions, the multi-set split
feasibility problem aims at finding a point in $\bigcap_{i=1}^\ell A_i^{-1}C_i$
based on computing projections onto $C_i$ and multiplications by $A_i$ and
$A_i^T$. In this paper, we consider the associated best approximation problem,
i.e., the problem of computing projections onto $\bigcap_{i=1}^\ell
A_i^{-1}C_i$; we refer to this problem as the best approximation problem in
multi-set split feasibility settings (BA-MSF). We adapt Dykstra's
projection algorithm, which is classical for solving the BA-MSF in the special
case when all $A_i = I$, to solve the general BA-MSF. Our Dykstra-type
projection algorithm is derived by applying (proximal) coordinate gradient
descent to the Lagrange dual problem, and it only requires computing
projections onto $C_i$ and multiplications by $A_i$ and $A_i^T$ in each
iteration. Under a standard relative interior condition and a genericity
assumption on the point we need to project, we show that the dual objective
satisfies the Kurdyka-Lojasiewicz property with an explicitly computable
exponent on a neighborhood of the (typically unbounded) dual solution set when
each $C_i$ is $C^{1,\alpha}$-cone reducible for some $\alpha\in (0,1]$: this
class of sets covers the class of $C^2$-cone reducible sets, which includes all
polyhedra, the second-order cone, and the cone of positive semidefinite matrices
as special cases. Using this, explicit convergence rates (linear or sublinear)
of the sequence generated by the Dykstra-type projection algorithm are derived.
Concrete examples are constructed to illustrate the necessity of some of our
assumptions.
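For orientation, here is a minimal sketch of the classical Dykstra projection
scheme for the special case $A_i = I$ (projecting onto an intersection of
convex sets), which the paper generalizes; it is illustrative only, not the
authors' Dykstra-type algorithm for the general BA-MSF.

```python
import numpy as np

def dykstra(x0, projections, n_cycles=200):
    """Project x0 onto the intersection of convex sets.

    `projections` is a list of functions, each the exact projection onto
    one set C_i. Classical Dykstra (the A_i = I case): unlike cyclically
    applied plain projections, the correction terms p_i make the iterates
    converge to the *projection* of x0, not just some intersection point.
    """
    y = np.asarray(x0, dtype=float)
    p = [np.zeros_like(y) for _ in projections]
    for _ in range(n_cycles):
        for i, proj in enumerate(projections):
            z = proj(y + p[i])
            p[i] = y + p[i] - z
            y = z
    return y

# Example: project onto the intersection of the unit ball and {x[0] >= 0.5}.
proj_ball = lambda v: v / max(1.0, np.linalg.norm(v))
def proj_halfspace(v):
    w = v.copy()
    w[0] = max(w[0], 0.5)
    return w

print(dykstra(np.array([2.0, 2.0]), [proj_ball, proj_halfspace]))
```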
|
How should zoomorphic, or bio-inspired, robots indicate to humans that
interactions will be safe and fun? Here, a survey is used to measure how human
willingness to interact with a simulated butterfly robot is affected by
different flight patterns. Flapping frequency, flap-to-glide ratio, and
flapping pattern were independently varied based on a literature review of
butterfly and moth flight. Human willingness to interact with these simulations
and demographic information were self-reported via an online survey. Low
flapping frequency and greater proportion of gliding were preferred, and prior
experience with butterflies strongly predicted greater interaction willingness.
The preferred flight parameters correspond to migrating butterfly flight
patterns that are rarely directly observed by humans and do not correspond to
the species that inspired the wing shape of the robot model. The most realistic
butterfly simulations were among the least preferred. An analysis of animated
butterflies in popular media revealed a convergence on slower, less realistic
flight parameters. This iterative and interactive artistic process provides a
model for determining human preferences and identifying functional requirements
of robots for human interaction. Thus, the robotic design process can be
streamlined by leveraging animated models and surveys prior to construction.
|
Video representation is an important and challenging task in the computer
vision community. In this paper, we assume that image frames of a moving scene
can be modeled as a Linear Dynamical System. We propose a sparse coding
framework, named adaptive video dictionary learning (AVDL), to model a video
adaptively. The developed framework is able to capture the dynamics of a moving
scene by exploring both sparse properties and the temporal correlations of
consecutive video frames. The proposed method is compared with state-of-the-art
video processing methods on several benchmark data sequences, which exhibit
appearance changes and heavy occlusions.
|
The aims of this paper are: 1) to identify "worst smells", i.e., bad smells
that never have a good reason to exist, 2) to determine the frequency,
change-proneness, and severity associated with worst smells, and 3) to identify
the "worst reasons", i.e., the reasons for introducing these worst smells in
the first place. To achieve these aims we ran a survey with 71 developers. We
learned that 80 out of 314 catalogued code smells are "worst"; that is,
developers agreed that these 80 smells should never exist in any code base. We
then checked the frequency and change-proneness of these worst smells on 27
large Apache open-source projects. Our results show insignificant differences,
in both frequency and change-proneness, between worst and non-worst smells.
That is to say, these smells are just as damaging as other smells, but there is
never any justifiable reason to introduce them. Finally, in follow-up phone
interviews with five developers we confirmed that these smells are indeed
worst, and the interviewees proposed seven reasons for why they may be
introduced in the first place. By explicitly identifying these seven reasons,
project stakeholders can, through quality gates or reviews, ensure that such
smells are never accepted in a code base, thus improving quality without
compromising other goals such as agility or time to market.
|
We consider the Random Walk Metropolis algorithm on $\mathbb{R}^n$ with
Gaussian proposals, and when the target probability measure is the $n$-fold
product of a one-dimensional law. It is well known (see Roberts et al. (Ann.
Appl. Probab. 7 (1997) 110-120)) that, in the limit $n\to\infty$, starting at
equilibrium and for an appropriate scaling of the variance and of the timescale
as a function of the dimension $n$, a diffusive limit is obtained for each
component of the Markov chain. In Jourdain et al. (Optimal scaling for the
transient phase of the random walk Metropolis algorithm: The mean-field limit
(2012) Preprint), we generalize this result when the initial distribution is
not the target probability measure. The obtained diffusive limit is the
solution to a stochastic differential equation nonlinear in the sense of
McKean. In the present paper, we prove convergence to equilibrium for this
equation. We discuss practical counterparts in order to optimize the variance
of the proposal distribution to accelerate convergence to equilibrium. Our
analysis confirms the interest of the constant acceptance rate strategy (with
acceptance rate between $1/4$ and $1/3$) first suggested in Roberts et al.
(Ann. Appl. Probab. 7 (1997) 110-120). We also address scaling of the
Metropolis-Adjusted Langevin Algorithm. When starting at equilibrium, a
diffusive limit for an optimal scaling of the variance is obtained in Roberts
and Rosenthal (J. R. Stat. Soc. Ser. B. Stat. Methodol. 60 (1998) 255-268). In
the transient case, we obtain formally that the optimal variance scales very
differently in $n$ depending on the sign of a moment of the distribution, which
vanishes at equilibrium. This suggests that it is difficult to derive practical
recommendations for MALA from such asymptotic results.
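As a self-contained illustration of the scaling being discussed (not the
paper's analysis), the sketch below runs Random Walk Metropolis on an $n$-fold
product Gaussian target with proposal variance $\ell^2/n$ and reports the
empirical acceptance rate, which for the classical optimal tuning sits near the
range quoted above.

```python
import numpy as np

def rwm_acceptance(n=100, ell=2.38, n_steps=20000, seed=0):
    """Random Walk Metropolis on a standard Gaussian product target in R^n.

    Proposal: x' = x + N(0, (ell^2 / n) I). The classical optimal-scaling
    theory predicts an asymptotically optimal acceptance rate ~ 0.234.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)          # start at (approximate) equilibrium
    log_pi = lambda v: -0.5 * v @ v     # log-density up to a constant
    accepted = 0
    for _ in range(n_steps):
        prop = x + (ell / np.sqrt(n)) * rng.standard_normal(n)
        if np.log(rng.uniform()) < log_pi(prop) - log_pi(x):
            x = prop
            accepted += 1
    return accepted / n_steps

print(rwm_acceptance())  # ~0.23 for ell = 2.38
```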
|
According to the National Academies, a weekly forecast of velocity, vertical
structure, and duration of the Loop Current (LC) and its eddies is critical for
understanding the oceanography and ecosystem, and for mitigating outcomes of
anthropogenic and natural disasters in the Gulf of Mexico (GoM). However, this
forecast is a challenging problem since the LC behaviour is dominated by
long-range spatial connections across multiple timescales. In this paper, we
extend spatiotemporal predictive learning, showing its effectiveness beyond
video prediction, to a 4D model, i.e., a novel Physics-informed Tensor-train
ConvLSTM (PITT-ConvLSTM) for temporal sequences of 3D geospatial data
forecasting. Specifically, we propose 1) a novel 4D higher-order recurrent
neural network with empirical orthogonal function analysis to capture the
hidden uncorrelated patterns of each hierarchy, 2) a convolutional tensor-train
decomposition to capture higher-order space-time correlations, and 3) a
mechanism to incorporate prior physics knowledge provided by domain experts by
informing the learning in latent space. The advantage of our proposed method is
clear: constrained by physical laws, it simultaneously learns good
representations for frame dependencies (both short-term and long-term
high-level dependency) and inter-hierarchical relations within each time frame.
Experiments on geospatial data collected from the GoM demonstrate that
PITT-ConvLSTM outperforms the state-of-the-art methods in forecasting the
volumetric velocity of the LC and its eddies for a period of over one week.
|
Syntactic structures used to play a vital role in natural language processing
(NLP), but since the deep learning revolution, NLP has been gradually dominated
by neural models that do not consider syntactic structures in their design. One
vastly successful class of neural models is transformers. When used as an
encoder, a transformer produces contextual representation of words in the input
sentence. In this work, we propose a new model of contextual word
representation, not from a neural perspective, but from a purely syntactic and
probabilistic perspective. Specifically, we design a conditional random field
that models discrete latent representations of all words in a sentence as well
as dependency arcs between them; and we use mean field variational inference
for approximate inference. Strikingly, we find that the computation graph of
our model resembles transformers, with correspondences between dependencies and
self-attention and between distributions over latent representations and
contextual embeddings of words. Experiments show that our model performs
competitively with transformers on small to medium-sized datasets. We hope that
our work could help bridge the gap between traditional syntactic and
probabilistic approaches and cutting-edge neural approaches to NLP, and inspire
more linguistically-principled neural approaches in the future.
|
The virtual skein relation for the Jones polynomial of the virtual link
diagram was introduced by N. Kamada, S. Nakabo, and S. Satoh. H. A. Dye, L. H.
Kauffman, and Y. Miyazawa introduced a multivariable polynomial, an invariant
of virtual links, which is a refinement of the Jones polynomial. In this paper,
we give a skein relation for the multivariable polynomial among positive,
negative, and virtual crossings with some restrictions. We apply this relation
to study some properties of virtual links obtained by replacing a real crossing
by a virtual crossing.
|
Muon neutrino disappearance measurements at NO$\nu$A suggest that maximal
$\theta_{23}$ is excluded at the 2.6$\sigma$ CL. This is in mild tension with
T2K data which prefer maximal mixing. Considering that NO$\nu$A has a much
longer baseline than T2K, we point out that the apparent departure from maximal
mixing in NO$\nu$A may be a consequence of nonstandard neutrino propagation in
matter.
|
Given an $n$-dimensional random vector $X^{(n)}$, for $k < n$, consider its
$k$-dimensional projection $\mathbf{a}_{n,k}^\intercal X^{(n)}$, where
$\mathbf{a}_{n,k}$ is an $n \times k$ matrix belonging to the Stiefel manifold
$\mathbb{V}_{n,k}$ of orthonormal $k$-frames in $\mathbb{R}^n$. For a class of
sequences $\{X^{(n)}\}$ that includes the uniform distributions on scaled
$\ell_p^n$ balls, $p \in (1,\infty]$, and product measures with sufficiently
light tails, it is shown that the sequence of projected vectors
$\{\mathbf{a}_{n,k}^\intercal X^{(n)}\}$ satisfies a large deviation principle
whenever the empirical measures of the rows of $\sqrt{n} \mathbf{a}_{n,k}$
converge, as $n \rightarrow \infty$, to a probability measure on
$\mathbb{R}^k$. In particular, when $\mathbf{A}_{n,k}$ is a random matrix drawn
from the Haar measure on $\mathbb{V}_{n,k}$, this is shown to imply a large
deviation principle for the sequence of random projections
$\{\mathbf{A}_{n,k}^\intercal X^{(n)}\}$ in the quenched sense (that is,
conditioned on almost sure realizations of $\{\mathbf{A}_{n,k}\}$). Moreover, a
variational formula is obtained for the rate function of the large deviation
principle for the annealed projections $\{\mathbf{A}_{n,k}^\intercal
X^{(n)}\}$, which is expressed in terms of a family of quenched rate functions
and a modified entropy term. A key step in this analysis is a large deviation
principle for the sequence of empirical measures of rows of $\sqrt{n}
\mathbf{A}_{n,k}$, which may be of independent interest. The study of
multi-dimensional random projections of high-dimensional measures is of
interest in asymptotic functional analysis, convex geometry and statistics.
Prior results on quenched large deviations for random projections of $\ell_p^n$
balls have been essentially restricted to the one-dimensional setting.
|
Motivated by the fundamental role that bosonic and fermionic symmetries play
in physics, we study (non-invertible) one-form symmetries in $2 + 1$d
consisting of topological lines with bosonic and fermionic self-statistics. We
refer to these lines as Bose-Fermi-Braided (BFB) symmetries and argue that they
can be classified. Unlike the case of generic anyonic lines, BFB symmetries are
closely related to groups. In particular, when BFB lines are non-invertible,
they are non-intrinsically non-invertible. Moreover, BFB symmetries are, in a
categorical sense, weakly group theoretical. Using this understanding, we study
invariants of renormalization group flows involving non-topological QFTs with
BFB symmetry.
|
In this paper, we show that equality in Courant's nodal domain theorem can
only be reached for a finite number of eigenvalues of the Neumann Laplacian, in
the case of an open, bounded and connected set in $\mathbb{R}^n$ with a
$C^{1,1}$ boundary.
This result is analogous to Pleijel's nodal domain theorem for the Dirichlet
Laplacian (1956). It confirms, in all dimensions, a conjecture formulated by
Pleijel, which had already been solved by I. Polterovich for a two-dimensional
domain with a piecewise-analytic boundary (2009). We also show that the
argument and the result extend to a class of Robin boundary conditions.
|
Using a sharp version of the reverse Young inequality, and a R\'enyi entropy
comparison result due to Fradelizi, Madiman, and Wang, the authors are able to
derive R\'enyi entropy power inequalities for log-concave random vectors when
R\'enyi parameters belong to $(0,1)$. Furthermore, the estimates are shown to
be sharp up to absolute constants.
|
Our starting point is a basic problem in Hermite interpolation theory, namely
determining the least degree of a homogeneous polynomial that vanishes to some
specified order at every point of a given finite set. We solve this problem if
the number of points is small compared to the dimension of their linear span.
This also allows us to establish results on the Hilbert function of ideals
generated by powers of linear forms. The Verlinde formula determines such a
Hilbert function in a specific instance. We complement this result and also
determine the Castelnuovo-Mumford regularity of the corresponding ideals. As
applications we establish new instances of conjectures by Chudnovsky and by
Demailly on the Waldschmidt constant. Moreover, we show that conjectures on the
failure of the weak Lefschetz property by Harbourne, Schenck, and Seceleanu as
well as by Migliore, Mir\'o-Roig, and the first author are true asymptotically.
The latter also relies on a new result about Eulerian numbers.
|
The classic AdaBoost algorithm converts a weak learner, that is, an algorithm
that produces a hypothesis slightly better than chance, into a strong learner
that achieves arbitrarily high accuracy when given enough training data. We
present a new algorithm that constructs a strong learner from
a weak learner but uses less training data than AdaBoost and all other weak to
strong learners to achieve the same generalization bounds. A sample complexity
lower bound shows that our new algorithm uses the minimum possible amount of
training data and is thus optimal. Hence, this work settles the sample
complexity of the classic problem of constructing a strong learner from a weak
learner.
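For context, the following is a minimal vanilla AdaBoost with decision stumps,
the classic weak-to-strong construction referred to above; it is illustrative
only and is not the new sample-optimal algorithm of the paper.

```python
import numpy as np

def adaboost(X, y, n_rounds=50):
    """Vanilla AdaBoost with decision stumps; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)           # example weights
    stumps = []                       # (feature, threshold, sign, alpha)
    for _ in range(n_rounds):
        best = None
        # Weak learner: exhaustively pick the best weighted stump.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                  # reweight toward misclassified points
        stumps.append((j, thr, s, alpha))
    def predict(Xq):
        score = sum(a * s * np.where(Xq[:, j] > t, 1, -1)
                    for j, t, s, a in stumps)
        return np.sign(score)
    return predict

# Toy usage: a 1D threshold problem the ensemble solves exactly.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.where(X[:, 0] > 0.5, 1, -1)
print((adaboost(X, y)(X) == y).all())
```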
|
A submanifold is said to be tangentially biharmonic if the bitension field of
the isometric immersion that defines the submanifold has vanishing tangential
component. The purpose of this paper is to prove that a surface in Euclidean
$3$-space has tangentially biharmonic normal bundle if and only if it is either
minimal, a part of a round sphere, or a part of a circular cylinder.
|
If $L$ is a list assignment of $r$ colors to each vertex of an $n$-vertex
graph $G$, then an equitable $L$-coloring of $G$ is a proper coloring of
vertices of $G$ from their lists such that no color is used more than $\lceil
n/r\rceil$ times. A graph is equitably $r$-choosable if it has an equitable
$L$-coloring for every $r$-list assignment $L$. In 2003, Kostochka, Pelsmajer
and West (KPW) conjectured that an analog of the famous Hajnal-Szemer\'edi
Theorem on equitable coloring holds for equitable list coloring, namely, that
for each positive integer $r$ every graph $G$ with maximum degree at most $r-1$
is equitably $r$-choosable.
The main result of this paper is that for each $r\geq 9$ and each planar
graph $G$, a stronger statement holds: if the maximum degree of $G$ is at most
$r$, then $G$ is equitably $r$-choosable. In fact, we prove the result for a
broader class of graphs -- the class ${\mathcal{B}}$ of the graphs in which
each bipartite subgraph $B$ with $|V(B)|\ge3$ has at most $2|V(B)|-4$ edges.
Together with some known results, this implies that the KPW Conjecture holds
for all graphs in ${\mathcal{B}}$, in particular, for all planar graphs.
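To make the definition concrete, here is a small hypothetical checker that
verifies the three conditions of an equitable $L$-coloring (colors come from
the lists, the coloring is proper, and no color is used more than $\lceil
n/r\rceil$ times); it merely illustrates the definition above.

```python
from math import ceil

def is_equitable_list_coloring(adj, lists, coloring, r):
    """adj: {v: set of neighbors}; lists: {v: set of r colors};
    coloring: {v: color}. Checks the three defining conditions."""
    n = len(adj)
    # 1) Every vertex gets a color from its own list.
    if any(coloring[v] not in lists[v] for v in adj):
        return False
    # 2) Proper: adjacent vertices receive distinct colors.
    if any(coloring[v] == coloring[u] for v in adj for u in adj[v]):
        return False
    # 3) Equitable: no color class exceeds ceil(n / r).
    counts = {}
    for c in coloring.values():
        counts[c] = counts.get(c, 0) + 1
    return max(counts.values()) <= ceil(n / r)

# A 4-cycle with identical 2-color lists, colored alternately.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
lists = {v: {"a", "b"} for v in adj}
coloring = {0: "a", 1: "b", 2: "a", 3: "b"}
print(is_equitable_list_coloring(adj, lists, coloring, r=2))  # True
```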
|
Domain adaptation refers to the learning scenario in which a model learned from
the source data is applied to target data that have the same categories
but a different distribution. While it has been widely applied, the distribution
discrepancy between source data and target data can substantially affect the
adaptation performance. The problem has been recently addressed by employing
adversarial learning and distinctive adaptation performance has been reported.
In this paper, a deep adversarial domain adaptation model based on a
multi-layer joint kernelized distance metric is proposed. By utilizing the
abstract features extracted from deep networks, the multi-layer joint
kernelized distance (MJKD) between the $j$th target data predicted as the $m$th
category and all the source data of the $m'$th category is computed. Based on
MJKD, a class-balanced selection strategy is utilized in each category to
select target data that are most likely to be classified correctly and treat
them as labeled data using their pseudo labels. Then an adversarial
architecture is used to draw the newly generated labeled training data and the
remaining target data close to each other. In this way, the target data
themselves provide valuable information to enhance the domain adaptation. An analysis of
the proposed method is also given and the experimental results demonstrate that
the proposed method can achieve a better performance than a number of
state-of-the-art methods.
|
Holographic dualities between certain gravitational theories in four and five
spacetime dimensions and 2D conformal field theories (CFTs) have been proposed
based on hidden conformal symmetry exhibited by the radial Klein-Gordon (KG)
operator in a so-called near-region limit. In this paper, we examine hidden
conformal symmetry of black ring and black string solutions, thus
demonstrating that the presence of hidden conformal symmetry is not linked to
the separability of the KG-equation (or the existence of a Killing-Yano
tensor). Further, we will argue that these classes of non-extremal black holes
have a dual 2D CFT. New revised monodromy techniques are developed to encompass
all the cases we consider.
|
The current atmospheric and solar neutrino experimental data favors the
bi-maximal mixing solution of the Zee-type neutrino mass matrix in which
neutrino masses are generated radiatively. This model requires the existence of
a weak singlet charged Higgs boson. While low energy data are unlikely to
further constrain the parameters of this model, the direct search for charged
Higgs production at the CERN LEP experiments can provide useful information on
this mechanism of neutrino mass generation by analyzing their data with
electrons and/or muons (in contrast to taus or charms) in the final state with
missing transverse energy. We also discuss the difference between the production
rates of weak-singlet and weak-doublet charged Higgs boson pairs at LEP.
|
For a finite group $G$, let $\mathrm{diam}(G)$ denote the maximum diameter of
a connected Cayley graph of $G$. A well-known conjecture of Babai states that
$\mathrm{diam}(G)$ is bounded by ${(\log_{2} |G|)}^{O(1)}$ in case $G$ is a
non-abelian finite simple group. Let $G$ be a finite simple group of Lie type
of Lie rank $n$ over the field $F_{q}$. Babai's conjecture has been verified in
case $n$ is bounded, but it is wide open in case $n$ is unbounded. Recently,
Biswas and Yang proved that $\mathrm{diam}(G)$ is bounded by $q^{O( n
{(\log_{2}n + \log_{2}q)}^{3})}$. We show that in fact $\mathrm{diam}(G) <
q^{O(n {(\log_{2}n)}^{2})}$ holds. Note that our bound is significantly smaller
than the order of $G$ for $n$ large, even if $q$ is large. As an application,
we show that more generally $\mathrm{diam}(H) < q^{O( n {(\log_{2}n)}^{2})}$
holds for any subgroup $H$ of $\mathrm{GL}(V)$, where $V$ is a vector space of
dimension $n$ defined over the field $F_q$.
|
This document describes the Gloss infrastructure supporting the implementation
of location-aware services. The document is in two parts. The first part
describes the software architecture for the smart space. As described in D8, a local
architecture provides a framework for constructing Gloss applications, termed
assemblies, that run on individual physical nodes, whereas a global
architecture defines an overlay network for linking individual assemblies. The
second part outlines the hardware installation for local sensing. This
describes the first phase of the installation in Strathclyde University.
|
In brain neural networks, Local Field Potential (LFP) signals represent the
dynamic flow of information. Analyzing LFP clinical data plays a critical role
in improving our understanding of brain mechanisms. One way to enhance our
understanding of these mechanisms is to identify a global model to predict
brain signals in different situations. This paper identifies a global
data-driven model based on LFP recordings of the Nucleus Accumbens and Hippocampus
regions in freely moving rats. The LFP is recorded from each rat in two
different situations: before and after the process of getting a reward which
can be either a drug (Morphine) or natural food (like popcorn or biscuit). A
comparison of five machine learning methods including Long Short Term Memory
(LSTM), Echo State Network (ESN), Deep Echo State Network (DeepESN), Radial
Basis Function (RBF), and Local Linear Model Tree (LoLiMoT) is conducted to
develop this model. LoLiMoT was chosen as it showed the best performance among
all methods.
This model can predict the future states of these regions with one pre-trained
model. Identifying this model showed that Morphine and natural rewards do not
change the dynamic features of neurons in these regions.
|
A classic graph coloring problem is to assign colors to vertices of any graph
so that distinct colors are assigned to adjacent vertices. Optimal graph
coloring colors a graph with a minimum number of colors, which is its chromatic
number. Finding out the chromatic number is a combinatorial optimization
problem proven to be computationally intractable, meaning that no algorithm is
known that solves large instances of the problem in a reasonable time. For this
reason, approximate methods and metaheuristics form a set of
techniques that do not guarantee optimality but obtain good solutions in a
reasonable time. This paper reports a comparative study of the Hill-Climbing,
Simulated Annealing, Tabu Search, and Iterated Local Search metaheuristics for
the classic graph coloring problem considering its time efficiency for
processing the DSJC125 and DSJC250 instances of the DIMACS benchmark.
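As a small illustration of one of the compared metaheuristics (a hedged
sketch, not the study's implementation), the code below applies hill-climbing
to a fixed-$k$ coloring, accepting only recolorings that do not increase the
number of conflicting edges.

```python
import random

def conflicts(adj, coloring):
    """Number of edges whose endpoints share a color."""
    return sum(coloring[u] == coloring[v]
               for u in adj for v in adj[u] if u < v)

def hill_climb_coloring(adj, k, n_iters=10000, seed=0):
    """Try to k-color a graph {v: set(neighbors)} using local moves only."""
    rng = random.Random(seed)
    coloring = {v: rng.randrange(k) for v in adj}
    cost = conflicts(adj, coloring)
    for _ in range(n_iters):
        if cost == 0:
            break
        v = rng.choice(list(adj))        # pick a vertex, try a new color
        old = coloring[v]
        coloring[v] = rng.randrange(k)
        new_cost = conflicts(adj, coloring)
        if new_cost <= cost:             # accept non-worsening moves
            cost = new_cost
        else:                            # hill-climbing: reject worsening moves
            coloring[v] = old
    return coloring, cost

# 5-cycle: chromatic number 3, so k=3 should reach zero conflicts.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(hill_climb_coloring(adj, k=3))
```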
|
In dependence modeling, various copulas have been utilized. Among them, the
Frank copula has been one of the most typical choices due to its simplicity. In
this work, we demonstrate that the Frank copula is the minimum information
copula under fixed Kendall's $\tau$ (MICK), both theoretically and numerically.
First, we explain that both MICK and the Frank density follow the hyperbolic
Liouville equation. Moreover, we show that the copula density satisfying the
Liouville equation is uniquely the Frank copula. Our result asserts that
selecting the Frank copula as an appropriate copula model is equivalent to
using Kendall's $\tau$ as the sole available information about the true
distribution, based on the entropy maximization principle.
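To make the quantities concrete, the sketch below evaluates the Frank copula
density and its closed-form Kendall's $\tau$ via the first Debye function; it
is a numerical illustration of the objects discussed, not the paper's argument.

```python
import numpy as np
from scipy.integrate import quad

def frank_density(u, v, theta):
    """Frank copula density c(u, v) for theta != 0."""
    e = np.exp(-theta)
    num = theta * (1 - e) * np.exp(-theta * (u + v))
    den = ((1 - e) - (1 - np.exp(-theta * u)) * (1 - np.exp(-theta * v))) ** 2
    return num / den

def frank_kendall_tau(theta):
    """Kendall's tau of the Frank copula:
    tau = 1 - (4/theta) * (1 - D1(theta)), with D1 the first Debye function."""
    d1 = quad(lambda t: t / np.expm1(t), 0, theta)[0] / theta
    return 1 - 4.0 / theta * (1 - d1)

print(frank_density(0.3, 0.7, theta=5.0))
print(frank_kendall_tau(5.0))  # ~0.457: moderate positive dependence
```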
|
The mechanism responsible for the natal kicks of neutron stars continues to
be a challenging problem. Indeed, many mechanisms have been suggested, and one
hydrodynamic mechanism may require large initial asymmetries in the cores of
supernova progenitor stars. Goldreich, Lai, & Sahrling (1997) suggested that
unstable g-modes trapped in the iron (Fe) core by the convective burning layers
and excited by the $\epsilon$-mechanism may provide the requisite asymmetries.
We perform a modal analysis of the last minutes before collapse of published
core structures and derive eigenfrequencies and eigenfunctions, including the
nonadiabatic effects of growth by nuclear burning and decay by both neutrino
and acoustic losses. In general, we find two types of g-modes: inner-core
g-modes, which are stabilized by neutrino losses, and outer-core g-modes, which
are trapped near the burning shells and can be unstable. Without exception, we
find at least one unstable g-mode for each progenitor in the entire mass range
we consider, 11 M$_{\sun}$ to 40 M$_{\sun}$. More importantly, we find that the
timescales for growth and decay are an order of magnitude or more longer than
the time until the commencement of core collapse. We conclude that the
$\epsilon$-mechanism may not have enough time to significantly amplify core
g-modes prior to collapse.
|
As an observational case study, we consider the origin of a prominent
poleward surge of leading polarity, visible in the magnetic butterfly diagram
during Solar Cycle 24. A new technique is developed for assimilating individual
regions of strong magnetic flux into a surface flux transport model. By
isolating the contribution of each of these regions, the model shows the surge
to originate primarily in a single high-latitude activity group consisting of a
bipolar active region present in Carrington Rotations 2104-05 (November
2010-January 2011) and a multipolar active region in Rotations 2107-08
(February-April 2011). This group had a strong axial dipole moment opposed to
Joy's law. On the other hand, the modelling suggests that the transient
influence of this group on the butterfly diagram will not be matched by a large
long-term contribution to the polar field, because of its location at high
latitude. This is in accordance with previous flux transport models.
|
Preferential attachment is widely used to model power-law behavior of degree
distributions in both directed and undirected networks. Practical analyses on
the tail exponent of the power-law degree distribution use the Hill estimator
as one of the key summary statistics, whose consistency is justified mostly for
iid data. The major goal in this paper is to answer the question whether the
Hill estimator is still consistent when applied to non-iid network data. To do
this, we first derive the asymptotic behavior of the degree sequence via
embedding the degree growth of a fixed node into a birth immigration process.
We also need to show the convergence of the tail empirical measure, from which
the consistency of Hill estimators is obtained. This step requires checking the
concentration of degree counts. We give a proof for a particular linear
preferential attachment model and use simulation results as an illustration in
other choices of models.
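As a concrete reference for the key summary statistic (a generic textbook
sketch, not the paper's setting), the code below computes the Hill estimator
of the tail index from the top $k$ order statistics; the Pareto sample is a
hypothetical stand-in for a power-law degree sequence.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the tail index alpha from the top-k order statistics.

    H = (1/k) * sum_{i=1..k} log(X_{(n-i+1)} / X_{(n-k)});  alpha_hat = 1/H.
    """
    x = np.sort(np.asarray(data, dtype=float))
    top = x[-k:]                 # the k largest observations
    threshold = x[-k - 1]        # the (k+1)-th largest, the comparison level
    return 1.0 / np.mean(np.log(top / threshold))

# Pareto(alpha=2.5) sample as a stand-in for a power-law degree sequence.
rng = np.random.default_rng(1)
sample = (1.0 / rng.uniform(size=100_000)) ** (1 / 2.5)
print(hill_estimator(sample, k=2000))  # ~2.5 for a well-chosen k
```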
|
We consider a supersymmetric version of the standard model extended by an
additional U(1)_{B-L}. This model can be embedded in an mSUGRA-inspired model
where the mass parameters of the scalars and gauginos unify at the scale of
grand unification. In this class of models the renormalization group equation
evolution of gauge couplings as well as of the soft SUSY-breaking parameters
require the proper treatment of gauge kinetic mixing. We first show that this
has a profound impact on the phenomenology of the Z', and as a consequence the
current LHC bounds on its mass are reduced significantly from about 1920 GeV to
1725 GeV. They are even further reduced if the Z' can decay into supersymmetric
particles. Secondly, we show that in this way sleptons can be produced at the
LHC in the 14 TeV phase with masses of several hundred GeV. In the case of
squark and gluino masses in the multi-TeV range, this might become an important
discovery channel for sleptons up to 650 GeV (800 GeV) for an integrated
luminosity of 100 fb^{-1} (300 fb^{-1}).
|
We consider online coordinated precoding design for downlink wireless network
virtualization (WNV) in a multi-cell multiple-input multiple-output (MIMO)
network with imperfect channel state information (CSI). In our WNV framework,
an infrastructure provider (InP) owns each base station that is shared by
several service providers (SPs) oblivious of each other. The SPs design their
precoders as virtualization demands for user services, while the InP designs
the actual precoding solution to meet the service demands from the SPs. Our aim
is to minimize the long-term time-averaged expected precoding deviation over
MIMO fading channels, subject to both per-cell long-term and short-term
transmit power limits. We propose an online coordinated precoding algorithm for
virtualization, which provides a fully distributed semi-closed-form precoding
solution at each cell, based only on the current imperfect CSI without any CSI
exchange across cells. Taking into account the two-fold impact of imperfect CSI
on both the InP and the SPs, we show that our proposed algorithm is within an
$O(\delta)$ gap from the optimum over any time horizon, where $\delta$ is a CSI
inaccuracy indicator. Simulation results validate the performance of our
proposed algorithm under two commonly used precoding techniques in a typical
urban micro-cell network environment.
|
We reanalyze $2\alpha+t$ cluster features of $3/2^-$ states in $^{11}$B by
investigating the $t$ cluster distribution around a $2\alpha$ core in $^{11}$B,
calculated with the method of antisymmetrized molecular dynamics (AMD). In the
$3/2^-_3$ state, a $t$ cluster is distributed in a wide region around
$2\alpha$, indicating that the $t$ cluster moves rather freely in angular as
well as radial motion. From the weak angular correlation and radial extent of
the $t$ cluster distribution, we propose an interpretation of a $2\alpha+t$
cluster gas for the $3/2^-_3$ state. In this study, we compare the $2\alpha+t$
cluster feature in $^{11}$B($3/2^-_3$) with the $3\alpha$ cluster feature in
$^{12}$C($0^+_2$), and discuss their similarities and differences.
|
Systems biology approaches to the integrative study of cells, organs and
organisms offer the best means of holistically understanding the diversity of
molecular assays that can now be implemented in a high-throughput manner. Such
assays can sample the genome, epigenome, proteome,
metabolome and microbiome contemporaneously, allowing us for the first time to
perform a complete analysis of physiological activity. The central problem
remains empowering the scientific community to actually implement such an
integration, across seemingly diverse data types and measurements. One
promising solution is to apply semantic techniques on a self-consistent and
implicitly correct ontological representation of these data types. In this
paper we describe how we have applied one such solution, based around the
InterMine data warehouse platform which uses as its basis the Sequence
Ontology, to facilitate a systems biology analysis of virulence in the
apicomplexan pathogen Toxoplasma gondii, a common parasite that infects up to
half the world's population, with acute pathogenic risks for immuno-compromised
individuals or pregnant mothers. Our solution, which we named `toxoMine', has
provided both a platform for our collaborators to perform such integrative
analyses and also opportunities for such cyberinfrastructure to be further
developed, particularly to take advantage of possible semantic similarities of
value to knowledge discovery in the Omics enterprise. We discuss these
opportunities in the context of further enhancing the capabilities of this
powerful integrative platform.
|
Computing thermal contributions to the phase behavior of condensed matter
under extreme conditions with \emph{ab initio} precision has been a
long-standing pursuit. In this work, the pressure-induced structural phase
transitions of crystalline aluminum up to $600$ GPa at room temperature are
investigated based on the criterion of Gibbs free energy derived directly from
the partition function formulated in ensemble theory, with the interatomic
interactions characterized by density functional theory computations.
transition pressures of the FCC$\rightarrow$HCP$\rightarrow$BCC phase
transitions are determined at $194$ and $361$ GPa, the axial ratio of the
stable HCP structure is found to be equal to $1.62$ and the discontinuities in
the equations of states are confirmed to be associated with $-0.67\%$ and
$-0.90\%$ volume changes, which are all in an excellent agreement with the
measurements by one of the recent experiments but differ from other
experimental observations. Compared with the results obtained by the criterion
of enthalpy at $0$K, this work further shows the nontrivial thermal impacts on
the structural stability of aluminum under ultrahigh-pressure circumstances
even at room temperature.
|
A full subcategory of a Grothendieck category is called deconstructible if it
consists of all transfinite extensions of some set of objects. This concept
provides a handy framework for structure theory and construction of
approximations for subcategories of Grothendieck categories. It also allows one
to construct model structures and t-structures on categories of complexes over a
Grothendieck category. In this paper we aim to establish fundamental results on
deconstructible classes and outline how to apply these in the areas mentioned
above. This is related to recent work of Gillespie, Enochs, Estrada, Guil
Asensio, Murfet, Neeman, Prest, Trlifaj and others.
|
The pressure dependence of the Shubnikov-de Haas (SdH) oscillation spectra of
the quasi-two-dimensional organic metal (ET)8[Hg4Cl12(C6H5Br)]2 has been
studied up to 1.1 GPa in pulsed magnetic fields of up to 54 T. According to
band structure calculations, its Fermi surface can be regarded as a network of
compensated orbits. The SdH spectra exhibit many Fourier components typical of
such a network, most of them being forbidden in the framework of the
semiclassical model. Their amplitude remains large over the whole pressure range
studied, which likely rules out chemical potential oscillation as a dominant
contribution to their origin, in agreement with recent calculations relevant to
compensated Fermi liquids. In addition to a strong decrease of the magnetic
breakdown field and effective masses, the latter being likely due to a
reduction of the strength of electron correlations, a sizeable increase of the
scattering rate is observed as the applied pressure increases. This latter
point, which is at variance with data for most charge transfer salts, is
discussed in connection with pressure-induced features of the temperature
dependence of the zero-field interlayer resistance.
|
I investigate the caustics produced by the fall of collisionless dark matter
in and out of a galaxy in the limit of negligible velocity dispersion. The
outer caustics are spherical shells enveloping the galaxy. The inner caustics
are rings. These are located near where the particles with the most angular
momentum are at their distance of closest approach to the galactic center. The
surface of a caustic ring is a closed tube whose cross-section is a $D_{-4}$
catastrophe. It has three cusps amongst which exists a discrete $Z_3$ symmetry.
A detailed analysis is given in the limit where the flow of particles is
axially and reflection symmetric and where the transverse dimensions of the
ring are small compared to the ring radius. Five parameters describe the
caustic in that limit. The relations between these parameters and the initial
velocity distribution of the particles are derived. The structure of the
caustic ring is used to predict the shape of the bump produced in a galactic
rotation curve by a caustic ring lying in the galactic plane.
|
Graphene-based devices are planned to augment the functionality of Si and
III-V based technology in radio-frequency (RF) electronics. The expectations in
designing graphene field-effect transistors (GFETs) with enhanced RF
performance have attracted significant experimental efforts, mainly
concentrated on achieving high-mobility samples. However, little attention has
been paid, so far, to the role of the access regions in these devices.
Here, we analyse in detail, via numerical simulations, how the GFET
transfer response is severely impacted by these regions, showing that they play
a significant role in the asymmetric saturated behaviour commonly observed in
GFETs. We also investigate how the modulation of the access region conductivity
(i.e., by the influence of a back gate) and the presence of imperfections in
the graphene layer (e.g., charge puddles) affects the transfer response. The
analysis is extended to assess the suitability of GFETs for RF applications,
by evaluating their cut-off frequency.
|
Optimal transport (OT) formalizes the problem of finding an optimal coupling
between probability measures given a cost matrix. The inverse problem of
inferring the cost given a coupling is Inverse Optimal Transport (IOT). IOT is
less well understood than OT. We formalize and systematically analyze the
properties of IOT using tools from the study of entropy-regularized OT.
Theoretical contributions include characterization of the manifold of
cross-ratio equivalent costs, the implications of model priors, and derivation
of an MCMC sampler. Empirical contributions include visualizations of the
effect of cross-ratio equivalence on basic examples and simulations validating
the theoretical results.
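For readers unfamiliar with the forward problem, the sketch below computes an
entropy-regularized OT coupling with plain Sinkhorn iterations; it is a generic
sketch of the forward map whose inversion IOT studies, not the paper's IOT
machinery.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iters=500):
    """Entropy-regularized OT: find coupling P minimizing <P, C> - eps*H(P)
    with marginals mu (rows) and nu (columns), via Sinkhorn scaling."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)           # match column marginals
        u = mu / (K @ v)             # match row marginals
    return u[:, None] * K * v[None, :]

# Tiny example: 3 suppliers, 3 consumers, squared-distance costs.
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2
mu = nu = np.full(3, 1 / 3)
P = sinkhorn(mu, nu, C)
print(P.round(3))                    # near-diagonal coupling
print(P.sum(axis=1), P.sum(axis=0))  # marginals ~ mu, nu
```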
|
In the present work, we quantize a closed Friedmann-Robertson-Walker model in
the presence of a positive cosmological constant and radiation. It gives rise
to a Wheeler-DeWitt equation for the scale factor which has the form of a
Schr\"{o}dinger equation for a potential with a barrier. We solve it
numerically and determine the tunneling probability for the birth of an
asymptotically de Sitter, inflationary universe, initially as a function of the
mean energy of the initial wave-function. Then, we verify that the tunneling
probability increases with the cosmological constant, for a fixed value of the
mean energy of the initial wave-function. Our treatment of the problem is more
general than previous ones, based on the WKB approximation. That is the case
because we take into account the fact that the scale factor ($a$) cannot be
smaller than zero. This means that one has to introduce an infinite potential
wall at $a = 0$, which forces any wave-packet to vanish there. That condition
leads to new results, in comparison with previous works.
|
Graph neural networks (GNNs) encounter significant computational challenges
when handling large-scale graphs, which severely restricts their efficacy
across diverse applications. To address this limitation, graph condensation has
emerged as a promising technique, which constructs a small synthetic graph for
efficiently training GNNs while retaining performance. However, due to the
topology structure among nodes, graph condensation is limited to condensing
only the observed training nodes and their corresponding structure, thus
lacking the ability to effectively handle the unseen data. Consequently, the
original large graph is still required in the inference stage to perform
message passing to inductive nodes, resulting in substantial computational
demands. To overcome this issue, we propose mapping-aware graph condensation
(MCond), explicitly learning the one-to-many node mapping from original nodes
to synthetic nodes to seamlessly integrate new nodes into the synthetic graph
for inductive representation learning. This enables direct information
propagation on the synthetic graph, which is much more efficient than on the
original large graph. Specifically, MCond employs an alternating optimization
scheme with innovative loss terms from transductive and inductive perspectives,
facilitating the mutual promotion between graph condensation and node mapping
learning. Extensive experiments demonstrate the efficacy of our approach in
inductive inference. On the Reddit dataset, MCond achieves up to 121.5x
inference speedup and 55.9x reduction in storage requirements compared with
counterparts based on the original graph.
|
We iteratively apply a recently formulated adiabatic theorem for the
strong-coupling limit in finite-dimensional quantum systems. This allows us to
improve approximations to a perturbed dynamics, beyond the standard
approximation based on quantum Zeno dynamics and adiabatic elimination. The
effective generators describing the approximate evolutions are endowed with the
same block structure as the unperturbed part of the generator, and exhibit
adiabatic evolutions. This iterative adiabatic theorem reveals that
adiabaticity holds eternally, that is, the system evolves within each
eigenspace of the unperturbed part of the generator, with an error bounded by
$O(1/\gamma)$ uniformly in time, where $\gamma$ characterizes the strength of
the unperturbed part of the generator. We prove that the iterative adiabatic
theorem reproduces Bloch's perturbation theory in the unitary case, and is
therefore a full generalization to open systems. We furthermore prove the
equivalence of the Schrieffer-Wolff and des Cloizeaux approaches in the unitary
case and generalize both to arbitrary open systems, showing that they share the
eternal adiabaticity, and providing explicit error bounds. Finally we discuss
the physical structure of the effective adiabatic generators and show that
ideal effective generators for open systems do not exist in general.
|
This paper addresses the problem of scalability for a cell-free massive MIMO
(CF-mMIMO) system that performs integrated sensing and communications (ISAC).
Specifically, the case where a large number of access points (APs) are deployed
to perform simultaneous communication with mobile users and monitoring of the
surrounding environment in the same time-frequency slot is considered, and a
target-centric approach on top of the user-centric architecture used for
communication services is introduced. In the paper, other practical aspects
such as the fronthaul load and scanning protocol are also considered. The
proposed scalable ISAC-enabled CF-mMIMO network has lower levels of system
complexity, permits managing the scenario in which multiple targets are to be
tracked/sensed by the APs, and achieves performance levels superior or, in some
cases, close to those of the non-scalable solutions.
|
We consider a Fleming-Viot-type particle system consisting of independently
moving particles that are killed on the boundary of a domain. At the time of
death of a particle, another particle branches. If there are only two particles
and the underlying motion is a Bessel process on $(0,\infty)$, both particles
converge to 0 at a finite time if and only if the dimension of the Bessel
process is less than 0. If the underlying diffusion is Brownian motion with a
drift stronger than (but arbitrarily close to, in a suitable sense) the drift
of a Bessel process, all particles converge to 0 at a finite time, for any
number of particles.
|
This paper aims at improving how machines can answer questions directly from
text, with the focus of having models that can answer correctly multiple types
of questions and from various types of texts, documents or even from large
collections of them. To that end, we introduce the Weaver model that uses a new
way to relate a question to a textual context by weaving layers of recurrent
networks, with the goal of making as few assumptions as possible as to how the
information from both question and context should be combined to form the
answer. We show empirically on six datasets that Weaver performs well in
multiple conditions. For instance, it produces solid results on the very
popular SQuAD dataset (Rajpurkar et al., 2016), solves almost all bAbI tasks
(Weston et al., 2015) and greatly outperforms state-of-the-art methods for open
domain question answering from text (Chen et al., 2017).
|
We review our expectations in the last year before the LHC commissioning.
|
This paper introduces and explores a new programming paradigm, Model-based
Programming, designed to address the challenges inherent in applying deep
learning models to real-world applications. Despite recent significant
successes of deep learning models across a range of tasks, their deployment in
real business scenarios remains fraught with difficulties, such as complex
model training, large computational resource requirements, and integration
issues with existing programming languages. To ameliorate these challenges, we
propose the concept of 'Model-based Programming' and present a novel
programming language - M Language, tailored to a prospective model-centered
programming paradigm. M Language treats models as basic computational units,
enabling developers to concentrate more on crucial tasks such as model loading,
fine-tuning, evaluation, and deployment, thereby enhancing the efficiency of
creating deep learning applications. We posit that this innovative programming
paradigm will stimulate the extensive application and advancement of deep
learning technology and provide a robust foundation for a model-driven future.
|
We study the origins of the five ten-dimensional ``matrix superstring''
theories, supplementing old results with new ones, and find that they all fit
into a unified framework. In all cases the matrix definition of the string in
the limit of vanishingly small coupling is a trivial 1+1 dimensional infra-red
fixed point (an orbifold conformal field theory) characterized uniquely by
matrix versions of the appropriate Green-Schwarz action. The Fock space of the
matrix string is built out of winding T-dual strings. There is an associated
dual supergravity description in terms of the near horizon geometry of the
fundamental string solution of those T-dual strings. The singularity at their
core is related to the orbifold target space in the matrix theory. At
intermediate coupling, for the IIB and SO(32) systems, the matrix string
description is in terms of non-trivial 2+1 dimensional fixed points. Their
supergravity duals involve Anti de-Sitter space (or an orbifold thereof) and
are well-defined everywhere, providing a complete description of the fixed
point theory. In the case of the type IIB system, the two extra organizational
dimensions normally found in F-theory appear here as well. The fact that they
are non-dynamical has a natural interpretation in terms of holography.
|
In this paper, we present convergence guarantees for a modified trust-region
method designed for minimizing objective functions whose value, gradient, and
Hessian estimates are computed with noise. These estimates are produced by
generic stochastic oracles, which are not assumed to be unbiased or consistent.
We introduce these oracles and show that they are more general and have more
relaxed assumptions than the stochastic oracles used in prior literature on
stochastic trust-region methods. Our method utilizes a relaxed step acceptance
criterion and a cautious trust-region radius updating strategy which allows us
to derive exponentially decaying tail bounds on the iteration complexity for
convergence to points that satisfy approximate first- and second-order
optimality conditions. Finally, we present two sets of numerical results. We
first explore the tightness of our theoretical results on an example with
adversarial zeroth- and first-order oracles. We then investigate the
performance of the modified trust-region algorithm on standard noisy
derivative-free optimization problems.
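As a rough illustration of the style of iteration analyzed above, the
following is a minimal Python sketch of a trust-region loop with noisy
oracles. The Cauchy-type step, the relaxation term r in the acceptance test,
and all constants are illustrative assumptions, not the paper's actual
algorithm.

import numpy as np

def noisy_trust_region(f_oracle, g_oracle, x0, delta0=1.0,
                       eta=0.1, r=1e-3, tol=1e-2, max_iter=200):
    # f_oracle/g_oracle return noisy value/gradient estimates (assumed
    # interfaces); r relaxes the classical acceptance test to tolerate noise.
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = g_oracle(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        s = -(delta / gnorm) * g               # Cauchy-type step along -g
        pred = delta * gnorm                   # model-predicted decrease
        ared = f_oracle(x) - f_oracle(x + s)   # noisy actual reduction
        if ared >= eta * pred - r:             # relaxed step acceptance
            x = x + s
            delta = min(2.0 * delta, 10.0)     # cautious radius expansion
        else:
            delta *= 0.5                       # shrink on rejection
    return x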
|
We prove that a pair of heterodimensional cycles can be born at the
bifurcations of a pair of Shilnikov loops (homoclinic loops to a saddle-focus
equilibrium) having a one-dimensional unstable manifold in a volume-hyperbolic
flow with a $\mathbb{Z}_2$ symmetry. We also show that these heterodimensional
cycles can belong to a chain-transitive attractor of the system along with
a persistent homoclinic tangency.
|
We report long-baseline interferometric measurements of circumstellar dust
around massive evolved stars with the MIDI instrument on the Very Large
Telescope Interferometer and provide spectrally dispersed visibilities in the
8-13 micron wavelength band. We also present diffraction-limited observations
at 10.7 micron on the Keck Telescope with baselines up to 8.7 m which explore
larger scale structure. We have resolved the dust shells around the late type
WC stars WR 106 and WR 95, and the enigmatic NaSt1 (formerly WR 122), suspected
to have recently evolved from a Luminous Blue Variable (LBV) stage. For AG Car,
the prototypical LBV in our sample, we marginally resolve structure close to the
star, distinct from the well-studied detached nebula. The dust shells around
the two WC stars show fairly constant size in the 8-13 micron MIDI band, with
Gaussian half-widths of ~ 25 to 40 mas. The compact dust we detect around NaSt1
and AG Car favors recent or ongoing dust formation.
Using the measured visibilities, we build spherically symmetric radiative
transfer models of the WC dust shells which enable detailed comparison with
existing SED-based models. Our results indicate that the inner radii of the
shells are within a few tens of AU from the stars. In addition, our models
favor grain size distributions with large (~ 1 micron) dust grains. This
proximity of the inner dust to the hot central star emphasizes the difficulty
faced by current theories in forming dust in the hostile environment around WR
stars. Although we detect no direct evidence for binarity for these objects,
dust production in a colliding-wind interface in a binary system is a feasible
mechanism in WR systems under these conditions.
|
The explosion of cellular devices in the past few decades has created many
opportunities for the development of future network generations. The 5G
network offers greater transmission speed and lower latency, and therefore
greater capacity for remote execution. The benefits of AI for 5G network
slicing orchestration and management will be discussed in this survey paper. We
will study these topics in light of the EU-funded MonB5G project that works
towards providing zero-touch management and orchestration in the support of
network slicing at massive scales for 5G LTE and beyond.
|
One of the most challenging open problems in heavy quarkonium physics is the
double charm production in $e^+e^-$ annihilation at B factories. The measured
cross section of $e^+ e^- \to J/\psi + \eta_c$ is much larger than leading
order (LO) theoretical predictions. With the nonrelativistic QCD factorization
formalism, we calculate the next-to-leading order (NLO) QCD correction to this
process. Taking all one-loop self-energy, triangle, box, and pentagon diagrams
into account, and factoring the Coulomb-singular term into the $c\bar c$ bound
state wave function, we get an ultraviolet and infrared finite correction to
the cross section of $e^+e^-\to J/\psi + \eta_c$ at $\sqrt{s} =10.6$ GeV. We
find that the NLO QCD correction can substantially enhance the cross section
with a K factor (the ratio of NLO to LO) of about 1.8-2.1; hence it greatly
reduces the large discrepancy between theory and experiment. With $m_c=1.4{\rm
GeV}$ and $\mu=2m_c$, the NLO cross section is estimated to be 18.9 fb, which
reaches the lower bound of the experimental result.
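In equation form, the enhancement quoted above amounts to (all numbers taken
directly from this abstract)
$$\sigma_{\rm NLO}(e^+e^- \to J/\psi+\eta_c) = K\,\sigma_{\rm LO}, \qquad
K \approx 1.8\mbox{--}2.1,$$
so that with $m_c=1.4~{\rm GeV}$ and $\mu=2m_c$ at $\sqrt{s}=10.6$ GeV one
obtains $\sigma_{\rm NLO} \approx 18.9$ fb.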
|
The NEMO-3 experiment measured the half-life of the $2\nu\beta\beta$ decay
and searched for the $0\nu\beta\beta$ decay of $^{116}$Cd. Using $410$ g of
$^{116}$Cd installed in the detector with an exposure of $5.26$ y,
($4968\pm74$) events corresponding to the $2\nu\beta\beta$ decay of $^{116}$Cd
to the ground state of $^{116}$Sn have been observed with a signal to
background ratio of about $12$. The half-life of the $2\nu\beta\beta$ decay has
been measured to be $
T_{1/2}^{2\nu}=[2.74\pm0.04\mbox{(stat.)}\pm0.18\mbox{(syst.)}]\times10^{19}$
y. No events have been observed above the expected background while searching
for $0\nu\beta\beta$ decay. The corresponding limit on the half-life is
determined to be $T_{1/2}^{0\nu} \ge 1.0 \times 10^{23}$ y at the $90\%$ C.L.
which corresponds to an upper limit on the effective Majorana neutrino mass of
$\langle m_{\nu} \rangle \le 1.4-2.5$ eV depending on the nuclear matrix
elements considered. Limits on other mechanisms generating $0\nu\beta\beta$
decay such as the exchange of R-parity violating supersymmetric particles,
right-handed currents and majoron emission are also obtained.
|
We consider the weighted completion time minimization problem for capacitated
parallel machines, which is a fundamental problem in modern cloud computing
environments. We study settings in which the processed jobs may have varying
duration, resource requirements and importance (weight). Each server (machine)
can process multiple concurrent jobs up to its capacity.
Due to the problem's $\mathcal{NP}$-hardness, we study heuristic approaches
with provable approximation guarantees. We first analyze an algorithm that
prioritizes the jobs with the smallest volume-by-weight ratio. We bound its
approximation ratio by a decreasing function of the ratio of the highest
resource demand of any job to the server's capacity.
Then, we use the algorithm for scheduling jobs with resource demands equal to
or smaller than 0.5 of the server's capacity in conjunction with the classic
weighted shortest processing time algorithm for jobs with resource demands
higher than 0.5. We thus create a hybrid, constant approximation algorithm for
two or more machines. We also develop a constant approximation algorithm for
the case with a single machine. This research is the first, to the best of our
knowledge, to propose a polynomial-time algorithm with a constant approximation
ratio for minimizing the weighted sum of job completion times for capacitated
parallel machines.
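As a concrete reading of the first heuristic described above, the following is
a minimal Python sketch of the ordering rule, assuming a job's "volume" is its
duration times its resource demand; this reading, the Job fields, and the
function name are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Job:
    duration: float   # processing time
    demand: float     # resource requirement, as a fraction of capacity
    weight: float     # importance

def volume_by_weight_order(jobs):
    # Prioritize jobs with the smallest volume-by-weight ratio.
    return sorted(jobs, key=lambda j: (j.duration * j.demand) / j.weight)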
|
We describe a light-weight system of bash scripts for efficiently bundling
supercomputing tasks into large jobs, so that one can take advantage of
incentives or discounts for requesting large allocations. The software can
backfill computational tasks, avoiding wasted cycles, and can streamline
collaboration between different users. It is simple to use, functioning
similarly to batch systems like PBS, MOAB, and SLURM.
|
The {\sc Plane Diameter Completion} problem asks, given a plane graph $G$ and
a positive integer $d$, whether $G$ is a spanning subgraph of a plane graph $H$ that
has diameter at most $d$. We examine two variants of this problem where the
input comes with another parameter $k$. In the first variant, called BPDC, $k$
upper bounds the total number of edges to be added and in the second, called
BFPDC, $k$ upper bounds the number of additional edges per face. We prove that
both problems are {\sf NP}-complete, the first even for 3-connected graphs of
face-degree at most 4 and the second even when $k=1$ on 3-connected graphs of
face-degree at most 5. In this paper we give parameterized algorithms for both
problems that run in $O(n^{3})+2^{2^{O((kd)^2\log d)}}\cdot n$ steps.
|
In this paper, we study a new discrete tree and the resulting branching
process, which we call the \textbf{E}rlang \textbf{W}eighted
\textbf{T}ree (\textbf{EWT}). The EWT appears as the local weak limit of a
random graph model proposed in~\cite{La2015}. In contrast to the local weak
limit of well-known random graph models, the EWT has an interdependent
structure. In particular, its vertices encode a multi-type branching process
with uncountably many types.
We derive the main properties of the EWT, such as the probability of
extinction, growth rate, etc. We show that the probability of extinction is the
smallest fixed point of an operator. We then take a point process perspective
and analyze the growth rate operator. We derive the Krein--Rutman eigenvalue
$\beta_0$ and the corresponding eigenfunctions of the growth operator, and show
that the probability of extinction equals one if and only if $\beta_0 \leq 1$.
|
Space-charge dominated streamer discharges can emerge in free space from
single electrons. We reinvestigate the Raether-Meek criterion and show that
streamer emergence depends not only on ionization and attachment rates and gap
length, but also on electron diffusion. Motivated by simulation results, we
derive an explicit quantitative criterion for the avalanche-to-streamer
transition both for pure non-attaching gases and for air, under the assumption
that the avalanche emerges from a single free electron and evolves in a
homogeneous field.
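For context, the classical criterion being reinvestigated here states that an
avalanche turns into a streamer once the integrated effective ionization
reaches a threshold,
$$\int_0^{d} \bar{\alpha}(E(x))\,\mathrm{d}x \;\approx\; 18\mbox{--}21,$$
where $\bar{\alpha}=\alpha-\eta$ is the ionization coefficient minus the
attachment coefficient; the commonly quoted threshold range is given for
orientation only and is not this paper's refined, diffusion-dependent
criterion.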
|
The dynamics of stability and collapse of a trapped atomic Bose-Einstein
condensate (BEC) coupled to a molecular one is studied using the time-dependent
Gross-Pitaevskii (GP) equation including a nonlinear interaction term which can
transform two atoms into a molecule and vice versa. We find interesting
oscillation of the number of atoms and molecules for a BEC of fixed mass. This
oscillation is a consequence of continuous transformation in the condensate of
two atoms into a molecule and vice versa. For the study of collapse, an
absorptive contact interaction and an imaginary quartic three-body
recombination term are included in the GP equation. A collapse of one or both
components is possible when one or both of the corresponding nonlinear terms
in the GP equation are attractive.
|
The purpose of this paper is to explore the impact of the cloud technology on
current research information systems (CRIS). Based on an overview of published
literature and on empirical evidence from surveys, the paper presents main
characteristics, delivery models, service levels and general benefits of cloud
computing. The second part assesses how cloud computing challenges research
information management, from three angles: networking, specific
benefits, and the ingestion of data in the cloud. The third part describes
three aspects of the implementation of current research systems in the clouds,
i.e. service models, requirements and potential risks and barriers. The paper
concludes with some perspectives for future work. The paper is written for CRIS
administrators and users, in order to improve research information management
and to contribute to future development and implementation of these systems,
but also for scholars and students who want to have detailed knowledge on this
topic.
|
We present the exact O(alpha) correction to the process e+ e- -> e+ e- +
gamma in the low angle luminosity regime at SLC/LEP energies. We give explicit
formulas for the completely differential cross section. As an important
application, we illustrate the size of the respective corrections of O(alpha^2)
to the SLC/LEP luminosity cross section. We show explicitly that our results
have the correct infrared limit, as a cross-check. Some comments are made about
the implementation of our results in the framework of a Monte Carlo event
generator. This latter implementation will appear elsewhere.
|
The operator associated with the radially integrated Wigner function is found
to lack justification as a phase operator.
|
Atomic ions trapped in ultra-high vacuum form an especially well-understood
and useful physical system for quantum information processing. They provide
excellent shielding of quantum information from environmental noise, while
strong, well-controlled laser interactions readily provide quantum logic gates.
A number of basic quantum information protocols have been demonstrated with
trapped ions. Much current work aims at the construction of large-scale
ion-trap quantum computers using complex microfabricated trap arrays. Several
groups are also actively pursuing quantum interfacing of trapped ions with
photons.
|
We describe a novel approach for the rational design and synthesis of
self-assembled periodic nanostructures using martensitic phase transformations.
We demonstrate this approach in a thin film of perovskite SrSnO3 with
reconfigurable periodic nanostructures consisting of regularly spaced regions
of sharply contrasted dielectric properties. The films can be designed to have
different periodicities and relative phase fractions via chemical doping or
strain engineering. The dielectric contrast within a single film can be tuned
using temperature and laser wavelength, effectively creating a variable
photonic crystal. Our results show the realistic possibility of designing
large-area self-assembled periodic structures using martensitic phase
transformations with the potential of implementing "built-to-order"
nanostructures for tailored optoelectronic functionalities.
|
Maximum distance profile (MDP) convolutional codes have the property that
their column distances are as large as possible for given rate and degree.
There exists a well-known criterion to check whether a code is MDP using the
generator or the parity-check matrix of the code.
In this paper, we show that under the assumption that $n-k$ divides $\delta$
or $k$ divides $\delta$, a polynomial matrix that fulfills the MDP criterion is
actually always left prime. In particular, when $k$ divides $\delta$, this
implies that each MDP convolutional code is noncatastrophic. Moreover, when
$n-k$ and $k$ do not divide $\delta$, we show that the MDP criterion is in
general not enough to ensure left primeness. In this case, one additional
assumption still allows us to guarantee the result.
|
Intelligent creatures can explore their environments and learn useful skills
without supervision. In this paper, we propose DIAYN ('Diversity is All You
Need'), a method for learning useful skills without a reward function. Our
proposed method learns skills by maximizing an information theoretic objective
using a maximum entropy policy. On a variety of simulated robotic tasks, we
show that this simple objective results in the unsupervised emergence of
diverse skills, such as walking and jumping. In a number of reinforcement
learning benchmark environments, our method is able to learn a skill that
solves the benchmark task despite never receiving the true task reward. We show
how pretrained skills can provide a good parameter initialization for
downstream tasks, and can be composed hierarchically to solve complex, sparse
reward tasks. Our results suggest that unsupervised discovery of skills can
serve as an effective pretraining mechanism for overcoming challenges of
exploration and data efficiency in reinforcement learning.
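A minimal sketch of the kind of skill-discovery pseudo-reward this objective
induces (following the paper's information-theoretic formulation): the agent
is rewarded for visiting states from which a learned discriminator can infer
the active skill. The discriminator interface and the uniform skill prior are
assumptions of this sketch.

import numpy as np

def diversity_reward(disc_log_prob, skill, state, n_skills):
    # r(s, z) = log q(z|s) - log p(z), with a uniform prior p(z) = 1/n_skills;
    # disc_log_prob(skill, state) is a learned discriminator (assumed interface).
    return disc_log_prob(skill, state) - np.log(1.0 / n_skills)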
|
In survival analysis, prediction models are needed as stand-alone tools and
in applications of causal inference to estimate nuisance parameters. The super
learner is a machine learning algorithm which combines a library of prediction
models into a meta learner based on cross-validated loss. In right-censored
data, the choice of the loss function and the estimation of the expected loss
need careful consideration. We introduce the state learner, a new super learner
for survival analysis, which simultaneously evaluates libraries of prediction
models for the event of interest and the censoring distribution. The state
learner can be applied to all types of survival models, works in the presence
of competing risks, and does not require a single pre-specified estimator of
the conditional censoring distribution. We establish an oracle inequality for
the state learner and investigate its performance through numerical
experiments. We illustrate the application of the state learner with prostate
cancer data, as a stand-alone prediction tool, and, for causal inference, as a
way to estimate the nuisance parameter models of a smooth statistical
functional.
|
Since the collisional mean free path of charged particles in hot accretion
flows can be significantly larger than the typical length-scale of the
accretion flows, the gas pressure is anisotropic with respect to the magnetic
field lines. For
such a large collisional mean free path, the resistive dissipation can also
play a key role in hot accretion flows. In this paper, we study the dynamics of
resistive hot accretion flows in the presence of anisotropic pressure. We
present a set of self-similar solutions where the flow variables are assumed to
be a function only of radius. Our results show that the radial and rotational
velocities and the sound speed increase considerably with the strength of
anisotropic pressure. The increases in infall velocity and sound speed are
more significant if the resistive dissipation is taken into account. We find
that such changes depend on the field strength. Our results indicate that the
resistive heating is $10\%$ of the heating by the work done by anisotropic
pressure when the strength of anisotropic pressure is 0.1. This value becomes
higher when the strength of anisotropic pressure is reduced. The increase in
disk temperature can lead to heating and acceleration of the electrons in such
flows. This helps explain the origin of phenomena such as the flares in the
Galactic Center source Sgr A*.
|
We consider the universality class of the two-dimensional Tricritical Ising
Model. The scaling form of the free-energy naturally leads to the definition of
universal ratios of critical amplitudes which may have experimental relevance.
We compute these universal ratios by a combined use of results coming from
Perturbed Conformal Field Theory, Integrable Quantum Field Theory and numerical
methods.
|
In this paper, we prove a functorial aspect of the formal geometric
quantization procedure of non-compact spin-c manifolds.
|
Hopfield attractor networks are robust distributed models of human memory,
but lack a general mechanism for effecting state-dependent attractor
transitions in response to input. We propose construction rules such that an
attractor network may implement an arbitrary finite state machine (FSM), where
states and stimuli are represented by high-dimensional random vectors, and all
state transitions are enacted by the attractor network's dynamics. Numerical
simulations show the capacity of the model, in terms of the maximum size of
implementable FSM, to be linear in the size of the attractor network for dense
bipolar state vectors, and approximately quadratic for sparse binary state
vectors. We show that the model is robust to imprecise and noisy weights, and
so a prime candidate for implementation with high-density but unreliable
devices. By endowing attractor networks with the ability to emulate arbitrary
FSMs, we propose a plausible path by which FSMs could exist as a distributed
computational primitive in biological neural networks.
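The paper's FSM construction rules are not reproduced here, but the attractor
dynamics they build on can be sketched in a few lines of Python: random dense
bipolar vectors are stored as fixed points via Hebbian outer-product weights,
and the network settles back to a stored state from a corrupted probe. Sizes
and the corruption level are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 20                            # neurons, stored states
P = rng.choice([-1.0, 1.0], size=(m, n))  # random dense bipolar state vectors

W = (P.T @ P) / n                         # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def settle(s, steps=20):
    # Synchronous Hopfield dynamics; stored patterns act as attractors.
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return s

probe = P[0] * np.where(rng.random(n) < 0.1, -1.0, 1.0)  # flip ~10% of bits
print(np.array_equal(settle(probe), P[0]))  # typically True: state recovered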
|
We study the joint unicast and multi-group multicast transmission in massive
multiple-input-multiple-output (MIMO) systems. We consider a system model that
accounts for channel estimation and pilot contamination, and derive achievable
spectral efficiencies (SEs) for unicast and multicast user terminals (UTs),
under maximum ratio transmission and zero-forcing precoding. For unicast
transmission, our objective is to maximize the weighted sum SE of the unicast
UTs, and for the multicast transmission, our objective is to maximize the
minimum SE of the multicast UTs. These two objectives are coupled in a
conflicting manner, due to their shared power resource. Therefore, we formulate
a multiobjective optimization problem (MOOP) for the two conflicting
objectives. We derive the Pareto boundary of the MOOP analytically. As each
Pareto optimal point describes a particular efficient trade-off between the two
objectives of the system, we determine the values of the system parameters
(uplink training powers, downlink transmission powers, etc.) to achieve any
desired Pareto optimal point. Moreover, we prove that the Pareto region is
convex, hence the system should serve the unicast and multicast UTs at the same
time-frequency resource. Finally, we validate our results using numerical
simulations.
|
In previous publications I have proposed a geometrical framework underpinning
the local, realistic, and deterministic origins of the strong quantum
correlations observed in Nature, without resorting to superdeterminism,
retrocausality, or other conspiracy loopholes usually employed to circumvent
Bell's argument against such a possibility. The geometrical framework I have
proposed is based on a Clifford-algebraic interplay between the quaternionic
3-sphere, or $S^3$, which I have taken to model the geometry of the
three-dimensional physical space in which we are confined to perform all our
physical experiments, and an octonion-like 7-sphere, or $S^7$, which arises as
an algebraic representation space of this quaternionic 3-sphere. In this paper
I first review the above geometrical framework, then strengthen its
Clifford-algebraic foundations employing the language of geometric algebra, and
finally refute some of its critiques.
|
Standard neuroimaging techniques provide non-invasive access not only to
human brain anatomy but also to its physiology. The activity recorded with
these techniques is generally called functional imaging, but what is observed
per se is an instance of dynamics, from which functional brain activity should
be extracted. Distinguishing between bare dynamics and genuine function is a
highly non-trivial task, but a crucially important one when comparing
experimental observations and interpreting their significance. Here we
illustrate how the ability of neuroimaging to extract genuine functional brain
activity is bounded by the structure of functional representations. To do so,
we first provide a simple definition of functional brain activity from a
system-level brain imaging perspective. We then review how the properties of
the space on which brain activity is represented allow defining relations
ranging from distinguishability to accessibility of observed imaging data. We
show how these properties result from the structure defined on dynamical data
and dynamics-to-function projections, and consider some implications that the
way and extent to which these are defined have for the interpretation of
experimental data from standard system-level brain recording techniques.
|
We propose a novel, theoretically-grounded, acquisition function for Batch
Bayesian optimization informed by insights from distributionally ambiguous
optimization. Our acquisition function is a lower bound on the well-known
Expected Improvement function, which requires evaluation of a Gaussian
Expectation over a multivariate piecewise affine function. Our bound is
computed instead by evaluating the best-case expectation over all probability
distributions consistent with the same mean and variance as the original
Gaussian distribution. Unlike alternative approaches, including Expected
Improvement, our proposed acquisition function avoids multi-dimensional
integrations entirely, and can be computed exactly - even on large batch sizes
- as the solution of a tractable convex optimization problem. Our suggested
acquisition function can also be optimized efficiently, since first and second
derivative information can be calculated inexpensively as by-products of the
acquisition function calculation itself. We derive various novel theorems that
ground our work theoretically and we demonstrate superior performance via
simple motivating examples, benchmark functions and real-world problems.
|
Here we introduce researchers in algebraic biology to the exciting new field
of cophylogenetics. Cophylogenetics is the study of concomitantly evolving
organisms (or genes), such as host and parasite species. Thus the natural
objects of study in cophylogenetics are tuples of related trees, instead of
individual trees. We review various research topics in algebraic statistics for
phylogenetics, and propose analogs for cophylogenetics. In particular we
propose spaces of cophylogenetic trees, cophylogenetic reconstruction, and
cophylogenetic invariants. We conclude with open problems.
|
Small-scale features of shallow water flow obtained from direct numerical
simulation (DNS) with two different computational codes for the shallow water
equations are gathered offline and subsequently employed with the aim of
constructing a reduced-order correction. This is used to facilitate
high-fidelity online flow predictions at much reduced costs on coarse meshes.
The resolved small-scale features at high resolution represent subgrid
properties for the coarse representation. Measurements of the subgrid dynamics
are obtained as the difference between the evolution of a coarse grid solution
and the corresponding DNS result. The measurements are sensitive to the
particular numerical methods used for the simulation on coarse computational
grids and can be used to approximately correct the associated discretization
errors. The subgrid features are decomposed into empirical orthogonal functions
(EOFs), after which a corresponding correction term is constructed. By
increasing the number of EOFs in the approximation of the measured values the
correction term can in principle be made arbitrarily accurate. Both
computational methods investigated here show a significant decrease in the
simulation error already when applying the correction based on the dominant
EOFs only. The error reduction accounts for the particular discretization
errors that occur and is hence specific to the simulation method that is
adopted. This improvement is also observed for very coarse grids, which
may be used for computational model reduction in geophysical and turbulent flow
problems.
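The EOF decomposition at the heart of this correction can be sketched
generically as a thin SVD of the matrix of measured subgrid differences; the
snapshot layout and function name below are assumptions of this sketch, not
the exact pipeline used with the two codes.

import numpy as np

def leading_eofs(subgrid_snapshots, k):
    # subgrid_snapshots: (n_points, n_snapshots) matrix of measured
    # differences between the coarse-grid evolution and the DNS result.
    X = subgrid_snapshots - subgrid_snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k]   # k dominant spatial modes and their amplitudes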
|
With the increased focus on making cities "smarter", we see an upsurge in
investment in sensing technologies embedded in the urban infrastructure. The
deployment of GPS sensors aboard taxis and buses, smartcards replacing paper
tickets, and other similar initiatives have led to an abundance of data on
human mobility, generated at scale and available real-time. Further still,
users of social media platforms such as Twitter and LBSNs continue to
voluntarily share multimedia content revealing in-situ information on their
respective localities. The availability of such longitudinal multimodal data
allows not only for the characterization of the dynamics of the city, but
also for the detection of anomalies resulting from events (e.g., concerts)
that transiently disrupt such dynamics. In this work, we investigate the
capabilities of such urban sensor modalities, both physical and social, in
detecting a variety of local events of varying intensities (e.g., concerts)
using statistical outlier detection techniques. We look at loading levels of
buses arriving at stops, telecommunication records, and taxi trips, accrued via the
public APIs made available through the local transport authorities from
Singapore and New York City, and Twitter/Foursquare check-ins collected during
the same period, and evaluate against a set of events assimilated from multiple
event websites. In particular, we report on our early findings on (1) the
spatial impact evident via each modality (i.e., how far from the event venue is
the anomaly still present), and (2) the utility in combining decisions from the
collection of sensors using rudimentary fusion techniques.
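A rudimentary version of the per-modality outlier detection described above
can be sketched as a z-score test on a time series of counts; the threshold
and the function name are illustrative assumptions, not the paper's exact
detectors or fusion scheme.

import numpy as np

def zscore_anomalies(counts, threshold=3.0):
    # Flag time bins whose sensor counts (e.g., taxi trips or check-ins near
    # a venue) deviate from the mean by more than `threshold` standard
    # deviations.
    counts = np.asarray(counts, dtype=float)
    z = (counts - counts.mean()) / counts.std()
    return np.flatnonzero(np.abs(z) > threshold)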
|
AI assistants for coding are on the rise. However, one of the reasons
developers and companies avoid harnessing their full potential is the
questionable security of the generated code. This paper first reviews the
current state-of-the-art and identifies areas for improvement on this issue.
Then, we propose a systematic approach based on prompt-altering methods to
achieve better code security of (even proprietary black-box) AI-based code
generators such as GitHub Copilot, while minimizing the complexity of the
application from the user point-of-view, the computational resources, and
operational costs. In sum, we propose and evaluate three prompt altering
methods: (1) scenario-specific, (2) iterative, and (3) general clause, while we
discuss their combination. Contrary to the audit of code security, the latter
two of the proposed methods require no expert knowledge from the user. We
assess the effectiveness of the proposed methods on the GitHub Copilot using
the OpenVPN project in realistic scenarios, and we demonstrate that the
proposed methods reduce the number of insecure generated code samples by up to
16\% and increase the number of secure code by up to 8\%. Since our approach
does not require access to the internals of the AI models, it can in general be
applied to any AI-based code synthesizer, not only GitHub Copilot.
|
To alleviate the congestion caused by rapid growth in demand for mobile data,
wireless service providers (WSPs) have begun encouraging users to offload some
of their traffic onto supplementary network technologies, e.g., offloading from
3G or 4G to WiFi or femtocells. With the growing popularity of such offerings,
a deeper understanding of the underlying economic principles and their impact
on technology adoption is necessary. To this end, we develop a model for user
adoption of a base technology (e.g., 3G) and a bundle of the base plus a
supplementary technology (e.g., 3G + WiFi). Users individually make their
adoption decisions based on several factors, including the technologies'
intrinsic qualities, negative congestion externalities from other subscribers,
and the flat access rates that a WSP charges. We then show how these user-level
decisions translate into aggregate adoption dynamics and prove that these
converge to a unique equilibrium for a given set of exogenously determined
system parameters. We fully characterize these equilibria and study adoption
behaviors of interest to a WSP. We then derive analytical expressions for the
revenue-maximizing prices and optimal coverage factor for the supplementary
technology and examine some resulting non-intuitive user adoption behaviors.
Finally, we develop a mobile app to collect empirical 3G/WiFi usage data and
numerically investigate the profit-maximizing adoption levels when a WSP
accounts for its cost of deploying the supplemental technology and savings from
offloading traffic onto this technology.
|
In this letter, a binary sparse Bayesian learning (BSBL) algorithm is
proposed to solve the one-bit compressed sensing (CS) problem in both the
single measurement vector (SMV) and multiple measurement vector (MMV)
settings. By utilising
the Bussgang-like decomposition, the one-bit CS problem can be approximated as
a standard linear model. Consequently, the standard SBL algorithm can be
naturally incorporated. Numerical results demonstrate the effectiveness of the
BSBL algorithm.
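A toy illustration of the decomposition's form (not the letter's exact
construction or its SBL inference step): for approximately Gaussian
measurements, the sign nonlinearity splits into a scaled linear part plus a
distortion uncorrelated with the input.

import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

z = A @ x
y = np.sign(z)                        # one-bit measurements

# Bussgang-like split: sign(z) = alpha * z + d, with d uncorrelated with z
# when z is (approximately) Gaussian; alpha = sqrt(2/pi) / std(z).
alpha = np.sqrt(2.0 / np.pi) / z.std()
d = y - alpha * z                     # distortion term
print(abs(np.mean(d * z)))            # close to 0: d ~ uncorrelated with z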
|
In this work, we show that a recently proposed method for experimental
nonlinear modal analysis based on the extended periodic motion concept is well
suited to extract modal properties for strongly nonlinear systems (i.e. in the
presence of large frequency shifts, high and nonlinear damping, changes of the
mode shape, and higher harmonics). To this end, we design a new test rig that
exhibits a large extent of friction-induced damping (modal damping ratio up to
15 %) and frequency shift by 36 %. The specimen, called RubBeR, is a
cantilevered beam under the influence of dry friction, ranging from full stick
to mainly sliding. With the specimen's design, the measurements are well
repeatable for a system subjected to dry frictional force. Then, we apply the
method to the specimen and show that single-point excitation is sufficient to
track the modal properties even though the deflection shape changes with
amplitude. Computed frequency responses using a single nonlinear-modal
oscillator with the identified modal properties agree well with measured
reference curves of different excitation levels, indicating the modal
properties' significance and accuracy.
|
Navigation is a rich and well-grounded problem domain that drives progress in
many different areas of research: perception, planning, memory, exploration,
and optimisation in particular. Historically these challenges have been
separately considered and solutions built that rely on stationary datasets -
for example, recorded trajectories through an environment. These datasets
cannot be used for decision-making and reinforcement learning, however, and in
general the perspective of navigation as an interactive learning task, where
the actions and behaviours of a learning agent are learned simultaneously with
the perception and planning, is relatively unsupported. Thus, existing
navigation benchmarks generally rely on static datasets (Geiger et al., 2013;
Kendall et al., 2015) or simulators (Beattie et al., 2016; Shah et al., 2018).
To support and validate research in end-to-end navigation, we present
StreetLearn: an interactive, first-person, partially-observed visual
environment that uses Google Street View for its photographic content and broad
coverage, and give performance baselines for a challenging goal-driven
navigation task. The environment code, baseline agent code, and the dataset are
available at http://streetlearn.cc
|
Physics-based character animation has seen significant advances in recent
years with the adoption of Deep Reinforcement Learning (DRL). However,
DRL-based learning methods are usually computationally expensive and their
performance crucially depends on the choice of hyperparameters. Tuning
hyperparameters for these methods often requires repetitive training of control
policies, which is even more computationally prohibitive. In this work, we
propose a novel Curriculum-based Multi-Fidelity Bayesian Optimization framework
(CMFBO) for efficient hyperparameter optimization of DRL-based character
control systems. Using curriculum-based task difficulty as fidelity criterion,
our method improves searching efficiency by gradually pruning search space
through evaluation on easier motor skill tasks. We evaluate our method on two
physics-based character control tasks: character morphology optimization and
hyperparameter tuning of DeepMimic. Our algorithm significantly outperforms
state-of-the-art hyperparameter optimization methods applicable for
physics-based character animation. In particular, we show that hyperparameters
optimized through our algorithm result in at least a 5x efficiency gain
compared to the author-released settings in DeepMimic.
|
The smart grid concept has transformed the traditional power grid into a
massive cyber-physical system that depends on advanced two-way communication
infrastructure to integrate a myriad of different smart devices. While the
introduction of the cyber component has made the grid much more flexible and
efficient with so many smart devices, it also broadened the attack surface of
the power grid. Particularly, compromised devices pose a great danger to the
healthy operation of the smart grid. For instance, attackers can control
the devices to change the behaviour of the grid and can impact the
measurements. In this paper, to detect such misbehaving malicious smart grid
devices, we propose a machine learning and convolution-based classification
framework. Our framework specifically utilizes system and library call lists at
the kernel level of the operating system on both resource-limited and
resource-rich smart grid devices such as RTUs, PLCs, PMUs, and IEDs. Focusing
on the types and other valuable features extracted from the system calls, the
framework can successfully identify malicious smart-grid devices. In order to
test the efficacy of the proposed framework, we built a representative testbed
conforming to the IEC-61850 protocol suite and evaluated its performance with
different system calls. The proposed framework in different evaluation
scenarios yields very high accuracy (avg. 91%), which reveals that the
framework is effective at detecting compromised smart grid devices.
|
This paper is wrong and is therefore withdrawn.
|
Image segmentation in total knee arthroplasty is crucial for precise
preoperative planning and accurate implant positioning, leading to improved
surgical outcomes and patient satisfaction. The biggest challenges of image
segmentation in total knee arthroplasty include accurately delineating complex
anatomical structures, dealing with image artifacts and noise, and developing
robust algorithms that can handle anatomical variations and pathologies
commonly encountered in patients. The potential of using machine learning for
image segmentation in total knee arthroplasty lies in its ability to improve
segmentation accuracy, automate the process, and provide real-time assistance
to surgeons, leading to enhanced surgical planning, implant placement, and
patient outcomes. This paper proposes a methodology to use deep learning for
robust and real-time total knee arthroplasty image segmentation. The deep
learning model, trained on a large dataset, demonstrates outstanding
performance in accurately segmenting both the implanted femur and tibia,
achieving an impressive mean-Average-Precision (mAP) of 88.83 when compared to
the ground truth, while also achieving a real-time segmentation speed of 20 frames
per second (fps). We have introduced a novel methodology for segmenting
implanted knee fluoroscopic or x-ray images that showcases remarkable levels of
accuracy and speed, paving the way for various potential extended applications.
|
We calculate the massive Wilson coefficients for the heavy flavor
contributions to the non-singlet charged current deep-inelastic scattering
structure function $xF_3^{W^+}(x,Q^2)+xF_3^{W^-}(x,Q^2)$ in the asymptotic
region $Q^2 \gg m^2$ to 3-loop order in Quantum Chromodynamics (QCD) at general
values of the Mellin variable $N$ and the momentum fraction $x$. Besides the
heavy quark pair production also the single heavy flavor excitation $s
\rightarrow c$ contributes. Numerical results are presented for the charm quark
contributions and consequences on the Gross-Llewellyn Smith sum rule are
discussed.
|
The two-proton removal reaction from 28Mg projectiles has been studied at 93
MeV/u at the NSCL. First coincidence measurements of the heavy 26Ne projectile
residues, the removed protons and other light charged particles enabled the
relative cross sections from each of the three possible elastic and inelastic
proton removal mechanisms to be determined. These more final-state-exclusive
measurements are key for further interrogation of these reaction mechanisms and
use of the reaction channel for quantitative spectroscopy of very neutron-rich
nuclei. The relative and absolute yields of the three contributing mechanisms
are compared to reaction model expectations - based on the use of eikonal
dynamics and sd-shell-model structure amplitudes.
|
The ability to control single dopants in solid-state devices has opened the
way towards reliable quantum computation schemes. From this perspective, it is
essential to understand the impact of interfaces and electric fields, inherent
to coherent electronic manipulation, on the dopants' atomic-scale properties.
This requires fine energetic and spatial resolution of the energy spectrum and
the wave-function, respectively. Here we present an experiment
fulfilling both conditions: we perform transport on single donors in silicon
close to a vacuum interface using a scanning tunneling microscope (STM) in the
single electron tunneling regime. The spatial degrees of freedom of the STM tip
provide a versatility allowing a unique understanding of electrostatics. We
obtain the absolute energy scale from the thermal broadening of the resonant
peaks, allowing us to deduce the charging energies of the donors. Finally, we
use a rate-equations model to derive the current in the presence of an excited
state,
highlighting the benefits of the highly tunable vacuum tunnel rates which
should be exploited in further experiments. This work provides a general
framework to investigate dopant-based systems at the atomic scale.
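For orientation, rate-equation models of this kind reduce, in the simplest
single-level case, to the textbook sequential-tunneling expression
$$I = e\,\frac{\Gamma_{\rm in}\,\Gamma_{\rm out}}{\Gamma_{\rm in}+\Gamma_{\rm out}},$$
where $\Gamma_{\rm in}$ and $\Gamma_{\rm out}$ are the tunnel-in and
tunnel-out rates; an excited state adds further channels with their own rates.
This standard expression is quoted for context only and is not the specific
model of the paper.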
|
Adversary thinking is an essential skill for cybersecurity experts, enabling
them to understand cyber attacks and set up effective defenses. While this
skill is commonly exercised by Capture the Flag games and hands-on activities,
we complement these approaches with a key innovation: undergraduate students
learn methods of network attack and defense by creating educational games in a
cyber range. In this paper, we present the design of two courses, instruction
and assessment techniques, as well as our observations over the last three
semesters. The students report they had a unique opportunity to deeply
understand the topic and practice their soft skills, as they presented their
results at a faculty open day event. Their peers, who played the created games,
rated the quality and educational value of the games overwhelmingly positively.
Moreover, the open day raised awareness about cybersecurity and research and
development in this field at our faculty. We believe that sharing our teaching
experience will be valuable for instructors planning to introduce active
learning of cybersecurity and adversary thinking.
|
In a recent work, a successful prediction has been made for $\sin^2 \theta_W$
at an energy scale of O(TeV) based on the Dirac quantization condition of an
electroweak monopole of the EW-$\nu_R$ model. The fact that such a prediction
can be made has prompted the following question: Can $SU(2)$ be unified with
$U(1)$ at O(TeV) scale since a prediction for $\sin^2 \theta_W$ necessarily
relates the $U(1)$ coupling $g^{\prime}$ to the $SU(2)$ weak coupling $g$? It
is shown in this manuscript that this can be accomplished by embedding $SU(2)
\times U(1)$ into $SU(3)_W$ (The Weak Eightfold Way) with the following
results: 1) the same prediction of the weak mixing angle is obtained; 2) the
scalar representations of $SU(3)_W$ contain all those that are needed to build
the EW-$\nu_R$ model and, in particular, the real Higgs triplet used in the
construction of the electroweak monopole; 3) anomaly freedom requires the
existence of mirror fermions present in the EW-$\nu_R$ model; 4) Vector-Like
Quarks (VLQ) with unconventional electric charges are needed to complete the
$SU(3)_W$ representations containing the right-handed up-quarks, with
interesting experimental implications such as the prediction of doubly-charged
hybrid mesons.
|