title | abstract
---|---
Balanced Allocations with the Choice of Noise | We consider the allocation of $m$ balls (jobs) into $n$ bins (servers). In
the standard Two-Choice process, at each step $t=1,2,\ldots,m$ we first sample
two randomly chosen bins, compare their two loads and then place a ball in the
least loaded bin. It is well-known that for any $m\geq n$, this results in a
gap (difference between the maximum and average load) of $\log_2\log
n+\Theta(1)$ (with high probability).
In this work, we consider Two-Choice in different settings with noisy load
comparisons. One key setting involves an adaptive adversary whose power is
limited by some threshold $g\in\mathbb{N}$. In each step, such an adversary can
determine the result of any load comparison between two bins whose loads differ
by at most $g$, while if the load difference is greater than $g$, the
comparison is correct.
For this adversarial setting, we first prove that for any $m \geq n$ the gap
is $O(g+\log n)$ with high probability. Then through a refined analysis we
prove that if $g\leq\log n$, then for any $m \geq n$ the gap is
$O(\frac{g}{\log g}\cdot\log\log n)$. For constant values of $g$, this
generalizes the heavily loaded analysis of [BCSV06, TW14] for the Two-Choice
process, and establishes that asymptotically the same gap bound holds even if
load comparisons among "similarly loaded" bins are wrong. Finally, we
complement these upper bounds with tight lower bounds, which establish an
interesting phase transition on how the parameter $g$ impacts the gap.
The analysis also applies to settings with outdated and delayed information.
For example, for the setting of [BCEFN12] where balls are allocated in
consecutive batches of size $b=n$, we present an improved and tight gap bound
of $\Theta(\frac{\log n}{\log\log n})$. This bound also extends to a range of values of $b$ and applies to a relaxed setting where the reported load of a bin
can be any load value from the last $b$ steps.
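A minimal simulation sketch of the adversarial setting described above may help fix ideas; the "always mislead" rule below is just one admissible $g$-bounded adversary, and all parameter values are illustrative:

```python
import random

def two_choice_with_noise(n, m, g, seed=0):
    """Simulate Two-Choice where any comparison between bins whose loads
    differ by at most g is decided adversarially (here: always wrongly)."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        if abs(loads[i] - loads[j]) <= g:
            # adversarial comparison: place the ball in the heavier bin
            target = i if loads[i] >= loads[j] else j
        else:
            # correct comparison: place the ball in the lighter bin
            target = i if loads[i] < loads[j] else j
        loads[target] += 1
    return max(loads) - m / n  # the gap

print(two_choice_with_noise(n=1000, m=10000, g=2))
```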
|
Learning Oriented Cross-Entropy Approach to User Association in
Load-Balanced HetNet | This letter considers optimizing user association in a heterogeneous network
via utility maximization, which is a combinatorial optimization problem due to
integer constraints. Different from existing solutions based on convex
optimization, we instead propose a cross-entropy (CE)-based algorithm
inspired by a sampling approach developed in machine learning. Adopting a
probabilistic model, we first reformulate the original problem as a CE
minimization problem which aims to learn the probability distribution of
variables in the optimal association. An efficient solution by stochastic
sampling is introduced to solve the learning problem. The integer constraint is
directly handled by the proposed algorithm, which is robust to network
deployment and algorithm parameter choices. Simulations verify that the
proposed CE approach achieves near-optimal performance quite efficiently.
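As a rough illustration of the CE machinery described above (not the letter's exact algorithm), the generic discrete CE method iterates sampling, elite selection, and distribution re-estimation; `utility`, the sample sizes, and the smoothing constant are assumptions:

```python
import numpy as np

def ce_user_association(utility, n_users, n_bs, n_samples=200,
                        elite_frac=0.1, n_iters=50, smooth=0.7, seed=0):
    """utility: black-box network utility of an association vector
    (one base-station index per user); an assumed interface."""
    rng = np.random.default_rng(seed)
    probs = np.full((n_users, n_bs), 1.0 / n_bs)   # association distribution
    for _ in range(n_iters):
        samples = np.stack([
            [rng.choice(n_bs, p=probs[u]) for u in range(n_users)]
            for _ in range(n_samples)])
        scores = np.array([utility(s) for s in samples])
        elite = samples[np.argsort(scores)[-int(elite_frac * n_samples):]]
        new_probs = np.zeros_like(probs)           # re-estimate from elites
        for s in elite:
            new_probs[np.arange(n_users), s] += 1.0
        new_probs /= len(elite)
        probs = smooth * new_probs + (1 - smooth) * probs
    return probs.argmax(axis=1)                    # near-optimal association
```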
|
On the similarity of meshless discretizations of Peridynamics and
Smooth-Particle Hydrodynamics | This paper discusses the similarity of meshless discretizations of
Peridynamics and Smooth-Particle-Hydrodynamics (SPH), if Peridynamics is
applied to classical material models based on the deformation gradient. We show
that the discretized equations of both methods coincide if nodal integration is
used. This equivalence implies that Peridynamics reduces to an old meshless
method and all instability problems of collocation-type particle methods apply.
These instabilities arise as a consequence of the nodal integration scheme,
which causes rank-deficiency and leads to spurious zero-energy modes. As a
result of the demonstrated equivalence to SPH, enhanced implementations of
Peridynamics should employ more accurate integration schemes.
|
UNBIAS PUF: A Physical Implementation Bias Agnostic Strong PUF | The Physical Unclonable Function (PUF) is a promising hardware security
primitive because of its inherent uniqueness and low cost. To extract the
device-specific variation from delay-based strong PUFs, complex routing
constraints are imposed to achieve symmetric path delays; and systematic
variations can severely compromise the uniqueness of the PUF. In addition, the
metastability of the arbiter circuit of an Arbiter PUF can also degrade the
quality of the PUF due to the induced instability. In this paper we propose a
novel strong UNBIAS PUF that can be implemented purely in Register Transfer Level (RTL) code, such as Verilog, without imposing any physical design
constraints or delay characterization effort to solve the aforementioned
issues. Efficient inspection bit prediction models for unbiased response
extraction are proposed and validated. Our experimental results of the strong
UNBIAS PUF show 5.9% intra-Fractional Hamming Distance (FHD) and 45.1%
inter-FHD on 7 Field Programmable Gate Array (FPGA) boards without applying any
physical layout constraints or additional XOR gates. The UNBIAS PUF is also scalable because no per-challenge characterization cost is required to compensate for the implementation bias. The averaged intra-FHD measured under worst-case temperature and voltage variation conditions is 12%, which is still below the
margin of practical Error Correction Code (ECC) with error reduction techniques
for PUFs.
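For reference, the intra-/inter-FHD figures quoted above are the standard PUF quality metrics; a sketch of how they are typically computed from binary response matrices follows (illustrative helper names, not the paper's code):

```python
import numpy as np

def fractional_hd(a, b):
    """Fraction of differing bits between two binary response vectors."""
    return float(np.mean(np.asarray(a) != np.asarray(b)))

def intra_fhd(responses):
    """Same PUF measured repeatedly; rows are response vectors."""
    ref = responses[0]
    return float(np.mean([fractional_hd(ref, r) for r in responses[1:]]))

def inter_fhd(puf_a, puf_b):
    """Responses of two different PUF instances to the same challenges."""
    return fractional_hd(puf_a, puf_b)

# Ideal values: intra-FHD near 0 (reliability), inter-FHD near 0.5 (uniqueness).
```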
|
A General and Efficient Training for Transformer via Token Expansion | The remarkable performance of Vision Transformers (ViTs) typically requires
an extremely large training cost. Existing methods have attempted to accelerate the training of ViTs, yet they typically sacrifice method universality or suffer accuracy drops. Meanwhile, they break the training consistency of the original
transformers, including the consistency of hyper-parameters, architecture, and
strategy, which prevents them from being widely applied to different
Transformer networks. In this paper, we propose a novel token growth scheme
Token Expansion (termed ToE) to achieve consistent training acceleration for
ViTs. We introduce an "initialization-expansion-merging" pipeline to maintain
the integrity of the intermediate feature distribution of original
transformers, preventing the loss of crucial learnable information in the
training process. ToE can not only be seamlessly integrated into the training
and fine-tuning process of transformers (e.g., DeiT and LV-ViT), but also
effective for efficient training frameworks (e.g., EfficientTrain), without
altering the original training hyper-parameters or architecture, or introducing additional training strategies. Extensive experiments demonstrate that ToE achieves about 1.3x faster training of ViTs in a lossless manner, or
even with performance gains over the full-token training baselines. Code is
available at https://github.com/Osilly/TokenExpansion .
|
Polar Coded Merkle Tree: Improved Detection of Data Availability Attacks
in Blockchain Systems | Light nodes in blockchain systems are known to be vulnerable to data
availability (DA) attacks where they accept an invalid block with unavailable
portions. Previous works have used LDPC and 2-D Reed Solomon (2D-RS) codes with
Merkle Trees to mitigate DA attacks. While these codes have demonstrated
improved performance across a variety of metrics such as DA detection
probability, they are difficult to apply to blockchains with large blocks due
to generally intractable code guarantees for large codelengths (LDPC), large
decoding complexity (2D-RS), or large coding fraud proof sizes (2D-RS). We
address these issues by proposing the novel Polar Coded Merkle Tree (PCMT)
which is a Merkle Tree built from the encoding graphs of polar codes and a
specialized polar code construction called Sampling-Efficient Freezing (SEF).
We demonstrate that the PCMT with SEF polar codes performs well in detecting DA
attacks for large block sizes.
|
Efficient Knowledge Base Management in DCSP | DCSP (Distributed Constraint Satisfaction Problem) has been a very important
research area in AI (Artificial Intelligence). Many application problems in distributed AI can be formalized as DCSPs. With the increasing complexity and problem size of these applications, both the storage required during search and the average search time grow. Using limited storage efficiently when solving DCSPs is therefore an important problem, and doing so can reduce search time as well. This paper provides an efficient knowledge base management approach based on a general usage of the hyper-resolution rule in consistency algorithms. The approach minimizes the growth of the knowledge base by eliminating sufficient constraints and false nogoods. These eliminations do not affect the completeness of the extended knowledge base; proofs are given as well. An example shows that this approach greatly decreases both the number of new nogoods generated and the size of the knowledge base, thereby reducing the required storage and simplifying the search process.
|
Noise-Aware Texture-Preserving Low-Light Enhancement | A simple and effective low-light image enhancement method based on a
noise-aware texture-preserving retinex model is proposed in this work. The new
method, called NATLE, attempts to strike a balance between noise removal and
natural texture preservation through a low-complexity solution. Its cost
function includes an estimated piece-wise smooth illumination map and a
noise-free texture-preserving reflectance map. Afterwards, illumination is
adjusted to form the enhanced image together with the reflectance map.
Extensive experiments are conducted on common low-light image enhancement
datasets to demonstrate the superior performance of NATLE.
|
Minimal complexity of equidistributed infinite permutations | An infinite permutation is a linear ordering of the set of natural numbers.
An infinite permutation can be defined by a sequence of real numbers where only
the order of elements is taken into account. In the paper we investigate a new
class of {\it equidistributed} infinite permutations, that is, infinite
permutations which can be defined by equidistributed sequences. Similarly to
infinite words, the complexity $p(n)$ of an infinite permutation is defined as a
function counting the number of its subpermutations of length $n$. For infinite
words, a classical result of Morse and Hedlund, 1938, states that if the
complexity of an infinite word satisfies $p(n) \leq n$ for some $n$, then the
word is ultimately periodic. Hence minimal complexity of aperiodic words is
equal to $n+1$, and words with such complexity are called Sturmian. For
infinite permutations this does not hold: There exist aperiodic permutations
with complexity functions growing arbitrarily slowly, and hence there are no
permutations of minimal complexity. We show that, unlike for permutations in
general, the minimal complexity of an equidistributed permutation $\alpha$ is
$p_{\alpha}(n)=n$. The class of equidistributed permutations of minimal
complexity coincides with the class of so-called Sturmian permutations,
directly related to Sturmian words.
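A small sketch makes the complexity function concrete: $p(n)$ counts the distinct order patterns among length-$n$ windows of the defining sequence. The rotation sequence below is an assumed example of an equidistributed (Sturmian-type) permutation:

```python
def permutation_complexity(seq, n):
    """p(n): number of distinct order patterns among length-n windows."""
    patterns = set()
    for i in range(len(seq) - n + 1):
        window = seq[i:i + n]
        patterns.add(tuple(sorted(range(n), key=lambda k: window[k])))
    return len(patterns)

# Rotation sequence {i * alpha mod 1}, an equidistributed example.
alpha = 0.5 ** 0.5
seq = [(i * alpha) % 1 for i in range(10000)]
print([permutation_complexity(seq, n) for n in range(1, 6)])
# expect [1, 2, 3, 4, 5] for a Sturmian-type permutation (illustrative claim)
```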
|
Enhancing Protein Predictive Models via Proteins Data Augmentation: A
Benchmark and New Directions | Augmentation is an effective way to exploit the small amount of labeled protein data. However, most of the existing work focuses on designing
new architectures or pre-training tasks, and relatively little work has studied
data augmentation for proteins. This paper extends data augmentation techniques
previously used for images and texts to proteins and then benchmarks these
techniques on a variety of protein-related tasks, providing the first
comprehensive evaluation of protein augmentation. Furthermore, we propose two
novel semantic-level protein augmentation methods, namely Integrated Gradients
Substitution and Back Translation Substitution, which enable protein
semantic-aware augmentation through saliency detection and biological
knowledge. Finally, we integrate extended and proposed augmentations into an
augmentation pool and propose a simple but effective framework, namely
Automated Protein Augmentation (APA), which can adaptively select the most
suitable augmentation combinations for different tasks. Extensive experiments
have shown that APA enhances the performance of five protein-related tasks by
an average of 10.55% across three architectures compared to vanilla
implementations without augmentation, highlighting its potential to make a
great impact on the field.
|
NarrativeXL: A Large-scale Dataset For Long-Term Memory Models | We propose a new large-scale (nearly a million questions) ultra-long-context
(more than 50,000 words average document length) reading comprehension dataset.
Using GPT 3.5, we summarized each scene in 1,500 hand-curated fiction books
from Project Gutenberg, which resulted in approximately 150 scene-level
summaries per book. After that, we created a number of reading comprehension
questions based on these summaries, including three types of multiple-choice
scene recognition questions, as well as free-form narrative reconstruction
questions. With 990,595 total questions, our dataset is an order of magnitude
larger than the closest alternatives. Crucially, most questions have a known
``retention demand'', indicating how long-term of a memory is needed to answer
them, which should aid long-term memory performance evaluation. We validate our
data in four small-scale experiments: one with human labelers, and three with
existing language models. We show that our questions 1) adequately represent the source material, 2) can be used to diagnose a model's memory capacity, and 3) are not trivial for modern language models even when the memory demand does not exceed those models' context lengths. Lastly, we provide our code, which can be
used to further expand the dataset with minimal human labor.
|
Out-of-Plane Polarization from Spin Reflection Induces Field-Free
Spin-Orbit Torque Switching in Structures with Canted NiO Interfacial Moments | Realizing deterministic current-induced spin-orbit torque (SOT) magnetization
switching, especially in systems exhibiting perpendicular magnetic anisotropy
(PMA), typically requires the application of a collinear in-plane field, posing
a challenging problem. In this study, we successfully achieve field-free SOT
switching in the CoFeB/MgO system. In a Ta/CoFeB/MgO/NiO/Ta structure, spin
reflection at the NiO interface, characterized by noncollinear spin structures
with canted magnetization, generates a spin current with an out-of-plane spin polarization $\sigma_z$. We confirm the contribution of $\sigma_z$ to the
field-free SOT switching through measurements of the shift effect in the
out-of-plane magnetization hysteresis loops under different currents. The
incorporation of NiO as an antiferromagnetic insulator mitigates the current
shunting effect and ensures excellent thermal stability of the device. The
sample with 0.8 nm MgO and 2 nm NiO demonstrates an impressive optimal
switching ratio approaching 100% without an in-plane field. This breakthrough
in the CoFeB/MgO system promises significant applications in spintronics,
advancing us closer to realizing innovative technologies.
|
Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative
Latent Attention | We present Perceiver-VL, a vision-and-language framework that efficiently
handles high-dimensional multimodal inputs such as long videos and text.
Powered by the iterative latent cross-attention of Perceiver, our framework
scales with linear complexity, in contrast to the quadratic complexity of
self-attention used in many state-of-the-art transformer-based models. To
further improve the efficiency of our framework, we also study applying
LayerDrop on cross-attention layers and introduce a mixed-stream architecture
for cross-modal retrieval. We evaluate Perceiver-VL on diverse video-text and
image-text benchmarks, where Perceiver-VL achieves the lowest GFLOPs and
latency while maintaining competitive performance. In addition, we also provide
comprehensive analyses of various aspects of our framework, including
pretraining data, scalability of latent size and input size, dropping
cross-attention layers at inference to reduce latency, modality aggregation
strategy, positional encoding, and weight initialization strategy. Our code and
checkpoints are available at: https://github.com/zinengtang/Perceiver_VL
|
Private Broadcasting over Independent Parallel Channels | We study private broadcasting of two messages to two groups of receivers over
independent parallel channels. One group consists of an arbitrary number of
receivers interested in a common message, whereas the other group has only one
receiver. Each message must be kept confidential from the receiver(s) in the
other group. Each of the sub-channels is degraded, but the order of receivers
on each channel can be different. While corner points of the capacity region
were characterized in earlier works, we establish the capacity region and show
the optimality of a superposition strategy. For the case of parallel Gaussian
channels, we show that a Gaussian input distribution is optimal. We also
discuss an extension of our setup to broadcasting over a block-fading channel
and demonstrate significant performance gains using the proposed scheme over a
baseline time-sharing scheme.
|
ConVis: Contrastive Decoding with Hallucination Visualization for
Mitigating Hallucinations in Multimodal Large Language Models | Hallucinations in Multimodal Large Language Models (MLLMs) where generated
responses fail to accurately reflect the given image pose a significant
challenge to their reliability. To address this, we introduce ConVis, a novel
training-free contrastive decoding method. ConVis leverages a text-to-image
(T2I) generation model to semantically reconstruct the given image from
hallucinated captions. By comparing the contrasting probability distributions
produced by the original and reconstructed images, ConVis enables MLLMs to
capture visual contrastive signals that penalize hallucination generation.
Notably, this method operates purely within the decoding process, eliminating
the need for additional data or model updates. Our extensive experiments on
five popular benchmarks demonstrate that ConVis effectively reduces
hallucinations across various MLLMs, highlighting its potential to enhance
model reliability.
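For intuition, a generic contrastive-decoding step is sketched below; the exact ConVis combination rule and its hyper-parameters are in the paper, so `alpha` and the linear form here are assumptions:

```python
import numpy as np

def contrastive_logits(logits_original, logits_reconstructed, alpha=1.0):
    """Penalize tokens favored under the T2I-reconstructed image."""
    return (1 + alpha) * logits_original - alpha * logits_reconstructed

orig = np.array([2.0, 1.0, 0.5])    # next-token logits given the real image
recon = np.array([2.0, 0.2, 0.5])   # logits given the reconstructed image
print(contrastive_logits(orig, recon))  # token supported only by the real image gains
```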
|
Improving the Security of United States Elections with Robust
Optimization | For more than a century, election officials across the United States have
inspected voting machines before elections using a procedure called Logic and
Accuracy Testing (LAT). This procedure consists of election officials casting a
test deck of ballots into each voting machine and confirming the machine
produces the expected vote total for each candidate. We bring a scientific
perspective to LAT by introducing the first formal approach to designing test
decks with rigorous security guarantees. Specifically, our approach employs
robust optimization to find test decks that are guaranteed to detect any voting
machine misconfiguration that would cause votes to be swapped across
candidates. Out of all the test decks with this security guarantee, our robust
optimization problem yields the test deck with the minimum number of ballots,
thereby minimizing implementation costs for election officials. To facilitate
deployment at scale, we develop a practically efficient exact algorithm for
solving our robust optimization problems based on the cutting plane method. In
partnership with the Michigan Bureau of Elections, we retrospectively applied
our approach to all 6928 ballot styles from Michigan's November 2022 general
election; this retrospective study reveals that the test decks with rigorous
security guarantees obtained by our approach require, on average, only 1.2%
more ballots than current practice. Our approach has since been piloted in
real-world elections by the Michigan Bureau of Elections as a low-cost way to
improve election security and increase public trust in democratic institutions.
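The security guarantee above has a simple combinatorial core: a swap of two candidates is silent exactly when their expected totals coincide. The toy check below illustrates this property (the paper's contribution is finding the smallest deck with this guarantee under real ballot-style constraints):

```python
def detects_all_swaps(expected_totals):
    """True iff every candidate's expected total in the contest is unique,
    so any swap of two candidates' vote counters changes the output."""
    return len(set(expected_totals)) == len(expected_totals)

print(detects_all_swaps([1, 2, 3]))  # True: every swap alters the totals
print(detects_all_swaps([2, 2, 3]))  # False: swapping the first two is silent
```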
|
Generalized Proportional Allocation Mechanism Design for Unicast Service
on the Internet | In this report we construct two mechanisms that fully implement social
welfare maximising allocation in Nash equilibria for the case of a single
infinitely divisible good subject to multiple inequality constraints. The first
mechanism achieves weak budget balance, while the second is an extension of the
first, and achieves strong budget balance. One important application of this
mechanism is unicast service on the Internet where a network operator wishes to
allocate rates among strategic users in such a way as to maximise overall user
satisfaction while respecting capacity constraints on every link in the
network. The emphasis of this work is on full implementation, which means that
all Nash equilibria of the induced game result in the optimal allocations of
the centralized allocation problem.
|
ModuleNet: Knowledge-inherited Neural Architecture Search | Although Neural Architecture Search (NAS) can bring improvement to deep
models, they always neglect precious knowledge of existing models.
The computation and time costing property in NAS also means that we should
not start from scratch to search, but make every attempt to reuse the existing
knowledge.
In this paper, we discuss what kind of knowledge in a model can and should be
used for new architecture design.
Then, we propose a new NAS algorithm, namely ModuleNet, which can fully
inherit knowledge from existing convolutional neural networks.
To make full use of existing models, we decompose them into different \textit{module}s that keep their weights, and these modules constitute a knowledge base.
Then we sample and search for new architectures according to the knowledge base.
Unlike previous search algorithms, and benefiting from inherited knowledge,
our method is able to directly search for architectures in the macro space by the NSGA-II algorithm without tuning parameters in these \textit{module}s.
Experiments show that our strategy can efficiently evaluate the performance of new architectures even without tuning the weights in convolutional layers.
With the help of the inherited knowledge, our searched architectures consistently achieve better performance on various datasets (CIFAR10, CIFAR100) than the original architectures.
|
A Data-to-Product Multimodal Conceptual Framework to Achieve Automated
Software Evolution for Context-rich Intelligent Applications | While AI is extensively transforming Software Engineering (SE) fields, SE is
still in need of a framework to overall consider all phases to facilitate
Automated Software Evolution (ASEv), particularly for intelligent applications
that are context-rich, instead of conquering each division independently. Its
complexity comes from the intricacy of the intelligent applications, the
heterogeneity of the data sources, and the constant changes in the context.
This study proposes a conceptual framework for achieving automated software
evolution, emphasizing the importance of multimodality learning. A Selective Sequential Scope (3S) model is developed based on the conceptual framework, and it can be used to categorize existing and future research according to which SE phases and multimodal learning tasks they cover. This research is a
preliminary step toward the blueprint of a higher-level ASEv. The proposed
conceptual framework can act as a practical guideline for practitioners to
prepare themselves for diving into this area. Although the study is about
intelligent applications, the framework and analysis methods may be adapted for
other types of software as AI brings more intelligence into their life cycles.
|
Information Structures for Feedback Capacity of Channels with Memory and
Transmission Cost: Stochastic Optimal Control & Variational Equalities-Part I | The Finite Transmission Feedback Information (FTFI) capacity is characterized
for any class of channel conditional distributions $\big\{{\bf P}_{B_i|B^{i-1},
A_i} :i=0, 1, \ldots, n\big\}$ and $\big\{ {\bf P}_{B_i|B_{i-M}^{i-1}, A_i}
:i=0, 1, \ldots, n\big\}$, where $M$ is the memory of the channel, $B^n
{\stackrel{\triangle}{=}} \{B_j: j=\ldots, 0,1, \ldots, n\}$ are the channel
outputs and $A^n{\stackrel{\triangle}{=}} \{A_j: j=\ldots, 0,1, \ldots, n\}$
are the channel inputs. The characterizations of FTFI capacity are obtained by
first identifying the information structures of the optimal channel input
conditional distributions ${\cal P}_{[0,n]} {\stackrel{\triangle}{=}} \big\{
{\bf P}_{A_i|A^{i-1}, B^{i-1}}: i=0, \ldots, n\big\}$, which maximize directed
information. The main theorem states that, for any channel with memory $M$, the
optimal channel input conditional distributions occur in the subset satisfying
conditional independence $\stackrel{\circ}{\cal
P}_{[0,n]}{\stackrel{\triangle}{=}} \big\{ {\bf P}_{A_i|A^{i-1}, B^{i-1}}= {\bf
P}_{A_i|B_{i-M}^{i-1}}: i=1, \ldots, n\big\}$, and the characterization of FTFI
capacity is given by $C_{A^n \rightarrow B^n}^{FB, M} {\stackrel{\triangle}{=}}
\sup_{ \stackrel{\circ}{\cal P}_{[0,n]} } \sum_{i=0}^n I(A_i;
B_i|B_{i-M}^{i-1}) $. The methodology utilizes stochastic optimal control
theory and a variational equality of directed information, to derive upper
bounds on $I(A^n \rightarrow B^n)$, which are achievable over specific subsets
of channel input conditional distributions ${\cal P}_{[0,n]}$, which are
characterized by conditional independence. For any of the above classes of
channel distributions and transmission cost functions, a direct analogy, in
terms of conditional independence, of the characterizations of FTFI capacity
and Shannon's capacity formulae of Memoryless Channels is identified.
|
Four-Photon Kapitza-Dirac Effect as Electron Spin Filter | We theoretically demonstrate the feasibility of producing an electron beam splitter using Kapitza-Dirac diffraction on bichromatic standing waves created by the fundamental frequency and the third harmonic. The relativistic electron in the Bragg regime absorbs three photons of frequency $\omega$ and emits a photon of frequency $3\omega$, constituting a four-photon Kapitza-Dirac effect. In this effect, distinct spin effects arise for different polarizations of the third-harmonic laser beam. It is shown that the shape of the Rabi oscillation between the initial and scattered states is modified and exhibits two unequal peaks. For circular polarization of both the fundamental and the third harmonic, despite the Rabi oscillation, the spin-down electron maintains its momentum and spin over 0.56 fs intervals. We also present an electron spin filter that combines a linearly polarized fundamental laser beam with a circularly polarized third harmonic and scatters the electron beam according to its spin state.
|
Plane Symmetric, Cylindrically Symmetric and Spherically Symmetric
Vacuum Solutions of Einstein Field Equations | In this paper we present plane symmetric, cylindrically symmetric and spherically symmetric black hole or vacuum solutions of the Einstein Field Equations (EFEs). Some of these solutions are new and, to our knowledge, have not appeared in the literature. These calculations will help in understanding gravitational waves and gravitational-wave spacetimes.
|
Discriminating cosmic muon and x-ray based on rising time using GEM
detector | The gas electron multiplier (GEM) detector has been used in cosmic muon scattering tomography and neutron imaging over the last decade. In this work, a triple-GEM device with an effective readout area of 10 cm × 10 cm is developed, and an experiment discriminating between cosmic muons and x-rays based on rising time is carried out. The energy resolution of the GEM detector is measured with a $^{55}$Fe source to verify that the detector performs well. Analysis of the complete signal cycles yields the rising time and pulse heights. The experimental results indicate that cosmic muons and x-rays can be discriminated with an appropriate rising-time threshold.
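A minimal sketch of such a discrimination rule follows; the 10%-90% rise-time definition, the threshold value, and the assumption that track-like muon signals rise more slowly than point-like x-ray conversions are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

def rise_time(samples, dt_ns):
    """10%-90% rise time of a sampled pulse, in nanoseconds."""
    peak = samples.max()
    above10 = np.argmax(samples >= 0.1 * peak)  # first sample above 10%
    above90 = np.argmax(samples >= 0.9 * peak)  # first sample above 90%
    return (above90 - above10) * dt_ns

def classify(samples, dt_ns, threshold_ns=80.0):
    # assumed ordering: longer rise time -> track-like (muon) signal
    return "muon" if rise_time(samples, dt_ns) > threshold_ns else "x-ray"
```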
|
A.M.E.L.I.E. Apparatus for Muon Experimental Lifetime Investigation and
Evaluation | The muon is one of the first elementary particles discovered. It is also known as the heavy electron, and it is the main component of the cosmic-ray flux at sea level. Its flow is continuous, 24h/7d, and it is free. It is natural, its use in schools is not subject to any radiation protection ban or limitation, and it can be handled safely by students. AMELIE is a light, small and didactic apparatus for measuring the lifetime of muons. It is a useful tool for introducing modern physics, particle physics, particle instability and decay, special relativity, etc. It can be used for small but complete didactic experiments, such as measuring the muon rate and lifetime and correcting and equalizing the collected data. It is a useful instrument for introducing and teaching the scientific method to students. Last but not least, it does not contain any dangerous systems such as high voltage or explosive gas, and its cost is relatively low.
|
On Complexity of Computing Bottleneck and Lexicographic Optimal Cycles
in a Homology Class | Homology features of spaces which appear in applications, for instance 3D
meshes, are among the most important topological properties of these objects.
Given a non-trivial cycle in a homology class, we consider the problem of
computing a representative in that homology class which is optimal. We study
two measures of optimality, namely, the lexicographic order of cycles (the
lex-optimal cycle) and the bottleneck norm (a bottleneck-optimal cycle). We
give a simple algorithm for computing the lex-optimal cycle for a 1-homology
class in a closed orientable surface. In contrast to this, our main result is that, in the case of 3-manifolds of size $n^2$ in the Euclidean 3-space, the
problem of finding a bottleneck optimal cycle cannot be solved more efficiently
than solving a system of linear equations with an $n \times n$ sparse matrix.
From this reduction, we deduce several hardness results. Most notably, we show
that for 3-manifolds given as a subset of the 3-space of size $n^2$, persistent
homology computations are at least as hard as rank computation (for sparse
matrices) while ordinary homology computations can be done in $O(n^2 \log n)$
time. This is the first such distinction between these two computations.
Moreover, it follows that the same disparity exists between the height
persistent homology computation and general sub-level set persistent homology
computation for simplicial complexes in the 3-space.
|
Enabling Dialogue Management with Dynamically Created Dialogue Actions | In order to take up the challenge of realising user-adaptive system
behaviour, we present an extension for the existing OwlSpeak Dialogue Manager
which enables the handling of dynamically created dialogue actions. This leads
to an increase in flexibility which can be used for adaptation tasks. After the
implementation of the modifications and the integration of the Dialogue Manager
into a full Spoken Dialogue System, an evaluation of the system has been
carried out. The results indicate that the participants were able to conduct
meaningful dialogues and that the system performs satisfactorily, showing that
the implementation of the Dialogue Manager was successful.
|
Complex Claim Verification with Evidence Retrieved in the Wild | Evidence retrieval is a core part of automatic fact-checking. Prior work
makes simplifying assumptions in retrieval that depart from real-world use
cases: either no access to evidence, access to evidence curated by a human
fact-checker, or access to evidence available long after the claim has been
made. In this work, we present the first fully automated pipeline to check
real-world claims by retrieving raw evidence from the web. We restrict our
retriever to only search documents available prior to the claim's making,
modeling the realistic scenario where an emerging claim needs to be checked.
Our pipeline includes five components: claim decomposition, raw document
retrieval, fine-grained evidence retrieval, claim-focused summarization, and
veracity judgment. We conduct experiments on complex political claims in the
ClaimDecomp dataset and show that the aggregated evidence produced by our
pipeline improves veracity judgments. Human evaluation finds the evidence
summary produced by our system is reliable (it does not hallucinate
information) and relevant to answering key questions about a claim, suggesting
that it can assist fact-checkers even when it cannot surface a complete
evidence set.
|
Pragmatic Space-Time Trellis Codes for Block Fading Channels | A pragmatic approach for the construction of space-time codes over block
fading channels is investigated. The approach consists in using common
convolutional encoders and Viterbi decoders with suitable generators and rates,
thus greatly simplifying the implementation of space-time codes. For the design
of pragmatic space-time codes a methodology is proposed and applied, based on
the extension of the concept of generalized transfer function for convolutional
codes over block fading channels. Our search algorithm produces the
convolutional encoder generators of pragmatic space-time codes for various numbers of states, numbers of antennas and fading rates. Finally, it is shown that,
for the investigated cases, the performance of pragmatic space-time codes is
better than that of previously known space-time codes, confirming that they are
a valuable choice in terms of both implementation complexity and performance.
|
In-situ soil parametrization from multi-layer moisture data | An inversion methodology has been used to obtain, from multi-layer soil probe records, a complete soil parametrisation, namely the water retention curve, the unsaturated conductivity curve and the bulk density at 4 depths. The approach integrates water dynamics, hysteresis and the effect of bulk density on conductivity to extract the soil parameters required by most simulation models. The method is applied to subsets of the data collection, showing that not every data set contains the information required for the method to converge. A comparison with experimental bulk-density values shows that the inversion can provide information with even better adherence to the model, as it accounts for the effect of roots and skeleton. The method may be applied to any type of multi-layer water content probe, offering the opportunity to enrich soil parameter availability and reliability.
|
Policy Reuse for Communication Load Balancing in Unseen Traffic
Scenarios | With the continuous growth in communication network complexity and traffic
volume, communication load balancing solutions are receiving increasing
attention. Specifically, reinforcement learning (RL)-based methods have shown
impressive performance compared with traditional rule-based methods. However,
standard RL methods generally require an enormous amount of data to train, and
generalize poorly to scenarios that are not encountered during training. We
propose a policy reuse framework in which a policy selector chooses the most
suitable pre-trained RL policy to execute based on the current traffic
condition. Our method hinges on a policy bank composed of policies trained on a
diverse set of traffic scenarios. When deploying to an unknown traffic
scenario, we select a policy from the policy bank based on the similarity
between the previous-day traffic of the current scenario and the traffic
observed during training. Experiments demonstrate that this framework can
outperform classical and adaptive rule-based methods by a large margin.
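A hedged sketch of the selection step described above: pick the pre-trained policy whose training traffic profile is closest to the previous-day traffic (Euclidean distance is an assumption; the paper's similarity measure may differ):

```python
import numpy as np

def select_policy(policy_bank, previous_day_traffic):
    """policy_bank: list of (traffic_profile, policy) pairs from training."""
    prev = np.asarray(previous_day_traffic)
    dists = [np.linalg.norm(np.asarray(profile) - prev)
             for profile, _ in policy_bank]
    return policy_bank[int(np.argmin(dists))][1]
```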
|
Reducing the hydrogen content in liquid helium | Helium has the lowest boiling point of any element in nature at normal
atmospheric pressure. Therefore, any unwanted substance, such as impurities present in liquid helium, will be frozen and thus in solid form. Even though these solid impurities can easily be eliminated by filtering, liquid helium may contain a non-negligible quantity of molecular hydrogen. These traces of molecular hydrogen cause a problem known worldwide: the blocking of the fine capillary tubes used as flow impedances in helium evaporation cryostats to achieve temperatures below 4.2 K. This problem seriously affects a wide range of cryogenic equipment used in low-temperature physics research and leads to a dramatic loss of time and money owing to the high price of helium. Here, we first present the measurement of the molecular hydrogen content in helium gas. Three measures to decrease this molecular hydrogen are then proposed: (i) improving the helium quality, (ii) releasing helium gas into the atmosphere during the purge time of the regeneration cycle of the helium liquefier's internal purifier, and (iii) installing two catalytic converters in a closed helium circuit. These actions have eliminated our low-temperature impedance blockages for more than two years now.
|
Compact Spin-Polarized Positron Acceleration in Multi-Layer Microhole
Array Films | Compact spin-polarized positron accelerators play a major role in promoting
significant positron application research, which typically require high
acceleration gradients and polarization degree, both of which, however, are
still great challenging. Here, we put forward a novel spin-polarized positron
acceleration method which employs an ultrarelativistic high-density electron
beam passing through any hole of multi-layer microhole array films to excite
strong electrostatic and transition radiation fields. Positrons in the
polarized electron-positron pair plasma, filled in the front of the multi-layer
films, can be captured, accelerated, and focused by the electrostatic and
transition radiation fields, while maintaining a high polarization above 90% and a high acceleration gradient of about TeV/m. The multi-layer design allows for
capturing more positrons and achieving cascade acceleration. Our method offers
a promising solution for accelerator miniaturization, positron injection, and polarization maintenance, and can also be used to accelerate other charged
particles.
|
Epidemic Model with Isolation in Multilayer Networks | The Susceptible-Infected-Recovered (SIR) model has successfully mimicked the
propagation of such airborne diseases as influenza A (H1N1). Although the SIR
model has recently been studied in a multilayer networks configuration, in
almost all the research the isolation of infected individuals is disregarded.
Hence we focus our study on an epidemic model in a two-layer network, and we
use an isolation parameter to measure the effect of isolating infected
individuals from both layers during an isolation period. We call this process
the Susceptible-Infected-Isolated-Recovered ($SI_IR$) model. The isolation
reduces the transmission of the disease because the time in which infection can
spread is reduced. In this scenario we find that the epidemic threshold
increases with the isolation period and the isolation parameter. When the
isolation period is maximum there is a threshold for the isolation parameter
above which the disease never becomes an epidemic. We also find that epidemic
models, like $SIR$, overestimate the theoretical risk of infection. Finally, our
model may provide a foundation for future research to study the temporal
evolution of the disease calibrating our model with real data.
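A single-layer toy sketch of the $SI_IR$ dynamics may clarify the role of the isolation parameter; the paper studies two coupled layers, and all rates below are illustrative:

```python
import random

def si_ir(graph, seed_node, beta=0.1, w=0.5, t_rec=5, steps=50, rng=random):
    """One stochastic run: 'S' susceptible, 'I' infected, 'Q' isolated,
    'R' recovered. Isolated nodes stop transmitting until recovery."""
    state = {v: 'S' for v in graph}
    state[seed_node] = 'I'
    clock = {seed_node: 0}
    for _ in range(steps):
        for v in [u for u, s in state.items() if s in ('I', 'Q')]:
            if state[v] == 'I':
                for u in graph[v]:                 # infect susceptible neighbors
                    if state[u] == 'S' and rng.random() < beta:
                        state[u], clock[u] = 'I', 0
                if rng.random() < w:               # isolate with probability w
                    state[v] = 'Q'
            clock[v] += 1
            if clock[v] >= t_rec:
                state[v] = 'R'
    return sum(s == 'R' for s in state.values())

# Tiny example graph (adjacency lists); higher w lowers the outbreak size.
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
print(si_ir(ring, seed_node=0))
```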
|
Listen to Users, but Only 85% of the Time: How Black Swans Can Save
Innovation in a Data-Driven World | Data-driven design is a proven success factor that more and more digital
businesses embrace. At the same time, academics and practitioners alike warn
that when virtually everything must be tested and proven with numbers, that can
stifle creativity and innovation. This article argues that Taleb's Black Swan
theory can solve this dilemma. It shows that online experimentation, and
therefore digital design, are fat-tailed phenomena and, hence, prone to Black
Swans. It introduces the notion of Black Swan designs -- "crazy" designs that
make sense only in hindsight -- along with four specific criteria. To ensure
incremental improvements and their potential for innovation, businesses should
apply Taleb's barbell strategy: Invest 85-90% of resources into data-driven
approaches and 10-15% into potential Black Swans.
|
Polarization-sensitive terahertz time-domain spectroscopy system without
mechanical moving parts | We report on the measurement of terahertz electric-field vector waveforms by
using a system that contains no mechanical moving parts. It is known that two
phase-locked femtosecond lasers with different repetition rates can be used to
perform time-domain spectroscopy without using a mechanical delay stage.
Furthermore, an electro-optic modulator can be used to perform polarization
measurements without rotating any polarizers or waveplates. We experimentally
demonstrate the combination of these two methods and explain the analysis of
data obtained by such a system. Such a system provides a robust platform that
can promote the usage of polarization-sensitive terahertz time-domain
spectroscopy in basic science and practical applications. For the experimental
demonstration, we alter the polarization of a terahertz wave by a polarizer.
|
Dynamic stability of spindles controlled by molecular motor kinetics | We analyze the role of the force-dependent kinetics of motor proteins in the
stability of antiparallel arrays of polar filaments, such as those in the
mitotic spindle. We determine the possible stable structures and show that
there exists an instability associated with the collective behavior of motors
that leads to the collapse of the spindle. Our analysis provides a general
framework to understand several experimental observations in eukaryotic cell
division.
|
Learning High-Dimensional Nonparametric Differential Equations via
Multivariate Occupation Kernel Functions | Learning a nonparametric system of ordinary differential equations (ODEs)
from $n$ trajectory snapshots in a $d$-dimensional state space requires
learning $d$ functions of $d$ variables. Explicit formulations scale
quadratically in $d$ unless additional knowledge about system properties, such
as sparsity and symmetries, is available. In this work, we propose a linear
approach to learning using the implicit formulation provided by vector-valued
Reproducing Kernel Hilbert Spaces. By rewriting the ODEs in a weaker integral
form, which we subsequently minimize, we derive our learning algorithm. The
minimization problem's solution for the vector field relies on multivariate
occupation kernel functions associated with the solution trajectories. We
validate our approach through experiments on highly nonlinear simulated and
real data, where $d$ may exceed 100. We further demonstrate the versatility of
the proposed method by learning a nonparametric first-order quasilinear partial
differential equation.
|
NeurDB: An AI-powered Autonomous Data System | In the wake of rapid advancements in artificial intelligence (AI), we stand
on the brink of a transformative leap in data systems. The imminent fusion of
AI and DB (AIxDB) promises a new generation of data systems, which will relieve
the burden on end-users across all industry sectors by featuring AI-enhanced
functionalities, such as personalized and automated in-database AI-powered
analytics, self-driving capabilities for improved system performance, etc. In
this paper, we explore the evolution of data systems with a focus on deepening
the fusion of AI and DB. We present NeurDB, an AI-powered autonomous data
system designed to fully embrace AI design in each major system component and
provide in-database AI-powered analytics. We outline the conceptual and
architectural overview of NeurDB, discuss its design choices and key
components, and report its current development and future plans.
|
Voltage Collapse Stabilization in Star DC Networks | Voltage collapse is a type of blackout-inducing dynamic instability that
occurs when power demand exceeds the maximum power that can be transferred
through a network. The traditional (preventive) approach to avoid voltage
collapse is based on ensuring that the network never reaches its maximum
capacity. However, such an approach leads to inefficient use of network
resources and does not account for unforeseen events. To overcome this
limitation, this paper seeks to initiate the study of voltage collapse
stabilization, i.e., the design of load controllers aimed at stabilizing the
point of voltage collapse. We formulate the problem of voltage stability for a
star direct current network as a dynamic problem where each load seeks to
achieve a constant power consumption by updating its conductance as the voltage
changes. We introduce a voltage collapse stabilization controller and show that
the high-voltage equilibrium is stabilized. More importantly, we are able to
achieve proportional load shedding under extreme loading conditions. We further
highlight the key features of our controller using numerical illustrations.
|
The Chandler wobble and Solar day | This work supplements the main results given in our paper "The Chandler
wobble is a phantom" (eprint arXiv:1109.4969) and refines the reasons for which
researchers previously failed to interpret the physical meaning of observed zenith distance variations. The main reason for the emergence of the Chandler wobble problem was that, in analyzing time series with a step that is a multiple of the solar day, researchers ignored the nature of the solar day itself. In addition, astrometric instruments used to measure the zenith distance relative to the local
normal are, by definition, gravity independent, since the local normal is
tangential to the gravitational field line at the observation point. Therefore,
the measured zenith distances involve all the instantaneous gravitational field
distortions. The direct dependence of the zenith distance observations on the
gravitational effect of the Moon's perigee mass enables us to conclude that the
Chandler wobble is fully independent of the possible motion of the Earth's
rotation axis within the Earth.
|
Characterizations of scoring methods for preference aggregation | The paper surveys more than forty characterizations of scoring methods for
preference aggregation and contains one new result. A general scoring operator
is {\it self-consistent} if alternative $i$ is assigned a greater score than
$j$ whenever $i$ gets no worse (better) results of comparisons and its
`opponents' are assigned respectively greater (no smaller) scores than those of
$j$. We prove that self-consistency is satisfied if and only if the application
of a scoring operator reduces to the solution of a homogeneous system of
algebraic equations with a monotone function on the left-hand side.
|
How Does Forecasting Affect the Convergence of DRL Techniques in O-RAN
Slicing? | The success of immersive applications such as virtual reality (VR) gaming and
metaverse services depends on low latency and reliable connectivity. To provide
seamless user experiences, the open radio access network (O-RAN) architecture
and 6G networks are expected to play a crucial role. RAN slicing, a critical
component of the O-RAN paradigm, enables network resources to be allocated
based on the needs of immersive services, creating multiple virtual networks on
a single physical infrastructure. In the O-RAN literature, deep reinforcement
learning (DRL) algorithms are commonly used to optimize resource allocation.
However, the practical adoption of DRL in live deployments has been sluggish.
This is primarily due to the slow convergence and performance instabilities
suffered by the DRL agents both upon initial deployment and when there are
significant changes in network conditions. In this paper, we investigate the
impact of time series forecasting of traffic demands on the convergence of the
DRL-based slicing agents. For that, we conduct an exhaustive experiment that
supports multiple services including real VR gaming traffic. We then propose a
novel forecasting-aided DRL approach and its respective O-RAN practical
deployment workflow to enhance DRL convergence. Our approach shows up to 22.8%,
86.3%, and 300% improvements in the average initial reward value, convergence
rate, and number of converged scenarios respectively, enhancing the
generalizability of the DRL agents compared with the implemented baselines. The
results also indicate that our approach is robust against forecasting errors
and that forecasting models do not have to be ideal.
|
Long-Term Planning and Situational Awareness in OpenAI Five | Understanding how knowledge about the world is represented within model-free
deep reinforcement learning methods is a major challenge given the black box
nature of its learning process within high-dimensional observation and action
spaces. AlphaStar and OpenAI Five have shown that agents can be trained without
any explicit hierarchical macro-actions to reach superhuman skill in games that
require taking thousands of actions before reaching the final goal. Assessing
the agent's plans and game understanding becomes challenging given the lack of
hierarchy or explicit representations of macro-actions in these models, coupled
with the incomprehensible nature of the internal representations.
In this paper, we study the distributed representations learned by OpenAI
Five to investigate how game knowledge is gradually obtained over the course of
training. We also introduce a general technique for learning a model from the
agent's hidden states to identify the formation of plans and subgoals. We show
that the agent can learn situational similarity across actions, and find
evidence of planning towards accomplishing subgoals minutes before they are
executed. We perform a qualitative analysis of these predictions during the
games against the DotA 2 world champions OG in April 2019.
|
EE-TTS: Emphatic Expressive TTS with Linguistic Information | While Current TTS systems perform well in synthesizing high-quality speech,
producing highly expressive speech remains a challenge. Emphasis, as a critical
factor in determining the expressiveness of speech, has attracted more
attention nowadays. Previous works usually enhance the emphasis by adding
intermediate features, but they cannot guarantee the overall expressiveness of
the speech. To resolve this matter, we propose Emphatic Expressive TTS
(EE-TTS), which leverages multi-level linguistic information from syntax and
semantics. EE-TTS contains an emphasis predictor that can identify appropriate
emphasis positions from text and a conditioned acoustic model to synthesize
expressive speech with emphasis and linguistic information. Experimental
results indicate that EE-TTS outperforms the baseline with MOS improvements of 0.49 and 0.67 in expressiveness and naturalness, respectively. EE-TTS also shows strong generalization across different datasets according to A/B test results.
|
Non-Markovian Vibrational Relaxation Dynamics at Surfaces | Vibrational dynamics of adsorbates near surfaces plays both an important role
for applied surface science and as model lab for studying fundamental problems
of open quantum systems. We employ a previously developed model for the
relaxation of a D-Si-Si bending mode at a D:Si(100)-(2$\times$1) surface,
induced by a "bath" of more than $2000$ phonon modes [U. Lorenz, P. Saalfrank,
Chem. Phys. {\bf 482}, 69 (2017)], to extend previous work along various
directions. First, we use a Hierarchical Effective Mode (HEM) model [E.W.
Fischer, F. Bouakline, M. Werther, P. Saalfrank, J. Chem. Phys. {\bf 153},
064704 (2020)] to study relaxation of higher excited vibrational states than
hitherto done, by solving a high-dimensional system-bath time-dependent
Schr\"odinger equation (TDSE). In the HEM approach, (many) real bath modes are
replaced by (much less) effective bath modes. Accordingly, we are able to
examine scaling laws for vibrational relaxation lifetimes for a realistic
surface science problem. Second, we compare the performance of the multilayer
multiconfigurational time-dependent Hartree (ML-MCTDH) approach with the
recently developed coherent-state based multi-Davydov D2 {\it ansatz} [N. Zhou,
Z. Huang, J. Zhu, V. Chernyak, Y. Zhao, {J. Chem. Phys.} {\bf 143}, 014113
(2015)]. Both approaches work well, with some computational advantages for the
latter in the presented context. Third, we apply open-system density matrix
theory in comparison with basically "exact" solutions of the multi-mode TDSEs.
Specifically, we use an open-system Liouville-von Neumann (LvN) equation
treating vibration-phonon coupling as Markovian dissipation in Lindblad form to
quantify effects beyond the Born-Markov approximation.
|
Consensus in the Presence of Multiple Opinion Leaders: Effect of Bounded
Confidence | The problem of analyzing the performance of networked agents exchanging
evidence in a dynamic network has recently grown in importance. This problem
has relevance in signal and data fusion network applications and in studying
opinion and consensus dynamics in social networks. Due to its capability of
handling a wider variety of uncertainties and ambiguities associated with
evidence, we use the framework of Dempster-Shafer (DS) theory to capture the
opinion of an agent. We then examine the consensus among agents in dynamic
networks in which an agent can utilize either a cautious or receptive updating
strategy. In particular, we examine the case of bounded confidence updating
where an agent exchanges its opinion only with neighboring nodes possessing
'similar' evidence. In a fusion network, this captures the case in which nodes
only update their state based on evidence consistent with the node's own
evidence. In opinion dynamics, this captures the notions of Social Judgment
Theory (SJT) in which agents update their opinions only with other agents
possessing opinions closer to their own. Focusing on the two special DS
theoretic cases where an agent state is modeled as a Dirichlet body of evidence
and a probability mass function (p.m.f.), we utilize results from matrix
theory, graph theory, and networks to prove the existence of consensus agent
states in several time-varying network cases of interest. For example, we show
the existence of a consensus in which a subset of network nodes achieves a
consensus that is adopted by follower network nodes. Of particular interest is
the case of multiple opinion leaders, where we show that the agents do not
reach a consensus in general, but rather converge to 'opinion clusters'.
Simulation results are provided to illustrate the main results.
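For the p.m.f. special case, one bounded-confidence update can be sketched as follows; the total-variation distance and the threshold `eps` stand in for the paper's DS-theoretic notions of 'similar' evidence:

```python
import numpy as np

def bounded_confidence_step(opinions, neighbors, eps=0.3):
    """One synchronous update: agent i averages its p.m.f. with those
    neighbors whose opinions lie within total-variation distance eps."""
    new = []
    for i, p in enumerate(opinions):
        close = [opinions[j] for j in neighbors[i]
                 if np.abs(opinions[j] - p).sum() / 2 <= eps]
        new.append(np.mean(close + [p], axis=0))
    return new

# Two clusters that ignore each other under a tight confidence bound:
ops = [np.array([0.9, 0.1]), np.array([0.8, 0.2]), np.array([0.1, 0.9])]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(bounded_confidence_step(ops, nbrs))
```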
|
Adaptive Superpixel for Active Learning in Semantic Segmentation | Learning semantic segmentation requires pixel-wise annotations, which can be
time-consuming and expensive. To reduce the annotation cost, we propose a
superpixel-based active learning (AL) framework, which collects a dominant
label per superpixel instead. To be specific, it consists of adaptive
superpixel and sieving mechanisms, fully dedicated to AL. At each round of AL,
we adaptively merge neighboring pixels of similar learned features into
superpixels. We then query a selected subset of these superpixels using an
acquisition function assuming no uniform superpixel size. This approach is more
efficient than existing methods, which rely only on innate features such as RGB
color and assume uniform superpixel sizes. Obtaining a dominant label per
superpixel drastically reduces annotators' burden as it requires fewer clicks.
However, it inevitably introduces noisy annotations due to mismatches between
superpixel and ground truth segmentation. To address this issue, we further
devise a sieving mechanism that identifies and excludes potentially noisy
annotations from learning. Our experiments on both Cityscapes and PASCAL VOC
datasets demonstrate the efficacy of adaptive superpixel and sieving
mechanisms.
|
On the use of Mahalanobis distance for out-of-distribution detection
with neural networks for medical imaging | Implementing neural networks for clinical use in medical applications
necessitates the ability for the network to detect when input data differs
significantly from the training data, with the aim of preventing unreliable
predictions. The community has developed several methods for
out-of-distribution (OOD) detection, within which distance-based approaches -
such as Mahalanobis distance - have shown potential. This paper challenges the
prevailing community understanding that there is an optimal layer, or
combination of layers, of a neural network for applying Mahalanobis distance
for detection of any OOD pattern. Using synthetic artefacts to emulate OOD
patterns, this paper shows the optimum layer to apply Mahalanobis distance
changes with the type of OOD pattern, showing there is no one-fits-all
solution. This paper also shows that separating this OOD detector into multiple
detectors at different depths of the network can enhance the robustness for
detecting different OOD patterns. These insights were validated on real-world
OOD tasks, training models on CheXpert chest X-rays with no support devices,
then using scans with unseen pacemakers (we manually labelled 50% of CheXpert
for this research) and unseen sex as OOD cases. The results inform
best-practices for the use of Mahalanobis distance for OOD detection. The
manually annotated pacemaker labels and the project's code are available at:
https://github.com/HarryAnthony/Mahalanobis-OOD-detection.
|
Synchronization of Interacting Quantum Dipoles | Macroscopic ensembles of radiating dipoles are ubiquitous in the physical and
natural sciences. In the classical limit the dipoles can be described as
damped-driven oscillators, which are able to spontaneously synchronize and
collectively lock their phases. Here we investigate the corresponding
phenomenon in the quantum regime with arrays of quantized two-level systems
coupled via long-range and anisotropic dipolar interactions. Our calculations
demonstrate that the dipoles may overcome the decoherence induced by quantum
fluctuations and inhomogeneous couplings and evolve to a synchronized
steady-state. This steady-state bears much similarity to that observed in
classical systems, and yet also exhibits genuine quantum properties such as
quantum correlations and quantum phase diffusion (reminiscent of lasing). Our
predictions could be relevant for the development of better atomic clocks and a
variety of noise tolerant quantum devices.
|
MAP moving horizon state estimation with binary measurements | The paper addresses state estimation for discrete-time systems with binary
(threshold) measurements by following a Maximum A posteriori Probability (MAP)
approach and exploiting a Moving Horizon (MH) approximation of the MAP
cost-function. It is shown that, for a linear system and noise distributions
with log-concave probability density function, the proposed MH-MAP state
estimator involves the solution, at each sampling interval, of a convex
optimization problem. Application of the MH-MAP estimator to dynamic estimation
of a diffusion field given pointwise-in-time-and-space binary measurements of
the field is also illustrated and, finally, simulation results relative to this
application are shown to demonstrate the effectiveness of the proposed
approach.
|
Boolean Functions with Biased Inputs: Approximation and Noise
Sensitivity | This paper considers the problem of approximating a Boolean function $f$
using another Boolean function from a specified class. Two classes of
approximating functions are considered: $k$-juntas, and linear Boolean
functions. The $n$ input bits of the function are assumed to be independently
drawn from a distribution that may be biased. The quality of approximation is
measured by the mismatch probability between $f$ and the approximating function
$g$. For each class, the optimal approximation and the associated mismatch
probability is characterized in terms of the biased Fourier expansion of $f$.
The technique used to analyze the mismatch probability also yields an
expression for the noise sensitivity of $f$ in terms of the biased Fourier
coefficients, under a general i.i.d. input perturbation model.
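As a hedged illustration (not part of the paper; the test function, bias value, and sampling scheme are our own choices), the p-biased Fourier coefficients on which these characterizations rest can be estimated by simple Monte Carlo:

```python
import itertools
import numpy as np

def biased_fourier_coeffs(f, n, p, n_samples=50_000, seed=0):
    """Monte Carlo estimate of the p-biased Fourier coefficients of
    f: {0,1}^n -> {-1,+1}, with bits drawn i.i.d. Bernoulli(p).
    The p-biased character for a set S is
    phi_S(x) = prod_{i in S} (x_i - p) / sqrt(p(1-p))."""
    rng = np.random.default_rng(seed)
    X = (rng.random((n_samples, n)) < p).astype(float)
    fx = np.apply_along_axis(f, 1, X)
    sigma = np.sqrt(p * (1 - p))
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            phi = (np.prod((X[:, list(S)] - p) / sigma, axis=1)
                   if S else np.ones(n_samples))
            coeffs[S] = float(np.mean(fx * phi))
    return coeffs

# Example: 3-bit majority under input bias p = 0.7
maj = lambda x: 1.0 if x.sum() >= 2 else -1.0
print(biased_fourier_coeffs(maj, 3, 0.7))
```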
|
Artificial Constraints and Lipschitz Hints for Unconstrained Online
Learning | We provide algorithms that guarantee regret $R_T(u)\le \tilde O(G\|u\|^3 +
G(\|u\|+1)\sqrt{T})$ or $R_T(u)\le \tilde O(G\|u\|^3T^{1/3} + GT^{1/3}+
G\|u\|\sqrt{T})$ for online convex optimization with $G$-Lipschitz losses for
any comparison point $u$ without prior knowledge of either $G$ or $\|u\|$.
Previous algorithms dispense with the $O(\|u\|^3)$ term at the expense of
knowledge of one or both of these parameters, while a lower bound shows that
some additional penalty term over $G\|u\|\sqrt{T}$ is necessary. Previous
penalties were exponential while our bounds are polynomial in all quantities.
Further, given a known bound $\|u\|\le D$, our same techniques allow us to
design algorithms that adapt optimally to the unknown value of $\|u\|$ without
requiring knowledge of $G$.
|
TCoMX: Tomotherapy Complexity Metrics EXtractor | TCoMX (Tomotherapy Complexity Metrics EXtractor) is a newly developed tool
for the automatic extraction of complexity metrics from the DICOM RT-PLAN files
of helical tomotherapy (HT) treatments. TCoMX allows the extraction of all the
different complexity metrics proposed in the literature. This document contains
all the needed guidelines to install and use TCoMX. Furthermore, all the
metrics included in TCoMX are described in detail.
|
DetZero: Rethinking Offboard 3D Object Detection with Long-term
Sequential Point Clouds | Existing offboard 3D detectors always follow a modular pipeline design to
take advantage of unlimited sequential point clouds. We have found that the
full potential of offboard 3D detectors is not explored mainly due to two
reasons: (1) the onboard multi-object tracker cannot generate sufficient
complete object trajectories, and (2) the motion state of objects poses an
inevitable challenge for the object-centric refining stage in leveraging the
long-term temporal context representation. To tackle these problems, we propose
a novel paradigm of offboard 3D object detection, named DetZero. Concretely, an
offline tracker coupled with a multi-frame detector is proposed to focus on the
completeness of generated object tracks. An attention-mechanism refining module
is proposed to strengthen contextual information interaction across long-term
sequential point clouds for object refining with decomposed regression methods.
Extensive experiments on Waymo Open Dataset show our DetZero outperforms all
state-of-the-art onboard and offboard 3D detection methods. Notably, DetZero
ranks 1st place on Waymo 3D object detection leaderboard with 85.15 mAPH (L2)
detection performance. Further experiments validate that such high-quality
results can be used in place of human labels. Our empirical study
leads to rethinking conventions and interesting findings that can guide future
research on offboard 3D object detection.
|
Robust Instance-Optimal Recovery of Sparse Signals at Unknown Noise
Levels | We consider the problem of sparse signal recovery from noisy measurements.
Many frequently used recovery methods rely on some sort of tuning that depends
on either noise or signal parameters. If no estimates for either of them are
available, the noisy recovery problem becomes significantly harder. The square root
LASSO and the least absolute deviation LASSO are known to be noise-blind, in
the sense that the tuning parameter can be chosen independently of the noise and
the signal. We generalize those recovery methods to the \hrlone{} and give a
recovery guarantee once the tuning parameter exceeds a threshold. Moreover, we
analyze the effect of a badly chosen tuning parameter (mistuning) on a theoretical
level and prove the optimality of our recovery guarantee. Further, for Gaussian
matrices we give a refined analysis of the threshold of the tuning parameter
and prove a new relation between the tuning parameter and the dimensions. Indeed, for
a certain number of measurements the tuning parameter becomes independent of
the sparsity. Finally, we verify that the least absolute deviation LASSO can be
used with random walk matrices of uniformly at random chosen left-regular
bipartite graphs.
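As a minimal sketch of the noise-blind square root LASSO mentioned above (problem sizes, noise level, and the value of the tuning parameter are illustrative; the paper analyzes the actual threshold), using the cvxpy modeling library:

```python
import cvxpy as cp
import numpy as np

# Square root LASSO: min_x ||Ax - y||_2 + lam * ||x||_1.
# Its tuning parameter lam can be chosen independently of the
# noise level, unlike the ordinary LASSO.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)
y = A @ x_true + 0.05 * rng.standard_normal(m)

x = cp.Variable(n)
lam = 1.5  # illustrative above-threshold choice
cp.Problem(cp.Minimize(cp.norm(A @ x - y, 2) + lam * cp.norm(x, 1))).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```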
|
Deep Occlusion Reasoning for Multi-Camera Multi-Target Detection | People detection in single 2D images has improved greatly in recent years.
However, comparatively little of this progress has percolated into multi-camera
multi-people tracking algorithms, whose performance still degrades severely
when scenes become very crowded. In this work, we introduce a new architecture
that combines Convolutional Neural Nets and Conditional Random Fields to
explicitly model those ambiguities. One of its key ingredients is high-order
CRF terms that model potential occlusions and give our approach its robustness
even when many people are present. Our model is trained end-to-end and we show
that it outperforms several state-of-the-art algorithms on challenging scenes.
|
Joint Switch Upgrade and Controller Deployment in Hybrid
Software-Defined Networks | To improve traffic management ability, Internet Service Providers (ISPs) are
gradually upgrading legacy network devices to programmable devices that support
Software-Defined Networking (SDN). The coexistence of legacy and SDN devices
gives rise to a hybrid SDN. Existing hybrid SDNs do not consider the potential
performance issues introduced by a centralized SDN controller: flow requests
processed by a highly loaded controller may experience long tail processing
delay; inappropriate multi-controller deployment could increase the propagation
delay of flow requests.
In this paper, we propose to jointly consider the deployment of SDN switches
and their controllers for hybrid SDNs. We formulate the joint problem as an
optimization problem that maximizes the number of flows that can be controlled
and managed by the SDN and minimizes the propagation delay of flow requests
between SDN controllers and switches under a given upgrade budget constraint.
We show this problem is NP-hard. To efficiently solve the problem, we propose
some techniques (e.g., strengthening the constraints and adding additional
valid inequalities) to accelerate the global optimization solver for solving
the problem for small networks and an efficient heuristic algorithm for solving
it for large networks. The simulation results from real network topologies
illustrate the effectiveness of the proposed techniques and show that our
proposed heuristic algorithm uses a small number of controllers to manage a
large number of flows with good performance.
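The following toy model is only a hedged sketch of the budgeted core of the problem (switch names, costs, and flow counts are invented, and the controller-placement and delay terms of the full formulation are omitted), expressed with the PuLP MILP library:

```python
import pulp

# Hedged simplification: upgrading switch s costs cost[s]; a flow is
# counted as SDN-controlled once its ingress switch is upgraded.  The
# paper's full model additionally places controllers and penalizes
# request-propagation delay; this only illustrates the budgeted
# maximum-coverage core as a MILP.
switches = ["s1", "s2", "s3", "s4"]
cost = {"s1": 3, "s2": 2, "s3": 2, "s4": 1}
flows_via = {"s1": 40, "s2": 25, "s3": 20, "s4": 10}  # flows entering each switch
budget = 5

up = {s: pulp.LpVariable(f"up_{s}", cat="Binary") for s in switches}
prob = pulp.LpProblem("switch_upgrade", pulp.LpMaximize)
prob += pulp.lpSum(flows_via[s] * up[s] for s in switches)        # objective
prob += pulp.lpSum(cost[s] * up[s] for s in switches) <= budget   # budget
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(up[s].value()) for s in switches})
```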
|
Can recurrent neural networks learn process model structure? | Various methods using machine and deep learning have been proposed to tackle
different tasks in predictive process monitoring, i.e., forecasting, for an ongoing
case, e.g., the most likely next event or suffix, its remaining time, or an
outcome-related variable. Recurrent neural networks (RNNs), and more
specifically long short-term memory nets (LSTMs), stand out in terms of
popularity. In this work, we investigate the capabilities of such an LSTM to
actually learn the underlying process model structure of an event log. We
introduce an evaluation framework that combines variant-based resampling and
custom metrics for fitness, precision and generalization. We evaluate 4
hypotheses concerning the learning capabilities of LSTMs, the effect of
overfitting countermeasures, the level of incompleteness in the training set
and the level of parallelism in the underlying process model. We confirm that
LSTMs can struggle to learn process model structure, even with simplistic
process data and in a very lenient setup. Taking the correct anti-overfitting
measures can alleviate the problem. However, these measures did not prove
optimal when hyperparameters were selected purely on prediction accuracy. We
also found that decreasing the amount of information seen by the LSTM during
training causes a sharp drop in generalization and precision scores. In our
experiments, we could not identify a relationship between the extent of
parallelism in the model and the generalization capability, but they do
indicate that the process's complexity might have an impact.
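A minimal sketch of the kind of next-event LSTM examined here (embedding size, hidden size, and vocabulary are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

# Next-event prediction LSTM: event prefixes go in, a distribution
# over the next activity comes out.
class NextEventLSTM(nn.Module):
    def __init__(self, n_activities, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_activities, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_activities)

    def forward(self, prefixes):            # (batch, seq_len) activity ids
        h, _ = self.lstm(self.emb(prefixes))
        return self.out(h[:, -1])           # logits for the next activity

model = NextEventLSTM(n_activities=10)
logits = model(torch.randint(0, 10, (4, 7)))
print(logits.shape)  # torch.Size([4, 10])
```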
|
X-pire! - A digital expiration date for images in social networks | The Internet and its current information culture of preserving all kinds of
data cause severe problems with privacy. Most of today's Internet users,
especially teenagers, publish various kinds of sensitive information, yet
without recognizing that revealing this information might be detrimental to
their future life and career. Unflattering images that can be openly accessed
now and in the future, e.g., by potential employers, constitute a particularly
important such privacy concern. We have developed a novel, fast, and scalable
system called X-pire! that allows users to set an expiration date for images in
social networks (e.g., Facebook and Flickr) and on static websites, without
requiring any form of additional interaction with these web pages. Once the
expiration date is reached, the images become unavailable. Moreover, the
publishing user can dynamically prolong or shorten the expiration dates of his
images later, and even enforce instantaneous expiration. Rendering the approach
possible for social networks crucially required us to develop a novel technique
for embedding encrypted information within JPEG files in a way that survives
JPEG compression, even for highly optimized implementations of JPEG
post-processing with their various idiosyncrasies as commonly used in such
networks. We have implemented our system and conducted performance measurements
to demonstrate its robustness and efficiency.
|
Non-Rectangular Convolutions and (Sub-)Cadences with Three Elements | The discrete acyclic convolution computes the $2n-1$ sums
$\sum_{i+j=k,\,(i,j)\in\{0,1,\ldots,n-1\}^2} a_i b_j$ in $O(n \log n)$ time. By
using suitable offsets and setting some of the variables to zero, this method
provides a tool to calculate all non-zero sums
$\sum_{i+j=k,\,(i,j)\in P \cap \mathbb{Z}^2} a_i b_j$ in a rectangle $P$ with
perimeter $p$ in $O(p \log p)$ time.
This paper extends this geometric interpretation to allow arbitrary convex
polygons $P$ with $k$ vertices and perimeter $p$. The extended algorithm needs
only $O(k + p(\log p)^2 \log k)$ time.
Additionally, this paper presents fast algorithms for counting sub-cadences
and cadences with 3 elements using this extended method.
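For concreteness, the $O(n \log n)$ building block can be sketched as a zero-padded FFT convolution (a standard textbook implementation, not code from the paper):

```python
import numpy as np

# Acyclic convolution c_k = sum_{i+j=k} a_i b_j via zero-padded FFTs.
# Restricting sums to a polygon P is then a matter of offsetting
# indices and zeroing entries outside P, as described above.
def acyclic_convolution(a, b):
    n = len(a) + len(b) - 1
    m = 1 << (n - 1).bit_length()            # next power of two
    c = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b, m), m)[:n]
    return np.rint(c).astype(int)            # exact for small integer inputs

print(acyclic_convolution([1, 2, 3], [4, 5, 6]))  # [ 4 13 28 27 18]
```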
|
Ordering dynamics with two non-excluding options: Bilingualism in
language competition | We consider a modification of the voter model in which a set of interacting
elements (agents) can be in either of two equivalent states (A or B) or in a
third additional mixed AB state. The model is motivated by studies of language
competition dynamics, where the AB state is associated with bilingualism. We
study the ordering process and associated interface and coarsening dynamics in
regular lattices and small world networks. Agents in the AB state define the
interfaces, changing the interfacial noise driven coarsening of the voter model
to curvature driven coarsening. We argue that this change in the coarsening
mechanism is generic for perturbations of the voter model dynamics. When
interaction is through a small world network the AB agents restore coarsening,
eliminating the metastable states of the voter model. The time to reach the
absorbing state scales with system size as $\tau \sim \ln N$ to be compared
with the result $\tau \sim N$ for the voter model in a small world network.
|
VIB is Half Bayes | In discriminative settings such as regression and classification there are
two random variables at play, the inputs X and the targets Y. Here, we
demonstrate that the Variational Information Bottleneck can be viewed as a
compromise between fully empirical and fully Bayesian objectives, attempting to
minimize the risks due to finite sampling of Y only. We argue that this
approach provides some of the benefits of Bayes while requiring only some of
the work.
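A hedged sketch of the VIB objective in this reading (a Gaussian encoder with a standard-normal prior is assumed; shapes and the value of beta are illustrative): the KL term treats the representation variationally while the cross-entropy term remains an empirical risk in Y.

```python
import torch
import torch.nn.functional as F

# VIB loss = empirical cross-entropy in Y + beta * KL of the Gaussian
# encoder q(z|x) = N(mu, diag(exp(logvar))) against a N(0, I) prior.
def vib_loss(logits, y, mu, logvar, beta=1e-3):
    ce = F.cross_entropy(logits, y)                      # empirical risk in Y
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + beta * kl

logits, y = torch.randn(8, 10), torch.randint(0, 10, (8,))
mu, logvar = torch.randn(8, 32), torch.randn(8, 32)
print(vib_loss(logits, y, mu, logvar))
```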
|
A Bounded Multi-Vacation Queue Model for Multi-stage Sleep Control 5G
Base station | Modelling and control of energy consumption is an important problem in
telecommunication systems. To model such systems, this paper proposes a bounded
multi-vacation queue model. The energy consumption predicted by the model shows
an average error rate of 0.0177, and the predicted delay shows an average error
rate of 0.0655, over 99 test instances. Subsequently, an optimization algorithm
is proposed to minimize the energy consumption without violating the delay
bound. Furthermore, given the current state-of-the-art 5G base station system
configuration, numerical results show that the energy saving rate decreases as
the traffic load increases.
|
Advances in Synthetic Gauge Fields for Light Through Dynamic Modulation | Photons are neutral particles that do not directly couple to magnetic fields.
However, it is possible to generate a photonic gauge field by breaking
reciprocity such that the phase of light depends on its direction of
propagation. This non-reciprocal phase indicates the presence of an effective
magnetic field for the light itself. By suitable tailoring of this phase it is
possible to demonstrate quantum effects typically associated with electrons,
and as has been recently shown, non-trivial topological properties of light.
This paper reviews dynamic modulation as a process for breaking the
time-reversal symmetry of light and generating a synthetic gauge field, and
discusses its role in topological photonics, as well as recent developments in
exploring topological photonics in higher dimensions.
|
Hierarchical Recurrent Attention Network for Response Generation | We study multi-turn response generation in chatbots where a response is
generated according to a conversation context. Existing work has modeled the
hierarchy of the context, but does not pay enough attention to the fact that
words and utterances in the context are differentially important. As a result,
they may lose important information in context and generate irrelevant
responses. We propose a hierarchical recurrent attention network (HRAN) to
model both aspects in a unified framework. In HRAN, a hierarchical attention
mechanism attends to important parts within and among utterances with word
level attention and utterance level attention respectively. With the word level
attention, hidden vectors of a word level encoder are synthesized as utterance
vectors and fed to an utterance level encoder to construct hidden
representations of the context. The hidden vectors of the context are then
processed by the utterance level attention and formed as context vectors for
decoding the response. Empirical studies on both automatic evaluation and human
judgment show that HRAN can significantly outperform state-of-the-art models
for multi-turn response generation.
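A hedged sketch of the two attention levels (dimensions are illustrative, and HRAN's utterance-level encoder between the two steps is omitted here):

```python
import torch
import torch.nn.functional as F

# Word-level attention turns each utterance's word states into an
# utterance vector; utterance-level attention then forms the context
# vector used for decoding.
def attention(states, query_vec):              # states: (n, d), query: (d,)
    scores = states @ query_vec                # (n,)
    return F.softmax(scores, dim=0) @ states   # weighted sum, (d,)

d, n_utts, n_words = 64, 5, 12
word_states = torch.randn(n_utts, n_words, d)
decoder_state = torch.randn(d)

utt_vecs = torch.stack([attention(u, decoder_state) for u in word_states])
context = attention(utt_vecs, decoder_state)   # fed to the decoder
print(context.shape)  # torch.Size([64])
```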
|
A Neural-embedded Choice Model: TasteNet-MNL Modeling Taste
Heterogeneity with Flexibility and Interpretability | Discrete choice models (DCMs) require a priori knowledge of the utility
functions, especially how tastes vary across individuals. Utility
misspecification may lead to biased estimates, inaccurate interpretations and
limited predictability. In this paper, we utilize a neural network to learn
taste representation. Our formulation consists of two modules: a neural network
(TasteNet) that learns taste parameters (e.g., time coefficient) as flexible
functions of individual characteristics; and a multinomial logit (MNL) model
with utility functions defined with expert knowledge. Taste parameters learned
by the neural network are fed into the choice model and link the two modules.
Our approach extends the L-MNL model (Sifringer et al., 2020) by allowing the
neural network to learn the interactions between individual characteristics and
alternative attributes. Moreover, we formalize and strengthen the
interpretability condition - requiring realistic estimates of behavior
indicators (e.g., value-of-time, elasticity) at the disaggregated level, which
is crucial for a model to be suitable for scenario analysis and policy
decisions. Through a unique network architecture and parameter transformation,
we incorporate prior knowledge and guide the neural network to output realistic
behavior indicators at the disaggregated level. We show that TasteNet-MNL
reaches the ground-truth model's predictability and recovers the nonlinear
taste functions on synthetic data. Its estimated value-of-time and choice
elasticities at the individual level are close to the ground truth. On a
publicly available Swissmetro dataset, TasteNet-MNL outperforms benchmark MNLs
and a Mixed Logit model in predictability. It learns a broader spectrum of
taste variations within the population and suggests a higher average
value-of-time.
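A hedged sketch of the two-module design (sizes, the specific utility form, and the sign-enforcing transformation below are illustrative simplifications, not the paper's exact specification):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small network maps individual characteristics z to a taste
# parameter, which enters an expert-specified MNL utility; a softplus
# transformation keeps the time coefficient behaviourally realistic.
class TasteNetMNL(nn.Module):
    def __init__(self, n_char):
        super().__init__()
        self.taste = nn.Sequential(nn.Linear(n_char, 16), nn.ReLU(),
                                   nn.Linear(16, 1))

    def forward(self, z, cost, time):
        beta_time = -F.softplus(self.taste(z))     # negative time coefficient
        utility = -cost + beta_time * time         # beta_cost normalized to -1
        return torch.log_softmax(utility, dim=1)   # MNL choice log-probabilities

model = TasteNetMNL(n_char=4)
logp = model(torch.randn(8, 4), torch.rand(8, 3), torch.rand(8, 3))
print(logp.shape)  # torch.Size([8, 3])
```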
|
Statistically Motivated Second Order Pooling | Second-order pooling, a.k.a.~bilinear pooling, has proven effective for deep
learning based visual recognition. However, the resulting second-order networks
yield a final representation that is orders of magnitude larger than that of
standard, first-order ones, making them memory-intensive and cumbersome to
deploy. Here, we introduce a general, parametric compression strategy that can
produce more compact representations than existing compression techniques, yet
outperform both compressed and uncompressed second-order models. Our approach
is motivated by a statistical analysis of the network's activations, relying on
operations that lead to a Gaussian-distributed final representation, as
inherently used by first-order deep networks. As evidenced by our experiments,
this lets us outperform the state-of-the-art first-order and second-order
models on several benchmark recognition datasets.
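For reference, plain (uncompressed) second-order pooling, the representation whose size motivates the compression above, can be sketched as follows (sizes illustrative):

```python
import torch

# Spatially averaged outer products of local CNN features, flattened
# to a d*d vector -- orders of magnitude larger than first-order
# (global average) pooling of the same feature map.
feats = torch.randn(8, 256, 14, 14)            # (batch, channels, H, W)
x = feats.flatten(2)                           # (batch, d, HW)
second_order = torch.einsum("bdk,bek->bde", x, x) / x.shape[-1]
pooled = second_order.flatten(1)               # (batch, d*d) -- huge
print(pooled.shape)                            # torch.Size([8, 65536])
```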
|
Nonlinear Regression Analysis Using Multi-Verse Optimizer | Regression analysis is an important machine learning task used for predictive
analytics in business, sports analysis, etc. In regression analysis,
optimization algorithms play a significant role in searching for the
coefficients of the regression model. In this paper, nonlinear regression
analysis using the recently developed meta-heuristic Multi-Verse Optimizer
(MVO) is proposed. The proposed method is applied to 10 well-known benchmark
nonlinear regression problems. A comparative study has been conducted with
Particle Swarm Optimizer (PSO). The experimental results demonstrate that the
proposed method statistically outperforms the PSO algorithm.
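As an illustrative stand-in (MVO itself is not available in scipy, so differential evolution plays its role here; the benchmark model and data are invented), metaheuristic nonlinear regression amounts to minimizing a residual objective over coefficient bounds:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Fit y = a * exp(-b * x) by minimizing the sum of squared residuals
# with a population-based metaheuristic.
x = np.linspace(0, 10, 50)
y = 2.5 * np.exp(-0.4 * x) + np.random.default_rng(0).normal(0, 0.02, 50)

def sse(theta):                       # sum of squared residuals
    a, b = theta
    return np.sum((y - a * np.exp(-b * x)) ** 2)

res = differential_evolution(sse, bounds=[(0, 10), (0, 2)], seed=0)
print(res.x)   # approximately [2.5, 0.4]
```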
|
Automatic Generation of Moment-Based Invariants for Prob-Solvable Loops | One of the main challenges in the analysis of probabilistic programs is to
compute invariant properties that summarise loop behaviours. Automation of
invariant generation is still in its infancy and most of the time targets only
expected values of the program variables, which is insufficient to recover the
full probabilistic program behaviour. We present a method to automatically
generate moment-based invariants of a subclass of probabilistic programs,
called Prob-Solvable loops, with polynomial assignments over random variables
and parametrised distributions. We combine methods from symbolic summation and
statistics to derive invariants as valid properties over higher-order moments,
such as expected values or variances, of program variables. We successfully
evaluated our work on several examples where full automation for computing
higher-order moments and invariants over program variables was not yet
possible.
|
Coinductive proof search for polarized logic with applications to full
intuitionistic propositional logic | The approach to proof search dubbed "coinductive proof search", and
previously developed by the authors for implicational intuitionistic logic, is
in this paper extended to LJP, a focused sequent-calculus presentation of
polarized intuitionistic logic, including an array of positive and negative
connectives. As before, this includes developing a coinductive description of
the search space generated by a sequent, an equivalent inductive syntax
describing the same space, and decision procedures for inhabitation problems in
the form of predicates defined by recursion on the inductive syntax. We prove
the decidability of existence of focused inhabitants, and of finiteness of the
number of focused inhabitants for polarized intuitionistic logic, by means of
such recursive procedures. Moreover, the polarized logic can be used as a
platform from which proof search for other logics is understood. We illustrate
the technique with LJT, a focused sequent calculus for full intuitionistic
propositional logic (including disjunction). For that, we have to work out the
"negative translation" of LJT into LJP (that sees all intuitionistic types as
negative types), and verify that the translation gives a faithful
representation of proof search in LJT as proof search in the polarized logic.
We therefore inherit decidability of both problems studied for LJP and thus get
new proofs of these results for LJT.
|
AIM 2019 Challenge on Image Demoireing: Dataset and Study | This paper introduces a novel dataset, called LCDMoire, which was created for
the first-ever image demoireing challenge that was part of the Advances in
Image Manipulation (AIM) workshop, held in conjunction with ICCV 2019. The
dataset comprises 10,200 synthetically generated image pairs (consisting of an
image degraded by moire and a clean ground truth image). In addition to
describing the dataset and its creation, this paper also reviews the challenge
tracks, competition, and results, the latter summarizing the current
state-of-the-art on this dataset.
|
Semi-Supervised Learning for In-Game Expert-Level Music-to-Dance
Translation | Music-to-dance translation is a brand-new and powerful feature in recent
role-playing games. Players can now let their characters dance along with
specified music clips and even generate fan-made dance videos. Previous works
on this topic consider music-to-dance as a supervised motion generation problem
based on time-series data. However, these methods suffer from limited training
data pairs and the degradation of movements. This paper provides a new
perspective for this task where we re-formulate the translation problem as a
piece-wise dance phrase retrieval problem based on the choreography theory.
With such a design, players are allowed to further edit the dance movements on
top of our generation while other regression based methods ignore such user
interactivity. Considering that the dance motion capture is an expensive and
time-consuming procedure which requires the assistance of professional dancers,
we train our method under a semi-supervised learning framework with a large
unlabeled dataset (20x larger than the labeled data) collected. A co-ascent mechanism is
introduced to improve the robustness of our network. Using this unlabeled
dataset, we also introduce self-supervised pre-training so that the translator
can understand the melody, rhythm, and other components of music phrases. We
show that the pre-training significantly improves the translation accuracy
compared with training from scratch. Experimental results suggest that our method not
only generalizes well over various styles of music but also succeeds in
expert-level choreography for game players.
|
Transferable Multi-Domain State Generator for Task-Oriented Dialogue
Systems | Over-dependence on domain ontology and lack of knowledge sharing across
domains are two practical and yet less studied problems of dialogue state
tracking. Existing approaches generally fall short in tracking unknown slot
values during inference and often have difficulties in adapting to new domains.
In this paper, we propose a Transferable Dialogue State Generator (TRADE) that
generates dialogue states from utterances using a copy mechanism, facilitating
knowledge transfer when predicting (domain, slot, value) triplets not
encountered during training. Our model is composed of an utterance encoder, a
slot gate, and a state generator, which are shared across domains. Empirical
results demonstrate that TRADE achieves state-of-the-art joint goal accuracy of
48.62% for the five domains of MultiWOZ, a human-human dialogue dataset. In
addition, we show its transferring ability by simulating zero-shot and few-shot
dialogue state tracking for unseen domains. TRADE achieves 60.58% joint goal
accuracy in one of the zero-shot domains, and is able to adapt to few-shot
cases without forgetting already trained domains.
|
Moving Embedded Solitons | The first theoretical results are reported predicting {\em moving} solitons
residing inside ({\it embedded} into) the continuous spectrum of radiation
modes. The model taken is a Bragg-grating medium with Kerr nonlinearity and
additional second-derivative (wave) terms. The moving embedded solitons (ESs)
are doubly isolated (of codimension 2), but, nevertheless, structurally stable.
Like quiescent ESs, moving ESs are argued to be stable to linear approximation,
and {\it semi}-stable nonlinearly. Estimates show that moving ESs may be
experimentally observed as $\sim$10 fs pulses with velocity $\leq 1/10$th that
of light.
|
Design a Persian Automated Plagiarism Detector (AMZPPD) | Currently there are many plagiarism detection approaches, but few of them
have been implemented and adapted for the Persian language. In this paper, we
describe our work on the design and implementation of a plagiarism detection
system based on pre-processing and NLP techniques, and present the results of
testing it on a corpus.
|
Full-F Turbulent Simulation in a Linear Device using a Gyro-Moment
Approach | Simulations of plasma turbulence in a linear plasma device configuration are
presented. These simulations are based on a simplified version of the
gyrokinetic (GK) model proposed by B. J. Frei et al. [J. Plasma Phys. 86,
905860205 (2020)] where the full-F distribution function is expanded on a
velocity-space polynomial basis allowing us to reduce its evolution to the
solution of an arbitrary number of fluid-like equations for the expansion
coefficients, denoted as the gyro-moments (GM). By focusing on the
electrostatic limit and neglecting finite Larmor radius effects, a full-F GM
hierarchy equation is derived to evolve the ion dynamics, which includes a
nonlinear Dougherty collision operator, localized sources, and Bohm sheath
boundary conditions. An electron fluid Braginskii model is used to evolve the
electron dynamics, coupled to the full-F ion GM hierarchy equation via a
vorticity equation where the Boussinesq approximation is used. A set of full-F
turbulent simulations are then performed using the parameters of the LArge
Plasma Device (LAPD) experiments with different numbers of ion GMs and
different values of collisionality. The ion distribution function is analyzed
illustrating the convergence properties of the GM approach. In particular, we
show that higher-order GMs are damped by collisions in the high-collisional
regime relevant to LAPD experiments. The GM results are then compared with
those from two-fluid Braginskii simulations, finding qualitative agreement in
the time-averaged profiles and statistical turbulent properties.
|
Investigating Guiding Information for Adaptive Collocation Point
Sampling in PINNs | Physics-informed neural networks (PINNs) provide a means of obtaining
approximate solutions of partial differential equations and systems through the
minimisation of an objective function which includes the evaluation of a
residual function at a set of collocation points within the domain. The quality
of a PINNs solution depends upon numerous parameters, including the number and
distribution of these collocation points. In this paper we consider a number of
strategies for selecting these points and investigate their impact on the
overall accuracy of the method. In particular, we suggest that no single
approach is likely to be ``optimal'' but we show how a number of important
metrics can have an impact in improving the quality of the results obtained
when using a fixed number of residual evaluations. We illustrate these
approaches through the use of two benchmark test problems: Burgers' equation
and the Allen-Cahn equation.
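A hedged sketch of the residual evaluation that such collocation-point strategies guide, for Burgers' equation (the network architecture and the residual-guided selection rule are illustrative):

```python
import torch

# Residual of u_t + u u_x - (0.01/pi) u_xx = 0 at points (x, t),
# obtained with autograd; high-residual points can then be kept as
# collocation points.
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def burgers_residual(xt):                    # xt: (N, 2), columns (x, t)
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t + u * u_x - (0.01 / torch.pi) * u_xx

pts = torch.rand(128, 2)                     # candidate collocation points
r = burgers_residual(pts).abs().squeeze().detach()
keep = pts[r.topk(32).indices]               # e.g. residual-guided selection
print(keep.shape)                            # torch.Size([32, 2])
```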
|
Recurrent Brain Graph Mapper for Predicting Time-Dependent Brain Graph
Evolution Trajectory | Several brain disorders can be detected by observing alterations in the
brain's structural and functional connectivities. Neurological findings suggest
that early diagnosis of brain disorders, such as mild cognitive impairment
(MCI), can prevent and even reverse its development into Alzheimer's disease
(AD). In this context, recent studies aimed to predict the evolution of brain
connectivities over time by proposing machine learning models that work on
brain images. However, such an approach is costly and time-consuming. Here, we
propose to use brain connectivities as a more efficient alternative for
time-dependent brain disorder diagnosis by regarding the brain instead as a
large interconnected graph characterizing the interconnectivity scheme between
several brain regions. We term our proposed method Recurrent Brain Graph Mapper
(RBGM), a novel efficient edge-based recurrent graph neural network that
predicts the time-dependent evolution trajectory of a brain graph from a
single baseline. Our RBGM contains a set of recurrent neural network-inspired
mappers for each time point, where each mapper aims to project the ground-truth
brain graph onto its next time point. We leverage the teacher forcing method to
boost training and improve the evolved brain graph quality. To maintain the
topological consistency between the predicted brain graphs and their
corresponding ground-truth brain graphs at each time point, we further
integrate a topological loss. We also use an $\ell_1$ loss to capture time-dependency
and minimize the distance between the brain graph at consecutive time points
for regularization. Benchmarks against several variants of RBGM and
state-of-the-art methods prove that we can achieve the same accuracy in
predicting brain graph evolution more efficiently, paving the way for novel
graph neural network architecture and a highly efficient training scheme.
|
Deep Lidar CNN to Understand the Dynamics of Moving Vehicles | Perception technologies in Autonomous Driving are experiencing their golden
age due to the advances in Deep Learning. Yet, most of these systems rely on
the semantically rich information of RGB images. Deep Learning solutions
applied to the data of other sensors typically mounted on autonomous cars (e.g.
lidars or radars) are not explored much. In this paper we propose a novel
solution to understand the dynamics of moving vehicles of the scene from only
lidar information. The main challenge of this problem stems from the fact that
we need to disambiguate the proprio-motion of the 'observer' vehicle from that
of the external 'observed' vehicles. For this purpose, we devise a CNN
architecture which at testing time is fed with pairs of consecutive lidar
scans. However, in order to properly learn the parameters of this network,
during training we introduce a series of so-called pretext tasks which also
leverage on image data. These tasks include semantic information about
vehicleness and a novel lidar-flow feature which combines standard image-based
optical flow with lidar scans. We obtain very promising results and show that
including distilled image information only during training allows improving
the inference results of the network at test time, even when image data is no
longer used.
|
GenKL: An Iterative Framework for Resolving Label Ambiguity and Label
Non-conformity in Web Images Via a New Generalized KL Divergence | Web image datasets curated online inherently contain ambiguous
in-distribution (ID) instances and out-of-distribution (OOD) instances, which
we collectively call non-conforming (NC) instances. In many recent approaches
for mitigating the negative effects of NC instances, the core implicit
assumption is that the NC instances can be found via entropy maximization. For
"entropy" to be well-defined, we are interpreting the output prediction vector
of an instance as the parameter vector of a multinomial random variable, with
respect to some trained model with a softmax output layer. Hence, entropy
maximization is based on the idealized assumption that NC instances have
predictions that are "almost" uniformly distributed. However, in real-world web
image datasets, there are numerous NC instances whose predictions are far from
being uniformly distributed. To tackle the limitation of entropy maximization,
we propose $(\alpha, \beta)$-generalized KL divergence,
$\mathcal{D}_{\text{KL}}^{\alpha, \beta}(p\|q)$, which can be used to identify
significantly more NC instances. Theoretical properties of
$\mathcal{D}_{\text{KL}}^{\alpha, \beta}(p\|q)$ are proven, and we also show
empirically that a simple use of $\mathcal{D}_{\text{KL}}^{\alpha,
\beta}(p\|q)$ outperforms all baselines on the NC instance identification task.
Building upon $(\alpha,\beta)$-generalized KL divergence, we also introduce a
new iterative training framework, GenKL, that identifies and relabels NC
instances. When evaluated on three web image datasets, Clothing1M,
Food101/Food101N, and mini WebVision 1.0, we achieved new state-of-the-art
classification accuracies: $81.34\%$, $85.73\%$ and $78.99\%$/$92.54\%$
(top-1/top-5), respectively.
|
Streaming Noise Context Aware Enhancement For Automatic Speech
Recognition in Multi-Talker Environments | One of the most challenging scenarios for smart speakers is multi-talker,
when target speech from the desired speaker is mixed with interfering speech
from one or more speakers. A smart assistant needs to determine which voice to
recognize and which to ignore and it needs to do so in a streaming, low-latency
manner. This work presents two multi-microphone speech enhancement algorithms
targeted at this scenario. Targeting on-device use-cases, we assume that the
algorithm has access to the signal before the hotword, which is referred to as
the noise context. First is the Context Aware Beamformer which uses the noise
context and detected hotword to determine how to target the desired speaker.
The second is an adaptive noise cancellation algorithm called Speech Cleaner
which trains a filter using the noise context. It is demonstrated that the two
algorithms are complementary in the signal-to-noise ratio conditions under
which they work well. We also propose an algorithm to select which one to use
based on estimated SNR. When using 3 microphone channels, the final system
achieves a relative word error rate reduction of 55% at -12 dB and 43% at
12 dB.
|
Possible Coexistence of Antihydrogen with Hydrogen, Deuterium and
Tritium Atoms | Recent productions of large numbers of cold antiprotons as well as the
formation of antihydrogens at CERN and Fermilab have raised basic questions
about possible coexistence of matter and antimatter in nature. In the present
work, previous mathematical considerations are revisited which support the
possible coexistence of Antihydrogen with Hydrogen, Deuterium and Tritium
atoms. In particular, the main objective of the present work is to present
computational treatments which confirm the possible formation of these quasi
molecules in laboratory. These treatments are based on a nonadiabatic picture
of the system in which generalized basis functions are adjusted within the
framework of the Rayleigh-Ritz variational method. Thus, the present work rules
out the Born-Oppenheimer adiabatic picture of the system, which
demands the existence of bound states composed of fixed quasi heavy atoms
(containing at least two baryons, e.g. protonium (Pn), with mean lifetime
$1.0\times10^{-6}$ s) and quasi light atoms (composed of two leptons, e.g. positronium
(Ps), with mean lifetime $125\times10^{-12}$ s for para-Ps and $142.05\times10^{-9}$ s for
ortho-Ps). Our calculations of the binding energies and internal structure of
Antihydrogen-Hydrogen, Antihydrogen-Deuterium and Antihydrogen-Tritium show
that these quasi molecules are bound and could be formed in nature. On the
other hand, having in mind the adiabatic picture of the systems, our results
suggest the possible formation of these molecules as resonant states in
Antihydrogen-Atom interaction. Nevertheless, several arguments are accumulated
in the conclusion as consequences of the proposed bound states.
|
Using small-angle scattering to guide functional magnetic nanoparticle
design | Magnetic nanoparticles offer unique potential for various technological,
biomedical, or environmental applications thanks to the size-, shape- and
material-dependent tunability of their magnetic properties. To optimize
particles for a specific application, it is crucial to interrelate their
performance with their structural and magnetic properties. This review presents
the advantages of small-angle X-ray and neutron scattering techniques for
achieving a detailed multiscale characterization of magnetic nanoparticles and
their ensembles in a mesoscopic size range from 1 to a few hundred nanometers
with nanometer resolution. Both X-rays and neutrons allow the ensemble-averaged
determination of structural properties, such as particle morphology or particle
arrangement in multilayers and 3D assemblies. Additionally, the magnetic
scattering contributions enable retrieving the internal magnetization profile
of the nanoparticles as well as the inter-particle moment correlations caused
by interactions within dense assemblies. Most measurements are used to
determine the time-averaged ensemble properties; in addition, advanced
small-angle scattering techniques exist that allow accessing particle and spin
dynamics on various timescales. In this review, we focus on conventional
small-angle X-ray and neutron scattering (SAXS and SANS), X-ray and neutron
reflectometry, grazing-incidence SAXS and SANS, X-ray resonant magnetic
scattering, and neutron spin-echo spectroscopy techniques. For each technique,
we provide a general overview, present the latest scientific results, and
discuss its strengths as well as sample requirements. Finally, we give our
perspectives on how future small-angle scattering experiments, especially in
combination with micromagnetic simulations, could help to optimize the
performance of magnetic nanoparticles for specific applications.
|
Automatic Reuse, Adaption, and Execution of Simulation Experiments via
Provenance Patterns | Simulation experiments are typically conducted repeatedly during the model
development process, for example, to re-validate if a behavioral property still
holds after several model changes. Approaches for automatically reusing and
generating simulation experiments can support modelers in conducting simulation
studies in a more systematic and effective manner. They rely on explicit
experiment specifications and, so far, on user interaction for initiating the
reuse. Thereby, they are constrained to support the reuse of simulation
experiments in a specific setting. Our approach now goes one step further by
automatically identifying and adapting the experiments to be reused for a
variety of scenarios. To achieve this, we exploit provenance graphs of
simulation studies, which provide valuable information about the previous
modeling and experimenting activities, and contain meta-information about the
different entities that were used or produced during the simulation study. We
define provenance patterns and associate them with a semantics, which allows us
to interpret the different activities, and construct transformation rules for
provenance graphs. Our approach is implemented in a Reuse and Adapt framework
for Simulation Experiments (RASE) which can interface with various modeling and
simulation tools. In the case studies, we demonstrate the utility of our
framework for a) the repeated sensitivity analysis of an agent-based model of
migration routes, and b) the cross-validation of two models of a cell signaling
pathway.
|
Mathematical modelling and computational reduction of molten glass fluid
flow in a furnace melting basin | In this work, we present the modelling and numerical simulation of a molten
glass fluid flow in a furnace melting basin. We first derive a model for a
molten glass fluid flow and present numerical simulations based on the Finite
Element Method (FEM). We further discuss and validate the results obtained from
the simulations by comparing them with experimental results. Finally, we also
present a non-intrusive Proper Orthogonal Decomposition (POD) based on
Artificial Neural Networks (ANN) to efficiently handle scenarios which require
multiple simulations of the fluid flow upon changing parameters of relevant
industrial interest. This approach lets us obtain solutions of a complex 3D
model, with good accuracy with respect to the FEM solution, yet with negligible
associated computational times.
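A hedged sketch of the non-intrusive POD-ANN pipeline (snapshot data, parameter values, and network sizes are placeholders):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Snapshots of the full-order model are compressed by a truncated SVD,
# and a small network learns the map from parameters mu to POD
# coefficients; new parameters then give fast reduced-order solutions.
n_dof, n_snap, r = 5000, 40, 8
S = np.random.rand(n_dof, n_snap)          # snapshot matrix (placeholder)
mus = np.random.rand(n_snap, 2)            # corresponding parameter values

U, _, _ = np.linalg.svd(S, full_matrices=False)
basis = U[:, :r]                           # POD basis
coeffs = basis.T @ S                       # training targets, shape (r, n_snap)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
ann.fit(mus, coeffs.T)

mu_new = np.array([[0.3, 0.7]])
u_rom = basis @ ann.predict(mu_new).ravel()   # fast reduced-order prediction
print(u_rom.shape)                            # (5000,)
```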
|
OP-IMS @ DIACR-Ita: Back to the Roots: SGNS+OP+CD still rocks Semantic
Change Detection | We present the results of our participation in the DIACR-Ita shared task on
lexical semantic change detection for Italian. We exploit one of the earliest
and most influential semantic change detection models based on Skip-Gram with
Negative Sampling, Orthogonal Procrustes alignment and Cosine Distance and
obtain the winning submission of the shared task with near-perfect accuracy of
.94. Our results once more indicate that, within the present task setup in
lexical semantic change detection, the traditional type-based approaches yield
excellent performance.
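A minimal sketch of the OP+CD step (the embedding matrices below are random placeholders standing in for trained SGNS spaces):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Align the SGNS space of period 2 to that of period 1 with Orthogonal
# Procrustes, then score each target word by the cosine distance
# between its two aligned vectors.
V, d = 1000, 100
E1 = np.random.randn(V, d)      # SGNS embeddings, corpus 1 (placeholder)
E2 = np.random.randn(V, d)      # SGNS embeddings, corpus 2 (placeholder)

R, _ = orthogonal_procrustes(E2, E1)   # R minimizes ||E2 @ R - E1||_F
E2_aligned = E2 @ R

def cosine_distance(u, v):
    return 1 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

change_score = np.array([cosine_distance(E1[w], E2_aligned[w]) for w in range(V)])
```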
|
MC-LCR: Multi-modal contrastive classification by locally correlated
representations for effective face forgery detection | As the remarkable development of facial manipulation technologies is
accompanied by severe security concerns, face forgery detection has become a
recent research hotspot. Most existing detection methods train a binary
classifier under global supervision to judge real or fake. However, advanced
manipulations only perform small-scale tampering, posing challenges to
comprehensively capture subtle and local forgery artifacts, especially in high
compression settings and cross-dataset scenarios. To address such limitations,
we propose a novel framework named Multi-modal Contrastive Classification by
Locally Correlated Representations (MC-LCR) for effective face forgery
detection. Instead of specific appearance features, our MC-LCR aims to amplify
implicit local discrepancies between authentic and forged faces from both
spatial and frequency domains. Specifically, we design the shallow style
representation block that measures the pairwise correlation of shallow feature
maps, which encodes local style information to extract more discriminative
features in the spatial domain. Moreover, we make a key observation that subtle
forgery artifacts can be further exposed in the patch-wise phase and amplitude
spectrum and exhibit different clues. According to the complementarity of
amplitude and phase information, we develop a patch-wise amplitude and phase
dual attention module to capture locally correlated inconsistencies with each
other in the frequency domain. Besides the above two modules, we further
introduce the collaboration of supervised contrastive loss with cross-entropy
loss. It helps the network learn more discriminative and generalized
representations. Through extensive experiments and comprehensive studies, we
achieve state-of-the-art performance and demonstrate the robustness and
generalization of our method.
|
Initial nonrepetitive complexity of regular episturmian words and their
Diophantine exponents | Regular episturmian words are episturmian words whose directive words have a
regular and restricted form, making them behave more like Sturmian words than
general episturmian words. We present a method to evaluate the initial
nonrepetitive complexity of regular episturmian words extending the work of
Wojcik on Sturmian words. For this, we develop a theory of generalized
Ostrowski numeration systems and show how to associate with each episturmian
word a unique sequence of numbers written in this numeration system.
The description of the initial nonrepetitive complexity allows us to obtain
novel results on the Diophantine exponents of regular episturmian words. We
prove that the Diophantine exponent of a regular episturmian word is finite if
and only if its directive word has bounded partial quotients. Moreover, we
prove that the Diophantine exponent of a regular episturmian word is strictly
greater than $2$ if the sequence of partial quotients is eventually at least
$3$.
Given an infinite word $x$ over an integer alphabet, we may consider a real
number $\xi_x$ having $x$ as a fractional part. The Diophantine exponent of $x$
is a lower bound for the irrationality exponent of $\xi_x$. Our results thus
yield nontrivial lower bounds for the irrationality exponents of real numbers
whose fractional parts are regular episturmian words. As a consequence, we
identify a new uncountable class of transcendental numbers whose irrationality
exponents are strictly greater than $2$. This class contains an uncountable
subclass of Liouville numbers.
|
Revisiting Surgical Instrument Segmentation Without Human Intervention:
A Graph Partitioning View | Surgical instrument segmentation (SIS) on endoscopic images stands as a
long-standing and essential task in the context of computer-assisted
interventions for boosting minimally invasive surgery. Given the recent surge
of deep learning methodologies and their data-hungry nature, training a neural
predictive model based on massive expert-curated annotations has been
dominating and served as an off-the-shelf approach in the field, which could,
however, impose prohibitive burden to clinicians for preparing fine-grained
pixel-wise labels corresponding to the collected surgical video frames. In this
work, we propose an unsupervised method by reframing the video frame
segmentation as a graph partitioning problem and regarding image pixels as
graph nodes, which is significantly different from the previous efforts. A
self-supervised pre-trained model is firstly leveraged as a feature extractor
to capture high-level semantic features. Then, Laplacian matrices are computed
from the features and are eigendecomposed for graph partitioning. On the "deep"
eigenvectors, a surgical video frame is meaningfully segmented into different
modules such as tools and tissues, providing distinguishable semantic
information like locations, classes, and relations. The segmentation problem
can then be naturally tackled by applying clustering or thresholding to the
eigenvectors. Extensive experiments are conducted on various datasets (e.g.,
EndoVis2017, EndoVis2018, UCL, etc.) for different clinical endpoints. Across
all the challenging scenarios, our method demonstrates outstanding performance
and robustness higher than unsupervised state-of-the-art (SOTA) methods. The
code is released at https://github.com/MingyuShengSMY/GraphClusteringSIS.git.
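A hedged sketch of the graph-partitioning step (pixel features are random placeholders for the self-supervised backbone's output; real frames use far more pixels):

```python
import numpy as np

# Pixel features define an affinity graph; the Fiedler eigenvector of
# its Laplacian splits the frame, e.g. into tool vs. tissue regions,
# by a simple threshold.
n_pixels, d = 400, 64
F = np.random.rand(n_pixels, d)                 # deep features per pixel
Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
W = np.clip(Fn @ Fn.T, 0, None)                 # cosine affinities
L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                         # second-smallest eigenvector
labels = (fiedler > 0).astype(int)              # two-way partition
print(labels[:20])
```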
|
Interactive Visual Task Learning for Robots | We present a framework for robots to learn novel visual concepts and tasks
via in-situ linguistic interactions with human users. Previous approaches have
either used large pre-trained visual models to infer novel objects zero-shot,
or added novel concepts along with their attributes and representations to a
concept hierarchy. We extend the approaches that focus on learning visual
concept hierarchies by enabling them to learn novel concepts and solve unseen
robotics tasks with them. To enable a visual concept learner to solve robotics
tasks one-shot, we developed two distinct techniques. Firstly, we propose a
novel approach, Hi-Viscont(HIerarchical VISual CONcept learner for Task), which
augments information of a novel concept to its parent nodes within a concept
hierarchy. This information propagation allows all concepts in a hierarchy to
update as novel concepts are taught in a continual learning setting. Secondly,
we represent a visual task as a scene graph with language annotations, allowing
us to create novel permutations of a demonstrated task zero-shot in-situ. We
present two sets of results. Firstly, we compare Hi-Viscont with the baseline
model (FALCON) on visual question answering (VQA) in three domains. While being
comparable to the baseline model on leaf-level concepts, Hi-Viscont achieves an
improvement of over 9% on non-leaf concepts on average. Secondly, we compare
our model's performance against the baseline FALCON model on the robotics
tasks. Our framework achieves a 33% improvement in the success rate metric and
a 19% improvement in object-level accuracy compared to the baseline model. With
both of these results we
demonstrate the ability of our model to learn tasks and concepts in a continual
learning setting on the robot.
|
Solving the subset sum problem with a nonideal biological computer | We consider the solution of the subset sum problem based on a parallel
computer consisting of self-propelled biological agents moving in a
nanostructured network that encodes the NP-complete task in its geometry. We
develop an approximate analytical method to analyze the effects of small errors
in the nonideal junctions composing the computing network by using a Gaussian
confidence interval approximation of the multinomial distribution. We
concretely evaluate the probability distribution for error-induced paths and
determine the minimal number of agents required to obtain a proper solution. We
finally validate our theoretical results with exact numerical simulations of
the subset sum problem for different set sizes and error probabilities.
|
Excitation and propagation of spin waves in non-uniformly magnetized
waveguides | The characteristics of spin waves in ferromagnetic waveguides with nonuniform
magnetization have been investigated for situations where the shape anisotropy
field of the waveguide is comparable to the external bias field. Spin-wave
generation was realized by the magnetoelastic effect by applying normal and
shear strain components, as well as by the Oersted field emitted by an
inductive antenna. The magnetoelastic excitation field has a nonuniform profile
over the width of the waveguide because of the nonuniform magnetization
orientation, whereas the Oersted field remains uniform. Using micromagnetic
simulations, we indicate that both types of excitation fields generate
quantised width modes with both odd and even mode numbers as well as tilted
phase fronts. We demonstrate that these effects originate from the average
magnetization orientation with respect to the main axes of the magnetic
waveguide. Furthermore, it is indicated that the excitation efficiency of the
second-order mode generally surpasses that of the first-order mode due to their
symmetry. The relative intensity of the excited modes can be controlled by the
strain state as well as by tuning the dimensions of the excitation area.
Finally, we demonstrate that the nonreciprocity of spin-wave radiation due to
the chirality of an Oersted field generated by an inductive antenna is absent
for magnetoelastic spin-wave excitation.
|
Creating Multimodal Interactive Agents with Imitation and
Self-Supervised Learning | A common vision from science fiction is that robots will one day inhabit our
physical spaces, sense the world as we do, assist our physical labours, and
communicate with us through natural language. Here we study how to design
artificial agents that can interact naturally with humans using the
simplification of a virtual environment. We show that imitation learning of
human-human interactions in a simulated world, in conjunction with
self-supervised learning, is sufficient to produce a multimodal interactive
agent, which we call MIA, that successfully interacts with non-adversarial
humans 75% of the time. We further identify architectural and algorithmic
techniques that improve performance, such as hierarchical action selection.
Altogether, our results demonstrate that imitation of multi-modal, real-time
human behaviour may provide a straightforward and surprisingly effective means
of imbuing agents with a rich behavioural prior from which agents might then be
fine-tuned for specific purposes, thus laying a foundation for training capable
agents for interactive robots or digital assistants. A video of MIA's behaviour
may be found at https://youtu.be/ZFgRhviF7mY
|
Limiting Self-Propagating Malware Based on Connection Failure Behavior
through Hyper-Compact Estimators | Self-propagating malware (e.g., an Internet worm) exploits security loopholes
in software to infect servers and then use them to scan the Internet for more
vulnerable servers. While the mechanisms of worm infection and their
propagation models are well understood, defense against worms remains an open
problem. One branch of defense research investigates the behavioral difference
between worm-infected hosts and normal hosts to set them apart. One particular
observation is that a worm-infected host, which scans the Internet with
randomly selected addresses, has a much higher connection-failure rate than a
normal host. Rate-limit algorithms have been proposed to control the spread of
worms by traffic shaping based on connection failure rate. However, these
rate-limit algorithms can work properly only if it is possible to measure
failure rates of individual hosts efficiently and accurately. This paper points
out a serious problem in the prior method. To address this problem, we first
propose a solution based on a highly efficient double-bitmap data structure,
which places only a small memory footprint on the routers, while providing good
measurement of connection failure rates whose accuracy can be tuned by system
parameters. Furthermore, we propose another solution based on a shared register
array data structure, which achieves better memory efficiency and a much larger
estimation range than our double-bitmap solution.
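As a rough illustration of the measurement primitive involved, the sketch below implements a classic linear-counting bitmap estimator and uses two of them to approximate one host's connection-failure rate; the paper's double-bitmap and shared-register-array designs refine this primitive in ways not reproduced here.

```python
import hashlib
import math

class Bitmap:
    """Linear-counting bitmap: estimates the number of distinct items
    hashed into m bits as n ~ -m * ln(V), where V is the fraction of
    zero bits."""
    def __init__(self, m):
        self.m = m
        self.bits = bytearray(m)

    def add(self, item):
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        self.bits[h % self.m] = 1

    def estimate(self):
        zeros = self.m - sum(self.bits)
        v = max(zeros, 1) / self.m          # guard against log(0)
        return -self.m * math.log(v)

# Toy per-host view: estimate one host's failure rate as the ratio of
# distinct failed destinations to distinct contacted destinations.
attempts, failures = Bitmap(4096), Bitmap(4096)
for dst in ("203.0.113.7", "203.0.113.9", "198.51.100.2"):
    attempts.add(dst)
failures.add("203.0.113.9")                 # e.g. TCP RST or timeout seen
rate = failures.estimate() / max(attempts.estimate(), 1.0)
print(f"estimated failure rate: {rate:.2f}")
```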
|
Setups for eliminating static charge of the ATLAS18 strip sensors | Construction of the new all-silicon Inner Tracker (ITk), developed by the
ATLAS collaboration for the High Luminosity LHC, started in 2020 and is
expected to continue until 2028. The ITk detector will include 18,000 highly
segmented and radiation hard n+-in-p silicon strip sensors (ATLAS18), which are
being manufactured by Hamamatsu Photonics. Mechanical and electrical
characteristics of produced sensors are measured upon their delivery at several
institutes participating in a complex Quality Control (QC) program. The QC
tests performed on each individual sensor check the overall integrity and
quality of the sensor. During the QC testing of production ATLAS18 strip
sensors, an increased number of sensors that failed the electrical tests was
observed. In particular, IV measurements indicated an early breakdown, while
large areas containing several tens or hundreds of neighbouring strips with low
interstrip isolation were identified by the Full strip tests, and leakage
current instabilities were measured in a long-term leakage current stability
setup. Moreover, a high surface electrostatic charge reaching a level of
several hundreds of volts per inch was measured on a large number of sensors
and on the plastic sheets, which mechanically protect these sensors in their
paper envelopes. Accumulated data indicates a clear correlation between
observed electrical failures and the sensor charge-up. To mitigate the
above-described issues, the QC testing sites significantly modified the sensor
handling procedures and introduced sensor recovery techniques based on
irradiation of the sensor surface with UV light or application of intense
flows of ionized gas. In this presentation, we will describe the setups
implemented by the QC testing sites to treat silicon strip sensors affected by
static charge and evaluate the effectiveness of these setups in terms of
improvement of the sensor performance.
|
Manipulation of Articulated Objects using Dual-arm Robots via Answer Set
Programming | The manipulation of articulated objects is of primary importance in Robotics,
and can be considered as one of the most complex manipulation tasks.
Traditionally, this problem has been tackled by developing ad-hoc approaches,
which lack flexibility and portability.
In this paper we present a framework based on Answer Set Programming (ASP)
for the automated manipulation of articulated objects in a robot control
architecture. In particular, ASP is employed for representing the configuration
of the articulated object, for checking the consistency of such a representation
in the knowledge base, and for generating the sequence of manipulation actions.
The framework is exemplified and validated on the Baxter dual-arm manipulator
in a first, simple scenario. Then, we extend such scenario to improve the
overall setup accuracy, and to introduce a few constraints on robot action
execution to enforce their feasibility. The extended scenario entails a high
number of possible actions that can be fruitfully combined together. Therefore,
we exploit macro actions from automated planning in order to provide more
effective plans. We validate the overall framework in the extended scenario,
thereby confirming the applicability of ASP also in more realistic Robotics
settings, and showing the usefulness of macro actions for the robot-based
manipulation of articulated objects. Under consideration in Theory and Practice
of Logic Programming (TPLP).
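To make the ASP ingredients concrete, here is a toy clingo encoding in the same spirit: the configuration of a two-link object is represented as fluents, effects and inertia are encoded as rules, and the solver generates a sequence of rotate/3 actions. This is an illustrative sketch using clingo's Python API, not the paper's actual encoding.

```python
# Requires the clingo Python package (pip install clingo).
from clingo import Control

PROGRAM = """
#const horizon = 2.
time(0..horizon).
link(l1;l2).
angle(0;90).

% Initial configuration and goal.
holds(pos(l1,0),0). holds(pos(l2,0),0).
goal(pos(l1,90)).   goal(pos(l2,90)).

% Choose at most one rotation per time step.
{ rotate(L,A,T) : link(L), angle(A) } 1 :- time(T), T < horizon.

% Action effects and inertia.
holds(pos(L,A),T+1) :- rotate(L,A,T).
holds(pos(L,A),T+1) :- holds(pos(L,A),T), not moved(L,T), T < horizon.
moved(L,T) :- rotate(L,_,T).

% The goal must hold at the horizon.
:- goal(F), not holds(F,horizon).
#show rotate/3.
"""

ctl = Control(["0"])                 # "0": enumerate all answer sets
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("plan:", m))
```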
|
Leveraging Query Resolution and Reading Comprehension for Conversational
Passage Retrieval | This paper describes the participation of the UvA.ILPS group at the TREC CAsT
2020 track. Our passage retrieval pipeline consists of (i) an initial retrieval
module that uses BM25, and (ii) a re-ranking module that combines the score of
a BERT ranking model with the score of a machine comprehension model adjusted
for passage retrieval. An important challenge in conversational passage
retrieval is that queries are often under-specified. Thus, we perform query
resolution, that is, add missing context from the conversation history to the
current turn query using QuReTeC, a term classification query resolution model.
We show that our best automatic and manual runs outperform the corresponding
median runs by a large margin.
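A control-flow sketch of this two-stage pipeline follows. The four helpers are dummy stand-ins so the example runs end-to-end; a real system would substitute QuReTeC for the term classifier, a BM25 index for the first stage, and the BERT ranking and machine-comprehension models for the score mixture in the second stage.

```python
def resolve_query(turn, history):
    # QuReTeC-style resolution: pick history terms relevant to the turn
    # (dummy heuristic here: just take the first few history terms).
    return sorted({t for h in history for t in h.split()})[:3]

def bm25_search(query, top_k):
    return [f"passage-{i}" for i in range(top_k)]      # dummy index

def bert_score(query, passage):
    return (hash((query, passage)) % 1000) / 1000.0    # dummy ranker

def mc_score(query, passage):
    return (hash((passage, query)) % 1000) / 1000.0    # dummy MC model

def retrieve(turn, history, alpha=0.5, k=100):
    # Query resolution: add missing context to the under-specified turn.
    query = turn + " " + " ".join(resolve_query(turn, history))
    candidates = bm25_search(query, top_k=k)           # (i) initial retrieval
    rescored = [(p, alpha * bert_score(query, p)       # (ii) re-ranking:
                 + (1 - alpha) * mc_score(query, p))   # BERT + MC mixture
                for p in candidates]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

print(retrieve("what are its side effects", ["tell me about ibuprofen"])[:3])
```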
|
An Optimizing Framework on MLIR for Efficient FPGA-based Accelerator
Generation | With the increasing demand for computing capability given limited resource
and power budgets, it is crucial to deploy applications to customized
accelerators like FPGAs. However, FPGA programming is non-trivial. Although
existing high-level synthesis (HLS) tools improve productivity to a certain
extent, they are limited in scope and capability to support sufficient
FPGA-oriented optimizations. This paper focuses on FPGA-based accelerators and
proposes POM, an optimizing framework built on multi-level intermediate
representation (MLIR). POM has several features that demonstrate its scope and
capability for performance optimization. First, most HLS tools depend
exclusively on a single-level IR to perform all the optimizations, introducing
excessive information into the IR and making debugging an arduous task. In
contrast, POM introduces three layers of IR to perform operations at suitable
abstraction levels, streamlining the implementation and debugging process and
exhibiting better flexibility, extensibility, and a more systematic design. Second, POM
integrates the polyhedral model into MLIR, enabling advanced dependence
analysis and various FPGA-oriented loop transformations. By representing nested
loops with integer sets and maps, loop transformations can be conducted
conveniently through manipulations on polyhedral semantics. Finally, to further
reduce design effort, POM has a user-friendly programming interface (DSL) that
allows a concise description of computation and includes a rich collection of
scheduling primitives. An automatic design space exploration (DSE) engine is
provided to search for high-performance optimization schemes efficiently and
generate optimized accelerators automatically. Experimental results show that
POM achieves a $6.46\times$ average speedup on typical benchmark suites and a
$6.06\times$ average speedup on real-world applications compared to the
state-of-the-art.
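As a small illustration of the polyhedral idea POM builds on, the snippet below represents a nested loop's iteration domain as an integer set and applies a loop-interchange map to it using islpy (Python bindings to isl); POM's actual MLIR dialects and DSL are not shown here.

```python
# Requires islpy (pip install islpy).
import islpy as isl

# Iteration domain of: for (i = 0; i < 64; i++)
#                        for (j = 0; j < 32; j++) S(i, j);
domain = isl.Set("{ S[i,j] : 0 <= i < 64 and 0 <= j < 32 }")

# Loop interchange expressed as a map on statement instances, e.g. to
# expose the innermost dimension for pipelining or unrolling on an FPGA.
interchange = isl.Map("{ S[i,j] -> S[j,i] }")

print(domain.apply(interchange))     # the interchanged iteration domain
```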
|
Robust parallel nonlinear solvers for implicit time discretizations of
the Bidomain equations | In this work, we study the convergence and performance of nonlinear solvers
for the Bidomain equations after decoupling the ordinary and partial
differential equations of the cardiac system. Firstly, we provide a rigorous
proof of the global convergence of Quasi-Newton methods, such as BFGS, and
nonlinear Conjugate-Gradient methods, such as Fletcher--Reeves, for the
Bidomain system, by analyzing an auxiliary variational problem under physically
reasonable hypotheses. Secondly, we compare several nonlinear Bidomain solvers
in terms of execution time, robustness with respect to the data and parallel
scalability. Our findings indicate that Quasi-Newton methods are the best
choice for nonlinear Bidomain systems, since they exhibit faster convergence
rates compared to standard Newton-Krylov methods, while maintaining robustness
and scalability. Furthermore, first-order methods also demonstrate
competitiveness and serve as a viable alternative, particularly for matrix-free
implementations that are well-suited for GPU computing.
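A toy illustration of the two solver families compared above: when the nonlinear system from an implicit step is the gradient of a convex energy, it can be attacked either by a Quasi-Newton method (BFGS) on the energy or by a Newton-Krylov method on the residual. The 1D operator below is a stand-in, not the actual Bidomain discretization.

```python
import numpy as np
from scipy.optimize import minimize, newton_krylov

n, dt = 100, 0.1
# SPD "stiffness over time-step" matrix standing in for the linearized
# implicit operator.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dt
b = np.ones(n)

def energy(u):                       # convex auxiliary functional
    return 0.5 * u @ A @ u + 0.25 * np.sum(u**4) - b @ u

def residual(u):                     # its gradient = the nonlinear system
    return A @ u + u**3 - b

u_qn = minimize(energy, np.zeros(n), jac=residual, method="BFGS").x
u_nk = newton_krylov(residual, np.zeros(n))
print(np.linalg.norm(u_qn - u_nk))   # both solve the same system
```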
|
Completeness classes in algebraic complexity theory | The purpose of this overview is to explain the enormous impact of Les
Valiant's eponymous short conference contribution from 1979 on the development
of algebraic complexity.
|