title | abstract
---|---|
Autonomous requirements specification processing using natural language
processing | We describe our ongoing research that centres on the application of natural
language processing (NLP) to software engineering and systems development
activities. In particular, this paper addresses the use of NLP in the
requirements analysis and systems design processes. We have developed a
prototype toolset that can assist the systems analyst or software engineer to
select and verify terms relevant to a project. In this paper we describe the
processes employed by the system to extract and classify objects of interest
from requirements documents. These processes are illustrated using a small
example.
|
Adaptive Boolean Monotonicity Testing in Total Influence Time | The problem of testing monotonicity of a Boolean function $f:\{0,1\}^n \to
\{0,1\}$ has received much attention recently. Denoting the proximity parameter
by $\varepsilon$, the best tester is the non-adaptive
$\widetilde{O}(\sqrt{n}/\varepsilon^2)$ tester of Khot-Minzer-Safra (FOCS
2015). Let $I(f)$ denote the total influence of $f$. We give an adaptive tester
whose running time is $I(f)\cdot\mathrm{poly}(\varepsilon^{-1}\log n)$.
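For reference, the total influence appearing in this bound is the standard notion; the definition below is supplied for the reader and is not quoted from the paper:

```latex
% Total influence: expected number of coordinates whose flip changes f.
I(f) \;=\; \sum_{i=1}^{n} \Pr_{x \sim \{0,1\}^n}\!\left[\, f(x) \neq f\!\left(x^{\oplus i}\right) \right],
\qquad x^{\oplus i} = x \text{ with its } i\text{-th bit flipped.}
```

Since every monotone $f$ satisfies $I(f) = O(\sqrt{n})$, the adaptive bound essentially matches the non-adaptive $\widetilde{O}(\sqrt{n}/\varepsilon^2)$ bound in the worst case and improves on it whenever $I(f) \ll \sqrt{n}$.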
|
A p-robust polygonal discontinuous Galerkin method with minus one
stabilization | We introduce a new stabilization for discontinuous Galerkin methods for the
Poisson problem on polygonal meshes, which induces optimal convergence rates in
the polynomial approximation degree $p$. In the setting of [S. Bertoluzza and
D. Prada, A polygonal discontinuous Galerkin method with minus one
stabilization, ESAIM Math. Mod. Numer. Anal. (DOI: 10.1051/m2an/2020059)], the
stabilization is obtained by penalizing, in each mesh element $K$, a residual
in the norm of the dual of $H^1(K)$. This negative norm is algebraically
realized via the introduction of new auxiliary spaces. We carry out a
$p$-explicit stability and error analysis, proving $p$-robustness of the
overall method. The theoretical findings are demonstrated in a series of
numerical experiments.
|
Suppression of Rayleigh-Taylor turbulence by time-periodic acceleration | The dynamics of Rayleigh-Taylor turbulent convection in the presence of an
alternating, time-periodic acceleration is studied by means of extensive direct
numerical simulations of the Boussinesq equations. Within this framework, we
discover a new mechanism of relaminarization of turbulence: the alternating
acceleration, which initially produces a growing turbulent mixing layer, at
longer times suppresses turbulent fluctuations and drives the system toward an
asymptotic stationary configuration. Dimensional arguments and linear stability
theory are used to predict the width of the mixing layer in the asymptotic
state as a function of the period of the acceleration. Our results provide an
example of simple control and suppression of turbulent convection with
potential applications in different fields.
|
Effective Numerical Simulations of Synchronous Generator System | The synchronous generator system is a complicated dynamical system for energy
transmission that plays an important role in modern industrial production. In
this article, we propose some predictor-corrector methods and
structure-preserving methods for a generator system based on the first
benchmark model of subsynchronous resonance, among which the
structure-preserving methods preserve a Dirac structure associated with the
so-called port-Hamiltonian descriptor systems. To illustrate this, the
simplified generator system in the form of index-1 differential-algebraic
equations has been derived. Our analyses provide the global error estimates for
a special class of structure-preserving methods called Gauss methods, which
guarantee their superior performance over the PSCAD/EMTDC and the
predictor-corrector methods in terms of computational stability. Numerical
simulations are implemented to verify the effectiveness and advantages of our
methods.
|
Filter Bubble effect in the multistate voter model | Social media influence online activity by recommending to users content
strongly correlated with what they have preferred in the past. In this way they
constrain users within filter bubbles that strongly limit their exposure to new
or alternative content. We investigate this type of dynamics by considering a
multistate voter model where, with a given probability $\lambda$, a user
interacts with a "personalized information" suggesting the opinion most
frequently held in the past. By means of theoretical arguments and numerical
simulations, we show the existence of a nontrivial transition between a region
(for small $\lambda$) where consensus is reached and a region (above a
threshold $\lambda_c$) where the system gets polarized and clusters of users
with different opinions persist indefinitely. The threshold always vanishes for
large system size $N$, showing that consensus becomes impossible for a large
number of users. This finding opens new questions about the side effects of the
widespread use of personalized recommendation algorithms.
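To make the dynamics concrete, here is a minimal simulation sketch of such a model on a complete graph, assuming a simple memory rule for the "personalized information"; the parameter names and update details are illustrative, not taken from the paper:

```python
# Multistate voter model with a personalized-information update (sketch).
import numpy as np

rng = np.random.default_rng(0)
N, S, LAMBDA, STEPS = 200, 10, 0.3, 200_000   # users, opinions, prob., updates

opinions = rng.integers(0, S, size=N)          # current opinion of each user
counts = np.zeros((N, S), dtype=int)           # per-user history of held opinions
counts[np.arange(N), opinions] = 1

for _ in range(STEPS):
    i = rng.integers(N)
    if rng.random() < LAMBDA:
        # interact with the personalized recommendation: adopt the
        # opinion this user has held most frequently in the past
        opinions[i] = np.argmax(counts[i])
    else:
        # ordinary voter-model update: copy a randomly chosen user
        opinions[i] = opinions[rng.integers(N)]
    counts[i, opinions[i]] += 1

print("surviving opinions:", np.unique(opinions).size)   # >1 signals polarization
```

Sweeping LAMBDA and N in such a sketch is one way to probe the consensus-to-polarization transition the abstract describes.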
|
Developing a Fine-Grained Corpus for a Less-resourced Language: the case
of Kurdish | Kurdish is a less-resourced language consisting of different dialects written
in various scripts. Approximately 30 million people in different countries
speak the language. The lack of corpora is one of the main obstacles in Kurdish
language processing. In this paper, we present KTC, the Kurdish Textbooks
Corpus, which is composed of 31 K-12 textbooks in the Sorani dialect. The corpus is
normalized and categorized into 12 educational subjects containing 693,800
tokens (110,297 types). Our resource is publicly available for non-commercial
use under the CC BY-NC-SA 4.0 license.
|
The hierarchical and perturbative forms of stochastic Schr\"odinger
equations and their applications to carrier dynamics in organic materials | A number of non-Markovian stochastic Schr\"odinger equations, ranging from
the numerically exact hierarchical form to a series of perturbative expressions
presented in ascending degrees of approximation, are revisited in this short
review, with the aim of providing a systematic framework that connects the
different wavefunction-based approaches for an open system coupled to a
harmonic bath. One can optimistically expect extensive future applications of
these non-Markovian stochastic Schr\"odinger equations to large-scale realistic
complex systems, benefiting from their favorable scaling with respect to system
size, their stochastic nature, which is extremely well suited to parallel
computing, and many
other distinctive advantages. In addition, we present a few examples showing
the excitation energy transfer in the Fenna-Matthews-Olson complex, a
quantitative measure of the decoherence timescale of hot excitons, and the
study of quantum interference effects upon singlet fission processes in organic
materials, since a deep understanding of these mechanisms is very important for
exploring the underlying microscopic processes and providing novel design
principles for highly efficient organic photovoltaics.
|
The scintillation of liquid argon | A spectroscopic study of liquid argon from the vacuum ultraviolet at 110 nm
to 1000 nm is presented. Excitation was performed using continuous and pulsed
12 keV electron beams. The emission is dominated by the analogue of the
so-called second excimer continuum. Various additional emission features were
found. The time structure of the light emission has been measured for a set of
well-defined wavelength positions. The results help to interpret literature data in
the context of liquid rare gas detectors in which the wavelength information is
lost due to the use of wavelength shifters.
|
Embedding Capabilities of Neural ODEs | A class of neural networks that gained particular interest in the last years
are neural ordinary differential equations (neural ODEs). We study input-output
relations of neural ODEs using dynamical systems theory and prove several
results about the exact embedding of maps in different neural ODE architectures
in low and high dimension. The embedding capability of a neural ODE
architecture can be increased by adding, for example, a linear layer, or
augmenting the phase space. Yet, there is currently no systematic theory
available, and our work contributes towards this goal by developing various
embedding results as well as identifying situations where no embedding is
possible. The mathematical techniques used include as main components iterative
functional equations, Morse functions and suspension flows, as well as several
further ideas from analysis. Although in practice mainly universal
approximation theorems are used, our geometric dynamical systems viewpoint on
universal embedding provides a fundamental understanding of why certain neural
ODE architectures perform better than others.
|
Detection of Novel Social Bots by Ensembles of Specialized Classifiers | Malicious actors create inauthentic social media accounts controlled in part
by algorithms, known as social bots, to disseminate misinformation and agitate
online discussion. While researchers have developed sophisticated methods to
detect abuse, novel bots with diverse behaviors evade detection. We show that
different types of bots are characterized by different behavioral features. As
a result, supervised learning techniques suffer severe performance
deterioration when attempting to detect behaviors not observed in the training
data. Moreover, tuning these models to recognize novel bots requires retraining
with a significant amount of new annotations, which are expensive to obtain. To
address these issues, we propose a new supervised learning method that trains
classifiers specialized for each class of bots and combines their decisions
through the maximum rule. The ensemble of specialized classifiers (ESC) can
better generalize, leading to an average improvement of 56\% in F1 score for
unseen accounts across datasets. Furthermore, novel bot behaviors are learned
with fewer labeled examples during retraining. We deployed ESC in the newest
version of Botometer, a popular tool to detect social bots in the wild, with a
cross-validation AUC of 0.99.
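A minimal sketch of the ensemble idea, assuming one binary specialist per bot class (that class vs. humans) whose bot probabilities are combined with the maximum rule; the features, labels, and base learner are illustrative, and this is not the Botometer implementation:

```python
# Ensemble of specialized classifiers (ESC), max-rule combination (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_esc(X, y, human_label=0):
    """y holds 0 for humans and 1..K for the K bot classes."""
    models = []
    for bot_class in sorted(set(y) - {human_label}):
        mask = (y == human_label) | (y == bot_class)   # one class vs. humans
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[mask], (y[mask] == bot_class).astype(int))
        models.append(clf)
    return models

def esc_bot_score(models, X):
    # max rule: an account is as bot-like as its most confident specialist
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models], axis=1)
    return probs.max(axis=1)
```

Retraining for a novel bot behavior then amounts to fitting one additional specialist, rather than retraining a monolithic classifier.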
|
Detecting Slag Formations with Deep Convolutional Neural Networks | We investigate the ability to detect slag formations in images from inside a
Grate-Kiln system furnace with two deep convolutional neural networks. The
conditions inside the furnace cause occasional obstructions of the camera view.
Our approach suggests dealing with this problem by introducing a convLSTM-layer
in the deep convolutional neural network. The results show that it is possible
to achieve sufficient performance to automate the decision of timely
countermeasures in the industrial operational setting. Furthermore, the
addition of the convLSTM-layer results in fewer outlying predictions and a
lower running variance of the fraction of detected slag in the image time
series.
|
Understanding the Mechanics of Some Localized Protocols by Theory of
Complex Networks | In the study of ad hoc sensor networks, clustering plays an important role in
energy conservation; analyzing the mechanics of such a topology can therefore
help in making logistic decisions. Using the theory of complex networks, the
topological model is extended to account for the preferential-attachment and
anti-preferential-attachment policies of sensor nodes, allowing us to analyze
the formation of clusters and calculate the probability of clustering. The
theoretical analysis is conducted to determine the nature of the
topology and quantify some of the observed facts during the execution of
topology control protocols. The quantification of the observed facts leads to
the alternative understanding of the energy efficiency of the routing
protocols.
|
Static stability of collapsible tube conveying non-Newtonian fluid | The global static stability of a Starling Resistor conveying non-Newtonian
fluid is considered. The Starling Resistor consists of two rigid circular tubes
and an axisymmetric collapsible tube mounted between them. The upstream and
downstream pressures, together with the pressure external to the collapsible
tube, serve as boundary conditions. A quasi-one-dimensional model is proposed,
and a boundary value problem in terms of nondimensional parameters is obtained.
Nonuniqueness of
the boundary value problem is regarded as static instability. The analytical
condition of instability which defines a surface in parameter space has been
studied numerically. The influence of fluid rheology on stability of
collapsible tube is established.
|
Service Oriented Architecture A Revolution For Comprehensive Web Based
Project Management Software | Service Oriented Architecture (SOA) has changed the way projects are run
today, with web services booming across the industry. SOA improves performance
and the communication between distributed and remote teams. Web services give
project management software visibility into, and control over, the application
development lifecycle, providing better control over the entire development
process, from the management stage through development. The goal of SOA for
project management software is to produce a product that is delivered on time,
within the allocated budget, and with the capabilities expected by the
customer. A properly managed project has a clear, communicated, and managed set
of goals and objectives, whose progress is quantifiable and controlled, so that
resources are used effectively and efficiently to produce the desired product.
With the help of service oriented architecture we can move into the future
without abandoning the past. A project usually has a communicated set of
processes that cover the daily activities of the project, forming the project
framework. As a result, every team member understands their roles and
responsibilities and how they fit into the big picture, promoting the efficient
use of resources.
|
How have the Eastern European countries of the former Warsaw Pact
developed since 1990? A bibliometric study | Did the demise of the Soviet Union in 1991 influence the scientific
performance of the researchers in Eastern European countries? Did this
historical event affect international collaboration by researchers from the
Eastern European countries with those of Western countries? Did it also change
international collaboration among researchers from the Eastern European
countries? Trying to answer these questions, this study aims to shed light on
international collaboration by researchers from the Eastern European countries
(Russia, Ukraine, Belarus, Moldova, Bulgaria, the Czech Republic, Hungary,
Poland, Romania and Slovakia). The number of publications and normalized
citation impact values are compared for these countries based on InCites
(Thomson Reuters), from 1981 up to 2011. The international collaboration by
researchers affiliated to institutions in Eastern European countries at the
time points of 1990, 2000 and 2011 was studied with the help of Pajek and
VOSviewer software, based on data from the Science Citation Index (Thomson
Reuters). Our results show that the breakdown of the communist regime did not
lead, on average, to a huge improvement in the publication performance of the
Eastern European countries and that the increase in international co-authorship
relations by the researchers affiliated to institutions in these countries was
smaller than expected. Most of the Eastern European countries are still subject
to changes and are still awaiting their boost in scientific development.
|
Large-scale Kernel Methods and Applications to Lifelong Robot Learning | As the size and richness of available datasets grow larger, the opportunities
for solving increasingly challenging problems with algorithms learning directly
from data grow at the same pace. Consequently, the capability of learning
algorithms to work with large amounts of data has become a crucial scientific
and technological challenge for their practical applicability. Hence, it is no
surprise that large-scale learning is currently drawing plenty of research
effort in the machine learning research community. In this thesis, we focus on
kernel methods, a theoretically sound and effective class of learning
algorithms yielding nonparametric estimators. Kernel methods, in their
classical formulations, are accurate and efficient on datasets of limited size,
but do not scale up in a cost-effective manner. Recent research has shown that
approximate learning algorithms, for instance random subsampling methods like
Nystr\"om and random features, with time-memory-accuracy trade-off mechanisms
are more scalable alternatives. In this thesis, we provide analyses of the
generalization properties and computational requirements of several types of
such approximation schemes. In particular, we expose the tight relationship
between statistics and computations, with the goal of tailoring the accuracy of
the learning process to the available computational resources. Our results are
supported by experimental evidence on large-scale datasets and numerical
simulations. We also study how large-scale learning can be applied to enable
accurate, efficient, and reactive lifelong learning for robotics. In
particular, we propose algorithms allowing robots to learn continuously from
experience and adapt to changes in their operational environment. The proposed
methods are validated on the iCub humanoid robot in addition to other
benchmarks.
|
Computing Nash equilibria for integer programming games | The recently defined class of integer programming games (IPG) models
situations where multiple self-interested decision makers interact, with their
strategy sets represented by a finite set of linear constraints together with
integer requirements. Many real-world problems can suitably be fit in this
class, and hence anticipating IPG outcomes is of crucial value for policy
makers and regulators. Nash equilibria have been widely accepted as the
solution concept of a game. Consequently, their computation provides a
reasonable prediction of the game's outcome.
In this paper, we start by showing the computational complexity of deciding
the existence of a Nash equilibrium for an IPG. Then, using sufficient
conditions for their existence, we develop two general algorithmic approaches
that are guaranteed to approximate an equilibrium under mild conditions. We
also showcase how our methodology can be adapted to determine other equilibrium
definitions. The performance of our methods is analyzed through computational
experiments in a knapsack game, a competitive lot-sizing game, and a kidney
exchange game. To the best of our knowledge, this is the first time that
equilibria computation methods for general integer programming games have been
designed and computationally tested.
|
Federated Learning via Indirect Server-Client Communications | Federated Learning (FL) is a communication-efficient and privacy-preserving
distributed machine learning framework that has gained a significant amount of
research attention recently. Despite the different forms of FL algorithms
(e.g., synchronous FL, asynchronous FL) and the underlying optimization
methods, nearly all existing works implicitly assumed the existence of a
communication infrastructure that facilitates the direct communication between
the server and the clients for the model data exchange. This assumption,
however, does not hold in many real-world applications that can benefit from
distributed learning but lack a proper communication infrastructure (e.g.,
smart sensing in remote areas). In this paper, we propose a novel FL framework,
named FedEx (short for FL via Model Express Delivery), that utilizes mobile
transporters (e.g., Unmanned Aerial Vehicles) to establish indirect
communication channels between the server and the clients. Two algorithms,
called FedEx-Sync and FedEx-Async, are developed depending on whether the
transporters adopt a synchronized or an asynchronized schedule. Even though the
indirect communications introduce heterogeneous delays to clients for both the
global model dissemination and the local model collection, we prove the
convergence of both versions of FedEx. The convergence analysis subsequently
sheds light on how to assign clients to different transporters and design the
routes among the clients. The performance of FedEx is evaluated through
experiments in a simulated network on two public datasets.
|
Language-Bridged Spatial-Temporal Interaction for Referring Video Object
Segmentation | Referring video object segmentation aims to predict foreground labels for
objects referred by natural language expressions in videos. Previous methods
either depend on 3D ConvNets or incorporate additional 2D ConvNets as encoders
to extract mixed spatial-temporal features. However, these methods suffer from
spatial misalignment or false distractors due to delayed and implicit
spatial-temporal interaction occurring in the decoding phase. To tackle these
limitations, we propose a Language-Bridged Duplex Transfer (LBDT) module which
utilizes language as an intermediary bridge to accomplish explicit and adaptive
spatial-temporal interaction earlier in the encoding phase. Concretely,
cross-modal attention is performed among the temporal encoder, referring words
and the spatial encoder to aggregate and transfer language-relevant motion and
appearance information. In addition, we also propose a Bilateral Channel
Activation (BCA) module in the decoding phase for further denoising and
highlighting the spatial-temporal consistent features via channel-wise
activation. Extensive experiments show our method achieves new state-of-the-art
performance on four popular benchmarks, with 6.8% and 6.9% absolute AP gains on
A2D Sentences and J-HMDB Sentences respectively, while consuming around 7x less
computational overhead.
|
Efficiency of pair formation in a model society | In a recent paper a set of differential equations was proposed to describe a
social process, where pairs of partners emerge in a community. The choice was
performed on a basis of attractive resources and of random initial preferences.
An efficiency of the process, defined as the probability of finding a partner,
was found to depend on the community size. Here we demonstrate that if the
resources are not relevant, the efficiency is equal to unity: everybody finds a
partner. With this new formulation, about 80 percent of community members enter
dyads; the remaining 20 percent form triads.
|
Are You Sure? Challenging LLMs Leads to Performance Drops in The
FlipFlop Experiment | The interactive nature of Large Language Models (LLMs) theoretically allows
models to refine and improve their answers, yet systematic analysis of the
multi-turn behavior of LLMs remains limited. In this paper, we propose the
FlipFlop experiment: in the first round of the conversation, an LLM completes a
classification task. In a second round, the LLM is challenged with a follow-up
phrase like "Are you sure?", offering an opportunity for the model to reflect
on its initial answer, and decide whether to confirm or flip its answer. A
systematic study of ten LLMs on seven classification tasks reveals that models
flip their answers on average 46% of the time and that all models see a
deterioration of accuracy between their first and final prediction, with an
average drop of 17% (the FlipFlop effect). We conduct finetuning experiments on
an open-source LLM and find that finetuning on synthetically created data can
mitigate sycophantic behavior, reducing performance deterioration by 60%, but
cannot resolve it entirely. The FlipFlop experiment illustrates the
universality of sycophantic behavior in LLMs and provides a robust framework to
analyze model behavior and evaluate future models.
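A minimal sketch of the two-round protocol, assuming a hypothetical `ask_llm(messages)` chat wrapper and a `parse_label` helper, neither of which comes from the paper:

```python
# FlipFlop-style two-round probe of an LLM classifier (sketch).
def flipflop_trial(ask_llm, parse_label, question):
    messages = [{"role": "user", "content": question}]
    first = ask_llm(messages)                    # round 1: initial answer
    messages += [{"role": "assistant", "content": first},
                 {"role": "user", "content": "Are you sure?"}]
    final = ask_llm(messages)                    # round 2: after the challenge
    l1, l2 = parse_label(first), parse_label(final)
    return l1, l2, l1 != l2                      # did the model flip?

# Aggregating trials over a labeled dataset yields the flip rate and the
# accuracy drop between first and final predictions (the FlipFlop effect).
```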
|
SSDA3D: Semi-supervised Domain Adaptation for 3D Object Detection from
Point Cloud | LiDAR-based 3D object detection is an indispensable task in advanced
autonomous driving systems. Though impressive detection results have been
achieved by superior 3D detectors, they suffer from significant performance
degeneration when facing unseen domains, such as different LiDAR
configurations, different cities, and weather conditions. The mainstream
approaches tend to solve these challenges by leveraging unsupervised domain
adaptation (UDA) techniques. However, these UDA solutions just yield
unsatisfactory 3D detection results when there is a severe domain shift, e.g.,
from Waymo (64-beam) to nuScenes (32-beam). To address this, we present a novel
Semi-Supervised Domain Adaptation method for 3D object detection (SSDA3D),
where only a small amount of labeled target data is available yet can
significantly improve the adaptation performance. In particular, our SSDA3D includes an
Inter-domain Adaptation stage and an Intra-domain Generalization stage. In the
first stage, an Inter-domain Point-CutMix module is presented to efficiently
align the point cloud distribution across domains. Point-CutMix generates
mixed samples of an intermediate domain, thus encouraging the model to learn
domain-invariant knowledge. Then, in the second stage, we further enhance the
model for better generalization on the unlabeled target set. This is achieved
by exploring Intra-domain Point-MixUp in semi-supervised learning, which
essentially regularizes the pseudo-label distribution. Experiments from Waymo
to nuScenes show that, with only 10% labeled target data, our SSDA3D can
surpass the fully-supervised oracle model trained with 100% of the target
labels. Our code is
available at https://github.com/yinjunbo/SSDA3D.
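The spatial mixing idea behind Point-CutMix can be sketched as follows, assuming point clouds stored as (N, 3+) NumPy arrays and a simple half-space cut; the released SSDA3D code is the authoritative implementation and additionally handles boxes and labels:

```python
# Half-space point-cloud mixing in the spirit of Point-CutMix (sketch).
import numpy as np

def point_cutmix(points_src, points_tgt, rng=np.random.default_rng()):
    theta = rng.uniform(0, 2 * np.pi)            # random vertical cut plane
    normal = np.array([np.cos(theta), np.sin(theta)])
    keep_src = points_src[:, :2] @ normal >= 0   # one half from the source scene
    keep_tgt = points_tgt[:, :2] @ normal < 0    # the other half from the target
    return np.concatenate([points_src[keep_src], points_tgt[keep_tgt]])
```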
|
Towards Deep Learning in Hindi NER: An approach to tackle the Labelled
Data Scarcity | In this paper we describe an end-to-end neural model for Named Entity
Recognition (NER) based on a bidirectional RNN-LSTM. Almost all NER systems for
Hindi use language-specific features and handcrafted rules with gazetteers. Our
model is language independent and uses no domain-specific features or any
handcrafted rules. It relies on semantic information in the form of word
vectors, which are learnt by an unsupervised learning algorithm on an
unannotated corpus. Our model attained state-of-the-art performance in both
English and Hindi without the use of any morphological analysis or gazetteers
of any sort.
|
Performance modeling of public permissionless blockchains: A survey | Public permissionless blockchains facilitate peer-to-peer digital
transactions, yet face performance challenges, specifically minimizing
transaction confirmation time to decrease energy and time consumption per
transaction. Performance evaluation and prediction are crucial in achieving
this objective, with performance modeling as a key solution despite the
complexities involved in assessing these blockchains. This survey examines
prior research on the performance modeling of blockchain systems, specifically
focusing on public permissionless blockchains. Initially, it
provides foundational knowledge about these blockchains and the crucial
performance parameters for their assessment. Additionally, the study delves
into research on the performance modeling of public permissionless blockchains,
predominantly considering these systems as bulk service queues. It also
examines prior studies on workload and traffic modeling, characterization, and
analysis within these blockchain networks. By analyzing existing research, our
survey aims to provide insights and recommendations for researchers keen on
enhancing the performance of public permissionless blockchains or devising
novel mechanisms in this domain.
|
mFLICA: An R package for Inferring Leadership of Coordination From Time
Series | Leadership is a process by which leaders influence followers to achieve
collective goals. One special case of leadership is coordinated pattern
initiation. In this context, leaders are initiators who start coordinated
patterns that everyone follows. Given a set of individual multivariate time
series of real numbers, the mFLICA package provides a framework for R users to
infer coordination events within time series, initiators and followers of these
coordination events, as well as dynamics of group merging and splitting. The
mFLICA package also has a visualization function to make results of leadership
inference more understandable. The package is available on Comprehensive R
Archive Network (CRAN) at https://CRAN.R-project.org/package=mFLICA.
|
SC-Tune: Unleashing Self-Consistent Referential Comprehension in Large
Vision Language Models | Recent trends in Large Vision Language Models (LVLMs) research have been
increasingly focusing on advancing beyond general image understanding towards
more nuanced, object-level referential comprehension. In this paper, we present
and delve into the self-consistency capability of LVLMs, a crucial aspect that
reflects the models' ability to both generate informative captions for specific
objects and subsequently utilize these captions to accurately re-identify the
objects in a closed-loop process. This capability significantly mirrors the
precision and reliability of fine-grained visual-language understanding. Our
findings reveal that the self-consistency level of existing LVLMs falls short
of expectations, posing limitations on their practical applicability and
potential. To address this gap, we introduce a novel fine-tuning paradigm named
Self-Consistency Tuning (SC-Tune). It features the synergistic learning of a
cyclic describer-locator system. This paradigm is not only data-efficient but
also exhibits generalizability across multiple LVLMs. Through extensive
experiments, we demonstrate that SC-Tune significantly elevates performance
across a spectrum of object-level vision-language benchmarks and maintains
competitive or improved performance on image-level vision-language benchmarks.
Both our model and code will be publicly available at
https://github.com/ivattyue/SC-Tune.
|
Bidding policies for market-based HPC workflow scheduling | This paper considers the scheduling of jobs on distributed, heterogeneous
High Performance Computing (HPC) clusters. Market-based approaches are known to
be efficient for allocating limited resources to those that are most prepared
to pay. This context is applicable to an HPC or cloud computing scenario where
the platform is overloaded. In this paper, jobs are composed of dependent
tasks. Each job has a non-increasing time-value curve associated with it. Jobs
are submitted to and scheduled by a market-clearing centralised auctioneer.
This paper compares the performance of several policies for generating task
bids. The aim investigated here is to maximise the value for the platform
provider while minimising the number of jobs that do not complete (or starve).
It is found that the Projected Value Remaining bidding policy gives the highest
level of value under a typical overload situation, and gives the lowest number
of starved tasks across the space of utilisation examined. It does this by
attempting to capture the urgency of tasks in the queue. At high levels of
overload, some alternative algorithms produce slightly higher value, but at the
cost of a far higher number of starved workflows.
|
A note on power allocation for optimal capacity | The problems of determining the optimal power allocation, within maximum
power bounds, to (i) maximize the minimum Shannon capacity, and (ii) minimize
the weighted latency are considered. In the first case, the global optima can
be achieved in polynomial time by solving a sequence of linear programs (LP).
In the second case, the original non-convex problem is replaced by a convex
surrogate (a geometric program), using a functional approximation. Since the
approximation error is relatively low, the optimum of the surrogate is close to
the global optimum of the original problem. In either case, there is no
assumption on the SINR range. The use of LPs and geometric programming makes
the proposed algorithms numerically efficient. Computations are provided for
corroboration.
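For the max-min capacity problem, a standard LP-based pattern is bisection over the common rate with an LP feasibility check at each step; the sketch below illustrates this pattern and is not necessarily the exact LP sequence of the note. `G`, `sigma`, and `p_max` are illustrative problem data (channel gains, noise powers, and per-node power bounds):

```python
# Max-min Shannon capacity via bisection + LP feasibility (sketch).
import numpy as np
from scipy.optimize import linprog

def feasible(t, G, sigma, p_max):
    n = len(sigma)
    gamma = 2.0 ** t - 1.0                       # SINR required for rate t
    # constraint per link i: gamma*(sigma_i + sum_{j!=i} G_ij p_j) <= G_ii p_i
    A = gamma * G.copy()
    np.fill_diagonal(A, -np.diag(G))
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=-gamma * sigma,
                  bounds=[(0.0, pm) for pm in p_max], method="highs")
    return res.status == 0                        # 0 = feasible optimum found

def max_min_rate(G, sigma, p_max, lo=0.0, hi=20.0, iters=40):
    for _ in range(iters):                        # bisection on the rate t
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid, G, sigma, p_max) else (lo, mid)
    return lo
```

The key observation is that for a fixed target rate the SINR constraints become linear in the powers, so each feasibility check is a plain LP.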
|
Adaptivity in Agent-Based Routing for Data Networks | Adaptivity, both of the individual agents and of the interaction structure
among the agents, seems indispensable for scaling up multi-agent systems
(MAS's) in noisy environments. One important consideration in designing
adaptive agents is choosing their action spaces to be as amenable as possible
to machine learning techniques, especially to reinforcement learning (RL)
techniques. One important way to have the interaction structure connecting
agents itself be adaptive is to have the intentions and/or actions of the
agents be in the input spaces of the other agents, much as in Stackelberg
games. We consider both kinds of adaptivity in the design of a MAS to control
network packet routing.
We demonstrate on the OPNET event-driven network simulator the perhaps
surprising fact that simply changing the action space of the agents to be
better suited to RL can result in very large improvements in their potential
performance: at their best settings, our learning-amenable router agents
achieve throughputs up to three and one half times better than that of the
standard Bellman-Ford routing algorithm, even when the Bellman-Ford protocol
traffic is maintained. We then demonstrate that much of that potential
improvement can be realized by having the agents learn their settings when the
agent interaction structure is itself adaptive.
|
Empirical Study of DSRC Performance Based on Safety Pilot Model
Deployment Data | Dedicated Short Range Communication (DSRC) was designed to provide reliable
wireless communication for intelligent transportation system applications.
Sharing information among cars and between cars and the infrastructure,
pedestrians, or "the cloud" has great potential to improve safety, mobility and
fuel economy. DSRC is being considered by the US Department of Transportation
as a requirement for ground vehicles. In the past, DSRC performance has been
assessed thoroughly in labs and in limited field testing, but not on a large
fleet. In this paper, we present an analysis of DSRC performance using data
from the world's largest connected vehicle test program, the Safety Pilot Model
Deployment led by the University of Michigan. We first investigate the
maximum and effective range, and then study the effect of environmental
factors, such as trees/foliage, weather, buildings, vehicle travel direction,
and road elevation. The results can be used to guide future DSRC equipment
placement and installation, and can be used to develop DSRC communication
models for numerical simulations.
|
Intelligent Transportation Systems to Mitigate Road Traffic Congestion | Intelligent transport systems have proved themselves efficient and effective
in addressing the problem of traffic congestion around the world.
The multi-agent based transportation system is one of the most important
intelligent transport systems, which represents an interaction among the
neighbouring vehicles, drivers, roads, infrastructure and vehicles. In this
paper, two traffic management models have been created to mitigate congestion
and to ensure that emergency vehicles arrive as quickly as possible. A
tool-chain SUMO-JADE is employed to create a microscopic simulation symbolizing
the interactions of traffic. The simulation model has shown a significant
reduction of at least 50% in the average time delay and thus a real improvement
in the overall journey time.
|
Data-Augmentation for Graph Neural Network Learning of the Relaxed
Energies of Unrelaxed Structures | Computational materials discovery has continually grown in utility over the
past decade due to advances in computing power and crystal structure prediction
algorithms (CSPA). However, the computational cost of the \textit{ab initio}
calculations required by CSPA limits its utility to small unit cells, reducing
the compositional and structural space the algorithms can explore. Past studies
have bypassed many unneeded \textit{ab initio} calculations by utilizing
machine learning methods to predict formation energy and determine the
stability of a material. Specifically, graph neural networks display high
fidelity in predicting formation energy. Traditionally, graph neural networks
are trained on large data sets of relaxed structures. Unfortunately, the
geometries of unrelaxed candidate structures produced by CSPA often deviate
from the relaxed state, which leads to poor predictions, hindering the model's
ability to filter out energetically unfavorable structures prior to \textit{ab
initio} evaluation. This work shows that the prediction error on relaxed structures
reduces as training progresses, while the prediction error on unrelaxed
structures increases, suggesting an inverse correlation between relaxed and
unrelaxed structure prediction accuracy. To remedy this behavior, we propose a
simple, physically motivated, computationally cheap perturbation technique that
augments training data to improve predictions on unrelaxed structures
dramatically. On our test set consisting of 623 Nb-Sr-H hydride structures, we
found that training a crystal graph convolutional neural network with our
augmentation method reduced the MAE of formation energy prediction by 66\%
compared to training with only relaxed structures. We then show how this error
reduction can accelerate CSPA by improving the model's ability to accurately
filter out energetically unfavorable structures.
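The augmentation can be sketched as a simple coordinate jitter, with each perturbed copy keeping the relaxed structure's formation-energy label; the displacement scale below is an illustrative assumption, not the paper's tuned value:

```python
# Perturbation-style data augmentation for relaxed structures (sketch).
import numpy as np

def perturb_positions(cart_coords, sigma=0.1, rng=np.random.default_rng()):
    """Displace Cartesian atomic coordinates by isotropic Gaussian noise."""
    return cart_coords + rng.normal(scale=sigma, size=cart_coords.shape)

# Each relaxed structure contributes several perturbed copies, all labeled
# with the *relaxed* formation energy, mimicking unrelaxed CSPA candidates.
```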
|
Detailed study of dissipative quantum dynamics of K-2 attached to helium
nanodroplets | We thoroughly investigate vibrational quantum dynamics of dimers attached to
He droplets motivated by recent measurements with K-2 [1]. For those
femtosecond pump-probe experiments, crucial observed features are not
reproduced by gas phase calculations but agreement is found using a description
based on dissipative quantum dynamics, as briefly shown in [2]. Here we present
a detailed study of the influence of possible effects induced by the droplet.
The helium droplet causes electronic decoherence, shifts of potential surfaces,
and relaxation of wave packets in attached dimers. Moreover, a realistic
description of (stochastic) desorption of dimers off the droplet needs to be
taken into account. Step by step we include and study the importance of these
effects in our full quantum calculation. This allows us to reproduce and
explain all major experimental findings. We find that desorption is fast and
occurs already within 2-10 ps after electronic excitation. A further finding is
that slow vibrational motion in the ground state can be considered
frictionless.
|
Adapting Convolutional Neural Networks for Geographical Domain Shift | We present the winning solution for the Inclusive Images Competition
organized as part of the Conference on Neural Information Processing Systems
(NeurIPS 2018) Competition Track. The competition was organized to study ways
to cope with domain shift in image processing, specifically geographical shift:
the training and two test sets in the competition had different geographical
distributions. Our solution has proven to be relatively straightforward and
simple: it is an ensemble of several CNNs where only the last layer is
fine-tuned with the help of a small labeled set of tuning labels made available
by the organizers. We believe that while domain shift remains a formidable
problem, our approach opens up new possibilities for alleviating this problem
in practice, where small labeled datasets from the target domain are usually
either available or can be obtained and labeled cheaply.
|
ANOCA: AC Network-aware Optimal Curtailment Approach for Dynamic Hosting
Capacity | With exponential growth in distributed energy resources (DERs) coupled with
at-capacity distribution grid infrastructure, prosumers cannot always export
all extra power to the grid without violating technical limits. Consequently, a
slew of dynamic hosting capacity (DHC) algorithms have emerged for optimal
utilization of grid infrastructure while maximizing export from DERs. Most of
these DHC algorithms utilize the concept of operating envelopes (OE), where the
utility gives prosumers technical power export limits, and they are free to
export power within these limits. Recent studies have shown that OE-based
frameworks have drawbacks, as most develop power export limits based on convex
or linear grid models. As OEs must capture extreme operating conditions, both
convex and linear models can violate technical limits in practice because they
approximate grid physics. However, AC models are unsuitable because they may
not be feasible within the whole region of OE. We propose a new two-stage
optimization framework for DHC built on three-phase AC models to address the
current gaps. In this approach, the prosumers first run a receding horizon
multi-period optimization to identify optimal export power setpoints to
communicate with the utility. The utility then performs an infeasibility-based
optimization to either accept the prosumer's request or dispatch an optimal
curtail signal such that overall system technical constraints are not violated.
To explore various curtailment strategies, we develop an L1, L2, and Linf
norm-based dispatch algorithm with an exact three-phase AC model. We test our
framework on a 1420 three-phase node meshed distribution network and show that
the proposed algorithm optimally curtails DERs while guaranteeing the AC
feasibility of the network.
|
Understanding the robustness of deep neural network classifiers for
breast cancer screening | Deep neural networks (DNNs) show promise in breast cancer screening, but
their robustness to input perturbations must be better understood before they
can be clinically implemented. There exists extensive literature on this
subject in the context of natural images that can potentially be built upon.
However, it cannot be assumed that conclusions about robustness will transfer
from natural images to mammogram images, due to significant differences between
the two image modalities. In order to determine whether conclusions will
transfer, we measure the sensitivity of a radiologist-level screening mammogram
image classifier to four commonly studied input perturbations that natural
image classifiers are sensitive to. We find that mammogram image classifiers
are also sensitive to these perturbations, which suggests that we can build on
the existing literature. We also perform a detailed analysis on the effects of
low-pass filtering, and find that it degrades the visibility of clinically
meaningful features called microcalcifications. Since low-pass filtering
removes semantically meaningful information that is predictive of breast
cancer, we argue that it is undesirable for mammogram image classifiers to be
invariant to it. This is in contrast to natural images, where we do not want
DNNs to be sensitive to low-pass filtering due to its tendency to remove
information that is human-incomprehensible.
|
Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification | Rectified activation units (rectifiers) are essential for state-of-the-art
neural networks. In this work, we study rectifier neural networks for image
classification from two aspects. First, we propose a Parametric Rectified
Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU
improves model fitting with nearly zero extra computational cost and little
overfitting risk. Second, we derive a robust initialization method that
particularly considers the rectifier nonlinearities. This method enables us to
train extremely deep rectified models directly from scratch and to investigate
deeper or wider network architectures. Based on our PReLU networks
(PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012
classification dataset. This is a 26% relative improvement over the ILSVRC 2014
winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass
human-level performance (5.1%, Russakovsky et al.) on this visual recognition
challenge.
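A minimal NumPy sketch of PReLU and the rectifier-aware initialization, following the standard formulas (a zero-mean Gaussian with std $\sqrt{2/((1+a^2)\,n_l)}$ for a layer with fan-in $n_l$ and slope $a$); the layer sizes are illustrative:

```python
# PReLU activation and rectifier-aware weight initialization (sketch).
import numpy as np

def prelu(x, a):
    # PReLU(x) = x for x > 0, a * x otherwise; a is a learned slope
    return np.where(x > 0, x, a * x)

def rectifier_init(fan_in, fan_out, a=0.0, rng=np.random.default_rng(0)):
    # std = sqrt(2 / ((1 + a^2) * fan_in)); a = 0 recovers the ReLU
    # ("He") case, a > 0 covers PReLU slopes
    std = np.sqrt(2.0 / ((1.0 + a**2) * fan_in))
    return rng.normal(scale=std, size=(fan_in, fan_out))

a = 0.25                                    # common initial PReLU slope
W = rectifier_init(1024, 1024, a=a)
h = prelu(np.random.randn(32, 1024) @ W, a)
```

This choice of std keeps the activation variance roughly constant through a stack of rectified layers, which is what makes very deep models trainable from scratch.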
|
Cooperative Relaying with State Available at the Relay | We consider a state-dependent full-duplex relay channel with the state of the
channel non-causally available at only the relay. In the framework of
cooperative wireless networks, some specific terminals can be equipped with
cognition capabilities, i.e., the relay in our model. In the discrete memoryless
(DM) case, we derive lower and upper bounds on channel capacity. The lower
bound is obtained by a coding scheme at the relay that consists in a
combination of codeword splitting, Gel'fand-Pinsker binning, and a
decode-and-forward scheme. The upper bound is better than that obtained by
assuming that the channel state is available at the source and the destination
as well. For the Gaussian case, we also derive lower and upper bounds on
channel capacity. The lower bound is obtained by a coding scheme which is based
on a combination of codeword splitting and Generalized dirty paper coding. The
upper bound is also better than that obtained by assuming that the channel
state is available at the source, the relay, and the destination. The two
bounds meet, and so give the capacity, in some special cases for the degraded
Gaussian case.
|
Mechanism Design for Stable Matching with Contracts in a Dynamic
Manufacturing-as-a-Service (MaaS) Marketplace | Two-sided manufacturing-as-a-service (MaaS) marketplaces connect clients
requesting manufacturing services to suppliers providing those services.
Matching mechanisms, i.e., the allocation of clients' orders to suppliers, are
a key design parameter of the marketplace platform. The platform might perform
an allocation to maximize its revenue or to optimize the social welfare of all
participants. However, individual participants might not get maximum value from
their match and may reject it to form matches (called blocking groups)
themselves, thereby bypassing the platform. This paper considers the bipartite matching
problem in MaaS marketplaces in a dynamic environment and proposes
approximately stable matching solutions using mechanism design and mathematical
programming approaches to limit the formation of blocking groups. Matching is
based on non-strict, incomplete and interdependent preferences of participants
over contracts enabling negotiations between both sides. Empirical simulations
are used to test the mechanisms in a simulated 3D printing marketplace and to
evaluate the impact of stability on its performance. It is found that stable
matching results in small degradation in social welfare of the marketplace.
However, it leads to a significantly better outcome in terms of stability of
allocation. Unstable matchings introduce anarchy into the marketplace, with
participants rejecting its allocation, leading to performance poorer than that
of stable matchings.
|
A Posteriori Error Estimates for Elliptic Eigenvalue Problems Using
Auxiliary Subspace Techniques | We propose an a posteriori error estimator for high-order $p$- or $hp$-finite
element discretizations of selfadjoint linear elliptic eigenvalue problems that
is appropriate for estimating the error in the approximation of an eigenvalue
cluster and the corresponding invariant subspace. The estimator is based on the
computation of approximate error functions in a space that complements the one
in which the approximate eigenvectors were computed. These error functions are
used to construct estimates of collective measures of error, such as the
Hausdorff distance between the true and approximate clusters of eigenvalues,
and the subspace gap between the corresponding true and approximate invariant
subspaces. Numerical experiments demonstrate the practical effectivity of the
approach.
|
The Common Core Ontologies | The Common Core Ontologies (CCO) are designed as a mid-level ontology suite
that extends the Basic Formal Ontology. CCO has since been increasingly adopted
by a broad group of users and applications and is proposed as the first
standard mid-level ontology. Despite these successes, documentation of the
contents and design patterns of the CCO has been comparatively minimal. This
paper is a step toward providing enhanced documentation for the mid-level
ontology suite through a discussion of the contents of the eleven ontologies
that collectively comprise the Common Core Ontology suite.
|
Getting excited: Challenges in quantum-classical studies of excitons in
polymeric systems | A combination of classical molecular dynamics (MM/MD) and quantum chemical
calculations based on the density functional theory (DFT) was performed to
describe conformational properties of diphenylethyne (DPE), methylated-DPE and
poly(para-phenylene ethynylene) (PPE). DFT calculations were employed to improve
and develop force field parameters for MM/MD simulations. Many-body Green's
functions theory within the GW approximation and the Bethe-Salpeter equation
were utilized to describe excited states of the systems. Reliability of the
excitation energies based on the MM/MD conformations was examined and compared
to the excitation energies from DFT conformations. The results show an overall
agreement between the optical excitations based on MM/MD conformations and DFT
conformations. This allows for calculation of excitation energies based on
MM/MD conformations.
|
Incomplete Descriptor Mining with Elastic Loss for Person
Re-Identification | In this paper, we propose a novel person Re-ID model, Consecutive Batch
DropBlock Network (CBDB-Net), to capture the attentive and robust person
descriptor for the person Re-ID task. The CBDB-Net contains two novel designs:
the Consecutive Batch DropBlock Module (CBDBM) and the Elastic Loss (EL). In
the Consecutive Batch DropBlock Module (CBDBM), we first uniformly partition
the feature maps. Then, we independently and continuously drop each patch from
top to bottom on the feature maps, which outputs multiple incomplete feature
maps. In the training stage, these multiple incomplete
features can better encourage the Re-ID model to capture the robust person
descriptor for the Re-ID task. In the Elastic Loss (EL), we design a novel
weight control item to help the Re-ID model adaptively balance hard sample
pairs and easy sample pairs in the whole training process. Through an extensive
set of ablation studies, we verify that the Consecutive Batch DropBlock Module
(CBDBM) and the Elastic Loss (EL) each contribute to the performance boosts of
CBDB-Net. We demonstrate that our CBDB-Net can achieve the competitive
performance on the three standard person Re-ID datasets (the Market-1501, the
DukeMTMC-Re-ID, and the CUHK03 dataset), three occluded Person Re-ID datasets
(the Occluded DukeMTMC, the Partial-REID, and the Partial iLIDS dataset), and a
general image retrieval dataset (In-Shop Clothes Retrieval dataset).
|
Families with infants: a general approach to solve hard partition
problems | We introduce a general approach for solving partition problems where the goal
is to represent a given set as a union (either disjoint or not) of subsets
satisfying certain properties. Many NP-hard problems can be naturally stated as
such partition problems. We show that if one can find a large enough system of
so-called families with infants for a given problem, then this problem can be
solved faster than by a straightforward algorithm. We use this approach to
improve known bounds for several NP-hard problems as well as to simplify the
proofs of several known results.
For the chromatic number problem we present an algorithm with
$O^*((2-\varepsilon(d))^n)$ time and exponential space for graphs of average
degree $d$. This improves the algorithm by Bj\"{o}rklund et al. [Theory Comput.
Syst. 2010] that works for graphs of bounded maximum (as opposed to average)
degree and closes an open problem stated by Cygan and Pilipczuk [ICALP 2013].
For the traveling salesman problem we give an algorithm working in
$O^*((2-\varepsilon(d))^n)$ time and polynomial space for graphs of average
degree $d$. The previously known results of this kind are a polynomial-space
algorithm by Bj\"{o}rklund et al. [ICALP 2008] for graphs of bounded maximum degree and
an exponential space algorithm for bounded average degree by Cygan and
Pilipczuk [ICALP 2013].
For counting perfect matchings in graphs of average degree~$d$ we present an
algorithm with running time $O^*((2-\varepsilon(d))^{n/2})$ and polynomial
space. Recent algorithms of this kind due to Cygan, Pilipczuk [ICALP 2013] and
Izumi, Wadayama [FOCS 2012] (for bipartite graphs only) use exponential space.
|
A matrix-free parallel solution method for the three-dimensional
heterogeneous Helmholtz equation | The Helmholtz equation is related to seismic exploration, sonar, antennas,
and medical imaging applications. It is one of the most challenging problems to
solve in terms of accuracy and convergence due to the scalability issues of the
numerical solvers. For 3D large-scale applications, high-performance parallel
solvers are also needed. In this paper, a matrix-free parallel iterative solver
is presented for the three-dimensional (3D) heterogeneous Helmholtz equation.
We consider the preconditioned Krylov subspace methods for solving the linear
system obtained from finite-difference discretization. The Complex Shifted
Laplace Preconditioner (CSLP) is employed since it results in a linear increase
in the number of iterations as a function of the wavenumber. The preconditioner
is approximately inverted using one parallel 3D multigrid cycle. For parallel
computing, the global domain is partitioned blockwise. The matrix-vector
multiplication and preconditioning operator are implemented in a matrix-free
way instead of constructing large, memory-consuming coefficient matrices.
Numerical experiments of 3D model problems demonstrate the robustness and
outstanding strong scaling of our matrix-free parallel solution method.
Moreover, the weak parallel scalability indicates our approach is suitable for
realistic 3D heterogeneous Helmholtz problems with minimized pollution error.
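The matrix-free idea can be illustrated with a small 2D analogue: apply the finite-difference Helmholtz operator as a stencil inside a `LinearOperator` rather than assembling a matrix. This is a serial, unpreconditioned sketch (in practice a preconditioner such as CSLP is essential for convergence), not the paper's 3D parallel solver:

```python
# Matrix-free 2D Helmholtz matvec with Dirichlet boundaries (sketch).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, h, k = 64, 1.0 / 65, 20.0                 # grid size, spacing, wavenumber

def helmholtz_matvec(u_flat):
    u = u_flat.reshape(n, n)
    lap = -4.0 * u                           # 5-point Laplacian stencil
    lap[1:, :] += u[:-1, :]; lap[:-1, :] += u[1:, :]
    lap[:, 1:] += u[:, :-1]; lap[:, :-1] += u[:, 1:]
    return (-lap / h**2 - k**2 * u).ravel()  # (-Laplacian - k^2) u

A = LinearOperator((n * n, n * n), matvec=helmholtz_matvec, dtype=float)
b = np.zeros(n * n); b[n * n // 2 + n // 2] = 1.0 / h**2   # point source
u, info = gmres(A, b, maxiter=500)           # Krylov solve, no matrix stored
```

Only the stencil application is stored, so memory stays O(grid points) regardless of problem size, which is the property the abstract exploits at 3D scale.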
|
Reinforcement Learning Approaches for Traffic Signal Control under
Missing Data | The emergence of reinforcement learning (RL) methods in traffic signal
control tasks has achieved better performance than conventional rule-based
approaches. Most RL approaches require the observation of the environment for
the agent to decide which action is optimal for a long-term reward. However, in
real-world urban scenarios, missing observations of traffic states may
frequently occur due to the lack of sensors, which makes existing RL methods
inapplicable to road networks with missing observations. In this work, we aim to
control the traffic signals in a real-world setting, where some of the
intersections in the road network are not installed with sensors and thus with
no direct observations around them. To the best of our knowledge, we are the
first to use RL methods to tackle the traffic signal control problem in this
real-world setting. Specifically, we propose two solutions: the first one
imputes the traffic states to enable adaptive control, and the second one
imputes both states and rewards to enable adaptive control and the training of
RL agents. Through extensive experiments on both synthetic and real-world road
network traffic, we reveal that our method outperforms conventional approaches
and performs consistently with different missing rates. We also provide further
investigations on how missing data influences the performance of our model.
|
Brain Tumor Segmentation and Survival Prediction using Automatic Hard
mining in 3D CNN Architecture | We utilize 3-D fully convolutional neural networks (CNNs) to segment gliomas
and their constituents from multimodal Magnetic Resonance Images (MRI). The
architecture uses dense connectivity patterns to reduce the number of weights
and residual connections and is initialized with weights obtained from training
this model with BraTS 2018 dataset. Hard mining is done during training to
train for the difficult cases of segmentation tasks by increasing the dice
similarity coefficient (DSC) threshold to choose the hard cases as epoch
increases. On the BraTS 2020 validation data (n = 125), this architecture
achieved a tumor core, whole tumor, and active tumor dice of 0.744, 0.876, and
0.714, respectively. On the test dataset, we get an increase in the DSC of
tumor core and active tumor of approximately 7%. In terms of DSC, our network's
performances on the BraTS 2020 test data are 0.775, 0.815, and 0.85 for
enhancing tumor, tumor core, and whole tumor, respectively. Overall survival of
a subject is determined using conventional machine learning on radiomics
features obtained from a generated segmentation mask. Our approach achieved
accuracies of 0.448 and 0.452 on the validation and test datasets, respectively.
|
Alternating Traps in Muller and Parity Games | Muller games are played by two players moving a token along a graph; the
winner is determined by the set of vertices that occur infinitely often. The
central algorithmic problem is to compute the winning regions for the players.
Different classes and representations of Muller games lead to problems of
varying computational complexity. One such class is parity games; these are of
particular significance in computational complexity, as they remain one of the
few combinatorial problems known to be in NP and co-NP but not known to be in
P. We show that winning regions for a Muller game can be determined from the
alternating structure of its traps. To every Muller game we then associate a
natural number that we call its trap-depth; this parameter measures how
complicated the trap structure is. We present algorithms for parity games that
run in polynomial time for graphs of bounded trap-depth, and in general run in
time exponential in the trap-depth.
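For concreteness, the sketch below computes a player's attractor on a toy game graph with the standard fixpoint algorithm; the complement of a player-p attractor is a trap for p, the building block behind the trap-depth parameter. The example graph is illustrative, not taken from the paper.

    def attractor(vertices, edges, owner, target, player):
        # Vertices from which `player` can force the token into `target`;
        # the complement V \ attractor is then a trap for `player`.
        pred = {v: set() for v in vertices}
        count = {v: 0 for v in vertices}     # successors not yet attracted
        for u, v in edges:
            pred[v].add(u)
            count[u] += 1
        attr = set(target)
        frontier = list(target)
        while frontier:
            v = frontier.pop()
            for u in pred[v]:
                if u in attr:
                    continue
                count[u] -= 1
                if owner[u] == player or count[u] == 0:
                    attr.add(u)
                    frontier.append(u)
        return attr

    V = {0, 1, 2, 3}
    E = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 3)]
    owner = {0: 0, 1: 1, 2: 1, 3: 1}         # which player moves where
    trap_for_0 = V - attractor(V, E, owner, {3}, player=0)   # {0, 1, 2}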
|
Gaussian process regression can turn non-uniform and undersampled
diffusion MRI data into diffusion spectrum imaging | We propose to use Gaussian process regression to accurately estimate the
diffusion MRI signal at arbitrary locations in q-space. By estimating the
signal on a grid, we can do synthetic diffusion spectrum imaging:
reconstructing the ensemble averaged propagator (EAP) by an inverse Fourier
transform. We also propose an alternative reconstruction method guaranteeing a
nonnegative EAP that integrates to unity. The reconstruction is validated on
data simulated from two Gaussians at various crossing angles. Moreover, we
demonstrate on non-uniformly sampled in vivo data that the method is far
superior to linear interpolation, and allows a drastic undersampling of the
data with only a minor loss of accuracy. We envision the method as a potential
replacement for standard diffusion spectrum imaging, in particular when
acquisition time is limited.
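A minimal sketch of the regression step, assuming scikit-learn and a 1D stand-in for q-space (the method itself operates in full 3D q-space with a covariance tailored to the signal):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic stand-in for diffusion MRI measurements at non-uniform
    # q-space locations (1D |q| here for brevity).
    rng = np.random.default_rng(0)
    q = rng.uniform(0.0, 1.0, 40)[:, None]          # sampled locations
    signal = np.exp(-5.0 * q.ravel()**2) + 0.01 * rng.standard_normal(40)

    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4),
                                  normalize_y=True).fit(q, signal)

    # Estimate the signal on a dense grid, as needed before the inverse
    # Fourier transform of a synthetic DSI reconstruction.
    q_grid = np.linspace(0.0, 1.0, 128)[:, None]
    s_grid, s_std = gp.predict(q_grid, return_std=True)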
|
Algebraic-matrix calculation of vibrational levels of triatomic
molecules | We introduce an accurate and efficient algebraic technique for the
computation of the vibrational spectra of triatomic molecules, of both linear
and bent equilibrium geometry. The full three-dimensional potential energy
surface (PES), which can be based on entirely {\it ab initio} data, is
parameterized as a product Morse-cosine expansion, expressed in bond-angle
internal coordinates, and includes explicit interactions among the local modes.
We describe the stretching degrees of freedom in the framework of a Morse-type
expansion on a suitable algebraic basis, which provides exact analytical
expressions for the elements of a sparse Hamiltonian matrix. Likewise, we use a
cosine power expansion on a spherical harmonics basis for the bending degree of
freedom. The resulting matrix representation in the product space is very
sparse and vibrational levels and eigenfunctions can be obtained by efficient
diagonalization techniques. We apply this method to carbonyl sulfide OCS,
hydrogen cyanide HCN, water H$_2$O, and nitrogen dioxide NO$_2$. When we base
our calculations on high-quality PESs tuned to the experimental data, the
computed spectra are in very good agreement with the observed band origins.
|
Quark: A Gradient-Free Quantum Learning Framework for Classification
Tasks | As more practical and scalable quantum computers emerge, much attention has
been focused on realizing quantum supremacy in machine learning. Existing
quantum ML methods either (1) embed a classical model into a target Hamiltonian
to enable quantum optimization or (2) represent a quantum model using
variational quantum circuits and apply classical gradient-based optimization.
The former method leverages the power of quantum optimization but only supports
simple ML models, while the latter provides flexibility in model design but
relies on gradient calculation, resulting in barren plateaus (i.e., vanishing
gradients) and frequent classical-quantum interactions. To address the
limitations of existing quantum ML methods, we introduce Quark, a gradient-free
quantum learning framework that optimizes quantum ML models using quantum
optimization. Quark does not rely on gradient computation and therefore avoids
barren plateaus and frequent classical-quantum interactions. In addition, Quark
can support more general ML models than prior quantum ML methods and achieves a
dataset-size-independent optimization complexity. Theoretically, we prove that
Quark can outperform classical gradient-based methods by reducing model query
complexity for highly non-convex problems; empirically, evaluations on the Edge
Detection and Tiny-MNIST tasks show that Quark can support complex ML models
and significantly reduce the number of measurements needed for discovering
near-optimal weights for these tasks.
|
Modeling Gate-Level Abstraction Hierarchy Using Graph Convolutional
Neural Networks to Predict Functional De-Rating Factors | This paper proposes a methodology for modeling a gate-level netlist using
a Graph Convolutional Network (GCN). The model predicts the overall functional
de-rating factors of the sequential elements of a given circuit. In the
preliminary phase of the work, the goal is to build a GCN that takes a
gate-level netlist as input after transforming it into a probabilistic Bayesian
graph expressed in the Graph Modeling Language (GML). This step enables the GCN
to learn the structural information of the netlist in the graph domain. In the
second phase, the GCN is trained with the functional de-rating factors of a
small number of individual sequential elements (flip-flops). The third phase
assesses the accuracy of the GCN model in modeling an arbitrary circuit
netlist. The designed model was validated on two circuits: an IEEE 754 standard
double-precision floating-point adder and a 10-Gigabit Ethernet MAC (IEEE 802.3
standard). The predicted results are compared with the results of a standard
fault injection campaign for Single Event Upset (SEU) errors. The results are
presented as histograms and sorted probabilities and are evaluated with a
Confidence Interval (CI) metric between the predicted and simulated fault
injection results.
|
Quantum CZ Gate based on Single Gradient Metasurface | We propose a scheme to realize quantum controlled-Z (CZ) gates through a single
gradient metasurface. Using its unique parallel beam-splitting feature, i.e., a
series of connected beam splitters with the same splitting ratio, one
metasurface can support a CZ gate, several independent CZ gates, or cascaded
CZ gates. Taking advantage of the input-polarization-determined output
path-locking feature, both polarization-encoded and path-encoded CZ gates can
be demonstrated on the same metasurface, which further improves the integration
level of quantum devices. Our research paves the way for integrating quantum
logic functions on a single metasurface.
|
Measuring the perception of the personalized activities with CloudIA
robot | Socially Assistive Robots represent a valid solution for improving the
quality of life and the mood of older adults. In this context, this work
presents the CloudIA robot, a non-human-like robot intended to promote
sociality and well-being among older adults. The design of the robot and of the
provided services were carried out by a multidisciplinary team of designers and
technology developers in tandem with professional caregivers. The capabilities
of the robot were implemented according to the received guidelines and tested
in two nursing facilities by 15 older people. Qualitative and quantitative
metrics were used to investigate the engagement of the participants during the
interaction with the robot, and to investigate any differences in the
interaction during the proposed activities. The results highlighted a general
tendency to humanize the robotic platform and demonstrated the feasibility of
introducing the CloudIA robot in support of the professional caregivers' work.
From this pilot test, further ideas on improving the personalization of the
robotic platform emerged.
|
Lessons from Formally Verified Deployed Software Systems (Extended
version) | The technology of formal software verification has made spectacular advances,
but how much does it actually benefit the development of practical software?
Considerable disagreement remains about the practicality of building systems
with mechanically-checked proofs of correctness. Is this prospect confined to a
few expensive, life-critical projects, or can the idea be applied to a wide
segment of the software industry? To help answer this question, the present
survey examines a range of projects, in various application areas, that have
produced formally verified systems and deployed them for actual use. It
considers the technologies used, the form of verification applied, the results
obtained, and the lessons that the software industry should draw regarding its
ability to benefit from formal verification techniques and tools.
Note: this version is the extended article, covering all the systems
identified as relevant. A shorter version, covering only a selection, is also
available.
|
A model for microinstability destabilization and enhanced transport in
the presence of shielded 3-D magnetic perturbations | A mechanism is presented that suggests shielded 3-D magnetic perturbations
can destabilize microinstabilities and enhance the associated anomalous
transport. Using local 3-D equilibrium theory, shaped tokamak equilibria with
small 3-D deformations are constructed. In the vicinity of rational magnetic
surfaces, the infinite-n ideal MHD ballooning stability boundary is strongly
perturbed by the 3-D modulations of the local magnetic shear associated with
the presence of near-resonant Pfirsch-Schlüter currents. These currents are
driven by 3-D components of the magnetic field spectrum even when there is no
resonant radial component. The infinite-n ideal ballooning stability boundary
is often used as a proxy for the onset of virulent kinetic ballooning modes
(KBM) and associated stiff transport. These results suggest that the achievable
pressure gradient may be lowered in the vicinity of low order rational surfaces
when 3-D magnetic perturbations are applied. This mechanism may provide an
explanation for the observed reduction in the peak pressure gradient at the top
of the edge pedestal during experiments where edge localized modes have been
completely suppressed by applied 3-D magnetic fields.
|
Phase Transitions for the Information Bottleneck in Representation
Learning | In the Information Bottleneck (IB), when tuning the relative strength between
compression and prediction terms, how do the two terms behave, and what is their
relationship with the dataset and the learned representation? In this paper, we
set out to answer these questions by studying multiple phase transitions in the
IB objective: $\text{IB}_\beta[p(z|x)] = I(X; Z) - \beta I(Y; Z)$ defined on
the encoding distribution $p(z|x)$ for input $X$, target $Y$, and representation
$Z$, where sudden jumps of $dI(Y; Z)/d \beta$ and prediction accuracy are
observed with increasing $\beta$. We introduce a definition for IB phase
transitions as a qualitative change of the IB loss landscape, and show that the
transitions correspond to the onset of learning new classes. Using second-order
calculus of variations, we derive a formula that provides a practical condition
for IB phase transitions, and draw its connection with the Fisher information
matrix for parameterized models. We provide two perspectives to understand the
formula, revealing that each IB phase transition is finding a component of
maximum (nonlinear) correlation between $X$ and $Y$ orthogonal to the learned
representation, in close analogy with canonical-correlation analysis (CCA) in
linear settings. Based on the theory, we present an algorithm for discovering
phase transition points. Finally, we verify that our theory and algorithm
accurately predict phase transitions in categorical datasets, predict the onset
of learning new classes and class difficulty in MNIST, and predict prominent
phase transitions in CIFAR10.
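As a self-contained illustration of the objective (not the paper's algorithm for parameterized models), the classical self-consistent IB iterations for discrete variables fit in a few lines of NumPy; sweeping $\beta$ and monitoring $I(Y; Z)$ exposes the sudden jumps discussed above. The toy distributions are our own.

    import numpy as np

    def ib_solve(p_x, p_yx, n_z, beta, iters=300, seed=0):
        # Self-consistent IB iterations for IB_beta = I(X;Z) - beta * I(Y;Z)
        # with discrete X, Y and a bottleneck Z of n_z states.
        rng = np.random.default_rng(seed)
        p_zx = rng.dirichlet(np.ones(n_z), size=p_x.size).T   # p(z|x)
        for _ in range(iters):
            p_z = p_zx @ p_x                                  # marginal p(z)
            p_yz = (p_yx * p_x) @ p_zx.T                      # joint p(y,z)
            p_yz /= np.maximum(p_yz.sum(0, keepdims=True), 1e-30)  # p(y|z)
            # d[x, z] = KL( p(y|x) || p(y|z) ), the relevant distortion
            d = (p_yx * np.log(p_yx + 1e-30)).sum(0)[:, None] \
                - p_yx.T @ np.log(p_yz + 1e-30)
            logits = np.log(p_z + 1e-30)[:, None] - beta * d.T
            p_zx = np.exp(logits - logits.max(0))
            p_zx /= p_zx.sum(0, keepdims=True)
        return p_zx

    p_x = np.full(4, 0.25)                    # 4 inputs, 2 labels, toy case
    p_yx = np.array([[0.9, 0.8, 0.2, 0.1],
                     [0.1, 0.2, 0.8, 0.9]])
    p_zx = ib_solve(p_x, p_yx, n_z=3, beta=5.0)   # sweep beta for transitions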
|
Criminal organizations exhibit hysteresis, resilience, and robustness by
balancing security and efficiency | The interplay between criminal organizations and law enforcement disruption
strategies is crucial in criminology. Criminal enterprises, like legitimate
businesses, balance visibility and security to thrive. This study uses
evolutionary game theory to analyze criminal networks' dynamics, resilience to
interventions, and responses to external conditions. We find strong hysteresis
effects, challenging traditional deterrence-focused strategies. Optimal
thresholds for organization formation or dissolution are defined by these
effects. Stricter punishment doesn't always deter organized crime linearly.
Network structure, particularly link density and skill assortativity,
significantly influences organization formation and stability. These insights
advocate for adaptive policy-making and strategic law enforcement to
effectively disrupt criminal networks.
|
Cooperative Self-training of Machine Reading Comprehension | Pretrained language models have significantly improved the performance of
downstream language understanding tasks, including extractive question
answering, by providing high-quality contextualized word embeddings. However,
training question answering models still requires large amounts of annotated
data for specific domains. In this work, we propose a cooperative self-training
framework, RGX, for automatically generating more non-trivial question-answer
pairs to improve model performance. RGX is built upon a masked answer
extraction task with an interactive learning environment containing an answer
entity Recognizer, a question Generator, and an answer eXtractor. Given a
passage with a masked entity, the generator generates a question around the
entity, and the extractor is trained to extract the masked entity with the
generated question and raw texts. The framework allows the training of question
generation and answering models on any text corpora without annotation.
Experiment results show that RGX outperforms the state-of-the-art (SOTA)
pretrained language models and transfer learning approaches on standard
question-answering benchmarks, and yields the new SOTA performance under given
model size and transfer learning settings.
|
Residual viscosity stabilized RBF-FD methods for solving nonlinear
conservation laws | In this paper, we solve nonlinear conservation laws using the radial basis
function generated finite difference (RBF-FD) method. Nonlinear conservation
laws have solutions that entail strong discontinuities and shocks, which give
rise to numerical instabilities when the solution is approximated by a
numerical method. We introduce a residual-based artificial viscosity (RV)
stabilization framework adjusted to the RBF-FD method, where the residual of
the conservation law adaptively locates discontinuities and shocks. The RV
stabilization framework is applied to the collocation RBF-FD method and the
oversampled RBF-FD method. Computational tests confirm that the stabilized
methods are reliable and accurate in solving scalar conservation laws and
conservation law systems such as compressible Euler equations.
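To convey the RV idea in its simplest form, the sketch below stabilizes the 1D inviscid Burgers' equation with a residual-proportional viscosity, using plain periodic central differences in place of RBF-FD stencils; the constants and the first-order viscosity cap are illustrative choices.

    import numpy as np

    n, L, T = 400, 2.0 * np.pi, 1.0
    x = np.linspace(0.0, L, n, endpoint=False)
    h = x[1] - x[0]
    u = np.sin(x)                       # smooth data steepening into a shock
    dt = 0.2 * h
    u_old = u.copy()

    def ddx(v):                         # periodic central difference
        return (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * h)

    t = 0.0
    while t < T:
        flux_x = ddx(0.5 * u**2)
        # the PDE residual u_t + (u^2/2)_x locates the shock
        res = (u - u_old) / dt + flux_x
        norm = np.max(np.abs(u - u.mean())) + 1e-14
        nu = np.minimum(0.5 * h * np.abs(u),          # first-order cap
                        h**2 * np.abs(res) / norm)    # residual viscosity
        u_old = u.copy()
        u = u + dt * (-flux_x + ddx(nu * ddx(u)))
        t += dt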
|
3DVNet: Multi-View Depth Prediction and Volumetric Refinement | We present 3DVNet, a novel multi-view stereo (MVS) depth-prediction method
that combines the advantages of previous depth-based and volumetric MVS
approaches. Our key idea is the use of a 3D scene-modeling network that
iteratively updates a set of coarse depth predictions, resulting in highly
accurate predictions which agree on the underlying scene geometry. Unlike
existing depth-prediction techniques, our method uses a volumetric 3D
convolutional neural network (CNN) that operates in world space on all depth
maps jointly. The network can therefore learn meaningful scene-level priors.
Furthermore, unlike existing volumetric MVS techniques, our 3D CNN operates on
a feature-augmented point cloud, allowing for effective aggregation of
multi-view information and flexible iterative refinement of depth maps.
Experimental results show our method exceeds state-of-the-art accuracy in both
depth prediction and 3D reconstruction metrics on the ScanNet dataset, as well
as a selection of scenes from the TUM-RGBD and ICL-NUIM datasets. This shows
that our method is both effective and generalizes to new settings.
|
Local Causal Discovery with Background Knowledge | Causality plays a pivotal role in various fields of study. Based on the
framework of causal graphical models, previous works have proposed identifying
whether a variable is a cause or non-cause of a target in every Markov
equivalent graph solely by learning a local structure. However, the presence of
prior knowledge, often represented as a partially known causal graph, is common
in many causal modeling applications. Leveraging this prior knowledge allows
for the further identification of causal relationships. In this paper, we first
propose a method for learning the local structure using all types of causal
background knowledge, including direct causal information, non-ancestral
information and ancestral information. Then we introduce criteria for
identifying causal relationships based solely on the local structure in the
presence of prior knowledge. We also apply our method to fair machine learning,
and experiments involving local structure learning, causal relationship
identification, and fair machine learning demonstrate that our method is both
effective and efficient.
|
Determining Sequence of Image Processing Technique (IPT) to Detect
Adversarial Attacks | Developing machine learning models that are secure against adversarial
examples is challenging, as new methods of generating adversarial attacks are
continually emerging. In this work, we propose an evolutionary approach to
automatically determine Image Processing Techniques Sequence (IPTS) for
detecting malicious inputs. Accordingly, we first used a diverse set of attack
methods including adaptive attack methods (on our defense) to generate
adversarial samples from the clean dataset. A detection framework based on a
genetic algorithm (GA) is developed to find the optimal IPTS, where the
optimality is estimated by different fitness measures such as Euclidean
distance, entropy loss, average histogram, local binary pattern and loss
functions. The "image difference" between the original and processed images is
used to extract the features, which are then fed to a classification scheme in
order to determine whether the input sample is adversarial or clean. This paper
describes our methodology and presents experiments conducted on multiple
datasets with several adversarial attacks. For each attack type and dataset,
our approach generates a unique IPTS. A set of IPTS is selected dynamically at
test time to act as a filter against adversarial attacks. Our empirical
experiments show promising results, indicating that the approach can be used
efficiently as a preprocessing step for any AI model.
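A minimal sketch of the GA search, assuming a toy pool of image operations and an illustrative fitness based on the Euclidean "image difference" between clean and adversarial responses (the actual framework uses richer operations and several fitness measures):

    import numpy as np

    rng = np.random.default_rng(1)

    OPS = {
        'blur':   lambda im: (im + np.roll(im, 1, 0) + np.roll(im, 1, 1)) / 3,
        'thresh': lambda im: (im > 0.5).astype(float),
        'invert': lambda im: 1.0 - im,
        'square': lambda im: im**2,
    }

    def apply_seq(seq, im):
        for name in seq:
            im = OPS[name](im)
        return im

    def fitness(seq, clean, adv):
        # reward sequences whose "image difference" separates the two inputs
        d_clean = np.linalg.norm(clean - apply_seq(seq, clean))
        d_adv = np.linalg.norm(adv - apply_seq(seq, adv))
        return abs(d_adv - d_clean)

    clean = rng.random((16, 16))
    adv = clean + 0.1 * rng.standard_normal((16, 16))   # stand-in attack

    pop = [list(rng.choice(list(OPS), size=3)) for _ in range(20)]
    for gen in range(30):
        pop.sort(key=lambda s: -fitness(s, clean, adv))
        parents = pop[:10]
        children = []
        for p in parents:
            c = p.copy()
            c[rng.integers(3)] = rng.choice(list(OPS))  # point mutation
            children.append(c)
        pop = parents + children
    best = max(pop, key=lambda s: fitness(s, clean, adv))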
|
FogROS2-SGC: A ROS2 Cloud Robotics Platform for Secure Global
Connectivity | The Robot Operating System (ROS2) is the most widely used software platform
for building robotics applications. FogROS2 extends ROS2 to allow robots to
access cloud computing on demand. However, ROS2 and FogROS2 assume that all
robots are locally connected and that each robot has full access and control of
the other robots. With applications like distributed multi-robot systems,
remote robot control, and mobile robots, robotics increasingly involves the
global Internet and complex trust management. Existing approaches for
connecting disjoint ROS2 networks lack key features such as security,
compatibility, efficiency, and ease of use. We introduce FogROS2-SGC, an
extension of FogROS2 that can effectively connect robot systems across
different physical locations, networks, and Data Distribution Services (DDS).
With globally unique and location-independent identifiers, FogROS2-SGC securely
and efficiently routes data between robotics components around the globe.
FogROS2-SGC is agnostic to the ROS2 distribution and configuration, is
compatible with non-ROS2 software, and seamlessly extends existing ROS2
applications without any code modification. Experiments suggest FogROS2-SGC is
19x faster than rosbridge (a ROS2 package with comparable features, but lacking
security). We also apply FogROS2-SGC to 4 robots and compute nodes that are
3600km apart. Videos and code are available on the project website
https://sites.google.com/view/fogros2-sgc.
|
Deep Regression Representation Learning with Topology | Most works studying representation learning focus only on classification and
neglect regression. Yet, the learning objectives and, therefore, the
representation topologies of the two tasks are fundamentally different:
classification targets class separation, leading to disconnected
representations, whereas regression requires ordinality with respect to the
target, leading to continuous representations. We thus wonder how the
effectiveness of a regression representation is influenced by its topology,
with evaluation based on the Information Bottleneck (IB) principle. The IB
principle is an important framework that provides principles for learning
effective representations. We establish two connections between it and the
topology of regression representations. The first connection reveals that a
lower intrinsic dimension of the feature space implies a reduced complexity of
the representation Z. This complexity can be quantified as the conditional
entropy of Z on the target Y, and serves as an upper bound on the
generalization error. The second connection suggests a feature space that is
topologically similar to the target space will better align with the IB
principle. Based on these two connections, we introduce PH-Reg, a regularizer
specific to regression that matches the intrinsic dimension and topology of the
feature space with the target space. Experiments on synthetic and real-world
regression tasks demonstrate the benefits of PH-Reg. Code:
https://github.com/needylove/PH-Reg.
|
Enhanced sensing of molecular optical activity with plasmonic nanohole
arrays | Prospects of using metal hole arrays for the enhanced optical detection of
molecular chirality in nanosize volumes are investigated. Light transmission
through the holes filled with an optically active material is modeled and the
activity enhancement by more than an order of magnitude is demonstrated. The
spatial resolution of the chirality detection is shown to be of a few tens of
nanometers. From comparing the effect in arrays of cylindrical holes and holes
of complex chiral shape, it is concluded that the detection sensitivity is
determined by the plasmonic near field enhancement. The intrinsic chirality of
the arrays due to their shape appears to be less important.
|
Supporting Lock-Free Composition of Concurrent Data Objects | Lock-free data objects offer several advantages over their blocking
counterparts, such as being immune to deadlocks and convoying and, more
importantly, being highly concurrent. But they share a common disadvantage in
that the operations they provide are difficult to compose into larger atomic
operations while still guaranteeing lock-freedom. We present a lock-free
methodology for composing highly concurrent linearizable objects together by
unifying their linearization points. This makes it possible to relatively
easily introduce atomic lock-free move operations to a wide range of concurrent
objects. Experimental evaluation has shown that the operations originally
supported by the data objects keep their performance behavior under our
methodology.
|
Quality-Aware Multimodal Biometric Recognition | We present a quality-aware multimodal recognition framework that combines
representations from multiple biometric traits with varying quality and number
of samples to achieve increased recognition accuracy by extracting
complementary identification information based on the quality of the samples.
We develop a quality-aware framework for fusing representations of input
modalities by weighting their importance using quality scores estimated in a
weakly-supervised fashion. This framework utilizes two fusion blocks, each
represented by a set of quality-aware and aggregation networks. In addition to
architecture modifications, we propose two task-specific loss functions:
multimodal separability loss and multimodal compactness loss. The first loss
ensures that the representations of modalities for a class have comparable
magnitudes to provide a better quality estimation, while the multimodal
representations of different classes are distributed to achieve maximum
discrimination in the embedding space. The second loss acts as a regularizer on
the network weights and improves the generalization performance of the
framework. We evaluate the performance by considering three
multimodal datasets consisting of face, iris, and fingerprint modalities. The
efficacy of the framework is demonstrated through comparison with the
state-of-the-art algorithms. In particular, our framework outperforms the rank-
and score-level fusion of modalities of BIOMDATA by more than 30% for true
acceptance rate at false acceptance rate of $10^{-4}$.
|
Learning Sampling Dictionaries for Efficient and Generalizable Robot
Motion Planning with Transformers | Motion planning is integral to robotics applications such as autonomous
driving, surgical robots, and industrial manipulators. Existing planning
methods lack scalability to higher-dimensional spaces, while recent learning
based planners have shown promise in accelerating sampling-based motion
planners (SMP) but lack generalizability to out-of-distribution environments.
To address this, we present a novel approach, Vector Quantized-Motion Planning
Transformers (VQ-MPT) that overcomes the key generalization and scaling
drawbacks of previous learning-based methods. VQ-MPT consists of two stages.
Stage 1 is a Vector Quantized-Variational AutoEncoder model that learns to
represent the planning space using a finite number of sampling distributions,
and stage 2 is an Auto-Regressive model that constructs a sampling region for
SMPs by selecting from the learned sampling distribution sets. By splitting
large planning spaces into discrete sets and selectively choosing the sampling
regions, our planner pairs well with out-of-the-box SMPs, generating
near-optimal paths faster than without VQ-MPT's aid. It is generalizable in
that it can be applied to systems of varying complexities, from 2D planar to
14D bi-manual robots with diverse environment representations, including
costmaps and point clouds. Trained VQ-MPT models generalize to environments
unseen during training and achieve higher success rates than previous methods.
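As a small illustration of the stage-1 building block, the following NumPy lines perform the basic vector-quantization step: each latent vector is snapped to its nearest codebook entry, so the planning space is represented by a finite set of codes. Shapes and names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((32, 8))     # 32 learned code vectors
    latents = rng.standard_normal((5, 8))       # encoder outputs

    d = ((latents[:, None, :] - codebook[None, :, :])**2).sum(-1)
    idx = d.argmin(1)                           # nearest-codebook indices
    quantized = codebook[idx]                   # discrete representation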
|
Capacity Regions and Optimal Power Allocation for Groupwise Multiuser
Detection | In this paper, optimal power allocation and capacity regions are derived for
GSIC (groupwise successive interference cancellation) systems operating in
multipath fading channels, under imperfect channel estimation conditions. It is
shown that the impact of channel estimation errors on the system capacity is
two-fold: it affects the receivers' performance within a group of users, as
well as the cancellation performance (through cancellation errors). An
iterative power allocation algorithm is derived, based on which it can be shown
that the total required received power is minimized when the groups are ordered
according to their cancellation errors, and the first detected group has the
smallest cancellation error.
Performance/complexity tradeoff issues are also discussed by directly
comparing the system capacity for different implementations: GSIC with linear
minimum-mean-square error (LMMSE) receivers within the detection groups, GSIC
with matched filter receivers, multicode LMMSE systems, and simple all matched
filter receivers systems.
|
Bayesian optimization of Bose-Einstein condensation via evaporative
cooling model | To achieve Bose-Einstein condensation, one may implement evaporative cooling
by dynamically regulating the power of laser beams forming the optical dipole
trap. We propose and experimentally demonstrate a protocol of Bayesian
optimization of Bose-Einstein condensation via the evaporative cooling model.
Applying this protocol, a pure Bose-Einstein condensate of $^{87}$Rb with
$2.4\times 10^{4}$ atoms can be produced via evaporative cooling from an
initial stage of $6.0\times 10^{5}$ atoms at a temperature of 12 $\mu$K. In
comparison with Bayesian optimization via black-box experiments, our protocol
needs only a few experiments to verify some close-to-optimal curves for the
optical dipole trap laser powers, and therefore greatly saves experimental
resources.
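A minimal Bayesian-optimization loop of the kind described, assuming scikit-learn and an analytic toy stand-in for a single evaporation run over two ramp parameters (not the paper's cooling model):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(2)

    def atom_number(theta):
        # toy surrogate for one evaporation run: (final power, ramp time)
        p, tau = theta
        return np.exp(-(p - 0.3)**2 / 0.02 - (tau - 0.7)**2 / 0.05)

    X = rng.random((5, 2))                        # initial random ramps
    y = np.array([atom_number(t) for t in X])

    for _ in range(25):
        gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
        cand = rng.random((500, 2))
        mu, sd = gp.predict(cand, return_std=True)
        nxt = cand[np.argmax(mu + 2.0 * sd)]      # upper-confidence bound
        X = np.vstack([X, nxt])
        y = np.append(y, atom_number(nxt))

    best_ramp = X[np.argmax(y)]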
|
MGN-Net: a multi-view graph normalizer for integrating heterogeneous
biological network populations | With the recent technological advances, biological datasets, often
represented by networks (i.e., graphs) of interacting entities, proliferate
with unprecedented complexity and heterogeneity. Although modern network
science opens new frontiers of analyzing connectivity patterns in such
datasets, we still lack data-driven methods for extracting an integral
connectional fingerprint of a multi-view graph population, let alone
disentangling the typical from the atypical variations across the population
samples. We present the multi-view graph normalizer network (MGN-Net;
https://github.com/basiralab/MGN-Net), a graph neural network based method to
normalize and integrate a set of multi-view biological networks into a single
connectional template that is centered, representative, and topologically
sound. We demonstrate the use of MGN-Net by discovering the connectional
fingerprints of healthy and neurologically disordered brain network populations
including Alzheimer's disease and Autism spectrum disorder patients.
Additionally, by comparing the learned templates of healthy and disordered
populations, we show that MGN-Net significantly outperforms conventional
network integration methods across extensive experiments in terms of producing
the most centered templates, recapitulating unique traits of populations, and
preserving the complex topology of biological networks. Our evaluations showed
that MGN-Net is powerfully generic and easily adaptable in design to different
graph-based problems such as identification of relevant connections,
normalization and integration.
|
Ethically Aligned Design of Autonomous Systems: Industry viewpoint and
an empirical study | Progress in the field of artificial intelligence has been accelerating
rapidly in the past two decades. Various autonomous systems from purely digital
ones to autonomous vehicles are being developed and deployed out on the field.
As these systems exert a growing impact on society, ethics in relation to
artificial intelligence and autonomous systems have recently seen growing
attention in academia. However, the current literature on the topic has
focused almost exclusively on theory and more specifically on conceptualization
in the area. To widen the body of knowledge in the area, we conduct an
empirical study on the current state of practice in artificial intelligence
ethics. We do so by means of a multiple case study of five case companies, the
results of which indicate a gap between research and practice in the area.
Based on our findings we propose ways to tackle the gap.
|
Use of Dirichlet Distributions and Orthogonal Projection Techniques for
the Fluctuation Analysis of Steady-State Multivariate Birth-Death Systems | Approximate weak solutions of the Fokker-Planck equation can represent a
useful tool to analyze the equilibrium fluctuations of birth-death systems, as
they provide a quantitative knowledge lying in between numerical simulations
and exact analytic arguments. In the present paper, we adapt the general
mathematical formalism known as the Ritz-Galerkin method for partial
differential equations to the Fokker-Planck equation with time-independent
polynomial drift and diffusion coefficients on the simplex. Then, we show how
the method works in two examples, namely the binary and multi-state voter
models with zealots.
|
VEnvision3D: A Synthetic Perception Dataset for 3D Multi-Task Model
Research | Developing a unified multi-task foundation model has become a critical
challenge in computer vision research. In the current field of 3D computer
vision, most datasets focus only on a single task, which complicates the
concurrent training requirements of various downstream tasks. In this paper, we
introduce VEnvision3D, a large 3D synthetic perception dataset for multi-task
learning, including depth completion, segmentation, upsampling, place
recognition, and 3D reconstruction. Since the data for each task is collected
in the same environmental domain, sub-tasks are inherently aligned in terms of
the utilized data. Therefore, such a unique attribute can assist in exploring
the potential for the multi-task model and even the foundation model without
separate training methods. Meanwhile, capitalizing on the advantage of virtual
environments being freely editable, we implement some novel settings such as
simulating temporal changes in the environment and sampling point clouds on
model surfaces. These characteristics enable us to present several new
benchmarks. We also perform extensive studies on multi-task end-to-end models,
revealing new observations, challenges, and opportunities for future research.
Our dataset and code will be open-sourced upon acceptance.
|
MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning
for Decentralized Traffic Signal Control | Traffic signal control aims to coordinate traffic signals across
intersections to improve the traffic efficiency of a district or a city. Deep
reinforcement learning (RL) has been applied to traffic signal control recently
and demonstrated promising performance where each traffic signal is regarded as
an agent. However, there are still several challenges that may limit its
large-scale application in the real world. To make the policy learned from a
training scenario generalizable to new unseen scenarios, a novel Meta
Variationally Intrinsic Motivated (MetaVIM) RL method is proposed to learn the
decentralized policy for each intersection that considers neighbor information
in a latent way. Specifically, we formulate the policy learning as a
meta-learning problem over a set of related tasks, where each task corresponds
to traffic signal control at an intersection whose neighbors are regarded as
the unobserved part of the state. Then, a learned latent variable is introduced
to represent the task's specific information and is further brought into the
policy for learning. In addition, to make the policy learning stable, a novel
intrinsic reward is designed to encourage each agent's received rewards and
observation transition to be predictable only conditioned on its own history.
Extensive experiments conducted on CityFlow demonstrate that the proposed
method substantially outperforms existing approaches and shows superior
generalizability.
|
Scaling and Universality in City Space Syntax: between Zipf and Matthew | We report on the universality of rank-integration distributions of open spaces
in city space syntax, similar to the famous rank-size distributions of cities
(Zipf's law). We also demonstrate that the degree of choice an open space
represents for other spaces directly linked to it in a city follows power-law
statistics. The universal statistical behavior of space syntax measures uncovers the
universality of the city creation mechanism. We suggest that the observed
universality may help to establish the international definition of a city as a
specific land use pattern.
|
InternLM-XComposer2: Mastering Free-form Text-Image Composition and
Comprehension in Vision-Language Large Model | We introduce InternLM-XComposer2, a cutting-edge vision-language model
excelling in free-form text-image composition and comprehension. This model
goes beyond conventional vision-language understanding, adeptly crafting
interleaved text-image content from diverse inputs like outlines, detailed
textual specifications, and reference images, enabling highly customizable
content creation. InternLM-XComposer2 proposes a Partial LoRA (PLoRA) approach
that applies additional LoRA parameters exclusively to image tokens to preserve
the integrity of pre-trained language knowledge, striking a balance between
precise vision understanding and text composition with literary talent.
Experimental results demonstrate the superiority of InternLM-XComposer2 based
on InternLM2-7B in producing high-quality long-text multi-modal content and its
exceptional vision-language understanding performance across various
benchmarks, where it not only significantly outperforms existing multimodal
models but also matches or even surpasses GPT-4V and Gemini Pro in certain
assessments. This highlights its remarkable proficiency in the realm of
multimodal understanding. The InternLM-XComposer2 model series with 7B
parameters are publicly available at
https://github.com/InternLM/InternLM-XComposer.
|
A compatible finite element discretisation for the nonhydrostatic
vertical slice equations | We present a compatible finite element discretisation for the vertical slice
compressible Euler equations, at next-to-lowest order (i.e., the pressure space
consists of bilinear discontinuous functions). The equations are numerically integrated
in time using a fully implicit timestepping scheme which is solved using
monolithic GMRES preconditioned by a linesmoother. The linesmoother only
involves local operations and is thus suitable for domain decomposition in
parallel. It allows for arbitrarily large timesteps but with iteration counts
scaling linearly with Courant number in the limit of large Courant number. This
solver approach is implemented using Firedrake, and the additive Schwarz
preconditioner framework of PETSc. We demonstrate the robustness of the scheme
using a standard set of testcases that may be compared with other approaches.
|
Amyloid-Beta Axial Plane PET Synthesis from Structural MRI: An Image
Translation Approach for Screening Alzheimer's Disease | In this work, an image translation model is implemented to produce synthetic
amyloid-beta PET images from structural MRI that are quantitatively accurate.
Image pairs of amyloid-beta PET and structural MRI were used to train the
model. We found that the synthetic PET images could be produced with a high
degree of similarity to truth in terms of shape, contrast and overall high SSIM
and PSNR. This work demonstrates that performing structural to quantitative
image translation is feasible, enabling access to amyloid-beta information
from MRI alone.
|
The Vlasov equation with strong magnetic field and oscillating electric
field as a model of isotope resonant separation | We study the qualitative behavior of the Vlasov equation with a strong external
magnetic field and an oscillating electric field. This model is relevant for
understanding isotope resonant separation. We show that the effective equation
is a kinetic equation with a memory term. This memory term involves a
pseudo-differential operator whose kernel is characterized by an integral
equation involving Bessel functions. In some particular cases, the kernel is
explicitly given.
|
Absorption of scalar waves in correlated disordered media and its
maximization using stealth hyperuniformity | We develop a multiple scattering theory for the absorption of waves in
disordered media. Based on a general expression of the average absorbed power,
we discuss the possibility to maximize absorption by using structural
correlations of disorder as a degree of freedom. In a model system made of
absorbing scatterers in a transparent background, we show that a stealth
hyperuniform distribution of the scatterers allows the average absorbed power
to reach its maximum value. This study provides a theoretical framework for the
design of efficient non-resonant absorbers made of dilute disordered materials,
for broadband and omnidirectional light, and other kinds of waves.
|
Advancing Humor-Focused Sentiment Analysis through Improved
Contextualized Embeddings and Model Architecture | Humor is a natural and fundamental component of human interactions. When
correctly applied, humor allows us to express thoughts and feelings
conveniently and effectively, increasing interpersonal affection, likeability,
and trust. However, understanding the use of humor is a computationally
challenging task from the perspective of humor-aware language processing
models. As language models become ubiquitous through virtual assistants and IoT
devices, the need to develop humor-aware models rises exponentially. To further
improve the state-of-the-art capacity to perform this particular
sentiment-analysis task, we must explore models that incorporate contextualized
and nonverbal elements in their design. Ideally, we seek architectures
accepting non-verbal elements as additional embedded inputs to the model,
alongside the original sentence-embedded input. This survey thus analyses the
current state of research in techniques for improved contextualized embedding
incorporating nonverbal information, as well as newly proposed deep
architectures to improve context retention on top of popular word-embeddings
methods.
|
Frictionless Authentication Systems: Emerging Trends, Research
Challenges and Opportunities | Authentication and authorization are critical security layers to protect a
wide range of online systems, services and content. However, the increased
prevalence of wearable and mobile devices, the expectations of a frictionless
experience and the diverse user environments will challenge the way users are
authenticated. Consumers demand secure and privacy-aware access from any
device, whenever and wherever they are, without any obstacles. This paper
reviews emerging trends and challenges with frictionless authentication systems
and identifies opportunities for further research related to the enrollment of
users, the usability of authentication schemes, as well as security and privacy
trade-offs of mobile and wearable continuous authentication systems.
|
Diff-Instruct: A Universal Approach for Transferring Knowledge From
Pre-trained Diffusion Models | Due to the ease of training, ability to scale, and high sample quality,
diffusion models (DMs) have become the preferred option for generative
modeling, with numerous pre-trained models available for a wide variety of
datasets. Containing intricate information about data distributions,
pre-trained DMs are valuable assets for downstream applications. In this work,
we consider learning from pre-trained DMs and transferring their knowledge to
other generative models in a data-free fashion. Specifically, we propose a
general framework called Diff-Instruct to instruct the training of arbitrary
generative models as long as the generated samples are differentiable with
respect to the model parameters. Our proposed Diff-Instruct is built on a
rigorous mathematical foundation where the instruction process directly
corresponds to minimizing a novel divergence we call Integral Kullback-Leibler
(IKL) divergence. IKL is tailored for DMs by calculating the integral of the KL
divergence along a diffusion process, which we show to be more robust in
comparing distributions with misaligned supports. We also reveal non-trivial
connections of our method to existing works such as DreamFusion and generative
adversarial training. To demonstrate the effectiveness and universality of
Diff-Instruct, we consider two scenarios: distilling pre-trained diffusion
models and refining existing GAN models. The experiments on distilling
pre-trained diffusion models show that Diff-Instruct results in
state-of-the-art single-step diffusion-based models. The experiments on
refining GAN models show that the Diff-Instruct can consistently improve the
pre-trained generators of GAN models across various settings.
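As a numerical illustration of the integral KL idea (not the paper's estimator for DMs), the sketch below evaluates an IKL-style quantity for two 1D Gaussians along a toy variance-preserving diffusion, where every marginal stays Gaussian and the KL is closed-form; the uniform weight w(t) = 1 and the schedule are our assumptions.

    import numpy as np

    def kl_gauss(m1, v1, m2, v2):
        # closed-form KL( N(m1, v1) || N(m2, v2) )
        return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2)**2) / v2 - 1.0)

    def ikl(m1, v1, m2, v2, n=200):
        ts = np.linspace(0.0, 1.0, n)
        amps = np.exp(-ts)                  # toy variance-preserving schedule
        vals = [kl_gauss(a * m1, a**2 * v1 + 1 - a**2,
                         a * m2, a**2 * v2 + 1 - a**2) for a in amps]
        return np.mean(vals)                # integral over [0,1] with w(t)=1

    print(ikl(0.0, 1.0, 2.0, 0.5))          # decays as the marginals mix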
|
Spatial Context Awareness for Unsupervised Change Detection in Optical
Satellite Images | Detecting changes on the ground in multitemporal Earth observation data is
one of the key problems in remote sensing. In this paper, we introduce Sibling
Regression for Optical Change detection (SiROC), an unsupervised method for
change detection in optical satellite images with medium and high resolution.
SiROC is a spatial context-based method that models a pixel as a linear
combination of its distant neighbors. It uses this model to analyze differences
in the pixel and its spatial context-based predictions in subsequent time
periods for change detection. We combine this spatial context-based change
detection with ensembling over mutually exclusive neighborhoods and
transitioning from pixel to object-level changes with morphological operations.
SiROC achieves competitive performance for change detection with
medium-resolution Sentinel-2 and high-resolution Planetscope imagery on four
datasets. Besides accurate predictions without the need for training, SiROC
also provides a well-calibrated uncertainty of its predictions. This makes the
method especially useful in conjunction with deep-learning based methods for
applications such as pseudo-labeling.
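A minimal sketch of the spatial-context idea, assuming NumPy/SciPy and a simple annulus mean in place of SiROC's full neighbor regression; img1 and img2 stand for co-registered single-band acquisitions from the two dates.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def context(im, inner=3, outer=11):
        # mean over an annulus of distant neighbors, pixel itself excluded
        big = uniform_filter(im, outer)
        small = uniform_filter(im, inner)
        return (big * outer**2 - small * inner**2) / (outer**2 - inner**2)

    def change_score(img1, img2):
        r1 = img1 - context(img1)      # context residual at date 1
        r2 = img2 - context(img2)      # context residual at date 2
        return np.abs(r2 - r1)         # large where context stops predicting

    rng = np.random.default_rng(3)
    img1 = rng.random((64, 64))
    img2 = img1.copy()
    img2[20:30, 20:30] += 0.8          # simulated change
    mask = change_score(img1, img2) > 0.4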
|
Hydrodynamic View of Wave-Packet Interference: Quantum Caves | Wave-packet interference is investigated within the complex quantum
Hamilton-Jacobi formalism using a hydrodynamic description. Quantum
interference leads to the formation of the topological structure of quantum
caves in space-time Argand plots. These caves consist of the vortical and
stagnation tubes originating from the isosurfaces of the amplitude of the wave
function and its first derivative. Complex quantum trajectories display
counterclockwise helical wrapping around the stagnation tubes and hyperbolic
deflection near the vortical tubes. The string of alternating stagnation and
vortical tubes is sufficient to generate divergent trajectories. Moreover, the
average wrapping time for trajectories and the rotational rate of the nodal
line in the complex plane can be used to define the lifetime for interference
features.
|
Quantum Information Transmission over a Partially Degradable Channel | We investigate a quantum coding scheme for quantum communication over a PD
(partially degradable) quantum channel. For a PD channel, the
degraded environment state can be expressed from the channel output state up to
a degrading map. PD channels can be restricted to the set of optical channels
which allows for the parties to exploit the benefits in experimental quantum
communications. We show that for a PD channel, the partial degradability
property leads to higher quantum data rates in comparison to those of a
degradable channel. The PD property is particularly convenient for quantum
communications and allows one to implement the experimental quantum protocols
with higher performance. We define a coding scheme for PD-channels and give the
achievable rates of quantum communication.
|
The Subfield Codes of Hyperoval and Conic codes | Hyperovals in $\PG(2,\gf(q))$ with even $q$ are maximal arcs and an
interesting research topic in finite geometries and combinatorics. Hyperovals
in $\PG(2,\gf(q))$ are equivalent to $[q+2,3,q]$ MDS codes over $\gf(q)$,
called hyperoval codes, in the sense that one can be constructed from the
other. Ovals in $\PG(2,\gf(q))$ for odd $q$ are equivalent to $[q+1,3,q-1]$ MDS
codes over $\gf(q)$, which are called oval codes. In this paper, we investigate
the binary subfield codes of two families of hyperoval codes and the $p$-ary
subfield codes of the conic codes. The weight distributions of these subfield
codes and the parameters of their duals are determined. As a byproduct, we
generalize one family of the binary subfield codes to the $p$-ary case and
obtain its weight distribution. The codes presented in this paper are optimal
or almost optimal in many cases. In addition, the parameters of these binary
codes and $p$-ary codes seem new.
|
Unrolling PALM for sparse semi-blind source separation | Sparse Blind Source Separation (BSS) has become a well established tool for a
wide range of applications - for instance, in astrophysics and remote sensing.
Classical sparse BSS methods, such as the Proximal Alternating Linearized
Minimization (PALM) algorithm, nevertheless often suffer from a difficult
hyperparameter choice, which undermines their results. To bypass this pitfall,
we propose in this work to build on the thriving field of algorithm
unfolding/unrolling. Unrolling PALM enables to leverage the data-driven
knowledge stemming from realistic simulations or ground-truth data by learning
both PALM hyperparameters and variables. In contrast to most existing unrolled
algorithms, which assume a fixed known dictionary during the training and
testing phases, this article further emphasizes the ability to deal with
variable mixing matrices (a.k.a. dictionaries). The proposed Learned PALM
(LPALM) algorithm thus enables to perform semi-blind source separation, which
is key to increase the generalization of the learnt model in real-world
applications. We illustrate the relevance of LPALM in astrophysical
multispectral imaging: the algorithm not only needs up to $10^4-10^5$ times
fewer iterations than PALM, but also improves the separation quality, while
avoiding the cumbersome hyperparameter and initialization choice of PALM. We
further show that LPALM outperforms other unrolled source separation methods in
the semi-blind setting.
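For reference, one plain (non-unrolled) PALM loop for sparse BSS, X ≈ AS with S sparse, may be sketched as follows in NumPy; the threshold lam is exactly the kind of hyperparameter that LPALM learns rather than hand-tunes, and the step sizes and unit-norm projection are standard illustrative choices.

    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def palm(X, n_src, lam=0.05, iters=500, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((X.shape[0], n_src))
        A /= np.linalg.norm(A, axis=0)
        S = np.zeros((n_src, X.shape[1]))
        for _ in range(iters):
            # proximal gradient step in S: Lipschitz step, then thresholding
            Ls = np.linalg.norm(A.T @ A, 2)
            S = soft(S - (A.T @ (A @ S - X)) / Ls, lam / Ls)
            # gradient step in A, then projection onto unit-norm columns
            La = np.linalg.norm(S @ S.T, 2) + 1e-12
            A = A - ((A @ S - X) @ S.T) / La
            A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
        return A, S

    rng = np.random.default_rng(1)
    S_true = rng.standard_normal((2, 300)) * (rng.random((2, 300)) < 0.1)
    A_true = rng.standard_normal((5, 2))
    A_hat, S_hat = palm(A_true @ S_true, n_src=2)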
|
Ageing test of the ATLAS RPCs at X5-GIF | An ageing test of three ATLAS production RPC stations is in course at X5-GIF,
the CERN irradiation facility. The chamber efficiencies are monitored using
cosmic rays triggered by a scintillator hodoscope. Higher statistics
measurements are made when the X5 muon beam is available. We report here the
measurements of the efficiency versus operating voltage at different source
intensities, up to a maximum counting rate of about 700 Hz/cm^2. We describe the
performance of the chambers during the test up to an overall ageing of 4 ATLAS
equivalent years, corresponding to an integrated charge of 0.12 C/cm^2, including
a safety factor of 5.
|
Generalized Mie theory for full-wave numerical calculations of
scattering near-field optical microscopy with arbitrary geometries | Scattering-type scanning near-field optical microscopy is becoming a premier
method for the nanoscale optical investigation of materials well beyond the
diffraction limit. A number of popular numerical methods exist to predict the
near-field contrast for axisymmetric configurations of scatterers on a surface
in the quasi-electrostatic approximation. Here, a fully electrodynamic approach
is given for the calculation of near-field contrast of several scatterers in
arbitrary configuration, based on the generalized Mie scattering method.
Examples for the potential of this new approach are given by showing the
coupling of hyperbolic phonon polaritons in hexagonal boron nitride layers and
showing enhanced scattering in core-shell systems. In general, this method
enables the numerical calculation of the near-field contrast in a variety of
strongly resonant scatterers and is able to accurately recreate spatial
near-field maps.
|
Plate motion in sheared granular fault system | Plate motion near the fault gouge layer, and the elastic interplay between
the gouge layer and the plate under stick-slip conditions, is key to
understanding the dynamics of sheared granular fault systems. Here, a
two-dimensional implementation of the combined finite-discrete element method
(FDEM), which merges the finite element method (FEM) and the discrete element
method (DEM), is used to explicitly to simulate a sheared granular fault
system. We focus on investigating the influence of normal load, driving shear
velocity and plate stiffness on the velocities and displacements measured at
locations on the upper and lower plates just adjacent to the gouge in the
direction parallel to the shear direction (x direction). The simulations show
that during slips the plate velocities are proportional to the normal load and may
be inversely proportional to the square root of the plate's Young's modulus;
whereas the driving shear velocity does not show distinct influence on the
plate velocities. During stick phases, the velocities of the upper and lower
plates are respectively slightly greater and slightly smaller than half of
the driving shear velocity, and are both in the same direction of shear. The
shear strain rate of the gouge is calculated from this velocity difference
between the upper and lower plates during stick phases, from which the gouge
effective shear modulus can be obtained. The results show that the gouge
effective shear modulus increases proportionally with normal load, while the
influence of shear velocity and plate stiffness on gouge effective shear
modulus is minor. The simulations address the dynamics of a laboratory scale
fault gouge system and may help reveal the complexities of earthquake
frictional dynamics.
|
Evolving soft locomotion in aquatic and terrestrial environments:
effects of material properties and environmental transitions | Designing soft robots poses considerable challenges: automated design
approaches may be particularly appealing in this field, as they promise to
optimize complex multi-material machines with very little or no human
intervention. Evolutionary soft robotics is concerned with the application of
optimization algorithms inspired by natural evolution in order to let soft
robots (both morphologies and controllers) spontaneously evolve within
physically-realistic simulated environments, figuring out how to satisfy a set
of objectives defined by human designers. In this paper a powerful evolutionary
system is put in place in order to perform a broad investigation on the
free-form evolution of walking and swimming soft robots in different
environments. Three sets of experiments are reported, tackling different
aspects of the evolution of soft locomotion. The first two sets explore the
effects of different material properties on the evolution of terrestrial and
aquatic soft locomotion: particularly, we show how different materials lead to
the evolution of different morphologies, behaviors, and energy-performance
tradeoffs. It is found that within our simplified physics world stiffer robots
evolve more sophisticated and effective gaits and morphologies on land, while
softer ones tend to perform better in water. The third set of experiments
starts investigating the effect and potential benefits of major environmental
transitions (land - water) during evolution. Results provide interesting
morphological exaptation phenomena, and point out a potential asymmetry between
land-water and water-land transitions: while the first type of transition
appears to be detrimental, the second one seems to have some beneficial
effects.
|
A Hierarchical Key Management Scheme for Wireless Sensor Networks Based
on Identity-based Encryption | Limited resources (such as energy, computing power, storage, and so on) make
it impractical for wireless sensor networks (WSNs) to deploy traditional
security schemes. In this paper, a hierarchical key management scheme is
proposed on the basis of identity-based encryption (IBE). The proposed scheme
not only converts the distributed flat architecture of the WSNs to a
hierarchical architecture for better network management but also ensures the
independence and security of the sub-networks. This paper firstly reviews the
identity-based encryption, particularly, the Boneh-Franklin algorithm. Then a
novel hierarchical key management scheme based on the basic Boneh-Franklin and
Diffie-Hellman (DH) algorithms is proposed. Finally, the security and
efficiency of our scheme are discussed by comparison with other identity-based
schemes for the flat architecture of WSNs.
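For the DH component, a toy finite-field key agreement is sketched below; the small Mersenne prime and generator are illustrative only (a deployment would use a standardized group), and the shared secret would then seed the sub-network keys.

    import secrets

    p = 2**127 - 1        # a Mersenne prime; far too small for real use
    g = 3                 # illustrative generator

    a = secrets.randbelow(p - 2) + 2      # cluster head's private value
    b = secrets.randbelow(p - 2) + 2      # base station's private value

    A = pow(g, a, p)                      # public values exchanged in clear
    B = pow(g, b, p)

    k_head = pow(B, a, p)                 # both ends derive the same secret
    k_base = pow(A, b, p)
    assert k_head == k_base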
|
Prompt2Fashion: An automatically generated fashion dataset | Despite the rapid evolution and increasing efficacy of language and vision
generative models, there remains a lack of comprehensive datasets that bridge
the gap between personalized fashion needs and AI-driven design, limiting the
potential for truly inclusive and customized fashion solutions. In this work,
we leverage generative models to automatically construct a fashion image
dataset tailored to various occasions, styles, and body types as instructed by
users. We use different Large Language Models (LLMs) and prompting strategies
to offer personalized outfits of high aesthetic quality, detail, and relevance
to both expert and non-expert users' requirements, as demonstrated by
qualitative analysis. To date, the evaluation of the generated outfits has
been conducted by non-expert human subjects. While this evaluation provides
fine-grained insights into the quality and relevance of the generated outfits,
we extend the discussion to the importance of expert knowledge for the
evaluation of artistic AI-generated datasets such as this one. Our dataset is
publicly available on
GitHub at https://github.com/georgiarg/Prompt2Fashion.
|
Fundamental limits to the refractive index of transparent optical
materials | Increasing the refractive index available for optical and nanophotonic
systems opens new vistas for design: for applications ranging from broadband
metalenses to ultrathin photovoltaics to high-quality-factor resonators, higher
index directly leads to better devices with greater functionality. Although
standard transparent materials have been limited to refractive indices smaller
than 3 in the visible, recent metamaterial designs have achieved refractive
indices above 5, accompanied by high losses, and near the phase transition of a
ferroelectric perovskite a broadband index above 26 has been claimed. In this
work, we derive fundamental limits to the refractive index of any material,
given only the underlying electron density and either the maximum allowable
dispersion or the minimum bandwidth of interest. The Kramers-Kronig relations
provide a representation for any passive (and thereby causal) material, and a
well-known sum rule constrains the possible distribution of oscillator
strengths. In the realm of small to modest dispersion, our bounds are closely
approached and not surpassed by a wide range of natural materials, showing that
nature has already nearly reached a Pareto frontier for refractive index and
dispersion. Surprisingly, our bound shows a cube-root dependence on electron
density, meaning that a refractive index of 26 over all visible frequencies is
likely impossible. Conversely, for narrow-bandwidth applications, nature does
not provide the highly dispersive, high-index materials that our bounds suggest
should be possible. We use the theory of composites to identify metal-based
metamaterials that can exhibit small losses and sizeable increases in
refractive index over the current best materials.
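For reference, the two classical ingredients invoked above can be stated
compactly (standard textbook forms in SI units, not the paper's full
derivation). The Kramers-Kronig relation expresses causality as a constraint
linking the real and imaginary parts of the susceptibility, and the f-sum rule
bounds the total oscillator strength by the electron density N:

    \mathrm{Re}\,\chi(\omega) = \frac{2}{\pi}\,\mathrm{P}\!\int_0^{\infty}
        \frac{\omega'\,\mathrm{Im}\,\chi(\omega')}{\omega'^2 - \omega^2}\,
        \mathrm{d}\omega'

    \int_0^{\infty} \omega\,\mathrm{Im}\,\varepsilon(\omega)\,\mathrm{d}\omega
        = \frac{\pi}{2}\,\omega_p^2,
    \qquad \omega_p^2 = \frac{N e^2}{\varepsilon_0 m_e}

Combining the two fixes how much index enhancement a given electron density
can buy at a given dispersion, which underlies the cube-root scaling reported
above.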
|
Design and optimization of a portable LQCD Monte Carlo code using
OpenACC | The present panorama of HPC architectures is extremely heterogeneous, ranging
from traditional multi-core CPU processors, supporting a wide class of
applications but delivering moderate computing performance, to many-core GPUs,
exploiting aggressive data-parallelism and delivering higher performance for
streaming computing applications. In this scenario, code portability (and
performance portability) becomes necessary for easy maintainability of
applications; this is very relevant in scientific computing, where code changes
are very frequent, making it tedious and error-prone to keep different code
versions aligned. In this work we present the design and optimization of a
state-of-the-art production-level LQCD Monte Carlo application, using the
directive-based OpenACC programming model. OpenACC abstracts parallel
programming to a descriptive level, relieving programmers from specifying how
codes should be mapped onto the target architecture. We describe the
implementation of a code fully written in OpenACC, and show that we are able to
target several different architectures, including state-of-the-art traditional
CPUs and GPUs, with the same code. We also measure performance, evaluating the
computing efficiency of our OpenACC code on several architectures, comparing
it with GPU-specific implementations and showing that a good level of
performance portability can be reached.
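To make the descriptive programming style concrete (a generic saxpy-like loop,
not one of the authors' LQCD kernels), the sketch below annotates an ordinary
C loop with an OpenACC directive; the compiler, not the programmer, decides
how the iterations are mapped onto the target device.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* The directive declares the loop parallel and delegates the mapping
           (gangs, workers, vectors, data movement) to the OpenACC compiler;
           compiled without OpenACC support, the pragma is ignored and the
           loop runs sequentially on the host. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }

The same source therefore compiles for multi-core CPUs and GPUs alike, which
is the portability property evaluated in the paper.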
|
Exploiting locality and physical invariants to design effective Deep
Reinforcement Learning control of the unstable falling liquid film | Instabilities arise in a number of flow configurations. One such
manifestation is the development of interfacial waves in multiphase flows, such
as those observed in the falling liquid film problem. Controlling the
development of such instabilities is a problem of both academic and industrial
interest. However, this has proven challenging in most cases due to the strong
nonlinearity and high dimensionality of the underlying equations. In the
present work, we successfully apply Deep Reinforcement Learning (DRL) for the
control of the one-dimensional (1D) depth-integrated falling liquid film. In
addition, we introduce for the first time translational invariance in the
architecture of the DRL agent, and we exploit locality of the control problem
to define a dense reward function. This allows us both to speed up learning
considerably and to easily control an arbitrarily large number of jets,
overcoming the curse of dimensionality on the control output size that would
arise with a naive approach. This illustrates the importance of the
architecture of the agent for successful DRL control, and we believe this will
be an important element in the effective application of DRL to large
two-dimensional (2D) or three-dimensional (3D) systems featuring translational,
axisymmetric or other invariants.
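The two architectural ideas highlighted above (one policy shared across all
jets, each acting on and rewarded by its local neighborhood) can be sketched
without any DRL machinery. The C fragment below is a hedged illustration, not
the authors' implementation: a fixed linear 'policy' with shared weights is
applied at every jet position over a local window of interface heights, and a
dense reward is computed per jet; in a real system the shared weights would be
trained by a DRL algorithm rather than hand-set.

    #include <stdio.h>
    #include <math.h>

    #define NX 64     /* discretized film interface */
    #define NJETS 8   /* control jets, evenly spaced */
    #define HALF 3    /* half-width of each jet's observation window */

    /* One weight vector shared by all jets: this is the translational
       invariance, so adding jets adds no new parameters. Values are
       placeholders standing in for trained weights. */
    static const double w[2 * HALF + 1] = {
        -0.1, -0.2, -0.4, 0.0, 0.4, 0.2, 0.1
    };

    /* Local linear policy: one jet's action from its observation window
       (periodic domain). */
    static double policy(const double *h, int center) {
        double a = 0.0;
        for (int k = -HALF; k <= HALF; k++)
            a += w[k + HALF] * h[(center + k + NX) % NX];
        return a;
    }

    /* Dense reward: each jet is scored only on interface deviations in
       its own neighborhood, not on a single global signal. */
    static double local_reward(const double *h, int center) {
        double r = 0.0;
        for (int k = -HALF; k <= HALF; k++) {
            double d = h[(center + k + NX) % NX] - 1.0;  /* flat target */
            r -= d * d;
        }
        return r;
    }

    int main(void) {
        const double PI = 3.14159265358979323846;
        double h[NX];
        for (int i = 0; i < NX; i++)  /* toy wavy interface */
            h[i] = 1.0 + 0.1 * sin(2.0 * PI * i / NX);

        for (int j = 0; j < NJETS; j++) {
            int c = j * (NX / NJETS);
            printf("jet %d: action %+.4f  local reward %.4f\n",
                   j, policy(h, c), local_reward(h, c));
        }
        return 0;
    }

Because observation, action, and reward are all local and the weights are
shared, the parameter count is independent of the number of jets, which is
what lets the approach sidestep the curse of dimensionality described above.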
|