title | abstract
---|---
Is Style All You Need? Dependencies Between Emotion and GST-based
Speaker Recognition | In this work, we study the hypothesis that speaker identity embeddings
extracted from speech samples may be used for detection and classification of
emotion. In particular, we show that emotions can be effectively identified by
learning speaker identities by use of a 1-D Triplet Convolutional Neural
Network (CNN) and Global Style Token (GST) scheme (e.g., the DeepTalk network) and
reusing the trained speaker recognition model weights to generate features in
the emotion classification domain. The automatic speaker recognition (ASR)
network is trained with VoxCeleb1, VoxCeleb2, and Librispeech datasets with a
triplet training loss function using speaker identity labels. Using a Support
Vector Machine (SVM) classifier, we map speaker identity embeddings into
discrete emotion categories from the CREMA-D, IEMOCAP, and MSP-Podcast
datasets. On the task of speech emotion detection, we obtain 80.8% ACC with
acted emotion samples from CREMA-D, 81.2% ACC with semi-natural emotion samples
in IEMOCAP, and 66.9% ACC with natural emotion samples in MSP-Podcast. We also
propose a novel two-stage hierarchical classifier (HC) approach which
demonstrates +2% ACC improvement on CREMA-D emotion samples. Through this work,
we seek to convey the importance of holistically modeling intra-user variation
within audio samples.
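As a minimal sketch of the embedding-reuse step described above, the snippet
below fits an SVM on frozen speaker-identity embeddings; the embedding
dimension, data, and labels are placeholders, not the paper's pipeline.

```python
# Sketch: reuse fixed speaker-identity embeddings for emotion classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical 256-d embeddings from a frozen speaker-recognition network,
# paired with discrete emotion labels (placeholder data).
X = np.random.randn(1000, 256)
y = np.random.randint(0, 6, size=1000)   # e.g., six CREMA-D emotion categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("ACC:", accuracy_score(y_te, clf.predict(X_te)))
```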
|
Aligning with Whom? Large Language Models Have Gender and Racial Biases
in Subjective NLP Tasks | Human perception of language depends on personal backgrounds like gender and
ethnicity. While existing studies have shown that large language models (LLMs)
hold values that are closer to certain societal groups, it is unclear whether
their prediction behaviors on subjective NLP tasks also exhibit a similar bias.
In this study, leveraging the POPQUORN dataset which contains annotations of
diverse demographic backgrounds, we conduct a series of experiments on four
popular LLMs to investigate their capability to understand group differences
and potential biases in their predictions for politeness and offensiveness. We
find that for both tasks, model predictions are closer to the labels from White
and female participants. We further explore prompting with the target
demographic labels and show that including the target demographic in the prompt
actually worsens the model's performance. More specifically, when prompted
to respond from the perspective of "Black" and "Asian" individuals,
models show lower performance in predicting both overall scores as well as the
scores from corresponding groups. Our results suggest that LLMs hold gender and
racial biases for subjective NLP tasks and that demographic-infused prompts
alone may be insufficient to mitigate such effects. Code and data are available
at https://github.com/Jiaxin-Pei/LLM-Group-Bias.
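For concreteness, a schematic of the kind of demographic-infused prompt probed
in the study; the wording is a hypothetical illustration, not the paper's
exact prompt.

```python
# Illustrative demographic-infused prompt (wording is hypothetical).
def build_prompt(text, group=None):
    persona = f" Respond from the perspective of a {group} individual." if group else ""
    return (f"Rate the politeness of the following message on a 1-5 scale."
            f"{persona}\nMessage: {text}\nRating:")

print(build_prompt("Could you possibly close the door?", group="Black"))
```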
|
Problem with the derivation of the Navier-Stokes equation by means of
Zwanzig-Mori technique: Correction and solution | The derivation of the Navier-Stokes equation starting from the Liouville
equation using projector techniques yields a friction term which is nonlinear
in the velocity. As explained in the first version of this paper, when the
second-order part of this term is non-zero, this leads to an incorrect form of
the equation.
In this second version, it is shown that the problem is due to an inadequate
treatment of one of the correlation functions that appear in the derivation.
Repeating the calculation yields a vanishing second-order part, and the
Navier-Stokes equation is thus correctly derived by the projection operator
technique.
|
Analysis of Non-binary Hybrid LDPC Codes | In this paper, we analyse asymptotically a new class of LDPC codes called
Non-binary Hybrid LDPC codes, which was recently introduced. We use
density evolution techniques to derive a stability condition for hybrid LDPC
codes, and prove their threshold behavior. We then use this stability
condition to draw conclusions about the asymptotic advantages of hybrid LDPC
codes over their non-hybrid counterparts.
|
Hybrid Control Policy for Artificial Pancreas via Ensemble Deep
Reinforcement Learning | Objective: The artificial pancreas (AP) has shown promising potential in
achieving closed-loop glucose control for individuals with type 1 diabetes
mellitus (T1DM). However, designing an effective control policy for the AP
remains challenging due to the complex physiological processes, delayed insulin
response, and inaccurate glucose measurements. While model predictive control
(MPC) offers safety and stability through the dynamic model and safety
constraints, it lacks individualization and is adversely affected by
unannounced meals. Conversely, deep reinforcement learning (DRL) provides
personalized and adaptive strategies but faces challenges with distribution
shifts and substantial data requirements. Methods: We propose a hybrid control
policy for the artificial pancreas (HyCPAP) to address the above challenges.
HyCPAP combines an MPC policy with an ensemble DRL policy, leveraging the
strengths of both policies while compensating for their respective limitations.
To facilitate faster deployment of AP systems in real-world settings, we
further incorporate meta-learning techniques into HyCPAP, leveraging previous
experience and patient-shared knowledge to enable fast adaptation to new
patients with limited available data. Results: We conduct extensive experiments
using the FDA-accepted UVA/Padova T1DM simulator across three scenarios. Our
approaches achieve the highest percentage of time spent in the desired
euglycemic range and the lowest occurrences of hypoglycemia. Conclusion: The
results clearly demonstrate the superiority of our methods for closed-loop
glucose management in individuals with T1DM. Significance: The study presents
novel control policies for AP systems, affirming the great potential of the
proposed methods for efficient closed-loop glucose control.
|
Indication of multiscaling in the volatility return intervals of stock
markets | The distribution of the return intervals $\tau$ between volatilities above a
threshold $q$ for financial records has been approximated by a scaling
behavior. To explore how accurate the scaling is, and thereby to understand
the underlying non-linear mechanism, we investigate intraday datasets of the
500 stocks that constitute the Standard & Poor's 500 index. We show that the
cumulative
distribution of return intervals has systematic deviations from scaling. We
support this finding by studying the m-th moment $\mu_m \equiv
<(\tau/<\tau>)^m>^{1/m}$, which shows a systematic trend with the mean interval
$<\tau>$. We generate surrogate records using the Schreiber method, and find
that their cumulative distributions almost collapse to a single curve and
moments are almost constant over most of the range of $<\tau>$. These substantial
differences suggest that non-linear correlations in the original volatility
sequence account for the deviations from a single scaling law. We also find
that the original and surrogate records exhibit slight deviations at short and
long $<\tau>$, due to the discreteness and the finite size of the records,
respectively. To minimize these effects when testing for multiscaling
behavior, we investigate the moments in the range $10 < <\tau> \leq 100$, and find
the exponent $\alpha$ from the power law fitting $\mu_m\sim<\tau>^\alpha$ has a
narrow distribution around a value $\alpha\neq0$ that depends on $m$ for the 500 stocks.
The distribution of $\alpha$ for the surrogate records is very narrow and
centered around $\alpha=0$. This suggests that the return interval distribution
exhibits multiscaling behavior due to the non-linear correlations in the
original volatility sequence.
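As a rough illustration of the quantities involved, the sketch below computes
the moment $\mu_m$ for one series of return intervals and fits
$\mu_m\sim<\tau>^\alpha$ across series; all data are placeholders, not the
S&P 500 records.

```python
# Sketch of the m-th moment and the power-law fit described above.
import numpy as np

def moment(tau, m):
    """mu_m = <(tau/<tau>)^m>^(1/m) for a series of return intervals."""
    r = tau / tau.mean()
    return np.mean(r**m) ** (1.0 / m)

# Hypothetical per-stock values: mean intervals <tau> and moments mu_m.
mean_tau = np.array([12.0, 25.0, 40.0, 80.0])   # placeholder <tau> values
mu_m = np.array([1.90, 2.00, 2.08, 2.20])       # placeholder mu_2 values

alpha, _ = np.polyfit(np.log(mean_tau), np.log(mu_m), 1)
print("alpha =", alpha)   # alpha != 0 indicates multiscaling
```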
|
Codebook-Based Beam Tracking for Conformal Array-Enabled UAV MmWave
Networks | Millimeter wave (mmWave) communications can potentially meet the high
data-rate requirements of unmanned aerial vehicle (UAV) networks. However, as
the prerequisite of mmWave communications, the narrow directional beam tracking
is very challenging because of the three-dimensional (3D) mobility and attitude
variation of UAVs. Aiming to address the beam tracking difficulties, we propose
to integrate the conformal array (CA) with the surface of each UAV, which
enables full spatial coverage and agile beam tracking in highly dynamic
UAV mmWave networks. More specifically, the key contributions of our work are
three-fold. 1) A new mmWave beam tracking framework is established for the
CA-enabled UAV mmWave network. 2) A specialized hierarchical codebook is
constructed to drive the directional radiating element (DRE)-covered
cylindrical conformal array (CCA), which contains both the angular beam pattern
and the subarray pattern to fully utilize the potential of the CA. 3) A
codebook-based multiuser beam tracking scheme is proposed, where the Gaussian
process machine learning enabled UAV position/attitude prediction is developed
to improve the beam tracking efficiency in conjunction with the tracking-error
aware adaptive beamwidth control. Simulation results validate the effectiveness
of the proposed codebook-based beam tracking scheme in the CA-enabled UAV
mmWave network, and demonstrate the advantages of CA over the conventional
planar array in terms of spectrum efficiency and outage probability in the
highly dynamic scenarios.
|
Stellarator equilibrium axis-expansion to all orders in distance from
the axis for arbitrary plasma beta | A systematic theory of the asymptotic expansion of the magnetohydrodynamic
(MHD) equilibrium in the distance from the magnetic axis is developed to
include arbitrary smooth currents near the magnetic axis. Compared to the
vacuum and the force-free system, an additional magnetic differential equation
must be solved to obtain the pressure-driven currents. It is shown that there
exist variables in which the rest of the MHD system closely mimics the vacuum
system. Thus, a unified treatment of MHD fields is possible. The mathematical
structure of the near-axis expansions to arbitrary order is examined carefully
to show that the double-periodicity of physical quantities in a toroidal domain
can be satisfied order by order. The essential role played by the normal form
in solving the magnetic differential equations is highlighted. Several explicit
examples of vacuum, force-free, and MHD equilibrium in different geometries are
presented.
|
Polynomial-time computing over quadratic maps I: sampling in real
algebraic sets | Given a quadratic map Q : K^n -> K^k defined over a computable subring D of a
real closed field K, and a polynomial p(Y_1,...,Y_k) of degree d, we consider
the zero set Z=Z(p(Q(X)),K^n) of the polynomial p(Q(X_1,...,X_n)). We present a
procedure that computes, in (dn)^O(k) arithmetic operations in D, a set S of
(real univariate representations of) sampling points in K^n that intersects
nontrivially each connected component of Z. As soon as k=o(n), this is faster
than the standard methods that all have exponential dependence on n in the
complexity. In particular, our procedure is polynomial-time for constant k. In
contrast, the best previously known procedure (due to A. Barvinok) is only
capable of deciding in n^O(k^2) operations the nonemptiness (rather than
constructing sampling points) of the set Z in the case of p(Y)=sum_i Y_i^2 and
homogeneous Q.
A by-product of our procedure is a bound (dn)^O(k) on the number of connected
components of Z.
The procedure consists of exact symbolic computations in D and outputs
vectors of algebraic numbers. It involves extending K by infinitesimals and
subsequent limit computation by a novel procedure that utilizes knowledge of an
explicit isomorphism between real algebraic sets.
|
Spurious Correlations and Where to Find Them | Spurious correlations occur when a model learns unreliable features from the
data and are a well-known drawback of data-driven learning. Although several
algorithms have been proposed to mitigate them, we have yet to jointly derive the
indicators of spurious correlations. As a result, the solutions built upon
standalone hypotheses fail to beat simple ERM baselines. We collect some of the
commonly studied hypotheses behind the occurrence of spurious correlations and
investigate their influence on standard ERM baselines using synthetic datasets
generated from causal graphs. Subsequently, we observe patterns connecting
these hypotheses and model design choices.
|
Automatic Throughput and Critical Path Analysis of x86 and ARM Assembly
Kernels | Useful models of loop kernel runtimes on out-of-order architectures require
an analysis of the in-core performance behavior of instructions and their
dependencies. While an instruction throughput prediction sets a lower bound to
the kernel runtime, the critical path defines an upper bound. Such predictions
are an essential part of analytic (i.e., white-box) performance models like the
Roofline and Execution-Cache-Memory (ECM) models. They enable a better
understanding of the performance-relevant interactions between hardware
architecture and loop code. The Open Source Architecture Code Analyzer (OSACA)
is a static analysis tool for predicting the execution time of sequential
loops. It previously supported only x86 (Intel and AMD) architectures and
simple, optimistic full-throughput execution. We have heavily extended OSACA to
support ARM instructions and critical path prediction including the detection
of loop-carried dependencies, which turns it into a versatile
cross-architecture modeling tool. We show runtime predictions for code on Intel
Cascade Lake, AMD Zen, and Marvell ThunderX2 micro-architectures based on
machine models from available documentation and semi-automatic benchmarking.
The predictions are compared with actual measurements.
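Critical-path prediction of this kind reduces to a longest-path computation
over the instruction dependency DAG. A generic sketch follows, with
illustrative latencies and dependencies, not OSACA's machine model or code.

```python
# Longest path (critical path) through an instruction dependency DAG.
# Nodes are instructions with latencies; edges are data dependencies.
from graphlib import TopologicalSorter

latency = {"load": 4, "mul": 4, "add": 1, "store": 3}   # cycles (illustrative)
deps = {"mul": {"load"}, "add": {"mul"}, "store": {"add"}}  # node -> predecessors

finish = {}
for node in TopologicalSorter(deps).static_order():
    start = max((finish[d] for d in deps.get(node, ())), default=0)
    finish[node] = start + latency[node]

print("critical path length:", max(finish.values()), "cycles")
```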
|
Multi-resolution lattice Green's function method for incompressible
flows | We propose a multi-resolution strategy that is compatible with the lattice
Green's function (LGF) technique for solving viscous, incompressible flows on
unbounded domains. The LGF method exploits the regularity of a finite-volume
scheme on a formally unbounded Cartesian mesh to yield robust and
computationally efficient solutions. The original method is spatially adaptive,
but challenging to integrate with embedded mesh refinement as the underlying
LGF is only defined for a fixed resolution. We present an ansatz for adaptive
mesh refinement, where the solutions to the pressure Poisson equation are
approximated using the LGF technique on a composite mesh constructed from a
series of infinite lattices of differing resolution. To solve the
incompressible Navier-Stokes equations, this is further combined with an
integrating factor for the viscous terms and an appropriate Runge-Kutta scheme
for the resulting differential-algebraic equations. The parallelized algorithm
is verified with numerical simulations of vortex rings, and the
collision of vortex rings at high Reynolds number is simulated to demonstrate
the reduction in computational cells achievable with both spatial and
refinement adaptivity.
|
Hierarchical Watermarking Framework Based on Analysis of Local
Complexity Variations | Increasing production and exchange of multimedia content has increased the
need for better protection of copyright by means of watermarking. Different
methods have been proposed to satisfy the tradeoff between imperceptibility and
robustness as two important characteristics in watermarking while maintaining
proper data-embedding capacity. Many watermarking methods use an
image-independent set of parameters, yet different images possess different
potentials for robust and
transparent hosting of watermark data. To overcome this deficiency, in this
paper we have proposed a new hierarchical adaptive watermarking framework. At
the higher level of hierarchy, complexity of an image is ranked in comparison
with complexities of images of a dataset. For a typical dataset of images, the
statistical distribution of block complexities is found. At the lower level of
the hierarchy, for a single cover image that is to be watermarked, complexities
of blocks can be found. Local complexity variation (LCV) between a block and its
neighbors is used to adaptively control the watermark strength factor of each
block. Such local complexity analysis creates an adaptive embedding scheme,
which results in higher transparency by reducing blockiness effects. This two
level hierarchy has enabled our method to take advantage of all image blocks to
elevate the embedding capacity while preserving imperceptibility. For testing
the effectiveness of the proposed framework, contourlet transform (CT) in
conjunction with discrete cosine transform (DCT) is used to embed pseudo-random
binary sequences as the watermark. Experimental results show that the proposed
framework improves the performance of the watermarking routine in terms of both
robustness and transparency.
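One plausible reading of the block-level analysis is sketched below, with
per-block variance standing in for complexity and a 3x3 neighborhood for the
LCV; both choices are assumptions for illustration, not the paper's exact
definitions.

```python
# Sketch: block complexity via variance, and an LCV-driven strength factor.
import numpy as np

def block_complexity(img, bs=8):
    """Per-block variance as a stand-in complexity measure (assumption)."""
    h, w = img.shape[0] // bs, img.shape[1] // bs
    blocks = img[:h * bs, :w * bs].reshape(h, bs, w, bs)
    return blocks.var(axis=(1, 3))

def lcv_strength(c, base=0.05):
    """Strength grows with a block's deviation from its 3x3 neighborhood mean."""
    pad = np.pad(c, 1, mode="edge")
    nbr = sum(pad[i:i + c.shape[0], j:j + c.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    return base * (1.0 + np.abs(c - nbr) / (nbr + 1e-9))

alpha = lcv_strength(block_complexity(np.random.rand(512, 512)))
```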
|
IRFL: Image Recognition of Figurative Language | Figures of speech such as metaphors, similes, and idioms are integral parts
of human communication. They are ubiquitous in many forms of discourse,
allowing people to convey complex, abstract ideas and evoke emotion. As
figurative forms are often conveyed through multiple modalities (e.g., both
text and images), understanding multimodal figurative language is an important
AI challenge, weaving together profound vision, language, commonsense and
cultural knowledge. In this work, we develop the Image Recognition of
Figurative Language (IRFL) dataset. We leverage human annotation and an
automatic pipeline we created to generate a multimodal dataset, and introduce
two novel tasks as a benchmark for multimodal figurative language
understanding. We experimented with state-of-the-art vision and language models
and found that the best (22%) performed substantially worse than humans (97%).
We release our dataset, benchmark, and code, in hopes of driving the
development of models that can better understand figurative language.
|
Software Test Automation Maturity -- A Survey of the State of the
Practice | The software industry has seen an increasing interest in test automation. In
this paper, we present a test automation maturity survey serving as a
self-assessment for practitioners. Based on the responses of 151 practitioners
from more than 101 organizations in 25 countries, we make observations
regarding the state of the practice of test automation maturity: a) The level
of test automation maturity in different organizations is differentiated by the
practices they adopt; b) Practitioners reported quite diverse situations with
respect to different practices, e.g., 85\% of practitioners agreed that their test
teams have enough test automation expertise and skills, while 47\% of
practitioners admitted that there is a lack of guidelines on designing and
executing automated tests; c) Some practices are strongly correlated and/or
closely clustered; d) The percentage of automated test cases and the use of
Agile and/or DevOps development models are good indicators for a higher test
automation maturity level; e) The roles of practitioners may affect response
variation, e.g., QA engineers give the most optimistic answers, consultants
give the most pessimistic answers. Our results give an insight into present
test automation processes and practices and indicate chances for further
improvement in the present industry.
|
A Rational Model of Large-Scale Motion in Turbulence | A rational theory is proposed to describe the large-scale motion in
turbulence. The fluid element with inner orientational structures is proposed
to be the building block of fluid dynamics. The variance of the orientational
structures then constitutes new fields suitable for describing the vortex
state in turbulence. When the fluid element is treated as an open subsystem, a
differentiable-manifold description of turbulence can be set up, and the
complete fluid dynamics can be deduced from a variational calculus on the
constructed Lagrangian dissipation energy density. The derived dynamical
equations indicate that the vortex evolution is naturally related to the
angular momentum balance.
|
Towards Generalization on Real Domain for Single Image Dehazing via
Meta-Learning | Learning-based image dehazing methods are essential to assist autonomous
systems in enhancing reliability. Due to the domain gap between synthetic and
real domains, the internal information learned from synthesized images is
usually sub-optimal in real domains, leading to a severe performance drop of
dehazing models. Driven by its ability to exploit internal information from
a few unseen-domain samples, meta-learning is commonly adopted to address this
issue via test-time training, which is hyperparameter-sensitive and
time-consuming. In contrast, we present a domain generalization framework based
on meta-learning to dig out representative and discriminative internal
properties of real hazy domains without test-time training. To obtain
representative domain-specific information, we attach two entities termed
adaptation network and distance-aware aggregator to our dehazing network. The
adaptation network assists in distilling domain-relevant information from a few
hazy samples and caching it into a collection of features. The distance-aware
aggregator strives to summarize the generated features and filter out
misleading information for more representative internal properties. To enhance
the discrimination of distilled internal information, we present a novel loss
function called domain-relevant contrastive regularization, which encourages
internal features generated from the same domain to be more similar and those
from diverse domains to be more distinct. The generated representative and
discriminative
features are regarded as some external variables of our dehazing network to
regress a particular and powerful function for a given domain. The extensive
experiments on real hazy datasets, such as RTTS and URHI, validate that our
proposed method has superior generalization ability to that of the state-of-the-art
competitors.
|
Differentially Private Inductive Miner | Protecting personal data about individuals, such as event traces in process
mining, is an inherently difficult task: an event trace leaks information about
the path in a process model that an individual has triggered. Yet, prior
anonymization methods of event traces like k-anonymity or event log
sanitization struggled to protect against such leakage, in particular against
adversaries with sufficient background knowledge. In this work, we provide a
method that tackles the challenge of summarizing sensitive event traces by
learning the underlying process tree in a privacy-preserving manner. We prove
via the so-called Differential Privacy (DP) property that from the resulting
summaries no useful inference can be drawn about any personal data in an event
trace. On the technical side, we introduce a differentially private
approximation (DPIM) of the Inductive Miner. Experimentally, we compare our
DPIM with the Inductive Miner on 8 real-world event traces by evaluating
well-known metrics: fitness, precision, simplicity, and generalization. The
experiments show that our DPIM not only protects personal data but also
generates faithful process trees that exhibit little utility loss compared to
the Inductive Miner.
|
Evolutionary Dynamics for Persistent Cooperation in Structured
Populations | The emergence and maintenance of cooperative behavior is a fascinating topic
in evolutionary biology and social science. The public goods game (PGG) is a
paradigm for exploring cooperative behavior. In PGG, the total resulting payoff
is divided equally among all participants. Unless the public good is
substantially magnified by a multiplying factor, this feature leads to the
dominance of defection. Much effort has been made to explain the evolution of
cooperative strategies, including a recent model in which only a portion of the
total benefit is shared by all the players through introducing a new strategy
named persistent cooperation. A persistent cooperator is a contributor who is
willing to pay a second cost to retrieve the remaining portion of the payoff
that they themselves contributed. In a previous study, this model was analyzed in the
framework of well-mixed populations. This paper focuses on persistent
cooperation in lattice-structured populations. The evolutionary
dynamics of the structured populations consisting of three types of competing
players (pure cooperators, defectors and persistent cooperators) are revealed
by theoretical analysis and numerical simulations. In particular, the
approximate expressions of fixation probabilities for strategies are derived on
one-dimensional lattices. The phase diagrams of stationary states, the
evolution of frequencies and spatial patterns for strategies are illustrated on
both one-dimensional and square lattices by simulations. Our results are
consistent with the general observation that, at least in most situations, a
structured population facilitates the evolution of cooperation. Specifically,
here we find that the existence of persistent cooperators greatly suppresses
the spreading of defectors under more relaxed conditions in structured
populations than in well-mixed populations.
|
Statistical Inference in the Differential Privacy Model | In modern settings of data analysis, we may be running our algorithms on
datasets that are sensitive in nature. However, classical machine learning and
statistical algorithms were not designed with these risks in mind, and it has
been demonstrated that they may reveal personal information. These concerns
disincentivize individuals from providing their data or, even worse,
encourage them to intentionally provide fake data.
To assuage these concerns, we import the constraint of differential privacy,
considered by many to be the gold standard of data privacy, into statistical
inference. This thesis aims to quantify the cost of ensuring differential
privacy, i.e., understanding how much additional data is required to perform
data analysis with the constraint of differential privacy. Despite the maturity
of the literature on differential privacy, there is still an inadequate
understanding of some of the most fundamental settings.
In particular, we make progress in the following problems:
$\bullet$ What is the sample complexity of DP hypothesis testing?
$\bullet$ Can we privately estimate distribution properties with a negligible
cost?
$\bullet$ What is the fundamental limit in private distribution estimation?
$\bullet$ How can we design algorithms to privately estimate random graphs?
$\bullet$ What is the trade-off between the sample complexity and the
interactivity in private hypothesis selection?
|
Effective Digital Image Watermarking in YCbCr Color Space Accompanied by
Presenting a Novel Technique Using DWT | In this paper, a quantization based watermark casting and blind watermark
retrieval algorithm operating in YCbCr color space using discrete wavelet
transform (DWT), for ownership verification and image authentication
applications is implemented. This method uses implicit visual masking by
inserting watermark bits into only the wavelet coefficients of high magnitude,
in the Y channel of YCbCr color space. A computationally efficient blind
watermark retrieval technique is devised that can detect the embedded
watermark without help from the original uncorrupted image. The new
watermarking algorithm combines and adapts various aspects from existing
watermarking methods. Experimental results show that the proposed technique to
embed watermark provides extra imperceptibility and robustness against various
signal processing attacks in comparison with the same technique in RGB color
space.
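A sketch of the embedding idea using PyWavelets: take the DWT of the Y channel
and quantize only the large-magnitude detail coefficients. The threshold,
quantization step, and choice of subband are illustrative assumptions, not the
paper's settings.

```python
# Sketch: QIM-style embedding into high-magnitude DWT coefficients of Y.
import numpy as np
import pywt

def embed(y_channel, bits, T=30.0, q=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(y_channel.astype(float), "haar")
    flat, k = cH.ravel(), 0
    for i in np.flatnonzero(np.abs(flat) > T):   # high-magnitude coeffs only
        if k == len(bits):
            break
        s = np.sign(flat[i])
        # Snap to a multiple of q for bit 0, to a half-step offset for bit 1.
        flat[i] = s * (q * (abs(flat[i]) // q) + 0.5 * q * bits[k])
        k += 1
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")
```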
|
Nonlinear interactions of ion acoustic waves explored using fast imaging
decompositions | Fast camera imaging is used to study ion acoustic waves propagating
azimuthally in a magnetized plasma column. The high speed image sequences are
analyzed using Proper Orthogonal Decomposition and 2D Fourier Transform,
allowing to evaluate the assets and differences of both decomposition
techniques. The spatio-temporal features of the waves are extracted from the
high speed images, and highlight energy exchanges between modes. Growth rates
of the modes are extracted from the reconstructed temporal evolution of the
modes, revealing the influence of ion-neutral collisions as pressure increases.
Finally, the nonlinear interactions between modes are extracted using
bicoherence computations, and show the importance of interactions between modes
with azimuthal wave numbers $m$, $m-1$ and $-1$, with $m$ an integer.
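As background, snapshot POD of an image sequence amounts to an SVD of the
mean-subtracted snapshot matrix; a generic sketch with placeholder frames
follows, not the authors' processing chain.

```python
# Snapshot POD of a fast-camera image sequence via SVD (generic sketch).
import numpy as np

frames = np.random.rand(500, 64, 64)        # placeholder: t x ny x nx images
X = frames.reshape(frames.shape[0], -1).T   # space x time snapshot matrix
X -= X.mean(axis=1, keepdims=True)          # remove the temporal mean

U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes = U[:, :4].T.reshape(4, 64, 64)       # leading spatial modes
amplitudes = S[:4, None] * Vt[:4]           # temporal coefficients per mode
print("energy fraction of first 4 modes:", (S[:4]**2).sum() / (S**2).sum())
```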
|
Demystifying CLIP Data | Contrastive Language-Image Pre-training (CLIP) is an approach that has
advanced research and applications in computer vision, fueling modern
recognition systems and generative models. We believe that the main ingredient
to the success of CLIP is its data and not the model architecture or
pre-training objective. However, CLIP only provides very limited information
about its data and how it has been collected, leading to works that aim to
reproduce CLIP's data by filtering with its model parameters. In this work, we
intend to reveal CLIP's data curation approach and, in our pursuit of making it
open to the community, introduce Metadata-Curated Language-Image Pre-training
(MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's
concepts) and yields a balanced subset over the metadata distribution. Our
experimental study rigorously isolates the model and training settings,
concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M
image-text data pairs outperforms CLIP's data on multiple standard benchmarks.
In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy,
surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining
the same training budget, attains 72.4%. Our observations hold across various
model sizes, exemplified by ViT-H achieving 80.5%, without any
bells-and-whistles. Curation code and the training data distribution over
metadata are made available at https://github.com/facebookresearch/MetaCLIP.
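A highly simplified sketch of the balancing idea: match texts to metadata
entries by substring and cap the number of pairs any single entry may keep, so
head entries do not dominate the tail. The matching rule and cap value are
illustrative assumptions, not the released curation code.

```python
# Sketch of metadata-balanced curation in the spirit described above.
import random
from collections import defaultdict

def curate(pairs, metadata, cap=20000, seed=0):
    """pairs: list of (text, image) tuples; metadata: list of concept strings."""
    random.seed(seed)
    matched = defaultdict(list)
    for idx, (text, _img) in enumerate(pairs):
        for entry in metadata:
            if entry in text:            # substring match against concepts
                matched[entry].append(idx)
    keep = set()
    for entry, idxs in matched.items():
        random.shuffle(idxs)
        keep.update(idxs[:cap])          # balance head vs. tail entries
    return [pairs[i] for i in sorted(keep)]
```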
|
Leveraging Large Language Models to Power Chatbots for Collecting User
Self-Reported Data | Large language models (LLMs) provide a new way to build chatbots by accepting
natural language prompts. Yet, it is unclear how to design prompts to power
chatbots to carry on naturalistic conversations while pursuing a given goal,
such as collecting self-report data from users. We explore what design factors
of prompts can help steer chatbots to talk naturally and collect data reliably.
To this aim, we formulated four prompt designs with different structures and
personas. Through an online study (N = 48) where participants conversed with
chatbots driven by different designs of prompts, we assessed how prompt designs
and conversation topics affected the conversation flows and users' perceptions
of chatbots. Our chatbots covered 79% of the desired information slots during
conversations, and the designs of prompts and topics significantly influenced
the conversation flows and the data collection performance. We discuss the
opportunities and challenges of building chatbots with LLMs.
|
Cooperative Beamforming for Dual-Hop Amplify-and-Forward Multi-Antenna
Relaying Cellular Networks | In this paper, linear beamforming design for amplify-and-forward relaying
cellular networks is considered, in which base station, relay station and
mobile terminals are all equipped with multiple antennas. The design is based
on minimum mean-square-error criterion, and both uplink and downlink scenarios
are considered. It is found that the downlink and uplink beamforming design
problems are in the same form, and iterative algorithms with the same structure
can be used to solve the design problems. For the specific cases of fully
loaded or overloaded uplink systems, a novel algorithm is derived and its
relationships with several existing beamforming design algorithms for
conventional MIMO or multiuser systems are revealed. Simulation results are
presented to demonstrate the performance advantage of the proposed design
algorithms.
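As background for the MMSE criterion that the iterative designs build on, here
is the textbook linear MMSE receive filter, a building block rather than the
proposed relay algorithm.

```python
# Generic linear MMSE receive filter for y = H s + n (textbook form).
import numpy as np

def mmse_filter(H, noise_var):
    """W such that s_hat = W.conj().T @ y minimizes E||s_hat - s||^2
    for unit-power symbols and noise variance noise_var."""
    Nr = H.shape[0]
    return np.linalg.solve(H @ H.conj().T + noise_var * np.eye(Nr), H)

H = (np.random.randn(4, 2) + 1j * np.random.randn(4, 2)) / np.sqrt(2)
W = mmse_filter(H, noise_var=0.1)
```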
|
MAC design for WiFi infrastructure networks: a game-theoretic approach | In WiFi networks, mobile nodes compete for accessing a shared channel by
means of a random access protocol called Distributed Coordination Function
(DCF). Although this protocol is in principle fair, since all the stations have
the same probability of transmitting on the channel, it has been shown that unfair
behaviors may emerge in actual networking scenarios because of non-standard
configurations of the nodes. Due to the proliferation of open source drivers
and programmable cards, enabling an easy customization of the channel access
policies, we propose a game-theoretic analysis of random access schemes.
Assuming that each node is rational and implements a best response strategy, we
show that efficient equilibria conditions can be reached when stations are
interested in both uploading and downloading traffic. More interestingly, these
equilibria are reached when all the stations play the same strategy, thus
guaranteeing a fair resource sharing. When stations are interested in upload
traffic only, we also propose a mechanism design, based on an artificial
dropping of layer-2 acknowledgments, to force desired equilibria. Finally, we
propose and evaluate some simple DCF extensions for practically implementing
our theoretical findings.
|
Monitoring the Evolution of Behavioural Embeddings in Social Media
Recommendation | Emerging short-video platforms like TikTok, Instagram Reels, and ShareChat
present unique challenges for recommender systems, primarily originating from a
continuous stream of new content. ShareChat alone receives approximately 2
million pieces of fresh content daily, complicating efforts to assess quality,
learn effective latent representations, and accurately match content with the
appropriate user base, especially given limited user feedback. Embedding-based
approaches are a popular choice for industrial recommender systems because they
can learn low-dimensional representations of items, leading to effective
recommendation that can easily scale to millions of items and users.
Our work characterizes the evolution of such embeddings in short-video
recommendation systems, comparing the effect of batch and real-time updates to
content embeddings. We investigate \emph{how} embeddings change with subsequent
updates, explore the relationship between embeddings and popularity bias, and
highlight their impact on user engagement metrics. Our study unveils the
contrast in the number of interactions needed to achieve mature embeddings in a
batch learning setup versus a real-time one, identifies the point of highest
information updates, and explores the distribution of $\ell_2$-norms across the
two competing learning modes. Utilizing a production system deployed on a
large-scale short-video app with over 180 million users, our findings offer
insights into designing effective recommendation systems and enhancing user
satisfaction and engagement in short-video applications.
|
Towards Zero-Shot Frame Semantic Parsing for Domain Scaling | State-of-the-art slot filling models for goal-oriented human/machine
conversational language understanding systems rely on deep learning methods.
While multi-task training of such models alleviates the need for large
in-domain annotated datasets, bootstrapping a semantic parsing model for a new
domain using only the semantic frame, such as the back-end API or knowledge
graph schema, is still one of the holy grail tasks of language understanding
for dialogue systems. This paper proposes a deep learning based approach that
can utilize only the slot description in context without the need for any
labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The
main idea of this paper is to leverage the encoding of the slot names and
descriptions within a multi-task deep learned slot filling model, to implicitly
align slots across domains. The proposed approach is promising for solving the
domain scaling problem and eliminating the need for any manually annotated data
or explicit schema alignment. Furthermore, our experiments on multiple domains
show that this approach results in significantly better slot-filling
performance when compared to using only in-domain data, especially in the low
data regime.
|
OCD-FL: A Novel Communication-Efficient Peer Selection-based
Decentralized Federated Learning | The conjunction of edge intelligence and the ever-growing Internet-of-Things
(IoT) network heralds a new era of collaborative machine learning, with
federated learning (FL) emerging as the most prominent paradigm. With the
growing interest in these learning schemes, researchers started addressing some
of their most fundamental limitations. Indeed, conventional FL with a central
aggregator presents a single point of failure and a network bottleneck. To
bypass this issue, decentralized FL where nodes collaborate in a peer-to-peer
network has been proposed. Despite the latter's efficiency, communication costs
and data heterogeneity remain key challenges in decentralized FL. In this
context, we propose a novel scheme, called opportunistic
communication-efficient decentralized federated learning (OCD-FL),
consisting of a systematic FL peer selection for collaboration, aiming to
achieve maximum FL knowledge gain while reducing energy consumption.
Experimental results demonstrate the capability of OCD-FL to achieve similar or
better performance than fully collaborative FL, while significantly
reducing consumed energy by at least 30% and up to 80%.
|
How Data Scientists Review the Scholarly Literature | Keeping up with the research literature plays an important role in the
workflow of scientists - allowing them to understand a field, formulate the
problems they focus on, and develop the solutions that they contribute, which
in turn shape the nature of the discipline. In this paper, we examine the
literature review practices of data scientists. Data science represents a field
seeing an exponential rise in papers, and increasingly drawing on and being
applied in numerous diverse disciplines. Recent efforts have seen the
development of several tools intended to help data scientists cope with a
deluge of research and coordinated efforts to develop AI tools intended to
uncover the research frontier. Despite these trends indicative of the
information overload faced by data scientists, no prior work has examined the
specific practices and challenges faced by these scientists in an
interdisciplinary field with evolving scholarly norms. In this paper, we close
this gap through a set of semi-structured interviews and think-aloud protocols
of industry and academic data scientists (N = 20). Our results, while
corroborating other knowledge workers' practices, uncover several novel
findings: individuals (1) are challenged in seeking and sensemaking of papers
beyond their disciplinary bubbles, (2) struggle to understand papers in the
face of missing details and mathematical content, (3) grapple with the deluge
by leveraging the knowledge context in code, blogs, and talks, and (4) lean on
their peers online and in-person. Furthermore, we outline future directions
likely to help data scientists cope with the burgeoning research literature.
|
Viscoelastic flow past an infinite plate with suction and constant heat
flux | While studying the viscoelastic flow past an infinite plate with suction and
constant heat flux between fluid and plate, Raptis and Tziyanidis gave the
solution of a pair of equations for velocity and temperature as functions of
distance. They then gave some approximate solutions. This letter shows that the
approximations are not justified and presents an exact analytical study.
|
Same, Same But Different - Recovering Neural Network Quantization Error
Through Weight Factorization | Quantization of neural networks has become common practice, driven by the
need for efficient implementations of deep neural networks on embedded devices.
In this paper, we exploit an oft-overlooked degree of freedom in most networks
- for a given layer, individual output channels can be scaled by any factor
provided that the corresponding weights of the next layer are inversely scaled.
Therefore, a given network has many factorizations which change the weights of
the network without changing its function. We present a conceptually simple and
easy to implement method that uses this property and show that proper
factorizations significantly decrease the degradation caused by quantization.
We show improvement on a wide variety of networks and achieve state-of-the-art
degradation results for MobileNets. While our focus is on quantization, this
type of factorization is applicable to other domains such as network-pruning,
neural nets regularization and network interpretability.
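The invariance being exploited is easy to state in code: scale a layer's
output channels by positive factors and inversely scale the next layer's
matching input weights; for positively homogeneous activations such as ReLU,
the network function is unchanged. A sketch for a Conv-Conv pair follows, with
assumed weight shapes.

```python
# Channel-wise rescaling that leaves the composed function unchanged.
import numpy as np

def rescale_pair(W1, b1, W2, s):
    """W1: (out, in, kh, kw), b1: (out,), W2: (out2, out, kh, kw), s > 0: (out,).
    Valid when the activation in between is positively homogeneous (e.g., ReLU)."""
    return (W1 * s[:, None, None, None],   # scale layer-1 output channels
            b1 * s,
            W2 / s[None, :, None, None])   # inverse-scale layer-2 inputs

W1, b1 = np.random.randn(16, 3, 3, 3), np.random.randn(16)
W2 = np.random.randn(32, 16, 3, 3)
s = np.sqrt(np.abs(W1).max(axis=(1, 2, 3)))   # one possible equalizing choice
W1e, b1e, W2e = rescale_pair(W1, b1, W2, s)
```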
|
Fetishizing Food in Digital Age: #foodporn Around the World | What food is so good as to be considered pornographic? Worldwide, the popular
#foodporn hashtag has been used to share appetizing pictures of peoples'
favorite culinary experiences. But social scientists ask whether #foodporn
promotes an unhealthy relationship with food, as pornography would contribute
to an unrealistic view of sexuality. In this study, we examine nearly 10
million Instagram posts by 1.7 million users worldwide. An overwhelming (and
uniform across the nations) obsession with chocolate and cake shows the
domination of sugary dessert over local cuisines. Yet, we find encouraging
traits in the association of emotion and health-related topics with #foodporn,
suggesting food can serve as motivation for a healthy lifestyle. Social
approval also favors the healthy posts, with users posting with healthy
hashtags having an average of 1,000 more followers than those with unhealthy
ones. Finally, we perform a demographic analysis which shows nation-wide trends
of behavior, such as a strong relationship (r=0.51) between a nation's GDP per
capita and its attention to the healthiness of its favorite food. Our results expose a
new facet of food "pornography", revealing potential avenues for utilizing this
precarious notion for promoting healthy lifestyles.
|
Graph link prediction in computer networks using Poisson matrix
factorisation | Graph link prediction is an important task in cyber-security: relationships
between entities within a computer network, such as users interacting with
computers, or system libraries and the corresponding processes that use them,
can provide key insights into adversary behaviour. Poisson matrix factorisation
(PMF) is a popular model for link prediction in large networks, particularly
useful for its scalability. In this article, PMF is extended to include
scenarios that are commonly encountered in cyber-security applications.
Specifically, an extension is proposed to explicitly handle binary adjacency
matrices and include known categorical covariates associated with the graph
nodes. A seasonal PMF model is also presented to handle seasonal networks. To
allow the methods to scale to large graphs, variational methods are discussed
for performing fast inference. The results show an improved performance over
the standard PMF model and other statistical network models.
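As background for the model being extended, a minimal sketch of how Poisson
matrix factorisation scores links, including the standard 1 - exp(-rate) link
for binary adjacency matrices; the dimensions and gamma priors here are
illustrative.

```python
# Core of Poisson matrix factorisation link scoring (generic sketch).
import numpy as np

n, m, k = 100, 80, 10
U = np.random.gamma(1.0, 0.1, size=(n, k))   # source-node latent factors
V = np.random.gamma(1.0, 0.1, size=(m, k))   # destination-node latent factors

rate = U @ V.T                   # Poisson rates for all node pairs
p_link = 1.0 - np.exp(-rate)     # P(at least one event): the natural link
                                 # probability for binary adjacency matrices
```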
|
The Google Similarity Distance | Words and phrases acquire meaning from the way they are used in society, from
their relative semantics to other words and phrases. For computers the
equivalent of `society' is `database,' and the equivalent of `use' is `way to
search the database.' We present a new theory of similarity between words and
phrases based on information distance and Kolmogorov complexity. To fix
thoughts we use the world-wide-web as database, and Google as search engine.
The method is also applicable to other search engines and databases. This
theory is then applied to construct a method to automatically extract
similarity, the Google similarity distance, of words and phrases from the
world-wide-web using Google page counts. The world-wide-web is the largest
database on earth, and the context information entered by millions of
independent users averages out to provide automatic semantics of useful
quality. We give applications in hierarchical clustering, classification, and
language translation. We give examples to distinguish between colors and
numbers, cluster names of paintings by 17th century Dutch masters and names of
books by English novelists, the ability to understand emergencies, and primes,
and we demonstrate the ability to do a simple automatic English-Spanish
translation. Finally, we use the WordNet database as an objective baseline
against which to judge the performance of our method. We conduct a massive
randomized trial in binary classification using support vector machines to
learn categories based on our Google distance, resulting in a mean agreement
of 87% with the expert crafted WordNet categories.
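The quantity at the heart of the method is the normalized Google distance,
computed from page counts f(x), f(y), f(x, y) and the (approximate) total
number of indexed pages N; the counts below are made up for illustration.

```python
# Normalized Google distance (NGD) from search-engine page counts.
from math import log

def ngd(fx, fy, fxy, N):
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(N) - min(lx, ly))

# Terms that frequently co-occur yield a small NGD.
print(ngd(fx=9_000_000, fy=8_500_000, fxy=6_000_000, N=8_000_000_000))
```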
|
Analogy-Making as a Core Primitive in the Software Engineering Toolbox | An analogy is an identification of structural similarities and
correspondences between two objects. Computational models of analogy making
have been studied extensively in the field of cognitive science to better
understand high-level human cognition. For instance, Melanie Mitchell and
Douglas Hofstadter sought to better understand high-level perception by
developing the Copycat algorithm for completing analogies between letter
sequences. In this paper, we argue that analogy making should be seen as a core
primitive in software engineering. We motivate this argument by showing how
complex software engineering problems such as program understanding and
source-code transformation learning can be reduced to an instance of the
analogy-making problem. We demonstrate this idea using Sifter, a new
analogy-making algorithm suitable for software engineering applications that
adapts and extends ideas from Copycat. In particular, Sifter reduces
analogy-making to searching for a sequence of update rule applications. Sifter
uses a novel representation for mathematical structures capable of effectively
representing the wide variety of information embedded in software. We conclude
by listing major areas of future work for Sifter and analogy-making in software
engineering.
|
The Impact of IMSI Catcher Deployments on Cellular Network Security:
Challenges and Countermeasures in 4G and 5G Networks | IMSI (International Mobile Subscriber Identity) catchers, also known as
"Stingrays" or "cell site simulators," are rogue devices that pose a
significant threat to cellular network security [1]. IMSI catchers can
intercept and manipulate cellular communications, compromising the privacy and
security of mobile devices and their users. With the advent of 4G and 5G
networks, IMSI catchers have become more sophisticated and pose new challenges
to cellular network security [2]. This paper provides an overview of the impact
of IMSI catcher deployments on cellular network security in the context of 4G
and 5G networks. It discusses the challenges posed by IMSI catchers, including
the unauthorized collection of IMSI numbers, interception of communications,
and potential misuse of subscriber information. It also highlights the
potential consequences of IMSI catcher deployments, including the compromise of
user privacy, financial fraud, and unauthorized surveillance. The paper further
reviews the countermeasures that can be employed to mitigate the risks posed by
IMSI catchers. These countermeasures include network-based solutions such as
signal analysis, encryption, and authentication mechanisms, as well as
user-based solutions such as mobile applications and device settings. The paper
also discusses the limitations and effectiveness of these countermeasures in
the context of 4G and 5G networks. Finally, the paper identifies research gaps
and future directions for enhancing cellular network security against IMSI
catchers in the era of 4G and 5G networks. This includes the need for improved
encryption algorithms, authentication mechanisms, and detection techniques to
effectively detect and prevent IMSI catcher deployments. The paper also
emphasizes the importance of regulatory and policy measures to govern the
deployment and use of IMSI catchers to protect user privacy and security.
|
Antithetic integral feedback for the robust control of monostable and
oscillatory biomolecular circuits | Biomolecular feedback systems are now a central application area of interest
within control theory. While classical control techniques provide invaluable
insight into the function and design of both natural and synthetic biomolecular
systems, there are certain aspects of biological control that have proven
difficult to analyze with traditional methods. To this end, we describe here
how the recently developed tools of dominance analysis can be used to gain
insight into the nonlinear behavior of the antithetic integral feedback
circuit, a recently discovered control architecture which implements integral
control of arbitrary biomolecular processes using a simple feedback mechanism.
We show that dominance theory can predict both monostability and periodic
oscillations in the circuit, depending on the corresponding parameters and
architecture. We then use the theory to characterize the robustness of the
asymptotic behavior of the circuit in a nonlinear setting.
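For reference, a minimal deterministic simulation of the antithetic integral
feedback motif: two controller species annihilate each other, and the output
adapts to the set-point mu/theta. The one-species plant and all parameter
values are illustrative assumptions, not the circuits studied in the paper.

```python
# Antithetic integral feedback: z1 and z2 annihilate at rate eta;
# at steady state the sensed output x settles at mu/theta.
from scipy.integrate import solve_ivp

mu, theta, eta = 2.0, 1.0, 100.0   # controller parameters (illustrative)
k, gamma = 1.0, 0.5                # illustrative one-species plant

def rhs(t, s):
    x, z1, z2 = s
    annihilation = eta * z1 * z2
    return [k * z1 - gamma * x,        # plant species, actuated by z1
            mu - annihilation,         # controller species z1
            theta * x - annihilation]  # controller species z2 (senses x)

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0, 0.0], method="LSODA", rtol=1e-8)
print("x at t=100:", sol.y[0, -1], "  set-point mu/theta =", mu / theta)
```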
|
Continuous control with deep reinforcement learning | We adapt the ideas underlying the success of Deep Q-Learning to the
continuous action domain. We present an actor-critic, model-free algorithm
based on the deterministic policy gradient that can operate over continuous
action spaces. Using the same learning algorithm, network architecture and
hyper-parameters, our algorithm robustly solves more than 20 simulated physics
tasks, including classic problems such as cartpole swing-up, dexterous
manipulation, legged locomotion and car driving. Our algorithm is able to find
policies whose performance is competitive with those found by a planning
algorithm with full access to the dynamics of the domain and its derivatives.
We further demonstrate that for many of the tasks the algorithm can learn
policies end-to-end: directly from raw pixel inputs.
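A minimal sketch of the two gradient steps at the core of such an actor-critic
method; the network interfaces, batch format, and hyper-parameters are
assumptions for illustration, not the paper's implementation.

```python
# DDPG-style update: critic fits a TD target from slowly-updated target
# networks; the actor follows the deterministic policy gradient.
import torch

def ddpg_update(actor, critic, actor_t, critic_t, batch,
                opt_actor, opt_critic, gamma=0.99, tau=0.005):
    s, a, r, s2, done = batch                       # tensors from a replay buffer
    with torch.no_grad():                           # TD target via target nets
        y = r + gamma * (1.0 - done) * critic_t(s2, actor_t(s2))
    critic_loss = ((critic(s, a) - y) ** 2).mean()  # fit Q(s, a) to the target
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    actor_loss = -critic(s, actor(s)).mean()        # deterministic policy gradient
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()

    for net, tgt in ((actor, actor_t), (critic, critic_t)):  # soft target updates
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1.0 - tau).add_(tau * p.data)
```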
|
Alternative construction of the closed form of the Green's function for
the wavized Maxwell fish-eye problem | In the recent paper [J.\ Phys.\ A 44 (2011) 065203], we have arrived at the
closed-form expression for the Green's function for the partial differential
operator describing propagation of a scalar wave in an $N$-dimensional
($N\geqslant2$) Maxwell fish-eye medium. The derivation has been based on
unique transformation properties of the fish-eye wave equation under the
hyperspherical inversion. In this communication, we arrive at the same
expression for the fish-eye Green's function following a different route. The
alternative derivation we present here exploits the fact that there is a close
mathematical relationship, through the stereographic projection, between the
wavized fish-eye problem in $\mathbb{R}^{N}$ and the problem of propagation of
scalar waves over the surface of the $N$-dimensional hypersphere.
|
Coordinated Path Following Control of Fixed-wing Unmanned Aerial
Vehicles | In this paper, we investigate the problem of coordinated path following for
fixed-wing UAVs with speed constraints in the 2D plane. The objective is to steer a
fleet of UAVs along the path(s) while achieving the desired sequenced inter-UAV
arc distance. In contrast to the previous coordinated path following studies,
we are able through our proposed hybrid control law to deal with the forward
speed and the angular speed constraints of fixed-wing UAVs. More specifically,
the hybrid control law makes all the UAVs work at two different levels: those
UAVs whose path following errors are within an invariant set (i.e., the
designed coordination set) work at the coordination level; and the other UAVs
work at the single-agent level. At the coordination level, we prove that even
with speed constraints, the proposed control law ensures that the path
following errors converge to zero, while the inter-UAV arc distances converge
to the desired values. At the single-agent level, the convergence analysis for the
path following error entering the coordination set is provided. We develop a
hardware-in-the-loop simulation testbed of the multi-UAV system by using actual
autopilots and the X-Plane simulator. The effectiveness of the proposed
approach is corroborated with both MATLAB and the testbed.
|
Relativistic relative velocities and relativistic acceleration | It turns out that the standard application of the four-vector SR formalism
does not include the concept of relative velocity. Only the absolute velocity
is described by a four-vector, and even the Lorentz transformation is
parameterized by a three-dimensional velocity.
This gap in the development of the SR formalism reflects the absence of
certain velocity subtraction operations. Applying these operations
differentially leads to a relativistic acceleration.
|
A locking-free DPG scheme for Timoshenko beams | We develop a discontinuous Petrov-Galerkin scheme with optimal test functions
(DPG method) for the Timoshenko beam bending model with various boundary
conditions, combining clamped, supported, and free ends. Our scheme
approximates the transverse deflection and bending moment. It converges
quasi-optimally in $L_2$ and is locking free. In particular, it behaves well
(converges quasi-optimally) in the limit case of the Euler-Bernoulli model.
Several numerical results illustrate the performance of our method.
|
How to evaluate word embeddings? On importance of data efficiency and
simple supervised tasks | Maybe the single most important goal of representation learning is making
subsequent learning faster. Surprisingly, this fact is not well reflected in
the way embeddings are evaluated. In addition, recent practice in word
embeddings points towards the importance of learning specialized
representations. We argue that the focus of word representation evaluation
should reflect these
trends and shift towards evaluating what useful information is easily
accessible. Specifically, we propose that evaluation should focus on data
efficiency and simple supervised tasks, where the amount of available data is
varied and scores of a supervised model are reported for each subset (as
commonly done in transfer learning).
In order to illustrate the significance of such analysis, a comprehensive
evaluation of selected word embeddings is presented. The proposed approach
yields a more complete picture and brings new insight into performance
characteristics; for instance, information about word similarity or analogy
tends to be non-linearly encoded in the embedding space, which calls into
question the cosine-based,
unsupervised, evaluation methods. All results and analysis scripts are
available online.
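A minimal sketch of the proposed protocol: fit a simple supervised probe on
top of fixed embeddings while varying the amount of labeled data, and report
the resulting data-efficiency curve (placeholder embeddings and task).

```python
# Data-efficiency evaluation of fixed embeddings with a simple probe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

emb = np.random.randn(5000, 300)          # placeholder word embeddings
labels = np.random.randint(0, 2, 5000)    # placeholder word-level task
X_tr, X_te, y_tr, y_te = train_test_split(emb, labels, random_state=0)

for n in (50, 200, 1000, len(X_tr)):      # vary the labeled-data budget
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
    print(f"n={n:5d}  acc={clf.score(X_te, y_te):.3f}")
```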
|
Deep Region Hashing for Efficient Large-scale Instance Search from
Images | Instance Search (INS) is a fundamental problem for many applications, while
it is more challenging compared to traditional image search since the
relevancy is defined at the instance level.
Existing works have demonstrated the success of many complex ensemble systems
that typically proceed by first generating object proposals and then
extracting handcrafted and/or CNN features of each proposal for matching.
However, object bounding box proposals and feature extraction are often
conducted in two separate steps, which undermines the effectiveness of these
methods. Also, due to the large number of generated proposals, matching speed
becomes the bottleneck that limits its application to large-scale datasets. To
tackle these issues, in this paper we propose an effective and efficient Deep
Region Hashing (DRH) approach for large-scale INS using an image patch as the
query. Specifically, DRH is an end-to-end deep neural network which consists of
object proposal, feature extraction, and hash code generation. DRH shares
full-image convolutional feature map with the region proposal network, thus
enabling nearly cost-free region proposals. Also, each high-dimensional,
real-valued region feature is mapped onto a low-dimensional, compact binary
code for efficient object region level matching on large-scale datasets.
Experimental results on four datasets show that our DRH can achieve even better
performance than the state of the art in terms of mAP, while the efficiency is
improved by nearly 100 times.
|
QMUL-SDS at SCIVER: Step-by-Step Binary Classification for Scientific
Claim Verification | Scientific claim verification is a unique challenge that is attracting
increasing interest. The SCIVER shared task offers a benchmark scenario to test
and compare claim verification approaches by participating teams, and consists
of three steps: relevant abstract selection, rationale selection and label
prediction. In this paper, we present team QMUL-SDS's participation in the
shared task. We propose an approach that performs scientific claim verification
by doing binary classifications step-by-step. We trained a BioBERT-large
classifier to select abstracts based on pairwise relevance assessments for each
<claim, title of the abstract> and continued to train it to select rationales
out of each retrieved abstract based on <claim, sentence>. We then propose a
two-step setting for label prediction, i.e. first predicting "NOT_ENOUGH_INFO"
or "ENOUGH_INFO", then label those marked as "ENOUGH_INFO" as either "SUPPORT"
or "CONTRADICT". Compared to the baseline system, we achieve substantial
improvements on the dev set. As a result, our team is the No. 4 team on the
leaderboard.
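A sketch of the two-step label prediction logic (with `ei_clf` and `support_clf` as hypothetical stand-ins for the fine-tuned BioBERT classifiers, each returning a probability):

    def predict_label(claim, rationale, ei_clf, support_clf, tau=0.5):
        if ei_clf(claim, rationale) < tau:        # step 1: enough info?
            return "NOT_ENOUGH_INFO"
        if support_clf(claim, rationale) >= tau:  # step 2: stance
            return "SUPPORT"
        return "CONTRADICT"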
|
The State of the Art in Enhancing Trust in Machine Learning Models with
the Use of Visualizations | Machine learning (ML) models are nowadays used in complex applications in
various domains, such as medicine, bioinformatics, and other sciences. Due to
their black box nature, however, it may sometimes be hard to understand and
trust the results they provide. This has increased the demand for reliable
visualization tools related to enhancing trust in ML models, which has become a
prominent topic of research in the visualization community over the past
decades. To provide an overview and present the frontiers of current research
on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in
ML models with the use of interactive visualization. We define and describe the
background of the topic, introduce a categorization for visualization
techniques that aim to accomplish this goal, and discuss insights and
opportunities for future research directions. Among our contributions is a
categorization of trust against different facets of interactive ML, expanded
and improved from previous research. Our results are investigated from
different analytical perspectives: (a) providing a statistical overview, (b)
summarizing key findings, (c) performing topic analyses, and (d) exploring the
data sets used in the individual papers, all with the support of an interactive
web-based survey browser. We intend this survey to be beneficial for
visualization researchers whose interests involve making ML models more
trustworthy, as well as researchers and practitioners from other disciplines in
their search for effective visualization techniques suitable for solving their
tasks with confidence and conveying meaning to their data.
|
Machine Learning Models Disclosure from Trusted Research Environments
(TRE), Challenges and Opportunities | Artificial intelligence (AI) applications in healthcare and medicine have
increased in recent years. To enable access to personal data, Trusted Research
environments (TREs) provide safe and secure environments in which researchers
can access sensitive personal data and develop Artificial Intelligence (AI) and
Machine Learning models. However, currently few TREs support the use of
automated AI-based modelling using Machine Learning. Early attempts have been
made in the literature to present and introduce privacy-preserving machine
learning from the design point of view [1]. However, there exists a gap in
practical decision-making guidance for TREs in handling model disclosure.
Specifically, the use of machine learning creates a need to disclose new types
of outputs from TREs, such as trained machine learning models. Although TREs
have clear policies for the disclosure of statistical outputs, the extent to
which trained models can leak personal training data once released is not well
understood and guidelines do not exist within TREs for the safe disclosure of
these models.
In this paper we introduce the challenge of disclosing trained machine
learning models from TREs. We first give an overview of machine learning models
in general and describe some of their applications in healthcare and medicine.
We define the main vulnerabilities of trained machine learning models in
general. We also describe the main factors affecting the vulnerabilities of
disclosing machine learning models. This paper also provides insights into, and
analyses of, methods that could be introduced within TREs to mitigate the risk
of privacy breaches when disclosing trained models.
|
Statistical Characteristics of the Electron Isotropy Boundary | Utilizing observations from the ELFIN satellites, we present a statistical
study of $\sim$2000 events in 2019-2020 characterizing the occurrence in
magnetic local time (MLT) and latitude of $\geq$50 keV electron isotropy
boundaries (IBs) at Earth, and the dependence of associated precipitation on
geomagnetic activity. The isotropy boundary for an electron of a given energy
is the magnetic latitude poleward of which persistent isotropized pitch-angle
distributions ($J_{prec}/J_{perp}\sim 1$) are first observed to occur,
interpreted as resulting from magnetic field-line curvature scattering (FLCS)
in the equatorial magnetosphere. We find that energetic electron IBs can be
well-recognized on the nightside from dusk until dawn, under all geomagnetic
activity conditions, with a peak occurrence rate of almost 90% near $\sim$22
hours in MLT, remaining above 80% from 21 to 01 MLT. The IBs span a wide range
of IGRF magnetic latitudes from $60^\circ$-$74^\circ$, with a maximum
occurrence between $66^\circ$-$71^\circ$ (L of 6-8), shifting to lower
latitudes and pre-midnight local times with activity. The precipitating energy
flux of $\geq$50 keV electrons averaged over the IB-associated latitudes varies
over four orders of magnitude, up to $\sim$1 erg/cm$^2$-s, and often includes
electron energies exceeding 1 MeV. The local time distributions of IB-associated
energies and precipitating fluxes also exhibit peak values near midnight for
low activity, shifting toward pre-midnight for elevated activity. The
percentage of the total energy deposited over the high-latitude regions
($55^\circ$ to $80^\circ$; or IGRF $L\gtrsim 3$) attributed to IBs is 10-20%,
on average, or about 10 MW of total atmospheric power input, but at times can
be up to $\sim$100% of the total $\geq$50 keV electron energy deposition over
the entire sub-auroral and auroral zone region, exceeding 1 GW in atmospheric
power input.
|
A Hybrid Submodular Optimization Approach to Controlled Islanding with
Post-Disturbance Stability Guarantees | Disturbances may create cascading failures in power systems and lead to
widespread blackouts. Controlled islanding is an effective approach to mitigate
cascading failures by partitioning the power system into a set of disjoint
islands. To retain the stability of the power system following disturbances,
the islanding strategy should not only be minimally disruptive, but also
guarantee post-disturbance stability. In this paper, we study the problem of
synthesizing post-disturbance stability-aware controlled islanding strategies.
To ensure post-disturbance stability, our computation of islanding strategies
takes load-generation balance and transmission line capacity constraints into
consideration, leading to a hybrid optimization problem with both discrete and
continuous variables. To mitigate the computational challenge incurred when
solving the hybrid optimization program, we propose the concepts of hybrid
submodularity and hybrid matroid. We show that the islanding problem is
equivalent to a hybrid matroid optimization program, whose objective function
is hybrid supermodular. Leveraging the supermodularity property, we develop an
efficient local search algorithm and show that the proposed algorithm achieves
a 1/2-optimality guarantee. We compare our approach with a baseline using a
mixed-integer linear program on the IEEE 118-bus, IEEE 300-bus, ActivSg 500-bus,
and Polish 2383-bus systems. Our results show that our approach outperforms the
baseline in terms of the total cost incurred during islanding across all test
cases. Furthermore, our proposed approach can find an islanding strategy for
large-scale test cases such as the Polish 2383-bus system, whereas the baseline
approach becomes intractable.
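A generic sketch of such a local search (not the paper's exact algorithm; `cost` and `feasible` are hypothetical stand-ins for the hybrid supermodular objective and the load-balance/line-capacity checks):

    # Move one bus at a time to the island that lowers the disruption
    # cost while the feasibility oracle still holds; stop at a local
    # optimum of the single-move neighbourhood.
    def local_search_islanding(buses, k, cost, feasible, assign):
        improved = True
        while improved:
            improved = False
            for b in buses:
                best_island, best_cost = assign[b], cost(assign)
                for island in range(k):
                    trial = dict(assign)
                    trial[b] = island
                    if feasible(trial) and cost(trial) < best_cost:
                        best_island, best_cost = island, cost(trial)
                if best_island != assign[b]:
                    assign[b] = best_island
                    improved = True
        return assign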
|
Graph and Network Theory for the analysis of Criminal Networks | Social Network Analysis is the use of Network and Graph Theory to study
social phenomena, which was found to be highly relevant in areas like
Criminology. This chapter provides an overview of key methods and tools that
may be used for the analysis of criminal networks, which are presented in a
real-world case study. Starting from available juridical acts, we have
extracted data on the interactions among suspects within two Sicilian Mafia
clans, obtaining two weighted undirected graphs. Then, we have investigated the
role of these weights in the criminal networks' properties, focusing on two
key features: weight distribution and shortest path length. We also present an
experiment that aims to construct an artificial network that mirrors criminal
behaviours. To this end, we have conducted a comparative degree distribution
analysis between the real criminal networks and some of the most popular
artificial network models: Watts-Strogatz, Erd\H{o}s-R\'{e}nyi, and
Barab\'{a}si-Albert, with some topology variations. This chapter will be a
valuable tool for researchers who wish to employ social network analysis within
their own area of interest.
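A sketch of the degree-distribution comparison using networkx (the edge-list file name is hypothetical; model parameters are matched to the real graph's size and density):

    import networkx as nx
    from collections import Counter

    def degree_histogram(G):
        return Counter(d for _, d in G.degree())

    G_real = nx.read_weighted_edgelist("mafia_clan.edges")  # hypothetical file
    n, m = G_real.number_of_nodes(), G_real.number_of_edges()
    models = {
        "Erdos-Renyi": nx.gnm_random_graph(n, m, seed=1),
        "Watts-Strogatz": nx.watts_strogatz_graph(n, max(2, (2 * m) // n), 0.1, seed=1),
        "Barabasi-Albert": nx.barabasi_albert_graph(n, max(1, m // n), seed=1),
    }
    print("real", degree_histogram(G_real))
    for name, G in models.items():
        print(name, degree_histogram(G))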
|
Polariton lasing in AlGaN microring with GaN/AlGaN quantum wells | Microcavity polaritons are strongly interacting hybrid light-matter
quasiparticles, which are promising for the development of novel light sources
and active photonic devices. Here, we report polariton lasing in the UV
spectral range in microring resonators based on GaN/AlGaN slab waveguides, with
experiments carried out from 4 K up to room temperature. Stimulated polariton
relaxation into multiple ring resonator modes is observed, which exhibit a
threshold-like dependence of the emission intensity on pulse energy. The
strong exciton-photon coupling regime is confirmed by the significant reduction
of the free spectral range with energy and the blueshift of the exciton-like
modes with increasing pulse energy. Importantly, the exciton emission shows no
broadening with power, further confirming that lasing is observed at
electron-hole densities well below the Mott transition. Overall, our work paves
the way towards the development of novel UV devices based on the high-speed slab
waveguide polariton geometry operating up to room temperature with potential to
be integrated into complex photonic circuits.
|
Semantic Map-based Generation of Navigation Instructions | We are interested in the generation of navigation instructions, either in
their own right or as training material for robotic navigation tasks. In this
paper, we propose a new approach to navigation instruction generation by
framing the problem as an image captioning task using semantic maps as visual
input. Conventional approaches employ a sequence of panorama images to generate
navigation instructions. Semantic maps abstract away from visual details and
fuse the information in multiple panorama images into a single top-down
representation, thereby reducing the computational complexity of processing the input.
We present a benchmark dataset for instruction generation using semantic maps,
propose an initial model and ask human subjects to manually assess the quality
of generated instructions. Our initial investigations show promise in using
semantic maps for instruction generation instead of a sequence of panorama
images, but there is vast scope for improvement. We release the code for data
preparation and model training at https://github.com/chengzu-li/VLGen.
|
Transmutation of Elements through Capture of Electrons by Nuclei | A proton can capture an electron and turn into a neutron provided the
electron has a kinetic energy of 0.782 MeV or more. An element of the Periodic
Table can change into another on being exposed to such high-energy electrons.
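As a check, the quoted 0.782 MeV threshold follows directly from the standard rest energies of the particles involved:

    % Threshold kinetic energy for e^- + p -> n + nu_e, from rest energies:
    \begin{align*}
      T_e^{\min} &= (m_n - m_p - m_e)\,c^2 \\
                 &= 939.565~\mathrm{MeV} - 938.272~\mathrm{MeV} - 0.511~\mathrm{MeV}
                  = 0.782~\mathrm{MeV}.
    \end{align*}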
|
Distributed Resource Allocation over Time-varying Balanced Digraphs with
Discrete-time Communication | This work is concerned with the problem of distributed resource allocation in
a continuous-time setting but with discrete-time communication over infinitely
jointly connected and balanced digraphs. We provide a passivity-based
perspective for the continuous-time algorithm, based on which an intermittent
communication scheme is developed. Particularly, a periodic communication
scheme is first derived through analyzing the passivity degradation over output
sampling of the distributed dynamics at each node. Then, an asynchronous
distributed event-triggered scheme is further developed. The sampling-based
event-triggered communication scheme is exempt from Zeno behavior, as the
minimum inter-event time is lower bounded by the sampling period. The
parameters in the proposed algorithm rely only on local information of each
individual node, so the algorithm can be designed in a truly distributed fashion.
|
Secured Distributed Cognitive MAC and Complexity Reduction in Channel
Estimation for the Cross Layer based Cognitive Radio Networks | Secured opportunistic Medium Access Control (MAC) and complexity reduction in
channel estimation are proposed for cross-layer-designed Cognitive Radio
Networks, deploying secured dynamic channel allocation based on endorsed
channel reservation. A channel endorsement and transmission policy is deployed
to optimize free channel selection as well as channel utilization for cognitive
radio users. This strategy provides a secured and reliable link to secondary
users, as well as a collision-free link to primary users, between the physical
and MAC layers, which yields better network performance. On the other hand,
Complexity Reduction in Minimum Mean Square Error (CR-MMSE) and Maximum
Likelihood (CR-ML) algorithms on Decision Directed Channel Estimation (DDCE)
are deployed to achieve computational complexity comparable to that of the
Least Squares (LS) method. Specifically, CR-MMSE for the sample-spaced channel
impulse response (SS-CIR) is implemented with a computationally efficient
treatment of the matrix inversion. Regarding CR-ML, Pilot Symbol Assisted
Modulation (PSAM) with DDCE is implemented such that the pilot symbol sequence
provides a significant performance gain in frequency correlation using the
finite delay spread. It is found that CR-MMSE demonstrates outstanding Symbol
Error Rate (SER) performance over MMSE and LS, and CR-ML over MMSE and ML.
|
Adversarial Regularizers in Inverse Problems | Inverse problems in medical imaging and computer vision are traditionally
solved using purely model-based methods. Among those, variational regularization
models are one of the most popular approaches. We propose a new framework for
applying data-driven approaches to inverse problems, using a neural network as
a regularization functional. The network learns to discriminate between the
distribution of ground truth images and the distribution of unregularized
reconstructions. Once trained, the network is applied to the inverse problem by
solving the corresponding variational problem. Unlike other data-based
approaches for inverse problems, the algorithm can be applied even if only
unsupervised training data is available. Experiments demonstrate the potential
of the framework for denoising on the BSDS dataset and for computed tomography
reconstruction on the LIDC dataset.
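A minimal sketch of the two phases (the tiny critic, the operator A, and all hyperparameters are illustrative placeholders; details such as the gradient penalty used to stabilize critic training are omitted):

    # (1) Train a critic to separate ground-truth images from unregularized
    # reconstructions; (2) reconstruct by descending ||A x - y||^2 + lam * critic(x).
    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 1))
    opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

    def train_step(x_true, x_noisy):  # batches of ground truth / unregularized recons
        loss = critic(x_noisy).mean() - critic(x_true).mean()  # WGAN-style critic loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    def reconstruct(A, y, lam=0.1, steps=200, lr=1e-2):
        x = torch.zeros(1, 64, 64, requires_grad=True)
        opt_x = torch.optim.SGD([x], lr=lr)
        for _ in range(steps):
            obj = ((A(x) - y) ** 2).sum() + lam * critic(x).mean()
            opt_x.zero_grad()
            obj.backward()
            opt_x.step()
        return x.detach()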
|
Nuclear polarization effects in atoms and ions | In heavy atoms and ions, nuclear structure effects are significantly enhanced
due to the overlap of the electron wave functions with the nucleus. This
overlap rapidly increases with the nuclear charge $Z$. We study the energy
level shifts induced by the electric dipole and electric quadrupole nuclear
polarization effects in atoms and ions with $Z \geq 20$. The electric dipole
polarization effect is enhanced by the nuclear giant dipole resonance. The
electric quadrupole polarization effect is enhanced because the electrons in a
heavy atom or ion move faster than the rotation of the deformed nucleus, thus
experiencing significant corrections to the conventional approximation in which
they `see' an averaged nuclear charge density. The electric nuclear
polarization effects are computed numerically for $1s$, $2s$, $2p_{1/2}$ and
high $ns$ electrons. The results are fitted with elementary functions of
nuclear parameters (nuclear charge, mass number, nuclear radius and
deformation). We construct an effective potential which models the energy level
shifts due to nuclear polarization. This effective potential, when added to the
nuclear Coulomb interaction, may be used to find energy level shifts in
multi-electron ions, atoms and molecules. The fitting functions and effective
potentials of the nuclear polarization effects are important for the studies of
isotope shifts and nonlinearity in the King plot which are now used to search
for new interactions and particles.
|
From diffusion experiments to mean-field theory simulations and back | Using previous experimental data of diffusion in metallic alloys, we obtain
real values for an interpolation parameter introduced in a mean-field theory
for diffusion with interaction. Values of order 1 were found as expected,
finding relevance for this quantity as a way to better understand the
underlying dynamics of diffusion processes. Furthermore, using this theory, we
are able to estimate the values of the mean-field potential from experimental
data. As a final test, we reobtain, with all this information as an input to
our simulations, the diffusion coefficient in the studied metallic alloys.
Therefore, the method provides appropriate transition probabilities to perform
Monte Carlo simulations that correctly describe the out-of-equilibrium
behavior.
|
A comparison of cluster algorithms as applied to unsupervised surveys | When considering answering important questions with data, unsupervised data
offers extensive insight opportunities and unique challenges. This study
considers student survey data with the specific goal of clustering students
into like groups, with the underlying aim of identifying different poverty
levels. Fuzzy logic is employed during the data cleaning and organizing phase,
helping to create a logical dependent variable for analysis comparison. Using multiple
data reduction techniques, the survey was reduced and cleaned. Finally,
multiple clustering techniques (k-means, k-modes, and hierarchical clustering)
are applied and compared. Though each method has strengths, the goal was to
identify which was most viable when applied to survey data and specifically
when trying to identify the most impoverished students.
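A sketch of the comparison (assuming the cleaned survey is a numeric matrix X and y_proxy is the fuzzy-logic-derived poverty label; k-modes for categorical answers would come from the separate kmodes package):

    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.metrics import adjusted_rand_score

    def compare(X, y_proxy, k=3):
        labels = {
            "k-means": KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
            "hierarchical": AgglomerativeClustering(n_clusters=k).fit_predict(X),
        }
        # agreement of each clustering with the poverty-level proxy
        return {name: adjusted_rand_score(y_proxy, lab) for name, lab in labels.items()}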
|
Complexity results for two kinds of colored disconnections of graphs | The concept of rainbow disconnection number of graphs was introduced by
Chartrand et al. in 2018. Inspired by this concept, we put forward the concepts
of rainbow vertex-disconnection and proper disconnection in graphs. In this
paper, we first show that it is $NP$-complete to decide whether a given
edge-colored graph $G$ with maximum degree $\Delta(G)=4$ is proper
disconnected. Then, for a graph $G$ with $\Delta(G)\leq 3$ we show that
$pd(G)\leq 2$ and determine the graphs with $pd(G)=1$ and $2$, respectively.
Furthermore, we show that for a general graph $G$, deciding whether $pd(G)=1$
is $NP$-complete, even if $G$ is bipartite. We also show that it is
$NP$-complete to decide whether a given vertex-colored graph $G$ is rainbow
vertex-disconnected, even though the graph $G$ has $\Delta(G)=3$ or is
bipartite.
|
Massive RF Simulation Applied to School Connectivity in Malawi | Providing Internet connectivity to schools has been identified as paramount
for development, for instance in the Giga project, cosponsored by ITU and
UNICEF, with the goal of connecting every school to the Internet by 2030. For a
country-wide deployment, it is imperative to perform thorough planning of the
whole installation, using radio frequency (RF) propagation models. While
statistical models based on empirical RF propagation data gathered in different
scenarios can be employed, for point to point links at microwave frequencies
the existence of a clear line of sight (LOS) is normally a prerequisite. The
Irregular Terrain Model, which makes use of digital elevation maps (DEM), has
proved quite effective for simulating point-to-point links, but its application
to a great number of links becomes time-consuming, so we have developed an
automated framework to perform this task. As a case study, we have applied this
framework in the planning of a project aimed at providing connectivity to
primary and secondary schools all over the country of Malawi.
|
Adaptive Point-to-Multipoint Transmission for Multimedia Broadcast
Multicast Services in LTE | This paper investigates point-to-multipoint (PTM) transmission supporting
adaptive modulation and coding (AMC) as well as retransmissions based on
incremental redundancy. In contrast to the classical PTM transmission which was
introduced by the Multimedia Broadcast Multicast Service (MBMS), the
adaptiveness requires user-individual feedback channels that allow the
receivers to report their radio conditions and send positive or negative
acknowledgments (ACK/NACK) for a Layer 1 transport block to the eNodeB. In this
work, an adaptive PTM scheme based on feedback from multiple users is presented
and evaluated. Furthermore, a simple NACK-oriented feedback mechanism is
introduced to relieve the feedback channel that is used in the uplink. Finally,
the performance of different single-cell MBMS transmission modes is evaluated
by dynamic radio network simulations. It is shown that adaptive PTM
transmission outperforms the conventional MBMS configurations in terms of radio
resource consumption and user satisfaction rate.
|
Visible and Ultraviolet Laser Spectroscopy of ThF | The molecular ion ThF$^+$ is the species to be used in the next-generation
search for the electron's Electric Dipole Moment (eEDM) at JILA. The
measurement requires creating molecular ions in the eEDM sensitive state, the
rovibronic ground state $^3\Delta_1$, $v^+=0$, $J^+=1$. Survey spectroscopy of
neutral ThF is required to identify an appropriate intermediate state for a
Resonance Enhanced Multi-Photon Ionization (REMPI) scheme that will create ions
in the required state. We perform broadband survey spectroscopy (from 13000 to
44000~cm$^{-1}$) of ThF using both Laser Induced Fluorescence (LIF) and $1+1'$
REMPI spectroscopy. We observe and assign 345 previously unreported vibronic
bands of ThF. We demonstrate 30\% efficiency in the production of ThF$^+$ ions
in the eEDM sensitive state using the $\Omega = 3/2$ [32.85] intermediate
state. In addition, we propose a method to increase the aforementioned
efficiency to $\sim$100\% by using vibrational autoionization via
core-nonpenetrating Rydberg states, and discuss theoretical and experimental
challenges. Finally, we also report 83 vibronic bands of an impurity species,
ThO.
|
Testing Bipartiteness of Geometric Intersection Graphs | We show how to test the bipartiteness of an intersection graph of n line
segments or simple polygons in the plane, or of balls in R^d, in time O(n log
n). More generally we find subquadratic algorithms for connectivity and
bipartiteness testing of intersection graphs of a broad class of geometric
objects. For unit balls in R^d, connectivity testing has equivalent randomized
complexity to construction of Euclidean minimum spanning trees, and hence is
unlikely to be solved as efficiently as bipartiteness testing. For line
segments or planar disks, testing k-colorability of intersection graphs for k>2
is NP-complete.
|
An automated parameter domain decomposition approach for gravitational
wave surrogates using hp-greedy refinement | We introduce hp-greedy, a refinement approach for building gravitational wave
surrogates as an extension of the standard reduced basis framework. Our
proposal is data-driven, with a domain decomposition of the parameter space,
local reduced basis, and a binary tree as the resulting structure, which are
obtained in an automated way. When compared to the standard global reduced
basis approach, the numerical simulations of our proposal show three salient
features: i) representations of lower dimension with no loss of accuracy, ii) a
significantly higher accuracy for a fixed maximum dimensionality of the basis,
in some cases by orders of magnitude, and iii) results that depend on the
reduced basis seed choice used by the refinement algorithm. We first illustrate
the key parts of our approach with a toy model and then present a more
realistic use case of gravitational waves emitted by the collision of two
spinning, non-precessing black holes. We discuss performance aspects of
hp-greedy, such as overfitting with respect to the depth of the tree structure,
and other hyperparameter dependences. As two direct applications of the
proposed hp-greedy refinement, we envision: i) a further acceleration of
statistical inference, which might be complementary to focused reduced-order
quadratures, and ii) the search of gravitational waves through clustering and
nearest neighbors.
|
Adversarial Risk Bounds for Neural Networks through Sparsity based
Compression | Neural networks have been shown to be vulnerable to minor adversarial
perturbations of their inputs, especially for high dimensional data under
$\ell_\infty$ attacks. To combat this problem, techniques like adversarial
training have been employed to obtain models which are robust on the training
set. However, the robustness of such models against adversarial perturbations
may not generalize to unseen data. To study how robustness generalizes, recent
works assume that the inputs have bounded $\ell_2$-norm in order to bound the
adversarial risk for $\ell_\infty$ attacks with no explicit dimension
dependence. In this work we focus on $\ell_\infty$ attacks on $\ell_\infty$
bounded inputs and prove margin-based bounds. Specifically, we use a
compression based approach that relies on efficiently compressing the set of
tunable parameters without distorting the adversarial risk. To achieve this, we
apply the concept of effective sparsity and effective joint sparsity on the
weight matrices of neural networks. This leads to bounds with no explicit
dependence on either the input dimension or the number of classes. Our
results show that neural networks with approximately sparse weight matrices not
only enjoy enhanced robustness, but also better generalization.
|
Re-Invoke: Tool Invocation Rewriting for Zero-Shot Tool Retrieval | Recent advances in large language models (LLMs) have enabled autonomous
agents with complex reasoning and task-fulfillment capabilities using a wide
range of tools. However, effectively identifying the most relevant tools for a
given task becomes a key bottleneck as the toolset size grows, hindering
reliable tool utilization. To address this, we introduce Re-Invoke, an
unsupervised tool retrieval method designed to scale effectively to large
toolsets without training. Specifically, we first generate a diverse set of
synthetic queries that comprehensively cover different aspects of the query
space associated with each tool document during the tool indexing phase.
Second, we leverage the LLM's query understanding capabilities to extract key
tool-related context and underlying intents from user queries during the
inference phase. Finally, we employ a novel multi-view similarity ranking
strategy based on intents to pinpoint the most relevant tools for each query.
Our evaluation demonstrates that Re-Invoke significantly outperforms
state-of-the-art alternatives in both single-tool and multi-tool scenarios, all
within a fully unsupervised setting. Notably, on the ToolE datasets, we achieve
a 20% relative improvement in nDCG@5 for single-tool retrieval and a 39%
improvement for multi-tool retrieval.
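A sketch of intent-based multi-view ranking in this spirit (with `embed` as a hypothetical text encoder; not the paper's exact scoring function):

    # Each tool is indexed by embeddings of its synthetic queries; a user
    # query is reduced to extracted intents, and a tool's score is its
    # best intent-to-synthetic-query similarity.
    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def rank_tools(intents, tool_index, embed):
        # tool_index: {tool_name: [synthetic query strings]}
        scores = {}
        for tool, queries in tool_index.items():
            q_embs = [embed(q) for q in queries]
            scores[tool] = max(cosine(embed(i), qe) for i in intents for qe in q_embs)
        return sorted(scores, key=scores.get, reverse=True)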
|
A Survey of Challenges for Runtime Verification from Advanced
Application Domains (Beyond Software) | Runtime verification is an area of formal methods that studies the dynamic
analysis of execution traces against formal specifications. Typically, the two
main activities in runtime verification efforts are the process of creating
monitors from specifications, and the algorithms for the evaluation of traces
against the generated monitors. Other activities involve the instrumentation of
the system to generate the trace and the communication between the system under
analysis and the monitor. Most of the applications in runtime verification have
been focused on the dynamic analysis of software, even though there are many
more potential applications to other computational devices and target systems.
In this paper we present a collection of challenges for runtime verification
extracted from concrete application domains, focusing on the difficulties that
must be overcome to tackle these specific challenges. The computational models
that characterize these domains require new techniques to be devised beyond the
current state of the art in runtime verification.
|
Low-threshold Optical Parametric Oscillations in a Whispering Gallery
Mode Resonator | In whispering gallery mode (WGM) resonators light is guided by continuous
total internal reflection along a curved surface. Fabricating such resonators
from an optically nonlinear material, one takes advantage of their exceptionally
high quality factors and small mode volumes to achieve extremely efficient
optical frequency conversion. Our analysis of the phase matching conditions for
optical parametric down conversion (PDC) in a spherical WGM resonator shows
their direct relation to the sum rules for photons' angular momenta and
predicts a very low parametric oscillations threshold. We realized such an
optical parametric oscillator (OPO) based on naturally phase-matched PDC in
Lithium Niobate. We demonstrated a single-mode, strongly non-degenerate OPO
with a threshold of 6.7 $\mu$W and a linewidth under 10 MHz. This work
demonstrates the remarkable capabilities of WGM-based OPOs and opens
perspectives for their applications in quantum and nonlinear optics,
particularly for the generation of squeezed light.
|
On the KZ Reduction | The Korkine-Zolotareff (KZ) reduction is one of the often used reduction
strategies for lattice decoding. In this paper, we first investigate some
important properties of KZ reduced matrices. Specifically, we present a linear
upper bound on the Hermit constant which is around $\frac{7}{8}$ times of the
existing sharpest linear upper bound, and an upper bound on the KZ constant
which is {\em polynomially} smaller than the existing sharpest one. We also
propose upper bounds on the lengths of the columns of KZ reduced matrices, and
an upper bound on the orthogonality defect of KZ reduced matrices which are
even {\em polynomially and exponentially} smaller than those of boosted KZ
reduced matrices, respectively. Then, we derive upper bounds on the magnitudes
of the entries of any solution of a shortest vector problem (SVP) when its
basis matrix is LLL reduced. These upper bounds are useful for analyzing the
complexity and understanding numerical stability of the basis expansion in a KZ
reduction algorithm. Finally, we propose a new KZ reduction algorithm by
modifying the commonly used Schnorr-Euchner search strategy for solving SVPs
and the basis expansion method proposed by Zhang {\em et al.} Simulation
results show that the new KZ reduction algorithm is much faster and more
numerically reliable than the KZ reduction algorithm proposed by Zhang {\em et
al.}, especially when the basis matrix is ill conditioned.
|
Designing the Topology of Graph Neural Networks: A Novel Feature Fusion
Perspective | In recent years, Graph Neural Networks (GNNs) have shown superior performance
on diverse real-world applications. To improve the model capacity, besides
designing aggregation operations, GNN topology design is also very important.
In general, there are two mainstream GNN topology design manners. The first is
to stack aggregation operations to obtain higher-level features, but this
easily suffers performance drops as the network goes deeper. The second
utilizes multiple aggregation operations in each layer, which provides an
adequate and independent feature extraction stage on local neighbors but is
costly for obtaining higher-level information. To enjoy the benefits while
alleviating the corresponding deficiencies of these two manners, we learn to
design the topology of GNNs from a novel feature fusion perspective, dubbed
F$^2$GNN. To be specific, we provide a feature fusion perspective in designing
GNN topology and propose a novel framework to unify the existing topology
designs with feature selection and fusion strategies. Then we develop a neural
architecture search method on top of the unified framework which contains a set
of selection and fusion operations in the search space and an improved
differentiable search algorithm. The performance gains on eight real-world
datasets demonstrate the effectiveness of F$^2$GNN. We further conduct
experiments to show that F$^2$GNN can improve the model capacity while
alleviating the deficiencies of existing GNN topology design manners,
especially alleviating the over-smoothing problem, by utilizing different
levels of features adaptively.
|
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks | Deep Neural Networks (DNNs) are known to be vulnerable to both backdoor and
adversarial attacks. In the literature, these two types of attacks are commonly
treated as distinct robustness problems and solved separately, since they
belong to training-time and inference-time attacks, respectively. However, this
paper reveals that there is an intriguing connection between them: (1)
planting a backdoor into a model will significantly affect the model's
adversarial examples; (2) for an infected model, its adversarial examples have
similar features as the triggered images. Based on these observations, a novel
Progressive Unified Defense (PUD) algorithm is proposed to defend against
backdoor and adversarial attacks simultaneously. Specifically, our PUD has a
progressive model purification scheme to jointly erase backdoors and enhance
the model's adversarial robustness. At the early stage, the adversarial
examples of infected models are utilized to erase backdoors. With the backdoor
gradually erased, our model purification can naturally turn into a stage to
boost the model's robustness against adversarial attacks. Besides, our PUD
algorithm can effectively identify poisoned images, which allows the initial
extra dataset not to be completely clean. Extensive experimental results show
that our discovered connection between backdoor and adversarial attacks is
ubiquitous, regardless of the type of backdoor attack. The proposed PUD
outperforms state-of-the-art backdoor defenses, including the model
repairing-based and data filtering-based methods. Besides, it also has the
ability to compete with the most advanced adversarial defense methods.
|
Nonlinear MHD modeling of soft $\beta$ limits in W7-AS | An important question for the outlook of stellarator reactors is their
robustness against pressure driven modes, and the underlying mechanism behind
experimentally observed soft $\beta$ limits. Towards building a robust answer
to these questions, simulation studies are presented using a recently derived
reduced nonlinear MHD model. First, the initial model implementation is
extended to capture fluid compression by including the influence of parallel
flows. Linear benchmarks of a (2, 1) tearing mode in W7-AS geometry, and
interchange modes in a finite $\beta$, net-zero current carrying stellarator
with low magnetic shear are then used to demonstrate the modeling capabilities.
Finally, a validation study is conducted on experimental reconstructions of
finite $\beta$ W7-AS discharges. In agreement with past experimental analysis,
it is shown that (i) the MHD activity is resistive, (ii) a soft $\beta$ limit
is observed, when the plasma resistivity approaches the estimated experimental
value, and (iii) low $n$ MHD activity is observed at intermediate $\beta$
values, particularly a nonlinearly dominant (2, 1) mode. The MHD activity is
mild, explaining the soft $\beta$ limit, because the plasma volume remains
separated into distinct sub-volumes in which field lines are ergodically
confined. For the assumed transport parameters, the enhanced perpendicular
transport along stochastic magnetic field lines can be overcome with the
experimental heating power. The limitations in the current modeling are
described, alongside an outlook for characterising soft $\beta$ limits in more
detail in future work.
|
Kinematic Basis of Emergent Energetics of Complex Dynamics | Stochastic kinematic description of a complex dynamics is shown to dictate an
energetic and thermodynamic structure. An energy function $\varphi(x)$ emerges
as the limit of the generalized, nonequilibrium free energy of a Markovian
dynamics with vanishing fluctuations. In terms of the $\nabla\varphi$ and its
orthogonal field $\gamma(x)\perp\nabla\varphi$, a general vector field $b(x)$
can be decomposed into $-D(x)\nabla\varphi+\gamma$, where
$\nabla\cdot\big(\omega(x)\gamma(x)\big)=$ $-\nabla\omega D(x)\nabla\varphi$.
The matrix $D(x)$ and scalar $\omega(x)$, two additional characteristics to the
$b(x)$ alone, represent the local geometry and density of states intrinsic to
the statistical motion in the state space at $x$. $\varphi(x)$ and $\omega(x)$
are interpreted as the emergent energy and degeneracy of the motion, with an
energy balance equation $d\varphi(x(t))/dt=\gamma D^{-1}\gamma-bD^{-1}b$,
reflecting the geometrical $\|D\nabla\varphi\|^2+\|\gamma\|^2=\|b\|^2$. The
partition function employed in statistical mechanics and J. W. Gibbs' method of
ensemble change naturally arise; a fluctuation-dissipation theorem is
established via the two leading-order asymptotics of entropy production as
$\epsilon\to 0$. The present theory provides a mathematical basis for P. W.
Anderson's emergent behavior in the hierarchical structure of complexity
science.
|
A Systematic Mapping Study on Testing of Machine Learning Programs | We aim to conduct a systematic mapping in the area of testing ML programs. We
identify, analyze and classify the existing literature to provide an overview
of the area. We followed well-established guidelines of systematic mapping to
develop a systematic protocol to identify and review the existing literature.
We formulate three sets of research questions, define inclusion and exclusion
criteria and systematically identify themes for the classification of existing
techniques. We also report the quality of the published works using established
assessment criteria. We finally selected 37 papers out of 1654 based on our
selection criteria up to January 2019. We analyze trends such as contribution
facet, research facet, test approach, type of ML and the kind of testing with
several other attributes. We also discuss the empirical evidence and reporting
quality of selected papers. The data from the study is made publicly available
for other researchers and practitioners. We present an overview of the area by
answering several research questions. The area is growing rapidly; however,
there is a lack of empirical evidence to compare and assess the effectiveness
of the techniques. More publicly available tools are required for use by
practitioners and researchers. Further attention is needed on non-functional
testing and testing of ML programs using reinforcement learning. We believe
that this study can help researchers and practitioners to obtain an overview of
the area and identify several sub-areas where more research is required.
|
A Network Perspective on Software Modularity | Modularity is a desirable characteristic for software systems. In this
article we propose to use a quantitative method from complex network sciences
to estimate the coherence between the modularity of the dependency network of
large open source Java projects and their decomposition in terms of Java
packages. The results presented in this article indicate that our methodology
offers a promising and reasonable quantitative approach with potential impact
on software engineering processes.
|
Entropic effects on the structure of Lennard-Jones clusters | We examine in detail the causes of the structural transitions that occur for
those small Lennard-Jones clusters that have a non-icosahedral global minima.
Based on the principles learned from these examples we develop a method to
construct structural phase diagrams that show in a coarse-grained manner how
the equilibrium structure of large clusters depends on both size and
temperature. The method can be augmented to account for anharmonicity and
quantum effects. Our results illustrate that the vibrational entropy can play a
crucial role in determining the equilibrium structure of a cluster.
|
Neural Network Pruning by Cooperative Coevolution | Neural network pruning is a popular model compression method which can
significantly reduce the computing cost with negligible loss of accuracy.
Recently, filters are often pruned directly by designing proper criteria or
using auxiliary modules to measure their importance, which, however, requires
expertise and trial-and-error. Due to the advantage of automation, pruning by
evolutionary algorithms (EAs) has attracted much attention, but the performance
is limited for deep neural networks as the search space can be quite large. In
this paper, we propose a new filter pruning algorithm CCEP by cooperative
coevolution, which prunes the filters in each layer by EAs separately. That is,
CCEP reduces the pruning space by a divide-and-conquer strategy. The
experiments show that CCEP can achieve a competitive performance with the
state-of-the-art pruning methods, e.g., prune ResNet56 for $63.42\%$ FLOPs on
CIFAR10 with $-0.24\%$ accuracy drop, and ResNet50 for $44.56\%$ FLOPs on
ImageNet with $0.07\%$ accuracy drop.
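An illustrative cooperative-coevolution loop in the spirit of CCEP (the `evaluate` callback, which would fine-tune and score a masked network, and all hyperparameters are hypothetical):

    # One set of binary filter masks per layer; each layer is evolved in
    # turn while the other layers keep their current best masks
    # (divide-and-conquer over the pruning space).
    import random

    def ccep(layer_sizes, evaluate, gens=10, pop=8, flip=0.05):
        best = [[1] * n for n in layer_sizes]        # start unpruned
        best_score = evaluate(best)
        for _ in range(gens):
            for li, n in enumerate(layer_sizes):     # one EA per layer
                for _ in range(pop):
                    cand = [b if random.random() > flip else 1 - b for b in best[li]]
                    trial = best[:li] + [cand] + best[li + 1:]
                    score = evaluate(trial)
                    if score > best_score:           # keep improving masks
                        best[li], best_score = cand, score
        return best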
|
Simulations in statistical physics and biology: some applications | One of the most active areas of physics in the last decades has been that of
critical phenomena, and Monte Carlo simulations have played an important role
as a guide for the validation and prediction of system properties close to the
critical points. The kinds of phase transitions occurring for the Betts lattice
(a lattice constructed by removing 1/7 of the sites from the triangular
lattice) have been studied before with the Potts model for q=3, in both the
ferromagnetic and antiferromagnetic regimes. Here, we add to this research line
the ferromagnetic cases q=4 and 5. In the first case, the critical exponents are
estimated for the second order transition, whereas for the latter case the
histogram method is applied for the occurring first order transition.
Additionally, Domany's Monte Carlo-based clustering technique, mainly used to
group genes with similar expression levels, is reviewed. Finally, a control
theory tool --an adaptive observer-- is applied to estimate the exponent
parameter involved in the well-known Gompertz curve. By treating all these
subjects our aim is to stress the importance of cooperation between distinct
disciplines in addressing the complex problems arising in biology.
Contents: Chapter 1: Monte Carlo simulations in statistical physics; Chapter 2:
Monte Carlo simulations in biology; Chapter 3: Gompertz equation.
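For reference, a Metropolis sketch for the q-state ferromagnetic Potts model on a square lattice (the Betts-lattice runs above instead remove 1/7 of the sites from a triangular lattice; the update rule is the same):

    import numpy as np

    def metropolis_potts(L=32, q=4, beta=1.0, sweeps=100, seed=0):
        rng = np.random.default_rng(seed)
        s = rng.integers(q, size=(L, L))
        for _ in range(sweeps * L * L):
            i, j = rng.integers(L), rng.integers(L)
            new = rng.integers(q)
            nb = [s[(i + 1) % L, j], s[(i - 1) % L, j],
                  s[i, (j + 1) % L], s[i, (j - 1) % L]]
            # with H = -J sum delta(s_i, s_j), J = 1:
            dE = sum(n == s[i, j] for n in nb) - sum(n == new for n in nb)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = new
        return s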
|
On the finite element approximation for fractional fast diffusion
equations | Considering fractional fast diffusion equations on bounded open polyhedral
domains in $\mathbb{R}^N$, we give a fully Galerkin approximation of the
solutions by $C^0$-piecewise linear finite elements in space and backward Euler
discretization in time, a priori estimates and the rates of convergence for the
approximate solutions are proved, which extends the results of \emph{Carsten
Ebmeyer and Wen Bin Liu, SIAM J. Numer. Anal., 46(2008), pp. 2393--2410}. We
also generalize the a priori estimates and the rates of convergence to a
parabolic integral equation under the framework of \emph{Qiang Du, Max
Gunzburger, Richard B. Lehoucq and Kun Zhou, SIAM Rev., 54 (2012), no. 4, pp.
667--696.}
|
A Critical Note on the Evaluation of Clustering Algorithms | Experimental evaluation is a major research methodology for investigating
clustering algorithms and many other machine learning algorithms. For this
purpose, a number of benchmark datasets have been widely used in the literature
and their quality plays a key role in the value of the research work. However,
in most of the existing studies, little attention has been paid to the
properties of the datasets, and they are often regarded as black-box problems.
For example, it is common to use datasets intended for classification in
clustering research and assume class labels as the ground truth for judging
the quality of clustering. In our work, with the help of advanced
visualization and dimension reduction techniques, we show that this practice
may seriously compromise the research quality and produce misleading results.
We suggest that the applicability of existing benchmark datasets should be
carefully revisited and significant efforts need to be devoted to improving the
current practice of experimental evaluation of clustering algorithms to ensure
an essential match between algorithms and problems.
|
Challenges of GPT-3-based Conversational Agents for Healthcare | The potential to provide patients with faster information access while
allowing medical specialists to concentrate on critical tasks makes medical
domain dialog agents appealing. However, the integration of large language
models (LLMs) into these agents presents certain limitations that may result in
serious consequences. This paper investigates the challenges and risks of using
GPT-3-based models for medical question-answering (MedQA). We perform several
evaluations contextualized in terms of standard medical principles. We provide
a procedure for manually designing patient queries to stress-test high-risk
limitations of LLMs in MedQA systems. Our analysis reveals that LLMs fail to
respond adequately to these queries, generating erroneous medical information,
unsafe recommendations, and content that may be considered offensive.
|
A calendar Quipu of the early 17th century and its relationship with the
Inca astronomy | The so-called Miccinelli documents are a set of documents which were written
by Jesuit scholars in Peru within the first half of the 17th century. Among
such documents, one contains the depiction of a Quipu, that is, a device made
out of cords of different nature and colors which, with the help of nodes, were
used by the Incas for storing data. This Quipu is claimed by the author, Blas
Valera, to be a reproduction of the Inca calendar of the year of the Spanish
conquest. We give here a complete analysis of the astronomical events that
occurred in Cusco in that year, showing that they actually correspond closely
to the data reported in the Quipu, and compare the calendrical information - such as
the names and the rituals of each month - with those given by other documents,
especially the Nuova Coronica by G. Poma de Ayala. The possible relevance of
the document for the knowledge of the original Inca lore of the sky is
discussed in detail.
|
Learning Multivariate CDFs and Copulas using Tensor Factorization | Learning the multivariate distribution of data is a core challenge in
statistics and machine learning. Traditional methods aim for the probability
density function (PDF) and are limited by the curse of dimensionality. Modern
neural methods are mostly based on black-box models, lacking identifiability
guarantees. In this work, we aim to learn multivariate cumulative distribution
functions (CDFs), as they can handle mixed random variables, allow efficient
box probability evaluation, and have the potential to overcome local sample
scarcity owing to their cumulative nature. We show that any grid sampled
version of a joint CDF of mixed random variables admits a universal
representation as a naive Bayes model via the Canonical Polyadic (tensor-rank)
decomposition. By introducing a low-rank model, either directly in the raw data
domain, or indirectly in a transformed (Copula) domain, the resulting model
affords efficient sampling, closed-form inference and uncertainty
quantification, and comes with uniqueness guarantees under relatively mild
conditions. We demonstrate the superior performance of the proposed model in
several synthetic and real datasets and applications including regression,
sampling and data imputation. Interestingly, our experiments with real data
show that it is possible to obtain better density/mass estimates indirectly via
a low-rank CDF model, than a low-rank PDF/PMF model.
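A sketch of the grid-sampled CDF idea (grid choice and rank are illustrative; uses tensorly's parafac for the Canonical Polyadic fit):

    # Estimate F(x1,...,xN) on a grid from samples, then fit a low-rank
    # CP model, whose factors give the naive-Bayes-style representation.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    def grid_cdf(samples, grids):
        # samples: (n, N); grids: list of N sorted 1-D arrays of cut points
        shape = tuple(len(g) for g in grids)
        F = np.zeros(shape)
        for idx in np.ndindex(shape):
            cuts = np.array([grids[d][i] for d, i in enumerate(idx)])
            F[idx] = np.mean(np.all(samples <= cuts, axis=1))
        return F

    samples = np.random.default_rng(0).normal(size=(5000, 3))
    grids = [np.linspace(-2, 2, 9)] * 3
    F = grid_cdf(samples, grids)
    weights, factors = parafac(tl.tensor(F), rank=4)  # low-rank CDF model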
|
Connectivity of Underlay Cognitive Radio Networks with Directional
Antennas | In cognitive radio networks (CRNs), the connectivity of secondary users (SUs)
is difficult to guarantee due to the existence of primary users (PUs). Most
prior studies only consider cognitive radio networks equipped with
omni-directional antennas, which cause high interference at SUs. We name such CRNs
with omni-directional antennas as Omn-CRNs. Compared with an omni-directional
antenna, a directional antenna can concentrate the transmitting/receiving
capability at a certain direction, consequently resulting in less interference.
In this paper, we investigate the connectivity of SUs in CRNs with directional
antennas (named as Dir-CRNs). In particular, we derive closed-form expressions
of the connectivity of SUs of both Dir-CRNs and Omn-CRNs, thus enabling
tractability. We show that the connectivity of SUs is mainly affected by two
constraints: the spectrum availability of SUs and the topological connectivity
of SUs. Extensive simulations validate the accuracy of our proposed models.
Meanwhile, we also show that Dir-CRNs can have higher connectivity than
Omn-CRNs mainly due to the lower interference, the higher spectrum availability
and the higher topological connectivity brought by directional antennas.
|
Throughput Enhancement of Multicarrier Cognitive M2M Networks:
Universal-Filtered OFDM Systems | We consider a cognitive radio network consisting of a primary cellular system
and a secondary cognitive machine-to-machine (M2M) system, and study the
throughput enhancement problem of the latter system employing
universal-filtered orthogonal frequency division multiplexing (UF-OFDM)
modulation. The downlink transmission capacity of the cognitive M2M system is
thereby maximized, while keeping the interference introduced to the primary
users (PUs) below a pre-specified threshold, under the total transmit power
budget of the secondary base station (SBS). The performance of the UF-OFDM based CR
system is compared to the performances of OFDM-based and filter bank
multicarrier (FBMC)-based CR systems. We also propose a near-optimal resource
allocation method separating the subband and power allocation. The solution is
less complex compared to optimization of the original combinatorial problem. We
present numerical results that show that for given interference thresholds of
the PUs and the maximum transmit power limit of the SBS, the UF-OFDM based CR
system exhibits intermediate performance in terms of achievable capacity
compared to OFDM and FBMC-based CR systems. Interestingly, for a certain degree
of robustness of the PUs, UF-OFDM performs equally well as FBMC.
Furthermore, the percentage rate-gain of the UF-OFDM based CR system increases
by a large amount when UF-OFDM modulation with lower sidelobe ripple is
employed. Numerical results also show that the proposed throughput enhancing
method, despite having lower computational complexity than the optimal
solution, achieves near-optimal performance.
|
Photonic Band Structure of Two-dimensional Atomic Lattices | Two-dimensional atomic arrays exhibit a number of intriguing quantum optical
phenomena, including subradiance, nearly perfect reflection of radiation and
long-lived topological edge states. Studies of emission and scattering of
photons in such lattices require complete treatment of the radiation pattern
from individual atoms, including long-range interactions. We describe a
systematic approach to perform the calculations of collective energy shifts and
decay rates in the presence of such long-range interactions for arbitrary
two-dimensional atomic lattices. As applications of our method, we investigate
the topological properties of atomic lattices both in free-space and near
plasmonic surfaces.
|
UAV-Enhanced Combination to Application: Comprehensive Analysis and
Benchmarking of a Human Detection Dataset for Disaster Scenarios | Unmanned aerial vehicles (UAVs) have revolutionized search and rescue (SAR)
operations, but the lack of specialized human detection datasets for training
machine learning models poses a significant challenge.To address this gap, this
paper introduces the Combination to Application (C2A) dataset, synthesized by
overlaying human poses onto UAV-captured disaster scenes. Through extensive
experimentation with state-of-the-art detection models, we demonstrate that
models fine-tuned on the C2A dataset exhibit substantial performance
improvements compared to those pre-trained on generic aerial datasets.
Furthermore, we highlight the importance of combining the C2A dataset with
general human datasets to achieve optimal performance and generalization across
various scenarios. This points to the crucial need for a tailored dataset to
enhance the effectiveness of SAR operations. Our contributions also include
developing a dataset creation pipeline and integrating diverse human pose and
disaster scene information to assess the severity of disaster scenarios. Our
findings advocate for future developments to ensure that SAR operations
benefit from the most realistic and effective AI-assisted interventions
possible.
|
Compliant Fluidic Control Structures: Concept and Synthesis Approach | The concept and synthesis approach for planar Compliant Fluidic Control
Structures (CFCSs), monolithic flexible continua with embedded functional
pores, is presented in this manuscript. Such structures are envisioned to find
application in biomedicine as tunable microfluidic devices for drug/nutrient
delivery. The functional pores enlarge and/or contract upon deformation of the
compliant structure in response to external stimuli, facilitating the regulated
control of fluid/nutrient/drug transport. A topology optimization problem based
on thickness design variables is formulated to generate effective designs of
these structures. An objective based on hydraulic diameter(s) is
conceptualized, and it is extremized using a gradient-based optimizer. Both
geometrical and material nonlinearities are considered. The nonlinear behaviour
of the employed hyperelastic material is modeled via the Arruda-Boyce
constitutive material model. Large-displacement finite element analysis is
performed using the updated Lagrangian formulation in a plane-stress setting. The proposed
synthesis approach is applied to various CFCSs for a variety of fluidic control
functionalities. The optimized designs of various CFCSs with single and/or
multiple functional pores are fabricated via a Polydimethylsiloxane (PDMS) soft
lithography process, using a high-precision 3D-printed mold, and their
performances are compared with the numerical predictions.
|
Protecting Locks Against Unbalanced Unlock() | The lock is a building-block synchronization primitive that enables mutually
exclusive access to shared data in shared-memory parallel programs. Mutual
exclusion is typically achieved by guarding the code that accesses the shared
data with a pair of lock() and unlock() operations. Concurrency bugs arise when
this ordering of operations is violated. In this paper, we study a particular
pattern of misuse where an unlock() is issued without first issuing a lock(),
which can happen in code with complex control flow. This misuse is surprisingly
common in several important open-source repositories we study. We
systematically study what happens due to this misuse in several popular locking
algorithms. We study how misuse can be detected and how the locking protocols
can be fixed to avoid the unwanted consequences of misuse. Most locks require
simple changes to detect and prevent this misuse. We evaluate the performance
traits of modified implementations, which show mild performance penalties in
most scalable locks.
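An illustrative sketch of the misuse pattern (not the paper's protocol): a
thin Python wrapper that detects an unlock() issued without a matching
lock(). Python's threading.Lock already raises RuntimeError when an unlocked
lock is released; the wrapper makes the check explicit and owner-aware.

    import threading

    class CheckedLock:
        def __init__(self):
            self._lock = threading.Lock()
            self._owner = None  # thread id of the current holder, if any

        def lock(self):
            self._lock.acquire()
            self._owner = threading.get_ident()

        def unlock(self):
            if self._owner != threading.get_ident():
                # Unbalanced or foreign unlock(): report it instead of
                # silently corrupting the lock state.
                raise RuntimeError("unlock() without a matching lock() by this thread")
            self._owner = None
            self._lock.release()

    l = CheckedLock()
    try:
        l.unlock()  # misuse: no prior lock()
    except RuntimeError as e:
        print("detected:", e)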
|
Change Point Models for Real-time Cyber Attack Detection in Connected
Vehicle Environment | Connected vehicle (CV) systems are cognizant of potential cyber attacks
because of increasing connectivity between their different components such as
vehicles, roadside infrastructure, and traffic management centers. However, it
is a challenge to detect security threats in real-time and develop appropriate
or effective countermeasures for a CV system because of the dynamic behavior of
such attacks, high computational power requirement, and a historical data
requirement for training detection models. To address these challenges,
statistical models, especially change point models, have potential for
real-time anomaly detection. Thus, the objective of this study is to
investigate the efficacy of two change point models, Expectation Maximization
(EM) and two forms of Cumulative Summation (CUSUM) algorithms (i.e., typical
and adaptive), for real-time V2I cyber attack detection in a CV environment. To
prove their efficacy, we evaluated these two models for three different types
of cyber attacks, denial of service (DOS), impersonation, and
false information, using basic safety messages (BSMs) generated from CVs
through simulation. Results from numerical analysis revealed that EM, CUSUM,
and adaptive CUSUM could detect these cyber attacks, DOS, impersonation, and
false information, with an accuracy of (99%, 100%, 100%), (98%, 10%, 100%), and
(100%, 98%, 100%) respectively.
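To make the detection principle concrete, the following is a minimal generic
one-sided CUSUM sketch in Python; the allowance k, threshold h, and the
BSM-derived statistic are illustrative assumptions, and the paper's typical
and adaptive variants may be parameterized differently.

    # Generic one-sided CUSUM change point detector (illustrative only).
    # xs: stream of a BSM-derived statistic (e.g., speed residuals);
    # k: allowance (slack); h: decision threshold -- both hypothetical here.
    def cusum(xs, mean, k=0.5, h=5.0):
        s = 0.0
        for t, x in enumerate(xs):
            s = max(0.0, s + (x - mean - k))  # accumulate upward drift only
            if s > h:
                return t  # first index at which a change is declared
        return None

    normal = [0.1, -0.2, 0.0, 0.3, -0.1]
    attack = [2.5, 3.1, 2.8, 3.4]  # shift injected by false information
    print(cusum(normal + attack, mean=0.0))  # detects shortly after the shift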
|
The Golden Rule as a Heuristic to Measure the Fairness of Texts Using
Machine Learning | In this paper we present a natural language programming framework to consider
how the fairness of acts can be measured. For the purposes of the paper, a fair
act is defined as one that one would be accepting of if it were done to
oneself. The approach is based on an implementation of the golden rule (GR) in
the digital domain. Despite the GR's prevalence as an axiom throughout history,
no transfer of this moral philosophy into computational systems exists. In this
paper we consider how to algorithmically operationalise this rule so that it
may be used to measure sentences such as "the boy harmed the girl" and
categorise them as fair or unfair. A review of, and reply to, criticisms of the GR
is made. A suggestion of how the technology may be implemented to avoid unfair
biases in word embeddings is made - given that individuals would typically not
wish to be on the receiving end of an unfair act, such as racism, irrespective
of whether the corpus being used deems such discrimination as praiseworthy.
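One toy way to see how such a measurement could be operationalised is to
reverse the roles in a sentence and test whether the act would be accepted if
done to oneself; the sketch below does this with a placeholder verb lexicon
and acceptance rule, which stand in for, and are not, the paper's method.

    # Toy golden-rule check: swap agent and patient, then apply a
    # placeholder acceptance test (a harmful-verb lexicon).
    HARMFUL = {"harmed", "hit", "insulted"}

    def is_fair(sentence):
        agent, verb, _article, patient = sentence.split()
        reversed_act = f"{patient} {verb} the {agent}"
        would_accept = verb not in HARMFUL  # placeholder acceptance test
        return would_accept, reversed_act

    fair, reversed_act = is_fair("boy harmed the girl")
    print(fair)          # False: one would not accept being harmed
    print(reversed_act)  # girl harmed the boy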
|
An efficient active-stress electromechanical isogeometric shell model
for muscular thin film simulations | We propose an isogeometric approach to model the deformation of active thin
films using layered, nonlinear, Kirchhoff Love shells. Isogeometric Collocation
and Galerkin formulations are employed to discretize the electrophysiological
and mechanical sub-problems, respectively, with the possibility to adopt
different element and time-step sizes. Numerical tests illustrate the
capabilities of the active-stress-based approach to effectively simulate the
contraction of thin films in both quasi-static and dynamic conditions.
|
Size does not matter -- in the virtual world. Comparing online social
networking behaviour with business success of entrepreneurs | We explore what benefits network position in online business social networks
like LinkedIn might confer on an aspiring entrepreneur. We compare two network
attributes, size and embeddedness, and two actor attributes, location and
diversity, between virtual and real-world networks. The promise of social
networks like LinkedIn is that network friends enable easier access to critical
resources such as legal and financial services, customers, and business
partners. Our setting consists of one million public member profiles of the
German business networking site XING (a German version of LinkedIn) from which
we extracted the network structure of 15,000 start-up entrepreneurs from 12
large German universities. We find no positive effect of virtual network size
and embeddedness, and small positive effects of location and diversity.
|
The complexity of simulating local measurements on quantum systems | An important task in quantum physics is the estimation of local quantities
for ground states of local Hamiltonians. Recently, [Ambainis, CCC 2014] defined
the complexity class P^QMA[log], and motivated its study by showing that the
physical task of estimating the expectation value of a local observable against
the ground state of a local Hamiltonian is P^QMA[log]-complete. In this paper,
we continue the study of P^QMA[log], obtaining the following lower and upper
bounds.
Lower bounds (hardness results): (1) The P^QMA[log]-completeness result of
[Ambainis, CCC 2014] requires O(log n)-local observables and Hamiltonians. We
show that simulating even a single qubit measurement on ground states of
5-local Hamiltonians is P^QMA[log]-complete, resolving an open question of
Ambainis. (2) We formalize the complexity theoretic study of estimating
two-point correlation functions against ground states, and show that this task
is similarly P^QMA[log]-complete. (3) We identify a flaw in [Ambainis, CCC
2014] regarding a P^UQMA[log]-hardness proof for estimating spectral gaps of
local Hamiltonians. By introducing a "query validation" technique, we build on
[Ambainis, CCC 2014] to obtain P^UQMA[log]-hardness for estimating spectral
gaps under polynomial-time Turing reductions.
Upper bounds (containment in complexity classes): P^QMA[log] is thought of as
"slightly harder" than QMA. We justify this formally by exploiting the
hierarchical voting technique of [Beigel, Hemachandra, Wechsung, SCT 1989] to
show that P^QMA[log] is in PP. This improves upon the known containment of QMA in PP [Kitaev,
Watrous, STOC 2000].
This work contributes a rigorous treatment of the subtlety involved in
studying oracle classes in which the oracle solves a promise problem. This is
particularly relevant for quantum complexity theory, where most natural classes
such as BQP and QMA are defined as promise classes.
|
Comparison of Multi-Class and Binary Classification Machine Learning
Models in Identifying Strong Gravitational Lenses | Typically, binary classification lens-finding schemes are used to
discriminate between lens candidates and non-lenses. However, these models
often suffer from substantial false-positive classifications. Such false
positives frequently occur due to images containing objects such as crowded
sources, galaxies with arms, and also images with a central source and smaller
surrounding sources. Therefore, a model might confuse the stated circumstances
with an Einstein ring. It has been proposed that by allowing such commonly
misclassified image types to constitute their own classes, machine learning
models will more easily be able to learn the difference between images that
contain real lenses, and images that contain lens imposters. Using Hubble Space
Telescope (HST) images, in the F814W filter, we compare the usage of binary and
multi-class classification models applied to the lens finding task. From our
findings, we conclude there is not a significant benefit to using the
multi-class model over a binary model. We also present the results of a
simple lens search using a multi-class machine learning model, and potential
new lens candidates.
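Schematically, the contrast discussed above amounts to widening the
classifier head so that common impostor types become their own classes; the
backbone feature size and class list below are illustrative assumptions, not
the networks trained in the paper.

    # Binary vs multi-class classifier heads (PyTorch sketch).
    import torch.nn as nn

    FEATURES = 512  # assumed backbone feature dimension
    binary_head = nn.Linear(FEATURES, 2)  # lens vs non-lens

    # Multi-class: frequent false-positive morphologies get their own classes.
    classes = ["lens", "crowded_sources", "spiral_arms",
               "central_source_with_companions"]
    multiclass_head = nn.Linear(FEATURES, len(classes))
    # At search time the non-lens classes can be collapsed back into a
    # single "not a lens" decision.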
|
Fluctuations of Power versus Energy for Random Fields Near a Perfectly
Conducting Boundary | The standard deviations of the energy and Poynting power densities for an
isotropic random field near a perfectly conducting planar boundary are
characterized, based on quartic plane-wave expansions. For normal and
transverse components, different rates of decay exist as a function of
electrical distance from the boundary. At large distances, the envelopes for
the power are more strongly damped than for the energy, both showing inverse
power law decay. The decay for the standard deviation is generally one order
faster than for the corresponding mean. For the normally directed power flux,
its standard deviation near the boundary increases linearly with distance. The
relative uncertainty of the scalar power is much smaller than for the Poynting
power. Poynting's theorem for standard deviations is obtained and demonstrates
larger standard deviations of the energy imbalance and power flux than their
mean values.
|
Multilevel ensemble Kalman filtering for spatio-temporal processes | We design and analyse the performance of a multilevel ensemble Kalman filter
method (MLEnKF) for filtering settings where the underlying state-space model
is an infinite-dimensional spatio-temporal process. We consider underlying
models that need to be simulated by numerical methods, with discretization in
both space and time. The multilevel Monte Carlo (MLMC) sampling strategy,
achieving variance reduction through pairwise coupling of ensemble particles on
neighboring resolutions, is used in the sample-moment step of MLEnKF to produce
an efficient hierarchical filtering method for spatio-temporal models. Under
sufficient regularity, MLEnKF is proven to be more efficient for weak
approximations than EnKF, asymptotically in the large-ensemble and
fine-numerical-resolution limit. Numerical examples support our theoretical
findings.
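The variance-reduction mechanism can be sketched in a few lines: the quantity
of interest is written as a telescoping sum over resolution levels, and each
correction term is averaged over coarse/fine sample pairs driven by common
randomness. The simulate() model below is a stand-in for a discretized
spatio-temporal model, not the filtering setting analysed in the paper.

    # Multilevel Monte Carlo telescoping estimator with pairwise coupling.
    import random

    def simulate(level, seed):
        # Proxy for a numerical discretization: finer levels (larger
        # `level`) add smaller corrections for the same driving noise.
        random.seed(seed)
        return random.gauss(1.0, 1.0) + 2.0 ** (-level) * random.gauss(0.0, 1.0)

    def mlmc(max_level, samples_per_level):
        estimate = 0.0
        for l in range(max_level + 1):
            m = samples_per_level[l]
            corr = 0.0
            for i in range(m):
                seed = 100003 * l + i  # shared seed couples the pair
                fine = simulate(l, seed)
                coarse = simulate(l - 1, seed) if l > 0 else 0.0
                corr += fine - coarse
            estimate += corr / m
        return estimate

    print(mlmc(3, [1000, 400, 160, 64]))  # approximates the mean, about 1.0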
|
INN-PAR: Invertible Neural Network for PPG to ABP Reconstruction | Non-invasive and continuous blood pressure (BP) monitoring is essential for
the early prevention of many cardiovascular diseases. Estimating arterial blood
pressure (ABP) from photoplethysmography (PPG) has emerged as a promising
solution. However, existing deep learning approaches for PPG-to-ABP
reconstruction (PAR) encounter certain information loss, impacting the
precision of the reconstructed signal. To overcome this limitation, we
introduce an invertible neural network for PPG to ABP reconstruction (INN-PAR),
which employs a series of invertible blocks to jointly learn the mapping
between the PPG signal and its gradient and the ABP signal and its gradient. INN-PAR
efficiently captures both forward and inverse mappings simultaneously, thereby
preventing information loss. By integrating signal gradients into the learning
process, INN-PAR enhances the network's ability to capture essential
high-frequency details, leading to more accurate signal reconstruction.
Moreover, we propose a multi-scale convolution module (MSCM) within the
invertible block, enabling the model to learn features across multiple scales
effectively. Experiments on two benchmark datasets show that
INN-PAR significantly outperforms the state-of-the-art methods in both waveform
reconstruction and BP measurement accuracy.
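A minimal example of why an invertible design avoids information loss is the
additive coupling block below: its forward map can be undone exactly. This is
a generic sketch in PyTorch, not the INN-PAR architecture itself, which
additionally carries the signal gradients and uses a multi-scale convolution
module inside each block.

    # Additive coupling block: exactly invertible by construction.
    import torch
    import torch.nn as nn

    class AdditiveCoupling(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.f = nn.Conv1d(channels // 2, channels // 2,
                               kernel_size=3, padding=1)

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=1)
            return torch.cat([x1, x2 + self.f(x1)], dim=1)

        def inverse(self, y):
            y1, y2 = y.chunk(2, dim=1)
            return torch.cat([y1, y2 - self.f(y1)], dim=1)

    block = AdditiveCoupling(channels=4)
    x = torch.randn(1, 4, 64)  # (batch, channels, signal length)
    assert torch.allclose(block.inverse(block(x)), x, atol=1e-6)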
|