instruction | input | output |
---|---|---|
What field is the article from? | Title: k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis
Abstract: Examinations of neural networks' learned latent spaces typically employ
dimensionality reduction techniques such as t-SNE or UMAP. While these methods
effectively capture the overall sample distribution in the entire learned
latent space, they tend to distort the structure of sample distributions within
specific classes in the subset of the latent space. This distortion complicates
the task of easily distinguishing classes identifiable by neural networks. In
response to this challenge, we introduce the k* Distribution methodology. This
approach focuses on capturing the characteristics and structure of sample
distributions for individual classes within the subset of the learned latent
space using local neighborhood analysis. The key concept is to facilitate easy
comparison of different k* distributions, enabling analysis of how various
classes are processed by the same neural network. This provides a more profound
understanding of existing contemporary visualizations. Our study reveals three
distinct distributions of samples within the learned latent space subset: a)
Fractured, b) Overlapped, and c) Clustered. We note and demonstrate that the
distribution of samples within the network's learned latent space significantly
varies depending on the class. Furthermore, we illustrate that our analysis can
be applied to explore the latent space of diverse neural network architectures,
various layers within neural networks, transformations applied to input
samples, and the distribution of training and testing data for neural networks.
We anticipate that our approach will facilitate more targeted investigations
into neural networks by collectively examining the distribution of different
samples within the learned latent space. | Machine Learning |
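The exact k* statistic is defined in the paper; purely as an illustration of what local neighborhood analysis of a latent space can look like, here is a minimal sketch under the assumption that a sample's score counts how many same-class neighbors precede the first different-class neighbor in its sorted neighbor list. The function and variable names are ours, not the paper's.

```python
# Hypothetical sketch of a k*-style local neighborhood statistic: for each
# sample, count the same-class neighbors seen before the first neighbor of
# a different class appears. The paper's exact k* definition may differ.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_star(latents: np.ndarray, labels: np.ndarray) -> np.ndarray:
    n = len(latents)
    # Query all n neighbors; index 0 is the sample itself, so skip it.
    _, idx = NearestNeighbors(n_neighbors=n).fit(latents).kneighbors(latents)
    ks = np.empty(n, dtype=int)
    for i in range(n):
        neighbor_labels = labels[idx[i, 1:]]
        mismatch = np.nonzero(neighbor_labels != labels[i])[0]
        ks[i] = mismatch[0] if len(mismatch) else n - 1
    return ks

# Per-class k* histograms can then be compared: a concentrated, high-k*
# distribution suggests a clustered class, while many small k* values
# suggest fractured or overlapped classes.
```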
What field is the article from? | Title: Coop: Memory is not a Commodity
Abstract: Tensor rematerialization allows the training of deep neural networks (DNNs)
under limited memory budgets by checkpointing the models and recomputing the
evicted tensors as needed. However, the existing tensor rematerialization
techniques overlook the memory system in deep learning frameworks and
implicitly assume that free memory blocks at different addresses are identical.
Under this flawed assumption, discontiguous tensors are evicted, among which
some are not used to allocate the new tensor. This leads to severe memory
fragmentation and increases the cost of potential rematerializations. To
address this issue, we propose to evict tensors within a sliding window to
ensure all evictions are contiguous and are immediately used. Furthermore, we
propose cheap tensor partitioning and recomputable in-place operations to
further reduce the rematerialization cost by optimizing tensor allocation. We
name our method Coop, as it is a co-optimization of tensor allocation and
tensor rematerialization. We evaluate Coop on eight representative DNNs. The
experimental results demonstrate that Coop achieves up to $2\times$ memory
saving and substantially reduces compute overhead, search latency, and memory
fragmentation compared to the state-of-the-art baselines. | Machine Learning |
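Coop's allocator works inside a deep learning framework; purely as a toy illustration of the sliding-window eviction idea, the sketch below scans address-ordered, evictable tensors for the cheapest contiguous run whose combined size covers an allocation request. The data structure, names, and cost model are hypothetical, not Coop's implementation.

```python
# Toy sketch of sliding-window eviction: among contiguous runs of evictable
# tensors (kept sorted by address), find the run with the smallest total
# rematerialization cost whose combined size satisfies the request.
from dataclasses import dataclass

@dataclass
class Block:
    size: int    # bytes occupied by the tensor
    cost: float  # estimated cost to recompute it later

def cheapest_window(blocks: list[Block], request: int) -> tuple[int, int]:
    best, best_cost = None, float("inf")
    lo, size, cost = 0, 0, 0.0
    for hi, b in enumerate(blocks):
        size, cost = size + b.size, cost + b.cost
        while size - blocks[lo].size >= request:  # shrink while still feasible
            size -= blocks[lo].size
            cost -= blocks[lo].cost
            lo += 1
        if size >= request and cost < best_cost:
            best, best_cost = (lo, hi), cost
    if best is None:
        raise MemoryError("no contiguous window can satisfy the request")
    return best  # evict blocks[best[0] .. best[1]] inclusive

blocks = [Block(64, 1.0), Block(32, 0.1), Block(96, 0.2), Block(64, 5.0)]
print(cheapest_window(blocks, 120))  # -> (1, 2): the two cheap middle blocks
```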
What field is the article from? | Title: EELBERT: Tiny Models through Dynamic Embeddings
Abstract: We introduce EELBERT, an approach for compression of transformer-based models
(e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is
achieved by replacing the input embedding layer of the model with dynamic, i.e.
on-the-fly, embedding computations. Since the input embedding layer accounts
for a significant fraction of the model size, especially for the smaller BERT
variants, replacing this layer with an embedding computation function helps us
reduce the model size significantly. Empirical evaluation on the GLUE benchmark
shows that our BERT variants (EELBERT) suffer minimal regression compared to
the traditional BERT models. Through this approach, we are able to develop our
smallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully
trained BERT-tiny, while being 15x smaller (1.2 MB) in size. | Computational Linguistics |
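EELBERT's exact embedding computation is specified in the paper; the sketch below only illustrates the general idea of on-the-fly embeddings with a hypothetical hash-based scheme: each token id is hashed into a few seeds that deterministically regenerate small random vectors, so no |V| x d lookup table is ever stored.

```python
# Hypothetical sketch of a dynamic (on-the-fly) embedding function: the
# "table" is recomputed from hashes on demand, never stored, which is
# where the size savings of dynamic embeddings come from.
import numpy as np

def dynamic_embedding(token_id: int, dim: int = 128, n_hashes: int = 4) -> np.ndarray:
    vec = np.zeros(dim)
    for h in range(n_hashes):
        # Deterministic per-(token, hash) seed regenerates the same vector.
        rng = np.random.default_rng(hash((token_id, h)) % (2**32))
        vec += rng.standard_normal(dim)
    return vec / np.sqrt(n_hashes)

e1, e2 = dynamic_embedding(42), dynamic_embedding(42)
assert np.allclose(e1, e2)  # the same token always maps to the same vector
```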
What field is the article from? | Title: A theory for the sparsity emerged in the Forward Forward algorithm
Abstract: This report explores a theory that explains the high-sparsity
phenomenon (Tosato et al., 2023) observed in the forward-forward algorithm
(Hinton, 2022). The two proposed theorems predict the sparsity changes of a
single data point's activation in two cases: Theorem 1, decreasing the goodness
of the whole batch; and Theorem 2, applying the complete forward-forward
algorithm to decrease the goodness for negative data and increase it for
positive data. The theory aligns well with experiments on the MNIST dataset. | Machine Learning |
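For context, here is a minimal numpy sketch of the setting the theorems analyze, assuming the usual forward-forward formulation (Hinton, 2022): goodness is the sum of squared activations, and each layer is trained locally to push positive data above a threshold and negative data below it. Function and variable names are ours.

```python
# Minimal sketch of one local forward-forward layer update. Goodness is
# h.h; the layer maximizes log sigmoid(goodness - theta) on positive data
# and log(1 - sigmoid(goodness - theta)) on negative data.
import numpy as np

def ff_step(W, x_pos, x_neg, theta=2.0, lr=0.01):
    for x, is_pos in ((x_pos, True), (x_neg, False)):
        h = np.maximum(W @ x, 0.0)                  # ReLU activations
        p = 1.0 / (1.0 + np.exp(-(h @ h - theta)))  # P(positive | goodness)
        coeff = (1.0 - p) if is_pos else -p         # d log-likelihood / d goodness
        W += lr * np.outer(coeff * 2.0 * h, x)      # chain rule through h = ReLU(Wx)
    return W

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((32, 16))
W = ff_step(W, rng.standard_normal(16), rng.standard_normal(16))
```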
What field is the article from? | Title: Modeling User Viewing Flow using Large Language Models for Article Recommendation
Abstract: This paper proposes the User Viewing Flow Modeling (SINGLE) method for the
article recommendation task, which models the user constant preference and
instant interest from user-clicked articles. Specifically, we employ a user
constant viewing flow modeling method to summarize the user's general interest
to recommend articles. We utilize Large Language Models (LLMs) to capture
constant user preferences from previously clicked articles, such as skills and
positions. Then we design the user instant viewing flow modeling method to
build interactions between user-clicked article history and candidate articles.
It attentively reads the representations of user-clicked articles and aims to
learn the user's different interest views to match the candidate article. Our
experimental results on the Alibaba Technology Association (ATA) website show
the advantage of SINGLE, which achieves 2.4% improvements over previous
baseline models in the online A/B test. Our further analyses illustrate that
SINGLE has the ability to build a more tailored recommendation system by
mimicking different article viewing behaviors of users and recommending more
appropriate and diverse articles to match user interests. | Information Retrieval |
What field is the article from? | Title: FinanceBench: A New Benchmark for Financial Question Answering
Abstract: FinanceBench is a first-of-its-kind test suite for evaluating the performance
of LLMs on open book financial question answering (QA). It comprises 10,231
questions about publicly traded companies, with corresponding answers and
evidence strings. The questions in FinanceBench are ecologically valid and
cover a diverse set of scenarios. They are intended to be clear-cut and
straightforward to answer to serve as a minimum performance standard. We test
16 state-of-the-art model configurations (including GPT-4-Turbo, Llama2 and
Claude2, with vector stores and long context prompts) on a sample of 150 cases
from FinanceBench, and manually review their answers (n=2,400). The cases are
available open-source. We show that existing LLMs have clear limitations for
financial QA. Notably, GPT-4-Turbo used with a retrieval system incorrectly
answered or refused to answer 81% of questions. While augmentation techniques
such as using a longer context window to feed in relevant evidence improve
performance, they are unrealistic for enterprise settings due to increased
latency and cannot support larger financial documents. We find that all models
examined exhibit weaknesses, such as hallucinations, that limit their
suitability for use by enterprises. | Computational Linguistics |
What field is the article from? | Title: Conflict Transformation and Management. From Cognitive Maps to Value Trees
Abstract: Conflict transformation and management are complex decision processes with
extremely high stakes at hand and could greatly benefit from formal approaches
to decision support. For this purpose, we develop a general framework for how
to use problem structuring methods in such settings. More precisely, we show
how to transform cognitive maps into value trees in order to promote a more
design-oriented approach to decision support, aimed at constructing innovative
solutions for conflict management. We show that our findings have much wider
validity, since they allow one to move from a descriptive representation
of a problem situation to a more prescriptive one using formal procedures and
models. | Artificial Intelligence |
What field is the article from? | Title: IEKM: A Model Incorporating External Keyword Matrices
Abstract: A customer service platform system with a core text semantic similarity (STS)
task faces two urgent challenges. First, a single platform system needs to
adapt to customers from different domains, i.e., different-domain adaptation
(DDA). Second, it is difficult for the platform's model to distinguish
sentence pairs that are literally close but semantically different, i.e., hard
negative samples. In this paper, we propose an incorporating external keyword
matrices model (IEKM) to address these challenges. The model uses external
tools or dictionaries to construct external matrices and fuses them to the
self-attention layers of the Transformer structure through gating units, thus
enabling flexible corrections to the model results. We evaluate the method on
multiple datasets and the results show that our method has improved performance
on all datasets. To demonstrate that our method can effectively solve all the
above challenges, we conduct a flexible correction experiment, which results in
an increase in the F1 value from 56.61 to 73.53. Our code will be publicly
available. | Artificial Intelligence |
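The abstract describes fusing external keyword matrices into the self-attention layers through gating units; the exact fusion point and gate form are IEKM's. Below is a hedged PyTorch sketch of that general mechanism, with a learned scalar gate adding an external keyword-match matrix to the attention logits; all names are ours.

```python
# Hedged sketch of gated fusion of an external keyword matrix into
# self-attention logits (illustrative, not IEKM's exact architecture).
import math
import torch
import torch.nn as nn

class GatedKeywordAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.gate = nn.Parameter(torch.zeros(1))  # learned scalar gate

    def forward(self, x: torch.Tensor, ext: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); ext: (batch, seq, seq) keyword-match matrix
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        logits = logits + torch.sigmoid(self.gate) * ext  # gated correction
        return torch.softmax(logits, dim=-1) @ v

attn = GatedKeywordAttention(64)
x, ext = torch.randn(2, 10, 64), torch.randn(2, 10, 10)
print(attn(x, ext).shape)  # torch.Size([2, 10, 64])
```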
What field is the article from? | Title: Data Management For Large Language Models: A Survey
Abstract: Data plays a fundamental role in the training of Large Language Models
(LLMs). Effective data management, particularly in the formulation of a
well-suited training dataset, holds significance for enhancing model
performance and improving training efficiency during pretraining and supervised
fine-tuning phases. Despite the considerable importance of data management, the
current research community still falls short in providing a systematic analysis
of the rationale behind management strategy selection, its consequential
effects, methodologies for evaluating curated datasets, and the ongoing pursuit
of improved strategies. Consequently, the exploration of data management has
attracted more and more attention among the research community. This survey
provides a comprehensive overview of current research in data management within
both the pretraining and supervised fine-tuning stages of LLMs, covering
various noteworthy aspects of data management strategy design: data quantity,
data quality, domain/task composition, etc. Looking toward the future, we
extrapolate existing challenges and outline promising directions for
development in this field. Therefore, this survey serves as a guiding resource
for practitioners aspiring to construct powerful LLMs through effective data
management practices. The collection of the latest papers is available at
https://github.com/ZigeW/data_management_LLM. | Computational Linguistics |
What field is the article from? | Title: Resfusion: Prior Residual Noise embedded Denoising Diffusion Probabilistic Models
Abstract: Recently, Denoising Diffusion Probabilistic Models have been widely used in
image segmentation, by generating segmentation masks conditioned on the input
image. However, previous works cannot seamlessly integrate existing end-to-end
models with denoising diffusion models. Existing research can only select
acceleration steps based on experience rather than calculating them
specifically. Moreover, most methods are limited to small models and
small-scale datasets, unable to generalize to general datasets and a wider
range of tasks. Therefore, we propose Resfusion with a novel resnoise-diffusion
process, which gradually generates segmentation masks or any type of target
image, seamlessly integrating state-of-the-art end-to-end models and denoising
diffusion models. Resfusion bridges the discrepancy between the likelihood
output and the ground truth output through a Markov process. Through the novel
smooth equivalence transformation in the resnoise-diffusion process, we determine
the optimal acceleration step. Experimental results demonstrate that Resfusion
combines the capabilities of existing end-to-end models and denoising diffusion
models, further enhancing performance and achieving outstanding results.
Moreover, Resfusion is not limited to segmentation tasks, it can easily
generalize to any general tasks of image generation and exhibit strong
competitiveness. | Computer Vision |
What field is the article from? | Title: AI Alignment: A Comprehensive Survey
Abstract: AI alignment aims to make AI systems behave in line with human intentions and
values. As AI systems grow more capable, the potential large-scale risks
associated with misaligned AI systems become salient. Hundreds of AI experts
and public figures have expressed concerns about AI risks, arguing that
"mitigating the risk of extinction from AI should be a global priority,
alongside other societal-scale risks such as pandemics and nuclear war". To
provide a comprehensive and up-to-date overview of the alignment field, in this
survey paper, we delve into the core concepts, methodology, and practice of
alignment. We identify the RICE principles as the key objectives of AI
alignment: Robustness, Interpretability, Controllability, and Ethicality.
Guided by these four principles, we outline the landscape of current alignment
research and decompose them into two key components: forward alignment and
backward alignment. The former aims to make AI systems aligned via alignment
training, while the latter aims to gain evidence about the systems' alignment
and govern them appropriately to avoid exacerbating misalignment risks. Forward
alignment and backward alignment form a recurrent process where the alignment
of AI systems from the forward process is verified in the backward process,
meanwhile providing updated objectives for forward alignment in the next round.
On forward alignment, we discuss learning from feedback and learning under
distribution shift. On backward alignment, we discuss assurance techniques and
governance practices that apply to every stage of AI systems' lifecycle.
We also release and continually update the website (www.alignmentsurvey.com)
which features tutorials, collections of papers, blog posts, and other
resources. | Artificial Intelligence |
What field is the article from? | Title: Solving MaxSAT with Matrix Multiplication
Abstract: We propose an incomplete algorithm for Maximum Satisfiability (MaxSAT)
specifically designed to run on neural network accelerators such as GPUs and
TPUs. Given a MaxSAT problem instance in conjunctive normal form, our procedure
constructs a Restricted Boltzmann Machine (RBM) with an equilibrium
distribution wherein the probability of a Boolean assignment is exponential in
the number of clauses it satisfies. Block Gibbs sampling is used to
stochastically search the space of assignments with parallel Markov chains.
Since matrix multiplication is the main computational primitive for block Gibbs
sampling in an RBM, our approach leads to an elegantly simple algorithm (40
lines of JAX) well-suited for neural network accelerators. Theoretical results
about RBMs guarantee that the required number of visible and hidden units of
the RBM scale only linearly with the number of variables and constant-sized
clauses in the MaxSAT instance, ensuring that the computational cost of a Gibbs
step scales reasonably with the instance size. Search throughput can be
increased by batching parallel chains within a single accelerator as well as by
distributing them across multiple accelerators. As a further enhancement, a
heuristic based on unit propagation running on CPU is periodically applied to
the sampled assignments. Our approach, which we term RbmSAT, is a new design
point in the algorithm-hardware co-design space for MaxSAT. We present timed
results on a subset of problem instances from the annual MaxSAT Evaluation's
Incomplete Unweighted Track for the years 2018 to 2021. When allotted the same
running time and CPU compute budget (but no TPUs), RbmSAT outperforms other
participating solvers on problems drawn from three out of the four years'
competitions. Given the same running time on a TPU cluster for which RbmSAT is
uniquely designed, it outperforms all solvers on problems drawn from all four
years. | Artificial Intelligence |
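The clause-to-RBM weight construction is the paper's; omitting it, the sketch below shows only the block Gibbs machinery, whose inner loop is two matrix multiplies per step, which is why the method maps well onto accelerators. Plain numpy stands in for the paper's JAX, and all names are ours.

```python
# Sketch of batched block Gibbs sampling in an RBM with weights W, visible
# biases b, and hidden biases c. Each Markov chain is a row of v, so one
# Gibbs step for the whole batch of chains is two matrix multiplies.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_steps(v, W, b, c, rng, n_steps=100):
    for _ in range(n_steps):
        p_h = sigmoid(v @ W + c)                       # hidden block
        h = (rng.random(p_h.shape) < p_h).astype(v.dtype)
        p_v = sigmoid(h @ W.T + b)                     # visible block
        v = (rng.random(p_v.shape) < p_v).astype(v.dtype)
    return v  # rows are candidate Boolean assignments to score against clauses

rng = np.random.default_rng(0)
n_vis, n_hid, chains = 20, 50, 128
W = 0.1 * rng.standard_normal((n_vis, n_hid))
v0 = (rng.random((chains, n_vis)) < 0.5).astype(float)
samples = gibbs_steps(v0, W, np.zeros(n_vis), np.zeros(n_hid), rng)
```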
What field is the article from? | Title: Foundation Model Assisted Weakly Supervised Semantic Segmentation
Abstract: This work aims to leverage pre-trained foundation models, such as contrastive
language-image pre-training (CLIP) and segment anything model (SAM), to address
weakly supervised semantic segmentation (WSSS) using image-level labels. To
this end, we propose a coarse-to-fine framework based on CLIP and SAM for
generating high-quality segmentation seeds. Specifically, we construct an image
classification task and a seed segmentation task, which are jointly performed
by CLIP with frozen weights and two sets of learnable task-specific prompts. A
SAM-based seeding (SAMS) module is designed and applied to each task to produce
either coarse or fine seed maps. Moreover, we design a multi-label contrastive
loss supervised by image-level labels and a CAM activation loss supervised by
the generated coarse seed map. These losses are used to learn the prompts,
which are the only parts that need to be learned in our framework. Once the prompts
are learned, we input each image along with the learned segmentation-specific
prompts into CLIP and the SAMS module to produce high-quality segmentation
seeds. These seeds serve as pseudo labels to train an off-the-shelf
segmentation network like other two-stage WSSS methods. Experiments show that
our method achieves the state-of-the-art performance on PASCAL VOC 2012 and
competitive results on MS COCO 2014. Code is available at
https://github.com/HAL-42/FMA-WSSS.git. | Computer Vision |
What field is the article from? | Title: Case Repositories: Towards Case-Based Reasoning for AI Alignment
Abstract: Case studies commonly form the pedagogical backbone in law, ethics, and many
other domains that face complex and ambiguous societal questions informed by
human values. Similar complexities and ambiguities arise when we consider how
AI should be aligned in practice: when faced with vast quantities of diverse
(and sometimes conflicting) values from different individuals and communities,
with whose values is AI to align, and how should AI do so? We propose a
complementary approach to constitutional AI alignment, grounded in ideas from
case-based reasoning (CBR), that focuses on the construction of policies
through judgments on a set of cases. We present a process to assemble such a
case repository by: 1) gathering a set of "seed" cases -- questions one may
ask an AI system -- in a particular domain, 2) eliciting domain-specific key
dimensions for cases through workshops with domain experts, 3) using LLMs to
generate variations of cases not seen in the wild, and 4) engaging with the
public to judge and improve cases. We then discuss how such a case repository
could assist in AI alignment, both through directly acting as precedents to
ground acceptable behaviors, and as a medium for individuals and communities to
engage in moral reasoning around AI. | Artificial Intelligence |
What field is the article from? | Title: Towards Effective Paraphrasing for Information Disguise
Abstract: Information Disguise (ID), a part of computational ethics in Natural Language
Processing (NLP), is concerned with best practices of textual paraphrasing to
prevent the non-consensual use of authors' posts on the Internet. Research on
ID becomes important when authors' written online communication pertains to
sensitive domains, e.g., mental health. Over time, researchers have utilized
AI-based automated word spinners (e.g., SpinRewriter, WordAI) for paraphrasing
content. However, these tools fail to satisfy the purpose of ID as their
paraphrased content still leads to the source when queried on search engines.
There is limited prior work on judging the effectiveness of paraphrasing
methods for ID on search engines or their proxies, neural retriever (NeurIR)
models. We propose a framework where, for a given sentence from an author's
post, we perform iterative perturbation on the sentence in the direction of
paraphrasing with an attempt to confuse the search mechanism of a NeurIR system
when the sentence is queried on it. Our experiments involve the subreddit
'r/AmItheAsshole' as the source of public content and Dense Passage Retriever
as a NeurIR system-based proxy for search engines. Our work introduces a novel
method of phrase-importance rankings using perplexity scores and involves
multi-level phrase substitutions via beam search. Our multi-phrase substitution
scheme succeeds in disguising sentences 82% of the time and hence takes an
essential step towards enabling researchers to disguise sensitive content
effectively before making it public. We also release the code of our approach. | Information Retrieval |
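The paper's phrase-importance ranking uses perplexity scores; as a hedged proxy (an assumption, not the paper's exact formula), the sketch below scores a phrase by how much deleting it shifts the language-model loss of the sentence, using GPT-2 from Hugging Face transformers.

```python
# Hedged sketch of perplexity-based phrase importance: a phrase matters
# more if deleting it changes the sentence's language-model loss more.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def nll(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    return lm(ids, labels=ids).loss.item()  # mean token negative log-likelihood

def phrase_importance(sentence: str, phrases: list[str]) -> list[tuple[str, float]]:
    base = nll(sentence)
    scores = [(p, abs(nll(sentence.replace(p, "")) - base)) for p in phrases]
    return sorted(scores, key=lambda s: -s[1])  # most important phrase first

print(phrase_importance("My roommate ate my leftovers without asking.",
                        ["roommate", "leftovers", "without asking"]))
```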
What field is the article from? | Title: Data Valuation and Detections in Federated Learning
Abstract: Federated Learning (FL) enables collaborative model training while preserving
the privacy of raw data. A challenge in this framework is the fair and
efficient valuation of data, which is crucial for incentivizing clients to
contribute high-quality data in the FL task. In scenarios involving numerous
data clients within FL, it is often the case that only a subset of clients and
datasets are pertinent to a specific learning task, while others might have
either a negative or negligible impact on the model training process. This
paper introduces a novel privacy-preserving method for evaluating client
contributions and selecting relevant datasets without a pre-specified training
algorithm in an FL task. Our proposed approach FedBary, utilizes Wasserstein
distance within the federated context, offering a new solution for data
valuation in the FL framework. This method ensures transparent data valuation
and efficient computation of the Wasserstein barycenter and reduces the
dependence on validation datasets. Through extensive empirical experiments and
theoretical analyses, we demonstrate the potential of this data valuation
method as a promising avenue for FL research. | Machine Learning |
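FedBary's privacy-preserving barycenter computation is the paper's contribution; the underlying primitive, a Wasserstein distance between a client's samples and a reference set, can be sketched with the POT library. The client/reference setup below is hypothetical.

```python
# Hedged sketch of the optimal-transport primitive behind Wasserstein-based
# data valuation, using POT (Python Optimal Transport).
import numpy as np
import ot

rng = np.random.default_rng(0)
client = rng.normal(0.0, 1.0, size=(200, 10))     # one client's features
reference = rng.normal(0.5, 1.0, size=(300, 10))  # server-side reference set

a = np.full(len(client), 1.0 / len(client))       # uniform sample weights
b = np.full(len(reference), 1.0 / len(reference))
M = ot.dist(client, reference)                    # pairwise squared Euclidean costs
w2 = ot.emd2(a, b, M)                             # exact transport cost

# A client whose data sits far from the reference distribution (large cost)
# might be down-weighted or excluded from the FL task.
print(f"transport cost: {w2:.3f}")
```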
What field is the article from? | Title: CRoW: Benchmarking Commonsense Reasoning in Real-World Tasks
Abstract: Recent efforts in natural language processing (NLP) commonsense reasoning
research have yielded a considerable number of new datasets and benchmarks.
However, most of these datasets formulate commonsense reasoning challenges in
artificial scenarios that are not reflective of the tasks which real-world NLP
systems are designed to solve. In this work, we present CRoW, a
manually-curated, multi-task benchmark that evaluates the ability of models to
apply commonsense reasoning in the context of six real-world NLP tasks. CRoW is
constructed using a multi-stage data collection pipeline that rewrites examples
from existing datasets using commonsense-violating perturbations. We use CRoW
to study how NLP systems perform across different dimensions of commonsense
knowledge, such as physical, temporal, and social reasoning. We find a
significant performance gap when NLP systems are evaluated on CRoW compared to
humans, showcasing that commonsense reasoning is far from being solved in
real-world task settings. We make our dataset and leaderboard available to the
research community at https://github.com/mismayil/crow. | Computational Linguistics |
What field is the article from? | Title: Hulk: A Universal Knowledge Translator for Human-Centric Tasks
Abstract: Human-centric perception tasks, e.g., human mesh recovery, pedestrian
detection, skeleton-based action recognition, and pose estimation, have wide
industrial applications, such as metaverse and sports analysis. There is a
recent surge to develop human-centric foundation models that can benefit a
broad range of human-centric perception tasks. While many human-centric
foundation models have achieved success, most of them only excel in 2D vision
tasks or require extensive fine-tuning for practical deployment in real-world
scenarios. These limitations severely restrict their usability across various
downstream tasks and situations. To tackle these problems, we present Hulk, the
first multimodal human-centric generalist model, capable of addressing most of
the mainstream tasks simultaneously without task-specific finetuning, covering
2D vision, 3D vision, skeleton-based, and vision-language tasks. The key to
achieving this is condensing various task-specific heads into two general
heads, one for discrete representations, e.g., languages, and the other for
continuous representations, e.g., location coordinates. The outputs of the two
heads can be further stacked into four distinct input and output modalities.
This uniform representation enables Hulk to treat human-centric tasks as
modality translation, integrating knowledge across a wide range of tasks. To
validate the effectiveness of our proposed method, we conduct comprehensive
experiments on 11 benchmarks across 8 human-centric tasks. Experimental results
surpass previous methods substantially, demonstrating the superiority of our
proposed method. The code will be available on
https://github.com/OpenGVLab/HumanBench. | Computer Vision |
What field is the article from? | Title: Negotiating with LLMS: Prompt Hacks, Skill Gaps, and Reasoning Deficits
Abstract: Large language models (LLMs) like ChatGPT have reached the 100 million user
mark in record time and may increasingly enter all areas of our lives, leading to a
diverse set of interactions between these Artificial Intelligence models and
humans. While many studies have discussed governance and regulations
deductively from first-order principles, few studies provide an inductive,
data-driven lens based on observing dialogues between humans and LLMs
especially when it comes to non-collaborative, competitive situations that have
the potential to pose a serious threat to people. In this work, we conduct a
user study engaging over 40 individuals across all age groups in price
negotiations with an LLM. We explore how people interact with an LLM,
investigating differences in negotiation outcomes and strategies. Furthermore,
we highlight shortcomings of LLMs with respect to their reasoning capabilities
and, in turn, their susceptibility to prompt hacking, which aims to manipulate the
LLM into making agreements that are against its instructions or beyond any
rationality. We also show that the negotiated prices humans manage to achieve
span a broad range, which points to a literacy gap in effectively interacting
with LLMs. | Computational Linguistics |
What field is the article from? | Title: PROMINET: Prototype-based Multi-View Network for Interpretable Email Response Prediction
Abstract: Email is a widely used tool for business communication, and email marketing
has emerged as a cost-effective strategy for enterprises. While previous
studies have examined factors affecting email marketing performance, limited
research has focused on understanding email response behavior by considering
email content and metadata. This study proposes a Prototype-based Multi-view
Network (PROMINET) that incorporates semantic and structural information from
email data. By utilizing prototype learning, the PROMINET model generates
latent exemplars, enabling interpretable email response prediction. The model
maps learned semantic and structural exemplars to observed samples in the
training data at different levels of granularity, such as document, sentence,
or phrase. The approach is evaluated on two real-world email datasets: the
Enron corpus and an in-house Email Marketing corpus. Experimental results
demonstrate that the PROMINET model outperforms baseline models, achieving a
~3% improvement in F1 score on both datasets. Additionally, the model provides
interpretability through prototypes at different granularity levels while
maintaining comparable performance to non-interpretable models. The learned
prototypes also show potential for generating suggestions to enhance email text
editing and improve the likelihood of effective email responses. This research
contributes to enhancing sender-receiver communication and customer engagement
in email interactions. | Computational Linguistics |
What field is the article from? | Title: Less is more -- the Dispatcher/ Executor principle for multi-task Reinforcement Learning
Abstract: Humans instinctively know how to neglect details when it comes to solving
complex decision-making problems in environments with unforeseeable variations.
This abstraction process seems to be a vital property for most biological
systems and helps to 'abstract away' unnecessary details and boost
generalisation. In this work we introduce the dispatcher/executor principle
for the design of multi-task Reinforcement Learning controllers. It suggests
partitioning the controller into two entities, one that understands the task (the
dispatcher) and one that computes the controls for the specific device (the
executor), and connecting the two by a strongly regularizing communication
channel. The core rationale behind this position paper is that changes in
structure and design principles can improve generalisation properties and
drastically improve data efficiency. It is in some sense a 'yes, and ...'
response to the current trend of using large neural networks trained on vast
amounts of data and betting on emergent generalisation properties. While we agree
on the power of scaling - in the sense of Sutton's 'bitter lesson' - we will
give some evidence that considering structure and adding design principles can
be a valuable and critical component, in particular when data is not abundant
and infinite, but a precious resource. | Machine Learning |
What field is the article from? | Title: On Leakage in Machine Learning Pipelines
Abstract: Machine learning (ML) provides powerful tools for predictive modeling. ML's
popularity stems from the promise of sample-level prediction with applications
across a variety of fields from physics and marketing to healthcare. However,
if not properly implemented and evaluated, ML pipelines may contain leakage
typically resulting in overoptimistic performance estimates and failure to
generalize to new data. This can have severe negative financial and societal
implications. Our aim is to expand understanding associated with causes leading
to leakage when designing, implementing, and evaluating ML pipelines.
Illustrated by concrete examples, we provide a comprehensive overview and
discussion of various types of leakage that may arise in ML pipelines. | Machine Learning |
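A concrete instance of the kind of leakage the paper surveys: fitting a preprocessing step on the full dataset before splitting leaks test-fold statistics into training. The standard fix is to refit every step inside each training fold, e.g. with a scikit-learn pipeline, as the sketch below shows.

```python
# Classic preprocessing leakage vs. the pipeline fix in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Leaky: the scaler sees all rows, including future validation folds.
X_leaky = StandardScaler().fit_transform(X)
leaky = cross_val_score(SVC(), X_leaky, y, cv=5).mean()

# Correct: scaling is refit inside each training fold.
clean = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=5).mean()

print(f"leaky CV accuracy:    {leaky:.3f}")  # typically optimistic
print(f"pipeline CV accuracy: {clean:.3f}")
```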
What field is the article from? | Title: Resolving uncertainty on the fly: Modeling adaptive driving behavior as active inference
Abstract: Understanding adaptive human driving behavior, in particular how drivers
manage uncertainty, is of key importance for developing simulated human driver
models that can be used in the evaluation and development of autonomous
vehicles. However, existing traffic psychology models of adaptive driving
behavior either lack computational rigor or only address specific scenarios
and/or behavioral phenomena. While models developed in the fields of machine
learning and robotics can effectively learn adaptive driving behavior from
data, due to their black box nature, they offer little or no explanation of the
mechanisms underlying the adaptive behavior. Thus, a generalizable,
interpretable, computational model of adaptive human driving behavior is still
lacking. This paper proposes such a model based on active inference, a
behavioral modeling framework originating in computational neuroscience. The
model offers a principled solution to how humans trade progress against caution
through policy selection based on the single mandate to minimize expected free
energy. This casts goal-seeking and information-seeking (uncertainty-resolving)
behavior under a single objective function, allowing the model to seamlessly
resolve uncertainty as a means to obtain its goals. We apply the model in two
apparently disparate driving scenarios that require managing uncertainty, (1)
driving past an occluding object and (2) visual time sharing between driving
and a secondary task, and show how human-like adaptive driving behavior emerges
from the single principle of expected free energy minimization. | Robotics |
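For readers unfamiliar with the objective named in the abstract, one standard decomposition of expected free energy from the active-inference literature (notation ours; C encodes prior preferences over outcomes) makes the progress/caution trade-off explicit:

```latex
G(\pi) \;=\;
\underbrace{-\,\mathbb{E}_{q(o\mid\pi)}\big[\ln p(o\mid C)\big]}_{\text{pragmatic value (goal seeking)}}
\;-\;
\underbrace{\mathbb{E}_{q(o\mid\pi)}\Big[D_{\mathrm{KL}}\big(q(s\mid o,\pi)\,\big\|\,q(s\mid\pi)\big)\Big]}_{\text{epistemic value (uncertainty resolution)}}
```

Minimizing G(π) simultaneously favors policies that realize preferred outcomes and policies whose observations are informative, which is how goal-seeking and information-seeking behavior fall under a single objective.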
What field is the article from? | Title: Optimization dependent generalization bound for ReLU networks based on sensitivity in the tangent bundle
Abstract: Recent advances in deep learning have given us some very promising results on
the generalization ability of deep neural networks, however literature still
lacks a comprehensive theory explaining why heavily over-parametrized models
are able to generalize well while fitting the training data. In this paper we
propose a PAC type bound on the generalization error of feedforward ReLU
networks via estimating the Rademacher complexity of the set of networks
available from an initial parameter vector via gradient descent. The key idea
is to bound the sensitivity of the network's gradient to perturbation of the
input data along the optimization trajectory. The obtained bound does not
explicitly depend on the depth of the network. Our results are experimentally
verified on the MNIST and CIFAR-10 datasets. | Machine Learning |
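For orientation, the generic PAC-style template such results instantiate (a standard uniform-convergence bound for losses in [0,1]; constants vary by formulation, and the paper's contribution lies in bounding the Rademacher complexity of the set of networks reachable by gradient descent):

```latex
\text{With probability at least } 1-\delta:\qquad
L(f) \;\le\; \widehat{L}_n(f) \;+\; 2\,\mathfrak{R}_n(\mathcal{F}) \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}}
\qquad \forall f \in \mathcal{F}.
```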
What field is the article from? | Title: Land use/land cover classification of fused Sentinel-1 and Sentinel-2 imageries using ensembles of Random Forests
Abstract: The study explores the synergistic combination of Synthetic Aperture Radar
(SAR) and Visible-Near Infrared-Short Wave Infrared (VNIR-SWIR) imageries for
land use/land cover (LULC) classification. Image fusion, employing Bayesian
fusion, merges SAR texture bands with VNIR-SWIR imageries. The research aims to
investigate the impact of this fusion on LULC classification. Despite the
popularity of random forests for supervised classification, their limitations,
such as suboptimal performance with fewer features and accuracy stagnation, are
addressed. To overcome these issues, ensembles of random forests (RFE) are
created, introducing random rotations using the Forest-RC algorithm. Three
rotation approaches are employed: principal component analysis (PCA), a sparse
random rotation (SRP) matrix, and a complete random rotation (CRP) matrix.
Sentinel-1 SAR data and Sentinel-2 VNIR-SWIR data from the IIT-Kanpur region
constitute the training datasets, including SAR, SAR with texture, VNIR-SWIR,
VNIR-SWIR with texture, and fused VNIR-SWIR with texture. The study evaluates
classifier efficacy, explores the impact of SAR and VNIR-SWIR fusion on
classification, and significantly enhances the execution speed of Bayesian
fusion code. The SRP-based RFE outperforms other ensembles for the first two
datasets, yielding average overall kappa values of 61.80% and 68.18%, while the
CRP-based RFE excels for the last three datasets with average overall kappa
values of 95.99%, 96.93%, and 96.30%. The fourth dataset achieves the highest
overall kappa of 96.93%. Furthermore, incorporating texture with SAR bands
results in a maximum overall kappa increment of 10.00%, while adding texture to
VNIR-SWIR bands yields a maximum increment of approximately 3.45%. | Computer Vision |
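Forest-RC details aside, the general shape of a rotation-based ensemble of random forests (in the spirit of the CRP variant) can be sketched as follows: each member forest sees the features through a different random orthogonal rotation, and class probabilities are averaged. The class and its names are ours, not the paper's implementation.

```python
# Hedged sketch of a random-rotation ensemble of random forests.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

class RotationRFE:
    def __init__(self, n_members=5, seed=0):
        self.n_members, self.rng = n_members, np.random.default_rng(seed)
        self.members = []

    def fit(self, X, y):
        d = X.shape[1]
        for _ in range(self.n_members):
            # QR of a Gaussian matrix gives a random orthogonal rotation.
            Q, _ = np.linalg.qr(self.rng.standard_normal((d, d)))
            rf = RandomForestClassifier(n_estimators=100).fit(X @ Q, y)
            self.members.append((Q, rf))
        return self

    def predict(self, X):
        probs = np.mean([rf.predict_proba(X @ Q) for Q, rf in self.members], axis=0)
        return self.members[0][1].classes_[np.argmax(probs, axis=1)]

X, y = make_classification(n_samples=200, n_features=12, random_state=0)
model = RotationRFE().fit(X, y)
print((model.predict(X) == y).mean())
```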
What field is the article from? | Title: Linear Log-Normal Attention with Unbiased Concentration
Abstract: Transformer models have achieved remarkable results in a wide range of
applications. However, their scalability is hampered by the quadratic time and
memory complexity of the self-attention mechanism concerning the sequence
length. This limitation poses a substantial obstacle when dealing with long
documents or high-resolution images. In this work, we study the self-attention
mechanism by analyzing the distribution of the attention matrix and its
concentration ability. Furthermore, we propose instruments to measure these
quantities and introduce a novel self-attention mechanism, Linear Log-Normal
Attention, designed to emulate the distribution and concentration behavior of
the original self-attention. Our experimental results on popular natural
language benchmarks reveal that our proposed Linear Log-Normal Attention
outperforms other linearized attention alternatives, offering a promising
avenue for enhancing the scalability of transformer models. Our code is
available in supplementary materials. | Machine Learning |
What field is the article from? | Title: Localized Symbolic Knowledge Distillation for Visual Commonsense Models
Abstract: Instruction following vision-language (VL) models offer a flexible interface
that supports a broad range of multimodal tasks in a zero-shot fashion.
However, interfaces that operate on full images do not directly enable the user
to "point to" and access specific regions within images. This capability is
important not only to support reference-grounded VL benchmarks, but also, for
practical applications that require precise within-image reasoning. We build
Localized Visual Commonsense models, which allow users to specify (multiple)
regions as input. We train our model by sampling localized commonsense
knowledge from a large language model (LLM): specifically, we prompt an LLM to
collect commonsense knowledge given a global literal image description and a
local literal region description automatically generated by a set of VL models.
With a separately trained critic model that selects high-quality examples, we
find that training on the localized commonsense corpus can successfully distill
existing VL models to support a reference-as-input interface. Empirical results
and human evaluations in a zero-shot setup demonstrate that our distillation
method results in more precise VL models of reasoning compared to a baseline of
passing a generated referring expression to an LLM. | Artificial Intelligence |
What field is the article from? | Title: CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation
Abstract: Since the natural language processing (NLP) community started to make large
language models (LLMs), such as GPT-4, act as a critic to evaluate the quality
of generated texts, most of them only train a critique generation model of a
specific scale on specific datasets. We argue that a comprehensive
investigation into the key factors of LLM-based evaluation models, such as scaling
properties, is lacking, so it remains inconclusive whether these models
have the potential to replace GPT-4's evaluation in practical scenarios. In this
paper, we propose a new critique generation model called CritiqueLLM, which
includes a dialogue-based prompting method for high-quality referenced and
reference-free evaluation data. Experimental results show that our model can
achieve comparable evaluation performance to GPT-4 especially in system-level
correlations, and even outperform GPT-4 in 3 out of 8 tasks in a challenging
reference-free setting. We conduct detailed analysis to show promising scaling
properties of our model in the quality of generated critiques. We also
demonstrate that our generated critiques can act as scalable feedback to
directly improve the generation quality of LLMs. | Computational Linguistics |
What field is the article from? | Title: Data and Approaches for German Text simplification -- towards an Accessibility-enhanced Communication
Abstract: This paper examines the current state-of-the-art of German text
simplification, focusing on parallel and monolingual German corpora. It reviews
neural language models for simplifying German texts and assesses their
suitability for legal texts and accessibility requirements. Our findings
highlight the need for additional training data and more appropriate approaches
that consider the specific linguistic characteristics of German, as well as the
importance of the needs and preferences of target groups with cognitive or
language impairments. The authors launched the interdisciplinary OPEN-LS
project in April 2023 to address these research gaps. The project aims to
develop a framework for text formats tailored to individuals with low literacy
levels, integrate legal texts, and enhance comprehensibility for those with
linguistic or cognitive impairments. It will also explore cost-effective ways
to enhance the data with audience-specific illustrations using image-generating
AI.
For more and up-to-date information, please visit our project homepage
https://open-ls.entavis.com | Computational Linguistics |
What field is the article from? | Title: Damage GAN: A Generative Model for Imbalanced Data
Abstract: This study delves into the application of Generative Adversarial Networks
(GANs) within the context of imbalanced datasets. Our primary aim is to enhance
the performance and stability of GANs in such datasets. In pursuit of this
objective, we introduce a novel network architecture known as Damage GAN,
building upon the ContraD GAN framework which seamlessly integrates GANs and
contrastive learning. Through the utilization of contrastive learning, the
discriminator is trained to develop an unsupervised representation capable of
distinguishing all provided samples. Our approach draws inspiration from the
straightforward framework for contrastive learning of visual representations
(SimCLR), leading to the formulation of a distinctive loss function. We also
explore the implementation of self-damaging contrastive learning (SDCLR) to
further enhance the optimization of the ContraD GAN model. Comparative
evaluations against baseline models including the deep convolutional GAN
(DCGAN) and ContraD GAN demonstrate the evident superiority of our proposed
model, Damage GAN, in terms of generated image distribution, model stability,
and image quality when applied to imbalanced datasets. | Machine Learning |
What field is the article from? | Title: Improving Intrinsic Exploration by Creating Stationary Objectives
Abstract: Exploration bonuses in reinforcement learning guide long-horizon exploration
by defining custom intrinsic objectives. Several exploration objectives like
count-based bonuses, pseudo-counts, and state-entropy maximization are
non-stationary and hence are difficult to optimize for the agent. While this
issue is generally known, it is usually omitted and solutions remain
under-explored. The key contribution of our work lies in transforming the
original non-stationary rewards into stationary rewards through an augmented
state representation. For this purpose, we introduce the Stationary Objectives
For Exploration (SOFE) framework. SOFE requires identifying sufficient
statistics for different exploration bonuses and finding an efficient encoding
of these statistics to use as input to a deep network. SOFE is based on
proposing state augmentations that expand the state space but hold the promise
of simplifying the optimization of the agent's objective. We show that SOFE
improves the performance of several exploration objectives, including
count-based bonuses, pseudo-counts, and state-entropy maximization. Moreover,
SOFE outperforms prior methods that attempt to stabilize the optimization of
intrinsic objectives. We demonstrate the efficacy of SOFE in hard-exploration
problems, including sparse-reward tasks, pixel-based observations, 3D
navigation, and procedurally generated environments. | Machine Learning |
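The core idea can be illustrated for a count-based bonus in a small discrete environment: the visit-count table is a sufficient statistic of the bonus, so appending it to the observation makes the reward a stationary function of the augmented state. SOFE's actual encodings for deep networks differ; this minimal sketch and its names are ours.

```python
# Minimal sketch of state augmentation for a count-based exploration bonus:
# the bonus depends only on visit counts, so an agent that also observes
# the counts faces a stationary reward on the augmented state space.
import numpy as np

class CountBonusAugmentation:
    def __init__(self, n_states: int, beta: float = 0.1):
        self.counts = np.zeros(n_states)
        self.beta = beta

    def observe(self, state: int, reward: float):
        self.counts[state] += 1
        bonus = self.beta / np.sqrt(self.counts[state])  # intrinsic bonus
        # Augmented observation: one-hot state plus normalized counts.
        one_hot = np.eye(len(self.counts))[state]
        aug_obs = np.concatenate([one_hot, self.counts / (1 + self.counts.sum())])
        return aug_obs, reward + bonus

aug = CountBonusAugmentation(n_states=4)
obs, r = aug.observe(state=2, reward=0.0)
print(obs.shape, r)  # (8,) 0.1
```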
What field is the article from? | Title: LongStory: Coherent, Complete and Length Controlled Long story Generation
Abstract: A human author can write a story of any length without losing coherence, and
can always bring the story to a proper ending, an ability that current
language models lack. In this work, we present LongStory for coherent,
complete, and length-controlled long story generation. LongStory introduces two
novel methodologies: (1) the long- and short-term context weight calibrator
(CWC) and (2) long story structural positions (LSP). The CWC adjusts weights
for long-term context Memory and short-term context Cheating, acknowledging
their distinct roles. The LSP employs discourse tokens to convey the structural
positions of a long story. Trained on three datasets with varied average story
lengths, LongStory outperforms other baselines, including the strong story
generator Plotmachine, in coherence, completeness, relevance, and
repetitiveness. We also perform zero-shot tests on each dataset to assess the
model's ability to predict outcomes beyond its training data and validate our
methodology by comparing its performance with variants of our model. | Computational Linguistics |
What field is the article from? | Title: Expressive Sign Equivariant Networks for Spectral Geometric Learning
Abstract: Recent work has shown the utility of developing machine learning models that
respect the structure and symmetries of eigenvectors. These works promote sign
invariance, since for any eigenvector v the negation -v is also an eigenvector.
However, we show that sign invariance is theoretically limited for tasks such
as building orthogonally equivariant models and learning node positional
encodings for link prediction in graphs. In this work, we demonstrate the
benefits of sign equivariance for these tasks. To obtain these benefits, we
develop novel sign equivariant neural network architectures. Our models are
based on a new analytic characterization of sign equivariant polynomials and
thus inherit provable expressiveness properties. Controlled synthetic
experiments show that our networks can achieve the theoretically predicted
benefits of sign equivariant models. Code is available at
https://github.com/cptq/Sign-Equivariant-Nets. | Machine Learning |
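Sign equivariance means f(-v) = -f(v) for eigenvector inputs v. The paper derives its architectures from a characterization of sign-equivariant polynomials; a much simpler construction suffices to illustrate the property itself: bias-free linear layers composed with an odd activation are sign equivariant by construction.

```python
# Minimal illustration of sign equivariance, f(-v) = -f(v): bias-free
# linear maps composed with an odd activation (tanh) are sign equivariant,
# since tanh(-x) = -tanh(x) and W(-x) = -(Wx). The paper's models are richer.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(16, 32, bias=False),
    nn.Tanh(),
    nn.Linear(32, 8, bias=False),
)

v = torch.randn(4, 16)  # e.g. a batch of eigenvector features
assert torch.allclose(net(-v), -net(v), atol=1e-6)
```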
What field is the article from? | Title: Bandit-Driven Batch Selection for Robust Learning under Label Noise
Abstract: We introduce a novel approach for batch selection in Stochastic Gradient
Descent (SGD) training, leveraging combinatorial bandit algorithms. Our
methodology focuses on optimizing the learning process in the presence of label
noise, a prevalent issue in real-world datasets. Experimental evaluations on
the CIFAR-10 dataset reveal that our approach consistently outperforms existing
methods across various levels of label corruption. Importantly, we achieve this
superior performance without incurring the computational overhead commonly
associated with auxiliary neural network models. This work presents a balanced
trade-off between computational efficiency and model efficacy, offering a
scalable solution for complex machine learning applications. | Machine Learning |
What field is the article from? | Title: FlexModel: A Framework for Interpretability of Distributed Large Language Models
Abstract: With the growth of large language models, now incorporating billions of
parameters, the hardware prerequisites for their training and deployment have
seen a corresponding increase. Although existing tools facilitate model
parallelization and distributed training, deeper model interactions, crucial
for interpretability and responsible AI techniques, still demand thorough
knowledge of distributed computing. This often hinders contributions from
researchers with machine learning expertise but limited distributed computing
background. Addressing this challenge, we present FlexModel, a software package
providing a streamlined interface for engaging with models distributed across
multi-GPU and multi-node configurations. The library is compatible with
existing model distribution libraries and encapsulates PyTorch models. It
exposes user-registerable HookFunctions to facilitate straightforward
interaction with distributed model internals, bridging the gap between
distributed and single-device model paradigms. Primarily, FlexModel enhances
accessibility by democratizing model interactions and promotes more inclusive
research in the domain of large-scale neural networks. The package is found at
https://github.com/VectorInstitute/flex_model. | Machine Learning |
What field is the article from? | Title: ALPHA: AnomaLous Physiological Health Assessment Using Large Language Models
Abstract: This study concentrates on evaluating the efficacy of Large Language Models
(LLMs) in healthcare, with a specific focus on their application in personal
anomalous health monitoring. Our research primarily investigates the
capabilities of LLMs in interpreting and analyzing physiological data obtained
from FDA-approved devices. We conducted an extensive analysis using anomalous
physiological data gathered in a simulated low-air-pressure plateau
environment. This allowed us to assess the precision and reliability of LLMs in
understanding and evaluating users' health status with notable specificity. Our
findings reveal that LLMs exhibit exceptional performance in determining
medical indicators, including a Mean Absolute Error (MAE) of less than 1 beat
per minute for heart rate and less than 1% for oxygen saturation (SpO2).
Furthermore, the Mean Absolute Percentage Error (MAPE) for these evaluations
remained below 1%, with the overall accuracy of health assessments surpassing
85%. In image analysis tasks, such as interpreting photoplethysmography (PPG)
data, our specially adapted GPT models demonstrated remarkable proficiency,
achieving less than 1 bpm error in cycle count and 7.28 MAE for heart rate
estimation. This study highlights LLMs' dual role as health data analysis tools
and pivotal elements in advanced AI health assistants, offering personalized
health insights and recommendations within the future health assistant
framework. | Machine Learning |
What field is the article from? | Title: Assessing Translation capabilities of Large Language Models involving English and Indian Languages
Abstract: Generative Large Language Models (LLMs) have achieved remarkable advancements
in various NLP tasks. In this work, our aim is to explore the multilingual
capabilities of large language models by using machine translation as a task
involving English and 22 Indian languages. We first investigate the translation
capabilities of raw large language models, followed by exploring the in-context
learning capabilities of the same raw models. We fine-tune these large language
models using parameter efficient fine-tuning methods such as LoRA and
additionally with full fine-tuning. Through our study, we have identified the
best performing large language model for the translation task involving LLMs,
which is based on LLaMA.
Our results demonstrate significant progress, with average BLEU scores of
13.42, 15.93, 12.13, 12.30, and 12.07, as well as CHRF scores of 43.98, 46.99,
42.55, 42.42, and 45.39, respectively, using 2-stage fine-tuned LLaMA-13b for
English to Indian languages on IN22 (conversational), IN22 (general),
flores200-dev, flores200-devtest, and newstest2019 testsets. Similarly, for
Indian languages to English, we achieved average BLEU scores of 14.03, 16.65,
16.17, 15.35 and 12.55 along with chrF scores of 36.71, 40.44, 40.26, 39.51,
and 36.20, respectively, using fine-tuned LLaMA-13b on IN22 (conversational),
IN22 (general), flores200-dev, flores200-devtest, and newstest2019 testsets.
Overall, our findings highlight the potential and strength of large language
models for machine translation capabilities, including for languages that are
currently underrepresented in LLMs. | Computational Linguistics |
What field is the article from? | Title: On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving
Abstract: The pursuit of autonomous driving technology hinges on the sophisticated
integration of perception, decision-making, and control systems. Traditional
approaches, both data-driven and rule-based, have been hindered by their
inability to grasp the nuance of complex driving environments and the
intentions of other road users. This has been a significant bottleneck,
particularly in the development of common sense reasoning and nuanced scene
understanding necessary for safe and reliable autonomous driving. The advent of
Visual Language Models (VLM) represents a novel frontier in realizing fully
autonomous vehicle driving. This report provides an exhaustive evaluation of
the latest state-of-the-art VLM, GPT-4V(ision), and its application in
autonomous driving scenarios. We explore the model's abilities to understand
and reason about driving scenes, make decisions, and ultimately act in the
capacity of a driver. Our comprehensive tests span from basic scene recognition
to complex causal reasoning and real-time decision-making under varying
conditions. Our findings reveal that GPT-4V demonstrates superior performance
in scene understanding and causal reasoning compared to existing autonomous
systems. It showcases the potential to handle out-of-distribution scenarios,
recognize intentions, and make informed decisions in real driving contexts.
However, challenges remain, particularly in direction discernment, traffic
light recognition, vision grounding, and spatial reasoning tasks. These
limitations underscore the need for further research and development. Project
is now available on GitHub for interested parties to access and utilize:
\url{https://github.com/PJLab-ADG/GPT4V-AD-Exploration} | Computer Vision |
What field is the article from? | Title: Detection of news written by the ChatGPT through authorship attribution performed by a Bidirectional LSTM model
Abstract: The large language model-based chatbot ChatGPT has gained substantial
popularity since its launch and has been used in a wide range of situations. This
research centers on one particular situation: when ChatGPT is used to produce news
that will be consumed by the population, facilitating the production of fake news,
the spread of misinformation, and a loss of trust in news sources. Aware of these
problems, this research aims to build an artificial intelligence model capable of
performing authorship attribution on news articles, identifying the ones written
by ChatGPT. To achieve this goal, a dataset containing equal amounts of
human-written and ChatGPT-written news was assembled, and different natural
language processing techniques were used to extract features from it, which were
then used to train, validate, and test three models built with different
techniques. The best performance was produced by the Bidirectional Long
Short-Term Memory (LSTM) neural network model, achieving 91.57% accuracy when
tested against the data from the testing set. | Computational Linguistics |
What field is the article from? | Title: Exploratory Analysis and Augmentation of NSL-KDD Data Using Deep Generative Adversarial Networks to Improve the Performance of the Extreme Gradient Boosting Algorithm in Classifying Types of Cyber Attacks
Abstract: This study proposes the implementation of Deep Generative Adversarial
Networks (GANs) for augmenting the NSL-KDD dataset. The primary objective is to
enhance the efficacy of eXtreme Gradient Boosting (XGBoost) in the
classification of cyber-attacks on the NSL-KDD dataset. As a result, the method
proposed in this research achieved an accuracy of 99.53% using the XGBoost
model without data augmentation with GAN, and 99.78% with data augmentation
using GAN. | Cryptography and Security |
What field is the article from? | Title: Scalable AI Generative Content for Vehicular Network Semantic Communication
Abstract: Perceiving vehicles in a driver's blind spot is vital for safe driving. The
detection of potentially dangerous vehicles in these blind spots can benefit
from vehicular network semantic communication technology. However, efficient
semantic communication involves a trade-off between accuracy and delay,
especially in bandwidth-limited situations. This paper unveils a scalable
Artificial Intelligence Generated Content (AIGC) system that leverages an
encoder-decoder architecture. This system converts images into textual
representations and reconstructs them into quality-acceptable images,
optimizing transmission for vehicular network semantic communication. Moreover,
when bandwidth allows, auxiliary information is integrated. The encoder-decoder
aims to maintain semantic equivalence with the original images across various
tasks. Then the proposed approach employs reinforcement learning to enhance the
reliability of the generated contents. Experimental results suggest that the
proposed method surpasses the baseline in perceiving vehicles in blind spots
and effectively compresses communication data. While this method is
specifically designed for driving scenarios, this encoder-decoder architecture
also holds potential for wide use across various semantic communication
scenarios. | Artificial Intelligence |
What field is the article from? | Title: Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks
Abstract: Graph contrastive learning has shown great promise when labeled data is
scarce, but large unlabeled datasets are available. However, it often does not
take uncertainty estimation into account. We show that a variational Bayesian
neural network approach can be used to improve not only the uncertainty
estimates but also the downstream performance on semi-supervised
node-classification tasks. Moreover, we propose a new measure of uncertainty
for contrastive learning that is based on the disagreement in likelihood due
to different positive samples. | Machine Learning |
What field is the article from? | Title: Interpreting User Requests in the Context of Natural Language Standing Instructions
Abstract: Users of natural language interfaces, generally powered by Large Language
Models (LLMs), often must repeat their preferences each time they make a similar
request. To alleviate this, we propose including some of a user's preferences
and instructions in natural language -- collectively termed standing
instructions -- as additional context for such interfaces. For example, when a
user states I'm hungry, their previously expressed preference for Persian food
will be automatically added to the LLM prompt, so as to influence the search
for relevant restaurants. We develop NLSI, a language-to-program dataset
consisting of over 2.4K dialogues spanning 17 domains, where each dialogue is
paired with a user profile (a set of user-specific standing instructions) and
corresponding structured representations (API calls). A key challenge in NLSI
is to identify which subset of the standing instructions is applicable to a
given dialogue. NLSI contains diverse phenomena, from simple preferences to
interdependent instructions such as triggering a hotel search whenever the user
is booking tickets to an event. We conduct experiments on NLSI using prompting
with large language models and various retrieval approaches, achieving a
maximum of 44.7% exact match on API prediction. Our results demonstrate the
challenges in identifying the relevant standing instructions and their
interpretation into API calls. | Computational Linguistics |
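The core mechanic (selecting the applicable standing instructions and prepending them to the LLM prompt) can be sketched as below; the keyword-overlap retriever is a deliberately naive stand-in for the retrieval approaches the paper compares:

```python
# Toy sketch of prompting with standing instructions; illustrative only.
standing_instructions = [
    "If I ask for a restaurant, prefer Persian food.",
    "When booking event tickets, also search for a nearby hotel.",
]

def applicable(utterance):
    # Naive keyword retriever standing in for the paper's retrieval methods.
    triggers = {"hungry": 0, "restaurant": 0, "tickets": 1, "event": 1}
    idx = {triggers[w] for w in utterance.lower().split() if w in triggers}
    return [standing_instructions[i] for i in sorted(idx)]

def build_prompt(utterance):
    context = "\n".join(applicable(utterance))
    return f"Standing instructions:\n{context}\n\nUser: {utterance}\nAPI call:"

print(build_prompt("I'm hungry"))  # the Persian-food preference is injected
```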
What field is the article from? | Title: InterVLS: Interactive Model Understanding and Improvement with Vision-Language Surrogates
Abstract: Deep learning models are widely used in critical applications, highlighting
the need for pre-deployment model understanding and improvement. Visual
concept-based methods, while increasingly used for this purpose, face
challenges: (1) most concepts lack interpretability, (2) existing methods
require model knowledge, often unavailable at run time. Additionally, (3)
no-code methods for post-understanding model improvement are lacking. Addressing
these, we present InterVLS. The system facilitates model understanding by
discovering text-aligned concepts, measuring their influence with
model-agnostic linear surrogates. Employing visual analytics, InterVLS offers
concept-based explanations and performance insights. It enables users to adjust
concept influences to update a model, facilitating no-code model improvement.
We evaluate InterVLS in a user study, illustrating its functionality with two
scenarios. Results indicate that InterVLS effectively helps users identify
concepts influential to a model, gain insights, and adjust concept influence to
improve the model. We conclude with a discussion based on our study results. | Artificial Intelligence |
What field is the article from? | Title: Active Reinforcement Learning for Robust Building Control
Abstract: Reinforcement learning (RL) is a powerful tool for optimal control that has
found great success in Atari games, the game of Go, robotic control, and
building optimization. RL is also very brittle; agents often overfit to their
training environment and fail to generalize to new settings. Unsupervised
environment design (UED) has been proposed as a solution to this problem, in
which the agent trains in environments that have been specially selected to
help it learn. Previous UED algorithms focus on trying to train an RL agent
that generalizes across a large distribution of environments. This is not
necessarily desirable when we wish to prioritize performance in one environment
over others. In this work, we will be examining the setting of robust RL
building control, where we wish to train an RL agent that prioritizes
performing well in normal weather while still being robust to extreme weather
conditions. We demonstrate a novel UED algorithm, ActivePLR, that uses
uncertainty-aware neural network architectures to generate new training
environments at the limit of the RL agent's ability while being able to
prioritize performance in a desired base environment. We show that ActivePLR is
able to outperform state-of-the-art UED algorithms in minimizing energy usage
while maximizing occupant comfort in the setting of building control. | Machine Learning |
What field is the article from? | Title: The Generalization Gap in Offline Reinforcement Learning
Abstract: Despite recent progress in offline learning, these methods are still trained
and tested on the same environment. In this paper, we compare the
generalization abilities of widely used online and offline learning methods
such as online reinforcement learning (RL), offline RL, sequence modeling, and
behavioral cloning. Our experiments show that offline learning algorithms
perform worse on new environments than online learning ones. We also introduce
the first benchmark for evaluating generalization in offline learning,
collecting datasets of varying sizes and skill-levels from Procgen (2D video
games) and WebShop (e-commerce websites). The datasets contain trajectories for
a limited number of game levels or natural language instructions and at test
time, the agent has to generalize to new levels or instructions. Our
experiments reveal that existing offline learning algorithms struggle to match
the performance of online RL on both train and test environments. Behavioral
cloning is a strong baseline, outperforming state-of-the-art offline RL and
sequence modeling approaches when trained on data from multiple environments
and tested on new ones. Finally, we find that increasing the diversity of the
data, rather than its size, improves performance on new environments for all
offline learning algorithms. Our study demonstrates the limited generalization
of current offline learning algorithms highlighting the need for more research
in this area. | Machine Learning |
What field is the article from? | Title: Machine Learning-Enhanced Aircraft Landing Scheduling under Uncertainties
Abstract: This paper addresses aircraft delays, emphasizing their impact on safety and
financial losses. To mitigate these issues, an innovative machine learning
(ML)-enhanced landing scheduling methodology is proposed, aiming to improve
automation and safety. Analyzing flight arrival delay scenarios reveals strong
multimodal distributions and clusters in arrival flight time durations. A
multi-stage conditional ML predictor enhances separation time prediction based
on flight events. ML predictions are then integrated as safety constraints in a
time-constrained traveling salesman problem formulation, solved using
mixed-integer linear programming (MILP). Historical flight recordings and model
predictions address uncertainties between successive flights, ensuring
reliability. The proposed method is validated using real-world data from the
Atlanta Air Route Traffic Control Center (ARTCC ZTL). Case studies demonstrate
an average 17.2% reduction in total landing time compared to the
First-Come-First-Served (FCFS) rule. Unlike FCFS, the proposed methodology
considers uncertainties, instilling confidence in scheduling. The study
concludes with remarks and outlines future research directions. | Artificial Intelligence |
What field is the article from? | Title: Make a Donut: Language-Guided Hierarchical EMD-Space Planning for Zero-shot Deformable Object Manipulation
Abstract: Deformable object manipulation stands as one of the most captivating yet
formidable challenges in robotics. While previous techniques have predominantly
relied on learning latent dynamics through demonstrations, typically
represented as either particles or images, there exists a pertinent limitation:
acquiring suitable demonstrations, especially for long-horizon tasks, can be
elusive. Moreover, basing learning entirely on demonstrations can hamper the
model's ability to generalize beyond the demonstrated tasks. In this work, we
introduce a demonstration-free hierarchical planning approach capable of
tackling intricate long-horizon tasks without necessitating any training. We
employ large language models (LLMs) to articulate a high-level, stage-by-stage
plan corresponding to a specified task. For every individual stage, the LLM
provides both the tool's name and the Python code to craft intermediate subgoal
point clouds. With the tool and subgoal for a particular stage at our disposal,
we present a granular closed-loop model predictive control strategy. This
leverages Differentiable Physics with Point-to-Point correspondence
(DiffPhysics-P2P) loss in the earth mover distance (EMD) space, applied
iteratively. Experimental findings affirm that our technique surpasses multiple
benchmarks in dough manipulation, spanning both short and long horizons.
Remarkably, our model demonstrates robust generalization capabilities to novel
and previously unencountered complex tasks without any preliminary
demonstrations. We further substantiate our approach with experimental trials
on real-world robotic platforms. | Robotics |
What field is the article from? | Title: Training A Multi-stage Deep Classifier with Feedback Signals
Abstract: A Multi-Stage Classifier (MSC) -- several classifiers working sequentially in an
arranged order, with the classification decision partially made at each step -- is
widely used in industrial applications for various resource-limitation reasons.
The classifiers of a multi-stage process are usually Neural Network (NN) models
trained independently or in their inference order, without considering the
signals from the later stages. Aimed at the two-stage binary classification
process, the most common type of MSC, we propose a novel training framework,
named Feedback Training. The classifiers are trained in an order reverse to
their actual working order, and the classifier at the later stage is used to
guide the training of the initial-stage classifier via a sample weighting
method. We experimentally demonstrate the efficacy of our proposed approach and
its clear superiority in the few-shot training scenario. | Machine Learning |
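The reverse-order idea is simple enough to sketch: train the later-stage classifier first, then turn its predictions into per-sample weights for the earlier stage. The weighting rule below is an illustrative assumption, not the paper's exact scheme:

```python
# Minimal sketch of "Feedback Training" for a two-stage binary MSC.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

stage2 = LogisticRegression().fit(X, y)        # later stage trained first
p2 = stage2.predict_proba(X)[:, 1]

# Assumed weighting: up-weight samples the later stage finds ambiguous,
# so the early stage learns to pass exactly those cases downstream.
weights = 1.0 + 4.0 * (1.0 - np.abs(2.0 * p2 - 1.0))

stage1 = LogisticRegression().fit(X[:, :5], y, sample_weight=weights)
```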
What field is the article from? | Title: The Uli Dataset: An Exercise in Experience Led Annotation of oGBV
Abstract: Online gender-based violence has grown concomitantly with the adoption of the
internet and social media. Its effects are worse in the Global Majority, where
many users use social media in languages other than English. The scale and
volume of conversations on the internet have necessitated automated
detection of hate speech, and more specifically gendered abuse. There is,
however, a lack of language-specific and contextual data to build such
automated tools. In this paper we present a dataset on gendered abuse in three
languages: Hindi, Tamil, and Indian English. The dataset comprises tweets
annotated against three questions pertaining to the experience of gendered abuse,
by experts who identify as women or members of the LGBTQIA community in South
Asia. Through this dataset we demonstrate a participatory approach to creating
datasets that drive AI systems. | Computational Linguistics |
What field is the article from? | Title: GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation
Abstract: We present MM-Navigator, a GPT-4V-based agent for the smartphone graphical
user interface (GUI) navigation task. MM-Navigator can interact with a
smartphone screen as human users do, and determine subsequent actions to fulfill
given instructions. Our findings demonstrate that large multimodal models
(LMMs), specifically GPT-4V, excel in zero-shot GUI navigation through its
advanced screen interpretation, action reasoning, and precise action
localization capabilities. We first benchmark MM-Navigator on our collected iOS
screen dataset. According to human assessments, the system exhibited a 91\%
accuracy rate in generating reasonable action descriptions and a 75\% accuracy
rate in executing the correct actions for single-step instructions on iOS.
Additionally, we evaluate the model on a subset of an Android screen navigation
dataset, where the model outperforms previous GUI navigators in a zero-shot
fashion. Our benchmark and detailed analyses aim to lay a robust groundwork for
future research into the GUI navigation task. The project page is at
https://github.com/zzxslp/MM-Navigator. | Computer Vision |
What field is the article from? | Title: LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms
Abstract: Large Language Models are traditionally finetuned on large instruction
datasets. However, recent studies suggest that small, high-quality datasets can
suffice for general-purpose instruction following. This lack of consensus
surrounding finetuning best practices is in part due to rapidly diverging
approaches to LLM evaluation. In this study, we ask whether a small amount of
diverse finetuning samples can improve performance on both traditional
perplexity-based NLP benchmarks, and on open-ended, model-based evaluation. We
finetune open-source MPT-7B and MPT-30B models on instruction finetuning
datasets of various sizes ranging from 1k to 60k samples. We find that subsets
of 1k-6k instruction finetuning samples are sufficient to achieve good
performance on both (1) traditional NLP benchmarks and (2) model-based
evaluation. Finally, we show that mixing textbook-style and open-ended QA
finetuning datasets optimizes performance on both evaluation paradigms. | Machine Learning |
What field is the article from? | Title: Exact Combinatorial Optimization with Temporo-Attentional Graph Neural Networks
Abstract: Combinatorial optimization finds an optimal solution within a discrete set of
variables and constraints. The field has seen tremendous progress both in
research and industry. With the success of deep learning in the past decade, a
recent trend in combinatorial optimization has been to improve state-of-the-art
combinatorial optimization solvers by replacing key heuristic components with
machine learning (ML) models. In this paper, we investigate two essential
aspects of machine learning algorithms for combinatorial optimization: temporal
characteristics and attention. We argue that for the task of variable selection
in the branch-and-bound (B&B) algorithm, incorporating the temporal information
as well as the bipartite graph attention improves the solver's performance. We
support our claims with intuitions and numerical results over several standard
datasets used in the literature and competitions. Code is available at:
https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=047c6cf2-8463-40d7-b92f-7b2ca998e935 | Machine Learning |
What field is the article from? | Title: Diagnosing and Rectifying Fake OOD Invariance: A Restructured Causal Approach
Abstract: Invariant representation learning (IRL) encourages the prediction from
invariant causal features to labels de-confounded from the environments,
advancing the technical roadmap of out-of-distribution (OOD) generalization.
Despite the spotlight around them, recent theoretical results verified that some causal
features recovered by IRLs merely appear domain-invariant in the training
environments but fail in unseen domains. This \emph{fake invariance} severely
endangers OOD generalization, since the trusted objective cannot be diagnosed
and existing causal surgeries cannot rectify it. In this paper, we review
an IRL family (InvRat) under the Partially and Fully Informative Invariant
Feature Structural Causal Models (PIIF SCM / FIIF SCM), respectively, to certify
their weaknesses in representing fake invariant features; we then unify their
causal diagrams to propose ReStructured SCM (RS-SCM). RS-SCM can ideally
rebuild the spurious and the fake invariant features simultaneously. Given
this, we further develop an approach based on conditional mutual information
with respect to RS-SCM, then rigorously rectify the spurious and fake invariant
effects. It can be easily implemented by a small feature selection subnet
introduced in the IRL family, which is alternatively optimized to achieve our
goal. Experiments verified the superiority of our approach in fighting the
fake invariance issue across a variety of OOD generalization benchmarks. | Machine Learning |
What field is the article from? | Title: Anytime-Constrained Reinforcement Learning
Abstract: We introduce and study constrained Markov Decision Processes (cMDPs) with
anytime constraints. An anytime constraint requires the agent to never violate
its budget at any point in time, almost surely. Although Markovian policies are
no longer sufficient, we show that there exist optimal deterministic policies
augmented with cumulative costs. In fact, we present a fixed-parameter
tractable reduction from anytime-constrained cMDPs to unconstrained MDPs. Our
reduction yields planning and learning algorithms that are time and
sample-efficient for tabular cMDPs so long as the precision of the costs is
logarithmic in the size of the cMDP. However, we also show that computing
non-trivial approximately optimal policies is NP-hard in general. To circumvent
this bottleneck, we design provable approximation algorithms that efficiently
compute or learn an arbitrarily accurate approximately feasible policy with
optimal value so long as the maximum supported cost is bounded by a polynomial
in the cMDP or the absolute budget. Given our hardness results, our
approximation guarantees are the best possible under worst-case analysis. | Machine Learning |
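The paper's key construction, augmenting the state with the cumulative cost so that deterministic policies can respect an anytime budget, can be illustrated on a toy MDP (the two states, rewards, and costs below are made up):

```python
# Toy cost-augmented value iteration for an anytime budget constraint.
from functools import lru_cache

BUDGET, HORIZON = 3, 4
# (state, action) -> (reward, cost, next_state); purely illustrative.
MDP = {
    (0, "safe"):  (1, 0, 0),
    (0, "risky"): (5, 2, 1),
    (1, "safe"):  (1, 0, 1),
    (1, "risky"): (4, 2, 0),
}

@lru_cache(maxsize=None)
def V(state, spent, t):
    if t == HORIZON:
        return 0.0
    best = float("-inf")
    for (s, a), (r, c, nxt) in MDP.items():
        if s == state and spent + c <= BUDGET:  # the budget holds at every step
            best = max(best, r + V(nxt, spent + c, t + 1))
    return best

print(V(0, 0, 0))  # optimal anytime-feasible return from state 0
```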
What field is the article from? | Title: Applying Large Language Models and Chain-of-Thought for Automatic Scoring
Abstract: This study investigates the application of large language models (LLMs),
specifically GPT-3.5 and GPT-4, with Chain-of-Thought (CoT) in the automatic
scoring of student-written responses to science assessments. We focused on
overcoming the challenges of accessibility, technical complexity, and lack of
explainability that have previously limited the use of automatic assessment
tools among researchers and educators. We used a testing dataset comprising six
assessment tasks (three binomial and three trinomial) with 1,650 student
responses. We employed six prompt engineering strategies, combining zero-shot
or few-shot learning with CoT, either alone or alongside item stem and scoring
rubrics. Results indicated that few-shot (acc = .67) outperformed zero-shot
learning (acc = .60), a 12.6\% increase. CoT, when used without the item stem
and scoring rubrics, did not significantly affect scoring accuracy (acc = .60).
However, CoT prompting paired with contextual item stems and rubrics proved to
be a significant contributor to scoring accuracy (13.44\% increase for
zero-shot; 3.7\% increase for few-shot). Using a novel approach, PPEAS, we found
a more balanced accuracy across different proficiency categories, highlighting
the importance of domain-specific reasoning in enhancing the effectiveness of
LLMs in scoring tasks. Additionally, we found that GPT-4 demonstrated
superior performance over GPT-3.5 in various scoring tasks, showing an 8.64\%
difference. The study revealed that the single-call strategy with GPT-4,
particularly using greedy sampling, outperformed other approaches, including
ensemble voting strategies. This study demonstrates the potential of LLMs in
facilitating automatic scoring, emphasizing that CoT enhances accuracy,
particularly when used with item stem and scoring rubrics. | Computational Linguistics |
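The prompt grid the study crosses (zero-/few-shot, with or without CoT, item stem, and rubric) reduces to prompt assembly; the wording below is illustrative, not the paper's prompts:

```python
# Sketch of the prompt-engineering conditions; wording is assumed.
def build_scoring_prompt(response, examples=(), cot=False, stem=None, rubric=None):
    parts = []
    if stem:
        parts.append(f"Item stem:\n{stem}")
    if rubric:
        parts.append(f"Scoring rubric:\n{rubric}")
    for ex_response, ex_score in examples:       # few-shot demonstrations
        parts.append(f"Response: {ex_response}\nScore: {ex_score}")
    parts.append(f"Response: {response}")
    parts.append("Think step by step about which rubric level applies, then "
                 "give the final score." if cot else "Give the final score.")
    return "\n\n".join(parts)

print(build_scoring_prompt("Plants make food using sunlight.",
                           cot=True, stem="Explain photosynthesis.",
                           rubric="0 = off-topic, 1 = partial, 2 = complete"))
```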
What field is the article from? | Title: APRICOT: Acuity Prediction in Intensive Care Unit (ICU): Predicting Stability, Transitions, and Life-Sustaining Therapies
Abstract: The acuity state of patients in the intensive care unit (ICU) can quickly
change from stable to unstable, sometimes leading to life-threatening
conditions. Early detection of deteriorating conditions can result in providing
more timely interventions and improved survival rates. Current approaches rely
on manual daily assessments. Some data-driven approaches have been developed
that use mortality as a proxy for acuity in the ICU. However, these methods do
not integrate acuity states to determine the stability of a patient or the need
for life-sustaining therapies. In this study, we propose APRICOT (Acuity
Prediction in Intensive Care Unit), a Transformer-based neural network to
predict acuity state in real time in ICU patients. We develop the APRICOT
model and extensively validate it externally, temporally, and prospectively on
three large datasets: University of Florida Health (UFH), eICU Collaborative Research
Database (eICU), and Medical Information Mart for Intensive Care (MIMIC)-IV.
The performance of APRICOT shows comparable results to state-of-the-art
mortality prediction models (external AUROC 0.93-0.93, temporal AUROC
0.96-0.98, and prospective AUROC 0.98) as well as acuity prediction models
(external AUROC 0.80-0.81, temporal AUROC 0.77-0.78, and prospective AUROC
0.87). Furthermore, APRICOT can make predictions for the need for
life-sustaining therapies, showing comparable results to state-of-the-art
ventilation prediction models (external AUROC 0.80-0.81, temporal AUROC
0.87-0.88, and prospective AUROC 0.85), and vasopressor prediction models
(external AUROC 0.82-0.83, temporal AUROC 0.73-0.75, prospective AUROC 0.87).
This tool allows for real-time acuity monitoring of a patient and can provide
helpful information to clinicians to make timely interventions. Furthermore,
the model can suggest life-sustaining therapies that the patient might need in
the next hours in the ICU. | Artificial Intelligence |
What field is the article from? | Title: Designing Interpretable ML System to Enhance Trustworthy AI in Healthcare: A Systematic Review of the Last Decade to A Proposed Robust Framework
Abstract: AI-based medical technologies, including wearables, telemedicine, LLMs, and
digital care twins, significantly impact healthcare. Ensuring AI results are
accurate and interpretable is crucial, especially for clinicians. This paper
reviews processes and challenges of interpretable ML (IML) and explainable AI
(XAI) in healthcare. Objectives include reviewing XAI processes, methods,
applications, and challenges, with a focus on quality control. The IML process
is classified into data pre-processing interpretability, interpretable
modeling, and post-processing interpretability. The paper aims to establish the
importance of robust interpretability in healthcare through experimental
results, providing insights for creating communicable clinician-AI tools.
Research questions, eligibility criteria, and goals were identified following
PRISMA and PICO methods. PubMed, Scopus, and Web of Science were systematically
searched using specific strings. The survey introduces a step-by-step roadmap
for implementing XAI in clinical applications, addressing existing gaps and
acknowledging XAI model limitations. | Artificial Intelligence |
What field is the article from? | Title: Effective Human-AI Teams via Learned Natural Language Rules and Onboarding
Abstract: People are relying on AI agents to assist them with various tasks. The human
must know when to rely on the agent, collaborate with the agent, or ignore its
suggestions. In this work, we propose to learn rules, grounded in data regions
and described in natural language, that illustrate how the human should
collaborate with the AI. Our novel region discovery algorithm finds local
regions in the data as neighborhoods in an embedding space where prior human
behavior should be corrected. Each region is then described using a large
language model in an iterative and contrastive procedure. We then teach these
rules to the human via an onboarding stage. Through user studies on object
detection and question-answering tasks, we show that our method can lead to
more accurate human-AI teams. We also evaluate our region discovery and
description algorithms separately. | Machine Learning |
What field is the article from? | Title: OffMix-3L: A Novel Code-Mixed Dataset in Bangla-English-Hindi for Offensive Language Identification
Abstract: Code-mixing is a well-studied linguistic phenomenon in which two or more
languages are mixed in text or speech. Several works have been conducted on
building datasets and performing downstream NLP tasks on code-mixed data.
Although it is not uncommon to observe code-mixing of three or more languages,
most available datasets in this domain contain code-mixed data from only two
languages. In this paper, we introduce OffMix-3L, a novel offensive language
identification dataset containing code-mixed data from three different
languages. We experiment with several models on this dataset and observe that
BanglishBERT outperforms other transformer-based models and GPT-3.5. | Computational Linguistics |
What field is the article from? | Title: Characterizing Mechanisms for Factual Recall in Language Models
Abstract: Language Models (LMs) often must integrate facts they memorized in
pretraining with new information that appears in a given context. These two
sources can disagree, causing competition within the model, and it is unclear
how an LM will resolve the conflict. On a dataset that queries for knowledge of
world capitals, we investigate both distributional and mechanistic determinants
of LM behavior in such situations. Specifically, we measure the proportion of
the time an LM will use a counterfactual prefix (e.g., "The capital of Poland
is London") to overwrite what it learned in pretraining ("Warsaw"). On Pythia
and GPT2, the training frequency of both the query country ("Poland") and the
in-context city ("London") highly affect the models' likelihood of using the
counterfactual. We then use head attribution to identify individual attention
heads that either promote the memorized answer or the in-context answer in the
logits. By scaling up or down the value vector of these heads, we can control
the likelihood of using the in-context answer on new data. This method can
increase the rate of generating the in-context answer to 88\% of the time
simply by scaling a single head at runtime. Our work contributes to a body of
evidence showing that we can often localize model behaviors to specific
components and provides a proof of concept for how future methods might control
model behavior dynamically at runtime. | Computational Linguistics |
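The runtime intervention described above (scaling one attention head's value vector) can be sketched with a generic PyTorch pre-hook; module paths differ per model, so the standalone `attn_out_proj` layer, head count, and scale below are assumptions rather than Pythia/GPT2 internals:

```python
# Generic sketch: scale one head's slice of the attention output
# projection's input at runtime. Illustrative, not model-specific code.
import torch

NUM_HEADS, HEAD_DIM, HEAD, SCALE = 12, 64, 5, 3.0

def scale_one_head(module, args):
    (hidden,) = args                   # [batch, seq, num_heads * head_dim]
    hidden = hidden.clone()
    lo, hi = HEAD * HEAD_DIM, (HEAD + 1) * HEAD_DIM
    hidden[..., lo:hi] *= SCALE        # up- or down-weight this head's values
    return (hidden,)

attn_out_proj = torch.nn.Linear(NUM_HEADS * HEAD_DIM, NUM_HEADS * HEAD_DIM)
handle = attn_out_proj.register_forward_pre_hook(scale_one_head)
out = attn_out_proj(torch.randn(1, 8, NUM_HEADS * HEAD_DIM))
handle.remove()
```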
What field is the article from? | Title: CovarNav: Machine Unlearning via Model Inversion and Covariance Navigation
Abstract: The rapid progress of AI, combined with its unprecedented public adoption and
the propensity of large neural networks to memorize training data, has given
rise to significant data privacy concerns. To address these concerns, machine
unlearning has emerged as an essential technique to selectively remove the
influence of specific training data points on trained models. In this paper, we
approach the machine unlearning problem through the lens of continual learning.
Given a trained model and a subset of training data designated to be forgotten
(i.e., the "forget set"), we introduce a three-step process, named CovarNav, to
facilitate this forgetting. Firstly, we derive a proxy for the model's training
data using a model inversion attack. Secondly, we mislabel the forget set by
selecting the most probable class that deviates from the actual ground truth.
Lastly, we deploy a gradient projection method to minimize the cross-entropy
loss on the modified forget set (i.e., learn incorrect labels for this set)
while preventing forgetting of the inverted samples. We rigorously evaluate
CovarNav on the CIFAR-10 and Vggface2 datasets, comparing our results with
recent benchmarks in the field and demonstrating the efficacy of our proposed
approach. | Machine Learning |
What field is the article from? | Title: Exploring the Limits of ChatGPT in Software Security Applications
Abstract: Large language models (LLMs) have undergone rapid evolution and achieved
remarkable results in recent times. OpenAI's ChatGPT, backed by GPT-3.5 or
GPT-4, has gained instant popularity due to its strong capability across a wide
range of tasks, including natural language tasks, coding, mathematics, and
engaging conversations. However, the impacts and limits of such LLMs in the
system security domain are less explored. In this paper, we delve into the limits of
LLMs (i.e., ChatGPT) in seven software security applications including
vulnerability detection/repair, debugging, debloating, decompilation, patching,
root cause analysis, symbolic execution, and fuzzing. Our exploration reveals
that ChatGPT not only excels at generating code, which is the conventional
application of language models, but also demonstrates strong capability in
understanding user-provided commands in natural languages, reasoning about
control and data flows within programs, generating complex data structures, and
even decompiling assembly code. Notably, GPT-4 showcases significant
improvements over GPT-3.5 in most security tasks. Also, certain limitations of
ChatGPT in security-related tasks are identified, such as its constrained
ability to process long code contexts. | Cryptography and Security |
What field is the article from? | Title: Towards a fuller understanding of neurons with Clustered Compositional Explanations
Abstract: Compositional Explanations is a method for identifying logical formulas of
concepts that approximate the neurons' behavior. However, these explanations
are linked to the small spectrum of neuron activations (i.e., the highest ones)
used to check the alignment, thus lacking completeness. In this paper, we
propose a generalization, called Clustered Compositional Explanations, that
combines Compositional Explanations with clustering and a novel search
heuristic to approximate a broader spectrum of the neurons' behavior. We define
and address the problems connected to the application of these methods to
multiple ranges of activations, analyze the insights retrievable by using our
algorithm, and propose desiderata that can be used to study the
explanations returned by different algorithms. | Machine Learning |
What field is the article from? | Title: Pedestrian and Passenger Interaction with Autonomous Vehicles: Field Study in a Crosswalk Scenario
Abstract: This study presents the outcomes of empirical investigations pertaining to
human-vehicle interactions involving an autonomous vehicle equipped with both
internal and external Human Machine Interfaces (HMIs) within a crosswalk
scenario. The internal and external HMIs were integrated with implicit
communication techniques, incorporating a combination of gentle and aggressive
braking maneuvers within the crosswalk. Data were collected through a
combination of questionnaires and quantifiable metrics, including the pedestrian's
decision to cross in relation to the vehicle's distance and speed. The questionnaire
responses reveal that pedestrians experience enhanced safety perceptions when
the external HMI and gentle braking maneuvers are used in tandem. In contrast,
the measured variables demonstrate that the external HMI proves effective when
complemented by the gentle braking maneuver. Furthermore, the questionnaire
results highlight that the internal HMI enhances passenger confidence only when
paired with the aggressive braking maneuver. | Human-Computer Interaction |
What field is the article from? | Title: Leveraging Large Language Models for Collective Decision-Making
Abstract: In various work contexts, such as meeting scheduling, collaborating, and
project planning, collective decision-making is essential but often challenging
due to diverse individual preferences, varying work focuses, and power dynamics
among members. To address this, we propose a system leveraging Large Language
Models (LLMs) to facilitate group decision-making by managing conversations and
balancing preferences among individuals. Our system extracts individual
preferences and suggests options that satisfy a significant portion of the
members. We apply this system to corporate meeting scheduling. We create
synthetic employee profiles and simulate conversations at scale, leveraging
LLMs to evaluate the system. Our results indicate efficient coordination with
reduced interactions between members and the LLM-based system. The system also
effectively refines proposed options over time, ensuring their quality and
equity. Finally, we conduct a survey study involving human participants to
assess our system's ability to aggregate preferences and reasoning. Our
findings show that the system exhibits strong performance in both dimensions. | Computational Linguistics |
What field is the article from? | Title: The DURel Annotation Tool: Human and Computational Measurement of Semantic Proximity, Sense Clusters and Semantic Change
Abstract: We present the DURel tool that implements the annotation of semantic
proximity between word uses in an online, open-source interface. The tool
supports standardized human annotation as well as computational annotation,
building on recent advances with Word-in-Context models. Annotator judgments
are clustered with automatic graph clustering techniques and visualized for
analysis. This allows word senses to be measured with simple and intuitive
micro-task judgments between use pairs, requiring minimal preparation effort.
The tool offers additional functionalities to compare the agreement between
annotators to guarantee the inter-subjectivity of the obtained judgments and to
calculate summary statistics giving insights into sense frequency
distributions, semantic variation or changes of senses over time. | Computational Linguistics |
What field is the article from? | Title: GSQA: An End-to-End Model for Generative Spoken Question Answering
Abstract: In recent advancements in spoken question answering (QA), end-to-end models
have made significant strides. However, previous research has primarily focused
on extractive span selection. While this extractive-based approach is effective
when answers are present directly within the input, it falls short in
addressing abstractive questions, where answers are not directly extracted but
inferred from the given information. To bridge this gap, we introduce the first
end-to-end Generative Spoken Question Answering (GSQA) model that empowers the
system to engage in abstractive reasoning. The challenge in training our GSQA
model lies in the absence of a spoken abstractive QA dataset. We propose using
text models for initialization and leveraging the extractive QA dataset to
transfer knowledge from the text generative model to the spoken generative
model. Experimental results indicate that our model surpasses the previous
extractive model by 3% on extractive QA datasets. Furthermore, the GSQA model
has only been fine-tuned on the spoken extractive QA dataset. Despite not
having seen any spoken abstractive QA data, it can still closely match the
performance of the cascade model. In conclusion, our GSQA model shows the
potential to generalize to a broad spectrum of questions, thus further
expanding the capabilities of spoken question answering to abstractive QA. Our code is
available at
\href{https://voidful.github.io/GSQA}{https://voidful.github.io/GSQA} | Computational Linguistics |
What field is the article from? | Title: A Foundational Multimodal Vision Language AI Assistant for Human Pathology
Abstract: The field of computational pathology has witnessed remarkable progress in the
development of both task-specific predictive models and task-agnostic
self-supervised vision encoders. However, despite the explosive growth of
generative artificial intelligence (AI), there has been limited study on
building general purpose, multimodal AI assistants tailored to pathology. Here
we present PathChat, a vision-language generalist AI assistant for human
pathology using an in-house developed foundational vision encoder pretrained on
100 million histology images from over 100,000 patient cases and 1.18 million
pathology image-caption pairs. The vision encoder is then combined with a
pretrained large language model and the whole system is finetuned on over
250,000 diverse disease agnostic visual language instructions. We compare
PathChat against several multimodal vision language AI assistants as well as
GPT4V, which powers the commercially available multimodal general purpose AI
assistant ChatGPT-4. When relevant clinical context is provided with the
histology image, PathChat achieved a diagnostic accuracy of 87% on
multiple-choice questions based on publicly available cases of diverse tissue
origins and disease models. Additionally, using open-ended questions and human
expert evaluation, we found that overall PathChat produced more accurate and
pathologist-preferable responses to diverse queries related to pathology. As an
interactive and general vision language AI assistant that can flexibly handle
both visual and natural language inputs, PathChat can potentially find
impactful applications in pathology education, research, and human-in-the-loop
clinical decision making. | Computer Vision |
What field is the article from? | Title: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks
Abstract: Learning curve extrapolation aims to predict model performance in later
epochs of training, based on the performance in earlier epochs. In this work,
we argue that, while the inherent uncertainty in the extrapolation of learning
curves warrants a Bayesian approach, existing methods are (i) overly
restrictive, and/or (ii) computationally expensive. We describe the first
application of prior-data fitted neural networks (PFNs) in this context. A PFN
is a transformer, pre-trained on data generated from a prior, to perform
approximate Bayesian inference in a single forward pass. We propose LC-PFN, a
PFN trained to extrapolate 10 million artificial right-censored learning curves
generated from a parametric prior proposed in prior art using MCMC. We
demonstrate that LC-PFN can approximate the posterior predictive distribution
more accurately than MCMC, while being over 10 000 times faster. We also show
that the same LC-PFN achieves competitive performance extrapolating a total of
20 000 real learning curves from four learning curve benchmarks (LCBench,
NAS-Bench-201, Taskset, and PD1) that stem from training a wide range of model
architectures (MLPs, CNNs, RNNs, and Transformers) on 53 different datasets
with varying input modalities (tabular, image, text, and protein data).
Finally, we investigate its potential in the context of model selection and
find that a simple LC-PFN based predictive early stopping criterion obtains 2 -
6x speed-ups on 45 of these datasets, at virtually no overhead. | Machine Learning |
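The model-selection use case reduces to a predictive early-stopping rule: extrapolate the partial curve and stop when the predicted final score cannot beat the best model so far. A power-law least-squares fit stands in here for the LC-PFN's single-forward-pass Bayesian extrapolation, and the margin is an assumed hyperparameter:

```python
# Sketch of learning-curve-based early stopping; the power-law fit is a
# stand-in for LC-PFN's posterior predictive extrapolation.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    return a - b * t ** (-c)

def should_stop(scores, total_epochs, best_so_far, margin=0.01):
    t = np.arange(1, len(scores) + 1, dtype=float)
    (a, b, c), _ = curve_fit(power_law, t, scores,
                             p0=(max(scores), 1.0, 0.5), maxfev=10_000)
    predicted_final = power_law(float(total_epochs), a, b, c)
    return predicted_final + margin < best_so_far

curve = [0.55, 0.63, 0.68, 0.71, 0.73]   # validation accuracy so far
print(should_stop(curve, total_epochs=50, best_so_far=0.90))
```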
What field is the article from? | Title: Privacy Measurement in Tabular Synthetic Data: State of the Art and Future Research Directions
Abstract: Synthetic data (SD) have garnered attention as a privacy enhancing
technology. Unfortunately, there is no standard for quantifying their degree of
privacy protection. In this paper, we discuss proposed quantification
approaches. This contributes to the development of SD privacy standards;
stimulates multi-disciplinary discussion; and helps SD researchers make
informed modeling and evaluation decisions. | Artificial Intelligence |
What field is the article from? | Title: Brain-inspired Computing Based on Machine Learning And Deep Learning:A Review
Abstract: The continuous development of artificial intelligence has a profound impact
on biomedical research and other fields. Brain-inspired computing is an
important intersection of multimodal technology and the biomedical field. This
paper provides a comprehensive review of machine learning (ML) and deep
learning (DL) models in brain-inspired computing, tracking their evolution,
application value, challenges, and potential research trajectories. First, the
basic concepts and development history are reviewed, and their evolution is
divided into two stages: recent machine learning and current deep learning,
emphasizing the importance of each stage in the research state of
brain-inspired computing. In addition, the latest progress and key techniques
of deep learning in different tasks of brain-inspired computing are introduced
from six perspectives. Despite significant progress, challenges remain in
making full use of its capabilities. This paper aims to provide a comprehensive
review of brain-inspired computing models based on machine learning and deep
learning, highlighting their potential in various applications and providing a
valuable reference for future academic research. It can be accessed through the
following url: https://github.com/ultracoolHub/brain-inspired-computing | Artificial Intelligence |
What field is the article from? | Title: Post Turing: Mapping the landscape of LLM Evaluation
Abstract: In the rapidly evolving landscape of Large Language Models (LLMs),
introduction of well-defined and standardized evaluation methodologies remains
a crucial challenge. This paper traces the historical trajectory of LLM
evaluations, from the foundational questions posed by Alan Turing to the modern
era of AI research. We categorize the evolution of LLMs into distinct periods,
each characterized by its unique benchmarks and evaluation criteria. As LLMs
increasingly mimic human-like behaviors, traditional evaluation proxies, such
as the Turing test, have become less reliable. We emphasize the pressing need
for a unified evaluation system, given the broader societal implications of
these models. Through an analysis of common evaluation methodologies, we
advocate for a qualitative shift in assessment approaches, underscoring the
importance of standardization and objective criteria. This work serves as a
call for the AI community to collaboratively address the challenges of LLM
evaluation, ensuring their reliability, fairness, and societal benefit. | Computational Linguistics |
What field is the article from? | Title: Paloma: A Benchmark for Evaluating Language Model Fit
Abstract: Language models (LMs) commonly report perplexity on monolithic data held out
from training. Implicitly or explicitly, this data is composed of
domains -- varying distributions of language. Rather than assuming
perplexity on one distribution extrapolates to others, Perplexity Analysis for
Language Model Assessment (Paloma) measures LM fit to 585 text domains,
ranging from nytimes.com to r/depression on Reddit. We invite submissions to
our benchmark and organize results by comparability based on compliance with
guidelines such as removal of benchmark contamination from pretraining.
Submissions can also record parameter and training token count to make
comparisons of Pareto efficiency for performance as a function of these
measures of cost. We populate our benchmark with results from 6 baselines
pretrained on popular corpora. In case studies, we demonstrate analyses that
are possible with Paloma, such as finding that pretraining without data beyond
Common Crawl leads to inconsistent fit to many domains. | Computational Linguistics |
What field is the article from? | Title: Sports Recommender Systems: Overview and Research Issues
Abstract: Sports recommender systems receive increasing attention due to their
potential for fostering healthy living, improving personal well-being, and
increasing performance in sports. These systems support people in sports, for
example, by the recommendation of healthy and performance boosting food items,
the recommendation of training practices, talent and team recommendation, and
the recommendation of specific tactics in competitions. With applications in
the virtual world, for example, the recommendation of maps or opponents in
e-sports, these systems already transcend conventional sports scenarios where
physical presence is needed. On the basis of different working examples, we
present an overview of sports recommender systems applications and techniques.
Overall, we analyze the related state-of-the-art and discuss open research
issues. | Information Retrieval |
What field is the article from? | Title: Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks
Abstract: Convolutional neural network (CNN) models play a vital role in achieving
state-of-the-art performance in various technological fields. CNNs are not
limited to Natural Language Processing (NLP) or Computer Vision (CV) but also
have substantial applications in other technological domains, particularly in
cybersecurity. The reliability of CNN models can be compromised because of
their susceptibility to adversarial attacks, which can be generated
effortlessly, easily applied, and transferred in real-world scenarios.
In this paper, we present a novel and comprehensive method to improve the
strength of attacks and assess the transferability of adversarial examples in
CNNs when such strength changes, as well as whether the transferability
property issue exists in computer network applications. In the context of our
study, we initially examined six distinct modes of attack: the Carlini and
Wagner (C&W), Fast Gradient Sign Method (FGSM), Iterative Fast Gradient Sign
Method (I-FGSM), Jacobian-based Saliency Map (JSMA), Limited-memory
Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), and Projected Gradient Descent (PGD) attacks.
We applied these attack techniques on two popular datasets: the CIC and UNSW
datasets. The outcomes of our experiment demonstrate that an improvement in
transferability occurs in the targeted scenarios for FGSM, JSMA, L-BFGS, and
other attacks. Our findings further indicate that the threats to security posed
by adversarial examples, even in computer network applications, necessitate the
development of novel defense mechanisms to enhance the security of DL-based
techniques. | Cryptography and Security |
What field is the article from? | Title: Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features
Abstract: This paper explores privacy-compliant group-level emotion recognition
''in-the-wild'' within the EmotiW Challenge 2023. Group-level emotion
recognition can be useful in many fields including social robotics,
conversational agents, e-coaching, and learning analytics. This research restricts
itself to using only global features, avoiding individual ones, i.e., all features
that can be used to identify or track people in videos (facial landmarks, body
poses, audio diarization, etc.). The proposed multimodal model is composed of a
video and an audio branches with a cross-attention between modalities. The
video branch is based on a fine-tuned ViT architecture. The audio branch
extracts Mel-spectrograms and feeds them through CNN blocks into a transformer
encoder. Our training paradigm includes a generated synthetic dataset to
increase the sensitivity of our model on facial expression within the image in
a data-driven way. The extensive experiments show the significance of our
methodology. Our privacy-compliant proposal performs fairly on the EmotiW
challenge, with 79.24% and 75.13% accuracy on the validation and test sets,
respectively, for the best models. Notably, our findings highlight that it is
possible to reach this accuracy level with privacy-compliant features using
only 5 frames uniformly distributed on the video. | Artificial Intelligence |
What field is the article from? | Title: XAI meets Biology: A Comprehensive Review of Explainable AI in Bioinformatics Applications
Abstract: Artificial intelligence (AI), particularly machine learning and deep learning
models, has significantly impacted bioinformatics research by offering powerful
tools for analyzing complex biological data. However, the lack of
interpretability and transparency of these models presents challenges in
leveraging these models for deeper biological insights and for generating
testable hypotheses. Explainable AI (XAI) has emerged as a promising solution
to enhance the transparency and interpretability of AI models in
bioinformatics. This review provides a comprehensive analysis of various XAI
techniques and their applications across various bioinformatics domains
including DNA, RNA, and protein sequence analysis, structural analysis, gene
expression and genome analysis, and bioimaging analysis. We introduce the most
pertinent machine learning and XAI methods, then discuss their diverse
applications and address the current limitations of available XAI tools. By
offering insights into XAI's potential and challenges, this review aims to
facilitate its practical implementation in bioinformatics research and help
researchers navigate the landscape of XAI tools. | Artificial Intelligence |
What field is the article from? | Title: Automatic Engineering of Long Prompts
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in
solving complex open-domain tasks, guided by comprehensive instructions and
demonstrations provided in the form of prompts. However, these prompts can be
lengthy, often comprising hundreds of lines and thousands of tokens, and their
design often requires considerable human effort. Recent research has explored
automatic prompt engineering for short prompts, typically consisting of one or
a few sentences. However, the automatic design of long prompts remains a
challenging problem due to its immense search space. In this paper, we
investigate the performance of greedy algorithms and genetic algorithms for
automatic long prompt engineering. We demonstrate that a simple greedy approach
with beam search outperforms other methods in terms of search efficiency.
Moreover, we introduce two novel techniques that utilize search history to
enhance the effectiveness of LLM-based mutation in our search algorithm. Our
results show that the proposed automatic long prompt engineering algorithm
achieves an average of 9.2% accuracy gain on eight tasks in Big Bench Hard,
highlighting the significance of automating prompt designs to fully harness the
capabilities of LLMs. | Artificial Intelligence |
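The winning search procedure (greedy beam search over sentence-level edits to a long prompt) has a compact skeleton; `mutate` and `evaluate` below are stubs for the LLM-based rewriter and the held-out task-accuracy evaluator:

```python
# Skeleton of greedy beam search over long-prompt edits; stubs throughout.
import random

def mutate(prompt):
    i = random.randrange(len(prompt))
    edited = prompt.copy()
    edited[i] += " (rephrased)"          # stand-in for an LLM-proposed rewrite
    return edited

def evaluate(prompt):
    return random.random()               # stand-in for held-out task accuracy

def greedy_beam_search(prompt, steps=10, beam=4, proposals=8):
    frontier = [(evaluate(prompt), prompt)]
    for _ in range(steps):
        candidates = [(evaluate(p), p)
                      for _, parent in frontier
                      for p in (mutate(parent) for _ in range(proposals))]
        frontier = sorted(frontier + candidates,
                          key=lambda sp: sp[0], reverse=True)[:beam]
    return frontier[0]

best_score, best_prompt = greedy_beam_search(
    ["You are a careful solver.", "Show your work."])
```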
What field is the article from? | Title: LaMPilot: An Open Benchmark Dataset for Autonomous Driving with Language Model Programs
Abstract: We present LaMPilot, a novel framework for planning in the field of
autonomous driving, rethinking the task as a code-generation process that
leverages established behavioral primitives. This approach aims to address the
challenge of interpreting and executing spontaneous user instructions such as
"overtake the car ahead," which have typically posed difficulties for existing
frameworks. We introduce the LaMPilot benchmark specifically designed to
quantitatively evaluate the efficacy of Large Language Models (LLMs) in
translating human directives into actionable driving policies. We then evaluate
a wide range of state-of-the-art code generation language models on tasks from
the LaMPilot Benchmark. The results of the experiments showed that GPT-4, with
human feedback, achieved an impressive task completion rate of 92.7% and a
minimal collision rate of 0.9%. To encourage further investigation in this
area, our code and dataset will be made available. | Computational Linguistics |
What field is the article from? | Title: Hierarchical Reinforcement Learning for Power Network Topology Control
Abstract: Learning in high-dimensional action spaces is a key challenge in applying
reinforcement learning (RL) to real-world systems. In this paper, we study the
possibility of controlling power networks using RL methods. Power networks are
critical infrastructures that are complex to control. In particular, the
combinatorial nature of the action space poses a challenge to both conventional
optimizers and learned controllers. Hierarchical reinforcement learning (HRL)
represents one approach to address this challenge. More precisely, an HRL
framework for power network topology control is proposed. The HRL framework
consists of three levels of action abstraction. At the highest level, there is
the overall long-term task of power network operation, namely, keeping the
power grid state within security constraints at all times, which is decomposed
into two temporally extended actions: 'do nothing' versus 'propose a topology
change'. At the intermediate level, the action space consists of all
controllable substations. Finally, at the lowest level, the action space
consists of all configurations of the chosen substation. By employing this HRL
framework, several hierarchical power network agents are trained for the IEEE
14-bus network. Whereas at the highest level a purely rule-based policy is
still chosen for all agents in this study, at the intermediate level the policy
is trained using different state-of-the-art RL algorithms. At the lowest level,
either an RL algorithm or a greedy algorithm is used. The performance of the
different 3-level agents is compared with standard baseline (RL or greedy)
approaches. A key finding is that the 3-level agent that employs RL both at the
intermediate and the lowest level outperforms all other agents on the most
difficult task. Our code is publicly available. | Machine Learning |
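The three-level decomposition can be rendered as nested action selection; the rule threshold and the random stand-ins for the learned mid- and low-level policies below are illustrative:

```python
# Toy rendering of the three-level HRL action hierarchy; all policies
# here are stubs for the rule-based / RL / greedy components.
import random

def top_level(grid_load):
    # Rule-based: act only when the grid approaches its security limit.
    return "propose_topology_change" if grid_load > 0.95 else "do_nothing"

def mid_level(state):
    return random.randrange(14)   # stub: pick one of 14 substations (IEEE 14-bus)

def low_level(substation, state):
    return random.randrange(4)    # stub: pick a busbar configuration

state, grid_load = None, 0.97
if top_level(grid_load) == "propose_topology_change":
    sub = mid_level(state)
    config = low_level(sub, state)
    print(f"reconfigure substation {sub} to configuration {config}")
```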
What field is the article from? | Title: RoKEPG: RoBERTa and Knowledge Enhancement for Prescription Generation of Traditional Chinese Medicine
Abstract: Traditional Chinese medicine (TCM) prescription is the most critical form of
TCM treatment, and uncovering the complex nonlinear relationship between
symptoms and TCM prescriptions is of great significance for clinical practice and assisting
physicians in diagnosis and treatment. Although there have been some studies on
TCM prescription generation, these studies consider a single factor and
directly model the symptom-prescription generation problem mainly based on
symptom descriptions, lacking guidance from TCM knowledge. To this end, we
propose a RoBERTa and Knowledge Enhancement model for Prescription Generation
of Traditional Chinese Medicine (RoKEPG). RoKEPG is first pre-trained on our
constructed TCM corpus; the pre-trained model is then fine-tuned, and the
model is guided to generate TCM prescriptions by introducing four classes of
TCM knowledge through the attention mask matrix. Experimental results on the
publicly available TCM prescription dataset show that RoKEPG improves the F1
metric by about 2% over the baseline model with the best results. | Computational Linguistics |
What field is the article from? | Title: Sample based Explanations via Generalized Representers
Abstract: We propose a general class of sample based explanations of machine learning
models, which we term generalized representers. To measure the effect of a
training sample on a model's test prediction, generalized representers use two
components: a global sample importance that quantifies the importance of the
training point to the model and is invariant to test samples, and a local
sample importance that measures similarity between the training sample and the
test point with a kernel. A key contribution of the paper is to show that
generalized representers are the only class of sample based explanations
satisfying a natural set of axiomatic properties. We discuss approaches to
extract global importances given a kernel, and also natural choices of kernels
given modern non-linear models. As we show, many popular existing sample based
explanations could be cast as generalized representers with particular choices
of kernels and approaches to extract global importances. Additionally, we
conduct empirical comparisons of different generalized representers on two
image and two text classification datasets. | Machine Learning |
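The two-component decomposition is easy to state numerically: the attribution of a test prediction to training sample i is a global importance alpha_i times a kernel similarity k(x_i, x_test). The RBF kernel and random importances below are illustrative choices, not extracted from any model:

```python
# Numeric sketch of a generalized representer; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 10))
alpha = rng.normal(size=100)            # global sample importances

def rbf(a, b, gamma=0.1):
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def representer_attributions(x_test):
    # One attribution per training sample: global importance x similarity.
    return alpha * rbf(X_train, x_test)

attr = representer_attributions(rng.normal(size=10))
top5 = np.argsort(-np.abs(attr))[:5]    # most influential training samples
print(top5, attr[top5])
```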
What field is the article from? | Title: MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples
Abstract: Although In-Context Learning (ICL) brings remarkable performance gains to
Large Language Models (LLMs), the improvements remain lower than fine-tuning on
downstream tasks. This paper introduces Multi-Modal In-Context Tuning (MMICT),
a novel multi-modal fine-tuning paradigm that boosts multi-modal fine-tuning by
fully leveraging the promising ICL capability of multi-modal LLMs (MM-LLMs). We
propose the Multi-Modal Hub (M-Hub), a unified module that captures various
multi-modal features according to different inputs and objectives. Based on
M-Hub, MMICT enables MM-LLMs to learn from in-context visual-guided textual
features and subsequently generate outputs conditioned on the textual-guided
visual features. Moreover, leveraging the flexibility of M-Hub, we design a
variety of in-context demonstrations. Extensive experiments on a diverse range
of downstream multi-modal tasks demonstrate that MMICT significantly
outperforms traditional fine-tuning strategy and the vanilla ICT method that
directly takes the concatenation of all information from different modalities
as input. | Artificial Intelligence |
What field is the article from? | Title: Can ChatGPT Play the Role of a Teaching Assistant in an Introductory Programming Course?
Abstract: The emergence of Large language models (LLMs) is expected to have a major
impact on education. This paper explores the potential of using ChatGPT, an
LLM, as a virtual Teaching Assistant (TA) in an Introductory Programming
Course. We evaluate ChatGPT's capabilities by comparing its performance with
that of human TAs in some TA functions. The TA functions which we focus on
include (1) solving programming assignments, (2) grading student code
submissions, and (3) providing feedback to undergraduate students in an
introductory programming course. Firstly, we investigate how closely ChatGPT's
solutions align with those submitted by students. This analysis goes beyond
code correctness and also considers code quality. Secondly, we assess ChatGPT's
proficiency in grading student code submissions using a given grading rubric
and compare its performance with the grades assigned by human TAs. Thirdly, we
analyze the quality and relevance of the feedback provided by ChatGPT. This
evaluation considers how well ChatGPT addresses mistakes and offers suggestions
for improvement in student solutions from both code correctness and code
quality perspectives. We conclude with a discussion on the implications of
integrating ChatGPT into computing education for automated grading,
personalized learning experiences, and instructional support. | Human-Computer Interaction |
What field is the article from? | Title: Chain of Code: Reasoning with a Language Model-Augmented Code Emulator
Abstract: Code provides a general syntactic structure to build complex programs and
perform precise computations when paired with a code interpreter - we
hypothesize that language models (LMs) can leverage code-writing to improve
Chain of Thought reasoning not only for logic and arithmetic tasks, but also
for semantic ones (and in particular, those that are a mix of both). For
example, consider prompting an LM to write code that counts the number of times
it detects sarcasm in an essay: the LM may struggle to write an implementation
for "detect_sarcasm(string)" that can be executed by the interpreter (handling
the edge cases would be insurmountable). However, LMs may still produce a valid
solution if they not only write code, but also selectively "emulate" the
interpreter by generating the expected output of "detect_sarcasm(string)" and
other lines of code that cannot be executed. In this work, we propose Chain of
Code (CoC), a simple yet surprisingly effective extension that improves LM
code-driven reasoning. The key idea is to encourage LMs to format semantic sub-tasks in a program as flexible pseudocode whose undefined behaviors the interpreter can explicitly catch and hand off to an LM to simulate (as an "LMulator"). Experiments demonstrate that Chain of Code outperforms Chain of
Thought and other baselines across a variety of benchmarks; on BIG-Bench Hard,
Chain of Code achieves 84%, a gain of 12% over Chain of Thought. CoC scales
well with large and small models alike, and broadens the scope of reasoning
questions that LMs can correctly answer by "thinking in code". Project webpage:
https://chain-of-code.github.io. | Computational Linguistics |
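A toy sketch of the interpreter/LM hand-off described above follows; `call_lm` is a hypothetical stub standing in for a real LM call, and the line-parsing is deliberately simplistic:

```python
# Run code normally; when a line touches something the interpreter cannot
# execute (e.g., an undefined semantic function), fall back to the LM.

def call_lm(prompt):
    # Placeholder for a real LM call; here we hard-code a plausible answer.
    return 2 if "detect_sarcasm" in prompt else None

def run_chain_of_code(lines, state):
    for line in lines:
        try:
            exec(line, state)                      # normal interpreter path
        except NameError:
            var, _, expr = line.partition("=")     # e.g. "n = detect_sarcasm(essay)"
            state[var.strip()] = call_lm(f"Simulate: {expr.strip()}")
    return state

program = [
    "essay = 'Oh great, another meeting. Fantastic.'",
    "n = detect_sarcasm(essay)",   # undefined -> handed off to the LM
    "result = n * 10",             # executable again
]
print(run_chain_of_code(program, {})["result"])  # -> 20
```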
What field is the article from? | Title: Efficiently Programming Large Language Models using SGLang
Abstract: Large language models (LLMs) are increasingly used for complex tasks
requiring multiple chained generation calls, advanced prompting techniques,
control flow, and interaction with external environments. However, efficient
systems for programming and executing these applications are lacking. To bridge
this gap, we introduce SGLang, a Structured Generation Language for LLMs.
SGLang is designed for the efficient programming of LLMs and incorporates
primitives for common LLM programming patterns. We have implemented SGLang as a
domain-specific language embedded in Python, and we developed an interpreter, a
compiler, and a high-performance runtime for SGLang. These components work
together to enable optimizations such as parallelism, batching, caching,
sharing, and other compilation techniques. Additionally, we propose
RadixAttention, a novel technique that maintains a Least Recently Used (LRU)
cache of the Key-Value (KV) cache for all requests in a radix tree, enabling
automatic KV cache reuse across multiple generation calls at runtime. SGLang
simplifies the writing of LLM programs and boosts execution efficiency. Our
experiments demonstrate that SGLang can speed up common LLM tasks by up to 5x,
while reducing code complexity and enhancing control. | Artificial Intelligence |
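The prefix-reuse idea behind RadixAttention can be sketched as follows; the real system maintains a radix tree over the KV cache, so this dict-based stand-in only illustrates the longest-prefix lookup and LRU eviction logic, with a string as a placeholder for the cached KV state:

```python
from collections import OrderedDict

class PrefixCache:
    """Drastically simplified stand-in for a radix-tree KV-cache index."""

    def __init__(self, capacity=4):
        self.entries = OrderedDict()   # token-tuple prefix -> cached state
        self.capacity = capacity

    def longest_prefix(self, tokens):
        for i in range(len(tokens), 0, -1):        # longest match first
            key = tuple(tokens[:i])
            if key in self.entries:
                self.entries.move_to_end(key)      # mark as recently used
                return key, self.entries[key]
        return (), None

    def insert(self, tokens, state):
        self.entries[tuple(tokens)] = state
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)       # LRU eviction

cache = PrefixCache()
cache.insert([1, 2, 3], "kv_state_for_[1,2,3]")
prefix, state = cache.longest_prefix([1, 2, 3, 4, 5])
print(len(prefix), state)  # 3 kv_state_for_[1,2,3]: only tokens 4, 5 need prefill
```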
What field is the article from? | Title: Offloading and Quality Control for AI Generated Content Services in Edge Computing Networks
Abstract: AI-Generated Content (AIGC), as a novel manner of providing Metaverse services in the forthcoming Internet paradigm, can help overcome the obstacles posed by immersion requirements. Concurrently, edge computing, as an evolutionary
paradigm of computing in communication systems, effectively augments real-time
interactive services. In pursuit of enhancing the accessibility of AIGC
services, the deployment of AIGC models (e.g., diffusion models) to edge
servers and local devices has become a prevailing trend. Nevertheless, this
approach faces constraints imposed by battery life and computational resources
when tasks are offloaded to local devices, limiting the capacity to deliver
high-quality content to users while adhering to stringent latency requirements.
A tradeoff therefore arises between the utility of AIGC models and offloading decisions in the edge computing paradigm. This paper proposes a joint
optimization algorithm for offloading decisions, computation time, and
diffusion steps of the diffusion models in the reverse diffusion stage.
Moreover, we adopt the average error as the metric for evaluating the quality of the generated results. Experimental results
conclusively demonstrate that the proposed algorithm achieves superior joint
optimization performance compared to the baselines. | Artificial Intelligence |
What field is the article from? | Title: KOALA: Self-Attention Matters in Knowledge Distillation of Latent Diffusion Models for Memory-Efficient and Fast Image Synthesis
Abstract: Stable Diffusion is the mainstay of text-to-image (T2I) synthesis in the
community due to its generation performance and open-source nature. Recently,
Stable Diffusion XL (SDXL), the successor of stable diffusion, has received a
lot of attention due to its significant performance improvements with a higher
resolution of 1024x1024 and a larger model. However, its increased computation cost and model size require higher-end hardware (e.g., a GPU with more VRAM) for end-users, incurring higher operating costs. To address this problem, in
this work, we propose an efficient latent diffusion model for text-to-image
synthesis obtained by distilling the knowledge of SDXL. To this end, we first
perform an in-depth analysis of the denoising U-Net in SDXL, which is the main
bottleneck of the model, and then design a more efficient U-Net based on the
analysis. Secondly, we explore how to effectively distill the generation
capability of SDXL into an efficient U-Net and eventually identify four
essential factors, the core finding being that self-attention is the most important component. With our efficient U-Net and self-attention-based knowledge distillation strategy, we build our efficient T2I models, called KOALA-1B & -700M, reducing the model size by up to 54% and 69% relative to the original SDXL model. In particular, the KOALA-700M is more than twice as fast as SDXL while
still retaining a decent generation quality. We hope that due to its balanced
speed-performance tradeoff, our KOALA models can serve as a cost-effective
alternative to SDXL in resource-constrained environments. | Computer Vision |
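Distilling self-attention maps, the factor the abstract singles out as most important, could look roughly like the sketch below; the tensor shapes and the plain MSE objective are assumptions for illustration, not KOALA's exact recipe:

```python
import torch
import torch.nn.functional as F

# Match the student's self-attention distributions to the teacher's.
batch, heads, tokens = 2, 8, 64
teacher_attn = torch.softmax(torch.randn(batch, heads, tokens, tokens), dim=-1)
student_attn = torch.softmax(torch.randn(batch, heads, tokens, tokens), dim=-1)

attn_kd_loss = F.mse_loss(student_attn, teacher_attn)

# In training this term would be added to the usual denoising loss:
# total_loss = denoise_loss + lambda_attn * attn_kd_loss
print(attn_kd_loss.item())
```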
What field is the article from? | Title: Aligning with Whom? Large Language Models Have Gender and Racial Biases in Subjective NLP Tasks
Abstract: Human perception of language depends on personal backgrounds like gender and
ethnicity. While existing studies have shown that large language models (LLMs)
hold values that are closer to certain societal groups, it is unclear whether
their prediction behaviors on subjective NLP tasks also exhibit a similar bias.
In this study, leveraging the POPQUORN dataset which contains annotations of
diverse demographic backgrounds, we conduct a series of experiments on four
popular LLMs to investigate their capability to understand group differences
and potential biases in their predictions for politeness and offensiveness. We
find that for both tasks, model predictions are closer to the labels from White
and female participants. We further explore prompting with the target
demographic labels and show that including the target demographic in the prompt
actually worsens the model's performance. More specifically, when being
prompted to respond from the perspective of "Black" and "Asian" individuals,
models show lower performance in predicting both overall scores as well as the
scores from corresponding groups. Our results suggest that LLMs hold gender and
racial biases for subjective NLP tasks and that demographic-infused prompts
alone may be insufficient to mitigate such effects. Code and data are available
at https://github.com/Jiaxin-Pei/LLM-Group-Bias. | Computational Linguistics |
What field is the article from? | Title: Web News Timeline Generation with Extended Task Prompting
Abstract: The creation of news timelines is essential for a comprehensive and contextual
understanding of events as they unfold over time. This approach aids in
discerning patterns and trends that might be obscured when news is viewed in
isolation. By organizing news in a chronological sequence, it becomes easier to
track the development of stories, understand the interrelation of events, and
grasp the broader implications of news items. This is particularly helpful in
sectors like finance and insurance, where a timely understanding of event developments, ranging from extreme weather to political upheavals and health crises, is indispensable for effective risk management. While traditional
natural language processing (NLP) techniques have had some success, they often
fail to capture news with the nuanced relevance that is readily apparent to domain experts, hindering broader industry integration. The advance of Large
Language Models (LLMs) offers a renewed opportunity to tackle this challenge.
However, direct prompting LLMs for this task is often ineffective. Our study
investigates the application of an extended task prompting technique to assess
past news relevance. We demonstrate that enhancing conventional prompts with
additional tasks boosts their effectiveness on various news datasets, rendering
news timeline generation practical for professional use. This work has been
deployed as a publicly accessible browser extension which is adopted within our
network. | Artificial Intelligence |
What field is the article from? | Title: DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models
Abstract: Nature evolves creatures with highly complex morphological and behavioral intelligence, while computational methods lag behind in approaching that diversity and efficacy. Co-optimization of artificial creatures'
morphology and control in silico shows promise for applications in physical
soft robotics and virtual character creation; such approaches, however, require
developing new learning algorithms that can reason about function atop pure
structure. In this paper, we present DiffuseBot, a physics-augmented diffusion
model that generates soft robot morphologies capable of excelling in a wide
spectrum of tasks. DiffuseBot bridges the gap between virtually generated
content and physical utility by (i) augmenting the diffusion process with a
physical dynamical simulation which provides a certificate of performance, and
(ii) introducing a co-design procedure that jointly optimizes physical design
and control by leveraging information about physical sensitivities from
differentiable simulation. We showcase a range of simulated and fabricated
robots along with their capabilities. Check our website at
https://diffusebot.github.io/ | Robotics |
What field is the article from? | Title: In Search of Lost Online Test-time Adaptation: A Survey
Abstract: In this paper, we present a comprehensive survey on online test-time
adaptation (OTTA), a paradigm focused on adapting machine learning models to
novel data distributions upon batch arrival. Despite the proliferation of OTTA
methods recently, the field is mired in issues like ambiguous settings,
antiquated backbones, and inconsistent hyperparameter tuning, obfuscating the
real challenges and making reproducibility elusive. For clarity and a rigorous
comparison, we classify OTTA techniques into three primary categories and
subject them to benchmarks using the potent Vision Transformer (ViT) backbone
to discover genuinely effective strategies. Our benchmarks span not only
conventional corrupted datasets such as CIFAR-10/100-C and ImageNet-C but also
real-world shifts embodied in CIFAR-10.1 and CIFAR-10-Warehouse, encapsulating
variations across search engines and synthesized data by diffusion models. To
gauge efficiency in online scenarios, we introduce novel evaluation metrics,
inclusive of FLOPs, shedding light on the trade-offs between adaptation
accuracy and computational overhead. Our findings diverge from existing
literature, indicating: (1) transformers exhibit heightened resilience to
diverse domain shifts, (2) the efficacy of many OTTA methods hinges on ample
batch sizes, and (3) stability in optimization and resistance to perturbations
are critical during adaptation, especially when the batch size is 1. Motivated
by these insights, we point out promising directions for future research. The
source code will be made available. | Artificial Intelligence |
What field is the article from? | Title: Improving Faithfulness for Vision Transformers
Abstract: Vision Transformers (ViTs) have achieved state-of-the-art performance for
various vision tasks. One reason behind the success lies in their ability to
provide plausible innate explanations for the behavior of neural architectures.
However, ViTs suffer from issues with explanation faithfulness, as their focal
points are fragile to adversarial attacks and can be easily changed with even
slight perturbations on the input image. In this paper, we propose a rigorous
approach to mitigate these issues by introducing Faithful ViTs (FViTs). Briefly
speaking, an FViT should have the following two properties: (1) The top-$k$
indices of its self-attention vector should remain mostly unchanged under input
perturbation, indicating stable explanations; (2) The prediction distribution
should be robust to perturbations. To achieve this, we propose a new method
called Denoised Diffusion Smoothing (DDS), which adopts randomized smoothing
and diffusion-based denoising. We theoretically prove that processing ViTs
directly with DDS can turn them into FViTs. We also show that Gaussian noise is
nearly optimal for both $\ell_2$ and $\ell_\infty$-norm cases. Finally, we
demonstrate the effectiveness of our approach through comprehensive experiments
and evaluations. Specifically, we compare our FViTs with other baselines
through visual interpretation and robustness accuracy under adversarial
attacks. Results show that FViTs are more robust against adversarial attacks
while maintaining the explainability of attention, indicating higher
faithfulness. | Computer Vision |
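The DDS pipeline described above (noise, denoise, classify, aggregate) can be sketched as follows; `denoiser` and `vit` are hypothetical stand-ins for a diffusion denoiser and a ViT classifier, and majority voting replaces the paper's full certification machinery:

```python
import torch

def dds_predict(x, vit, denoiser, sigma=0.25, n_samples=8):
    votes = []
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)     # randomized smoothing
        denoised = denoiser(noisy)                  # diffusion-based denoising
        votes.append(vit(denoised).argmax(dim=-1))  # per-sample prediction
    return torch.stack(votes).mode(dim=0).values    # majority vote

# Toy stand-ins so the sketch runs end to end:
denoiser = lambda x: x.clamp(-1, 1)
vit = lambda x: torch.stack([x.mean(dim=(1, 2, 3)), -x.mean(dim=(1, 2, 3))], dim=-1)
x = torch.randn(2, 3, 32, 32)
print(dds_predict(x, vit, denoiser))
```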
What field is the article from? | Title: 4M: Massively Multimodal Masked Modeling
Abstract: Current machine learning models for vision are often highly specialized and
limited to a single modality and task. In contrast, recent large language
models exhibit a wide range of capabilities, hinting at a possibility for
similarly versatile models in computer vision. In this paper, we take a step in
this direction and propose a multimodal training scheme called 4M. It consists
of training a single unified Transformer encoder-decoder using a masked
modeling objective across a wide range of input/output modalities - including
text, images, geometric, and semantic modalities, as well as neural network
feature maps. 4M achieves scalability by unifying the representation space of
all modalities through mapping them into discrete tokens and performing
multimodal masked modeling on a small randomized subset of tokens.
4M leads to models that exhibit several key capabilities: (1) they can
perform a diverse set of vision tasks out of the box, (2) they excel when
fine-tuned for unseen downstream tasks or new input modalities, and (3) they
can function as a generative model that can be conditioned on arbitrary
modalities, enabling a wide variety of expressive multimodal editing
capabilities with remarkable flexibility.
Through experimental analyses, we demonstrate the potential of 4M for
training versatile and scalable foundation models for vision tasks, setting the
stage for further exploration in multimodal learning for vision and other
domains. | Computer Vision |
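The token-level recipe above (discretize every modality, then mask a random subset) reduces to a few lines; the tokenizers, vocabulary size, and [MASK] id below are hypothetical stand-ins:

```python
import torch

vocab_size, seq_len = 1024, 32
text_tokens = torch.randint(0, vocab_size, (seq_len,))     # from a text tokenizer
image_tokens = torch.randint(0, vocab_size, (seq_len,))    # from a VQ image tokenizer
tokens = torch.cat([text_tokens, image_tokens])            # unified token sequence

mask = torch.rand(tokens.shape) < 0.15                     # small random subset to mask
inputs = tokens.clone()
inputs[mask] = vocab_size                                  # hypothetical [MASK] id
targets = tokens[mask]                                     # predict only masked tokens

print(inputs.shape, targets.shape)  # the encoder-decoder trains on (inputs, targets)
```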
What field is the article from? | Title: Adaptive Image Registration: A Hybrid Approach Integrating Deep Learning and Optimization Functions for Enhanced Precision
Abstract: Image registration has traditionally been done using two distinct approaches:
learning based methods, relying on robust deep neural networks, and
optimization-based methods, applying complex mathematical transformations to
warp images accordingly. Of course, both paradigms offer advantages and
disadvantages, and, in this work, we seek to combine their respective strengths
into a single streamlined framework, using the outputs of the learning based
method as initial parameters for optimization while prioritizing computational
power for the image pairs that incur the greatest loss. Our investigations showed an improvement of 1.5% in testing when utilizing the best-performing state-of-the-art model as the backbone of the framework, while maintaining the same inference time, along with a substantial 0.94 percentage-point gain in deformation field smoothness. | Computer Vision |
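The hybrid scheme described above (network prediction as the optimizer's starting point) can be sketched as follows; `net`, the additive "warp", and the similarity loss are simplified placeholders, not the paper's actual registration model:

```python
import torch

def register(fixed, moving, net, steps=50, lr=1e-2):
    params = net(fixed, moving).detach().requires_grad_(True)  # learned init
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):                                     # optimization refinement
        warped = moving + params                               # toy "warp"
        loss = ((warped - fixed) ** 2).mean()                  # similarity loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params, loss.item()

net = lambda f, m: torch.zeros_like(f)          # stand-in for the deep model
fixed, moving = torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16)
params, final_loss = register(fixed, moving, net)
print(final_loss)  # shrinks as optimization refines the learned initialization
```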
What field is the article from? | Title: A Social-aware Gaussian Pre-trained Model for Effective Cold-start Recommendation
Abstract: The use of pre-training is an emerging technique to enhance a neural model's
performance, which has been shown to be effective for many neural language
models such as BERT. This technique has also been used to enhance the
performance of recommender systems. In such recommender systems, pre-training
models are used to learn a better initialisation for both users and items.
However, existing pre-trained recommender systems tend to only
incorporate the user interaction data at the pre-training stage, making it
difficult to deliver good recommendations, especially when the interaction data
is sparse. To alleviate this common data sparsity issue, we propose to
pre-train the recommendation model not only with the interaction data but also
with other available information such as the social relations among users,
thereby providing the recommender system with a better initialisation compared
with solely relying on the user interaction data. We propose a novel
recommendation model, the Social-aware Gaussian Pre-trained model (SGP), which
encodes the user social relations and interaction data at the pre-training
stage in a Graph Neural Network (GNN). Afterwards, in the subsequent
fine-tuning stage, our SGP model adopts a Gaussian Mixture Model (GMM) to
factorise these pre-trained embeddings for further training, thereby benefiting
the cold-start users from these pre-built social relations. Our extensive
experiments on three public datasets show that, in comparison to 16 competitive
baselines, our SGP model significantly outperforms the best baseline by up to 7.7% in terms of NDCG@10. In addition, we show that SGP effectively alleviates the cold-start problem, especially when users newly register to the system through their friends' suggestions. | Information Retrieval |
What field is the article from? | Title: Teaching Specific Scientific Knowledge into Large Language Models through Additional Training
Abstract: Through additional training, we explore embedding specialized scientific
knowledge into the Llama 2 Large Language Model (LLM). Key findings reveal that
effective knowledge integration requires reading texts from multiple
perspectives, especially in instructional formats. We utilize text augmentation
to tackle the scarcity of specialized texts, including style conversions and
translations. Hyperparameter optimization proves crucial, with models of different sizes (7b, 13b, and 70b) all undergoing additional training reasonably well. To validate our methods, we construct a dataset of 65,000 scientific papers. Although we
have succeeded in partially embedding knowledge, the study highlights the
complexities and limitations of incorporating specialized information into
LLMs, suggesting areas for further improvement. | Computational Linguistics |
What field is the article from? | Title: Combinatorial Optimization with Policy Adaptation using Latent Space Search
Abstract: Combinatorial Optimization underpins many real-world applications and yet,
designing performant algorithms to solve these complex, typically NP-hard,
problems remains a significant research challenge. Reinforcement Learning (RL)
provides a versatile framework for designing heuristics across a broad spectrum
of problem domains. However, despite notable progress, RL has not yet
supplanted industrial solvers as the go-to solution. Current approaches
emphasize pre-training heuristics that construct solutions but often rely on
search procedures with limited variance, such as stochastically sampling
numerous solutions from a single policy or employing computationally expensive
fine-tuning of the policy on individual problem instances. Building on the
intuition that performant search at inference time should be anticipated during
pre-training, we propose COMPASS, a novel RL approach that parameterizes a
distribution of diverse and specialized policies conditioned on a continuous
latent space. We evaluate COMPASS across three canonical problems - Travelling
Salesman, Capacitated Vehicle Routing, and Job-Shop Scheduling - and
demonstrate that our search strategy (i) outperforms state-of-the-art
approaches on 11 standard benchmarking tasks and (ii) generalizes better,
surpassing all other approaches on a set of 18 procedurally transformed
instance distributions. | Machine Learning |
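Search over a latent-conditioned policy, as described above, can be sketched with simple random sampling; `evaluate_policy` is a hypothetical stand-in for rolling out the specialized policy a latent vector induces, and the paper's actual search procedure may be more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_policy(z, instance):
    # Placeholder reward: in reality, roll out policy(z) on the instance.
    return -np.sum((z - instance) ** 2)

def latent_space_search(instance, dim=8, n_candidates=64):
    best_z, best_reward = None, -np.inf
    for _ in range(n_candidates):
        z = rng.normal(size=dim)                 # sample from the latent space
        reward = evaluate_policy(z, instance)
        if reward > best_reward:
            best_z, best_reward = z, reward
    return best_z, best_reward

instance = rng.normal(size=8)                    # hypothetical problem encoding
z, reward = latent_space_search(instance)
print(reward)
```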
What field is the article from? | Title: ConvD: Attention Enhanced Dynamic Convolutional Embeddings for Knowledge Graph Completion
Abstract: Knowledge graphs generally suffer from incompleteness, which can be
alleviated by completing the missing information. Deep knowledge convolutional
embedding models based on neural networks are currently popular methods for
knowledge graph completion. However, most existing methods use external
convolution kernels and traditional plain convolution processes, which limits
the feature interaction capability of the model. In this paper, we propose a
novel dynamic convolutional embedding model ConvD for knowledge graph
completion, which directly reshapes the relation embeddings into multiple
internal convolution kernels to improve the external convolution kernels of the
traditional convolutional embedding model. The internal convolution kernels can
effectively augment the feature interaction between the relation embeddings and
entity embeddings, thus enhancing the model embedding performance. Moreover, we
design a priori knowledge-optimized attention mechanism, which can assign
different contribution weight coefficients to multiple relation convolution
kernels for dynamic convolution to improve the expressiveness of the model
further. Extensive experiments on various datasets show that our proposed model
consistently outperforms the state-of-the-art baseline methods, with average
improvements ranging from 11.30\% to 16.92\% across all model evaluation
metrics. Ablation experiments verify the effectiveness of each component module
of the ConvD model. | Computational Linguistics |
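The core mechanism above, reshaping a relation embedding into convolution kernels applied to the entity embedding, can be sketched in a few lines; the dimensions and single-example setup are illustrative assumptions, not ConvD's exact architecture:

```python
import torch
import torch.nn.functional as F

entity = torch.randn(1, 1, 10, 20)               # entity embedding as a 2D map
relation = torch.randn(2 * 3 * 3)                # relation embedding, 18 dims

kernels = relation.view(2, 1, 3, 3)              # 2 internal convolution kernels
features = F.conv2d(entity, kernels, padding=1)  # relation-conditioned features
print(features.shape)                            # torch.Size([1, 2, 10, 20])
```

An attention mechanism, as the abstract describes, would then weight several such relation-derived kernels before the convolution.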