id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2308.02665 | Gabriel Roccabruna | Michele Yin, Gabriel Roccabruna, Abhinav Azad, Giuseppe Riccardi | Let's Give a Voice to Conversational Agents in Virtual Reality | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The dialogue experience with conversational agents can be greatly enhanced
with multimodal and immersive interactions in virtual reality. In this work, we
present an open-source architecture with the goal of simplifying the
development of conversational agents operating in virtual environments. The
architecture offers the possibility of plugging in conversational agents of
different domains and adding custom or cloud-based Speech-To-Text and
Text-To-Speech models to make the interaction voice-based. Using this
architecture, we present two conversational prototypes operating in the digital
health domain developed in Unity for both non-immersive displays and VR
headsets.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2023 18:51:38 GMT"
}
] | 1,691,452,800,000 | [
[
"Yin",
"Michele",
""
],
[
"Roccabruna",
"Gabriel",
""
],
[
"Azad",
"Abhinav",
""
],
[
"Riccardi",
"Giuseppe",
""
]
] |
2308.02666 | Justin Stevens | Justin Stevens, Vadim Bulitko, David Thue | Solving Witness-type Triangle Puzzles Faster with an Automatically
Learned Human-Explainable Predicate | 10 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automatically solving puzzle instances in the game The Witness can guide
players toward solutions and help puzzle designers generate better puzzles. In
the latter case such an Artificial Intelligence puzzle solver can inform a
human puzzle designer and procedural puzzle generator to produce better
instances. The puzzles, however, are combinatorially difficult and search-based
solvers can require large amounts of time and memory. We accelerate such search
by automatically learning a human-explainable predicate that predicts whether a
partial path to a Witness-type puzzle is not completable to a solution path. We
prove a key property of the learned predicate which allows us to use it for
pruning successor states in search thereby accelerating search by an average of
six times while maintaining completeness of the underlying search. Conversely,
given a fixed search time budget per puzzle, our predicate-accelerated search
can solve more puzzle instances of larger sizes than the baseline search.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2023 18:52:18 GMT"
}
] | 1,691,452,800,000 | [
[
"Stevens",
"Justin",
""
],
[
"Bulitko",
"Vadim",
""
],
[
"Thue",
"David",
""
]
] |
2308.02730 | Mucahit Cevik | Mucahit Cevik, Can Kavaklioglu, Fahad Razak, Amol Verma, Ayse Basar | Assessing the impact of emergency department short stay units using
length-of-stay prediction and discrete event simulation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately predicting hospital length-of-stay at the time a patient is
admitted to hospital may help guide clinical decision making and resource
allocation. In this study we aim to build a decision support system that
predicts hospital length-of-stay for patients admitted to general internal
medicine from the emergency department. We conduct an exploratory data analysis
and employ feature selection methods to identify the attributes that result in
the best predictive performance. We also develop a discrete-event simulation
model to assess the performances of the prediction models in a practical
setting. Our results show that the recommendation performances of the proposed
approaches are generally acceptable and do not benefit from the feature
selection. Further, the results indicate that hospital length-of-stay could be
predicted with reasonable accuracy (e.g., AUC value for classifying short and
long stay patients is 0.69) using patient admission demographics, laboratory
test results, diagnostic imaging, vital signs and clinical documentation.
| [
{
"version": "v1",
"created": "Fri, 4 Aug 2023 22:26:02 GMT"
}
] | 1,691,452,800,000 | [
[
"Cevik",
"Mucahit",
""
],
[
"Kavaklioglu",
"Can",
""
],
[
"Razak",
"Fahad",
""
],
[
"Verma",
"Amol",
""
],
[
"Basar",
"Ayse",
""
]
] |
2308.02835 | Chathura Gamage | Chathura Gamage, Vimukthini Pinto, Matthew Stephenson, Jochen Renz | Physics-Based Task Generation through Causal Sequence of Physical
Interactions | The 19th AAAI Conference on Artificial Intelligence and Interactive
Digital Entertainment (AIIDE-23) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing tasks in a physical environment is a crucial yet challenging
problem for AI systems operating in the real world. Physics simulation-based
tasks are often employed to facilitate research that addresses this challenge.
In this paper, first, we present a systematic approach for defining a physical
scenario using a causal sequence of physical interactions between objects.
Then, we propose a methodology for generating tasks in a physics-simulating
environment using these defined scenarios as inputs. Our approach enables a
better understanding of the granular mechanics required for solving
physics-based tasks, thereby facilitating accurate evaluation of AI systems'
physical reasoning capabilities. We demonstrate our proposed task generation
methodology using the physics-based puzzle game Angry Birds and evaluate the
generated tasks using a range of metrics, including physical stability,
solvability using intended physical interactions, and accidental solvability
using unintended solutions. We believe that the tasks generated using our
proposed methodology can facilitate a nuanced evaluation of physical reasoning
agents, thus paving the way for the development of agents for more
sophisticated real-world applications.
| [
{
"version": "v1",
"created": "Sat, 5 Aug 2023 10:15:18 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 16:51:45 GMT"
}
] | 1,692,230,400,000 | [
[
"Gamage",
"Chathura",
""
],
[
"Pinto",
"Vimukthini",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Renz",
"Jochen",
""
]
] |
2308.02950 | Louis Vervoort | Louis Vervoort, Vitaliy Mizyakov, Anastasia Ugleva | A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We argue that a key reasoning skill that any advanced AI, say GPT-4, should
master in order to qualify as a 'thinking machine', or AGI, is
hypothetic-deductive reasoning. Problem-solving or question-answering can quite
generally be construed as involving two steps: hypothesizing that a certain set
of hypotheses T applies to the problem or question at hand, and deducing the
solution or answer from T - hence the term hypothetic-deductive reasoning. An
elementary proxy of hypothetic-deductive reasoning is causal reasoning. We
propose simple tests for both types of reasoning, and apply them to ChatGPT.
Our study shows that, at present, the chatbot has a limited capacity for either
type of reasoning, as soon as the problems considered are somewhat complex.
However, we submit that if an AI were capable of this type of reasoning in
a sufficiently wide range of contexts, it would be an AGI.
| [
{
"version": "v1",
"created": "Sat, 5 Aug 2023 20:33:13 GMT"
}
] | 1,691,452,800,000 | [
[
"Vervoort",
"Louis",
""
],
[
"Mizyakov",
"Vitaliy",
""
],
[
"Ugleva",
"Anastasia",
""
]
] |
2308.03028 | Lei Song | Lei Song, Chuheng Zhang, Li Zhao, Jiang Bian | Pre-Trained Large Language Models for Industrial Control | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | For industrial control, developing high-performance controllers with few
samples and low technical debt is appealing. Foundation models, possessing rich
prior knowledge obtained from pre-training on an Internet-scale corpus, have the
potential to be a good controller with proper prompts. In this paper, we take
HVAC (Heating, Ventilation, and Air Conditioning) building control as an
example to examine the ability of GPT-4 (one of the first-tier foundation
models) as the controller. To control HVAC, we wrap the task as a language game
by providing text including a short description for the task, several selected
demonstrations, and the current observation to GPT-4 on each step and execute
the actions returned by GPT-4. We conduct a series of experiments to answer the
following questions: 1) How well can GPT-4 control HVAC? 2) How well does GPT-4
generalize to different scenarios for HVAC control? 3) How do different parts of
the text context affect performance? In general, we found that GPT-4 achieves
performance comparable to RL methods with few samples and low technical
debt, indicating the potential of directly applying foundation models to
industrial control tasks.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 06:01:18 GMT"
}
] | 1,691,452,800,000 | [
[
"Song",
"Lei",
""
],
[
"Zhang",
"Chuheng",
""
],
[
"Zhao",
"Li",
""
],
[
"Bian",
"Jiang",
""
]
] |
2308.03107 | Ruoling Peng | Ruoling Peng, Kang Liu, Po Yang, Zhipeng Yuan, Shunbao Li | Embedding-based Retrieval with LLM for Effective Agriculture Information
Extracting from Unstructured Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pest identification is a crucial aspect of pest control in agriculture.
However, most farmers are not capable of accurately identifying pests in the
field, and there is a limited number of structured data sources available for
rapid querying. In this work, we explored using a domain-agnostic, general
pre-trained large language model (LLM) to extract structured data from
agricultural documents with minimal or no human intervention. We propose a
methodology that involves text retrieval and filtering using embedding-based
retrieval, followed by LLM question-answering to automatically extract entities
and attributes from the documents, and transform them into structured data. In
comparison to existing methods, our approach achieves consistently better
accuracy in the benchmark while maintaining efficiency.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 13:18:38 GMT"
}
] | 1,691,452,800,000 | [
[
"Peng",
"Ruoling",
""
],
[
"Liu",
"Kang",
""
],
[
"Yang",
"Po",
""
],
[
"Yuan",
"Zhipeng",
""
],
[
"Li",
"Shunbao",
""
]
] |
2308.03150 | Nemali Venkat Sai Abhishek | N V S Abhishek, Pushpak Bhattacharyya | "We care": Improving Code Mixed Speech Emotion Recognition in
Customer-Care Conversations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Speech Emotion Recognition (SER) is the task of identifying the emotion
expressed in a spoken utterance. Emotion recognition is essential in building
robust conversational agents in domains such as law, healthcare, education, and
customer support. Most of the studies published on SER use datasets created by
employing professional actors in a noise-free environment. In natural settings
such as a customer care conversation, the audio is often noisy with speakers
regularly switching between different languages as they see fit. We have worked
in collaboration with a leading unicorn in the Conversational AI sector to
develop Natural Speech Emotion Dataset (NSED). NSED is a natural code-mixed
speech emotion dataset where each utterance in a conversation is annotated with
emotion, sentiment, valence, arousal, and dominance (VAD) values. In this
paper, we show that by incorporating word-level VAD values we improve on the
task of SER by 2%, for negative emotions, over the baseline value for NSED.
High accuracy for negative emotion recognition is essential because customers
expressing negative opinions/views need to be pacified with urgency, lest
complaints and dissatisfaction snowball and get out of hand. Escalation of
negative opinions speedily is crucial for business interests. Our study then
can be utilized to develop conversational agents which are more polite and
empathetic in such situations.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 15:56:12 GMT"
}
] | 1,691,452,800,000 | [
[
"Abhishek",
"N V S",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
2308.03161 | Rafaël Brandt | Rafaël Brandt, Daan Raatjens, Georgi Gaydadjiev | Precise Benchmarking of Explainable AI Attribution Methods | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rationale behind a deep learning model's output is often difficult to
understand by humans. EXplainable AI (XAI) aims at solving this by developing
methods that improve interpretability and explainability of machine learning
models. Reliable evaluation metrics are needed to assess and compare different
XAI methods. We propose a novel evaluation approach for benchmarking
state-of-the-art XAI attribution methods. Our proposal consists of a synthetic
classification model accompanied by its derived ground truth explanations
allowing high-precision representation of input node contributions. We also
propose new high-fidelity metrics to quantify the difference between
explanations of the investigated XAI method and those derived from the
synthetic model. Our metrics allow assessment of explanations in terms of
precision and recall separately. Also, we propose metrics to independently
evaluate negative or positive contributions of inputs. Our proposal provides
deeper insights into XAI methods output. We investigate our proposal by
constructing a synthetic convolutional image classification model and
benchmarking several widely used XAI attribution methods using our evaluation
approach. We compare our results with established prior XAI evaluation metrics.
By deriving the ground truth directly from the constructed model in our method,
we ensure the absence of bias, e.g., subjective bias based on the training
set. Our experimental results provide novel insights into the performance of
Guided-Backprop and Smoothgrad XAI methods that are widely in use. Both have
good precision and recall scores among positively contributing pixels (0.7,
0.76 and 0.7, 0.77, respectively), but poor precision scores among negatively
contributing pixels (0.44, 0.61 and 0.47, 0.75, resp.). The recall scores in
the latter case remain close. We show that our metrics are among the fastest in
terms of execution time.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 17:03:32 GMT"
}
] | 1,691,452,800,000 | [
[
"Brandt",
"Rafaël",
""
],
[
"Raatjens",
"Daan",
""
],
[
"Gaydadjiev",
"Georgi",
""
]
] |
2308.03176 | Shuang Ao | Shuang Ao | Building Safe and Reliable AI systems for Safety Critical Tasks with
Vision-Language Processing | 4 pages | 2023 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although AI systems have been applied in various fields and achieved
impressive performance, their safety and reliability are still a big concern.
This is especially important for safety-critical tasks. One shared
characteristic of these critical tasks is their risk sensitivity, where small
mistakes can cause big consequences and even endanger life. There are several
factors that could be guidelines for the successful deployment of AI systems in
sensitive tasks: (i) failure detection and out-of-distribution (OOD) detection;
(ii) overfitting identification; (iii) uncertainty quantification for
predictions; (iv) robustness to data perturbations. These factors are also
challenges of current AI systems, which are major blocks for building safe and
reliable AI. Specifically, the current AI algorithms are unable to identify
common causes for failure detection. Furthermore, additional techniques are
required to quantify the quality of predictions. All these contribute to
inaccurate uncertainty quantification, which lowers trust in predictions. Hence
obtaining accurate model uncertainty quantification and its further improvement
are challenging. To address these issues, many techniques have been proposed,
such as regularization methods and learning strategies. As vision and language
are the most typical data type and have many open source benchmark datasets,
this thesis will focus on vision-language data processing for tasks like
classification, image captioning, and vision question answering. In this
thesis, we aim to build a safeguard by further developing current techniques to
ensure the accurate model uncertainty for safety-critical tasks.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 18:05:59 GMT"
}
] | 1,691,452,800,000 | [
[
"Ao",
"Shuang",
""
]
] |
2308.03179 | Shuang Ao | Shuang Ao, Stefan Rueger, Advaith Siddharthan | Empirical Optimal Risk to Quantify Model Trustworthiness for Failure
Detection | 7 pages | 2023 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Failure detection (FD) in AI systems is a crucial safeguard for the
deployment for safety-critical tasks. The common evaluation method of FD
performance is the Risk-coverage (RC) curve, which reveals the trade-off
between the data coverage rate and the performance on accepted data. One common
way to quantify the RC curve is to calculate the area under it.
However, this metric does not inform on how suited any method is for FD, or
what the optimal coverage rate should be. As FD aims to achieve higher
performance with fewer data discarded, evaluating with partial coverage
excluding the most uncertain samples is more intuitive and meaningful than full
coverage. In addition, there is an optimal point in the coverage where the
model could achieve ideal performance theoretically. We propose the Excess Area
Under the Optimal RC Curve (E-AUoptRC), which takes the area in coverage from
the optimal point to full coverage. Further, the model performance at this
optimal point can represent both model learning ability and calibration. We
propose it as the Trust Index (TI), a complementary evaluation metric to the
overall model accuracy. We report extensive experiments on three benchmark
image datasets with ten variants of transformer and CNN models. Our results
show that our proposed methods can better reflect the model trustworthiness
than existing evaluation metrics. We further observe that a model with high
overall accuracy does not always yield a high TI, which indicates the
necessity of the proposed Trust Index as a complementary metric to the model
overall accuracy. The code is available at
\url{https://github.com/AoShuang92/optimal_risk}.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 18:11:42 GMT"
}
] | 1,691,452,800,000 | [
[
"Ao",
"Shuang",
""
],
[
"Rueger",
"Stefan",
""
],
[
"Siddharthan",
"Advaith",
""
]
] |
2308.03185 | Guangmo Tong | Mina Samizadeh, Guangmo Tong | VN-Solver: Vision-based Neural Solver for Combinatorial Optimization
over Graphs | CIKM 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Data-driven approaches have been proven effective in solving combinatorial
optimization problems over graphs such as the traveling salesman problems and
the vehicle routing problem. The rationale behind such methods is that the
input instances may follow distributions with salient patterns that can be
leveraged to overcome the worst-case computational hardness. For optimization
problems over graphs, the common practice of neural combinatorial solvers
consumes the inputs in the form of adjacency matrices. In this paper, we
explore a vision-based method that is conceptually novel: can neural models
solve graph optimization problems by \textit{taking a look at the graph
pattern}? Our results suggest that the performance of such vision-based methods
is not only non-trivial but also comparable to the state-of-the-art
matrix-based methods, which opens a new avenue for developing data-driven
optimization solvers.
| [
{
"version": "v1",
"created": "Sun, 6 Aug 2023 18:33:11 GMT"
}
] | 1,691,452,800,000 | [
[
"Samizadeh",
"Mina",
""
],
[
"Tong",
"Guangmo",
""
]
] |
2308.03358 | Jingdi Chen | Jingdi Chen, Tian Lan, Carlee Joe-Wong | RGMComm: Return Gap Minimization via Discrete Communications in
Multi-Agent Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Communication is crucial for solving cooperative Multi-Agent Reinforcement
Learning tasks in partially observable Markov Decision Processes. Existing
works often rely on black-box methods to encode local information/features into
messages shared with other agents, leading to the generation of continuous
messages with high communication overhead and poor interpretability. Prior
attempts at discrete communication methods generate one-hot vectors trained as
part of agents' actions and use the Gumbel softmax operation for calculating
message gradients, which are all heuristic designs that do not provide any
quantitative guarantees on the expected return. This paper establishes an upper
bound on the return gap between an ideal policy with full observability and an
optimal partially observable policy with discrete communication. This result
enables us to recast multi-agent communication into a novel online clustering
problem over the local observations at each agent, with messages as cluster
labels and the upper bound on the return gap as clustering loss. To minimize
the return gap, we propose the Return-Gap-Minimization Communication (RGMComm)
algorithm, which is a surprisingly simple design of discrete message generation
functions and is integrated with reinforcement learning through the utilization
of a novel Regularized Information Maximization loss function, which
incorporates cosine-distance as the clustering metric. Evaluations show that
RGMComm significantly outperforms state-of-the-art multi-agent communication
baselines and can achieve nearly optimal returns with few-bit messages that are
naturally interpretable.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 07:26:55 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Aug 2023 14:06:59 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Aug 2023 19:25:33 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Dec 2023 19:40:40 GMT"
},
{
"version": "v5",
"created": "Mon, 18 Dec 2023 20:20:19 GMT"
}
] | 1,703,030,400,000 | [
[
"Chen",
"Jingdi",
""
],
[
"Lan",
"Tian",
""
],
[
"Joe-Wong",
"Carlee",
""
]
] |
2308.03376 | Olivier Spanjaard | Hugo Gilbert (LAMSADE), Mohamed Ouaguenouni, Meltem Ozturk (LAMSADE),
Olivier Spanjaard | Robust Ordinal Regression for Subsets Comparisons with Interactions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is dedicated to a robust ordinal method for learning the
preferences of a decision maker between subsets. The decision model, derived
from Fishburn and LaValle (1996) and whose parameters we learn, is general
enough to be compatible with any strict weak order on subsets, thanks to the
consideration of possible interactions between elements. Moreover, we allow
the method to abstain from predicting some preferences when the available
preference data are not compatible with a reliable prediction. A predicted
preference is considered
reliable if all the simplest models (Occam's razor) explaining the preference
data agree on it. Following the robust ordinal regression methodology, our
predictions are based on an uncertainty set encompassing the possible values of
the model parameters. We define a robust ordinal dominance relation between
subsets and we design a procedure to determine whether this dominance relation
holds. Numerical tests are provided on synthetic and real-world data to
evaluate the richness and reliability of the preference predictions made.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 07:54:33 GMT"
}
] | 1,691,452,800,000 | [
[
"Gilbert",
"Hugo",
"",
"LAMSADE"
],
[
"Ouaguenouni",
"Mohamed",
"",
"LAMSADE"
],
[
"Ozturk",
"Meltem",
"",
"LAMSADE"
],
[
"Spanjaard",
"Olivier",
""
]
] |
2308.03377 | Moyu Zhang | Moyu Zhang, Xinning Zhu, Chunhong Zhang, Wenchen Qian, Feng Pan, Hui
Zhao | Counterfactual Monotonic Knowledge Tracing for Assessing Students'
Dynamic Mastery of Knowledge Concepts | Accepted by CIKM 2023, 10 pages, 5 figures, 4 tables | CIKM 2023 | 10.1145/3583780.3614827 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the core of the Knowledge Tracing (KT) task, assessing students' dynamic
mastery of knowledge concepts is crucial for both offline teaching and online
educational applications. Since students' mastery of knowledge concepts is
often unlabeled, existing KT methods rely on the implicit paradigm that maps
historical practice to mastery of knowledge concepts and then to students'
responses to practices in order to address the challenge of unlabeled concept
mastery. However,
purely predicting student responses without imposing specific constraints on
hidden concept mastery values does not guarantee the accuracy of these
intermediate values as concept mastery values. To address this issue, we
propose a principled approach called Counterfactual Monotonic Knowledge Tracing
(CMKT), which builds on the implicit paradigm described above by using a
counterfactual assumption to constrain the evolution of students' mastery of
knowledge concepts.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 07:57:26 GMT"
}
] | 1,693,958,400,000 | [
[
"Zhang",
"Moyu",
""
],
[
"Zhu",
"Xinning",
""
],
[
"Zhang",
"Chunhong",
""
],
[
"Qian",
"Wenchen",
""
],
[
"Pan",
"Feng",
""
],
[
"Zhao",
"Hui",
""
]
] |
2308.03427 | Jingqing Ruan | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao,
Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | TPTU: Large Language Model-based AI Agents for Task Planning and Tool
Usage | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 09:22:03 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Oct 2023 10:53:54 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Nov 2023 11:15:11 GMT"
}
] | 1,699,401,600,000 | [
[
"Ruan",
"Jingqing",
""
],
[
"Chen",
"Yihong",
""
],
[
"Zhang",
"Bin",
""
],
[
"Xu",
"Zhiwei",
""
],
[
"Bao",
"Tianpeng",
""
],
[
"Du",
"Guoqing",
""
],
[
"Shi",
"Shiwei",
""
],
[
"Mao",
"Hangyu",
""
],
[
"Li",
"Ziyue",
""
],
[
"Zeng",
"Xingyu",
""
],
[
"Zhao",
"Rui",
""
]
] |
2308.03447 | Rita T. Sousa | Rita T. Sousa, Sara Silva, Heiko Paulheim, Catia Pesquita | Biomedical Knowledge Graph Embeddings with Negative Statements | 19 pages, 4 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A knowledge graph is a powerful representation of real-world entities and
their relations. The vast majority of these relations are defined as positive
statements, but the importance of negative statements is increasingly
recognized, especially under an Open World Assumption. Explicitly considering
negative statements has been shown to improve performance on tasks such as
entity summarization and question answering or domain-specific tasks such as
protein function prediction. However, no attention has been given to the
exploration of negative statements by knowledge graph embedding approaches
despite the potential of negative statements to produce more accurate
representations of entities in a knowledge graph.
We propose a novel approach, TrueWalks, to incorporate negative statements
into the knowledge graph representation learning process. In particular, we
present a novel walk-generation method that is able to not only differentiate
between positive and negative statements but also take into account the
semantic implications of negation in ontology-rich knowledge graphs. This is of
particular importance for applications in the biomedical domain, where the
inadequacy of embedding approaches regarding negative statements at the
ontology level has been identified as a crucial limitation.
We evaluate TrueWalks in ontology-rich biomedical knowledge graphs in two
different predictive tasks based on KG embeddings: protein-protein interaction
prediction and gene-disease association prediction. We conduct an extensive
analysis over established benchmarks and demonstrate that our method is able to
improve the performance of knowledge graph embeddings on all tasks.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 10:08:25 GMT"
}
] | 1,691,452,800,000 | [
[
"Sousa",
"Rita T.",
""
],
[
"Silva",
"Sara",
""
],
[
"Paulheim",
"Heiko",
""
],
[
"Pesquita",
"Catia",
""
]
] |
2308.03450 | Zicong Hong | Zicong Hong, Xiaoyu Qiu, Jian Lin, Wuhui Chen, Yue Yu, Hui Wang, Song
Guo, Wen Gao | Intelligence-Endogenous Management Platform for Computing and Network
Convergence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Massive emerging applications are driving demand for the ubiquitous
deployment of computing power today. This trend not only spurs the recent
popularity of the \emph{Computing and Network Convergence} (CNC), but also
introduces an urgent need for the intelligentization of a management platform
to coordinate changing resources and tasks in the CNC. Therefore, in this
article, we present the concept of an intelligence-endogenous management
platform for CNCs called \emph{CNC brain} based on artificial intelligence
technologies. It aims at efficiently and automatically matching the supply and
demand with high heterogeneity in a CNC via four key building blocks, i.e.,
perception, scheduling, adaptation, and governance, throughout the CNC's life
cycle. Their functionalities, goals, and challenges are presented. To examine
the effectiveness of the proposed concept and framework, we also implement a
prototype for the CNC brain based on a deep reinforcement learning technology.
Also, it is evaluated on a CNC testbed that integrates two open-source and
popular frameworks (OpenFaas and Kubernetes) and a real-world business dataset
provided by Microsoft Azure. The evaluation results prove the proposed method's
effectiveness in terms of resource utilization and performance. Finally, we
highlight the future research directions of the CNC brain.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 10:12:15 GMT"
}
] | 1,691,452,800,000 | [
[
"Hong",
"Zicong",
""
],
[
"Qiu",
"Xiaoyu",
""
],
[
"Lin",
"Jian",
""
],
[
"Chen",
"Wuhui",
""
],
[
"Yu",
"Yue",
""
],
[
"Wang",
"Hui",
""
],
[
"Guo",
"Song",
""
],
[
"Gao",
"Wen",
""
]
] |
2308.03488 | Moyu Zhang | Moyu Zhang, Xinning Zhu, Chunhong Zhang, Feng Pan, Wenchen Qian, Hui
Zhao | No Length Left Behind: Enhancing Knowledge Tracing for Modeling
Sequences of Excessive or Insufficient Lengths | Accepted by CIKM 2023, 10 pages, 8 figures, 5 tables | CIKM 2023 | 10.1145/3583780.3614988 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge tracing (KT) aims to predict students' responses to practices based
on their historical question-answering behaviors. However, most current KT
methods focus on improving overall AUC, leaving ample room for optimization in
modeling sequences of excessive or insufficient lengths. As sequences get
longer, computational costs will increase exponentially. Therefore, KT methods
usually truncate sequences to an acceptable length, which makes it difficult
for models on online service systems to capture complete historical practice
behaviors of students with too long sequences. Conversely, modeling students
with short practice sequences using most KT methods may result in overfitting
due to limited observation samples. To address the above limitations, we
propose a model called Sequence-Flexible Knowledge Tracing (SFKT).
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 11:30:58 GMT"
}
] | 1,693,958,400,000 | [
[
"Zhang",
"Moyu",
""
],
[
"Zhu",
"Xinning",
""
],
[
"Zhang",
"Chunhong",
""
],
[
"Pan",
"Feng",
""
],
[
"Qian",
"Wenchen",
""
],
[
"Zhao",
"Hui",
""
]
] |
2308.03527 | Kristina Schaaff | Kristina Schaaff, Caroline Reinig, Tim Schlippe | Exploring ChatGPT's Empathic Abilities | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Empathy is often understood as the ability to share and understand another
individual's state of mind or emotion. With the increasing use of chatbots in
various domains, e.g., children seeking help with homework, individuals looking
for medical advice, and people using the chatbot as a daily source of everyday
companionship, the importance of empathy in human-computer interaction has
become more apparent. Therefore, our study investigates the extent to which
ChatGPT based on GPT-3.5 can exhibit empathetic responses and emotional
expressions. We analyzed the following three aspects: (1) understanding and
expressing emotions, (2) parallel emotional response, and (3) empathic
personality. Thus, we not only evaluate ChatGPT on various empathy aspects and
compare it with human behavior but also show a possible way to analyze the
empathy of chatbots in general. Our results show that in 91.7% of the cases,
ChatGPT was able to correctly identify emotions and produce appropriate
answers. In conversations, ChatGPT reacted with a parallel emotion in 70.7% of
cases. The empathic capabilities of ChatGPT were evaluated using a set of five
questionnaires covering different aspects of empathy. Even though the results
show that the scores of ChatGPT are still worse than the average of healthy
humans, it scores better than people who have been diagnosed with Asperger
syndrome / high-functioning autism.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 12:23:07 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Sep 2023 07:11:47 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Sep 2023 21:00:23 GMT"
}
] | 1,695,686,400,000 | [
[
"Schaaff",
"Kristina",
""
],
[
"Reinig",
"Caroline",
""
],
[
"Schlippe",
"Tim",
""
]
] |
2308.03598 | Mlađan Jovanović Dr | Peter Voss and Mladjan Jovanovic | Why We Don't Have AGI Yet | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The original vision of AI was re-articulated in 2002 via the term 'Artificial
General Intelligence' or AGI. This vision is to build 'Thinking Machines' -
computer systems that can learn, reason, and solve problems similar to the way
humans do. This is in stark contrast to the 'Narrow AI' approach practiced by
almost everyone in the field over the many decades. While several large-scale
efforts have nominally been working on AGI (most notably DeepMind), the field
of pure focused AGI development has not been well funded or promoted. This is
surprising given the fantastic value that true AGI can bestow on humanity. In
addition to the dearth of effort in this field, there are also several
theoretical and methodical missteps that are hampering progress. We highlight
why purely statistical approaches are unlikely to lead to AGI, and identify
several crucial cognitive abilities required to achieve human-like adaptability
and autonomous learning. We conclude with a survey of socio-technical factors
that have undoubtedly slowed progress towards AGI.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 13:59:31 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 14:49:24 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Aug 2023 11:30:22 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Sep 2023 12:43:54 GMT"
}
] | 1,695,168,000,000 | [
[
"Voss",
"Peter",
""
],
[
"Jovanovic",
"Mladjan",
""
]
] |
2308.03880 | Juanita Puentes | Juanita Puentes, Angela Castillo, Wilmar Osejo, Yuly Calderón,
Viviana Quintero, Lina Saldarriaga, Diana Agudelo and Pablo Arbeláez | Guarding the Guardians: Automated Analysis of Online Child Sexual Abuse | Artificial Intelligence (AI) and Humanitarian Assistance and Disaster
Recovery (HADR) workshop, ICCV 2023 in Paris, France | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Online violence against children has increased globally recently, demanding
urgent attention. Competent authorities manually analyze abuse complaints to
comprehend crime dynamics and identify patterns. However, the manual analysis
of these complaints presents a challenge because it exposes analysts to harmful
content during the review process. Given these challenges, we present a novel
solution, an automated tool designed to analyze children's sexual abuse reports
comprehensively. By automating the analysis process, our tool significantly
reduces the risk of exposure to harmful content by categorizing the reports on
three dimensions: Subject, Degree of Criminality, and Damage. Furthermore,
leveraging our multidisciplinary team's expertise, we introduce a novel
approach to annotate the collected data, enabling a more in-depth analysis of
the reports. This approach improves the comprehension of fundamental patterns
and trends, enabling law enforcement agencies and policymakers to create
focused strategies in the fight against children's violence.
| [
{
"version": "v1",
"created": "Mon, 7 Aug 2023 19:19:02 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Aug 2023 17:37:27 GMT"
}
] | 1,691,712,000,000 | [
[
"Puentes",
"Juanita",
""
],
[
"Castillo",
"Angela",
""
],
[
"Osejo",
"Wilmar",
""
],
[
"Calderón",
"Yuly",
""
],
[
"Quintero",
"Viviana",
""
],
[
"Saldarriaga",
"Lina",
""
],
[
"Agudelo",
"Diana",
""
],
[
"Arbeláez",
"Pablo",
""
]
] |
2308.03992 | Chen Cao | Cassie Chen Cao, Zijian Ding, Jionghao Lin, Frank Hopfgartner | AI Chatbots as Multi-Role Pedagogical Agents: Transforming Engagement in
CS Education | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This study investigates the use of Artificial Intelligence (AI)-powered,
multi-role chatbots as a means to enhance learning experiences and foster
engagement in computer science education. Leveraging a design-based research
approach, we develop, implement, and evaluate a novel learning environment
enriched with four distinct chatbot roles: Instructor Bot, Peer Bot, Career
Advising Bot, and Emotional Supporter Bot. These roles, designed around the
tenets of Self-Determination Theory, cater to the three innate psychological
needs of learners - competence, autonomy, and relatedness. Additionally, the
system embraces an inquiry-based learning paradigm, encouraging students to ask
questions, seek solutions, and explore their curiosities.
We test this system in a higher education context over a period of one month
with 200 participating students, comparing outcomes with conditions involving a
human tutor and a single chatbot. Our research utilizes a mixed-methods
approach, encompassing quantitative measures such as chat log sequence
analysis, and qualitative methods including surveys and focus group interviews.
By integrating cutting-edge Natural Language Processing techniques such as
topic modelling and sentiment analysis, we offer an in-depth understanding of
the system's impact on learner engagement, motivation, and inquiry-based
learning.
This study, through its rigorous design and innovative approach, provides
significant insights into the potential of AI-empowered, multi-role chatbots in
reshaping the landscape of computer science education and fostering an
engaging, supportive, and motivating learning environment.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 02:13:44 GMT"
}
] | 1,691,539,200,000 | [
[
"Cao",
"Cassie Chen",
""
],
[
"Ding",
"Zijian",
""
],
[
"Lin",
"Jionghao",
""
],
[
"Hopfgartner",
"Frank",
""
]
] |
2308.04026 | Jiaju Lin | Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin
Chen | AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | submit to EMNLP2023 demo track | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With ChatGPT-like large language models (LLM) prevailing in the community,
how to evaluate the ability of LLMs is an open question. Existing evaluation
methods suffer from the following shortcomings: (1) constrained evaluation
abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that
task-based evaluation, where LLM agents complete tasks in a simulated
environment, is a one-for-all solution to the above problems. We present
AgentSims, an easy-to-use infrastructure for researchers from all disciplines
to test the specific capacities they are interested in. Researchers can build
their evaluation tasks by adding agents and buildings on an interactive GUI or
deploy and test new support mechanisms, i.e. memory, planning and tool-use
systems, with a few lines of code. Our demo is available at
https://agentsims.com .
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 03:59:28 GMT"
}
] | 1,691,539,200,000 | [
[
"Lin",
"Jiaju",
""
],
[
"Zhao",
"Haoran",
""
],
[
"Zhang",
"Aochi",
""
],
[
"Wu",
"Yiting",
""
],
[
"Ping",
"Huqiuyue",
""
],
[
"Chen",
"Qin",
""
]
] |
2308.04030 | Zhiyuan Peng | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue,
Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 04:12:29 GMT"
}
] | 1,691,539,200,000 | [
[
"Xu",
"Binfeng",
""
],
[
"Liu",
"Xukun",
""
],
[
"Shen",
"Hua",
""
],
[
"Han",
"Zeyu",
""
],
[
"Li",
"Yuhan",
""
],
[
"Yue",
"Murong",
""
],
[
"Peng",
"Zhiyuan",
""
],
[
"Liu",
"Yuchen",
""
],
[
"Yao",
"Ziyu",
""
],
[
"Xu",
"Dongkuan",
""
]
] |
2308.04161 | Frank Wolter | James P. Delgrande, Birte Glimm, Thomas Meyer, Miroslaw Truszczynski,
Frank Wolter | Current and Future Challenges in Knowledge Representation and Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and on its
challenges, along with key priorities for the next decade.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 09:47:44 GMT"
}
] | 1,691,539,200,000 | [
[
"Delgrande",
"James P.",
""
],
[
"Glimm",
"Birte",
""
],
[
"Meyer",
"Thomas",
""
],
[
"Truszczynski",
"Miroslaw",
""
],
[
"Wolter",
"Frank",
""
]
] |
2308.04172 | Charlie Abela Dr | Lizzy Farrugia, Lilian M. Azzopardi, Jeremy Debattista and Charlie
Abela | Predicting Drug-Drug Interactions Using Knowledge Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the last decades, people have been consuming and combining more drugs than
before, increasing the number of Drug-Drug Interactions (DDIs). To predict
unknown DDIs, recently, studies started incorporating Knowledge Graphs (KGs)
since they are able to capture the relationships among entities providing
better drug representations than using a single drug property. In this paper,
we propose the medicX end-to-end framework that integrates several drug
features from public drug repositories into a KG and embeds the nodes in the
graph using various translation, factorisation and Neural Network (NN) based KG
Embedding (KGE) methods. Ultimately, we use a Machine Learning (ML) algorithm
that predicts unknown DDIs. Among the different translation and
factorisation-based KGE models, we found that the best performing combination
was the ComplEx embedding method with a Long Short-Term Memory (LSTM) network,
which obtained an F1-score of 95.19% on a dataset based on the DDIs found in
DrugBank version 5.1.8. This score is 5.61% better than the state-of-the-art
model DeepDDI. Additionally, we also developed a graph auto-encoder model that
uses a Graph Neural Network (GNN), which achieved an F1-score of 91.94%.
Consequently, GNNs have demonstrated a stronger ability to mine the underlying
semantics of the KG than the ComplEx model, and thus using higher dimension
embeddings within the GNN can lead to state-of-the-art performance.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 10:07:22 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 07:54:24 GMT"
}
] | 1,691,971,200,000 | [
[
"Farrugia",
"Lizzy",
""
],
[
"Azzopardi",
"Lilian M.",
""
],
[
"Debattista",
"Jeremy",
""
],
[
"Abela",
"Charlie",
""
]
] |
2308.04187 | Lutz Terfloth | Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte | Adding Why to What? Analyses of an Everyday Explanation | Paper accepted and presented at XAI World Conference 2023, Lisboa | null | 10.1007/978-3-031-44070-0_13 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In XAI it is important to consider that, in contrast to explanations for
professional audiences, one cannot assume common expertise when explaining for
laypeople. But such explanations between humans vary greatly, making it
difficult to research commonalities across explanations. We used the dual
nature theory, a techno-philosophical approach, to cope with these challenges.
According to it, one can explain, for example, an XAI's decision by addressing
its dual nature: by focusing on the Architecture (e.g., the logic of its
algorithms) or the Relevance (e.g., the severity of a decision, the
implications of a recommendation). We investigated 20 game explanations using
the theory as an analytical framework. We elaborate how we used the theory to
quickly structure and compare explanations of technological artifacts. We
supplemented results from analyzing the explanation contents with results from
a video recall to explore how explainers justified their explanation. We found
that explainers were focusing on the physical aspects of the game first
(Architecture) and only later on aspects of the Relevance. Reasoning in the
video recalls indicated that EX regarded the focus on the Architecture as
important for structuring the explanation initially by explaining the basic
components before focusing on more complex, intangible aspects. Shifting
between addressing the two sides was justified by explanation goals, emerging
misunderstandings, and the knowledge needs of the explainee. We discovered
several commonalities that inspire future research questions which, if further
generalizable, provide first ideas for the construction of synthetic
explanations.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 11:17:22 GMT"
}
] | 1,698,105,600,000 | [
[
"Terfloth",
"Lutz",
""
],
[
"Schaffer",
"Michael",
""
],
[
"Buhl",
"Heike M.",
""
],
[
"Schulte",
"Carsten",
""
]
] |
2308.04265 | Ninareh Mehrabi | Ninareh Mehrabi, Palash Goyal, Christophe Dupuy, Qian Hu, Shalini
Ghosh, Richard Zemel, Kai-Wei Chang, Aram Galstyan, Rahul Gupta | FLIRT: Feedback Loop In-context Red Teaming | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Warning: this paper contains content that may be inappropriate or offensive.
As generative models become available for public use in various applications,
testing and analyzing vulnerabilities of these models has become a priority.
Here we propose an automatic red teaming framework that evaluates a given model
and exposes its vulnerabilities against unsafe and inappropriate content
generation. Our framework uses in-context learning in a feedback loop to red
team models and trigger them into unsafe content generation. We propose
different in-context attack strategies to automatically learn effective and
diverse adversarial prompts for text-to-image models. Our experiments
demonstrate that compared to baseline approaches, our proposed strategy is
significantly more effective in exposing vulnerabilities in Stable Diffusion
(SD) model, even when the latter is enhanced with safety features. Furthermore,
we demonstrate that the proposed framework is effective for red teaming
text-to-text models, resulting in significantly higher toxic response
generation rate compared to previously reported numbers.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 14:03:08 GMT"
}
] | 1,691,539,200,000 | [
[
"Mehrabi",
"Ninareh",
""
],
[
"Goyal",
"Palash",
""
],
[
"Dupuy",
"Christophe",
""
],
[
"Hu",
"Qian",
""
],
[
"Ghosh",
"Shalini",
""
],
[
"Zemel",
"Richard",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Galstyan",
"Aram",
""
],
[
"Gupta",
"Rahul",
""
]
] |
2308.04299 | Jakub Łyskawa | Jakub Łyskawa, Paweł Wawrzyński | Actor-Critic with variable time discretization via sustained actions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) methods work in discrete time. In order to apply
RL to inherently continuous problems like robotic control, a specific time
discretization needs to be defined. This is a choice between sparse time
control, which may be easier to train, and finer time control, which may allow
for better ultimate performance. In this work, we propose SusACER, an
off-policy RL algorithm that combines the advantages of different time
discretization settings. Initially, it operates with sparse time discretization
and gradually switches to a fine one. We analyze the effects of the changing
time discretization in robotic control environments: Ant, HalfCheetah, Hopper,
and Walker2D. In all cases, our proposed algorithm outperforms the state of the art.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 14:45:00 GMT"
}
] | 1,691,539,200,000 | [
[
"Łyskawa",
"Jakub",
""
],
[
"Wawrzyński",
"Paweł",
""
]
] |
2308.04371 | Yifan Zhang | Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao | Cumulative Reasoning with Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent advancements in language models (LMs), their ability to
solve complex problems remains limited. This paper introduces Cumulative
Reasoning (CR), a novel approach that utilizes LMs cumulatively and
iteratively, mirroring human thought processes for problem-solving. CR
decomposes tasks into smaller, manageable components and leverages previous
propositions for effective composition, significantly enhancing problem-solving
capabilities. We demonstrate CR's superiority through several complex reasoning
tasks: it outperforms existing methods in logical inference tasks with up to a
9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset.
In the Game of 24, it achieves 98% accuracy, marking a 24% improvement over the
prior state-of-the-art. Additionally, CR sets new state-of-the-art on the MATH
dataset, achieving a 4.2% increase from previous methods and a 43% relative
improvement in the most challenging problems. By extending CR to incorporate a
code environment without external aids like retrieval or web browsing, we
further harness the computational and logical reasoning capabilities of LMs,
achieving a remarkable 72.2% accuracy on the MATH dataset and outperforming the
PAL/PoT method by 38.8%. Our work not only sets new state-of-the-art but also
paves the way toward more sophisticated AI reasoning methods. The code is
available at https://github.com/iiis-ai/cumulative-reasoning.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 16:18:20 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 14:37:37 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Aug 2023 08:24:09 GMT"
},
{
"version": "v4",
"created": "Fri, 25 Aug 2023 02:40:37 GMT"
},
{
"version": "v5",
"created": "Sat, 2 Dec 2023 02:59:12 GMT"
},
{
"version": "v6",
"created": "Tue, 2 Apr 2024 03:37:39 GMT"
}
] | 1,712,102,400,000 | [
[
"Zhang",
"Yifan",
""
],
[
"Yang",
"Jingqin",
""
],
[
"Yuan",
"Yang",
""
],
[
"Yao",
"Andrew Chi-Chih",
""
]
] |
2308.04372 | Anthony Hunter | Anthony Hunter | Some Options for Instantiation of Bipolar Argument Graphs with Deductive
Arguments | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argument graphs provide an abstract representation of an argumentative
situation. A bipolar argument graph is a directed graph where each node denotes
an argument, and each arc denotes the influence of one argument on another.
Here we assume that the influence is supporting, attacking, or ambiguous. In a
bipolar argument graph, each argument is atomic and so it has no internal
structure. Yet to better understand the nature of the individual arguments, and
how they interact, it is important to consider their internal structure. To
address this need, this paper presents a framework based on the use of logical
arguments to instantiate bipolar argument graphs, and a set of possible
constraints on instantiating arguments that take into account the internal
structure of the arguments, and the types of relationship between arguments.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 16:22:27 GMT"
}
] | 1,691,539,200,000 | [
[
"Hunter",
"Anthony",
""
]
] |
2308.04492 | Sang Yun Kwon | Sang Yun Kwon, Gagan Bhatia, El Moatez Billah Nagoud, Muhammad
Abdul-Mageed | ChatGPT for Arabic Grammatical Error Correction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, large language models (LLMs) fine-tuned to follow human instruction
have exhibited significant capabilities in various English NLP tasks. However,
their performance in grammatical error correction (GEC) tasks, particularly in
non-English languages, remains significantly unexplored. In this paper, we
delve into abilities of instruction fine-tuned LLMs in Arabic GEC, a task made
complex due to Arabic's rich morphology. Our findings suggest that various
prompting methods, coupled with (in-context) few-shot learning, demonstrate
considerable effectiveness, with GPT-4 achieving up to $65.49$
F\textsubscript{1} score under expert prompting (approximately $5$ points
higher than our established baseline). This highlights the potential of LLMs in
low-resource settings, offering a viable approach for generating useful
synthetic data for model training. Despite these positive results, we find that
instruction fine-tuned models, regardless of their size, significantly
underperform compared to fully fine-tuned models of significantly smaller
sizes. This disparity highlights a substantial room for improvements for LLMs.
Inspired by methods from low-resource machine translation, we also develop a
method exploiting synthetic data that significantly outperforms previous models
on two standard Arabic benchmarks. Our work sets new SoTA for Arabic GEC, with
$72.19\%$ and $73.26$ F$_{1}$ on the 2014 and 2015 QALB datasets, respectively.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 18:00:39 GMT"
}
] | 1,691,625,600,000 | [
[
"Kwon",
"Sang Yun",
""
],
[
"Bhatia",
"Gagan",
""
],
[
"Nagoud",
"El Moatez Billah",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
2308.04586 | Mark Stefik | Mark Stefik and Robert Price | Bootstrapping Developmental AIs: From Simple Competences to Intelligent
Human-Compatible AIs | 112 pages, 28 figures, 4 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developmental AI creates embodied AIs that develop human-like abilities. The
AIs start with innate competences and learn more by interacting with the world
including people. Developmental AIs have been demonstrated, but their abilities
so far do not surpass those of pre-toddler children. In contrast, mainstream
approaches have led to impressive feats and commercially valuable AI systems.
The approaches include deep learning and generative AI (e.g., large language
models) and manually constructed symbolic modeling. However, manually
constructed AIs tend to be brittle even in circumscribed domains. Generative
AIs are helpful on average, but they can make strange mistakes and not notice
them. Not learning from their experience in the world, they can lack common
sense and social alignment. This position paper lays out prospects, gaps, and
challenges for a bootstrapping approach to developmental AI that follows a
bio-inspired trajectory. The approach creates experiential foundation models
for human-compatible AIs. A virtuous multidisciplinary research cycle has led
to developmental AIs with capabilities for multimodal perception, object
recognition, and manipulation. Computational models for hierarchical planning,
abstraction discovery, curiosity, and language acquisition exist but need to be
adapted to an embodied learning approach. The remaining gaps include nonverbal
communication, speech, reading, and writing. These competences enable people to
acquire socially developed competences. Aspirationally, developmental AIs would
learn, share what they learn, and collaborate to achieve high standards. They
would learn to communicate, establish common ground, read critically, consider
the provenance of information, test hypotheses, and collaborate. The approach
would make the training of AIs more democratic.
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 21:14:21 GMT"
},
{
"version": "v10",
"created": "Wed, 25 Oct 2023 17:33:24 GMT"
},
{
"version": "v11",
"created": "Tue, 31 Oct 2023 16:46:54 GMT"
},
{
"version": "v12",
"created": "Sat, 11 Nov 2023 15:43:13 GMT"
},
{
"version": "v13",
"created": "Sat, 16 Dec 2023 15:19:11 GMT"
},
{
"version": "v14",
"created": "Thu, 28 Dec 2023 17:48:24 GMT"
},
{
"version": "v15",
"created": "Thu, 4 Jan 2024 16:31:09 GMT"
},
{
"version": "v16",
"created": "Mon, 15 Jan 2024 14:40:51 GMT"
},
{
"version": "v17",
"created": "Fri, 19 Jan 2024 12:07:13 GMT"
},
{
"version": "v18",
"created": "Sun, 28 Jan 2024 15:36:04 GMT"
},
{
"version": "v19",
"created": "Mon, 12 Feb 2024 07:50:00 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 15:33:28 GMT"
},
{
"version": "v20",
"created": "Tue, 19 Mar 2024 16:18:02 GMT"
},
{
"version": "v21",
"created": "Thu, 4 Apr 2024 01:40:00 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 16:31:33 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Aug 2023 18:38:29 GMT"
},
{
"version": "v5",
"created": "Tue, 29 Aug 2023 21:41:31 GMT"
},
{
"version": "v6",
"created": "Thu, 7 Sep 2023 23:06:24 GMT"
},
{
"version": "v7",
"created": "Wed, 13 Sep 2023 17:13:52 GMT"
},
{
"version": "v8",
"created": "Thu, 21 Sep 2023 17:46:49 GMT"
},
{
"version": "v9",
"created": "Wed, 4 Oct 2023 22:59:10 GMT"
}
] | 1,712,275,200,000 | [
[
"Stefik",
"Mark",
""
],
[
"Price",
"Robert",
""
]
] |
2308.04639 | Tianshu Yu | Zhang-Hua Fu, Sipeng Sun, Jintong Ren, Tianshu Yu, Haoyu Zhang,
Yuanyuan Liu, Lingxiao Huang, Xiang Yan, Pinyan Lu | A Hierarchical Destroy and Repair Approach for Solving Very Large-Scale
Travelling Salesman Problem | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | For prohibitively large-scale Travelling Salesman Problems (TSPs), existing
algorithms face big challenges in terms of both computational efficiency and
solution quality. To address this issue, we propose a hierarchical
destroy-and-repair (HDR) approach, which attempts to improve an initial
solution by applying a series of carefully designed destroy-and-repair
operations. A key innovative concept is the hierarchical search framework,
which recursively fixes partial edges and compresses the input instance into a
small-scale TSP under some equivalence guarantee. This neat search framework is
able to deliver highly competitive solutions within a reasonable time. Fair
comparisons based on nineteen famous large-scale instances (with 10,000 to
10,000,000 cities) show that HDR is highly competitive against existing
state-of-the-art TSP algorithms, in terms of both efficiency and solution
quality. Notably, on two large instances with 3,162,278 and 10,000,000 cities,
HDR breaks the world records (i.e., best-known results regardless of
computation time), which were previously achieved by LKH and its variants,
while HDR is completely independent of LKH. Finally, ablation studies are
performed to certify the importance and validity of the hierarchical search
framework.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 00:44:02 GMT"
}
] | 1,691,625,600,000 | [
[
"Fu",
"Zhang-Hua",
""
],
[
"Sun",
"Sipeng",
""
],
[
"Ren",
"Jintong",
""
],
[
"Yu",
"Tianshu",
""
],
[
"Zhang",
"Haoyu",
""
],
[
"Liu",
"Yuanyuan",
""
],
[
"Huang",
"Lingxiao",
""
],
[
"Yan",
"Xiang",
""
],
[
"Lu",
"Pinyan",
""
]
] |
2308.04719 | Yang Li | Yang Li and Kun Xiong and Yingping Zhang and Jiangcheng Zhu and
Stephen Mcaleer and Wei Pan and Jun Wang and Zonghong Dai and Yaodong Yang | JiangJun: Mastering Xiangqi by Tackling Non-Transitivity in Two-Player
Zero-Sum Games | 28 pages, accepted by Transactions on Machine Learning Research
(TMLR) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents an empirical exploration of non-transitivity in
perfect-information games, specifically focusing on Xiangqi, a traditional
Chinese board game comparable in game-tree complexity to chess and shogi. By
analyzing over 10,000 records of human Xiangqi play, we highlight the existence
of both transitive and non-transitive elements within the game's strategic
structure. To address non-transitivity, we introduce the JiangJun algorithm, an
innovative combination of Monte-Carlo Tree Search (MCTS) and Policy Space
Response Oracles (PSRO) designed to approximate a Nash equilibrium. We evaluate
the algorithm empirically using a WeChat mini program and achieve a Master
level with a 99.41\% win rate against human players. The algorithm's
effectiveness in overcoming non-transitivity is confirmed by a plethora of
metrics, such as relative population performance and visualization results. Our
project site is available at
\url{https://sites.google.com/view/jiangjun-site/}.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 05:48:58 GMT"
}
] | 1,691,625,600,000 | [
[
"Li",
"Yang",
""
],
[
"Xiong",
"Kun",
""
],
[
"Zhang",
"Yingping",
""
],
[
"Zhu",
"Jiangcheng",
""
],
[
"Mcaleer",
"Stephen",
""
],
[
"Pan",
"Wei",
""
],
[
"Wang",
"Jun",
""
],
[
"Dai",
"Zonghong",
""
],
[
"Yang",
"Yaodong",
""
]
] |
2308.04749 | Bing Han | Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen | Enhancing Efficient Continual Learning with Dynamic Structure
Development of Spiking Neural Networks | null | IJCAI2023 Camera ready | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Children possess the ability to learn multiple cognitive tasks sequentially,
which is a major challenge toward the long-term goal of artificial general
intelligence. Existing continual learning frameworks are usually applicable to
Deep Neural Networks (DNNs) and lack the exploration on more brain-inspired,
energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning
mechanisms during child growth and development, we propose Dynamic Structure
Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive
continual learning. When learning a sequence of tasks, the DSD-SNN dynamically
assigns and grows new neurons to new tasks and prunes redundant neurons,
thereby increasing memory capacity and reducing computational overhead. In
addition, the overlapping shared structure helps to quickly leverage all
acquired knowledge to new tasks, empowering a single network capable of
supporting multiple incremental tasks (without the separate sub-network mask
for each task). We validate the effectiveness of the proposed model on multiple
class incremental learning and task incremental learning benchmarks. Extensive
experiments demonstrated that our model could significantly improve
performance, learning speed and memory capacity, and reduce computational
overhead. Besides, our DSD-SNN model achieves comparable performance to the
DNNs-based methods, and significantly outperforms the state-of-the-art (SOTA)
performance of existing SNNs-based continual learning methods.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 07:36:40 GMT"
}
] | 1,691,625,600,000 | [
[
"Han",
"Bing",
""
],
[
"Zhao",
"Feifei",
""
],
[
"Zeng",
"Yi",
""
],
[
"Pan",
"Wenxuan",
""
],
[
"Shen",
"Guobin",
""
]
] |
2308.04778 | Yasser KHALAFAOUI | Yasser Khalafaoui (Alteca, ETIS - UMR 8051, CY), Nistor Grozavu (ETIS
- UMR 8051, CY), Basarab Matei (LIPN), Laurent-Walter Goix | Multi-modal Multi-view Clustering based on Non-negative Matrix
Factorization | null | 2022 IEEE Symposium Series on Computational Intelligence (SSCI),
Dec 2022, Singapore, Singapore. pp.1386-1391 | 10.1109/SSCI51031.2022.10022129 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By combining related objects, unsupervised machine learning techniques aim to
reveal the underlying patterns in a data set. Non-negative Matrix Factorization
(NMF) is a data mining technique that splits data matrices into two matrices by
imposing non-negativity restrictions on the elements: one representing the data
partitions and the other representing the cluster prototypes of the data set.
This method has attracted a lot of attention and is
used in a wide range of applications, including text mining, clustering,
language modeling, music transcription, and neuroscience (gene separation). The
interpretation of the generated matrices is made simpler by the absence of
negative values. In this article, we propose a study on multi-modal clustering
algorithms and present a novel method called multi-modal multi-view
non-negative matrix factorization, in which we analyze the collaboration of
several local NMF models. The experimental results show the value of the
proposed approach, which was evaluated using a variety of data sets, and the
obtained results are very promising compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 08:06:03 GMT"
}
] | 1,691,625,600,000 | [
[
"Khalafaoui",
"Yasser",
"",
"Alteca, ETIS - UMR 8051, CY"
],
[
"Grozavu",
"Nistor",
"",
"ETIS\n - UMR 8051, CY"
],
[
"Matei",
"Basarab",
"",
"LIPN"
],
[
"Goix",
"Laurent-Walter",
""
]
] |
2308.04814 | Gunjan Singh | Gunjan Singh, Sumit Bhatia, Raghava Mutharaju | Neuro-Symbolic RDF and Description Logic Reasoners: The State-Of-The-Art
and Challenges | This paper is a part of the book titled Compendium of Neuro-Symbolic
Artificial Intelligence which can be found at the following link:
https://www.iospress.com/catalog/books/compendium-of-neurosymbolic-artificial-intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontologies are used in various domains, with RDF and OWL being prominent
standards for ontology development. RDF is favored for its simplicity and
flexibility, while OWL enables detailed domain knowledge representation.
However, as ontologies grow larger and more expressive, reasoning complexity
increases, and traditional reasoners struggle to perform efficiently. Despite
optimization efforts, scalability remains an issue. Additionally, advancements
in automated knowledge base construction have created large and expressive
ontologies that are often noisy and inconsistent, posing further challenges for
conventional reasoners. To address these challenges, researchers have explored
neuro-symbolic approaches that combine neural networks' learning capabilities
with symbolic systems' reasoning abilities. In this chapter, we provide an
overview of the existing literature in the field of neuro-symbolic deductive
reasoning supported by RDF(S), the description logics EL and ALC, and OWL 2 RL,
discussing the techniques employed, the tasks they address, and other relevant
efforts in this area.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 09:12:35 GMT"
}
] | 1,691,625,600,000 | [
[
"Singh",
"Gunjan",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Mutharaju",
"Raghava",
""
]
] |
2308.04914 | Jiawen Kang | Xumin Huang, Yuan Wu, Jiawen Kang, Jiangtian Nie, Weifeng Zhong, Dong
In Kim, and Shengli Xie | Service Reservation and Pricing for Green Metaverses: A Stackelberg Game
Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Metaverse enables users to communicate, collaborate and socialize with each
other through their digital avatars. Due to the spatio-temporal
characteristics, co-located users are served well by performing their software
components in a collaborative manner such that a Metaverse service provider
(MSP) eliminates redundant data transmission and processing, ultimately
reducing the total energy consumption. The energy-efficient service provision is
crucial for enabling the green and sustainable Metaverse. In this article, we
take an augmented reality (AR) application as an example to achieve this goal.
Moreover, we study an economic issue on how the users reserve offloading
services from the MSP and how the MSP determines an optimal charging price
since each user is rational to decide whether to accept the offloading service
by taking into account the monetary cost. A single-leader multi-follower
Stackelberg game is formulated between the MSP and users while each user
optimizes an offloading probability to minimize the weighted sum of time,
energy consumption and monetary cost. Numerical results show that our scheme
achieves energy savings and satisfies individual rationality simultaneously
compared with the conventional schemes. Finally, we identify and discuss open
directions on how several emerging technologies are combined with the
sustainable green Metaverse.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 12:27:49 GMT"
}
] | 1,691,625,600,000 | [
[
"Huang",
"Xumin",
""
],
[
"Wu",
"Yuan",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Nie",
"Jiangtian",
""
],
[
"Zhong",
"Weifeng",
""
],
[
"Kim",
"Dong In",
""
],
[
"Xie",
"Shengli",
""
]
] |
2308.05012 | Awad Abdelhalim | Michael Leong, Awad Abdelhalim, Jude Ha, Dianne Patterson, Gabriel L.
Pincus, Anthony B. Harris, Michael Eichler, Jinhua Zhao | MetRoBERTa: Leveraging Traditional Customer Relationship Management Data
to Develop a Transit-Topic-Aware Language Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Transit riders' feedback provided in ridership surveys, customer relationship
management (CRM) channels, and in more recent times, through social media is
key for transit agencies to better gauge the efficacy of their services and
initiatives. Getting a holistic understanding of riders' experience through the
feedback shared in those instruments is often challenging, mostly due to the
open-ended, unstructured nature of text feedback. In this paper, we propose
leveraging traditional transit CRM feedback to develop and deploy a
transit-topic-aware large language model (LLM) capable of classifying
open-ended text feedback to relevant transit-specific topics. First, we utilize
semi-supervised learning to engineer a training dataset of 11 broad transit
topics detected in a corpus of 6 years of customer feedback provided to the
Washington Metropolitan Area Transit Authority (WMATA). We then use this
dataset to train and thoroughly evaluate a language model based on the RoBERTa
architecture. We compare our LLM, MetRoBERTa, to classical machine learning
approaches utilizing keyword-based and lexicon representations. Our model
outperforms those methods across all evaluation metrics, providing an average
topic classification accuracy of 90%. Finally, we provide a value proposition
of this work demonstrating how the language model, alongside additional text
processing tools, can be applied to add structure to open-ended text sources of
feedback like Twitter. The framework and results we present provide a pathway
for an automated, generalizable approach for ingesting, visualizing, and
reporting transit riders' feedback at scale, enabling agencies to better
understand and improve customer experience.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 15:11:37 GMT"
}
] | 1,691,625,600,000 | [
[
"Leong",
"Michael",
""
],
[
"Abdelhalim",
"Awad",
""
],
[
"Ha",
"Jude",
""
],
[
"Patterson",
"Dianne",
""
],
[
"Pincus",
"Gabriel L.",
""
],
[
"Harris",
"Anthony B.",
""
],
[
"Eichler",
"Michael",
""
],
[
"Zhao",
"Jinhua",
""
]
] |
2308.05062 | Holger Hoos | Chris Fawcett, Mauro Vallati, Holger H. Hoos, Alfonso E. Gerevini | Competitions in AI -- Robustly Ranking Solvers Using Statistical
Resampling | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solver competitions play a prominent role in assessing and advancing the
state of the art for solving many problems in AI and beyond. Notably, in many
areas of AI, competitions have had substantial impact in guiding research and
applications for many years, and for a solver to be ranked highly in a
competition carries considerable weight. But to what extent can we expect
competition results to generalise to sets of problem instances different from
those used in a particular competition? This is the question we investigate
here, using statistical resampling techniques. We show that the rankings
resulting from the standard interpretation of competition results can be very
sensitive to even minor changes in the benchmark instance set used as the basis
for assessment and can therefore not be expected to carry over to other samples
from the same underlying instance distribution. To address this problem, we
introduce a novel approach to statistically meaningful analysis of competition
results based on resampling performance data. Our approach produces confidence
intervals of competition scores as well as statistically robust solver rankings
with bounded error. Applied to recent SAT, AI planning and computer vision
competitions, our analysis reveals frequent statistical ties in solver
performance as well as some inversions of ranks compared to the official
results based on simple scoring.
| [
{
"version": "v1",
"created": "Wed, 9 Aug 2023 16:47:04 GMT"
}
] | 1,691,625,600,000 | [
[
"Fawcett",
"Chris",
""
],
[
"Vallati",
"Mauro",
""
],
[
"Hoos",
"Holger H.",
""
],
[
"Gerevini",
"Alfonso E.",
""
]
] |
2308.05385 | Tao Zou | Tao Zou, Le Yu, Leilei Sun, Bowen Du, Deqing Wang, Fuzhen Zhuang | Adaptive Taxonomy Learning and Historical Patterns Modelling for Patent
Classification | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patent classification aims to assign multiple International Patent
Classification (IPC) codes to a given patent. Recent methods for automatically
classifying patents mainly focus on analyzing the text descriptions of patents.
However, apart from the texts, each patent is also associated with some
assignees, and the knowledge of their applied patents is often valuable for
classification. Furthermore, the hierarchical taxonomy formulated by the IPC
system provides important contextual information and enables models to leverage
the correlations between IPC codes for more accurate classification. However,
existing methods fail to incorporate the above aspects. In this paper, we
propose an integrated framework that comprehensively considers the information
on patents for patent classification. To be specific, we first present an IPC
codes correlations learning module to derive their semantic representations via
adaptively passing and aggregating messages within the same level and across
different levels along the hierarchical taxonomy. Moreover, we design a
historical application patterns learning component to incorporate the
corresponding assignee's previous patents by a dual channel aggregation
mechanism. Finally, we combine the contextual information of patent texts that
contains the semantics of IPC codes, and assignees' sequential preferences to
make predictions. Experiments on real-world datasets demonstrate the
superiority of our approach over the existing methods. Besides, we present the
model's ability to capture the temporal patterns of assignees and the semantic
dependencies among IPC codes.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 07:02:24 GMT"
}
] | 1,691,712,000,000 | [
[
"Zou",
"Tao",
""
],
[
"Yu",
"Le",
""
],
[
"Sun",
"Leilei",
""
],
[
"Du",
"Bowen",
""
],
[
"Wang",
"Deqing",
""
],
[
"Zhuang",
"Fuzhen",
""
]
] |
2308.05391 | Segev Shlomov | Sivan Schwartz, Avi Yaeli, Segev Shlomov | Enhancing Trust in LLM-Based AI Automation Agents: New Considerations
and Future Challenges | Accepted to the First International Workshop on the Future of No-Code
Digital Apprentices | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trust in AI agents has been extensively studied in the literature, resulting
in significant advancements in our understanding of this field. However, the
rapid advancements in Large Language Models (LLMs) and the emergence of
LLM-based AI agent frameworks pose new challenges and opportunities for further
research. In the field of process automation, a new generation of AI-based
agents has emerged, enabling the execution of complex tasks. At the same time,
the process of building automation has become more accessible to business users
via user-friendly no-code tools and training mechanisms. This paper explores
these new challenges and opportunities, analyzes the main aspects of trust in
AI agents discussed in existing literature, and identifies specific
considerations and challenges relevant to this new generation of automation
agents. We also evaluate how nascent products in this category address these
considerations. Finally, we highlight several challenges that the research
community should address in this evolving landscape.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 07:12:11 GMT"
}
] | 1,691,712,000,000 | [
[
"Schwartz",
"Sivan",
""
],
[
"Yaeli",
"Avi",
""
],
[
"Shlomov",
"Segev",
""
]
] |
2308.05501 | Sapir Gershov | Sapir Gershov, Fadi Mahameed, Aeyal Raz, Shlomi Laufer | More Than Meets the Eye: Analyzing Anesthesiologists' Visual Attention
in the Operating Room Using Deep Learning Models | Submitted to MICCAI Aml4HC 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Patient's vital signs, which are displayed on monitors, make the
anesthesiologist's visual attention (VA) a key component in the safe management
of patients under general anesthesia; moreover, the distribution of said VA and
the ability to acquire specific cues throughout the anesthetic, may have a
direct impact on patient's outcome. Currently, most studies employ wearable
eye-tracking technologies to analyze anesthesiologists' visual patterns. Albeit
being able to produce meticulous data, wearable devices are not a sustainable
solution for large-scale or long-term use for data collection in the operating
room (OR). Thus, by utilizing a novel eye-tracking method in the form of deep
learning models that process monitor-mounted webcams, we collected continuous
behavioral data and gained insight into the anesthesiologist's VA distribution
with minimal disturbance to their natural workflow. In this study, we collected
OR video recordings using the proposed framework and compared different visual
behavioral patterns. We distinguished between baseline VA distribution during
uneventful periods to patterns associated with active phases or during
critical, unanticipated incidents. In the future, such a platform may serve as
a crucial component of context-aware assistive technologies in the OR.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 11:12:04 GMT"
}
] | 1,691,712,000,000 | [
[
"Gershov",
"Sapir",
""
],
[
"Mahameed",
"Fadi",
""
],
[
"Raz",
"Aeyal",
""
],
[
"Laufer",
"Shlomi",
""
]
] |
2308.05567 | Pan Liang | Pan Liang, Danwei Ye, Zihao Zhu, Yunchao Wang, Wang Xia, Ronghua
Liang, and Guodao Sun | C5: Towards Better Conversation Comprehension and Contextual Continuity
for ChatGPT | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs), such as ChatGPT, have demonstrated outstanding
performance in various fields, particularly in natural language understanding
and generation tasks. In complex application scenarios, users tend to engage in
multi-turn conversations with ChatGPT to keep contextual information and obtain
comprehensive responses. However, human forgetting and model contextual
forgetting remain prominent issues in multi-turn conversation scenarios, which
challenge the users' conversation comprehension and contextual continuity for
ChatGPT. To address these challenges, we propose an interactive conversation
visualization system called C5, which includes Global View, Topic View, and
Context-associated Q\&A View. The Global View uses the GitLog diagram metaphor
to represent the conversation structure, presenting the trend of conversation
evolution and supporting the exploration of locally salient features. The Topic
View is designed to display all the question and answer nodes and their
relationships within a topic using the structure of a knowledge graph, thereby
displaying the relevance and evolution of conversations. The Context-associated
Q\&A View consists of three linked views, which allow users to explore
individual conversations deeply while providing specific contextual information
when posing questions. The usefulness and effectiveness of C5 were evaluated
through a case study and a user study.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 13:29:12 GMT"
}
] | 1,691,712,000,000 | [
[
"Liang",
"Pan",
""
],
[
"Ye",
"Danwei",
""
],
[
"Zhu",
"Zihao",
""
],
[
"Wang",
"Yunchao",
""
],
[
"Xia",
"Wang",
""
],
[
"Liang",
"Ronghua",
""
],
[
"Sun",
"Guodao",
""
]
] |
2308.05585 | Miao Fan | Miao Fan, Chen Hu, Shuchang Zhou | Proximal Policy Optimization Actual Combat: Manipulating Output
Tokenizer Length | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning from Human Feedback (RLHF) plays a pivotal role in
shaping the impact of large language models (LLMs), contributing significantly
to controlling output toxicity and selecting output styles, particularly as
LLMs often harbor misleading content, highlighting the urgency to align them
with human values for secure AI systems. RLHF, characterized by complexity,
instability, and sensitivity to hyperparameters, makes the evaluation of the
reward model for complex tasks challenging, thereby further complicating the
use of Proximal Policy Optimization (PPO). In this paper, we introduce a simple
task designed to employ a golden reward model, which validates the
effectiveness of PPO and offers insight into it: the task of
utilizing PPO to manipulate the tokenizer length of the output generated by the
model. Experiments confirm that PPO is not only effective in manipulating the
output tokenizer length to a certain extent in this type of task but also
exhibits facilitated training once the influence of the reward model effect is
excluded, making it an exciting development.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 13:50:17 GMT"
}
] | 1,691,712,000,000 | [
[
"Fan",
"Miao",
""
],
[
"Hu",
"Chen",
""
],
[
"Zhou",
"Shuchang",
""
]
] |
2308.05617 | Hanzhao Wang | Hanzhao Wang, Zhongze Cai, Xiaocheng Li, Kalyan Talluri | A Neural Network Based Choice Model for Assortment Optimization | arXiv admin note: substantial text overlap with arXiv:2208.09325 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discrete-choice models are used in economics, marketing and revenue
management to predict customer purchase probabilities, say as a function of
prices and other features of the offered assortment. While they have been shown
to be expressive, capturing customer heterogeneity and behaviour, they are also
hard to estimate, often based on many unobservables like utilities; and
moreover, they still fail to capture many salient features of customer
behaviour. A natural question then, given their success in other contexts, is
if neural networks can eliminate the necessity of carefully building a
context-dependent customer behaviour model and hand-coding and tuning the
estimation. It is unclear however how one would incorporate assortment effects
into such a neural network, and also how one would optimize the assortment with
such a black-box generative model of choice probabilities. In this paper we
investigate first whether a single neural network architecture can predict
purchase probabilities for datasets from various contexts and generated under
various models and assumptions. Next, we develop an assortment optimization
formulation that is solvable by off-the-shelf integer programming solvers. We
compare against a variety of benchmark discrete-choice models on simulated as
well as real-world datasets, developing training tricks along the way to make
the neural network prediction and subsequent optimization robust and comparable
in performance to the alternatives.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 15:01:52 GMT"
}
] | 1,691,712,000,000 | [
[
"Wang",
"Hanzhao",
""
],
[
"Cai",
"Zhongze",
""
],
[
"Li",
"Xiaocheng",
""
],
[
"Talluri",
"Kalyan",
""
]
] |
2308.05780 | Rohan Gupta | Sidhantha Poddar and Rohan Gupta | Optical Script Identification for multi-lingual Indic-script | 20 pages , 12 figures Keywords: Optical character Identification,
Pre-processing, feature extraction, multi-script, Indic-script, Script
Recognition | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Script identification and text recognition are some of the major domains in
the application of Artificial Intelligence. In this era of digitalization, the
use of digital note-taking has become a common practice. Still, conventional
methods of using pen and paper remain a prominent way of writing. This leads to
the classification of scripts based on the method by which they are obtained. A
survey on the current methodologies and state-of-the-art methods used for processing and
identification would prove beneficial for researchers. The aim of this article
is to discuss the advancement in the techniques for script pre-processing and
text recognition. In India there are twelve prominent Indic scripts, unlike the
English language, these scripts have layers of characteristics. Complex
characteristics such as similarity in text shape make them difficult to
recognize and analyze, thus this requires advance preprocessing methods for
their accurate recognition. A sincere attempt is made in this survey to provide
a comparison between all algorithms. We hope that this survey would provide
insight to a researcher working not only on Indic scripts but also other
languages.
| [
{
"version": "v1",
"created": "Thu, 10 Aug 2023 14:02:05 GMT"
}
] | 1,691,971,200,000 | [
[
"Poddar",
"Sidhantha",
""
],
[
"Gupta",
"Rohan",
""
]
] |
2308.05960 | Zhiwei Liu | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke,
Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit,
Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | Preprint | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 06:37:54 GMT"
}
] | 1,691,971,200,000 | [
[
"Liu",
"Zhiwei",
""
],
[
"Yao",
"Weiran",
""
],
[
"Zhang",
"Jianguo",
""
],
[
"Xue",
"Le",
""
],
[
"Heinecke",
"Shelby",
""
],
[
"Murthy",
"Rithesh",
""
],
[
"Feng",
"Yihao",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Arpit",
"Devansh",
""
],
[
"Xu",
"Ran",
""
],
[
"Mui",
"Phil",
""
],
[
"Wang",
"Huan",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Savarese",
"Silvio",
""
]
] |
2308.05984 | Alberto Pozanco | Parisa Zehtabi, Alberto Pozanco, Ayala Bloch, Daniel Borrajo, Sarit
Kraus | Contrastive Explanations of Centralized Multi-agent Optimization
Solutions | Paper accepted at ICAPS 2024. This is an extended version that
includes Supplementary Material | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world scenarios, agents are involved in optimization problems.
Since most of these scenarios are over-constrained, optimal solutions do not
always satisfy all agents. Some agents might be unhappy and ask questions of
the form ``Why does solution $S$ not satisfy property $P$?''. We propose CMAoE,
a domain-independent approach to obtain contrastive explanations by: (i)
generating a new solution $S^\prime$ where property $P$ is enforced, while also
minimizing the differences between $S$ and $S^\prime$; and (ii) highlighting
the differences between the two solutions, with respect to the features of the
objective function of the multi-agent system. Such explanations aim to help
agents understand why the initial solution is better in the context of the
multi-agent system than what they expected. We have carried out a computational
evaluation that shows that CMAoE can generate contrastive explanations for
large multi-agent optimization problems. We have also performed an extensive
user study in four different domains that shows that: (i) after being presented
with these explanations, humans' satisfaction with the original solution
increases; and (ii) the contrastive explanations generated by CMAoE are
preferred or equally preferred by humans over the ones generated by
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 07:42:17 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Mar 2024 13:56:05 GMT"
}
] | 1,710,374,400,000 | [
[
"Zehtabi",
"Parisa",
""
],
[
"Pozanco",
"Alberto",
""
],
[
"Bloch",
"Ayala",
""
],
[
"Borrajo",
"Daniel",
""
],
[
"Kraus",
"Sarit",
""
]
] |
2308.05985 | Liang Zhang | Liang Zhang, Nathaniel Xu, Pengfei Yang, Gaojie Jin, Cheng-Chao Huang,
Lijun Zhang | TrajPAC: Towards Robustness Verification of Pedestrian Trajectory
Prediction Models | ICCV 2023 version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust pedestrian trajectory forecasting is crucial to developing safe
autonomous vehicles. Although previous works have studied adversarial
robustness in the context of trajectory forecasting, some significant issues
remain unaddressed. In this work, we try to tackle these crucial problems.
Firstly, the previous definitions of robustness in trajectory prediction are
ambiguous. We thus provide formal definitions for two kinds of robustness,
namely label robustness and pure robustness. Secondly, as previous works fail
to consider robustness about all points in a disturbance interval, we utilise a
probably approximately correct (PAC) framework for robustness verification.
Additionally, this framework can not only identify potential counterexamples,
but also provide interpretable analyses of the original methods. Our approach
is applied using a prototype tool named TrajPAC. With TrajPAC, we evaluate the
robustness of four state-of-the-art trajectory prediction models --
Trajectron++, MemoNet, AgentFormer, and MID -- on trajectories from five scenes
of the ETH/UCY dataset and scenes of the Stanford Drone Dataset. Using our
framework, we also experimentally study various factors that could influence
robustness performance.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 07:43:00 GMT"
}
] | 1,691,971,200,000 | [
[
"Zhang",
"Liang",
""
],
[
"Xu",
"Nathaniel",
""
],
[
"Yang",
"Pengfei",
""
],
[
"Jin",
"Gaojie",
""
],
[
"Huang",
"Cheng-Chao",
""
],
[
"Zhang",
"Lijun",
""
]
] |
2308.05996 | Qi Liu | Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, Defu Lian | Deep Task-specific Bottom Representation Network for Multi-Task
Recommendation | CIKM'23 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural-based multi-task learning (MTL) has gained significant improvement,
and it has been successfully applied to recommendation system (RS). Recent deep
MTL methods for RS (e.g. MMoE, PLE) focus on designing soft gating-based
parameter-sharing networks that implicitly learn a generalized representation
for each task. However, MTL methods may suffer from performance degeneration
when dealing with conflicting tasks, as negative transfer effects can occur on
the task-shared bottom representation. This can result in a reduced capacity
for MTL methods to capture task-specific characteristics, ultimately impeding
their effectiveness and hindering the ability to generalize well on all tasks.
In this paper, we focus on the bottom representation learning of MTL in RS and
propose the Deep Task-specific Bottom Representation Network (DTRN) to
alleviate the negative transfer problem. DTRN obtains task-specific bottom
representation explicitly by making each task have its own representation
learning network in the bottom representation modeling stage. Specifically, it
extracts the user's interests from multiple types of behavior sequences for
each task through the parameter-efficient hypernetwork. To further obtain the
dedicated representation for each task, DTRN refines the representation of each
feature by employing a SENet-like network for each task. The two proposed
modules obtain task-specific bottom representations, thereby relieving mutual
interference between tasks. Moreover, the proposed DTRN is flexible
to combine with existing MTL methods. Experiments on one public dataset and one
industrial dataset demonstrate the effectiveness of the proposed DTRN.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 08:04:43 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 01:37:14 GMT"
}
] | 1,692,576,000,000 | [
[
"Liu",
"Qi",
""
],
[
"Zhou",
"Zhilong",
""
],
[
"Jiang",
"Gangwei",
""
],
[
"Ge",
"Tiezheng",
""
],
[
"Lian",
"Defu",
""
]
] |
2308.06088 | Arne Bewersdorff | Arne Bewersdorff, Kathrin Se{\ss}ler, Armin Baur, Enkelejda Kasneci,
Claudia Nerdel | Assessing Student Errors in Experimentation Using Artificial
Intelligence and Large Language Models: A Comparative Study with Human Raters | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Identifying logical errors in complex, incomplete or even contradictory and
overall heterogeneous data like students' experimentation protocols is
challenging. Recognizing the limitations of current evaluation methods, we
investigate the potential of Large Language Models (LLMs) for automatically
identifying student errors and streamlining teacher assessments. Our aim is to
provide a foundation for productive, personalized feedback. Using a dataset of
65 student protocols, an Artificial Intelligence (AI) system based on the
GPT-3.5 and GPT-4 series was developed and tested against human raters. Our
results indicate varying levels of accuracy in error detection between the AI
system and human raters. The AI system can accurately identify many fundamental
student errors, for instance, the AI system identifies when a student is
focusing the hypothesis not on the dependent variable but solely on an expected
observation (acc. = 0.90), when a student modifies the trials in an ongoing
investigation (acc. = 1), and whether a student is conducting valid test trials
(acc. = 0.82) reliably. The identification of other, usually more complex
errors, like whether a student conducts a valid control trial (acc. = 0.60),
poses a greater challenge. This research explores not only the utility of AI in
educational settings, but also contributes to the understanding of the
capabilities of LLMs in error detection in inquiry-based learning like
experimentation.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 12:03:12 GMT"
}
] | 1,691,971,200,000 | [
[
"Bewersdorff",
"Arne",
""
],
[
"Seßler",
"Kathrin",
""
],
[
"Baur",
"Armin",
""
],
[
"Kasneci",
"Enkelejda",
""
],
[
"Nerdel",
"Claudia",
""
]
] |
2308.06137 | Kushal Kedia | Kushal Kedia, Prithwish Dan, Sanjiban Choudhury | A Game-Theoretic Framework for Joint Forecasting and Planning | IROS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Planning safe robot motions in the presence of humans requires reliable
forecasts of future human motion. However, simply predicting the most likely
motion from prior interactions does not guarantee safety. Such forecasts fail
to model the long tail of possible events, which are rarely observed in limited
datasets. On the other hand, planning for worst-case motions leads to overtly
conservative behavior and a "frozen robot". Instead, we aim to learn forecasts
that predict counterfactuals that humans guard against. We propose a novel
game-theoretic framework for joint planning and forecasting with the payoff
being the performance of the planner against the demonstrator, and present
practical algorithms to train models in an end-to-end fashion. We demonstrate
that our proposed algorithm results in safer plans in a crowd navigation
simulator and real-world datasets of pedestrian motion. We release our code at
https://github.com/portal-cornell/Game-Theoretic-Forecasting-Planning.
| [
{
"version": "v1",
"created": "Fri, 11 Aug 2023 13:56:39 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2023 03:40:56 GMT"
}
] | 1,698,019,200,000 | [
[
"Kedia",
"Kushal",
""
],
[
"Dan",
"Prithwish",
""
],
[
"Choudhury",
"Sanjiban",
""
]
] |
2308.06922 | Peng Zhao | Peng Zhao | Probabilistic contingent planning based on HTN for high-quality plans | 10 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic planning assumes that the planning evolves along a fully
predictable path, and therefore it loses practical value in most real-world
applications. A more realistic view is that planning ought to take into
consideration partial observability beforehand and aim for a more flexible and
robust solution. More significantly, it is inevitable that the quality of a
plan varies dramatically in a partially observable environment. In this paper
we propose a probabilistic contingent Hierarchical Task Network (HTN) planner,
named High-Quality Contingent Planner (HQCP), to generate high-quality plans in
the partially observable environment. The formalisms in HTN planning are
extended to partial observability and are evaluated with respect to cost. Next,
we explore a novel heuristic for high-quality plans and develop the integrated
planning algorithm. Finally, an empirical study verifies the effectiveness and
efficiency of the planner both in probabilistic contingent planning and for
obtaining high-quality plans.
| [
{
"version": "v1",
"created": "Mon, 14 Aug 2023 03:55:14 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Sep 2023 06:53:01 GMT"
}
] | 1,695,945,600,000 | [
[
"Zhao",
"Peng",
""
]
] |
2308.07307 | Yuhe Nie | Yuhe Nie, Shaoming Zheng, Zhan Zhuang, Xuan Song | Extend Wave Function Collapse to Large-Scale Content Generation | This paper is accepted by IEEE Conference on Games 2023 (nomination
of the Best Paper Award) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wave Function Collapse (WFC) is a widely used tile-based algorithm in
procedural content generation, including textures, objects, and scenes.
However, the current WFC algorithm and related research lack the ability to
generate commercialized large-scale or infinite content due to constraint
conflict and time complexity costs. This paper proposes a Nested WFC (N-WFC)
algorithm framework to reduce time complexity. To avoid conflict and
backtracking problems, we offer a complete and sub-complete tileset preparation
strategy, which requires only a small number of tiles to generate aperiodic and
deterministic infinite content. We also introduce the weight-brush system that
combines N-WFC and sub-complete tileset, proving its suitability for game
design. Our contribution addresses WFC's challenge in massive content
generation and provides a theoretical basis for implementing concrete games.
| [
{
"version": "v1",
"created": "Mon, 14 Aug 2023 17:50:38 GMT"
}
] | 1,692,057,600,000 | [
[
"Nie",
"Yuhe",
""
],
[
"Zheng",
"Shaoming",
""
],
[
"Zhuang",
"Zhan",
""
],
[
"Song",
"Xuan",
""
]
] |
2308.07322 | Robert Burdett | Robert L Burdett, Paul Corry, Prasad Yarlagadda, David Cook, Sean
Birgan | Multicriteria Optimization Techniques for Understanding the Case Mix
Landscape of a Hospital | 38 pages, 17 figures, 11 tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Various medical and surgical units operate in a typical hospital and to treat
their patients these units compete for infrastructure like operating rooms (OR)
and ward beds. How that competition is regulated affects the capacity and
output of a hospital. This article considers the impact of treating different
patient case mix (PCM) in a hospital. As each case mix has an economic
consequence and a unique profile of hospital resource usage, this consideration
is important. To better understand the case mix landscape and to identify those
which are optimal from a capacity utilisation perspective, an improved
multicriteria optimization (MCO) approach is proposed. As there are many
patient types in a typical hospital, the task of generating an archive of
non-dominated (i.e., Pareto optimal) case mix is computationally challenging.
To generate a better archive, an improved parallelised epsilon constraint
method (ECM) is introduced. Our parallel random corrective approach is
significantly faster than prior methods and is not restricted to evaluating
points on a structured uniform mesh. As such we can generate more solutions.
The application of KD-Trees is another new contribution. We use them to perform
proximity testing and to store the high dimensional Pareto frontier (PF). For
generating, viewing, navigating, and querying an archive, the development of a
suitable decision support tool (DST) is proposed and demonstrated.
| [
{
"version": "v1",
"created": "Mon, 31 Jul 2023 22:55:48 GMT"
}
] | 1,692,144,000,000 | [
[
"Burdett",
"Robert L",
""
],
[
"Corry",
"Paul",
""
],
[
"Yarlagadda",
"Prasad",
""
],
[
"Cook",
"David",
""
],
[
"Birgan",
"Sean",
""
]
] |
2308.07327 | Juho Kim | Juho Kim | PokerKit: A Comprehensive Python Library for Fine-Grained Multi-Variant
Poker Game Simulations | 8 pages, 1 figure, submission to IEEE Transactions on Games | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | PokerKit is an open-source Python library designed to overcome the
restrictions of existing poker game simulation and hand evaluation tools, which
typically support only a handful of poker variants and lack flexibility in game
state control. In contrast, PokerKit significantly expands this scope by
supporting an extensive array of poker variants and it provides a flexible
architecture for users to define their custom games. This paper details the
design and implementation of PokerKit, including its intuitive programmatic
API, multi-variant game support, and a unified hand evaluation suite across
different hand types. The flexibility of PokerKit allows for applications in
diverse areas, such as poker AI development, tool creation, and online poker
casino implementation. PokerKit's reliability has been established through
static type checking, extensive doctests, and unit tests, achieving 99% code
coverage. The introduction of PokerKit represents a significant contribution to
the field of computer poker, fostering future research and advanced AI
development for a wide variety of poker games. The source code is available at
https://github.com/uoftcprg/pokerkit
| [
{
"version": "v1",
"created": "Tue, 8 Aug 2023 13:54:48 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Sep 2023 22:20:32 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 23:42:04 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Oct 2023 06:34:56 GMT"
},
{
"version": "v5",
"created": "Mon, 16 Oct 2023 14:33:02 GMT"
}
] | 1,697,500,800,000 | [
[
"Kim",
"Juho",
""
]
] |
2308.07457 | Michael Wilbur | Michael Wilbur, Amutheezan Sivagnanam, Afiya Ayman, Samitha
Samaranayeke, Abhishek Dubey, Aron Laszka | Artificial Intelligence for Smart Transportation | This is a pre-print for a book chapter to appear in Vorobeychik,
Yevgeniy, and Mukhopadhyay, Ayan (Eds.). (2023). Artificial Intelligence
and Society. ACM Press | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are more than 7,000 public transit agencies in the U.S. (and many more
private agencies), and together, they are responsible for serving 60 billion
passenger miles each year. A well-functioning transit system fosters the growth
and expansion of businesses, distributes social and economic benefits, and
links the capabilities of community members, thereby enhancing what they can
accomplish as a society. Since affordable public transit services are the
backbones of many communities, this work investigates ways in which Artificial
Intelligence (AI) can improve efficiency and increase utilization from the
perspective of transit agencies. This book chapter discusses the primary
requirements, objectives, and challenges related to the design of AI-driven
smart transportation systems. We focus on three major topics. First, we discuss
data sources and data. Second, we provide an overview of how AI can aid
decision-making with a focus on transportation. Lastly, we discuss
computational problems in the transportation domain and AI approaches to these
problems.
| [
{
"version": "v1",
"created": "Mon, 14 Aug 2023 21:01:00 GMT"
}
] | 1,692,144,000,000 | [
[
"Wilbur",
"Michael",
""
],
[
"Sivagnanam",
"Amutheezan",
""
],
[
"Ayman",
"Afiya",
""
],
[
"Samaranayeke",
"Samitha",
""
],
[
"Dubey",
"Abhishek",
""
],
[
"Laszka",
"Aron",
""
]
] |
2308.07738 | Debraj Chakraborty | Debraj Chakraborty, Damien Busatto-Gaston, Jean-Fran\c{c}ois Raskin
and Guillermo A. P\'erez | Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search
using Data Aggregation with Formal Methods | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study how to efficiently combine formal methods, Monte Carlo Tree Search
(MCTS), and deep learning in order to produce high-quality receding horizon
policies in large Markov Decision processes (MDPs). In particular, we use
model-checking techniques to guide the MCTS algorithm in order to generate
offline samples of high-quality decisions on a representative set of states of
the MDP. Those samples can then be used to train a neural network that imitates
the policy used to generate them. This neural network can either be used as a
guide on a lower-latency MCTS online search, or alternatively be used as a
full-fledged policy when minimal latency is required. We use statistical model
checking to detect when additional samples are needed and to focus those
additional samples on configurations where the learnt neural network policy
differs from the (computationally-expensive) offline policy. We illustrate the
use of our method on MDPs that model the Frozen Lake and Pac-Man environments
-- two popular benchmarks to evaluate reinforcement-learning algorithms.
| [
{
"version": "v1",
"created": "Tue, 15 Aug 2023 12:33:58 GMT"
}
] | 1,692,144,000,000 | [
[
"Chakraborty",
"Debraj",
""
],
[
"Busatto-Gaston",
"Damien",
""
],
[
"Raskin",
"Jean-François",
""
],
[
"Pérez",
"Guillermo A.",
""
]
] |
2308.08307 | Toon Van de Maele | Toon Van de Maele, Bart Dhoedt, Tim Verbelen, Giovanni Pezzulo | Integrating cognitive map learning and active inference for planning in
ambiguous environments | Accepted at IWAI 2023 (International Workshop on Active Inference) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Living organisms need to acquire both cognitive maps for learning the
structure of the world and planning mechanisms able to deal with the challenges
of navigating ambiguous environments. Although significant progress has been
made in each of these areas independently, the best way to integrate them is an
open research question. In this paper, we propose the integration of a
statistical model of cognitive map formation within an active inference agent
that supports planning under uncertainty. Specifically, we examine the
clone-structured cognitive graph (CSCG) model of cognitive map formation and
compare a naive clone graph agent with an active inference-driven clone graph
agent, in three spatial navigation scenarios. Our findings demonstrate that
while both agents are effective in simple scenarios, the active inference agent
is more effective when planning in challenging scenarios, in which sensory
observations provide ambiguous information about location.
| [
{
"version": "v1",
"created": "Wed, 16 Aug 2023 12:10:23 GMT"
}
] | 1,692,230,400,000 | [
[
"Van de Maele",
"Toon",
""
],
[
"Dhoedt",
"Bart",
""
],
[
"Verbelen",
"Tim",
""
],
[
"Pezzulo",
"Giovanni",
""
]
] |
2308.09267 | Lang Cao | Lang Cao | GraphReason: Enhancing Reasoning Capabilities of Large Language Models
through A Graph-Based Verification Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have showcased impressive reasoning
capabilities, particularly when guided by specifically designed prompts in
complex reasoning tasks such as math word problems. These models typically
solve tasks using a chain-of-thought approach, which not only bolsters their
reasoning abilities but also provides valuable insights into their
problem-solving process. However, there is still significant room for enhancing
the reasoning abilities of LLMs. Some studies suggest that the integration of
an LLM output verifier can boost reasoning accuracy without necessitating
additional model training. In this paper, we follow these studies and introduce
a novel graph-based method to further augment the reasoning capabilities of
LLMs. We posit that multiple solutions to a reasoning task, generated by an
LLM, can be represented as a reasoning graph due to the logical connections
between intermediate steps from different reasoning paths. Therefore, we
propose the Reasoning Graph Verifier (GraphReason) to analyze and verify the
solutions generated by LLMs. By evaluating these graphs, models can yield more
accurate and reliable results. Our experimental results show that our
graph-based verification method not only significantly enhances the reasoning
abilities of LLMs but also outperforms existing verifier methods in terms of
improving these models' reasoning performance.
| [
{
"version": "v1",
"created": "Fri, 18 Aug 2023 03:12:59 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 05:24:34 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 16:35:58 GMT"
},
{
"version": "v4",
"created": "Sun, 21 Apr 2024 01:45:34 GMT"
}
] | 1,713,830,400,000 | [
[
"Cao",
"Lang",
""
]
] |
2308.09595 | Muhammad Arrasy Rahman | Arrasy Rahman, Jiaxun Cui, Peter Stone | Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents | Accepted at AAAI-24 conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Robustly cooperating with unseen agents and human partners presents
significant challenges due to the diverse cooperative conventions these
partners may adopt. Existing Ad Hoc Teamwork (AHT) methods address this
challenge by training an agent with a population of diverse teammate policies
obtained through maximizing specific diversity metrics. However, prior
heuristic-based diversity metrics do not always maximize the agent's robustness
in all cooperative problems. In this work, we first propose that maximizing an
AHT agent's robustness requires it to emulate policies in the minimum coverage
set (MCS), the set of best-response policies to any partner policies in the
environment. We then introduce the L-BRDiv algorithm that generates a set of
teammate policies that, when used for AHT training, encourage agents to emulate
policies from the MCS. L-BRDiv works by solving a constrained optimization
problem to jointly train teammate policies for AHT training and approximating
AHT agent policies that are members of the MCS. We empirically demonstrate that
L-BRDiv produces more robust AHT agents than state-of-the-art methods in a
broader range of two-player cooperative problems without the need for extensive
hyperparameter tuning for its objectives. Our study shows that L-BRDiv
outperforms the baseline methods by prioritizing discovering distinct members
of the MCS instead of repeatedly finding redundant policies.
| [
{
"version": "v1",
"created": "Fri, 18 Aug 2023 14:45:22 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Jan 2024 03:05:25 GMT"
}
] | 1,704,326,400,000 | [
[
"Rahman",
"Arrasy",
""
],
[
"Cui",
"Jiaxun",
""
],
[
"Stone",
"Peter",
""
]
] |
2308.09830 | Oscar J. Romero | Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic | Synergistic Integration of Large Language Models and Cognitive
Architectures for Robust AI: An Exploratory Analysis | AAAI 2023 Fall Symposium | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper explores the integration of two AI subdisciplines employed in the
development of artificial agents that exhibit intelligent behavior: Large
Language Models (LLMs) and Cognitive Architectures (CAs). We present three
integration approaches, each grounded in theoretical models and supported by
preliminary empirical evidence. The modular approach, which introduces four
models with varying degrees of integration, makes use of chain-of-thought
prompting, and draws inspiration from augmented LLMs, the Common Model of
Cognition, and the simulation theory of cognition. The agency approach,
motivated by the Society of Mind theory and the LIDA cognitive architecture,
proposes the formation of agent collections that interact at micro and macro
cognitive levels, driven by either LLMs or symbolic components. The
neuro-symbolic approach, which takes inspiration from the CLARION cognitive
architecture, proposes a model where bottom-up learning extracts symbolic
representations from an LLM layer and top-down guidance utilizes symbolic
representations to direct prompt engineering in the LLM layer. These approaches
aim to harness the strengths of both LLMs and CAs, while mitigating their
weaknesses, thereby advancing the development of more robust AI systems. We
discuss the tradeoffs and challenges associated with each approach.
| [
{
"version": "v1",
"created": "Fri, 18 Aug 2023 21:42:47 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Sep 2023 17:32:08 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Sep 2023 15:10:56 GMT"
}
] | 1,695,945,600,000 | [
[
"Romero",
"Oscar J.",
""
],
[
"Zimmerman",
"John",
""
],
[
"Steinfeld",
"Aaron",
""
],
[
"Tomasic",
"Anthony",
""
]
] |
2308.10899 | Tignting Liao | Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxaing Tang, Yangyi Huang,
Justus Thies, Michael J. Black | TADA! Text to Animatable Digital Avatars | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce TADA, a simple-yet-effective approach that takes textual
descriptions and produces expressive 3D avatars with high-quality geometry and
lifelike textures, that can be animated and rendered with traditional graphics
pipelines. Existing text-based character generation methods are limited in
terms of geometry and texture quality, and cannot be realistically animated due
to inconsistent alignment between the geometry and the texture, particularly in
the face region. To overcome these limitations, TADA leverages the synergy of a
2D diffusion model and an animatable parametric body model. Specifically, we
derive an optimizable high-resolution body model from SMPL-X with 3D
displacements and a texture map, and use hierarchical rendering with score
distillation sampling (SDS) to create high-quality, detailed, holistic 3D
avatars from text. To ensure alignment between the geometry and texture, we
render normals and RGB images of the generated character and exploit their
latent embeddings in the SDS training process. We further introduce various
expression parameters to deform the generated character during training,
ensuring that the semantics of our generated character remain consistent with
the original SMPL-X model, resulting in an animatable character. Comprehensive
evaluations demonstrate that TADA significantly surpasses existing approaches
on both qualitative and quantitative measures. TADA enables creation of
large-scale digital character assets that are ready for animation and
rendering, while also being easily editable through natural language. The code
will be public for research purposes.
| [
{
"version": "v1",
"created": "Mon, 21 Aug 2023 17:59:10 GMT"
}
] | 1,692,662,400,000 | [
[
"Liao",
"Tingting",
""
],
[
"Yi",
"Hongwei",
""
],
[
"Xiu",
"Yuliang",
""
],
[
"Tang",
"Jiaxaing",
""
],
[
"Huang",
"Yangyi",
""
],
[
"Thies",
"Justus",
""
],
[
"Black",
"Michael J.",
""
]
] |
2308.10988 | Adel Ammar | Adel Ammar | ERA*: Enhanced Relaxed A* algorithm for Solving the Shortest Path
Problem in Regular Grid Maps | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper introduces a novel algorithm for solving the point-to-point
shortest path problem in a static regular 8-neighbor connectivity (G8) grid.
This algorithm can be seen as a generalization of Hadlock algorithm to G8
grids, and is shown to be theoretically equivalent to the relaxed $A^*$
($RA^*$) algorithm in terms of the provided solution's path length, but with
substantial time and memory savings, due to a completely different computation
strategy, based on defining a set of lookup matrices. Through an experimental
study on grid maps of various types and sizes (1290 runs on 43 maps), it is
proven to be 2.25 times faster than $RA^*$ and 17 times faster than the
original $A^*$, on average. Moreover, it is more memory-efficient, since it
does not need to store a G score matrix.
| [
{
"version": "v1",
"created": "Tue, 15 Aug 2023 07:25:13 GMT"
}
] | 1,692,748,800,000 | [
[
"Ammar",
"Adel",
""
]
] |
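The ERA* record above compares against A*-family planners on 8-neighbor (G8) grids. The sketch below shows such a baseline, a standard A* with the octile-distance heuristic, rather than ERA* itself (its lookup-matrix strategy is not reproduced here). The grid, the move costs (1 for straight moves, sqrt(2) for diagonal moves), and the obstacle encoding are assumptions for illustration.

```python
# Illustrative baseline only (not the ERA* algorithm): standard A* on a G8 grid.
import heapq, math

def octile(a, b):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)

def astar_g8(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    g = {start: 0.0}
    frontier = [(octile(start, goal), start)]
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            return g[cur]
        for dr, dc in moves:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue                                   # out of bounds or obstacle
            cost = g[cur] + (math.sqrt(2) if dr and dc else 1.0)
            if cost < g.get(nxt, float("inf")):
                g[nxt] = cost
                heapq.heappush(frontier, (cost + octile(nxt, goal), nxt))
    return float("inf")

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]                                      # 1 = obstacle
print(astar_g8(grid, (0, 0), (2, 3)))                      # shortest G8 path length
```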
2308.11755 | Raj Korpan | Raj Korpan | VBMO: Voting-Based Multi-Objective Path Planning | First International Workshop on Search and Planning with Complex
Objectives (WoSePCO) at IJCAI'2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents VBMO, the Voting-Based Multi-Objective path planning
algorithm, that generates optimal single-objective plans, evaluates each of
them with respect to the other objectives, and selects one with a voting
mechanism. VBMO does not use hand-tuned weights, consider the multiple
objectives at every step of search, or use an evolutionary algorithm. Instead,
it considers how a plan that is optimal in one objective may perform well with
respect to others. VBMO incorporates three voting mechanisms: range, Borda, and
combined approval. Extensive evaluation in diverse and complex environments
demonstrates the algorithm's ability to efficiently produce plans that satisfy
multiple objectives.
| [
{
"version": "v1",
"created": "Tue, 22 Aug 2023 19:51:48 GMT"
}
] | 1,692,835,200,000 | [
[
"Korpan",
"Raj",
""
]
] |
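As a rough illustration of the voting step described in the VBMO abstract above, the sketch below applies a Borda count to candidate plans that have already been scored on every objective. The cost matrix is invented, and only the Borda rule (one of the three voting mechanisms mentioned) is shown.

```python
# Minimal sketch of Borda voting over single-objective-optimal plans (lower cost = better).
def borda_select(costs):
    """costs[p][o] = cost of plan p on objective o; returns the index of the winning plan."""
    n_plans = len(costs)
    points = [0] * n_plans
    for o in range(len(costs[0])):
        # Rank plans on this objective; better rank earns more Borda points.
        ranking = sorted(range(n_plans), key=lambda p: costs[p][o])
        for rank, p in enumerate(ranking):
            points[p] += n_plans - 1 - rank
    return max(range(n_plans), key=lambda p: points[p])

# Three plans, each optimal for one objective, scored on (length, energy, risk).
costs = [[10.0, 7.0, 0.9],
         [12.0, 5.0, 0.4],
         [11.0, 6.0, 0.5]]
print(borda_select(costs))   # index of the plan preferred by Borda voting
```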
2308.12411 | Michael Hochberg | Michael E. Hochberg | A Theory of Intelligences | 37 pages, 1 Table, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Intelligence is a human construct to represent the ability to achieve goals.
Given this wide berth, intelligence has been defined countless times, studied
in a variety of ways and represented using numerous measures. Understanding
intelligence ultimately requires theory and quantification, both of which have
proved elusive. I develop a framework -- the Theory of Intelligences (TIS) --
that applies across all systems from physics, to biology, humans and AI. TIS
likens intelligence to a calculus, differentiating, correlating and integrating
information. Intelligence operates at many levels and scales and TIS distils
these into a parsimonious macroscopic framework centered on solving, planning
and their optimization to accomplish goals. Notably, intelligence can be
expressed in informational units or in units relative to goal difficulty, the
latter defined as complexity relative to system (individual or benchmarked)
ability. I present general equations for intelligence and its components, and a
simple expression for the evolution of intelligence traits. The measures
developed here could serve to gauge different facets of intelligence for any
step-wise transformation of information. I argue that proxies such as
environment, technology, society and collectives are essential to a general
theory of intelligence and to possible evolutionary transitions in
intelligence, particularly in humans. I conclude with testable predictions of
TIS and offer several speculations.
| [
{
"version": "v1",
"created": "Wed, 23 Aug 2023 20:18:43 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Apr 2024 21:36:17 GMT"
}
] | 1,712,620,800,000 | [
[
"Hochberg",
"Michael E.",
""
]
] |
2308.12486 | Bowen Xu | Bowen Xu | A Brain-Inspired Sequence Learning Model based on a Logic | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence learning is an essential aspect of intelligence. In Artificial
Intelligence, the sequence prediction task is usually used to test a sequence
learning model. In this paper, a model of sequence learning, which is
interpretable through Non-Axiomatic Logic, is designed and tested. The learning
mechanism is composed of three steps, hypothesizing, revising, and recycling,
which enable the model to work under the Assumption of Insufficient Knowledge
and Resources. Synthetic datasets for sequence prediction task are generated to
test the capacity of the model. The results show that the model works well
within different levels of difficulty. In addition, since the model adopts
concept-centered representation, it theoretically does not suffer from
catastrophic forgetting, and the practical results also support this property.
This paper shows the potential of learning sequences in a logical way.
| [
{
"version": "v1",
"created": "Thu, 24 Aug 2023 01:01:41 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2023 16:26:09 GMT"
}
] | 1,699,315,200,000 | [
[
"Xu",
"Bowen",
""
]
] |
2308.12682 | Rishi Hazra | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | SayCanPay: Heuristic Planning with Large Language Models using Learnable
Domain Knowledge | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches.
| [
{
"version": "v1",
"created": "Thu, 24 Aug 2023 09:47:28 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Jan 2024 19:28:22 GMT"
}
] | 1,704,240,000,000 | [
[
"Hazra",
"Rishi",
""
],
[
"Martires",
"Pedro Zuidberg Dos",
""
],
[
"De Raedt",
"Luc",
""
]
] |
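The toy sketch below illustrates the general Say/Can/Pay scoring-and-search pattern suggested by the abstract above: candidate actions are scored by the product of an LLM-style generation score (Say), a feasibility estimate (Can), and a payoff estimate (Pay), and a small beam search keeps the best partial plans. The toy domain, the dictionaries standing in for the learned models, and the exact way the three scores are combined are assumptions, not the paper's method.

```python
import heapq

# Toy "domain": states are strings, each action leads deterministically to a new state.
def candidates(state):
    succ = {"start": [("pick_key", "has_key"), ("open_door", "blocked")],
            "has_key": [("open_door", "open")],
            "blocked": [], "open": []}
    return succ.get(state, [])

say = {("start", "pick_key"): 0.6, ("start", "open_door"): 0.4,
       ("has_key", "open_door"): 0.9}                       # stand-in for LLM action scores
can = {("start", "pick_key"): 1.0, ("start", "open_door"): 0.1,
       ("has_key", "open_door"): 1.0}                       # stand-in for feasibility model
pay = {"has_key": 0.5, "open": 1.0, "blocked": 0.0}         # stand-in for payoff estimator

def saycanpay(state, depth=2, width=2):
    """Beam search over action sequences, maximizing the accumulated say*can*pay score."""
    beam = [(0.0, [], state)]                               # (score, plan, state)
    for _ in range(depth):
        expanded = []
        for score, plan, s in beam:
            for a, nxt in candidates(s) or [(None, s)]:     # stay put in terminal states
                step = 0.0 if a is None else say.get((s, a), 0) * can.get((s, a), 0) * pay.get(nxt, 0)
                expanded.append((score + step, plan + ([a] if a else []), nxt))
        beam = heapq.nlargest(width, expanded)
    return max(beam)

print(saycanpay("start"))   # best score and plan, e.g. pick_key then open_door
```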
2308.13147 | Lyndon Benke | Lyndon Benke, Tim Miller, Michael Papasimeon, and Nir Lipovetzky | Diverse, Top-k, and Top-Quality Planning Over Simulators | This paper has been accepted at the 26th European Conference on
Artificial Intelligence (ECAI 2023) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Diverse, top-k, and top-quality planning are concerned with the generation of
sets of solutions to sequential decision problems. Previously this area has
been the domain of classical planners that require a symbolic model of the
problem instance. This paper proposes a novel alternative approach that uses
Monte Carlo Tree Search (MCTS), enabling application to problems for which only
a black-box simulation model is available. We present a procedure for
extracting bounded sets of plans from pre-generated search trees in best-first
order, and a metric for evaluating the relative quality of paths through a
search tree. We demonstrate this approach on a path-planning problem with
hidden information, and suggest adaptations to the MCTS algorithm to increase
the diversity of generated plans. Our results show that our method can generate
diverse and high-quality plan sets in domains where classical planners are not
applicable.
| [
{
"version": "v1",
"created": "Fri, 25 Aug 2023 02:55:19 GMT"
}
] | 1,693,180,800,000 | [
[
"Benke",
"Lyndon",
""
],
[
"Miller",
"Tim",
""
],
[
"Papasimeon",
"Michael",
""
],
[
"Lipovetzky",
"Nir",
""
]
] |
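A minimal sketch of extracting a bounded set of plans from an already-built search tree in best-first order, in the spirit of the procedure described in the record above. The example tree, its node values, and the choice to value a path by its leaf node are illustrative assumptions rather than the paper's metric.

```python
import heapq
from itertools import count

tree = {                       # node -> list of (action, child); empty list = leaf
    "root": [("a", "n1"), ("b", "n2")],
    "n1":   [("c", "n3"), ("d", "n4")],
    "n2":   [("e", "n5")],
    "n3": [], "n4": [], "n5": [],
}
value = {"root": 0.7, "n1": 0.8, "n2": 0.5, "n3": 0.9, "n4": 0.6, "n5": 0.5}

def top_k_plans(root, k):
    """Expand the tree frontier best-first and emit the first k complete root-to-leaf plans."""
    plans, tie = [], count()
    frontier = [(-value[root], next(tie), root, [])]        # max-heap via negated value
    while frontier and len(plans) < k:
        neg_v, _, node, plan = heapq.heappop(frontier)
        if not tree[node]:                                  # leaf: a complete plan
            plans.append((plan, -neg_v))
            continue
        for action, child in tree[node]:
            heapq.heappush(frontier, (-value[child], next(tie), child, plan + [action]))
    return plans

print(top_k_plans("root", 2))   # e.g. [(['a', 'c'], 0.9), (['a', 'd'], 0.6)]
```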
2308.13433 | Tom Westermann | Tom Westermann, Milapji Singh Gill, Alexander Fay | Representing Timed Automata and Timing Anomalies of Cyber-Physical
Production Systems in Knowledge Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-Based Anomaly Detection has been a successful approach to identify
deviations from the expected behavior of Cyber-Physical Production Systems.
Since manual creation of these models is a time-consuming process, it is
advantageous to learn them from data and represent them in a generic formalism
like timed automata. However, these models - and by extension, the detected
anomalies - can be challenging to interpret due to a lack of additional
information about the system. This paper aims to improve model-based anomaly
detection in CPPS by combining the learned timed automaton with a formal
knowledge graph about the system. Both the model and the detected anomalies are
described in the knowledge graph in order to allow operators an easier
interpretation of the model and the detected anomalies. The authors
additionally propose an ontology of the necessary concepts. The approach was
validated on a five-tank mixing CPPS and was able to formally define both
automata model as well as timing anomalies in automata execution.
| [
{
"version": "v1",
"created": "Fri, 25 Aug 2023 15:25:57 GMT"
}
] | 1,693,180,800,000 | [
[
"Westermann",
"Tom",
""
],
[
"Gill",
"Milapji Singh",
""
],
[
"Fay",
"Alexander",
""
]
] |
2308.13542 | Thommen George Karimpanal | Thommen George Karimpanal, Laknath Buddhika Semage, Santu Rana, Hung
Le, Truyen Tran, Sunil Gupta and Svetha Venkatesh | LaGR-SEQ: Language-Guided Reinforcement Learning with Sample-Efficient
Querying | 18 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have recently demonstrated their impressive
ability to provide context-aware responses via text. This ability could
potentially be used to predict plausible solutions in sequential decision
making tasks pertaining to pattern completion. For example, by observing a
partial stack of cubes, LLMs can predict the correct sequence in which the
remaining cubes should be stacked by extrapolating the observed patterns (e.g.,
cube sizes, colors or other attributes) in the partial stack. In this work, we
introduce LaGR (Language-Guided Reinforcement learning), which uses this
predictive ability of LLMs to propose solutions to tasks that have been
partially completed by a primary reinforcement learning (RL) agent, in order to
subsequently guide the latter's training. However, as RL training is generally
not sample-efficient, deploying this approach would inherently imply that the
LLM be repeatedly queried for solutions; a process that can be expensive and
infeasible. To address this issue, we introduce SEQ (sample efficient
querying), where we simultaneously train a secondary RL agent to decide when
the LLM should be queried for solutions. Specifically, we use the quality of
the solutions emanating from the LLM as the reward to train this agent. We show
that our proposed framework LaGR-SEQ enables more efficient primary RL
training, while simultaneously minimizing the number of queries to the LLM. We
demonstrate our approach on a series of tasks and highlight the advantages of
our approach, along with its limitations and potential future research
directions.
| [
{
"version": "v1",
"created": "Mon, 21 Aug 2023 02:07:35 GMT"
}
] | 1,693,267,200,000 | [
[
"Karimpanal",
"Thommen George",
""
],
[
"Semage",
"Laknath Buddhika",
""
],
[
"Rana",
"Santu",
""
],
[
"Le",
"Hung",
""
],
[
"Tran",
"Truyen",
""
],
[
"Gupta",
"Sunil",
""
],
[
"Venkatesh",
"Svetha",
""
]
] |
2308.13548 | Arpan Tripathi | Ahad Shams, Douglas Summers-Stay, Arpan Tripathi, Vsevolod Metelsky,
Alexandros Titonis, Karan Malhotra | Towards a Holodeck-style Simulation Game | 18 pages, 11 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce Infinitia, a simulation game system that uses generative image
and language models at play time to reshape all aspects of the setting and NPCs
based on a short description from the player, in a way similar to how settings
are created on the fictional Holodeck. Building off the ideas of the Generative
Agents paper, our system introduces gameplay elements, such as infinite
generated fantasy worlds, controllability of NPC behavior, humorous dialogue,
cost & time efficiency, collaboration between players and elements of
non-determinism among in-game events. Infinitia is implemented in the Unity
engine with a server-client architecture, facilitating the addition of exciting
features by community developers in the future. Furthermore, it uses a
multiplayer framework to allow humans to be present and interact in the
simulation. The simulation will be available in open-alpha shortly at
https://infinitia.ai/ and we are looking forward to building upon it with the
community.
| [
{
"version": "v1",
"created": "Tue, 22 Aug 2023 19:19:19 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Sep 2023 10:03:25 GMT"
}
] | 1,694,563,200,000 | [
[
"Shams",
"Ahad",
""
],
[
"Summers-Stay",
"Douglas",
""
],
[
"Tripathi",
"Arpan",
""
],
[
"Metelsky",
"Vsevolod",
""
],
[
"Titonis",
"Alexandros",
""
],
[
"Malhotra",
"Karan",
""
]
] |
2308.13755 | Bayu Trisedya | Bayu Distiawan Trisedya, Flora D Salim, Jeffrey Chan, Damiano Spina,
Falk Scholer, Mark Sanderson | i-Align: an interpretable knowledge graph alignment model | Data Min Knowl Disc (2023) | null | 10.1007/s10618-023-00963-3 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs (KGs) are becoming essential resources for many downstream
applications. However, their incompleteness may limit their potential. Thus,
continuous curation is needed to mitigate this problem. One of the strategies
to address this problem is KG alignment, i.e., forming a more complete KG by
merging two or more KGs. This paper proposes i-Align, an interpretable KG
alignment model. Unlike the existing KG alignment models, i-Align provides an
explanation for each alignment prediction while maintaining high alignment
performance. Experts can use the explanation to check the correctness of the
alignment prediction. Thus, the high quality of a KG can be maintained during
the curation process (e.g., the merging process of two KGs). To this end, a
novel Transformer-based Graph Encoder (Trans-GE) is proposed as a key component
of i-Align for aggregating information from entities' neighbors (structures).
Trans-GE uses Edge-gated Attention that combines the adjacency matrix and the
self-attention matrix to learn a gating mechanism to control the information
aggregation from the neighboring entities. It also uses historical embeddings,
allowing Trans-GE to be trained over mini-batches, or smaller sub-graphs, to
address the scalability issue when encoding a large KG. Another component of
i-Align is a Transformer encoder for aggregating entities' attributes. This
way, i-Align can generate explanations in the form of a set of the most
influential attributes/neighbors based on attention weights. Extensive
experiments are conducted to show the power of i-Align. The experiments include
several aspects, such as the model's effectiveness for aligning KGs, the
quality of the generated explanations, and its practicality for aligning large
KGs. The results show the effectiveness of i-Align in these aspects.
| [
{
"version": "v1",
"created": "Sat, 26 Aug 2023 03:48:52 GMT"
}
] | 1,693,267,200,000 | [
[
"Trisedya",
"Bayu Distiawan",
""
],
[
"Salim",
"Flora D",
""
],
[
"Chan",
"Jeffrey",
""
],
[
"Spina",
"Damiano",
""
],
[
"Scholer",
"Falk",
""
],
[
"Sanderson",
"Mark",
""
]
] |
2308.13871 | Jiaxi Lv | Jiaxi Lv, Liang Zhang, Yi Huang, Jiancheng Huang, Shifeng Chen | Graph Edit Distance Learning via Different Attention | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recently, more and more research has focused on using Graph Neural Networks
(GNN) to solve the Graph Similarity Computation problem (GSC), i.e., computing
the Graph Edit Distance (GED) between two graphs. These methods treat GSC as an
end-to-end learnable task, and the core of their architecture is the feature
fusion modules to interact with the features of two graphs. Existing methods
consider that graph-level embeddings struggle to capture the differences in
small local structures between two graphs, and that fine-grained feature
fusion on node-level embeddings can therefore improve accuracy, but at the
cost of greater time and memory consumption in the training and inference phases.
However, this paper proposes a novel graph-level fusion module Different
Attention (DiffAtt), and demonstrates that graph-level fusion embeddings can
substantially outperform these complex node-level fusion embeddings. We posit
that the relative difference structure of the two graphs plays an important
role in calculating their GED values. To this end, DiffAtt uses the difference
between two graph-level embeddings as an attentional mechanism to capture the
graph structural difference of the two graphs. Based on DiffAtt, a new GSC
method, named Graph Edit Distance Learning via Different Attention (REDRAFT),
is proposed, and experimental results demonstrate that REDRAFT achieves
state-of-the-art performance in 23 out of 25 metrics in five benchmark
datasets. Especially on MSE, it respectively outperforms the second best by
19.9%, 48.8%, 29.1%, 31.6%, and 2.2%. Moreover, we propose a quantitative test
Remaining Subgraph Alignment Test (RESAT) to verify that among all graph-level
fusion modules, the fusion embedding generated by DiffAtt can best capture the
structural differences between two graphs.
| [
{
"version": "v1",
"created": "Sat, 26 Aug 2023 13:05:01 GMT"
}
] | 1,693,267,200,000 | [
[
"Lv",
"Jiaxi",
""
],
[
"Zhang",
"Liang",
""
],
[
"Huang",
"Yi",
""
],
[
"Huang",
"Jiancheng",
""
],
[
"Chen",
"Shifeng",
""
]
] |
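As a very loose illustration of using the difference of two graph-level embeddings as an attention-like signal before regressing GED, the NumPy sketch below mean-pools node embeddings, gates the pooled vectors with a sigmoid of their difference, and feeds the result to a linear head. The pooling choice, gate form, shapes, and random weights are all assumptions and do not reproduce the DiffAtt architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_gate = rng.normal(size=(d, d))      # gate parameters (would be learned)
w_out = rng.normal(size=2 * d)        # regression head (would be learned)

def fuse_and_score(nodes_g1, nodes_g2):
    h1, h2 = nodes_g1.mean(axis=0), nodes_g2.mean(axis=0)   # graph-level embeddings
    diff = h1 - h2                                           # relative-difference signal
    gate = 1.0 / (1.0 + np.exp(-W_gate @ diff))              # attention-style weights
    fused = np.concatenate([gate * h1, gate * h2])           # emphasize differing dimensions
    return float(w_out @ fused)                              # scalar GED estimate

g1 = rng.normal(size=(5, d))   # node embeddings from some GNN encoder (assumed given)
g2 = rng.normal(size=(7, d))
print(fuse_and_score(g1, g2))
```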
2308.14269 | Elad Liebman | Elad Liebman, Peter Stone | Utilizing Mood-Inducing Background Music in Human-Robot Interaction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Past research has clearly established that music can affect mood and that
mood affects emotional and cognitive processing, and thus decision-making. It
follows that if a robot interacting with a person needs to predict the person's
behavior, knowledge of the music the person is listening to when acting is a
potentially relevant feature. To date, however, there has not been any concrete
evidence that a robot can improve its human-interactive decision-making by
taking into account what the person is listening to. This research fills this
gap by reporting the results of an experiment in which human participants were
required to complete a task in the presence of an autonomous agent while
listening to background music. Specifically, the participants drove a simulated
car through an intersection while listening to music. The intersection was not
empty, as another simulated vehicle, controlled autonomously, was also crossing
the intersection in a different direction. Our results clearly indicate that
such background information can be effectively incorporated in an agent's world
representation in order to better predict people's behavior. We subsequently
analyze how knowledge of music impacted both participant behavior and the
resulting learned policy.\setcounter{footnote}{2}\footnote{An earlier version
of part of the material in this paper appeared originally in the first author's
Ph.D. Dissertation~\cite{liebman2020sequential} but it has not appeared in any
peer-reviewed conference or journal.}
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 02:54:05 GMT"
}
] | 1,693,267,200,000 | [
[
"Liebman",
"Elad",
""
],
[
"Stone",
"Peter",
""
]
] |
2308.14284 | Longchao Da | Longchao Da, Minquan Gao, Hao Mei, Hua Wei | Prompt to Transfer: Sim-to-Real Transfer for Traffic Signal Control with
Prompt Learning | 9 pages, 7 figures. Accepted to AAAI 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous solutions are proposed for the Traffic Signal Control (TSC) tasks
aiming to provide efficient transportation and mitigate congestion waste.
Recently, promising results have been attained by Reinforcement Learning (RL)
methods through trial and error in simulators, bringing confidence in solving
cities' congestion headaches. However, there still exist performance gaps when
simulator-trained policies are deployed to the real world. This issue is mainly
introduced by the system dynamic difference between the training simulator and
the real-world environments. The Large Language Models (LLMs) are trained on
mass knowledge and proved to be equipped with astonishing inference abilities.
In this work, we leverage LLMs to understand and profile the system dynamics by
a prompt-based grounded action transformation. Accepting the cloze prompt
template, and then filling in the answer based on accessible context, the
pre-trained LLM's inference ability is exploited and applied to understand how
weather conditions, traffic states, and road types influence traffic dynamics.
Aware of this, the policy's actions are taken and grounded in realistic
dynamics, thus helping the agent learn a more realistic policy. We
conduct experiments using DQN to show the effectiveness of the proposed
PromptGAT's ability in mitigating the performance gap from simulation to
reality (sim-to-real).
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 03:49:13 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 22:31:44 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Oct 2023 02:15:31 GMT"
},
{
"version": "v4",
"created": "Mon, 8 Jan 2024 10:03:06 GMT"
},
{
"version": "v5",
"created": "Wed, 17 Jan 2024 21:30:16 GMT"
},
{
"version": "v6",
"created": "Sat, 20 Jan 2024 09:41:55 GMT"
}
] | 1,706,054,400,000 | [
[
"Da",
"Longchao",
""
],
[
"Gao",
"Minquan",
""
],
[
"Mei",
"Hao",
""
],
[
"Wei",
"Hua",
""
]
] |
2308.14301 | Chirag Shah | Muhammad Rahman, Sachi Figliolini, Joyce Kim, Eivy Cedeno, Charles
Kleier, Chirag Shah, Aman Chadha | Artificial Intelligence in Career Counseling: A Test Case with ResumAI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rise of artificial intelligence (AI) has led to various means of
integration of AI aimed to provide efficiency in tasks, one of which is career
counseling. A key part of getting a job is having a solid resume that passes
through the first round of programs and recruiters. It is difficult to find
good resources or schedule an appointment with a career counselor to help with
editing a resume for a specific role. With the rise of ChatGPT, Bard, and
several other AI chat programs it is possible to provide specific, automated
feedback on various concerns to suggest places for improvement within the
context of career counseling. This paper begins with a quick literature review
on the ethical considerations and limitations of AI in career counseling. The
authors also have created their own website service, called ResumAI, to test
and review the functionality of an AI career counselor. The findings of this
study will contribute to the understanding of chat AI ResumAI reviewer programs
and sites. The implications of the findings for the field of career counseling,
AI development, and ethical practice will be discussed.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 04:35:20 GMT"
}
] | 1,693,267,200,000 | [
[
"Rahman",
"Muhammad",
""
],
[
"Figliolini",
"Sachi",
""
],
[
"Kim",
"Joyce",
""
],
[
"Cedeno",
"Eivy",
""
],
[
"Kleier",
"Charles",
""
],
[
"Shah",
"Chirag",
""
],
[
"Chadha",
"Aman",
""
]
] |
2308.14363 | Jinliang Yuan | Jinliang Yuan, Chen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, Zeling
Zhang, Xiang Li, Dingge Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang,
Mengwei Xu | Mobile Foundation Model as Firmware | 17 pages, 15 figures, published to ACM MobiCom'24 | The 30th Annual International Conference on Mobile Computing and
Networking, 2024 | 10.1145/3636534.3649361 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's landscape, smartphones have evolved into hubs for hosting a
multitude of deep learning models aimed at local execution. A key realization
driving this work is the notable fragmentation among these models,
characterized by varied architectures, operators, and implementations. This
fragmentation imposes a significant burden on the comprehensive optimization of
hardware, system settings, and algorithms.
Buoyed by the recent strides in large foundation models, this work introduces
a pioneering paradigm for mobile AI: a collaborative management approach
between the mobile OS and hardware, overseeing a foundational model capable of
serving a broad spectrum of mobile AI tasks, if not all. This foundational
model resides within the NPU and remains impervious to app or OS revisions,
akin to firmware. Concurrently, each app contributes a concise, offline
fine-tuned "adapter" tailored to distinct downstream tasks. From this concept
emerges a concrete instantiation known as \sys. It amalgamates a curated
selection of publicly available Large Language Models (LLMs) and facilitates
dynamic data flow. This concept's viability is substantiated through the
creation of an exhaustive benchmark encompassing 38 mobile AI tasks spanning 50
datasets, including domains such as Computer Vision (CV), Natural Language
Processing (NLP), audio, sensing, and multimodal inputs. Spanning this
benchmark, \sys unveils its impressive performance. It attains accuracy parity
in 85\% of tasks, demonstrates improved scalability in terms of storage and
memory, and offers satisfactory inference speed on Commercial Off-The-Shelf
(COTS) mobile devices fortified with NPU support. This stands in stark contrast
to task-specific models tailored for individual applications.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 07:21:26 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Mar 2024 16:18:17 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Mar 2024 02:17:03 GMT"
}
] | 1,710,288,000,000 | [
[
"Yuan",
"Jinliang",
""
],
[
"Yang",
"Chen",
""
],
[
"Cai",
"Dongqi",
""
],
[
"Wang",
"Shihe",
""
],
[
"Yuan",
"Xin",
""
],
[
"Zhang",
"Zeling",
""
],
[
"Li",
"Xiang",
""
],
[
"Zhang",
"Dingge",
""
],
[
"Mei",
"Hanzi",
""
],
[
"Jia",
"Xianqing",
""
],
[
"Wang",
"Shangguang",
""
],
[
"Xu",
"Mengwei",
""
]
] |
2308.14390 | Konstantinos Lampropoulos | Konstantinos Lampropoulos, Thanos Kosmidis, Serge Autexier, Milos
Savic, Manos Athanatos, Miltiadis Kokkonidis, Tzortzia Koutsouri, Anamaria
Vizitiu, Antonios Valachis, Miriam Quintero Padron | ASCAPE: An open AI ecosystem to support the quality of life of cancer
patients | null | null | 10.1109/ICHI52183.2021.00054 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The latest cancer statistics indicate a decrease in cancer-related mortality.
However, due to the growing and ageing population, the absolute number of
people living with cancer is set to keep increasing. This paper presents
ASCAPE, an open AI infrastructure that takes advantage of the recent advances
in Artificial Intelligence (AI) and Machine Learning (ML) to support cancer
patients' quality of life (QoL). With ASCAPE, health stakeholders (e.g.
hospitals) can locally process their private medical data and then share the
produced knowledge (ML models) through the open AI infrastructure.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 08:14:12 GMT"
}
] | 1,693,267,200,000 | [
[
"Lampropoulos",
"Konstantinos",
""
],
[
"Kosmidis",
"Thanos",
""
],
[
"Autexier",
"Serge",
""
],
[
"Savic",
"Milos",
""
],
[
"Athanatos",
"Manos",
""
],
[
"Kokkonidis",
"Miltiadis",
""
],
[
"Koutsouri",
"Tzortzia",
""
],
[
"Vizitiu",
"Anamaria",
""
],
[
"Valachis",
"Antonios",
""
],
[
"Padron",
"Miriam Quintero",
""
]
] |
2308.14474 | Shuxian Du | Shuxian Du, Yaxiu Sun and Changyi Du | Causality-Based Feature Importance Quantifying Methods: PN-FI, PS-FI and
PNS-FI | 7 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current ML field models are getting larger and more complex, and data
used for model training are also getting larger in quantity and higher in
dimensions. Therefore, in order to train better models, and save training time
and computational resources, a good Feature Selection (FS) method in the
preprocessing stage is necessary. Feature importance (FI) is of great
importance since it is the basis of feature selection. Therefore, this paper
creatively introduces the calculation of PN (the probability of Necessity), PS
(the probability of Sufficiency), and PNS (the probability of Necessity and
Sufficiency) from causality into the quantification of feature importance, and
creates 3 new FI measuring methods: PN-FI, which measures how much importance a
feature has in image recognition tasks; PS-FI, which measures how much
importance a feature has in image generating tasks; and PNS-FI, which measures
both. The main body of this paper consists of three RCTs, whose results show
how the PS-FI, PN-FI, and PNS-FI of 3 features (dog nose, dog eyes, and dog
mouth) are calculated. The experiments
show that firstly, FI values are intervals with tight upper and lower bounds.
Secondly, the feature dog eyes has the most importance while the other two have
almost the same. Thirdly, the bounds of PNS and PN are tighter than the bounds
of PS.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 10:24:51 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Sep 2023 06:28:41 GMT"
}
] | 1,695,081,600,000 | [
[
"Du",
"Shuxian",
""
],
[
"Sun",
"Yaxiu",
""
],
[
"Du",
"Changyi",
""
]
] |
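One standard way to obtain interval-valued scores of this kind from RCT data is via the Tian-Pearl bounds under exogeneity, sketched below. The treatment/outcome rates are invented for illustration, and the paper's own estimators and experimental setup may differ.

```python
# Tian-Pearl bounds on PN, PS, and PNS under exogeneity (e.g. randomized data).
def pn_ps_pns_bounds(p1, p0):
    """p1 = P(outcome | feature present), p0 = P(outcome | feature absent)."""
    pns = (max(0.0, p1 - p0), min(p1, 1.0 - p0))
    pn = (max(0.0, (p1 - p0) / p1), min(1.0, (1.0 - p0) / p1)) if p1 > 0 else (0.0, 1.0)
    ps = (max(0.0, (p1 - p0) / (1.0 - p0)), min(1.0, p1 / (1.0 - p0))) if p0 < 1 else (0.0, 1.0)
    return {"PN-FI": pn, "PS-FI": ps, "PNS-FI": pns}

# Hypothetical RCT: recognition succeeds for 90/100 images when the feature is
# visible and for 30/100 images when it is masked out.
print(pn_ps_pns_bounds(p1=0.90, p0=0.30))
```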
2308.14475 | Mozhgan Vazifehdoostirani | Mozhgan Vazifehdoostirani, Laura Genga, Xixi Lu, Rob Verhoeven,
Hanneke van Laarhoven, Remco Dijkman | Interactive Multi Interest Process Pattern Discovery | 16 pages, 5 figures, To appear in the preceedings of 21st
International Conference on Business Process Management (BPM), 11-15
September 2023, Utrecht, the Netherlands | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Process pattern discovery methods (PPDMs) aim at identifying patterns of
interest to users. Existing PPDMs typically are unsupervised and focus on a
single dimension of interest, such as discovering frequent patterns. We present
an interactive multi interest driven framework for process pattern discovery
aimed at identifying patterns that are optimal according to a multi-dimensional
analysis goal. The proposed approach is iterative and interactive, thus taking
experts knowledge into account during the discovery process. The paper focuses
on a concrete analysis goal, i.e., deriving process patterns that affect the
process outcome. We evaluate the approach on real world event logs in both
interactive and fully automated settings. The approach extracted meaningful
patterns validated by expert knowledge in the interactive setting. Patterns
extracted in the automated settings consistently led to prediction performance
comparable to or better than patterns derived considering single interest
dimensions without requiring user defined thresholds.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 10:26:37 GMT"
}
] | 1,693,267,200,000 | [
[
"Vazifehdoostirani",
"Mozhgan",
""
],
[
"Genga",
"Laura",
""
],
[
"Lu",
"Xixi",
""
],
[
"Verhoeven",
"Rob",
""
],
[
"van Laarhoven",
"Hanneke",
""
],
[
"Dijkman",
"Remco",
""
]
] |
2308.14550 | Aizaz Sharif | Aizaz Sharif and Dusica Marijan | ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure
Events | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous vehicles are advanced driving systems that are well known to be
vulnerable to various adversarial attacks, compromising vehicle safety and
posing a risk to other road users. Rather than actively training complex
adversaries by interacting with the environment, there is a need to first
intelligently find and reduce the search space to only those states where
autonomous vehicles are found to be less confident. In this paper, we propose a
black-box testing framework ReMAV that uses offline trajectories first to
analyze the existing behavior of autonomous vehicles and determine appropriate
thresholds to find the probability of failure events. To this end, we introduce
a three-step methodology which i) uses offline state action pairs of any
autonomous vehicle under test, ii) builds an abstract behavior representation
using our designed reward modeling technique to analyze states with uncertain
driving decisions, and iii) uses a disturbance model for minimal perturbation
attacks where the driving decisions are less confident. Our reward modeling
technique helps in creating a behavior representation that allows us to
highlight regions of likely uncertain behavior even when the standard
autonomous vehicle performs well. We perform our experiments in a high-fidelity
urban driving environment using three different driving scenarios containing
single- and multi-agent interactions. Our experiment shows an increase in 35,
23, 48, and 50% in the occurrences of vehicle collision, road object collision,
pedestrian collision, and offroad steering events, respectively by the
autonomous vehicle under test, demonstrating a significant increase in failure
events. We compare ReMAV with two baselines and show that ReMAV demonstrates
significantly better effectiveness in generating failure events compared to the
baselines in all evaluation metrics.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 13:09:00 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Dec 2023 11:05:53 GMT"
}
] | 1,704,153,600,000 | [
[
"Sharif",
"Aizaz",
""
],
[
"Marijan",
"Dusica",
""
]
] |
2308.14719 | Gal Elgavish | Gal Elgavish | Hierarchical Time Series Forecasting with Bayesian Modeling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We encounter time series data in many domains such as finance, physics,
business, and weather. One of the main tasks of time series analysis, one that
helps to take informed decisions under uncertainty, is forecasting. Time series
are often hierarchically structured, e.g., a company sales might be broken down
into different regions, and each region into different stores. In some cases
the number of series in the hierarchy is too big to fit in a single model to
produce forecasts in relevant time, and a decentralized approach is beneficial.
One way to do this is to train independent forecasting models for each series
and for some summary statistics series implied by the hierarchy (e.g. the sum
of all series) and to pass those models to a reconciliation algorithm to
improve those forecasts by sharing information between the series.
In this work we focus on the reconciliation step, and propose a method to do
so from a Bayesian perspective - Bayesian forecast reconciliation. We also
define the common case of linear Gaussian reconciliation, where the forecasts
are Gaussian and the hierarchy has linear structure, and show that we can
compute reconciliation in closed form. We evaluate these methods on synthetic
and real data sets, and compare them to other work in this field.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:20:47 GMT"
}
] | 1,693,267,200,000 | [
[
"Elgavish",
"Gal",
""
]
] |
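For the linear Gaussian case mentioned in the record above, reconciliation admits a closed form; the sketch below uses the standard GLS/MinT-style projection y_rec = S (S' W^-1 S)^-1 S' W^-1 y_hat on a two-level toy hierarchy. The hierarchy (total = A + B), the base forecasts, and the forecast-error covariance W are invented, and this is a common projection from the reconciliation literature rather than necessarily the authors' exact Bayesian formulation.

```python
import numpy as np

S = np.array([[1.0, 1.0],    # total  = A + B
              [1.0, 0.0],    # series A
              [0.0, 1.0]])   # series B
y_hat = np.array([105.0, 60.0, 50.0])   # incoherent base forecasts (60 + 50 != 105)
W = np.diag([4.0, 1.0, 1.0])            # assumed forecast-error covariance

W_inv = np.linalg.inv(W)
G = np.linalg.inv(S.T @ W_inv @ S) @ S.T @ W_inv   # reconciliation matrix
y_rec = S @ (G @ y_hat)                            # coherent reconciled forecasts

print(y_rec)
print(y_rec[0], y_rec[1] + y_rec[2])               # first entry equals the sum of the others
```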
2308.14732 | Renato Krohling | Renato A. Krohling | Bayesian artificial brain with ChatGPT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to investigate the mathematical problem-solving capabilities
of Chat Generative Pre-Trained Transformer (ChatGPT) in case of Bayesian
reasoning. The study draws inspiration from Zhu & Gigerenzer's research in
2006, which posed the question: Can children reason the Bayesian way? In the
pursuit of answering this question, a set of 10 Bayesian reasoning problems
were presented. The results of their work revealed that children's ability to
reason effectively using Bayesian principles is contingent upon a
well-structured information representation. In this paper, we present the same
set of 10 Bayesian reasoning problems to ChatGPT. Remarkably, the results
demonstrate that ChatGPT provides the right solutions to all problems.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 17:34:24 GMT"
}
] | 1,693,267,200,000 | [
[
"Krohling",
"Renato A.",
""
]
] |
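For context, here is a worked example of the kind of Bayesian reasoning problem involved; the numbers are hypothetical and not one of the 10 problems from Zhu & Gigerenzer. Both the probability form of Bayes' rule and the natural-frequency rephrasing give essentially the same answer.

```python
prevalence = 0.01          # P(disease)
sensitivity = 0.90         # P(positive | disease)
false_positive = 0.05      # P(positive | no disease)

# Bayes' rule: P(disease | positive)
posterior = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_positive * (1 - prevalence))
print(round(posterior, 3))   # about 0.15

# Natural-frequency rephrasing (the representation Zhu & Gigerenzer found easier):
# of 1000 people, 10 have the disease and 9 of them test positive; of the 990
# healthy people, about 50 test positive, so 9 / (9 + 50).
print(9 / 59)
```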
2308.14840 | Mihai Christodorescu | Clark Barrett, Brad Boyd, Elie Burzstein, Nicholas Carlini, Brad Chen,
Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil
Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha,
Daniel Kang, Florian Kerschbaum, Eric Mitchell, John Mitchell, Zulfikar
Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang | Identifying and Mitigating the Security Risks of Generative AI | null | Foundations and Trends in Privacy and Security 6 (2023) 1-52 | 10.1561/3300000041 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Every major technical invention resurfaces the dual-use dilemma -- the new
technology has the potential to be used for good as well as for harm.
Generative AI (GenAI) techniques, such as large language models (LLMs) and
diffusion models, have shown remarkable capabilities (e.g., in-context
learning, code-completion, and text-to-image generation and editing). However,
GenAI can be used just as well by attackers to generate new attacks and
increase the velocity and efficacy of existing attacks.
This paper reports the findings of a workshop held at Google (co-organized by
Stanford University and the University of Wisconsin-Madison) on the dual-use
dilemma posed by GenAI. This paper is not meant to be comprehensive, but is
rather an attempt to synthesize some of the interesting findings from the
workshop. We discuss short-term and long-term goals for the community on this
topic. We hope this paper provides both a launching point for a discussion on
this important topic as well as interesting problems that the research
community can work to address.
| [
{
"version": "v1",
"created": "Mon, 28 Aug 2023 18:51:09 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Oct 2023 05:05:12 GMT"
},
{
"version": "v3",
"created": "Tue, 17 Oct 2023 23:27:11 GMT"
},
{
"version": "v4",
"created": "Fri, 29 Dec 2023 00:30:34 GMT"
}
] | 1,704,067,200,000 | [
[
"Barrett",
"Clark",
""
],
[
"Boyd",
"Brad",
""
],
[
"Burzstein",
"Elie",
""
],
[
"Carlini",
"Nicholas",
""
],
[
"Chen",
"Brad",
""
],
[
"Choi",
"Jihye",
""
],
[
"Chowdhury",
"Amrita Roy",
""
],
[
"Christodorescu",
"Mihai",
""
],
[
"Datta",
"Anupam",
""
],
[
"Feizi",
"Soheil",
""
],
[
"Fisher",
"Kathleen",
""
],
[
"Hashimoto",
"Tatsunori",
""
],
[
"Hendrycks",
"Dan",
""
],
[
"Jha",
"Somesh",
""
],
[
"Kang",
"Daniel",
""
],
[
"Kerschbaum",
"Florian",
""
],
[
"Mitchell",
"Eric",
""
],
[
"Mitchell",
"John",
""
],
[
"Ramzan",
"Zulfikar",
""
],
[
"Shams",
"Khawaja",
""
],
[
"Song",
"Dawn",
""
],
[
"Taly",
"Ankur",
""
],
[
"Yang",
"Diyi",
""
]
] |
2308.15002 | Yi Xu | Yi Xu, Junjie Ou, Hui Xu, Luoyi Fu, Lei Zhou, Xinbing Wang, Chenghu
Zhou | Exploring the Limits of Historical Information for Temporal Knowledge
Graph Extrapolation | Extended version of AAAI paper arXiv:2211.10904 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Temporal knowledge graphs, representing the dynamic relationships and
interactions between entities over time, have been identified as a promising
approach for event forecasting. However, a limitation of most temporal
knowledge graph reasoning methods is their heavy reliance on the recurrence or
periodicity of events, which brings challenges to inferring future events
related to entities that lack historical interaction. In fact, the current
state of affairs is often the result of a combination of historical information
and underlying factors that are not directly observable. To this end, we
investigate the limits of historical information for temporal knowledge graph
extrapolation and propose a new event forecasting model called Contrastive
Event Network (CENET) based on a novel training framework of historical
contrastive learning. CENET learns both the historical and non-historical
dependency to distinguish the most potential entities that best match the given
query. Simultaneously, by launching contrastive learning, it trains
representations of queries to probe whether the current moment is more
dependent on historical or non-historical events. These representations further
help train a binary classifier, whose output is a boolean mask, indicating the
related entities in the search space. During the inference process, CENET
employs a mask-based strategy to generate the final results. We evaluate our
proposed model on five benchmark graphs. The results demonstrate that CENET
significantly outperforms all existing methods in most metrics, achieving at
least 8.3% relative improvement of Hits@1 over previous state-of-the-art
baselines on event-based datasets.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 03:26:38 GMT"
}
] | 1,693,353,600,000 | [
[
"Xu",
"Yi",
""
],
[
"Ou",
"Junjie",
""
],
[
"Xu",
"Hui",
""
],
[
"Fu",
"Luoyi",
""
],
[
"Zhou",
"Lei",
""
],
[
"Wang",
"Xinbing",
""
],
[
"Zhou",
"Chenghu",
""
]
] |
2308.15030 | Rui Kong | Rui Kong, Yuanchun Li, Qingtian Feng, Weijun Wang, Xiaozhou Ye, Ye
Ouyang, Linghe Kong, Yunxin Liu | SwapMoE: Serving Off-the-shelf MoE-based Large Language Models with
Tunable Memory Budget | Accepted at ACL 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixture of experts (MoE) is a popular technique to improve capacity of Large
Language Models (LLMs) with conditionally-activated parallel experts. However,
serving MoE models on memory-constrained devices is challenging due to the
large parameter size. Typical solutions such as memory swapping or expert
pruning may lead to significantly higher latency or severe accuracy loss. In
this paper, we introduce SwapMoE, a framework for efficient serving of
MoE-based large language models with tunable memory budgets. The main idea of
SwapMoE is to keep a small dynamic set of important experts, namely Virtual
Experts, in the main memory for inference, while seamlessly maintaining how the
Virtual Experts map to the actual experts. Experiments have shown that SwapMoE
can reduce the memory footprint while maintaining reasonable accuracy. For
example, on text summarization tasks with Switch Transformer, SwapMoE can
reduce the memory consumption from 14.2 GiB to 4.7 GiB, together with 50\%
latency reduction and a slight Rouge-2 score drop of 0.041.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 05:25:21 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Dec 2023 02:53:41 GMT"
},
{
"version": "v3",
"created": "Tue, 28 May 2024 02:08:30 GMT"
},
{
"version": "v4",
"created": "Wed, 29 May 2024 08:25:03 GMT"
}
] | 1,717,027,200,000 | [
[
"Kong",
"Rui",
""
],
[
"Li",
"Yuanchun",
""
],
[
"Feng",
"Qingtian",
""
],
[
"Wang",
"Weijun",
""
],
[
"Ye",
"Xiaozhou",
""
],
[
"Ouyang",
"Ye",
""
],
[
"Kong",
"Linghe",
""
],
[
"Liu",
"Yunxin",
""
]
] |
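A toy sketch of the "Virtual Experts" idea from the record above: keep only a small, dynamically refreshed set of experts resident in memory and route requests among them. The usage statistics, cache size, fallback routing, and refresh policy are simplified assumptions and not SwapMoE's actual mechanism.

```python
from collections import Counter

class VirtualExpertCache:
    def __init__(self, budget):
        self.budget = budget
        self.usage = Counter()
        self.resident = set(range(budget))      # experts currently kept in memory

    def route(self, preferred_expert):
        self.usage[preferred_expert] += 1
        if preferred_expert in self.resident:
            return preferred_expert
        # Fall back to the most-used resident expert instead of loading from storage.
        return max(self.resident, key=lambda e: self.usage[e])

    def refresh(self):
        # Periodically swap the resident set to the currently most-used experts.
        self.resident = {e for e, _ in self.usage.most_common(self.budget)}

cache = VirtualExpertCache(budget=2)
for router_choice in [3, 3, 1, 3, 0, 3, 1]:     # router's preferred expert per token
    cache.route(router_choice)
cache.refresh()
print(cache.resident)                           # e.g. {1, 3}: the retained "Virtual Experts"
```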
2308.15168 | Erkan Karabulut | Erkan Karabulut, Salvatore F. Pileggi, Paul Groth and Victoria Degeler | Ontologies in Digital Twins: A Systematic Literature Review | The Systematic Literature Review (SLR) is submitted to Future
Generation Computer System journal's Special Issue on Digital Twin for Future
Networks and Emerging IoT Applications (2023) | null | 10.1016/j.future.2023.12.013 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Digital Twins (DT) facilitate monitoring and reasoning processes in
cyber-physical systems. They have progressively gained popularity over the past
years because of intense research activity and industrial advancements.
Cognitive Twins is a novel concept, recently coined to refer to the involvement
of Semantic Web technology in DTs. Recent studies address the relevance of
ontologies and knowledge graphs in the context of DTs, in terms of knowledge
representation, interoperability and automatic reasoning. However, there is no
comprehensive analysis of how semantic technologies, and specifically
ontologies, are utilized within DTs. This Systematic Literature Review (SLR) is
based on the analysis of 82 research articles, that either propose or benefit
from ontologies with respect to DT. The paper uses different analysis
perspectives, including a structural analysis based on a reference DT
architecture, and an application-specific analysis to specifically address the
different domains, such as Manufacturing and Infrastructure. The review also
identifies open issues and possible research directions on the usage of
ontologies and knowledge graphs in DTs.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 09:52:21 GMT"
}
] | 1,703,203,200,000 | [
[
"Karabulut",
"Erkan",
""
],
[
"Pileggi",
"Salvatore F.",
""
],
[
"Groth",
"Paul",
""
],
[
"Degeler",
"Victoria",
""
]
] |
2308.15239 | Filipe Assun\c{c}\~ao | Sofia Aparicio, Samuel Arcadinho, Jo\~ao Nadkarni, David Apar\'icio,
Jo\~ao Lages, Mariana Louren\c{c}o, Bart{\l}omiej Matejczyk, Filipe
Assun\c{c}\~ao | Natural language to SQL in low-code platforms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | One of the developers' biggest challenges in low-code platforms is retrieving
data from a database using SQL queries. Here, we propose a pipeline allowing
developers to write natural language (NL) to retrieve data. In this study, we
collect, label, and validate data covering the SQL queries most often performed
by OutSystems users. We use that data to train a NL model that generates SQL.
Alongside this, we describe the entire pipeline, which comprises a feedback
loop that allows us to quickly collect production data and use it to retrain
our SQL generation model. Using crowd-sourcing, we collect 26k NL and SQL pairs
and obtain an additional 1k pairs from production data. Finally, we develop a
UI that allows developers to input a NL query in a prompt and receive a
user-friendly representation of the resulting SQL query. We use A/B testing to
compare four different models in production and observe a 240% improvement in
terms of adoption of the feature, 220% in terms of engagement rate, and a 90%
decrease in failure rate when compared against the first model that we put into
production, showcasing the effectiveness of our pipeline in continuously
improving our feature.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 11:59:02 GMT"
}
] | 1,693,353,600,000 | [
[
"Aparicio",
"Sofia",
""
],
[
"Arcadinho",
"Samuel",
""
],
[
"Nadkarni",
"João",
""
],
[
"Aparício",
"David",
""
],
[
"Lages",
"João",
""
],
[
"Lourenço",
"Mariana",
""
],
[
"Matejczyk",
"Bartłomiej",
""
],
[
"Assunção",
"Filipe",
""
]
] |
2308.15324 | Pengwei Xing | Pengwei Xing, Songtao Lu, Han Yu | Federated Neuro-Symbolic Learning | accepted by ICML 2024 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuro-symbolic learning (NSL) models complex symbolic rule patterns into
latent variable distributions by neural networks, which reduces rule search
space and generates unseen rules to improve downstream task performance.
Centralized NSL learning involves directly acquiring data from downstream
tasks, which is not feasible for federated learning (FL). To address this
limitation, we shift the focus from such a one-to-one interactive
neuro-symbolic paradigm to a one-to-many Federated Neuro-Symbolic Learning
framework (FedNSL) with latent variables as the FL communication medium. Built
on the basis of our novel reformulation of the NSL theory, FedNSL is capable of
identifying and addressing rule distribution heterogeneity through a simple and
effective Kullback-Leibler (KL) divergence constraint on rule distribution
applicable under the FL setting. It further theoretically adjusts variational
expectation maximization (V-EM) to reduce the rule search space across domains.
This is the first incorporation of distribution-coupled bilevel optimization
into FL. Extensive experiments based on both synthetic and real-world data
demonstrate significant advantages of FedNSL compared to five state-of-the-art
methods. It outperforms the best baseline by 17% and 29% in terms of unbalanced
average training accuracy and unseen average testing accuracy, respectively.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 14:20:17 GMT"
},
{
"version": "v2",
"created": "Mon, 27 May 2024 14:29:29 GMT"
}
] | 1,716,854,400,000 | [
[
"Xing",
"Pengwei",
""
],
[
"Lu",
"Songtao",
""
],
[
"Yu",
"Han",
""
]
] |
2308.15339 | Elham Nasarian | Elham Nasarian, Danial Sharifrazi, Saman Mohsenirad, Kwok Tsui,
Roohallah Alizadehsani | AI Framework for Early Diagnosis of Coronary Artery Disease: An
Integration of Borderline SMOTE, Autoencoders and Convolutional Neural
Networks Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The accuracy of coronary artery disease (CAD) diagnosis is dependent on a
variety of factors, including demographic, symptom, medical examination,
ECG, and echocardiography data, among others. In this context, artificial
intelligence (AI) can help clinicians identify high-risk patients early in the
diagnostic process, by synthesizing information from multiple factors. To this
aim, Machine Learning algorithms are used to classify patients based on their
CAD disease risk. In this study, we contribute to this research field by
developing a methodology for balancing and augmenting data for more accurate
prediction when the data is imbalanced and the sample size is small. The
methodology can be used in a variety of other situations, particularly when
data collection is expensive and the sample size is small. The experimental
results revealed that the average accuracy of our proposed method for CAD
prediction was 95.36, higher than that of random forest (RF), decision tree
(DT), support vector machine (SVM), logistic regression (LR), and artificial
neural network (ANN).
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 14:33:38 GMT"
}
] | 1,693,353,600,000 | [
[
"Nasarian",
"Elham",
""
],
[
"Sharifrazi",
"Danial",
""
],
[
"Mohsenirad",
"Saman",
""
],
[
"Tsui",
"Kwok",
""
],
[
"Alizadehsani",
"Roohallah",
""
]
] |
2308.15390 | Leila Bagheriye | Otto van der Himst, Leila Bagheriye, and Johan Kwisthout | Bayesian Integration of Information Using Top-Down Modulated WTA
Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Winner Take All (WTA) circuits, a type of Spiking Neural Network (SNN), have
been suggested as facilitating the brain's ability to process information in a
Bayesian manner. Research has shown that WTA circuits are capable of
approximating hierarchical Bayesian models via Expectation Maximization (EM).
So far, research in this direction has focused on bottom up processes. This is
contrary to neuroscientific evidence that shows that, besides bottom up
processes, top down processes too play a key role in information processing by
the human brain. Several functions ascribed to top down processes include
direction of attention, adjusting for expectations, facilitation of encoding
and recall of learned information, and imagery. This paper explores whether WTA
circuits are suitable for further integrating information represented in
separate WTA networks. Furthermore, it explores whether, and under what
circumstances, top down processes can improve WTA network performance with
respect to inference and learning. The results show that WTA circuits are
capable of integrating the probabilistic information represented by other WTA
networks, and that top down processes can improve a WTA network's inference and
learning performance. Notably, it is able to do this according to key
neuromorphic principles, making it ideal for low-latency and energy efficient
implementation on neuromorphic hardware.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 15:33:51 GMT"
}
] | 1,693,353,600,000 | [
[
"van der Himst",
"Otto",
""
],
[
"Bagheriye",
"Leila",
""
],
[
"Kwisthout",
"Johan",
""
]
] |
2308.15514 | Robert Trager | Robert Trager, Ben Harack, Anka Reuel, Allison Carnegie, Lennart Heim,
Lewis Ho, Sarah Kreps, Ranjit Lall, Owen Larter, Se\'an \'O h\'Eigeartaigh,
Simon Staffell, Jos\'e Jaime Villalobos | International Governance of Civilian AI: A Jurisdictional Certification
Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This report describes trade-offs in the design of international governance
arrangements for civilian artificial intelligence (AI) and presents one
approach in detail. This approach represents the extension of a standards,
licensing, and liability regime to the global level. We propose that states
establish an International AI Organization (IAIO) to certify state
jurisdictions (not firms or AI projects) for compliance with international
oversight standards. States can give force to these international standards by
adopting regulations prohibiting the import of goods whose supply chains embody
AI from non-IAIO-certified jurisdictions. This borrows attributes from models
of existing international organizations, such as the International Civil
Aviation Organization (ICAO), the International Maritime Organization (IMO),
and the Financial Action Task Force (FATF). States can also adopt multilateral
controls on the export of AI product inputs, such as specialized hardware, to
non-certified jurisdictions. Indeed, both the import and export standards could
be required for certification. As international actors reach consensus on risks
of and minimum standards for advanced AI, a jurisdictional certification regime
could mitigate a broad range of potential harms, including threats to public
safety.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 16:43:59 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Sep 2023 14:03:37 GMT"
}
] | 1,694,476,800,000 | [
[
"Trager",
"Robert",
""
],
[
"Harack",
"Ben",
""
],
[
"Reuel",
"Anka",
""
],
[
"Carnegie",
"Allison",
""
],
[
"Heim",
"Lennart",
""
],
[
"Ho",
"Lewis",
""
],
[
"Kreps",
"Sarah",
""
],
[
"Lall",
"Ranjit",
""
],
[
"Larter",
"Owen",
""
],
[
"hÉigeartaigh",
"Seán Ó",
""
],
[
"Staffell",
"Simon",
""
],
[
"Villalobos",
"José Jaime",
""
]
] |
2308.15568 | Singh Akansha | Singh Akansha | Over-Squashing in Graph Neural Networks: A Comprehensive survey | 14 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) revolutionize machine learning for
graph-structured data, effectively capturing complex relationships. They
disseminate information through interconnected nodes, but long-range
interactions face a challenge known as "over-squashing". This survey delves into
over-squashing in GNNs, where
long-range information dissemination is hindered, impacting tasks reliant on
intricate long-distance interactions. It comprehensively explores the causes,
consequences, and mitigation strategies for over-squashing. Various
methodologies are reviewed, including graph rewiring, novel normalization,
spectral analysis, and curvature-based strategies, with a focus on their
trade-offs and effectiveness. The survey also discusses the interplay between
over-squashing and other GNN limitations, such as over-smoothing, and provides
a taxonomy of models designed to address these issues in node and graph-level
tasks. Benchmark datasets for performance evaluation are also detailed, making
this survey a valuable resource for researchers and practitioners in the GNN
field.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 18:46:15 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Sep 2023 11:54:33 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Sep 2023 13:06:01 GMT"
},
{
"version": "v4",
"created": "Sat, 21 Oct 2023 09:39:48 GMT"
},
{
"version": "v5",
"created": "Tue, 28 Nov 2023 11:03:06 GMT"
},
{
"version": "v6",
"created": "Mon, 29 Apr 2024 14:15:42 GMT"
}
] | 1,714,435,200,000 | [
[
"Akansha",
"Singh",
""
]
] |
2308.15620 | Pakizar Shamoi Dr | Izbassar Assylzhan, Muragul Muratbekova, Daniyar Amangeldi, Nazzere
Oryngozha, Anna Ogorodova, Pakizar Shamoi | Intelligent System for Assessing University Student Personality
Development and Career Readiness | 8 pages. Submitted to Elsevier conference | null | 10.1016/j.procs.2023.12.138 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While academic metrics such as transcripts and GPA are commonly used to
evaluate students' knowledge acquisition, there is a lack of comprehensive
metrics to measure their preparedness for the challenges of post-graduation
life. This research paper explores the impact of various factors on university
students' readiness for change and transition, with a focus on their
preparedness for careers. The methodology employed in this study involves
designing a survey based on Paul J. Mayer's "The Balance Wheel" to capture
students' sentiments on various life aspects, including satisfaction with the
educational process and expectations of salary. The collected data from a KBTU
student survey (n=47) were processed through machine learning models: Linear
Regression, Support Vector Regression (SVR), and Random Forest Regression.
Subsequently, an intelligent system was built using these models and fuzzy
sets. The system is capable of evaluating graduates' readiness for their future
careers and demonstrates a high predictive power. The findings of this research
have practical implications for educational institutions. Such an intelligent
system can serve as a valuable tool for universities to assess and enhance
students' preparedness for post-graduation challenges. By recognizing the
factors contributing to students' readiness for change, universities can refine
curricula and processes to better prepare students for their career journeys.
| [
{
"version": "v1",
"created": "Tue, 29 Aug 2023 20:32:58 GMT"
}
] | 1,705,881,600,000 | [
[
"Assylzhan",
"Izbassar",
""
],
[
"Muratbekova",
"Muragul",
""
],
[
"Amangeldi",
"Daniyar",
""
],
[
"Oryngozha",
"Nazzere",
""
],
[
"Ogorodova",
"Anna",
""
],
[
"Shamoi",
"Pakizar",
""
]
] |
2308.15802 | Junjie Zhang | Yangkun Chen, Joseph Suarez, Junjie Zhang, Chenghui Yu, Bo Wu, Hanmo
Chen, Hengman Zhu, Rui Du, Shanliang Qian, Shuai Liu, Weijun Hong, Jinke He,
Yibing Zhang, Liang Zhao, Clare Zhu, Julian Togelius, Sharada Mohanty, Jiaxin
Chen, Xiu Li, Xiaolong Zhu, Phillip Isola | Benchmarking Robustness and Generalization in Multi-Agent Systems: A
Case Study on Neural MMO | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the results of the second Neural MMO challenge, hosted at IJCAI
2022, which received 1600+ submissions. This competition targets robustness and
generalization in multi-agent systems: participants train teams of agents to
complete a multi-task objective against opponents not seen during training. The
competition combines relatively complex environment design with large numbers
of agents in the environment. The top submissions demonstrate strong success on
this task using mostly standard reinforcement learning (RL) methods combined
with domain-specific engineering. We summarize the competition design and
results and suggest that, as an academic community, competitions may be a
powerful approach to solving hard problems and establishing a solid benchmark
for algorithms. We will open-source our benchmark including the environment
wrapper, baselines, a visualization tool, and selected policies for further
research.
| [
{
"version": "v1",
"created": "Wed, 30 Aug 2023 07:16:11 GMT"
}
] | 1,693,440,000,000 | [
[
"Chen",
"Yangkun",
""
],
[
"Suarez",
"Joseph",
""
],
[
"Zhang",
"Junjie",
""
],
[
"Yu",
"Chenghui",
""
],
[
"Wu",
"Bo",
""
],
[
"Chen",
"Hanmo",
""
],
[
"Zhu",
"Hengman",
""
],
[
"Du",
"Rui",
""
],
[
"Qian",
"Shanliang",
""
],
[
"Liu",
"Shuai",
""
],
[
"Hong",
"Weijun",
""
],
[
"He",
"Jinke",
""
],
[
"Zhang",
"Yibing",
""
],
[
"Zhao",
"Liang",
""
],
[
"Zhu",
"Clare",
""
],
[
"Togelius",
"Julian",
""
],
[
"Mohanty",
"Sharada",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Li",
"Xiu",
""
],
[
"Zhu",
"Xiaolong",
""
],
[
"Isola",
"Phillip",
""
]
] |
2308.15819 | Tuukka Korhonen | Tuukka Korhonen, Matti J\"arvisalo | SharpSAT-TD in Model Counting Competitions 2021-2023 | 3 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe SharpSAT-TD, our submission to the unweighted and weighted tracks
of the Model Counting Competition in 2021-2023, which has won in total $6$
first places in different tracks of the competition. SharpSAT-TD is based on
SharpSAT [Thurley, SAT 2006], with the primary novel modification being the use
of tree decompositions in the variable selection heuristic as introduced by the
authors in [CP 2021]. Unlike the version of SharpSAT-TD evaluated in [CP 2021],
the current version that is available in https://github.com/Laakeri/sharpsat-td
also features other significant modifications compared to the original
SharpSAT, for example, a new preprocessor.
| [
{
"version": "v1",
"created": "Wed, 30 Aug 2023 07:43:12 GMT"
}
] | 1,693,440,000,000 | [
[
"Korhonen",
"Tuukka",
""
],
[
"Järvisalo",
"Matti",
""
]
] |