id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.09111 | Michael Thielscher | Michael Cunanan and Michael Thielscher | On Optimal Strategies for Wordle and General Guessing Games | This is an extended version, with full proofs and additional examples
in the appendix, of a paper accepted for publication and presentation at
IJCAI 2023 (http://www.ijcai.org) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent popularity of Wordle has revived interest in guessing games. We
develop a general method for finding optimal strategies for guessing games
while avoiding an exhaustive search. Our main contributions are several
theorems that build towards a general theory to prove the optimality of a
strategy for a guessing game. This work is developed to apply to any guessing
game, but we use Wordle as an example to present concrete results.
| [
{
"version": "v1",
"created": "Tue, 16 May 2023 02:30:10 GMT"
}
] | 1,684,281,600,000 | [
[
"Cunanan",
"Michael",
""
],
[
"Thielscher",
"Michael",
""
]
] |
2305.09200 | Paolo Liberatore | Paolo Liberatore | Representing states in iterated belief revision | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iterated belief revision requires information about the current beliefs. This
information is represented by mathematical structures called doxastic states.
Most literature concentrates on how to revise a doxastic state and neglects
that it may grow exponentially. This problem is studied for the most common
ways of storing a doxastic state. All four methods are able to store every
doxastic state, but some do so in less space than others. In particular, the
explicit representation (an enumeration of the current beliefs) is the most
wasteful of space. The level representation (a sequence of propositional
formulae) and the natural representation (a history of natural revisions) are
more compact, and the lexicographic representation (a history of
lexicographic revisions) is more compact still.
| [
{
"version": "v1",
"created": "Tue, 16 May 2023 06:16:23 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Feb 2024 16:45:05 GMT"
}
] | 1,708,905,600,000 | [
[
"Liberatore",
"Paolo",
""
]
] |
2305.09247 | Jiong Yang | Jiong Yang and Kuldeep S. Meel | Rounding Meets Approximate Model Counting | 18 pages, 3 figures, to be published in CAV23 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of model counting, also known as #SAT, is to compute the number
of models or satisfying assignments of a given Boolean formula $F$. Model
counting is a fundamental problem in computer science with a wide range of
applications. In recent years, there has been a growing interest in using
hashing-based techniques for approximate model counting that provide
$(\varepsilon, \delta)$-guarantees: i.e., the count returned is within a
$(1+\varepsilon)$-factor of the exact count with confidence at least
$1-\delta$. While hashing-based techniques attain reasonable scalability for
large enough values of $\delta$, their scalability is severely impacted for
smaller values of $\delta$, thereby preventing their adoption in application
domains that require estimates with high confidence.
The primary contribution of this paper is to address the Achilles heel of
hashing-based techniques: we propose a novel approach based on rounding that
allows us to achieve a significant reduction in runtime for smaller values of
$\delta$. The resulting counter, called RoundMC, achieves a substantial runtime
performance improvement over the current state-of-the-art counter, ApproxMC. In
particular, our extensive evaluation over a benchmark suite consisting of 1890
instances shows that RoundMC solves 204 more instances than ApproxMC, and
achieves a $4\times$ speedup over ApproxMC.
| [
{
"version": "v1",
"created": "Tue, 16 May 2023 07:53:17 GMT"
}
] | 1,684,281,600,000 | [
[
"Yang",
"Jiong",
""
],
[
"Meel",
"Kuldeep S.",
""
]
] |
2305.09840 | Masataro Asai | Stephen Wissow, Masataro Asai | Scale-Adaptive Balancing of Exploration and Exploitation in Classical
Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Balancing exploration and exploitation has been an important problem in both
game tree search and automated planning. However, while the problem has been
extensively analyzed within the Multi-Armed Bandit (MAB) literature, the
planning community has had limited success when attempting to apply those
results. We show that a more detailed theoretical understanding of MAB
literature helps improve existing planning algorithms that are based on Monte
Carlo Tree Search (MCTS) / Trial Based Heuristic Tree Search (THTS). In
particular, THTS uses UCB1 MAB algorithms in an ad hoc manner, as UCB1's
theoretical requirement of fixed bounded support reward distributions is not
satisfied within heuristic search for classical planning. The core issue lies
in UCB1's lack of adaptations to the different scales of the rewards. We
propose GreedyUCT-Normal, an MCTS/THTS algorithm with the UCB1-Normal bandit
for agile classical planning, which handles distributions with different
scales by taking the reward variance into consideration, resulting in
improved algorithmic performance (more plans found with fewer node
expansions) that outperforms Greedy Best First Search and existing
MCTS/THTS-based algorithms (GreedyUCT, GreedyUCT*).
| [
{
"version": "v1",
"created": "Tue, 16 May 2023 22:46:37 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 20:00:03 GMT"
}
] | 1,688,601,600,000 | [
[
"Wissow",
"Stephen",
""
],
[
"Asai",
"Masataro",
""
]
] |
2305.09974 | Kai Wang | Kai Wang and Siqiang Luo and Dan Lin | River of No Return: Graph Percolation Embeddings for Efficient Knowledge
Graph Reasoning | 9 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study Graph Neural Networks (GNNs)-based embedding techniques for
knowledge graph (KG) reasoning. For the first time, we link the path redundancy
issue in the state-of-the-art KG reasoning models based on path encoding and
message passing to the transformation error in model training, which brings us
new theoretical insights into KG reasoning, as well as high efficacy in
practice. On the theoretical side, we analyze the entropy of transformation
error in KG paths and point out query-specific redundant paths causing entropy
increases. These findings guide us to maintain the shortest paths and remove
redundant paths for minimized-entropy message passing. To achieve this goal, on
the practical side, we propose an efficient Graph Percolation Process motivated
by the percolation model in Fluid Mechanics, and design a lightweight GNN-based
KG reasoning framework called Graph Percolation Embeddings (GraPE). GraPE
outperforms previous state-of-the-art methods in both transductive and
inductive reasoning tasks while requiring fewer training parameters and less
inference time.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 06:13:28 GMT"
}
] | 1,684,368,000,000 | [
[
"Wang",
"Kai",
""
],
[
"Luo",
"Siqiang",
""
],
[
"Lin",
"Dan",
""
]
] |
2305.10032 | Alessio Zanga | Alessio Zanga, Fabio Stella | A Survey on Causal Discovery: Theory and Practice | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding the laws that govern a phenomenon is the core of scientific
progress. This is especially true when the goal is to model the interplay
between different aspects in a causal fashion. Indeed, causal inference itself
is specifically designed to quantify the underlying relationships that connect
a cause to its effect. Causal discovery is a branch of the broader field of
causality in which causal graphs are recovered from data (whenever possible),
enabling the identification and estimation of causal effects. In this paper, we
explore recent advancements in a unified manner, provide a consistent overview
of existing algorithms developed under different settings, report useful tools
and data, and present real-world applications to understand why and how these
methods can be fruitfully exploited.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 08:18:56 GMT"
}
] | 1,698,969,600,000 | [
[
"Zanga",
"Alessio",
""
],
[
"Stella",
"Fabio",
""
]
] |
2305.10041 | Alessio Zanga | Alessio Zanga, Alice Bernasconi, Peter J.F. Lucas, Hanny Pijnenborg,
Casper Reijnen, Marco Scutari, Fabio Stella | Risk Assessment of Lymph Node Metastases in Endometrial Cancer Patients:
A Causal Approach | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Assessing the pre-operative risk of lymph node metastases in endometrial
cancer patients is a complex and challenging task. In principle, machine
learning and deep learning models are flexible and expressive enough to capture
the dynamics of clinical risk assessment. However, in this setting we are
limited to observational data with quality issues, missing values, small sample
size and high dimensionality: we cannot reliably learn such models from limited
observational data with these sources of bias. Instead, we choose to learn a
causal Bayesian network to mitigate the issues above and to leverage the prior
knowledge on endometrial cancer available from clinicians and physicians. We
introduce a causal discovery algorithm for causal Bayesian networks based on
bootstrap resampling, as opposed to the single imputation used in related
works. Moreover, we include a context variable to evaluate whether selection
bias results in learning spurious associations. Finally, we discuss the
strengths and limitations of our findings in light of the presence of missing
data that may be missing-not-at-random, which is common in real-world clinical
settings.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 08:33:32 GMT"
}
] | 1,684,368,000,000 | [
[
"Zanga",
"Alessio",
""
],
[
"Bernasconi",
"Alice",
""
],
[
"Lucas",
"Peter J. F.",
""
],
[
"Pijnenborg",
"Hanny",
""
],
[
"Reijnen",
"Casper",
""
],
[
"Scutari",
"Marco",
""
],
[
"Stella",
"Fabio",
""
]
] |
2305.10051 | Bahare Salmani | Bahare Salmani and Joost-Pieter Katoen | Finding an $\epsilon$-close Variation of Parameters in Bayesian Networks | IJCAI-2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the $\epsilon$-close parameter tuning problem for
Bayesian Networks (BNs): find a minimal $\epsilon$-close amendment of
probability entries in a given set of (rows in) conditional probability tables
that make a given quantitative constraint on the BN valid. Based on the
state-of-the-art "region verification" techniques for parametric Markov chains,
we propose an algorithm whose capabilities go beyond any existing techniques.
Our experiments show that $\epsilon$-close tuning of large BN benchmarks with
up to 8 parameters is feasible. In particular, by allowing (i) varied
parameters in multiple CPTs and (ii) inter-CPT parameter dependencies, we treat
subclasses of parametric BNs that have received scant attention so far.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 08:46:53 GMT"
}
] | 1,684,368,000,000 | [
[
"Salmani",
"Bahare",
""
],
[
"Katoen",
"Joost-Pieter",
""
]
] |
2305.10069 | Raphael Mazzine Barbosa De Oliveira | Raphael Mazzine Barbosa de Oliveira, Sofie Goethals, Dieter Brughmans,
and David Martens | Unveiling the Potential of Counterfactuals Explanations in Employability | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In eXplainable Artificial Intelligence (XAI), counterfactual explanations are
known to give simple, short, and comprehensible justifications for complex
model decisions. However, few applied studies have used them in real-world
cases. To fill this gap, this study shows how counterfactuals are applied to
employability-related problems that
involve complex machine learning algorithms. For these use cases, we use real
data obtained from a public Belgian employment institution (VDAB). The use
cases presented go beyond the mere application of counterfactuals as
explanations, showing how they can enhance decision support, comply with legal
requirements, guide controlled changes, and analyze novel insights.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 09:13:53 GMT"
}
] | 1,684,368,000,000 | [
[
"de Oliveira",
"Raphael Mazzine Barbosa",
""
],
[
"Goethals",
"Sofie",
""
],
[
"Brughmans",
"Dieter",
""
],
[
"Martens",
"David",
""
]
] |
2305.10091 | Ziyuan Zhou | Ziyuan Zhou, Guanjun Liu, Ying Tang | Multi-Agent Reinforcement Learning: Methods, Applications, Visionary
Prospects, and Challenges | 43 pages, 5 figures | null | 10.1109/TIV.2024.3408257 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multi-agent reinforcement learning (MARL) is a widely used Artificial
Intelligence (AI) technique. However, current studies and applications need to
address its scalability, non-stationarity, and trustworthiness. This paper aims
to review methods and applications and point out research trends and visionary
prospects for the next decade. First, this paper summarizes the basic methods
and application scenarios of MARL. Second, this paper outlines the
corresponding research methods and their limitations on safety, robustness,
generalization, and ethical constraints that need to be addressed in the
practical applications of MARL. In particular, we believe that trustworthy MARL
will become a hot research topic in the next decade. In addition, we suggest
that considering human interaction is essential for the practical application
of MARL in various societies. Therefore, this paper also analyzes the
challenges while MARL is applied to human-machine interaction.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 09:53:13 GMT"
}
] | 1,717,718,400,000 | [
[
"Zhou",
"Ziyuan",
""
],
[
"Liu",
"Guanjun",
""
],
[
"Tang",
"Ying",
""
]
] |
2305.10192 | Constantin Waubert de Puiseau | Constantin Waubert de Puiseau, Hasan Tercan, Tobias Meisen | Curriculum Learning in Job Shop Scheduling using Reinforcement Learning | in: Proceedings of the Conference on Production Systems and
Logistics: CPSL 2023 | null | 10.15488/13422 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Solving job shop scheduling problems (JSSPs) with a fixed strategy, such as a
priority dispatching rule, may yield satisfactory results for several problem
instances but insufficient results for others. From this single-strategy
perspective, finding a near-optimal solution to a specific JSSP
varies in difficulty even if the machine setup remains the same. A recent
intensively researched and promising method to deal with difficulty variability
is Deep Reinforcement Learning (DRL), which dynamically adjusts an agent's
planning strategy in response to difficult instances not only during training,
but also when applied to new situations. In this paper, we further improve DRL
as an underlying method by actively incorporating the variability of difficulty
within the same problem size into the design of the learning process. We base
our approach on a state-of-the-art methodology that solves JSSP by means of DRL
and graph neural network embeddings. Our work supplements the training routine
of the agent by a curriculum learning strategy that ranks the problem instances
shown during training by a new metric of problem instance difficulty. Our
results show that certain curricula lead to significantly better performances
of the DRL solutions. Agents trained on these curricula beat the top
performance of those trained on randomly distributed training data, reaching
3.2% shorter average makespans.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 13:15:27 GMT"
}
] | 1,684,368,000,000 | [
[
"de Puiseau",
"Constantin Waubert",
""
],
[
"Tercan",
"Hasan",
""
],
[
"Meisen",
"Tobias",
""
]
] |
2305.10378 | Kayla Boggess | Kayla Boggess, Sarit Kraus, and Lu Feng | Explainable Multi-Agent Reinforcement Learning for Temporal Queries | 9 pages, 4 figures, 1 table, 3 algorithms, IJCAI 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As multi-agent reinforcement learning (MARL) systems are increasingly
deployed throughout society, it is imperative yet challenging for users to
understand the emergent behaviors of MARL agents in complex environments. This
work presents an approach for generating policy-level contrastive explanations
for MARL to answer a temporal user query, which specifies a sequence of tasks
completed by agents with possible cooperation. The proposed approach encodes
the temporal query as a PCTL logic formula and checks if the query is feasible
under a given MARL policy via probabilistic model checking. Such explanations
can help reconcile discrepancies between the actual and anticipated multi-agent
behaviors. The proposed approach also generates correct and complete
explanations to pinpoint reasons that make a user query infeasible. We have
successfully applied the proposed approach to four benchmark MARL domains (up
to 9 agents in one domain). Moreover, the results of a user study show that the
generated explanations significantly improve user performance and satisfaction.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 17:04:29 GMT"
}
] | 1,684,368,000,000 | [
[
"Boggess",
"Kayla",
""
],
[
"Kraus",
"Sarit",
""
],
[
"Feng",
"Lu",
""
]
] |
2305.10538 | Christian Blakely | Christian D. Blakely | Generating Bayesian Network Models from Data Using Tsetlin Machines | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Bayesian networks (BN) are directed acyclic graphical (DAG) models that have
been adopted into many fields for their strengths in transparency,
interpretability, probabilistic reasoning, and causal modeling. Given a set of
data, one hurdle towards using BNs is in building the network graph from the
data that properly handles dependencies, whether correlated or causal. In this
paper, we propose an initial methodology for discovering network structures
using Tsetlin Machines.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 19:50:56 GMT"
}
] | 1,684,454,400,000 | [
[
"Blakely",
"Christian D.",
""
]
] |
2305.10556 | Shulu Chen | Shulu Chen, Antony Evans, Marc Brittain and Peng Wei | Integrated Conflict Management for UAM with Strategic Demand Capacity
Balancing and Learning-based Tactical Deconfliction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Urban air mobility (UAM) has the potential to revolutionize our daily
transportation, offering rapid and efficient deliveries of passengers and cargo
between dedicated locations within and around the urban environment. Before the
commercialization and adoption of this emerging transportation mode, however,
aviation safety must be guaranteed, i.e., all the aircraft have to be safely
separated by strategic and tactical deconfliction. Reinforcement learning has
demonstrated effectiveness in the tactical deconfliction of en route commercial
air traffic in simulation. However, its performance is found to be dependent on
the traffic density. In this project, we propose a novel framework that
combines demand capacity balancing (DCB) for strategic conflict management and
reinforcement learning for tactical separation. By using DCB to precondition
traffic to proper density levels, we show that reinforcement learning can
achieve much better performance for tactical safety separation. Our results
also indicate that this DCB preconditioning can allow target levels of safety
to be met that are otherwise impossible. In addition, combining strategic DCB
with reinforcement learning for tactical separation can meet these safety
levels while achieving greater operational efficiency than alternative
solutions.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 20:23:18 GMT"
}
] | 1,684,454,400,000 | [
[
"Chen",
"Shulu",
""
],
[
"Evans",
"Antony",
""
],
[
"Brittain",
"Marc",
""
],
[
"Wei",
"Peng",
""
]
] |
2305.10654 | Brendan Conway-Smith | Brendan Conway-Smith and Robert L. West | Clarifying System 1 & 2 through the Common Model of Cognition | In Proceedings of ICCM 2022 20th International Conference on
Cognitive Modelling
http://www.frankritter.com/papers/ICCM2022Proceedings.pdf. arXiv admin note:
substantial text overlap with arXiv:2305.09091 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There have been increasing challenges to dual-system descriptions of System-1
and System-2, critiquing them as imprecise and fostering misconceptions. We
address these issues here by way of Dennett's appeal to use computational
thinking as an analytical tool; specifically, we employ the Common Model of
Cognition. Results show that the characteristics thought to be distinctive of
System-1 and System-2 instead form a spectrum of cognitive properties. By
grounding System-1 and System-2 in the Common Model we aim to clarify their
underlying mechanisms, persisting misconceptions, and implications for
metacognition.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 02:25:03 GMT"
}
] | 1,684,454,400,000 | [
[
"Conway-Smith",
"Brendan",
""
],
[
"West",
"Robert L.",
""
]
] |
2305.10708 | Ayomide Owoyemi | Ayomide Owoyemi, Emmanuel Nnaemeka, Temitope O. Benson, Ronald Ikpe,
Blessing Nwachukwu, Temitope Isedowo | Machine Learning Recommendation System For Health Insurance Decision
Making In Nigeria | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The uptake of health insurance has been poor in Nigeria, a significant step
to improving this includes improved awareness, access to information and tools
to support decision making. Artificial intelligence (AI) based recommender
systems have gained popularity in helping individuals find movies, books,
music, and different types of products on the internet including diverse
applications in healthcare. The content-based methodology (item-based approach)
was employed in the recommender system. We applied both the K-Nearest Neighbor
(KNN) and Cosine similarity algorithms. We selected Cosine similarity after
several evaluations based on their outcomes in comparison with domain
knowledge. The recommender system takes into
consideration the choices entered by the user, filters the health management
organization (HMO) data by location and chosen prices. It then recommends the
top 3 HMOs with closest similarity in services offered. A recommendation tool
to help people find and select the best health insurance plan for them is
useful in reducing the barrier of accessing health insurance. Users are
empowered to easily find appropriate information on available plans, reduce
cognitive overload in dealing with over 100 options available in the market and
easily see what matches their financial capacity.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 04:54:23 GMT"
}
] | 1,684,454,400,000 | [
[
"Owoyemi",
"Ayomide",
""
],
[
"Nnaemeka",
"Emmanuel",
""
],
[
"Benson",
"Temitope O.",
""
],
[
"Ikpe",
"Ronald",
""
],
[
"Nwachukwu",
"Blessing",
""
],
[
"Isedowo",
"Temitope",
""
]
] |
2305.10726 | Tosin Ige | Amos Okomayin, Tosin Ige | Ambient Technology & Intelligence | 10 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Today, we have a mixture of young and older individuals, people with special
needs, and people who can care for themselves. Over 1 billion people are
estimated to be disabled; this figure corresponds to about 15% of the world's
population, with 3.8% (approximately 190 million people) accounting for people
aged 15 and up (Organization, 2011). The number of people with disabilities is
trending upward due to the increase in chronic health conditions, among other
causes.
These and other factors have made the need for proper care facilities urgent in
today's society. Several care facilities are built to help people with
disabilities live their everyday lives and not be left out of the community.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 05:55:41 GMT"
}
] | 1,684,454,400,000 | [
[
"Okomayin",
"Amos",
""
],
[
"Ige",
"Tosin",
""
]
] |
2305.10782 | Raj Sanjay Shah | Raj Sanjay Shah, Vijay Marupudi, Reba Koenen, Khushi Bhardwaj, Sashank
Varma | Human Behavioral Benchmarking: Numeric Magnitude Comparison Effects in
Large Language Models | ACL findings 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) do not differentially represent numbers, which
are pervasive in text. In contrast, neuroscience research has identified
distinct neural representations for numbers and words. In this work, we
investigate how well popular LLMs capture the magnitudes of numbers (e.g., that
$4 < 5$) from a behavioral lens. Prior research on the representational
capabilities of LLMs evaluates whether they show human-level performance, for
instance, high overall accuracy on standard benchmarks. Here, we ask a
different question, one inspired by cognitive science: How closely do the
number representations of LLMs correspond to those of human language users, who
typically demonstrate the distance, size, and ratio effects? We depend on a
linking hypothesis to map the similarities among the model embeddings of number
words and digits to human response times. The results reveal surprisingly
human-like representations across language models of different architectures,
despite the absence of the neural circuitry that directly supports these
representations in the human brain. This research shows the utility of
understanding LLMs using behavioral benchmarks and points the way to future
work on the number representations of LLMs and their cognitive plausibility.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 07:50:44 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 00:42:10 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Nov 2023 12:39:51 GMT"
}
] | 1,704,844,800,000 | [
[
"Shah",
"Raj Sanjay",
""
],
[
"Marupudi",
"Vijay",
""
],
[
"Koenen",
"Reba",
""
],
[
"Bhardwaj",
"Khushi",
""
],
[
"Varma",
"Sashank",
""
]
] |
2305.10783 | Julia Kiseleva | Shrestha Mohanty and Negar Arabzadeh and Julia Kiseleva and Artem
Zholus and Milagro Teruel and Ahmed Awadallah and Yuxuan Sun and Kavya Srinet
and Arthur Szlam | Transforming Human-Centered AI Collaboration: Redefining Embodied Agents
Capabilities through Interactive Grounded Language Instructions | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human intelligence's adaptability is remarkable, allowing us to adjust to new
tasks and multi-modal environments swiftly. This skill is evident from a young
age as we acquire new abilities and solve problems by imitating others or
following natural language instructions. The research community is actively
pursuing the development of interactive "embodied agents" that can engage in
natural conversations with humans and assist them with real-world tasks. These
agents must possess the ability to promptly request feedback in case
communication breaks down or instructions are unclear. Additionally, they must
demonstrate proficiency in learning new vocabulary specific to a given domain.
In this paper, we made the following contributions: (1) a crowd-sourcing tool
for collecting grounded language instructions; (2) the largest dataset of
grounded language instructions; and (3) several state-of-the-art baselines.
These contributions are suitable as a foundation for further research.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 07:51:33 GMT"
}
] | 1,684,454,400,000 | [
[
"Mohanty",
"Shrestha",
""
],
[
"Arabzadeh",
"Negar",
""
],
[
"Kiseleva",
"Julia",
""
],
[
"Zholus",
"Artem",
""
],
[
"Teruel",
"Milagro",
""
],
[
"Awadallah",
"Ahmed",
""
],
[
"Sun",
"Yuxuan",
""
],
[
"Srinet",
"Kavya",
""
],
[
"Szlam",
"Arthur",
""
]
] |
2305.10830 | Lufeng Wang | Lufeng Wang, Jiepeng Liu, Guozhong Cheng, En Liu, Wei Chen | Constructing a personalized AI assistant for shear wall layout using
Stable Diffusion | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Shear wall structures are widely used in high-rise residential buildings, and
the layout of shear walls requires many years of design experience and
iterative trial and error. Currently, there are methods based on heuristic
algorithms, but they generate results too slowly. Those based on Generative
Adversarial Networks (GANs) or Graph Neural Networks (GNNs) can only generate
single arrangements and require large amounts of training data. At present,
Stable Diffusion is being widely used, and by using the Low-Rank Adaptation
(LoRA) method to fine-tune large models with small amounts of data, good
generative results can be achieved. Therefore, this paper proposes a
personalized AI assistant for shear wall layout based on Stable Diffusion,
which has been proven to produce good generative results through testing.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 09:12:07 GMT"
}
] | 1,684,454,400,000 | [
[
"Wang",
"Lufeng",
""
],
[
"Liu",
"Jiepeng",
""
],
[
"Cheng",
"Guozhong",
""
],
[
"Liu",
"En",
""
],
[
"Chen",
"Wei",
""
]
] |
2305.10961 | Weronika Hryniewska | Weronika Hryniewska, Piotr Czarnecki, Jakub Wi\'sniewski,
Przemys{\l}aw Bombi\'nski, Przemys{\l}aw Biecek | Prevention is better than cure: a case study of the abnormalities
detection in the chest | null | CVPR 2021 Workshop Beyond Fairness: Towards a Just, Equitable, and
Accountable Computer Vision | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prevention is better than cure. This old truth applies not only to the
prevention of diseases but also to the prevention of issues with AI models used
in medicine. The source of a predictive model's malfunction often lies not in
the training process but in the data acquisition or experiment design phase.
In this paper, we analyze in detail a single use case - a Kaggle competition
related to the detection of abnormalities in X-ray lung images. We demonstrate
how a series of simple tests for data imbalance exposes faults in the data
acquisition and annotation process. Complex models are able to learn such
artifacts, and it is difficult to remove this bias during or after training.
Errors made at the data collection stage make it difficult to validate the
model correctly.
Based on this use case, we show how to monitor data and model balance
(fairness) throughout the life cycle of a predictive model, from data
acquisition to parity analysis of model scores.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 13:28:00 GMT"
}
] | 1,684,454,400,000 | [
[
"Hryniewska",
"Weronika",
""
],
[
"Czarnecki",
"Piotr",
""
],
[
"Wiśniewski",
"Jakub",
""
],
[
"Bombiński",
"Przemysław",
""
],
[
"Biecek",
"Przemysław",
""
]
] |
2305.11014 | Tom Silver | Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie
Pack Kaelbling, Michael Katz | Generalized Planning in PDDL Domains with Pretrained Large Language
Models | AAAI 2024 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent work has considered whether large language models (LLMs) can function
as planners: given a task, generate a plan. We investigate whether LLMs can
serve as generalized planners: given a domain and training tasks, generate a
program that efficiently produces plans for other tasks in the domain. In
particular, we consider PDDL domains and use GPT-4 to synthesize Python
programs. We also consider (1) Chain-of-Thought (CoT) summarization, where the
LLM is prompted to summarize the domain and propose a strategy in words before
synthesizing the program; and (2) automated debugging, where the program is
validated with respect to the training tasks, and in case of errors, the LLM is
re-prompted with four types of feedback. We evaluate this approach in seven
PDDL domains and compare it to four ablations and four baselines. Overall, we
find that GPT-4 is a surprisingly powerful generalized planner. We also
conclude that automated debugging is very important, that CoT summarization has
non-uniform impact, that GPT-4 is far superior to GPT-3.5, and that just two
training tasks are often sufficient for strong generalization.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 14:48:20 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Dec 2023 19:44:09 GMT"
}
] | 1,703,030,400,000 | [
[
"Silver",
"Tom",
""
],
[
"Dan",
"Soham",
""
],
[
"Srinivas",
"Kavitha",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Kaelbling",
"Leslie Pack",
""
],
[
"Katz",
"Michael",
""
]
] |
2305.11074 | Tong Ye | Tong Ye, Lingfei Wu, Tengfei Ma, Xuhong Zhang, Yangkai Du, Peiyu Liu,
Shouling Ji, Wenhai Wang | Tram: A Token-level Retrieval-augmented Mechanism for Source Code
Summarization | NAACL 2024 Findings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically generating human-readable text describing the functionality of
a program is the intent of source code summarization. Although neural language
models achieve significant performance in this field, they are limited by their
inability to access external knowledge. To address this limitation, an emerging
trend is combining neural models with external knowledge through retrieval
methods. Previous methods have relied on the sentence-level retrieval paradigm
on the encoder side. However, this paradigm is coarse-grained, noise-filled and
cannot directly take advantage of the high-quality retrieved summary tokens on
the decoder side. In this paper, we propose a fine-grained Token-level
retrieval-augmented mechanism (Tram) on the decoder side rather than the
encoder side to enhance the performance of neural models and produce more
low-frequency tokens in generating summaries. Furthermore, to overcome the
challenge of token-level retrieval in capturing contextual code semantics, we
also propose integrating code semantics into individual summary tokens. The
results of extensive experiments and human evaluation show that our token-level
retrieval-augmented approach significantly improves performance and is more
interpretable.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 16:02:04 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Mar 2024 02:04:56 GMT"
},
{
"version": "v3",
"created": "Sat, 30 Mar 2024 10:45:22 GMT"
}
] | 1,712,016,000,000 | [
[
"Ye",
"Tong",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Ma",
"Tengfei",
""
],
[
"Zhang",
"Xuhong",
""
],
[
"Du",
"Yangkai",
""
],
[
"Liu",
"Peiyu",
""
],
[
"Ji",
"Shouling",
""
],
[
"Wang",
"Wenhai",
""
]
] |
2305.11098 | Hiroyuki Kido | Hiroyuki Kido | A Simple Generative Model of Logical Reasoning and Statistical Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical learning and logical reasoning are two major fields of AI
expected to be unified for human-like machine intelligence. Most existing work
considers how to combine existing logical and statistical systems. However,
there is no theory of inference so far explaining how basic approaches to
statistical learning and logical reasoning stem from a common principle.
Inspired by the fact that much empirical work in neuroscience suggests Bayesian
(or probabilistic generative) approaches to brain function including learning
and reasoning, we here propose a simple Bayesian model of logical reasoning and
statistical learning. The theory is statistically correct as it satisfies
Kolmogorov's axioms, is consistent with both Fenstad's representation theorem
and maximum likelihood estimation and performs exact Bayesian inference with a
linear-time complexity. The theory is logically correct as it is a data-driven
generalisation of uncertain reasoning from consistency, possibility,
inconsistency and impossibility. The theory is correct in terms of machine
learning as its solution to generation and prediction tasks on the MNIST
dataset is not only empirically reasonable but also theoretically correct
against the K nearest neighbour method. We simply model how data causes
symbolic knowledge in terms of its satisfiability in formal logic. Symbolic
reasoning emerges as a result of the process of going the causality forwards
and backwards. The forward and backward processes correspond to an
interpretation and inverse interpretation in formal logic, respectively. The
inverse interpretation differentiates our work from the mainstream often
referred to as inverse entailment, inverse deduction or inverse resolution. The
perspective gives new insights into learning and reasoning towards human-like
machine intelligence.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 16:34:51 GMT"
}
] | 1,684,454,400,000 | [
[
"Kido",
"Hiroyuki",
""
]
] |
2305.11130 | Junkai Zhou | Junkai Zhou, Liang Pang, Huawei Shen, Xueqi Cheng | SimOAP: Improve Coherence and Consistency in Persona-based Dialogue
Generation via Over-sampling and Post-evaluation | Accepted by ACL 2023 Main | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models trained on large-scale corpora can generate remarkably fluent
results in open-domain dialogue. However, for the persona-based dialogue
generation task, consistency and coherence are also key factors, which are
great challenges for language models. Existing works mainly focus on valuable
data filtering, model structure modifying, or objective function designing,
while their improvements are limited and hard to generalize to all types of
pre-trained language models. However, we find that language models can produce
consistent and coherent responses if we consider enough generations. Thus, the
problems lie in large-scale response generation and target response selection.
In this work, a simple but effective two-stage SimOAP strategy is proposed,
i.e., over-sampling and post-evaluation. The over-sampling stage takes
large-scale responses from existing trained models efficiently via
off-the-shelf distilling and compressing methods, and the post-evaluation stage
selects a good response based on multiple well-designed evaluation metrics from
large-scale candidates. Experimental results show that the proposed plug-in
SimOAP strategy improves the backbone models and outperforms the baseline
strategies in both automatic and human evaluations.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 17:23:00 GMT"
},
{
"version": "v2",
"created": "Sat, 20 May 2023 06:30:01 GMT"
}
] | 1,684,800,000,000 | [
[
"Zhou",
"Junkai",
""
],
[
"Pang",
"Liang",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
2305.11137 | Joshua McGraw | Joshua McGraw, Donsuk Lee, Justin Wood | Parallel development of social preferences in fish and machines | 7 Pages. 2 figures, 1 table. This paper was accepted to the CogSci
2023 Conference. (https://cognitivesciencesociety.org/) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | What are the computational foundations of social grouping? Traditional
approaches to this question have focused on verbal reasoning or simple
(low-dimensional) quantitative models. In the real world, however, social
preferences emerge when high-dimensional learning systems (brains and bodies)
interact with high-dimensional sensory inputs during an animal's embodied
interactions with the world. A deep understanding of social grouping will
therefore require embodied models that learn directly from sensory inputs using
high-dimensional learning mechanisms. To this end, we built artificial neural
networks (ANNs), embodied those ANNs in virtual fish bodies, and raised the
artificial fish in virtual fish tanks that mimicked the rearing conditions of
real fish. We then compared the social preferences that emerged in real fish
versus artificial fish. We found that when artificial fish had two core
learning mechanisms (reinforcement learning and curiosity-driven learning),
artificial fish developed fish-like social preferences. Like real fish, the
artificial fish spontaneously learned to prefer members of their own group over
members of other groups. The artificial fish also spontaneously learned to
self-segregate with their in-group, akin to self-segregation behavior seen in
nature. Our results suggest that social grouping can emerge from three
ingredients: (1) reinforcement learning, (2) intrinsic motivation, and (3)
early social experiences with in-group members. This approach lays a foundation
for reverse engineering animal-like social behavior with image-computable
models, bridging the divide between high-dimensional sensory inputs and social
preferences.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 17:32:59 GMT"
}
] | 1,684,454,400,000 | [
[
"McGraw",
"Joshua",
""
],
[
"Lee",
"Donsuk",
""
],
[
"Wood",
"Justin",
""
]
] |
2305.11294 | Adrian Groza | Adrian Groza | Solving probability puzzles with logic toolkit | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The proposed approach is to formalise the probabilistic puzzle in equational
FOL. Two formalisations are needed: one theory for all models of the given
puzzle, and a second theory for the favorable models. Then Mace4 - which
computes all the interpretation models of a FOL theory - is called twice.
First, it is asked to compute all the possible models M_p. Second, the
additional constraint is added, and Mace4 computes only the favorable models M_f.
Finally, the definition of probability is applied: the number of favorable
models is divided by the number of possible models. The proposed approach
equips students from the logic tribe to find the correct solution for puzzles
from the probabilistic tribe, by using their favourite instruments: modelling
and formalisation. I have exemplified here five probabilistic puzzles and how
they can be solved by translating them into FOL and then finding the
corresponding interpretation models. Mace4 was the tool of choice here. Ongoing
work is investigating the limits of this method on various collections of
probabilistic puzzles.
| [
{
"version": "v1",
"created": "Thu, 18 May 2023 20:35:46 GMT"
}
] | 1,684,713,600,000 | [
[
"Groza",
"Adrian",
""
]
] |
2305.11301 | Navdeep Kaur | Ishaan Singh and Navdeep Kaur and Garima Gaur and Mausam | NeuSTIP: A Novel Neuro-Symbolic Model for Link and Time Prediction in
Temporal Knowledge Graphs | 13 pages, 2 Figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Knowledge Graph Completion (KGC) on static facts is a matured field,
Temporal Knowledge Graph Completion (TKGC), that incorporates validity time
into static facts is still in its nascent stage. The KGC methods fall into
multiple categories including embedding-based, rule-based, GNN-based,
and pretrained Language Model-based approaches. However, such dimensions have not
been explored in TKG. To that end, we propose a novel temporal neuro-symbolic
model, NeuSTIP, that performs link prediction and time interval prediction in a
TKG. NeuSTIP learns temporal rules in the presence of the Allen predicates that
ensure the temporal consistency between neighboring predicates in a given rule.
We further design a unique scoring function that evaluates the confidence of
the candidate answers while performing link prediction and time interval
prediction by utilizing the learned rules. Our empirical evaluation on two time
interval based TKGC datasets suggests that our model outperforms
state-of-the-art models for both link prediction and the time interval
prediction task.
| [
{
"version": "v1",
"created": "Mon, 15 May 2023 13:46:34 GMT"
}
] | 1,684,713,600,000 | [
[
"Singh",
"Ishaan",
""
],
[
"Kaur",
"Navdeep",
""
],
[
"Gaur",
"Garima",
""
],
[
"Mausam",
"",
""
]
] |
2305.11383 | Po-Nien Kung | Po-Nien Kung and Nanyun Peng | Do Models Really Learn to Follow Instructions? An Empirical Study of
Instruction Tuning | Proceedings of the 61th Annual Meeting of the Association for
Computational Linguistics (Volume 2: Short Papers) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent works on instruction tuning (IT) have achieved great performance with
zero-shot generalizability to unseen tasks. With additional context (e.g., task
definition, examples) provided to models for fine-tuning, they achieved much
higher performance than untuned models. Despite impressive performance gains,
what models learn from IT remains understudied. In this work, we analyze how
models utilize instructions during IT by comparing model training with altered
vs. original instructions. Specifically, we create simplified task definitions
by removing all semantic components and only leaving the output space
information, and delusive examples that contain incorrect input-output mapping.
Our experiments show that models trained on simplified task definition or
delusive examples can achieve comparable performance to the ones trained on the
original instructions and examples. Furthermore, we introduce a random baseline
to perform zero-shot classification tasks, and find it achieves similar
performance (42.6% exact-match) as IT does (43% exact-match) in the low-resource
setting, while both methods significantly outperform naive T5 (by 30%
exact-match). Our analysis provides evidence that the impressive performance
gain of current IT models can come from picking up superficial patterns, such
as learning the output format and guessing. Our study highlights the urgent
need for more reliable IT methods and evaluation.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 02:00:47 GMT"
},
{
"version": "v2",
"created": "Thu, 25 May 2023 21:07:07 GMT"
}
] | 1,685,318,400,000 | [
[
"Kung",
"Po-Nien",
""
],
[
"Peng",
"Nanyun",
""
]
] |
2305.11407 | Jun Wen | Jun Wen, Jue Hou, Clara-Lea Bonzel, Yihan Zhao, Victor M. Castro,
Vivian S. Gainer, Dana Weisenfeld, Tianrun Cai, Yuk-Lam Ho, Vidul A.
Panickan, Lauren Costa, Chuan Hong, J. Michael Gaziano, Katherine P. Liao,
Junwei Lu, Kelly Cho, Tianxi Cai | LATTE: Label-efficient Incident Phenotyping from Longitudinal Electronic
Health Records | ERHs data | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Electronic health record (EHR) data are increasingly used to support
real-world evidence (RWE) studies. Yet its ability to generate reliable RWE is
limited by the lack of readily available precise information on the timing of
clinical events such as the onset time of heart failure. We propose a
LAbel-efficienT incidenT phEnotyping (LATTE) algorithm to accurately annotate
the timing of clinical events from longitudinal EHR data. By leveraging the
pre-trained semantic embedding vectors from large-scale EHR data as prior
knowledge, LATTE selects predictive EHR features in a concept re-weighting
module by mining their relationship to the target event and compresses their
information into longitudinal visit embeddings through a visit attention
learning network. LATTE employs a recurrent neural network to capture the
sequential dependency between the target event and visit embeddings
before/after it. To improve label efficiency, LATTE constructs highly
informative longitudinal silver-standard labels from large-scale unlabeled
patients to perform unsupervised pre-training and semi-supervised joint
training. Finally, LATTE enhances cross-site portability via contrastive
representation learning. LATTE is evaluated on three analyses: the onset of
type-2 diabetes, heart failure, and the onset and relapses of multiple
sclerosis. We use various evaluation metrics present in the literature
including the $ABC_{gain}$, the proportion of reduction in the area between the
observed event indicator and the predicted cumulative incidences in reference
to the prediction per incident prevalence. LATTE consistently achieves
substantial improvement over benchmark methods such as SAMGEP and RETAIN in all
settings.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 03:28:51 GMT"
}
] | 1,684,713,600,000 | [
[
"Wen",
"Jun",
""
],
[
"Hou",
"Jue",
""
],
[
"Bonzel",
"Clara-Lea",
""
],
[
"Zhao",
"Yihan",
""
],
[
"Castro",
"Victor M.",
""
],
[
"Gainer",
"Vivian S.",
""
],
[
"Weisenfeld",
"Dana",
""
],
[
"Cai",
"Tianrun",
""
],
[
"Ho",
"Yuk-Lam",
""
],
[
"Panickan",
"Vidul A.",
""
],
[
"Costa",
"Lauren",
""
],
[
"Hong",
"Chuan",
""
],
[
"Gaziano",
"J. Michael",
""
],
[
"Liao",
"Katherine P.",
""
],
[
"Lu",
"Junwei",
""
],
[
"Cho",
"Kelly",
""
],
[
"Cai",
"Tianxi",
""
]
] |
2305.11461 | Ioktong Lei | Ioktong Lei and Zhidong Deng | Hint of Thought prompting: an explainable and zero-shot approach to
reasoning tasks with LLMs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As a way of communicating with users and any LLMs like GPT or PaLM2,
prompting becomes an increasingly important research topic for better
utilization of LLMs. Although simple prompting performs well on single-step
questions, it cannot permanently activate the correct knowledge path for
multi-step reasoning tasks. The chain of thought (CoT), which often contains
zero-shot CoT and few-shot CoT, is a recently developed prompting method that
can explain the reasoning process to the LLM and outperforms simple prompting
in three challenging reasoning tasks, including arithmetic, symbolic, and
commonsense reasoning. In this paper, we propose a novel hint of thought (HoT)
prompting with explainability and zero-shot generalization. First, it is
decomposed into the following three steps: explainable sub-questions, logical
reasoning, and answer extraction. Second, such three steps are sequentially
ordered in the format of step-by-step hints, which can be easily adjusted and
explained to different tasks. Finally, experimental results demonstrate that
our HoT prompting has a significant advantage on the zero-shot reasoning task
compared to existing zero-shot CoT. We did zero-shot experiments on math tasks
like GSM8K, ADDSUB, AQUA, SVAMP and commonsense tasks such as StrategyQA. In
particular, the accuracy of the proposed HoT prompting is improved with GSM8K
from 40.50% to 67.80%, with AQUA from 31.9% to 46.4%, with SVAMP from 63.7% to
76.9%, and with ADDSUB from 74.7% to 87.34%, respectively, which even defeats
the competitive PoT approach on GSM8K, AQUA, and SVAMP.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 06:30:17 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 06:18:16 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jul 2023 05:46:46 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Nov 2023 05:45:34 GMT"
},
{
"version": "v5",
"created": "Thu, 29 Feb 2024 13:47:27 GMT"
},
{
"version": "v6",
"created": "Wed, 5 Jun 2024 06:16:49 GMT"
}
] | 1,717,632,000,000 | [
[
"Lei",
"Ioktong",
""
],
[
"Deng",
"Zhidong",
""
]
] |
2305.11472 | Joseph Sifakis | Joseph Sifakis | Testing System Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the adequacy of tests for intelligent systems and practical
problems raised by their implementation. We propose the replacement test as the
ability of a system to replace successfully another system performing a task in
a given context. We show how it can characterize salient aspects of human
intelligence that cannot be taken into account by the Turing test. We argue
that building intelligent systems passing the replacement test involves a
series of technical problems that are outside the scope of current AI. We
present a framework for implementing the proposed test and validating the
properties of the intelligent systems. We discuss the inherent limitations of
intelligent system validation and advocate new theoretical foundations for
extending existing rigorous test methods. We suggest that the replacement test,
based on the complementarity of skills between human and machine, can lead to a
multitude of intelligence concepts reflecting the ability to combine data-based
and symbolic knowledge to varying degrees.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 06:46:32 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 07:19:20 GMT"
}
] | 1,692,057,600,000 | [
[
"Sifakis",
"Joseph",
""
]
] |
2305.11537 | Asadullah Tariq Mr | Asadullah Tariq, Mohamed Adel Serhani, Farag Sallabi, Tariq Qayyum,
Ezedin S. Barka, Khaled A. Shuaib | Trustworthy Federated Learning: A Survey | 45 Pages, 8 Figures, 9 Tables | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 09:11:26 GMT"
}
] | 1,684,713,600,000 | [
[
"Tariq",
"Asadullah",
""
],
[
"Serhani",
"Mohamed Adel",
""
],
[
"Sallabi",
"Farag",
""
],
[
"Qayyum",
"Tariq",
""
],
[
"Barka",
"Ezedin S.",
""
],
[
"Shuaib",
"Khaled A.",
""
]
] |
2305.11597 | Alistair Nottle | Vedran Galeti\'c, Alistair Nottle | Flexible and Inherently Comprehensible Knowledge Representation for
Data-Efficient Learning and Trustworthy Human-Machine Teaming in
Manufacturing Environments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Trustworthiness of artificially intelligent agents is vital for the
acceptance of human-machine teaming in industrial manufacturing environments.
Predictable behaviours and explainable (and understandable) rationale allow
humans collaborating with (and building) these agents to understand their
motivations and therefore validate decisions that are made. To that aim, we
make use of G\"ardenfors's cognitively inspired Conceptual Space framework to
represent the agent's knowledge using concepts as convex regions in a space
spanned by inherently comprehensible quality dimensions. A simple typicality
quantification model is built on top of it to determine fuzzy category
membership and classify instances interpretably. We apply it on a use case from
the manufacturing domain, using objects' physical properties obtained from
cobots' onboard sensors and utilisation properties from crowdsourced
commonsense knowledge available at public knowledge bases. Such flexible
knowledge representation based on property decomposition allows for
data-efficient representation learning of typically highly specialist or
specific manufacturing artefacts. In such a setting, traditional data-driven
(e.g., computer vision-based) classification approaches would struggle due to
training data scarcity. This allows for comprehensibility of an AI agent's
acquired knowledge by the human collaborator thus contributing to
trustworthiness. We situate our approach within an existing explainability
framework specifying explanation desiderata. We provide arguments for our
system's applicability and appropriateness for different roles of human agents
collaborating with the AI system throughout its design, validation, and
operation.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 11:18:23 GMT"
}
] | 1,684,713,600,000 | [
[
"Galetić",
"Vedran",
""
],
[
"Nottle",
"Alistair",
""
]
] |
2305.11624 | Kaichao You | Kaichao You, Guo Qin, Anchang Bao, Meng Cao, Ping Huang, Jiulong Shan,
Mingsheng Long | Efficient ConvBN Blocks for Transfer Learning and Beyond | ICLR 2024, camera ready version | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Convolution-BatchNorm (ConvBN) blocks are integral components in various
computer vision tasks and other domains. A ConvBN block can operate in three
modes: Train, Eval, and Deploy. While the Train mode is indispensable for
training models from scratch, the Eval mode is suitable for transfer learning
and beyond, and the Deploy mode is designed for the deployment of models. This
paper focuses on the trade-off between stability and efficiency in ConvBN
blocks: Deploy mode is efficient but suffers from training instability; Eval
mode is widely used in transfer learning but lacks efficiency. To solve the
dilemma, we theoretically reveal the reason behind the diminished training
stability observed in the Deploy mode. Subsequently, we propose a novel Tune
mode to bridge the gap between Eval mode and Deploy mode. The proposed Tune
mode is as stable as Eval mode for transfer learning, and its computational
efficiency closely matches that of the Deploy mode. Through extensive
experiments in object detection, classification, and adversarial example
generation across $5$ datasets and $12$ model architectures, we demonstrate
that the proposed Tune mode retains the performance while significantly
reducing GPU memory footprint and training time, thereby contributing efficient
ConvBN blocks for transfer learning and beyond. Our method has been integrated
into both PyTorch (general machine learning framework) and MMCV/MMEngine
(computer vision framework). Practitioners just need one line of code to enjoy
our efficient ConvBN blocks thanks to PyTorch's builtin machine learning
compilers.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 12:06:34 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2024 14:34:06 GMT"
}
] | 1,709,164,800,000 | [
[
"You",
"Kaichao",
""
],
[
"Qin",
"Guo",
""
],
[
"Bao",
"Anchang",
""
],
[
"Cao",
"Meng",
""
],
[
"Huang",
"Ping",
""
],
[
"Shan",
"Jiulong",
""
],
[
"Long",
"Mingsheng",
""
]
] |
2305.11811 | Yang You | Yang You, Vincent Thomas, Francis Colas, Olivier Buffet | Monte-Carlo Search for an Equilibrium in Dec-POMDPs | Accepted to UAI 2023, preliminary version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized partially observable Markov decision processes (Dec-POMDPs)
formalize the problem of designing individual controllers for a group of
collaborative agents under stochastic dynamics and partial observability.
Seeking a global optimum is difficult (NEXP complete), but seeking a Nash
equilibrium -- each agent policy being a best response to the other agents --
is more accessible, and allowed addressing infinite-horizon problems with
solutions in the form of finite state controllers. In this paper, we show that
this approach can be adapted to cases where only a generative model (a
simulator) of the Dec-POMDP is available. This requires relying on a
simulation-based POMDP solver to construct an agent's FSC node by node. A
related process is used to heuristically derive initial FSCs. Experiments with
benchmarks show that MC-JESP is competitive with existing Dec-POMDP solvers,
and even better than many offline methods using explicit models.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 16:47:46 GMT"
}
] | 1,684,713,600,000 | [
[
"You",
"Yang",
""
],
[
"Thomas",
"Vincent",
""
],
[
"Colas",
"Francis",
""
],
[
"Buffet",
"Olivier",
""
]
] |
2305.11814 | Jakub Kowalski | Jakub Kowalski, Rados{\l}aw Miernik | Summarizing Strategy Card Game AI Competition | IEEE Conference on Games 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper concludes five years of AI competitions based on Legends of Code
and Magic (LOCM), a small Collectible Card Game (CCG), designed with the goal
of supporting research and algorithm development. The game was used in a number
of events, including Community Contests on the CodinGame platform, and Strategy
Card Game AI Competition at the IEEE Congress on Evolutionary Computation and
IEEE Conference on Games. LOCM has been used in a number of publications
related to areas such as game tree search algorithms, neural networks,
evaluation functions, and CCG deckbuilding. We present the rules of the game,
the history of organized competitions, and a listing of the participants and
their approaches, as well as some general advice on organizing AI competitions
for the research community. Although the COG 2022 edition was announced to be
the last one, the game remains available and can be played using an online
leaderboard arena.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 16:49:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 07:31:22 GMT"
}
] | 1,688,947,200,000 | [
[
"Kowalski",
"Jakub",
""
],
[
"Miernik",
"Radosław",
""
]
] |
2305.12167 | Ran Gilad-Bachrach | Hofit Wasserman Rozen, Niva Elkin-Koren, Ran Gilad-Bachrach | The Case Against Explainability | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | As artificial intelligence (AI) becomes more prevalent there is a growing
demand from regulators to accompany decisions made by such systems with
explanations. However, a persistent gap exists between the need to execute a
meaningful right to explanation vs. the ability of Machine Learning systems to
deliver on such a legal requirement. The regulatory appeal towards "a right to
explanation" of AI systems can be attributed to the significant role of
explanations, part of the notion called reason-giving, in law. Therefore, in
this work we examine reason-giving's purposes in law to analyze whether reasons
provided by end-user Explainability can adequately fulfill them.
We find that reason-giving's legal purposes include: (a) making a better and
more just decision, (b) facilitating due-process, (c) authenticating human
agency, and (d) enhancing the decision makers' authority. Using this
methodology, we demonstrate end-user Explainability's inadequacy to fulfil
reason-giving's role in law, given that reason-giving's functions rely on its
impact on a human decision maker. Thus, end-user Explainability fails, or is
unsuitable, to fulfil the first, second, and third legal functions. In contrast,
we find that end-user Explainability excels in the fourth function, a quality
which raises serious risks considering recent end-user Explainability research
trends, Large Language Models' capabilities, and the ability to manipulate
end-users by both humans and machines. Hence, we suggest that in some cases the
right to explanation of AI systems could bring more harm than good to end
users. Accordingly, this study carries some important policy ramifications, as
it calls upon regulators and Machine Learning practitioners to reconsider the
widespread pursuit of end-user Explainability and a right to explanation of AI
systems.
| [
{
"version": "v1",
"created": "Sat, 20 May 2023 10:56:19 GMT"
}
] | 1,684,800,000,000 | [
[
"Rozen",
"Hofit Wasserman",
""
],
[
"Elkin-Koren",
"Niva",
""
],
[
"Gilad-Bachrach",
"Ran",
""
]
] |
2305.12453 | Markus Ulbricht | Markus Ulbricht, Nico Potyka, Anna Rapberger, and Francesca Toni | Non-flat ABA is an Instance of Bipolar Argumentation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Assumption-based Argumentation (ABA) is a well-known structured argumentation
formalism, whereby arguments and attacks between them are drawn from rules,
defeasible assumptions and their contraries. A common restriction imposed on
ABA frameworks (ABAFs) is that they are flat, i.e., each of the defeasible
assumptions can only be assumed, but not derived. While it is known that flat
ABAFs can be translated into abstract argumentation frameworks (AFs) as
proposed by Dung, no translation exists from general, possibly non-flat ABAFs
into any kind of abstract argumentation formalism. In this paper, we close this
gap and show that bipolar AFs (BAFs) can instantiate general ABAFs. To this end
we develop suitable, novel BAF semantics which borrow from the notion of
deductive support. We investigate basic properties of our BAFs, including
computational complexity, and prove the desired relation to ABAFs under several
semantics. Finally, in order to support computation and explainability, we
propose the notion of dispute trees for our BAF semantics.
| [
{
"version": "v1",
"created": "Sun, 21 May 2023 13:18:08 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jan 2024 17:06:18 GMT"
}
] | 1,704,758,400,000 | [
[
"Ulbricht",
"Markus",
""
],
[
"Potyka",
"Nico",
""
],
[
"Rapberger",
"Anna",
""
],
[
"Toni",
"Francesca",
""
]
] |
2305.12623 | Archana Vadakattu | Archana Vadakattu, Michelle Blom, Adrian R. Pearce | Strategy Extraction in Single-Agent Games | 9 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to continuously learn and adapt to new situations is one where
humans are far superior compared to AI agents. We propose an approach to
knowledge transfer using behavioural strategies as a form of transferable
knowledge influenced by the human cognitive ability to develop strategies. A
strategy is defined as a partial sequence of events - where an event is both
the result of an agent's action and changes in state - to reach some predefined
event of interest. This information acts as guidance or a partial solution that
an agent can generalise and use to make predictions about how to handle unknown
observed phenomena. As a first step toward this goal, we develop a method for
extracting strategies from an agent's existing knowledge that can be applied in
multiple contexts. Our method combines observed event frequency information
with local sequence alignment techniques to find patterns of significance that
form a strategy. We show that our method can identify plausible strategies in
three environments: Pacman, Bank Heist and a dungeon-crawling video game. Our
evaluation serves as a promising first step toward extracting knowledge for
generalisation and, ultimately, transfer learning.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 01:28:59 GMT"
}
] | 1,684,800,000,000 | [
[
"Vadakattu",
"Archana",
""
],
[
"Blom",
"Michelle",
""
],
[
"Pearce",
"Adrian R.",
""
]
] |
2305.13206 | Jannis Weil | Jannis Weil, Johannes Czech, Tobias Meuser, Kristian Kersting | Know your Enemy: Investigating Monte-Carlo Tree Search with Opponent
Models in Pommerman | Accepted at the Adaptive and Learning Agents Workshop (ALA) at AAMAS
2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In combination with Reinforcement Learning, Monte-Carlo Tree Search has shown
to outperform human grandmasters in games such as Chess, Shogi and Go with
little to no prior domain knowledge. However, most classical use cases only
feature up to two players. Scaling the search to an arbitrary number of players
presents a computational challenge, especially if decisions have to be planned
over a longer time horizon. In this work, we investigate techniques that
transform general-sum multiplayer games into single-player and two-player games
that consider other agents to act according to given opponent models. For our
evaluation, we focus on the challenging Pommerman environment which involves
partial observability, a long time horizon and sparse rewards. In combination
with our search methods, we investigate the phenomena of opponent modeling
using heuristics and self-play. Overall, we demonstrate the effectiveness of
our multiplayer search variants both in a supervised learning and reinforcement
learning setting.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 16:39:20 GMT"
}
] | 1,684,800,000,000 | [
[
"Weil",
"Jannis",
""
],
[
"Czech",
"Johannes",
""
],
[
"Meuser",
"Tobias",
""
],
[
"Kersting",
"Kristian",
""
]
] |
2305.13258 | David Herron | David Herron, Ernesto Jim\'enez-Ruiz, Giacomo Tarroni and Tillman
Weyde | NeSy4VRD: A Multifaceted Resource for Neurosymbolic AI Research using
Knowledge Graphs in Visual Relationship Detection | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | NeSy4VRD is a multifaceted resource designed to support the development of
neurosymbolic AI (NeSy) research. NeSy4VRD re-establishes public access to the
images of the VRD dataset and couples them with an extensively revised,
quality-improved version of the VRD visual relationship annotations. Crucially,
NeSy4VRD provides a well-aligned, companion OWL ontology that describes the
dataset domain. It comes with open source infrastructure that provides
comprehensive support for extensibility of the annotations (which, in turn,
facilitates extensibility of the ontology), and open source code for loading
the annotations to/from a knowledge graph. We are contributing NeSy4VRD to the
computer vision, NeSy and Semantic Web communities to help foster more NeSy
research using OWL-based knowledge graphs.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 17:28:25 GMT"
}
] | 1,684,800,000,000 | [
[
"Herron",
"David",
""
],
[
"Jiménez-Ruiz",
"Ernesto",
""
],
[
"Tarroni",
"Giacomo",
""
],
[
"Weyde",
"Tillman",
""
]
] |
2305.13823 | Zhanwen Zhou | Zhanwen Zhou, Hankz Hankui Zhuo, Xiaowu Zhang, Qiyuan Deng | XRoute Environment: A Novel Reinforcement Learning Environment for
Routing | arXiv admin note: text overlap with arXiv:1907.11180 by other authors | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Routing is a crucial and time-consuming stage in modern design automation
flow for advanced technology nodes. Great progress in the field of
reinforcement learning makes it possible to use those approaches to improve the
routing quality and efficiency. However, the scale of the routing problems
solved by reinforcement learning-based methods in recent studies is too small
for these methods to be used in commercial EDA tools. We introduce the XRoute
Environment, a new reinforcement learning environment where agents are trained
to select and route nets in an advanced, end-to-end routing framework. Novel
algorithms and ideas can be quickly tested in a safe and reproducible manner in
it. The resulting environment is challenging, easy to use, customize and add
additional scenarios, and it is available under a permissive open-source
license. In addition, it provides support for distributed deployment and
multi-instance experiments. We propose two tasks for learning and build a
full-chip test bed with routing benchmarks of various region sizes. We also
pre-define several static routing regions with different pin density and number
of nets for easier learning and testing. For net ordering task, we report
baseline results for two widely used reinforcement learning algorithms (PPO and
DQN) and one searching-based algorithm (TritonRoute). The XRoute Environment
will be available at https://github.com/xplanlab/xroute_env.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 08:46:25 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 07:53:23 GMT"
}
] | 1,686,009,600,000 | [
[
"Zhou",
"Zhanwen",
""
],
[
"Zhuo",
"Hankz Hankui",
""
],
[
"Zhang",
"Xiaowu",
""
],
[
"Deng",
"Qiyuan",
""
]
] |
2305.14909 | Lin Guan | Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati | Leveraging Pre-trained Large Language Models to Construct and Utilize
World Models for Model-based Task Planning | NeurIPS 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing interest in applying pre-trained large language models
(LLMs) to planning problems. However, methods that use LLMs directly as
planners are currently impractical due to several factors, including limited
correctness of plans, strong reliance on feedback from interactions with
simulators or even the actual environment, and the inefficiency in utilizing
human feedback. In this work, we introduce a novel alternative paradigm that
constructs an explicit world (domain) model in planning domain definition
language (PDDL) and then uses it to plan with sound domain-independent
planners. To address the fact that LLMs may not generate a fully functional
PDDL model initially, we employ LLMs as an interface between PDDL and sources
of corrective feedback, such as PDDL validators and humans. For users who lack
a background in PDDL, we show that LLMs can translate PDDL into natural
language and effectively encode corrective feedback back to the underlying
domain model. Our framework not only enjoys the correctness guarantee offered
by the external planners but also reduces human involvement by allowing users
to correct domain models at the beginning, rather than inspecting and
correcting (through interactive prompting) every generated plan as in previous
work. On two IPC domains and a Household domain that is more complicated than
commonly used benchmarks such as ALFWorld, we demonstrate that GPT-4 can be
leveraged to produce high-quality PDDL models for over 40 actions, and the
corrected PDDL models are then used to successfully solve 48 challenging
planning tasks. Resources, including the source code, are released at:
https://guansuns.github.io/pages/llm-dm.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 08:59:15 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Nov 2023 03:06:19 GMT"
}
] | 1,698,969,600,000 | [
[
"Guan",
"Lin",
""
],
[
"Valmeekam",
"Karthik",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2305.15113 | Martin Uray | Simon Schindler, Martin Uray, Stefan Huber | A Mini Review on the utilization of Reinforcement Learning with OPC UA | preprint of Paper submitted to INDIN'23 | null | 10.1109/INDIN51400.2023.10218289 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reinforcement Learning (RL) is a powerful machine learning paradigm that has
been applied in various fields such as robotics, natural language processing
and game playing achieving state-of-the-art results. Targeted to solve
sequential decision making problems, it is by design able to learn from
experience and therefore adapt to changing dynamic environments. These
capabilities make it a prime candidate for controlling and optimizing complex
processes in industry. The key to fully exploiting this potential is the
seamless integration of RL into existing industrial systems. The industrial
communication standard Open Platform Communications Unified Architecture (OPC
UA) could bridge this gap. However, since RL and OPC UA are from different
fields, there is a need for researchers to bridge the gap between the two
technologies. This work serves to bridge this gap by providing a brief
technical overview of both technologies and carrying out a semi-exhaustive
literature review to gain insights on how RL and OPC UA are applied in
combination. With this survey, three main research topics have been identified,
following the intersection of RL with OPC UA. The results of the literature
review show that RL is a promising technology for the control and optimization
of industrial processes, but does not yet have the necessary standardized
interfaces to be deployed in real-world scenarios with reasonably low effort.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 13:03:48 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Oct 2023 11:52:42 GMT"
}
] | 1,698,710,400,000 | [
[
"Schindler",
"Simon",
""
],
[
"Uray",
"Martin",
""
],
[
"Huber",
"Stefan",
""
]
] |
2305.15256 | Munyque Mittelmann | Munyque Mittelmann, Aniello Murano, Laurent Perrussel | Discounting in Strategy Logic | Extended version of the paper accepted at IJCAI 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Discounting is an important dimension in multi-agent systems as long as we
want to reason about strategies and time. It is a key aspect in economics as it
captures the intuition that the far-away future is not as important as the near
future. Traditional verification techniques allow to check whether there is a
winning strategy for a group of agents but they do not take into account the
fact that satisfying a goal sooner is different from satisfying it after a long
wait. In this paper, we augment Strategy Logic with future discounting over a
set of discounting functions D, denoted SLdisc[D]. We consider "until" operators
with discounting functions: the satisfaction value of a specification in
SLdisc[D] is a value in [0, 1], where the longer it takes to fulfill
requirements, the smaller the satisfaction value is. We motivate our approach
with classical examples from Game Theory and study the complexity of
model-checking SLdisc[D]-formulas.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 15:40:53 GMT"
}
] | 1,684,972,800,000 | [
[
"Mittelmann",
"Munyque",
""
],
[
"Murano",
"Aniello",
""
],
[
"Perrussel",
"Laurent",
""
]
] |
2305.15318 | Kilian R\"uckschlo{\ss} | Rafael Kiesel, Kilian R\"uckschlo{\ss} and Felix Weitk\"amper | "What if?" in Probabilistic Logic Programming | null | 2023 International Conference on Logic Programming | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A ProbLog program is a logic program with facts that only hold with a
specified probability. In this contribution we extend this ProbLog language by
the ability to answer "What if" queries. Intuitively, a ProbLog program defines
a distribution by solving a system of equations in terms of mutually
independent predefined Boolean random variables. In the theory of causality,
Judea Pearl proposes a counterfactual reasoning for such systems of equations.
Based on Pearl's calculus, we provide a procedure for processing these
counterfactual queries on ProbLog programs, together with a proof of
correctness and a full implementation. Using the latter, we provide insights
into the influence of different parameters on the scalability of inference.
Finally, we also show that our approach is consistent with CP-logic, i.e. with
the causal semantics for logic programs with annotated disjunctions.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 16:35:24 GMT"
}
] | 1,684,972,800,000 | [
[
"Kiesel",
"Rafael",
""
],
[
"Rückschloß",
"Kilian",
""
],
[
"Weitkämper",
"Felix",
""
]
] |
2305.15324 | Toby Shevlane | Toby Shevlane, Sebastian Farquhar, Ben Garfinkel, Mary Phuong, Jess
Whittlestone, Jade Leung, Daniel Kokotajlo, Nahema Marchal, Markus
Anderljung, Noam Kolt, Lewis Ho, Divya Siddarth, Shahar Avin, Will Hawkins,
Been Kim, Iason Gabriel, Vijay Bolina, Jack Clark, Yoshua Bengio, Paul
Christiano, Allan Dafoe | Model evaluation for extreme risks | Fixed typos; added citation | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Current approaches to building general-purpose AI systems tend to produce
systems with both beneficial and harmful capabilities. Further progress in AI
development could lead to capabilities that pose extreme risks, such as
offensive cyber capabilities or strong manipulation skills. We explain why
model evaluation is critical for addressing extreme risks. Developers must be
able to identify dangerous capabilities (through "dangerous capability
evaluations") and the propensity of models to apply their capabilities for harm
(through "alignment evaluations"). These evaluations will become critical for
keeping policymakers and other stakeholders informed, and for making
responsible decisions about model training, deployment, and security.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 16:38:43 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 18:48:42 GMT"
}
] | 1,695,686,400,000 | [
[
"Shevlane",
"Toby",
""
],
[
"Farquhar",
"Sebastian",
""
],
[
"Garfinkel",
"Ben",
""
],
[
"Phuong",
"Mary",
""
],
[
"Whittlestone",
"Jess",
""
],
[
"Leung",
"Jade",
""
],
[
"Kokotajlo",
"Daniel",
""
],
[
"Marchal",
"Nahema",
""
],
[
"Anderljung",
"Markus",
""
],
[
"Kolt",
"Noam",
""
],
[
"Ho",
"Lewis",
""
],
[
"Siddarth",
"Divya",
""
],
[
"Avin",
"Shahar",
""
],
[
"Hawkins",
"Will",
""
],
[
"Kim",
"Been",
""
],
[
"Gabriel",
"Iason",
""
],
[
"Bolina",
"Vijay",
""
],
[
"Clark",
"Jack",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Christiano",
"Paul",
""
],
[
"Dafoe",
"Allan",
""
]
] |
2305.15695 | Xiaoyu Chen | Xiaoyu Chen, Shenao Zhang, Pushi Zhang, Li Zhao, Jianyu Chen | Asking Before Acting: Gather Information in Embodied Decision Making
with Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With strong capabilities of reasoning and a broad understanding of the world,
Large Language Models (LLMs) have demonstrated immense potential in building
versatile embodied decision-making agents capable of executing a wide array of
tasks. Nevertheless, when deployed in unfamiliar environments, we show that LLM
agents encounter challenges in efficiently gathering essential information,
leading to suboptimal performance. Conversely, human individuals often seek
additional information from their peers prior to taking action, harnessing
external knowledge to avoid unnecessary trial and error. Drawing inspiration
from this behavior, we propose \textit{Asking Before Acting} (ABA), a method
that empowers the agent to proactively inquire with external sources for
pertinent information using natural language during their interactions within
the environment. In this way, the agent is able to enhance its efficiency and
performance by circumventing potentially laborious steps and combating the
difficulties associated with exploration in unfamiliar environments and
vagueness of the instructions. We conduct extensive experiments involving a
spectrum of environments including text-based household everyday tasks, robot
arm manipulation tasks, and real world open domain image based embodied tasks.
The experiments involve various models from Vicuna to GPT-4. The results
demonstrate that, even with modest prompt modifications, ABA exhibits
substantial advantages on both performance and efficiency over baseline LLM
agents. Further finetuning ABA with reformulated metadata (ABA-FT) facilitates
learning the rationale for asking and allows for additional enhancements
especially in tasks that baselines struggle to solve.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 04:05:08 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Apr 2024 13:24:59 GMT"
}
] | 1,713,312,000,000 | [
[
"Chen",
"Xiaoyu",
""
],
[
"Zhang",
"Shenao",
""
],
[
"Zhang",
"Pushi",
""
],
[
"Zhao",
"Li",
""
],
[
"Chen",
"Jianyu",
""
]
] |
2305.15743 | Ding Wang | Ding Wang, Xuhong Wang, Liang Chen, Shengyue Yao, Ming Jing, Honghai
Li, Li Li, Shiqiang Bao, Fei-Yue Wang, Yilun Lin | TransWorldNG: Traffic Simulation via Foundation Model | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Traffic simulation is a crucial tool for transportation decision-making and
policy development. However, achieving realistic simulations in the face of the
high dimensionality and heterogeneity of traffic environments is a longstanding
challenge. In this paper, we present TransWorldNG, a traffic simulator that uses
Data-driven algorithms and Graph Computing techniques to learn traffic dynamics
from real data. The functionality and structure of TransWorldNG are introduced,
which utilize a foundation model for transportation management and control. The
results demonstrate that TransWorldNG can generate more realistic traffic
patterns compared to traditional simulators. Additionally, TransWorldNG
exhibits better scalability, as it shows linear growth in computation time as
the scenario scale increases. To the best of our knowledge, this is the first
traffic simulator that can automatically learn traffic patterns from real-world
data and efficiently generate accurate and realistic traffic environments.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 05:49:30 GMT"
}
] | 1,685,059,200,000 | [
[
"Wang",
"Ding",
""
],
[
"Wang",
"Xuhong",
""
],
[
"Chen",
"Liang",
""
],
[
"Yao",
"Shengyue",
""
],
[
"Jing",
"Ming",
""
],
[
"Li",
"Honghai",
""
],
[
"Li",
"Li",
""
],
[
"Bao",
"Shiqiang",
""
],
[
"Wang",
"Fei-Yue",
""
],
[
"Lin",
"Yilun",
""
]
] |
2305.15771 | Karthik Valmeekam | Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao
Kambhampati | On the Planning Abilities of Large Language Models : A Critical
Investigation | NeurIPS 2023 Spotlight. arXiv admin note: substantial text overlap
with arXiv:2206.10498 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrigued by the claims of emergent reasoning capabilities in LLMs trained on
general web corpora, in this paper, we set out to investigate their planning
capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating
plans autonomously in commonsense planning tasks and (2) the potential of LLMs
in LLM-Modulo settings where they act as a source of heuristic guidance for
external planners and verifiers. We conduct a systematic study by generating a
suite of instances on domains similar to the ones employed in the International
Planning Competition and evaluate LLMs in two distinct modes: autonomous and
heuristic. Our findings reveal that LLMs' ability to generate executable plans
autonomously is rather limited, with the best model (GPT-4) having an average
success rate of ~12% across the domains. However, the results in the LLM-Modulo
setting show more promise. In the LLM-Modulo setting, we demonstrate that
LLM-generated plans can improve the search process for underlying sound
planners and additionally show that external verifiers can help provide
feedback on the generated plans and back-prompt the LLM for better plan
generation.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 06:32:23 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Nov 2023 07:00:12 GMT"
}
] | 1,701,043,200,000 | [
[
"Valmeekam",
"Karthik",
""
],
[
"Marquez",
"Matthew",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
2305.15921 | Francesca Toni | Maurizio Proietti and Francesca Toni | Learning Assumption-based Argumentation Frameworks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a novel approach to logic-based learning which generates
assumption-based argumentation (ABA) frameworks from positive and negative
examples, using a given background knowledge. These ABA frameworks can be
mapped onto logic programs with negation as failure that may be non-stratified.
Whereas existing argumentation-based methods learn exceptions to general rules
by interpreting the exceptions as rebuttal attacks, our approach interprets
them as undercutting attacks. Our learning technique is based on the use of
transformation rules, including some adapted from logic program transformation
rules (notably folding) as well as others, such as rote learning and assumption
introduction. We present a general strategy that applies the transformation
rules in a suitable order to learn stratified frameworks, and we also propose a
variant that handles the non-stratified case. We illustrate the benefits of our
approach with a number of examples, which show that, on one hand, we are able
to easily reconstruct other logic-based learning approaches and, on the other
hand, we can work out in a very simple and natural way problems that seem to be
hard for existing techniques.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 10:41:09 GMT"
}
] | 1,685,059,200,000 | [
[
"Proietti",
"Maurizio",
""
],
[
"Toni",
"Francesca",
""
]
] |
2305.15934 | Oliver Niggemann | Maria Krantz and Oliver Niggemann | A Diagnosis Algorithms for a Rotary Indexing Machine | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rotary Indexing Machines (RIMs) are widely used in manufacturing due to their
ability to perform multiple production steps on a single product without manual
repositioning, reducing production time and improving accuracy and consistency.
Despite their advantages, little research has been done on diagnosing faults in
RIMs, especially from the perspective of the actual production steps carried
out on these machines. Long downtimes due to failures are problematic,
especially for smaller companies employing these machines. To address this gap,
we propose a diagnosis algorithm based on the product perspective, which
focuses on the product being processed by RIMs. The algorithm traces the steps
that a product takes through the machine and is able to diagnose possible
causes in case of failure. We also analyze the properties of RIMs and how these
influence the diagnosis of faults in these machines. Our contributions are
three-fold. Firstly, we provide an analysis of the properties of RIMs and how
they influence the diagnosis of faults in these machines. Secondly, we suggest
a diagnosis algorithm based on the product perspective capable of diagnosing
faults in such a machine. Finally, we test this algorithm on a model of a
rotary indexing machine, demonstrating its effectiveness in identifying faults
and their root causes.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 11:03:10 GMT"
}
] | 1,685,059,200,000 | [
[
"Krantz",
"Maria",
""
],
[
"Niggemann",
"Oliver",
""
]
] |
2305.16151 | Vishal Pallagani | Vishal Pallagani and Bharath Muppasani and Keerthiram Murugesan and
Francesca Rossi and Biplav Srivastava and Lior Horesh and Francesco Fabiano
and Andrea Loreggia | Understanding the Capabilities of Large Language Models for Automated
Planning | 12 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Automated planning is concerned with developing efficient algorithms to
generate plans or sequences of actions to achieve a specific goal in a given
environment. Emerging Large Language Models (LLMs) can answer questions, write
high-quality programming code, and predict protein folding, showcasing their
versatility in solving various tasks beyond language-based problems. In this
paper, we aim to explore how LLMs can also be used for automated planning. To
do so, we seek to answer four key questions. Firstly, we want to understand the
extent to which LLMs can be used for plan generation. Secondly, we aim to
identify which pre-training data is most effective in facilitating plan
generation. Thirdly, we investigate whether fine-tuning or prompting is a more
effective approach for plan generation. Finally, we explore whether LLMs are
capable of plan generalization. By answering these questions, the study seeks
to shed light on the capabilities of LLMs in solving complex planning problems
and provide insights into the most effective approaches for using LLMs in this
context.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 15:21:09 GMT"
}
] | 1,685,059,200,000 | [
[
"Pallagani",
"Vishal",
""
],
[
"Muppasani",
"Bharath",
""
],
[
"Murugesan",
"Keerthiram",
""
],
[
"Rossi",
"Francesca",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Horesh",
"Lior",
""
],
[
"Fabiano",
"Francesco",
""
],
[
"Loreggia",
"Andrea",
""
]
] |
2305.16924 | Tom Bewley | Tom Bewley, Jonathan Lawry, Arthur Richards | Learning Interpretable Models of Aircraft Handling Behaviour by
Reinforcement Learning from Human Feedback | arXiv admin note: substantial text overlap with arXiv:2210.01007 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method to capture the handling abilities of fast jet pilots in a
software model via reinforcement learning (RL) from human preference feedback.
We use pairwise preferences over simulated flight trajectories to learn an
interpretable rule-based model called a reward tree, which enables the
automated scoring of trajectories alongside an explanatory rationale. We train
an RL agent to execute high-quality handling behaviour by using the reward tree
as the objective, and thereby generate data for iterative preference collection
and further refinement of both tree and agent. Experiments with synthetic
preferences show reward trees to be competitive with uninterpretable neural
network reward models on quantitative and qualitative evaluations.
| [
{
"version": "v1",
"created": "Fri, 26 May 2023 13:37:59 GMT"
}
] | 1,685,318,400,000 | [
[
"Bewley",
"Tom",
""
],
[
"Lawry",
"Jonathan",
""
],
[
"Richards",
"Arthur",
""
]
] |
2305.17196 | Agnieszka Lawrynowicz | Agnieszka {\L}awrynowicz | A Knowledge Engineering Primer | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The aim of this primer is to introduce the subject of knowledge engineering
in a concise but synthetic way to develop the reader's intuition about the
area.
| [
{
"version": "v1",
"created": "Fri, 26 May 2023 18:39:25 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Mar 2024 05:50:33 GMT"
}
] | 1,711,411,200,000 | [
[
"Ławrynowicz",
"Agnieszka",
""
]
] |
2305.17308 | Habtom Kahsay Gidey | Habtom Kahsay Gidey, Peter Hillmann, Andreas Karcher, Alois Knoll | Towards Cognitive Bots: Architectural Research Challenges | null | In: Hammer, P., Alirezaie, M., Stranneg{\aa}rd, C. (eds)
Artificial General Intelligence. AGI 2023. Lecture Notes in Computer
Science(), vol 13921 | 10.1007/978-3-031-33469-6_11 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software bots operating in multiple virtual digital platforms must understand
the platforms' affordances and behave like human users. Platform affordances or
features differ from one application platform to another or through a life
cycle, requiring such bots to be adaptable. Moreover, bots in such platforms
could cooperate with humans or other software agents for work or to learn
specific behavior patterns. However, present-day bots, particularly chatbots,
other than language processing and prediction, are far from reaching a human
user's behavior level within complex business information systems. They lack
the cognitive capabilities to sense and act in such virtual environments,
rendering their development a challenge to artificial general intelligence
research. In this study, we problematize and investigate assumptions in
conceptualizing software bot architecture by directing attention to significant
architectural research challenges in developing cognitive bots endowed with
complex behavior for operation on information systems. As an outlook, we
propose alternate architectural assumptions to consider in future bot design
and bot development frameworks.
| [
{
"version": "v1",
"created": "Fri, 26 May 2023 23:51:49 GMT"
}
] | 1,685,404,800,000 | [
[
"Gidey",
"Habtom Kahsay",
""
],
[
"Hillmann",
"Peter",
""
],
[
"Karcher",
"Andreas",
""
],
[
"Knoll",
"Alois",
""
]
] |
2305.17526 | Rustem Takhanov | Rustem Takhanov | Computing a partition function of a generalized pattern-based energy
over a semiring | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Valued constraint satisfaction problems with ordered variables (VCSPO) are a
special case of Valued CSPs in which variables are totally ordered and soft
constraints are imposed on tuples of variables that do not violate the order.
We study a restriction of VCSPO, in which soft constraints are imposed on a
segment of adjacent variables and a constraint language $\Gamma$ consists of
$\{0,1\}$-valued characteristic functions of predicates. This kind of
potential generalizes the so-called pattern-based potentials, which have been
applied in many structured prediction tasks.
For a constraint language $\Gamma$ we introduce a closure operator, $
\overline{\Gamma^{\cap}}\supseteq \Gamma$, and give examples of constraint
languages for which $|\overline{\Gamma^{\cap}}|$ is small. If all predicates in
$\Gamma$ are cartesian products, we show that the minimization of a generalized
pattern-based potential (or, the computation of its partition function) can be
made in ${\mathcal O}(|V|\cdot |D|^2 \cdot |\overline{\Gamma^{\cap}}|^2 )$
time, where $V$ is a set of variables, $D$ is a domain set. If, additionally,
only non-positive weights of constraints are allowed, the complexity of the
minimization task drops to ${\mathcal O}(|V|\cdot |\overline{\Gamma^{\cap}}|
\cdot |D| \cdot \max_{\rho\in \Gamma}\|\rho\|^2 )$ where $\|\rho\|$ is the
arity of $\rho\in \Gamma$. For a general language $\Gamma$ and non-positive
weights, the minimization task can be carried out in ${\mathcal O}(|V|\cdot
|\overline{\Gamma^{\cap}}|^2)$ time.
We argue that in many natural cases $\overline{\Gamma^{\cap}}$ is of moderate
size, though in the worst case $|\overline{\Gamma^{\cap}}|$ can blow up and
depend exponentially on $\max_{\rho\in \Gamma}\|\rho\|$.
| [
{
"version": "v1",
"created": "Sat, 27 May 2023 16:53:10 GMT"
}
] | 1,685,404,800,000 | [
[
"Takhanov",
"Rustem",
""
]
] |
2305.17601 | Johannes Treutlein | Caspar Oesterheld, Johannes Treutlein, Emery Cooper, Rubi Hudson | Incentivizing honest performative predictions with proper scoring rules | Accepted for the 39th Conference on Uncertainty in Artificial
Intelligence (UAI 2023) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Proper scoring rules incentivize experts to accurately report beliefs,
assuming predictions cannot influence outcomes. We relax this assumption and
investigate incentives when predictions are performative, i.e., when they can
influence the outcome of the prediction, such as when making public predictions
about the stock market. We say a prediction is a fixed point if it accurately
reflects the expert's beliefs after that prediction has been made. We show that
in this setting, reports maximizing expected score generally do not reflect an
expert's beliefs, and we give bounds on the inaccuracy of such reports. We show
that, for binary predictions, if the influence of the expert's prediction on
outcomes is bounded, it is possible to define scoring rules under which optimal
reports are arbitrarily close to fixed points. However, this is impossible for
predictions over more than two outcomes. We also perform numerical simulations
in a toy setting, showing that our bounds are tight in some situations and that
prediction error is often substantial (greater than 5-10%). Lastly, we discuss
alternative notions of optimality, including performative stability, and show
that they incentivize reporting fixed points.
| [
{
"version": "v1",
"created": "Sun, 28 May 2023 00:53:26 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 17:20:13 GMT"
}
] | 1,685,491,200,000 | [
[
"Oesterheld",
"Caspar",
""
],
[
"Treutlein",
"Johannes",
""
],
[
"Cooper",
"Emery",
""
],
[
"Hudson",
"Rubi",
""
]
] |
2305.18015 | David Jaime Tena Cucala | David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, Egor V. Kostylev | On the Correspondence Between Monotonic Max-Sum GNNs and Datalog | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Although there has been significant interest in applying machine learning
techniques to structured data, the expressivity (i.e., a description of what
can be learned) of such techniques is still poorly understood. In this paper,
we study data transformations based on graph neural networks (GNNs). First, we
note that the choice of how a dataset is encoded into a numeric form
processable by a GNN can obscure the characterisation of a model's
expressivity, and we argue that a canonical encoding provides an appropriate
basis. Second, we study the expressivity of monotonic max-sum GNNs, which cover
a subclass of GNNs with max and sum aggregation functions. We show that, for
each such GNN, one can compute a Datalog program such that applying the GNN to
any dataset produces the same facts as a single round of application of the
program's rules to the dataset. Monotonic max-sum GNNs can sum an unbounded
number of feature vectors which can result in arbitrarily large feature values,
whereas rule application requires only a bounded number of constants. Hence,
our result shows that the unbounded summation of monotonic max-sum GNNs does
not increase their expressive power. Third, we sharpen our result to the
subclass of monotonic max GNNs, which use only the max aggregation function,
and identify a corresponding class of Datalog programs.
| [
{
"version": "v1",
"created": "Mon, 29 May 2023 11:13:38 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 15:06:33 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Jun 2023 09:22:01 GMT"
}
] | 1,686,873,600,000 | [
[
"Cucala",
"David Tena",
""
],
[
"Grau",
"Bernardo Cuenca",
""
],
[
"Motik",
"Boris",
""
],
[
"Kostylev",
"Egor V.",
""
]
] |
2305.19274 | Mahdi Mollakazemiha | Mahdi Mollakazemiha, Hassan Fatzade | Memory as a Mass-based Graph: Towards a Conceptual Framework for the
Simulation Model of Human Memory in AI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There are two approaches for simulating memory as well as learning in
artificial intelligence: the functionalistic approach and the cognitive
approach. A necessary condition for adopting the second approach is to
provide a model of brain activity that shows good congruence with
observational facts such as mistakes and forgotten experiences. Given that
human memory has a solid core that includes the components of our identity, our
family and our hometown, the major and determinative events of our lives, and
the countless repeated and accepted facts of our culture, the further we move
toward the peripheral spots, the flimsier the data becomes and the more easily
it is exposed to oblivion.
It was essential to propose a model in which the topographical differences are
quite distinguishable. In our proposed model, we have translated this
topographical situation into quantities, which are attributed to the nodes. The
result is an edge-weighted graph with mass-based values on the nodes which
demonstrates the importance of each atomic proposition, as a truth, for an
intelligent being. Furthermore, it dynamically develops and modifies, and in
successive phases, it changes the mass of the nodes and weight of the edges
depending on gathered inputs from the environment.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 01:42:16 GMT"
}
] | 1,685,577,600,000 | [
[
"Mollakazemiha",
"Mahdi",
""
],
[
"Fatzade",
"Hassan",
""
]
] |
2305.19861 | Ryan Carey | Ryan Carey and Tom Everitt | Human Control: Definitions and Algorithms | UAI 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can humans stay in control of advanced artificial intelligence systems?
One proposal is corrigibility, which requires the agent to follow the
instructions of a human overseer, without inappropriately influencing them. In
this paper, we formally define a variant of corrigibility called shutdown
instructability, and show that it implies appropriate shutdown behavior,
retention of human autonomy, and avoidance of user harm. We also analyse the
related concepts of non-obstruction and shutdown alignment, three previously
proposed algorithms for human control, and one new algorithm.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 13:53:02 GMT"
}
] | 1,685,577,600,000 | [
[
"Carey",
"Ryan",
""
],
[
"Everitt",
"Tom",
""
]
] |
2306.00036 | Heng Dong | Heng Dong, Junyu Zhang, Tonghan Wang, Chongjie Zhang | Symmetry-Aware Robot Design with Structured Subgroups | The Fortieth International Conference on Machine Learning (ICML 2023) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Robot design aims at learning to create robots that can be easily controlled
and perform tasks efficiently. Previous works on robot design have proven its
ability to generate robots for various tasks. However, these works searched the
robots directly from the vast design space and ignored common structures,
resulting in abnormal robots and poor performance. To tackle this problem, we
propose a Symmetry-Aware Robot Design (SARD) framework that exploits the
structure of the design space by incorporating symmetry searching into the
robot design process. Specifically, we represent symmetries with the subgroups
of the dihedral group and search for the optimal symmetry in structured
subgroups. Then robots are designed under the searched symmetry. In this way,
SARD can design efficient symmetric robots while covering the original design
space, which is theoretically analyzed. We further empirically evaluate SARD on
various tasks, and the results show its superior efficiency and
generalizability.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 08:57:03 GMT"
}
] | 1,685,664,000,000 | [
[
"Dong",
"Heng",
""
],
[
"Zhang",
"Junyu",
""
],
[
"Wang",
"Tonghan",
""
],
[
"Zhang",
"Chongjie",
""
]
] |
2306.00175 | Alex Altair | Alex Altair | A Comparison of Decision Algorithms on Newcomblike Problems | 17 pages, 10 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When formulated using Bayesian networks, two standard decision algorithms
(Evidential Decision Theory and Causal Decision Theory) can be shown to fail
systematically when faced with aspects of the prisoner's dilemma and so-called
"Newcomblike" problems. We describe a new form of decision algorithm, called
Timeless Decision Theory, which consistently wins on these problems.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 20:50:08 GMT"
}
] | 1,685,664,000,000 | [
[
"Altair",
"Alex",
""
]
] |
2306.00249 | Robert Moss | Robert J. Moss, Anthony Corso, Jef Caers, Mykel J. Kochenderfer | BetaZero: Belief-State Planning for Long-Horizon POMDPs using Learned
Approximations | 16 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Real-world planning problems, including autonomous driving and sustainable
energy applications like carbon storage and resource exploration, have recently
been modeled as partially observable Markov decision processes (POMDPs) and
solved using approximate methods. To solve high-dimensional POMDPs in practice,
state-of-the-art methods use online planning with problem-specific heuristics
to reduce planning horizons and make the problems tractable. Algorithms that
learn approximations to replace heuristics have recently found success in
large-scale fully observable domains. The key insight is the combination of
online Monte Carlo tree search with offline neural network approximations of
the optimal policy and value function. In this work, we bring this insight to
partially observed domains and propose BetaZero, a belief-state planning
algorithm for high-dimensional POMDPs. BetaZero learns offline approximations
that replace heuristics to enable online decision making in long-horizon
problems. We address several challenges inherent in large-scale partially
observable domains; namely challenges of transitioning in stochastic
environments, prioritizing action branching with a limited search budget, and
representing beliefs as input to the network. To formalize the use of all
limited search information we train against a novel Q-weighted policy vector
target. We test BetaZero on various well-established benchmark POMDPs found in
the literature and a real-world, high-dimensional problem of critical mineral
exploration. Experiments show that BetaZero outperforms state-of-the-art POMDP
solvers on a variety of tasks.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 23:47:31 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jun 2023 22:58:35 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Dec 2023 19:49:52 GMT"
}
] | 1,702,944,000,000 | [
[
"Moss",
"Robert J.",
""
],
[
"Corso",
"Anthony",
""
],
[
"Caers",
"Jef",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
2306.00335 | Shivani Bathla | Shivani Bathla, Vinita Vasudevan | Approximate inference of marginals using the IBIA framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Exact inference of marginals in probabilistic graphical models (PGM) is known
to be intractable, necessitating the use of approximate methods. Most of the
existing variational techniques perform iterative message passing in loopy
graphs, which is slow to converge for many benchmarks. In this paper, we propose
a new algorithm for marginal inference that is based on the incremental
build-infer-approximate (IBIA) paradigm. Our algorithm converts the PGM into a
sequence of linked clique tree forests (SLCTF) with bounded clique sizes, and
then uses a heuristic belief update algorithm to infer the marginals. For the
special case of Bayesian networks, we show that if the incremental build step
in IBIA uses the topological order of variables then (a) the prior marginals
are consistent in all CTFs in the SLCTF and (b) the posterior marginals are
consistent once all evidence variables are added to the SLCTF. In our approach,
the belief propagation step is non-iterative and the accuracy-complexity
trade-off is controlled using user-defined clique size bounds. Results for
several benchmark sets from recent UAI competitions show that our method gives
either better or comparable accuracy than existing variational and sampling
based methods, with smaller runtimes.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2023 04:24:21 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Oct 2023 11:36:41 GMT"
}
] | 1,698,710,400,000 | [
[
"Bathla",
"Shivani",
""
],
[
"Vasudevan",
"Vinita",
""
]
] |
2306.01746 | Michael Gr. Voskoglou Prof. Dr. | Michael Gr. Voskoglou | An Application of Neutrosophic Sets to Decision Making | 9 pages, 4 tables | Neutrosophic Sets and Systems, 53, 1-9, 2023 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Maji et al. introduced in 2002 a method of parametric decision making using
soft sets as tools and representing their tabular form as a binary matrix. In
cases, however, where some or all of the parameters used for the
characterization of the elements of the universal set are of fuzzy texture,
their method does not always give the best decision-making solution. In order
to tackle this problem, in earlier works we modified the method of Maji et al.
by replacing the binary elements in the tabular form of the corresponding soft
set either by grey numbers or by triangular fuzzy numbers. In this work, in
order to handle more efficiently those cases in which the decision maker has doubts
about the correctness of the fuzzy/qualitative characterizations assigned to
some or all of the elements of the universal set, we replace the binary
elements of the tabular form by neutrosophic triplets. Our new, neutrosophic
decision making method is illustrated by an application concerning the choice
of a new player by a soccer club.
| [
{
"version": "v1",
"created": "Tue, 16 May 2023 10:46:22 GMT"
}
] | 1,686,009,600,000 | [
[
"Voskoglou",
"Michael Gr.",
""
]
] |
2306.01771 | Amin Beheshti | Amin Beheshti, Jian Yang, Quan Z. Sheng, Boualem Benatallah, Fabio
Casati, Schahram Dustdar, Hamid Reza Motahari Nezhad, Xuyun Zhang, Shan Xue | ProcessGPT: Transforming Business Process Management with Generative
Artificial Intelligence | Accepted in: 2023 IEEE International Conference on Web Services
(ICWS); Corresponding author: Prof. Amin Beheshti ([email protected]) | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generative Pre-trained Transformer (GPT) is a state-of-the-art machine
learning model capable of generating human-like text through natural language
processing (NLP). GPT is trained on massive amounts of text data and uses deep
learning techniques to learn patterns and relationships within the data,
enabling it to generate coherent and contextually appropriate text. This
position paper proposes using GPT technology to generate new process models
when/if needed. We introduce ProcessGPT as a new technology that has the
potential to enhance decision-making in data-centric and knowledge-intensive
processes. ProcessGPT can be designed by training a generative pre-trained
transformer model on a large dataset of business process data. This model can
then be fine-tuned on specific process domains and trained to generate process
flows and make decisions based on context and user input. The model can be
integrated with NLP and machine learning techniques to provide insights and
recommendations for process improvement. Furthermore, the model can automate
repetitive tasks and improve process efficiency while enabling knowledge
workers to communicate analysis findings, supporting evidence, and make
decisions. ProcessGPT can revolutionize business process management (BPM) by
offering a powerful tool for process augmentation, automation and improvement.
Finally, we demonstrate how ProcessGPT can be a powerful tool for augmenting
data engineers in maintaining data ecosystem processes within large bank
organizations. Our scenario highlights the potential of this approach to
improve efficiency, reduce costs, and enhance the quality of business
operations through the automation of data-centric and knowledge-intensive
processes. These results underscore the promise of ProcessGPT as a
transformative technology for organizations looking to improve their process
workflows.
| [
{
"version": "v1",
"created": "Mon, 29 May 2023 02:27:46 GMT"
}
] | 1,686,009,600,000 | [
[
"Beheshti",
"Amin",
""
],
[
"Yang",
"Jian",
""
],
[
"Sheng",
"Quan Z.",
""
],
[
"Benatallah",
"Boualem",
""
],
[
"Casati",
"Fabio",
""
],
[
"Dustdar",
"Schahram",
""
],
[
"Nezhad",
"Hamid Reza Motahari",
""
],
[
"Zhang",
"Xuyun",
""
],
[
"Xue",
"Shan",
""
]
] |
2306.01772 | Deshendran Moodley | Deshendran Moodley and Christopher Seebregts | Re-imagining health and well-being in low resource African settings
using an augmented AI system and a 3D digital twin | Submitted to Workshop on AI for Digital Twins and Cyber-physical
applications at IJCAI 2023, August 19--21, 2023, Macau, S.A.R | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses and explores the potential and relevance of recent
developments in artificial intelligence (AI) and digital twins for health and
well-being in low-resource African countries. We use the case of public health
emergency response to disease outbreaks and epidemic control. There is
potential to take advantage of the increasing availability of data and
digitization to develop advanced AI methods for analysis and prediction. Using
an AI systems perspective, we review emerging trends in AI systems and digital
twins and propose an initial augmented AI system architecture to illustrate how
an AI system can work with a 3D digital twin to address public health goals. We
highlight scientific knowledge discovery, continual learning, pragmatic
interoperability, and interactive explanation and decision-making as essential
research challenges for AI systems and digital twins.
| [
{
"version": "v1",
"created": "Mon, 29 May 2023 06:17:58 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jul 2023 08:25:19 GMT"
}
] | 1,689,033,600,000 | [
[
"Moodley",
"Deshendran",
""
],
[
"Seebregts",
"Christopher",
""
]
] |
2306.01872 | Mengjiao Yang | Mengjiao Yang, Yilun Du, Bo Dai, Dale Schuurmans, Joshua B. Tenenbaum,
Pieter Abbeel | Probabilistic Adaptation of Text-to-Video Models | Project website https://video-adapter.github.io/. First two authors
contributed equally | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large text-to-video models trained on internet-scale data have demonstrated
exceptional capabilities in generating high-fidelity videos from arbitrary
textual descriptions. However, adapting these models to tasks with limited
domain-specific data, such as animation or robotics videos, poses a significant
computational challenge, since finetuning a pretrained large model can be
prohibitively expensive. Inspired by how a small modifiable component (e.g.,
prompts, prefix-tuning) can adapt a large language model to perform new tasks
without requiring access to the model weights, we investigate how to adapt a
large pretrained text-to-video model to a variety of downstream domains and
tasks without finetuning. In answering this question, we propose Video Adapter,
which leverages the score function of a large pretrained video diffusion model
as a probabilistic prior to guide the generation of a task-specific small video
model. Our experiments show that Video Adapter is capable of incorporating the
broad knowledge and preserving the high fidelity of a large pretrained video
model in a task-specific small video model that is able to generate
high-quality yet specialized videos on a variety of tasks such as animation,
egocentric modeling, and modeling of simulated and real-world robotics data.
More videos can be found on the website https://video-adapter.github.io/.
| [
{
"version": "v1",
"created": "Fri, 2 Jun 2023 19:00:17 GMT"
}
] | 1,686,009,600,000 | [
[
"Yang",
"Mengjiao",
""
],
[
"Du",
"Yilun",
""
],
[
"Dai",
"Bo",
""
],
[
"Schuurmans",
"Dale",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
2306.01913 | Xin Dai | Xin Dai, Yujie Fan, Zhongfang Zhuang, Shubham Jain, Chin-Chia Michael
Yeh, Junpeng Wang, Liang Wang, Yan Zheng, Prince Osei Aboagye, Wei Zhang | PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Pre-training on large models is prevalent and emerging with the ever-growing
user-generated content in many machine learning application categories. It has
been recognized that learning contextual knowledge from the datasets depicting
user-content interaction plays a vital role in downstream tasks. Despite
several studies attempting to learn contextual knowledge via pre-training
methods, finding an optimal training objective and strategy for this type of
task remains a challenging problem. In this work, we contend that there are two
distinct aspects of contextual knowledge, namely the user-side and the
content-side, for datasets where user-content interaction can be represented as
a bipartite graph. To learn contextual knowledge, we propose a pre-training
method that learns a bi-directional mapping between the spaces of the user-side
and the content-side. We formulate the training goal as a contrastive learning
task and propose a dual-Transformer architecture to encode the contextual
knowledge. We evaluate the proposed method for the recommendation task. The
empirical studies have demonstrated that the proposed method outperformed all
the baselines with significant gains.
| [
{
"version": "v1",
"created": "Fri, 2 Jun 2023 20:38:43 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 06:20:42 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Sep 2023 17:31:16 GMT"
}
] | 1,695,686,400,000 | [
[
"Dai",
"Xin",
""
],
[
"Fan",
"Yujie",
""
],
[
"Zhuang",
"Zhongfang",
""
],
[
"Jain",
"Shubham",
""
],
[
"Yeh",
"Chin-Chia Michael",
""
],
[
"Wang",
"Junpeng",
""
],
[
"Wang",
"Liang",
""
],
[
"Zheng",
"Yan",
""
],
[
"Aboagye",
"Prince Osei",
""
],
[
"Zhang",
"Wei",
""
]
] |
2306.02019 | MD Abdullah Al Nasim | Angona Biswas, MD Abdullah Al Nasim, Al Imran, Anika Tabassum Sejuty,
Fabliha Fairooz, Sai Puppala, Sajedul Talukder | Generative Adversarial Networks for Data Augmentation | 13 pages, 6 figures, 1 table; Acceptance of the chapter for the
Springer book "Data-driven approaches to medical imaging" | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | One way to expand the available dataset for training AI models in the medical
field is through the use of Generative Adversarial Networks (GANs) for data
augmentation. GANs work by employing a generator network to create new data
samples that are then assessed by a discriminator network to determine their
similarity to real samples. The discriminator network is taught to
differentiate between actual and synthetic samples, while the generator network
is trained to generate data that closely resembles real data. The process is
repeated until the generator network can produce synthetic data that is
indistinguishable from genuine data. GANs have been utilized in medical image
analysis for various tasks, including data augmentation, image creation, and
domain adaptation. They can generate synthetic samples that can be used to
increase the available dataset, especially in cases where obtaining large
amounts of genuine data is difficult or unethical. However, it is essential to
note that the use of GANs in medical imaging is still an active area of
research to ensure that the produced images are of high quality and suitable
for use in clinical settings.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 06:33:33 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 20:15:59 GMT"
}
] | 1,686,268,800,000 | [
[
"Biswas",
"Angona",
""
],
[
"Nasim",
"MD Abdullah Al",
""
],
[
"Imran",
"Al",
""
],
[
"Sejuty",
"Anika Tabassum",
""
],
[
"Fairooz",
"Fabliha",
""
],
[
"Puppala",
"Sai",
""
],
[
"Talukder",
"Sajedul",
""
]
] |
2306.02043 | Yukyung Lee | Yukyung Lee, Jaehee Kim, Doyoon Kim, Yookyung Kho, Younsun Kim,
Pilsung Kang | Painsight: An Extendable Opinion Mining Framework for Detecting Pain
Points Based on Online Customer Reviews | WASSA at ACL 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the e-commerce market continues to expand and online transactions
proliferate, customer reviews have emerged as a critical element in shaping the
purchasing decisions of prospective buyers. Previous studies have endeavored to
identify key aspects of customer reviews through the development of sentiment
analysis models and topic models. However, extracting specific dissatisfaction
factors remains a challenging task. In this study, we delineate the pain point
detection problem and propose Painsight, an unsupervised framework for
automatically extracting distinct dissatisfaction factors from customer reviews
without relying on ground truth labels. Painsight employs pre-trained language
models to construct sentiment analysis and topic models, leveraging attribution
scores derived from model gradients to extract dissatisfaction factors. Upon
application of the proposed methodology to customer review data spanning five
product categories, we successfully identified and categorized dissatisfaction
factors within each group, as well as isolated factors for each type. Notably,
Painsight outperformed benchmark methods, achieving substantial performance
enhancements and exceptional results in human evaluations.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 07:51:57 GMT"
}
] | 1,686,009,600,000 | [
[
"Lee",
"Yukyung",
""
],
[
"Kim",
"Jaehee",
""
],
[
"Kim",
"Doyoon",
""
],
[
"Kho",
"Yookyung",
""
],
[
"Kim",
"Younsun",
""
],
[
"Kang",
"Pilsung",
""
]
] |
2306.02055 | MD Abdullah Al Nasim | Shuvra Sarker, Angona Biswas, MD Abdullah Al Nasim, Md Shahin Ali, Sai
Puppala, Sajedul Talukder | Case Studies on X-Ray Imaging, MRI and Nuclear Imaging | 15 pages, 3 figures, 4 tables; Acceptance of the chapter for the
Springer book "Data-driven approaches to medical imaging" | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The field of medical imaging is an essential aspect of the medical sciences,
involving various forms of radiation to capture images of the internal tissues
and organs of the body. These images provide vital information for clinical
diagnosis, and in this chapter, we will explore the use of X-ray, MRI, and
nuclear imaging in detecting severe illnesses. However, manual evaluation and
storage of these images can be a challenging and time-consuming process. To
address this issue, artificial intelligence (AI)-based techniques, particularly
deep learning (DL), have become increasingly popular for systematic feature
extraction and classification from imaging modalities, thereby aiding doctors
in making rapid and accurate diagnoses. In this review study, we will focus on
how AI-based approaches, particularly the use of Convolutional Neural Networks
(CNN), can assist in disease detection through medical imaging technology. CNN
is a commonly used approach for image analysis due to its ability to extract
features from raw input images, and as such, it is the primary focus of this
study for diagnosing ailments using medical imaging technology.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 09:05:35 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 19:31:06 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Jun 2023 17:14:19 GMT"
}
] | 1,687,305,600,000 | [
[
"Sarker",
"Shuvra",
""
],
[
"Biswas",
"Angona",
""
],
[
"Nasim",
"MD Abdullah Al",
""
],
[
"Ali",
"Md Shahin",
""
],
[
"Puppala",
"Sai",
""
],
[
"Talukder",
"Sajedul",
""
]
] |
2306.02177 | Christopher Michael Rytting | Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan
Busby, Nancy Fulda, Joshua Gubler, David Wingate | Towards Coding Social Science Datasets with Language Models | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Researchers often rely on humans to code (label, annotate, etc.) large sets
of texts. This kind of human coding forms an important part of social science
research, yet the coding process is both resource intensive and highly variable
from application to application. In some cases, efforts to automate this
process have achieved human-level accuracies, but to achieve this, these
attempts frequently rely on thousands of hand-labeled training examples, which
makes them inapplicable to small-scale research studies and costly for large
ones. Recent advances in a specific kind of artificial intelligence tool -
language models (LMs) - provide a solution to this problem. Work in computer
science makes it clear that LMs are able to classify text, without the cost (in
financial terms and human effort) of alternative methods. To demonstrate the
possibilities of LMs in this area of political science, we use GPT-3, one of
the most advanced LMs, as a synthetic coder and compare it to human coders. We
find that GPT-3 can match the performance of typical human coders and offers
benefits over other machine learning methods of coding text. We find this
across a variety of domains using very different coding procedures. This
provides exciting evidence that language models can serve as a critical advance
in the coding of open-ended texts in a variety of applications.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 19:11:34 GMT"
}
] | 1,686,009,600,000 | [
[
"Rytting",
"Christopher Michael",
""
],
[
"Sorensen",
"Taylor",
""
],
[
"Argyle",
"Lisa",
""
],
[
"Busby",
"Ethan",
""
],
[
"Fulda",
"Nancy",
""
],
[
"Gubler",
"Joshua",
""
],
[
"Wingate",
"David",
""
]
] |
2306.02199 | Bo Xiong | Bo Xiong, Mojtaba Nayyeri, Shirui Pan, Steffen Staab | Shrinking Embeddings for Hyper-Relational Knowledge Graphs | To appear in ACL 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Link prediction on knowledge graphs (KGs) has been extensively studied on
binary relational KGs, wherein each fact is represented by a triple. A
significant amount of important knowledge, however, is represented by
hyper-relational facts where each fact is composed of a primal triple and a set
of qualifiers comprising a key-value pair that allows for expressing more
complicated semantics. Although some recent works have proposed to embed
hyper-relational KGs, these methods fail to capture essential inference
patterns of hyper-relational facts such as qualifier monotonicity, qualifier
implication, and qualifier mutual exclusion, limiting their generalization
capability. To address this, we present \emph{ShrinkE}, a geometric
hyper-relational KG embedding method aiming to explicitly model these patterns.
ShrinkE models the primal triple as a spatial-functional transformation from
the head into a relation-specific box. Each qualifier ``shrinks'' the box to
narrow down the possible answer set and, thus, realizes qualifier monotonicity.
The spatial relationships between the qualifier boxes allow for modeling core
inference patterns of qualifiers such as implication and mutual exclusion.
Experimental results demonstrate ShrinkE's superiority on three benchmarks of
hyper-relational KGs.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 21:14:59 GMT"
}
] | 1,686,009,600,000 | [
[
"Xiong",
"Bo",
""
],
[
"Nayyer",
"Mojtaba",
""
],
[
"Pan",
"Shirui",
""
],
[
"Staab",
"Steffen",
""
]
] |
2306.02211 | Mohamed Mohsen | Mohamed Mohsen, Hamada Rizk, Moustafa Youssef | Privacy-Preserving by Design: Indoor Positioning System Using Wi-Fi
Passive TDOA | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indoor localization systems have become increasingly important in a wide
range of applications, including industry, security, logistics, and emergency
services. However, the growing demand for accurate localization has heightened
concerns over privacy, as many localization systems rely on active signals that
can be misused by an adversary to track users' movements or manipulate their
measurements. This paper presents PassiFi, a novel passive Wi-Fi time-based
indoor localization system that effectively balances accuracy and privacy.
PassiFi uses a passive WiFi Time Difference of Arrival (TDoA) approach that
ensures users' privacy and safeguards the integrity of their measurement data
while still achieving high accuracy. The system adopts a fingerprinting
approach to address multi-path and non-line-of-sight problems and utilizes deep
neural networks to learn the complex relationship between TDoA and location.
Evaluation in a real-world testbed demonstrates PassiFi's exceptional
performance, surpassing traditional multilateration by 128%, achieving
sub-meter accuracy on par with state-of-the-art active measurement systems, all
while preserving privacy.
| [
{
"version": "v1",
"created": "Sat, 3 Jun 2023 23:27:38 GMT"
}
] | 1,686,009,600,000 | [
[
"Mohsen",
"Mohamed",
""
],
[
"Rizk",
"Hamada",
""
],
[
"Youssef",
"Moustafa",
""
]
] |
2306.02257 | Kohei Hattori | Kohei Hattori, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu
Fujiyoshi | Learning from AI: An Interactive Learning Method Using a DNN Model
Incorporating Expert Knowledge as a Teacher | 12 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual explanation is an approach for visualizing the grounds of judgment by
deep learning, and it is possible to visually interpret the grounds of a
judgment for a certain input by visualizing an attention map. For
deep-learning models that output erroneous decision-making grounds, a method
that incorporates expert human knowledge into the model via an attention map,
in a manner that improves explanatory power and recognition accuracy, has been proposed. In
this study, based on a deep-learning model that incorporates the knowledge of
experts, a method by which a learner "learns from AI" the grounds for its
decisions is proposed. An "attention branch network" (ABN), which has been
fine-tuned with attention maps modified by experts, is prepared as a teacher.
By using an interactive editing tool for the fine-tuned ABN and attention maps,
the learner learns by editing the attention maps and changing the inference
results. By repeatedly editing the attention maps and making inferences so that
the correct recognition results are output, the learner can acquire the grounds
for the expert's judgments embedded in the ABN. The results of an evaluation
experiment with subjects show that learning using the proposed method is more
efficient than the conventional method.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2023 04:22:55 GMT"
}
] | 1,686,009,600,000 | [
[
"Hattori",
"Kohei",
""
],
[
"Hirakawa",
"Tsubasa",
""
],
[
"Yamashita",
"Takayoshi",
""
],
[
"Fujiyoshi",
"Hironobu",
""
]
] |
2306.02342 | Theo Adrai | Theo Adrai, Guy Ohayon, Tomer Michaeli and Michael Elad | Deep Optimal Transport: A Practical Algorithm for Photo-realistic Image
Restoration | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose an image restoration algorithm that can control the perceptual
quality and/or the mean square error (MSE) of any pre-trained model, trading
one over the other at test time. Our algorithm is few-shot: Given about a dozen
images restored by the model, it can significantly improve the perceptual
quality and/or the MSE of the model for newly restored images without further
training. Our approach is motivated by a recent theoretical result that links
the minimum MSE (MMSE) predictor to the predictor that minimizes the
MSE under a perfect perceptual quality constraint. Specifically, it has been
shown that the latter can be obtained by optimally transporting the output of
the former, such that its distribution matches the source data. Thus, to
improve the perceptual quality of a predictor that was originally trained to
minimize MSE, we approximate the optimal transport by a linear transformation
in the latent space of a variational auto-encoder, which we compute in
closed-form using empirical means and covariances. Going beyond the theory, we
find that applying the same procedure on models that were initially trained to
achieve high perceptual quality, typically improves their perceptual quality
even further. By interpolating the results with the original output of the
model, we can improve their MSE at the expense of perceptual quality. We
illustrate our method on a variety of degradations applied to general content
images of arbitrary dimensions.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2023 12:21:53 GMT"
}
] | 1,686,009,600,000 | [
[
"Adrai",
"Theo",
""
],
[
"Ohayon",
"Guy",
""
],
[
"Michaeli",
"Tomer",
""
],
[
"Elad",
"Michael",
""
]
] |
2306.02359 | Jiancheng Zhao | Jiancheng Zhao, Jiaqi Yue, Liangjun Feng, Chunhui Zhao, and Jinliang
Ding | Addressing Domain Shift via Knowledge Space Sharing for Generalized
Zero-Shot Industrial Fault Diagnosis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault diagnosis is a critical aspect of industrial safety, and supervised
industrial fault diagnosis has been extensively researched. However, obtaining
fault samples of all categories for model training can be challenging due to
cost and safety concerns. As a result, the generalized zero-shot industrial
fault diagnosis has gained attention as it aims to diagnose both seen and
unseen faults. Nevertheless, the lack of unseen fault data for training poses a
challenging domain shift problem (DSP), where unseen faults are often
identified as seen faults. In this article, we propose a knowledge space
sharing (KSS) model to address the DSP in the generalized zero-shot industrial
fault diagnosis task. The KSS model includes a generation mechanism (KSS-G) and
a discrimination mechanism (KSS-D). KSS-G generates samples for rare faults by
recombining transferable attribute features extracted from seen samples under
the guidance of auxiliary knowledge. KSS-D is trained in a supervised way with
the help of generated samples, which aims to address the DSP by modeling seen
categories in the knowledge space. KSS-D avoids misclassifying rare faults as
seen faults and identifies seen fault samples. We conduct generalized zero-shot
diagnosis experiments on the benchmark Tennessee-Eastman process, and our
results show that our approach outperforms state-of-the-art methods for the
generalized zero-shot industrial fault diagnosis problem.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2023 13:50:01 GMT"
}
] | 1,686,009,600,000 | [
[
"Zhao",
"Jiancheng",
""
],
[
"Yue",
"Jiaqi",
""
],
[
"Feng",
"Liangjun",
""
],
[
"Zhao",
"Chunhui",
""
],
[
"Ding",
"Jinliang",
""
]
] |
2306.02415 | Roy Abel | Roy Abel, Shimon Ullman | Top-Down Network Combines Back-Propagation with Attention | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cortical processing, in vision and other domains, combines bottom-up (BU)
with extensive top-down (TD) processing. Two primary goals attributed to TD
processing are learning and directing attention. These two roles are
accomplished in current network models through distinct mechanisms. Attention
guidance is often implemented by extending the model's architecture, while
learning is typically accomplished by an external learning algorithm such as
back-propagation. In the current work, we present an integration of the two
functions above, which appear unrelated, using a single unified mechanism
inspired by the human brain. We propose a novel symmetric bottom-up top-down
network structure that can integrate conventional bottom-up networks with a
symmetric top-down counterpart, allowing each network to recurrently guide and
influence the other. For example, during multi-task learning, the same top-down
network is being used for both learning, via propagating feedback signals, and
at the same time also for top-down attention, by guiding the bottom-up network
to perform a selected task. In contrast with standard models, no external
back-propagation is used for learning. Instead, we propose a 'Counter-Hebb'
learning, which adjusts the weights of both the bottom-up and top-down networks
simultaneously. We show that our method achieves competitive performance on
standard multi-task learning benchmarks. Yet, unlike existing methods, we rely
on single-task architectures and optimizers, without any task-specific
parameters. The results, which show how attention-guided multi-task learning can be
combined efficiently with internal learning in a unified TD process, suggest a
possible model for combining BU and TD processing in human vision.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2023 17:38:06 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Aug 2023 14:11:30 GMT"
}
] | 1,693,440,000,000 | [
[
"Abel",
"Roy",
""
],
[
"Ullman",
"Shimon",
""
]
] |
2306.02488 | Xiaoting Li | Xiaoting Li, Lingwei Chen, Dinghao Wu | Adversary for Social Good: Leveraging Adversarial Attacks to Protect
Personal Attribute Privacy | null | ACM Trans. Knowl. Discov. Data (August 2023) | 10.1145/3614098 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social media has drastically reshaped the world, allowing billions of
people to engage in interactive environments and conveniently create and
share content with the public. Among them, text data (e.g., tweets, blogs)
maintains the basic yet important social activities and generates a rich source
of user-oriented information. While those explicit sensitive user data like
credentials has been significantly protected by all means, personal private
attribute (e.g., age, gender, location) disclosure due to inference attacks is
somehow challenging to avoid, especially when powerful natural language
processing (NLP) techniques have been effectively deployed to automate
attribute inferences from implicit text data. This puts users' attribute
privacy at risk. To address this challenge, in this paper, we leverage the
inherent vulnerability of machine learning to adversarial attacks, and design a
novel text-space Adversarial attack for Social Good, called Adv4SG. In other
words, we cast the problem of protecting personal attribute privacy as an
adversarial attack formulation problem over the social media text data to
defend against NLP-based attribute inference attacks. More specifically, Adv4SG
proceeds with a sequence of word perturbations under given constraints such
that the probed attribute cannot be identified correctly. Different from the
prior works, we advance Adv4SG by considering social media property, and
introducing cost-effective mechanisms to expedite attribute obfuscation over
text data under the black-box setting. Extensive experiments on real-world
social media datasets have demonstrated that our method can effectively degrade
the inference accuracy with less computational cost over different attribute
settings, which substantially helps mitigate the impacts of inference attacks
and thus achieve high performance in user attribute privacy protection.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2023 21:40:23 GMT"
}
] | 1,696,377,600,000 | [
[
"Li",
"Xiaoting",
""
],
[
"Chen",
"Lingwei",
""
],
[
"Wu",
"Dinghao",
""
]
] |
2306.02519 | Ted Sanders | Ari Allyn-Feuer and Ted Sanders | Transformative AGI by 2043 is <1% likely | 114 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is a submission to the Open Philanthropy AI Worldviews Contest. In
it, we estimate the likelihood of transformative artificial general
intelligence (AGI) by 2043 and find it to be <1%.
Specifically, we argue:
The bar is high: AGI as defined by the contest - something like AI that can
perform nearly all valuable tasks at human cost or less - which we will call
transformative AGI is a much higher bar than merely massive progress in AI, or
even the unambiguous attainment of expensive superhuman AGI or cheap but uneven
AGI.
Many steps are needed: The probability of transformative AGI by 2043 can be
decomposed as the joint probability of a number of necessary steps, which we
group into categories of software, hardware, and sociopolitical factors.
No step is guaranteed: For each step, we estimate a probability of success by
2043, conditional on prior steps being achieved. Many steps are quite
constrained by the short timeline, and our estimates range from 16% to 95%.
Therefore, the odds are low: Multiplying the cascading conditional
probabilities together, we estimate that transformative AGI by 2043 is 0.4%
likely. Reaching >10% seems to require probabilities that feel unreasonably
high, and even 3% seems unlikely.
Thoughtfully applying the cascading conditional probability approach to this
question yields lower probability values than is often supposed. This framework
helps enumerate the many future scenarios where humanity makes partial but
incomplete progress toward transformative AGI.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 00:58:51 GMT"
}
] | 1,686,009,600,000 | [
[
"Allyn-Feuer",
"Ari",
""
],
[
"Sanders",
"Ted",
""
]
] |
2306.02560 | Maolin Wang | Maolin Wang, Yaoming Zhen, Yu Pan, Yao Zhao, Chenyi Zhuang, Zenglin
Xu, Ruocheng Guo, Xiangyu Zhao | Tensorized Hypergraph Neural Networks | SIAM International Conference on Data Mining (SDM24) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hypergraph neural networks (HGNN) have recently become attractive and
received significant attention due to their excellent performance in various
domains. However, most existing HGNNs rely on first-order approximations of
hypergraph connectivity patterns, which ignores important high-order
information. To address this issue, we propose a novel adjacency-tensor-based
\textbf{T}ensorized \textbf{H}ypergraph \textbf{N}eural \textbf{N}etwork
(THNN). THNN is a faithful hypergraph modeling framework through high-order
outer product feature message passing and is a natural tensor extension of the
adjacency-matrix-based graph neural networks. The proposed THNN is equivalent
to a high-order polynomial regression scheme, which enables THNN with the
ability to efficiently extract high-order information from uniform hypergraphs.
Moreover, in consideration of the exponential complexity of directly processing
high-order outer product features, we propose using a partially symmetric CP
decomposition approach to reduce model complexity to a linear degree.
Additionally, we propose two simple yet effective extensions of our method for
non-uniform hypergraphs commonly found in real-world applications. Results from
experiments on two widely used hypergraph datasets for 3-D visual object
classification show the model's promising performance.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 03:26:06 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Jan 2024 10:03:32 GMT"
}
] | 1,704,931,200,000 | [
[
"Wang",
"Maolin",
""
],
[
"Zhen",
"Yaoming",
""
],
[
"Pan",
"Yu",
""
],
[
"Zhao",
"Yao",
""
],
[
"Zhuang",
"Chenyi",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Guo",
"Ruocheng",
""
],
[
"Zhao",
"Xiangyu",
""
]
] |
2306.02588 | Ilya Safro | David Marasco, Ilya Tyagin, Justin Sybrandt, James H. Spencer, Ilya
Safro | Literature-based Discovery for Landscape Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This project demonstrates how medical corpus hypothesis generation, a
knowledge discovery field of AI, can be used to derive new research angles for
landscape and urban planners. The hypothesis generation approach herein
consists of a combination of deep learning with topic modeling, a probabilistic
approach to natural language analysis that scans aggregated research databases
for words that can be grouped together based on their subject matter
commonalities; the word groups accordingly form topics that can provide
implicit connections between two general research terms. The hypothesis
generation system AGATHA was used to identify likely conceptual relationships
between emerging infectious diseases (EIDs) and deforestation, with the
objective of providing landscape planners guidelines for productive research
directions to help them formulate research hypotheses centered on deforestation
and EIDs that will contribute to the broader health field that asserts causal
roles of landscape-level issues. This research also serves as a partial
proof-of-concept for the application of medical database hypothesis generation
to medicine-adjacent hypothesis discovery.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 04:32:46 GMT"
}
] | 1,686,009,600,000 | [
[
"Marasco",
"David",
""
],
[
"Tyagin",
"Ilya",
""
],
[
"Sybrandt",
"Justin",
""
],
[
"Spencer",
"James H.",
""
],
[
"Safro",
"Ilya",
""
]
] |
2306.02593 | Yayue Deng | Dengfeng Ke, Yayue Deng, Yukang Jia, Jinlong Xue, Qi Luo, Ya Li,
Jianqing Sun, Jiaen Liang, Binghuai Lin | Rhythm-controllable Attention with High Robustness for Long Sentence
Speech Synthesis | 5 pages, 3 figures, Published in: 2022 13th International Symposium
on Chinese Spoken Language Processing (ISCSLP) | null | 10.1109/ISCSLP57327.2022.10037822 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regressive Text-to-Speech (TTS) systems utilize an attention mechanism to
generate the alignment between text and the acoustic feature sequence. Alignment
determines synthesis robustness (e.g., the occurrence of skipping, repeating, and
collapse) and rhythm via duration control. However, current attention
algorithms used in speech synthesis cannot control rhythm using external
duration information to generate natural speech while ensuring robustness. In
this study, we propose Rhythm-controllable Attention (RC-Attention) based on
Tacotron2, which improves robustness and naturalness simultaneously. The proposed
attention adopts a trainable scalar learned from four kinds of information to
achieve rhythm control, which makes rhythm control more robust and natural,
even when synthesized sentences are far longer than those in the training corpus. We
use word error counting and an AB preference test to measure the robustness of the
proposed method and the naturalness of the synthesized speech, respectively. Results
show that RC-Attention has the lowest word error rate, nearly 0.6%, compared
with 11.8% for the baseline system. Moreover, nearly 60% of subjects prefer the
speech synthesized with RC-Attention to that synthesized with Forward Attention,
because the former has more natural rhythm.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 04:52:33 GMT"
}
] | 1,686,009,600,000 | [
[
"Ke",
"Dengfeng",
""
],
[
"Deng",
"Yayue",
""
],
[
"Jia",
"Yukang",
""
],
[
"Xue",
"Jinlong",
""
],
[
"Luo",
"Qi",
""
],
[
"Li",
"Ya",
""
],
[
"Sun",
"Jianqing",
""
],
[
"Liang",
"Jiaen",
""
],
[
"Lin",
"Binghuai",
""
]
] |
2306.02697 | Viktoriia Chekalina | Viktoriia Chekalina, Georgii Novikov, Julia Gusak, Ivan Oseledets,
Alexander Panchenko | Efficient GPT Model Pre-training using Tensor Train Matrix
Representation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large-scale transformer models have shown remarkable performance in language
modelling tasks. However, such models feature billions of parameters, leading
to difficulties in their deployment and prohibitive training costs from
scratch. To reduce the number of the parameters in the GPT-2 architecture, we
replace the matrices of fully-connected layers with the corresponding Tensor
Train Matrix~(TTM) structure. Finally, we customize forward and backward
operations through the TTM-based layer for simplicity and the stability of
further training. The resulting GPT-2-based model stores up to 40% fewer
parameters, showing the perplexity comparable to the original model. On the
downstream tasks, including language understanding and text summarization, the
model performs similarly to the original GPT-2 model. The proposed tensorized
layers could be used to efficiently pre-train other Transformer models.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 08:38:25 GMT"
}
] | 1,686,009,600,000 | [
[
"Chekalina",
"Viktoriia",
""
],
[
"Novikov",
"Georgii",
""
],
[
"Gusak",
"Julia",
""
],
[
"Oseledets",
"Ivan",
""
],
[
"Panchenko",
"Alexander",
""
]
] |
2306.02845 | Puneet Kumar | Puneet Kumar and Xiaobai Li | Interpretable Multimodal Emotion Recognition using Facial Features and
Physiological Signals | Accepted for Oral Presentation in DAI 2023
(https://rbcdsai.iitm.ac.in/DAI-2023/program.html) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to demonstrate the importance and feasibility of fusing
multimodal information for emotion recognition. It introduces a multimodal
framework for emotion understanding by fusing the information from visual
facial features and rPPG signals extracted from the input videos. An
interpretability technique based on permutation feature importance analysis has
also been implemented to compute the contributions of rPPG and visual
modalities toward classifying a given input video into a particular emotion
class. The experiments on IEMOCAP dataset demonstrate that the emotion
classification performance improves by combining the complementary information
from multiple modalities.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 12:57:07 GMT"
}
] | 1,686,009,600,000 | [
[
"Kumar",
"Puneet",
""
],
[
"Li",
"Xiaobai",
""
]
] |
2306.02910 | Riccardo Lo Bianco | Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, Willem van Jaarsveld | Action-Evolution Petri Nets: a Framework for Modeling and Solving
Dynamic Task Assignment Problems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Dynamic task assignment involves assigning arriving tasks to a limited number
of resources in order to minimize the overall cost of the assignments. To
achieve optimal task assignment, it is necessary to model the assignment
problem first. While there exist separate formalisms, specifically Markov
Decision Processes and (Colored) Petri Nets, to model, execute, and solve
different aspects of the problem, there is no integrated modeling technique. To
address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a
framework for modeling and solving dynamic task assignment problems. A-E PN
provides a unified modeling technique that can represent all elements of
dynamic task assignment problems. Moreover, A-E PN models are executable, which
means they can be used to learn close-to-optimal assignment policies through
Reinforcement Learning (RL) without additional modeling effort. To evaluate the
framework, we define a taxonomy of archetypical assignment problems. We show
for three cases that A-E PN can be used to learn close-to-optimal assignment
policies. Our results suggest that A-E PN can be used to model and solve a
broad range of dynamic task assignment problems.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 14:14:48 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 11:41:31 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Jun 2023 09:36:22 GMT"
}
] | 1,686,528,000,000 | [
[
"Bianco",
"Riccardo Lo",
""
],
[
"Dijkman",
"Remco",
""
],
[
"Nuijten",
"Wim",
""
],
[
"van Jaarsveld",
"Willem",
""
]
] |
2306.02979 | Xiaoding Lu | Xiaoding Lu, Aleksey Korshuk, Zongyi Liu, William Beauchamp | The Chai Platform's AI Safety Framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Chai empowers users to create and interact with customized chatbots, offering
unique and engaging experiences. Despite the exciting prospects, the work
recognizes the inherent challenges of a commitment to modern safety standards.
Therefore, this paper presents the integrated AI safety principles into Chai to
prioritize user safety, data protection, and ethical technology use. The paper
specifically explores the multidimensional domain of AI safety research,
demonstrating its application in Chai's conversational chatbot platform. It
presents Chai's AI safety principles, informed by well-established AI research
centres and adapted for chat AI. This work proposes the following safety
framework: Content Safeguarding; Stability and Robustness; and Operational
Transparency and Traceability. The subsequent implementation of these
principles is outlined, followed by an experimental analysis of Chai's AI
safety framework's real-world impact. We emphasise the significance of
conscientious application of AI safety principles and robust safety measures.
The successful implementation of the safe AI framework in Chai indicates the
practicality of mitigating potential risks for responsible and ethical use of
AI technologies. The ultimate vision is a transformative AI tool fostering
progress and innovation while prioritizing user safety and ethical standards.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 15:51:38 GMT"
}
] | 1,686,009,600,000 | [
[
"Lu",
"Xiaoding",
""
],
[
"Korshuk",
"Aleksey",
""
],
[
"Liu",
"Zongyi",
""
],
[
"Beauchamp",
"William",
""
]
] |
2306.03048 | Xuanxiang Huang | Xuanxiang Huang, Joao Marques-Silva | From Robustness to Explainability and Back Again | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In contrast with ad-hoc methods for eXplainable Artificial Intelligence
(XAI), formal explainability offers important guarantees of rigor. However,
formal explainability is hindered by poor scalability for some families of
classifiers, the most significant being neural networks. As a result, there are
concerns as to whether formal explainability might serve to complement other
approaches in delivering trustworthy AI. This paper addresses the limitation of
scalability of formal explainability, and proposes novel algorithms for
computing formal explanations. The novel algorithm computes explanations by
instead answering a number of robustness queries, such that the number of
queries is at most linear in the number of features. Consequently, the
proposed algorithm establishes a direct relationship between the practical
complexity of formal explainability and that of robustness. More importantly,
the paper generalizes the definition of formal explanation, thereby allowing
the use of robustness tools that are based on different distance norms, and
also by reasoning in terms of some target degree of robustness. The experiments
validate the practical efficiency of the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:21:05 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jul 2023 06:58:33 GMT"
}
] | 1,690,848,000,000 | [
[
"Huang",
"Xuanxiang",
""
],
[
"Marques-Silva",
"Joao",
""
]
] |
2306.03082 | Lichang Chen | Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou | InstructZero: Efficient Instruction Optimization for Black-Box Large
Language Models | 15 pages; 9 figures; Our code is available at
https://lichang-chen.github.io/InstructZero/ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models~(LLMs) are instruction followers, but it can be
challenging to find the best instruction for different situations, especially
for black-box LLMs on which backpropagation is forbidden. Instead of directly
optimizing the discrete instruction, we optimize a low-dimensional soft prompt
applied to an open-source LLM to generate the instruction for the black-box
LLM. On each iteration of the proposed method, which we call InstructZero, a
soft prompt is converted into an instruction using the open-source LLM, which
is then submitted to the black-box LLM for zero-shot evaluation, and the
performance is sent to Bayesian optimization to produce new soft prompts
improving the zero-shot performance. We evaluate InstructZero on different
combinations of open-source LLMs and APIs including Vicuna and ChatGPT. Our
results show that InstructZero outperforms SOTA auto-instruction methods across
a variety of downstream tasks. Our code and data are publicly available at
https://github.com/Lichang-Chen/InstructZero.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 17:55:22 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Aug 2023 17:33:54 GMT"
}
] | 1,691,539,200,000 | [
[
"Chen",
"Lichang",
""
],
[
"Chen",
"Jiuhai",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Huang",
"Heng",
""
],
[
"Zhou",
"Tianyi",
""
]
] |
2306.03236 | Mikael Henaff | Mikael Henaff, Minqi Jiang, Roberta Raileanu | A Study of Global and Episodic Bonuses for Exploration in Contextual
MDPs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Exploration in environments which differ across episodes has received
increasing attention in recent years. Current methods use some combination of
global novelty bonuses, computed using the agent's entire training experience,
and \textit{episodic novelty bonuses}, computed using only experience from the
current episode. However, the use of these two types of bonuses has been ad-hoc
and poorly understood. In this work, we shed light on the behavior of these two
types of bonuses through controlled experiments on easily interpretable tasks
as well as challenging pixel-based settings. We find that the two types of
bonuses succeed in different settings, with episodic bonuses being most
effective when there is little shared structure across episodes and global
bonuses being effective when more structure is shared. We develop a conceptual
framework which makes this notion of shared structure precise by considering
the variance of the value function across contexts, and which provides a
unifying explanation of our empirical results. We furthermore find that
combining the two bonuses can lead to more robust performance across different
degrees of shared structure, and investigate different algorithmic choices for
defining and combining global and episodic bonuses based on function
approximation. This results in an algorithm which sets a new state of the art
across 16 tasks from the MiniHack suite used in prior work, and also performs
robustly on Habitat and Montezuma's Revenge.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 20:45:30 GMT"
}
] | 1,686,096,000,000 | [
[
"Henaff",
"Mikael",
""
],
[
"Jiang",
"Minqi",
""
],
[
"Raileanu",
"Roberta",
""
]
] |
2306.03310 | Bo Liu | Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu,
Peter Stone | LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Lifelong learning offers a promising paradigm of building a generalist agent
that learns and adapts over its lifespan. Unlike traditional lifelong learning
problems in image and text domains, which primarily involve the transfer of
declarative knowledge of entities and concepts, lifelong learning in
decision-making (LLDM) also necessitates the transfer of procedural knowledge,
such as actions and behaviors. To advance research in LLDM, we introduce
LIBERO, a novel benchmark of lifelong learning for robot manipulation.
Specifically, LIBERO highlights five key research topics in LLDM: 1) how to
efficiently transfer declarative knowledge, procedural knowledge, or the
mixture of both; 2) how to design effective policy architectures and 3)
effective algorithms for LLDM; 4) the robustness of a lifelong learner with
respect to task ordering; and 5) the effect of model pretraining for LLDM. We
develop an extendible procedural generation pipeline that can in principle
generate infinitely many tasks. For benchmarking purposes, we create four task
suites (130 tasks in total) that we use to investigate the above-mentioned
research topics. To support sample-efficient learning, we provide high-quality
human-teleoperated demonstration data for all tasks. Our extensive experiments
present several insightful or even unexpected discoveries: sequential
finetuning outperforms existing lifelong learning methods in forward transfer,
no single visual encoder architecture excels at all types of knowledge
transfer, and naive supervised pretraining can hinder agents' performance in
the subsequent LLDM. Check the website at https://libero-project.github.io for
the code and the datasets.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2023 23:32:26 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Oct 2023 15:52:31 GMT"
}
] | 1,697,500,800,000 | [
[
"Liu",
"Bo",
""
],
[
"Zhu",
"Yifeng",
""
],
[
"Gao",
"Chongkai",
""
],
[
"Feng",
"Yihao",
""
],
[
"Liu",
"Qiang",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Stone",
"Peter",
""
]
] |
2306.03381 | Elliott Wen | Elliott Wen, Chitralekha Gupta, Prasanth Sasikumar, Mark Billinghurst,
James Wilmott, Emily Skow, Arindam Dey, Suranga Nanayakkara | VR.net: A Real-world Dataset for Virtual Reality Motion Sickness
Research | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Researchers have used machine learning approaches to identify motion sickness
in VR experience. These approaches demand an accurately-labeled, real-world,
and diverse dataset for high accuracy and generalizability. As a starting point
to address this need, we introduce `VR.net', a dataset offering approximately
12-hour gameplay videos from ten real-world games in 10 diverse genres. For
each video frame, a rich set of motion sickness-related labels, such as
camera/object movement, depth field, and motion flow, are accurately assigned.
Building such a dataset is challenging since manual labeling would require an
infeasible amount of time. Instead, we utilize a tool to automatically and
precisely extract ground truth data from 3D engines' rendering pipelines
without accessing VR games' source code. We illustrate the utility of VR.net
through several applications, such as risk factor detection and sickness level
prediction. We continuously expand VR.net and envision its next version
offering 10X more data than the current form. We believe that the scale,
accuracy, and diversity of VR.net can offer unparalleled opportunities for VR
motion sickness research and beyond.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 03:43:11 GMT"
}
] | 1,686,096,000,000 | [
[
"Wen",
"Elliott",
""
],
[
"Gupta",
"Chitralekha",
""
],
[
"Sasikumar",
"Prasanth",
""
],
[
"Billinghurst",
"Mark",
""
],
[
"Wilmott",
"James",
""
],
[
"Skow",
"Emily",
""
],
[
"Dey",
"Arindam",
""
],
[
"Nanayakkara",
"Suranga",
""
]
] |
2306.03387 | Shiguang Wu | Shiguang Wu, Yaqing Wang, Qinghe Jing, Daxiang Dong, Dejing Dou,
Quanming Yao | ColdNAS: Search to Modulate for User Cold-Start Recommendation | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Making personalized recommendation for cold-start users, who only have a few
interaction histories, is a challenging problem in recommendation systems.
Recent works leverage hypernetworks to directly map user interaction histories
to user-specific parameters, which are then used to modulate predictor by
feature-wise linear modulation function. These works obtain the
state-of-the-art performance. However, the physical meaning of scaling and
shifting in recommendation data is unclear. Instead of using a fixed modulation
function and deciding modulation position by expertise, we propose a modulation
framework called ColdNAS for user cold-start problem, where we look for proper
modulation structure, including function and position, via neural architecture
search. We design a search space which covers broad models and theoretically
prove that this search space can be transformed to a much smaller space,
enabling an efficient and robust one-shot search algorithm. Extensive
experimental results on benchmark datasets show that ColdNAS consistently
performs the best. We observe that different modulation functions lead to the
best performance on different datasets, which validates the necessity of
designing a searching-based method.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 04:04:12 GMT"
}
] | 1,686,096,000,000 | [
[
"Wu",
"Shiguang",
""
],
[
"Wang",
"Yaqing",
""
],
[
"Jing",
"Qinghe",
""
],
[
"Dong",
"Daxiang",
""
],
[
"Dou",
"Dejing",
""
],
[
"Yao",
"Quanming",
""
]
] |
2306.03423 | Max Reuter | Max Reuter, William Schulze | I'm Afraid I Can't Do That: Predicting Prompt Refusal in Black-Box
Generative Language Models | Submitted for review to KDD 2023 via the workshop "Foundations and
Applications in Large-scale AI Models: Pre-training, Fine-tuning, and
Prompt-based Learning" | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Since the release of OpenAI's ChatGPT, generative language models have
attracted extensive public attention. The increased usage has highlighted
generative models' broad utility, but also revealed several forms of embedded
bias. Some is induced by the pre-training corpus; but additional bias specific
to generative models arises from the use of subjective fine-tuning to avoid
generating harmful content. Fine-tuning bias may come from individual engineers
and company policies, and affects which prompts the model chooses to refuse. In
this experiment, we characterize ChatGPT's refusal behavior using a black-box
attack. We first query ChatGPT with a variety of offensive and benign prompts
(n=1,706), then manually label each response as compliance or refusal. Manual
examination of responses reveals that refusal is not cleanly binary, and lies
on a continuum; as such, we map several different kinds of responses to a
binary of compliance or refusal. The small manually-labeled dataset is used to
train a refusal classifier, which achieves an accuracy of 96%. Second, we use
this refusal classifier to bootstrap a larger (n=10,000) dataset adapted from
the Quora Insincere Questions dataset. With this machine-labeled data, we train
a prompt classifier to predict whether ChatGPT will refuse a given question,
without seeing ChatGPT's response. This prompt classifier achieves 76% accuracy
on a test set of manually labeled questions (n=985). We examine our classifiers
and the prompt n-grams that are most predictive of either compliance or
refusal. Our datasets and code are available at
https://github.com/maxwellreuter/chatgpt-refusals.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 05:50:58 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2023 05:13:34 GMT"
}
] | 1,686,873,600,000 | [
[
"Reuter",
"Max",
""
],
[
"Schulze",
"William",
""
]
] |
2306.03553 | John Chong Min Tan | Tan John Chong Min | An Approach to Solving the Abstraction and Reasoning Corpus (ARC)
Challenge | 14 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We utilise the power of Large Language Models (LLMs), in particular GPT4, to
be prompt engineered into performing an arbitrary task. Here, we give the model
some human priors via text, along with some typical procedures for solving the
ARC tasks, and ask it to generate the i) broad description of the input-output
relation, ii) detailed steps of the input-output mapping, iii) use the detailed
steps to perform manipulation on the test input and derive the test output. The
current GPT3.5/GPT4 prompt solves 2 out of 4 tested small ARC challenges (those
with small grids of 8x8 and below). With tweaks to the prompt to make it more
specific for the use case, it can solve more. We posit that when scaled to a
multi-agent system with usage of past memory and equipped with an image
interpretation tool via Visual Question Answering, we may actually be able to
solve the majority of the ARC challenge.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 10:08:12 GMT"
}
] | 1,686,096,000,000 | [
[
"Min",
"Tan John Chong",
""
]
] |
2306.03601 | Anirban Mukherjee | Anirban Mukherjee and Hannah Chang | The Creative Frontier of Generative AI: Managing the Novelty-Usefulness
Tradeoff | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In this paper, drawing inspiration from the human creativity literature, we
explore the optimal balance between novelty and usefulness in generative
Artificial Intelligence (AI) systems. We posit that overemphasizing either
aspect can lead to limitations such as hallucinations and memorization.
Hallucinations, characterized by AI responses containing random inaccuracies or
falsehoods, emerge when models prioritize novelty over usefulness.
Memorization, where AI models reproduce content from their training data,
results from an excessive focus on usefulness, potentially limiting creativity.
To address these challenges, we propose a framework that includes
domain-specific analysis, data and transfer learning, user preferences and
customization, custom evaluation metrics, and collaboration mechanisms. Our
approach aims to generate content that is both novel and useful within specific
domains, while considering the unique requirements of various contexts.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 11:44:57 GMT"
}
] | 1,686,096,000,000 | [
[
"Mukherjee",
"Anirban",
""
],
[
"Chang",
"Hannah",
""
]
] |
2306.03604 | Bin Liu | Bin Hu, Chenyang Zhao, Pu Zhang, Zihao Zhou, Yuanhang Yang, Zenglin
Xu, Bin Liu | Enabling Intelligent Interactions between an Agent and an LLM: A
Reinforcement Learning Approach | 17 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large language models (LLMs) encode a vast amount of world knowledge acquired
from massive text datasets. Recent studies have demonstrated that LLMs can
assist an embodied agent in solving complex sequential decision making tasks by
providing high-level instructions. However, interactions with LLMs can be
time-consuming. In many practical scenarios, they require a significant amount
of storage space that can only be deployed on remote cloud server nodes.
Additionally, using commercial LLMs can be costly since they may charge based
on usage frequency. In this paper, we explore how to enable intelligent
cost-effective interactions between the agent and an LLM. We find that this
problem can be naturally formulated by a Markov decision process (MDP), and
propose When2Ask, a reinforcement learning based approach that learns when it
is necessary to query LLMs for high-level instructions to accomplish a target
task. Experiments on MiniGrid and Habitat environments that entail planning
sub-goals demonstrate that When2Ask learns to solve target tasks with only a
few necessary interactions with an LLM, and significantly reduces interaction
costs in testing environments compared with baseline methods. Experiment
results also suggest that by learning a mediator model to interact with the
LLM, the agent's performance becomes more robust against partial observability
of the environment. Our code is available at
https://github.com/ZJLAB-AMMI/LLM4RL.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2023 11:49:09 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2023 07:35:59 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Jun 2023 01:04:34 GMT"
},
{
"version": "v4",
"created": "Thu, 31 Aug 2023 12:44:26 GMT"
},
{
"version": "v5",
"created": "Sun, 3 Mar 2024 04:59:28 GMT"
},
{
"version": "v6",
"created": "Tue, 5 Mar 2024 04:05:02 GMT"
}
] | 1,710,201,600,000 | [
[
"Hu",
"Bin",
""
],
[
"Zhao",
"Chenyang",
""
],
[
"Zhang",
"Pu",
""
],
[
"Zhou",
"Zihao",
""
],
[
"Yang",
"Yuanhang",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Liu",
"Bin",
""
]
] |