id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2301.13276 | Karthik Reddy Kanjula | Karthik Reddy Kanjula, Sai Meghana Kolla | Distributed Swarm Intelligence | 7 pages, 3 figures, 1 algorithm | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents the development of a distributed application that
facilitates the understanding and application of swarm intelligence in solving
optimization problems. The platform comprises a search space of customizable
random particles, allowing users to tailor the solution to their specific
needs. By leveraging the power of Ray distributed computing, the application
can support multiple users simultaneously, offering a flexible and scalable
solution. The primary objective of this project is to provide a user-friendly
platform that enhances the understanding and practical use of swarm
intelligence in problem-solving.
| [
{
"version": "v1",
"created": "Mon, 30 Jan 2023 20:36:35 GMT"
}
] | 1,675,209,600,000 | [
[
"Kanjula",
"Karthik Reddy",
""
],
[
"Kolla",
"Sai Meghana",
""
]
] |
2301.13328 | Alexis De Colnet | Alexis de Colnet and Pierre Marquis | On the Complexity of Enumerating Prime Implicants from Decision-DNNF
Circuits | 13 pages, including appendices | null | 10.24963/ijcai.2022/358 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We consider the problem EnumIP of enumerating prime implicants of Boolean
functions represented by decision decomposable negation normal form (dec-DNNF)
circuits. We study EnumIP from dec-DNNF within the framework of enumeration
complexity and prove that it is in OutputP, the class of output polynomial
enumeration problems, and more precisely in IncP, the class of polynomial
incremental time enumeration problems. We then focus on two closely related,
but seemingly harder, enumeration problems where further restrictions are put
on the prime implicants to be generated. In the first problem, one is only
interested in prime implicants representing subset-minimal abductive
explanations, a notion much investigated in AI for more than three decades. In
the second problem, the target is prime implicants representing sufficient
reasons, a recent yet important notion in the emerging field of eXplainable AI,
since they aim to explain predictions achieved by machine learning classifiers.
We provide evidence showing that enumerating specific prime implicants
corresponding to subset-minimal abductive explanations or to sufficient reasons
is not in OutputP.
| [
{
"version": "v1",
"created": "Mon, 30 Jan 2023 23:23:45 GMT"
}
] | 1,675,209,600,000 | [
[
"de Colnet",
"Alexis",
""
],
[
"Marquis",
"Pierre",
""
]
] |
2301.13556 | Shimon Komarovsky | Shimon Komarovsky | Purposeful and Operation-based Cognitive System for AGI | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | This paper proposes a new cognitive model, acting as the main component of an
AGI agent. The model is introduced in its mature state, and as an extension of
previous models, DENN, and especially AKREM, by including operational models
(frames/classes) and will. In addition, it is mainly based on the duality
principle in every known intelligent aspect, such as exhibiting both top-down
and bottom-up model learning, generalization versus specialization, and more.
Furthermore, a holistic approach to AGI design is advocated, and cognition
under constraints, or efficiency, is proposed in the form of reusability and
simplicity. Finally, reaching this mature state is described via a cognitive
evolution from infancy to adulthood, utilizing a consolidation principle. The
final product of this cognitive model is a dynamic operational memory of models
and instances.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2023 11:11:38 GMT"
}
] | 1,675,209,600,000 | [
[
"Komarovsky",
"Shimon",
""
]
] |
2301.13869 | David Nicholson | David Aaron Nicholson, Vincent Emanuele | Reverse engineering adversarial attacks with fingerprints from
adversarial examples | 8 pages, 6 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In spite of intense research efforts, deep neural networks remain vulnerable
to adversarial examples: inputs that force the network to confidently
produce incorrect outputs. Adversarial examples are typically generated by an
attack algorithm that optimizes a perturbation added to a benign input. Many
such algorithms have been developed. If it were possible to reverse engineer
attack algorithms from adversarial examples, this could deter bad actors
because of the possibility of attribution. Here we formulate reverse
engineering as a supervised learning problem where the goal is to assign an
adversarial example to a class that represents the algorithm and parameters
used. To our knowledge it has not been previously shown whether this is even
possible. We first test whether we can classify the perturbations added to
images by attacks on undefended single-label image classification models.
Taking a "fight fire with fire" approach, we leverage the sensitivity of deep
neural networks to adversarial examples, training them to classify these
perturbations. On a 17-class dataset (5 attacks, 4 bounded with 4 epsilon
values each), we achieve an accuracy of 99.4% with a ResNet50 model trained on
the perturbations. We then ask whether we can perform this task without access
to the perturbations, obtaining an estimate of them with signal processing
algorithms, an approach we call "fingerprinting". We find the JPEG algorithm
serves as a simple yet effective fingerprinter (85.05% accuracy), providing a
strong baseline for future work. We discuss how our approach can be extended to
attack agnostic, learnable fingerprints, and to open-world scenarios with
unknown attacks.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2023 18:59:37 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 16:34:52 GMT"
}
] | 1,675,296,000,000 | [
[
"Nicholson",
"David Aaron",
""
],
[
"Emanuele",
"Vincent",
""
]
] |
2302.00094 | Son Tran | Son Quoc Tran, Phong Nguyen-Thuan Do, Uyen Le, Matt Kretchmar | The Impacts of Unanswerable Questions on the Robustness of Machine
Reading Comprehension Models | Accepted at The 17th Conference of the European Chapter of the
Association for Computational Linguistics (EACL 2023) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pretrained language models have achieved super-human performances on many
Machine Reading Comprehension (MRC) benchmarks. Nevertheless, their relative
inability to defend against adversarial attacks has spurred skepticism about
their natural language understanding. In this paper, we ask whether training
with unanswerable questions in SQuAD 2.0 can help improve the robustness of MRC
models against adversarial attacks. To explore that question, we fine-tune
three state-of-the-art language models on either SQuAD 1.1 or SQuAD 2.0 and
then evaluate their robustness under adversarial attacks. Our experiments
reveal that current models fine-tuned on SQuAD 2.0 do not initially appear to
be any more robust than ones fine-tuned on SQuAD 1.1, yet they reveal a measure
of hidden robustness that can be leveraged to realize actual performance gains.
Furthermore, we find that the robustness of models fine-tuned on SQuAD 2.0
extends to additional out-of-domain datasets. Finally, we introduce a new
adversarial attack to reveal artifacts of SQuAD 2.0 that current MRC models are
learning.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2023 20:51:14 GMT"
}
] | 1,675,296,000,000 | [
[
"Tran",
"Son Quoc",
""
],
[
"Do",
"Phong Nguyen-Thuan",
""
],
[
"Le",
"Uyen",
""
],
[
"Kretchmar",
"Matt",
""
]
] |
2302.00111 | Yilun Du | Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B.
Tenenbaum, Dale Schuurmans, Pieter Abbeel | Learning Universal Policies via Text-Guided Video Generation | NeurIPS 2023, Project Website: https://universal-policy.github.io/ | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A goal of artificial intelligence is to construct an agent that can solve a
wide variety of tasks. Recent progress in text-guided image synthesis has
yielded models with an impressive ability to generate complex novel images,
exhibiting combinatorial generalization across domains. Motivated by this
success, we investigate whether such tools can be used to construct more
general-purpose agents. Specifically, we cast the sequential decision making
problem as a text-conditioned video generation problem, where, given a
text-encoded specification of a desired goal, a planner synthesizes a set of
future frames depicting its planned actions in the future, after which control
actions are extracted from the generated video. By leveraging text as the
underlying goal specification, we are able to naturally and combinatorially
generalize to novel goals. The proposed policy-as-video formulation can further
represent environments with different state and action spaces in a unified
space of images, which, for example, enables learning and generalization across
a variety of robot manipulation tasks. Finally, by leveraging pretrained
language embeddings and widely available videos from the internet, the approach
enables knowledge transfer through predicting highly realistic video plans for
real robots.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2023 21:28:13 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Feb 2023 02:16:12 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Nov 2023 05:38:13 GMT"
}
] | 1,700,524,800,000 | [
[
"Du",
"Yilun",
""
],
[
"Yang",
"Mengjiao",
""
],
[
"Dai",
"Bo",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Nachum",
"Ofir",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Schuurmans",
"Dale",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
2302.00302 | Yimin Lv | Jian Dong, Yisong Yu, Yapeng Zhang, Yimin Lv, Shuli Wang, Beihong Jin,
Yongkang Wang, Xingxing Wang and Dong Wang | A Deep Behavior Path Matching Network for Click-Through Rate Prediction | Accepted by WWW2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User behaviors on an e-commerce app not only contain different kinds of
feedback on items but also sometimes reveal cognitive clues about the user's
decision-making. To understand the psychological process behind user
decisions, we present the behavior path and propose to match the user's current
behavior path with historical behavior paths to predict user behaviors on the
app. Further, we design a deep neural network for behavior path matching and
solve three difficulties in modeling behavior paths: sparsity, noise
interference, and accurate matching of behavior paths. In particular, we
leverage contrastive learning to augment user behavior paths, provide behavior
path self-activation to alleviate the effect of noise, and adopt a two-level
matching mechanism to identify the most appropriate candidate. Our model shows
excellent performance on two real-world datasets, outperforming the
state-of-the-art CTR model. Moreover, our model has been deployed on the
Meituan food delivery platform and has accumulated 1.6% improvement in CTR and
1.8% improvement in advertising revenue.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 08:08:21 GMT"
}
] | 1,675,296,000,000 | [
[
"Dong",
"Jian",
""
],
[
"Yu",
"Yisong",
""
],
[
"Zhang",
"Yapeng",
""
],
[
"Lv",
"Yimin",
""
],
[
"Wang",
"Shuli",
""
],
[
"Jin",
"Beihong",
""
],
[
"Wang",
"Yongkang",
""
],
[
"Wang",
"Xingxing",
""
],
[
"Wang",
"Dong",
""
]
] |
2302.00389 | Muhammad Arslan Manzoor | Muhammad Arslan Manzoor, Sarah Albarri, Ziting Xian, Zaiqiao Meng,
Preslav Nakov, and Shangsong Liang | Multimodality Representation Learning: A Survey on Evolution,
Pretraining and Its Applications | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodality Representation Learning, as a technique of learning to embed
information from different modalities and their correlations, has achieved
remarkable success on a variety of applications, such as Visual Question
Answering (VQA), Natural Language for Visual Reasoning (NLVR), and Vision
Language Retrieval (VLR). Among these applications, cross-modal interaction and
complementary information from different modalities are crucial for advanced
models to perform any multimodal task, e.g., understand, recognize, retrieve,
or generate optimally. Researchers have proposed diverse methods to address
these tasks. The different variants of transformer-based architectures
performed extraordinarily on multiple modalities. This survey presents the
comprehensive literature on the evolution and enhancement of deep learning
multimodal architectures to deal with textual, visual and audio features for
diverse cross-modal and modern multimodal tasks. This study summarizes (i) the
recent task-specific deep learning methodologies, (ii) the pretraining types
and multimodal pretraining objectives, (iii) the progression from state-of-the-art
pretrained multimodal approaches to unifying architectures, and (iv) multimodal task
categories and possible future improvements that can be devised for better
multimodal learning. Moreover, we prepare a dataset section for new researchers
that covers most of the benchmarks for pretraining and finetuning. Finally,
major challenges, gaps, and potential research topics are explored. A
constantly updated paper list related to our survey is maintained at
https://github.com/marslanm/multimodality-representation-learning.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 11:48:34 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Mar 2024 18:44:59 GMT"
}
] | 1,709,510,400,000 | [
[
"Manzoor",
"Muhammad Arslan",
""
],
[
"Albarri",
"Sarah",
""
],
[
"Xian",
"Ziting",
""
],
[
"Meng",
"Zaiqiao",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Liang",
"Shangsong",
""
]
] |
2302.00419 | Zihao Pan | Zihao Pan, Kai Peng, Shuai Ling, Haipeng Zhang | For the Underrepresented in Gender Bias Research: Chinese Name Gender
Prediction with Heterogeneous Graph Attention Network | 8 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Achieving gender equality is an important pillar for humankind's sustainable
future. Pioneering data-driven gender bias research is based on large-scale
public records such as scientific papers, patents, and company registrations,
covering female researchers, inventors and entrepreneurs, and so on. Since
gender information is often missing in relevant datasets, studies rely on tools
to infer genders from names. However, available open-sourced Chinese
gender-guessing tools are not yet suitable for scientific purposes, which may
be partially responsible for Chinese females being underrepresented in
mainstream gender bias research, affecting its universality. Specifically,
these tools focus on character-level information while overlooking the fact
that the combinations of Chinese characters in multi-character names, as well
as the components and pronunciations of characters, convey important messages.
As a first effort, we design a Chinese Heterogeneous Graph Attention (CHGAT)
model to capture the heterogeneity in component relationships and incorporate
the pronunciations of characters. Our model largely surpasses current tools and
also outperforms the state-of-the-art algorithm. Last but not least, the most
popular Chinese name-gender dataset is single-character based with far less
female coverage from an unreliable source, naturally hindering relevant
studies. We open-source a more balanced multi-character dataset from an
official source together with our code, hoping to help future research
promoting gender equality.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 13:08:50 GMT"
}
] | 1,675,296,000,000 | [
[
"Pan",
"Zihao",
""
],
[
"Peng",
"Kai",
""
],
[
"Ling",
"Shuai",
""
],
[
"Zhang",
"Haipeng",
""
]
] |
2302.00484 | Abdo Abouelrous | Abdo Abouelrous, Laurens Bliek, Yingqian Zhang | Digital Twin Applications in Urban Logistics: An Overview | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Urban traffic attributed to commercial and industrial transportation is
observed to largely affect living standards in cities due to external effects
pertaining to pollution and congestion. In order to counter this, smart cities
deploy technological tools to achieve sustainability. Such tools include
Digital Twins (DT)s which are virtual replicas of real-life physical systems.
Research suggests that DTs can be very beneficial in how they control a
physical system by constantly optimizing its performance. The concept has been
extensively studied in other technology-driven industries like manufacturing.
However, little work has been done with regard to their application in urban
logistics. In this paper, we seek to provide a framework by which DTs could be
easily adapted to urban logistics networks. To do this, we provide a
characterization of key factors in urban logistics for dynamic decision-making.
We also survey previous research on DT applications in urban logistics as we
found that a holistic overview is lacking. Using this knowledge in combination
with the characterization, we produce a conceptual model that describes the
ontology, learning capabilities and optimization prowess of an urban logistics
digital twin through its quantitative models. We finish off with a discussion
on potential research benefits and limitations based on previous research and
our practical experience.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 14:48:01 GMT"
}
] | 1,675,296,000,000 | [
[
"Abouelrous",
"Abdo",
""
],
[
"Bliek",
"Laurens",
""
],
[
"Zhang",
"Yingqian",
""
]
] |
2302.00561 | Amy Smith Miss | Amy Smith, Hope Schroeder, Ziv Epstein, Michael Cook, Simon Colton,
Andrew Lippman | Trash to Treasure: Using text-to-image models to inform the design of
physical artefacts | 6 pages, 7 figures, In proceedings of the 37th AAAI Conference on
Artificial Intelligence | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Text-to-image generative models have recently exploded in popularity and
accessibility. Yet so far, use of these models in creative tasks that bridge
the 2D digital world and the creation of physical artefacts has been
understudied. We conduct a pilot study to investigate if and how text-to-image
models can be used to assist in upstream tasks within the creative process,
such as ideation and visualization, prior to a sculpture-making activity.
Thirty participants selected sculpture-making materials and generated three
images using the Stable Diffusion text-to-image generator, each with text
prompts of their choice, with the aim of informing and then creating a physical
sculpture. The majority of participants (23/30) reported that the generated
images informed their sculptures, and 28/30 reported interest in using
text-to-image models to help them in a creative task in the future. We identify
several prompt engineering strategies and find that a participant's prompting
strategy relates to their stage in the creative process. We discuss how our
findings can inform support for users at different stages of the design process
and for using text-to-image models for physical artefact design.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 16:26:34 GMT"
}
] | 1,675,296,000,000 | [
[
"Smith",
"Amy",
""
],
[
"Schroeder",
"Hope",
""
],
[
"Epstein",
"Ziv",
""
],
[
"Cook",
"Michael",
""
],
[
"Colton",
"Simon",
""
],
[
"Lippman",
"Andrew",
""
]
] |
2302.00612 | Seunghyun Lee | Seunghyun Lee, Da Young Lee, Sujeong Im, Nan Hee Kim, Sung-Min Park | Clinical Decision Transformer: Intended Treatment Recommendation through
Goal Prompting | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With recent achievements in tasks requiring context awareness, foundation
models have been adopted to treat large-scale data from electronic health
record (EHR) systems. However, previous clinical recommender systems based on
foundation models have a limited purpose of imitating clinicians' behavior and
do not directly consider a problem of missing values. In this paper, we propose
Clinical Decision Transformer (CDT), a recommender system that generates a
sequence of medications to reach a desired range of clinical states given as
goal prompts. For this, we conducted goal-conditioned sequencing, which
generated a subsequence of treatment history with prepended future goal state,
and trained the CDT to model sequential medications required to reach that goal
state. For contextual embedding over intra-admission and inter-admissions, we
adopted a GPT-based architecture with an admission-wise attention mask and
column embedding. In an experiment, we extracted a diabetes dataset from an EHR
system, which contained treatment histories of 4788 patients. We observed that
the CDT achieved the intended treatment effect according to goal prompt ranges
(e.g., NormalA1c, LowerA1c, and HigherA1c), contrary to the case with behavior
cloning. To the best of our knowledge, this is the first study to explore
clinical recommendations from the perspective of goal prompting. See
https://clinical-decision-transformer.github.io for code and additional
information.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 17:26:01 GMT"
}
] | 1,675,296,000,000 | [
[
"Lee",
"Seunghyun",
""
],
[
"Lee",
"Da Young",
""
],
[
"Im",
"Sujeong",
""
],
[
"Kim",
"Nan Hee",
""
],
[
"Park",
"Sung-Min",
""
]
] |
2302.00805 | Evan Hubinger | Evan Hubinger, Adam Jermyn, Johannes Treutlein, Rubi Hudson, Kate
Woolverton | Conditioning Predictive Models: Risks and Strategies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our intention is to provide a definitive reference on what it would take to
safely make use of generative/predictive models in the absence of a solution to
the Eliciting Latent Knowledge problem. Furthermore, we believe that large
language models can be understood as such predictive models of the world, and
that such a conceptualization raises significant opportunities for their safe
yet powerful use via carefully conditioning them to predict desirable outputs.
Unfortunately, such approaches also raise a variety of potentially fatal safety
problems, particularly surrounding situations where predictive models predict
the output of other AI systems, potentially unbeknownst to us. There are,
however, numerous potential solutions to such problems, primarily via carefully
conditioning models to predict the things we want (e.g. humans) rather than the
things we don't (e.g. malign AIs). Furthermore, due to the simplicity of the
prediction objective, we believe that predictive models present the easiest
inner alignment problem that we are aware of. As a result, we think that
conditioning approaches for predictive models represent the safest known way of
eliciting human-level and slightly superhuman capabilities from large language
models and other similar future models.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 00:06:36 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 10:18:47 GMT"
}
] | 1,675,728,000,000 | [
[
"Hubinger",
"Evan",
""
],
[
"Jermyn",
"Adam",
""
],
[
"Treutlein",
"Johannes",
""
],
[
"Hudson",
"Rubi",
""
],
[
"Woolverton",
"Kate",
""
]
] |
2302.00813 | Malek Mechergui | Malek Mechergui and Sarath Sreedharan | Goal Alignment: A Human-Aware Account of Value Alignment Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value alignment problems arise in scenarios where the specified objectives of
an AI agent don't match the true underlying objective of its users. The problem
has been widely argued to be one of the central safety problems in AI.
Unfortunately, most existing works in value alignment tend to focus on issues
that are primarily related to the fact that reward functions are an unintuitive
mechanism to specify objectives. However, the complexity of the objective
specification mechanism is just one of many reasons why the user may have
misspecified their objective. A foundational cause for misalignment that is
being overlooked by these works is the inherent asymmetry between human
expectations about the agent's behavior and the behavior generated by the agent for the
specified objective. To address this lacuna, we propose a novel formulation for
the value alignment problem, named goal alignment that focuses on a few central
challenges related to value alignment. In doing so, we bridge the currently
disparate research areas of value alignment and human-aware planning.
Additionally, we propose a first-of-its-kind interactive algorithm that is
capable of using information generated under incorrect beliefs about the agent,
to determine the true underlying goal of the user.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 01:18:57 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 19:59:25 GMT"
}
] | 1,675,987,200,000 | [
[
"Mechergui",
"Malek",
""
],
[
"Sreedharan",
"Sarath",
""
]
] |
2302.00893 | Yuwei Xia | Yuwei Xia, Mengqi Zhang, Qiang Liu, Shu Wu, Xiao-Yu Zhang | MetaTKG: Learning Evolutionary Meta-Knowledge for Temporal Knowledge
Graph Reasoning | EMNLP 2022 Full Paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning over Temporal Knowledge Graphs (TKGs) aims to predict future facts
based on given history. One of the key challenges for prediction is to learn
the evolution of facts. Most existing works focus on exploring evolutionary
information in history to obtain effective temporal embeddings for entities and
relations, but they ignore the variation in evolution patterns of facts, which
makes them struggle to adapt to future data with different evolution patterns.
Moreover, new entities continue to emerge along with the evolution of facts
over time. Since existing models highly rely on historical information to learn
embeddings for entities, they perform poorly on such entities with little
historical information. To tackle these issues, we propose a novel Temporal
Meta-learning framework for TKG reasoning, MetaTKG for brevity. Specifically,
our method regards TKG prediction as many temporal meta-tasks, and utilizes the
designed Temporal Meta-learner to learn evolutionary meta-knowledge from these
meta-tasks. The proposed method aims to guide the backbones to learn to adapt
quickly to future data and deal with entities with little historical
information by the learned meta-knowledge. Specifically, in the temporal meta-learner,
we design a Gating Integration module to adaptively establish temporal
correlations between meta-tasks. Extensive experiments on four widely-used
datasets and three backbones demonstrate that our method can greatly improve
the performance.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 05:55:41 GMT"
}
] | 1,675,382,400,000 | [
[
"Xia",
"Yuwei",
""
],
[
"Zhang",
"Mengqi",
""
],
[
"Liu",
"Qiang",
""
],
[
"Wu",
"Shu",
""
],
[
"Zhang",
"Xiao-Yu",
""
]
] |
2302.00935 | Haichao Zhang | Haichao Zhang, Wei Xu, Haonan Yu | Policy Expansion for Bridging Offline-to-Online Reinforcement Learning | ICLR 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Pre-training with offline data and online fine-tuning using reinforcement
learning is a promising strategy for learning control policies by leveraging
the best of both worlds in terms of sample efficiency and performance. One
natural approach is to initialize the policy for online learning with the one
trained offline. In this work, we introduce a policy expansion scheme for this
task. After learning the offline policy, we use it as one candidate policy in a
policy set. We then expand the policy set with another policy which will be
responsible for further learning. The two policies will be composed in an
adaptive manner for interacting with the environment. With this approach, the
policy previously learned offline is fully retained during online learning,
thus mitigating potential issues such as destroying the useful behaviors of
the offline policy in the initial stage of online learning, while allowing the
offline policy to participate in exploration naturally in an adaptive manner.
Moreover, new useful behaviors can potentially be captured by the newly added
policy through learning. Experiments are conducted on a number of tasks and the
results demonstrate the effectiveness of the proposed approach.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 08:25:12 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 01:01:00 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Apr 2023 20:34:57 GMT"
}
] | 1,681,776,000,000 | [
[
"Zhang",
"Haichao",
""
],
[
"Xu",
"Wei",
""
],
[
"Yu",
"Haonan",
""
]
] |
2302.00965 | Minghuan Liu | Minghuan Liu, Tairan He, Weinan Zhang, Shuicheng Yan, Zhongwen Xu | Visual Imitation Learning with Patch Rewards | Accepted by ICLR 2023. 18 pages, 14 figures, 2 tables. Codes are
available at https://github.com/sail-sg/PatchAIL | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Visual imitation learning enables reinforcement learning agents to learn to
behave from expert visual demonstrations such as videos or image sequences,
without explicit, well-defined rewards. Previous research either adopted
supervised learning techniques or induced simple and coarse scalar rewards from
pixels, neglecting the dense information contained in the image demonstrations.
In this work, we propose to measure the expertise of various local regions of
image samples, called \textit{patches}, and recover multi-dimensional
\textit{patch rewards} accordingly. Patch reward is a more precise rewarding
characterization that serves as a fine-grained expertise measurement and visual
explainability tool. Specifically, we present Adversarial Imitation Learning
with Patch Rewards (PatchAIL), which employs a patch-based discriminator to
measure the expertise of different local parts from given images and provide
patch rewards. The patch-based knowledge is also used to regularize the
aggregated reward and stabilize the training. We evaluate our method on
DeepMind Control Suite and Atari tasks. The experiment results have
demonstrated that PatchAIL outperforms baseline methods and provides valuable
interpretations for visual demonstrations.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 09:13:10 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 16:57:11 GMT"
}
] | 1,676,332,800,000 | [
[
"Liu",
"Minghuan",
""
],
[
"He",
"Tairan",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Xu",
"Zhongwen",
""
]
] |
2302.01061 | Indradumna Banerjee | Indradumna Banerjee, Dinesh Ghanta, Girish Nautiyal, Pradeep Sanchana,
Prateek Katageri, and Atin Modi | MLOps with enhanced performance control and observability | SECOND INTERNATIONAL CONFERENCE ON AI-ML SYSTEMS | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The explosion of data and its ever-increasing complexity in the last few
years has made MLOps systems more prone to failure, and new tools need to be
embedded in such systems to avoid failure. In this demo, we will introduce
crucial tools in the observability module of an MLOps system that target
difficult issues like data drift and model version control for optimum model
selection. We believe integrating these features in our MLOps pipeline would go
a long way in building a robust system immune to early-stage ML system
failures.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 12:47:07 GMT"
}
] | 1,675,382,400,000 | [
[
"Banerjee",
"Indradumna",
""
],
[
"Ghanta",
"Dinesh",
""
],
[
"Nautiyal",
"Girish",
""
],
[
"Sanchana",
"Pradeep",
""
],
[
"Katageri",
"Prateek",
""
],
[
"Modi",
"Atin",
""
]
] |
2302.01096 | Luis Olsina PhD | Luis Olsina, Mar\'ia Fernanda Papa, Pablo Becker | NFRsTDO v1.2's Terms, Properties, and Relationships -- A Top-Domain
Non-Functional Requirements Ontology | 9 pages and 2 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This preprint specifies and defines all the Terms, Properties, and
Relationships of NFRsTDO (Non-Functional Requirements Top-Domain Ontology).
NFRsTDO v1.2, whose UML conceptualization is shown in Figure 1, is a slightly
updated version of its predecessor, namely NFRsTDO v1.1. NFRsTDO is an ontology
mainly devoted to quality (non-functional) requirements and quality/cost views,
which is placed at the top-domain level in the context of a multilayer
ontological architecture called FCD-OntoArch (Foundational, Core, Domain, and
instance Ontological Architecture for sciences). Figure 2 depicts its five
tiers, which entail Foundational, Core, Top-Domain, Low-Domain, and Instance.
Each level is populated with ontological components or, in other words,
ontologies. Ontologies at the same level can be related to each other, except
at the foundational level, where only ThingFO (Thing Foundational Ontology) is
found. In addition, ontologies' terms and relationships at lower levels can be
semantically enriched by ontologies' terms and relationships from the higher
levels. NFRsTDO's terms and relationships are mainly extended/reused from
ThingFO, SituationCO (Situation Core Ontology), ProcessCO (Process Core
Ontology), and FRsTDO (Functional Requirements Top-Domain Ontology).
Stereotypes are the used mechanism for enriching NFRsTDO terms. Note that
annotations of updates from the previous version (NFRsTDO v1.1) to the current
one (v1.2) can be found in Appendix A.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 13:33:33 GMT"
}
] | 1,675,382,400,000 | [
[
"Olsina",
"Luis",
""
],
[
"Papa",
"María Fernanda",
""
],
[
"Becker",
"Pablo",
""
]
] |
2302.01150 | Simon Gottschalk | Simon Gottschalk, Elena Demidova | Tab2KG: Semantic Table Interpretation with Lightweight Semantic Profiles | null | Semantic Web, vol. 13, no. 3, pp. 571-597, 2022 | 10.3233/SW-222993 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabular data plays an essential role in many data analytics and machine
learning tasks. Typically, tabular data does not possess any machine-readable
semantics. In this context, semantic table interpretation is crucial for making
data analytics workflows more robust and explainable. This article proposes
Tab2KG - a novel method that targets the interpretation of tables with
previously unseen data and automatically infers their semantics to transform
them into semantic data graphs. We introduce original lightweight semantic
profiles that enrich a domain ontology's concepts and relations and represent
domain and table characteristics. We propose a one-shot learning approach that
relies on these profiles to map a tabular dataset containing previously unseen
instances to a domain ontology. In contrast to the existing semantic table
interpretation approaches, Tab2KG relies on the semantic profiles only and does
not require any instance lookup. This property makes Tab2KG particularly
suitable in the data analytics context, in which data tables typically contain
new instances. Our experimental evaluation on several real-world datasets from
different application domains demonstrates that Tab2KG outperforms
state-of-the-art semantic table interpretation baselines.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 15:12:30 GMT"
}
] | 1,675,382,400,000 | [
[
"Gottschalk",
"Simon",
""
],
[
"Demidova",
"Elena",
""
]
] |
2302.01443 | Weihua Li | Mengyan Wang, Weihua Li, Jingli Shi, Shiqing Wu and Quan Bai | DOR: A Novel Dual-Observation-Based Approach for News Recommendation
Systems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Online social media platforms offer access to a vast amount of information,
but sifting through the abundance of news can be overwhelming and tiring for
readers. Personalised recommendation algorithms can help users find information
that interests them. However, most existing models rely solely on observations
of user behaviour, such as viewing history, ignoring the connections between
the news and a user's prior knowledge. This can result in a lack of diverse
recommendations for individuals. In this paper, we propose a novel method to
address the complex problem of news recommendation. Our approach is based on
the idea of dual observation, which involves using a deep neural network with
observation mechanisms to identify the main focus of a news article as well as
the focus of the user on the article. This is achieved by taking into account
the user's belief network, which reflects their personal interests and biases.
By considering both the content of the news and the user's perspective, our
approach is able to provide more personalised and accurate recommendations. We
evaluate the performance of our model on real-world datasets and show that our
proposed method outperforms several popular baselines.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 22:16:53 GMT"
}
] | 1,675,641,600,000 | [
[
"Wang",
"Mengyan",
""
],
[
"Li",
"Weihua",
""
],
[
"Shi",
"Jingli",
""
],
[
"Wu",
"Shiqing",
""
],
[
"Bai",
"Quan",
""
]
] |
2302.01542 | Abubakar Siddique | Abubakar Siddique, Will N. Browne, and Gina M. Grimshaw | Lateralization in Agents' Decision Making: Evidence of Benefits/Costs
from Artificial Intelligence | 13 pages, 14 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lateralization is ubiquitous in vertebrate brains and, beyond its role
in locomotion, is considered an important factor in biological intelligence.
Lateralization has been associated with both poor and good performance. It has
been hypothesized that lateralization has benefits that may counterbalance its
costs. Given that lateralization is ubiquitous, it likely has advantages that
can benefit artificial intelligence. In turn, lateralized artificial
intelligent systems can be used as tools to advance the understanding of
lateralization in biological intelligence. Recently lateralization has been
incorporated into artificially intelligent systems to solve complex problems in
computer vision and navigation domains. Here we describe and test two novel
lateralized artificial intelligent systems that simultaneously represent and
address given problems at constituent and holistic levels. The experimental
results demonstrate that the lateralized systems outperformed state-of-the-art
non-lateralized systems in resolving complex problems. The advantages arise
from the abilities (i) to represent an input signal at both the constituent
level and holistic level simultaneously, such that the most appropriate
viewpoint controls the system; (ii) to avoid extraneous computations by
generating excite and inhibit signals. The computational costs associated with
the lateralized AI systems are either less than the conventional AI systems or
countered by providing better solutions.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 04:34:44 GMT"
}
] | 1,675,641,600,000 | [
[
"Siddique",
"Abubakar",
""
],
[
"Browne",
"Will N.",
""
],
[
"Grimshaw",
"Gina M.",
""
]
] |
2302.01560 | Zihao Wang | Zihao Wang, Shaofei Cai, Guanzhou Chen, Anji Liu, Xiaojian Ma, Yitao
Liang | Describe, Explain, Plan and Select: Interactive Planning with Large
Language Models Enables Open-World Multi-Task Agents | NeurIPS 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We investigate the challenge of task planning for multi-task embodied agents
in open-world environments. Two main difficulties are identified: 1) executing
plans in an open-world environment (e.g., Minecraft) necessitates accurate and
multi-step reasoning due to the long-term nature of tasks, and 2) as vanilla
planners do not consider how easily the current agent can achieve a given
sub-task when ordering parallel sub-goals within a complicated plan, the
resulting plan could be inefficient or even infeasible. To this end, we propose
"$\underline{D}$escribe, $\underline{E}$xplain, $\underline{P}$lan and
$\underline{S}$elect" ($\textbf{DEPS}$), an interactive planning approach based
on Large Language Models (LLMs). DEPS facilitates better error correction on
the initial LLM-generated $\textit{plan}$ by integrating a $\textit{description}$ of
the plan execution process and providing self-$\textit{explanation}$ of
feedback when encountering failures during the extended planning phases.
Furthermore, it includes a goal $\textit{selector}$, which is a trainable
module that ranks parallel candidate sub-goals based on the estimated steps of
completion, consequently refining the initial plan. Our experiments mark the
milestone of the first zero-shot multi-task agent that can robustly accomplish
70+ Minecraft tasks and nearly double the overall performance. Further testing
reveals our method's general effectiveness in popularly adopted non-open-ended
domains as well (i.e., ALFWorld and tabletop manipulation). The ablation and
exploratory studies detail how our design beats the counterparts and provide a
promising update on the $\texttt{ObtainDiamond}$ grand challenge with our
approach. The code is released at https://github.com/CraftJarvis/MC-Planner.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 06:06:27 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Oct 2023 17:03:08 GMT"
}
] | 1,698,710,400,000 | [
[
"Wang",
"Zihao",
""
],
[
"Cai",
"Shaofei",
""
],
[
"Chen",
"Guanzhou",
""
],
[
"Liu",
"Anji",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Liang",
"Yitao",
""
]
] |
2302.01561 | Michael Beukman | Michael Beukman, Manuel Fokam, Marcel Kruger, Guy Axelrod, Muhammad
Nasir, Branden Ingram, Benjamin Rosman, Steven James | Hierarchically Composing Level Generators for the Creation of Complex
Structures | Code is available at https://github.com/Michael-Beukman/MCHAMR. This
work has been accepted to IEEE Transactions on Games, with copyright
transferred to the IEEE | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural content generation (PCG) is a growing field, with numerous
applications in the video game industry and great potential to help create
better games at a fraction of the cost of manual creation. However, much of the
work in PCG is focused on generating relatively straightforward levels in
simple games, as it is challenging to design an optimisable objective function
for complex settings. This limits the applicability of PCG to more complex and
modern titles, hindering its adoption in industry. Our work aims to address
this limitation by introducing a compositional level generation method that
recursively composes simple low-level generators to construct large and complex
creations. This approach allows for easily-optimisable objectives and the
ability to design a complex structure in an interpretable way by referencing
lower-level components. We empirically demonstrate that our method outperforms
a non-compositional baseline by more accurately satisfying a designer's
functional requirements in several tasks. Finally, we provide a qualitative
showcase (in Minecraft) illustrating the large and complex, but still coherent,
structures that were generated using simple base generators.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 06:08:28 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jul 2023 11:55:34 GMT"
}
] | 1,689,811,200,000 | [
[
"Beukman",
"Michael",
""
],
[
"Fokam",
"Manuel",
""
],
[
"Kruger",
"Marcel",
""
],
[
"Axelrod",
"Guy",
""
],
[
"Nasir",
"Muhammad",
""
],
[
"Ingram",
"Branden",
""
],
[
"Rosman",
"Benjamin",
""
],
[
"James",
"Steven",
""
]
] |
2302.01578 | Taoan Huang | Taoan Huang, Aaron Ferber, Yuandong Tian, Bistra Dilkina, Benoit
Steiner | Searching Large Neighborhoods for Integer Linear Programs with
Contrastive Learning | null | null | null | PMLR 202:13869-13890 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Integer Linear Programs (ILPs) are powerful tools for modeling and solving a
large number of combinatorial optimization problems. Recently, it has been
shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find
high quality solutions to ILPs faster than Branch and Bound. However, how to
find the right heuristics to maximize the performance of LNS remains an open
problem. In this paper, we propose a novel approach, CL-LNS, that delivers
state-of-the-art anytime performance on several ILP benchmarks measured by
metrics including the primal gap, the primal integral, survival rates and the
best performing rate. Specifically, CL-LNS collects positive and negative
solution samples from an expert heuristic that is slow to compute and learns a
new one with a contrastive loss. We use graph attention networks and a richer
set of features to further improve its performance.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 07:15:37 GMT"
}
] | 1,705,449,600,000 | [
[
"Huang",
"Taoan",
""
],
[
"Ferber",
"Aaron",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Dilkina",
"Bistra",
""
],
[
"Steiner",
"Benoit",
""
]
] |
2302.01605 | Chao Yu | Chao Yu, Jiaxuan Gao, Weilin Liu, Botian Xu, Hao Tang, Jiaqi Yang, Yu
Wang, Yi Wu | Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased | The first two authors share equal contributions. This paper is
accepted by ICLR 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | There is a recent trend of applying multi-agent reinforcement learning (MARL)
to train an agent that can cooperate with humans in a zero-shot fashion without
using any human data. The typical workflow is to first repeatedly run self-play
(SP) to build a policy pool and then train the final adaptive policy against
this pool. A crucial limitation of this framework is that every policy in the
pool is optimized w.r.t. the environment reward function, which implicitly
assumes that the testing partners of the adaptive policy will be precisely
optimizing the same reward function as well. However, human objectives are
often substantially biased according to their own preferences, which can differ
greatly from the environment reward. We propose a more general framework,
Hidden-Utility Self-Play (HSP), which explicitly models human biases as hidden
reward functions in the self-play objective. By approximating the reward space
as linear functions, HSP adopts an effective technique to generate an augmented
policy pool with biased policies. We evaluate HSP on the Overcooked benchmark.
Empirical results show that our HSP method produces higher rewards than
baselines when cooperating with learned human models, manually scripted
policies, and real humans. The HSP policy is also rated as the most assistive
policy based on human feedback.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 09:06:42 GMT"
}
] | 1,675,641,600,000 | [
[
"Yu",
"Chao",
""
],
[
"Gao",
"Jiaxuan",
""
],
[
"Liu",
"Weilin",
""
],
[
"Xu",
"Botian",
""
],
[
"Tang",
"Hao",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Wang",
"Yu",
""
],
[
"Wu",
"Yi",
""
]
] |
2302.01704 | Ismail Nejjar | Ismail Nejjar, Fabian Geissmann, Mengjie Zhao, Cees Taal, Olga Fink | Domain Adaptation via Alignment of Operation Profile for Remaining
Useful Lifetime Prediction | 18 pages,11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective Prognostics and Health Management (PHM) relies on accurate
prediction of the Remaining Useful Life (RUL). Data-driven RUL prediction
techniques rely heavily on the representativeness of the available
time-to-failure trajectories. Therefore, these methods may not perform well
when applied to data from new units of a fleet that follow different operating
conditions than those they were trained on. This is also known as domain
shift. Domain adaptation (DA) methods aim to address the domain shift problem
by extracting domain invariant features. However, DA methods do not distinguish
between the different phases of operation, such as steady states or transient
phases. This can result in misalignment due to under- or over-representation of
different operation phases. This paper proposes two novel DA approaches for RUL
prediction based on an adversarial domain adaptation framework that considers
the different phases of the operation profiles separately. The proposed
methodologies align the marginal distributions of each phase of the operation
profile in the source domain with its counterpart in the target domain. The
effectiveness of the proposed methods is evaluated using the New Commercial
Modular Aero-Propulsion System (N-CMAPSS) dataset, where sub-fleets of turbofan
engines operating in one of the three different flight classes (short, medium,
and long) are treated as separate domains. The experimental results show that
the proposed methods improve the accuracy of RUL predictions compared to
current state-of-the-art DA methods.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 13:02:27 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Oct 2023 13:37:36 GMT"
}
] | 1,697,414,400,000 | [
[
"Nejjar",
"Ismail",
""
],
[
"Geissmann",
"Fabian",
""
],
[
"Zhao",
"Mengjie",
""
],
[
"Taal",
"Cees",
""
],
[
"Fink",
"Olga",
""
]
] |
2302.01713 | Jan Bode | Jan Bode, Niklas K\"uhl, Dominik Kreuzberger, Sebastian Hirschl,
Carsten Holtmann | Towards Avoiding the Data Mess: Industry Insights from Data Mesh
Implementations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the increasing importance of data and artificial intelligence,
organizations strive to become more data-driven. However, current data
architectures are not necessarily designed to keep up with the scale and scope
of data and analytics use cases. In fact, existing architectures often fail to
deliver the promised value associated with them. Data mesh is a
socio-technical, decentralized, distributed concept for enterprise data
management. As the concept of data mesh is still novel, it lacks empirical
insights from the field. Specifically, an understanding of the motivational
factors for introducing data mesh, the associated challenges, implementation
strategies, its business impact, and potential archetypes is missing. To
address this gap, we conduct 15 semi-structured interviews with industry
experts. Our results show, among other insights, that organizations have
difficulties with the transition toward federated governance associated with
the data mesh concept, the shift of responsibility for the development,
provision, and maintenance of data products, and the comprehension of the
overall concept. In our work, we derive multiple implementation strategies and
suggest organizations introduce a cross-domain steering unit, observe the data
product usage, create quick wins in the early phases, and favor small dedicated
teams that prioritize data products. While we acknowledge that organizations
need to apply implementation strategies according to their individual needs, we
also deduct two archetypes that provide suggestions in more detail. Our
findings synthesize insights from industry experts and provide researchers and
professionals with preliminary guidelines for the successful adoption of data
mesh.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 13:09:57 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 19:43:27 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Nov 2023 18:50:34 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Jun 2024 16:13:09 GMT"
}
] | 1,717,718,400,000 | [
[
"Bode",
"Jan",
""
],
[
"Kühl",
"Niklas",
""
],
[
"Kreuzberger",
"Dominik",
""
],
[
"Hirschl",
"Sebastian",
""
],
[
"Holtmann",
"Carsten",
""
]
] |
2302.01786 | Mahmoud Kasem | Mahmoud SalahEldin Kasem, Mohamed Hamada, Islam Taj-Eddin | Customer Profiling, Segmentation, and Sales Prediction using AI in
Direct Marketing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In an increasingly customer-centric business environment, effective
communication between marketing and senior management is crucial for success.
With the rise of globalization and increased competition, utilizing new data
mining techniques to identify potential customers is essential for direct
marketing efforts. This paper proposes a data mining preprocessing method for
developing a customer profiling system to improve sales performance, including
customer equity estimation and customer action prediction. The RFM-analysis
methodology is used to evaluate client capital and a boosting tree for
prediction. The study highlights the importance of customer segmentation
methods and algorithms to increase the accuracy of the prediction. The main
result of this study is the creation of a customer profile and forecast for the
sale of goods.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 14:45:09 GMT"
}
] | 1,675,641,600,000 | [
[
"Kasem",
"Mahmoud SalahEldin",
""
],
[
"Hamada",
"Mohamed",
""
],
[
"Taj-Eddin",
"Islam",
""
]
] |
2302.02038 | Kausik Lakkaraju | Kausik Lakkaraju, Biplav Srivastava, Marco Valtorta | Rating Sentiment Analysis Systems for Bias through a Causal Lens | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sentiment Analysis Systems (SASs) are data-driven Artificial Intelligence
(AI) systems that, given a piece of text, assign one or more numbers conveying
the polarity and emotional intensity expressed in the input. Like other
automatic machine learning systems, they have also been known to exhibit model
uncertainty where a (small) change in the input leads to drastic swings in the
output. This can be especially problematic when inputs are related to protected
features like gender or race since such behavior can be perceived as a lack of
fairness, i.e., bias. We introduce a novel method to assess and rate SASs where
inputs are perturbed in a controlled causal setting to test if the output
sentiment is sensitive to protected variables even when other components of the
textual input, e.g., chosen emotion words, are fixed. We then use the result to
assign labels (ratings) at fine-grained and overall levels to convey the
robustness of the SAS to input changes. The ratings serve as a principled basis
to compare SASs and choose among them based on behavior. It benefits all users,
especially developers who reuse off-the-shelf SASs to build larger AI systems
but do not have access to their code or training data to compare.
| [
{
"version": "v1",
"created": "Sat, 4 Feb 2023 00:22:43 GMT"
}
] | 1,675,728,000,000 | [
[
"Lakkaraju",
"Kausik",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Valtorta",
"Marco",
""
]
] |
2302.02614 | Kuan Xu | Kuan Xu, Kuo Yang, Hanyang Dong, Xinyan Wang, Jian Yu, Xuezhong Zhou | A Pre-training Framework for Knowledge Graph Completion | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph completion (KGC) is one of the effective methods to identify
new facts in a knowledge graph. Except for a few methods based on graph
networks, most KGC methods tend to be trained on independent triples and
struggle to take full account of the information about global network
connections contained in the knowledge network. To address these issues, in this
study, we propose a simple and effective Network-based Pre-training framework
for knowledge graph completion (termed NetPeace), which takes into account the
information of global network connection and local triple relationships in
knowledge graph. Experiments show that in the NetPeace framework, multiple KGC
models yield consistent and significant improvements on benchmarks (e.g.,
36.45% Hits@1 and 27.40% MRR improvements for TuckER on FB15k-237), especially
dense knowledge graphs. On the challenging low-resource task, NetPeace, which
benefits from the global features of the KG, achieves higher performance (104.03%
MRR and 143.89% Hit@1 improvements at most) than original models.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 08:23:01 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 07:04:03 GMT"
}
] | 1,679,270,400,000 | [
[
"Xu",
"Kuan",
""
],
[
"Yang",
"Kuo",
""
],
[
"Dong",
"Hanyang",
""
],
[
"Wang",
"Xinyan",
""
],
[
"Yu",
"Jian",
""
],
[
"Zhou",
"Xuezhong",
""
]
] |
2302.02633 | Nishad Singhi | Nishad Singhi, Florian Mohnert, Ben Prystawski, Falk Lieder | Toward a normative theory of (self-)management by goal-setting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People are often confronted with problems whose complexity exceeds their
cognitive capacities. To deal with this complexity, individuals and managers
can break complex problems down into a series of subgoals. Which subgoals are
most effective depends on people's cognitive constraints and the cognitive
mechanisms of goal pursuit. This creates an untapped opportunity to derive
practical recommendations for which subgoals managers and individuals should
set from cognitive models of bounded rationality. To seize this opportunity, we
apply the principle of resource-rationality to formulate a mathematically
precise normative theory of (self-)management by goal-setting. We leverage this
theory to computationally derive optimal subgoals from a resource-rational
model of human goal pursuit. Finally, we show that the resulting subgoals
improve the problem-solving performance of bounded agents and human
participants. This constitutes a first step towards grounding prescriptive
theories of management and practical recommendations for goal-setting in
computational models of the relevant psychological processes and cognitive
limitations.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 09:06:54 GMT"
}
] | 1,675,728,000,000 | [
[
"Singhi",
"Nishad",
""
],
[
"Mohnert",
"Florian",
""
],
[
"Prystawski",
"Ben",
""
],
[
"Lieder",
"Falk",
""
]
] |
2302.02785 | Lovis Heindrich | Lovis Heindrich, Saksham Consul, Falk Lieder | An intelligent tutor for planning in large partially observable
environments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI can not only outperform people in many planning tasks, but it can also
teach them how to plan better. A recent and promising approach to improving
human decision-making is to create intelligent tutors that utilize AI to
discover and teach optimal planning strategies automatically. Prior work has
shown that this approach can improve planning in artificial, fully observable
planning tasks. Unlike these artificial tasks, the world is only partially
observable. To bridge this gap, we developed and evaluated the first
intelligent tutor for planning in partially observable environments. Compared
to previous intelligent tutors for teaching planning strategies, this novel
intelligent tutor combines two innovations: 1) a new metareasoning algorithm
for discovering optimal planning strategies for large, partially observable
environments, and 2) scaffolding the learning process by having the learner
choose from an increasingly larger set of planning operations in increasingly
larger planning problems. We found that our new strategy discovery algorithm is
superior to the state-of-the-art. A preregistered experiment with 330
participants demonstrated that the new intelligent tutor is highly effective at
improving people's ability to make good decisions in partially observable
environments. This suggests our human-centered tutoring approach can
successfully boost human planning in complex, partially observable sequential
decision problems, a promising step towards using AI-powered intelligent tutors
to improve human planning in the real world.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 13:57:08 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jun 2024 13:29:08 GMT"
}
] | 1,717,718,400,000 | [
[
"Heindrich",
"Lovis",
""
],
[
"Consul",
"Saksham",
""
],
[
"Lieder",
"Falk",
""
]
] |
2302.02985 | Tarik A. Rashid | Dler O. Hasan, Aso M. Aladdin, Hardi Sabah Talabani, Tarik Ahmed
Rashid, and Seyedali Mirjalili | The Fifteen Puzzle- A New Approach through Hybridizing Three Heuristics
Methods | 18 pages | null | 10.3390/computers12010011 | Computers, 2023 | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Fifteen Puzzle problem is one of the most classical problems that have
captivated mathematical enthusiasts for centuries. This is mainly because of
the huge size of the state space, with approximately 10^13 states that have to
be explored, and several algorithms have been applied to solve the Fifteen Puzzle
instances. In this paper, to deal with this large state space, Bidirectional A*
(BA*) search algorithm with three heuristics, such as Manhattan distance (MD),
linear conflict (LC), and walking distance (WD) has been used to solve the
Fifteen Puzzle problems. The three mentioned heuristics will be hybridized in a
way that can dramatically reduce the number of generated states by the
algorithm. Moreover, all those heuristics require only 25KB of storage but help
the algorithm effectively reduce the number of generated states and expand
fewer nodes. Our implementation of BA* search can significantly reduce the
space complexity, and guarantee either optimal or near-optimal solutions.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2023 07:17:23 GMT"
}
] | 1,675,728,000,000 | [
[
"Hasan",
"Dler O.",
""
],
[
"Aladdin",
"Aso M.",
""
],
[
"Talabani",
"Hardi Sabah",
""
],
[
"Rashid",
"Tarik Ahmed",
""
],
[
"Mirjalili",
"Seyedali",
""
]
] |
2302.03180 | Maryam Hashemi Miss | Maryam Hashemi | Who wants what and how: a Mapping Function for Explainable Artificial
Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The increasing complexity of AI systems has led to the growth of the field of
explainable AI (XAI), which aims to provide explanations and justifications for
the outputs of AI algorithms. These methods mainly focus on feature importance
and identifying changes that can be made to achieve a desired outcome.
Researchers have identified desired properties for XAI methods, such as
plausibility, sparsity, causality, low run-time, etc. The objective of this
study is to conduct a review of existing XAI research and present a
classification of XAI methods. The study also aims to connect XAI users with
the appropriate method and relate desired properties to current XAI approaches.
The outcome of this study will be a clear strategy that outlines how to choose
the right XAI method for a particular goal and user and provide a personalized
explanation for users.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 01:06:38 GMT"
}
] | 1,675,814,400,000 | [
[
"Hashemi",
"Maryam",
""
]
] |
2302.03189 | Michael Timothy Bennett | Michael Timothy Bennett | Emergent Causality and the Foundation of Consciousness | Published (and won "Best Student Paper") at the 16th Conference on
Artificial General Intelligence, Stockholm, 2023 | Proceedings of the 16th International Conference on Artificial
General Intelligence. 2023. Lecture Notes in Computer Science, vol 13921.
Springer. pp. 52-61 | 10.1007/978-3-031-33469-6_6 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | To make accurate inferences in an interactive setting, an agent must not
confuse passive observation of events with having intervened to cause them. The
$do$ operator formalises interventions so that we may reason about their
effect. Yet there exist pareto optimal mathematical formalisms of general
intelligence in an interactive setting which, presupposing no explicit
representation of intervention, make maximally accurate inferences. We examine
one such formalism. We show that in the absence of a $do$ operator, an
intervention can be represented by a variable. We then argue that variables are
abstractions, and that the need to explicitly represent interventions in advance
arises only because we presuppose these sorts of abstractions. The
aforementioned formalism avoids this and so, initial conditions permitting,
representations of relevant causal interventions will emerge through induction.
These emergent abstractions function as representations of one's self and of
any other object, inasmuch as the interventions of those objects impact the
satisfaction of goals. We argue that this explains how one might reason about
one's own identity and intent, those of others, of one's own as perceived by
others and so on. In a narrow sense this describes what it is to be aware, and
is a mechanistic explanation of aspects of consciousness.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 01:41:23 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 00:40:42 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Apr 2023 07:36:06 GMT"
},
{
"version": "v4",
"created": "Thu, 11 Apr 2024 04:51:47 GMT"
}
] | 1,712,880,000,000 | [
[
"Bennett",
"Michael Timothy",
""
]
] |
2302.03352 | Edgar Galvan | Fred Valdez Ameneyro and Edgar Galvan | Towards Understanding the Effects of Evolving the MCTS UCT Selection
Policy | 8 pages, double column, 6 figures, 1 table, conference. arXiv admin
note: substantial text overlap with arXiv:2208.13589, arXiv:2112.09697 | null | 10.1109/SSCI51031.2022.10022266 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) is a sampling best-first method to search for
optimal decisions. The success of MCTS depends heavily on how the MCTS
statistical tree is built and the selection policy plays a fundamental role in
this. A particular selection policy that works particularly well, widely
adopted in MCTS, is the Upper Confidence Bounds for Trees, referred to as UCT.
Other more sophisticated bounds have been proposed by the community with the
goal to improve MCTS performance on particular problems. Thus, it is evident
that while the MCTS UCT behaves generally well, some variants might behave
better. As a result of this, multiple works have been proposed to evolve a
selection policy to be used in MCTS. Although all these works are inspiring,
none of them has carried out an in-depth analysis shedding light on the
circumstances under which an evolved alternative to MCTS UCT might be beneficial
in MCTS, as each focuses on a single type of problem. In sharp contrast to this, in this
work we use five functions of different nature, going from a unimodal function,
covering multimodal functions to deceptive functions. We demonstrate how the
evolution of the MCTS UCT might be beneficial in multimodal and deceptive
scenarios, whereas the MCTS UCT is robust in unimodal scenarios and competitive
in the rest of the scenarios used in this study.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 09:50:55 GMT"
}
] | 1,675,814,400,000 | [
[
"Ameneyro",
"Fred Valdez",
""
],
[
"Galvan",
"Edgar",
""
]
] |
2302.03384 | Shufang Zhu | Shufang Zhu, Giuseppe De Giacomo | Act for Your Duties but Maintain Your Rights | null | International Conference on Principles of Knowledge Representation
and Reasoning (KR), 2022 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the synthesis literature has focused on studying how to synthesize a
strategy to fulfill a task. This task is a duty for the agent. In this paper,
we argue that intelligent agents should also be equipped with rights, that is,
tasks that the agent itself can choose to fulfill (e.g., the right of
recharging the battery). The agent should be able to maintain these rights
while acting for its duties. We study this issue in the context of LTLf
synthesis: we give duties and rights in terms of LTLf specifications, and
synthesize a suitable strategy to achieve the duties that can be modified
on-the-fly to achieve also the rights, if the agent chooses to do so. We show
that handling rights does not make synthesis substantially more difficult,
although it requires a more sophisticated solution concept than standard LTLf
synthesis. We also extend our results to the case in which further duties and
rights are given to the agent while already executing.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 10:44:47 GMT"
}
] | 1,675,814,400,000 | [
[
"Zhu",
"Shufang",
""
],
[
"De Giacomo",
"Giuseppe",
""
]
] |
2302.03578 | Jack Furby | Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece | Towards a Deeper Understanding of Concept Bottleneck Models Through
End-to-End Explanation | Accepted into the AAAI-23 workshop Representation Learning for
Responsible Human-Centric AI (R2HCAI) as a 4 page paper. This version also
includes an additional 47 pages for the appendix and contains additional
figures and tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of
human-defined concepts, before using this vector to predict a final
classification. We might therefore expect CBMs capable of predicting concepts
based on distinct regions of an input. In doing so, this would support human
interpretation when generating explanations of the model's outputs to visualise
input features corresponding to concepts. The contribution of this paper is
threefold: Firstly, we expand on existing literature by looking at relevance
both from the input to the concept vector, confirming that relevance is
distributed among the input features, and from the concept vector to the final
classification where, for the most part, the final classification is made using
concepts predicted as present. Secondly, we report a quantitative evaluation to
measure the distance between the maximum input feature relevance and the ground
truth location; we perform this with the techniques, Layer-wise Relevance
Propagation (LRP), Integrated Gradients (IG) and a baseline gradient approach,
finding LRP has a lower average distance than IG. Thirdly, we propose using the
proportion of relevance as a measurement for explaining concept importance.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 16:43:43 GMT"
}
] | 1,675,814,400,000 | [
[
"Furby",
"Jack",
""
],
[
"Cunnington",
"Daniel",
""
],
[
"Braines",
"Dave",
""
],
[
"Preece",
"Alun",
""
]
] |
2302.03625 | Seyed Mohammad Sadegh Dashti | Seyed Mohammad Sadegh Dashti, Seyedeh Fatemeh Dashti | An Expert System to Diagnose Spinal Disorders | null | null | 10.2174/1875036202013010057 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Until now, traditional invasive approaches have been the only
means being leveraged to diagnose spinal disorders. Traditional manual
diagnostics require a high workload, and diagnostic errors are likely to occur
due to the prolonged work of physicians. In this research, we develop an expert
system based on a hybrid inference algorithm and comprehensive integrated
knowledge for assisting the experts in the fast and high-quality diagnosis of
spinal disorders.
Methods: First, for each spinal anomaly, the accurate and integrated
knowledge was acquired from related experts and resources. Second, based on
probability distributions and dependencies between symptoms of each anomaly, a
unique numerical value known as certainty effect value was assigned to each
symptom. Third, a new hybrid inference algorithm was designed to obtain
excellent performance, which was an incorporation of the Backward Chaining
Inference and Theory of Uncertainty.
Results: The proposed expert system was evaluated in two different phases,
real-world samples, and medical records evaluation. Evaluations show that in
terms of real-world samples analysis, the system achieved excellent accuracy.
Application of the system on the sample with anomalies revealed the degree of
severity of disorders and the risk of development of abnormalities in unhealthy
and healthy patients. In the case of medical records analysis, our expert
system proved to have promising performance, which was very close to those of
experts.
Conclusion: Evaluations suggest that the proposed expert system provides
promising performance, helping specialists to validate the accuracy and
integrity of their diagnosis. It can also serve as an intelligent educational
software for medical students to gain familiarity with spinal disorder
diagnosis process, and related symptoms.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 17:28:24 GMT"
}
] | 1,675,814,400,000 | [
[
"Dashti",
"Seyed Mohammad Sadegh",
""
],
[
"Dashti",
"Seyedeh Fatemeh",
""
]
] |
2302.03800 | Rishita Bansal | Alakh Aggarwal, Rishita Bansal, Parth Padalkar, Sriraam Natarajan | MACOptions: Multi-Agent Learning with Centralized Controller and Options
Framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | These days automation is being applied everywhere. In every environment,
planning for the actions to be taken by the agents is an important aspect. In
this paper, we plan to implement planning for multi-agents with a centralized
controller. We compare three approaches: random policy, Q-learning, and
Q-learning with Options Framework. We also show the effectiveness of planners
by showing performance comparison between Q-Learning with Planner and without
Planner.
| [
{
"version": "v1",
"created": "Tue, 7 Feb 2023 23:32:53 GMT"
}
] | 1,675,900,800,000 | [
[
"Aggarwal",
"Alakh",
""
],
[
"Bansal",
"Rishita",
""
],
[
"Padalkar",
"Parth",
""
],
[
"Natarajan",
"Sriraam",
""
]
] |
2302.03816 | Iuliia Kotseruba | Iuliia Kotseruba and Amir Rasouli | Intend-Wait-Perceive-Cross: Exploring the Effects of Perceptual
Limitations on Pedestrian Decision-Making | 6 pages, 5 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on pedestrian behavior understanding focuses on the dynamics
of pedestrians and makes strong assumptions about their perceptual abilities.
For instance, it is often presumed that pedestrians have omnidirectional view
of the scene around them. In practice, human visual system has a number of
limitations, such as restricted field of view (FoV) and range of sensing, which
consequently affect decision-making and overall behavior of the pedestrians. By
including explicit modeling of pedestrian perception, we can better understand
its effect on their decision-making. To this end, we propose an agent-based
pedestrian behavior model Intend-Wait-Perceive-Cross with three novel elements:
field of vision, working memory, and scanning strategy, all motivated by
findings from behavioral literature. Through extensive experimentation we
investigate the effects of perceptual limitations on safe crossing decisions
and demonstrate how they contribute to detectable changes in pedestrian
behaviors.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 00:47:51 GMT"
}
] | 1,675,900,800,000 | [
[
"Kotseruba",
"Iuliia",
""
],
[
"Rasouli",
"Amir",
""
]
] |
2302.04123 | Antonio De Nicola | Antonio De Nicola, Anna Formica, Michele Missikoff, Elaheh Pourabbas,
Francesco Taglino | A Parametric Similarity Method: Comparative Experiments based on
Semantically Annotated Large Datasets | 32 pages, 9 figures, 11 tables | Journal of Web Semantics, Volume 76, April 2023 | 10.1016/j.websem.2023.100773 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present the parametric method SemSimp aimed at measuring semantic
similarity of digital resources. SemSimp is based on the notion of information
content, and it leverages a reference ontology and taxonomic reasoning,
encompassing different approaches for weighting the concepts of the ontology.
In particular, weights can be computed by considering either the available
digital resources or the structure of the reference ontology of a given domain.
SemSimp is assessed against six representative semantic similarity methods for
comparing sets of concepts proposed in the literature, by carrying out an
experimentation that includes both a statistical analysis and an expert
judgement evaluation. To the purpose of achieving a reliable assessment, we
used a real-world large dataset based on the Digital Library of the Association
for Computing Machinery (ACM), and a reference ontology derived from the ACM
Computing Classification System (ACM-CCS). For each method, we considered two
indicators. The first concerns the degree of confidence to identify the
similarity among the papers belonging to some special issues selected from the
ACM Transactions on Information Systems journal, the second the Pearson
correlation with human judgement. The results reveal that one of the
configurations of SemSimp outperforms the other assessed methods. An additional
experiment performed in the domain of physics shows that, in general, SemSimp
provides better results than the other similarity methods.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 15:22:32 GMT"
}
] | 1,675,900,800,000 | [
[
"De Nicola",
"Antonio",
""
],
[
"Formica",
"Anna",
""
],
[
"Missikoff",
"Michele",
""
],
[
"Pourabbas",
"Elaheh",
""
],
[
"Taglino",
"Francesco",
""
]
] |
2302.04238 | Yuan Yang | Yuan Yang and Mathilee Kunda | Computational Models of Solving Raven's Progressive Matrices: A
Comprehensive Introduction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | As being widely used to measure human intelligence, Raven's Progressive
Matrices (RPM) tests also pose a great challenge for AI systems. There is a
long line of computational models for solving RPM, starting from 1960s, either
to understand the involved cognitive processes or solely for problem-solving
purposes. Due to the dramatic paradigm shifts in AI research, especially the
advent of deep learning models in the last decade, the computational studies on
RPM have also changed a lot. Therefore, now is a good time to look back at this
long line of research. As the title -- ``a comprehensive introduction'' --
indicates, this paper provides an all-in-one presentation of computational
models for solving RPM, including the history of RPM, intelligence testing
theories behind RPM, item design and automatic item generation of RPM-like
tasks, a conceptual chronicle of computational models for solving RPM, which
reveals the philosophy behind the technology evolution of these models, and
suggestions for transferring human intelligence testing and AI testing.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 18:09:01 GMT"
}
] | 1,675,900,800,000 | [
[
"Yang",
"Yuan",
""
],
[
"Kunda",
"Mathilee",
""
]
] |
2302.04288 | Jiaqi Ma | Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju | Towards Bridging the Gaps between the Right to Explanation and the Right
to be Forgotten | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Right to Explanation and the Right to be Forgotten are two important
principles outlined to regulate algorithmic decision making and data usage in
real-world applications. While the right to explanation allows individuals to
request an actionable explanation for an algorithmic decision, the right to be
forgotten grants them the right to ask for their data to be deleted from all
the databases and models of an organization. Intuitively, enforcing the right
to be forgotten may trigger model updates which in turn invalidate previously
provided explanations, thus violating the right to explanation. In this work,
we investigate the technical implications arising due to the interference
between the two aforementioned regulatory principles, and propose the first
algorithmic framework to resolve the tension between them. To this end, we
formulate a novel optimization problem to generate explanations that are robust
to model updates due to the removal of training data instances by data deletion
requests. We then derive an efficient approximation algorithm to handle the
combinatorial complexity of this optimization problem. We theoretically
demonstrate that our method generates explanations that are provably robust to
worst-case data deletion requests with bounded costs in case of linear models
and certain classes of non-linear models. Extensive experimentation with
real-world datasets demonstrates the efficacy of the proposed framework.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 19:03:00 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 03:24:50 GMT"
}
] | 1,676,246,400,000 | [
[
"Krishna",
"Satyapriya",
""
],
[
"Ma",
"Jiaqi",
""
],
[
"Lakkaraju",
"Himabindu",
""
]
] |
2302.04318 | Quentin Cohen-Solal | Quentin Cohen-Solal and Tristan Cazenave | Learning to Play Stochastic Two-player Perfect-Information Games without
Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we extend the Descent framework, which enables learning and
planning in the context of two-player games with perfect information, to the
framework of stochastic games.
We propose two ways of doing this, the first way generalizes the search
algorithm, i.e. Descent, to stochastic games and the second way approximates
stochastic games by deterministic games.
We then evaluate them on the game EinStein würfelt nicht! against
state-of-the-art algorithms: Expectiminimax and Polygames (i.e. the Alpha Zero
algorithm). It is our generalization of Descent which obtains the best results.
The approximation by deterministic games nevertheless obtains good results,
presaging that it could give better results in particular contexts.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 20:27:45 GMT"
}
] | 1,675,987,200,000 | [
[
"Cohen-Solal",
"Quentin",
""
],
[
"Cazenave",
"Tristan",
""
]
] |
2302.04335 | Mohammad Khalil | Mohammad Khalil and Erkan Er | Will ChatGPT get you caught? Rethinking of Plagiarism Detection | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rise of Artificial Intelligence (AI) technology and its impact on
education has been a topic of growing concern in recent years. The new
generation AI systems such as chatbots have become more accessible on the
Internet and stronger in terms of capabilities. The use of chatbots,
particularly ChatGPT, for generating academic essays at schools and colleges
has sparked fears among scholars. This study aims to explore the originality of
contents produced by one of the most popular AI chatbots, ChatGPT. To this end,
two popular plagiarism detection tools were used to evaluate the originality of
50 essays generated by ChatGPT on various topics. Our results manifest that
ChatGPT has a great potential to generate sophisticated text outputs without
being well caught by the plagiarism check software. In other words, ChatGPT can
create content on many topics with high originality as if they were written by
someone. These findings align with the recent concerns about students using
chatbots for an easy shortcut to success with minimal or no effort. Moreover,
ChatGPT was asked to verify if the essays were generated by itself, as an
additional measure of plagiarism check, and it showed superior performance
compared to the traditional plagiarism-detection tools. The paper discusses the
need for institutions to consider appropriate measures to mitigate potential
plagiarism issues and advise on the ongoing debate surrounding the impact of AI
technology on education. Further implications are discussed in the paper.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 20:59:18 GMT"
}
] | 1,675,987,200,000 | [
[
"Khalil",
"Mohammad",
""
],
[
"Er",
"Erkan",
""
]
] |
2302.04528 | Chen Peng | Chen Peng, Zhengqi Dai, Guangping Xia, Yajie Niu, Yihui Lei | Explaining with Greater Support: Weighted Column Sampling Optimization
for q-Consistent Summary-Explanations | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning systems have been extensively used as auxiliary tools in
domains that require critical decision-making, such as healthcare and criminal
justice. The explainability of decisions is crucial for users to develop trust
on these systems. In recent years, the globally-consistent rule-based
summary-explanation and its max-support (MS) problem have been proposed, which
can provide explanations for particular decisions along with useful statistics
of the dataset. However, globally-consistent summary-explanations with limited
complexity typically have small supports, if there are any. In this paper, we
propose a relaxed version of summary-explanation, i.e., the $q$-consistent
summary-explanation, which aims to achieve greater support at the cost of
slightly lower consistency. The challenge is that the max-support problem of
$q$-consistent summary-explanation (MSqC) is much more complex than the
original MS problem, resulting in over-extended solution time using standard
branch-and-bound solvers. To improve the solution time efficiency, this paper
proposes the weighted column sampling (WCS) method based on solving smaller
problems by sampling variables according to their simplified increase support
(SIS) values. Experiments verify that solving MSqC with the proposed SIS-based
WCS method is not only more scalable in efficiency, but also yields solutions
with greater support and better global extrapolation effectiveness.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 09:40:30 GMT"
}
] | 1,675,987,200,000 | [
[
"Peng",
"Chen",
""
],
[
"Dai",
"Zhengqi",
""
],
[
"Xia",
"Guangping",
""
],
[
"Niu",
"Yajie",
""
],
[
"Lei",
"Yihui",
""
]
] |
2302.04599 | Dominic Phillips | Jonathan Feldstein, Dominic Phillips and Efthymia Tsamoura | Principled and Efficient Motif Finding for Structure Learning of Lifted
Graphical Models | Submitted to AAAI23. 9 pages. Appendix included | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Structure learning is a core problem in AI central to the fields of
neuro-symbolic AI and statistical relational learning. It consists in
automatically learning a logical theory from data. The basis for structure
learning is mining repeating patterns in the data, known as structural motifs.
Finding these patterns reduces the exponential search space and therefore
guides the learning of formulas. Despite the importance of motif learning, it
is still not well understood. We present the first principled approach for
mining structural motifs in lifted graphical models, languages that blend
first-order logic with probabilistic models, which uses a stochastic process to
measure the similarity of entities in the data. Our first contribution is an
algorithm, which depends on two intuitive hyperparameters: one controlling the
uncertainty in the entity similarity measure, and one controlling the softness
of the resulting rules. Our second contribution is a preprocessing step where
we perform hierarchical clustering on the data to reduce the search space to
the most relevant data. Our third contribution is to introduce an O(n ln n) (in
the size of the entities in the data) algorithm for clustering
structurally-related data. We evaluate our approach using standard benchmarks
and show that we outperform state-of-the-art structure learning approaches by
up to 6% in terms of accuracy and up to 80% in terms of runtime.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 12:21:55 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 12:19:25 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Jun 2023 15:27:50 GMT"
}
] | 1,687,305,600,000 | [
[
"Feldstein",
"Jonathan",
""
],
[
"Phillips",
"Dominic",
""
],
[
"Tsamoura",
"Efthymia",
""
]
] |
2302.04600 | Oliver Niggemann | Philipp Rosenthal, Niels Demke, Frank Mantwill, Oliver Niggemann | Plan-Based Derivation of General Functional Structures in Product Design | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In product design, a decomposition of the overall product function into a set
of smaller, interacting functions is usually considered a crucial first step
for any computer-supported design tool. Here, we propose a new approach for the
decomposition of functions especially suited for later solutions based on
Artificial Intelligence. The presented approach defines the decomposition
problem in terms of a planning problem--a well established field in Artificial
Intelligence. For the planning problem, logic-based solvers can be used to find
solutions that compute a useful function structure for the design process.
Well-known function libraries from engineering are used as atomic planning
steps. The algorithms are evaluated using two different application examples to
ensure the transferability of a general function decomposition.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 12:31:29 GMT"
}
] | 1,675,987,200,000 | [
[
"Rosenthal",
"Philipp",
""
],
[
"Demke",
"Niels",
""
],
[
"Mantwill",
"Frank",
""
],
[
"Niggemann",
"Oliver",
""
]
] |
2302.04737 | Md. Rezaul Karim | Md. Rezaul Karim and Lina Molinas Comet and Oya Beyan and Dietrich
Rebholz-Schuhmann and Stefan Decker | A Biomedical Knowledge Graph for Biomarker Discovery in Cancer | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Structured and unstructured data and facts about drugs, genes, protein,
viruses, and their mechanism are spread across a huge number of scientific
articles. These articles are a large-scale knowledge source and can have a huge
impact on disseminating knowledge about the mechanisms of certain biological
processes. A domain-specific knowledge graph (KG) is an explicit
conceptualization of a specific subject-matter domain represented w.r.t.
semantically interrelated entities and relations. A KG can be constructed by
integrating such facts and data and be used for data integration, exploration,
and federated queries. However, exploration and querying large-scale KGs is
tedious for certain groups of users due to a lack of knowledge about underlying
data assets or semantic technologies. Such a KG will not only allow deducing
new knowledge and question answering (QA) but also allow domain experts to
explore. Since cross-disciplinary explanations are important for accurate
diagnosis, it is important to query the KG to provide interactive explanations
about learned biomarkers. Inspired by these, we construct a domain-specific KG,
particularly for cancer-specific biomarker discovery. The KG is constructed by
integrating cancer-related knowledge and facts from multiple sources. First, we
construct a domain-specific ontology, which we call OncoNet Ontology (ONO). The
ONO ontology is developed to enable semantic reasoning for verification of the
predictions for relations between diseases and genes. The KG is then developed
and enriched by harmonizing the ONO, additional metadata schemas, ontologies,
controlled vocabularies, and additional concepts from external sources using a
BERT-based information extraction method. BioBERT and SciBERT are finetuned
with the selected articles crawled from PubMed. We listed down some queries and
some examples of QA and deducing knowledge based on the KG.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 16:17:57 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 08:22:06 GMT"
}
] | 1,677,196,800,000 | [
[
"Karim",
"Md. Rezaul",
""
],
[
"Comet",
"Lina Molinas",
""
],
[
"Beyan",
"Oya",
""
],
[
"Rebholz-Schuhmann",
"Dietrich",
""
],
[
"Decker",
"Stefan",
""
]
] |
2302.04752 | Ernest Davis | Ernest Davis | Benchmarks for Automated Commonsense Reasoning: A Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | More than one hundred benchmarks have been developed to test the commonsense
knowledge and commonsense reasoning abilities of artificial intelligence (AI)
systems. However, these benchmarks are often flawed and many aspects of common
sense remain untested. Consequently, we do not currently have any reliable way
of measuring to what extent existing AI systems have achieved these abilities.
This paper surveys the development and uses of AI commonsense benchmarks. We
discuss the nature of common sense; the role of common sense in AI; the goals
served by constructing commonsense benchmarks; and desirable features of
commonsense benchmarks. We analyze the common flaws in benchmarks, and we argue
that it is worthwhile to invest the work needed to ensure that benchmark examples
are consistently high quality. We survey the various methods of constructing
commonsense benchmarks. We enumerate 139 commonsense benchmarks that have been
developed: 102 text-based, 18 image-based, 12 video-based, and 7 simulated
physical environments. We discuss the gaps in the existing benchmarks and
aspects of commonsense reasoning that are not addressed in any existing
benchmark. We conclude with a number of recommendations for future development
of commonsense AI benchmarks.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 16:34:30 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 19:36:41 GMT"
}
] | 1,677,196,800,000 | [
[
"Davis",
"Ernest",
""
]
] |
2302.05405 | Christophe Lecoutre | Christophe Lecoutre | ACE, a generic constraint solver | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Constraint Programming (CP) is a useful technology for modeling and solving
combinatorial constrained problems. On the one hand, one can use a library like
PyCSP3 for easily modeling problems arising in various application fields
(e.g., scheduling, planning, data-mining, cryptography, bio-informatics,
organic chemistry, etc.). Problem instances can then be directly generated from
specific models and data. On the other hand, for solving instances (notably,
represented in XCSP3 format), one can use a constraint solver like ACE, which
is presented in this paper. ACE is an open-source constraint solver, developed
in Java, which focuses on integer variables (including 0/1-Boolean variables),
state-of-the-art table constraints, popular global constraints, search
heuristics and (mono-criterion) optimization.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2023 12:15:18 GMT"
}
] | 1,676,246,400,000 | [
[
"Lecoutre",
"Christophe",
""
]
] |
2302.05448 | Jeffrey Johnston | Jeffrey W. Johnston | The Construction of Reality in an AI: A Review | 34 pages text, 37 pages references | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI constructivism as inspired by Jean Piaget, described and surveyed by Frank
Guerin, and representatively implemented by Gary Drescher seeks to create
algorithms and knowledge structures that enable agents to acquire, maintain,
and apply a deep understanding of the environment through sensorimotor
interactions. This paper aims to increase awareness of constructivist AI
implementations to encourage greater progress toward enabling lifelong learning
by machines. It builds on Guerin's 2008 "Learning Like a Baby: A Survey of AI
approaches." After briefly recapitulating that survey, it summarizes subsequent
progress by the Guerin referents, numerous works not covered by Guerin (or
found in other surveys), and relevant efforts in related areas. The focus is on
knowledge representations and learning algorithms that have been used in
practice viewed through lenses of Piaget's schemas, adaptation processes, and
staged development. The paper concludes with a preview of a simple framework
for constructive AI being developed by the author that parses concepts from
sensory input and stores them in a semantic memory network linked to episodic
data. Extensive references are provided.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 22:52:17 GMT"
}
] | 1,676,332,800,000 | [
[
"Johnston",
"Jeffrey W.",
""
]
] |
2302.06083 | Samuel Alexander | Samuel Allen Alexander, David Quarel, Len Du, Marcus Hutter | Universal Agent Mixtures and the Geometry of Intelligence | 16 pages, accepted to AISTATS23 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by recent progress in multi-agent Reinforcement Learning (RL), in
this work we examine the collective intelligent behaviour of theoretical
universal agents by introducing a weighted mixture operation. Given a weighted
set of agents, their weighted mixture is a new agent whose expected total
reward in any environment is the corresponding weighted average of the original
agents' expected total rewards in that environment. Thus, if RL agent
intelligence is quantified in terms of performance across environments, the
weighted mixture's intelligence is the weighted average of the original agents'
intelligences. This operation enables various interesting new theorems that
shed light on the geometry of RL agent intelligence, namely: results about
symmetries, convex agent-sets, and local extrema. We also show that any RL
agent intelligence measure based on average performance across environments,
subject to certain weak technical conditions, is identical (up to a constant
factor) to performance within a single environment dependent on said
intelligence measure.
| [
{
"version": "v1",
"created": "Mon, 13 Feb 2023 04:02:53 GMT"
}
] | 1,676,332,800,000 | [
[
"Alexander",
"Samuel Allen",
""
],
[
"Quarel",
"David",
""
],
[
"Du",
"Len",
""
],
[
"Hutter",
"Marcus",
""
]
] |
2302.06188 | Giuseppe Spallitta | Giuseppe Spallitta, Gabriele Masina, Paolo Morettin, Andrea Passerini,
Roberto Sebastiani | Enhancing SMT-based Weighted Model Integration by Structure Awareness | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The development of efficient exact and approximate algorithms for
probabilistic inference is a long-standing goal of artificial intelligence
research. Whereas substantial progress has been made in dealing with purely
discrete or purely continuous domains, adapting the developed solutions to
tackle hybrid domains, characterised by discrete and continuous variables and
their relationships, is highly non-trivial. Weighted Model Integration (WMI)
recently emerged as a unifying formalism for probabilistic inference in hybrid
domains. Despite a considerable amount of recent work, allowing WMI algorithms
to scale with the complexity of the hybrid problem is still a challenge. In
this paper we highlight some substantial limitations of existing
state-of-the-art solutions, and develop an algorithm that combines SMT-based
enumeration, an efficient technique in formal verification, with an effective
encoding of the problem structure. This allows our algorithm to avoid
generating redundant models, resulting in drastic computational savings.
Additionally, we show how SMT-based approaches can seamlessly deal with
different integration techniques, both exact and approximate, significantly
expanding the set of problems that can be tackled by WMI technology. An
extensive experimental evaluation on both synthetic and real-world datasets
confirms the substantial advantage of the proposed solution over existing
alternatives. The application potential of this technology is further showcased
on a prototypical task aimed at verifying the fairness of probabilistic
programs.
| [
{
"version": "v1",
"created": "Mon, 13 Feb 2023 08:55:12 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jan 2024 13:47:37 GMT"
}
] | 1,704,844,800,000 | [
[
"Spallitta",
"Giuseppe",
""
],
[
"Masina",
"Gabriele",
""
],
[
"Morettin",
"Paolo",
""
],
[
"Passerini",
"Andrea",
""
],
[
"Sebastiani",
"Roberto",
""
]
] |
2302.06975 | Dren Fazlija | Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan
Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis,
Gourab K. Patro, Wadhah Zai El Amri, Wolfgang Nejdl | A Review of the Role of Causality in Developing Trustworthy AI Systems | 55 pages, 8 figures. Under review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | State-of-the-art AI models largely lack an understanding of the cause-effect
relationship that governs human understanding of the real world. Consequently,
these models do not generalize to unseen data, often produce unfair results,
and are difficult to interpret. This has led to efforts to improve the
trustworthiness aspects of AI models. Recently, causal modeling and inference
methods have emerged as powerful tools. This review aims to provide the reader
with an overview of causal methods that have been developed to improve the
trustworthiness of AI models. We hope that our contribution will motivate
future research on causality-based solutions for trustworthy AI.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2023 11:08:26 GMT"
}
] | 1,676,419,200,000 | [
[
"Ganguly",
"Niloy",
""
],
[
"Fazlija",
"Dren",
""
],
[
"Badar",
"Maryam",
""
],
[
"Fisichella",
"Marco",
""
],
[
"Sikdar",
"Sandipan",
""
],
[
"Schrader",
"Johanna",
""
],
[
"Wallat",
"Jonas",
""
],
[
"Rudra",
"Koustav",
""
],
[
"Koubarakis",
"Manolis",
""
],
[
"Patro",
"Gourab K.",
""
],
[
"Amri",
"Wadhah Zai El",
""
],
[
"Nejdl",
"Wolfgang",
""
]
] |
2302.07059 | Yuanwei Qu | Yuanwei Qu, Michel Perrin, Anita Torabi, Mara Abel, Martin Giese | GeoFault: A well-founded fault ontology for interoperability in
geological modeling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Geological modeling currently uses various computer-based applications. Data
harmonization at the semantic level by means of ontologies is essential for
making these applications interoperable. Since geo-modeling is currently part
of multidisciplinary projects, semantic harmonization is required to model not
only geological knowledge but also to integrate other domain knowledge at a
general level. For this reason, the domain ontologies used for describing
geological knowledge must be based on a sound ontology background to ensure the
described geological knowledge is integratable. This paper presents a domain
ontology: GeoFault, resting on the Basic Formal Ontology BFO (Arp et al., 2015)
and the GeoCore ontology (Garcia et al., 2020). It models the knowledge related
to geological faults. Faults are essential to various industries but are
complex to model. They can be described as thin deformed rock volumes or as
spatial arrangements resulting from the different displacements of geological
blocks. At a broader scale, faults are currently described as mere surfaces,
which are the components of complex fault arrays. The reference to the BFO and
GeoCore package allows assigning these various fault elements to define
ontology classes and their logical linkage within a consistent ontology
framework. The GeoFault ontology covers the core knowledge of faults 'strico
sensu,' excluding ductile shear deformations. This considered vocabulary is
essentially descriptive and related to regional to outcrop scales, excluding
microscopic, orogenic, and tectonic plate structures. The ontology is molded in
OWL 2, validated by competency questions with two use cases, and tested using
an in-house ontology-driven data entry application. The work of GeoFault
provides a solid framework for disambiguating fault knowledge and a foundation
of fault data integration for the applications and the users.
| [
{
"version": "v1",
"created": "Tue, 14 Feb 2023 14:20:13 GMT"
}
] | 1,676,419,200,000 | [
[
"Qu",
"Yuanwei",
""
],
[
"Perrin",
"Michel",
""
],
[
"Torabi",
"Anita",
""
],
[
"Abel",
"Mara",
""
],
[
"Giese",
"Martin",
""
]
] |
2302.07412 | Jasper De Bock | Jasper De Bock | A theory of desirable things | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Inspired by the theory of desirable gambles that is used to model uncertainty
in the field of imprecise probabilities, I present a theory of desirable
things. Its aim is to model a subject's beliefs about which things are
desirable. What the things are is not important, nor is what it means for them
to be desirable. It can be applied to gambles, calling them desirable if a
subject accepts them, but it can just as well be applied to pizzas, calling
them desirable if my friend Arthur likes to eat them. Other useful examples of
things one might apply this theory to are propositions, horse lotteries, or
preferences between any of the above. Regardless of the particular things that
are considered, inference rules are imposed by means of an abstract closure
operator, and models that adhere to these rules are called coherent. I consider
two types of models, each of which can capture a subject's beliefs about which
things are desirable: sets of desirable things and sets of desirable sets of
things. A crucial result is that the latter type can be represented by a set of
the former.
| [
{
"version": "v1",
"created": "Wed, 15 Feb 2023 00:30:00 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 07:41:32 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2023 22:16:21 GMT"
}
] | 1,683,849,600,000 | [
[
"De Bock",
"Jasper",
""
]
] |
2302.08479 | Tea Tu\v{s}ar | Vanessa Volz and Boris Naujoks and Pascal Kerschke and Tea Tusar | Tools for Landscape Analysis of Optimisation Problems in Procedural
Content Generation for Games | 30 pages, 8 figures, accepted for publication in Applied Soft
Computing | null | 10.1016/j.asoc.2023.110121 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The term Procedural Content Generation (PCG) refers to the (semi-)automatic
generation of game content by algorithmic means, and its methods are becoming
increasingly popular in game-oriented research and industry. A special class of
these methods, which is commonly known as search-based PCG, treats the given
task as an optimisation problem. Such problems are predominantly tackled by
evolutionary algorithms.
We will demonstrate in this paper that obtaining more information about the
defined optimisation problem can substantially improve our understanding of how
to approach the generation of content. To do so, we present and discuss three
efficient analysis tools, namely diagonal walks, the estimation of high-level
properties, as well as problem similarity measures. We discuss the purpose of
each of the considered methods in the context of PCG and provide guidelines for
the interpretation of the results received. This way we aim to provide methods
for the comparison of PCG approaches and eventually, increase the quality and
practicality of generated content in industry.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2023 18:38:36 GMT"
}
] | 1,676,592,000,000 | [
[
"Volz",
"Vanessa",
""
],
[
"Naujoks",
"Boris",
""
],
[
"Kerschke",
"Pascal",
""
],
[
"Tusar",
"Tea",
""
]
] |
2302.09067 | Chenguang Lu | Chenguang Lu | Causal Confirmation Measures: From Simpson's Paradox to COVID-19 | 21 pages, 4 figures | Entropy, 2023,25(1), 143 | 10.3390/e25010143 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When we compare the influences of two causes on an outcome, if the conclusion
from every group is against that from the conflation, we think there is
Simpson's Paradox. The Existing Causal Inference Theory (ECIT) can make the
overall conclusion consistent with the grouping conclusion by removing the
confounder's influence to eliminate the paradox. The ECIT uses relative risk
difference Pd = max(0, (R - 1)/R) (R denotes the risk ratio) as the probability
of causation. In contrast, Philosopher Fitelson uses confirmation measure D
(posterior probability minus prior probability) to measure the strength of
causation. Fitelson concludes that from the perspective of Bayesian
confirmation, we should directly accept the overall conclusion without
considering the paradox. The author proposed a Bayesian confirmation measure b*
similar to Pd before. To overcome the contradiction between the ECIT and
Bayesian confirmation, the author uses the semantic information method with the
minimum cross-entropy criterion to deduce causal confirmation measure Cc = (R
-1)/max(R, 1). Cc is like Pd but has a normalizing property (between -1 and 1)
and cause symmetry. It especially fits cases where a cause restrains an
outcome, such as the COVID-19 vaccine controlling the infection. Some examples
(about kidney stone treatments and COVID-19) reveal that Pd and Cc are more
reasonable than D; Cc is more useful than Pd.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2023 02:44:27 GMT"
}
] | 1,676,937,600,000 | [
[
"Lu",
"Chenguang",
""
]
] |
2302.09071 | Pierre Beckmann | Pierre Beckmann, Guillaume K\"ostner, In\^es Hip\'olito | Rejecting Cognitivism: Computational Phenomenology for Deep Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a non-representationalist framework for deep learning relying on a
novel method: computational phenomenology, a dialogue between the first-person
perspective (relying on phenomenology) and the mechanisms of computational
models. We thereby reject the modern cognitivist interpretation of deep
learning, according to which artificial neural networks encode representations
of external entities. This interpretation mainly relies on
neuro-representationalism, a position that combines a strong ontological
commitment towards scientific theoretical entities and the idea that the brain
operates on symbolic representations of these entities. We proceed as follows:
after offering a review of cognitivism and neuro-representationalism in the
field of deep learning, we first elaborate a phenomenological critique of these
positions; we then sketch out computational phenomenology and distinguish it
from existing alternatives; finally we apply this new method to deep learning
models trained on specific tasks, in order to formulate a conceptual framework
of deep-learning, that allows one to think of artificial neural networks'
mechanisms in terms of lived experience.
| [
{
"version": "v1",
"created": "Thu, 16 Feb 2023 20:05:06 GMT"
}
] | 1,676,937,600,000 | [
[
"Beckmann",
"Pierre",
""
],
[
"Köstner",
"Guillaume",
""
],
[
"Hipólito",
"Inês",
""
]
] |
2302.09270 | Jiawen Deng | Jiawen Deng, Jiale Cheng, Hao Sun, Zhexin Zhang, Minlie Huang | Towards Safer Generative Language Models: A Survey on Safety Risks,
Evaluations, and Improvements | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As generative large model capabilities advance, safety concerns become more
pronounced in their outputs. To ensure the sustainable growth of the AI
ecosystem, it's imperative to undertake a holistic evaluation and refinement of
associated safety risks. This survey presents a framework for safety research
pertaining to large models, delineating the landscape of safety risks as well
as safety evaluation and improvement methods. We begin by introducing safety
issues of wide concern, then delve into safety evaluation methods for large
models, encompassing preference-based testing, adversarial attack approaches,
issues detection, and other advanced evaluation methods. Additionally, we
explore the strategies for enhancing large model safety from training to
deployment, highlighting cutting-edge safety approaches for each stage in
building large models. Finally, we discuss the core challenges in advancing
towards more responsible AI, including the interpretability of safety
mechanisms, ongoing safety issues, and robustness against malicious attacks.
Through this survey, we aim to provide clear technical guidance for safety
researchers and encourage further study on the safety of large models.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 09:32:55 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Mar 2023 03:28:47 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Nov 2023 06:39:19 GMT"
}
] | 1,701,388,800,000 | [
[
"Deng",
"Jiawen",
""
],
[
"Cheng",
"Jiale",
""
],
[
"Sun",
"Hao",
""
],
[
"Zhang",
"Zhexin",
""
],
[
"Huang",
"Minlie",
""
]
] |
2302.09320 | Qi Wang | Feisha Hu, Qi Wang, Haijian Shao, Shang Gao and Hualong Yu | Anomaly Detection of UAV State Data Based on Single-class Triangular
Global Alignment Kernel Extreme Learning Machine | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Unmanned Aerial Vehicles (UAVs) are widely used and meet many demands in
military and civilian fields. With the continuous enrichment and extensive
expansion of application scenarios, the safety of UAVs is constantly being
challenged. To address this challenge, we propose algorithms to detect
anomalous data collected from drones to improve drone safety. We deployed a
one-class kernel extreme learning machine (OCKELM) to detect anomalies in drone
data. By default, OCKELM uses the radial basis (RBF) kernel function as the
kernel function of the model. To improve the performance of OCKELM, we choose a
Triangular Global Alignment Kernel (TGAK) instead of an RBF Kernel and
introduce the Fast Independent Component Analysis (FastICA) algorithm to
reconstruct UAV data. Based on the above improvements, we create a novel
anomaly detection strategy FastICA-TGAK-OCKELM. The method is finally validated
on the UCI dataset and detected on the Aeronautical Laboratory Failures and
Anomalies (ALFA) dataset. The experimental results show that compared with
other methods, the accuracy of this method is improved by more than 30%, and
point anomalies are effectively detected.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 12:43:04 GMT"
}
] | 1,676,937,600,000 | [
[
"Hu",
"Feisha",
""
],
[
"Wang",
"Qi",
""
],
[
"Shao",
"Haijian",
""
],
[
"Gao",
"Shang",
""
],
[
"Yu",
"Hualong",
""
]
] |
2302.09335 | Xinyan Wang | Xinyan Wang, Ting Jia, Chongyu Wang, Kuan Xu, Zixin Shu, Jian Yu, Kuo
Yang, Xuezhong Zhou | Knowledge Graph Completion based on Tensor Decomposition for Disease
Gene Prediction | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate identification of disease genes has consistently been one of the
keys to decoding a disease's molecular mechanism. Most current approaches focus
on constructing biological networks and utilizing machine learning, especially,
deep learning to identify disease genes, but ignore the complex relations
between entities in the biological knowledge graph. In this paper, we construct
a biological knowledge graph centered on diseases and genes, and develop an
end-to-end Knowledge graph completion model for Disease Gene Prediction using
interactional tensor decomposition (called KDGene). KDGene introduces an
interaction module between the embeddings of entities and relations to tensor
decomposition, which can effectively enhance the information interaction in
biological knowledge. Experimental results show that KDGene significantly
outperforms state-of-the-art algorithms. Furthermore, the comprehensive
biological analysis of the case of diabetes mellitus confirms KDGene's ability
for identifying new and accurate candidate genes. This work proposes a scalable
knowledge graph completion framework to identify disease candidate genes, from
which the results are promising to provide valuable references for further wet
experiments.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 13:57:44 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 14:25:06 GMT"
}
] | 1,679,011,200,000 | [
[
"Wang",
"Xinyan",
""
],
[
"Jia",
"Ting",
""
],
[
"Wang",
"Chongyu",
""
],
[
"Xu",
"Kuan",
""
],
[
"Shu",
"Zixin",
""
],
[
"Yu",
"Jian",
""
],
[
"Yang",
"Kuo",
""
],
[
"Zhou",
"Xuezhong",
""
]
] |
2302.09346 | Jordi De La Torre | Jordi de la Torre | Redes Generativas Adversarias (GAN) Fundamentos Te\'oricos y
Aplicaciones | 14 pages, in Spanish language, 2 figures, review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generative adversarial networks (GANs) are a method based on the training of
two neural networks, one called generator and the other discriminator,
competing with each other to generate new instances that resemble those of the
probability distribution of the training data. GANs have a wide range of
applications in fields such as computer vision, semantic segmentation, time
series synthesis, image editing, natural language processing, and image
generation from text, among others. Generative models model the probability
distribution of a data set, but instead of providing a probability value, they
generate new instances that are close to the original distribution. GANs use a
learning scheme that allows the defining attributes of the probability
distribution to be encoded in a neural network, allowing instances to be
generated that resemble the original probability distribution. This article
presents the theoretical foundations of this type of network as well as the
basic architecture schemes and some of its applications. This article is in
Spanish to facilitate the arrival of this scientific knowledge to the
Spanish-speaking community.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 14:39:51 GMT"
}
] | 1,676,937,600,000 | [
[
"de la Torre",
"Jordi",
""
]
] |
2302.09363 | Jordi De La Torre | Jordi de la Torre | Autocodificadores Variacionales (VAE) Fundamentos Te\'oricos y
Aplicaciones | 15 pages, in Spanish language, 2 figures, review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | VAEs are probabilistic graphical models based on neural networks that allow
the coding of input data in a latent space formed by simpler probability
distributions and the reconstruction, based on such latent variables, of the
source data. After training, the reconstruction network, called decoder, is
capable of generating new elements belonging to a close distribution, ideally
equal to the original one. This article has been written in Spanish to
facilitate the arrival of this scientific knowledge to the Spanish-speaking
community.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 15:29:55 GMT"
}
] | 1,676,937,600,000 | [
[
"de la Torre",
"Jordi",
""
]
] |
2302.09378 | Jordi De La Torre | Jordi de la Torre | Modelos Generativos basados en Mecanismos de Difusi\'on | 11 pages, in Spanish language, 3 figures, review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diffusion-based generative models are a design framework that allows
generating new images from processes analogous to those found in
non-equilibrium thermodynamics. These models model the reversal of a physical
diffusion process in which two miscible liquids of different colors
progressively mix until they form a homogeneous mixture. Diffusion models can
be applied to signals of a different nature, such as audio and image signals.
In the image case, a progressive pixel corruption process is carried out by
applying random noise, and a neural network is trained to revert each one of
the corruption steps. For the reconstruction process to be reversible, it is
necessary to carry out the corruption very progressively. If the training of
the neural network is successful, it will be possible to generate an image from
random noise by chaining a number of steps similar to those used for image
deconstruction at training time. In this article we present the theoretical
foundations on which this method is based as well as some of its applications.
This article is in Spanish to facilitate the arrival of this scientific
knowledge to the Spanish-speaking community.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 16:34:31 GMT"
}
] | 1,676,937,600,000 | [
[
"de la Torre",
"Jordi",
""
]
] |
2302.09425 | James Ainooson | James Ainooson, Deepayan Sanyal, Joel P. Michelson, Yuan Yang,
Maithilee Kunda | A Neurodiversity-Inspired Solver for the Abstraction \& Reasoning Corpus
(ARC) Using Visual Imagery and Program Synthesis | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Core knowledge about physical objects -- e.g., their permanency, spatial
transformations, and interactions -- is one of the most fundamental building
blocks of biological intelligence across humans and non-human animals. While AI
techniques in certain domains (e.g. vision, NLP) have advanced dramatically in
recent years, no current AI systems can yet match human abilities in flexibly
applying core knowledge to solve novel tasks. We propose a new AI approach to
core knowledge that combines 1) visual representations of core knowledge
inspired by human mental imagery abilities, especially as observed in studies
of neurodivergent individuals; with 2) tree-search-based program synthesis for
flexibly combining core knowledge to form new reasoning strategies on the fly.
We demonstrate our system's performance on the very difficult Abstraction \&
Reasoning Corpus (ARC) challenge, and we share experimental results from
publicly available ARC items as well as from our 4th-place finish on the
private test set during the 2022 global ARCathon challenge.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 21:30:44 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Mar 2023 03:27:37 GMT"
},
{
"version": "v3",
"created": "Tue, 31 Oct 2023 18:05:28 GMT"
}
] | 1,698,883,200,000 | [
[
"Ainooson",
"James",
""
],
[
"Sanyal",
"Deepayan",
""
],
[
"Michelson",
"Joel P.",
""
],
[
"Yang",
"Yuan",
""
],
[
"Kunda",
"Maithilee",
""
]
] |
2302.09443 | Sudeep Pasricha | Danish Gufran, Saideep Tiku, Sudeep Pasricha | VITAL: Vision Transformer Neural Networks for Accurate Smartphone
Heterogeneity Resilient Indoor Localization | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Wi-Fi fingerprinting-based indoor localization is an emerging embedded
application domain that leverages existing Wi-Fi access points (APs) in
buildings to localize users with smartphones. Unfortunately, the heterogeneity
of wireless transceivers across diverse smartphones carried by users has been
shown to reduce the accuracy and reliability of localization algorithms. In
this paper, we propose a novel framework based on vision transformer neural
networks called VITAL that addresses this important challenge. Experiments
indicate that VITAL can reduce the uncertainty created by smartphone
heterogeneity while improving localization accuracy from 41% to 68% over the
best-known prior works. We also demonstrate the generalizability of our
approach and propose a data augmentation technique that can be integrated into
most deep learning-based localization frameworks to improve accuracy.
| [
{
"version": "v1",
"created": "Sat, 18 Feb 2023 23:43:45 GMT"
}
] | 1,676,937,600,000 | [
[
"Gufran",
"Danish",
""
],
[
"Tiku",
"Saideep",
""
],
[
"Pasricha",
"Sudeep",
""
]
] |
2302.09463 | Caesar Wu | Caesar Wu and Pascal Bouvry | The Emerging Artificial Intelligence Protocol for Hierarchical
Information Network | 6 pages, 4 figures, 1 table | ICOIN 2023 | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The recent development of artificial intelligence enables a machine to
achieve a human level of intelligence. Problem-solving and decision-making are
two mental abilities to measure human intelligence. Many scholars have proposed
different models. However, there is a gap in establishing an AI-oriented
hierarchical model with a multilevel abstraction. This study proposes a novel
model known as the emerged AI protocol that consists of seven distinct layers
capable of providing an optimal and explainable solution for a given problem.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2023 02:56:50 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 10:24:04 GMT"
}
] | 1,677,110,400,000 | [
[
"Wu",
"Caesar",
""
],
[
"Bouvry",
"Pascal",
""
]
] |
2302.09484 | Weitang Liu | Weitang Liu, Ying-Wai Li, Yi-Zhuang You, Jingbo Shang | Gradient-based Wang-Landau Algorithm: A Novel Sampler for Output
Distribution of Neural Networks over the Input Space | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The output distribution of a neural network (NN) over the entire input space
captures the complete input-output mapping relationship, offering insights
toward a more comprehensive NN understanding. Exhaustive enumeration or
traditional Monte Carlo methods for the entire input space can exhibit
impractical sampling time, especially for high-dimensional inputs. To make such
difficult sampling computationally feasible, in this paper, we propose a novel
Gradient-based Wang-Landau (GWL) sampler. We first draw the connection between
the output distribution of a NN and the density of states (DOS) of a physical
system. Then, we renovate the classic sampler for the DOS problem, the
Wang-Landau algorithm, by replacing its random proposals with gradient-based
Monte Carlo proposals. This way, our GWL sampler investigates the
under-explored subsets of the input space much more efficiently. Extensive
experiments have verified the accuracy of the output distribution generated by
GWL and also showcased several interesting findings - for example, in a binary
image classification task, both CNN and ResNet mapped the majority of human
unrecognizable images to very negative logit values.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2023 05:42:30 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 05:50:26 GMT"
}
] | 1,677,024,000,000 | [
[
"Liu",
"Weitang",
""
],
[
"Li",
"Ying-Wai",
""
],
[
"You",
"Yi-Zhuang",
""
],
[
"Shang",
"Jingbo",
""
]
] |
2302.09620 | Qihao (Joe) Shi | Qihao Shi, Wenjie Tian, Wujian Yang, Mengqi Xue, Can Wang, Minghui Wu | Jointly Complementary&Competitive Influence Maximization with Concurrent
Ally-Boosting and Rival-Preventing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new influence spread model, namely,
Complementary\&Competitive Independent Cascade (C$^2$IC) model. C$^2$IC model
generalizes three well-known influence models, i.e., the influence boosting (IB)
model, campaign oblivious (CO)IC model and the IC-N (IC model with negative
opinions) model. This is the first model that considers both complementary and
competitive influence spread comprehensively under multi-agent environment.
Correspondingly, we propose the Complementary\&Competitive influence
maximization (C$^2$IM) problem. Given an ally seed set and a rival seed set,
the C$^2$IM problem aims to select a set of assistant nodes that can boost the
ally spread and prevent the rival spread concurrently. We show the problem is
NP-hard and can generalize the influence boosting problem and the influence
blocking problem. With classifying the different cascade priorities into 4
cases by the monotonicity and submodularity (M\&S) holding conditions, we
design 4 algorithms respectively, with theoretical approximation bounds
provided. We conduct extensive experiments on real social networks and the
experimental results demonstrate the effectiveness of the proposed algorithms.
We hope this work can inspire abundant future exploration for constructing more
generalized influence models that help streamline the works of this area.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2023 16:41:53 GMT"
}
] | 1,676,937,600,000 | [
[
"Shi",
"Qihao",
""
],
[
"Tian",
"Wenjie",
""
],
[
"Yang",
"Wujian",
""
],
[
"Xue",
"Mengqi",
""
],
[
"Wang",
"Can",
""
],
[
"Wu",
"Minghui",
""
]
] |
2302.09646 | Philip Cohen | Philip R. Cohen and Lucian Galescu | A Planning-Based Explainable Collaborative Dialogue System | 61 pages, 8 figures, 3 appendices; V2 fixes two erroneous
cross-references | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Eva is a multimodal conversational system that helps users to accomplish
their domain goals through collaborative dialogue. The system does this by
inferring users' intentions and plans to achieve those goals, detects whether
obstacles are present, finds plans to overcome them or to achieve higher-level
goals, and plans its actions, including speech acts, to help users accomplish
those goals. In doing so, the system maintains and reasons with its own
beliefs, goals and intentions, and explicitly reasons about those of its user.
Belief reasoning is accomplished with a modal Horn-clause meta-interpreter. The
planning and reasoning subsystems obey the principles of persistent goals and
intentions, including the formation and decomposition of intentions to perform
complex actions, as well as the conditions under which they can be given up. In
virtue of its planning process, the system treats its speech acts just like its
other actions -- physical acts affect physical states, digital acts affect
digital states, and speech acts affect mental and social states. This general
approach enables Eva to plan a variety of speech acts including requests,
informs, questions, confirmations, recommendations, offers, acceptances,
greetings, and emotive expressions. Each of these has a formally specified
semantics which is used during the planning and reasoning processes. Because it
can keep track of different users' mental states, it can engage in multi-party
dialogues. Importantly, Eva can explain its utterances because it has created a
plan standing behind each of them. Finally, Eva employs multimodal input and
output, driving an avatar that can perceive and employ facial and head
movements along with emotive speech acts.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2023 18:29:54 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 20:04:13 GMT"
}
] | 1,678,060,800,000 | [
[
"Cohen",
"Philip R.",
""
],
[
"Galescu",
"Lucian",
""
]
] |
2302.09665 | Zirong Chen | Zirong Chen, Issa Li, Haoxiang Zhang, Sarah Preum, John A. Stankovic,
Meiyi Ma | CitySpec with Shield: A Secure Intelligent Assistant for Requirement
Formalization | arXiv admin note: substantial text overlap with arXiv:2206.03132 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | An increasing number of monitoring systems have been developed in smart
cities to ensure that the real-time operations of a city satisfy safety and
performance requirements. However, many existing city requirements are written
in English with missing, inaccurate, or ambiguous information. There is a high
demand for assisting city policymakers in converting human-specified
requirements to machine-understandable formal specifications for monitoring
systems. To tackle this limitation, we build CitySpec, the first intelligent
assistant system for requirement specification in smart cities. To create
CitySpec, we first collect over 1,500 real-world city requirements across
different domains (e.g., transportation and energy) from over 100 cities and
extract city-specific knowledge to generate a dataset of city vocabulary with
3,061 words. We also build a translation model and enhance it through
requirement synthesis and develop a novel online learning framework with
shielded validation. The evaluation results on real-world city requirements
show that CitySpec increases the sentence-level accuracy of requirement
specification from 59.02% to 86.64%, and has strong adaptability to a new city
and a new domain (e.g., the F1 score for requirements in Seattle increases from
77.6% to 93.75% with online learning). After the enhancement from the shield
function, CitySpec is now immune to most known textual adversarial inputs
(e.g., the attack success rate of DeepWordBug after the shield function is
reduced to 0% from 82.73%). We test the CitySpec with 18 participants from
different domains. CitySpec shows its strong usability and adaptability to
different domains, and also its robustness to malicious inputs.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2023 20:11:06 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2023 23:25:57 GMT"
}
] | 1,680,480,000,000 | [
[
"Chen",
"Zirong",
""
],
[
"Li",
"Issa",
""
],
[
"Zhang",
"Haoxiang",
""
],
[
"Preum",
"Sarah",
""
],
[
"Stankovic",
"John A.",
""
],
[
"Ma",
"Meiyi",
""
]
] |
2302.09800 | Jinsheng Yang | Jinsheng Yang, Yuanhai Shao, ChunNa Li | CNTS: Cooperative Network for Time Series | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of deep learning techniques in detecting anomalies in time series
data has been an active area of research with a long history of development and
a variety of approaches. In particular, reconstruction-based unsupervised
anomaly detection methods have gained popularity due to their intuitive
assumptions and low computational requirements. However, these methods are
often susceptible to outliers and do not effectively model anomalies, leading
to suboptimal results. This paper presents a novel approach for unsupervised
anomaly detection, called the Cooperative Network Time Series (CNTS) approach.
The CNTS system consists of two components: a detector and a reconstructor. The
detector is responsible for directly detecting anomalies, while the
reconstructor provides reconstruction information to the detector and updates
its learning based on anomalous information received from the detector. The
central aspect of CNTS is a multi-objective optimization problem, which is
solved through a cooperative solution strategy. Experiments on three real-world
datasets demonstrate the state-of-the-art performance of CNTS and confirm the
cooperative effectiveness of the detector and reconstructor. The source code
for this study is publicly available on GitHub.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2023 06:55:10 GMT"
}
] | 1,676,937,600,000 | [
[
"Yang",
"Jinsheng",
""
],
[
"Shao",
"Yuanhai",
""
],
[
"Li",
"ChunNa",
""
]
] |
2302.09863 | Francoise Grelaud | Chantal Reynaud (LRI), Nathalie Aussenac-Gilles (IRIT-MELODI, CNRS),
Pierre Tchounikine (LIUM, MeTAH), Franckie Trichet (LIUM) | The notion of role in conceptual modelling | Conference dates: October 1997 | 10th European Workshop Knowledge Acquisition, Modeling and
Management (EKAW 1997), Oct 1997, Sant Feliu de Guixols, Catalonia, Spain.
pp.221--236 | 10.1007/BFb0026788 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we analyse the notion of knowledge role. First of all, we
present how the relationship between problem solving methods and domain models
is tackled in different approaches. We concentrate on how they cope with this
issue in the knowledge engineering process. Secondly, we introduce several
properties which can be used to analyse, characterise and define the notion of
role. We evaluate and compare the works presented previously along these
dimensions. This analysis suggests some developments to better exploit the
relationship between reasoning and domain knowledge. We present them in a last
section.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2023 09:53:10 GMT"
}
] | 1,676,937,600,000 | [
[
"Reynaud",
"Chantal",
"",
"LRI"
],
[
"Aussenac-Gilles",
"Nathalie",
"",
"IRIT-MELODI, CNRS"
],
[
"Tchounikine",
"Pierre",
"",
"LIUM, MeTAH"
],
[
"Trichet",
"Franckie",
"",
"LIUM"
]
] |
2302.09891 | Yu Shi | Yu Shi, Ning Xu, Hua Yuan and Xin Geng | Unreliable Partial Label Learning with Recursive Separation | Accepted by IJCAI2023, see
https://www.ijcai.org/proceedings/2023/0468 | Proceedings of the Thirty-Second International Joint Conference on
Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR,
China, 4208-4216 | 10.24963/ijcai.2023/468 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partial label learning (PLL) is a typical weakly supervised learning problem
in which each instance is associated with a candidate label set, and among
which only one is true. However, the assumption that the ground-truth label is
always among the candidate label set would be unrealistic, as the reliability
of the candidate label sets in real-world applications cannot be guaranteed by
annotators. Therefore, a generalized PLL named Unreliable Partial Label
Learning (UPLL) is proposed, in which the true label may not be in the
candidate label set. Due to the challenges posed by unreliable labeling,
previous PLL methods will experience a marked decline in performance when
applied to UPLL. To address the issue, we propose a two-stage framework named
Unreliable Partial Label Learning with Recursive Separation (UPLLRS). In the
first stage, the self-adaptive recursive separation strategy is proposed to
separate the training set into a reliable subset and an unreliable subset. In
the second stage, a disambiguation strategy is employed to progressively
identify the ground-truth labels in the reliable subset. Simultaneously,
semi-supervised learning methods are adopted to extract valuable information
from the unreliable subset. Our method demonstrates state-of-the-art
performance as evidenced by experimental results, particularly in situations of
high unreliability. Code and supplementary materials are available at
https://github.com/dhiyu/UPLLRS.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2023 10:39:31 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 14:10:46 GMT"
}
] | 1,693,353,600,000 | [
[
"Shi",
"Yu",
""
],
[
"Xu",
"Ning",
""
],
[
"Yuan",
"Hua",
""
],
[
"Geng",
"Xin",
""
]
] |
2302.09934 | Litian Zhang | Litian Zhang, Xiaoming Zhang, Ziming Guo, Zhipeng Liu | CISum: Learning Cross-modality Interaction to Enhance Multimodal
Semantic Coverage for Multimodal Summarization | accepted by SIAM SDM2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal summarization (MS) aims to generate a summary from multimodal
input. Previous works mainly focus on textual semantic coverage metrics such as
ROUGE, treating the visual content as supplemental data. As a result, the
summary fails to cover the semantics of the different modalities. This
paper proposes a multi-task cross-modality learning framework (CISum) to
improve multimodal semantic coverage by learning the cross-modality interaction
in the multimodal article. To obtain the visual semantics, we translate images
into visual descriptions based on the correlation with text content. Then, the
visual description and text content are fused to generate the textual summary
to capture the semantics of the multimodal content, and the most relevant image
is selected as the visual summary. Furthermore, we design an automatic
multimodal semantics coverage metric to evaluate the performance. Experimental
results show that CISum outperforms baselines in multimodal semantics coverage
metrics while maintaining the excellent performance of ROUGE and BLEU.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2023 11:57:23 GMT"
}
] | 1,676,937,600,000 | [
[
"Zhang",
"Litian",
""
],
[
"Zhang",
"Xiaoming",
""
],
[
"Guo",
"Ziming",
""
],
[
"Liu",
"Zhipeng",
""
]
] |
2302.10146 | Rashid Mehmood PhD | Abeer Abdullah Alaql, Fahad Alqurashi, Rashid Mehmood | Multi-generational labour markets: data-driven discovery of
multi-perspective system parameters using machine learning | 77 Pages, 3 Tables, 13 Figures, Submitted, Under Review | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Economic issues, such as inflation, energy costs, taxes, and interest rates,
are a constant presence in our daily lives and have been exacerbated by global
events such as pandemics, environmental disasters, and wars. A sustained
history of financial crises reveals significant weaknesses and vulnerabilities
in the foundations of modern economies. Another significant issue currently is
people quitting their jobs in large numbers. Moreover, many organizations have
a diverse workforce comprising multiple generations posing new challenges.
Transformative approaches in economics and labour markets are needed to protect
our societies, economies, and planet. In this work, we use big data and machine
learning methods to discover multi-perspective parameters for
multi-generational labour markets. The parameters for the academic perspective
are discovered using 35,000 article abstracts from the Web of Science for the
period 1958-2022 and for the professionals' perspective using 57,000 LinkedIn
posts from 2022. We discover a total of 28 parameters and categorised them into
5 macro-parameters, Learning & Skills, Employment Sectors, Consumer Industries,
Learning & Employment Issues, and Generations-specific Issues. A complete
machine learning software tool is developed for data-driven parameter
discovery. A variety of quantitative and visualisation methods are applied and
multiple taxonomies are extracted to explore multi-generational labour markets.
A knowledge structure and literature review of multi-generational labour
markets using over 100 research articles is provided. It is expected that this
work will enhance the theory and practice of AI-based methods for knowledge
discovery and system parameter discovery to develop autonomous capabilities and
systems and promote novel approaches to labour economics and markets, leading
to the development of sustainable societies and economies.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2023 18:25:10 GMT"
}
] | 1,676,937,600,000 | [
[
"Alaql",
"Abeer Abdullah",
""
],
[
"Alqurashi",
"Fahad",
""
],
[
"Mehmood",
"Rashid",
""
]
] |
2302.10407 | Yuchen Wang | Yuchen Wang, Jinghui Zhang, Zhengjie Huang, Weibin Li, Shikun Feng,
Ziheng Ma, Yu Sun, Dianhai Yu, Fang Dong, Jiahui Jin, Beilun Wang and Junzhou
Luo | Label Information Enhanced Fraud Detection against Low Homophily in
Graphs | Accepted to The ACM Webconf 2023 | null | 10.1145/3543507.3583373 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Node classification is a substantial problem in graph-based fraud detection.
Many existing works adopt Graph Neural Networks (GNNs) to enhance fraud
detectors. While promising, currently most GNN-based fraud detectors fail to
generalize to the low homophily setting. Besides, label utilization has been
proven to be a significant factor in the node classification problem. However,
we find it is less effective in fraud detection tasks due to the low homophily in
graphs. In this work, we propose GAGA, a novel Group AGgregation enhanced
TrAnsformer, to tackle the above challenges. Specifically, the group
aggregation provides a portable method to cope with the low homophily issue.
Such an aggregation explicitly integrates the label information to generate
distinguishable neighborhood information. Along with group aggregation, an
attempt towards end-to-end trainable group encoding is proposed which augments
the original feature space with the class labels. Meanwhile, we devise two
additional learnable encodings to recognize the structural and relational
context. Then, we combine the group aggregation and the learnable encodings
into a Transformer encoder to capture the semantic information. Experimental
results clearly show that GAGA outperforms other competitive graph-based fraud
detectors by up to 24.39% on two trending public datasets and a real-world
industrial dataset from Anonymous. Even more, the group aggregation is
demonstrated to outperform other label utilization methods (e.g., C&S,
BoT/UniMP) in the low homophily setting.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 02:42:28 GMT"
}
] | 1,677,024,000,000 | [
[
"Wang",
"Yuchen",
""
],
[
"Zhang",
"Jinghui",
""
],
[
"Huang",
"Zhengjie",
""
],
[
"Li",
"Weibin",
""
],
[
"Feng",
"Shikun",
""
],
[
"Ma",
"Ziheng",
""
],
[
"Sun",
"Yu",
""
],
[
"Yu",
"Dianhai",
""
],
[
"Dong",
"Fang",
""
],
[
"Jin",
"Jiahui",
""
],
[
"Wang",
"Beilun",
""
],
[
"Luo",
"Junzhou",
""
]
] |
2302.10439 | Marcus Hoerger | Marcus Hoerger, Hanna Kurniawati, Dirk Kroese, Nan Ye | Adaptive Discretization using Voronoi Trees for Continuous POMDPs | Submitted to The International Journal of Robotics Research (IJRR).
arXiv admin note: substantial text overlap with arXiv:2209.05733 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving continuous Partially Observable Markov Decision Processes (POMDPs) is
challenging, particularly for high-dimensional continuous action spaces. To
alleviate this difficulty, we propose a new sampling-based online POMDP solver,
called Adaptive Discretization using Voronoi Trees (ADVT). It uses Monte Carlo
Tree Search in combination with an adaptive discretization of the action space
as well as optimistic optimization to efficiently sample high-dimensional
continuous action spaces and compute the best action to perform. Specifically,
we adaptively discretize the action space for each sampled belief using a
hierarchical partition called Voronoi tree, which is a Binary Space
Partitioning that implicitly maintains the partition of a cell as the Voronoi
diagram of two points sampled from the cell. ADVT uses the estimated diameters
of the cells to form an upper-confidence bound on the action value function
within the cell, guiding the Monte Carlo Tree Search expansion and further
discretization of the action space. This enables ADVT to better exploit local
information with respect to the action value function, allowing faster
identification of the most promising regions in the action space, compared to
existing solvers. Voronoi trees keep the cost of partitioning and estimating
the diameter of each cell low, even in high-dimensional spaces where many
sampled points are required to cover the space well. ADVT additionally handles
continuous observation spaces, by adopting an observation progressive widening
strategy, along with a weighted particle representation of beliefs.
Experimental results indicate that ADVT scales substantially better to
high-dimensional continuous action spaces, compared to state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 04:47:34 GMT"
}
] | 1,677,024,000,000 | [
[
"Hoerger",
"Marcus",
""
],
[
"Kurniawati",
"Hanna",
""
],
[
"Kroese",
"Dirk",
""
],
[
"Ye",
"Nan",
""
]
] |
2302.10503 | Trang Nguyen | Trang Nguyen, Amin Mansouri, Kanika Madan, Khuong Nguyen, Kartik
Ahuja, Dianbo Liu, and Yoshua Bengio | Reusable Slotwise Mechanisms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Agents with the ability to comprehend and reason about the dynamics of
objects would be expected to exhibit improved robustness and generalization in
novel scenarios. However, achieving this capability necessitates not only an
effective scene representation but also an understanding of the mechanisms
governing interactions among object subsets. Recent studies have made
significant progress in representing scenes using object slots. In this work,
we introduce Reusable Slotwise Mechanisms, or RSM, a framework that models
object dynamics by leveraging communication among slots along with a modular
architecture capable of dynamically selecting reusable mechanisms for
predicting the future states of each object slot. Crucially, RSM leverages the
Central Contextual Information (CCI), enabling selected mechanisms to access
the remaining slots through a bottleneck, effectively allowing for modeling of
higher order and complex interactions that might require a sparse subset of
objects. Experimental results demonstrate the superior performance of RSM
compared to state-of-the-art methods across various future prediction and
related downstream tasks, including Visual Question Answering and action
planning. Furthermore, we showcase RSM's Out-of-Distribution generalization
ability to handle scenes in intricate scenarios.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 08:07:27 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Oct 2023 07:33:22 GMT"
}
] | 1,698,624,000,000 | [
[
"Nguyen",
"Trang",
""
],
[
"Mansouri",
"Amin",
""
],
[
"Madan",
"Kanika",
""
],
[
"Nguyen",
"Khuong",
""
],
[
"Ahuja",
"Kartik",
""
],
[
"Liu",
"Dianbo",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
2302.10522 | Wei Chen | Zhao and Chen | Feature selection algorithm based on incremental mutual information and
cockroach swarm optimization | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is an effective preprocessing technique to reduce data
dimension. For feature selection, rough set theory provides many measures,
among which mutual information is one of the most important attribute measures.
However, mutual information based importance measures are computationally
expensive and inaccurate, especially for large-sample instances, and feature
selection is undoubtedly an NP-hard problem in high-dimensional data sets.
Although many representative swarm intelligence based feature
selection strategies have been proposed so far to improve the accuracy, there
is still a bottleneck when using these feature selection algorithms to process
high-dimensional large-scale data sets, which consumes a lot of performance and
is easy to select weakly correlated and redundant features. In this study, we
propose an incremental mutual information based improved swarm intelligent
optimization method (IMIICSO), which uses rough set theory to calculate the
importance of feature selection based on mutual information. This method
extracts decision table reduction knowledge to guide the swarm algorithm's global
search. By exploring the computation of mutual information of supersamples, we
can not only discard the useless features to speed up the internal and external
computation, but also effectively reduce the cardinality of the optimal feature
subset by using IMIICSO method, so that the cardinality is minimized by
comparison. The accuracy of feature subsets selected by the improved cockroach
swarm algorithm based on incremental mutual information is better or almost the
same as that of the original swarm intelligent optimization algorithm.
Experiments using 10 datasets derived from UCI, including large scale and high
dimensional datasets, confirmed the efficiency and effectiveness of the
proposed algorithm.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 08:51:05 GMT"
}
] | 1,677,024,000,000 | [
[
"Zhao",
"",
""
],
[
"Chen",
"",
""
]
] |
2302.10567 | Hogun Park | Heesoo Jung, Sangpil Kim, Hogun Park | Dual Policy Learning for Aggregation Optimization in Graph Neural
Network-based Recommender Systems | Accepted by the Web Conference 2023 | null | 10.1145/3543507.3583241 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Graph Neural Networks (GNNs) provide powerful representations for
recommendation tasks. GNN-based recommendation systems capture the complex
high-order connectivity between users and items by aggregating information from
distant neighbors and can improve the performance of recommender systems.
Recently, Knowledge Graphs (KGs) have also been incorporated into the user-item
interaction graph to provide more abundant contextual information; they are
exploited to address cold-start problems and enable more explainable
aggregation in GNN-based recommender systems (GNN-Rs). However, due to the
heterogeneous nature of users and items, developing an effective aggregation
strategy that works across multiple GNN-Rs, such as LightGCN and KGAT, remains
a challenge. In this paper, we propose a novel reinforcement learning-based
message passing framework for recommender systems, which we call DPAO (Dual
Policy framework for Aggregation Optimization). This framework adaptively
determines high-order connectivity to aggregate users and items using dual
policy learning. Dual policy learning leverages two Deep-Q-Network models to
exploit the user- and item-aware feedback from a GNN-R and boost the
performance of the target GNN-R. Our proposed framework was evaluated with both
non-KG-based and KG-based GNN-R models on six real-world datasets, and their
results show that our proposed framework significantly enhances the recent base
model, improving nDCG and Recall by up to 63.7% and 42.9%, respectively. Our
implementation code is available at https://github.com/steve30572/DPAO/.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 09:47:27 GMT"
}
] | 1,677,024,000,000 | [
[
"Jung",
"Heesoo",
""
],
[
"Kim",
"Sangpil",
""
],
[
"Park",
"Hogun",
""
]
] |
2302.10648 | Tsuyoshi Kato | Yuya Takada and Tsuyoshi Kato | Multi-Target Tobit Models for Completing Water Quality Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring microbiological behaviors in water is crucial to manage public
health risk from waterborne pathogens, although quantifying the concentrations
of microbiological organisms in water is still challenging because
concentrations of many pathogens in water samples may often be below the
quantification limit, producing censoring data. To enable statistical analysis
based on quantitative values, the true values of non-detected measurements are
required to be estimated with high precision. Tobit model is a well-known
linear regression model for analyzing censored data. One drawback of the Tobit
model is that only the target variable is allowed to be censored. In this
study, we devised a novel extension of the classical Tobit model, called the
\emph{multi-target Tobit model}, to handle multiple censored variables
simultaneously by introducing multiple target variables. For fitting the new
model, a numerical stable optimization algorithm was developed based on
elaborate theories. Experiments conducted using several real-world water
quality datasets provided an evidence that estimating multiple columns jointly
gains a great advantage over estimating them separately.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:06:19 GMT"
}
] | 1,677,024,000,000 | [
[
"Takada",
"Yuya",
""
],
[
"Kato",
"Tsuyoshi",
""
]
] |
2302.10650 | Marc Serramia | Marc Serramia, William Seymour, Natalia Criado, Michael Luck | Predicting Privacy Preferences for Smart Devices as Norms | To be published in Proceedings of the 22nd International Conference
on Autonomous Agents and Multiagent Systems (AAMAS 2023) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Smart devices, such as smart speakers, are becoming ubiquitous, and users
expect these devices to act in accordance with their preferences. In
particular, since these devices gather and manage personal data, users expect
them to adhere to their privacy preferences. However, the current approach of
gathering these preferences consists in asking the users directly, which
usually triggers automatic responses failing to capture their true preferences.
In response, in this paper we present a collaborative filtering approach to
predict user preferences as norms. These preference predictions can be readily
adopted or can serve to assist users in determining their own preferences.
Using a dataset of privacy preferences of smart assistant users, we test the
accuracy of our predictions.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:07:30 GMT"
}
] | 1,677,024,000,000 | [
[
"Serramia",
"Marc",
""
],
[
"Seymour",
"William",
""
],
[
"Criado",
"Natalia",
""
],
[
"Luck",
"Michael",
""
]
] |
2302.10674 | Pedro Zuidberg Dos Martires | Pedro Zuidberg Dos Martires, Luc De Raedt, Angelika Kimmig | Declarative Probabilistic Logic Programming in Discrete-Continuous
Domains | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Over the past three decades, the logic programming paradigm has been
successfully expanded to support probabilistic modeling, inference and
learning. The resulting paradigm of probabilistic logic programming (PLP) and
its programming languages owes much of its success to a declarative semantics,
the so-called distribution semantics. However, the distribution semantics is
limited to discrete random variables only. While PLP has been extended in
various ways for supporting hybrid, that is, mixed discrete and continuous
random variables, we are still lacking a declarative semantics for hybrid PLP
that not only generalizes the distribution semantics and the modeling language
but also the standard inference algorithm that is based on knowledge
compilation. We contribute the hybrid distribution semantics together with the
hybrid PLP language DC-ProbLog and its inference engine infinitesimal algebraic
likelihood weighting (IALW). These have the original distribution semantics,
standard PLP languages such as ProbLog, and standard inference engines for PLP
based on knowledge compilation as special cases. Thus, we generalize the
state-of-the-art of PLP towards hybrid PLP in three different aspects:
semantics, language and inference. Furthermore, IALW is the first inference
algorithm for hybrid probabilistic programming based on knowledge compilation.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:50:38 GMT"
}
] | 1,677,024,000,000 | [
[
"Martires",
"Pedro Zuidberg Dos",
""
],
[
"De Raedt",
"Luc",
""
],
[
"Kimmig",
"Angelika",
""
]
] |
2302.10825 | Jiong Li | Jiong Li, Pratik Gajane | Curiosity-driven Exploration in Sparse-reward Multi-agent Reinforcement
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparsity of rewards while applying a deep reinforcement learning method
negatively affects its sample-efficiency. A viable solution to deal with the
sparsity of rewards is to learn via intrinsic motivation which advocates for
adding an intrinsic reward to the reward function to encourage the agent to
explore the environment and expand the sample space. Though intrinsic
motivation methods are widely used to improve data-efficient learning in the
reinforcement learning model, they also suffer from the so-called detachment
problem. In this article, we discuss the limitations of intrinsic curiosity
module in sparse-reward multi-agent reinforcement learning and propose a method
called I-Go-Explore that combines the intrinsic curiosity module with the
Go-Explore framework to alleviate the detachment problem.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2023 17:00:05 GMT"
}
] | 1,677,024,000,000 | [
[
"Li",
"Jiong",
""
],
[
"Gajane",
"Pratik",
""
]
] |
2302.11137 | Yiqi Zhao | Yiqi Zhao, Ziyan An, Xuqing Gao, Ayan Mukhopadhyay, Meiyi Ma | Fairguard: Harness Logic-based Fairness Rules in Smart Cities | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Smart cities operate on computational predictive frameworks that collect,
aggregate, and utilize data from large-scale sensor networks. However, these
frameworks are prone to multiple sources of data and algorithmic bias, which
often lead to unfair prediction results. In this work, we first demonstrate
that bias persists at a micro-level both temporally and spatially by studying
real city data from Chattanooga, TN. To alleviate the issue of such bias, we
introduce Fairguard, a micro-level temporal logic-based approach for fair smart
city policy adjustment and generation in complex temporal-spatial domains. The
Fairguard framework consists of two phases: first, we develop a static
generator that is able to reduce data bias based on temporal logic conditions
by minimizing correlations between selected attributes. Then, to ensure
fairness in predictive algorithms, we design a dynamic component to regulate
prediction results and generate future fair predictions by harnessing logic
rules. Evaluations show that logic-enabled static Fairguard can effectively
reduce the biased correlations while dynamic Fairguard can guarantee fairness
on protected groups at run-time with minimal impact on overall performance.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2023 04:14:09 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 01:51:30 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Mar 2023 21:47:38 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Apr 2023 04:35:54 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Apr 2023 04:49:09 GMT"
},
{
"version": "v6",
"created": "Fri, 21 Apr 2023 15:47:29 GMT"
},
{
"version": "v7",
"created": "Fri, 8 Sep 2023 16:46:02 GMT"
}
] | 1,694,390,400,000 | [
[
"Zhao",
"Yiqi",
""
],
[
"An",
"Ziyan",
""
],
[
"Gao",
"Xuqing",
""
],
[
"Mukhopadhyay",
"Ayan",
""
],
[
"Ma",
"Meiyi",
""
]
] |
2302.11165 | Songlin Zhai | Songlin Zhai, Weiqing Wang, Yuanfang Li, Yuan Meng | DNG: Taxonomy Expansion by Exploring the Intrinsic Directed Structure on
Non-gaussian Space | 7 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Taxonomy expansion is the process of incorporating a large number of
additional nodes (i.e., "queries") into an existing taxonomy (i.e., "seed"),
with the most important step being the selection of appropriate positions for
each query. Enormous efforts have been made by exploring the seed's structure.
However, existing approaches are deficient in their mining of structural
information in two ways: poor modeling of the hierarchical semantics and
failure to capture the directionality of the is-a relation. This paper seeks to address
these issues by explicitly denoting each node as the combination of inherited
feature (i.e., structural part) and incremental feature (i.e., supplementary
part). Specifically, the inherited feature originates from "parent" nodes and
is weighted by an inheritance factor. With this node representation, the
hierarchy of semantics in taxonomies (i.e., the inheritance and accumulation of
features from "parent" to "child") could be embodied. Additionally, based on
this representation, the directionality of the is-a relation could be easily
translated into the irreversible inheritance of features. Inspired by the
Darmois-Skitovich Theorem, we implement this irreversibility by a non-Gaussian
constraint on the supplementary feature. A log-likelihood learning objective is
further utilized to optimize the proposed model (dubbed DNG), whereby the
required non-Gaussianity is also theoretically ensured. Extensive experimental
results on two real-world datasets verify the superiority of DNG relative to
several strong baselines.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2023 06:15:02 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Mar 2023 13:28:02 GMT"
}
] | 1,679,443,200,000 | [
[
"Zhai",
"Songlin",
""
],
[
"Wang",
"Weiqing",
""
],
[
"Li",
"Yuanfang",
""
],
[
"Meng",
"Yuan",
""
]
] |
2302.11396 | Zhizhi Yu | Zhizhi Yu, Di Jin, Cuiying Huo, Zhiqiang Wang, Xiulong Liu, Heng Qi,
Jia Wu, Lingfei Wu | KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph
Neural Networks | Accepted by WWW-23 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Social Internet of Things (SIoT) is a promising and emerging paradigm that
injects the notion of social networking into smart objects (i.e., things),
paving the way for the next generation of the Internet of Things. However, due to
the risks and uncertainty, a crucial and urgent problem to be settled is
establishing reliable relationships within SIoT, that is, trust evaluation.
Graph neural networks for trust evaluation typically adopt a straightforward
way such as one-hot or node2vec to comprehend node characteristics, which
ignores the valuable semantic knowledge attached to nodes. Moreover, the
underlying structure of SIoT is usually complex, including both the
heterogeneous graph structure and pairwise trust relationships, which makes it
hard to preserve the properties of SIoT trust during information propagation.
To address these aforementioned problems, we propose a novel knowledge-enhanced
graph neural network (KGTrust) for better trust evaluation in SIoT.
Specifically, we first extract useful knowledge from users' comment behaviors
and external structured triples related to object descriptions, in order to
gain a deeper insight into the semantics of users and objects. Furthermore, we
introduce a discriminative convolutional layer that utilizes heterogeneous
graph structure, node semantics, and augmented trust relationships to learn
node embeddings from the perspective of a user as a trustor or a trustee,
effectively capturing multi-aspect properties of SIoT trust during information
propagation. Finally, a trust prediction layer is developed to estimate the
trust relationships between pairwise nodes. Extensive experiments on three
public datasets illustrate the superior performance of KGTrust over
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2023 14:24:45 GMT"
}
] | 1,677,110,400,000 | [
[
"Yu",
"Zhizhi",
""
],
[
"Jin",
"Di",
""
],
[
"Huo",
"Cuiying",
""
],
[
"Wang",
"Zhiqiang",
""
],
[
"Liu",
"Xiulong",
""
],
[
"Qi",
"Heng",
""
],
[
"Wu",
"Jia",
""
],
[
"Wu",
"Lingfei",
""
]
] |
2302.11563 | Matej Pechac | Matej Pech\'a\v{c}, Michal Chovanec, Igor Farka\v{s} | Self-supervised network distillation: an effective approach to
exploration in sparse reward environments | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning can solve decision-making problems and train an agent
to behave in an environment according to a predesigned reward function.
However, such an approach becomes very problematic if the reward is too sparse
and so the agent does not come across the reward during the environmental
exploration. The solution to such a problem may be to equip the agent with an
intrinsic motivation that will provide informed exploration during which the
agent is likely to also encounter external reward. Novelty detection is one of
the promising branches of intrinsic motivation research. We present
Self-supervised Network Distillation (SND), a class of intrinsic motivation
algorithms based on the distillation error as a novelty indicator, where the
predictor model and the target model are both trained. We adapted three
existing self-supervised methods for this purpose and experimentally tested
them on a set of ten environments that are considered difficult to explore. The
results show that our approach achieves faster growth and higher external
reward for the same training time compared to the baseline models, which
implies improved exploration in a very sparse reward environment. In addition,
the analytical methods we applied provide valuable explanatory insights into
our proposed models.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2023 18:58:09 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jul 2023 07:52:51 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Jan 2024 07:34:14 GMT"
}
] | 1,705,536,000,000 | [
[
"Pecháč",
"Matej",
""
],
[
"Chovanec",
"Michal",
""
],
[
"Farkaš",
"Igor",
""
]
] |
2302.11622 | Beomseok Kang | Beomseok Kang, Biswadeep Chakraborty, Saibal Mukhopadhyay | Unsupervised 3D Object Learning through Neuron Activity aware Plasticity | Published as a conference paper at ICLR 2023 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an unsupervised deep learning model for 3D object classification.
Conventional Hebbian learning, a well-known unsupervised model, suffers from
loss of local features leading to reduced performance for tasks with complex
geometric objects. We present a deep network with a novel Neuron Activity Aware
(NeAW) Hebbian learning rule that dynamically switches the neurons to be
governed by Hebbian learning or anti-Hebbian learning, depending on its
activity. We analytically show that NeAW Hebbian learning relieves the bias in
neuron activity, allowing more neurons to attend to the representation of the
3D objects. Empirical results show that the NeAW Hebbian learning outperforms
other variants of Hebbian learning and shows higher accuracy over fully
supervised models when training data is limited.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2023 19:57:12 GMT"
}
] | 1,677,196,800,000 | [
[
"Kang",
"Beomseok",
""
],
[
"Chakraborty",
"Biswadeep",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] |
2302.11871 | Mianxin Liu | Mianxin Liu, Jingyang Zhang, Yao Wang, Yan Zhou, Fang Xie, Qihao Guo,
Feng Shi, Han Zhang, Qian Wang, Dinggang Shen | Deep learning reveals the common spectrum underlying multiple brain
disorders in youth and elders from brain functional networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain disorders in the early and late life of humans potentially share
pathological alterations in brain functions. However, the key evidence from
neuroimaging data for pathological commonness remains unrevealed. To explore
this hypothesis, we build a deep learning model, using multi-site functional
magnetic resonance imaging data (N=4,410, 6 sites), for classifying 5 different
brain disorders from healthy controls, with a set of common features. Our model
achieves 62.6(1.9)% overall classification accuracy on data from the 6
investigated sites and detects a set of commonly affected functional
subnetworks at different spatial scales, including default mode, executive
control, visual, and limbic networks. In the deep-layer feature representation
for individual data, we observe young and aging patients with disorders are
continuously distributed, which is in line with the clinical concept of the
"spectrum of disorders". The revealed spectrum underlying early- and late-life
brain disorders promotes the understanding of disorder comorbidities in the
lifespan.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 09:22:05 GMT"
}
] | 1,677,196,800,000 | [
[
"Liu",
"Mianxin",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Wang",
"Yao",
""
],
[
"Zhou",
"Yan",
""
],
[
"Xie",
"Fang",
""
],
[
"Guo",
"Qihao",
""
],
[
"Shi",
"Feng",
""
],
[
"Zhang",
"Han",
""
],
[
"Wang",
"Qian",
""
],
[
"Shen",
"Dinggang",
""
]
] |
2302.11880 | Md. Rezaul Karim | Md. Rezaul Karim and Felix Hermsen and Sisay Adugna Chala and Paola de
Perthuis and Avikarsha Mandal | Catch Me If You Can: Semi-supervised Graph Learning for Spotting Money
Laundering | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Money laundering is the process where criminals use financial services to
move massive amounts of illegal money to untraceable destinations and integrate
them into legitimate financial systems. It is crucial to identify such
activities accurately and reliably in order to enforce anti-money laundering
(AML) regulations. Despite tremendous AML efforts, only a tiny fraction of
illicit activities is prevented. From a given graph of money transfers between
bank accounts, existing approaches have attempted to detect money laundering.
In particular, some approaches employ structural and behavioural dynamics of
dense subgraph detection, thereby not taking into consideration that money
laundering involves high-volume flows of funds through chains of bank accounts.
Some approaches model the transactions in the form of multipartite graphs to
detect the complete flow of money from source to destination. However, existing
approaches yield lower detection accuracy, making them less reliable. In this
paper, we employ semi-supervised graph learning techniques on graphs of
financial transactions in order to identify nodes involved in potential money
laundering. Experimental results suggest that our approach can spot money
laundering in real and synthetic transaction graphs.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 09:34:19 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 11:42:17 GMT"
}
] | 1,677,456,000,000 | [
[
"Karim",
"Md. Rezaul",
""
],
[
"Hermsen",
"Felix",
""
],
[
"Chala",
"Sisay Adugna",
""
],
[
"de Perthuis",
"Paola",
""
],
[
"Mandal",
"Avikarsha",
""
]
] |
2302.11885 | Christian Wagner | Stephen B. Broomell, Christian Wagner | The Joint Weighted Average (JWA) Operator | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Information aggregation is a vital tool for human and machine decision making
in the presence of uncertainty. Traditionally, approaches to aggregation
broadly diverge into two categories, those which attribute a worth or weight to
information sources and those which attribute said worth to the evidence
arising from said sources. The latter is pervasive in the physical sciences,
underpinning linear order statistics and enabling non-linear aggregation. The
former is popular in the social sciences, providing interpretable insight on
the sources. While prior work has identified the need to apply both approaches
simultaneously, it has yet to conceptually integrate both approaches and
provide a semantic interpretation of the arising aggregation approach. Here, we
conceptually integrate both approaches in a novel joint weighted averaging
operator. We leverage compositional geometry to underpin this integration,
showing how it provides a systematic basis for the combination of weighted
aggregation operators--which has thus far not been considered in the
literature. We proceed to show how the resulting operator systematically
integrates a priori beliefs about the worth of both sources and evidence,
reflecting the semantic integration of both weighting strategies. We conclude
and highlight the potential of the operator across disciplines, from machine
learning to psychology.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 09:48:49 GMT"
},
{
"version": "v2",
"created": "Thu, 2 May 2024 18:03:50 GMT"
}
] | 1,714,953,600,000 | [
[
"Broomell",
"Stephen B.",
""
],
[
"Wagner",
"Christian",
""
]
] |
2302.11909 | Dmitry Maximov | Dmitry Maximov, Vladimir I. Goncharenko, Yury S. Legovich | Multi-Valued Neural Networks I A Multi-Valued Associative Memory | This is a version with a corrected Theorem 3 (Theorem 2 in the published
variant) | Neural Computing and Applications. 2021. Vol. 33 (16). P.
10189-10198 | 10.1007/s00521-021-05781-6 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A new concept of a multi-valued associative memory is introduced,
generalizing a similar one in fuzzy neural networks. We expand the results on
fuzzy associative memory with thresholds, to the case of a multi-valued one: we
introduce the novel concept of such a network without numbers, investigate its
properties, and give a learning algorithm in the multi-valued case. We
discovered conditions under which it is possible to store given pairs of
network variable patterns in such a multi-valued associative memory. In the
multi-valued neural network, all variables are not numbers, but elements or
subsets of a lattice, i.e., they are all only partially-ordered. Lattice
operations are used to build the network output by inputs. In this paper, the
lattice is assumed to be Brouwer and determines the implication used, together
with other lattice operations, to determine the neural network output. We give
an example of using the network to classify aircraft/spacecraft trajectories.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 10:32:25 GMT"
}
] | 1,677,196,800,000 | [
[
"Maximov",
"Dmitry",
""
],
[
"Goncharenko",
"Vladimir I.",
""
],
[
"Legovich",
"Yury S.",
""
]
] |
2302.11965 | Hanxiao Tan | Hanxiao Tan | The Generalizability of Explanations | null | null | 10.1109/IJCNN54540.2023.10191972 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the absence of ground truth, objective evaluation of explainability
methods is an essential research direction. So far, the vast majority of
evaluations can be summarized into three categories, namely human evaluation,
sensitivity testing, and sanity checks. This work proposes a novel evaluation
methodology from the perspective of generalizability. We employ an Autoencoder
to learn the distributions of the generated explanations and observe their
learnability as well as the plausibility of the learned distributional
features. We first briefly demonstrate the evaluation idea of the proposed
approach on LIME, and then quantitatively evaluate multiple popular
explainability methods. We also find that smoothing the explanations with
SmoothGrad can significantly enhance the generalizability of explanations.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 12:25:59 GMT"
}
] | 1,714,521,600,000 | [
[
"Tan",
"Hanxiao",
""
]
] |
2302.12075 | Zolzaya Dashdorj | Zolzaya Dashdorj and Stanislav Grigorev and Munguntsatsral Dovdondash | Explorative analysis of human disease-symptoms relations using the
Convolutional Neural Network | 9 pages, 5 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the field of health-care and bio-medical research, understanding the
relationship between the symptoms of diseases is crucial for early diagnosis
and determining hidden relationships between diseases. The study aimed to
understand the extent of symptom types in disease prediction tasks. In this
research, we analyze a pre-generated symptom-based human disease dataset and
demonstrate the degree of predictability for each disease based on the
Convolutional Neural Network and the Support Vector Machine. Ambiguity of
disease is studied using the K-Means and the Principal Component Analysis. Our
results indicate that machine learning can potentially diagnose diseases with
98-100% accuracy in the early stage, taking the characteristics of symptoms
into account. Our results highlight that unusual symptom types are a good
proxy for accurate early disease identification. We also highlight that
unusual symptoms increase the accuracy of the disease prediction task.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 15:02:07 GMT"
}
] | 1,677,196,800,000 | [
[
"Dashdorj",
"Zolzaya",
""
],
[
"Grigorev",
"Stanislav",
""
],
[
"Dovdondash",
"Munguntsatsral",
""
]
] |