id (stringlengths 9-10) | submitter (stringlengths 5-47, ⌀) | authors (stringlengths 5-1.72k) | title (stringlengths 11-234) | comments (stringlengths 1-491, ⌀) | journal-ref (stringlengths 4-396, ⌀) | doi (stringlengths 13-97, ⌀) | report-no (stringlengths 4-138, ⌀) | categories (stringclasses, 1 value) | license (stringclasses, 9 values) | abstract (stringlengths 29-3.66k) | versions (listlengths 1-21) | update_date (int64 1,180B-1,718B) | authors_parsed (sequencelengths 1-98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
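Each row below is one arXiv metadata record following the schema above: `versions` is a list of `{version, created}` objects, `authors_parsed` is a list of `[last, first, suffix(, affiliation)]` entries, and the `update_date` values (e.g. 1,655,251,200,000) are consistent with millisecond Unix timestamps. The following is a minimal sketch of how such a row could be parsed; it assumes newline-delimited JSON storage and the millisecond-timestamp interpretation, and `parse_record` and its field handling are illustrative, not part of the dataset itself.

```python
import json
from datetime import datetime, timezone

def parse_record(line: str) -> dict:
    """Parse one JSON-encoded row that follows the column schema above.

    Assumptions (not guaranteed by the dataset): rows are stored as
    newline-delimited JSON and `update_date` is a Unix timestamp in milliseconds.
    """
    row = json.loads(line)
    # `versions` is a list of {"version": ..., "created": ...} dicts; take the latest.
    latest_version = row["versions"][-1]["version"] if row["versions"] else None
    # Convert the millisecond timestamp to a UTC calendar date.
    updated = datetime.fromtimestamp(row["update_date"] / 1000, tz=timezone.utc).date()
    # `authors_parsed` entries look like [last, first, suffix, (affiliation)].
    authors = [" ".join(p for p in (entry[1], entry[0]) if p) for entry in row["authors_parsed"]]
    return {
        "id": row["id"],
        "title": " ".join(row["title"].split()),  # collapse line-wrapped whitespace
        "authors": authors,
        "categories": row["categories"],
        "latest_version": latest_version,
        "updated": updated,
    }
```

Applied to the first row below (2206.06629), this sketch should return latest_version "v1" and an update date of 2022-06-15.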
2206.06629 | Jindong Wang | Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin
Qin | Semantic-Discriminative Mixup for Generalizable Sensor-based
Cross-domain Activity Recognition | To be presented at UbiComp 2022; Accepted by Proceedings of the ACM
on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | It is expensive and time-consuming to collect sufficient labeled data to
build human activity recognition (HAR) models. Training on existing data often
makes the model biased towards the distribution of the training data; thus, the
model might perform terribly on test data with different distributions.
Although existing efforts on transfer learning and domain adaptation try to
solve the above problem, they still need access to unlabeled data on the target
domain, which may not be possible in real scenarios. Few works pay attention to
training a model that can generalize well to unseen target domains for HAR. In
this paper, we propose a novel method called Semantic-Discriminative Mixup
(SDMix) for generalizable cross-domain HAR. Firstly, we introduce
semantic-aware Mixup that considers the activity semantic ranges to overcome
the semantic inconsistency brought by domain differences. Secondly, we
introduce the large margin loss to enhance the discrimination of Mixup to
prevent misclassification brought by noisy virtual labels. Comprehensive
generalization experiments on five public datasets demonstrate that our SDMix
substantially outperforms the state-of-the-art approaches with 6% average
accuracy improvement on cross-person, cross-dataset, and cross-position HAR.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 06:41:29 GMT"
}
] | 1,655,251,200,000 | [
[
"Lu",
"Wang",
""
],
[
"Wang",
"Jindong",
""
],
[
"Chen",
"Yiqiang",
""
],
[
"Pan",
"Sinno Jialin",
""
],
[
"Hu",
"Chunyu",
""
],
[
"Qin",
"Xin",
""
]
] |
2206.06793 | Hannes Strass | Luc\'ia G\'omez \'Alvarez, Sebastian Rudolph and Hannes Strass | How to Agree to Disagree: Managing Ontological Perspectives using
Standpoint Logic | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The importance of taking individual, potentially conflicting perspectives
into account when dealing with knowledge has been widely recognised. Many
existing ontology management approaches fully merge knowledge perspectives,
which may require weakening in order to maintain consistency; others represent
the distinct views in an entirely detached way.
As an alternative, we propose Standpoint Logic, a simple, yet versatile
multi-modal logic "add-on" for existing KR languages intended for the
integrated representation of domain knowledge relative to diverse, possibly
conflicting standpoints, which can be hierarchically organised, combined and
put in relation to each other.
Starting from the generic framework of First-Order Standpoint Logic (FOSL),
we subsequently focus our attention on the fragment of sentential formulas, for
which we provide a polytime translation into the standpoint-free version. This
result yields decidability and favourable complexities for a variety of highly
expressive decidable fragments of first-order logic. Using some elaborate
encoding tricks, we then establish a similar translation for the very
expressive description logic SROIQb_s underlying the OWL 2 DL ontology
language. By virtue of this result, existing highly optimised OWL reasoners can
be used to provide practical reasoning support for ontology languages extended
by standpoint modelling.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 12:29:08 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 11:45:49 GMT"
}
] | 1,659,398,400,000 | [
[
"Álvarez",
"Lucía Gómez",
""
],
[
"Rudolph",
"Sebastian",
""
],
[
"Strass",
"Hannes",
""
]
] |
2206.06882 | Damien Pellier | M. Grand, H. Fiorino and D. Pellier | An Accurate HDDL Domain Learning Algorithm from Partial and Noisy
Observations | null | Proceedings of the International Workshop of Knowledge Engineering
(ICAPS), 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Hierarchical Task Network ({\sf HTN}) formalism is very expressive and
used to express a wide variety of planning problems. In contrast to the
classical {\sf STRIPS} formalism in which only the action model needs to be
specified, the {\sf HTN} formalism requires specifying, in addition, the tasks
of the problem and their decomposition into subtasks, called {\sf HTN} methods.
For this reason, hand-encoding {\sf HTN} problems is considered more difficult
and more error-prone by experts than classical planning problems. To tackle this
problem, we propose a new approach (HierAMLSI) based on grammar induction to
acquire {\sf HTN} planning domain knowledge, by learning action models and {\sf
HTN} methods with their preconditions. Unlike other approaches, HierAMLSI is
able to learn both actions and methods from noisy and partial input observations with a high level of accuracy.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 14:32:53 GMT"
}
] | 1,655,251,200,000 | [
[
"Grand",
"M.",
""
],
[
"Fiorino",
"H.",
""
],
[
"Pellier",
"D.",
""
]
] |
2206.07080 | Carl Corea | Carl Corea, John Grant, Matthias Thimm | Measuring Inconsistency in Declarative Process Specifications | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We address the problem of measuring inconsistency in declarative process
specifications, with an emphasis on linear temporal logic on fixed traces
(LTLff). As we will show, existing inconsistency measures for classical logic
cannot provide a meaningful assessment of inconsistency in LTL in general, as
they cannot adequately handle the temporal operators. We therefore propose a
novel paraconsistent semantics as a framework for inconsistency measurement. We
then present two new inconsistency measures based on these semantics and show
that they satisfy important desirable properties. We show how these measures
can be applied to declarative process models and investigate the computational
complexity of the introduced approach.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 18:08:49 GMT"
}
] | 1,655,337,600,000 | [
[
"Corea",
"Carl",
""
],
[
"Grant",
"John",
""
],
[
"Thimm",
"Matthias",
""
]
] |
2206.07082 | Yunwen Lei | Yunwen Lei | Stability and Generalization of Stochastic Optimization with Nonconvex
and Nonsmooth Problems | To appear in COLT 2023 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Stochastic optimization has found wide applications in minimizing objective
functions in machine learning, which motivates a lot of theoretical studies to
understand its practical success. Most existing studies focus on the convergence of optimization errors, while the generalization analysis of stochastic optimization lags far behind. This is especially the case for
nonconvex and nonsmooth problems often encountered in practice. In this paper,
we initiate a systematic stability and generalization analysis of stochastic
optimization on nonconvex and nonsmooth problems. We introduce novel
algorithmic stability measures and establish their quantitative connection with
the gap between population gradients and empirical gradients, which is then
further extended to study the gap between the Moreau envelope of the empirical
risk and that of the population risk. To our knowledge, these quantitative
connections between stability and generalization in terms of either gradients or
Moreau envelopes have not been studied in the literature. We introduce a class
of sampling-determined algorithms, for which we develop bounds for three
stability measures. Finally, we apply these discussions to derive error bounds
for stochastic gradient descent and its adaptive variant, where we show how to
achieve an implicit regularization by tuning the step sizes and the number of
iterations.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 18:14:30 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 09:07:46 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jul 2023 02:00:40 GMT"
}
] | 1,689,724,800,000 | [
[
"Lei",
"Yunwen",
""
]
] |
2206.07084 | Damien Pellier | N. Cavrel, D. Pellier, H. Fiorino | An Efficient HTN to STRIPS Encoding for Concurrent Plans | null | Proceedings of the International Workshop of Hierarchical Planning
(ICAPS), 2022 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Hierarchical Task Network (HTN) formalism is used to express a wide
variety of planning problems in terms of decompositions of tasks into subtasks.
Many techniques have been proposed to solve such hierarchical planning
problems. A particular technique is to encode hierarchical planning problems as
classical STRIPS planning problems. One advantage of this technique is to
benefit directly from the constant improvements made by STRIPS planners.
However, there are still few effective and expressive encodings. In this paper,
we present a new HTN to STRIPS encoding that allows generating concurrent plans.
We show experimentally that this encoding outperforms previous approaches on
hierarchical IPC benchmarks.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2022 18:18:22 GMT"
}
] | 1,655,337,600,000 | [
[
"Cavrel",
"N.",
""
],
[
"Pellier",
"D.",
""
],
[
"Fiorino",
"H.",
""
]
] |
2206.07461 | Alessandro Gianola | Paolo Felli and Alessandro Gianola and Marco Montali and Andrey Rivkin
and Sarah Winkler | Conformance Checking with Uncertainty via SMT (Extended Version) | Extended version of a conference paper accepted at the 20th
International Conference on Business Process Management (BPM 2022) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logs of real-life processes often feature uncertainty pertaining to the recorded
timestamps, data values, and/or events. We consider the problem of checking
conformance of uncertain logs against data-aware reference processes.
Specifically, we show how to solve it via SMT encodings, lifting previous work
on data-aware SMT-based conformance checking to this more sophisticated
setting. Our approach is modular, in that it homogeneously accommodates
different types of uncertainty. Moreover, using appropriate cost functions,
different conformance checking tasks can be addressed. We show the correctness
of our approach and witness feasibility through a proof-of-concept
implementation.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 11:39:45 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 21:15:07 GMT"
}
] | 1,656,374,400,000 | [
[
"Felli",
"Paolo",
""
],
[
"Gianola",
"Alessandro",
""
],
[
"Montali",
"Marco",
""
],
[
"Rivkin",
"Andrey",
""
],
[
"Winkler",
"Sarah",
""
]
] |
2206.07472 | Yue Wang | Yue Wang, Yao Wan, Lu Bai, Lixin Cui, Zhuo Xu, Ming Li, Philip S. Yu,
and Edwin R Hancock | Collaborative Knowledge Graph Fusion by Exploiting the Open Corpus | Under review by IEEE Transactions on Knowledge and Data Engineering
(TKDE) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To alleviate the challenges of building Knowledge Graphs (KG) from scratch, a
more general task is to enrich a KG using triples from an open corpus, where
the obtained triples contain noisy entities and relations. It is challenging to
enrich a KG with newly harvested triples while maintaining the quality of the
knowledge representation. This paper proposes a system to refine a KG using
information harvested from an additional corpus. To this end, we formulate our
task as two coupled sub-tasks, namely join event extraction (JEE) and knowledge
graph fusion (KGF). We then propose a Collaborative Knowledge Graph Fusion
Framework to allow our sub-tasks to mutually assist one another in an
alternating manner. More concretely, the explorer carries out the JEE
supervised by both the ground-truth annotation and an existing KG provided by
the supervisor. The supervisor then evaluates the triples extracted by the
explorer and enriches the KG with those that are highly ranked. To implement
this evaluation, we further propose a Translated Relation Alignment Scoring
Mechanism to align and translate the extracted triples to the prior KG.
Experiments verify that this collaboration can improve the performance of both the JEE and the KGF.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 12:16:10 GMT"
}
] | 1,655,337,600,000 | [
[
"Wang",
"Yue",
""
],
[
"Wan",
"Yao",
""
],
[
"Bai",
"Lu",
""
],
[
"Cui",
"Lixin",
""
],
[
"Xu",
"Zhuo",
""
],
[
"Li",
"Ming",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Hancock",
"Edwin R",
""
]
] |
2206.07497 | Teodor Chiaburu | Teodor Chiaburu, Felix Biessmann and Frank Hausser | Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and
Evaluations of XAI Methods for ML-Assisted Rare Species Annotations | 6 pages, 7 figures, 1 table submitted to CVPR 2022 All the code and
the link to the dataset can be found at
https://github.com/TeodorChiaburu/beexplainable | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Insects are a crucial part of our ecosystem. Sadly, in the past few decades,
their numbers have worryingly decreased. In an attempt to gain a better
understanding of this process and monitor insect populations, Deep Learning may offer viable solutions. However, given the breadth of their taxonomy and the typical hurdles of fine-grained analysis, such as high
intraclass variability compared to low interclass variability, insect
classification remains a challenging task. There are few benchmark datasets,
which impedes rapid development of better AI models. The annotation of rare
species training data, however, requires expert knowledge. Explainable
Artificial Intelligence (XAI) could assist biologists in these annotation
tasks, but choosing the optimal XAI method is difficult. Our contribution to
these research challenges is threefold: 1) a dataset of thoroughly annotated
images of wild bees sampled from the iNaturalist database, 2) a ResNet model
trained on the wild bee dataset achieving classification scores comparable to
similar state-of-the-art models trained on other fine-grained datasets and 3)
an investigation of XAI methods to support biologists in annotation tasks.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 12:48:05 GMT"
}
] | 1,655,337,600,000 | [
[
"Chiaburu",
"Teodor",
""
],
[
"Biessmann",
"Felix",
""
],
[
"Hausser",
"Frank",
""
]
] |
2206.07505 | Wei Fu | Wei Fu, Chao Yu, Zelai Xu, Jiaqi Yang, and Yi Wu | Revisiting Some Common Practices in Cooperative Multi-Agent
Reinforcement Learning | 16 pages, published as a conference paper in ICML 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many advances in cooperative multi-agent reinforcement learning (MARL) are
based on two common design principles: value decomposition and parameter
sharing. A typical MARL algorithm of this fashion decomposes a centralized
Q-function into local Q-networks with parameters shared across agents. Such an
algorithmic paradigm enables centralized training and decentralized execution
(CTDE) and leads to efficient learning in practice. Despite all the advantages,
we revisit these two principles and show that in certain scenarios, e.g.,
environments with a highly multi-modal reward landscape, value decomposition
and parameter sharing can be problematic and lead to undesired outcomes. In
contrast, policy gradient (PG) methods with individual policies provably
converge to an optimal solution in these cases, which partially supports some
recent empirical observations that PG can be effective in many MARL testbeds.
Inspired by our theoretical analysis, we present practical suggestions on
implementing multi-agent PG algorithms for either high rewards or diverse
emergent behaviors and empirically validate our findings on a variety of
domains, ranging from the simplified matrix and grid-world games to complex
benchmarks such as StarCraft Multi-Agent Challenge and Google Research
Football. We hope our insights could benefit the community towards developing
more general and more powerful MARL algorithms. Check our project website at
https://sites.google.com/view/revisiting-marl.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 13:03:05 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2022 12:54:33 GMT"
}
] | 1,660,003,200,000 | [
[
"Fu",
"Wei",
""
],
[
"Yu",
"Chao",
""
],
[
"Xu",
"Zelai",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Wu",
"Yi",
""
]
] |
2206.07762 | Ryan Nguyen | Ryan Nguyen, Shubhendu Kumar Singh, Rahul Rai | Physics-Infused Fuzzy Generative Adversarial Network for Robust Failure
Prognosis | null | null | 10.1016/j.ymssp.2022.109611 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Prognostics aid in the longevity of fielded systems or products. Quantifying
the system's current health enables prognosis to enhance the operator's
decision-making to preserve the system's health. Creating a prognosis for a
system can be difficult due to (a) unknown physical relationships and/or (b)
irregularities in data appearing well beyond the initiation of a problem.
Traditionally, three different modeling paradigms have been used to develop a
prognostics model: physics-based (PbM), data-driven (DDM), and hybrid modeling.
Recently, the hybrid modeling approach that combines the strength of both PbM
and DDM based approaches and alleviates their limitations is gaining traction
in the prognostics domain. In this paper, a novel hybrid modeling approach for
prognostics applications based on combining concepts from fuzzy logic and
generative adversarial networks (GANs) is outlined. The FuzzyGAN based method
embeds a physics-based model in the aggregation of the fuzzy implications. This
technique constrains the output of the learning method to a realistic solution.
Results on a bearing problem showcase the efficacy of adding a physics-based
aggregation in a fuzzy logic model to improve GAN's ability to model health and
give a more accurate system prognosis.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 18:50:16 GMT"
}
] | 1,661,904,000,000 | [
[
"Nguyen",
"Ryan",
""
],
[
"Singh",
"Shubhendu Kumar",
""
],
[
"Rai",
"Rahul",
""
]
] |
2206.07772 | Ryan Nguyen | Ryan Nguyen and Rahul Rai | Deep Learning and Handheld Augmented Reality Based System for Optimal
Data Collection in Fault Diagnostics Domain | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Compared to current AI or robotic systems, humans navigate their environment
with ease, making tasks such as data collection trivial. However, humans find
it harder to model complex relationships hidden in the data. AI systems,
especially deep learning (DL) algorithms, impressively capture those complex
relationships. Symbiotically coupling the strengths of humans and computational machines can simultaneously minimize the amount of data that must be collected and build
complex input-to-output mapping models. This paper enables this coupling by
presenting a novel human-machine interaction framework to perform fault
diagnostics with minimal data. Collecting data for diagnosing faults for
complex systems is difficult and time-consuming. Minimizing the required data
will increase the practicability of data-driven models in diagnosing faults.
The framework provides instructions to a human user to collect data that
mitigates the difference between the data used to train and test the fault
diagnostics model. The framework is composed of three components: (1) a
reinforcement learning algorithm for data collection to develop a training
dataset, (2) a deep learning algorithm for diagnosing faults, and (3) a
handheld augmented reality application for data collection for testing data.
The proposed framework has provided above 100\% precision and recall on a novel
dataset with only one instance of each fault condition. Additionally, a
usability study was conducted to gauge the user experience of the handheld
augmented reality application, and all users were able to follow the provided
steps.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2022 19:15:26 GMT"
}
] | 1,655,424,000,000 | [
[
"Nguyen",
"Ryan",
""
],
[
"Rai",
"Rahul",
""
]
] |
2206.07862 | Yuliya Lierler | Yuliya Lierler | Unifying Framework for Optimizations in non-boolean Formalisms | Under consideration in Theory and Practice of Logic Programming
(TPLP). arXiv admin note: text overlap with arXiv:2206.06440 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Search-optimization problems are plentiful in scientific and engineering
domains. Artificial intelligence has long contributed to the development of
search algorithms and declarative programming languages geared towards solving
and modeling search-optimization problems. Automated reasoning and knowledge
representation are the subfields of AI that are particularly vested in these
developments. Many popular automated reasoning paradigms provide users with
languages supporting optimization statements. Recall integer linear
programming, MaxSAT, optimization satisfiability modulo theory, and
(constraint) answer set programming. These paradigms vary significantly in
their languages in the ways they express quality conditions on computed solutions. Here we propose a unifying framework of so-called extended weight systems that
eliminates syntactic distinctions between paradigms. They allow us to see
essential similarities and differences between optimization statements provided
by distinct automated reasoning languages. We also study formal properties of
the proposed systems that immediately translate into formal properties of
paradigms that can be captured within our framework. Under consideration in
Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2022 00:38:19 GMT"
}
] | 1,655,424,000,000 | [
[
"Lierler",
"Yuliya",
""
]
] |
2206.07870 | Theodore Sumers | Theodore R Sumers, Robert D Hawkins, Mark K Ho, Thomas L Griffiths,
Dylan Hadfield-Menell | How to talk so AI will learn: Instructions, descriptions, and autonomy | 10 pages, 5 figures. Published as a conference paper at NeurIPS 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | From the earliest years of our lives, humans use language to express our
beliefs and desires. Being able to talk to artificial agents about our
preferences would thus fulfill a central goal of value alignment. Yet today, we
lack computational models explaining such language use. To address this
challenge, we formalize learning from language in a contextual bandit setting
and ask how a human might communicate preferences over behaviors. We study two
distinct types of language: $\textit{instructions}$, which provide information
about the desired policy, and $\textit{descriptions}$, which provide
information about the reward function. We show that the agent's degree of
autonomy determines which form of language is optimal: instructions are better
in low-autonomy settings, but descriptions are better when the agent will need
to act independently. We then define a pragmatic listener agent that robustly
infers the speaker's reward function by reasoning about $\textit{how}$ the
speaker expresses themselves. We validate our models with a behavioral
experiment, demonstrating that (1) our speaker model predicts human behavior,
and (2) our pragmatic listener successfully recovers humans' reward functions.
Finally, we show that this form of social learning can integrate with and
reduce regret in traditional reinforcement learning. We hope these insights
facilitate a shift from developing agents that $\textit{obey}$ language to
agents that $\textit{learn}$ from it.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2022 01:33:38 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Sep 2022 14:15:58 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Oct 2022 20:39:26 GMT"
}
] | 1,665,532,800,000 | [
[
"Sumers",
"Theodore R",
""
],
[
"Hawkins",
"Robert D",
""
],
[
"Ho",
"Mark K",
""
],
[
"Griffiths",
"Thomas L",
""
],
[
"Hadfield-Menell",
"Dylan",
""
]
] |
2206.07988 | Prashant Kodali | Prashant Kodali, Tanmay Sachan, Akshay Goindani, Anmol Goel, Naman
Ahuja, Manish Shrivastava, Ponnurangam Kumaraguru | PreCogIIITH at HinglishEval : Leveraging Code-Mixing Metrics & Language
Model Embeddings To Estimate Code-Mix Quality | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Code-Mixing is a phenomenon of mixing two or more languages in a speech event
and is prevalent in multilingual societies. Given the low-resource nature of
Code-Mixing, machine generation of code-mixed text is a prevalent approach for
data augmentation. However, evaluating the quality of such machine generated
code-mixed text is an open problem. In our submission to HinglishEval, a
shared-task collocated with INLG2022, we attempt to model the factors that impact the quality of synthetically generated code-mixed text by predicting ratings for code-mix quality.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2022 08:00:42 GMT"
}
] | 1,655,424,000,000 | [
[
"Kodali",
"Prashant",
""
],
[
"Sachan",
"Tanmay",
""
],
[
"Goindani",
"Akshay",
""
],
[
"Goel",
"Anmol",
""
],
[
"Ahuja",
"Naman",
""
],
[
"Shrivastava",
"Manish",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] |
2206.08611 | Yu Zhao | Yu Zhao, Yunxin Li, Yuxiang Wu, Baotian Hu, Qingcai Chen, Xiaolong
Wang, Yuxin Ding, Min Zhang | Medical Dialogue Response Generation with Pivotal Information Recalling | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Medical dialogue generation is an important yet challenging task. Most
previous works rely on the attention mechanism and large-scale pretrained
language models. However, these methods often fail to acquire pivotal
information from the long dialogue history to yield an accurate and informative
response, because the medical entities usually scatter throughout
multiple utterances along with the complex relationships between them. To
mitigate this problem, we propose a medical response generation model with
Pivotal Information Recalling (MedPIR), which is built on two components, i.e.,
knowledge-aware dialogue graph encoder and recall-enhanced generator. The
knowledge-aware dialogue graph encoder constructs a dialogue graph by
exploiting the knowledge relationships between entities in the utterances, and
encodes it with a graph attention network. Then, the recall-enhanced generator
strengthens the use of this pivotal information by generating a summary of
the dialogue before producing the actual response. Experimental results on two
large-scale medical dialogue datasets show that MedPIR outperforms the strong
baselines in BLEU scores and medical entities F1 measure.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2022 08:11:10 GMT"
}
] | 1,655,683,200,000 | [
[
"Zhao",
"Yu",
""
],
[
"Li",
"Yunxin",
""
],
[
"Wu",
"Yuxiang",
""
],
[
"Hu",
"Baotian",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Wang",
"Xiaolong",
""
],
[
"Ding",
"Yuxin",
""
],
[
"Zhang",
"Min",
""
]
] |
2206.08626 | Yu Zhao | Yu Zhao, Xinshuo Hu, Yunxin Li, Baotian Hu, Dongfang Li, Sichao Chen,
Xiaolong Wang | MSDF: A General Open-Domain Multi-Skill Dialog Framework | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Dialog systems have achieved significant progress and have been widely used
in various scenarios. Previous research mainly focused on designing
dialog generation models in a single scenario, while comprehensive abilities
are required to handle tasks under various scenarios in the real world. In this
paper, we propose a general Multi-Skill Dialog Framework, namely MSDF, which
can be applied in different dialog tasks (e.g. knowledge grounded dialog and
persona based dialog). Specifically, we propose a transferable response
generator pre-trained on diverse large-scale dialog corpora as the backbone of
MSDF, consisting of BERT-based encoders and a GPT-based decoder. To select the
response consistent with dialog history, we propose a consistency selector
trained through negative sampling. Moreover, the flexible copy mechanism of
external knowledge is also employed to enhance the utilization of multiform
knowledge in various scenarios. We conduct experiments on knowledge grounded
dialog, recommendation dialog, and persona based dialog tasks. The experimental
results indicate that our MSDF outperforms the baseline models by a large
margin. In the Multi-skill Dialog of 2021 Language and Intelligence Challenge,
our general MSDF won the 3rd prize, which proves our MSDF is effective and
competitive.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2022 08:38:53 GMT"
}
] | 1,655,683,200,000 | [
[
"Zhao",
"Yu",
""
],
[
"Hu",
"Xinshuo",
""
],
[
"Li",
"Yunxin",
""
],
[
"Hu",
"Baotian",
""
],
[
"Li",
"Dongfang",
""
],
[
"Chen",
"Sichao",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
2206.08687 | Manuele Leonelli | Rafael Ballester-Ripoll, Manuele Leonelli | You Only Derive Once (YODO): Automatic Differentiation for Efficient
Sensitivity Analysis in Bayesian Networks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Sensitivity analysis measures the influence of a Bayesian network's
parameters on a quantity of interest defined by the network, such as the
probability of a variable taking a specific value. In particular, the so-called
sensitivity value measures the quantity of interest's partial derivative with
respect to the network's conditional probabilities. However, finding such
values in large networks with thousands of parameters can become
computationally very expensive. We propose to use automatic differentiation
combined with exact inference to obtain all sensitivity values in a single
pass. Our method first marginalizes the whole network once using e.g. variable
elimination and then backpropagates this operation to obtain the gradient with
respect to all input parameters. We demonstrate our routines by ranking all
parameters by importance on a Bayesian network modeling humanitarian crises and
disasters, and then show the method's efficiency by scaling it to huge networks
with up to 100'000 parameters. An implementation of the methods using the
popular machine learning library PyTorch is freely available.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2022 11:11:19 GMT"
}
] | 1,655,683,200,000 | [
[
"Ballester-Ripoll",
"Rafael",
""
],
[
"Leonelli",
"Manuele",
""
]
] |
2206.08758 | Sylvie Coste-Marquis | Sylvie Coste-Marquis and Pierre Marquis | Rectifying Mono-Label Boolean Classifiers | 8 pages, rewriting of motivations in the Introduction section and of
Example 3 and Example 4 explanations, typo corrected in Example 4 and
captions of Figure 4 and Figure 5 rectified | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We elaborate on the notion of rectification of a Boolean classifier $\Sigma$.
Given $\Sigma$ and some background knowledge $T$, postulates characterizing the
way $\Sigma$ must be changed into a new classifier $\Sigma \star T$ that
complies with $T$ have already been presented. We focus here on the specific
case of mono-label Boolean classifiers, i.e., there is a single target concept
and any instance is classified either as positive (an element of the concept),
or as negative (an element of the complementary concept). In this specific
case, our main contribution is twofold: (1) we show that there is a unique
rectification operator $\star$ satisfying the postulates, and (2) when $\Sigma$
and $T$ are Boolean circuits, we show how a classification circuit equivalent
to $\Sigma \star T$ can be computed in time linear in the size of $\Sigma$ and
$T$; when $\Sigma$ and $T$ are decision trees, a decision tree equivalent to
$\Sigma \star T$ can be computed in time polynomial in the size of $\Sigma$ and
$T$.
| [
{
"version": "v1",
"created": "Fri, 17 Jun 2022 13:17:45 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2022 14:41:31 GMT"
}
] | 1,662,508,800,000 | [
[
"Coste-Marquis",
"Sylvie",
""
],
[
"Marquis",
"Pierre",
""
]
] |
2206.09360 | David Manheim | Sam Clarke, Ben Cottier, Aryeh Englander, Daniel Eth, David Manheim,
Samuel Dylan Martin, Issa Rice | Modeling Transformative AI Risks (MTAIR) Project -- Summary Report | Chapters were written by authors independently. All authors are
listed alphabetically | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This report outlines work by the Modeling Transformative AI Risk (MTAIR)
project, an attempt to map out the key hypotheses, uncertainties, and
disagreements in debates about catastrophic risks from advanced AI, and the
relationships between them. This builds on an earlier diagram by Ben Cottier
and Rohin Shah which laid out some of the crucial disagreements ("cruxes")
visually, with some explanation. Based on an extensive literature review and
engagement with experts, the report explains a model of the issues involved,
and the initial software-based implementation that can incorporate probability
estimates or other quantitative factors to enable exploration, planning, and/or
decision support. By gathering information from various debates and discussions
into a single more coherent presentation, we hope to enable better discussions
and debates about the issues involved.
The model starts with a discussion of reasoning via analogies and general
prior beliefs about artificial intelligence. Following this, it lays out a
model of different paths and enabling technologies for high-level machine
intelligence, and a model of how advances in the capabilities of these systems
might proceed, including debates about self-improvement, discontinuous
improvements, and the possibility of distributed, non-agentic high-level
intelligence or slower improvements. The model also looks specifically at the
question of learned optimization, and whether machine learning systems will
create mesa-optimizers. The impact of different safety research on the previous
sets of questions is then examined, to understand whether and how research
could be useful in enabling safer systems. Finally, we discuss a model of
different failure modes and loss of control or takeover scenarios.
| [
{
"version": "v1",
"created": "Sun, 19 Jun 2022 09:11:23 GMT"
}
] | 1,655,856,000,000 | [
[
"Clarke",
"Sam",
""
],
[
"Cottier",
"Ben",
""
],
[
"Englander",
"Aryeh",
""
],
[
"Eth",
"Daniel",
""
],
[
"Manheim",
"David",
""
],
[
"Martin",
"Samuel Dylan",
""
],
[
"Rice",
"Issa",
""
]
] |
2206.09638 | Ryma Boumazouza | Ryma Boumazouza (UA, CNRS, CRIL), Fahima Cheikh-Alili (UA, CNRS,
CRIL), Bertrand Mazure (UA, CNRS, CRIL), Karim Tabia (UA, CNRS, CRIL) | A Symbolic Approach for Counterfactual Explanations | null | 14th International Conference, SUM 2020, Bozen-Bolzano, Italy, Sep
2020, Virtual event Bozen-Bolzano, Italy. pp.270-277 | 10.1007/978-3-030-58449-8_21 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper titled A Symbolic Approach for Counterfactual Explanations we
propose a novel symbolic approach to provide counterfactual explanations for a classifier's predictions. Contrary to most explanation approaches, where the goal
is to understand which and to what extent parts of the data helped to give a
prediction, counterfactual explanations indicate which features must be changed
in the data in order to change this classifier's prediction. Our approach is
symbolic in the sense that it is based on encoding the decision function of a
classifier in an equivalent CNF formula. In this approach, counterfactual
explanations are seen as the Minimal Correction Subsets (MCS), a well-known
concept in knowledge base reparation. Hence, this approach takes advantage of
the strengths of already existing and proven solutions for the generation of
MCS. Our preliminary experimental studies on Bayesian classifiers show the
potential of this approach on several datasets.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2022 08:38:54 GMT"
}
] | 1,655,856,000,000 | [
[
"Boumazouza",
"Ryma",
"",
"UA, CNRS, CRIL"
],
[
"Cheikh-Alili",
"Fahima",
"",
"UA, CNRS,\n CRIL"
],
[
"Mazure",
"Bertrand",
"",
"UA, CNRS, CRIL"
],
[
"Tabia",
"Karim",
"",
"UA, CNRS, CRIL"
]
] |
2206.10454 | Daniel Dunbar | Daniel Dunbar, Thomas Hagedorn, Mark Blackburn, John Dzielski, Steven
Hespelt, Benjamin Kruse, Dinesh Verma, Zhongyuan Yu | Driving Digital Engineering Integration and Interoperability Through
Semantic Integration of Models with Ontologies | 12 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Engineered solutions are becoming more complex and multi-disciplinary in
nature. This evolution requires new techniques to enhance design and analysis
tasks that incorporate data integration and interoperability across various
engineering tool suites spanning multiple domains at different abstraction
levels. Semantic Web Technologies (SWT) offer data integration and
interoperability benefits as well as other opportunities to enhance reasoning
across knowledge represented in multiple disparate models. This paper
introduces the Digital Engineering Framework for Integration and
Interoperability (DEFII) for incorporating SWT into engineering design and
analysis tasks. The framework includes three notional interfaces for
interacting with ontology-aligned data. It also introduces a novel Model
Interface Specification Diagram (MISD) that provides a tool-agnostic model
representation enabled by SWT that exposes data stored for use by external
users through standards-based interfaces. Use of the framework results in a
tool-agnostic authoritative source of truth spanning the entire project,
system, or mission.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2022 14:58:09 GMT"
}
] | 1,655,856,000,000 | [
[
"Dunbar",
"Daniel",
""
],
[
"Hagedorn",
"Thomas",
""
],
[
"Blackburn",
"Mark",
""
],
[
"Dzielski",
"John",
""
],
[
"Hespelt",
"Steven",
""
],
[
"Kruse",
"Benjamin",
""
],
[
"Verma",
"Dinesh",
""
],
[
"Yu",
"Zhongyuan",
""
]
] |
2206.11017 | Amin Jalali | Amin Jalali | Object Type Clustering using Markov Directly-Follow Multigraph in
Object-Centric Process Mining | null | IEEE Access 2022 | 10.1109/ACCESS.2022.3226573 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object-centric process mining is a new paradigm with more realistic
assumptions about underlying data by considering several case notions, e.g., an
order handling process can be analyzed based on order, item, package, and route
case notions. Including many case notions can result in a very complex model.
To cope with such complexity, this paper introduces a new approach to cluster
similar case notions based on Markov Directly-Follow Multigraph, which is an
extended version of the well-known Directly-Follow Graph supported by many
industrial and academic process mining tools. This graph is used to calculate a
similarity matrix for discovering clusters of similar case notions based on a
threshold. A threshold tuning algorithm is also defined to identify sets of
different clusters that can be discovered based on different levels of
similarity. Thus, the cluster discovery will not rely merely on analysts' assumptions. The approach is implemented and released as a part of a Python
library, called processmining, and it is evaluated through a Purchase to Pay
(P2P) object-centric event log file. Some discovered clusters are evaluated by
discovering the Directly-Follow Multigraph by flattening the log based on the
clusters. The similarity between identified clusters is also evaluated by
calculating the similarity between the behavior of the process models
discovered for each case notion using inductive miner based on footprints
conformance checking.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2022 12:36:46 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 17:33:21 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Aug 2022 10:26:13 GMT"
}
] | 1,670,284,800,000 | [
[
"Jalali",
"Amin",
""
]
] |
2206.11025 | Bin Yang | Wei Li, Bin Yang, Junsheng Qiao | On three types of $L$-fuzzy $\beta$-covering-based rough sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we mainly construct three types of $L$-fuzzy
$\beta$-covering-based rough set models and study the axiom sets, matrix
representations and interdependency of these three pairs of $L$-fuzzy
$\beta$-covering-based rough approximation operators. Firstly, we propose three
pairs of $L$-fuzzy $\beta$-covering-based rough approximation operators by
introducing the concepts such as $\beta$-degree of intersection and
$\beta$-subsethood degree, which are generalizations of degree of intersection
and subsethood degree, respectively. Then, the axiom set for each of these $L$-fuzzy $\beta$-covering-based rough approximation operators is investigated.
Thirdly, we give the matrix representations of three types of $L$-fuzzy
$\beta$-covering-based rough approximation operators, which make it possible to calculate the $L$-fuzzy $\beta$-covering-based lower and upper rough
approximation operators through operations on matrices. Finally, the
interdependency of the three pairs of rough approximation operators based on
$L$-fuzzy $\beta$-covering is studied by using the notion of reducible elements
and independent elements. In other words, we present the necessary and
sufficient conditions under which two $L$-fuzzy $\beta$-coverings can generate
the same lower and upper rough approximation operations.
| [
{
"version": "v1",
"created": "Fri, 13 May 2022 05:30:51 GMT"
}
] | 1,655,942,400,000 | [
[
"Li",
"Wei",
""
],
[
"Yang",
"Bin",
""
],
[
"Qiao",
"Junsheng",
""
]
] |
2206.11515 | Nicolas R\"uhling | Susana Hahn (1), Tomi Janhunen (2), Roland Kaminski (1), Javier Romero
(1), Nicolas R\"uhling (1), Torsten Schaub (1) ((1) University of Potsdam,
Germany, (2) Tampere University, Finland) | plingo: A system for probabilistic reasoning in clingo based on lpmln | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present plingo, an extension of the ASP system clingo with various
probabilistic reasoning modes. Plingo is centered upon LP^MLN, a probabilistic
extension of ASP based on a weight scheme from Markov Logic. This choice is
motivated by the fact that the core probabilistic reasoning modes can be mapped
onto optimization problems and that LP^MLN may serve as a middle-ground
formalism connecting to other probabilistic approaches. As a result, plingo
offers three alternative frontends, for LP^MLN, P-log, and ProbLog. The
corresponding input languages and reasoning modes are implemented by means of
clingo's multi-shot and theory solving capabilities. The core of plingo amounts
to a re-implementation of LP^MLN in terms of modern ASP technology, extended by
an approximation technique based on a new method for answer set enumeration in
the order of optimality. We evaluate plingo's performance empirically by
comparing it to other probabilistic systems.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2022 07:51:10 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 09:07:45 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Sep 2022 09:15:10 GMT"
}
] | 1,662,336,000,000 | [
[
"Hahn",
"Susana",
""
],
[
"Janhunen",
"Tomi",
""
],
[
"Kaminski",
"Roland",
""
],
[
"Romero",
"Javier",
""
],
[
"Rühling",
"Nicolas",
""
],
[
"Schaub",
"Torsten",
""
]
] |
2206.11539 | Ryma Boumazouza | Ryma Boumazouza (CRIL), Fahima Cheikh-Alili (CRIL), Bertrand Mazure
(CRIL), Karim Tabia (CRIL) | A Model-Agnostic SAT-based Approach for Symbolic Explanation Enumeration | null | The 23rd International Conference on Artificial Intelligence
(ICAI'21), Jul 2021, Las Vegas, United States.
https://www.springer.com/series/11769 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper titled A Model-Agnostic SAT-based approach for Symbolic
Explanation Enumeration, we propose a generic agnostic approach that allows generating different and complementary types of symbolic explanations. More
precisely, we generate explanations to locally explain a single prediction by
analyzing the relationship between the features and the output. Our approach
uses a propositional encoding of the predictive model and a SAT-based setting
to generate two types of symbolic explanations which are Sufficient Reasons and
Counterfactuals. The experimental results on image classification task show the
feasibility of the proposed approach and its effectiveness in providing
Sufficient Reasons and Counterfactuals explanations.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2022 08:35:47 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 21:08:40 GMT"
}
] | 1,660,694,400,000 | [
[
"Boumazouza",
"Ryma",
"",
"CRIL"
],
[
"Cheikh-Alili",
"Fahima",
"",
"CRIL"
],
[
"Mazure",
"Bertrand",
"",
"CRIL"
],
[
"Tabia",
"Karim",
"",
"CRIL"
]
] |
2206.11812 | Alexander Turner | Alexander Matt Turner, Aseem Saxena, Prasad Tadepalli | Formalizing the Problem of Side Effect Regularization | 14 pages, accepted to ML Safety Workshop at NeurIPS 2022. Alexander
Turner and Aseem Saxena contributed equally | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI objectives are often hard to specify properly. Some approaches tackle this
problem by regularizing the AI's side effects: Agents must weigh off "how much
of a mess they make" with an imperfectly specified proxy objective. We propose
a formal criterion for side effect regularization via the assistance game
framework. In these games, the agent solves a partially observable Markov
decision process (POMDP) representing its uncertainty about the objective
function it should optimize. We consider the setting where the true objective
is revealed to the agent at a later time step. We show that this POMDP is
solved by trading off the proxy reward with the agent's ability to achieve a
range of future tasks. We empirically demonstrate the reasonableness of our
problem formalization via ground-truth evaluation in two gridworld
environments.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2022 16:36:13 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jun 2022 16:13:47 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2022 19:11:04 GMT"
}
] | 1,668,038,400,000 | [
[
"Turner",
"Alexander Matt",
""
],
[
"Saxena",
"Aseem",
""
],
[
"Tadepalli",
"Prasad",
""
]
] |
2206.11831 | Alexander Turner | Alexander Matt Turner | On Avoiding Power-Seeking by Artificial Intelligence | 287 pages, PhD thesis | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We do not know how to align a very intelligent AI agent's behavior with human
interests. I investigate whether -- absent a full solution to this AI alignment
problem -- we can build smart AI agents which have limited impact on the world,
and which do not autonomously seek power. In this thesis, I introduce the
attainable utility preservation (AUP) method. I demonstrate that AUP produces
conservative, option-preserving behavior within toy gridworlds and within
complex environments based on Conway's Game of Life. I formalize the
problem of side effect avoidance, which provides a way to quantify the side
effects an agent had on the world. I also give a formal definition of
power-seeking in the context of AI agents and show that optimal policies tend
to seek power. In particular, most reward functions have optimal policies which
avoid deactivation. This is a problem if we want to deactivate or correct an
intelligent agent after we have deployed it. My theorems suggest that since
most agent goals conflict with ours, the agent would very probably resist
correction. I extend these theorems to show that power-seeking incentives occur
not just for optimal decision-makers, but under a wide range of decision-making
procedures.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2022 16:56:21 GMT"
}
] | 1,656,028,800,000 | [
[
"Turner",
"Alexander Matt",
""
]
] |
2206.11900 | Ryma Boumazouza | Ryma Boumazouza (CRIL), Fahima Cheikh-Alili (CRIL), Bertrand Mazure
(CRIL), Karim Tabia (CRIL) | ASTERYX : A model-Agnostic SaT-basEd appRoach for sYmbolic and
score-based eXplanations | arXiv admin note: text overlap with arXiv:2206.11539 | CIKM '21: The 30th ACM International Conference on Information and
Knowledge Management, Nov 2021, Virtual Event Queensland Australia,
Australia. pp.120-129 | 10.1145/3459637.3482321 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ever increasing complexity of machine learning techniques used more and
more in practice gives rise to the need to explain the predictions and decisions of these models, which are often used as black boxes. Explainable AI approaches are either numerical and feature-based, aiming to quantify the contribution of each feature in a prediction, or symbolic, providing certain forms of symbolic
explanations such as counterfactuals. This paper proposes a generic agnostic
approach named ASTERYX that allows generating both symbolic explanations and
score-based ones. Our approach is declarative and it is based on the encoding
of the model to be explained in an equivalent symbolic representation, this
latter serves to generate in particular two types of symbolic explanations
which are sufficient reasons and counterfactuals. We then associate scores
reflecting the relevance of the explanations and the features w.r.t. some
properties. Our experimental results show the feasibility of the proposed
approach and its effectiveness in providing symbolic and score-based
explanations.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2022 08:37:32 GMT"
}
] | 1,656,288,000,000 | [
[
"Boumazouza",
"Ryma",
"",
"CRIL"
],
[
"Cheikh-Alili",
"Fahima",
"",
"CRIL"
],
[
"Mazure",
"Bertrand",
"",
"CRIL"
],
[
"Tabia",
"Karim",
"",
"CRIL"
]
] |
2206.12142 | Zongsehng Cao | Zongsheng Cao, Qianqian Xu, Zhiyong Yang, Qingming Huang | ER: Equivariance Regularizer for Knowledge Graph Completion | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tensor factorization and distance-based models play important roles in
knowledge graph completion (KGC). However, the relational matrices in KGC
methods often induce a high model complexity, bearing a high risk of
overfitting. As a remedy, researchers propose a variety of different
regularizers such as the tensor nuclear norm regularizer. Our motivation is
based on the observation that the previous work only focuses on the "size" of
the parametric space, while leaving the implicit semantic information largely
untouched. To address this issue, we propose a new regularizer, namely,
Equivariance Regularizer (ER), which can suppress overfitting by leveraging the
implicit semantic information. Specifically, ER can enhance the generalization
ability of the model by employing the semantic equivariance between the head
and tail entities. Moreover, it is a generic solution for both distance-based models and tensor factorization-based models. The experimental results indicate
a clear and substantial improvement over the state-of-the-art relation
prediction methods.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2022 08:18:05 GMT"
}
] | 1,656,288,000,000 | [
[
"Cao",
"Zongsheng",
""
],
[
"Xu",
"Qianqian",
""
],
[
"Yang",
"Zhiyong",
""
],
[
"Huang",
"Qingming",
""
]
] |
2206.12503 | Th\'eophile Champion | Th\'eophile Champion and Marek Grze\'s and Howard Bowman | Multi-Modal and Multi-Factor Branching Time Active Inference | 26 pages, 12 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Active inference is a state-of-the-art framework for modelling the brain that
explains a wide range of mechanisms such as habit formation, dopaminergic
discharge and curiosity. Recently, two versions of branching time active
inference (BTAI) based on Monte-Carlo tree search have been developed to handle
the exponential (space and time) complexity class that occurs when computing
the prior over all possible policies up to the time horizon. However, those two
versions of BTAI still suffer from an exponential complexity class w.r.t the
number of observed and latent variables being modelled. In the present paper,
we resolve this limitation by first allowing the modelling of several
observations, each of them having its own likelihood mapping. Similarly, we
allow each latent state to have its own transition mapping. The inference
algorithm then exploits the factorisation of the likelihood and transition
mappings to accelerate the computation of the posterior. Those two
optimisations were tested on the dSprites environment in which the metadata of
the dSprites dataset was used as input to the model instead of the dSprites
images. On this task, $BTAI_{VMP}$ (Champion et al., 2022b,a) was able to solve
96.9\% of the task in 5.1 seconds, and $BTAI_{BF}$ (Champion et al., 2021a) was
able to solve 98.6\% of the task in 17.5 seconds. Our new approach
($BTAI_{3MF}$) outperformed both of its predecessors by solving the task
completly (100\%) in only 2.559 seconds. Finally, $BTAI_{3MF}$ has been
implemented in a flexible and easy to use (python) package, and we developed a
graphical user interface to enable the inspection of the model's beliefs,
planning process and behaviour.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2022 22:07:21 GMT"
}
] | 1,656,374,400,000 | [
[
"Champion",
"Théophile",
""
],
[
"Grześ",
"Marek",
""
],
[
"Bowman",
"Howard",
""
]
] |
2206.12700 | Yiting Xie | Zhiyuan Yao, Tianyu Shi, Site Li, Yiting Xie, Yuanyuan Qin, Xiongjie
Xie, Huan Lu and Yan Zhang | Towards Modern Card Games with Large-Scale Action Spaces Through Action
Representation | Accpeted as IEEE CoG2022 proceedings paper | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Axie infinity is a complicated card game with a huge-scale action space. This
makes it difficult to solve this challenge using generic Reinforcement Learning
(RL) algorithms. We propose a hybrid RL framework to learn action
representations and game strategies. To avoid evaluating every action in the
large feasible action set, our method evaluates actions in a fixed-size set
which is determined using action representations. We compare the performance of
our method with the other two baseline methods in terms of their sample
efficiency and the winning rates of the trained models. We empirically show
that our method achieves an overall best winning rate and the best sample
efficiency among the three methods.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2022 17:22:08 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 15:10:24 GMT"
}
] | 1,660,694,400,000 | [
[
"Yao",
"Zhiyuan",
""
],
[
"Shi",
"Tianyu",
""
],
[
"Li",
"Site",
""
],
[
"Xie",
"Yiting",
""
],
[
"Qin",
"Yuanyuan",
""
],
[
"Xie",
"Xiongjie",
""
],
[
"Lu",
"Huan",
""
],
[
"Zhang",
"Yan",
""
]
] |
2206.13174 | Hiroyuki Kido | Hiroyuki Kido | Towards Unifying Perceptual Reasoning and Logical Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing number of scientific experiments support the view of perception
as Bayesian inference, which is rooted in Helmholtz's view of perception as
unconscious inference. Recent study of logic presents a view of logical
reasoning as Bayesian inference. In this paper, we give a simple probabilistic
model that is applicable to both perceptual reasoning and logical reasoning. We
show that the model unifies the two essential processes common in perceptual
and logical systems: on the one hand, the process by which perceptual and
logical knowledge is derived from other knowledge, and on the other hand, the
process by which such knowledge is derived from data. We fully characterise the
model in terms of logical consequence relations.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2022 10:32:47 GMT"
}
] | 1,656,374,400,000 | [
[
"Kido",
"Hiroyuki",
""
]
] |
2206.13477 | Alexander Turner | Alexander Matt Turner, Prasad Tadepalli | Parametrically Retargetable Decision-Makers Tend To Seek Power | 10-page main paper, 36 pages total, poster at NeurIPS 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | If capable AI agents are generally incentivized to seek power in service of
the objectives we specify for them, then these systems will pose enormous
risks, in addition to enormous benefits. In fully observable environments, most
reward functions have an optimal policy which seeks power by keeping options
open and staying alive. However, the real world is neither fully observable,
nor must trained agents be even approximately reward-optimal. We consider a
range of models of AI decision-making, from optimal, to random, to choices
informed by learning and interacting with an environment. We discover that many
decision-making functions are retargetable, and that retargetability is
sufficient to cause power-seeking tendencies. Our functional criterion is
simple and broad. We show that a range of qualitatively dissimilar
decision-making procedures incentivize agents to seek power. We demonstrate the
flexibility of our results by reasoning about learned policy incentives in
Montezuma's Revenge. These results suggest a safety risk: Eventually,
retargetable training procedures may train real-world agents which seek power
over humans.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2022 17:39:23 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 23:31:39 GMT"
}
] | 1,665,619,200,000 | [
[
"Turner",
"Alexander Matt",
""
],
[
"Tadepalli",
"Prasad",
""
]
] |
2206.13658 | Shirly Stephen | Shirly Stephen, Wenwen Li, Torsten Hahmann | Geo-Situation for Modeling Causality of Geo-Events in Knowledge Graphs | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a framework for representing and reasoning about causality
between geographic events by introducing the notion of Geo-Situation. This
concept links to observational snapshots that represent sets of conditions, and
either acts as the setting of a geo-event or influences the initiation of a
geo-event. We envision that the use of this framework within knowledge graphs that
represent geographic entities will help answer the important question of why a
geographic event occurred.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2022 22:55:03 GMT"
}
] | 1,656,460,800,000 | [
[
"Stephen",
"Shirly",
""
],
[
"Li",
"Wenwen",
""
],
[
"Hahmann",
"Torsten",
""
]
] |
2206.13856 | Giuseppe Spallitta | Giuseppe Spallitta, Gabriele Masina, Paolo Morettin, Andrea Passerini
and Roberto Sebastiani | SMT-based Weighted Model Integration with Structure Awareness | Accepted for the 38th Conference on Uncertainty in Artificial
Intelligence (UAI 2022) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Weighted Model Integration (WMI) is a popular formalism aimed at unifying
approaches for probabilistic inference in hybrid domains, involving logical and
algebraic constraints. Despite a considerable amount of recent work, allowing
WMI algorithms to scale with the complexity of the hybrid problem is still a
challenge. In this paper we highlight some substantial limitations of existing
state-of-the-art solutions, and develop an algorithm that combines SMT-based
enumeration, an efficient technique in formal verification, with an effective
encoding of the problem structure. This allows our algorithm to avoid
generating redundant models, resulting in substantial computational savings. An
extensive experimental evaluation on both synthetic and real-world datasets
confirms the advantage of the proposed solution over existing alternatives.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2022 09:46:17 GMT"
}
] | 1,656,460,800,000 | [
[
"Spallitta",
"Giuseppe",
""
],
[
"Masina",
"Gabriele",
""
],
[
"Morettin",
"Paolo",
""
],
[
"Passerini",
"Andrea",
""
],
[
"Sebastiani",
"Roberto",
""
]
] |
2206.13959 | Lucas Rizzo | Lucas Rizzo and Luca Longo | Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts | null | null | 10.1016/j.inffus.2022.08.025 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dealing with uncertain, contradicting, and ambiguous information is still a
central issue in Artificial Intelligence (AI). As a result, many formalisms
have been proposed or adapted so as to consider non-monotonicity, with only a
limited number of works and researchers performing any sort of comparison among
them. A non-monotonic formalism is one that allows the retraction of previous
conclusions or claims, from premises, in light of new evidence, offering some
desirable flexibility when dealing with uncertainty. This research article
focuses on evaluating the inferential capacity of defeasible argumentation, a
formalism particularly envisioned for modelling non-monotonic reasoning. In
addition to this, fuzzy reasoning and expert systems, extended for handling
non-monotonicity of reasoning, are selected and employed as baselines, due to
their vast and accepted use within the AI community. Computational trust was
selected as the domain of application of such models. Trust is an ill-defined
construct, hence, reasoning applied to the inference of trust can be seen as
non-monotonic. Inference models were designed to assign trust scalars to
editors of the Wikipedia project. In particular, argument-based models
demonstrated more robustness than those built upon the baselines, regardless of the
knowledge bases or datasets employed. This study contributes to the body of
knowledge through the exploitation of defeasible argumentation and its
comparison to similar approaches. The practical use of such approaches coupled
with a modular design that facilitates similar experiments was exemplified and
their respective implementations made publicly available on GitHub [120, 121].
This work adds to previous works, empirically enhancing the generalisability of
defeasible argumentation as a compelling approach to reason with quantitative
data and uncertain knowledge.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2022 12:28:47 GMT"
}
] | 1,700,006,400,000 | [
[
"Rizzo",
"Lucas",
""
],
[
"Longo",
"Luca",
""
]
] |
2206.14081 | Robert Helmeczi | Robert K. Helmeczi and Can Kavaklioglu and Mucahit Cevik | Linear programming-based solution methods for constrained partially
observable Markov decision processes | 42 pages, 8 figures | null | 10.1007/s10489-023-04603-7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constrained partially observable Markov decision processes (CPOMDPs) have
been used to model various real-world phenomena. However, they are notoriously
difficult to solve to optimality, and there exist only a few approximation
methods for obtaining high-quality solutions. In this study, grid-based
approximations are used in combination with linear programming (LP) models to
generate approximate policies for CPOMDPs. A detailed numerical study is
conducted with six CPOMDP problem instances considering both their finite and
infinite horizon formulations. The quality of approximation algorithms for
solving unconstrained POMDP problems is established through a comparative
analysis with exact solution methods. Then, the performance of the LP-based
CPOMDP solution approaches for varying budget levels is evaluated. Finally, the
flexibility of LP-based approaches is demonstrated by applying deterministic
policy constraints, and a detailed investigation into their impact on rewards
and CPU run time is provided. For most of the finite horizon problems,
deterministic policy constraints are found to have little impact on expected
reward, but they introduce a significant increase in CPU run time. For infinite
horizon problems, the reverse is observed: deterministic policies tend to yield
lower expected total rewards than their stochastic counterparts, but the impact
of deterministic constraints on CPU run time is negligible in this case.
Overall, these results demonstrate that LP models can effectively generate
approximate policies for both finite and infinite horizon problems while
providing the flexibility to incorporate various additional constraints into
the underlying model.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2022 15:22:24 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jan 2023 18:44:19 GMT"
}
] | 1,687,824,000,000 | [
[
"Helmeczi",
"Robert K.",
""
],
[
"Kavaklioglu",
"Can",
""
],
[
"Cevik",
"Mucahit",
""
]
] |
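To make the LP-based idea in the entry above (2206.14081) concrete, here is a minimal sketch that solves the occupancy-measure linear program for a tiny fully observable constrained MDP with scipy; the grid-based belief discretization used for POMDPs is omitted, and all problem numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny constrained MDP (all numbers are illustrative).
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a, s'] transition probabilities
R = rng.uniform(0.0, 1.0, size=(S, A))           # rewards
C = np.array([[0.0, 1.0]] * S)                   # action 1 incurs one unit of cost
mu0 = np.full(S, 1.0 / S)                        # initial state distribution
budget = 4.0                                     # expected-discounted-cost budget

n = S * A                                        # occupancy variables x[s, a]
idx = lambda s, a: s * A + a

# Flow conservation: sum_a x[s',a] - gamma * sum_{s,a} P[s,a,s'] x[s,a] = mu0[s']
A_eq = np.zeros((S, n))
for sp in range(S):
    for a in range(A):
        A_eq[sp, idx(sp, a)] += 1.0
    for s in range(S):
        for a in range(A):
            A_eq[sp, idx(s, a)] -= gamma * P[s, a, sp]

# Cost budget: sum_{s,a} C[s,a] * x[s,a] <= budget
res = linprog(c=-R.reshape(n),                   # linprog minimizes, so negate reward
              A_ub=C.reshape(1, n), b_ub=[budget],
              A_eq=A_eq, b_eq=mu0,
              bounds=[(0, None)] * n, method="highs")
assert res.success
x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)        # stochastic policy from occupancies
print("expected discounted return:", round(-res.fun, 3))
print("policy:\n", policy.round(3))
```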
2206.14153 | Shruti Patil | Shreyas Gawde, Shruti Patil, Satish Kumar, Pooja Kamat, Ketan Kotecha,
Ajith Abraham | Multi-Fault Diagnosis Of Industrial Rotating Machines Using Data-Driven
Approach: A Review Of Two Decades Of Research | 64 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Industry 4.0 is an era of smart manufacturing. Manufacturing is impossible
without the use of machinery. The majority of these machines comprise rotating
components and are called rotating machines. The engineers' top priority is to
maintain these critical machines to reduce unplanned shutdowns and increase
the useful life of machinery. Predictive maintenance (PDM) is the current trend
of smart maintenance. The challenging task in PDM is to diagnose the type of
fault. With advances in Artificial Intelligence (AI), the data-driven approach to
predictive maintenance is taking off in smart manufacturing.
Several researchers have published work related to fault diagnosis in rotating
machines, mainly exploring a single type of fault. However, a consolidated
review of literature that focuses more on multi-fault diagnosis of rotating
machines is lacking. There is a need to systematically cover all the aspects,
from sensor selection, data acquisition, feature extraction and multi-sensor
data fusion to a systematic review of the AI techniques employed in multi-fault
diagnosis. In this regard, this paper attempts to do so by
implementing a systematic literature review on a Data-driven approach for
multi-fault diagnosis of Industrial Rotating Machines using Preferred Reporting
Items for Systematic Reviews and Meta-Analysis (PRISMA) method. The PRISMA
method is a collection of guidelines for the composition and structure of
systematic reviews and other meta-analyses. This paper identifies the
foundational work done in the field and gives a comparative study of different
aspects related to multi-fault diagnosis of industrial rotating machines. The
paper also identifies the major challenges, research gap. It gives solutions
using recent advancements in AI in implementing multi-fault diagnosis, giving a
strong base for future research in this field.
| [
{
"version": "v1",
"created": "Mon, 30 May 2022 14:54:27 GMT"
}
] | 1,656,460,800,000 | [
[
"Gawde",
"Shreyas",
""
],
[
"Patil",
"Shruti",
""
],
[
"Kumar",
"Satish",
""
],
[
"Kamat",
"Pooja",
""
],
[
"Kotecha",
"Ketan",
""
],
[
"Abraham",
"Ajith",
""
]
] |
2206.14298 | Dieqiao Feng | Dieqiao Feng, Carla Gomes, Bart Selman | Left Heavy Tails and the Effectiveness of the Policy and Value Networks
in DNN-based best-first search for Sokoban Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the success of practical solvers in various NP-complete domains such
as SAT and CSP as well as using deep reinforcement learning to tackle
two-player games such as Go, certain classes of PSPACE-hard planning problems
have remained out of reach. Even carefully designed domain-specialized solvers
can fail quickly due to the exponential search space on hard instances. Recent
works that combine traditional search methods, such as best-first search and
Monte Carlo tree search, with Deep Neural Networks' (DNN) heuristics have shown
promising progress and can solve a significant number of hard planning
instances beyond specialized solvers. To better understand why these approaches
work, we studied the interplay of the policy and value networks of DNN-based
best-first search on Sokoban and show the surprising effectiveness of the
policy network, further enhanced by the value network, as a guiding heuristic
for the search. To further understand the phenomena, we studied the cost
distribution of the search algorithms and found that Sokoban instances can have
heavy-tailed runtime distributions, with tails both on the left and right-hand
sides. In particular, for the first time, we show the existence of \textit{left
heavy tails} and propose an abstract tree model that can empirically explain
the appearance of these tails. The experiments show the critical role of the
policy network as a powerful heuristic guiding the search, which can lead to
left heavy tails with polynomial scaling by avoiding exploring exponentially
sized subtrees. Our results also demonstrate the importance of random restarts,
as are widely used in traditional combinatorial solvers, for DNN-based search
methods to avoid left and right heavy tails.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2022 21:48:54 GMT"
}
] | 1,656,547,200,000 | [
[
"Feng",
"Dieqiao",
""
],
[
"Gomes",
"Carla",
""
],
[
"Selman",
"Bart",
""
]
] |
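The random-restart recommendation in the entry above (2206.14298) can be sketched as a cutoff-and-restart wrapper around any randomized solver: a run that exceeds a node budget is abandoned and restarted with a fresh seed, so a left-heavy runtime distribution (many cheap runs) keeps the total cost low. The `solve` stub below merely simulates a heavy-tailed search cost and is not the authors' planner.

```python
import random

def solve(instance, seed, node_cutoff):
    """Placeholder randomized search: returns nodes expanded, or None on cutoff."""
    rng = random.Random(seed)
    # Simulate a heavy-tailed search cost: usually moderate, occasionally huge.
    cost = int(rng.paretovariate(0.7) * 100)
    return cost if cost <= node_cutoff else None

def restart_search(instance, node_cutoff=10_000, max_restarts=50, seed=0):
    """Run the solver with random restarts; return (restarts_used, total_nodes)."""
    total = 0
    for r in range(max_restarts):
        nodes = solve(instance, seed + r, node_cutoff)
        if nodes is not None:
            return r, total + nodes          # solved on this restart
        total += node_cutoff                  # pay the full cutoff for a failed run
    return None                               # give up after max_restarts attempts

print(restart_search("toy-sokoban-instance"))
```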
2206.14480 | Javier Segovia Aguas | Javier Segovia-Aguas, Yolanda E-Mart\'in, Sergio Jim\'enez | Representation and Synthesis of C++ Programs for Generalized Planning | Accepted at sixth workshop on Generalization in Planning at
IJCAI-ECAI 2022 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper introduces a novel representation for Generalized Planning (GP)
problems, and their solutions, as C++ programs. Our C++ representation allows
us to formally prove the termination of generalized plans, and to specify
their asymptotic complexity w.r.t. the number of world objects. Characterizing
the complexity of C++ generalized plans enables the application of a
combinatorial search that enumerates the space of possible GP solutions in
order of complexity. Experimental results show that our implementation of this
approach, which we call BFGP++, outperforms the previous GP as heuristic search
approach for the computation of generalized plans represented as
compiler-styled programs. Last but not least, the execution of a C++ program on
a classical planning instance is a deterministic grounding-free and search-free
process, so our C++ representation allows us to automatically validate the
computed solutions on large test instances of thousands of objects, where
off-the-shelf classical planners get stuck either in the pre-processing or in
the search.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2022 09:13:21 GMT"
}
] | 1,656,547,200,000 | [
[
"Segovia-Aguas",
"Javier",
""
],
[
"E-Martín",
"Yolanda",
""
],
[
"Jiménez",
"Sergio",
""
]
] |
2206.14506 | Huili Xing | Huili Xing | An extension of process calculus for asynchronous communications between
agents with epistemic states | 22 pages and 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modelling an agent's epistemic state and its change plays a central role in
intelligent agent systems. Asynchrony plays a key role in distributed
systems, in which the messages transmitted may not be received instantly by the
agents. To characterize asynchronous communications, asynchronous announcement
logic (AAL) has been presented, which focuses on the logic laws of the change
of epistemic state after receiving information.
However, AAL does not involve the interactive behaviours between an agent and
its environment. Through enriching the well-known pi-calculus by adding the
operators for passing basic facts and applying the well-known action model
logic to describe agents' epistemic states, this paper presents the e-calculus
to model epistemic interactions between agents with epistemic states. The
e-calculus can be adopted to characterize synchronous and asynchronous
communications between agents. To capture the asynchrony, a buffer pool is
constructed to store the announced basic facts, and each agent reads these facts
from this buffer pool in some order. Based on the transmission of link names,
the e-calculus is able to realize reading from this buffer pool in different
orders. This paper gives two examples: one is to read in the order in which the
announced basic facts are sent (first-in-first-out, FIFO), and the other is to read
them in an arbitrary order.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2022 09:54:58 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 05:05:35 GMT"
}
] | 1,677,456,000,000 | [
[
"Xing",
"Huili",
""
]
] |
2207.00143 | Bohui Zhang | Bohui Zhang, Filip Ilievski, Pedro Szekely | Enriching Wikidata with Linked Open Data | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large public knowledge graphs, like Wikidata, contain billions of statements
about tens of millions of entities, thus inspiring various use cases to exploit
such knowledge graphs. However, practice shows that much of the relevant
information that fits users' needs is still missing in Wikidata, while current
linked open data (LOD) tools are not suitable to enrich large graphs like
Wikidata. In this paper, we investigate the potential of enriching Wikidata
with structured data sources from the LOD cloud. We present a novel workflow
that includes gap detection, source selection, schema alignment, and semantic
validation. We evaluate our enrichment method with two complementary LOD
sources: a noisy source with broad coverage, DBpedia, and a manually curated
source with a narrow focus on the art domain, Getty. Our experiments show that
our workflow can enrich Wikidata with millions of novel statements from
external LOD sources with high quality. Property alignment and data quality are
key challenges, whereas entity alignment and source selection are
well-supported by existing Wikidata mechanisms. We make our code and data
available to support future work.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2022 01:50:24 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 16:32:30 GMT"
}
] | 1,660,003,200,000 | [
[
"Zhang",
"Bohui",
""
],
[
"Ilievski",
"Filip",
""
],
[
"Szekely",
"Pedro",
""
]
] |
2207.00630 | William Cohen | Wenhu Chen, William W. Cohen, Michiel De Jong, Nitish Gupta,
Alessandro Presta, Pat Verga, John Wieting | QA Is the New KR: Question-Answer Pairs as Knowledge Bases | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this position paper, we propose a new approach to generating a type of
knowledge base (KB) from text, based on question generation and entity linking.
We argue that the proposed type of KB has many of the key advantages of a
traditional symbolic KB: in particular, it consists of small modular
components, which can be combined compositionally to answer complex queries,
including relational queries and queries involving "multi-hop" inferences.
However, unlike a traditional KB, this information store is well-aligned with
common user information needs.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2022 19:09:08 GMT"
}
] | 1,656,979,200,000 | [
[
"Chen",
"Wenhu",
""
],
[
"Cohen",
"William W.",
""
],
[
"De Jong",
"Michiel",
""
],
[
"Gupta",
"Nitish",
""
],
[
"Presta",
"Alessandro",
""
],
[
"Verga",
"Pat",
""
],
[
"Wieting",
"John",
""
]
] |
2207.00682 | Harsh Panwar | Harsh Panwar | The NPC AI of The Last of Us: A case study | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Last of Us is a game focused on stealth, companionship and strategy. The
game is set in a desolate post-pandemic world and thus needs AI
companions to hold the players' interest. The game has three main types of NPC:
the Infected, human enemies and buddy AIs. This case study discusses the
challenges the developers faced in creating AI for these NPCs and the AI
techniques they used to solve them. It also compares these challenges and
approaches with those of similar industry-leading games.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2022 23:10:40 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 18:51:23 GMT"
}
] | 1,659,398,400,000 | [
[
"Panwar",
"Harsh",
""
]
] |
2207.00719 | Jin Liu | Jin Liu and Chongfeng Fan and Fengyu Zhou and Huijuan Xu | Syntax Controlled Knowledge Graph-to-Text Generation with Order and
Semantic Consistency | NAACL 2022 Findings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The knowledge graph (KG) stores a large amount of structural knowledge, but
it is not easy for humans to understand directly. Knowledge graph-to-text
(KG-to-text) generation aims to generate easy-to-understand sentences from the
KG, and at the same time, maintains semantic consistency between generated
sentences and the KG. Existing KG-to-text generation methods phrase this task
as a sequence-to-sequence generation task with linearized KG as input and
consider the consistency issue of the generated texts and KG through a simple
selection between decoded sentence word and KG node word at each time step.
However, the linearized KG order is commonly obtained through a heuristic
search without data-driven optimization. In this paper, we optimize the
knowledge description order prediction under the order supervision extracted
from the caption and further enhance the consistency of the generated sentences
and KG through syntactic and semantic regularization. We incorporate the
Part-of-Speech (POS) syntactic tags to constrain the positions to copy words
from the KG and employ a semantic context scoring function to evaluate the
semantic fitness for each word in its local context when decoding each word in
the generated sentence. Extensive experiments are conducted on two datasets,
WebNLG and DART, and achieve state-of-the-art performances.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2022 02:42:14 GMT"
}
] | 1,656,979,200,000 | [
[
"Liu",
"Jin",
""
],
[
"Fan",
"Chongfeng",
""
],
[
"Zhou",
"Fengyu",
""
],
[
"Xu",
"Huijuan",
""
]
] |
2207.00788 | Weitao Zhou | Weitao Zhou, Zhong Cao, Yunkang Xu, Nanshan Deng, Xiaoyu Liu, Kun
Jiang and Diange Yang | Long-Tail Prediction Uncertainty Aware Trajectory Planning for
Self-driving Vehicles | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A typical trajectory planner of autonomous driving commonly relies on
predicting the future behavior of surrounding obstacles. Recently, deep
learning technology has been widely adopted to design prediction models due to
their impressive performance. However, such models may fail in the "long-tail"
driving cases where the training data is sparse or unavailable, leading to
planner failures. To this end, this work proposes a trajectory planner to
consider the prediction model uncertainty arising from insufficient data for
safer performance. Firstly, an ensemble network structure estimates the
prediction model's uncertainty due to insufficient training data. Then a
trajectory planner is designed to consider the worst-case arising from
prediction uncertainty. The results show that the proposed method can improve
the safety of trajectory planning under the prediction uncertainty caused by
insufficient data. At the same time, with sufficient data, the framework will
not lead to overly conservative results. This technology helps to improve the
safety and reliability of autonomous vehicles under the long-tail data
distribution of the real world.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2022 10:17:31 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2022 04:28:23 GMT"
}
] | 1,659,052,800,000 | [
[
"Zhou",
"Weitao",
""
],
[
"Cao",
"Zhong",
""
],
[
"Xu",
"Yunkang",
""
],
[
"Deng",
"Nanshan",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"Jiang",
"Kun",
""
],
[
"Yang",
"Diange",
""
]
] |
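A minimal sketch of the ensemble idea in the entry above (2207.00788): several independently trained predictors forecast the obstacle trajectory, their disagreement acts as an uncertainty estimate, and the planner scores each candidate ego trajectory against the most pessimistic prediction. The toy models and cost function below are invented stand-ins, not the paper's networks.

```python
import numpy as np

def ensemble_predictions(models, obstacle_history):
    """Stack the K ensemble members' predicted obstacle trajectories: (K, T, 2)."""
    return np.stack([m(obstacle_history) for m in models])

def worst_case_cost(ego_traj, preds, safe_dist=2.0):
    """Cost of an ego trajectory against the *worst* ensemble prediction."""
    # Distance between ego and obstacle at every step, for every ensemble member.
    dists = np.linalg.norm(preds - ego_traj[None, :, :], axis=-1)   # (K, T)
    violation = np.clip(safe_dist - dists, 0.0, None).sum(axis=1)   # (K,)
    return violation.max()          # plan against the most pessimistic member

def plan(candidates, models, obstacle_history):
    preds = ensemble_predictions(models, obstacle_history)
    # Epistemic uncertainty proxy: spread of the ensemble predictions.
    uncertainty = preds.std(axis=0).mean()
    costs = [worst_case_cost(c, preds) for c in candidates]
    return candidates[int(np.argmin(costs))], uncertainty

# Toy usage: 5 "models" that extrapolate the last velocity with different noise.
rng = np.random.default_rng(0)
def make_model(noise):
    return lambda hist: hist[-1] + np.cumsum(
        np.tile(hist[-1] - hist[-2], (10, 1)) + noise, axis=0)
models = [make_model(rng.normal(scale=0.1, size=(10, 2))) for _ in range(5)]
history = np.array([[0.0, 0.0], [1.0, 0.5]])
candidates = [np.linspace([1, -3], [11, -3], 10), np.linspace([1, 3], [11, 3], 10)]
best, unc = plan(candidates, models, history)
print("uncertainty:", round(float(unc), 3))
```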
2207.00822 | Alexander Serov | Alexander Serov | Kernel Based Cognitive Architecture for Autonomous Agents | 5 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main problems of modern cognitive architectures is an excessively
schematic approach to modeling the processes of cognitive activity. It does not
allow the creation of a universal architecture that would be capable of
reproducing mental functions without using a predetermined set of perceptual
patterns. This paper considers an evolutionary approach to creating a cognitive
functionality. The basis of our approach is the use of the functional kernel
which consistently generates the intellectual functions of an autonomous agent.
We consider a cognitive architecture which ensures the evolution of the agent
on the basis of Symbol Emergence Problem solution. Evolution of cognitive
abilities of the agent is described on the basis of the theory of
constructivism.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2022 12:41:32 GMT"
}
] | 1,656,979,200,000 | [
[
"Serov",
"Alexander",
""
]
] |
2207.00902 | Jamshid Sourati | Jamshid Sourati, James Evans | Complementary artificial intelligence designed to augment human
discovery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neither artificial intelligence designed to play Turing's imitation game, nor
augmented intelligence built to maximize the human manipulation of information
are tuned to accelerate innovation and improve humanity's collective advance
against its greatest challenges. We reconceptualize and pilot beneficial AI to
radically augment human understanding by complementing rather than competing
with human cognitive capacity. Our approach to complementary intelligence
builds on insights underlying the wisdom of crowds, which hinges on the
independence and diversity of crowd members' information and approach. By
programmatically incorporating information on the evolving distribution of
scientific expertise from research papers, our approach follows the
distribution of content in the literature while avoiding the scientific crowd
and the hypotheses cognitively available to it. We use this approach to
generate valuable predictions for what materials possess valuable
energy-related properties (e.g., thermoelectricity), and what compounds possess
valuable medical properties (e.g., asthma) that complement the human scientific
crowd. We demonstrate that our complementary predictions, if identified by
human scientists and inventors at all, are only discovered years further into
the future. When we evaluate the promise of our predictions with
first-principles equations, we demonstrate that increased complementarity of
our predictions does not decrease and in some cases increases the probability
that the predictions possess the targeted properties. In summary, by tuning AI
to avoid the crowd, we can generate hypotheses unlikely to be imagined or
pursued until the distant future and promise to punctuate scientific advance.
By identifying and correcting for collective human bias, these models also
suggest opportunities to improve human prediction by reformulating science
education for discovery.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2022 19:36:34 GMT"
}
] | 1,656,979,200,000 | [
[
"Sourati",
"Jamshid",
""
],
[
"Evans",
"James",
""
]
] |
2207.01211 | Xiangri Lu | Xiangri Lu | Analysis of Robocode Robot Adaptive Confrontation Based on Zero-Sum Game | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The confrontation of modern intelligence is to some extent a non-complete
information confrontation, in which neither side has access to sufficient
information to detect the deployment of the adversary; the agent must therefore
retrieve information adaptively and develop confrontation strategies in the
adversarial environment. In this
paper, seven tank robots, including TestRobot, are organized into 1v1
independent and mixed confrontations. The main objective of this paper is to
verify the effectiveness of TestRobot's zero-sum-game Alpha-Beta pruning
algorithm, which combines an estimate of the opponent's position at the next
moment under the round-based game strategy with releasing the agent's own
bullets in advance to hit the opponent. Finally, based
on the results of the confrontation experiments, the inherent
differences between the tank agents are illustrated by plotting histograms of the
1v1 independent confrontations and radar plots of the mixed confrontations.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 05:34:40 GMT"
}
] | 1,656,979,200,000 | [
[
"Lu",
"Xiangri",
""
]
] |
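The zero-sum Alpha-Beta pruning at the core of the entry above (2207.01211) is standard minimax search with cutoffs; the sketch below shows the generic algorithm over an abstract game interface (the `game` object with `moves`, `apply`, `is_terminal` and `evaluate` methods is a hypothetical stand-in for the Robocode-specific state).

```python
def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Generic minimax search with alpha-beta cutoffs; returns (value, best_move)."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None
    best_move = None
    if maximizing:
        value = float("-inf")
        for move in game.moves(state, maximizing=True):
            child, _ = alphabeta(game, game.apply(state, move), depth - 1,
                                 alpha, beta, maximizing=False)
            if child > value:
                value, best_move = child, move
            alpha = max(alpha, value)
            if alpha >= beta:        # opponent will never allow this branch
                break
        return value, best_move
    else:
        value = float("inf")
        for move in game.moves(state, maximizing=False):
            child, _ = alphabeta(game, game.apply(state, move), depth - 1,
                                 alpha, beta, maximizing=True)
            if child < value:
                value, best_move = child, move
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value, best_move
```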
2207.01239 | Zhongxiang Chang | Zhongxiang Chang and Yuning Chen and Zhongbao Zhou | Satellite downlink scheduling under breakpoint resume mode | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A novel problem called satellite downlink scheduling problem (SDSP) under
breakpoint resume mode (SDSP-BRM) is studied in our paper. Compared to the
traditional SDSP, where an image's data has to be completely downloaded at one
time, SDSP-BRM allows the data of an image to be broken into a number of
pieces which can be downloaded in different playback windows. By analyzing the
characteristics of SDSP-BRM, we first propose a mixed integer programming model
for its formulation and then prove the NP-hardness of SDSP-BRM. To solve the
problem, we design a simple and effective heuristic algorithm (SEHA) where a
number of problem-tailored move operators are proposed for local searching.
Numerical results on a set of well-designed scenarios demonstrate the
efficiency of the proposed algorithm in comparison to the general purpose CPLEX
solver. We conduct additional experiments to shed light on the impact of the
segmentation strategy on the overall performance of the proposed SEHA.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 07:30:51 GMT"
}
] | 1,656,979,200,000 | [
[
"Chang",
"Zhongxiang",
""
],
[
"Chen",
"Yuning",
""
],
[
"Zhou",
"Zhongbao",
""
]
] |
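The breakpoint-resume idea in the entry above (2207.01239) lets one image's download be split across several playback windows. The greedy sketch below packs images, highest priority first, into the remaining window capacities, splitting a download whenever a window fills up; it is a naive baseline for intuition, not the paper's SEHA heuristic.

```python
def greedy_brm_schedule(images, windows):
    """images: list of (image_id, size, priority); windows: list of capacities.

    Returns {image_id: [(window_index, amount), ...]} under breakpoint-resume mode,
    i.e. one image may be split across several playback windows.
    """
    remaining = list(windows)
    plan = {}
    for img_id, size, _prio in sorted(images, key=lambda x: -x[2]):
        left, pieces = size, []
        for w, cap in enumerate(remaining):
            if left == 0:
                break
            take = min(cap, left)
            if take > 0:
                pieces.append((w, take))
                remaining[w] -= take
                left -= take
        if left == 0:                 # only keep fully downloadable images
            plan[img_id] = pieces
        else:                         # roll back a partial, infeasible download
            for w, take in pieces:
                remaining[w] += take
    return plan

print(greedy_brm_schedule(
    images=[("img1", 70, 3), ("img2", 40, 2), ("img3", 30, 1)],
    windows=[50, 50, 20]))
```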
2207.01250 | Zhongxiang Chang | Zhongxiang Chang and Zhongbao Zhou | Three multi-objective memetic algorithms for observation scheduling
problem of active-imaging AEOS | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The observation scheduling problem for agile earth observation satellites
(OSPFAS) plays a critical role in the management of agile earth observation
satellites (AEOSs). Since active imaging enriches OSPFAS, we call the novel
problem the observation scheduling problem for AEOS with variable
image duration (OSWVID). A cumulative image quality measure and a detailed energy
consumption model are proposed to build OSWVID as a bi-objective optimization model.
Three multi-objective memetic algorithms, PD+NSGA-II, LA+NSGA-II and
ALNS+NSGA-II, are then designed to solve OSWVID. Considering the heuristic
knowledge summarized in our previous research, several operators are designed
to improve each of these three algorithms. Based on existing instances,
we analyze parameter optimization, operator evolution, and the efficiency of
these three algorithms through extensive simulation experiments.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 08:18:54 GMT"
}
] | 1,656,979,200,000 | [
[
"Chang",
"Zhongxiang",
""
],
[
"Zhou",
"Zhongbao",
""
]
] |
2207.01251 | Zijian Hu | Zijian Hu, Xiaoguang Gao, Kaifang Wan, Qianglong Wang, Yiwei Zhai | Asynchronous Curriculum Experience Replay: A Deep Reinforcement Learning
Approach for UAV Autonomous Motion Control in Unknown Dynamic Environments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned aerial vehicles (UAVs) have been widely used in military warfare. In
this paper, we formulate the autonomous motion control (AMC) problem as a
Markov decision process (MDP) and propose an advanced deep reinforcement
learning (DRL) method that allows UAVs to execute complex tasks in large-scale
dynamic three-dimensional (3D) environments. To overcome the limitations of the
prioritized experience replay (PER) algorithm and improve performance, the
proposed asynchronous curriculum experience replay (ACER) uses multithreads to
asynchronously update the priorities, assigns the true priorities and applies a
temporary experience pool to make available experiences of higher quality for
learning. A first-in-useless-out (FIUO) experience pool is also introduced to
ensure the higher use value of the stored experiences. In addition, combined
with curriculum learning (CL), a more reasonable training paradigm of sampling
experiences from simple to difficult is designed for training UAVs. By training
in a complex unknown environment constructed based on the parameters of a real
UAV, the proposed ACER improves the convergence speed by 24.66\% and the
convergence result by 5.59\% compared to the state-of-the-art twin delayed deep
deterministic policy gradient (TD3) algorithm. The testing experiments carried
out in environments with different complexities demonstrate the strong
robustness and generalization ability of the ACER agent.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 08:19:39 GMT"
}
] | 1,656,979,200,000 | [
[
"Hu",
"Zijian",
""
],
[
"Gao",
"Xiaoguang",
""
],
[
"Wan",
"Kaifang",
""
],
[
"Wang",
"Qianglong",
""
],
[
"Zhai",
"Yiwei",
""
]
] |
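A rough sketch of the replay machinery discussed in the entry above (2207.01251): a prioritized buffer whose sampling follows TD-error-based priorities, plus a curriculum weight that shifts sampling from easy to difficult experiences as training progresses. The asynchronous multithreaded priority updates and the first-in-useless-out pool of the paper are not reproduced here.

```python
import numpy as np

class CurriculumPrioritizedBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priority, self.difficulty = [], [], []

    def add(self, transition, td_error, difficulty):
        if len(self.data) >= self.capacity:          # drop the oldest experience
            self.data.pop(0); self.priority.pop(0); self.difficulty.pop(0)
        self.data.append(transition)
        self.priority.append((abs(td_error) + 1e-6) ** self.alpha)
        self.difficulty.append(difficulty)

    def sample(self, batch_size, progress):
        """progress in [0, 1]: early training favours easier experiences."""
        prio = np.asarray(self.priority)
        diff = np.asarray(self.difficulty)
        # Curriculum weight: prefer difficulties close to the current progress level.
        curriculum = np.exp(-4.0 * (diff - progress) ** 2)
        p = prio * curriculum
        p = p / p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priority[i] = (abs(err) + 1e-6) ** self.alpha

# Toy usage with dummy transitions.
buf = CurriculumPrioritizedBuffer(capacity=1000)
for t in range(200):
    buf.add(transition={"step": t}, td_error=np.random.rand(),
            difficulty=np.random.rand())
idx, batch = buf.sample(batch_size=8, progress=0.2)
buf.update_priorities(idx, td_errors=np.random.rand(8))
```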
2207.01257 | Zhongxiang Chang | Zhongxiang Chang and Abraham P. Punnen and Zhongbao Zhou | Multi-strip observation scheduling problem for active-imaging agile
earth observation satellites | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Active-imaging agile earth observation satellite (AI-AEOS) is a new
generation agile earth observation satellite (AEOS). With renewed capabilities
in observation and active imaging, AI-AEOS improves upon the observation
capabilities of AEOS and provides additional ways to observe ground targets.
This, however, makes the observation scheduling problem for these agile earth
observation satellites more complex, especially when considering multi-strip
ground targets. In this paper, we investigate the multi-strip observation
scheduling problem for an active-image agile earth observation satellite
(MOSP). A bi-objective optimization model is presented for MOSP along with an
adaptive bi-objective memetic algorithm which integrates the combined power of
an adaptive large neighborhood search algorithm (ALNS) and a nondominated
sorting genetic algorithm II (NSGA-II). Results of extensive computational
experiments are presented which show that ALNS and NSGA-II, when working in
unison, produce superior outcomes. Our model is more versatile than existing
models and provides enhanced capabilities in applied problem solving.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 08:35:57 GMT"
}
] | 1,656,979,200,000 | [
[
"Chang",
"Zhongxiang",
""
],
[
"Punnen",
"Abraham P.",
""
],
[
"Zhou",
"Zhongbao",
""
]
] |
2207.01275 | Shivansh Beohar | Shivansh Beohar, Fabian Heinrich, Rahul Kala, Helge Ritter and Andrew
Melnik | Solving Learn-to-Race Autonomous Racing Challenge by Planning in Latent
Space | Published in SL4AD Workshop, ICML 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Learn-to-Race Autonomous Racing Virtual Challenge, hosted on
the www<dot>aicrowd<dot>com platform, consisted of two tracks: Single and Multi
Camera. Our UniTeam team was among the final winners in the Single Camera
track. The agent is required to pass the previously unknown F1-style track in
the minimum time with the least amount of off-road driving violations. In our
approach, we used the U-Net architecture for road segmentation, a variational
autoencoder for encoding a binary road mask, and a nearest-neighbor search
strategy that selects the best action for a given state. Our agent achieved an
average speed of 105 km/h on stage 1 (known track) and 73 km/h on stage 2
(unknown track) without any off-road driving violations. Here we present our
solution and results.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 09:07:06 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2022 07:02:30 GMT"
}
] | 1,657,065,600,000 | [
[
"Beohar",
"Shivansh",
""
],
[
"Heinrich",
"Fabian",
""
],
[
"Kala",
"Rahul",
""
],
[
"Ritter",
"Helge",
""
],
[
"Melnik",
"Andrew",
""
]
] |
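The nearest-neighbour action selection in the entry above (2207.01275) can be sketched as a lookup over latent codes: encode the current observation, find the most similar stored latent, and replay the action associated with it. The encoder stub and the stored experience below are invented placeholders rather than the team's trained VAE.

```python
import numpy as np

class LatentNearestNeighbourPolicy:
    """Pick the action of the stored state whose latent code is closest."""

    def __init__(self, encoder, latents, actions):
        self.encoder = encoder                  # e.g. a VAE encoder over road masks
        self.latents = np.asarray(latents)      # (N, d) latent codes of past states
        self.actions = np.asarray(actions)      # (N, action_dim) actions taken there

    def act(self, observation):
        z = self.encoder(observation)
        dists = np.linalg.norm(self.latents - z, axis=1)
        return self.actions[int(np.argmin(dists))]

# Toy usage: a random "latent dataset" and an encoder stub.
rng = np.random.default_rng(0)
policy = LatentNearestNeighbourPolicy(
    encoder=lambda obs: obs.mean(axis=(0, 1)),           # stand-in for a VAE encoder
    latents=rng.normal(size=(500, 3)),
    actions=rng.uniform(-1, 1, size=(500, 2)))            # [steering, throttle]
print(policy.act(rng.normal(size=(64, 64, 3))))           # fake segmented image
```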
2207.01412 | Zhongxiang Chang | Zhongxiang Chang and Zhongbao Zhou | Satellite image data downlink scheduling problem with family attribute:
Model & Algorithm | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The asynchronous development between observation capability and
transmission capability means that an original image data (OID) product formed by
a single observation cannot be completely transmitted in one transmission
opportunity between the EOS and GS, known as a visible time window (VTW). The OID
therefore needs to be segmented into several segmented image data (SID) pieces,
which are then transmitted in several VTWs; this enriches the extension of the
satellite image data downlink scheduling problem (SIDSP). We define the novel
SIDSP as the satellite image data downlink scheduling problem with family
attribute (SIDSPWFA), in which large OIDs are first segmented by a fast
segmentation operator, and all SIDs and the remaining unsegmented OIDs are
transmitted in a second step. Two optimization objectives, the image data
transmission failure rate (FR) and the segmentation times (ST), are then designed
to formalize SIDSPWFA as a bi-objective discrete optimization model. Furthermore,
a bi-stage differential evolution algorithm (DE+NSGA-II) with several bi-stage
operators is developed. The efficiency of the models, strategies, algorithms and
operators is analyzed in detail on extensive simulation instances.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 13:48:58 GMT"
}
] | 1,656,979,200,000 | [
[
"Chang",
"Zhongxiang",
""
],
[
"Zhou",
"Zhongbao",
""
]
] |
2207.01434 | Yue Qin | Yue Qin and Xiaojing Liao | Cybersecurity Entity Alignment via Masked Graph Attention Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cybersecurity vulnerability information is often recorded by multiple
channels, including government vulnerability repositories,
individual-maintained vulnerability-gathering platforms, or
vulnerability-disclosure email lists and forums. Integrating vulnerability
information from different channels enables comprehensive threat assessment and
quick deployment to various security mechanisms. Efforts to automatically
gather such information, however, are impeded by the limitations of today's
entity alignment techniques. In our study, we annotate the first
cybersecurity-domain entity alignment dataset and reveal the unique
characteristics of security entities. Based on these observations, we propose
the first cybersecurity entity alignment model, CEAM, which equips GNN-based
entity alignment with two mechanisms: asymmetric masked aggregation and
partitioned attention. Experimental results on cybersecurity-domain entity
alignment datasets demonstrate that CEAM significantly outperforms
state-of-the-art entity alignment methods.
| [
{
"version": "v1",
"created": "Mon, 4 Jul 2022 14:19:32 GMT"
}
] | 1,656,979,200,000 | [
[
"Qin",
"Yue",
""
],
[
"Liao",
"Xiaojing",
""
]
] |
2207.01543 | Chenxi Dong | Dong Chenxi, QP Zhang, B Hu, JC Zhang, Dl Lin | An Integrated System of Drug Matching and Abnormal Approval Number
Correction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This essay is based on the joint project with 111, Inc. The pharmacy
e-Commerce business grows rapidly in recent years with the ever-increasing
medical demand during the pandemic. A big challenge for online pharmacy
platforms is drug product matching. The e-Commerce platform usually collects
drug product information from multiple data sources such as the warehouse or
retailers. Therefore, the data format is inconsistent, making it hard to
identify and match the same drug product. This paper creates an integrated
system for matching drug products from two data sources. Besides, the system
would correct some inconsistent drug approval numbers based on a Naive-Bayes
drug type (Chinese or Non-Chinese Drug) classifier. Our integrated system
achieves 98.3% drug matching accuracy, with 99.2% precision and 97.5% recall
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2022 11:19:50 GMT"
}
] | 1,656,979,200,000 | [
[
"Chenxi",
"Dong",
""
],
[
"Zhang",
"QP",
""
],
[
"Hu",
"B",
""
],
[
"Zhang",
"JC",
""
],
[
"Lin",
"Dl",
""
]
] |
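The Naive Bayes drug-type classifier mentioned in the entry above (2207.01543) can be approximated with a character-n-gram bag-of-words model; the tiny training set, labels and feature choice below are invented for illustration and are not 111, Inc.'s data or pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical drug-name / approval-number strings with made-up labels.
names = ["国药准字H20056789 阿莫西林胶囊", "国药准字Z20090123 板蓝根颗粒",
         "H20160321 Amoxicillin Capsules", "J20180456 Imported Ibuprofen Tablets"]
labels = ["chinese", "chinese", "non_chinese", "non_chinese"]

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)),  # character n-grams
    MultinomialNB())
clf.fit(names, labels)

# The predicted drug type can then be used to flag inconsistent approval numbers.
print(clf.predict(["国药准字H20201111 感冒灵颗粒"]))
```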
2207.01845 | Shivansh Beohar | Shivansh Beohar and Andrew Melnik | Planning with RL and episodic-memory behavioral priors | Published in ICRA 2022 BPRL Workshop | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The practical application of learning agents requires sample efficient and
interpretable algorithms. Learning from behavioral priors is a promising way to
bootstrap agents with a better-than-random exploration policy or a safe-guard
against the pitfalls of early learning. Existing solutions for imitation
learning require a large number of expert demonstrations and rely on
hard-to-interpret learning methods like Deep Q-learning. In this work we
present a planning-based approach that can use these behavioral priors for
effective exploration and learning in a reinforcement learning environment, and
we demonstrate that curated exploration policies in the form of behavioral
priors can help an agent learn faster.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2022 07:11:05 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2022 09:04:54 GMT"
}
] | 1,657,238,400,000 | [
[
"Beohar",
"Shivansh",
""
],
[
"Melnik",
"Andrew",
""
]
] |
2207.02100 | Keyuan Zhang | Keyuan Zhang, Jiayu Bai, Jialin Liu | Generating Game Levels of Diverse Behaviour Engagement | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been growing interest in experience-driven
procedural level generation. Various metrics have been formulated to model
player experience and help generate personalised levels. In this work, we
question whether experience metrics can adapt to agents with different
personas. We start by reviewing existing metrics for evaluating game levels.
Then, focusing on platformer games, we design a framework integrating various
agents and evaluation metrics. Experimental studies on \emph{Super Mario Bros.}
indicate that using the same evaluation metrics but agents with different
personas can generate levels tailored to a particular persona. This implies that, for
simple games, using a game-playing agent of a specific player archetype as a
level tester is probably all we need to generate levels of diverse behaviour
engagement.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2022 15:08:12 GMT"
}
] | 1,657,065,600,000 | [
[
"Zhang",
"Keyuan",
""
],
[
"Bai",
"Jiayu",
""
],
[
"Liu",
"Jialin",
""
]
] |
2207.02258 | Jean-Guy Mailly | Yohann Bacquey, Jean-Guy Mailly, Pavlos Moraitis, Julien Rossit | Admissibility in Strength-based Argumentation: Complexity and Algorithms
(Extended Version with Proofs) | This is an extended version of a paper accepted at COMMA 2022. 17
pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Strength-based Argumentation Frameworks (StrAFs) have been proposed
to model situations where some quantitative strength is associated with
arguments. In this setting, the notion of accrual corresponds to sets of
arguments that collectively attack an argument. Some semantics have already
been defined, which are sensitive to the existence of accruals that
collectively defeat their target, while their individual elements cannot.
However, until now, only the surface of this framework and semantics have been
studied. Indeed, the existing literature focuses on the adaptation of the
stable semantics to StrAFs. In this paper, we push forward the study and
investigate the adaptation of admissibility-based semantics. In particular, we
show that the strong admissibility defined in the literature does not satisfy a
desirable property, namely Dung's fundamental lemma. We therefore propose an
alternative definition that induces semantics that behave as expected. We then
study computational issues for these new semantics, in particular we show that
complexity of reasoning is similar to the complexity of the corresponding
decision problems for standard argumentation frameworks in almost all cases. We
then propose a translation in pseudo-Boolean constraints for computing (strong
and weak) extensions. We conclude with an experimental evaluation of our
approach which shows in particular that it scales up well for solving the
problem of providing one extension as well as enumerating them all.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2022 18:42:04 GMT"
}
] | 1,657,152,000,000 | [
[
"Bacquey",
"Yohann",
""
],
[
"Mailly",
"Jean-Guy",
""
],
[
"Moraitis",
"Pavlos",
""
],
[
"Rossit",
"Julien",
""
]
] |
2207.02917 | Sridhar Mahadevan | Sridhar Mahadevan | On The Universality of Diagrams for Causal Inference and The Causal
Reproducing Property | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose Universal Causality, an overarching framework based on category
theory that defines the universal property that underlies causal inference
independent of the underlying representational formalism used. More formally,
universal causal models are defined as categories consisting of objects and
morphisms between them representing causal influences, as well as structures
for carrying out interventions (experiments) and evaluating their outcomes
(observations). Functors map between categories, and natural transformations
map between a pair of functors across the same two categories. Abstract causal
diagrams in our framework are built using universal constructions from category
theory, including the limit or co-limit of an abstract causal diagram, or more
generally, the Kan extension. We present two foundational results in universal
causal inference. The first result, called the Universal Causality Theorem
(UCT), pertains to the universality of diagrams, which are viewed as functors
mapping both objects and relationships from an indexing category of abstract
causal diagrams to an actual causal model whose nodes are labeled by random
variables, and edges represent functional or probabilistic relationships. UCT
states that any causal inference can be represented in a canonical way as the
co-limit of an abstract causal diagram of representable objects. UCT follows
from a basic result in the theory of sheaves. The second result, the Causal
Reproducing Property (CRP), states that any causal influence of an object X on
another object Y is representable as a natural transformation between two
abstract causal diagrams. CRP follows from the Yoneda Lemma, one of the deepest
results in category theory. The CRP property is analogous to the reproducing
property in Reproducing Kernel Hilbert Spaces that served as the foundation for
kernel methods in machine learning.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2022 18:54:15 GMT"
}
] | 1,657,238,400,000 | [
[
"Mahadevan",
"Sridhar",
""
]
] |
2207.02953 | Ulises Cort\'es | Esteve Almirall and Davide Callegaro and Peter Bruins and Mar
Santamar\'ia and Pablo Mart\'inez and Ulises Cort\'es | The use of Synthetic Data to solve the scalability and data availability
problems in Smart City Digital Twins | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The A.I. disruption and the need to compete on innovation are impacting
cities that have an increasing necessity to become innovation hotspots.
However, without proven solutions, experimentation, often unsuccessful, is
needed. But experimentation in cities has many undesirable effects, not only for
its citizens but also for the city's reputation if unsuccessful. Digital Twins, so
popular in other areas, seem like a promising way to expand experimentation, but in
simulated environments, translating only the half-baked proposals, the ones
with a higher probability of success, to real environments and therefore
minimizing risks. However, Digital Twins are data intensive and need highly
localized data, making them difficult to scale, particularly to small cities,
and with the high cost associated to data collection. We present an alternative
based on synthetic data that given some conditions, quite common in Smart
Cities, can solve these two problems together with a proof-of-concept based on
NO2 pollution.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2022 20:21:13 GMT"
}
] | 1,657,238,400,000 | [
[
"Almirall",
"Esteve",
""
],
[
"Callegaro",
"Davide",
""
],
[
"Bruins",
"Peter",
""
],
[
"Santamaría",
"Mar",
""
],
[
"Martínez",
"Pablo",
""
],
[
"Cortés",
"Ulises",
""
]
] |
2207.03025 | Mehak Maniktala | Mehak Maniktala, Min Chi, and Tiffany Barnes | Enhancing a Student Productivity Model for Adaptive Problem-Solving
Assistance | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on intelligent tutoring systems has been exploring data-driven
methods to deliver effective adaptive assistance. While much work has been done
to provide adaptive assistance when students seek help, they may not seek help
optimally. This has led to growing interest in proactive adaptive
assistance, where the tutor provides unsolicited assistance upon predictions of
struggle or unproductivity. Determining when and whether to provide
personalized support is a well-known challenge called the assistance dilemma.
Addressing this dilemma is particularly challenging in open-ended domains,
where there can be several ways to solve problems. Researchers have explored
methods to determine when to proactively help students, but few of these
methods have taken prior hint usage into account. In this paper, we present a
novel data-driven approach to incorporate students' hint usage in predicting
their need for help. We explore its impact in an intelligent tutor that deals
with the open-ended and well-structured domain of logic proofs. We present a
controlled study to investigate the impact of an adaptive hint policy based on
predictions of HelpNeed that incorporate students' hint usage. We show
empirical evidence to support that such a policy can save students a
significant amount of time in training, and lead to improved posttest results,
when compared to a control without proactive interventions. We also show that
incorporating students' hint usage significantly improves the adaptive hint
policy's efficacy in predicting students' HelpNeed, thereby reducing training
unproductivity, reducing possible help avoidance, and increasing possible help
appropriateness (a higher chance of receiving help when it was likely to be
needed). We conclude with suggestions on the domains that can benefit from this
approach as well as the requirements for adoption.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 00:41:00 GMT"
}
] | 1,657,238,400,000 | [
[
"Maniktala",
"Mehak",
""
],
[
"Chi",
"Min",
""
],
[
"Barnes",
"Tiffany",
""
]
] |
2207.03051 | Haitao Mao | Lixin Zou, Haitao Mao, Xiaokai Chu, Jiliang Tang, Wenwen Ye,
Shuaiqiang Wang, Dawei Yin | A Large Scale Search Dataset for Unbiased Learning to Rank | 15 pages, 9 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The unbiased learning to rank (ULTR) problem has been greatly advanced by
recent deep learning techniques and well-designed debias algorithms. However,
promising results on the existing benchmark datasets may not be extended to the
practical scenario due to the following disadvantages observed from those
popular benchmark datasets: (1) outdated semantic feature extraction where
state-of-the-art large scale pre-trained language models like BERT cannot be
exploited due to the missing of the original text;(2) incomplete display
features for in-depth study of ULTR, e.g., missing the displayed abstract of
documents for analyzing the click necessary bias; (3) lacking real-world user
feedback, leading to the prevalence of synthetic datasets in the empirical
study. To overcome the above disadvantages, we introduce the Baidu-ULTR
dataset. It contains 1.2 billion randomly sampled search sessions and 7,008
expert-annotated queries, which is orders of magnitude larger than the existing
ones. Baidu-ULTR provides:(1) the original semantic feature and a pre-trained
language model for easy usage; (2) sufficient display information such as
position, displayed height, and displayed abstract, enabling the comprehensive
study of different biases with advanced techniques such as causal discovery and
meta-learning; and (3) rich user feedback on search result pages (SERPs) like
dwelling time, allowing for user engagement optimization and promoting the
exploration of multi-task learning in ULTR. In this paper, we present the
design principle of Baidu-ULTR and the performance of benchmark ULTR algorithms
on this new data resource, favoring the exploration of ranking for long-tail
queries and pre-training tasks for ranking. The Baidu-ULTR dataset and
corresponding baseline implementation are available at
https://github.com/ChuXiaokai/baidu_ultr_dataset.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 02:37:25 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Sep 2022 19:34:38 GMT"
}
] | 1,663,718,400,000 | [
[
"Zou",
"Lixin",
""
],
[
"Mao",
"Haitao",
""
],
[
"Chu",
"Xiaokai",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Ye",
"Wenwen",
""
],
[
"Wang",
"Shuaiqiang",
""
],
[
"Yin",
"Dawei",
""
]
] |
2207.03066 | Jiangchao Yao | Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren
Zhou, Hongxia Yang | Device-Cloud Collaborative Recommendation via Meta Controller | KDD 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | On-device machine learning enables the lightweight deployment of
recommendation models in local clients, which reduces the burden of the
cloud-based recommenders and simultaneously incorporates more real-time user
features. Nevertheless, the cloud-based recommendation in the industry is still
very important considering its powerful model capacity and the efficient
candidate generation from the billion-scale item pool. Previous attempts to
integrate the merits of both paradigms mainly resort to a sequential mechanism,
which builds the on-device recommender on top of the cloud-based
recommendation. However, such a design is inflexible when user interests
dramatically change: the on-device model is stuck with its limited item cache,
while the cloud-based recommendation over the large item pool does not respond
without fresh feedback.
To overcome this issue, we propose a meta controller to dynamically manage
the collaboration between the on-device recommender and the cloud-based
recommender, and introduce a novel efficient sample construction from the
causal perspective to solve the dataset absence issue of meta controller. On
the basis of the counterfactual samples and the extended training, extensive
experiments in the industrial recommendation scenarios show the promise of meta
controller in the device-cloud collaboration.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 03:23:04 GMT"
}
] | 1,657,238,400,000 | [
[
"Yao",
"Jiangchao",
""
],
[
"Wang",
"Feng",
""
],
[
"Ding",
"Xichen",
""
],
[
"Chen",
"Shaohu",
""
],
[
"Han",
"Bo",
""
],
[
"Zhou",
"Jingren",
""
],
[
"Yang",
"Hongxia",
""
]
] |
2207.03086 | Akira Matsui | Akira Matsui, Emilio Ferrara | Word Embedding for Social Sciences: An Interdisciplinary Survey | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | To extract essential information from complex data, computer scientists have
been developing machine learning models that learn low-dimensional
representations. Not only computer scientists but also social scientists have
benefited from such advances in machine learning research and advanced their own
research, because human behavior and social phenomena lie in complex data.
To document this emerging trend, we survey the recent studies that apply word
embedding techniques to human behavior mining, building a taxonomy to
illustrate the methods and procedures used in the surveyed papers and highlight
the recent emerging trends applying word embedding models to non-textual human
behavior data. This survey conducts a simple experiment to warn that common
similarity measurements used in the literature could yield different results
even if they return consistent results at an aggregate level.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 04:49:21 GMT"
}
] | 1,657,238,400,000 | [
[
"Matsui",
"Akira",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
2207.03206 | Jasmin Bogatinovski | Jasmin Bogatinovski, Gjorgji Madjarov, Sasho Nedelkoski, Jorge Cardoso
and Odej Kao | Leveraging Log Instructions in Log-based Anomaly Detection | This paper has been accepted for publication in IEEE Service
Computing Conference, 2022, Barcelona | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence for IT Operations (AIOps) describes the process of
maintaining and operating large IT systems using diverse AI-enabled methods and
tools for, e.g., anomaly detection and root cause analysis, to support the
remediation, optimization, and automatic initiation of self-stabilizing IT
activities. The core step of any AIOps workflow is anomaly detection, typically
performed on high-volume heterogeneous data such as log messages (logs),
metrics (e.g., CPU utilization), and distributed traces. In this paper, we
propose a method for reliable and practical anomaly detection from system logs.
It overcomes the common disadvantage of related works, i.e., the need for a
large amount of manually labeled training data, by building an anomaly
detection model with log instructions from the source code of 1000+ GitHub
projects. The instructions from diverse systems contain rich and heterogeneous
information about many different normal and abnormal IT events and serve as a
foundation for anomaly detection. The proposed method, named ADLILog, combines
the log instructions and the data from the system of interest (target system)
to learn a deep neural network model through a two-phase learning procedure.
The experimental results show that ADLILog outperforms the related approaches
by up to 60% on the F1 score while satisfying core non-functional requirements
for industrial deployments such as unsupervised design, efficient model
updates, and small model sizes.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 10:22:10 GMT"
}
] | 1,657,238,400,000 | [
[
"Bogatinovski",
"Jasmin",
""
],
[
"Madjarov",
"Gjorgji",
""
],
[
"Nedelkoski",
"Sasho",
""
],
[
"Cardoso",
"Jorge",
""
],
[
"Kao",
"Odej",
""
]
] |
2207.03214 | Francisco Cruz | Francisco Cruz, Charlotte Young, Richard Dazeley, Peter Vamplew | Evaluating Human-like Explanations for Robot Actions in Reinforcement
Learning Scenarios | 8 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainable artificial intelligence is a research field that tries to provide
more transparency for autonomous intelligent systems. Explainability has been
used, particularly in reinforcement learning and robotic scenarios, to better
understand the robot decision-making process. Previous work, however, has been
widely focused on providing technical explanations that can be better
understood by AI practitioners than non-expert end-users. In this work, we make
use of human-like explanations built from the probability of successfully
completing the goal that an autonomous robot shows after performing an action.
These explanations are intended to be understood by people who have no or very
little experience with artificial intelligence methods. This paper presents a
user trial to study whether these explanations that focus on the probability an
action has of succeeding in its goal constitute a suitable explanation for
non-expert end-users. The results obtained show that non-expert participants
rate robot explanations that focus on the probability of success higher and
with less variance than technical explanations generated from Q-values, and
also favor counterfactual explanations over standalone explanations.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 10:40:24 GMT"
}
] | 1,657,238,400,000 | [
[
"Cruz",
"Francisco",
""
],
[
"Young",
"Charlotte",
""
],
[
"Dazeley",
"Richard",
""
],
[
"Vamplew",
"Peter",
""
]
] |
2207.03270 | Philippe Preux | Romain Gautron, Emilio J. Padr\'on, Philippe Preux, Julien Bigot,
Odalric-Ambrym Maillard, David Emukpere | gym-DSSAT: a crop model turned into a Reinforcement Learning environment | null | null | null | Report-no: Inria RR-9460 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Addressing a real world sequential decision problem with Reinforcement
Learning (RL) usually starts with the use of a simulated environment that
mimics real conditions. We present a novel open source RL environment for
realistic crop management tasks. gym-DSSAT is a gym interface to the Decision
Support System for Agrotechnology Transfer (DSSAT), a high fidelity crop
simulator. DSSAT has been developed over the last 30 years and is widely
recognized by agronomists. gym-DSSAT comes with predefined simulations based on
real world maize experiments. The environment is as easy to use as any gym
environment. We provide performance baselines using basic RL algorithms. We
also briefly outline how the monolithic DSSAT simulator written in Fortran has
been turned into a Python RL environment. Our methodology is generic and may be
applied to similar simulators. We report on very preliminary experimental
results which suggest that RL can help researchers to improve sustainability of
fertilization and irrigation practices.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 12:45:02 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 16:33:16 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Sep 2022 14:16:48 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Sep 2022 12:05:28 GMT"
}
] | 1,664,323,200,000 | [
[
"Gautron",
"Romain",
""
],
[
"Padrón",
"Emilio J.",
""
],
[
"Preux",
"Philippe",
""
],
[
"Bigot",
"Julien",
""
],
[
"Maillard",
"Odalric-Ambrym",
""
],
[
"Emukpere",
"David",
""
]
] |
2207.03305 | Tsegaye Misikir Tashu | Tsegaye Misikir Tashu, Sara Fattouh, Peter Kiss, Tomas Horvath | Multimodal E-Commerce Product Classification Using Hierarchical Fusion | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this work, we present a multi-modal model for commercial product
classification, that combines features extracted by multiple neural network
models from textual (CamemBERT and FlauBERT) and visual data (SE-ResNeXt-50),
using simple fusion techniques. The proposed method significantly outperformed
the unimodal models' performance and the reported performance of similar models
on our specific task. We experimented with multiple fusion techniques and
found that the best-performing technique to combine the individual embeddings
of the unimodal networks is based on combining concatenation and averaging of the
feature vectors. Each modality complemented the shortcomings of the other
modalities, demonstrating that increasing the number of modalities can be an
effective method for improving the performance of multi-label and multimodal
classification problems.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 14:04:42 GMT"
}
] | 1,657,584,000,000 | [
[
"Tashu",
"Tsegaye Misikir",
""
],
[
"Fattouh",
"Sara",
""
],
[
"Kiss",
"Peter",
""
],
[
"Horvath",
"Tomas",
""
]
] |
2207.03317 | Tsegaye Misikir Tashu | Sofiane Ouaari, Tsegaye Misikir Tashu, Tomas Horvath | Multimodal Feature Extraction for Memes Sentiment Classification | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study, we propose feature extraction for multimodal meme
classification using Deep Learning approaches. A meme is usually a photo or
video with text shared by the young generation on social media platforms that
expresses a culturally relevant idea. Since they are an efficient way to
express emotions and feelings, a good classifier that can classify the
sentiment behind the meme is important. To make the learning process more
efficient, reduce the likelihood of overfitting, and improve the
generalizability of the model, one needs a good approach for joint feature
extraction from all modalities. In this work, we proposed to use different
multimodal neural network approaches for multimodal feature extraction and use
the extracted features to train a classifier to identify the sentiment in a
meme.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 14:21:52 GMT"
}
] | 1,657,238,400,000 | [
[
"Ouaari",
"Sofiane",
""
],
[
"Tashu",
"Tsegaye Misikir",
""
],
[
"Horvath",
"Tomas",
""
]
] |
2207.03330 | Isac Mendes Lacerda | Isac M. Lacerda, Eber A. Schmitz, Jayme L. Szwarcfiter, Rosiane de
Freitas | Empirical Evaluation of Project Scheduling Algorithms for Maximization
of the Net Present Value | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an empirical performance analysis of three project
scheduling algorithms dealing with maximizing projects' net present value with
unrestricted resources. The selected algorithms, being the most recently cited
in the literature, are: Recursive Search (RS), Steepest Ascent Approach (SAA)
and Hybrid Search (HS). The main motivation for this research is the lack of
knowledge about the computational complexities of the RS, SAA, and HS
algorithms, since all studies to date show some gaps in the analysis.
Furthermore, the empirical analysis performed to date does not consider the
fact that one algorithm (HS) uses a dual search strategy, which markedly
improved the algorithm's performance, while the others don't. In order to
obtain a fair performance comparison, we implemented the dual search strategy
into the other two algorithms (RS and SAA), and the new algorithms were called
Recursive Search Forward-Backward (RSFB) and Steepest Ascent Approach
Forward-Backward (SAAFB). The algorithms RSFB, SAAFB, and HS were submitted to
a factorial experiment with three different project network sampling
characteristics. The results were analyzed using the Generalized Linear Models
(GLM) statistical modeling technique that showed: a) the general computational
costs of RSFB, SAAFB, and HS; b) the costs of restarting the search in the
spanning tree as part of the total cost of the algorithms; c) and statistically
significant differences between the distributions of the algorithms' results.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2022 03:01:33 GMT"
}
] | 1,657,238,400,000 | [
[
"Lacerda",
"Isac M.",
""
],
[
"Schmitz",
"Eber A.",
""
],
[
"Szwarcfiter",
"Jayme L.",
""
],
[
"de Freitas",
"Rosiane",
""
]
] |
2207.03336 | Stefan O'Toole | Stefan O'Toole, Miquel Ramirez, Nir Lipovetzky, Adrian R. Pearce | Sampling from Pre-Images to Learn Heuristic Functions for Classical
Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce a new algorithm, Regression based Supervised Learning (RSL), for
learning per instance Neural Network (NN) defined heuristic functions for
classical planning problems. RSL uses regression to select relevant sets of
states at a range of different distances from the goal. RSL then formulates a
Supervised Learning problem to obtain the parameters that define the NN
heuristic, using the selected states labeled with exact or estimated distances
to goal states. Our experimental study shows that RSL outperforms, in terms of
coverage, previous classical planning NN heuristic functions while requiring
two orders of magnitude less training time.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 14:42:31 GMT"
}
] | 1,657,238,400,000 | [
[
"O'Toole",
"Stefan",
""
],
[
"Ramirez",
"Miquel",
""
],
[
"Lipovetzky",
"Nir",
""
],
[
"Pearce",
"Adrian R.",
""
]
] |
2207.03669 | Jingwei Li | Jingwei Li | Determination of action model equivalence and simplification of action
model | 30 pages, 0 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study two problems: determining action model equivalence
and minimizing the event space of an action model under certain structural
relationships. The Kripke model equivalence is perfectly caught by the
structural relationship called bisimulation. In this paper, we propose the
generalized action emulation perfectly catching the action model equivalence.
Previous structural relationships sufficient for the action model equivalence,
i.e. the bisimulation, the propositional action emulation, the action
emulation, and the action emulation of canonical action models, can be
described by various restricted versions of the generalized action emulation.
We summarize four critical properties of the atom set over preconditions, and
prove that any formula set satisfying these properties can be used to restrict
the generalized action emulation to determine the action model equivalence by
an iteration algorithm. We also construct a new formula set with these four
properties, which is generally more efficient than the atom set. The technique
of the partition refinement has been used to minimize the world space of a
Kripke model under the bisimulation. Applying the partition refinement to
action models allows one to minimize their event spaces under the bisimulation.
The propositional action emulation is weaker than bisimulation but still
sufficient for the action model equivalence. We prove that it is
PSPACE-complete to minimize the event space of an action model under the
propositional action emulation, and provide a PSPACE algorithm for it. Finally,
we prove that minimizing the event space under the action model equivalence is
PSPACE-hard, and propose a computable method based on the canonical formulas of
modal logics to solve this problem.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2022 03:11:03 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 08:04:51 GMT"
}
] | 1,675,641,600,000 | [
[
"Li",
"Jingwei",
""
]
] |
2207.04118 | Laetitia Teodorescu | Laetitia Teodorescu and Eric Yuan and Marc-Alexandre C\^ot\'e and
Pierre-Yves Oudeyer | Automatic Exploration of Textual Environments with Language-Conditioned
Autotelic Agents | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this extended abstract we discuss the opportunities and challenges of
studying intrinsically-motivated agents for exploration in textual
environments. We argue that there is important synergy between text
environments and autonomous agents. We identify key properties of text worlds
that make them suitable for exploration by autonomous agents, namely, depth,
breadth, progress niches and the ease of use of language goals; we identify
drivers of exploration for such agents that are implementable in text worlds.
We discuss the opportunities of using autonomous agents to make progress on
text environment benchmarks. Finally we list some specific challenges that need
to be overcome in this area.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2022 20:31:01 GMT"
}
] | 1,657,584,000,000 | [
[
"Teodorescu",
"Laetitia",
""
],
[
"Yuan",
"Eric",
""
],
[
"Côté",
"Marc-Alexandre",
""
],
[
"Oudeyer",
"Pierre-Yves",
""
]
] |
2207.04502 | Yuan An | Yuan An, Jane Greenberg, Xintong Zhao, Xiaohua Hu, Scott McCLellan,
Alex Kalinowski, Fernando J. Uribe-Romo, Kyle Langlois, Jacob Furst, Diego A.
G\'omez-Gualdr\'on, Fernando Fajardo-Rojas, Katherine Ardila | Building Open Knowledge Graph for Metal-Organic Frameworks (MOF-KG):
Challenges and Case Studies | Accepted by the International Workshop on Knowledge Graphs and Open
Knowledge Network (OKN'22) Co-located with the 28th ACM SIGKDD Conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Metal-Organic Frameworks (MOFs) are a class of modular, porous crystalline
materials that have great potential to revolutionize applications such as gas
storage, molecular separations, chemical sensing, catalysis, and drug delivery.
The Cambridge Structural Database (CSD) reports 10,636 synthesized MOF crystals
which in addition contains ca. 114,373 MOF-like structures. The sheer number of
synthesized (plus potentially synthesizable) MOF structures requires
researchers to pursue computational techniques to screen and isolate MOF
candidates. In this demo paper, we describe our effort on leveraging knowledge
graph methods to facilitate MOF prediction, discovery, and synthesis. We
present challenges and case studies about (1) construction of a MOF knowledge
graph (MOF-KG) from structured and unstructured sources and (2) leveraging the
MOF-KG for discovery of new or missing knowledge.
| [
{
"version": "v1",
"created": "Sun, 10 Jul 2022 16:41:11 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Nov 2023 17:20:33 GMT"
}
] | 1,701,302,400,000 | [
[
"An",
"Yuan",
""
],
[
"Greenberg",
"Jane",
""
],
[
"Zhao",
"Xintong",
""
],
[
"Hu",
"Xiaohua",
""
],
[
"McCLellan",
"Scott",
""
],
[
"Kalinowski",
"Alex",
""
],
[
"Uribe-Romo",
"Fernando J.",
""
],
[
"Langlois",
"Kyle",
""
],
[
"Furst",
"Jacob",
""
],
[
"Gómez-Gualdrón",
"Diego A.",
""
],
[
"Fajardo-Rojas",
"Fernando",
""
],
[
"Ardila",
"Katherine",
""
]
] |
2207.05259 | Blai Bonet | Blai Bonet and Hector Geffner | Language-Based Causal Representation Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Consider the finite state graph that results from a simple, discrete,
dynamical system in which an agent moves in a rectangular grid picking up and
dropping packages. Can the state variables of the problem, namely, the agent
location and the package locations, be recovered from the structure of the
state graph alone without having access to information about the objects, the
structure of the states, or any background knowledge? We show that this is
possible provided that the dynamics is learned over a suitable
domain-independent first-order causal language that makes room for objects and
relations that are not assumed to be known. The preference for the most compact
representation in the language that is compatible with the data provides a
strong and meaningful learning bias that makes this possible. The language of
structured causal models (SCMs) is the standard language for representing
(static) causal models but in dynamic worlds populated by objects, first-order
causal languages such as those used in "classical AI planning" are required.
While "classical AI" requires handcrafted representations, similar
representations can be learned from unstructured data over the same languages.
Indeed, it is the languages and the preference for compact representations in
those languages that provide structure to the world, uncovering objects,
relations, and causes.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2022 02:07:58 GMT"
}
] | 1,657,670,400,000 | [
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
2207.05271 | Ziqi Wang | Ziqi Wang, Jialin Liu | Online Game Level Generation from Music | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A game consists of multiple types of content, and the harmony of different
content types plays an essential role in game design. However, most works on
procedural content generation consider only one type of content at a time. In
this paper, we propose and formulate online level generation from music, by
matching a level feature to a music feature in real time, while adapting
to players' play speed. A generic framework named online player-adaptive
procedural content generation via reinforcement learning, OPARL for short, is
built upon the experience-driven reinforcement learning and controllable
reinforcement learning, to enable online level generation from music.
Furthermore, a novel control policy based on local search and k-nearest
neighbours is proposed and integrated into OPARL to control the level generator
considering the play data collected online. Results of simulation-based
experiments show that our implementation of OPARL is competent to generate
playable levels with difficulty degree matched to the ``energy'' dynamic of
music for different artificial players in an online fashion.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2022 02:44:50 GMT"
}
] | 1,657,670,400,000 | [
[
"Wang",
"Ziqi",
""
],
[
"Liu",
"Jialin",
""
]
] |
2207.06014 | Heiko Paulheim | Jan Portisch and Heiko Paulheim | The DLCC Node Classification Benchmark for Analyzing Knowledge Graph
Embeddings | Accepted at International Semantic Web Conference (ISWC) 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graph embedding is a representation learning technique that
projects entities and relations in a knowledge graph to continuous vector
spaces. Embeddings have gained a lot of uptake and have been heavily used in
link prediction and other downstream prediction tasks. Most approaches are
evaluated on a single task or a single group of tasks to determine their
overall performance. The evaluation is then assessed in terms of how well the
embedding approach performs on the task at hand. Still, it is hardly evaluated
(and often not even deeply understood) what information the embedding
approaches are actually learning to represent.
To fill this gap, we present the DLCC (Description Logic Class Constructors)
benchmark, a resource to analyze embedding approaches in terms of which kinds
of classes they can represent. Two gold standards are presented, one based on
the real-world knowledge graph DBpedia and one synthetic gold standard. In
addition, an evaluation framework is provided that implements an experiment
protocol so that researchers can directly use the gold standard. To demonstrate
the use of DLCC, we compare multiple embedding approaches using the gold
standards. We find that many DL constructors on DBpedia are actually learned by
recognizing different correlated patterns than those defined in the gold
standard and that specific DL constructors, such as cardinality constraints,
are particularly hard to be learned for most embedding approaches.
| [
{
"version": "v1",
"created": "Wed, 13 Jul 2022 07:43:51 GMT"
}
] | 1,657,756,800,000 | [
[
"Portisch",
"Jan",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
2207.06105 | Christopher Bamford | Christopher Bamford, Minqi Jiang, Mikayel Samvelyan, Tim Rockt\"aschel | GriddlyJS: A Web IDE for Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Progress in reinforcement learning (RL) research is often driven by the
design of new, challenging environments -- a costly undertaking requiring
skills orthogonal to that of a typical machine learning researcher. The
complexity of environment development has only increased with the rise of
procedural-content generation (PCG) as the prevailing paradigm for producing
varied environments capable of testing the robustness and generalization of RL
agents. Moreover, existing environments often require complex build processes,
making reproducing results difficult. To address these issues, we introduce
GriddlyJS, a web-based Integrated Development Environment (IDE) based on the
Griddly engine. GriddlyJS allows researchers to visually design and debug
arbitrary, complex PCG grid-world environments using a convenient graphical
interface, as well as visualize, evaluate, and record the performance of
trained agent models. By connecting the RL workflow to the advanced
functionality enabled by modern web standards, GriddlyJS allows publishing
interactive agent-environment demos that reproduce experimental results
directly to the web. To demonstrate the versatility of GriddlyJS, we use it to
quickly develop a complex compositional puzzle-solving environment alongside
arbitrary human-designed environment configurations and their solutions for use
in automatic curriculum learning and offline RL. The GriddlyJS IDE is open
source and freely available at https://griddly.ai.
| [
{
"version": "v1",
"created": "Wed, 13 Jul 2022 10:26:38 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 13:05:00 GMT"
}
] | 1,665,705,600,000 | [
[
"Bamford",
"Christopher",
""
],
[
"Jiang",
"Minqi",
""
],
[
"Samvelyan",
"Mikayel",
""
],
[
"Rocktäschel",
"Tim",
""
]
] |
2207.06118 | Shaojie Bai | Shaojie Bai, Dongxia Wang, Tim Muller, Peng Cheng, Jiming Chen | Stability of Weighted Majority Voting under Estimated Weights | 15 pages, 16 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted Majority Voting (WMV) is a well-known optimal decision rule for
collective decision making, given the probability of sources to provide
accurate information (trustworthiness). However, in reality, the
trustworthiness is not a known quantity to the decision maker - they have to
rely on an estimate called trust. A (machine learning) algorithm that computes
trust is called unbiased when it has the property that it does not
systematically overestimate or underestimate the trustworthiness. To formally
analyse the uncertainty in the decision process, we introduce and analyse two
important properties of such unbiased trust values: stability of correctness
and stability of optimality. Stability of correctness means that the decision
accuracy that the decision maker believes they achieved is equal to the actual
accuracy. We prove stability of correctness holds. Stability of optimality
means that the decisions made based on trust, are equally good as they would
have been if they were based on trustworthiness. Stability of optimality does
not hold. We analyse the difference between the two, and bounds thereon. We
also present an overview of how sensitive decision correctness is to changes in
trust and trustworthiness.
| [
{
"version": "v1",
"created": "Wed, 13 Jul 2022 10:55:41 GMT"
}
] | 1,657,756,800,000 | [
[
"Bai",
"Shaojie",
""
],
[
"Wang",
"Dongxia",
""
],
[
"Muller",
"Tim",
""
],
[
"Cheng",
"Peng",
""
],
[
"Chen",
"Jiming",
""
]
] |
2207.07339 | Zongshun Wang | Zongshun Wang, Yuping Shen | Fuzzy Labeling Semantics for Quantitative Argumentation | 27 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating argument strength in quantitative argumentation systems has
received increasing attention in the field of abstract argumentation. The
concept of acceptability degree is widely adopted in gradual semantics,
however, it may not be sufficient in many practical applications. In this
paper, we provide a novel quantitative method called fuzzy labeling for fuzzy
argumentation systems, in which a triple of acceptability, rejectability, and
undecidability degrees is used to evaluate argument strength. Such a setting
sheds new light on defining argument strength and provides a deeper
understanding of the status of arguments. More specifically, we investigate the
postulates of fuzzy labeling, which present the rationality requirements for
semantics concerning the acceptability, rejectability, and undecidability
degrees. We then propose a class of fuzzy labeling semantics conforming to the
above postulates and investigate the relations between fuzzy labeling semantics
and existing work in the literature.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2022 08:31:36 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 09:43:29 GMT"
}
] | 1,692,576,000,000 | [
[
"Wang",
"Zongshun",
""
],
[
"Shen",
"Yuping",
""
]
] |
2207.07740 | Quoc Hung Ngo | Quoc Hung Ngo, Tahar Kechadi, Nhien-An Le-Khac | Knowledge Representation in Digital Agriculture: A Step Towards
Standardised Model | null | Computers and Electronics in Agriculture 199 (2022): 107127 | 10.1016/j.compag.2022.107127 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, data science has evolved significantly. Data analysis and
mining processes become routines in all sectors of the economy where datasets
are available. Vast data repositories have been collected, curated, stored, and
used for extracting knowledge. And this is becoming commonplace. Subsequently,
we extract a large amount of knowledge, either directly from the data or
through experts in the given domain. The challenge now is how to exploit all
this large amount of knowledge that is previously known for efficient
decision-making processes. Until recently, much of the knowledge gained through
a number of years of research is stored in static knowledge bases or
ontologies, while more diverse and dynamic knowledge acquired from data mining
studies is not centrally and consistently managed. In this research, we propose
a novel model called ontology-based knowledge map to represent and store the
results (knowledge) of data mining in crop farming to build, maintain, and
enrich the process of knowledge discovery. The proposed model consists of six
main sets: concepts, attributes, relations, transformations, instances, and
states. This model is dynamic and facilitates the access, updates, and
exploitation of the knowledge at any time. This paper also proposes an
architecture for handling this knowledge-based model. The system architecture
includes knowledge modelling, extraction, assessment, publishing, and
exploitation. This system has been implemented and used in agriculture for crop
management and monitoring. It is proven to be very effective and promising for
its extension to other domains.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2022 20:31:56 GMT"
}
] | 1,658,188,800,000 | [
[
"Ngo",
"Quoc Hung",
""
],
[
"Kechadi",
"Tahar",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] |
2207.08096 | Moshe Shienman | Moshe Shienman and Vadim Indelman | Nonmyopic Distilled Data Association Belief Space Planning Under Budget
Constraints | Accepted to International Symposium of Robotic Research (ISRR) 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autonomous agents operating in perceptually aliased environments should
ideally be able to solve the data association problem. Yet, planning for future
actions while considering this problem is not trivial. State of the art
approaches therefore use multi-modal hypotheses to represent the states of the
agent and of the environment. However, when all possible data associations are
explicitly considered, the number of hypotheses grows exponentially with the planning
horizon. As such, the corresponding Belief Space Planning problem quickly
becomes unsolvable. Moreover, under hard computational budget constraints, some
non-negligible hypotheses must eventually be pruned in both planning and
inference. Nevertheless, the two processes are generally treated separately and
the effect of budget constraints in one process over the other was barely
studied. We present a computationally efficient method to solve the nonmyopic
Belief Space Planning problem while reasoning about data association. Moreover,
we rigorously analyze the effects of budget constraints in both inference and
planning.
| [
{
"version": "v1",
"created": "Sun, 17 Jul 2022 07:07:47 GMT"
}
] | 1,658,188,800,000 | [
[
"Shienman",
"Moshe",
""
],
[
"Indelman",
"Vadim",
""
]
] |
2207.08365 | Nand Sharma | Nand Sharma, Joshua Millstein | CausNet : Generational orderings based search for optimal Bayesian
networks via dynamic programming with parent set constraints | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Finding a globally optimal Bayesian Network using exhaustive search is a
problem with super-exponential complexity, which severely restricts the number
of variables that it can work for. We implement a dynamic programming based
algorithm with built-in dimensionality reduction and parent set identification.
This reduces the search space drastically and can be applied to
large-dimensional data. We use what we call generational orderings based search
for optimal networks, which is a novel way to efficiently search the space of
possible networks given the possible parent sets. The algorithm supports both
continuous and categorical data, and categorical as well as survival outcomes.
We demonstrate the efficacy of our algorithm on both synthetic and real data.
In simulations, our algorithm performs better than three state-of-the-art
algorithms that are currently used extensively. We then apply it to an Ovarian
Cancer gene expression dataset with 513 genes and a survival outcome. Our
algorithm is able to find an optimal network describing the disease pathway
consisting of 6 genes leading to the outcome node in a few minutes on a basic
computer. Our generational orderings based search for optimal networks is both
an efficient and a highly scalable approach to finding optimal Bayesian Networks
that can be applied to 1000s of variables. Using specifiable parameters -
correlation, FDR cutoffs, and in-degree - one can increase or decrease the
number of nodes and density of the networks. The availability of two scoring
options (BIC and BGe) and the implementation of survival outcomes and mixed data types
make our algorithm very suitable for many types of high dimensional biomedical
data to find disease pathways.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2022 03:26:41 GMT"
}
] | 1,658,188,800,000 | [
[
"Sharma",
"Nand",
""
],
[
"Millstein",
"Joshua",
""
]
] |
2207.08379 | Guoqing Liu | Guoqing Liu, Mengzhang Cai, Li Zhao, Tao Qin, Adrian Brown, Jimmy
Bischoff, Tie-Yan Liu | Inspector: Pixel-Based Automated Game Testing via Exploration,
Detection, and Investigation | Accepted as IEEE CoG2022 proceedings paper (Oral) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning (DRL) has attracted much attention in automated
game testing. Early attempts rely on game internal information for game space
exploration, thus requiring deep integration with games, which is inconvenient
for practical applications. In this work, we propose using only
screenshots/pixels as input for automated game testing and build a general game
testing agent, Inspector, that can be easily applied to different games without
deep integration with games. In addition to covering all game space for
testing, our agent tries to take human-like behaviors to interact with key
objects in a game, since some bugs usually happen in player-object
interactions. Inspector is based on purely pixel inputs and comprises three key
modules: game space explorer, key object detector, and human-like object
investigator. Game space explorer aims to explore the whole game space by using
a curiosity-based reward function with pixel inputs. Key object detector aims
to detect key objects in a game, based on a small number of labeled
screenshots. Human-like object investigator aims to mimic human behaviors for
investigating key objects via imitation learning. We conduct experiments on two
popular video games: Shooter Game and Action RPG Game. Experiment results
demonstrate the effectiveness of Inspector in exploring game space, detecting
key objects, and investigating objects. Moreover, Inspector successfully
discovers two potential bugs in those two games. The demo video of Inspector is
available at https://github.com/Inspector-GameTesting/Inspector-GameTesting.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2022 04:49:07 GMT"
}
] | 1,658,188,800,000 | [
[
"Liu",
"Guoqing",
""
],
[
"Cai",
"Mengzhang",
""
],
[
"Zhao",
"Li",
""
],
[
"Qin",
"Tao",
""
],
[
"Brown",
"Adrian",
""
],
[
"Bischoff",
"Jimmy",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
2207.08599 | Richard Taupe | Richard Comploi-Taupe and Giulia Francescutto and Gottfried Schenner | Applying Incremental Answer Set Solving to Product Configuration | This is the authors' version of the work. It is posted here for your
personal use. Not for redistribution. The definitive version will be
published as https://doi.org/10.1145/3503229.3547069 | null | 10.1145/3503229.3547069 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we apply incremental answer set solving to product
configuration. Incremental answer set solving is a step-wise incremental
approach to Answer Set Programming (ASP). We demonstrate how to use this
technique to solve product configuration problems incrementally. Every step of
the incremental solving process corresponds to a predefined configuration
action. Using complex domain-specific configuration actions makes it possible
to tightly control the level of non-determinism and performance of the solving
process. We show applications of this technique for reasoning about product
configuration, like simulating the behavior of a deterministic configuration
algorithm and describing user actions.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2022 13:38:12 GMT"
}
] | 1,667,865,600,000 | [
[
"Comploi-Taupe",
"Richard",
""
],
[
"Francescutto",
"Giulia",
""
],
[
"Schenner",
"Gottfried",
""
]
] |
2207.09374 | Silvan Mertes | Silvan Mertes, Christina Karle, Tobias Huber, Katharina Weitz, Ruben
Schlagowski, Elisabeth Andr\'e | Alterfactual Explanations -- The Relevance of Irrelevance for Explaining
AI Systems | Accepted at IJCAI 2022 Workshop on XAI | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Explanation mechanisms from the field of Counterfactual Thinking are a
widely-used paradigm for Explainable Artificial Intelligence (XAI), as they
follow a natural way of reasoning that humans are familiar with. However, all
common approaches from this field are based on communicating information about
features or characteristics that are especially important for an AI's decision.
We argue that in order to fully understand a decision, not only knowledge about
relevant features is needed, but that the awareness of irrelevant information
also highly contributes to the creation of a user's mental model of an AI
system. Therefore, we introduce a new way of explaining AI systems. Our
approach, which we call Alterfactual Explanations, is based on showing an
alternative reality where irrelevant features of an AI's input are altered. By
doing so, the user directly sees which characteristics of the input data can
change arbitrarily without influencing the AI's decision. We evaluate our
approach in an extensive user study, revealing that it is able to significantly
contribute to the participants' understanding of an AI. We show that
alterfactual explanations are suited to convey an understanding of different
aspects of the AI's reasoning than established counterfactual explanation
methods.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2022 16:20:37 GMT"
}
] | 1,658,275,200,000 | [
[
"Mertes",
"Silvan",
""
],
[
"Karle",
"Christina",
""
],
[
"Huber",
"Tobias",
""
],
[
"Weitz",
"Katharina",
""
],
[
"Schlagowski",
"Ruben",
""
],
[
"André",
"Elisabeth",
""
]
] |
2207.09897 | Beren Millidge Mr | Beren Millidge, Christopher L Buckley | Successor Representation Active Inference | 20/07/22 initial upload | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work has uncovered close links between classical reinforcement
learning algorithms, Bayesian filtering, and Active Inference which lets us
understand value functions in terms of Bayesian posteriors. An alternative, but
less explored, model-free RL algorithm is the successor representation, which
expresses the value function in terms of a successor matrix of expected future
state occupancies. In this paper, we derive the probabilistic interpretation of
the successor representation in terms of Bayesian filtering and thus design a
novel active inference agent architecture utilizing successor representations
instead of model-based planning. We demonstrate that active inference successor
representations have significant advantages over current active inference
agents in terms of planning horizon and computational cost. Moreover, we
demonstrate how the successor representation agent can generalize to changing
reward functions such as variants of the expected free energy.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2022 13:50:27 GMT"
}
] | 1,658,361,600,000 | [
[
"Millidge",
"Beren",
""
],
[
"Buckley",
"Christopher L",
""
]
] |
2207.09964 | Heiko Paulheim | Franz Krause, Tobias Weller, Heiko Paulheim | On a Generalized Framework for Time-Aware Knowledge Graphs | Accepted for publication at Semantics 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs have emerged as an effective tool for managing and
standardizing semistructured domain knowledge in a human- and
machine-interpretable way. In terms of graph-based domain applications, such as
embeddings and graph neural networks, current research is increasingly taking
into account the time-related evolution of the information encoded within a
graph. Algorithms and models for stationary and static knowledge graphs are
extended to make them accessible for time-aware domains, where time-awareness
can be interpreted in different ways. In particular, a distinction needs to be
made between the validity period and the traceability of facts as objectives of
time-related knowledge graph extensions. In this context, terms and definitions
such as dynamic and temporal are often used inconsistently or interchangeably
in the literature. Therefore, with this paper we aim to provide a short but
well-defined overview of time-aware knowledge graph extensions and thus
facilitate future research in this field as well.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2022 15:14:46 GMT"
}
] | 1,658,361,600,000 | [
[
"Krause",
"Franz",
""
],
[
"Weller",
"Tobias",
""
],
[
"Paulheim",
"Heiko",
""
]
] |
2207.10170 | Tim Franzmeyer | Tim Franzmeyer, Stephen McAleer, Jo\~ao F. Henriques, Jakob N.
Foerster, Philip H.S. Torr, Adel Bibi, Christian Schroeder de Witt | Illusory Attacks: Information-Theoretic Detectability Matters in
Adversarial Attacks | ICLR 2024 Spotlight (top 5%) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Autonomous agents deployed in the real world need to be robust against
adversarial attacks on sensory inputs. Robustifying agent policies requires
anticipating the strongest attacks possible. We demonstrate that existing
observation-space attacks on reinforcement learning agents have a common
weakness: while effective, their lack of information-theoretic detectability
constraints makes them detectable using automated means or human inspection.
Detectability is undesirable to adversaries as it may trigger security
escalations. We introduce {\epsilon}-illusory, a novel form of adversarial
attack on sequential decision-makers that is both effective and of
{\epsilon}-bounded statistical detectability. We propose a novel dual ascent
algorithm to learn such attacks end-to-end. Compared to existing attacks, we
empirically find {\epsilon}-illusory to be significantly harder to detect with
automated methods, and a small study with human participants (IRB approval
under reference R84123/RE001) suggests they are similarly harder to detect for
humans. Our findings suggest the need for better anomaly detectors, as well as
effective hardware- and system-level defenses. The project website can be found
at https://tinyurl.com/illusory-attacks.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2022 19:49:09 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 16:00:59 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 17:11:12 GMT"
},
{
"version": "v4",
"created": "Mon, 29 Apr 2024 16:59:57 GMT"
},
{
"version": "v5",
"created": "Mon, 6 May 2024 06:53:31 GMT"
}
] | 1,715,040,000,000 | [
[
"Franzmeyer",
"Tim",
""
],
[
"McAleer",
"Stephen",
""
],
[
"Henriques",
"João F.",
""
],
[
"Foerster",
"Jakob N.",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Bibi",
"Adel",
""
],
[
"de Witt",
"Christian Schroeder",
""
]
] |
2207.10330 | Gaetan Serre | Ga\"etan Serr\'e (TAU, Inria, LISN), Eva Boguslawski (RTE, TAU, LISN,
Inria), Benjamin Donnot (RTE), Adrien Pav\~ao (TAU, LISN, Inria), Isabelle
Guyon (TAU, LISN, Inria), Antoine Marot (RTE) | Reinforcement learning for Energies of the future and carbon neutrality:
a Challenge Design | null | IEEE SSCI ADPRL, IEEE, Dec 2022, Singapour, Singapore | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current rapid changes in climate increase the urgency to change energy
production and consumption management, to reduce carbon and other greenhouse
gas production. In this context, the French electricity network management
company RTE (R{\'e}seau de Transport d'{\'E}lectricit{\'e}) has recently
published the results of an extensive study outlining various scenarios for
tomorrow's French power management. We propose a challenge that will test the
viability of such a scenario. The goal is to control electricity transportation
in power networks, while pursuing multiple objectives: balancing production and
consumption, minimizing energetic losses, and keeping people and equipment safe
and particularly avoiding catastrophic failures. While the importance of the
application provides a goal in itself, this challenge also aims to push the
state-of-the-art in a branch of Artificial Intelligence (AI) called
Reinforcement Learning (RL), which offers new possibilities to tackle control
problems. In particular, various aspects of the combination of Deep Learning
and RL called Deep Reinforcement Learning remain to be harnessed in this
application domain. This challenge belongs to a series started in 2019 under
the name "Learning to run a power network" (L2RPN). In this new edition, we
introduce new more realistic scenarios proposed by RTE to reach carbon
neutrality by 2050, retiring fossil fuel electricity production, increasing
proportions of renewable and nuclear energy and introducing batteries.
Furthermore, we provide a baseline using a state-of-the-art reinforcement
learning algorithm to stimulate the future participants.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2022 06:56:46 GMT"
}
] | 1,658,448,000,000 | [
[
"Serré",
"Gaëtan",
"",
"TAU, Inria, LISN"
],
[
"Boguslawski",
"Eva",
"",
"RTE, TAU, LISN,\n Inria"
],
[
"Donnot",
"Benjamin",
"",
"RTE"
],
[
"Pavão",
"Adrien",
"",
"TAU, LISN, Inria"
],
[
"Guyon",
"Isabelle",
"",
"TAU, LISN, Inria"
],
[
"Marot",
"Antoine",
"",
"RTE"
]
] |
2207.10991 | Stefan Feuerriegel | Maria De-Arteaga and Stefan Feuerriegel and Maytal Saar-Tsechansky | Algorithmic Fairness in Business Analytics: Directions for Research and
Practice | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extensive adoption of business analytics (BA) has brought financial gains
and increased efficiencies. However, these advances have simultaneously drawn
attention to rising legal and ethical challenges when BA inform decisions with
fairness implications. As a response to these concerns, the emerging study of
algorithmic fairness deals with algorithmic outputs that may result in
disparate outcomes or other forms of injustices for subgroups of the
population, especially those who have been historically marginalized. Fairness
is relevant on the basis of legal compliance, social responsibility, and
utility; if not adequately and systematically addressed, unfair BA systems may
lead to societal harms and may also threaten an organization's own survival,
its competitiveness, and overall performance. This paper offers a
forward-looking, BA-focused review of algorithmic fairness. We first review the
state-of-the-art research on sources and measures of bias, as well as bias
mitigation algorithms. We then provide a detailed discussion of the
utility-fairness relationship, emphasizing that the frequent assumption of a
trade-off between these two constructs is often mistaken or short-sighted.
Finally, we chart a path forward by identifying opportunities for business
scholars to address impactful, open challenges that are key to the effective
and responsible deployment of BA.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2022 10:21:38 GMT"
}
] | 1,658,707,200,000 | [
[
"De-Arteaga",
"Maria",
""
],
[
"Feuerriegel",
"Stefan",
""
],
[
"Saar-Tsechansky",
"Maytal",
""
]
] |
2207.11007 | V\'ictor Gallego-Fontenla | Victor Gallego-Fontenla, Juan C. Vidal, Manuel Lama | Gradual Drift Detection in Process Models Using Conformance Metrics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Changes, planned or unexpected, are common during the execution of real-life
processes. Detecting these changes is a must for optimizing the performance of
organizations running such processes. Most of the algorithms present in the
state-of-the-art focus on the detection of sudden changes, leaving aside other
types of changes. In this paper, we will focus on the automatic detection of
gradual drifts, a special type of change, in which the cases of two models
overlap during a period of time. The proposed algorithm relies on conformance
checking metrics to carry out the automatic detection of the changes,
performing also a fully automatic classification of these changes into sudden
or gradual. The approach has been validated with a synthetic dataset consisting
of 120 logs with different distributions of changes, getting better results in
terms of detection and classification accuracy, delay and change region
overlap than the main state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2022 10:56:35 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 08:40:44 GMT"
}
] | 1,683,590,400,000 | [
[
"Gallego-Fontenla",
"Victor",
""
],
[
"Vidal",
"Juan C.",
""
],
[
"Lama",
"Manuel",
""
]
] |
2207.11324 | Yuan An | Yuan An and Alex Kalinowski and Jane Greenberg | Exploring Wasserstein Distance across Concept Embeddings for Ontology
Matching | Accepted by the 17th International Workshop on Ontology Matching
collocated with the 21th International Semantic Web Conference (ISWC 2022) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Measuring the distance between ontological elements is fundamental for
ontology matching. String-based distance metrics are notorious for shallow
syntactic matching. In this exploratory study, we investigate Wasserstein
distance targeting continuous space that can incorporate various types of
information. We use a pre-trained word embeddings system to embed ontology
element labels. We examine the effectiveness of Wasserstein distance for
measuring similarity between ontologies, and discovering and refining matchings
between individual elements. Our experiments with the OAEI conference track and
MSE benchmarks achieved competitive results compared to the leading systems.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2022 20:31:39 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Sep 2022 02:53:58 GMT"
}
] | 1,663,804,800,000 | [
[
"An",
"Yuan",
""
],
[
"Kalinowski",
"Alex",
""
],
[
"Greenberg",
"Jane",
""
]
] |
2207.11897 | Tosin Ige | Tosin Ige, Sikiru Adewale | AI Powered Anti-Cyber Bullying System using Machine Learning Algorithm
of Multinomial Naive Bayes and Optimized Linear Support Vector Machine | 5 pages | International Journal of Advanced Computer Science and
Applications(IJACSA), Volume 13 Issue 5, 2022 | 10.14569/IJACSA.2022.0130502 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | "Unless and until our society recognizes cyber bullying for what it is, the
suffering of thousands of silent victims will continue." ~ Anna Maria Chavez.
There has been a series of research works on cyber bullying which were unable to
provide a reliable solution to cyber bullying. In this research work, we were able to
provide a permanent solution to this by developing a model capable of detecting
and intercepting incoming and outgoing bullying messages with 92% accuracy. We
also developed a chatbot automation messaging system to test our model, leading
to the development of an Artificial Intelligence powered anti-cyber bullying
system using the machine learning algorithms of Multinomial Naive Bayes (MNB) and
optimized linear Support Vector Machine (SVM). Our model is able to detect and
intercept outgoing and incoming bullying messages and take immediate
action.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2022 04:02:02 GMT"
}
] | 1,658,793,600,000 | [
[
"Ige",
"Tosin",
""
],
[
"Adewale",
"Sikiru",
""
]
] |
2207.12052 | Shah Miah Prof | Ali Faqihi and Shah J Miah | Designing an AI-Driven Talent Intelligence Solution: Exploring Big Data
to extend the TOE Framework | Working paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI has the potential to improve approaches to talent management enabling
dynamic provisions through implementing advanced automation. This study aims to
identify the new requirements for developing AI-oriented artifacts to address
talent management issues. Focusing on enhancing interactions between
professional assessment and planning attributes, the design artifact is an
intelligent employment automation solution for career guidance that is largely
dependent on a talent intelligence module and an individual's growth needs. A
design science method is adopted for conducting the experimental study with
structured machine learning techniques which is the primary element of a
comprehensive AI solution framework informed through a proposed moderation of
the technology-organization-environment theory.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2022 10:42:50 GMT"
}
] | 1,658,793,600,000 | [
[
"Faqihi",
"Ali",
""
],
[
"Miah",
"Shah J",
""
]
] |
2207.12054 | Luka Abb | Luka Abb, Jana-Rebecca Rehse | A Reference Data Model for Process-Related User Interaction Logs | Pre-print, to be published at BPM 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | User interaction (UI) logs are high-resolution event logs that record
low-level activities performed by a user during the execution of a task in an
information system. Each event in a UI log corresponds to a single interaction
between the user and the interface, such as clicking a button or entering a
string into a text field. UI logs are used for purposes like task mining or
robotic process automation (RPA), but each study and tool relies on a different
conceptualization and implementation of the elements and attributes that
constitute user interactions. This lack of standardization makes it difficult
to integrate UI logs from different sources and to combine tools for UI data
collection with downstream analytics or automation solutions. To address this,
we propose a universally applicable reference data model for process-related UI
logs. Based on a review of scientific literature and industry solutions, this
model includes the core attributes of UI logs, but remains flexible with regard
to the scope, level of abstraction, and case notion. We provide an
implementation of the model as an extension to the XES interchange standard for
event logs and demonstrate its practical applicability in a real-life RPA
scenario.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2022 10:47:47 GMT"
}
] | 1,658,793,600,000 | [
[
"Abb",
"Luka",
""
],
[
"Rehse",
"Jana-Rebecca",
""
]
] |
2207.12162 | Maxime Amblard | Maria Boritchev (SEMAGRAMME, LORIA), Maxime Amblard (SEMAGRAMME,
LORIA) | A Multi-Party Dialogue Ressource in French | null | 13th Edition of Language Resources and Evaluation Conference (LREC
2022), Jun 2022, Marseille, France | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Dialogues in Games (DinG), a corpus of manual transcriptions of
real-life, oral, spontaneous multi-party dialogues between French-speaking
players of the board game Catan. Our objective is to make available a quality
resource for French, composed of long dialogues, to facilitate their study in
the style of (Asher et al., 2016). In a general dialogue setting, participants
share personal information, which makes it impossible to disseminate the
resource freely and openly. In DinG, the attention of the participants is
focused on the game, which prevents them from talking about themselves. In
addition, we are conducting a study on the nature of the questions in dialogue,
through annotation (Cruz Blandon et al., 2019), in order to develop more
natural automatic dialogue systems.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2022 13:02:54 GMT"
}
] | 1,658,793,600,000 | [
[
"Boritchev",
"Maria",
"",
"SEMAGRAMME, LORIA"
],
[
"Amblard",
"Maxime",
"",
"SEMAGRAMME,\n LORIA"
]
] |