entry_type (string, 4 classes) | citation_key (string, 10-110 chars) | title (string, 6-276 chars, nullable) | editor (string, 723 classes) | month (string, 69 classes) | year (date, 1963 to 2022) | address (string, 202 classes) | publisher (string, 41 classes) | url (string, 34-62 chars) | author (string, 6-2.07k chars, nullable) | booktitle (string, 861 classes) | pages (string, 1-12 chars, nullable) | abstract (string, 302-2.4k chars) | journal (string, 5 classes) | volume (string, 24 classes) | doi (string, 20-39 chars, nullable) | n (string, 3 classes) | wer (string, 1 class) | uas (null) | language (string, 3 classes) | isbn (string, 34 classes) | recall (null) | number (string, 8 classes) | a (null) | b (null) | c (null) | k (null) | f1 (string, 4 classes) | r (string, 2 classes) | mci (string, 1 class) | p (string, 2 classes) | sd (string, 1 class) | female (string, 0 classes) | m (string, 0 classes) | food (string, 1 class) | f (string, 1 class) | note (string, 20 classes) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | song-etal-2022-trattack | {TRA}ttack: Text Rewriting Attack Against Text Retrieval | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.20/ | Song, Junshuai and Zhang, Jiangshan and Zhu, Jifeng and Tang, Mengyun and Yang, Yong | Proceedings of the 7th Workshop on Representation Learning for NLP | 191--203 | Text retrieval has been widely-used in many online applications to help users find relevant information from a text collection. In this paper, we study a new attack scenario against text retrieval to evaluate its robustness to adversarial attacks under the black-box setting, in which attackers want their own texts to always get high relevance scores with different users' input queries and thus be retrieved frequently and can receive large amounts of impressions for profits. Considering that most current attack methods only simply follow certain fixed optimization rules, we propose a novel text rewriting attack (TRAttack) method with learning ability from the multi-armed bandit mechanism. Extensive experiments conducted on simulated victim environments demonstrate that TRAttack can yield texts that have higher relevance scores with different given users' queries than those generated by current state-of-the-art attack methods. We also evaluate TRAttack on Tencent Cloud`s and Baidu Cloud`s commercially-available text retrieval APIs, and the rewritten adversarial texts successfully get high relevance scores with different user queries, which shows the practical potential of our method and the risk of text retrieval systems. | null | null | 10.18653/v1/2022.repl4nlp-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,288 |
inproceedings | wartena-2022-geometry | On the Geometry of Concreteness | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.21/ | Wartena, Christian | Proceedings of the 7th Workshop on Representation Learning for NLP | 204--212 | In this paper we investigate how concreteness and abstractness are represented in word embedding spaces. We use data for English and German, and show that concreteness and abstractness can be determined independently and turn out to be completely opposite directions in the embedding space. Various methods can be used to determine the direction of concreteness, always resulting in roughly the same vector. Though concreteness is a central aspect of the meaning of words and can be detected clearly in embedding spaces, it seems not as easy to subtract or add concreteness to words to obtain other words or word senses like e.g. can be done with a semantic property like gender. | null | null | 10.18653/v1/2022.repl4nlp-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,289 |
inproceedings | varshney-etal-2022-towards | Towards Improving Selective Prediction Ability of {NLP} Systems | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.23/ | Varshney, Neeraj and Mishra, Swaroop and Baral, Chitta | Proceedings of the 7th Workshop on Representation Learning for NLP | 221--226 | It`s better to say {\textquotedblleft}I can`t answer{\textquotedblright} than to answer incorrectly. This selective prediction ability is crucial for NLP systems to be reliably deployed in real-world applications. Prior work has shown that existing selective prediction techniques fail to perform well, especially in the out-of-domain setting. In this work, we propose a method that improves probability estimates of models by calibrating them using prediction confidence and difficulty score of instances. Using these two signals, we first annotate held-out instances and then train a calibrator to predict the likelihood of correctness of the model`s prediction. We instantiate our method with Natural Language Inference (NLI) and Duplicate Detection (DD) tasks and evaluate it in both In-Domain (IID) and Out-of-Domain (OOD) settings. In (IID, OOD) settings, we show that the representations learned by our calibrator result in an improvement of (15.81{\%}, 5.64{\%}) and (6.19{\%}, 13.9{\%}) over {\textquoteleft}MaxProb' -a selective prediction baseline- on NLI and DD tasks respectively. | null | null | 10.18653/v1/2022.repl4nlp-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,290 |
inproceedings | tokarchuk-niculae-2022-target | On Target Representation in Continuous-output Neural Machine Translation | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.24/ | Tokarchuk, Evgeniia and Niculae, Vlad | Proceedings of the 7th Workshop on Representation Learning for NLP | 227--235 | Continuous generative models proved their usefulness in high-dimensional data, such as image and audio generation. However, continuous models for text generation have received limited attention from the community. In this work, we study continuous text generation using Transformers for neural machine translation (NMT). We argue that the choice of embeddings is crucial for such models, so we aim to focus on one particular aspect{\textquotedblright}:{\textquotedblright} target representation via embeddings. We explore pretrained embeddings and also introduce knowledge transfer from the discrete Transformer model using embeddings in Euclidean and non-Euclidean spaces. Our results on the WMT Romanian-English and English-Turkish benchmarks show such transfer leads to the best-performing continuous model. | null | null | 10.18653/v1/2022.repl4nlp-1.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,291 |
inproceedings | wu-etal-2022-zero | Zero-shot Cross-lingual Transfer is Under-specified Optimization | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.25/ | Wu, Shijie and Van Durme, Benjamin and Dredze, Mark | Proceedings of the 7th Workshop on Representation Learning for NLP | 236--248 | Pretrained multilingual encoders enable zero-shot cross-lingual transfer, but often produce unreliable models that exhibit high performance variance on the target language. We postulate that this high variance results from zero-shot cross-lingual transfer solving an under-specified optimization problem. We show that any linear-interpolated model between the source language monolingual model and source + target bilingual model has equally low source language generalization error, yet the target language generalization error reduces smoothly and linearly as we move from the monolingual to bilingual model, suggesting that the model struggles to identify good solutions for both source and target languages using the source language alone. Additionally, we show that zero-shot solution lies in non-flat region of target language error generalization surface, causing the high variance. | null | null | 10.18653/v1/2022.repl4nlp-1.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,292 |
inproceedings | wegmann-etal-2022-author | Same Author or Just Same Topic? Towards Content-Independent Style Representations | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.26/ | Wegmann, Anna and Schraagen, Marijn and Nguyen, Dong | Proceedings of the 7th Workshop on Representation Learning for NLP | 249--268 | Linguistic style is an integral component of language. Recent advances in the development of style representations have increasingly used training objectives from authorship verification (AV){\textquotedblright}:{\textquotedblright} Do two texts have the same author? The assumption underlying the AV training task (same author approximates same writing style) enables self-supervised and, thus, extensive training. However, a good performance on the AV task does not ensure good {\textquotedblleft}general-purpose{\textquotedblright} style representations. For example, as the same author might typically write about certain topics, representations trained on AV might also encode content information instead of style alone. We introduce a variation of the AV training task that controls for content using conversation or domain labels. We evaluate whether known style dimensions are represented and preferred over content information through an original variation to the recently proposed STEL framework. We find that representations trained by controlling for conversation are better than representations trained with domain or no content control at representing style independent from content. | null | null | 10.18653/v1/2022.repl4nlp-1.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,293 |
inproceedings | stephan-roth-2022-weanf | {W}ea{NF}{\textquotedblright}:'' Weak Supervision with Normalizing Flows | Gella, Spandana and He, He and Majumder, Bodhisattwa Prasad and Can, Burcu and Giunchiglia, Eleonora and Cahyawijaya, Samuel and Min, Sewon and Mozes, Maximilian and Li, Xiang Lorraine and Augenstein, Isabelle and Rogers, Anna and Cho, Kyunghyun and Grefenstette, Edward and Rimell, Laura and Dyer, Chris | may | 2022 | Dublin, Ireland | Association for Computational Linguistics | https://aclanthology.org/2022.repl4nlp-1.27/ | Stephan, Andreas and Roth, Benjamin | Proceedings of the 7th Workshop on Representation Learning for NLP | 269--279 | A popular approach to decrease the need for costly manual annotation of large data sets is weak supervision, which introduces problems of noisy labels, coverage and bias. Methods for overcoming these problems have either relied on discriminative models, trained with cost functions specific to weak supervision, and more recently, generative models, trying to model the output of the automatic annotation process. In this work, we explore a novel direction of generative modeling for weak supervision{\textquotedblright}:{\textquotedblright} Instead of modeling the output of the annotation process (the labeling function matches), we generatively model the input-side data distributions (the feature space) covered by labeling functions. Specifically, we estimate a density for each weak labeling source, or labeling function, by using normalizing flows. An integral part of our method is the flow-based modeling of multiple simultaneously matching labeling functions, and therefore phenomena such as labeling function overlap and correlations are captured. We analyze the effectiveness and modeling capabilities on various commonly used weak supervision data sets, and show that weakly supervised normalizing flows compare favorably to standard weak supervision baselines. | null | null | 10.18653/v1/2022.repl4nlp-1.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,294 |
inproceedings | akhlaghi-etal-2022-reading | Reading Assistance through {LARA}, the Learning And Reading Assistant | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.1/ | Akhlaghi, Elham and Au{\dhunard{\'ottir, Ingibj{\"org I{\dha and B{\'edi, Branislav and Beedar, Hakeem and Berthelsen, Harald and Chua, Cathy and Cucchiarini, Catia and Eyj{\'olfsson, Brynjarr and Ivanova, Nedelina and Maizonniaux, Christ{\`ele and N{\'i Chiar{\'ain, Neasa and Rayner, Manny and Sloan, John and Vigf{\'usson, Sigur{\dhur and Zuckermann, Ghil{'ad | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 1--8 | We present an overview of LARA, the Learning And Reading Assistant, an open source platform for easy creation and use of multimedia annotated texts designed to support the improvement of reading skills. The paper is divided into three parts. In the first, we give a brief summary of LARA`s processing. In the second, we describe some generic functionality specially relevant for reading assistance: support for phonetically annotated texts, support for image-based texts, and integrated production of text-to-speech (TTS) generated audio. In the third, we outline some of the larger projects so far carried out with LARA, involving development of content for learning second and foreign (L2) languages such as Icelandic, Farsi, Irish, Old Norse and the Australian Aboriginal language Barngarla, where the issues involved overlap with those that arise when trying to help students improve first-language (L1) reading skills. All software and almost all content is freely available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,296 |
inproceedings | shardlow-2022-agree | Agree to Disagree: Exploring Subjectivity in Lexical Complexity | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.2/ | Shardlow, Matthew | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 9--16 | Subjective factors affect our familiarity with different words. Our education, mother tongue, dialect or social group all contribute to the words we know and understand. When asking people to mark words they understand some words are unanimously agreed to be complex, whereas other annotators universally disagree on the complexity of other words. In this work, we seek to expose this phenomenon and investigate the factors affecting whether a word is likely to be subjective, or not. We investigate two recent word complexity datasets from shared tasks. We demonstrate that subjectivity is present and describable in both datasets. Further we show results of modelling and predicting the subjectivity of the complexity annotations in the most recent dataset, attaining an F1-score of 0.714. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,297 |
inproceedings | alfter-etal-2022-dictionary | A Dictionary-Based Study of Word Sense Difficulty | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.3/ | Alfter, David and Cardon, R{\'e}mi and Fran{\c{c}}ois, Thomas | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 17--24 | In this article, we present an exploratory study on perceived word sense difficulty by native and non-native speakers of French. We use a graded lexicon in conjunction with the French Wiktionary to generate tasks in bundles of four items. Annotators manually rate the difficulty of the word senses based on their usage in a sentence by selecting the easiest and the most difficult word sense out of four. Our results show that the native and non-native speakers largely agree when it comes to the difficulty of words. Further, the rankings derived from the manual annotation broadly follow the levels of the words in the graded resource, although these levels were not overtly available to annotators. Using clustering, we investigate whether there is a link between the complexity of a definition and the difficulty of the associated word sense. However, results were inconclusive. The annotated data set is available for research purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,298 |
inproceedings | hauser-etal-2022-multilingual | A Multilingual Simplified Language News Corpus | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.4/ | Hauser, Renate and Vamvas, Jannis and Ebling, Sarah and Volk, Martin | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 25--30 | Simplified language news articles are being offered by specialized web portals in several countries. The thousands of articles that have been published over the years are a valuable resource for natural language processing, especially for efforts towards automatic text simplification. In this paper, we present SNIML, a large multilingual corpus of news in simplified language. The corpus contains 13k simplified news articles written in one of six languages: Finnish, French, Italian, Swedish, English, and German. All articles are shared under open licenses that permit academic use. The level of text simplification varies depending on the news portal. We believe that even though SNIML is not a parallel corpus, it can be useful as a complement to the more homogeneous but often smaller corpora of news in the simplified variety of one language that are currently in use. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,299 |
inproceedings | rennes-etal-2022-swedish | The {S}wedish Simplification Toolkit: {--} Designed with Target Audiences in Mind | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.5/ | Rennes, Evelina and Santini, Marina and Jonsson, Arne | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 31--38 | In this paper, we present the current version of The Swedish Simplification Toolkit. The toolkit includes computational and empirical tools that have been developed along the years to explore a still neglected area of NLP, namely the simplification of {\textquotedblleftstandard{\textquotedblright texts to meet the needs of target audiences. Target audiences, such as people affected by dyslexia, aphasia, autism, but also children and second language learners, require different types of text simplification and adaptation. For example, while individual with aphasia have difficulties in reading compounds (such as arbetsmarknadsdepartement, eng. ministry of employment), second language learners struggle with cultural-specific vocabulary (e.g. konfliktr{\"add, eng. afraid of conflicts). The toolkit allows user to selectively decide the types of simplification that meet the specific needs of the target audience they belong to. The Swedish Simplification Toolkit is one of the first attempts to overcome the one-fits-all approach that is still dominant in Automatic Text Simplification, and proposes a set of computational methods that, used individually or in combination, may help individuals reduce reading (and writing) difficulties. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,300 |
inproceedings | drevet-etal-2022-hibou | {HIBOU}: an e{B}ook to improve Text Comprehension and Reading Fluency for Beginning Readers of {F}rench | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.6/ | Drevet, Ludivine Javourey and Dufau, St{\'e}phane and Ziegler, Johannes Christoph and Gala, N{\'u}ria | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 39--45 | In this paper, we present HIBOU, an eBook application initially developed for iOs, displaying adapted texts (i.e. simplified), and proposing text comprehension activities. The application has been used in six elementary schools in France to evaluate and train reading fluency and comprehension skills on beginning readers of French. HIBOU displays two versions of French literary and documentary texts from the ALECTOR corpus, the {\textquoteleft}original', and a simplified version. Text simplifications have been manually performed at the lexical, syntactic, and discursive levels. The child can read in autonomy and has access to different games on word identification. HIBOU is at present being developed to be online in a platform that will be available at elementary schools in France. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,301 |
inproceedings | pirali-etal-2022-paddle | {PADDL}e: a Platform to Identify Complex Words for Learners of {F}rench as a Foreign Language ({FFL}) | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.7/ | Pirali, Camille and Fran{\c{c}}ois, Thomas and Gala, N{\'u}ria | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 46--53 | Annotations of word difficulty by readers provide invaluable insights into lexical complexity. Yet, there is currently a paucity of tools allowing researchers to gather such annotations in an adaptable and simple manner. This article presents PADDLe, an online platform aiming to fill that gap and designed to encourage best practices when collecting difficulty judgements. Studies crafted using the tool ask users to provide a selection of demographic information, then to annotate a certain number of texts and answer multiple-choice comprehension questions after each text. Researchers are encouraged to use a multi-level annotation scheme, to avoid the drawbacks of binary complexity annotations. Once a study is launched, its results are summarised in a visual representation accessible both to researchers and teachers, and can be downloaded in .csv format. Some findings of a pilot study designed with the tool are also provided in the article, to give an idea of the types of research questions it allows to answer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,302 |
inproceedings | hernandez-etal-2022-open | Open corpora and toolkit for assessing text readability in {F}rench | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.8/ | Hernandez, Nicolas and Oulbaz, Nabil and Faine, Tristan | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 54--61 | Measuring the linguistic complexity or assessing the readability of spoken or written productions has been the concern of several researchers in pedagogy and (foreign) language teaching for decades. Researchers study for example the children`s language development or the second language (L2) learning with tasks such as age or reader`s level recommendation, or text simplification. Despite the interest for the topic, open datasets and toolkits for processing French are scarce. Our contributions are: (1) three open corpora for supporting research on readability assessment in French, (2) a dataset analysis with traditional formulas and an unsupervised measure, (3) a toolkit dedicated for French processing which includes the implementation of statistical formulas, a pseudo-perplexity measure, and state-of-the-art classifiers based on SVM and fine-tuned BERT for predicting readability levels, and (4) an evaluation of the toolkit on the three data sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,303 |
inproceedings | wilkens-etal-2022-mwe | {MWE} for Essay Scoring {E}nglish as a Foreign Language | Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gala, N{\'u}ria | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.readi-1.9/ | Wilkens, Rodrigo and Seibert, Daiane and Wang, Xiaoou and Fran{\c{c}}ois, Thomas | Proceedings of the 2nd Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI) within the 13th Language Resources and Evaluation Conference | 62--69 | Mastering a foreign language like English can bring better opportunities. In this context, although multiword expressions (MWE) are associated with proficiency, they are usually neglected in the works of automatic scoring language learners. Therefore, we study MWE-based features (i.e., occurrence and concreteness) in this work, aiming at assessing their relevance for automated essay scoring. To achieve this goal, we also compare MWE features with other classic features, such as length-based, graded resource, orthographic neighbors, part-of-speech, morphology, dependency relations, verb tense, language development, and coherence. Although the results indicate that classic features are more significant than MWE for automatic scoring, we observed encouraging results when looking at the MWE concreteness through the levels. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,304 |
inproceedings | pesenti-etal-2022-effect | The Effect of e{H}ealth Training on Dysarthric Speech | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.1/ | Pesenti, Chiara and Van Bemmel, Loes and van Hout, Roeland and Strik, Helmer | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 1--8 | In the current study on dysarthric speech, we investigate the effect of web-based treatment, and whether there is a difference between content and function words. Since the goal of the treatment is to speak louder, without raising pitch, we focus on acoustic-phonetic features related to loudness, intensity, and pitch. We analyse dysarthric read speech from eight speakers at word level. We also investigate whether there are differences between content words and function words, and whether the treatment has a different impact on these two classes of words. Linear Mixed-Effects models show that there are differences before and after treatment, that for some speakers the treatment has the desired effect, but not for all speakers, and that the effect of the treatment on words for the two categories does not seem to be different. To a large extent, our results are in line with the results of a previous study in which the same data were analyzed in a different way, i.e. by studying intelligibility scores. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,306 |
inproceedings | lindsay-etal-2022-generating | Generating Synthetic Clinical Speech Data through Simulated {ASR} Deletion Error | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.2/ | Lindsay, Hali and Tr{\"oger, Johannes and Mina, Mario and Linz, Nicklas and M{\"uller, Philipp and Alexandersson, Jan and Ramakers, Inez | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 9--16 | Training classification models on clinical speech is a time-saving and effective solution for many healthcare challenges, such as screening for Alzheimer`s Disease over the phone. One of the primary limiting factors of the success of artificial intelligence (AI) solutions is the amount of relevant data available. Clinical data is expensive to collect, not sufficient for large-scale machine learning or neural methods, and often not shareable between institutions due to data protection laws. With the increasing demand for AI in health systems, generating synthetic clinical data that maintains the nuance of underlying patient pathology is the next pressing task. Previous work has shown that automated evaluation of clinical speech tasks via automatic speech recognition (ASR) is comparable to manually annotated results in diagnostic scenarios even though ASR systems produce errors during the transcription process. In this work, we propose to generate synthetic clinical data by simulating ASR deletion errors on the transcript to produce additional data. We compare the synthetic data to the real data with traditional machine learning methods to test the feasibility of the proposed method. Using a dataset of 50 cognitively impaired and 50 control Dutch speakers, ten additional data points are synthetically generated for each subject, increasing the training size for 100 to 1000 training points. We find consistent and comparable performance of models trained on only synthetic data (AUC=0.77) to real data (AUC=0.77) in a variety of traditional machine learning scenarios. Additionally, linear models are not able to distinguish between real and synthetic data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,307 |
inproceedings | melin-pendrill-2022-novel | A Novel Metrological Approach to a More Consistent Way of Defining and Analyzing Memory Task Difficulty in Word Learning List Tests with Repeated Trials | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.3/ | Melin, Jeanette and Pendrill, Leslie | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 17--21 | New candidate diagnostics for cognitive decline and dementia have recently been proposed based on effects such as primacy and recency in word learning memory list tests. The diagnostic value is, however, currently limited by the multiple ways in which raw scores, and in particular these serial position effects (SPE), have been defined and analyzed to date. In this work, we build on previous analyses taking a metrological approach to the 10-item word learning list. We show i) how the variation in task difficulty reduces successively for trials 2 and 3, ii) how SPE change with repeated trials as predicted with our entropy-based theory, and iii) how possibilities to separate cohort members according to cognitive health status are limited. These findings mainly depend on the test design itself: A test with only 10 words, where SPE do not dominate over trials, requires more challenging words to increase the variation in task difficulty, and in turn to challenge the test persons. The work is novel and also contributes to the endeavour to develop for more consistent ways of defining and analyzing memory task difficulty, and in turn opens up for more practical and accurate measurement in clinical practice, research and trials. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,308 |
inproceedings | beccaria-etal-2022-extraction | Extraction and Classification of Acoustic Features from {I}talian Speaking Children with Autism Spectrum Disorders. | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.4/ | Beccaria, Federica and Gagliardi, Gloria and Kokkinakis, Dimitrios | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 22--30 | Autism Spectrum Disorders (ASD) are a group of complex developmental conditions whose effects and severity show high intraindividual variability. However, one of the main symptoms shared along the spectrum is social interaction impairments that can be explored through acoustic analysis of speech production. In this paper, we compare 14 Italian-speaking children with ASD and 14 typically developing peers. Accordingly, we extracted and selected the acoustic features related to prosody, quality of voice, loudness, and spectral distribution using the parameter set eGeMAPS provided by the openSMILE feature extraction toolkit. We implemented four supervised machine learning methods to evaluate the extraction performances. Our findings show that Decision Trees (DTs) and Support Vector Machines (SVMs) are the best-performing methods. The overall DT models reach a 100{\%} recall on all the trials, meaning they correctly recognise autistic features. However, half of its models overfit, while SVMs are more consistent. One of the results of the work is the creation of a speech pipeline to extract Italian speech biomarkers typical of ASD by comparing our results with studies based on other languages. A better understanding of this topic can support clinicians in diagnosing the disorder. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,309 |
inproceedings | johannssen-etal-2022-classification | Classification of {G}erman Jungian Extraversion and Introversion Texts with Assessment of Changes During the {COVID}-19 Pandemic | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.5/ | Johann{\ss}en, Dirk and Biemann, Chris and Scheffer, David | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 31--40 | The corona pandemic and countermeasures such as social distancing and lockdowns have confronted individuals with new challenges for their mental health and well-being. It can be assumed that the Jungian psychology types of extraverts and introverts react differently to these challenges. We propose a Bi-LSTM model with an attention mechanism for classifying introversion and extraversion from German tweets, which is trained on hand-labeled data created by 335 participants. With this work, we provide this novel dataset for free use and validation. The proposed model achieves solid performance with F1 = .72. Furthermore, we created a feature engineered logistic model tree (LMT) trained on hand-labeled tweets, to which the data is also made available with this work. With this second model, German tweets before and during the pandemic have been investigated. Extraverts display more positive emotions, whilst introverts show more insight and higher rates of anxiety. Even though such a model can not replace proper psychological diagnostics, it can help shed light on linguistic markers and to help understand introversion and extraversion better for a variety of applications and investigations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,310 |
inproceedings | gale-etal-2022-post | The Post-Stroke Speech Transcription ({PSST}) Challenge | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.6/ | Gale, Robert C. and Fleegle, Mikala and Fergadiotis, Gerasimos and Bedrick, Steven | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 41--55 | We present the outcome of the Post-Stroke Speech Transcription (PSST) challenge. For the challenge, we prepared a new data resource of responses to two confrontation naming tests found in AphasiaBank, extracting audio and adding new phonemic transcripts for each response. The challenge consisted of two tasks. Task A asked challengers to build an automatic speech recognizer (ASR) for phonemic transcription of the PSST samples, evaluated in terms of phoneme error rate (PER) as well as a finer-grained metric derived from phonological feature theory, feature error rate (FER). The best model had a 9.9{\%} FER / 20.0{\%} PER, improving on our baseline by a relative 18{\%} and 24{\%}, respectively. Task B approximated a downstream assessment task, asking challengers to identify whether each recording contained a correctly pronounced target word. Challengers were unable to improve on the baseline algorithm; however, using this algorithm with the improved transcripts from Task A resulted in 92.8{\%} accuracy / 0.921 F1, a relative improvement of 2.8{\%} and 3.3{\%}, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,311 |
inproceedings | tran-2022-post | Post-Stroke Speech Transcription Challenge (Task {B}): Correctness Detection in Anomia Diagnosis with Imperfect Transcripts | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.7/ | Tran, Trang | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 56--61 | Aphasia is a language disorder that affects millions of adults worldwide annually; it is most commonly caused by strokes or neurodegenerative diseases. Anomia, or word finding difficulty, is a prominent symptom of aphasia, which is often diagnosed through confrontation naming tasks. In the clinical setting, identification of correctness in responses to these naming tasks is useful for diagnosis, but currently is a labor-intensive process. This year`s Post-Stroke Speech Transcription Challenge provides an opportunity to explore ways of automating this process. In this work, we focus on Task B of the challenge, i.e. identification of response correctness. We study whether a simple aggregation of using the 1-best automatic speech recognition (ASR) output and acoustic features could help predict response correctness. This was motivated by the hypothesis that acoustic features could provide complementary information to the (imperfect) ASR transcripts. We trained several classifiers using various sets of acoustic features standard in speech processing literature in an attempt to improve over the 1-best ASR baseline. Results indicated that our approach to using the acoustic features did not beat the simple baseline, at least on this challenge dataset. This suggests that ASR robustness still plays a significant role in the correctness detection task, which has yet to benefit from acoustic features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,312 |
inproceedings | moell-etal-2022-speech | Speech Data Augmentation for Improving Phoneme Transcriptions of Aphasic Speech Using {W}av2{V}ec 2.0 for the {PSST} Challenge | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.8/ | Moell, Birger and O{'}Regan, Jim and Mehta, Shivam and Kirkland, Ambika and Lameris, Harm and Gustafson, Joakim and Beskow, Jonas | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 62--70 | As part of the PSST challenge, we explore how data augmentations, data sources, and model size affect phoneme transcription accuracy on speech produced by individuals with aphasia. We evaluate model performance in terms of feature error rate (FER) and phoneme error rate (PER). We find that data augmentations techniques, such as pitch shift, improve model performance. Additionally, increasing the size of the model decreases FER and PER. Our experiments also show that adding manually-transcribed speech from non-aphasic speakers (TIMIT) improves performance when Room Impulse Response is used to augment the data. The best performing model combines aphasic and non-aphasic data and has a 21.0{\%} PER and a 9.2{\%} FER, a relative improvement of 9.8{\%} compared to the baseline model on the primary outcome measurement. We show that data augmentation, larger model size, and additional non-aphasic data sources can be helpful in improving automatic phoneme recognition models for people with aphasia. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,313 |
inproceedings | yuan-etal-2022-data | Data Augmentation for the Post-Stroke Speech Transcription ({PSST}) Challenge: Sometimes Less Is More | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.9/ | Yuan, Jiahong and Cai, Xingyu and Church, Kenneth | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 71--79 | We employ the method of fine-tuning wav2vec2.0 for recognition of phonemes in aphasic speech. Our effort focuses on data augmentation, by supplementing data from both in-domain and out-of-domain datasets for training. We found that although a modest amount of out-of-domain data may be helpful, the performance of the model degrades significantly when the amount of out-of-domain data is much larger than in-domain data. Our hypothesis is that fine-tuning wav2vec2.0 with a CTC loss not only learns bottom-up acoustic properties but also top-down constraints. Therefore, out-of-domain data augmentation is likely to degrade performance if there is a language model mismatch between {\textquotedblleft}in{\textquotedblright} and {\textquotedblleft}out{\textquotedblright} domains. For in-domain audio without ground truth labels, we found that it is beneficial to exclude samples with less confident pseudo labels. Our final model achieves 16.7{\%} PER (phoneme error rate) on the validation set, without using a language model for decoding. The result represents a relative error reduction of 14{\%} over the baseline model trained without data augmentation. Finally, we found that {\textquotedblleft}canonicalized{\textquotedblright} phonemes are much easier to recognize than manually transcribed phonemes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,314 |
inproceedings | donati-strapparava-2022-coreds | {C}or{ED}s: A Corpus on Eating Disorders | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.10/ | Donati, Melissa and Strapparava, Carlo | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 80--85 | Eating disorders (EDs) constitute a widespread group of mental illnesses affecting the everyday life of many individuals in all age groups. One of the main difficulties in the diagnosis and treatment of these disorders is the interpersonal variability of symptoms and the variety of underlying psychological states that are not considered in traditional approaches. In order to gain a better understanding of these disorders, many studies have collected data from social media and analysed them from a computational perspective, but the resulting dataset were very limited and task-specific. Aiming to address this shortage by providing a dataset that could be easily adapted to different tasks, we built a corpus collecting ED-related and ED-unrelated comments from Reddit focusing on a limited number of topics (fitness, nutrition, etc.). To validate the effectiveness of the dataset, we evaluated the performance of two classifiers in distinguishing between ED-related and unrelated comments. The high-level accuracy of both classifiers indicates that ED-related texts are separable from texts on similar topics that do not address EDs. For explorative purposes, we also carried out a linguistic analysis of word class dominance in ED-related texts, whose results are consistent with the findings of psychological research on EDs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,315 |
inproceedings | pan-etal-2022-database | A Database of Multimodal Data to Construct a Simulated Dialogue Partner with Varying Degrees of Cognitive Health | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.11/ | Pan, Ruihao and Liu, Ziming and Yuan, Fengpei and Zare, Maryam and Zhao, Xiaopeng and Passonneau, Rebecca Jane | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 86--93 | An assistive robot that could communicate with dementia patients would have great social benefit. An assistive robot Pepper has been designed to administer Referential Communication Tasks (RCTs) to human subjects without dementia as a step towards an agent to administer RCTs to dementia patients, potentially for earlier diagnosis. Currently, Pepper follows a rigid RCT script, which affects the user experience. We aim to replace Pepper`s RCT script with a dialogue management approach, to generate more natural interactions with RCT subjects. A Partially Observable Markov Decision Process (POMDP) dialogue policy will be trained using reinforcement learning, using simulated dialogue partners. This paper describes two RCT datasets and a methodology for their use in creating a database that the simulators can access for training the POMDP policies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,316 |
inproceedings | saccone-trillocco-2022-segmentation | Segmentation of the Speech Flow for the Evaluation of Spontaneous Productions in Pathologies Affecting the Language Capacity. 4 Case Studies of Schizophrenia | Kokkinakis, Dimitrios and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Fraser, Kathleen C. | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.rapid-1.12/ | Saccone, Valentina and Trillocco, Simona | Proceedings of the RaPID Workshop - Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments - within the 13th Language Resources and Evaluation Conference | 94--99 | This paper aims to present a multi-level analysis of spoken language, which is carried out through Praat software for the analysis of speech in its prosodic aspects. The main object of analysis is the pathological speech of schizophrenic patients with a focus on pausing and its information structure. Spoken data (audio recordings in clinical settings; 4 case studies from CIPPS corpus) has been processed to create an implementable layer grid. The grid is an incremental annotation with layers dedicated to silent/sounding detection; orthographic transcription with the annotation of different vocal phenomena; Utterance segmentation; Information Units segmentation. The theoretical framework we are dealing with is the Language into Act Theory and its pragmatic and empirical studies on spontaneous spoken language. The core of the analysis is the study of pauses (signaled in the silent/sounding tier) starting from their automatic detection, then manually validated, and their classification based on duration and position inter/intra Turn and Utterance. In this respect, an interesting point arises: beyond the expected result of longer pauses in pathological schizophrenic than non-pathological, aside from the type of pause, analysis shows that pauses after Utterances are specific to pathological speech when {\ensuremath{>}}500 ms. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,317 |
inproceedings | li-etal-2022-exploring | Exploring the {GLIDE} model for Human Action Effect Prediction | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.1/ | Li, Fangjun and Hogg, David C. and Cohn, Anthony G. | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 1--5 | We address the following action-effect prediction task. Given an image depicting an initial state of the world and an action expressed in text, predict an image depicting the state of the world following the action. The prediction should have the same scene context as the input image. We explore the use of the recently proposed GLIDE model for performing this task. GLIDE is a generative neural network that can synthesize (inpaint) masked areas of an image, conditioned on a short piece of text. Our idea is to mask-out a region of the input image where the effect of the action is expected to occur. GLIDE is then used to inpaint the masked region conditioned on the required action. In this way, the resulting image has the same background context as the input image, updated to show the effect of the action. We give qualitative results from experiments using the EPIC dataset of ego-centric videos labelled with actions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,319 |
inproceedings | tran-etal-2022-multimodal | Do Multimodal Emotion Recognition Models Tackle Ambiguity? | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.2/ | Tran, H{\'e}l{\`e}ne and Falih, Issam and Goblet, Xavier and Mephu Nguifo, Engelbert | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 6--11 | Most databases used for emotion recognition assign a single emotion to data samples. This does not match with the complex nature of emotions: we can feel a wide range of emotions throughout our lives with varying degrees of intensity. We may even experience multiple emotions at once. Furthermore, each person physically expresses emotions differently, which makes emotion recognition even more challenging: we call this emotional ambiguity. This paper investigates the problem as a review of ambiguity in multimodal emotion recognition models. To lay the groundwork, the main representations of emotions along with solutions for incorporating ambiguity are described, followed by a brief overview of ambiguity representation in multimodal databases. Thereafter, only models trained on a database that incorporates ambiguity have been studied in this paper. We conclude that although databases provide annotations with ambiguity, most of these models do not fully exploit them, showing that there is still room for improvement in multimodal emotion recognition systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,320 |
inproceedings | loc-etal-2022-development | Development of a {M}ulti{M}odal Annotation Framework and Dataset for Deep Video Understanding | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.3/ | Loc, Erika and Curtis, Keith and Awad, George and Rajput, Shahzad and Soboroff, Ian | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 12--16 | In this paper we introduce our approach and methods for collecting and annotating a new dataset for deep video understanding. The proposed dataset is composed of 3 seasons (15 episodes) of the BBC Land Girls TV Series in addition to 14 Creative Commons movies with a total duration of 28.5 hours. The main contribution of this paper is a novel annotation framework on the movie and scene levels to support an automatic query generation process that can capture the high-level movie features (e.g. how characters and locations are related to each other) as well as fine grained scene-level features (e.g. character interactions, natural language descriptions, and sentiments). Movie-level annotations include constructing a global static knowledge graph (KG) to capture major relationships, while the scene-level annotations include constructing a sequence of knowledge graphs (KGs) to capture fine-grained features. The annotation framework supports generating multiple query types. The objective of the framework is to provide a guide to annotating long duration videos to support tasks and challenges in the video and multimedia understanding domains. These tasks and challenges can support testing automatic systems on their ability to learn and comprehend a movie or long video in terms of actors, entities, events, interactions and their relationship to each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,321
inproceedings | mori-etal-2022-cognitive | Cognitive States and Types of Nods | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.4/ | Mori, Taiga and Jokinen, Kristiina and Den, Yasuharu | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 17--25 | In this paper we will study how different types of nods are related to the cognitive states of the listener. The distinction is made between nods with movement starting upwards (up-nods) and nods with movement starting downwards (down-nods) as well as between single or repetitive nods. The data is from Japanese multiparty conversations, and the results accord with the previous findings indicating that up-nods are related to the change in the listener`s cognitive state after hearing the partner`s contribution, while down-nods convey the meaning that the listener`s cognitive state is not changed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,322 |
inproceedings | ilinykh-etal-2022-examining | Examining the Effects of Language-and-Vision Data Augmentation for Generation of Descriptions of Human Faces | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.5/ | Ilinykh, Nikolai and {\v{C}}erniavski, Rafal and Sventickait{\.{e}}, Eva El{\v{z}}bieta and Buzait{\.{e}}, Viktorija and Dobnik, Simon | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 26--40 | We investigate how different augmentation techniques on both textual and visual representations affect the performance of the face description generation model. Specifically, we provide the model with either original images, sketches of faces, facial composites or distorted images. In addition, on the language side, we experiment with different methods to augment the original dataset with paraphrased captions, which are semantically equivalent to the original ones, but differ in terms of their form. We also examine if augmenting the dataset with descriptions from a different domain (e.g., image captions of real-world images) has an effect on the performance of the models. We train models on different combinations of visual and linguistic features and perform both (i) automatic evaluation of generated captions and (ii) examination of how useful different visual features are for the task of facial feature classification. Our results show that although original images encode the best possible representation for the task, the model trained on sketches can still perform relatively well. We also observe that augmenting the dataset with descriptions from a different domain can boost performance of the model. We conclude that face description generation systems are more susceptible to language rather than vision data augmentation. Overall, we demonstrate that face caption generation models display a strong imbalance in the utilisation of language and vision modalities, indicating a lack of proper information fusion. We also describe ethical implications of our study and argue that future work on human face description generation should create better, more representative datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,323 |
inproceedings | tanti-etal-2022-face2text | {F}ace2{T}ext revisited: Improved data set and baseline results | Paggio, Patrizia and Gatt, Albert and Tanti, Marc | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.pvlam-1.6/ | Tanti, Marc and Abdilla, Shaun and Muscat, Adrian and Borg, Claudia and Farrugia, Reuben A. and Gatt, Albert | Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind | 41--47 | Current image description generation models do not transfer well to the task of describing human faces. To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set. We describe the properties of this data set, and present results from a face description generator trained on it, which explores the feasibility of using transfer learning from VGGFace/ResNet CNNs. Comparisons are drawn through both automated metrics and human evaluation by 76 English-speaking participants. The descriptions generated by the VGGFace-LSTM + Attention model are closest to the ground truth according to human evaluation whilst the ResNet-LSTM + Attention model obtained the highest CIDEr and CIDEr-D results (1.252 and 0.686 respectively). Together, the new data set and these experimental results provide data and baselines for future work in this area. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,324 |
inproceedings | klymenko-etal-2022-differential | Differential Privacy in Natural Language Processing: The Story So Far | Feyisetan, Oluwaseyi and Ghanavati, Sepideh and Thaine, Patricia and Habernal, Ivan and Mireshghallah, Fatemehsadat | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.privatenlp-1.1/ | Klymenko, Oleksandra and Meisenbacher, Stephen and Matthes, Florian | Proceedings of the Fourth Workshop on Privacy in Natural Language Processing | 1--11 | As the tide of Big Data continues to influence the landscape of Natural Language Processing (NLP), the utilization of modern NLP methods has grounded itself in this data, in order to tackle a variety of text-based tasks. The data used by these methods can without a doubt include private or otherwise personally identifiable information. As such, the question of privacy in NLP has gained fervor in recent years, coinciding with the development of new Privacy-Enhancing Technologies (PETs). Among these PETs, Differential Privacy boasts several desirable qualities in the conversation surrounding data privacy. Naturally, the question becomes whether Differential Privacy is applicable in the largely unstructured realm of NLP. This topic has sparked novel research, which is unified in one basic goal: how can one adapt Differential Privacy to NLP methods? This paper aims to summarize the vulnerabilities addressed by Differential Privacy, the current thinking, and above all, the crucial next steps that must be considered. | null | null | 10.18653/v1/2022.privatenlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,326
inproceedings | petren-bach-hansen-etal-2022-impact | The Impact of Differential Privacy on Group Disparity Mitigation | Feyisetan, Oluwaseyi and Ghanavati, Sepideh and Thaine, Patricia and Habernal, Ivan and Mireshghallah, Fatemehsadat | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.privatenlp-1.2/ | Petren Bach Hansen, Victor and Tejaswi Neerkaje, Atula and Sawhney, Ramit and Flek, Lucie and Sogaard, Anders | Proceedings of the Fourth Workshop on Privacy in Natural Language Processing | 12--12 | The performance cost of differential privacy has, for some applications, been shown to be higher for minority groups; fairness, conversely, has been shown to disproportionally compromise the privacy of members of such groups. Most work in this area has been restricted to computer vision and risk assessment. In this paper, we evaluate the impact of differential privacy on fairness across four tasks, focusing on how attempts to mitigate privacy violations and between-group performance differences interact: Does privacy inhibit attempts to ensure fairness? To this end, we train (epsilon, delta)-differentially private models with empirical risk minimization and group distributionally robust training objectives. Consistent with previous findings, we find that differential privacy increases between-group performance differences in the baseline setting but, more interestingly, differential privacy reduces between-group performance differences in the robust setting. We explain this by reinterpreting differential privacy as regularization. | null | null | 10.18653/v1/2022.privatenlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,327
inproceedings | elmahdy-etal-2022-privacy | Privacy Leakage in Text Classification: A Data Extraction Approach | Feyisetan, Oluwaseyi and Ghanavati, Sepideh and Thaine, Patricia and Habernal, Ivan and Mireshghallah, Fatemehsadat | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.privatenlp-1.3/ | Elmahdy, Adel and A. Inan, Huseyin and Sim, Robert | Proceedings of the Fourth Workshop on Privacy in Natural Language Processing | 13--20 | Recent work has demonstrated the successful extraction of training data from generative language models. However, it is not evident whether such extraction is feasible in text classification models since the training objective is to predict the class label as opposed to next-word prediction. This poses an interesting challenge and raises an important question regarding the privacy of training data in text classification settings. Therefore, we study the potential privacy leakage in the text classification domain by investigating the problem of unintended memorization of training data that is not pertinent to the learning task. We propose an algorithm to extract missing tokens of a partial text by exploiting the likelihood of the class label provided by the model. We test the effectiveness of our algorithm by inserting canaries into the training set and attempting to extract tokens in these canaries post-training. In our experiments, we demonstrate that successful extraction is possible to some extent. This can also be used as an auditing strategy to assess any potential unauthorized use of personal data without consent. | null | null | 10.18653/v1/2022.privatenlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,328
inproceedings | ponomareva-etal-2022-training-text | Training Text-to-Text Transformers with Privacy Guarantees | Feyisetan, Oluwaseyi and Ghanavati, Sepideh and Thaine, Patricia and Habernal, Ivan and Mireshghallah, Fatemehsadat | jul | 2022 | Seattle, United States | Association for Computational Linguistics | https://aclanthology.org/2022.privatenlp-1.4/ | Ponomareva, Natalia and Bastings, Jasmijn and Vassilvitskii, Sergei | Proceedings of the Fourth Workshop on Privacy in Natural Language Processing | 21--21 | Recent advances in NLP often stem from large transformer-based pre-trained models, which rapidly grow in size and use more and more training data. Such models are often released to the public so that end users can fine-tune them on a task dataset. While it is common to treat pre-training data as public, it may still contain personally identifiable information (PII), such as names, phone numbers, and copyrighted material. Recent findings show that the capacity of these models allows them to memorize parts of the training data, and suggest differentially private (DP) training as a potential mitigation. While there is recent work on DP fine-tuning of NLP models, the effects of DP pre-training are less well understood: it is not clear how downstream performance is affected by DP pre-training, and whether DP pre-training mitigates some of the memorization concerns. We focus on T5 and show that by using recent advances in JAX and XLA we can train models with DP that do not suffer a large drop in pre-training utility, nor in training speed, and can still be fine-tuned to high accuracy on downstream tasks (e.g. GLUE). Moreover, we show that T5's span corruption is a good defense against data memorization. | null | null | 10.18653/v1/2022.privatenlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,329
inproceedings | tramarin-strapparava-2022-newyes | {N}ew{Y}e{S}: A Corpus of New Year`s Speeches with a Comparative Analysis | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.1/ | Tramarin, Anna and Strapparava, Carlo | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 1--7 | This paper introduces the NewYeS corpus, which contains the Christmas messages and New Year`s speeches held at the end of the year by the heads of state of different European countries (namely Denmark, France, Italy, Norway, Spain and the United Kingdom). The corpus was collected via web scraping of the speech transcripts available online. A comparative analysis was conducted to examine some of the cultural differences showing through the texts, namely a frequency distribution analysis of the term {\textquotedblleft}God{\textquotedblright} and the identification of the three most frequent content words per year, with a focus on years in which significant historical events happened. An analysis of positive and negative emotion scores, examined along with the frequency of religious references, was carried out for those countries whose languages are supported by LIWC, a tool for sentiment analysis. The corpus is available for further analyses, both comparative (across countries) and diachronic (over the years). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,331 |
inproceedings | sanders-van-den-bosch-2022-correlating | Correlating Political Party Names in Tweets, Newspapers and Election Results | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.2/ | Sanders, Eric and van den Bosch, Antal | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 8--15 | Twitter has been used as a textual resource to attempt to predict the outcome of elections for over a decade. A body of literature suggests that this is not consistently possible. In this paper we test the hypothesis that mentions of political parties in tweets are better correlated with the appearance of party names in newspapers than to the intention of the tweeter to vote for that party. Five Dutch national elections are used in this study. We find only a small positive, negligible difference in Pearson`s correlation coefficient as well as in the absolute error of the relation between tweets and news, and between tweets and elections. However, we find a larger correlation and a smaller absolute error between party mentions in newspapers and the outcome of the elections in four of the five elections. This suggests that newspapers are a better starting point for predicting the election outcome than tweets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,332 |
inproceedings | barriere-etal-2022-debating | Debating {E}urope: A Multilingual Multi-Target Stance Classification Dataset of Online Debates | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.3/ | Barriere, Valentin and Balahur, Alexandra and Ravenet, Brian | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 16--21 | We present a new dataset of online debates in English, annotated with stance. The dataset was scraped from the {\textquotedblleft}\textit{Debating Europe}{\textquotedblright} platform, where users exchange opinions over different subjects related to the European Union. The dataset is composed of 2600 comments pertaining to 18 debates related to the {\textquotedblleft}\textit{European Green Deal}{\textquotedblright}, in a conversational setting. After presenting the dataset and the annotated sub-part, we pre-train a model for a multilingual stance classification over the X-stance dataset before fine-tuning it over our dataset, and vice-versa. The fine-tuned models are shown to improve stance classification performance on each of the datasets, even though they have different languages, topics and targets. Subsequently, we propose to enhance the performances over {\textquotedblleft}\textit{Debating Europe}{\textquotedblright} with an interaction-aware model, taking advantage of the online debate structure of the platform. We also propose a semi-supervised self-training method to take advantage of the imbalanced and unlabeled data from the whole website, leading to a final improvement of accuracy by 3.4{\%} over a Vanilla XLM-R model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,333 |
inproceedings | lai-etal-2022-unsupervised | An Unsupervised Approach to Discover Media Frames | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.4/ | Lai, Sha and Jiang, Yanru and Guo, Lei and Betke, Margrit and Ishwar, Prakash and Wijaya, Derry Tanti | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 22--31 | Media framing refers to highlighting certain aspects of an issue in the news to promote a particular interpretation to the audience. Supervised learning has often been used to recognize frames in news articles, requiring a known pool of frames for a particular issue, which must be identified by communication researchers through thorough manual content analysis. In this work, we devise an unsupervised learning approach to discover the frames in news articles automatically. Given a set of news articles for a given issue, e.g., gun violence, our method first extracts frame elements from these articles using related Wikipedia articles and the Wikipedia category system. It then uses a community detection approach to identify frames from these frame elements. We discuss the effectiveness of our approach by comparing the frames it generates in an unsupervised manner to the domain-expert-derived frames for the issue of gun violence, for which a supervised learning model for frame recognition exists. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,334
inproceedings | baran-etal-2022-electoral | Electoral Agitation Dataset: The Use Case of the {P}olish Election | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.5/ | Baran, Mateusz and W{\'o}jcik, Mateusz and Kolebski, Piotr and Bernaczyk, Micha{\l} and Rajda, Krzysztof and Augustyniak, Lukasz and Kajdanowicz, Tomasz | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 32--36 | The popularity of social media makes politicians use it for political advertisement. Therefore, social media is full of electoral agitation (electioneering), especially during the election campaigns. The election administration cannot track the spread and quantity of messages that count as agitation under the election code. It addresses a crucial problem, while also uncovering a niche that has not been effectively targeted so far. Hence, we present the first publicly open data set for detecting electoral agitation in the Polish language. It contains 6,112 human-annotated tweets tagged with four legally conditioned categories. We achieved a 0.66 inter-annotator agreement (Cohen`s kappa score). An additional annotator resolved the mismatches between the first two improving the consistency and complexity of the annotation process. The newly created data set was used to fine-tune a Polish Language Model called HerBERT (achieving a 68{\%} F1 score). We also present a number of potential use cases for such data sets and models, enriching the paper with an analysis of the Polish 2020 Presidential Election on Twitter. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,335 |
inproceedings | dourado-sa-etal-2022-enhancing | Enhancing Geocoding of Adjectival Toponyms With Heuristics | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.6/ | Dourado S{\'a}, Breno and Coelho da Silva, Ticiana and Fernandes de Macedo, Jose Antonio | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 37--45 | Unstructured text documents such as news and blogs often present references to places. Those references, called toponyms, can be used in various applications like disaster warning and touristic planning. However, obtaining the correct coordinates for toponyms, called geocoding, is not easy since it`s common for places to have the same name as other locations. The process becomes even more challenging when toponyms appear in adjectival form, as they are different from the place`s actual name. This paper addresses the geocoding task and aims to improve, through a heuristic approach, the process for adjectival toponyms. So first, a baseline geocoder is defined through experimenting with a set of heuristics. After that, the baseline is enhanced by adding a normalization step to map adjectival toponyms to their noun form at the beginning of the geocoding process. The results show improved performance for the enhanced geocoder compared to the baseline and other geocoders. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,336 |
inproceedings | durlich-etal-2022-cause | Cause and Effect in Governmental Reports: Two Data Sets for Causality Detection in {S}wedish | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.7/ | D{\"u}rlich, Luise and Reimann, Sebastian and Finnveden, Gustav and Nivre, Joakim and Stymne, Sara | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 46--55 | Causality detection is the task of extracting information about causal relations from text. It is an important task for different types of document analysis, including political impact assessment. We present two new data sets for causality detection in Swedish. The first data set is annotated with binary relevance judgments, indicating whether a sentence contains causality information or not. In the second data set, sentence pairs are ranked for relevance with respect to a causality query, containing a specific hypothesized cause and/or effect. Both data sets are carefully curated and mainly intended for use as test data. We describe the data sets and their annotation, including detailed annotation guidelines. In addition, we present pilot experiments on cross-lingual zero-shot and few-shot causality detection, using training data from English and German. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,337
inproceedings | baran-etal-2022-twitter | Does {T}witter know your political views? {POL}i{T}weets dataset and semi-automatic method for political leaning discovery | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.8/ | Baran, Joanna and Kajstura, Micha{\l} and Ziolkowski, Maciej and Rajda, Krzysztof | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 56--61 | Every day, the world is flooded by millions of messages and statements posted on Twitter or Facebook. Social media platforms try to protect users' personal data, but there still is a real risk of misuse, including election manipulation. Did you know that only 10 posts addressing important or controversial topics for society are enough to predict one`s political affiliation with a 0.85 F1-score? To examine this phenomenon, we created a novel universal method of semi-automated political leaning discovery. It relies on a heuristic data annotation procedure, which was evaluated to achieve 0.95 agreement with human annotators (counted as an accuracy metric). We also present POLiTweets - the first publicly open Polish dataset for political affiliation discovery in a multi-party setup, consisting of over 147k tweets from almost 10k Polish-writing users annotated heuristically and almost 40k tweets from 166 users annotated manually as a test set. We used our data to study the aspects of domain shift in the context of topics and the type of content writers - ordinary citizens vs. professional politicians. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,338
inproceedings | abdine-etal-2022-political | Political Communities on {T}witter: Case Study of the 2022 {F}rench Presidential Election | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.9/ | Abdine, Hadi and Guo, Yanzhu and Rennard, Virgile and Vazirgiannis, Michalis | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 62--71 | With the significant increase in users on social media platforms, a new means of political campaigning has appeared. Twitter and Facebook are now notable campaigning tools during elections. Indeed, the candidates and their parties now take to the internet to interact and spread their ideas. In this paper, we aim to identify political communities formed on Twitter during the 2022 French presidential election and analyze each respective community. We create a large-scale Twitter dataset containing 1.2 million users and 62.6 million tweets that mention keywords relevant to the election. We perform community detection on a retweet graph of users and propose an in-depth analysis of the stance of each community. Finally, we attempt to detect offensive tweets and automatic bots, comparing across communities in order to gain insight into each candidate`s supporter demographics and online campaign strategy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,339 |
inproceedings | adhya-sanyal-2022-indian | What Does the {I}ndian Parliament Discuss? An Exploratory Analysis of the Question Hour in the Lok Sabha | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.10/ | Adhya, Suman and Sanyal, Debarshi Kumar | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 72--78 | The TCPD-IPD dataset is a collection of questions and answers discussed in the Lower House of the Parliament of India during the Question Hour between 1999 and 2019. Although it is difficult to analyze such a huge collection manually, modern text analysis tools can provide a powerful means to navigate it. In this paper, we perform an exploratory analysis of the dataset. In particular, we present insightful corpus-level statistics and perform a more detailed analysis of three subsets of the dataset. In the latter analysis, the focus is on understanding the temporal evolution of topics using a dynamic topic model. We observe that the parliamentary conversation indeed mirrors the political and socio-economic tensions of each period. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,340 |
inproceedings | dufraisse-etal-2022-dont | Don`t Burst Blindly: For a Better Use of Natural Language Processing to Fight Opinion Bubbles in News Recommendations | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.11/ | Dufraisse, Evan and Treuillier, C{\'e}lina and Brun, Armelle and Tourille, Julien and Castagnos, Sylvain and Popescu, Adrian | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 79--85 | Online news consumption plays an important role in shaping the political opinions of citizens. The news is often served by recommendation algorithms, which adapt content to users' preferences. Such algorithms can lead to political polarization as the societal effects of the recommended content and recommendation design are disregarded. We posit that biases appear, at least in part, due to a weak entanglement between natural language processing and recommender systems, both of which are at work in the diffusion and personalization of online information. We assume that both diversity and acceptability of recommended content would benefit from such a synergy. We discuss the limitations of current approaches as well as promising leads of opinion-mining integration for the political news recommendation process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,341
inproceedings | szwoch-etal-2022-creation | Creation of {P}olish Online News Corpus for Political Polarization Studies | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.12/ | Szwoch, Joanna and Staszkow, Mateusz and Rzepka, Rafal and Araki, Kenji | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 86--90 | In this paper we describe a Polish news corpus as an attempt to create a filtered, organized and representative set of texts coming from contemporary online press articles from two major Polish TV news providers: commercial TVN24 and state-owned TVP Info. The process consists of web scraping, data cleaning and formatting. A random sample was selected from prepared data to perform a classification task. The random forest achieved the best prediction results out of all considered models. We believe that this dataset is a valuable contribution to existing Polish language corpora as online news are considered to be formal and relatively mistake-free and, therefore, a reliable source of correct written language, unlike other online platforms such as blogs or social media. Furthermore, to our knowledge, such a corpus from this period of time has not been created before. In the future we would like to expand this dataset with articles coming from other online news providers, repeat the classification task on a bigger scale, utilizing other algorithms. Our data analysis outcomes might be a relevant basis to improve research on political polarization and propaganda techniques in the media. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,342
inproceedings | cauzinille-etal-2022-annotation | Annotation of expressive dimensions on a multimodal {F}rench corpus of political interviews | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.13/ | Cauzinille, Jules and Evrard, Marc and Kiselov, Nikita and Rilliard, Albert | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 91--97 | We present a French corpus of political interviews labeled at the utterance level according to expressive dimensions such as Arousal. This corpus consists of 7.5 hours of high-quality audio-visual recordings with transcription. At the time of this publication, 1 hour of speech was segmented into short utterances, each manually annotated in Arousal. Our segmentation approach differs from similar corpora and allows us to perform an automatic Arousal prediction baseline by building a speech-based classification model. Although this paper focuses on the acoustic expression of Arousal, it paves the way for future work on conflictual and hostile expression recognition as well as multimodal architectures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,343 |
inproceedings | simon-etal-2022-transcasm | {T}rans{C}asm: A Bilingual Corpus of Sarcastic Tweets | Afli, Haithem and Alam, Mehwish and Bouamor, Houda and Casagran, Cristina Blasi and Boland, Colleen and Ghannay, Sahar | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.politicalnlp-1.14/ | Simon, Desline and Castilho, Sheila and Lohar, Pintu and Afli, Haithem | Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences | 98--103 | Sarcasm is extensively used in User Generated Content (UGC) in order to express one`s discontent, especially through blogs, forums, or social media such as Twitter. Several works have attempted to detect and analyse sarcasm in UGC. However, the lack of freely available corpora in this field makes the task even more difficult. In this work, we present {\textquotedblleft}TransCasm{\textquotedblright} corpus, a parallel corpus of sarcastic tweets translated from English into French along with their non-sarcastic representations. To build the bilingual corpus of sarcasm, we select the {\textquotedblleft}SIGN{\textquotedblright} corpus, a monolingual data set of sarcastic tweets and their non-sarcastic interpretations, created by (Peled and Reichart, 2017). We propose to define linguistic guidelines for developing {\textquotedblleft}TransCasm{\textquotedblright} which is the first ever bilingual corpus of sarcastic tweets. In addition, we utilise {\textquotedblleft}TransCasm{\textquotedblright} for building a binary sarcasm classifier in order to identify whether a tweet is sarcastic or not. Our experiment reveals that the sarcasm classifier achieves 61{\%} accuracy on detecting sarcasm in tweets. {\textquotedblleft}TransCasm{\textquotedblright} is now freely available online and is ready to be explored for further research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,344 |
inproceedings | ogrodniczuk-etal-2022-parlamint | {P}arla{M}int {II}: The Show Must Go On | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.1/ | Ogrodniczuk, Maciej and Osenova, Petya and Erjavec, Toma{\v{z}} and Fi{\v{s}}er, Darja and Ljube{\v{s}}i{\'c}, Nikola and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Kopp, Maty{\'a}{\v{s}} and Meden, Katja | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 1--6 | In ParlaMint I, a CLARIN-ERIC supported project in pandemic times, a set of comparable and uniformly annotated multilingual corpora for 17 national parliaments was developed and released in 2021. For 2022 and 2023, the project has been extended to ParlaMint II, again with the CLARIN ERIC financial support, in order to enhance the existing corpora with new data and metadata; upgrade the XML schema; add corpora for 10 new parliaments; provide more application scenarios and carry out additional experiments. The paper reports on these planned steps, including some that have already been taken, and outlines future plans. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,346
inproceedings | blaette-etal-2022-germaparl | How {G}erma{P}arl Evolves: Improving Data Quality by Reproducible Corpus Preparation and User Involvement | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.2/ | Blaette, Andreas and Rakers, Julia and Leonhardt, Christoph | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 7--15 | The development and curation of large-scale corpora of plenary debates requires not only care and attention to detail when the data is created but also effective means of sustainable quality control. This paper makes two contributions: Firstly, it presents an updated version of the GermaParl corpus of parliamentary debates in the German \textit{Bundestag}. Secondly, it shows how the corpus preparation pipeline is designed to serve the quality of the resource by facilitating effective community involvement. Centered around a workflow which combines reproducibility, transparency and version control, the pipeline allows for continuous improvements to the corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,347
inproceedings | puren-etal-2022-history | Between History and Natural Language Processing: Study, Enrichment and Online Publication of {F}rench Parliamentary Debates of the Early Third Republic (1881-1899) | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.3/ | Puren, Marie and Pellet, Aur{\'e}lien and Bourgeois, Nicolas and Vernus, Pierre and Lebreton, Fanny | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 16--24 | We present the AGODA (Analyse s{\'e}mantique et Graphes relationnels pour l`Ouverture des D{\'e}bats {\`a} l`Assembl{\'e}e nationale) project, which aims to create a platform for consulting and exploring digitised French parliamentary debates (1881-1940) available in the digital library of the National Library of France. This project brings together historians and NLP specialists: parliamentary debates are indeed an essential source for French history of the contemporary period, but also for linguistics. This project therefore aims to produce a corpus of texts that can be easily exploited with computational methods, and that respect the TEI standard. Ancient parliamentary debates are also an excellent case study for the development and application of tools for publishing and exploring large historical corpora. In this paper, we present the steps necessary to produce such a corpus. We detail the processing and publication chain of these documents, in particular by mentioning the problems linked to the extraction of texts from digitised images. We also introduce the first analyses that we have carried out on this corpus with {\textquotedblleft}bag-of-words{\textquotedblright} techniques not too sensitive to OCR quality (namely topic modelling and word embedding). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,348 |
inproceedings | menard-aleksandrova-2022-french | A {F}rench Corpus of {Q}u{\'e}bec`s Parliamentary Debates | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.4/ | M{\'e}nard, Pierre Andr{\'e} and Aleksandrova, Desislava | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 25--32 | Parliamentary debates offer a window on political stances as well as a repository of linguistic and semantic knowledge. They provide insights and reasons for laws and regulations that impact electors in their everyday life. One such resource is the transcribed debates available online from the Assembl{\'e}e Nationale du Qu{\'e}bec (ANQ). This paper describes the effort to convert the online ANQ debates from various HTML formats into a standardized ParlaMint TEI annotated corpus and to enrich it with annotations extracted from related unstructured members and political parties list. The resulting resource includes 88 years of debates over a span of 114 years with more than 33.3 billion words. The addition of linguistic annotations is detailed as well as a quantitative analysis of part-of-speech tags and distribution of utterances across the corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,349 |
inproceedings | blaxill-2022-parliamentary | Parliamentary Corpora and Research in Political Science and Political History | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.5/ | Blaxill, Luke | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 33--34 | This keynote reflects on some of the barriers to digitised parliamentary resources achieving greater impact as research tools in political history and political science. As well as providing a view on researchers' priorities for resource enhancement, I also argue that one of the main challenges for historians and political scientists is simply establishing how to make best use of these datasets through asking new research questions and through understanding and embracing unfamiliar and controversial methods that enable their analysis. I suggest parliamentary resources should be designed and presented to support pioneers trying to publish in often sceptical and traditional fields. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,350
inproceedings | ogrodniczuk-etal-2022-error | Error Correction Environment for the {P}olish Parliamentary Corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.6/ | Ogrodniczuk, Maciej and Rudolf, Micha{\l} and W{\'o}jtowicz, Beata and Janicka, Sonia | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 35--38 | The paper introduces the environment for detecting and correcting various kinds of errors in the Polish Parliamentary Corpus. After performing a language model-based error detection experiment which resulted in too many false positives, a simpler rule-based method was introduced and is currently used in the process of manual verification of corpus texts. The paper presents types of errors detected in the corpus, the workflow of the correction process and the tools newly implemented for this purpose. To facilitate comparison of a target corpus XML file with its usually graphical PDF source, a new mechanism for inserting PDF page markers into XML was developed and is used for displaying a single source page corresponding to a given place in the resulting XML directly in the error correction environment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,351 |
inproceedings | agnoloni-etal-2022-clustering | Clustering Similar Amendments at the {I}talian Senate | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.7/ | Agnoloni, Tommaso and Marchetti, Carlo and Battistoni, Roberto and Briotti, Giuseppe | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 39--46 | In this paper we describe an experiment for the application of text clustering techniques to dossiers of amendments to proposed legislation discussed in the Italian Senate. The aim is to assist the Senate staff in the detection of groups of amendments similar in their textual formulation in order to schedule their simultaneous voting. Experiments show that the exploitation (extraction, annotation and normalization) of domain features is crucial to improve the clustering performance in many problematic cases not properly dealt with by standard approaches. The similarity engine was implemented and integrated as an experimental feature in the internal application used for the management of amendments in the Senate Assembly and Committees. Thanks to the Open Data strategy pursued by the Senate for several years, all documents and data produced by the institution are publicly available for reuse in open formats. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,352 |
inproceedings | van-heusden-etal-2022-entity | Entity Linking in the {P}arla{M}int Corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.8/ | van Heusden, Ruben and Marx, Maarten and Kamps, Jaap | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 47--55 | The ParlaMint corpus is a multilingual corpus consisting of the parliamentary debates of seventeen European countries over a span of roughly five years. The automatically annotated versions of these corpora provide us with a wealth of linguistic information, including Named Entities. In order to further increase the research opportunities that can be created with this corpus, the linking of Named Entities to a knowledge base is a crucial step. If this can be done successfully and accurately, a lot of additional information can be gathered from the entities, such as political stance and party affiliation, not only within countries but also between the parliaments of different countries. However, due to the nature of the ParlaMint dataset, this entity linking task is challenging. In this paper, we investigate the task of linking entities from ParlaMint in different languages to a knowledge base, and evaluating the performance of three entity linking methods. We will be using DBPedia spotlight, WikiData and YAGO as the entity linking tools, and evaluate them on local politicians from several countries. We discuss two problems that arise with the entity linking in the ParlaMint corpus, namely inflection, and aliasing or the existence of name variants in text. This paper provides a first baseline on entity linking performance on multiple multilingual parliamentary debates, describes the problems that occur when attempting to link entities in ParlaMint, and makes a first attempt at tackling the aforementioned problems with existing methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,353 |
inproceedings | yim-etal-2022-visualizing | Visualizing Parliamentary Speeches as Networks: the {DYLEN} Tool | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.9/ | Yim, Seung-bin and W{\"u}nsche, Katharina and Cetin, Asil and Neidhardt, Julia and Baumann, Andreas and Wissik, Tanja | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 56--60 | In this paper, we present a web based interactive visualization tool for lexical networks based on the utterances of Austrian Members of Parliament. The tool is designed to compare two networks in parallel and is composed of graph visualization, node-metrics comparison and time-series comparison components that are interconnected with each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,354
inproceedings | kurtoglu-eskisar-coltekin-2022-emotions | Emotions Running High? A Synopsis of the state of {T}urkish Politics through the {P}arla{M}int Corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.10/ | Kurto{\u{g}}lu Eski{\c{s}}ar, G{\"u}l M. and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 61--70 | We present the initial results of our quantitative study on emotions (Anger, Disgust, Fear, Happiness, Sadness and Surprise) in Turkish parliament (2011{--}2021). We use machine learning models to assign emotion scores to all speeches delivered in the parliament during this period, and observe any changes to them in relation to major political and social events in Turkey. We highlight a number of interesting observations, such as anger being the dominant emotion in parliamentary speeches, and the ruling party showing more stable emotions compared to the political opposition, despite its depiction as a populist party in the literature. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,355
inproceedings | navarretta-etal-2022-immigration | Immigration in the Manifestos and Parliament Speeches of {D}anish Left and Right Wing Parties between 2009 and 2020 | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.11/ | Navarretta, Costanza and Haltrup Hansen, Dorte and Jongejan, Bart | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 71--80 | The paper presents a study of how seven Danish left and right wing parties addressed immigration in their 2011, 2015 and 2019 manifestos and in their speeches in the Danish Parliament from 2009 to 2020. The annotated manifestos are produced by the Comparative Manifesto Project, while the parliamentary speeches annotated with policy areas (subjects) have been recently released under CLARIN-DK. In the paper, we investigate how often the seven parties addressed immigration in the manifestos and parliamentary debates, and we analyse both datasets after having applied NLP tools to them. A sentiment analysis tool was run on the manifestos and its results were compared with the manifestos' annotations, while topic modeling was applied to the parliamentary speeches in order to outline central themes in the immigration debates. Many of the resulting topic groups are related to cultural, religious and integration aspects which were heavily debated by politicians and media when discussing immigration policy during the past decade. Our analyses also show differences and similarities between parties and indicate how the 2015 immigrant crisis is reflected in the two types of data. Finally, we discuss advantages and limitations of our quantitative and tool-based analyses. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,356 |
inproceedings | skubic-fiser-2022-parliamentary | Parliamentary Discourse Research in Sociology: Literature Review | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.12/ | Skubic, Jure and Fi{\v{s}}er, Darja | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 81--91 | One of the major sociological research interests has always been the study of political discourse. This literature review gives an overview of the most prominent topics addressed and the most popular methods used by sociologists. We identify the commonalities and the differences of the approaches established in sociology with corpus-driven approaches in order to establish how parliamentary corpora and corpus-based approaches could be successfully integrated in sociological research. We also highlight how parliamentary corpora could be made even more useful for sociologists. Keywords: parliamentary discourse, sociology, parliamentary corpora | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,357 |
inproceedings | klamm-etal-2022-frameast | {F}rame{AS}t: A Framework for Second-level Agenda Setting in Parliamentary Debates through the Lense of Comparative Agenda Topics | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.13/ | Klamm, Christopher and Rehbein, Ines and Ponzetto, Simone Paolo | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 92--100 | This paper presents a framework for studying second-level political agenda setting in parliamentary debates, based on the selection of policy topics used by political actors to discuss a specific issue on the parliamentary agenda. For example, the COVID-19 pandemic as an agenda item can be contextualised as a health issue or as a civil rights issue, as a matter of macroeconomics or can be discussed in the context of social welfare. Our framework allows us to observe differences regarding how different parties discuss the same agenda item by emphasizing different topical aspects of the item. We apply and evaluate our framework on data from the German Bundestag and discuss the merits and limitations of our approach. In addition, we present a new annotated data set of parliamentary debates, following the coding schema of policy topics developed in the Comparative Agendas Project (CAP), and release models for topic classification in parliamentary debates. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,358 |
inproceedings | bestgen-2022-comparing | Comparing Formulaic Language in Human and Machine Translation: Insight from a Parliamentary Corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.14/ | Bestgen, Yves | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 101--106 | A recent study has shown that, compared to human translations, neural machine translations contain more strongly-associated formulaic sequences made of relatively high-frequency words, but far less strongly-associated formulaic sequences made of relatively rare words. These results were obtained on the basis of translations of quality newspaper articles in which human translations can be thought to be not very literal. The present study attempts to replicate this research using a parliamentary corpus. The results confirm the observations on the news corpus, but the differences are less strong. They suggest that the use of text genres that usually result in more literal translations, such as parliamentary corpora, might be preferable when comparing human and machine translations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,359 |
inproceedings | alkorta-quintian-2022-adding | Adding the {B}asque Parliament Corpus to {P}arla{M}int Project | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.15/ | Alkorta, Jon and Quintian, Mikel Iruskieta | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 107--110 | The aim of this work is to describe the collection created from transcripts of the Basque parliamentary speeches. This corpus follows the constraints of the ParlaMint project. The Basque ParlaMint corpus consists of two versions: the first version contains what was said in the Basque Parliament, that is, the original bilingual corpus in Basque and in Spanish, to analyse what was said and how, while the second is only in Basque, with the original and translated passages, to promote studies on the content of the parliamentary speeches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,360
inproceedings | ljubesic-etal-2022-parlaspeech | {P}arla{S}peech-{HR} - a Freely Available {ASR} Dataset for {C}roatian Bootstrapped from the {P}arla{M}int Corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.16/ | Ljube{\v{s}}i{\'c}, Nikola and Kor{\v{z}}inek, Danijel and Rupnik, Peter and Jazbec, Ivo-Pavao | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 111--116 | This paper presents our bootstrapping efforts of producing the first large freely available Croatian automatic speech recognition (ASR) dataset, 1,816 hours in size, obtained from parliamentary transcripts and recordings from the ParlaMint corpus. The bootstrapping approach to the dataset building relies on a commercial ASR system for initial data alignment, and building a multilingual-transformer-based ASR system from the initial data for full data alignment. Experiments on the resulting dataset show that the difference between the spoken content and the parliamentary transcripts is present in {\textasciitilde}4-5{\%} of words, which is also the word error rate of our best-performing ASR system. Interestingly, fine-tuning transformer models on either normalized or original data does not show a difference in performance. Models pre-trained on a subset of raw speech data consisting of Slavic languages only are shown to perform better than those pre-trained on a wider set of languages. With our public release of data, models and code, we are paving the way forward for the preparation of the multi-modal corpus of Croatian parliamentary proceedings, as well as for the development of similar free datasets, models and corpora for other under-resourced languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,361
inproceedings | agnoloni-etal-2022-making | Making {I}talian Parliamentary Records Machine-Actionable: the Construction of the {P}arla{M}int-{IT} corpus | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.17/ | Agnoloni, Tommaso and Bartolini, Roberto and Frontini, Francesca and Montemagni, Simonetta and Marchetti, Carlo and Quochi, Valeria and Ruisi, Manuela and Venturi, Giulia | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 117--124 | This paper describes the process of acquisition, cleaning, interpretation, coding and linguistic annotation of a collection of parliamentary debates from the Senate of the Italian Republic covering the COVID-19 period and a former period for reference and comparison according to the CLARIN ParlaMint guidelines and prescriptions. The corpus contains 1199 sessions and 79,373 speeches, for a total of about 31 million words and was encoded according to the ParlaCLARIN TEI XML format, as well as in CoNLL-UD format. It includes extensive metadata about the speakers, the sessions, the political parties and Parliamentary groups. As required by the ParlaMint initiative, the corpus was also linguistically annotated for sentences, tokens, POS tags, lemmas and dependency syntax according to the universal dependencies guidelines. Named entity classification was also included. All linguistic annotation was performed automatically using state-of-the-art NLP technology with no manual revision. The Italian dataset is freely available as part of the larger ParlaMint 2.1 corpus deposited and archived in CLARIN repository together with all other national corpora. It is also available for direct analysis and inspection via various CLARIN services and has already been used both for research and educational purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,362 |
inproceedings | kulebi-etal-2022-parlamentparla | {P}arlament{P}arla: A Speech Corpus of {C}atalan Parliamentary Sessions | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.18/ | Kulebi, Baybars and Armentano-Oller, Carme and Rodriguez-Penagos, Carlos and Villegas, Marta | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 125--130 | Recently, various end-to-end architectures of Automatic Speech Recognition (ASR) are being showcased as an important step towards providing language technologies to all languages instead of a select few such as English. However many languages are still suffering due to the {\textquotedblleft}digital gap,{\textquotedblright} lacking thousands of hours of transcribed speech data openly accessible that is necessary to train modern ASR architectures. Although Catalan already has access to various open speech corpora, these corpora lack diversity and are limited in total volume. In order to address this lack of resources for Catalan language, in this work we present ParlamentParla, a corpus of more than 600 hours of speech from Catalan Parliament sessions. This corpus has already been used in training of state-of-the-art ASR systems, and proof-of-concept text-to-speech (TTS) models. In this work we explain in detail the pipeline that allows the information publicly available on the parliamentary website to be converted to a speech corpus compatible with training of ASR and possibly TTS models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,363 |
inproceedings | rebeja-etal-2022-parlamint | {P}arla{M}int-{RO}: Chamber of the Eternal Future | Fi{\v{s}}er, Darja and Eskevich, Maria and Lenardi{\v{c}}, Jakob and de Jong, Franciska | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.parlaclarin-1.19/ | Rebeja, Petru and Chitez, M{\u{a}}d{\u{a}}lina and Rogobete, Roxana and Dinc{\u{a}}, Andreea and Bercuci, Loredana | Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference | 131--134 | The present paper aims to describe the collection of the ParlaMint-RO corpus and to analyse several trends in parliamentary debates (plenary sessions of the Lower House) held between 2000 and 2020. After a short description of the data collection (of existing transcripts), the workflow of data processing (text extraction, conversion, encoding, linguistic annotation), and an overview of the corpus, the paper will move on to a multi-layered linguistic analysis to validate interdisciplinary perspectives. We use computational methods and corpus linguistics approaches to scrutinize the future tense forms used by Romanian speakers, in order to create a data-supported profile of the parliamentary group strategies and planning. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,364
inproceedings | vacareanu-etal-2022-patternrank | {P}attern{R}ank: Jointly Ranking Patterns and Extractions for Relation Extraction Using Graph-Based Algorithms | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.1/ | Vacareanu, Robert and Bell, Dane and Surdeanu, Mihai | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 1--10 | In this paper we revisit the direction of using lexico-syntactic patterns for relation extraction instead of today`s ubiquitous neural classifiers. We propose a semi-supervised graph-based algorithm for pattern acquisition that scores patterns and the relations they extract jointly, using a variant of PageRank. We insert light supervision in the form of seed patterns or relations, and model it with several custom teleportation probabilities that bias random-walk scores of patterns/relations based on their proximity to correct information. We evaluate our approach on Few-Shot TACRED, and show that our method outperforms (or performs competitively with) more expensive and opaque deep neural networks. Lastly, we thoroughly compare our proposed approach with the seminal RlogF pattern acquisition algorithm of, showing that it outperforms it for all the hyper parameters tested, in all settings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,366 |
inproceedings | arroyo-etal-2022-key | Key Information Extraction in Purchase Documents using Deep Learning and Rule-based Corrections | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.2/ | Arroyo, Roberto and Yebes, Javier and Mart{\'i}nez, Elena and Corrales, H{\'e}ctor and Lorenzo, Javier | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 11--20 | Deep Learning (DL) is dominating the fields of Natural Language Processing (NLP) and Computer Vision (CV) in the recent times. However, DL commonly relies on the availability of large data annotations, so other alternative or complementary pattern-based techniques can help to improve results. In this paper, we build upon Key Information Extraction (KIE) in purchase documents using both DL and rule-based corrections. Our system initially trusts on Optical Character Recognition (OCR) and text understanding based on entity tagging to identify purchase facts of interest (e.g., product codes, descriptions, quantities, or prices). These facts are then linked to a same product group, which is recognized by means of line detection and some grouping heuristics. Once these DL approaches are processed, we contribute several mechanisms consisting of rule-based corrections for improving the baseline DL predictions. We prove the enhancements provided by these rule-based corrections over the baseline DL results in the presented experiments for purchase documents from public and NielsenIQ datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,367 |
inproceedings | bhattacharya-etal-2022-unsupervised | Unsupervised Generation of Long-form Technical Questions from Textbook Metadata using Structured Templates | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.3/ | Bhattacharya, Indrajit and Ghosh, Subhasish and Kundu, Arpita and Saini, Pratik and Nayak, Tapas | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 21--28 | We explore the task of generating long-form technical questions from textbooks. Semi-structured metadata of a textbook {---} the table of contents and the index {---} provide rich cues for technical question generation. Existing literature for long-form question generation focuses mostly on reading comprehension assessment, and does not use semi-structured metadata for question generation. We design unsupervised template based algorithms for generating questions based on structural and contextual patterns in the index and ToC. We evaluate our approach on textbooks on diverse subjects and show that our approach generates high quality questions of diverse types. We show that, in comparison, zero-shot question generation using pre-trained LLMs on the same meta-data has much poorer quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,368 |
inproceedings | yoon-etal-2022-building | Building {K}orean Linguistic Resource for {NLU} Data Generation of Banking App {CS} Dialog System | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.4/ | Yoon, Jeongwoo and Park, Onyu and Hwang, Changhoe and Yoo, Gwanghoon and Laporte, Eric and Nam, Jeesun | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 29--37 | Natural language understanding (NLU) is integral to task-oriented dialog systems, but demands a considerable amount of annotated training data to increase the coverage of diverse utterances. In this study, we report the construction of a linguistic resource named FIAD (Financial Annotated Dataset) and its use to generate a Korean annotated training data for NLU in the banking customer service (CS) domain. By an empirical examination of a corpus of banking app reviews, we identified three linguistic patterns occurring in Korean request utterances: TOPIC (ENTITY, FEATURE), EVENT, and DISCOURSE MARKER. We represented them in LGGs (Local Grammar Graphs) to generate annotated data covering diverse intents and entities. To assess the practicality of the resource, we evaluate the performances of DIET-only (Intent: 0.91 /Topic [entity+feature]: 0.83), DIET+ HANBERT (I:0.94/T:0.85), DIET+ KoBERT (I:0.94/T:0.86), and DIET+ KorBERT (I:0.95/T:0.84) models trained on FIAD-generated data to extract various types of semantic items. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,369 |
inproceedings | choi-etal-2022-ssp | {SSP}-Based Construction of Evaluation-Annotated Data for Fine-Grained Aspect-Based Sentiment Analysis | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.5/ | Choi, Suwon and Kim, Shinwoo and Hwang, Changhoe and Yoo, Gwanghoon and Laporte, Eric and Nam, Jeesun | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 38--44 | We report the construction of a Korean evaluation-annotated corpus, hereafter called {\textquoteleft}Evaluation Annotated Dataset (EVAD)', and its use in Aspect-Based Sentiment Analysis (ABSA) extended in order to cover e-commerce reviews containing sentiment and non-sentiment linguistic patterns. The annotation process uses Semi-Automatic Symbolic Propagation (SSP). We built extensive linguistic resources formalized as a Finite-State Transducer (FST) to annotate corpora with detailed ABSA components in the fashion e-commerce domain. The ABSA approach is extended, in order to analyze user opinions more accurately and extract more detailed features of targets, by including aspect values in addition to topics and aspects, and by classifying aspect-value pairs depending whether values are unary, binary, or multiple. For evaluation, the KoBERT and KcBERT models are trained on the annotated dataset, showing robust performances of F1 0.88 and F1 0.90, respectively, on recognition of aspect-value pairs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,370 |
inproceedings | freitag-etal-2022-accelerating | Accelerating Human Authorship of Information Extraction Rules | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.6/ | Freitag, Dayne and Cadigan, John and Niekrasz, John and Sasseen, Robert | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 45--55 | We consider whether machine models can facilitate the human development of rule sets for information extraction. Arguing that rule-based methods possess a speed advantage in the early development of new extraction capabilities, we ask whether this advantage can be increased further through the machine facilitation of common recurring manual operations in the creation of an extraction rule set from scratch. Using a historical rule set, we reconstruct and describe the putative manual operations required to create it. In experiments targeting one key operation{---}the enumeration of words occurring in particular contexts{---}we simulate the process of corpus review and word list creation, showing that several simple interventions greatly improve recall as a function of simulated labor. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,371
inproceedings | sutiono-hahn-powell-2022-syntax | Syntax-driven Data Augmentation for Named Entity Recognition | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.7/ | Sutiono, Arie and Hahn-Powell, Gus | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 56--60 | In low resource settings, data augmentation strategies are commonly leveraged to improve performance. Numerous approaches have attempted document-level augmentation (e.g., text classification), but few studies have explored token-level augmentation. Performed naively, data augmentation can produce semantically incongruent and ungrammatical examples. In this work, we compare simple masked language model replacement and an augmentation method using constituency tree mutations to improve the performance of named entity recognition in low-resource settings with the aim of preserving linguistic cohesion of the augmented sentences. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,372 |
inproceedings | kuzin-etal-2022-query | Query Processing and Optimization for a Custom Retrieval Language | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.8/ | Kuzin, Yakov and Smirnova, Anna and Slobodkin, Evgeniy and Chernishev, George | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 61--70 | Data annotation has been a pressing issue ever since the rise of machine learning and associated areas. It is well-known that obtaining high-quality annotated data incurs high costs, be they financial or time-related. In our previous work, we have proposed a custom, SQL-like retrieval language used to query collections of short documents, such as chat transcripts or tweets. Its main purpose is enabling a human annotator to select {\textquotedblleft}situations{\textquotedblright} from such collections, i.e. subsets of documents that are related both thematically and temporally. This language, named Matcher, was prototyped in our custom annotation tool. Entering the next stage of development of the tool, we have tested the prototype implementation. Given the language`s rich semantics, many possible execution options with various costs arise. We have found out we could provide tangible improvement in terms of speed and memory consumption by carefully selecting the execution strategy in each particular case. In this work, we present the improved algorithms and proposed optimization methods, as well as a benchmark suite whose results show the significance of the presented techniques. While this is an initial work and not a full-fledged optimization framework, it nevertheless yields good results, providing up to tenfold improvement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,373 |
inproceedings | nitschke-etal-2022-rule | Rule Based Event Extraction for Artificial Social Intelligence | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.9/ | Nitschke, Remo and Wang, Yuwei and Chen, Chen and Pyarelal, Adarsh and Sharp, Rebecca | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 71--84 | Natural language (as opposed to structured communication modes such as Morse code) is by far the most common mode of communication between humans, and can thus provide significant insight into both individual mental states and interpersonal dynamics. As part of DARPA`s Artificial Social Intelligence for Successful Teams (ASIST) program, we are developing an AI agent team member that constructs and maintains models of their human teammates and provides appropriate task-relevant advice to improve team processes and mission performance. One of the key components of this agent is a module that uses a rule-based approach to extract task-relevant events from natural language utterances in real time, and publish them for consumption by downstream components. In this case study, we evaluate the performance of our rule-based event extraction system on a recently conducted ASIST experiment consisting of a simulated urban search and rescue mission in Minecraft. We compare the performance of our approach with that of a zero-shot neural classifier, and find that our approach outperforms the classifier for all event types, even when the classifier is used in an oracle setting where it knows how many events should be extracted from each utterance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,374 |
inproceedings | noriega-atala-etal-2022-neural | Neural-Guided Program Synthesis of Information Extraction Rules Using Self-Supervision | Chiticariu, Laura and Goldberg, Yoav and Hahn-Powell, Gus and Morrison, Clayton T. and Naik, Aakanksha and Sharp, Rebecca and Surdeanu, Mihai and Valenzuela-Esc{\'a}rcega, Marco and Noriega-Atala, Enrique | oct | 2022 | Gyeongju, Republic of Korea | International Conference on Computational Linguistics | https://aclanthology.org/2022.pandl-1.10/ | Noriega-Atala, Enrique and Vacareanu, Robert and Hahn-Powell, Gus and Valenzuela-Esc{\'a}rcega, Marco A. | Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning | 85--93 | We propose a neural-based approach for rule synthesis designed to help bridge the gap between the interpretability, precision and maintainability exhibited by rule-based information extraction systems with the scalability and convenience of statistical information extraction systems. This is achieved by avoiding placing the burden of learning another specialized language on domain experts and instead asking them to provide a small set of examples in the form of highlighted spans of text. We introduce a transformer-based architecture that drives a rule synthesis system that leverages a self-supervised approach for pre-training a large-scale language model complemented by an analysis of different loss functions and aggregation mechanisms for variable length sequences of user-annotated spans of text. The results are encouraging and point to different desirable properties, such as speed and quality, depending on the choice of loss and aggregation method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,375 |
inproceedings | nagoudi-etal-2022-turjuman | {TURJUMAN}: A Public Toolkit for Neural {A}rabic Machine Translation | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.1/ | Nagoudi, El Moatez Billah and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 1--11 | We present TURJUMAN, a neural toolkit for translating from 20 languages into Modern Standard Arabic (MSA). TURJUMAN exploits the recently-introduced text-to-text Transformer AraT5 model, endowing it with a powerful ability to decode into Arabic. The toolkit offers the possibility of employing a number of diverse decoding methods, making it suited for acquiring paraphrases for the MSA translations as an added value. To train TURJUMAN, we sample from publicly available parallel data employing a simple semantic similarity method to ensure data quality. This allows us to prepare and release AraOPUS-20, a new machine translation benchmark. We publicly release our translation toolkit (TURJUMAN) as well as our benchmark dataset (AraOPUS-20). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,472 |
inproceedings | sheikh-ali-etal-2022-detecting | Detecting Users Prone to Spread Fake News on {A}rabic {T}witter | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.2/ | Sheikh Ali, Zien and Al-Ali, Abdulaziz and Elsayed, Tamer | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 12--22 | The spread of misinformation has become a major concern to our society, and social media is one of its main culprits. Evidently, health misinformation related to vaccinations has slowed down global efforts to fight the COVID-19 pandemic. Studies have shown that fake news spreads substantially faster than real news on social media networks. One way to limit this fast dissemination is by assessing information sources in a semi-automatic way. To this end, we aim to identify users who are prone to spread fake news in Arabic Twitter. Such users play an important role in spreading misinformation and identifying them has the potential to control the spread. We construct an Arabic dataset on Twitter users, which consists of 1,546 users, of which 541 are prone to spread fake news (based on our definition). We use features extracted from users' recent tweets, e.g., linguistic, statistical, and profile features, to predict whether they are prone to spread fake news or not. To tackle the classification task, multiple learning models are employed and evaluated. Empirical results reveal promising detection performance, where an F1 score of 0.73 was achieved by the logistic regression model. Moreover, when tested on a benchmark English dataset, our approach has outperformed the current state-of-the-art for this task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,473 |
inproceedings | el-haj-etal-2022-arasas | {A}ra{SAS}: The Open Source {A}rabic Semantic Tagger | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.3/ | El-Haj, Mahmoud and de Souza, Elvis and Khallaf, Nouran and Rayson, Paul and Habash, Nizar | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 23--31 | This paper presents (AraSAS) the first open-source Arabic semantic analysis tagging system. AraSAS is a software framework that provides full semantic tagging of text written in Arabic. AraSAS is based on the UCREL Semantic Analysis System (USAS) which was first developed to semantically tag English text. Similarly to USAS, AraSAS uses a hierarchical semantic tag set that contains 21 major discourse fields and 232 fine-grained semantic field tags. The paper describes the creation, validation and evaluation of AraSAS. In addition, we demonstrate a first case study to illustrate the affordances of applying USAS and AraSAS semantic taggers on the Zayed University Arabic-English Bilingual Undergraduate Corpus (ZAEBUC) (Palfreyman and Habash, 2022), where we show and compare the coverage of the two semantic taggers through running them on Arabic and English essays on different topics. The analysis expands to compare the taggers when run on texts in Arabic and English written by the same writer and texts written by male and by female students. Variables for comparison include frequency of use of particular semantic sub-domains, as well as the diversity of semantic elements within a text. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,474 |
inproceedings | al-thubaity-etal-2022-aranpcc | {A}ra{NPCC}: The {A}rabic Newspaper {COVID}-19 Corpus | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.4/ | Al-Thubaity, Abdulmohsen and Alkhereyf, Sakhar and Bahanshal, Alia O. | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 32--40 | This paper introduces a corpus for Arabic newspapers during COVID-19: AraNPCC. The AraNPCC corpus covers 2019 until 2021 via automatically-collected data from 12 Arab countries. It comprises more than 2 billion words and 7.2 million texts alongside their metadata. AraNPCC can be used for several natural language processing tasks, such as updating available Arabic language models or corpus linguistics tasks, including language change over time. We utilized the corpus in two case studies. In the first case study, we investigate the correlation between the number of officially reported infected cases and the collective word frequency of {\textquotedblleft}COVID{\textquotedblright} and {\textquotedblleft}Corona.{\textquotedblright} The data shows a positive correlation that varies among Arab countries. For the second case study, we extract and compare the top 50 keywords in 2020 and 2021 to study the impact of the COVID-19 pandemic on two Arab countries, namely Algeria and Saudi Arabia. For 2020, the data shows that the two countries' newspapers strongly interacted with the pandemic, emphasizing its spread and dangerousness, and in 2021 the data suggests that the two countries coped with the pandemic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,475 |
inproceedings | abu-kwaik-etal-2022-pre | Pre-trained Models or Feature Engineering: The Case of Dialectal {A}rabic | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.5/ | Abu Kwaik, Kathrein and Chatzikyriakidis, Stergios and Dobnik, Simon | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 41--50 | The usage of social media platforms has resulted in the proliferation of work on Arabic Natural Language Processing (ANLP), including the development of resources. There is also an increased interest in processing Arabic dialects and a number of models and algorithms have been utilised for the purpose of Dialectal Arabic Natural Language Processing (DANLP). In this paper, we conduct a comparison study between some of the most well-known and most commonly used methods in NLP in order to test their performance on different corpora and two NLP tasks: Dialect Identification and Sentiment Analysis. In particular, we compare three general classes of models: a) traditional Machine Learning models with features, b) classic Deep Learning architectures (LSTMs) with pre-trained word embeddings and lastly c) different Bidirectional Encoder Representations from Transformers (BERT) models such as Multilingual-BERT, Ara-BERT, and Twitter-Arabic-BERT. The results of the comparison show that using feature-based classification can still compete with BERT models in these dialectal Arabic contexts. Transformer models have the ability to outperform traditional Machine Learning approaches, depending on the type of text they have been trained on, in contrast to classic Deep Learning models like LSTMs which do not perform well on the tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,476
inproceedings | hakami-etal-2022-context | A Context-free {A}rabic Emoji Sentiment Lexicon ({CF}-{A}rab-{ESL}) | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.6/ | Hakami, Shatha Ali A. and Hendley, Robert and Smith, Phillip | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 51--59 | Emoji can be valuable features in textual sentiment analysis. One of the key elements of the use of emoji in sentiment analysis is the emoji sentiment lexicon. However, constructing such a lexicon is a challenging task. This is because interpreting the sentiment conveyed by these pictographic symbols is highly subjective, and differs depending upon how each person perceives them. Cultural background is considered to be one of the main factors that affects emoji sentiment interpretation. Thus, we focus in this work on targeting people from Arab cultures. This is done by constructing a context-free Arabic emoji sentiment lexicon annotated by native Arabic speakers from seven different regions (Gulf, Egypt, Levant, Sudan, North Africa, Iraq, and Yemen) to see how these Arabic users label the sentiment of these symbols without a textual context. We recruited 53 annotators (males and females) to annotate 1,069 unique emoji. Then we evaluated the reliability of the annotation for each participant by applying sensitivity (Recall) and consistency (Krippendorff`s Alpha) tests. For the analysis, we investigated the resulting emoji sentiment annotations to explore the impact of the Arabic cultural context. We analyzed this cultural reflection from different perspectives, including national affiliation, use of colour indications, animal indications, weather indications and religious impact. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,477 |
inproceedings | almazrua-etal-2022-sa7r | {S}a{\textquoteleft}7r: A Saudi Dialect Irony Dataset | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.7/ | AlMazrua, Halah and AlHazzani, Najla and AlDawod, Amaal and AlAwlaqi, Lama and AlReshoudi, Noura and Al-Khalifa, Hend and AlDhubayi, Luluh | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 60--70 | In sentiment analysis, detecting irony is considered a major challenge. The key problem with detecting irony is the difficulty of recognizing the implicit and indirect phrases which signify the opposite meaning. In this paper, we present Sa{\textquoteleft}7r ساخر, the Saudi irony dataset, and describe our efforts in constructing it. The dataset was collected using the Twitter API and consists of 19,810 tweets, 8,089 of which are labeled as ironic tweets. We trained several models for the irony detection task using machine learning models and deep learning models. The machine learning models include K-Nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Na{\"i}ve Bayes (NB), while the deep learning models include BiLSTM and AraBERT. The detection results show that among the tested machine learning models, the SVM outperformed other classifiers with an accuracy of 0.68. On the other hand, the deep learning models achieved an accuracy of 0.66 with the BiLSTM model and 0.71 with the AraBERT model. Thus, the AraBERT model achieved the most accurate result in detecting irony phrases in Saudi Dialect. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,478
inproceedings | alharbi-lee-2022-classifying | Classifying {A}rabic Crisis Tweets using Data Selection and Pre-trained Language Models | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.8/ | Alharbi, Alaa and Lee, Mark | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 71--78 | User-generated Social Media (SM) content has been explored as a valuable and accessible source of data about crises to enhance situational awareness and support humanitarian response efforts. However, the timely extraction of crisis-related SM messages is challenging as it involves processing large quantities of noisy data in real-time. Supervised machine learning methods have been successfully applied to this task but such approaches require human-labelled data, which are unlikely to be available from novel and emerging crises. Supervised machine learning algorithms trained on labelled data from past events did not usually perform well when classifying a new disaster due to data variations across events. Using the BERT embeddings, we propose and investigate an instance distance-based data selection approach for adaptation to improve classifiers' performance under a domain shift. The K-nearest neighbours algorithm selects a subset of multi-event training data that is most similar to the target event. Results show that fine-tuning a BERT model on a selected subset of data to classify crisis tweets outperforms a model that has been fine-tuned on all available source data. We demonstrated that our approach generally works better than the self-training adaptation method. Combining the self-training with our proposed classifier does not enhance the performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,479
inproceedings | malhas-etal-2022-quran | Qur`an {QA} 2022: Overview of The First Shared Task on Question Answering over the Holy Qur`an | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.9/ | Malhas, Rana and Mansour, Watheq and Elsayed, Tamer | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 79--87 | Motivated by the resurgence of the machine reading comprehension (MRC) research, we have organized the first Qur`an Question Answering shared task, {\textquotedblleft}Qur`an QA 2022{\textquotedblright}. The task in its first year aims to promote state-of-the-art research on Arabic QA in general and MRC in particular on the Holy Qur`an, which constitutes a rich and fertile source of knowledge for Muslim and non-Muslim inquisitors and knowledge-seekers. In this paper, we provide an overview of the shared task that succeeded in attracting 13 teams to participate in the final phase, with a total of 30 submitted runs. Moreover, we outline the main approaches adopted by the participating teams in the context of highlighting some of our perceptions and general trends that characterize the participating systems and their submitted runs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,480 |
inproceedings | premasiri-etal-2022-dtw | {DTW} at Qur`an {QA} 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.10/ | Premasiri, Damith and Ranasinghe, Tharindu and Zaghouani, Wajdi and Mitkov, Ruslan | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 88--95 | The task of machine reading comprehension (MRC) is a useful benchmark to evaluate the natural language understanding of machines. It has gained popularity in the natural language processing (NLP) field mainly due to the large number of datasets released for many languages. However, the research in MRC has been understudied in several domains, including religious texts. The goal of the Qur`an QA 2022 shared task is to fill this gap by producing state-of-the-art question answering and reading comprehension research on Qur`an. This paper describes the DTW entry to the Quran QA 2022 shared task. Our methodology uses transfer learning to take advantage of available Arabic MRC data. We further improve the results using various ensemble learning strategies. Our approach provided a partial Reciprocal Rank (pRR) score of 0.49 on the test set, proving its strong performance on the task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,481 |
inproceedings | aftab-malik-2022-erock | e{R}ock at Qur`an {QA} 2022: Contemporary Deep Neural Networks for Qur`an based Reading Comprehension Question Answers | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.11/ | Aftab, Esha and Malik, Muhammad Kamran | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 96--103 | Question Answering (QA) has enticed the interest of the NLP community in recent years. NLP enthusiasts are engineering new models and fine-tuning the existing ones that can give answers to the posed questions. The deep neural network models are found to perform exceptionally on QA tasks, but these models are also data intensive. For instance, BERT has outperformed many of its contemporary contenders on the SQuAD dataset. In this work, we attempt to solve the closed-domain reading comprehension Question Answering task on QRCD (Qur`anic Reading Comprehension Dataset) to extract an answer span from the provided passage, using BERT as a baseline model. We improved the model`s output by applying regularization techniques like weight-decay and data augmentation. Using different strategies, we obtained 0.59{\%} and 0.31{\%} partial Reciprocal Ranking (pRR) on the development and testing data splits, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,482
inproceedings | mostafa-mohamed-2022-gof | {GOF} at Qur`an {QA} 2022: Towards an Efficient Question Answering For The Holy Qu`ran In The {A}rabic Language Using Deep Learning-Based Approach | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.12/ | Mostafa, Ali and Mohamed, Omar | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 104--111 | Recently, significant advancements were achieved in Question Answering (QA) systems in several languages. However, QA systems in the Arabic language require further research and improvement because of several challenges and limitations, such as a lack of resources. This is especially true for QA systems on the Holy Qur`an, since it is in Classical Arabic while most recent publications are in Modern Standard Arabic. In this research, we report our submission to the Qur`an QA 2022 Shared task, organized with the 5th Workshop on Open-Source Arabic Corpora and Processing Tools Arabic (OSACT5). We propose a method for dealing with QA issues in the Holy Qur`an using Deep Learning models. Furthermore, we address the issue of the proposed dataset`s limited sample size by fine-tuning the model several times on several large datasets before fine-tuning it on the proposed dataset, achieving 66.9{\%} pRR and 54.59{\%} pRR on the development and test sets, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,483
inproceedings | mellah-etal-2022-larsa22 | {LARSA}22 at Qur`an {QA} 2022: Text-to-Text Transformer for Finding Answers to Questions from Qur`an | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.13/ | Mellah, Youssef and Touahri, Ibtissam and Kaddari, Zakaria and Haja, Zakaria and Berrich, Jamal and Bouchentouf, Toumi | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 112--119 | Question Answering (QA) is one of the main focuses of Natural Language Processing (NLP) research. However, Arabic Question Answering is still not within reach. The challenges of the Arabic language and the lack of resources have made it difficult to provide powerful Arabic QA systems with high accuracy. While low accuracy may be accepted for general purpose systems, it is critical in some fields such as religious affairs. Therefore, there is a need for specialized accurate systems that target these critical fields. In this paper, we propose a Transformer-based QA system using the mT5 Language Model (LM). We finetuned the model on the Qur`anic Reading Comprehension Dataset (QRCD) which was provided in the context of the Qur`an QA 2022 shared task. The QRCD dataset consists of question-passage pairs as input, and the corresponding adequate answers provided by expert annotators as output. Evaluation results on the same dataset show that our best model can achieve 0.98 (F1 Score) on the Dev Set and 0.40 on the Test Set. We discuss those results and challenges, then propose potential solutions for possible improvements. The source code is available on our repository. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,484
inproceedings | alsaleh-etal-2022-lk2022 | {LK}2022 at Qur`an {QA} 2022: Simple Transformers Model for Finding Answers to Questions from Qur`an | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.14/ | Alsaleh, Abdullah and Althabiti, Saud and Alshammari, Ibtisam and Alnefaie, Sarah and Alowaidi, Sanaa and Alsaqer, Alaa and Atwell, Eric and Altahhan, Abdulrahman and Alsalka, Mohammad | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 120--125 | Question answering is a specialized area in the field of NLP that aims to extract the answer to a user question from a given text. Most studies in this area focus on the English language, while other languages, such as Arabic, are still in their early stage. Recently, research has tended to develop question answering systems for Arabic Islamic texts, which may impose challenges due to Classical Arabic. In this paper, we use the Simple Transformers Question Answering model with three Arabic pre-trained language models (AraBERT, CAMeL-BERT, ArabicBERT) for the Qur`an Question Answering task using the Qur`anic Reading Comprehension Dataset. The model is set to return five answers ranked from best to worst based on their probability scores according to the task details. Our experiments with the development set show that the AraBERT V0.2 model outperformed the other Arabic pre-trained models. Therefore, AraBERT V0.2 was chosen for the test set, where it achieved fair results with a 0.45 pRR score, a 0.16 EM score and a 0.42 F1 score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,485
inproceedings | singh-2022-niksss-quran | niksss at Qur`an {QA} 2022: A Heavily Optimized {BERT} Based Model for Answering Questions from the Holy Qu`ran | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.15/ | Singh, Nikhil | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 126--129 | This paper presents the system description by team niksss for the Qur`an QA 2022 Shared Task. The goal of this shared task was to evaluate systems for Arabic Reading Comprehension over the Holy Quran. The task was set up as a question-answering task, such that, given a passage from the Holy Quran (consisting of consecutive verses in a specific surah (Chapter)) and a question (posed in Modern Standard Arabic (MSA)) over that passage, the system is required to extract a span of text from that passage as an answer to the question. The span was required to be an exact sub-string of the passage. We attempted to solve this task using three techniques, namely conditional text-to-text generation, embedding clustering, and transformers-based question answering. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,486
inproceedings | ahmed-etal-2022-qqateam | {QQAT}eam at Qur`an {QA} 2022: Fine-Tunning {A}rabic {QA} Models for Qur`an {QA} Task | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.16/ | Ahmed, Basem and Saad, Motaz and Refaee, Eshrag A. | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 130--135 | The problem of auto-extraction of reliable answers from a reference text like a constitution or holy book is a real challenge for the natural language research community. Qur{\'a}n is the holy book of Islam and the primary source of legislation for millions of Muslims around the world, which can trigger the curiosity of non-Muslims to find answers about various topics from the Qur{\'a}n. Previous work on Question Answering (Q{\&}A) from the Qur{\'a}n is scarce and lacks a benchmark of previously developed systems on a testbed to allow meaningful comparison and identify developments and challenges. This work presents an empirical investigation of our participation in the Qur{\'a}n QA shared task (2022) that utilizes a benchmark dataset of 1,093 tuples of question-Qur{\'a}n passage pairs. The dataset comprises Qur{\'a}n verses, questions and several ranked possible answers. This paper describes the approach we followed in our participation in the shared task and summarises our main findings. Our system attained the best score at 0.63 pRR and 0.59 F1 on the development set and 0.56 pRR and 0.51 F1 on the test set. The best Exact Match (EM) score of 0.34 indicates the difficulty of the task and the need for more future work to tackle this challenging task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,487
inproceedings | keleg-magdy-2022-smash | {SMASH} at Qur`an {QA} 2022: Creating Better Faithful Data Splits for Low-resourced Question Answering Scenarios | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.17/ | Keleg, Amr and Magdy, Walid | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 136--145 | The Qur`an QA 2022 shared task aims at assessing the possibility of building systems that can extract answers to religious questions given relevant passages from the Holy Qur`an. This paper describes SMASH`s system that was used to participate in this shared task. Our experiments reveal a data leakage issue among the different splits of the dataset. This leakage problem hinders the reliability of using the models' performance on the development dataset as a proxy for the ability of the models to generalize to new unseen samples. After creating better faithful splits from the original dataset, the basic strategy of fine-tuning a language model pretrained on classical Arabic text yielded the best performance on the new evaluation split. The results achieved by the model suggest that the small-scale dataset is not enough to fine-tune large transformer-based language models in a way that generalizes well. Conversely, we believe that further attention could be paid to the type of questions that are being used to train the models given the sensitivity of the data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,488
inproceedings | sleem-etal-2022-stars | Stars at Qur`an {QA} 2022: Building Automatic Extractive Question Answering Systems for the Holy Qur`an with Transformer Models and Releasing a New Dataset | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.18/ | Sleem, Ahmed and Elrefai, Eman Mohammed lotfy and Matar, Marwa Mohammed and Nawaz, Haq | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 146--153 | The Holy Qur`an is the most sacred book for more than 1.9 billion Muslims worldwide, and it provides a guide for their behaviours and daily interactions. Its miraculous eloquence and the divine essence of its verses (Khorami, 2014) (Elhindi, 2017) make it far more difficult for non-scholars to answer their questions from the Qur`an. Here comes the significant role of technology in assisting all Muslims in answering their Qur`anic questions with state-of-the-art advancements in natural language processing (NLP) and information retrieval (IR). The task of constructing the finest automatic extractive Question Answering system from the Holy Qur`an with the use of the recently available Qur`anic Reading Comprehension Dataset (QRCD) was announced for LREC 2022 (Malhas et al., 2022), which opened up this new area for researchers around the world. In this paper, we propose a novel Qur`an Question Answering dataset with over 700 samples to aid future Qur`an research projects, along with three different approaches in which we utilised self-attention based deep learning models (transformers) to build reliable intelligent question-answering systems for the Holy Qur`an, achieving a best partial Reciprocal Rank (pRR) score of 52{\%} on the released QRCD test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,489
inproceedings | elkomy-sarhan-2022-tce | {TCE} at Qur`an {QA} 2022: {A}rabic Language Question Answering Over Holy Qur`an Using a Post-Processed Ensemble of {BERT}-based Models | Al-Khalifa, Hend and Elsayed, Tamer and Mubarak, Hamdy and Al-Thubaity, Abdulmohsen and Magdy, Walid and Darwish, Kareem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.osact-1.19/ | Elkomy, Mohamemd and Sarhan, Amany M. | Proceedinsg of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur`an QA and Fine-Grained Hate Speech Detection | 154--161 | In recent years, we witnessed great progress in different tasks of natural language understanding using machine learning. Question answering is one of these tasks, which is used by search engines and social media platforms for improved user experience. Arabic is the language of the Holy Qur`an, the sacred text for 1.8 billion people across the world. Arabic is a challenging language for Natural Language Processing (NLP) due to its complex structures. In this article, we describe our attempts at the OSACT5 Qur`an QA 2022 Shared Task, which is a question answering challenge on the Holy Qur`an in Arabic. We propose an ensemble learning model based on Arabic variants of BERT models. In addition, we perform post-processing to enhance the model predictions. Our system achieves a Partial Reciprocal Rank (pRR) score of 56.6{\%} on the official test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 23,490