entry_type
stringclasses 4
values | citation_key
stringlengths 10
110
| title
stringlengths 6
276
⌀ | editor
stringclasses 723
values | month
stringclasses 69
values | year
stringdate 1963-01-01 00:00:00
2022-01-01 00:00:00
| address
stringclasses 202
values | publisher
stringclasses 41
values | url
stringlengths 34
62
| author
stringlengths 6
2.07k
⌀ | booktitle
stringclasses 861
values | pages
stringlengths 1
12
⌀ | abstract
stringlengths 302
2.4k
| journal
stringclasses 5
values | volume
stringclasses 24
values | doi
stringlengths 20
39
⌀ | n
stringclasses 3
values | wer
stringclasses 1
value | uas
null | language
stringclasses 3
values | isbn
stringclasses 34
values | recall
null | number
stringclasses 8
values | a
null | b
null | c
null | k
null | f1
stringclasses 4
values | r
stringclasses 2
values | mci
stringclasses 1
value | p
stringclasses 2
values | sd
stringclasses 1
value | female
stringclasses 0
values | m
stringclasses 0
values | food
stringclasses 1
value | f
stringclasses 1
value | note
stringclasses 20
values | __index_level_0__
int64 22k
106k
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | pu-etal-2022-unraveling | Unraveling the Mystery of Artifacts in Machine Generated Text | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.744/ | Pu, Jiashu and Huang, Ziyi and Xi, Yadong and Chen, Guandan and Chen, Weijie and Zhang, Rongsheng | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6889--6898 | As neural Text Generation Models (TGM) have become more and more capable of generating text indistinguishable from human-written ones, the misuse of text generation technologies can have serious ramifications. Although a neural classifier often achieves high detection accuracy, the reason for it is not well studied. Most previous work revolves around studying the impact of model structure and the decoding strategy on ease of detection, but little work has been done to analyze the forms of artifacts left by the TGM. We propose to systematically study the forms and scopes of artifacts by corrupting texts, replacing them with linguistic or statistical features, and applying the interpretable method of Integrated Gradients. Comprehensive experiments show artifacts a) primarily relate to token co-occurrence, b) feature more heavily at the head of vocabulary, c) appear more in content word than stopwords, d) are sometimes detrimental in the form of number of token occurrences, e) are less likely to exist in high-level semantics or syntaxes, f) manifest in low concreteness values for higher-order n-grams. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,151 |
inproceedings | chang-etal-2022-logic | Logic-Guided Message Generation from Raw Real-Time Sensor Data | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.745/ | Chang, Ernie and Kovtunova, Alisa and Borgwardt, Stefan and Demberg, Vera and Chapman, Kathryn and Yeh, Hui-Syuan | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6899--6908 | Natural language generation in real-time settings with raw sensor data is a challenging task. We find that formulating the task as an end-to-end problem leads to two major challenges in content selection {--} the sensor data is both redundant and diverse across environments, thereby making it hard for the encoders to select and reason on the data. We here present a new corpus for a specific domain that instantiates these properties. It includes handover utterances that an assistant for a semi-autonomous drone uses to communicate with humans during the drone flight. The corpus consists of sensor data records and utterances in 8 different environments. As a structured intermediary representation between data records and text, we explore the use of description logic (DL). We also propose a neural generation model that can alert the human pilot of the system state and environment in preparation of the handover of control. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,152 |
inproceedings | kumar-etal-2022-bull | The Bull and the Bear: Summarizing Stock Market Discussions | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.746/ | Kumar, Ayush and Jani, Dhyey and Shah, Jay and Thakar, Devanshu and Jain, Varun and Singh, Mayank | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6909--6913 | Stock market investors debate and heavily discuss stock ideas, investing strategies, news and market movements on social media platforms. The discussions are significantly longer in length and require extensive domain expertise for understanding. In this paper, we curate such discussions and construct a first-of-its-kind of abstractive summarization dataset. Our curated dataset consists of 7888 Reddit posts and manually constructed summaries for 400 posts. We robustly evaluate the summaries and conduct experiments on SOTA summarization tools to showcase their limitations. We plan to make the dataset publicly available. The sample dataset is available here: \url{https://dhyeyjani.github.io/RSMC} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,153 |
inproceedings | espasa-etal-2022-combination | Combination of Contextualized and Non-Contextualized Layers for Lexical Substitution in {F}rench | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.747/ | Espasa, K{\'e}vin and Morin, Emmanuel and Hamon, Olivier | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6914--6921 | Lexical substitution task requires to substitute a target word by candidates in a given context. Candidates must keep meaning and grammatically of the sentence. The task, introduced in the SemEval 2007, has two objectives. The first objective is to find a list of substitutes for a target word. This list of substitutes can be obtained with lexical resources like WordNet or generated with a pre-trained language model. The second objective is to rank these substitutes using the context of the sentence. Most of the methods use vector space models or more recently embeddings to rank substitutes. Embedding methods use high contextualized representation. This representation can be over contextualized and in this way overlook good substitute candidates which are more similar on non-contextualized layers. SemDis 2014 introduced the lexical substitution task in French. We propose an application of the state-of-the-art method based on BERT in French and a novel method using contextualized and non-contextualized layers to increase the suggestion of words having a lower probability in a given context but that are more semantically similar. Experiments show our method increases the BERT based system on the OOT measure but decreases on the BEST measure in the SemDis 2014 benchmark. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,154 |
inproceedings | bastan-etal-2022-sume | {S}u{M}e: A Dataset Towards Summarizing Biomedical Mechanisms | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.748/ | Bastan, Mohaddeseh and Shankar, Nishant and Surdeanu, Mihai and Balasubramanian, Niranjan | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6922--6931 | Can language models read biomedical texts and explain the biomedical mechanisms discussed? In this work we introduce a biomedical mechanism summarization task. Biomedical studies often investigate the mechanisms behind how one entity (e.g., a protein or a chemical) affects another in a biological context. The abstracts of these publications often include a focused set of sentences that present relevant supporting statements regarding such relationships, associated experimental evidence, and a concluding sentence that summarizes the mechanism underlying the relationship. We leverage this structure and create a summarization task, where the input is a collection of sentences and the main entities in an abstract, and the output includes the relationship and a sentence that summarizes the mechanism. Using a small amount of manually labeled mechanism sentences, we train a mechanism sentence classifier to filter a large biomedical abstract collection and create a summarization dataset with 22k instances. We also introduce conclusion sentence generation as a pretraining task with 611k instances. We benchmark the performance of large bio-domain language models. We find that while the pretraining task help improves performance, the best model produces acceptable mechanism outputs in only 32{\%} of the instances, which shows the task presents significant challenges in biomedical language understanding and summarization. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,155 |
inproceedings | chen-lin-2022-catamaran | {CATAMARAN}: A Cross-lingual Long Text Abstractive Summarization Dataset | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.749/ | Chen, Zheng and Lin, Hongyu | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6932--6937 | Cross-lingual summarization, which produces the summary in one language from a given source document in another language, could be extremely helpful for humans to obtain information across the world. However, it is still a little-explored task due to the lack of datasets. Recent studies are primarily based on pseudo-cross-lingual datasets obtained by translation. Such an approach would inevitably lead to the loss of information in the original document and introduce noise into the summary, thus hurting the overall performance. In this paper, we present CATAMARAN, the first high-quality cross-lingual long text abstractive summarization dataset. It contains about 20,000 parallel news articles and corresponding summaries, all written by humans. The average lengths of articles are 1133.65 for English articles and 2035.33 for Chinese articles, and the average lengths of the summaries are 26.59 and 70.05, respectively. We train and evaluate an mBART-based cross-lingual abstractive summarization model using our dataset. The result shows that, compared with mono-lingual systems, the cross-lingual abstractive summarization system could also achieve solid performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,156 |
inproceedings | sosea-etal-2022-emotion | Emotion analysis and detection during {COVID}-19 | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.750/ | Sosea, Tiberiu and Pham, Chau and Tekle, Alexander and Caragea, Cornelia and Li, Junyi Jessy | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6938--6947 | Understanding emotions that people express during large-scale crises helps inform policy makers and first responders about the emotional states of the population as well as provide emotional support to those who need such support. We present CovidEmo, a dataset of {\textasciitilde}3,000 English tweets labeled with emotions and temporally distributed across 18 months. Our analyses reveal the emotional toll caused by COVID-19, and changes of the social narrative and associated emotions over time. Motivated by the time-sensitive nature of crises and the cost of large-scale annotation efforts, we examine how well large pre-trained language models generalize across domains and timeline in the task of perceived emotion prediction in the context of COVID-19. Our analyses suggest that cross-domain information transfers occur, yet there are still significant gaps. We propose semi-supervised learning as a way to bridge this gap, obtaining significantly better performance using unlabeled data from the target domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,157 |
inproceedings | hassan-etal-2022-cross | Cross-lingual Emotion Detection | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.751/ | Hassan, Sabit and Shaar, Shaden and Darwish, Kareem | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6948--6958 | Emotion detection can provide us with a window into understanding human behavior. Due to the complex dynamics of human emotions, however, constructing annotated datasets to train automated models can be expensive. Thus, we explore the efficacy of cross-lingual approaches that would use data from a source language to build models for emotion detection in a target language. We compare three approaches, namely: i) using inherently multilingual models; ii) translating training data into the target language; and iii) using an automatically tagged parallel corpus. In our study, we consider English as the source language with Arabic and Spanish as target languages. We study the effectiveness of different classification models such as BERT and SVMs trained with different features. Our BERT-based monolingual models that are trained on target language data surpass state-of-the-art (SOTA) by 4{\%} and 5{\%} absolute Jaccard score for Arabic and Spanish respectively. Next, we show that using cross-lingual approaches with English data alone, we can achieve more than 90{\%} and 80{\%} relative effectiveness of the Arabic and Spanish BERT models respectively. Lastly, we use LIME to analyze the challenges of training cross-lingual models for different language pairs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,158 |
inproceedings | zhang-liu-2022-directquote | {D}irect{Q}uote: A Dataset for Direct Quotation Extraction and Attribution in News Articles | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.752/ | Zhang, Yuanchi and Liu, Yang | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6959--6966 | Quotation extraction and attribution are challenging tasks, aiming at determining the spans containing quotations and attributing each quotation to the original speaker. Applying this task to news data is highly related to fact-checking, media monitoring and news tracking. Direct quotations are more traceable and informative, and therefore of great significance among different types of quotations. Therefore, this paper introduces DirectQuote, a corpus containing 19,760 paragraphs and 10,279 direct quotations manually annotated from online news media. To the best of our knowledge, this is the largest and most complete corpus that focuses on direct quotations in news texts. We ensure that each speaker in the annotation can be linked to a specific named entity on Wikidata, benefiting various downstream tasks. In addition, for the first time, we propose several sequence labeling models as baseline methods to extract and attribute quotations simultaneously in an end-to-end manner. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,159 |
inproceedings | weinzierl-harabagiu-2022-vaccinelies | {V}accine{L}ies: A Natural Language Resource for Learning to Recognize Misinformation about the {COVID}-19 and {HPV} Vaccines | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.753/ | Weinzierl, Maxwell and Harabagiu, Sanda | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6967--6975 | Billions of COVID-19 vaccines have been administered, but many remain hesitant. Misinformation about the COVID-19 vaccines and other vaccines, propagating on social media, is believed to drive hesitancy towards vaccination. The ability to automatically recognize misinformation targeting vaccines on Twitter depends on the availability of data resources. In this paper we present VaccineLies, a large collection of tweets propagating misinformation about two vaccines: the COVID-19 vaccines and the Human Papillomavirus (HPV) vaccines. Misinformation targets are organized in vaccine-specific taxonomies, which reveal the misinformation themes and concerns. The ontological commitments of the misinformation taxonomies provide an understanding of which misinformation themes and concerns dominate the discourse about the two vaccines covered in VaccineLies. The organization into training, testing and development sets of VaccineLies invites the development of novel supervised methods for detecting misinformation on Twitter and identifying the stance towards it. Furthermore, VaccineLies can be a stepping stone for the development of datasets focusing on misinformation targeting additional vaccines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,160 |
inproceedings | turban-kruschwitz-2022-tackling | Tackling Irony Detection using Ensemble Classifiers | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.754/ | Turban, Christoph and Kruschwitz, Udo | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6976--6984 | Automatic approaches to irony detection have been of interest to the NLP community for a long time, yet, state-of-the-art approaches still fall way short of what one would consider a desirable performance. In part this is due to the inherent difficulty of the problem. However, in recent years ensembles of transformer-based approaches have emerged as a promising direction to push the state of the art forward in a wide range of NLP applications. A different, more recent, development is the automatic augmentation of training data. In this paper we will explore both these directions for the task of irony detection in social media. Using the common SemEval 2018 Task 3 benchmark collection we demonstrate that transformer models are well suited in ensemble classifiers for the task at hand. In the multi-class classification task we observe statistically significant improvements over strong baselines. For binary classification we achieve performance that is on par with state-of-the-art alternatives. The examined data augmentation strategies showed an effect, but are not decisive for good results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,161 |
inproceedings | aye-mar-shirai-2022-automatic | Automatic Construction of an Annotated Corpus with Implicit Aspects | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.755/ | Aye Mar, Aye and Shirai, Kiyoaki | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6985--6991 | Aspect-based sentiment analysis (ABSA) is a task that involves classifying the polarity of aspects of the products or services described in users' reviews. Most previous work on ABSA has focused on explicit aspects, which appear as explicit words or phrases in the sentences of the review. However, users often express their opinions toward the aspects indirectly or implicitly, in which case the specific name of an aspect does not appear in the review. The current datasets used for ABSA are mainly annotated with explicit aspects. This paper proposes a novel method for constructing a corpus that is automatically annotated with implicit aspects. The main idea is that sentences containing explicit and implicit aspects share a similar context. First, labeled sentences with explicit aspects and unlabeled sentences that include implicit aspects are collected. Next, clustering is performed on these sentences so that similar sentences are merged into the same cluster. Finally, the explicit aspects are propagated to the unlabeled sentences in the same cluster, in order to construct a labeled dataset containing implicit aspects. The results of our experiments on mobile phone reviews show that our method of identifying the labels of implicit aspects achieves a maximum accuracy of 82{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,162 |
inproceedings | ray-etal-2022-multimodal | A Multimodal Corpus for Emotion Recognition in Sarcasm | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.756/ | Ray, Anupama and Mishra, Shubham and Nunna, Apoorva and Bhattacharyya, Pushpak | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 6992--7003 | While sentiment and emotion analysis have been studied extensively, the relationship between sarcasm and emotion has largely remained unexplored. A sarcastic expression may have a variety of underlying emotions. For example, {\textquotedblleft}I love being ignored{\textquotedblright} belies sadness, while {\textquotedblleft}my mobile is fabulous with a battery backup of only 15 minutes!{\textquotedblright} expresses frustration. Detecting the emotion behind a sarcastic expression is non-trivial yet an important task. We undertake the task of detecting the emotion in a sarcastic statement, which to the best of our knowledge, is hitherto unexplored. We start with the recently released multimodal sarcasm detection dataset (MUStARD) pre-annotated with 9 emotions. We identify and correct 343 incorrect emotion labels (out of 690). We double the size of the dataset, label it with emotions along with valence and arousal which are important indicators of emotional intensity. Finally, we label each sarcastic utterance with one of the four sarcasm types-Propositional, Embedded, Likeprefixed and Illocutionary, with the goal of advancing sarcasm detection research. Exhaustive experimentation with multimodal (text, audio, and video) fusion models establishes a benchmark for exact emotion recognition in sarcasm and outperforms the state-of-art sarcasm detection. We release the dataset enriched with various annotations and the code for research purposes: \url{https://github.com/apoorva-nunna/MUStARD_Plus_Plus} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,163 |
inproceedings | tammewar-etal-2022-annotation | Annotation of Valence Unfolding in Spoken Personal Narratives | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.757/ | Tammewar, Aniruddha and Braun, Franziska and Roccabruna, Gabriel and Bayerl, Sebastian and Riedhammer, Korbinian and Riccardi, Giuseppe | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7004--7013 | Personal Narrative (PN) is the recollection of individuals' life experiences, events, and thoughts along with the associated emotions in the form of a story. Compared to other genres such as social media texts or microblogs, where people write about experienced events or products, the spoken PNs are complex to analyze and understand. They are usually long and unstructured, involving multiple and related events, characters as well as thoughts and emotions associated with events, objects, and persons. In spoken PNs, emotions are conveyed by changing the speech signal characteristics as well as the lexical content of the narrative. In this work, we annotate a corpus of spoken personal narratives, with the emotion valence using discrete values. The PNs are segmented into speech segments, and the annotators annotate them in the discourse context, with values on a 5-point bipolar scale ranging from -2 to +2 (0 for neutral). In this way, we capture the unfolding of the PNs events and changes in the emotional state of the narrator. We perform an in-depth analysis of the inter-annotator agreement, the relation between the label distribution w.r.t. the stimulus (positive/negative) used for the elicitation of the narrative, and compare the segment-level annotations to a baseline continuous annotation. We find that the neutral score plays an important role in the agreement. We observe that it is easy to differentiate the positive from the negative valence while the confusion with the neutral label is high. Keywords: Personal Narratives, Emotion Annotation, Segment Level Annotation | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,164 |
inproceedings | nakayama-etal-2022-large | A Large-Scale {J}apanese Dataset for Aspect-based Sentiment Analysis | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.758/ | Nakayama, Yuki and Murakami, Koji and Kumar, Gautam and Bhingardive, Sudha and Hardaway, Ikuko | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7014--7021 | There has been significant progress in the field of sentiment analysis. However, aspect-based sentiment analysis (ABSA) has not been explored in the Japanese language even though it has a huge scope in many natural language processing applications such as 1) tracking sentiment towards products, movies, politicians etc; 2) improving customer relation models. The main reason behind this is that there is no standard Japanese dataset available for ABSA task. In this paper, we present the first standard Japanese dataset for the hotel reviews domain. The proposed dataset contains 53,192 review sentences with seven aspect categories and two polarity labels. We perform experiments on this dataset using popular ABSA approaches and report error analysis. Our experiments show that contextual models such as BERT works very well for the ABSA task in the Japanese language and also show the need to focus on other NLP tasks for better performance through our error analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,165 |
inproceedings | suzuki-etal-2022-japanese | A {J}apanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.759/ | Suzuki, Haruya and Miyauchi, Yuto and Akiyama, Kazuki and Kajiwara, Tomoyuki and Ninomiya, Takashi and Takemura, Noriko and Nakashima, Yuta and Nagahara, Hajime | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7022--7028 | We annotate 35,000 SNS posts with both the writer`s subjective sentiment polarity labels and the reader`s objective ones to construct a Japanese sentiment analysis dataset. Our dataset includes intensity labels (\textit{none}, \textit{weak}, \textit{medium}, and \textit{strong}) for each of the eight basic emotions by Plutchik (\textit{joy}, \textit{sadness}, \textit{anticipation}, \textit{surprise}, \textit{anger}, \textit{fear}, \textit{disgust}, and \textit{trust}) as well as sentiment polarity labels (\textit{strong positive}, \textit{positive}, \textit{neutral}, \textit{negative}, and \textit{strong negative}). Previous studies on emotion analysis have studied the analysis of basic emotions and sentiment polarity independently. In other words, there are few corpora that are annotated with both basic emotions and sentiment polarity. Our dataset is the first large-scale corpus to annotate both of these emotion labels, and from both the writer`s and reader`s perspectives. In this paper, we analyze the relationship between basic emotion intensity and sentiment polarity on our dataset and report the results of benchmarking sentiment polarity classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,166 |
inproceedings | qin-etal-2022-complementary | Complementary Learning of Aspect Terms for Aspect-based Sentiment Analysis | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.760/ | Qin, Han and Tian, Yuanhe and Xia, Fei and Song, Yan | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7029--7039 | Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards a given aspect term in a sentence on the fine-grained level, which usually requires a good understanding of contextual information, especially appropriately distinguishing of a given aspect and its contexts, to achieve good performance. However, most existing ABSA models pay limited attention to the modeling of the given aspect terms and thus result in inferior results when a sentence contains multiple aspect terms with contradictory sentiment polarities. In this paper, we propose to improve ABSA by complementary learning of aspect terms, which serves as a supportive auxiliary task to enhance ABSA by explicitly recovering the aspect terms from each input sentence so as to better understand aspects and their contexts. Particularly, a discriminator is also introduced to further improve the learning process by appropriately balancing the impact of aspect recovery to sentiment prediction. Experimental results on five widely used English benchmark datasets for ABSA demonstrate the effectiveness of our approach, where state-of-the-art performance is observed on all datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,167 |
inproceedings | bose-su-2022-deep | Deep One-Class Hate Speech Detection Model | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.761/ | Bose, Saugata and Su, Dr. Guoxin | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7040--7048 | Hate speech detection for social media posts is considered as a binary classification problem in existing approaches, largely neglecting distinct attributes of hate speeches from other sentimental types such as {\textquotedblleft}aggressive{\textquotedblright} and {\textquotedblleft}racist{\textquotedblright}. As these sentimental types constitute a significant major portion of data, the classification performance is compromised. Moreover, those classifiers often do not generalize well across different datasets due to a relatively small number of hate-class samples. In this paper, we adopt a one-class perspective for hate speech detection, where the detection classifier is trained with hate-class samples only. Our model employs a BERT-BiLSTM module for feature extraction and a one-class SVM for classification. A comprehensive evaluation with four benchmarking datasets demonstrates the better performance of our model than existing approaches, as well as the advantage of training our model with a combination of the four datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,168 |
inproceedings | barriere-etal-2022-opinions | Opinions in Interactions : New Annotations of the {SEMAINE} Database | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.762/ | Barriere, Valentin and Essid, Slim and Clavel, Chlo{\'e} | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7049--7055 | In this paper, we present the process we used in order to collect new annotations of opinions over the multimodal corpus SEMAINE composed of dyadic interactions. The dataset had already been annotated continuously in two affective dimensions related to the emotions: Valence and Arousal. We annotated the part of SEMAINE called \textit{Solid SAL} composed of 79 interactions between a user and an operator playing the role of a virtual agent designed to engage a person in a sustained, emotionally colored conversation. We aligned the audio at the word level using the available high-quality manual transcriptions. The annotated dataset contains 5627 speech turns for a total of 73,944 words, corresponding to 6 hours 20 minutes of dyadic interactions. Each interaction has been labeled by three annotators at the speech turn level following a three-step process. This method allows us to obtain a precise annotation regarding the opinion of a speaker. We obtain thus a dataset dense in opinions, with more than 48{\%} of the annotated speech turns containing at least one opinion. We then propose a new baseline for the detection of opinions in interactions improving slightly a state of the art model with RoBERTa embeddings. The obtained results on the database are promising with a F1-score at 0.72. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,169 |
inproceedings | shangipour-ataei-etal-2022-pars | Pars-{ABSA}: a Manually Annotated Aspect-based Sentiment Analysis Benchmark on {F}arsi Product Reviews | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.763/ | Shangipour ataei, Taha and Darvishi, Kamyar and Javdan, Soroush and Minaei-Bidgoli, Behrouz and Eetemadi, Sauleh | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7056--7060 | Due to the increased availability of online reviews, sentiment analysis witnessed a thriving interest from researchers. Sentiment analysis is a computational treatment of sentiment used to extract and understand the opinions of authors. While many systems were built to predict the sentiment of a document or a sentence, many others provide the necessary detail on various aspects of the entity (i.e., aspect-based sentiment analysis). Most of the available data resources were tailored to English and the other popular European languages. Although Farsi is a language with more than 110 million speakers, to the best of our knowledge, there is a lack of proper public datasets on aspect-based sentiment analysis for Farsi. This paper provides a manually annotated Farsi dataset, Pars-ABSA, annotated and verified by three native Farsi speakers. The dataset consists of 5,114 positive, 3,061 negative and 1,827 neutral data samples from 5,602 unique reviews. Moreover, as a baseline, this paper reports the performance of some aspect-based sentiment analysis methods focusing on transfer learning on Pars-ABSA. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,170 |
inproceedings | -etal-2022-hindimd | {H}indi{MD}: A Multi-domain Corpora for Low-resource Sentiment Analysis | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.764/ | Mamta and Ekbal, Asif and Bhattacharyya, Pushpak and Saha, Tista and Kumar, Alka and Srivastava, Shikha | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7061--7070 | Social media platforms such as Twitter have evolved into a vast information sharing platform, allowing people from a variety of backgrounds and expertise to share their opinions on numerous events such as terrorism, narcotics and many other social issues. People sometimes misuse the power of social media for their agendas, such as illegal trades and negatively influencing others. Because of this, sentiment analysis has won the interest of a lot of researchers to widely analyze public opinion for social media monitoring. Several benchmark datasets for sentiment analysis across a range of domains have been made available, especially for high-resource languages. A few datasets are available for low-resource Indian languages like Hindi, such as movie reviews and product reviews, which do not address the current need for social media monitoring. In this paper, we address the challenges of sentiment analysis in Hindi and socially relevant domains by introducing a balanced corpus annotated with the sentiment classes, viz. positive, negative and neutral. To show the effective usage of the dataset, we build several deep learning based models and establish them as the baselines for further research in this direction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,171 |
inproceedings | pavlopoulos-etal-2022-sentiment | Sentiment Analysis of {H}omeric Text: The 1st Book of {I}liad | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.765/ | Pavlopoulos, John and Xenos, Alexandros and Picca, Davide | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7071--7077 | Sentiment analysis studies are focused more on online customer reviews or social media, and less on literary studies. The problem is greater for ancient languages, where the linguistic expression of sentiments may diverge from modern linguistic forms. This work presents the outcome of a sentiment annotation task of the first Book of Iliad, an ancient Greek poem. The annotators were provided with verses translated into modern Greek and they annotated the perceived emotions and sentiments verse by verse. By estimating the fraction of annotators that found a verse as belonging to a specific sentiment class, we model the poem`s perceived sentiment as a multi-variate time series. By experimenting with a state of the art deep learning masked language model, pre-trained on modern Greek and fine-tuned to estimate the sentiment of our data, we registered a mean squared error of 0.063. This low error indicates that sentiment estimators built on our dataset can potentially be used as mechanical annotators, hence facilitating the distant reading of Homeric text. Our dataset is released for public use. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,172 |
inproceedings | safari-etal-2022-persian | The {P}ersian Dependency Treebank Made Universal | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.766/ | Safari, Pegah and Rasooli, Mohammad Sadegh and Moloodi, Amirsaeid and Nourian, Alireza | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7078--7087 | We describe an automatic method for converting the Persian Dependency Treebank (Rasooli et al., 2013) to Universal Dependencies. This treebank contains 29107 sentences. Our experiments along with manual linguistic analysis show that our data is more compatible with Universal Dependencies than the Uppsala Persian Universal Dependency Treebank (Seraji et al., 2016), larger in size and more diverse in vocabulary. Our data brings in labeled attachment F-score of 85.2 in supervised parsing. Also, our delexicalized Persian-to-English parser transfer experiments show that a parsing model trained on our data is {\ensuremath{\approx}}2{\%} absolutely more accurate than that of Seraji et al. (2016) in terms of labeled attachment score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,173 |
inproceedings | baxi-bhatt-2022-gujmorph | {G}uj{MORPH} - A Dataset for Creating {G}ujarati Morphological Analyzer | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.767/ | Baxi, Jatayu and Bhatt, Brijesh | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7088--7095 | Computational morphology deals with the processing of a language at the word level. A morphological analyzer is a key linguistic word-level tool that returns all the constituent morphemes and their grammatical categories associated with a particular word form. For the highly inflectional and low resource languages, the creation of computational morphology-related tools is a challenging task due to the unavailability of underlying key resources. In this paper, we discuss the creation of an annotated morphological dataset- GujMORPH for the Gujarati - an indo-aryan language. For the creation of this dataset, we studied language grammar, word formation rules, and suffix attachments in depth. This dataset contains 16,527 unique inflected words along with their morphological segmentation and grammatical feature tagging information. It is a first of its kind dataset for the Gujarati language and can be used to develop morphological analyzer and generator models. The dataset is annotated in the standard Unimorph schema and evaluated on the baseline system. We also describe the tool used to annotate the data in the standard format. The dataset is released publicly along with the library. Using this library, the data can be obtained in a format that can be directly used to train any machine learning model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,174 |
inproceedings | kabiri-etal-2022-informal | Informal {P}ersian {U}niversal {D}ependency Treebank | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.768/ | Kabiri, Roya and Karimi, Simin and Surdeanu, Mihai | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7096--7105 | This paper presents the phonological, morphological, and syntactic distinctions between formal and informal Persian, showing that these two variants have fundamental differences that cannot be attributed solely to pronunciation discrepancies. Given that informal Persian exhibits particular characteristics, any computational model trained on formal Persian is unlikely to transfer well to informal Persian, necessitating the creation of dedicated treebanks for this variety. We thus detail the development of the open-source Informal Persian Universal Dependency Treebank, a new treebank annotated within the Universal Dependencies scheme. We then investigate the parsing of informal Persian by training two dependency parsers on existing formal treebanks and evaluating them on out-of-domain data, i.e. the development set of our informal treebank. Our results show that parsers experience a substantial performance drop when we move across the two domains, as they face more unknown tokens and structures and fail to generalize well. Furthermore, the dependency relations whose performance deteriorates the most represent the unique properties of the informal variant. The ultimate goal of this study that demonstrates a broader impact is to provide a stepping-stone to reveal the significance of informal variants of languages, which have been widely overlooked in natural language processing tools across languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,175 |
inproceedings | zupon-etal-2022-automatic | Automatic Correction of Syntactic Dependency Annotation Differences | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.769/ | Zupon, Andrew and Carnie, Andrew and Hammond, Michael and Surdeanu, Mihai | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7106--7112 | Annotation inconsistencies between data sets can cause problems for low-resource NLP, where noisy or inconsistent data cannot be easily replaced. We propose a method for automatically detecting annotation mismatches between dependency parsing corpora, along with three related methods for automatically converting the mismatches. All three methods rely on comparing unseen examples in a new corpus with similar examples in an existing corpus. These three methods include a simple lexical replacement using the most frequent tag of the example in the existing corpus, a GloVe embedding-based replacement that considers related examples, and a BERT-based replacement that uses contextualized embeddings to provide examples fine-tuned to our data. We evaluate these conversions by retraining two dependency parsers{---}Stanza and Parsing as Tagging (PaT){---}on the converted and unconverted data. We find that applying our conversions yields significantly better performance in many cases. Some differences observed between the two parsers are observed. Stanza has a more complex architecture with a quadratic algorithm, taking longer to train, but it can generalize from less data. The PaT parser has a simpler architecture with a linear algorithm, speeding up training but requiring more training data to reach comparable or better performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,176 |
inproceedings | sato-etal-2022-building | Building Large-Scale {J}apanese Pronunciation-Annotated Corpora for Reading Heteronymous Logograms | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.770/ | Sato, Fumikazu and Yoshinaga, Naoki and Kitsuregawa, Masaru | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7113--7121 | Although screen readers enable visually impaired people to read written text via speech, the ambiguities in pronunciations of heteronyms cause wrong reading, which has a serious impact on the text understanding. Especially in Japanese, there are many common heteronyms expressed by logograms (Chinese characters or kanji) that have totally different pronunciations (and meanings). In this study, to improve the accuracy of pronunciation prediction, we construct two large-scale Japanese corpora that annotate kanji characters with their pronunciations. Using existing language resources on i) book titles compiled by the National Diet Library and ii) the books in a Japanese digital library called Aozora Bunko and their Braille translations, we develop two large-scale pronunciation-annotated corpora for training pronunciation prediction models. We first extract sentence-level alignments between the Aozora Bunko text and its pronunciation converted from the Braille data. We then perform dictionary-based pattern matching based on morphological dictionaries to find word-level pronunciation alignments. We have ultimately obtained the Book Title corpus with 336M characters (16.4M book titles) and the Aozora Bunko corpus with 52M characters (1.6M sentences). We analyzed pronunciation distributions for 203 common heteronyms, and trained a BERT-based pronunciation prediction model for 93 heteronyms, which achieved an average accuracy of 0.939. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,177 |
inproceedings | cho-etal-2022-stylekqc | {S}tyle{KQC}: A Style-Variant Paraphrase Corpus for {K}orean Questions and Commands | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.771/ | Cho, Won Ik and Moon, Sangwhan and Kim, Jongin and Kim, Seokmin and Kim, Nam Soo | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7122--7128 | Paraphrasing is often performed with less concern for controlled style conversion. Especially for questions and commands, style-variant paraphrasing can be crucial in tone and manner, which also matters with industrial applications such as dialog systems. In this paper, we attack this issue with a corpus construction scheme that simultaneously considers the core content and style of directives, namely intent and formality, for the Korean language. Utilizing manually generated natural language queries on six daily topics, we expand the corpus to formal and informal sentences by human rewriting and transferring. We verify the validity and industrial applicability of our approach by checking the adequate classification and inference performance that fit with conventional fine-tuning approaches, at the same time proposing a supervised formality transfer task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,178 |
inproceedings | tian-etal-2022-syntax | Syntax-driven Approach for Semantic Role Labeling | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.772/ | Tian, Yuanhe and Qin, Han and Xia, Fei and Song, Yan | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7129--7139 | As an important task to analyze the semantic structure of a sentence, semantic role labeling (SRL) aims to locate the semantic role (e.g., agent) of noun phrases with respect to a given predicate and thus plays an important role in downstream tasks such as dialogue systems. To achieve a better performance in SRL, a model is always required to have a good understanding of the context information. Although one can use advanced text encoder (e.g., BERT) to capture the context information, extra resources are also required to further improve the model performance. Considering that there are correlations between the syntactic structure and the semantic structure of the sentence, many previous studies leverage auto-generated syntactic knowledge, especially the dependencies, to enhance the modeling of context information through graph-based architectures, where limited attention is paid to other types of auto-generated knowledge. In this paper, we propose map memories to enhance SRL by encoding different types of auto-generated syntactic knowledge (i.e., POS tags, syntactic constituencies, and word dependencies) obtained from off-the-shelf toolkits. Experimental results on two English benchmark datasets for span-style SRL (i.e., CoNLL-2005 and CoNLL-2012) demonstrate the effectiveness of our approach, which outperforms strong baselines and achieves state-of-the-art results on CoNLL-2005. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,179 |
inproceedings | wolinski-etal-2022-herbert | {H}er{BERT} Based Language Model Detects Quantifiers and Their Semantic Properties in {P}olish | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.773/ | Woli{\'n}ski, Marcin and Nito{\'n}, Bart{\l}omiej and Kiera{\'s}, Witold and Szymanik, Jakub | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7140--7146 | The paper presents a tool for automatic marking up of quantifying expressions, their semantic features, and scopes. We explore the idea of using a BERT based neural model for the task (in this case HerBERT, a model trained specifically for Polish, is used). The tool is trained on a recent manually annotated Corpus of Polish Quantificational Expressions (Szymanik and Kiera{\'s}, 2022). We discuss how it performs against human annotation and present results of automatic annotation of 300 million sub-corpus of National Corpus of Polish. Our results show that language models can effectively recognise semantic category of quantification as well as identify key semantic properties of quantifiers, like monotonicity. Furthermore, the algorithm we have developed can be used for building semantically annotated quantifier corpora for other languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,180 |
inproceedings | bao-etal-2022-lexical | Lexical Resource Mapping via Translations | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.774/ | Bao, Hongchang and Hauer, Bradley and Kondrak, Grzegorz | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7147--7154 | Aligning lexical resources that associate words with concepts in multiple languages increases the total amount of semantic information that can be leveraged for various NLP tasks. We present a translation-based approach to mapping concepts across diverse resources. Our methods depend only on multilingual lexicalization information. When applied to align WordNet/BabelNet to CLICS and OmegaWiki, our methods achieve state-of-the-art accuracy, without any dependence on other sources of semantic knowledge. Since each word-concept pair corresponds to a unique sense of the word, we also demonstrate that the mapping task can be framed as word sense disambiguation. To facilitate future work, we release a set of high-precision WordNet-CLICS alignments, produced by combining three different mapping methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,181 |
inproceedings | takahashi-bollegala-2022-unsupervised | Unsupervised Attention-based Sentence-Level Meta-Embeddings from Contextualised Language Models | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.775/ | Takahashi, Keigo and Bollegala, Danushka | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7155--7163 | A variety of contextualised language models have been proposed in the NLP community, which are trained on diverse corpora to produce numerous Neural Language Models (NLMs). However, different NLMs have reported different levels of performances in downstream NLP applications when used as text representations. We propose a sentence-level meta-embedding learning method that takes independently trained contextualised word embedding models and learns a sentence embedding that preserves the complementary strengths of the input source NLMs. Our proposed method is unsupervised and is not tied to a particular downstream task, which makes the learnt meta-embeddings in principle applicable to different tasks that require sentence representations. Specifically, we first project the token-level embeddings obtained by the individual NLMs and learn attention weights that indicate the contributions of source embeddings towards their token-level meta-embeddings. Next, we apply mean and max pooling to produce sentence-level meta-embeddings from token-level meta-embeddings. Experimental results on semantic textual similarity benchmarks show that our proposed unsupervised sentence-level meta-embedding method outperforms previously proposed sentence-level meta-embedding methods as well as a supervised baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,182 |
inproceedings | khanal-etal-2022-identification | Identification of Fine-Grained Location Mentions in Crisis Tweets | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.776/ | Khanal, Sarthak and Traskowsky, Maria and Caragea, Doina | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7164--7173 | Identification of fine-grained location mentions in crisis tweets is central in transforming situational awareness information extracted from social media into actionable information. Most prior works have focused on identifying generic locations, without considering their specific types. To facilitate progress on the fine-grained location identification task, we assemble two tweet crisis datasets and manually annotate them with specific location types. The first dataset contains tweets from a mixed set of crisis events, while the second dataset contains tweets from the global COVID-19 pandemic. We investigate the performance of state-of-the-art deep learning models for sequence tagging on these datasets, in both in-domain and cross-domain settings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,183 |
inproceedings | vargas-etal-2022-hatebr | {H}ate{BR}: A Large Expert Annotated Corpus of {B}razilian {I}nstagram Comments for Offensive Language and Hate Speech Detection | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.777/ | Vargas, Francielle and Carvalho, Isabelle and Rodrigues de G{\'o}es, Fabiana and Pardo, Thiago and Benevenuto, Fabr{\'i}cio | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7174--7183 | Due to the severity of the social media offensive and hateful comments in Brazil, and the lack of research in Portuguese, this paper provides the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection. The HateBR corpus was collected from the comment section of Brazilian politicians' accounts on Instagram and manually annotated by specialists, reaching a high inter-annotator agreement. The corpus consists of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level classification (highly, moderately, and slightly offensive), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). We also implemented baseline experiments for offensive language and hate speech detection and compared them with a literature baseline. Results show that the baseline experiments on our corpus outperform the current state-of-the-art for the Portuguese language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,184 |
inproceedings | ji-etal-2022-mentalbert | {M}ental{BERT}: Publicly Available Pretrained Language Models for Mental Healthcare | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.778/ | Ji, Shaoxiong and Zhang, Tianlin and Ansari, Luna and Fu, Jie and Tiwari, Prayag and Cambria, Erik | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7184--7190 | Mental health is a critical issue in modern society, and mental disorders could sometimes turn to suicidal ideation without adequate treatment. Early detection of mental disorders and suicidal ideation from social content provides a potential way for effective social intervention. Recent advances in pretrained contextualized language representations have promoted the development of several domainspecific pretrained models and facilitated several downstream applications. However, there are no existing pretrained language models for mental healthcare. This paper trains and release two pretrained masked language models, i.e., MentalBERT and MentalRoBERTa, to benefit machine learning for the mental healthcare research community. Besides, we evaluate our trained domain-specific models and several variants of pretrained language models on several mental disorder detection benchmarks and demonstrate that language representations pretrained in the target domain improve the performance of mental health detection tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,185 |
inproceedings | liao-2022-leveraging | Leveraging Hashtag Networks for Multimodal Popularity Prediction of {I}nstagram Posts | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.779/ | Liao, Yu Yun | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7191--7198 | With the increasing commercial and social importance of Instagram in recent years, more researchers begin to take multimodal approaches to predict popular content on Instagram. However, existing popularity prediction approaches often reduce hashtags to simple features such as hashtag length or number of hashtags in a post, ignoring the structural and textual information that entangles between hashtags. In this paper, we propose a multimodal framework using post captions, image, hashtag network, and topic model to predict popular influencer posts in Taiwan. Specifically, the hashtag network is constructed as a homogenous graph using the co-occurrence relationship between hashtags, and we extract its structural information with GraphSAGE and semantic information with BERTopic. Finally, the prediction process is defined as a binary classification task (popular/unpopular) using neural networks. Our results show that the proposed framework incorporating hashtag network outperforms all baselines and unimodal models, while information captured from the hashtag network and topic model appears to be complementary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,186 |
inproceedings | jiang-etal-2022-annotating | Annotating the {T}weebank Corpus on Named Entity Recognition and Building {NLP} Models for Social Media Analysis | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.780/ | Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7199--7208 | Social media data such as Twitter messages ({\textquotedblleft}tweets{\textquotedblright}) pose a particular challenge to NLP systems because of their short, noisy, and colloquial nature. Tasks such as Named Entity Recognition (NER) and syntactic parsing require highly domain-matched training data for good performance. To date, there is no complete training corpus for both NER and syntactic analysis (e.g., part of speech tagging, dependency parsing) of tweets. While there are some publicly available annotated NLP datasets of tweets, they are only designed for individual tasks. In this study, we aim to create Tweebank-NER, an English NER corpus based on Tweebank V2 (TB2), train state-of-the-art (SOTA) Tweet NLP models on TB2, and release an NLP pipeline called Twitter-Stanza. We annotate named entities in TB2 using Amazon Mechanical Turk and measure the quality of our annotations. We train the Stanza pipeline on TB2 and compare with alternative NLP frameworks (e.g., FLAIR, spaCy) and transformer-based models. The Stanza tokenizer and lemmatizer achieve SOTA performance on TB2, while the Stanza NER tagger, part-of-speech (POS) tagger, and dependency parser achieve competitive performance against non-transformer models. The transformer-based models establish a strong baseline in Tweebank-NER and achieve the new SOTA performance in POS tagging and dependency parsing on TB2. We release the dataset and make both the Stanza pipeline and BERTweet-based models available {\textquotedblleft}off-the-shelf{\textquotedblright} for use in future Tweet NLP research. Our source code, data, and pre-trained models are available at: \url{https://github.com/social-machines/TweebankNLP}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,187 |
inproceedings | andy-etal-2022-happen | Did that happen? Predicting Social Media Posts that are Indicative of what happened in a scene: A case study of a {TV} show | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.781/ | Andy, Anietie and Kriz, Reno and Guntuku, Sharath Chandra and Wijaya, Derry Tanti and Callison-Burch, Chris | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7209--7214 | While popular Television (TV) shows are airing, some users interested in these shows publish social media posts about the show. Analyzing social media posts related to a TV show can be beneficial for gaining insights about what happened during scenes of the show. This is a challenging task partly because a significant number of social media posts associated with a TV show or event may not clearly describe what happened during the event. In this work, we propose a method to predict social media posts (associated with scenes of a TV show) that are indicative of what transpired during the scenes of the show. We evaluate our method on social media (Twitter) posts associated with an episode of a popular TV show, Game of Thrones. We show that for each of the identified scenes, with high AUC`s, our method can predict posts that are indicative of what happened in a scene from those that are not-indicative. Based on Twitters policy, we will make the Tweeter ID`s of the Twitter posts used for this work publicly available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,188 |
inproceedings | kodali-etal-2022-hashset | {H}ash{S}et - A Dataset For Hashtag Segmentation | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.782/ | Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7215--7219 | Hashtag segmentation is the task of breaking a hashtag into its constituent tokens. Hashtags often encode the essence of user-generated posts, along with information like topic and sentiment, which are useful in downstream tasks. Hashtags prioritize brevity and are written in unique ways - transliterating and mixing languages, spelling variations, creative named entities. Benchmark datasets used for the hashtag segmentation task - STAN, BOUN - are small and extracted from a single set of tweets. However, datasets should reflect the variations in writing styles of hashtags and account for domain and language specificity, failing which the results will misrepresent model performance. We argue that model performance should be assessed on a wider variety of hashtags, and datasets should be carefully curated. To this end, we propose HashSet, a dataset comprising of: a) 1.9k manually annotated dataset; b) 3.3M loosely supervised dataset. HashSet dataset is sampled from a different set of tweets when compared to existing datasets and provides an alternate distribution of hashtags to build and validate hashtag segmentation models. We analyze the performance of SOTA models for Hashtag Segmentation, and show that the proposed dataset provides an alternate set of hashtags to train and assess models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,189 |
inproceedings | tran-etal-2022-using | Using Convolution Neural Network with {BERT} for Stance Detection in {V}ietnamese | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.783/ | Tran, Oanh and Phung, Anh Cong and Ngo, Bach Xuan | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7220--7225 | Stance detection is the task of automatically eliciting stance information towards a specific claim made by a primary author. While most studies have been done for high-resource languages, this work is dedicated to a low-resource language, namely Vietnamese. In this paper, we propose an architecture using transformers to detect stances in Vietnamese claims. This architecture exploits BERT to extract contextual word embeddings instead of using traditional word2vec models. Then, these embeddings are fed into CNN networks to extract local features to train the stance detection model. We performed extensive comparison experiments to show the effectiveness of the proposed method on a public dataset1 Experimental results show that this proposed model outperforms the previous methods by a large margin. It yielded an accuracy score of 75.57{\%} averaged on four labels. This sets a new SOTA result for future research on this interesting problem in Vietnamese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,190 |
inproceedings | murayama-etal-2022-annotation | Annotation-Scheme Reconstruction for {\textquotedblleft}Fake News{\textquotedblright} and {J}apanese Fake News Dataset | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.784/ | Murayama, Taichi and Hisada, Shohei and Uehara, Makoto and Wakamiya, Shoko and Aramaki, Eiji | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7226--7234 | Fake news provokes many societal problems; therefore, there has been extensive research on fake news detection tasks to counter it. Many fake news datasets were constructed as resources to facilitate this task. Contemporary research focuses almost exclusively on the factuality aspect of the news. However, this aspect alone is insufficient to explain {\textquotedblleft}fake news,{\textquotedblright} which is a complex phenomenon that involves a wide range of issues. To fully understand the nature of each instance of fake news, it is important to observe it from various perspectives, such as the intention of the false news disseminator, the harmfulness of the news to our society, and the target of the news. We propose a novel annotation scheme with fine-grained labeling based on detailed investigations of existing fake news datasets to capture these various aspects of fake news. Using the annotation scheme, we construct and publish the first Japanese fake news dataset. The annotation scheme is expected to provide an in-depth understanding of fake news. We plan to build datasets for both Japanese and other languages using our scheme. Our Japanese dataset is published at \url{https://hkefka385.github.io/dataset/fakenews-japanese/}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,191 |
inproceedings | perez-etal-2022-robertuito | {R}o{BERT}uito: a pre-trained language model for social media text in {S}panish | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.785/ | P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alonso Alemany, Laura and Luque, Franco M. | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7235--7243 | Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,192 |
inproceedings | ito-etal-2022-construction | Construction of Responsive Utterance Corpus for Attentive Listening Response Production | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.786/ | Ito, Koichiro and Murata, Masaki and Ohno, Tomohiro and Matsubara, Shigeki | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7244--7252 | In Japan, the number of single-person households, particularly among the elderly, is increasing. Consequently, opportunities for people to narrate are being reduced. To address this issue, conversational agents, e.g., communication robots and smart speakers, are expected to play the role of the listener. To realize these agents, this paper describes the collection of conversational responses by listeners that demonstrate attentive listening attitudes toward narrative speakers, and a method to annotate existing narrative speech with responsive utterances is proposed. To summarize, 148,962 responsive utterances by 11 listeners were collected in a narrative corpus comprising 13,234 utterance units. The collected responsive utterances were analyzed in terms of response frequency, diversity, coverage, and naturalness. These results demonstrated that diverse and natural responsive utterances were collected by the proposed method in an efficient and comprehensive manner. To demonstrate the practical use of the collected responsive utterances, an experiment was conducted, in which response generation timings were detected in narratives. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,193 |
inproceedings | song-etal-2022-speak | Speak: A Toolkit Using {A}mazon {M}echanical {T}urk to Collect and Validate Speech Audio Recordings | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.787/ | Song, Christopher and Harwath, David and Alhanai, Tuka and Glass, James | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7253--7258 | We present Speak, a toolkit that allows researchers to crowdsource speech audio recordings using Amazon Mechanical Turk (MTurk). Speak allows MTurk workers to submit speech recordings in response to a task prompt and stimulus (e.g. image, text excerpt, audio file) defined by researchers, a functionality that is not natively offered by MTurk at the time of writing this paper. Importantly, the toolkit employs numerous measures to ensure that speech recordings collected are of adequate quality, in order to avoid accepting unusable data and prevent abuse/fraud. Speak has demonstrated utility, having collected over 600,000 recordings to date. The toolkit is open-source and available for download. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,194 |
inproceedings | lovenia-etal-2022-ascend | {ASCEND}: A Spontaneous {C}hinese-{E}nglish Dataset for Code-switching in Multi-turn Conversation | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.788/ | Lovenia, Holy and Cahyawijaya, Samuel and Winata, Genta and Xu, Peng and Xu, Yan and Liu, Zihan and Frieske, Rita and Yu, Tiezheng and Dai, Wenliang and Barezi, Elham J. and Chen, Qifeng and Ma, Xiaojuan and Shi, Bertram and Fung, Pascale | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7259--7268 | Code-switching is a speech phenomenon occurring when a speaker switches language during a conversation. Despite the spontaneous nature of code-switching in conversational spoken language, most existing works collect code-switching data from read speech instead of spontaneous speech. ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality Mandarin Chinese-English code-switching corpus built on spontaneous multi-turn conversational dialogue sources collected in Hong Kong. We report ASCEND`s design and procedure for collecting the speech data, including annotations. ASCEND consists of 10.62 hours of clean speech, collected from 23 bilingual speakers of Chinese and English. Furthermore, we conduct baseline experiments using pre-trained wav2vec 2.0 models, achieving a best performance of 22.69{\%} character error rate and 27.05{\%} mixed error rate. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,195 |
inproceedings | al-tamimi-etal-2022-romanization | A {R}omanization System and {W}eb{MAUS} Aligner for {A}rabic Varieties | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.789/ | Al-Tamimi, Jalal and Schiel, Florian and Khattab, Ghada and Sokhey, Navdeep and Amazouz, Djegdjiga and Dallak, Abdulrahman and Moussa, Hajar | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7269--7276 | This paper presents the results of an ongoing collaboration to develop an Arabic variety-independent romanization system that aims to homogenize and simplify the romanization of the Arabic script, and introduces an Arabic variety-independent WebMAUS service offering a free to use forced-alignment service fully integrated within the WebMAUS services. We present the rationale for developing such a system, highlighting the need for a detailed romanization system with graphemes corresponding to the phonemic short and long vowels/consonants in Arabic varieties. We describe how the acoustic model was created, followed by several hands-on recipes for applying the forced alignment webservice either online or programatically. Finally, we discuss some of the issues we faced during the development of the system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,196 |
inproceedings | sikasote-anastasopoulos-2022-bembaspeech | {B}emba{S}peech: A Speech Recognition Corpus for the {B}emba Language | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.790/ | Sikasote, Claytone and Anastasopoulos, Antonios | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7277--7283 | We present a preprocessed, ready-to-use automatic speech recognition corpus, BembaSpeech, consisting over 24 hours of read speech in the Bemba language, a written but low-resourced language spoken by over 30{\%} of the population in Zambia. To assess its usefulness for training and testing ASR systems for Bemba, we explored different approaches; supervised pre-training (training from scratch), cross-lingual transfer learning from a monolingual English pre-trained model using DeepSpeech on the portion of the dataset and fine-tuning large scale self-supervised Wav2Vec2.0 based multilingual pre-trained models on the complete BembaSpeech corpus. From our experiments, the 1 billion XLS-R parameter model gives the best results. The model achieves a word error rate (WER) of 32.91{\%}, results demonstrating that model capacity significantly improves performance and that multilingual pre-trained models transfers cross-lingual acoustic representation better than monolingual pre-trained English model on the BembaSpeech for the Bemba ASR. Lastly, results also show that the corpus can be used for building ASR systems for Bemba language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,197 |
inproceedings | lai-etal-2022-behancecc | {B}ehance{CC}: A {C}hit{C}hat Detection Dataset For Livestreaming Video Transcripts | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.791/ | Lai, Viet and Pouran Ben Veyseh, Amir and Dernoncourt, Franck and Nguyen, Thien | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7284--7290 | Livestreaming videos have become an effective broadcasting method for both video sharing and educational purposes. However, livestreaming videos contain a considerable amount of off-topic content (i.e., up to 50{\%}) which introduces significant noises and data load to downstream applications. This paper presents BehanceCC, a new human-annotated benchmark dataset for off-topic detection (also called chitchat detection) in livestreaming video transcripts. In addition to describing the challenges of the dataset, our extensive experiments of various baselines reveal the complexity of chitchat detection for livestreaming videos and suggest potential future research directions for this task. The dataset will be made publicly available to foster research in this area. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,198 |
inproceedings | li-etal-2022-adversarial | Adversarial Speech Generation and Natural Speech Recovery for Speech Content Protection | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.792/ | Li, Sheng and Li, Jiyi and Liu, Qianying and Gong, Zhuo | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7291--7297 | With the advent of the General Data Protection Regulation (GDPR) and increasing privacy concerns, the sharing of speech data is faced with significant challenges. Protecting the sensitive content of speech is the same important as the voiceprint. This paper proposes an effective speech content protection method by constructing a frame-by-frame adversarial speech generation system. We revisited the adversarial examples generating method in the recent machine learning field and selected the phonetic state sequence of sensitive speech for the adversarial examples generation. We build an adversarial speech collection. Moreover, based on the speech collection, we proposed a neural network-based frame-by-frame mapping method to recover the speech content by converting from the adversarial speech to the human speech. Experiment shows our proposed method can encode and recover any sensitive audio, and our method is easy to be conducted with publicly available resources of speech recognition technology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,199 |
inproceedings | forjo-etal-2022-new | A new {E}uropean {P}ortuguese corpus for the study of Psychosis through speech analysis | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.793/ | Forj{\'o}, Maria and Neto, Daniel and Abad, Alberto and Pinto, HSofia and Gago, Joaquim | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7298--7304 | Psychosis is a clinical syndrome characterized by the presence of symptoms such as hallucinations, thought disorder and disorganized speech. Several studies have used machine learning, combined with speech and natural language processing methods to aid in the diagnosis process of this disease. This paper describes the creation of the first European Portuguese corpus for the identification of the presence of speech characteristics of psychosis, which contains samples of 92 participants, 56 controls and 36 individuals diagnosed with psychosis and medicated. The corpus was used in a set of experiments that allowed identifying the most promising feature set to perform the classification: the combination of acoustic and speech metric features. Several classifiers were implemented to study which ones entailed the best performance depending on the task and feature set. The most promising results obtained for the entire corpus were achieved when identifying individuals with a Multi-Layer Perceptron classifier and reached an 87.5{\%} accuracy. Focusing on the gender dependent results, the overall best results were 90.9{\%} and 82.9{\%} accuracy, for female and male subjects respectively. Lastly, the experiments performed lead us to conjecture that spontaneous speech presents more identifiable characteristics than read speech to differentiate healthy and patients diagnosed with psychosis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,200 |
inproceedings | sini-etal-2022-investigating | Investigating Inter- and Intra-speaker Voice Conversion using Audiobooks | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.794/ | Sini, Aghilas and Lolive, Damien and Barbot, Nelly and Alain, Pierre | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7305--7313 | Audiobook readers play with their voices to emphasize some text passages, highlight discourse changes or significant events, or in order to make listening easier and entertaining. A dialog is a central passage in audiobooks where the reader applies significant voice transformation, mainly prosodic modifications, to realize character properties and changes. However, these intra-speaker modifications are hard to reproduce with simple text-to-speech synthesis. The manner of vocalizing characters involved in a given story depends on the text style and differs from one speaker to another. In this work, this problem is investigated through the prism of voice conversion. We propose to explore modifying the narrator`s voice to fit the context of the story, such as the character who is speaking, using voice conversion. To this end, two complementary experiments are designed: the first one aims to assess the quality of our Phonetic PosteriorGrams (PPG)-based voice conversion system using parallel data. Subjective evaluations with naive raters are conducted to estimate the quality of the signal generated and the speaker similarity. The second experiment applies an intra-speaker voice conversion, considering narration passages and direct speech passages as two distinct speakers. Data are then nonparallel and the dissimilarity between character and narrator is subjectively measured. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,201 |
inproceedings | rolland-etal-2022-multilingual | Multilingual Transfer Learning for Children Automatic Speech Recognition | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.795/ | Rolland, Thomas and Abad, Alberto and Cucchiarini, Catia and Strik, Helmer | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7314--7320 | Despite recent advances in automatic speech recognition (ASR), the recognition of children`s speech still remains a significant challenge. This is mainly due to the high acoustic variability and the limited amount of available training data. The latter problem is particularly evident in languages other than English, which are usually less-resourced. In the current paper, we address children ASR in a number of less-resourced languages by combining several small-sized children speech corpora from these languages. In particular, we address the following research question: Does a novel two-step training strategy in which multilingual learning is followed by language-specific transfer learning outperform conventional single language/task training for children speech, as well as multilingual and transfer learning alone? Based on previous experimental results with English, we hypothesize that multilingual learning provides a better generalization of the underlying characteristics of children`s speech. Our results provide a positive answer to our research question, by showing that using transfer learning on top of a multilingual model for an unseen language outperforms conventional single language-specific learning. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,202 |
inproceedings | pouran-ben-veyseh-etal-2022-behanceqa | {B}ehance{QA}: A New Dataset for Identifying Question-Answer Pairs in Video Transcripts | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.796/ | Pouran Ben Veyseh, Amir and Lai, Viet and Dernoncourt, Franck and Nguyen, Thien | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7321--7327 | Question-Answer (QA) is one of the effective methods for storing knowledge which can be used for future retrieval. As such, identifying mentions of questions and their answers in text is necessary for a knowledge construction and retrieval systems. In the literature, QA identification has been well studied in the NLP community. However, most of the prior works are restricted to formal written documents such as papers or websites. As such, Questions and Answers that are presented in informal/noisy documents have not been adequately studied. One of the domains that can significantly benefit from QA identification is the domain of livestreaming video transcripts that involve abundant QA pairs to provide valuable knowledge for future users and services. Since video transcripts are often transcribed automatically for scale, they are prone to errors. Combined with the informal nature of discussion in a video, prior QA identification systems might not be able to perform well in this domain. To enable comprehensive research in this domain, we present a large-scale QA identification dataset annotated by human over transcripts of 500 hours of streamed videos. We employ Behance.net to collect the videos and their automatically obtained transcripts. Furthermore, we conduct extensive analysis on the annotated dataset to understand the complexity of QA identification for livestreaming video transcripts. Our experiments show that the annotated dataset presents unique challenges for existing methods and more research is necessary to explore more effective methods. The dataset and the models developed in this work will be publicly released for future research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,203 |
inproceedings | dafnis-etal-2022-bidirectional | Bidirectional Skeleton-Based Isolated Sign Recognition using Graph Convolutional Networks | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.797/ | Dafnis, Konstantinos M. and Chroni, Evgenia and Neidle, Carol and Metaxas, Dimitri | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7328--7338 | To improve computer-based recognition from video of isolated signs from American Sign Language (ASL), we propose a new skeleton-based method that involves explicit detection of the start and end frames of signs, trained on the ASLLVD dataset; it uses linguistically relevant parameters based on the skeleton input. Our method employs a bidirectional learning approach within a Graph Convolutional Network (GCN) framework. We apply this method to the WLASL dataset, but with corrections to the gloss labeling to ensure consistency in the labels assigned to different signs; it is important to have a 1-1 correspondence between signs and text-based gloss labels. We achieve a success rate of 77.43{\%} for top-1 and 94.54{\%} for top-5 using this modified WLASL dataset. Our method, which does not require multi-modal data input, outperforms other state-of-the-art approaches on the same modified WLASL dataset, demonstrating the importance of both attention to the start and end frames of signs and the use of bidirectional data streams in the GCNs for isolated sign recognition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,204 |
inproceedings | kang-etal-2022-deep | Deep learning-based end-to-end spoken language identification system for domain-mismatched scenario | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.798/ | Kang, Woohyun and Alam, Md Jahangir and Fathan, Abderrahim | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7339--7343 | Domain mismatch is a critical issue when it comes to spoken language identification. To overcome the domain mismatch problem, we have applied several architectures and deep learning strategies which have shown good results in cross-domain speaker verification tasks to spoken language identification. Our systems were evaluated on the Oriental Language Recognition (OLR) Challenge 2021 Task 1 dataset, which provides a set of cross-domain language identification trials. Among our experimented systems, the best performance was achieved by using the mel frequency cepstral coefficient (MFCC) and pitch features as input and training the ECAPA-TDNN system with a flow-based regularization technique, which resulted in a Cavg of 0.0631 on the OLR 2021 progress set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,205 |
inproceedings | kitagawa-etal-2022-handwritten | Handwritten Character Generation using {Y}-Autoencoder for Character Recognition Model Training | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.799/ | Kitagawa, Tomoki and Leow, Chee Siang and Nishizaki, Hiromitsu | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7344--7351 | It is well-known that the deep learning-based optical character recognition (OCR) system needs a large amount of data to train a high-performance character recognizer. However, it is costly to collect a large amount of realistic handwritten characters. This paper introduces a Y-Autoencoder (Y-AE)-based handwritten character generator to generate multiple Japanese Hiragana characters with a single image to increase the amount of data for training a handwritten character recognizer. The adaptive instance normalization (AdaIN) layer allows the generator to be trained and generate handwritten character images without paired-character image labels. The experiment shows that the Y-AE could generate Japanese character images then used to train the handwritten character recognizer, producing an F1-score improved from 0.8664 to 0.9281. We further analyzed the usefulness of the Y-AE-based generator with shape images, out-of-character (OOC) images, which have different character images styles in model training. The result showed that the generator could generate a handwritten image with a similar style to that of the input character. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,206 |
inproceedings | kanashiro-pereira-2022-attention | Attention-Focused Adversarial Training for Robust Temporal Reasoning | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.800/ | Kanashiro Pereira, Lis | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7352--7359 | We propose an enhanced adversarial training algorithm for fine-tuning transformer-based language models (i.e., RoBERTa) and apply it to the temporal reasoning task. Current adversarial training approaches for NLP add the adversarial perturbation only to the embedding layer, ignoring the other layers of the model, which might limit the generalization power of adversarial training. Instead, our algorithm searches for the best combination of layers to add the adversarial perturbation. We add the adversarial perturbation to multiple hidden states or attention representations of the model layers. Adding the perturbation to the attention representations performed best in our experiments. Our model can improve performance on several temporal reasoning benchmarks, and establishes new state-of-the-art results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,207 |
inproceedings | kawintiranon-singh-2022-polibertweet | {P}oli{BERT}weet: A Pre-trained Language Model for Analyzing Political Content on {T}witter | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.801/ | Kawintiranon, Kornraphop and Singh, Lisa | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7360--7367 | Transformer-based models have become the state-of-the-art for numerous natural language processing (NLP) tasks, especially for noisy data sets, including social media posts. For example, BERTweet, pre-trained RoBERTa on a large amount of Twitter data, has achieved state-of-the-art results on several Twitter NLP tasks. We argue that it is not only important to have general pre-trained models for a social media platform, but also domain-specific ones that better capture domain-specific language context. Domain-specific resources are not only important for NLP tasks associated with a specific domain, but they are also useful for understanding language differences across domains. One domain that receives a large amount of attention is politics, more specifically political elections. Towards that end, we release PoliBERTweet, a pre-trained language model trained from BERTweet on over 83M US 2020 election-related English tweets. While the construction of the resource is fairly straightforward, we believe that it can be used for many important downstream tasks involving language, including political misinformation analysis and election public opinion analysis. To show the value of this resource, we evaluate PoliBERTweet on different NLP tasks. The results show that our model outperforms general-purpose language models in domain-specific contexts, highlighting the value of domain-specific models for more detailed linguistic analysis. We also extend other existing language models with a sample of these data and show their value for presidential candidate stance detection, a context-specific task. We release PoliBERTweet and these other models to the community to advance interdisciplinary research related to Election 2020. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,208 |
inproceedings | stenger-etal-2022-modeling | Modeling the Impact of Syntactic Distance and Surprisal on Cross-{S}lavic Text Comprehension | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.802/ | Stenger, Irina and Georgis, Philip and Avgustinova, Tania and M{\"obius, Bernd and Klakow, Dietrich | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7368--7376 | We focus on the syntactic variation and measure syntactic distances between nine Slavic languages (Belarusian, Bulgarian, Croatian, Czech, Polish, Slovak, Slovene, Russian, and Ukrainian) using symmetric measures of insertion, deletion and movement of syntactic units in the parallel sentences of the fable {\textquotedblleft}The North Wind and the Sun{\textquotedblright}. Additionally, we investigate phonetic and orthographic asymmetries between selected languages by means of the information theoretical notion of surprisal. Syntactic distance and surprisal are, thus, considered as potential predictors of mutual intelligibility between related languages. In spoken and written cloze test experiments for Slavic native speakers, the presented predictors will be validated as to whether variations in syntax lead to a slower or impeded intercomprehension of Slavic texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,209 |
inproceedings | dhananjaya-etal-2022-bertifying | {BERT}ifying {S}inhala - A Comprehensive Analysis of Pre-trained Language Models for {S}inhala Text Classification | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.803/ | Dhananjaya, Vinura and Demotte, Piyumal and Ranathunga, Surangika and Jayasena, Sanath | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7377--7385 | This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We test on a set of different Sinhala text classification tasks and our analysis shows that out of the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and LASER), XLM-R is the best model by far for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which are far superior to the existing pre-trained language models for Sinhala. We show that when fine-tuned, these pre-trained language models set a very strong baseline for Sinhala text classification and are robust in situations where labeled data is insufficient for fine-tuning. We further provide a set of recommendations for using pre-trained models for Sinhala text classification. We also introduce new annotated datasets useful for future research in Sinhala text classification and publicly release our pre-trained models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,210 |
inproceedings | gudnason-loftsson-2022-pre | Pre-training and Evaluating Transformer-based Language Models for {I}celandic | Calzolari, Nicoletta and B{\'e}chet, Fr{\'e}d{\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\'e}l{\`e}ne and Odijk, Jan and Piperidis, Stelios | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.lrec-1.804/ | Da{\dh}ason, J{\'o}n Fri{\dh}rik and Loftsson, Hrafn | Proceedings of the Thirteenth Language Resources and Evaluation Conference | 7386--7391 | In this paper, we evaluate several Transformer-based language models for Icelandic on four downstream tasks: Part-of-Speech tagging, Named Entity Recognition. Dependency Parsing, and Automatic Text Summarization. We pre-train four types of monolingual ELECTRA and ConvBERT models and compare our results to a previously trained monolingual RoBERTa model and the multilingual mBERT model. We find that the Transformer models obtain better results, often by a large margin, compared to previous state-of-the-art models. Furthermore, our results indicate that pre-training larger language models results in a significant reduction in error rates in comparison to smaller models. Finally, our results show that the monolingual models for Icelandic outperform a comparably sized multilingual model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,211 |
inproceedings | ghassemi-toudeshki-etal-2022-exploring | Exploring the Influence of Dialog Input Format for Unsupervised Clinical Questionnaire Filling | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.1/ | Ghassemi Toudeshki, Farnaz and Liednikova, Anna and Jolivet, Philippe and Gardent, Claire | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 1--13 | In the medical field, we have seen the emergence of health-bots that interact with patients to gather data and track their state. One of the downstream application is automatic questionnaire filling, where the content of the dialog is used to automatically fill a pre-defined medical questionnaire. Previous work has shown that answering questions from the dialog context can successfully be cast as a Natural Language Inference (NLI) task and therefore benefit from current pre-trained NLI models. However, NLI models have mostly been trained on text rather than dialogs, which may have an influence on their performance. In this paper, we study the influence of content transformation and content selection on the questionnaire filling task. Our results demonstrate that dialog pre-processing can significantly improve the performance of zero-shot questionnaire filling models which take health-bots dialogs as input. | null | null | 10.18653/v1/2022.louhi-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,213 |
inproceedings | rojas-etal-2022-assessing | Assessing the Limits of Straightforward Models for Nested Named Entity Recognition in {S}panish Clinical Narratives | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.2/ | Rojas, Matias and Carrino, Casimiro Pio and Gonzalez-Agirre, Aitor and Dunstan, Jocelyn and Villegas, Marta | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 14--25 | Nested Named Entity Recognition (NER) is an information extraction task that aims to identify entities that may be nested within other entity mentions. Despite the availability of several corpora with nested entities in the Spanish clinical domain, most previous work has overlooked them due to the lack of models and a clear annotation scheme for dealing with the task. To fill this gap, this paper provides an empirical study of straightforward methods for tackling the nested NER task on two Spanish clinical datasets, Clinical Trials, and the Chilean Waiting List. We assess the advantages and limitations of two sequence labeling approaches; one based on Multiple LSTM-CRF architectures and another on Joint labeling models. To better understand the differences between these models, we compute task-specific metrics that adequately measure the ability of models to detect nested entities and perform a fine-grained comparison across models. Our experimental results show that employing domain-specific language models trained from scratch significantly improves the performance obtained with strong domain-specific and general-domain baselines, achieving state-of-the-art results in both datasets. Specifically, we obtained F1 scores of 89.21 and 83.16 in Clinical Trials and the Chilean Waiting List, respectively. Interestingly enough, we observe that the task-specific metrics and analysis properly reflect the limitations of the models when recognizing nested entities. Finally, we perform a case study on an aggregated NER dataset created from several clinical corpora in Spanish. We highlight how entity length and the simultaneous recognition of inner and outer entities are the most critical variables for the nested NER task. | null | null | 10.18653/v1/2022.louhi-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,214 |
inproceedings | kim-etal-2022-current | Can Current Explainability Help Provide References in Clinical Notes to Support Humans Annotate Medical Codes? | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.3/ | Kim, Byung-Hak and Deng, Zhongfen and Yu, Philip and Ganapathi, Varun | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 26--34 | The medical codes prediction problem from clinical notes has received substantial interest in the NLP community, and several recent studies have shown the state-of-the-art (SOTA) code prediction results of full-fledged deep learning-based methods. However, most previous SOTA works based on deep learning are still in early stages in terms of providing textual references and explanations of the predicted codes, despite the fact that this level of explainability of the prediction outcomes is critical to gaining trust from professional medical coders. This raises the important question of how well current explainability methods apply to advanced neural network models such as transformers to predict correct codes and present references in clinical notes that support code prediction. First, we present an explainable Read, Attend, and Code (xRAC) framework and assess two approaches, attention score-based xRAC-ATTN and model-agnostic knowledge-distillation-based xRAC-KD, through simplified but thorough human-grounded evaluations with SOTA transformer-based model, RAC. We find that the supporting evidence text highlighted by xRAC-ATTN is of higher quality than xRAC-KD whereas xRAC-KD has potential advantages in production deployment scenarios. More importantly, we show for the first time that, given the current state of explainability methodologies, using the SOTA medical codes prediction system still requires the expertise and competencies of professional coders, even though its prediction accuracy is superior to that of human coders. This, we believe, is a very meaningful step toward developing explainable and accurate machine learning systems for fully autonomous medical code prediction from clinical notes. | null | null | 10.18653/v1/2022.louhi-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,215 |
inproceedings | jimeno-yepes-verspoor-2022-distinguishing | Distinguishing between focus and background entities in biomedical corpora using discourse structure and transformers | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.4/ | Jimeno Yepes, Antonio and Verspoor, Karin | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 35--40 | Scientific documents typically contain numerous entity mentions, while only a subset are directly relevant to the key contributions of the paper. Distinguishing these focus entities from background ones effectively could improve the recovery of relevant documents and the extraction of information from documents. To study the identification of focus entities, we developed two large datasets of disease-causing biological pathogens using MEDLINE, the largest collection of biomedical citations, and PubMed Central, a collection of full text articles. The focus entities were identified using human-curated indexing on these collections. Experiments with machine learning methods to identify focus entities show that transformer methods achieve high precision and recall and that document discourse information is relevant. The work lays the foundation for more targeted retrieval/summarisation of entity-relevant documents. | null | null | 10.18653/v1/2022.louhi-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,216 |
inproceedings | labrak-etal-2022-frenchmedmcqa | {F}rench{M}ed{MCQA}: A {F}rench Multiple-Choice Question Answering Dataset for Medical domain | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.5/ | Labrak, Yanis and Bazoge, Adrien and Dufour, Richard and Daille, Beatrice and Gourraud, Pierre-Antoine and Morin, Emmanuel and Rouvier, Mickael | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 41--46 | This paper introduces FrenchMedMCQA, the first publicly available Multiple-Choice Question Answering (MCQA) dataset in French for medical domain. It is composed of 3,105 questions taken from real exams of the French medical specialization diploma in pharmacy, mixing single and multiple answers. Each instance of the dataset contains an identifier, a question, five possible answers and their manual correction(s). We also propose first baseline models to automatically process this MCQA task in order to report on the current performances and to highlight the difficulty of the task. A detailed analysis of the results showed that it is necessary to have representations adapted to the medical domain or to the MCQA task: in our case, English specialized models yielded better results than generic French ones, even though FrenchMedMCQA is in French. Corpus, models and tools are available online. | null | null | 10.18653/v1/2022.louhi-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,217 |
inproceedings | houbre-etal-2022-large | A Large-Scale Dataset for Biomedical Keyphrase Generation | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.6/ | Houbre, Ma{\"el and Boudin, Florian and Daille, Beatrice | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 47--53 | Keyphrase generation is the task consisting in generating a set of words or phrases that highlight the main topics of a document. There are few datasets for keyphrase generation in the biomedical domain and they do not meet the expectations in terms of size for training generative models. In this paper, we introduce kp-biomed, the first large-scale biomedical keyphrase generation dataset collected from PubMed abstracts. We train and release several generative models and conduct a series of experiments showing that using large scale datasets improves significantly the performances for present and absent keyphrase generation. The dataset and models are available online. | null | null | 10.18653/v1/2022.louhi-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,218 |
inproceedings | zhang-etal-2022-section | Section Classification in Clinical Notes with Multi-task Transformers | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.7/ | Zhang, Fan and Laish, Itay and Benjamini, Ayelet and Feder, Amir | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 54--59 | Clinical notes are the backbone of electronic health records, often containing vital information not observed in other structured data. Unfortunately, the unstructured nature of clinical notes can lead to critical patient-related information being lost. Algorithms that organize clinical notes into distinct sections are often proposed in order to allow medical professionals to better access information in a given note. These algorithms, however, often assume a given partition over the note, and classify section types given this information. In this paper, we propose a multi-task solution for note sectioning, where a single model identifies context changes and labels each section with its medically-relevant title. Results on in-distribution (MIMIC-III) and out-of-distribution (private held-out) datasets reveal that our approach successfully identifies note sections across different hospital systems. | null | null | 10.18653/v1/2022.louhi-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,219 |
inproceedings | feder-etal-2022-building | Building a Clinically-Focused Problem List From Medical Notes | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.8/ | Feder, Amir and Laish, Itay and Agarwal, Shashank and Lerner, Uri and Atias, Avel and Cheung, Cathy and Clardy, Peter and Peled-Cohen, Alon and Fellinger, Rachana and Liu, Hengrui and Huong Nguyen, Lan and Patel, Birju and Potikha, Natan and Taubenfeld, Amir and Xu, Liwen and Yang, Seung Doo and Benjamini, Ayelet and Hassidim, Avinatan | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 60--68 | Clinical notes often contain useful information not documented in structured data, but their unstructured nature can lead to critical patient-related information being missed. To increase the likelihood that this valuable information is utilized for patient care, algorithms that summarize notes into a problem list have been proposed. Focused on identifying medically-relevant entities in the free-form text, these solutions are often detached from a canonical ontology and do not allow downstream use of the detected text-spans. Mitigating these issues, we present here a system for generating a canonical problem list from medical notes, consisting of two major stages. At the first stage, annotation, we use a transformer model to detect all clinical conditions which are mentioned in a single note. These clinical conditions are then grounded to a predefined ontology, and are linked to spans in the text. At the second stage, summarization, we develop a novel algorithm that aggregates over the set of clinical conditions detected on all of the patient`s notes, and produce a concise patient summary that organizes their most important conditions. | null | null | 10.18653/v1/2022.louhi-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,220 |
inproceedings | el-boukkouri-etal-2022-specializing | Specializing Static and Contextual Embeddings in the Medical Domain Using Knowledge Graphs: Let`s Keep It Simple | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.9/ | El Boukkouri, Hicham and Ferret, Olivier and Lavergne, Thomas and Zweigenbaum, Pierre | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 69--80 | Domain adaptation of word embeddings has mainly been explored in the context of retraining general models on large specialized corpora. While this usually yields good results, we argue that knowledge graphs, which are used less frequently, could also be utilized to enhance existing representations with specialized knowledge. In this work, we aim to shed some light on whether such knowledge injection could be achieved using a basic set of tools: graph-level embeddings and concatenation. To that end, we adopt an incremental approach where we first demonstrate that static embeddings can indeed be improved through concatenation with in-domain node2vec representations. Then, we validate this approach on contextual models and generalize it further by proposing a variant of BERT that incorporates knowledge embeddings within its hidden states through the same process of concatenation. We show that this variant outperforms plain retraining on several specialized tasks, then discuss how this simple approach could be improved further. Both our code and pre-trained models are open-sourced for future research. In this work, we conduct experiments that target the medical domain and the English language. | null | null | 10.18653/v1/2022.louhi-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,221 |
inproceedings | kanakarajan-etal-2022-biosimcse | {B}io{S}im{CSE}: {B}io{M}edical Sentence Embeddings using Contrastive learning | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.10/ | Kanakarajan, Kamal raj and Kundumani, Bhuvana and Abraham, Abhijith and Sankarasubbu, Malaikannan | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 81--86 | Sentence embeddings in the form of fixed-size vectors that capture the information in the sentence as well as the context are critical components of Natural Language Processing systems. With transformer model based sentence encoders outperforming the other sentence embedding methods in the general domain, we explore the transformer based architectures to generate dense sentence embeddings in the biomedical domain. In this work, we present BioSimCSE, where we train sentence embeddings with domain specific transformer based models with biomedical texts. We assess our model`s performance with zero-shot and fine-tuned settings on Semantic Textual Similarity (STS) and Recognizing Question Entailment (RQE) tasks. Our BioSimCSE model using BioLinkBERT achieves state of the art (SOTA) performance on both tasks. | null | null | 10.18653/v1/2022.louhi-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,222 |
inproceedings | wiatrak-etal-2022-proxy | Proxy-based Zero-Shot Entity Linking by Effective Candidate Retrieval | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.11/ | Wiatrak, Maciej and Arvaniti, Eirini and Brayne, Angus and Vetterle, Jonas and Sim, Aaron | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 87--99 | A recent advancement in the domain of biomedical Entity Linking is the development of powerful two-stage algorithms {--} an initial candidate retrieval stage that generates a shortlist of entities for each mention, followed by a candidate ranking stage. However, the effectiveness of both stages are inextricably dependent on computationally expensive components. Specifically, in candidate retrieval via dense representation retrieval it is important to have hard negative samples, which require repeated forward passes and nearest neighbour searches across the entire entity label set throughout training. In this work, we show that pairing a proxy-based metric learning loss with an adversarial regularizer provides an efficient alternative to hard negative sampling in the candidate retrieval stage. In particular, we show competitive performance on the recall@1 metric, thereby providing the option to leave out the expensive candidate ranking step. Finally, we demonstrate how the model can be used in a zero-shot setting to discover out of knowledge base biomedical entities. | null | null | 10.18653/v1/2022.louhi-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,223 |
inproceedings | afkanpour-etal-2022-bert | {BERT} for Long Documents: A Case Study of Automated {ICD} Coding | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.12/ | Afkanpour, Arash and Adeel, Shabir and Bassani, Hansenclever and Epshteyn, Arkady and Fan, Hongbo and Jones, Isaac and Malihi, Mahan and Nauth, Adrian and Sinha, Raj and Woonna, Sanjana and Zamani, Shiva and Kanal, Elli and Fomitchev, Mikhail and Cheung, Donny | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 100--107 | Transformer models have achieved great success across many NLP problems. However, previous studies in automated ICD coding concluded that these models fail to outperform some of the earlier solutions such as CNN-based models. In this paper we challenge this conclusion. We present a simple and scalable method to process long text with the existing transformer models such as BERT. We show that this method significantly improves the previous results reported for transformer models in ICD coding, and is able to outperform one of the prominent CNN-based methods. | null | null | 10.18653/v1/2022.louhi-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,224 |
inproceedings | singh-rawat-yu-2022-parameter | Parameter Efficient Transfer Learning for Suicide Attempt and Ideation Detection | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.13/ | Singh Rawat, Bhanu Pratap and Yu, Hong | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 108--115 | Pre-trained language models (LMs) have been deployed as the state-of-the-art natural language processing (NLP) approaches for multiple clinical applications. Model generalisability is important in clinical domain due to the low available resources. In this study, we evaluated transfer learning techniques for an important clinical application: detecting suicide attempt (SA) and suicide ideation (SI) in electronic health records (EHRs). Using the annotation guideline provided by the authors of ScAN, we annotated two EHR datasets from different hospitals. We then fine-tuned ScANER, a publicly available SA and SI detection model, to evaluate five different parameter efficient transfer learning techniques, such as adapter-based learning and soft-prompt tuning, on the two datasets. Without any fine-tuning, ScANER achieve macro F1-scores of 0.85 and 0.87 for SA and SI evidence detection across the two datasets. We observed that by fine-tuning less than {\textasciitilde}2{\%} of ScANER`s parameters, we were able to further improve the macro F1-score for SA-SI evidence detection by 3{\%} and 5{\%} for the two EHR datasets. Our results show that parameter-efficient transfer learning methods can help improve the performance of publicly available clinical models on new hospital datasets with few annotations. | null | null | 10.18653/v1/2022.louhi-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,225 |
inproceedings | zhou-etal-2022-automatic | Automatic Patient Note Assessment without Strong Supervision | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.14/ | Zhou, Jianing and Thakkar, Vyom Nayan and Yudkowsky, Rachel and Bhat, Suma and Bond, William F. | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 116--126 | Training of physicians requires significant practice writing patient notes that document the patient`s medical and health information and physician diagnostic reasoning. Assessment and feedback of the patient note requires experienced faculty, consumes significant amounts of time and delays feedback to learners. Grading patient notes is thus a tedious and expensive process for humans that could be improved with the addition of natural language processing. However, the large manual effort required to create labeled datasets increases the challenge, particularly when test cases change. Therefore, traditional supervised NLP methods relying on labelled datasets are impractical in such a low-resource scenario. In our work, we proposed an unsupervised framework as a simple baseline and a weakly supervised method utilizing transfer learning for automatic assessment of patient notes under a low-resource scenario. Experiments on our self-collected datasets show that our weakly-supervised methods could provide reliable assessment for patient notes with accuracy of 0.92. | null | null | 10.18653/v1/2022.louhi-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,226 |
inproceedings | yang-etal-2022-ddi | {DDI}-{M}u{G}: Multi-aspect Graphs for Drug-Drug Interaction Extraction | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.15/ | Yang, Jie and Ding, Yihao and Long, Siqu and Poon, Josiah and Han, Soyeon Caren | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 127--137 | Drug-drug interaction (DDI) may leads to adverse reactions in patients, thus it is important to extract such knowledge from biomedical texts. However, previously proposed approaches typically focus on capturing sentence-aspect information while ignoring valuable knowledge concerning the whole corpus. In this paper, we propose a Multi-aspect Graph-based DDI extraction model, named DDI-MuG. We first employ a bio-specific pre-trained language model to obtain the token contextualized representations. Then we use two graphs to get syntactic information from input instance and word co-occurrence information within the entire corpus, respectively. Finally, we combine the representations of drug entities and verb tokens for the final classification. It is encouraging to see that the proposed model outperforms all baseline models on two benchmark datasets. To the best of our knowledge, this is the first model that explores multi-aspect graphs to the DDI extraction task, and we hope it can establish a foundation for more robust multi-aspect works in the future. | null | null | 10.18653/v1/2022.louhi-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,227 |
inproceedings | barros-etal-2022-divide | Divide and Conquer: An Extreme Multi-Label Classification Approach for Coding Diseases and Procedures in {S}panish | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.16/ | Barros, Jose and Rojas, Matias and Dunstan, Jocelyn and Abeliuk, Andres | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 138--147 | Clinical coding is the task of transforming medical documents into structured codes following a standard ontology. Since these terminologies are composed of hundreds of codes, this problem can be considered an Extreme Multi-label Classification task. This paper proposes a novel neural network-based architecture for clinical coding. First, we take full advantage of the hierarchical nature of ontologies to create clusters based on semantic relations. Then, we use a Matcher module to assign the probability of documents belonging to each cluster. Finally, the Ranker calculates the probability of each code considering only the documents in the cluster. This division allows a fine-grained differentiation within the cluster, which cannot be addressed using a single classifier. In addition, since most of the previous work has focused on solving this task in English, we conducted our experiments on three clinical coding corpora in Spanish. The experimental results demonstrate the effectiveness of our model, achieving state-of-the-art results on two of the three datasets. Specifically, we outperformed previous models on two subtasks of the CodiEsp shared task: CodiEsp-D (diseases) and CodiEsp-P (procedures). Automatic coding can profoundly impact healthcare by structuring critical information written in free text in electronic health records. | null | null | 10.18653/v1/2022.louhi-1.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,228 |
inproceedings | sotudeh-etal-2022-curriculum | Curriculum-guided Abstractive Summarization for Mental Health Online Posts | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.17/ | Sotudeh, Sajad and Goharian, Nazli and Deilamsalehy, Hanieh and Dernoncourt, Franck | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 148--153 | Automatically generating short summaries from users' online mental health posts could save counselors' reading time and reduce their fatigue so that they can provide timely responses to those seeking help for improving their mental state. Recent Transformers-based summarization models have presented a promising approach to abstractive summarization. They go beyond sentence selection and extractive strategies to deal with more complicated tasks such as novel word generation and sentence paraphrasing. Nonetheless, these models have a prominent shortcoming; their training strategy is not quite efficient, which restricts the model`s performance. In this paper, we include a curriculum learning approach to reweigh the training samples, bringing about an efficient learning procedure. We apply our model on extreme summarization dataset of MentSum posts {---}-a dataset of mental health related posts from Reddit social media. Compared to the state-of-the-art model, our proposed method makes substantial gains in terms of Rouge and Bertscore evaluation metrics, yielding 3.5{\%} Rouge-1, 10.4{\%} Rouge-2, and 4.7{\%} Rouge-L, 1.5{\%} Bertscore relative improvements. | null | null | 10.18653/v1/2022.louhi-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,229 |
inproceedings | jha-etal-2022-improving | Improving information fusion on multimodal clinical data in classification settings | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.18/ | Jha, Sneha and Mayer, Erik and Barahona, Mauricio | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 154--159 | Clinical data often exists in different forms across the lifetime of a patient`s interaction with the healthcare system - structured, unstructured or semi-structured data in the form of laboratory readings, clinical notes, diagnostic codes, imaging and audio data of various kinds, and other observational data. Formulating a representation model that aggregates information from these heterogeneous sources may allow us to jointly model on data with more predictive signal than noise and help inform our model with useful constraints learned from better data. Multimodal fusion approaches help produce representations combined from heterogeneous modalities, which can be used for clinical prediction tasks. Representations produced through different fusion techniques require different training strategies. We investigate the advantage of adding narrative clinical text to structured modalities to classification tasks in the clinical domain. We show that while there is a competitive advantage in combined representations of clinical data, the approach can be helped by training guidance customized to each modality. We show empirical results across binary/multiclass settings, single/multitask settings and unified/multimodal learning rate settings for early and late information fusion of clinical data. | null | null | 10.18653/v1/2022.louhi-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,230 |
inproceedings | cahyawijaya-etal-2022-long | How Long Is Enough? Exploring the Optimal Intervals of Long-Range Clinical Note Language Modeling | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.19/ | Cahyawijaya, Samuel and Wilie, Bryan and Lovenia, Holy and Zhong, Huan and Zhong, MingQian and Ip, Yuk-Yu Nancy and Fung, Pascale | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 160--172 | Large pre-trained language models (LMs) have been widely adopted in biomedical and clinical domains, introducing many powerful LMs such as bio-lm and BioELECTRA. However, the applicability of these methods to real clinical use cases is hindered, due to the limitation of pre-trained LMs in processing long textual data with thousands of words, which is a common length for a clinical note. In this work, we explore long-range adaptation from such LMs with Longformer, allowing the LMs to capture longer clinical notes context. We conduct experiments on three n2c2 challenges datasets and a longitudinal clinical dataset from Hong Kong Hospital Authority electronic health record (EHR) system to show the effectiveness and generalizability of this concept, achieving {\textasciitilde}10{\%} F1-score improvement. Based on our experiments, we conclude that capturing a longer clinical note interval is beneficial to the model performance, but there are different cut-off intervals to achieve the optimal performance for different target variables. | null | null | 10.18653/v1/2022.louhi-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,231 |
inproceedings | alqahtani-etal-2022-quantitative | A Quantitative and Qualitative Analysis of Schizophrenia Language | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.20/ | Alqahtani, Amal and Kayi, Efsun Sarioglu and Hamidian, Sardar and Compton, Michael and Diab, Mona | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 173--183 | Schizophrenia is one of the most disabling mental health conditions to live with. Approximately one percent of the population has schizophrenia which makes it fairly common, and it affects many people and their families. Patients with schizophrenia suffer different symptoms: formal thought disorder (FTD), delusions, and emotional flatness. In this paper, we quantitatively and qualitatively analyze the language of patients with schizophrenia measuring various linguistic features in two modalities: speech and written text. We examine the following features: coherence and cohesion of thoughts, emotions, specificity, level of commit- ted belief (LCB), and personality traits. Our results show that patients with schizophrenia score high in fear and neuroticism compared to healthy controls. In addition, they are more committed to their beliefs, and their writing lacks details. They score lower in most of the linguistic features of cohesion with significant p-values. | null | null | 10.18653/v1/2022.louhi-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,232 |
inproceedings | zanwar-etal-2022-exploring | Exploring Hybrid and Ensemble Models for Multiclass Prediction of Mental Health Status on Social Media | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.21/ | Zanwar, Sourabh and Wiechmann, Daniel and Qiao, Yu and Kerz, Elma | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 184--196 | In recent years, there has been a surge of interest in research on automatic mental health detection (MHD) from social media data leveraging advances in natural language processing and machine learning techniques. While significant progress has been achieved in this interdisciplinary research area, the vast majority of work has treated MHD as a binary classification task. The multiclass classification setup is, however, essential if we are to uncover the subtle differences among the statistical patterns of language use associated with particular mental health conditions. Here, we report on experiments aimed at predicting six conditions (anxiety, attention deficit hyperactivity disorder, bipolar disorder, post-traumatic stress disorder, depression, and psychological stress) from Reddit social media posts. We explore and compare the performance of hybrid and ensemble models leveraging transformer-based architectures (BERT and RoBERTa) and BiLSTM neural networks trained on within-text distributions of a diverse set of linguistic features. This set encompasses measures of syntactic complexity, lexical sophistication and diversity, readability, and register-specific ngram frequencies, as well as sentiment and emotion lexicons. In addition, we conduct feature ablation experiments to investigate which types of features are most indicative of particular mental health conditions. | null | null | 10.18653/v1/2022.louhi-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,233 |
inproceedings | aracena-etal-2022-knowledge | A Knowledge-Graph-Based Intrinsic Test for Benchmarking Medical Concept Embeddings and Pretrained Language Models | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.22/ | Aracena, Claudio and Villena, Fabi{\'a}n and Rojas, Matias and Dunstan, Jocelyn | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 197--206 | Using language models created from large data sources has improved the performance of several deep learning-based architectures, obtaining state-of-the-art results in several NLP extrinsic tasks. However, little research is related to creating intrinsic tests that allow us to compare the quality of different language models when obtaining contextualized embeddings. This gap increases even more when working on specific domains in languages other than English. This paper proposes a novel graph-based intrinsic test that allows us to measure the quality of different language models in clinical and biomedical domains in Spanish. Our results show that our intrinsic test performs better for clinical and biomedical language models than a general one. Also, it correlates with better outcomes for a NER task using a probing model over contextualized embeddings. We hope our work will help the clinical NLP research community to evaluate and compare new language models in other languages and find the most suitable models for solving downstream tasks. | null | null | 10.18653/v1/2022.louhi-1.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,234 |
inproceedings | dey-girju-2022-enriching | Enriching Deep Learning with Frame Semantics for Empathy Classification in Medical Narrative Essays | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.23/ | Dey, Priyanka and Girju, Roxana | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 207--217 | Empathy is a vital component of health care and plays a key role in the training of future doctors. Paying attention to medical students' self-reflective stories of their interactions with patients can encourage empathy and the formation of professional identities that embody desirable values such as integrity and respect. We present a computational approach and linguistic analysis of empathic language in a large corpus of 440 essays written by pre-med students as narrated simulated patient {--} doctor interactions. We analyze the discourse of three kinds of empathy: cognitive, affective, and prosocial as highlighted by expert annotators. We also present various experiments with state-of-the-art recurrent neural networks and transformer models for classifying these forms of empathy. To further improve over these results, we develop a novel system architecture that makes use of frame semantics to enrich our state-of-the-art models. We show that this novel framework leads to significant improvement on the empathy classification task for this dataset. | null | null | 10.18653/v1/2022.louhi-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,235 |
inproceedings | tu-etal-2022-condition | Condition-Treatment Relation Extraction on Disease-related Social Media Data | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.24/ | Tu, Sichang and Doogan, Stephen and Choi, Jinho D. | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 218--228 | Social media has become a popular platform where people share information about personal healthcare conditions, diagnostic histories, and medical plans. Analyzing posts on social media depicting such realistic information can help improve quality and clinical decision-making; however, the lack of structured resources in this genre limits us to build robust NLP models for meaningful analysis. This paper presents a new corpus annotating relations among many types of conditions, treatments, and their attributes illustrated in social media posts by patients and caregivers. For experiments, a transformer encoder is pretrained on 1M raw posts and used to train several document-level relation extraction models using our corpus. Our best-performing model achieves the F1 scores of 70.9 and 51.7 for Entity Recognition and Relation Extraction, respectively. These results are encouraging as it is the first neural model extracting complex relations of this kind on social media data. | null | null | 10.18653/v1/2022.louhi-1.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,236 |
inproceedings | bagherzadeh-bergler-2022-integration | Integration of Heterogeneous Knowledge Sources for Biomedical Text Processing | Lavelli, Alberto and Holderness, Eben and Jimeno Yepes, Antonio and Minard, Anne-Lyse and Pustejovsky, James and Rinaldi, Fabio | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.louhi-1.25/ | Bagherzadeh, Parsa and Bergler, Sabine | Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI) | 229--238 | Recently, research into bringing outside knowledge sources into current neural NLP models has been increasing. Most approaches that leverage external knowledge sources require laborious and non-trivial designs, as well as tailoring the system through intensive ablation of different knowledge sources, an effort that discourages users to use quality ontological resources. In this paper, we show that multiple large heterogeneous KSs can be easily integrated using a decoupled approach, allowing for an automatic ablation of irrelevant KSs, while keeping the overall parameter space tractable. We experiment with BERT and pre-trained graph embeddings, and show that they interoperate well without performance degradation, even when some do not contribute to the task. | null | null | 10.18653/v1/2022.louhi-1.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,237 |
inproceedings | chimoto-bassett-2022-low | Very Low Resource Sentence Alignment: Luhya and {S}wahili | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.1/ | Chimoto, Everlyn Asiko and Bassett, Bruce A. | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 1--8 | Language-agnostic sentence embeddings generated by pre-trained models such as LASER and LaBSE are attractive options for mining large datasets to produce parallel corpora for low-resource machine translation. We test LASER and LaBSE in extracting bitext for two related low-resource African languages: Luhya and Swahili. For this work, we created a new parallel set of nearly 8000 Luhya-English sentences which allows a new zero-shot test of LASER and LaBSE. We find that LaBSE significantly outperforms LASER on both languages. Both LASER and LaBSE however perform poorly at zero-shot alignment on Luhya, achieving just 1.5{\%} and 22.0{\%} successful alignments respectively (P@1 score). We fine-tune the embeddings on a small set of parallel Luhya sentences and show significant gains, improving the LaBSE alignment accuracy to 53.3{\%}. Further, restricting the dataset to sentence embedding pairs with cosine similarity above 0.7 yielded alignments with over 85{\%} accuracy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,239 |
inproceedings | mhaskar-bhattacharyya-2022-multiple | Multiple Pivot Languages and Strategic Decoder Initialization Helps Neural Machine Translation | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.2/ | Mhaskar, Shivam and Bhattacharyya, Pushpak | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 9--14 | In machine translation, a pivot language can be used to assist the source to target translation model. In pivot-based transfer learning, the source to pivot and the pivot to target models are used to improve the performance of the source to target model. This technique works best when both source-pivot and pivot-target are high resource language pairs and the source-target is a low resource language pair. But in some cases, such as Indic languages, the pivot to target language pair is not a high resource one. To overcome this limitation, we use multiple related languages as pivot languages to assist the source to target model. We show that using multiple pivot languages gives 2.03 BLEU and 3.05 chrF score improvement over the baseline model. We show that strategic decoder initialization while performing pivot-based transfer learning with multiple pivot languages gives a 3.67 BLEU and 5.94 chrF score improvement over the baseline model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,240 |
inproceedings | wu-yarowsky-2022-known | Known Words Will Do: Unknown Concept Translation via Lexical Relations | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.3/ | Wu, Winston and Yarowsky, David | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 15--22 | Translating into low-resource languages is challenging due to the scarcity of training data. In this paper, we propose a probabilistic lexical translation method that bridges through lexical relations including synonyms, hypernyms, hyponyms, and co-hyponyms. This method, which only requires a dictionary like Wiktionary and a lexical database like WordNet, enables the translation of unknown vocabulary into low-resource languages for which we may only know the translation of a related concept. Experiments on translating a core vocabulary set into 472 languages, most of them low-resource, show the effectiveness of our approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,241 |
inproceedings | mosolova-smaili-2022-chance | The Only Chance to Understand: Machine Translation of the Severely Endangered Low-resource Languages of Eurasia | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.4/ | Mosolova, Anna and Smaili, Kamel | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 23--34 | Numerous machine translation systems have been proposed since the appearance of this task. Nowadays, new large language model-based algorithms show results that sometimes overcome human ones on the rich-resource languages. Nevertheless, it is still not the case for the low-resource languages, for which all these algorithms did not show equally impressive results. In this work, we want to compare 3 generations of machine translation models on 7 low-resource languages and make a step further by proposing a new way of automatic parallel data augmentation using the state-of-the-art generative model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,242 |
inproceedings | robinson-etal-2022-data | Data-adaptive Transfer Learning for Translation: A Case Study in {H}aitian and Jamaican | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.5/ | Robinson, Nathaniel and Hogan, Cameron and Fulda, Nancy and Mortensen, David R. | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 35--42 | Multilingual transfer techniques often improve low-resource machine translation (MT). Many of these techniques are applied without considering data characteristics. We show in the context of Haitian-to-English translation that transfer effectiveness is correlated with amount of training data and relationships between knowledge-sharing languages. Our experiments suggest that for some languages beyond a threshold of authentic data, back-translation augmentation methods are counterproductive, while cross-lingual transfer from a sufficiently related language is preferred. We complement this finding by contributing a rule-based French-Haitian orthographic and syntactic engine and a novel method for phonological embedding. When used with multilingual techniques, orthographic transformation makes statistically significant improvements over conventional methods. And in very low-resource Jamaican MT, code-switching with a transfer language for orthographic resemblance yields a 6.63 BLEU point advantage. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,243 |
inproceedings | pankaj-gautam-2022-augmented | Augmented Bio-{SBERT}: Improving Performance for Pairwise Sentence Tasks in Bio-medical Domain | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.6/ | Pankaj, Sonam and Gautam, Amit | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 43--47 | One of the modern challenges in AI is the access to high-quality and annotated data, especially in NLP; that is why augmentation is gaining importance. In computer vision, where image data augmentation is standard, text data augmentation in NLP is complex due to the high complexity of language. Moreover, we have seen the advantages of augmentation where there are fewer data available, which can significantly improve the model`s accuracy and performance. We have implemented Augmentation in Pairwise sentence scoring in the biomedical domain. By experimenting with our approach to downstream tasks on biomedical data, we have looked into the solution to improve Bi-encoders' sentence transformer performance using an augmented dataset generated by cross-encoders fine-tuned on Biosses and MedNLI on the pre-trained Bio-BERT model. It has significantly improved the results with respect to the model only trained on Gold data for the respective tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,244 |
inproceedings | chowdhury-etal-2022-machine | Machine Translation for a Very Low-Resource Language - Layer Freezing Approach on Transfer Learning | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.7/ | Chowdhury, Amartya and K. T., Deepak and K, Samudra Vijaya and Prasanna, S. R. Mahadeva | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 48--55 | This paper presents the implementation of Machine Translation (MT) between Lambani, a low-resource Indian tribal language, and English, a high-resource universal language. Lambani is spoken by nomadic tribes of the Indian state of Karnataka and there are similarities between Lambani and various other Indian languages. To implement the English-Lambani MT system, we followed the transfer learning approach with English-Kannada as the parent MT model. The implementation and performance of the English-Lambani MT system are discussed in this paper. Since Lambani has been influenced by various other languages, we explored the possibility of getting better MT performance by using parent models associated with related Indian languages. Specifically, we experimented with English-Gujarati and English-Marathi as additional parent models. We compare the performance of three different English-Lambani MT systems derived from three parent language models, and the observations are presented in the paper. Additionally, we will also explore the effect of freezing the encoder layer and decoder layer and the change in performance from both of them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,245 |
inproceedings | signoroni-rychly-2022-hft | {HFT}: High Frequency Tokens for Low-Resource {NMT} | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.8/ | Signoroni, Edoardo and Rychl{\'y}, Pavel | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 56--63 | Tokenization has been shown to impact the quality of downstream tasks, such as Neural Machine Translation (NMT), which is susceptible to out-of-vocabulary words and low frequency training data. Current state-of-the-art algorithms have been helpful in addressing the issues of out-of-vocabulary words, bigger vocabulary sizes and token frequency by implementing subword segmentation. We argue, however, that there is still room for improvement, in particular regarding low-frequency tokens in the training data. In this paper, we present {\textquotedblleft}High Frequency Tokenizer{\textquotedblright}, or HFT, a new language-independent subword segmentation algorithm that addresses this issue. We also propose a new metric to measure the frequency coverage of a tokenizer`s vocabulary, based on a frequency rank weighted average of the frequency values of its items. We experiment with a diverse set of language corpora, vocabulary sizes, and writing systems and report improvements on both frequency statistics and on the average length of the output. We also observe a positive impact on downstream NMT. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,246 |
inproceedings | pais-etal-2022-romanian-language | {R}omanian Language Translation in the {RELATE} Platform | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.9/ | Pais, Vasile and Mitrofan, Maria and Avram, Andrei-Marius | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 64--74 | This paper presents the usage of the RELATE platform for translation tasks involving the Romanian language. Using this platform, it is possible to perform text and speech data translations, either for single documents or for entire corpora. Furthermore, the platform was successfully used in international projects to create new resources useful for Romanian language translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,247 |
inproceedings | chiruzzo-etal-2022-translating | Translating {S}panish into {S}panish {S}ign {L}anguage: Combining Rules and Data-driven Approaches | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.10/ | Chiruzzo, Luis and McGill, Euan and Egea-G{\'o}mez, Santiago and Saggion, Horacio | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 75--83 | This paper presents a series of experiments on translating between spoken Spanish and Spanish Sign Language glosses (LSE), including enriching Neural Machine Translation (NMT) systems with linguistic features, and creating synthetic data to pretrain and later on finetune a neural translation model. We found evidence that pretraining over a large corpus of LSE synthetic data aligned to Spanish sentences could markedly improve the performance of the translation models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,248 |
inproceedings | poncelas-effendi-2022-benefiting | Benefiting from Language Similarity in the Multilingual {MT} Training: Case Study of {I}ndonesian and {M}alaysian | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.11/ | Poncelas, Alberto and Effendi, Johanes | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 84--92 | The development of machine translation (MT) has been successful in breaking the language barrier of the world`s top 10-20 languages. However, for the rest of it, delivering an acceptable translation quality is still a challenge due to the limited resource. To tackle this problem, most studies focus on augmenting data while overlooking the fact that we can borrow high-quality natural data from the closely-related language. In this work, we propose an MT model training strategy by increasing the language directions as a means of augmentation in a multilingual setting. Our experiment result using Indonesian and Malaysian on the state-of-the-art MT model showcases the effectiveness and robustness of our method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,249 |
inproceedings | bastan-khadivi-2022-preordered | A Preordered {RNN} Layer Boosts Neural Machine Translation in Low Resource Settings | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.12/ | Bastan, Mohaddeseh and Khadivi, Shahram | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 93--98 | Neural Machine Translation (NMT) models are strong enough to convey semantic and syntactic information from the source language to the target language. However, these models are suffering from the need for a large amount of data to learn the parameters. As a result, for languages with scarce data, these models are at risk of underperforming. We propose to augment attention based neural network with reordering information to alleviate the lack of data. This augmentation improves the translation quality for both English to Persian and Persian to English by up to 6{\%} BLEU absolute over the baseline models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,250 |
inproceedings | fernandez-adlaon-2022-exploring | Exploring Word Alignment towards an Efficient Sentence Aligner for {F}ilipino and {C}ebuano Languages | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.13/ | Fernandez, Jenn Leana and Adlaon, Kristine Mae M. | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 99--106 | Building a robust machine translation (MT) system requires a large amount of parallel corpus which is an expensive resource for low-resourced languages. The two major languages being spoken in the Philippines which are Filipino and Cebuano have an abundance in monolingual data that this study took advantage of attempting to find the best way to automatically generate parallel corpus out from monolingual corpora through the use of bitext alignment. Byte-pair encoding was applied in an attempt to optimize the alignment of the source and target texts. Results have shown that alignment was best achieved without segmenting the tokens. Itermax alignment score is best for short-length sentences and match or argmax alignment score are best for long-length sentences. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,251 |
inproceedings | izbicki-2022-aligning | Aligning Word Vectors on Low-Resource Languages with {W}iktionary | Ojha, Atul Kr. and Liu, Chao-Hong and Vylomova, Ekaterina and Abbott, Jade and Washington, Jonathan and Oco, Nathaniel and Pirinen, Tommi A and Malykh, Valentin and Logacheva, Varvara and Zhao, Xiaobing | oct | 2022 | Gyeongju, Republic of Korea | Association for Computational Linguistics | https://aclanthology.org/2022.loresmt-1.14/ | Izbicki, Mike | Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022) | 107--117 | Aligned word embeddings have become a popular technique for low-resource natural language processing. Most existing evaluation datasets are generated automatically from machine translations systems, so they have many errors and exist only for high-resource languages. We introduce the Wiktionary bilingual lexicon collection, which provides high-quality human annotated translations for words in 298 languages to English. We use these lexicons to train and evaluate the largest published collection of aligned word embeddings on 157 different languages. All of our code and data is publicly available at \url{https://github.com/mikeizbicki/wiktionary_bli}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,252 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.