entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
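The columns above can be loaded and pruned with the Hugging Face datasets library. A minimal sketch, assuming the dataset is published on the Hub under the hypothetical ID user/acl-anthology-bib (substitute the real Hub ID or a local path):

```python
# Minimal loading sketch; the dataset ID below is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")
print(ds.features)  # prints the schema summarized above

# Most metric-style columns (wer, uas, recall, f1, ...) are null for
# nearly every row; keep only the bibliographic core for analysis.
core = ["entry_type", "citation_key", "title", "author", "editor",
        "booktitle", "month", "year", "address", "publisher",
        "url", "pages", "abstract", "doi", "note"]
ds = ds.remove_columns([c for c in ds.column_names if c not in core])
print(ds[0]["citation_key"])
```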
inproceedings
sheeja-s-lalitha-devi-2022-automatic
Automatic Identification of Explicit Connectives in {M}alayalam
Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.wildre-1.13/
Sheeja S, Kumari and Lalitha Devi, Sobha
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
74--79
This work presents the automatic identification of explicit connectives and their arguments using a supervised method, Conditional Random Fields (CRFs). We focus on the identification of connectives and their arguments in the corpus, considering explicit connectives and their arguments for the present study. The corpus comprises 4,000 sentences from Malayalam documents, manually annotated for POS, chunk, clause, discourse connectives and their arguments. The corpus thus annotated is used for building the base engine. The performance of the system is evaluated in terms of precision, recall and F-score, with encouraging results. We have analysed the errors generated by the system and used the features obtained from the analysis to improve the performance of the system.
(all remaining fields null)
__index_level_0__: 22,194
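Each row maps back to a BibTeX entry in the obvious way. A sketch of that reconstruction, assuming row is a plain dict keyed by the column names above (the helper name is illustrative):

```python
# Sketch: rebuild a BibTeX entry from one row; null fields are skipped.
BIB_FIELDS = ["title", "editor", "booktitle", "month", "year", "address",
              "publisher", "url", "author", "pages", "abstract", "doi", "note"]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        if row.get(field) is not None:
            lines.append(f"  {field} = {{{row[field]}}},")
    lines.append("}")
    return "\n".join(lines)
```

Applied to the record above, this yields an @inproceedings entry keyed sheeja-s-lalitha-devi-2022-automatic with its non-null fields.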
inproceedings
sharma-chandra-2022-web
Web based System for Derivational Process of Kṛdanta based on {P}{\={a}}ṇinian Grammatical Tradition
Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.wildre-1.14/
Sharma, Sumit and Chandra, Subhash
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
80--83
Each text of the Sanskrit literature is replete with uses of Sanskrit kṛdanta (participles). Knowledge of the formation process of Sanskrit kṛdanta plays a key role in understanding the meaning of a particular kṛdanta word in Sanskrit; without proper analysis of the kṛdanta, a Sanskrit text cannot be understood. Currently, the mode of Sanskrit learning is traditional classroom teaching, which is accessible to enrolled students but not to general Sanskrit learners. The rapid growth of Information Technology (IT) has changed educational pedagogy, and web-based learning systems have evolved to enhance the teaching-learning process. Though many online tools are being developed by researchers for Sanskrit, these are still scarce and untested, and there is a genuine global demand for high-impact Sanskrit tools. Sanskrit kṛdanta is undoubtedly part of the syllabus of all universities offering Sanskrit courses. More than 100 kṛt suffixes are added to verb roots to generate kṛdanta forms, and owing to this complexity, learning these forms is a challenging task. Therefore, the objective of the paper is to present an online system that teaches the derivational process of kṛdantas based on P{\={a}}ṇinian rules and generates a complete derivational process of the kṛdantas for teaching and learning. It will also provide an e-learning platform for the derivational process of Sanskrit kṛdantas.
(all remaining fields null)
__index_level_0__: 22,195
inproceedings
parida-etal-2022-universal
{U}niversal {D}ependency Treebank for {O}dia Language
Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.wildre-1.15/
Parida, Shantipriya and Shabadi, Kalyanamalini and Ojha, Atul Kr. and Sahoo, Saraswati and Dash, Satya Ranjan and Dash, Bijayalaxmi
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
84--89
This paper presents the first publicly available treebank of Odia, a morphologically rich, low-resource Indian language. The treebank contains approximately 1,082 tokens (100 sentences) in Odia, selected from {\textquotedblleft}Samantar{\textquotedblright}, the largest available parallel corpora collection for Indic languages. All the selected sentences are manually annotated following the {\textquotedblleft}Universal Dependency{\textquotedblright} guidelines. The morphological analysis of the Odia treebank was performed using machine learning techniques. The annotated treebank will enrich the Odia language resources and will help in building language technology tools for cross-lingual learning and typological research. We also build a preliminary Odia parser using a machine learning approach. The parser achieves 86.6{\%} accuracy on tokenization, 64.1{\%} UPOS, 63.78{\%} XPOS, 42.04{\%} UAS and 21.34{\%} LAS. Finally, the paper briefly discusses the linguistic analysis of the Odia UD treebank.
(all remaining fields null)
__index_level_0__: 22,196
inproceedings
khandoliyan-kishor-2022-computational
Computational Referencing System for {S}anskrit Grammar
Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.wildre-1.16/
Khandoliyan, Baldev and Kishor, Ram
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
90--96
The goal of this project was to reconstitute and store the text of Aṣṭ{\={a}}dhy{\={a}}y{\={i}} (AD) in a computer text system so that everyone may read it. The proposed work was to study the structure of AD and to create a relational database system for storing and interacting with it. The system is available online and supports Devan{\={a}}gari Unicode and other major Indian scripts as input and output; it was built with MS SQL Server, a Relational Database Management System (RDBMS), and Java Server Pages (JSP). For AD, the system works as a multi-dimensional interactive knowledge-based computer system. The approach can also be applied to all Sanskrit s{\={u}}tra texts that have a similar format. Sanskrit heritage texts are projected to benefit from the system's preservation and promotion. An effort is made here to prepare the AD text as a computer-aided dynamic search, learning and instruction system in the Indian context.
(all remaining fields null)
__index_level_0__: 22,197
inproceedings
joshi-2022-l3cube
{L}3{C}ube-{M}aha{C}orpus and {M}aha{BERT}: {M}arathi Monolingual Corpus, {M}arathi {BERT} Language Models, and Resources
Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.wildre-1.17/
Joshi, Raviraj
Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference
97--101
We present L3Cube-MahaCorpus, a Marathi monolingual data set scraped from different internet sources. We expand the existing Marathi monolingual corpus with 24.8M sentences and 289M tokens. We further present MahaBERT, MahaAlBERT, and MahaRoBerta, all BERT-based masked language models, and MahaFT, fastText word embeddings, each trained on the full Marathi corpus with 752M tokens. We show the effectiveness of these resources on downstream Marathi sentiment analysis, text classification, and named entity recognition (NER) tasks. We also release MahaGPT, a generative Marathi GPT model trained on the Marathi corpus. Marathi is a popular language in India but still lacks such resources. This work is a step forward in building open resources for the Marathi language. The data and models are available at \url{https://github.com/l3cube-pune/MarathiNLP}.
(all remaining fields null)
__index_level_0__: 22,198
inproceedings
grezes-etal-2022-overview
Overview of the First Shared Task on Detecting Entities in the Astrophysics Literature ({DEAL})
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.1/
Grezes, Felix and Blanco-Cuaresma, Sergi and Allen, Thomas and Ghosal, Tirthankar
Proceedings of the first Workshop on Information Extraction from Scientific Publications
1--7
In this article, we present an overview of our shared task: Detecting Entities in the Astrophysics Literature (DEAL). The DEAL shared task was part of the Workshop on Information Extraction from Scientific Publications (WIESP) at AACL-IJCNLP 2022. Information extraction from scientific publications is critical in several downstream tasks such as identification of critical entities, article summarization, citation classification, etc. The motivation of this shared task was to develop a community-wide effort for entity extraction from astrophysics literature. Automated entity extraction would help to build knowledge bases, high-quality metadata for indexing and search, and several other use cases of interest. Thirty-three teams registered for DEAL, twelve of them participated in the system runs, and finally four teams submitted their system descriptions. We analyze their systems and performance, and finally discuss the findings of DEAL.
doi: 10.18653/v1/2022.wiesp-1.1
(all other remaining fields null)
__index_level_0__: 22,200
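Unlike the WILDRE rows above, the WIESP rows carry a doi. A one-line filter for such rows, a sketch reusing the ds object from the loading example:

```python
# Keep only rows that have a DOI (e.g., the WIESP entries below).
with_doi = ds.filter(lambda row: row["doi"] is not None)
print(len(with_doi))
```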
inproceedings
tsunokake-matsubara-2022-classification
Classification of {URL} Citations in Scholarly Papers for Promoting Utilization of Research Artifacts
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.2/
Tsunokake, Masaya and Matsubara, Shigeki
Proceedings of the first Workshop on Information Extraction from Scientific Publications
8--19
Utilizing citations for research artifacts (e.g., dataset, software) in scholarly papers contributes to efficient expansion of research artifact repositories and various applications e.g., the search, recommendation, and evaluation of such artifacts. This study focuses on citations using URLs (URL citations) and aims to identify and analyze research artifact citations automatically. This paper addresses the classification task for each URL citation to identify (1) the role that the referenced resources play in research activities, (2) the type of referenced resources, and (3) the reason why the author cited the resources. This paper proposes the classification method using section titles and footnote texts as new input features. We extracted URL citations from international conference papers as experimental data. We performed 5-fold cross-validation using the data and computed the classification performance of our method. The results demonstrate that our method is effective in all tasks. An additional experiment demonstrates that using cited URLs as input features is also effective.
doi: 10.18653/v1/2022.wiesp-1.2
(all other remaining fields null)
__index_level_0__: 22,201
inproceedings
yang-etal-2022-telin
{TELIN}: Table Entity {LIN}ker for Extracting Leaderboards from Machine Learning Publications
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.3/
Yang, Sean and Tensmeyer, Chris and Wigington, Curtis
Proceedings of the first Workshop on Information Extraction from Scientific Publications
20--25
Tracking state-of-the-art (SOTA) results in machine learning studies is challenging due to high publication volume. Existing methods for creating leaderboards in scientific documents require significant human supervision or rely on scarcely available LaTeX source files. We propose Table Entity LINker (TELIN), a framework which extracts (task, model, dataset, metric) quadruples from collections of scientific publications in PDF format. TELIN identifies scientific named entities, constructs a knowledge base, and leverages human feedback to iteratively refine automatic extractions. TELIN identifies and prioritizes uncertain and impactful entities for human review to create a cascade effect for leaderboard completion. We show that TELIN is competitive with the SOTA but requires much less human annotation.
doi: 10.18653/v1/2022.wiesp-1.3
(all other remaining fields null)
__index_level_0__: 22,202
inproceedings
mutinda-etal-2022-pico
{PICO} Corpus: A Publicly Available Corpus to Support Automatic Data Extraction from Biomedical Literature
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.4/
Mutinda, Faith and Liew, Kongmeng and Yada, Shuntaro and Wakamiya, Shoko and Aramaki, Eiji
Proceedings of the first Workshop on Information Extraction from Scientific Publications
26--31
We present a publicly available corpus with detailed annotations describing the core elements of clinical trials: Participants, Intervention, Control, and Outcomes. The corpus consists of 1011 abstracts of breast cancer randomized controlled trials extracted from the PubMed database. The corpus improves on previous corpora by providing detailed annotations for outcomes to identify numeric texts that report the number of participants that experience specific outcomes. The corpus will be helpful for the development of systems for automatic extraction of data from randomized controlled trial literature to support evidence-based medicine. Additionally, we demonstrate the feasibility of the corpus by using two strong baselines for the named entity recognition task. Most of the entities achieve F1 scores greater than 0.80, demonstrating the quality of the dataset.
doi: 10.18653/v1/2022.wiesp-1.4
(all other remaining fields null)
__index_level_0__: 22,203
inproceedings
brinner-etal-2022-linking
Linking a Hypothesis Network From the Domain of Invasion Biology to a Corpus of Scientific Abstracts: The {INAS} Dataset
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.5/
Brinner, Marc and Heger, Tina and Zarriess, Sina
Proceedings of the first Workshop on Information Extraction from Scientific Publications
32--42
We investigate the problem of identifying the major hypothesis that is addressed in a scientific paper. To this end, we present a dataset from the domain of invasion biology that organizes a set of 954 papers into a network of fine-grained domain-specific categories of hypotheses. We carry out experiments on classifying abstracts according to these categories and present a pilot study on annotating hypothesis statements within the text. We find that hypothesis statements in our dataset are complex, varied and more or less explicit, and, importantly, spread over the whole abstract. Experiments with BERT-based classifiers show that these models are able to classify complex hypothesis statements to some extent, without being trained on sentence-level text span annotations.
doi: 10.18653/v1/2022.wiesp-1.5
(all other remaining fields null)
__index_level_0__: 22,204
inproceedings
hoelscher-obermaier-etal-2022-leveraging
Leveraging knowledge graphs to update scientific word embeddings using latent semantic imputation
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.6/
Hoelscher-Obermaier, Jason and Stevinson, Edward and Stauber, Valentin and Zhelev, Ivaylo and Botev, Viktor and Wu, Ronin and Minton, Jeremy
Proceedings of the first Workshop on Information Extraction from Scientific Publications
43--53
The most interesting words in scientific texts will often be novel or rare. This presents a challenge for scientific word embedding models to determine quality embedding vectors for useful terms that are infrequent or newly emerging. We demonstrate how Latent Semantic Imputation (LSI) can address this problem by imputing embeddings for domain-specific words from up-to-date knowledge graphs while otherwise preserving the original word embedding model. We use the MeSH knowledge graph to impute embedding vectors for biomedical terminology without retraining and evaluate the resulting embedding model on a domain-specific word-pair similarity task. We show that LSI can produce reliable embedding vectors for rare and out-of-vocabulary terms in the biomedical domain.
doi: 10.18653/v1/2022.wiesp-1.6
(all other remaining fields null)
__index_level_0__: 22,205
inproceedings
binder-etal-2022-full
Full-Text Argumentation Mining on Scientific Publications
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.7/
Binder, Arne and Hennig, Leonhard and Verma, Bhuvanesh
Proceedings of the first Workshop on Information Extraction from Scientific Publications
54--66
Scholarly Argumentation Mining (SAM) has recently gained attention due to its potential to help scholars with the rapid growth of published scientific literature. It comprises two subtasks: argumentative discourse unit recognition (ADUR) and argumentative relation extraction (ARE), both of which are challenging since they require e.g. the integration of domain knowledge, the detection of implicit statements, and the disambiguation of argument structure. While previous work focused on dataset construction and baseline methods for specific document sections, such as abstract or results, full-text scholarly argumentation mining has seen little progress. In this work, we introduce a sequential pipeline model combining ADUR and ARE for full-text SAM, and provide a first analysis of the performance of pretrained language models (PLMs) on both subtasks. We establish a new SotA for ADUR on the Sci-Arg corpus, outperforming the previous best reported result by a large margin (+7{\%} F1). We also present the first results for ARE, and thus for the full AM pipeline, on this benchmark dataset. Our detailed error analysis reveals that non-contiguous ADUs as well as the interpretation of discourse connectors pose major challenges and that data annotation needs to be more consistent.
doi: 10.18653/v1/2022.wiesp-1.7
(all other remaining fields null)
__index_level_0__: 22,206
inproceedings
tahri-etal-2022-portability
On the portability of extractive Question-Answering systems on scientific papers to real-life application scenarios
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.8/
Tahri, Chyrine and Tannier, Xavier and Haouat, Patrick
Proceedings of the first Workshop on Information Extraction from Scientific Publications
67--77
There are still hurdles standing in the way of faster and more efficient knowledge consumption in industrial environments seeking to foster innovation. In this work, we address the portability of extractive Question Answering systems from academic spheres to industries basing their decisions on thorough scientific papers analysis. Keeping in mind that such industrial contexts often lack high-quality data to develop their own QA systems, we illustrate the misalignment between application requirements and cost sensitivity of such industries and some widespread practices tackling the domain-adaptation problem in the academic world. Through a series of extractive QA experiments on QASPER, we adopt the pipeline-based retriever-ranker-reader architecture for answering a question on a scientific paper and show the impact of modeling choices in different stages on the quality of answer prediction. We thus provide a characterization of practical aspects of real-life application scenarios and notice that appropriate trade-offs can be efficient and add value in those industrial environments.
doi: 10.18653/v1/2022.wiesp-1.8
(all other remaining fields null)
__index_level_0__: 22,207
inproceedings
dai-karimi-2022-detecting
Detecting Entities in the Astrophysics Literature: A Comparison of Word-based and Span-based Entity Recognition Methods
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.9/
Dai, Xiang and Karimi, Sarvnaz
Proceedings of the first Workshop on Information Extraction from Scientific Publications
78--83
Information Extraction from scientific literature can be challenging due to the highly specialised nature of such text. We describe our entity recognition methods developed as part of the DEAL (Detecting Entities in the Astrophysics Literature) shared task. The aim of the task is to build a system that can identify Named Entities in a dataset composed of scholarly articles from the astrophysics literature. We planned our participation such that it enables us to conduct an empirical comparison between word-based tagging and span-based classification methods. When evaluated on two hidden test sets provided by the organizer, our best-performing submission achieved F1 scores of 0.8307 (validation phase) and 0.7990 (testing phase).
doi: 10.18653/v1/2022.wiesp-1.9
(all other remaining fields null)
__index_level_0__: 22,208
inproceedings
huang-2022-domain
Domain Specific Augmentations as Low Cost Teachers for Large Students
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.10/
Huang, Po-Wei
Proceedings of the first Workshop on Information Extraction from Scientific Publications
84--90
Current neural network solutions in scientific document processing employ models pretrained on domain-specific corpora, which are usually limited in model size, as pretraining can be costly and limited by training resources. We introduce a framework that uses data augmentation from such domain-specific pretrained models to transfer domain-specific knowledge to larger general pretrained models and improve performance on downstream tasks. Our method improves the performance of Named Entity Recognition in the astrophysical domain by more than 20{\%} compared to domain-specific pretrained models finetuned on the target dataset.
doi: 10.18653/v1/2022.wiesp-1.10
(all other remaining fields null)
__index_level_0__: 22,209
inproceedings
rosati-2022-moving
Moving beyond word lists: towards abstractive topic labels for human-like topics of scientific documents
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.11/
Rosati, Domenic
Proceedings of the first Workshop on Information Extraction from Scientific Publications
91--99
Topic models represent groups of documents as a list of words (the topic labels). This work asks whether an alternative approach to topic labeling can be developed that is closer to a natural language description of a topic than a word list. To this end, we present an approach to generating human-like topic labels using abstractive multi-document summarization (MDS). We investigate our approach with an exploratory case study. We model topics in citation sentences in order to understand what further research needs to be done to fully operationalize MDS for topic labeling. Our case study shows that in addition to more human-like topics there are additional advantages to evaluation by using clustering and summarization measures instead of topic model measures. However, we find that there are several developments needed before we can design a well-powered study to evaluate MDS for topic modeling fully. Namely, improving cluster cohesion, improving the factuality and faithfulness of MDS, and increasing the number of documents that might be supported by MDS. We present a number of ideas on how these can be tackled and conclude with some thoughts on how topic modeling can also be used to improve MDS in general.
doi: 10.18653/v1/2022.wiesp-1.11
(all other remaining fields null)
__index_level_0__: 22,210
inproceedings
ghosh-etal-2022-astro
Astro-m{T}5: Entity Extraction from Astrophysics Literature using m{T}5 Language Model
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.12/
Ghosh, Madhusudan and Santra, Payel and Iqbal, Sk Asif and Basuchowdhuri, Partha
Proceedings of the first Workshop on Information Extraction from Scientific Publications
100--104
Scientific research requires reading and extracting relevant information from existing scientific literature in an effective way. To gain insights over a collection of such scientific documents, extraction of entities and recognition of their types is considered to be one of the important tasks. Numerous studies have been conducted in this area of research. In our study, we introduce a framework for entity recognition and identification on the NASA astrophysics dataset, which was published as part of the DEAL shared task. We use a pre-trained multilingual model, based on a natural language processing framework, for the given sequence labeling tasks. Experiments show that our model, Astro-mT5, outperforms the existing baseline in astrophysics-related information extraction.
doi: 10.18653/v1/2022.wiesp-1.12
(all other remaining fields null)
__index_level_0__: 22,211
inproceedings
martin-etal-2022-nlpsharedtasks
{NLPS}hared{T}asks: A Corpus of Shared Task Overview Papers in Natural Language Processing Domains
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.13/
Martin, Anna and Pedersen, Ted and D{'}Souza, Jennifer
Proceedings of the first Workshop on Information Extraction from Scientific Publications
105--120
As the rate of scientific output continues to grow, it is increasingly important to develop systems to improve interfaces between researchers and scholarly papers. Training models to extract scientific information from the full texts of scholarly documents is important for improving how we structure and access scientific information. However, there are few annotated corpora that provide full paper texts. This paper presents the NLPSharedTasks corpus, a new resource of 254 full text Shared Task Overview papers in NLP domains with annotated task descriptions. We calculated strict and relaxed inter-annotator agreement scores, achieving Cohen's kappa coefficients of 0.44 and 0.95, respectively. Lastly, we performed a sentence classification task over the dataset, in order to generate a neural baseline for future research and to provide an example of how to preprocess unbalanced datasets of full scientific texts. We achieved an F1 score of 0.75 using SciBERT, fine-tuned and tested on a rebalanced version of the dataset.
doi: 10.18653/v1/2022.wiesp-1.13
(all other remaining fields null)
__index_level_0__: 22,212
inproceedings
ahuja-etal-2022-parsing
Parsing Electronic Theses and Dissertations Using Object Detection
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.14/
Ahuja, Aman and Devera, Alan and Fox, Edward Alan
Proceedings of the first Workshop on Information Extraction from Scientific Publications
121--130
Electronic theses and dissertations (ETDs) contain valuable knowledge that can be useful for a wide range of purposes. To effectively utilize the knowledge contained in ETDs for downstream tasks such as search and retrieval, question-answering, and summarization, the data first needs to be parsed and stored in a format such as XML. However, since most of the ETDs available on the web are PDF documents, parsing them to make their data useful for downstream tasks is a challenge. In this work, we propose a dataset and a framework to help with parsing long scholarly documents such as ETDs. We take the Object Detection approach for document parsing. We first introduce a set of objects that are important elements of an ETD, along with a new dataset ETD-OD that consists of over 25K page images originating from 200 ETDs with bounding boxes around each of the objects. We also propose a framework that utilizes this dataset for converting ETDs to XML, which can further be used for ETD-related downstream tasks. Our code and pre-trained models are available at: \url{https://github.com/Opening-ETDs/ETD-OD}.
doi: 10.18653/v1/2022.wiesp-1.14
(all other remaining fields null)
__index_level_0__: 22,213
inproceedings
alkan-etal-2022-tdac
{TDAC}, The First Corpus in Time-Domain Astrophysics: Analysis and First Experiments on Named Entity Recognition
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.15/
Alkan, Atilla Kaan and Grouin, Cyril and Schussler, Fabian and Zweigenbaum, Pierre
Proceedings of the first Workshop on Information Extraction from Scientific Publications
131--139
The increased interest in time-domain astronomy over the last decades has resulted in a substantial increase in the publication of observation reports, leading to a saturation in how astrophysicists read, analyze and classify information. Due to the short life span of the detected astronomical events, the information related to the characterization of new phenomena has to be communicated and analyzed very rapidly to allow other observatories to react and conduct their follow-up observations. This paper introduces TDAC: the first Corpus in Time-Domain Astrophysics, based on observation reports. We also present the NLP experiments we made for named entity recognition based on annotations we made and annotations from the WIESP NLP Challenge.
doi: 10.18653/v1/2022.wiesp-1.15
(all other remaining fields null)
__index_level_0__: 22,214
inproceedings
akella-etal-2022-reproducibility
Reproducibility Signals in Science: A preliminary analysis
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.16/
Akella, Akhil Pandey and Alhoori, Hamed and Koop, David
Proceedings of the first Workshop on Information Extraction from Scientific Publications
140--144
Reproducibility is an important feature of science; experiments are retested, and analyses are repeated. Trust in the findings increases when consistent results are achieved. Despite the importance of reproducibility, significant work is often involved in these efforts, and some published findings may not be reproducible due to oversights or errors. In this paper, we examine a myriad of features in scholarly articles published in computer science conferences and journals and test how they correlate with reproducibility. We collected data from three different sources that labeled publications as either reproducible or irreproducible and employed statistical significance tests to identify features of those publications that hold clues about reproducibility. We found the readability of the scholarly article and accessibility of the software artifacts through hyperlinks to be strong signals noticeable amongst reproducible scholarly articles.
doi: 10.18653/v1/2022.wiesp-1.16
(all other remaining fields null)
__index_level_0__: 22,215
inproceedings
alkan-etal-2022-majority
A Majority Voting Strategy of a {S}ci{BERT}-based Ensemble Models for Detecting Entities in the Astrophysics Literature (Shared Task)
Ghosal, Tirthankar and Blanco-Cuaresma, Sergi and Accomazzi, Alberto and Patton, Robert M. and Grezes, Felix and Allen, Thomas
nov
2022
Online
Association for Computational Linguistics
https://aclanthology.org/2022.wiesp-1.17/
Alkan, Atilla Kaan and Grouin, Cyril and Schussler, Fabian and Zweigenbaum, Pierre
Proceedings of the first Workshop on Information Extraction from Scientific Publications
145--150
Detecting Entities in the Astrophysics Literature (DEAL) is a proposed shared task in the scope of the first Workshop on Information Extraction from Scientific Publications (WIESP) at AACL-IJCNLP 2022. It aims to propose systems identifying astrophysical named entities. This article presents our system based on a majority voting strategy of an ensemble composed of multiple SciBERT models. The system we propose is ranked second and outperforms the baseline provided by the organisers by achieving an F1 score of 0.7993 and a Matthews Correlation Coefficient (MCC) score of 0.8978 in the testing phase.
doi: 10.18653/v1/2022.wiesp-1.17
(all other remaining fields null)
__index_level_0__: 22,216
inproceedings
nakazawa-etal-2022-overview
Overview of the 9th Workshop on {A}sian Translation
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.1/
Nakazawa, Toshiaki and Mino, Hideya and Goto, Isao and Dabre, Raj and Higashiyama, Shohei and Parida, Shantipriya and Kunchukuttan, Anoop and Morishita, Makoto and Bojar, Ond{\v{r}}ej and Chu, Chenhui and Eriguchi, Akiko and Abe, Kaori and Oda, Yusuke and Kurohashi, Sadao
Proceedings of the 9th Workshop on Asian Translation
1--36
This paper presents the results of the shared tasks from the 9th workshop on Asian translation (WAT2022). For the WAT2022, 8 teams submitted their translation results for the human evaluation. We also accepted 4 research papers. About 300 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.
(all remaining fields null)
__index_level_0__: 22,218
inproceedings
nakatani-etal-2022-comparing
Comparing {BERT}-based Reward Functions for Deep Reinforcement Learning in Machine Translation
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.2/
Nakatani, Yuki and Kajiwara, Tomoyuki and Ninomiya, Takashi
Proceedings of the 9th Workshop on Asian Translation
37--43
In text generation tasks such as machine translation, models are generally trained using cross-entropy loss. However, mismatches between the loss function and the evaluation metric are often problematic. It is known that this problem can be addressed by direct optimization to the evaluation metric with reinforcement learning. In machine translation, previous studies have used BLEU to calculate rewards for reinforcement learning, but BLEU is not well correlated with human evaluation. In this study, we investigate the impact on machine translation quality through reinforcement learning based on evaluation metrics that are more highly correlated with human evaluation. Experimental results show that reinforcement learning with BERT-based rewards can improve various evaluation metrics.
(all remaining fields null)
__index_level_0__: 22,219
inproceedings
zheng-etal-2022-improving
Improving {J}ejueo-{K}orean Translation With Cross-Lingual Pretraining Using {J}apanese and {K}orean
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.3/
Zheng, Francis and Marrese-Taylor, Edison and Matsuo, Yutaka
Proceedings of the 9th Workshop on Asian Translation
44--50
Jejueo is a critically endangered language spoken on Jeju Island and is closely related to but mutually unintelligible with Korean. Parallel data between Jejueo and Korean is scarce, and translation between the two languages requires more attention, as current neural machine translation systems typically rely on large amounts of parallel training data. While low-resource machine translation has been shown to benefit from using additional monolingual data during the pretraining process, not as much research has been done on how to select languages other than the source and target languages for use during pretraining. We show that using large amounts of Korean and Japanese data during the pretraining process improves translation by 2.16 BLEU points for translation in the Jejueo {\textrightarrow} Korean direction and 1.34 BLEU points for translation in the Korean {\textrightarrow} Jejueo direction compared to the baseline.
(all remaining fields null)
__index_level_0__: 22,220
inproceedings
kondo-komachi-2022-tmu
{TMU} {NMT} System with Automatic Post-Editing by Multi-Source {L}evenshtein Transformer for the Restricted Translation Task of {WAT} 2022
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.4/
Kondo, Seiichiro and Komachi, Mamoru
Proceedings of the 9th Workshop on Asian Translation
51--58
In this paper, we describe our TMU English{--}Japanese systems submitted to the restricted translation task at WAT 2022 (Nakazawa et al., 2022). In this task, we translate an input sentence with the constraint that certain words or phrases (called restricted target vocabularies (RTVs)) should be contained in the output sentence. To satisfy this constraint, we address this task using a combination of two techniques. One is lexical-constraint-aware neural machine translation (LeCA) (Chen et al., 2020), which is a method of adding RTVs at the end of input sentences. The other is multi-source Levenshtein transformer (MSLevT) (Wan et al., 2020), which is a non-autoregressive method for automatic post-editing. Our system generates translations in two steps. First, we generate the translation using LeCA. Subsequently, we filter the sentences that do not satisfy the constraints and post-edit them with MSLevT. Our experimental results reveal that 100{\%} of the RTVs can be included in the generated sentences while maintaining the translation quality of the LeCA model on both English to Japanese (En{\textrightarrow}Ja) and Japanese to English (Ja{\textrightarrow}En) tasks. Furthermore, the method used in previous studies requires an increase in the beam size to satisfy the constraints, which is computationally expensive. In contrast, the proposed method does not require a similar increase and can generate translations faster.
(all remaining fields null)
__index_level_0__: 22,221
inproceedings
liu-etal-2022-hwtscsus
{H}w{T}sc{SU}'s Submissions on {WAT} 2022 Shared Task
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.5/
Liu, Yilun and Zhang, Zhen and Tao, Shimin and Li, Junhui and Yang, Hao
Proceedings of the 9th Workshop on Asian Translation
59--63
In this paper we describe our submission to the shared tasks of the 9th Workshop on Asian Translation (WAT 2022) on NICT{--}SAP under the team name {\textquotedblleft}HwTscSU{\textquotedblright}. The tasks involve translation from 5 languages into English and vice-versa in two domains: the IT domain and the Wikinews domain. The purpose is to determine the feasibility of multilingualism, domain adaptation or document-level knowledge given little to no clean parallel corpora for training. Our approach for all translation tasks mainly focused on pre-training NMT models on general datasets and fine-tuning them on domain-specific datasets. Due to the small amount of parallel corpora, we collected and cleaned the OPUS dataset, including three IT-domain corpora, i.e., GNOME, KDE4, and Ubuntu. We then trained Transformer models on the collected dataset and fine-tuned them on the corresponding dev set. The BLEU scores greatly improved in comparison with other systems. Our submission ranked 1st in all IT-domain tasks and in one out of eight ALT-domain tasks.
(all remaining fields null)
__index_level_0__: 22,222
inproceedings
dabre-2022-nicts
{NICT}'s Submission to the {WAT} 2022 Structured Document Translation Task
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.6/
Dabre, Raj
Proceedings of the 9th Workshop on Asian Translation
64--67
We present our submission to the structured document translation task organized by WAT 2022. In structured document translation, the key challenge is the handling of inline tags, which annotate text. Specifically, text that is annotated with tags should be translated in such a way that the translation contains the tags annotating the corresponding translated content. This challenge is further compounded by the lack of training data containing sentence pairs with inline XML-tag-annotated content. However, to our surprise, we find that existing multilingual NMT systems are able to handle the translation of text annotated with XML tags without any explicit training on data containing said tags. Specifically, massively multilingual translation models like M2M-100 perform well despite not being explicitly trained to handle structured content. This direct translation approach is often as good as, if not better than, the traditional approach of {\textquotedblleft}remove tag, translate and re-inject tag{\textquotedblright}, also known as the {\textquotedblleft}detag-and-project{\textquotedblright} approach.
(all remaining fields null)
__index_level_0__: 22,223
inproceedings
poncelas-etal-2022-rakutens
Rakuten's Participation in {WAT} 2022: Parallel Dataset Filtering by Leveraging Vocabulary Heterogeneity
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.7/
Poncelas, Alberto and Effendi, Johanes and Htun, Ohnmar and Yadav, Sunil and Wang, Dongzhe and Jain, Saurabh
Proceedings of the 9th Workshop on Asian Translation
68--72
This paper introduces our neural machine translation system's participation in the WAT 2022 shared translation task (team ID: sakura). We participated in the Parallel Data Filtering Task. Our approach based on Feature Decay Algorithms achieved +1.4 and +2.4 BLEU points for English to Japanese and Japanese to English respectively compared to the model trained on the full dataset, showing the effectiveness of FDA on in-domain data selection.
(all remaining fields null)
__index_level_0__: 22,224
inproceedings
das-etal-2022-nit
{NIT} Rourkela Machine Translation({MT}) System Submission to {WAT} 2022 for {M}ulti{I}ndic{MT}: An Indic Language Multilingual Shared Task
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.8/
Das, Sudhansu Bala and Biradar, Atharv and Mishra, Tapas Kumar and Patra, Bidyut Kumar
Proceedings of the 9th Workshop on Asian Translation
73--77
Multilingual Neural Machine Translation (MNMT) exhibits incredible performance with the development of a single translation model for many languages. Previous studies on multilingual translation reveal that multilingual training is effective for languages with limited corpora. This paper presents our submission (Team Id: NITR) to the WAT 2022 {\textquotedblleft}MultiIndicMT shared task{\textquotedblright}, where the objective is translation between 5 Indic languages from the OPUS Corpus (newly added in the WAT 2022 corpus) into English and vice versa, using the corpus provided by the organizer of WAT. Our system is based on a transformer-based NMT using the fairseq modelling toolkit with ensemble techniques. Heuristic pre-processing approaches are carried out before training the model. Our multilingual NMT systems are trained with shared encoder and decoder parameters, followed by assigning language embeddings to each token in both encoder and decoder. Our final multilingual system was evaluated using BLEU and RIBES metric scores. In future work, we plan to extend our research to fine-tuning of both encoder and decoder during monolingual unsupervised training in order to improve the quality of the synthetic data generated during the process.
(all remaining fields null)
__index_level_0__: 22,225
inproceedings
laskar-etal-2022-investigation
Investigation of Multilingual Neural Machine Translation for {I}ndian Languages
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.9/
Laskar, Sahinur Rahman and Manna, Riyanka and Pakray, Partha and Bandyopadhyay, Sivaji
Proceedings of the 9th Workshop on Asian Translation
78--81
In the domain of natural language processing, machine translation is a well-defined task where one natural language is automatically translated to another natural language. The deep learning-based approach to machine translation, known as neural machine translation, attains remarkable translation performance. However, it requires a sufficient amount of training data, which is a critical issue for low-resource pair translation. To handle the data scarcity problem, the multilingual concept has been investigated in neural machine translation in different settings, like many-to-one and one-to-many translation. WAT2022 (Workshop on Asian Translation 2022), hosted by COLING 2022, organizes the Indic tasks: English-to-Indic and Indic-to-English translation, where we participated as a team named CNLP-NITS-PP. Herein, we have investigated a transliteration-based approach, where Indic languages are transliterated into English script and share sub-word level vocabulary during the training phase. We attained BLEU scores of 2.0 (English-to-Bengali), 1.10 (English-to-Assamese), 4.50 (Bengali-to-English), and 3.50 (Assamese-to-English), respectively.
(all remaining fields null)
__index_level_0__: 22,226
inproceedings
blin-2022-partial
Does partial pretranslation can improve low ressourced-languages pairs?
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.10/
Blin, Raoul
Proceedings of the 9th Workshop on Asian Translation
82--88
We study the effects of a local and punctual pretranslation of the source corpus on the performance of a Transformer translation model. The pretranslations are performed at the morphological (morpheme translation), lexical (word translation) and morphosyntactic (numeral groups and dates) levels. We focus on small and medium-sized training corpora (50K to 2.5M bisegments) and on a linguistically distant language pair (Japanese and French). We find that this type of pretranslation does not lead to significant progress. We describe the motivations of the approach and the specific difficulties of Japanese-French translation, and discuss the possible reasons for the observed underperformance.
(all remaining fields null)
__index_level_0__: 22,227
inproceedings
tang-etal-2022-multimodal
Multimodal Neural Machine Translation with Search Engine Based Image Retrieval
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.11/
Tang, ZhenHao and Zhang, XiaoBing and Long, Zi and Fu, XiangHua
Proceedings of the 9th Workshop on Asian Translation
89--98
Recently, a number of works have shown that the performance of neural machine translation (NMT) can be improved to a certain extent by using visual information. However, most of these conclusions are drawn from the analysis of experimental results based on a limited set of bilingual sentence-image pairs, such as Multi30K. In these kinds of datasets, the content of one bilingual parallel sentence pair must be well represented by a manually annotated image, which differs from the actual translation situation. We propose an open-vocabulary image retrieval method to collect descriptive images for a bilingual parallel corpus using an image search engine, and a text-aware attentive visual encoder to filter incorrectly collected noise images. Experimental results on Multi30K and two other translation datasets show that our proposed method achieves significant improvements over strong baselines.
(all remaining fields null)
__index_level_0__: 22,228
inproceedings
parida-etal-2022-silo
Silo {NLP}'s Participation at {WAT}2022
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.12/
Parida, Shantipriya and Panda, Subhadarshi and Gr{\"o}nroos, Stig-Arne and Granroth-Wilding, Mark and Koistinen, Mika
Proceedings of the 9th Workshop on Asian Translation
99--105
This paper provides the system description of {\textquotedblleft}Silo NLP's{\textquotedblright} submission to the Workshop on Asian Translation (WAT2022). We have participated in the Indic Multimodal tasks (English-{\ensuremath{>}}Hindi, English-{\ensuremath{>}}Malayalam, and English-{\ensuremath{>}}Bengali, Multimodal Translation). For text-only translation, we used the Transformer and fine-tuned the mBART. For multimodal translation, we used the same architecture and extracted object tags from the images to use as visual features concatenated with the text sequence for input. Our submission tops many tasks including English-{\ensuremath{>}}Hindi multimodal translation (evaluation test), English-{\ensuremath{>}}Malayalam text-only and multimodal translation (evaluation test), English-{\ensuremath{>}}Bengali multimodal translation (challenge test), and English-{\ensuremath{>}}Bengali text-only translation (evaluation test).
(all remaining fields null)
__index_level_0__: 22,229
inproceedings
patil-etal-2022-pict
{PICT}@{WAT} 2022: Neural Machine Translation Systems for Indic Languages
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.13/
Patil, Anupam and Joshi, Isha and Kadam, Dipali
Proceedings of the 9th Workshop on Asian Translation
106--110
Translation entails more than simply translating words from one language to another. It is vitally essential for effective cross-cultural communication, thus making good translation systems an important requirement. We describe our systems in this paper, which were submitted to the WAT 2022 translation shared tasks. As part of the Multi-modal translation tasks' text-only translation sub-tasks, we submitted three Neural Machine Translation systems based on Transformer models for English to Malayalam, English to Bengali, and English to Hindi text translation. We found significant results on the leaderboard for English-Indic (en-xx) systems utilizing BLEU and RIBES scores as comparative metrics in our studies. For the respective translations of English to Malayalam, Bengali, and Hindi, we obtained BLEU scores of 19.50, 32.90, and 41.80 for the challenge subset and 30.60, 39.80, and 42.90 on the benchmark evaluation subset data.
(all remaining fields null)
__index_level_0__: 22,230
inproceedings
laskar-etal-2022-english
{E}nglish to {B}engali Multimodal Neural Machine Translation using Transliteration-based Phrase Pairs Augmentation
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.14/
Laskar, Sahinur Rahman and Dadure, Pankaj and Manna, Riyanka and Pakray, Partha and Bandyopadhyay, Sivaji
Proceedings of the 9th Workshop on Asian Translation
111--116
Automatic translation of one natural language to another is a popular task of natural language processing. Although the deep learning-based technique known as neural machine translation (NMT) is a widely accepted machine translation approach, it needs an adequate amount of training data, which is a challenging issue for low-resource pair translation. Moreover, the multimodal concept utilizes text and visual features to improve low-resource pair translation. WAT2022 (Workshop on Asian Translation 2022), hosted by COLING 2022, organizes an English to Bengali multimodal translation task where we participated as a team named CNLP-NITS-PP in two tracks: 1) text-only and 2) multimodal translation. Herein, we have proposed a transliteration-based phrase pairs augmentation approach which shows improvement in the multimodal translation task and achieves benchmark results on the Bengali Visual Genome 1.0 dataset. We attained the best results on the challenge and evaluation test sets for English to Bengali multimodal translation, with BLEU scores of 28.70 and 43.90 and RIBES scores of 0.688931 and 0.780669, respectively.
(all remaining fields null)
__index_level_0__: 22,231
inproceedings
laskar-etal-2022-investigation-english
Investigation of {E}nglish to {H}indi Multimodal Neural Machine Translation using Transliteration-based Phrase Pairs Augmentation
(editor: null)
oct
2022
Gyeongju, Republic of Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.wat-1.15/
Laskar, Sahinur Rahman and Singh, Rahul and Karim, Md Faizal and Manna, Riyanka and Pakray, Partha and Bandyopadhyay, Sivaji
Proceedings of the 9th Workshop on Asian Translation
117--122
Machine translation, which translates one natural language to another, is a well-defined natural language processing task. Neural machine translation (NMT) is a widely accepted machine translation approach, but it requires a sufficient amount of training data, which is a challenging issue for low-resource pair translation. Moreover, the multimodal concept utilizes text and visual features to improve low-resource pair translation. WAT2022 (Workshop on Asian Translation 2022, hosted by COLING 2022) organized an English to Hindi multimodal translation task, in which we participated as team CNLP-NITS-PP in two tracks: 1) text-only and 2) multimodal translation. Herein, we propose a transliteration-based phrase pairs augmentation approach, which shows improvement in the multimodal translation task. We attained the second-best results on the challenge test set for English to Hindi multimodal translation, with a BLEU score of 39.30 and a RIBES score of 0.791468.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,232
inproceedings
khlyzova-etal-2022-complementarity
On the Complementarity of Images and Text for the Expression of Emotions in Social Media
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.1/
Khlyzova, Anna and Silberer, Carina and Klinger, Roman
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
1--15
Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find, for the image{--}text relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows us to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots.
null
null
10.18653/v1/2022.wassa-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,234
inproceedings
lin-etal-2022-multiplex
Multiplex Anti-{A}sian Sentiment before and during the Pandemic: Introducing New Datasets from {T}witter Mining
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.2/
Lin, Hao and Nalluri, Pradeep and Li, Lantian and Sun, Yifan and Zhang, Yongjun
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
16--24
COVID-19 has disproportionately threatened minority communities in the U.S., not only in health but also in societal impact. However, social scientists and policymakers lack critical data to capture the dynamics of the anti-Asian hate trend and to evaluate its scale and scope. We introduce new datasets from Twitter related to anti-Asian hate sentiment before and during the pandemic. Relying on Twitter's academic API, we retrieve hateful and counter-hate tweets from the Twitter Historical Database. To build contextual understanding and collect related racial cues, we also collect instances of heated arguments, often political, but not necessarily hateful, discussing Chinese issues. We then use state-of-the-art hate speech classifiers to discern whether these tweets express hatred. These datasets can be used to study hate speech, general anti-Asian or Chinese sentiment, and hate linguistics by social scientists as well as to evaluate and build hate speech or sentiment analysis classifiers by computational scholars.
null
null
10.18653/v1/2022.wassa-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,235
inproceedings
ke-etal-2022-domain
Domain-Aware Contrastive Knowledge Transfer for Multi-domain Imbalanced Data
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.3/
Ke, Zixuan and Kachuee, Mohammad and Lee, Sungjin
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
25--36
In many real-world machine learning applications, samples belong to a set of domains, e.g., for product reviews, each review belongs to a product category. In this paper, we study multi-domain imbalanced learning (MIL), the scenario in which there is imbalance not only in classes but also in domains. In the MIL setting, different domains exhibit different patterns, and there is a varying degree of similarity and divergence among domains, posing opportunities and challenges for transfer learning, especially when faced with limited or insufficient training data. We propose a novel domain-aware contrastive knowledge transfer method called DCMI to (1) identify the shared domain knowledge to encourage positive transfer among similar domains (in particular from head domains to tail domains); (2) isolate the domain-specific knowledge to minimize the negative transfer from dissimilar domains. We evaluated the performance of DCMI on three different datasets, showing significant improvements in different MIL scenarios.
null
null
10.18653/v1/2022.wassa-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,236
inproceedings
sabbatino-etal-2022-splink
{\textquotedblleft}splink{\textquotedblright} is happy and {\textquotedblleft}phrouth{\textquotedblright} is scary: Emotion Intensity Analysis for Nonsense Words
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.4/
Sabbatino, Valentino and Troiano, Enrica and Schweitzer, Antje and Klinger, Roman
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
37--50
People associate affective meanings with words - {\textquotedblleft}death{\textquotedblright} is scary and sad while {\textquotedblleft}party{\textquotedblright} is connotated with surprise and joy. This raises the question of whether the association is purely a product of the learned affective imports inherent to semantic meanings, or is also an effect of other features of words, e.g., morphological and phonological patterns. We approach this question with an annotation-based analysis leveraging nonsense words. Specifically, we conduct a best-worst scaling crowdsourcing study in which participants assign intensity scores for joy, sadness, anger, disgust, fear, and surprise to 272 nonsense words and, for comparison of the results to previous work, to 68 real words. Based on this resource, we develop character-level and phonology-based intensity regressors. We evaluate them on both nonsense words and real words (making use of the NRC emotion intensity lexicon of 7493 words), across six emotion categories. The analysis of our data reveals that some phonetic patterns show clear differences between emotion intensities. For instance, s as a first phoneme contributes to joy, sh to surprise, and p as a last phoneme more to disgust than to anger and fear. In the modelling experiments, a regressor trained on real words from the NRC emotion intensity lexicon shows a higher performance (r = 0.17) than regressors that aim at learning the emotion connotation purely from nonsense words. We conclude that humans do associate affective meaning with words based on surface patterns, but also based on similarities to existing words ({\textquotedblleft}juy{\textquotedblright} to {\textquotedblleft}joy{\textquotedblright}, or {\textquotedblleft}flike{\textquotedblright} to {\textquotedblleft}like{\textquotedblright}).
null
null
10.18653/v1/2022.wassa-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,237
inproceedings
de-geyndt-etal-2022-sentemo
{S}ent{EMO}: A Multilingual Adaptive Platform for Aspect-based Sentiment and Emotion Analysis
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.5/
De Geyndt, Ellen and De Clercq, Orphee and Van Hee, Cynthia and Lefever, Els and Singh, Pranaydeep and Parent, Olivier and Hoste, Veronique
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
51--61
In this paper, we present the SentEMO platform, a tool that provides aspect-based sentiment analysis and emotion detection of unstructured text data such as reviews, emails, and customer care conversations. Currently, models have been trained for five domains and one general domain and are implemented in a pipeline approach, where the output of one model serves as the input for the next. The results are presented in three dashboards, allowing companies to gain more insights into what stakeholders think of their products and services. The SentEMO platform is available at \url{https://sentemo.ugent.be}.
null
null
10.18653/v1/2022.wassa-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,238
inproceedings
mousavi-etal-2022-emotion
Can Emotion Carriers Explain Automatic Sentiment Prediction? A Study on Personal Narratives
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.6/
Mousavi, Seyed Mahed and Roccabruna, Gabriel and Tammewar, Aniruddha and Azzolin, Steve and Riccardi, Giuseppe
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
62--70
Deep Neural Network (DNN) models have achieved acceptable performance in sentiment prediction of written text. However, the output of these machine learning (ML) models cannot be natively interpreted. In this paper, we study how the sentiment polarity predictions by DNNs can be explained and compare them to humans' explanations. We crowdsource a corpus of Personal Narratives and ask human judges to annotate them with polarity and select the corresponding token chunks - the Emotion Carriers (EC) - that convey narrators' emotions in the text. The interpretations of ML neural models are carried out through the Integrated Gradients method, and we compare them with human annotators' interpretations. The results of our comparative analysis indicate that while the ML model mostly focuses on the explicit appearance of emotion-laden words (e.g. happy, frustrated), the human annotator predominantly focuses attention on the manifestation of emotions through ECs that denote events, persons, and objects which activate the narrator's emotional state.
null
null
10.18653/v1/2022.wassa-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,239
inproceedings
he-etal-2022-infusing
Infusing Knowledge from {W}ikipedia to Enhance Stance Detection
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.7/
He, Zihao and Mokhberian, Negar and Lerman, Kristina
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
71--77
Stance detection infers a text author`s attitude towards a target. This is challenging when the model lacks background knowledge about the target. Here, we show how background knowledge from Wikipedia can help enhance the performance on stance detection. We introduce Wikipedia Stance Detection BERT (WS-BERT) that infuses the knowledge into stance encoding. Extensive results on three benchmark datasets covering social media discussions and online debates indicate that our model significantly outperforms the state-of-the-art methods on target-specific stance detection, cross-target stance detection, and zero/few-shot stance detection.
null
null
10.18653/v1/2022.wassa-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,240
inproceedings
meshgi-etal-2022-uncertainty
Uncertainty Regularized Multi-Task Learning
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.8/
Meshgi, Kourosh and Mirzaei, Maryam Sadat and Sekine, Satoshi
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
78--88
By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.
null
null
10.18653/v1/2022.wassa-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,241
inproceedings
matero-etal-2022-understanding
Evaluating Contextual Embeddings and their Extraction Layers for Depression Assessment
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.9/
Matero, Matthew and Hung, Albert and Schwartz, H. Andrew
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
89--94
Many recent works in natural language processing have demonstrated the ability to assess aspects of mental health from personal discourse. At the same time, pre-trained contextual word embedding models have grown to dominate much of NLP, but little is known empirically about how best to apply them for mental health assessment. Using degree of depression as a case study, we do an empirical analysis of which off-the-shelf language model, individual layers, and combinations of layers seem most promising when applied to human-level NLP tasks. Notably, we find RoBERTa most effective and, despite the standard in past work suggesting the second-to-last layer or a concatenation of the last 4 layers, we find layer 19 (sixth-to-last) is at least as good as layer 23 when using a single layer. Further, when using multiple layers, distributing them across the second half (i.e. Layers 12+), rather than the last 4, of the 24 layers yielded the most accurate results.
null
null
10.18653/v1/2022.wassa-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,242
inproceedings
ramos-etal-2022-emotion
Emotion Analysis of Writers and Readers of {J}apanese Tweets on Vaccinations
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.10/
Ramos, Patrick John and Ferawati, Kiki and Liew, Kongmeng and Aramaki, Eiji and Wakamiya, Shoko
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
95--103
Public opinion in social media is increasingly becoming a critical factor in pandemic control. Understanding the emotions of a population towards vaccinations and COVID-19 may be valuable in convincing members to become vaccinated. We investigated the emotions of Japanese Twitter users towards Tweets related to COVID-19 vaccination. Using the WRIME dataset, which provides emotion ratings for Japanese Tweets sourced from writers (Tweet posters) and readers, we fine-tuned a BERT model to predict levels of emotional intensity. This model achieved a training $MSE$ of 0.356. A separate dataset of 20,254 Japanese Tweets containing COVID-19 vaccine-related keywords was also collected, on which the fine-tuned BERT was used to perform emotion analysis. Afterwards, a correlation analysis between the extracted emotions and a set of vaccination measures in Japan was conducted. The results revealed that surprise and fear were the most intense emotions predicted by the model for writers and readers, respectively, on the vaccine-related Tweet dataset. The correlation analysis also showed that vaccinations were weakly positively correlated with predicted levels of writer joy, writer/reader anticipation, and writer/reader trust.
null
null
10.18653/v1/2022.wassa-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,243
inproceedings
klein-etal-2022-opinion
Opinion-based Relational Pivoting for Cross-domain Aspect Term Extraction
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.11/
Klein, Ayal and Pereg, Oren and Korat, Daniel and Lal, Vasudev and Wasserblat, Moshe and Dagan, Ido
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
104--112
Domain adaptation methods often exploit domain-transferable input features, a.k.a. pivots. The task of Aspect and Opinion Term Extraction presents a special challenge for domain transfer: while opinion terms largely transfer across domains, aspects change drastically from one domain to another (e.g. from restaurants to laptops). In this paper, we investigate and establish empirically a prior conjecture, which suggests that the linguistic relations connecting opinion terms to their aspects transfer well across domains and therefore can be leveraged for cross-domain aspect term extraction. We present several analyses supporting this conjecture, via experiments with four linguistic dependency formalisms to represent relation patterns. Subsequently, we present an aspect term extraction method that drives models to consider opinion{--}aspect relations via explicit multitask objectives. This method provides significant performance gains, even on top of a prior state-of-the-art linguistically informed model; our analysis shows that these gains stem from the relational pivoting signal.
null
null
10.18653/v1/2022.wassa-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,244
inproceedings
lim-liew-2022-english-malay
{E}nglish-{M}alay Word Embeddings Alignment for Cross-lingual Emotion Classification with Hierarchical Attention Network
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.12/
Lim, Ying Hao and Liew, Jasy Suet Yan
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
113--124
The main challenge in English-Malay cross-lingual emotion classification is that there are no Malay training emotion corpora. Given that machine translation could fall short in contextually complex tweets, we limited machine translation to the word level. In this paper, we bridge the language gap between English and Malay through cross-lingual word embeddings constructed using singular value decomposition. We pre-trained our hierarchical attention model using English tweets and fine-tuned it using a set of gold standard Malay tweets. Our model uses significantly fewer computational resources compared to the language models. Experimental results show that our model outperforms mBERT in zero-shot learning by 2.4{\%} and Malay BERT by 0.8{\%} when a limited number of Malay tweets is available. In exchange for 6{--}7 times less computational time, our model only lags behind mBERT and XLM-RoBERTa by a margin of 0.9{--}4.3{\%} in few-shot learning. Also, the word-level attention could be transferred to the Malay tweets accurately using the cross-lingual word embeddings.
null
null
10.18653/v1/2022.wassa-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,245
inproceedings
rajda-etal-2022-assessment
Assessment of Massively Multilingual Sentiment Classifiers
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.13/
Rajda, Krzysztof and Augustyniak, Lukasz and Gramacki, Piotr and Gruza, Marcin and Wo{\'z}niak, Szymon and Kajdanowicz, Tomasz
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
125--140
Models are increasing in size and complexity in the hunt for SOTA. But what if that 2{\%} increase in performance does not make a difference in a production use case? Maybe the benefits of a smaller, faster model outweigh those slight performance gains. Also, equally good performance across languages in multilingual tasks is more important than SOTA results on a single one. We present the biggest, unified, multilingual collection of sentiment analysis datasets. We use these to assess 11 models on 80 high-quality sentiment datasets (out of 342 raw datasets collected) in 27 languages and include results on the internally annotated datasets. We deeply evaluate multiple setups, including fine-tuning transformer-based models for measuring performance. We compare results along numerous dimensions, addressing the imbalance in both language coverage and dataset sizes. Finally, we present some best practices for working with such a massive collection of datasets and models from a multilingual perspective.
null
null
10.18653/v1/2022.wassa-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,246
inproceedings
zhang-abdul-mageed-2022-improving
Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.14/
Zhang, Chiyu and Abdul-Mageed, Muhammad
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
141--156
Masked language models (MLMs) are pre-trained with a denoising objective that mismatches the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementary strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve a 2.34{\%} $F_1$ improvement over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only 5{\%} of training data (severely few-shot), our methods enable an impressive 68.54{\%} average $F_1$. The methods are also language agnostic, as we show in a zero-shot setting involving six datasets from three different languages.
null
null
10.18653/v1/2022.wassa-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,247
inproceedings
minot-etal-2022-distinguishing
Distinguishing In-Groups and Onlookers by Language Use
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.15/
Minot, Joshua and Trujillo, Milo and Rosenblatt, Samuel and De Anda-J{\'a}uregui, Guillermo and Moog, Emily and Roth, Allison M. and Samson, Briane Paul and H{\'e}bert-Dufresne, Laurent
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
157--171
Inferring group membership of social media users is of high interest in many domains. Group membership is typically inferred via network interactions with other members, or by the usage of in-group language. However, network information is incomplete when users or groups move between platforms, and in-group keywords lose significance as public discussion about a group increases. Similarly, using keywords to filter content and users can fail to distinguish between the various groups that discuss a topic{---}perhaps confounding research on public opinion and narrative trends. We present a classifier intended to distinguish members of groups from users discussing a group based on contextual usage of keywords. We demonstrate the classifier on a sample of community pairs from Reddit and focus on results related to the COVID-19 pandemic.
null
null
10.18653/v1/2022.wassa-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,248
inproceedings
maladry-etal-2022-irony
Irony Detection for {D}utch: a Venture into the Implicit
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.16/
Maladry, Aaron and Lefever, Els and Van Hee, Cynthia and Hoste, Veronique
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
172--181
This paper presents the results of a replication experiment for automatic irony detection in Dutch social media text, investigating both a feature-based SVM classifier, as was done by Van Hee et al. (2017), and a transformer-based approach. In addition to building a baseline model, an important goal of this research is to explore the implementation of common-sense knowledge in the form of implicit sentiment, as we strongly believe that common-sense and connotative knowledge are essential to the identification of irony and implicit meaning in tweets. We show promising results, and the presented approach can provide a solid baseline and serve as a staging ground to build on in future experiments for irony detection in Dutch.
null
null
10.18653/v1/2022.wassa-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,249
inproceedings
kerz-etal-2022-pushing
Pushing on Personality Detection from Verbal Behavior: A Transformer Meets Text Contours of Psycholinguistic Features
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.17/
Kerz, Elma and Qiao, Yu and Zanwar, Sourabh and Wiechmann, Daniel
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
182--194
Research at the intersection of personality psychology, computer science, and linguistics has recently focused increasingly on modeling and predicting personality from language use. We report two major improvements in predicting personality traits from text data: (1) to our knowledge, the most comprehensive set of theory-based psycholinguistic features and (2) hybrid models that integrate a pre-trained Transformer Language Model BERT and Bidirectional Long Short-Term Memory (BLSTM) networks trained on within-text distributions ({\textquoteleft}text contours') of psycholinguistic features. We experiment with BLSTM models (with and without Attention) and with two techniques for applying pre-trained language representations from the transformer model - {\textquoteleft}feature-based' and {\textquoteleft}fine-tuning'. We evaluate the performance of the models we built on two benchmark datasets that target the two dominant theoretical models of personality: the Big Five Essay dataset (Pennebaker and King, 1999) and the MBTI Kaggle dataset (Li et al., 2018). Our results are encouraging as our models outperform existing work on the same datasets. More specifically, our models improve classification accuracy by 2.9{\%} on the Essay dataset and by 8.28{\%} on the Kaggle MBTI dataset. In addition, we perform ablation experiments to quantify the impact of different categories of psycholinguistic features in the respective personality prediction models.
null
null
10.18653/v1/2022.wassa-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,250
inproceedings
bianchi-etal-2022-xlm
{XLM}-{EMO}: Multilingual Emotion Prediction in Social Media Text
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.18/
Bianchi, Federico and Nozza, Debora and Hovy, Dirk
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
195--203
Detecting emotion in text allows social and computational scientists to study how people behave and react to online events. However, developing these tools for different languages requires data that is not always available. This paper collects the available emotion detection datasets across 19 languages. We train a multilingual emotion prediction model for social media data, XLM-EMO. The model shows competitive performance in a zero-shot setting, suggesting it is helpful in the context of low-resource languages. We release our model to the community so that interested researchers can directly use it.
null
null
10.18653/v1/2022.wassa-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,251
inproceedings
sousa-pardo-2022-evaluating
Evaluating Content Features and Classification Methods for Helpfulness Prediction of Online Reviews: Establishing a Benchmark for {P}ortuguese
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.19/
Sousa, Rog{\'e}rio and Pardo, Thiago
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
204--213
Over the years, the review helpfulness prediction task has been the subject of several works, but remains a challenging issue in Natural Language Processing, as results vary considerably depending on the domain, the adopted features, and the chosen classification strategy. This paper attempts to evaluate the impact of content features and classification methods for two different domains. In particular, we run our experiments for a low-resource language {--} Portuguese {--}, aiming to establish a benchmark for this language. We show that simple features and classical classification methods are powerful for the task of helpfulness prediction, but are largely outperformed by a convolutional neural network-based solution.
null
null
10.18653/v1/2022.wassa-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,252
inproceedings
barriere-etal-2022-wassa
{WASSA} 2022 Shared Task: Predicting Empathy, Emotion and Personality in Reaction to News Stories
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.20/
Barriere, Valentin and Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Alqahtani, Sawsan
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
214--227
This paper presents the results obtained from the WASSA 2022 shared task on predicting empathy, emotion, and personality in reaction to news stories. Participants were given access to a dataset comprising empathic reactions to news stories where harm is done to a person, group, or other. These reactions consist of essays and Batson's empathic concern and personal distress scores. The dataset was further extended in the WASSA 2021 shared task to include news articles, person-level demographic information (e.g. age, gender), personality information, and Ekman's six basic emotions at the essay level. Participation was encouraged in four tracks: predicting empathy and distress scores, predicting emotion categories, predicting personality, and predicting interpersonal reactivity. In total, 14 teams participated in the shared task. We summarize the methods and resources used by the participating teams.
null
null
10.18653/v1/2022.wassa-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,253
inproceedings
chen-etal-2022-iucl
{IUCL} at {WASSA} 2022 Shared Task: A Text-only Approach to Empathy and Emotion Detection
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.21/
Chen, Yue and Ju, Yingnan and K{\"u}bler, Sandra
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
228--232
Our system, IUCL, participated in the WASSA 2022 Shared Task on Empathy Detection and Emotion Classification. Our main goal in building this system is to investigate how the use of demographic attributes influences performance. Our (official) results show that our text-only systems perform very competitively, ranking first in the empathy detection task, reaching an average Pearson correlation of 0.54, and second in the emotion classification task, reaching a Macro-F of 0.572. Our systems that use both text and demographic data are less competitive.
null
null
10.18653/v1/2022.wassa-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,254
inproceedings
li-etal-2022-continuing
Continuing Pre-trained Model with Multiple Training Strategies for Emotional Classification
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.22/
Li, Bin and Weng, Yixuan and Song, Qiya and Sun, Bin and Li, Shutao
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
233--238
Emotion is an essential attribute of human beings. Perceiving and understanding emotions in a human-like manner is the most central part of developing emotional intelligence. This paper describes the LingJing team's contribution to the Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis (WASSA) 2022 shared task on Emotion Classification. Participants are required to predict seven emotions from empathic responses to news or stories that caused harm to individuals, groups, or others. This paper describes a continual pre-training method for the masked language model (MLM) to enhance the DeBERTa pre-trained language model. Several training strategies are designed to further improve the final downstream performance, including data augmentation with supervised transfer, child-tuning training, and late fusion. Extensive experiments on the emotion classification dataset show that the proposed method outperforms other state-of-the-art methods, demonstrating our method's effectiveness. Moreover, our submission ranked first on all metrics in the evaluation phase of the Emotion Classification task.
null
null
10.18653/v1/2022.wassa-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,255
inproceedings
del-arco-etal-2022-empathy
Empathy and Distress Prediction using Transformer Multi-output Regression and Emotion Analysis with an Ensemble of Supervised and Zero-Shot Learning Models
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.23/
Del Arco, Flor Miriam and Collado-Monta{\~n}ez, Jaime and Ure{\~n}a, L. Alfonso and Mart{\'i}n-Valdivia, Mar{\'i}a-Teresa
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
239--244
This paper describes the participation of the SINAI research group at WASSA 2022 (Empathy and Personality Detection and Emotion Classification). Specifically, we participate in Track 1 (Empathy and Distress predictions) and Track 2 (Emotion classification). We conducted extensive experiments developing different machine learning solutions in line with the state of the art in Natural Language Processing. For Track 1, a Transformer multi-output regression model is proposed. For Track 2, we aim to explore recent techniques based on Zero-Shot Learning models, including a Natural Language Inference model and GPT-3, using them in an ensemble with a fine-tuned RoBERTa model. Our team ranked 2nd in the first track and 3rd in the second track.
null
null
10.18653/v1/2022.wassa-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,256
inproceedings
desai-etal-2022-leveraging
Leveraging Emotion-Specific features to improve Transformer performance for Emotion Classification
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.24/
Desai, Shaily and Kshirsagar, Atharva and Sidnerlikar, Aditi and Khodake, Nikhil and Marathe, Manisha
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
245--249
This paper describes team PVG's AI Club's approach to the Emotion Classification shared task held at WASSA 2022. This Track 2 sub-task focuses on building models that can predict a multi-class emotion label based on essays from news articles where a person, group, or another entity is affected. Baseline transformer models have demonstrated good results on sequence classification tasks, and we aim to improve on this performance with ensembling techniques and by leveraging two variations of emotion-specific representations. We observe better results than our baseline models and achieve an accuracy of 0.619 and a macro F1 score of 0.520 on the emotion classification task.
null
null
10.18653/v1/2022.wassa-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,257
inproceedings
kane-etal-2022-transformer
Transformer based ensemble for emotion detection
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.25/
Kane, Aditya and Patankar, Shantanu and Khose, Sahil and Kirtane, Neeraja
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
250--254
Detecting emotions in language is important for accomplishing complete interaction between humans and machines. This paper describes our contribution to the WASSA 2022 shared task, which addresses this crucial task of emotion detection. The task is to identify one of the following emotions {--} sadness, surprise, neutral, anger, fear, disgust, or joy {--} based on a given essay text. We use an ensemble of ELECTRA and BERT models to tackle this problem, achieving an F1 score of 62.76{\%}. Our codebase (\url{https://bit.ly/WASSA_shared_task}) and our WandB project (\url{https://wandb.ai/acl_wassa_pictxmanipal/acl_wassa}) are publicly available.
null
null
10.18653/v1/2022.wassa-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,258
inproceedings
ghosh-etal-2022-team
Team {IITP}-{AINLPML} at {WASSA} 2022: Empathy Detection, Emotion Classification and Personality Detection
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.26/
Ghosh, Soumitra and Maurya, Dhirendra and Ekbal, Asif and Bhattacharyya, Pushpak
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
255--260
Computationally comprehending and identifying emotional components in language has been critical to enhancing human-computer interaction in recent years. The WASSA 2022 Shared Task introduced four tracks and released a dataset of news stories: Track-1 for Empathy and Distress Prediction, Track-2 for Emotion classification, Track-3 for Personality prediction, and Track-4 for Interpersonal Reactivity Index prediction at the essay level. This paper describes our participation in the WASSA 2022 shared task on the tasks mentioned above. We developed multi-task deep learning methods to address Tracks 1 and 2 and machine learning models for Tracks 3 and 4. Our developed systems achieved average Pearson scores of 0.483, 0.05, and 0.08 for Tracks 1, 3, and 4, respectively, and a macro F1 score of 0.524 for Track 2 on the test set. We ranked 8th, 11th, 2nd, and 2nd for Tracks 1, 2, 3, and 4, respectively.
null
null
10.18653/v1/2022.wassa-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,259
inproceedings
vasava-etal-2022-transformer
Transformer-based Architecture for Empathy Prediction and Emotion Classification
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.27/
Vasava, Himil and Uikey, Pramegh and Wasnik, Gaurav and Sharma, Raksha
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
261--264
This paper describes the contribution of team PHG to the WASSA 2022 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score, and the type of emotion associated with the person who reacted to the essay written in response to a newspaper article. We use the RoBERTa model for training, on top of which a few layers are added to fine-tune the transformer. We also use several machine learning techniques to augment and upsample the data. Our system achieves a Pearson Correlation Coefficient of 0.488 on Task 1 (Empathy - 0.470 and Distress - 0.506) and a Macro F1-score of 0.531 on Task 2.
null
null
10.18653/v1/2022.wassa-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,260
inproceedings
li-etal-2022-prompt-based
Prompt-based Pre-trained Model for Personality and Interpersonal Reactivity Prediction
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.28/
Li, Bin and Weng, Yixuan and Song, Qiya and Ma, Fuyan and Sun, Bin and Li, Shutao
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
265--270
This paper describes the LingJing team's method for the Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis (WASSA) 2022 shared task on Personality Prediction (PER) and Reactivity Index Prediction (IRI). In this paper, we adopt a prompt-based method with a pre-trained language model to accomplish these tasks. Specifically, the prompt is designed to provide extra personalized knowledge to enhance the pre-trained model. Data augmentation and model ensembling are adopted to obtain better results. Extensive experiments are performed, which show the effectiveness of the proposed method. In the final submission, our system achieves Pearson Correlation Coefficients of 0.2301 and 0.2546 on Track 3 and Track 4, respectively. We ranked 1st on both sub-tasks.
null
null
10.18653/v1/2022.wassa-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,261
inproceedings
qian-etal-2022-surrey
{SURREY}-{CTS}-{NLP} at {WASSA}2022: An Experiment of Discourse and Sentiment Analysis for the Prediction of Empathy, Distress and Emotion
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.29/
Qian, Shenbin and Orasan, Constantin and Kanojia, Diptesh and Saadany, Hadeel and Do Carmo, F{\'e}lix
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
271--275
This paper summarises the submissions our team, SURREY-CTS-NLP, made for the WASSA 2022 Shared Task on the prediction of empathy, distress, and emotion. In this work, we tested different learning strategies, like ensemble learning and multi-task learning, as well as several large language models, but our primary focus was on analysing and extracting emotion-intensive features from both the essays in the training data and the news articles, to better predict empathy and distress scores from the perspective of discourse and sentiment analysis. We propose several text feature extraction schemes to compensate for the small number of training examples for fine-tuning pretrained language models, including methods based on Rhetorical Structure Theory (RST) parsing, cosine similarity, and sentiment score. Our best submissions achieve an average Pearson correlation score of 0.518 for the empathy prediction task and an F1 score of 0.571 for the emotion prediction task, indicating that using these schemes to extract emotion-intensive information can help improve model performance.
null
null
10.18653/v1/2022.wassa-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,262
inproceedings
maheshwari-varma-2022-ensemble
An Ensemble Approach to Detect Emotions at an Essay Level
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.30/
Maheshwari, Himanshu and Varma, Vasudeva
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
276--279
This paper describes our system (IREL, referred to as himanshu.1007 on Codalab) for the Shared Task on Empathy Detection, Emotion Classification, and Personality Detection at the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis at ACL 2022. We participated in Track 2 for predicting emotion at the essay level. We propose an ensemble approach that leverages the linguistic knowledge of RoBERTa, BART-large, and a RoBERTa model fine-tuned on the GoEmotions dataset. Each brings its unique advantage, as we discuss in the paper. Our proposed system achieved a Macro F1 score of 0.585 and ranked first out of thirteen teams.
null
null
10.18653/v1/2022.wassa-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,263
inproceedings
lahnala-etal-2022-caisa
{CAISA} at {WASSA} 2022: Adapter-Tuning for Empathy Prediction
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.31/
Lahnala, Allison and Welch, Charles and Flek, Lucie
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
280--285
We build a system that leverages adapters, a lightweight and efficient method for adapting large language models, to perform the Empathy and Distress prediction tasks for WASSA 2022. In our experiments, we find that stacking our empathy and distress adapters on a pre-trained emotion classification adapter performs best compared to full fine-tuning approaches and emotion feature concatenation. We make our experimental code publicly available.
null
null
10.18653/v1/2022.wassa-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,264
inproceedings
obadic-etal-2022-nlpop
{NLPOP}: a Dataset for Popularity Prediction of Promoted {NLP} Research on {T}witter
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.32/
Obadi{\'c}, Leo and Tutek, Martin and {\v{S}}najder, Jan
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
286--292
Twitter has slowly but surely established itself as a forum for disseminating, analysing, and promoting NLP research. The trend of researchers promoting work not yet peer-reviewed (preprints) by posting concise summaries presented itself as an opportunity to collect and combine multiple modalities of data. In this paper, we (1) construct a dataset of Twitter threads in which researchers promote NLP preprints and (2) evaluate whether it is possible to predict the popularity of a thread based on the content of the Twitter thread, the paper content, and user metadata. We experimentally show that it is possible to predict the popularity of threads promoting research based on their content, and that predictive performance depends on modelling textual input, indicating that the dataset could present value for related areas of NLP research such as citation recommendation and abstractive summarization.
null
null
10.18653/v1/2022.wassa-1.32
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,265
inproceedings
shuo-2022-tagging
Tagging Without Rewriting: A Probabilistic Model for Unpaired Sentiment and Style Transfer
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.33/
Yang, Shuo
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
293--303
Style transfer is the task of paraphrasing text into a target-style domain while retaining the content. Unsupervised approaches mainly focus on training a generator to rewrite input sentences. In this work, we assume that text styles are determined by only a small proportion of words; therefore, rewriting sentences via generative models may be unnecessary. As an alternative, we consider style transfer as a sequence tagging task. Specifically, we use edit operations (i.e., deletion, insertion and substitution) to tag words in an input sentence. We train a classifier and a language model to score tagged sequences and build a conditional random field. Finally, the optimal path in the conditional random field is used as the output. The results of experiments comparing models indicate that our proposed model exceeds end-to-end baselines in terms of accuracy on both sentiment and style transfer tasks with comparable or better content preservation.
null
null
10.18653/v1/2022.wassa-1.33
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,266
inproceedings
silva-etal-2022-polite
Polite Task-oriented Dialog Agents: To Generate or to Rewrite?
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.34/
Silva, Diogo and Semedo, David and Magalh{\~a}es, Jo{\~a}o
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
304--314
For task-oriented dialog agents, the tone of voice mediates user-agent interactions, playing a central role in the flow of a conversation. Distinct from domain-agnostic politeness constructs, in specific domains such as online stores, booking platforms, and others, agents need to be capable of adopting highly specific vocabulary, with significant impact on lexical and grammatical aspects of utterances. The challenge, then, is to improve utterances' politeness while preserving the actual content, a central requirement for achieving the task goal. In this paper, we conduct a novel assessment of politeness strategies for task-oriented dialog agents under a transfer learning scenario. We extend existing generative and rewriting politeness approaches towards overcoming domain-shifting issues and enabling the transfer of politeness patterns to a novel domain. Both automatic and human evaluations are conducted on customer-store interactions in the fashion domain, from which we contribute insightful and experimentally supported lessons regarding the improvement of politeness in task-specific dialog agents.
null
null
10.18653/v1/2022.wassa-1.34
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,267
inproceedings
kreuter-etal-2022-items
Items from Psychometric Tests as Training Data for Personality Profiling Models of {T}witter Users
Barnes, Jeremy and De Clercq, Orph{\'e}e and Barriere, Valentin and Tafreshi, Shabnam and Alqahtani, Sawsan and Sedoc, Jo{\~a}o and Klinger, Roman and Balahur, Alexandra
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.wassa-1.35/
Kreuter, Anne and Sassenberg, Kai and Klinger, Roman
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment {\&} Social Media Analysis
315--323
Machine-learned models for author profiling in social media often rely on data acquired via self-reporting-based psychometric tests (questionnaires) filled out by social media users. This is an expensive but accurate data collection strategy. Another, less costly alternative, which leads to potentially more noisy and biased data, is to rely on labels inferred from publicly available information in the profiles of the users, for instance self-reported diagnoses or test results. In this paper, we explore a third strategy, namely to directly use a corpus of items from validated psychometric tests as training data. Items from psychometric tests often consist of sentences from an I-perspective (e.g., {\textquoteleft}I make friends easily.'). Such corpora of test items constitute {\textquoteleft}small data', but their availability for many concepts is a rich resource. We investigate this approach for personality profiling, and evaluate BERT classifiers fine-tuned on such psychometric test items for the big five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) and analyze various augmentation strategies regarding their potential to address the challenges coming with such a small corpus. Our evaluation on a publicly available Twitter corpus shows a comparable performance to in-domain training for 4/5 personality traits with T5-based data augmentation.
null
null
10.18653/v1/2022.wassa-1.35
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,268
inproceedings
al-thubaity-etal-2022-caraner
{CA}ra{NER}: The {COVID}-19 {A}rabic Named Entity Corpus
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.1/
Al-Thubaity, Abdulmohsen and Alkhereyf, Sakhar and Alzahrani, Wejdan and Bahanshal, Alia
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
1--10
Named Entity Recognition (NER) is a well-known problem for the natural language processing (NLP) community. It is a key component of different NLP applications, including information extraction, question answering, and information retrieval. In the literature, there are several Arabic NER datasets with different named entity tags; however, due to data and concept drift, we are always in need of new data for NER and other NLP applications. In this paper, first, we introduce Wassem, a web-based annotation platform for Arabic NLP applications. Wassem can be used to manually annotate textual data for a variety of NLP tasks: text classification, sequence classification, and word segmentation. Second, we introduce the COVID-19 Arabic Named Entities Recognition (CAraNER) dataset. CAraNER has 55,389 tokens distributed over 1,278 sentences randomly extracted from Saudi Arabian newspaper articles published during 2019, 2020, and 2021. The dataset is labeled by five annotators with five named-entity tags, namely: Person, Title, Location, Organization, and Miscellaneous. The CAraNER corpus is available for download for free. We evaluate the corpus by finetuning four BERT-based Arabic language models on the CAraNER corpus. The best model was AraBERTv0.2-large with 0.86 for the F1 macro measure.
null
null
10.18653/v1/2022.wanlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,270
inproceedings
aloraini-etal-2022-joint
Joint Coreference Resolution for Zeros and non-Zeros in {A}rabic
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.2/
Aloraini, Abdulrahman and Pradhan, Sameer and Poesio, Massimo
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
11--21
Most existing proposals about anaphoric zero pronoun (AZP) resolution regard full mention coreference and AZP resolution as two independent tasks, even though the two tasks are clearly related. The main issues that need tackling to develop a joint model for zero and non-zero mentions are the difference between the two types of arguments (zero pronouns, being null, provide no nominal information) and the lack of annotated datasets of a suitable size in which both types of arguments are annotated for languages other than Chinese and Japanese. In this paper, we introduce two architectures for jointly resolving AZPs and non-AZPs, and evaluate them on Arabic, a language for which, as far as we know, there has been no prior work on joint resolution. Doing this also required creating a new version of the Arabic subset of the standard coreference resolution dataset used for the CoNLL-2012 shared task (Pradhan et al., 2012) in which both zeros and non-zeros are included in a single dataset.
null
null
10.18653/v1/2022.wanlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,271
inproceedings
kaseb-farouk-2022-saids
{SAIDS}: A Novel Approach for Sentiment Analysis Informed of Dialect and Sarcasm
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.3/
Kaseb, Abdelrahman and Farouk, Mona
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
22--30
Sentiment analysis has become an essential part of every social network, as it enables decision-makers to learn more about users' opinions in almost all aspects of life. Despite its importance, it encounters multiple issues; the sentiment of sarcastic text is one of the main challenges of sentiment analysis. This paper tackles this challenge by introducing a novel system (SAIDS) that predicts the sentiment, sarcasm and dialect of Arabic tweets. SAIDS uses its predictions of sarcasm and dialect as known information to predict the sentiment. It uses MARBERT as a language model to generate sentence embeddings, then passes them to the sarcasm and dialect models; the outputs of the three models are concatenated and passed to the sentiment analysis model. Multiple system design setups were experimented with and reported. SAIDS was applied to the ArSarcasm-v2 dataset, where it outperforms the state-of-the-art model for the sentiment analysis task. By training all tasks together, SAIDS achieves results of 75.98 FPN, 59.09 F1-score and 71.13 F1-score for sentiment analysis, sarcasm detection, and dialect identification respectively. The system design can be used to enhance the performance of any task which depends on other tasks.
null
null
10.18653/v1/2022.wanlp-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,272
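The preceding entry (kaseb-farouk-2022-saids) describes an architecture in which auxiliary sarcasm and dialect predictions are concatenated with the sentence embedding before sentiment classification. Below is a minimal sketch of that wiring in PyTorch; the embedding dimension and class counts are assumptions, and the MARBERT encoder is stubbed as a plain input tensor rather than loaded.

```python
# Illustrative sketch of the SAIDS-style design described above: a shared
# sentence embedding feeds sarcasm and dialect heads, and their outputs are
# concatenated with the embedding before the sentiment head. Dimensions and
# class counts are hypothetical; the paper uses MARBERT for the embedding,
# stubbed here as a random tensor.
import torch
import torch.nn as nn

class SaidsLikeModel(nn.Module):
    def __init__(self, emb_dim=768, n_sarcasm=2, n_dialect=5, n_sentiment=3):
        super().__init__()
        self.sarcasm_head = nn.Linear(emb_dim, n_sarcasm)
        self.dialect_head = nn.Linear(emb_dim, n_dialect)
        # the sentiment head sees the embedding plus both auxiliary outputs
        self.sentiment_head = nn.Linear(emb_dim + n_sarcasm + n_dialect,
                                        n_sentiment)

    def forward(self, sentence_emb):                 # (batch, emb_dim)
        sarcasm = self.sarcasm_head(sentence_emb)    # (batch, n_sarcasm)
        dialect = self.dialect_head(sentence_emb)    # (batch, n_dialect)
        fused = torch.cat([sentence_emb, sarcasm, dialect], dim=-1)
        sentiment = self.sentiment_head(fused)
        return sentiment, sarcasm, dialect

model = SaidsLikeModel()
emb = torch.randn(4, 768)    # stand-in for MARBERT sentence embeddings
sentiment, sarcasm, dialect = model(emb)
print(sentiment.shape, sarcasm.shape, dialect.shape)
```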
inproceedings
kamal-eddine-etal-2022-arabart
{A}ra{BART}: a Pretrained {A}rabic Sequence-to-Sequence Model for Abstractive Summarization
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.4/
Kamal Eddine, Moussa and Tomeh, Nadi and Habash, Nizar and Le Roux, Joseph and Vazirgiannis, Michalis
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
31--42
Like most natural language understanding and generation tasks, state-of-the-art models for summarization are transformer-based sequence-to-sequence architectures that are pretrained on large corpora. While most existing models focus on English, Arabic remains understudied. In this paper we propose AraBART, the first Arabic model in which the encoder and the decoder are pretrained end-to-end, based on BART. We show that AraBART achieves the best performance on multiple abstractive summarization datasets, outperforming strong baselines including a pretrained Arabic BERT-based model, multilingual BART, Arabic T5, and a multilingual T5 model. AraBART is publicly available.
null
null
10.18653/v1/2022.wanlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,273
inproceedings
khallaf-etal-2022-towards
Towards {A}rabic Sentence Simplification via Classification and Generative Approaches
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.5/
Khallaf, Nouran and Sharoff, Serge and Soliman, Rasha
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
43--52
This paper presents an attempt to build a Modern Standard Arabic (MSA) sentence-level simplification system. We experimented with sentence simplification using two approaches: (i) a classification approach leading to lexical simplification pipelines which use Arabic-BERT, a pre-trained contextualised model, as well as a model of fastText word embeddings; and (ii) a generative approach, a Seq2Seq technique applying a multilingual Text-to-Text Transfer Transformer, mT5. We developed our training corpus by aligning the original and simplified sentences from the internationally acclaimed Arabic novel Saaq al-Bambuu. We evaluate the effectiveness of these methods by comparing the generated simple sentences to the target simple sentences using the BERTScore evaluation metric. The simple sentences produced by the mT5 model achieve P 0.72, R 0.68 and F-1 0.70 via BERTScore, while combining Arabic-BERT and fastText achieves P 0.97, R 0.97 and F-1 0.97. In addition, we report a manual error analysis for these experiments.
null
null
10.18653/v1/2022.wanlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,274
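The preceding entry (khallaf-etal-2022-towards) evaluates generated simplifications against target sentences with BERTScore. A hedged sketch of that evaluation step using the bert-score package follows; the two example sentences are placeholders, not data from the paper, and the package's default model selection for Arabic is assumed.

```python
# Illustrative sketch of the evaluation step described above: comparing
# system simplifications against target simple sentences with BERTScore.
# Requires the bert-score package (pip install bert-score); the sentences
# below are hypothetical placeholders.
from bert_score import score

candidates = ["a system-simplified sentence"]
references = ["the target simple sentence"]

# P, R, F1 are tensors with one value per candidate/reference pair
P, R, F1 = score(candidates, references, lang="ar")
print(f"P={P.mean():.2f} R={R.mean():.2f} F1={F1.mean():.2f}")
```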
inproceedings
elkaref-etal-2022-generating
Generating Classical {A}rabic Poetry using Pre-trained Models
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.6/
Elkaref, Nehal and Abu-Elkheir, Mervat and ElOraby, Maryam and Abdelgaber, Mohamed
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
53--62
Poetry generation tends to be a complicated task given meter and rhyme constraints. Previous work resorted to exhaustive methods in order to employ poetic elements. In this paper we leverage pre-trained models, GPT-J and BERTShared, to recognize patterns of meter and rhyme and to generate classical Arabic poetry, and we present our findings and results on how well both models could pick up on these classical Arabic poetic elements.
null
null
10.18653/v1/2022.wanlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,275
inproceedings
khondaker-etal-2022-benchmark
A Benchmark Study of Contrastive Learning for {A}rabic Social Meaning
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.7/
Khondaker, Md Tawkat Islam and Nagoudi, El Moatez Billah and Elmadany, AbdelRahim and Abdul-Mageed, Muhammad and Lakshmanan, V.S., Laks
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
63--75
Contrastive learning (CL) has brought significant progress to various NLP tasks. Despite this progress, CL has not been applied to Arabic NLP, nor is it clear how much benefit it could bring to particular classes of tasks such as social meaning (e.g., sentiment analysis, dialect identification, hate speech detection). In this work, we present a comprehensive benchmark study of state-of-the-art supervised CL methods on a wide array of Arabic social meaning tasks. Through an extensive empirical analysis, we show that CL methods outperform vanilla finetuning on most of the tasks. We also show that CL can be data efficient and quantify this efficiency, demonstrating the promise of these methods in low-resource settings vis-a-vis the particular downstream tasks (especially label granularity).
null
null
10.18653/v1/2022.wanlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,276
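The preceding entry (khondaker-etal-2022-benchmark) benchmarks supervised contrastive learning methods. As a hedged illustration of the family of objectives involved, here is a supervised contrastive (SupCon) loss in the style of Khosla et al. (2021), written from the published formulation rather than the paper's own code; the temperature is a typical default, not a value taken from the paper.

```python
# Illustrative sketch of a supervised contrastive (SupCon) loss of the kind
# benchmarked above. Inputs are L2-normalized feature vectors and integer
# class labels; temperature 0.07 is a common default, not from the paper.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """features: (batch, dim), assumed L2-normalized; labels: (batch,)."""
    sim = features @ features.T / temperature          # pairwise similarities
    # mask out self-similarity on the diagonal
    logits_mask = ~torch.eye(len(features), dtype=torch.bool)
    # positives: same label, excluding self
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # log-softmax over all *other* examples for each anchor
    sim = sim.masked_fill(~logits_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability of positives per anchor, then mean over anchors
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_log_prob / pos_counts).mean()

feats = F.normalize(torch.randn(8, 128), dim=1)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(supcon_loss(feats, labels))
```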
inproceedings
elneima-binkowski-2022-adversarial
Adversarial Text-to-Speech for low-resource languages
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.8/
Elneima, Ashraf and Bi{\'n}kowski, Miko{\l}aj
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
76--84
In this paper we propose a new method for training adversarial text-to-speech (TTS) models for low-resource languages using auxiliary data. Specifically, we modify the MelGAN (Kumar et al., 2019) architecture to achieve better performance in Arabic speech generation, exploring multiple additional datasets and architectural choices, which involved extra discriminators designed to exploit high-frequency similarities between languages. In our evaluation, we used subjective human evaluation, MOS-Mean Opinion Score, and a novel quantitative metric, the Fr{\'e}chet Wav2Vec Distance, which we found to be well correlated with MOS. Both subjectively and quantitatively, our method outperformed the standard MelGAN model.
null
null
10.18653/v1/2022.wanlp-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,277
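The preceding entry (elneima-binkowski-2022-adversarial) introduces a Fréchet Wav2Vec Distance. As a hedged sketch of the underlying computation, the following implements the standard Fréchet distance between Gaussian fits of two embedding sets, as used in FID for images; the random inputs stand in for Wav2Vec features of real and generated speech, and the paper's exact feature extraction is not reproduced here.

```python
# Illustrative sketch of a Frechet distance between two sets of embeddings,
# analogous to the Frechet Wav2Vec Distance described above: fit a Gaussian
# to each set and compare means and covariances.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x, y):
    """x, y: (n_samples, dim) embedding matrices."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):      # numerical noise can add tiny
        covmean = covmean.real        # imaginary parts; drop them
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 16))             # stand-in Wav2Vec embeddings
fake = rng.normal(loc=0.5, size=(200, 16))
print(frechet_distance(real, fake))
```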
inproceedings
abdul-mageed-etal-2022-nadi
{NADI} 2022: The Third Nuanced {A}rabic Dialect Identification Shared Task
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.9/
Abdul-Mageed, Muhammad and Zhang, Chiyu and Elmadany, AbdelRahim and Bouamor, Houda and Habash, Nizar
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
85--97
We describe the findings of the third Nuanced Arabic Dialect Identification Shared Task (NADI 2022). NADI aims at advancing state-of-the-art Arabic NLP, including Arabic dialects. It does so by affording diverse datasets and modeling opportunities in a standardized context where meaningful comparisons between models and approaches are possible. NADI 2022 targeted both dialect identification (Subtask 1) and dialectal sentiment analysis (Subtask 2) at the country level. A total of 41 unique teams registered for the shared task, of whom 21 teams have participated (with 105 valid submissions). Among these, 19 teams participated in Subtask 1, and 10 participated in Subtask 2. The winning team achieved F1=27.06 on Subtask 1 and F1=75.16 on Subtask 2, reflecting that both subtasks remain challenging and motivating future work in this area. We describe the methods employed by the participating teams and offer an outlook for NADI.
null
null
10.18653/v1/2022.wanlp-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,278
inproceedings
alhafni-etal-2022-shared
The Shared Task on Gender Rewriting
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.10/
Alhafni, Bashar and Habash, Nizar and Bouamor, Houda and Obeid, Ossama and Alrowili, Sultan and AlZeer, Daliyah and Shnqiti, Kawla Mohmad and Elbakry, Ahmed and ElNokrashy, Muhammad and Gabr, Mohamed and Issam, Abderrahmane and Qaddoumi, Abdelrahim and Shanker, Vijay and Zyate, Mahmoud
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
98--107
In this paper, we present the results and findings of the Shared Task on Gender Rewriting, which was organized as part of the Seventh Arabic Natural Language Processing Workshop. The task of gender rewriting refers to generating alternatives of a given sentence to match different target user gender contexts (e.g., a female speaker with a male listener, a male speaker with a male listener, etc.). This requires changing the grammatical gender (masculine or feminine) of certain words referring to the users. In this task, we focus on Arabic, a gender-marking morphologically rich language. A total of five teams from four countries participated in the shared task.
null
null
10.18653/v1/2022.wanlp-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,279
inproceedings
alam-etal-2022-overview
Overview of the {WANLP} 2022 Shared Task on Propaganda Detection in {A}rabic
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.11/
Alam, Firoj and Mubarak, Hamdy and Zaghouani, Wajdi and Da San Martino, Giovanni and Nakov, Preslav
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
108--118
Propaganda is defined as an expression of opinion or action by individuals or groups deliberately designed to influence the opinions or actions of other individuals or groups with reference to predetermined ends, and this is achieved by means of well-defined rhetorical and psychological devices. Propaganda (or persuasion) techniques are now commonly used on social media to manipulate or mislead social media users. Automatic detection of propaganda techniques from textual, visual, or multimodal content has been studied recently; however, most such efforts have focused on English-language content. In this paper, we propose a shared task on detecting propaganda techniques for Arabic textual content. We have done a pilot annotation of 200 Arabic tweets, which we plan to extend to 2,000 tweets, covering diverse topics. We hope that the shared task will help in building a community for Arabic propaganda detection. The dataset will be made publicly available, which can help in future studies.
null
null
10.18653/v1/2022.wanlp-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,280
inproceedings
hamed-etal-2022-arzen
{A}rz{E}n-{ST}: A Three-way Speech Translation Corpus for Code-Switched {E}gyptian {A}rabic-{E}nglish
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.12/
Hamed, Injy and Habash, Nizar and Abdennadher, Slim and Vu, Ngoc Thang
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
119--130
We present our work on collecting ArzEn-ST, a code-switched Egyptian Arabic-English Speech Translation Corpus. This corpus is an extension of the ArzEn speech corpus, which was collected through informal interviews with bilingual speakers. In this work, we collect translations in both directions, monolingual Egyptian Arabic and monolingual English, forming a three-way speech translation corpus. We make the translation guidelines and corpus publicly available. We also report results for baseline systems for machine translation and speech translation tasks. We believe this is a valuable resource that can motivate and facilitate further research studying the code-switching phenomenon from a linguistic perspective and can be used to train and evaluate NLP systems.
null
null
10.18653/v1/2022.wanlp-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,281
inproceedings
dibas-etal-2022-maknuune
Maknuune: A Large Open Palestinian {A}rabic Lexicon
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.13/
Dibas, Shahd Salah Uddin and Khairallah, Christian and Habash, Nizar and Sadi, Omar Fayez and Sairafy, Tariq and Sarabta, Karmel and Ardah, Abrar
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
131--141
We present Maknuune, a large open lexicon for the Palestinian Arabic dialect. Maknuune has over 36K entries from 17K lemmas and 3.7K roots. All entries include diacritized Arabic orthography, phonological transcription and English glosses. Some entries are enriched with additional information such as broken plurals and templatic feminine forms, associated phrases and collocations, Standard Arabic glosses, and examples or notes on grammar, usage, or the location where the entry was collected.
null
null
10.18653/v1/2022.wanlp-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,282
inproceedings
fashwan-alansary-2022-developing
Developing a Tag-Set and Extracting the Morphological Lexicons to Build a Morphological Analyzer for {E}gyptian {A}rabic
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.14/
Fashwan, Amany and Alansary, Sameh
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
142--160
This paper sheds light on in-progress work towards building a morphological analyzer for Egyptian Arabic (EGY). To build such a tool, a tag-set schema is developed based on a corpus of 527,000 EGY words covering different sources and genres. This tag-set schema is used in annotating about 318,940 words, morphologically, according to their contexts. Each annotated word is associated with its suitable prefix(es), original stem, tag, suffix(es), gloss, number, gender, definiteness, and conventional lemma and stem. These morphologically annotated words, in turn, are used in developing the proposed morphological analyzer, where the morphological lexicons and the compatibility tables are extracted and tested. The system is compared with one of the best EGY morphological analyzers, CALIMA.
null
null
10.18653/v1/2022.wanlp-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,283
inproceedings
husain-etal-2022-weak
A Weak Supervised Transfer Learning Approach for Sentiment Analysis to the {K}uwaiti Dialect
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.15/
Husain, Fatemah and Al-Ostad, Hana and Omar, Halima
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
161--173
Developing a system for sentiment analysis is very challenging for the Arabic language due to the limitations in the available Arabic datasets. Many Arabic dialects are still not studied by researchers in Arabic sentiment analysis due to the complexity of the annotator recruitment process during dataset creation. This paper covers the research gap in sentiment analysis for the Kuwaiti dialect by proposing a weak supervised approach to develop a large labeled dataset. Our dataset consists of over 16.6k tweets with 7,905 negatives, 7,902 positives, and 860 neutrals that span several themes and time frames to remove any bias that might affect its content. The annotation agreement between our proposed system's labels and human-annotated labels reports 93{\%} for the pairwise percent agreement and 0.87 for Cohen's kappa coefficient. Furthermore, we evaluate our dataset using multiple traditional machine learning classifiers and advanced deep learning language models to test its performance. The results report 89{\%} accuracy when applied to the testing dataset using the ARBERT model.
null
null
10.18653/v1/2022.wanlp-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,284
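The preceding entry (husain-etal-2022-weak) reports pairwise percent agreement and Cohen's kappa between system and human labels. A minimal sketch of computing both with scikit-learn follows; the label lists are hypothetical placeholders, not data from the paper.

```python
# Illustrative sketch of the agreement check described above: pairwise
# percent agreement and Cohen's kappa between weakly supervised labels and
# human labels. The example label lists are made up.
from sklearn.metrics import accuracy_score, cohen_kappa_score

system_labels = ["pos", "neg", "neu", "pos", "neg", "pos"]
human_labels  = ["pos", "neg", "neu", "neg", "neg", "pos"]

# percent agreement is simply the fraction of matching labels
percent_agreement = accuracy_score(human_labels, system_labels)
kappa = cohen_kappa_score(human_labels, system_labels)
print(f"agreement={percent_agreement:.2f} kappa={kappa:.2f}")
```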
inproceedings
alturayeif-etal-2022-mawqif
Mawqif: A Multi-label {A}rabic Dataset for Target-specific Stance Detection
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.16/
Alturayeif, Nora Saleh and Luqman, Hamzah Abdullah and Ahmed, Moataz Aly Kamaleldin
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
174--184
Social media platforms are becoming inherent parts of people's daily life to express opinions and stances toward topics of varying polarities. Stance detection determines the viewpoint expressed in a text toward a target. While communication on social media (e.g., Twitter) takes place in more than 40 languages, the majority of stance detection research has been focused on English. Although some efforts have recently been made to develop stance detection datasets in other languages, no similar efforts seem to have considered the Arabic language. In this paper, we present Mawqif, the first Arabic dataset for target-specific stance detection, composed of 4,121 tweets annotated with stance, sentiment, and sarcasm polarities. Mawqif, as a multi-label dataset, can provide more opportunities for studying the interaction between different opinion dimensions and evaluating a multi-task model. We provide a detailed description of the dataset, present an analysis of the produced annotation, and evaluate four BERT-based models on it. Our best model achieves a macro-F1 of 78.89{\%}, which shows that there is ample room for improvement on this challenging task. We publicly release our dataset, the annotation guidelines, and the code of the experiments.
null
null
10.18653/v1/2022.wanlp-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,285
inproceedings
alrajhi-etal-2022-assessing
Assessing the Linguistic Knowledge in {A}rabic Pre-trained Language Models Using Minimal Pairs
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.17/
Alrajhi, Wafa Abdullah and Al-Khalifa, Hend and AlSalman, Abdulmalik
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
185--193
Despite the noticeable progress that we recently witnessed in Arabic pre-trained language models (PLMs), the linguistic knowledge captured by these models remains unclear. In this paper, we conducted a study to evaluate available Arabic PLMs in terms of their linguistic knowledge. BERT-based language models (LMs) are evaluated using Minimal Pairs (MP), where each pair represents a grammatical sentence and its contradictory counterpart. MPs isolate specific linguistic knowledge to test the model's sensitivity in understanding a specific linguistic phenomenon. We cover nine major Arabic phenomena, including Verbal sentences, Nominal sentences, Adjective Modification, and Idafa construction. The experiments compared the results of fifteen Arabic BERT-based PLMs. Overall, among all tested models, CAMeL-CA outperformed the other PLMs by achieving the highest overall accuracy.
null
null
10.18653/v1/2022.wanlp-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,286
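The preceding entry (alrajhi-etal-2022-assessing) scores minimal pairs with BERT-based models. One common way to do this, sketched below under stated assumptions, is pseudo-log-likelihood scoring: mask each token in turn and sum the log-probabilities the model assigns to the original tokens. The checkpoint name and the English example pair are placeholders; the paper evaluates Arabic PLMs on Arabic pairs, and its exact scoring method is not specified here.

```python
# Illustrative sketch: scoring a minimal pair with a masked LM via
# pseudo-log-likelihood. The checkpoint and example sentences are
# placeholders, not the paper's setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-multilingual-cased"     # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):      # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

good, bad = "The cats sleep.", "The cats sleeps."
# the model is "sensitive" to the phenomenon if it prefers the good sentence
print(pseudo_log_likelihood(good) > pseudo_log_likelihood(bad))
```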
inproceedings
shehadi-wintner-2022-identifying
Identifying Code-switching in {A}rabizi
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.18/
Shehadi, Safaa and Wintner, Shuly
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
194--204
We describe a corpus of social media posts that include utterances in Arabizi, a Roman-script rendering of Arabic, mixed with other languages, notably English, French, and Arabic written in the Arabic script. We manually annotated a subset of the texts with word-level language IDs; this is a non-trivial task due to the nature of mixed-language writing, especially on social media. We developed classifiers that can accurately predict the language ID tags. Then, we extended the word-level predictions to identify sentences that include Arabizi (and code-switching), and applied the classifiers to the raw corpus, thereby harvesting a large number of additional instances. The result is a large-scale dataset of Arabizi, with precise indications of code-switching between Arabizi and English, French, and Arabic.
null
null
10.18653/v1/2022.wanlp-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,287
inproceedings
alqahtani-yannakoudakis-2022-authorship
Authorship Verification for {A}rabic Short Texts Using {A}rabic Knowledge-Base Model ({A}ra{KB})
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.19/
Alqahtani, Fatimah and Yannakoudakis, Helen
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
205--213
Arabic is a Semitic language, considered to be one of the most complex languages in the world due to its unique composition and complex linguistic features. This complexity poses challenges for verifying the authorship of Arabic texts, requiring extensive research and development. This paper presents a new knowledge-based model to enhance Natural Language Understanding and thereby improve authorship verification performance. The proposed model provided promising results that would benefit Arabic research on different Natural Language Processing tasks.
null
null
10.18653/v1/2022.wanlp-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,288
inproceedings
saadany-etal-2022-semi
A Semi-supervised Approach for a Better Translation of Sentiment in Dialectical {A}rabic {UGT}
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.20/
Saadany, Hadeel and Or{\u{a}}san, Constantin and Mohamed, Emad and Tantawy, Ashraf
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
214--224
In the online world, Machine Translation (MT) systems are extensively used to translate User-Generated Text (UGT) such as reviews, tweets, and social media posts, where the main message is often the author's positive or negative attitude towards the topic of the text. However, MT systems still lack accuracy in some low-resource languages and sometimes make critical translation errors that completely flip the sentiment polarity of the target word or phrase and hence deliver a wrong affect message. This is particularly noticeable in texts that do not follow common lexico-grammatical standards, such as the dialectical Arabic (DA) used on online platforms. In this research, we aim to improve the translation of sentiment in UGT written in the dialectical versions of the Arabic language to English. Given the scarcity of gold-standard parallel data for DA-EN in the UGT domain, we introduce a semi-supervised approach that exploits both monolingual and parallel data for training an NMT system initialised by a cross-lingual language model trained with supervised and unsupervised modeling objectives. We assess the accuracy of sentiment translation by our proposed system through a numerical {\textquoteleft}sentiment-closeness' measure as well as human evaluation. We show that our semi-supervised MT system can significantly help with correcting sentiment errors detected in the online translation of dialectical Arabic UGT.
null
null
10.18653/v1/2022.wanlp-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,289
inproceedings
abboud-etal-2022-cross
Cross-lingual transfer for low-resource {A}rabic language understanding
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.21/
Abboud, Khadige and Golovneva, Olga and DiPersio, Christopher
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
225--237
This paper explores cross-lingual transfer learning in natural language understanding (NLU), with a focus on bootstrapping Arabic from the high-resource English and French languages for domain classification, intent classification, and named entity recognition tasks. We adopt a BERT-based architecture and pretrain three models using open-source Wikipedia data and large-scale commercial datasets: a monolingual Arabic model, a bilingual Arabic-English model, and a trilingual Arabic-English-French model. Additionally, we use an off-the-shelf machine translator to translate internal data from the source English language to the target Arabic language, in an effort to enhance transfer learning through translation. We conduct experiments that finetune the three models for NLU tasks and evaluate them on a large internal dataset. Despite the morphological, orthographical, and grammatical differences between Arabic and the source languages, transfer learning performance gains from source languages and through machine translation are achieved on a real-world Arabic test dataset in both a zero-shot setting and in a setting where the models are further finetuned on labeled data from the target language.
null
null
10.18653/v1/2022.wanlp-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,290
inproceedings
abo-mokh-etal-2022-improving
Improving {POS} Tagging for {A}rabic Dialects on Out-of-Domain Texts
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.22/
Abo Mokh, Noor and Dakota, Daniel and K{\"u}bler, Sandra
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
238--248
We investigate part of speech tagging for four Arabic dialects (Gulf, Levantine, Egyptian, and Maghrebi), in an out-of-domain setting. More specifically, we look at the effectiveness of 1) upsampling the target dialect in the training data of a joint model, 2) increasing the consistency of the annotations, and 3) using word embeddings pre-trained on a large corpus of dialectal Arabic. We increase the accuracy on average by about 20 percentage points.
null
null
10.18653/v1/2022.wanlp-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,291
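The preceding entry (abo-mokh-etal-2022-improving) examines upsampling the target dialect in the training data of a joint model. A minimal, hedged sketch of that idea follows: duplicate the target dialect's sentences so the tagger sees them about as often as the larger corpora. The dialect names match the paper, but the corpus sizes and the whole-copy upsampling scheme are assumptions for illustration.

```python
# Illustrative sketch of upsampling a target dialect in a joint training
# set. Corpus sizes and the whole-copy scheme are hypothetical.
import random

corpora = {
    "gulf": ["sent"] * 900,        # placeholder tagged sentences
    "levantine": ["sent"] * 850,
    "egyptian": ["sent"] * 950,
    "maghrebi": ["sent"] * 120,    # the low-resource target dialect
}
target = "maghrebi"

largest = max(len(sents) for sents in corpora.values())
factor = largest // len(corpora[target])       # whole-copy upsampling factor
train = [s for sents in corpora.values() for s in sents]
train += corpora[target] * (factor - 1)        # add extra copies of the target
random.shuffle(train)
print(len(train))
```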
inproceedings
alrashdi-okeefe-2022-domain
Domain Adaptation for {A}rabic Crisis Response
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.23/
Alrashdi, Reem and O{'}Keefe, Simon
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
249--259
Deep learning algorithms can identify related tweets to reduce the information overload that prevents humanitarian organisations from using valuable Twitter posts. However, they rely heavily on human-labelled data, which are unavailable for emerging crises. Because each crisis has its own features, such as location, time and social media response, current models are known to suffer when generalising to unseen disaster events after being pre-trained on past ones. Tweet classifiers for low-resource languages like Arabic have the additional issue of limited labelled data, caused by the absence of good language resources. Thus, we propose a novel domain adaptation approach that employs distant supervision to automatically label tweets from emerging Arabic crisis events, to be used to train a model along with available human-labelled data. We evaluate our work on data from seven 2018{--}2020 Arabic events from different crisis types (flood, explosion, virus and storm). Results show that our method outperforms self-training in identifying crisis-related tweets in real-time scenarios and can be seen as a robust Arabic tweet classifier.
null
null
10.18653/v1/2022.wanlp-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,292
inproceedings
alyami-al-zaidy-2022-weakly
Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.24/
AlYami, Reem and Al-Zaidy, Rabah
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
260--272
The lack of resources such as annotated datasets and tools for low-resource languages is a significant obstacle to the advancement of Natural Language Processing (NLP) applications targeting users who speak these languages. Although learning techniques such as semi-supervised and weakly supervised learning are effective in text classification cases where annotated data is limited, they are still not widely investigated in many languages due to the sparsity of data altogether, both labeled and unlabeled. In this study, we deploy both weakly and semi-supervised learning approaches for text classification in low-resource languages and address the underlying limitations that can hinder the effectiveness of these techniques. To that end, we propose a suite of language-agnostic techniques for large-scale data collection, automatic data annotation, and language model training in scenarios where resources are scarce. Specifically, we propose a novel data collection pipeline for under-represented languages, or dialects, that is language and task agnostic and of sufficient size for training a language model capable of achieving competitive results on common NLP tasks, as our experiments show. The models will be shared with the research community.
null
null
10.18653/v1/2022.wanlp-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,293
inproceedings
yamani-etal-2022-event
Event-Based Knowledge {MLM} for {A}rabic Event Detection
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.25/
Yamani, Asma Z and Alsulami, Amjad K and Al-Zaidy, Rabeah A
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
273--286
With the fast pace of reporting around the globe from various sources, event extraction has increasingly become an important task in NLP. The use of pre-trained language models (PTMs) has become popular to provide contextual representation for downstream tasks. This work aims to pre-train language models that enhance event extraction accuracy. To this end, we propose an Event-Based Knowledge (EBK) masking approach to mask the most significant terms in the event detection task. These significant terms are based on an external knowledge source that is curated for the purpose of event detection for the Arabic language. The proposed approach improves the classification accuracy of all the 9 event types. The experimental results demonstrate the effectiveness of the proposed masking approach and encourage further exploration.
null
null
10.18653/v1/2022.wanlp-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,294
inproceedings
al-omar-etal-2022-establishing
Establishing a Baseline for {A}rabic Patents Classification: A Comparison of Twelve Approaches
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.26/
Al-Omar, Taif Omar and Al-Khalifa, Hend and Al-Matham, Rawan
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
287--294
Nowadays, the number of patent applications is constantly growing, and there is an economic interest in developing accurate and fast models to automate their classification. In this paper, we introduce the first public Arabic patent dataset, called ArPatent, and experiment with twelve classification approaches to develop a baseline for Arabic patents classification. To find the best baseline for classifying Arabic patents, different machine learning approaches, pre-trained language models, as well as ensemble approaches were evaluated. From the obtained results, we can observe that the best performing model for classifying Arabic patents was ARBERT, with an F1 of 66.53{\%}, while the ensemble of the three best performing language models, namely ARBERT, CAMeL-MSA, and QARiB, achieved the second best F1 score, i.e., 64.52{\%}.
null
null
10.18653/v1/2022.wanlp-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,295
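The preceding entry (al-omar-etal-2022-establishing) ensembles its three best language models; the paper does not spell out the ensembling scheme here, so the sketch below shows one common option, soft voting: average each model's class probabilities and take the argmax. The random probability rows are placeholders for the fine-tuned models' softmax outputs.

```python
# Illustrative sketch of a soft-voting ensemble of the kind described
# above. The three stand-in "models" emit random probability rows; in the
# paper they would be ARBERT, CAMeL-MSA, and QARiB fine-tuned on ArPatent.
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_classes = 5, 8

def fake_model_probs():          # placeholder for one model's softmax output
    p = rng.random((n_examples, n_classes))
    return p / p.sum(axis=1, keepdims=True)

probs = [fake_model_probs() for _ in range(3)]
ensemble = np.mean(probs, axis=0)        # average the class probabilities
predictions = ensemble.argmax(axis=1)
print(predictions)
```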
inproceedings
khalifa-etal-2022-towards
Towards Learning {A}rabic Morphophonology
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.27/
Khalifa, Salam and Kodner, Jordan and Rambow, Owen
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
295--301
One core challenge facing morphological inflection systems is capturing language-specific morphophonological changes. This is particularly true of languages like Arabic which are morphologically complex. In this paper, we learn explicit morphophonological rules from morphologically annotated Egyptian Arabic and corresponding surface forms. These rules are human-interpretable, capture known morphophonological phenomena in the language, and are generalizable to unseen forms.
null
null
10.18653/v1/2022.wanlp-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,296
inproceedings
hassib-etal-2022-aradepsu
{A}ra{D}ep{S}u: Detecting Depression and Suicidal Ideation in {A}rabic Tweets Using Transformers
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.28/
Hassib, Mariam and Hossam, Nancy and Sameh, Jolie and Torki, Marwan
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
302--311
Among mental health diseases, depression is one of the most severe, as it often leads to suicide, which is the fourth leading cause of death in the Middle East. Within the Middle East, Egypt has the highest percentage of suicidal deaths; because of this, it is important to identify depression and suicidal ideation. In Arabic culture, there is a lack of awareness regarding the importance of diagnosing and living with mental health diseases. However, over the last couple of years, people all over the world, including Arab citizens, have tended to express their feelings openly on social media. Twitter is the most popular platform designed to enable the expression of emotions through short texts, pictures, or videos. This paper aims to predict depression and depression with suicidal ideation. Because people tend to treat social media as their personal diaries and share their deepest thoughts on these platforms, social data contain valuable information that can be used to identify users' psychological states. We create the AraDepSu dataset by scraping tweets from Twitter and manually labelling them. We expand the diversity of user tweets by adding a neutral label ({\textquotedblleft}neutral{\textquotedblright}), so the dataset includes three classes ({\textquotedblleft}depressed{\textquotedblright}, {\textquotedblleft}suicidal{\textquotedblright}, {\textquotedblleft}neutral{\textquotedblright}). Then we train 30+ different transformer models on our AraDepSu dataset. We find that the best-performing model is MARBERT, with accuracy, precision, recall and F1-score values of 91.20{\%}, 88.74{\%}, 88.50{\%} and 88.75{\%}.
null
null
10.18653/v1/2022.wanlp-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,297
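The preceding entry (hassib-etal-2022-aradepsu) fine-tunes transformer models for a three-class tweet classifier. Below is a minimal, hedged sketch of one forward/backward step with a MARBERT-style checkpoint using the Hugging Face API; "UBC-NLP/MARBERT" is assumed to be the hub identifier, the example tweets and labels are placeholders, and a real run would train on the full AraDepSu dataset with a proper training loop.

```python
# Illustrative sketch of fine-tuning a BERT-style model for the
# depressed/suicidal/neutral setup described above. The checkpoint id is
# an assumption; the tweets and labels are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "UBC-NLP/MARBERT", num_labels=3)     # depressed / suicidal / neutral

texts = ["placeholder tweet one", "placeholder tweet two"]
labels = torch.tensor([0, 2])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

out = model(**batch, labels=labels)      # forward pass with loss
out.loss.backward()                      # one optimization step's gradients
print(float(out.loss))
```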