| Column | Type | Details |
|---|---|---|
| entry_type | string (categorical) | 4 distinct values |
| citation_key | string | 10–110 characters |
| title | string | 6–276 characters |
| editor | string (categorical) | 723 distinct values |
| month | string (categorical) | 69 distinct values |
| year | date string | 1963-01-01 to 2022-01-01 |
| address | string (categorical) | 202 distinct values |
| publisher | string (categorical) | 41 distinct values |
| url | string | 34–62 characters |
| author | string | 6–2.07k characters |
| booktitle | string (categorical) | 861 distinct values |
| pages | string | 1–12 characters |
| abstract | string | 302–2.4k characters |
| journal | string (categorical) | 5 distinct values |
| volume | string (categorical) | 24 distinct values |
| doi | string | 20–39 characters |
| n | string (categorical) | 3 distinct values |
| wer | string (categorical) | 1 distinct value |
| uas | null | — |
| language | string (categorical) | 3 distinct values |
| isbn | string (categorical) | 34 distinct values |
| recall | null | — |
| number | string (categorical) | 8 distinct values |
| a | null | — |
| b | null | — |
| c | null | — |
| k | null | — |
| f1 | string (categorical) | 4 distinct values |
| r | string (categorical) | 2 distinct values |
| mci | string (categorical) | 1 distinct value |
| p | string (categorical) | 2 distinct values |
| sd | string (categorical) | 1 distinct value |
| female | string (categorical) | 0 distinct values |
| m | string (categorical) | 0 distinct values |
| food | string (categorical) | 1 distinct value |
| f | string (categorical) | 1 distinct value |
| note | string (categorical) | 20 distinct values |
| __index_level_0__ | int64 | 22k–106k |
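The rows are BibTeX records harvested from the ACL Anthology; for most rows only the core bibliographic columns are populated and the remaining columns are null. Below is a minimal sketch of loading and inspecting the dataset with the Hugging Face `datasets` library; the repository id is a placeholder, not the dataset's actual path:

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "user/acl-bibtex-entries" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("user/acl-bibtex-entries", split="train")

print(ds.features)  # mirrors the schema table above
row = ds[0]
print(row["citation_key"], "-", row["title"])

# Columns such as `journal`, `wer`, or `uas` are null for most rows;
# keep only @inproceedings entries that actually carry an abstract.
papers = ds.filter(lambda r: r["entry_type"] == "inproceedings" and r["abstract"])
print(len(papers))
```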
% __index_level_0__: 22,298
@inproceedings{bougares-jouili-2022-end,
    title = "End-to-End Speech Translation of {A}rabic to {E}nglish Broadcast News",
    author = "Bougares, Fethi and Jouili, Salim",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.29/",
    doi = "10.18653/v1/2022.wanlp-1.29",
    pages = "312--319",
    abstract = "Speech translation (ST) is the task of directly translating acoustic speech signals in a source language into text in a foreign language. The ST task has long been addressed using a pipeline approach with two modules: first an Automatic Speech Recognition (ASR) system in the source language, followed by text-to-text Machine Translation (MT). In the past few years, we have seen a paradigm shift towards end-to-end approaches using sequence-to-sequence deep neural network models. This paper presents our efforts towards the development of the first Broadcast News end-to-end Arabic-to-English speech translation system. Starting from independent ASR and MT LDC releases, we were able to identify about 92 hours of Arabic audio recordings for which the manual transcription was also translated into English at the segment level. These data were used to train and compare pipeline and end-to-end speech translation systems under multiple scenarios, including transfer learning and data augmentation techniques.",
}
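Each row maps one-to-one onto a BibTeX entry like the one above, with null columns simply absent from the entry. The sketch below shows that rendering, assuming rows are plain dicts as returned by `datasets`; the field list and helper name are illustrative, not part of the dataset:

```python
# Order in which populated columns are emitted as BibTeX fields.
BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume",
    "number", "month", "year", "address", "publisher", "url", "doi",
    "pages", "abstract", "isbn", "language", "note",
]

def to_bibtex(row: dict) -> str:
    """Render one dataset row as a BibTeX entry, skipping null/empty fields."""
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value:  # None and "" both drop out
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)
```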
% __index_level_0__: 22,299
@inproceedings{alharbi-al-muhtasab-2022-arabic,
    title = "{A}rabic Keyphrase Extraction: Enhancing Deep Learning Models with Pre-trained Contextual Embedding and External Features",
    author = "Alharbi, Randah and Al-Muhtasab, Husni",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.30/",
    doi = "10.18653/v1/2022.wanlp-1.30",
    pages = "320--330",
    abstract = "Keyphrase extraction is essential to many Information Retrieval (IR) and Natural Language Processing (NLP) tasks such as summarization and indexing. This study investigates deep learning approaches to Arabic keyphrase extraction. We address the problem as sequence classification and create a Bi-LSTM model to classify each sequence token as either part of the keyphrase or outside of it. We extracted word embeddings from two pre-trained models, Word2Vec and BERT. Moreover, we investigated the effect of incorporating linguistic, positional, and statistical features with word embeddings on performance. Our best-performing model achieved a 0.45 F1-score on the ArabicKPE dataset when combining linguistic and positional features with BERT embeddings.",
}
% __index_level_0__: 22,300
@inproceedings{el-khbir-etal-2022-arabie,
    title = "{A}rab{IE}: Joint Entity, Relation and Event Extraction for {A}rabic",
    author = "El Khbir, Niama and Tomeh, Nadi and Charnois, Thierry",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.31/",
    doi = "10.18653/v1/2022.wanlp-1.31",
    pages = "331--345",
    abstract = "Previous work on Arabic information extraction has mainly focused on named entity recognition and very little work has been done on Arabic relation extraction and event recognition. Moreover, modeling Arabic data for such tasks is not straightforward because of the morphological richness and idiosyncrasies of the Arabic language. We propose in this article the first neural joint information extraction system for the Arabic language.",
}
% __index_level_0__: 22,301
@inproceedings{hakami-etal-2022-emoji,
    title = "Emoji Sentiment Roles for Sentiment Analysis: A Case Study in {A}rabic Texts",
    author = "Hakami, Shatha Ali A. and Hendley, Robert and Smith, Phillip",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.32/",
    doi = "10.18653/v1/2022.wanlp-1.32",
    pages = "346--355",
    abstract = "Emoji (digital pictograms) are crucial features for textual sentiment analysis. However, analysing the sentiment roles of emoji is very complex. This is due to their dependency on different factors, such as textual context, cultural perspective, the interlocutors' personal traits and relationships, or a platform's functional features. This work introduces an approach to analysing the sentiment effects of emoji as textual features. Using an Arabic dataset as a benchmark, our results confirm the borrowed argument that each emoji has three different norms of sentiment role (negative, neutral or positive). Therefore, an emoji can play different sentiment roles depending upon the context. It can behave as an emphasizer, an indicator, a mitigator, a reverser or a trigger of either negative or positive sentiment within a text. In addition, an emoji may have a neutral effect (i.e., no effect) on the sentiment of the text.",
}
% __index_level_0__: 22,302
@inproceedings{alabbasi-etal-2022-gulf,
    title = "{G}ulf {A}rabic Diacritization: Guidelines, Initial Dataset, and Results",
    author = "Alabbasi, Nouf and Al-Badrashiny, Mohamed and Aldahmani, Maryam and AlDhanhani, Ahmed and Alhashmi, Abdullah Saleh and Alhashmi, Fawaghy Ahmed and Al Hashemi, Khalid and Alkhobbi, Rama Emad and Al Maazmi, Shamma T and Alyafeai, Mohammed Ali and Alzaabi, Mariam M and Alzaabi, Mohamed Saqer and Badri, Fatma Khalid and Darwish, Kareem and Diab, Ehab Mansour and Elmallah, Muhammad Morsy and Elnashar, Amira Ayman and Elneima, Ashraf Hatim and Kabbani, MHD Tameem and Rabih, Nour and Saad, Ahmad and Sousou, Ammar Mamoun",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.33/",
    doi = "10.18653/v1/2022.wanlp-1.33",
    pages = "356--360",
    abstract = "Arabic diacritic recovery is important for a variety of downstream tasks such as text-to-speech. In this paper, we introduce a new Gulf Arabic diacritization dataset composed of 19,850 words, based on a subset of the Gumar corpus. We provide a comprehensive set of guidelines for diacritization to enable the diacritization of more data. We also report on diacritization results based on the new corpus using a Hidden Markov Model and character-based sequence-to-sequence models.",
}
% __index_level_0__: 22,303
@inproceedings{alshahrani-etal-2022-learning,
    title = "Learning From {A}rabic Corpora But Not Always From {A}rabic Speakers: A Case Study of the {A}rabic {W}ikipedia Editions",
    author = "Alshahrani, Saied and Wali, Esma and Matthews, Jeanna",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.34/",
    doi = "10.18653/v1/2022.wanlp-1.34",
    pages = "361--371",
    abstract = "Wikipedia is a common source of training data for Natural Language Processing (NLP) research, especially as a source for corpora in languages other than English. However, for many downstream NLP tasks, it is important to understand the degree to which these corpora reflect representative contributions of native speakers. In particular, many entries in a given language may be translated from other languages or produced through other automated mechanisms. Language models built using corpora like Wikipedia can embed history, culture, bias, stereotypes, politics, and more, but it is important to understand whose views are actually being represented. In this paper, we present a case study focusing specifically on differences among the Arabic Wikipedia editions (Modern Standard Arabic, Egyptian, and Moroccan). In particular, we document issues in the Egyptian Arabic Wikipedia with automatic creation/generation and translation of content pages from English without human supervision. These issues could substantially affect the performance and accuracy of Large Language Models (LLMs) trained from these corpora, producing models that lack the cultural richness and meaningful representation of native speakers. Fortunately, the metadata maintained by Wikipedia provides visibility into these issues, but unfortunately, this is not the case for all corpora used to train LLMs.",
}
% __index_level_0__: 22,304
@inproceedings{aldihan-etal-2022-pilot,
    title = "A Pilot Study on the Collection and Computational Analysis of Linguistic Differences Amongst Men and Women in a Kuwaiti {A}rabic {W}hats{A}pp Dataset",
    author = "Aldihan, Hesah and Gaizauskas, Robert and Fitzmaurice, Susan",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.35/",
    doi = "10.18653/v1/2022.wanlp-1.35",
    pages = "372--380",
    abstract = "This study focuses on the collection and computational analysis of Kuwaiti Arabic (KA), which is considered a low resource dialect, to test different sociolinguistic hypotheses related to gendered language use. In this paper, we describe the collection and analysis of a corpus of WhatsApp Group chats with mixed gender Kuwaiti participants. This corpus, which we are making publicly available, is the first corpus of KA conversational data. We analyse different interactional and linguistic features to get insights about features that may be indicative of gender to inform the development of a gender classification system for KA in an upcoming study. Statistical analysis of our data shows that there is insufficient evidence to claim that there are significant differences amongst men and women with respect to number of turns, length of turns and number of emojis. However, qualitative analysis shows that men and women differ substantially in the types of emojis they use and in their use of lengthened words.",
}
% __index_level_0__: 22,305
@inproceedings{gutkin-etal-2022-beyond,
    title = "Beyond {A}rabic: Software for {P}erso-{A}rabic Script Manipulation",
    author = "Gutkin, Alexander and Johny, Cibu and Doctor, Raiomond and Roark, Brian and Sproat, Richard",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.36/",
    doi = "10.18653/v1/2022.wanlp-1.36",
    pages = "381--387",
    abstract = "This paper presents an open-source software library that provides a set of finite-state transducer (FST) components and corresponding utilities for manipulating the writing systems of languages that use the Perso-Arabic script. The operations include various levels of script normalization, including visual invariance-preserving operations that subsume and go beyond the standard Unicode normalization forms, as well as transformations that modify the visual appearance of characters in accordance with the regional orthographies for eleven contemporary languages from diverse language families. The library also provides simple FST-based romanization and transliteration. We additionally attempt to formalize the typology of Perso-Arabic characters by providing one-to-many mappings from Unicode code points to the languages that use them. While our work focuses on the Arabic script diaspora rather than Arabic itself, this approach could be adopted for any language that uses the Arabic script, thus providing a unified framework for treating a script family used by close to a billion people.",
}
% __index_level_0__: 22,306
@inproceedings{aliady-etal-2022-coreference,
    title = "Coreference Annotation of an {A}rabic Corpus using a Virtual World Game",
    author = "Aliady, Wateen Abdullah and Aloraini, Abdulrahman and Madge, Christopher and Yu, Juntao and Bartle, Richard and Poesio, Massimo",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.37/",
    doi = "10.18653/v1/2022.wanlp-1.37",
    pages = "388--393",
    abstract = "Coreference resolution is a key aspect of text comprehension, but the size of the available coreference corpora for Arabic is limited in comparison to the size of the corpora for other languages. In this paper we present a Game-With-A-Purpose called Stroll with a Scroll created to collect from players coreference annotations for Arabic. The key contribution of this work is the embedding of the annotation task in a virtual world setting, as opposed to the puzzle-type games used in previously proposed Games-With-A-Purpose for coreference.",
}
% __index_level_0__: 22,307
@inproceedings{abdelali-etal-2022-natiq,
    title = "{N}ati{Q}: An End-to-end Text-to-Speech System for {A}rabic",
    author = "Abdelali, Ahmed and Durrani, Nadir and Demiroglu, Cenk and Dalvi, Fahim and Mubarak, Hamdy and Darwish, Kareem",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.38/",
    doi = "10.18653/v1/2022.wanlp-1.38",
    pages = "394--398",
    abstract = "NatiQ is an end-to-end text-to-speech system for Arabic. Our speech synthesizer uses an encoder-decoder architecture with attention. We used both Tacotron-based models (Tacotron1 and Tacotron2) and the faster Transformer model for generating mel-spectrograms from characters. We concatenated Tacotron1 with the WaveRNN vocoder, Tacotron2 with the WaveGlow vocoder, and the ESPnet Transformer with the Parallel WaveGAN vocoder to synthesize waveforms from the spectrograms. We used in-house speech data for two voices to train our models: 1) a neutral male voice, {\textquotedblleft}Hamza{\textquotedblright}, narrating general content and news, and 2) an expressive female voice, {\textquotedblleft}Amina{\textquotedblright}, narrating children's story books. Our best systems achieve an average Mean Opinion Score (MOS) of 4.21 and 4.40 for Amina and Hamza, respectively. The objective evaluation of the systems using word and character error rate (WER and CER), as well as the response time measured by real-time factor, favored the end-to-end ESPnet architecture. A NatiQ demo is available online at \url{https://tts.qcri.org}.",
}
% __index_level_0__: 22,308
@inproceedings{abu-farha-magdy-2022-effect,
    title = "The Effect of {A}rabic Dialect Familiarity on Data Annotation",
    author = "Abu Farha, Ibrahim and Magdy, Walid",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.39/",
    doi = "10.18653/v1/2022.wanlp-1.39",
    pages = "399--408",
    abstract = "Data annotation is the foundation of most natural language processing (NLP) tasks. However, data annotation is complex and there is often no specific correct label, especially in subjective tasks. Data annotation is affected by the annotators' ability to understand the provided data. In the case of Arabic, this is important due to the large dialectal variety. In this paper, we analyse how Arabic speakers understand other dialects in written text. Also, we analyse the effect of dialect familiarity on the quality of data annotation, focusing on Arabic sarcasm detection. This is done by collecting third-party labels and comparing them to high-quality first-party labels. Our analysis shows that annotators tend to better identify their own dialect and they are prone to confuse dialects they are unfamiliar with. For task labels, annotators tend to perform better on their dialect or dialects they are familiar with. Finally, females tend to perform better than males on the sarcasm detection task. We suggest that to guarantee high-quality labels, researchers should recruit native dialect speakers for annotation.",
}
% __index_level_0__: 22,309
@inproceedings{jauhiainen-etal-2022-optimizing,
    title = "Optimizing Naive {B}ayes for {A}rabic Dialect Identification",
    author = "Jauhiainen, Tommi and Jauhiainen, Heidi and Lind{\'e}n, Krister",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.40/",
    doi = "10.18653/v1/2022.wanlp-1.40",
    pages = "409--414",
    abstract = "This article describes the language identification system used by the SUKI team in the 2022 Nuanced Arabic Dialect Identification (NADI) shared task. In addition to the system description, we give some details of the dialect identification experiments we conducted while preparing our submissions. In the end, we submitted only one official run. We used a Naive Bayes-based language identifier with character n-grams from one to four, of which we implemented a new version that automatically optimizes its parameters. We also experimented with clustering the training data according to different topics. With macro F1 scores of 0.1963 on test set A and 0.1058 on test set B, we finished 18th out of the 19 competing teams.",
}
% __index_level_0__: 22,310
@inproceedings{messaoudi-etal-2022-icompass,
    title = "i{C}ompass Working Notes for the Nuanced {A}rabic Dialect Identification Shared task",
    author = "Messaoudi, Abir and Fourati, Chayma and Haddad, Hatem and BenHajhmida, Moez",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.41/",
    doi = "10.18653/v1/2022.wanlp-1.41",
    pages = "415--419",
    abstract = "We describe our submitted system to the Nuanced Arabic Dialect Identification (NADI) shared task. We tackled only the first subtask (Subtask 1). We used state-of-the-art Deep Learning models and pre-trained contextualized text representation models that we fine-tuned according to the downstream task at hand. As a first approach, we used BERT Arabic variants: MARBERT with its two versions MARBERT v1 and MARBERT v2; we combined MARBERT embeddings with a CNN classifier; and finally, we tested the Quasi-Recurrent Neural Networks (QRNN) model. The results show that version 2 of MARBERT outperforms all of the previously mentioned models on Subtask 1.",
}
% __index_level_0__: 22,311
@inproceedings{shammary-etal-2022-tf,
    title = "{TF}-{IDF} or Transformers for {A}rabic Dialect Identification? {ITFLOWS} participation in the {NADI} 2022 Shared Task",
    author = "Shammary, Fouad and Chen, Yiyi and Kardkovacs, Zsolt T and Alam, Mehwish and Afli, Haithem",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.42/",
    doi = "10.18653/v1/2022.wanlp-1.42",
    pages = "420--424",
    abstract = "This study targets the shared task of Nuanced Arabic Dialect Identification (NADI) organized with the Workshop on Arabic Natural Language Processing (WANLP). It further focuses on Subtask 1 on the identification of Arabic dialects at the country level. More specifically, it studies the impact of a traditional approach such as TF-IDF and then moves on to study the impact of advanced deep learning-based methods. These methods include fully fine-tuning MARBERT as well as adapter-based fine-tuning of MARBERT with and without data augmentation. The evaluation shows that the traditional TF-IDF-based approach scores best in terms of accuracy on the TEST-A dataset, while the adapter-based fine-tuned MARBERT with augmented data scores second on macro F1-score on the TEST-B dataset. This led to the proposed system being ranked second on the shared task on average.",
}
% __index_level_0__: 22,312
@inproceedings{bayrak-issifu-2022-domain,
    title = "Domain-Adapted {BERT}-based Models for Nuanced {A}rabic Dialect Identification and Tweet Sentiment Analysis",
    author = "Bayrak, Giyaseddin and Issifu, Abdul Majeed",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.43/",
    doi = "10.18653/v1/2022.wanlp-1.43",
    pages = "425--430",
    abstract = "This paper summarizes our solution to the Nuanced Arabic Dialect Identification (NADI) 2022 shared task. It consists of two subtasks: country-level Arabic Dialect Identification (ADID) and Arabic Sentiment Analysis (ASA). Our work shows the importance of using domain-adapted models and language-specific pre-processing in NLP task solutions. We implement a simple but strong baseline technique to increase the stability of fine-tuning settings and obtain good generalization. Our best model for the Dialect Identification subtask achieves a macro F-1 score of 25.54{\%} as an average of the Test-A (33.89{\%}) and Test-B (19.19{\%}) F-1 scores. We also obtained a macro F-1 score of 74.29{\%} over positive and negative sentiments only in the Sentiment Analysis task.",
}
% __index_level_0__: 22,313
@inproceedings{fsih-etal-2022-benchmarking,
    title = "Benchmarking transfer learning approaches for sentiment analysis of {A}rabic dialect",
    author = "Fsih, Emna and Kchaou, Sameh and Boujelbane, Rahma and Hadrich-Belguith, Lamia",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.44/",
    doi = "10.18653/v1/2022.wanlp-1.44",
    pages = "431--435",
    abstract = "Arabic has a widely varying collection of dialects. With the explosion of the use of social networks, the volume of written texts has remarkably increased. Most users express themselves using their own dialect. Unfortunately, many of these dialects remain under-studied due to the scarcity of resources. Researchers and industry practitioners are increasingly interested in analyzing users' sentiments. In this context, several approaches have been proposed, namely: traditional machine learning, deep learning, transfer learning and, more recently, few-shot learning approaches. In this work, we compare their efficiency as part of the NADI competition to develop a country-level sentiment analysis model. Three models were beneficial for this sub-task: the first, based on Sentence Transformers (ST), achieved 43.23{\%} on the DEV set and 42.33{\%} on the TEST set; the second, based on CAMeLBERT, achieved 47.85{\%} on the DEV set and 41.72{\%} on the TEST set; and the third, based on a multi-dialect BERT model, achieved 66.72{\%} on the DEV set and 39.69{\%} on the TEST set.",
}
% __index_level_0__: 22,314
@inproceedings{aalabdulsalam-2022-squ,
    title = "{SQU}-{CS} @ {NADI} 2022: Dialectal {A}rabic Identification using One-vs-One Classification with {TF}-{IDF} Weights Computed on Character n-grams",
    author = "AAlAbdulsalam, Abdulrahman Khalifa",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.45/",
    doi = "10.18653/v1/2022.wanlp-1.45",
    pages = "436--441",
    abstract = "In this paper, I present an approach using a one-vs-one classification scheme with TF-IDF term weighting on character n-grams for identifying Arabic dialects used in social media. The scheme was evaluated in the context of the third Nuanced Arabic Dialect Identification (NADI 2022) shared task for identifying Arabic dialects used in Twitter messages. The approach was implemented with logistic regression loss and trained using the stochastic gradient descent (SGD) algorithm. This simple method achieved macro F1 scores of 22.89{\%} and 10.83{\%} on TEST A and TEST B, respectively, compared to an approach based on the AraBERT pretrained transformer model, which achieved macro F1 scores of 30.01{\%} and 14.84{\%}, respectively. My submission based on AraBERT scored a macro F1 average of 22.42{\%} and was ranked 10th out of the 19 teams who participated in the task.",
}
% __index_level_0__: 22,315
@inproceedings{oumar-mrini-2022-ahmed,
    title = "Ahmed and Khalil at {NADI} 2022: Transfer Learning and Addressing Class Imbalance for {A}rabic Dialect Identification and Sentiment Analysis",
    author = "Oumar, Ahmed and Mrini, Khalil",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.46/",
    doi = "10.18653/v1/2022.wanlp-1.46",
    pages = "442--446",
    abstract = "In this paper, we present our findings in the two subtasks of the 2022 NADI shared task. First, in the Arabic dialect identification subtask, we find that there is heavy class imbalance and propose to address this issue using focal loss. Our experiments with the focusing hyperparameter confirm that focal loss improves performance. Second, in the Arabic tweet sentiment analysis subtask, we deal with a smaller dataset, where text includes both Arabic dialects and Modern Standard Arabic. We propose to use transfer learning from both pre-trained MSA language models and our own model from the first subtask. Our system ranks 5th and 7th on the leaderboards of the first and second subtasks, respectively.",
}
% __index_level_0__: 22,316
@inproceedings{qaddoumi-2022-arabic,
    title = "{A}rabic Sentiment Analysis by Pretrained Ensemble",
    author = "Qaddoumi, Abdelrahim",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.47/",
    doi = "10.18653/v1/2022.wanlp-1.47",
    pages = "447--451",
    abstract = "This paper presents team 259's BERT ensemble designed for the NADI 2022 Subtask 2 (sentiment analysis) (Abdul-Mageed et al., 2022). Twitter sentiment analysis is one of the natural language processing (NLP) tasks that provides a method to understand the perception and emotions of the public around specific topics. The most common research approach focuses on obtaining the tweet's sentiment by analyzing its lexical and syntactic features. We used multiple pretrained Arabic BERT models with simple average ensembling, then chose the best-performing ensemble on the training dataset and ran it on the development dataset. This system ranked 3rd in Subtask 2 with a macro-PN-F1 score of 72.49{\%}.",
}
% __index_level_0__: 22,317
@inproceedings{abdel-salam-2022-dialect,
    title = "Dialect {\&} Sentiment Identification in Nuanced {A}rabic Tweets Using an Ensemble of Prompt-based, Fine-tuned, and Multitask {BERT}-Based Models",
    author = "Abdel-Salam, Reem",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.48/",
    doi = "10.18653/v1/2022.wanlp-1.48",
    pages = "452--457",
    abstract = "Dialect identification is important to improve the performance of various applications such as translation, speech recognition, etc. In this paper, we present our findings and results in the Nuanced Arabic Dialect Identification Shared Task (NADI 2022) for country-level dialect identification and sentiment identification for dialectal Arabic. The proposed model is an ensemble of fine-tuned BERT-based models and various approaches to prompt-tuning. Our model secured first place on the leaderboard for subtask 1 with a 27.06 F1-macro score, and first place on subtask 2 with a 75.15 F1-PN score. Our findings show that prompt-tuning-based models achieved better performance when compared to fine-tuning and multi-task-based methods. Moreover, using an ensemble of different loss functions might improve model performance.",
}
% __index_level_0__: 22,318
@inproceedings{jamal-etal-2022-arabic,
    title = "On The {A}rabic Dialects' Identification: Overcoming Challenges of Geographical Similarities Between {A}rabic dialects and Imbalanced Datasets",
    author = "Jamal, Salma and .Kassem, Aly M and Mohamed, Omar and Ashraf, Ali",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.49/",
    doi = "10.18653/v1/2022.wanlp-1.49",
    pages = "458--463",
    abstract = "Arabic is one of the world's richest languages, with a diverse range of dialects based on geographical origin. In this paper, we present a solution to tackle subtask 1 (country-level dialect identification) of the Nuanced Arabic Dialect Identification (NADI) shared task 2022, achieving third place with an average macro F1 score between the two test sets of 26.44{\%}. In the preprocessing stage, we removed the most frequent terms from all sentences across all dialects, and in the modeling step, we employed a hybrid loss function approach that includes weighted cross-entropy loss and Vector Scaling (VS) loss. On test sets A and B, our model achieved 35.68{\%} and 17.192{\%} macro F1 scores, respectively.",
}
% __index_level_0__: 22,319
@inproceedings{alshenaifi-azmi-2022-arabic,
    title = "{A}rabic dialect identification using machine learning and transformer-based models: Submission to the {NADI} 2022 Shared Task",
    author = "AlShenaifi, Nouf and Azmi, Aqil",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.50/",
    doi = "10.18653/v1/2022.wanlp-1.50",
    pages = "464--467",
    abstract = "Arabic has a wide range of dialects. A dialect is the language variation of a specific community. In this paper, we show the models we created to participate in the third Nuanced Arabic Dialect Identification (NADI) shared task (Subtask 1), which involves developing a system to classify a tweet into a country-level dialect. We utilized a number of machine learning techniques as well as deep learning transformer-based models. For the machine learning approach, we built an ensemble classifier of various machine learning models. In our deep learning approach, we considered a bidirectional LSTM model and the AraBERT pretrained model. The results demonstrate that the deep learning approach performs noticeably better than the other machine learning approaches, with 68.7{\%} accuracy on the development set.",
}
% __index_level_0__: 22,320
@inproceedings{kanjirangat-etal-2022-nlp,
    title = "{NLP} {DI} at {NADI} Shared Task Subtask-1: Sub-word Level Convolutional Neural Models and Pre-trained Binary Classifiers for Dialect Identification",
    author = "Kanjirangat, Vani and Samardzic, Tanja and Dolamic, Ljiljana and Rinaldi, Fabio",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.51/",
    doi = "10.18653/v1/2022.wanlp-1.51",
    pages = "468--473",
    abstract = "In this paper, we describe our systems submitted to the NADI Subtask 1: country-wise dialect classification. We designed two types of solutions. The first type is convolutional neural network (CNN) classifiers trained on subword segments of optimized lengths. The second type is fine-tuned classifiers with BERT-based language-specific pre-trained models. To deal with the missing dialects in one of the test sets, we experimented with binary classifiers, analyzing the predicted probability distribution patterns and comparing them with the development set patterns. The better-performing approach on the development set was fine-tuning the language-specific pre-trained model (best F-score 26.59{\%}). On the test set, on the other hand, we obtained the best performance with the CNN model trained on subword tokens obtained with a Unigram model (best F-score 26.12{\%}). Re-training models on samples of training data simulating missing dialects gave the maximum performance on the test set version with fewer dialects than the training set (F-score 16.44{\%}).",
}
% __index_level_0__: 22,321
@inproceedings{sobhy-etal-2022-word,
    title = "Word Representation Models for {A}rabic Dialect Identification",
    author = "Sobhy, Mahmoud and Abu El-Atta, Ahmed H. and El-Sawy, Ahmed A. and Nayel, Hamada",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.52/",
    doi = "10.18653/v1/2022.wanlp-1.52",
    pages = "474--478",
    abstract = "This paper describes the systems submitted by the BFCAI team to the Nuanced Arabic Dialect Identification (NADI) shared task 2022. The dialect identification task aims at detecting the source variant of a given text or speech segment automatically. There are two subtasks in NADI 2022: the first subtask for country-level identification and the second subtask for sentiment analysis. Our team participated in the first subtask. The proposed systems use Term Frequency/Inverse Document Frequency and word embeddings as vectorization models. Different machine learning algorithms have been used as classifiers. The proposed systems have been tested on two test sets: Test-A and Test-B. The proposed models achieved macro-F1 scores of 21.25{\%} and 9.71{\%} on the Test-A and Test-B sets, respectively. On the other hand, the best-performing submitted system achieved macro-F1 scores of 36.48{\%} and 18.95{\%} on the Test-A and Test-B sets, respectively.",
}
% __index_level_0__: 22,322
@inproceedings{khered-etal-2022-building,
    title = "Building an Ensemble of Transformer Models for {A}rabic Dialect Classification and Sentiment Analysis",
    author = "Khered, Abdullah and Abdelhalim, Ingy Abdelhalim and Batista-Navarro, Riza",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.53/",
    doi = "10.18653/v1/2022.wanlp-1.53",
    pages = "479--484",
    abstract = "In this paper, we describe the approaches we developed for the Nuanced Arabic Dialect Identification (NADI) 2022 shared task, which consists of two subtasks: the identification of country-level Arabic dialects and sentiment analysis. Our team, UniManc, developed approaches to the two subtasks which are underpinned by the same model: a pre-trained MARBERT language model. For Subtask 1, we applied undersampling to create versions of the training data with a balanced distribution across classes. For Subtask 2, we further trained the original MARBERT model for the masked language modelling objective using a NADI-provided dataset of unlabelled Arabic tweets. For each of the subtasks, a MARBERT model was fine-tuned for sequence classification, using different values for hyperparameters such as seed and learning rate. This resulted in multiple model variants, which formed the basis of an ensemble model for each subtask. Based on the official NADI evaluation, our ensemble model obtained a macro-F1-score of 26.863, ranking second overall in the first subtask. In the second subtask, our ensemble model also ranked second, obtaining a macro-F1-PN score (macro-averaged F1-score over the Positive and Negative classes) of 73.544.",
}
% __index_level_0__: 22,323
@inproceedings{attieh-hassan-2022-arabic,
    title = "{A}rabic Dialect Identification and Sentiment Classification using Transformer-based Models",
    author = "Attieh, Joseph and Hassan, Fadi",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.54/",
    doi = "10.18653/v1/2022.wanlp-1.54",
    pages = "485--490",
    abstract = "In this paper, we present two deep learning approaches that are based on AraBERT, submitted to the Nuanced Arabic Dialect Identification (NADI) shared task of the Seventh Workshop for Arabic Natural Language Processing (WANLP 2022). NADI consists of two main sub-tasks, mainly country-level dialect and sentiment identification for dialectical Arabic. We present one system per sub-task. The first system is a multi-task learning model that consists of a shared AraBERT encoder with three task-specific classification layers. This model is trained to jointly learn the country-level dialect of the tweet as well as the region-level and area-level dialects. The second system is a distilled model of an ensemble of models trained using K-fold cross-validation. Each model in the ensemble consists of an AraBERT model and a classifier, fine-tuned on (K-1) folds of the training set. Our team Pythoneers achieved rank 6 on the first test set of the first sub-task, rank 9 on the second test set of the first sub-task, and rank 4 on the test set of the second sub-task.",
}
% __index_level_0__: 22,324
@inproceedings{alrowili-shanker-2022-generative,
    title = "Generative Approach for Gender-Rewriting Task with {A}rabic{T}5",
    author = "Alrowili, Sultan and Shanker, Vijay",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.55/",
    doi = "10.18653/v1/2022.wanlp-1.55",
    pages = "491--495",
    abstract = "Addressing the correct gender in generative tasks (e.g., Machine Translation) has been an overlooked issue in Arabic NLP. However, the recent introduction of the Arabic Parallel Gender Corpus (APGC) dataset has established new baselines for the Arabic Gender Rewriting task. To address the Gender Rewriting task, we first pre-train our new Seq2Seq ArabicT5 model on 17GB of Arabic corpora. Then, we continue pre-training our ArabicT5 model on the APGC dataset using a newly proposed method. Our evaluation shows that our ArabicT5 model, when trained on the APGC dataset, achieved competitive results against existing state-of-the-art methods. In addition, our ArabicT5 model shows better results on the APGC dataset compared to other Arabic and multilingual T5 models.",
}
% __index_level_0__: 22,325
@inproceedings{singh-2022-araprop,
    title = "{A}ra{P}rop at {WANLP} 2022 Shared Task: Leveraging Pre-Trained Language Models for {A}rabic Propaganda Detection",
    author = "Singh, Gaurav",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.56/",
    doi = "10.18653/v1/2022.wanlp-1.56",
    pages = "496--500",
    abstract = "This paper presents the approach taken for the shared task on Propaganda Detection in Arabic at the Seventh Arabic Natural Language Processing Workshop (WANLP 2022). We participated in Sub-task 1, where the text of a tweet is provided and the goal is to identify the different propaganda techniques used in it, a multi-label classification problem. For our solution, we leveraged different transformer-based pre-trained language models with fine-tuning. We found that MARBERTv2 outperforms the other language models we considered, with an F1-macro of 0.08175 and an F1-micro of 0.61116. Our method achieved rank 4 in the testing phase of the challenge.",
}
% __index_level_0__: 22,326
@inproceedings{mohtaj-moller-2022-tub,
    title = "{TUB} at {WANLP}22 Shared Task: Using Semantic Similarity for Propaganda Detection in {A}rabic",
    author = "Mohtaj, Salar and M{\"o}ller, Sebastian",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.57/",
    doi = "10.18653/v1/2022.wanlp-1.57",
    pages = "501--505",
    abstract = "Propaganda and the spreading of fake news through social media have become a serious problem in recent years. In this paper we present our approach for the shared task on propaganda detection in Arabic in which the goal is to identify propaganda techniques in the Arabic social media text. We propose a semantic similarity detection model to compare text in the test set with the sentences in the train set to find the most similar instances. The label of the target text is obtained from the most similar texts in the train set. The proposed model obtained the micro F1 score of 0.494 on the text data set.",
}
% __index_level_0__: 22,327
@inproceedings{gaanoun-benelallam-2022-si2m,
    title = "{SI}2{M} {\&} {AIOX} Labs at {WANLP} 2022 Shared Task: Propaganda Detection in {A}rabic, A Data Augmentation and Name Entity Recognition Approach",
    author = "Gaanoun, Kamel and Benelallam, Imade",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.58/",
    doi = "10.18653/v1/2022.wanlp-1.58",
    pages = "506--510",
    abstract = "This paper presents SI2M {\&} AIOX Labs' work on the propaganda detection in Arabic text shared task. The objective of this challenge is to identify the propaganda techniques used in specific propaganda fragments. We use a combination of data augmentation, Named Entity Recognition, rule-based repetition detection, and ARBERT prediction to develop our system. The model we provide scored a 0.585 micro F1-score and ranked 6th out of 12 teams.",
}
% __index_level_0__: 22,329
@inproceedings{chavan-kane-2022-chavankane,
    title = "{C}havan{K}ane at {WANLP} 2022 Shared Task: Large Language Models for Multi-label Propaganda Detection",
    author = "Chavan, Tanmay and Kane, Aditya Manish",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.60/",
    doi = "10.18653/v1/2022.wanlp-1.60",
    pages = "515--519",
    abstract = "The spread of propaganda through the internet has increased drastically over the past years. Lately, propaganda detection has started gaining importance because of the negative impact it has on society. In this work, we describe our approach for the WANLP 2022 shared task which handles the task of propaganda detection in a multi-label setting. The task demands the model to label the given text as having one or more types of propaganda techniques. There are a total of 21 propaganda techniques to be detected. We show that an ensemble of five models performs the best on the task, scoring a micro-F1 score of 59.73{\%}. We also conduct comprehensive ablations and propose various future directions for this work.",
}
% __index_level_0__: 22,330
@inproceedings{sharara-etal-2022-arabert,
    title = "{A}ra{BERT} Model for Propaganda Detection",
    author = "Sharara, Mohamad and Mohamad, Wissam and Tawil, Ralph and Chobok, Ralph and Assi, Wolf and Tannoury, Antonio",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.61/",
    doi = "10.18653/v1/2022.wanlp-1.61",
    pages = "520--523",
    abstract = "Nowadays, the rapid dissemination of data on digital platforms has resulted in the emergence of information pollution and data contamination, specifically mis-information, mal-information, dis-information, fake news, and various types of propaganda. These topics are now posing a serious threat to the online digital realm, posing numerous challenges to social media platforms and governments around the world. In this article, we propose a propaganda detection model based on the transformer-based model AraBERT, with the objective of using this framework to detect propagandistic content in Arabic social media text, with the purpose of making online Arabic news and media consumption healthier and safer. Given the dataset, our results are relatively encouraging, indicating a huge potential for this line of approaches in Arabic online news text NLP.",
}
% __index_level_0__: 22,331
@inproceedings{refaee-etal-2022-arabem,
    title = "{A}ra{BEM} at {WANLP} 2022 Shared Task: Propaganda Detection in {A}rabic Tweets",
    author = "Refaee, Eshrag Ali and Ahmed, Basem and Saad, Motaz",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.62/",
    doi = "10.18653/v1/2022.wanlp-1.62",
    pages = "524--528",
    abstract = "Propaganda is information or ideas that an organized group or government spreads to influence people's opinions, especially by not giving all the facts or secretly emphasizing only one way of looking at the points. The ability to automatically detect propaganda-related linguistic signs is a challenging task that researchers in the NLP community have recently started to address. This paper presents the participation of our team AraBEM in the propaganda detection shared task on Arabic tweets. Our system utilized a pre-trained BERT model to perform multi-class binary classification. It attained its best score of 0.602 micro-F1, ranking third on subtask-1, which identifies the propaganda techniques as a multilabel classification problem with a baseline of 0.079.",
}
% __index_level_0__: 22,332
@inproceedings{mittal-nakov-2022-iitd,
    title = "{IITD} at {WANLP} 2022 Shared Task: Multilingual Multi-Granularity Network for Propaganda Detection",
    author = "Mittal, Shubham and Nakov, Preslav",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.63/",
    doi = "10.18653/v1/2022.wanlp-1.63",
    pages = "529--533",
    abstract = "We present our system for the two subtasks of the shared task on propaganda detection in Arabic, part of WANLP 2022. Subtask 1 is a multi-label classification problem to find the propaganda techniques used in a given tweet. Our system for this task uses XLM-R to predict the probability that the target tweet uses each of the techniques. In addition to finding the techniques, subtask 2 further asks to identify the textual span for each instance of each technique present in the tweet; this task can be modelled as a sequence tagging problem. We use a multi-granularity network with an mBERT encoder for subtask 2. Overall, our system ranks second in both subtasks (out of 14 and 3 participants, respectively). Our experimental results and analysis show that it does not help to use a much larger English corpus annotated with propaganda techniques, regardless of whether it is used in English or after translation to Arabic.",
}
% __index_level_0__: 22,333
@inproceedings{attieh-hassan-2022-pythoneers,
    title = "Pythoneers at {WANLP} 2022 Shared Task: Monolingual {A}ra{BERT} for {A}rabic Propaganda Detection and Span Extraction",
    author = "Attieh, Joseph and Hassan, Fadi",
    editor = "Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.64/",
    doi = "10.18653/v1/2022.wanlp-1.64",
    pages = "534--540",
    abstract = "In this paper, we present two deep learning approaches that are based on AraBERT, submitted to the Propaganda Detection shared task of the Seventh Workshop for Arabic Natural Language Processing (WANLP 2022). Propaganda detection consists of two main sub-tasks, mainly propaganda identification and span extraction. We present one system per sub-task. The first system is a Multi-Task Learning model that consists of a shared AraBERT encoder with task-specific binary classification layers. This model is trained to jointly learn one binary classification task per propaganda method. The second system is an AraBERT model with a Conditional Random Field (CRF) layer. We achieved rank 3 on the first sub-task and rank 1 on the second sub-task.",
}
inproceedings
laskar-etal-2022-cnlp
{CNLP}-{NITS}-{PP} at {WANLP} 2022 Shared Task: Propaganda Detection in {A}rabic using Data Augmentation and {A}ra{BERT} Pre-trained Model
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.65/
Laskar, Sahinur Rahman and Singh, Rahul and Khilji, Abdullah Faiz Ur Rahman and Manna, Riyanka and Pakray, Partha and Bandyopadhyay, Sivaji
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
541--544
Today, online users are regularly exposed to propagandistic media posts. Several strategies have been developed to promote safer media consumption in Arabic to combat this. However, only limited multilabel-annotated social media datasets are available. In this work, we used a pre-trained AraBERT twitter-base model on training data expanded via data augmentation. Our team, CNLP-NITS-PP, achieved third rank in subtask 1 at WANLP-2022 for propaganda detection in Arabic (shared task), with a micro-F1 score of 0.602.
10.18653/v1/2022.wanlp-1.65
22,334
inproceedings
hussein-etal-2022-ngu
{NGU} {CNLP} at {WANLP} 2022 Shared Task: Propaganda Detection in {A}rabic
Bouamor, Houda and Al-Khalifa, Hend and Darwish, Kareem and Rambow, Owen and Bougares, Fethi and Abdelali, Ahmed and Tomeh, Nadi and Khalifa, Salam and Zaghouani, Wajdi
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.wanlp-1.66/
Hussein, Ahmed Samir and Mohammad, Abu Bakr Soliman and Ibrahim, Mohamed and Afify, Laila Hesham and El-Beltagy, Samhaa R.
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
545--550
This paper presents the system developed by the NGU_CNLP team for addressing the shared task on Propaganda Detection in Arabic at WANLP 2022. The team participated in the shared task's two sub-tasks: 1) propaganda technique identification in text and 2) propaganda technique span identification. In the first sub-task, the goal is to detect all employed propaganda techniques in a given piece of text out of a possible 17 different techniques, or to detect that no propaganda technique is being used in that piece of text. As such, this first sub-task is a multi-label classification problem with a pool of 18 possible labels. Sub-task 2 extends sub-task 1 by requiring the identification of the exact text span in which a propaganda technique was employed, making it a sequence labeling problem. For sub-task 1, our classification model combined a data augmentation strategy with a transformer-based model. This classification model ranked first amongst the 14 systems participating in this sub-task. For sub-task 2, a transfer learning model was adopted. The system ranked third among the 3 models that participated in this sub-task.
10.18653/v1/2022.wanlp-1.66
22,335
inproceedings
aepli-etal-2022-findings
Findings of the {V}ar{D}ial Evaluation Campaign 2022
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.1/
Aepli, No{\"emi and Anastasopoulos, Antonios and Chifu, Adrian-Gabriel and Domingues, William and Faisal, Fahim and Gaman, Mihaela and Ionescu, Radu Tudor and Scherrer, Yves
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
1--13
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2022. The campaign is part of the ninth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with COLING 2022. Three separate shared tasks were included this year: Identification of Languages and Dialects of Italy (ITDI), French Cross-Domain Dialect Identification (FDI), and Dialectal Extractive Question Answering (DialQA). All three tasks were organized for the first time this year.
22,337
inproceedings
kellert-matlis-2022-social
Social Context and User Profiles of Linguistic Variation on a Micro Scale
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.2/
Kellert, Olga and Matlis, Nicholas Hill
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
14--19
This paper presents a new tweet-based approach in geolinguistic analysis which combines geolocation, user IDs and textual features in order to identify patterns of linguistic variation on a sub-city scale. Sub-city variations can be connected to social drivers and thus open new opportunities for understanding the mechanisms of language variation and change. However, measuring linguistic variation on these scales is challenging due to the lack of highly-spatially-resolved data as well as to the daily movement or users' "mobility" inside cities which can obscure the relation between the social context and linguistic variation. Here we demonstrate how combining geolocation with user IDs and textual analysis of tweets can yield information about the linguistic profiles of the users, the social context associated with specific locations and their connection to linguistic variation. We apply our methodology to analyze dialects in Buenos Aires and find evidence of socially-driven variation. Our methods will contribute to the identification of sociolinguistic patterns inside cities, which are valuable in social sciences and social services.
22,338
inproceedings
shim-nerbonne-2022-dialectr
dialect{R}: Doing Dialectometry in {R}
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.3/
Shim, Ryan Soh-Eun and Nerbonne, John
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
20--27
We present dialectR, an open-source R package for performing quantitative analyses of dialects based on categorical measures of difference and on variants of edit distance. dialectR stands as one of the first programmable toolkits that may freely be combined and extended by users with further statistical procedures. We describe implementational details of the package, and provide two examples of its use: one performing analyses based on multidimensional scaling and hierarchical clustering on a dataset of Dutch dialects, and another showing how an approximation of the acoustic vowel space may be achieved by performing an MFCC (Mel-Frequency Cepstral Coefficients)-based acoustic distance on audio recordings of vowels.
22,339
inproceedings
liu-2022-low
Low-Resource Neural Machine Translation: A Case Study of {C}antonese
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.4/
Liu, Evelyn Kai-Yan
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
28--40
The development of Natural Language Processing (NLP) applications for Cantonese, a language with over 85 million speakers, is lagging compared to other languages with a similar number of speakers. In this paper, we present, to our best knowledge, the first benchmark of multiple neural machine translation (NMT) systems from Mandarin Chinese to Cantonese. Additionally, we performed parallel sentence mining (PSM) as data augmentation for the extremely low resource language pair and increased the number of sentence pairs from 1,002 to 35,877. Results show that with PSM, the best performing model (BPE-level bidirectional LSTM) scored 11.98 BLEU better than the vanilla baseline and 9.93 BLEU higher than our strong baseline. Our unsupervised NMT (UNMT) results also refuted a previous assumption (Rubino et al., 2020) that the poor performance was related to the lack of linguistic similarities between the target and source languages, particularly in the case of Cantonese and Mandarin. In the process of building the NMT system, we also created the first large-scale parallel training and evaluation datasets of the language pair. Codes and datasets are publicly available at \url{https://github.com/evelynkyl/yue_nmt}.
22,340
inproceedings
nath-etal-2022-phonetic
Phonetic, Semantic, and Articulatory Features in {A}ssamese-{B}engali Cognate Detection
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.5/
Nath, Abhijnan and Ghosh, Rahul and Krishnaswamy, Nikhil
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
41--53
In this paper, we propose a method to detect if words in two similar languages, Assamese and Bengali, are cognates. We mix phonetic, semantic, and articulatory features and use the cognate detection task to analyze the relative informational contribution of each type of feature to distinguish words in the two similar languages. In addition, since support for low-resourced languages like Assamese can be weak or nonexistent in some multilingual language models, we create a monolingual Assamese Transformer model and explore augmenting multilingual models with monolingual models using affine transformation techniques between vector spaces.
22,341
inproceedings
zaitova-etal-2022-mapping
Mapping Phonology to Semantics: A Computational Model of Cross-Lingual Spoken-Word Recognition
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.6/
Zaitova, Iuliia and Abdullah, Badr and Klakow, Dietrich
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
54--63
Closely related languages are often mutually intelligible to various degrees. Therefore, speakers of closely related languages are usually capable of (partially) comprehending each other's speech without explicitly learning the target, second language. The cross-linguistic intelligibility among closely related languages is mainly driven by linguistic factors such as lexical similarities. This paper presents a computational model of spoken-word recognition and investigates its ability to recognize word forms from languages other than its native training language. Our model is based on a recurrent neural network that learns to map a word's phonological sequence onto a semantic representation of the word. Furthermore, we present a case study on the related Slavic languages and demonstrate that the cross-lingual performance of our model not only predicts mutual intelligibility to a large extent but also reflects the genetic classification of the languages in our study.
22,342
inproceedings
maehlum-etal-2022-annotating
Annotating {N}orwegian language varieties on {T}witter for Part-of-speech
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.7/
M{\ae}hlum, Petter and K{\r{a}}sen, Andre and Touileb, Samia and Barnes, Jeremy
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
64--69
Norwegian Twitter data poses an interesting challenge for Natural Language Processing (NLP) tasks. These texts are difficult for models trained on standardized text in one of the two Norwegian written forms (Bokm{\r{a}}l and Nynorsk), as they contain both the typical variation of social media text, as well as a large amount of dialectal variety. In this paper we present a novel Norwegian Twitter dataset annotated with POS-tags. We show that models trained on Universal Dependency (UD) data perform worse when evaluated against this dataset, and that models trained on Bokm{\r{a}}l generally perform better than those trained on Nynorsk. We also see that performance on dialectal tweets is comparable to the written standards for some models. Finally we perform a detailed analysis of the errors that models commonly make on this data.
22,343
inproceedings
miletic-scherrer-2022-ocwikidisc
{O}c{W}iki{D}isc: a Corpus of {W}ikipedia Talk Pages in {O}ccitan
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.8/
Miletic, Aleksandra and Scherrer, Yves
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
70--79
This paper presents OcWikiDisc, a new freely available corpus in Occitan, as well as language identification experiments on Occitan done as part of the corpus building process. Occitan is a regional language spoken mainly in the south of France and in parts of Spain and Italy. It exhibits rich diatopic variation, it is not standardized, and it is still low-resourced, especially when it comes to large downloadable corpora. We introduce OcWikiDisc, a corpus extracted from the talk pages associated with the Occitan Wikipedia. The version of the corpus with the most restrictive language filtering contains 8K user messages for a total of 618K tokens. The language filtering is performed based on language identification experiments with five off-the-shelf tools, including the new fastText language identification model from Meta AI's No Language Left Behind initiative, released in July 2022.
22,344
inproceedings
gillin-2022-encoder
Is Encoder-Decoder Transformer the Shiny Hammer?
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.9/
Gillin, Nat
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
80--85
We present an approach to multi-class classification using an encoder-decoder transformer model. We trained a network to identify French varieties using the same scripts we use to train an encoder-decoder machine translation model. With some slight modification to the data preparation and inference parameters, we showed that the same tools used for machine translation can be easily re-used to achieve competitive performance for classification. On the French Dialectal Identification (FDI) task, we scored 32.4 weighted F1, but this is far below a simple Naive Bayes classifier, which outperforms the neural encoder-decoder model at 41.27 weighted F1.
22,345
inproceedings
camposampiero-etal-2022-curious
The Curious Case of Logistic Regression for {I}talian Languages and Dialects Identification
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.10/
Camposampiero, Giacomo and Nguyen, Quynh Anh and Di Stefano, Francesco
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
86--98
Automatic Language Identification represents an important task for improving many real-world applications such as opinion mining and machine translation. In the case of closely-related languages such as regional dialects, this task is often challenging. In this paper, we propose an extensive evaluation of different approaches for the identification of Italian dialects and languages, spanning from classical machine learning models to more complex neural architectures and state-of-the-art pre-trained language models. Surprisingly, shallow machine learning models managed to outperform huge pre-trained language models in this specific task. This work was developed in the context of the Identification of Languages and Dialects of Italy (ITDI) task organised at VarDial 2022 Evaluation Campaign. Our best submission managed to achieve a weighted F1-score of 0.6880, ranking 5th out of 9 final submissions.
22,346
inproceedings
ceolin-2022-neural
Neural Networks for Cross-domain Language Identification. Phlyers @{V}ardial 2022
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.11/
Ceolin, Andrea
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
99--108
We present our contribution to the Identification of Languages and Dialects of Italy shared task (ITDI) proposed in the VarDial Evaluation Campaign 2022, which asked participants to automatically identify the language of a text associated to one of the language varieties of Italy. The method that yielded the best results in our experiments was a Deep Feedforward Neural Network (DNN) trained on character ngram counts, which provided a better performance compared to Naive Bayes methods and Convolutional Neural Networks (CNN). The system was among the best methods proposed for the ITDI shared task. The analysis of the results suggests that simple DNNs could be more efficient than CNNs to perform language identification of close varieties.
22,347
inproceedings
bernier-colborne-etal-2022-transfer
Transfer Learning Improves {F}rench Cross-Domain Dialect Identification: {NRC} @ {V}ar{D}ial 2022
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.12/
Bernier-Colborne, Gabriel and Leger, Serge and Goutte, Cyril
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
109--118
We describe the systems developed by the National Research Council Canada for the French Cross-Domain Dialect Identification shared task at the 2022 VarDial evaluation campaign. We evaluated two different approaches to this task: SVM and probabilistic classifiers exploiting n-grams as features, and trained from scratch on the data provided; and a pre-trained French language model, CamemBERT, that we fine-tuned on the dialect identification task. The latter method turned out to improve the macro-F1 score on the test set from 0.344 to 0.430 (25{\%} increase), which indicates that transfer learning can be helpful for dialect identification.
22,348
inproceedings
jauhiainen-etal-2022-italian
{I}talian Language and Dialect Identification and Regional {F}rench Variety Detection using Adaptive Naive {B}ayes
Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Nakov, Preslav and Tiedemann, J{\"o}rg and Zampieri, Marcos
oct
2022
Gyeongju, Republic of Korea
Association for Computational Linguistics
https://aclanthology.org/2022.vardial-1.13/
Jauhiainen, Tommi and Jauhiainen, Heidi and Lind{\'e}n, Krister
Proceedings of the Ninth Workshop on NLP for Similar Languages, Varieties and Dialects
119--129
This article describes the language identification approach used by the SUKI team in the Identification of Languages and Dialects of Italy and the French Cross-Domain Dialect Identification shared tasks organized as part of the VarDial workshop 2022. We describe some experiments and the preprocessing techniques we used for the training data in preparation for the shared task submissions, which are also discussed. Our Naive Bayes-based adaptive system reached the first position in Italian language identification and came second in the French variety identification task.
22,349
inproceedings
cong-2022-pre
Pre-trained Language Models' Interpretation of Evaluativity Implicature: Evidence from Gradable Adjectives Usage in Context
Pyatkin, Valentina and Fried, Daniel and Anthonio, Talita
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.unimplicit-1.1/
Cong, Yan
Proceedings of the Second Workshop on Understanding Implicit and Underspecified Language
1--7
By saying Maria is tall, a human speaker typically implies that Maria is evaluatively tall from the speaker's perspective. However, by using a different construction Maria is taller than Sophie, we cannot infer from Maria and Sophie's relative heights that Maria is evaluatively tall because it is possible for Maria to be taller than Sophie in a context in which they both count as short. Can pre-trained language models (LMs) "understand" evaluativity (EVAL) inference? To what extent can they discern the EVAL salience of different constructions in a conversation? Will it help LMs' implicitness performance if we give LMs a persona such as chill, social, and pragmatically skilled? Our study provides an approach to probing LMs' interpretation of EVAL inference by incorporating insights from experimental pragmatics and sociolinguistics. We find that with the appropriate prompt, LMs can succeed in some pragmatic level language understanding tasks. Our study suggests that socio-pragmatics methodology can shed light on the challenging questions in NLP.
10.18653/v1/2022.unimplicit-1.1
22,351
inproceedings
pedinotti-etal-2022-pragmatic
Pragmatic and Logical Inferences in {NLI} Systems: The Case of Conjunction Buttressing
Pyatkin, Valentina and Fried, Daniel and Anthonio, Talita
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.unimplicit-1.2/
Pedinotti, Paolo and Chersoni, Emmanuele and Santus, Enrico and Lenci, Alessandro
Proceedings of the Second Workshop on Understanding Implicit and Underspecified Language
8--16
An intelligent system is expected to perform reasonable inferences, accounting for both the literal meaning of a word and the meanings a word can acquire in different contexts. A specific kind of inference concerns the connective "and", which in some cases gives rise to a temporal succession or causal interpretation in contrast with the logical, commutative one (Levinson, 2000). In this work, we investigate the phenomenon by creating a new dataset for evaluating the interpretation of "and" by NLI systems, which we use to test three Transformer-based models. Our results show that all systems generalize patterns that are consistent with both the logical and the pragmatic interpretation, perform inferences that are inconsistent with each other, and show clear divergences with both theoretical accounts and humans' behavior.
10.18653/v1/2022.unimplicit-1.2
22,352
inproceedings
li-yu-2022-devils
{\textquotedblleft}Devils Are in the Details{\textquotedblright}: Annotating Specificity of Clinical Advice from Medical Literature
Pyatkin, Valentina and Fried, Daniel and Anthonio, Talita
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.unimplicit-1.3/
Li, Yingya and Yu, Bei
Proceedings of the Second Workshop on Understanding Implicit and Underspecified Language
17--21
Prior studies have raised concerns over specificity issues in clinical advice. Lacking specificity, i.e., explicitly discussed detailed information, may affect the quality and implementation of clinical advice in medical practice. In this study, we developed and validated a fine-grained annotation schema to describe different aspects of specificity in clinical advice extracted from medical research literature. We also presented our initial annotation effort and discussed future directions towards an NLP-based specificity analysis tool for summarizing and verifying the details in clinical advice.
10.18653/v1/2022.unimplicit-1.3
22,353
inproceedings
lee-etal-2022-searching
Searching for {PET}s: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms
Pyatkin, Valentina and Fried, Daniel and Anthonio, Talita
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.unimplicit-1.4/
Lee, Patrick and Gavidia, Martha and Feldman, Anna and Peng, Jing
Proceedings of the Second Workshop on Understanding Implicit and Underspecified Language
22--32
This paper presents a linguistically driven proof of concept for finding potentially euphemistic terms, or PETs. Acknowledging that PETs tend to be commonly used expressions for a certain range of sensitive topics, we make use of distributional similarities to select and filter phrase candidates from a sentence and rank them using a set of simple sentiment-based metrics. We present the results of our approach tested on a corpus of sentences containing euphemisms, demonstrating its efficacy for detecting single and multi-word PETs from a broad range of topics. We also discuss future potential for sentiment-based methods on this task.
10.18653/v1/2022.unimplicit-1.4
22,354
inproceedings
zaratiana-etal-2022-named
Named Entity Recognition as Structured Span Prediction
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.1/
Zaratiana, Urchade and Tomeh, Nadi and Holat, Pierre and Charnois, Thierry
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
1--10
Named Entity Recognition (NER) is an important task in Natural Language Processing with applications in many domains. While the dominant paradigm of NER is sequence labelling, span-based approaches have become very popular in recent times but are less well understood. In this work, we study different aspects of span-based NER, namely the span representation, learning strategy, and decoding algorithms to avoid span overlap. We also propose an exact algorithm that efficiently finds the set of non-overlapping spans that maximizes a global score, given a list of candidate spans. We performed our study on three benchmark NER datasets from different domains. We make our code publicly available at \url{https://github.com/urchade/span-structured-prediction}.
10.18653/v1/2022.umios-1.1
22,356
inproceedings
zaratiana-etal-2022-global
Global Span Selection for Named Entity Recognition
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.2/
Zaratiana, Urchade and El khbir, Niama and Holat, Pierre and Tomeh, Nadi and Charnois, Thierry
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
11--17
Named Entity Recognition (NER) is an important task in Natural Language Processing with applications in many domains. In this paper, we describe a novel approach to named entity recognition, in which we output a set of spans (i.e., segmentations) by maximizing a global score. During training, we optimize our model by maximizing the probability of the gold segmentation. During inference, we use dynamic programming to select the best segmentation under a linear time complexity. We prove that our approach outperforms CRF and semi-CRF models for Named Entity Recognition. We will make our code publicly available.
10.18653/v1/2022.umios-1.2
22,357
inproceedings
mohammed-etal-2022-visual
Visual Grounding of Inter-lingual Word-Embeddings
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.3/
Mohammed, Wafaa and Shahmohammadi, Hassan and Lensch, Hendrik P. A. and Baayen, R. Harald
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
18--28
Visual grounding of Language aims at enriching textual representations of language with multiple sources of visual knowledge such as images and videos. Although visual grounding is an area of intense research, inter-lingual aspects of visual grounding have not received much attention. The present study investigates the inter-lingual visual grounding of word embeddings. We propose an implicit alignment technique between the two spaces of vision and language in which inter-lingual textual information interacts in order to enrich pre-trained textual word embeddings. We focus on three languages in our experiments, namely, English, Arabic, and German. We obtained visually grounded vector representations for these languages and studied whether visual grounding on one or multiple languages improved the performance of embeddings on word similarity and categorization benchmarks. Our experiments suggest that inter-lingual knowledge improves the performance of grounded embeddings in similar languages such as German and English. However, inter-lingual grounding of German or English with Arabic led to a slight degradation in performance on word similarity benchmarks. On the other hand, we observed an opposite trend on categorization benchmarks where Arabic had the most improvement on English. In the discussion section, several reasons for those findings are laid out. We hope that our experiments provide a baseline for further research on inter-lingual visual grounding.
10.18653/v1/2022.umios-1.3
22,358
inproceedings
shimomoto-etal-2022-subspace
A Subspace-Based Analysis of Structured and Unstructured Representations in Image-Text Retrieval
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.4/
Shimomoto, Erica K. and Marrese-Taylor, Edison and Takamura, Hiroya and Kobayashi, Ichiro and Miyao, Yusuke
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
29--44
In this paper, we specifically look at the image-text retrieval problem. Recent multimodal frameworks have shown that structured inputs and fine-tuning lead to consistent performance improvement. However, this paradigm has been challenged recently with newer Transformer-based models that can reach zero-shot state-of-the-art results despite not explicitly using structured data during pre-training. Since such strategies lead to increased computational resources, we seek to better understand their role in image-text retrieval by analyzing visual and text representations extracted with three multimodal frameworks: SGM, UNITER, and CLIP. To perform such analysis, we represent a single image or text as low-dimensional linear subspaces and perform retrieval based on subspace similarity. We chose this representation as subspaces give us the flexibility to model an entity based on feature sets, allowing us to observe how integrating or reducing information changes the representation of each entity. We analyze the performance of the selected models' features on two standard benchmark datasets. Our results indicate that heavily pre-trained models can already lead to features with critical information representing each entity, with zero-shot UNITER features performing consistently better than fine-tuned features. Furthermore, while models can benefit from structured inputs, learning representations for objects and relationships separately, such as in SGM, likely causes a loss of crucial contextual information needed to obtain a compact cluster that can effectively represent a single entity.
10.18653/v1/2022.umios-1.4
22,359
inproceedings
son-etal-2022-discourse
Discourse Relation Embeddings: Representing the Relations between Discourse Segments in Social Media
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.5/
Son, Youngseo and Varadarajan, Vasudha and Schwartz, H. Andrew
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
45--55
Discourse relations are typically modeled as a discrete class that characterizes the relation between segments of text (e.g. causal explanations, expansions). However, such predefined discrete classes limit the universe of potential relationships and their nuanced differences. Adding higher-level semantic structure to contextual word embeddings, we propose representing discourse relations as points in high dimensional continuous space. However, unlike words, discourse relations often have no surface form (relations are in between two segments, often with no word or phrase in that gap) which presents a challenge for existing embedding techniques. We present a novel method for automatically creating discourse relation embeddings (DiscRE), addressing the embedding challenge through a weakly supervised, multitask approach to learn diverse and nuanced relations in social media. Results show DiscRE representations obtain the best performance on Twitter discourse relation classification (macro F1=0.76), social media causality prediction (from F1=0.79 to 0.81), and perform beyond modern sentence and word transformers at traditional discourse relation classification, capturing novel nuanced relations (e.g. relations at the intersection of causal explanations and counterfactuals).
10.18653/v1/2022.umios-1.5
22,360
inproceedings
cafagna-etal-2022-understanding
Understanding Cross-modal Interactions in {V}{\&}{L} Models that Generate Scene Descriptions
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.6/
Cafagna, Michele and van Deemter, Kees and Gatt, Albert
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
56--72
Image captioning models tend to describe images in an object-centric way, emphasising visible objects. But image descriptions can also abstract away from objects and describe the type of scene depicted. In this paper, we explore the potential of a state-of-the-art Vision and Language model, VinVL, to caption images at the scene level using (1) a novel dataset which pairs images with both object-centric and scene descriptions. Through (2) an in-depth analysis of the effect of the fine-tuning, we show (3) that a small amount of curated data suffices to generate scene descriptions without losing the capability to identify object-level concepts in the scene; the model acquires a more holistic view of the image compared to when object-centric descriptions are generated. We discuss the parallels between these results and insights from computational and cognitive science research on scene perception.
10.18653/v1/2022.umios-1.6
22,361
inproceedings
pal-2022-deepparliament
{D}eep{P}arliament: A Legal domain Benchmark {\&} Dataset for Parliament Bills Prediction
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.8/
Pal, Ankit
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
73--81
This paper introduces DeepParliament, a legal domain Benchmark Dataset that gathers bill documents and metadata and performs various bill status classification tasks. The proposed dataset text covers a broad range of bills from 1986 to the present and contains richer information on parliament bill content. Data collection, detailed statistics and analyses are provided in the paper. Moreover, we experimented with different types of models ranging from RNNs to pretrained models, and report the results. We are proposing two new benchmarks: Binary and Multi-Class Bill Status classification. Models developed for bill documents and relevant supportive tasks may assist Members of Parliament (MPs), presidents, and other legal practitioners. It will help review or prioritise bills, thus speeding up the billing process, improving the quality of decisions and reducing the time consumption in both houses. Considering that the foundation of the country's democracy is Parliament and state legislatures, we anticipate that our research will be an essential addition to the Legal NLP community. This work will be the first to present a Parliament bill prediction task. In order to improve the accessibility of legal AI resources and promote reproducibility, we have made our code and dataset publicly accessible at github.com/monk1337/DeepParliament.
10.18653/v1/2022.umios-1.8
22,362
inproceedings
tripathy-samal-2022-punctuation
Punctuation and case restoration in code mixed {I}ndian languages
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.9/
Tripathy, Subhashree and Samal, Ashis
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
82--86
Automatic Speech Recognition (ASR) systems are taking over in different industries, from producing video subtitles to interactive digital assistants. ASR output can be used in automatic indexing, categorizing, and searching, along with normal human readability. Raw transcripts from ASR systems are difficult to interpret since they usually contain text without punctuation and case information (all lower, all upper, camel case, etc.), thus limiting the performance of downstream NLP tasks. We proposed an approach to restore the punctuation and case for both English and Hinglish (i.e., Hindi vocabulary in Latin script). We performed a classification task using an encoder-based transformer, a mini BERT consisting of 4 encoder layers, for punctuation and case restoration instead of the traditional Seq2Seq model, considering the latency constraints of real-world use cases. The model has a total of 15 distinct classes, which include 5 punctuation marks, i.e., Period (.), Comma (,), Single Quote ('), Double Quote ("), and Question Mark (?), with different combinations of casing. The model is benchmarked on an internal dataset based on user conversations with the voice assistant, and it achieves an F1 (macro) score of 91.52% on the test set.
10.18653/v1/2022.umios-1.9
22,363
inproceedings
jin-etal-2022-probing
Probing Script Knowledge from Pre-Trained Models
Han, Wenjuan and Zheng, Zilong and Lin, Zhouhan and Jin, Lifeng and Shen, Yikang and Kim, Yoon and Tu, Kewei
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.umios-1.10/
Jin, Zijia and Zhang, Xingyu and Yu, Mo and Huang, Lifu
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS)
87--93
Adversarial attack of structured prediction models faces various challenges such as the difficulty of perturbing discrete words, the sentence quality issue, and the sensitivity of outputs to small perturbations. In this work, we introduce SHARP, a new attack method that formulates the black-box adversarial attack as a search-based optimization problem with a specially designed objective function considering sentence fluency, meaning preservation and attacking effectiveness. Additionally, three different search strategies are analyzed and compared: Beam Search, Metropolis-Hastings Sampling, and Hybrid Search. We demonstrate the effectiveness of our attacking strategies on two challenging structured prediction tasks: part-of-speech (POS) tagging and dependency parsing. Through automatic and human evaluations, we show that our method performs a more potent attack compared with prior methods. Moreover, the generated adversarial examples can be used to successfully boost the robustness and performance of the victim model via adversarial training.
10.18653/v1/2022.umios-1.10
22,364
inproceedings
park-etal-2022-leveraging
Leveraging Non-dialogue Summaries for Dialogue Summarization
Dernoncourt, Franck and Nguyen, Thien Huu and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Bui, Trung H. and Yoon, David Seunghyun
oct
2022
Gyeongju, South Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.tu-1.1/
Park, Seongmin and Shin, Dongchan and Lee, Jihwa
Proceedings of the First Workshop On Transcript Understanding
1--7
To mitigate the lack of diverse dialogue summarization datasets in academia, we present methods to utilize non-dialogue summarization data for enhancing dialogue summarization systems. We apply transformations to document summarization data pairs to create training data that better befit dialogue summarization. The suggested transformations also retain desirable properties of non-dialogue datasets, such as improved faithfulness to the source text. We conduct extensive experiments across both English and Korean to verify our approach. Although absolute gains in ROUGE naturally plateau as more dialogue summarization samples are introduced, utilizing non-dialogue data for training significantly improves summarization performance in zero- and few-shot settings and enhances faithfulness across all training regimes.
22,373
inproceedings
zhu-etal-2022-knowledge-transfer
Knowledge Transfer with Visual Prompt in multi-modal Dialogue Understanding and Generation
Dernoncourt, Franck and Nguyen, Thien Huu and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Bui, Trung H. and Yoon, David Seunghyun
oct
2022
Gyeongju, South Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.tu-1.2/
Zhu, Minjun and Weng, Yixuan and Li, Bin and He, Shizhu and Liu, Kang and Zhao, Jun
Proceedings of the First Workshop On Transcript Understanding
8--19
Visual Dialogue (VD) task has recently received increasing attention in AI research. Visual Dialog aims to generate multi-round, interactive responses based on the dialog history and image content. Existing textual dialogue models cannot fully understand visual information, resulting in a lack of scene features when communicating with humans continuously. Therefore, how to efficiently fuse multimodal data features remains a challenge. In this work, we propose a knowledge transfer method with visual prompt (VPTG) fusing multi-modal data, which is a flexible module that can utilize the text-only seq2seq model to handle visual dialogue tasks. The VPTG conducts text-image co-learning and multi-modal information fusion with visual prompts and visual knowledge distillation. Specifically, we construct visual prompts from visual representations and then induce sequence-to-sequence (seq2seq) models to fuse visual information and textual contexts by visual-text patterns. We also realize visual knowledge transfer through distillation between two different models' text representations, so that the seq2seq model can actively learn visual semantic representations. Extensive experiments on the multi-modal dialogue understanding and generation (MDUG) datasets show the proposed VPTG outperforms other single-modal methods, which demonstrates the effectiveness of visual prompt and visual knowledge transfer.
22,374
inproceedings
agarwal-etal-2022-model
Model Transfer for Event tracking as Transcript Understanding for Videos of Small Group Interaction
Dernoncourt, Franck and Nguyen, Thien Huu and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Bui, Trung H. and Yoon, David Seunghyun
oct
2022
Gyeongju, South Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.tu-1.3/
Agarwal, Sumit and Vitiello, Rosanna and Ros{\'e}, Carolyn
Proceedings of the First Workshop On Transcript Understanding
20--29
Videos of group interactions contain a wealth of information beyond the information directly communicated in a transcript of the discussion. Tracking who has participated throughout an extended interaction and what each of their trajectories has been in relation to one another is the foundation for joint activity understanding, though it comes with some unique challenges in videos of tightly coupled group work. Motivated by insights into the properties of such scenarios, including group composition and the properties of task-oriented, goal-directed tasks, we present a successful proof-of-concept. In particular, we present a transfer experiment to a dyadic robot construction task, an ablation study, and a qualitative analysis.
22,375
inproceedings
nguyen-etal-2022-behancemt
{B}ehance{MT}: A Machine Translation Corpus for Livestreaming Video Transcripts
Dernoncourt, Franck and Nguyen, Thien Huu and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Bui, Trung H. and Yoon, David Seunghyun
oct
2022
Gyeongju, South Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.tu-1.4/
Nguyen, Minh Van and Dernoncourt, Franck and Nguyen, Thien
Proceedings of the First Workshop On Transcript Understanding
30--33
Machine translation (MT) is an important task in natural language processing, which aims to translate a sentence in a source language to another sentence with the same/similar semantics in a target language. Despite the huge effort on building MT systems for different language pairs, most previous work focuses on formal-language settings, where text to be translated comes from written sources such as books and news articles. As a result, such MT systems could fail to translate livestreaming video transcripts, where text is often shorter and might be grammatically incorrect. To overcome this issue, we introduce a novel MT corpus, BehanceMT, for livestreaming video transcript translation. Our corpus contains parallel transcripts for 3 language pairs, where English is the source language and Spanish, Chinese, and Arabic are the target languages. Experimental results show that finetuning a pretrained MT model on BehanceMT significantly improves the performance of the model in translating video transcripts across 3 language pairs. In addition, the finetuned MT model outperforms GoogleTranslate in 2 out of 3 language pairs, further demonstrating the usefulness of our proposed dataset for video transcript translation. BehanceMT will be publicly released upon the acceptance of the paper.
22,376
inproceedings
nguyen-nguyen-2022-investigating
Investigating the Impact of {ASR} Errors on Spoken Implicit Discourse Relation Recognition
Dernoncourt, Franck and Nguyen, Thien Huu and Lai, Viet Dac and Veyseh, Amir Pouran Ben and Bui, Trung H. and Yoon, David Seunghyun
oct
2022
Gyeongju, South Korea
International Conference on Computational Linguistics
https://aclanthology.org/2022.tu-1.5/
Nguyen, Linh The and Nguyen, Dat Quoc
Proceedings of the First Workshop On Transcript Understanding
34--39
We present an empirical study investigating the influence of automatic speech recognition (ASR) errors on the spoken implicit discourse relation recognition (IDRR) task. We construct a spoken dataset for this task based on the Penn Discourse Treebank 2.0. On this dataset, we conduct "Cascaded" experiments employing state-of-the-art ASR and text-based IDRR models and find that the ASR errors significantly decrease the IDRR performance. In addition, the "Cascaded" approach does remarkably better than an "End-to-End" one that directly predicts a relation label for each input argument speech pair.
22,377
inproceedings
nomoto-2022-fewer
The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.1/
Nomoto, Tadashi
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
1--11
In this work, we focus on sentence splitting, a subfield of text simplification, primarily motivated by an unproven idea that if you divide a sentence into pieces, it should become easier to understand. Our primary goal in this paper is to determine whether this is true. In particular, we ask, does it matter whether we break a sentence into two or three? We report on our findings based on Amazon Mechanical Turk. More specifically, we introduce a Bayesian modeling framework to further investigate to what degree a particular way of splitting the complex sentence affects readability, along with a number of other parameters adopted from diverse perspectives, including clinical linguistics and cognitive linguistics. The Bayesian modeling experiment provides clear evidence that bisecting the sentence leads to enhanced readability to a degree greater than when we create simplification by trisection.
10.18653/v1/2022.tsar-1.1
22,379
inproceedings
hatagaki-etal-2022-parallel
Parallel Corpus Filtering for {J}apanese Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.2/
Hatagaki, Koki and Kajiwara, Tomoyuki and Ninomiya, Takashi
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
12--18
We propose a method of parallel corpus filtering for Japanese text simplification. The parallel corpus for this task contains some redundant wording. In this study, we first identify the type and size of noisy sentence pairs in the Japanese text simplification corpus. We then propose a method of parallel corpus filtering to remove each type of noisy sentence pair. Experimental results show that filtering the training parallel corpus with the proposed method improves simplification performance.
10.18653/v1/2022.tsar-1.2
22,380
inproceedings
trienes-etal-2022-patient
Patient-friendly Clinical Notes: Towards a new Text Simplification Dataset
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.3/
Trienes, Jan and Schl{\"o}tterer, J{\"o}rg and Schildhaus, Hans-Ulrich and Seifert, Christin
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
19--27
Automatic text simplification can help patients to better understand their own clinical notes. A major hurdle for the development of clinical text simplification methods is the lack of high quality resources. We report ongoing efforts in creating a parallel dataset of professionally simplified clinical notes. Currently, this corpus consists of 851 document-level simplifications of German pathology reports. We highlight characteristics of this dataset and establish first baselines for paragraph-level simplification.
10.18653/v1/2022.tsar-1.3
22,381
inproceedings
kew-ebling-2022-target
Target-Level Sentence Simplification as Controlled Paraphrasing
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.4/
Kew, Tannon and Ebling, Sarah
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
28--42
Automatic text simplification aims to reduce the linguistic complexity of a text in order to make it easier to understand and more accessible. However, simplified texts are consumed by a diverse array of target audiences and what might be appropriately simplified for one group of readers may differ considerably for another. In this work we investigate a novel formulation of sentence simplification as paraphrasing with controlled decoding. This approach aims to alleviate the major burden of relying on large amounts of in-domain parallel training data, while at the same time allowing for modular and adaptive simplification. According to automatic metrics, our approach performs competitively against baselines that prove more difficult to adapt to the needs of different target audiences or require significant amounts of complex-simple parallel aligned data.
null
null
10.18653/v1/2022.tsar-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,382
inproceedings
stahlberg-etal-2022-conciseness
Conciseness: An Overlooked Language Task
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.5/
Stahlberg, Felix and Kumar, Aashish and Alberti, Chris and Kumar, Shankar
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
43--56
We report on novel investigations into training models that make sentences concise. We define the task and show that it is different from related tasks such as summarization and simplification. For evaluation, we release two test sets, consisting of 2000 sentences each, that were annotated by two and five human annotators, respectively. We demonstrate that conciseness is a difficult task for which zero-shot setups with large neural language models often do not perform well. Given the limitations of these approaches, we propose a synthetic data generation method based on round-trip translations. Using this data to either train Transformers from scratch or fine-tune T5 models yields our strongest baselines that can be further improved by fine-tuning on an artificial conciseness dataset that we derived from multi-annotator machine translation test sets.
null
null
10.18653/v1/2022.tsar-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,383
inproceedings
mu-lim-2022-revision
Revision for Concision: A Constrained Paraphrase Generation Task
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.6/
Mu, Wenchuan and Lim, Kwan Hui
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
57--76
Academic writing should be concise as concise sentences better keep the readers' attention and convey meaning clearly. Writing concisely is challenging, for writers often struggle to revise their drafts. We introduce and formulate revising for concision as a natural language processing task at the sentence level. Revising for concision requires algorithms to use only necessary words to rewrite a sentence while preserving its meaning. The revised sentence should be evaluated according to its word choice, sentence structure, and organization. The revised sentence also needs to fulfil semantic retention and syntactic soundness. To aid these efforts, we curate and make available a benchmark parallel dataset that can depict revising for concision. The dataset contains 536 pairs of sentences before and after revising, and all pairs are collected from college writing centres. We also present and evaluate approaches to this problem, which may assist researchers in this area.
null
null
10.18653/v1/2022.tsar-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,384
inproceedings
poncelas-htun-2022-controlling
Controlling {J}apanese Machine Translation Output by Using {JLPT} Vocabulary Levels
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.7/
Poncelas, Alberto and Htun, Ohnmar
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
77--85
In Neural Machine Translation (NMT) systems, there is generally little control over the lexicon of the output. Consequently, the translated output may be too difficult for certain audiences. For example, for people with limited knowledge of the language, vocabulary is a major impediment to understanding a text. In this work, we build a complexity-controllable NMT for English-to-Japanese translations. More particularly, we aim to modulate the difficulty of the translation in terms of not only the vocabulary but also the use of kanji. For achieving this, we follow a sentence-tagging approach to influence the output.
null
null
10.18653/v1/2022.tsar-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,385
inproceedings
gonzalez-dios-etal-2022-irekialfes
{I}rekia{LF}es: a New Open Benchmark and Baseline Systems for {S}panish Automatic Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.8/
Gonzalez-Dios, Itziar and Guti{\'e}rrez-Fandi{\~n}o, Iker and Cumbicus-Pineda, Oscar M. and Soroa, Aitor
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
86--97
Automatic Text simplification (ATS) seeks to reduce the complexity of a text for a general public or a target audience. In the last years, deep learning methods have become the most used systems in ATS research, but these systems need large and good quality datasets to be evaluated. Moreover, these data are available on a large scale only for English and in some cases with restrictive licenses. In this paper, we present IrekiaLF{\_}es, an open-license benchmark for Spanish text simplification. It consists of a document-level corpus and a sentence-level test set that has been manually aligned. We also conduct a neurolinguistically-based evaluation of the corpus in order to reveal its suitability for text simplification. This evaluation follows the Lexicon-Unification-Linearity (LeULi) model of neurolinguistic complexity assessment. Finally, we present a set of experiments and baselines of ATS systems in a zero-shot scenario.
null
null
10.18653/v1/2022.tsar-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,386
inproceedings
degraeuwe-saggion-2022-lexical
Lexical Simplification in Foreign Language Learning: Creating Pedagogically Suitable Simplified Example Sentences
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.9/
Degraeuwe, Jasper and Saggion, Horacio
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
98--110
This study presents a lexical simplification (LS) methodology for foreign language (FL) learning purposes, a barely explored area of automatic text simplification (TS). The method, targeted at Spanish as a foreign language (SFL), includes a customised complex word identification (CWI) classifier and generates substitutions based on masked language modelling. Performance is calculated on a custom dataset by means of a new, pedagogically-oriented evaluation. With 43{\%} of the top simplifications being found suitable, the method shows potential for simplifying sentences to be used in FL learning activities. The evaluation also suggests that, though still crucial, meaning preservation is not always a prerequisite for successful LS. To arrive at grammatically correct and more idiomatic simplifications, future research could study the integration of association measures based on co-occurrence data.
null
null
10.18653/v1/2022.tsar-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,387
inproceedings
haller-etal-2022-eye
Eye-tracking based classification of {M}andarin {C}hinese readers with and without dyslexia using neural sequence models
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.10/
Haller, Patrick and S{\"a}uberli, Andreas and Kiener, Sarah and Pan, Jinger and Yan, Ming and J{\"a}ger, Lena
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
111--118
Eye movements are known to reflect cognitive processes in reading, and psychological reading research has shown that eye gaze patterns differ between readers with and without dyslexia. In recent years, researchers have attempted to classify readers with dyslexia based on their eye movements using Support Vector Machines (SVMs). However, these approaches (i) are based on highly aggregated features averaged over all words read by a participant, thus disregarding the sequential nature of the eye movements, and (ii) do not consider the linguistic stimulus and its interaction with the reader's eye movements. In the present work, we propose two simple sequence models that process eye movements on the entire stimulus without the need of aggregating features across the sentence. Additionally, we incorporate the linguistic stimulus into the model in two ways{---}contextualized word embeddings and manually extracted linguistic features. The models are evaluated on a Mandarin Chinese dataset containing eye movements from children with and without dyslexia. Our results show that (i) even for a logographic script such as Chinese, sequence models are able to classify dyslexia on eye gaze sequences, reaching state-of-the-art performance, and (ii) incorporating the linguistic stimulus does not help to improve classification performance.
null
null
10.18653/v1/2022.tsar-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,388
inproceedings
alonzo-etal-2022-dataset
A Dataset of Word-Complexity Judgements from Deaf and Hard-of-Hearing Adults for Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.11/
Alonzo, Oliver and Lee, Sooyeon and Maddela, Mounica and Xu, Wei and Huenerfauth, Matt
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
119--124
Research has explored the use of automatic text simplification (ATS), which consists of techniques to make text simpler to read, to provide reading assistance to Deaf and Hard-of-hearing (DHH) adults with various literacy levels. Prior work in this area has identified interest in and benefits from ATS-based reading assistance tools. However, no prior work on ATS has gathered judgements from DHH adults as to what constitutes complex text. Thus, following approaches in prior NLP work, this paper contributes new word-complexity judgements from 11 DHH adults on a dataset of 15,000 English words that had been previously annotated by L2 speakers, which we also augmented to include automatic annotations of linguistic characteristics of the words. Additionally, we conduct a supplementary analysis of the interaction effect between the linguistic characteristics of the words and the groups of annotators. This analysis highlights the importance of collecting judgements from DHH adults for training ATS systems, as it revealed statistically significant interaction effects for nearly all of the linguistic characteristics of the words.
null
null
10.18653/v1/2022.tsar-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,389
inproceedings
qiao-etal-2022-psycho
(Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.12/
Qiao, Yu and Li, Xiaofei and Wiechmann, Daniel and Kerz, Elma
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
125--146
State-of-the-art text simplification (TS) systems adopt end-to-end neural network models to directly generate the simplified version of the input text, and usually function as a blackbox. Moreover, TS is usually treated as an all-purpose generic task under the assumption of homogeneity, where the same simplification is suitable for all. In recent years, however, there has been increasing recognition of the need to adapt the simplification techniques to the specific needs of different target groups. In this work, we aim to advance current research on explainable and controllable TS in two ways: First, building on recently proposed work to increase the transparency of TS systems (Garbacea et al., 2020), we use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction. Second, based on the results of this preliminary task, we extend a state-of-the-art Seq2Seq TS model, ACCESS (Martin et al., 2020), to enable explicit control of ten attributes. The results of experiments show (1) that our approach improves the performance of state-of-the-art models for predicting explainable complexity and (2) that explicitly conditioning the Seq2Seq model on ten attributes leads to a significant improvement in performance in both within-domain and out-of-domain settings.
null
null
10.18653/v1/2022.tsar-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,390
inproceedings
zetsu-etal-2022-lexically
Lexically Constrained Decoding with Edit Operation Prediction for Controllable Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.13/
Zetsu, Tatsuya and Kajiwara, Tomoyuki and Arase, Yuki
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
147--153
Controllable text simplification assists language learners by automatically rewriting complex sentences into simpler forms of a target level. However, existing methods tend to perform conservative edits that keep complex words intact. To address this problem, we employ lexically constrained decoding to encourage rewriting. Specifically, the proposed method predicts edit operations conditioned to a target level and creates positive/negative constraints for words that should/should not appear in an output sentence. The experimental results confirm that our method significantly outperforms previous methods and demonstrates a new state-of-the-art performance.
null
null
10.18653/v1/2022.tsar-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,391
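As a hedged illustration of the lexically constrained decoding described above (not the authors' implementation), the sketch below passes positive/negative word constraints to a seq2seq model through Hugging Face Transformers' constrained beam search; the model name and the "simplify:" prefix are placeholders.

# Minimal sketch: decoding with positive/negative lexical constraints.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder; any seq2seq simplification model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

complex_sentence = "The committee endeavoured to ameliorate the situation."
positive = ["tried"]        # predicted to appear in the output
negative = ["endeavoured"]  # predicted not to appear in the output

force_ids = tokenizer(positive, add_special_tokens=False).input_ids
bad_ids = tokenizer(negative, add_special_tokens=False).input_ids

inputs = tokenizer("simplify: " + complex_sentence, return_tensors="pt")
outputs = model.generate(
    **inputs,
    force_words_ids=force_ids,  # positive constraints (requires num_beams > 1)
    bad_words_ids=bad_ids,      # negative constraints: these tokens are blocked
    num_beams=5,
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))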
inproceedings
li-etal-2022-investigation
An Investigation into the Effect of Control Tokens on Text Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.14/
Li, Zihao and Shardlow, Matthew and Hassan, Saeed
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
154--165
Recent work on text simplification has focused on the use of control tokens to further the state of the art. However, it is not easy to further improve without an in-depth comprehension of the mechanisms underlying control tokens. One understudied factor is the tokenisation strategy, which we examine here. In this paper, we (1) reimplemented ACCESS, (2) explored the effects of varying control tokens, (3) tested the influences of different tokenisation strategies, and (4) demonstrated how separate control tokens affect performance. We report performance variations for each of the four control tokens separately. We also uncover how the design of control tokens influences performance and offer suggestions for designing control tokens, which also carry over to other controllable text generation tasks.
null
null
10.18653/v1/2022.tsar-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,392
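Control tokens of the kind analysed above are prepended to the source sentence so the model conditions on the desired output properties. A minimal sketch, assuming an ACCESS-style character-length-ratio token (the token name and bucketing step are illustrative, not the exact specification):

# Annotate a training pair with a discretised length-ratio control token.
def char_ratio_token(complex_sent: str, simple_sent: str, step: float = 0.05) -> str:
    ratio = len(simple_sent) / max(len(complex_sent), 1)
    bucket = round(ratio / step) * step  # discretise into a small token vocabulary
    return f"<NbChars_{bucket:.2f}>"

src = "The committee endeavoured to ameliorate the situation."
tgt = "The committee tried to improve things."
print(f"{char_ratio_token(src, tgt)} {src}")
# -> "<NbChars_0.70> The committee endeavoured to ameliorate the situation."

At inference time, the user sets the token value directly to control how aggressively the model compresses.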
inproceedings
zhao-etal-2022-divide
Divide-and-Conquer Text Simplification by Scalable Data Enhancement
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.15/
Zhao, Sanqiang and Meng, Rui and Su, Hui and He, Daqing
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
166--172
Text simplification is the task of reducing the complexity of a text while retaining its original meaning. It can help people with low literacy skills or language impairments, such as children and individuals with dyslexia or aphasia, read and understand complicated materials. Normally, substitution, deletion, reordering, and splitting are considered the four core operations for performing text simplification. Thus, an ideal model should be capable of executing these operations appropriately to simplify a text. However, by examining the degree to which each operation is exerted in different datasets, we observe that there is a salient discrepancy between the human annotation and existing training data that is widely used for training simplification models. To alleviate this discrepancy, we propose an unsupervised data construction method that distills each simplifying operation into data via different automatic data enhancement measures. The empirical results demonstrate that the resulting dataset SimSim can support models to achieve better performance by performing all operations properly.
null
null
10.18653/v1/2022.tsar-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,393
inproceedings
ma-etal-2022-improving
Improving Text Simplification with Factuality Error Detection
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.16/
Ma, Yuan and Seneviratne, Sandaru and Daskalaki, Elena
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
173--178
In the past few years, the field of text simplification has been dominated by supervised learning approaches thanks to the appearance of large parallel datasets such as Wikilarge and Newsela. However, these datasets suffer from sentence pairs with factuality errors which compromise the models' performance. We therefore proposed a model-independent factuality error detection mechanism, considering bad simplification and bad alignment, to refine the Wikilarge dataset by reducing the weight of these samples during training. We demonstrated that this approach improved the performance of the state-of-the-art text simplification model TST5 by an FKGL reduction of 0.33 and 0.29 on the TurkCorpus and ASSET testing datasets respectively. Our study illustrates the impact of erroneous samples in TS datasets and highlights the need for automatic methods to improve their quality.
null
null
10.18653/v1/2022.tsar-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,394
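FKGL, the readability metric reported above, can be computed with the textstat package; the comparison below is a hedged sketch with toy sentences, not the authors' evaluation code.

# Compare Flesch-Kincaid Grade Level (FKGL) of two outputs (pip install textstat).
import textstat

system_output = "The cat sat on the mat. It was happy."
baseline_output = "The feline positioned itself upon the mat, experiencing contentment."

fkgl_sys = textstat.flesch_kincaid_grade(system_output)
fkgl_base = textstat.flesch_kincaid_grade(baseline_output)
print(f"FKGL system={fkgl_sys:.2f} baseline={fkgl_base:.2f} "
      f"reduction={fkgl_base - fkgl_sys:.2f} (lower FKGL is simpler)")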
inproceedings
hayakawa-etal-2022-jades
{JADES}: New Text Simplification Dataset in {J}apanese Targeted at Non-Native Speakers
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.17/
Hayakawa, Akio and Kajiwara, Tomoyuki and Ouchi, Hiroki and Watanabe, Taro
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
179--187
The user-dependency of Text Simplification makes its evaluation obscure. A targeted evaluation dataset clarifies the purpose of simplification, though its specification is hard to define. We built JADES (JApanese Dataset for the Evaluation of Simplification), a text simplification dataset targeted at non-native Japanese speakers, according to public vocabulary and grammar profiles. JADES comprises 3,907 complex-simple sentence pairs annotated by an expert. Analysis of JADES shows that a wide variety of rewriting operations were applied during simplification. Furthermore, we analyzed outputs on JADES from several benchmark systems, along with their automatic and manual scores. Results of these analyses highlight differences between English and Japanese in operations and evaluations.
null
null
10.18653/v1/2022.tsar-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,395
inproceedings
vasquez-rodriguez-etal-2022-benchmark
A Benchmark for Neural Readability Assessment of Texts in {S}panish
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.18/
V{\'a}squez-Rodr{\'i}guez, Laura and Cuenca-Jim{\'e}nez, Pedro-Manuel and Morales-Esquivel, Sergio and Alva-Manchego, Fernando
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
188--198
We release a new benchmark for Automated Readability Assessment (ARA) of texts in Spanish. We combined existing corpora with suitable texts collected from the Web, thus creating the largest available dataset for ARA of Spanish texts. All data was pre-processed and categorised to allow experimenting with ARA models that make predictions at two (simple and complex) or three (basic, intermediate, and advanced) readability levels, and at two text granularities (paragraphs and sentences). An analysis based on readability indices shows that our proposed dataset groupings are suitable for their designated readability level. We use our benchmark to train neural ARA models based on BERT in zero-shot, few-shot, and cross-lingual settings. Results show that either a monolingual or multilingual pre-trained model can achieve good results when fine-tuned on language-specific data. In addition, all models decrease their performance when predicting three classes instead of two, showing opportunities for the development of better ARA models for Spanish with existing resources.
null
null
10.18653/v1/2022.tsar-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,396
inproceedings
sheang-etal-2022-controllable
Controllable Lexical Simplification for {E}nglish
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.19/
Sheang, Kim Cheng and Ferr{\'e}s, Daniel and Saggion, Horacio
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
199--206
Fine-tuned Transformer-based approaches have recently shown exciting results on the sentence simplification task. However, so far, no research has applied similar approaches to the Lexical Simplification (LS) task. In this paper, we present ConLS, a Controllable Lexical Simplification system fine-tuned with T5 (a Transformer-based model pre-trained with a BERT-style approach and several other tasks). The evaluation results on three datasets (LexMTurk, BenchLS, and NNSeval) have shown that our model performs comparably to LSBert (the current state-of-the-art) and even outperforms it in some cases. We also conducted a detailed comparison of the effectiveness of control tokens to give a clear view of how each token contributes to the model.
null
null
10.18653/v1/2022.tsar-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,397
inproceedings
seneviratne-etal-2022-cils
{CILS} at {TSAR}-2022 Shared Task: Investigating the Applicability of Lexical Substitution Methods for Lexical Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.21/
Seneviratne, Sandaru and Daskalaki, Elena and Suominen, Hanna
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
207--212
Lexical simplification {---} which aims to simplify complex text through the replacement of difficult words with simpler alternatives while maintaining the meaning of the given text {---} is popular as a way of improving text accessibility for both people and computers. First, lexical simplification through substitution can improve the understandability of complex text for audiences such as non-native speakers, second language learners, and people with low literacy. Second, its usefulness has been demonstrated in many natural language processing problems like data augmentation, paraphrase generation, or word sense induction. In this paper, we investigated the applicability of existing unsupervised lexical substitution methods that incorporate Context Information, based on pre-trained contextual embedding models and WordNet, for Lexical Simplification (CILS). Although the performance of this CILS approach has been outstanding in lexical substitution tasks, its usefulness was limited at the TSAR-2022 shared task on lexical simplification. Consequently, a minimally supervised approach with careful tuning to a given simplification task may work better than unsupervised methods. Our investigation also encouraged further work on evaluating the simplicity of potential candidates and incorporating them into lexical simplification methods.
null
null
10.18653/v1/2022.tsar-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,398
inproceedings
whistely-etal-2022-presiuniv
{P}resi{U}niv at {TSAR}-2022 Shared Task: Generation and Ranking of Simplification Substitutes of Complex Words in Multiple Languages
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.22/
Whistely, Peniel and Mathias, Sandeep and Poornima, Galiveeti
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
213--217
In this paper, we describe our approach, which uses pre-trained language models (e.g. BERT), publicly available word embeddings (e.g. FastText), and a part-of-speech tagger to generate and rank candidate contextual simplifications for a given complex word. In this task, our system, PresiUniv, was placed first in the Spanish track, 5th in the Brazilian-Portuguese track, and 10th in the English track. We upload our code and data for this project to aid in the replication of our results. We also analyze some of the errors and describe the design decisions we made while writing the paper.
null
null
10.18653/v1/2022.tsar-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,399
inproceedings
vasquez-rodriguez-etal-2022-uom
{U}o{M}{\&}{MMU} at {TSAR}-2022 Shared Task: Prompt Learning for Lexical Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.23/
V{\'a}squez-Rodr{\'i}guez, Laura and Nguyen, Nhung and Shardlow, Matthew and Ananiadou, Sophia
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
218--224
We present PromptLS, a method for fine-tuning large pre-trained Language Models (LM) to perform the task of Lexical Simplification. We use a predefined template to attain appropriate replacements for a term, and fine-tune an LM using this template on language-specific datasets. We filter candidate lists in post-processing to improve accuracy. We demonstrate that our model can work in a) a zero-shot setting (where we only require a pre-trained LM), b) a fine-tuned setting (where language-specific data is required), and c) a multilingual setting (where the model is pre-trained across multiple languages and fine-tuned in a specific language). Experimental results show that, although the zero-shot setting is competitive, its performance is still far from the fine-tuned setting. Also, the multilingual model is unsurprisingly worse than the fine-tuned model. Among all TSAR-2022 Shared Task participants, our team was ranked second in Spanish and third in English.
null
null
10.18653/v1/2022.tsar-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,400
inproceedings
chersoni-hsu-2022-polyu
{P}oly{U}-{CBS} at {TSAR}-2022 Shared Task: A Simple, Rank-Based Method for Complex Word Substitution in Two Steps
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.24/
Chersoni, Emmanuele and Hsu, Yu-Yin
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
225--230
In this paper, we describe the system we presented at the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022) regarding the shared task on Lexical Simplification for English, Portuguese, and Spanish. We proposed an unsupervised approach in two steps: First, we used a masked language model with word masking for each language to extract possible candidates for the replacement of a difficult word; second, we ranked the candidates according to three different Transformer-based metrics. Finally, we determined our list of candidates based on the lowest average rank across different metrics.
null
null
10.18653/v1/2022.tsar-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,401
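The two-step recipe above (masked-LM candidate generation, then ranking) can be sketched in a few lines. This illustration ranks by the masked LM's own score rather than the three Transformer-based metrics used in the paper; the model and example sentence are placeholders.

# Step 1: mask the complex word and collect candidates; step 2: rank them.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The doctor prescribed a potent medication."
complex_word = "potent"
masked = sentence.replace(complex_word, fill.tokenizer.mask_token)

candidates = fill(masked, top_k=10)                     # generation
ranked = sorted(candidates, key=lambda c: -c["score"])  # ranking (LM score only)
print([c["token_str"] for c in ranked[:5]])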
inproceedings
wilkens-etal-2022-cental
{CENTAL} at {TSAR}-2022 Shared Task: How Does Context Impact {BERT}-Generated Substitutions for Lexical Simplification?
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.25/
Wilkens, Rodrigo and Alfter, David and Cardon, R{\'e}mi and Gribomont, Isabelle and Bibal, Adrien and Watrin, Patrick and De Marneffe, Marie-Catherine and Fran{\c{c}}ois, Thomas
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
231--238
Lexical simplification is the task of substituting a difficult word with a simpler equivalent for a target audience. This is currently commonly done by modeling lexical complexity on a continuous scale to identify simpler alternatives to difficult words. In the TSAR shared task, the organizers call for systems capable of generating substitutions in a zero-shot-task context, for English, Spanish and Portuguese. In this paper, we present the solution we (the cental team) proposed for the task. We explore the ability of BERT-like models to generate substitution words by masking the difficult word. To do so, we investigate various context enhancement strategies, that we combined into an ensemble method. We also explore different substitution ranking methods. We report on a post-submission analysis of the results and present our insights for potential improvements. The code for all our experiments is available at \url{https://gitlab.com/Cental-FR/cental-tsar2022}.
null
null
10.18653/v1/2022.tsar-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,402
inproceedings
nikita-rajpoot-2022-teampn
team{PN} at {TSAR}-2022 Shared Task: Lexical Simplification using Multi-Level and Modular Approach
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.26/
Nikita, Nikita and Rajpoot, Pawan
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
239--242
Lexical Simplification is the process of reducing the lexical complexity of a text by replacing difficult words with easier-to-read (or understand) expressions while preserving the original information and meaning. This paper explains the work done by our team {\textquotedblleft}teamPN{\textquotedblright} for the English track of the TSAR-2022 Shared Task on Lexical Simplification. We created a multi-level, modular pipeline that combines transformer-based models with traditional NLP methods like paraphrasing and verb sense disambiguation, treating the target text according to its semantics (part-of-speech tag). The pipeline is multi-level as we utilize multiple source models to find potential candidates for replacement, and modular as we can switch the source models and their weighting in the final re-ranking.
null
null
10.18653/v1/2022.tsar-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,403
inproceedings
li-etal-2022-mantis
{MANTIS} at {TSAR}-2022 Shared Task: Improved Unsupervised Lexical Simplification with Pretrained Encoders
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.27/
Li, Xiaofei and Wiechmann, Daniel and Qiao, Yu and Kerz, Elma
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
243--250
In this paper we present our contribution to the TSAR-2022 Shared Task on Lexical Simplification of the EMNLP 2022 Workshop on Text Simplification, Accessibility, and Readability. Our approach builds on and extends the unsupervised lexical simplification system with pretrained encoders (LSBert) introduced in Qiang et al. (2020) in the following ways: For the subtask of simplification candidate selection, it utilizes a RoBERTa transformer language model and expands the size of the generated candidate list. For subsequent substitution ranking, it introduces a new feature weighting scheme and adopts a candidate filtering method based on textual entailment to maximize semantic similarity between the target word and its simplification. Our best-performing system improves LSBert by 5.9{\%} accuracy and achieves second place out of 33 ranked solutions.
null
null
10.18653/v1/2022.tsar-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,404
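The entailment-based candidate filtering mentioned above can be approximated as follows; the choice of roberta-large-mnli and the 0.5 threshold are assumptions for illustration, not the authors' configuration.

# Keep only substitutes whose substituted sentence is entailed by the original.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

nli_name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(nli_name)
nli = AutoModelForSequenceClassification.from_pretrained(nli_name)

sentence = "The doctor prescribed a potent medication."
candidates = ["strong", "powerful", "weak"]

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    return logits.softmax(dim=-1)[0][nli.config.label2id["ENTAILMENT"]].item()

kept = [c for c in candidates
        if entailment_prob(sentence, sentence.replace("potent", c)) > 0.5]
print(kept)  # "weak" should be filtered out: it flips the meaning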
inproceedings
aumiller-gertz-2022-unihd
{U}ni{HD} at {TSAR}-2022 Shared Task: Is Compute All We Need for Lexical Simplification?
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.28/
Aumiller, Dennis and Gertz, Michael
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
251--258
Previous state-of-the-art models for lexical simplification consist of complex pipelines with several components, each of which requires deep technical knowledge and fine-tuned interaction to achieve its full potential. As an alternative, we describe a frustratingly simple pipeline based on prompted GPT-3 responses, beating competing approaches by a wide margin in settings with few training instances. Our best-performing submission to the English language track of the TSAR-2022 shared task consists of an {\textquotedblleft}ensemble{\textquotedblright} of six different prompt templates with varying context levels. As a late-breaking result, we further detail a language transfer technique that allows simplification in languages other than English. Applied to the Spanish and Portuguese subset, we achieve state-of-the-art results with only minor modification to the original prompts. Aside from detailing the implementation and setup, we spend the remainder of this work discussing the particularities of prompting and implications for future work. Code for the experiments is available online at \url{https://github.com/dennlinger/TSAR-2022-Shared-Task}.
null
null
10.18653/v1/2022.tsar-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,405
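A prompt-template ensemble in the spirit of the system above can be sketched as follows. query_llm is a hypothetical stand-in for a GPT-3 call (it returns canned candidates here so the sketch runs offline), and the templates and majority voting are illustrative rather than the paper's exact prompts.

from collections import Counter

TEMPLATES = [
    "Give a simpler word for '{word}':",
    "Give a simpler word for '{word}' in this sentence: {context}",
    "Replace '{word}' with an easier synonym in: {context}",
]

def query_llm(prompt: str) -> list[str]:
    # Hypothetical LLM client; substitute a real API call in practice.
    return ["strong", "powerful"]

def ensemble_simplify(word: str, context: str, k: int = 3) -> list[str]:
    votes = Counter()
    for tpl in TEMPLATES:
        for cand in query_llm(tpl.format(word=word, context=context)):
            votes[cand.lower().strip()] += 1  # aggregate across templates
    return [w for w, _ in votes.most_common(k)]

print(ensemble_simplify("potent", "The doctor prescribed a potent medication."))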
inproceedings
aleksandrova-brochu-dufour-2022-rcml
{RCML} at {TSAR}-2022 Shared Task: Lexical Simplification With Modular Substitution Candidate Ranking
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.29/
Aleksandrova, Desislava and Brochu Dufour, Olivier
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
259--263
This paper describes the lexical simplification system RCML submitted to the English language track of the TSAR-2022 Shared Task. The system leverages a pre-trained language model to generate contextually plausible substitution candidates which are then ranked according to their simplicity as well as their grammatical and semantic similarity to the target complex word. Our submissions secure 6th and 7th places out of 33, improving over the SOTA baseline for 27 out of the 51 metrics.
null
null
10.18653/v1/2022.tsar-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,406
inproceedings
north-etal-2022-gmu
{GMU}-{WLV} at {TSAR}-2022 Shared Task: Evaluating Lexical Simplification Models
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.30/
North, Kai and Dmonte, Alphaeus and Ranasinghe, Tharindu and Zampieri, Marcos
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
264--270
This paper describes the team GMU-WLV submission to the TSAR shared task on multilingual lexical simplification. The goal of the task is to automatically provide a set of candidate substitutions for complex words in context. The organizers provided participants with ALEXSIS, a manually annotated dataset with instances split between a small trial set with a dozen instances in each of the three languages of the competition (English, Portuguese, Spanish) and a test set with over 300 instances in the three aforementioned languages. To cope with the lack of training data, participants had to either use alternative data sources or pre-trained language models. We experimented with monolingual models: BERTimbau, ELECTRA, and RoBERTa-large-BNE. Our best system achieved 1st place out of sixteen systems for Portuguese, 8th out of thirty-three systems for English, and 6th out of twelve systems for Spanish.
null
null
10.18653/v1/2022.tsar-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,407
inproceedings
saggion-etal-2022-findings
Findings of the {TSAR}-2022 Shared Task on Multilingual Lexical Simplification
{\v{S}}tajner, Sanja and Saggion, Horacio and Ferr{\'e}s, Daniel and Shardlow, Matthew and Sheang, Kim Cheng and North, Kai and Zampieri, Marcos and Xu, Wei
dec
2022
Abu Dhabi, United Arab Emirates (Virtual)
Association for Computational Linguistics
https://aclanthology.org/2022.tsar-1.31/
Saggion, Horacio and {\v{S}}tajner, Sanja and Ferr{\'e}s, Daniel and Sheang, Kim Cheng and Shardlow, Matthew and North, Kai and Zampieri, Marcos
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
271--283
We report findings of the TSAR-2022 shared task on multilingual lexical simplification, organized as part of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022) held in conjunction with EMNLP 2022. The task called on the Natural Language Processing research community to contribute methods to advance the state of the art in multilingual lexical simplification for English, Portuguese, and Spanish. A total of 14 teams submitted the results of their lexical simplification systems for the provided test data. Results of the shared task indicate new benchmarks in lexical simplification, with quantitative results for English noticeably higher than those obtained for Spanish and (Brazilian) Portuguese.
null
null
10.18653/v1/2022.tsar-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,408
inproceedings
li-etal-2022-encoder
An Encoder Attribution Analysis for Dense Passage Retriever in Open-Domain Question Answering
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.1/
Li, Minghan and Ma, Xueguang and Lin, Jimmy
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
1--11
The bi-encoder design of the dense passage retriever (DPR) is a key factor in its success in open-domain question answering (QA), yet it is unclear how DPR's question encoder and passage encoder individually contribute to overall performance, which we refer to as the encoder attribution problem. The problem is important as it helps us identify the factors that affect individual encoders to further improve overall performance. In this paper, we formulate our analysis under a probabilistic framework called encoder marginalization, where we quantify the contribution of a single encoder by marginalizing other variables. First, we find that the passage encoder contributes more than the question encoder to in-domain retrieval accuracy. Second, we demonstrate how to find the affecting factors for each encoder, where we train DPR with different amounts of data and use encoder marginalization to analyze the results. We find that positive passage overlap and corpus coverage of training data have big impacts on the passage encoder, while the question encoder is mainly affected by training sample complexity under this setting. Based on this framework, we can devise data-efficient training regimes: for example, we manage to train a passage encoder on SQuAD using 60{\%} less training data without loss of accuracy.
null
null
10.18653/v1/2022.trustnlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,410
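DPR's bi-encoder scoring, which the encoder attribution analysis above starts from, is an inner product between independently encoded question and passage vectors. The sketch below shows that scoring with the public facebook/dpr-* checkpoints; the paper's marginalization procedure itself is only indicated in the closing comment and is simplified here.

import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base")
p_tok = DPRContextEncoderTokenizer.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")
p_enc = DPRContextEncoder.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base")

question = "who wrote the declaration of independence"
passage = "Thomas Jefferson was the principal author of the Declaration."

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    p_emb = p_enc(**p_tok(passage, return_tensors="pt")).pooler_output

print(f"relevance = {(q_emb @ p_emb.T).item():.2f}")  # DPR score = inner product
# Attribution idea (simplified): hold one encoder fixed, swap the other
# across checkpoints, and average the change in retrieval accuracy.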
inproceedings
mehrabi-etal-2022-attributing
Attributing Fair Decisions with Attention Interventions
Verma, Apurv and Pruksachatkun, Yada and Chang, Kai-Wei and Galstyan, Aram and Dhamala, Jwala and Cao, Yang Trista
jul
2022
Seattle, U.S.A.
Association for Computational Linguistics
https://aclanthology.org/2022.trustnlp-1.2/
Mehrabi, Ninareh and Gupta, Umang and Morstatter, Fred and Ver Steeg, Greg and Galstyan, Aram
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
12--25
The widespread use of Artificial Intelligence (AI) in consequential domains, such as healthcare and parole decision-making systems, has drawn intense scrutiny on the fairness of these methods. However, ensuring fairness is often insufficient as the rationale for a contentious decision needs to be audited, understood, and defended. We propose that the attention mechanism can be used to ensure fair outcomes while simultaneously providing feature attributions to account for how a decision was made. Toward this goal, we design an attention-based model that can be leveraged as an attribution framework. It can identify features responsible for both performance and fairness of the model through attention interventions and attention weight manipulation. Using this attribution framework, we then design a post-processing bias mitigation strategy and compare it with a suite of baselines. We demonstrate the versatility of our approach by conducting experiments on two distinct data types, tabular and textual.
null
null
10.18653/v1/2022.trustnlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
22,411