The dataset exposes the following columns; types and value ranges are as reported by the dataset preview:

| Column | Type | Values / range |
|---|---|---|
| `entry_type` | string (categorical) | 4 distinct values |
| `citation_key` | string | length 10 to 110 |
| `title` | string | length 6 to 276 |
| `editor` | string (categorical) | 723 distinct values |
| `month` | string (categorical) | 69 distinct values |
| `year` | date (string) | 1963-01-01 to 2022-01-01 |
| `address` | string (categorical) | 202 distinct values |
| `publisher` | string (categorical) | 41 distinct values |
| `url` | string | length 34 to 62 |
| `author` | string | length 6 to 2.07k |
| `booktitle` | string (categorical) | 861 distinct values |
| `pages` | string | length 1 to 12 |
| `abstract` | string | length 302 to 2.4k |
| `journal` | string (categorical) | 5 distinct values |
| `volume` | string (categorical) | 24 distinct values |
| `doi` | string | length 20 to 39 |
| `n` | string (categorical) | 3 distinct values |
| `wer` | string (categorical) | 1 distinct value |
| `uas` | null | always null |
| `language` | string (categorical) | 3 distinct values |
| `isbn` | string (categorical) | 34 distinct values |
| `recall` | null | always null |
| `number` | string (categorical) | 8 distinct values |
| `a` | null | always null |
| `b` | null | always null |
| `c` | null | always null |
| `k` | null | always null |
| `f1` | string (categorical) | 4 distinct values |
| `r` | string (categorical) | 2 distinct values |
| `mci` | string (categorical) | 1 distinct value |
| `p` | string (categorical) | 2 distinct values |
| `sd` | string (categorical) | 1 distinct value |
| `female` | string (categorical) | 0 distinct values |
| `m` | string (categorical) | 0 distinct values |
| `food` | string (categorical) | 1 distinct value |
| `f` | string (categorical) | 1 distinct value |
| `note` | string (categorical) | 20 distinct values |
| `__index_level_0__` | int64 | 22k to 106k (likely an auto-generated dataframe index) |
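As a quick orientation, here is a minimal sketch of loading and inspecting a dataset with this schema using the Hugging Face `datasets` library. The repository id `user/acl-anthology-bib` is a placeholder (an assumption), not the real dataset name.

```python
# Minimal inspection sketch. The repo id below is a placeholder, not the
# actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/acl-anthology-bib", split="train")

print(ds.features["entry_type"])  # string column; the schema reports 4 distinct values
print(ds.num_rows)

# Most of the metric-style columns (n, wer, uas, f1, ...) are null for the
# bibliography rows previewed below; DOIs are present only for some venues.
with_doi = ds.filter(lambda row: row["doi"] is not None)
print(f"{with_doi.num_rows} of {ds.num_rows} rows carry a DOI")
```

The preview rows below are rendered as BibTeX entries, with null columns omitted.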
@inproceedings{wan-etal-2022-fast,
    title = "Fast and Light-Weight Answer Text Retrieval in Dialogue Systems",
    author = "Wan, Hui and Patel, Siva Sankalp and Murdock, J William and Potdar, Saloni and Joshi, Sachindra",
    editor = "Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track",
    month = jul,
    year = "2022",
    address = "Hybrid: Seattle, Washington + Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-industry.37/",
    doi = "10.18653/v1/2022.naacl-industry.37",
    pages = "334--343",
    abstract = "Dialogue systems can benefit from being able to search through a corpus of text to find information relevant to user requests, especially when encountering a request for which no manually curated response is available. The state-of-the-art technology for neural dense retrieval or re-ranking involves deep learning models with hundreds of millions of parameters. However, it is difficult and expensive to get such models to operate at an industrial scale, especially for cloud services that often need to support a big number of individually customized dialogue systems, each with its own text corpus. We report our work on enabling advanced neural dense retrieval systems to operate effectively at scale on relatively inexpensive hardware. We compare with leading alternative industrial solutions and show that we can provide a solution that is effective, fast, and cost-efficient.",
}
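Each row maps one-to-one onto a BibTeX entry like the one above, with null columns simply omitted. A sketch of that conversion follows; `BIB_FIELDS` and `to_bibtex` are illustrative names of ours, not part of the dataset or its API.

```python
# Render one dataset row back into a BibTeX entry, skipping null fields.
BIB_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "month", "year", "address", "publisher", "url", "doi", "pages", "isbn",
    "language", "note", "abstract",
]

def to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value:  # nulls and empty strings are dropped
            # A real exporter would leave month macros (jul, dec) unquoted;
            # quoting everything keeps the sketch short.
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

print(to_bibtex(ds[0]))  # assumes `ds` from the loading sketch above
```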
@inproceedings{laskar-etal-2022-blink,
    title = "{BLINK} with {E}lasticsearch for Efficient Entity Linking in Business Conversations",
    author = "Laskar, Md Tahmid Rahman and Chen, Cheng and Martsinovich, Aliaksandr and Johnston, Jonathan and Fu, Xue-Yong and Tn, Shashi Bhushan and Corston-Oliver, Simon",
    editor = "Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track",
    month = jul,
    year = "2022",
    address = "Hybrid: Seattle, Washington + Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-industry.38/",
    doi = "10.18653/v1/2022.naacl-industry.38",
    pages = "344--352",
    abstract = "An Entity Linking system aligns the textual mentions of entities in a text to their corresponding entries in a knowledge base. However, deploying a neural entity linking system for efficient real-time inference in production environments is a challenging task. In this work, we present a neural entity linking system that connects the product and organization type entities in business conversations to their corresponding Wikipedia and Wikidata entries. The proposed system leverages Elasticsearch to ensure inference efficiency when deployed in a resource limited cloud machine, and obtains significant improvements in terms of inference speed and memory consumption while retaining high accuracy.",
}

@inproceedings{lim-wynter-2022-q2r,
    title = "{Q}2{R}: A Query-to-Resolution System for Natural-Language Queries",
    author = "Lim, Shiau Hong and Wynter, Laura",
    editor = "Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track",
    month = jul,
    year = "2022",
    address = "Hybrid: Seattle, Washington + Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-industry.39/",
    doi = "10.18653/v1/2022.naacl-industry.39",
    pages = "353--361",
    abstract = "We present a system for document retrieval that combines direct classification with standard content-based retrieval approaches to significantly improve the relevance of the retrieved documents. Our system exploits the availability of an imperfect but sizable amount of labeled data from past queries. For domains such as technical support, the proposed approach enhances the system's ability to retrieve documents that are otherwise ranked very low based on content alone. The system is easy to implement and can make use of existing text ranking methods, augmenting them through the novel Q2R orchestration framework. Q2R has been extensively tested and is in use at IBM.",
}
@inproceedings{ahbali-etal-2022-identifying,
    title = "{I}dentifying {C}orporate {C}redit {R}isk {S}entiments from {F}inancial {N}ews",
    author = "Ahbali, Noujoud and Liu, Xinyuan and Nanda, Albert and Stark, Jamie and Talukder, Ashit and Khandpur, Rupinder Paul",
    editor = "Loukina, Anastassia and Gangadharaiah, Rashmi and Min, Bonan",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track",
    month = jul,
    year = "2022",
    address = "Hybrid: Seattle, Washington + Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-industry.40/",
    doi = "10.18653/v1/2022.naacl-industry.40",
    pages = "362--370",
    abstract = "Credit risk management is one central practice for financial institutions, and such practice helps them measure and understand the inherent risk within their portfolios. Historically, firms relied on the assessment of default probabilities and used the press as one tool to gather insights on the latest credit event developments of an entity. However, due to the deluge of the current news coverage for companies, analyzing news manually by financial experts is considered a highly laborious task. To this end, we propose a novel deep learning-powered approach to automate news analysis and credit adverse events detection to score the credit sentiment associated with a company. This paper showcases a complete system that leverages news extraction and data enrichment with targeted sentiment entity recognition to detect companies and text classification to identify credit events. We developed a custom scoring mechanism to provide the company's credit sentiment score ($CSS^{TM}$) based on these detected events. Additionally, using case studies, we illustrate how this score helps understand the company's credit profile and discriminates between defaulters and non-defaulters.",
}
@inproceedings{schulte-im-walde-2022-figurative,
    title = "Figurative Language in Noun Compound Models across Target Properties, Domains and Time",
    author = "Schulte im Walde, Sabine",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.1/",
    pages = "1",
    abstract = "A variety of distributional and multi-modal computational approaches has been suggested for modelling the degrees of compositionality across types of multiword expressions and languages. As the starting point of my talk, I will present standard variants of computational models that have been proven successful in predicting the compositionality of German and English noun compounds. The main part of the talk will then be concerned with investigating the general reliability of these standard models and discussing implications for gold-standard datasets: I will demonstrate how prediction results vary (i) across representations, (ii) across empirical target properties, (iii) across compound types, (iv) across levels of abstractness, and (v) for general- vs. domain-specific language. Finally, I will present a preliminary quantitative study on diachronic changes of noun compound meanings and compositionality over time.",
}

@inproceedings{bird-2022-multiword,
    title = "Multiword Expressions and the Low-Resource Scenario from the Perspective of a Local Oral Culture",
    author = "Bird, Steven",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.2/",
    pages = "2",
    abstract = "Research on multiword expressions and on under-resourced languages often begins with problematisation. The existence of non-compositional meaning, or the paucity of conventional language resources, are treated as problems to be solved. This perspective is associated with the view of Language as a lexico-grammatical code, and of NLP as a conventional sequence of computational tasks. In this talk, I share from my experience in an Australian Aboriginal community, where people tend to see language as an expression of identity and of {\textquoteleft}connection to country'. Here, my early attempts to collect language data were thwarted. There was no obvious role for tasks like speech recognition, parsing, or translation. Instead, working under the authority of local elders, I pivoted to language processing tasks that were more in keeping with local interests and aspirations. I describe these tasks and suggest some new ways of framing the work of NLP, and I explore implications for work on multiword expressions and on under-resourced languages.",
}
@inproceedings{brkic-bakaric-etal-2022-general,
    title = "A General Framework for Detecting Metaphorical Collocations",
    author = "Brki{\'c} Bakari{\'c}, Marija and Na{\v{c}}inovi{\'c} Prskalo, Lucia and Popovi{\'c}, Maja",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.3/",
    pages = "3--8",
    abstract = "This paper aims at identifying a specific set of collocations known under the term metaphorical collocations. In this type of collocations, a semantic shift has taken place in one of the components. Since the appropriate gold standard needs to be compiled prior to any serious endeavour to extract metaphorical collocations automatically, this paper first presents the steps taken to compile it, and then establishes an appropriate evaluation framework. The process of compiling the gold standard is illustrated on one of the most frequent Croatian nouns, which resulted in the preliminary relation significance set. With the aim to investigate the possibility of facilitating the process, frequency, logDice, relation, and pretrained word embeddings are used as features in the classification task conducted on the logDice-based word sketch relation lists. Preliminary results are presented.",
}

@inproceedings{taslimipoor-etal-2022-improving,
    title = "Improving Grammatical Error Correction for Multiword Expressions",
    author = "Taslimipoor, Shiva and Bryant, Christopher and Yuan, Zheng",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.4/",
    pages = "9--15",
    abstract = "Grammatical error correction (GEC) is the task of automatically correcting errors in text. It has mainly been developed to assist language learning, but can also be applied to native text. This paper reports on preliminary work in improving GEC for multiword expression (MWE) error correction. We propose two systems which incorporate MWE information in two different ways: one is a multi-encoder decoder system which encodes MWE tags in a second encoder, and the other is a BART pre-trained transformer-based system that encodes MWE representations using special tokens. We show improvements in correcting specific types of verbal MWEs based on a modified version of a standard GEC evaluation approach.",
}
@inproceedings{ehren-etal-2022-analysis,
    title = "An Analysis of Attention in {G}erman Verbal Idiom Disambiguation",
    author = "Ehren, Rafael and Kallmeyer, Laura and Lichte, Timm",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.5/",
    pages = "16--25",
    abstract = "In this paper we examine a BiLSTM architecture for disambiguating verbal potentially idiomatic expressions (PIEs) as to whether they are used in a literal or an idiomatic reading with respect to explainability of its decisions. Concretely, we extend the BiLSTM with an additional attention mechanism and track the elements that get the highest attention. The goal is to better understand which parts of an input sentence are particularly discriminative for the classifier's decision, based on the assumption that these elements receive a higher attention than others. In particular, we investigate POS tags and dependency relations to PIE verbs for the tokens with the maximal attention. It turns out that the elements with maximal attention are oftentimes nouns that are the subjects of the PIE verb. For longer sentences however (i.e., sentences containing, among others, more modifiers), the highest attention word often stands in a modifying relation to the PIE components. This is particularly frequent for PIEs classified as literal. Our study shows that an attention mechanism can contribute to the explainability of classification decisions that depend on specific cues in the sentential context, as it is the case for PIE disambiguation.",
}

@inproceedings{baptista-etal-2022-support,
    title = "Support Verb Constructions across the Ocean Sea",
    author = "Baptista, Jorge and Mamede, Nuno and Reis, S{\'o}nia",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.6/",
    pages = "26--36",
    abstract = "This paper analyses the support (or light) verb constructions (SVC) in a publicly available, manually annotated corpus of multiword expressions (MWE) in Brazilian Portuguese. The paper highlights several issues in the linguistic definitions therein adopted for these types of MWE, and reports the results from applying STRING, a rule-based parsing system, originally developed for European Portuguese, to this corpus from Brazilian Portuguese. The goal is two-fold: to improve the linguistic definition of SVC in the annotation task, as well as to gauge the major difficulties found when transposing linguistic resources between these two varieties of the same language.",
}
@inproceedings{bilgin-2022-matrix,
    title = "A Matrix-Based Heuristic Algorithm for Extracting Multiword Expressions from a Corpus",
    author = "Bilgin, Orhan",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.7/",
    pages = "37--48",
    abstract = "This paper describes an algorithm for automatically extracting multiword expressions (MWEs) from a corpus. The algorithm is node-based, i.e. extracts MWEs that contain the item specified by the user, using a fixed window-size around the node. The main idea is to detect the frequency anomalies that occur at the starting and ending points of an ngram that constitutes a MWE. This is achieved by locally comparing matrices of observed frequencies to matrices of expected frequencies, and determining, for each individual input, one or more sub-sequences that have the highest probability of being a MWE. Top-performing sub-sequences are then combined in a score-aggregation and ranking stage, thus producing a single list of score-ranked MWE candidates, without having to indiscriminately generate all possible sub-sequences of the input strings. The knowledge-poor and computationally efficient algorithm attempts to solve certain recurring problems in MWE extraction, such as the inability to deal with MWEs of arbitrary length, the repetitive counting of nested ngrams, and excessive sensitivity to frequency. Evaluation results show that the best-performing version generates top-50 precision values between 0.71 and 0.88 on Turkish and English data, and performs better than the baseline method even at n=1000.",
}

@inproceedings{maziarz-etal-2022-multi,
    title = "Multi-word Lexical Units Recognition in {W}ord{N}et",
    author = "Maziarz, Marek and Rudnicka, Ewa and Grabowski, {\L}ukasz",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.8/",
    pages = "49--54",
    abstract = "WordNet is a state-of-the-art lexical resource used in many tasks in Natural Language Processing, also in multi-word expression (MWE) recognition. However, not all MWEs recorded in WordNet could be indisputably called lexicalised. Some of them are semantically compositional and show no signs of idiosyncrasy. This state of affairs affects all evaluation measures that use the list of all WordNet MWEs as a gold standard. We propose a method of distinguishing between lexicalised and non-lexicalised word combinations in WordNet, taking into account lexicality features, such as semantic compositionality, MWE length and translational criterion. Both a rule-based approach and a ridge logistic regression are applied, beating a random baseline in precision of singling out lexicalised MWEs, as well as in recall of ruling out cases of non-lexicalised MWEs.",
}
@inproceedings{koptient-grabar-2022-automatic,
    title = "Automatic Detection of Difficulty of {F}rench Medical Sequences in Context",
    author = "Koptient, Ana{\"i}s and Grabar, Natalia",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.9/",
    pages = "55--66",
    abstract = "Medical documents use technical terms (single or multi-word expressions) with very specific semantics. Patients may find it difficult to understand these terms, which may lower their understanding of medical information. Before the simplification step of such terms, it is important to detect difficult to understand syntactic groups in medical documents as they may correspond to or contain technical terms. We address this question through categorization: we have to predict difficult to understand syntactic groups within syntactically analyzed medical documents. We use different models for this task: one built with only internal features (linguistic features), one built with only external features (contextual features), and one built with both sets of features. Our results show an f-measure over 0.8. Use of contextual (external) features and of annotations from all annotators impact the results positively. Ablation tests indicate that frequencies in large corpora and lexicon are relevant for this task.",
}

@inproceedings{finn-etal-2022-annotating,
    title = "Annotating {\textquotedblleft}Particles{\textquotedblright} in Multiword Expressions in te reo {M}{\={a}}ori for a Part-of-Speech Tagger",
    author = "Finn, Aoife and Duncan, Suzanne and Jones, Peter-Lucas and Leoni, Gianna and Mahelona, Keoni",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.10/",
    pages = "67--74",
    abstract = "This paper discusses the development of a Part-of-Speech tagger for te reo M{\={a}}ori, which is the Indigenous language of Aotearoa, also known as New Zealand. Te reo M{\={a}}ori is a particularly analytical and polysemic language. A word class called {\textquotedblleft}particles{\textquotedblright} is introduced, they are small multi-functional words with many meanings, for example {\={e}}, ai, noa, rawa, mai, an{\={o}} and koa. These {\textquotedblleft}particles{\textquotedblright} are reflective of the analytical and polysemous nature of te reo M{\={a}}ori. They frequently occur both singularly and also in multiword expressions, including time adverbial phrases. The paper illustrates the challenges that they presented to part-of-speech tagging. It also discusses how we overcome these challenges in a way that is appropriate for te reo M{\={a}}ori, given its status as an Indigenous language and its history of colonisation. This includes a discussion of the importance of accurately reflecting the conceptualization of te reo M{\={a}}ori, of making no linguistic presumptions, and of eliciting faithful judgements from speakers in a way that is uninfluenced by linguistic terminology.",
}
@inproceedings{schneider-etal-2022-metaphor,
    title = "Metaphor Detection for Low Resource Languages: From Zero-Shot to Few-Shot Learning in {M}iddle {H}igh {G}erman",
    author = "Schneider, Felix and Sickert, Sven and Brandes, Phillip and Marshall, Sophie and Denzler, Joachim",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.11/",
    pages = "75--80",
    abstract = "In this work, we present a novel unsupervised method for adjective-noun metaphor detection on low resource languages. We propose two new approaches: First, a way of artificially generating metaphor training examples and second, a novel way to find metaphors relying only on word embeddings. The latter enables application for low resource languages. Our method is based on a transformation of word embedding vectors into another vector space, in which the distance between the adjective word vector and the noun word vector represents the metaphoricity of the word pair. We train this method in a zero-shot pseudo-supervised manner by generating artificial metaphor examples and show that our approach can be used to generate a metaphor dataset with low annotation cost. It can then be used to finetune the system in a few-shot manner. In our experiments we show the capabilities of the method in its unsupervised and in its supervised version. Additionally, we test it against a comparable unsupervised baseline method and a supervised variation of it.",
}

@inproceedings{khusainova-etal-2022-automatic,
    title = "Automatic Bilingual Phrase Dictionary Construction from {GIZA}++ Output",
    author = "Khusainova, Albina and Romanov, Vitaly and Khan, Adil",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.12/",
    pages = "81--88",
    abstract = "Modern encoder-decoder based neural machine translation (NMT) models are normally trained on parallel sentences. Hence, they give best results when translating full sentences rather than sentence parts. Thereby, the task of translating commonly used phrases, which often arises for language learners, is not addressed by NMT models. While for high-resourced language pairs human-built phrase dictionaries exist, less-resourced pairs do not have them. We suggest an approach for building such a dictionary automatically based on the GIZA++ output and show that it works significantly better than translating phrases with a sentences-trained NMT system.",
}
@inproceedings{walsh-etal-2022-berts,
    title = "A {BERT}'s Eye View: Identification of {I}rish Multiword Expressions Using Pre-trained Language Models",
    author = "Walsh, Abigail and Lynn, Teresa and Foster, Jennifer",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.13/",
    pages = "89--99",
    abstract = "This paper reports on the investigation of using pre-trained language models for the identification of Irish verbal multiword expressions (vMWEs), comparing the results with the systems submitted for the PARSEME shared task edition 1.2. We compare the use of a monolingual BERT model for Irish (gaBERT) with multilingual BERT (mBERT), fine-tuned to perform MWE identification, presenting a series of experiments to explore the impact of hyperparameter tuning and dataset optimisation steps on these models. We compare the results of our optimised systems to those achieved by other systems submitted to the shared task, and present some best practices for minority languages addressing this task.",
}

@inproceedings{ozturk-etal-2022-enhancing,
    title = "Enhancing the {PARSEME} {T}urkish Corpus of Verbal Multiword Expressions",
    author = "Ozturk, Yagmur and Hadj Mohamed, Najet and Lion-Bouton, Adam and Savary, Agata",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.14/",
    pages = "100--104",
    abstract = "The PARSEME (Parsing and Multiword Expressions) project proposes multilingual corpora annotated for multiword expressions (MWEs). In this case study, we focus on the Turkish corpus of PARSEME. Turkish is an agglutinative language and shows high inflection and derivation in word forms. This can cause some issues in terms of automatic morphosyntactic annotation. We provide an overview of the problems observed in the morphosyntactic annotation of the Turkish PARSEME corpus. These issues are mostly observed on the lemmas, which is important for the approximation of a type of an MWE. We propose modifications of the original corpus with some enhancements on the lemmas and parts of speech. The enhancements are then evaluated with an identification system from the PARSEME Shared Task 1.2 to detect MWEs, namely Seen2Seen. Results show an increase in the F-measure for MWE identification, emphasizing the necessity of robust morphosyntactic annotation for MWE processing, especially for languages that show high surface variability.",
}
@inproceedings{phelps-etal-2022-sample,
    title = "Sample Efficient Approaches for Idiomaticity Detection",
    author = "Phelps, Dylan and Fan, Xuan-Rui and Gow-Smith, Edward and Tayyar Madabushi, Harish and Scarton, Carolina and Villavicencio, Aline",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.15/",
    pages = "105--111",
    abstract = "Deep neural models, in particular Transformer-based pre-trained language models, require a significant amount of data to train. This need for data tends to lead to problems when dealing with idiomatic multiword expressions (MWEs), which are inherently less frequent in natural text. As such, this work explores sample efficient methods of idiomaticity detection. In particular we study the impact of Pattern Exploit Training (PET), a few-shot method of classification, and BERTRAM, an efficient method of creating contextual embeddings, on the task of idiomaticity detection. In addition, to further explore generalisability, we focus on the identification of MWEs not present in the training data. Our experiments show that while these methods improve performance on English, they are much less effective on Portuguese and Galician, leading to an overall performance about on par with vanilla mBERT. Regardless, we believe sample efficient methods for both identifying and representing potentially idiomatic MWEs are very encouraging and hold significant potential for future exploration.",
}

@inproceedings{zagatti-etal-2022-mwetoolkit,
    title = "mwetoolkit-lib: Adaptation of the mwetoolkit as a Python Library and an Application to {MWE}-based Document Clustering",
    author = "Zagatti, Fernando and Medeiros, Paulo Augusto de Lima and Soares, Esther da Cunha and Silva, Lucas Nildaimon dos Santos and Ramisch, Carlos and Real, Livy",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.16/",
    pages = "112--117",
    abstract = "This paper introduces the mwetoolkit-lib, an adaptation of the mwetoolkit as a python library. The original toolkit performs the extraction and identification of multiword expressions (MWEs) in large text bases through the command line. One of the contributions of our work is the adaptation of the MWE extraction pipeline from the mwetoolkit, allowing its usage in python development environments and integration in larger pipelines. The other contribution is the execution of a pilot experiment aiming to show the impact of MWE discovery in data professionals' work. This experiment found that the addition of MWE knowledge to the Term Frequency-Inverse Document Frequency (TF-IDF) vectorization altered the word relevance order, improving the linguistic quality of the clusters returned by the k-means method.",
}
@inproceedings{dube-lareau-2022-handling,
    title = "Handling Idioms in Symbolic Multilingual Natural Language Generation",
    author = "Dub{\'e}, Michaelle and Lareau, Fran{\c{c}}ois",
    editor = "Bhatia, Archna and Cook, Paul and Taslimipoor, Shiva and Garcia, Marcos and Ramisch, Carlos",
    booktitle = "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.mwe-1.17/",
    pages = "118--126",
    abstract = "While idioms are usually very rigid in their expression, they sometimes allow a certain level of freedom in their usage, with modifiers or complements splitting them or being syntactically attached to internal nodes rather than to the root (e.g., {\textquotedblleft}take something with a big grain of salt{\textquotedblright}). This means that they cannot always be handled as ready-made strings in rule-based natural language generation systems. Having access to the internal syntactic structure of an idiom allows for more subtle processing. We propose a way to enumerate all possible language-independent n-node trees and to map particular idioms of a language onto these generic syntactic patterns. Using this method, we integrate the idioms from the LN-fr into GenDR, a multilingual realizer. Our implementation covers nearly 98{\%} of LN-fr's idioms with high precision, and can easily be extended or ported to other languages.",
}
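Everything above this point belongs to the MWE 2022 workshop volume, while the entries that follow switch to the MRL and MMNLU 2022 workshops. A short sketch of slicing such venue blocks out of the dataset, assuming the `ds` object from the loading example earlier; the variable names are ours:

```python
# Count entries per proceedings volume; assumes `ds` from the loading sketch.
from collections import Counter

venues = Counter(ds["booktitle"])
for booktitle, count in venues.most_common(5):
    print(f"{count:4d}  {booktitle}")

# Pull out one venue block, e.g. the MWE workshop entries above.
mwe = ds.filter(
    lambda row: row["booktitle"]
    == "Proceedings of the 18th Workshop on Multiword Expressions @LREC2022"
)
```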
@inproceedings{esmeir-etal-2022-entity,
    title = "Entity Retrieval from Multilingual Knowledge Graphs",
    author = "Esmeir, Saher and C{\^a}mara, Arthur and Meij, Edgar",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.1/",
    doi = "10.18653/v1/2022.mrl-1.1",
    pages = "1--15",
    abstract = "Knowledge Graphs (KGs) are structured databases that capture real-world entities and their relationships. The task of entity retrieval from a KG aims at retrieving a ranked list of entities relevant to a given user query. While English-only entity retrieval has attracted considerable attention, user queries, as well as the information contained in the KG, may be represented in multiple{---}and possibly distinct{---}languages. Furthermore, KG content may vary between languages due to different information sources and points of view. Recent advances in language representation have enabled natural ways of bridging gaps between languages. In this paper, we therefore propose to utilise language models (LMs) and diverse entity representations to enable truly multilingual entity retrieval. We propose two approaches: (i) an array of monolingual retrievers and (ii) a single multilingual retriever, trained using queries and documents in multiple languages. We show that while our approach is on par with the significantly more complex state-of-the-art method for the English task, it can be successfully applied to virtually any language with a LM. Furthermore, it allows languages to benefit from one another, yielding significantly better performance, both for low- and high-resource languages.",
}

@inproceedings{guzman-nateras-etal-2022-shot,
    title = "Few-Shot Cross-Lingual Learning for Event Detection",
    author = "Guzman Nateras, Luis and Lai, Viet and Dernoncourt, Franck and Nguyen, Thien",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.2/",
    doi = "10.18653/v1/2022.mrl-1.2",
    pages = "16--27",
    abstract = "Cross-Lingual Event Detection (CLED) models are capable of performing the Event Detection (ED) task in multiple languages. Such models are trained using data from a source language and then evaluated on data from a distinct target language. Training is usually performed in the standard supervised setting with labeled data available in the source language. The Few-Shot Learning (FSL) paradigm is yet to be explored for CLED despite its inherent advantage of allowing models to better generalize to unseen event types. As such, in this work, we study the CLED task under an FSL setting. Our contribution is threefold: first, we introduce a novel FSL classification method based on Optimal Transport (OT); second, we present a novel regularization term to incorporate the global distance between the support and query sets; and third, we adapt our approach to the cross-lingual setting by exploiting the alignment between source and target data. Our experiments on three, syntactically-different, target languages show the applicability of our approach and its effectiveness at improving the cross-lingual performance of few-shot models for event detection.",
}
@inproceedings{ushio-bollegala-2022-zero,
    title = "Zero-shot Cross-Lingual Counterfactual Detection via Automatic Extraction and Prediction of Clue Phrases",
    author = "Ushio, Asahi and Bollegala, Danushka",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.3/",
    doi = "10.18653/v1/2022.mrl-1.3",
    pages = "28--37",
    abstract = "Counterfactual statements describe events that did not or cannot take place unless some conditions are satisfied. Existing counterfactual detection (CFD) methods assume the availability of manually labelled statements for each language they consider, limiting the broad applicability of CFD. In this paper, we consider the problem of zero-shot cross-lingual transfer learning for CFD. Specifically, we propose a novel loss function based on the clue phrase prediction for generalising a CFD model trained on a source language to multiple target languages, without requiring any human-labelled data. We obtain clue phrases that express various language-specific lexical indicators of counterfactuality in the target language in an unsupervised manner using a neural alignment model. We evaluate our method on the Amazon Multilingual Counterfactual Dataset (AMCD) for English, German, and Japanese languages in the zero-shot cross-lingual transfer setup where no manual annotations are used for the target language during training. The best CFD model fine-tuned on XLM-R improves the macro F1 score by 25{\%} for German and 20{\%} for Japanese target languages compared to a model that is trained only using English source language data.",
}

@inproceedings{schumacher-etal-2022-zero,
    title = "Zero-shot Cross-Language Transfer of Monolingual Entity Linking Models",
    author = "Schumacher, Elliot and Mayfield, James and Dredze, Mark",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.4/",
    doi = "10.18653/v1/2022.mrl-1.4",
    pages = "38--51",
    abstract = "Most entity linking systems, whether mono or multilingual, link mentions to a single English knowledge base. Few have considered linking non-English text to a non-English KB, and therefore, transferring an English entity linking model to both a new document and KB language. We consider the task of zero-shot cross-language transfer of entity linking systems to a new language and KB. We find that a system trained with multilingual representations does reasonably well, and propose improvements to system training that lead to improved recall in most datasets, often matching the in-language performance. We further conduct a detailed evaluation to elucidate the challenges of this setting.",
}
@inproceedings{donicke-2022-rule,
    title = "Rule-Based Clause-Level Morphology for Multiple Languages",
    author = "D{\"o}nicke, Tillmann",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.5/",
    doi = "10.18653/v1/2022.mrl-1.5",
    pages = "52--63",
    abstract = "This paper describes an approach for the morphosyntactic analysis of clauses, including the analysis of composite verb forms and both overt and covert pronouns. The approach uses grammatical rules for verb inflection and clause-internal word agreement to compute a clause's morphosyntactic features from the morphological features of the individual words. The approach is tested for eight languages in the 1st Shared Task on Multilingual Clause-Level Morphology, where it achieves F1 scores between 79{\%} and 99{\%} (94{\%} on average).",
}

@inproceedings{saadi-etal-2022-comparative,
    title = "Comparative Analysis of Cross-lingual Contextualized Word Embeddings",
    author = "Saadi, Hossain Shaikh and Hangya, Viktor and Eder, Tobias and Fraser, Alexander",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.6/",
    doi = "10.18653/v1/2022.mrl-1.6",
    pages = "64--75",
    abstract = "Contextualized word embeddings have emerged as the most important tool for performing NLP tasks in a large variety of languages. In order to improve the cross-lingual representation and transfer learning quality, contextualized embedding alignment techniques, such as mapping and model fine-tuning, are employed. Existing techniques however are time-, data- and computational resource-intensive. In this paper we analyze these techniques by utilizing three tasks: bilingual lexicon induction (BLI), word retrieval and cross-lingual natural language inference (XNLI) for a high resource (German-English) and a low resource (Bengali-English) language pair. In contrast to previous works which focus only on a few popular models, we compare five multilingual and seven monolingual language models and investigate the effect of various aspects on their performance, such as vocabulary size, number of languages used for training and number of parameters. Additionally, we propose a parameter-, data- and runtime-efficient technique which can be trained with 10{\%} of the data, less than 10{\%} of the time and have less than 5{\%} of the trainable parameters compared to model fine-tuning. We show that our proposed method is competitive with resource heavy models, even outperforming them in some cases, even though it relies on fewer resources.",
}
@inproceedings{de-bruyne-etal-2022-language,
    title = "How Language-Dependent is Emotion Detection? Evidence from Multilingual {BERT}",
    author = "De Bruyne, Luna and Singh, Pranaydeep and De Clercq, Orphee and Lefever, Els and Hoste, Veronique",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.7/",
    doi = "10.18653/v1/2022.mrl-1.7",
    pages = "76--85",
    abstract = "As emotion analysis in text has gained a lot of attention in the field of natural language processing, differences in emotion expression across languages could have consequences for how emotion detection models work. We evaluate the language-dependence of an mBERT-based emotion detection model by comparing language identification performance before and after fine-tuning on emotion detection, and performing (adjusted) zero-shot experiments to assess whether emotion detection models rely on language-specific information. When dealing with typologically dissimilar languages, we found evidence for the language-dependence of emotion detection.",
}

@inproceedings{gessler-zeldes-2022-microbert,
    title = "{M}icro{BERT}: Effective Training of Low-resource Monolingual {BERT}s through Parameter Reduction and Multitask Learning",
    author = "Gessler, Luke and Zeldes, Amir",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.9/",
    doi = "10.18653/v1/2022.mrl-1.9",
    pages = "86--99",
    abstract = "BERT-style contextualized word embedding models are critical for good performance in most NLP tasks, but they are data-hungry and therefore difficult to train for low-resource languages. In this work, we investigate whether a combination of greatly reduced model size and two linguistically rich auxiliary pretraining tasks (part-of-speech tagging and dependency parsing) can help produce better BERTs in a low-resource setting. Results from 7 diverse languages indicate that our model, MicroBERT, is able to produce marked improvements in downstream task evaluations, including gains up to 18{\%} for parser LAS and 11{\%} for NER F1 compared to an mBERT baseline, and we achieve these results with less than 1{\%} of the parameter count of a multilingual BERT base{--}sized model. We conclude that training very small BERTs and leveraging any available labeled data for multitask learning during pretraining can produce models which outperform both their multilingual counterparts and traditional fixed embeddings for low-resource languages.",
}
@inproceedings{acikgoz-etal-2022-transformers,
    title = "Transformers on Multilingual Clause-Level Morphology",
    author = "Acikgoz, Emre Can and Chubakov, Tilek and Kural, Muge and {\c{S}}ahin, G{\"o}zde and Yuret, Deniz",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.10/",
    doi = "10.18653/v1/2022.mrl-1.10",
    pages = "100--105",
    abstract = "This paper describes the KUIS-AI NLP team's submission for the 1st Shared Task on Multilingual Clause-level Morphology (MRL2022). We present our work on all three parts of the shared task: inflection, reinflection, and analysis. We mainly explore two approaches: Transformer models in combination with data augmentation, and exploiting the state-of-the-art language modeling techniques for morphological analysis. Data augmentation leads to a remarkable performance improvement for most of the languages in the inflection task. Prefix-tuning on the pretrained mGPT model helps us to adapt to the reinflection and analysis tasks in a low-data setting. Additionally, we used pipeline architectures using publicly available open-source lemmatization tools and monolingual BERT-based morphological feature classifiers for the reinflection and analysis tasks, respectively. While Transformer architectures with data augmentation and pipeline architectures achieved the best results for the inflection and reinflection tasks, pipelines and prefix-tuning on mGPT received the highest results for the analysis task. Our methods achieved first place in each of the three tasks and outperform the mT5 baseline with 89{\%} for inflection, 80{\%} for reinflection, and 12{\%} for analysis. Our code is publicly available.",
}

@inproceedings{jaidi-etal-2022-impact,
    title = "Impact of Sequence Length and Copying on Clause-Level Inflection",
    author = "Jaidi, Badr and Saboo, Utkarsh and Wu, Xihan and Nicolai, Garrett and Silfverberg, Miikka",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.11/",
    doi = "10.18653/v1/2022.mrl-1.11",
    pages = "106--114",
    abstract = "We present the University of British Columbia's submission to the MRL shared task on multilingual clause-level morphology. Our submission extends word-level inflectional models to the clause-level in two ways: first, by evaluating the role that BPE has on the learning of inflectional morphology, and second, by evaluating the importance of a copy bias obtained through data hallucination. Experiments demonstrate a strong preference for language-tuned BPE and a copy bias over a vanilla transformer. The methods are complementary for inflection and analysis tasks {--} combined models see error reductions of 38{\%} for inflection and 15.6{\%} for analysis; however, this synergy does not hold for reinflection, which performs best under a BPE-only setting. A deeper analysis of the errors generated by our models illustrates that the copy bias may be too strong: the combined model produces predictions more similar to the copy-influenced system, despite the success of the BPE model.",
}
@inproceedings{eskander-etal-2022-towards,
    title = "Towards Improved Distantly Supervised Multilingual Named-Entity Recognition for Tweets",
    author = "Eskander, Ramy and Mishra, Shubhanshu and Mehta, Sneha and Samaniego, Sofia and Haghighi, Aria",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.12/",
    doi = "10.18653/v1/2022.mrl-1.12",
    pages = "115--124",
    abstract = "Recent low-resource named-entity recognition (NER) work has shown impressive gains by leveraging a single multilingual model trained using distantly supervised data derived from cross-lingual knowledge bases. In this work, we investigate such approaches by leveraging Wikidata to build large-scale NER datasets of Tweets and propose two orthogonal improvements for low-resource NER in the Twitter social media domain: (1) leveraging domain-specific pre-training on Tweets; and (2) building a model for each language family rather than an all-in-one single multilingual model. For (1), we show that mBERT with Tweet pre-training outperforms the state-of-the-art multilingual transformer-based language model, LaBSE, by a relative increase of 34.6{\%} in F1 when evaluated on Twitter data in a language-agnostic multilingual setting. For (2), we show that learning NER models for language families outperforms a single multilingual model by relative increases of 14.1{\%}, 15.8{\%} and 45.3{\%} in F1 when utilizing mBERT, mBERT with Tweet pre-training and LaBSE, respectively. We conduct analyses and present examples for these observed improvements.",
}

@inproceedings{pikuliak-simko-2022-average,
    title = "Average Is Not Enough: Caveats of Multilingual Evaluation",
    author = "Pikuliak, Mat{\'u}{\v{s}} and Simko, Marian",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.13/",
    doi = "10.18653/v1/2022.mrl-1.13",
    pages = "125--133",
    abstract = "This position paper discusses the problem of multilingual evaluation. Using simple statistics, such as average language performance, might inject linguistic biases in favor of dominant language families into evaluation methodology. We argue that a qualitative analysis informed by comparative linguistics is needed for multilingual results to detect this kind of bias. We show in our case study that results in published works can indeed be linguistically biased and we demonstrate that visualization based on URIEL typological database can detect it.",
}
@inproceedings{goldman-etal-2022-mrl,
    title = "The {MRL} 2022 Shared Task on Multilingual Clause-level Morphology",
    author = "Goldman, Omer and Tinner, Francesco and Gonen, Hila and Muller, Benjamin and Basmov, Victoria and Kirimi, Shadrack and Nishimwe, Lydia and Sagot, Beno{\^i}t and Seddah, Djam{\'e} and Tsarfaty, Reut and Ataman, Duygu",
    editor = "Ataman, Duygu and Gonen, Hila and Ruder, Sebastian and Firat, Orhan and G{\"u}l Sahin, G{\"o}zde and Mirzakhalov, Jamshidbek",
    booktitle = "Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mrl-1.14/",
    doi = "10.18653/v1/2022.mrl-1.14",
    pages = "134--146",
    abstract = "The 2022 Multilingual Representation Learning (MRL) Shared Task was dedicated to clause-level morphology. As the first ever benchmark that defines and evaluates morphology outside its traditional lexical boundaries, the shared task on multilingual clause-level morphology sets the scene for competition across different approaches to morphological modeling, with 3 clause-level sub-tasks: morphological inflection, reinflection and analysis, where systems are required to generate, manipulate or analyze simple sentences centered around a single content lexeme and a set of morphological features characterizing its syntactic clause. This year's tasks covered eight typologically distinct languages: English, French, German, Hebrew, Russian, Spanish, Swahili and Turkish. The task received submissions of four systems from three teams, which were compared to two baselines implementing prominent multilingual learning methods. The results show that modern NLP models are effective in solving morphological tasks even at the clause level. However, there is still room for improvement, especially in the task of morphological analysis.",
}
@inproceedings{grosso-etal-2022-robust,
    title = "Robust Domain Adaptation for Pre-trained Multilingual Neural Machine Translation Models",
    author = "Grosso, Mathieu and Mathey, Alexis and Ratnamogan, Pirashanth and Vanhuffel, William and Fotso, Michael",
    editor = "FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher",
    booktitle = "Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mmnlu-1.1/",
    doi = "10.18653/v1/2022.mmnlu-1.1",
    pages = "1--11",
    abstract = "Recent literature has demonstrated the potential of multilingual Neural Machine Translation (mNMT) models. However, the most efficient models are not well suited to specialized industries. In these cases, internal data is scarce and expensive to find in all language pairs. Therefore, fine-tuning a mNMT model on a specialized domain is hard. In this context, we decided to focus on a new task: Domain Adaptation of a pre-trained mNMT model on a single language pair while trying to maintain model quality on generic domain data for all language pairs. The risk of loss on generic domain and on other pairs is high. This task is key for mNMT model adoption in the industry and is at the border of many others. We propose a fine-tuning procedure for the generic mNMT that combines embeddings freezing and adversarial loss. Our experiments demonstrated that the procedure improves performances on specialized data with a minimal loss in initial performances on generic domain for all language pairs, compared to a naive standard approach (+10.0 BLEU score on specialized data, -0.01 to -0.5 BLEU on WMT and Tatoeba datasets on the other pairs with M2M100).",
}

@inproceedings{wu-etal-2022-fine,
    title = "Fine-grained Multi-lingual Disentangled Autoencoder for Language-agnostic Representation Learning",
    author = "Wu, Zetian and Sun, Zhongkai and Zhao, Zhengyang and Lu, Sixing and Ma, Chengyuan and Guo, Chenlei",
    editor = "FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher",
    booktitle = "Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.mmnlu-1.2/",
    doi = "10.18653/v1/2022.mmnlu-1.2",
    pages = "12--24",
    abstract = "Encoding both language-specific and language-agnostic information into a single high-dimensional space is a common practice of pre-trained Multi-lingual Language Models (pMLM). Such encoding has been shown to perform effectively on natural language tasks requiring semantics of the whole sentence (e.g., translation). However, its effectiveness appears to be limited on tasks requiring partial information of the utterance (e.g., multi-lingual entity retrieval, template retrieval, and semantic alignment). In this work, a novel Fine-grained Multilingual Disentangled Autoencoder (FMDA) is proposed to disentangle fine-grained semantic information from language-specific information in a multi-lingual setting. FMDA is capable of successfully extracting the disentangled template semantic and residual semantic representations. Experiments conducted on the MASSIVE dataset demonstrate that the disentangled encoding can boost each other during the training, thus consistently outperforming the original pMLM and the strong language disentanglement baseline on monolingual template retrieval and cross-lingual semantic retrieval tasks across multiple languages.",
}
inproceedings
nicosia-piccinno-2022-byte
Byte-Level Massively Multilingual Semantic Parsing
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.3/
Nicosia, Massimo and Piccinno, Francesco
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
25--34
Token free approaches have been successfully applied to a series of word and span level tasks. In this work, we evaluate a byte-level sequence to sequence model (ByT5) on the 51 languages in the MASSIVE multilingual semantic parsing dataset. We examine multiple experimental settings: (i) zero-shot, (ii) full gold data and (iii) zero-shot with synthetic data. By leveraging a state-of-the-art label projection method for machine translated examples, we are able to reduce the gap in exact match to only 5 points with respect to a model trained on gold data from all the languages. We additionally provide insights on the cross-lingual transfer of ByT5 and show how the model compares with respect to mT5 across all parameter sizes.
null
null
10.18653/v1/2022.mmnlu-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,277
inproceedings
zheng-etal-2022-hit
{HIT}-{SCIR} at {MMNLU}-22: Consistency Regularization for Multilingual Spoken Language Understanding
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.4/
Zheng, Bo and Li, Zhouyang and Wei, Fuxuan and Chen, Qiguang and Qin, Libo and Che, Wanxiang
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
35--41
Multilingual spoken language understanding (SLU) consists of two sub-tasks, namely intent detection and slot filling. To improve the performance of these two sub-tasks, we propose to use consistency regularization based on a hybrid data augmentation strategy. The consistency regularization enforces the predicted distributions for an example and its semantically equivalent augmentation to be consistent. We conduct experiments on the MASSIVE dataset under both full-dataset and zero-shot settings. Experimental results demonstrate that our proposed method improves the performance on both intent detection and slot filling tasks. Our system ranked 1st in the MMNLU-22 competition under the full-dataset setting.
null
null
10.18653/v1/2022.mmnlu-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,278
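A minimal sketch of the consistency-regularization idea described in the abstract above, assuming a symmetric-KL formulation between the predicted distributions of an example and its augmentation; the function names and the PyTorch setting are illustrative choices, not the authors' exact loss or augmentation strategy.

# Hedged sketch: penalize divergence between the predictions for an example
# and for its semantically equivalent augmentation. The symmetric-KL form
# below is an assumption; the paper's exact loss may differ.
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig: torch.Tensor, logits_aug: torch.Tensor) -> torch.Tensor:
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_aug, dim=-1)
    # F.kl_div(input, target) expects log-probs as input and probs as target,
    # so kl_div(q, p.exp()) computes KL(P || Q) and vice versa.
    kl_pq = F.kl_div(q, p.exp(), reduction="batchmean")
    kl_qp = F.kl_div(p, q.exp(), reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

# Usage: total = task_loss + lambda_consistency * consistency_loss(logits, logits_aug)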
inproceedings
crisostomi-etal-2022-play
Play m{\'u}sica alegre: A Large-Scale Empirical Analysis of Cross-Lingual Phenomena in Voice Assistant Interactions
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.5/
Crisostomi, Donato and Manzotti, Alessandro and Palumbo, Enrico and Bernardi, Davide and Campbell, Sarah and Garg, Shubham
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
42--52
Cross-lingual phenomena are quite common in informal contexts like social media, where users are likely to mix their native language with English or other languages. However, few studies have focused so far on analyzing cross-lingual interactions in voice-assistant data, which present peculiar features in terms of sentence length, named entities, and use of spoken language. Also, little attention has been paid to European countries, where English is frequently used as a second language. In this paper, we present a large-scale empirical analysis of cross-lingual phenomena (code-mixing, linguistic borrowing, foreign named entities) in the interactions with a large-scale voice assistant in European countries. To do this, we first introduce a general, highly-scalable technique to generate synthetic mixed training data annotated with token-level language labels and we train two neural network models to predict them. We evaluate the models both on the synthetic dataset and on a real dataset of code-switched utterances, showing that the best performance is obtained by a character convolution based model. The results of the analysis highlight different behaviors between countries, with Italy having the highest ratio of cross-lingual utterances and Spain showing a marked preference for keeping Spanish words. Our research, paired with the increase of cross-lingual phenomena over time, motivates further research in developing multilingual Natural Language Understanding (NLU) models, which can naturally deal with cross-lingual interactions.
null
null
10.18653/v1/2022.mmnlu-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,279
inproceedings
wang-etal-2022-zero
Zero-Shot Cross-Lingual Sequence Tagging as {S}eq2{S}eq Generation for Joint Intent Classification and Slot Filling
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.6/
Wang, Fei and Huang, Kuan-hao and Kumar, Anoop and Galstyan, Aram and Ver steeg, Greg and Chang, Kai-wei
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
53--61
The joint intent classification and slot filling task seeks to detect the intent of an utterance and extract its semantic concepts. In the zero-shot cross-lingual setting, a model is trained on a source language and then transferred to other target languages through multi-lingual representations without additional training data. While prior studies show that pre-trained multilingual sequence-to-sequence (Seq2Seq) models can facilitate zero-shot transfer, there is little understanding on how to design the output template for the joint prediction tasks. In this paper, we examine three aspects of the output template {--} (1) label mapping, (2) task dependency, and (3) word order. Experiments on the MASSIVE dataset consisting of 51 languages show that our output template significantly improves the performance of pre-trained cross-lingual language models.
null
null
10.18653/v1/2022.mmnlu-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,280
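A minimal sketch of what a linearized Seq2Seq output template for joint intent classification and slot filling might look like; this exact format and the helper names are hypothetical, since the paper's template design is not reproduced here.

# Hypothetical linearization of the joint prediction target into a single
# string a Seq2Seq model can generate, plus the inverse parse.
def linearize(intent: str, slots: dict[str, str]) -> str:
    slot_str = " ; ".join(f"{k} = {v}" for k, v in slots.items())
    return f"intent: {intent} | slots: {slot_str}"

def parse(output: str) -> tuple[str, dict[str, str]]:
    intent_part, slot_part = output.split(" | slots: ")
    intent = intent_part.removeprefix("intent: ")
    slots = dict(kv.split(" = ", 1) for kv in slot_part.split(" ; ") if kv)
    return intent, slots

target = linearize("play_music", {"artist": "Miles Davis", "genre": "jazz"})
assert parse(target) == ("play_music", {"artist": "Miles Davis", "genre": "jazz"})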
inproceedings
jhan-etal-2022-c5l7
$C5L7$: A Zero-Shot Algorithm for Intent and Slot Detection in Multilingual Task Oriented Languages
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.7/
Jhan, Jiun-hao and Zhu, Qingxiaoyang and Bengre, Nehal and Kanungo, Tapas
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
62--68
Voice assistants are becoming central to our lives. The convenience of using voice assistants to do simple tasks has created an industry for voice-enabled devices like TVs, thermostats, air conditioners, etc. It has also improved the quality of life of elders by making the world more accessible. Voice assistants engage in task-oriented dialogues using machine-learned language understanding models. However, training deep-learned models takes a lot of training data, which is time-consuming and expensive. Furthermore, it is even more problematic if we want the voice assistant to understand hundreds of languages. In this paper, we present a zero-shot deep learning algorithm that uses only the English part of the MASSIVE dataset and achieves a high level of accuracy across 51 languages. The algorithm uses a delexicalized translation model to generate multilingual data for data augmentation. The training data is further weighted to improve the accuracy of the worst-performing languages. We report on our experiments with code-switching, word order, multilingual ensemble methods, and other techniques and their impact on overall accuracy.
null
null
10.18653/v1/2022.mmnlu-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,281
inproceedings
de-bruyn-etal-2022-machine
Machine Translation for Multilingual Intent Detection and Slots Filling
FitzGerald, Jack and Rottmann, Kay and Hirschberg, Julia and Bansal, Mohit and Rumshisky, Anna and Peris, Charith and Hench, Christopher
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mmnlu-1.8/
De bruyn, Maxime and Lotfi, Ehsan and Buhmann, Jeska and Daelemans, Walter
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
69--82
We expect to interact with home assistants irrespective of our language. However, scaling the Natural Language Understanding pipeline to multiple languages while keeping the same level of accuracy remains a challenge. In this work, we leverage the inherent multilingual aspect of translation models for the task of multilingual intent classification and slot filling. Our experiments reveal that they work equally well with general-purpose multilingual text-to-text models. Furthermore, their accuracy can be further improved by artificially increasing the size of the training set. Unfortunately, increasing the training set also increases the overlap with the test set, leading to overestimating their true capabilities. As a result, we propose two new evaluation methods capable of accounting for an overlap between the training and test set.
null
null
10.18653/v1/2022.mmnlu-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,282
inproceedings
doostmohammadi-kuhlmann-2022-effects
On the Effects of Video Grounding on Language Models
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.1/
Doostmohammadi, Ehsan and Kuhlmann, Marco
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
1--6
Transformer-based models trained on text and vision modalities try to improve the performance on multimodal downstream tasks or tackle the problem of lack of grounding, e.g., addressing issues like models' insufficient commonsense knowledge. While it is more straightforward to evaluate the effects of such models on multimodal tasks, such as visual question answering or image captioning, it is not as well-understood how these tasks affect the model itself, and its internal linguistic representations. In this work, we experiment with language models grounded in videos and measure the models' performance on predicting masked words chosen based on their \textit{imageability}. The results show that the smaller model benefits from video grounding in predicting highly imageable words, while the results for the larger model seem harder to interpret.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,285
inproceedings
wang-etal-2022-rethinking
Rethinking Task Sampling for Few-shot Vision-Language Transfer Learning
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.2/
Wang, Zhenhailong and Yu, Hang and Li, Manling and Zhao, Han and Ji, Heng
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
7--14
Despite achieving state-of-the-art zero-shot performance, existing vision-language models still fall short of few-shot transfer ability on domain-specific problems. Classical fine-tuning often fails to prevent highly expressive models from exploiting spurious correlations. Although model-agnostic meta-learning (MAML) presents as a natural alternative for few-shot transfer learning, the expensive computation due to implicit second-order optimization limits its use on large-scale vision-language models such as CLIP. While much literature has been devoted to exploring alternative optimization strategies, we identify another essential aspect towards effective few-shot transfer learning, task sampling, which was previously only viewed as part of data pre-processing in MAML. To show the impact of task sampling, we propose a simple algorithm, Model-Agnostic Multitask Fine-tuning (MAMF), which differs from classical fine-tuning only in uniformly sampling multiple tasks. Despite its simplicity, we show that MAMF consistently outperforms classical fine-tuning on five few-shot image classification tasks. We further show that the effectiveness of the bi-level optimization in MAML is highly sensitive to the zero-shot performance of a task in the context of few-shot vision-language classification. The goal of this paper is to provide new insights on what makes few-shot learning work, and to encourage more research into investigating better task sampling strategies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,286
inproceedings
coria-etal-2022-analyzing
Analyzing {BERT} Cross-lingual Transfer Capabilities in Continual Sequence Labeling
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.3/
Coria, Juan Manuel and Veron, Mathilde and Ghannay, Sahar and Bernard, Guillaume and Bredin, Herv{\'e} and Galibert, Olivier and Rosset, Sophie
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
15--25
Knowledge transfer between neural language models is a widely used technique that has proven to improve performance in a multitude of natural language tasks, in particular with the recent rise of large pre-trained language models like BERT. Similarly, high cross-lingual transfer has been shown to occur in multilingual language models. Hence, it is of great importance to better understand this phenomenon as well as its limits. While most studies about cross-lingual transfer focus on training on independent and identically distributed (i.e. i.i.d.) samples, in this paper we study cross-lingual transfer in a continual learning setting on two sequence labeling tasks: slot-filling and named entity recognition. We investigate this by training multilingual BERT on sequences of 9 languages, one language at a time, on the MultiATIS++ and MultiCoNER corpora. Our first findings are that forward transfer between languages is retained although forgetting is present. Additional experiments show that lost performance can be recovered with as little as a single training epoch even if forgetting was high, which can be explained by a progressive shift of model parameters towards a better multilingual initialization. We also find that commonly used metrics might be insufficient to assess continual learning performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,287
inproceedings
razzhigaev-etal-2022-pixel
Pixel-Level {BPE} for Auto-Regressive Image Generation
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.4/
Razzhigaev, Anton and Voronov, Anton and Kaznacheev, Andrey and Kuznetsov, Andrey and Dimitrov, Denis and Panchenko, Alexander
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
26--30
Pixel-level autoregression with Transformer models (Image GPT or iGPT) is one of the recent approaches to image generation that has not received massive attention and elaboration due to the quadratic complexity of attention, which imposes huge memory requirements and thus restricts the resolution of the generated images. In this paper, we propose to tackle this problem by adapting Byte-Pair Encoding (BPE), originally proposed for text processing, to the image domain, to drastically reduce the length of the modeled sequence. The obtained results demonstrate that it is possible to decrease the amount of computation required to generate images pixel-by-pixel while preserving their quality and the expressiveness of the features extracted from the model. Our results show that there is room for improvement for iGPT-like models, with more thorough research on the way to optimal sequence encoding techniques for images.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,288
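A toy sketch of the core idea in the abstract above: running BPE merges over a flattened, quantized pixel sequence so the autoregressive model has a shorter sequence to handle. This pure-Python version is illustrative, not the paper's implementation.

# Toy BPE over a pixel-value sequence: repeatedly merge the most frequent
# adjacent pair into a new token id, shortening the modeled sequence.
from collections import Counter

def bpe_merges(seq, num_merges):
    seq = list(seq)               # e.g., quantized pixel values
    merges = []
    next_token = max(seq) + 1     # fresh ids for merged pairs
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(((a, b), next_token))
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(next_token)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq, next_token = out, next_token + 1
    return seq, merges

# Example: a 7-pixel sequence over a 3-value palette, compressed with 2 merges.
tokens, table = bpe_merges([0, 1, 0, 1, 2, 0, 1], num_merges=2)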
inproceedings
santos-etal-2022-cost
Cost-Effective Language Driven Image Editing with {LX}-{DRIM}
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.5/
Santos, Rodrigo and Branco, Ant{\'o}nio and Silva, Jo{\~a}o Ricardo
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
31--43
Cross-modal language and image processing is envisaged as a way to improve language understanding by resorting to visual grounding, but only recently, with the emergence of neural architectures specifically tailored to cope with both modalities, has it attracted increased attention and obtained promising results. In this paper we address a cross-modal task of language-driven image design, in particular the task of altering a given image on the basis of language instructions. We also avoid the need for a specifically tailored architecture and resort instead to a general purpose model in the Transformer family. Experiments with the resulting tool, LX-DRIM, show very encouraging results, confirming the viability of the approach for language-driven image design while keeping it affordable in terms of compute and data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,289
inproceedings
bansal-etal-2022-shapes
Shapes of Emotions: Multimodal Emotion Recognition in Conversations via Emotion Shifts
null
oct
2022
Virtual
International Conference on Computational Linguistics
https://aclanthology.org/2022.mmmpie-1.6/
Bansal, Keshav and Agarwal, Harsh and Joshi, Abhinav and Modi, Ashutosh
Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models
44--56
Emotion Recognition in Conversations (ERC) is an important and active research area. Recent work has shown the benefits of using multiple modalities (e.g., text, audio, and video) for the ERC task. In a conversation, participants tend to maintain a particular emotional state unless some stimulus evokes a change. There is a continuous ebb and flow of emotions in a conversation. Inspired by this observation, we propose a multimodal ERC model and augment it with an emotion-shift component that improves performance. The proposed emotion-shift component is modular and can be added to any existing multimodal ERC model (with a few modifications). We experiment with different variants of the model, and the results show that the inclusion of the emotion-shift signal helps the model to outperform existing models for ERC on the MOSEI and IEMOCAP datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,290
inproceedings
kannan-rajalakshmi-2022-multimodal
Multimodal Code-Mixed {T}amil Troll Meme Classification using Feature Fusion
Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul
dec
2022
IIIT Delhi, New Delhi, India
Association for Computational Linguistics
https://aclanthology.org/2022.mmlow-1.1/
Kannan, Ramesh and Rajalakshmi, Ratnavel
Proceedings of the First Workshop on Multimodal Machine Learning in Low-resource Languages
1--8
Memes have become an important way of expressing ideas through social media platforms and forums. At the same time, memes are used for trolling by people who try to get noticed by other internet users on social media, in chat rooms and on blogs. Memes contain both textual and visual information, and based on their content they are used to troll in online communities. There is no restriction on language usage in online media. The present work focuses on classifying whether memes are trolling or not. The proposed multimodal approach achieved a considerably better weighted average F1 score of 0.5437 compared to unimodal approaches. Other performance metrics such as precision, recall, accuracy and macro average have also been studied to assess the proposed system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,292
inproceedings
rajalakshmi-etal-2022-understanding
Understanding the role of Emojis for emotion detection in {T}amil
Chakravarthi, Bharathi Raja and Murugappan, Abirami and Chinnappa, Dhivya and Hane, Adeep and Kumeresan, Prasanna Kumar and Ponnusamy, Rahul
dec
2022
IIIT Delhi, New Delhi, India
Association for Computational Linguistics
https://aclanthology.org/2022.mmlow-1.2/
Rajalakshmi, Ratnavel and Mattins R, Faerie and Selvaraj, Srivarshan and Shibani, Antonette and Kumar M, Anand and Raja Chakravarthi, Bharathi
Proceedings of the First Workshop on Multimodal Machine Learning in Low-resource Languages
9--17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,293
inproceedings
jung-etal-2022-language
Language-agnostic Semantic Consistent Text-to-Image Generation
Bugliarello, Emanuele and Cheng, Kai-Wei and Elliott, Desmond and Gella, Spandana and Kamath, Aishwarya and Li, Liunian Harold and Liu, Fangyu and Pfeiffer, Jonas and Ponti, Edoardo Maria and Srinivasan, Krishna and Vuli{\'c}, Ivan and Yang, Yinfei and Yin, Da
may
2022
Dublin, Ireland and Online
Association for Computational Linguistics
https://aclanthology.org/2022.mml-1.1/
Jung, SeongJun and Choi, Woo Suk and Choi, Seongho and Zhang, Byoung-Tak
Proceedings of the Workshop on Multilingual Multimodal Learning
1--5
Recent GAN-based text-to-image generation models have advanced to the point that they can generate photo-realistic images that semantically match their descriptions. However, research on multilingual text-to-image generation has not yet been carried out extensively. There are two problems when constructing a multilingual text-to-image generation model: 1) a language imbalance issue in text-to-image paired datasets and 2) generating images that have the same meaning but are semantically inconsistent with each other for texts expressed in different languages. To this end, we propose a Language-agnostic Semantic Consistent Generative Adversarial Network (LaSC-GAN) for text-to-image generation, which can generate semantically consistent images via a language-agnostic text encoder and a Siamese mechanism. Experiments on relatively low-resource language text-image datasets show that the model has generation quality comparable to images generated from high-resource language text, and generates semantically consistent images for texts with the same meaning even in different languages.
null
null
10.18653/v1/2022.mml-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,295
inproceedings
nasir-mchechesi-2022-geographical
Geographical Distance Is The New Hyperparameter: A Case Study Of Finding The Optimal Pre-trained Language For {E}nglish-isi{Z}ulu Machine Translation.
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.1/
Nasir, Muhammad Umair and Mchechesi, Innocent
Proceedings of the Workshop on Multilingual Information Access (MIA)
1--8
Stemming from the limited availability of datasets and textual resources for low-resource languages such as isiZulu, there is a significant need to be able to harness knowledge from pre-trained models to improve low-resource machine translation. Moreover, a lack of techniques to handle the complexities of morphologically rich languages has compounded the unequal development of translation models, with many widely spoken African languages being left behind. This study explores the potential benefits of transfer learning in an English-isiZulu translation framework. The results indicate the value of transfer learning from closely related languages to enhance the performance of low-resource translation models, thus providing a key strategy for low-resource translation going forward. We gathered results from 8 different language corpora, including one multilingual corpus, and saw that isiXhosa-isiZulu outperformed all languages, with a BLEU score of 8.56 on the test set, 2.73 points better than the model pre-trained on the multilingual corpora. We also derived a new coefficient, Nasir's Geographical Distance Coefficient (NGDC), which provides an easy selection of languages for the pre-trained models. NGDC also indicated that isiXhosa should be selected as the language for the pre-trained model.
null
null
10.18653/v1/2022.mia-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,297
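The abstract above names NGDC but does not give its formula, so the following is only a hypothetical sketch of the underlying idea, selecting a transfer language by geographic proximity; the haversine-only scoring and the rough coordinates are assumptions, not the paper's coefficient.

# Hypothetical: rank candidate pre-training languages for a target language
# by great-circle distance between rough language centroids.
import math

def haversine_km(p, q):
    (lat1, lon1), (lat2, lon2) = p, q
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative centroids only, not authoritative language coordinates.
coords = {"isiZulu": (-28.5, 30.9), "isiXhosa": (-32.0, 27.0), "Swahili": (-6.4, 34.9)}

def closest_language(target, candidates):
    return min(candidates, key=lambda l: haversine_km(coords[target], coords[l]))

print(closest_language("isiZulu", ["isiXhosa", "Swahili"]))  # -> isiXhosa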
inproceedings
sazzed-2022-annotated
An Annotated Dataset and Automatic Approaches for Discourse Mode Identification in Low-resource {B}engali Language
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.2/
Sazzed, Salim
Proceedings of the Workshop on Multilingual Information Access (MIA)
9--15
The modes of discourse aid in comprehending the convention and purpose of various forms of languages used during communication. In this study, we introduce a discourse mode annotated corpus for the low-resource Bangla (also referred to as Bengali) language. The corpus consists of sentence-level annotation of three different discourse modes, narrative, descriptive, and informative, of text excerpted from a number of Bangla novels. We analyze the annotated corpus to expose various linguistic aspects of discourse modes, such as class distributions and average sentence lengths. To automatically determine the mode of discourse, we apply CML (classical machine learning) classifiers with n-gram based statistical features and a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) based language model. We observe that the fine-tuned BERT-based approach yields more promising results than the n-gram based CML classifiers. Our discourse mode annotated dataset, the first of its kind in Bangla, and the accompanying evaluation provide baselines for automatic discourse mode identification in Bangla and can assist various downstream natural language processing tasks.
null
null
10.18653/v1/2022.mia-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,298
inproceedings
montero-etal-2022-pivot
Pivot Through {E}nglish: Reliably Answering Multilingual Questions without Document Retrieval
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.3/
Montero, Ivan and Longpre, Shayne and Lao, Ni and Frank, Andrew and DuBois, Christopher
Proceedings of the Workshop on Multilingual Information Access (MIA)
16--28
Existing methods for open-retrieval question answering in lower resource languages (LRLs) lag significantly behind English. They not only suffer from the shortcomings of non-English document retrieval, but are reliant on language-specific supervision for either the task or translation. We formulate a task setup more realistic to available resources, that circumvents document retrieval to reliably transfer knowledge from English to lower resource languages. Assuming a strong English question answering model or database, we compare and analyze methods that pivot through English: to map foreign queries to English and then English answers back to target language answers. Within this task setup we propose Reranked Multilingual Maximal Inner Product Search (RM-MIPS), akin to semantic similarity retrieval over the English training set with reranking, which outperforms the strongest baselines by 2.7{\%} on XQuAD and 6.2{\%} on MKQA. Analysis demonstrates the particular efficacy of this strategy over state-of-the-art alternatives in challenging settings: low-resource languages, with extensive distractor data and query distribution misalignment. Circumventing retrieval, our analysis shows this approach offers rapid answer generation to many other languages off-the-shelf, without necessitating additional training data in the target language.
null
null
10.18653/v1/2022.mia-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,299
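A minimal sketch of the retrieve-then-rerank pattern behind RM-MIPS as described above: maximal inner product search over embeddings of the English training set, followed by reranking of the top candidates. The encoder is abstracted away and the cosine reranker is a stand-in, not the paper's scorer.

# Hedged sketch of MIPS + reranking over precomputed embeddings.
import numpy as np

def mips_with_rerank(query_vec, train_matrix, rerank_score, k=10):
    # Stage 1: exact MIPS via a dense dot product (ANN indexes would be used at scale).
    scores = train_matrix @ query_vec
    top_k = np.argsort(-scores)[:k]
    # Stage 2: rerank the k candidates with a (typically more expensive) scorer.
    return max(top_k, key=lambda i: rerank_score(query_vec, train_matrix[i]))

# Toy usage with random embeddings and cosine similarity as the reranker.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 128))   # embedded English training questions
q = rng.normal(size=128)               # embedded foreign query, mapped to English space
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
best_idx = mips_with_rerank(q, train, cos)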
inproceedings
snaebjarnarson-einarsson-2022-cross
Cross-Lingual {QA} as a Stepping Stone for Monolingual Open {QA} in {I}celandic
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.4/
Sn{\ae}bjarnarson, V{\'e}steinn and Einarsson, Hafsteinn
Proceedings of the Workshop on Multilingual Information Access (MIA)
29--36
It can be challenging to build effective open question answering (open QA) systems for languages other than English, mainly due to a lack of labeled data for training. We present a data efficient method to bootstrap such a system for languages other than English. Our approach requires only limited QA resources in the given language, along with machine-translated data, and at least a bilingual language model. To evaluate our approach, we build such a system for the Icelandic language and evaluate performance over trivia style datasets. The corpora used for training are English in origin but machine translated into Icelandic. We train a bilingual Icelandic/English language model to embed English context and Icelandic questions following methodology introduced with DensePhrases (Lee et al., 2021). The resulting system is an open domain cross-lingual QA system between Icelandic and English. Finally, the system is adapted for Icelandic only open QA, demonstrating how it is possible to efficiently create an open QA system with limited access to curated datasets in the language of interest.
null
null
10.18653/v1/2022.mia-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,300
inproceedings
pratapa-etal-2022-multilingual
Multilingual Event Linking to {W}ikidata
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.5/
Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko
Proceedings of the Workshop on Multilingual Information Access (MIA)
37--58
We present a task of multilingual linking of events to a knowledge base. We automatically compile a large-scale dataset for this task, comprising 1.8M mentions across 44 languages referring to over 10.9K events from Wikidata. We propose two variants of the event linking task: 1) multilingual, where event descriptions are from the same language as the mention, and 2) crosslingual, where all event descriptions are in English. On the two proposed tasks, we compare multiple event linking systems including BM25+ (Lv and Zhai, 2011) and multilingual adaptations of the biencoder and crossencoder architectures from BLINK (Wu et al., 2020). In our experiments on the two task variants, we find both biencoder and crossencoder models significantly outperform the BM25+ baseline. Our results also indicate that the crosslingual task is in general more challenging than the multilingual task. To test the out-of-domain generalization of the proposed linking systems, we additionally create a Wikinews-based evaluation set. We present qualitative analysis highlighting various aspects captured by the proposed dataset, including the need for temporal reasoning over context and tackling diverse event descriptions across languages.
null
null
10.18653/v1/2022.mia-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,301
inproceedings
nguyen-kauchak-2022-complex
Complex Word Identification in {V}ietnamese: Towards {V}ietnamese Text Simplification
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.6/
Nguyen, Phuong and Kauchak, David
Proceedings of the Workshop on Multilingual Information Access (MIA)
59--68
Text Simplification has been an extensively researched problem in English, but has not been investigated in Vietnamese. We focus on the Vietnamese-specific Complex Word Identification task, often the first step in Lexical Simplification (Shardlow, 2013). We examine three different Vietnamese datasets constructed for other Natural Language Processing tasks and show that, as in other languages, frequency is a strong signal in determining whether a word is complex, with a mean accuracy of 86.87{\%}. Across the datasets, we find that the 10{\%} most frequent words in many corpora can be labelled as simple, and the rest as complex, though this is more variable for smaller corpora. We also examine how human annotators perform at this task. Given the subjective nature, there is a fair amount of variability in which words are seen as difficult, though majority results are more consistent.
null
null
10.18653/v1/2022.mia-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,302
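A minimal sketch of the frequency heuristic reported in the abstract above, labelling the 10% most frequent word types in a corpus as simple and the rest as complex; the tokenization and the fixed threshold are simplifications.

# Frequency-threshold complex word identification: top 10% of word types
# by frequency are "simple", everything else is "complex".
from collections import Counter

def complex_words(corpus_tokens, simple_fraction=0.10):
    freq = Counter(corpus_tokens)
    ranked = [w for w, _ in freq.most_common()]
    cutoff = max(1, int(len(ranked) * simple_fraction))
    simple = set(ranked[:cutoff])
    return {w: (w not in simple) for w in freq}  # True = complex

labels = complex_words("a a a b b c d e f g h i j".split())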
inproceedings
wang-etal-2022-benchmarking
Benchmarking Language-agnostic Intent Classification for Virtual Assistant Platforms
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.7/
Wang, Gengyu and Qian, Cheng and Pan, Lin and Qi, Haode and Kunc, Ladislav and Potdar, Saloni
Proceedings of the Workshop on Multilingual Information Access (MIA)
69--76
Current virtual assistant (VA) platforms are beholden to the limited number of languages they support. Every component, such as the tokenizer and intent classifier, is engineered for specific languages in these intricate platforms. Thus, supporting a new language in such platforms is a resource-intensive operation requiring expensive re-training and re-designing. In this paper, we propose a benchmark for evaluating language-agnostic intent classification, the most critical component of VA platforms. To ensure the benchmarking is challenging and comprehensive, we include 29 public and internal datasets across 10 low-resource languages and evaluate various training and testing settings with consideration of both accuracy and training time. The benchmarking result shows that Watson Assistant, among 7 commercial VA platforms and pre-trained multilingual language models (LMs), demonstrates close-to-best accuracy with the best accuracy-training time trade-off.
null
null
10.18653/v1/2022.mia-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,303
inproceedings
hung-etal-2022-zusammenqa
{Z}usammen{QA}: Data Augmentation with Specialized Models for Cross-lingual Open-retrieval Question Answering System
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.8/
Hung, Chia-Chien and Green, Tommaso and Litschko, Robert and Tsereteli, Tornike and Takeshita, Sotaro and Bombieri, Marco and Glava{\v{s}}, Goran and Ponzetto, Simone Paolo
Proceedings of the Workshop on Multilingual Information Access (MIA)
77--90
This paper introduces our proposed system for the MIA Shared Task on Cross-lingual Open-retrieval Question Answering (COQA). In this challenging scenario, given an input question the system has to gather evidence documents from a multilingual pool and generate from them an answer in the language of the question. We devised several approaches combining different model variants for three main components: Data Augmentation, Passage Retrieval, and Answer Generation. For passage retrieval, we evaluated the monolingual BM25 ranker against the ensemble of re-rankers based on multilingual pretrained language models (PLMs) and also variants of the shared task baseline, re-training it from scratch using a recently introduced contrastive loss that maintains a strong gradient signal throughout training by means of mixed negative samples. For answer generation, we focused on language- and domain-specialization by means of continued language model (LM) pretraining of existing multilingual encoders. Additionally, for both passage retrieval and answer generation, we augmented the training data provided by the task organizers with automatically generated question-answer pairs created from Wikipedia passages to mitigate the issue of data scarcity, particularly for the low-resource languages for which no training data were provided. Our results show that language- and domain-specialization as well as data augmentation help, especially for low-resource languages.
null
null
10.18653/v1/2022.mia-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,304
inproceedings
agarwal-etal-2022-zero
Zero-shot cross-lingual open domain question answering
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.9/
Agarwal, Sumit and Tripathi, Suraj and Mitamura, Teruko and Rose, Carolyn Penstein
Proceedings of the Workshop on Multilingual Information Access (MIA)
91--99
People speaking different kinds of languages search for information in a cross-lingual manner. They tend to ask questions in their language and expect the answer to be in the same language, despite the evidence lying in another language. In this paper, we present our approach for this task of cross-lingual open-domain question answering. Our proposed method employs a passage reranker, the fusion-in-decoder technique for generation, and a Wikidata entity-based post-processing system to tackle the inability to generate entities across all languages. Our end-to-end pipeline shows an improvement of 3 and 4.6 points on the F1 and EM metrics respectively, when compared with the baseline CORA model on the XOR-TyDi dataset. We also evaluate the effectiveness of our proposed techniques in the zero-shot setting using the MKQA dataset and show an improvement of 5 points in F1 for high-resource languages and a 3 point improvement for low-resource zero-shot languages. Our team CMUmQA's submission to the MIA Shared Task ranked 1st in the constrained setup on the dev set and 2nd on the test set.
null
null
10.18653/v1/2022.mia-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,305
inproceedings
tu-padmanabhan-2022-mia
{MIA} 2022 Shared Task Submission: Leveraging Entity Representations, Dense-Sparse Hybrids, and Fusion-in-Decoder for Cross-Lingual Question Answering
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.10/
Tu, Zhucheng and Padmanabhan, Sarguna Janani
Proceedings of the Workshop on Multilingual Information Access (MIA)
100--107
We describe our two-stage system for the Multilingual Information Access (MIA) 2022 Shared Task on Cross-Lingual Open-Retrieval Question Answering. The first stage consists of multilingual passage retrieval with a hybrid dense and sparse retrieval strategy. The second stage consists of a reader which outputs the answer from the top passages returned by the first stage. We show the efficacy of using entity representations, sparse retrieval signals to help dense retrieval, and Fusion-in-Decoder. On the development set, we obtain 43.46 F1 on XOR-TyDi QA and 21.99 F1 on MKQA, for an average F1 score of 32.73. On the test set, we obtain 40.93 F1 on XOR-TyDi QA and 22.29 F1 on MKQA, for an average F1 score of 31.61. We improve over the official baseline by over 4 F1 points on both the development and test sets.
null
null
10.18653/v1/2022.mia-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,306
inproceedings
asai-etal-2022-mia
{MIA} 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages
Asai, Akari and Choi, Eunsol and Clark, Jonathan H. and Hu, Junjie and Lee, Chia-Hsuan and Kasai, Jungo and Longpre, Shayne and Yamada, Ikuya and Zhang, Rui
jul
2022
Seattle, USA
Association for Computational Linguistics
https://aclanthology.org/2022.mia-1.11/
Asai, Akari and Longpre, Shayne and Kasai, Jungo and Lee, Chia-Hsuan and Zhang, Rui and Hu, Junjie and Yamada, Ikuya and Clark, Jonathan H. and Choi, Eunsol
Proceedings of the Workshop on Multilingual Information Access (MIA)
108--120
We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual open-retrieval QA datasets in 14 typologically diverse languages, and newly annotated open-retrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best constrained system uses entity-aware contextualized representations for document retrieval, thereby achieving an average F1 score of 31.6, which is 4.1 F1 absolute higher than the challenging baseline. The best system obtains particularly significant improvements in Tamil (20.8 F1), whereas most of the other systems yield nearly zero scores. The best unconstrained system achieves 32.2 F1, outperforming our baseline by 4.5 points.
null
null
10.18653/v1/2022.mia-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,307
inproceedings
matsumoto-etal-2022-tracing
Tracing and Manipulating intermediate values in Neural Math Problem Solvers
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.1/
Matsumoto, Yuta and Heinzerling, Benjamin and Yoshikawa, Masashi and Inui, Kentaro
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
1--6
How language models process complex input that requires multiple steps of inference is not well understood. Previous research has shown that information about intermediate values of these inputs can be extracted from the activations of the models, but it is unclear where that information is encoded and whether that information is indeed used during inference. We introduce a method for analyzing how a Transformer model processes these inputs by focusing on simple arithmetic problems and their intermediate values. To trace where information about intermediate values is encoded, we measure the correlation between intermediate values and the activations of the model using principal component analysis (PCA). Then, we perform a causal intervention by manipulating model weights. This intervention shows that the weights identified via tracing are not merely correlated with intermediate values, but causally related to model predictions. Our findings show that information about certain intermediate values is localized in the model, and this is useful for enhancing the interpretability of the models.
null
null
10.18653/v1/2022.mathnlp-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,309
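A minimal sketch of the tracing step described above, correlating principal components of hidden activations with the intermediate values of arithmetic problems; the synthetic activations and the layer/position choices are placeholders, not the paper's setup.

# Hedged sketch: reduce activations with PCA, then measure Pearson correlation
# between each leading component and the intermediate value per example.
import numpy as np
from sklearn.decomposition import PCA

def trace_correlation(activations, intermediate_values, n_components=5):
    # activations: (n_examples, hidden_dim); intermediate_values: (n_examples,)
    comps = PCA(n_components=n_components).fit_transform(activations)
    return [float(np.corrcoef(comps[:, j], intermediate_values)[0, 1])
            for j in range(n_components)]

# Toy data with a planted signal in one activation dimension.
rng = np.random.default_rng(1)
acts = rng.normal(size=(200, 64))
acts[:, 0] *= 5.0                                    # high-variance direction
vals = acts[:, 0] + rng.normal(scale=0.1, size=200)  # "intermediate values"
print(trace_correlation(acts, vals))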
inproceedings
tan-etal-2022-investigating
Investigating Math Word Problems using Pretrained Multilingual Language Models
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.2/
Tan, Minghuan and Wang, Lei and Jiang, Lingxiao and Jiang, Jing
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
7--16
In this paper, we revisit math word problems (MWPs) from the cross-lingual and multilingual perspective. We construct our MWP solvers over pretrained multilingual language models using the sequence-to-sequence model with a copy mechanism. We compare how the MWP solvers perform in cross-lingual and multilingual scenarios. To facilitate the comparison of cross-lingual performance, we first adapt the large-scale English dataset MathQA as a counterpart of the Chinese dataset Math23K. Then we extend several English datasets to bilingual datasets through machine translation plus human annotation. Our experiments show that the MWP solvers may not transfer to a different language even if the target expressions share the same numerical constants and operator set. However, they generalize better if the problem types exist in both the source and the target language.
null
null
10.18653/v1/2022.mathnlp-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,310
inproceedings
bueno-etal-2022-induced
Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.3/
Bueno, Mirelle Candida and Gemmell, Carlos and Dalton, Jeff and Lotufo, Roberto and Nogueira, Rodrigo
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
17--24
The ability to extrapolate, i.e., to make predictions on sequences that are longer than those presented as training examples, is a challenging problem for current deep learning models. Recent work shows that this limitation persists in state-of-the-art Transformer-based models. Most solutions to this problem use specific architectures or training methods that do not generalize to other tasks. We demonstrate that large language models can succeed in extrapolation without modifying their architecture or training procedure. Our experimental results show that generating step-by-step rationales and introducing marker tokens are both required for effective extrapolation. First, we induce a language model to produce step-by-step rationales before outputting the answer to effectively communicate the task to the model. However, as sequences become longer, we find that current models struggle to keep track of token positions. To address this issue, we interleave output tokens with markup tokens that act as explicit positional and counting symbols. Our findings show how these two complementary approaches enable remarkable sequence extrapolation and highlight a limitation of current architectures to effectively generalize without explicit surface form guidance. Code available at \url{https://anonymous.4open.science/r/induced-rationales-markup-tokens-0650/README.md}
null
null
10.18653/v1/2022.mathnlp-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,311
inproceedings
cunningham-etal-2022-towards
Towards Autoformalization of Mathematics and Code Correctness: Experiments with Elementary Proofs
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.4/
Cunningham, Garett and Bunescu, Razvan and Juedes, David
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
25--32
The ever-growing complexity of mathematical proofs makes their manual verification by mathematicians very cognitively demanding. Autoformalization seeks to address this by translating proofs written in natural language into a formal representation that is computer-verifiable via interactive theorem provers. In this paper, we introduce a semantic parsing approach, based on the Universal Transformer architecture, that translates elementary mathematical proofs into an equivalent formalization in the language of the Coq interactive theorem prover. The same architecture is also trained to translate simple imperative code decorated with Hoare triples into formally verifiable proofs of correctness in Coq. Experiments on a limited domain of artificial and human-written proofs show that the models generalize well to intermediate lengths not seen during training and variations in natural language.
null
null
10.18653/v1/2022.mathnlp-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,312
inproceedings
spokoyny-etal-2022-numerical
Numerical Correlation in Text
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.5/
Spokoyny, Daniel and Wu, Chien-Sheng and Xiong, Caiming
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
33--39
Evaluation of quantitative reasoning of large language models is an important step towards understanding their current capabilities and limitations. We propose a new task, Numerical Correlation in Text, which requires models to identify the correlation between two numbers in a sentence. To this end, we introduce a new dataset, which contains over 2,000 Wikipedia sentences with two numbers and their correlation labels. Using this dataset, we are able to show that recent numerically aware pretraining methods for language models do not help generalization on this task, posing a challenge for future work in this area.
null
null
10.18653/v1/2022.mathnlp-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,313
inproceedings
reusch-lehner-2022-extracting
Extracting Operator Trees from Model Embeddings
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.6/
Reusch, Anja and Lehner, Wolfgang
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
40--50
Transformer-based language models are able to capture several linguistic properties such as hierarchical structures like dependency or constituency trees. Whether similar structures for mathematics are extractable from language models has not yet been explored. This work aims to probe current state-of-the-art models for the extractability of Operator Trees from their contextualized embeddings using the structure probe designed by Hewitt and Manning. We release the code and our data set for future analysis.
null
null
10.18653/v1/2022.mathnlp-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,314
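To make the probing setup in the Reusch and Lehner record above concrete, here is a minimal Python/PyTorch sketch of a Hewitt-and-Manning-style structure probe: a learned projection under which squared L2 distances between token embeddings are trained to match tree distances (here, operator-tree distances). The embedding dimension, probe rank, loss normalization, and toy tensors are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the Hewitt & Manning structure probe, applied to
# operator trees instead of syntax trees as in the paper above.
import torch

class StructureProbe(torch.nn.Module):
    def __init__(self, model_dim: int, probe_rank: int = 128):
        super().__init__()
        # B projects contextual embeddings into a space where squared L2
        # distance is trained to match tree distance between tokens.
        self.proj = torch.nn.Parameter(torch.randn(model_dim, probe_rank) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (seq_len, model_dim) for one formula
        transformed = embeddings @ self.proj                 # (seq_len, rank)
        diffs = transformed.unsqueeze(1) - transformed.unsqueeze(0)
        return (diffs ** 2).sum(-1)                          # predicted squared distances

def probe_loss(pred_dist: torch.Tensor, gold_dist: torch.Tensor) -> torch.Tensor:
    # L1 loss between predicted squared distances and gold tree distances,
    # normalized by the squared sequence length (an assumed choice).
    n = gold_dist.size(0)
    return torch.abs(pred_dist - gold_dist).sum() / (n * n)

# Usage sketch: embeddings would come from a transformer layer, gold
# distances from the operator tree of the formula; both are placeholders.
emb = torch.randn(12, 768)
gold = torch.randint(0, 6, (12, 12)).float()
probe = StructureProbe(768)
loss = probe_loss(probe(emb), gold)
loss.backward()
```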
inproceedings
okur-etal-2022-end
End-to-End Evaluation of a Spoken Dialogue System for Learning Basic Mathematics
Ferreira, Deborah and Valentino, Marco and Freitas, Andre and Welleck, Sean and Schubotz, Moritz
dec
2022
Abu Dhabi, United Arab Emirates (Hybrid)
Association for Computational Linguistics
https://aclanthology.org/2022.mathnlp-1.7/
Okur, Eda and Sahay, Saurav and Fuentes Alba, Roddy and Nachman, Lama
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
51--64
The advances in language-based Artificial Intelligence (AI) technologies applied to build educational applications can present AI for social-good opportunities with a broader positive impact. Across many disciplines, enhancing the quality of mathematics education is crucial in building critical thinking and problem-solving skills at younger ages. Conversational AI systems have started maturing to a point where they could play a significant role in helping students learn fundamental math concepts. This work presents a task-oriented Spoken Dialogue System (SDS) built to support play-based learning of basic math concepts for early childhood education. The system has been evaluated via real-world deployments at school while the students are practicing early math concepts with multimodal interactions. We discuss our efforts to improve the SDS pipeline built for math learning, for which we explore utilizing MathBERT representations for potential enhancement to the Natural Language Understanding (NLU) module. We perform an end-to-end evaluation using real-world deployment outputs from the Automatic Speech Recognition (ASR), Intent Recognition, and Dialogue Manager (DM) components to understand how error propagation affects the overall performance in real-world scenarios.
null
null
10.18653/v1/2022.mathnlp-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,315
inproceedings
markl-2022-mind
Mind the data gap(s): Investigating power in speech and language datasets
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.1/
Markl, Nina
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
1--12
Algorithmic oppression is an urgent and persistent problem in speech and language technologies. Considering power relations embedded in datasets before compiling or using them to train or test speech and language technologies is essential to designing less harmful, more just technologies. This paper presents a reflective exercise to recognise and challenge gaps and the power relations they reveal in speech and language datasets by applying principles of Data Feminism and Design Justice, and building on work on dataset documentation and sociolinguistics.
null
null
10.18653/v1/2022.ltedi-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,317
inproceedings
pillar-etal-2022-regex
Regex in a Time of Deep Learning: The Role of an Old Technology in Age Discrimination Detection in Job Advertisements
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.2/
Pillar, Anna and Poelmans, Kyrill and Larson, Martha
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
13--18
Deep learning holds great promise for detecting discriminatory language in the public sphere. However, for the detection of illegal age discrimination in job advertisements, regex approaches are still strong performers. In this paper, we investigate job advertisements in the Netherlands. We present a qualitative analysis of the benefits of the {\textquoteleft}old' approach based on regexes and investigate how neural embeddings could address its limitations.
null
null
10.18653/v1/2022.ltedi-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,318
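As an illustration of the kind of regex baseline the Pillar et al. record above analyses, the sketch below flags age-related phrases in Dutch job-ad text. The patterns are invented for illustration and are not the rule set studied in the paper.

```python
# A minimal sketch of a regex-based age-discrimination detector for Dutch
# job ads; the patterns below are illustrative assumptions.
import re

AGE_PATTERNS = [
    r"\bjonge?\b",                                 # "jong(e)" = young
    r"\b(?:max(?:imaal)?\.?\s*)?\d{2}\s*jaar\b",   # explicit ages, e.g. "25 jaar"
    r"\bpas\s+afgestudeerd\b",                     # "recently graduated", an age proxy
]
AGE_RE = re.compile("|".join(AGE_PATTERNS), re.IGNORECASE)

def flag_age_phrases(ad_text: str) -> list[str]:
    """Return all age-related matches found in a job ad."""
    return [m.group(0) for m in AGE_RE.finditer(ad_text)]

print(flag_age_phrases("Wij zoeken een jonge collega, max. 25 jaar."))
# ['jonge', 'max. 25 jaar']
```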
inproceedings
adams-etal-2022-concrete
Doing not Being: Concrete Language as a Bridge from Language Technology to Ethnically Inclusive Job Ads
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.3/
Adams, Jetske and Poelmans, Kyrill and Hendrickx, Iris and Larson, Martha
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
19--25
This paper makes the case for studying concreteness in language as a bridge that will allow language technology to support the understanding and improvement of ethnic inclusivity in job advertisements. We propose an annotation scheme that guides the assignment of sentences in job ads to classes that reflect concrete actions, i.e., what the employer needs people to $do$, and abstract dispositions, i.e., who the employer expects people to $be$. Using an annotated dataset of Dutch-language job ads, we demonstrate that machine learning technology is effectively able to distinguish these classes.
null
null
10.18653/v1/2022.ltedi-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,319
inproceedings
nozza-etal-2022-measuring
Measuring Harmful Sentence Completion in Language Models for {LGBTQIA}+ Individuals
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.4/
Nozza, Debora and Bianchi, Federico and Lauscher, Anne and Hovy, Dirk
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
26--34
Current language technology is ubiquitous and directly influences individuals' lives worldwide. Given the recent trend in AI on training and constantly releasing new and powerful large language models (LLMs), there is a need to assess their biases and potential concrete consequences. While some studies have highlighted the shortcomings of these models, there is only little work on the negative impact of LLMs on LGBTQIA+ individuals. In this paper, we investigated a state-of-the-art template-based approach for measuring the harmfulness of English LLMs' sentence completions when the subjects belong to the LGBTQIA+ community. Our findings show that, on average, the most likely LLM-generated completion is an identity attack 13{\%} of the time. Our results raise serious concerns about the applicability of these models in production environments.
null
null
10.18653/v1/2022.ltedi-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,320
inproceedings
amin-etal-2022-using
Using {BERT} Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.5/
Amin, Akhter Al and Hassan, Saad and Alm, Cecilia and Huenerfauth, Matt
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
35--40
Deaf and hard of hearing (DHH) individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by the preferences of DHH users or by how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted a correlation analysis between two types of word embeddings and human-annotated word-importance scores in an existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.
null
null
10.18653/v1/2022.ltedi-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,321
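A minimal sketch of the correlation analysis described in the Amin et al. record above: extract contextualized BERT embeddings, pool sub-tokens back to words, and correlate a simple embedding statistic with human word-importance annotations. The checkpoint, the norm-based importance proxy, and the toy scores are assumptions, not the paper's exact setup.

```python
# A minimal sketch of correlating BERT contextualized embeddings with
# human-annotated word-importance scores.
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

words = "the meeting was moved to friday afternoon".split()
human_scores = [0.1, 0.9, 0.2, 0.7, 0.1, 0.8, 0.6]  # toy annotations, one per word

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (num_subtokens, 768)

# Pool sub-token embeddings back to word level and use the embedding norm
# as a crude importance proxy (an assumption for illustration).
word_norms = []
for i in range(len(words)):
    idx = [j for j, w in enumerate(enc.word_ids()) if w == i]
    word_norms.append(hidden[idx].mean(0).norm().item())

print(spearmanr(word_norms, human_scores))
```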
inproceedings
park-rudzicz-2022-detoxifying
Detoxifying Language Models with a Toxic Corpus
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.6/
Park, Yoona and Rudzicz, Frank
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
41--46
Existing studies have investigated the tendency of autoregressive language models to generate contexts that exhibit undesired biases and toxicity. Various debiasing approaches have been proposed, which are primarily categorized into data-based and decoding-based approaches. In our study, we investigate an ensemble of the two debiasing paradigms, proposing to use a toxic corpus as an additional resource to reduce toxicity. Our results show that a toxic corpus can indeed help to reduce the toxicity of the language generation process substantially, complementing the existing debiasing methods.
null
null
10.18653/v1/2022.ltedi-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,322
inproceedings
bartl-leavy-2022-inferring
Inferring Gender: A Scalable Methodology for Gender Detection with Online Lexical Databases
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.7/
Bartl, Marion and Leavy, Susan
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
47--58
This paper presents a new method for automatic detection of gendered terms in large-scale language datasets. Currently, the evaluation of gender bias in natural language processing relies on the use of manually compiled lexicons of gendered expressions, such as pronouns and words that imply gender. However, manual compilation of lists with lexical gender can lead to static information if lists are not periodically updated and often involve value judgements by individual annotators and researchers. Moreover, terms not included in the lexicons fall out of the range of analysis. To address these issues, we devised a scalable dictionary-based method to automatically detect lexical gender that can provide a dynamic, up-to-date analysis with high coverage. Our approach reaches over 80{\%} accuracy in determining the lexical gender of words retrieved randomly from a Wikipedia sample and when testing on a list of gendered words used in previous research.
null
null
10.18653/v1/2022.ltedi-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,323
inproceedings
gira-etal-2022-debiasing
Debiasing Pre-Trained Language Models via Efficient Fine-Tuning
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.8/
Gira, Michael and Zhang, Ruisu and Lee, Kangwook
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
59--69
An explosion in the popularity of transformer-based language models (such as GPT-3, BERT, RoBERTa, and ALBERT) has opened the doors to new machine learning applications involving language modeling, text generation, and more. However, recent scrutiny reveals that these language models contain inherent biases towards certain demographics reflected in their training data. While research has tried mitigating this problem, existing approaches either fail to remove the bias completely, degrade performance ({\textquotedblleft}catastrophic forgetting{\textquotedblright}), or are costly to execute. This work examines how to reduce gender bias in a GPT-2 language model by fine-tuning less than 1{\%} of its parameters. Through quantitative benchmarks, we show that this is a viable way to reduce prejudice in pre-trained language models while remaining cost-effective at scale.
null
null
10.18653/v1/2022.ltedi-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,324
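One plausible way to fine-tune well under 1{\%} of a GPT-2 model's parameters, as the Gira et al. abstract above describes, is to update only the bias terms (a BitFit-style choice). Whether this matches the paper's exact parameter subset is an assumption; the sketch below only shows the mechanics of freezing the rest of the model.

```python
# A minimal sketch of restricting fine-tuning to bias parameters
# (BitFit-style); the parameter subset is an assumption, not necessarily
# the paper's method.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

trainable = 0
for name, param in model.named_parameters():
    param.requires_grad = "bias" in name  # freeze everything except biases
    if param.requires_grad:
        trainable += param.numel()

total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.3f}%)")

# The model can now be fine-tuned as usual on a debiasing corpus (e.g. with
# transformers.Trainer); gradients only flow into the bias terms.
```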
inproceedings
santiago-etal-2022-disambiguation
Disambiguation of morpho-syntactic features of {A}frican {A}merican {E}nglish {--} the case of habitual be
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.9/
Santiago, Harrison and Martin, Joshua and Moeller, Sarah and Tang, Kevin
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
70--75
Recent research has highlighted that natural language processing (NLP) systems exhibit a bias against African American speakers. These errors are often caused by poor representation of linguistic features unique to African American English (AAE), which is due to the relatively low probability of occurrence for many such features. We present a workflow to overcome this issue in the case of habitual {\textquotedblleft}be{\textquotedblright}. Habitual {\textquotedblleft}be{\textquotedblright} is isomorphic, and therefore ambiguous, with other forms of uninflected {\textquotedblleft}be{\textquotedblright} found in both AAE and General American English (GAE). This creates a clear challenge for bias in NLP technologies. To overcome this scarcity, we employ a combination of rule-based filters and data augmentation that generates a corpus balanced between habitual and non-habitual instances. This balanced corpus trains unbiased machine learning classifiers, as demonstrated on a corpus of AAE transcribed texts, achieving a .65 F$_1$ score at classifying habitual {\textquotedblleft}be{\textquotedblright}.
null
null
10.18653/v1/2022.ltedi-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,325
inproceedings
mansfield-etal-2022-behind
Behind the Mask: Demographic bias in name detection for {PII} masking
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.10/
Mansfield, Courtney and Paullada, Amandalynne and Howell, Kristen
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
76--89
Many datasets contain personally identifiable information, or PII, which poses privacy risks to individuals. PII masking is commonly used to redact personal information such as names, addresses, and phone numbers from text data. Most modern PII masking pipelines involve machine learning algorithms. However, these systems may vary in performance, such that individuals from particular demographic groups bear a higher risk for having their personal information exposed. In this paper, we evaluate the performance of three off-the-shelf PII masking systems on name detection and redaction. We generate data using names and templates from the customer service domain. We find that an open-source RoBERTa-based system shows fewer disparities than the commercial models we test. However, all systems demonstrate significant differences in error rate based on demographics. In particular, the highest error rates occurred for names associated with Black and Asian/Pacific Islander individuals.
null
null
10.18653/v1/2022.ltedi-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,326
inproceedings
camara-etal-2022-mapping
Mapping the Multilingual Margins: Intersectional Biases of Sentiment Analysis Systems in {E}nglish, {S}panish, and {A}rabic
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.11/
C{\^a}mara, Ant{\'o}nio and Taneja, Nina and Azad, Tamjeed and Allaway, Emily and Zemel, Richard
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
90--106
As natural language processing systems become more widespread, it is necessary to address fairness issues in their implementation and deployment to ensure that their negative impacts on society are understood and minimized. However, there is limited work that studies fairness using a multilingual and intersectional framework or on downstream tasks. In this paper, we introduce four multilingual Equity Evaluation Corpora, supplementary test sets designed to measure social biases, and a novel statistical framework for studying unisectional and intersectional social biases in natural language processing. We use these tools to measure gender, racial, ethnic, and intersectional social biases across five models trained on emotion regression tasks in English, Spanish, and Arabic. We find that many systems demonstrate statistically significant unisectional and intersectional social biases. We make our code and datasets available for download.
null
null
10.18653/v1/2022.ltedi-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,327
inproceedings
swanson-etal-2022-monte
{M}onte {C}arlo Tree Search for Interpreting Stress in Natural Language
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.12/
Swanson, Kyle and Hsu, Joy and Suzgun, Mirac
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
107--119
Natural language processing can facilitate the analysis of a person's mental state from text they have written. Previous studies have developed models that can predict whether a person is experiencing a mental health condition from social media posts with high accuracy. Yet, these models cannot explain why the person is experiencing a particular mental state. In this work, we present a new method for explaining a person's mental state from text using Monte Carlo tree search (MCTS). Our MCTS algorithm employs trained classification models to guide the search for key phrases that explain the writer's mental state in a concise, interpretable manner. Furthermore, our algorithm can find both explanations that depend on the particular context of the text (e.g., a recent breakup) and those that are context-independent. Using a dataset of Reddit posts that exhibit stress, we demonstrate the ability of our MCTS algorithm to identify interpretable explanations for a person's feeling of stress in both a context-dependent and context-independent manner.
null
null
10.18653/v1/2022.ltedi-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,328
inproceedings
roy-etal-2022-iiitsurat
{IIITS}urat@{LT}-{EDI}-{ACL}2022: Hope Speech Detection using Machine Learning
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.13/
Roy, Pradeep and Bhawal, Snehaan and Kumar, Abhinav and Chakravarthi, Bharathi Raja
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
120--126
This paper addresses the issue of Hope Speech detection using machine learning techniques. Designing a robust model that helps in predicting the target class with higher accuracy is a challenging task in machine learning, especially when the distribution of the class labels is highly imbalanced. This study uses and compares the experimental outcomes of different oversampling techniques. Many models are implemented to classify the comments into Hope and Non-Hope speech, and it was found that machine learning algorithms perform better than deep learning models. The English language dataset used in this research was developed by collecting YouTube comments and is part of the task {\textquotedblleft}ACL-2022: Hope Speech Detection for Equality, Diversity, and Inclusion{\textquotedblright}. The proposed model achieved a weighted F1-score of 0.55 on the test dataset and secured the first rank among the participating teams.
null
null
10.18653/v1/2022.ltedi-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,329
inproceedings
hande-etal-2022-best
The Best of both Worlds: Dual Channel Language modeling for Hope Speech Detection in low-resourced {K}annada
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.14/
Hande, Adeep and U Hegde, Siddhanth and S, Sangeetha and Priyadharshini, Ruba and Chakravarthi, Bharathi Raja
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
127--135
In recent years, various methods have been developed to control the spread of negativity by removing profane, aggressive, and offensive comments from social media platforms. There is, however, a scarcity of research focusing on embracing positivity and reinforcing supportive and reassuring content in online forums. We therefore concentrate our research on developing systems to detect hope speech in code-mixed Kannada. To this end, we present DC-LM, a dual-channel language model that detects hope speech by using the English translations of the code-mixed dataset for additional training. The approach is jointly modelled on both English and code-mixed Kannada to enable effective cross-lingual transfer between the languages. With a weighted F1-score of 0.756, the method outperforms other models. We aim to initiate research in Kannada while encouraging researchers to take a pragmatic approach to inspire positive and supportive online content.
null
null
10.18653/v1/2022.ltedi-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,330
inproceedings
wang-etal-2022-nycu
{NYCU}{\_}{TWD}@{LT}-{EDI}-{ACL}2022: Ensemble Models with {VADER} and Contrastive Learning for Detecting Signs of Depression from Social Media
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.15/
Wang, Wei-Yao and Tang, Yu-Chien and Du, Wei-Wei and Peng, Wen-Chih
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
136--139
This paper presents a state-of-the-art solution to the LT-EDI-ACL 2022 Task 4: \textit{Detecting Signs of Depression from Social Media Text}. The goal of this task is to detect the severity levels of depression of people from social media posts, where people often share their feelings on a daily basis. To detect the signs of depression, we propose a framework with pre-trained language models using rich information instead of training from scratch, gradient boosting and deep learning models for modeling various aspects, and supervised contrastive learning for the generalization ability. Moreover, ensemble techniques are also employed in consideration of the different advantages of each method. Experiments show that our framework achieves a 2nd prize ranking with a macro F1-score of 0.552, showing the effectiveness and robustness of our approach.
null
null
10.18653/v1/2022.ltedi-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,331
inproceedings
garcia-diaz-etal-2022-umuteam-lt
{UMUT}eam@{LT}-{EDI}-{ACL}2022: Detecting homophobic and transphobic comments in {T}amil
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.16/
Garc{\'i}a-D{\'i}az, Jos{\'e} and Caparros-Laiz, Camilo and Valencia-Garc{\'i}a, Rafael
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
140--144
These working notes describe the participation of the UMUTeam in the LT-EDI shared task concerning the identification of homophobic and transphobic comments on YouTube. These comments are written in English, which has high availability of machine-learning resources; Tamil, which has fewer resources; and a transliteration from Tamil to Roman script combined with English sentences. To carry out this shared task, we train a neural network that combines several feature sets applying a knowledge integration strategy. These features are linguistic features extracted by a tool developed by our research group, together with contextual and non-contextual sentence embeddings. We ranked 7th in the English subtask (macro f1-score of 45{\%}), 3rd in the Tamil subtask (macro f1-score of 82{\%}), and 2nd in the Tamil-English subtask (macro f1-score of 58{\%}).
null
null
10.18653/v1/2022.ltedi-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,332
inproceedings
garcia-diaz-valencia-garcia-2022-umuteam
{UMUT}eam@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from text
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.17/
Garc{\'i}a-D{\'i}az, Jos{\'e} and Valencia-Garc{\'i}a, Rafael
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
145--148
Depression is a mental condition related to sadness and a lack of interest in common daily tasks. In these working notes, we describe the proposal of the UMUTeam in the LT-EDI shared task (ACL 2022) concerning the identification of signs of depression in social network posts. This task is related to other relevant Natural Language Processing tasks such as Emotion Analysis. In this shared task, the organisers challenged the participants to distinguish between moderate and severe signs of depression (or no signs of depression at all) in a set of social posts written in English. Our proposal is based on the combination of linguistic features and several sentence embeddings using a knowledge integration strategy. Our proposal achieved the 6th position, with a macro f1-score of 53.82 on the official leaderboard.
null
null
10.18653/v1/2022.ltedi-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,333
inproceedings
bhandari-goyal-2022-bitsa
bitsa{\_}nlp@{LT}-{EDI}-{ACL}2022: Leveraging Pretrained Language Models for Detecting Homophobia and Transphobia in Social Media Comments
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.18/
Bhandari, Vitthal and Goyal, Poonam
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
149--154
Online social networks are ubiquitous and user-friendly. Nevertheless, it is vital to detect and moderate offensive content to maintain decency and empathy. However, mining social media texts is a complex task since users don't adhere to any fixed patterns. Comments can be written in any combination of languages and many of them may be low-resource. In this paper, we present our system for the LT-EDI shared task on detecting homophobia and transphobia in social media comments. We experiment with a number of monolingual and multilingual transformer based models such as mBERT along with a data augmentation technique for tackling class imbalance. Such pretrained large models have recently shown tremendous success on a variety of benchmark tasks in natural language processing. We observe their performance on a carefully annotated, real life dataset of YouTube comments in English as well as Tamil. Our submission achieved ranks 9, 6 and 3 with a macro-averaged F1-score of 0.42, 0.64 and 0.58 in the English, Tamil and Tamil-English subtasks respectively. The code for the system has been open sourced.
null
null
10.18653/v1/2022.ltedi-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,334
inproceedings
maimaitituoheti-2022-ablimet
{ABLIMET} @{LT}-{EDI}-{ACL}2022: A Roberta based Approach for Homophobia/Transphobia Detection in Social Media
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.19/
Maimaitituoheti, Abulimiti
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
155--160
This paper describes our system that participated in LT-EDI-ACL2022: Homophobia/Transphobia Detection in Social Media. Sexual minorities face a great deal of unfair treatment and discrimination in our world, which creates enormous stress and many psychological problems. Homophobia/transphobia is the form of hate speech on the internet that is directed against sexual minorities; identifying and processing it with natural language processing technology can make its handling more efficient and quickly screen such content out of the Internet. The organizers of LT-EDI-ACL2022 constructed a homophobia/transphobia detection dataset based on YouTube comments for English and Tamil. We use a RoBERTa-based approach to conduct homophobia/transphobia detection experiments on the competition dataset and obtain good results.
null
null
10.18653/v1/2022.ltedi-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,335
inproceedings
gowda-etal-2022-mucic
{MUCIC}@{LT}-{EDI}-{ACL}2022: Hope Speech Detection using Data Re-Sampling and 1{D} Conv-{LSTM}
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.20/
Gowda, Anusha and Balouchzahi, Fazlourrahman and Shashirekha, Hosahalli and Sidorov, Grigori
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
161--166
Spreading positive vibes or hope content on social media may help many people to get motivated in their life. To address Hope Speech detection in YouTube comments, this paper describes the models submitted by our team, MUCIC, to the Hope Speech Detection for Equality, Diversity, and Inclusion (HopeEDI) shared task at the Association for Computational Linguistics (ACL) 2022. This shared task consists of texts in five languages, namely: English, Spanish (in Latin script), and Tamil, Malayalam, and Kannada (in code-mixed native and Roman scripts), with the aim of classifying each YouTube comment into the {\textquotedblleft}Hope{\textquotedblright}, {\textquotedblleft}Not-Hope{\textquotedblright} or {\textquotedblleft}Not-Intended{\textquotedblright} category. The proposed methodology uses a re-sampling technique to deal with imbalanced data in the corpus and obtained 1st rank for the English language with a macro-averaged F1-score of 0.550 and a weighted-averaged F1-score of 0.860. The code to reproduce this work is available on GitHub.
null
null
10.18653/v1/2022.ltedi-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,336
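A minimal sketch of the recipe named in the MUCIC title above: random oversampling of minority classes followed by a 1D Conv + LSTM classifier over tokenized comments. The vocabulary size, sequence length, layer sizes, and toy data are assumptions rather than the authors' settings.

```python
# A minimal sketch of data re-sampling followed by a 1D Conv-LSTM
# classifier; all sizes and the toy data are illustrative assumptions.
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN, NUM_CLASSES = 20000, 100, 3  # Hope / Not-Hope / Not-Intended

def random_oversample(X, y, seed=0):
    """Re-sample every class up to the majority-class count."""
    rng = np.random.default_rng(seed)
    target = np.bincount(y).max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=target, replace=True)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data standing in for tokenized, padded YouTube comments.
X = np.random.randint(0, VOCAB, size=(300, MAXLEN))
y = np.random.randint(0, NUM_CLASSES, size=300)
X_bal, y_bal = random_oversample(X, y)
model.fit(X_bal, y_bal, epochs=1, batch_size=32, verbose=0)
```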
inproceedings
farruque-etal-2022-deepblues
{D}eep{B}lues@{LT}-{EDI}-{ACL}2022: Depression level detection modelling through domain specific {BERT} and short text Depression classifiers
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.21/
Farruque, Nawshad and Zaiane, Osmar and Goebel, Randy and Sivapalan, Sudhakar
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
167--171
We discuss a variety of approaches to build a robust Depression level detection model from longer social media posts (i.e., Reddit Depression forum posts) using a mental health text pre-trained BERT model. Further, we report our experimental results based on a strategy to select excerpts from long text and then fine-tune the BERT model to combat the issue of memory constraints while processing such texts. We show that, with a domain-specific BERT, we can achieve reasonable accuracy with a fixed text size (in this case 200 tokens) for this task. In addition, we can use short text classifiers to extract relevant text from the long text and achieve slightly better accuracy, albeit at the cost of the processing time needed to extract such excerpts.
null
null
10.18653/v1/2022.ltedi-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,337
inproceedings
vijayakumar-etal-2022-ssn
{SSN}{\_}{ARMM}@ {LT}-{EDI} -{ACL}2022: Hope Speech Detection for Equality, Diversity, and Inclusion Using {ALBERT} model
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.22/
Vijayakumar, Praveenkumar and S, Prathyush and P, Aravind and S, Angel and Sivanaiah, Rajalakshmi and Rajendram, Sakaya Milton and T T, Mirnalinee
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
172--176
In recent years social media has become one of the major forums for expressing human views and emotions. With the help of smartphones and high-speed internet, anyone can express their views on social media. However, this can also lead to the spread of hatred and violence in society. Therefore it is necessary to build a method to find and support helpful social media content. In this paper, we studied a Natural Language Processing approach for detecting hope speech in a given sentence. The task was to classify the sentences into {\textquoteleft}Hope speech' and {\textquoteleft}Non-hope speech'. The dataset was provided by the LT-EDI organizers, with text from YouTube comments. Based on the task description, we developed a system using the pre-trained language model BERT to complete this task. Our model achieved 1st rank in the Kannada language with a weighted average F1 score of 0.750, 2nd rank in the Malayalam language with a weighted average F1 score of 0.740, 3rd rank in the Tamil language with a weighted average F1 score of 0.390 and 6th rank in the English language with a weighted average F1 score of 0.880.
null
null
10.18653/v1/2022.ltedi-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,338
inproceedings
s-b-2022-suh
{SUH}{\_}{ASR}@{LT}-{EDI}-{ACL}2022: Transformer based Approach for Speech Recognition for Vulnerable Individuals in {T}amil
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.23/
S, Suhasini and B, Bharathi
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
177--182
An Automatic Speech Recognition system is developed to address Tamil conversational speech data of elderly and transgender people. The speech corpus used in this system is collected from people who carry out their communication in Tamil at primary places like banks, hospitals, and vegetable markets. Our ASR system is designed with a pre-trained model which is used to recognize the speech data. WER (Word Error Rate) calculation is used to analyse the performance of the ASR system. This evaluation could help to make a comparison of utterances between elderly people and others. Similarly, a comparison between transgender and other people is also done. Our proposed ASR system achieves a word error rate of 39.65{\%}.
null
null
10.18653/v1/2022.ltedi-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,339
inproceedings
zhu-2022-lps
{LPS}@{LT}-{EDI}-{ACL}2022:An Ensemble Approach about Hope Speech Detection
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.24/
Zhu, Yue
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
183--189
This paper describes our participation in the shared task on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI-ACL-2022. The goal of this task is to identify whether a given comment contains hope speech or not; hope is considered significant for the well-being, recuperation and restoration of human life. Our work aims to change the prevalent way of thinking by moving away from a preoccupation with discrimination, loneliness or the worst things in life to building confidence, support and good qualities based on comments by individuals. In response to the need to detect hope speech for equality, diversity and inclusion in a multilingual environment, we built an ensemble model and achieved good performance on the multiple datasets provided by the organisers; the specific results can be found in the experimental results section.
null
null
10.18653/v1/2022.ltedi-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,340
inproceedings
jha-etal-2022-curaj
{CURAJ}{\_}{IIITDWD}@{LT}-{EDI}-{ACL} 2022: Hope Speech Detection in {E}nglish {Y}ou{T}ube Comments using Deep Learning Techniques
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.25/
Jha, Vanshita and Mishra, Ankit and Saumya, Sunil
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
190--195
Hope speech consists of positive terms that help to promote or criticise a point of view without hurting the user's or community's feelings. Non-hope speech, on the other hand, includes expressions that are harsh, ridiculing, or demotivating. The goal of this article is to find the hope speech comments in a YouTube dataset. The datasets were created as part of the {\textquotedblleft}LT-EDI-ACL 2022: Hope Speech Detection for Equality, Diversity, and Inclusion{\textquotedblright} shared task. The shared task dataset was proposed in Malayalam, Tamil, English, Spanish, and Kannada languages. In this paper, we worked on English-language YouTube comments. We employed several deep learning based models such as DNN (dense or fully connected neural network), CNN (Convolutional Neural Network), Bi-LSTM (Bidirectional Long Short Term Memory Network), and GRU (Gated Recurrent Unit) to identify the hopeful comments. We also used Stacked LSTM-CNN and Stacked LSTM-LSTM networks to train the model. The best macro-average F1-score of 0.67 on the development dataset was obtained using the DNN model, and a macro-average F1-score of 0.67 was achieved on the test data as well.
null
null
10.18653/v1/2022.ltedi-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,341
inproceedings
esackimuthu-etal-2022-ssn
{SSN}{\_}{MLRG}3 @{LT}-{EDI}-{ACL}2022-Depression Detection System from Social Media Text using Transformer Models
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.26/
Esackimuthu, Sarika and Hariprasad, Shruthi and Sivanaiah, Rajalakshmi and S, Angel and Rajendram, Sakaya Milton and T T, Mirnalinee
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
196--199
Depression is a common mental illness that involves sadness and a lack of interest in all day-to-day activities. The task is to classify social media text showing signs of depression into three labels, namely {\textquotedblleft}not depressed{\textquotedblright}, {\textquotedblleft}moderately depressed{\textquotedblright}, and {\textquotedblleft}severely depressed{\textquotedblright}. We have built a system using the Deep Learning Model {\textquotedblleft}Transformers{\textquotedblright}. Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. The multi-class classification model used in our system is based on the ALBERT model. In the shared task at ACL 2022, our team SSN{\_}MLRG3 obtained a Macro F1 score of 0.473.
null
null
10.18653/v1/2022.ltedi-1.26
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,342
inproceedings
lin-etal-2022-bert
{BERT} 4{EVER}@{LT}-{EDI}-{ACL}2022-Detecting signs of Depression from Social Media:Detecting Depression in Social Media using Prompt-Learning and Word-Emotion Cluster
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.27/
Lin, Xiaotian and Fu, Yingwen and Yang, Ziyu and Lin, Nankai and Jiang, Shengyi
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
200--205
In this paper, we report the solution of team BERT 4EVER for the LT-EDI-ACL 2022 shared task Detecting Signs of Depression from Social Media Text, which aims to classify YouTube comments into one of the following categories: no, moderate, or severe depression. We model the problem as a text classification task and a text generation task, and respectively propose two different models for the tasks. To combine the knowledge learned from these two different models, we softly fuse the predicted probabilities of the models above and then select the label with the highest probability as the final output. In addition, multiple augmentation strategies are leveraged to improve the model generalization capability, such as back translation and adversarial training. Experimental results demonstrate the effectiveness of the proposed models and the two augmentation strategies.
null
null
10.18653/v1/2022.ltedi-1.27
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,343
inproceedings
balouchzahi-etal-2022-cic
{CIC}@{LT}-{EDI}-{ACL}2022: Are transformers the only hope? Hope speech detection for {S}panish and {E}nglish comments
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.28/
Balouchzahi, Fazlourrahman and Butt, Sabur and Sidorov, Grigori and Gelbukh, Alexander
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
206--211
Hope is an inherent part of human life and essential for improving the quality of life. Hope increases happiness and reduces stress and feelings of helplessness. Hope speech expresses the desire for better outcomes and can be studied using text from various online sources where people express their desires and expectations. In this paper, we present a deep-learning approach with a combination of linguistic and psycho-linguistic features for hope-speech detection. We report our best results submitted to LT-EDI-2022, which ranked 2nd and 3rd in English and Spanish respectively.
null
null
10.18653/v1/2022.ltedi-1.28
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,344
inproceedings
s-etal-2022-scubemsec
scube{MSEC}@{LT}-{EDI}-{ACL}2022: Detection of Depression using Transformer Models
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.29/
S, Sivamanikandan and V, Santhosh and N, Sanjaykumar and C, Jerin Mahibha and Durairaj, Thenmozhi
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
212--217
Social media platforms play a major role in our day-to-day life and are considered a virtual friend by many users, who share their feelings on them every day. The content that users share on social media often reflects their inner life: people love to post daily incidents such as happy or unhappy moments along with their feelings, it makes them feel complete, and it has become a habit for many users. Social media thus provides a new chance to identify the feelings of a person through their posts. The aim of the shared task is to develop a model in which the system is capable of analyzing the grammatical markers related to onset and permanent symptoms of depression. As a team, we participated in the shared task Detecting Signs of Depression from Social Media Text at LT-EDI 2022 (ACL 2022), and we propose a model which predicts depression from English social media posts using the dataset shared for the task. The prediction is done based on the labels Moderate, Severe and Not Depressed. We implemented this using different transformer models such as DistilBERT, RoBERTa and ALBERT, with which we achieved Macro F1 scores of 0.337, 0.457 and 0.387 respectively. Our code is publicly available on GitHub.
null
null
10.18653/v1/2022.ltedi-1.29
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,345
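A minimal sketch of the per-model setup the scubeMSEC record above compares: fine-tuning a pretrained transformer (DistilBERT here; RoBERTa or ALBERT are drop-in substitutes) as a three-way classifier over the depression-severity labels. The checkpoint, tokenization settings, toy examples, and training arguments are assumptions, not the authors' configuration.

```python
# A minimal sketch of fine-tuning one of the compared transformers as a
# 3-way depression-severity classifier; all settings are assumptions.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["not depressed", "moderate", "severe"]
name = "distilbert-base-uncased"  # swap for roberta-base / albert-base-v2
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(LABELS))

class PostDataset(torch.utils.data.Dataset):
    """Social media posts paired with severity labels."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=256)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train = PostDataset(["i feel fine today", "nothing matters anymore"], [0, 2])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()
```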
inproceedings
b-etal-2022-ssncse
{SSNCSE}{\_}{NLP}@{LT}-{EDI}-{ACL}2022:Hope Speech Detection for Equality, Diversity and Inclusion using sentence transformers
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.30/
B, Bharathi and Srinivasan, Dhanya and Varsha, Josephine and Durairaj, Thenmozhi and B, Senthil Kumar
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
218--222
In recent times, applications have been developed to regulate and control the spread of negativity and toxicity on online platforms. The world is filled with serious problems like political {\&} religious conflicts, wars, and pandemics, and offensive hate speech is the last thing we desire. Our task was to classify a text into {\textquoteleft}Hope Speech' and {\textquoteleft}Non-Hope Speech'. We searched for datasets acquired from YouTube comments that offer support, reassurance, inspiration, and insight, and the ones that don't. The datasets were provided to us by the LT-EDI organizers in English, Tamil, Spanish, Kannada, and Malayalam. To successfully identify and classify them, we employed several machine learning transformer models such as m-BERT, MLNet, BERT, XLMRoberta, and XLM{\_}MLM. The observed results indicate that BERT and m-BERT obtained the best results among all the other techniques, gaining weighted F1-scores of 0.92, 0.71, 0.76, 0.87, and 0.83 for English, Tamil, Spanish, Kannada, and Malayalam respectively. This paper depicts our work for the Shared Task on Hope Speech Detection for Equality, Diversity, and Inclusion at LT-EDI 2022.
null
null
10.18653/v1/2022.ltedi-1.30
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,346
inproceedings
kumar-etal-2022-soa
{SOA}{\_}{NLP}@{LT}-{EDI}-{ACL}2022: An Ensemble Model for Hope Speech Detection from {Y}ou{T}ube Comments
Chakravarthi, Bharathi Raja and Bharathi, B and McCrae, John P and Zarrouk, Manel and Bali, Kalika and Buitelaar, Paul
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.ltedi-1.31/
Kumar, Abhinav and Saumya, Sunil and Roy, Pradeep
Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion
223--228
Language should be accommodating of equality and diversity as a fundamental aspect of communication. The language of internet users has a big impact on peer users all over the world. On virtual platforms such as Facebook, Twitter, and YouTube, people express their opinions in different languages. People respect others' accomplishments, pray for their well-being, and cheer them on when they fail. Such motivational remarks are hope speech remarks. Simultaneously, a group of users encourages discrimination against women, people of color, people with disabilities, and other minorities based on gender, race, sexual orientation, and other factors. To recognize hope speech from YouTube comments, the current study offers an ensemble approach that combines support vector machine, logistic regression, and random forest classifiers. Extensive testing was carried out to discover the best features for the aforementioned classifiers. In the support vector machine and logistic regression classifiers, char-level TF-IDF features were used, whereas in the random forest classifier, word-level features were used. The proposed ensemble model performed well across English, Spanish, Tamil, Malayalam, and Kannada YouTube comments.
null
null
10.18653/v1/2022.ltedi-1.31
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
24,347
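A minimal sketch of the ensemble described in the SOA_NLP record above: char-level TF-IDF features for the SVM and logistic regression, word-level features for the random forest, combined by majority vote with scikit-learn. The n-gram ranges and toy data are assumptions.

```python
# A minimal sketch of a majority-vote ensemble of SVM, logistic regression,
# and random forest over TF-IDF features; n-gram ranges are assumptions.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

ensemble = VotingClassifier(
    estimators=[
        # Char-level TF-IDF for the SVM and logistic regression.
        ("svm", make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
            LinearSVC())),
        ("lr", make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
            LogisticRegression(max_iter=1000))),
        # Word-level TF-IDF for the random forest.
        ("rf", make_pipeline(
            TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
            RandomForestClassifier())),
    ],
    voting="hard",  # majority vote over the three classifiers
)

# Toy comments standing in for the multilingual YouTube data.
comments = ["you can do it, keep going!", "nobody cares about this"]
labels = ["Hope_speech", "Non_hope_speech"]
ensemble.fit(comments, labels)
print(ensemble.predict(["stay strong, we believe in you"]))
```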