{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:09:47.639425Z"
},
"title": "Practical Transformer-based Multilingual Text Classification",
"authors": [
{
"first": "Cindy",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Transformer-based methods are appealing for multilingual text classification, but common research benchmarks like XNLI (Conneau et al., 2018) do not reflect the data availability and task variety of industry applications. We present an empirical comparison of transformer-based text classification models in a variety of practical monolingual and multilingual pretraining and fine-tuning settings. We evaluate these methods on two distinct tasks in five different languages. Departing from prior work, our results show that multilingual language models can outperform monolingual ones in some downstream tasks and target languages. We additionally show that practical modifications such as task-and domain-adaptive pretraining and data augmentation can improve classification performance without the need for additional labeled data.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Transformer-based methods are appealing for multilingual text classification, but common research benchmarks like XNLI (Conneau et al., 2018) do not reflect the data availability and task variety of industry applications. We present an empirical comparison of transformer-based text classification models in a variety of practical monolingual and multilingual pretraining and fine-tuning settings. We evaluate these methods on two distinct tasks in five different languages. Departing from prior work, our results show that multilingual language models can outperform monolingual ones in some downstream tasks and target languages. We additionally show that practical modifications such as task-and domain-adaptive pretraining and data augmentation can improve classification performance without the need for additional labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While the development of natural language understanding (NLU) applications often begins with high-resource languages such as English, there is a need to create products that are accessible to speakers of the world's nearly 7,000 languages. Only 5% of the world's population is estimated to speak English as a first language. 1 The growth of NLU-centric products within diverse language markets is evidenced by the increase in language support for popular consumer applications such as virtual assistants, Web search, and social media platforms. As of mid-2020, Google Assistant supported 44 languages on smartphones, followed by Siri (21 languages) and Amazon Alexa (8 languages). At the start of 2021, Google Search and Microsoft Bing supported 149 and 40 languages respectively. Also at this time, Twitter officially supported a total of 45 languages with Facebook reaching over 100 languages. 1 CIA World Factbook Advances in multilingual language models such as multilingual BERT (mBERT; Devlin et al., 2019) and XLM-RoBERTa (XLM-R; Conneau et al., 2020) which are trained on massive corpora in over 100 languages, show promise for fast iteration and deployment of NLU applications. In theory, cross-lingual approaches reduce the need for labeled training data in target languages by enabling zero-or few-shot learning. Additionally, they enable simplified model deployment compared to the use of many monolingual models. On the other hand, evaluations show that scaling to more languages causes dilution (Conneau et al., 2020) and consequently cite the relative under-performance of multilingual models on monolingual tasks (Virtanen et al., 2019; Antoun et al., 2020) .",
"cite_spans": [
{
"start": 992,
"end": 1012,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1017,
"end": 1036,
"text": "XLM-RoBERTa (XLM-R;",
"ref_id": null
},
{
"start": 1037,
"end": 1058,
"text": "Conneau et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 1509,
"end": 1531,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 1629,
"end": 1652,
"text": "(Virtanen et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 1653,
"end": 1673,
"text": "Antoun et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent studies (Hu et al., 2020; Rust et al., 2020 ) have explored tradeoffs of multi versus monolingual model paradigms. However, we observe that existing multilingual text classification benchmarks are designed to measure zero-shot cross-lingual transfer rather than supervised learning (Conneau et al., 2018; Yang et al., 2019) , though the latter is more applicable to industry settings. Thus, the goal of this paper is to evaluate multilingual text classification approaches with a focus on real applications. Our contributions include:",
"cite_spans": [
{
"start": 15,
"end": 32,
"text": "(Hu et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 33,
"end": 50,
"text": "Rust et al., 2020",
"ref_id": null
},
{
"start": 289,
"end": 311,
"text": "(Conneau et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 312,
"end": 330,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A comparison of state-of-the-art language models spanning monolingual and multilingual setups, evaluated across five languages and two distinct tasks; \u2022 A set of practical recommendations for finetuning readily available language models for text classification; and \u2022 Analyses of industry-centric challenges such as domain mismatch, labeled data availability, and runtime inference scalability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider a series of practical components for building multilingual text classification systems. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Text Classification",
"sec_num": "2"
},
{
"text": "Transfer learning using pretrained language models (LMs) which are then fine-tuned for downstream tasks has emerged as a powerful technique for NLU applications. In particular, models using the nowubiquitous transformer architecture (Vaswani et al., 2017) , such as BERT (Devlin et al., 2019) and its variants, have obtained state of the art results in many monolingual and cross-lingual NLU benchmarks (Wang et al., 2019a; Raffel et al., 2020; He et al., 2021) . One drawback of data-hungry transformer models is that they are time-and resource-intensive to train. In our experiments, we consider LMs pretrained on both monolingual and multilingual corpora, and analyze the effects of combining these models with other NLU system components.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 271,
"end": 292,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 403,
"end": 423,
"text": "(Wang et al., 2019a;",
"ref_id": "BIBREF34"
},
{
"start": 424,
"end": 444,
"text": "Raffel et al., 2020;",
"ref_id": "BIBREF23"
},
{
"start": 445,
"end": 461,
"text": "He et al., 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Transformer Language Models",
"sec_num": "2.1"
},
{
"text": "For monolingual LMs, we use BERT models pretrained on corpora in each target language. The one exception is English, where we use RoBERTa, a BERT reimplementation that exceeds its performance on an assortment of tasks (Liu et al., 2019) .",
"cite_spans": [
{
"start": 218,
"end": 236,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Transformer Language Models",
"sec_num": "2.1"
},
{
"text": "For multilingual LMs, we use XLM-R, which significantly outperforms mBERT on cross-lingual benchmarks and is competitive with monolingual models on monolingual benchmarks such as GLUE (Wang et al., 2019b) . All of the pretrained models used are accessible from the Hugging Face (Wolf et al., 2020) model hub, and their details are summarized in Table 1 .",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Wang et al., 2019b)",
"ref_id": "BIBREF35"
},
{
"start": 278,
"end": 297,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Pretrained Transformer Language Models",
"sec_num": "2.1"
},
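The following sketch illustrates how the monolingual and multilingual checkpoints described above can be loaded from the Hugging Face model hub. Table 1 is not reproduced in this parse, so the model identifiers below are assumptions rather than the exact checkpoints used in the paper.

```python
# Illustrative sketch: loading monolingual and multilingual checkpoints from
# the Hugging Face model hub. The identifiers below are assumptions, since
# Table 1 (which lists the exact checkpoints) is not reproduced here.
from transformers import AutoModel, AutoTokenizer

MONOLINGUAL_LMS = {  # hypothetical per-language choices
    "en": "roberta-base",
    "fr": "camembert-base",
    "de": "bert-base-german-cased",
    "es": "dccuchile/bert-base-spanish-wwm-cased",
    "ja": "cl-tohoku/bert-base-japanese",
}
MULTILINGUAL_LM = "xlm-roberta-base"


def load_lm(name):
    """Return (tokenizer, encoder) for a pretrained checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(name)
    encoder = AutoModel.from_pretrained(name)
    return tokenizer, encoder


tokenizer, encoder = load_lm(MONOLINGUAL_LMS["en"])
```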
{
"text": "Though pretrained language models have hundreds of millions of parameters and are trained on diverse corpora, they are not guaranteed to generalize to all tasks and domains. For downstream tasks, a second phase of pretraining on a smaller domain-or task-specific corpus has been shown to provide performance improvements. Gururangan et al. (2020) compare domain-adaptive pretraining (DAPT), which uses a large corpus of unlabeled domain-specific text, and task-adaptive pretraining (TAPT), which uses only the training data of a particular task. The primary difference is that the task-specific corpus tends to be much smaller, but also more task-relevant. Therefore, while DAPT is helpful in both low-and high-resource settings, TAPT is much more resource-efficient and outperforms DAPT when sufficient data is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain-Adaptive and Task-Adaptive Pretraining",
"sec_num": "2.2"
},
{
"text": "In our experiments, we evaluate both approaches, using the classification task training data as the TAPT corpus and in-domain unlabeled data as the DAPT corpus (see Section 3 for details). BERT and RoBERTa are pretrained with a masked language modeling (MLM) objective, a cross-entropy loss on randomly masked tokens in the input sequence. We similarly use the MLM objective when performing DAPT and TAPT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain-Adaptive and Task-Adaptive Pretraining",
"sec_num": "2.2"
},
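A minimal sketch of the adaptive pretraining step, assuming the Hugging Face Trainer API: the same MLM objective is run over either the task training text (TAPT) or an unlabeled in-domain corpus (DAPT). The 10-epoch setting follows the training details reported later; the corpus path and batch size are placeholders.

```python
# Minimal sketch of task-/domain-adaptive pretraining (TAPT/DAPT): continue
# training the LM with the masked language modeling objective on either the
# task's training text (TAPT) or unlabeled in-domain text (DAPT).
# The file path and batch size below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# One line of raw text per example (task training text or in-domain corpus).
corpus = load_dataset("text", data_files={"train": "adaptation_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments("adapted-lm", num_train_epochs=10,
                         per_device_train_batch_size=16)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
# The adapted encoder saved under "adapted-lm" is then fine-tuned for
# classification as described in the next section.
```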
{
"text": "We consider three settings for supervised finetuning of language models for downstream classification tasks (N is the number of target languages).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Fine-Tuning",
"sec_num": "2.3"
},
{
"text": "\u2022 mono-target (N final models): Fine-tune a monolingual LM on the training data in each target language \u2022 multi-target (N final models): Fine-tune XLM-R on the training data in each target language \u2022 multi-all (one final model): Fine-tune XLM-R on the concatenation of all training data To represent sequences for classification, we use the final LM hidden vectors B \u2208 R l\u00d7H corresponding to each of the l input tokens. 2 We then compute average and max pools over the sequence length layer and concatenate them to create the aggregate representation C \u2208 R 2H . Finally, the summary vector C is passed to a classification layer where we compute a standard cross-entropy loss.",
"cite_spans": [
{
"start": 420,
"end": 421,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Fine-Tuning",
"sec_num": "2.3"
},
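A sketch of the classification architecture described above, assuming a PyTorch/Hugging Face encoder: mean and max pooling over the final hidden states are concatenated into a 2H-dimensional summary that feeds a linear classifier trained with cross-entropy. The attention-mask handling and the 0.4 dropout value (taken from the training details reported later) are the only added assumptions.

```python
# Sketch of the pooled classification head: average- and max-pool the final
# hidden states B (batch, l, H) over the sequence length, concatenate them
# into C (batch, 2H), and apply a linear classifier with cross-entropy loss.
import torch
import torch.nn as nn
from transformers import AutoModel


class PooledClassifier(nn.Module):
    def __init__(self, lm_name, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(lm_name)
        hidden = self.encoder.config.hidden_size   # H
        self.dropout = nn.Dropout(0.4)              # per the training setup
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask, labels=None):
        # B: final-layer hidden states, one vector per input token.
        B = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.unsqueeze(-1).float()
        avg_pool = (B * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        max_pool = B.masked_fill(mask == 0, float("-inf")).max(dim=1).values
        C = torch.cat([avg_pool, max_pool], dim=-1)  # (batch, 2H)
        logits = self.classifier(self.dropout(C))
        loss = None
        if labels is not None:
            loss = nn.functional.cross_entropy(logits, labels)
        return loss, logits
```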
{
"text": "In real applications, labeled data is often available in high resource languages such as English but sparse or nonexistent in others. We experiment with machine translation 3 as a form of cross-lingual data augmentation, which has been shown to improve performance on multilingual benchmarks (Singh et al., 2019) . In single target language settings, we translate training data from other languages into the target language, yielding N times the number of training examples. In the multi-all setting, we translate data from every language into every other language, yielding N (N \u2212 1) times the number of training examples. At training time, we directly include the translated examples in the training corpus. Following the pretraining convention of XLM-R, we do not use special markers to denote the input language.",
"cite_spans": [
{
"start": 292,
"end": 312,
"text": "(Singh et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "2.4"
},
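A small sketch of the augmentation scheme for the multi-all setting, where every labeled example is translated into every other language while keeping its label. The translate() callable stands in for the machine translation service referenced in footnote 3; it is not implemented here.

```python
# Sketch of cross-lingual data augmentation in the multi-all setting:
# translate every labeled example into every other language, yielding up to
# N*(N-1) times additional examples. In single-target settings, only the
# translations into the target language would be kept.
from typing import Callable, List, Tuple

Example = Tuple[str, int, str]  # (text, label, language code)


def augment_multi_all(train: List[Example], languages: List[str],
                      translate: Callable[[str, str, str], str]) -> List[Example]:
    augmented = list(train)
    for text, label, src in train:
        for tgt in languages:
            if tgt != src:
                # Translated copy keeps the original label; no language marker
                # is added, following the XLM-R pretraining convention.
                augmented.append((translate(text, src, tgt), label, tgt))
    return augmented
```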
{
"text": "We choose sentiment analysis and hate speech detection as evaluation tasks due to their relevance to industry applications and the availability of multilingual datasets. An overview of the datasets is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The Cross-Lingual Sentiment dataset (CLS; Prettenhofer and Stein, 2010) 4 consists of AMAZON product reviews in four languages and three product categories (BOOKS, DVD, and MUSIC). Each review includes title and body text, which we concatenate to create the input example. The dataset contains training and test sets with balanced binary sentiment labels, as well as 50-320k unlabeled examples per language. We sample 10k unlabeled examples from each language for DAPT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.1"
},
{
"text": "The HATEVAL dataset (Basile et al., 2019) contains tweets in English and Spanish annotated for the presence of hate speech targeting women and immigrants. Examples were collected by querying Twitter for users with histories of sending or receiving hateful messages, as well as keywords related to women and immigrants.",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
{
"text": "Relabeling English Test Data During experimentation, we found that English example labels were inconsistent across the training and test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
{
"text": "For instance, many test examples containing antiimmigration hashtags were mislabeled as nonhateful while similar examples were labeled as hateful in the training set (see Table 3 ). We manually relabeled 641 examples in the test set and release the relabeled data for future research. 5, 6 Unlabeled Twitter Data Since no unlabeled corpus is provided, we collected a sample of 10k random tweets per language from November 2020, which we use for DAPT. (Eisenschlos et al., 2019, top) and HATEVAL (Basile et al., 2019, bottom) .",
"cite_spans": [
{
"start": 285,
"end": 287,
"text": "5,",
"ref_id": null
},
{
"start": 288,
"end": 289,
"text": "6",
"ref_id": null
},
{
"start": 451,
"end": 482,
"text": "(Eisenschlos et al., 2019, top)",
"ref_id": null
},
{
"start": 495,
"end": 524,
"text": "(Basile et al., 2019, bottom)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
{
"text": "LM (see Table 1 ) and truncate sequences with more than 512 tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
{
"text": "Training We use 80% of each training set for training and the rest for validation. During DAPT and TAPT, we train using the MLM objective for 10 epochs. During supervised fine-tuning, we train for 5 epochs. We use the default hyperparameters for all pretrained LMs and apply dropout of 0.4 to the final classification layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
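The training setup above can be summarized as the following configuration sketch; the epoch counts, split ratio, and dropout come from the text, while the stratified split and fixed seed are assumptions.

```python
# Condensed view of the training setup: 80/20 train/validation split,
# 10 MLM epochs for DAPT/TAPT, 5 epochs of supervised fine-tuning, default LM
# hyperparameters, and 0.4 dropout on the final classification layer.
from sklearn.model_selection import train_test_split

ADAPTIVE_PRETRAINING_EPOCHS = 10  # DAPT / TAPT with the MLM objective
FINE_TUNING_EPOCHS = 5            # supervised fine-tuning
CLASSIFIER_DROPOUT = 0.4          # dropout on the final classification layer


def split_train_validation(texts, labels):
    """Hold out 20% of each training set for validation (80/20 split);
    stratification and the seed are assumptions, not stated in the paper."""
    return train_test_split(texts, labels, test_size=0.2,
                            stratify=labels, random_state=0)
```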
{
"text": "Evaluation We report the test set macroaveraged F1 score for both datasets. (For CLS, this is equivalent to accuracy since the classes are balanced.) For reference, prior results on CLS and HATEVAL are shown in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "3.2"
},
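A minimal sketch of the evaluation metric, assuming scikit-learn: macro-averaged F1 over the test set, which for the balanced CLS classes coincides with accuracy.

```python
# Macro-averaged F1, the metric reported for both datasets.
from sklearn.metrics import f1_score


def macro_f1(y_true, y_pred):
    return f1_score(y_true, y_pred, average="macro")


# Toy usage with two balanced binary classes.
print(macro_f1([0, 1, 1, 0], [0, 1, 0, 0]))  # ~0.733
```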
{
"text": "We report results for all experiments in Table 5 . For both datasets, (1) TAPT and DAPT and (2) data augmentation with machine translations improve model performance. These strategies, which require no additional labeled data, improve macro-F1 score by between 0.6-1.5% for CLS and between 0.3-4.3% for HATEVAL. Even without DAPT, which is often the most expensive step, applying TAPT and/or data augmentation alone improves performance in all settings and languages except HATEVAL EN.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "CLS For languages where extremely highresource monolingual LMs are available (EN and FR), models perform best in the mono-target setting, in which a monolingual LM is fine-tuned on target language data. This is consistent with prior findings that XLM-R suffers from fixed model capacity and vocabulary dilution (Conneau et al., 2019) . However, for DE and JA, which are not lowresource languages but whose monolingual LM pretraining corpora are relatively limited in size and domain (see Table 1 ), XLM-R models perform better.",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 488,
"end": 495,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "HATEVAL On average, XLM-R models perform better on HATEVAL than those fine-tuned from monolingual LMs. Unlike for CLS, this is true even in EN, suggesting that for some classification tasks, the LM pretraining corpus is not as important for downstream task performance as XLM-R's larger model capacity and cross-lingual transfer. Though scores were much higher for the relabeled EN dataset than the original, the effects of LM finetuning, TAPT, DAPT, and data augmentation were consistent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "5"
},
{
"text": "The two text classification tasks we evaluate are significantly different from both an annotation and a modeling perspective. Sentiment is a well-defined facet of language, and language model representations have even been shown to encode semantic information about it (Radford et al., 2017) . Meanwhile, defining and identifying hate speech is much more nuanced, even for humans. Hate speech detection is confounded by many factors that require not only immediate context of the input but also cultural and social contexts (Schmidt and Wiegand, 2017) . The difference in the types of information that models need to encode for each task may explain why monolingual LMs, which tend to encode better lexical information than multilingual LMs , can outperform XLM-based models when fine-tuned for sentiment analysis but not for hate speech detection.",
"cite_spans": [
{
"start": 269,
"end": 291,
"text": "(Radford et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 524,
"end": 551,
"text": "(Schmidt and Wiegand, 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Not All Classification Tasks Are Created Equal",
"sec_num": "5.1"
},
{
"text": "Prior work has established that multilingual LMs benefit from the addition of more languages during pretraining up to a point, after which limited model capacity and vocabulary dilution cause performance to degrade on downstream tasks -this is referred to as the curse of multilinguality (Conneau et al., 2019) . Though this is reflected in the results of CLS EN and FR, other models fine-tuned from XLM-R exhibit gains from cross-lingual transfer.",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "5.2"
},
{
"text": "In particular, for CLS JA and HATEVAL EN, the best-performing models benefit not only from multilingual pretraining corpora but also from multilingual task training data. These results suggest that when fine-tuning LMs for downstream tasks, XLM-R is a robust baseline. Model denotes the supervised finetuning setting. Adapt. denotes the adaptive pretraining setting: \u00d7 (no adaptive pretraining), TAPT (task-adaptation only), or TAPT+DAPT (task-and domain-adaptation). Aug. denotes whether the training data was augmented with machine-translated examples. For HATEVAL, we report results for both the original and relabeled \u2020 test sets. Table 6 : Zero-shot learning versus best multilingual approaches. Data denotes language of training data. We fine-tune XLM-R and use DAPT, TAPT, and data augmentation for all models shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 635,
"end": 642,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "5.2"
},
{
"text": "In cases where knowledge transfer from a monolingual LM might be difficult (e.g. due to a limited pretraining corpus or specialized downstream task), XLM-R may even outperform its monolingual competitors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "5.2"
},
{
"text": "Zero-shot learning is a topic of significant interest in multilingual NLU research (Conneau et al., 2018 (Conneau et al., , 2019 Artetxe and Schwenk, 2019) . In this context, we use zero-shot learning to refer to learning a classification task without observing training examples in the target language. Such an approach would allow practitioners to train a classification model using labeled data in a high-resource lan-guage such as EN and deploy it in other languages for which labels are not available.",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "(Conneau et al., 2018",
"ref_id": "BIBREF7"
},
{
"start": 105,
"end": 128,
"text": "(Conneau et al., , 2019",
"ref_id": "BIBREF5"
},
{
"start": 129,
"end": 155,
"text": "Artetxe and Schwenk, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Are Target Language Labels Needed?",
"sec_num": "5.3"
},
{
"text": "To evaluate the viability of zero-shot approaches for our tasks, we compare the best performing models from the experiments in Table 5 with models trained only on EN training data. We report the test set results for each of the non-EN target languages in Table 6 . Zero-shot models are competitive with previously published baselines (Table 4) , which demonstrates the effectiveness of crosslingual transfer in models like XLM-R. However, models trained using target language labels still outperform them by a large margin. Since obtaining a small number of target language labels is straightforward and typically required for validation in real applications, the need for zero-shot learning is reduced in practical scenarios.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 255,
"end": 262,
"text": "Table 6",
"ref_id": null
},
{
"start": 334,
"end": 344,
"text": "(Table 4)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Are Target Language Labels Needed?",
"sec_num": "5.3"
},
{
"text": "The deployment of multilingual NLU systems varies significantly depending on the number of downstream task models trained and the model architectures used. For instance, the mono-target and multi-target settings induce one model per target language. Conversely, multi-all models have more consistent end-task performance and do not require the added complexity and latency of language detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speed and Memory Usage",
"sec_num": "5.4"
},
{
"text": "We use the Hugging Face library to benchmark the pretrained transformer models used in our experiments. We measure the inference time and memory usage of a single forward pass on a single Nvidia Tesla P100 GPU. Results are shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Speed and Memory Usage",
"sec_num": "5.4"
},
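A rough sketch of the benchmarking procedure, assuming PyTorch and the Hugging Face transformers library rather than the library's dedicated benchmarking utilities: time one forward pass and record peak GPU memory for a given batch size.

```python
# Rough sketch of the inference benchmark: time a single forward pass and
# record peak GPU memory for a given batch size. Sequence length and the
# placeholder input text are assumptions.
import time

import torch
from transformers import AutoModel, AutoTokenizer


def benchmark(model_name, batch_size, seq_len=128):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).to(device).eval()
    batch = tokenizer(["a placeholder sentence"] * batch_size,
                      padding="max_length", truncation=True,
                      max_length=seq_len, return_tensors="pt").to(device)
    if device == "cuda":
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()
    with torch.no_grad():
        start = time.perf_counter()
        model(**batch)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    peak_mb = (torch.cuda.max_memory_allocated() / 2**20
               if device == "cuda" else None)
    return elapsed, peak_mb


print(benchmark("xlm-roberta-base", batch_size=1))
```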
{
"text": "Monolingual BERT models in different languages are nearly identical in inference speed, but vary slightly at small batch sizes. RoBERTa has more parameters than BERT, but the impact on inference time and memory is small. XLM-R is also comparable with monolingual models at small batch sizes, but its memory usage becomes prohibitively large at batch sizes larger than 32. For certain applications such as those with real-time inference, this may not be important since the most common batch size is 1. Overall, the main tradeoff we observe is between the complexity of deploying N language-specific models and the high parameter count of a single multilingual model. (Conneau et al., 2018) and PAWS-X (Yang et al., 2019) are commonly used as representative benchmarks for cross-lingual text classification (Hu et al., 2020; Conneau et al., 2019) . However, both datasets are designed for evaluating zero-shot crosslingual transfer. While useful, they do not reflect practical scenarios where (1) a small amount of labeled data obviates zero-shot approaches, and (2) target language test data are not semantically aligned.",
"cite_spans": [
{
"start": 667,
"end": 689,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 701,
"end": 720,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF37"
},
{
"start": 806,
"end": 823,
"text": "(Hu et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 824,
"end": 845,
"text": "Conneau et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speed and Memory Usage",
"sec_num": "5.4"
},
{
"text": "Meanwhile, benchmarks for supervised multilingual text classification are limited. Artetxe and Schwenk (2019) propose Language-Agnostic SEntence Representations (LASER) and evaluate them on Multilingual Document Classification Corpus (MLDOC; Schwenk and Li, 2018) . Eisenschlos et al. (2019) later show that their multilingual finetuning and bootstrapping approach, MultiFit, outperforms LASER and mBERT on CLS and ML-DOC. The recently released Multilingual Amazon Reviews Corpus (MARC; Keung et al., 2020) is similar to CLS, but contains a different set of languages and large-scale training sets. Rust et al. (2020) perform a systematic evaluation similar to ours, comparing monolingual and multilingual BERT models on seven monolingual sentiment analysis datasets. Unlike our work, they do not consider multilingual test sets or cross-lingual transfer during training (as in the multi-all setting). None of the above evaluate practical training modifications, XLM-R, or tasks with class imbalance.",
"cite_spans": [
{
"start": 83,
"end": 109,
"text": "Artetxe and Schwenk (2019)",
"ref_id": "BIBREF2"
},
{
"start": 242,
"end": 263,
"text": "Schwenk and Li, 2018)",
"ref_id": "BIBREF26"
},
{
"start": 266,
"end": 291,
"text": "Eisenschlos et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speed and Memory Usage",
"sec_num": "5.4"
},
{
"text": "Due to the increased volume and consequence of online content moderation in recent years, there is a growing body of work on multilingual hate speech data and methodology. The Multilingual Toxic Comment Classification Kaggle challenge (Jigsaw, 2019) included a multilingual test set of Wikipedia talk page comments annotated for toxicity. More recently, introduced XHATE-999, an evaluation set of 999 semantically aligned test instances annotated for abusive language in five typologically diverse languages. Similar to our work, they compare state-of-the-art monolingual and multilingual transformer models. However, both the Jigsaw dataset and XHATE-999 are designed for evaluating zero-shot transfer and do not contain multilingual training data.",
"cite_spans": [
{
"start": 235,
"end": 249,
"text": "(Jigsaw, 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "6.2"
},
{
"text": "Other multilingual hate speech studies have largely combined separate existing monolingual datasets for evaluation (Pamungkas and Patti, 2019; Sohn and Lee, 2019; Aluru et al., 2020; Corazza et al., 2020; Zampieri et al., 2020) . To avoid domain mismatch effects across languages, we use the HATEVAL dataset (Basile et al., 2019) , for which all examples were collected simultaneously.",
"cite_spans": [
{
"start": 115,
"end": 142,
"text": "(Pamungkas and Patti, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 143,
"end": 162,
"text": "Sohn and Lee, 2019;",
"ref_id": "BIBREF28"
},
{
"start": 163,
"end": 182,
"text": "Aluru et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 183,
"end": 204,
"text": "Corazza et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 205,
"end": 227,
"text": "Zampieri et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 308,
"end": 329,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "6.2"
},
{
"text": "Previously evaluated approaches include LSTM architectures and feature selection (Pamungkas and Patti, 2019; Corazza et al., 2020) , as well as using transformers for fine-tuning (Sohn and Lee, 2019) or feature extraction (Stappen et al., 2020) . Aluru et al. (2020) show that fine-tuning from transformer-based language models generally outperforms other methods, including cross-lingual fixed representations like LASER.",
"cite_spans": [
{
"start": 81,
"end": 108,
"text": "(Pamungkas and Patti, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 109,
"end": 130,
"text": "Corazza et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 179,
"end": 199,
"text": "(Sohn and Lee, 2019)",
"ref_id": "BIBREF28"
},
{
"start": 222,
"end": 244,
"text": "(Stappen et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 247,
"end": 266,
"text": "Aluru et al. (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate Speech Detection",
"sec_num": "6.2"
},
{
"text": "We conduct an empirical evaluation of transformerbased methods for multilingual text classification in a variety of pretraining and fine-tuning settings. We evaluate our results on two multilingual datasets spanning five languages: CLS (sentiment analysis) and HATEVAL (hate speech detection). Additionally, we contribute a relabeled version of HATE-VAL to address mislabeled test examples and enable meaningful comparisons in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our results and analysis show that practical methods such as task-and domain-adaptive pretraining and data augmentation using machine translations consistently improve model performance without requiring additional labeled data. We further show that multilingual model performance can vary based on task semantics, and that monolingual models are not always guaranteed to outperform massively multilingual models like XLM-R due to its large pretraining corpora and increased capacity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our work points to a number of future directions, including cross-domain and cross-task transfer, low-resource and few-shot learning, and practical alternatives to large multilingual models such as distillation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Though only the hidden vector for the first ([CLS]) token is typically used(Devlin et al., 2019), we find that the pooled sequence summary attains better results on our tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cloud.google.com/translate4 We use the processed version of this dataset provided byEisenschlos et al. (2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Prior work(Stappen et al., 2020) has also noted this discrepancy and proposed repartitioning the train and test sets. We instead relabeled the test set due to the large number of mislabeled examples.6 https://github.com/sentropytechnologies/ hateval2019-relabeled",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We wish to thank Boya (Emma) Peng, Alexander Wang, and Thomas Boser for discussions and feedback on this work. Thanks also to the anonymous reviewers whose detailed suggestions helped improve its clarity and usefulness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning models for multilingual hate speech detection",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Sai Saket Aluru",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.06465"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Saket Aluru, Binny Mathew, Punyajoy Saha, and Animesh Mukherjee. 2020. Deep learning mod- els for multilingual hate speech detection. arXiv preprint arXiv:2004.06465.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AraBERT: Transformer-based model for Arabic language understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic lan- guage understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Pro- cessing Tools, with a Shared Task on Offensive Lan- guage Detection, pages 9-15, Marseille, France. Eu- ropean Language Resource Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "597--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ca\u00f1ete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jou-Hui",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Hojin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Xnli: Evaluating crosslingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A multilingual evaluation for online hate speech detection",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Corazza",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Trans. Internet Technol",
"volume": "20",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3377323"
]
},
"num": null,
"urls": [],
"raw_text": "Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. 2020. A multilingual evaluation for online hate speech detection. ACM Trans. Internet Technol., 20(2).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Open sourcing german bert",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "deepset.ai. 2019. Open sourcing german bert. https: //deepset.ai/german-bert.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multifit: Efficient multi-lingual language model fine-tuning",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Eisenschlos",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Czapla",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Kadras",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5706--5711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kadras, Sylvain Gugger, and Jeremy Howard. 2019. Multifit: Efficient multi-lingual lan- guage model fine-tuning. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5706-5711.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "XHate-999: Analyzing and detecting abusive language across domains and languages",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6350--6365",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.559"
]
},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Mladen Karan, and Ivan Vuli\u0107. 2020. XHate-999: Analyzing and detecting abusive lan- guage across domains and languages. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6350-6365, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Ana",
"middle": [],
"last": "Suchin Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Downey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8342--8360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deberta: Decoding-enhanced bert with disentangled attention",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generaliza- tion. CoRR, abs/2003.11080.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Jigsaw multilingual toxic comment classification",
"authors": [
{
"first": "",
"middle": [],
"last": "Jigsaw",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jigsaw. 2019. Jigsaw multilingual toxic comment classification.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The multilingual Amazon reviews corpus",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Keung",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4563--4568",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.369"
]
},
"num": null,
"urls": [],
"raw_text": "Phillip Keung, Yichao Lu, Gy\u00f6rgy Szarvas, and Noah A. Smith. 2020. The multilingual Amazon reviews corpus. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 4563-4568, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7203--7219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary, \u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon",
"authors": [
{
"first": "Wahyu",
"middle": [],
"last": "Endang",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2051"
]
},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas and Viviana Patti. 2019. Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 363-370, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Crosslanguage text classification using structural correspondence learning",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1118--1127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross- language text classification using structural corre- spondence learning. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics, pages 1118-1127.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to generate reviews and discovering sentiment",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "J\u00f3zefowicz",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2017,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Rafal J\u00f3zefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Exploring the limits of transfer learning with a unified text-totext transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sebastian Ruder, and Iryna Gurevych. 2020. How good is your tokenizer? on the monolingual performance of multilingual language models",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Rust",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Pfeiffer",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip Rust, Jonas Pfeiffer, Ivan Vuli\u0107, Sebastian Ruder, and Iryna Gurevych. 2020. How good is your tokenizer? on the monolingual performance of mul- tilingual language models.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A Corpus for Multilingual Document Classification in Eight Languages",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk and Xian Li. 2018. A Corpus for Multilingual Document Classification in Eight Lan- guages. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Xlda: Cross-lingual data augmentation for natural language inference and question answering",
"authors": [
{
"first": "Jasdeep",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Nitish",
"middle": [
"Shirish"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. Xlda: Cross-lingual data augmentation for natural lan- guage inference and question answering.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mc-bert4hate: Hate speech detection using multi-channel bert for different languages and translations",
"authors": [
{
"first": "Hajung",
"middle": [],
"last": "Sohn",
"suffix": ""
},
{
"first": "Hyunju",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Data Mining Workshops (ICDMW)",
"volume": "",
"issue": "",
"pages": "551--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hajung Sohn and Hyunju Lee. 2019. Mc-bert4hate: Hate speech detection using multi-channel bert for different languages and translations. 2019 Inter- national Conference on Data Mining Workshops (ICDMW), pages 551-559.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Cross-lingual zero-and few-shot hate speech detection utilising frozen transformer language models and axel. ArXiv, abs",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Stappen",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Brunn",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukas Stappen, Fabian Brunn, and B. Schuller. 2020. Cross-lingual zero-and few-shot hate speech detec- tion utilising frozen transformer language models and axel. ArXiv, abs/2004.13850.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Pretrained japanese bert models",
"authors": [
{
"first": "Masatoshi",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Takahashi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masatoshi Suzuki and Ryo Takahashi. 2019. Pretrained japanese bert models. https: //github.com/cl-tohoku/bert-japanese.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multilingual is not enough: Bert for finnish",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Virtanen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Ilo",
"suffix": ""
},
{
"first": "Jouni",
"middle": [],
"last": "Luoma",
"suffix": ""
},
{
"first": "Juhani",
"middle": [],
"last": "Luotolahti",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07076"
]
},
"num": null,
"urls": [],
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish. arXiv preprint arXiv:1912.07076.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Probing pretrained language models for lexical semantics",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [
"Maria"
],
"last": "Ponti",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7222--7240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107, Edoardo Maria Ponti, Robert Litschko, Goran Glava\u0161, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "3266--3280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, volume 32, pages 3266- 3280. Curran Associates, Inc.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "the Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In the Pro- ceedings of ICLR.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Julien Chaumond, Lysandre Debut, Vic- tor Sanh, Clement Delangue, Anthony Moi, Pier- ric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the- art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 38-45.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "PAWS-X: A cross-lingual adversarial dataset for paraphrase identification",
"authors": [
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP 2019",
"volume": "",
"issue": "",
"pages": "3685--3690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adver- sarial dataset for paraphrase identification. In Pro- ceedings of EMNLP 2019, pages 3685-3690.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "SemEval-2020 task 12: Multilingual offensive language identification in social media (Offen-sEval 2020)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7agr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1425--1447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7agr\u0131 \u00c7\u00f6ltekin. 2020. SemEval-2020 task 12: Multilingual offen- sive language identification in social media (Offen- sEval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1425- 1447, Barcelona (online). International Committee for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Inference time (top) and memory usage (bottom) benchmarks. XLM-R results not shown at batch sizes 32 and 64 due to GPU memory restraints. Environment details: transformers v3.1.0, PyTorch v1.4.0, python v3.7.4, Linux. CPU: x86 _ 64 (fp16=False, RAM=15GB). GPU: Tesla P100-PCIE-16GB, RAM=16GB, power=250.0W, perf. state=0)."
},
"TABREF1": {
"content": "<table/>",
"text": "Pretraining corpora, tokenizers, and size (# parameters) of the language models used in our experiments.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"text": "The target tasks, languages, and number of training and test examples in each dataset.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Percentage of hateful class by anti-immigrant hashtags in HATEVAL (non-exhaustive list). \u2020 Denotes the relabeled test set.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>Model</td><td>DE</td><td>FR</td><td>JA</td></tr><tr><td>mBERT</td><td colspan=\"3\">84.3 86.6 81.2</td></tr><tr><td colspan=\"4\">MultiFiT 92.2 91.4 86.2</td></tr><tr><td>Model</td><td/><td>EN</td><td>ES</td></tr><tr><td>Majority label</td><td/><td colspan=\"2\">36.7 37.0</td></tr><tr><td>SVM + tf-idf</td><td/><td colspan=\"2\">45.1 70.1</td></tr><tr><td colspan=\"4\">1st place submissions 65.1 73.0</td></tr></table>",
"text": "4 Experimental SetupPreprocessing and Tokenization We apply minimal preprocessing to both datasets, replacing URLs and Twitter usernames with <url> and <user> tokens. At all stages of training, we use the default tokenizers associated with each pretrained",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"text": "Prior results (macro-F1) for CLS",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td/><td/><td/><td/><td/><td>CLS</td><td/><td/><td/><td/><td colspan=\"2\">HATEVAL</td><td/></tr><tr><td>Model</td><td colspan=\"2\">Adapt. Aug.</td><td>EN</td><td>DE</td><td>FR</td><td>JA</td><td>AVG</td><td>EN</td><td>EN</td><td>\u2020</td><td>ES</td><td>AVG AVG</td><td>\u2020</td></tr><tr><td>mono-target</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>RoBERTa (EN) BERT (OTHERS)</td><td colspan=\"7\">\u00d7 94.70TAPT \u00d7 \u00d7 94.90.1 91.60.1 95.40.1 89.30.3 92.8 95.00.4 92.30.4 95.80.2 89.70.4 93.2</td><td colspan=\"6\">45.41.9 59.92.7 76.11.1 60.8 68.0 44.71.5 59.21.7 76.91.4 60.8 68.0</td></tr><tr><td/><td>TAPT+</td><td>\u00d7</td><td colspan=\"5\">94.90.4 91.80.2 95.50.3 89.50.2 92.9</td><td colspan=\"6\">48.01.5 63.12.6 76.31.1 62.2 69.7</td></tr><tr><td/><td>DAPT</td><td/><td colspan=\"5\">95.30.1 93.00.8 95.90.1 89.90.4 93.5</td><td colspan=\"6\">46.04.3 60.24.4 76.90.6 61.4 68.5</td></tr><tr><td>multi-target</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>\u00d7</td><td>\u00d7</td><td colspan=\"5\">92.50.4 93.00.2 92.50.3 90.40.5 92.1 93.30.1 94.00.2 93.80.2 90.30.3 92.8</td><td colspan=\"6\">47.22.0 61.41.9 74.80.5 61.0 68.1 45.61.6 59.32.5 77.01.1 61.3 68.1</td></tr><tr><td>XLM-RoBERTa</td><td>TAPT</td><td>\u00d7</td><td colspan=\"5\">92.70.5 93.50.5 93.90.3 90.30.1 92.6 93.40.6 94.00.3 93.80.5 90.50.4 92.9</td><td colspan=\"6\">47.02.7 62.43.3 76.11.4 61.6 69.2 47.91.3 63.51.5 77.90.9 62.9 70.7</td></tr><tr><td/><td>TAPT+</td><td>\u00d7</td><td colspan=\"5\">93.10.6 93.00.5 93.60.1 90.80.3 92.6</td><td colspan=\"6\">49.92.5 65.62.4 76.51.0 63.2 71.0</td></tr><tr><td/><td>DAPT</td><td/><td colspan=\"5\">94.00.3 94.10.4 93.80.3 91.10.4 93.2</td><td colspan=\"6\">46.62.1 61.72.5 78.10.8 62.3 69.9</td></tr><tr><td>multi-all</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>\u00d7</td><td>\u00d7</td><td colspan=\"5\">92.40.3 92.60.4 93.30.4 90.40.4 92.2 93.40.3 93.30.2 94.00.2 90.40.5 92.8</td><td colspan=\"6\">48.43.5 63.14.5 77.50.4 62.9 70.3 49.83.5 66.04.6 77.80.9 63.8 71.9</td></tr><tr><td>XLM-RoBERTa</td><td>TAPT</td><td>\u00d7</td><td colspan=\"5\">92.50.4 93.00.3 93.90.3 90.90.3 92.6 93.50.4 93.40.5 94.10.2 91.10.2 93.0</td><td colspan=\"6\">48.42.7 64.23.5 77.40.9 62.9 70.8 50.02.2 66.52.6 77.80.6 63.9 72.2</td></tr><tr><td/><td>TAPT+</td><td>\u00d7</td><td colspan=\"5\">92.70.3 93.30.2 94.00.3 91.20.3 92.8</td><td colspan=\"6\">47.13.9 62.75.3 77.41.0 62.3 70.1</td></tr><tr><td/><td>DAPT</td><td/><td colspan=\"5\">93.50.3 93.80.2 94.30.3 91.40.2 93.3</td><td colspan=\"6\">50.71.1 67.41.4 77.70.7 64.2 72.6</td></tr></table>",
"text": ".4 90.90.6 95.20.0 88.70.3 92.4 44.45.3 58.56.2 75.60.6 60.0 67.1 95.30.3 92.00.2 95.60.3 89.30.02 93.0 46.12.6 60.63.2 76.01.7 61.0 68.3",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table/>",
"text": "CLS and HATEVAL results (macro-F1) averaged over five random seeds. The best results for each target language test set are bolded, and standard deviations are shown in subscripts.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}