{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:58.273553Z" }, "title": "Abusive Language Recognition in Russian", "authors": [ { "first": "Kamil", "middle": [], "last": "Saitov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Innopolis University Russian Federation", "location": {} }, "email": "saitov66@gmail.com" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "", "affiliation": { "laboratory": "", "institution": "IT University of Copenhagen", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Abusive phenomena are commonplace in language on the web. The scope of recognizing abusive language is broad, covering many behaviours and forms of expression. This work addresses automatic detection of abusive language in Russian. The lexical, grammatical and morphological diversity of Russian language present potential difficulties for this task, which is addressed using a variety of machine learning approaches. We present a dataset and baselines for this task.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Abusive phenomena are commonplace in language on the web. The scope of recognizing abusive language is broad, covering many behaviours and forms of expression. This work addresses automatic detection of abusive language in Russian. The lexical, grammatical and morphological diversity of Russian language present potential difficulties for this task, which is addressed using a variety of machine learning approaches. We present a dataset and baselines for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Unfortunately, hate speech and abusive language are prevalent on the internet (Waseem and Hovy, 2016) , often creating an aggressive environment for users. 
This can include cyber-bullying or threats towards individuals and groups. Reducing this content is difficult: it is harmful for humans to moderate. 1 Thus, there is a critical need for abusive language recognition systems, which would help social networks and forums filter abusive language. Moreover, with platforms taking increased control over which content to surface, automatic abuse recognition is more important than ever.", "cite_spans": [ { "start": 78, "end": 101, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF18" }, { "start": 305, "end": 306, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A further problem is the subjectivity of the matter. Abusive language is hard for humans to recognize universally (Waseem, 2016) . This indicates that the collection and labeling of data should be thorough and objective, which can be achieved through, e.g., large-scale crowd-sourced data annotation (Sabou et al., 2014) .", "cite_spans": [ { "start": 131, "end": 145, "text": "(Waseem, 2016)", "ref_id": "BIBREF17" }, { "start": 316, "end": 336, "text": "(Sabou et al., 2014)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NLP research in the area is nascent, with existing solutions oriented mostly towards the English language (Vidgen and Derczynski, 2020) , which, despite sometimes being mistakenly considered \"universal\" (Bender, 2019) , is grammatically and lexically very different from many languages, especially those using non-Latin scripts (e.g. Russian, Japanese). This paper addresses abusive language detection in Russian. One issue with recognition of abusive language in Russian is the limited number of sources of labeled data relative to English (Andrusyak et al., 2018; Zueva et al., 2020; Smetanin, 2020; Potapova and Gordeev, 2016) . 
Thus, the collection and labeling of data presents an additional challenge, and we present both a dataset and models.", "cite_spans": [ { "start": 102, "end": 131, "text": "(Vidgen and Derczynski, 2020)", "ref_id": "BIBREF16" }, { "start": 202, "end": 216, "text": "(Bender, 2019)", "ref_id": "BIBREF2" }, { "start": 547, "end": 571, "text": "(Andrusyak et al., 2018;", "ref_id": "BIBREF0" }, { "start": 572, "end": 591, "text": "Zueva et al., 2020;", "ref_id": "BIBREF20" }, { "start": 592, "end": 607, "text": "Smetanin, 2020;", "ref_id": null }, { "start": 608, "end": 635, "text": "Potapova and Gordeev, 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this case, we use the OLID annotation definition of abusive language (Zampieri et al., 2019) . This covers profanity, and targeted and untargeted insults and threats, against both groups and individuals. Specifically, in accordance with this scheme, we consider the use of racial and other group-targeted slurs abusive.", "cite_spans": [ { "start": 72, "end": 95, "text": "(Zampieri et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Abusive Language Definition", "sec_num": "2" }, { "text": "We searched for publicly available datasets containing considerable amounts of abusive language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "Russian Troll Tweets is a repository consisting of 3 million tweets. 2 This was filtered to include only Cyrillic texts. This data is not labeled, so a subset of the data was labeled manually for use in this research. During labeling, the data turned out to contain significantly less abusive language than expected. 
An additional resource, the RuTweetCorp (Rubtsova, 2013), was also annotated for abusive language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "In the search for sources rich in abusive language, the \"South Park\" TV show was identified. The Russian subtitles for it contain a high density of profanity, hate speech, racism, sexism, and various examples of ethnicity- and nationality-based abuse. The subtitles from more than four seasons of the series yielded many instances of abusive language. This data, Russian South Park (RSP), was annotated manually. Interannotator agreement (IAA; computed with Cohen's Kappa) over the whole dataset is 0.68 among three L1 3 Russian annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "To complement this, the Kaggle \"Russian Language Toxic Comments\" dataset (RTC) was also annotated. The dataset contains more than 14 000 labeled samples of hate speech. In Section 4, the performance of models trained on RSP data will be compared to that including RTC. The RSP dataset contains more than 1500 samples; adding the RTC data brings the total to more than 15 000. As in much other abusive language research, an abusive language lexicon was also constructed. The text data that was collected previously contained a fair amount of such vocabulary; however, the dictionary should not be limited to the dataset. HateBase (Tuckwood, 2017) contains only 17 abusive Russian words. VK, the largest social network in Russia and the CIS, has an unofficially published abusive speech filter dictionary containing a large lexicon of abusive words. 4 Another source is russki-mat, 5 an open dictionary of Russian curse words with proper explanations and examples of usage. Overall, the multiple-source lexicon built contains more than 700 unique terms. 
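The assembly of this multiple-source lexicon can be sketched as follows. This is a minimal stdlib illustration with harmless placeholder entries rather than the real lists; the source names mirror those described above (HateBase, the VK filter list, russki-mat), and the ё→е folding mirrors the orthographic normalisation used later in preprocessing:

```python
# Sketch: merge abusive-term lists from several sources into one lexicon.
# The entries below are harmless placeholders, NOT the real lexicon.

def normalize(term: str) -> str:
    """Lowercase and fold ё to е, matching the pipeline's orthographic step."""
    return term.lower().replace("ё", "е")

def build_lexicon(*sources):
    """Union of all sources after normalisation, deduplicated."""
    lexicon = set()
    for source in sources:
        lexicon.update(normalize(term) for term in source)
    return lexicon

hatebase_terms = ["Плохой"]           # placeholder entries only
vk_filter_terms = ["плохой", "ЁЖ"]    # an overlap plus a ё-spelling variant
russki_mat_terms = ["дурной"]

lexicon = build_lexicon(hatebase_terms, vk_filter_terms, russki_mat_terms)
# "Плохой"/"плохой" collapse into a single entry; "ЁЖ" is stored as "еж"
```

Normalising at build time means a single lowercase, ё-folded lookup suffices when scanning lemmatized sentences for lexicon hits.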
As can be seen from Table 2 , abuse-bearing sentences contain four times more uppercased words and 25 times more abusive words than non-abusive sentences. The stages of pre-processing are the following:", "cite_spans": [ { "start": 643, "end": 659, "text": "(Tuckwood, 2017)", "ref_id": "BIBREF15" }, { "start": 859, "end": 860, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 1083, "end": 1090, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "1. Balance the dataset. The initial no-hate/hate distribution is 1078/307 for the RSP dataset and 8815/5597 for the RSP+RTC dataset. The no-hate portion of the dataset is under-sampled so that the class proportions are balanced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "2. Strip URLs. Remove the links from texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "3. Adjust platform-specific text. All Twitter mentions, hashtags and retweets are marked by distinct symbols (# for a hashtag, @ for a mention). These tags might hold information on whether the tweet is targeted at a particular person.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "4. Orthographic normalisation. Replace Russian \u0451 and \u0401 with the corresponding \u0435 and \u0415. These letters are mostly interchangeable in Russian, so this is a standard preprocessing step for Russian text data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "5. Tokenization. Split the sentences into separate words and punctuation. 
The tokenization is done with the NLTK library's word_tokenize() method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "6. Lemmatize terms. Lemmatization reduces a word to its normal (dictionary) form. For Russian, most researchers prefer stemming over lemmatization; however, with stemmed text, searching sentences for offensive words becomes intractable. Lemmatization is done with pymorphy2 (Korobov, 2015), a morphological analyzer library built specifically for Russian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "7. Remove stop words from the text. Such words are common interjections, conjunctions and prepositions that do not need to be treated as features in later modelling of the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "8. TF-IDF vectorization. Turn the words into frequency vectors for each sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "9. Train-test split. Randomly split the prepared data into train and test sets in an 80/20 proportion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "Additional features beyond the text itself are included. Since abusive or hateful comments are expected to also be negative in sentiment, sentiment analysis is included. Sentiment was automatically predicted for the RTC dataset using a FastText (Bojanowski et al., 2017) embedding induced over RuSentiment (Rogers et al., 2018) , achieving an F1 of 0.71, which is high for sentiment classifiers for Russian. Upper-casing full words is a popular tone-indicating technique (Derczynski et al., 2015) . Since one cannot \"shout\" on the internet, a raised tone is expressed through upper-casing. 
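This casing feature can be sketched as follows (stdlib only; a simple whitespace split stands in for the NLTK tokenizer used in the actual pipeline, and the length threshold is our illustrative choice so that single-letter words do not count as shouting):

```python
def count_uppercased(text: str) -> int:
    """Count fully upper-cased words; str.isupper() also handles Cyrillic.

    Words of length 1 are skipped (an illustrative choice) so that e.g.
    "Я" ("I") does not register as shouting.
    """
    return sum(1 for token in text.split() if len(token) > 1 and token.isupper())

count_uppercased("вообще ТАК НЕ БЫВАЕТ")  # -> 3
count_uppercased("обычное предложение")   # -> 0
```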
Therefore, the number of fully-uppercased words is counted for each sample.", "cite_spans": [ { "start": 257, "end": 282, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" }, { "start": 318, "end": 339, "text": "(Rogers et al., 2018)", "ref_id": "BIBREF11" }, { "start": 478, "end": 503, "text": "(Derczynski et al., 2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" }, { "text": "We also count the number of offensive words (from our lexicon) contained in a sentence. This feature is expected to be important, since abusive language is often combined with profanity, though this kind of sampling is not without bias (Vidgen and Derczynski, 2020) .", "cite_spans": [ { "start": 236, "end": 265, "text": "(Vidgen and Derczynski, 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.2" }, { "text": "The baseline model is a binary Linear Support Vector Classifier with the default squared-hinge loss and L2 regularization. An SVC was chosen because similar work for other languages suggests that it can be effective for this type of task (Frisiani et al., 2019) .", "cite_spans": [ { "start": 238, "end": 261, "text": "(Frisiani et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Results [no RTC data]", "sec_num": "4.3" }, { "text": "The overall F1-score is up to 0.75, depending on the seed and parameters. The F1-score on the RSP+RTC dataset is higher, up to 0.87, again depending on the seed and parameters (Figure 2) . Analysing the incorrectly classified samples shows that the model struggles mainly with longer texts, as well as with texts containing swear words that cannot be reduced to their base form due to distortion through slang or novel word formation. 
An example of this is the following: ", "cite_spans": [], "ref_spans": [ { "start": 186, "end": 196, "text": "(Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Baseline Results [no RTC data]", "sec_num": "4.3" }, { "text": "\u0412", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Results [no RTC data]", "sec_num": "4.3" }, { "text": "Although removing stop words from tokenized text is common practice, leaving them in can yield different results, and that is the case here: results improve on both datasets. The F1-score over the RTC+RSP dataset is 0.88 (Figure 3 ).", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 231, "text": "(Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Skip stopword exclusion", "sec_num": "4.4" }, { "text": "In this experiment, the datasets are not balanced, so the proportion of hate/no-hate is 1/2 in the combined RTC+RSP dataset and 1/10 in RSP. As can be seen in Figure 4 , true positives decrease slightly while false negatives rise by a large margin, causing a decrease in overall model performance. ", "cite_spans": [], "ref_spans": [ { "start": 165, "end": 173, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Without balancing the dataset", "sec_num": "4.5" }, { "text": "Neural network-based approaches often show promising results on various NLP tasks. In fact, some of the best methods for hate-speech detection in English are BERT-, CNN-, and GRU/LSTM-based techniques (Zampieri et al., 2020) . We investigated these methods over RSP. RuBERT (Burtsev et al., 2018) is the original Bidirectional Encoder Representations from Transformers (Devlin et al., 2019) model, but trained on Russian Wikipedia pages. Fine-tuning consists of training the final classifier layer of the network. The results are promising, reaching an F1-score of 0.85 on the whole training dataset (confusion matrix in Figure 5) . 
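For reference, the F1-scores reported throughout relate to confusion-matrix counts (true/false positives and negatives) as follows; the counts in this sketch are invented for illustration and are not the actual figures from the experiments:

```python
def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented counts: 85 true positives, 15 false positives, 15 false negatives.
f1_from_counts(tp=85, fp=15, fn=15)  # ≈ 0.85
```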
The model is able to correctly recognize the following sample as hate-speech:", "cite_spans": [ { "start": 195, "end": 218, "text": "(Zampieri et al., 2020)", "ref_id": null }, { "start": 268, "end": 290, "text": "(Burtsev et al., 2018)", "ref_id": "BIBREF5" }, { "start": 363, "end": 384, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 630, "end": 639, "text": "Figure 5)", "ref_id": null } ], "eq_spans": [], "section": "Deep Learning", "sec_num": "4.6" }, { "text": "\u041f\u043e\u0441\u043c\u043e\u0442\u0440\u0435\u043b \u0423\u0442\u043e\u043c\u043b\u0435\u043d\u043d\u044b\u0445 \u0441\u043e\u043b\u043d\u0446\u0435\u043c 2. \u0418 \u043e\u043a\u0430\u0437\u0430\u043b\u043e\u0441\u044c, \u0447\u0442\u043e \u044d\u0442\u043e \u0445\u043e\u0440\u043e\u0448\u0438\u0439 \u0444\u0438\u043b\u044c\u043c, \u0442\u0430\u043a\u0430\u044f \u0432\u044b\u0441\u043e\u043a\u043e\u0431\u044e\u0434\u0436\u0435\u0442\u043d\u0430\u044f \u0430\u0440\u0442\u0445\u0430\u0443\u0441\u044f\u0442\u0438\u043d\u0430, \u043a \u043a\u043e\u0442\u043e\u0440\u043e\u0439 \u043c\u043e\u0433\u0443\u0442 \u0431\u044b\u0442\u044c \u043f\u0440\u0435\u0442\u0435\u043d\u0437\u0438\u0438 \u0442\u043e\u043b\u044c\u043a\u043e \u043f\u043e\u0442\u043e\u043c\u0443, \u0447\u0442\u043e \u0441\u043f*\u0437\u0434\u0438\u043b\u0438-\u0440\u0430\u0441\u043f\u0438\u043b\u0438\u043b\u0438 \u0438 \u0432\u043e\u043e\u0431\u0449\u0435 \u0422\u0410\u041a \u041d\u0415 \u0411\u042b\u0412\u0410\u0415\u0422. \u041d\u0443 \u043d*\u0445\u0443\u0439 \u044d\u0442\u0438\u0445 \u043a\u0440\u0438\u0442\u0438\u043a\u043e\u0432. 
\u041e\u0431\u0437\u043e\u0440\u044b \u0434\u043b\u0438\u043d\u043d\u0435\u0435 \u0444\u0438\u043b\u044c\u043c\u043e\u0432, \u043f\u0435\u0442\u0440\u043e\u0441\u044f\u043d\u0441\u0442\u0432\u043e \u0445\u0443\u0436\u0435 \u0440\u0430\u0448\u043a\u043e\u043a\u043e\u043c\u0435\u0434\u0438\u0439, \u0435\u0431*\u043d\u0443\u0442\u0430\u044f \u043d\u0435\u043d\u0430\u0432\u0438\u0441\u0442\u044c \u0438 \u0434\u043e*\u0431\u043a\u0438 \u043f\u043e \u043c\u0435\u043b\u043e\u0447\u0430\u043c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning", "sec_num": "4.6" }, { "text": "Watched Burnt by the Sun 2. Turns out it's a pretty good movie, a high-budget arthouse-ish film, the only possible complaint being that most of the budget was corruptly stolen and THE PLOT IS NOT REALISTIC. F*ck those critics. The reviews are longer than the film itself, the jokes are worse than , f*cked up hate and f*cking nagging about small errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning", "sec_num": "4.6" }, { "text": "mBERT is multilingual BERT (Devlin et al., 2019) , again trained on Wikipedia pages, covering over a hundred languages, many with non-Latin alphabets. Russian is written in Cyrillic, so the model has potential for Russian hate-speech recognition. The fine-tuning is the same as for RuBERT.", "cite_spans": [ { "start": 27, "end": 48, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "The results ( Figure 5 ) showed worse performance than RuBERT, up to 0.76 F1-score. 
The lower performance is probably due to mBERT's generalisation across multiple languages, as opposed to RuBERT, which is trained exclusively on Russian.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 22, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "The following is an example of a sample which has been incorrectly classified as no-hate by both BERT-based models, as well as by the baseline model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "\u0412\u043e\u043d\u044e\u0447\u0438\u0439 \u0441\u043e\u0432\u043a\u043e\u0432\u044b\u0439 \u0441\u043a\u043e\u0442 \u043f\u0440\u0438\u0431\u0435\u0436\u0430\u043b \u0438 \u043d\u043e\u0435\u0442. \u0410 \u0432\u043e\u0442 \u0438 \u0441\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u043a \u0434\u0435\u043c\u043e\u043a\u0440\u0430\u0442\u0438\u0438 \u0438 \u0441\u0432\u043e\u0431\u043e\u0434\u044b \u0441\u043b\u043e\u0432\u0430 \u0437\u0430\u043a\u0443\u043a\u0430\u0440\u0435\u043a\u0430\u043b.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "The stinking soviet cattle came running and whining. And here is the supporter of democracy and freedom of speech starting to crow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "The sentence does not contain any especially abusive vocabulary; rather, the words \"stinking\", \"cattle\", \"crow\" are abusive in this context (in relation to people).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "mBERT", "sec_num": "4.6.2" }, { "text": "For the largest dataset of Russian abusive language samples (RSP+RTC) and the LinearSVC model, the best-case F1-score is 0.88. This is a good result for such a simple model compared to typical results in other languages (Zampieri et al., 2020) . 
We suggest that this score is due to careful data preprocessing and, even more importantly, feature selection. RuBERT still struggles mainly with recognizing longer texts and texts with misspellings. Another barrier for this model in particular is text containing many named entities, because word representations might not contain entity surface forms (Augenstein et al., 2017) or individual entities may not be representative of the typical context of a given abusive language phenomenon.", "cite_spans": [ { "start": 211, "end": 234, "text": "(Zampieri et al., 2020)", "ref_id": null }, { "start": 621, "end": 646, "text": "(Augenstein et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.7" }, { "text": "An example exhibiting the issues above is the following long sentence with many named entities (NEs) and misspellings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.7" }, { "text": "\u0421\u0442\u043e\u0440\u043e\u043d\u043d\u0438\u043a\u0438 \u0431\u0430\u043d\u0434\u0435\u0440\u043e\u0432\u0446\u0435\u0432 (NE) (\u043b\u0435\u0432\u0430\u043a\u043e\u0432 (NE), \u0432\u044b\u0441\u0442\u0443\u043f\u0430\u0432\u0448\u0438\u0445 \u0437\u0430 \u0431\u0435\u0441\u043a\u043b\u0430\u0441\u0441\u043e\u0432\u043e\u0435 (misspelling) \u043e\u0431\u0449\u0435\u0441\u0442\u0432\u043e \u0438 \u0431\u043e\u0440\u044c\u0431\u0443 \u0441 \u043a\u0430\u043f\u0438\u0442\u0430\u043b\u0438\u0437\u043c\u043e\u043c) \u0438 \u043a\u0430\u0440\u043b\u0438\u043a\u0430-\u0434\u0443\u0448\u0438\u0442\u0435\u043b\u044f \u043a\u043e\u0442\u043e\u0432 \u0421\u0442\u0435\u043f\u0430\u043d\u0430 \u0411\u0430\u043d\u0434\u0435\u0440\u044b (NE), \u043a\u043e\u0442\u043e\u0440\u044b\u0439, \u043a\u0430\u043a \u0438\u0437\u0432\u0435\u0441\u0442\u043d\u043e, \u0431\u043e\u0440\u043e\u043b\u0441\u044f \u0441 \u0440\u0430\u0441\u0438\u0437\u043c\u043e\u043c, 
\u043f\u043e\u0434\u0434\u0435\u0440\u0436\u0438\u0432\u0430\u043b \u0418\u0434\u0435\u043b\u044c-\u0423\u0440\u0430\u043b (NE) \u0438 \u043d\u0430\u0437\u044b\u0432\u0430\u043b \u043f\u043e\u0431\u0440\u0430\u0442\u0438\u043c\u0430\u043c\u0438 \u0438\u0441\u043b\u0430\u043c\u0441\u043a\u0438\u0445 \u0431\u043e\u0440\u0446\u043e\u0432 \u0437\u0430 \u0441\u0432\u043e\u0431\u043e\u0434\u0443 \u0438\u0437 \u0410\u0437\u0435\u0440\u0431\u0430\u0439\u0434\u0436\u0430\u043d\u0430 (NE), \u043d\u0435 \u043f\u043e\u043b\u044c\u0437\u0443\u044e\u0442\u0441\u044f \u0441\u0438\u043c\u043f\u0430\u0442\u0438\u044f\u043c\u0438 \u0443 \u043f\u0440\u0430\u0432\u044b\u0445 \u0435\u0432\u0440\u043e\u043f\u0435\u0439\u0446\u0435\u0432.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.7" }, { "text": "The mistakes made by mBERT are roughly a superset of those made by RuBERT. This suggests that information mBERT can gain from other languages is not particularly helpful for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "4.7" }, { "text": "This paper presented data, models and experiments for abusive language detection in Russian. By choosing the right preprocessing techniques and language-specific feature selection it is possible to achieve state-of-the-art performance on par with best-performing English language models, even using a simple SVM model. 
This indicates that, given sufficient diversity of data, abusive language detection solutions can be rapidly developed for new languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The code and data for this research are publicly available at: https://github.com/Sariellee/ Russan-Hate-speech-Recognition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://www.theverge.com/2020/5/12/21255870/facebookcontent-moderator-settlement-scola-ptsd-mental-health", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/fivethirtyeight/russian-troll-tweets", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "I.e. as first language 4 Common Knowledge Russian Tweets, http://study.mokoron.com/ 5 http://www.russki-mat.net/home.php", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This appendix describes metadata for RSP, following Bender and Friedman (2018) .A. Curation rationale The texts were taken from the South Park TV series in order to gather a corpus relatively rich in various forms of abusive language.B. Language variety Scripted Russian translated at high standard from US English. BCP47 ru-RU C. Speaker demographic The text is transcribed from words of Russian actors, mostly male, portraying characters who are both adults and children. The child characters (age eight) make up most of the speech content. The scripts were originally written by two US males from Colorado, over a period where they were aged 20-something to 40-something.D. Annotator demographic Native Russian speakers, male, twenties, university students. E. 
Speech situation This is scripted TV speech; it is not known how much latitude the voice actors were afforded over wording.", "cite_spans": [ { "start": 52, "end": 78, "text": "Bender and Friedman (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "A Data Statement", "sec_num": null }, { "text": "The content is deliberately somewhat foul-mouthed and very informal; political satire and social commentary are common themes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "F. Text characteristics", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Detection of abusive speech for mixed sociolects of Russian and Ukrainian Languages", "authors": [ { "first": "Bohdan", "middle": [], "last": "Andrusyak", "suffix": "" }, { "first": "Mykhailo", "middle": [], "last": "Rimel", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Kern", "suffix": "" } ], "year": 2018, "venue": "Proceedings of RASLAN", "volume": "", "issue": "", "pages": "77--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bohdan Andrusyak, Mykhailo Rimel, and Roman Kern. 2018. Detection of abusive speech for mixed sociolects of Russian and Ukrainian Languages. In Proceedings of RASLAN, pages 77-84.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generalisation in named entity recognition: A quantitative analysis", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2017, "venue": "Computer Speech & Language", "volume": "44", "issue": "", "pages": "61--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. 
Computer Speech & Language, 44:61-83.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The #BenderRule: On Naming the Languages We Study and Why It Matters. The Gradient", "authors": [ { "first": "Emily", "middle": [], "last": "Bender", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Bender. 2019. The #BenderRule: On Naming the Languages We Study and Why It Matters. The Gradient.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "authors": [ { "first": "M", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Bender", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "DeepPavlov: Open-source library for dialogue systems", "authors": [ { "first": "Mikhail", "middle": [], "last": "Burtsev", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Seliverstov", "suffix": "" }, { "first": "Rafael", "middle": [], "last": "Airapetyan", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Arkhipov", "suffix": "" }, { "first": "Dilyara", "middle": [], "last": "Baymurzina", "suffix": "" }, { "first": "Nickolay", "middle": [], "last": "Bushkov", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Gureenkova", "suffix": "" }, { "first": "Taras", "middle": [], "last": "Khakhulin", "suffix": "" }, { "first": "Yurii", "middle": [], "last": "Kuratov", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Kuznetsov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "122--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Burtsev, Alexander Seliverstov, Rafael Airapetyan, Mikhail Arkhipov, Dilyara Baymurzina, Nickolay Bushkov, Olga Gureenkova, Taras Khakhulin, Yurii Kuratov, Denis Kuznetsov, et al. 2018. DeepPavlov: Open-source library for dialogue systems. 
In Proceedings of ACL 2018, System Demonstrations, pages 122-127.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Analysis of named entity recognition and linking for tweets", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Rizzo", "suffix": "" }, { "first": "Marieke", "middle": [], "last": "Van Erp", "suffix": "" }, { "first": "Genevieve", "middle": [], "last": "Gorrell", "suffix": "" }, { "first": "Rapha\u00ebl", "middle": [], "last": "Troncy", "suffix": "" }, { "first": "Johann", "middle": [], "last": "Petrak", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2015, "venue": "Information Processing & Management", "volume": "51", "issue": "2", "pages": "32--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke Van Erp, Genevieve Gorrell, Rapha\u00ebl Troncy, Johann Petrak, and Kalina Bontcheva. 2015. Analysis of named entity recognition and linking for tweets. Information Processing & Management, 51(2):32-49.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Combination of multiple deep learning architectures for offensive language detection in tweets", "authors": [ { "first": "Nicol\u00f2", "middle": [], "last": "Frisiani", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Laignelet", "suffix": "" }, { "first": "Batuhan", "middle": [], "last": "G\u00fcler", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.08734" ] }, "num": null, "urls": [], "raw_text": "Nicol\u00f2 Frisiani, Alexis Laignelet, and Batuhan G\u00fcler. 2019. Combination of multiple deep learning architectures for offensive language detection in tweets. arXiv preprint arXiv:1903.08734.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Morphological analyzer and generator for Russian and Ukrainian languages", "authors": [ { "first": "Mikhail", "middle": [], "last": "Korobov", "suffix": "" } ], "year": 2015, "venue": "Analysis of Images", "volume": "542", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-319-26123-2_31" ] }, "num": null, "urls": [], "raw_text": "Mikhail Korobov. 2015. Morphological analyzer and generator for Russian and Ukrainian languages. In Mikhail Yu. Khachay, Natalia Konstantinova, Alexander Panchenko, Dmitry I. Ignatov, and Valeri G.
Labunets, editors, Analysis of Images, Social Networks and Texts, volume 542 of Communications in Computer and Information Science, pages 320-", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Detecting state of aggression in sentences using CNN", "authors": [ { "first": "Rodmonga", "middle": [], "last": "Potapova", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Gordeev", "suffix": "" } ], "year": 2016, "venue": "International Conference on Speech and Computer", "volume": "", "issue": "", "pages": "240--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodmonga Potapova and Denis Gordeev. 2016. Detecting state of aggression in sentences using CNN. In International Conference on Speech and Computer, pages 240-245.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RuSentiment: An enriched sentiment analysis dataset for social media in Russian", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" }, { "first": "Svitlana", "middle": [], "last": "Volkova", "suffix": "" }, { "first": "Mikhail", "middle": [], "last": "Gronas", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Gribov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th international conference on computational linguistics", "volume": "", "issue": "", "pages": "755--763", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Alexey Romanov, Anna Rumshisky, Svitlana Volkova, Mikhail Gronas, and Alex Gribov. 2018. RuSentiment: An enriched sentiment analysis dataset for social media in Russian.
In Proceedings of the 27th international conference on computational linguistics, pages 755-763.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A method for development and analysis of short text corpus for the review classification task", "authors": [ { "first": "YV", "middle": [], "last": "Rubtsova", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Conferences Digital Libraries: Advanced Methods and Technologies, Digital Collections, RCDL", "volume": "", "issue": "", "pages": "269--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "YV Rubtsova. 2013. A method for development and analysis of short text corpus for the review classification task. In Proceedings of Conferences Digital Libraries: Advanced Methods and Technologies, Digital Collections, RCDL, pages 269-275.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Corpus annotation through crowdsourcing: Towards best practice guidelines", "authors": [ { "first": "Marta", "middle": [], "last": "Sabou", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Arno", "middle": [], "last": "Scharl", "suffix": "" } ], "year": 2014, "venue": "Proceedings of LREC", "volume": "", "issue": "", "pages": "859--866", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Sabou, Kalina Bontcheva, Leon Derczynski, and Arno Scharl. 2014. Corpus annotation through crowdsourcing: Towards best practice guidelines. In Proceedings of LREC, pages 859-866.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Toxic Comments Detection in Russian", "authors": [ { "first": "Sergey", "middle": [], "last": "Smetanin", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Smetanin. 2020. Toxic Comments Detection in Russian.
In Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference \"Dialogue 2020\".", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hatebase: Online database of hate speech. The Sentinel Project", "authors": [ { "first": "Christopher", "middle": [], "last": "Tuckwood", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Tuckwood. 2017. Hatebase: Online database of hate speech. The Sentinel Project. Available at: https://www.hatebase.org.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Directions in abusive language training data, a systematic review: Garbage in, garbage out", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" } ], "year": 2020, "venue": "PLoS ONE", "volume": "15", "issue": "12", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data, a systematic review: Garbage in, garbage out. PLoS ONE, 15(12):e0243300.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Are you a racist or am I seeing things? Annotator influence on hate speech detection on twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the first workshop on NLP and computational social science", "volume": "", "issue": "", "pages": "138--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138-142.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hateful symbols or hateful people?
Predictive features for hate speech detection on twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020)", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Pepa", "middle": [], "last": "Atanasova", "suffix": "" }, { "first": "Georgi", "middle": [], "last": "Karadzhov", "suffix": "" }, { "first": "Hamdy", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Zeses", "middle": [], "last": "Pitenis", "suffix": "" }, { "first": "\u00c7a\u011fr\u0131", "middle": [], "last": "\u00c7\u00f6ltekin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7a\u011fr\u0131 \u00c7\u00f6ltekin. 2020.
SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Reducing unintended identity bias in Russian hate speech detection", "authors": [ { "first": "Nadezhda", "middle": [], "last": "Zueva", "suffix": "" }, { "first": "Madina", "middle": [], "last": "Kabirova", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kalaidin", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Online Abuse and Harms", "volume": "", "issue": "", "pages": "65--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadezhda Zueva, Madina Kabirova, and Pavel Kalaidin. 2020. Reducing unintended identity bias in Russian hate speech detection. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 65-69.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Dataset parts size and balance" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Confusion matrices of the baseline model" }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Improved recall and precision on both datasets without stopword filtering (a) with RTC data (b) no RTC data" }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Performance without giving balancing instance weights" }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "Figure 5: Performance of BERT variations over the combined dataset" }, "TABREF1": { "num": null, "html": null, "text": "Word & token distribution across RSP", "type_str": "table", "content": "" }, "TABREF3": { "num": null, "html": null, "text": "", "type_str": "table", "content": "
: Uppercase and profane word distribution across the dataset
4 Experiments
4.1 Data Preprocessing
" }, "TABREF6": { "num": null, "html": null, "text": "", "type_str": "table", "content": "
: Ablations over data processing steps, with SVM classifier (F-scores)
" } } } }