{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:43:36.699129Z" }, "title": "Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing", "authors": [ { "first": "Minh", "middle": [ "Van" ], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "minhnv@cs.uoregon.edu" }, { "first": "Viet", "middle": [], "last": "Lai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "vietl@cs.uoregon.edu" }, { "first": "Amir", "middle": [], "last": "Pouran", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "apouranb@cs.uoregon.edu" }, { "first": "Ben", "middle": [], "last": "Veyseh", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "" }, { "first": "Huu", "middle": [], "last": "Thien", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "thien@cs.uoregon.edu" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oregon", "location": { "settlement": "Eugene", "region": "Oregon", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plugand-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit along with pretrained models and code are publicly available at: https: //github.com/nlp-uoregon/trankit.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We introduce Trankit, a lightweight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines over sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. 
Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit, along with pretrained models and code, is publicly available at: https://github.com/nlp-uoregon/trankit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many efforts have been devoted to developing multilingual NLP systems to overcome language barriers (Aharoni et al., 2019; Liu et al., 2019a; Taghizadeh and Faili, 2020; Zhu, 2020; Kanayama and Iwamoto, 2020; Nguyen and Nguyen, 2021) . A large portion of existing multilingual systems has focused on downstream NLP tasks that critically depend on upstream linguistic features, ranging from basic information such as token and sentence boundaries for raw text to more sophisticated structures such as part-of-speech tags, morphological features, and dependency trees of sentences (called fundamental NLP tasks). As such, building effective multilingual systems/pipelines for fundamental upstream NLP tasks to produce such information has the potential to transform multilingual downstream systems.", "cite_spans": [ { "start": 100, "end": 122, "text": "(Aharoni et al., 2019;", "ref_id": "BIBREF0" }, { "start": 123, "end": 141, "text": "Liu et al., 2019a;", "ref_id": "BIBREF16" }, { "start": 142, "end": 169, "text": "Taghizadeh and Faili, 2020;", "ref_id": "BIBREF31" }, { "start": 170, "end": 180, "text": "Zhu, 2020;", "ref_id": "BIBREF37" }, { "start": 181, "end": 208, "text": "Kanayama and Iwamoto, 2020;", "ref_id": "BIBREF11" }, { "start": 209, "end": 233, "text": "Nguyen and Nguyen, 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been several NLP toolkits that concern multilingualism for fundamental NLP tasks, featuring spaCy 1 , UDify (Kondratyuk and Straka, 2019) , Flair (Akbik et al., 2019) , CoreNLP (Manning et al., 2014) , UDPipe (Straka, 2018) , and Stanza (Qi et al., 2020) . However, these toolkits have their own limitations. spaCy is designed to focus on speed and thus sacrifices performance. UDify and Flair cannot process raw text as they depend on external tokenizers. CoreNLP supports raw text, but it does not offer state-of-the-art performance. UDPipe and Stanza are recent toolkits that leverage word embeddings, i.e., word2vec (Mikolov et al., 2013) and fastText (Bojanowski et al., 2017) , to deliver current state-of-the-art performance for many languages. However, Stanza's and UDPipe's pipelines for different languages are trained separately and do not share any component, especially the embedding layers that account for most of the model size. This makes their memory usage grow aggressively as pipelines for more languages are simultaneously needed and loaded into memory (e.g., for language learning apps). 
Most importantly, none of these toolkits has explored contextualized embeddings from pretrained transformer-based language models, which have the potential to significantly improve the performance of NLP tasks, as demonstrated in many prior works (Devlin et al., 2019; Liu et al., 2019b; Conneau et al., 2020) .", "cite_spans": [ { "start": 120, "end": 149, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF13" }, { "start": 158, "end": 178, "text": "(Akbik et al., 2019)", "ref_id": "BIBREF1" }, { "start": 189, "end": 211, "text": "(Manning et al., 2014)", "ref_id": "BIBREF18" }, { "start": 221, "end": 235, "text": "(Straka, 2018)", "ref_id": "BIBREF30" }, { "start": 249, "end": 266, "text": "(Qi et al., 2020)", "ref_id": "BIBREF29" }, { "start": 647, "end": 669, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF19" }, { "start": 683, "end": 708, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF3" }, { "start": 1388, "end": 1409, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF7" }, { "start": 1410, "end": 1428, "text": "Liu et al., 2019b;", "ref_id": "BIBREF17" }, { "start": 1429, "end": 1450, "text": "Conneau et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we introduce Trankit, a multilingual Transformer-based NLP Toolkit that overcomes such limitations. Our toolkit can process raw text for fundamental NLP tasks, supporting 56 languages with 90 pre-trained pipelines on 90 treebanks of Universal Dependencies v2.5 (Zeman et al., 2019) . By utilizing the state-of-the-art multilingual pretrained transformer XLM-Roberta (Conneau et al., 2020) , Trankit advances state-of-the-art performance for sentence segmentation, part-of-speech (POS) tagging, morphological feature tagging, and dependency parsing while achieving competitive or better performance for tokenization, multi-word token expansion, and lemmatization over the 90 treebanks. It also obtains competitive or better performance for named entity recognition (NER) on 11 public datasets. Unlike previous work, our token and sentence splitter is wordpiece-based instead of character-based to better exploit contextual information, which is beneficial in many languages. Consider the following sentence:", "cite_spans": [ { "start": 280, "end": 300, "text": "(Zeman et al., 2019)", "ref_id": null }, { "start": 385, "end": 407, "text": "(Conneau et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\"John Donovan from Argghhh! has put out a excellent slide show on what was actually found and fought for in Fallujah.\" Trankit correctly recognizes this as a single sentence, while the character-based sentence splitters of Stanza and UDPipe are easily fooled by the exclamation mark \"!\", treating it as two separate sentences. To our knowledge, this is the first work to successfully build a wordpiece-based token and sentence splitter that works well for 56 languages. Figure 1 presents the overall architecture of the Trankit pipeline, which features three novel transformer-based components: (i) the joint token and sentence splitter, (ii) the joint model for POS tagging, morphological tagging, and dependency parsing, and (iii) the named entity recognizer. 
One potential concern for our use of a large pretrained transformer model (i.e., XLM-Roberta) in Trankit involves GPU memory, as different transformer-based components in the pipeline for one or multiple languages must be simultaneously loaded into memory to serve multilingual tasks. This could consume extensive memory if different versions of the large pre-trained transformer (fine-tuned for each component) were employed in the pipeline. As such, we introduce a novel plug-and-play mechanism with Adapters to address this memory issue. Adapters are small networks injected inside all layers of the pretrained transformer model that have shown their effectiveness as a lightweight alternative to the traditional fine-tuning of pretrained transformers (Houlsby et al., 2019; Peters et al., 2019; Pfeiffer et al., 2020a,b) . In Trankit, a set of adapters (for transformer layers) and task-specific weights (for final predictions) are created for each transformer-based component for each language, while only a single large multilingual pretrained transformer is shared across components and languages. Adapters allow us to learn language-specific features for tasks. During training, the shared pretrained transformer is fixed while only the adapters and task-specific weights are updated. At inference time, depending on the language of the input text and the currently active component, the corresponding trained adapter and task-specific weights are activated and plugged into the pipeline to process the input. This mechanism not only solves the memory problem but also substantially reduces the training time.", "cite_spans": [ { "start": 108, "end": 118, "text": "Fallujah.\"", "ref_id": null }, { "start": 1524, "end": 1546, "text": "(Houlsby et al., 2019;", "ref_id": "BIBREF10" }, { "start": 1547, "end": 1567, "text": "Peters et al., 2019;", "ref_id": "BIBREF26" }, { "start": 1568, "end": 1593, "text": "Pfeiffer et al., 2020a,b)", "ref_id": null } ], "ref_spans": [ { "start": 474, "end": 482, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior works have used pre-trained transformers to build models for character-based word segmentation for Chinese (Yang, 2019; Tian et al., 2020; Che et al., 2020) ; POS tagging for Dutch, English, Chinese, and Vietnamese (de Vries et al., 2019; Tenney et al., 2019; Tian et al., 2020; Che et al., 2020; Nguyen and Nguyen, 2020) ; morphological feature tagging for Estonian and Persian (Kittask et al., 2020; Mohseni and Tebbifakhr, 2019) ; and dependency parsing for English and Chinese (Tenney et al., 2019; Che et al., 2020) . 
However, all of these works are developed only for specific languages and thus might not support or scale to the multilingual setting.", "cite_spans": [ { "start": 119, "end": 131, "text": "(Yang, 2019;", "ref_id": null }, { "start": 132, "end": 150, "text": "Tian et al., 2020;", "ref_id": "BIBREF33" }, { "start": 151, "end": 168, "text": "Che et al., 2020)", "ref_id": "BIBREF4" }, { "start": 227, "end": 250, "text": "(de Vries et al., 2019;", "ref_id": null }, { "start": 251, "end": 271, "text": "Tenney et al., 2019;", "ref_id": "BIBREF32" }, { "start": 272, "end": 290, "text": "Tian et al., 2020;", "ref_id": "BIBREF33" }, { "start": 291, "end": 308, "text": "Che et al., 2020;", "ref_id": "BIBREF4" }, { "start": 309, "end": 333, "text": "Nguyen and Nguyen, 2020)", "ref_id": null }, { "start": 391, "end": 413, "text": "(Kittask et al., 2020;", "ref_id": "BIBREF12" }, { "start": 414, "end": 443, "text": "Mohseni and Tebbifakhr, 2019)", "ref_id": "BIBREF21" }, { "start": 493, "end": 514, "text": "(Tenney et al., 2019;", "ref_id": "BIBREF32" }, { "start": 515, "end": 532, "text": "Che et al., 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some works have designed multilingual transformer-based systems via multilingual training on the combined data of different languages (Tsai et al., 2019; Kondratyuk and Straka, 2019; Ust\u00fcn et al., 2020) . However, multilingual training is suboptimal (see Section 5). Also, these systems still rely on external resources to perform tokenization and sentence segmentation, and are thus unable to consume raw text. To our knowledge, this is the first work to successfully build a multilingual transformer-based NLP toolkit where different transformer-based models for many languages can be simultaneously loaded into GPU memory and process raw text inputs of different languages.", "cite_spans": [ { "start": 154, "end": 182, "text": "Kondratyuk and Straka, 2019;", "ref_id": "BIBREF13" }, { "start": 183, "end": 202, "text": "Ust\u00fcn et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Adapters. Adapters play a critical role in making Trankit memory- and time-efficient for training and inference. Figure 2 shows the architecture and the location of an adapter inside a transformer layer. We use the adapter architecture proposed by (Pfeiffer et al., 2020a,b) , which consists of two projection layers Up and Down (feed-forward networks) and a residual connection.", "cite_spans": [ { "start": 250, "end": 276, "text": "(Pfeiffer et al., 2020a,b)", "ref_id": null } ], "ref_spans": [ { "start": 112, "end": 120, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "c_i = AddNorm(r_i), h_i = Up(ReLU(Down(c_i))) + r_i (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "where r_i is the input vector from transformer layer i to the adapter and h_i is the output vector returned to transformer layer i. During training, all the weights of the pretrained transformer (i.e., the gray boxes) are fixed and only the adapter weights of the two projection layers and the task-specific weights outside the transformer (for final predictions) are updated. 
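To make the adapter computation in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of such a bottleneck adapter; the module name, hidden and bottleneck sizes, and the way the frozen backbone is handled are illustrative assumptions rather than Trankit's exact implementation.

import torch.nn as nn

class Adapter(nn.Module):
    # bottleneck adapter: AddNorm, Down-projection, ReLU, Up-projection, residual connection
    def __init__(self, hidden_size=768, bottleneck_size=64):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)   # AddNorm over the layer input r_i
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.ReLU()

    def forward(self, r):
        c = self.norm(r)                             # c_i = AddNorm(r_i)
        return self.up(self.act(self.down(c))) + r   # h_i = Up(ReLU(Down(c_i))) + r_i

# During training, the shared transformer stays frozen and only adapters and task-specific
# heads receive gradients, e.g. (hypothetical backbone object):
#   for p in transformer.parameters(): p.requires_grad = False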
As demonstrated in Figure 1 , Trankit involves six components, described as follows.", "cite_spans": [], "ref_spans": [ { "start": 387, "end": 395, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "Multilingual Encoder with Adapters. This is our core component, which is shared across the different transformer-based components for the different languages of the system. Given an input raw text s, we first split it into substrings by spaces. Afterward, SentencePiece, a multilingual subword tokenizer (Kudo and Richardson, 2018; Kudo, 2018) , is used to further split each substring into wordpieces. By concatenating the wordpiece sequences of the substrings, we obtain an overall sequence of wordpieces w = [w_1, w_2, . . . , w_K] for s. In the next step, w is fed into the pretrained transformer, which is already integrated with adapters, to obtain the wordpiece representations:", "cite_spans": [ { "start": 295, "end": 322, "text": "(Kudo and Richardson, 2018;", "ref_id": "BIBREF15" }, { "start": 323, "end": 334, "text": "Kudo, 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x^{l,m}_{1:K} = Transformer(w_{1:K}; \u03b8^{l,m}_{AD})", "eq_num": "(2)" } ], "section": "Design and Architecture", "sec_num": "3" }, { "text": "Here, \u03b8^{l,m}_{AD} represents the adapter weights for language l and component m of the system. As such, we have specific adapters in all transformer layers for each component m and language l. Note that if K is larger than the maximum input length of the pretrained transformer (i.e., 512), we further divide w into consecutive chunks, each with a length less than or equal to the maximum length. The pretrained transformer is then applied over each chunk to obtain a representation vector for each wordpiece in w. Finally, x^{l,m}_{1:K} is sent to component m to perform the corresponding task (a code sketch of this chunked encoding appears later in this section).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "Joint Token and Sentence Splitter. Given the wordpiece representations x^{l,m}_{1:K} for this component, each vector x^{l,m}_i for w_i \u2208 w is consumed by a feed-forward network with a final softmax to predict whether w_i is the end of a single-word token, the end of a multi-word token, or the end of a sentence. The predictions for all wordpieces in w are then aggregated to determine token, multi-word token, and sentence boundaries for s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }, { "text": "Multi-word Token Expander. This component is responsible for expanding each detected multi-word token (MWT) into multiple syntactic words 2 . We follow Stanza in deploying a character-based seq2seq model for this component. This decision is based on our observation that the task is done best at the character level, and the character-based model (with character embeddings) is very small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }
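, { "text": "To make the shared encoder's handling of long inputs concrete, the following is a minimal sketch of running a wordpiece id sequence through a transformer in consecutive chunks of at most 512 pieces, as described for Eq. (2) above. It is an illustrative helper written against the Hugging Face transformers API rather than Trankit's internal code; adapter injection, special tokens, and batching are omitted.

import torch
from transformers import AutoModel

def encode_wordpieces(model, piece_ids, max_len=512):
    # split the wordpiece id sequence into consecutive chunks of at most max_len
    chunks = [piece_ids[i:i + max_len] for i in range(0, len(piece_ids), max_len)]
    reps = []
    with torch.no_grad():
        for chunk in chunks:
            ids = torch.tensor([chunk])                      # batch containing one chunk
            out = model(input_ids=ids).last_hidden_state[0]  # (chunk_len, hidden_size)
            reps.append(out)
    return torch.cat(reps, dim=0)                            # one vector per wordpiece in w

# example (hypothetical): model = AutoModel.from_pretrained('xlm-roberta-base')
#                          x = encode_wordpieces(model, piece_ids)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design and Architecture", "sec_num": "3" }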
, { "text": "Tagging and Dependency Parsing. In Trankit, given the detected sentences and tokens/words, we use a single model to jointly perform POS tagging, morphological feature tagging, and dependency parsing at the sentence level. Joint modeling mitigates error propagation, saves memory, and speeds up the system. In particular, given a sentence, the representation for each word is computed as the average of its wordpieces' transformer-based representations in x^{l,m}_{1:K}. Let t_{1:N} = [t_1, t_2, . . . , t_N] be the representations of the words in the sentence. We compute the following vectors using feed-forward networks FFN_*:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Model for POS Tagging, Morphological", "sec_num": null }, { "text": "r^{upos}_{1:N} = FFN_{upos}(t_{1:N}), r^{xpos}_{1:N} = FFN_{xpos}(t_{1:N}), r^{ufeats}_{1:N} = FFN_{ufeats}(t_{1:N}), r^{dep}_{0:N} = [x_{cls}; FFN_{dep}(t_{1:N})]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Model for POS Tagging, Morphological", "sec_num": null }, { "text": "The vectors for the words in r^{upos}_{1:N}, r^{xpos}_{1:N}, and r^{ufeats}_{1:N} are then passed to a softmax layer to make predictions for the UPOS, XPOS, and UFeats tags of each word. For dependency parsing, we use the classification token to represent the root node, and apply Deep Biaffine Attention (Dozat and Manning, 2017) and the Chu-Liu/Edmonds algorithm (Chu, 1965; Edmonds, 1967) to assign a syntactic head and the associated dependency relation to each word in the sentence.", "cite_spans": [ { "start": 289, "end": 314, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF8" }, { "start": 349, "end": 360, "text": "(Chu, 1965;", "ref_id": "BIBREF5" }, { "start": 361, "end": 375, "text": "Edmonds, 1967)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Model for POS Tagging, Morphological", "sec_num": null }, { "text": "Lemmatizer. This component receives sentences and their predicted UPOS tags to produce the canonical form for each word. We also employ a character-based seq2seq model for this component, as in Stanza.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Model for POS Tagging, Morphological", "sec_num": null }, { "text": "Named Entity Recognizer. Given a sentence, the named entity recognizer determines spans of entity names by assigning a BIOES tag to each token in the sentence. We deploy a standard sequence labeling architecture using transformer-based representations for tokens, involving a feed-forward network followed by a Conditional Random Field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Model for POS Tagging, Morphological", "sec_num": null }, { "text": "Detailed documentation for Trankit can be found at: https://trankit.readthedocs.io.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Usage", "sec_num": "4" }, { "text": "Trankit is written in Python and available on PyPI: https://pypi.org/project/trankit/. Users can install our toolkit via pip using:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }, { "text": "pip install trankit", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }, { "text": "Initialize a Pipeline. Lines 1-4 in Figure 3 show how to initialize a pretrained pipeline for English; the pipeline is instructed to run on a GPU and to store downloaded pretrained models in the specified cache directory. Trankit will not download pretrained models if they already exist.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }
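, { "text": "For illustration, a minimal end-to-end sketch of the initialization and processing described above is given below. It assumes, as in the documented basic functions, that the pipeline object can be called directly on a raw string to run all supported tasks; the example document string is ours.

from trankit import Pipeline

# initialize a pretrained English pipeline on GPU, caching downloaded models locally
p = Pipeline(lang='english', gpu=True, cache_dir='./cache')

doc_text = 'Hello! This is Trankit.'
# run tokenization, sentence segmentation, tagging, parsing, lemmatization, and NER
all_annotations = p(doc_text)
# the result is a hierarchical native Python dictionary that can be inspected directly
print(all_annotations.keys())", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }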
, { "text": "Multilingual Usage. Figure 3 shows how to initialize a multilingual pipeline and process inputs of different languages in Trankit:

from trankit import Pipeline
# initialize a multilingual pipeline
p = Pipeline(lang='english', gpu=True, cache_dir='./cache')
langs = ['arabic', 'chinese', 'dutch']

Basic Functions. Trankit can process inputs that are untokenized (raw) or pretokenized strings, at both the sentence and document levels. Figure 4 illustrates a simple piece of code that performs all the supported tasks for an input text. We organize Trankit's outputs into hierarchical native Python dictionaries, which can be easily inspected by users. Figure 5 demonstrates the outputs of the command at line 6 in Figure 4 . Trankit also provides customizable, trainable pipelines for around 100 languages via the TPipeline class, thanks to the XLM-Roberta encoder which is pretrained on those languages. Figure 6 illustrates how to train a token and sentence splitter with TPipeline.", "cite_spans": [ { "start": 265, "end": 295, "text": "['arabic', 'chinese', 'dutch']", "ref_id": null } ], "ref_spans": [ { "start": 20, "end": 28, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 431, "end": 439, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 636, "end": 644, "text": "Figure 5", "ref_id": "FIGREF4" }, { "start": 695, "end": 703, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 770, "end": 778, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }, { "text": "Demo Website. A demo website for Trankit, supporting the 90 pretrained pipelines, is hosted at: http://nlp.uoregon.edu/trankit. Figure 7 shows its interface.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 7", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Trankit Installation.", "sec_num": null }, { "text": "To achieve a fair comparison, we follow Stanza (Qi et al., 2020) to train and evaluate all the models on the same canonical data splits of 90 Universal Dependencies treebanks v2.5 (UD2.5) 3 (Zeman et al., 2019) (we skip 10 treebanks whose languages are not supported by XLM-Roberta), and 11 public NER datasets provided in the following corpora: AQMAR (Mohit et al., 2012), CoNLL02 (Tjong Kim Sang, 2002) , CoNLL03 (Tjong Kim Sang and De Meulder, 2003), GermEval14 (Benikova et al., 2014) , OntoNotes (Weischedel et al., 2013) , and WikiNER (Nothman et al., 2012) . Hyper-parameters for all models and datasets are selected based on the development data in this work. Table 1 compares the performance of Trankit and the latest available versions of other popular toolkits, including Stanza (v1.1.1) with current state-of-the-art performance, UDPipe (v1.2), and spaCy (v2.3) on the UD2.5 test sets. The performance for all systems is obtained using the official scorer of the CoNLL 2018 Shared Task 4 . (Figure 6 caption: Training a token and sentence splitter using the CoNLL-U formatted data (Nivre et al., 2020) .)", "cite_spans": [ { "start": 47, "end": 64, "text": "(Qi et al., 2020)", "ref_id": "BIBREF29" }, { "start": 310, "end": 332, "text": "(Tjong Kim Sang, 2002)", "ref_id": "BIBREF34" }, { "start": 437, "end": 459, "text": "(Benikova et al., 2014", "ref_id": "BIBREF2" }, { "start": 460, "end": 497, "text": "), OntoNotes (Weischedel et al., 2013", "ref_id": null }, { "start": 512, "end": 534, "text": "(Nothman et al., 2012)", "ref_id": "BIBREF25" }, { "start": 1020, "end": 1040, "text": "(Nivre et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 639, "end": 646, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Datasets & Hyper-parameters", "sec_num": "5.1" }, { "text": "
For the five illustrated languages, Trankit achieves competitive performance on tokenization, MWT expansion, and lemmatization. Importantly, Trankit outperforms the other toolkits on all remaining tasks (e.g., POS and morphological tagging), and the improvement is substantial and significant for sentence segmentation and dependency parsing. For example, English enjoys a 7.22% improvement for sentence segmentation, and 3.92% and 4.37% improvements for UAS and LAS in dependency parsing. For Arabic, Trankit has a remarkable improvement of 16.16% for sentence segmentation, while Chinese sees 12.31% and 12.72% improvements in UAS and LAS for dependency parsing. Over all 90 treebanks, Trankit outperforms the previous state-of-the-art framework Stanza on most of the tasks, particularly sentence segmentation (+3.24%), POS tagging (+1.44% for UPOS and +1.55% for XPOS), morphological tagging (+1.46%), and dependency parsing (+4.0% for UAS and +5.01% for LAS), while maintaining competitive performance on tokenization, multi-word token expansion, and lemmatization. Table 3 compares Trankit with Stanza (v1.1.1), Flair (v0.7), and spaCy (v2.3) on the test sets of the 11 considered NER datasets. Following Stanza, we report the performance of the other toolkits with their pretrained models on the canonical data splits if they are available. Otherwise, their best configurations are used to train the models on the same data splits (inherited from Stanza). Also, for the Dutch datasets, we retrain the models in Flair as those models (for Dutch) have been updated in version v0.7. As can be seen, Trankit obtains competitive or better performance for most of the languages, clearly demonstrating the benefit of using the pretrained transformer for multilingual NER. Table 4 reports the relative processing time for UD and NER of the toolkits compared to spaCy's CPU processing time 5 . For memory usage comparison, we show the model sizes of Trankit and Stanza for several languages in Table 5 . As can be seen, besides the multilingual transformer, model packages in Trankit take only dozens of megabytes, while Stanza consumes hundreds of megabytes for each package. As a result, Stanza uses much more memory when the pipelines for these languages are loaded at the same time. In fact, Trankit takes only 4.9GB to load all 90 pretrained pipelines for the 56 supported languages.", "cite_spans": [], "ref_spans": [ { "start": 1106, "end": 1113, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 1800, "end": 1807, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 2020, "end": 2027, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Universal Dependencies performance", "sec_num": "5.2" }, { "text": "This section compares Trankit with two other possible strategies to build a multilingual system for fundamental NLP tasks. In the first strategy (called \"Multilingual\"), we train a single pipeline where all the components in the pipeline are trained with the combined training data of all the languages. The second strategy (called \"No-adapters\") involves eliminating adapters from XLM-Roberta in Trankit. As such, in \"No-adapters\", pipelines are still trained separately for each language; the pretrained transformer is fixed; and only the task-specific weights (for predictions) in the components are updated during training. For evaluation, we select 9 treebanks in 3 different groups, i.e., high-resource, medium-resource, and low-resource, depending on the sizes of the treebanks. 
In particular, the high-resource group includes Czech, Russian, and Arabic; the mediumresource group includes French, English, and Chinese; and the low-resource group involves Belaru-sian, Telugu, and Lithuanian. Table 2 compares the average performance of Trankit, \"Multilingual\", and \"No-adapters\". As can be seen, \"Multilingual\" and \"No-adapters\" are significantly worse than the proposed adapter-based Trankit. We attribute this to the fact that multilingual training might suffer from unbalanced sizes of treebanks, causing highresource languages to dominate others and impairing the overall performance. For \"No-adapters\", fixing pretrained transformer might significantly limit the models' capacity for multiple tasks and languages.", "cite_spans": [], "ref_spans": [ { "start": 992, "end": 999, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "5.5" }, { "text": "We introduce Trankit, a transformer-based multilingual toolkit that significantly improves the performance for fundamental NLP tasks, including sentence segmentation, part-of-speech, morphological tagging, and dependency parsing over 90 Universal Dependencies v2.5 treebanks of 56 different languages. Our toolkit is fast on GPUs and efficient in memory use, making it usable for general users. In the future, we plan to improve our toolkit by investigating different pretrained transformers such as mBERT and XLM-Roberta large . We also plan to provide Named Entity Recognizers for more languages and add modules to perform more NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://spacy.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For languages (e.g., English, Chinese) that do not require MWT expansion, tokens and words are the same concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://universaldependencies.org/ conll18/evaluation.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "spaCy can process 8140 tokens and 5912 tokens per second for UD and NER, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Massively multilingual neural machine translation", "authors": [ { "first": "Roee", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3874--3884", "other_ids": { "DOI": [ "10.18653/v1/N19-1388" ] }, "num": null, "urls": [], "raw_text": "Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "FLAIR: An easy-to-use framework for state-of-theart NLP", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Rasul", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "54--59", "other_ids": { "DOI": [ "10.18653/v1/N19-4010" ] }, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the- art NLP. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 54-59, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "NoSta-d named entity annotation for German: Guidelines and dataset", "authors": [ { "first": "Darina", "middle": [], "last": "Benikova", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Reznicek", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", "volume": "", "issue": "", "pages": "2524--2531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darina Benikova, Chris Biemann, and Marc Reznicek. 2014. NoSta-d named entity annotation for Ger- man: Guidelines and dataset. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC-2014), pages 2524- 2531, Reykjavik, Iceland. European Languages Re- sources Association (ELRA).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "N-ltp: A open-source neural chinese language technology platform with pretrained models", "authors": [ { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Yunlong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Libo", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2009.11616" ] }, "num": null, "urls": [], "raw_text": "Wanxiang Che, Yunlong Feng, Libo Qin, and Ting Liu. 2020. N-ltp: A open-source neural chinese language technology platform with pretrained models. arXiv preprint arXiv:2009.11616.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the shortest arborescence of a directed graph", "authors": [ { "first": "Yoeng-Jin", "middle": [], "last": "Chu", "suffix": "" } ], "year": 1965, "venue": "Scientia Sinica", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised cross-lingual representation learning at scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8440--8451", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In Proceedings of the International Conference on Learning Representations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Optimum branchings", "authors": [ { "first": "Jack", "middle": [], "last": "Edmonds", "suffix": "" } ], "year": 1967, "venue": "Journal of Research of the national Bureau of Standards B", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Edmonds. 1967. Optimum branchings. Journal of Research of the national Bureau of Standards B.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Parameter-efficient transfer learning for nlp", "authors": [ { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Giurgiu", "suffix": "" }, { "first": "Stanislaw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Bruna", "middle": [], "last": "Morrone", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "De Laroussilhe", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Attariyan", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. 
In Proceedings of the International Conference on Machine Learning.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "How universal are Universal Dependencies? exploiting syntax for multilingual clause-level sentiment detection", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Kanayama", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Iwamoto", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4063--4073", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroshi Kanayama and Ran Iwamoto. 2020. How uni- versal are Universal Dependencies? exploiting syn- tax for multilingual clause-level sentiment detection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4063-4073, Mar- seille, France. European Language Resources Asso- ciation.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating multilingual bert for estonian", "authors": [ { "first": "Claudia", "middle": [], "last": "Kittask", "suffix": "" }, { "first": "Kirill", "middle": [], "last": "Milintsevich", "suffix": "" }, { "first": "Kairit", "middle": [], "last": "Sirts", "suffix": "" } ], "year": 2020, "venue": "", "volume": "328", "issue": "", "pages": "19--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Kittask, Kirill Milintsevich, and Kairit Sirts. 2020. Evaluating multilingual bert for estonian. Volume, 328:19-26.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "75 languages, 1 model: Parsing Universal Dependencies universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": { "DOI": [ "10.18653/v1/D19-1279" ] }, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing Universal Dependencies universally. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Con- ference on Natural Language Processing (EMNLP- IJCNLP), pages 2779-2795, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. Subword regularization: Improv- ing neural network translation models with multiple subword candidates. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66- 75, Melbourne, Australia. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. Sentence- Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural cross-lingual event detection with minimal parallel resources", "authors": [ { "first": "Jian", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "738--748", "other_ids": { "DOI": [ "10.18653/v1/D19-1068" ] }, "num": null, "urls": [], "raw_text": "Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2019a. Neural cross-lingual event detection with minimal parallel resources. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Con- ference on Natural Language Processing (EMNLP- IJCNLP), pages 738-748, Hong Kong, China. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. 
arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mcclosky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": { "DOI": [ "10.3115/v1/P14-5010" ] }, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In Proceedings of the Conference on Neural Information Processing Systems.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recalloriented learning of named entities in Arabic Wikipedia", "authors": [ { "first": "Behrang", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Rishav", "middle": [], "last": "Bhowmick", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "162--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A. Smith. 2012. Recall- oriented learning of named entities in Arabic Wikipedia. In Proceedings of the 13th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 162-173, Avi- gnon, France. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "MorphoBERT: a Persian NER system with BERT and morphological analysis", "authors": [ { "first": "Mahdi", "middle": [], "last": "Mohseni", "suffix": "" }, { "first": "Amirhossein", "middle": [], "last": "Tebbifakhr", "suffix": "" } ], "year": 2019, "venue": "Proceedings of The First International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2019) colocated with ICNLSP 2019 -Short Papers", "volume": "", "issue": "", "pages": "23--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahdi Mohseni and Amirhossein Tebbifakhr. 2019. MorphoBERT: a Persian NER system with BERT and morphological analysis. In Proceedings of The First International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2019) co- located with ICNLSP 2019 -Short Papers, pages 23- 30, Trento, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "PhoBERT: Pre-trained language models for Vietnamese", "authors": [], "year": null, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1037--1042", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.92" ] }, "num": null, "urls": [], "raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. In Findings of the Association for Com- putational Linguistics: EMNLP 2020, pages 1037- 1042, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improving cross-lingual transfer for event argument extraction with language-universal sentence structures", "authors": [ { "first": "Minh", "middle": [], "last": "Van Nguyen", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Huu Nguyen", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Sixth Arabic Natural Language Processing Workshop (WANLP) at EACL 2021", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh Van Nguyen and Thien Huu Nguyen. 2021. Im- proving cross-lingual transfer for event argument extraction with language-universal sentence struc- tures. In Proceedings of the Sixth Arabic Natural Language Processing Workshop (WANLP) at EACL 2021.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4034--4043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. 
Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Mar- seille, France. European Language Resources Asso- ciation.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning multilingual named entity recognition from Wikipedia", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Nicky", "middle": [], "last": "Ringland", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "194", "issue": "", "pages": "151--175", "other_ids": { "DOI": [ "10.1016/j.artint.2012.03.006" ] }, "num": null, "urls": [], "raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2012. Learning mul- tilingual named entity recognition from Wikipedia. Artificial Intelligence, 194:151-175.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "To tune or not to tune? adapting pretrained representations to diverse tasks", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)", "volume": "", "issue": "", "pages": "7--14", "other_ids": { "DOI": [ "10.18653/v1/W19-4302" ] }, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pre- trained representations to diverse tasks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP (RepL4NLP-2019), pages 7-14, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "AdapterHub: A framework for adapting transformers", "authors": [ { "first": "Jonas", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" }, { "first": "Clifton", "middle": [], "last": "Poth", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Kamath", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "46--54", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.7" ] }, "num": null, "urls": [], "raw_text": "Jonas Pfeiffer, Andreas R\u00fcckl\u00e9, Clifton Poth, Aish- warya Kamath, Ivan Vuli\u0107, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A framework for adapting transform- ers. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", "authors": [ { "first": "Jonas", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7654--7673", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.617" ] }, "num": null, "urls": [], "raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Se- bastian Ruder. 2020b. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Stanza: A python natural language processing toolkit for many human languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "101--108", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.14" ] }, "num": null, "urls": [], "raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101- 108, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "UDPipe 2.0 prototype at CoNLL 2018 UD shared task", "authors": [ { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "197--207", "other_ids": { "DOI": [ "10.18653/v1/K18-2020" ] }, "num": null, "urls": [], "raw_text": "Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197-207, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Crosslingual adaptation using universal dependencies", "authors": [ { "first": "Nasrin", "middle": [], "last": "Taghizadeh", "suffix": "" }, { "first": "Heshaam", "middle": [], "last": "Faili", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.10816" ] }, "num": null, "urls": [], "raw_text": "Nasrin Taghizadeh and Heshaam Faili. 2020. Cross- lingual adaptation using universal dependencies. 
arXiv preprint arXiv:2003.10816.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Joint Chinese word segmentation and partof-speech tagging via two-way attentions of autoanalyzed knowledge", "authors": [ { "first": "Yuanhe", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ao", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Quan", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yonggang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8286--8296", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.735" ] }, "num": null, "urls": [], "raw_text": "Yuanhe Tian, Yan Song, Xiang Ao, Fei Xia, Xi- aojun Quan, Tong Zhang, and Yonggang Wang. 2020. Joint Chinese word segmentation and part- of-speech tagging via two-way attentions of auto- analyzed knowledge. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 8286-8296, Online. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "", "suffix": "" }, { "first": "Tjong Kim", "middle": [], "last": "Sang", "suffix": "" } ], "year": 2002, "venue": "COLING-02: The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong", "suffix": "" }, { "first": "Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", "volume": "", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. 
Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Eric Villemonte de la Clergerie", "authors": [ { "first": "Shadi", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Alessio", "middle": [], "last": "Salomoni", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Samard\u017ei\u0107", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Samson", "suffix": "" }, { "first": "Manuela", "middle": [], "last": "Sanguinetti", "suffix": "" }, { "first": "Dage", "middle": [], "last": "S\u00e4rg", "suffix": "" }, { "first": "Baiba", "middle": [], "last": "Saul\u012bte", "suffix": "" }, { "first": "Yanin", "middle": [], "last": "Sawanakunanon", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "Mojgan", "middle": [], "last": "Seraji", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Atsuko", "middle": [], "last": "Shimada", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shirasu", "suffix": "" }, { "first": "Muh", "middle": [], "last": "Shohibussirri", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Sichinava", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Silveira", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Simi", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Simionescu", "suffix": "" }, { "first": "Katalin", "middle": [], "last": "Simk\u00f3", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "M\u00e1ria\u0161imkov\u00e1", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Simov", "suffix": "" }, { "first": "Isabela", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Soares-Bastos", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Spadine", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Stella", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Alane", "middle": [], "last": "Strnadov\u00e1", "suffix": "" }, { "first": "Umut", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Shingo", "middle": [], "last": "Sulubacak", "suffix": "" }, { "first": "Zsolt", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Dima", "middle": [], "last": "Sz\u00e1nt\u00f3", "suffix": "" }, { "first": "Yuta", "middle": [], "last": "Taji", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Takahashi", "suffix": "" }, { "first": "Takaaki", "middle": [], "last": "Tamburini", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Tellier", "suffix": "" }, { "first": "Liisi", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Torga", "suffix": "" }, { "first": "Naoki", "middle": [], "last": "Trosterud ; Mary Yako", "suffix": "" }, { "first": "Chunxiao", "middle": [], "last": "Yamazaki", "suffix": "" }, { "first": "Koichi", "middle": [], "last": "Yan", "suffix": "" }, { "first": "", "middle": 
[], "last": "Yasuoka", "suffix": "" }, { "first": "M", "middle": [], "last": "Marat", "suffix": "" }, { "first": "Zhuoran", "middle": [], "last": "Yavrumyan", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Zden\u011bk\u017eabokrtsk\u00fd", "suffix": "" }, { "first": "Manying", "middle": [], "last": "Zeldes", "suffix": "" }, { "first": "Hanzhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "Faculty of Mathematics and Physics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shadi Saleh, Alessio Salomoni, Tanja Samard\u017ei\u0107, Stephanie Samson, Manuela Sanguinetti, Dage S\u00e4rg, Baiba Saul\u012bte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djam\u00e9 Seddah, Wolf- gang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk\u00f3, M\u00e1ria\u0160imkov\u00e1, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Antonio Stella, Milan Straka, Jana Strnadov\u00e1, Alane Suhr, Umut Sulubacak, Shingo Suzuki, Zsolt Sz\u00e1nt\u00f3, Dima Taji, Yuta Takahashi, Fabio Tamburini, Takaaki Tanaka, Isabelle Tellier, Guillaume Thomas, Li- isi Torga, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zde\u0148ka Ure\u0161ov\u00e1, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gert- jan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washing- ton, Maximilan Wendt, Seyi Williams, Mats Wir\u00e9n, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wr\u00f3blewska, Mary Yako, Naoki Ya- mazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zden\u011bk\u017dabokrtsk\u00fd, Amir Zeldes, Manying Zhang, and Hanzhi Zhu. 2019. Universal dependencies 2.5. LINDAT/CLARIAH- CZ digital library at the Institute of Formal and Ap- plied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Cross-lingual word sense disambiguation using mbert embeddings with syntactic dependencies", "authors": [ { "first": "Xingran", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2012.05300" ] }, "num": null, "urls": [], "raw_text": "Xingran Zhu. 2020. Cross-lingual word sense disam- biguation using mbert embeddings with syntactic dependencies. arXiv preprint arXiv:2012.05300.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Overall architecture of Trankit. A single multilingual pretrained transformer is shared across three components (pointed by the red arrows) of the pipeline for different languages." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Left: location of an adapter (green box) inside a layer of the pretrained transformer. Gray boxes represent the original components of a transformer layer. Right: the network architecture of an adapter." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Multilingual pipeline initialization." 
}, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "A function performing all tasks on the input." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": ": 'Hello! This is Trankit.', // input string 'sentences': [ // list of sentences { 'id': 1, 'text': 'Hello!', 'dspan': (0, 6), 'tokens': [...] }, { 'id': 2, // sentence index 'text': 'This is Trankit.', 'dspan': (7, 23), // sentence span 'tokens': [ // list of tokens { 'id': 1, // token index 'text': 'This', 'upos': 'PRON', 'xpos': 'DT', 'feats': 'Number=Sing|PronType=Dem', 'head': 3, 'deprel': 'nsubj', 'lemma': 'this', 'ner': 'O', 'dspan': (7, 11), // document-level span of the token 'span': (0, 4) // sentence-level span of the token }, {'id': 2...}, {'id': 3...}, {'id': 4...Output from Trankit. Some parts are collapsed to improve visualization." }, "FIGREF5": { "type_str": "figure", "num": null, "uris": null, "text": "Demo website for Trankit." }, "TABREF2": { "num": null, "html": null, "type_str": "table", "text": "Systems' performance on test sets of the Universal Dependencies v2.5 treebanks. Performance for Stanza, UDPipe, and spaCy is obtained using their public pretrained models. The overall performance for Trankit and Stanza is computed as the macro-averaged F1 over 90 treebanks. Detailed performance of Trankit for 90 supported treebanks can be found at our documentation page.", "content": "
Treebank | System | Tokens | Sents. | Words | UPOS | XPOS | UFeats | Lemmas | UAS | LAS
Overall (90 treebanks) | Trankit | 99.23 | 91.82 | 99.02 | 95.65 | 94.05 | 93.21 | 94.27 | 87.06 | 83.69
Overall (90 treebanks) | Stanza | 99.26 | 88.58 | 98.90 | 94.21 | 92.50 | 91.75 | 94.15 | 83.06 | 78.68
Arabic-PADT | Trankit | 99.93 | 96.59 | 99.22 | 96.31 | 94.08 | 94.28 | 94.65 | 88.39 | 84.68
Arabic-PADT | Stanza | 99.98 | 80.43 | 97.88 | 94.89 | 91.75 | 91.86 | 93.27 | 83.27 | 79.33
Arabic-PADT | UDPipe | 99.98 | 82.09 | 94.58 | 90.36 | 84.00 | 84.16 | 88.46 | 72.67 | 68.14
Chinese-GSD | Trankit | 97.01 | 99.7 | 97.01 | 94.21 | 94.02 | 96.59 | 97.01 | 85.19 | 82.54
Chinese-GSD | Stanza | 92.83 | 98.80 | 92.83 | 89.12 | 88.93 | 92.11 | 92.83 | 72.88 | 69.82
Chinese-GSD | UDPipe | 90.27 | 99.10 | 90.27 | 84.13 | 84.04 | 89.05 | 90.26 | 61.60 | 57.81
English-EWT | Trankit | 98.48 | 88.35 | 98.48 | 95.95 | 95.71 | 96.26 | 96.84 | 90.14 | 87.96
English-EWT | Stanza | 99.01 | 81.13 | 99.01 | 95.40 | 95.12 | 96.11 | 97.21 | 86.22 | 83.59
English-EWT | UDPipe | 98.90 | 77.40 | 98.90 | 93.26 | 92.75 | 94.23 | 95.45 | 80.22 | 77.03
English-EWT | spaCy | 97.44 | 63.16 | 97.44 | 86.99 | 91.05 | - | 87.16 | 55.38 | 37.03
French-GSD | Trankit | 99.7 | 96.63 | 99.66 | 97.85 | - | 97.16 | 97.80 | 94.00 | 92.34
French-GSD | Stanza | 99.68 | 94.92 | 99.48 | 97.30 | - | 96.72 | 97.64 | 91.38 | 89.05
French-GSD | UDPipe | 99.68 | 93.59 | 98.81 | 95.85 | - | 95.55 | 96.61 | 87.14 | 84.26
French-GSD | spaCy | 99.02 | 89.73 | 94.81 | 89.67 | - | - | 88.55 | 75.22 | 66.93
Spanish-Ancora | Trankit | 99.94 | 99.13 | 99.93 | 99.02 | 98.94 | 98.8 | 99.17 | 94.11 | 92.41
Spanish-Ancora | Stanza | 99.98 | 99.07 | 99.98 | 98.78 | 98.67 | 98.59 | 99.19 | 92.21 | 90.01
Spanish-Ancora | UDPipe | 99.97 | 98.32 | 99.95 | 98.32 | 98.13 | 98.13 | 98.48 | 88.22 | 85.10
Spanish-Ancora | spaCy | 99.95 | 97.54 | 99.43 | 93.43 | - | - | 80.02 | 89.35 | 83.81
from trankit import Pipeline

p = Pipeline(lang='english', gpu=True, cache_dir='./cache')

doc = '''Hello! This is Trankit.'''
# perform all tasks on the input
all = p(doc)
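The nested dictionary returned by p(doc) can be walked directly. The snippet below is a minimal sketch, not part of the original listing, relying only on the keys shown in the example output above ('sentences', 'tokens', 'dspan', 'upos', 'head', 'deprel'); it is not a complete schema:

# Traverse the document-level output: one entry per sentence, each with its tokens.
for sent in all['sentences']:
    # sentence index, text, and document-level character span
    print(sent['id'], sent['text'], sent['dspan'])
    for tok in sent['tokens']:
        # part-of-speech tag, dependency head index, and dependency relation per token
        print('  ', tok['id'], tok['text'], tok['upos'], tok['head'], tok['deprel'])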
" }, "TABREF3": { "num": null, "html": null, "type_str": "table", "text": "SystemTokens Sents. Words UPOS XPOS UFeats Lemmas UAS LAS Trankit (plug-and-play with adapters)", "content": "
Trankit (plug-and-play with adapters) | .69 93.46 86.20 82.51
Multilingual | 96.69 | 88.95 | 96.35 | 91.19 | 84.64 | 88.10 | 90.02 | 72.96 | 68.66
No-adapters | 95.06 | 89.57 | 94.08 | 88.79 | 82.54 | 83.76 | 88.33 | 66.63 | 63.11
" }, "TABREF4": { "num": null, "html": null, "type_str": "table", "text": "Model performance on 9 different treebanks (macro-averaged F1 score over test sets).", "content": "
from trankit import TPipeline

tp = TPipeline(training_config={
    'task': 'tokenize',
    'save_dir': './saved_model',
    'train_txt_fpath': './train.txt',
    'train_conllu_fpath': './train.conllu',
    'dev_txt_fpath': './dev.txt',
    'dev_conllu_fpath': './dev.conllu'})

tp.train()
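The same configuration pattern could in principle be repeated for other pipeline components. The loop below is only an illustrative sketch of reusing the config dict; the task names other than 'tokenize' and the exact data keys each task requires are assumptions, not verified settings:

from trankit import TPipeline

# Illustrative only: iterate over assumed task identifiers with the same config
# layout as above; real tasks may need different or additional data-file keys.
for task in ('tokenize', 'posdep', 'lemmatize'):
    TPipeline(training_config={
        'task': task,  # assumed task name
        'save_dir': './saved_model',
        'train_txt_fpath': './train.txt',
        'train_conllu_fpath': './train.conllu',
        'dev_txt_fpath': './dev.txt',
        'dev_conllu_fpath': './dev.conllu'}).train()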
" }, "TABREF6": { "num": null, "html": null, "type_str": "table", "text": "Performance (F1) on NER test sets.", "content": "
System | UD (GPU) | NER (GPU) | UD (CPU) | NER (CPU)
Trankit | 4.50\u00d7 | 1.36\u00d7 | 19.8\u00d7 | 31.5\u00d7
Stanza | 3.22\u00d7 | 1.08\u00d7 | 10.3\u00d7 | 17.7\u00d7
UDPipe | - | - | 4.30\u00d7 | -
Flair | - | 1.17\u00d7 | - | 51.8\u00d7
" }, "TABREF7": { "num": null, "html": null, "type_str": "table", "text": "Run time on processing the English EWT treebank and the English Ontonotes NER dataset. Measurements are done on an NVIDIA Titan RTX card.", "content": "
Model Package | Trankit | Stanza
Multilingual Transformer | 1146.9MB | -
Arabic | 38.6MB | 393.9MB
Chinese | 40.6MB | 225.2MB
English | 47.9MB | 383.5MB
French | 39.6MB | 561.9MB
Spanish | 37.3MB | 556.1MB
Total size | 1350.9MB | 2120.6MB
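The totals in the last row are consistent with the shared-transformer design: for Trankit, 1146.9 + 38.6 + 40.6 + 47.9 + 39.6 + 37.3 = 1350.9MB, where the 1146.9MB transformer is counted once for all five languages, while Stanza's per-language models sum independently to 393.9 + 225.2 + 383.5 + 561.9 + 556.1 = 2120.6MB.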
" }, "TABREF8": { "num": null, "html": null, "type_str": "table", "text": "Model sizes for five languages.", "content": "" } } } }