{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:11:52.804284Z" }, "title": "Expanding the JHU Bible Corpus for Machine Translation of the Indigenous Languages of North America", "authors": [ { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "country": "Canada" } }, "email": "garrett.nicolai@ubc.ca" }, { "first": "Edith", "middle": [], "last": "Coates", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "country": "Canada" } }, "email": "ecoates.bc@gmail.com" }, { "first": "Ming", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "country": "Canada" } }, "email": "" }, { "first": "Miikka", "middle": [], "last": "Silfverberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "country": "Canada" } }, "email": "miikka.silfverberg@ubc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an extension to the JHU Bible corpus, collecting and normalizing more than thirty Bible translations in thirty Indigenous languages of North America. These exhibit a wide variety of interesting syntactic and morphological phenomena that are understudied in the computational community. Neural translation experiments demonstrate significant gains obtained through cross-lingual, many-to-many translation, with improvements of up to 8.4 BLEU over monolingual models for extremely low-resource languages.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present an extension to the JHU Bible corpus, collecting and normalizing more than thirty Bible translations in thirty Indigenous languages of North America. These exhibit a wide variety of interesting syntactic and morphological phenomena that are understudied in the computational community. Neural translation experiments demonstrate significant gains obtained through cross-lingual, many-to-many translation, with improvements of up to 8.4 BLEU over monolingual models for extremely low-resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In 2019, Johns Hopkins University collated a corpus of translations of the Christian Bible in more than 1500 languages -the largest such corpus ever collected (Mc-Carthy et al., 2020) . Its parallel structure allows for significant experimentation in cross-lingual and data augmentation methods, and provides data for many underserved languages of the world. However, even at its impressive size, the corpus only represents roughly 20% of the world's languages, and is relatively sparse in the Indigenous languages of North America. Despite Ethnologue listing 254 living languages on the continent, the corpus only contains translations for 6 of them.", "cite_spans": [ { "start": 159, "end": 183, "text": "(Mc-Carthy et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we describe an extension of the JHU Bible corpus -namely, the addition of translations in 24 Indigenous North American languages, and new translations in six more. 
1 Our work continues a tradition of expanding Bible corpora to be more inclusive - Resnik et al. (1999) 's 13 parallel languages grew into Christodouloupoulos and Steedman (2015) 's 100. Mayer and Cysouw (2014) established a corpus that eventually grew to 1556 Bibles in 1169 languages (Asgari and Sch\u00fctze, 2017), which was then subsumed by the 1611 language JHUBC .", "cite_spans": [ { "start": 179, "end": 180, "text": "1", "ref_id": null }, { "start": 262, "end": 282, "text": "Resnik et al. (1999)", "ref_id": "BIBREF14" }, { "start": 318, "end": 357, "text": "Christodouloupoulos and Steedman (2015)", "ref_id": "BIBREF6" }, { "start": 366, "end": 389, "text": "Mayer and Cysouw (2014)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Beyond contributing an important linguistic resource, our work also allows for development of computational tools for North American Indigenous languages -an important step in increasing the global pres- 1 The corpus is available by request at https://github.com/GarrettNicolai/FirstNationsBibles ence of the language communities. We demonstrate the usefulness of our Indigenous parallel corpus by building multilingual neural machine translation systems for North American Indigenous languages. Multilingual training is shown to be beneficial especially for the most resource-poor languages in our corpus which lack complete Bible translations.", "cite_spans": [ { "start": 204, "end": 205, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Bible is perhaps unique as a parallel text. Partial translations exist in more languages than any other text (Mayer and Cysouw, 2014 ). 2 Furthermore, for nearly 500 years, the Bible has had a canonical hierarchical structure -the Bible is made up of 66 books, each of which contains a number of chapters, which are, in turn, broken down into verses. Each verse corresponds to a short segment -often no more than a sentence. Bible translations preserve this structure as much as possible, meaning that translations are much easier to parallelize than typical texts.", "cite_spans": [ { "start": 113, "end": 136, "text": "(Mayer and Cysouw, 2014", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Construction", "sec_num": "2" }, { "text": "The first step in collecting Indigenous translations of the Bible is identifying existing translations. After first creating a list of Indigenous languages of North America, we searched existing Bible corpora online to obtain translations in as many languages as possible. For the majority of the collected Bibles, we obtained complete New Testament translations -consisting of 27 books of varying lengths. An additional 5 languages also contain complete Old Testament translations. The full list of languages is given in Table 1 and all the corpus data are available upon request. We emphasize that even incomplete translations -such as Siksika, which only has 2 translated books, are useful, particularly when they are in a parallel format with other related languages. 
Even a single book will typically contain a few hundred verses, which while small, can still be informative.", "cite_spans": [], "ref_spans": [ { "start": 522, "end": 529, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Construction", "sec_num": "2" }, { "text": "We collected Bibles from a variety of freely accessible online sources 3 : The Canadian Bible Society ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sources", "sec_num": "2.1" }, { "text": "We extend the JHUBC by 24 languages in 8 language families (including 2 isolates), with new translations in an additional 6 languages. The breakdown of language families is illustrated in Table 1 . In Table 2 , we demonstrate the type-to-token ratios for each language family in our corpus. We only include languages for which we have at least the New Testament, taking the largest translation that we have; we then average (weighted by number of verses) over each language family. A high TTR typically indicates a language with significant morphological productivity. As can be seen, the Indigenous languages in the corpus display high degrees of morphological productivity. Even the family having the lowest TTR, Uto-Aztecan still has four times as many types as English, and the Inuit-Aleut family, well-remarked for exhibiting productive synthetic morphology, will have 18 times the number of unique types as an English text of the same size. The languages that we collect exhibit a wide range of interesting linguistic phenomena. Several of the languages are predominantly SVO languages (if all arguments occur in the sentence) (Schmirler et al., 2018) but we also include languages like Haida where SOV constructions are prevalent (Enrico, 2003) . We also have examples of both nominative-accusative alignment and ergative-absolutive alignment exemplified by Inuktitut in the Inuit-Aleut family (Nowak, 2011) . Additionally, the languages display a large variety of interesting morphological features. We find examples of predominantly suffixing morphology in the Algic languages and extensive use of prefixes encountered in Athabaskan languages. Furthermore, animacy is an important grammatical category which is morphologically marked in Plains Cree (Schmirler et al., 2018) and other Algic languages.", "cite_spans": [ { "start": 1133, "end": 1157, "text": "(Schmirler et al., 2018)", "ref_id": "BIBREF15" }, { "start": 1237, "end": 1251, "text": "(Enrico, 2003)", "ref_id": "BIBREF7" }, { "start": 1401, "end": 1414, "text": "(Nowak, 2011)", "ref_id": "BIBREF12" }, { "start": 1758, "end": 1782, "text": "(Schmirler et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 201, "end": 208, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Corpus Statistics", "sec_num": "2.2" }, { "text": "Although Bibles are readily parallelizable in general due to the canonical division into books, chapters and verses, translations sometimes combine several verses into one creating a discrepancy between the verse numbering in different Bible translations. JHUBC follows a convention presented by Mayer and Cysouw (2014) : the combined verse is listed as the first verse in the sequence (ie, verse 16, if it spans 16-18), while the other verses are marked as \"BLANK\". 
While reasonable, this convention can result in difficulties for crosslingual training, as one verse on one side of data aligns with many verses on the other, and many verses must be discarded. We opt for a different approach and instead split combined verses apart. We identify separation points using a mixed Naive-Bayes classifier (Hsu et al., 2008) with two features: punctuation and token ratio. We assume that the relative length of the individual verses is likely to be similar across languages, and calculate the ratio of tokens between individual verses and the combined verse in our English Bible reference. An evaluation on artificially-combined verses demonstrates a macro-averaged F-score of 86% on identifying splitting points when two verses require splitting.", "cite_spans": [ { "start": 296, "end": 319, "text": "Mayer and Cysouw (2014)", "ref_id": "BIBREF9" }, { "start": 801, "end": 819, "text": "(Hsu et al., 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Verse Splitting", "sec_num": "2.3" }, { "text": "We conduct a number of neural-MT experiments on the data. We investigate translation quality both for bilingual translation systems and for multilingual systems, while applying a number of variations to the training Figure 1 : Example of our training data format for many-to-many NMT experiments. The first symbol on each line (e.g. Bible.Algonquin) gives the language of the current sentence and the second one shows the language of the corresponding target or source sentence. This allows us to use each sentence both in the source and target set. procedure of the NMT systems in order to improve translation quality. These are described in detail below.", "cite_spans": [ { "start": 333, "end": 349, "text": "Bible.Algonquin)", "ref_id": null } ], "ref_spans": [ { "start": 216, "end": 224, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Translation Scenarios We measure translation performance for three language families: the Algic, Athabaskan and Inuit-Aleut families. For each family, we evaluate performance on a few \"high-resource\" languages 4 which have complete Bible translations. Our high-resource languages are Plains Cree 5 for the Algic family, Navajo (NAV) for the Athabaskan family and Inuktitut (IKU) and Central Alaskan Yupik (ESU) for the Inuit-Aleut family. We also evaluate performance on a single lower-resource language from each family, which only has the NT available. Our lowerresource languages are Mi\u1e31maq (Algic -MIC), Dogrib (Athabaskan -DGR), and Inupiatun (Inuit-Aleut -IKU). All of these translations, except for Inuktitut, are written in modified versions of the Latin script.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "For each language family, we train (1) bilingual X-English NMT systems with a single source language X, (2) multilingual Family-English systems where we combine training examples from all the languages in the family into a joint training set, and (3) multilingual many-to-many NMT systems combining both Family-English and English-Family translation tasks for all the languages in the family.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We learn a joint Byte Pair Encoding (Sennrich et al., 2016) between source and target, experimenting with two vocabulary sizes: we try both 32,000 and 16,000 merge operations. 
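As a concrete illustration of this step, the following minimal Python sketch learns and applies a joint BPE model with the subword-nmt implementation of Sennrich et al. (2016); the file names and the 16,000-merge setting are assumptions for illustration only and do not necessarily reflect the exact scripts used in our pipeline.

# Minimal sketch (illustrative file names): learn a joint BPE model over the
# concatenated source and target training text, then segment one side with it.
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn 16,000 merge operations on the combined source and target text.
with open('train.joint.txt', encoding='utf-8') as infile:
    with open('bpe.codes', 'w', encoding='utf-8') as codes_out:
        learn_bpe(infile, codes_out, num_symbols=16000)

# Segment the source side; the default '@@ ' separator marks subword boundaries,
# as in the segmented verses shown in Figure 1.
with open('bpe.codes', encoding='utf-8') as codes:
    bpe = BPE(codes)
with open('train.src', encoding='utf-8') as src:
    with open('train.bpe.src', 'w', encoding='utf-8') as out:
        for line in src:
            out.write(bpe.process_line(line))
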
In multilingual experiments we concatenate source and target language tags to our sentences in order to learn to translate into the appropriate language. Figure 1 shows a few multilingual training examples.", "cite_spans": [ { "start": 36, "end": 59, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Data Preprocessing", "sec_num": null }, { "text": "We use transformer systems for translation and train our models using the Fairseq toolkit (Ott et al., 2019) , with 3 encoding and decoding layers, 4 attention heads, an embedding size of 512, and a maximum of 2000 tokens per batch 6 . Models are trained for 100 epochs. We set aside the book of Revelation as an evaluation set: the first 100 verses serve as a validation set, and the final 304 verses form a held-out test set.", "cite_spans": [ { "start": 90, "end": 108, "text": "(Ott et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Model Details", "sec_num": null }, { "text": "Training Settings Preliminary experiments showed that multilingual systems trained on a single target corpus, i.e. the English Bible in our case, have a tendency to completely disregard the source sentence during test time and instead generate an unrelated English sentence as output. We dub this target overfitting. To counter this tendency, we employ four specialized training strategies: (1) Single Source translation (1Src) limits the number of training source texts to one even when we have multiple Bible translations in the same language 7 . (2) Heterogeneous batching (HB) (Aharoni et al., 2019) constructs minibatches by uniformly sampling sentences from the entire training data into each minibatch. In contrast, the common practice is to construct minibatches from training examples with similar length. 8 (3) We increase the amount of English target data available to the model by adding monolingual English training examples where the source and target sentence are identical (E2E). 9 (4) Finally, following Aharoni et al. 2019we transform our many-to-English models into many-to-many models (M2M) by reversing the source and target language of our Bibles and combining the resulting data with our original training set. Table 3 reports the tokenized, lower-case BLEU score for our experiments. Although Inuktitut is written in a different script than English, it translates relatively well -only transliterated Cree obtains a better BLEU score. When we extend our experiments to the entire Inuit-Aleut family, we see modest gains for both the Latin and Non-Latin languages. However, we also note that the translation quality collapses for the other language families. We suspect this may be due to a large BPE vocabulary -the Inuit-Aleut family, containing two scripts, is more likely to split words; the single script Athabaskan and Algic families, on the other hand, can simply memorize entire words, which may Table 3 : Lowercase BLEU scores for NMT. The subsections correspond to monolingual, multilingual, and multilingual many-to-many translation. 
Bolded scores indicate the highest BLEU scores for each language, as well as averages across high- and low-resource languages.", "cite_spans": [ { "start": 815, "end": 816, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 1234, "end": 1241, "text": "Table 3", "ref_id": null }, { "start": 1927, "end": 1934, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Model Details", "sec_num": null }, { "text": "be less than beneficial for languages with high numbers of morphemes in each word. When we reduce the BPE vocabulary, we see a large increase in translation quality for all monolingual experiments, as the system sees many more short sequences. Unfortunately, we fail to leverage the increase in data as we add more languages from the same family, with the Algic (Cree) and Athabaskan (Navajo) family models still collapsing, and the Inuit-Aleut models slightly decreasing. This result is not entirely unforeseen, although we didn't expect it with such a small number of languages. Mueller et al. (2020) report that their models also completely devolved into translations that, while structurally fluent, were completely inadequate at representing the source translation. However, they did see small gains when the number of added languages was small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "We hypothesize that our results degrade because of a lack of complete Bible translations. Mueller et al. (2020) start with complete translations, and their numbers only start failing as incomplete translations are added. We see small gains for the Inuit family, for which we have multiple complete Bibles. We hypothesize that many copies of an identical target in the training data may be adversely affecting the multilingual models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Reducing the training data to a single source per language results in significant gains -multilingual training now clearly improves results for our four low-resource languages. The gains are encouraging, and the models are producing more adequate output. We thus maintain the single-source constraint for our other experiments -all following experiments are cumulative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Heterogeneous batching also contributes modestly to the quality of translations, confirming our suspicion that certain batches were influencing the final results. Likewise, adding a purely English corpus increases BLEU notably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "Training a many-to-many model brings the scores on our high-resource languages nearly to the level of the monolingual models, but does not surpass them. We never expected much gain in the familial experiments in these languages -we already include the entire Bible as training, and the other languages are not introducing much new information. Where we expect to see gains is in the low-resource languages. And indeed we do. These three languages, containing only New Testament data, are not large enough to train monolingual NMT models. However, we see steady gains that, while not perfectly mirroring the results of the high-resource experiments, eventually result in translations that are 4.2 BLEU points, on average, better than the monolingual models. 
These languages are able to leverage the information of more complete Bibles in other related languages to improve substantially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "We have presented an extension to the JHU Bible corpus, expanding it by almost forty translations in thirty Indigenous languages. These languages represent only a fraction of the languages spoken in North America, but by presenting them in a parallel corpus, we hope to encourage computational research in these underrepresented languages. Based on our experiments, the benefits of cross-lingual training are clear. Our experiments have also uncovered a set of useful training strategies which counteract target overfitting in multilingual models which are trained using several source translations but only one target text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Save, perhaps, the Universal Declaration of Human rights, which is much shorter.3 Most of the data we use are not in the public domain but our work falls under the fair use doctrine of North American copyright law.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Relatively speaking. Of course all of our languages are low-resource but some still have more available resources than others.5 We use a version of the Plain Cree (CRK) Bible which has been transliterated into Latin script.6 These settings were established on a similar lowresource corpus", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Discussions of dialects and languages aside, we include the largest source which contains the language name -thus, we choose one source only from Western, Eastern, Plains, and Moose Cree, for example.8 According to our preliminary experiments, length-based batching can seriously harm the performance of MT models for X-English Bible translation 9 To this end, we download the works of Martin Luther -which largely overlap in domain and size with the Bible (approximately 50,000 sentences) -from Project Gutenberg gutenberg.org.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "English apitc mois ka nodag ii , coda8innig ka", "authors": [ { "first": "", "middle": [], "last": "Bible", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bible.Algonquin 2Bible.English apitc mois ka nodag ii , coda8innig ka ...", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "English namawiya\u0113kosi ki ka itota@@ w\u0101w k\u0101 tip\u0113yihcik\u0113t ki", "authors": [ { "first": "", "middle": [], "last": "Bible", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bible.Cree 2Bible.English namawiya\u0113kosi ki ka itota@@ w\u0101w k\u0101 tip\u0113yihcik\u0113t ki ...", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cree 2Bible.English\u0113kwa m\u0101ka kiy\u0101m kanaw\u0101pa@@ mik ; cik\u0113m\u0101 namawiya ki ka", "authors": [ { "first": "", "middle": [], "last": "Bible", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bible.Cree 2Bible.English\u0113kwa m\u0101ka 
kiy\u0101m kanaw\u0101pa@@ mik ; cik\u0113m\u0101 namawiya ki ka ...", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Algonquin when moses went into the tent of meeting to speak", "authors": [ { "first": "", "middle": [], "last": "Bible", "suffix": "" }, { "first": "", "middle": [], "last": "English 2bible", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bible.English 2Bible.Algonquin when moses went into the tent of meeting to speak ...", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Massively multilingual neural machine translation", "authors": [ { "first": "Melvin", "middle": [], "last": "References Roee Aharoni", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "", "middle": [], "last": "Firat", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3874--3884", "other_ids": {}, "num": null, "urls": [], "raw_text": "References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 3874-3884, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Past, present, future: A computational investigation of the typology of tense in 1000 languages", "authors": [ { "first": "Ehsaneddin", "middle": [], "last": "Asgari", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "113--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehsaneddin Asgari and Hinrich Sch\u00fctze. 2017. Past, present, future: A computational investigation of the typology of tense in 1000 languages. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 113-124, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A massively parallel corpus: the Bible in 100 languages. Language resources and evaluation", "authors": [ { "first": "Christos", "middle": [], "last": "Christodouloupoulos", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2015, "venue": "", "volume": "49", "issue": "", "pages": "375--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: the Bible in 100 languages. Language resources and evaluation, 49(2):375-395.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Haida syntax", "authors": [ { "first": "John", "middle": [ "Enrico" ], "last": "", "suffix": "" } ], "year": 2003, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Enrico. 2003. Haida syntax, volume 1. 
U of Ne- braska Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extended Naive Bayes classifier for mixed data", "authors": [ { "first": "Chung-Chian", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Yan-Ping", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Keng-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2008, "venue": "Expert Systems with Applications", "volume": "35", "issue": "3", "pages": "1080--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung-Chian Hsu, Yan-Ping Huang, and Keng-Wei Chang. 2008. Extended Naive Bayes classifier for mixed data. Expert Systems with Applications, 35(3):1080-1083.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Creating a massively parallel Bible corpus", "authors": [ { "first": "Thomas", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cysouw", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "3158--3163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel Bible corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3158- 3163, Reykjavik, Iceland. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Johns Hopkins university bible corpus: 1600+ tongues for typological exploration", "authors": [ { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Wicks", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "David", "middle": [], "last": "Post", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Gar- rett Nicolai, Matt Post, and David Yarowsky. 2020. The Johns Hopkins university bible corpus: 1600+ tongues for typological exploration. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Marseilles, France. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An analysis of massively multilingual neural machine translation for low-resource languages", "authors": [ { "first": "Aaron", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "Arya", "middle": [ "D" ], "last": "Mccarthy", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Wu", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aaron Mueller, Garrett Nicolai, Arya D. McCarthy, Dylan Lewis, Winston Wu, and David Yarowsky. 2020. An analysis of massively multilingual neu- ral machine translation for low-resource languages. In Proceedings of the Twelfth International Confer- ence on Language Resources and Evaluation (LREC 2020), Marseilles, France. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Transforming the images: Ergativity and transitivity in Inuktitut (Eskimo)", "authors": [ { "first": "Elke", "middle": [], "last": "Nowak", "suffix": "" } ], "year": 2011, "venue": "", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elke Nowak. 2011. Transforming the images: Ergativ- ity and transitivity in Inuktitut (Eskimo), volume 15. Walter de Gruyter.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. Fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Bible as a parallel corpus: Annotating the 'book of 2000 tongues'. Computers and the Humanities", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Mari", "middle": [ "Broman" ], "last": "Olsen", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 1999, "venue": "", "volume": "33", "issue": "", "pages": "129--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik, Mari Broman Olsen, and Mona Diab. 1999. The Bible as a parallel corpus: Annotating the 'book of 2000 tongues'. 
Computers and the Human- ities, 33(1):129-153.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Building a constraint grammar parser for Plains Cree verbs and arguments", "authors": [ { "first": "Katherine", "middle": [], "last": "Schmirler", "suffix": "" }, { "first": "Antti", "middle": [], "last": "Arppe", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Trosterud", "suffix": "" }, { "first": "Lene", "middle": [], "last": "Antonsen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katherine Schmirler, Antti Arppe, Trond Trosterud, and Lene Antonsen. 2018. Building a constraint grammar parser for Plains Cree verbs and arguments. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "content": "", "num": null, "text": "", "type_str": "table" }, "TABREF3": { "html": null, "content": "
", "num": null, "text": "Weighted Type-to-Token ratios of collected language families.", "type_str": "table" } } } }