|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:12:00.000779Z" |
|
}, |
|
"title": "Reusing a Multi-lingual Setup to Bootstrap a Grammar Checker for a Very Low Resource Language without Data", |
|
"authors": [ |
|
{ |
|
"first": "Inga", |
|
"middle": [], |
|
"last": "Lill", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UiT Norgga \u00e1rktala\u0161 universitehta Divvun", |
|
"location": { |
|
"country": "Norway" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sigga", |
|
"middle": [], |
|
"last": "Mikkelsen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UiT Norgga \u00e1rktala\u0161 universitehta Divvun", |
|
"location": { |
|
"country": "Norway" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Linda", |
|
"middle": [], |
|
"last": "Wiechetek", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UiT Norgga \u00e1rktala\u0161 universitehta Divvun", |
|
"location": { |
|
"country": "Norway" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Grammar checkers (GEC) are needed for digital language survival. Very low resource languages like Lule S\u00e1mi with less than 3,000 speakers need to hurry to build these tools, but do not have the big corpus data that are required for the construction of machine learning tools. We present a rule-based tool and a workflow where the work done for a related language can speed up the process. We use an existing grammar to infer rules for the new language, and we do not need a large gold corpus of annotated grammar errors, but a smaller corpus of regression tests is built while developing the tool. We present a test case for Lule S\u00e1mi reusing resources from North S\u00e1mi, show how we achieve a categorisation of the most frequent errors, and present a preliminary evaluation of the system. We hope this serves as an inspiration for small languages that need advanced tools in a limited amount of time, but do not have big data.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Grammar checkers (GEC) are needed for digital language survival. Very low resource languages like Lule S\u00e1mi with less than 3,000 speakers need to hurry to build these tools, but do not have the big corpus data that are required for the construction of machine learning tools. We present a rule-based tool and a workflow where the work done for a related language can speed up the process. We use an existing grammar to infer rules for the new language, and we do not need a large gold corpus of annotated grammar errors, but a smaller corpus of regression tests is built while developing the tool. We present a test case for Lule S\u00e1mi reusing resources from North S\u00e1mi, show how we achieve a categorisation of the most frequent errors, and present a preliminary evaluation of the system. We hope this serves as an inspiration for small languages that need advanced tools in a limited amount of time, but do not have big data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Language tools for very low resource languages are urgently needed to support language maintenance, but also it takes a long time to develop them. An existing multilingual infrastructure and existing tools that can be reused can speed up the process. In this article, we describe the process of making a Lule S\u00e1mi GEC together with a preliminary categorization of frequent Lule S\u00e1mi errors. Lule S\u00e1mi is on the lower end of lower resource language. It can benefit from North S\u00e1mi which is closely related and has a well-functioning grammar checker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The reuse of existing knowledge is an important concept in effective development of new grammar checkers in multilingual infrastructures. With this work we would like to set an example of how high-end complex NLP tools can be made, in less time, by taking existing tools as a frame. The following tools were already ready-made: an FSTbased morphological analyser, a morpho-syntactic disambiguator developed for correct text, and a multi-lingual infrastructure that contains scripts to build the grammar checker (among other applications). Our work took altogether 120 hours, (40 hours of meetings of two linguists (one of them native speaker) and 40 hours of work of one native speaker linguist).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For related languages we can even reuse rules and sets (prenominal modifiers, sentence barriers). But for example, lexemes have to be translated. This article will show in detail what can be reused, and which factors need special focus as they are language specific -many times it is systematic homonymies, and definitely idiosyncratic homonymies. In addition, we will evaluate the Lule S\u00e1mi grammar checker and point out future steps for improvement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Lule S\u00e1mi is spoken in northern Sweden and Norway, with an estimated 800-3,000 speakers (Sammallahti, 1998; Kuoljok, 2002; Svonni, 2008; Rydving, 2013; Moseley, 2010) . The Lule S\u00e1mi written language was approved in 1983 (Magga, 1994) . The first Lule S\u00e1mi spell checker was launched in 2007. Lule S\u00e1mi is a morphologically complex language, for more details see Ylikoski (2022) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 107, |
|
"text": "(Sammallahti, 1998;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 122, |
|
"text": "Kuoljok, 2002;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 136, |
|
"text": "Svonni, 2008;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 151, |
|
"text": "Rydving, 2013;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 166, |
|
"text": "Moseley, 2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 234, |
|
"text": "(Magga, 1994)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 378, |
|
"text": "Ylikoski (2022)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Language and resources", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In 2013 the Lule S\u00e1mi gold corpus of writing errors was built. 1 The gold corpus consists of 32,202 words with 3,772 marked writing errors. The goal of this error marked-up corpus was to test if the spellchecker corresponds to relevant quality requirements, by running the spell checker on an error corpus, where spelling errors were manually marked and corrected. It was supposed to be usable for testing grammar checkers with some processing, and therefore also marked syntactic, morpho-syntactic and lexical errors. The texts gathered for the gold corpus were written by native Lule S\u00e1mi speakers and had neither been spellchecked nor proofread.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 64, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Language and resources", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Speakers of Lule S\u00e1mi do not have a long written tradition, this amount of errors in the gold corpus show that native speakers of Lule S\u00e1mi are in need of tools helping them in the writing process. 1,774 of the errors in the gold corpus are non-word errors (i.e. misspellings that result in a non-existent form, non-word error, as opposed to real word errors where the misspelling results in an existing 'wrong' form), found by the spellchecker, the remaining 1,998 errors are morpho-syntactic, syntactic, word choice and formatting errors, which only a grammar checker can detect and correct. Lule S\u00e1mi is by UNESCO classified as a severely endangered language. For the (re)vitalisation of a language, it is important that the language is actually being used. With a (re)vitalisation perspective, a grammar checker for Lule S\u00e1mi will make it easier for people to use Lule S\u00e1mi in writing, which will increase the use of written Lule S\u00e1mi.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Language and resources", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The marking and correcting of errors for the gold corpus is the first systematic work on Lule S\u00e1mi writing errors. So far, this gold corpus has not been used to analyse and describe error types characteristic for Lule S\u00e1mi. Our own experiences from proofreading and from the work with North S\u00e1mi were therefore the starting point for developing grammar rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background 2.1 Language and resources", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The technological implementation of our grammar checker is based on well-established technologies in the rule-based natural language processing: finite-state automata for morphological analysis (Beesley and Karttunen, 2003; Lind\u00e9n et al., 2013) and constraint grammar (Karlsson, 1990b; Didriksen, 2010) for syntactic and semantic as well as other sentence-level processing. The Lule S\u00e1mi has an existing morphological analyser and lexicon publicly available 2 , which were originally imported from North S\u00e1mi with all rules and set specifications and then adapted to Lule S\u00e1mi. Antonsen et al. (2010) report F-scores of 0.95 for part-of-speech (PoS) disambiguation, 0.88 for disambiguation of inflection and derivation, and 0.86 for assignment of grammatical functions (syntax) for the Lule S\u00e1mi analyser.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 223, |
|
"text": "(Beesley and Karttunen, 2003;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 244, |
|
"text": "Lind\u00e9n et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 285, |
|
"text": "(Karlsson, 1990b;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 302, |
|
"text": "Didriksen, 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 578, |
|
"end": 600, |
|
"text": "Antonsen et al. (2010)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The system is built on a pipeline of modules: we process the input text with morphological analysers and tokenisers to get annotated texts, then disambiguate and then apply grammar rules on the disambiguated sentences, c.f. Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 232, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "It is noteworthy, that the system is part of a multilingual infrastructure GiellaLT, which includes numerous languages -130 altogether.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The grammar checker takes input from the finite-state transducer (FST) to a number of other modules, the core of which are several Constraint Grammar modules for tokenisation disambiguation, morpho-syntactic disambiguation and a module for error detection and correction. The full modular structure (Figure 1 ) is described in Wiechetek (2019) . We are using finite-state morphology (Beesley and Karttunen, 2003) to model word formation processes. The technology behind our FSTs is described in Pirinen (2014) . Constraint Grammar is a rule-based formalism for writing disambiguation and syntactic annotation grammars (Karlsson, 1990a; Karlsson et al., 1995) . In our work, we use the free open source implementation VISLCG-3 (Bick and Didriksen, 2015) . All components are compiled and built using the GiellaLT infrastructure (Moshagen et al., 2013). The code and data for the model is available for download 3 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 343, |
|
"text": "Wiechetek (2019)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 412, |
|
"text": "(Beesley and Karttunen, 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 509, |
|
"text": "Pirinen (2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 635, |
|
"text": "(Karlsson, 1990a;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 658, |
|
"text": "Karlsson et al., 1995)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 752, |
|
"text": "(Bick and Didriksen, 2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 308, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The syntactic context is specified in handwritten Constraint Grammar rules. The ADD-rule below adds an error tag (identified by the tag &real-negSg3-negSg2) to the negation verb ij '(to) not' as in example (1) if it is a 3rd person singular verb and to its left there is a 2nd person singular pronoun in nominative case. The context condition further specifies that there cannot be any tokens specifying a sentence barrier, a subjunction, conjunction or a finite verb in between for the rule to apply.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
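
{

"text": "A minimal sketch of such a rule in CG-3 syntax is given below. The error tag &real-negSg3-negSg2 is the one described above; the set names and the morphological tags used in the sets (e.g. Pron Pers Sg2 Nom, CLB, CS, CC) are illustrative assumptions in the style of GiellaLT grammars, not the exact rule shipped with the Lule S\u00e1mi grammar checker:\n\n# Sets assumed for illustration: Sg3 negation verb, Sg2 nominative pronoun,\n# and barrier tokens (clause boundary, subjunction, conjunction, finite verb).\nLIST NEG-SG3 = (V Neg Ind Sg3) ;\nLIST PRON-SG2-NOM = (Pron Pers Sg2 Nom) ;\nLIST BARRIER-TAGS = CLB CS CC (V Ind) ;\n\n# Add the error tag to the Sg3 negation verb if a Sg2 nominative pronoun occurs\n# somewhere to its left with no barrier token in between.\nADD (&real-negSg3-negSg2) NEG-SG3 (*-1 PRON-SG2-NOM BARRIER BARRIER-TAGS) ;",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Framework",

"sec_num": "2.2"

},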
|
{ |
|
"text": "(1) Unlike machine learning, this approach is not dependent on a large amount of text data or a gold corpus. To develop a grammar checker, we only need several test sentences containing the errors in question. (Wiechetek et al., 2021) However, in the absence of a fully error marked-up text corpus, finding frequent errors is a challenge. We therefore provide a scheme based on our experience with finding common errors (for the North S\u00e1mi grammar checker) as a guideline for work on new languages. This scheme serves any language, but our experience is based on morphologically richer languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 234, |
|
"text": "(Wiechetek et al., 2021)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Error types can be divided into three main categories:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "1. phonology-/typography-based errors 2. (morpho-)syntactic errors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Phonology-/typography-based errors can be based on diacritics, vowel/consonant length, silent endings in certain contexts (-ij pronounced -i), divergence pronunciation/writing and homophone words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "writing convention-based errors", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Writing/formatting conventions apply to compounding (one vs. several words, hyphen), quotation marks, comma and punctuation in general. Morpho-syntactic and syntactic errors can be subdivided into verb-, NP-internal and VP-internal issues. NP-internal issues can be about prepositions and postpositions and their case restrictions, adjective agreement /forms in attributive/predicative positions, and relative pronoun agreement with its anaphora in number, gender and animacy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "writing convention-based errors", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Verb internal issues concern the auxiliary construction, negation phrases (where negation is expressed by a verb) and other periphrastic verb constructions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "writing convention-based errors", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "VP internal issues, on the other hand, are more global and concern subject-verb agreement, subclauses formation, subcategorisation in general and case marking of object/adverbial and word order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "writing convention-based errors", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "In addition to that, the choice of error types will depend on efficiency as well, that means which error types can rules generalize over, and which error types are very word specific. Very word specific work that cannot be generalized may not be so efficient.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "writing convention-based errors", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Reusing (particularly North S\u00e1mi) resources to create Lule S\u00e1mi tools goes back as far as 2005, where the North S\u00e1mi descriptive morphosyntactic analyser/disambiguator was used to disambiguate Lule S\u00e1mi text and adapting work started. A disambiguator is a tool that resolves homonymy in a given syntactic context, and is an essential tool in sentence-level text processing. This tool was already available when we started our work. However, the initial goal of sentence analysis is based on correct input. We therefore had to adapt the tool to fit error input, e.g. by removing rules that were too strict and paying closer attention to misspelled word forms that can be confused with correct forms. In the course of time, other tools or modules have been copied over to Lule S\u00e1mi and been reused with or without adaptations, thereby creating lower-cost tools for Lule S\u00e1mi, cf. Table 1 . Another tool that was already available when we started to build our GEC was the Lule S\u00e1mi morphological analyser. It had previously been constructed from scratch, starting from a common template used in the GiellaLT infrastructure. Based on our experience, we have found a following workflow to be very effective in creating a new grammar checker: We use the normative morphological analyser and a tokeniser with grammatical tokenisation disambiguation. This is relevant when deciding if two words written apart have a syntactic relation or are simple compound errors. In addition, there, we use a FSTbased spellchecker. The descriptive disambiguator/syntactic analyser was first taken as it is to be included in the Lule S\u00e1mi grammar checker. However, we found that the need for adaptions was urgent, and we needed a separate version of it specifically for potentially erroneous input. The difference to the descriptive disambiguator lays in the objective. The descriptive disambiguator aims at a reduction of homonymy (risking to some degree that correct analyses get lost). The grammar checker disambiguator, on the other hand, needs disambiguation only to get an idea of the sentence to find the error, but is dependent on finding erroranalyses even if they do not make sense in the context, so homonymy is not to be reduced to a point where error readings disappear. The descriptive disambiguator is adapted on the fly, so basically every time testing runs into problems, the respective rules are traced and either eliminated or adapted to erroneous input. In some cases, we also noticed general errors in the rules that lead to an improvement of the descriptive disambiguator.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 878, |
|
"end": 885, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reuse of resources", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The error detection/correction module needed to be written from scratch at first glance. However, at second glance, there are parts that could be reused as well. Simple sets and lists were copied over from the Lule S\u00e1mi descriptive disambiguator. Semantic groupings of words developed in the process of North S\u00e1mi grammar checking were directly copied over from the North S\u00e1mi grammar checker, and lexical items translated to Lule S\u00e1mi as in the case of the following set DOPPE (the first of which is the North S\u00e1mi original, and the second of which is the translated Lule S\u00e1mi one), which generalises over static place-adverbs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tool", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LIST DOPPE = \"badjin\" \"bajil\" \"dakko\" \"d\u00e1\" \"d\u00e1kko\" \"d\u00e1ppe\" \"d\u00e1s\" \"diekko\" \"dieppe\" \"do\" \"dokko\" \"doppe\" \"duo\" \"duokko\" \"duoppe\" \"olgun\" ; LIST DOPPE = \"badjen\" \"d\u00e1ppe\" \"duoppe\" \"d\u00e5ppe\" \"d\u00e1ggu\" \"daggu\" \"duoggu\" \"d\u00e5ggu\" \"d\u00e1nna\" \"danna\" \"duonna\" \"d\u00e5nna\" \"d\u00e5hku\" \"duohku\" \"\u00e5lggon\" ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tool", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As regards rules, the error types based on orthographic or phonetic similarity needed to be written from scratch, as they differ in North S\u00e1mi and Lule S\u00e1mi, as do possible contexts of errors that need to pay attention to homonymies. Especially systematic homonymies are partly different to North S\u00e1mi. However, some of them are the same in North S\u00e1mi and Lule S\u00e1mi, cf. Table 2 . One of them is the homonymy between plural inessive (Lule S\u00e1mi) /locative (North S\u00e1mi) and singular comitative nouns, and between singular elative (Lule S\u00e1mi) /locative (North S\u00e1mi) and 3rd person singular possessive accusative singular nouns.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 371, |
|
"end": 378, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tool", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Not all rules needed to be written from scratch, certain rule types were reused from North S\u00e1mi. Subject-verb agreement rules are well-suited to be copy-pasted from North S\u00e1mi to Lule S\u00e1mi. With some tag adaptations, they were included into the Lule S\u00e1mi grammar checker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tool", |
|
"sec_num": null |
|
}, |
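
{

"text": "As an illustration of such a reused rule type, the following is a minimal CG-3 sketch of a subject-verb agreement rule of the kind copied from North S\u00e1mi; the error tag, set names and morphological tags here are our own illustrative assumptions, not the actual rules in the repository:\n\n# Flag a 3rd person singular present tense verb whose closest preceding\n# nominative subject candidate is a 1st person singular pronoun.\nLIST V-SG3-PRS = (V Ind Prs Sg3) ;\nLIST SUBJ-SG1 = (Pron Pers Sg1 Nom) ;\nLIST BARRIER-TAGS = CLB CS CC (V Ind) ;\n\nADD (&syn-agr-sg1-sg3) V-SG3-PRS (*-1 SUBJ-SG1 BARRIER BARRIER-TAGS) ;\n\nPorting such a rule from North S\u00e1mi to Lule S\u00e1mi mainly means checking that the tags (here V, Ind, Prs, Sg3, Pron, Pers, Nom) match the Lule S\u00e1mi analyser and adjusting the sets where the two tagsets differ.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Tool",

"sec_num": null

},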
|
{ |
|
"text": "When working with the Lule S\u00e1mi grammar checker, we wanted to start with errors made by high proficiency writers rather than language learners. That way we can have a functioning grammar checker for texts with very few errors and introduce more complex errors along the way. Texts written by second language learners or students generally have more and other types of errors and more complex errors, which will require a different grammar checker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors in Lule S\u00e1mi", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Typical errors of high proficiency writers happen when the written norm deviates from the spoken dialectal variation. One example for that is the negation paradigm, which in some dialects resembles the North S\u00e1mi paradigm rather than the norm of written Lule S\u00e1mi.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Errors in Lule S\u00e1mi", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the Lule S\u00e1mi written norm, the negation verb is inflected for both person, number and tense (present and past) followed by the main verb in connegative form, which is always the same, whilst in North S\u00e1mi only person and number is marked on the negation verb. Tense is marked on the main verb with two different connegative forms, see Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 339, |
|
"end": 346, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Errors in Lule S\u00e1mi", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "North S\u00e1mi Present Past Present Past iv vuolge ittjiv vuolge in vuolgge in vuolg\u00e1n i vuolge ittji vuolge it vuolgge it vuolg\u00e1n ij vuolge ittjij vuolge ii vuolgge ii vuolg\u00e1n There is no full consensus on the exact border between North S\u00e1mi and Lule S\u00e1mi (Ylikoski, 2016) , so in Lule S\u00e1mi text one can find variation regarding negation that reflects dialectical variation. In Lule S\u00e1mi text both the North S\u00e1mi negation system, as ex. (2), and a system with 'double' past marking on both the negation verb and with the main verb (3) are used. Soajttet is a modal verb meaning '(to) maybe' and usually stands with the infinitive form of the main verb. However, the present singular thirdperson form soajtt\u00e1 '(s/he) maybe' is by many writers being used as an adverb, not as a modal verb, as example (4) shows. The modal auxiliary is not followed by an infinitive as it should, but a finite verb in third-person singular. Within noun phrases, writers frequently make agreement errors. According to the norm the noun should be in singular with numerals and demonstratives agreeing in case and number, according to (Ylikoski, 2022) there is variation in the contemporary language indicating that this agreement system is changing. The errors in the Divvun gold corpus show us that the change has gone further than described in (Ylikoski, 2022) , and numerals are handled in the same way as attributive adjectives, see Table 4 . Some writers seem to make use of this \"new\" paradigm, as in ex. (5), while others seem to be somewhere in between, as ex. (6) shows. In this last example, the case of the numeral is correct, but the noun is in plural. Another noun phrase internal error is the use of and adjective in predicative form in an attributive position, as example (7). This is not a very common error, but might be more frequent in texts written by second language learners, since the predicative form is the one in dictionaries and the adjective inflection system is one of the most complex area of the morphology (Ylikoski, 2022) . Along with this rule, we also made rules for correcting errors where the attributive form of an adjective is used in a predicative position.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 269, |
|
"text": "(Ylikoski, 2016)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1109, |
|
"end": 1125, |
|
"text": "(Ylikoski, 2022)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1321, |
|
"end": 1337, |
|
"text": "(Ylikoski, 2022)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 2013, |
|
"end": 2029, |
|
"text": "(Ylikoski, 2022)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1412, |
|
"end": 1419, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lule S\u00e1mi", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Mij We tjuovojma follow roaNkok crooked.SG.NOM b\u00e1lgg\u00e1v. path.SG.ACC 'We followed a crooked path'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(7)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are also agreement errors where relative pronouns fail to agree with their anaphora in number, as in ex. (8), and not agreeing with its anaphora in animacy, as in ex. (9). A similar error regards the agreement of reflexive pronouns with their anaphora in number. Conditional mood is according to (Ylikoski, 2022) largely missing in Lule S\u00e1mi, and instead a periphrastic conditional consisting of the auxil-iary lulu-'would' and the infinitive is used. The conditional auxiliary lulu-is by some writers handled as if it is a separate verb with present and past tense, not a mood, making errors like (10) and the non-word error (11). '. . . wishes more people would attend courses.'", |
|
"cite_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 318, |
|
"text": "(Ylikoski, 2022)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(7)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another big group of errors are real-word errors. These are mostly based on phonetic similarity between the confused forms. In this work, we focused on general rules that are not limited to one single word, but rather forms that apply to a group of lemmata. In Table 5 the first error (\u00e1lgge-\u00e1lkke) is an error limited only to this specific word. When in a hurry of building resources for very low resource languages, one has to make sure to work in an efficient way, and writing rules for correcting specific words does not get us fast-forward. The rest of errors in Table 5 are errors being corrected by rules that generalise over groups of words, or for the frequent negation auxiliary (function words are more efficient).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 268, |
|
"text": "Table 5", |
|
"ref_id": "TABREF12" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 575, |
|
"text": "Table 5", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(7)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The errors we have worked with in Table 5 are all real word errors with the ij-sound written 'i', or the other way around, with 'i' written 'ij'. We classified them as real word errors, even though some errors can also be seen as agreement errors. High proficiency writers are typically not insecure about agreement, but errors of this type can still happen when typing fast. Another complicating factor is that the -i sound can also be written -ij. Odd syllable nouns in illative case end in -ij, even though the pronouncation is not -ij. 'To the dog' is spelled bednagij even though the actual pronounced more like bednagi. However, the spelling error bednagi will be picked up by the spell checker since it is a non-word.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 5", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(7)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Both Lule S\u00e1mi and North S\u00e1mi verbs are inflected with three persons and three numbers in past and present tense. The subject verb agreement rules were copied from North S\u00e1mi to the Lule S\u00e1mi grammar checker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(7)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The first version of the Lule S\u00e1mi grammar checker has 64 rules and 17 rule types, three of which have a regression test of 50 or more test sentences. We also ran an initial evaluation of each regression test, and plan to run the grammar checker on the error-marked up corpus of 32,202 words 4 . Figure 6 shows an evaluation of three error types with a sufficiently large regression tests. The other error types will be evaluated in the final version of the paper. The rules for relative pronoun and numeral/determiner agreement and for modal verb maybe-constructions give good results for both precision and recall. Precision and recall of the modal verb constructions are as good as 98%. We are aware that this still needs to be tested on an independent corpus. The quality is measured using basic precision, recall and f 1 scores, such that recall R = tp tp+fn , precision P = tp tp+fp and f 1 score as harmonic mean of the two: F 1 = 2 P \u00d7R P +R , where t p is a count of true positives, f p false positives, t n true negatives and f n false negatives.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 304, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
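
{

"text": "As a worked example of these formulas (the arithmetic is ours, added for illustration): taking the reported pair P = 81.43 and R = 83.82, F1 = (2 \u00d7 81.43 \u00d7 83.82) / (81.43 + 83.82) \u2248 82.61, which matches the reported F1 value for that error type.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "4"

},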
|
{ |
|
"text": "We also ran a test run of the automatic evaluation on the marked-up gold corpus of Lule S\u00e1mi, to see if the grammar checker finds true errors and also to improve the error mark-up of grammatical errors in the corpus, keeping in mind that the corpus had been originally marked up for predominantly spelling errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A lot of errors found by the grammar checker are true positives. Many of them were either not marked up or -more frequently -marked up with a different scope. Since the start of marking up the corpora for spelling errors, the mark-up guidelines have been developed further in connection with GramDivvun, the North S\u00e1mi grammar checker, and adapted to automatic evaluation, where the grammar checker output is tested against the corpus mark-up.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "There are examples of when the grammar checker actually found grammatical errors that the human proof-reader missed out. Thirdly, there are examples where the original marking is not consistent with the newer guidelines for how much the scope of the error should be with regard to how much the grammar checker actually marks up. Example (12) is one of the cases where an error in relative pronoun agreement has been identified correctly by the grammar checker. This error type", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Type of error \u00e1lgge 'beginner'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correct form", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u00e1lkke 'easy' Only for this single word h\u00e1bbmima NOMACT SG GEN 'the designing's'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correct form", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h\u00e1bbmijma PRT PL1 'we designed'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correct form", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Systematic for all contracted -it verbs baelosti PRS PL3 'they defend' baelostij PRT SG3 's/he defended'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correct form", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Systematic for all odd syllable -it verbs and auxiliary/copula liehket i/ittji PRS/PRT SG2 ij/ittjij PRS/PRT SG3 Missing \"j\" for negation verbs Sg2 'you do/did not' 's/he do/did not' ij/ittjij PRS/PRT SG3 i/ittji PRS/PRT SG2 Extra \"j\" for negation verbs Sg3 's/he do/did not' 'you do/did not' ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correct form", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the text there are articles about these three themes, which are topics \u00c1rran has worked with'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "'", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, there are also several false positives, as in ex. 14, where g\u00e5lmm\u00e5 is not an error. The difficulty here is that the subsequent noun form is homonymous between nominative and genitive, and the numeral should have only been corrected if it was a genitive phrase. False positives occurred specifically for this error type (in the case of nominative/genitive nouns), showing that more work with the respective rules is necessary to improve the performance of the grammar checker. In ex. (15), on the other hand, the agreement error finding of the grammar checker in \u00e1lgij 's/he started' and its correction to \u00e1lggin 'they started' is a false positive. This is based on there being two subject candidates, because of singular nominative and plural genitive being homonyms, (cuhppa) and the other one plural (biejve, which in this sentence is singular genitive). The grammar checker confuses the first of them for a subject and therefore wrongly adapts the verb to it. Additionally, we tested the grammar checker on a manually proofread Lule S\u00e1mi corpus used for a new text to speech (TTS) tool. The grammar checker did find errors that the proofreader had missed and was therefore useful in a project where we want the text to be perfect. Most of the responses from the grammar checker on this corpus were however false positives, with the grammar checker marking correct forms as errors. These 'bad' results were in turn used to improve and fine tune the grammar checker rules. We find this a very beneficial way of working -using our tools to double-check a proofread corpus, and at the same time using the results of the corpus to improve our tools.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "'", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When running the grammar checker on a university level thesis, the grammar checker found many real errors. It was interesting that some highly frequent repeated errors were due to changes in the language norm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "'", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The overall results show us that the grammar checker actually finds real errors, but the main challenge with making it usable to users is to restrict the rules. At this point there is too much noise with more false positives than true positives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "'", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have shown that by using a related language grammar checker as a starting point, we were able to create a basic level grammar checker for Lule S\u00e1mi, categorise a fair amount of frequent error types and collect regression tests for each of them in a reasonable amount of time (120 hours between two linguists, one of them a native speaker). The importance for language revitalisation cannot be measured before integrating the tools in the respective text processing programs for the language community to use. But we know from experience with the spell checker, that the tools have a wide group of users, and their importance can usually be felt in the number of complaints that are sent when something is wrong with the distribution or other technical issues. In the future, we want to offer a high-performance tool for the most common error types to the Lule S\u00e1mi users. We aim to release a beta version together with the commonly distributed spellchecker in 2022. 5 From the developer side we aim at regression tests of at least 100 examples per error type with at least 90 % precision and 70 % recall, so that the tool will be useful for a wider language community, be used in", |
|
"cite_spans": [ |
|
{ |
|
"start": 969, |
|
"end": 970, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://github.com/giellalt/lang-smj/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/giellalt/lang-smj/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/giellalt/lang-smj/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Can be found on GitHub: https://github.com/ giellalt/lang-smj/tools/grammarcheckers", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "c.f. https://divvun.no/en/index.html schools, by the government and for private users on mobile phones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We want to thank B\u00f8rre Gaup for running the evaluation on the gold corpus and helping with the technical side of error mark-up and automatic evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Reusing grammatical resources for new languages", |
|
"authors": [ |
|
{ |
|
"first": "Lene", |
|
"middle": [], |
|
"last": "Antonsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Linda", |
|
"middle": [], |
|
"last": "Wiechetek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Trosterud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2782--2789", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lene Antonsen, Linda Wiechetek, and Trond Trosterud. 2010. Reusing grammatical resources for new languages. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), pages 2782-2789, Stroudsburg. The Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Finite State Morphology. CSLI publications", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kenneth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lauri", |
|
"middle": [], |
|
"last": "Beesley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Karttunen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth R Beesley and Lauri Karttunen. 2003. Finite State Morphology. CSLI publications.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "CG-3 -beyond classical Constraint Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Eckhard", |
|
"middle": [], |
|
"last": "Bick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tino", |
|
"middle": [], |
|
"last": "Didriksen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 20th Nordic Conference of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eckhard Bick and Tino Didriksen. 2015. CG-3 -be- yond classical Constraint Grammar. In Proceed- ings of the 20th Nordic Conference of Computa- tional Linguistics (NoDaLiDa 2015), pages 31-39.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Constraint Grammar Manual: 3rd version of the CG formalism variant", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tino Didriksen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tino Didriksen. 2010. Constraint Grammar Manual: 3rd version of the CG formalism variant. Grammar- Soft ApS, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Constraint Grammar as a Framework for Parsing Running Text", |
|
"authors": [ |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Karlsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 13th Conference on Computational Linguistics (COLING 1990)", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "168--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fred Karlsson. 1990a. Constraint Grammar as a Framework for Parsing Running Text. In Proceed- ings of the 13th Conference on Computational Lin- guistics (COLING 1990), volume 3, pages 168-173, Helsinki, Finland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Constraint grammar as a framework for parsing unrestricted text", |
|
"authors": [ |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Karlsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 13th International Conference of Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "168--173", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fred Karlsson. 1990b. Constraint grammar as a frame- work for parsing unrestricted text. In Proceedings of the 13th International Conference of Computational Linguistics, volume 3, pages 168-173, Helsinki.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text", |
|
"authors": [ |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Karlsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atro", |
|
"middle": [], |
|
"last": "Voutilainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juha", |
|
"middle": [], |
|
"last": "Heikkil\u00e4", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arto", |
|
"middle": [], |
|
"last": "Anttila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fred Karlsson, Atro Voutilainen, Juha Heikkil\u00e4, and Arto Anttila. 1995. Constraint Grammar: A Language-Independent System for Parsing Unre- stricted Text. Mouton de Gruyter, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Julevs\u00e1megiella. B\u00e5rj\u00e5s: Julevs\u00e1megiella uddni -ja idet", |
|
"authors": [ |
|
{ |
|
"first": "Kuoljok", |
|
"middle": [], |
|
"last": "Susanna Ang\u00e9us", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Susanna Ang\u00e9us Kuoljok. 2002. Julevs\u00e1megiella. B\u00e5r- j\u00e5s: Julevs\u00e1megiella uddni -ja idet?, pages 10-18.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Hfst-a system for creating nlp tools", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Krister Lind\u00e9n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Senka", |
|
"middle": [], |
|
"last": "Axelson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Drobac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juha", |
|
"middle": [], |
|
"last": "Hardwick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyrki", |
|
"middle": [], |
|
"last": "Kuokkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Niemi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miikka", |
|
"middle": [], |
|
"last": "Pirinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Silfverberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International workshop on systems and frameworks for computational morphology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "53--71", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krister Lind\u00e9n, Erik Axelson, Senka Drobac, Sam Hardwick, Juha Kuokkala, Jyrki Niemi, Tommi A Pirinen, and Miikka Silfverberg. 2013. Hfst-a sys- tem for creating nlp tools. In International work- shop on systems and frameworks for computational morphology, pages 53-71. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Hvordan den nyeste nordsamiske rettskrivingen ble til. Festskrift til \u00d8rnulv Vorren", |
|
"authors": [ |
|
{ |
|
"first": "Ole", |
|
"middle": [ |
|
"Henrik" |
|
], |
|
"last": "Magga", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "269--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ole Henrik Magga. 1994. Hvordan den nyeste nord- samiske rettskrivingen ble til. Festskrift til \u00d8rnulv Vorren, pages 269-282.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Atlas of the World's Languages in Danger", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Moseley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Moseley. 2010. Atlas of the World's Lan- guages in Danger, volume 3. UNESCO.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Building an open-source development infrastructure for language technology projects", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Sjur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommi", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Moshagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Pirinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Trosterud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "NODALIDA", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sjur N. Moshagen, Tommi A. Pirinen, and Trond Trosterud. 2013. Building an open-source de- velopment infrastructure for language technology projects. In NODALIDA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Stateof-the-art in weighted finite-state spell-checking", |
|
"authors": [ |
|
{ |
|
"first": "Tommi", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Pirinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krister", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 15th International Conference on Computational Linguistics and Intelligent Text Processing", |
|
"volume": "8404", |
|
"issue": "", |
|
"pages": "519--532", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommi A. Pirinen and Krister Lind\u00e9n. 2014. State- of-the-art in weighted finite-state spell-checking. In Proceedings of the 15th International Conference on Computational Linguistics and Intelligent Text Pro- cessing -Volume 8404, CICLing 2014, pages 519- 532, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Words and varieties : lexical variation in Saami", |
|
"authors": [ |
|
{ |
|
"first": "H\u00e5kan", |
|
"middle": [], |
|
"last": "Rydving", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Soci\u00e9t\u00e9 Finno-Ougrienne", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H\u00e5kan Rydving. 2013. Words and varieties : lexical variation in Saami. Soci\u00e9t\u00e9 Finno-Ougrienne.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The Saami Languages: an introduction", |
|
"authors": [ |
|
{ |
|
"first": "Pekka", |
|
"middle": [], |
|
"last": "Sammallahti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pekka Sammallahti. 1998. The Saami Languages: an introduction. Davvi girji.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Spr\u00e5ksituationen f\u00f6r samerna i sverige. Samiskan i Sverige, rapport fr\u00e5n spr\u00e5kkampanjer\u00e5det", |
|
"authors": [ |
|
{ |
|
"first": "Mikael", |
|
"middle": [], |
|
"last": "Svonni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikael Svonni. 2008. Spr\u00e5ksituationen f\u00f6r samerna i sverige. Samiskan i Sverige, rapport fr\u00e5n spr\u00e5kkam- panjer\u00e5det, pages 22-35.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Many shades of grammar checking -launching a constraint grammar tool for North S\u00e1mi", |
|
"authors": [ |
|
{ |
|
"first": "Linda", |
|
"middle": [], |
|
"last": "Wiechetek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00f8rre", |
|
"middle": [], |
|
"last": "Sjur N\u00f8rsteb\u00f8 Moshagen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Gaup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Omma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the NoDaL-iDa 2019 Workshop on Constraint Grammar -Methods, Tools and Applications", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "35--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linda Wiechetek, Sjur N\u00f8rsteb\u00f8 Moshagen, B\u00f8rre Gaup, and Thomas Omma. 2019. Many shades of grammar checking -launching a constraint grammar tool for North S\u00e1mi. In Proceedings of the NoDaL- iDa 2019 Workshop on Constraint Grammar -Meth- ods, Tools and Applications, NEALT Proceedings Series 33:8, pages 35-44.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "No more fumbling in the dark -quality assurance of high-level NLP tools in a multi-lingual infrastructure", |
|
"authors": [ |
|
{ |
|
"first": "Linda", |
|
"middle": [], |
|
"last": "Wiechetek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Flammie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00f8rre", |
|
"middle": [], |
|
"last": "Pirinen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Gaup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Omma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Seventh International Workshop on Computational Linguistics of Uralic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linda Wiechetek, Flammie A Pirinen, B\u00f8rre Gaup, and Thomas Omma. 2021. No more fumbling in the dark -quality assurance of high-level NLP tools in a multi-lingual infrastructure. In Proceedings of the Seventh International Workshop on Computational Linguistics of Uralic Languages, pages 47-56, Syk- tyvkar, Russia (Online). Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Future time reference in lule saami, with some remarks on finnish", |
|
"authors": [ |
|
{ |
|
"first": "Jussi", |
|
"middle": [], |
|
"last": "Ylikoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Journal of Estonian and Finno-Ugric Linguistics", |
|
"volume": "7", |
|
"issue": "2", |
|
"pages": "209--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jussi Ylikoski. 2016. Future time reference in lule saami, with some remarks on finnish. Journal of Es- tonian and Finno-Ugric Linguistics, 7(2):209-244.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Lule saami. The Oxford Guide to the Uralic Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jussi", |
|
"middle": [], |
|
"last": "Ylikoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "130--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jussi Ylikoski. 2022. Lule saami. The Oxford Guide to the Uralic Languages, pages 130-146.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"text": "Reuse of resources for Lule S\u00e1mi (sme= North S\u00e1mi, smj= Lule S\u00e1mi)", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"text": "Homonymies comparison between Lule S\u00e1mi and North S\u00e1mi", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"text": "Negation comparison for 'not leave'", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td/><td>(6)</td><td>Suohkana municipality gietja seven.'The municipalities got divided into seven juogeduvvin divide</td></tr><tr><td/><td/><td>outskirt areas.'</td></tr><tr><td/><td colspan=\"2\">'(these) two cows' Norm Nom (d\u00e1) guokta gus\u00e1 Gen (d\u00e1n) guovte gus\u00e1 Acc (d\u00e1) guokta gus\u00e1 Ine (d\u00e1n) guovten gus\u00e1n Ill (d\u00e1n) guovte gussaj Ela (d\u00e1t) guovtet gus\u00e1s Com (d\u00e1jna) guovtijn gus\u00e1jn (d\u00e1j) guokta gus\u00e1j Systematical errors (d\u00e1) guokta gus\u00e1 (d\u00e1j) guokta gus\u00e1j (d\u00e1jt) guokta gus\u00e1jt (d\u00e1jn) guokta gus\u00e1jn (d\u00e1jda) guokta gus\u00e1jda (d\u00e1js) guokta gus\u00e1js</td></tr><tr><td>(5)</td><td>Alvos colossal gietjav seven.NUM.NOM.SG St\u00e1tt\u00e1v St\u00e1dd\u00e1 m\u00e1ht\u00e1 can b\u00e1hppagieldajs. vuojnnet see g\u00e5jt at.least parish.PL.ELA 'You can see the colossal St\u00e1dd\u00e1 from at</td></tr><tr><td/><td>least seven parishes'</td></tr></table>", |
|
"text": "NUM.ILL.ATTR s\u00e1me outskirt.area.PL.ILL. rabdaguovlojda.", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table/>", |
|
"text": "NP with demonstrative pronouns and numerals", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF12": { |
|
"content": "<table><tr><td>Rel pronoun agreement Modal verb ('maybe') Num/det agreement</td><td>Precision Recall 81.43 83.82 82.61 F1 98.00 98.00 98.00 74.14 67.19 70.49</td></tr></table>", |
|
"text": "Real word errors comparison", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF13": { |
|
"content": "<table><tr><td colspan=\"7\">: Performance of the grammar checker on three error types based on regression tests</td></tr><tr><td colspan=\"7\">had a particularly high number of true positives</td></tr><tr><td colspan=\"7\">in our preliminary evaluation, showing that this is</td></tr><tr><td colspan=\"7\">a frequent error type. Another very frequent true</td></tr><tr><td colspan=\"7\">positive that has not been adapted to current mark-</td></tr><tr><td colspan=\"7\">up standards regards numeral error types, as in</td></tr><tr><td colspan=\"7\">(13). The old mark-up would have a bigger scope</td></tr><tr><td colspan=\"7\">including context for the error, i.e. daj g\u00e5lmm\u00e5</td></tr><tr><td colspan=\"7\">tiem\u00e1j birra>dan g\u00e5lm\u00e5 tiem\u00e1 birra. The cur-</td></tr><tr><td colspan=\"7\">rent guidelines only mark up the form that is to be</td></tr><tr><td colspan=\"7\">corrected, meaning daj>dan, g\u00e5lmm\u00e5>g\u00e5lm\u00e5 and</td></tr><tr><td colspan=\"7\">tiem\u00e1j>tiem\u00e1 which are corrected in three steps</td></tr><tr><td colspan=\"5\">and by three separate rules.</td><td/></tr><tr><td>(12)</td><td colspan=\"6\">Da Those ma which.NHUM.PL.NOM ulmutja people.PL.NOM buorre good ulmutja people Hamsun Hamsuna Hamsun Hamsun g\u00e5vvi describe l\u00e1hk\u00e1j. way. 'Those people who, according to Ham-mielas mind li is buorak good</td></tr><tr><td/><td colspan=\"6\">sun, are good, he describes in a good</td></tr><tr><td/><td colspan=\"2\">manner'</td><td/><td/><td/></tr><tr><td>(13)</td><td colspan=\"6\">Tj\u00e1llagin Text g\u00e5lmm\u00e5 three.NUM.SG.NOM li is artihkkala article tiem\u00e1j daj these.DEM.PL.GEN theme birra about ma which li is</td></tr><tr><td/><td>\u00e1ssje topics</td><td>majna with</td><td>\u00c1rran \u00c1rran</td><td>la is</td><td>barggam work</td><td>. . . . . .</td></tr></table>", |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |