|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:10:53.297716Z" |
|
}, |
|
"title": "Named Entity Recognition and Linking Augmented with Large-Scale Structured Data", |
|
"authors": [ |
|
{ |
|
"first": "Pawe\u0142", |
|
"middle": [], |
|
"last": "Rychlikowski", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Bart\u0142omiej", |
|
"middle": [], |
|
"last": "Najdecki", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "\u0141a\u0144cucki", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kaczmarek", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we describe our submissions to the 2nd and 3rd SlavNER Shared Tasks held at BSNLP 2019 and BSNLP 2021, respectively. The tasks focused on the analysis of Named Entities in multilingual Web documents in Slavic languages with rich inflection. Our solution takes advantage of large collections of both unstructured and structured documents. The former serve as data for unsupervised training of language models and embeddings of lexical units. The latter refers to Wikipedia and its structured counterpart-Wikidata, our source of lemmatization rules, and real-world entities. With the aid of those resources, our system could recognize, normalize and link entities, while being trained with only small amounts of labeled data.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we describe our submissions to the 2nd and 3rd SlavNER Shared Tasks held at BSNLP 2019 and BSNLP 2021, respectively. The tasks focused on the analysis of Named Entities in multilingual Web documents in Slavic languages with rich inflection. Our solution takes advantage of large collections of both unstructured and structured documents. The former serve as data for unsupervised training of language models and embeddings of lexical units. The latter refers to Wikipedia and its structured counterpart-Wikidata, our source of lemmatization rules, and real-world entities. With the aid of those resources, our system could recognize, normalize and link entities, while being trained with only small amounts of labeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Intelligent analysis of texts written in natural languages, despite the advancements made with deep neural networks, is still regarded as challenging. The lingua franca of science is English, and new methods are typically evaluated firstly on English data, and often on other Germanic or Romance languages. This puts a certain bias on the development and design of modern NLP methods, which are not always transferable, and the metrics comparable, across languages and language families.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Due to the complexity and inherent vagueness of intelligent language processing, is has been naturally split into simple tasks, one of which is named entity recognition (NER), concerned in this paper. The output of a NER system is traditionally a set labelled phrases recognized in a given text. In order to process a document, one has to not only find and label the entities, but also link appropriately subsequent occurrences of the same entity. The task becomes harder, if the linking can be made across languages, when the entities are globally present.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We describe our submission to the 3rd Multilingual Named Entity Challenge in Slavic languages, held at the 8th Workshop on Balto-Slavic Natural Language Processing (BSNLP) in conjunction with the EACL 2021 conference. The system was similar to the one submitted to the 2nd Multilingual Named Entity Challenges in Slavic languages (Piskorski et al., 2019) held at 7th BSNLP Workshop in conjunction with ACL 2019 conference, and we discuss the differences between both systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 330, |
|
"end": 354, |
|
"text": "(Piskorski et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The aim of those shared tasks was to recognize, normalize, and ultimately link -on a document, language and cross-language level -all named entities in collections of documents concerning the same topic, e.g., the 2020 US presidential election. Named entities have been split into five categories: PER (person), LOC (location), ORG (organization), PRO (product), and EVT (event). The 2019 edition featured four Slavic languages (Czech, Russian, Bulgarian, Polish), and the 2021 edition featured six languages (the previous four plus Ukrainian and Sloven).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our solution we have combined models trained unsupervised on large datasets, and fine-tuned on small ones in a supervised way, with simple, whitebox algorithms that perform later stages of processing in a stable and predictable manner. In addition, we have taken advantage of similarities between certain languages in order to augment the data and further improve the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Our system chains three modules for named entity recognition, lemmatization, and linking, which correspond to the objectives of the BSNLP Shared Task. We describe them in detail in the following sections. Our submissions for the 2019 and the 2021 shared tasks were similar, and differed only in the first element of the chain, the entity recognition method.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Our Approach",

"sec_num": "2"

},

{

"text": "Table 1: Class label mapping to the shared task label set in additional training datasets: KPWr (Marci\u0144czuk et al., 2016), CNEC (\u0160ev\u010d\u00edkov\u00e1 et al., 2014), and FactRuEval (Starostin et al., 2016). KPWr: PER = nam_adj_person, nam_liv_*; LOC = nam_adj_city, nam_oth_address_street, nam_fac_*, nam_loc_*; EVT = nam_eve_*; PRO = nam_oth_tech, nam_pro_*, nam_oth_license, nam_oth_stock_index; ORG = nam_org_*. CNEC: PER = p (personal names); LOC = g (geographical names); EVT = ia (conferences), tc (centuries), tf (feasts), tp (epochs); PRO = cs (article titles), mn (periodicals), oa (cultural artifacts), op (products), or (directives); ORG = ic (cultural/edu/science institutions), if (companies), io (govt. inst.), mt (tv stations). FactRu: PER = name, surname, nickname, patronymic; LOC = geo_adj, loc_descr, loc_name; PRO = job, prj_name, prj_desc; ORG = facility_descr, org_descr.",

"cite_spans": [

{

"start": 96,

"end": 121,

"text": "(Marci\u0144czuk et al., 2016)",

"ref_id": "BIBREF7"

},

{

"start": 128,

"end": 152,

"text": "(\u0160ev\u010d\u00edkov\u00e1 et al., 2014)",

"ref_id": null

},

{

"start": 169,

"end": 193,

"text": "(Starostin et al., 2016)",

"ref_id": "BIBREF11"

}

],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Our Approach",

"sec_num": "2"

},
|
{ |
|
"text": "Because the training datasets were small, we looked for other labeled datasets. There is no common standard of labelling NER datasets, and those extra datasets had to be remapped into the label set of the shared task. However, their addition did improve the recognition scores, and we describe them in the following paragraphs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional Training Data", |
|
"sec_num": "2.1.1" |
|
}, |
|
{

"text": "PL We used 1343 documents from KPWr with Named Entity annotations, pre-processed with the liner2-convert tool (Marci\u0144czuk et al., 2017), flattening and mapping the original categories as shown in Table 1 . RU, BG, UK For languages with Cyrillic script we used the FactRuEval2016 corpus (Starostin et al., 2016), consisting of 255 documents with 11754 annotated spans. Interestingly, the addition of this dataset improved scores for BG and UK despite the language mismatch.",

"cite_spans": [

{

"start": 110,

"end": 135,

"text": "(Marci\u0144czuk et al., 2017)",

"ref_id": "BIBREF6"

},

{

"start": 286,

"end": 310,

"text": "(Starostin et al., 2016)",

"ref_id": "BIBREF11"

}

],

"ref_spans": [

{

"start": 196,

"end": 203,

"text": "Table 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Additional Training Data",

"sec_num": "2.1.1"

},
|
{ |
|
"text": "CS, SL For Czech and Slovene we used Czech Named Entity Corpus (\u0160ev\u010d\u00edkov\u00e1 et al., 2014) containing 8993 sentences with manually annotated 35220 named entities, classified according to a twolevel hierarchy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 87, |
|
"text": "(\u0160ev\u010d\u00edkov\u00e1 et al., 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional Training Data", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "Recognition in our 2019 submission was realized with Flair (Akbik et al., 2018) , a model made of the embedding layer and a bi-directional LSTM with a Conditional Random Field output (BiLSTM-CRF). The embedding layer aggregated pre-trained word representations of varying granularity and origin (word embeddings, subword embeddings (Heinzerling and Strube, 2018) , contextual forward and backward character embeddings inherent to Flair).", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 79, |
|
"text": "(Akbik et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 362, |
|
"text": "(Heinzerling and Strube, 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flair-based Recognition System", |
|
"sec_num": "2.1.2" |
|
}, |
|
{ |
|
"text": "Because of the data scarcity, we adopted the philosophy of making our systems \"neural gazetteers\". To this end, we tried to collect as much various embeddings as possible. This line of reasoning applied especially to word-level embeddings. Ideally we wanted our systems to have, for every language, embeddings trained on Wikipedia, Common Crawl 1 , and a collection of news articles.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flair-based Recognition System", |
|
"sec_num": "2.1.2" |
|
}, |
|
{ |
|
"text": "We found it beneficial to mix word pieces and character embeddings between languages. For instance, our model for Russian used Bulgarian embeddings.This is especially useful when the model of specific granularity in the target language is unavailable. Lastly, we also found it beneficial to mix training data for seemingly related languages, and improved the scores by adding our additional FactRuEval data to the Bulgarian training dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flair-based Recognition System", |
|
"sec_num": "2.1.2" |
|
}, |
|
{ |
|
"text": "Our recurrent recognition model underperformed in comparison to the top 2019 contestants, notably those based on BERT (Arkhipov et al., 2019; Devlin et al., 2019) . We present an excerpt from the 2019 recognition results in Section 3.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 141, |
|
"text": "(Arkhipov et al., 2019;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 162, |
|
"text": "Devlin et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Flair-based Recognition System", |
|
"sec_num": "2.1.2" |
|
}, |
|
{ |
|
"text": "For our submission to the 2021 BSNLP Shared Task we have used FLERT (Schweter and Akbik, 2020) , a state-of-the-art architecture for named entity recognition. It is a BERT-style transformer approach, in which a XLM-RoBERTa model (Conneau et al., 2019) , initially trained on a 100language Common Crawl corpus (Wenzek et al., 2020) , is fine-tuned on a small, language-specific corpus. This model departs from training an output CRF. We found that FLERT models train fast, and outperform our previously used Flair models by a significant margin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 94, |
|
"text": "(Schweter and Akbik, 2020)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 251, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 330, |
|
"text": "(Wenzek et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FLERT-based Recognition System", |
|
"sec_num": "2.1.3" |
|
}, |
|
{ |
|
"text": "In the process of lemmatization of compound phrases, some words are converted into their lemmas, and some words remain unchanged. Occasionally some words are changed into other forms, e.g., adjectives might be transformed to nominatives with an appropriate gender. In the low-data regime of the shared task, we have opted for a simple rule-based system and data augmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
|
{

"text": "We pose the lemmatization task as splitting a word w into two concatenated parts w = w_1 w_2, and computing the lemma as w_1 v_2, where (w_2, v_2) \u2208 R_lem, and R_lem is a small set of single-word lemmatization rules.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lemmatization",

"sec_num": "2.2"

},
|
{ |
|
"text": "We use two main additional sources of information:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Wikipedia We take advantage of numerous links between articles, from which we extract pairs [[text anchor|document title]] . The anchors often are the inflected forms, and document titles the lemmatized forms of the same entity. In order to filter out spurious we consider a pair (anchor, title) a correct lemmatization if both the anchor and the text have the same number of words, and every i-th word in a title is either equal to the i-th word in an anchor, or is its possible lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
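The filtering heuristic for (anchor, title) pairs can be sketched as follows. This is a toy illustration, not the authors' code: the rule table and the `possible_lemma` helper stand in for the single-word lemmatization rules described later in this section.

```python
# Accept a Wikipedia (anchor, title) pair as a lemmatization example only if
# both sides have the same number of words and every title word is either
# identical to the corresponding anchor word or one of its possible lemmas.

# Toy single-word mapping: inflected form -> lemma (hypothetical examples).
SINGLE_WORD_RULES = {
    "Havlem": "Havel",
    "Pragi": "Praga",
}

def possible_lemma(word, candidate):
    """Return True if `candidate` is a possible lemma of `word`."""
    return SINGLE_WORD_RULES.get(word) == candidate

def is_correct_lemmatization(anchor, title):
    a_words, t_words = anchor.split(), title.split()
    if len(a_words) != len(t_words):
        return False
    return all(a == t or possible_lemma(a, t)
               for a, t in zip(a_words, t_words))

print(is_correct_lemmatization("Pragi Havlem", "Praga Havel"))  # True
```

The word-count check rejects pairs where the anchor paraphrases rather than inflects the title, which is the common failure mode of raw link data.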
|
{ |
|
"text": "Finally, we heuristically recognize a small set of of words for later use, which we call stopper words. We define them as words shared between the anchor and the title, such that all words that follow them are identical in the anchor and the title, e.g., in the (anchor, title) pair (Bazylik\u0119sw. Paw\u0142a za Murami, Bazylikasw. Paw\u0142a za Murami), a stoper word is \"sw.\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Universal Dependencies (UD) (Universal Dependencies Consortium, 2021) is a large collection of treebanks in multiple languages. We extract morphosyntactic information (word, lemma, POS-tags and additional parameters 2 ) from the words present in UD subsets for our target languages. Using that information, we construct single-word lemmatization rules. We say that the word w is a possible lemma of v if there is a one word lemmatization rule transforming v into w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
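One plausible way to build such single-word rules from (form, lemma) pairs, e.g. extracted from UD treebanks, is to rewrite the suffix that remains after the longest common prefix. The exact rule shape used by the authors is not specified; this sketch makes that assumption.

```python
# Build a single-word lemmatization rule from an (inflected form, lemma) pair:
# the rule maps the differing suffix of the form to the corresponding suffix
# of the lemma (everything after their longest common prefix).

def make_rule(form, lemma):
    """Return a (form_suffix, lemma_suffix) rule for the pair."""
    i = 0
    while i < min(len(form), len(lemma)) and form[i] == lemma[i]:
        i += 1
    return (form[i:], lemma[i:])

def apply_rule(word, rule):
    """Apply a rule if the word ends with the rule's source suffix."""
    src, dst = rule
    if word.endswith(src):
        return word[:len(word) - len(src)] + dst
    return None

rule = make_rule("Havlem", "Havel")   # ("lem", "el")
print(apply_rule("Pavlem", rule))     # Pavel
```

A rule learned from one pair then generalizes to any word sharing the same inflectional suffix, which is what makes a small rule set usable across a rich morphology.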
|
{ |
|
"text": "PoliMorfologik For the Polish language, we additionally use PoliMorfologik (Woli\u01f9ski et al., 2012) , a comprehensive morphosyntactic dictionary, which allows us to extract a large collection of lemmatization rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 98, |
|
"text": "(Woli\u01f9ski et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Lemmatization of every phrase gives rise to a lemmatization schema. It works as follows: for every word we take its suffix (the longest suffix which occurs in the list of 2000 most popular suffices), in that way we obtain the left-hand side of the rule. The right-hand side describes, how this suffices should be transformed. For instance for the pair (V\u00e1clavem Havlem, V\u00e1clav Havel)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
|
{

"text": "we obtain the rule (-vem, -vlem) \u2212\u2192 (-v, -vel).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lemmatization Schemas",

"sec_num": "2.2.1"

},
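A minimal sketch of how such phrase-level schemas could be stored and applied; the suffix inventory and schema table below are toy stand-ins for the 2000 most popular suffixes and the schemas harvested from Wikipedia links.

```python
# A phrase-level lemmatization schema: the left-hand side is the tuple of
# per-word suffixes, the right-hand side the suffixes they are rewritten to.

POPULAR_SUFFIXES = {"vem", "vlem", "v", "vel"}   # toy suffix inventory

def longest_popular_suffix(word):
    """Longest suffix of `word` that occurs in the popular-suffix list."""
    for k in range(len(word), 0, -1):
        if word[-k:] in POPULAR_SUFFIXES:
            return word[-k:]
    return ""

# Toy schema store: (-vem, -vlem) -> (-v, -vel), as in Vaclavem Havlem.
SCHEMAS = {("vem", "vlem"): ("v", "vel")}

def lemmatize_phrase(phrase):
    words = phrase.split()
    lhs = tuple(longest_popular_suffix(w) for w in words)
    rhs = SCHEMAS.get(lhs)
    if rhs is None:
        return None  # no schema applies to this suffix pattern
    return " ".join(w[:len(w) - len(s)] + r if s else w
                    for w, s, r in zip(words, lhs, rhs))

print(lemmatize_phrase("Vaclavem Havlem"))  # Vaclav Havel
```

Because a schema only inspects suffixes, one schema learned from a single Wikipedia link covers every phrase with the same inflection pattern.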
|
{ |
|
"text": "Our lemmatization algorithm takes a phrase (named entity found in the first stage) and returns its lemma. It follows that we do not consider every information from the words surrounding the phrase/context. Afterwards, we try to apply the following heuristics in a given order:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "1. Try to find the (rightmost) stopper word. If there is one, then leave unchanged suffix of the phrase after the stopper (including the stopper itself), find the lemma for the prefix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "2. Try to apply rule based agreement phrase lemmatization (only for Polish)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "3. Try to find the lemmatization schema suitable for the phrase. If there are more than one such rule, use the one which gives ,more natural lemmatization' (which prefers common words and words occurring in lemmas)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
|
{ |
|
"text": "4. Replace every word with its most popular lemma (in the training data, and in Wikipedia), if the word doesn't occur leave it unchanged", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lemmatization Schemas", |
|
"sec_num": "2.2.1" |
|
}, |
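The four heuristics can be read as a cascade. In this sketch the stopper set, schema table, and per-word lemma dictionary are toy stand-ins, and step 2 (the Polish agreement rules) is omitted; only the control flow mirrors the description.

```python
# Lemmatization cascade: stopper words, then phrase schemas, then the most
# popular per-word lemma as a fallback.  All data below is illustrative.

STOPPERS = {"sw."}
SCHEMAS = {}                         # phrase schemas, as in Section 2.2.1
WORD_LEMMAS = {"Havlem": "Havel"}    # most popular single-word lemmas

def lemmatize(phrase):
    words = phrase.split()
    # 1. Rightmost stopper word: keep everything from it on, recurse on prefix.
    for i in range(len(words) - 1, -1, -1):
        if words[i] in STOPPERS:
            prefix = lemmatize(" ".join(words[:i])) if i else ""
            return (prefix + " " + " ".join(words[i:])).strip()
    # 2. (Polish only) rule-based agreement lemmatization -- omitted here.
    # 3. A matching lemmatization schema.
    if tuple(words) in SCHEMAS:
        return " ".join(SCHEMAS[tuple(words)])
    # 4. Fall back to the most popular lemma of each word.
    return " ".join(WORD_LEMMAS.get(w, w) for w in words)

print(lemmatize("Havlem"))  # Havel
```

The ordering matters: stopper words shield fixed multi-word tails (e.g. saint names) from the more aggressive per-word fallback.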
|
{ |
|
"text": "A recognized entity, associated with a category and a normalized lemma, has to be linked with other occurrences of this entity (in this document, in other documents, and ultimately across the documents in all languages). The task is difficult due to the subtle differences between seemingly identical entities. Consider Donald Trump entity: its one occurrence could be linked with the 45th president of the United States, or Donald Trump Jr, depending on the role in the text, but not with both at the same time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We divide the task into two phases: 1) initial assignment of identifiers, and 2) refinement of identifiers. Our linking algorithm relies on three kinds of matches: exact matches of entity names, partial matches, and fuzzy matches with word embeddings. In order to ground the recognized entities regardless of the language, as well as extend our inventory of entities and their possible names, we use Wikidata 3 as a catalogue of entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Wikidata is a structured database of entities extracted from Wikipedia. Every entity has a unique identifier, e.g. Q123456, a list of labels and languages for each label, a description and subclasses/instances of properties, and relationships to other Wikidata entities (instance of, part of, etc.), which form a graph.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikidata", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "Thanks to the hierarchy of the relations, we have selected a handful of top-level Wikidata entities (Table 2), and collected all their descendants into sets of wikidata_entities. These are further weighted by their Term Frequency in Wikidata, so we could resolve collisions in favor of the most popular entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Wikidata", |
|
"sec_num": "2.3.1" |
|
}, |
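Collecting the descendants of a top-level entity amounts to a graph traversal over Wikidata relations. A minimal sketch, assuming a toy relation graph and toy term frequencies in place of a real Wikidata dump:

```python
# BFS over the subclass-of / instance-of graph from a chosen top-level entity,
# then weight the collected descendants by their term frequency so that
# collisions can be resolved in favor of popular entities.
from collections import deque

CHILDREN = {                    # parent id -> child ids (toy relation graph)
    "Q215627": ["Q5"],          # person -> human
    "Q5": ["Q22686"],           # human -> Donald Trump
}
TERM_FREQ = {"Q22686": 120, "Q5": 10}   # illustrative frequencies

def descendants(root):
    """Return all entity ids reachable from `root` in the relation graph."""
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for child in CHILDREN.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

wikidata_entities = {e: TERM_FREQ.get(e, 0) for e in descendants("Q215627")}
print(max(wikidata_entities, key=wikidata_entities.get))  # Q22686
```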
|
{ |
|
"text": "In a typical, coherent paragraph, the narrative develops with every new sentence. Upon introduction, the entities are named carefully (e.g., with a full name, expanded acronym), to be shortened later, when it is clear from the context what they refer to. For this reason we designed a stateful algorithm, that processes and refines a local list of doc_entities caught in the document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Initial Assignment of Identifiers", |
|
"sec_num": "2.3.2" |
|
}, |
|
{

"text": "Algorithm 1 outlines the linking procedure. Assignment of identifiers is performed separately for every document with the ADD_AND_LINK function. It processes the lemmatized set of entities recognized by earlier modules of our system. It uses two kinds of entity dictionaries: doc_entities, which is local to the function, and the global wikidata_entities, which we prepare in advance from Wikidata. Those dictionaries map textual mentions to identifiers from Wikidata and the target language, e.g., Donald Trump maps to [(Q22686, en), (Q22686, pl), (Q22686, cs), (Q3713655, cs)] (the last identifier refers to Donald Trump Jr).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Initial Assignment of Identifiers",

"sec_num": "2.3.2"

},
|
{ |
|
"text": "We process document entities starting from the longest ones, and for each select the best entity id with the BEST_ID function. It firstly prefers the matching entries from the doc_entities dictionary, and secondly the most popular Wikidata entries (by Term Frequency) from wikidata_entities. For instance, with the local doc_entities dictionary, after processing Donald Trump, a subsequent shorter mention Trump should be linked with it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Initial Assignment of Identifiers", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "The function ALIASES handles only PRO and ORG labels, and returns a list of all short forms and acronyms specific to those labels, present in Wikidata, e.g., Sony Ericsson is aliased as SE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Initial Assignment of Identifiers", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "The refinement stage uses dense embeddings of phrases in order to uncover high similarities between them, that might have been otherwise missed. We use FastText (Bojanowski et al., 2017) , which is suited for morphologically rich Slavic languages, since the representations are built from generic subword units.", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 186, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Refinement of Identifiers", |
|
"sec_num": "2.3.3" |
|
}, |
|
{ |
|
"text": "The refinement is carried out in two phases. In the first one, all phrases with the same identifier are grouped together. In the second one, two groups are merged into one if there exist two mentions (one per each group) with sufficiently similar embeddings measured by their dot product. Phrases are embedded as sums of embeddings of their words. When we merge two groups, we assign to them the identifier with a higher Wikidata term frequency. We refine identifiers only on the single language level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Refinement of Identifiers", |
|
"sec_num": "2.3.3" |
|
}, |
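The merge criterion can be sketched as follows, with toy word vectors and an illustrative threshold; in the real system the vectors would come from FastText.

```python
# Merge two identifier groups when any two mentions (one from each group)
# have a sufficiently large dot product.  Phrases are embedded as sums of
# their word vectors; the vectors, threshold and frequencies are toy data.

def embed(phrase, word_vecs):
    """Sum of word embeddings; unknown words contribute nothing."""
    dims = len(next(iter(word_vecs.values())))
    vec = [0.0] * dims
    for w in phrase.split():
        for i, x in enumerate(word_vecs.get(w, [0.0] * dims)):
            vec[i] += x
    return vec

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maybe_merge(group_a, group_b, word_vecs, threshold, term_freq):
    """Return the merged group's id, or None if no mention pair is similar."""
    for pa in group_a["mentions"]:
        for pb in group_b["mentions"]:
            if dot(embed(pa, word_vecs), embed(pb, word_vecs)) >= threshold:
                # Keep the identifier with the higher Wikidata term frequency.
                ids = [group_a["id"], group_b["id"]]
                return max(ids, key=lambda i: term_freq.get(i, 0))
    return None
```

For example, with `word_vecs = {"Trump": [1.0, 0.0], "Donald": [0.5, 0.5]}`, the mentions "Donald Trump" and "Trump" score 1.5 and would merge at a threshold of 1.0, inheriting the more frequent identifier.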
|
{

"text": "Algorithm 1 Basic routines of the linking algorithm. function ADD_AND_LINK(ners): doc_entities, linked \u2190 {}, {}; for (phrase, lemma, type) in SORTED(ners) do (descending by the number of words in a phrase): P_1 \u2190 GET_IDENTIFIERS(phrase, doc_entities); P_2 \u2190 GET_IDENTIFIERS(lemma, doc_entities); id \u2190 BEST_ID(lemma, P_1 + P_2 + [lemma + type]); linked[(phrase, lemma, type)] \u2190 id",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Refinement of Identifiers",

"sec_num": "2.3.3"

},
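A speculative Python rendering of ADD_AND_LINK: the helper names and toy data are ours, the candidate preferences follow the prose description (document-local entities first, then Wikidata entries by term frequency, then a fresh lemma+type identifier), and registering the words of each linked mention is our simplification of the partial-match step.

```python
# Sketch of the linking loop: entities are processed longest-first, candidate
# identifiers are gathered for both the surface form and the lemma, and
# shorter mentions inherit identifiers from longer ones seen earlier.

WIKIDATA_ENTITIES = {"Donald Trump": ["Q22686"]}   # mention -> candidate ids
TERM_FREQ = {"Q22686": 120}

def best_id(doc_cands, wiki_cands, fallback):
    if doc_cands:        # previously seen in this document
        return doc_cands[0]
    if wiki_cands:       # most popular Wikidata candidate
        return max(wiki_cands, key=lambda c: TERM_FREQ.get(c, 0))
    return fallback      # new document-local identifier

def add_and_link(ners):
    doc_entities, linked = {}, {}
    for phrase, lemma, etype in sorted(ners, key=lambda n: -len(n[0].split())):
        doc_cands = doc_entities.get(phrase, []) + doc_entities.get(lemma, [])
        wiki_cands = (WIKIDATA_ENTITIES.get(phrase, [])
                      + WIKIDATA_ENTITIES.get(lemma, []))
        ident = best_id(doc_cands, wiki_cands, lemma + "/" + etype)
        linked[(phrase, lemma, etype)] = ident
        # Register the full mention and its words, so that shorter mentions
        # (e.g. "Trump" after "Donald Trump") link to the same identifier.
        for key in {phrase, lemma, *phrase.split(), *lemma.split()}:
            doc_entities.setdefault(key, []).append(ident)
    return linked

ners = [("Donald Trump", "Donald Trump", "PER"), ("Trump", "Trump", "PER")]
print(add_and_link(ners))
```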
|
{ |
|
"text": "We present experiments carried out on different levels of the entity recognition pipeline. The data used in those experiments comes from the BSNLP 2019 Shared Task test set (Nord Stream and Ryanair subsets). Our algorithms are tested in the submitted form and have not been further adapted to those datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Recognition Table 3 summarizes strict recognition results on the test data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The 2019 Shared Task", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Lemmatization We analyzed the influence of various part of lemmatization on the performance of our method. The results are shown in Table 4 . Our baseline is the identity function, in which we assume a phrase being its own lemma.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The 2019 Shared Task", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One should be aware that due to the small amount of test data, the results should be treated as approximate. Some differences can be caused by bad lemmatization of one phrase (especially if the phrase occurs many times in test data). It seems that all implemented heuristic are reasonable and improve over the baseline. Moreover, it is easy to see that links from Wikipedia are useful source of information in this task. Table 5 shows the result of linking. Even though our recognizer did not hold up to the competition, the linking algorithm was able to close the gap in F1 score. In order to test the algorithm in ablation, we include linking results on ground truth lemmatized data (Lemma Oracle).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 428, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The 2019 Shared Task", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We present the results of our FLERT-based submission, which are partial results of the entire shared task available at the time of writing. One of the sets of articles in the training data is devoted to COVID-19. This situation is unusual: the phrase very often used in test data, does not appear at all in the training data (also in the data used to pre-train language model).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The 2021 Shared Task", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We have verified that our NER models struggle with assigning consistent labels to the phrase COVID-19, which is common in the test data. An additional difficulty is the ambiguity of this phrase, which may refer to a disease and possibly remain unclassified as a named entity, or a pandemic and be classified as EVT. We decided to do a simple post-processing which assigns EVT to all COVID-19 phrases recognized by the NER module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The 2021 Shared Task", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We think that this situation is so unusual that in a real system, used in the industry, it would be handled using a special ad-hoc rule. Moreover, we wanted to know, what are the result of this fixed assignment, and submitted two versions of our solutions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The 2021 Shared Task", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This paper describes our submissions to the 2019 and 2021 BSNLP Shared Tasks on named entity recognition on Slavic languages. Even though the training data was scarce, we have used large-scale datasets: corpora of unstructured text in the unsupervised training phase of training of the recognition model, and structured Wikipedia and Wikidata knowledge bases in order to extract rules and entities for lemmatization and linking phases. The linking algorithm is a strong point of our submission. In the 2019 task it allowed to close the performance gap between our solution and competitors, introduced by a weak initial recognition model. The results suggest that, perhaps, there is still a white spot in between supervised and unsupervised neu- ral learning, where the structure of the data matters more than volume, and simple rule-based system excel.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "http://commoncrawl.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We take the 'international version' of these parameters", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.wikidata.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors thank Polish National Science Center for funding under the OPUS-18 2019/35/B/ST6/04379 grant. We also would like to thank Adam Wawrzy\u0144ski and Wojciech Janowski from VoiceLab AI for their support during conducting experiments and model training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgment", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Contextual string embeddings for sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duncan", |
|
"middle": [], |
|
"last": "Blythe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Vollgraf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1638--1649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Tuning multilingual transformers for language-specific named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Trofimova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Sorokin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3712" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Arkhipov, Maria Trofimova, Yuri Kuratov, and Alexey Sorokin. 2019. Tuning multilingual trans- formers for language-specific named entity recogni- tion. In Proceedings of the 7th Workshop on Balto- Slavic Natural Language Processing, pages 89-93, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00051" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02116" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Heinzerling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword em- beddings in 275 languages. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Liner2 -a generic framework for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Marci\u0144czuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Koco\u0144", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Oleksy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "86--91", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-1413" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha\u0142 Marci\u0144czuk, Jan Koco\u0144, and Marcin Oleksy. 2017. Liner2 -a generic framework for named entity recognition. In Proceedings of the 6th Work- shop on Balto-Slavic Natural Language Processing, pages 86-91. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Polish corpus of wroc\u0142aw university of technology 1.2. CLARIN-PL digital repository", |
|
"authors": [ |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Marci\u0144czuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Oleksy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Maziarz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Wieczorek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominika", |
|
"middle": [], |
|
"last": "Fikus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Turek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Wolski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Berna\u015b", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Koco\u0144", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawe\u0142", |
|
"middle": [], |
|
"last": "K\u0119dzia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha\u0142 Marci\u0144czuk, Marcin Oleksy, Marek Maziarz, Jan Wieczorek, Dominika Fikus, Agnieszka Turek, Micha\u0142 Wolski, Tomasz Berna\u015b, Jan Koco\u0144, and Pawe\u0142 K\u0119dzia. 2016. Polish corpus of wroc\u0142aw university of technology 1.2. CLARIN-PL digital repository.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The second crosslingual challenge on recognition, normalization, classification, and linking of named entities across Slavic languages", |
|
"authors": [ |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Piskorski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laska", |
|
"middle": [], |
|
"last": "Laskova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Marci\u0144czuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidia", |
|
"middle": [], |
|
"last": "Pivovarova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "P\u0159ib\u00e1\u0148", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--74", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakub Piskorski, Laska Laskova, Micha\u0142 Marci\u0144czuk, Lidia Pivovarova, Pavel P\u0159ib\u00e1\u0148, Josef Steinberger, and Roman Yangarber. 2019. The second cross- lingual challenge on recognition, normalization, classification, and linking of named entities across Slavic languages. In Proceedings of the 7th Work- shop on Balto-Slavic Natural Language Processing, pages 63-74, Florence, Italy. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Flert: Document-level features for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Schweter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Akbik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Schweter and Alan Akbik. 2020. Flert: Document-level features for named entity recogni- tion.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Czech named entity corpus 2.0. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL", |
|
"authors": [ |
|
{ |
|
"first": "Magda", |
|
"middle": [], |
|
"last": "\u0160ev\u010d\u00edkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zden\u011bk", |
|
"middle": [], |
|
"last": "\u017dabokrtsk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jana", |
|
"middle": [], |
|
"last": "Strakov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Straka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Faculty of Mathematics and Physics, Charles University", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magda \u0160ev\u010d\u00edkov\u00e1, Zden\u011bk \u017dabokrtsk\u00fd, Jana Strakov\u00e1, and Milan Straka. 2014. Czech named entity corpus 2.0. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles Uni- versity.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Factrueval 2016: Evaluation of named entity recognition and fact extraction systems for russian", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Starostin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Bocharov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Alexeeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Bodrova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chuchunkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Dzhumaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Efimenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Granovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Khoroshevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Krylova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nikolaeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Smurov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Toldova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "FactRuEval 2016: Evaluation of Named Entity Recognition and Fact Extraction Systems for Russian", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "688--705", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. S. Starostin, V. V. Bocharov, S. V. Alexeeva, A. A. Bodrova, A. S. Chuchunkov, S. S. Dzhumaev, I. V. Efimenko, D. V. Granovsky, V. F. Khoroshevsky, I. V. Krylova, M. A. Nikolaeva, I. M. Smurov, and S. Y. Toldova. 2016. Factrueval 2016: Evaluation of named entity recognition and fact extraction sys- tems for russian. In FactRuEval 2016: Evaluation of Named Entity Recognition and Fact Extraction Sys- tems for Russian, pages 688-705.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "CCNet: Extracting high quality monolingual datasets from web crawl data", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Anne", |
|
"middle": [], |
|
"last": "Lachaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4003--4012", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm\u00e1n, Ar- mand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "PoliMorf: a (not so) new open morphological dictionary for Polish", |
|
"authors": [ |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Woli\u01f9ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Mi\u0142kowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maciej", |
|
"middle": [], |
|
"last": "Ogrodniczuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Przepi\u00f3rkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Sza\u0142kiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "860--864", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcin Woli\u01f9ski, Marcin Mi\u0142kowski, Maciej Ogrod- niczuk, Adam Przepi\u00f3rkowski, and \u0141ukasz Sza- \u0142kiewicz. 2012. PoliMorf: a (not so) new open morphological dictionary for Polish. In Proceed- ings of the Eighth International Conference on Lan- guage Resources and Evaluation, LREC 2012, pages 860-864, Istanbul, Turkey. European Language Re- sources Association (ELRA).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "0.39 0.68 0.52 0.66 0.70 0.55 COVID-19 0.66 0.39 0.67 0.61 0.66 0.72 0.62" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">: Relations between Wikidata categories and</td></tr><tr><td>named entity categories</td><td/></tr><tr><td>Label Top-level Wikidata Entities</td><td/></tr><tr><td colspan=\"2\">PER human (Q5), nationality (Q231002), ethnic group</td></tr><tr><td>(Q41710)</td><td/></tr><tr><td colspan=\"2\">LOC locality (Q3257686), location (Q2221906), spatial</td></tr><tr><td colspan=\"2\">entity (Q58416391), geologic province (Q214045)</td></tr><tr><td colspan=\"2\">EVT event (Q1656682), social phenomenon (Q602884),</td></tr><tr><td>occurrence (Q1190554)</td><td/></tr><tr><td colspan=\"2\">PRO type of manufactured good (Q22811462), tan-</td></tr><tr><td colspan=\"2\">gible good (Q1485500), broadcasting program</td></tr><tr><td colspan=\"2\">(Q11578774), intellectual work (Q15621286), tele-</td></tr><tr><td>vision station (Q1616075)</td><td/></tr><tr><td>ORG organization (Q43229),</td><td>trade agreement</td></tr><tr><td colspan=\"2\">(Q252550), company (Q783794)</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "2019 BSNLP Shared Task selected results (strict recognition evaluation, test set, F1 metric). For every submitter, the best solution is shown with respect to the average performance on all languages.", |
|
"content": "<table><tr><td>Model</td><td>Testset</td><td>BG</td><td>CS</td><td>PL</td><td>RU</td><td>All</td></tr><tr><td>RIS-slav_lemma</td><td>NordS</td><td colspan=\"5\">0.84 0.89 0.89 0.78 0.85</td></tr><tr><td>CogComp-7</td><td>NordS</td><td colspan=\"5\">0.84 0.89 0.86 0.72 0.83</td></tr><tr><td>IIUWR.PL-5</td><td>NordS</td><td colspan=\"5\">0.71 0.83 0.86 0.65 0.78</td></tr><tr><td>TLR</td><td>NordS</td><td colspan=\"5\">0.73 0.74 0.72 0.60 0.70</td></tr><tr><td colspan=\"2\">Cog_Tech_Cent-4 NordS</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">0.69 0.69</td></tr><tr><td>Sberiboba</td><td>NordS</td><td colspan=\"5\">0.63 0.71 0.68 0.60 0.66</td></tr><tr><td>JRC-TMA-CC-4</td><td>NordS</td><td colspan=\"5\">0.67 0.50 0.42 0.52 0.52</td></tr><tr><td>NLP_Cube</td><td>NordS</td><td colspan=\"5\">0.14 0.16 0.09 0.11 0.12</td></tr><tr><td>CogComp-6</td><td colspan=\"6\">Ryanair 0.88 0.94 0.91 0.94 0.92</td></tr><tr><td>RIS-slav_lemma</td><td colspan=\"6\">Ryanair 0.86 0.94 0.92 0.91 0.91</td></tr><tr><td colspan=\"2\">Cog_Tech_Cent-4 Ryanair</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">0.91 0.91</td></tr><tr><td>IIUWR.PL-4</td><td colspan=\"6\">Ryanair 0.76 0.87 0.84 0.79 0.82</td></tr><tr><td>TLR</td><td colspan=\"6\">Ryanair 0.76 0.83 0.82 0.83 0.82</td></tr><tr><td>Sberiboba</td><td colspan=\"6\">Ryanair 0.65 0.84 0.81 0.72 0.77</td></tr><tr><td>JRC-TMA-CC-1</td><td colspan=\"6\">Ryanair 0.64 0.55 0.52 0.79 0.64</td></tr><tr><td>NLP_Cube</td><td colspan=\"6\">Ryanair 0.15 0.13 0.19 0.18 0.16</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "Accuracy of our rule-based lemmatization algorithm on the 2019 BSNLP Shared Task training data. Abbreviations: p -phrase lemmatization rules, w -separate lemmatization of words, W -additional Wikipedia data, a -handwritten agreement rules (Polish only), s -uses stoper words.", |
|
"content": "<table><tr><td>Method</td><td>BG</td><td>CS</td><td>PL</td><td>RU</td><td>Avg</td></tr><tr><td colspan=\"6\">Baseline 89.02 59.23 54.51 54.79 63.00</td></tr><tr><td>+a</td><td colspan=\"5\">89.02 59.23 58.53 54.79 64.12</td></tr><tr><td>+w</td><td colspan=\"5\">89.91 64.39 74.17 57.23 70.41</td></tr><tr><td>+p</td><td colspan=\"5\">89.18 67.47 79.12 57.53 72.34</td></tr><tr><td>+wW</td><td colspan=\"5\">92.73 71.29 81.27 86.71 82.62</td></tr><tr><td>+pW</td><td colspan=\"5\">88.53 81.78 80.77 89.16 84.97</td></tr><tr><td colspan=\"6\">+paswW 91.60 81.69 82.42 89.28 86.14</td></tr><tr><td>+pasW</td><td colspan=\"5\">92.33 81.86 83.57 89.99 86.83</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "2019 BSNLP Shared Task results (crosslanguage linking, test data). For every team we present their highest scoring submission wrt. the F1 metric. (*) The oracle model (first row for every dataset) is our entity linking algorithm run on the ground truth lemmatized data after the competition.", |
|
"content": "<table><tr><td>Model</td><td>Testset</td><td>F1</td><td colspan=\"2\">Prec. Rec.</td></tr><tr><td colspan=\"5\">Ours + Lemma Oracle Ryanair 0.76 * 0.83 * 0.70 *</td></tr><tr><td>Ours (IIUWR.PL-5)</td><td colspan=\"2\">Ryanair 0.49</td><td>0.80</td><td>0.35</td></tr><tr><td>JRC-TMA-CC-2</td><td colspan=\"2\">Ryanair 0.27</td><td>0.67</td><td>0.17</td></tr><tr><td>CogComp-3</td><td colspan=\"2\">Ryanair 0.13</td><td>0.07</td><td>0.73</td></tr><tr><td>RIS-merge</td><td colspan=\"2\">Ryanair 0.10</td><td>0.06</td><td>0.70</td></tr><tr><td>Sberiboba</td><td colspan=\"2\">Ryanair 0.10</td><td>0.06</td><td>0.30</td></tr><tr><td>NLP_Cube</td><td colspan=\"2\">Ryanair 0.00</td><td>0.67</td><td>0.00</td></tr><tr><td colspan=\"2\">Ours + Lemma Oracle NordS</td><td colspan=\"3\">0.59 * 0.74 * 0.50 *</td></tr><tr><td>Ours (IIUWR.PL-5)</td><td>NordS</td><td>0.42</td><td>0.73</td><td>0.29</td></tr><tr><td>JRC-TMA-CC-2</td><td>NordS</td><td>0.31</td><td>0.69</td><td>0.20</td></tr><tr><td>RIS-merge_lemma</td><td>NordS</td><td>0.11</td><td>0.06</td><td>0.72</td></tr><tr><td>CogComp-3</td><td>NordS</td><td>0.11</td><td>0.06</td><td>0.68</td></tr><tr><td>Sberiboba</td><td>NordS</td><td>0.06</td><td>0.03</td><td>0.36</td></tr><tr><td>NLP_Cube</td><td>NordS</td><td>0.00</td><td>0.46</td><td>0.00</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"text": "2021 BSNLP Shared Task selected results (test set, F1 metric): strict recognition, normalization, language-level linking (coreference). NC refers to the submission without fixed labelling of COVID-19 occurrences as EVT.", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |