{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:24.811551Z"
},
"title": "Simultaneously Self-Attending to Text and Entities for Knowledge-Informed Text Representations",
"authors": [
{
"first": "Dung",
"middle": [],
"last": "Thai",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Raghuveer",
"middle": [],
"last": "Thirukovalluru",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Trapit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Umass",
"middle": [],
"last": "Amherst",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pre-trained language models have emerged as highly successful methods for learning good text representations. However, the amount of structured knowledge retained in such models, and how (if at all) it can be extracted, remains an open question. In this work, we aim at directly learning text representations which leverage structured knowledge about entities mentioned in the text. This can be particularly beneficial for downstream tasks which are knowledge-intensive. Our approach utilizes self-attention between words in the text and knowledge graph (KG) entities mentioned in the text. While existing methods require entity-linked data for pre-training, we train using a mention-span masking objective and a candidate ranking objective-which doesn't require any entity-links and only assumes access to an alias table for retrieving candidates, enabling large-scale pre-training. We show that the proposed model learns knowledgeinformed text representations that yield improvements on the downstream tasks over existing methods.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Pre-trained language models have emerged as highly successful methods for learning good text representations. However, the amount of structured knowledge retained in such models, and how (if at all) it can be extracted, remains an open question. In this work, we aim at directly learning text representations which leverage structured knowledge about entities mentioned in the text. This can be particularly beneficial for downstream tasks which are knowledge-intensive. Our approach utilizes self-attention between words in the text and knowledge graph (KG) entities mentioned in the text. While existing methods require entity-linked data for pre-training, we train using a mention-span masking objective and a candidate ranking objective-which doesn't require any entity-links and only assumes access to an alias table for retrieving candidates, enabling large-scale pre-training. We show that the proposed model learns knowledgeinformed text representations that yield improvements on the downstream tasks over existing methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Self-supervised representation learning on large text corpora using language modeling objectives has been shown to yield generalizable representations that improve performance for many downstream tasks. Examples of such approaches include BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019b) , XLNET (Yang et al., 2019) , GPT-2 (Radford et al., 2019) , T5 (Raffel et al., 2019) etc. However, whether such models retain structured knowledge in their representation is still an open question (Petroni et al., 2019; Poerner et al., 2019; Roberts et al., 2020) which has led to active research on knowledge-informed rep- * Equal Contribution resentations Soares et al., 2019) .",
"cite_spans": [
{
"start": 244,
"end": 265,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 276,
"end": 295,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF12"
},
{
"start": 304,
"end": 323,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 326,
"end": 354,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
},
{
"start": 360,
"end": 381,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 494,
"end": 516,
"text": "(Petroni et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 517,
"end": 538,
"text": "Poerner et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 539,
"end": 560,
"text": "Roberts et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 655,
"end": 675,
"text": "Soares et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Models that learn knowledge-informed representations can be broadly classified into two categories. The first approach augments language model pretraining with the aim of storing structured knowledge in the model parameters. This is typically done by augmenting the pre-training task, for example by masking entity mentions or enforcing representational similarity in sentences containing the same entities (Soares et al., 2019) . While this makes minimal assumptions, it requires memorizing all facts encountered during training in the model parameters, necessitating larger models. The second approach directly conditions the representation on structured knowledge, for example fusing mention token representations with the mentioned entity's representation .",
"cite_spans": [
{
"start": 407,
"end": 428,
"text": "(Soares et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we consider the latter approach to learning knowledge-informed representations. Conditioning on relevant knowledge removes the burden on the model parameters to memorize all facts, and allows the model to encode novel facts not seen during training. However, existing methods typically assume access to entity-linked data for training , which is scarce and expensive to annotate, preventing large scale pre-training. Moreover, these methods don't allow for bi-directional attention between both the text and the KG when representing text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a simple approach to incorporate structured knowledge into text representations. This is done using self-attention (Vaswani et al., 2017) to simultaneously attend to tokens in text and candidate KG entities mentioned in the text, in order to learn knowledge-informed representations after multiple layers of self-attention. The model is trained using a combination of a mention-masking objective and a weakly-supervised entity selection objective, which only requires access to an alias table to generate candidate entities and doesn't assume any entity-linked data for training. We show that this objective allows the model to appropriately attend to relevant entities without explicit supervision for the linked entity and learn representations that perform competitively to models trained with entity-linked data.",
"cite_spans": [
{
"start": 126,
"end": 148,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We make the following contributions: (1) we propose KNowledge-Informed Transformers (KNIT), an approach to learn knowledge-informed text representations which does not require entity-linked data for training, (2) we train KNIT on a large corpora curated from the web with Wikidata as the knowledge graph, (3) we evaluate the approach on multiple tasks of entity typing and entity linking and show that it performs competitively or better than existing methods, yielding large improvements even while using < 1% of task-specific data for fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BERT (Devlin et al., 2019) proposed a pretraining approach, called masked language modeling (MLM), which requires randomly replacing words in a sentence with a special [MASK] token and predicting the original masked tokens. RoBERTa (Liu et al., 2019b ) trained a more robust BERT model on larger data. While MLM has been shown to learn general purpose representations, the amount of factual knowledge stored in such models is limited (Petroni et al., 2019; Poerner et al., 2019) . propose a mention-masking objective which masks mentions of entities in a sentence, as opposed to random words, as a way of incorporating entity information into such models. use entity-linked data and infuse representations of the linked entity in the final layer of the model to the representations of the corresponding entity mention. KnowBERT (Peters et al., 2019) learn an integrated entity linker that infuses entity representations into the word embedding input for the model and also relies on entity-linked data for training. K-Bert (Liu et al., 2019a) uses linked triples about entities in a sentence to inject knowledge. KGLM proposed a fact-aware language model that selects and copies facts from KG for generation. Recently, introduced Entity-as-Experts (EAE), which is a masked language model coupled with an entity memory network. EAE learns to predict the entity spans, retrieves relevant entity memories and inte-grate them back to the Transformer layers. They also assume entity-linked data for training.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 232,
"end": 250,
"text": "(Liu et al., 2019b",
"ref_id": "BIBREF12"
},
{
"start": 434,
"end": 456,
"text": "(Petroni et al., 2019;",
"ref_id": "BIBREF15"
},
{
"start": 457,
"end": 478,
"text": "Poerner et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 819,
"end": 849,
"text": "KnowBERT (Peters et al., 2019)",
"ref_id": null
},
{
"start": 1023,
"end": 1042,
"text": "(Liu et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In this section, we describe the KNIT model as well as its training procedure. KNIT makes use of the mention-masking objective for training and conditions the encoder on both text as well as mentioned entities but does not assume any entity-linked data for training. Figure 1 shows the overall model.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Knowledge-Informed Transformers (KNIT)",
"sec_num": "3"
},
{
"text": "The input consists of a sentence along with candidate entities for the sentence. We first run a named entity extraction model on the sentence to extract mentions and then generate candidate entities based on cross-wikis (Ganea and Hofmann, 2017). We use a Wikipedia alias table for generating candidates, taken from Raiman and Raiman (2018) . The start and end of mentions are demarcated using special tokens m and /m . Given the text sequence {x 1 , . . . , x n } and the set of associated candidate entities for the sequence {e 1 , . . . , e m }, we first embed the words and entities as vector embeddings. For entities, we use KG pre-trained embeddings (Lerer et al., 2019) and add a projection layer to upscale the entity embedding to the word embedding size. We will use Transformer self-attention (Vaswani et al., 2017) to encode both the text and the entities. Since self-attention has no notion of position in the sequence, it is common to concatenate a position embedding (Devlin et al., 2019) to the word embeddings. We follow this approach for the word embeddings. However, since the entities in the candidate set need to be encoded in a position-independent manner, we don't add any position embeddings to them. This entire sequence, position-dependent word embeddings and positionindependent candidates, is passed through multiple layers of self-attention. The end result is contextualized token embeddings conditioned on the entities, {x 1 , . . . ,x n }, as well as candidate entity embeddings conditioned on the text {\u1ebd 1 , . . . ,\u1ebd m }.",
"cite_spans": [
{
"start": 316,
"end": 340,
"text": "Raiman and Raiman (2018)",
"ref_id": "BIBREF20"
},
{
"start": 656,
"end": 676,
"text": "(Lerer et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 803,
"end": 825,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 981,
"end": 1002,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text and Entity Encoder",
"sec_num": "3.1"
},
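The following is a minimal PyTorch sketch (not the authors' released code) of how the joint text-and-entity input described above could be assembled: position embeddings are added to word embeddings only, candidate entity embeddings are projected up to the word embedding size and appended without positions, and the whole sequence is passed through Transformer self-attention. All module and variable names are illustrative assumptions.

```python
# Illustrative sketch of a KNIT-style joint text/entity encoder (assumed names, not the paper's code).
import torch
import torch.nn as nn

class JointTextEntityEncoder(nn.Module):
    def __init__(self, vocab_size, num_entities, d_model=768, ent_dim=200,
                 max_len=512, n_layers=12, n_heads=12):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # Pre-trained KG entity embeddings (e.g. dimension 200) projected up to the word embedding size.
        self.ent_emb = nn.Embedding(num_entities, ent_dim)
        self.ent_proj = nn.Linear(ent_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids, entity_ids):
        # token_ids: (batch, n) word-piece ids; entity_ids: (batch, m) candidate entity ids.
        n = token_ids.size(1)
        positions = torch.arange(n, device=token_ids.device).unsqueeze(0)
        words = self.word_emb(token_ids) + self.pos_emb(positions)   # position-dependent words
        ents = self.ent_proj(self.ent_emb(entity_ids))               # no position embeddings for entities
        joint = torch.cat([words, ents], dim=1)                      # (batch, n + m, d_model)
        out = self.encoder(joint)
        # Contextualized token states x~_1..x~_n and entity states e~_1..e~_m.
        return out[:, :n], out[:, n:]
```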
{
"text": "Mention-masking While the approach described above has the potential to learn knowledgeconditioned text representations, it needs a correct pre-training objective to learn to use the extra information from the entities. Since large Transformer models (Devlin et al., 2019 ) have a lot of parameters, they can be highly accurate at predicting random word tokens and thus directly using a MLM objective for training will not work as the model can ignore the entity embeddings. However, we find that, due to lack of factual knowledge, these models are not very good at predicting tokens of entity mentions. Table 1 shows this for RoBERTa (Liu et al., 2019b) model. Thus, mention-masking -predicting tokens of masked entity mentions, provides a better objective to learn to use the candidate entities and learn knowledge-informed representations. Note that in Table 1 , even when RoBERTa is trained with mention-masking (+MM) it is unable to provide a high accuracy on predicting mention tokens. Thus including entity embeddings should provide enough context for the model to make correct predictions by using the entities, as reflected by the KNIT score in Table 1 .",
"cite_spans": [
{
"start": 251,
"end": 271,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF2"
},
{
"start": 635,
"end": 654,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 604,
"end": 611,
"text": "Table 1",
"ref_id": null
},
{
"start": 856,
"end": 863,
"text": "Table 1",
"ref_id": null
},
{
"start": 1154,
"end": 1161,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
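As a rough illustration of the mention-masking objective described above, the sketch below masks the tokens inside detected mention spans rather than random tokens; the span boundaries, mask id, and masking probability are hypothetical inputs, not values from the paper.

```python
import random

def mention_mask(token_ids, mention_spans, mask_id, mask_prob=1.0):
    """Mask tokens inside mention spans (instead of random tokens, as in standard MLM).

    token_ids: list of ints; mention_spans: list of (start, end) index pairs (end exclusive).
    Returns the masked input and MLM-style labels (-100 = position not predicted).
    """
    masked = list(token_ids)
    labels = [-100] * len(token_ids)
    for start, end in mention_spans:
        if random.random() > mask_prob:
            continue
        for i in range(start, end):
            labels[i] = token_ids[i]   # predict the original mention token
            masked[i] = mask_id        # replace it with the [MASK] id
    return masked, labels
```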
{
"text": "Candidate Ranking To further enable the model to use the correct entities for a mention, we use a weak entity linking objective that forces the model to rank one of the entities, from the candidate set of a mention, higher than all other entities for the sentence. Consider the i-th mention in a sentence with (m i1 , m i2 ) as the start and end indices of the mention in the sentence, and a candidate set of entities C i for this mention. We create a mention representationm i by concatenatingx m i1 and x m i2 . Now, given the representations, we score all entities for the mention i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
{
"text": "s ij = W [m i ;\u1ebd j ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
{
"text": "where W is a learnable weight matrix. To enforce the model to select one entity from the mention's candidates, we find the highest scoring entity, e i = arg max j\u2208C i s ij , and use that as a target in a cross-entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
{
"text": "L cr = cross entropy(softmax(s ij ), I\u00ea i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
{
"text": "where the softmax is over all entities (not just for mention i) in the sentence and I\u00ea i is a one-hot vector with 1 for the entity\u00ea i and 0 everywhere else. This objective enforces the model to rank one candidate higher than others candidates for the same mention as well as candidates of other entities. Similar objective has been explored for dealing with noise in entity typing models (Xu and Barbosa, 2018; Abhishek et al., 2017) . The overall objective is a combination of bert-style MLM, mention-masking (MM) and candidate ranking:",
"cite_spans": [
{
"start": 388,
"end": 410,
"text": "(Xu and Barbosa, 2018;",
"ref_id": "BIBREF25"
},
{
"start": 411,
"end": 433,
"text": "Abhishek et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
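A minimal sketch of this candidate-ranking objective, assuming the contextualized token and entity states from the encoder above; the function and argument names are assumptions, and W is expected to have shape (1, 3*d) since the mention representation concatenates two d-dimensional token states.

```python
# Illustrative sketch of the weakly supervised candidate-ranking loss (assumed names).
import torch
import torch.nn.functional as F

def candidate_ranking_loss(token_states, entity_states, mention_spans, candidate_sets, W):
    """token_states: (n, d) contextualized token states x~;
    entity_states: (m, d) states e~ of all candidate entities in the sentence;
    mention_spans: list of (start, end) token indices, one per mention;
    candidate_sets: list of index lists into entity_states, one per mention;
    W: (1, 3 * d) scoring weights."""
    losses = []
    for (start, end), cand_idx in zip(mention_spans, candidate_sets):
        mention_rep = torch.cat([token_states[start], token_states[end]], dim=-1)  # m~_i, shape (2d,)
        # Score every candidate entity in the sentence: s_ij = W [m~_i ; e~_j]
        pairs = torch.cat([mention_rep.expand(entity_states.size(0), -1), entity_states], dim=-1)
        scores = (pairs @ W.t()).squeeze(-1)          # (m,)
        # Target: the highest-scoring entity among this mention's own candidates.
        cand_idx_t = torch.tensor(cand_idx)
        target = cand_idx_t[scores[cand_idx_t].argmax()]
        # Softmax (inside cross_entropy) is over all entities in the sentence, not just this mention's candidates.
        losses.append(F.cross_entropy(scores.unsqueeze(0), target.unsqueeze(0)))
    return torch.stack(losses).mean()
```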
{
"text": "L mlm + \u03b1L mm + \u03b2L cr (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.2"
},
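For concreteness, the combined objective in Equation 2 could be computed as below; α = 1 and β = 10 are the values reported in the appendix, and the individual loss terms are assumed to come from the MLM, mention-masking, and candidate-ranking computations sketched earlier.

```python
def knit_loss(loss_mlm, loss_mm, loss_cr, alpha=1.0, beta=10.0):
    # Equation 2: weighted sum of MLM, mention-masking, and candidate-ranking losses.
    return loss_mlm + alpha * loss_mm + beta * loss_cr
```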
{
"text": "Implementation details are in Supplementary. Code of our models is available here 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Models evaluated: (1) RoBERTa (Liu et al., 2019b) : the model uses the MLM objective for pre-training;",
"cite_spans": [
{
"start": 30,
"end": 49,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "(2) RoBERTa + MM: this model uses the mention-masking objective in addition to the MLM objective;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "(3) KNIT: this is the proposed model which uses MLM, mentionmasking and candidate ranking for pre-training. We use RoBERTa-base architecture for all models due to lack of computation resources. We compare our method with existing state-of-theart in knowledge-informed representations: Ernie , KnowBERT (Peters et al., 2019) and RELIC (Ling et al., 2020) . Table 3 : F1 score on entity typing when using only a fraction of the task-specific training data (0.05%\u22124%).",
"cite_spans": [
{
"start": 293,
"end": 323,
"text": "KnowBERT (Peters et al., 2019)",
"ref_id": null
},
{
"start": 334,
"end": 353,
"text": "(Ling et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 356,
"end": 363,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Entity typing is the task of identifying the semantic type of a given mention. We evaluate on two Entity typing datasets -OpenEntity (Choi et al., 2018) and FIGER (Ling et al., 2015) . OpenEntity is a crowdsourced dataset comprising 9 general types and 121 fine-grained types. We follow and evaluate on the nine general entity types. FIGER is a distant supervised dataset comprising over 2M examples and 113 entity types. Experimental results are shown in Table 2 . KNIT outperforms RoBERTa (Liu et al., 2019b) , Ernie , and RoBERTa+MM while being comparable to KnowBert (Peters et al., 2019) . 96.70 Raiman and Raiman (2018) 94.88 Radhakrishnan et al. (2018) 93.00 Le and Titov (2018) 93.07 Ganea and Hofmann 201792.22 KNIT 92.87 state-of-the-art without utilizing any entity-linked data for pre-training, unlike .",
"cite_spans": [
{
"start": 133,
"end": 152,
"text": "(Choi et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 163,
"end": 182,
"text": "(Ling et al., 2015)",
"ref_id": "BIBREF10"
},
{
"start": 491,
"end": 510,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF12"
},
{
"start": 562,
"end": 592,
"text": "KnowBert (Peters et al., 2019)",
"ref_id": null
},
{
"start": 601,
"end": 625,
"text": "Raiman and Raiman (2018)",
"ref_id": "BIBREF20"
},
{
"start": 632,
"end": 659,
"text": "Radhakrishnan et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 666,
"end": 685,
"text": "Le and Titov (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 456,
"end": 463,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results on Entity Typing",
"sec_num": "4.1"
},
{
"text": "To further evaluate the effectiveness of KNIT, we consider the scenario where only a fraction of the data is used for task-specific fine-tuning. For this, we sample equal number of examples per type to create the fine-tuning data. The models are finetuned using the sampled data but are evaluated on the entire test set. Table. 3 shows that KNIT significantly outperforms RoBERTa (Liu et al., 2019b) and RoBERTa+MM in the data constrained cases.",
"cite_spans": [
{
"start": 380,
"end": 399,
"text": "(Liu et al., 2019b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 321,
"end": 327,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Entity Typing",
"sec_num": "4.1"
},
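A minimal sketch of the balanced subsampling used for these low-resource fine-tuning runs: an equal number of examples per type, drawn with a fixed random seed per run. The function name and the "type" field on each example are assumptions for illustration.

```python
import random
from collections import defaultdict

def sample_per_type(examples, n_per_type, seed):
    """Draw an equal number of training examples for every entity type."""
    rng = random.Random(seed)
    by_type = defaultdict(list)
    for ex in examples:
        by_type[ex["type"]].append(ex)   # group examples by their entity type
    sampled = []
    for _, items in by_type.items():
        rng.shuffle(items)
        sampled.extend(items[:n_per_type])
    rng.shuffle(sampled)
    return sampled
```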
{
"text": "We demonstrate that our pre-trained model can capture entity linking information. For this, we use the AIDA-CoNLL (Hoffart et al., 2011) dataset and evaluate the linking performance of the model without any dataset-specific fine-tuning. We also compare with a model that used wikipedia hyperlinks for supervision during pre-training (KNIT +Wikilinks). As shown in Table 4 , KNIT improves upon the candidate ranking by 12.05% and 19.66% when partial entity linking supervision from Wiki linkedtext data is available. Even without Wiki-linked data, it outperforms the best pre-trained model that considers mention context (RELIC) by 0.81%. To further explore the entity linking capacity of our model, we fine-tune the model and show that our model has competitive performance, even when using only 10% of the training data. When trained on the entire dataset, we find RELIC performs better, potentially due to the use of entity-linked data in its pre-training.",
"cite_spans": [
{
"start": 114,
"end": 136,
"text": "(Hoffart et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on Entity Linking",
"sec_num": "4.2"
},
{
"text": "We propose a simple approach to learn knowledgeinformed text representations using self-attention between text and mentioned entities. Our approach does not rely on any entity-linked data for training, enabling large-scale pre-training. We show that the method learns better representations than competing approaches and also learns entity-linking without explicit linking supervision. In the future, it will be interesting to explore how such methods can be used to condition the text encoder on structured KG facts about entities. . Learning rate was tuned in (0.00001-0.0005). All dropouts were tuned sparsely in the range (0.1-0.3). During finetuning, we restrict the max number of candidates per mention to 10. Unlike pretraining, the entity embeddings were also finetuned during entity typing experiments and the best performing validation set checkpoint was used to generate test set results Sample dataset creation for experiments of Table 3 were done using random seeds. Three different sample datasets were collected for each of Ope-nEnt(4%), Figer(0.5%) and Figer(0.05%). Each sample would comprise an equal number of examples per entity type but randomised across the three runs. Numbers reported in Table 3 correspond to mean and standard deviation values of the performance of the three sample dataset trained models on the test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 1212,
"end": 1219,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The sizes of sample and original datasets are shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "A.2.1 Datasets",
"sec_num": null
},
{
"text": "Source Code: https://github.com/dungtn/KNIT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.diffbot.com 3 Code will be opensourced",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank members of UMass IESL and NLP groups for helpful discussion and feedback. We also thank DiffBot for their supports in collecting the linked-text data. This work is funded in part by the Center for Data Science and the Center for Intelligent Information Retrieval. The work reported here was performed in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "To train KNIT, we collect 16M sentences from Wikipedia. We also collect 28M sentences from news articles and tag them using the DiffBot Entity Linker 2 . We further reduce the size of the entity vocabulary to 595K and remove examples that have no entity mentions. We limit each context sentence to 512 tokens and no more than 5 mentions per sentence with at least 2 and at most 10 candidate entities per mention span.We use pre-trained entity embeddings with dimension d = 200 from (Lerer et al., 2019) and keep them fixed during the course of KNIT training. We use Adam optimizer with learning rate 1e \u22124 , polynomial decay scheduler with warm-up, and clip norm 10. We also tune hyper-parameters in Equation 2and choose \u03b1 = 1 and \u03b2 = 10. The code will be made available on github 3 .",
"cite_spans": [
{
"start": 482,
"end": 502,
"text": "(Lerer et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Implementation Details(Pretraining)",
"sec_num": null
},
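The optimization setup described above (Adam, learning rate 1e-4, polynomial decay with warm-up, gradient-norm clipping at 10) could be wired up roughly as follows; the warm-up length, total step count, and decay power are placeholders rather than values reported in the paper.

```python
import torch

def build_optimizer(model, lr=1e-4, warmup_steps=10_000, total_steps=1_000_000, power=1.0):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    def poly_decay_with_warmup(step):
        # Linear warm-up followed by polynomial decay of the learning-rate multiplier to zero.
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return max(0.0, (1.0 - progress) ** power)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, poly_decay_with_warmup)
    return optimizer, scheduler

# Per optimization step (sketch):
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```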
{
"text": "All results in Tables 2-3 are obtained by tuning a few hyperparameters -batch size, learning rate, dropout, attention dropout. Batch size was tuned in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Implementation Details(Entity Typing)",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained entity type classification by jointly learning representations and label embeddings",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Abhishek",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "797--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embed- dings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 1, Long Papers, pages 797-807.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ultra-fine entity typing",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "87--96",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettle- moyer. 2018. Ultra-fine entity typing. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 87-96, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empirical evaluation of pretraining strategies for supervised entity linking",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "F\u00e9vry",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Livio Baldini",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2020,
"venue": "Automated Knowledge Base Construction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault F\u00e9vry, Nicholas FitzGerald, Livio Baldini Soares, and Tom Kwiatkowski. 2020. Empirical evaluation of pretraining strategies for supervised en- tity linking. In Automated Knowledge Base Con- struction.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Entities as experts: Sparse memory access with entity supervision. arXiv",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "F\u00e9vry",
"suffix": ""
},
{
"first": "Baldini",
"middle": [],
"last": "Livio",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault F\u00e9vry, Livio Baldini Soares, Nicholas FitzGer- ald, Eunsol Choi, and Tom Kwiatkowski. 2020. En- tities as experts: Sparse memory access with entity supervision. arXiv.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep joint entity disambiguation with local neural attention",
"authors": [
{
"first": "Eugen",
"middle": [],
"last": "Octavian",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2619--2629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2619-2629.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Robust disambiguation of named entities in text",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"Amir"
],
"last": "Yosef",
"suffix": ""
},
{
"first": "Ilaria",
"middle": [],
"last": "Bordino",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Spaniol",
"suffix": ""
},
{
"first": "Bilyana",
"middle": [],
"last": "Taneva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "782--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named en- tities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, pages 782-792, Edinburgh, Scotland, UK. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving entity linking by modeling latent relations between mentions",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1595--1604",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1148"
]
},
"num": null,
"urls": [],
"raw_text": "Phong Le and Ivan Titov. 2018. Improving entity link- ing by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "PyTorch-BigGraph: A Largescale Graph Embedding System",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Timothee",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Wehrstedt",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Bose",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Peysakhovich",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd SysML Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. 2019. PyTorch-BigGraph: A Large- scale Graph Embedding System. In Proceedings of the 2nd SysML Conference, Palo Alto, CA, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Design challenges for entity linking",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "315--328",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00141"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315-328.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.07606"
]
},
"num": null,
"urls": [],
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph. arXiv preprint arXiv:1909.07606.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Barack's wife Hillary: Using knowledge graphs for factaware language modeling",
"authors": [
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Iv",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert L. Logan, IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife Hillary: Using knowledge graphs for fact- aware language modeling. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "43--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 43-54.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Language models as knowledge bases?",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Yuxiang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2463--2473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463-2473.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Ulli",
"middle": [],
"last": "Waltinger",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03681"
]
},
"num": null,
"urls": [],
"raw_text": "Nina Poerner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2019. Bert is not a knowledge base (yet): Fac- tual knowledge vs. name-based reasoning in unsu- pervised qa. arXiv preprint arXiv:1911.03681.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "ELDEN: Improved entity linking using densified knowledge graphs",
"authors": [
{
"first": "Priya",
"middle": [],
"last": "Radhakrishnan",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1844--1853",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1167"
]
},
"num": null,
"urls": [],
"raw_text": "Priya Radhakrishnan, Partha Talukdar, and Vasudeva Varma. 2018. ELDEN: Improved entity linking us- ing densified knowledge graphs. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1844-1853, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deeptype: multilingual entity linking by neural type system evolution",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Raiman",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Raiman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.01021"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Raiman and Olivier Raiman. 2018. Deep- type: multilingual entity linking by neural type sys- tem evolution. arXiv preprint arXiv:1802.01021.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "How much knowledge can you pack into the parameters of a language model?",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the param- eters of a language model?",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Livio Baldini",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.03158"
]
},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. arXiv preprint arXiv:1906.03158.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Hao Tian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Neural finegrained entity type classification with hierarchyaware loss",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "16--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu and Denilson Barbosa. 2018. Neural fine- grained entity type classification with hierarchy- aware loss. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 16-25.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5753-5763.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Ernie: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.07129"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: En- hanced language representation with informative en- tities. arXiv preprint arXiv:1905.07129.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "Micro-averaged scores on entity typing tasks.",
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">OpenEnt (4%) FIGER (0.5%) FIGER (0.05%)</td></tr><tr><td>Roberta</td><td>56.98\u00b1 4.71</td><td>69.69\u00b1 0.38</td><td>65.59\u00b1 1.65</td></tr><tr><td>+MM</td><td>60.16\u00b1 2.44</td><td>69.43\u00b1 0.62</td><td>65.96\u00b1 1.38</td></tr><tr><td>KNIT</td><td>63.97\u00b1 1.59</td><td>71.37\u00b1 0.14</td><td>67.40\u00b1 0.41</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Entity linking accuracy under various fine-</td></tr><tr><td>tuning scenarios.</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"text": "Number of examples in Train, Validation and Test split of different datasets range (",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}