{
"paper_id": "D12-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:53.800864Z"
},
"title": "An Entity-Topic Model for Entity Linking",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xianpei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences HaiDian District",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "",
"middle": [],
"last": "Le Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chinese Academy of Sciences HaiDian District",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Entity Linking (EL) has received considerable attention in recent years. Given many name mentions in a document, the goal of EL is to predict their referent entities in a knowledge base. Traditionally, there have been two distinct directions of EL research: one focusing on the effects of mention's context compatibility, assuming that \"the referent entity of a mention is reflected by its context\"; the other dealing with the effects of document's topic coherence, assuming that \"a mention's referent entity should be coherent with the document's main topics\". In this paper, we propose a generative model-called entitytopic model, to effectively join the above two complementary directions together. By jointly modeling and exploiting the context compatibility, the topic coherence and the correlation between them, our model can accurately link all mentions in a document using both the local information (including the words and the mentions in a document) and the global knowledge (including the topic knowledge, the entity context knowledge and the entity name knowledge). Experimental results demonstrate the effectiveness of the proposed model. At the WWDC conference, Apple introduces its new operating system release-Lion.",
"pdf_parse": {
"paper_id": "D12-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Entity Linking (EL) has received considerable attention in recent years. Given many name mentions in a document, the goal of EL is to predict their referent entities in a knowledge base. Traditionally, there have been two distinct directions of EL research: one focusing on the effects of mention's context compatibility, assuming that \"the referent entity of a mention is reflected by its context\"; the other dealing with the effects of document's topic coherence, assuming that \"a mention's referent entity should be coherent with the document's main topics\". In this paper, we propose a generative model-called entitytopic model, to effectively join the above two complementary directions together. By jointly modeling and exploiting the context compatibility, the topic coherence and the correlation between them, our model can accurately link all mentions in a document using both the local information (including the words and the mentions in a document) and the global knowledge (including the topic knowledge, the entity context knowledge and the entity name knowledge). Experimental results demonstrate the effectiveness of the proposed model. At the WWDC conference, Apple introduces its new operating system release-Lion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Entity Linking (EL) has received considerable research attention in recent years (McNamee & Dang, 2009; Ji et al., 2010) . Given many name mentions in a document, the goal of EL is to predict their referent entities in a given knowledge base (KB), such as the Wikipedia 1 . For example, as 1 www.wikipedia.org shown in Figure 1 , an EL system should identify the referent entities of the three mentions WWDC, Apple and Lion correspondingly are the entities Apple Worldwide Developers Conference, Apple Inc. and Mac OS X Lion in KB. The EL problem appears in many different guises throughout the areas of natural language processing, information retrieval and text mining. For instance, in many applications we need to collect all appearances of a specific entity in different documents, EL is an effective way to resolve such an information integration problem. Furthermore, EL can bridge the mentions in documents with the semantic information in knowledge bases (e.g., Wikipedia and Freebase 2 ), thus can provide a solid foundation for knowledge-rich methods.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(McNamee & Dang, 2009;",
"ref_id": "BIBREF19"
},
{
"start": 104,
"end": 120,
"text": "Ji et al., 2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, the accurate EL is often hindered by the name ambiguity problem, i.e., a name may refer to different entities in different contexts. For example, the name Apple may refer to more than 20 entities in Wikipedia, such as Apple Inc., Apple (band) and Apple Bank. Traditionally, there have been two distinct directions in EL to resolve the name ambiguity problem: one focusing on the effects of mention's context compatibility and the other dealing with the effects of document's topic coherence. EL methods based on context compatibility assume that \"the referent entity of a mention is reflected by its context\" (Mihalcea & Cosomai, 2007; Zhang et al., 2010; Zheng et al., 2010; Kataria et al., 2011; Sen 2012) . For example, the context compatibility based methods will identify the referent entity of the mention Lion in Figure 1 is the entity Mac OS X Lion, since this entity is more compatible with its context words operating system and release than other candidates such as Lion (big cats) or Lion (band) . EL methods based on topic coherence assume that \"a mention's referent entity should be coherent with document's main topics\" (Medelyan et al., 2008; Kulkarni et al., 2009; . For example, the topic coherence based methods will link the mention Apple in Figure 1 to the entity Apple Inc., since it is more coherent with the document's topic MAC OS X Lion Release than other referent candidates such as Apple (band) or Apple Bank.",
"cite_spans": [
{
"start": 245,
"end": 257,
"text": "Apple (band)",
"ref_id": null
},
{
"start": 624,
"end": 650,
"text": "(Mihalcea & Cosomai, 2007;",
"ref_id": null
},
{
"start": 651,
"end": 670,
"text": "Zhang et al., 2010;",
"ref_id": "BIBREF29"
},
{
"start": 671,
"end": 690,
"text": "Zheng et al., 2010;",
"ref_id": "BIBREF30"
},
{
"start": 691,
"end": 712,
"text": "Kataria et al., 2011;",
"ref_id": "BIBREF16"
},
{
"start": 713,
"end": 722,
"text": "Sen 2012)",
"ref_id": "BIBREF28"
},
{
"start": 997,
"end": 1007,
"text": "(big cats)",
"ref_id": null
},
{
"start": 1016,
"end": 1022,
"text": "(band)",
"ref_id": null
},
{
"start": 1150,
"end": 1173,
"text": "(Medelyan et al., 2008;",
"ref_id": "BIBREF24"
},
{
"start": 1174,
"end": 1196,
"text": "Kulkarni et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 1431,
"end": 1437,
"text": "(band)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 835,
"end": 843,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1277,
"end": 1285,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "In recent years, both of the above two EL directions have shown their effectiveness to some extent, and obviously they are complementary to each other. Therefore we believe that bring the above two directions together will enhance the EL performance. Traditionally, the above two directions are usually be brought together using a hybrid method (Zhang and Sim, 2011; Ratinov et al., 2011; , i.e., the context compatibility and the topic coherence are first separately modeled, then their EL evidence are combined through an additional model. For example, Zhang and Sim (2011) first models the context compatibility as a context similarity and the topic coherence as a similarity between the underlying topics of documents and KB entries, then these two similarities are combined through an additional SVM classifier for the final EL decision.",
"cite_spans": [
{
"start": 345,
"end": 366,
"text": "(Zhang and Sim, 2011;",
"ref_id": "BIBREF32"
},
{
"start": 367,
"end": 388,
"text": "Ratinov et al., 2011;",
"ref_id": "BIBREF27"
},
{
"start": 555,
"end": 575,
"text": "Zhang and Sim (2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "The main drawback of these hybrid methods, however, is that they model the context compatibility and the topic coherence separately, which makes it difficult to capture the mutual reinforcement effect between the above two directions. That is, the topic coherence and the context compatibility are highly correlated and their evidence can be used to reinforce each other in EL decisions. For example, in Figure 1 , if the context compatibility gives a high likelihood the mention Apple refers to the entity Apple Inc., then this likelihood will give more evidence for this document's topic is about MAC OS X Lion, and it in turn will reinforce the topic coherence between the entity MAC OS X Lion and the document. In reverse, once we known the topic of this document is about MAC OS X Lion, the context compatibility between the mention Apple and the entity Apple Inc. can be improved as the importance of the context words operating system and release will be increased using the topic knowledge. In this way, we believe that modeling the above two directions jointly, rather than separately, will further improve the EL performance by capturing the mutual reinforcement effect between the context compatibility and the topic coherence.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "In this paper, we propose a method to jointly model and exploit the context compatibility, the topic coherence and the correlation between them for better EL performance. Specifically, we propose a generative probabilistic model -called entity-topic model, which can uniformly model the text compatibility and the topic coherence as the statistical dependencies between the mentions, the words, the underlying entities and the underlying topics of a document by assuming that each document is generated according to the following two assumptions: 1) Topic coherence assumption: All entities in a document should be centered around the main topics of the document. For example, the entity Apple Inc. tends to occur in documents about IT, but the entity Apple Bank will more likely to occur in documents about bank or investment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "2) Context compatibility assumption: The context words of a mention should be centered on its referent entity. For example, the words computer, phone and music tends to occur in the context of the entity Apple Inc., meanwhile the words loan, invest and deposit will more likely to occur in the context of the entity Apple Bank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "In this way, the entity-topic model uniformly models the context compatibility, the topic coherence and the correlation between them as the dependencies between the observed information (the mentions and the words) in a document and the hidden information we want to know (the underlying topics and entities) through the global knowledge (including the topic knowledge, the entity name knowledge and the entity context knowledge). And the EL problem can now be decomposed into the following two inference tasks: 1) Predicting the underlying topics and the underlying entities of a document based on the observed information and the global knowledge. We call such a task the prediction task; 2) Estimating the global knowledge from data. Notice that the topic knowledge, the entity name knowledge and the entity context knowledge are all not previously given, thus we need to estimate them from data. We call such a task the knowledge discovery task. Because the accurate inference of the above two tasks is intractable in our entity-topic model, this paper also develops an approximate inference algorithm -the Gibbs sampling algorithm to solve them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "Contributions. The main contributions of this paper are summarized below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "We propose a generative probabilistic model, the entity-topic model, which can jointly model and exploit the context compatibility, the topic coherence and the correlation between them for better EL performance;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "We develop a Gibbs sampling algorithm to solve the two inference tasks of our model: 1) Discovering the global knowledge from data; and 2) Collectively making accurate EL decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "This paper is organized as follows. Section 2 describes the proposed entity-topic model. Section 3 demonstrates the Gibbs sampling algorithm. The experimental results are presented and discussed in Section 4. The related work is reviewed in Section 5. Finally we conclude this paper in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1. A Demo of Entity Linking",
"sec_num": null
},
{
"text": "In this section, we describe the proposed entitytopic model. In following we first demonstrate how to capture the context compatibility, the topic coherence and the correlation between them in the document generative process, then we incorporate the global knowledge generation into our model for knowledge estimation from data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Entity-Topic Model for Entity Linking",
"sec_num": "2"
},
{
"text": "As shown in Section 1, we jointly model the context compatibility and the topic coherence as the statistical dependencies in the entity-topic model by assuming that all documents are generated in a topical coherent and context compatible way. In following we describe the document generative process. In our model, each document d is assumed composed of two types of information, i.e., the mentions and the words. Formally, we represent a document as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "A document is a collection of M mentions and N words, denoted as d = {m 1 , \u2026, m M ; w 1 , \u2026, w N }, with m i the i th mention and w j the j th word. For example, the document in Figure 1 is represented as d = {WWDC, Apple, Lion; at, the, conference, \u2026}, where WWDC, Apple, Lion are the three mentions and the other are the words.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
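As an illustration of this representation (our own sketch, not part of the original paper; the class and field names are hypothetical), a document could be held in a structure such as:

```python
# A minimal sketch of the document representation d = {m_1, ..., m_M; w_1, ..., w_N}.
# The class and field names are hypothetical, not from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    mentions: List[str] = field(default_factory=list)  # m_1 ... m_M (name mentions)
    words: List[str] = field(default_factory=list)      # w_1 ... w_N (context words)

# The example document of Figure 1:
doc = Document(
    mentions=["WWDC", "Apple", "Lion"],
    words=["at", "the", "conference", "introduces", "its",
           "new", "operating", "system", "release"],
)
```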
{
"text": "To generate a document, our model relies on three types of global knowledge, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Topic Knowledge \u00c1 \u00c1 (The entity distribution of topics): In our model, all entities in a document are generated based on its underlying topics, with each topic is a group of semantically related entities. Statistically, we model each topic as a multinomial distribution of entities, with the probability indicating the likelihood an entity to be extracted from this topic. For example, we may have a topic \u00c1 Apple Inc: \u00c1 Apple Inc: = {Steve Jobs 0.12 , iPhone 0.07 , iPod 0.08 , \u2026}, indicating the likelihood of the entity Steve Jobs be extracted from this topic is 0.12, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Entity Name Knowledge \u00c3 \u00c3 (The name distribution of entities): In our model, all name mentions are generated using the name knowledge of its referent entity. Specifically, we model the name knowledge of an entity as a multinomial distribution of its names, with the probability indicating the likelihood this entity is mentioned by the name. For example, the name knowledge of the entity Apple Inc. may be \u00c3 Apple Inc: \u00c3 Apple Inc: = {Apple 0.51 , Apple Computer Inc. 0.10 , Apple Inc. 0.07 , \u2026}, indicating that the entity Apple Inc. is mentioned by the name Apple with probability 0.51, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Entity Context Knowledge \u00bb \u00bb (The context word distribution of entities): In our model, all context words of an entity's mention are generated using its context knowledge. Concretely, we model the context knowledge of an entity as a multinomial distribution of words, with the probability indicating the likelihood a word appearing in this entity's context. For example, we may have \u00bb Apple Inc: \u00bb Apple Inc: = {phone 0.07 , computer 0.10 , IT 0.06 , phone 0.002 , \u2026}, indicating that the word computer appearing in the context of the entity Apple Inc. with probability 0.1, etc. Given the entity list E = {e 1 , e 2 , \u2026, e E } in the knowledge base, the word list V = {w 1 , w 2 , \u2026, w v }, the entity name list K = {n 1 , n 2 , \u2026, n K } and the global knowledge described in above, the generation process of a document collection (corpus) Figure 2 . To demonstrate the generation process, we also demonstrate how the document in Figure 1 can be generated using our model in following steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 841,
"end": 849,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 931,
"end": 939,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Step 1: The model generates the topic distribution of the document as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "\u03bc d \u03bc d = {Apple Inc. 0.45 , Operating System(OS) 0.55 };",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Step 2: For the three mentions in the document: i. According to the topic distribution \u03bc d \u03bc d , the model generates their topic assignments as z 1 =Apple Inc., z 2 = Apple Inc., z 3 = OS;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "ii. According to the topic knowledge \u00c1 \u00c1 Apple Inc. , \u00c1 \u00c1 OS and the topic assignments z 1 , z 2 , z 3 , the model generates their entity assignments as e 1 = Apple Worldwide Developers Conference, e 2 = Apple Inc., e 3 = Mac OS X Lion;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "iii. According to the name knowledge of the entities Apple Worldwide Developers Conference, Apple Inc. and Mac OS X Lion, our model generates the three mentions as m 1 =WWDC, m 2 = Apple, m 3 = Lion;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Step 3: For all words in the document: i. According to the referent entity set in document e d = {Apple Worldwide Developers Conference, Apple Inc., Mac OS X Lion}, the model generates the target entity they describes as a 3 =Apple Worldwide Developers Conference and a 4 =Apple Inc.;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "ii. According to their target entity and the context knowledge of these entities, the model generates the context words in the document. For example, according to the context knowledge of the entities Apple Worldwide Developers Conference, the model generates its context word w 3 =conference, and according to the context knowledge of the entity Apple Inc., the model generates its context word w 4 = introduces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
{
"text": "Through the above generative process, we can see that all entities in a document are extracted from the document's underlying topics, ensuring the topic coherence; and all words in a document are extracted from the context word distributions of its referent entities, resulting in the context compatibility. Furthermore, the generation of topics, entities, mentions and words are highly correlated, thus our model can capture the correlation between the topic coherence and the context compatibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document Generative Process",
"sec_num": "2.1"
},
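To make the generative story concrete, the following is a small illustrative sketch of the document generative process (our own code, not the authors' implementation). It assumes the global knowledge φ (topic-entity), ψ (entity-name) and ξ (entity-word) is already given as row-stochastic matrices, and it draws the target entity of each word uniformly from the document's mention entities, which mirrors P(a_d|e_d) as used in Section 3.

```python
# Sketch of the document generative process (Section 2.1); toy sizes, hypothetical names.
import numpy as np

rng = np.random.default_rng(0)

def generate_document(M, N, alpha, phi, psi, xi):
    """phi: (T, E) topic->entity, psi: (E, K) entity->name, xi: (E, V) entity->word."""
    T, E = phi.shape
    mu_d = rng.dirichlet([alpha] * T)                      # Step 1: topic distribution of d
    z = rng.choice(T, size=M, p=mu_d)                      # Step 2.i: topic per mention
    e = np.array([rng.choice(E, p=phi[t]) for t in z])     # Step 2.ii: entity per mention
    m = [rng.choice(psi.shape[1], p=psi[ei]) for ei in e]  # Step 2.iii: name per mention
    a = rng.choice(e, size=N)                              # Step 3.i: target entity per word
    w = [rng.choice(xi.shape[1], p=xi[ai]) for ai in a]    # Step 3.ii: word from entity context
    return z, e, m, a, w
```

Here mentions and words are produced as integer ids; mapping them back to name and word strings is straightforward.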
{
"text": "The entity-topic model relies on three types of global knowledge (including the topic knowledge, the entity name knowledge and the entity context knowledge) to generate a document. Unfortunately, all three types of global knowledge are unknown and thus need to be estimated from data. In this paper we estimate the global knowledge through Bayesian inference by also incorporating the knowledge generation process into our model. Specifically, given the topic number T, the entity number E, the name number K and the word number V, the entity-topic model generates the global knowledge as follows: 1) \u00c1j\u00af\u00bb Dir (\u00af) \u00c1j\u00af\u00bb Dir(\u00af)",
"cite_spans": [
{
"start": 610,
"end": 613,
"text": "(\u00af)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "For each topic z, our model samples its entity distribution \u00c1 z \u00c1 z from an E-dimensional Dirichlet distribution with hyperparameter \u00af.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "2) \u00c3j\u00b0\u00bb Dir (\u00b0) \u00c3j\u00b0\u00bb Dir(\u00b0)",
"cite_spans": [
{
"start": 12,
"end": 15,
"text": "(\u00b0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "For each entity e, our model samples its name distribution \u00c3 e \u00c3 e from a K-dimensional Dirichlet distribution with hyperparameter \u00b0\u00b0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "3) \u00bbj\u00b1 \u00bb Dir (\u00b1) \u00bbj\u00b1 \u00bb Dir(\u00b1)",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "(\u00b1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "Given the topic knowledge \u00c1 \u00c1 , the entity name knowledge \u00c3 \u00c3 and the entity context knowledge \u00bb \u00bb: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "1. For each doc d in D,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "a i 's context word distribution w i \u00bb Mult(\u00bb a i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
{
"text": "For each entity e, our model samples its context word distribution \u00bb e \u00bb e from a V-dimensional Dirichlet distribution with hyperparameter \u00b1 \u00b1. Finally, the full entity-topic model is shown in Figure 3 using the plate representation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Global Knowledge Generative Process",
"sec_num": "2.2"
},
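As a companion sketch (again ours, under the assumption that the three Dirichlet hyperparameters correspond to β, γ and δ as they are used in Section 3), the global knowledge itself can be drawn as follows:

```python
# Sketch: drawing the three kinds of global knowledge from their Dirichlet priors.
import numpy as np

def draw_global_knowledge(T, E, K, V, beta, gamma, delta, rng=None):
    """T topics, E entities, K names, V words; returns row-stochastic matrices."""
    if rng is None:
        rng = np.random.default_rng(0)
    phi = rng.dirichlet([beta] * E, size=T)    # topic -> entity distributions, shape (T, E)
    psi = rng.dirichlet([gamma] * K, size=E)   # entity -> name distributions, shape (E, K)
    xi = rng.dirichlet([delta] * V, size=E)    # entity -> context word distributions, shape (E, V)
    return phi, psi, xi
```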
{
"text": "Using the entity-topic model, the probability of generating a corpus D={d 1 , d 2 , \u2026, d D } given hyperparameters \u00ae \u00ae, \u00af, \u00b0\u00b0 and \u00b1 \u00b1 can be expressed as: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability of a Corpus",
"sec_num": "2.3"
},
{
"text": "P(D | α, β, γ, δ) = ∫_φ P(φ|β) ∫_ψ P(ψ|γ) ∫_ξ P(ξ|δ) ∏_d ∫_{μ_d} P(μ_d|α) Σ_{z_d} P(z_d|μ_d) Σ_{e_d} P(e_d|z_d, φ) P(m_d|e_d, ψ) Σ_{a_d} P(a_d|e_d) P(w_d|a_d, ξ) dμ_d dξ dψ dφ (2.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probability of a Corpus",
"sec_num": "2.3"
},
{
"text": "In this section, we describe how to resolve the entity linking problem using the entity-topic model. Overall, there were two inference tasks for EL: 1) The prediction task. Given a document d, predicting its entity assignments (e d e d for mentions and a d a d for words) and topic assignments ( z d z d ). Notice that here the EL decisions are just the prediction of per-mention entity assignments (e d e d ).",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 304,
"text": "( z d z d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "2) The knowledge discovery task. Given a corpus D={d 1 , d 2 , \u2026, d D }, estimating the global knowledge (including the entity distribution of topics \u00c1 \u00c1, the name distribution \u00c3 \u00c3 and the context word distribution \u00bb \u00bb of entities) from data.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 67,
"text": "D={d 1 , d 2 , \u2026, d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "Unfortunately, due to the heaven correlation between topics, entities, mentions and words (the correlation is also demonstrated in Eq. (2.1), where the integral is intractable due to the coupling between \u03bc \u03bc, \u00c1 \u00c1, \u00c3 \u00c3 and \u00bb \u00bb ), the accurate inference of the above two tasks is intractable. For this reason, we propose an approximate inference algorithmthe Gibbs sampling algorithm for the entity-topic model by extending the well-known Gibbs sampling algorithm for LDA (Griffiths & Steyvers, 2004) . In Gibbs sampling, we first construct the posterior distribution P (z; e; ajD) P (z; e; ajD) , then this posterior distribution is used to: 1) estimate \u03bc \u03bc, \u00c1 \u00c1, \u00c3 \u00c3 and \u00bb \u00bb; and 2) predict the entities and the topics of all documents in D. Specifically, we first derive the joint posterior distribution from Eq. (2.1) as: P (z; e; ajD) / P (z)P (ejz)P (mje)P (aje)P (wja) P (z; e; ajD) / P (z)P (ejz)P (mje)P (aje)P (wja)",
"cite_spans": [
{
"start": 470,
"end": 498,
"text": "(Griffiths & Steyvers, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (z) = ( \u00a1(T \u00ae) \u00a1(\u00ae) T ) D D Y d=1 Q t \u00a1(\u00ae + C DT dt ) \u00a1(T \u00ae + C DT d\u00a4 ) P (z) = ( \u00a1(T \u00ae) \u00a1(\u00ae) T ) D D Y d=1 Q t \u00a1(\u00ae + C DT dt ) \u00a1(T \u00ae + C DT d\u00a4 ) (3.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "is the probability of the joint topic assignment z to all mentions m in corpus D, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (ejz) = ( \u00a1(E\u00af) \u00a1(\u00af) E ) T T Y t=1 Q e \u00a1(\u00af+ C T E te ) \u00a1(E\u00af+ C T E t\u00a4 ) P (ejz) = ( \u00a1(E\u00af) \u00a1(\u00af) E ) T T Y t=1 Q e \u00a1(\u00af+ C T E te ) \u00a1(E\u00af+ C T E t\u00a4 ) (3.2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "is the conditional probability of the joint entity assignments e e to all mentions m in corpus D given all topic assignments z z, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (mje) = ( \u00a1(K\u00b0) \u00a1(\u00b0) K ) E E Y e=1 Q m \u00a1(\u00b0+ C EM em ) \u00a1(K\u00b0+ C EM e\u00a4 ) P (mje) = ( \u00a1(K\u00b0) \u00a1(\u00b0) K ) E E Y e=1 Q m \u00a1(\u00b0+ C EM em ) \u00a1(K\u00b0+ C EM e\u00a4 ) (3.3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "is the conditional probability of all mentions m m given all per-mention entity assignments e e, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (aje) = D Y d=1 Y e\u00bde d \u00a1 C DE de C DE d\u00a4 \u00a2 C DA de P (aje) = D Y d=1 Y e\u00bde d \u00a1 C DE de C DE d\u00a4 \u00a2 C DA de (3.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "is the conditional probability of the joint entity assignments a a to all words w in corpus D given all per-mention entity assignments e e, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (wja) = ( \u00a1(V \u00b1) \u00a1(\u00b1) V ) E E Y e=1 Q w \u00a1(\u00b1 + C EW ew ) \u00a1(V \u00b1 + C EW e\u00a4 ) P (wja) = ( \u00a1(V \u00b1) \u00a1(\u00b1) V ) E E Y e=1 Q w \u00a1(\u00b1 + C EW ew ) \u00a1(V \u00b1 + C EW e\u00a4 ) (3.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "is the conditional probability of all words w w given all per-word entity assignments a a . In all above formulas, \u00a1(:) \u00a1(:) is the Gamma function, C DT dt C DT dt is the times topic t has been assigned for all mentions in document d,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "C DT d\u00a4 = P t C DT dt C DT d\u00a4 = P t C DT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "dt is the topic number in document d, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "C T E te C T E te , C EM em C EM em ,C DE de C DE de , C DA de C DA de , C EW ew C EW ew",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "have similar explanation. Based on the above joint probability, we construct a Markov chain that converges to the posterior distribution P (z; e; ajD) P (z; e; ajD) and then draw samples from this Markov chain for inference. For entity-topic model, each state in the Markov chain is an assignment (including topic assignment to a mention, entity assignment to a mention and entity assignment to a word). In Gibbs sampling, all assignments are sequentially sampled conditioned on all the current other assignments. So here we only need to derive the following three fully conditional assignment distributions: 1) P (z i = tjz \u00a1i ; e; a; D) P (z i = tjz \u00a1i ; e; a; D): the topic assignment distribution to a mention given the current other topic assignments z \u00a1i z \u00a1i , the current entity assignments e e and a a; 2) P (e i = ejz; e \u00a1i ; a; D) P (e i = ejz; e \u00a1i ; a; D) : the entity assignment distribution to a mention given the current entity assignments of all other mentions e \u00a1i e \u00a1i , the current topic assignments z z and the current entity assignments of context words a a; 3) P (a i = ejz; e; a \u00a1i ; D) P (a i = ejz; e; a \u00a1i ; D) : the entity assignment distribution to a context word given the current entity assignments of all other context words a \u00a1i a \u00a1i , the current topic assignments z z and the current entity assignments e e of mentions. Using the Formula 3.1-3.5, we can derive the above three conditional distributions as (where m i is contained in doc d):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P (z i = tjz \u00a1i ; e; a; D) / C DT (\u00a1i)dt + \u00ae C DT (\u00a1i)d\u00a4 + T \u00ae \u00a3 C T E (\u00a1i)te +C T E (\u00a1i)t\u00a4 + EP (z i = tjz \u00a1i ; e; a; D) / C DT (\u00a1i)dt + \u00ae C DT (\u00a1i)d\u00a4 + T \u00ae \u00a3 C T E (\u00a1i)te +C T E (\u00a1i)t\u00a4 + E\u00af",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "where the topic assignment to a mention is determined by the probability this topic appearing in doc d (the 1 st term) and the probability the referent entity appearing in this topic (the 2 nd term);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "P(e_i = e | z, e_{-i}, a, D) ∝ (C^{TE}_{(-i)te} + β) / (C^{TE}_{(-i)t·} + Eβ) × (C^{EM}_{(-i)em} + γ) / (C^{EM}_{(-i)e·} + Kγ) × ((C^{DE}_{(-i)de} + 1) / C^{DE}_{(-i)de})^{C^{DA}_{de}}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "where the entity assignment to a mention is determined by the probability this entity extracted from the assigned topic (the 1 st term), the probability this entity is referred by the name m (the 2 nd term) and the contextual words describing this entity in doc d (the 3 rd term); P (a i = ejz; e; a \u00a1i ; D) /",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "C DE de C DE d\u00a4 \u00a3 C EW (\u00a1i)ew + \u00b1 C EW (\u00a1i)e\u00a4 + V \u00b1 P (a i = ejz; e; a \u00a1i ; D) / C DE de C DE d\u00a4 \u00a3 C EW (\u00a1i)ew + \u00b1 C EW (\u00a1i)e\u00a4 + V \u00b1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "where the entity assignment to a word is determined by the number of times this entity has been assigned to mentions in doc d (the 1 st term) and the probability the word appearing in the context of this entity (the 2 nd term). Finally, using the above three conditional distributions, we iteratively update all assignments of corpus D until coverage, then the global knowledge is estimated using the final assignments, and the final entity assignments are used as the referents of their corresponding mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
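The update rules above can be implemented directly over count matrices. The following is a compact sketch of the three sampling updates (our own code, not the authors' released implementation); the array names mirror the count notation C^{DT}, C^{TE}, C^{EM}, C^{DE}, C^{DA}, C^{EW}, counts are assumed to already exclude the assignment being resampled, candidate pruning and NIL handling are omitted, and a real implementation would work in log space.

```python
# Sketch of the three Gibbs updates in Section 3 (toy sizes, hypothetical names).
import numpy as np

rng = np.random.default_rng(0)

def sample_mention_topic(d, e_i, C_DT, C_TE, alpha, beta):
    """P(z_i = t | ...): topic prevalence in doc d times entity membership in topic t."""
    T, E = C_TE.shape
    p = (C_DT[d] + alpha) / (C_DT[d].sum() + T * alpha)
    p = p * (C_TE[:, e_i] + beta) / (C_TE.sum(axis=1) + E * beta)
    return rng.choice(T, p=p / p.sum())

def sample_mention_entity(d, t, name, C_TE, C_EM, C_DE, C_DA, beta, gamma):
    """P(e_i = e | ...): topic term, name term, and the context-word factor
    ((C_DE[d,e] + 1) / C_DE[d,e]) ** C_DA[d,e] from Section 3."""
    E = C_TE.shape[1]
    K = C_EM.shape[1]
    p = (C_TE[t] + beta) / (C_TE[t].sum() + E * beta)
    p = p * (C_EM[:, name] + gamma) / (C_EM.sum(axis=1) + K * gamma)
    ratio = (C_DE[d] + 1.0) / np.maximum(C_DE[d], 1e-12)  # smooth the zero-count case
    p = p * np.power(ratio, C_DA[d])
    return rng.choice(E, p=p / p.sum())

def sample_word_entity(d, w, C_DE, C_EW, delta):
    """P(a_i = e | ...): mention-entity frequency in doc d times the word's context likelihood.
    Assumes at least one mention in document d already has an entity assignment."""
    E, V = C_EW.shape
    p = C_DE[d].astype(float)  # the constant C^{DE}_{d.} denominator cancels under normalization
    p = p * (C_EW[:, w] + delta) / (C_EW.sum(axis=1) + V * delta)
    return rng.choice(E, p=p / p.sum())
```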
{
"text": "Inference on Unseen Documents. When unseen documents are given, we predict its entities and topics using the incremental Gibbs sampling algorithm described in (Kataria et al., 2011) , i.e., we iteratively update the entity assignments and the topic assignments of an unseen document as the same as the above inference process, but with the previously learned global knowledge fixed.",
"cite_spans": [
{
"start": 159,
"end": 181,
"text": "(Kataria et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
{
"text": "Hyperparameter setting. One still problem here is the setting of the hyperparameters \u00ae \u00ae, \u00af, \u00b0\u00b0 and \u00b1 \u00b1. For \u00ae \u00ae and \u00af, this paper empirically set the value of them to \u00ae = 50=T \u00ae = 50=T and \u00af= 0:1 = 0:1 as in Griffiths & Steyvers(2004) . For \u00b0\u00b0, we notice that K\u00b0K\u00b0 is the number of pseudo names added to each entity, when \u00b0= 0\u00b0= 0 our model only mentions an entity using its previously used names. Observed that an entity typically has a fixed set of names, we set \u00b0\u00b0 to a small value by setting K\u00b0= 1:0 K\u00b0= 1:0. For \u00b1 \u00b1, we notice that V \u00b1 V \u00b1 is the number of pseudo words added to each entity, playing the role of smoothing its context word distribution. As there is typically a relatively loose correlation between an entity and its context words, we set \u00b1 \u00b1 to a relatively large value by fixing the total smoothing words added to each entity, a typical value is V \u00b1 V \u00b1 = 2000.",
"cite_spans": [
{
"start": 209,
"end": 235,
"text": "Griffiths & Steyvers(2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference using Gibbs Sampling",
"sec_num": "3"
},
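For concreteness, the hyperparameter choices described above amount to something like the following (a sketch; the variable names are ours, and K and V are placeholders for the actual name-list and vocabulary sizes):

```python
# Hyperparameter settings from Section 3 (illustrative values for K and V).
T = 300              # number of topics (tuned in Section 4.5.2)
K = 200_000          # number of entity names (placeholder)
V = 500_000          # vocabulary size (placeholder)

alpha = 50.0 / T     # document-topic prior, following Griffiths & Steyvers (2004)
beta = 0.1           # topic-entity prior
gamma = 1.0 / K      # entity-name prior, so that K * gamma = 1.0 pseudo name per entity
delta = 2000.0 / V   # entity-context prior, so that V * delta = 2000 pseudo words per entity
```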
{
"text": "In this section, we evaluate our method and compare it with the traditional EL methods. We first explain the experimental settings in Section 4.1-4.4, then discuss the results in Section 4.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In our experiments, we use the Jan. 30, 2010 English version of Wikipedia as the knowledge base, which contains over 3 million entities. Notice that we also take the general concepts in Wikipedia (such as Apple, Video, Computer, etc.) as entities, so the entity in this paper may not strictly follow its definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Base",
"sec_num": "4.1"
},
{
"text": "There are two standard data sets for EL: IITB 3 and TAC 2009 EL data set (McNamee & Dang, 2009) , where IITB focuses on aggressive recall EL and TAC 2009 focuses on EL on salient mentions. Due to the collective nature of our method, we mainly used the IITB as the primary data set as the same as Kulkarni et al.(2009) and . But we also give the EL accuracies on the TAC 2009 in Sect. 4.5.4 as auxiliary results. Overall, the IITB data set contains 107 web documents. For each document, the name mentions' referent entities in Wikipedia are manually annotated to be as exhaustive as possible. In total, 17,200 name mentions are annotated, with 161 name mentions per document on average. In our experiments, we use only the name mentions whose referent entities are contained in Wikipedia.",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(McNamee & Dang, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 317,
"text": "Kulkarni et al.(2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.2"
},
{
"text": "This paper adopted the same performance metrics used in the Kulkarni et al. (2009) , which includes Recall, Precision and F1. Let M * be the golden standard set of the EL results (each EL result is a pair (m, e), with m the mention and e its referent entity), M be the set of EL results outputted by an EL system, then these metrics are computed as:",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "Kulkarni et al. (2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "4.3"
},
{
"text": "P recision = jM\\M \u00a4 j jMj P recision = jM\\M \u00a4 j jMj Recall = jM\\M \u00a4 j jM \u00a4 j Recall = jM\\M \u00a4 j jM \u00a4 j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "4.3"
},
{
"text": "where two EL results are considered equal if and only if both their mentions and referent entities are equal. As the same as Kulkarni et al.(2009), 3 http://www.cse.iitb.ac.in/~soumen/doc/QCQ/ Precision and Recall are averaged across documents and overall F1 is used as the primary performance metric by computing from average Precision and Recall.",
"cite_spans": [
{
"start": 125,
"end": 149,
"text": "Kulkarni et al.(2009), 3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Criteria",
"sec_num": "4.3"
},
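A small sketch of these metrics (ours; it assumes each EL result is represented as a (mention, entity) pair and that results are grouped by document):

```python
# Sketch: per-document Precision/Recall averaged over documents; F1 from the averages.
def evaluate(system_results, gold_results):
    """Both arguments map doc_id -> set of (mention, entity) pairs."""
    precisions, recalls = [], []
    for doc_id, gold in gold_results.items():
        output = system_results.get(doc_id, set())
        correct = len(output & gold)
        precisions.append(correct / len(output) if output else 0.0)
        recalls.append(correct / len(gold) if gold else 0.0)
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```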
{
"text": "We compare our method with five baselines which are described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "Wikify!. This is a context compatibility based EL method using vector space model (Mihalcea & Csomai, 2007) . Wikify! computes the context compatibility using the word overlap between the mention's context and the entity's Wikipedia entry.",
"cite_spans": [
{
"start": 82,
"end": 107,
"text": "(Mihalcea & Csomai, 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "EM-Model. This is a statistical context compatibility based EL method described in , which computes the compatibility by integrating the evidence from the entity popularity, the entity name knowledge and the context word distribution of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "M&W. This is a relational topic coherence based EL method described in . M&W measures an entity's topic coherence to a document as its average semantic relatedness to the unambiguous entities in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "CSAW. This is an EL method which combines context compatibility and topic coherence using a hybrid method (Kulkarni et al., 2009) , where context compatibility and topic coherence are first separated modeled as context similarity and the sum of all pair-wise semantic relatedness between the entities in the document, then the entities which can maximize the weighted sum of the context compatibility and the topic coherence are identified as the referent entities of the document. EL-Graph. This is a graph based hybrid EL method described in , which first models the context compatibility as text similarity and the topic coherence of an entity as its node importance in a referent graph which captures all mention-entity and entity-entity relations in a document, then a random walk algorithm is used to collectively find all referent entities of a document.",
"cite_spans": [
{
"start": 106,
"end": 129,
"text": "(Kulkarni et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "Except for CSAW and EL-Graph, all other baselines are designed only to link the salient name mentions (i.e., key phrases) in a document. In our experiment, in order to compare the EL performances on also the non-salient name mentions, we push these systems' recall by reducing their respective importance thresholds of linked mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.4"
},
{
"text": "We compared our method with all the above five baselines. For our method, we estimate the global knowledge using all the articles in the Jan. 30, 2010 English version of Wikipedia, and totally there were 3,083,158 articles. For each article, the mentions within it are detected using the methods described in Medelyan et al.(2008) and all terms in an article are used as context words, so a term may both be a mention and a context word. The topic number of our model is T = 300 T = 300 (will be empirically set in Sect 4.5.2). To train the entitytopic model, we run 500 500 iterations of our Gibbs sampling algorithm to converge. The training time of our model is nearly one week on our server using 20 GB RAM and one core of 3.2 GHz CPU. Since the training can be done offline, we believe that the training time is not critical to the realworld usage as the online inference on new document is very quick. Using the above settings, the overall results are shown in Table 1 1) By jointly modeling and exploiting the context compatibility and the topic coherence, our method can achieve competitive performance: \u25cb 1 compared with the context compatibility baselines Wikify! and EM-Model, our method correspondingly gets 43% and 19% F1 improvement; \u25cb 2 compared with the topic coherence baselines M&W, our method achieves 28% F1 improvement; \u2462 compared with the hybrid baselines CSAW and EL-Graph, our method correspondingly achieves 11% and 7% F1 improvement.",
"cite_spans": [
{
"start": 309,
"end": 330,
"text": "Medelyan et al.(2008)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 967,
"end": 974,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.5.1"
},
{
"text": "2) Compared with the context compatibility only and the topic coherence only methods, the main advantage of our method is that, rather than only achieved high entity linking precision on salient mentions, it can also effectively link the non-salient mentions in a document: this is demonstrated in our method's significant Recall improvement: a 32~52% Recall improvement over baselines Wikify!, EM-Model and M&W. We believe this is because a document usually contains little evidence for EL decisions on non-salient mentions, so with either only context compatibility or only topic coherence the evidence is not enough for EL decisions on these non-salient mentions, and bring these two directions together is critical for the accurate EL on these mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.5.1"
},
{
"text": "3) Compared with the hybrid methods, the main advantage of our method is the improvement of EL precision (a 11~16% improvement over baselines CSAW and EL-Graph), we believe this is because: \u25cb 1 Our method can further capture the mutual reinforcement effect between the context compatibility and the topic coherence; \u25cb 2 The traditional hybrid methods usually determine the topic coherence of an entity to a document using all entities in the document, in comparison our method uses only the entities in the same topic, we believe this is more reasonable for EL decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.5.1"
},
{
"text": "One still parameter of our method is the topic number T. An appropriate T will distribute entities into well-organized topics, in turn it will capture the co-occurrence information of entities. Figure 4 plots the F1 at different T values. We can see that the F1 is not very sensitive to the topic number and with T = 300 T = 300 our method achieves its best F1 performance. Figure 4 . The F1 vs. the topic number T",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Figure 4",
"ref_id": null
},
{
"start": 374,
"end": 382,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "4.5.2"
},
{
"text": "In this section we analyze why and how our method works well in detail. Generally, we believe the main advantages of our method are: 1) The effects of topic knowledge. One main advantage of our model is that the topic knowledge can provide a document-specific entity prior for EL. Concretely, using the topic knowledge and the topic distribution of documents, the prior for an entity appearing in a document d is highly related to the document's topics: P (ejd) = P z P (zjd)P (ejz) P (ejd) = P z P (zjd)P (ejz)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detailed Analysis",
"sec_num": "4.5.3"
},
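As a one-line sketch of this document-specific prior (the array names are ours): with μ_d the document's topic distribution and φ the topic-entity distributions, P(e|d) is just their product.

```python
# Sketch: document-specific entity prior P(e|d) = sum_z P(z|d) * P(e|z).
import numpy as np

def entity_prior(mu_d, phi):
    """mu_d: shape (T,) topic distribution of document d; phi: shape (T, E). Returns shape (E,)."""
    return mu_d @ phi
```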
{
"text": "This prior is obviously more reasonable than the \"information less prior\" (i.e., all entities have equal prior) or \"a global entity popularity prior\" . To demonstrate, Table 2 -3 show the 3 topics where the Apple Inc. and the fruit Apple have the largest generation probability P(e|z) from these topics. We can see that the topic knowledge can provide a reasonable prior for entities appearing in a document: the Apple Inc. has a large prior in documents about Computer, Video and Software, and the fruit Apple has a large prior in documents about Wine, Food and Plant. Table 3 . The 3 topics where the fruit Apple has the largest P(e|z) 2) The effects of a fine-tuned context model. The second advantage of our model is that it provides a statistical framework for fine-tuning the context model from data. To demonstrate such an effect, Table 4 compares the EL performance of \u2460 the entity-topic model with no context model is used (No Context), i.e., we determine the referent entity of a mention by deleting the 3rd term of the formula P (e i = ejz; e \u00a1i ; a; D) P (e i = ejz; e \u00a1i ; a; D) in Section 3; \u2461 with the context model estimated using the entity's Wikipedia page (Article Content), \u2462 with the context model estimated using the 50 word window of all its mentions in Wikipedia (Mention Context) and; \u2463 with the context model in the original entity-topic model (Entity-Topic Model). From Table 4 we can see that a fine-tuned context model will result in a 2~7% F1 improvement. 80 Table 4 . The F1 using different context models",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 570,
"end": 577,
"text": "Table 3",
"ref_id": null
},
{
"start": 838,
"end": 845,
"text": "Table 4",
"ref_id": null
},
{
"start": 1397,
"end": 1404,
"text": "Table 4",
"ref_id": null
},
{
"start": 1486,
"end": 1498,
"text": "80 Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Detailed Analysis",
"sec_num": "4.5.3"
},
{
"text": "3) The effects of joint model. The third advantage of our model is that it jointly model the context compatibility and the topic coherence, which bring two benefits: \u2460 the mutual reinforcement between the two directions can be captured in our model; \u2461 the context compatibility and the topic coherence are uniformly modeled and jointly estimated, which makes the model more accurate for EL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "We also compare our method with the top 5 EL systems in TAC 2009 and the two state-of-the-art systems (EM-Model and EL-Graph) on TAC 2009 data set in Figure 5 (For EL-Graph and our method, a NIL threshold is used to detect whether the referent entity is contained in the knowledge base, if the knowledge base not contains the referent entity, we assign the mention to a NIL entity). From Figure 5 , we can see that our method is competitive: 1) Our method can achieve a 3.4% accuracy improvement over the best system in TAC 2009; 2) Our method, EM-Model and EL-Graph get very close accuracies (0.854, 0.86 and 0.838 correspondingly), we believe this is because: \u25cb 1 The mentions to be linked in TAC data set are mostly salient mentions; \u25cb 2 The influence of the NIL referent entity problem, i.e., the referent entity is not contained in the given knowledge base: Most referent entities (67.5%) on TAC 2009 are NIL entity and our method has no special handling on this problem, rather than other methods such as the EM-Model, which affects the overall performance of our method. ",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 5",
"ref_id": "FIGREF7"
},
{
"start": 388,
"end": 396,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "EL Accuracies on TAC 2009 dataset",
"sec_num": "4.5.4"
},
{
"text": "In this section, we briefly review the related work of EL. Traditionally, the context compatibility based methods link a mention to the entity which has the largest compatibility with it. Cucerzan (2007) modeled the compatibility as the cosine similarity between the vector space representation of mention's context and of entity's Wikipedia entry. Mihalcea & Csomai (2007) , Bunescu & Pasca (2006) , Fader et al. (2009) , Gottipati et al.(2011) and Zhang et al.(2011) extended the vector space model with more information such as the entity category and the acronym expansion, etc. proposed a generative model which computes the compatibility using the evidences from entity's popularity, name distribution and context word distribution. Kataria et al.(2011) and Sen (2012) used a latent topic model to learn the context model of entities. Zheng et al. (2010) , Dredze et al. (2010) , Zhang et al. (2010) , Zhou et al. (2010) and Ji & Chen(2011) employed the ranking techniques to further take relations between candidate entities into account.",
"cite_spans": [
{
"start": 188,
"end": 203,
"text": "Cucerzan (2007)",
"ref_id": "BIBREF6"
},
{
"start": 349,
"end": 373,
"text": "Mihalcea & Csomai (2007)",
"ref_id": "BIBREF25"
},
{
"start": 376,
"end": 398,
"text": "Bunescu & Pasca (2006)",
"ref_id": "BIBREF3"
},
{
"start": 401,
"end": 420,
"text": "Fader et al. (2009)",
"ref_id": "BIBREF9"
},
{
"start": 423,
"end": 445,
"text": "Gottipati et al.(2011)",
"ref_id": "BIBREF10"
},
{
"start": 450,
"end": 468,
"text": "Zhang et al.(2011)",
"ref_id": "BIBREF32"
},
{
"start": 739,
"end": 759,
"text": "Kataria et al.(2011)",
"ref_id": "BIBREF16"
},
{
"start": 764,
"end": 774,
"text": "Sen (2012)",
"ref_id": "BIBREF28"
},
{
"start": 841,
"end": 860,
"text": "Zheng et al. (2010)",
"ref_id": "BIBREF30"
},
{
"start": 863,
"end": 883,
"text": "Dredze et al. (2010)",
"ref_id": "BIBREF8"
},
{
"start": 886,
"end": 905,
"text": "Zhang et al. (2010)",
"ref_id": "BIBREF29"
},
{
"start": 908,
"end": 926,
"text": "Zhou et al. (2010)",
"ref_id": "BIBREF31"
},
{
"start": 931,
"end": 946,
"text": "Ji & Chen(2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "On the other side, the topic coherence based methods link a mention to the entity which are most coherent to the document containing it. Medelyan et al. (2008) measured the topic coherence of an entity to a document as the weighted average of its relatedness to the unambiguous entities in the document. Milne and Witten (2008) extended Medelyan et al. (2008) 's coherence by incorporating commonness and context quality. Bhattacharya and Getoor (2006) modeled the topic coherence as the likelihood an entity is generated from the latent topics of a document. Sen (2012) modeled the topic coherence as the groups of co-occurring entities. Kulkarni et al. (2009) modeled the topic coherence as the sum of all pair-wise relatedness between the referent entities of a document. and Hoffart et al.(2011) modeled the topic coherence of an entity as its node importance in a graph which captures all mention-entity and entity-entity relations in a document.",
"cite_spans": [
{
"start": 137,
"end": 159,
"text": "Medelyan et al. (2008)",
"ref_id": "BIBREF24"
},
{
"start": 304,
"end": 327,
"text": "Milne and Witten (2008)",
"ref_id": "BIBREF22"
},
{
"start": 337,
"end": 359,
"text": "Medelyan et al. (2008)",
"ref_id": "BIBREF24"
},
{
"start": 422,
"end": 452,
"text": "Bhattacharya and Getoor (2006)",
"ref_id": "BIBREF1"
},
{
"start": 639,
"end": 661,
"text": "Kulkarni et al. (2009)",
"ref_id": "BIBREF17"
},
{
"start": 779,
"end": 799,
"text": "Hoffart et al.(2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
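As a rough illustration of the pairwise topic-coherence idea (in the spirit of Kulkarni et al., 2009, not the entity-topic model proposed in this paper), the sketch below scores a candidate entity assignment for a document by summing relatedness over all entity pairs. The relatedness table is hypothetical; in practice such scores would be derived from, e.g., Wikipedia link structure.

```python
from itertools import combinations

def assignment_coherence(entities, relatedness):
    """Sum of pairwise relatedness over the referent entities chosen for one document."""
    return sum(relatedness.get(frozenset(pair), 0.0) for pair in combinations(entities, 2))

# Hypothetical relatedness scores between candidate entities.
relatedness = {
    frozenset({"Apple Inc.", "Mac OS X Lion"}): 0.9,
    frozenset({"Apple Inc.", "Apple Worldwide Developers Conference"}): 0.8,
    frozenset({"Apple (fruit)", "Mac OS X Lion"}): 0.1,
}

coherent = ["Apple Worldwide Developers Conference", "Apple Inc.", "Mac OS X Lion"]
incoherent = ["Apple Worldwide Developers Conference", "Apple (fruit)", "Mac OS X Lion"]
print(assignment_coherence(coherent, relatedness))    # 1.7
print(assignment_coherence(incoherent, relatedness))  # 0.1
```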
{
"text": "This paper proposes a generative model, the entitytopic model, for entity linking. By uniformly modeling context compatibility, topic coherence and the correlation between them as statistical dependencies, our model provides an effective way to jointly exploit them for better EL performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "In this paper, the entity-topic model can only link mentions to the previously given entities in a knowledge base. For future work, we want to overcome this limit by incorporating an entity discovery ability into our model, so that it can also discover and learn the knowledge of previously unseen entities from a corpus for linking name mentions to these entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "www.freebase.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work is supported by the National Natural Science Foundation of China under Grants no. 90920010 and 61100152. Moreover, we sincerely thank the reviewers for their valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discovering missing links in Wikipedia",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Adafre",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 3rd international workshop on Link discovery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adafre, S. F. & de Rijke, M. 2005. Discovering missing links in Wikipedia. In: Proceedings of the 3rd international workshop on Link discovery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A latent dirichlet model for unsupervised entity resolution",
"authors": [
{
"first": "I",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, I. and L. Getoor. 2006. A latent dirichlet model for unsupervised entity resolution. In: Proceedings of SIAM International Conference on Data Mining.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "A",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, D. M. and A. Y. Ng, et al. (2003). Latent dirichlet allocation. In: The Journal of Machine Learning Research 3: 993--1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using encyclopedic knowledge for named entity disambiguation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bunescu, R. & Pasca, M. 2006. Using encyclopedic knowledge for named entity disambiguation. In: Proceedings of EACL, vol. 6.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P., Pietra, S. D., Pietra, V. D., and Mercer, R. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2), 263-31.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "In Computer Speech and Language",
"volume": "",
"issue": "",
"pages": "359--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, S. F. & Goodman, J. 1999. An empirical study of smoothing techniques for language modeling. In Computer Speech and Language, London; Orlando: Academic Press, c1986-, pp. 359-394.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Large-scale named entity disambiguation based on Wikipedia data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Cucerzan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "708--716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cucerzan, S. 2007. Large-scale named entity disambiguation based on Wikipedia data. In: Proceedings of EMNLP-CoNLL, pp. 708-716.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introduction to text linguistics",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "De Beaugrande",
"suffix": ""
},
{
"first": "W",
"middle": [
"U"
],
"last": "Dressler ; Longman London",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "De Beaugrande, R. A. and W. U. Dressler. 1981. Introduction to text linguistics, Chapter V, Longman London.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Entity Disambiguation for Knowledge Base Population",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Finin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dredze, M., McNamee, P., Rao, D., Gerber, A. & Finin, T. 2010. Entity Disambiguation for Knowledge Base Population. In: Proceedings of the 23rd International Conference on Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scaling Wikipedia-based named entity disambiguation to arbitrary web text",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Center",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Wiki-AI Workshop at IJCAI",
"volume": "9",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fader, A., Soderland, S., Etzioni, O. & Center, T. 2009. Scaling Wikipedia-based named entity disambiguation to arbitrary web text. In: Proceedings of Wiki-AI Workshop at IJCAI, vol. 9.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Linking Entities to a Knowledge Base with Query Expansion",
"authors": [
{
"first": "S",
"middle": [],
"last": "Gottipati",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gottipati, S., Jiang, J. 2011. Linking Entities to a Knowledge Base with Query Expansion. In: Proceedings of EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Finding scientific topics",
"authors": [
{
"first": "T",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griffiths, T. L. and M. Steyvers. 2004. Finding scientific topics. In: Proceedings of the National Academy of Sciences of the United States of America.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Collective Entity Linking in Web Text: A Graph-Based Method",
"authors": [
{
"first": "X",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 34 th Annual ACM SIGIR Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han, X., Sun, L. and Zhao J. 2011. Collective Entity Linking in Web Text: A Graph-Based Method. In: Proceedings of 34 th Annual ACM SIGIR Conference.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Generative Entity-Mention Model for Linking Entities with Knowledge Base",
"authors": [
{
"first": "X",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han, X. and Sun, L. 2011. A Generative Entity-Mention Model for Linking Entities with Knowledge Base. In: Proceedings of ACL-HLT.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Robust Disambiguation of Named Entities in Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Yosef",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoffart, J., Yosef, M. A., et al. 2011. Robust Disambiguation of Named Entities in Text. In: Proceedings of EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Interpolated estimation of Markov source parameters from sparse data",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1980,
"venue": "Proceedings of the Workshop on Pattern Recognition in Practice",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, Frederick and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In: Proceedings of the Workshop on Pattern Recognition in Practice.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Entity Disambiguation with Hierarchical Topic Models",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Kataria",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Kumar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rastogi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kataria, S. S., Kumar, K. S. and Rastogi, R. 2011. Entity Disambiguation with Hierarchical Topic Models. In: Proceedings of KDD.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Collective annotation of Wikipedia entities in web text",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ramakrishnan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chakrabarti",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "457--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kulkarni, S., Singh, A., Ramakrishnan, G. & Chakrabarti, S. 2009. Collective annotation of Wikipedia entities in web text. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 457-466.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Identification and tracing of ambiguous names: Discriminative and generative approaches",
"authors": [
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Morie",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "419--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, X., Morie, P. & Roth, D. 2004. Identification and tracing of ambiguous names: Discriminative and generative approaches. In: Proceedings of the National Conference on Artificial Intelligence, pp. 419-424.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Overview of the TAC 2009 Knowledge Base Population Track",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceeding of Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McNamee, P. & Dang, H. T. 2009. Overview of the TAC 2009 Knowledge Base Population Track. In: Proceeding of Text Analysis Conference.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Overview of the TAC 2010 knowledge base population track",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, H., et al. 2010. Overview of the TAC 2010 knowledge base population track. In: Proceedings of Text Analysis Conference.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Collaborative Ranking: A Case Study on Entity Linking",
"authors": [
{
"first": "H",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, H. and Chen, Z. 2011. Collaborative Ranking: A Case Study on Entity Linking. In: Proceedings of EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to link with Wikipedia",
"authors": [
{
"first": "D",
"middle": [],
"last": "Milne",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 17th ACM conference on Conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milne, D. & Witten, I. H. 2008. Learning to link with Wikipedia. In: Proceedings of the 17th ACM conference on Conference on information and knowledge management.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mining Domain-Specific Thesauri from Wikipedia: A case study",
"authors": [
{
"first": "D",
"middle": [],
"last": "Milne",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of IEEE/WIC/ACM WI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milne, D., et al. 2006. Mining Domain-Specific Thesauri from Wikipedia: A case study. In Proc. of IEEE/WIC/ACM WI.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Topic indexing with Wikipedia",
"authors": [
{
"first": "O",
"middle": [],
"last": "Medelyan",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Milne",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the AAAI WikiAI workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Medelyan, O., Witten, I. H. & Milne, D. 2008. Topic indexing with Wikipedia. In: Proceedings of the AAAI WikiAI workshop.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Wikify!: linking documents to encyclopedic knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Csomai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the sixteenth ACM conference on Conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R. & Csomai, A. 2007. Wikify!: linking documents to encyclopedic knowledge. In: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pp. 233-242.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Name discrimination by clustering similar contexts. Computational Linguistics and Intelligent Text Processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Purandare",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kulkarni",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "226--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedersen, T., Purandare, A. & Kulkarni, A. 2005. Name discrimination by clustering similar contexts. Computational Linguistics and Intelligent Text Processing, pp. 226-237.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Local and Global Algorithms for Disambiguation to Wikipedia",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratinov, L. and D. Roth, et al. 2011. Local and Global Algorithms for Disambiguation to Wikipedia. In: Proceedings of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Collective context-aware topic models for entity disambiguation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Sen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WWW '12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sen, P. 2012. Collective context-aware topic models for entity disambiguation. In Proceedings of WWW '12, New York, NY, USA, ACM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Entity Linking Leveraging Automatically Generated Annotation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, W., Su, J., Tan, Chew Lim & Wang, W. T. 2010. Entity Linking Leveraging Automatically Generated Annotation. In: Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning to Link Entities with Knowledge Base",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2010,
"venue": "The Proceedings of the Annual Conference of the North American Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng, Z., Li, F., Huang, M. & Zhu, X. 2010. Learning to Link Entities with Knowledge Base. In: The Proceedings of the Annual Conference of the North American Chapter of the ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Resolving Surface Forms to Wikipedia Topics",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rouhani-Kalleh",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Vasile",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gaffney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1335--1343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Y., Nie, L., Rouhani-Kalleh, O., Vasile, F. & Gaffney, S. 2010. Resolving Surface Forms to Wikipedia Topics. In: Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pp. 1335-1343.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Entity Linking with Effective Acronym Expansion, Instance Selection and Topic Modeling\uf020",
"authors": [
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"C"
],
"last": "Sim",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, W. and Sim, Y. C., et al. 2011. Entity Linking with Effective Acronym Expansion, Instance Selection and Topic Modeling\uf020 . In: Proceedings of IJCAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The document generative process, with Dir(:) Dir(:), Mult(:) Mult(:) and Unif(:) Unif(:) correspondingly Dirichlet, Multinomial and Uniform distribution"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "sample its topic distribution \u03bc d \u00bb Dir(\u00ae); 2. For each of the M d mentions m i in doc d: a) Sample a topic assignment z i \u00bb Mult(\u03bc d ); b) Sample an entity assignment e i \u00bb Mult(\u00c1 z i ); c) Sample a mention m i \u00bb Mult(\u00c3 e i ); 3. For each of the N d words w i in doc d: a) Sample a target entity it describes from d's referent entities a i \u00bb Unif (e m 1 ; e m 2 ;\u00a2 \u00a2 \u00a2 ; e m d ); b) Sample a describing word using a i"
},
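The generative story summarized in the figure text above can be mimicked by a small forward sampler. The following is only an illustrative sketch with toy vocabulary sizes and hypothetical hyperparameter values; the symbols theta, phi, psi and xi follow the reconstruction of the figure text and are assumptions, not necessarily the paper's exact notation or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
T, E, NAMES, WORDS = 3, 5, 4, 6                    # toy numbers of topics, entities, mention names, context words
alpha, beta, gamma, delta = 0.1, 0.1, 0.1, 0.1     # hypothetical Dirichlet hyperparameters

phi = rng.dirichlet([beta] * E, size=T)            # per-topic entity distributions
psi = rng.dirichlet([gamma] * NAMES, size=E)       # per-entity name distributions
xi = rng.dirichlet([delta] * WORDS, size=E)        # per-entity context-word distributions

def generate_document(num_mentions=3, num_words=8):
    theta = rng.dirichlet([alpha] * T)                         # document topic distribution
    z = rng.choice(T, size=num_mentions, p=theta)              # topic assignment per mention
    e = np.array([rng.choice(E, p=phi[t]) for t in z])         # entity assignment per mention
    m = np.array([rng.choice(NAMES, p=psi[ei]) for ei in e])   # observed mention name per mention
    a = rng.choice(e, size=num_words)                          # each word describes one referent entity
    w = np.array([rng.choice(WORDS, p=xi[ai]) for ai in a])    # observed context words
    return m, w

print(generate_document())
```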
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The plate representation of the entitytopic model"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "(D; \u00ae;\u00af;\u00b0; \u00b1) = Y d P (m d ; w d ; \u00ae;\u00af;\u00b0; \u00b1) (e d j\u00ae;\u00af)P (m d je d ;\u00b0)P (w d je d ; \u00b1)"
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "\u03bcj\u00ae)P (e d j\u03bc; \u00c1)d\u03bcd\u00bbd\u00c3d\u00c1 (2:1) P (D; \u00ae;\u00af;\u00b0; \u00b1) = Y d P (m d ; w d ; \u00ae;\u00af;\u00b0; \u00b1) (e d j\u00ae;\u00af)P (m d je d ;\u00b0)P (w d je d ; \u00b1)"
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "\u03bcj\u00ae)P (e d j\u03bc; \u00c1)d\u03bcd\u00bbd\u00c3d\u00c1 (2:1) where m d m d and e d e d correspondingly the set of mentions and their entity assignments in document d, w d w d and a d a d correspondingly the set of words and their entity assignments in document d."
},
"FIGREF7": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The EL accuracies on TAC 2009 dataset"
},
"TABREF0": {
"content": "<table><tr><td/><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Wikify!</td><td>0.55</td><td>0.28</td><td>0.37</td></tr><tr><td>EM-Model</td><td>0.82</td><td>0.48</td><td>0.61</td></tr><tr><td>M&amp;W</td><td>0.80</td><td>0.38</td><td>0.52</td></tr><tr><td>CSAW</td><td>0.65</td><td>0.73</td><td>0.69</td></tr><tr><td>EL-Graph</td><td>0.69</td><td>0.76</td><td>0.73</td></tr><tr><td>Our Method</td><td>0.81</td><td>0.80</td><td>0.80</td></tr><tr><td colspan=\"4\">Table 1. The overall results on IITB data set</td></tr><tr><td colspan=\"4\">From the overall results in Table 1, we can see that:</td></tr></table>",
"num": null,
"html": null,
"text": ".",
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td/><td/><td>has the</td></tr><tr><td/><td>largest P(e|z)</td><td/></tr><tr><td>Topic(Wine)</td><td>Topic(Food)</td><td>Topic(Plant)</td></tr><tr><td>Wine</td><td>Food</td><td>Plant</td></tr><tr><td>Grape</td><td>Restaurant</td><td>Flower</td></tr><tr><td>Vineyard</td><td>Meat</td><td>Leaf</td></tr><tr><td>Winery</td><td>Cheese</td><td>Tree</td></tr><tr><td>Apple</td><td>Vegetable</td><td>Fruit</td></tr></table>",
"num": null,
"html": null,
"text": "The 3 topics where the Apple Inc.",
"type_str": "table"
}
}
}
}