|
{ |
|
"paper_id": "S07-1042", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:22:41.440540Z" |
|
}, |
|
"title": "JHU1 : An Unsupervised Approach to Person Name Disambiguation using Web Snippets", |
|
"authors": [ |
|
{ |
|
"first": "Delip", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University Baltimore", |
|
"location": { |
|
"postCode": "21218", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Nikesh", |
|
"middle": [], |
|
"last": "Garera", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University Baltimore", |
|
"location": { |
|
"postCode": "21218", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University Baltimore", |
|
"location": { |
|
"postCode": "21218", |
|
"region": "MD" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents an approach to person name disambiguation using K-means clustering on rich-feature-enhanced document vectors, augmented with additional web-extracted snippets surrounding the polysemous names to facilitate term bridging. This yields a significant F-measure improvement on the shared task training data set. The paper also illustrates the significant divergence between the properties of the training and test data in this shared task, substantially skewing results. Our system optimized on F 0.2 rather than F 0.5 would have achieved top performance in the shared task.",
|
"pdf_parse": { |
|
"paper_id": "S07-1042", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents an approach to person name disambiguation using K-means clustering on rich-feature-enhanced document vectors, augmented with additional web-extracted snippets surrounding the polysemous names to facilitate term bridging. This yields a significant F-measure improvement on the shared task training data set. The paper also illustrates the significant divergence between the properties of the training and test data in this shared task, substantially skewing results. Our system optimized on F 0.2 rather than F 0.5 would have achieved top performance in the shared task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Being able to automatically distinguish between John Doe, the musician, and John Doe, the actor, on the Web is a task of significant importance, with applications in IR and other information management tasks. Mann and Yarowsky (2004) used biographical data annotated with named entities and performed fusion of extracted information across multiple documents. Bekkerman and McCallum (2005) studied the problem in a social network setting, exploiting link topology to disambiguate namesakes. Al-Kamha and Embley (2004) used a combination of attributes (like zipcodes, state, etc.), links, and page similarity to derive the name clusters, while Wan et. al. (2005) used lexical features and named entities.",
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 232, |
|
"text": "Mann and Yarowsky (2004)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 389, |
|
"text": "Bekkerman and McCallum (2005)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 516, |
|
"text": "Al-Kamha and Embley (2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 659, |
|
"text": "Wan et. al. (2005)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our framework focuses on the K-means clustering model, using both bag-of-words features and various augmented feature sets. We experimented with several similarity functions and chose Pearson's correlation coefficient 1 as the distance measure for clustering. The weights for the features were set to the term frequency of their respective words in the document. 2",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approaches", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We queried the Google search engine with the target person names and extracted up to the top one thousand results. For each result we also extracted the snippet associated with it. An example is shown below in Figure 2 .1. As can be seen, the snippets contain high-quality, low-noise features that could be used to improve the performance of the system. Each snippet was treated as a document and clustered along with the supplied documents. This process is illustrated in Figure 2 . The following example illustrates how these web snippets can improve performance by lexical transitivity. In this hypothetical example, a short test document contains a Canadian postal code (T6G 2H1) not found in any of the training documents. However, there may exist an additional web page, in neither the training nor the test data, which contains both this term and also overlaps with other terms in the training data (e.g. 492-9920), serving as an effective transitive bridge between the two.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 218, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 472, |
|
"end": 480, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Submitted system: Clustering using Web Snippets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Training Document 1: 492-9920, not (T6G 2H1); Web Snippet 2: both 492-9920 and T6G 2H1; Test Document 3: T6G 2H1, not (492-9920). Thus K-means clustering is likely to cluster the three documents above together, while without this transitive bridge the association between the training and test documents is much weaker. The final clustering of the test data is simply a projection, with the training documents and web snippets removed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submitted system: Clustering using Web Snippets", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In this section we describe several trivial baselines: 1. Singletons: A clustering where each cluster has only one document, hence the number of clusters is the same as the number of documents.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A clustering with only one cluster containing all documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "One Cluster:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Random: A clustering scheme which partitions the documents uniformly at random into K clusters, where the value of K was the optimal K on the training and test data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "One Cluster:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "These results are summarized in Table 1 . Note that all average F-scores mentioned in this table and the rest of the paper are micro-averages, obtained by averaging the purity and inverse purity over all names and then calculating the F-score.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 39, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "One Cluster:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Train Test Baseline F 0.2 F 0.5 F 0.2 F 0.5",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "One Cluster:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The standard unaugmented Bag of Words model achieves F 0.5 of 0.666 on training data, as shown in Table 2 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 106, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "K-means on Bag of Words model", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We then consider only terms that are nouns (NN, NNP) and adjectives (JJ), with the intuition that most of the content-bearing and descriptive words that disambiguate a person fall into these classes. The result then improves to 0.67 on the training data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part of speech tag features", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Another variant of this system, which we call Rich-Feats, gives preferential weighting to terms that immediately surround all variants of the person name in question, as well as to place names, occupation names, and titles. For marking up place names, occupation names, and titles we used gazetteer 3 lookup without explicit named entity disambiguation. The keywords that appeared in the HTML tag <META ..> were also given higher weights. This resulted in an F 0.5 of 0.664.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rich features", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "The addition of web snippets as described in Section 2.1 yields a significant F 0.5 improvement, to 0.72. 4 The evaluation score of F-measure can be highly sensitive to this parameter K, as shown in Table 3 . The value of K that gives the best F-measure on the training set using the vanilla bag of words (BOW) model is K = 10%; however, we see in Table 3 that this value of K actually performs much worse on the test data than other K values. Table 4 compares cluster statistics between the training and test data. This data was derived from Artiles et. al (2007) . The large difference in the average number of clusters between the training and test sets indicates that the parameter K, optimized on the training set, cannot be transferred to the test set, as the two sets come from very different distributions. This can be seen empirically in Table 3 , where applying the best training K results in a significant performance drop on the test set, given this divergence, when parameters are optimized for F 0.5 (although performance does transfer well when parameters are optimized on F 0.2 ). This was observed in our primary evaluation system, which was optimized for F 0.5 and resulted in a low official score of F 0.5 = .53 and F 0.2 = .65. 4 We discard the training and test documents that have no text content; thus the absolute value K = 10 and the percentage value K = 10% can result in different K's, even if a name originally had 100 documents.",
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 106, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 563, |
|
"text": "Artiles et. al (2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 913, |
|
"end": 914, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 205, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 344, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 443, |
|
"end": 450, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 830, |
|
"end": 837, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Snippets from the Web", |
|
"sec_num": "2.6" |
|
}, |
|
{ |
|
"text": "Test .702 .666 .527 .600; 20% .716 .644 .617 .630; 30% .724 .631 .683 .676; 40% .724 .618 .728 .705; 50% .732 .614 .762 .724; 60% .731 .601 .798 .747; 70% .730 .593 .832 .766; 80% .732 .586 .855 .773; 90% .714 .558 .861 .764; 100% .670 .502 .843 .730. Table 3: Selecting the optimal parameter on training data and applying it to test data. Thus an interesting question is to measure performance when parameters are chosen on data sharing the distributional character of the test data rather than the highly divergent training set. To do this, we used standard 2-fold cross validation to estimate clustering parameters from a held-out, alternate-half portion of the test data 5 , which more fairly represents the character of the other half of the test data than does the very different training data. We divide the test set into two equal halves (taking the first fifteen names alphabetically in one set and the rest in the other). We optimize K on the first half, test on the other half, and vice versa. We report the two K-values and their corresponding F-measures in Table 5 , and we also report the average in order to compare it with the results on the test set obtained using K optimized on training. Further, we also report the oracle best K, that is, the K obtained if we optimize on the entire test data 6 . We can see in Table 5 how optimizing K on a development set with the same distribution as the test set can give an F-measure in the range of 77%, a significant increase compared to the F-measure obtained by optimizing K on the given training data. Further, Table 5 also reports results for a custom clustering method that takes the best K-means clustering using the vanilla bag of words model, retains the largest cluster, and splits all the other clusters into singleton clusters. This method gives an improved 2-fold F-measure score over the simple bag of words model, implying that most of the namesakes in the test data have one (or a few) dominant clusters and many singleton clusters.",
|
"cite_spans": [],
|
"ref_spans": [ |
|
{ |
|
"start": 1310, |
|
"end": 1317, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1548, |
|
"end": 1555, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Train", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "K F 0.2 F 0.5 F 0.2 F 0.5 10%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Train", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We presented a K-means clustering approach to the task of person name disambiguation, using several augmented feature sets including HTML meta features, part-of-speech-filtered features, and additional web snippets extracted from Google to facilitate term bridging. The latter showed significant empirical gains on the training data. Our best configuration (Table 3) yielded a top-performing F 0.2 of .855 on test data (and F 0.5 = .773 on test data). We also explored the striking discrepancy between the training and test data characteristics, and showed how optimizing the clustering parameters on the given training data does not transfer well to the divergent test data. To control for similar training and test distributional characteristics, we re-evaluated our test results, estimating clustering parameters from alternate held-out portions of the test set. Our models achieved a cross-validated F 0.5 of .77-.78 on test data for all feature combinations, further demonstrating the broadly strong performance of these techniques.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 361, |
|
"text": "(Table 3)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "This performs better than standard measures like Euclidean distance and cosine similarity with K-means clustering on this data. 2 We found that using TF weights instead of TF-IDF weights gives better performance on this task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Totalling 19,646 terms, gathered from publicly available resources on the web. Further details are available on request.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This also prevents overfitting, as the two halves for training and testing are disjoint. 6 By oracle best K we mean the K obtained by optimizing over the entire test data. Note that the oracle best K is given only for comparison, because it would be unfair to claim results obtained by optimizing K on the entire test set; all our claimed results for the different models are based on 2-fold cross validation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Grouping search-engine returned citations for person-name queries", |
|
"authors": [ |
|
{

"first": "Reema",

"middle": [],

"last": "Al-Kamha",

"suffix": ""

},
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Embley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 6th annual ACM international workshop on Web information and data management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "96--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reema Al-Kamha and David W. Embley. 2004. Grouping search-engine returned citations for person-name queries. In Proceedings of the 6th annual ACM international workshop on Web information and data management, pages 96-103.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Evaluation: Establishing a benchmark for the web people search task", |
|
"authors": [ |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Artiles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julio", |
|
"middle": [], |
|
"last": "Gonzalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felisa", |
|
"middle": [], |
|
"last": "Verdejo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of Semeval 2007, Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Javier Artiles, Julio Gonzalo, and Felisa Verdejo. 2007. Evaluation: Establishing a benchmark for the web people search task. In Proceedings of Semeval 2007, Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Disambiguating web appearances of people in a social network", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Bekkerman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 14th international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "463--470", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Bekkerman and Andrew McCallum. 2005. Disambiguating web appearances of people in a social network. In Proceedings of the 14th international conference on World Wide Web, pages 463-470.",
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised personal name disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Gideon", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the seventh conference on Natural language learning (CONLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gideon S. Mann and David Yarowsky. 2004. Unsupervised personal name disambiguation. In Proceedings of the seventh conference on Natural language learning (CONLL), pages 33-40.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Person resolution in person search results: Webhawk", |
|
"authors": [ |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Binggong", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "163--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaojun Wan, Jianfeng Gao, Mu Li, and Binggong Ding. 2005. Person resolution in person search results: Webhawk. In Proceedings of the 14th ACM international conference on Information and knowledge management, pages 163-170.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Google snippet for \"Dekang Lin\"" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Clustering using Web Snippets" |
|
}, |
|
"TABREF0": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "This is a combination of the models mentioned in Sections 2.5 and 2.6. This model combination resulted in a slight degradation of performance over snippets by themselves on the training data but a slight improvement on test data.", |
|
"html": null, |
|
"content": "<table><tr><td>Model</td><td>K</td><td>F 0.2</td><td>F 0.5</td></tr><tr><td>Vanilla BOW</td><td colspan=\"3\">10% 0.702 0.666</td></tr><tr><td>BOW + PoS</td><td colspan=\"3\">10% 0.706 0.670</td></tr><tr><td>BOW + RichFeats</td><td colspan=\"3\">10% 0.700 0.664</td></tr><tr><td>Snippets</td><td colspan=\"3\">10 0.721 0.718</td></tr><tr><td>Snippets + RichFeats</td><td colspan=\"3\">10 0.714 0.712</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Performance on Training Data", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"5\">shows a full enumeration of model variants under this cross-validated test evaluation. POS and RichFeats yield small gains, and a best F 0.5 performance of .776.</td></tr><tr><td>Data set</td><td colspan=\"2\">cluster size</td><td colspan=\"2\"># of clusters</td></tr><tr><td/><td colspan=\"4\">Mean Variance Mean Variance</td></tr><tr><td>Train</td><td>5.4</td><td>144.0</td><td>10.8</td><td>146.3</td></tr><tr><td>Test</td><td>3.1</td><td>26.5</td><td>45.9</td><td>574.1</td></tr></table>",
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">: Cluster statistics from the test and training data</td></tr><tr><td>Data set</td><td>K</td><td>F 0.2</td><td>F 0.5</td></tr><tr><td colspan=\"2\">F 0.5 Best K on train 10%</td><td>.702</td><td>.666</td></tr><tr><td colspan=\"2\">F 0.2 Best K on train 10</td><td>.707</td><td>.663</td></tr><tr><td>Best K on train</td><td>10%</td><td>.527</td><td>.560</td></tr><tr><td>applied to test</td><td>10</td><td>.540</td><td>.571</td></tr><tr><td>2Fold on Test</td><td>80</td><td>.847</td><td>.748</td></tr><tr><td/><td>80%</td><td>.862</td><td>.793</td></tr><tr><td/><td/><td colspan=\"2\">.854* .771*</td></tr><tr><td>2Fold on Single</td><td>80</td><td>.847</td><td>.749</td></tr><tr><td>Largest Cluster</td><td>80</td><td>.866</td><td>.795</td></tr><tr><td/><td/><td colspan=\"2\">.856* .772*</td></tr><tr><td>Oracle on Test</td><td>80</td><td>.858</td><td>.774</td></tr><tr><td colspan=\"4\">Table 5: Comparison of training and test results using the Vanilla Bag-of-words model. The values indicated with * represent the average value.</td></tr></table>",
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "Performance on 2Fold Test Data; performance on test data when parameters are optimized for F 0.2 on training",
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |