|
{ |
|
"paper_id": "S10-1042", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:41.548221Z" |
|
}, |
|
"title": "UvT: The UvT Term Extraction System in the Keyphrase Extraction task", |
|
"authors": [ |
|
{ |
|
"first": "Kalliopi", |
|
"middle": [], |
|
"last": "Zervanou", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "ILK / TiCC -Tilburg centre for Cognition and Communication University of Tilburg", |
|
"location": { |
|
"postBox": "P.O. Box 90153", |
|
"postCode": "5000 LE", |
|
"settlement": "Tilburg", |
|
"country": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The UvT system is based on a hybrid, linguistic and statistical approach, originally proposed for the recognition of multiword terminological phrases, the C-value method (Frantzi et al., 2000). In the UvT implementation, we use an extended noun phrase rule set and take into consideration orthographic and morphological variation, term abbreviations and acronyms, and basic document structure information.", |
|
"pdf_parse": { |
|
"paper_id": "S10-1042", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The UvT system is based on a hybrid, linguistic and statistical approach, originally proposed for the recognition of multiword terminological phrases, the C-value method (Frantzi et al., 2000). In the UvT implementation, we use an extended noun phrase rule set and take into consideration orthographic and morphological variation, term abbreviations and acronyms, and basic document structure information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The increasing amount of documents in electronic form makes imperative the need for document content classification and semantic labelling. Keyphrase extraction contributes to this goal by the identification of important and discriminative concepts expressed as keyphrases. Keyphrases as reduced document content representations may find applications in document retrieval, classification and summarisation (D'Avanzo and Magnini, 2005) . The literature distinguishes between two principal processes: keyphrase extraction and keyphrase assignment. In the case of keyphrase assignment, suitable keyphrases from an existing knowledge resource, such as a controlled vocabulary, or a thesaurus are assigned to documents based on classification of their content. In keyphrase extraction, the phrases are mined from the document itself. Supervised approaches to the problem of keyphrase extraction include the Naive Bayes-based KEA algorithms (Gordon et al., 1999) (Medelyan and Witten, 2006) , decision tree-based and the genetic algorithm-based GenEx (Turney, 1999) , and the probabilistic KL divergence-based language model (Tomokiyo and Hurst, 2003) . Research in keyphrase extraction proposes the detection of keyphrases based on various statistics-based, or pattern-based fea-tures. Statistical measures investigated focus primarily on keyphrase frequency measures, whereas pattern-features include noun phrase pattern filtering, identification of keyphrase head and respective frequencies (Barker and Cornacchia, 2000) , document section position of the keyphrase (e.g., (Medelyan and Witten, 2006) ) and keyphrase coherence (Turney, 2003) . In this paper, we present an unsupervised approach which combines pattern-based morphosyntactic rules with a statistical measure, the C-value measure (Frantzi et al., 2000) which originates from research in the field of automatic term recognition and was initially designed for specialised domain terminology acquisition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 435, |
|
"text": "(D'Avanzo and Magnini, 2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 957, |
|
"text": "(Gordon et al., 1999)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 958, |
|
"end": 985, |
|
"text": "(Medelyan and Witten, 2006)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1060, |
|
"text": "(Turney, 1999)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1120, |
|
"end": 1146, |
|
"text": "(Tomokiyo and Hurst, 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1489, |
|
"end": 1518, |
|
"text": "(Barker and Cornacchia, 2000)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1571, |
|
"end": 1598, |
|
"text": "(Medelyan and Witten, 2006)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1625, |
|
"end": 1639, |
|
"text": "(Turney, 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1792, |
|
"end": 1814, |
|
"text": "(Frantzi et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The input documents in the Keyphrase Extraction task were scientific articles converted from their originally published form to plain text. Due to this process, some compound hyphenated words are erroneously converted into a single word (e.g., \"resourcemanagement\" vs. \"resourcemanagement\"). Moreover, document sections such as tables, figures, footnotes, headers and footers, often intercept sentence and paragraph text. Finally, due to the particularity of the scientific articles domain, input documents often contain irregular text, such as URLs, inline bibliographic references, mathematical formulas and symbols. In our approach, we attempted to address some of these issues by document structuring, treatment of orthographic variation and filtering of irregular text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The approach adopted first applies part-ofspeech tagging and basic document structuring (sec. 2.1 and 2.2). Subsequently, keyphrase candidates conforming to pre-defined morphosyntactic rule patterns are identified (sec. 2.3). In the next stage, orthographic, morphological and abbreviation variation phenomena are addressed (sec. 2.4) and, finally, candidate keyphrases are selected based on C-value statistical measure (sec. 2.5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For morphosyntactic analysis, we used the Maxent (Ratnaparkhi, 1996) POS tagger implementation of the openNLP toolsuite 1 . In order to improve tagging accuracy, irregular text, such as URLs, inline references, and recurrent patterns indicating footers and mathematical formulas are filtered prior to tagging.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 68, |
|
"text": "(Ratnaparkhi, 1996)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic pre-processing", |
|
"sec_num": "2.1" |
|
}, |
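
{

"text": "As an illustration of the pre-filtering step described above (not part of the original system), the following minimal Python sketch strips URLs, inline bibliographic references and formula-like symbol runs with regular expressions before the text would be handed to the POS tagger; the concrete patterns are assumptions for the example rather than the actual UvT filter rules.\n\nimport re\n\n# Illustrative filter patterns (assumptions for this sketch; the actual UvT\n# filter rules are not reproduced in the paper).\nURL_RE = re.compile(r\"https?://\\S+\")\nINLINE_REF_RE = re.compile(r\"\\([A-Z][^()]*\\d{4}[a-z]?\\)\")\nSYMBOL_RUN_RE = re.compile(r\"[=<>^~|]{2,}\")\n\ndef filter_irregular_text(sentence):\n    \"\"\"Remove URLs, inline references and formula-like symbol runs\n    so that they do not degrade POS tagging accuracy.\"\"\"\n    for pattern in (URL_RE, INLINE_REF_RE, SYMBOL_RUN_RE):\n        sentence = pattern.sub(\" \", sentence)\n    return \" \".join(sentence.split())\n\nprint(filter_irregular_text(\"See the C-value method (Frantzi et al., 2000) at http://opennlp.sourceforge.net/\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Linguistic pre-processing",

"sec_num": "2.1"

},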
|
{ |
|
"text": "Document structuring is based on identified recurrent patterns, such as common section titles and legend indicators (e.g., \"Abstract\", \" Table. ..\"), section headers numbering and preserved formatting, such as newline characters. Thus, the document sections that the system may recognise are: Title, Abstract, Introduction, Conclusion, Acknowledgements, References, Header (for any other section headers and legends) and Main (for any other document section text).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 143, |
|
"text": "Table.", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic document structuring", |
|
"sec_num": "2.2" |
|
}, |
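
{

"text": "As an illustration only (the exact header cues are not published; those below are assumptions), a minimal Python sketch of this kind of recurrent-pattern section detection could map header lines onto the labels listed above, with all other lines keeping the label of the section they fall under:\n\nimport re\n\n# Hypothetical header cues; the real system also exploits section numbering\n# and preserved formatting such as newline characters.\nSECTION_CUES = [\n    (\"Abstract\", re.compile(r\"^\\s*abstract\\b\", re.I)),\n    (\"Introduction\", re.compile(r\"^\\s*(\\d+\\.?\\s*)?introduction\\b\", re.I)),\n    (\"Conclusion\", re.compile(r\"^\\s*(\\d+\\.?\\s*)?conclusions?\\b\", re.I)),\n    (\"Acknowledgements\", re.compile(r\"^\\s*acknowledge?ments?\\b\", re.I)),\n    (\"References\", re.compile(r\"^\\s*(references|bibliography)\\b\", re.I)),\n    (\"Header\", re.compile(r\"^\\s*(table|figure)\\s*\\d*[.:]\", re.I)),\n]\n\ndef label_lines(lines):\n    \"\"\"Assign a coarse section label to every line; the first line is the Title\n    and lines matching no cue keep the current label (Main by default).\"\"\"\n    current = \"Main\"\n    labelled = []\n    for i, line in enumerate(lines):\n        for label, cue in SECTION_CUES:\n            if cue.match(line):\n                current = label\n                break\n        labelled.append((\"Title\" if i == 0 and line.strip() else current, line))\n    return labelled\n\ndoc = [\"A Study of Term Extraction\", \"Abstract\", \"We present ...\", \"1 Introduction\", \"Terms are ...\"]\nfor label, line in label_lines(doc):\n    print(label, \"|\", line)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Basic document structuring",

"sec_num": "2.2"

},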
|
{ |
|
"text": "The UvT system considers as candidate keyphrases, those multi-word noun phrases conforming to pre-defined morphosyntactic rule patterns. In particular, the patterns considered are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule pattern filtering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "M + N M C M N M + N C N N P M * N N P M * N C N N C N P M * N M C M N M + N C N", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule pattern filtering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where M is a modifier, such as an adjective, a noun, a present or past participle, or a proper noun including a possessive ending, N is a noun, P a preposition and C a conjunction. For every sentence input, the matching process is exhaustive: after the longest valid match is identified, the rules 1 http://opennlp.sourceforge.net/ are re-applied, so as to identify all possible shorter valid matches for nested noun phrases. At this stage, the rules also allow for inclusion of potential abbreviations and acronyms in the identified noun phrase of the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule pattern filtering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "M + (A) N M + N (A)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule pattern filtering", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where (A) is a potential acronym appearing as a single token in uppercase, enclosed by parentheses and tagged as a proper noun.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule pattern filtering", |
|
"sec_num": "2.3" |
|
}, |
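
{

"text": "The rule patterns above can be applied by mapping POS tags onto the M/N/P/C symbols and matching a regular expression over the resulting symbol string. The sketch below is an illustrative assumption (Penn Treebank tags, a single pattern M+ N, brute-force enumeration of longest and nested spans) rather than the system's actual matcher; note that nouns themselves count as modifiers under the paper's definition of M.\n\nimport re\n\n# Assumed mapping from Penn Treebank tags to the pattern symbols;\n# the paper does not list the exact tag set it uses.\nTAG_TO_SYMBOL = {\n    \"JJ\": \"M\", \"VBG\": \"M\", \"VBN\": \"M\", \"NNP\": \"M\", \"POS\": \"M\",\n    \"NN\": \"N\", \"NNS\": \"N\",\n    \"IN\": \"P\",\n    \"CC\": \"C\",\n}\n\n# The pattern M+ N over the symbol string; N is also allowed in the modifier\n# slot because the paper counts nouns among the modifiers M.\nPATTERN = re.compile(r\"[MN]+N\")\n\ndef candidate_phrases(tagged_sentence):\n    \"\"\"Return all spans (longest and shorter nested ones) whose symbol\n    sequence matches the M+ N pattern.\"\"\"\n    symbols = \"\".join(TAG_TO_SYMBOL.get(tag, \"x\") for _, tag in tagged_sentence)\n    spans = set()\n    for start in range(len(symbols)):\n        for end in range(len(symbols), start + 1, -1):  # longest matches first\n            if PATTERN.fullmatch(symbols[start:end]):\n                spans.add((start, end))\n    return [\" \".join(tok for tok, _ in tagged_sentence[s:e]) for s, e in sorted(spans)]\n\nsent = [(\"automatic\", \"JJ\"), (\"term\", \"NN\"), (\"recognition\", \"NN\"), (\"of\", \"IN\"), (\"keyphrases\", \"NNS\")]\nprint(candidate_phrases(sent))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Rule pattern filtering",

"sec_num": "2.3"

},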
|
{ |
|
"text": "In this processing stage, the objective is the recognition and reduction of variation phenomena which, if left untreated, will affect the Cvalue statistical measures at the keyphrase selection stage. Variation is a pervasive phenomenon in terminology and is generally defined as the alteration of the surface form of a terminological concept (Jacquemin, 2001) . In our approach, we attempt to address morphological variation, i.e., variation due to morphological affixes and orthographic variation, such as hyphenated vs. nonhyphenated compound phrases and abbreviated phrase forms vs. full noun phrase forms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 359, |
|
"text": "(Jacquemin, 2001)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text normalisation", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "In order to reduce morphological variation, UvT system uses the J.Renie interface 2 to WordNet lexicon 3 to acquire lemmas for the respective candidate phrases. Orthographic variation phenomena are treated by rule matching techniques. In this process, for every candidate keyphrase matching a rule, the respective string alternations are generated and added as variant phrases. For example, for patterns including acronyms and the respective full form, alternative variant phrases generated may contain either the full form only, or the acronym replacing its respective full form. Similarly, for hyphenated words, non-hyphenated forms are generated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Text normalisation", |
|
"sec_num": "2.4" |
|
}, |
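
{

"text": "To make the variant handling concrete, the sketch below (an illustration, not the UvT code) generates the orthographic variants described above: hyphenated versus non-hyphenated compound forms and, for a candidate matched together with a parenthesised acronym, the full form alone and the acronym on its own. Lemmatisation itself would be delegated to a WordNet lemmatizer (the J. Rennie interface in the original system) and is omitted here.\n\ndef hyphen_variants(phrase):\n    \"\"\"Hyphenated vs. non-hyphenated surface forms of a compound phrase.\"\"\"\n    variants = {phrase}\n    if \"-\" in phrase:\n        variants.add(phrase.replace(\"-\", \" \"))  # spaced compound\n        variants.add(phrase.replace(\"-\", \"\"))   # fused compound\n    return variants\n\ndef acronym_variants(full_form, acronym):\n    \"\"\"Variants for a candidate of the form 'full form (ACRONYM)':\n    the original match, the full form alone, and the acronym alone.\"\"\"\n    return {full_form + \" (\" + acronym + \")\", full_form, acronym}\n\nprint(hyphen_variants(\"resource-management system\"))\nprint(acronym_variants(\"automatic term recognition\", \"ATR\"))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Text normalisation",

"sec_num": "2.4"

},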
|
{ |
|
"text": "The statistical measure used for keyphrase ranking and selection is the C-value measure (Frantzi et al., 2000) . C-value was originally proposed for defining potential terminological phrases and is based on normalising frequency of occurrence measures ", |
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 110, |
|
"text": "(Frantzi et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C-value measure", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "C-value = \uf8f1 \uf8f2 \uf8f3 log 2 |a|f (a) log 2 |a|(f (a) \u2212 1 P (Ta) b\u2208Ta f (b))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C-value measure", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "In the above, the first C-value measurement is for non-nested terms and the second for nested terms, where a denotes the word sequence that is proposed as a term, |a| is the length of this term in words, f (a) is the frequency of occurrence of this term in the corpus, both as an independent term and as a nested term within larger terms, and P (T a ) denotes the probability of a term string occurring as nested term.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C-value measure", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "In this processing stage of keyphrase selection, we start by measuring frequency of occurrence for all our candidate phrases, taking into consideration phrase variants, as identified in the Text normalisation stage. Then, we proceed by calculating nested phrases frequences and, finally, we estimate C-value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C-value measure", |
|
"sec_num": "2.5" |
|
}, |
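
{

"text": "A minimal sketch of this computation is given below; it assumes the candidate frequencies (counting occurrences both as independent phrases and nested within longer candidates) have already been collected, and it takes P(T_a) as the number of longer candidates containing a. It illustrates the measure itself, not the UvT implementation.\n\nimport math\nfrom collections import defaultdict\n\ndef c_values(freq):\n    \"\"\"Compute C-value for each candidate phrase.\n\n    freq maps a candidate (tuple of words) to its corpus frequency, counting\n    occurrences both as an independent phrase and nested in longer candidates.\n    \"\"\"\n    nested_freq = defaultdict(int)   # sum of f(b) over longer candidates b containing a\n    nested_count = defaultdict(int)  # number of such candidates, |T_a|\n    candidates = list(freq)\n    for a in candidates:\n        for b in candidates:\n            if len(b) > len(a) and any(b[i:i + len(a)] == a for i in range(len(b) - len(a) + 1)):\n                nested_freq[a] += freq[b]\n                nested_count[a] += 1\n    scores = {}\n    for a in candidates:\n        weight = math.log2(len(a))  # |a| >= 2 for the multi-word candidates considered here\n        if nested_count[a] == 0:\n            scores[a] = weight * freq[a]\n        else:\n            scores[a] = weight * (freq[a] - nested_freq[a] / nested_count[a])\n    return scores\n\nfreq = {\n    (\"term\", \"extraction\"): 8,\n    (\"automatic\", \"term\", \"extraction\"): 3,\n    (\"keyphrase\", \"extraction\"): 5,\n}\nfor phrase, score in sorted(c_values(freq).items(), key=lambda kv: -kv[1]):\n    print(\" \".join(phrase), round(score, 2))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "C-value measure",

"sec_num": "2.5"

},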
|
{ |
|
"text": "The result of this process is a list of proposed keyphrases, ranked by decreasing C-value mea-sure, wherefrom the top 15 were selected for the evaluation of the system results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C-value measure", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "The overall official results of the UvT system are shown in Table 1 , where P , R and F correspond to micro-averaged precision, recall and F-score for the respective sets of candidate keyphrases, based on reader-assigned and combined authorand reader-assigned gold standards. Table 1 also illustrates the reported performance of the task baseline systems (i.e., TF\u2022IDF, Naive Bayes (NB) and maximum entropy (ME) 4 ) and the UvT system performance variance based on document section candidates (-A: Abstract, -I: Introduction, -M: Main, -IC: Introduction and Conclusion combination). In these system variants, rather than selecting the top 15 C-value candidates from the system output, we also apply restrictions based on the candidate keyphrase document section information, thus skipping candidates which do not appear in the respective document section.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 283, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Overall, the UvT system performance is close to the baseline systems results. We observe that the system exhibits higher performance for its top 5 candidate set and this performance drops rapidly as we include more terms in the answer set. One possible reason for its average performance could be attributed to increased \"noise\" in the results set. In particular, our text filtering method failed to accurately remove a large amount of irregular text in form of mathematical formulas and symbols which were erroneously tagged as proper nouns. As indicated in Table 1 , the improved results of system variants based on document sections, such as Abstract, Introduction and Conclusion, where these symbols and formulas are rather uncommon, could be partly attributed to \"noise\" reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 559, |
|
"end": 566, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Interestingly, the best system performance in these document section results is demonstrated by the Introduction-Conclusion combination (UvT-IC). Other tested combinations (not illustrated in Table 1 ), such as abstractintro, abstract-intro-conclusions, abstract-introconclusions-references, display similar results on the reader-assigned set and a performance ranging between 15,6-16% for the 15 candidates on the combined set, while the inclusion of the Main section candidates reduces the performance to the overall system output (i.e., UvT results). Further experiments are required for refining the criteria for document section information, when the text filtering process for \"noise\" is improved.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 199, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Finally, another reason that contributes to the system's average performance lies in its inherent limitation for the detection of multi-word phrases, rather than both single and multi-word. In particular, single word keyphrases account for approx. 20% of the correct keyphrases in the gold standard sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We have presented an approach to keyphrase extraction mainly based on adaptation and implementation of the C-value method. This method was originally proposed for the detection of terminological phrases and although domain terms may express the principal informational content of a scientific article document, a method designed for their exhaustive identification (including both nested and longer multi-word terms) has not been proven more effective than baseline methods in the keyphrase detection task. Potential improvements in performance could be investigated by (1) improving document structure detection, so as to reduce irregular text, (2) refinement of docu-ment section information in keyphrase selection, (3) adaptation of the C-value measure, so as to possibly combine keyphrase frequency with a discriminative measure, such as idf .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "http://www.ai.mit.edu/ jrennie/WordNet/ 3 http://wordnet.princeton.edu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The reported performance of both NB and ME for the respective gold-standard sets in the Keyphrase Extraction Task is identical.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PeterTurney. 2003. Coherent keyphrase extraction via web mining. In IJCAI'03: Proceedings of the 18th international joint conference on Artificial intelligence, pages 434-439, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Using noun phrase heads to extract document keyphrases", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Barker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadia", |
|
"middle": [], |
|
"last": "Cornacchia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Proceedings of the 13th Biennial Conference of the Canadian Society on Computational Studies of In- telligence: Advances in Artificial Intelligence, pages 40-52, Montreal, Canada, May.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A keyphrase-based approach to summarization: the LAKE system", |
|
"authors": [ |
|
{ |
|
"first": "D'avanzo", |
|
"middle": [], |
|
"last": "Ernesto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernado", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Document Understanding Conferences", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernesto D'Avanzo and Bernado Magnini. 2005. A keyphrase-based approach to summarization: the LAKE system. In Proceedings of Document Under- standing Conferences, pages 6-8, Vancouver, BC, Canada, October 9-10.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic recognition of multiword terms: The C-Value/NC-value Method", |
|
"authors": [ |
|
{ |
|
"first": "Katerina", |
|
"middle": [], |
|
"last": "Frantzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Mima", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Intern. Journal of Digital Libraries", |
|
"volume": "3", |
|
"issue": "2", |
|
"pages": "117--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katerina Frantzi, Sophia Ananiadou, and Hideki Mima. 2000. Automatic recognition of multi- word terms: The C-Value/NC-value Method. Intern. Journal of Digital Libraries, 3(2):117-132.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Kea: Practical automatic keyphrase extraction", |
|
"authors": [ |
|
{

"first": "Ian",

"middle": [

"H"

],

"last": "Witten",

"suffix": ""

},

{

"first": "Gordon",

"middle": [

"W"

],

"last": "Paynter",

"suffix": ""

},

{

"first": "Eibe",

"middle": [],

"last": "Frank",

"suffix": ""

},

{

"first": "Carl",

"middle": [],

"last": "Gutwin",

"suffix": ""

},

{

"first": "Craig",

"middle": [

"G"

],

"last": "Nevill-Manning",

"suffix": ""

}
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Fourth ACM conference on Digital Libraries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Witten Gordon, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-manning. 1999. Kea: Practical automatic keyphrase extraction. In Proceedings of the Fourth ACM conference on Dig- ital Libraries, pages 254-256, Berkeley, CA, USA, August 11-14. ACM Press.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Spotting and Discovering Terms through Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Jacquemin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Jacquemin. 2001. Spotting and Discovering Terms through Natural Language Processing. MIT Press, Cambridge, MA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Thesaurus based automatic keyphrase indexing", |
|
"authors": [ |
|
{ |
|
"first": "Olena", |
|
"middle": [], |
|
"last": "Medelyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "JCDL '06: Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "296--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olena Medelyan and Ian H. Witten. 2006. Thesaurus based automatic keyphrase indexing. In JCDL '06: Proceedings of the 6th ACM/IEEE-CS joint confer- ence on Digital libraries, pages 296-297, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A maximum entropy model for part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Eric Brill and Kenneth Church, editors, Proceedings of the Empiri- cal Methods in Natural Language Processing, pages 133-142.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A language model approach to keyphrase extraction", |
|
"authors": [ |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Tomokiyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Hurst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the ACL 2003 workshop on Multiword expressions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takashi Tomokiyo and Matthew Hurst. 2003. A lan- guage model approach to keyphrase extraction. In Proceedings of the ACL 2003 workshop on Mul- tiword expressions, pages 33-40, Morristown, NJ, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning to extract keyphrases from text", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Turney. 1999. Learning to extract keyphrases from text. Technical Report ERB-1057, National Research Council, Institute for Information Technol- ogy, February 17.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "39% 10.44% 13.90% 11.54% 12.61% 11.60% 14.45% 12.87% NB & ME 16.80% 6.98% 9.86% 13.30% 11.05% 12.07% 11.40% 14.20% 12.65% UvT 20.40% 8.47% 11.97% 15.60% 12.96% 14.16% 11.93% 14.87% 13.24%", |
|
"content": "<table><tr><td/><td/><td colspan=\"5\">Performance over Reader-Assigned Keywords</td><td/><td/></tr><tr><td>System</td><td colspan=\"2\">top 5 candidates</td><td/><td/><td>top 10 candidates</td><td/><td/><td>top 15 candidates</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"10\">TF\u2022IDF 7.UvT -A 17.80% 23.60% 9.80% 13.85% 16.10% 13.37% 14.61% 12.00% 14.95% 13.31%</td></tr><tr><td>UvT -I</td><td>21.20%</td><td colspan=\"8\">8.80% 12.44% 14.50% 12.04% 13.16% 12.00% 14.95% 13.31%</td></tr><tr><td>UvT -M</td><td>20.40%</td><td colspan=\"8\">8.47% 11.97% 15.10% 12.54% 13.70% 11.40% 14.20% 12.65%</td></tr><tr><td>UvT -IC</td><td>23.20%</td><td colspan=\"8\">9.63% 13.61% 16.00% 13.29% 14.52% 13.07% 16.28% 14.50%</td></tr><tr><td/><td/><td colspan=\"5\">Performance over Combined Keywords</td><td/><td/></tr><tr><td>System</td><td colspan=\"2\">top 5 candidates</td><td/><td/><td>top 10 candidates</td><td/><td/><td>top 15 candidates</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td>TF\u2022IDF</td><td>22.00%</td><td colspan=\"8\">7.50% 11.19% 17.70% 12.07% 14.35% 14.93% 15.28% 15.10%</td></tr><tr><td colspan=\"2\">NB & ME 21.40%</td><td colspan=\"8\">7.30% 10.89% 17.30% 11.80% 14.03% 14.53% 14.87% 14.70%</td></tr><tr><td>UvT</td><td colspan=\"9\">24.80% 8.46% 12.62% 18.60% 12.69% 15.09% 14.60% 14.94% 14.77%</td></tr><tr><td>UvT -A</td><td>28.80%</td><td colspan=\"8\">9.82% 14.65% 19.60% 13.37% 15.90% 14.67% 15.01% 14.84%</td></tr><tr><td>UvT -I</td><td>26.40%</td><td colspan=\"8\">9.00% 13.42% 17.80% 12.14% 14.44% 14.73% 15.08% 14.90%</td></tr><tr><td>UvT -M</td><td>24.80%</td><td colspan=\"8\">8.46% 12.62% 17.90% 12.21% 14.52% 14.07% 14.39% 14.23%</td></tr><tr><td>UvT -IC</td><td>28.60%</td><td colspan=\"8\">9.75% 14.54% 19.70% 13.44% 15.98% 16.13% 16.51% 16.32%</td></tr><tr><td colspan=\"9\">Table 1: UvT, UvT variants and baseline systems performance on the Keyphrase Extraction Task</td></tr><tr><td colspan=\"5\">by taking into consideration the candidate multi-</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">word phrase constituent length and terms appear-</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">ing as nested within longer terms. In particu-</td><td/><td/><td/><td/></tr><tr><td colspan=\"5\">lar, depending on whether a candidate multi-word</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">phrase is nested or not, C-value is defined as:</td><td/><td/><td/><td/><td/></tr></table>" |
|
} |
|
} |
|
} |
|
} |