|
{ |
|
"paper_id": "S10-1026", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:47.643689Z" |
|
}, |
|
"title": "COLEUR and COLSLM: A WSD approach to Multilingual Lexical Substitution, Tasks 2 and 3 SemEval 2010", |
|
"authors": [ |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we present a word sense disambiguation (WSD) based system for multilingual lexical substitution. Our method depends on having a WSD system for English and an automatic word alignment method. Crucially the approach relies on having parallel corpora. For Task 2 (Sinha et al., 2009) we apply a supervised WSD system to derive the English word senses. For Task 3 (Lefever & Hoste, 2009), we apply an unsupervised approach to the training and test data. Both of our systems that participated in Task 2 achieve a decent ranking among the participating systems. For Task 3 we achieve the highest ranking on several of the language pairs: French, German and Italian.", |
|
"pdf_parse": { |
|
"paper_id": "S10-1026", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we present a word sense disambiguation (WSD) based system for multilingual lexical substitution. Our method depends on having a WSD system for English and an automatic word alignment method. Crucially the approach relies on having parallel corpora. For Task 2 (Sinha et al., 2009) we apply a supervised WSD system to derive the English word senses. For Task 3 (Lefever & Hoste, 2009), we apply an unsupervised approach to the training and test data. Both of our systems that participated in Task 2 achieve a decent ranking among the participating systems. For Task 3 we achieve the highest ranking on several of the language pairs: French, German and Italian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper, we present our system that was applied to the cross lingual substitution for two tasks in SEMEVAL 2010, Tasks 2 and 3. We adopt the same approach for both tasks with some differences in the basic set-up. Our basic approach relies on applying a word sense disambiguation (WSD) system to the English data that comes from a parallel corpus for English and a language of relevance to the task, language 2 (l2). Then we automatically induce the English word sense correspondences to l2. Accordingly, for a given test target word, we return its equivalent l2 words assuming that we are able to disambiguate the target word in context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We approach the problem of multilingual lexical substitution from a WSD perspective. We adopt the hypothesis that the different word senses of ambiguous words in one language probably translate to different lexical items in another language. Hence, our approach relies on two crucial components: a WSD module for the source language (our target test words, in our case these are the English target test words) and an automatic word alignment module to discover the target word sense correspondences with the foreign words in a second language. Our approach to both tasks is unsupervised since we don't have real training data annotated with the target words and their corresponding translations into l2 at the onset of the problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Accordingly, at training time, we rely on automatically tagging large amounts of English data (target word instances) with their relevant senses and finding their l2 correspondences based on automatically induced word alignments. Each of these English sense and l2 correspondence pairs has an associated translation probability value depending on frequency of co-occurrence. This information is aggregated in a look-up table over the entire training set. An entry in the table would have a target word sense type paired with all the observed translation correspondences l2 word types. Each of the l2 word types has a probability of translation that is calculated as a normalized weighted average of all the instances of this l2 word type with the English sense aggregated across the whole parallel corpus. This process results in an English word sense translation table (WSTT). The word senses are derived from Word-Net (Fellbaum, 1998) . We expand the English word sense entry correspondences by adding the translations of the members of target word sense synonym set as listed in WordNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 920, |
|
"end": 936, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
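Below is a minimal sketch of how such a word sense translation table could be assembled, assuming the aligned training instances have already been reduced to (English sense key, l2 word) pairs and that the sense keys are WordNet sense keys. The function names, the data layout, and the simple count-based normalization (in place of the weighted average described above) are our own illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
from nltk.corpus import wordnet as wn

def build_wstt(aligned_instances):
    """aligned_instances: iterable of (english_sense_key, l2_word) pairs,
    one pair per aligned occurrence of a target word in the parallel corpus."""
    counts = defaultdict(lambda: defaultdict(float))
    for sense, l2_word in aligned_instances:
        counts[sense][l2_word] += 1.0
    # Normalize co-occurrence counts into per-sense translation probabilities
    # (a simplification of the weighted average described in the paper).
    wstt = {}
    for sense, l2_counts in counts.items():
        total = sum(l2_counts.values())
        wstt[sense] = {w: c / total for w, c in l2_counts.items()}
    return wstt

def expand_with_synset(wstt):
    """Add the l2 correspondences of the other members of each sense's WordNet
    synset, assuming the sense keys are WordNet sense keys."""
    expanded = {}
    for sense, translations in wstt.items():
        merged = dict(translations)
        for lemma in wn.lemma_from_key(sense).synset().lemmas():
            # Merge in the translations observed for the other synset members.
            for w, p in wstt.get(lemma.key(), {}).items():
                merged[w] = max(merged.get(w, 0.0), p)
        total = sum(merged.values())
        expanded[sense] = {w: p / total for w, p in merged.items()}
    return expanded
```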
|
{ |
|
"text": "For alignment, we specifically use the GIZA++ software for inducing word alignments across the parallel corpora (Och & Ney, 2003) . We apply GIZA++ to the parallel corpus in both directions English to l2 and l2 to English then take only the intersection of the two alignment sets, hence fo-cusing more on precision of alignment rather than recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 129, |
|
"text": "(Och & Ney, 2003)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
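As an illustration of the intersection step, the toy sketch below assumes the two GIZA++ runs have already been parsed into sets of word-index links; the pair representation and the function name are ours, not part of the GIZA++ toolkit.

```python
def intersect_alignments(e2f, f2e):
    """e2f: set of (en_idx, l2_idx) links from the English-to-l2 run.
    f2e: set of (l2_idx, en_idx) links from the l2-to-English run.
    Returns only the links present in both directions (high precision)."""
    f2e_flipped = {(en, l2) for (l2, en) in f2e}
    return e2f & f2e_flipped

# Example for one sentence pair:
e2f = {(0, 0), (1, 2), (2, 1)}
f2e = {(0, 0), (2, 1), (3, 4)}
print(intersect_alignments(e2f, f2e))  # {(0, 0), (1, 2)}
```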
|
{ |
|
"text": "For each language in Task 3 and Task 2, we use TreeTagger 1 to do the preprocessing for all languages. The preprocessing includes segmentation, POS tagging and lemmatization. Since Tree-Tagger is independent of languages, our system does not rely on anything that is language specific; our system can be easily applied to other languages. We run GIZA++ on the parallel corpus, and obtain the intersection of the alignments in both directions. Meanwhile, every time a target English word appears in a sentence, we apply our WSD system on it, using the sentence as context. From this information, we build a WSST from the English sense(s) to their corresponding foreign words. Moreover, we use WordNet as a means of augmenting the translation correspondences. We expand the word sense to its synset from WordNet adding the l2 words that corresponded to all the member senses in the synset yielding more translation variability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "At test time, given a test data target word, we apply the same WSD system that is applied to the training corpus to create the WSTT. Once the target word instance is disambiguated in context, we look up the corresponding entry in the WSTT and return the ranked list of l2 correspondences. We present results for best and for oot which vary only in the cut off threshold. In the BEST condition we return the highest ranked candidate, in the oot condition we return the top 10 (where available). 2 Given the above mentioned pipeline, Tasks 2 and 3 are very similar. Their main difference lies in the underlying WSD system applied.", |
|
"cite_spans": [ |
|
{ |
|
"start": 494, |
|
"end": 495, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
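A compact sketch of this test-time lookup follows, under the assumption that the WSTT is a dictionary from sense keys to candidate-probability maps and that disambiguate() stands in for whichever WSD module is used (supervised for Task 2, unsupervised for Task 3); the names are illustrative.

```python
def substitute(target_word, context, wstt, disambiguate, oot=False):
    sense = disambiguate(target_word, context)    # WSD of the target in context
    candidates = wstt.get(sense, {})              # l2 word -> translation probability
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    if oot:
        return ranked[:10]   # oot: up to 10 candidates, where available
    return ranked[:1]        # best: single highest-ranked candidate
```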
|
{ |
|
"text": "3 Task 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Detailed Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use a relatively simple monolingual supervised WSD system to create the sense tags on the English data. We use the SemCor word sense annotated corpus. SemCor is a subset of the Brown Corpus. For each of our target English words found disambiguated in the SemCor corpus, we create a sense profile for each of its senses. A sense profile is a vector of all the content words that occur in the context of this sense in the Sem-Cor corpus. The dimensions of the vector are word 1 http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/ 2 Some of the target word senses had less than 10 l2 word correspondences. types, as in a bag of words model, and the vector entries are the co-occurrence frequency of the word sense and the word type. At test time, given a a target English word, we create a bag of word types contextual vector for each instance of the word using the surrounding context. We compare the created test vector to the SemCor vectors and choose the highest most similar sense and use that for sense disambiguation. In case of ties, we return more than one sense tag.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Details", |
|
"sec_num": "3.1" |
|
}, |
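The following is a minimal sketch of these sense profiles and the vector comparison. The paper does not name the exact similarity measure, so the use of cosine similarity here, like the function names and data layout, is our assumption.

```python
import math
from collections import Counter, defaultdict

def build_sense_profiles(semcor_instances):
    """semcor_instances: iterable of (sense_tag, context_content_words)."""
    profiles = defaultdict(Counter)
    for sense, context_words in semcor_instances:
        profiles[sense].update(context_words)     # co-occurrence frequencies
    return profiles

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def disambiguate(context_words, sense_profiles):
    test_vec = Counter(context_words)
    scores = {s: cosine(test_vec, prof) for s, prof in sense_profiles.items()}
    best = max(scores.values())
    # Ties return more than one sense tag, as in the paper.
    return [s for s, sc in scores.items() if sc == best]
```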
|
{ |
|
"text": "We use both naturally occurring parallel data and machine translation data. The data for our first Task 2 submission, T2-COLEUR, comprises naturally occurring parallel data, namely, the Spanish English portion of the EuroParl data provided by Task 3 organizers. For the machine translation data, we use translations of the source English data pertaining to the following corpora: the Brown corpus, WSJ, SensEval1, SensEval2 datasets as translated by two machine translation systems: Global Link (GL), Systran (SYS) (Guo & Diab, 2010) . We refer to the translated corpus as the SALAAM corpus. The intuition for creating SALAAM (an artificial parallel corpus) is to create a balanced translation corpus that is less domain and genre skewed than the EuroParl data. This latter corpus results in our 2nd system for this task T2-COLSLM. Table 1 presents our overall results as evaluated by the organizers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 533, |
|
"text": "(Guo & Diab, 2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 832, |
|
"end": 839, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It is clear that the T2-COLSLM outperforms T2-COLEUR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Contrary to Task 2, we apply a context based unsupervised WSD module to the English side of the parallel data. Our unsupervised WSD method, as described in (Guo & Diab, 2009) , is a graph based unsupervised WSD method. Given a sequence of words W = {w 1 , w 2 ...w n }, each word w i with several senses {s i1 , s i2 ...s im }. A graph G = (V,E) is defined such that there exists a vertex v for each sense. Two senses of two different words may be connected by an edge e, depending on their distance. That two senses are connected suggests they should have influence on each other, accordingly a maximum allowable distance is set. They explore 4 different graph based algorithms.We focus on the In-Degree graph based algorithm. The In-Degree algorithm presents the problem as a weighted graph with senses as nodes and similarity between senses as weights on edges. The In-Degree of a vertex refers to the number of edges incident on that vertex. In the weighted graph, the In-Degree for each vertex is calculated by summing the weights on the edges that are incident on it. After all the In-Degree values for each sense are computed, the sense with maximum value is chosen as the final sense for that word. In our implementation of the In-Degree algorithm, we use the JCN similarity measure for both Noun-Noun and Verb-Verb similarity calculation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 174, |
|
"text": "(Guo & Diab, 2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 3 4.1 System Details", |
|
"sec_num": "4" |
|
}, |
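A minimal sketch of the weighted In-Degree computation described above; the data layout, the windowing by a maximum distance, and the generic similarity callback (which would be JCN for noun-noun and verb-verb pairs in the authors' setup) are illustrative assumptions.

```python
from collections import defaultdict

def in_degree_wsd(sentence_senses, similarity, max_distance=3):
    """sentence_senses: list (one item per word) of lists of candidate senses.
    similarity: function mapping (sense_a, sense_b) to a non-negative weight.
    Returns the chosen sense (or None) for each word position."""
    in_degree = defaultdict(float)
    n = len(sentence_senses)
    for i in range(n):
        # Only connect senses of words within the maximum allowable distance.
        for j in range(i + 1, min(i + 1 + max_distance, n)):
            for s_i in sentence_senses[i]:
                for s_j in sentence_senses[j]:
                    w = similarity(s_i, s_j)
                    # Each weighted edge contributes to both endpoints' in-degree.
                    in_degree[(i, s_i)] += w
                    in_degree[(j, s_j)] += w

    chosen = []
    for i, senses in enumerate(sentence_senses):
        if not senses:
            chosen.append(None)
            continue
        # Pick the sense with the maximum weighted in-degree for this word.
        chosen.append(max(senses, key=lambda s: in_degree[(i, s)]))
    return chosen
```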
|
{ |
|
"text": "We use the training data from EuroParl provided by the task organizers for the 5 different language pairs. We participate in all the language competitions. We refer to our system as T3-COLEUR. Table 2 shows our system results on Task 3, specified by languages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 200, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As shown in Table 2 , our system T3-COLEUR ranks the highest for the French, German and Italian language tasks on both best and oot. However the overall F-measures are very low. Our system ranks last for Dutch among 3 systems and it is middle of the pack for the Spanish language task. In general we note that the results for oot are naturally higher than for BEST since by design it is a more relaxed measure.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis and Discussion", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Our work mainly investigates the influence of WSD on providing machine translation candidates. Carpuat & Wu 2007and Chan et al.(2007) show WSD improves MT. However, in (Carpuat & Wu, 2007) classical WSD is missing by ignoring predefined senses. They treat translation candidates as sense labels, then find linguistic features in the English side, and cast the disambiguation process as a classification problem. Of relevance also to our work is that related to the task of English monolingual lexical substitution. For example some of the approaches that participated in the SemEval 2007 excercise include the following. Yuret (2007) used a statistical language model based on a large corpus to assign likelihoods to each candidate substitutes for a target word in a sentence. Martinez et al. (2007) uses WordNet to find candidate substitutes, produce word sequence including substitutes. They rank the substitutes by ranking the word sequence including that substitutes using web queries. In (Giuliano C. et al., 2007) , they extract synonyms from dictionaries. They have 2 ways of ranking of the synonyms: by similarity metric based on LSA and by occurrence in a large 5-gram web corpus. Dahl et al. (2007) also extract synonyms from dictionaries. They present two systems. The first one scores substitutes based on how frequently the local context match the target word. The second one incorporates cosine similarity. Finally, Hassan et al. (2007) extract candidates from several linguistic resources, and combine many techniques and evidences to compute the scores such as machine translation, most common sense, language model and so on to pick the most suitable lexical substitution candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 133, |
|
"text": "Chan et al.(2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 188, |
|
"text": "MT. However, in (Carpuat & Wu, 2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 633, |
|
"text": "Yuret (2007)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 777, |
|
"end": 799, |
|
"text": "Martinez et al. (2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 993, |
|
"end": 1019, |
|
"text": "(Giuliano C. et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1190, |
|
"end": 1208, |
|
"text": "Dahl et al. (2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1421, |
|
"end": 1450, |
|
"text": "Finally, Hassan et al. (2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related works", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we presented a word sense disambiguation based system for multilingual lexical substitution. The approach relies on having a WSD system for English and an automatic word alignment method. Crucially the approach relies on having parallel corpora. For Task 2 we apply a supervised WSD system to derive the English word senses. For Task 3, we apply an unsupervised approach to the training and test data. Both of our systems that participated in Task 2 achieve a decent ranking among the participating systems. For Task 3 we achieve the highest ranking on several of the language pairs: French, German and Italian.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Directions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the future, we would like to investigate the usage of the Spanish and Italian WordNets for the 131 We would like to also expand our examination to other sources of bilingual data such as comparable corpora. Finally, we would like to investigate using unsupervised clustering of senses (Word Sense Induction) methods in lieu of the WSD approaches that rely on WordNet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Directions", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Improving statistical machine translation using word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Carpuat", |
|
"middle": [ |
|
"M &" |
|
], |
|
"last": "Wu D", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--72", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CARPUAT M. & WU D. (2007). Improving statis- tical machine translation using word sense disam- biguation. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), p. 61-72, Prague, Czech Republic: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word sense disambiguation improves statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chiang D", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CHAN Y. S., NG H. T. & CHIANG D. (2007). Word sense disambiguation improves statistical machine translation. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, p. 33-40, Prague, Czech Republic: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "SW-AG: Local Context Matching for English Lexical Substitution", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dahl G", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Frassica A. & Wicentowski R", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DAHL G., FRASSICA A. & WICENTOWSKI R. (2007). SW-AG: Local Context Matching for English Lexi- cal Substitution. In Proceedings of the 4th workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "wordnet: An electronic lexical database", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "FELLBAUM C. (1998). \"wordnet: An electronic lexical database\". MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "FBK-irst: Lexical Substitution Task Exploiting Domain and Syntagmatic Coherence", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Giuliano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gliozzo A", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strapparava C", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GIULIANO C., GLIOZZO A. & STRAPPARAVA C (2007). FBK-irst: Lexical Substitution Task Ex- ploiting Domain and Syntagmatic Coherence. In Proceedings of the 4th workshop on Semantic Eval- uations (SemEval-2007), Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Improvements to monolingual English word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Guo W. & Diab M", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACL Workshop on Semantics Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GUO W. & DIAB M. (2009). \"Improvements to mono- lingual English word sense disambiguation\". In ACL Workshop on Semantics Evaluations.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Combining orthogonal monolingual and multilingual sources of evidence for All Words WSD", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Guo W. & Diab M", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GUO W. & DIAB M. (2010). \"Combining orthogonal monolingual and multilingual sources of evidence for All Words WSD\". In ACL 2010.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "UNT: SubFinder: Combining Knowledge Sources for Automatic Lexical Substitution", |
|
"authors": [ |
|
{ |
|
"first": "Hassan", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Csomai A", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Banea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sinha R. & Mihalcea R", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "HASSAN S., CSOMAI A., BANEA C., SINHA R. & MIHALCEA R. (2007). UNT: SubFinder: Combin- ing Knowledge Sources for Automatic Lexical Sub- stitution. In Proceedings of the 4th workshop on Se- mantic Evaluations (SemEval-2007), Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Word sense disambiguation: The state of the art", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ide N. & V Ronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "IDE N. & V RONIS J. (1998). Word sense disambigua- tion: The state of the art. In Computational Linguis- tics, p. 1-40.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Semantic similarity based on corpus statistics and lexical taxonomy", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jiang J", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Conrath", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the International Conference on Research in Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "JIANG J. & CONRATH. D. (1997). Semantic similar- ity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, Taiwan.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Combining local context and wordnet sense similarity for word sense identification", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Leacock C. & Chodorow M", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "WordNet, An Electronic Lexical Database", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LEACOCK C. & CHODOROW M. (1998). Combining local context and wordnet sense similarity for word sense identification. In WordNet, An Electronic Lex- ical Database: The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "SemEval-2010 Task 3: Cross-lingual Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"&" |
|
], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hoste V", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the NAACL HLT Workshop on Semantic Evaluations: Recent Achievements and Future Directions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LEFEVER C. & HOSTE V. (2009). SemEval-2010 Task 3: Cross-lingual Word Sense Disambiguation. In Proceedings of the NAACL HLT Workshop on Se- mantic Evaluations: Recent Achievements and Fu- ture Directions, Boulder, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lesk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Proceedings of the SIGDOC Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LESK M. (1986). Automatic sense disambiguation us- ing machine readable dictionaries: How to tell a pine cone from an ice cream cone. In In Proceedings of the SIGDOC Conference, Toronto.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Lexical Substitution system based on Relatives in Context In Proceedings of the 4th workshop on Semantic Evaluations", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Melb-Mkb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MELB-MKB: Lexical Substitution system based on Relatives in Context In Proceedings of the 4th workshop on Semantic Evaluations (SemEval- 2007), Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "English tasks: all-words and verb lexical sample", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"C L D" |
|
], |
|
"last": "Fellbaum S", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dang H", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of ACL/SIGLEX Senseval-2", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. PALMER, C. FELLBAUM S. C. L. D. & DANG H. (2001). English tasks: all-words and verb lex- ical sample. In In Proceedings of ACL/SIGLEX Senseval-2, Toulouse, France.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Unsupervised large-vocabulary word sense disambiguation with graph-based algorithms for sequence data labeling", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mihalcea R", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "411--418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MIHALCEA R. (2005). Unsupervised large-vocabulary word sense disambiguation with graph-based algo- rithms for sequence data labeling. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Lan- guage Processing, p. 411-418, Vancouver, British Columbia, Canada: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Wordnet: a lexical database for english", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller G", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Communications of the ACM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MILLER G. A. (1990). Wordnet: a lexical database for english. In Communications of the ACM, p. 39-41.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Word sense disambiguation: a survey", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Navigli R", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM Computing Surveys", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "NAVIGLI R. (2009). Word sense disambiguation: a survey. In ACM Computing Surveys, p. 1-69: ACM Press.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och F", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ney H", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "OCH F. J. & NEY H. (2003). A systematic compari- son of various statistical alignment models. Compu- tational Linguistics, 29(1), 19-51.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Maximizing semantic relatedness to perform word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"&" |
|
], |
|
"last": "Pedersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "University of Minnesota Supercomputing Institute Research Report UMSI 2005/25, Minnesotta", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "PEDERSEN B. & PATWARDHAN (2005). Maximizing semantic relatedness to perform word sense disam- biguation. In University of Minnesota Supercomput- ing Institute Research Report UMSI 2005/25, Min- nesotta.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Semeval-2007 task-17: English lexical sample, srl and all words", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pradhan S", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Loper E", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dligach D. & Palmer M", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "PRADHAN S., LOPER E., DLIGACH D. & PALMER M. (2007). Semeval-2007 task-17: English lexi- cal sample, srl and all words. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), p. 87-92, Prague, Czech Re- public: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Unsupervised graph-based word sense disambiguation using measures of word semantic similarity", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sinha R. & Mihalcea R", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the IEEE International Conference on Semantic Computing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "SINHA R. & MIHALCEA R. (2007). Unsupervised graph-based word sense disambiguation using mea- sures of word semantic similarity. In Proceedings of the IEEE International Conference on Semantic Computing (ICSC 2007), Irvine, CA.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Task 2: Cross-Lingual Lexical Substitution", |
|
"authors": [ |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Semeval", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT Workshop on Semantic Evaluations: Recent Achievements and Future Directions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "SemEval-2010 Task 2: Cross-Lingual Lexical Sub- stitution. In Proceedings of the NAACL HLT Work- shop on Semantic Evaluations: Recent Achieve- ments and Future Directions, Irvine, CA.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Snyder B", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Palmer M", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "SNYDER B. & PALMER M. (2004). The english all- words task. In R. MIHALCEA & P. EDMONDS, Eds., Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, p. 41-43, Barcelona, Spain: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "KU: Word sense disambiguation by substitution", |
|
"authors": [ |
|
{ |
|
"first": "Yuret", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th workshop on Semantic Evaluations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "YURET D. (2007). KU: Word sense disambiguation by substitution. In Proceedings of the 4th workshop on Semantic Evaluations (SemEval-2007), Prague, Czech Republic.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"text": "COLSLM 27.59 25.99 46.61 43.91 T2-COLEUR 19.47 18.15 44.77 41.72 Precision and Recall results per corpus on Task 2 test set", |
|
"html": null, |
|
"content": "<table><tr><td>Corpus</td><td>best</td><td>oot</td><td/></tr><tr><td>P</td><td>R</td><td>P</td><td>R</td></tr><tr><td>T2-</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "Results of T3-COLEUR per language on Task 3 Test set task.", |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |