|
{ |
|
"paper_id": "S10-1027", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:28:02.820179Z" |
|
}, |
|
"title": "UHD: Cross-Lingual Word Sense Disambiguation Using Multilingual Co-occurrence Graphs", |
|
"authors": [ |
|
{ |
|
"first": "Carina", |
|
"middle": [], |
|
"last": "Silberer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Heidelberg University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Heidelberg University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe the University of Heidelberg (UHD) system for the Cross-Lingual Word Sense Disambiguation SemEval-2010 task (CL-WSD). The system performs CL-WSD by applying graph algorithms previously developed for monolingual Word Sense Disambiguation to multilingual co-occurrence graphs. UHD participated in the BEST and out-of-five (OOF) evaluations and ranked among the most competitive systems, indicating that graph-based approaches represent a powerful alternative for this task.",
|
"pdf_parse": { |
|
"paper_id": "S10-1027", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe the University of Heidelberg (UHD) system for the Cross-Lingual Word Sense Disambiguation SemEval-2010 task (CL-WSD). The system performs CL-WSD by applying graph algorithms previously developed for monolingual Word Sense Disambiguation to multilingual co-occurrence graphs. UHD participated in the BEST and out-of-five (OOF) evaluations and ranked among the most competitive systems, indicating that graph-based approaches represent a powerful alternative for this task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This paper describes a graph-based system for Cross-Lingual Word Sense Disambiguation, i.e. the task of disambiguating a word in context by providing its most appropriate translations in different languages (Lefever and Hoste, 2010, CL-WSD henceforth) . Our goal at SemEval-2010 was to assess whether graph-based approaches, which have been successfully developed for monolingual Word Sense Disambiguation, represent a valid framework for CL-WSD. These typically transform a knowledge resource such as WordNet (Fellbaum, 1998) into a graph and apply graph algorithms to perform WSD. In our work, we follow this line of research and apply graph-based methods to multilingual co-occurrence graphs which are automatically created from parallel corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 251, |
|
"text": "(Lefever and Hoste, 2010, CL-WSD henceforth)", |
|
"ref_id": "BIBREF4"
|
}, |
|
{ |
|
"start": 510, |
|
"end": 526, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": "BIBREF2"
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our method is heavily inspired by previous proposals from V\u00e9ronis (2004, Hyperlex) and Agirre et al. (2006) . Hyperlex performs graph-based WSD based on co-occurrence graphs: given a monolingual corpus, for each target word a graph is built where nodes represent content words co-occurring with the target word in context, and edges connect the words which co-occur in these contexts. The second step iteratively selects the node with highest degree in the graph (root hub) and removes it along with its adjacent nodes. Each such selection corresponds to isolating a high-density component of the graph, in order to select a sense of the target word. In the last step the root hubs are linked to the target word and the Minimum Spanning Tree (MST) of the graph is computed to disambiguate the target word in context. Agirre et al. (2006) compare Hyperlex with an alternative method to detect the root hubs based on PageRank (Brin and Page, 1998) . PageRank has the advantage of requiring fewer parameters than Hyperlex, while the authors report equal performance for the two methods.",

"cite_spans": [

{

"start": 58,

"end": 82,

"text": "V\u00e9ronis (2004, Hyperlex)",

"ref_id": "BIBREF10"

},

{

"start": 87,

"end": 107,

"text": "Agirre et al. (2006)",

"ref_id": "BIBREF0"

},

{

"start": 817,

"end": 837,

"text": "Agirre et al. (2006)",

"ref_id": "BIBREF0"

},

{

"start": 924,

"end": 945,
|
"text": "(Brin and Page, 1998)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We start by building for each target word a multilingual co-occurrence graph based on the target word's aligned contexts found in parallel corpora (Sections 3.1 and 3.2). Multilingual nodes are linked by translation edges, labeled with the target word's translations observed in the corresponding contexts. We then use an adapted PageRank algorithm to select the nodes which represent the target word's different senses (Section 3.3) and, given these nodes, we compute the MST, which is used to select the most relevant words in context to disambiguate a given test instance (Section 3.4). Translations are finally given by the incoming translation edges of the selected context words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Graph-based Cross-Lingual WSD", |
|
"sec_num": "3" |
|
}, |
|
{

"text": "Let C s be all contexts of a target word w in a source language s, i.e. English in our case, within a (PoS-tagged and lemmatized) monolingual corpus. We first construct a monolingual co-occurrence graph G s = \u27e8V s , E s \u27e9. We collect all pairs (cw i , cw j ) of co-occurring nouns or adjectives in C s (excluding the target word itself) and add each word as a node into the initially empty graph. Each co-occurring word pair is connected with an edge (v i , v j ) \u2208 E s , which is assigned a weight w(v i , v j ) based on the strength of association between the respective words cw i and cw j : w(v i , v j ) = 1 \u2212 max [p(cw i |cw j ), p(cw j |cw i )]. The conditional probability of word cw i given word cw j is estimated as the number of contexts in which cw i and cw j co-occur divided by the number of contexts containing cw j .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Monolingual Graph",

"sec_num": "3.1"

},
|
{

"text": "Given a set of target languages L, we then extend G s to a labeled multilingual graph G M L = \u27e8V M L , E M L \u27e9 where: 1. V M L = V s \u222a l\u2208L V l is a set of nodes representing content words from either the source (V s ) or the target (V l ) languages; 2. E M L = E s \u222a l\u2208L {E l \u222a E s,l } is a set of edges. These include (a) co-occurrence edges E l \u2286 V l \u00d7 V l between nodes representing words in a target language (V l ), weighted in the same way as the edges in the monolingual graph; (b) labeled translation edges E s,l which represent translations of words from the source language into a target language. These edges are assigned a complex label t \u2208 T w,l comprising a translation of the word w in the target language l and its frequency of translation, i.e. E s,l \u2286 V s \u00d7 T w,l \u00d7 V l .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual Graph",

"sec_num": "3.2"

},
|
{

"text": "The multilingual graph is built based on a word-aligned multilingual parallel corpus and a multilingual dictionary. The pseudocode is presented in Algorithm 1. We start with the monolingual graph from the source language (line 1) and then, for each target language l \u2208 L in turn, we add the translation edges (v s , t, v l ) \u2208 E s,l of each word in the source language (lines 5-15).",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual Graph",

"sec_num": "3.2"

},
|
{

"text": "Algorithm 1 Multilingual co-occurrence graph.\nInput: target word w and its contexts C s ; monolingual graph G s = \u27e8V s , E s \u27e9; set of target languages L\nOutput: a multilingual graph G M L\n1: G M L = \u27e8V M L , E M L \u27e9 \u2190 G s = \u27e8V s , E s \u27e9\n2: for each l \u2208 L\n3:   V l \u2190 \u2205\n4:   C l := aligned sentences of C s in lang. l\n5:   for each v s \u2208 V s\n6:     T vs,l := translations of v s found in C l\n7:     C vs \u2286 C s := contexts containing w and v s\n8:     for each translation v l \u2208 T vs,l\n9:       C v l := aligned sentences of C vs in lang. l\n10:      T w,Cv l \u2190 translation labels of w from C v l\n11:      if v l \u2209 V M L then\n12:        V M L \u2190 V M L \u222a v l\n13:        V l \u2190 V l \u222a v l\n14:      for each t \u2208 T w,Cv l\n15:        E M L \u2190 E M L \u222a (v s , t, v l )\n16:   for each v i \u2208 V l\n17:     for each v j \u2208 V l , i \u2260 j\n18:       if v i and v j co-occur in C l then\n19:         E M L \u2190 E M L \u222a (v i , v j )\n20: return G M L",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual Graph",

"sec_num": "3.2"

},
|
{ |
|
"text": "In order to include the information about the translations of w in the different target languages, each translation edge (v s , t, v l ) receives a translation label t. Formally, let C vs \u2286 C s be the contexts where v s and w co-occur, and C v l the word-aligned contexts in language l of C vs , where v s is translated as v l . Then each edge between nodes v s and v l is labeled with a translation label t (lines 14-15): this includes a translation of w in C v l , its frequency of translation and the information of whether the translation is monosemous, as found in a multilingual dictionary, i.e. EuroWordNet (Vossen, 1998) and PanDictionary (Mausam et al., 2009) . Finally, the multilingual graph is further extended by inserting all possible co-occurrence edges (v i , v j ) \u2208 E l between the nodes for the target language l (lines 16-19, i.e. we apply the step from Section 3.1 to l and C l ). As a result of the algorithm, the multilingual graph is returned (line 20).",

"cite_spans": [

{

"start": 647,

"end": 668,
|
"text": "(Mausam et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Graph", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We compute the root hubs in the multilingual graph to discriminate the senses of the target word in the source language. Hubs are found using the adapted PageRank from Agirre et al. (2006) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 188, |
|
"text": "Agirre et al. (2006)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Root Hubs", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "P R(v i ) = (1 \u2212 d) + d \u2211 j\u2208deg(v i ) [ w ij / \u2211 k\u2208deg(v j ) w jk ] P R(v j )",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Root Hubs", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where d is the so-called damping factor (typically set to 0.85), deg(v i ) is the number of adjacent nodes of node v i and w ij is the weight of the cooccurrence edge between nodes v i and v j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Root Hubs", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "Since this step aims to induce the senses for the target word, only nodes referring to words in English can become root hubs. However, in order to use additional evidence from other languages, we furthermore include in the computation of PageRank co-occurrence edges from the target languages, as long as these occur in contexts with 'safe', i.e. monosemous, translations of the target word. Given an English co-occurrence edge (v s,i , v s,j ) and translation edges (v s,i , v l,i ) and (v s,j , v l,j ) to nodes in the target language l, labeled with monosemous translations, we include the co-occurrence edge (v l,i , v l,j ) in the PageRank computation. For instance, animal and biotechnology are translated in German as Tier and Biotechnologie, both with edges labeled with the monosemous Pflanze: accordingly, we include the edge (Tier, Biotechnologie) in the computation of P R(v i ), where v i is either animal or biotechnology.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Computing Root Hubs",

"sec_num": "3.3"

},
|
{ |
|
"text": "Finally, following V\u00e9ronis (2004) , an MST is built with the target word as its root and the root hubs of G M L forming its first level. By using a multilingual graph, we are able to obtain MSTs which contain translation nodes and edges.",
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 33, |
|
"text": "V\u00e9ronis (2004)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Computing Root Hubs", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Given a context W for the target word w in the source language, we use the MST to find the most relevant words in W for disambiguating w. We first map each content word cw \u2208 W to nodes in the MST. Since each word is dominated by exactly one hub, we can find the relevant nodes by computing the correct hub disHub (i.e. sense) and then only retaining those nodes linked to disHub. Let W h be the set of mapped content words dominated by hub h. Then, disHub can be found as:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Disambiguation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "disHub = argmax h \u2211 cw\u2208W h [ d(cw) / (dist(cw, h) + 1) ]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Disambiguation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where d(cw) is a function which assigns a weight to cw according to its distance to w, i.e. the more words occur between w and cw within W , the smaller the weight, and dist(cw, h) is given by the number of edges between cw and h in the MST. Finally, we collect the translation edges of the retained context nodes W disHub and we sum the translation counts to rank each translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual Disambiguation", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Experimental Setting. We submitted two runs for the task (UHD-1 and UHD-2 henceforth). Since we were interested in assessing the impact of using different resources with our methodology, we automatically built multilingual graphs from different sentence-aligned corpora, i.e. Europarl (Koehn, 2005) for UHD-1, augmented with the JRC-Acquis corpus (Steinberger et al., 2006) for UHD-2 1 . Both corpora were tagged and lemmatized with TreeTagger (Schmid, 1994) and word aligned using GIZA++ (Och and Ney, 2003) . For German, in order to avoid the sparseness deriving from the high productivity of compounds, we performed a morphological analysis using Morphisto (Zielinski et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 298, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 373, |
|
"text": "(Steinberger et al., 2006)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 458, |
|
"text": "(Schmid, 1994)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 508, |
|
"text": "(Och and Ney, 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 684, |
|
"text": "(Zielinski et al., 2009)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To build the multilingual graph (Section 3.2), we used a minimum frequency threshold of 2 occurrences for a word to be inserted as a node, and retained only those edges with a weight less than or equal to 0.7. After constructing the multilingual graph, we additionally removed those translations with a frequency count lower than 10 (7 in the case of German, due to the large number of compounds). Finally, the translations generated for the BEST evaluation setting were obtained by applying the following rule to the ranked answer translations: add translation tr i while count(tr i ) \u2265 count(tr i\u22121 )/3, where i is the i-th ranked translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Results and discussion. The results for the BEST and out-of-five (OOF) evaluations are presented in Tables 1 and 2 respectively. Results are computed using the official scorer (Lefever and Hoste, 2010) and no post-processing is applied to the system's output, i.e. we do not back-off to the baseline most frequent translation in case the system fails to provide an answer for a test instance. For the sake of brevity, we present the results for UHD-1, since we found no statistically significant difference in the performance of the two systems (e.g. UHD-2 outperforms UHD-1 only by +0.7% on the BEST evaluation for French). Table 2 : OOF results (UHD-1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 201, |
|
"text": "(Lefever and Hoste, 2010)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 625, |
|
"end": 632, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Overall, in the BEST evaluation our system ranked in the middle for those languages where the majority of systems participated, i.e. second and fourth out of 7 submissions for FRENCH and SPANISH. When compared against the baseline, i.e. the most frequent translation found in Europarl, our method achieved in the BEST evaluation a higher precision for ITALIAN and SPANISH (+1.9% and +2.1%, respectively), whereas FRENCH and GERMAN lie slightly below the baseline scores (\u22120.5% and \u22121.0%, respectively). The trade-off is a recall always below the baseline. In contrast, we beat the Mode precision baseline for all languages, i.e. by up to +5.1% for SPANISH. The fact that our system is strongly precision-oriented is further evidenced by its low performance in the OOF evaluation, where we always perform below the baseline (i.e. the five most frequent translations in Europarl).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We presented in this paper a graph-based system to perform CL-WSD. Key to our approach is the use of a co-occurrence graph built from multilingual parallel corpora, and the application of well-studied graph algorithms for monolingual WSD (V\u00e9ronis, 2004; Agirre et al., 2006) . Future work will concentrate on extensions of the algorithms, e.g. computing hubs in each language independently and combining them as a joint problem, as well as developing robust techniques for unsupervised tuning of the graph weights, given the observation that the most frequent translations tend to receive too much weight and accordingly crowd out more appropriate translations. Finally, we plan to investigate the application of our approach directly to multilingual lexical resources such as PanDictionary (Mausam et al., 2009) and BabelNet (Navigli and Ponzetto, 2010) .",

"cite_spans": [

{

"start": 238,

"end": 253,

"text": "(V\u00e9ronis, 2004;",

"ref_id": "BIBREF10"

},

{

"start": 254,

"end": 274,

"text": "Agirre et al., 2006)",

"ref_id": "BIBREF0"

},

{

"start": 791,

"end": 812,
|
"text": "(Mausam et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 854, |
|
"text": "(Navigli and Ponzetto, 2010)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As in the case of Europarl, only 1-to-1-aligned sentences were extracted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Two graph-based algorithms for state-of-the-art WSD", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mart\u00ednez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of EMNLP-06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "585--593", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, David Mart\u00ednez, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2006. Two graph-based algorithms for state-of-the-art WSD. In Proc. of EMNLP-06, pages 585-593.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Brin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Page", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "107--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "WordNet: An Electronic Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Database. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Machine Translation Summit X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "SemEval-2010 Task 3: Cross-lingual Word Sense Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Els", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronique", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proc. of SemEval-2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Els Lefever and Veronique Hoste. 2010. SemEval- 2010 Task 3: Cross-lingual Word Sense Disam- biguation. In Proc. of SemEval-2010.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Compiling a massive, multilingual dictionary via probabilistic inference", |
|
"authors": [ |
|
{

"first": "",

"middle": [],

"last": "Mausam",

"suffix": ""

},

{

"first": "Stephen",

"middle": [],

"last": "Soderland",

"suffix": ""

},

{

"first": "Oren",

"middle": [],

"last": "Etzioni",

"suffix": ""

},

{

"first": "Daniel",

"middle": [],

"last": "Weld",

"suffix": ""

},

{

"first": "Michael",

"middle": [],

"last": "Skinner",

"suffix": ""

},

{

"first": "Jeff",

"middle": [],

"last": "Bilmes",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "Proc. of ACL-IJCNLP-09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "262--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mausam, Stephen Soderland, Oren Etzioni, Daniel Weld, Michael Skinner, and Jeff Bilmes. 2009. Compiling a massive, multilingual dictionary via probabilistic inference. In Proc. of ACL-IJCNLP- 09, pages 262-270.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BabelNet: Building a very large multilingual semantic network", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proc. of ACL-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a very large multilingual seman- tic network. In Proc. of ACL-10.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A systematic comparison of various statistical alignment models", |
|
"authors": [ |
|
{

"first": "Franz",

"middle": [

"Josef"

],

"last": "Och",

"suffix": ""

},

{

"first": "Hermann",

"middle": [],

"last": "Ney",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "19--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A sys- tematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Probabilistic part-of-speech tagging using decision trees", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the International Conference on New Methods in Language Processing (NeMLaP '94)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "44--49", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Lan- guage Processing (NeMLaP '94), pages 44-49.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages", |
|
"authors": [ |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruno", |
|
"middle": [], |
|
"last": "Pouliquen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Widiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Camelia", |
|
"middle": [], |
|
"last": "Ignat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of LREC '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Toma\u017e Erjavec, Dan Tufi\u015f, and D\u00e1niel Varga. 2006. The JRC-Acquis: A multilin- gual aligned parallel corpus with 20+ languages. In Proc. of LREC '06.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Hyperlex: lexical cartography for information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "V\u00e9ronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computer Speech & Language", |
|
"volume": "18", |
|
"issue": "3", |
|
"pages": "223--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean V\u00e9ronis. 2004. Hyperlex: lexical cartography for information retrieval. Computer Speech & Lan- guage, 18(3):223-252.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "EuroWordNet: A Multilingual Database with Lexical Semantic Networks", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piek Vossen, editor. 1998. EuroWordNet: A Multi- lingual Database with Lexical Semantic Networks. Kluwer, Dordrecht, The Netherlands.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Morphisto: Service-oriented open source morphology for German", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Zielinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Simon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tilman", |
|
"middle": [], |
|
"last": "Wittl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "State of the Art in Computational Morphology", |
|
"volume": "41", |
|
"issue": "", |
|
"pages": "64--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Zielinski, Christian Simon, and Tilman Wittl. 2009. Morphisto: Service-oriented open source morphology for German. In State of the Art in Com- putational Morphology, volume 41 of Communica- tions in Computer and Information Science, pages 64-75. Springer.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"text": "Algorithm 1 Multilingual co-occurrence graph. Input: target word w and its contexts C s ; monolingual graph G s = \u27e8V s , E s \u27e9; set of target languages L. Output: a multilingual graph G M L."
|
} |
|
} |
|
} |
|
} |